Climate change scenarios
On the basis of analyses by Cayan et al. (2008), climate change scenarios were selected from those used in the IPCC Fourth Assessment. Two emission scenarios were selected to span the range from optimistic to business-as-usual. Candidate models were required to contain realistic representations of key regional features, such as the spatial structure of precipitation and important orographic features, and to produce a realistic simulation of California's recent historical climate, particularly the distribution of monthly temperatures and the strong seasonal cycle of precipitation that exists in the region and throughout the western states. Because the observed western US climate has exhibited considerable natural variability at seasonal to interdecadal time scales, the historical simulations by the climate models were required to contain spatial and temporal variability resembling that of observations at shorter time scales. Finally, the selection was designed to include models with differing levels of sensitivity to greenhouse gas forcing. On the basis of these criteria, two GCMs were identified: the Parallel Climate Model [PCM] (with simulations from NCAR and DOE groups; see Washington et al. 2000; Meehl et al. 2003) and the NOAA Geophysical Fluid Dynamics Laboratory [GFDL] CM2.1 model (Stouffer et al. 2006; Delworth et al. 2006). The choice of greenhouse gas emission scenarios, which focused on A2 (medium-high) and B1 (low) emissions, was based on implementation decisions made earlier by the IPCC (Nakićenović et al. 2000).
The B1 scenario assumes that global CO2 emissions peak at approximately 10 gigatons per year [Gt/year] in the mid-twenty-first century before dropping below current levels by 2100. This yields a doubling of CO2 concentrations relative to the pre-industrial level by the end of the century (approximately 550 ppm), followed by a leveling of the concentrations. Under the A2 scenario, CO2 emissions continue to climb throughout the century, reaching almost 30 Gt/year.
Statistical downscaling
The two general approaches for interpolating GCM outputs are statistical and dynamical downscaling. In dynamical downscaling, the GCM outputs are used as boundary conditions for finer-resolution regional climate models. This technique is computationally intensive, requires detailed, finer-scale physical models of the atmosphere and ocean, and is not used here. Statistical downscaling methods apply statistical relations between historical climate records at coarse and fine resolutions to interpolate from coarse model outputs to finer resolutions. This requires much less computational effort but generally involves extreme simplifications of the physical relations. One recent example is a deterministic, linear approach called constructed analogues that relies on the spatial patterns of historical climate data. Using linear regression with the current weather or climate pattern as the dependent variable and selected historical patterns as independent variables, high-quality analogues can be constructed that tend to describe the evolution of weather or climate into the future for a time (Hidalgo et al. 2008). The approach implicitly assumes stationarity in time and space (Milly et al. 2008) and was inspired by an approach for predicting climatic patterns developed by van den Dool et al. (2003).
The statistical downscaling method of constructed analogues was developed at Scripps Institution of Oceanography by Hidalgo et al. (2008) and is used here for these four scenarios. The selected model outputs were downscaled from coarse-resolution GCM daily and monthly maps (approximately 275 km) to 12-km national maps (binary files can be found at http://tenaya.ucsd.edu/wawona-m/downscaled/). The method uses continental-scale historical (observed) patterns of daily precipitation and air temperature at coarse resolution and their fine-resolution (approximately 12 km) equivalents, with a statistical approach to climate prediction based on the conceptual framework of van den Dool et al. (2003). It assumes that if one could find an exact analogue (in the historical record) to the weather field today, weather in the future should replicate the weather following the time of that exact analogue. The approach is analogous to a principal component analysis with multiple dependent variables representing various similar historical snapshots. Procedurally, a collection of historically observed coarse-resolution climate patterns is linearly regressed to form a best-fit constructed analogue of a particular coarse-resolution climate-model output. The method then develops the downscaled, finer-resolution climate pattern associated with that climate-model output from the same linear combination of historical fine-resolution patterns as was fitted to form the coarse-resolution analogue. Thus, the regression coefficients that form the best-fit combination of coarse-resolution daily maps (at 275-km resolution) reproducing a given climate-model daily pattern are applied to the fine-resolution (12-km resolution) maps from the same (historical) days.
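The regress-then-reuse step at the core of constructed analogues can be sketched as follows. This is a hypothetical, minimal least-squares version, not the published implementation (which involves additional pattern selection and conditioning); array shapes and function names are illustrative.

```python
import numpy as np

def constructed_analogue_downscale(target_coarse, library_coarse, library_fine):
    """Minimal sketch of the constructed-analogues idea.

    target_coarse : (p,) coarse-resolution pattern from the climate model
    library_coarse: (n, p) historical coarse-resolution patterns (n days)
    library_fine  : (n, q) fine-resolution maps for the same n historical days
    """
    # Least-squares weights w such that w @ library_coarse ~= target_coarse
    w, *_ = np.linalg.lstsq(library_coarse.T, target_coarse, rcond=None)
    # Reuse the same weights on the fine-resolution maps of the same days
    return w @ library_fine
```

If the model pattern happens to be an exact linear combination of the historical coarse patterns, the downscaled result is the same combination of the corresponding fine-resolution maps.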
The constructed-analogues method demonstrates a high level of skill, capturing an average of 50% of the daily high-resolution precipitation variance and around 67% of the daily average air temperature variance across all seasons and the contiguous United States. The downscaled precipitation variations capture as much as 62% of the observed variance in coastal regions during the winter months. When the downscaled daily estimates are accumulated into monthly means, an average of 55% of the variance of monthly precipitation anomalies and more than 80% of the variance of monthly average air temperature anomalies are captured (Hidalgo et al. 2008).
Spatial downscaling and bias correction
Spatial downscaling here refers to the calculation of fine-scale information on the basis of coarse-scale information using various methods of spatial interpolation. This downscaling is required to bring the statistically downscaled climate parameters from the 12-km resolution to grid resolutions that more adequately address the patchiness of ecological and environmental processes of interest. Bias correction is a necessary component in developing useful GCM projections. Without this correction, GCM data used to drive local hydrologic or ecological models could produce erroneous results, over- or underestimating the climatic variables. Bias correction requires a historically measured dataset at the same grid scale as the spatially downscaled parameter set. Therefore, the initial spatial downscaling was done to 4 km, the resolution of an existing historical climate dataset that is spatially distributed and grid-based. The PRISM dataset of Daly et al. (1994) was produced with a knowledge-based analytical model that integrates point measurements of precipitation and air temperature with a digital elevation model, reflecting expert knowledge of complex climatic effects, such as rain shadows, temperature inversions, and coastal effects, to produce digital grids of monthly precipitation and minimum and maximum air temperatures. Historical climatology is available from PRISM as monthly maps (http://www.prism.oregonstate.edu/). The spatial downscaling is done using the 4-km resolution digital elevation model in PRISM prior to bias correction.
Spatial downscaling is performed on the coarse-resolution grids (12 km) to produce finer-resolution grids (4 km) using a model developed by Nalder and Wein (1998), modified with a nugget effect specified as the length of the coarse-resolution grid cell. Their model was developed to interpolate very sparsely located climate data over regional domains and combines a spatial gradient and inverse-distance-squared [GIDS] weighting, applying multiple regressions to monthly point data. Parameter weighting is based on the location and elevation of the new fine-resolution grid cell relative to the existing coarse-resolution grid cells using the following equation:
(1)

Z = \frac{\sum_{i=1}^{N}\left[Z_i + (X - X_i)C_x + (Y - Y_i)C_y + (E - E_i)C_e\right]/d_i^2}{\sum_{i=1}^{N} 1/d_i^2}

where Z is the estimated climatic variable at a specific location defined by easting (X) and northing (Y) coordinates and elevation (E); Z_i is the climate variable from the 12-km grid cell i; X_i, Y_i, and E_i are the easting and northing coordinates and elevation of the 12-km grid cell i, respectively; N is the number of 12-km grid cells within a specified search radius; C_x, C_y, and C_e are regression coefficients for easting, northing, and elevation, respectively; and d_i is the distance from the 4-km site to the 12-km grid cell i, specified to be equal to or greater than 12 km (the nugget). The nugget ensures that the regional trend of the climatic variable with northing, easting, and elevation within the search radius is not overwhelmed by interpolation between the closest 12-km grid cells, which would produce a bull's-eye effect around any 4-km fine-resolution grid cell that is closely associated or co-located in space with an original 12-km grid cell. For example, in the 12-km to 4-km downscaling step, a search radius of 27 km is used to limit the influence of distant data while allowing approximately twenty-one 12-km grid cells to estimate the model parameters for temperature and precipitation at each 4-km grid cell, with the closest cell having the most influence. This interpolation scheme incorporates the topographic and elevational effects on the climate.
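The GIDS estimate in Eq. (1) can be sketched numerically as follows. This is an illustrative implementation, assuming the regression coefficients C_x, C_y, and C_e are fit by ordinary least squares (with an intercept) over the coarse cells within the search radius; the function and variable names are chosen for clarity, not taken from the original code.

```python
import numpy as np

def gids_estimate(x, y, e, xi, yi, ei, zi, nugget=12_000.0):
    """GIDS estimate (Eq. 1) at one fine-resolution cell.

    x, y, e        : easting, northing (m), and elevation of the 4-km cell
    xi, yi, ei, zi : arrays over the N coarse (12-km) cells in the search radius
    nugget         : minimum distance (m), here the 12-km coarse-grid length
    """
    # Fit Z_i = b0 + Cx*X_i + Cy*Y_i + Ce*E_i by least squares
    A = np.column_stack([np.ones_like(xi), xi, yi, ei])
    (_, cx, cy, ce), *_ = np.linalg.lstsq(A, zi, rcond=None)
    # Distances floored at the nugget to avoid the bull's-eye effect
    d = np.maximum(np.hypot(x - xi, y - yi), nugget)
    w = 1.0 / d**2
    # Gradient-adjusted values, weighted by inverse squared distance
    adj = zi + (x - xi) * cx + (y - yi) * cy + (e - ei) * ce
    return np.sum(w * adj) / np.sum(w)
```

When the coarse values are exactly linear in location and elevation, the gradient adjustment makes every weighted term identical, so the estimate reproduces that trend at the fine cell.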
Bias correction uses both the spatially downscaled grids and measured data for the same period to adjust the 4-km grids so that certain statistical properties, in this case the mean and standard deviation, match those of the measured dataset. To make the correction possible, the GCM is run under historical forcings to establish a baseline that matches the current climate. The baseline for this study is based on the PCM and GFDL model runs for 1950 to 2000, in which future climate change forcings are absent and recent (pre-2000) atmospheric greenhouse gas conditions are used. The baseline period can be any time period but should encompass the variation imposed by the major climate cycles, such as the Pacific decadal oscillation (approximately 25 to 30 years; Gurdak et al. 2009), as these are still present in the hindcast GCM, as analyzed by Hanson and Dettinger (2005). This baseline period is corrected (transformed) using the PRISM data from the same time period.
There are different statistical downscaling methods that can be used to ensure that GCM and historical data have similar statistical properties. One commonly used method is the bias correction and spatial downscaling [BCSD] approach of Wood et al. (2004) that uses a quantile-based mapping of the probability density functions for the monthly GCM climate onto those of gridded observed data, spatially aggregated to the GCM scale. This same mapping is then applied to future GCM projections, allowing the mean and variability of a GCM to evolve in accordance with the GCM simulation, while matching all statistical moments between the GCM and observations for the base period. Recently, one hundred twelve 150-year GCM projections were downscaled over much of North America using the BCSD method (Maurer and Hidalgo 2008).
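The quantile mapping at the heart of BCSD can be illustrated with a simplified, purely empirical sketch. The published method operates month by month on data spatially aggregated to the GCM scale; the names here are illustrative, and only empirical quantiles are used.

```python
import numpy as np

def quantile_map(model_values, model_hist, obs_hist):
    """Map model values onto the observed distribution by matched quantiles."""
    # Empirical percentile of each model value within the model's own history
    ranks = np.searchsorted(np.sort(model_hist), model_values) / len(model_hist)
    # Look up the same percentile in the observed historical distribution
    return np.quantile(obs_hist, np.clip(ranks, 0.0, 1.0))
```

Because the quantile function of the observations is nondecreasing, the mapping preserves the rank order of the model values while forcing them into the observed range.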
We use a method described by Bouwer et al. (2004) that applies a simple adjustment of the projected data to match the baseline mean and standard deviation. The correction is done on a cell-by-cell basis for each month, so it is not global but embedded in the spatial interpolation at each location. By including the standard deviation in the formulation, the bias correction transforms the GCM data to match both the mean and the variability of the climate parameter over the baseline period. The equation for both temperature and precipitation is
(2)

C_{unbiased} = \left(C_{biased} - C_{amGCM}\right)\frac{\sigma_{amPRISM}}{\sigma_{amGCM}} + C_{amPRISM}

where C_unbiased is the bias-corrected monthly climate parameter (temperature or precipitation); C_biased is the monthly downscaled but biased future climate parameter; C_amGCM is the average monthly climate parameter, downscaled but biased, for the baseline period; σ_amGCM is the standard deviation of the monthly climate parameter for the baseline period; σ_amPRISM is the standard deviation of the climate parameter from PRISM for the baseline period; and C_amPRISM is the average monthly PRISM climate parameter for the baseline period. This method was applied in this study, incorporating both the mean and standard deviation on a cell-by-cell basis at 4-km resolution for each month of the baseline period.
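Equation (2) standardizes the downscaled GCM value against its own baseline statistics and rescales it to PRISM's. A sketch for one grid cell and one calendar month follows; the variable names are illustrative, not from the original code.

```python
import numpy as np

def bias_correct(c_biased, gcm_baseline, prism_baseline):
    """Apply the mean/standard-deviation correction of Eq. (2).

    c_biased       : downscaled but biased value(s) for one cell and month
    gcm_baseline   : baseline-period (e.g., 1950-2000) GCM values, same cell/month
    prism_baseline : baseline-period PRISM values, same cell/month
    """
    mu_gcm, sd_gcm = gcm_baseline.mean(), gcm_baseline.std()
    mu_prism, sd_prism = prism_baseline.mean(), prism_baseline.std()
    return (c_biased - mu_gcm) * (sd_prism / sd_gcm) + mu_prism
```

Applied to the baseline GCM series itself, the corrected series reproduces the PRISM mean and standard deviation exactly, which is the property the correction is designed to enforce.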
Processing sequence
The 12-km resolution data were obtained from Scripps for 1950 to 2000, representing current climate, and for 2000 to 2100, representing future climate for the two emission scenarios and two models. The sequence of steps for processing the data is as follows: (1) The monthly 12-km data are spatially downscaled using GIDS to a 4-km grid designed to match grids from the PRISM digital elevation model. (2) The monthly 4-km data for 1950 to 2000 are used to develop the bias correction statistics (mean and standard deviation) using measured or simulated current climate data for 1950 to 2000 from PRISM and from each of the two GCMs. (3) These corrections are then applied to the 2000 to 2100 monthly data. (4) The monthly data are further downscaled using GIDS to a 270-m scale for the southwest Basin Characterization Model [BCM] (a regional water-balance model; Flint and Flint 2007), including California. The processing sequence, including the step involving the downscaling of the GCM grids to the 12-km grids using constructed analogues, is presented in Figure 1.
Comparison of downscaled climate parameters and measured climate data
An analysis was done to assess whether the spatial downscaling process introduced additional uncertainty into the final estimates of the climatic parameters. Measured monthly precipitation and maximum and minimum air temperatures from meteorological stations throughout California operated by the California Irrigation Management Information System [CIMIS] and National Weather Service [NWS] were compared to the 4-km PRISM grid cell occupied by each station (Figure 2). The station data were also compared to the 4-km data downscaled to 270 m to determine which of those scales was closer to the measured data. Figure 2 illustrates the physical conditions represented by each grid resolution in comparison with the location of the Hopland FS CIMIS station in the northern part of the Russian River basin in Sonoma County. The station is located at an elevation of 354 m, while the average elevation of the 4-km grid cell is 608 m (Figure 2a). The 270-m cell in which the station is located has an average elevation of 366 m, much closer to the station elevation. As a result, the downscaled data, which specifically take into account the elevation of each cell, can more accurately reflect the measured data. While this example shows how downscaling can improve the gridded estimates by incorporating the determinism that location and elevation lend to the estimate of climate parameters, this may not always be the case, depending on whether the PRISM estimate closely matches the measured data and whether the topography is flat or spatially variable.
Application of future climate grids to a hydrologic model and characterization of topoclimates
Downscaled monthly climate parameters (precipitation and maximum and minimum air temperatures) were applied to a regional hydrologic model (BCM; Flint and Flint 2007; Flint et al. 2004). The model calculates hourly potential evapotranspiration [PET] from solar radiation simulated with topographic shading and uses it to compute the water balance for every grid cell. The resulting estimates of actual evapotranspiration [AET], based on changes in soil moisture under projected changes in climate, can be used to calculate the climatic water deficit [CWD].
CWD is the annual evaporative demand in excess of available water; it has been identified as a driver of ecological change (Stephenson 1998) and is correlated with the distribution of vegetation. This correlation can be used to investigate potential changes in vegetation distribution under changes in climate. CWD is calculated as PET minus AET. In the BCM, AET is calculated on the basis of soil moisture content, which diminishes over the dry season; therefore, in Mediterranean climates with minimal summer precipitation, PET exceeds AET, and the annual deficit accumulates.
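The CWD calculation described above amounts to accumulating the monthly excess of PET over AET across the year. A minimal sketch follows, with illustrative monthly values in millimeters for one water year.

```python
def climatic_water_deficit(pet_monthly, aet_monthly):
    """Annual CWD: accumulated PET in excess of AET (same units as inputs).

    In the BCM, AET never exceeds PET, so each monthly term is nonnegative.
    """
    return sum(p - a for p, a in zip(pet_monthly, aet_monthly))
```

In a Mediterranean climate the deficit accrues almost entirely in the dry summer months, when soil moisture (and hence AET) falls while PET remains high.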
The topoclimate is represented in the BCM through the solar radiation model and the resulting calculation of PET, whereby hillslopes with lower energy loads (lower potential evapotranspiration) are likely to be less affected by rising air temperatures under climate change. The fine-scale discretization of soil properties allows soils with varying water-holding capacities to be distinguished on the landscape. Deep soils, such as those in valley bottoms, can extend the amount of water available for AET further into the dry season, whereas shallow soils, such as those on ridgetops, limit the amount of water available regardless of the magnitude of precipitation, because water runs off or recharges once the soil capacity is filled. These details are captured by the scale at which the climate is downscaled and the hydrologic model is applied to the landscape. This application of CWD integrates climate, energy loading, drainage, and available soil moisture to provide a hydrologic response to changes in climate that reflects distinct landscapes and habitat characteristics.