Downscaling future and past climate data from GCMs

Future and past climate data are often generated with Global Climate Models (GCMs, also known as General Circulation Models). There are a number of different GCMs, and they give different results. The weather simulated by these models depends in part on the assumed atmospheric concentration of greenhouse gases. “Emission scenarios” describe projected future atmospheric concentrations of greenhouse gases. Thus, projected weather for a given period in the future depends on the model and the emission scenario used, as well as on the model run (each run is different because weather is partly a stochastic phenomenon).

These computer models simulate weather in different layers of the atmosphere for small time steps, and they are numerically complex. To allow for relatively fast computations (and to deal with computer memory limitations), the world is divided into a rather limited number of spatial units (grid cells). The resulting model output is therefore rather coarse, typically on the order of 2 to 3 degrees (one degree of longitude is ~111 km at the equator). This is problematic for studies considering variation at much higher spatial resolution, such as the change in the range of an endemic species with a small range size. To address this problem, many researchers have downscaled GCM output.
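
As a rough illustration of this coarseness, the east-west extent of a grid cell can be computed from its width in degrees and its latitude; the 2.5 degree cell size and the latitudes used below are made-up values for illustration only.

    import math

    def cell_width_km(cell_degrees, latitude):
        # One degree of longitude spans ~111 km at the equator and
        # shrinks with the cosine of the latitude.
        return cell_degrees * 111.0 * math.cos(math.radians(latitude))

    print(cell_width_km(2.5, 0))    # ~277 km wide at the equator
    print(cell_width_km(2.5, 45))   # ~196 km wide at 45 degrees latitude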

Downscaling can be accomplished in a number of ways. For example, some approaches use observed weather data to describe relationships between larger-scale climate variables (e.g. atmospheric pressure at 1000 m) and local surface climate variables (e.g. surface rainfall). These relationships are then applied to GCM output, under the assumptions that the GCMs perform best for the larger-scale variables and that the relationships found remain valid in a changed climate.
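
A minimal sketch of such a statistical approach is shown below; the observed and GCM values are hypothetical placeholders, and a single linear regression stands in for what would, in practice, be a more elaborate statistical model.

    import numpy as np

    # Hypothetical observations of a large-scale predictor (e.g. pressure)
    # and a local surface variable (e.g. rainfall) at one location.
    obs_pressure = np.array([1008.0, 1010.0, 1012.0, 1015.0, 1018.0, 1020.0])
    obs_rainfall = np.array([  90.0,   80.0,   65.0,   50.0,   30.0,   20.0])

    # Describe the relationship between the two with a linear regression.
    slope, intercept = np.polyfit(obs_pressure, obs_rainfall, deg=1)

    # Apply that relationship to GCM-simulated values of the predictor,
    # assuming it remains valid in a changed climate.
    gcm_pressure = np.array([1011.0, 1016.0, 1021.0])
    downscaled_rainfall = slope * gcm_pressure + intercept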

The data available here were produced with a method that is simple and quick, and can therefore easily be applied to the whole world and to many models. It starts with the projected change in a weather variable (e.g. minimum temperature in June). This is computed as the (absolute or relative) difference between the output of the GCM run for the baseline years (typically 1960-1990 for future climate studies and “pre-industrial” for past climate studies) and for the target years (e.g. 2050-2080). These changes are interpolated to a grid with a high (~1 km) resolution. The assumption made is that the change in climate is relatively stable over space (high spatial autocorrelation).
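
A minimal sketch of this step, assuming a tiny made-up GCM grid and using bilinear interpolation from scipy (the arrays and the 10x refinement factor are illustrative only; real data would be refined to ~1 km):

    import numpy as np
    from scipy.ndimage import zoom

    # Hypothetical coarse GCM means (degrees C) for one variable and month.
    tmin_baseline = np.array([[10.0, 12.0], [14.0, 16.0]])
    tmin_target   = np.array([[11.5, 13.2], [15.1, 17.4]])

    # Absolute change for temperature; for rainfall a relative change
    # (target / baseline) would be computed instead.
    delta_coarse = tmin_target - tmin_baseline

    # Interpolate the change to a finer grid (here 10x finer). Using a
    # smooth (bilinear) interpolation reflects the assumption that the
    # change in climate varies gradually over space.
    delta_fine = zoom(delta_coarse, 10, order=1)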

These high resolution changes are then applied to high resolution interpolated climate data for the “current period” (in this case the WorldClim dataset). I refer to this step as “calibration”. Calibration is a necessary step because GCMs do not accurately predict the current climate in all places. For that reason, you cannot directly compare observed current climate with predicted future (or past) climate. It is also problematic to compare the response to simulated current conditions with the response to simulated future conditions, because the simulated current conditions could be far from reality.

Rather, one should look at the response to the projected change in climate relative to the response to the current climate. For temperature I use absolute differences, but for rainfall I use relative differences. This is because applying absolute rainfall differences can produce implausible results (such as negative rainfall) in areas with strong rainfall gradients, where the coarse GCM baseline can differ substantially from the observed local climate.
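
A minimal sketch of this calibration step, assuming made-up high-resolution “current” grids (standing in for WorldClim) and interpolated GCM changes on the same grid:

    import numpy as np

    # Hypothetical high-resolution current climate and interpolated changes,
    # all on the same ~1 km grid.
    current_tmin = np.array([[  8.0,   9.5], [11.0, 12.5]])   # degrees C
    current_prec = np.array([[120.0, 300.0], [40.0, 10.0]])   # mm
    delta_tmin   = np.array([[  1.5,   1.6], [ 1.7,  1.8]])   # absolute change
    ratio_prec   = np.array([[  0.9,  0.95], [ 1.1,  1.2]])   # relative change

    # Temperature: add the absolute change to the current climate.
    future_tmin = current_tmin + delta_tmin

    # Rainfall: multiply by the relative change, which avoids implausible
    # values (such as negative rainfall) where the GCM baseline differs
    # strongly from the observed climate.
    future_prec = current_prec * ratio_prec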