

Fixing the Model …. Halstead Harrison

Thursday’s Departmental Colloquium was given by J. David Neelin, prof and chair of the atmospheric sciences department at UCLA. [I think.]  He was chosen to visit here by our grad students, paid for by a fund I’ve contributed to.  Which gives me a ticket to snark.  A bit.

Neelin spoke about tuning global climate models that contain adjustable parameters .. as all of them do.  The idea is to adjust those parameters in some sensible way to diminish the differences between a model’s simulation of this or that and observations.  Neelin tuned his model’s precipitation against ‘reanalysis’, a jargon term for neatening up sparse data by passing observations thru another model to fill in gaps and smooth a bit.  All this sounds fairly pedestrian so far, and it is.
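
To make that concrete: the ‘difference’ being minimized is typically a root-mean-square misfit over the gridded field.  A minimal sketch in Python, with random arrays standing in for the model’s precipitation and the reanalysis .. real fields would be latitude × longitude × time, and the average likely area-weighted:

    import numpy as np

    # Stand-ins for gridded precipitation fields, in mm/day.
    rng = np.random.default_rng(0)
    model_precip = rng.uniform(0.0, 10.0, size=(64, 128))
    reanalysis_precip = rng.uniform(0.0, 10.0, size=(64, 128))

    # The number the tuning tries to shrink:
    # RMS of model-minus-reanalysis.
    rms = np.sqrt(np.mean((model_precip - reanalysis_precip) ** 2))
    print(f"RMS misfit: {rms:.2f} mm/day")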

The trouble comes when a dozen or more parameters are to be simultaneously optimized within climate models, which in every test must be integrated over many years.  Neelin tuned N = 10 parameters, with each test run integrated over 25 years: N(N-1)/2 = 45 runs, or 45 × 25 = 1125 modeled years, with each modeled year requiring several hours to compute.  Expensive, if you have to do it often.
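
Spelling out that arithmetic, with three hours standing in for ‘several’ .. my guess, not Neelin’s number:

    # Pairwise tuning cost: N parameters give N(N-1)/2 pairs,
    # each pair needing one 25-year integration.
    N = 10
    runs = N * (N - 1) // 2       # 45 runs
    model_years = runs * 25       # 1125 modeled years
    wall_hours = model_years * 3  # assumed ~3 hours per modeled year
    print(model_years, "modeled years,", round(wall_hours / 24), "days of computing")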

Along the way, some of those parameters may slip over the edge of physical plausibility.  Reflectivities greater than 1, for example.  OK.  Not a problem.  We’ll just constrain them.
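
What ‘just constrain them’ looks like in practice, assuming a bounded optimizer .. the stand-in model and its target values here are invented, not Neelin’s:

    import numpy as np
    from scipy.optimize import minimize

    def rms_misfit(params):
        # Stand-in for "run the climate model, compare with reanalysis":
        # here, just the RMS distance from some invented best-fit values.
        target = np.array([0.3, 0.6, 0.9])
        return np.sqrt(np.mean((params - target) ** 2))

    # Box each parameter into its physically plausible range,
    # e.g. a reflectivity must stay within [0, 1].
    bounds = [(0.0, 1.0)] * 3
    result = minimize(rms_misfit, x0=[0.5, 0.5, 0.5],
                      method="L-BFGS-B", bounds=bounds)
    print(result.x)  # tuned values, never outside their bounds
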
Neelin’s minimum root-mean-square difference between precipitation from the model and observations from reanalysis was about 2.0 mm/day, or ~0.7 meters/year.  The globally averaged annual precip is about 1 meter/yr.  None of the tuned parameters were wildly implausible.

Not especially bad, not especially good: Goldilocks on Prozac.

I was proud of our students, who asked two good questions: 1) Do you get the same parameters, or nearly, if you try to minimize the differences in _variance_ between models and observations, not the means?  And 2) Do you get the same parameters, or nearly, if you change the model’s spatial resolution?  A third good question that was not asked might have been 3) Do you get the same parameters, or nearly, if you optimize for a different field .. temperature, say, instead of precipitation?

Ans: Neelin waffled a bit, but seemed to answer “We haven’t tried this” and “We haven’t tried that”.  My guess: all of these good questions would generate ambiguous answers, closer to “No” than “Yes”.

Was all this ‘science’?  Depends.  Perhaps it may better be described as testing tools that may .. or may not .. do real science, if properly addressed.  I hold that it is not possible to discover ‘truths’ in models: only the consequences of assumptions you put into them.

Cheers,
Halstead

