This whole time, I’ve been working on what I thought would be different case studies of modeling in the Chesapeake Bay. I thought there might be some overlap between the cases, but I assumed they would be relatively discrete and easy to parse out, so that I could evaluate the effects of each modeling practice on its own. What I’ve found is a lot more complex than that. Instead of separate cases, I’ve discovered something more like an “Indra’s net” of models operating alongside, on top of, and even within one another. This doesn’t negate the original purpose of my project, but it does make things a lot more interesting.
There are three aspects to this Indra’s net – three ways that models reflect and refract one another. First, I’ve confirmed Paul Edwards’s finding that the divide between model results and so-called empirical data is artificial. Models depend on data, but that data can’t exist without models. Data is not collected uniformly: the Bay is dotted with thousands of buoys and other monitoring stations, often using different instruments and different methods. Even when stations use the same equipment and measure the same things, weather conditions and other factors produce disparities in the data that don’t reflect actual conditions. Add historical data to the mix, and you get a jagged map of measurements that don’t appear to be showing the same thing. Models are used to smooth out these differences and make the data usable across the watershed – they take many different measurements, gathered over a large spatial range and a long span of time, and turn them into a unified data set. As a result, the models built from the data already contain the models used to smooth out the data sets. Models within models.
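To make this concrete, here is a minimal sketch of the kind of “model hiding inside the data” described above. Everything in it is hypothetical: the station names, the nitrogen values, and the assumption that one station’s instrument carries a simple additive bias. Real harmonization in the Bay Program is far more elaborate, but the principle is the same: a model of the instruments’ disagreement is applied before the records can be merged.

```python
from statistics import mean

# Hypothetical nitrogen readings (mg/L) from two stations sampling the
# same water on the same dates. Station B's instrument reads
# systematically high, so the raw records disagree even though actual
# conditions match.
station_a = [1.10, 1.25, 0.98, 1.40, 1.15]
station_b = [1.32, 1.47, 1.21, 1.63, 1.37]

# A tiny "model within the data": estimate station B's additive bias
# from the paired observations, then correct B so the two records can
# be merged into one unified series.
bias = mean(b - a for a, b in zip(station_a, station_b))
station_b_corrected = [round(b - bias, 2) for b in station_b]

# The "unified data set" that downstream models would actually consume.
unified = [mean(pair) for pair in zip(station_a, station_b_corrected)]
print(bias, station_b_corrected, unified)
```

Any model later calibrated against `unified` silently inherits this bias-correction model – which is exactly the point about models within models.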
Another way that models intersect with one another is by influence. This may be particularly true in the Chesapeake Bay, where the Bay Program and the Chesapeake Research Consortium work to bring the best scientific tools to bear on nutrient management in the watershed. The result is a host of models that are continually being developed and redeveloped in response to one another. One model shows an increase in nitrogen at a given spot while another shows a decrease – something’s wrong, so the two modeling groups try to track down the cause of the disparity and fix it. In other cases, one model demonstrates a more effective method for calculating loading values, so its results can be integrated into the other model. The models are, in other words, mutually constituting.
Finally, multiple models are used to validate one another. This has been a growing trend in the Bay Program in the last few years. I remember attending – and presenting at – a multiple models workshop a few years ago; that presentation developed into the project I’m working on now. At the workshop, modelers discussed the ways that multiple models can be integrated. For example, taking the average of several models generally provides a more reliable result than any one model by itself. The problem is that modeling is a heavy investment, and it would be impossible for the Bay Program to fund a second or third model to run alongside the CBMS. Instead, what they’ve been trying to do for the new version is integrate multiple models at every level: multiple models are built into the CBMS itself, and multiple models are used to validate the data used for input and calibration. The CBMS is becoming, in many ways, a model of models.
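The workshop’s point about averaging can be illustrated with a toy example. All of the numbers below are invented: three hypothetical model runs, each skillful but wrong in its own way (one biased high, one biased low, one noisy), compared against an equally hypothetical set of observations. The individual biases partly cancel in the ensemble mean, which is why the average tends to beat any single member.

```python
from statistics import mean

# Hypothetical observed nitrogen loads for five seasons ("truth").
observed = [10.0, 12.0, 9.0, 11.0, 13.0]

# Three hypothetical model runs, each with its own error pattern.
model_runs = {
    "high":  [11.0, 13.2, 10.1, 12.0, 14.1],  # biased high
    "low":   [ 9.0, 10.9,  7.9, 10.1, 11.8],  # biased low
    "noisy": [10.5, 11.2,  9.6, 10.4, 13.5],  # unbiased but scattered
}

def rmse(pred, obs):
    """Root-mean-square error of a prediction series against observations."""
    return mean((p - o) ** 2 for p, o in zip(pred, obs)) ** 0.5

# The multi-model ensemble: average the runs point by point.
ensemble = [mean(vals) for vals in zip(*model_runs.values())]

individual = {name: rmse(run, observed) for name, run in model_runs.items()}
ensemble_error = rmse(ensemble, observed)
print(individual, ensemble_error)
```

In this contrived setup the ensemble’s error comes out lower than every individual model’s – a sketch of the logic, not a guarantee; ensembles help most when the member models’ errors are independent rather than shared.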
Indra’s net of jewels, each reflecting all the others infinitely, is the perfect metaphor for the way modeling works in the Chesapeake Bay. The complex interactions of the different modeling projects have added a layer of density I hadn’t fully expected when I began. Things are really moving now, and I’m eager to see how this project develops over time.