Modeling and the Art of Noticing

Modeling gets a bad rap. Models are too abstract, we say, too disconnected from reality! We want real, empirical research. We want sensitive and nuanced understandings of the way things work. Models, because they create artificial worlds, are dissociated from the real; because they are based on numerical calculations, they do not allow for nuance. Based on my experience with modeling and my discussions with modelers, I think these are misconceptions that only push us further away from an understanding of modeling and the kinds of noticing that can be done with models. Instead of disregarding models, we should engage more with them so that we can ensure that modeling is done right – as an art of noticing – rather than allowing it to be misused.

Anna Tsing, in her latest book, calls for a science based on “arts of noticing.” These arts would involve a kind of thick description for the natural world, focused on particular histories and localized processes. One example she offers is Japanese forestry, which recognizes the essential role that peasant societies have played in creating and maintaining the pine landscapes that foster matsutake mushrooms. By attending to these particularities, foresters are able to forgo broad generalizations – e.g. erosion is bad – in favor of a more nuanced perspective – erosion creates soil conditions that favor pine rather than deciduous forests.

From this perspective, models could easily be dismissed because, on the surface, they are based in numerical abstractions rather than particular historical dynamics, and are divorced from reality. In fact, Tsing herself makes such a dismissive statement: “Natural history descriptions, rather than mathematics modeling, is the necessary first step – as in the economy” (144). However, as part of my dissertation research, I’ve been talking with modelers and doing modeling myself, and I think this is a mistaken conception of the process. It’s possible for models to be abstract and divorced from reality, but that is not a necessary part of their function. Modeling can be an art of noticing.

First, let me dispel the notion that models are not based in reality. They are – or rather, any good functioning scientific model must be. The numerical processes and systems of differential equations that drive models are not derived from nothing; they are based on empirical observations of the real world. Scientists measure flows, concentrations, quantities, and other features of the world and then derive mathematical equations to represent what they see. If the equations don’t match observations, they are discarded and reformulated until they do – and this is an ongoing process, because equations never perfectly match the observations. As a result of this ongoing process, modeling can reveal areas in which we are lacking information simply because we hadn’t thought to collect that kind of data.

A good example from my research concerns lag times for the movement of water from land to river. The models weren’t capturing some aspects of our observations, and so it was recognized that there is a lag time between when a drop of water hits the ground and when it flows down into the stream. It seems obvious, but it isn’t how the models represented flow before. This sparked research into the way that water moves through the landscape and how long it takes to get from land to river – and the answers have been astonishing. It can take years for water to migrate through the soil and percolate up into the water column. This has serious implications for nutrient management, as it suggests that we are dealing much more with a legacy issue than an application issue.
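To make the lag-time idea concrete, here’s a toy sketch in Python – not any of the actual watershed models, and the lag distribution is invented (real kernels would come from groundwater-age studies). Nutrient applications are convolved with a spread of travel times, so the stream keeps receiving legacy loads long after application stops:

```python
import numpy as np

# Hypothetical nitrogen application, arbitrary units per year
applied = np.full(30, 100.0)
applied[20:] = 0.0  # application stops in year 20

# Assumed lag distribution: travel times spread over ~15 years
lag_kernel = np.exp(-np.arange(15) / 5.0)
lag_kernel /= lag_kernel.sum()  # normalize so mass is conserved

# What the stream "sees" each year is past applications
# filtered through the travel-time distribution
delivered = np.convolve(applied, lag_kernel)[:30]

# Years 21-25: application is zero, but legacy loads keep arriving
print(np.round(delivered[21:26], 1))
```

The legacy effect is just the tail of the convolution: even with zero application, years of stored water are still working their way to the stream.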

You might say “Okay, so models are based in reality, but they’re still abstract – not based in the particularities of landscape and history.” History is a tricky one, but, I would argue, not necessarily the fault of the models – history can be applied alongside modeling rather than built into it and we can use the two to develop a much richer description of what’s going on in a landscape. It’s a question of the contexts in which we use the models, not a function of the models themselves, and we – as social scientists – need to push for the incorporation of history and broader social forces when we have the opportunity (this is one of my goals in my own research).

On the other hand, models are based in the physical particularities of a landscape – at least those that are applied to particular landscapes are. It’s possible, of course, to create a model of an entirely artificial landscape to test out various numerical modeling methods. However, every model that I’ve seen is restructured and “calibrated” to the particular landscape involved – in my case, the Chesapeake Bay watershed. There are elements of the soil, the geology, the biotic environment, and so on that must be taken into consideration when applying a model. Sometimes one model simply cannot be used because it has been calibrated to a very different landscape, and researchers need to find one that will work in their region or build one from scratch. Although models are mathematical, and both their inputs and outputs are quantitative, the way they work internally seems very qualitative: all of these mathematical processes converge in complex dynamics that resemble the flow of water much more than the calculation of values. As a result, models can help us understand things that are happening in a system that can’t be observed directly but are a consequence of known dynamics. A good example is the edge-of-stream movement of water. I can’t say that I understand it yet, but the flow of water at the edge of a stream is not something we can observe even with complex instruments. It does, however, result from known hydrodynamics, and so, when we run the models, we can get a sense of what’s going on in those places we can’t sense directly. That’s not to say that the modeled processes are the same as what actually happens, but they can help us understand.

All of this underscores the fact that we actually model all the time – modeling is an essential part of the “noticing” that Tsing – for all the insight she provides – simply ignores. The forester out on the landscape is not simply taking in information with her senses; she is processing that information through a set of conceptions she has about the landscape and then drawing conclusions. This process is always present – data requires models to be made sense of, but models must be altered when they cannot effectively make sense of the data. The friction – to use Tsing’s own model – between the conception and the observation can be a productive one, in other words. But this depends on how models are used. Models can be reduced to input-output streams that take quantitative data and turn it into more quantitative data, used to manage some aspect of the landscape (e.g. nutrient runoff); used that way, they foreclose noticing. It is the ongoing interaction between reality and model that generates productive friction, and I think that’s the value of recognizing arts of noticing, and of recognizing modeling as an essential part of those arts. If we maintain this model/reality split, then we essentially cede modeling to those who would use it for more abstract and insensitive purposes – global finance, neoliberal governance, etc. Instead, we must embrace modeling, and ensure that it is part of a broader art of noticing.


The Top of the Watershed: Cooperstown, NY

Since my research framework has shifted away from a narrow focus on the Chesapeake Bay Model and towards the material, social, technological, and scientific construction of the watershed, I’ve been trying to get a better sense of the watershed – particularly the New York portion, where I live. On New Year’s Eve, Trish and I drove up to Cooperstown, NY to see Lake Otsego – the headwater of the Susquehanna River, which contributes the largest portion of water to the Bay and also forms its main stem. We arrived late in the afternoon, and it quickly began turning dark, but I got some decent photos while we were there, and learned a lot about the lake and the town at its base.

Interestingly, we did not see many references to the Chesapeake Bay in the signage along the lakeshore despite its heavily environmental focus. The only reference we noticed was a large sign discussing the history of the lake and town, which mentioned that the stone at the mouth of the lake – known as “Council Rock” – marks the beginning of the Susquehanna River, which winds 464 miles before flowing into the Chesapeake Bay.

I’ll have more to say about the watershed and living in its upper regions shortly, but I wanted to share these pictures as my small way of contributing to the construction of the watershed.

I Made Another Model: Hawks and Doves

This weekend, I made another model. This one is built on a model explored by Carl Lipo and Terry Hunt in their book The Statues that Walked, exploring the evolutionary value of cooperation and competition. At least, it’s built on the concept, since I haven’t read the book and I haven’t seen their original model. But the concept is simple. You have hawks and doves – hawks always want to fight, and doves always want to cooperate and avoid confrontation. Doves share points by cooperating; hawks take points by intimidating doves and by fighting other hawks. However, hawks also lose points if they lose a fight with another hawk. In one scenario I read – and used as an activity in my class – the “reward” is 10 points, which is either split by cooperating doves or claimed entirely by an intimidating or winning hawk. The penalty for hawks losing a fight is 30 points.
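To make the payoff scheme concrete, here’s a quick Python sketch of the classroom scenario – not the NetLogo model itself, and the coin-flip fight outcome is my assumption. With a reward of 10 and a fight penalty of 30, the strategies break even when hawks make up a third of the population:

```python
import random

REWARD = 10      # resource value from the scenario
FIGHT_COST = 30  # penalty to the hawk that loses a fight

def payoff(a, b, rng=random):
    """Points awarded in one encounter between strategies a and b."""
    if a == "dove" and b == "dove":
        return REWARD / 2, REWARD / 2  # doves split the reward
    if a == "hawk" and b == "dove":
        return REWARD, 0               # hawk intimidates the dove
    if a == "dove" and b == "hawk":
        return 0, REWARD
    # hawk vs hawk: a coin flip decides who wins the fight
    if rng.random() < 0.5:
        return REWARD, -FIGHT_COST
    return -FIGHT_COST, REWARD

def expected_payoffs(p_hawk):
    """Mean per-encounter payoff for each strategy when a fraction
    p_hawk of randomly paired opponents play hawk."""
    e_hawk = p_hawk * (REWARD - FIGHT_COST) / 2 + (1 - p_hawk) * REWARD
    e_dove = (1 - p_hawk) * REWARD / 2  # a dove gets nothing from a hawk
    return e_hawk, e_dove

# The strategies do equally well at p_hawk = REWARD / FIGHT_COST = 1/3
print(expected_payoffs(1 / 3))
```

That break-even fraction is the classic hawk-dove result: the stable mix of hawks is the reward divided by the fight cost, so cheaper fights mean more hawks.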

After running this scenario in class, I wanted to explore other possibilities, so I decided to write my own model where you could vary the costs and benefits of cooperation and engaging in combat, and where doves could also lose points when they are intimidated. I also built in a system where you could have hawks chase doves and/or one another, and you could have doves run from hawks and/or towards one another. Here’s the interface:

Screen Shot 2015-11-23 at 11.58.47 PM

The results of playing with these variables are some interesting patterns of interaction – mostly it’s fun to watch the flocking and dispersal behavior of the different agents depending on how you configure their behavior (watch a video here: Hawks-Doves). You can download the model here if you’re interested in running it yourself.

Obviously, there are still major assumptions built into the model (e.g. that agents can be split into discrete groups of hawks and doves, that they don’t change, etc.), and there are still serious limitations to the model, but it’s an interesting exercise, and fascinating to see the different patterns or behavior and outcomes depending on the variables. Please let me know if you see any way I can improve this model.

Update 11/24/2015 11:31AM: I modified some of the interface on the model and included a setting that will allow you to run the scenario repeatedly. The new model can be downloaded here.

I built a model!

Since I began doing research on modeling, I’ve been thinking about learning to do some modeling of my own as an ethnographic exercise. I couldn’t justify writing about the practice of modeling without some first-hand experience of what it’s like to build a model from start to finish. So this weekend I spent some time learning to construct a simple model using NetLogo – an open-source agent-based modeling environment. Here are my results.

First, I had to decide what kind of model to build. I wanted it to be simple enough that I wouldn’t get in over my head, but complex enough that I would actually be challenged by it. A lot of ideas came to me, but I finally settled on building a simple model of erosion. NetLogo comes with a variety of sample models that you can play with, and there is an erosion model among them, but I don’t like the way it works. The only agents are patches – stationary agents that can have different characteristics and affect other agents – and the flow of water is simulated by the patches themselves. What I wanted was more like the Grand Canyon flow model created by Uri Wilensky that comes with the package. In that model, little drops of water (each representing some unknown quantity of water) flow across a landscape of patches whose elevations are drawn from a GIS data set for a region of the Grand Canyon. It’s a cool little model, but there’s no erosion built in. As a result, the water simply flows down, pools, and then flows out when it reaches the edge of the map – the landscape doesn’t change in response to the water. What I wanted was a hybrid of these two approaches.

I began constructing from scratch. I don’t remember the exact order in which I constructed the model, but I had to do a number of things: 1) I had to generate a random background of patches with varying elevations whose color matches the elevation, 2) I had to create water and randomly distribute it around the landscape, 3) I had to make the water flow from higher elevation to lower elevation, and 4) I had to make the water erode the landscape by reducing the elevation of each patch whenever water flows through it. It all seemed simple, and it was, but it took a lot of time to figure out exactly how to get all of the agents to do what I wanted them to do. I won’t go into detail on the entire process of building the model, but here are some of the issues that I ran into:

  1. Making the background and assigning each patch an elevation was easy enough, but getting the patches to adjust their color to reflect their elevation took some time to figure out. It turns out that there is a function that does this, but I had to wade through a lot of documentation and the code of a few other models to figure out how it works.
  2. Getting the water to flow was also pretty easy – there’s a built in function (downhill) that makes this possible. However, when I did this, the water tended to simply pool infinitely in single patches. I wanted the water to flow more, so I consulted the Grand Canyon model to see how they made water flow. By assigning a height to the water and using that in conjunction with the elevation, it was possible to limit the amount of pooling in each individual cell.
  3. Getting the landscape to erode was a real problem at first. I had the water flowing, but it wasn’t altering the landscape at all. I realized that having the water “pick up” a portion of the land and carry it away would have been very complicated, so instead I wanted the patches to evaluate how much water was on them at any given time, and whether or not that water would be flowing in the next turn. With both of those evaluations, each patch would then respond by reducing its elevation proportionately. I played around with the if/else statements, but getting the procedure to work properly took a lot of time and several phases of trial and error. The problem I had then was that certain portions of the landscape were eroding infinitely, so I had to limit the amount of erosion possible on any given patch, and also make it so that water that wasn’t flowing, but simply pooling, wouldn’t affect the elevation.
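The logic in points 2 and 3 can be paraphrased in Python – a sketch of the rules, not the NetLogo code itself. A drop compares each neighbor’s elevation plus standing water against its own patch and moves only if somewhere is effectively lower; a patch erodes only when the drops on it would actually flow:

```python
WATER_HEIGHT = 10  # the drop "height" used in the model

def effective_height(elevation, n_drops):
    """Height a flowing drop 'sees': bare elevation plus the
    stacked height of drops already pooled on the patch."""
    return elevation + n_drops * WATER_HEIGHT

def choose_target(here, neighbors):
    """Point 2's rule: move to the lowest neighbor only if its
    effective height is below this patch's; otherwise the drop
    pools. Patches are (elevation, n_drops) pairs."""
    target = min(neighbors, key=lambda nb: effective_height(*nb))
    if effective_height(*target) < effective_height(*here):
        return target
    return None  # nowhere lower: the drop pools here

def erodes(elevation, n_drops, neighbor_elevations):
    """Point 3's condition: a patch loses elevation only when drops
    are present and at least one neighbor is lower, i.e. the water
    will flow rather than pool."""
    return n_drops >= 1 and min(neighbor_elevations) < elevation

# A patch at elevation 100 holding 2 drops is effectively 120 high,
# so a dry neighbor at elevation 110 becomes the flow target
print(choose_target((100, 2), [(110, 0), (130, 0)]))
```

Counting standing water toward a patch’s height is what limits infinite pooling: once a pool stacks high enough, the next drop spills sideways.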

I did get all of these things to work, though not perfectly, and the water wasn’t behaving in exactly the way that I wanted it to, so I decided to take another approach. The whole time I had been trying to build a model entirely from scratch, but after all of that work, I realized that I could just take the code built for the Grand Canyon model and add an erosion function like the one I had built into my own. I copied and pasted the erosion function into the other model and modified it to fit the parameters defined there – this also took some trial and error, but ended up being pretty straightforward. Here are some examples of the model runs:

Screen Shot 2015-11-16 at 12.40.25 PM

This is a screenshot of my original model. You can see the landscape with water, mostly pooling in different places. The top three sliders on the side adjust the rate of erosion, the rate of evaporation, and the amount of rain. The fourth slider is for setting up the terrain at the beginning – it determines how much of a gradient there is in the landscape. If it’s set low on setup, then there will be large differences in the elevations between patches, so you could have a patch with 1000 elevation next to a patch with 0 elevation. If it’s set high, then the landscape is more evenly distributed. The draw function simply has the water draw a path as it moves – this is useful for seeing how water flows, but I don’t think it does very much in this model.
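For what it’s worth, here’s one hypothetical way to reproduce that terrain slider’s effect in Python – my reconstruction, not the model’s actual code. Start from pure noise and apply a variable number of smoothing passes: few passes leave sharp patch-to-patch jumps, many passes give an evenly graded landscape:

```python
import numpy as np

def make_terrain(size, smoothing, seed=0):
    """Generate a size x size elevation grid. More smoothing passes
    produce a more evenly distributed landscape; zero passes can put
    an elevation near 1000 next to one near 0."""
    rng = np.random.default_rng(seed)
    elev = rng.uniform(0, 1000, (size, size))
    for _ in range(smoothing):
        # average each cell with its four orthogonal neighbors
        # (edges wrap, as NetLogo's world does by default)
        elev = (elev
                + np.roll(elev, 1, axis=0) + np.roll(elev, -1, axis=0)
                + np.roll(elev, 1, axis=1) + np.roll(elev, -1, axis=1)) / 5
    return elev

rough = make_terrain(50, 0)
smooth = make_terrain(50, 20)
# neighbor-to-neighbor elevation differences shrink as smoothing rises
print(np.abs(np.diff(rough)).mean(), np.abs(np.diff(smooth)).mean())
```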

Screen Shot 2015-11-16 at 12.40.06 PM

Screen Shot 2015-11-16 at 12.43.12 PM

These two are before and after images of the terrain. The first is how the terrain looks with ruggedness set to a mid-range immediately after setup. The second is what it looks like after about 1000 ticks. You can see that there is more dark area on the second, and the water is beginning to pool in certain places. Since the landscape starts off fairly flat, there aren’t any major streams where water tends to collect.

Screen Shot 2015-11-16 at 12.07.53 PM

This is what the Grand Canyon elevation model looks like at the start. As I said, it draws elevation data from a file in which elevation from a portion of the Grand Canyon is stored. That’s why it looks smoother and more dramatic than mine above.




Screen Shot 2015-11-16 at 12.07.36 PM

This is what the Grand Canyon model looks like after I’ve run it with my erosion function included. Obviously some parts get more eroded than others, and this becomes a self-reinforcing process forming these streams and tributaries.




Screen Shot 2015-11-16 at 3.57.40 PM

This is what the model looks like when it’s running. You can see the water running over the landscape, pooling in certain places, and flowing from one pool to the next. Over time, depending on the flow rate and the evaporation rate, these pools of water will grow and become streams. One of the ways my model differs from the original in its outcome is that the water in the original tends to collect more in larger pools. Mine tends to stay in narrow paths because the earth on those paths is eroding faster than elsewhere.

There are some obvious limitations to this model. First, it’s not very empirically based aside from the elevation data. I don’t know how much water one drop represents, except that its “height” is 10 (I also don’t know what units the elevation is measured in). As a result, I have no idea whether the erosion rate corresponds to what one would expect for this kind of landscape with the given water flow. I also have it set to prevent eroding too far into the negative values – mostly because I want to be able to see the cumulative effects of erosion across the landscape rather than concentrating them in potentially infinitely deep sections of the stream bed.

The only thing I still want to do with it is make it possible to randomly generate a landscape again instead of using the Grand Canyon data. That won’t be hard; I just haven’t had time to write the code. Here is the code for my model. If you want, you can download it and play around with it – see the effects of erosion on this section of the Grand Canyon (you’ll need the elevation data file, here – just put it in the same folder as the model code – and you might also have to play with the formatting a bit).

breed [waters water]
breed [raindrops raindrop]

waters-own [age]

patches-own [elevation]

globals [
  color-min    ;; lowest elevation used for color scaling
  color-max    ;; highest elevation used for color scaling
  water-height ;; how many feet tall one unit of water is
  border       ;; keep the patches around the edge in a global
               ;; so we don't ever have to ask patches in go
]

;; Setup Procedures

;; reading the external file happens in startup rather
;; than setup so we only do it once in the model;
;; running the model does not change the elevations

to startup
  ;; read the elevations from an external file
  ;; note that the file is formatted as a list
  ;; so we only have to read once into a local variable
  file-open "Grand Canyon data.txt"
  let patch-elevations file-read
  file-close
  ;; put a little padding on the upper bound so we don't get too much
  ;; white and higher elevations have a little more variation
  set color-max max patch-elevations + 200
  let min-elevation min patch-elevations
  ;; adjust the color-min a little so patches don't end up black
  set color-min min-elevation - ((color-max - min-elevation) / 10)
  ;; transfer the data from the file into the sorted patches
  ( foreach sort patches patch-elevations
    [ ask ?1 [ set elevation ?2 ] ] )
  set-default-shape turtles "circle"
end

;; just clean up the marks that the raindrops have made
;; and set some global variables to defaults
to setup
  clear-turtles
  clear-drawing
  ask patches
    [ set pcolor scale-color brown elevation color-min color-max ]
  set water-height 10
  set border patches with [ count neighbors != 8 ]
  reset-ticks
end

;; Runtime Procedures

to go
  ;; check for mouse clicks on empty patches.
  ;; if we've got a winner make a manual raindrop that is red.
  ;; even when raindrops are hidden,
  ;; newly created manual drops will be visible
  if mouse-down? and not any? turtles-on patch mouse-xcor mouse-ycor
    [ create-raindrops 1
      [ setxy mouse-xcor mouse-ycor
        set size 2
        set color red ] ]
  ;; make rain-rate drops randomly
  create-raindrops rain-rate
    [ move-to one-of patches
      set size 2
      set color blue ]

  ifelse draw?
    [ ask turtles [ pd ] ]
    [ ask turtles [ pu ] ]

  ask raindrops [ flow ]

  ask waters [ grow-old ]

  erode

  ;; when raindrops reach the edge of the world
  ;; kill them so they exit the system and we
  ;; don't get pooling at the edges
  ask border
    [ ask turtles-here [ die ] ]
  tick
end

to flow ;; turtle procedure
  ;; get the lowest neighboring patch, taking into account
  ;; how much water is on each patch
  let target min-one-of neighbors [ elevation + (count turtles-here * water-height) ]
  ;; if the elevation + water on the neighboring patch is
  ;; lower than here, move to that patch
  ifelse [ elevation + (count turtles-here * water-height) ] of target
         < (elevation + (count turtles-here * water-height))
    [ move-to target ]
    [ set breed waters ]
end

to erode
  ask patches [
    if count raindrops-on self >= 1                                    ;; if there are raindrops on this patch
       and [elevation] of min-one-of neighbors [elevation] < elevation ;; and the raindrops are flowing
      [ set elevation (elevation - count raindrops-on self) ]          ;; reduce elevation of this patch by one
  ]                                                                    ;; for each raindrop present
  ask patches
    [ set pcolor scale-color brown elevation color-min color-max ]     ;; change patch color to indicate erosion
end

to grow-old
  set age age + 1 ;; water ages 1 each tick
  ;; waters die after a certain number of ticks
  ;; depending on the evaporation rate selected
  if evaporation-rate >= 1 and age > (11 - evaporation-rate) [ die ]
end

; Copyright 2006 Uri Wilensky.
; See Info tab for full copyright and license.



Constructing a Watershed

As we walked from my home across the walking bridge towards downtown Binghamton, we passed through a small park at the confluence of the Chenango and the Susquehanna rivers. I hadn’t seen this park before because I had always crossed the other bridge, but as we rounded the corner an historical marker caught my eye.


This took me aback, and I spent the rest of the evening researching the Chenango Canal and the Erie Canal. What caught my attention and grabbed hold of my imagination in reading this sign was the idea that, at one time, the Hudson River watershed, the Chesapeake Bay Watershed, and the Great Lakes Basin were all interconnected by a system of man-made canals. I don’t know how much water was actually flowing between these watersheds, but it seems probable that there was at least some mixing of the waters, and the result would have been a human-constructed super-watershed. This discovery blew my mind, and made me think again about the construction of a watershed as both a natural and a social reality.

A watershed is an interesting thing. Obviously it is a natural geographic boundary defined by elevation and geomorphology directing the flow of water. Natural processes like the flow of groundwater versus that of surface water, the movement of water from land to stream, the changes to the landscape caused by plants, animals, and weather all play a role in the quality and quantity of water within the watershed. But as we change the landscape and attempt to grapple with increasing water quality and availability issues, the watershed also becomes a social reality. Modeling, I would argue, plays a significant role in constructing the watershed – not only in a representational way (i.e. the way we think about the watershed) but also in a performative sense (the relationships that constitute the watershed as a socio-ecological system).

There is no particular reason why the Chesapeake Bay Model has to be a watershed scale model. The estuary had been modeled for years prior to the initial introduction of the watershed model, and other systems are managed using models that focus only on the water system itself. Modeling the watershed certainly adds data and makes for a more comprehensive model – it is, as many of my modeler friends would say, “the best science.” But there are tradeoffs. The watershed model is massive and complex. It has to simulate 64,000 square miles of land across a number of different geological zones. That’s no small task. The Bay Model does it well, but only after several iterations of the model, and they’re always working to improve the way that it represents different factors that affect water quality. Would it not be easier to simply take monitoring data of the inputs and run them through a complex estuary model? I’m not the one to answer or ask that question, but it seems to me that there are many options, and the watershed model is not a given for the Chesapeake Bay Program from a purely scientific or management perspective.

On the other hand, modeling the watershed has had a significant effect on the construction of social and political relationships surrounding the Chesapeake Bay’s water quality. The Bay Program itself is an excellent example. The Bay Program was founded in 1983 – the “Year of the Bay,” as it has been called. At the time, the only partners in the Program were the states immediately bordering the Bay – Maryland, Virginia, and Pennsylvania – as well as the District of Columbia and the federal government (represented by the EPA). The modeling of the Chesapeake Bay was underway, and the first version of the watershed model had been completed in 1982. In 1987, the next version (called Phase 1) of the model was released, and, for the first time, the watershed model was coupled to a simple estuary model. It was in this year that the next Bay Agreement was signed – still including only those partners immediately bordering the Bay. This was also the first signed agreement in which the watershed was mentioned as a scale of intervention, particularly in reference to population growth; however, most of the language still refers primarily to the estuary and its ecosystem.

Over the next decade and a half, the watershed model and the estuary model were improved, and an airshed model was also added to the suite. Then, in 2000, the signatories once again pledged to clean up the Bay with the Chesapeake 2000 agreement. For the first time, the watershed scale becomes the primary focus of the Bay Program. It’s also the first time that the headwater states – New York, Delaware, and West Virginia – are included in the plan. Now, with the 2010 TMDL “pollution diet” imposed by the EPA, the watershed scale is firmly cemented in the social structure of the Bay as all states within the watershed are responsible for some degree of nutrient reductions.

This connection between the development of the watershed model and the construction of the Chesapeake Bay Program suggests that the modeling played a significant role in demonstrating the limitations of an estuary-focused approach (i.e. including only the adjacent states in the agreement). These refinements, then, made possible – even inevitable – the construction of a watershed-scale management structure – the Chesapeake Bay Program itself. In the same way that a series of canals once linked three watersheds together, the watershed model has linked the Chesapeake watershed states together into a management super-structure that may (or may not) be more capable of addressing the nutrient pollution issues that face the Chesapeake Bay.

As I explore more of the watershed now that I am living in its northernmost expanse, I encounter more and more of these reminders that I am still in the watershed: signs announcing my entry and exit from the watershed, information posters at rest stops that describe the watershed as an integrated system. It reminds me that the watershed is not simply a natural region. If we are to be effective at managing the problems facing the Bay and its tributaries, the watershed must also become a social reality. Modeling plays a significant role in that process of constructing social relationships and performing the Chesapeake as a watershed. Understanding how modeling affects that social reality, and the ways that the watershed can be imagined and performed differently, is the subject of my ongoing research.


Modeling Modeling

As I continue to engage with modelers and ask for their perspectives on the role of modeling in our understanding of ecological systems, I’m finding myself looking for ways of thinking about models and modeling – I’m looking for models of models. What is a model? What is its relationship to the people who produce it? To the ecological things and processes it is intended to represent? Here are a few thoughts that I’ve come up with that I will explore as I continue my research and writing.


First, models are representations: abstracted, simplified objects and systems that stand in for other objects and systems. If I were building a table-top model of a town (images of the movie Beetlejuice come to mind), I would look for materials that visually resemble the elements that make up the town – small toy cars, foam, fragments of wood and metal, colored paper, and so on. I would arrange them into a pattern that resembles the overall pattern of the town itself – aligning roads and buildings with those of the town, arranging trees and grass in relation to these. I would pay attention to detail, but would also avoid details where they might get in the way – there is an art to abstraction, selecting the right details to convey the overarching pattern without getting bogged down in them. All of this to create the effect of looking at the town from afar – from a high mountain or an airplane.

In fact, representation and modeling are essentially synonymous. All forms of representation – drawing, painting, sculpture, writing, etc. – can be seen as kinds of modeling. There are, of course, non-representational styles and modes of expression – even these might be a kind of modeling, though what exactly is being modeled is more difficult to tell. In any case, this brings me to the second way of thinking about modeling: models are objects. It’s easy to forget that models, maps, and other representations have an existence of their own, because we tend to think of these things in relation to what they are meant to represent. Questions like how accurate a representation is, or what it can tell us about the thing or system, link us back to the original object. Rarely do we ask of a model what it means in itself and what it does independent of its relationship to the original. A painting evokes a sense of wonder and awe, a sculpture disturbs, a drawing reminds. Although passive in themselves, these objects act upon us simply by being, and in that sense the model is part of the thing being modeled.

As objects, models are generally heterogeneous – made up of many different kinds of materials. Different materials enable different modes and methods of representation, and so a model is an assemblage. The modeler must assemble relationships between these different materials in order to construct a representation. Even the modeler herself is part of the assemblage in many ways – putting her body into the work, her fingerprints in the clay. It’s the relationships between these materials that are productive – that make the model more than just a pile of stuff.

Objects also grow old, decay, get covered with dust, and fade away – the relationships that hold the assemblage together wear down. So modeling, if it is to remain present, is also a process (agencement). In the same way that the model is never whole or perfectly accurate (models are always wrong, I keep hearing), a model is also never complete. The hobbyist continually works on her table-top model of the town, always looking for the right materials, trying to keep up with the changes time works on the town. At times the model is a snapshot in time, but even then, there must be a continual engagement with it in order to keep it up – paintings need retouching, sculptures must be cared for, etc. Often, the model is left behind, forgotten, or discarded. It has served its function, or the work required to restore it is excessive and it’s better to build a new model instead. In all of these cases, there is a set of processes at work, either wearing the model down over time or keeping it continually fresh and new.

I’ve used examples of artistic representation here, but all of this is true for the models I work with as well. These models are numerical – composed of elaborate systems of equations too complex for individuals or even large groups of people to process by hand. The equations represent different processes within ecological systems – the interaction of nutrients in the ground and in water, the flow of water over the land, the effects of nutrients on organisms, and so on. All of these are objects apart from the specific systems they represent – one model can be adapted and transferred to another system as necessary. They are also themselves represented in computer systems as the movement of switches and electrical current – this is what allows them to be computed. These elements – computer systems, numerical functions, etc. – are assembled in relation to one another to produce the larger model. Finally, all of this must be continually made and remade – new functions are introduced, computing power is increased, our understanding of the relationships between elements changes, etc. Models are either kept new or set aside as obsolete.

There may be many other models for models, but these four complement one another and can be assembled to produce a broader picture of modeling. The test now is to see how my model of models fits with the data I’ve been collecting, and then to see what effects or functions the model can have in the larger world.


Indra’s Models

This whole time, I’ve been working on what I thought would be different case studies of modeling in the Chesapeake Bay. I assumed there might be some overlap between the cases, but that they would be relatively discrete – easy to parse out so that I could evaluate the different effects of each modeling practice. What I’ve found is far more complex: instead of separate cases, something more like an “Indra’s net” of models operating alongside, on top of, and even within one another. This doesn’t negate the original purpose of my project, but it does make things a lot more interesting.


There are three aspects to this Indra’s net – three ways that models reflect and refract one another. First, I’ve confirmed Paul Edwards’s finding that the divide between model results and so-called empirical data is artificial. Models depend on data, but that data can’t exist without models. Data is not collected uniformly. On the Bay there are thousands of buoys and other data collection platforms, often carrying different instruments and using different methods. Even when these use the same equipment and measure the same things, weather conditions and other factors cause disparities in the data that don’t reflect actual conditions. Add historical data to the mix, and you get a jagged map of data that doesn’t appear to be showing the same thing. Models are used to smooth out these differences and make the data usable across the watershed – they take many different measurements, collected over a large spatial range and a long time, and turn them into a unified data set. As a result, the models built from the data already contain the models used to smooth out the data sets. Models within models.
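The smoothing step can be illustrated with a toy sketch. The code below uses inverse-distance weighting – one simple interpolation technique, and only a stand-in for whatever methods the Bay Program actually uses – to turn scattered station readings into a unified grid of estimates. The buoy positions and nutrient values here are invented for illustration.

```python
# Illustrative sketch: reconciling sparse, uneven station measurements
# into a unified gridded data set via inverse-distance weighting (IDW).
# All stations and values are hypothetical.

def idw(stations, x, y, power=2):
    """Estimate a value at (x, y) from scattered (sx, sy, value) stations."""
    num, den = 0.0, 0.0
    for sx, sy, value in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:
            return value  # exactly at a station: use its measurement
        w = 1.0 / d2 ** (power / 2)  # nearer stations weigh more
        num += w * value
        den += w
    return num / den

# Hypothetical nitrogen readings (mg/L) from three monitoring buoys.
buoys = [(0.0, 0.0, 1.2), (10.0, 0.0, 0.8), (5.0, 8.0, 1.0)]

# A "unified" grid of estimates covering the area between them.
grid = [[idw(buoys, x, y) for x in range(0, 11, 5)] for y in range(0, 9, 4)]
```

The point of the sketch is Edwards’s point: the “data set” that other models consume is itself already the output of a model.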

Another way that models intersect with one another is by influence. This may be particularly true in the Chesapeake Bay where you have the Bay Program and the Chesapeake Research Consortium working to bring the best scientific tools to bear on nutrient management in the watershed. The result is a host of models that are continually being developed and redeveloped in response to one another. This model shows an increase in nitrogen here, the other shows a decrease in the same spot – something’s wrong, so the two modeling groups try to figure out the cause of the disparities and fix them. In other cases, one model demonstrates a more effective method for calculating loading values, so its results can be integrated into the other model. The models are, in other words, mutually constituting.

Finally, multiple models are used to validate one another. This has been a growing trend in the Bay Program in the last few years. I remember attending – and presenting at – a multiple models workshop a few years ago; that presentation developed into the project I’m working on now. At the workshop, modelers discussed the ways that multiple models can be integrated. For example, taking the average of several different models generally provides a more reliable result than any one model by itself. The problem is that modeling is a heavy investment, and it would be impossible for the Bay Program to fund a second or third model to use in conjunction with the CBMS. Instead, what they’ve been trying to do for the new version is integrate multiple models at every level – multiple models are built into the CBMS itself, and multiple models are used to validate the data for input and calibration. The CBMS is, in many ways, becoming a model of models.
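The claim that an average of several models is generally more reliable than any single one can be seen in a toy example. The “predictions” below are invented numbers; the point is only that independent errors partially cancel when averaged.

```python
# Toy illustration of multi-model averaging: three imperfect "models"
# of the same quantity, each with its own bias. Their mean tends to sit
# closer to the true value than the typical individual model does.
# All numbers are invented for illustration.

true_value = 100.0

# Hypothetical predictions from three independent models.
predictions = [104.0, 97.0, 101.0]

ensemble_mean = sum(predictions) / len(predictions)

errors = [abs(p - true_value) for p in predictions]
mean_error = sum(errors) / len(errors)            # average individual error
ensemble_error = abs(ensemble_mean - true_value)  # error of the averaged model
```

Here the ensemble error is well under the average individual error – the averaging benefit the workshop modelers described, though it holds only insofar as the models’ errors are independent rather than shared.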

Indra’s net of jewels that reflect one another infinitely is the perfect metaphor for the way modeling works in the Chesapeake Bay. The complex interactions of the different modeling projects have added a layer of density to the project that I hadn’t fully expected before I began. Things are really moving now, and I’m eager to see how this project develops over time.


Apparatus, Infrastructure, Institution

I’ve been reading McKenzie Wark’s latest book Molecular Red (slowly…because I’ve been “busy”…), and I feel that it provides an excellent synthesis of different threads of thought (Marxist, posthuman, science fictional, etc.) which will be extremely useful in my dissertation research. One of the things he explores is the notion of the “apparatus” in Karen Barad’s work on particle physics. The apparatus is what enables us to know the elusive, mysterious objects that compose the world we inhabit. Juxtaposing two versions of realism – objective realism, in which what’s considered to be real is an objective world apart from our conceptions of it, and process realism, in which what’s real is the processes by which we encounter the world around us – Wark describes the way that the apparatuses that mediate our encounters with the world “make a cut.” In other words, it’s not that these apparatuses give us a unique view into a world that is simply there waiting for us to view it – a particularly voyeuristic and patriarchal way to understand knowledge – it’s that the apparatuses make a cut into the world that enables us to encounter it in a particular way. This is the materiality of knowledge production, which is often ignored by the scientists and philosophers who are engaged in these processes.


After exploring the implications of this approach to realism – inspired not only by Barad, but by Haraway, Mach, Feyerabend, Bogdanov, and Platonov – he draws on Paul Edwards’s work on climate science to discuss the complex infrastructure that binds these apparatuses together to construct a global knowledge. Edwards’s book has been massively influential for me in my dissertation research, so it’s understandable that I would be interested in a theoretical approach that links it to other theorists who have interested me. For Wark, infrastructure is what links the various apparatuses of knowledge together to create a vast knowledge system. It’s not just that “knowledge is power” or that we cycle through a set of paradigms – these are idealist notions of knowledge – it’s that the production of knowledge is simultaneously the production of a world.

“Edwards: ‘Data are things.’ If we are to avoid a commodity or corporeal fetishism of such things, then critique has to inquire as to how data is produced. Data are the product of a whole series of labors, of observing, recording, collecting, transmitting, verifying, reconciling, storing, cataloguing, and retrieving.”

The result of all of this labor is not just knowledge, but a complex architecture, a structure of material relations between people, objects, technologies, and organisms. In my research on modeling, I am beginning to resolve the structure of relations that underlies our understanding of the Chesapeake Bay watershed, but what I’m most interested in are the institutional structures that come out of those knowledge practices.

The Chesapeake Bay Program (CBP) is a sprawling institution with links to many other institutions – universities; federal, state, and local agencies; environmental management institutions; interest groups; and so on. It’s so vast and substantial that I’ve taken to thinking of it as the “leviathan of the Bay.” All of these connections are, I would argue, constructed around a knowledge infrastructure that is predominantly engaged in the development of the Chesapeake Bay Modeling System (CBMS). This is a large, complex model that links watershed and airshed inputs to estuarine water quality, allowing researchers and managers to understand how the system works and what kinds of activities are most effective for cleaning it up. It’s not a deterministic relationship in which the development of the model – because it is so large and complex – produces the institutional structure in which it operates. Rather, it’s a mutually constituting relationship – the large institution of the CBP contributes to the construction of the large CBMS, and the large CBMS depends upon the large institution of the CBP to operate.

My contention here is that, if we are to take a socio-ecological approach seriously, then we have to attend to the processes by which socio-ecologies are produced, and our knowledge production practices cannot be separated from the socio-ecologies they inhabit. The apparatuses we use to know the Bay, and the infrastructures and institutions that link those apparatuses together are as much a part of the Bay socio-ecology as the phytoplankton, crabs, and aquatic vegetation the CBP is trying to manage. Thinking this way about the socio-ecology of the Bay might allow us to explore different ways of constructing it – different kinds of apparatuses, or different ways of linking them together – that might produce a better relationship among the various actors who compose it. In order to do that, we need what Wark describes as a “low theory” – one that theorizes relationships by working through the messy, complex, and difficult connections that are made in the process of producing knowledge. Hopefully, this is what my research will contribute.

Performing the Bay

An interesting insight I’ve had in the early phases of my research has to do with the issue of performance in the way that the models are constructed. Of course, I’m particularly attentive to these kinds of issues, because I’m using a performativity framework to make sense of my research, but it’s interesting to have that framework validated by some of the actions and discourse of the people who actually do the modeling. Furthermore, it’s nice to be surprised by the ways in which the framework is validated – to have a novel insight come out of the process of refracting the practices I’ve been observing through the lens of performativity.

For the last few months, I’ve been attending various meetings and conferences in which the models that I am studying have been developed and discussed. Of course, most of the work goes on behind the scenes, and I haven’t had a chance to fully observe those processes yet, but it has nevertheless been a very informative opportunity to take part in these meetings. I get an opportunity to see the various facets of the model hashed out among different people representing different backgrounds, conceptual frameworks, and interests. The construction of models is, in this sense, always a collaborative process.

One of the things I’ve seen often in these meetings has been depictions of the modeled results paired against depictions of results from monitoring sites. It’s a fairly common practice, and probably fairly mundane for those familiar with modeling. Comparing the model to the actual system (as refracted through monitoring equipment, at least) is an important part of the process of constructing and validating a model. It’s possible to understand this practice in a representationalist framework: modelers want their models to accurately represent the system, so presenting the two sets of data together enables us to see how closely the representation comes to reality. That’s useful, maybe, but I think the performative framework provides a more interesting way of understanding this practice.

In thinking about modeling as performance and as performative, it occurs to me that this practice of presenting modeled data alongside monitored data is a kind of performance – an audition, perhaps. The model is being asked to perform the role of the Bay (at least some subset of its attributes, but that’s true of all performance and role play). It’s in these venues where the model’s performance is critiqued, and then the modelers go back to work with the model and improve its performance for subsequent rehearsals (like reciting lines). All of this is in preparation for when the model has to perform for the regulatory bodies, the environmental management staff, and the general public, and, hopefully, if the modelers have done their work, and the model’s performance has been fully vetted, it will convince the audience by performing the Bay effectively.


That’s not all, though! Saying that the model performs the Bay and comparing these demonstrations to rehearsals is neither revelatory in itself nor explicitly performative. Where the performative framework becomes really interesting is in understanding how performances do more than represent or enact a role; they also come to shape the realities upon which they act. I can see two ways this happens in this case. First, there is the relationship between the role and the performer. An actor may have some agency to improvise and perform a part in a variety of ways, but those performances are always structured by the role itself and by the expectations of the audience. An actor cannot play Hamlet however s/he wants; the actor must, to some degree, internalize the character of Hamlet and perform as if s/he were that character. This requires a process of conditioning in which the actor disciplines herself to perform the character (some schools of acting are more intense about this process than others – method acting, for example, is notoriously so).

With that in mind, the Bay model could be said to be undergoing a similar conditioning process, and these rehearsals are where the process is demonstrated and refined. The model must internalize the Bay as the character it will perform, and it’s through this process that the model is shaped not as a representation, but as an actor playing its part.

The second way these practices are performative rather than simply performances is in the relationship between the actor/character and the world with which they act. By world, I am referring to all of the other actors involved in the performance, and I mean “actors” in a broad sense. In a film or a play, there is a sense in which the stage and props are actors, the other actors are as well, of course, but so too is the audience. All of these actors perform their own roles, and this shapes the way that our actor/character performs as well and vice versa. As a result, through this process of many actors performing their roles, a world is created (not a static container for the actors, but the emergent product of their performances).

This is where the metaphor of actors and stages breaks down or has to be expanded, because in the case of the model, there is no “fourth wall” – no imaginary boundary between the performance and reality. Instead, the actors perform their roles within the “real” world. There is a sense, then, in which the model performs the Bay alongside the Bay itself, which is performing its own role in a much broader drama. As a result, the two are, to some degree at least, mutually constituting – the Bay and the Bay model perform a world together. The question that my research is meant to answer, to some extent, is: what kind of world is being performed?

The performative framework allows us to understand the processes and practices of knowledge production (and others as well) from a non-representational perspective. In my view, the representational perspective is limiting because it separates our knowledge and representations from the world. The performative perspective embeds knowledge in the world as practices and relationships. Understanding how these practices contribute to the production of a world in relation to the objects of their representation (e.g. the Bay) will allow us to imagine other possible ways of producing those worlds, and will encourage us to enact new ways of performing them.