|Boundaries are arbitrary. Choosing the best ones is a big deal.|
Let's be frank: systems don't really exist; they're just modeling constructs that humans use to predict future behaviours and events. If you don't believe me, here's a simple exercise that will prove the point.
|Where's the engine?|
Go open the hood of your car and look inside. The image to the left is what you'd see under the hood of a Ford Fiesta. If you know what modern car engines look like, please try to ignore that information. Pretend you're an alien, or someone who has somehow never ever seen what's under the hood of a car.
Now, consider this: where does the engine "end" and the rest of the car "begin?" In other words, where's the boundary between the engine and the rest of the car?
Right. You can't tell. That's because the boundary isn't really there in an actual car. A car is just a bunch of parts all connected together. Whether a given part is part of the engine is an arbitrary decision taken by the engineers who developed the car.
Well, it's not completely arbitrary. Automotive engineers have some very good reasons for defining the boundary between engine and not-engine, but those reasons are rooted in things other than the actual nature of the parts that end up making up the car.
Consider another related example: say your car's engine has a slow oil leak. Now consider a drop of oil that has worked its way out of the engine and is now in a microscopic space somewhere in the engine gasket. Is that drop of oil still part of the engine? That is, has it crossed the boundary that separates engine from not-engine? It's still physically "inside" the volume of space that the engine occupies, so we could say that it is inside the engine (inside its boundaries). On the other hand, the drop of oil is no longer serving its function in the engine; on this measure, then, we could say that the drop is outside the boundary of the engine. So the drop of oil is physically within the boundaries, but functionally outside.
What this really means is that there's more than one set of boundaries possible for a system. Indeed, there is an infinite set of possible boundaries - although most of them would be silly for various reasons. So the question becomes: which boundaries are "best?"
Here's another example: say you're designing a system - say, an elevator - that has multiple electrical components in it. All the electricity to run the elevator comes from a single ultimate external source. This means that somewhere between the source and the elevator's electrical components, the flow of electricity must be divided up so that each electrical component gets what it needs. If you design the elevator assuming that you will be provided with a single source of electric power, then you are (possibly implicitly) assuming the responsibility for dividing up that power among the elevator's various components. If, on the other hand, you expect multiple electrical inputs for your elevator, you're basically telling someone else that dividing up the electric power is their problem, not yours. Ultimately, when the elevator is installed and operating, all the same parts and assemblies will be present; but where you draw the boundaries of the elevator system will define who is in charge of, and responsible for, different elements of the overall artifact. You can see how being clear on where the boundaries are can have huge impacts on all the agents involved in designing, building, maintaining, operating, and decommissioning a system.
And this doesn't only apply to engineered things, but also to organizations, communities, societies, political "systems," economies, and ecological environments.
So, establishing good system boundaries is really, really important.
We want to set the "best" boundaries. But how do we decide what "best" means? What criteria can we use to wade through all the possible sets of boundaries to find the best one?
Let's make things even trickier. Consider an aquarium tank with a fish swimming around in it. You can identify the water by thinking of it as not-fish - everything that isn't fish must be water. To do this, though, you must already know what "fish" is; if you don't know "fish," you can't know "not fish" (i.e., water). So far, so good.
However, you can also identify the fish by thinking of it as not-water - everything that isn't water must be fish. But - and here's where the trouble comes - this means you must already know what "water" is.
This is a paradox: you must know what "fish" is to learn "water," but you must know what "water" is to learn "fish."
In fact, this is a false paradox in practice because we have other information available (our past experiences with both water and fish) that allows us to know what they are before we see the fish in the aquarium tank. It irks me something awful that some people use this paradox to argue that systems thinking is somehow wrong.
Still, the paradox is useful because the way you resolve it also provides, I think, an excellent way to identify system boundaries.
The fish/water situation is a paradox because we only have the fish as an entity, and the water as an entity. In this case, we can resolve the paradox by changing the kinds of entities we consider. In particular, consider the properties of things in our "universe" rather than the things themselves. There are three in particular that are very important at the meso-scale (the human-level scale) of things: colour, shape, and movement.
Look around you. If you think of what you see as an image, then every region of that image will have colour, shape, and movement (albeit some things will have an absence of movement). Now ask yourself: isn't it true that you notice where something is by the fact that its colour is different from its background? Doesn't the region of a certain colour mark a particular shape, which distinguishes it from other, nearby/adjacent shapes? And, if the object is moving, doesn't the movement itself let you demark its edges as it obscures things behind it, thereby identifying its shape?
Here's how we solve the paradox: look at characteristics of the aquarium and the stuff in it. Look to see where regions are demarcated by colour, shape, and movement. The fish is a different colour than its surroundings, which in turn lets you determine its shape, which is distinctive; and it moves with respect to the other stuff in the aquarium in a manner different from anything else in the aquarium. You can therefore identify the fish without having to know what not-fish is.
Most importantly, where colour, shape, and movement change markedly will locate the boundaries of the fish.
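If you like, you can make this "boundaries are where properties change markedly" idea concrete in a few lines of code. Here's a minimal sketch - all the pixel values and the threshold are invented for illustration - that scans a single row of colour values and flags a boundary wherever the property jumps sharply between neighbours:

```python
# Toy boundary-finder: a boundary is wherever a property (here, a pixel's
# colour value) changes markedly between adjacent samples.

def find_boundaries(values, threshold):
    """Return the indices where adjacent values differ by more than threshold."""
    return [i for i in range(1, len(values))
            if abs(values[i] - values[i - 1]) > threshold]

# A row of pixels crossing from blue-ish water (~30) into an orange
# fish (~200) and back out again:
row = [30, 31, 29, 200, 205, 198, 32, 30]
print(find_boundaries(row, threshold=50))  # [3, 6]
```

The small wiggles within the water and within the fish never trip the threshold; only the two sharp changes do, and those two indices are exactly where the fish's boundary sits in the image.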
Running this argument for the water is a bit trickier, but it will still work - I leave that as an exercise for the reader.
Also, it happens that colour, shape, and movement are three properties that are processed by different parts of the human brain, and that the results are merged into a single "experience" before reaching consciousness. That's why you only see "fish" and "water." So this is a very natural way to set boundaries - it's consistent with how we evolved and how our brains operate.
Of course, it's not quite that simple. Colour, shape, and movement are three very concrete properties we can use to set system boundaries, but they're not the only ones. In the general case, there are many different properties that could be used. Each set of properties will yield different sets of boundaries.
It may seem that we've come full circle - only now we have to look for the best set of properties rather than the best set of boundaries - but we're in fact in a better position, because it's much easier to think about variable properties than it is to think about boundaries. Boundaries are imaginary, but the properties can be quite concrete. Our brains tend to like concrete things better. And since there's a mechanistic procedure to find boundaries from variable properties, everything becomes more manageable.
Consider: where are the boundaries of your home? (Assume you live in a detached residence.) Obviously, the walls, foundations, and roof mark a boundary. Colour and shape take care of that. But what about your water line and sewage drain? Once you've flushed the toilet, is the sewage still within your home? What about when it's in the line between your home and the street sewer main? Is it still your sewage once it's in the main sewer line? And why would that be? What changed? What stayed the same?
In many ways, a more rational boundary for your home is the extent of the "property" that is yours. This will include the residence, but also the land around it, the sewer and water and gas lines under it, etc. The boundary now marks a change in control and responsibility. On your property, you are responsible and have a significant amount of control. Beyond the boundary of your property, you are not as responsible and have significantly less control. The boundary is marked by the changes in the properties of responsibility and control. There are also changes in movement at the boundary: water exits the mains and enters a smaller dedicated pipe within your property. Ditto for electricity and sewage.
Of course, it's not all peaches and cream. What about the raccoons that rummage through your garbage? Are they your responsibility while they're on your property? What about birds that poop on your lawn chairs? What about rain and snow?
You should be getting a bit confused by now. You probably are responsible if a house-guest trips on a rake in your backyard and breaks their leg, but you're not responsible for a bird that flies into your picture window (this being another good case of where colour - or the lack thereof - makes the window invisible by making its boundaries invisible). You should be noticing that there's no easy way to develop a rule that lets you decide whether or not you're responsible for different things on your property, but that it can be easy to find where concrete properties change.
It's not just you. There really is no such rule, at least not one that is based on evidence, science, and logic. There are plenty of legal rules on the matter, but they are both incomplete and imperfect (hence the many, many court cases every year involving odd events on people's property), and they're completely arbitrary - the result of centuries of arguing and random decisions taken by people ill-equipped to render a rational verdict.
Notice how glaringly obvious this problem becomes when you think in terms of systems that are bounded where properties change values. That is to say, it's hard to know where the boundaries are because they are not based on where properties change their values, but rather are based on arbitrary and ad-hoc decisions with little or no valid justification. Perhaps a better way to decide where your property really ends is to look at the variability of all the properties, and make people responsible (and give them control) over everything within those boundaries. (But I'll leave that to the politicians and lawyers for now.)
Finally, this brings us back to one of the most fundamental concepts of systems science: stocks and flows. A "flow" is some quantity of stuff that moves between "stocks," which are buffers that accumulate and discharge the stuff in flows. The classic example is a bathtub: the water in the tub itself is the stock, and the water coming from the faucet and going into the drain are the flows. Batteries, refrigerators, and parking lots at car dealerships are other examples of stocks. The variable properties that I've been writing about can help you find where the boundaries are as flows cross from one system into another (i.e., between stocks).
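In code, the bathtub example amounts to a single stock accumulating the difference between its flows over time. Here's a minimal sketch - all the rates and the time step are made-up illustrative values:

```python
# The bathtub as a stock-and-flow model: the stock (water in the tub)
# accumulates the difference between inflow (faucet) and outflow (drain).

def simulate_bathtub(stock, inflow, outflow, dt, steps):
    """Simple Euler integration of one stock with constant flow rates."""
    history = [stock]
    for _ in range(steps):
        stock += (inflow - outflow) * dt   # net flow accumulates in the stock
        stock = max(stock, 0.0)            # a tub can't hold negative water
        history.append(stock)
    return history

# Faucet at 5 L/min, drain at 2 L/min: the stock rises 3 L every minute.
levels = simulate_bathtub(stock=10.0, inflow=5.0, outflow=2.0, dt=1.0, steps=10)
print(levels[-1])  # 40.0 litres after 10 minutes
```

The flows cross the system boundary (the faucet and the drain); the stock is what sits inside it. That's why finding where flows change is such a good way to find where the boundary is.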
For instance, heat will leak out of your house in the winter because the walls and roof are not adiabatic - they're not perfect thermal insulators. Temperature can be a good marker to find the wall-boundary of your house in this case. Imagine a straight line passing through a region of the house's interior, then through the wall, and then for some distance outside. Now measure the temperature at different points along that line and plot the results. You will see that the temperature variations both inside and outside the house are minor compared to the variation through the actual wall. The wall is the thermal boundary because the key property of temperature changes substantively at the boundary.
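Here's a rough sketch of that measure-along-a-line experiment, using a steady-state one-dimensional thermal-resistance model. The R-values below are illustrative assumptions on my part, not measured data:

```python
# In steady state, the temperature drop across each segment along the line
# is proportional to that segment's thermal resistance.

def temperature_profile(t_inside, t_outside, segments):
    """segments: (name, R) pairs ordered from inside to outside.
    Returns the temperatures at each segment boundary along the line."""
    total_r = sum(r for _, r in segments)
    q = (t_inside - t_outside) / total_r   # heat flux per unit area
    temps, t = [t_inside], t_inside
    for _, r in segments:
        t -= q * r                         # temperature falls across each segment
        temps.append(t)
    return temps

# Illustrative R-values: the still-air films barely resist heat flow;
# the insulated wall dominates.
profile = temperature_profile(20.0, -10.0,
    [("indoor air film", 0.1), ("insulated wall", 3.0), ("outdoor air film", 0.1)])
# Nearly the entire 30-degree drop happens across the wall segment:
# the wall is where the property (temperature) changes markedly.
```

Plot those temperatures against position and you get exactly the curve described above: nearly flat inside, nearly flat outside, and a steep drop through the wall. The steep part is the boundary.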
Of course, this doesn't always work out as one might expect. For instance, and in analogy to heat passing through your walls, the water coming up the line from the water main into your house doesn't change its flow at all when it passes that arbitrary boundary that lawyers have determined marks the edge of your property. But.... There is a substantive, objective flow change as the water from the main is drawn into the line for your house; the water in the mains has to break into two flows, and one of those flows has to change direction to come to your house. That's a serious change in the properties of the water.
(Indeed, this, to me, seems a much better place to put the boundary between your water and the city's water, because it's physically obvious to all parties and not some abstract, arbitrary, mathematically planar - and therefore physically impossible - edge.)
So far, I've focussed on describing things that already exist. Now, let's consider how this may impact designing new stuff.
We design things that don't already exist, so there are obviously no boundaries between them and their environments yet. So in preparation to come up with alternative design concepts, we first try to place boundaries within which our design will exist. Since boundaries mark where properties change value, we start by identifying the properties of interest, and that we can do by thinking about the flows that will come into and go out of the system we're designing.
This post isn't meant to be a tutorial on systems design, so I'm not going to go into a detailed example from scratch. Instead, here's a quick preview of something I will probably write about in more detail at a later date.
Say you're designing a digital camera, and say the camera has a screen on it that can be used as a viewfinder and previewer. Two principal flows of the screen are (a) light emitted by the screen itself during operation, and (b) ambient light reflected off the screen. (In the latter case, the ambient light is both an input and an output.) If the amount of ambient light reflected from the screen overpowers the light emitted by the screen, then the user won't be able to see what's "on the screen." You could just make the screen brighter, but that will eat into battery life. You could put some sort of anti-glare cover on the screen to try to cut down reflection without also cutting transmission of light through the screen. Notice that both these solutions alter the camera being designed.
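To see why brightness alone is a losing battle, here's a back-of-envelope sketch. All the numbers are invented, and treating the screen surface as a roughly diffuse reflector (reflected luminance ≈ illuminance × reflectance / π) is a simplifying assumption of mine:

```python
import math

# The screen is readable only if the light it emits dominates the ambient
# light it reflects toward the user's eye.

def screen_contrast(emitted_nits, ambient_lux, reflectance):
    """Rough contrast ratio of lit screen content against its own glare."""
    reflected = ambient_lux * reflectance / math.pi  # luminance bounced at the eye
    return (emitted_nits + reflected) / reflected

# Indoors (500 lux), a 400-nit screen with 5% reflectance reads easily...
indoors = screen_contrast(400, 500, 0.05)     # contrast ~ 50:1
# ...but in direct sunlight (100,000 lux) the same screen nearly vanishes.
sunlight = screen_contrast(400, 100_000, 0.05)  # contrast barely above 1:1
```

The ambient term scales with the environment, not the camera - which is precisely the hint that the interesting boundary may not be at the screen's surface at all.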
But the problem can be treated differently. I dropped a hint in the previous paragraph. Do you see it?
Answer: The behaviour of the screen is pointless unless there's a user to see the image on the screen. Indeed, the only place where it really matters that ambient reflected light is minimized, is at the user's eye. Notice that this is nowhere near the actual camera screen. From a purely functional point of view, it doesn't matter where we put the boundary between camera and not-camera, so long as by the time the light from the screen enters the user's eye, it is not overwhelmed by ambient light. That is, with regards to the visibility of the camera's screen, the camera's boundaries can extend as far as the user's eye. I hope you appreciate just how weird that is, compared to how we normally think about cameras.
This opens up many possibilities, a couple of which are (a) a screen that emits polarized light, paired with glasses the user wears that filter out all but that polarized light, and (b) transmitting the screen image to Google Glass (or some similar device) for display to the user. There are many others, which I leave as an exercise for the reader.
Whether these design ideas are feasible is not the point; the point is that thinking clearly and precisely about boundaries can open new possibilities for innovative designs.
Finally, let's revisit an example I've already mentioned: thermal insulation of house walls. From a thermal perspective, the function of a wall is to provide thermal insulation. The thermal boundary between inside and outside the house is at the wall, because temperature (the key property) changes values substantively through the wall's thickness. The wall itself is the boundary. Notice that here it is obvious that we don't have a mathematical boundary but rather something more like a boundary layer. As such, we can consider changes to the nature of that layer at any point in its thickness.
When you're designing a house, you can do the thermal design of the wall by first collecting data on what the range of acceptable interior temperatures and expected exterior temperatures will be, and finding the largest typical difference between the two. This defines the temperature gradient that must exist between inside and outside. In turn, this defines how much thermal resistance the wall must provide. There are two characteristics that describe a wall's thermal insulation: the thermal resistance of the insulating materials inside the wall, and the thickness of the insulation in the wall. However, the thicker the wall, the more materials are needed (increasing its environmental impact), and the more expensive it will be. So thin walls are cheaper and greener to build, but thick walls are better insulators. The designer must make a tradeoff. (This of course ignores all the other requirements that walls must meet, and that will introduce even more tradeoffs that designers must make. But let's keep things simple.)
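That sizing step is simple enough to sketch. Assuming plain one-dimensional conduction through a slab (Fourier's law), and using illustrative numbers of my own choosing:

```python
# Fourier's law for a slab: q = k * dT / L, so the insulation thickness
# needed to hold heat loss to a target flux is L = k * dT / q.

def required_thickness(delta_t, max_heat_flux, conductivity):
    """Insulation thickness (m) for a given design temperature difference (K),
    allowed heat flux (W/m^2), and material conductivity (W/(m*K))."""
    return conductivity * delta_t / max_heat_flux

# 30 degC design difference, 10 W/m^2 allowed loss,
# mineral-wool-like conductivity k ~ 0.04 W/(m*K):
thickness = required_thickness(delta_t=30.0, max_heat_flux=10.0, conductivity=0.04)
print(round(thickness, 3))  # 0.12 (metres of insulation)
```

Halve the allowed heat loss and the required thickness doubles - which is exactly the cost-and-materials-versus-insulation tradeoff described above, made explicit.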
To lower the requirement for thermal insulation, one can only alter the interior and exterior temperatures. There are three ways that heat passes through a boundary: convection, conduction, and radiation. Which combination of phenomena apply will vary from place to place. One solution, favoured in Europe, is to put shutters on windows. These shutters are external to the wall (that contains the window). As such, they trap thermal energy outside the wall, thus lowering the need for insulation in the wall. This is because a significant portion of the heat is delivered by radiation - sunlight that strikes the window/wall will very effectively deliver heat to it; the shutters literally block that radiation from reaching the wall, thus altering the requirements for thermal insulation of the wall.
A more high-tech solution is to coat the outside of the wall with a substance that reflects those frequencies most likely to be converted to heat by the material of the wall itself. Such materials already exist, but are expensive. There is also aerogel, which insulates against convection and conduction, but is a poor radiative insulator. One might then consider aerogel in combination with a radiative insulator. But again, cost is a factor against such solutions.
What insulation is chosen is not the point. The point is that thinking about boundaries, how they are characterized, and how they function can open opportunities to think of design situations in new ways and thus improve the odds of finding innovative, superior interventions.