Complexity Science deals with 'systems' made up of many interacting components. Many things may be referred to as a 'system'. A city, an industrial sector, the whole economy, or the road network are examples of systems. The components in a system might be people, organisations such as businesses and government, or the physical environment. A system becomes complex, rather than complicated, when there are many interactions between the different components in the system, and perhaps there are many different types of components. These interactions, and the influences they have, make the system difficult to understand, make prediction difficult, and make the system exhibit certain behaviours such as tipping points. Complexity Science has developed over the last fifty years or so, to help us study and understand complex systems. To help us get a stronger grasp on what Complexity Science is, and what it has achieved, we explore some key concepts below.
You may also find our Jargon Buster useful.
Emergence is a central concept in complexity. It refers to the appearance of larger patterns, regularities, or phenomena that arise from the interaction of the smaller components of a system. Importantly, emergent phenomena are difficult, or impossible, to describe using only the smaller components of a system, or their simple aggregation.
An intuitive example is a traffic jam. A traffic jam can be said to be an emergent phenomenon because it arises through the interaction of many cars and the road they are driving on, but it is not fully described by saying there are simply 'many cars'.
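The emergence of a jam can be reproduced with a toy model. The sketch below is our own illustration, not part of the example above: a cellular-automaton ring road in the spirit of the Nagel-Schreckenberg traffic model, with parameter values chosen purely for demonstration. Each car follows only local rules (speed up, brake to avoid the car ahead, occasionally slow at random), yet stop-and-go jams appear at the level of the whole road.

```python
import random

def step(positions, speeds, road_length, v_max=5, p_slow=0.3, rng=random):
    """One parallel update of a Nagel-Schreckenberg-style ring road.

    positions: sorted list of car positions on a circular road.
    speeds: matching list of current speeds.
    """
    n = len(positions)
    new_speeds = []
    for i in range(n):
        # Empty cells between this car and the next one round the ring.
        gap = (positions[(i + 1) % n] - positions[i] - 1) % road_length
        v = min(speeds[i] + 1, v_max)   # accelerate towards the speed limit
        v = min(v, gap)                 # brake so as not to hit the car ahead
        if v > 0 and rng.random() < p_slow:
            v -= 1                      # random slowdown (driver imperfection)
        new_speeds.append(v)
    new_positions = [(x + v) % road_length for x, v in zip(positions, new_speeds)]
    order = sorted(range(n), key=lambda i: new_positions[i])
    return [new_positions[i] for i in order], [new_speeds[i] for i in order]

rng = random.Random(42)
road, n_cars = 100, 35                  # a fairly dense road (illustrative values)
positions = sorted(rng.sample(range(road), n_cars))
speeds = [0] * n_cars
for _ in range(50):
    positions, speeds = step(positions, speeds, road, rng=rng)
stopped = sum(1 for v in speeds if v == 0)
print(f"{stopped} of {n_cars} cars are stationary")
```

No car is told to queue, yet at this density clusters of stationary cars form and persist: the jam is a property of the system, not of any individual car.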
Another example is a flock of birds. Think of a group of starlings flying together. The flock is an emergent phenomenon because it results from the group of birds flying with each other. However, we do not fully describe what a flock of birds is by simply stating that there are lots of birds. The behaviour of the flock as a whole exhibits properties we cannot easily describe, or even understand, by looking at just one bird.
Are there examples of emergent phenomena in your system? Do we understand how they arise from the interaction of components in the system? Do we understand how interventions might affect these interactions, the components, and the phenomena themselves?
Path dependence refers to the basic idea that 'history matters'. That is, the past position, outcomes, or dynamics of a system will have consequences for how the system behaves in the future. For example, the effect of an intervention made to a system could be significantly different depending on the position the system is currently in, and where it has been in the past.
An intuitive example is consumer product 'lock-in' (e.g., VHS vs Betamax, computer operating systems, the QWERTY keyboard layout), whereby the market success of certain products can be independent of their quality, because of the past decisions of consumers and the past performance of products. For example, new word processing software does not simply have to be high quality, but also needs to be able to 'read' popular existing formats. In a similar manner, a new keyboard design, even one with demonstrable advantages, is less likely to be adopted widely, because so many users are familiar with the QWERTY layout.
Is your system path dependent? What past events or positions are particularly important to the effect of potential interventions?
Adaptation refers to a system's ability to react or adapt to changes in its situation or environment, coming from both internal dynamics and external interventions. Feedback loops (in which an effect acts back on its cause) are likely to be a key feature in enabling a system's adaptation. Adaptation is a key concept in Complexity Science, and in the steering approach, as it is one of the main ways in which traditional management strategies can fail. It is a system's adaptation to an intervention that can cause the often-cited 'unintended consequences'.
One clear example, which causes a lot of debate, is the reaction of taxpayers to changes in taxes; for example, setting higher rates of tax on higher earners. Some commentators suggest raising taxes in this way is actually counter-productive, as high-income taxpayers may move away, or employ means of avoidance, in response to the higher tax. If this were the case, it would be a clear example of adaptation in the system.
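The logic of that debate can be made concrete with a deliberately simple toy model, which we have invented for illustration: each taxpayer has a personal tolerance threshold, and once the rate exceeds it they adapt (move away or avoid the tax) and leave the base. Revenue then stops rising with the rate and eventually falls, so the effect of the intervention depends on how the system adapts to it.

```python
import random

def revenue(rate, thresholds):
    """Toy model: each taxpayer stays in the tax base only while the
    rate is below their personal tolerance; revenue = rate x remaining base."""
    base = sum(1 for t in thresholds if rate < t)
    return rate * base

rng = random.Random(1)
# Assumed spread of tolerances; real behaviour is what the debate is about.
thresholds = [rng.uniform(0.2, 0.9) for _ in range(1000)]
rates = [r / 100 for r in range(0, 101, 5)]
revenues = [revenue(r, thresholds) for r in rates]
best = rates[revenues.index(max(revenues))]
print(f"revenue-maximising rate in this toy model: {best:.2f}")
```

The point is not the particular numbers, which are invented, but the shape: because the system adapts, 'raise the rate' and 'raise the revenue' are not the same intervention.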
Are there actors, or feedbacks, in your system that might mean it adapts to interventions? How could we use these dynamics to our advantage?
Your Questions: Understanding
Self-organisation refers to the process by which some form of order or coordination at the higher system level is achieved through the interaction of lower-level components that do not have this order as an explicit goal driving their behaviour.
The economy is often said to be self-organising, as the interaction of many individuals seeking to produce (i.e., work) and buy goods and services (i.e., leisure) to improve their quality of life, leads to a situation in which the production and supply of goods and services is managed without the need for a central decision-maker. This is hotly debated amongst economists, with some suggesting central control by government is often needed to prevent undesirable outcomes, such as under-provision of basic services (e.g., health, education, street-lighting). This is why we often see the state providing these services, paid for by taxes, rather than leaving their provision to the market.
A more theoretical example of self-organisation, taken from the doctoral work of ERIE member James Allen, is the way in which cooperators on a network organise themselves against defectors. Imagine a chess board, where each square on the board represents a person, and each person can choose to either cooperate or defect. Those that cooperate 'donate' to the group at a cost to themselves, whilst those who defect 'free-ride' on the efforts of others. Whether a person chooses to cooperate or defect depends on how well they are performing relative to their neighbours on the board. As the people repeatedly choose whether to cooperate or defect, what is observed is that, slowly over time, those that are cooperating organise themselves into distinct clusters (i.e., groups) on the board. These clusters of cooperators are thereby able to survive 'exploitation' by the defectors.
In this system there is no guiding hand (i.e., no central control) telling the cooperators to group into clusters. Instead, they self-organise into this pattern through the local rules each individual follows: if a cooperator observes a defector who is performing better, the cooperator will imitate that defector, and vice versa.
If the cooperators are represented by red squares, and the defectors by blue, then this self-organisation can be seen in the two diagrams below. The left image shows an early snapshot of the positions of the cooperators and defectors on the board. Here it can be seen that they are mixed up in a random way, with no sign of any organisation. The right-hand image shows a later stage of this theoretical game. Now the cooperators and defectors have organised themselves into dense clusters.
This example demonstrates that if a system initially contains a mix of those who donate to others at a cost to themselves and those who do not, then these two groups will organise themselves so that each neighbours others that behave in the same way.
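The chessboard game can be sketched in code. The sketch below is a simplified reconstruction, not the original model: we assume a 'donation game' payoff (each cooperator pays a cost c to give each neighbour a benefit b) and an update rule in which each player imitates the best-performing player in their neighbourhood; the payoff values are illustrative.

```python
import random

def payoffs(grid, b=1.0, c=0.2):
    """Donation game on a wrapping grid: a cooperator ('C') pays cost c per
    neighbour and gives each neighbour benefit b; defectors ('D') pay nothing."""
    n = len(grid)
    score = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4 neighbours
                ni, nj = (i + di) % n, (j + dj) % n
                if grid[i][j] == 'C':
                    score[i][j] -= c        # cost of donating to this neighbour
                if grid[ni][nj] == 'C':
                    score[i][j] += b        # benefit received from a cooperating neighbour
    return score

def update(grid, score):
    """Each player imitates the best-scoring player among itself and its neighbours."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            best_s, best_strat = score[i][j], grid[i][j]
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = (i + di) % n, (j + dj) % n
                if score[ni][nj] > best_s:
                    best_s, best_strat = score[ni][nj], grid[ni][nj]
            new[i][j] = best_strat
    return new

rng = random.Random(0)
n = 20
grid = [[rng.choice('CD') for _ in range(n)] for _ in range(n)]
for _ in range(30):
    grid = update(grid, payoffs(grid))
coop = sum(row.count('C') for row in grid)
print(f"{coop} of {n * n} players cooperate after 30 rounds")
```

Printing the grid between rounds shows the same story as the diagrams: from a random start, any surviving cooperators end up grouped into clusters, purely through local imitation.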
Tipping points are one of the main drivers of 'nonlinear' dynamics in complex systems. These are points, or thresholds, at which significant changes in overall system behaviour can be seen. Some tipping points can also lead to irreversible changes: in 'runaway' climate change, for example, feedback loops mean the system is unlikely ever to return to the tipping point.
Tipping points can be observed in many real-world examples, such as fish stocks, financial markets, and social media adoption. The key is to understand that change may not happen gradually, step by step, but may happen suddenly. In the example of social media adoption, a new social media website may have few users for a long time, but then suddenly 'take off': the rate of change differs through time. A tipping point may have many causes. It may be that some external change made the website more popular, such as a celebrity endorsement, or it may simply be that the power of 'word of mouth' shifted once a certain number of people had signed up to the site.
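The 'word of mouth' mechanism can be illustrated with a threshold model of adoption, in the style of Granovetter's classic example (our illustration, not from the text above): each person adopts once enough others have already adopted. A tiny change to one person's threshold can flip the outcome from a full cascade to almost no adoption at all, which is exactly what makes tipping points so hard to predict.

```python
def final_adopters(thresholds):
    """Each person adopts once the number of adopters reaches their
    personal threshold; iterate until no one else joins."""
    adopted = 0
    while True:
        new = sum(1 for t in thresholds if t <= adopted)
        if new == adopted:
            return adopted          # fixed point: no further adoption
        adopted = new

uniform = list(range(100))                  # person i adopts once i others have
tweaked = [0, 2, 2] + list(range(3, 100))   # one threshold nudged from 1 up to 2
print(final_adopters(uniform), final_adopters(tweaked))  # 100 vs 1
```

With the uniform thresholds, each adoption triggers the next and all 100 people adopt; nudging a single person's threshold breaks the chain at the very first link, and the cascade never starts.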
Networks are a powerful tool for thinking about how the components of a system are connected. You may be familiar with network diagrams - called 'graphs' - with dots joined by lines. These diagrams are made up of 'nodes' and 'edges'. The nodes represent the entities, or components, of a system; they may be people, firms, or other organisations. The edges are the connections between them. These connections could represent anything from a friendship, to a communication, to a trade deal. The power of the approach is in mapping who and what has connections, and thus influence, with whom and with what.
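In code, a network of nodes and edges can be as simple as a dictionary mapping each node to its neighbours. The sketch below (with hypothetical names) shows how two basic questions about influence can then be answered: how connected is a node, and who can it reach?

```python
# A tiny social network as an adjacency list: nodes are people
# (hypothetical names), edges are mutual connections such as friendships.
network = {
    'Alice': {'Bob', 'Carol'},
    'Bob': {'Alice'},
    'Carol': {'Alice', 'Dana'},
    'Dana': {'Carol'},
}

def degree(g, node):
    """Number of direct connections (edges) a node has."""
    return len(g[node])

def reachable(g, start):
    """Everyone whose behaviour `start` could influence, directly or
    indirectly, found by a simple graph search along the edges."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nb in g[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    return seen - {start}

print(degree(network, 'Alice'), sorted(reachable(network, 'Bob')))
```

Even though Bob has only one direct connection, the search shows his influence can propagate through Alice to the whole network, which is the kind of insight a list of components alone cannot give.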
Recall the theoretical example we discussed in the section on Self-organisation. This example can help demonstrate the power of thinking about the world using networks. Recall in this 'game', each person or firm either donates to help the group, or does not and instead 'free-rides' on the efforts of others. Those that donate are described as cooperators, those that free-ride are described as defectors.
When the cooperators are placed on the nodes of a network, with the edges describing who they interact with, it is found that the final amount of cooperation is much higher than when the same dynamics are simulated in populations that are not placed on networks, i.e., where everyone can mix with everyone else. The reason is that the cooperators are able to form clusters on the network. These clusters can then avoid exploitation by the surrounding defectors, as cooperators playing against cooperators perform better than defectors playing against defectors. The chess board example is akin to a network, because players can only cooperate or defect with their neighbours, and not with all other players.
More interesting structures than the chess board can be studied. For example, if each node in the model is connected to a different number of neighbours (rather than just the four in the case of the chess board), then this leads to higher levels of cooperation. The reason is that players with large numbers of neighbours act as hubs where cooperation is able to survive for long periods, thereby helping the surrounding cooperators.
It is also the case that in many real-world systems it is not only the relationships on one network that matter; each person may be present on a number of networks, e.g., one network may be the people you meet face-to-face, and another the people you communicate with by email. When the spread of cooperation is studied on such multi-layered networks, it is found that the additional 'layers' can lead to higher levels of cooperation, because cooperators on one layer are able to 'free-ride' on another, enabling them to perform better and survive exploitation on the layer on which they are cooperating.