Development in Progress
Jul 16, 2024
The concept of progress is at the heart of humanity’s story. From the present, it is possible to imagine a future of abundance in which our great challenges have been addressed by the unique human ability to modify the universe toward our own ends. Many believe that we will attain this future through a combination of expanding human knowledge and advanced technologies.
This article explains how our current idea of progress is immature: it is developmentally incomplete. Progress, as we define it now, ignores or downplays the scale of its side effects. Our typical approach to technological innovation today harms much that is not only beautiful and inspiring, but also fundamentally necessary for the health and well-being of all life on Earth. Developing a more mature approach to our idea of progress holds the key to a viable, long-term future for humanity.
The way we understand what progress is and how we achieve it has profound implications for our future. Ultimately, it shapes our most significant actions in the world—it affects how we make changes and solve problems, how we think about economics, and how we design technologies. Whatever is not included in our definition and measurement of progress is often harmed in its pursuit. Its side effects (or externalities) occur in a complex cascade, often distributing harms throughout both time and space. The second- and third-order effects of our actions in the world can be difficult to attribute to their original cause, and are frequently more significant than we realize.
As technology gets more powerful, its effects on reality become increasingly consequential. On our current trajectory, these effects will end civilization’s story long before we merge with machines, or before we have built a self-sustaining colony elsewhere in the solar system. We are not as close to a multi-planetary future as we are to the kind of damage to the biosphere that either destroys or significantly degrades civilization. If we continue to measure and optimize progress against a narrow set of metrics—metrics focused primarily on economic and military growth, which do not account for everything on which our existence depends—our progress will remain immature and humanity will continue its blind push toward a civilizational cliff edge.
In this article, we use the phrase “the progress narrative” to refer to the way we think and talk about progress in society. The progress narrative is the pervasive idea within our culture that technological innovation, markets, and our institutions of scientific research and education enable and promote a general improvement in human life. This article questions the accuracy, incentives, and risks of this narrative, examining the reasons that the idea has held such a central role in shaping the development of our global civilization. In doing so, it attempts to outline the progress narrative earnestly and clearly, noting that it is often driven by an honest desire to see positive change in the world. The intention is not to point the finger of blame, or to deconstruct for the sake of argument. It is to inform a way forward and outline a path ahead toward potential solutions.
Drawing on a range of sources, the article takes an interdisciplinary approach to exploring the reality of humanity’s current trajectory. Several prevalent progress myths are reexamined, including apparent improvements in life expectancy, education, poverty, and violence. The roots of these inaccuracies are exposed by widening the aperture of our view. Even if we are living longer, many measures of the quality of the lives we are living are in decline. Our educational outcomes are in many ways deteriorating, even if access to education is improving. At a global level, despite the common narrative, it is not at all clear that poverty is actually declining. And the tools of violence have increased vastly in scale of impact since the end of World War II; we now routinely create the kind of weaponry previously reserved for dystopian science fiction.
To convey a sense of the extent of unintended consequences that can result from a single innovation, the primary case study explores the invention of artificial fertilizers. This development enabled a significant increase in the amount of food (and therefore people) that could be produced. The externalities of this innovation have had far-reaching consequences for human health and the wider biosphere. An assessment of these side effects helps us to open our eyes a little more widely, so that we may glimpse a fraction more of the complex reality that is generally omitted from the simplified narrative of progress.
Our idea of progress needs to mature. If humanity is to survive and thrive into the distant future, we must transform and elevate the very idea of progress into something truly good and worthy of our shared pursuit and aspiration. As we understand more about the universe and find new ways of changing it with our technologies, we must account for the endless ripple of cause-and-effect beyond our immediate goals. We must factor both the upsides and the downsides that will continue to impact reality long after the technologists of today are gone.
For a change to equal progress, it must systematically identify and internalize its externalities as far as reasonably possible.
For our idea of progress to be mature, it must take account of its side effects and plan to resolve them in advance—it must internalize its externalities. In the second part of this article, four specific methods for internalizing externalities are outlined, alongside some clear examples of what such a process might entail.
The possibility of a mature kind of progress is both grounded and optimistic. It’s a proposal that the human capacity for both wisdom and ingenuity is far greater than we currently imagine. We are capable of holding the unknowable complexity of reality at the very center of how we take action in the world, and mitigating the consequences of the gaps in our knowledge in advance. This enables a real kind of progress that reduces suffering, builds a better understanding of the universe and our place within it—and increases our chances of both surviving and thriving into the distant future.
In 1921, the problem of “engine knocking” was solved by Thomas Midgley Jr., a chemist working at General Motors. Knocking is a characteristically noisy problem known to limit engine performance and damage internal components, and Midgley proposed the addition of tetraethyl lead (TEL) to gasoline as an anti-knock agent. Although TEL solved the problem, thereby increasing performance and fuel efficiency, its release into the atmosphere also caused incalculable damage.[1] Lead is a potent neurotoxin that is harmful to all life, but particularly to children, causing cognitive problems and delays in development.[2] In 1979 alone, American cars released more than 200 million pounds of aerosolized lead into the atmosphere.[3]
In 2015, it was assessed that due to this single innovation, environmental lead exposure cost humanity nearly one billion IQ points and significantly increased the base rate of violent behavior.[4] More recent studies suggest that the scale of lead poisoning far exceeds previous estimates; in 2019 alone, an estimated 5.5 million people died of heart disease caused by lead poisoning, making it the leading cause of cardiovascular death worldwide, ahead of smoking and poor nutrition.[5] The impact of lead on global IQ has also been significantly underestimated, with updated figures suggesting a loss of 785 million IQ points, in children under five alone, in the same single year.[6]
Despite the immense scale of the impacts, it took until 2021—a full century of heavy use of internal combustion engines—for the last country to ban leaded gasoline.[7] Although officially prohibited for most vehicles, leaded gasoline is still in use in the US today, in light aircraft, farming machinery, racing cars, and boats, and is still illegally used on the road in many developing nations.[8] Over long geological timescales, it may be that a planet like Earth can only evolve a biosphere capable of producing our kind of intelligent life because toxic elements like lead are confined to rocks deep within its crust.[9] Still, we have invested incredible amounts of energy and ingenuity in systems of lead extraction and refinement, and built entire industries dependent on its continued production. In the modern era, it has become common practice to mine toxins such as lead from deep within the planet and transfer them, via consumer products, into our bloodstreams.[10]
What sort of world might we be living in now, if not for lead? What do hundreds of millions of deaths, many billions of IQ points lost, and a less peaceful disposition mean for collective coordination and sensemaking?[11] What about the countless other less infamous pollutants to which we are all now exposed? The Global Burden of Disease study estimated that pollution-related disease was responsible for 9 million premature deaths in a single year.[12] This conservative estimate represents 16 percent of total global mortality, and still fails to capture the kinds of damage that are not immediately fatal but nonetheless significant and debilitating.
There are more than 279 million known chemical substances.[13] Among this unimaginably vast number, there are countless other chemicals with similar or worse effects on our capacities and capabilities, acting both alone and in complex combinatorial interplay.[14] It is humbling to realize that we are unlikely ever to fully appreciate the extent of the effects caused by our overconfidence in our tools and technologies.
Beyond leaded gasoline, there is a long list of other inventions that have caused incalculable suffering and killed many millions of innocent people.[15] Vioxx was a widely used painkiller, which also increased the risk of heart disease, with estimates of unnecessary deaths in the tens of thousands.[16] Asbestos is a useful flame-retardant construction material, which to this day causes many types of cancer and approximately two hundred fifty-five thousand excess deaths globally each year.[17] DDT is a pesticide that was advertised as a miracle chemical and sprayed directly onto people and food; despite restrictions in its use, DDT still causes damage to the environment and to many aspects of human health, including as a cause of cancer, fertility problems, and impairments in infant development.[18] The most famous case may be thalidomide, which was prescribed in the 1960s during pregnancy to relieve morning sickness; it caused thousands of babies to die either in the womb or in infancy, and left many others with severe deformities of limbs, ears, the heart, and other internal organs.[19]
There are many more examples of pharmaceuticals, agricultural chemicals, building materials, and consumer products that were once used widely, before their harms could be demonstrated to a degree that resulted in their prohibition.[20] The vast majority of harmful chemicals and technologies of all kinds have not been successfully banned, often despite overwhelming scientific data about their harms; air pollution, for instance, is one of the world’s leading causes of death, and yet very few of the contributing chemicals or producers have been banned.[21] The examples provided throughout Part I of this article do not come from a single industry. They come from all sectors of industrial activity.
In all of the cases noted above, for at least a brief time, we thought that each product was a positive and desirable innovation.[22] Among both experts and members of the broader public, each one was seen as a beneficial tool for the management of a problem we wanted to solve—as a form of progress. As it turned out, our conception of progress in these cases was naive. We lacked a broader awareness and understanding of other highly consequential effects of their use. This article suggests that the same phenomenon—naivety about the totality of a technology’s effects—is true more often than not for the technologies that we create. As technological advancement accelerates, we must also factor in the joint problems of increased scale of impact and speed of deployment. The consequences of the side effects of new technologies grow as their power and reach increase.
Thalidomide, Vioxx, and asbestos are widely known because they have clear negative externalities (costly or unpleasant side effects) that are both severe and fast to manifest. Many other externalities exist on the boundary of this category, with effects that are severe but just a little slower to filter into human consciousness. It would be reasonable to suggest, for example, that in the near future we will think of the volatile organic compounds (VOCs) associated with domestic carpets and construction materials much as we think of DDT now.[23] Newer classes of pesticides undoubtedly fall into a similar category. Their effects on human health and the environment have just not had the same head start; in time, it is likely that we will look back on neonicotinoids, pyrethroids, sulfoximines, and phenylpyrazoles in a similar way.[24] The world is also beginning to wake up to the impacts of per- and polyfluoroalkyl substances (PFAS) on human health and the environment. PFAS are used in waterproof fabrics, nonstick pans and some firefighting foams, and are often referred to as “forever chemicals” because they resist environmental degradation and simply accumulate over time. PFAS are linked to many forms of biological damage, including disruption to the cardiovascular, endocrine, and reproductive systems, as well as impaired liver function and an increased risk of cancer.[25] A study has suggested a cost of about seven thousand times the annual global GDP to remove and destroy just one small subclass of PFAS chemicals from the environment.[26] PFAS are now found everywhere, including in rainfall in the most pristine parts of the planet.[27] No matter how much money you have, and no matter where you build your doomsday bunker, you can no longer avoid the diseases of the Anthropocene.
We rely on institutions in society to manage the risk of harm from DDT, asbestos, and other inventions on our behalf. Using the available evidence from specific academic studies and the broader research literature, chemical exposure limits are set and communicated to industry and the wider public. While the intention to control toxic substances in the environment is obviously worthy of support, it’s also important to note that in our evolutionary environment, the amount of these substances was zero. There were no synthetic chemicals in the biosphere that produced intelligent life.
The way in which we set exposure limits for chemicals in the environment provides a false sense of security. A single threshold limit can never capture the nuance of biological reality; a certain amount of a given chemical may have a very different effect on a child, for example, than it would on a fully grown adult. Exposure limits must be set in part because an industry exists for the production of such chemicals. The market demands the chemical for a purpose, and so the default position is, in effect, that each chemical is safe up to a limit until proven otherwise. A market pressure exists for the safety limit to be above zero.
If we are exposed to hundreds of known carcinogens, each one at or below the legal limit, what is the cumulative carcinogenic effect in the body? Unfortunately, we have no test that can tell us anything about the cumulative effects of all of these chemicals in our air, food, and water. There is good evidence, however, to suggest that our rising rates of cancers, endocrine disorders, and complex chronic disease are linked to increasing exposure to this range of novel compounds.[28] We know very little when it comes to the full set of interactions and combined effects of synthetic molecules within the complexity of a complete biological organism. There is no single immediate measurable impact that can serve as a focus for regulation. Instead, effects are delayed, cumulative, and look a lot like many other systemic disorders. In our globalized world, everyone suffers from these impacts, and so there is often little opportunity to notice problems between populations with more or less acute exposure. We are all exposed together. Legislation fails to be a meaningful lever under such circumstances. These impacts are driving us toward a civilizational death by a thousand cuts.[29]
Some of the externalities of new technologies we plan for and understand, and some we fail to anticipate in advance. Many hoped that social media would connect people and build digital communities; in the West at least, we took no action to account for how it would also drive political polarization, damage mental health, and present a useful vector for disinformation and psychological warfare.[30] Sometimes we simply fail to conduct sufficient pre-release testing and risk assessment, and sometimes it is genuinely difficult to predict outcomes when intervening in complex systems.[31] On other occasions, we know of potentially damaging effects in advance. When issues come to light further down the line, those responsible can rely on the difficulty of predicting outcomes as a form of plausible deniability.
In many cases, it has been proven that manufacturers knew of the negative side effects of their technologies long before the issues came to light.[32] Despite knowing, they either did nothing to mitigate risks, or in some cases actively hid or destroyed evidence of their awareness to avoid punishment.[33] When attribution of harm is clear, spotting failures in our attempts at progress over short timescales is relatively easy, as with thalidomide or asbestos. In most cases though, the harm is hidden, abstracted away from its origins, lost in the infinite complexity of the biosphere. These features of the tech development process make it easy to write these examples off as outliers. But they aren’t outliers—in this particular moment, their consequences are simply more visible than average.
In the past, the tools we made had effects that were bounded by the scale of their power and the breadth of their distribution, and so the consequences of inadequate design and planning impacted the world at a slower rate. Now, we live in a deeply interconnected global civilization in which events in one place have the potential to rapidly and meaningfully impact life elsewhere. A virus emerging in Wuhan can shut down the world. Newly published software is available to anyone on Earth with an Internet connection. In this world, the unforeseen consequences of new technologies can become global long before we have fully understood them.
The vast majority of the most consequential and difficult problems we face—climate change, nuclear war, species extinction—are the unintended outcomes of humans attempting to solve other problems. In our efforts to solve the problem of World War II, for example, we invented nuclear weapons, which played a role in ending the war, and yet at the same time delivered humanity into a far more precarious and insecure world. For many of our greatest problems, at some point in the past we designed technical solutions to address them, and in the time since the solutions have had other effects that we either did not predict or did not mitigate sufficiently in advance. The problems the world faces today are not caused by our inability to achieve our goals—they are a direct result of our success. They are a result of how destructive we are in the pursuit of our goals.
Technologies change the world and our experience of living within it. But not all change is necessarily progress. Some changes may benefit one group while harming another, or benefit one goal at the expense of other goals. Such instances of change can only be considered true progress if we blind ourselves to these other negative effects. By defining progress too narrowly, we can call the positive outcomes in the here and now “progress,” while conveniently ignoring the harms occurring elsewhere. What we call progress is, in many cases, simply the enactment of harms elsewhere in time and space.
In developmental psychology, an inability to see the world through any lens other than that of our own narrow goals or interests is a trait associated with immaturity.[34] When we are young, we are immature: we may lash out at our parents, and act according to our immediate emotions and desires, unable to empathize with those we may be hurting or understand that we may be damaging things that we both value and need to survive. We rely on the love and generosity of our caregivers without fully realizing it. As we grow up, we progress through stages of development, and we build (among other things) the capacity to hold abstract ideas, understand concepts of greater complexity, and take on the perspectives of an increasing variety of people and considerations. We develop the ability to see the world through others’ eyes, think about our actions over longer time horizons, and consider a greater number of the incidental impacts of our choices. These capacities are some of the hallmarks of maturity.
Applying this framework of maturity, it can be said that our current definition of progress is immature. It fails to view the world from a broader set of perspectives. It harms much that we both value and need. A mature perspective on progress must factor how the changes we make will impact the wider world beyond our immediate intentions, over time. It must earnestly endeavor to consider all the types of cause and effect that will flow from our innovations. Progress worth believing in—progress that is really about increasing betterment, increasing the goodness in the world—must still be able to be considered “good” once it has taken account of all perspectives and externalities. Of course, this doesn’t mean that there are never difficult trade-offs; it simply means that we need to seriously balance the interests of all stakeholders and all types of value in our search for the most holistically positive solution.
Progress is a statement about the world being in a different state. When we take action in the world—when we make a change—it is often the case that this difference in state is worse in some meaningful ways that may not be connected to our original intentions. Many of the changes that we currently call progress are not actually progress. Such changes may be representative of advancement, in that we can see technical improvements in many fields: tools that are improving in terms of their efficiency, increasing their impact in the world, or expanding their capabilities, for example. These first-order effects are easier to notice than other side effects that emerge further away in space and time. Externalized harms tend to be much harder to observe directly, which allows us to mistake such instances of technological advancement for real progress.
We can call this fake progress, immature progress, or naive progress; all of these are relevant ways to frame the same core idea. The point is that how we define progress determines the future we build, and if we continue to define it in any way that does not take account of its full set of effects in the world, we will build a future that systematically harms life and undermines the things we both value and need. This is because in our current approach to tech development, the harms are the norm rather than the exception—and they are lasting, cumulative, and at scale with economic growth. In its current shape, our world system depends on exponential growth. Without a change in our approach, in the presence of increasingly powerful technologies, the scale of the impact of their externalities will be similarly exponential. It should be obvious that this trajectory cannot be maintained on a finite planet.
A brief review of the true cutting edge of advanced technologies helps us to understand the kind of progress we are really pursuing. Military capability development has always been a key driver of technological advancement, and an incredible amount of money, time, and creativity continues to be poured into our capacity to destroy and kill at a truly unbelievable scale. It would be difficult, however, to argue that the latest developments in advanced weaponry are leading us toward the apex of human flourishing. Nation-states are currently racing to deploy space-based Directed Energy Weapons (DEWs), including ultra-short pulse laser and high-power microwave systems, as part of an orbital “kill web” with the capability to fire continuously at targets anywhere on Earth.[35] AI-piloted autonomous drone swarms are combat-ready.[36] Hypersonic missiles with nuclear payloads, capable of traveling at five times the speed of sound, have been successfully tested around the world.[37] These are technically astounding capabilities, created to intimidate and kill at a level unprecedented in history. Our governments and private corporations are applying vast quantities of human ingenuity, capital, and construction effort to the creation of planetary-scale, ubiquitous surveillance and kill machines. While all this effort is advancing the state of the art in technological terms, can we honestly call this progress? What is better about a world in which you and your family are at risk of death by autonomous kill-drones? In the sense of what really matters in a human lifetime, is there meaningful value in any of this prodigious technological advancement? It might be powerful, or even awesome, but is it good or beautiful?[38]
When we encounter arguments that criticize the achievements of civilization, we may feel an internal response that stems from a sense of shared identity with the results of the advancements that we see around us in the world. This response can be noble in origin, in that it may reflect a desire to feel adequately grateful for the lives that built the civilization from which we now benefit. A common response to any kind of critique of progress is that perspectives that are not wholly positive are overly critical of the choices and actions of our forebears, who couldn’t have known any better. Nothing presented here implies that the technology, culture, or advancement we have inherited must necessarily be discarded; this article does not suggest that there is nothing good in the civilization we have built, nor does it promote ingratitude for the benefits around us today. This is a critique though, and it attempts in a fair and balanced way to account for the harm caused by the kind of progress we have pursued. It recognizes, for instance, that many have died to bring the world into its current state, and that many other beings are still being harmed to keep things going as they are.
The perspective shared here acknowledges that while many of the harms were unconsciously caused, many others were knowingly enacted. This article simply suggests that it is necessary for anyone taking action or making change in the world to acknowledge and factor both positive and negative impacts of their actions, and that humanity can and must do better now than it ever has done before in this task. Only by seeking to address the negatives can we meaningfully improve outcomes. All individuals, over the full course of a lifetime, create a vast web of cause and effect through their actions in the world. Some figures in history responsible for terrible atrocities also performed acts of great charity, or built things that made a positive difference to others around them.[39] At minimum, we can acknowledge the complexity inherent to a human lifetime—or a technological innovation—and know that improvement is possible. As noted above, the intention is not to deconstruct for the sake of argument; it is to inform a way forward and outline a path toward solutions. Progress is in need of development. Through intentional choice, we can help it to grow up.
The way we think about progress is referred to here as the progress narrative. The progress narrative, as we understand it now, is a connected set of cultural memes, all of which contribute to the idea that the accumulation of knowledge and innovation in technology are the driving forces behind the betterment of humanity’s state of existence. Some of the key voices involved in the development of our modern progress narrative include Hans Rosling, Steven Pinker, and Carl Sagan.[41] These scientists, writers, and academics have helped to establish a worldview that is steeped in optimism. To its supporters, the progress narrative is an uplifting vision of humanity’s accomplishments and its path into the future. In this worldview, progress is something for us to work toward together, in shared gratitude for the efforts of the countless people who came before us. It has been said that “without vision, man perishes.” The progress narrative presents an ennobling story that builds a connection between past, present, and future, and invites its followers to be a part of a journey toward something better. In the postmodern, Western world, our idea of progress has come to provide a secular variation on the code of ethics and teleology that we used to get from our gods.
Across its many forms, the progress narrative states that technology solves our problems and makes our lives easier and better, leading to a general increase in good things and a general decrease in bad things. The progress narrative tells us that technology gives us the tools to manage the harder aspects of nature; it protects us from dangers, keeps us warm when it’s cold and cool when it’s hot. It relieves our pain, cures our diseases, and serves humanity’s hierarchy of needs. Technology also makes life subjectively better than it used to be in the past. It entertains us, educates us, and assists in our creative endeavors. The implication behind the progress narrative is that the more material wealth we create, the more freedom we have to live our lives according to our true desires. In this worldview, technology is the answer to most questions, the solution to our greatest problems, and the path toward a world of abundance for all.[42]
For those willing to acknowledge that technology may sometimes deliver unwanted effects, progress is often identified at a deeper level as the general accumulation of human knowledge; as long as we are accruing knowledge about the world, things will tend to get better on average over time, despite occasional mistakes or costs.[43] Any human born today does not need to rediscover calculus; they can simply learn it from others. Any philosopher at work today has access to the entire canon of philosophy and does not need to generate those insights again. We are also born into a world built by others, from which we can now benefit. We can travel abroad in jets, build businesses in offices within cities full of potential employees, and manufacture goods in industrial parks designed precisely for such activity. This perspective rests on the idea that the developments of modernity, such as literacy, democracy, free markets and science, are prosocial technologies for collective intelligence. This worldview suggests that together, these fundamental components of progress raise humanity up out of the past and guide us into the future.
These arguments feel good. There is a natural comfort in this kind of worldview, in that it’s easier to bear the burdens of the present if we are confident that they will be lighter in the future, or that our sacrifices of today are contributing to a better world of tomorrow. As is so often the case, our motivations for this kind of reasoning reveal more than the arguments themselves. There is now a bewildering array of information about the state of the world and the trajectory of various aspects of our civilization. Without an earnest attempt to factor all relevant data and the context in which it is embedded, our conclusions will be misleading.
If we don’t consider the other aspects of reality impacted by the technologies that led to the progress in question, we are simply failing to model the world accurately. We are looking through a narrow aperture onto one limited instance, without zooming out to understand wider effects. With a narrow view of reality, we blind ourselves to the critical questions: progress for whom? And at what cost? Throughout history, it has been clear that the upsides of progress have rarely been distributed equally.[44] Perhaps the single clearest example of inequality of progress exists between the human and nonhuman worlds. The progress narrative is wholly anthropocentric, and nonhuman life on Earth has been almost exclusively harmed by progress.[45]
Arguments in support of the progress narrative will often note that all actions in the world have costs, and that these costs must be incurred if we are to achieve the promise of a future of abundance for all. But it is obviously not the case that all trade-offs are equal. In some cases, the gains are lower in value than what is lost (i.e. they are negative sum), which leads to a reduction in total value in the system. In others, the gains are equal in value to losses (i.e. they are zero sum). Less commonly, both sides gain relative to their previous positions, increasing the overall value in the system (positive sum). An important insight is that many perceived zero-sum trade-offs are in fact negative-sum, because they are the first move in an ongoing arms race: initial gains lead to a motive for retaliation, which creates a requirement for both parties to devote resources to the arms race, leading to the same kind of overall reduction in system value. Truly positive sum trade-offs will lead to better outcomes not only for those directly involved, but also for adjacent or dependent living beings and systems. It is this kind of trade-off that should always be pursued first.
The trade-offs involved in taking heroin and the trade-offs involved in regular exercise, for example, are profoundly different. Drug use and exercise both involve types of pleasure and pain spread over different timescales and in different doses. The baseline from which one experiences the highs of heroin will erode over time, as other parts of life bear the cost of the damage to health and the impact of the behaviors that accompany addiction. The baseline from which one may gain the highs of regular exercise will improve over time, enhancing other aspects of life, despite the difficulty endured at the outset. Those benefiting from our current form of progress offer a defense by noting that there are trade-offs everywhere, and use this argument as an excuse to avoid acknowledging or internalizing negative effects.
It is important to acknowledge that technology is in some cases obviously beneficial and positive. Few would be happy to give up the comfort of central heating in the middle of a cold winter. Fewer still would elect to have major surgery without anesthetic. No one wants to return to a world ravaged by smallpox. Any successful human future is also simply likely to involve advanced technologies, because technology adoption confers power that can be used to win competitive games (such as in markets or warfare). This means that groups attempting to pursue a low-tech future are not likely to persist into any kind of future majority position. Likewise, anyone intentionally rejecting a life dependent on industrial technology will probably fail to have a meaningful impact on the harms associated with (for example) overfishing, AI development, and military manufacturing. It is also the case that any long-term viable future must internalize technological risks by mitigating them in advance of deployment, and success in this endeavor will be hard to achieve without the power and insight afforded by advanced technologies.[46] On a more hopeful note, appropriate technologies designed with due care and consideration have the potential to be beneficial in the broadest possible terms. A future built with technologies that properly account for their side effects could lead to a kind of future that many would want to experience.
To question the progress narrative is not to yearn for a return to the past, or to wring our hands in fear at the new and unfamiliar. As our technology gets more powerful, its effects on base reality become increasingly consequential. Our current approach means that we are on a path that leads inevitably to a scaling of the kind of mistakes made in the invention of DDT and asbestos. It is this trend that must change.
The idea that humanity has witnessed a steady march of progress from the dawn of civilization to the present day is dispelled by even the briefest study of history. None of the great civilizations of the past exist today; all succumbed to the dynamics of collapse, whether enforced from the outside by conflict or driven internally by institutional decay or environmental overuse.[47] There is broad agreement that many of these societies were highly advanced. They were capable of maintaining complex societal structures and generating new cultural and intellectual insights, often expressed as either novel technologies or ideas. So many of these insights have been lost.
We are motivated to avoid drawing comparisons between the collapse of the past and our state in the present. We tell ourselves that this time it’s different, even though it is hard to imagine the citizens of ancient Rome feeling any other way. Cases of civilizational collapse are everywhere in the historical record, and somehow so few seemed to see it coming.[48] When societies collapse, they rarely leave behind a perfect inventory of the technologies they created for the benefit of their successors. There is no way to know the depth of the shadow of lost knowledge, but we have some hints. In 1901, an artifact known as the Antikythera Mechanism was discovered in a shipwreck in the Aegean Sea.[49] Manufactured more than two thousand years ago and consisting of more than thirty bronze meshing cogs and gears, the Antikythera Mechanism represents a kind of technical capability that was previously accepted as impossible in its time. The device was capable of predicting solar and lunar cycles (including eclipses) and tracking the irregular motion of the moon. It took one and a half thousand years for similar technology to be reinvented after its apparent loss, alongside the culture that built it, somewhere in the Mediterranean. The same can be said of the use of concrete by the ancient Romans, which was also lost until its rediscovery in the eighteenth century.[50] The story we tell ourselves about progress tends to leave out such instances of undoing and collapse. What else disappeared in the destruction of the library of Alexandria, or in the relatively sudden decline of Rome? Our ideas of deep history are evolving constantly, and every new discovery shines a minuscule beam of light back into an abyss of darkness about which we know very little.
The striking rate of technological change in our world today is different in kind from anything the past has yet revealed. The accelerated innovation of the post-industrial era has been fueled by a rapid increase in global population, extraction, and pollution, and these trends cannot continue to increase forever.[51] The progress with which we are familiar comes from innovation not only in technology, but also in finance and globalization, and it is driven by cheap labor in particular regions of the world (in which most of the pollution also tends to accumulate). This, too, cannot continue forever. Our current immature version of progress borrows from the future by artificially growing the supply of money, within the context of a linear materials economy, on a finite planet. As we pass planetary boundaries (i.e. point-of-no-return thresholds) of extraction and pollution, the biosphere is signaling to us that there are complex consequences for continuing to turn nature into economic growth without due care.
Supporters of the progress narrative suggest that these problems can be addressed by further technological innovation. A set of canonical examples of progress is often raised to demonstrate how human ingenuity is capable of overcoming such challenges. The same set of examples is also commonly used to establish that the world of the present is a better place to live when compared with the world of the past, including the global increase in life expectancy, reduction in extreme poverty, increase in literacy and access to basic education, and decline in violent conflict. A broader perspective on these issues reveals that the data supporting each claim has been cherry-picked from a far more ambiguous dataset.
The cherry-picking and decontextualization of facts is a fundamental feature of the progress narrative. We are told that a great number of studies point to the same thing, with no reference to other studies pointing to alternative interpretations. In the absence of this broader context, it appears there is overwhelming consensus; yet once we know more, the resulting picture is far more nuanced. In many cases, a few narrow and useful metrics are cherry-picked from a broader dataset and presented as representative of the only kind of progress that anyone might desire.[52] Optimization against these narrow metrics, which could never represent all of the things that really matter and on which the quality of human life depends, is an ideal strategy to win statistical warfare and demonstrate “undeniable” progress. Below, we consider each of these canonical claims and attempt to widen the aperture of our view onto both the facts and their consequences.
In any example that could be presented as an argument for or against any kind of progress, it is close to impossible to enumerate the complete set of relevant details. The counter-examples provided here are not the end of the story; there is always much more that can be said. The intention is to point toward the underlying principle that any given instance of progress is subject to a range of relevant perspectives, and that frequently, harms are talked about far less than obvious and narrow benefits.[53]
It is not difficult to find graphs depicting the steady improvement in life expectancy over the last two centuries.[54] Presented without wider context, the implication is that people are simply living longer lives, and that this is a good thing. While it is true that life expectancy has increased due to improvements in general medicine, a significant portion of the increase is due to the decline in infant mortality, which has caused a jump in the average statistical age humans now reach.[55] It is a common misconception, exacerbated by graphs showing steep increases in life expectancy, that pre-modern humans often failed to survive into their forties and beyond. Evidence from skeletal and dental remains tells us that once early humans made it past childhood, their chances of reaching our current standards of old age vastly improved.[56] At the same time, while life expectancy has increased over the last two hundred years of industrial growth, we have simultaneously toxified the environment, eradicated countless other species, and vastly increased the unnatural disease burden globally.[57]
Improvements in life expectancy have not even been consistent. Even with advanced healthcare and far fewer deaths in early life, American lifespans recently endured a pronounced period of decline. Since 2014, the clear upward trend in life expectancy has changed, with year-on-year reductions attributed to chronic disease, overdose, gun-related homicide, suicide, and road traffic accidents.[58] More relevant, however, is the quality of the additional life that we are living, and there is little evidence to suggest that we are passing our additional years in a state of good health and happiness. The average person over sixty in the US now takes fifteen prescription medications a year.[59] Many of these drugs have a range of damaging side effects, which must be borne alongside increasing rates of neurodegenerative disorders (such as Alzheimer’s), as well as depression and advanced physical ailments.[60] In historical terms, this is not a typical end-state for the human experience. It is not normal for an increasing proportion of older people to spend their artificially prolonged years, often depressed and alone, largely ignored by their families, waiting to die in front of care-home televisions.[61]
Quality of life among younger members of society has also demonstrably declined.[62] Obesity, diabetes, cancers, and autoimmune disorders are now increasingly common afflictions across generations.[63] Measures of general happiness and trust (in others, in governments, and in our societal institutions) are in multi-decade decline, while wealth inequality continues to grow.[64] Suicide rates for children and teens have increased dramatically over the last twenty years.[65] Within the most developed parts of the world—the countries benefiting most according to the progress narrative—the right to euthanasia is often a leading human rights issue.[66] While the pursuit of a legal right to die under some circumstances is a viable ethical goal, it is also the case that the developed world’s demand for euthanasia is driven in part by the burdens of anthropogenic (human-caused) disease, chronic unhappiness, and profound existential emptiness into which the progress narrative has delivered us.[67] If civilization were actually delivering comparative betterment, people’s desire for life would in all likelihood be increasing, not decreasing. A lonely, painful, care-home death is something most want to avoid.[68] It may be that this motivation drives at least a part of the right-to-die debate, and yet many fail to see that a far greater number of humans are dying under such circumstances because we have distanced ourselves from the idea of death being a natural part of life, and attempted to reframe it instead as just another problem to be solved by technology.[69]
The mental health crisis in young people is perhaps an even more insidious example of the hollowness of life expectancy as a measure of progress. For most of human history, people had limited exposure to extremes of human beauty (in all its forms). Modern society, through technology, has hypernormalized such extreme forms of beauty and attractiveness.[70] Pronounced body dysmorphia and the phenomena of self-harm and cutting, relatively uncommon throughout history, now appear to be far more common among teenagers.[71] With millions of images being artificially enhanced each day (exacerbated now by the default use of AI beauty filters), our current media environment is destroying our children’s sense of bodily proportion and forcing them to grow up feeling ugly and worthless.[72] These outcomes are a direct result of the technologies that we call progress. Is a longer life with chronic mental health problems and a higher burden of disease a good indication of progress?
Whether or not extreme poverty around the world has declined significantly depends on how you choose to look at the data. A number of commonly used charts show a steep decline, drawn from World Bank data that sets the bar extraordinarily low in determining what constitutes “extreme” poverty.[73] Even setting a threshold of $6.85 per day reveals that there has been almost no reduction in poverty over the last thirty years.[74] In some parts of the world, by even the most stringent measures, extreme poverty is increasing, and almost half of humanity lives on less than five and a half dollars a day.[75] No one would reasonably argue that this amount of money represents the kind of value that leads to a life of flourishing health and happiness. For an alternative perspective, we can consider comparative numbers over longer timescales: the total number of people living in extreme poverty today is roughly the same as it was in 1800.[76] During the COVID-19 pandemic, the global rate of extreme poverty (as well as overall wealth inequality) rose significantly due to supply chain disruption and the closure and takeover of small businesses.[77] As our global civilization becomes increasingly interconnected, it develops a complex web of dependencies that makes it more fragile.[78]
The decline in poverty is at the heart of much of the progress narrative, and yet it is based on the assumption that for all of human history prior to the industrial capitalism of the nineteenth century, people were generally starving and impoverished.[79] It is inevitable that measures of poverty based on the dollar will show a decline that matches the increase in GDP over a given time period. This approach fails to account for the ways in which people met their needs that did not require dollars, such as subsistence farming, access to the commons, and other kinds of hunting and foraging that sustained humanity for hundreds of thousands of years.[80]
Consumption can only ever be a partial measure of poverty, which is of course multidimensional in reality. Deprivation can be experienced in health, education, living standards, and access to communities, social groups, and nature. An honest appraisal of progress made against global metrics within these domains is not encouraging.[81] Even in the presence of material wealth, there can be an inner kind of impoverishment. The degree of loneliness, angst, and mistrust between people is significantly higher in industrialized countries, and continuing to increase.[82] Experiences of awe, gratitude, and wonder, and a sense of meaning and purpose, are increasingly rare.[83] Non-addictive sources of positive feeling are less common, and this phenomenon is clearest in those with the most material wealth.[84] We have also never been more aware of the disparities in wealth than we are now, as the lifestyles of the ultra-wealthy are presented as an endless source of entertainment and escapism across all types of media. While profound deprivation clearly makes people less happy, it is not true that ever-increasing income correlates with ever-increasing happiness.[85] This is because in our striving for more, we trade the real treasures of connection, meaning, and intimacy for the relatively worthless tokens of status. We have created an artificial world that generates systemic unhappiness by disconnecting us from each other and from nature, and sells us addictive forms of pleasure as a solution to our dissatisfaction.
From this perspective, it is not clear at all that the Western quality of life toward which most of the world strives is actually improving the most truly valuable aspects of existence. Lives within developed parts of the world, representative of the pinnacle of the progress narrative, are in some important ways less happy than those in developing parts of the world.[86] Still, billions of people in India, Africa, and China want and expect the same material quality of life that is broadcast around the world from Hollywood, and to achieve it will demand incredible energy and material costs.[87] As the Earth is already hitting critical tipping points in relation to pollution, the oceans, and the climate, it appears unlikely that the planet—regardless of our political systems—will tolerate such demands.[88]
It is worth acknowledging too that our debate around what should constitute a state of extreme poverty is occurring within the context of a world that, from the perspective of most humans ever to have existed, is filled with pure magic. Electric cars, smartphones, virtual reality, and space-based internet are the long-imagined hallmarks of a high-tech future—and yet here we are, with billions continuing to live in a state of meaningful poverty.
There is no doubt that literacy and access to basic education, as defined by our modern societies, have improved globally since the Industrial Revolution.[89] Once again, however, this statement shines a narrow beam onto a particular part of a much more complex story. Prior to the advent of public education, the wealthiest members of society had access to a quality of education that has now been largely lost. Aristocratic tutoring for upper classes provided learning of unparalleled breadth and depth, while others in functioning pre-industrial societies had access to trade guilds that produced master craftsmen—also now largely lost.[90] At the same time, we are spending more on systems of education than ever before, and yet both literacy and educational outcomes are, in fact, in decline around the world.[91] While some countries (such as China and Singapore) have demonstrated educational improvements in certain subjects, most regions demonstrate varieties of the same phenomenon of decline: long-term studies show stagnation or reduction in education quality across the developing world, while the wealthiest countries, such as the US, Germany, and France, have experienced a major decline in reading, math, and science.[92]
Education is also more than just formal schooling. Societies that fail to pass on crucial information about how and why they work cannot be sustained indefinitely. The pace of our technological innovation has exceeded the pedagogical capacity of existing educational institutions. As our institutions fall further behind in their understanding of everything they are supposed to govern, the intergenerational transmission of knowledge that is critical for the maintenance of our increasingly complex civilization begins to break.[93] The progress narrative points to the simple metric highlighting access to education, and avoids the more problematic data regarding outcomes.
In the past, education was as much about context as it was about content. Modern educational systems focus almost entirely on content: the information that must be inserted into a child’s mind to make them a functioning member of society. This approach misses the fundamental point of education, which for most of human history has been just as much about learning how to learn, how to bond, and how to get along with others as it has been about information regarding the wider world. When education became primarily about content, one of the many effects was a reduction in the value of elders, who previously spent time with children as sources of wisdom about life and living. Older generations provided a means of critical cognitive and social development, helping children become the kind of adults who could work together toward common goals, optimizing for group dynamics over individuals.[94] In many parts of the world, this has been largely lost. At the same time, the allocation of money has replaced the allocation of time spent with our children. Much of this money is spent on salaries for people who do not love or care for our children in the same way that we do.[95] The close bonds between generations that previously supported development and learning have largely been stripped from contemporary pedagogy.
Finally, the story our civilization tells itself about education also necessarily denigrates other perfectly valid approaches to learning about the world that had to make way for the kinds of learning we need to sustain globalized economic growth. Over tens of thousands of years, groups of humans built lives in relative balance with the natural world, passing knowledge down between generations that prioritized the transfer of skills and wisdom that kept their societies healthy and whole.[96] This approach may not have resulted in smartphones and air travel, but it also didn’t result in nuclear weapons and industrial pollution. For those promoting more innovation as the answer to the challenges of our present time, it must also be acknowledged that it is innovation that caused the problems we face today.
The final example typically raised in support of the progress narrative is that of “a general decline in violent conflict.” As it turns out, both how we measure conflict and how we select the time period for analysis matter a great deal for our understanding of how violence has changed in the modern era. The great wars of the twentieth century put industrial technologies to work in service of mechanized death. Deaths in war spiked twice in the first half of the last century (due to World Wars I and II), which in the grand scheme of the human story was only a moment ago in time.[97] In 2022, deaths in armed conflicts around the world doubled, largely due to the most significant land war in Europe since 1945.[98] The total number of armed conflicts worldwide has also been on a steady upward trend over the last two decades.[99] While it may be simple to demonstrate that direct conflict between Great Powers has declined in the short period since the end of World War II, this peace has been delivered at a high cost. Exponential economic growth and increasingly interdependent trade ties have been used to disincentivize direct warfare between nations.[100] The cost of this temporary solution has been borne by nature and by human health.
At the same time, the ways in which wars are fought have changed. To a certain extent, modern warfare has simply subverted the need for bullets in its opening phases: psychological, cyber, and information warfare are now continuous, intense, and escalating between great powers.[101] Over the most meaningful timescales, does this eventually lead to less total violence? We don’t yet have enough data to say conclusively. We may celebrate the subversion of direct conflict on one level, while at the same time acknowledging that modern irregular warfare between nation-states does not necessarily preclude the use of tanks and missiles in the longer run.[102] Current conflicts in Europe and across the Middle East may serve to highlight this concern. While many nations are busy conducting cyber campaigns, they are still committing a major portion of global GDP to the development of increasingly destructive weaponry. Through innovation in nuclear capabilities and other advanced military technologies, the total destructive energy available for future kinetic war is trillions-fold greater than it has ever been before.[103] Technology-driven warfare now involves an ever-increasing set of capabilities and domains, with the potential for a scale of violence unlike anything we have ever seen before.
Opening our eyes a little wider to see these claims as part of a more nuanced whole reveals a general principle for modernity: all of our incredible inventions have consequences that we would rather they didn’t, no matter how useful they are to us.[104] No one wants climate change, but it is an inevitable side effect of our rates of industrial growth and globalization over the last few centuries. Plastics are one of the “four pillars of modern civilization,” utterly indispensable to society due to their use in packaging, clothing, construction, medicine, and consumer products.[105] And yet they also form toxic nanoparticles that now permeate every domain of the biosphere, poisoning plants and animals, and circulating in our bloodstreams, leading to inflammation, cancers, and cell death, as well as disruption of hormonal cycles, fertility, and prenatal development.[106] Antibiotics are truly a wonder of the modern world, saving millions of lives from death through bacterial infection. At the same time, their use has led to antibiotic-resistant bacteria, deadly chronic infections, profound disruption to the human microbiome, and negative impacts on development when prescribed to babies and children.[107]
These brief examples are not outliers. This is a pattern common to all technology, and for proponents of the progress narrative willing to acknowledge this reality, it is often justified by the idea of improvement over the long arc of history: yes, new technologies sometimes come with hidden costs or unforeseen consequences, but despite these setbacks things still get better over time. The ultimate trajectory is upwards. One of the examples often raised in this context is how humanity solved its problem of hunger.
Our fear of famine, and the actions it drives us to, are a core feature of collective human memory and a strong motivator for ingenuity in the face of privation. The progress narrative states that the invention of modern agriculture—specifically the Haber-Bosch process—freed us from this fear and laid the foundations for the technological acceleration we are experiencing today.[108]
The period of transformation in agricultural practices that occurred over the middle part of the last century is known as the Green Revolution, and at its heart is Haber-Bosch. The Haber-Bosch process was developed in 1913, when Carl Bosch demonstrated an industrial-scale application of Fritz Haber’s successful fixation of atmospheric nitrogen, which occurred just four years earlier in 1909.[109] The process allowed for the production of ammonia and the development of synthetic fertilizers, beginning a shift away from traditional organic farming methods and toward improved crop yields in depleted soils. Plants need nitrogen to grow, and although it is abundant in the air, the synthesis of accessible nitrogen in the soil is an extremely slow process.[110] Pre-industrial agriculture made use of naturally occurring fertilizers, such as manure or guano, to enhance food production through the addition of excess nitrogen to the land.[111] In the absence of fertilizer, repeated cultivation depletes nitrogen in the soil, crops fail to grow, and eventually, people go hungry.
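The chemistry at the heart of this shift can be stated compactly: under high pressure and temperature, and over an iron catalyst, atmospheric nitrogen is combined with hydrogen to yield ammonia, the feedstock for synthetic fertilizers. A minimal statement of the overall reaction (with the commonly cited approximate enthalpy value) is:

```latex
% Overall Haber-Bosch reaction: nitrogen and hydrogen gas combine to form
% ammonia. The reaction is exothermic and reversible; industrial conditions
% (high pressure, elevated temperature, iron catalyst) push the equilibrium
% toward ammonia at a practical rate.
\[
  \mathrm{N_2(g)} + 3\,\mathrm{H_2(g)} \;\rightleftharpoons\; 2\,\mathrm{NH_3(g)},
  \qquad \Delta H^{\circ} \approx -92\ \mathrm{kJ\,mol^{-1}}
\]
```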
An increase in the reliable production of crops largely freed humanity from the threat of famine. It also improved food affordability and land use efficiency, and at the same time led to a reduction in food resource conflict.[112] One of the most significant impacts of the expansion of industrial agriculture was a boom in world population. Without the Haber-Bosch process, almost two-fifths of the world’s current population would not be in existence today.[113] Within this portion of humanity are billions of individuals whose hopes and dreams are as valid as anyone else’s, but whose existence is predicated almost entirely on the use of a technology for growing more crops than the combination of nature and human capacity would otherwise permit. It has been estimated that nearly half of the nitrogen found in human tissue originated from the Haber-Bosch process.[114]
Surplus food has had a profound effect on civilization. It led to more people and therefore more economic activity. More growth has driven innovation and accelerated industrial activity, which has come with both positive and negative consequences (an increase in living standards on the one hand, and destruction of the natural world on the other). The Green Revolution led to new plant breeding techniques, pesticides, infectious disease control, irrigation technology, erosion control, and mechanization, all of which have had a complex array of downstream effects.
The full range of consequences that flow from the invention of the Haber-Bosch process is difficult to quantify and evaluate, but making an attempt to do so begins to clarify the totality of the impacts on individuals, communities, and the planet as a whole. In attempting to be complete, we approach a better understanding of what is really happening in the world, how our lives are affected, how things have changed, and how the past really relates to the present and the future. In striving to understand all the relevant effects, we get closer to a true understanding of how our actions affect the world, which means we can mitigate risks more effectively. This is a positive and optimistic aim. Minimizing negative externalities of technologies makes a safer, healthier, and ultimately better world for everyone alive now and for generations to come, who will inherit whatever we choose to leave them.
Many of the first-, second-, and third-order effects of the Haber-Bosch process have taken decades of investigation to begin to understand. The list below is incomplete and is intended only to provide a brief overview of the complex effects that a single high-impact innovation can have on civilization. The causal connection between Haber-Bosch and the points made below is varied; again, the aim is to shine a light onto the complexity that can flow from a single invention. Some changes manifest close in time and space to their ultimate cause, while others emerge further down a cascade of cause and effect. Many of the side effects listed here are overlapping, with an unavoidable element of redundancy. They are organized into three broad categories: effects on human health and well-being; effects on the biosphere; and effects on the structures of civilization.
Turning to face reality can be painful. Reading through the list of externalities of Haber-Bosch can give a sense of nihilistic overwhelm. How can there be so many costs associated with one of the most frequently cited examples of technological progress? Could anyone have known that solving famine would simply kill us, albeit more slowly, in a set of new and unusual ways?
The high-level list of consequences of industrial agriculture should begin to give a sense of the reality that often sits behind a good narrative. Yes, Haber-Bosch largely freed us from famine. This is a good thing. But what end do we serve by turning away from the unexpected and consequential problems it has also caused? Few would argue that it would benefit our children to pretend that these costs do not exist. This is the price of willful ignorance, and why an accurate assessment of reality should lead us to feel called to look more closely at the consequences of our actions, to help correct past mistakes from which we may now be privileged to learn.
The good news is that we already know how to do better. The field of regenerative agriculture has accrued a great deal of encouraging data on the benefits of holistic farming and grazing techniques. The knowledge we have gained in the last two hundred years of scientific investigation of the world has deepened our understanding of the benefits of ancient and traditional agricultural practices, and points us toward a solution to at least one major problem set (in terms of the impact on nutrition, ecosystems, and human health).[152] Through the removal of pesticides and other synthetic chemicals in our food chain—the primary drivers of negative externalities—regenerative agricultural practices have the potential to restore soil health, improve water management, and rebuild biodiversity. Further development in the field suggests improvements both in the nutritional content of our food, as well as a reduction in plastic, metal, and chemical contamination of our diets.[153] As an example of a mature approach to progress, the positive externalities of regenerative agriculture are explored in more detail in Part II.
Pesticides both directly and indirectly impact the nutritional content of our food. They alter a plant’s ability to uptake nutrients from the soil, affect the microbial ecosystem around the roots that plays a critical role in nutrient availability, and impact the synthesis of vitamins and the storage of minerals through changes to plant physiology.[154] They also impact soil structure, acidity, and general agricultural ecosystem biodiversity, all of which disrupt processes that contribute to nutrient cycling and soil health.[155]
Deficiencies in vitamins and minerals play a role in complex traits such as behavior and cognition. Early-life iron deficiency leads to poor cognitive development and behavioral problems, and in adults causes fatigue and reduced cognitive function. Iodine deficiency can affect intelligence and growth.[156] Magnesium is important for neurological health, and low levels in the body appear to contribute to depression, anxiety, and issues with attention.[157] Zinc deficiencies are implicated in a range of similar processes, as well as in mood disorders, immune function, and fertility.[158] Vitamin B12 deficiency is known to cause problems with memory, cognition, and brain aging.[159] The strength of these effects is often dependent on the extent of the deficiency and the stage of development at which it occurs; pregnant women and babies, for example, are particularly vulnerable.[160] Both breast milk and infant formula, tested worldwide, are contaminated not only with pesticides and herbicides, but also with toxic metals, industrial chemicals, packaging materials, pharmaceuticals, and a range of other concerning compounds.[161]
Many farmers reasonably assert that pesticides are an essential tool in modern farming. Without pesticides, crops are liable to devastation by insects, weeds, and pathogens; even setting these hazards aside, the land-use efficiencies pesticides enable have profound implications for yields and food security.[162] Many farms would cease to be commercially viable without them. This is an example of a technology creating a profound dependency that cannot easily be replaced or removed. The harms of pesticides are therefore simply endured. The incentives of the market promote minimal safety assessment and rapid exploitation of every profitable area of development, which in time closes the door on other, potentially more holistically beneficial market approaches. At the same time, vested interests promote narratives that minimize the risks and exaggerate the benefits.[163] And so we end up in a place in which the vast majority of the food we eat is contaminated with pesticide residues, and the list of associated harms grows with every new study that is released.[164] Is it progress to build a world in which we avoid famine by producing food covered in poisonous residues and lacking in the elements of nature that probably contributed to the development of our unique ingenuity in the first place?
Our understanding of the impact of pesticides and herbicides on the complex and delicate systems that grow and sustain life is woefully inadequate. The only thing of which we may be certain is that our awareness of the true costs is extremely limited. It is reasonable to wonder whether the traits and capacities of people around the world now might be rather different if we hadn’t built ourselves a food supply dependent on chemicals that hamper our cognition, behavior, and mood. Perhaps some of the great challenges we face now would have already been addressed by populations with an adequate supply of micronutrients and correspondingly better functional health. The cautionary tale of Haber-Bosch serves as an example of how the costs of new technologies are typically externalized to the natural world, of which humanity is an inevitable part. In many cases of tech innovation, the only internalized cost is that of production.
Part I of this piece has highlighted the dangerous flaws at the core of the progress narrative, demonstrating that the way we think about progress ignores the harm caused to many aspects of reality on which human existence ultimately depends. Part II will outline some approaches to internalizing these costs and begin the process of maturing the concept of progress from the failure mode in which it is currently trapped.
Part II of this article is about how to deliver real civilizational improvement—an approach to making changes in the world that would be sufficient not only for survival, but also for both humanity and the planet to thrive in perpetuity. It describes how the concept of progress developed from the earliest phases of civilization, before exploring the fundamental limitations of our current definition and how we might encourage its development toward more broadly positive outcomes for all.
Negative externalities are not an occasional bug of progress; they are a fundamental feature of our current approach to tech development. A world that acknowledges the risks and seeks to mitigate them in advance is a far healthier and safer place for our children, and we can do a much better job of forecasting the consequences than we do now. With a fraction of the effort expended on current tech innovation, we can improve our approaches to thinking ahead and limiting the kind of outcomes that lead to destruction, discomfort, and death. But first, we must open our eyes as widely as possible and look frankly at the dynamics driving technological innovation today. The race for market dominance does not incentivize the kind of respect for risk that is necessary if we are to protect and serve future generations.
To take care with powerful new tools is to be pro-humanity, not anti-progress. In order to mitigate negative externalities, we need improved approaches to how we conceive of and solve problems, and systematic caution with new technological power. The process for internalizing costs while maintaining the viability of the essential components of our global civilization represents an extraordinary and yet necessary challenge. A couple of examples of how this might work in practice are outlined below.
Social media represents a powerful example of how we might design technologies for comprehensively better outcomes. In most cases, social media has been built on an advertising revenue model. Platforms harvest users’ attention, experiment with approaches to changing their behavior on behalf of advertisers, and in the process fundamentally alter their minds and choices in an effort to keep them engaged.[165] From the outset, social media companies selected a path that allowed them to privatize the gains of this model and socialize the losses. The negative externalities are inflicted on the public, who must bear a growing array of mental health problems, increasing rates of addiction, a collapse in attention spans, and a profound loss of privacy, as well as the undermining of real-life social interaction and development.[166] Platforms also allow for the manipulation of opinions by both state and non-state actors, as well as deepening political polarization, epistemic breakdown via increasing misinformation, and escalation in information warfare.[167]
Most people don’t want to spend their time endlessly scrolling on Instagram or TikTok, and yet even when they set themselves the specific goal of reducing their use, many find it difficult to stop. The algorithm tends to win, because the majority of social media design pits the will of the individual against the might of multi-billion-dollar machines that use AI-enhanced split-testing to refine techniques to generate ever greater engagement with content. Prior to mass adoption of these technologies, some warned that they would generate addiction and impact society in harmful ways.[168] In order to win the race to network dominance, platforms were incentivized to promote an overly positive narrative of the potential benefits of their technologies and press on with their plans. Tech markets tend to deliver monopolistic, winner-takes-all outcomes, because of the insurmountable advantages gained by first-movers once they have established early access to customers and the data they provide. With more data, tight feedback loops between analysis and algorithmic improvement can be built, improving the chances of securing downstream benefits such as greater access to finance and further investment in infrastructure. The net result is a growing differential advantage in attracting ever more customers, making it harder for competitors to survive.[169] In the case of social media, when the harms started to become apparent, companies already benefiting from these monopolistic dynamics could point to the difficulty of predicting outcomes in advance and make cosmetic operational adjustments to appease critics.[170]
Social media companies have a fiduciary responsibility to their shareholders, but what if they had a fiduciary responsibility to the person whose data they are gathering and behavior they are altering instead? By changing a few core design features, social media companies could enhance a user’s ability to make sense of the world, rather than harming it.
Modern media of all kinds keep us engaged by appealing to the brain’s limbic system, which is primarily responsible for emotional processing. The content that we see on our social media feeds appeals to subconscious reward circuits that are both positive and negative: content that is funny, attractive, or confirms our current beliefs on the one hand, or shocks, outrages, or upsets us on the other. By keeping us in an emotionally primed state, in which we are detached from conscious choice-making processes, we are more likely to engage with adverts and purchase products. In essence, social media presents us with content that is curated specifically to engage us as individuals, against the parts of our rational minds that may have set reflective intentions for the day ahead. The algorithms that determine what we see are designed to “hijack” our limbic systems, often to the detriment of parts of the brain tasked with higher-order functions such as cognitive judgment, the evaluation of multiple perspectives, and critical analysis.[171]
Social media algorithms are currently designed to keep people liking, sharing, and commenting on posts, and ultimately to convert users into “ad clicks,” an objective that happens to select for stimuli that downgrade our higher forms of cognition and upregulate our most automatic and instinctive responses. But how else might we design the algorithms, if we were aiming to upregulate the most worthy and holistically beneficial content? One possibility is to build (or retrofit) social media algorithms to upregulate content with positive sentiment across ideological divides. By promoting content that inspires similar responses from generally opposing groups, we could begin to generate goodwill and a sense of commonality among individuals previously thought of as having significant ideological differences, with positive feedback loops developing alongside growing engagement. In this way, by upregulating content that previously opposing groups both view as positive, social media could become a force for synergy rather than division.
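As a rough illustration of the kind of ranking change this implies, the sketch below (a hypothetical toy example, not any platform’s actual system; the names, weights, and data structures are invented for illustration) scores each post by its worst reception across ideological clusters and penalizes the spread between clusters, so that only content received positively on all sides rises to the top.

```python
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    # Average reaction sentiment from each ideological cluster, scaled to
    # the range -1.0 (strongly negative) .. +1.0 (strongly positive).
    cluster_sentiment: dict[str, float]


def bridging_score(post: Post) -> float:
    """Score a post by cross-cluster goodwill rather than raw engagement.

    The score is high only when every cluster responds positively: we take
    the minimum sentiment across clusters (rewarding common ground) and
    subtract a penalty for the spread between clusters (penalizing content
    that delights one side while outraging the other).
    """
    sentiments = list(post.cluster_sentiment.values())
    if not sentiments:
        return 0.0
    common_ground = min(sentiments)                  # worst reception across clusters
    divisiveness = max(sentiments) - common_ground   # spread between clusters
    return common_ground - 0.5 * divisiveness


def rank_feed(posts: list[Post]) -> list[Post]:
    """Order candidate posts so cross-cluster positive content rises."""
    return sorted(posts, key=bridging_score, reverse=True)


# A post both clusters receive well outranks one that splits them.
feed = rank_feed([
    Post("outrage-bait", {"cluster_a": 0.9, "cluster_b": -0.8}),
    Post("shared-interest", {"cluster_a": 0.6, "cluster_b": 0.5}),
])
print([p.post_id for p in feed])  # ['shared-interest', 'outrage-bait']
```

The point of the sketch is the objective function, not the specifics: replacing “maximize engagement” with “maximize cross-cluster goodwill” changes what the feed rewards, and therefore what gets produced.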
The concept described above is indicative of the type of design approach that may begin to generate positive rather than negative externalities—but there are many other ways we could alter the defining characteristic of social technologies that currently trend toward division. Other ideas include, for example: the recommendation of potential friends or contacts from outside your network cluster to increase exposure to a greater variety of worldviews; the promotion of content that is dialectical to your own current views; a slowing of the loading rate of the “infinite scroll” that increases the longer you have been on-site; and the use of software tools to detect and downregulate content that is modified with AI filters.[172] Such approaches could begin to deliver the kind of social media that lowers the negative impact on our mental health and instead inspires a sense of unity between differing perspectives. It could also begin to expose people to different worldviews, help correct for biases, improve sensemaking and understanding of the world, reduce polarization, promote good faith dialogue, and minimize the impact of propaganda and information warfare. These are positive externalities that we could make the intentional choice to enable now. Instead, we prioritize near-term profitability, at the expense of a healthy population and a stable and functional society.[173]
The example of social media highlights how not all changes in society are necessarily progress, even when claimed to be so during design or deployment. While some changes are worthy and valuable advancements in both our understanding and experience of living in the world, many other changes prioritize narrow, first-order or near-term outcomes at the expense of genuine, long-term, holistic betterment. But why is our idea of progress so strongly linked to narrow, technological advancement, and has it always been this way? The answer lies in how we came to inherit the modern concept of progress in the first place. The extent to which we are wrong in our current approach was determined, at least in part, by the earliest steps we took to modify the world around us and lay the foundations of civilization.
Reasonable arguments can be made for the concept of progress originating at a number of well-known prehistoric junctures, including the emergence of toolmaking, control of fire, or early forms of social organization. For the sake of simplicity, the approach here will focus on one of the junctures most commonly associated with civilizational studies: the development of early forms of agriculture. Significant nutritional surplus, enabled for the first time by the earliest agrarian practices, represented a defining moment in humanity’s relationship with nature and time. Nutritional surplus was a critical step that enabled us to think systematically about the linear advancement of a group or ideology.
Prior to the development of settled agricultural practices, humans rarely generated any kind of significant nutritional surplus. One of the benefits of this more precarious state of nature was that there was no stored food for rivals to covet and steal. As groups began to produce more than they could consume in the near-term, the incentive to take the additional resources by force naturally arose, and so surplus became one of the primary motivations for larger-scale warfare.[174] When pre-agrarian societies conducted inter-group warfare, conflict could not involve protracted military campaigns because of limited food supplies. Agriculture increased both the capacity and motivation for warfare. The practice of military expansionism depends on surplus, because surplus enables both larger populations and the emergence of military classes within a society.[175] Conquering and crusading require advanced logistics and the longer-term storage and distribution of food. Surplus is therefore a necessary step in the development of an expansionist, materially advanced civilization.
Another critical component of the early idea of progress was the invention of the written word. Accounting was a key driver of the emergence of writing, since exchange required a means of record-keeping.[176] When papyrus eventually became the foundation for distributed communication, it carried and sustained the ideas that supported coordination and, importantly, justified outcomes in battle. As expansionism won more surplus for the victor, the technology of writing allowed the great tales of struggle and success to be cast into in-group folklore. In civilizational terms, writing allows for collective memory, giving societies the capacity to store decontextualized ideas about the past and the journey to the present that provide the structure for the progress narrative. Some of the earliest writing cultures, including, for example, Egyptian, Sumerian, and Hebrew societies, were among the first to write down stories that led toward a culmination, a future event that focused collective endeavor.[177] Throughout history, the idea of progress has been closely connected to the advancement of physical and social technologies, both of which developed to a significant degree within the competitive dynamics of warfare.[178]
For as long as there have been wars, the winners have been motivated to tell the stories that justified their victories.[179] Here in the present, we don’t hear the perspectives of the people and cultures that were annihilated in the process. Embedded within this repeating dynamic of history are countless lost alternative narratives of the world, systems of language and value, and forms of culture and art that were purposefully destroyed and written out of collective memory (or in many cases, with the dead recast as antagonists). An unfathomable amount of human creativity and beauty has been irrevocably and unnecessarily lost through this process of conquest and dominion. Arms races have existed for as long as groups of humans have fought, and the creative pursuit of new and advanced weaponry has been a driver of technological development as a result. Military capability development has therefore been (and remains to this day) another key factor in how technology and the idea of progress became deeply intertwined.
The eventual advent of the Industrial Revolution, emerging out of the Scientific Revolution, marked a major step in the use of energy to automate manufacturing, transport, agriculture, and production, fueling the start of an exponential phase of technological development that persists today. In the two hundred years since the first industrial processes began to accelerate change in society, our idea of progress has become more deeply coupled to the advanced technologies that dominate our everyday experience of life. Many people can trace this profound change within their own recollections (take, for instance, our ever-present smartphones, communicating across networks of satellites, all connected to the Internet). All of these technologies emerged from the process of scientific investigation of the world that has since become the core of the progress narrative.[180]
When we first began to use science to understand the world, it gave us the ability to test some of our ideas and beliefs to determine whether or not they were true. This gave us a process, rather than an authority, with the ability to tell us something meaningful about the world. The unifying and universal nature of the process of science was central to the later developments of democracy and the institutions of modernity that constitute our world system today.
The modern progress narrative tells us that through this process of observation and experimentation, we get ever closer to a complete understanding of reality, and that at the same time, we build tools and generate new ideas to improve lives and reduce suffering. The implication is that through this process, we are achieving a future of increasing abundance for all. As AI and other forms of advanced technology have emerged in recent years, it is in some cases stated explicitly that at the culmination of this journey is humankind as deity, with a god-like control over nature.[181]
In the early phases of the Scientific Revolution, it was broadly accepted that the application of science to understanding the world was necessarily restricted to certain domains. Science was not considered as a way to know everything, and some aspects of human experience—including religion and the mind, for instance—were considered to be phenomena that could not be fully elucidated by scientific methods alone.[182] Instead, science was seen as a means of interpreting parts of the world that were both measurable and repeatable, which provided two clear forms of value: its application in the form of technology, and the ability to predict outcomes on the basis of inputs. The scientific study of the physical world gave us tools that conferred competitive advantages in markets and warfare, and as a result, its influence grew in prominence relative to worldviews that did not provide the same advantages. Both technology and the ability to predict confer power, and power wins competitive games, regardless of whether or not the victory is in any way better for those impacted by the outcome. This increase in ability to win competitive games has led to an increasing dominance of the scientific worldview as a framework for understanding the whole of reality. The advancement of knowledge through empirical experimentation has become the backbone of the human approach to interpreting the world, and its increasing centrality contributed significantly to the formation of contemporary society.[183]
Science must often ask the question: how should we study complex phenomena? In many cases, the answer is that we should study parts of complex systems first. Implicit to the scientific worldview is reductionism: an understanding of the universe that seeks to explain complex phenomena by breaking them down into their fundamental components. Reductionism is extremely useful in some important ways; for one, as we seek to understand the overwhelming complexity of the universe, it provides us with somewhere to begin. It’s impossible to study everything at once, and so it helps us to answer the question: which subset of the entire universe should we begin with? From this starting point, reductionism allows us to break down aspects of complex systems and intervene to deliver desirable outcomes (and we have become relatively adept at this process in the fields of medicine and engineering, for example). Some of the most brilliant scientific and philosophical minds of the last few hundred years have critiqued the limits of reductionism, and a summary of these arguments is beyond the scope of this paper.[184] Some, however, are critical to an understanding of the problem with how we think about progress today.
Science studies the world from a third-person perspective: it uses observation and experimentation to probe the workings of the universe beyond our first-person experience, and through repeated measurement and testing determines the accuracy of our hypotheses. Science does not study the world from a first-person perspective: it does not explain precisely what it’s like to be you, or what it feels like to hold your child, as these features of reality cannot be measured and can only be experienced or inferred. A useful analogy is that of a meditator whose brain activity is being monitored by electroencephalogram (EEG). A scientist can measure changes in the frequency or wavelength of the meditator’s brain waves, and demonstrate that meditation affects EEG readings in a repeatable and predictable manner. But the EEG trace is a third-person representation of the meditator’s experience, and it cannot tell you about the first-person experience of what it feels like to be in deep meditation. While the measurements might reveal significant changes in brain activity, they cannot tell us anything about the feeling of internal stillness, or the growing self-familiarity, or the deepening awe at the mysteries of the universe.
This is one key problem of the scientific worldview. It describes an important but incomplete view of the world. It is missing some critical information about reality, which includes almost all of the most meaningful first-person features of the human experience, such as consciousness itself, as well as most other subjective, experiential, and emotional phenomena.[185] This missing information also includes many of the experiences that are commonly referenced as being the most valuable by those looking back on their lives from their deathbed.[186] Science is not looking at what it is like to be me (the first-person), and it is also not looking at what it is like to share a sense of relational meaning with others (the second-person). It should not be controversial to suggest that these are important things that are missing from the philosophy of science.
Another distinct problem of reductionism is that even within the third-person perspective, certain physical phenomena cannot be described by a perfect understanding of their constituent parts. In biology, for instance, we may study complex organisms from a wide range of perspectives. We can look at DNA, proteins, organelles, cells, tissues, or organs, and investigation into these “levels” of the organism will yield valuable insights into the structure and function of each one (as well as how they function together). But the only possible path of inquiry available to a reductionist methodology necessarily reduces the higher-level features of these systems to the sum of their parts. No individual component of the cell, studied in isolation, tells us that cellular respiration (the extraction of energy from food) occurs when those components are arranged as a cell. The same pattern may be observed at all “levels” within complex organisms: an understanding of DNA alone tells us very little about the total behavior of the neuroendocrine system and how it affects genetic transcription; an understanding of the nucleus won’t tell us everything about its role in cellular signaling and response to changes in the cell’s environment; an understanding of all the types of cells in the body won’t tell us about the complex motor patterns that may be observed in the movement of the whole being.
Our typical approach to studying parts of complex organisms might help us to understand elements of their function and design specific interventions, but it cannot completely explain the moving, reproducing, sentient thing we see before us.[187] It also fails to account for top-down causation: the processes by which higher-level parts of a system influence and determine the behavior of lower-level parts, such as how cells in one context (e.g. a white blood cell in the liver) behave in a certain way, while in others (e.g. in the brain), the same cells may give rise to completely different outcomes or properties.[188] In essence, our reduction of the organism to an entity composed of cells, or molecules, or organ systems—or any given part—fails to explain measurable, third-person phenomena emergent at higher (or lower) levels.[189] Examples beyond cellular respiration include the phenomenon of replication in systems composed of non-replicating subcomponents, ecosystem dynamics arising from interactions between a range of individual organisms and species, and structures such as limbs and organs developing from a set of embryonic cells during gestation.
Although such higher-level phenomena are typically referred to as being “emergent” properties of whole systems, this is a misnomer. The concept of emergence in such cases inherently assumes that the causation—the fundamental origin of the phenomena in question—is bottom-up, in that it is a product of the assembly of the lower parts of the system and only “emerges” once the parts are together in the form of the whole. This error in our perspective comes from a worldview that extracts parts of systems and attempts to define them as real, separate, individual things, when nature does not in fact produce these things as real, independent objects. The direction of causation is both wrong and (again) too narrow: it is not only bottom-up, it is top-down, bottom-up, and middle-out. For example, the human heart cannot be considered as a real, separable object in its own right. Nature does not make human hearts as independent objects; nature makes human bodies, from which we dissect human hearts and in the process define them, by default, as specific independent objects. The features of reality that we call “emergent” are only emergent because we have artificially broken apart a whole thing in the act of studying it; they are only emergent from a reductionist perspective. It is the act of deconstruction that allows us to define the whole system as an assemblage of parts. The reductive process removes features of reality that we then label as “emergent” once we have attempted to piece it all back together. What we think of as emergence is, in many cases, probably better construed as a kind of synergism: naturally occurring properties of complex systems that manifest only in a state of systemic wholeness.[190]
Another limitation of reductionism can be found in the human study of mathematics. Mathematics provides a tool for predicting reality, and sometimes what we choose to measure in our pursuit of understanding and predicting the world produces a number that correlates with what we observe in nature. The conclusions that we draw from measurements that we make are based on correlations between numbers and reality, which means that we commonly reduce the underlying ontology of the world to numerical outputs. In effect, mathematical models generate a simulation of nature—they are an attempt to make a map of reality, not reality itself—and when our simulation of nature closely matches observation, we can mistake the map for the territory and settle for a limited understanding of the mechanics of reality. For example, in the formation of bubbles, nature does not calculate pi to infinity in order to make a perfect sphere. Nature simply abides by mechanistic laws, and perfect spheres are an abstract mathematical concept. They do not exist in nature.[191]
It is also important to consider how existing biases and values “prime” us toward certain starting points when we seek to understand the world through science. Before we formulate questions or design experiments, we often have some preconceived notions as to what we imagine as likely to be important to the question at hand. This directs our attention toward certain subsets of the universe that we may not have focused on were it not for our preexisting biases. This can be described broadly as selective inattention, or a self-fulfilling prophecy, or blindspots, or reinforcement; but the important point is that as soon as we start to get accurately predictive results from whatever route we were primed to take, we increase our confidence in how “right” we are about how things work, and we become less motivated to think about any other pathways that might explain the results we are seeing in the world. This process generates hubris, as well as a lack of attentiveness toward all of the relevant concerns that remain unknown.[192]
If the lens through which we view the world optimizes for the third person and misses the first- and second-person aspects of the world, we are likely to make choices and take actions that fail to serve and protect the things we value most. The changes in the world that we would like to call “progress” are not likely to be true improvements of the most meaningful and valuable things.[193] While you cannot measure fulfillment or meaning, you can measure certain subcomponents or proxies of these first-person experiences, such as comfort (i.e. safety, access to resources, etc.), or the amount of dopamine released in the brain. This approach leads inevitably to a world focused on improving narrow and incomplete proxy metrics.[194]
Science can tell us about what is in the world, but it cannot tell us about what ought to be. The is/ought distinction cannot be bridged by scientific investigation; what is falls essentially within the realm of the third person, while what ought to be falls within the realm of the second (i.e. between and in concert with other beings).[195] In the absence of guiding values to help us determine the “goodness” of a particular outcome, decisions tend to be made in a manner that prioritizes winning (and, of course, hedonism); in other words, operating within the world on the basis of what is but not what ought to be tends to result in choices being determined by the logic of game theory—what it takes to win (or to feel good), in a narrow sense, regardless of the costs. Put simply, science can provide insights into how to achieve our goals more effectively, but it cannot tell us about the goodness of our goals. That knowledge comes from elsewhere.
Determining which goals are good goals is largely what we consider to be wisdom, which is distinct from what is simply knowledge. Wisdom in relation to goal-setting considers how our success in achieving our goals might affect the wider world, and how it might affect us in ways we hadn’t imagined. As a result, it tends to avoid conclusions that lead to the acquisition and concentration of power, which often leads to scenarios that involve inequality, exploitation, and enduring types of harm, as the powerful prefer to maintain their power at the expense of the powerless. It tends to practice restraint, which is important, because sometimes the things we want in the near-term are meaningfully detrimental to our longer-term goals or underlying values. Wisdom also tends to avoid the development of social traps, such as arms races, in which individuals or groups, driven by their own interests in winning a competition, take actions that are beneficial in the short term but harmful to everyone (including themselves) in the long term. The reliance on game-theoretic decisions in a world defined by science alone eventually delivers global multi-polar traps, with escalating technological and military arms races, increasingly powerful world-ending weaponry, and environmental destruction. This game cannot go on forever.
When attempting to understand complex systems in terms of their parts, one particular downside is that you end up with increasing specialization and the siloing of knowledge. This is evident in the structure of our institutions of government and academia. In the way we design such institutions, we end up formalizing the belief that the whole of a system is fully reducible to its parts, when in fact none of the parts contain either the potential or reality of the whole. We build governments composed of separate departments (the parts), which are supposed to work together to manage the entirety of the nation (the whole), but instead we get departments that work on directly contradictory goals and in competition for the same limited budget. We build universities composed of separate faculties (the parts), which are supposed to work together to generate knowledge (the whole), and yet we get increasing narrow specialization, decreasing generalizability, and fragmented interdisciplinary collaboration between fields.
Consider, for example, the concept of health. The health of anything, whether a person or other organism, or a society, is a property of the whole system and therefore cannot be measured in a specific or direct manner. This is why our approach to medicine is focused instead on the more tractable subcomponent of disease (and in particular the individual molecular targets of disease). An approach to health that is focused on disease and death can do a good job of keeping us alive in the near-term, but beyond the absence of known problems, it has less to say about what constitutes true good health. We may take readings of blood pressure, temperature, pH, blood cell counts, oxygen levels, neural activity, or perform genetic tests—and yet we cannot construct any finite set of metrics that would represent a complete description of health. Certain states (such as infectious disease or poor mental health) may be clear indications that our health is compromised, but health itself cannot be quantified once we have resolved such limited instances. Are you healthy if your test results are negative, but you are addicted to your smartphone? Are you healthy if you are fit, strong, and full of energy, but you carry a gene that means you’re more likely to develop cancer in two decades’ time? Is it even possible to be healthy in a biosphere that is poisoned by hundreds of millions of novel synthetic chemicals? The thing we really want to optimize is not in itself definable or measurable, as it is greater than the sum of any of the parts that we might choose to measure; it is also relative, subjective, and subject to an effectively infinite number of variables. The definitions and measurements of health available to us are proxies or subcomponents of the higher-level concept.
Universities commonly study aspects of physiological health in the medical department, psychological health in the psychology department, and ways in which society impacts health in the sociology department. Each department has its own culture, methodology, and metrics, many of which are again neither commensurable with nor complementary to a meaningful understanding of health. Frequently, we perform an even more significant act of reduction when we select a single index as representative of the health or status of the whole system, such as GDP as a measure of a society, BMI (Body Mass Index) as a measure of a body, or standardized test scores as a measure of an intellect. Many of the tools that we use to investigate the world are not naturally or inherently good at meaningfully improving either the interior, first-person aspects of existence, or the outcomes at the level of the complete system. Our approach to optimizing the world—what we think of as progress—might be able to help us win in the near-term, but it is unable to optimize the aspects of the universe that ultimately we value the most.
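BMI makes this kind of single-index reduction concrete: a whole body is collapsed into one ratio, which by construction carries no information about body composition, fitness, or context.

```latex
% Body Mass Index: one number standing in for a whole body. Two people with
% identical BMI can have very different body compositions; the index cannot
% distinguish between them.
\[
  \mathrm{BMI} = \frac{m}{h^{2}},
  \qquad m = \text{mass in kilograms}, \quad h = \text{height in metres}
\]
```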
Science and technology together can deliver narrow advancement, but without an ethical compass to guide and bind them, it is not a given that they will deliver true civilizational betterment. These are the foundations on which our technologies are built, and so it should come as no surprise that they impact reality in ways that typically extend far beyond our expectations. The progress narrative reinforces itself through the same mechanism, as we demonstrate the success of our changes in the world by measuring them. As we have seen, the act of measurement (and what we choose to measure) leaves out many things that we value and on which life as a whole ultimately depends.
Perhaps the most influential worldview today with a strong perspective on progress is techno-optimism, which is the view that “technology, when combined with human passion and ingenuity, is the key to unlocking a better world.”[196] Techno-optimism is a contemporary version of the progress narrative, which has emerged in recent decades in anticipation and support of a coming revolution in advanced digital, biological, and manufacturing technologies. The speed of development in artificial intelligence in particular has brought techno-optimism to the center of cultural conversations about the future. As the race for market dominance in AI has intensified, however, so too has concern about its effects on employment, creative industries, public sensemaking, and even humanity’s near-term survival.[197] These concerns have been widespread enough to lead to a response from those invested in the progress narrative in general, as well as those financially invested in the success of specific AI developers.
As AI has landed in public consciousness, techno-optimists of all kinds have been arguing for variations on the theme of more technological innovation, as fast as possible.[198] For some, this argument plays into the broader hope that technology will save humanity from itself. The techno-optimist proposal is that our problems with the climate can be solved by planetary-scale geoengineering, our problems with disease can be solved with nanotechnology and gene editing, and our problems with collective coordination can be solved by artificial superintelligence. This latter prospect is deeply seductive, as it suggests a silver bullet for all of our concerns. The coming superintelligence will know more than anyone could ever know across all domains of learning, and present us with solutions for every class of problem.[199] The implication is that there is a moral imperative to get there as soon as possible.
The techno-optimist perspective has been promoted vigorously over the last year in response to increasing concern and growing calls for caution. At the foundation of this recent movement is the concept of accelerationism: the idea that increasing rates of technological advancement are ultimately inevitable and net-positive.[200] The accelerationist perspective may be considered the continuation of a long line of Western philosophical thinking regarding the combined power of capitalism and technology. Accelerationism’s core thesis is that by increasing the rate of capital growth (and associated technological innovation), civilizational development accelerates via pains of upheaval toward a place that we are heading anyway, through the slow disorder and fragmentation of the present.[201] The same moral imperative described in relation to AI is commonly applied to the acceleration of tech development more broadly: supporters assert that it is a good and right course of action, as the speeding up of our processes of growth and innovation will minimize the suffering and injustice of the present. The accelerationist approach, however, does not address the question of how to stop our attempts at problem solving via tech innovation from causing worse problems in the future. Nor does it seriously address the increasing scale and impact of negative externalities. In this way, the techno-optimist and accelerationist worldviews are simply another instantiation of an immature idea of progress that turns away from the real world in favor of a compelling, yet incomplete and ultimately destructive narrative.
Current debate around the safety and utility of AI systems reflects the power of advanced technology to capture the human imagination. We see the incredible views of our universe revealed by the James Webb Space Telescope, we hear of the landing of rovers and minicopters on Mars, and it is easy to feel that we are surrounded by an inspirational kind of progress. At the same time, however, any reasonable person must acknowledge that in this time of advanced medicine and space exploration, there are also a vast number of painful realities that are much less comfortable for us to dwell on. Consider, for instance, that despite (and also because of) our powerful tools of global tracking and surveillance, hundreds of thousands of children are still trafficked into the illicit sex trade each year.[202] Or that our actions cause the extinction of dozens of species each day.[203] Or that there are more animals in factory farms in the US alone than there are people on Earth, and that most suffer for their full lifetime a torture of confinement and distress, often never once even seeing the sky.[204]
When we make an earnest attempt to look at all the good and bad effects of our current world system, it is difficult to assert in good faith that an inspirational kind of progress is occurring steadily and beautifully for all. An assessment of the impacts of our advancement can give the impression that we achieve good effects in some places and bad effects in others, and that perhaps with a more targeted approach we could reduce the bad effects and optimize the good. This approach, however, would simply treat the symptom rather than the cause: it is crucial to understand that the bad effects are the direct and indirect results of the very processes by which we currently think about, define, design, and implement what most people regard as progress today.
For every positive application of a new technology, there are many counter-examples of harm externalized elsewhere. Our current understanding of progress has elements that are both inspiring and true, as well as devastating and false. The supporters of the progress narrative tend to emphasize the upsides. The most marginalized communities in society will often hold the most clearly critical views of the progress narrative, as they (and often their parents before them) have been left with the bad end of the deal. Many others without a major voice do not subscribe to the progress narrative—we simply don’t tend to hear their perspectives quite as frequently.[205]
It is straightforward to understand why the wealthiest in society would support the progress narrative. A life of exclusivity, surrounded by curated beauty, can go a fairly long way in the simulation of a truly meaningful life. But why do others believe the progress narrative, when it is clear that their world is in some important ways worse than the world their parents lived in? Many young people today can’t buy a house or afford healthcare, even though their parents could at the same age.[207] A quick answer might note the scale of entertainment and distraction, or maybe the power of hope: the hope that one day, the experiences you cannot access or afford will be available to you, just as they are to the billionaires of the progress narrative now. Although there are many reasons, one insightful perspective for making sense of a belief in the progress narrative in the presence of decline is the phenomenon of Stockholm syndrome. The idea of Stockholm syndrome is used as a way to explain seemingly counterintuitive responses and behaviors—such as loyalty, sympathy, and bonding with oppressors—in the context of hostage scenarios or other forms of captivity. Under duress, a victim no longer has any control over their safety and well-being, and is utterly dependent on their captor for their basic needs. Emotional connection with an oppressor can be seen as a coping mechanism in extreme situations.
A life of exclusivity, surrounded by curated beauty, can go a fairly long way in the simulation of a truly meaningful life.
Those who clearly do not fairly or progressively benefit from our current form of progress but who still believe in it can be thought of as suffering from Stockholm syndrome. Effectively held captive by the current world system, sufferers respond with positive feelings toward (and a sense of shared identity with) the system itself, and these feelings are used to resolve the cognitive dissonance that arises as a result of the contradictions of their situation. We are “captive” in that we each have little personal control over the direction of the world, and we alter our perception of our captor by casting it in a more positive light. We may also observe the workings of the world and come to an understanding that there are two roles or scenarios open to us: that of the oppressor, or that of the oppressed. A psychological state that identifies with the role of the oppressor can seem preferable, because the belief that we are destined to be the oppressed forever is too painful to accept. As noted in Part I, it is far more comfortable to inhabit a worldview that suggests that the burdens of the present will be lighter in the future. The day-to-day experience of the oppressed is far less bearable—and we likely feel powerless to change it anyway.
It is also the case that in a world full of conveniences, it can be easy to focus on the comfort modernity provides as a way to avoid looking too closely at its lack of meaning and fulfillment. Pleasures that have never been experienced, and particularly those that have never been seen or imagined, cannot be missed. Pleasures that have been known, however fleetingly, are not easily given up. The more challenging the daily grind of our lives, the more we need the addictive hit—the screens, the swiping and scrolling, the infinite entertainment options, the array of refined sugar products, the quest for the most likes on social media, the productivity optimization, the ubiquitous porn, the fast food home delivery—to keep us distracted and fleetingly satisfied. These conveniences are driving up rates of obesity and agoraphobia, and impacting our most basic capacities to prepare food at home, form intimate relationships, and maintain a fulfilling social world. It is also becoming ever easier to escape into a personally tailored digital world rather than think about the cost and difficulty of the real one. For others it is more comfortable to remain focused on striving and achievement, and demonstrating our worth by beating others in the game. As with most other addictive experiences, these hits do not make us healthier or happier—and yet like the addict, we’re willing to pay the cost, even when it is probably our lives. At the very least, the cost that we must bear is extracted from the meaning and quality of the short and irreplaceable lifetime each of us gets.
As with most other addictive experiences, these hits do not make us healthier or happier—and yet like the addict, we’re willing to pay the cost, even when it is probably our lives.
Some of the most impressive instances of tech innovation cause some of the most significant harms. The battery in the device on which you are reading these words requires cobalt, which is currently mined using child labor, in operations dependent on militia violence and rainforest clearcutting.[208] The manufacture, use, and disposal of these same devices produce a set of known toxic byproducts, many of which are implicated in the illnesses that kill our loved ones after prolonged treatments and protracted deaths.[209] These uncomfortable realities are also part of the world shaped by our current idea of progress, a world that simply works out better for some than for others. Those willing to accept the benefits of the innovation we have now must also accept the morally untenable position this entails. Even just pragmatically, it is unlikely that humanity can survive it. In time, an ideology that drives uncontrolled exponential technological development on a finite planet can only result in negative side effects so significant that they break the biosphere in a catastrophic way.
In time, an ideology that drives uncontrolled exponential technological development on a finite planet can only result in negative side effects so significant that they break the biosphere in a catastrophic way.
Some techno-optimists suggest that there are high-tech solutions to these problems in the form of escape routes from a damaged planet or a collapsing civilization. Billionaires build extensive underground survival complexes.[210] Others plan for an off-world future on Mars, or detachment from their mortal bodies by uploading their minds to the cloud. The Earth must be conserved, however, for any of these future dreams to materialize. Regardless of whether or not it’s technically possible, living in a digital realm still requires physical infrastructure—as well as all of the supply chains, social contracts, and institutions necessary to maintain it in perpetuity. This is only one of the many reasons that we must endeavor continually toward comprehensiveness in our attempts to understand the full range of effects of our actions in the world. For the techno-optimist dream to materialize, the natural systems on which it is built must be healthy, resilient, and well-governed.
Whether we produce a healthy or an unhealthy type of advancement is ultimately determined by the fundamental drivers of human behavior, which include our incentives for taking any kind of action in the world. Incentives can be described as perverse when they harm other aspects of reality that we value or on which we depend; for example, our incentive to maximize profit margins is perverse when it also drives industrial pollution.
Perverse incentives occur when people are encouraged to take particular actions (such as doing a job or solving a problem) by the promise of reward; in other words, perverse incentives are driven by extrinsic motivations. When we try to get people to do things that they are not intrinsically motivated to do, we need to offer a reward to motivate the desired actions. Most people are extrinsically motivated to go to work every day to earn money, and not necessarily because it is precisely what they would want to do with their time given the choice. Much of the world runs on extrinsic motivation, and when we use it to direct human activity, we tend to define our desired outcomes too narrowly. Defining anything “too narrowly” means taking for granted the systems within which it is embedded and the relationships on which it depends. These include the realities of nature, the finite quantities of the biosphere from which everything is made, and the ways in which our activity affects and changes our bodies and minds. Perverse incentives are common to human systems: our system of government forces politicians to prioritize short-term reelection over long-term positive outcomes for the public; our healthcare system encourages the prioritization of treatments with greater profit margins over preventative or more directly effective interventions, due to the influence of insurance, policy, and pharmaceutical lobbying.
The side effects of our current kind of progress are driven by the perverse incentives embedded within large-scale human systems. In society, when somebody takes a reasonable action in order to gain some kind of advantage, a competition can begin as others seek to gain similar advantages. As the competition grows, a trap can develop between participants, in which other things of value (such as time for rest, or protection of the local environment) are sacrificed for near-term gains. Over time, these gains become increasingly limited and generate ever greater externalities. As more and more of value is sacrificed, everyone ends up in a worse overall position than they were in at the start.[211] Social media again provides a good example of these dynamics: the introduction of brief, highly engaging videos by TikTok in 2017 drew users away from competitors such as Instagram and YouTube, forcing them also to prioritize shorter, “stickier” content over longer-form video or still images.[212] The cost of this attentional race-to-the-bottom is externalized onto users, leading to a further degradation of attention spans and the upregulation of simpler, more addictive, and less nuanced content. Shorter videos, arranged in an endless scroll, naturally tend to erode the capacity for meaningful cognitive engagement and emotional depth; they oversimplify complex issues, favor performance over the authentic exchange of ideas, and contribute to polarization on divisive subjects.
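To make the shape of this trap concrete, here is a minimal toy sketch (all numbers are invented for illustration, not drawn from any platform data): two hypothetical platforms repeatedly choose between long-form and short-form content, and whichever offers the shorter, stickier format captures more attention share.

```python
# Toy model of an attention race-to-the-bottom (a multipolar trap).
# All numbers are illustrative assumptions, not measurements.

def attention_share(own_length, rival_length):
    """The platform offering shorter ('stickier') content captures more attention."""
    if own_length == rival_length:
        return 0.5
    return 0.7 if own_length < rival_length else 0.3

def user_wellbeing(length_a, length_b):
    """Assume well-being rises with average content length (depth, nuance)."""
    return (length_a + length_b) / 2

lengths = {"A": 10, "B": 10}   # both platforms start with long-form content
for step in range(3):
    for platform, rival in (("A", "B"), ("B", "A")):
        stay = attention_share(lengths[platform], lengths[rival])
        defect = attention_share(1, lengths[rival])   # switch to short-form
        if defect > stay:                             # defect whenever it wins share
            lengths[platform] = 1
    print(step, lengths,
          "share A:", attention_share(lengths["A"], lengths["B"]),
          "well-being:", user_wellbeing(lengths["A"], lengths["B"]))
```

Each platform’s move is locally rational, yet both finish with exactly the attention share they started with, while the assumed well-being measure collapses: the signature of a multipolar trap.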
This results in the fake kind of progress we have now—“progress” that needs a narrative fuelled by cherry-picked examples, a reduction in human empathy to downplay the harms, and a great deal of motivated reasoning to continue to propagate the story.
These traps drive us toward a world that seeks to internalize profits and externalize costs. This results in the fake kind of progress we have now—“progress” that needs a narrative fuelled by cherry-picked examples, a reduction in human empathy to downplay the harms, and a great deal of motivated reasoning to continue to propagate the story.[213] The kind of progress that ignores its externalities is far easier to achieve than the kind of progress that takes true account of its costs, because those spending limited resources on internalizing costs are outcompeted by those who do not. Real progress would require internalizing externalities, binding social traps, and rethinking our approach to problem-solving, advancement, and technology more generally. It isn’t possible to practice real, authentic progress in the presence of the fake, immature version, so we have a choice: either we pursue real progress together, or we continue a rivalrous race toward a cliff edge.
As the incentive to internalize externalities does not tend to arise naturally in the market, perverse incentives have to be bound by some external force. The law is the standard framework used to bind perverse incentives. A classic example is pollution of the commons: it may be cheaper to dump waste from your manufacturing process in a nearby river—and if the survival of your business and the security of your family are at stake, this is likely to become an appealing option. Quietly passing this cost to the environment (and thereby to all other people) has been a regular course of action in the past. In a democratic society in which the law is supposed to represent the collective will of the people, it is the law’s role to step in and disincentivize this decision.
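As a toy illustration of how a legal penalty re-binds this incentive (the figures below are invented for the example, not taken from any real case), compare the private cost of treating waste with the expected private cost of dumping it once a fine and a probability of detection are attached.

```python
# Illustrative numbers only: how an expected fine changes the private calculus
# of dumping waste versus treating it responsibly.

treatment_cost = 50_000          # private cost of disposing of waste responsibly
dumping_cost = 2_000             # private cost of dumping in the river
downstream_damage = 400_000      # cost borne by everyone else (the externality)

def expected_private_cost(fine, detection_probability):
    """Private cost of dumping once the law attaches a fine to the act."""
    return dumping_cost + fine * detection_probability

for fine, p in [(0, 0.0), (100_000, 0.2), (500_000, 0.5)]:
    dump = expected_private_cost(fine, p)
    choice = "dump" if dump < treatment_cost else "treat"
    print(f"fine={fine:>7}  p(detect)={p:.1f}  expected cost of dumping={dump:>9,.0f}  -> {choice}")
```

Only when the expected penalty exceeds the private saving does the calculus flip toward treatment; without that externally imposed binding, the 400,000 of downstream damage never enters the decision at all.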
Legal enforcement is the means by which the government “checks” the activities of the market to ensure that damaging, exploitative, or unjust pathways to profit are blocked. In the theory of democracy, the government—a government of the people, for the people, by the people, representing their collective values and will—makes some activities illegal and associates others with taxes and fines, both as a disincentive and also to pay for resolution if they do still occur. But the market is more than a passive partner. An incentive exists for those operating within the market to figure out ways to influence the government (and thereby the law). Unless the people are “checking” the government (i.e. actively seeking to understand and contextualize the state’s activities), the market will work to alter the legal oversight of market activities.
In twenty-first-century American democracy, the opinions of ordinary citizens have close to zero impact on public policy, while legislative outcomes are instead strongly correlated with lobbying dollars spent and the opinions of economic elites.
In twenty-first-century American democracy, the opinions of ordinary citizens have close to zero impact on public policy, while legislative outcomes are instead strongly correlated with lobbying dollars spent and the opinions of economic elites.[214] This serves to highlight one key example of how the market captures the state: private companies employ lawyers to write laws in their interest, and the same companies pay lobbyists to press the state to accept them. The market also makes use of the practice of “revolving doors,” in which those with career experience (and often vested interests) in private industries are employed by the state and tasked with regulating the industries from which they came (for instance, the recent formation of an AI Safety and Security Board in the Department of Homeland Security).[215] Similarly, many people climbing the ladder in government hope for a lucrative end-career position in the industries they regulate, and their chances of securing such a role are far lower if they have spent their time in government enacting stringent regulatory oversight.
Public-Private Partnerships (PPPs) are another tool of market influence. PPPs are legal agreements between the state and the private sector toward common goals, and yet the ultimate beneficiary tends to be determined largely by who writes the operational agreement that underpins the PPP’s activities. With its greater resources, the private sector can afford lawyers with the capacity to craft subtly beneficial terms and loopholes within long and complex legal agreements that few people are competent to interpret. Finance for political campaigns is another key tool for private interests to influence government activities. Political donations, to one degree or another, buy access and influence over those who end up determining which laws are passed and which companies or sectors of the economy are granted generous subsidies or spared significant taxes. The list could go on to cover the extraordinary scale of (highly successful) corporate lobbying, targeted tax credits, and the role of political influence over the award of government contracts.
The pathways to state capture described above demonstrate how financial power can shape legal power. If the law presents a limitation, then through these means—even without the need for outright bribes—changes to law can be brought about with money. In this way, money has the power to break the binding that the law places on perverse incentives. In fact, in a number of industries, the pursuit of a change to the law through lobbying is the single capital investment with the highest potential returns. Agricultural subsidies secured with millions spent on lobbying mean that some agri-corporations don’t even need to remain profitable to operate; the pharmaceutical industry can justify the expense of hundreds of millions of dollars’ worth of lobbying fees when the resulting legislation protects it from class action lawsuits that would cost billions.[216]
At the same time, the law must continuously seek to understand the world that it needs to regulate, so that it can determine which parts need its protection. The rate of technological innovation now far exceeds the capacities of our legal institutions to make sense of changes and respond rapidly. The law is not effective at binding harmful activities in certain market sectors, in part because it can no longer model them clearly enough (and even in sectors that it can model clearly, as described above, the law can be inadequate in the presence of vested interests and efforts to influence its oversight). This may be referred to as regulatory inadequacy: rules and laws that are neither efficient nor complete enough to govern the activities they are meant to oversee. Many legal frameworks are designed with only partial knowledge of the issues they aim to regulate, and this is becoming an increasingly consequential problem. Examples may be observed again in social media technologies. The law could not keep up with the scale and pace of change driven by social media platforms (e.g. political interference, mental health impacts, nation-state information warfare, etc.), and by the time the effects started to become clearer, billions of dollars had already been invested, livelihoods had been established, and ways of life had been altered. The “progress” and its damaging consequences could not be undone.
The perverse incentives at the root of our systems of social organization present a challenge for anyone seeking to reimagine the concept of progress and ensure that it represents holistic betterment, and not just narrow optimization. For our relationship with progress to mature, a number of criteria must be met. Our actions in the world must account for all impacted stakeholders. In the process of creating a new product, innovation, or change, we must consider the other values that could be harmed in the pursuit of its own limited value set. We must consider the total ripple effect of its activities in the world, asking questions such as: what other aspects of reality will this activity touch, and over what timescales? What are the 1st-, 2nd-, 3rd-, and nth-order effects of this activity?
The spirit of this approach is rooted in caring enough about the fundamental value of reality in order to notice the ways in which it may be harmed. As with maturity in humans, maturity in relationship to progress necessarily involves caring, noticing, and then making changes to address issues identified. The underlying aim must be to innovate in a way that is net-neutral to net-positive in relation to everything that is touched by our changes in the world, both now and in the future.
If a change in the world is measured and optimized against a set of narrow metrics—i.e. metrics that do not account for everything that the change effects in space and time—this indicates that the change being made is perverse and that it will generate externalities. For a change to equal progress, it must systematically identify and internalize its externalities as far as reasonably possible. Its underlying incentives must be bound to the well-being of all life, and it must uphold and protect the social contract of society that motivates people to work together at scale.
But what motivates us to do anything at all? While incentives may be thought of as the external reasons for taking a particular action, we are also subject to internal motivations that drive our behavior.[217] Beneath our motivations are our desires. When we desire something, we are motivated to pursue it through our actions and behaviors in the world.
If we consider the broad spectrum of human needs (such as food and shelter, safety and security, love and belonging, etc.), then we may begin to appreciate the origins of the desires that give rise to our motivations.[218] During childhood, we are all dependent on the people around us, the natural world, and the systems that sustain us for our needs to be met. We need our family to feed us, keep us safe and make us feel loved and connected to the world into which we are born. If these needs are unmet when we are growing up, they do not simply fade away when we become adults. Many of us carry the imprints of our unmet needs from childhood—commonly the needs for security, love, and connection—for the rest of our lives, allowing them to guide our behaviors and give rise to “unhealthy” (or immature) motivations. When the desires for belonging, for esteem, and for recognition from our family and peers are unmet (or require of us specific performances to be met), we end up in a state of disconnectedness from the people and the wider world around us.
A desire that arises in a state of disconnection will cause problems. This is because the lack of connection means that we lack the will to care about or notice the other effects that our desires drive in the world. In a state of separation, we are attuned to the consequences of our actions in the narrowest sense: the effects on us as individuals, and in the timescale that is most relevant to us and our considerations. Humans are social primates, and when our relationships are degraded or distorted, our desires can become pathological as we seek to fill the emptiness caused by our lack of connection. Addressing the emptiness we feel inside becomes a key motivation for our choices and actions in life, and in our disconnectedness we often fail to adequately consider the ways in which they will affect others. An early step in a path toward developmental maturity is the realization that our desires are disconnected from others and the wider world, and that our actions, motivated by immature desires, are causing harm.
Humans are social primates, and when our relationships are degraded or distorted, our desires can become pathological as we seek to fill the emptiness caused by our lack of connection.
Desires that arise within a person who feels connected to themselves, other beings, and the wider world will account for how they are intrinsically bound to everyone else’s desires. The desire that a mother has for the well-being of her child is an example of a desire that arises in connection. The maternal desire for her child’s well-being emerges from a lack of rivalry and the deep fulfillment associated with being in service to the child’s needs. This is a naturally arising instance of mature motivation, stemming from desire that is rooted in connection to another being. The actions that a mother takes for the betterment of her child reflect a holistic understanding of what is good for the child, their environment, and their community, both now and in the future. This is the kind of desire that, if acted on, leads to authentic progress.
The changes that we make in the world under the guise of the progress narrative are rarely motivated by a mature desire for the betterment of humanity and all living beings. Instead, they are far more commonly motivated by a range of immature desires: a basic curiosity, a reckless desire to know what is possible in reality without due care for the costs, a desire for money or status, or a desire to be seen as the smartest or most successful. At a deeper level, our motivations may rest on an unhealthy desire to prove oneself to one’s parents or figures of authority, as a demonstration of worthiness, or as an expression of the hope that attainment will fill that internal lack that is not easy to define, but is nonetheless ever-present.
Immaturity in our desires and motivation has never been more consequential than it is now. Humanity has developed the power to affect the world at a larger scale than ever before, and yet none of us as individuals are meaningfully connected to the consequences of our actions. Most of the objects that constitute our surroundings required global supply chains for their manufacturing and distribution before they became part of our reality. We live in a world in which the connection between our senses and our actions has been broken, in that we cannot see or feel the effects of our decisions. When we turn on a light, we do not know where the energy for its function came from, whether it was generated in a nuclear power plant or whether it came from the burning of coal. If the latter, then did the coal come from China, India, or Wyoming? Which trees were felled for the construction of the mine from which it came, and which ecosystems were destroyed? Which beings died to make way for the energy we receive at the flick of a switch? If we cannot sense the effects of our actions and choices, we cannot care properly about whether they are good or bad, and we can be complicit in harm. At a tribal scale, we had to live with the consequences of all of our actions and decisions. If a tribe made the decision to pollute its environment, it was forced to reckon with the outcome, even if it meant simply moving somewhere else to avoid it. In our current system, at the global scale, we sense very few of the consequences of our actions, and our connection to the ways in which we impact the world is disrupted. There is also nowhere else to go.
When we turn on a light, we do not know where the energy for its function came from, whether it was generated in a nuclear power plant or whether it came from the burning of coal. If the latter, then did the coal come from China, India, or Wyoming? Which trees were felled for the construction of the mine from which it came, and which ecosystems were destroyed? Which beings died to make way for the energy we receive at the flick of a switch?
Maturity in motivation is about recognizing the underlying values that are being served by our desires and making a deeper assessment. Mature motivation is connected to a mature ego, a stage of personal development within which it is possible to see that some drives are more about immediate individual gratification than they are about fulfilling constructive and socially beneficial goals. We all come from group settings and have been (and in the most meaningful ways remain) utterly dependent on the complex web of people, organisms, elements, and systems that constitute our surroundings.[219] While you were developing in your mother’s womb, you were dependent on her in the most direct way imaginable. It is an illusion to think that this kind of interconnectedness ends at the point of birth. Throughout life, at every stage of development, from conception to this precise moment, even in times of almost total isolation or loneliness, at all times, we are dependent on the people around us, the systems that serve our needs, and the basic foundations of nature for our survival.
Try to imagine who you would be without plants. Without plants, there would be no atmosphere for you to breathe, no food chain for your nutrition, no animals—no you. You could not exist without plants, and the same goes for the soil, air, water, microbial life, fungi, the Earth’s gravitational field, the sun—for nearly every part of the web of life within which you are inextricably embedded. Who would you be, without everything in the biosphere as it is? We all exist in utter dependence on so many things that we do not include in our definition of “self,” and yet if our sense of “self” is based in this kind of incomplete thinking, it becomes possible to advantage ourselves at the expense of the things on which we depend. The “I” is not a meaningful concept in the absence of the “we.”[220] They go together, and the kind of progress that blinds itself to this interconnectivity risks damaging the things that we need to survive and harming fundamental aspects of what it really means to be human. A mature version of progress recognizes this reality in its design and execution.
Without plants, there would be no atmosphere for you to breathe, no food chain for your nutrition, no animals—no you.
When human societies grow, individuals are able to pass the costs of their activities to others in the system in a way that was not possible at a smaller scale. Moral people sometimes become part of immoral machines. When humans lived at smaller scales and in tribal contexts, actions that attempted to externalize harm were highly visible, and mechanisms evolved to correct for individual behaviors that damaged the wider group. At scale, these corrective mechanisms must be replaced by law and enforcement, and as we have seen, these protections fail both when the law can be bought and when technological development outpaces the law’s ability to keep up. As the Haber-Bosch process demonstrates, they also fail when downstream effects are sufficiently complex and distant in space and time from their original cause.
A mature approach to addressing the negative externalities of the Haber-Bosch process recognizes the scale and complexity of harms driven by industrial farming and seeks to offer an alternative path. How could advancing the application of regenerative agriculture address the upstream drivers of issues associated with current farming practices around the world?
Soil is one of the critical differences between Mars and the Earth. Soil (along with the oceans) gives us our atmosphere, which comes from the gas exchange that occurs between organisms rooted in and dependent on the soil. It is more accurate to think of soil as a living ecosystem than as an inert substrate, because healthy soil contains a vastly complex microbiome of bacterial species, which interact with structural elements in the soil to produce a living substance that is far greater than the sum of its parts. Healthy soil is capable of facilitating nutrient cycling, stabilizing the hydrological cycle, and maintaining ecological balance. Where industrial farming depletes and degrades soil (which is why we need to add synthetic fertilizers to keep it capable of producing plants), regenerative practices do the opposite: they aim to enhance the soil in terms of both quality and quantity, year on year.[221] In this way, regenerative agriculture embodies a key principle for the long-term viability of any civilization: a reciprocal relationship with nature. Nature has a balance sheet, and if our approach amounts to taking and not sufficiently giving back, the balance sheet will show a deficit that, if uncorrected, will lead to the collapse of natural, life-giving systems.
Nature has a balance sheet, and if our approach amounts to taking and not sufficiently giving back, the balance sheet will show a deficit that, if uncorrected, will lead to the collapse of natural, life-giving systems.
There are many methods and approaches that constitute the full spectrum of regenerative practices, all of which are context-dependent. The approach taken in a tropical rainforest is necessarily different from the approach taken in a drier environment. Regenerative agriculture may include apparently opposite or contradictory methods because of this context-dependence. For example, in one location a reduction in plowing and tilling may be the most beneficial way to heal the soil, while in another, deeper-than-usual tilling would be the right approach in order to encourage greater root penetration. Other standard practices include cover cropping to protect the soil, companion planting of complementary species to balance nitrogen, crop rotation, the integration of trees and shrubs into agricultural landscapes, the restoration of natural grazing patterns, and enhanced composting and mulching for nutrient cycling—all of which drive significant improvements at the level of the soil.
When we take action to improve topsoil, the plants that grow from the land are improved as a second-order effect—a positive externality. At the next “level,” the humans and animals consuming these plants benefit too, as they are no longer consuming the toxic residues from pesticides, herbicides, fungicides, and synthetic fertilizers. Greater micronutrient quantities lead to an improvement in health, fertility, vitality, and cognition, as well as a reduction in the burden of anthropogenic disease, the cost of healthcare, and a population-level reliance on pharmaceuticals. Composting and mulching allow many micronutrients (which are absent from NPK fertilizers) to return to the soil and replenish what was taken during harvest. As synthetic fertilizer use declines and is replaced by compost and other natural fertilizers, microbial diversity rebounds and soil health improves. Water quality is restored in the absence of chemical effluent; water retention in the topsoil improves, and waterways and dead zones in coastal regions gain an opportunity to heal. Taken together, this circular process of taking and then returning to the land is an instance of a virtuous cycle: a single set of actions opens a space for a chain of positively reinforcing outcomes, which feed back into the inputs to raise up the overall baseline of the system, allowing it to grow and improve over time.
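The contrast between the extractive and the regenerative loop can be sketched as a toy feedback model (the coefficients are arbitrary, chosen only to show the direction of the dynamics, not agronomic data): extractive practice returns less to the soil than each harvest takes, while regenerative practice returns slightly more, so the baseline compounds upward rather than eroding.

```python
# Toy feedback model: soil health under extractive vs regenerative practice.
# Coefficients are illustrative assumptions, not agronomic measurements.

def simulate(return_rate, seasons=20, soil=100.0):
    """Each season a harvest draws on soil health; `return_rate` is the fraction
    of what was taken that is given back (compost, cover crops, and so on)."""
    history = []
    for _ in range(seasons):
        harvest = 0.10 * soil                        # what the harvest takes
        soil = soil - harvest + return_rate * harvest
        history.append(round(soil, 1))
    return history

extractive = simulate(return_rate=0.5)    # gives back less than it takes
regenerative = simulate(return_rate=1.2)  # gives back slightly more than it takes

print("extractive  :", extractive[::5])   # soil health declines season on season
print("regenerative:", regenerative[::5]) # soil health compounds upward
```

Under the regenerative coefficients the system’s baseline rises year on year, which is exactly the virtuous cycle described above; under the extractive coefficients the same harvesting logic quietly erodes the stock on which it depends.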
Regenerative agriculture allows many overlapping ecosystems to begin to recover, with a range of positive downstream effects.[222] Importantly, the complex issues associated with pesticides, herbicides, and agricultural chemicals in the human body are eliminated. Given the magnitude of the effects on human vitality and psychology from a combination of pesticide toxicity and micronutrient depletion, it is hard to fathom the scale of the benefits this might bring for society, human functional health, and our ability to coordinate at scale. Note the parallel with lead poisoning: again, we have no real sense of the scale of its impact on human potential and societal coordination. We can be sure, though, that in the absence of such toxins, the direction of change will be positive in relation to our current problem set.
The key point is that by focusing on a simple set of changes, we can begin to externalize positive effects, rather than the existing set of negative effects. This would be a real kind of progress—progress that does not simply turn away from damage inflicted elsewhere in time and space. This approach is about removing the activity that is driving current negative externalities.[223]
Some readers may be willing to accept that the scale of the costs of technological innovation is underestimated, yet at the same time feel deep down that there is still a Star Trek-like, high-tech future ahead of us. A high-tech future remains a possibility, but it also remains the case that sometimes, when things break, they are broken forever. The biosphere in which we live is not a space of infinite capacity and resilience. We cannot take from nature and turn it into money and trash forever; either we change our approach or the system will inevitably self-terminate.
As David Foster Wallace once noted, important realities are often the ones that are hardest to see and talk about.[224] Because we now live in and are formed by habitats composed almost entirely of non-natural spaces, synthetic materials, and inexplicable machines, it is easy to forget who we are and where we came from. It is hard to see all the ways in which human existence is backwards now. There are powerful and sublime states of human existence that we cannot mourn because we have never experienced them. It is impossible to feel the pain of loss in relation to benefits we can barely imagine. We have lost even the means of comparison between a life in which almost all of our time is spent in human-built spaces and another, in which our bare feet are never far from the touch of the land. The benefits have never been known, and so they cannot be lost. We have forgotten.
We have lost even the means of comparison between a life in which almost all of our time is spent in human-built spaces and another, in which our bare feet are never far from the touch of the land.
Our species was selected for its ability to both adapt to and modify its environment. Unlike other species, we extend ourselves into the world using tools that we develop according to the needs of our environment. Humans have had to be proficient at throwing spears, at making clothes to keep warm, and at typing on keyboards, none of which is coded in our DNA; each is conditioned by the surroundings and culture of our early years of development.[225] From the perspective of our high-tech present, it is difficult to see the grave risk that comes with this adaptation. As our global civilization moves us further and further from our evolutionary environment, it increasingly contains things to which we cannot meaningfully adapt, and which will slowly degrade both what it means to be human and the natural world on which all life depends. We are progressively making a world to which we are not genetically fit, and with which we are increasingly misaligned. Our immature perspective on progress blinds us to this risk.
We are progressively making a world to which we are not genetically fit, and with which we are increasingly misaligned.
A key engine of the progress narrative is optimism. In many ways, the progress narrative is the optimism narrative, and our current definition of progress and optimism are two sides of the same coin.
The labels of optimist and pessimist are used commonly in society to categorize people according to their general worldview. Those who tend to expect upsides are the “optimists” and those more inclined to consider potential downsides are the “pessimists.” This is another reductionist view that happens also to be a useful form of propaganda for supporters of the progress narrative, which makes use of the label of “pessimist” as a pejorative. The pessimist is cast as the dull and nihilistic doomer, while the optimist can assume the role of the enthusiastic and energetic leader (the “builder of the future”).[226] This framing serves the purpose of the market and the interests of productivity, disposing many of us toward making and selling things that perhaps we don’t need or even particularly want, at varying degrees of risk.
An alternative perspective is that considerate pessimism is an expression of care and responsibility. Caring about the fundamental value of reality, the pessimist attempts to see effects in the world clearly, and feels an empathy that causes them to consider the consequences of their actions more comprehensively. From this perspective, naive optimism can be a kind of willful blindness—a form of sociopathy that forbids the thought that there might be costs to our actions that may be best considered in advance. We can call this toxic optimism.
When we feel unfulfilled in life but are committed to the path that we’re on, an optimistic outlook can provide a useful excuse for not looking too closely at the reasons for our lack of fulfillment. Optimism can be a part of the story that we tell ourselves about how things will get better in the future. We can fill our lives with the hypernormal stimuli of status, money, and entertainment. We can point to how enjoyable and transiently satisfying such experiences are, and never have to consider the lack of real intimacy and meaning in our lives, nor deal with the causes. Optimism and hope can be useful tools for human psychologies in denying the scarier or more consequential aspects of reality. From this perspective, it is the optimist who is the nihilist, the empty ghost pursuing a hit of hypernormal, addictive stimuli to distract from the gaping void in their soul.[227]
Optimism and hope can be useful tools for human psychologies in denying the scarier or more consequential aspects of reality. From this perspective, it is the optimist who is the nihilist, the empty ghost pursuing a hit of hypernormal addictive stimuli to distract from the gaping void in their soul.
When we dismiss an expression of care for reality as mere pessimism, we implicitly endorse irresponsibility and nihilism. The suggestion that a certain action might not be a good idea may be articulating precisely the opposite of nihilism: a statement of caring responsibility for what is happening in the world. The idea that someone else’s care for reality is based only in fear and risk-aversion can be used as a means of discounting their perspective, and justifying an approach that rushes ahead with poorly conceived plans, risking the health and well-being of other beings. In reality, healthy pessimism is an expression of care and responsibility, as well as empowerment. Empowerment is a critical component, because healthy pessimism recognizes agency and seeks to take action in the world.
Toxic pessimism, on the other hand, looks like disempowerment and a preoccupation with negative outcomes, often to the detriment of reasonable pathways toward action. When pessimism is unhealthy, it leads to a defeatist attitude that turns away too easily from potential holistic improvement, and discounts strategies that may be useful given adequate time and consideration. Toxic pessimism risks driving a self-fulfilling prophecy of failure. It can look like hopelessness, and insidiously undercut constructive approaches. In modernity, the toxic forms of both optimism and pessimism are far more prevalent than their healthy forms.
A more holistic approach to optimism and pessimism involves elements of both, in an awareness of and engagement in the dialectical relationship between the two. For instance, it is clearly a bad idea to be purely optimistic about a strategy, because optimism can blind us to our own biases and to the factors that may derail our plans. A better approach involves a healthy dose of pessimism about the quality of our strategy, because then we will be more attuned to its flaws and pitfalls, which will help it succeed in the long run.
The question of how to feel about a strategy also presents an opportunity for a healthy kind of optimism. Healthy optimism is the faith that the total set of possibilities before us is vast, and that we have explored very little of the landscape of potential interventions in the world. Healthy optimism is the faith that we can always do better, that there is always more to learn that will improve our strategy. This kind of optimism is not about clinging to a particular proposition with blind certainty. It is based instead on the humble recognition of how much still exists outside of our current awareness and how we are therefore obligated to keep trying, in service of all that we value.
In many ways, the progress narrative is the optimism narrative, and our current definition of progress and optimism are two sides of the same coin.
So far, Part II of this essay has explained what is wrong with progress and why; this final section provides some examples of how to deliver authentic progress in practice. This necessarily involves an explanation of techniques and processes. Although this means a different kind of reading experience, it is also the only way to demonstrate that there are valid and practical methods for addressing the profound challenges outlined in Part I—and the only way to empower readers with a sense of the real possibility of change. Without an explanation of these approaches, this paper would fail to point out the path of healthy optimism that lies ahead, a path that is open for us to take, should we choose to do so. Once the gestalt of the potential applicability and scope of these approaches is felt, a sense of hope and even excitement can arise: our challenges are enormous, but fundamentally tractable. There is work to do. We can make a difference.
These processes help innovators, technologists, and entrepreneurs take action that methodically internalizes externalities. They are not anti-progress, just as they are not anti-science, anti-technology, or anti-democracy. A proposal for a more mature version of progress is simply against the immature versions of these concepts.[228] The world needs science that connects disparate fields, that integrates the humanities with the sciences in a way that allows people working in each domain to benefit from the best thinking in the others. Science conducted in isolation risks losing the distinction between what “is” in reality, and what “ought” to be. For applied science (in the form of new technology) to be guided by the most meaningful values, a deeper understanding of the kinds of learning made possible by the humanities is critical. Now that humanity has the power to alter its fundamental reality, it is vital that we have something meaningful to say about how best to steward such power.
The following section speaks to how humanity can move forward in a more mature way. The list of processes outlined below is by no means exhaustive, and should be considered as illustrative of the kind of techniques that are necessary. In practice, each of these processes should be used in an overlapping and inter-informing manner, in which users flow from one to the next to build a more comprehensive understanding of the best possible outcome. The world needs to be innovating in this space; we need more thinkers to expand this list, and take steps into the landscape of unexplored possible routes toward a viable future.
Common problem-solving methods tend to focus on the search for new solutions to the problem at hand. In most cases, however, focusing instead on upstream causes would allow us to consider whether our goals might be best served by addressing the origin of the problem, rather than the problem we see before us. Developing prudence in our approach to problem-solving would help to reduce the risk of negative externalities of new and perhaps poorly considered and designed technologies.
Life in modernity has led us to forget that not all desires should be fulfilled, and not all effort or discomfort removed from our lives. Challenge is key to becoming who we are—to our health, our well-being, and our potential for growth and development.
There are strong incentives to seek technological fixes for things that are simply features of reality that are worth embracing, rather than legitimate problems to solve. Life in modernity has led us to forget that not all desires should be fulfilled, and not all effort or discomfort removed from our lives. Challenge is key to becoming who we are—to our health, our well-being, and our potential for growth and development. Difficulties can give rise to strength, and while some difficulties are truly harmful or drive negative externalities of their own (and should therefore be addressed), others may be better understood as a critical part of what drives our development, or makes life meaningful. Modern life makes it easy to lose touch with this reality in favor of the conveniences it provides.
When we take steps to solve problems that may not even be best addressed through a new solution, we can create outcomes that leave us in a worse position overall. Many problems are the result of effective solutions to previous problems, and solutions to these problems will in turn necessitate more new solutions. It is this process that traps society on a path to increasing catastrophe and degradation rather than authentic progress. We can resolve this dynamic through the application of a set of simple, principled steps.
Below is a simple process that makes any attempt at problem-solving more likely to create deep and lasting success, and less likely to create new problems as a result. This approach aims to address problems so that each solution creates an authentically healthier world, externalizing benefits rather than costs.
The concept of yellow teaming was inspired by the better-known practice of red teaming. The idea of the “red team” was developed by the military to evaluate strategy by simulating an adversary’s perspectives and actions. Later, cybersecurity firms used the same approach to explore attack pathways against a client’s digital infrastructure and produce reports of security issues. In many cases, red teaming involves actively trying to break or corrupt a product to understand all the ways in which failure may occur.
The “yellow team” concept takes this idea in another direction, assessing a project and its implementation in the context of all other aspects of reality that it will touch over the full course of its lifetime.[229] Where red teaming attempts to ensure that a plan doesn’t fail, yellow teaming attempts to ensure that it doesn’t cause unexpected harms or problems elsewhere. It aims to account for the ways in which our typical approaches to solution design tend to make problems worse in the long run, and to provide guidance for addressing such issues in advance, thereby minimizing the risk of negative externalities.
The practice of yellow teaming asks a set of probing questions to help reveal the broader impacts of any technology under development. Questions are aimed at helping builders to think through impacts across domains, including the environment, human health and psychology, the foundations of nature, communities, political economies, existing technologies, and different jurisdictions. It also helps designers to consider the unforeseen ways in which their ideas could be exploited for purposes far beyond their original intent, including paths to weaponization, corruption, and conflict. Yellow teaming, like synergistic design (covered below), is an approach to axiological design: design that is grounded in a consideration of values and ethics, and integrates the broader implications of a technology into the design process.[230] Opening, higher-level yellow team questions (from which subsequent lower-level questions then emerge) probe each of these domains in turn.
Further questions focus on impacts in the context of time, space, and power. How does a new tool create, increase, or decrease power in society? Where is power being conferred, and who will be empowered by its use? What previous ways of being in the world will be made obsolete (e.g. screens and their impact on reading)? Is it benefiting the present at the expense of the future? Will there be responses and counter-responses from competitors? How does it drive innovation arms races (i.e. how does it change the power landscape, and how are those impacted likely to respond)? The technologies that are likely to be created in response to the use of a new technology are also part of the causal consideration embodied by the yellow team approach. Sometimes it may become clear that social technologies (e.g. changes to the motivational landscape) are necessary prior to deployment to ensure that a tool doesn’t simply launch a new arms race. The yellow team approach is about designing health, social, and ecological metastability into a future landscape that will be shaped by a new technology.[231]
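One way a design team might operationalize these questions is to keep them as a structured, reviewable artifact rather than an ad hoc conversation. The sketch below is hypothetical: the domains and questions are illustrative placeholders, not a canonical yellow-team checklist.

```python
# Hypothetical sketch of a yellow-team review kept as a reviewable artifact.
# Domains and questions are illustrative, not a canonical list.

from dataclasses import dataclass, field

@dataclass
class YellowTeamItem:
    domain: str          # e.g. "power", "affordances", "time", "ecology"
    question: str        # the probing question itself
    findings: str = ""   # what the team actually concluded
    resolved: bool = False

@dataclass
class YellowTeamReview:
    technology: str
    items: list = field(default_factory=list)

    def open_items(self):
        """Questions that have not yet been answered and resolved."""
        return [i for i in self.items if not i.resolved]

review = YellowTeamReview(
    technology="short-form video feed",
    items=[
        YellowTeamItem("power", "Who gains or loses power if this scales to billions of users?"),
        YellowTeamItem("affordances", "What uses does the design afford beyond its stated intent?"),
        YellowTeamItem("time", "Which benefits accrue now, and which costs arrive later?"),
        YellowTeamItem("ecology", "What physical supply chains and energy does it depend on?"),
    ],
)

for item in review.open_items():
    print(f"[{item.domain}] {item.question}")
```

Treated this way, unresolved questions can gate a launch in much the same way failing tests do, rather than evaporating once the design discussion ends.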
When we think through the effects a technology might have in the world, it seems reasonable to think linearly: we can expect it to cause this particular effect here, which could then lead to this particular side effect over there, and so on. This approach, however, fails to account for the fact that when a new technology is released, it will inevitably be used in every possible way afforded by its design and function, by all potential users. The idea that all new technologies enable new affordances is therefore a key part of yellow teaming. What is the full set of affordances that this technology enables, and how do these affordances link to motivations that are likely to exist out there in the world? The world delivered by a new technology will depend on the motivations that are upregulated by the affordances enabled by that new technology. Twitter was designed as a microblogging platform, and yet its release afforded users the opportunity to rapidly amplify narratives through the use of bots, anonymous accounts, and troll factories, making it a useful tool for targeted social engineering, information warfare, and political propaganda.[232] This is the world we have now. A yellow teaming process could have led to the emergence of a totally different kind of social media and with it, a totally different world.
A yellow teaming process could have led to the emergence of a totally different kind of social media and with it, a totally different world.
The power that is afforded to us by our current technologies enables destruction and creation on an unprecedented scale. The ability to destroy the world (with nuclear weapons, for instance) or alter the source code of our biology (via genetic engineering) is much closer to the power of gods than it is to the power of other primates, and yet if we deploy such power without the wisdom of gods, we risk catastrophe. Almost all wisdom cultures contain some element of the concept of restraint—the idea that sometimes it is important to refrain from certain choices or actions, no matter how tempting they may be.
What would the wisdom of gods look like in relation to today’s tech innovation landscape? At the level of the state, another word for restraint is regulation. This is, after all, precisely the role that the government should play in the maintenance of free markets: the constraint of unethical and damaging activities for which markets would otherwise exist (such as organ-harvesting or people-trafficking). It is worth acknowledging that “better regulation” sounds like an obvious and unexciting answer to the problem of great risks; yet at the same time, it must be recognized that our primary lever for containing great risks now is still based in mechanisms of governance and regulation, without which the disasters of leaded gasoline, thalidomide, and asbestos would have been far worse. While new ways of thinking are undeniably necessary, it is worth improving the mechanisms that exist now too.
New regulatory frameworks, specifically designed to mitigate the risks from only the most dangerous new technologies prior to deployment, are necessary as soon as possible. The aviation industry is subject to regulation in order to control for both malicious intent (such as terrorist activity) and accidental harms (such as mechanical failures). Regulation is stringent because the scale of the consequences of failure of either kind is so significant. A subset of new technologies is characterized by rapid increases in the speed to scale, rate of power growth, complexity of downstream effects, and the impact of worst-case scenarios—and some of these technologies are implicated in plausible scenarios that could lead to globally catastrophic events. AI, synthetic biology, and nanotechnology (for example) are exponential and existential: their rates of development and scale of impact are increasing exponentially, and the unintended consequences of their use may have the potential to threaten humanity’s survival. For this type of advanced technology, rigorous safety analysis focused on regulatory processes capable of containing such harms needs to be completed in advance of securing legal approval to proceed.
New powers of oversight must be created by regulatory bodies with incentives and institutional architectures adequate to the scale and power of these new technologies, with strong enough checks and balances in place to deal with the potential for corruption that arises in the stewardship of power. The right approach rests on the foundations of the precautionary principle: the principle that under uncertainty, and when there is a risk of significant or irreversible harm, it is advisable to take precautions in advance of any deployment. There is a wide range of other criteria that must be considered for technologies with the potential for catastrophic outcomes, including, for example, scrutability (i.e. how “understandable” the technology is, and thus how predictable its effects are in the world) and combinatorial effects (i.e. how harm can be caused by this tech in combination with other types and ecosystems of technologies, and whether it could exacerbate risks in other areas of tech development). New regulation of advanced technologies must be based on the understanding that in scenarios in which there is both significant uncertainty and severe consequences, the burden of proof must rest on demonstrating safety, not on demonstrating harm.
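A minimal sketch of how such criteria might be combined into a pre-deployment screen is shown below; the criteria names follow the text, but the scoring scale, thresholds, and example values are placeholder assumptions, not proposals from this article.

```python
# Hypothetical pre-deployment screen based on the precautionary principle.
# Criteria follow the text; scales, thresholds, and values are placeholders.

from dataclasses import dataclass

@dataclass
class TechAssessment:
    name: str
    severity_if_wrong: int      # 0-10: worst plausible outcome
    uncertainty: int            # 0-10: how poorly the effects are understood
    scrutability: int           # 0-10: how well the system can be inspected and explained
    combinatorial_risk: int     # 0-10: risk amplification in combination with other tech

def may_proceed(a: TechAssessment) -> bool:
    """Burden of proof sits on safety: high severity combined with high
    uncertainty blocks deployment until both are brought down."""
    if a.severity_if_wrong >= 8 and a.uncertainty >= 5:
        return False
    if a.scrutability <= 3 and a.combinatorial_risk >= 7:
        return False
    return True

candidate = TechAssessment("frontier model deployment", severity_if_wrong=9,
                           uncertainty=7, scrutability=2, combinatorial_risk=8)
print(may_proceed(candidate))   # False: safety has not been positively demonstrated
```

The essential property is the asymmetry: a technology that scores high on severity and uncertainty is blocked by default, and deployment proceeds only once safety has been positively demonstrated.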
Synergistic satisfiers are solutions to problems that address multiple needs at the same time.[233] This simple principle can be applied to how we design new tools and products. By looking for synergy between solutions to disparate problems—or approaches that give rise to multiple positive externalities from a single intervention—we can expand the breadth of our gaze to include more than just the narrow, product-focused pipeline of typical technology design.
The case studies of social media and regenerative agriculture referenced above are examples of synergistic design. In the case of social media, by altering the platforms used by billions of people around the world, we could simultaneously improve individual and collective mental health, enhance users’ cognitive capacity for understanding the world, grow civic participation, heal family dynamics, and reduce radicalization, violence, disinformation, and polarization. This example encapsulates the spirit of synergistic design: many compounding benefits flowing from a limited set of changes.
Industrial agriculture plays a central role in this article because it externalizes many harms across many sectors in the process of its narrow optimization of food production. Regenerative agriculture is a valuable counter-example because it addresses these harms and externalizes positive effects to the areas that are currently accruing damage. This is what makes it an example of a synergistic satisfier. The same kind of compounding benefits may be observed: improvements in physical and mental health, growth in environmental resilience, reduction in species extinction, healing of ocean dead zones, and then, in time, improvements in the economy, such as a reduction in deficit spending on healthcare costs.
Regenerative agriculture may be thought of as a specific application of the broader philosophical principle of permaculture, which is an approach to land use and food production that mirrors the patterns of nature and integrates human activity with ecosystems. Permaculture—and its instantiation in the specific practices of regenerative agriculture—aims to provide for human needs while serving multiple other functions within the complex web of interdependencies that constitute the local environment.
Every part of a permaculture system is designed to serve multiple values and perform multiple functions. Permaculture is an example of an approach that embodies the principles of synergistic design and anti-fragility—both components of ecological design, which itself draws inspiration from natural systems. In natural systems, each element serves multiple purposes, and each purpose is served by multiple elements. Trees, for example, do not only produce fruit, but also provide a habitat for thousands of other organisms, support beneficial pollinators, provide shade in high summer, and act as a protective windbreak for other plants. In the practice of permaculture, each plant is selected as part of a mixed ecosystem that serves and benefits other plants and organisms. The most generative areas of the landscape, such as the margins between fields and forests, are protected for the interaction between adjacent ecosystems, promoting synergy between high-level elements of the overall system. In permaculture design, the approach to the integration of human needs and the natural world aims to use the sustainability principles inherent to nature to build resilience and over time direct efforts toward closed-loop systems. It is an approach rooted in stewardship (as opposed to exploitation) of the biosphere.
There are thousands of further examples similar to those provided above. Social media and permaculture are informative together because they span two very different realms: the growing digital world and food production. Much good work has already been done on synergistic design frameworks across other domains in society, including in models for sustainable economics, future systems of education, corporate strategy, and urban design.[234]
From this last section of design considerations, it should be clear that further deep work is needed in design methodology. The intention of these brief descriptions is to ground the ideas in a sense of realistic attainability. Imagine a decentralized movement in which these ideas and practices begin to establish a foothold in early design processes across sectors of the global economy. Imagine that yellow teaming and synergistic design are taught at universities to engineers, scientists, law students, and architects. Imagine that at the same time, other movements also begin to promote the removal of money from politics, the legal internalization of externalities, the creation of systems of corporate transparency and accountability, enhanced oversight of industry, upgraded regulatory practices, restricted lobbying and campaign finance, and the enactment of Extended Producer Responsibility laws.[235] Such movements could give rise to a very different world than the one in which we live now. This is the path of healthy optimism: the faith that these aims, and others we have barely begun to imagine, can deliver a long, fulfilling, and healthy future for our children.
With characteristic economy, the naturalist John Muir wrote that “when we try to pick out anything by itself, we find it hitched to everything else in the universe.”[236] At the heart of a more sophisticated understanding of progress must be a humble awareness of the interconnectedness not only of the natural world, but increasingly of the global civilization on which our way of life now depends.
At the moment, there is little meaningful opposition to the ideology of relentless and increasingly rapid technological progress as the world’s primary obligate aim. It is the worldview of the small group of technologists and financiers that has fundamentally transformed societies over the last few decades, and it is the force driving the current AI arms race. Arms races, whether for new commercial technologies, nuclear weapons, or advanced rockets, have a tendency to lead toward outcomes in which everyone is far less safe than they used to be.
The potentially catastrophic externalities of our current path of narrow technological progress are largely ignored in the zeitgeist. With vast wealth, power, and popular support firmly behind it, our immature idea of progress is the most dangerous ideology in the world—far more so than any other radical worldview from across political or religious spectra. No other ideology drives the production of increasingly powerful physical technologies, with consequences for believers and unbelievers alike. No other ideology idolizes technology in the name of its constructive capacity, thereby accelerating growth in its total destructive capacity at the same time. Most of humanity is blind to the harm caused by this ideology, and instead actively pursues its goals, unable or unwilling to see where the road leads; though more and more people are seeing the reality of our path, most still feel stuck, victims of Stockholm syndrome. In a world of exponential growth, extraction, pollution, and arms races, this path can only lead to collapse.
But collapse is not inevitable. We all have at least some direct experience of what it’s like to grow up. It’s often hard, unfair, and complicated, but one way or another, and with varying degrees of success, we all have to attempt it. We each have the capacity to mature, to glimpse the reality of how little we truly know, and to look back over the journey to the present. For our global civilization, the same journey is overdue. To mature, we must approach reality with sufficient love and care to put aside our immature desires and attend to the world with humility and open curiosity. It is only then that the ideological veil covering our gaze will lift. Only then will our global civilization be able to grow up and become the wise steward of the power it has created.
As far as we know, places like our biosphere are rare in the vastness of the cosmos. There is no statement that can capture even a fraction of the value that has come into being on the surface of this small planet, or what it means to experience it over the course of a lifetime. To say that it is infinitely precious will have to suffice. What we can say definitively though, is that it is incomparably tiny and that everything we care about depends on it. For the things that we care about to persist, this infinitely precious place must be served and protected in a manner that we are demonstrably failing to achieve now. Our world of economics, politics, infrastructure, and institutions is not a foregone conclusion—it is defined by the choices and actions of humans, and it can be remade by humans. What we need to avert catastrophe is both fundamentally possible, and at the same time no less than what is required to bring about a radically healthier, kinder, and safer world. Bringing this potential world into being represents a far better story for humanity than that which is offered by the current progress narrative. Being a part of the development in progress, in service to all life in perpetuity, would be a far more meaningful existence than the one you are living now.
The noun “affordance” was coined by the American ecological psychologist James J. Gibson. It was initially used in the study of animal-environment interaction and has since also been used in the study of human-technology interaction. An affordance is an available use or purpose of a thing or entity. For example, a couch affords being sat on, a microwave button affords being pressed, and a social media platform affords letting users share content with each other.
Agent provocateur is French for “provoking agent.” The term refers to individuals who attempt to persuade another individual or group to commit a crime or act rashly, or to implicate them in such acts, in order to defame, delegitimize, or criminalize the target. Examples include starting a conflict at a peaceful protest or attempting to implicate a political figure in a crime.
Ideological polarization is generated as a side effect of content recommendation algorithms optimizing for user engagement and advertising revenues. These algorithms upregulate content that reinforces existing views and filter out countervailing information, because doing so has been shown to increase time on-site. The result is an increasingly polarized perspective founded on a biased information landscape.
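To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. The stance scores, the engagement model, and the function names are illustrative assumptions for this article, not a description of any platform’s actual system; the point is only that ranking purely by predicted engagement can crowd countervailing information out of a feed.

```python
# Illustrative sketch only: a toy "engagement-optimized" ranker. The stance values,
# the engagement model, and all names are assumptions made for this example.
from dataclasses import dataclass


@dataclass
class Item:
    title: str
    stance: float  # -1.0 to 1.0: position on some contested issue (assumed label)


def predicted_engagement(user_stance: float, item: Item) -> float:
    """Toy model: predicted engagement rises as an item agrees with the user's view."""
    agreement = 1.0 - abs(user_stance - item.stance) / 2.0  # 1.0 = perfect agreement
    return agreement  # a real recommender blends many signals; this isolates one


def rank_feed(user_stance: float, candidates: list[Item], k: int = 3) -> list[Item]:
    """Rank purely by predicted engagement, the optimization target in question."""
    ranked = sorted(candidates, key=lambda it: predicted_engagement(user_stance, it), reverse=True)
    return ranked[:k]


if __name__ == "__main__":
    items = [
        Item("Strongly agrees with the user", 0.9),
        Item("Neutral explainer", 0.0),
        Item("Opposing viewpoint", -0.8),
        Item("Mildly agrees with the user", 0.5),
    ]
    for item in rank_feed(user_stance=0.8, candidates=items):
        print(item.title)
    # The opposing viewpoint never reaches the top of the feed: the "filter" emerges
    # from the engagement objective itself, not from any explicit intent to polarize.
```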
To “cherry pick” when making an argument is to selectively present evidence that supports one’s position or desired outcome, while ignoring or omitting any contradicting evidence.
The ethical behavior exhibited by individuals in service of bettering their communities and their state, sometimes forgoing personal gain in pursuit of a greater good for all. In contrast to other sets of moral virtues, civic virtue refers specifically to standards of behavior in the context of citizens participating in governance or civil society. What constitutes civic virtue has evolved over time and may differ across political philosophies. For example, in modern-day democracies, civic virtue includes values such as guaranteeing all citizens the right to vote and freedom from discrimination on the basis of culture, race, sex, religion, nationality, sexual orientation, or gender identity. A shared understanding of civic virtue among the populace is integral to the stability of a just political system, and waning civic virtue may result in disengagement from collective responsibilities, noncompliance with the rule of law, a breakdown in trust between individuals and the state, and degradation of the intergenerational process of passing on civic virtues.
Closed societies restrict the free exchange of information and public discourse, and impose top-down decisions on their populace. Unlike the open communications and dissenting views that characterize open societies, closed societies promote opaque governance and prevent the public opposition that might arise in free and open discourse.
A general term for collective resources in which every participant of the collective has an equal interest. Prominent examples are air, nature, culture, and the quality of our shared sensemaking basis or information commons.
The cognitive bias of 1) exclusively seeking or recalling evidence in support of one's current beliefs or values, 2) interpreting ambiguous information in favor of one’s beliefs or values, and 3) ignoring any contrary information. This bias is especially strong when the issues in question are particularly important to one's identity.
In science and history, consilience is the principle that evidence from independent, unrelated sources can “converge” on strong conclusions. That is, when multiple sources of evidence are in agreement, the conclusion can be very strong even when none of the individual sources of evidence is significantly so on its own.
While “The Enlightenment” was a specific instantiation of cultural enlightenment in 18th-century Europe, cultural enlightenment is a more general process that has occurred multiple times in history, in many different cultures. When a culture goes through a period of increasing reflectivity on itself it is undergoing cultural enlightenment. This period of reflectivity brings about the awareness required for a culture to reimagine its institutions from a new perspective. Similarly, “The Renaissance” refers to a specific period in Europe while the process of a cultural renaissance has occurred elsewhere. A cultural renaissance is more general than (and may precede) an enlightenment, as it describes a period of renewed interest in a particular topic.
A deep fake is a recording of a person that has been digitally altered, typically using AI, for the purpose of political propaganda, sexual objectification, defamation, or parody. Deep fakes are becoming increasingly difficult to distinguish from authentic recordings, especially to an untrained eye.
Empiricism is a philosophical theory that states that knowledge is derived from sensory experiences and relies heavily on scientific evidence to arrive at a body of truth. English philosopher John Locke proposed that rather than being born with innate ideas or principles, man’s life begins as a “blank slate” and only through his senses is he able to develop his mind and understand the world.
It is both the public spaces (e.g., town hall, Twitter) and private spaces where people come together to pursue a mutual understanding of issues critical to their society, and the collection of norms, systems, and institutions underpinning this society-wide process of learning. The epistemic commons is a public resource; these spaces and norms are available to all of us, shaped by all of us, and in turn, also influence the way in which all of us engage in learning with each other. For informed and consensual decision-making, open societies and democratic governance depend upon an epistemic commons in which groups and individuals can collectively reflect and communicate in ways that promote mutual learning.
Inadvertent, emotionally or politically motivated closed-mindedness, manifesting as certainty or overconfidence when dealing with complex, indeterminate problems. Epistemic hubris can appear in many forms. For example, it is often demonstrated in the convictions of individuals influenced by highly politicized groups; it shows up in corporate or bureaucratic contexts that err towards certainty because of information-compression requirements; and it appears in media, where polarized rhetoric is incentivized due to its attention-grabbing effects. Note: for some kinds of problems it may be appropriate, or even imperative, to have a degree of confidence in one's knowledge—this is not epistemic hubris.
An ethos of learning that involves a healthy balance between confidence and openness to new ideas. It is neither hubristic, meaning overly confident or arrogant, nor nihilistic, meaning believing that nothing can be known for certain. Instead, it is a subtle orientation that seeks new learning, recognizes the limitations of one's own knowledge, and avoids absolutisms or fundamentalisms—which are rigid and unyielding beliefs that refuse to consider alternative viewpoints. Those who demonstrate epistemic humility will embrace truths where these are possible to attain but are generally inclined to continuously upgrade their beliefs with new information.
This form of nihilism is a diffuse and usually subconscious feeling that it is impossible to really know anything, because, for example, “the science is too complex” or “there is fake news everywhere.” Without a shared ability to make sense of the world as a means to inform our choices, we are left with only the game of power. Claims of “truth” are seen as unwarranted or intentional manipulations, as weaponized or not earnestly believed in.
Epistemology is the philosophical study of knowing and the nature of knowledge. It deals with questions such as “how does one know?” and “what is knowing, known, and knowledge?”. Epistemology is considered one of the four main branches of philosophy, along with ethics, logic, and metaphysics.
Derived from a Greek word meaning custom, habit, or character; the set of ideals or customs that lays the foundation around which a group of people coheres. This includes the set of values from which a culture derives its ethical principles.
The ability of an individual or group to shape the perception of an issue or topic by setting the narrative and determining the context for the debate. A “frame” is the way in which an issue is presented or “framed”, including the language, images, assumptions, and perspectives used to describe it. Controlling the frame can give immense social and political power to the actor who uses it because the narratives created or distorted by frame control are often covertly beneficial to the specific interests of the individual or group that has established the frame. As an example, politicians advocating for tax cuts or pro-business policies may use the phrase "job creators" when referring to wealthy corporations in order to suggest their focus is on improving livelihoods, potentially influencing public perception in favor of the politician's interests.
Discourse oriented towards mutual understanding and coordinated action, with the result of increasing the faith that participants have in the value of communicating. The goal of good faith communication is not to reach a consensus, but to make it possible for all parties to change positions, learn, and continue productive, ongoing interaction.
Processes that occupy vast expanses of both time and space, defying the more traditional sense of an "object" as a thing that can be singled out. The concept, introduced by Timothy Morton, invites us to conceive of processes that are difficult to measure, always around us, globally distributed and only observed in pieces. Examples include climate change, ocean pollution, the Internet, and global nuclear armaments and related risks.
Information warfare is a primary aspect of fourth- and fifth-generation warfare. It can be thought of as war with bits and memes instead of guns and bombs. Examples of information warfare include psychological operations like disinformation, propaganda, or manufactured media, or non-kinetic interference in an enemy's communication capacity or quality.
Refers to the foundational process of education which underlies and enables societal and cultural cohesion across generations by passing down values, capacities, knowledge, and personality types.
The phenomenon of having your attention captured by emotionally triggering stimuli. These stimuli strategically target the brain center that we share with other mammals that is responsible for emotional processing and arousal—the limbic system. This strategy of activating the limbic system is deliberately exploited by online algorithmic content recommendations to stimulate increased user engagement. Two effective stimuli for achieving this effect are those that can induce disgust or rage, as these sentiments naturally produce highly salient responses in people.
An online advertising strategy in which companies create personal profiles about individual users from vast quantities of trace data left behind from their online activity. According to these psychometric profiles, companies display content that matches each user's specific interests at moments when they are most likely to be impacted by it. While traditional advertising appeals to its audience's demographics, microtargeting curates advertising for individuals and becomes increasingly personalized by analyzing new data.
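As a purely illustrative sketch (the profile format, scoring, and example data below are assumptions made for this article, not any ad platform’s real system), the following Python fragment shows the basic mechanic described above: inferring an interest profile from trace data and serving whichever ad matches it best.

```python
# Illustrative sketch only: toy interest profiling and ad selection.
# All field names, weights, and example data are assumptions for this example.

def build_profile(trace_events: list[str]) -> dict[str, float]:
    """Infer normalized interest weights from a user's online trace data (toy version)."""
    counts: dict[str, float] = {}
    for topic in trace_events:
        counts[topic] = counts.get(topic, 0.0) + 1.0
    total = sum(counts.values()) or 1.0
    return {topic: count / total for topic, count in counts.items()}


def select_ad(profile: dict[str, float], ads: list[dict]) -> dict:
    """Pick the ad whose targeted topics overlap most with the inferred profile."""
    def score(ad: dict) -> float:
        return sum(profile.get(topic, 0.0) for topic in ad["topics"])
    return max(ads, key=score)


if __name__ == "__main__":
    trace = ["running", "running", "fitness", "travel"]  # trace data left behind online
    ads = [
        {"name": "Trail running shoes", "topics": ["running", "fitness"]},
        {"name": "Luxury cruise", "topics": ["travel"]},
        {"name": "Office chairs", "topics": ["work"]},
    ]
    print(select_ad(build_profile(trace), ads)["name"])  # -> "Trail running shoes"
    # Each new trace event sharpens the profile, so the matching becomes
    # increasingly personalized over time, as described above.
```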
False or misleading information, irrespective of the intent to mislead. Within the category of misinformation, disinformation is a term used to refer to misinformation with intent. In news media, the public generally expects a higher standard for journalistic integrity and editorial safeguards against misinformation; in this context, misinformation is often referred to as “fake news”.
A prevailing school of economic thought that emphasizes the government's role in controlling the supply of money circulating in an economy as the primary determinant of economic growth. This involves central banks using various methods of increasing or decreasing the money supply of their currency (e.g., altering interest rates).
A form of rivalry between nation-states or conflicting groups, by which tactical aims are realized through means other than direct physical violence. Examples include election meddling, blackmailing politicians, or information warfare.
Open societies promote the free exchange of information and public discourse, as well as democratic governance based on the participation of the people in shared choices about their social futures. Unlike the tight control over communications and suppression of dissenting views that characterize closed societies, open societies promote transparent governance and embrace good-faith public scrutiny.
The modern use of the term 'paradigm' was introduced by the philosopher of science Thomas Kuhn in his work "The Structure of Scientific Revolutions". Kuhn's idea is that a paradigm is the set of concepts and practices that define a scientific discipline at any particular period of time. A good example of a paradigm is behaviorism – a paradigm under which studying externally observable behavior was viewed as the only scientifically legitimate form of psychology. Kuhn also argued that science progresses by way of "paradigm shifts," when a leading paradigm transforms into another through advances in understanding and methodology; for example, when the leading paradigm in psychology transformed from behaviorism to cognitivism, which looked at the human mind from an information processing perspective.
The theory and practice of teaching and learning, and how this process influences, and is influenced by, the social, political, and psychological development of learners.
The ability of an individual or institutional entity to deny knowing about unethical or illegal activities because there is no evidence to the contrary or no such information has been provided.
First coined by philosopher Jürgen Habermas, the term refers to the collective common spaces where people come together to publicly articulate matters of mutual interest for members of society. By extension, the related theory suggests that impartial, representative governance relies on the capacity of the public sphere to facilitate healthy debate.
The word itself is French for rebirth, and this meaning is maintained across its many uses. The term is commonly used with reference to the European Renaissance, a period of European cultural, artistic, political, and economic renewal following the Middle Ages. The term can also refer to other periods of great social change, such as the Bengal Renaissance (beginning in late-18th-century India).
A term proposed by sociologists to characterize emergent properties of social systems after the Second World War. Risk societies are increasingly preoccupied with securing the future against widespread and unpredictable risks. Grappling with these risks differentiates risk societies from modern societies, given that these risks are the byproduct of modernity’s scientific, industrial, and economic advances. This preoccupation with risk stimulates a feedback loop and a series of changes in political, cultural, and technological aspects of society.
Sensationalism is a tactic often used in mass media and journalism in which news stories are explicitly chosen and worded to excite the greatest number of readers or viewers, typically at the expense of accuracy. This may be achieved by exaggeration, omission of facts and information, and/or deliberate obstruction of the truth to spark controversy.
A process by which people interpret information and experiences, and structure their understanding of a given domain of knowledge. It is the basis of decision-making: our interpretation of events will inform the rationale for what we do next. As we make sense of the world and accordingly act within it, we also gather feedback that allows us to improve our sensemaking and our capacity to learn. Sensemaking can occur at an individual level through interaction with one’s environment, collectively among groups engaged in discussion, or through socially-distributed reasoning in public discourse.
A theory stating that individuals are willing to sacrifice some of their freedom and agree to state authority under certain legal rules, in exchange for the protection of their remaining rights, provided the rest of society adheres to the same rules of engagement. This model of political philosophy originated during the Age of Enlightenment from theorists including, but not limited to, John Locke, Thomas Hobbes, and Jean-Jacques Rousseau. It was revived in the 20th century by John Rawls and is used as the basis for modern democratic theory.
Autopoiesis—from the Greek αὐτο- (auto-) 'self' and ποίησις (poiesis) 'creation, production'—is a term coined in biology that refers to a system’s capability for reproducing and maintaining itself by metabolizing energy to create its own parts, and eventually new emergent components. All living systems are autopoietic. Societal Autopoiesis is an extension of the biological term, making reference to the process by which a society maintains its capacity to perpetuate and adapt while experiencing relative continuity of shared identity.
A fake online persona, crafted to manipulate public opinion without implicating the account creator—the puppeteer. These fabricated identities can be wielded by anyone, from independent citizens to political organizations and information warfare operatives, with the aim of advancing their chosen agenda. Sock puppet personas can embody any identity their puppeteers want, and a single individual can create and operate numerous accounts. Combined with computational technology such as AI-generated text or automation scripts, propagandists can mimic multiple seemingly legitimate voices to create the illusion of organic popular trends within the public discourse.
Presenting the argument of disagreeable others in their weakest forms, and after dismissing those, claiming to have discredited their position as a whole.
A worldview that holds technology, specifically developed by private corporations, as the primary driver of civilizational progress. For evidence of its success, adherents point to the consistent global progress in reducing metrics like child mortality and poverty while capitalism has been the dominant economic paradigm. However, the market incentives driving this progress have also resulted in new, sometimes greater, societal problems as externalities.
Used as part of propaganda or advertising campaigns, these are brief, highly-reductive, and definitive-sounding phrases that stop further questioning of ideas. Often used in contexts in which social approval requires unreflective use of the cliché, which can result in confusion at the individual and collective level. Examples include all advertising jingles and catchphrases, and certain political slogans.
A proposition or state of affairs that is impossible to verify or prove true. A further distinction is that a state of affairs can be unverifiable at this time (for example, due to constraints in our technical capacity) or unverifiable in principle, meaning there is no possible way to verify the claim.
Creating the image of an anti-hero who epitomizes the worst of the disagreeable group, and contrasts with the best qualities of one's own, then characterizing all members of the other group as if they were identical to that image.