Challenges to Making Sense of the 21st Century



Open societies face unparalleled learning crises from accelerating change in technology and science.

Conditions in the 21st century require individuals and societies to find a new sense of commitment to ongoing learning. Technological and scientific successes have remade the social world, leading to a much-heralded information age. The results for society and culture have included innovative changes in daily life, but also a growing number of increasingly complex public risks and a disabling sense of information overload. We argue that the rate at which information is becoming more complex has started to drastically outpace human capacities for learning. Either we increase our capacity for learning at scale, or we may be forced to abandon the projects and ideals of an open society—with disastrous consequences. We all need to recognize the problem and react accordingly.

Liberal democracies and related markets have historically been based on the notion that equal access to reliable and high-quality information enables an open society. This has required the existence of media, education, and political discourse sufficient to assure widespread capacities for adequate public sensemaking. Over time, science and technology have driven societal changes that have rendered this method of social organization both increasingly difficult to achieve and increasingly vital to accomplish. Changes in science, media, and the nature of our basic social structures (such as economics, politics, and warfare) have been building in complexity at an accelerating rate. Our ability to make sense of things can no longer keep pace. This is a situation in which effective innovations are needed that can upgrade our capacities for learning, fast. We have little choice but to learn new ways to make sense of the world together, in a new kind of public sphere. This new approach to sensemaking is needed to address an increasing number of consequential decisions facing our governments, communities, and families.

The response to these conditions has been a general sense of being overwhelmed, often resulting in epistemic nihilism. This form of nihilism is a diffuse and usually subconscious feeling that it is impossible to really know anything, because, for example, “the science is too complex” or “there is fake news everywhere.” Without a shared ability to make sense of the world as a means to inform our choices, we are left with only the game of power. Claims of “truth” are seen as unwarranted or intentional manipulations, as weaponized or not earnestly believed in.

Our situation may also promote epistemic hubris, the belief that some form of knowledge can in fact clearly and definitively explain and predict those things that are most important in the world. On this view, indeterminacy is overblown, and even the thorniest problems have a clear answer that everyone should accept. While the philosophy of science itself is committed to overcoming this form of hubris, scientific findings are often misused and misunderstood, especially in highly politicized contexts. In these contexts, epistemic hubris and nihilism form a dangerous symbiosis. Individuals and cultural groups oscillate between the hopeless mood of “post-truth” culture and the peaks of polarizing certainties that emerge around politically significant scientific and geopolitical issues.

There is a line of thought that threads a needle between these two states. A stance of epistemic humility and commitment is offered here, which forms the core of what we begin to discuss as a new ethos for public sensemaking and civic engagement—an ethos of learning fit to meet the actual challenges of the 21st century. This ethos is not something that can be dictated by any group or authority. We are not claiming to have discovered it or invented it, nor are we prescribing some specific approach. We understand this ethos as emergent. It is a non-compulsory civic commitment between earnest citizens, a covenant among those who understand the seriousness of the consequences of the epistemic crisis. It is for those seeking the necessary pathways forward in collaboration and learning.

Changing Sciences: The Dawn of Hyperobjects

Consider the psychological position of the average citizen of a Western country today. For this individual, some of the most essential risks and decisions of their life involve invisible processes that require scientific explanation, often in ways that run counter to everyday experience. The vast majority are aware of and “believe in” the existence and consequences of important realities of public concern, such as nuclear radiation, climate change, and pandemics. Yet only a small minority truly understand these issues as they are discussed at the leading edge of scientific research, where there is often disagreement. An expert in macroeconomics is unlikely also to be an expert in virology, and these are only two of many fields relevant to understanding pandemics. As a result, the relationships between diverse phenomena are often overlooked. Public health and economic wellbeing are then played off against each other, rather than being addressed through a joint policy based on interdisciplinary synthesis and a method for weighing values and value trade-offs. Specialized knowledge proliferates in silos.

There has never been more scientific information about more consequential issues than at the current moment. And it is by virtue of modern society’s successes in organizing certain kinds of “knowledge production” that we are plunged as individuals and groups into a state where information overwhelms us. This state shrouds the most important issues in a “cloud of unknowing”—or worse, a cloud of false claims to definitive knowledge.

One end result of the massive technological and scientific enterprises of the modern world has been the discovery of realities that are so vast and complex that they exhaust even the best of our scientific measures and methods. They have been called hyperobjects by the ecological philosopher and literary critic Timothy Morton. The term refers to those objects of advanced science that we live with as part of everyday life, and yet which are nearly incomprehensible. Hyperobjects are discovered and revealed by leading scientific methods, literally rendered “visible” through data, and yet what these approaches discover is, in part, the limits of our ability to fully explain certain very important phenomena. Hyperobjects are so incalculably complicated, or so inherently complex, emergent, and dynamic that their full “behavior” can’t be explained exactly or exhaustively. This of course means a hyperobject’s “behavior” cannot be predicted with specificity either.

The list of hyperobjects in the news includes nuclear radiation, planetary-scale climate change, and pandemics, but the true list is much longer. Systemic injustice, world hunger, planetary-scale computational architectures, and bioregional zones (such as the Amazon rainforest), are further examples. Hyperobjects are so large, complex, nonlinear, multicausal, long lasting, and beyond human proportions that they disrupt effective public sensemaking and place great demands on individual psychology. As with climate change or radioactive waste, we all know that “it” is out there, but we can’t see “it,” nor truly understand “it,” at least not without the help of specialists—and even they disagree about important details.

Of course, there are many kinds of invisible objects that scientists understand sufficiently. For example, a microscope is required to see a plant cell, and the process of photosynthesis is difficult to understand—yet a house plant is not a hyperobject just because a scientist understands it much better than the average person ever will. To be clear, hyperobjects constitute a distinct class of realities only recently discovered by scientists, which extend across vast physical and temporal scales, and which require specific technologies to disclose—such as complex measurement systems that often did not exist before the 21st century.

Understanding hyperobjects as a historically emergent and novel challenge to 21st-century public sensemaking is the first step of the argument presented here. Our most important practices and institutions for public sensemaking in open societies—from schools, to the news media, to political and civic discourse—were all created before hyperobjects became a focus of scientific and public concern. This is part of the learning crisis: we are playing cognitive catch-up with an ever-more-complex world of hyperobjects.

Here is a specific example. For millennia, humans more or less agreed in general terms about the nature of the planet’s oceans, and how to live with and make use of them. Today, scientists are trying to figure out what the ocean actually is, and how it works over massive expanses of time and space, all in order to assure the possibility of continued human life. The “ocean” was a sometimes scary and often beautiful thing that everyone could relate to easily by seeing it, swimming in it, eating fish from it, sailing on it, and navigating it. The average citizen understands that ocean. But the Earth’s oceans have since come to be understood by scientists as a massively complex hyperobject with trends and tendencies such as nitrogen levels, microplastics and other pollution, pH, stored CO2 interacting with the atmosphere, and varying amounts and types of flora and fauna. The future trends of these factors will impact the biosphere and human food systems profoundly.

What could be called the “hyper-ocean” is disclosed by means of a massive latticework of sensor networks, as advances in measurement technologies allow vast reams of data to be gathered and organized. The scientific rendering of any hyperobject requires techniques at the leading edges of measurement and quantification, as models of ocean temperature and acidification use mathematics from complexity and chaos science. Science renders for the public imagination the “hyper-ocean,” which is more detailed, predictive, and objective than any sunset view of the beautiful waves, cliffs, beach, and clouds. The “hyper-ocean” is an uncanny and literally incomprehensible reality forced into public sensemaking—forced because it appears that our future depends on the quality and trustworthiness of research about this bigger, massively complex “hyper-ocean.” We have no choice but to try to understand the “hyper-ocean,” and what it teaches us about the limits of our knowledge.

Note that it is precisely the success of oceanography and related fields (not their failures) that has yielded newfound humility about just how much we do not know (and may never know) about the planet’s oceans. It used to be that we could dump waste into the ocean and forget about it. Now when there is a major oil spill, like the Deepwater Horizon spill in the Gulf of Mexico in 2010, we are acutely aware of the danger (see Figure 1). We are equipped with sophisticated models and metrics, and yet because of the complexity and indeterminacy of the best science, our advanced tools convincingly reveal our ignorance: we simply do not know what the longer-term impacts of the spill will be, even if we can make well-justified guesses. Everyday citizens, such as fishermen who make their living at sea, inherited their grandfather’s ocean. They now realize that the “hyper-ocean” has complex probabilities and risks, the specific consequences of which (such as radically unpredictable fishing yields) are making many of their businesses technically uninsurable.

The point here is that there is no insuring against, controlling, or predicting hyperobjects—indeed, they cannot be definitively understood—and yet they loom large in public sensemaking.

What becomes of political choice and democracy in an age of hyperobjects? Historically, the state, academia, and the media have been legitimated in public perception by virtue of being “the ones who know and who can, thereby, predict and control.” The dawning age of hyperobjects might suggest that this form of legitimacy is over. This brings us directly to the next step in explaining the compounding challenges to public sensemaking in the 21st century.

Figure 1: NASA satellite image of the Deepwater Horizon oil spill. The hyperobject extended for thousands of miles, and will likely have impacts for hundreds of years to come. Note that the spill is a hyperobject within the larger hyperobject of the ocean, just to give a sense of the complexity. Image in the public domain.

The Makings of an Epistemological Crisis: Politics and Invisible Things

When the complexity of the demands placed on public sensemaking increases, related sensemaking capacities have to increase too, or else the social system is “flying blind.” This leads inevitably to a state of crisis. The sensemaking institutions and environments around us today have become increasingly less capable of meeting the challenges they face. Our schools, governments, news media, and informational technologies are being challenged by important new realities, the culmination of a long history of their own successes.

Public involvement in governance requires citizenry capable of adequately understanding the issues implicated in governance decision-making.

The nature of scientific knowledge-production has transformed in the past few decades, along with major advances in measurement, quantification, and computation. Many important sciences have been changing, as have their objects of study. The nature of the objects that make up our shared public world has changed. Topics for public sensemaking are becoming much more complex, indeterminate, and mediated through the expert cultures of scientists—and then again through the expert cultures of media organizations. Never before has society been so dependent upon the dissemination of highly sophisticated scientific information to understand and mitigate complex risks to public welfare. This leads us to consider a well-grounded sociological theory that suggests that while societies have always faced risks, our society is outpaced by the complexity of the risks it faces. Our ability to learn about and mitigate risks lags behind their increasing complexity. In this situation, both scientific authorities and voices in the media take on unprecedented roles, with a new responsibility for providing the public with resources for sensemaking around these complex risks. The final step is seeing that contemporary digital media landscapes are wide open to manipulation, informational warfare, and disorienting, addictive user interfaces. Right when society requires unprecedented upgrades in public sensemaking processes, we face instead a perfect storm of factors contributing to an epistemological crisis.

Living Within Invisible Risks

The radiation produced by nuclear waste is invisible to the naked eye. Below certain thresholds, the negative health effects of exposure to radiation take years or decades to be seen, which means the risk is undetectable to the average person until it is too late. Profoundly dangerous materials are produced routinely as a part of nuclear energy and weapons programs. The risks involved are complex, long lasting, and nonlocal. This means that while scientists understand a great deal about nuclear fallout, at this point they do not know exactly what will happen if a catastrophic event occurs, how long its effects will last, or in what places they will be seen and felt. Our understanding of the impacts on biological organisms leaves a great deal unclear or simply unknown. There is no definite plan on the part of world governments and energy companies detailing exactly what to do with all the nuclear waste that is being produced—that is, no plan to manage the disposal of nuclear waste for the complete ten thousand to one million years during which the materials will be dangerous to biological life[4]. Let that sink in. Governments are engaging in sensemaking and coordinated action around scientific and ethical realities at the extremities of deep time and scientific indeterminacy. While it has been common for human societies to consider the future consequences of their collaborative work, the scale and ramifications of our commitments to the future are staggeringly novel.

Part of the difficulty in handling radioactive waste is that disposal is effectively impossible; instead, it must be specially handled and placed in nuclear-safe, long-term storage facilities. “Long-term” in this context means up to a million years, although scientists disagree about exactly how long the materials will be dangerous. Creating such a long-term storage facility requires unprecedented feats of engineering. It also requires unprecedented feats of communication, as the risks of what is contained within the storage site must be communicated across deep time to any future humans (or other intelligent biological lifeforms) that might stumble upon the site. What symbols and signs might communicate such a grave danger to future humans living in an almost completely unimaginable world? This was the remarkable task set for a group of scholars convened by the U.S. Congress in 1984[5]. There was no consensus on how to proceed. The longest-running project aimed at creating a long-term storage site for our nuclear waste, deep within Yucca Mountain in Nevada, was defunded during the Obama administration. It appears that mitigating the impact of the invisible risks posed by radiation is a difficult thing to push through already contentious government and military budgets[6].

Public sensemaking on the issue has taken place only sporadically and insufficiently. In part, this is because it is so complex. It requires not only the convening of ongoing learning and open discourse about future technology and advanced nuclear science, but it also involves aspects of economics and the dynamics of political will. In the presence of wars, election cycles, short memories, and immediate, tangible things voters want, how is it even possible to do public sensemaking around such abstract and long-term issues?

Radioactive waste is one of the things being monitored by complex scientific instrumentation as part of the “hyper-ocean.” The Fukushima nuclear disaster thrust the radiation levels of the ocean into public view and opened an almost endless and unanswerable set of questions about the long-term impacts of the ongoing radiation leak[7]. Some scientists think that a portion of this radiation is insignificant due to the half-lives of certain nuclear species, while others have grave concerns for a multiplicity of potential outcomes. This is an invisible risk that affects all life on Earth. Only experts can officially determine it to be dangerous, and even they disagree on what levels of danger are acceptable. This disaster and its consequences, like Chernobyl before it, epitomize what the German sociologist Ulrich Beck outlines in his work on the risk society[8].

Beck’s basic thesis is that modern societies built up technologies and governance based on the prediction and control of natural and social processes, but that this approach to social life has now reached its limit. Dams and hospitals, scientific research and development, factories and schools: in many respects, the modern project succeeded at what it set out to do. Yet it was all the while creating unintended second- and third-order effects that would undermine the project in the long run, until ultimately, claims Beck, the modern period of social organization ended. These second- and third-order effects include, for example, industrial pollution (once thought safe) leading to environmental degradation, which then cascades into human migration and eventual political crises in open societies. Technologies resulting from breakthroughs in digital communication eventually and unexpectedly undermine the quality and integrity of civic discourse and participation—a pattern discussed further in the next section. Today, a new kind of society is emerging. It is built around a globally shared, reflexive response to the widespread and indeterminate risks created by modern society. Beck calls this new epoch “reflexive modernity” or the risk society.

The risk society continues to do scientific research, but this research is increasingly focused on understanding the hyperobjects created as a byproduct of industrial society, such as climate change, pandemics, planetary limits, human migration, and high-tech warfare. The risk society also continues to build technologies based on advancing sciences, only they are increasingly aimed at mitigating the worst of the risks we have inherited. The risk society continues to engage in public sensemaking with an intention to preserve democracy, only now this requires deciphering the hyperobjects being addressed by science, which often involve a grave and incalculable risk to the public. The media becomes profoundly consequential in the risk society, on a planetary scale, as it holds a new form of responsibility with much more exacting and intensive demands[9].

The ethics that have governed the growth and flourishing of modern journalism and media are from another, prior society, and must be upgraded substantially to provide for the needs of the risk society.

What Is To Be Done?

We are seeing an unprecedented proliferation of public risks that can only be characterized as hyperobjects. As outlined above, this is undermining the political legitimacy of modern democratic nation states because it is changing the demands made on our practices of public sensemaking. These states and their civil societies are grounded in declared and demonstrated problem-solving capacities, as part of the modern project of improved technological methods of prediction and control. Democracies are further based on the idea that citizens need to be informed about the state of the world in ways that allow them to vote in favor of certain coordinated group actions. Because the risk society involves public sensemaking about uncontrollable and unpredictable hyperobjects, a crisis of legitimacy ensues in which experts and everyday citizens alike call the bluff. The modern state and corporation simply cannot fully explain, predict, or control the unintended consequences of their past and ongoing activities.

Beck gives the example of insurance markets that face the unpredictable and likely catastrophic dynamics of planetary climate change. How do you insure against whatever losses might be incurred during climate change-induced civilizational collapse? The nonsense of the question reveals the paradoxes of the risk society; we have no choice but to live with an indeterminate amount of risk, which remains invisible unless rendered for us by experts. Ordinary people are forced into positions that are uninsurable and vulnerable, carrying unprecedented personal risk in the context of a widely distributed public danger. Of course, humans have been living with risk as a part of social life forever. But never before have so many risks been put before the public in the terms of complex, highly specialized science. Many of these risks ultimately result in political consequences breaking through into the immediacy of everyday life.

It is precisely because of the personal and occasionally existential nature of these novel risks that they must be covered and discussed by the media. Fukushima is a case in point. In the weeks after the disaster, news media around the world informed citizens about the planetary-scale fallout circulating in the ocean and the atmosphere. Where else could a non-specialist look? In this way, hyperobjects are also eventually made part of cultural production and everyday conversation. In particular, the media is responsible for framing the “official” expert discourse about the hyperobject of concern, translating the reality of the risk into the language of the general public. In this context, because the science is indeterminate while the political consequences are dire, the media tends to establish a single expert discourse.

A simple consensus is summarized for the public, in the sense of being made salient, presentable, and credible. Predictably, there then emerges a “counter-expert discourse” that thrives by pointing out the indeterminacy of the science, or other dimensions of the issue that are typically downplayed, ignored, or legitimately misunderstood by the media. Public sensemaking becomes reduced to warring and politicized discourses around very real risks, when in reality, these risks are often too complex to claim determinate knowledge.

Today, public sensemaking takes place within the context of digital news and social media. This context fundamentally changes the nature of public sensemaking, at the precise moment that it is vexed by the novel appearance of hyperobjects in the public imagination. Politicized and risky hyperobjects are all over digital media, and because they are so salient, they have become the focus of information warfare, attention-capture, and a host of other problems for sensemaking.

The Epistemological Weapon in Our Pockets

The smartphone has become necessary for effectively navigating society: travel, shopping, health records, and nearly all forms of mediated communication. The actual workings of the phone are mostly a mystery, unless you are a software, hardware, or network engineer. Even then, most engineers do not understand the details involved in mining and processing the rare earth minerals that comprise much of the physical hardware of the phone. The inner workings of the phone, indeed its whole construction, are invisible and beyond our comprehension. The screen of the phone connects you with another 21st-century necessity that is actually (literally) incomprehensibly complicated: the vast digital media landscape of the Internet. The Internet enables a vast and complex economic and infrastructural organization—a planetary-scale computational stack[10]—that outstrips the understanding of those who are most directly impacted by it. The world of digital information is a vast and complex hyperobject, and like most hyperobjects, it comes with grave and unpredictable risks.

The unintended consequences of the success of digital media technologies present us with unprecedented and complex dangers to public welfare. Chief among these dangers are interruptions, distractions, and misinformation that people encounter when trying to gain access to reliable information about risks. Living in a risk society, the very thing that should be making us aware of risks (the media) is putting us at greater risk, specifically by allowing public discussions of risk to become polluted with advertisements, entertainment, and misinformation. The noise is louder than the signal. Aside from addiction, attention disruption, and other aspects of psychological distress that emerge from long-term engagement with social media user interfaces, there are other very real epistemological risks involved[11].

The smartphone is most people’s main window into the media’s framing of the hyperobjects created by our civilization. Yes, this means you are using a hyperobject to gain visibility into other hyperobjects. If you are getting dizzy, that is a normal reaction. This begins to give a sense of how profound (and profoundly confusing) the challenges are to public sensemaking in the 21st century, and how deep the epistemological crisis goes. Unfortunately, there are further complications still.

Because of the salience and public significance of what is viewed on phones, and their ubiquity as a part of everyday life, digital media applications have become a site of value extraction and informational warfare. There is a large and growing literature on the dynamics of informational and unconventional warfare in the digital age, which catalogues all the ways that digital technologies can be used to engage in complex new forms of non-kinetic warfare[12]. This is war with memes and bits instead of with guns and bombs. These developments in the domain of warfare must be paired with a similarly burgeoning literature of revelations about the psychological manipulations perpetrated by advertisement companies, who have spent decades optimizing, for their own ends, the platforms built by tech giants like Google and Facebook[13]. Both of these take place in a context in which newspapers and major media outlets are fundamentally compromised as they attempt to adapt to the digital age by engaging “users” rather than “readers.” Legacy media are also stretched by needing to articulate the risks posed by hyperobjects, as these are rendered by scientists often in disagreement. The media looms over an increasingly complex public discourse about increasingly consequential dangers.

The situation is unprecedented, but not without foreshadowing. While hyperobjects did not exist in public awareness before recent advances in scientific practice, contentious indeterminate dangers certainly did. And these were discussed through newspapers, books, television and radio, all of which had epistemological impacts. However, there is good reason to believe that the impacts of digital technologies will be more far-reaching than prior revolutions in communications technologies. One of the reasons for this is the emergence of digitally mediated psychological warfare, which has established a home within contemporary digital media. Propaganda has existed for as long as warfare itself. “Modern propaganda” has been around since at least the Thirty Years’ War, when the printing press enabled the first population-centric mass propaganda campaigns. Yet the possibilities and invasiveness of current techniques are distinct, separated from the techniques of the 17th century by a gulf as wide as that between modern weapons and the weapons technologies of that era. It is like comparing a nuclear bomb to a gunpowder cannon. So, while there are some potentially useful historical parallels, it should be recognized that we face distinctly 21st-century sensemaking challenges. This is not a kind of problem we have seen before, nor even some new combination of old problems.

There is a hyperobject that can be labeled as “postmodern psychological warfare,” which is widespread, and works against public sensemaking. The “hyper-battlefield” is in your pocket, if not already in the palm of your hand. Note that this means there is a hyperobject (the hyper-battlefield) within the hyperobject (your smartphone) that you are using to gain visibility to hyperobjects in the world (such as pandemics and climate change). The Internet presented to you on your phone is not a neutral tool to aid you in your individual sensemaking. Neither is social media a neutral tool to aid everyone in open public sensemaking. Websites such as Facebook and YouTube are optimized to keep our attention, not to help us learn. They curate a seemingly infinite amount of content using real-time, individualized data tracking and analytics to accomplish their goal of capturing our attention. The platforms are agents rather than neutral curators, with goals for our attention that are different from the goals we have for our own attention. We may take to the Internet to learn something, but most of the sites we visit are not designed to help us learn; they are designed to keep us “engaged,” which means returning to the site again and again and staying on site for longer and longer. Engagement is not the same as learning, and in many cases it is antithetical to it. Engagement is only one of the epistemically problematic design goals to which we are subject on a regular basis when using digital technologies.

Understanding all this about digital technologies is the final step, where the snake eats its own tail, and the dynamic closure or “gestalt” of the epistemological crisis comes into view. We are in a perfect storm that is sinking all epistemological ships, or at least causing them to pass endlessly in the night.

To illustrate close to home: we have been witness to a global pandemic raging now for over a year. It is global, spreading and mutating faster than medical authorities can understand it, and it has been met with complex scientific, economic, and cultural responses. The overall scope of the pandemic and its ongoing multi-order effects are pushing the limits of our measures, methods, and models—not to mention our individual minds. The pandemic is a planetary hyperobject, resulting from the successes of global trade and travel, rendered visible by methods at the very limits of complex science. Its risks are the subject of legitimate disagreements in many specific sciences, yet these risks must be staged for the public almost completely by the media. Social and news media are not generally working in the interest of public sensemaking, and even when good work is produced, it remains subject to a broader environment riddled with the behavioral modification techniques of psychological warfare, as well as various forms of advertising. As a result, the capacity for responsiveness on the part of many populations has been undercut. Polarization, misinformation, and mass modulation of behavior have ensued. What can be done?

Practicing Epistemic Humility: Towards a New Ethos of Learning

The future of democracies, as open societies, requires a fundamental upgrade in our educational and informational infrastructures. The dawn of our understanding of hyperobjects, in the context of a risk society, mediated through weaponized digital media, has created a perfect storm that is enveloping us. Hubristic denial has been one reaction, nihilistic opportunism another.

We propose humility as the best response, one that should lead to ongoing mutual learning. By humility we mean curiosity, commitment, and a motivation to pursue further learning, rather than the deference or subservience that religious connotations of the virtue can imply. We believe that this kind of humility provides the core of a new ethos of learning that is and ought to be emerging.

On the one hand, our situation has already elicited a retreat to certain forms of epistemic hubris, in which the indeterminacy of the science around certain complex issues is denied. The refrain that rings out in these cases is: “the science is settled.” Attempts to suggest otherwise in truly complex matters are met too often with unscientific dismissals; hence the ethic is one of hubris. While there are, of course, a limited number of situations in which the science can be settled, the majority of important issues involving complex public risks need to be understood as having indeterminacy as an inherent characteristic. This requires embracing uncertainty as unavoidable, while remaining oriented to understanding progressively more.

As explained in the discussion of hyperobjects above, scientists at the leading edges of many fields are speaking in terms of indeterminacies, probabilities, complexities, and incompleteness. The best scientists in fields that deal with hyperobjects, like epidemiology and ecology, rarely talk in terms of certainties, unless asked to do so by the press or political decision makers. The news media is largely structured so as to reduce complexity and indeterminacy into clear and agreed-upon certainties[14]. This mirrors the political and legal processes of liberal democracies, where complex issues must be reduced ultimately to a binding vote between definitive choices. In these contexts, actual indeterminacy cannot be accommodated easily by the structure of the sensemaking practices themselves. As a result, individuals and organizations can lock prematurely onto a belief that “the facts” are already known. These facts are then defended almost as an article of faith; the issue becomes more a matter of defending political battle lines than epistemology. Because they have been written into the actions and decisions we have made, these facts cannot easily be made subject to change or question.

Sometimes it can be hard to tell the difference between epistemic hubris and epistemic nihilism, because these reactions often result in similar behaviors. For example, an epistemic nihilist will also say things like “the science is settled”—only they don’t care if it really is. They have given up on notions of scientific truth; they are not naive believers in the accepted facts. Rather, they understand the relation between knowledge and power, and use truth claims and “facts” strategically as aspects of political and economic strategy. This cynical manipulation of public sensemaking is one of the key dynamics driving mistrust, fragmentation, and the widespread propagation of misinformation. This in turn spreads various other kinds of nihilistic reactions, most of which look like “checking out”: an apathetic acceptance of media as ultimately cognitively disempowering. The lines between news and entertainment are erased. Earnest attempts to make sense of important public issues cease. A depoliticized retreat into “infotainment” and social media leads away from engagement in the public sphere.


Neither hubris nor nihilism allows for learning. If you already know, you cannot learn; if truth is irrelevant, there is nothing to learn. Learning requires an attitude of epistemic humility, which threads the needle between hubris and nihilism, and puts us on a steep and narrow path out of the growing darkness of the perfect storm. Epistemic humility differs from nihilism because it does not claim that facts and truth are impossible or irrelevant.

Many things can be known, and they can be known with humility. This implies a broad commitment to recognizing possible limitations and errors, while remaining open to continued learning. Humility differs from hubris because it does not claim to know absolutely and definitively, but instead always leaves questions on the table, with open invitations for more. This kind of humility implies a commitment to appropriate methods and rigor: a commitment not just to the right intent but also to an awareness that the right capabilities and technologies matter. Learning comes to be understood as a part of knowing, and that means having a posture that allows us to learn together. That simply can’t be done in the mood of “post-truth” nihilism, nor from the stance of already knowing and doubling down with polarizing hubris.

The stance of epistemic humility is proposed as core to a new ethos for digital media in the 21st century. It is the core of a new ethos of learning. To be clear, the idea here is not that our society needs to be humble and learn a set of new specific ideas. It is not as if some curriculum could be prescribed containing requisite beliefs, which if adopted by everyone would resolve the sensemaking crisis. That is not what is meant by a new ethos of learning. Rather, to draw from the work of Jürgen Habermas, arguably one of the preeminent theorists of deliberative democracy in the 20th century, our society needs to advance its capacities for learning, in general and in perpetuity. Habermas argues that a society is composed of subsystems of law, economics, and culture (among other basic structures), and it can be evaluated on aggregate according to its acquired capacities for ongoing learning.[15]

An ethos of learning involves public commitments to deeper principles of epistemology and communication; such principles can inform specific approaches to addressing the many learning crises unfolding around us. The focus is then not on which ideas and beliefs are widely held, and whether they are “true facts or fake news.” The focus is on putting in place broader civic virtues and practices that support public sensemaking as a process capable of legitimately transforming and changing widely held ideas and beliefs. Insofar as a society can be said to “advance” or to make “progress” in general, it is largely within this dimension of institutionalized capacities for learning that real progress can be found. We must ask: has our society improved its own capacities to continue learning? Or has the social system stopped learning, falling into the traps of hubris and nihilism? A new ethos of a learning society can be planted not only in the obvious systems of education, but also in the capacity for governmental, bureaucratic, legal, and economic systems to change the way they learn and alter their basic practices of sensemaking.

As the analysis presented here shows, we are not currently guided by an ethos of learning. Instead, we find ourselves in the midst of a widespread breakdown in sensemaking. The roots have been exposed, but that is only the first step towards what is a necessary regeneration of public sensemaking.


  1. See Timothy Morton’s 2013 book, Hyperobjects: Philosophy and Ecology after the End of the World.

  2. Some of what follows has been adapted from a blog post by educational philosopher, and Consilience Project member, Zachary Stein.

  3. For a range of perspectives on the “ocean as hyperobject,” see the report from the United Nations University.

  4. See the 2012 Department of Energy Blue Ribbon Commission on America’s Nuclear Future. For a discussion of the sociological and scientific dynamics of invisible radiation risks, see Olga Kuchinskaya’s 2014 book, The Politics of Invisibility: Public Knowledge about Radiation Health Effects after Chernobyl.

  5. The Atlantic recalled this story in 2015. See also the Department of Energy’s 1984 report from the Human Interference Task Force, and Thomas A. Sebeok, “Pandora’s Box in Aftertimes,” in I Think I Am a Verb: More Contributions to the Doctrine of Signs.

  6. Again, see the 2012 Department of Energy Blue Ribbon Commission on America’s Nuclear Future.

  7. The Guardian covers this ongoing story about the risks involved with dumping radioactive waste into the ocean near Fukushima, as Japanese fishermen have also been confronted with a “hyper-ocean.”

  8. See Ulrich Beck’s work on the risk society in his 2009 book, World at Risk, which follows up on his original 1986 book, Risk Society: Towards a New Modernity. Beck is generally regarded as one of the premier sociologists of his generation.

  9. On the new and complex moral demands placed on the global media in the context of a planetary risk society, see Roger Silverstone’s 2007 book, Media and Morality: On the Rise of the Mediapolis.

  10. For more on the sheer scale and complexity of the planetary computational stack that is encircling the Earth, see Benjamin Bratton’s 2015 book, The Stack: On Software and Sovereignty.

  11. For an overview of the research on psychological impacts of social media, see The Dark Side of Social Media: Psychological, Managerial, and Societal Perspectives. See also the work of Tristan Harris at the Center for Humane Technology, and the 2018 book by Jaron Lanier, Ten Arguments for Deleting Your Social Media Accounts Right Now.

  12. For an informative overview of these issues, see the work of Samuel Woolley and Philip Howard at the Oxford Internet Institute, including their book Computational Propaganda: Political Parties, Politicians and Political Manipulation on Social Media. See also Peter Singer and Emerson Brooking’s book, LikeWar: The Weaponization of Social Media.

  13. See Shoshana Zuboff’s 2019 book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.

  14. See Roger Silverstone’s 2007 book, Media and Morality: On the Rise of the Mediapolis.

  15. See Habermas’s 1996 book, Between Facts and Norms, and his 1984 classic of sociology, The Theory of Communicative Action.