Development in Progress
The concept of progress is at the heart of humanity’s story. | Jul 16, 2024
Conditions in the 21st century require individuals and societies to find a new sense of commitment to ongoing learning. Technological and scientific successes have remade the social world, leading to a much-heralded information age. The results for society and culture have included innovative changes in daily life, but also a growing number of increasingly complex public risks and a disabling sense of information overload. We argue that the rate at which information is becoming more complex has started to drastically outpace human capacities for learning. Either we increase our capacity for learning at scale, or we may be forced to abandon the projects and ideals of an open society—with disastrous consequences. We all need to recognize the problem and react accordingly.
Liberal democracies and related markets have historically been based on the notion that equal access to reliable and high-quality information enables an open society. This has required the existence of media, education, and political discourse sufficient to assure widespread capacities for adequate public sensemaking. Over time, science and technology have driven societal changes that have rendered this method of social organization both increasingly difficult to achieve and increasingly vital to accomplish. Changes in science, media, and the nature of our basic social structures (such as economics, politics, and warfare) have been building in complexity at an accelerating rate. Our ability to make sense of things can no longer keep pace. This is a situation in which effective innovations are needed that can upgrade our capacities for learning, fast. We have little choice but to learn new ways to make sense of the world together, in a new kind of public sphere. This new approach to sensemaking is needed to address an increasing number of consequential decisions facing our governments, communities, and families.
Without a shared ability to make sense of the world as a means to inform our choices, we are left with only the game of power.
The response to these conditions has been a general sense of being overwhelmed, often resulting in epistemic nihilism. This form of nihilism is a diffuse and usually subconscious feeling that it is impossible to really know anything, because, for example, “the science is too complex” or “there is fake news everywhere.” Without a shared ability to make sense of the world as a means to inform our choices, we are left with only the game of power. Claims of “truth” are seen as unwarranted or intentional manipulations, as weaponized or not earnestly believed in.
Our situation may also promote epistemic hubris, the belief that some form of knowledge can in fact clearly and definitively explain and predict those things that are most important in the world. On this view, indeterminacy is overblown, and even the thorniest problems have a clear answer that should be accepted by everyone. While the philosophy of science itself is committed to overcoming this form of hubris, scientific findings are often misused and misunderstood, especially in highly politicized contexts. In these contexts, epistemic hubris and nihilism form a dangerous symbiosis. Individuals and cultural groups oscillate between the hopeless mood of “post-truth” culture and the peaks of polarizing certainties that emerge around politically significant scientific and geopolitical issues.
There is a line of thought that threads a needle between these two states. A stance of epistemic humility and commitment is offered here, which forms the core of what we begin to discuss as a new ethos for public sensemaking and civic engagement—an ethos of learning fit to meet the actual challenges of the 21st century. This ethos is not something that can be dictated by any group or authority. We are not claiming to have discovered it or invented it, nor are we prescribing some specific approach. We understand this ethos as emergent. It is a non-compulsory civic commitment between earnest citizens, a covenant among those who understand the seriousness of the consequences of the epistemic crisis. It is for those seeking the necessary pathways forward in collaboration and learning.
Consider the psychological position of the average citizen of a Western country today. For this individual, some of the most essential risks and decisions of their life involve invisible processes that require scientific explanations which often run counter to everyday experience. The vast majority are aware of and “believe in” the existence and consequences of important realities of public concern, such as nuclear radiation, climate change, and pandemics. Yet only a small minority truly understand these issues as they are discussed at the leading edge of scientific research, where there is often disagreement. An expert in macroeconomics is unlikely also to be an expert in virology, and these are only two of many fields relevant to understanding pandemics. As a result, the relationships between diverse phenomena are often overlooked. Public health and economic wellbeing are then played off against each other, rather than being addressed through a joint policy based on interdisciplinary synthesis and a method for weighing values and value trade-offs. Specialized knowledge proliferates in silos.
There has never been more scientific information about more consequential issues than at the current moment. And it is by virtue of modern society’s successes in organizing certain kinds of “knowledge production” that we are plunged as individuals and groups into a state where information overwhelms us. This state shrouds the most important issues in a “cloud of unknowing”—or worse, a cloud of false claims to definitive knowledge.
One end result of the massive technological and scientific enterprises of the modern world has been the discovery of realities that are so vast and complex that they exhaust even the best of our scientific measures and methods. They have been called hyperobjects by the ecological philosopher and literary critic Timothy Morton. The term refers to those objects of advanced science that we live with as part of everyday life, and yet which are nearly incomprehensible. Hyperobjects are discovered and revealed by leading scientific methods, literally rendered “visible” through data, and yet what these approaches discover is, in part, the limits of our ability to fully explain certain very important phenomena. Hyperobjects are so incalculably complicated, or so inherently complex, emergent, and dynamic that their full “behavior” can’t be explained exactly or exhaustively. This of course means a hyperobject’s “behavior” cannot be predicted with specificity either.
The list of hyperobjects in the news includes nuclear radiation, planetary-scale climate change, and pandemics, but the true list is much longer. Systemic injustice, world hunger, planetary-scale computational architectures, and bioregional zones (such as the Amazon rainforest) are further examples. Hyperobjects are so large, complex, nonlinear, multicausal, long-lasting, and beyond human proportions that they disrupt effective public sensemaking and place great demands on individual psychology. As with climate change or radioactive waste, we all know that “it” is out there, but we can’t see “it,” nor truly understand “it,” at least not without the help of specialists—and even they disagree about important details.
Of course, there are many kinds of invisible objects that scientists understand sufficiently. For example, a microscope is required to see a plant cell, and the process of photosynthesis is difficult to understand—yet a house plant is not a hyperobject just because a scientist understands it much better than the average person ever will. To be clear, hyperobjects constitute a distinct class of realities only recently discovered by scientists, which extend across vast physical and temporal scales, and which can only be disclosed by specific technologies, such as complex measurement systems that often did not exist before the 21st century.
Understanding hyperobjects as a historically emergent and novel challenge to 21st-century public sensemaking is the first step of the argument presented here. Our most important practices and institutions for public sensemaking in open societies—from schools, to the news media, to political and civic discourse—were all created before hyperobjects became a focus of scientific and public concern. This is part of the learning crisis: we are playing cognitive catch-up with an ever-more-complex world of hyperobjects.
Here is a specific example. For millennia, humans more or less agreed in general terms about the nature of the planet’s oceans, and how to live with and make use of them. Today, scientists are trying to figure out what the ocean actually is, and how it works over massive expanses of time and space, all in order to assure the possibility of continued human life. The “ocean” was a sometimes scary and often beautiful thing that everyone could relate to easily by seeing it, swimming in it, eating fish from it, sailing on it, and navigating it. The average citizen understands that ocean. But the Earth’s oceans have since come to be understood by scientists as a massively complex hyperobject with trends and tendencies such as nitrogen levels, microplastics and other pollution, pH, stored CO2 interacting with the atmosphere, and varying amounts and types of flora and fauna. The future trends of these factors will impact the biosphere and human food systems profoundly.
What could be called the “hyper-ocean” is disclosed by means of a massive lattice-work of sensor networks, as advances in measurement technologies allow vast reams of data to be gathered, organized, and interpreted. The scientific rendering of any hyperobject requires techniques at the leading edges of measurement and quantification, as models of ocean temperature and acidification use mathematics from complexity and chaos science. Science renders for the public imagination the “hyper-ocean,” which is more detailed, predictive, and objective than any sunset view of the beautiful waves, cliffs, beach, and clouds. The “hyper-ocean” is an uncanny and literally incomprehensible reality forced into public sensemaking—forced because it appears that our future depends on the quality and trustworthiness of research about this bigger, massively complex “hyper-ocean.” We have no choice but to try to understand the “hyper-ocean,” and what it teaches us about the limits of our knowledge.
Note that it is precisely the success of oceanography and related fields (not their failures) that has yielded newfound humility about just how much we do not know (and may never know) about the planet’s oceans. It used to be that we could dump waste into the ocean and forget about it. Now when there is a major oil spill, like the Deepwater Horizon spill in the Gulf of Mexico in 2010, we are acutely aware of the danger (see Figure 1). We are equipped with sophisticated models and metrics, and yet because of the complexity and indeterminacy of the best science, our advanced tools convincingly reveal our ignorance: we simply do not know what the longer-term impacts of the spill will be, even if we can make well-justified guesses. Everyday citizens, such as fishermen who make their living at sea, inherited their grandfather’s ocean. They now realize that the “hyper-ocean” has complex probabilities and risks, the specific consequences of which (such as radically unpredictable fishing yields) are making many of their businesses technically uninsurable.
The point here is that there is no insuring against, controlling, or predicting hyperobjects—indeed, they cannot be definitively understood—and yet they loom large in public sensemaking.
What becomes of political choice and democracy in an age of hyperobjects? Historically, the state, academia, and the media have been legitimated in public perception by virtue of being “the ones who know and who can, thereby, predict and control.” The dawning age of hyperobjects might suggest that this form of legitimacy is over. This brings us directly to the next step in explaining the compounding challenges to public sensemaking in the 21st century.
Figure 1: NASA satellite image of the Deepwater Horizon oil spill. The hyperobject extended for thousands of miles, and will likely have impacts for hundreds of years to come. Note that the spill is a hyperobject within the larger hyperobject of the ocean, just to give a sense of the complexity. Image in the public domain, courtesy of demis.nl.
When the complexity of the demands placed on this kind of public sensemaking increases, related sensemaking capacities have to increase too, or else the social system is “flying blind.” This leads inevitably to a state of crisis. The sensemaking institutions and environments around us today have become increasingly less capable of meeting the challenges they face. Our schools, governments, news media, and informational technologies are being challenged by important new realities, the culmination of a long history of their own successes.
Public involvement in governance requires a citizenry capable of adequately understanding the issues implicated in governance decision-making.
The nature of scientific knowledge-production has transformed in the past few decades, along with major advances in measurement, quantification, and computation. Many important sciences have been changing, as have their objects of study. The nature of the objects that make up our shared public world has changed. Topics for public sensemaking are becoming much more complex, indeterminate, and mediated through the expert cultures of scientists—and then again through the expert cultures of media organizations. Never before has society been so dependent upon the dissemination of highly sophisticated scientific information to understand and mitigate complex risks to public welfare. This leads us to consider a well-grounded sociological theory which suggests that while societies have always faced risks, our society is outpaced by the complexity of the risks it faces. Our ability to learn about and mitigate risks lags behind their increasing complexity. In this situation, both scientific authorities and voices in the media take on unprecedented roles, with a new responsibility for providing the public with resources for sensemaking around these complex risks. The final step is seeing that contemporary digital media landscapes are wide open to manipulation, informational warfare, and disorienting, addictive user interfaces. Right when society requires unprecedented upgrades in public sensemaking processes, we face instead a perfect storm of factors contributing to an epistemological crisis.
The radiation produced by nuclear waste is invisible to the naked eye. Below certain thresholds, the negative health effects of exposure to radiation take years or decades to be seen, which means the risk is undetectable to the average person until it is too late. Profoundly dangerous materials are produced routinely as a part of nuclear energy and weapons programs. The risks involved are complex, long lasting, and nonlocal. This means that while scientists understand a great deal about nuclear fallout, at this point they do not know exactly what will happen if a catastrophic event occurs, how long its effects will last, or in what places they will be seen and felt. Our understanding of the impacts on biological organisms leaves a great deal unclear or simply unknown. There is no definite plan on the part of world governments and energy companies detailing exactly what to do with all the nuclear waste that is being produced—that is, no plan to manage the disposal of nuclear waste for the complete ten thousand to one million years during which the materials will be dangerous to biological life[4]. Let that sink in. Governments are engaging in sensemaking and coordinated action around scientific and ethical realities at the extremities of deep time and scientific indeterminacy. While it has been common for human societies to consider the future consequences of their collaborative work, the scale and ramifications of our commitments to the future are staggeringly novel.
Part of the difficulty in handling radioactive waste is that disposal is effectively impossible; instead, it must be specially handled and placed in nuclear-safe, long-term storage facilities. “Long-term” in this context means up to a million years, although scientists disagree about exactly how long the materials will be dangerous. Creating such a long-term storage facility requires unprecedented feats of engineering. It also requires unprecedented feats of communication, as the risks of what is contained within the storage site must be communicated across deep time to any future humans (or other intelligent biological lifeforms) that might stumble upon the site. What symbols and signs might communicate such a grave danger to future humans living in an almost completely unimaginable world? This was the remarkable task set for a group of scholars convened by the U.S. Congress in 1984[5]. There was no consensus on how to proceed. The longest-running project aimed at creating a long-term storage site for our nuclear waste, deep within Yucca Mountain in Nevada, was defunded during the Obama administration. It appears that mitigating the invisible risks posed by radiation is difficult to push through already contentious government and military budgets[6].
Public sensemaking on the issue has taken place only sporadically and insufficiently. In part, this is because the issue is so complex. It requires not only ongoing learning and open discourse about future technology and advanced nuclear science, but also grappling with economics and the dynamics of political will. In the presence of wars, election cycles, short memories, and immediate, tangible things voters want, how is it even possible to do public sensemaking around such abstract and long-term issues?
Radioactive waste is one of the things being monitored by complex scientific instrumentation as part of the “hyper-ocean.” The Fukushima nuclear disaster thrust the radiation levels of the ocean into public view and opened an almost endless and unanswerable set of questions about the long-term impacts of the ongoing radiation leak[7]. Some scientists think that a portion of this radiation is insignificant due to the half-lives of certain nuclear species, while others have grave concerns about a multiplicity of potential outcomes. This is an invisible risk that affects all life on Earth. Only experts can officially determine it to be dangerous, and they themselves disagree on acceptable levels of danger. This disaster and its consequences, like Chernobyl before it, epitomize what the German sociologist Ulrich Beck outlines in his work on the risk society[8].
Beck’s basic thesis is that modern societies built up technologies and governance based on the prediction and control of natural and social processes, but that this approach to social life has now reached its limit. Dams and hospitals, scientific research and development, factories and schools: in many respects, the modern project succeeded at what it set out to do. Yet it was all the while creating unintended second- and third-order effects that would undermine the project in the long run, until ultimately, claims Beck, the modern period of social organization ended. These second- and third-order effects include, for example, industrial pollution (once thought safe) leading to environmental degradation, which then cascades into human migration and eventual political crises in open societies. Technologies resulting from breakthroughs in digital communication eventually and unexpectedly undermine the quality and integrity of civic discourse and participation—a pattern discussed further in the next section. Today, a new kind of society is emerging. It is built around a globally shared, reflexive response to the widespread and indeterminate risks created by modern society. Beck calls this new epoch “reflexive modernity” or the risk society.
The risk society continues to do scientific research, but this research is increasingly focused on understanding the hyperobjects created as a byproduct of industrial society, such as climate change, pandemics, planetary limits, human migration, and high-tech warfare. The risk society also continues to build technologies based on advancing sciences, only they are increasingly aimed at mitigating the worst of the risks we have inherited. The risk society continues to engage in public sensemaking with an intention to preserve democracy, only now this requires deciphering the hyperobjects being addressed by science, which often involve a grave and incalculable risk to the public. The media becomes profoundly consequential in the risk society, on a planetary scale, as it holds a new form of responsibility with much more exacting and intensive demands[9].
The ethics that have governed the growth and flourishing of modern journalism and media are from another, prior society, and must be upgraded substantially to provide for the needs of the risk society.
We are seeing an unprecedented proliferation of public risks that can only be characterized as hyperobjects. As outlined above, this is undermining the political legitimacy of modern democratic nation states because it is changing the demands made on our practices of public sensemaking. These states and their civil societies are grounded in declared and demonstrated problem-solving capacities, as part of the modern project of improved technological methods of prediction and control. Democracies are further based on the idea that citizens need to be informed about the state of the world in ways that allow them to vote in favor of certain coordinated group actions. Because the risk society involves public sensemaking about uncontrollable and unpredictable hyperobjects, a crisis of legitimacy ensues in which experts and everyday citizens alike call the bluff. The modern state and corporation simply cannot fully explain, predict, or control the unintended consequences of their past and ongoing activities.
Beck gives the example of insurance markets that face the unpredictable and likely catastrophic dynamics of planetary climate change. How do you insure against whatever losses might be incurred during climate change-induced civilizational collapse? The nonsense of the question reveals the paradoxes of the risk society; we have no choice but to live with an indeterminate amount of risk, which remains invisible unless rendered for us by experts. Ordinary people are forced into positions that are uninsurable and vulnerable, and that carry unprecedented personal risk in the context of a widely distributed public danger. Of course, humans have been living with risk as a part of social life forever. But never before have such a great number of risks been put before the public in the terms of complex, highly specialized science. Many of these risks ultimately result in political consequences breaking through into the immediacy of everyday life.
It is precisely because of the personal and occasionally existential nature of these novel risks that they must be covered and discussed by the media. Fukushima is a case in point. In the weeks after the disaster, news media around the world informed citizens about the planetary-scale fallout circulating in the ocean and the atmosphere. Where else could a non-specialist look? In this way, hyperobjects are also eventually made part of cultural production and everyday conversation. In particular, the media is responsible for framing the “official” expert discourse about the hyperobject of concern, translating the reality of the risk into the language of the general public. In this context, because the science is indeterminate while the political consequences are dire, the media tends to establish a single expert discourse.
A simple consensus is summarized for the public, in the sense of being made salient, presentable, and credible. Predictably, there then emerges a “counter-expert discourse” that thrives by pointing out the indeterminacy of the science, or other dimensions of the issue that are typically downplayed, ignored, or legitimately misunderstood by the media. Public sensemaking becomes reduced to warring and politicized discourses around very real risks, when in reality these risks are often too complex for anyone to claim determinate knowledge of them.
Today, public sensemaking takes place within the context of digital news and social media. This context fundamentally changes the nature of public sensemaking, at the precise moment that it is vexed by the novel appearance of hyperobjects in the public imagination. Politicized and risky hyperobjects are all over digital media, and because they are so salient, they have become the focus of information warfare, attention-capture, and a host of other problems for sensemaking.
Our smartphones have become necessary for effectively navigating society: travel, shopping, health records, and nearly all forms of mediated communication. The actual workings of the phone are mostly a mystery, unless you are a software, hardware, or network engineer. Even then, most engineers do not understand the details involved in mining and processing the rare earth minerals that comprise much of the physical hardware of the phone. The inner workings of the phone, indeed its whole construction, are invisible and beyond our comprehension. The screen of the phone connects you with another 21st-century necessity that is actually (literally) incomprehensibly complicated: the vast digital media landscape of the Internet. The Internet enables a vast and complex economic and infrastructural organization—a planetary-scale computational stack[10]—that outstrips the understanding of those who are most directly impacted by it. The world of digital information is a vast and complex hyperobject, and like most hyperobjects, it comes with grave and unpredictable risks.
The unintended consequences of the success of digital media technologies present us with unprecedented and complex dangers to public welfare. Chief among these dangers are the interruptions, distractions, and misinformation that people encounter when trying to gain access to reliable information about risks. In a risk society, the very thing that should be making us aware of risks (the media) is putting us at greater risk, specifically by allowing public discussions of risk to become polluted with advertisements, entertainment, and misinformation. The noise is louder than the signal. Aside from addiction, attention disruption, and other aspects of psychological distress that emerge from long-term engagement with social media user interfaces, there are other very real epistemological risks involved[11].
The smartphone is most people’s main window into the media’s framing of the hyperobjects created by our civilization. Yes, this means you are using a hyperobject to gain visibility into other hyperobjects. If you are getting dizzy, that is a normal reaction. This begins to give a sense of how profound (and profoundly confusing) the challenges are to public sensemaking in the 21st century, and how deep the epistemological crisis goes. Unfortunately, there are further complications still.
Because of the salience and public significance of what is viewed on phones, and their ubiquity as a part of everyday life, digital media applications have become a site of value extraction and informational warfare. There is a large and growing literature on the dynamics of informational and unconventional warfare in the digital age, which catalogues all the ways that digital technologies can be used to engage in complex new forms of non-kinetic warfare[12]. This is war with memes and bits instead of with guns and bombs. These developments in the domain of warfare must be paired with a similarly burgeoning literature of revelations about the psychological manipulations perpetrated by advertising companies, which have worked for decades to turn the platforms built by tech giants like Google and Facebook to their own ends[13]. Both of these take place in a context in which newspapers and major media outlets are fundamentally compromised as they attempt to adapt to the digital age by engaging “users” rather than “readers.” Legacy media are also stretched by needing to articulate the risks posed by hyperobjects, as these are rendered by scientists often in disagreement. The media is looming over an increasingly complex public discourse about increasingly consequential dangers.
The situation is unprecedented, but not without foreshadowing. While hyperobjects did not exist in public awareness before recent advances in scientific practice, contentious indeterminate dangers certainly did. And these were discussed through newspapers, books, television and radio, all of which had epistemological impacts. However, there is good reason to believe that the impacts of digital technologies will be more far-reaching than prior revolutions in communications technologies. One of the reasons for this is the emergence of digitally mediated psychological warfare, which has established a home within contemporary digital media. Propaganda has existed for as long as warfare itself. “Modern propaganda” has been around since at least the Thirty Years’ War, when the printing press enabled the first population-centric mass propaganda campaigns. Yet the possibilities and invasiveness of current techniques are distinct; the difference is as great as the gulf between modern weapons and the weapons technologies of the 17th century. It is like comparing a nuclear bomb to a gunpowder cannon. So, while there are some potentially useful historical parallels, it should be recognized that we face distinctly 21st-century sensemaking challenges. This is not a kind of problem we have seen before, nor even some new combination of old problems.
There is a hyperobject that can be labeled “postmodern psychological warfare,” which is widespread and works against public sensemaking. The “hyper-battlefield” is in your pocket, if not already in the palm of your hand. Note that this means there is a hyperobject (the hyper-battlefield) within the hyperobject (your smartphone) that you are using to gain visibility to hyperobjects in the world (such as pandemics and climate change). The Internet presented to you on your phone is not a neutral tool to aid you in your individual sensemaking. Neither is social media a neutral tool to aid everyone in open public sensemaking. Websites such as Facebook and YouTube are optimized to keep our attention, not to help us learn. They curate a seemingly infinite amount of content using real-time, individualized data tracking and analytics to accomplish their goal of capturing our attention. The platforms are agents rather than neutral curators, with goals for our attention that differ from the goals we have for our own attention. We may take to the Internet to learn something, but most of the sites we visit are not designed to help us learn; they are designed to keep us “engaged,” which means returning to the site again and again and staying on site for longer and longer. Engagement is not the same as learning, and in many cases it is antithetical to it. Engagement is only one of the epistemically problematic design goals to which we are subject on a regular basis when using digital technologies.
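To make the contrast between engagement and learning concrete, here is a minimal, purely illustrative sketch of how an engagement-optimized feed ranker might score content. Every name, feature, and weight in it (the Item fields, engagement_score, rank_feed) is a hypothetical assumption introduced for illustration, not a description of any actual platform's system, and real recommenders are vastly more complex.

```python
# Hypothetical sketch of engagement-optimized ranking; all features and weights
# are illustrative assumptions, not any real platform's logic.
from dataclasses import dataclass
from typing import List

@dataclass
class Item:
    title: str
    predicted_watch_seconds: float      # how long the user is expected to stay on the item
    predicted_share_probability: float  # how likely the user is to share it
    reinforces_user_views: bool         # inferred from the user's tracked history

def engagement_score(item: Item) -> float:
    """Score items by expected time-on-site and virality; note that nothing here
    measures accuracy, informativeness, or whether the user learns anything."""
    score = item.predicted_watch_seconds + 100 * item.predicted_share_probability
    if item.reinforces_user_views:
        score *= 1.5  # content that confirms existing views tends to hold attention
    return score

def rank_feed(items: List[Item]) -> List[Item]:
    # Sort purely by the engagement objective, highest first.
    return sorted(items, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Item("Careful explainer on ocean acidification", 40.0, 0.01, False),
        Item("Outrage clip confirming what you already believe", 90.0, 0.20, True),
    ]
    for item in rank_feed(feed):
        print(item.title)
```

The point of the sketch is only that the objective being maximized contains no term for truth or learning; whatever serves that objective, including polarizing or misleading content, rises to the top of the feed.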
Understanding all this about digital technologies is the final step, where the snake eats its own tail, and the dynamic closure or “gestalt” of the epistemological crisis comes into view. We are in a perfect storm that is sinking all epistemological ships, or at least causing them to pass endlessly in the night.
To illustrate close to home: we have been witness to a global pandemic raging now for over a year. It is global, spreading and mutating faster than medical authorities can understand it, and it has been met with complex scientific, economic, and cultural responses. The overall scope of the pandemic and its ongoing multi-order effects are pushing the limits of our measures, methods, and models—not to mention our individual minds. The pandemic is a planetary hyperobject, resulting from the successes of global trade and travel, rendered visible by methods at the very limits of complex science. Its risks are the subject of legitimate disagreements in many specific sciences, yet these risks must be staged for the public almost completely by the media. Social and news media are not generally working in the interest of public sensemaking, and even when good work is produced, it remains subject to a broader environment riddled with the behavioral modification techniques of psychological warfare, as well as various forms of advertising. As a result, the capacity for responsiveness on the part of many populations has been undercut. Polarization, misinformation, and mass modulation of behavior have followed. What can be done?
The future of democracies, as open societies, requires a fundamental upgrade in our educational and informational infrastructures. The dawn of our understanding of hyperobjects, in the context of a risk society, mediated through weaponized digital media, has created a perfect storm that is enveloping us. Hubristic denial has been one reaction, nihilistic opportunism another.
We propose humility as the best response. This should lead to ongoing mutual learning. By humility we mean curiosity, commitment, and a motivation to pursue further learning, rather than the deference or subservience that religious connotations of such a virtue can imply. We believe that this kind of humility provides the core of a new ethos of learning that is and ought to be emerging.
On the one hand, our situation has already elicited a retreat to certain forms of epistemic hubris, in which the indeterminacy of the science around certain complex issues is denied. The refrain that rings out in these cases is: “the science is settled.” Attempts to suggest otherwise in truly complex matters are met too often with unscientific dismissals; hence the ethic is one of hubris. While there are, of course, a limited number of situations in which the science can be settled, the majority of important issues involving complex public risks need to be understood as having indeterminacy as an inherent characteristic. This requires embracing uncertainty as unavoidable, while remaining oriented to understanding progressively more.
As explained in the discussion of hyperobjects above, scientists at the leading edges of many fields are speaking in terms of indeterminacies, probabilities, complexities, and incompleteness. The best scientists in fields that deal with hyperobjects, like epidemiology and ecology, rarely talk in terms of certainties, unless asked to do so by the press or political decision makers. The news media is largely structured so as to reduce complexity and indeterminacy into clear and agreed-upon certainties[14]. This mirrors the political and legal processes of liberal democracies, where complex issues must be reduced ultimately to a binding vote between definitive choices. In these contexts, actual indeterminacy cannot be accommodated easily by the structure of the sensemaking practices themselves. As a result, individuals and organizations can lock prematurely onto a belief that “the facts” are already known. These facts are then defended almost as an article of faith; the issue becomes more a matter of defending political battle lines than epistemology. Because they have been written into the actions and decisions we have made, these facts cannot easily be made subject to change or question.
Sometimes it can be hard to tell the difference between epistemic hubris and epistemic nihilism, because these reactions often result in similar behaviors. For example, an epistemic nihilist will also say things like “the science is settled”—only they don’t care if it really is. They have given up on notions of scientific truth; they are not naive believers in the accepted facts. But they understand the relation between knowledge and power, and use truth claims and “facts” strategically as aspects of political and economic strategy. This cynical manipulation of public sensemaking is one of the key dynamics driving mistrust, fragmentation, and widespread propagation of misinformation. This in turn spreads various other kinds of nihilistic reactions, most of which manifest as “checking out” into apathetic acceptance of the media as ultimately cognitively disempowering. The lines between news and entertainment are eliminated. Earnest attempts to make sense of important public issues cease. A depoliticized retreat into “infotainment” and social media leads away from engagement in the public sphere.
Neither hubris nor nihilism allows for learning. If you already know, you cannot learn.
And if truth is deemed irrelevant, so that learning seems impossible, then learning will likewise be ruled out as a nonstarter. Learning requires an attitude of epistemic humility, which threads the needle between hubris and nihilism, and puts us on a steep and narrow path out of the growing darkness of the perfect storm. Epistemic humility differs from nihilism because it does not claim that facts and truth are impossible or irrelevant.
Many things can be known, and they can be known with humility. This implies a broad commitment to recognizing possible limitations and errors, while remaining open to continued learning. Humility differs from hubris because it does not claim to know absolutely and definitively, but instead always leaves questions on the table, with open invitations for more. This kind of humility implies a commitment to appropriate methods and rigor; it is a commitment not just to the right intent but also to the awareness that the right capabilities and technologies matter. Learning comes to be understood as a part of knowing, and that means having a posture that allows us to learn together. That simply can’t be done in the mood of “post-truth” nihilism, nor from the stance of already knowing and doubling down with polarizing hubris.
The stance of epistemic humility is proposed as core to a new ethos for digital media in the 21st century. It is the core of a new ethos of learning. To be clear, the idea here is not that our society needs to be humble and learn a set of new specific ideas. It is not as if some curricula could be prescribed containing requisite beliefs, which if adopted by everyone would resolve the sensemaking crisis. That is not what is meant by a new ethos of learning. Rather, to draw from the work of Jürgen Habermas, arguably one of the preeminent theorists of deliberative democracy in the 20th century, our society needs to advance in its abilities and capabilities for learning, in general and in perpetuity. Habermas argues that a society is composed of subsystems of law, economics, and culture (among other basic structures), and it can be evaluated in aggregate according to its acquired capacities for ongoing learning.[15]
An ethos of learning involves public commitments to deeper principles of epistemology and communication; such principles can inform specific approaches to addressing the many learning crises unfolding around us. The focus is then not on which ideas and beliefs are widely held, and whether they are “true facts or fake news.” The focus is on putting in place broader civic virtues and practices that support public sensemaking as a process capable of legitimately transforming and changing widely held ideas and beliefs. Insofar as a society can be said to “advance” or to make “progress” in general, it is largely within this dimension of institutionalized capacities for learning that real progress can be found. We must ask: has our society improved its own capacities to continue learning? Or has the social system stopped learning, falling into the traps of hubris and nihilism? A new ethos of a learning society can be planted not only in the obvious systems of education, but also in the capacity for governmental, bureaucratic, legal, and economic systems to change the way they learn and alter their basic practices of sensemaking.
As the analysis presented here shows, we are not currently guided by an ethos of learning. Instead, we find ourselves in the midst of a widespread breakdown in sensemaking. The roots have been exposed, but that is only the first step towards what is a necessary regeneration of public sensemaking.
The noun “affordance” was coined by the American ecological psychologist James J. Gibson. It was initially used in the study of animal-environment interaction and has also been used in the study of human-technology interaction. An affordance is an available use or purpose of a thing or an entity. For example, a couch affords being sat on, a microwave button affords being pressed, and a social media platform affords letting users share content with each other.
Agent provocateur is French for “provoking agent.” The term refers to individuals who attempt to persuade another individual or group to partake in a crime or rash behavior, or to implicate them in such acts. This is done to defame, delegitimize, or criminalize the target; for example, by starting a conflict at a peaceful protest or attempting to implicate a political figure in a crime.
Ideological polarization is generated as a side-effect of content recommendation algorithms optimizing for user engagement and advertising revenues. These algorithms will upregulate content that reinforces existing views and filter out countervailing information, because this has been proven to drive time on-site. The result is an increasingly polarized perspective founded on a biased information landscape.
To “cherry pick” when making an argument is to selectively present evidence that supports one’s position or desired outcome, while ignoring or omitting any contradicting evidence.
The ethical behavior exhibited by individuals in service of bettering their communities and their state, sometimes foregoing personal gain for the pursuit of a greater good for all. In contrast to other sets of moral virtues, civic virtue refers specifically to standards of behavior in the context of citizens participating in governance or civil society. What constitutes civic virtue has evolved over time and may differ across political philosophies. For example, in modern-day democracies, civic virtue includes values such as guaranteeing all citizens the right to vote and to live free of discrimination on the basis of culture, race, sex, religion, nationality, sexual orientation, or gender identity. A shared understanding of civic virtue among the populace is integral to the stability of a just political system, and waning civic virtue may result in disengagement from collective responsibilities, noncompliance with the rule of law, a breakdown in trust between individuals and the state, and degradation of the intergenerational process of passing on civic virtues.
Closed societies restrict the free exchange of information and public discourse, and impose top-down decisions on their populace. Unlike the open communications and dissenting views that characterize open societies, closed societies promote opaque governance and prevent the public opposition that might be found in free and open discourse.
A general term for collective resources in which every participant of the collective has an equal interest. Prominent examples are air, nature, culture, and the quality of our shared sensemaking basis or information commons.
The cognitive bias of 1) exclusively seeking or recalling evidence in support of one's current beliefs or values, 2) interpreting ambiguous information in favor of one’s beliefs or values, and 3) ignoring any contrary information. This bias is especially strong when the issues in question are particularly important to one's identity.
In science and history, consilience is the principle that evidence from independent, unrelated sources can “converge” on strong conclusions. That is, when multiple sources of evidence are in agreement, the conclusion can be very strong even when none of the individual sources of evidence is particularly strong on its own.
While “The Enlightenment” was a specific instantiation of cultural enlightenment in 18th-century Europe, cultural enlightenment is a more general process that has occurred multiple times in history, in many different cultures. When a culture goes through a period of increasing reflectivity on itself it is undergoing cultural enlightenment. This period of reflectivity brings about the awareness required for a culture to reimagine its institutions from a new perspective. Similarly, “The Renaissance” refers to a specific period in Europe while the process of a cultural renaissance has occurred elsewhere. A cultural renaissance is more general than (and may precede) an enlightenment, as it describes a period of renewed interest in a particular topic.
A deep fake is a digitally altered (via AI) recording of a person, created for the purpose of political propaganda, sexual objectification, defamation, or parody. Deep fakes are becoming increasingly difficult for the untrained eye to distinguish from reality.
Empiricism is a philosophical theory that states that knowledge is derived from sensory experiences and relies heavily on scientific evidence to arrive at a body of truth. English philosopher John Locke proposed that rather than being born with innate ideas or principles, man’s life begins as a “blank slate” and only through his senses is he able to develop his mind and understand the world.
The epistemic commons comprises both the public spaces (e.g., town hall, Twitter) and the private spaces where people come together to pursue a mutual understanding of issues critical to their society, as well as the collection of norms, systems, and institutions underpinning this society-wide process of learning. The epistemic commons is a public resource; these spaces and norms are available to all of us, shaped by all of us, and in turn, also influence the way in which all of us engage in learning with each other. For informed and consensual decision-making, open societies and democratic governance depend upon an epistemic commons in which groups and individuals can collectively reflect and communicate in ways that promote mutual learning.
Inadvertent, emotionally or politically motivated closed-mindedness, manifesting as certainty or overconfidence when dealing with complex, indeterminate problems. Epistemic hubris can appear in many forms: it is often demonstrated in the convictions of individuals influenced by highly politicized groups; it shows up in corporate or bureaucratic contexts that err towards certainty through information compression requirements; and it appears in media, where polarized rhetoric is incentivized due to its attention-grabbing effects. Note: for some kinds of problems it may be appropriate or even imperative to have a degree of confidence in one's knowledge—this is not epistemic hubris.
An ethos of learning that involves a healthy balance between confidence and openness to new ideas. It is neither hubristic, meaning overly confident or arrogant, nor nihilistic, meaning believing that nothing can be known for certain. Instead, it is a subtle orientation that seeks new learning, recognizes the limitations of one's own knowledge, and avoids absolutisms or fundamentalisms—which are rigid and unyielding beliefs that refuse to consider alternative viewpoints. Those who demonstrate epistemic humility embrace truths where these are possible to attain, but are generally inclined to continuously upgrade their beliefs with new information.
Epistemic nihilism is a diffuse and usually subconscious feeling that it is impossible to really know anything, because, for example, “the science is too complex” or “there is fake news everywhere.” Without a shared ability to make sense of the world as a means to inform our choices, we are left with only the game of power. Claims of “truth” are seen as unwarranted or intentional manipulations, as weaponized or not earnestly believed in.
Epistemology is the philosophical study of knowing and the nature of knowledge. It deals with questions such as “how does one know?” and “what is knowing, known, and knowledge?”. Epistemology is considered one of the four main branches of philosophy, along with ethics, logic, and metaphysics.
Derived from a Greek word meaning custom, habit, or character: the set of ideals or customs around which a group of people coheres. This includes the set of values from which a culture derives its ethical principles.
The ability of an individual or group to shape the perception of an issue or topic by setting the narrative and determining the context for the debate. A “frame” is the way in which an issue is presented or “framed”, including the language, images, assumptions, and perspectives used to describe it. Controlling the frame can give immense social and political power to the actor who uses it because the narratives created or distorted by frame control are often covertly beneficial to the specific interests of the individual or group that has established the frame. As an example, politicians advocating for tax cuts or pro-business policies may use the phrase "job creators" when referring to wealthy corporations in order to suggest their focus is on improving livelihoods, potentially influencing public perception in favor of the politician's interests.
Discourse oriented towards mutual understanding and coordinated action, with the result of increasing the faith that participants have in the value of communicating. The goal of good faith communication is not to reach a consensus, but to make it possible for all parties to change positions, learn, and continue productive, ongoing interaction.
Processes that occupy vast expanses of both time and space, defying the more traditional sense of an "object" as a thing that can be singled out. The concept, introduced by Timothy Morton, invites us to conceive of processes that are difficult to measure, always around us, globally distributed and only observed in pieces. Examples include climate change, ocean pollution, the Internet, and global nuclear armaments and related risks.
Information warfare is a primary aspect of fourth- and fifth-generation warfare. It can be thought of as war with bits and memes instead of guns and bombs. Examples of information warfare include psychological operations like disinformation, propaganda, or manufactured media, or non-kinetic interference in an enemy's communication capacity or quality.
Refers to the foundational process of education which underlies and enables societal and cultural cohesion across generations by passing down values, capacities, knowledge, and personality types.
The phenomenon of having your attention captured by emotionally triggering stimuli. These stimuli strategically target the brain region responsible for emotional processing and arousal that we share with other mammals: the limbic system. This strategy of activating the limbic system is deliberately exploited by online algorithmic content recommendations to stimulate increased user engagement. Two effective stimuli for achieving this effect are those that induce disgust or rage, as these sentiments naturally produce highly salient responses in people.
An online advertising strategy in which companies create personal profiles about individual users from vast quantities of trace data left behind from their online activity. According to these psychometric profiles, companies display content that matches each user's specific interests at moments when they are most likely to be impacted by it. While traditional advertising appeals to its audience's demographics, microtargeting curates advertising for individuals and becomes increasingly personalized by analyzing new data.
False or misleading information, irrespective of the intent to mislead. Within the category of misinformation, disinformation is a term used to refer to misinformation with intent. In news media, the public generally expects a higher standard for journalistic integrity and editorial safeguards against misinformation; in this context, misinformation is often referred to as “fake news”.
A prevailing school of economic thought that emphasizes the government's role in controlling the supply of money circulating in an economy as the primary determinant of economic growth. This involves central banks using various methods of increasing or decreasing the money supply of their currency (e.g., altering interest rates).
A form of rivalry between nation-states or conflicting groups, by which tactical aims are realized through means other than direct physical violence. Examples include election meddling, blackmailing politicians, or information warfare.
Open societies promote the free exchange of information and public discourse, as well as democratic governance based on the participation of the people in shared choices about their social futures. Unlike the tight control over communications and suppression of dissenting views that characterize closed societies, open societies promote transparent governance and embrace good-faith public scrutiny.
The modern use of the term 'paradigm' was introduced by the philosopher of science Thomas Kuhn in his work "The Structure of Scientific Revolutions". Kuhn's idea is that a paradigm is the set of concepts and practices that define a scientific discipline at any particular period of time. A good example of a paradigm is behaviorism – a paradigm under which studying externally observable behavior was viewed as the only scientifically legitimate form of psychology. Kuhn also argued that science progresses by way of "paradigm shifts," when a leading paradigm transforms into another through advances in understanding and methodology; for example, when the leading paradigm in psychology transformed from behaviorism to cognitivism, which looked at the human mind from an information processing perspective.
The theory and practice of teaching and learning, and how this process influences, and is influenced by, the social, political, and psychological development of learners.
The ability of an individual or institutional entity to deny knowing about unethical or illegal activities because there is no evidence to the contrary or no such information has been provided.
First coined by philosopher Jürgen Habermas, the term refers to the collective common spaces where people come together to publicly articulate matters of mutual interest for members of society. By extension, the related theory suggests that impartial, representative governance relies on the capacity of the public sphere to facilitate healthy debate.
The word itself is French for rebirth, and this meaning is maintained across its many uses. The term is commonly used with reference to the European Renaissance, a period of European cultural, artistic, political, and economic renewal following the Middle Ages. The term can refer to other periods of great social change, such as the Bengal Renaissance (beginning in late-18th-century India).
A term proposed by sociologists to characterize emergent properties of social systems after the Second World War. Risk societies are increasingly preoccupied with securing the future against widespread and unpredictable risks. Grappling with these risks differentiates risk societies from modern societies, given that these risks are the byproduct of modernity’s scientific, industrial, and economic advances. This preoccupation with risk is stimulating a feedback loop and a series of changes in political, cultural, and technological aspects of society.
Sensationalism is a tactic often used in mass media and journalism in which news stories are explicitly chosen and worded to excite the greatest number of readers or viewers, typically at the expense of accuracy. This may be achieved by exaggeration, omission of facts and information, and/or deliberate obstruction of the truth to spark controversy.
A process by which people interpret information and experiences, and structure their understanding of a given domain of knowledge. It is the basis of decision-making: our interpretation of events will inform the rationale for what we do next. As we make sense of the world and accordingly act within it, we also gather feedback that allows us to improve our sensemaking and our capacity to learn. Sensemaking can occur at an individual level through interaction with one’s environment, collectively among groups engaged in discussion, or through socially-distributed reasoning in public discourse.
A theory stating that individuals are willing to sacrifice some of their freedom and agree to state authority under certain legal rules, in exchange for the protection of their remaining rights, provided the rest of society adheres to the same rules of engagement. This model of political philosophy originated during the Age of Enlightenment from theorists including, but not limited to, John Locke, Thomas Hobbes, and Jean-Jacques Rousseau. It was revived in the 20th century by John Rawls and is used as the basis for modern democratic theory.
Autopoiesis, from the Greek αὐτο- (auto-) 'self' and ποίησις (poiesis) 'creation, production', is a term coined in biology that refers to a system’s capability for reproducing and maintaining itself by metabolizing energy to create its own parts, and eventually new emergent components. All living systems are autopoietic. Societal autopoiesis is an extension of the biological term, making reference to the process by which a society maintains its capacity to perpetuate and adapt while experiencing relative continuity of shared identity.
A fake online persona, crafted to manipulate public opinion without implicating the account creator—the puppeteer. These fabricated identities can be wielded by anyone, from independent citizens to political organizations and information warfare operatives, with the aim of advancing their chosen agenda. Sock puppet personas can embody any identity their puppeteers want, and a single individual can create and operate numerous accounts. Combined with computational technology such as AI-generated text or automation scripts, propagandists can mimic multiple seemingly legitimate voices to create the illusion of organic popular trends within the public discourse.
Presenting the arguments of disagreeable others in their weakest forms, and after dismissing those, claiming to have discredited their position as a whole.
A worldview that holds technology, specifically developed by private corporations, as the primary driver of civilizational progress. For evidence of its success, adherents point to the consistent global progress in reducing metrics like child mortality and poverty while capitalism has been the dominant economic paradigm. However, the market incentives driving this progress have also resulted in new, sometimes greater, societal problems as externalities.
Used as part of propaganda or advertising campaigns, these are brief, highly-reductive, and definitive-sounding phrases that stop further questioning of ideas. Often used in contexts in which social approval requires unreflective use of the cliché, which can result in confusion at the individual and collective level. Examples include all advertising jingles and catchphrases, and certain political slogans.
A proposition or a state of affairs that is impossible to verify or prove true. A further distinction is that a state of affairs can be unverifiable at this time, for example, due to constraints in our technical capacity, or unverifiable in principle, which means that there is no possible way to verify the claim.
Creating the image of an anti-hero who epitomizes the worst of the disagreeable group, and contrasts with the best qualities of one's own, then characterizing all members of the other group as if they were identical to that image.