Technology is Not Values Neutral: Ending the Reign of Nihilistic Design
We fail to take tech seriously when we do not grasp its full impact on humans | Jun 26, 2022 | 25 Min Read
Technology companies such as Facebook and Google have become some of the most influential organizations in the modern world. These companies are not ordinary businesses that just happen to operate at massive scale; in fact, they are influencing society in new and profound ways. Large tech companies are taking on some of the powers and responsibilities of institutions such as news media and governments, replacing previous systems and norms with centralized control based on mass data collection and algorithmic curation. Social media companies in particular have privatized the public sphere. If it continues, this trend threatens to break the functioning of democratic self-government.
When future historians look back on the early 21st century, they will probably consider the rapid rise and influence of internet technology companies to be one of the most striking and perhaps puzzling aspects of our societies. As far as the letter of the law is concerned, Facebook or Google are just businesses, legally incorporated in order to provide products and services to willing customers in a free market. Facebook and Google are not treated fundamentally differently from any other publicly traded corporation, or even a mom-and-pop store, despite their massive scale. The unprecedented scale of these companies, however, is central to what future historians might find most striking.
As of 2020, Facebook claimed 2.8 billion monthly active users around the world. Based on current estimates of global population, this suggests that approximately one third of the entire human race logs in to Facebook every month. Facebook’s userbase is nearly ten times the size of the population of the United States and double the population of China. The Roman Catholic Church is the largest single religious organization in the world, but the number of Facebook users is more than double the number of baptized Catholics. Most Catholics almost certainly spend more time on Facebook than at church. Facebook is the biggest, but other major tech companies also boast userbases that dwarf the populations of most nation states. More than a billion people use Gmail, roughly equivalent to the entire world population from the year 1800. More than one billion people use YouTube every month, watching more than one billion hours of video each day. Twitter, which is comparatively tiny, still has more monthly active users than the population of the United States.
Major social media companies, most of which didn’t even exist at the turn of the century, now mediate, coordinate, and influence the thoughts and behaviors of billions of people. Influence at this scale is unprecedented, and most social media companies wield this power without even selling a product to their users. Their users—or more precisely, the users’ data and behaviors—are the product, and advertisers are the customers. Social media companies are not just another type of business; in fact, they have acquired a number of the functions of government, albeit in pursuit of profit rather than in service of the population’s interests. The success of today’s social media companies depends on their ability to influence the behavior of large numbers of people through mass data collection, centralized algorithmic control, and stewardship of enormous networks of users. This does not entail duplicating all the functions of the state—tech companies have little interest in operating sanitation services or maintaining standing armies—but it does entail large-scale coordination of behavior, meaning-making, and political education. This ability to alter behavior comes at the direct expense of existing systems of social influence, such as traditional media, scientific debate, or even government itself. Older institutions are displaced as users have more and more of their information environment shaped by social media platforms, which continue to pursue their own narrow interests.
This dynamic is recognized in China, where the party-state has cracked down on tech companies. In the West, however, governments have not understood the full scope of the problem, let alone responded adequately. Social media companies have privatized the public sphere, driving polarization at the cost of shared sensemaking, all in service of greater engagement and control. Since a democracy rests on shared sensemaking among the demos (“the people”), this risks hollowing out the democracy itself. Already, polarization has filtered upwards from the electorate to the elected representatives, leading to ever more gridlock and infighting. Western governments have been reactive and inconsistent in response, trying and failing to police “misinformation” on social media platforms and at the same time co-opt them to use against partisan rivals. Around the world, most governments have failed to target or even perceive the fundamental causes of the problem.
The relationship between the user of a social media platform and the platform itself is quite different from the usual relationship between a customer and a business. Traditionally, a business is supposed to offer a useful product or service for money. The customer believes they gain more value from the product or service offered than from the money they exchange for it, so they agree to pay and a transaction is made. The business gains revenue that it needs to fund its operations. Businesses are incentivized to offer products and services that satisfy their customers in order to keep getting their money; customers “vote with their wallets” for the best businesses. Everybody wins.
Most social media companies, however, don’t charge their users anything at all. Facebook, Google, and Twitter have never charged anything to view or upload content and are unlikely to do so in the future. Instead, they make money from advertisers seeking the attention of their users. These ads are worth money not only because social media platforms have the general attention of a large number of people (like a billboard on Times Square, for example), but because they are able to target specific advertisements at increasingly specific subgroups of users that are most likely to be interested in the content of the ad. This is only possible because social media platforms are able both to collect and make sense of large amounts of data about each user: their age, gender, location, profession, educational level, interests, viewing history, behavioral patterns, and so on.
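The targeting described above can be illustrated with a deliberately simplified sketch. All of the users, ads, and attributes below are hypothetical, and real ad systems run large-scale auctions over machine-learned relevance predictions rather than simple lookup rules; the point is only to show why more data about a user enables finer-grained matching.

```python
# Illustrative, hypothetical sketch of attribute-based ad targeting.

USER = {
    "age_group": "25-34",
    "location": "US",
    "interests": {"cycling", "cooking"},
}

ADS = [
    {"id": "ad_bikes", "targets": {"interests": {"cycling"}}},
    {"id": "ad_snow_tires", "targets": {"location": {"CA", "NO"}}},
    {"id": "ad_cookware", "targets": {"interests": {"cooking"}, "location": {"US"}}},
]

def match_score(user, ad):
    """Count how many targeting criteria this user satisfies.

    The more attributes the platform knows about a user, the more
    criteria it can check -- which is why the data itself is valuable.
    """
    score = 0
    for attribute, wanted in ad["targets"].items():
        value = user.get(attribute)
        if isinstance(value, set):
            score += len(value & wanted)  # e.g., overlapping interests
        elif value in wanted:
            score += 1                    # e.g., matching location
    return score

# Serve whichever ad best matches this user's profile.
best_ad = max(ADS, key=lambda ad: match_score(USER, ad))
```

Note that the user who reveals two matching attributes (interests and location) is worth more to the cookware advertiser than to the others; a platform that knew only the user's location could not make that distinction at all.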
The value of each of these social media platforms depends on its ability to predict and alter user behavior. This in turn depends on the amount of data it can collect on all of its individual users, as well as its ability to make sense of that data with centralized algorithms. This leads to the strange situation in which users do the hard work of providing the platform with the continuous stream of valuable information about themselves that it needs. As a result, users of tech platforms are less like customers and more like citizens of a virtual society, diligently maintaining their own records in accordance with virtual norms. This pattern is most obvious with free-to-use social media companies like Facebook, where an individual user gains directly by sharing their information with the platform’s network of users (and incidentally with the company as well). The ability to reach this massive and growing network of users is the main source of value for both individual users and advertisers. The same pattern remains partially present even for tech platforms that charge fees to users and depend less fundamentally on exponential network dynamics, such as Amazon, Uber, or Netflix. In these cases, money is not the only thing of value that users hand over to the platforms; their data, especially through continued usage, again provides each company with valuable insights into the same set of personal characteristics and identifiers.
Arguably, the data is even more important than the money in the long run. Tech companies have long shown a willingness to run at a loss in order to maintain their growth and market position. For social media companies, high valuations come from a large network of users, which makes the network more appealing to additional users as well as to advertisers. Both building this network and governing the users’ behavior relies on massive troves of user data. The dynamics are not identical for other tech companies, but they are more similar than different. Most crucially, the business model of either kind of tech company naturally trends towards a network-based monopoly (or occasionally duopoly), and user data is an important competitive edge that rivals cannot duplicate. The ability to collect and use this extensive data is what allows tech companies to offer platforms attractive to users in the first place and retain them in the future. As the former Amazon Services business chief James Thomson put it, “Amazon can basically anticipate what you’re going to need next—size up the inventory of which brands they are going to need in three to six months when you are ready to ‘unexpectedly’ buy those products.” Uber is rumored to be depending not on the economics of its core ride-hailing business to become profitable in the future, but on the massive amount of data it has collected through its operation.
The fundamental business models of network-based tech companies share a few key features. They depend on acquiring vast amounts of data generated by many individual human beings. This data then needs to be processed and analyzed in a centralized, automated manner through the use of algorithms. The algorithms then curate content to affect the thoughts and behaviors of users, steering them towards behaviors that benefit the company and its advertisers. Whether or not platforms charge fees to users, in most cases tech companies are incentivized to maximize the engagement on their platforms in order to maximize the amount of data collected. Human intervention on the part of the tech company is not totally eliminated, but it is largely minimized in favor of automated systems. For social media platforms, the result is the homogenization of user experiences into a handful of different categories. This is “standardized differentiation,” a process in which a product is offered in many different varieties, and its once-unpredictable variation is organized into legible categories to provide greater selection to customers. In this case, the product is of course the users.
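The curation loop described in this paragraph can be sketched in miniature. The categories, content items, and engagement numbers below are all hypothetical, and real platforms use large machine-learned models rather than lookup tables; the sketch only illustrates the two steps named above: sorting a user into a legible category, then serving whatever that category engages with most.

```python
# A minimal, hypothetical sketch of engagement-maximizing curation and
# "standardized differentiation."
from collections import defaultdict

# Which legible category each content item belongs to (hypothetical).
ITEM_CATEGORY = {
    "hiking_video": "outdoor",
    "trail_review": "outdoor",
    "news_clip": "politics",
}

# Average observed engagement (e.g., watch time) per (category, item).
ENGAGEMENT = {
    ("outdoor", "hiking_video"): 0.9,
    ("outdoor", "news_clip"): 0.4,
    ("politics", "news_clip"): 0.8,
    ("politics", "hiking_video"): 0.2,
}

def categorize(history):
    """Collapse an individual's viewing history into the single category
    they best match -- homogenizing the user into a legible type."""
    counts = defaultdict(int)
    for item in history:
        counts[ITEM_CATEGORY[item]] += 1
    return max(counts, key=counts.get)

def recommend(history, catalog):
    """Serve whatever the user's assigned category engages with most,
    steering future behavior toward the pattern of that type."""
    category = categorize(history)
    return max(catalog, key=lambda item: ENGAGEMENT[(category, item)])

# A user with a mostly outdoor history is sorted into "outdoor" and fed
# more outdoor content, reinforcing the category they were assigned to.
pick = recommend(["hiking_video", "trail_review", "news_clip"],
                 ["hiking_video", "news_clip"])
```

The feedback is the key design point: each recommendation generates more history consistent with the assigned category, which makes the user's behavior easier to predict on the next pass.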
While social media companies may brand their platforms and services in a variety of different ways, they are all fundamentally leveraging internet-enabled network effects, user-generated content, and AI curation to alter the thoughts and behaviors of vast numbers of people in a manner that benefits the company or its advertisers. Social media companies are not primarily providing a useful service to the public in exchange for money. In reality, their ability to alter the behavior of the public is the service.
Altering people’s behavior at mass scale is not something that happens without secondary effects. Previous routines, norms, and expectations are upended or abandoned as a result. This is what the tech industry calls “disruption.” But contrary to the connotations of the word, social media companies do not disrupt established norms in a range of unique and varied ways. Rather, they all “disrupt” them in the same way: by replacing a range of previously unique and varied local systems, norms, and behaviors with a centralized system dependent on mass data collection and algorithmic governance. This centralized system sorts users into categories based on their demographic information and engagement history, curating their experience based on the algorithmically discovered behavior type that the user best matches. This process involves not only the natural discovery of emergent patterns of user behavior, but also the “standardized differentiation” in which the company proactively homogenizes users into one of these legible forms based on their assigned categories.
Social media companies are thus competing directly with other institutions that govern people’s behavior, seeking to usurp parts of the authority of local systems and replace them with a single centralized authority located in a server farm. This dynamic has been the fundamental driver of many conflicts between tech companies and local governments and communities—and even nation states. The loudest and most persistent conflicts have been those between social media platforms and legacy information channels, leading to charges of “fake news,” “misinformation,” “filter bubbles,” and the like, as well as a semi-regular series of Congressional hearings.
Other tech companies have also fought their own battles, although these have not captured public attention so strongly and more closely resemble the usual legal and social conflicts that have accompanied new industries and megacorporations for the past 150 years, from Standard Oil to Walmart to Microsoft. The loudest of the current crop is undoubtedly Uber, which has fought a decade-long war with unions, regulators, both local and national governments, and even their own drivers over the legality, sustainability, and social consequences of their platform. Uber went so far as to use its data to specifically target law enforcement and government officials to prevent them from using the platform. Similar examples include Airbnb, Grubhub, Amazon, and others, which have all fought strenuous political and legal battles around the world over treatment of workers, anti-competitive business practices, the legality and classification of their services, and more.
Whereas tech companies like Uber or Amazon have effectively privatized and centralized sectors such as transportation or logistics, social media companies have privatized and centralized something much less tangible, but far more consequential: the public sphere. The sheer volume of attention and content that people direct onto Facebook, YouTube, Twitter, Instagram, TikTok, and other social media platforms is unprecedented. But these platforms are not just, or even primarily, used for entertainment, though that is how they might prefer to be branded. Social media is now the place where people read the news, seek information, keep tabs on elected officials, evaluate the credibility of experts and public figures, and both share and debate their deeply-held views about the world with their closest friends and family—as well as with complete strangers.
These activities are crucial for the health of a democracy, since democratic self-government depends on the ability of an educated and informed citizenry to sort falsehoods from facts, make authentic decisions about their values and worldviews, and vote accordingly. It is this process that keeps government accountable and determines the future direction that a democratic society will take. The quality and success of democratic self-government will therefore depend ultimately on the epistemic quality of a democracy’s public sphere. When the public sphere is healthy, accurate information about the world, transparency about influential decisions made by elites, and sound arguments are widely available and allow the public to authentically self-govern. When the public sphere is unhealthy, misinformation, intrigue, and propaganda drown out facts and reasoning. The public is instead locked into a tribal, polarized information war with itself, effectively making democratic government impossible. In other words, successful democratic government requires an epistemically healthy public sphere.
Maintaining the epistemic health of the public sphere does not happen automatically. The institutions that used to fulfill this role—universities, schools, news organizations, and, most importantly, the public itself—have been unable to prevent social media companies from privatizing the public sphere. Rather than serving the key function of enabling successful self-government at scale, the public sphere has been monopolized to serve the extraordinarily narrow interests of social media companies: amorally increasing time on site, engagement, ad revenue, profits, and power. This would be bad enough from the perspective of maintaining a healthy democracy, but it turns out that the most effective ways for social media companies to maximize their metrics are by stoking precisely the misinformation, intrigue, and partisan polarization that characterize an unhealthy public sphere.
These negative effects are also amplified because special interest groups, partisan operatives, foreign governments, and other politically motivated actors have found that social media is a very useful tool for spreading propaganda. There has always been a market for narratives and arguments that support the designs of the wealthy or influential, but the wide penetration, instantaneous speed, and recommendation algorithms of social media have made the impact of such content far stronger than they otherwise might have been—both to the benefit of the platform’s customers and to the detriment of the capacity for public sensemaking as a whole. Social media companies provide a battleground for a multitude of mutually opposing actors to wage information warfare on the population, which in turn drives user engagement. This vicious feedback loop benefits both politically motivated actors and social media companies, giving neither a strong incentive to alter their approach, while users are caught in the loop.
The fundamental dynamic of tech companies usurping governance authority has not been widely understood yet in the open societies of the world, but it does appear to have been understood in China. Since November 2020, the Chinese government has stepped up a widespread crackdown on China’s tech industry, starting with the suspension of the record-setting initial public offering (IPO) for the financial technology company Ant Group. Shortly after, an antitrust investigation was launched into Ant Group’s parent company, Alibaba Group. In April 2021, Ant Group announced that, rather than going public, it would become a holding company overseen by China’s central bank. Throughout 2021, the Chinese government issued new guidelines to tech companies restricting monopolistic behavior and passed data protection laws that, among other things, banned tech companies from collecting user data that is unnecessary to the provision of services without the user’s consent.
Notably, the Chinese government’s crackdown didn’t target a specific subset of tech companies for their narrow effects on wider society, in the way that, for example, many jurisdictions have sought to limit the effects of Uber. Instead, the Chinese government targeted the core mechanisms that underpin the success of tech companies as centralizing competitors for governance: mass collection of user data, the maximization of user engagement, and monopolistic business practices that defend their positions as unmatched data hoarders. China’s biggest social media app, WeChat, as well as China’s Uber, Didi, both temporarily suspended new user registrations in July 2021 in response to regulatory inquiries. In the same month, WeChat’s parent company Tencent announced it would implement facial recognition checks to ensure compliance with regulations banning minors from playing online games between 10:00 p.m. and 8:00 a.m.
China’s recent crackdown on its domestic tech industry is instructive, but China would not have had a domestic tech industry to crack down on in the first place if the government had not long been wary of the potentially destabilizing impact of the internet. The Chinese government has issued regulations concerning internet usage dating back to the 1990s. The combination of China’s “Great Firewall” policy of heavy online censorship, bans on foreign tech companies, and mandatory compliance with Chinese internet regulation has resoundingly succeeded in preventing non-Chinese tech companies from collecting the data of Chinese users, let alone establishing a market presence in the country. Early internet observers both predicted and hoped that the internet would lead to political upheaval in China; instead, the Chinese government co-opted the new technology and used it to fortify its own system of control. The open societies of the West, by contrast, have experienced the full brunt of internet-enabled social upheaval.
Advances in networking technology and computation allowed tech companies to bypass existing systems of governance, privatizing and centralizing them for profit. Crucially, this is not just an aggressive strategy that a tech company may or may not choose to undertake, nor is it a contingent accident of history. It is the core, defining feature of a successful tech company. Their success is dependent upon the mass collection of data and the algorithmic regulation of user behavior to maximize engagement.
Technology is frequently a centralizing force, and the internet has proved to be no exception. More tech monopolies cut from the same cloth are not the solution. Furthermore, recent history would suggest that it would be unwise to expect self-regulation to solve the problems caused by tech companies. The interests of tech companies are in many ways at odds with the interests of the society in which they are embedded. Centralized algorithmic governance by a small number of nominally private entities is incompatible with authentic democratic self-government. So far, the for-profit governments have been winning this battle, surpassing and eroding the capacity of our democracy. Without direct intervention that strikes at the core mechanisms that allow tech companies to usurp the authority to govern, there will come a point when democracies can no longer govern themselves at all.
Agent provocateur is French for “provocative agent.” The term refers to individuals who attempt to persuade another individual or group to commit a crime or rash act, or to implicate them in such acts, in order to defame, delegitimize, or criminalize the target. Examples include starting a conflict at a peaceful protest or attempting to implicate a political figure in a crime.
Ideological polarization is generated as a side effect of content recommendation algorithms optimizing for user engagement and advertising revenue. These algorithms upregulate content that reinforces existing views and filter out countervailing information, because doing so has been proven to drive time on site. The result is an increasingly polarized perspective founded on a biased information landscape.
To “cherry pick” when making an argument is to selectively present evidence that supports one’s position or desired outcome, while ignoring or omitting any contradicting evidence.
A general term for collective resources in which every participant of the collective has an equal interest. Prominent examples are air, nature, culture, and the quality of our shared sensemaking basis or information commons.
The cognitive bias of 1) exclusively seeking or recalling evidence in support of one's current beliefs or values, 2) interpreting ambiguous information in favor of one’s beliefs or values, and 3) ignoring any contrary information. This bias is especially strong when the issues in question are particularly important to one's identity.
In science and history, consilience is the principle that evidence from independent, unrelated sources can “converge” on strong conclusions. That is, when multiple sources of evidence are in agreement, the conclusion can be very strong even when none of the individual sources of evidence is significantly so on its own.
While “The Enlightenment” was a specific instantiation of cultural enlightenment in 18th-century Europe, cultural enlightenment is a more general process that has occurred multiple times in history, in many different cultures. When a culture goes through a period of increasing reflectivity on itself it is undergoing cultural enlightenment. This period of reflectivity brings about the awareness required for a culture to reimagine its institutions from a new perspective. Similarly, “The Renaissance” refers to a specific period in Europe while the process of a cultural renaissance has occurred elsewhere. A cultural renaissance is more general than (and may precede) an enlightenment, as it describes a period of renewed interest in a particular topic.
A deep fake is a recording of a person that has been digitally altered or synthesized using AI, for the purpose of political propaganda, sexual objectification, defamation, or parody. Deep fakes are progressively becoming harder for the untrained eye to distinguish from reality.
Empiricism is a philosophical theory that states that knowledge is derived from sensory experiences and relies heavily on scientific evidence to arrive at a body of truth. English philosopher John Locke proposed that rather than being born with innate ideas or principles, man’s life begins as a “blank slate” and only through his senses is he able to develop his mind and understand the world.
An orientation towards a reality that is neither epistemic nihilism nor epistemic hubris. As opposed to an ethos of knowing, it is an ethos of learning, which The Consilience Project suggests is needed for grappling with the unique challenges of 21st-century sensemaking. This ethos implies curiosity and a motivation to pursue further learning, embracing facts and truth where these are possible to attain, but always remaining open to further learning—refusing to commit to absolutism or fundamentalism.
This form of nihilism is a diffuse and usually subconscious feeling that it is impossible to really know anything, because, for example, “the science is too complex” or “there is fake news everywhere.” Without a shared ability to make sense of the world as a means to inform our choices, we are left with only the game of power. Claims of “truth” are seen as unwarranted or intentional manipulations, as weaponized or not earnestly believed in.
Epistemology is the philosophical study of knowing and the nature of knowledge. It deals with questions such as “how does one know?” and “what is knowing, known, and knowledge?”. Epistemology is considered one of the four main branches of philosophy, along with ethics, logic, and metaphysics.
Derived from a Greek word meaning custom, habit, or character, the term refers to the set of ideals or customs around which a group of people coheres. This includes the set of values from which a culture derives its ethical principles.
A category of risk that denotes the complete and total elimination of humanity or of life on the planet. Example: an impact from an Earth-killing asteroid.
Discourse oriented towards mutual understanding and coordinated action, with the result of increasing the faith that participants have in the value of communicating. The goal of good faith communication is not to reach a consensus, but to make it possible for all parties to change positions, learn, and continue productive, ongoing interaction.
Processes that occupy vast expanses of both time and space, defying the more traditional sense of an "object" as a thing that can be singled out. The concept, introduced by Timothy Morton, invites us to conceive of processes that are difficult to measure, always around us, globally distributed and only observed in pieces. Examples include climate change, ocean pollution, the Internet, and global nuclear armaments and related risks.
Information warfare is a primary aspect of fourth- and fifth-generation warfare. It can be thought of as war with bits and memes instead of guns and bombs. Examples of information warfare include psychological operations like disinformation, propaganda, or manufactured media, or non-kinetic interference in an enemy's communication capacity or quality.
Refers to the foundational process of education which underlies and enables societal and cultural cohesion across generations by passing down values, capacities, knowledge, and personality types.
False or misleading information, irrespective of the intent to mislead. Within the category of misinformation, disinformation is a term used to refer to misinformation with intent. In news media, the public generally expects a higher standard for journalistic integrity and editorial safeguards against misinformation; in this context, misinformation is often referred to as “fake news”.
A prevailing school of economic thought that emphasizes the government's role in controlling the supply of money circulating in an economy as the primary determinant of economic growth. This involves central banks using various methods of increasing or decreasing the money supply of their currency (e.g., altering interest rates).
A form of rivalry between nation-states or conflicting groups, by which tactical aims are realized through means other than direct physical violence. Examples include election meddling, blackmailing politicians, or information warfare.
Open societies promote the free exchange of information and public discourse, as well as democratic governance based on the participation of the people in shared choices about their social futures. Unlike the tight control over communications and suppression of dissenting views that characterize closed societies, open societies promote transparent governance and embrace good-faith public scrutiny.
The theory and practice of teaching and learning, and how this process influences, and is influenced by, the social, political, and psychological development of learners.
The ability of an individual or institutional entity to deny knowing about unethical or illegal activities because there is no evidence to the contrary or no such information has been provided.
First coined by philosopher Jürgen Habermas, the term refers to the collective common spaces where people come together to publicly articulate matters of mutual interest for members of society. By extension, the related theory suggests that impartial, representative governance relies on the capacity of the public sphere to facilitate healthy debate.
The word itself is French for rebirth, and this meaning is maintained across its many purposes. The term is commonly used with reference to the European Renaissance, a period of European cultural, artistic, political, and economic renewal following the middle ages. The term can refer to other periods of great social change, such as the Bengal Renaissance (beginning in late 18th century India).
A term proposed by sociologists to characterize emergent properties of social systems after the Second World War. Risk societies are increasingly preoccupied with securing the future against widespread and unpredictable risks. Grappling with these risks differentiates risk societies from earlier modern societies, since the risks themselves are byproducts of modernity’s scientific, industrial, and economic advances. This preoccupation with risk stimulates a feedback loop and a series of changes in the political, cultural, and technological aspects of society.
Sensationalism is a tactic often used in mass media and journalism in which news stories are explicitly chosen and worded to excite the greatest number of readers or viewers, typically at the expense of accuracy. This may be achieved by exaggeration, omission of facts and information, and/or deliberate obstruction of the truth to spark controversy.
A theory stating that individuals are willing to sacrifice some of their freedom and agree to state authority under certain legal rules, in exchange for the protection of their remaining rights, provided the rest of society adheres to the same rules of engagement. This model of political philosophy originated during the Age of Enlightenment from theorists including, but not limited to John Locke, Thomas Hobbes, and Jean-Jacques Rousseau. It was revived in the 20th century by John Rawls and is used as the basis for modern democratic theory.
Autopoiesis—from the Greek αὐτo- (auto-) 'self' and ποίησις (poiesis) 'creation, production'—is a term coined in biology that refers to a system’s capability for reproducing and maintaining itself by metabolizing energy to create its own parts, and eventually new emergent components. All living systems are autopoietic. Societal autopoiesis is an extension of the biological term, referring to the process by which a society maintains its capacity to perpetuate and adapt while experiencing relative continuity of shared identity.
Used as part of propaganda or advertising campaigns, these are brief, highly-reductive, and definitive-sounding phrases that stop further questioning of ideas. Often used in contexts in which social approval requires unreflective use of the cliché, which can result in confusion at the individual and collective level. Examples include all advertising jingles and catchphrases, and certain political slogans.
A proposition or state of affairs that cannot be verified or proven true. A further distinction: a claim can be unverifiable at this time, for example due to constraints in our technical capacity, or unverifiable in principle, meaning there is no possible way to verify it.