Technology is Not Values Neutral
The undeniable impact of “Big Tech” on humanity in recent decades should lead us to ask fundamental questions about technology. Exponential rates of technological change have brought the cutting edge of advanced technology into the heart of everyday experience. Rather than serving primarily in the background for labor or infrastructure, technological advances are in the immediate foreground—in the palm of our hands—central to the most intimate and consequential aspects of our lives, including education, communication, relationships, and politics. In recent decades, societies have been fundamentally transformed by the innovations of a relatively small number of technologists. The products and services of the modern digital economy continue to bring technology ever closer to the core of the human experience—and “the digital age” is just getting started.
It is clear that these changes have impacted nearly all aspects of material goods, manufacturing, supply chains, energy, and market transactions—technology means that the world works differently now. It is perhaps less clear to many that these changes have also impacted most aspects of culture, identity, community, language, meaning-making, and emotional patterning—technology means that worldviews, personalities, and values are different now. Technologies encode practices and values into the societies that adopt them. This happens in many ways, often unpredictably and unintentionally, as the second- and third-order effects of technologies.
Technologies in use today change our practices and values, right now, creating the future of humanity and its environments. Decades of environmentalist lobbying and education have made real concern about the material and environmental consequences of technology commonplace. Such consequences are typically referred to as externalities. As hard as it was to raise awareness of these physical externalities of technology, it remains even more difficult to bring concern to the ethical, cultural, and psychological consequences—i.e., what could be called psychosocial externalities. Civilization now hinges on our ability to manage both kinds of concerns effectively. Changes to human behavior and value systems may play out over the long term as second- or third-order effects, but they are nevertheless part of the matrix of impacts within which technological innovation must situate itself. Designing with these in mind is one of the great problems of our time, which this paper seeks to highlight for more widespread and concerted deliberation.
Values are Baked Into Technologies
Consider the smartphone and all the technologies that have interfaced and co-evolved with it. In 2022, the smartphone requires at least the basic capabilities of a personal computer, as well as the Internet, server farms, microchip supply chains, Wi-Fi, cameras, lenses, microphones, speakers, software apps, and communications satellites. Most phones can be augmented with wearable biometric sensors and headsets, and may be linked to everything from your car to your washing machine. An ever-growing assortment of related innovations increases the smartphone’s power and presence in our lives.
It could be argued that the ecology of technologies surrounding the smartphone—the smartphone technology ecosystem—has resulted in an epoch-making shift in how humans relate to each other and the world around them. It has changed human behavior and psychology more profoundly in just two decades than perhaps any prior technology, religion, cultural movement, or empire did in an equivalent timeframe. The scope of its impact on human society is difficult to estimate because its effects reach so deeply into the very nature of human communication, thought, and social organization.
Think about all the ways in which our minds, relationships, and cultures changed with the widespread adoption of the smartphone technology ecosystem. How does it affect memory and navigation capacities, or attention span, or personal reflection? How has it affected the values enacted around a family dinner table, or the norms of social interactions in general? How has it affected our sense of time and space, and our expectations of relationship and communication across them? Was all of this intended or considered by the initial inventors and adopters? It was not. Only a small number of these impacts were desired and intended, and the vast majority were not even considered.
Most people today have built their lives around the choices made possible by the smartphone. As a result, their values have changed in innumerable ways, both subtle and not so subtle. The ways in which the affordances of smartphone apps impact self-understanding, mental health, ethics, and a host of other psychosocial dynamics have been well documented, especially in younger people. At this point the psychological and cultural effects of widespread, habitual, and nearly non-stop use of these devices have become apparent. Philosophers and public intellectuals now take for granted the relevance of theorizing the high-tech future of the smartphone technology ecosystem as an extension and deepening of the human mind.
In fact, the changes to human psychology and social reality resulting from smartphone technology ecosystems are intrinsic to the possibilities built into them from the start. Technologies always create the potential for new forms of behavior, values, and thought, even when the technology is not explicitly made to do so. Smartphones were designed with the values of communication and access to information in mind. The smartphone has become central to human existence because humans highly value communication and information. But an inevitable result of enabling communication at a distance is a change in how humans value face-to-face interaction. Instant messaging and “FaceTime” have come to replace in-person contact as the default modalities of communication. Easy access to nearly unlimited information also inevitably changes the value we place on skills such as memorization, information recall, and the ability to study and learn from books. A GPS device on your phone is designed to get you where you need to go, and it does that. It was not designed to weaken your sense of direction and make you dependent upon it to feel safe in urban or rural areas. Yet it also does that. What value is there now in having a good sense of direction or in being able to give and remember directions and locations? This may seem trivial, but it is only the tip of the iceberg.
Technologies much less complex than the smartphone have the same kinds of far-reaching impacts on human minds and cultures. Any new human-made tool has the potential to shape what is considered to be valuable and why. Consider something as simple as a bathroom scale, which made it possible for almost anyone to have an accurate measure of their own body weight. Given the history of standardized measurement technologies, from one perspective this is a remarkable innovation—a reliable scale for weighing humans, small enough to fit next to your toilet.
As a product, the bathroom scale was clearly widely desired, first by medical professionals and then by nearly everyone. However, it also made body weight into one of the most overvalued indexes of health, especially in America. A technology in one of the most intimate places in our homes gives us an objective, numerical view of ourselves. It produces a number that comes to seem all-important. Tech-enabled quantification makes for a certain ease of self-objectification. One’s body weight has come to be obsessively overvalued and used as a meaningful index of health beyond its actual diagnostic value.
It has been known for decades that body mass index (BMI), blood chemistry, blood pressure, cardiovascular capacity, and so on, are better indexes of health. Body weight is no doubt important, but taken alone it is not an accurate proxy for diet quality, exercise, or any other meaningful aspects of well-being. Yet weight loss remains one of the most highly valued health outcomes, with whole industries focused on this value. Beyond that, there is a clear correlation between the availability of bathroom scales and increases in cases of anorexia. The point is not that tracking one’s body weight is bad. The point is that the widespread and easy availability of doing so, through a simple advance in technology, made it possible to value body weight in a way that was not possible before. The scale increased the salience and visibility of body weight, changing the ways in which it could be valued, thought about, and otherwise related to psychologically. This had unforeseen outcomes.
Technologies are created with both values and material outcomes in mind. When technologies are brought into the world they create a future: material, social, psychological, and cultural. This future inevitably escapes the control of the early inventors, implementers, and adopters. Even a technology as simple as the plow is created with values in mind, such as ease of labor, speed of work, and a desire for food surpluses that create a feeling of safety. It serves these values by affecting material outcomes, such as preparing fields for agriculture. The plow would also lead to urban fortifications for storing and protecting food surpluses, and thus change the nature of human habitats, warfare, and architecture. The plow favors men as farmers due to the upper body strength needed to use it, which changed ideologies about the respective value of the sexes. The plow led eventually to certain forms of animal domestication, in the long run altering the relationship between humans and the natural world, and thus moving religious beliefs from animistic to theocentric. Normalization of animal-drawn plowing as the basis of agriculture made it difficult for humans to value animals as sacred and as equal or surpassing humans in worth. It is hard to worship the sacred spirit of an animal that must be beaten all day to pull a plow. Dominion over animals in this way justified the spread of a cultural narrative that humanity’s role is to control nature, rather than to be a part of nature, which laid the foundations for a mindset that eventually resulted in the industrial revolution. These and countless other changes to human minds, behaviors, and culture followed from the invention of the plow.
Of course, the plow was not created with these outcomes in mind, nor could they have been predicted by those who first built and used it. Technologies co-evolve with value systems—neither determines the other, even though they are intimately related. Whole epochs of civilization, as ways of life and culture, come to be defined by certain sets of technologies and infrastructures.
Technology, Ideology, and the Shape of Civilizations
Ideologies supporting the proliferation (and goodness) of technology began to drown out conservative (mostly religious) moral resistance by the 19th century. Instead, the zeitgeist shifted toward a faith in technology as either positive or neutral with regard to human nature. Contemporary techno-optimists have continued with the ideology of the inherent positivity of technology, which is now further supported with a postmodern perspective that may be summarized as “skeptical of universal values in general.” The argument is that technologies are only as bad or good as the people who use them. Moreover, cross-cultural standards of value (good versus bad) are suspect in postmodernism—therefore considering technology in terms of values is itself questionable. According to this view, technology is about what works for everyone—universal functionality—rather than what is good for everyone—universal morality.
It may be reasonable to summarize Facebook’s approach as: “our technology connects people; what they do after that is up to them.” This is an abnegation of responsibility for the impacts of technology. It is an ideology that places the onus of right action on “users” rather than on designers. But this kind of thinking is understandable and can be powerfully convincing under certain circumstances. When a technology is designed to solve a real and urgent problem, and it works as intended, it can result in widespread uptake among the population. This leads people to the understandable conclusion that it is a good thing. The goodness of the technology feels obvious, particularly when the second- and third-order effects that some users may experience only become evident later or in other domains. To many, it appears that the technology itself is not an issue—it is simply up to people to use it responsibly.
Throughout the technological acceleration of the 20th century, philosophers and designers questioned how to think about the interface between humans and technology. Following on from the realization that technologies are not values neutral, a relatively small group of theorists and innovators sought new ways of designing and using technologies. Theorists of technology such as Lewis Mumford and Langdon Winner foresaw the necessity of addressing the second- and third-order effects of technologies on individuals, cultures, and value systems. Like most significant societal changes in worldview, these ideas were at first not widely known. But the trends we are experiencing today make this theory of technology design increasingly important.
These thinkers also clarified the crucial idea that technologies are not simple isolated things. Technologies are created and evolve together to form whole civilizations (see Box 1). It is important not to consider any technology as existing all by itself. Humans have come to live in massive networks of operationally related technologies, which have come to form whole ecologies and infrastructures supporting every aspect of conscious experience. Ultimately, this new human reality constitutes a technological epoch with distinct material characteristics and societal dynamics.
Evolutions of Design Thinking
As the development of modern technologies began to accelerate, there was an accompanying evolution of thought regarding the process of innovation—particularly in terms of how technology should be made, its impacts, and the overall goals and dynamics of its creation. Below, we suggest a few broad categories of the various positions and worldviews that have been adopted throughout the evolution of the modern world system. These categories are not mutually exclusive and should not be understood as empirically grounded sociological constructs—they are offered simply as a means to clarify the complex dynamics of contemporary culture.
Early modern technologists worked with what might be called naively optimistic design: design that assumes positive values are intrinsically associated with all technology-human interfaces. This yielded historically unprecedented technological innovations and a complete restructuring of human life around an expanding stack of increasingly complicated technologies. This rapid boom made techno-scientific progress the default religion of the modern world. But naive design has also brought us to the brink of catastrophic risks due to a principled neglect of concern for possible negative second- and third-order effects, in both physical and psychosocial domains.
In response, what might be called luddite design arose: design seeking to undo or roll back technological advancements, based upon the assumption that negative values are intrinsically encoded into all technology-human interfaces. Luddite approaches often looked backward, trying to rescue forms of life that were being overthrown by a range of technological encroachments. In one sense, a luddite perspective is less about design and more about technology in general, in which less is more, changes are resisted, and ecological costs are held as significant priorities. In time, this kind of thinking became self-evidently inadequate as a response to techno-optimism, as new technologies tend to confer power to those who adopt them. Any worldview that makes the conscious decision to opt out of adopting new technologies becomes increasingly disempowered. Technologies have therefore advanced apace, and continue now to reshape the Earth and humanity.
As modernity advanced, science gained in power, and technological innovations increased, there arose a form of nihilistic design: design that effectively ignores value as an aspect of technology-human interfaces. This approach might also be called values agnostic design. This form of design dominates most technology innovation today, in which broad cultural assumptions support the idea that values are not baked into anything. Values are social realities rather than anything that is physically real: therefore the value of anything physical, including technology, is only what we make of it. The idea is that our values come from churches, schools, and families, and these institutions and social processes then impact how technologies are used. One can use implements of farming as weapons of war and vice-versa, as when beating swords into plowshares. Initially, it can seem obvious that technology only has the values we give to it. Indeed, this has been the dominant approach to thinking about technology for nearly a century.
The philosopher of technology Langdon Winner described aspects of this view in what he called the “Technological Orthodoxy.” Reviewing the history of design thinking and technology, he distilled the foundational system of beliefs and assumptions underlying modern technological development. The roots of this paradigm arose in the wake of the scientific revolution but have persisted through to the digital age. It is both reasonable and expected that this view would become widely held, given rapid rates of change, improving quality of life, obvious material abundance, and other clear benefits of much modern technology.
“Technological orthodoxy” entails a strict separation of values from technology. In part, this is related to understanding value itself as primarily socially constructed. This idea began with modernist scientific materialism. The assumption that physical matter constitutes all that exists leaves no room for “subjective qualities” such as value. This idea—that value is purely subjective—reached its climax with postmodernist expressions of skepticism towards all universal frameworks of value. The result has been a progressive deepening of the separation of technology design from considerations of value. Thinking about design in this way leads innovators to “move fast and break things” and focus effort on “disruptive” technologies. Whereas naively optimistic design made the assumption that technology could advance sacrosanct values, nihilistic design decoupled technological innovation from any serious considerations about value at all. The consequences of this approach are likely to be both far-reaching and difficult to understand completely.
As a way forward, we propose here a form of axiological design: design that factors and leverages the intrinsic values inevitably encoded in all technology-human interfaces. This is design that acknowledges that, in addition to its physical impacts, a technology also affects the behaviors and thus the psychology of the people using it (and, as it proliferates, the society as a whole). As such, technological innovations must treat design goals and constraints related to psychological and sociological effects as core elements of the design process.
Instead of assuming technology is either good or bad by definition, technology should be understood as intrinsically value laden and value creating. All technologies are created with the actualization of certain values as a goal; they are embedded with values from the start. As technologies take hold and proliferate, they should be understood also to have the potential for forming new, unknown, and unpredictable values. Technologies are both encoded forms of human values and at the same time encode potential and unknowable new values, for better and for worse. Nihilistic design diminishes the significance of such values, treating them as so flexible and subjective that they can be ignored as externalities. In fact, the psychological and cultural impact of technology—how the technology changes human values—is at least as important for the future of life on Earth as the “success” of the technology according to economic or engineering metrics.
Insights From Design Science
Of course, we are not the first to suggest this direction for design science. Design ethics has been a relevant topic since the modern world began to look towards technology as formative and essential, and design approaches factoring values, psychology, and ethics have had a resurgence in recent years. Technologists have promoted a range of overlapping approaches including (but not limited to) user-centered design, usable design, human-centered design, value-sensitive design, worth-centered design, ontological design, and more. Some of the insights and conclusions from these fields are summarized in the following five propositions about technology:

1. Technologies are value-loaded and values-altering: they are created with values built in, and their widespread use changes existing value systems.

2. No technology ever emerges alone: technologies emerge together, forming interrelated and co-evolving ecologies of technologies.

3. Ecologies of technologies come to form a “second nature”: a human-made habitat within which humans are continually adapting and being formed.

4. Technologies fundamentally change power dynamics in unpredictable ways, creating new selection pressures and new social realities.

5. Technologies shape our minds: our ideas, attention, metaphors, and sense of reality.
The first proposition explains that values are both built into technologies and follow from their use. Technology is value-loaded: it comes with values built in. It is also values altering: technologies change and augment existing value systems. These two dynamics are related but not deterministic. We saw this above with the examples of the smartphone and the plow. It is quite common that the values guiding the design of a technology are different from the value system changes that result from its widespread use. For example, the intention behind social media (aside from making money) is to bring people closer together. Yet research shows that heavy users of social media feel more isolated and disconnected from others, and that they value human relationships less than lighter users. Recall that the smartphone technology ecosystem appears to be in many ways undermining individual psyches, families, governments, markets, and cultures—all mediated through effects on human minds and behavior. None of this was intended.
Our second proposition states that no technology ever emerges alone. As the example of the smartphone demonstrates, multiple interrelated technologies emerge together, forming an ecology of technologies. But tech ecologies need not be high tech. Consider the fire/spear/pot/stirring instrument/plate/table ecosystem that has been with humanity since before the Bronze Age. This ecology then evolves, as new technologies are joined into an increasingly complicated and functionally bound structure. Eventually our utensils began to involve blacksmithing, our fires became ovens, and our tables came to be made with power tools, from wood harvested thousands of miles away. When one technology advances, it brings along with it many others, while making possible (and sometimes necessary) new technologies. A vast array of related technologies eventually become the background suppositions of a society—its most basic infrastructures—constituting a technological epoch, or a fundamentally new kind of human-created habitat.
This is the subject of the third proposition: that technology comes to form a “second nature.” The adaptive capacities of Homo sapiens are the result of millions of years of biological evolution, which have equipped the human body and nervous system to be both niche-adaptive and niche-creating, far beyond what other animals display. This means that humans can adapt to unique and challenging environmental conditions in ways that other animals cannot, while at the same time humans are able to shape and create unique environmental conditions to fit our needs and values. Our adaptive ability has not stopped just because we live primarily in human-made (technological) environments, rather than in mostly natural surroundings. Cities and homes in the 21st century constitute a fundamentally new kind of environment within which humans are continually adjusting themselves. We are thus formed and forming, shaping the environment as it shapes us.
The fourth proposition clarifies that technologies fundamentally change power dynamics in unpredictable ways. This creates new social realities and physical landscapes that require adaptation on the part of individuals and groups. A new technology affects existing practices of human competition and cooperation, and sets new selection pressures that confer advantage to early adopters of the technology. New technologies themselves become uniquely valuable insofar as they confer advantage. Use of the new technology therefore becomes effectively obligatory, as failure to make use of it results in the loss of competitive advantage. Cultural and personal value systems change to accommodate these new selection pressures.
At first the smartphone technology ecosystem was primarily for communication between small numbers of people. It was not that different from the ordinary telephone, familiar to humans for a century. You could therefore take it or leave it without disadvantage. However, as this technology saturated social reality and the scope and impact of the technology became exponentially greater, it became massively disadvantageous not to have a smartphone. This slippery slope of self-reinforcing adoption characterizes almost all technologies. As technology spreads there is a cascade of new social pressures that reconfigure existing personalities, worldviews, and value systems. Sometimes, adopting a new technology becomes basically obligatory. Technology creates a need for itself; it makes itself valuable.
It should also be noted that the same technology can both centralize and decentralize power, depending on its implementation and use. Technologies confer power to those who adopt them, which may reshape the dynamics of power and allow new kinds of power aggregation. In some cases, mastering one technology may allow you to master others, creating a snowballing effect of power aggregation. Certain technologies have been considered intrinsically decentralizing or centralizing, as if their very design determined whether they would be used to concentrate power. But this is not true. Digital technology—the Internet as we know it—was thought to be intrinsically decentralizing, by its very nature. Yet China has managed to create a relatively centralized system that enables high levels of social coordination. This can be seen as simply a different version of America’s centralization of these same technologies, in which a few Big Tech players handle all the data aggregation, surveillance, and cloud computing. In this case, centralized power is being used in the interest of profit rather than civics. Where China veers towards political control, America veers toward chaos.
This brings us to the fifth proposition, which concerns the way technologies shape our minds: our ideas, attention, and sense of reality. This is true of all technologies, not just information technologies. When you have a hammer, your attention is drawn to things that might be nails. With a chainsaw and mill ready at hand, the forest starts to appear more like lumber than a living woodland. Our most prominent technologies tend to shape the metaphors we use to explain the world. Early in the 20th century, Freud described psychology using the metaphor of a steam engine; now psychologists often describe the mind in terms of computer metaphors. The availability of certain transportation technologies, such as airplanes, literally changed our perceptions of time and space—a phenomenon referred to as space-time compression. People are more likely to move from the areas they grew up in and marry someone from a “faraway place,” leading to changes in the very fabric of family and community life. Expectations, planning, self-esteem and various other aspects of identity formation are also tied up in technology, as demonstrated in the case of the smartphone technology ecosystem. For example, the “selfie”—and related photo editing and filtering abilities, enabled by the smartphone—has changed our self-perceptions and our sense of what is attractive and appropriate in the domain of self-expression.
The Future of Axiological Design
If we are to design a humane future beyond our current technology stack, we must begin to create technologies with these five propositions as core design principles. Second- and third-order effects on values, culture, power, and personality must be given due consideration. We can no longer limit ourselves to thinking only about the impacts across material economies, finance, and ecology. The question of how this might work at scale is still being explored.
An awareness that the five propositions above are true should result in the realization that humanity is currently in a dangerous place. Technology is intrinsic to our being and becoming as individuals and social systems. The rapid emergence and saturation of digital technologies—largely based upon fast-paced nihilistic design—has resulted in what might be called an accidental planetary computational stack. This means that the technological environments we find ourselves in, with their visible and invisible effects on our bodies, minds, cultures, societies, and ecosystems, did not result from design. They were not even significantly influenced by holistic considerations. Instead, the habitats and environments in which we live now emerged as the result of many varied local developments, driven largely by near-term market forces, with little consideration for the longer-term consequences. Technological innovation by the state has decreased in favor of private industry, alongside financial deregulation and the emergence of venture capital as the driving force for new technology development.
We are entering a new technological epoch with a powerful digital world available to us. Humanity stands on the brink of various possible futures. Technologies of the near future have the potential to free us and connect us, even liberate us from labor and tyranny. But the lessons of the last century have shown us that without a change to our fundamental approach, technology is likely to continue to damage the fabric of our minds, relationships, and cultures. Virtual reality could deepen society’s problem of addiction and desensitize us to the pleasure of nature and offline life, or it could be a tool for immersive learning, allowing us to inhabit the experiences of others in previously impossible ways. Robotic automation could create technological unemployment and an unprecedented underclass, or it could help liberate humanity from drudgery and make positive changes to economic systems. Generative AI could lead to ubiquitous deepfakes and the destruction of shared knowledge, or it could allow customized synopses of all the world's information, unique to our context and needs. The future we get will depend on both the legal regulation of use and dissemination of technology and the cultural influences on technology adoption. Primarily though, a positive future will depend upon elements intrinsic to the design of the technologies themselves. Without the kind of axiological design described here, technology will continue to downgrade humans.
Overcoming nihilistic design requires changes to frameworks, approaches, and methods at a fundamental level. This will necessarily involve a shift away from seeing engineering as primary and values and ethics as secondary. Instead, technologists should hold ethics as the fundamental priority in the design process, with engineering following its lead. Existing approaches to value-focused design offer a good place to start. These approaches begin a necessary pushback against the postmodernist denial of universal values, clearly and compellingly arguing for deliberate and concerted consideration of the ethics and values at stake in all design decisions. Axiological design must take values seriously enough to work out as part of its practice ways to constructively engage with ethical disagreements, moving beyond the impasses created by a default to relativism.
We must also move away from focusing on the impact of a single piece of technology towards a focus on ecologies of technologies. Sets of functionally bound technologies must become the unit of analysis for considering the second- and third-order effects across all domains of human life. We must examine the habit-forming, human-making, and values-shaping environments that emerge from an aggregation of many different ecologies of technology. Where one technology exists, so must many others. Corresponding changes in power and legal regulations flow from our system of overlapping technology stacks, all together forming a second nature to which humans must adapt by necessity.
If we are to establish this approach to the multi-domain impacts of ecologies of technologies, we must consider the design of technologies at all scales. Ecologies of technologies emerge in the context of existing social dynamics, such as inequality, conflict, and the usual games of power, money, and status. At one level certain technologies seem to favor the distribution of power and the freedom of users, yet at another these same technologies prove disproportionately useful for those who already have the resources and expertise to take further advantage. While in principle everyone may have access to supercomputing via the cloud, in reality this “decentralized service” tends to further advantage those already most advantaged. Multinational corporations and governments can use cloud capabilities to accomplish massive outcomes—such as ubiquitous surveillance and the processing of big data—whereas everyday people use them to store their photos and music.
Finally, we must seek to clarify the fundamentals of human value systems. It is vital that we factor how psychological processes work in conjunction with technology to change human meaning-making and valuing. Preferable futures require a return to considerations of human nature and conditioning, and how the two interact to influence behavior and thus the experience of being human. These human-making dynamics are an unavoidable part of the consequences that flow from all innovations. As we have developed incredible new capacities, our history of naive and now nihilistic design has driven civilization into a massively complicated bottleneck of existential consequence. Existing ecologies of technologies and infrastructures must be judged by their effects on bodies, minds, families, cultures, and the environment. Future technologies must be designed according to methods that take human value and experience seriously enough to be constrained by their limits—such as sanity, dignity, and justice. The dangerous reign of nihilistic design must end if our civilization is not to.
For overviews see: Pavica Sheldon, James M. Honeycutt, and Philipp A. Rauschnabel, The Dark Side of Social Media: Psychological, Managerial, and Societal Perspectives (London: Academic Press, 2019). R. Mojtabai and M. Olfson, “National Trends in Mental Health Care for US Adolescents, 2005–2018,” JAMA Psychiatry 77, no. 7 (2020): 703–714. K.M. Keyes, D. Gary, P.M. O’Malley, et al., “Recent Increases in Depressive Symptoms Among US Adolescents: Trends from 1991 to 2018,” Social Psychiatry and Psychiatric Epidemiology 54 (2019): 987–996, https://doi.org/10.1007/s00127-019-01697-8. ↩
David Chalmers, Reality+: Virtual Worlds and the Problems of Philosophy (New York: Norton, 2022). ↩
See Zachary Stein, “The Global Crises of Measurement,” in Education in a Time Between Worlds (Bright Alliance, 2019). ↩
Martha Lampland and Susan Leigh Star, Standards and Their Stories: How Quantifying, Classifying, and Formalizing Practices Shape Everyday Life (Ithaca, NY: Cornell University Press, 2009). ↩
Consilience Papers, “The Case Against Naive Technocapitalist Optimism,” The Consilience Project, August 1, 2021. ↩
For a discussion of Facebook’s mission statement, see: Andy Wu, “The Facebook Trap,” Harvard Business Review, October 19, 2021, https://hbr.org/2021/10/the-facebook-trap. ↩
See for example: Lewis Mumford, Technics and Civilization (Harcourt, Brace, 1934) and Jacques Ellul, The Technological Society (Vintage Books, 1964). Also note Langdon Winner, Autonomous Technology (MIT Press, 1977). See also more contemporary work, such as Brett Frischmann and Evan Selinger’s Re-Engineering Humanity (Cambridge University Press, 2018). ↩
Langdon Winner, “The Political Philosophy of Alternative Technology: Historical Roots and Present Prospects,” Technology in Society 1, no. 1 (1979). ↩
For a useful overview and synthesis of these emerging fields see Batya Friedman and David G. Hendry, Value Sensitive Design: Shaping Technology with Moral Imagination (MIT Press, 2019). ↩
See Lewis Mumford, Technics and Civilization (Harcourt, Brace, 1934) and Lewis Mumford, The Myth of the Machine (Harcourt, Brace & World, 1967). ↩
Although it is especially and more obviously true of information technologies, see: Marshall McLuhan, Understanding Media: The Extensions of Man (New York: McGraw-Hill, 1964). ↩
David Harvey, The Condition of Postmodernity: An Inquiry into the Origins of Cultural Change (Wiley-Blackwell, 1991). ↩
Benjamin Bratton, The Stack: On Software and Sovereignty (MIT Press, 2016). ↩
See Batya Friedman and David G. Hendry, Value Sensitive Design: Shaping Technology with Moral Imagination (MIT Press, 2019). ↩