
A Deep Dive Into the Denizens of Morality, Technology, and Law: Are Digital Platforms Different?

-Selçukhan Ünekbaş*

 

Abstract


This blog post addresses the relationship between technology, morality, and law. Contemporary technologies like digital platforms raise many challenges in terms of legal adaptation and carry moral implications. This blog post inquires whether digital platforms possess a morality of their own (substantivism), or rather are mere vehicles amplifying the goods and evils of humanity on a broader scale (instrumentality). It then conjectures how the moral aspects of digital platforms may feed an increased governmental appetite to intervene in markets.


Introduction


The features of law and technology as a field of inquiry are ambiguous. It is difficult to tease out guiding principles for a unifying framework to study technological change and law. Nevertheless, one can deduce several contours of the literature on law and technology. For instance, scholars routinely grapple with the pacing problem. This refers to the issue of timing: (if and) when should regulations be enacted to constrain the potential harms emanating from the use of technology, whilst preserving innovation and incentives for growth? Another problem faced by law and technology scholars relates to “compartmentalization” or “modularity”. Often, lawyers like to address technological issues in an insulated manner, for example through copyright law for works created by AI, or via labor law for issues raised by the gig economy. By contrast, an overarching theory of law and technology remains, for the most part, elusive.


This blog post focuses on another problem faced by law and technology scholars: the issue of morality and technology. Specifically, it addresses whether popular contemporary technologies, such as digital platforms, possess a morality of their own, or rather are mere instruments, broadcasting the goods and evils of humanity to exponentially wider audiences. This is an exercise concerned with the “politics of technology”. The majority of the post is dedicated to this descriptive task. By way of conclusion, a brief section touches upon the normative issue of “technology policy”.


“Politics of Technology”: Hallmarks of the Technology – Morality Relationship


Expounding on the relationship between technology and morality has been a long-standing tradition in the fields of law, philosophy, and political science. Many great philosophers and thinkers explored the interaction between technology, morality, and law. As far back as the Republic, Plato famously argued that the act of governance through law is itself a techne, a technology or skill, which must be wielded in the service of “the good”.


Instrumentality


In more recent times, interpretations of the relationship between technology and morality revolve around two dominant views. On one side stand the instrumentalists. Instrumentalists view technology as an inherently neutral channel through which the intentions of users materialize. They attach a great deal of importance to human agency and are often techno-optimists. For instance, in Atlas Shrugged and The Fountainhead, Ayn Rand depicts technology as essential for human progress. For Rand, the use and development of technology is inherently moral and “good”. Similarly, Hayek viewed technology as an expression of human creativity and a fundamental component of an emergent, spontaneous order. By contrast, in Capitalism, Socialism, and Democracy, Schumpeter argued that technology can be a force for both good and ill. However, for Schumpeter, the possible moral erosion that accompanies technology is offset by the enormous gains in prosperity that technological development brings to societies.


Not all instrumentalists are techno-optimists, though. For example, Bertolt Brecht, a German playwright known for his works on epic theatre, characterized technology as a powerful force that can be used for good and evil. Brecht argued that technology and morality are intertwined with human agency and conscious choice – a view also espoused by Jean-Paul Sartre. Just like Brecht, Sartre posited that humans were fundamentally responsible for their own actions. It followed from this freedom of human choice that technology merely reflected the choices made by individuals of their own free will. In line with Sartre’s emphasis on human agency, Michel Foucault argued that the use of technology by humans eventually leads to the regulation and control of human behavior. For instance, in Discipline and Punish, Foucault argued that technology is instrumentalized by institutions (such as prisons) to coerce individuals into acting in accordance with societal norms and moral expectations.


In summary, instrumentalists see in technology a potential to be unlocked. This potential can be used for good or evil, depending on the intentions of the user. Nevertheless, the main point remains: technology itself is inherently neutral. Only the actions of individuals using the technology give rise to moral implications. Thus, on an instrumentalist view of technology, the law’s role would be limited to regulating human behavior. By constraining and channeling the acts of human beings, an intrinsically neutral technology can be oriented towards the greater good.


Substantivity


As opposed to instrumentalists, substantivist views of technology are often (but not exclusively) grounded in Marxist or post-Marxist critiques of capitalism. Substantivists describe technology in an embedded fashion. This embeddedness refers to the notion that technologies are deeply intertwined with political, cultural, and societal values. This contextual analysis of technology is sometimes pessimistic and places little importance on human agency. In such substantivist works, technology is understood as a force that subjugates the objects and subjects of a society. The pessimism thus stems from an understanding that not humanity, but technology, shapes the rest of society and culture. By contrast, other substantivists argue that technology and society are in a dialectical relationship that co-creates moral imperatives.


Prominent thinkers of the substantivist tradition include many modern philosophers. These thinkers come from a broad spectrum of political affiliations. Few things unify thinkers as diverse as Robert Merton, Martin Heidegger, Karl Marx, and Jacques Derrida. Nevertheless, one can argue that all of these philosophers viewed technology as entailing moral values. For instance, Merton believed that societal values, norms, and beliefs influence the features of technology and the latter cannot be meaningfully detached from the former. Derrida espoused the view that technology inherently influences how we perceive the world around us, thus becoming a channel through which human values, including moral values, are communicated.


A common theme in substantivism is “the alienation effect” of technology on human relationships and morality. For instance, in The Question Concerning Technology, Heidegger viewed technology as a way of “revealing” or “enframing” the world, eventually reducing everything to inauthentic resources for use and manipulation. Merton talked about the unintended consequences of technology, which could lead to moral erosion should the developers and users act carelessly. In The Age of Secularization, Augusto del Noce equated the rise of technology and “scientific rationalism” with the demise of traditional norms of morality. Søren Kierkegaard’s vision of busyness as laziness can also be interpreted as a moral shortcoming of technology, given that technology today serves as a mechanism with which people keep themselves distracted and preoccupied. Perhaps most famously, Karl Marx connected the concept of technology with the exploitation of the proletariat. Indeed, “the alienation of the assembly-line worker” was a profound problem that corporations grappled with in the twentieth century, as Peter Drucker documented in his seminal book on General Motors, Concept of the Corporation.


“Technology Policy”: Are Digital Platforms Different, and What About the Law?


Digital Platforms and Morality


As the eminent business organizations of the digital age, platforms present enormous opportunities and challenges for individuals, societies, and economies. On the economic side, these trade-offs include innovation and growth driven by network effects, at the expense of market contestability owing to scale economies, data advantages, and market tipping. On the societal front, platforms can help uncover fringe views, for good or bad. Furthermore, platform technology can provide an alternative media channel, thus increasing plurality and improving ease of access, while at the same time curbing consumer choice, and hence damaging plurality and diversity of views. Individually, platforms can provide enormous benefits for users, while running the risk of alienating them, leading to mental health issues and economic exploitation (such as through algorithmic discrimination). From the foregoing, digital platforms seem to fit an instrumentalist view of technology and morality. After all, the deleterious (or advantageous) effects of platforms materialize based on choices made by users (or developers). As the analyst Ben Evans argues, technology merely amplifies societal problems.


However, it is also possible to construct a moral-substantivist view of digital platforms. Take the economic side of the argument. One can sensibly argue that discrimination is intrinsically coded into the digital economy. To understand this view, we must examine the economic context in which platforms operate and compare it to traditional economies. In brick-and-mortar industries, the primary bottleneck is supply and distribution. Traditional industries like manufacturing suffer from the classic economic problem of scarcity. Scarce resources mean that the vital question is getting supply and distribution “right”. If a firm is able to navigate scarcity, produce at a cost lower than its competitors, and distribute effectively, it achieves success in the traditional economy. As a corollary, in traditional industries, competition mainly occurs on price, and companies aspire to bring down costs by boosting their productive efficiencies.


The digital economy is fundamentally different. Due to the power of the Internet and the democratization of supply, distribution and production are no longer bottlenecks. In other words, in the digital economy, supply is nigh-infinite, and distribution has close to zero marginal cost. It follows that traditional bottlenecks lose much of their importance in the digital economy. Instead, the crucial parameter is controlling demand. In a world of abundance, individuals have a near-infinite number of choices in front of them, but do not necessarily know what to pick. Informational asymmetries, consumer inertia, and “choice fatigue” mean that the important bottleneck shifts towards the ability of firms to manage, curate, and control users’ choices. Again, as a corollary, competition in the digital economy takes place not over price, but usually on other parameters, like quality, ease of use, or innovativeness.


The economic status quo of the platform economy brings forth two moral problems. First, the comparatively vulnerable position of users leaves them open to exploitation. Second, as curation becomes ever more important, discriminatory practices proliferate. To curate is to necessarily discriminate. If a search engine places more relevant search results in prominent locations on a web page, it is by definition engaging in discriminatory treatment based on its perception of what is more appealing to users, as the sketch below illustrates.
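

To make the point concrete, the following minimal Python sketch shows a hypothetical ranking function; the scoring weights and fields are illustrative assumptions, not any real search engine’s algorithm. Whatever weights one assumes, sorting by a relevance score necessarily places some results above others, so curation is, mechanically, an act of differential treatment.

# A minimal, hypothetical sketch (illustrative weights, not any real search
# engine's algorithm): ranking by a relevance score necessarily favors some
# results over others.

def rank(results, user_profile):
    """Return results ordered by an assumed relevance score for one user."""
    def score(r):
        # Blend of query match and inferred user preference; the weights
        # themselves encode a judgment about what "appeals" to this user.
        return 0.7 * r["query_match"] + 0.3 * user_profile.get(r["category"], 0.0)
    # Sorting is the act of curation: whatever lands on top is, by definition,
    # treated more favorably than everything below it.
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "Independent review", "query_match": 0.8, "category": "blogs"},
    {"title": "Official storefront", "query_match": 0.6, "category": "shopping"},
]
print([r["title"] for r in rank(results, {"shopping": 0.9, "blogs": 0.1})])
# -> ['Official storefront', 'Independent review']

On these assumed weights, the result with the weaker query match still ranks first because of the inferred user preference; any such choice of weights embeds a judgment about whose interests the ordering serves.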


Aside from the economic viewpoint, digital platforms may create other moral hazards as well. For instance, people may perceive some technologies as “obligatory”, owing to a sense of dependence. In this sense, economic features of technologies can exacerbate societal problems – for example, network effects may increase switching costs for users, thus deepening dependency and fostering a sense of powerlessness.


What About the Law?


So far, this post has focused on instrumentalist and substantivist views on technology and morality. It also analyzed some of the characteristics displayed by firms active in platform markets. This leaves the question of the law. Specifically, should the law intervene to “fix” the moral perils created by digital platforms? The answer to that question fundamentally hinges on whether one understands platform morals in an instrumental or substantive fashion.


If we follow in the footsteps of Foucault, Sartre, and Schumpeter, we can argue that the law has a limited role. Since morality originates from human action and thus cannot be attributed to technology, the law’s role is essentially confined to regulating individual behavior. More effective education policies, improved digital literacy, and the availability of alternatives through well-functioning markets would demarcate the playing field of the law. Additionally, governments can enact measures to reduce information asymmetries and empower individuals to better regulate their own behavior. Thus, the bulk of the effort would come from the populace, and not from the state.


Many substantivist thinkers also argued for a movement of detachment from technology, placing the onus on people. For example, Heidegger talked about adopting a distant way of thinking toward technology, which he dubbed “a non-technological way of thought”. Augusto del Noce called for a “free relationship with technology”. In Karl Marx’s view, the only way to escape the alienating effects of technology would be to develop a new way of relating to technology. He called it socialism. Perhaps Michael Oakeshott reflects this puzzle best. Oakeshott’s scholarly views were notoriously difficult to pigeonhole; yet he, too, talked about adopting a “traditional mode of thought” as opposed to a “technological mode of thought”. Hence, for many thinkers surveyed in this post, the role of law lies in regulating the user and helping her make better, well-informed choices.


Nevertheless, contemporary developments in the regulatory responses of states to digital platforms convey a different message. Today, governmental efforts converge mostly around regulating the artefact instead of the user. These interferences with technology stem from an appeal to core values underpinning law and regulation. For example, in competition law, we observe a greater appetite on the part of antitrust authorities to mandate non-discrimination in the product designs of digital firms. Similarly, as the recent EU case involving Meta suggests, the use of data by digital services is becoming increasingly controlled. However, legal intervention may not always produce desirable outcomes. In fact, an unintended consequence of legal measures can be the legitimization of immoral practices concerning technology. The coercive nature of legal norms may render the use of certain technologies non-optional. For instance, by explicitly fostering the use of certain technologies or imposing disadvantages for non-usage, law can lead to tech-dependency. One may argue that e-commerce is immoral because it creates large environmental costs, operates under questionable working conditions, and perpetuates consumerism. On the other hand, law arguably helps keep people addicted to e-commerce by doling out financial incentives and tax breaks, enacting more favorable consumer rights, and spearheading digital transitions.[1]


Different explanations may exist for the rise of governmental interference in technology. Perhaps the reason behind this wave of new interventionism is a loss of faith in market processes. Perhaps it is a paternalistic urge to protect citizens, sometimes against their will. Or maybe the precautionary urge to remain safe rather than sorry, even in the face of uncertainty, is proving too strong for governments to resist. In any case, it is intriguing to conclude by noting that digital platforms present bespoke questions not only for regulators and policymakers, but also for philosophers and scholars of technology and morality.

[1] In some cases, law outright compels individuals to use certain technologies. For instance, refusing to vaccinate a newborn may be punishable as negligent behavior under criminal law.


 

*Selçukhan Ünekbaş is a Researcher in Law at the European University Institute (EUI) in Florence, Italy. His research formulates a more technological approach to European competition policy, focusing on the interactions between technological development and competition law. Selçukhan was trained as a lawyer in Turkey and Belgium, and his research was funded by the Turkish Ministry of Treasury and Finance, the Italian Ministry of Foreign Affairs, and the European Commission.
