
Disruptive Technologies, A Critical Yet Hopeful View



ARTICLE | BY Carlos Alvarez Pereira


Abstract

This article attempts a new perspective on the role played by Information and Communication Technologies (ICTs) in the evolution of human societies over the last few decades. Particular attention is paid to their (lack of) relationship with the challenges of sustainable development, presenting the view, contrary to mainstream perception, that ICTs have so far had a negative overall impact on sustainability. This is in turn described as a result of how ICTs, and innovation in general, are presently conceived and framed in a way that actually inhibits their potential for human progress in harmony with the environment. Some hints are offered on how to reverse this situation and make digital technology useful for life as a whole.

“Most of the necessary knowledge is now available but we do not use it.”
– Rachel Carson, “Silent Spring” (1962)

1. Disruptive, or not enough for Sustainability?

Nowadays, we humans devote a significant part of our time, attention and resources to digital artifacts. While there are many other domains where technology is evolving, "digital" has become a synonym for "technology" and a mandatory part of the public sphere: as such, periodic launches of the latest smartphone model, or a popular videogame going "real" in the streets of our cities, get massive news coverage for free. And so, at least in the minds of the public in industrialized countries (and that is a large public), the digital impetus is perceived as the best herald of science, technology and innovation, and as the driving force of change in society. While "digital immigrants", the elderly who grew up with books, pens and paper, are being left behind, the young see themselves as "digital natives", whose behaviour keeps changing rapidly, in line with hundreds of new apps every year and the so-called "digitization" of society, the buzzword of the time. Technological innovation is speeding up, or so it seems, introducing new products, altering processes, shaking markets and ultimately changing our lives, by inducing transformations which are deemed "disruptive".

This concept of technology-based disruptive innovation is generally presented, and probably perceived by most, as something positive, opening the future to new solutions for many of our problems, if not all, with benefits for everybody and no negative consequences. It builds on the longstanding success of Science and Technology (S&T), which has made tangible many crazy wishes of human imagination, like flying, travelling to outer space or chatting with other people wherever they are on the planet. And so it feeds our dreams by extrapolating past achievements to all the good things that will happen in the future thanks to the miraculous progress of technology. It reveals a desire for omnipotence, our aspiration to an infinite capacity to break the physical limits which restrain humans, including those of time and death.

Since the 1980s, Information and Communication Technologies (ICTs) have grown explosively and their presence has become pervasive. The widespread frenzy provoked by the latest digital gadgets mirrors a true and exciting entrepreneurial spirit mobilized by the potential of technologies to address human challenges. But when looking into the future, little attention, if any, is paid to the three centuries we have already lived through in the context of knowledge creation and technological innovation, and to the lessons learned about how these processes help shape the evolution of our societies. Science and technology have strongly influenced the path followed by humanity since the 18th century, which means that they have also often been (and still are today) effective instruments of mass destruction, environmental degradation and social exclusion. This obscure role of S&T is generally hidden, either as unintended consequences to be corrected later or through the argument of "neutrality", by which new technologies are just tools and their good or bad usage depends entirely on society, not on the process of innovation itself.

In parallel with the explosion of ICTs, humanity became aware of the many intertwined challenges it faces to make life on this planet enjoyable and sustainable in the long run, a complex set of interrelated issues for which the Club of Rome coined the term "world problematique" back in the 1970s. The Brundtland Commission popularized the concept of "sustainable development" in 1987, almost in sync with the launch of the first personal computers (IBM PC in 1981, Commodore 64 in 1982 and Macintosh in 1984). But Sustainable Development (SD) still has to prove it is not an oxymoron. In the last 30 years the price of moving towards higher levels of human development has been a great increase in ecological footprint and overall unsustainability, with several of the most critical planetary boundaries already crossed and the "Overshoot Day" arriving earlier and earlier. So we still have to find, now urgently, a pathway to decrease dramatically the negative impacts of human societies. And the only humane way to do so is to greatly raise the standards of living of most of the world population without increasing their ecological footprint, while at the same time making developed countries reduce their footprint dramatically without major damage to their levels of human development.

The size and nature of this transformation are unprecedented. All types of human capacities will be required to achieve it. And, since S&T play a key role in shaping our relationship with nature and our aspirations and values, should not the best and brightest researchers and innovators make major contributions to addressing the challenges of the "problematique"? In particular, should we not use digital technologies to overcome the dilemmas created by our unsustainable way of life? Is digital disruption aligned with the goals of sustainable development? If not, how can we align them for the sake of humanity?

Surprisingly enough, the first answer to these questions is that we do not have an answer. Although sustainability has become part of the discourse, as well as a real concern for the ICT industry, digital technologies and sustainability have rarely been analyzed together in a rigorous manner. The scientific literature on this topic is so far worryingly thin, and in many respects we do not even have the right questions yet, much less the answers.

But if we start by considering the direct impacts of ICTs in terms of sustainability, there is no doubt that the first-order effect is negative. The evidence is accumulating and has many different faces, as the following points show.

  • Critical resources. ICTs, as well as other high-tech developments for renewable energy or electric vehicles, depend for their production on many mineral resources: more than 50 different kinds of metals are used in a smartphone. Awareness is now growing about the criticality of those resources, in terms of physical access and geopolitics, China being by far the largest provider of the most critical ones. And this reality has a very ugly side: as The Guardian put it in 2012, at the time of the Second Congo War which claimed more than 5 million lives,

    “In unsafe mines deep underground in eastern Congo, children are working to extract minerals essential for the electronics industry. The profits from the minerals finance the bloodiest conflict since the Second World War; the war has lasted nearly 20 years...”1

    A list of Critical Raw Materials (CRMs) is defined and reviewed regularly by the European Union; it now contains 20 items, including indium, germanium, niobium and the group of Rare Earth Elements (REE), which are key ingredients in every digital artifact. The degree of recycling of such materials is low, at most around 15 to 20%, and their demand is high and growing, hence their criticality. In this respect, ICT is no different from other industries that make intensive use of non-renewable resources of growing scarcity.*

  • Production processes. Producing microchips, the basic components of digital technologies, is not only intensive in critical materials; it is also a process whose efficiency is extremely low as measured by its input-output ratio. A single 2-gram DRAM chip is estimated to require 1600 grams of fossil fuels and 72 grams of chemical inputs (so the material input-output ratio is more than 800:1), as well as 32000 grams of water and 700 grams of gases (mainly nitrogen); these and the other figures quoted in this list are checked in the short sketch after it. As Williams, Ayres and Heller put it,

    “The production chain yielding silicon wafers from quartz uses 160 times the energy required for typical silicon, indicating that purification to semiconductor grade materials is energy intensive. Due to its extremely low-entropy, organized structure, the materials intensity of a microchip is orders of magnitude higher than that of “traditional” goods.”2

    Producing microchips is an extraordinary achievement of human intelligence but we consume them nowadays as if they were abundant and low-impact commodities, while they are definitely not.

  • Waste. Although they look very clean, digital devices are a major source of waste in the consumerist framing which still drives our behavior. Electronic waste (e-waste) consists of discarded electronic devices and components such as computers, MP3 players, televisions and mobile phones, which contain hundreds of chemicals, including lead, mercury, cadmium, Brominated Flame Retardants (BFRs) and Polyvinyl Chloride (PVC). Many of these chemicals are known to cause cancer, respiratory illness and reproductive problems, and they are especially dangerous because of their ability to migrate into the soil, water and air and accumulate in our bodies and the environment.

    The US Environmental Protection Agency (EPA) estimates that e-waste is growing 2 to 3 times faster than any other source of waste, the total amount being over 50 million tons per year, with the USA and China as the largest contributors, while the recycling rate remains low. Although official directives exist on Waste Electrical and Electronic Equipment (WEEE) and the Restriction of Hazardous Substances (RoHS), the dangerous and often illegal deconstruction of e-waste is a growing business worldwide, estimated at more than 10 billion US dollars annually. It includes practices such as the massive export of e-waste from rich countries to the rest of the world, or the exploitation in the USA of prison inmates working without adequate protection, in poor health and safety conditions.

  • Energy consumption. Of course, the digital tech sector is a huge consumer of energy. Trivial as it seems, a single Google search is equivalent to a standard light bulb operating for between 15 and 60 minutes.3 The operation of a smartphone is quite efficient (4 kWh per year), but the energy used to manufacture it amounts to 280 kWh, while it is meant to last only 2 to 3 years. And while the patterns of consumption are changing due to the evolution of devices, from stand-alone PCs to efficient smartphones and tablets connected to growing cloud infrastructures, this has not set operating consumption on a downward path: it plateaued at around 830 billion kWh per year between 2010 and 2015, with less consumption in end-user devices but more in data centers, and the prospect is that it will grow at about 2% per year, up to 1020 billion kWh in 2025 (not counting the energy spent in production).§
  • GHG emissions. Last but definitely not least, the ICT sector is the fastest-growing contributor to emissions, currently accounting for around 2.25% of total emissions but with a compound annual growth rate of around 6%! This is due to the combined growth of networks, number of devices, time of usage and the dependency of organizations on digital tech.
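
As a back-of-envelope check of the orders of magnitude quoted in the list above, the following minimal Python sketch recomputes the DRAM input-output ratio and projects the quoted growth rates forward. Treating the 2% and 6% figures as simple compound annual rates is an assumption made here for illustration only; the other numbers are taken from the sources cited in the text.

```python
# Back-of-envelope check of the figures quoted in the list above.
# Assumption: the 2% and 6% growth rates are treated as simple compound
# annual rates; all other numbers are taken directly from the text.

# Material intensity of a 2-gram DRAM chip (Williams, Ayres & Heller).
chip_mass_g = 2
fossil_fuel_g = 1600
chemicals_g = 72
ratio = (fossil_fuel_g + chemicals_g) / chip_mass_g
print(f"material input-output ratio: ~{ratio:.0f}:1")  # ~836:1, i.e. "more than 800:1"

# Operating energy: ~830 billion kWh/year around 2015, growing at ~2%/year.
energy_2025 = 830 * 1.02 ** 10
print(f"projected operating energy in 2025: ~{energy_2025:.0f} billion kWh")  # ~1012, close to the 1020 quoted

# GHG emissions: ~2.25% of the total today, growing at ~6%/year compound.
share, years = 2.25, 0
while share < 4.5:  # how long until the ICT share doubles?
    share *= 1.06
    years += 1
print(f"at 6% CAGR, the ICT share of emissions doubles in ~{years} years")  # ~12 years
```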

While contemporaries, the aspiration for sustainable development and the expansion of ICTs have so far not been aligned. On the one hand, environmentalists have been pushing their claims and proposed solutions to policy-makers, in order to convince them to enforce regulations against harmful activities and to change the patterns of economic development. In that context, ICTs occupy at best a secondary position. There is no systematic assessment of their role in The future we want, the resolution adopted as an outcome of Rio+20,** nor is one proposed in its recommendations for the future. ICTs are present in the SDGs, but only in a small number of goals and targets.†† Not everything is negative in this respect, though. The International Telecommunication Union, a UN agency, identified in 2013 a number of clear challenges and developed methods to assess the impact of ICTs on energy consumption, as well as policy guidance for developing countries on the application and use of ICTs to combat climate change and other environmental issues. The OECD even adopted in 2010, at ministerial level, a document of "Recommendations on ICTs and the Environment" that sets out 10 principles as a general framework addressing first, second and third order effects of ICTs. But will the recommendations be enforced with enough momentum?

On the other side of the equation, that of the ICT industry, sustainability has become part of the agenda after the recognition of the negative direct effects mentioned earlier, driven by the costs of energy consumption and waste treatment as well as by the need to avoid reputational risks. The telecom industry (both operators and manufacturers) created the Global eSustainability Initiative (GeSI), which in 2008 issued its SMART 2020 report‡‡ and the Electronics-Tool for Accountable Supply Chains (e-TASC) to help measure the sustainable performance of companies. The aspiration is that ICTs will help the emergence of sustainable development and, in general, of a "better world" by promoting a "smart" transformation of economic activities, better and more general access to education, health and knowledge, the empowerment of people and greater transparency, as well as a growing awareness of sustainability issues, with a greater capacity to influence public opinion and agendas. Wherever information is relevant (where is it not?), digital tech can be there to improve current processes, or so it seems. In truth, however, the indirect impacts of ICTs have not been analyzed in detail, and where they have been, it is often only from the point of view of GHG emissions. And the conclusions of one of the few systematic studies are not very optimistic:

“While the overall impact of ICT on most environmental indicators seems to be weak, the impact of specific areas or types of ICT applications can be very relevant in either direction. On an aggregated level, positive and negative impacts tend to cancel each other out.”4

ICTs play different roles and serve different purposes. But of course, they depend on the societal logic in which organizations are embedded. If profits are required for a business to survive and regulations do not ensure that sustainability goals contribute to profitability, how could we expect businesses to behave in an eco-friendly way? Likewise, ICTs can be disruptive, but they, or the transformations they enable, do not necessarily improve sustainability or promote circularity in the reuse of non-renewable resources. How would they, if that purpose is not built into their design? Whether higher efficiency or dematerialization is achieved depends on decisions taken by managers outside the ICT sector, on the basis of commercial viability rather than environmental sustainability. As a consequence, we have no evidence yet of the order of magnitude of those sustainability gains, nor of whether ICT-driven efficiency gains provoke rebound effects à la Jevons (see Jevons' paradox or rebound effect).

On the contrary, we have strong evidence of how the growing efficiency of microprocessing is exploited in a massive rebound effect on the other side of ICTs, when they serve no purpose other than consumption itself, as mere entertainment devices with very short cycles of usage. The positive effects of ICTs on sustainability are probably more than offset by the mass consumerism which has become the driving force of this industry: the number of cell phones is already larger than the world population, but the truly astonishing figure is that of annual shipments, which exceeded 1400 million units in 2015!

On the one hand, there are well-intentioned but ineffective declarations recommending SD as a new paradigm. On the other, there is the extraordinary strength of a creative and fully deployed industry feeding, and being fed by, our consumerist addictions. It is pretty clear why, for the time being, the opportunity for an encounter between Sustainable Development and ICTs has been lost.

For three centuries our driving belief has been in the progress of humanity, reinforced of course by the success of S&T. But, while for generations born before the 1980s changing the world for the better required primarily political and social innovations, now it seems that "disruptive innovation" has displaced every other source of hope. In a sense, we put it at the core of societal evolution, and this is why we also think it should rescue us from all disasters, even those provoked by ourselves. But is that not too much to expect? Beyond a generic claim that ICTs contribute to a better and "green" world, the actual lack of mutual recognition and cooperation between digital tech and sustainable development is a telling sign of the effort still needed to harness the power of innovation for the progress of humanity.

2. The Future: Techno-utopian or Technolitarian?

Digital technologies are certainly a success story, but their origins are not recent. They go back to a long series of scientific advances that have been taking place since the early 19th century and, 30 years after the first PCs, many ICT-driven changes have already taken place. We can analyze them from a historical perspective, with reference not to a perfect future of dreams yet to come, but to what has actually happened. In particular, many of the promises of ICTs are already applied in leading-edge companies such as Google, Amazon, Apple and the like. Now, the question is: what is prominent in the history of these three decades?

"There are no limits to what we can achieve."

From a technical point of view, two main drivers are at the core of the process of digital development, both referred to as "laws" although they are actually educated guesses with empirical validation but no evidence of universal or eternal validity. The first is Moore's law (named after Gordon Moore, co-founder of Intel), stated in 1965 and still holding true: technical progress in miniaturization makes it possible to double the number of transistors in a dense integrated circuit approximately every two years, thereby enabling the computing power of microprocessors to increase extremely fast without increasing their cost (or so it seems), so that new digital artifacts and applications can be created at an ever faster pace. The second driver is Metcalfe's law, which states that the value of a network is proportional to the square of the number of connected users. This means that a competitive diffusion process over a network can be very fast, because the advantage of the leading player is more than linear: it grows faster and faster with the number of connections it gets. The software business, telecommunications and the Internet all exhibit such strong network externalities.
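
The following minimal Python sketch illustrates the shape of the two regularities just described: exponential doubling of transistor counts under Moore's law and quadratic growth of network value under Metcalfe's law. The function names and the starting values are arbitrary choices made here for illustration, not real product data.

```python
# Illustrative sketch of the two empirical "laws" described above.
# The starting transistor count and the user numbers are arbitrary,
# chosen only to show the shape of the curves, not real product data.

def moore_transistors(years: float, start: int = 2_300, doubling_period: float = 2.0) -> float:
    """Transistor count after `years`, doubling every `doubling_period` years (Moore's law)."""
    return start * 2 ** (years / doubling_period)

def metcalfe_value(users: int) -> int:
    """Network value proportional to the square of the number of connected users (Metcalfe's law)."""
    return users ** 2

# Moore: doubling every two years yields a roughly 1000-fold increase over 20 years.
for years in (0, 10, 20, 40):
    print(f"after {years:2d} years: ~{moore_transistors(years):,.0f} transistors")

# Metcalfe: a network ten times larger is (proportionally) a hundred times more
# valuable, which is why the leading player's advantage grows faster than linearly.
for users in (1_000, 10_000, 100_000):
    print(f"{users:>7,} users -> relative value {metcalfe_value(users):,}")
```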

These observed characteristics are now used as foundations for a new belief in "exponential innovation" as a process able to disrupt all areas of human practice for our benefit. Ray Kurzweil and Peter Diamandis are the best-known promoters of this vision of infinite improvements, which they interpret as "the way to a new world of abundance",5 in which the needs of the billions of inhabitants of the planet would be met by using new technologies of water purification, solar energy, medicine, education, and the reuse or recycling of rare minerals. This "digital solutionism" favors the vision that every problem we face (real or imaginary, and whatever its relevance) has a digital solution6 and claims a "right to disrupt" any kind of activity, but does it really work? Actually, improving existing processes in a purposeful way seems harder than trying to replace incumbent businesses with newcomers, and this in turn is harder than discovering a "blue ocean", i.e. creating a completely new activity which did not exist before (or existed only in a limited way) and where no competitors from the old world will be found.7 This is where Microsoft, Google, Facebook and Twitter succeeded. Following Metcalfe's intuition, once a digital company is able to outdo its competitors in terms of number of clients or users, it has a very good chance of becoming a private monopoly in its category, which is precisely why utilities used to be publicly regulated or owned. But digital moguls have been able to dodge regulations and occupy a digital world divided into modern fiefs. This explains the paradox that digital tech was supposed to have levelling consequences but produced instead an extraordinary concentration of power and wealth in a few hands, those of the gatekeepers of cyberspace.

Digital tech presents itself as a sector offering neutral, general-purpose tools to meet all human and societal needs. It claims innocence, since its outcomes, good or bad, will depend on the usage that humans make of them. To be more precise, the sector presents itself as a positive achievement whose negative impacts, if any, can only be attributed to bad usage, not to the conception of the technologies themselves. In our view, this perspective deserves the name of "digital ideology". ICTs are certainly an expression of human genius, but they are also truly dependent on the social and political contexts in which they were born and are developed, and are neither neutral nor exogenous to society. Entangled with societal evolution, they derive from human decisions, including design choices which create path dependencies and lock-ins, since the networked nature of the digital world facilitates the emergence of monopolies. And those decisions are based on a certain modelling of reality; they are not free of economic interests, political intentions and, in general, values embedded in frameworks of interpretation which are specific to times and places and not truly universal.

“Human achievements are not based on erasing physical limits but on better understanding them and finding ways to build on our limitations.”

Therefore, we should ask what futures we could build by using digital tech in one way or another and, more importantly, by designing their next generations in one way or another. For the time being, high risks are already here which could pave the way to “technolitarian” futures in which human and environmental purposes would be secondary to the logic of technological innovation. Those risks (maybe unwanted by the promoters of digitization, but still real) are related to underlying assumptions of the digital ideology.

First is the denial of physicality, through the self-illusion of "dematerialization" in the Singularity jargon. At a time when we need to recognize that the resources on which our life depends are actually quite limited, anything that makes us ignore that challenge is of course a step in the wrong direction. While ICTs could be crucial in monitoring externalities of all kinds, that role is played in marginal or even contrarian ways, by asserting that there are no limits to what we can achieve. Digital ideology interprets limits as unbearable limitations and declares their obsolescence (except those imposed by markets). Dematerialization is used as a claim to become free from them, as is implicit in terms like "zero cost" or the "cloud", while we are still physical beings living on a finite planet with physical costs. Actually, digital infrastructures are huge, and so is the amount of resources spent every year in the mass consumerism of digital artifacts with a minimal circularity of materials. Human achievements are not based on erasing physical limits but on better understanding them and finding ways to build on our limitations, which is the true foundation of our freedom: we do not fly like birds, we create artifacts transporting us through the air while still respecting physical laws. A different, real kind of dematerialization should certainly happen, enabling human development to become free from the accumulation of material artifacts, but this is not what the digital industry is doing right now.

Second, digital innovation is increasingly focused on the disposability of humans, on replacing them with automated machines, potentially threatening every single job on Earth, skilled or not, up to that of President of the USA, for which (not a joke) the IBM Watson software has been proposed.§§ Even analysts of stock markets are at risk of being replaced by machines in a self-devouring pirouette of financialization,8 pointing to the dystopia of a world owned by the happy few and operated by robots, while the rest of us, the 99%, would have to struggle for the crumbs. Of course the story-telling is different: it says that all of us will enjoy a plentiful life of leisure on the beach while robots do all the necessary work, which looks like a weird dream of spoiled kids. But at a time when inequalities are rising everywhere, who can believe that our social structures will use technologies to produce that future for any but a very few? Moreover, in an obsessive quest for tech-based performance, the Singularity proposes to end human life, replacing us with digital replicas "living" forever in digital networks. What emotions, love, sex or care would become in that case remains unclear, but is this a dream for humanity or a nightmare? Does it not sound like a revival of eugenics, the movement for the "improvement" of the species which won strong recognition in the UK and USA in the first decades of the 20th century until it was discredited as part of the Nazi ideology?

And, again and again, we see the fantasy of omnipotence. The claim is that more digitization, connectivity, access to data and algorithms will produce a holistic Artificial Intelligence (AI), far superior to human intelligence (while we still do not know what intelligence is), and that it could understand the world's evolution and make it predictable, controllable and ready to be optimized for the benefit of all, of course by making the right decisions better than humans would. One could argue that more connectivity and digitization also bring new vulnerabilities, for instance to electric transmission grids, which become more exposed to cyber-attacks. But, although important, this is not the main point. In the cult of AI, the implicit assumption is that all societal problems can be reframed to have technical solutions, and that only human weaknesses prevent us from doing what is better for all. No doubt, this is a subtle but totalitarian way of hiding the fact that the true decisions are not purely technical but concern political and moral dilemmas, about what we consider as values, what we interpret as good or bad, better or worse.

And by the way, a growing number of autonomous entities (human or not) and more connections between them make life and society more complex, not less, and therefore more unpredictable and prone to so-called "emergent phenomena", which can be positive or negative. Overall, this is a welcome trend, since it opens up the space of possibilities (life emerged from non-living elements), but it definitely excludes the perspective of a panoptic controllability of the world as a machine. AI and Big Data can be harnessed to create specific environments where predictability improves, and this could be used for human benefit (as well as for perverse intentions), but it requires understanding specific contexts and goals, involving human stakeholders and ultimately taking political decisions to make sure that sound purposes are enforced.

On the other hand, ICTs have also played a key role in the evolution of the public sphere, starting with the massive deployment of television. Enough time has passed since TV and the Internet appeared, so we can assess their impact on content production and diffusion, and on the formation of public opinion. Digital techs are credited with facilitating access to knowledge and art, as well as the free expression of citizens. Is this really happening? Not on the side of content creation: in the age of the so-called "knowledge society", artists and journalists have a much harder time making a living out of their creations, except for a handful of them.9 At the same time, a few "lords of the cloud" become the monopolistic owners of our attention, and in the frenzy of YouTube postings we, the public, get distracted by making our lives available for open scrutiny in search of worldwide recognition, however mundane and strictly ephemeral. We enjoy and suffer every day the arrogance of novelty, the obsession with instant gratification and the reduction of life to the limited, database-oriented nature of online interfaces.10 And what kind of knowledge does trivial access to pornography bring? Are we empowered citizens, or is all that, already invented by Berlusconian TV in the late 1980s and now globally expanded, just a reminder that Guy Debord was right, that we live in the "société du spectacle"?

“Since more effort is devoted to improving machines than to expanding the cognitive capacities of humans, it is unclear if we are really facilitating access to knowledge.”

Through our multiple addictions, including that of videogames keeping us in eternal adolescence, we are entertained to death11 and our conformist mass-media culture inhibits the genuine expression of humanity through artistic creation. Drowned as we are in an endless deluge of gossip, we get lost in the "trending topics" of the day and thinking in perspective becomes extremely difficult: if we connect to instant reality we are not able to think; if we disconnect from it, will our thinking be valuable or even heard? Alternative thinking exists, and is probably richer and stronger than ever, but we do not pay much attention to it. We have access to much more information, but since more effort is devoted to improving machines than to expanding the cognitive capacities of humans, it is unclear if we are really facilitating access to knowledge. We live in a constantly accelerated time12 and we are not so interested in learning when it runs contrary to the high-speed mainstream. In a sense, we live in a true gridlock of thinking, through which we are also able to unlearn very fast some wise lessons acquired at high cost in the past (for instance, the need for strong regulation of financial markets).

Moreover, ICTs are especially well suited to creating extensive representations of reality and, in a dangerous twist, to creating the illusion of a substitution of reality by its artificial representation. A self-referential reality is emerging where digital technologies talk all the time about themselves and try to capture all our attention to create lives experienced only online, way beyond what commercial TV started to do decades ago. This tends to reduce the richness and complexity of human life: algorithms are designed by the "lords of the cloud" to maximize the audience of their websites, not to enhance the diversity of life,13 and when we shop online, our whole personality is downgraded to a consumer profile. Everything the e-shop knows about us is cleverly used to make us buy more. Is an e-shop like Amazon to blame? The company brilliantly plays according to the rules of the game, promoting instant gratification in one-click consumption, reinforced by our permanent exposure to digital scrutiny. Also, "digital totalism"14 achieves a tour de force in making us think that our gadgets are more than they are, and in the end that they are better than us, so that we have to adapt ourselves to them instead of the other way around. If we do not understand how a new gadget works, it is our fault and never that of a poor design. Learned helplessness seems to be the generally accepted pattern of behaviour when dealing with digital technologies.

And scrutiny is constantly growing: the digital ideology legitimates the capture of all conceivable data, including data of public origin to be used for commercial purposes, and the representation of everything we do as data that can be captured, stored, analyzed and exploited. The nightmare of Bentham's panopticon is enabled by digitization, and the fantasy of omnipotence comes with a flavor of absolute control: in every ongoing discussion about technology and security, the main thread is about more surveillance and control, rather than asking how technologies could help create more trust among humans. Big Data is in the end very close to Big Brother, not of a Stalinist kind but rather an ultra-sophisticated corporate one. For good or bad reasons, the affluent cyber-libertarians at the core of the digital discourse distrust governments and existing political processes,¶¶ which is convenient for justifying tax avoidance, but they are definitely friends of the big digital corporations whose power is deemed innocent by definition and which require everybody to be transparent while they themselves are not, in another twist of self-referential blessing.15

In the way ICTs are used today, an autistic dynamic is at work: a performative capacity is being deployed to create a world dependent on (what is assumed to be) their underlying logic, overriding the idea that they could be used as beneficial tools in our relationship with other humans and the environment. All in all, it is very hard to claim that the public sphere and our social bonds are being enriched by becoming digital; it seems rather the other way around. Of course the way out of this wrong direction is not the denial of technological innovations but leveraging them to address the pressing challenges of humanity. But how to do that? How to go beyond pure critique to ensure that digital tech also contributes to the solutions? Maybe a closer look at its societal dynamics could help.

3. The Dynamics & Framing of Digital Tech

The dynamics leading to the existence and development of ICTs are complex, and this complexity is a big part of their success. Ironically enough, although the digital world likes to depict itself as a bottom-up movement based on free will and the soft power of inventive people fighting against the establishment, it actually started in the very core of government, and in the most traditional part of it: neither computers nor the Internet would exist without the driver of military research since the 1930s, especially in the USA through the Defense Advanced Research Projects Agency (DARPA) and its predecessors. So ICTs were actually developed as part of a top-down agenda with very specific purposes. But over time the field integrated other contributions, and it is a stroke of genuine American genius to have mixed many different ingredients into the digital cocktail we know today. We identify at least six relevant factors that give ICTs their extraordinary momentum:

  • The strategic intention of the USA to keep its global dominance in pursuit of its national interests through a panoply of means not limited to the military, which includes keeping the leading edge in S&T. This intention is still very much alive today, as shown for instance in the ongoing discussions on the governance of the Internet.16
  • The success of government-driven agendas to foster the advances of basic research in physics and the great potential of applications of electronics, telecommunications, miniaturization, optics and other disciplines.
  • The enthusiasm and creative energy of relatively small groups of young "techies" willing to "change the world" (whatever that might mean), which originated in the Californian "anti-establishment" movements of the 1960s and has focused since the 1980s on a disruptive agenda with a mainly libertarian stance.
  • A unique capacity of the marketing and advertising industry to develop attractive narratives in order to convince people to adopt new gadgets, get rid of the "old" ones and do so again and again at very high frequency. This industry was also created in the USA, in the 1950s, with the emergence of mass consumerism, but by using technologies it is now reaching new heights of excellence in designing the mental frameworks that foster our digital enthusiasm.
  • A long-term aspirational trend among people everywhere to acquire, at the same time, more personal autonomy and more participation and connectedness, for which the digital world offers a seemingly simple vehicle.
  • And, not least, the agility of financial markets in looking for "blue oceans" again and again and in mobilizing initial investments, once it became clear that digital techs are fantastic for keeping alive a consumerist model of economic development.

Although there are many contradictions between them, all these elements are still acting together today and all are critical to the continuing expansion of ICTs. But of course their alignment with sustainability challenges is far from guaranteed. "Disruptive innovation" is now the rallying cry of these complex dynamics. The term itself was coined by Clayton Christensen in 199517 to characterize the process by which new markets and value networks are created with the effect of disrupting existing ones. Although inspired by technological innovation, Christensen actually puts the focus on the business model, enabled or not by technological breakthroughs, as the key element of disruption:

“Generally, disruptive innovations were technologically straightforward, consisting of off-the-shelf components put together in a product architecture that was often simpler than prior approaches. They offered less of what customers in established markets wanted and so could rarely be initially employed there. They offered a different package of attributes valued only in emerging markets remote from, and unimportant to, the mainstream.”18

This concept resonates with the "creative destruction" analyzed by Joseph Schumpeter in 1942, which itself can be traced back to Werner Sombart in 191319 and ultimately to Karl Marx. In Schumpeter's view, creative destruction is the "process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one"20 and as such is a further elaboration of the Marxist perspective on capitalist dynamics: capitalism constantly destroys and reconfigures previous economic structures, and in doing so devalues existing wealth in order to create new wealth.

For Marx, Sombart and Schumpeter, this process of ceaseless destruction and creation would ultimately lead to the collapse of capitalism itself. But the concept was later adopted by mainstream free-market economics with a positive meaning, also shared by Christensen. In this perspective, the mass manufacturing of standardized products at low price points is critical for disruption to happen by opening new and larger markets, in the same way as the technological prowess of the automobile did not disrupt the market for transportation until the Ford Model T appeared in 1908. So the effectiveness and societal consequences of innovation do not derive only from technological changes, but rather from their framing within institutional arrangements not necessarily linked to, nor disrupted by, inventions. In particular, as per its current definition, "disruptive innovation" means that everything new has to pass the market test; an innovative product is only successful if millions of units are sold again and again, no matter what the side effects are, positive or negative; and innovation becomes a synonym for modern market competition, which explains why Christensen focuses so much on cost advantage as the critical factor. Conversely, an innovation which is not successful in markets, whatever its merit from a social or environmental point of view, is left behind or even totally forgotten.

“Financial profitability is a one-dimensional, reductionist metric unable to provide the right incentives to cope with the multi- or infinite dimensionality of the complex challenges we face.”

Although its dynamics include many different elements, digital disruption is actually conceived as a linear path: it starts with publicly-funded, top-down scientific research, then moves to innovation funded by venture capital, and ultimately reaches commercial survival in maybe 1% of the cases and true market success, recognized by a monopolistic stock valuation, in only 1 case or less out of 1000 start-ups. At the early stages of this process, short-term financial profitability acts as the dominant selection mechanism, and the final outcomes are failure in most cases and, in one case per category, the rentier exploitation of a one-player-wins-all dominance. This makes innovation as practiced today very ineffective as far as societal challenges are concerned. It creates an illusion of (debt-driven) growth which is increasingly uneconomic, adverse to the environment and socially unequal. Financial profitability is a one-dimensional, reductionist metric unable to provide the right incentives to cope with the multi- or infinite dimensionality of the complex challenges we face.

In previous sections we discussed the many dark sides of digitization. But maybe the darkest is what could be called the "innovation paradox": in a world with a very high degree of ICT-enabled financialization, the worst enemy of true innovation is precisely its great exposure to short-term financial expectations. All the technological miracles we now take for granted have required huge efforts, a lot of patience, large investments over long periods of time and a good amount of serendipity. But further progress in innovation is now subject to an endless stream of speculative bubbles.21 Actually, the perception of accelerated innovation is high because its working economic model requires it to be widely publicized. The dogmatic perspective of techno-utopianism has to be widely shared in order to ensure that vast public and private resources are invested fast in its spasmodic development. The running logic is that of a short-term obsession, to cash in now on future and fully uncertain realizations of innovative ideas, which is a good recipe for inflating an already huge amount of fictitious capital and for preventing sufficient investments from being made at the right pace, and over enough time, to reap the benefits for the common good. While the discourse of ICTs says that everything is possible, their evolution is a consequence of the way they were born, of their historical contingencies and the lock-ins they have produced, but especially of their current dynamics, which are complex enough to feed their strong momentum, but not enough to contribute properly to the challenges that humanity is facing. While the strength of young and enthusiastic entrepreneurship is certainly present, the current framing actually inhibits the possibility of addressing the challenges of the "world problematique" at the appropriate time and space scales.

It is worth recalling that this framing of innovation has become dominant only in the last few decades. Under the current view of societal evolution, we tend to forget that governments have been (and are) the most consistent players in research and innovation, with a unique capability to mobilize public and private efforts through the multi-faceted capacities of the State: as the number one client in any country, able to drive large-scale innovative demand; as a regulator pushing companies to invest in S&T; and, not least, as an entrepreneur able to bear the burden of uncertainty and long-termism much better than private corporations.22 At the core of any major leap forward in S&T (including digital tech), it is easy to identify the foundational initiative of the State. Of course, intervention by governments is not in the mainstream thinking of Western elites today (although it is, and very effectively, in the practice of non-Western countries). And probably the world is too complex anyway to rely simply on a return of the "good old times". But on the other hand, it is governments (and not corporations) that are developing an agenda of (much needed) international agreements on SD. How can we cut this Gordian knot for the sake of humanity? Beyond the critique, how can we reconcile the excitement and wonders of S&T and digital tech with the challenges posed by the "problematique"?

4. Room for Hope: Digital for Life

In digital tech as in any other domain, changing the course of things requires huge amounts of social energy. For now this is not happening at a large scale; only seeds are being planted, initiatives such as "Computing within Limits",*** "Slow Tech"††† or many local projects truly using ICTs in a smart way to promote sustainability (beyond the "Smart Everything" hype). Transformation research does not yet explain how to go from local seeds to global change. Our hypothesis is that more complexity is required to bring innovation processes closer to how life happens, and by complex we mean rich in interactions and diverse enough to produce multidimensional outcomes and unexpected results. Innovation is about more than technologies, and technologies are not only digital. Rather than being exogenous and linear, innovation is a complex and recursive process intertwined with society, and it depends not only on technical but also on political "choices leading to specific designs and applications and not to others, which opens the possibility of altering its current trajectory so that it becomes consistent with sustainable development".‡‡‡

"We have to prove, now and urgently, that sustain­able develop­ment is not an oxymoron."

More complexity means replacing financial profitability with positive contribution to societal challenges as the metric of success. While official R&D agendas declare that innovation has to be challenge-driven, in practical terms it is market-driven. This means that new designs are driven by prices, which, in the absence of appropriate regulations, do not reflect the true costs of non-renewable resource depletion and negative externalities. Prices are themselves driven by the distribution of power in society, which is related to access to scarce resources but does not integrate the requirements of the conservation of life. Instead, sustainability has to be built in at the design stage. In the case of ICTs, a good way to do that would probably be to multiply by 10 or 100 the price of the critical resources on which they depend. But beyond that, it would be worth exploring how to use ICTs so that the negative externalities of processes become evident from the beginning, at the design stage. In a sense, that would mean trading more or better information against increases in entropy. How much of this could be done is a basic question for assessing the true potential of ICTs for sustainability, but until now it remains almost unexplored.§§§ Monitoring negative externalities is left as an ex-post task, when it is simply too late, pretty much as recycling only happens once waste has already been produced.

More complexity also means involving all stakeholders in decision-making processes, which means more than opening online consultations here and there (for which lobbyists of many kinds are much better prepared than citizens at large). It requires a more radical change of paradigm in S&T, towards Citizen Science, Co-Creation and Responsible Research & Innovation (RRI), concepts already invented and officially enacted, for instance in the Horizon 2020 programme of the European Union, but still to be developed beyond lip service. And ICTs could help in this, by being at the same time the object of reflection and the tools facilitating the active participation of stakeholders to address societal challenges for the common good in an "innovation democracy".¶¶¶ They can (and in some cases do) enable the mobilization of citizens, the creation of grassroots alternatives and the diffusion of knowledge, but we cannot take for granted that this will happen easily. Awareness is growing about the negative aspects of our development model and the risks of S&T as they work today, and with it come positive energies to face the challenges, but a lot has yet to be done to ensure proper involvement in new designs. Digital tech having until now been created mainly by young men in California, the participation of women, older persons and people from the rest of the planet would certainly give a richer perspective on the real challenges than the one currently leading to the videogame society. And if stakeholder involvement is taken seriously, it will lead to stopping or decelerating some developments that are too costly and of little benefit to society.

Of course, a stronger dialog between the ICT and sustainability communities is also part of the more complex path to sustainable innovation. But a dialog requires willingness and commitment from both parties, starting with the recognition that the course of things has to be changed because we are failing on the path towards SD. In particular, the current idea that digital is "zero cost" and that it deserves to be free from regulations should be abandoned since, as T. Ranald Ide and his colleagues put it:

“The new wealth of nations is found in the trillions of digital bits of information pulsing through global networks. These are the physical/electronic manifestations of the many transactions, conversations, voice and video messages and programs that, taken together record the process of production, distribution and consumption in the new economy.” 23

As a consequence, they proposed to levy a tax on bits, very small per bit but still large enough to generate fiscal revenues of billions of dollars, which could be used to combat the negative externalities of ICTs and fund SD designs. But on the ICT side, whose leaders are extremely successful and influential, it is unclear how much time it will take to arrive at such a shared vision.
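
A purely illustrative sketch of how such a "bit tax" would scale is given below. The traffic volume and the per-gigabyte rate are hypothetical placeholders chosen here, not figures from Cordell and Ide's proposal; they only show how a very small levy on data flows could add up to billions of dollars.

```python
# Hypothetical illustration of the "bit tax" idea: a tiny levy on data flows.
# Both parameters below are invented placeholders, not figures from the
# cited proposal; they only show how the arithmetic of such a tax scales.

hypothetical_traffic_gb_per_year = 1_000_000_000_000  # assumed: one trillion GB of traffic per year
hypothetical_tax_usd_per_gb = 0.01                    # assumed: one US cent per gigabyte

revenue_usd = hypothetical_traffic_gb_per_year * hypothetical_tax_usd_per_gb
print(f"illustrative annual revenue: ~{revenue_usd / 1e9:.0f} billion USD")  # ~10 billion USD
```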

More complexity also means designing in a way closer to life (which is sustainable by design). One way is to draw inspiration from nature, as done in the "Blue Economy" projects.24 Of special interest could be the attempt to artificially replicate photosynthesis in order to greatly accelerate its effects, as envisioned by Microsoft Research in its Computational Science Lab, though we cannot help mentioning that the vision of the Lab's head is utterly pessimistic about our chances of finding a peaceful pathway to SD.25 In a wider view, we should start using sustainability (in all its complexity) as the critical design factor in new inventions, which includes invoking one of the most successful mechanisms of biological evolution, exaptation, i.e. the capacity to reuse an existing design for purposes other than those for which it was created. And, as said, to do all that we could exploit the huge potential of ICTs to better understand the relationship between entropy and information in all physical processes.

The combination of scientific knowledge and technological sharpness has a strong generative capacity, which could lead to many different global scenarios: to old-fashioned accumulation in very few hands and unsustainable ways of life (as happens today), as well as to the emergence of vibrant ecosystems for the benefit, diversity and sustainability of humankind. We have to prove, now and urgently, that sustainable development is not an oxymoron. The role of technological innovation in that mission is critical but not guaranteed. To a large extent it is right now captured by financial speculation, not driven by societal challenges, focused on "solutionism" rather than on specific contexts, and produced without the active involvement of the stakeholders (ultimately, humanity at large as well as the natural environment). So it is not helping to steer our course away from socio-ecological disasters. But it could be just the opposite.

Overcoming this situation requires mobilizing a mix of holistic vision, strategic intentions, scientific commitment, activist enthusiasm and narratives, in a cocktail strong enough to connect with the deep human aspirations to autonomy and participation in a more genuine way than digital tech does today. Of course this will also require financial resources, and therefore political decisions, to foster the process towards a true "Innovation Democracy" able to master the potential of new inventions for the sake of life on Earth. We are far from there yet. The seeds exist, but they have to be assembled and fed with social energy. Instead of resorting to a blind faith in digital tech as our savior, the time has come to make proper use of all the knowledge we already have.

Notes

  1. Frank Piasecki Poulsen, “Children of the Congo who risk their lives to supply our mobile phones” The Guardian 7 December 2012 https://www.theguardian.com/sustainable-business/blog/congo-child-labour-mobile-minerals.
  2. E.D. Williams, R.U. Ayres and M. Heller, “The 1.7 Kilogram Microchip: Energy and Material Use in the Production of Semiconductor Devices,” Environmental Science and Technology 36, no. 24(2002): 5504–5510.
  3. Robert Rattle, “ICTs roles in an Environmental Society” In M. Lapka & E. Cudlínová, eds. Towards an Environmental Society? (Prague: Karolinum Press, 2012).
  4. Lorenz M. Hilty et al., “The relevance of ICTs for environmental sustainability. A prospective simulation study,” Environmental Modelling & Software 21 (2006): 1618-1629.
  5. Peter Diamandis and Steven Kotler, Abundance: The Future is Better than You Think (New York: Simon & Schuster, 2015).
  6. Evgeny Morozov, To Save Everything, Click Here (New York: PublicAffairs, 2014).
  7. W. Chan Kim and Renée Mauborgne, Blue Ocean Strategy (Boston: Harvard Business Review Press, 2016).
  8. N. Popper, “The robots are coming for Wall Street” International New York Times 27 February 2016 https://www.nytimes.com/2016/02/28/magazine/the-robots-are-coming-for-wall-street.html?_r=0.
  9. Jaron Lanier, You are not a gadget. A manifesto (London: Penguin Books, 2011).
  10. Lanier, You are not a gadget.
  11. Neil Postman, Amusing Ourselves to Death: Public Discourse in the Age of Show Business (New York: Penguin Books, 2006).
  12. Hartmut Rosa, Accélération : une critique sociale du temps, suivi d'un entretien avec l'auteur (Paris: La Découverte, 2013).
  13. Morozov, To save everything.
  14. Lanier, You are not a gadget.
  15. David Golumbia, “Cyberlibertarians' Digital Deletion of the Left” Jacobin Magazine 4 December, 2013 https://www.jacobinmag.com/2013/12/cyberlibertarians-digital-deletion-of-the-left/.
  16. Ian Traynor, “Internet governance too US-centric, says European Commission” The Guardian. 12 February 2014 https://www.theguardian.com/technology/2014/feb/12/internet-governance-us-european-commission.
  17. Joseph Bower and Clayton Christensen, “Disruptive Technologies: Catching the Wave” Harvard Business Review January-February 1995 https://hbr.org/1995/01/disruptive-technologies-catching-the-wave.
  18. Clayton Christensen, The innovator‘s dilemma: when new technologies cause great firms to fail (Boston: Harvard Business School Press, 2016).
  19. Werner Sombart, Krieg und Kapitalismus (New York: Arno Press, 1975).
  20. Joseph A. Schumpeter, Capitalism, Socialism and Democracy (Mansfield Centre: Martino, 2011).
  21. Carlota Pérez, Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages (Cheltenham: Edward Elgar, 2014).
  22. Mariana Mazzucato, The Entrepreneurial State: Debunking public vs private sector myths (London: Anthem, 2015).
  23. Arthur J. Cordell, The New Wealth of Nations: Taxing Cyberspace (Toronto: Between The Lines, 1997).
  24. Gunter Pauli, The Blue Economy. Version 2.0. (New Delhi: Academic Foundation, 2015).
  25. Stephen Emmott, Ten Billion (New York: Books on Tape, 2013).

* See European Union. 2014. "Critical Raw Materials". http://ec.europa.eu/growth/sectors/raw-materials/specific-interest/critical_en

† See Silicon Valley Toxics Coalition. 2006. "Toxic Sweatshops". http://svtc.org/our-work/e-waste/

‡ Daniel Pargman. August 2016. "Designing for Sustainability: Breakthrough or suboptimisation?". 4th International Conference on ICT for Sustainability (ICT4S). Amsterdam

§ Ralph Hintemann, Jens Clausen. August 2016. “Green Cloud? Current and future developments of energy consumption by data centers, networks and end-user devices”. 4th International Conference on ICT4S. Amsterdam

¶ Climate Group for the Global eSustainability Initiative. 2008. "SMART 2020: Enabling the low-carbon economy in the information age". http://www.smart2020.org/_assets/files/02_Smart2020Report.pdf.

** UN General Assembly. Resolution adopted on 27 July 2012. “The future we want”

†† David Souter. July 2015. "Advancing a sustainable Information Society for all". UN Public Administration Program

‡‡ Climate Group for the Global eSustainability Initiative. 2008. Ibidem

§§ IBM. 2016. “Watson for President 2016”. http://watson2016.com

¶¶ John Perry Barlow. 1996. "A Declaration of the Independence of Cyberspace". www.eff.org/cyberspace-independence

*** August 2016. “Computing within Limits: Visions of Computing beyond Moore's Law”. Workshop at the ICT for Sustainability Conference, ICT4S 2016. Amsterdam

††† August 2016. “Slow Tech: Clean ICT, an overview and case study exploration”. Workshop at ICT4S 2016.

‡‡‡ Robin Mansell. October 2012. “ICT Innovation and Sustainable Development”. IISD

§§§ Antonio Valero. 2016. Private communication

¶¶¶ Andrew C. Stirling. 2014. Towards innovation democracy? Participation, responsibility and precaution in the politics of science and technology. UK Government Office of Science

About the Author(s)

Carlos Alvarez Pereira

Vice President of The Club of Rome; Fellow, World Academy of Art & Science