“To Save Everything, Click Here” by Evgeny Morozov – Part 2

Previously I’d shared my first impressions of Evgeny Morozov’s book To Save Everything, Click Here: The Folly of Technological Solutionism. Having now finished reading it, here are my thoughts:

 

“This flight from thinking and the urge to replace human involvement with [supposedly objective] truths produced by algorithms is the underlying driving force of solutionism”

Evgeny Morozov

 

Morozov convincingly describes technological solutionism (a.k.a. ‘geeks can solve everything’) as a utopian ideology convinced of its own righteousness, and one that is blind to the history of previous failed movements (such as Scientism in the 1800s) that also viewed science and rational thought as a clean & unbiased answer to all of society’s problems.

That scientific governance can only work if everyone first agrees what the ‘ideal’ society should look like simply doesn’t cross their minds. They look for problems that their tool – internet technologies – can solve, and don’t bother asking whether another method such as social, legal, or economic reform would be more appropriate.

The book provides insights into the many areas where the consequences of internet technologies aren’t being discussed, such as:

Most internet technologies are built around the consumer mindset: everything must be tailored to provide instant gratification and never challenge users or make them feel any unpleasant emotions. Some books, articles and other content can now even be automatically created by algorithms and tailored to suit people’s tastes. Not only does this overstimulate our senses, it also acts as a ‘positive feedback loop’, making us even less patient when dealing with our civic responsibilities, where sacrifices, delays and ugly compromises are regularly required. And it gives politicians one more incentive to tell us what we want to hear

For algorithms to work, everything must be quantified. And so there is a cult of measurement, where engineers at Google and Apple try to ‘measure’ how good a song is, or what the literary value of a novel is, using metrics like the number of stars online users award it. That some things are subjective and should be left unquantified simply doesn’t fit this worldview

‘Big data’ is essentially just ‘big correlation’. When internet companies predict a page’s click rate, or where a crime is most likely to be committed next, it is a calculation based on correlations found in past data. This can be helpful, but it also has major limitations – as we saw with the failure of financial algorithms in the 2008 crisis (a minimal sketch of this kind of prediction follows this list)

The Internet is assumed to be a ‘revolution’ and a ‘total break from the past’, when in fact more often than not it’s simply a tool that allows us to do things that we were already doing, just more efficiently. Take for example crowdsourcing, a method which the British government used – in the 1700s – to call for ideas on how to improve ship navigation

There is far too much faith placed in the algorithm. No matter how well it works technically, every design is biased by the assumptions of its designer (i.e. what is important in an internet search? what is not? what should be flagged as a possible terrorist communication? what shouldn’t?). Governments and companies almost always keep these algorithms secret so that they can’t be gamed, but that also means they have no accountability. Independent audits are needed to judge not just an algorithm’s technical performance, but also the legality and ethics of its decision philosophy (see the second sketch below)

“The Internet” is seen as a monolithic, unchallengeable force, not a collection of technologies that can be reviewed and evaluated on their own merits. At the same time there is a “we can’t do anything about it because it’s the way of the future” defeatism. We’re told we just have to live with it because we can’t possibly make any changes to it. Never mind that plenty of other technologies such as cars, power plants and cell phones have been modified and regulated (seat belts, emissions limits, contract limits) and are still going strong

The principle of scientific precaution is simply not being applied in the tech sector. Facebook, Google, Twitter and the like engage in large-scale social experimentation (often unwittingly) without any serious study of the potential consequences for individuals or for society. Take for example emotion-recognition software, where algorithms are used to decipher whether a person in a video (such as a politician!) is lying. This sounds great, right? Now what if it becomes widespread and your friends systematically use it on you and everyone they know? What kind of society would that foster? An honest one? Or a paranoid and distrustful one? What if the nation is in a major crisis and needs its leaders to give it hope even when there isn’t much? Would knowing your politician is bending the truth be helpful, or a poison pill? These are the kinds of major societal impacts relatively simple technologies can have, and yet new ones are being invented and implemented at breakneck speed without much forethought. It speaks to a dangerously irresponsible attitude of ‘you can design it – therefore you should’. The tech sector is in denial about the major socio-political role it has de facto given itself.
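To make the ‘big correlation’ point concrete, here is a minimal, purely illustrative sketch (the data and feature names are invented, not from the book): a model is fitted to whatever correlations exist in historical data, and it will confidently extrapolate them even after the world has changed – exactly the failure mode of 2008.

```python
import numpy as np

# Invented historical data: each row is (headline_length, hour_posted);
# y is the click-through rate that was observed for that page.
X = np.array([[55, 9], [80, 14], [40, 20], [65, 11], [50, 16]], dtype=float)
y = np.array([0.042, 0.031, 0.025, 0.038, 0.029])

# Ordinary least squares: find the weights that best reproduce past outcomes.
# The model captures correlations only - it knows nothing about WHY users clicked.
X1 = np.hstack([X, np.ones((len(X), 1))])  # add an intercept column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Predict a new page's click rate from those same correlations. If user
# behaviour shifts, the prediction degrades silently: the weights still
# encode yesterday's world.
new_page = np.array([60, 10, 1], dtype=float)
print(f"predicted click rate: {new_page @ w:.4f}")
```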
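And as a second, equally hypothetical sketch of the designer-bias point: even a toy ranking function forces its author to decide what ‘relevant’ means. The signals and weights below are invented for illustration – no real search engine works from three hard-coded numbers – but the same kind of choice is baked in somewhere in every real system.

```python
# Illustrative only: the hard-coded weights embody the designer's judgment
# about what makes a search result "good".
def score(result: dict) -> float:
    return (
        0.5 * result["keyword_match"]  # designer decided text match matters most
        + 0.3 * result["popularity"]   # ...that popular pages deserve a boost
        + 0.2 * result["recency"]      # ...and that newer is better
        # Anything not listed (accuracy? diversity of viewpoints?) simply
        # does not count. That omission is a design decision too.
    )

results = [
    {"url": "a.example", "keyword_match": 0.9, "popularity": 0.2, "recency": 0.1},
    {"url": "b.example", "keyword_match": 0.6, "popularity": 0.9, "recency": 0.8},
]
for r in sorted(results, key=score, reverse=True):
    print(r["url"], round(score(r), 2))
```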

 

The book does have its flaws. In his criticisms of Silicon Valley it often feels like Morozov is grasping at straws, trying too hard to play devil’s advocate and find the one thing that’s wrong with every innovation. He also does not give Silicon Valley full credit for the many wonderful tools it has created (such as the webpage containing this post). He went out looking for problems with the tech sector, and that’s what he found.

The biggest issue for me is that he focused mainly on the individual impacts of each technology, and not their effect as a whole (in fact the premise of the book is that ‘The Internet’ as such does not exist, it’s only a collection of internet technologies). This is accurate in many ways, but it misses the bigger picture of where technological society is headed (e.g. in terms of pace, complexity and endgame) and whether we want to go there at all.

Nonetheless, the author is very well read and the book is packed with references to a wide range of sources – everything from Der Spiegel (and an interesting discussion of the German Pirate Party) to Nietzsche (“a mechanical world would be an essentially meaningless world”) to José Ortega y Gasset (“to be an engineer… is not enough to be an engineer”) to Walter Lippmann, and of course a whole array of current Silicon Valley thinkers, most of whom he ruthlessly critiques.

 

My own take on the situation is that I don’t think Google and Facebook truly understand what they’re doing. They are undeniably the masters of the technology, but not of its consequences. In this sense they are as much victims and spectators of the unfolding changes as anyone else.

As with previous advances in the oil & plastics industries – seen at the time as unquestionably brilliant ways to improve the world – it is quite possible that we will only realise, years later, the full weight of Silicon Valley’s impact.

Either way, this is a very important book and I highly recommend reading it.

 

 
