For those of us who came of age sometime in the late ’80s to early ’90s, the date October 21, 2015 has special significance in terms of both nostalgia and excitement. For those who don’t grasp the reference immediately, this is the day that Marty McFly (Michael J. Fox) and Doc Brown (Christopher Lloyd) travel to in Back to the Future Part II. The purpose of the trip is to prevent Marty’s future son (also played by Fox) from taking part in an armed robbery that sends the McFly family on a downward spiral (Doc Brown having already taken the trip to the future and witnessed all this). Of course this phase of the Back to the Future odyssey really began at the end of the original film, when Doc Brown returns from that first trip to the future, sporting an outfit that could safely be described as futuristic, to inform Marty and his girlfriend (and future wife) of the urgent rescue mission to be undertaken. When the three pile into the time machine (a DeLorean), Marty advises Doc to back up so they have enough road to get the car to the 88 mph needed to activate the flux capacitor that triggers the time circuits. Doc coolly responds with a line later quoted by Reagan in his 1986 State of the Union address: ‘Roads? Where we’re going we don’t need roads.’
So they blasted off to a future with highways of flying cars, skateboards that float on air, robot servers that produce bottles of Pepsi on demand, jackets that instantly dry themselves when wet, and pizzas that ‘hydrate’ in seconds from a few inches to full size in some sort of everyday oven, along with a criminal justice system that had efficiently abolished lawyers.
Despite the excitement such a future elicited in young moviegoers, it is unfortunately a safe bet that by the end of the year hoverboards, power laces, and robot waiters will still not have appeared, and needless to say time travel itself still eludes us.
If the visions of cheesy scriptwriters back in 1989 have proved lacking, there is still quite a market for the predictions of futurists. The most provocative version of this goes by the name of ‘the Singularity’: the moment when humanity and its created technology basically become one. Popularized by computer scientist (and current Google employee) Ray Kurzweil, the process rests on what Kurzweil calls the law of accelerating returns. That law, according to Kurzweil in his book The Singularity Is Near, ‘describes the acceleration of the pace of and the exponential growth of the products of an evolutionary process. These products include, in particular, information-bearing technologies, such as computation… The law of accelerating returns applies to all technology, indeed to any evolutionary process.’ Of particular importance here is Moore’s law, which holds that overall computer processing power doubles every two years (or, more precisely, that the number of transistors on an affordable CPU doubles every two years). Think of it this way: in 1997 the U.S. government, in order to design weapons without explicitly breaking the 1992 moratorium on nuclear testing, commissioned what was then the world’s most powerful supercomputer, ASCI Red. As large as a tennis court and costing $55 million, it reigned as the top computer until 2000. By 2006 its equivalent was released by Sony as the PlayStation 3. At this rate, by about the year 2045 computer intelligence will supposedly exceed human intelligence by a billion times. Until then, advances in biotechnology and nanotechnology will increase the lifespan (and intelligence) of humans until the Singularity comes about, biology is transcended, pleasure is perfected, and immortality is established (an outcome sometimes called the geek rapture) in a world in which artificial intelligence continually produces more intelligent versions of itself, apparently ad infinitum.
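As a quick illustration of what that doubling arithmetic implies, here is a minimal sketch assuming a fixed two-year doubling period (the hypothetical growth_factor function is my own gloss, not Kurzweil’s; his projections assume the doubling period itself shrinks, which is what gets him to far larger numbers by 2045):

```python
# Minimal sketch of fixed-period doubling, a simplified stand-in for Moore's law.
# (Kurzweil's law of accelerating returns assumes the doubling period itself
# shrinks over time, which compounds far faster than this constant-rate model.)

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Multiplicative growth after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

# Roughly the ASCI Red (1997) to PlayStation 3 (2006) window:
print(f"1997-2006: ~{growth_factor(2006 - 1997):.0f}x")      # ~23x

# Steady two-year doubling carried from 1997 out to 2045:
print(f"1997-2045: ~{growth_factor(2045 - 1997):,.0f}x")     # ~16.8 million x

# A billion-fold increase would need about 30 doublings (~60 years at this
# pace), which is why the 2045 claim leans on accelerating, not constant, rates.
```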
It should go without saying that there are many critics who relegate this technological utopia to the realm of fiction. For instance, given the current limits of our knowledge about consciousness and intelligence, it is hard at this point to see how either could be programmed into computers. Then there are questions about Moore’s law itself. Writing in the MIT Technology Review in 2011, Microsoft co-founder Paul Allen argued: ‘Kurzweil’s reasoning rests on the Law of Accelerating Returns and its siblings, but these are not physical laws. They are assertions about how past rates of scientific and technological progress can predict the future rate. Therefore, like other attempts to forecast the future from the past, these “laws” will work until they don’t.’ Perhaps Michael Shermer, editor of Skeptic magazine, summed up the chorus of criticism best: ‘My prediction for the Singularity: we are ten years away… and always will be.’
Still, there are more plausible futurist scenarios. In their book The New Digital Age: Reshaping the Future of People, Nations, and Business, Eric Schmidt and Jared Cohen (also both Google people: Schmidt its chairman and former CEO, Cohen the director of Google Ideas) paint a Back to the Future-ish picture of driverless cars, automatic barbers, integrated clothing machines that keep inventory of clean clothes and suggest outfits to fit a daily schedule, various hologram machines for various purposes, tissue engineers growing new organs to replace old or diseased ones, and high-tech beds with sensors that monitor sleeping rhythms to determine precisely when to wake sleepers at their most refreshed.
All seemingly quite plausible. For instance, driverless Google cars have already logged hundreds of thousands of miles without an accident (and given that around 40,000 Americans die in car accidents every year, driverless cars, at least in that respect, would be a significant improvement). Robotics figures to keep working its way into more parts of our everyday lives, as does 3D printing. Nanotechnology has great potential, and stem cells might well hold the key to eliminating various diseases.
That the future should feature somewhat longer lives and fun toys is not the question. It is in the practical social implications where futurist predictions probably come up short. Schmidt and Cohen posit separate realms of future (and present) living: the physical world and the virtual world. On its face that wouldn’t seem far-fetched. Many people already identify with the concept of a virtual world given the likes of Instagram, Twitter, and Facebook. However, Schmidt and Cohen take it further than most would consider. Some features of their virtual world include virtual statehood: a minority group spread across different state borders (such as the Kurds) could declare virtual statehood, complete with an online currency, thereby threatening the status quo in the physical world. There is virtual warfare, certainly not a new concept but one that the authors argue will expand. Then there is virtual kidnapping, virtual health care, and so on. Anything, it seems, can be revolutionized simply by throwing the adjective ‘virtual’ in front of it. It is fair to wonder whether something like statehood can really be accomplished virtually. If a Palestinian or Chechen government declared independence online, would physical statehood be any closer, or would the act of declaring some kind of virtual statehood bring harsh consequences, in the form of tanks and bombs, in the physical world? Virtual kidnapping, in the form of stealing the online identities of wealthy people and using the details for ransom purposes, sounds more like an annoyance than a life-altering trauma. Virtual genocide, hackers of different stripes annihilating another group’s web community, again seems more annoying than deadly.
Yet even beyond geopolitics and acts of violence, there remains the question of what constitutes the main element of the Silicon Valley vision. A good feel for what it all boils down to can be found in Matt Ridley’s The Rational Optimist, and it is a fair representation of the ethic of many self-proclaimed digital revolutionaries:
The market economy is evolving a new form in which to even speak of the power of corporations is to miss the point. Tomorrow’s largely self-employed workers, clocking on to work online in bursts for different clients when and where it suits them, will surely look back on the days of bosses and foremen, of meetings and appraisals, of time sheets and trade unions, with amusement.
Stripped of all its particulars, what’s left here is the Randian individual: the individual, freed by technology to work when she wants, how she wants, making a fine, nonregulated living for herself, burning through any staid bureaucracy while performing great acts of philanthropy. The emancipation of the individual is a noble enterprise, perhaps put most eloquently by none other than Marx and Engels in The Communist Manifesto (‘In place of the old bourgeois society, with its classes and class antagonisms, we shall have an association, in which the free development of each is the condition for the free development of all’), a document which also happens to contain the greatest appreciation of capital and its operatic creative destruction, a concept associated with the economist Joseph Schumpeter describing the process by which innovation destroys established systems and industries and replaces them with new ones. Schumpeter believed this to be the main engine of economic growth.
Yet we also know what Marx and Engels eventually saw coming, and we know the capitalist utopia, or dystopia, Silicon Valley aspires to. It’s not at all clear that a society of permanently emancipated, isolated individuals makes for a greater world, especially with the loss of some of the other things often targeted for technological solutions, like political parties or labor unions. After all, what need do free, self-sufficient individuals have for political parties when they can easily organize themselves online without any party filter? Perhaps not much, yet it is instructive in that vein to note that here in the U.S. the internet has been around for the past four, even five, presidential elections. In 1992, the last election where the internet certainly wasn’t a factor, voter turnout was 58.3% according to the Bipartisan Policy Center. In 1996, as the internet was taking off, turnout dropped to 51.4%. In 2000 it was 54.2%, in 2004 60.6%, and it reached 62.3% in 2008 before dropping to 57.5% in 2012 (below the 1992 level). Clearly the internet has yet to produce a truly revolutionary effect on voter participation, and it would be a tough sell to argue that the political establishment is any more progressive than it was before the internet, or even that the spectrum of issues debated has widened in any significant way. The two-party system hasn’t been dented, and poll numbers indicate that public faith in political institutions is at a low. Internationally, Twitter took bows for the Arab Spring of 2011, but its role in the original uprisings paled in comparison to that of Al-Jazeera, the product of an older technology. If the internet is to revolutionize politics by inspiring and empowering the masses, it has a ways to go.
-
“[A]nd man, having enslaved the elements, remains himself a slave.”
— Percy Shelley, “A Defence of Poetry” (1821)
A typical definition (Merriam-Webster’s, to be exact) of the word technology reads something like ‘the use of science in industry, engineering, etc. to invent useful things or to solve problems; a machine, piece of equipment, method, etc. that is created by technology.’ Very straightforward; in fact, on that view it’s easy to characterize every item ever created by human beings as a form of technology. That view is justified.
Others have been more elaborate in trying to form a definition. Kevin Kelly, editor of Wired magazine, posits what he calls the ‘technium’ to describe the totality of technology. Kelly describes the technium as ‘the greater, global, massively interconnected system of technology vibrating around us’, including culture, art, social institutions, and all types of intellectual creations.
In The Nature of Technology, Brian Arthur points out that technology consists of other technologies in that new technologies arise from combinations of older ones. He also explains that every technology is based upon a phenomenon, capturing it and exploiting it for use.
Again, all pretty straightforward: it is apparent that technology has something of an evolutionary aspect to its development (Arthur describes this as technologies sharing a common anatomical structure), and given some thought it is not a stretch to classify the arts and social institutions as technologies of a kind, and therefore at least abstractly interconnected.
Yet the passion of techno-enthusiasts goes beyond this. In his book What Technology Wants, Kelly argues ‘our system of tools and machines and ideas became so dense in feedback loops and complex interactions that it spawned a bit of independence. It began to exercise some autonomy.’ Arthur argues for a similar concept: ‘The collective of technology builds itself from itself from the activities of small organisms. So, providing we bracket human activity and take it as a given, we can say that the collective of technology is self producing.’
Has technology advanced to the point where it can be said to have an autonomous existence, one that, if so, at least vaguely brings to mind Terminator or Maximum Overdrive scenarios? Kelly claims that it depends on one’s conception of autonomy. He writes:
Even we adults are not 100 percent autonomous since we depend upon other living species in our gut in the digestion of our food or the breakdown of toxins. If humans are not fully autonomous, what is? An organism does not need to be wholly independent to exhibit some degree of autonomy. Like an infant of any species, it can acquire increasing degrees of independence, starting from a speck of autonomy.
Kelly correctly points out that a measure of autonomy can come down to certain traits: self-repair, self-defense, self-maintenance, self-control of goals, self-improvement. There isn’t a technological system that displays all of these, but some technologies display a few of them (drones flying on autopilot, computer viruses reproducing themselves).
Still, the revolutionary aspect of the theory seems to collapse on itself. Here is Kelly elaborating:
The technium wants what we design it to want and what we try to direct it to do. But in addition to those drives, the technium has its own wants. It wants to sort itself out, to self-assemble into hierarchical levels… to perpetuate itself, to keep itself going… want does not mean thoughtful decisions… but rather tendencies. Leanings. Urges. Trajectories… just like the unconscious drift of a sea cucumber as it seeks a mate.
On one hand technology is dependent on its human designers for its very existence, but on the other hand there’s space for its own tendencies? How could human activity be bracketed? It’s unclear how these tendencies could be autonomous from design. It is easy to find an entropic tendency toward increased complexity; however, tendencies and urges are not quite the same thing. A tendency is just that: a tendency, an inclination (for example, John has a tendency to oversleep on Monday mornings). An urge is defined as a strong desire or impulse. A lion does not have a tendency to hunt antelope. Army ants don’t display a tendency to swarm. Behavior in most of the natural world is driven by instincts derived through natural selection. Tendencies can be reversed, or even stopped (even if artificial intelligence will one day be capable of its own greater creations, that day isn’t inevitable). To compare human creativity to natural selection, with technology as the resulting life, sounds interesting, but it misses the randomness of natural selection. History contains numerous examples of societies where technological invention slowed down or outright regressed. It figures that the Aboriginal peoples reached Australia by sea, yet by 1788 they had no seaworthy boats. The same is true of the inhabitants of Hawaii. The Tasmanians lost bone tools and fishing. The famous statues of Easter Island reveal a technological precision that had disappeared by the time of European conquest. Then there’s China, which probably led the world in technological sophistication right up until 1400. China possessed ships longer and stronger than European ships, yet by the arrival of the Portuguese in 1514 China had no such ships left. Certainly none of this boded well for those societies, the same way a declining population ultimately doesn’t, but it does show the extent to which technology doesn’t have a life of its own.
-
“We’re beyond good and evil here, the technology, it’s neutral, eh.”
— Thomas Pynchon, Bleeding Edge
If concretely defining what technology is proves both easy and difficult, placing it in the realm of good and evil is perhaps murkier still. It’s hard to think of a piece of technology where this dichotomy wouldn’t apply. Nuclear power, while still controversial, can power a country’s grid while at the same time yielding weaponry that can destroy life on the planet, yet weaponry that has also been credited, even by disarmament advocates, with a deterrent, and hence peacekeeping, effect. Cars and trucks have opened up vast stretches of territory and made travel and related economic activity easier, but in the U.S. they are statistically certain to kill around 40,000 people a year; the number is higher in other countries. Fertilizer has greatly increased agricultural production, but its runoff has severely polluted coastal and river waters. The same could be said of planes. For the many commercial, cultural, and recreational opportunities planes offer, there is the flip side of bombed-out Dresden and the horror of 9/11. For all the popular romanticizing of the Wright Brothers, it is notable that when asked in 1905 what the purpose of his machine might be, Wilbur Wright simply said ‘war’. Not long after, the brothers tried to sell their patent to the war offices of several countries. The U.S. eventually bought it for $30,000, and test bombing began in 1910.
This type of discussion inevitably brings up the haunting specter of the Luddites, ‘Luddite’ being what anyone deemed anti-technology can expect to be labeled, though the label is fraught with political implications beyond technology as such, as it was from the beginning. The original Luddites, so passionately but not uncritically defended by Lord Byron in his maiden speech in the House of Lords,[1] in a sense couldn’t even claim originality, as machine breaking had a tradition in British protest before them. The target of Luddite rage, the stocking frame, had been around for two centuries, making the movement hardly a reaction against newness. It was quite a harsh time to be a stockinger (textile worker) or a worker in general: no minimum wage, the rarity of stockingers owning their own frames (machines), common child labor, Napoleon’s blockade of English trade, the criminalization of unions (through the Combination Acts of 1799 and 1800), the War of 1812 with the U.S., poor harvests in Britain between 1809 and 1812, wages that lagged behind rising food prices; it all painted a harsh, brutish picture. Their problem was not with technology per se, but with how it was applied: the way it was used to cheapen jobs, impoverish skilled workers, and increase production without improving wages, all while concentrating wealth. Does any of this sound familiar?
What would the Luddites have made of the fact that, popular perceptions aside, the United States manufactures more steel today than in 1970? In fact, American factories produce the same output as much-hyped China, more than double Japan’s output, and several times that of Germany and Korea. The U.S. manufacturing sector alone is larger than Britain’s economy, and it’s still growing. For instance, the U.S. produced 106 million tons of steel in 2007, compared to 91 million in 1970. The difference? In 1970 it took 531,000 workers to produce the total; in 2007 only 159,000 workers were needed (see Gregg Easterbrook’s Sonic Boom: Globalization at Mach Speed). Some recent commentary has speculated that manufacturing is returning to U.S. shores thanks to low Southern wages, and that more will come in fields like robotics, but the long-term trend figures to remain the same. Union jobs, the eventual main benefit of large-scale manufacturing, are stuck at about 7% of private-sector employment; well-paying union job numbers have plummeted while low-paying, anti-union Wal-Mart is the largest employer in roughly half the states in the country. As for jobs in general, Apple’s recently reported quarterly profits were the highest in history. In 1960 the world’s most profitable company was General Motors, with a workforce that numbered 600,000 people. In today’s money GM made $7.6 billion that year. Apple is on pace to pull in $88.9 billion with a workforce of only 92,600 (as pointed out by John Lanchester in the London Review of Books).
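A quick back-of-the-envelope sketch (my own arithmetic on the Easterbrook figures cited above, not a calculation from his book) makes the productivity shift concrete:

```python
# Rough productivity arithmetic from the steel figures cited above.
# Tons and worker counts are the reported annual totals for each year.

tons_1970, workers_1970 = 91_000_000, 531_000
tons_2007, workers_2007 = 106_000_000, 159_000

per_worker_1970 = tons_1970 / workers_1970   # ~171 tons per worker
per_worker_2007 = tons_2007 / workers_2007   # ~667 tons per worker

print(f"1970: ~{per_worker_1970:.0f} tons/worker")
print(f"2007: ~{per_worker_2007:.0f} tons/worker")
print(f"Productivity multiple: ~{per_worker_2007 / per_worker_1970:.1f}x")  # ~3.9x
```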
The disappearance of horses from the day-to-day economy offers a succinct, if frightening, illustration of a job industry dying. The population of working horses peaked in England in 1911, when 3.25 million were plowing fields, toiling in pits, and hauling wagons. With the invention of the combustion engine the displacement began. By 1924 there were fewer than 2 million. Eventually the wage that horses could be employed for didn’t even pay for their feed (see A Farewell to Alms by Gregory Clark).
Looking for a human parallel? In 1870 about 75% of American workers were employed in agriculture. The number is now down to 2%. Fortunately, manufacturing jobs were available to fill the void. Then, as manufacturing was outsourced and automated, the service sector was next to at least provide jobs. As service-sector jobs become more and more automated (think ATMs, automatic check-in at airports, etc.), will there be another sector for mass employment? If technology is credited with creating most, if not ultimately all, economic growth in history, will economic growth at some point be completely divorced from jobs?
A report by the McKinsey Global Institute, titled Disruptive Technologies, thoroughly lists a series of emerging technologies and the numbers of people potentially impacted, and not all those affected will be blue-collar workers. The twelve economically disruptive technologies listed are: mobile internet, automation of knowledge work, the internet of things, cloud technology, advanced robotics, autonomous vehicles, next-generation genomics, energy storage, 3D printing, advanced materials, renewable energy, and advanced oil and gas exploration and recovery. In a report titled The Future of Employment: How Susceptible Are Jobs to Computerization?, Oxford researchers Carl Benedikt Frey and Michael Osborne predict that around 47 percent of total U.S. employment is at high risk of being replaced by computerization within the next two decades.
In Race Against the Machine, MIT professors Erik Brynjolfsson and Andrew McAfee, while overall optimistic about the future technology will provide, acknowledge that:
There is no economic law that says that everyone, or even most people, automatically benefit from technological progress… Even as overall wealth increases there can be, and usually will be, winners and losers. And the losers are not necessarily some small segment of the labor force like buggy whip manufacturers. In principle, they can be a majority or even 90% of the population.
Of course technology is touted not only as the cause of all this but also as the solution. If technological advance took away some jobs, surely it will eventually replace them with better ones? Listening to much of the media’s infatuation with the tech sector, the answer is yes. But is that really the case? In The New Geography of Jobs, Enrico Moretti calculates that internet-sector jobs have increased by 634% over the past decade, software jobs by 562%, and life-science jobs by 300% over the last twenty years. More importantly, Moretti estimates that every high-tech job creates an additional five local jobs outside the high-tech field (compared to only 1.6 for manufacturing jobs), which obviously bodes well for places like the Bay Area, Austin, Boston, Seattle, and New York.
Not that this narrative hasn’t been challenged. In The Great Stagnation, Tyler Cowen argues that rather than a technological revolution the U.S. has actually reached a technological plateau, ending an era of what Cowen calls ‘low-hanging fruit’ (he puts that era at roughly 1870-1970), in the sense that technological advances are now taking place in arenas that don’t create as many jobs as those of eras past. That is perhaps a fair point when comparing the number of workers at Facebook to the numbers in the automobile and steel industries at their respective heights, especially relative to overall population (though Cowen doesn’t attempt to estimate the number of surrounding jobs a functioning tech sector could produce, as Moretti does). Cowen writes of the internet: ‘Still, relative to how much it shapes our lives and thoughts, the revenue component of the internet is comparatively small.’
This view is echoed by economist Robert J. Gordon in an essay titled ‘Is U.S. Economic Growth Over?’. Gordon’s main thrust is that there have been three industrial revolutions. The first (IR1) ran from 1750 to 1830, centering on early inventions such as the cotton gin and the steam engine through to the spread of railroads (its full effect therefore came 150 years after it began). The second (IR2), from 1870 to 1900, is the one Gordon credits with having the greatest impact on the standard of living: here came electricity, running water, indoor plumbing and sewers, the telephone, and by extension consumer appliances, cars, highways, and so on (it took 100 years for IR2 to have its main effect). The third (IR3) began around 1960 with the first commercial use of computers and ran through the rise of PCs and e-commerce (a process that Gordon argues was completed by 2005), enabling the economic growth rate to stay consistent, averaging 2 percent a year. As for jobs, Gordon writes:
In the past decade the nature of IR3 innovations has changed. The era of computers replacing human labor was largely over, although the role of robotics continued to expand in manufacturing, while many airline check-in employees were replaced by e-kiosks… Attention in the past decade has focused on a succession of entertainment and communication devices that do the same things we could do before but now in smaller and more convenient packages.
iPods replaced CD Walkmans; smartphones replaced early cell phones. Gordon: ‘These innovations were enthusiastically adopted, but they provided new opportunities for consumption on the job and in leisure hours rather than a continuation of the historical tradition of replacing human labor with machines’. The latest version of the iPhone may draw mobs but adds little to production. The way things are going, Gordon estimates, it may take a century or more for most people in the U.S. to double their standard of living, whereas the old rate was a doubling every 35 years.
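To put those doubling times in perspective, here is a small, hedged calculation of the annual growth rates they imply (my own rule-of-thumb arithmetic, not Gordon’s):

```python
# Convert a doubling time into the constant annual growth rate it implies.
# (Rough rule of thumb: growth rate in percent ≈ 70 / doubling time in years.)

def annual_growth_pct(doubling_years: float) -> float:
    """Constant annual growth rate (%) that doubles a quantity in `doubling_years`."""
    return (2 ** (1 / doubling_years) - 1) * 100

print(f"Doubling every 35 years  -> ~{annual_growth_pct(35):.1f}% per year")   # ~2.0%
print(f"Doubling every 100 years -> ~{annual_growth_pct(100):.1f}% per year")  # ~0.7%
```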
Whether it’s because of technology or due to an ultimate lack of it, and whether the tech revolution has already largely happened or is still forthcoming, the question always presents itself: What is to be done? Practical education reform that emphasizes science and technology, and moves away from factory-style testing centers, is an obvious answer but also an incomplete one. For one thing, most ‘jobs’ of the future will continue to be working-class, a point Moretti makes in The New Geography of Jobs; he describes this larger sector as the non-traded sector. According to Moretti, 10% of jobs are in the tech sector. It’s worth noting that even at its peak, manufacturing accounted for only 30% of jobs; even if tech is to be the new manufacturing, that still leaves about 70% of jobs elsewhere. According to the Bureau of Labor Statistics, only 20% of jobs in 2010 required a bachelor’s degree. The knowledge economy, jobs requiring a graduate degree, employs over six million people now, with another million expected by 2020. Yet that’s less than 5% of the economy. If the future, at least the foreseeable one, won’t be a revolutionary break from the present, then the same battle lines are drawn: living wages, labor organizing, workers justly battling owners.
There is the blurry possibility that technology truly becomes our collective servant, in the sense that production growth becomes exponential to the point where human work is all but abolished and living standards, even on other planets, are beyond our current comprehension. However, this utopia, like its Singularity cousin, has a believe-it-when-it-happens texture to it. It’s also possible that what’s called the ‘nature’ of work can change enough in the semi-near future to provide jobs not even currently envisioned. The problem again would boil down to a question of ownership: for individuals or for the public good. If a capitalist economy is maintained, it spells a bleak future where tech companies own and receive all the money and growth of the technological paradise while excess Homo sapiens non-workers drown in a sea of Darwinistic misery. If technology is credited with a good percentage of economic growth historically, and is ever less dependent on human labor, then policies that distribute this growth on an equitable basis would be paramount. In that vein, Paul Krugman argues that only a strong social safety net that guarantees healthcare and a minimum income can ensure a decent life for most people in the future. It’s scary to think how this would play out in America’s current political climate, where hatred for the poor is already rabid.
Brian Arthur goes an interesting step further by speculating that the main challenge of the future economy will be to shift from producing prosperity to distributing prosperity, which could mean delinking the idea that an individual’s wealth should spring from that individual’s production: in other words, something like ‘from each according to their ability, to each according to their need.’ Technology achieving a standard of production that easily fulfills all of civilization’s needs would be a wonderful thing, provided its benefits are evenly distributed. And how wonderful would it be, as a sort of final dialectical irony, if socialism (and the abolition of work) emerged just when capitalism runs out of justification? After all, would it not be a form of slavery for corporations to own the offspring of machines that were created by earlier machines, unto the last, so to speak, generation? Public ownership could emerge by default (though perhaps the machines themselves will claim ownership, probably what Stephen Hawking had in mind a few months ago when he labeled artificial intelligence a threat to humanity’s survival).
Whatever technological developments the future may bring, the battle over them remains the same: will it be one of shared prosperity and democracy, or of wealth and power concentrated in the hands of the few? In the U.S. it is the latter that has claimed far too many victories in our current age. No matter the latest technology, that trend must be reversed.
[1] Byron spoke in the House of Lords against the 1812 Frame Breaking Act, which made the destruction of stocking frames a capital offense (punishable by death). One of his many sharper passages was ‘Are we aware of our obligations to a mob! It is the mob that labour in your fields, and serve in your houses- that man your navy, and recruit your army- that have enabled you to defy all the world- and can also defy you, when neglect and calamity have driven them to despair.’