But everyone is wrong. In fact, the pace of change isn't notably faster than in times past and most “revolutionary” technologies are just refinements of past breakthroughs. Using dozens of entertaining examples, high-tech industry veteran Bob Seidensticker debunks nine technology myths, proving that:
- The rate of change is not exponential (myth #1),
- Important new products don't arrive any faster than they ever have (myth #3),
- The Internet doesn't really change everything (myth #8), and much more.
Future Hype exposes the hidden costs of technology and will help both consumers and businesses take a shrewder position when the next “essential” innovation is trotted out.
- Convincingly debunks the myth that technology today is changing at an unprecedented rate and totally transforming modern society
- Gives readers the perspective to look skeptically at claims that every new technology is a world-changing breakthrough they must have
- Written by a twenty-five-year veteran of the high-tech industry who spent eight years as a Microsoft project manager
“This clear-eyed, level-headed, historically sophisticated view of the realities of technological change by a knowledgeable insider will be absorbing reading for early adopters, neo-Luddites, and everyone in between.”
—Edward Tenner, author of Why Things Bite Back: Technology and the Revenge of Unintended Consequences
“Future Hype is a great antidote to the familiar boosterism about unprecedented technological growth. Seidensticker puts technological change into historical perspective, which enables us to measure progress against what we have known, rather than against what we are promised.”
—Henry Petroski, Aleksandar S. Vesic Professor of Civil Engineering and Professor of History, Duke University, and author of Pushing the Limits
“… a wonderful compendium of the way the world works, and not just the way it should work. Future Hype reveals when we should be optimistic and when we should be skeptical…. An important contribution.”
—Michael Shermer, publisher of Skeptic magazine and the “Skeptic” columnist for Scientific American
Introduction: Leveling the Exponential Curve
THE WAYS WE SEE TECHNOLOGY INCORRECTLY
The Birthday-Present Syndrome
The Perils of Prediction
The Unintended Wager
If It Ain't Broke, Be Grateful
More Powerful Than a Locomotive
Faster Than a Speeding Bullet
Leap Tall Buildings in a Single Bound
Corrective Lenses
THE MORE THINGS CHANGE . . .
For Better or For Worse
Playing with Matches
Fear and Anxiety
Technologies That Touch Us
Innovation Stimulation
What's Mine Is Mine
Conclusion: Vaccinate Against the Hype
Notes
Acknowledgments
About the Author
Index
1 The Birthday-Present Syndrome
THE WRAPPING PAPER FLIES as Junior tears into his present from Grandma. It’s the toy he’s been hoping for, and he’s delighted. All other possessions are forgotten as he begins to play with his new toy that will, in its turn, be ignored in favor of the next new thing.
When it comes to technology, most of us are like that kid with his birthday present—we are interested in the cool toy of the moment, and older technologies are only noticed in their absence. The result is that we don’t see technology clearly; we don’t soberly weigh today’s new developments against the technologies we already have. The value of today’s technology is inflated, and some revaluation is needed to restore a balance.
This chapter is an exercise in seeing more clearly the birthday-present syndrome, a seemingly permanent feature of our culture. It will also explore our uncomfortable coexistence with machines throughout the centuries. Society’s relationship with technology is like a romance in which each person sees attractive traits in the other, but with familiarity come some unpleasant surprises. Maybe she chews with her mouth open or has disagreeable political opinions. Maybe he’s a slob or has antiquated views of a woman’s role in society. Similarly, a technology is never pure and innocent, incorruptible in every one of its applications. We find bad traits along with the good; we adopt a technology hoping we will be pleased with the balance.
Good surprises can be difficult as well. We want to off-load tasks to machines, but egos can get bruised in the process. Does this new ability encroach on humanity? Are we reduced in value somehow by the success of our machines? Expect more of these kinds of questions as computers are increasingly able to do things that require thought; let us not forget, however, that this friction between society and technology has been around for a long time.
Technology Good and Bad
Humankind is either on its way to the stars
or hurtling out of a high-rise window to the street
and mumbling, “So far, so good.”
—EDWARD TENNER,
Why Things Bite Back (1996)
An ancient Chinese story tells of a farmer who owns a famous racehorse. One day, the horse runs away. His friends commiserate with him, but the farmer replies, “This isn’t necessarily a bad thing.” Soon, his horse returns and brings another fine-looking horse. His friends congratulate him, but the farmer observes, “This isn’t necessarily a good thing.” Later, the farmer’s son is thrown while trying to tame the new horse. He breaks his leg, which leaves him lame. The farmer’s friends offer condolences, but he responds, “This isn’t necessarily a bad thing.” Sure enough, war breaks out and the son’s lameness prevents him from being conscripted. Though many neighbors’ sons are killed in the fighting, the farmer’s son is spared. Sometimes it’s hard to tell what’s a good thing and what’s a bad thing.
But perhaps we can be certain in some cases. For example, we can all agree that the insecticide DDT is bad. The landmark book Silent Spring, by Rachel Carson (1962), made DDT’s environmental crimes common knowledge. And yet DDT’s discoverer won a Nobel Prize for his work in 1948, just six years after its properties were understood, and DDT was credited with saving five million lives by 1950. In the 1950s and ’60s, DDT cut malaria in India to fifteen thousand cases per year, down from one hundred million. Given this remarkable progress, worldwide eradication of malaria seemed a strong possibility. Despite a growing understanding of the problems of resistance, environmental damage, and impact on human health, abandoning this insecticide was not the obvious course. Malaria kills millions of people per year even today, and DDT is still used in countries holding almost half of the world’s population, including China, India, and Mexico. So, what’s the moral? Is DDT a killer or a lifesaver? We could ask the same about antibiotics and vaccines—they mercifully saved lives and yet threatened widespread famine by encouraging dramatic overpopulation.
Kranzberg’s First Law helps to clarify this situation: technology is neither good nor bad—nor is it neutral. At the risk of spoiling its Zenlike nature, let me propose an interpretation: a technology isn’t inherently good or bad, but it will have an impact, which is why it’s not neutral. Almost every applied technology has impact, and that impact will have a good side and a bad side. When you think of transportation technologies, for example, do you think of how they enable a delightful vacation or get the family back together during the holidays—or do you think of traffic jams and pollution? Are books a source of wisdom and spirituality or a way to distribute pornography and hate? Do you applaud medical technology for curing plagues or deplore transportation technology for spreading them? Does encrypted e-mail keep honest people safe from criminals or criminals safe from the police? Are plastics durable conveniences or everlasting pollutants? Counterfeiting comes with money, obscene phone calls come with the telephone, spam comes with e-mail, and pornography comes with the Internet. Every law creates an outlaw.
Opposites create each other. You can’t have an up without a down, a magnetic north pole without a south pole, or a yin without its opposite yang. Providing a technology for a good use opens the door for the bad. Wernher von Braun observed, “Science does not have a moral dimension. It is like a knife. If you give it to a surgeon or a murderer, each will use it differently.” The same could be said for applications of technology.
The dilemma of finding and maximizing technology’s gifts while minimizing its harm is especially important today, but it has plagued society for centuries. Today we worry about junk on the Internet; yesterday we worried about junk on TV (and before that, junk through radio and film and books and newspapers). Today we worry about terrorists using bioengineering techniques to make new diseases; yesterday we worried about the telegraph and railroad being used to conduct the Civil War. Today, computer pioneer Bill Joy has argued that because of the downsides of possible accidents, we should deliberately avoid certain areas of research; yesterday Leonardo da Vinci destroyed plans for devices like the submarine, anticipating their use as weapons.
Man Versus Machine Contests
Now the man that invented the steam drill
He thought he was mighty fine.
But John Henry drove fifteen feet
The steam drill only made nine.
—“John Henry” (folk song)
One particular kind of social friction caused by technology occurs when machines perform tasks that have traditionally been done by human beings. This is like a junior employee taking over the menial parts of your job—it’s okay at first, but where will it end? Will it eventually cost you your job? Society has long been uneasy with machines encroaching on human turf, not just because of job loss, but also because of a vague loss of dignity. Could machines get uppity and forget their place?
The most direct example of this friction is the one-on-one turf battle—may the best man (or machine) win. Consider the story of John Henry. Though subsequently mythologized, he was a real person who worked on the Big Bend railroad tunnel in West Virginia in 1870. As a steel driver, he hammered long drills into the rock face to make holes for explosives. A mechanical drill had recently replaced steel drivers at other tunnels, and the drill manufacturer wanted it used on this project. Would it perform any better than men on the type of rock at Big Bend? To find out, a contest was proposed that pitted John Henry, the team’s best driver, against the steam drill. John Henry defeated the steam drill but died in the process, thus celebrating the heroism of humanity while foreshadowing the ultimate futility of the man versus machine contest for physical tasks.
Perhaps the most prominent recent man versus machine contest was the defeat of chess grandmaster Garry Kasparov by IBM’s Deep Blue computer. A computer as world chess champion had been “ten years away” since the 1950s, but not until 1997 did those ten years finally pass. After the Deep Blue victory, the press reported much soul-searching, as if humanity had been dealt a major blow. However, the fact that Deep Blue didn’t celebrate its victory—and couldn’t—underscores that it is a world-class chess player but nothing more. The original 1949 paper outlining the basics of computer chess noted that if human opponents didn’t like how their game was progressing, they could always pull the plug.
To better understand the gulf that computers must still cross to be comparable to a human, imagine pitting a computer against a child rather than a chess champion. The computer’s goal would be to match the child’s understanding of the world. Some questions could test simple facts about the world (the sky is blue, water is wet, chairs are often made of wood), and others could examine common sense (What happens if you hit a pot with a spoon? What kinds of chairs burn? Can you stand on a table?). The ultimate test of this sort is the Turing Test, proposed by British mathematician Alan Turing in 1950, in which an observer communicates with two unseen entities, a computer and a human being. If the observer can’t tell the difference, the computer has fooled the observer and passed the test. Present computer technology is a long way from passing this test, one far harder than a chess match.
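The protocol itself is simple enough to caricature. Here is a toy sketch of the imitation game in Python; the two respondent functions, the single canned question, and the random judge are invented stand-ins, and a real test would involve free-form conversation over many rounds.

```python
import random

def human_answer(question: str) -> str:
    # Stand-in for the human respondent.
    return "A wooden chair will burn; a metal one just gets hot."

def machine_answer(question: str) -> str:
    # Stand-in for the machine respondent.
    return "Chairs constructed primarily of wood are combustible."

def imitation_game(judge) -> bool:
    """Play one round; return True if the machine fooled the judge."""
    respondents = [("human", human_answer), ("machine", machine_answer)]
    random.shuffle(respondents)  # hide who is behind each label
    question = "What kinds of chairs burn?"
    transcript = {label: fn(question)
                  for label, (_, fn) in zip("AB", respondents)}
    identity = {label: kind
                for label, (kind, _) in zip("AB", respondents)}
    guess = judge(transcript)  # the judge names the human: "A" or "B"
    return identity[guess] != "human"

# A judge who guesses at random is fooled half the time, which is why
# a meaningful test needs many rounds and sharply probing questions.
rounds = 1000
fooled = sum(imitation_game(lambda t: random.choice("AB"))
             for _ in range(rounds))
print(f"machine fooled the judge in {fooled} of {rounds} rounds")
```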
Acting Like a Human
That this toil of pure intelligence …
can possibly be performed by an unconscious machine
is a proposition which is received with incredulity.
—COLUMBIA UNIVERSITY PRESIDENT,
commenting on the adding machine (circa 1820)
Sometimes machines are deliberately designed to mimic how human beings work; a better approach is usually to discard those constraints and create a design that takes advantage of what machines do best. The history of printing gives us a good example. By the early 1800s, steam presses printing thousands of pages per hour were advancing the printing revolution Gutenberg began in 1455. The slow process of typesetting, however, remained a bottleneck. Even after text could be composed on a typewriter by the 1870s, each tiny metal character of type still had to be hand-placed by skilled typesetters for printing. Not unlike programmers in the 1980s and ’90s, fast typesetters could move between jobs at will and demand excellent wages. The best typesetters were celebrities, and races became popular, attracting large audiences as if they were sporting events. Some competitors could set five thousand characters of justified and corrected text in an hour—better than one character per second. This was a tough job for machines to duplicate. Should they mimic the steps humans used or try a machine-specific approach?
By the 1880s, first-generation mechanical typesetters were in use. Mark Twain was interested in this new technology and invested in the Paige typesetter, backing it against its primary competitor, the Mergenthaler Linotype machine. The Paige was faster and had more capabilities. However, the complicated machine contained eighteen thousand parts and weighed three tons, making it more expensive and less reliable. As the market battle wore on, Twain put more and more money into the project, but it eventually failed in 1894, largely because the machine deliberately mimicked how human typesetters worked instead of taking advantage of the unique ways machines can operate. For example, the Paige machine re-sorted the type from completed print jobs back into bins to be reused. This impressive ability made it compatible with the manual process but very complex as well. The Linotype neatly cut the Gordian knot by simply melting old type and recasting it. After investing a quarter of a million dollars in the project, Twain was bankrupt. He spent the next four years lecturing to repay his debts. (Twain’s conclusion: never invest when you can’t afford to and never invest when you can.)
As with typesetting machines, airplanes also flirted with animal inspiration in their early years. Flapping-wing airplane failures, however, soon yielded to propeller-driven successes. Airplanes don’t fly like birds, and submarines don’t swim like fish. Wagons roll rather than walk, and a recorded voice isn’t replayed through an artificial mouth. A washing machine doesn’t use a washboard, and a dishwasher moves the water and not the dishes. Asking whether a computer can think or wonder is like asking whether a car can trot or gallop—a computer has its own way of operating, which may be quite different from the human approach. The most efficient machines usually don’t mimic how humans or animals work.
We can approach the question of thinking another way: Does a tree falling in a forest with no one to hear it make a sound? That depends on how sound is defined. Similarly, whether a computer duplicating a particular human skill is thinking or not depends on how think is defined. You could say that a computer chess champion doesn’t think because it doesn’t operate the way people do; or you could say that it thinks in its own way because it obviously gets the job done. To take another example, ELIZA was a famous 1965 computer program that played the role of a psychiatrist. It was so convincing that some users earnestly poured out their problems to the imagined intelligence, even though replicating ELIZA is simple enough to be assigned as homework in a college artificial intelligence course. Marvin Minsky described artificial intelligence as “making machines do things that would be considered intelligent if done by people.”
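ELIZA’s trick is easier to see in code. Below is a minimal sketch of the keyword-and-reflection approach that ELIZA-style programs use; the patterns and canned replies here are invented for illustration and are not Weizenbaum’s originals.

```python
import random
import re

# A minimal ELIZA-style responder (illustrative sketch, not
# Weizenbaum's program): find a keyword pattern in the user's
# sentence, "reflect" first-person words into second person,
# and drop the result into a canned psychiatrist template.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

FALLBACKS = ["Please go on.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person equivalents."""
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(utterance: str) -> str:
    """Answer with the first matching rule, else a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

print(respond("I feel ignored by my computer"))
# -> Why do you feel ignored by your computer?
```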
Is the Turing Test still the ultimate test of cognition? Or is mimicking a human irrelevant as long as the computer gets the job done? In the movie 2001, we see the computer HAL pass a second-generation Turing Test: not only is he convincingly human in conversation, he also becomes paranoid and homicidal. Perhaps acting like a human isn’t such a worthy goal after all.
The gap separating computers and human beings is one of appearance as well as intelligence. The computer as an anthropomorphic robot that travels on two legs, manipulates things with fingers, and has the same approximate shape as a human has a long history, predating the low-budget sci-fi movies of the 1950s. The Wizard of Oz novel series introduced the robot Tik-Tok around 1910, and an early robot appeared in the movie Metropolis (1927). The word robot was introduced into English from a Czech play in 1921. Fascination with smart machines extends back at least to the automaton orchestra built for a Chinese emperor over two thousand years ago.
One of the most famous historical automatons was actually a deception. The chess-playing “Turk” was unveiled in 1770. It toured Europe and defeated most opponents, including Benjamin Franklin. Charles Babbage’s bout with the Turk stimulated his interest in computing machines. The Turk continued playing for decades, and few suspected its secret: a chess master hidden inside who controlled the turban-wearing mannequin. Elektro, “the amazing Westinghouse Moto-Man,” was a seven-foot-tall robot exhibited at the 1939 New York World’s Fair. It, too, was a deception: a hidden operator controlled Elektro’s speech. In a decision that seems especially dated now, its creators thought that the ability to smoke a cigarette added to its humanness.
Robots’ real success so far has been in factories where precision and repeatability are important and appearance and adaptability are not. Machines work best when we let them be themselves. Around the house, the science fiction robot remains a dream, and yet telephone answering machines, microwave ovens, and other appliances have already encroached on the turf of the home robot.
The Ever-Moving Goal
“A slow sort of country!” said the Queen.
“Now, here, you see, it takes all the running you can do
to keep in the same place. If you want to get somewhere else,
you must run at least twice as fast as that!”
—LEWIS CARROLL, Through the Looking-Glass (1871)
Ask a magician to reveal how a trick is done. If you aren’t told that it’s a professional secret, you’ll probably hear, “Actually, you really don’t want to know.” Knowing the secret eliminates the mystery and ruins the fun. Is fire-walking a mysterious example of mind over matter, or is it simple physics—that charcoal doesn’t conduct heat well, so quickly moving feet don’t get burned? (And which answer makes the more interesting story?) Similarly, the idea of a machine able to beat a chess grandmaster was magical and exciting, at least until it was achieved. Now we see it simply as an impressive feat but one without any impact on daily life. After all, as we now know, a dedicated chess computer can only play chess.
When you’re told how a feat of illusion works, magic is replaced by mechanics and the fun is gone. When a computer reaches a human intelligence metric, it seems to show human-like qualities—that is, until you look behind the curtain and see very nonhuman algorithms and hardware.
A future technology milestone (the ability to see or to understand speech, for example) is sometimes considered proof of some aspect of humanity. But technology bears the burden that once that milestone is reached, it becomes a parlor trick. This new capability may well be useful, but it’s no threat to humanity. An “electronic brain” from the 1940s performing thousands of additions per second certainly achieved a superhuman feat, yet a computer performing billions of additions per second today is not even noteworthy. Construction equipment that is as capable as hundreds of workers? Boring. Enormous factories that shape massive metal beams or make chemicals in ways humans could never duplicate? Ho-hum. Robotic assembly-line workers? Ancient history. Chess champion of the world? We thought that would be impressive, but have changed our minds—sorry. That which is “human” is redefined as machines approach it, like the mechanical rabbit that is always just out of reach of the racing greyhounds. For technology, the race is like the Red Queen said: “It takes all the running you can do to keep in the same place.”
Perhaps that’s the most important difference between man and machine. Society changes and improves, setting new goals once old ones are reached. But machines do what they’re designed to do and no more. At least for now, it takes man to invent the next machine.
Technological Myopia: Revisiting the Birthday-Present Syndrome
Anything that was in the world when you were born
is normal and natural.
Anything invented between when you were 15 and 35
is new and revolutionary and exciting,
and you’ll probably get a career in it.
Anything invented after you’re 35
is against the natural order of things.
—DOUGLAS ADAMS
One of the world’s first escalators was installed in Harrods department store in London in 1898, and brandy and smelling salts were available to passengers made faint by the ordeal. It is hard for us to put ourselves in the places of people seeing for the first time, as adults, technologies that we have grown up with.
Try to remember the first time you used various technologies. For example, I remember the first time I flew on a Boeing 747, the first time I used a microwave oven, and the first time I used a mainframe computer. Other firsts for me: using an ATM to get cash in another state; participating in a videoconference call; and using a computer, a cell phone, and a Web browser. I remember the first time I saw a CD-ROM as the prize inside a cereal box.
By contrast, I do not recall the first time I rode in a car, watched television, read a book, used an electrical appliance, or made a telephone call. By the time I was born, these technologies had become unremarkable parts of society.
My kids will have a different list of unremarkable technologies. They have grown up with compact discs, personal computers, videotape, and cellular phones. For them, listening to music from a CD is commonplace but from a vinyl record is remarkable; I remember when it was the reverse. Similarly, flying in a jet plane for me is commonplace, but in a propeller-driven plane is noteworthy; my parents remember when it was the reverse. My grandparents knew a time when driving in a car was exciting, but horse-drawn transportation was not.
Joel Birnbaum observed: “Only people born before a technology becomes pervasive think of it as a technology; all others consider it part of the environment.” This technological myopia—the tendency to see the new out of proportion to its impact and to discount the old—helps explain the pervasive and distorted view of technology in our society today. For a similar viewpoint, consider Saul Steinberg’s well-known “A View of the World from Ninth Avenue.” This New Yorker cover from 1976 shows several carefully drawn New York City streets in the foreground, with detail quickly dropping off in the distance. Beyond the Hudson River is a featureless and unimportant landscape composed of the rest of the United States, the Pacific Ocean, and Asia. In a similar way, we clearly see the changes caused by the PC, the Internet, and other recent technology, but older technologies, such as the printing press, train, and telegraph, fade into the distance. (By the way, I use “PC” to refer to any personal computer, not just the IBM-compatible kind.)
For a different perspective, let’s suppose we learned to communicate with dolphins. We could eventually ask, “So, what’s it like to be wet all the time?” The dolphin might wonder what we are talking about. We understand wet because we understand dry. A dolphin wouldn’t notice wetness even though it is constantly wet—in fact, because it is constantly wet. Similarly, we are so immersed in our technology that trying to evaluate today’s society from the vantage point of today is inherently difficult, like any type of self-analysis, and it’s not surprising that the common perception is off the mark.
We not only dismiss older technologies, we’ve also become accustomed to some rather startling consequences, things that might shock an outsider. For example, there are more than forty thousand car-related deaths in the United States annually. This is seen as an important but unremarkable fact of modern life. By contrast, when an airplane crashes and kills forty people, it becomes front-page news. This is the expected and accepted contrasted with the unexpected and surprising. Only the new is news.
In the Monty Python movie Life of Brian, there is a debate among the revolutionaries about the impact of Roman rule on Palestine. It sounds similar to our own debate about the relative importance of old and new technology. Here is a version of that technology debate, in Life-of-Brian style.
Anything that can be automatically done for you can be automatically done to you.
—DAVID WYLAND’S Law of Automation