How People Reacted to the Greatest Inventions in History: From the Printing Press to Generative AI

Since the dawn of history, humans have been reluctant to embrace new inventions, from the printing press to modern generative AI. But eventually, they come around.

In this post, we’ll explore some of the greatest inventions of all time that have, one by one, reshaped the way we consume information, learn, and communicate with each other. Spoiler: people did not always respond kindly to them at first.

The Printing Press 

The printing press was a game-changer that completely transformed the way information was disseminated. When Johannes Gutenberg invented the movable type printing press in the 15th century, it rocked the world.

Before the printing press, books had to be copied by hand, which was slow and expensive. The printing press increased production capacity from 5 to 25 pages per hour, allowing books to be published on a larger scale and made available to a wider audience.

The Gutenberg Bible was the first major book to be printed on the press.

Reactions

The printing press initially met with resistance from some groups. Professional copyists, who had previously enjoyed their status and prestige, saw the printing press as a threat to their livelihood and feared that it would put them out of work.

In 1492, Johannes Trithemius, a German abbot, wrote in his treatise In Praise of Scribes (De Laude Scriptorum):

“Brothers, nobody should say or think: ‘What is the sense of bothering with copying by hand when the art of printing has brought to light so many important books; a huge library can be acquired inexpensively.’ I tell you, the man who says this only tries to conceal his own laziness.”

“Yes, many books are now available in print but no matter how many books will be printed, there will always be some left unprinted and worth copying. No one will ever be able to locate and buy all printed books.”

History of Information. Translated in Tribble and Trubek eds., Writing Material: Readings from Plato to the Digital Age, 2003.

The Church was also concerned about the spread of unregulated and potentially heretical ideas through printed materials, a worry that served as both a cause of and a pretext for the emergence of censorship. Consider what Pope Alexander VI wrote in his 1501 bull Inter Multiplices:

“The art of printing can be of great service in so far as it furthers the circulation of useful and tested books; but it can bring about serious evils if it is permitted to widen the influence of pernicious works. It will, therefore, be necessary to maintain full control over the printers so that they may be prevented from bringing into print writings which are antagonistic to the Catholic faith, or which are likely to cause trouble to believers.”

Encyclopedia of Censorship.

It took about 50 years for the printing press to become widely accepted, and it created many new jobs in the publishing industry. By the end of the 15th century, fifty years after Gutenberg’s invention, printers in Western Europe had produced more than 20 million copies, and by the next century, the number had reached 150 to 200 million copies. However, the spread of printed books didn’t spare them from further criticism. In 1680, the polymath Gottfried Wilhelm Leibniz expressed his fear that books might lead society into degradation:

“The horrible mass of books that keeps on growing might lead to a fall back into barbarism.”

Gottfried Wilhelm Leibniz in his letter to Louis XIV, 1680.

The Telephone 

The telephone drastically changed the way people communicate over long distances. Fun fact: Alexander Graham Bell wasn’t actually the first one to come up with the idea. A German scientist named Philipp Reis had built a device that transmitted musical tones way back in 1861.

But it was Bell’s vision of transmitting sound over wires that was patented in 1876. Soon the telephone was in use throughout the U.S. and Europe. By 1900, there were nearly one million telephones worldwide, and by 1910, that number had skyrocketed to 5.8 million, according to Elon University. It took another 50 years to reach 50 million users.

At that time, telephones were simple, requiring a direct wire connection. But as time went on, technology improved, and people figured out how to send phone signals over longer distances using wires and even wirelessly. Today, phones are these amazing devices with text messaging, video calling, and Internet access. They’re an essential part of our lives, connecting people around the world and helping businesses and organizations operate on a global scale.

Reactions

Some critics feared that the telephone would cause a breakdown in social interaction and undermine face-to-face communication. A few months before Alexander Graham Bell unveiled his device, The New York Times published a note accusing Philipp Reis of “deliberate malice” over his telephone-like device. The fear was that if people in America got telephones, they would no longer show up at national celebrations, exhibitions, and concert halls, but would retreat to their rooms and listen “to the trembling telephone.”

“But what if Prof. REUSS, with deliberate malice, and at the instigation of the European despots, should distribute telephones to all the cities of America, and thus enable their citizens to listen to overture, oration, poem, and Declaration, without the trouble and expense of going to Philadelphia? What possible success would in such case attend an exhibition to which nobody but Philadelphians with free passes would come? There is so far nothing to indicate that this is Prof. REUSS’ dark design, but as all foreign despots, from the Queen, in the Tower of London, to the Prince of Monaco, in the backroom of his gambling palace, are notoriously and constantly tearing their hair as they hear of BELKNAP and PENDLETON, and note the progress and prosperity of our nation, it is not impossible that they have the infamous scheme of attacking the Centennial Celebration with telephones.”

The New York Times, March 22, 1876.

Other telephone critics worried about privacy or thought the devices would be too expensive, impractical, or even dangerous.

“Father objected to the telephone strenuously. He distrusted machines of all kinds; they weren’t human, they popped or exploded and made him nervous. And the telephone to him seemed especially dangerous. He would shout into the transmitter and thought every call was meant for him. Very often he would answer when it was for someone else, and tell the person he didn’t want to speak to them.”

The New Yorker, May 13, 1933.

The first direct-dialed long-distance call was made in 1951 between New Jersey and California. By the 1970s, long-distance calls were commonplace. In 1979, a Bell System commercial debuted on Johnny Carson’s show, urging people to “reach out and touch someone”.

Here’s how one of the characters in Stephen King’s novel describes direct-dialed long-distance calling:

“I dialed it direct. Did you know you could do that now? Yes. It is a great convenience. You dial one, the area code, the number. Eleven digits and you can be in touch with any place in the country. It is an amazing thing. In some ways a frightening thing.”

Stephen King, The Dead Zone, 1979.

The Personal Computer

Computers have come a long way since Charles Babbage’s concepts in the 19th century. In the early 20th century, analog computers dominated the scene, using analog signals for calculations. In 1941, Konrad Zuse completed the Z3, the world’s first fully automatic, programmable digital computer.

Early computers were large, expensive, and complex. After World War II, technological advances led to faster and more versatile electronic devices. The 1960s saw the rise of minicomputers, which were smaller and more affordable, although they still carried a hefty price tag of $25,000, the equivalent of $174,000 in 2021.

The real change came with the advent of personal computers in the 1970s and 1980s. The mass production of microprocessors made them accessible to individuals and small businesses. By 1982, more than 1.5 million computers had been shipped worldwide, with more than one million in the U.S. alone, according to BYTE Magazine.

Reactions

When it came to computers, the panic was mostly about potential mass job loss, coupled with the sheer confusion of not knowing how to make the machines work.

Use of the term “computerphobia” peaked in the 1980s, just as computers entered the consumer market.

Some experts blamed the media and journalists for the phenomenon of computerphobia. In his book The Age of Communication, linguist William Luts argues that the media’s failure to explain the technology realistically fostered a fear of interacting with machines.

“In the Fifties, when computers and other devices for automating work were coming in, there was an almost hysterical belief that they would sharply increase unemployment. Thousands of economists and social historians were in a position to know better. They not only failed to reach the general public with a more realistic view of automation’s impact on employment, they did not even get the message to the rest of the academic community. Even though U.S. employment has increased 36 percent since 1950, millions of people, including many of the best educated, are still walking around with bad cases of computerphobia.

[…] Newspapers and television have made little effort to explain the economic and social meaning of the computer. Such a subject simply does not fit their working definitions of news. But if in the years ahead there occurs, for some reason unconnected with computers, a sharp and prolonged rise in unemployment, then the press will feel obliged to carry the mouthings of any demagogue who blames computers for the shortage of jobs. A lot of Americans would fall for this because education and journalism […]”

William Luts, The Age of Communication, archive.org

Even experts and journalists, who are usually expected to be advocates of technology, were sometimes skeptical. They didn’t believe computers would become part of everyday life and dismissed home computers as a fad. One of them was Ken Olsen, co-founder of one of America’s largest computer companies in the 1980s.

“There is no reason for any individual to have a computer in his home.”

Ken Olsen, CEO of Digital Equipment Corporation, quoted at Bill Gates: The Path to the Future

Olsen himself claimed that he was referring to smart homes rather than personal computers and that his quote had been taken out of context. Even so, the prediction proved inaccurate, as smart homes eventually became part of our reality.

Ironically, even the gaming industry felt a sense of trepidation when personal computers first hit the mass market. In his article for Compute!’s Apple Applications magazine, Dan Gutman chronicled the fall and rise of the gaming industry in the 1980s as computers proliferated. In the early 1980s, millions of people played games on arcade machines installed in laundromats and hairdressing salons. This was when Pac-Man took America by storm and people spent more money on video games than on baseball, football, and basketball combined. With the personal computer boom, experts declared the video game industry dead. The author recalls:

“People began to say to themselves, “Why should I buy a video game system when I can buy a computer that will play games and do so much more?” On October 17, 1983, the New York Times announced, “Video Games Industry Comes Down to Earth.”

That January, Time put the personal computer on its cover and called it Machine of the Year. Video games were officially dead and computers were hot. In our October 1983 issue, we announced a change in the name of the magazine from Video Games Player to Computer Games. The Golden Joystick Awards came to be called The Golden Floppies. I noticed that the word games became a dirty word in the press. We started replacing it with simulations as often as possible.”

Dan Gutman, Compute!’s Apple Applications, December 1987

It’s worth noting that the early days of computing were not all fear and skepticism. If you go to the archives, you will find numerous stories full of genuine curiosity, hope, and excitement. Check out the BBC’s Tomorrow’s World series or David Hoffman’s interview with a computer store salesman about how he saw the future of computing.

Generative AI

Now let’s fast forward to the present day and take a look at the invention of generative AI. Here are some of the earliest examples of generative AI: 

  • In 1961, scientists John Larry Kelly, Jr. and Louis Gerstman used an IBM 704 computer to synthesize speech, programming it to “sing” the melody “Daisy Bell.”
  • In the mid-1960s, Joseph Weizenbaum developed ELIZA, a natural language processing program that simulated conversations with humans and served as a precursor to modern chatbots.
  • In 1965, inventor Ray Kurzweil performed a computer-generated piano piece on the show I’ve Got a Secret, demonstrating an algorithm that composed original melodies.

These and other developments laid the groundwork for further advances in generative AI. In the 2010s, deep learning techniques and neural networks paved the way for more sophisticated systems. Apple introduced Siri in 2011, marking one of the first implementations of virtual assistants. OpenAI’s release of GPT in 2018 demonstrated the power of unsupervised learning. Today, there are nearly 30 large language models developed by a variety of companies, including OpenAI, Google, and Meta. In 2022, significant advances were made in image synthesis, opening up limitless possibilities for content creation.

The market for generative AI has grown rapidly, with more than 250 startups in various categories, according to CBInsights.

Reactions

Like previous inventions, generative AI has been met with excitement, skepticism, and even fear.

The media tends to embrace the narrative of AI’s negative outcomes. For example, they talk a lot about the jobs likely to be affected by AI, while rarely mentioning the roles expected to evolve to make AI systems work. Meanwhile, entire communities have risen up against AI: from ArtStation contributors protesting the inclusion of AI content on the site, to Twitter users accusing Lensa of stealing artworks to train its algorithm, to some of the most prominent figures in the tech world, including Elon Musk, signing an open letter calling for a pause in the development of powerful AI tools for at least six months.

But the biggest fear about AI is that the technology is potentially capable of destroying the human species. In this thread on Reddit, users are imagining just how that might come to pass. 

More pragmatically, there are concerns and discussions about the ethical implications of creating artificial intelligence and establishing rules in this area.

“I’m lucky to have been involved with the PC revolution and the Internet revolution. I’m just as excited about this moment. This new technology can help people everywhere improve their lives. At the same time, the world needs to establish the rules of the road so that any downsides of artificial intelligence are far outweighed by its benefits, and so that everyone can enjoy those benefits no matter where they live or how much money they have. The Age of AI is filled with opportunities and responsibilities.”

Bill Gates, GatesNotes, March 21, 2023


All in all, throughout history, people have been both excited and terrified by new inventions. Whether it’s the printing press, the telephone, the computer, or generative AI, each new technology has faced its own set of challenges and uncertainties. However, as we look back on these inventions today, it’s clear that they have all played a significant role in shaping the world we live in.
