A Short History of Artificial Intelligence

During a typical day you might use facial or fingerprint ID to unlock your smartphone, ask Siri to text your friend, rely on your car’s sensors to avoid an accident, and interact with a chatbot while online shopping. All of these experiences are fueled by artificial intelligence.

AI is everywhere, supporting our activities, productivity, and creativity. And it’s getting more advanced by the day.

But how did AI become so prevalent, and when did computer scientists first begin to develop the technologies that define our world today? In this article, we’ll explore a brief history of AI, from ideation to early breakthroughs to today’s remarkable achievements.

Early ideas about artificial intelligence

People have been intrigued by artificial intelligence for centuries, even if they didn’t use that term.

Greek mythology references Talos, a giant bronze automaton that protected the island of Crete. Jonathan Swift’s 1726 novel Gulliver’s Travels describes a word machine called “The Engine,” capable of producing poetry and prose. In 1763, Thomas Bayes’s theorem of probability was published posthumously, giving the world a mathematical rule for updating beliefs from known data (shown below), one that still underpins modern machine learning. And in 1920, Czech playwright Karel Čapek coined the term “robot” in his play Rossum’s Universal Robots.
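
For the curious, Bayes’ rule itself fits on one line. Given a hypothesis H and observed data D, it says:

$$P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}$$

In words: how believable the hypothesis is after seeing the data depends on how well it explains the data, weighted by how plausible it was to begin with.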

These are just a few of the early examples of people toying with the idea of infusing intelligence into non-humans. From science fiction to mathematics, AI has been budding for a very long time. But AI in the true sense didn’t become a reality until the mid-20th century.

The history of artificial intelligence

Like any technological advancement, the development of artificial intelligence came with setbacks and disappointments. In 1970, AI pioneer Marvin Minsky predicted that a machine with the general intelligence of an average human was only three to eight years away. That obviously didn’t happen, but it’s still astonishing to recognize how far we’ve come in under 100 years.

The Turing Test

In the 1940s, British polymath Alan Turing began pondering whether machines could “think,” or at least imitate a human’s ability to think. In Turing’s view, people make decisions based on available information and reasoning skills. He hypothesized that machines could follow the same process.

Turing’s 1950 paper “Computing Machinery and Intelligence” proposed the Turing Test, also called the Imitation Game. It involved a human evaluator conversing with both a human and a machine through a text interface, without knowing which was which. If the evaluator could not reliably tell the human from the machine, the machine could be said to exhibit intelligent behavior.

The Dartmouth Workshop

While Turing’s breakthrough test laid the foundation for artificial intelligence, the official birth of AI happened several years later in 1956 at The Dartmouth Workshop. Organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, the workshop brought together researchers interested in exploring the possibility of creating intelligent machines.

McCarthy had coined the term “artificial intelligence” in the proposal for the workshop, and the gathering marked the first time the field was recognized as a formal area of study. This historic event provided a platform for early AI pioneers to discuss their ideas, exchange knowledge, and lay the groundwork for future research.

The workshop also showcased the Logic Theorist, developed by Allen Newell, Cliff Shaw, and Herbert Simon. This computer program proved dozens of theorems from Whitehead and Russell’s Principia Mathematica, work that had previously required human mathematicians, taking humanity another step closer to today’s AI.

Symbolic AI and flourishing research

After 1956, research and development in artificial intelligence exploded. Funding came pouring in from private investors and the government, leading to more breakthroughs and excitement.

Newell and Simon built upon their Logic Theorist to create a computer program called The General Problem Solver (GPS), which demonstrated human-like problem solving in areas like logic proofs and math equations. Meanwhile, MIT professor Joseph Weizenbaum developed ELIZA, a natural language processing program that used pattern matching to simulate text-based conversations.

Most of the advancements during this period fell under symbolic AI: programs that manipulated explicit symbols and hand-coded rules of logic to solve problems and mimic human reasoning.
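
To give a flavor of how these rule-based programs worked, here is a minimal ELIZA-style sketch in Python. The patterns and canned responses are invented for illustration; the real ELIZA relied on a much richer script of decomposition and reassembly rules.

```python
import re

# A few hypothetical ELIZA-style rules: a regex pattern plus a response
# template that reflects part of the user's statement back as a question.
RULES = [
    (r"i need (.+)",      "Why do you need {0}?"),
    (r"i am (.+)",        "How long have you been {0}?"),
    (r"my (.+) hates me", "What makes you think your {0} hates you?"),
]

def respond(user_input: str) -> str:
    text = user_input.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."            # fallback when no rule matches

print(respond("I need a vacation"))           # Why do you need a vacation?
print(respond("My boss hates me"))            # What makes you think your boss hates you?
```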

AI Winter

Despite the enthusiasm of the 1960s, the 1970s and early 1980s ushered in a period known as the “AI Winter.” Funding, research, and interest in artificial intelligence largely dried up, and innovation came to a screeching halt. This happened for two main reasons.

First, AI pioneers and computer scientists had overinflated expectations about AI’s immediate potential, leading to promises they couldn’t keep. Second, the hardware of the era simply couldn’t scale AI: processing power and storage were both extremely limited. As we know today, advanced AI requires massive amounts of data and compute, and at the time neither was available.

Expert systems

Thankfully, AI made a resurgence in the late 1980s and into the 90s. Researchers shifted their focus from general intelligence to specialized tools. Expert systems, a continuation of symbolic AI, encoded the knowledge of human specialists as rules within a narrow domain. This meant AI could solve specific, complex problems without needing to understand anything outside that domain.

Many industries began relying on expert systems to streamline their workflows. Financial institutions used them for mortgage qualification and investment management, manufacturing companies used them to troubleshoot equipment, and expert systems were even used in healthcare to suggest diagnoses based on patient symptoms and medical history.
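
Under the hood, an expert system is essentially a knowledge base of if-then rules captured from human specialists, plus an engine that applies them. The sketch below imagines a toy mortgage pre-qualification check in Python; the rules and thresholds are hypothetical and not drawn from any real lender.

```python
# A toy rule-based "expert system" for mortgage pre-qualification.
# Every rule and threshold here is made up for illustration.
RULES = [
    (lambda a: a["credit_score"] < 620,    "Declined: credit score below 620"),
    (lambda a: a["debt_to_income"] > 0.43, "Declined: debt-to-income ratio above 43%"),
    (lambda a: a["down_payment"] < 0.03,   "Declined: down payment below 3%"),
]

def evaluate(applicant: dict) -> str:
    # Apply each expert rule in order; the first one that fires decides the outcome.
    for condition, verdict in RULES:
        if condition(applicant):
            return verdict
    return "Pre-qualified: refer to a human underwriter for final review"

print(evaluate({"credit_score": 700, "debt_to_income": 0.35, "down_payment": 0.10}))
```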

Funding returned, and there was a renewed interest in AI’s potential.

Machine learning

Computer scientists realized that a crucial aspect of human ingenuity was missing from artificial intelligence. People don’t just make logical decisions based on data: we also change our tactics and responses through trial and error. In essence, we learn from experience. For AI to truly reach human-level potential, machines would also need to be able to adapt.

This realization revived interest in machine learning, a field whose roots reach back to Arthur Samuel’s self-improving checkers programs of the late 1950s, though early efforts were very limited in scope. By the 1990s, computers had come a very long way, and machine learning programs, aided by neural networks, delivered several hallmark advances in AI.
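
To make “learning from experience” concrete, here is a minimal perceptron-style sketch in Python: the program starts with random weights and nudges them whenever it makes a mistake, so its behavior comes from data rather than hand-written rules. The task, learning the logical AND of two inputs, is a toy example chosen purely for illustration.

```python
import random

# Toy training data: learn the logical AND of two inputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Start with random weights and a bias; no rules are built in.
w1, w2, b = (random.uniform(-1, 1) for _ in range(3))

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

for epoch in range(50):                       # sweep over the data repeatedly
    for (x1, x2), target in data:
        error = target - predict(x1, x2)      # learn only from mistakes
        w1 += 0.1 * error * x1                # classic perceptron update rule
        w2 += 0.1 * error * x2
        b  += 0.1 * error

print([predict(x1, x2) for (x1, x2), _ in data])   # typically prints [0, 0, 0, 1]
```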

IBM’s Deep Blue defeated world chess champion Garry Kasparov in 1997. Speech recognition software also gained ground, with products like Dragon Systems’ Dragon Dictate and BellSouth’s voice portal achieving accuracy that had previously seemed out of reach.

The internet and big data

Remember how limited processing power and storage initially held back AI? The advent of personal computers solved part of that problem, and the World Wide Web took care of the rest.

PCs meant researchers no longer had to rely on huge, expensive mainframe computers. And, for the first time in history, engineers could give machine learning algorithms access to enormous amounts of data, making pattern recognition, trial and error, and problem-solving more human-like than ever before.

These leaps led to some of the first everyday interactions with AI. A few years after the first iPhone launched in 2007, Siri debuted as a standalone iOS app in 2010; Apple acquired the company behind it that same year and built the assistant into the iPhone 4S in 2011. As we well know, this intelligent virtual assistant performs tasks like opening apps, updating calendars, and searching the web for human users.

AI innovation continued to accelerate, with more and more examples of machine intelligence making headlines. The AI company DeepMind developed AlphaGo, a program that learned to play Go, an ancient Chinese strategy game far more complex than chess, and in 2016 it defeated world champion Lee Sedol four games to one.

Generative AI

Today’s artificial intelligence hype centers on generative AI (like our own Quillbee). This marks a big shift from traditional rule-based systems toward more creative, human-like capabilities: producing new and original content such as text, images, music, and beyond.

Early pioneers envisioned this concept and dreamed of a future when it would come to fruition. But the true breakthrough didn’t arrive until 2014, when American computer scientist Ian Goodfellow and his colleagues introduced Generative Adversarial Networks (GANs). A GAN pits two neural networks against each other: a generator that produces synthetic data and a discriminator that tries to tell real examples from generated ones. As the two compete, the generator’s output becomes more and more convincing, letting the system improve largely by training against itself.
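
As a rough illustration of that tug-of-war, the sketch below (assuming PyTorch is installed) trains a tiny generator to mimic a one-dimensional bell curve; real GANs for images follow the same recipe at vastly larger scale.

```python
import torch
import torch.nn as nn

# Generator: turns random noise into a synthetic sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: estimates the probability that a sample is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0     # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))              # the generator's forgeries

    # Train the discriminator to tell real samples from fake ones.
    opt_D.zero_grad()
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_D.step()

    # Train the generator to fool the discriminator.
    opt_G.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_G.step()

print(G(torch.randn(1000, 8)).mean().item())  # should drift toward 3.0 as training runs
```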

Because of GANs, later breakthroughs like the transformer architecture behind ChatGPT, and the decades of AI research that preceded them, we can now prompt a generative AI program and receive original, creative responses:

[Image: a poem about an apple tree, generated by ChatGPT]

It’s hard to believe a computer program is capable of such things, but it’s true. Though they imagined a future with artificial intelligence, pioneers like Alan Turing and John McCarthy would surely be astonished by how far the industry has come.

Artificial intelligence today, tomorrow, and beyond

And there you have it: a short timeline of where AI came from and where it is today. Of course, the industry is continuously advancing. Artificial intelligence has already transformed industries such as automotive, healthcare, manufacturing, education, and customer service, with grander plans unfolding all the time.

It won’t be long before this article is ancient history.
