The history of Artificial Intelligence

Current artificial intelligence can be considered a blend of two old ambitions: wanting to automate tasks, and giving 'life' to objects. We can trace both concepts back thousands of years. Even though the idea of "computers" was unknown at the time, stories of gods giving life to objects already existed in antiquity: for example, the story of Pygmalion falling in love with his statue, which Aphrodite (goddess of love) then brought to life, or Talos, the giant made of bronze created by Hephaestus and infused with the life force of the gods.

It is also widely believed that the first algorithms were arithmetic algorithms, used as far back as 2500 BC.

Those are certainly not the first things that come to mind when we think about AI today, but they are still considered its origins.

Theorized AI

For a long time, the automation of tasks was only possible in theory, since computers or any analogous machines did not exist. The study of what could be considered "automated thinking" was therefore reduced to the study of the relationship between reasoning and logical operations. The philosopher Ramon Llull (13th-14th century) imagined a machine that could take basic truths and, through logical operations, deduce other truths, eventually producing everything that could be true. Later, during the 17th century, Leibniz wanted to formalize reasoning into a language of calculation (the characteristica universalis): people would then be able to calculate instead of arguing.

In 1818, Mary Shelley's novel Frankenstein was published, in which a scientist (instead of a god, this time) creates life. This is still quite far from what we would consider artificial intelligence, but the concept of people creating life was already present.

In 1842, the first official program was written by Ada Lovelace. It was still theoretical, because it was never run on a machine: it was a set of notes describing inputs that could be given to the Analytical Engine (designed by Charles Babbage) to calculate Bernoulli numbers. The Analytical Engine itself was purely theoretical as well: although Babbage designed it, he never built it.

First implementations

Slowly, people started building machines and computers to experiment with. The Spanish civil engineer and mathematician Leonardo Torres y Quevedo built an autonomous chess-playing machine in 1912, and in 1920 he built a prototype of a Babbage machine.

The beginning of AI as we know it coincided with the rise of modern computers and of models of the brain: from the 1930s to the 1950s, the first neural networks were created, attempting to replicate the brain and its neurons, essentially creating an "artificial brain".

In 1950, after speculating that machines might be able to "think", Alan Turing described what became known as the Turing test: a person converses with a computer, while another person observes the conversation from the outside with no knowledge of which participant is the person and which is the computer. If the observer cannot tell them apart, the computer passes the test.

Finally, in 1956, during the Dartmouth workshop, the term "artificial intelligence" was officially adopted. Following this, the development of AI continued to expand: domains such as natural language processing (NLP) emerged, the first chatbot was written between 1964 and 1966, and in 1973 the first bipedal robot, WABOT-1, was built. The field even started to receive government funding. Popular culture at the time started to depict more 'modern' AI, such as HAL 9000 in 2001: A Space Odyssey (1968). However, this was not destined to last long.

The two winters of AI

Eventually, all the hype around AI started to die down, due to unrealistic expectations, the limited computing power available at the time, and, as a consequence, the inability of researchers to keep up with demands. Funding started to diminish, and in 1974 the first AI winter arrived. Some progress was still made before the second AI winter began in 1987, notably thanks to expert systems: algorithms that followed sets of rules tailored to specific domains. In 1989, the first neural network able to recognize handwritten numbers was created. The concept of AI also remained very much alive in popular culture, with movies such as Terminator (1984) and The Matrix (1999) depicting forms of AI.

Year after year, the processing power available became greater (some of you have probably heard of Moore's law: the observation that the number of transistors in an integrated circuit doubles roughly every two years), and AI started to be used in our day-to-day lives, from speech recognition to banking.
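To get a feel for how fast doubling every two years adds up, here is a tiny illustrative calculation (the starting count and time span are example values, not historical data):

```python
# Illustrative sketch of Moore's law: transistor counts doubling every
# two years. The numbers below are example values, not real chip data.
def projected_transistors(start_count: int, years: int) -> int:
    """Project a transistor count forward, doubling once per two years."""
    return start_count * 2 ** (years // 2)

# Over 20 years there are 10 doublings, so the count grows 2**10 = 1024x.
print(projected_transistors(2300, 20))  # 2300 * 1024 = 2355200
```

Ten doublings already mean a thousandfold increase, which is why a few decades of this trend changed what AI could realistically attempt.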

The era of 'big data'

From 2010 onwards, the domain of AI experienced a huge boom, with terms such as 'machine learning' becoming widely used. The main strength of the current era is the huge amount of data and processing power available: a person with a basic laptop and internet access can download the MNIST dataset and write a neural network from the ground up. In more and more parts of our lives, we are assisted by AI without even realizing it, and every major technology firm has a department dedicated to artificial intelligence research.
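To show how little code "from the ground up" can mean, here is a minimal sketch (an illustrative example, not anyone's production code): a single artificial neuron, a perceptron, learning the logical AND function. Recognizing MNIST digits takes many more neurons and data, but the adjust-the-weights training loop is the same idea.

```python
# A single artificial neuron (perceptron) learning AND, from scratch.
def predict(weights, bias, inputs):
    """Fire (output 1) if the weighted sum of inputs exceeds zero."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Truth table for AND: output is 1 only when both inputs are 1.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):  # a few passes over the data suffice here
    for inputs, target in data:
        error = target - predict(weights, bias, inputs)
        # Perceptron learning rule: nudge weights toward the correct output.
        weights = [w + lr * error * x for w, x in zip(weights, inputs)]
        bias += lr * error

print([predict(weights, bias, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

The network "learns" purely by correcting its own mistakes, one example at a time; scaling this loop up to millions of weights and images is, in essence, what modern machine learning does.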

The next big challenge to overcome is research into general artificial intelligence: AI systems capable of operating autonomously across multiple domains.

In the meantime, please consider subscribing to my newsletter; it keeps me motivated and it's free! I write two articles each week.