A Short History of Artificial Intelligence: Making Mythology a Reality

by omni:us
October 10, 2018

When thinking back to the era of ancient civilisations, it’s unlikely you’d consider insurance and artificial intelligence staples of the time. Rather, they fit much better into the modern day, where technological innovation goes hand-in-hand with better business practices. Yet the idea of giving artificial beings a form of mind goes back to antiquity, seen in folklore, myths and stories. As Pamela McCorduck, a writer and novelist on artificial intelligence, put it, AI stemmed from “an ancient wish to forge the gods” – making it just a bit older than Google, really.

Greek, Chinese and Jewish cultures have long held beliefs and folklore about bringing inanimate objects to life, from Pygmalion’s Galatea, an ivory sculpture brought to life to be his wife, to rabbinic golems and a lifelike robot performing for King Mu of Zhou.

Apart from the divine wishes seemingly at play in ancient times, however, the execution of artificially intelligent machines remained limited until late last century, at which point both extreme optimism and persistent doubt characterised its journey. In this final instalment of our two-part series delving into the history behind our two favourite fields, we’ll explore artificial intelligence from the peaks of its highs to its wintry lows:

 

1950: Alan Turing’s groundbreaking paper, Computing Machinery and Intelligence, generates early interest in the field by introducing the Turing Test, making Turing one of the first to speculate seriously on the possibility of intelligent machines.

1956: Artificial Intelligence officially gets its name – and its place as an academic discipline – during the Summer Research Project on Artificial Intelligence conference, hosted at Dartmouth College, USA, by the ‘founding fathers’ of AI, John McCarthy and Marvin Minsky.

1956 – 1974: Spurred on by the seminal conference, artificial intelligence’s Golden Years begin and extreme optimism takes hold. UK and US government agencies funnel funding into the field, and interest develops into experiments, discovery, and a range of initial programmes, including:   

1958: Herbert Gelernter’s “geometry machine” becomes the first advanced AI programme capable of proving geometric theorems, and only the third AI programme ever created.

1966: Customer service reps beware – ELIZA, the first ever chatbot, is invented by Joseph Weizenbaum, with the ability to hold simple yet compassionate conversations.

1966: Winter is coming – a report commissioned by the Automatic Language Processing Advisory Committee finds that human translation far outperforms machine translation in cost, speed and efficiency, leading to a substantial loss of funding for the field.

1974 – 1980: After the 1973 Lighthill Report concluded that AI research in Britain was not worth pursuing, the US agency DARPA ends its funding for a speech comprehension research programme within the year. Thus begins the first AI winter, when interest, financial support, and further AI initiatives slow to a halt.

1980: Artificial intelligence makes a cautious comeback in the form of expert systems, a type of AI programme designed to streamline business practices while cutting costs, which becomes mainstream once adopted by global corporations.

1987: Unfortunately, the need for specialised AI hardware proves fleeting: desktop computers soon overpower most expert systems and render them redundant. By the early 90s, expert systems had become too expensive to maintain, and the second AI winter set in, lasting until 1993.

1993: After numerous setbacks and pessimistic findings, the field of AI remained largely out of the limelight, with researchers often avoiding even the term “artificial intelligence.” Yet success grew behind the scenes as AI developed to handle larger, more complex and more varied problems, soon becoming mainstream practice.

 

1995: AltaVista becomes the first search engine to use natural language processing.

 

1997: On 11 May, Deep Blue makes history as the first computer chess player to defeat a reigning world champion, Garry Kasparov.

2005: Stanley, the Stanford Racing Team’s robotic car, drives autonomously for over 131 miles to win DARPA’s Grand Challenge.

 

2016: In a thrilling series of Go games, DeepMind’s AlphaGo defeats 18-time world champion and 9-dan player (the top rank) Lee Sedol. Go had long been considered a serious challenge for AI, and AlphaGo’s comprehensive defeat of a master player marked a milestone moment.

What is to come? A rich, productive period lies ahead for artificial intelligence… and at omni:us we are excited to contribute in our own specialty: insurance. Our systems may not quite resemble the singing robot King Mu of Zhou was expecting, but with the inspiration and insight of the AI pioneers before us, we’re positive our efficient, cost-cutting techniques will have you rejoicing in no time.

Interested in learning more? Read our companion piece, A Brief History of Insurance.
