Sunday, October 18, 2009

History of Artificial Intelligence

When we think of Artificial Intelligence, we tend to treat it as a fairly new concept and associate it with science fiction efforts such as "The Terminator". In fact, the idea of artificial intelligence stretches way back, as far as Greek mythology and the golden robots of Hephaestus. Through the ages, various philosophers reasoned that all rational thought could be made as systematic as algebra.


The 1930's, 40's and 50's were when AI really began to develop. Arthur Samuel wrote a checkers program that could hold its own against an average player and win. Then Alan Turing published a paper proposing a test: if a machine could hold a teletype conversation indistinguishable from a conversation with a human being, it could be deemed intelligent. In 1956, a conference on the subject was held at Dartmouth College, and the field was given its official name, Artificial Intelligence.

Over the next few decades, the development of AI was subject to harsh "winters", the term used for periods when progress stalled and funding dried up. In the 70's and 80's, the technology experienced a number of these "winters", usually in keeping with the economic climate. The advent of Expert Systems in the 80's was seen as a revival, but it was followed by another "winter" when the limitations of Expert Systems were realised.

Today AI is more widely used in everyday life. Algorithms aid medical diagnostics, solve banking problems and, of course, AI is widely used in video games to great effect. The technology's advance seems to come down mainly to the processing power of computers today. Moore's Law states that the processing power of computers doubles roughly every two years. At this rate, futurist Ray Kurzweil has estimated that by 2029 we will begin to see machines with human-level intelligence.
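The arithmetic behind that projection is simple compounding. A minimal sketch (the function name and clean two-year doubling period are illustrative assumptions; real-world scaling is less tidy):

```python
# Illustrative sketch of Moore's Law growth, assuming an exact
# two-year doubling period (a simplification of the real trend).
def moores_law_factor(start_year, end_year, doubling_years=2):
    """Return the projected growth factor in processing power."""
    doublings = (end_year - start_year) / doubling_years
    return 2 ** doublings

# From 2009 (when this post was written) to 2029: ten doublings,
# i.e. roughly a thousandfold increase in processing power.
print(moores_law_factor(2009, 2029))  # → 1024.0
```

Twenty years at a doubling every two years gives ten doublings, so 2^10 ≈ 1000 times today's processing power.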
