Deep learning: a step toward artificial intelligence
Once the stuff of fantasy, artificial intelligence is now a reality that could change the world we live in.
While truly intelligent machines remain locked in the pages of science fiction novels, artificial intelligence (AI) – the science of creating computers that think like humans and can make their own decisions – is starting to throw up some eye-catching ideas.
Robots have been used for years to perform simple assembly line tasks in factories, but in 1997, IBM took things further by creating the supercomputer Deep Blue, which narrowly beat world chess champion Garry Kasparov in a series of games. The IT manufacturer also created Watson, an AI computer system that beat contestants on the US quiz show Jeopardy! in 2011 to win the US$1m first prize.
So much for the gimmicks: what about the potential business applications? Last year, Microsoft Research boss Rick Rashid demonstrated advanced English-to-Cantonese voice recognition and translation software with an error rate low enough to suggest that the field had genuinely moved on. Much of the most interesting work at present comes from research into neural networks – computers that can sift through vast amounts of data and recognize patterns – and these are proving successful in disciplines such as voice and picture recognition and natural language processing (NLP).
Google – perhaps because it has access to both great computing power and an extraordinary depth of data – certainly seems to see the potential. Over the past year, it has snapped up two of the best-known names in the field: Professor Geoffrey Hinton from the University of Toronto and AI expert Ray Kurzweil. Hinton is now working part-time with the search giant, while Kurzweil was appointed director of engineering in January.
Hinton’s work is to help machines perfect deep learning – the use of low-level data to construct complex meaning and interpretation. He believes Google’s scientists and engineers “have a real shot at making spectacular progress in machine learning.” Last year, a Google deep learning system, which had been fed millions of YouTube video images, managed to do about twice as well as other image recognition systems when identifying specific objects. “At Google, I will get to see what we can do with very large-scale computation,” Hinton concluded.
Much more – in terms of both computing power and software development – may yet be required to shift the deep learning paradigm beyond voice and image recognition. Even so, the brain’s biological circuits process signals a million times more slowly than computers do, and Kurzweil believes that the hardware needed to make AI a reality will be “very inexpensive” by 2020.
On a more prosaic level, when students from Sweden’s Chalmers University of Technology looked at AI’s ability to select which supplier to buy a particular part from – taking into account factors such as price, lead time, delivery accuracy and quality – they found it could do so without making too many errors.
“The problem is not with the AI itself – the algorithms developed work well – but with the scenario and real-life data quality,” suggests Dan Matthews, chief technology officer at IFS. “For this to work well, and be worthwhile, you need a high volume of decisions where there are multiple choices and up-to-date values for all variables that may affect the decision.”
He goes on: “Taking the choice of supplier scenario as an example: lack of up-to-date price or lead time information for all alternative suppliers would lead to decisions made on wrong assumptions.”
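The supplier-selection scenario Matthews describes can be illustrated with a simple weighted-scoring approach. This is only a minimal sketch: the supplier data, factor weights, and scoring scheme below are all hypothetical illustrations, not the method the Chalmers students actually used.

```python
# Hypothetical supplier data for the four factors named in the article.
# Lower is better for price and lead time; higher is better for the two rates.
suppliers = {
    "A": {"price": 12.50, "lead_time_days": 5, "delivery_accuracy": 0.98, "quality": 0.95},
    "B": {"price": 11.80, "lead_time_days": 9, "delivery_accuracy": 0.91, "quality": 0.97},
    "C": {"price": 13.10, "lead_time_days": 4, "delivery_accuracy": 0.99, "quality": 0.92},
}

# Illustrative weights; in practice these would be tuned to the business.
weights = {"price": 0.4, "lead_time_days": 0.2, "delivery_accuracy": 0.2, "quality": 0.2}

def normalise(values, lower_is_better=False):
    """Scale a list of values to [0, 1], flipping when lower is better."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    scores = [(v - lo) / span for v in values]
    return [1 - s for s in scores] if lower_is_better else scores

def rank_suppliers(suppliers, weights):
    """Return suppliers sorted by weighted score, best first."""
    names = list(suppliers)
    cols = {}
    for factor in weights:
        vals = [suppliers[n][factor] for n in names]
        cols[factor] = normalise(vals, lower_is_better=factor in ("price", "lead_time_days"))
    totals = {
        name: sum(weights[f] * cols[f][i] for f in weights)
        for i, name in enumerate(names)
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

best, score = rank_suppliers(suppliers, weights)[0]
print(f"Best supplier: {best} (score {score:.3f})")
```

Note how the sketch depends entirely on the inputs being current: a stale price or lead time for any supplier would shift the normalised scores and could flip the ranking, which is exactly the data-quality problem Matthews warns about.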
Ultimately, even AI can only be as good as the data it is given – to start with, at least.