HOW HAS ARTIFICIAL intelligence, associated with hubris and disappointment since its earliest days, suddenly become the hottest field in technology? The term was coined in a research proposal written in 1956 which suggested that significant progress could be made in getting machines to “solve the kinds of problems now reserved for humans…if a carefully selected group of scientists work on it together for a summer”. That proved to be wildly overoptimistic, to say the least, and despite occasional bursts of progress, AI became known for promising much more than it could deliver. Researchers mostly ended up avoiding the term, preferring to talk instead about “expert systems” or “neural networks”. The rehabilitation of “AI”, and the current excitement about the field, can be traced back to 2012 and an online contest called the ImageNet Challenge.
ImageNet is an online database of millions of images, all labelled by hand. For any given word, such as “balloon” or “strawberry”, ImageNet contains several hundred images. The annual ImageNet contest encourages those in the field to compete and measure their progress in getting computers to recognise and label images automatically. Their systems are first trained using a set of images where the correct labels are provided, and are then challenged to label previously unseen test images. At a follow-up workshop the winners share and discuss their techniques. In 2010 the winning system could correctly label an image 72% of the time (for humans, the average is 95%). In 2012 one team, led by Geoff Hinton at the University of Toronto, achieved a jump in accuracy to 85%, thanks to a novel technique known as “deep learning”. Deep learning set off further rapid improvements: in the 2015 ImageNet Challenge the winning system reached an accuracy of 96%, surpassing humans for the first time.
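The evaluation protocol described above can be sketched in a few lines: fit a model on hand-labelled training examples, then score it only on previously unseen test examples. This is a toy illustration, not the contest itself: the feature vectors and labels below are invented, and the 1-nearest-neighbour classifier is a deliberately simple stand-in for the deep-learning systems the contest actually rewarded.

```python
# Toy sketch of the ImageNet-style protocol: train on labelled
# examples, measure accuracy on unseen test examples.
# Features are made-up 2-number vectors, not real images.

def nearest_neighbour_label(train, features):
    """Return the label of the training example closest to `features`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], features))[1]

def accuracy(train, test):
    """Fraction of unseen test examples labelled correctly."""
    correct = sum(
        nearest_neighbour_label(train, feats) == label
        for feats, label in test
    )
    return correct / len(test)

# Hand-labelled "training images": (feature vector, label).
train = [
    ((1.0, 0.1), "balloon"),
    ((0.9, 0.2), "balloon"),
    ((0.1, 1.0), "strawberry"),
    ((0.2, 0.9), "strawberry"),
]
# Previously unseen "test images" with their true labels,
# used only for scoring, never for training.
test = [
    ((0.95, 0.15), "balloon"),
    ((0.15, 0.95), "strawberry"),
    ((0.8, 0.3), "balloon"),
]

print(f"accuracy: {accuracy(train, test):.0%}")  # prints "accuracy: 100%"
```

The key discipline, which the contest enforces, is that the test images play no part in training; a system is judged solely on how well its learned rules generalise to images it has never seen.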