Recently, the headlines have been full of tech developments that, not so long ago, were only dreamt about in films.
The first couple of months of 2016 have not only heralded the year of Virtual Reality, but have also seen huge developments in the Artificial Intelligence (AI) arena. Some of the algorithms needed for AI were developed in the 1980s, but they have needed the advances in data and processing power of the last 30 years to become a workable reality.
Google’s purchase of London-based DeepMind seemed like a bold move in 2014, but the company hit the global headlines this week when DeepMind’s artificial intelligence system beat one of the world’s top Go players not once, but four times. DeepMind’s website boasts that it joined forces with Google to ‘turbo-charge their mission’, and it seems the acquisition (Google’s largest to date) has done exactly that.
Google’s AI programme AlphaGo competed in a Go tournament that lasted a week. The last match saw a nail-biting finish, not least because, after three losses, Go world champion Lee Se-dol managed to win a single game before AlphaGo took the fifth and final match, denying Lee the potential $1 million prize pot. The prize money has instead gone to Google and will be distributed to a number of charities. Go had been chosen because it was deemed more of a challenge for a computer than the previously used chess, with over 200 potential moves per turn.
This is all very exciting, but what does it mean for the wider industry? It’s not just the Google DeepMind team that has been developing AI; there have been a number of other high-profile investments in the technology, including IBM’s Watson system, which transforms as it learns.
Potentially, the implications of AI are huge, but there are many reasons why we won’t be handing all our decisions over to a computer quite yet; there’s still a lot of development to go. One key hindrance to the speed of the technology’s advancement is the need for access to large amounts of clean data, meaning that it’s not an option open to everyone at the moment. But with key players such as Facebook, Google and IBM investing heavily in development, and Elon Musk calling for the creation of open-source AI, it’s obvious that many think this could revolutionise our lives in the future.
DeepMind is already eyeing up practical applications of this technology, and has committed to working with the UK’s National Health Service to develop better modes of working. The pairing is not without its sceptics, however, and many will be watching to see whether the technology is sufficiently advanced to cope with such a challenge. Any success could have huge implications for other industries and businesses.
Terrifying or inspiring, there are going to be many who fall on each side of the AI argument, but, one thing’s for sure, Google’s DeepMind has massively accelerated the development process.
Artificial Intelligence In Communication
DeepMind’s not the only AI algorithm to have hit the headlines this week: @DeepDrumpf is a Twitter bot created by Bradley Hayes, a postdoc at MIT’s CSAIL (Computer Science and Artificial Intelligence Lab). Hayes says that “The algorithm essentially learns an underlying structure from all the data it gets and then comes up with different combinations of data that reflect the structure that it was taught”. Using this technique, called deep learning, @DeepDrumpf recognises patterns in the letters of speeches, debate remarks and transcripts from US presidential hopeful Donald Trump, and creates new tweets in a direct imitation of his style. This isn’t the end of it: Hayes hopes to create a similar bot for a Democratic candidate so the two bots can debate each other.
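The underlying idea Hayes describes (learn which characters tend to follow which contexts, then sample new sequences that mimic the source) can be illustrated with something far simpler than the recurrent neural network behind @DeepDrumpf: a character-level Markov chain. Here is a minimal sketch in Python; the corpus string and function names are invented for illustration, not taken from the bot’s actual code.

```python
import random
from collections import defaultdict

def build_model(text, order=3):
    """Map each `order`-character context to the characters seen after it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def generate(model, seed, length=60, rng=None):
    """Extend `seed` by repeatedly sampling a next character for the
    current context; stop early if a context was never seen in training."""
    rng = rng or random.Random(0)
    order = len(seed)
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break
        out += rng.choice(choices)
    return out

# Toy stand-in for a corpus of speeches and transcripts.
corpus = ("we are going to win so much, we are going to win big, "
          "we are going to make it great again")
model = build_model(corpus, order=3)
print(generate(model, "we ", length=60))
```

A real text-generation model learns much longer-range structure than a fixed three-character window, but the principle is the same: new output is assembled from recombinations of patterns found in the training data.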
@DeepDrumpf isn’t the first computer-generated content creator; in fact, you may have read content on news sites without knowing it was created by an intelligent machine.
For example, the Los Angeles Times published a computer-written earthquake report on March 17th 2014.
The article informs readers of a magnitude 2.7 aftershock four miles from Westwood at 7:23 a.m. It goes on to read: “A magnitude 4.4 earthquake was reported at 6:25 a.m. and was felt over a large swath of Southern California”. Perhaps not the most compelling reading material, but the article is informative and was published to the news website by 7:53 a.m. Not bad for a computer.
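Reports like this are typically produced by slotting structured feed data into pre-written sentence templates, which is why they can go live within minutes of an event. A minimal sketch of that approach is below; the field names, template wording and `write_report` function are illustrative assumptions, not the real feed schema or the LA Times’ actual code.

```python
# Template-based article generation: structured event data is formatted
# into a pre-written sentence, ready for immediate publication.
TEMPLATE = (
    "A magnitude {magnitude} earthquake was reported at {time} "
    "{distance} miles from {place}, according to the {source}."
)

def write_report(event):
    # `event` is a dict of fields pulled from an automated data feed.
    return TEMPLATE.format(**event)

event = {
    "magnitude": 4.4,
    "time": "6:25 a.m.",
    "distance": 4,
    "place": "Westwood",
    "source": "U.S. Geological Survey",
}
print(write_report(event))
```

The trade-off is obvious: the output is accurate and fast but only as varied as the templates a human wrote in advance, which is why such articles read as functional rather than compelling.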
The Los Angeles Times isn’t the only publishing platform to have relied on automated systems to generate content. Associated Press published the article “Apple tops Street 1Q forecasts”, and a note at the bottom reads: “This story was generated by Automated Insights”. The content was distributed via CNBC and Yahoo!
With magazines and newspapers the world over laying off journalists, is this another example of computers taking over the jobs of humans? Is computer generated content as engaging and compelling as content created entirely by people? We’d love to know your thoughts.