Artificial Intelligence (AI) has been an active field of study for more than half a century. However, some of the recent breakthroughs accomplished both by the academic community and by companies are fuelling the interest, imagination and even fear of the general public. This can be seen in the growing number of novels and movies portraying (usually evil) AI-driven robots or entities that (usually) want to wipe out the human race. This huge interest made the perfect backdrop for an event like the one Playfair Capital organised at the magnificent Bloomberg HQ in London. We had an amazing list of speakers from academia and industry, as well as some of the main writers on Artificial Intelligence.


The day started with the most technical sessions, in which we were reminded of some of the main problems of NLP and how handcrafted features and lots of labelled examples are traditionally needed. On the other hand, Martin Goodson (from Skimlinks) also reminded us that Deep Learning is showing a lot of promising results in textual applications (although not as many as in other fields like image analysis) and that distributed word embeddings trained on massive unlabelled data sets can easily be used with minimum effort (see my previous blog about word2vec to know more about this).
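To give a flavour of the idea behind distributed word embeddings, here is a minimal, self-contained sketch. Note that word2vec itself is a neural model; as an illustrative stand-in I use a simple count-and-factorise approach (co-occurrence counts plus an SVD), which captures the same intuition that words appearing in similar contexts get similar vectors. The toy corpus is made up.

```python
import numpy as np

# Toy corpus (illustrative only; real embeddings are trained on massive unlabelled text).
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat and a dog played".split(),
]

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Build a word-word co-occurrence matrix using a +/-2 word context window.
C = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):
            if i != j:
                C[idx[w], idx[sent[j]]] += 1

# A low-rank factorisation of the counts yields dense word vectors.
U, S, _ = np.linalg.svd(C, full_matrices=False)
embeddings = U[:, :4] * S[:4]    # 4-dimensional embeddings

def similarity(a, b):
    va, vb = embeddings[idx[a]], embeddings[idx[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))

# "cat" and "dog" appear in near-identical contexts, so their vectors align.
print(round(similarity("cat", "dog"), 2))
```

No labels were used anywhere above, which is exactly the appeal: the supervision comes for free from the raw text itself.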

The second speaker, Gabriel Brostow, spoke about how to increase the accuracy of image recognition by using humans more effectively and efficiently. From my point of view, the optimisation of integrating humans and machines was one of the main themes in most of the talks. Gabriel's talk mentioned some research topics that I am really interested in, such as Active Learning, Meta-classification and cost-sensitive learning. A relatively similar idea was presented by Blaise Thomson (from VocalIQ) in the context of Q&A systems. He claimed that we will eventually use one unique interface, our voice, to control all the different devices around us, and that one of the many challenges ahead of us is how to move from a Q&A session, where one question is asked and answered, to a conversation where every interaction will increase the knowledge of the system. One brilliant example was this very likely (and very annoying) session between any sat-nav and a user:

System: Where do you want to go?
User: London
System: Do you mean Luton?
User: No
System: Where do you want to go?
User: London
System: Do you mean Luton?

If this had been addressed as a conversational situation, the system would have been able to recognise that Luton is not a correct answer and to suggest its second most likely destination, hopefully London. This mechanism will allow for a much richer communication and interaction with a computer system, to the point that some people might have an almost “friendship” relationship with some devices not so long from now. The idea of “teaching” or correcting a system via interaction was also suggested by another speaker, Guillaume Bouchard, who showed how we could exploit both textual and graph information, together with human interaction, to enhance the knowledge and power of an AI. The rest of the morning presentations covered teaching robots to “feel” the world with different sensors and cameras, and how to show the perfect advertisements for users based on the users themselves rather than the content of the webpage they are on.
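The conversational fix to the sat-nav loop above can be sketched in a few lines. This is a hypothetical toy, not VocalIQ's actual system: the candidate destinations, confidence scores and function names are all made up. The point is simply that the system keeps a ranked belief over hypotheses and remembers rejections instead of re-asking the same question.

```python
def next_hypothesis(ranked_destinations, rejected):
    """Return the most likely destination the user has not yet rejected."""
    for destination, score in ranked_destinations:
        if destination not in rejected:
            return destination
    return None  # belief exhausted; the system should re-ask from scratch

# Hypothetical speech-recogniser output: destinations ranked by confidence.
beliefs = [("Luton", 0.55), ("London", 0.40), ("Loughton", 0.05)]

rejected = set()
guess = next_hypothesis(beliefs, rejected)       # system proposes "Luton"
rejected.add(guess)                              # user says "No"
second_guess = next_hypothesis(beliefs, rejected)
print(second_guess)                              # "London"
```

Each user reply shrinks the hypothesis space rather than being thrown away, which is exactly the shift from a one-shot Q&A to a conversation.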

The afternoon session showcased real and tangible applications of AI. It also allowed for some debates about how AI will impact humankind in the coming years. We heard how the number of devices is growing and, according to Rand Hindi, how they will interfere more and more with our lives (I can definitely see this happening) until the point when technology, mainly AI, will finally be able to integrate them all. His guess is that this “integrated context” will arrive by 2030. After this talk, we listened to diverse topics such as neurology and its connection to AI (mainly via neural networks), and a discussion of how to coordinate and integrate a whole array of machine learning APIs owned by different people in a seamless way.

One of my favourite talks of the day demonstrated how to define a route between two different points based not on the quickest route, but on the happiest, the quietest, or, more recently, even the best-smelling one. This amazing research line has been mastered by Daniele Quercia, and it was recently presented not only in academic environments but also at TED. This is one of the best examples of originality, sound research and product vision that I have seen.
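The core trick is simpler than it sounds: keep the standard shortest-path machinery, but swap travel time for a different edge cost. Below is a minimal sketch, assuming each street segment carries a happiness score, so that minimising (1 − happiness) yields the happiest route. The tiny map, scores and function name are all made up for illustration.

```python
import heapq

# Hypothetical street graph: edge weight = 1 - happiness score of that street.
graph = {
    "A": {"B": 0.9, "C": 0.2},
    "B": {"D": 0.9},
    "C": {"D": 0.3},
    "D": {},
}

def happiest_route(graph, start, goal):
    """Dijkstra over 'unhappiness' weights; returns (path, total cost)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path, cost
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in graph[node].items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return None, float("inf")

path, cost = happiest_route(graph, "A", "D")
print(path)   # the pleasant detour via C beats the direct but unhappy A-B-D
```

The same function gives the quietest or best-smelling route just by changing how the edge weights are computed, which is what makes the idea such a nice product as well as a research result.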

People at the event

The whole event was fantastic, but there were two highlights that attracted most of the attention: the presentation by Mustafa Suleyman (from DeepMind) and the debates. Mustafa started by explaining some of the work they have done on deep learning, and he wowed the audience by showing a videoclip of their algorithm learning how to play two Atari video games completely autonomously (a similar video can be seen here). This outstanding research made the front page of Nature, which is arguably one of the highest possible achievements in science. The talk then turned into a very direct criticism of the amount of time we are spending debating singularity, super-intelligence and existential risk. According to him, this seems completely unreasonable when we have very serious, and much more tangible, problems in our society. Famine, drought, waste of resources and overpopulation were just a few of the ones he named. His main point was that we can, and should, use the power of machine learning and AI to find solutions to these problems. Only then should we focus on more existential questions. He was also adamant that the possibility of creating a super-intelligence has been grossly exaggerated. This has been recorded in much more detail in a Wall Street Journal blog.

In the first debate of the day, we heard how some jobs are more likely to disappear relatively soon. Some clerical and accounting tasks seem quite easy to be taken over by an AI, and it is reasonable to assume these jobs will be mainly done by automatic systems in the next years. The discussion quickly turned into questions of a financial and socioeconomic nature. How will taxation work in a country with much higher unemployment, where most of the work is done by machines (which don’t pay income tax)? Or, from a more philosophical perspective, what would we do as a society if we only needed to work half the hours we do currently? My answer to the latter is quite simple: live, and if you really need to work, spend time helping others as a volunteer in some capacity in a field you like. One of the general thoughts that seemed to be shared by the audience is that we will need to radically change the education system to focus on those skills and features that AI currently lacks: creativity, interaction and collaboration would be some examples of this.

The second debate touched upon a topic that not so long ago was confined to science fiction and movies: the potential implications of creating an Artificial General Intelligence (a machine that could perform every task a human is able to) or even a super-intelligence. This event is commonly known as the Singularity. For those of you less familiar with the term, it represents a moment in history when an artificial intelligence surpasses any biological intelligence. This has been a central piece of many novels and movies. However, there is a significant number of people who currently believe that it could be not only possible but even inevitable. It is worth mentioning, independently of my personal opinion, that some of the most influential people in science and technology (e.g., Elon Musk, Stephen Hawking and Bill Gates) share this opinion. The debate started with questions like “what is a human?” and how humans might “merge” or interact with machines in the future. Another interesting idea was that if (or when) a machine can set its own goals, it would be considered an Artificial General Intelligence, and a super-intelligence would be just one step away. If we assume that this event might happen, we will have to admit that we will face very complicated moral and ethical questions, as some people would argue that such machines would have consciousness. A less futuristic situation that exemplifies a similar moral question is the following:

A person is riding in a self-driving car when a pedestrian crosses the street in such a way that it is physically impossible for the car to avoid hitting them, unless the car crashes directly into a tree, probably killing the person inside.

The moral dilemma is that the decision on who would (likely) be killed is up to the computer. Is this acceptable? Should we install “moral” switches in cars, so that the “driver” can specify whether she wants to prioritise her life over pedestrians or other cars? I am afraid that there is no easy answer to this question, and I am absolutely certain we will face many more complicated questions in the near future…

As a closing remark, this was a very enjoyable event with a very interesting mix of research, product development and even philosophy and morality. Thank you very much, Playfair Capital, and I hope you do it again next year.

Note: All the images in this article have been taken from @nathanbenaich, one of the organisers of the event.
