7 Mar 2017

On AI, Robots and Data: Progressing intelligently

A couple of weeks ago, Bill Gates said something pretty extraordinary: he suggested that a so-called “robot tax” might not be a bad idea. Benoît Hamon, the socialist candidate in the upcoming French election, has made the idea part of his platform. Nobody was talking like this just two short years ago. How has it become such a prominent, mainstream topic?

Maybe the much-discussed political upheavals of 2016 made us, particularly in the privileged bubble of the tech world, take note that perhaps all is not right in society. There are multiple underlying factors at play, but one of the bigger ones appears to be a widespread feeling that lives are simply standing still, devoid of agency or purpose. In public debate the blame has tended to fall on globalization or immigration, but the underlying trend suggests this shift is only just beginning, and that makes the future more worrying still.

As economists note, after the downturn of 2008 many companies recovered output but didn’t hire back to the same levels. They became leaner, which is geek-speak for more automated, or more productive. Modernization allowed them to do more with fewer people.

This is a growing trend, and one unlikely to be reversed; astute minds have noted that it will lead to major social upheaval as employment is eroded. That will not necessarily mean mass layoffs, although those will at times happen, but rather a slackening in demand for human labour that will see the social prospects of many stagnate further, and unrest correspondingly grow.

In short, certain advancements in technology really, really matter. In this case, I’m talking about so-called “AI”, and the machine ‘super-intelligence’ which will be the catalyst for much of this erosion.

When, in 1997, IBM’s Deep Blue beat Garry Kasparov, the world chess champion, experts factoring in the pace of change of compute power (Moore’s Law) still predicted it’d take a hundred years for a machine to beat a human at the ancient game of Go. It took eighteen. Moreover, by basing its victory on unorthodox moves that no human expert would play, a computer program shattered humanity’s belief in our mastery of that game… a game we’ve improved on, generation after generation, for millennia.

Why has this happened now? Partly because computational power has exploded, outpacing Moore’s Law once you count GPU hardware, and partly because a string of academic breakthroughs has made approaches like deep neural networks practical… DeepMind, creator of the Go program, was a beneficiary of both. The critical element, though, has been the digitization of records and offline processes, which has provided the training sets these algorithms need to tackle real-world problems.
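
To make that last point concrete, here is a minimal sketch on synthetic data, using an arbitrarily chosen scikit-learn model: the same algorithm, given nothing new except more training examples, simply gets better. Nothing about it is specific to any real system.

```python
# Minimal sketch on synthetic data: the same model, fitted on progressively
# larger training sets, illustrating why digitized datasets matter so much.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=30, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 500, 5000, 15000):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print("%5d examples -> test accuracy %.3f" % (n, model.score(X_test, y_test)))
```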

It is important to define what we’re talking about here. True “Artificial Intelligence” is actually further away than most of us probably think, and we shouldn’t confuse ourselves by corrupting the term. AI has traditionally meant truly intelligent, independent agents… machines that can autonomously make decisions, see them through to completion, reflect, learn and improve. These are a lot closer than they were, but they are still not just around the corner. Popular culture and the media have gotten a bit carried away.

Machines will get there, though, likely over a span of decades, and we’ll then have to deal with some fundamental questions of ethics: not just the human lives that will sit in the hands of AI, but also what rights such machines might themselves have as lifeforms. How, too, will we avoid our very existence being threatened by entities that are super-intelligent and may hold value systems different from our own? We should certainly start thinking about these questions now; they are hard ones to grapple with, and we need a plan. But let’s keep our perspective.

Back to the present.

Machine learning, a shallower form of intelligence, is here today, and it is causing huge waves. It is machine learning that will remove the need for most radiologists; that will become the entry-level point of contact for patient interaction and diagnosis and thereby (in part) supplant human GPs; that will analyze legal documents, score credit applications better than any hand-crafted formula, and help us fly aircraft or drive cars better and more safely than an unpredictable sixty-kilogram bag of flesh as pilot.
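
On the credit-scoring point, a toy comparison can show the shape of the argument. Everything below is invented for illustration: synthetic applicants, a made-up “reject if debt ratio is too high” rule, and an off-the-shelf scikit-learn model standing in for whatever a lender would actually deploy.

```python
# Toy comparison: a hand-crafted credit rule vs. a model fitted to outcomes.
# All data, thresholds and feature names are synthetic and illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
income = rng.normal(50, 15, n)            # thousands per year
debt_ratio = rng.uniform(0.0, 0.8, n)     # total debt / income
years_employed = rng.integers(0, 20, n)

# Hidden "ground truth" combining several factors; unknown to either scorer.
risk = (3.0 * debt_ratio - 0.03 * income - 0.08 * years_employed
        + 1.0 + rng.normal(0, 0.4, n))
defaulted = (risk > 0).astype(int)

X = np.column_stack([income, debt_ratio, years_employed])
X_tr, X_te, y_tr, y_te = train_test_split(X, defaulted, random_state=0)

# Hand-crafted formula: reject anyone whose debt ratio exceeds a fixed cut-off.
rule_pred = (X_te[:, 1] > 0.4).astype(int)

# Learned scorer: weights fitted to historical outcomes.
model_pred = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)

print("rule accuracy:  %.3f" % (rule_pred == y_te).mean())
print("model accuracy: %.3f" % (model_pred == y_te).mean())
```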

It’s been called the second machine age, the next industrial revolution: the coming automation of knowledge work by computers that learn what cannot be explicitly programmed. That’s catchy, but still only part of the picture… this shift is, in fact, liable to have an even greater effect on manual workers, who are larger in number and whose jobs will be automated at an accelerating pace by cheap general-purpose robots, using computer vision and machine learning to tackle tasks that were impractical for the traditional single-purpose, highly expensive robots of large-scale factories. The effects will likely be felt most keenly in those developing countries whose economies have come to rely on offshoring, which will decline as it becomes cheaper to “repatriate” jobs to local robots than to leave them with remote humans.

As entrepreneurs, investors and human beings we should be excited by the potential afforded by this technology; we must not kill it with policies designed to stop progress in its tracks. In many cases it is preferable that machines do these jobs: they will often do them far better than any human can. Self-driving cars will save millions of lives through a reduction in road accidents, patient diagnoses will improve… the list of ways machine learning can improve on human performance is long.

The imminent need we have as a society, though, is to ease the transition: guiding young people away from jobs that won’t stay relevant, retraining those already in obsolescent careers, and, yes, arranging support and subsidy for those for whom neither is possible.

Another pertinent question arising from this is how the investor and startup community, with our strong belief in free markets and in the tremendous drive and innovation that come from young companies, can succeed in this new age.

Some areas, such as medical technology and digital improvements to industrial processes, arguably give incumbent technology players less of an advantage.

But others, such as intelligent personal assistants, are a different story. As mentioned earlier, data holds the key to building machine learning models; if data access remains with the Googles and the Facebooks, then these companies may well become the de facto winners in this new world. Some have, to date, been generous with their work, open-sourcing powerful frameworks (e.g. TensorFlow from Google) and allowing, with user consent, standardized access to data via APIs. Others have been less forthcoming, keeping user data within a “walled garden” and making it hard or impossible for startups to use that data and build products upon it.
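
To give a sense of what those open-sourced frameworks put in anyone’s hands, the sketch below fits a trivial linear model with TensorFlow’s Keras API. The data is invented and the model is deliberately minimal; the point is simply that this kind of tooling is freely available to any startup.

```python
# Minimal TensorFlow/Keras sketch: recover y = 3x + 2 from noisy synthetic data.
import numpy as np
import tensorflow as tf

x = np.random.rand(256, 1).astype("float32")
y = 3.0 * x + 2.0 + 0.05 * np.random.randn(256, 1).astype("float32")

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(x, y, epochs=200, verbose=0)

w, b = model.layers[0].get_weights()
print("learned weight %.2f, bias %.2f" % (w[0, 0], b[0]))  # roughly 3 and 2
```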

We need to work on open standards (e.g. PSD2 in banking) that allow companies, with clearly obtained user consent, to access the data they need in a secure, reliable way, so that you could choose an intelligent assistant developed by a small local startup if you so preferred. Another side to this would be opening up anonymized, non-trivial training datasets that let smaller companies compete on a level playing field with the companies that hold your emails, your “likes”, your demographics and other key data.
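
No such cross-industry standard exists yet, so purely as a thought experiment, this is roughly what a consent-scoped request from a small assistant startup could look like. Every URL, scope and field name below is hypothetical, and the token is assumed to have been granted by the user through an explicit consent flow.

```python
# Hypothetical sketch only: no such standardized API exists today.
import requests

API_BASE = "https://api.example-platform.com/v1"     # placeholder platform URL
USER_TOKEN = "token-granted-after-explicit-consent"  # placeholder OAuth-style token

# The token is assumed to be scoped narrowly (here: read-only calendar access),
# so the startup sees only what the user chose to share.
resp = requests.get(
    API_BASE + "/users/me/calendar/events",
    headers={"Authorization": "Bearer " + USER_TOKEN},
    params={"from": "2017-03-01", "to": "2017-03-31"},
    timeout=10,
)
resp.raise_for_status()

for event in resp.json().get("events", []):
    print(event.get("start"), event.get("title"))
```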

Defining these standards, ideally from within the industry itself, and the manner in which they can evolve and be applied, is key to developing a thriving ecosystem around emergent AI technologies. A multiplicity of players in the market will benefit not only the tech sector overall, and startups in particular, but society as a whole. An open marketplace remains the most reliable way of ensuring the competition that leads to products and services optimized for us as consumers.

We’ve covered a lot of ground, and there are more open questions than answers at this point, but one thing is clear: how we respond to these questions, whether immediately or over the longer term, whether technologically or societally, will shape not only our own short lives but the progress of humanity long after. We take progress for granted, yet it is not a given; particularly in the developed world we also take peace and stability for granted, and they are not a given either. Now is the time to engage actively with the many AI-related challenges ahead, rather than let inertia take us down a path of needless pain.
