A robot may not injure a human being or, through inaction, allow a human being to come to harm. – Asimov’s First Law of Robotics
The author Isaac Asimov first published his laws of robotics in 1942, envisioning a future in which autonomous machines with independent thinking capacity co-existed with people. With the emergence of artificial intelligence (AI) systems and the explosion in data available to train and improve those systems, we are on the cusp of that future today. In a very short time, AI systems have gone from identifying cat photographs to identifying individual people, automating tasks done by people, understanding and responding to what people are saying, playing games with them and even driving cars for them! As business leaders, it is tempting for us to see AI systems as a way of increasing efficiency, accuracy and profits. But before we proceed, we must ask: for what purpose?
In the 1920s, the industrialist Henry Ford, an early adopter of assembly line manufacturing, reduced the work week for his employees from 6 days to 5, and the work day from 10 hours to 8, while more than doubling their salaries. The technology had multiplied his workers' productivity many times over. Ford could have used it to reduce his headcount and payroll costs and increase his profits. However, he had the foresight to see that better-paid people with more leisure time would not only be more productive but would also have the money and time to increase their spending and consumption. As more and more enterprises followed his lead, millions of people entered the middle class of consumers, fuelling demand for the products of those enterprises.
Business leaders of today will soon face the same choice as Henry Ford. The AI systems we adopt will have the potential not only to augment people and enhance their lives but also to put some of those lives and livelihoods in peril. This is not a fringe, dystopian viewpoint. The UK’s Office for National Statistics has found that 7.4 percent of people’s jobs are at high risk of being replaced entirely by technology. The groups most in peril are women, part-time workers, and young people. We cannot afford to have a lost generation of people whose future is disrupted by technology. Not only business profits and economic growth but also the stability of society and the wellbeing of all people depend on it. Like Henry Ford, business leaders who are early adopters of AI must lead the way.
As a rule of thumb, you can expect the transition of your enterprise company to machine learning will be about 100x harder than your transition to mobile. – Peter Skomoroch
There are three principles to keep in mind when designing and developing AI systems:
1. Diverse people design better AI systems.
Many of the problems with today’s most successful technology platforms stem from the failure of the system designers to empathize or identify with the people who would eventually use their products. These are serious issues that have resulted in costs running into the billions of dollars. Similarly, many early issues with AI systems have been caused by the biases inherent in their training data sets (or the biases of the people whose actions or decisions were captured in those data sets). Some of these systems merely decide whether a fruit is ripe enough to be picked. Others decide whether a supplier should be paid a higher price, whether a passenger is a security risk, or whether a prisoner should be paroled. It is crucial that these AI systems are designed with inputs not just from the people who are making the decisions today, but also from those who will be most affected by their decisions. This will ensure that they contain adequate safeguards to protect people from unexpected, unjust, unfair or dangerous outcomes.
2. Machine learning, teachers earning. Reward people fairly for training AI systems.
Whether it’s the ride-hailing car drivers who are unknowingly providing data that is helping to train self-driving cars, or the book translators whose work has been used, often without their knowledge, to make language translation systems more accurate, AI systems present a very high risk of eventually putting their own trainers out of work. People and the governments that represent them are increasingly focusing on this risk. Hence, businesses can expect to see workarounds, hacks, and regulations that hamper their ability to train AI systems properly. They should pre-empt this by providing people with a fair reward for their efforts. For example, a stream of micro-payments from the future revenues of the AI systems that they have trained would be both fair and profitable.
3. AI to enhance people, not replace them.
The newly-adopted OECD principles on AI require AI systems to benefit people and the planet by driving inclusive growth, sustainable development, and well-being. However, we cannot expect most businesses to do this simply out of a sense of altruism. The reality is that today’s AI systems are mostly ‘Weak AI,’ able only to perform specific tasks, without the capacity of a human being to carry out multiple, complex tasks. According to Peter Skomoroch, “As a rule of thumb, you can expect the transition of your enterprise company to machine learning will be about 100x harder than your transition to mobile.” Rather than going for moon-shots that aim to replace people entirely, it is more realistic, and more likely to deliver a good return on investment, to design systems that enable people to use AI to do a better job.
If the purpose of business is to profitably create customers, then the purpose of AI should be to enable more people with more time and more money to become customers.
"The industry of this country could not long exist if factories generally went back to the ten hour day because the people would not have the time to consume the goods produced. Probably the next move will be in the direction of shortening the day rather than the week" – Henry Ford, 1926.