On Saturday afternoon 1 August 2015, I had the tremendous opportunity to attend a lecture at Birkbeck College, University of London, on the ethics of artificial intelligence (AI). I concede that this is not my normal Saturday activity. Admittedly I have had a lifelong interest in science fiction. I’m interested in ethics.
But most of all, a very good friend of mine, Kay Firth Butterfield, formerly a leading family law solicitor and barrister, had moved to the US. After studying for a Masters degree in law and international relations, focusing on the impact of emerging technologies, she had become head of the Ethics Advisory Panel of Lucid, which is apparently one of the world’s leading companies in AI. Kay had been appointed to look at the ethics of this work, and the lecture was a chance to examine some really difficult ethical, practical and IT issues.
These are my notes and a few personal comments on what, in good faith, I believe I heard said at the lecture.
The first speaker was Michael Stewart, chairman and CEO of Lucid. He explained to the audience of about 150 people the background to a proper understanding of where ethics arises in the context of AI.
He said intellect is and must be ethical. It is a process which can be seen throughout history. He said there were several ages of man: the Stone Age, Bronze Age and Iron Age, for example, leading to the period of agriculture and then on to the age of the Industrial Revolution and of machines. Controlling machines needed information, so the industrial age led on to the Information Age. He thought we were coming to the end of the Information Age. Information has become unlimited. There is too much. We cannot cope with it. We are awash with it. There is big data. Specifically, we cannot interpret it, in part because there is too much of it. We don’t know what information we have, and that often means we can’t make sense of it. How do we consider and interpret the data? Otherwise we drown in it. We need better sense and better knowledge. This requires a shift of technology: it’s no longer about having the data and storing the data, but about interpreting the data. AI is the making sense of this data. How can we make sense of the information now available to us?
So AI is about the quest to understand the information which we have at the end of the Information Age.
He said it is also about sustainability. Looking at the sustainability of life on our planet means we need to understand the information about that life and enable the continued growth of our species. He thought this required AI. How can we build something which can learn and know more than us, while we remain in charge so that it does not take over? This is not about building something to take over the world and then run amok. Ethics is the positive side, making sure this is not the direction of travel. It needs so-called critical architecture.
To understand our world we need to sense it. Our brain makes sense of our various senses. We identify by senses. If we have impaired senses we may well have impaired judgement. No senses means no intelligence and no interpretation. Therefore a thinking mechanism needs to use senses by gathering information. In this regard he referred to the DeepMind model of Google, which makes sense of visual images by interpreting them into textual explanations.
Michael Stewart then went on to intelligence through symbols. Symbolism has throughout history conveyed thought, permanency and ideas more than words have. Intelligence came from symbology. He made reference to the ground-breaking work during the Second World War of Alan Turing, who broke the symbolic code, moving from codes into computer systems. So the challenge was how to teach computers symbolically. He said software is frozen symbols put into applications, but these are hard to mesh with one another because software is written locally and autonomously as and when required. This is why there are so many operating systems. In contrast, AI has to be combined, universal and ethical, and must achieve this via symbolically engineered systems.
He accepted that autonomy means possible risk: the permanent elephant (HAL and Skynet) in the room in any discussion of AI. He said AI is needed to bring together knowledge systems and huge data of different kinds, and then to understand and interpret them and produce the lessons needed for mankind. He gave examples from the banking industry, where there would be a huge number of transactions and AI would be needed to look at them all and endeavour to find patterns of scams, laundering or other dishonesty. In the health industry it was needed to look at the vast data to find possible cures, vaccines and the like. In the energy industry, AI was needed to look at the data to find energy usages that are both efficient and safe.
He said AI is real and is happening and has already been happening at Lucid. It is certainly in its infancy but it is already crossing into various industries and learning from us. It is getting more knowledgeable and faster and better.
But it needs a foundational basis. It is the contrast between the Internet of Things and an Internet of Intelligence. In the convergence of intelligence, humanity is not set aside or made unemployed. It is essential that mankind challenge itself to be better in fundamentally human ways than AI, and we can be. It can be controlled intelligently and ethically.
He ended by saying no one will build AI without obsessive safeguards.
Kay Firth Butterfield
Kay explained how Lucid has created the Ethics Advisory Panel as an almost stand-alone project within Lucid. It was both internal and public facing. It had an independent budget, projects and publications. The purpose was to provide guidance on ethics in this area including the work of Lucid. It was directly involved in the launch teams. It was aware it had to collaborate and was setting up research scholarships in several leading universities around the world. It was deliberately bringing in leading experts and authorities from many ethical traditions because of the importance of collaboration in this very new area. They were from the fields of economics, neuroscience, children's rights, history, international politics, international law and ethics.
The significance of having an independent budget is to avoid actual or perceived interference. Whilst Lucid funds the budget, it is hoped that it will be able to engage in independent projects.
Lucid wanted to give, effectively donate, Cyc, their AI creation, to academics to enable them to use the tool to examine, and hopefully find solutions to, some of the problems which beset humanity which, because of the size of the data sets, cannot be addressed without AI.
Ethics has an element of plurality, particularly across disciplines. This should prevent one-dimensional thinking, which was important to Lucid.
She referred to the Tallinn Manual on Cyberwar (which extends the principles of the Geneva Convention to cyber warfare). There was a crucial need to look at how ethics could mitigate risks.
She referred to the notion of common sense and asked how one knows that common sense will guide morality. There are several methods of examining this.
Kay said there were many risks facing our planet and humanity: climate change, environmental destruction, disease, biological technologies and more. She said the AI of Cyc at Lucid could help with horizon scanning, data identification, data selection and the like. Equally, technology will bring disruption. It is crucial that this is considered in advance so that the next generation of students and technicians can be ready for the labour market opportunities. Undoubtedly technology will continue to change the need for forms of human work.
She returned to the guiding principle that it was not yet time, or suitable, to allow autonomy in AI. There are several blocks deliberately in place to limit behaviours; these are implemented in software and hardware and are ethically governed.
But knowing brings responsibilities. The Information Age has given us information, but without responsibilities. In this new approaching Age, we go beyond information to a position in which there are responsibilities for having the information. This is the ethics of AI, and the Panel is in the very early stages of examining what ethics should apply, whence those ethics should derive, and how they will be placed within the AI as a limiter or self-regulator.
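The idea of deliberate blocks acting as a limiter on an AI system's behaviour can be sketched, purely as an illustration and not as anything described at the lecture or attributed to Lucid, as a software guard layer that vetoes disallowed behaviours before they are carried out. All the names here (EthicsLimiter, blocked behaviours, run_with_limiter) are hypothetical:

```python
# Illustrative sketch of a software "limiter": a governance layer that
# filters an AI system's proposed actions against a deliberately imposed
# block-list before any action is allowed to proceed.

class EthicsLimiter:
    def __init__(self, blocked_behaviours):
        # behaviours the governance layer has deliberately ruled out
        self.blocked = set(blocked_behaviours)

    def permits(self, action):
        # an action is allowed only if it is not on the block-list
        return action not in self.blocked

def run_with_limiter(limiter, proposed_actions):
    # keep only the actions the limiter permits; blocked ones are vetoed
    return [a for a in proposed_actions if limiter.permits(a)]

limiter = EthicsLimiter(blocked_behaviours={"autonomous_update", "self_replicate"})
allowed = run_with_limiter(
    limiter, ["answer_query", "self_replicate", "summarise_data"]
)
print(allowed)  # the blocked behaviour has been filtered out
```

In a real system such a limiter would of course be far more elaborate, and, as the lecture noted, enforced in hardware as well as software; the sketch only shows the shape of the idea of a self-regulating check placed between intent and action.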
The prospect of AI without any ethics is too horrendous to contemplate, but history shows that too often technology runs ahead of ethics. Lucid is to be highly congratulated on making ethics a foundational element of the development of AI. What is now crucial is where the ethical debates go, and whether they can move at the same speed as, or faster than, the development of AI; therein lies a real challenge.
Everyone interested in this area of the development of life on our planet, as AI will inevitably be, will wish Michael and his colleagues at Lucid much success, but will be equally certain that Kay and her work on the ethics are of foundational and, in reality, transcendental importance.