What is artificial intelligence?
Artificial Intelligence (AI) is a broad concept which is becoming harder to define. As AI researcher Nick Bostrom observes, the more AI is incorporated into our daily lives, from the algorithms that reshuffle our social media feeds to the software inside ordinary home appliances, the less we recognise that software as AI. Instead, we see it as ‘mere technology’.
Part I – Artificial Intelligence
Artificial intelligence definition
The Maltese AI Taskforce defines Artificial Intelligence as follows:
“Artificial intelligence (AI) refers to systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can also be designed to learn to adapt their behaviour by analysing how the environment is affected by their previous actions.”
“As a scientific discipline, AI includes several approaches and techniques, such as machine learning, machine reasoning, and robotics, as well as the integration of all other techniques into cyber-physical systems.”
I decided to use this definition because it is elaborate enough to define AI sufficiently. Of particular interest are a few key phrases. The phrase ‘designed by humans’ clearly states that this is a technology created by us humans; as Noreen Herzfeld theorises, it is made ‘in our image’. This statement will be useful in our discussion further on. Another key phrase notes that AIs are built to achieve a given goal. An exception is Artificial General Intelligence (AGI), which is not designed around a single predefined goal. The third key phrase refers to the use of AI in analysing effects on the environment: the strength of AI is mostly appreciated when it processes data to make decisions about its environment.
Two main categories of artificial intelligence: NI and AGI
AI is normally broken down into two main categories: Narrow Intelligence (NI) and Artificial General Intelligence (AGI).
NIs are AIs programmed to perform a single, albeit complicated, task, such as analysing text, processing vocal input or even creating art and literature. Great advancements are happening in the NI field; some examples are its use in self-driving cars and in natural language processing. NIs are also ubiquitous in search and recommendation engines and in data platforms such as IBM’s Watson.
In contrast, AGI is still a nascent field. Sometimes referred to as strong, superintelligent or human-level AI, an AGI would understand and reason about its environment just as a human being would. Concisely, an AGI is built to replace the human in a decision-making environment. Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”. In Bostrom’s definition, an AGI needs the capacity to learn and to deal effectively with uncertainty and probabilistic information.
Myths about artificial intelligence
Only luddites worry about AI. This could not be further from the truth. Many leading AI researchers and technologists, including Elon Musk, the late Stephen Hawking, Bill Gates, Sam Altman and Nick Bostrom, have expressed concern about its use. Musk’s remark to an MIT audience, that with artificial intelligence “we are summoning the demon”, summarises this existential fear.
AI turns evil: While an AI’s goals can be misaligned with ours, this does not mean that AI turns ‘evil’ in the way it is portrayed in sci-fi films; misalignment is a design problem, not malice. Hence, as technologists we ought to make sure that both the data on which an AI is built and its algorithmic intelligence are aligned with an ecological view of humanity.
AI can’t control humans. Again, this statement is false. Intelligence enables control. One need not look far to see that political propaganda on social media, distributed by AI-based recommendation algorithms, can and did sway elections. Moreover, data is the new gold, and thus the new economic power. As we all know, power provides the means to influence which data is used and deemed relevant. It can determine which problems become priorities, and for whom the tools, products and services are geared. Hence, AI is an enabler, or disabler, of power.
Part II – Robots and AI
Let’s start by looking at some definitions of robots.
According to Asimov’s Laws of Robotics, every robot should obey these three laws:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I join the camp of thinkers like Eliezer Yudkowsky who hold that such laws are unsafe at best.
On the other hand, the AI4People ethics group has outlined four principles for the use of AI in robots:
i) beneficence (do only good),
ii) non-maleficence (do no harm),
iii) autonomy (the human agent remains responsible for individual decisions),
iv) justice (preserving solidarity and fairness).
I believe that these are much safer guides when interpreting AI research.
With regard to robots such as Hanson Robotics’ Sophia, I believe this is nothing more than a marketing stunt. Before discussing the AI behind the project, let us consider the team behind Sophia. Given his artistic background, David Hanson fully understands the importance of giving a humanoid robot an appearance that is both non-threatening and welcoming.
Hanson’s spokesman, Ben Goertzel, is a full-stack AI researcher and a strong believer in the Singularity: a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable (though not necessarily bad) changes to humankind, an ‘intelligence explosion’ that qualitatively far surpasses all human intelligence. According to technologist Antoine Tardif, Goertzel’s mission is to use Sophia to raise funds for his SingularityNET project, which attempts to democratise access to AI technology. Thus, Sophia is useful for keeping investors interested in the project.
Sophia is often wheeled in during presentations. However, the robot seems to lack awareness of its surroundings and finds it hard to focus its attention on any one object. It seems that Sophia (I refuse to use ‘her’) uses computer vision together with chat-bot technology comprising voice recognition and perhaps some form of natural language processing. The latter two technologies are also used in Amazon’s Alexa and Apple’s Siri, both of which are far more technologically advanced than Sophia despite its ‘cute’ human face.
Nonetheless, given the Law of Accelerating Returns, the AI community still hypothesises the eventual emergence of a solid AGI framework, such as OpenCog or DeepMind, which could easily be hosted by a humanoid such as Sophia.
Do you think a humanoid robot can eventually become more powerful than a human being?
Humanoid robots are just a marketing gimmick: they give a sense of intelligence. I am much more concerned with the AIs shaping us than with the AIs shaped after us.
In a paper co-authored with spiritual theologian Fr Charlo Camilleri and philosopher Tero Massa, we argue, in line with Paul Heelas, that spirituality, rather than rationality, is essential to the definition of what it means to be human. This quote summarises our argument: “Innate drives linked essentially to being human, such as the discovery of the self, seeking to gain knowledge, progressing in life, searching for one’s identity and the definition of one’s self, are translated and replicated into the building of the new version of the human. One can postulate that innate within humankind, there is a desire for self-reproduction which surpasses the biological need and ventures into the realm of the spiritual.”
Thus, when speaking about AGIs becoming human or human-like, we believe that we need to start our discussion from what makes us spiritual. We outline the following categories:
- Self-knowledge, stimulations and intellectual aspiration
- Spirituality as three ‘to’s: to be (noun), to do (verb), to encounter (relational)
- Built in imago humani
- The ability to imagine
- The capacity of feeling emotions
- Consciousness, personhood and ensoulment
- The ability to re-invent the self
Technologically speaking, AGIs are far from achieving such attributes. This does not mean that God is limited from irrupting in silicon as God irrupted in biology. In our paper, we conclude with philosopher Johan Seiber’s suggestion that as theologians reflecting on the digital, we need to “find ways to think about digitalisation not as a threat to humanity but as an opportunity to explore avenues that we may not have even known about.”
Part III – Ethics in AI
Artificial intelligence ethics
The main question one needs to ask when discussing AI ethics is: “For what purposes do you want to use it?” Are you going to use it to do good or to harm?
Let us look at some examples. Imagine taking a photo of a lump on the skin and uploading it to a trained AI system to assess whether or not it is skin cancer. One can mention Facebook’s tool for identifying potentially suicidal users, or the AIs used to allocate resources after natural disasters.
On the other hand, AI can be programmed to kill, as in the case of autonomous weapons. One can also mention deep fakes, a particular source of concern in the 2020 US presidential campaign. AI can also be misused inadvertently: one can program with good intentions but train on biased data. In such situations, the AI’s goals become misaligned with those of humankind.
As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals. However, if those goals aren’t aligned with ours as humankind, we have a problem.
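This competence-without-alignment problem can be illustrated with a minimal sketch in Python. The data, model and names below are entirely hypothetical, invented for illustration: a learner built, with good intentions, to match historical hiring decisions will competently reproduce the bias baked into them.

```python
# Toy illustration (hypothetical data): a 'competent' learner faithfully
# reproduces bias present in its training data.

# Historical decisions: (years_experience, group, hired). The 'group'
# field should be irrelevant, but past decisions were biased against group B.
history = [
    (5, "A", 1), (2, "A", 1), (1, "A", 0),
    (5, "B", 0), (4, "B", 0), (1, "B", 0),
]

def hire_rate(group):
    """Fraction of past candidates from `group` who were hired."""
    outcomes = [h for (_, g, h) in history if g == group]
    return sum(outcomes) / len(outcomes)

def predict(years, group):
    """Naive model: copy the historical base rate for the group, nudged by
    experience. It is very good at matching the past, and for that very
    reason it inherits the past's bias."""
    return int(hire_rate(group) + 0.05 * years > 0.5)

# Two equally experienced candidates receive different outcomes:
print(predict(5, "A"))  # 1 (hired)
print(predict(5, "B"))  # 0 (rejected)
```

The model does exactly what it was built to do; the harm comes from its goal (match historical decisions) being misaligned with the intended goal (judge candidates fairly).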
Data as the New Gold
When speaking about ethics, a major issue lies in the collection and use of Big Data. Recognising that two points of data are connected is not enough; the system must ask why one point affects another. Moreover, data is deeply personal. We would not want others to access the digital ‘model’ that defines us, such as our Facebook or Google account, and so this data must be protected. As Joanna Bryson muses, legal frameworks with a heavy focus on AI ethics must protect this data from being bargained away as an asset. Companies need to carry out risk assessments, harden their servers and take every precaution to defend their cyber-security. Furthermore, questions about who owns the user’s data are necessary groundwork for protecting the user, as is an understanding of the psychology behind how that data is used.
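The point that connection is not explanation can be sketched with synthetic data (all numbers below are invented for illustration): two quantities that track each other closely, but only because a hidden confounder drives both.

```python
import random

# Synthetic sketch: ice-cream sales and drownings correlate strongly,
# but only because a hidden confounder (temperature) drives both.
random.seed(0)
temps = [random.uniform(10, 35) for _ in range(200)]
ice_cream = [t * 2.0 + random.gauss(0, 3) for t in temps]   # driven by heat
drownings = [t * 0.5 + random.gauss(0, 2) for t in temps]   # also driven by heat

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

print(round(corr(ice_cream, drownings), 2))  # strong positive correlation
# Yet banning ice cream would not prevent a single drowning: the
# connection between the two data points is real, the causation is not.
```

A system that merely detects the connection would recommend the wrong intervention; asking *why* the points are connected is what surfaces the confounder.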
Related to data is algorithmic transparency and the ‘black box’ problem. AI needs to be built capable of explaining its steps. As the McKinsey Global Institute found in its research, some companies made a trade-off, opting for a slightly less performant AI because they favoured explainability. Otherwise they could end up with situations similar to that of Sarah Wysocki.
Ms Wysocki, a fifth-grade teacher, was fired, despite being praised by students, parents and administrators alike, because an algorithm decided that her performance was sub-par. Other biases emerging from data include considerable job losses suffered by African-American men due to such processing of data. Another instance is that of dark-skinned patients being falsely assessed as healthier, with the subsequent risk of further aggravating their health issues. Against the rise of a ‘digital technocracy’, which masks itself as post-racial and merit-driven but in effect judges a person’s value based on racial identity, gender, class and social worth, Pope Francis’ call to ecological conversion sounds even stronger.
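A minimal sketch of what an explainable scorer can offer that a black box cannot. The weights and features below are hypothetical, not those of any real teacher-evaluation system: alongside its decision, the model reports how much each feature contributed, so an affected person can see and contest the reasoning.

```python
# Hypothetical, transparent linear scorer: the decision comes with a
# per-feature breakdown instead of an unexplained verdict.

WEIGHTS = {"test_scores": 0.4, "peer_review": 0.35, "attendance": 0.25}

def evaluate(features):
    """Return (decision, contributions) for a dict of 0-1 feature values.

    Each contribution is weight * value, so the final score is simply
    their sum and every factor behind the outcome is visible."""
    contributions = {k: WEIGHTS[k] * features[k] for k in WEIGHTS}
    score = sum(contributions.values())
    decision = "satisfactory" if score >= 0.5 else "sub-par"
    return decision, contributions

decision, why = evaluate(
    {"test_scores": 0.3, "peer_review": 0.9, "attendance": 0.8}
)
print(decision)  # 'satisfactory'
# The breakdown shows which factors drove the decision, largest first:
print(sorted(why, key=why.get, reverse=True))
```

This is the trade-off the McKinsey research describes: a linear model like this is usually less performant than an opaque one, but its decisions can be logged, communicated and appealed.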
Seven Principles for discussing AI ethics
The European Commission’s High-Level Expert Group on AI speaks of seven principles when discussing AI ethics. I would like to outline these principles as a conclusion to this article:
- Human agency and Oversight: AIs should support individuals in making better and more informed choices in accordance with their goals. Oversight can be achieved through a human who is either ‘in the loop’ (intervening in every decision cycle), ‘on the loop’ (intervening during the design cycle and monitoring the system’s operation) or ‘in command’ (overseeing the overall activity);
- Technical robustness and safety: AIs need to be resilient against both overt and more subtle attacks to manipulate data or algorithms themselves;
- Privacy and data governance: The user remains the owner and controller of the data;
- Transparency and Explainability: It is important to log and document the decisions of the system while the AI is able to explain such decisions. Moreover, the AI needs to be capable of communicating such decisions;
- Diversity, Non-Discrimination and Fairness: All stakeholders need to be consulted and represented;
- Societal and Environmental well-being: The use of AI systems should be given careful consideration particularly in situations relating to the democratic process, including opinion-formation, political decision-making or electoral contexts;
- Accountability: On this principle, I tend to agree with Tom Strange, who says that humans must remain accountable for every AI decision. According to Bryson, the onus lies on humans for what code is written, when, why and by whom, and for which software and data libraries are used. Thus, it is the responsibility of organisations to monitor their AIs.
Living in a shared home
In conclusion, while reading these principles, I cannot help but hear once again Pope Francis’ recommendations on ecological conversion. He suggests that we move away from a technocratic paradigm which seeks unlimited progress at all costs, and start looking at our world as a shared home. We need to convert our way of thinking, our policies, our lifestyle and our spirituality so as to tread carefully into the future, being neither luddites nor blind technophiles. When making a decision, the question to ask remains: is my decision other-focused?