Responsible Artificial Intelligence? We are responsible
By Virginia Dignum / Responsible AI / January 2018
In the last few years, developments in Artificial Intelligence (AI) have accelerated substantially, and with them has come the interest of the media and the general public. As AI systems (e.g. robots, chatbots, avatars and other intelligent agents) move from being tools to being perceived as teammates, perhaps the most important research and development issue is the ethical impact of these systems. What does it mean for an AI system to make a decision? What are the moral, societal and legal consequences of its actions and decisions? Can an AI system be held accountable for its actions? How can these systems be controlled once their learning capabilities bring them into states that are possibly only remotely linked to their initial, designed setup?
These and many other related questions are the focus of much attention. The way AI systems deal with these questions will in large part determine our level of trust in them and, ultimately, the impact of AI on society and the future of AI itself.
AI is already changing our lives
Contrary to the frightening images in the media and popular fiction of a dystopic future in which AI systems dominate the world and are mostly concerned with warfare, Artificial Intelligence (AI) is already changing our daily lives, almost entirely in ways that improve human health, safety, and productivity [1]. Nevertheless, to ensure that such dystopic futures do not become reality, these systems must be introduced in ways that build trust and understanding, and respect human and civil rights. The need for ethical considerations in the development of intelligent interactive systems has become one of the most influential areas of research in the last few years, and has led to several initiatives from both researchers and practitioners, including the IEEE initiative on Ethics of Autonomous Systems, the Foundation for Responsible Robotics and the Partnership on AI, to name a few.
Ethics & AI
In all areas of application, AI reasoning must be able to take into account societal values and moral and ethical considerations, weigh the respective priorities of values held by different stakeholders in multiple multicultural contexts, explain its reasoning, and guarantee transparency. As the capabilities for autonomous decision-making grow, perhaps the most important issue to consider is the need to rethink responsibility [2]. Whatever their level of autonomy, social awareness and ability to learn, AI systems are artefacts, constructed by people to fulfil some goals. Theories, methods and algorithms are needed to integrate societal, legal and moral values into technological developments in AI at all stages of development (analysis, design, construction, deployment and evaluation). These frameworks must deal not only with the autonomous reasoning of the machine about issues that we consider to have ethical impact; most importantly, we need frameworks to guide design choices, to regulate the reach of AI systems, to ensure proper data stewardship, and to help individuals determine their own involvement. These considerations show that ethics and AI are related at several levels:
Ethics by Design: the technical/algorithmic integration of ethical reasoning capabilities as part of the behaviour of artificial autonomous systems;
Ethics in Design: the regulatory and engineering methods that support the analysis and evaluation of the ethical implications of AI systems as these integrate or replace traditional social structures;
Ethics for Design: the codes of conduct, standards and certification processes that ensure the integrity of developers and users as they research, design, construct, employ and manage artificial intelligent systems.
Responsible Artificial Intelligence is fundamentally about human responsibility for the development of intelligent systems in line with fundamental human principles and values, so as to ensure human flourishing and wellbeing in a sustainable world. Responsible AI is more than ticking some ethical ‘boxes’ in a report, developing a few add-on features, or installing switch-off buttons in AI systems. Rather, responsibility is fundamental to autonomy and should be one of the core stances underlying AI research.
It is up to us
It is up to us to decide. Are we building algorithms to maximise shareholder profit, or to maximise the fair distribution of resources in a community, by providing solutions to tragedy-of-the-commons situations and ensuring free access to information and education for all? To optimise company performance, or to optimise crop yields for small farmers around the world, by providing real-time information on fertiliser levels, planting and harvesting moments, and weather conditions? To improve proficiency in playing Go, or to improve cross-cultural communication, by providing better contextualised translation services?
We are responsible.
[1] See also https://ai100.stanford.edu/
[2] See also https://arxiv.org/abs/1706.02513