Artificial Intelligence, and Ian’s school
In 2018, I had the remarkable opportunity to visit my son Ian while he was serving as a Peace Corps Volunteer in Benin, West Africa. Ian taught at a public secondary school in the Ouesse commune of Benin’s Collines Department, with students ranging in age from 10 to 25. The school facilities were very modest (as a side project, he facilitated and raised funds for the construction of the school’s only pit latrines). The classrooms had no exterior walls, and few educational materials were available. As a teacher of English as a second language, he often had up to 70 students at a time in a single classroom.
Ian’s situation was hardly unique. Across the Global South, countries struggle to fund public education even as the under-30 population expands rapidly. Whether much education truly happens in such circumstances is questionable, and its quality is not necessarily improving. Yet in classroom conditions such as these, we somehow expect the future leaders of these countries to get their start. Publicly funded services in health, infrastructure, environmental quality, job creation, poverty alleviation, and so many other basic but essential elements of development lag far behind the funding levels and standards now commonplace in the Global North. The global COVID pandemic, along with rising migration and displacement, has widened this yawning development divide between Global North and South even further.
Please consider teachers working in circumstances like Ian’s, in his classroom in Benin, as an appropriate baseline as we who work in the humanitarian response and international development industry are now called to turn our attention to Artificial Intelligence (AI).
The promise of AI is considerable: improved access to better healthcare, education, and financial services, more transparent (and far less corrupt) governance, more efficient public administration, and many other better services and opportunities. Used well, AI can also enhance agricultural productivity and go a considerable way toward alleviating food insecurity.
Is Ian’s former school, or the public education system in Benin and in similar countries, likely to see such benefits soon? The bumps in the road (and there are many such bumps in the unpaved roads of the Collines Department) are formidable. AI is expensive, and likely unaffordable in such countries. Introducing AI will not be easy, and there is a very tangible risk of exacerbating inequalities if the technology is not managed and deployed equitably. While Benin’s economy is not highly industrialized, AI will still threaten some job losses through automation. And in education, AI may undercut existing “leapfrogging” methods of introducing affordable digital technology to classrooms.
AI operates through machine learning, and that process requires high-quality data at massive scale. As I learned first-hand as a research director at the International Center for Research on Women, such data on conditions in the Global South is in very short supply, owing to very limited funding for high-quality research. How is AI supposed to learn about the Global South?
The questions continue apace. Who gets to teach AI? How are information, and the interpretation of that information, to be managed? Is this part of the localization agenda? If it is, it is a well-hidden component of an important USAID localization commitment, one already bedeviled by a lack of provisions to identify and mitigate existing (pre-AI) conflicts among differing moral values associated with control, accountability, prioritization, and the transfer of more decision-making and financial agency to those in the Global South.
In the Global South, rolling out even the weak AI systems we can now produce (i.e., AI systems that lack self-awareness) entails deep challenges: financial, technical, administrative, political, even conceptual. We are currently spending large sums to overcome such constraints here in the United States and in other Global North countries that already enjoy sophisticated technologies, flexible financial resources, and highly educated workers. While the progress here is astonishing, so too are the complexities involved.
There is no doubt that we ought to proceed in this direction and bring AI to humanitarian response and international development – but with an abundance of caution. The deep learning processes that drive the evolution of AI are largely impenetrable to humans (even to their programmers), raising profound moral and epistemological questions of interpretation, meaning, explainability, and trust: are AI predictions and findings reasonable and reliable? Often we have no way to gauge that reliability. Deep learning is also vulnerable to adversarial attacks, which can induce deliberate errors whose consequences are very hard to anticipate and potentially catastrophic. One of the largest, most secretive, and least accountable areas of Global North investment in AI (frequently of cutting-edge sophistication) is in military applications. Will peacemaking in the Global South fall prey to the immense destructive power of militarized AI?
Applied ethics is all about means and ends. Even weak AI systems are already being tasked to carry out functions without the benefit of a moral compass, so evaluating the moral quality of those means and their intended ends is fraught. Weak AI, in most of its current forms, has little or no capacity to differentiate between good and bad, right and wrong. That type of programming is slowly taking shape in the new field of “machine ethics”, but machine ethics is not yet present at any discernible scale in humanitarian response and international development. That’s no surprise; the formal field of applied ethics is also almost entirely missing from humanitarian response and international development. We simply depend on good people doing the right thing. While the vast majority of people drawn to this work are appropriately described as good and caring, the complexities we face are already formidable – and we haven’t seen anything yet. AI will pull us into moral quandaries of deep complexity, and it will open up opportunities that may be very challenging to distribute equitably. Such transformations will require that we up our game in accessing the expertise and analytical skills of applied ethics, yet no one in USAID leadership now has that on their “to do” list. This is alarming. We dare not stumble our way into AI applications in humanitarian response and international development without a very robust moral compass close at hand.
At present, the foundational premises of humanitarian response and international development rest largely on expressly utilitarian principles of efficiency and effectiveness. We simply and informally trust that implementers will soften and humanize those principles by introducing care, compassion, collaboration, empathy, and similar attributes of what is known as the ethics of care. We hope and – more than we admit – depend upon people honoring moral obligations to recognize and respect universal, equal human dignity and the human rights that serve as indicators of that dignity.
While it will be relatively straightforward for programmers to incorporate efficiency and effectiveness into AI, what about all those other critically important values? Will the “right thing” come down to whatever we program weak AI systems to measure as “utility”, and how to maximize it? If so, on what moral basis is AI supposed to help us allocate and distribute that utility (however defined, and by whom)? Will political economy analysis dominate AI programming in ways that preclude values mapping or any non-utilitarian considerations? Remember: utilitarianism is no friend of human rights, or even of universal human dignity.
The overriding issue for AI developers is not just to prevent abuses and violations of human dignity and human rights, but also to promote them. USAID and other donors currently give far more attention to human rights protection than to the promotion of human rights (and dignity). Why should we expect this to change as AI moves into a more central role in humanitarian response and international development?
So far, I have been talking only about weak AI. We dare not close our eyes to AI’s future, given that many AI researchers are already hard at work on artificial general intelligence (AGI), which could equal or even outperform humans in intellectual tasks. To coexist with humanity, AGI would need an intelligence that is at once cognitive, emotional, and moral. It may be necessary for AGI to know what pain feels like before it can truly appreciate the moral duty not to inflict pain on sentient biological creatures, humans included. Given the deep challenges of poverty, the Global South (and much of the Global North as well) is no stranger to pain. Will AGI respond to our humanity and help us overcome human suffering? Will we need to accept that AGI has moral status, and to recognize that, in time, AGI may attain even higher moral status than humans? These philosophical questions await us, and they may require answers sooner than we anticipate.
As observed by the philosopher Peter Railton at the University of Michigan: “While there are dangers inherent in creating highly capable artificial agents with enough autonomy to question the goals they are given on grounds of harm, bias, or dysfunction, there is greater danger in creating highly capable artificial agents lacking any capacity to do so”[1].
We have a heavy workload of applied ethics ahead of us if we are to bring AI into humanitarian response and international development in responsible ways. When do we start?
[1] Railton, Peter. “Ethical Learning, Natural and Artificial.” Chapter 1 in Ethics of Artificial Intelligence, edited by S. Matthew Liao. Oxford University Press, 2020.