
The poor in a world dominated by ‘big data’

Artificial Intelligence (AI) is reshaping human experience in ways not visible to, nor fully apprehended by, the vast majority of the world’s population. The explosion of AI is having a notable impact on our present rights and future opportunities, shaping the decision-making processes that affect everyone in today’s society.

Enormous technological change is occurring. It promises great benefits and poses insidious risks. The balance of risks and benefits will depend on the pioneers and creators of this technology and, in particular, on the clarity of their vision of the common good and the accuracy of their understanding of human experience.[1]

We need to understand that Artificial Intelligence is both a challenge and an opportunity for the Church. It is a social justice issue. In fact, the pressing, greedy and non-transparent search for big data, i.e. the data needed to feed machine learning engines,[2] can lead to the manipulation and exploitation of the poor: “The poor in the 21st century, as well as the cash poor, are the ignorant, the naive and the exploited in a data-dominated world.”[3] Moreover, the very purposes toward which AI systems are geared can lead them to interact in unpredictable ways, with the result that the poor are controlled, monitored and manipulated.

Presently the creators of AI systems are increasingly the arbiters of truth for consumers. But the philosophical challenges of understanding truth, knowledge and ethics multiply as AI capabilities grow toward and surpass human cognitive limits.[4] As the 21st century progresses, the Church’s experience and formation should be essential gifts that help populations formulate an approach to controlling AI rather than being controlled by it.

The Church is called to reflect and to work. In the political and economic spheres where AI is fostered, there is a need to introduce an ethical and spiritual framework. AI is a discipline and a community hungry for evangelization in the 21st century. The Church’s response to this new era has to be one of informing and inspiring the hearts of the many thousands of people involved in the creation and formulation of artificial intelligence systems. In the final analysis, it is ethical decisions that determine and frame what problems an AI system should address, how its code is written, and how the data that feeds its machine learning is collected. The code written today will be the foundation of AI systems for years to come.

What we identify as the challenge of the evangelization of AI combines Pope Francis’ emphasis on the importance of seeing the world from the periphery with the experience of the 16th-century Jesuits, whose pragmatic approach to influencing the influential could be reworded today as sharing discernment with data scientists.

What is artificial intelligence?

The definition and dream of AI have been with us for over sixty years. AI is the ability of a computer, or a computer-controlled robot, to perform tasks commonly associated with intelligent beings, such as reasoning, discovering meaning, generalizing, or learning from past experience.

AI’s long development reflects an evolution in thinking about how machines can learn, accompanied by a recent radical improvement in computing capacity. First came the idea of AI itself, then machine learning, and more recently neural networks and deep learning.

Basic machine learning is the first level of AI. Traditional software consists of handwritten instructions for carrying out a particular task. Machine learning, by contrast, employs algorithms that sort large amounts of data, build a mathematical model from that data, and then make determinations or predictions without task-specific instructions. The result is still a single, well-defined task – hence the term “Narrow AI” – but one that the machine usually performs better than humans can. Examples of Narrow AI include activities such as image classification or face recognition.
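
To make the contrast concrete, here is a minimal sketch in Python using the scikit-learn library (our choice purely for illustration; the dataset and classifier are assumptions, not anything referenced in this article). Instead of hand-writing rules for recognizing digits, a model is built from labeled examples:

```python
# A minimal sketch of "narrow AI": a model is built from labeled data
# rather than from hand-written, task-specific instructions.
# Assumes scikit-learn is installed; its bundled digits dataset is used
# purely for illustration.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 labeled 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# The "mathematical model" is learned from the data, not coded by hand.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# The model now classifies images it received no explicit instructions for.
print("Test accuracy:", model.score(X_test, y_test))
```

The program contains no rules about what a “3” or a “7” looks like; everything task-specific is extracted from the data, which is precisely why the quality of that data matters so much.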

Neural networks are sets of algorithms, modeled loosely on the human brain, that process data through successive layers of connections in order to recognize patterns and provide predictive analytics. Deep learning refers to neural networks with many such layers, trained with huge amounts of data so that they can automatically learn representations from the data without human input.
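
The layered structure can be sketched in a few lines of Python with NumPy (an illustrative assumption; real systems use dedicated frameworks and learn their weights from data rather than drawing them at random):

```python
# A minimal sketch of a neural network's structure: each layer applies
# weighted "connections" followed by a nonlinearity, and deep learning
# stacks many such layers. Weights are random here for illustration only;
# in practice they are learned from large amounts of training data.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer: weighted connections plus ReLU activation."""
    w = rng.normal(size=(x.shape[-1], n_out))  # connection weights
    b = np.zeros(n_out)                        # biases
    return np.maximum(0, x @ w + b)            # ReLU nonlinearity

x = rng.normal(size=(1, 64))   # e.g. a flattened 8x8 image
h1 = layer(x, 32)              # first hidden layer
h2 = layer(h1, 16)             # deeper, more abstract representation
out = layer(h2, 10)            # output layer, e.g. scores for 10 classes
print(out.shape)               # (1, 10)
```

Each successive layer re-describes its input in terms of patterns found in the previous one; training consists in adjusting the connection weights so that those descriptions become useful.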

Benefits

Silently but quickly, AI is reshaping our entire economy and society: how we vote and how government governs, predictive policing, how judges pass sentences, how we access financial services and our credit scores, the products and services we purchase, our housing, the media we consume, the news we read, automatic translations of texts and speech. AI increasingly designs and helps drive and navigate our cars, determines how we get a loan to buy our cars, decides which roads should be repaired, identifies if we have broken the road rules and even determines whether we should be imprisoned if we have. These are just some of the many AI contributions already in place.

AI can assemble and consider far more data points and elements than humans can, and often produces outcomes that are less biased and less opaque than those of human decision makers. Examples range from the prevention of medical errors to increased productivity and reduced risks in the workplace. Machine learning can improve job descriptions and support better recruitment processes. Written well, algorithms can be more impartial and pick up patterns people may miss.[5]

Researchers Mark Purdy and Paul Daugherty write: “The impact of AI technologies on business is projected to increase labor productivity by up to 40 percent and enable people to make more efficient use of their time.”[6] The World Bank is exploring the benefits of AI for development. Others identify farming, resource provision and healthcare as sectors of developing economies that will benefit greatly from the application of AI. Artificial intelligence will also contribute to less pollution and less economic waste.

Artificial intelligence for social justice

AI can certainly be a force for social good, but it also presents social justice issues. The Church has an opportunity and an obligation to bring its social justice teaching, voice and status to bear on some of the most foundational issues for the future. Some of the important social justice issues include the impact on employment for billions over the next decades, and addressing the problems of bias and the further marginalization of the poor and vulnerable.

Impact on employment. Much has been made of the impact of AI and robotics on jobs, especially since Frey and Osborne’s 2013 article estimating that 47 percent of jobs in the US were “at risk” of being automated within the next 20 years.[7] Further study and debate have traced the exact nature of this impact: the full or partial erosion of existing job tasks, and its distribution across sectors and across developed, emerging and developing economies.

Forecasting in such areas is inherently difficult, but a recent summary by the McKinsey Global Institute offers a middle-ground analysis: 60 percent of occupations have at least 30 percent of constituent work activities that could be automated. Automation will also create new occupations that do not exist today, much as technologies of the past have done. Scenarios suggest that by 2030, 75 million to 375 million workers (3 to 14 percent of the global workforce) will need to switch occupational categories. Moreover, all workers will need to adapt as their occupations evolve alongside increasingly capable machines.[8]

If the pace of AI adoption continues to outpace that of previous major technologies, the scale of social dislocation is likely to be greater.[9]

Codes and prejudices. Code is written by human beings, and its complexity can accentuate the defects that inevitably accompany any task we perform. Preconceptions and bias in writing algorithms are inevitable, and they can have very negative effects on individual rights, choice, worker placement and consumer protection.

In fact, researchers have discovered bias in the algorithms for systems used for university admissions, human resources, credit ratings, banking, child support systems, social security systems, and more. Algorithms are not neutral: they incorporate built-in values and serve business models that may lead to unintended biases, discrimination or economic harm.[10]

The increasing dependency of society and the economy on AI imparts tremendous power to those who write its code, and they may not even be aware of this power, or of the potential harm that an incorrectly coded algorithm may cause. Because the complex web of interacting AI systems continues to evolve, it is also likely that algorithms that were innocuous yesterday will have significant impact tomorrow.

AI can be distorted by specific commercial and political interests that influence the framing of the problem; by selection bias or corruption in data collection; by bias in the selection of attributes during data preparation; and by bias in coding. The result can be significantly flawed outputs delivered under the guise of “independent” automated decision making.
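
A toy sketch can show how one of these mechanisms – selection bias in data collection – flows through to an apparently “independent” automated decision. All names, numbers and behavior here are invented for illustration (Python with scikit-learn assumed):

```python
# Toy illustration: two groups with identical true behavior, but biased
# data collection makes a model score one group as riskier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Groups A (0) and B (1) both repay loans 70 percent of the time.
n = 2000
group = rng.integers(0, 2, n)
repaid = (rng.random(n) < 0.7).astype(int)

# Biased collection: defaults in group B are over-represented in the
# training set, e.g. because group B was historically scrutinized more.
keep = (group == 0) | (repaid == 0) | (rng.random(n) < 0.4)
model = LogisticRegression().fit(group[keep].reshape(-1, 1), repaid[keep])

# The model now rates group B as riskier, though true behavior is identical.
print("P(repay | A):", model.predict_proba([[0]])[0, 1])
print("P(repay | B):", model.predict_proba([[1]])[0, 1])
```

Nothing in the code itself is malicious; the flaw enters upstream, in how the data were gathered, and the model simply launders it into a numerical score.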

Risk of further marginalization of the vulnerable. A society-level analysis of the impact of big data and AI shows that their tendency toward profiling and toward decisions based on limited evidence results in the further marginalization of the poor, the needy and the vulnerable.[11]

Political scientist Virginia Eubanks explains well how interrelated systems reinforce discrimination and narrow life opportunities for the marginalized: “Poor and working-class people are targeted by new tools of digital poverty management and face life-threatening consequences as a result. Automated eligibility systems discourage them from claiming public resources that they need to survive and thrive. Complex integrated databases collect their most personal information, with few safeguards for privacy or data security, while offering almost nothing in return. Predictive models and algorithms tag them as risky investments and problematic parents. Vast complexes of social service, law enforcement, and neighborhood surveillance make their every move visible and offer up their behavior for government, commercial, and public scrutiny.”[12]

The struggle for the truth

Artificial Intelligence, both in its present manifestations and in those of the coming decades and centuries, is also a philosophical challenge. The pervasiveness of AI, married to the saturating digitization of daily human experience, means that the purposes of AI engines are increasingly defining what is important and accepted in society.

AI changes the way we think and our basic concepts about the world. By choosing which questions are answered, and by controlling – and alone understanding clearly – what the training data actually represent, AI owners become the arbiters of truth for consumers.

The algorithms tend to focus on utility and profit. It will be all too convenient for people to follow the advice of an algorithm (or too difficult to reject it), so much so that these algorithms will turn into self-fulfilling prophecies, and users into zombies who consume only what is put under their noses.

Protected by intellectual property claims and by opaque coding and training data, AI engines are effectively black boxes that can deliver unverifiable inferences and predictions. “AI has the capability to shape individuals’ decisions without them even knowing it, giving those who have control of the algorithms an unfair position of power.”[13]

With its complex and opaque decision making, there is a tendency among some to see AI as separate from the human agency involved in building, populating and interpreting it. This is a grave error that misunderstands the true role of the human within the algorithm: humans need to be held accountable for the products of algorithmic decision making.[14] Yet some deep learning machines are starting to challenge the boundaries of human responsibility.[15]

These various manifestations of an intelligence that is neither human nor even biological pose essential questions of metaphysics, epistemology, ethics and political theory. The Church should bring its experience and expertise in these fields to assist society in adapting AI – or adapting to it.

A failure to appreciate the philosophical and anthropological challenges of AI could result in the servant becoming the master. As the cosmologist Stephen Hawking warned: “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many… it would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”[16]

Engagement of societies and governments

Over recent years there has been an increasing call by technical and scientific leaders, unions,[17] civil society and technology companies themselves[18] for governments to intervene to ensure that human control and values are mandated in AI development. In May 2019, important progress was made when the 36 member countries of the OECD agreed on the OECD Principles on Artificial Intelligence.[19] These complemented the Ethics Guidelines for Trustworthy AI adopted by the European Commission’s High-Level Expert Group on AI in April 2019.[20]

The OECD Principles’ goal is to promote innovative and trustworthy AI that respects human rights and democratic values. They identify five complementary principles and five recommendations pertaining to national policies and international co-operation. The principles are: inclusive growth, sustainable development and well-being; human-centered values and fairness; transparency and explainability; robustness, security and safety; and the accountability of those who develop, distribute and manage AI systems.

The recommendations are: invest in AI research and development; foster a digital ecosystem for AI; shape an enabling policy environment for AI; build human capacity and prepare for labor market transformation; promote international co-operation for trustworthy AI.

In June 2019, the G20 group of countries drew on the OECD Principles to adopt the non-binding G20 AI Principles.[21] The challenge for the next several years is twofold: to spread these or similar principles further across the international community; and to develop practical steps that make them operational within the G20 and the newly created OECD AI Policy Observatory.

There is an opportunity opening up for the Church to reflect on these policy objectives and to contribute in local, national and international fora, promoting an approach consistent with Catholic social teaching.

Evangelizing AI?

While the above suggestions for government and societal engagement are important, at the heart of AI are individuals designing systems, writing code, and collecting and processing data. It is the mindsets and decisions of these individuals that determine the degree to which future AI will meet appropriate ethical and human-centered standards. Presently these individuals are a technical elite of code writers and data scientists – more likely measured in hundreds of thousands than in millions.

Here is an opportunity for Christians and for the Church to live the culture of encounter, and to live and offer authentic personal fulfillment within this particular community. To continue bringing the Gospel and the Church’s deep experience of ethics and social justice to data scientists and software engineers is a blessing for all – and also the most likely way to change the culture and practice of AI for the better.[22]

The evolution of Artificial Intelligence will be a powerful force shaping the 21st century. The Church is called to listen, reflect and engage with an ethical and spiritual framework for the AI community, and thus serve the universal community. In the tradition of Rerum Novarum, there is here a call to social justice. There is a need for discernment. The Church’s voice is needed in the ongoing policy discussions about how to define and implement ethical principles for AI.


DOI: La Civiltà Cattolica, En. Ed. Vol. 4, no. 03 art. 1, 0320: 10.32009/22072446.0320.1

[1]. Cf. G. Cucci, “For a digital humanism” in Civ. Catt. 2020 I 27-40.

[2]. Big data indicates a collection of data so extensive that it requires specific technology and analytical methods for the extraction of value or knowledge and the discovery of links between different phenomena and the prediction of future ones.

[3]. M. Kelly – P. Twomey, “Big Data and Ethical Challenges” in Civ. Catt. En. July, 2018. https://www.laciviltacattolica.com/big-data-and-ethical-challenges/

[4]. Cf. A. Spadaro – T. Banchoff, “Artificial intelligence and the human person. Chinese and Western perspectives” in Civ. Catt. En. July, 2019. https://www.laciviltacattolica.com/artificial-intelligence-and-the-human-person-chinese-and-western-perspectives

[5]. Cf. J. Angwin – J. Larson – S. Mattu – L. Kirchner, “Machine Bias” in ProPublica, May 23, 2016 (www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing).

[6]. M. Purdy – P. Daugherty, Why Artificial Intelligence is the Future of Growth (www.accenture.com/us-en/insight-artificial-intelligence-future-growth).

[7]. Cf. C. B. Frey – M. A. Osborne, “The Future of Employment: How Susceptible Are Jobs to Computerization?” in Technological Forecasting and Social Change 114 (2017) 254-280.

[8]. Cf. McKinsey Global Institute, “Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation” (www.mckinsey.com).

[9]. See discussion in S. Lohr, “A.I. Will Transform the Economy. But How Much, and How Soon?” in New York Times, November 30, 2017.

[10]. For example, media reports have pointed out clear racial bias resulting from reliance on sentencing algorithms used by many US courts. See R. Wexler, “When a Computer Program Keeps You in Jail” in ibid., June 13, 2017.

[11]. Cf. J. Obar – B. McPhail, “Preventing Big Data Discrimination in Canada: Addressing Design, Consent and Sovereignty Challenges” Wellington, Centre for International Governance Innovation, 2018 (www.cigionline.org/articles/preventing-big-data-discrimination-canada-addressing-design-consent-and-sovereignty).

[12]. V. Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, New York, St Martin’s Press, 2018, 11.

[13]. Ibid.

[14]. Cf. L. Jaume-Palasí – M. Spielkamp, “Ethics and algorithmic processes for decision making and decision support” in AlgorithmWatch Working Paper, N. 2, 6-7 (algorithmwatch.org/en/publication/ethics-and-algorithmic-processes-for-decision-making-and-decision-support).

[15]. Cf. M. Tegmark, Vita 3.0. Essere umani nell’era dell’intelligenza artificiale, Milan, Raffaello Cortina, 2017.

[16]. R. Cellan-Jones, “Stephen Hawking Warns Artificial Intelligence Could End Mankind” in BBC News (www.bbc.com/news/technology-30290540), December 2, 2014.

[17]. Cf. Top 10 Principles for Workers’ Data Privacy and Protection, UNI Global Union, Nyon, Switzerland, 2018.

[18]. Cf. Microsoft, The Future Computed, Redmond, 2017 (news.microsoft.com/cloudforgood/_media/downloads/the-future-computed-english.pdf).

[19]. OECD Principles on AI (www.oecd.org/going-digital/ai/principles/), June 2019.

[20]. Cf. European Commission, Ethics guidelines for trustworthy AI (ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai), April 8, 2019.

[21]. Cf. G20 Ministerial Statement on Trade and Digital Economy (www.mofa.go.jp/files/000486596.pdf), June 2019.

[22]. Research by the Pew Charitable Trusts has shown that AI algorithms are compiled primarily to optimize efficiency and profitability, treating human beings as a mere input into the process rather than as real, sentient, sensitive and changeable beings. The result would be an altered society, conditioned by the logic of algorithms. To counter this perception and the consequent risks of bias in AI, commitment to the definition of purposes and to the collection and use of data is fundamental. As ethics expert Thilo Hagendorff says: “The checkboxes to be ticked should not be the only ‘tools’ of AI ethics. A transition […] to a situation-sensitive ethical criterion based on personal virtues and dispositions, expansion of knowledge, responsible autonomy and freedom of action is necessary” (T. Hagendorff, “The Ethics of AI Ethics – An Evaluation of Guidelines” [arxiv.org/abs/1903.03425], February 28, 2019).