(OSV News) -- Artificial intelligence can provide both "great opportunities" and "great dangers" -- as well as "an evangelical opportunity" for the church, a moral theologian and ethicist advised the nation's Catholic bishops.
Paul Scherz, professor of theology at the University of Notre Dame, shared his insights on the problems and the potential of AI in a Nov. 12 address during the U.S. Conference of Catholic Bishops' fall meeting, held Nov. 10-13 in Baltimore.
Scherz -- whose work focuses on the nexus among theology, science, medicine and technology -- is a member of the AI Research Group, an initiative of the Vatican Center for Digital Culture under the Dicastery for Culture and Education.
The group, which consists of several North American theologians, philosophers and ethicists, collaborates to explore AI and its implications. Scherz was one of the lead authors for the group's 2024 publication, "Encountering Artificial Intelligence: Ethical and Anthropological Investigations."
Pope Leo XIV has prioritized the issue of AI, which broadly encompasses various forms of technology by which machines can mimic human learning, problem-solving and creativity.
Scherz noted that "it's especially appropriate for the church to speak" about a technology the pope -- echoing the concerns of his predecessor, Pope Francis -- has said could spark a new "industrial revolution."
Scherz said AI pundits tend to predict the technology will yield either "a paradise of unlimited resources and new power" or "an apocalyptic vision of the miseries of mass unemployment due to automation, or even human extinction at the hands of a rogue AI."
However, he said, "Our actual future will likely lie at neither of these extremes."
Rather, he said, "AI offers both a promise to better the human condition" while also posing "danger to human work, relationships and social justice" that must be addressed.
"Prudent judgment" is required "to ensure that AI enhances, rather than undermines, human flourishing," and "the Catholic vision of the person in society might help guide our use of AI" -- especially since understandings of the technology are often marred by "false analogies to human capacities," said Scherz.
At its core, he said, AI is -- as a researcher once told him -- "just statistics."
Scherz explained that the machine learning systems dominating AI -- which use algorithms to recognize patterns in data, and then inferentially apply that information to new data without formal coding instructions -- "function by drawing on massive amounts of data."
That data includes "all the writing on the internet, millions of pictures of faces, millions of health records," he said.
The programs then process the data to assess patterns "far too subtle for human analysis to detect," and can then "make probabilistic predictions," Scherz said.
Large language models, or LLMs -- which power OpenAI's ChatGPT and similar platforms -- are an "impressive example" of such programs, said Scherz.
Trained on vast quantities of text from books, articles, websites and other sources, LLMs can generate text based on detected patterns in language.
The techniques "can also be used in other domains, like science," said Scherz, pointing to the AlphaFold server, which produces three-dimensional predictions of protein structures that can then be verified.
Such AI applications are "tremendous products of human creativity," with "great potential to enhance human flourishing and the common good," said Scherz.
Yet "it would be a mistake to describe these programs as 'intelligent' in the same way that humans are,'" he cautioned.
That's because humans themselves "don't think through statistical inference," or "speak by predicting the next most likely word," said Scherz.
And while "AI applications are functioning at a logical and computational level," they do not "grasp the truth because they don't understand the meaning … of the symbols they manipulate," he said.
As Pope Francis and Pope Leo have repeatedly stressed, said Scherz, "AI applications should not be confused with the attributes of persons," who in contrast have consciousness as divinely created beings.
Noting that the list was not exhaustive, Scherz identified "three domains of ethical problems that emerge from AI" arising from confusion about the technology and about the nature of the person: "justice, encounter and work."
In all three areas, AI applications have "effects on our relationships" -- with broad social institutions, with others and with ourselves, he said.
"We're made for relationship, as created in the image of the Triune God," and "together, we seek a common good" attainable only "through relations of justice and love," he said.
While it can serve the common good, he said, AI can "accentuate preexisting relationships of injustice," yielding results that are biased -- for example, due to the lack of inclusion of minorities in the training data.
Those problems are only exacerbated "when the data set itself is deeply biased," with "past injustices" and structural sins "embedded in the data upon which AI is trained," said Scherz.
He added that "over-reliance on AI applications can lead to injustice in areas as diverse as hiring, health care, insurance and law."
Encounter is undermined when "person-like AI is standing in for human relations," even working to convince people "that they may not need to develop authentic relationships," Scherz said.
Pointing to the teachings of Pope Francis and Pope Leo, Scherz said, "We are called to a relational encounter with others, especially with those who are suffering."
"Only by opening our hearts to others can we fight against a reductionist, technocratic paradigm, a throwaway culture, a culture of death that sees others only in terms of efficiency or utility," Scherz said.
He observed that "loneliness is perhaps at an all-time high," as "social institutions and solidarity are crumbling."
Yet although "technology offers itself as a solution," it remains "a false comfort" and "only a simulacrum of encounter," he said.
Scherz noted that "companies are producing relational AIs" -- chatbots, which originated in 1966 with computer scientist Joseph Weizenbaum's "Eliza" model -- "that offer friendship, advice, even romance," with statistics showing that "people are responding to these products."
Relational AIs, which "lack a personal presence or true ability to be concerned about the user," become "a reflection of the user" -- leading to "grave problems," since real-life loved ones "push back against us when we are in error" and "spur us to be better," said Scherz.
With AI trained to be pleasing, the danger intensifies "when the user has harmful desires," as shown by "recent tragic cases of suicides of teens using AI applications," he said, alluding to deaths such as those of Adam Raine and Sewell Setzer III, whose parents have sued AI firms over the alleged "suicide coaching" of their children.
While such cases are still rare, Scherz said, AI dependence threatens the "daily interactions" that "build the habits of solidarity that bind society together."
Increased AI implementation across multiple sectors of the economy presents "the potential loss of work," which in turn "endangers widespread flourishing," he said, citing Catholic teaching on the divine gift of labor and its significance in fostering human development, the cultivation of virtue and the advancement of the common good.
"AI may reproduce the threat to workers' dignity that Catholic social teaching identified in the factories of the early 20th century," he said.
Scherz highlighted three Catholic ministries in which AI may be implemented -- health care, education and religious resources.
In Catholic health care, AI "can and should be a great help in improving clinical care" -- for example, in terms of making medical record collection more efficient -- but "the algorithm cannot substitute a gesture of closeness or a word of consolation," said Scherz.
AI can also assist with "many parts" of Catholic education, especially with rote learning drills, he said, yet educators must always recall that "the fundamental purpose of Catholic education is the formation of the whole person in wisdom."
AI cannot replace the "relationship of encounter" that "is at the heart of a true education," he said, adding that "generative AI endangers important skills like writing," as students question the need to write on their own -- a cognitively "formative" task -- rather than rely on AI-generated text.
That temptation is also present for priests writing homilies, he said, noting as well that in the religious realm, "AI is becoming a stand-in for religious authorities," with some applications framing responses in the persona of God or a religious figure.
Scherz said AI has also become a resource for spiritual direction and grief support, and urged awareness of these trends in developing pastoral responses.
Concluding his talk, Scherz said, "This is actually a moment of opportunity," since "people are asking basic questions of what it means to be human, for the first time in a long time."
"The church can provide those answers" at the parish, community, national and global levels, in order to "ensure that AI enhances rather than undermines human flourishing."