
Instagram revamps restrictions on teen accounts
 
Credit: Antonio Salaverry/Shutterstock

Washington, D.C. Newsroom, Oct 21, 2025 / 10:00 am (CNA).



Instagram has updated its restrictions on teen accounts to follow PG-13 movie rating guidelines, aiming to prevent teenage users from accessing mature and inappropriate content.

In 2024, Instagram introduced Teen Accounts, which automatically place teens under built-in protections on the app. Last week, the social media platform announced additional updates that limit teenagers to content “similar to what they’d see in a PG-13 movie.”

Teens under 18 will be automatically placed into the updated setting and will not be allowed to opt out without a parent’s permission. The new restrictions ban users from searching inappropriate words and from following or messaging accounts with mature content.

Father Michael Baggot, LC, professor of bioethics at the Pontifical Athenaeum Regina Apostolorum in Rome, said “any change to help empower parents, protect their children, and restrict age-inappropriate content from them is a positive step forward.”

“However, I am concerned because there is quite a difference between static content like a movie that can be thoroughly reviewed by a committee and very dynamic conduct that is performed in social media,” Baggot said in an Oct. 20 interview on “EWTN News Nightly.” 

Social media platforms expose users to cyberbullying, online predators, and artificial intelligence (AI) companions. “Those kinds of dynamic relationships are not necessarily regulated fully with a mere label,” Baggot said.

The updates follow feedback from thousands of parents worldwide who shared their suggestions with Instagram. In response, Instagram also introduced a setting that offers even stricter guidelines for parents who want more extensive limitations.

“Parents have a unique responsibility in constantly monitoring and discussing with their children and with other vulnerable people the type of interactions they’re having,” Baggot said. “But I think we can’t put an undue burden on parents.”

Baggot suggested additional laws that hold companies accountable for “exploitative behavior or design techniques,” because they can “become addictive and really mislead guidance and mislead people.”

AI in social media 

Since Instagram recently introduced AI chatbots to the app, it has also added safeguards for messages sent by AI. The social media platform stated that “AIs should not give age-inappropriate responses that would feel out of place in a PG-13 movie.”

AI on Instagram must be handled with “great vigilance and critical discernment,” Baggot said. AI platforms “can be tools of research and assistance, but they can also really promote toxic relationships when left unregulated.”

Measures to restrict AI and online content are opportunities for parents and users “to step back and look critically at the digitally-mediated relationships that we constantly have” and to “look at the potentially dangerous and harmful content or relationships that can take place there.”

“There should be healthy detachment from these platforms,” Baggot said. “We need healthy friendships. We need strong families. We need supportive communities. Anytime we see a form of social media-related interaction replacing, distracting, or discouraging in-person contact, that should be an … alarm that something needs to change and that we need to return to the richness of interpersonal exchange and not retreat to an alternative digital world.”

Catholic experts say new AI ‘Friend’ device undermines real relationships
 
 A commuter waits at the Westchester/Veterans Metro K Line station on Thursday, Oct. 2, 2025, in Los Angeles. / Credit: Carlin Stiehl/Los Angeles Times via Getty Images

Washington, D.C. Newsroom, Oct 21, 2025 / 07:00 am (CNA).



A controversial ad campaign posted in the New York City subway system has sparked criticism and vandalism over the past few weeks. The print ads are selling an AI companion necklace called “Friend” that promises to be “someone who listens, responds, and supports.”

The device first launched in 2024, retailing at $129. It is designed to listen to conversations, process the information, and send responses to the user’s phone via a connected app. While users can tap the disc’s button to prompt an immediate response, the product will also send unprompted texts. The device’s microphones don’t offer an off switch, so it is constantly listening and sending messages based on conversations it picks up.

CNA asked Friend.com about the success of its subway ad campaign and how many people currently use the devices but did not receive a response. However, Sister Nancy Usselmann, FSP, director of the Daughters of St. Paul’s Pauline Media Studies, who also studies AI, told CNA that “people are turning to AI for companionship because they find human relationships too complicated.”

But “without that complicatedness, we cannot grow to become the best that we can be. We remain stagnant or selfish, which is a miserable existence,” she said.

Creating ‘Friend’ amid loneliness epidemic 

Avi Schiffmann, the 22-year-old who started Friend.com, was a Harvard student before leaving school to focus on a number of projects. At 18, he created a website that tracked early COVID-19 data from Chinese health department sources. In 2022, he built another website that matched Ukrainian refugees with hosts around the world to help them find places to stay. He then founded Friend and now serves as the company’s CEO. 

Schiffmann and his company first turned heads when an eerie video announcing the new gadget was released in July 2024. The advertisement featured four different individuals interacting with their “friends.” One woman takes a hike with her pendant, while another watches a movie with hers. A man gets a text from his “friend” while playing video games with his human friends. He first appears to be sad and lonely around his friends, until his AI “friend” texts him, which appears to put him at ease.

The marketing video ends with a young man and woman spending time together as the woman discusses how she has only ever brought “her” to where they are hanging out, referencing her AI gadget. 

“It’s so strange because it’s awkward to have an AI in between a human friendship,” Usselmann said about the video ad. 

Hundreds took to the comments section of the YouTube video to respond — mostly negatively — to both the Friend.com ads and the technology. Commenters called out the company for capitalizing on loneliness and depression. One user called the video “the most dystopian advertisement” he had ever seen, and others wrote that the video felt like a “horror film.”

“While its creators might have good intentions to bring more people the joys of companionship, they are misguided in trying to achieve this through a digital simulacrum,” Father Michael Baggot, LC, professor of bioethics at the Pontifical Athenaeum Regina Apostolorum in Rome, told CNA.

The device “suffers from a misnomer, since authentic friendship involves an interpersonal relationship of mutual support,” said Baggot, who studies AI chatbots and works on the development of the Catholic AI platform Magisterium AI.

“The product risks both worsening the loneliness epidemic by isolating users from others and undermining genuine solitude by intruding on quiet moments with constant notifications and surveillance. Friend commodifies connection and may exploit human emotional vulnerabilities for profit,” he said, adding: “It might encourage users to avoid the challenging task of building real relationships with people and encourage them to settle for the easily controllable substitute.”

Usselmann agreed. “Only by reaching out in genuine compassion and care can another person who feels lonely realize that they matter to someone else,” she said. “We need to get to know our neighbors and not remain so self-centered in our apartments, neighborhoods, communities, or places of work.”

AI device ad campaign causes stir

In a post to social media platform X on Sept. 25, Schiffmann announced the launch of the subway ad campaign. The post has more than 25 million views and nearly 1,000 comments criticizing the pendant and campaign — and some commending them.

Dozens of the ads have since been torn up and written on. People have posted images to social media of the vandalized ads with messages about the surveillance dangers and the general threats of chatbots. One urged the company to “stop profiting off of loneliness,” while another had “AI is not your friend” written on it.

One person added to the ad’s definition of “friend,” writing that a friend is also a “living being.” The same ad carried the message: “Don’t use AI to cure your loneliness. Reach out into the world!”

Usselmann said the particular issue with the campaign and device is “the tech world assuming certain words and giving them different connotations.” 

“A ‘friend’ is someone with whom you have a bond based on mutual affection,” she said. “A machine does not have real affection because it cannot love. It does not have a spiritual soul from which intellect, moral agency, and love stem.”

She continued: “And from a Christian understanding, a friend is someone who exhibits sacrificial love, who supports through the ups and downs of life, and who offers spiritual encouragement and forgiveness. An AI ‘friend’ can do none of those things.”

Catholic University of America panel explores how Christians should think about AI

From left: Ross Douthat, media fellow at the Institute for Human Ecology; Will Wilson, CEO of AI company Antithesis; Father Michael Baggot, LC, professor of bioethics at the Pontifical Athenaeum Regina Apostolorum in Rome; and Brian J.A. Boyd, director for the Center for Ethics and Economic Justice at Loyola University New Orleans discuss AI and the Church on Sept. 23, 2025, at The Catholic University of America in Washington, D.C. / Credit: Tessa Gervasini/CNA

Washington, D.C., Sep 25, 2025 / 07:00 am (CNA).

The Catholic University of America (CUA) hosted a panel this week to discuss how Christians should think about the developing technology surrounding artificial intelligence (AI).

The Sept. 23 panel was hosted by CUA’s Institute for Human Ecology, which works to identify the economic, cultural, and social conditions vital for human flourishing. The group discussed the threats posed by AI, the future of the technology, and the Church’s place in the conversation. 

Ross Douthat, media fellow at the Institute for Human Ecology, led the discussion between Father Michael Baggot, LC, professor of bioethics at the Pontifical Athenaeum Regina Apostolorum in Rome; Will Wilson, CEO of AI company Antithesis; and Brian J.A. Boyd, director for the Center for Ethics and Economic Justice at Loyola University New Orleans.

Douthat asked the panelists what they each believe to be the greatest threat of the emerging technology as it poses new challenges to the defense of human dignity, justice, and labor.

According to Boyd, the potential loss of human connection is the most prominent threat of AI. He said: “To be human is to be created in and for relationships of love — by love of God. Our nature is made to be receptive to grace.”

AI becomes an issue if “our main relationship and reference point is talking to a computer rather than to humans,” Boyd said. “I think that is an existential threat, and something worth discussing.”

“If we’re habituated to look at the screen before we look at our neighbor … and AI is [the] constant reference point, it will make habits of prayer much more difficult to include. It will make it harder to learn to listen to the voice of God, because the answer is always in your pocket.”

Baggot said his greatest concern is that “artificial intimacy is going to distract us from, and deter us from, the deep interpersonal bonds that are central to our happiness and our flourishing.”

“Companies now grip not only our minds but also are capturing our affections,” Baggot said. “We can all read about these tragic cases of exploitation and manipulation that are only going to continue unless we put proper guardrails in place and also provide the information that allows us to have the kind of deep interpersonal relationships we were made for.”

While many people worry that AI could create “mass unemployment,” Wilson said he disagrees: “I think that this is a very silly fear because human desires and human wants are infinite, and therefore, we always find new things for people to do.”

Rather, Wilson shared his concern that humans will no longer create their own ideas and will lose their intelligence and knowledge.

“The trouble with AI is even if it’s not actually intelligent, it does a very good simulacrum of intelligence, and it’s very tempting to use it to substitute for human intelligence,” Wilson said. “It’s very possible that we’re entering a world where very soon any cognitive labor, any reason, [or] any thought will be a luxury.”

Catholic AI 

While there are dangers to AI, Baggot addressed the positive aspects the tool can offer, highlighting the benefits of Catholic AI companies. 

“I’ve been privileged to work on the Scholarly Advisory Board of Magisterium AI, which is basically a Catholic answer engine that’s very narrowly trained on reliable documents, magisterial documents, [and] theological texts,” Baggot said. 

Magisterium AI is a “system designed to give people reliable responses to their questions about the Catholic faith,” Baggot explained. “This is appealing to Catholics who want to go deeper, but it’s also quite appealing to people who have never really had the chance, or aren’t quite ready, to speak to another human person about their curiosities regarding Catholicism.”

Baggot explained that creators of the technology work hard to keep it from being “anthropomorphic” to avoid users confusing the AI with actual connection. He said: “We do not want people having an intimate relationship with it.”

While Magisterium AI can provide useful information, Baggot acknowledged that it is not a tool for spiritual direction. He said: “Spiritual direction … should be with another living, breathing human being who actually has insight into human experience [and] who can develop a relationship of real empathy and real compassion.”

The Church’s place in AI 

The panelists had differing viewpoints about the Church’s place in AI and how Christians should approach it. Wilson said he believes “the conversation about where the technology is going and what we’re going to do with it is happening among people who do not care … what any Christian church has to say on the topic.”

“It’s actually a little hard to blame them because Christians have basically sacrificed their place at the forefront of science and technology, which is where we were in centuries past,” Wilson said.

“Control goes to those who can deploy the most capital, and capital gets allocated very fast to people who are able to deploy very efficiently. And by and large, those people are not Christians because Christians aren’t really trying.”

Baggot said that while AI does pose dangers, the Church “has a lot of insight and wisdom” that can help guide the conversation. “The Church is in a privileged position to leverage its incredible patrimony, its reflection on the human person, [and] human flourishing.” 

“The Church has reflected a lot about the meaning and value of work, the subjective value of work. It’s not just about economic efficiency, but it’s about how I use my own God-given talents to grow as a person and then also to serve others in intrinsically valuable activities.”
