
Instagram revamps restrictions on teen accounts
 
Credit: Antonio Salaverry/Shutterstock

Washington, D.C. Newsroom, Oct 21, 2025 / 10:00 am (CNA).





Instagram has updated its restrictions on teen accounts to follow PG-13 movie rating standards, with the aim of preventing teenage users from accessing mature and inappropriate content.

In 2024, Instagram introduced Teen Accounts, which automatically place teens under built-in protections on the app. Last week, the platform announced additional updates so the accounts only show teenagers content “similar to what they’d see in a PG-13 movie.”

Users under 18 will be placed automatically into the updated setting and will not be allowed to opt out without a parent’s permission. The new restrictions bar users from searching inappropriate terms and from following or messaging accounts with mature content.

Father Michael Baggot, LC, professor of bioethics at the Pontifical Athenaeum Regina Apostolorum in Rome, said “any change to help empower parents, protect their children, and restrict age-inappropriate content from them is a positive step forward.”

“However, I am concerned because there is quite a difference between static content like a movie that can be thoroughly reviewed by a committee and very dynamic conduct that is performed in social media,” Baggot said in an Oct. 20 interview on “EWTN News Nightly.” 

Social media platforms expose users to cyberbullying, online predators, and artificial intelligence (AI) companions. “Those kinds of dynamic relationships are not necessarily regulated fully with a mere label,” Baggot said.

The updates follow feedback from thousands of parents worldwide who shared their suggestions with Instagram. In response, Instagram also added a setting that offers even stricter limits for parents who want more extensive restrictions.

“Parents have a unique responsibility in constantly monitoring and discussing with their children and with other vulnerable people the type of interactions they’re having,” Baggot said. “But I think we can’t put an undue burden on parents.”

Baggot suggested additional laws that would hold companies accountable for “exploitative behavior or design techniques,” because they can “become addictive and really mislead guidance and mislead people.”

AI in social media 

Because Instagram recently introduced AI chatbots to the app, it also added safeguards to messages sent by AI. The platform said that “AIs should not give age-inappropriate responses that would feel out of place in a PG-13 movie.”

AI on Instagram must be handled with “great vigilance and critical discernment,” Baggot said. AI platforms “can be tools of research and assistance, but they can also really promote toxic relationships when left unregulated.”

Measures to restrict AI and online content are opportunities for parents and users “to step back and look critically at the digitally-mediated relationships that we constantly have” and to “look at the potentially dangerous and harmful content or relationships that can take place there.”

“There should be healthy detachment from these platforms,” Baggot said. “We need healthy friendships. We need strong families. We need supportive communities. Anytime we see a form of social media-related interaction replacing, distracting, or discouraging in-person contact, that should be an … alarm that something needs to change and that we need to return to the richness of interpersonal exchange and not retreat to an alternative digital world.”

Catholic experts say new AI ‘Friend’ device undermines real relationships
 
 A commuter waits at the Westchester/Veterans Metro K Line station on Thursday, Oct. 2, 2025, in Los Angeles. / Credit: Carlin Stiehl/Los Angeles Times via Getty Images

Washington, D.C. Newsroom, Oct 21, 2025 / 07:00 am (CNA).





A controversial ad campaign posted in the New York City subway system has sparked criticism and vandalism over the past few weeks. The print ads are selling an AI companion necklace called “Friend” that promises to be “someone who listens, responds, and supports.”

The device first launched in 2024, retailing at $129. It is designed to listen to conversations, process the information, and send responses to the user’s phone via a connected app. While users can tap the disc’s button to prompt an immediate response, the product will also send unprompted texts. The device’s microphones don’t offer an off switch, so it is constantly listening and sending messages based on conversations it picks up.

CNA asked Friend.com about the success of its subway ad campaign and how many people are currently using the devices but did not receive a response. Sister Nancy Usselmann, FSP, director of the Daughters of St. Paul’s Pauline Media Studies, who also studies AI, told CNA that “people are turning to AI for companionship because they find human relationships too complicated.” 

But “without that complicatedness, we cannot grow to become the best that we can be. We remain stagnant or selfish, which is a miserable existence,” she said.

Creating ‘Friend’ amid loneliness epidemic 

Avi Schiffmann, the 22-year-old who started Friend.com, was a Harvard student before leaving school to focus on a number of projects. At 18, he created a website that tracked early COVID-19 data from Chinese health department sources. In 2022, he built another website that matched Ukrainian refugees with hosts around the world to help them find places to stay. He then founded Friend and now serves as the company’s CEO. 

Schiffmann and his company first turned heads when an eerie video announcing the new gadget was released in July 2024. The advertisement featured four different individuals interacting with their “friends.” One woman takes a hike with her pendant, while another watches a movie with hers. A man gets a text from his “friend” while playing video games with his human friends. He first appears to be sad and lonely around his friends, until his AI “friend” texts him, which appears to put him at ease.

The marketing video ends with a young man and woman spending time together as the woman discusses how she has only ever brought “her” to where they are hanging out, referencing her AI gadget. 

“It’s so strange because it’s awkward to have an AI in between a human friendship,” Usselmann said about the video ad. 

Hundreds took to the comments section of the YouTube video to respond — mostly negatively — to both the Friend.com ads and the technology. Commentators called out the company for capitalizing on loneliness and depression. One user even called the video “the most dystopian advertisement” he had ever seen, and others wrote the video felt like a “horror film.”

“While its creators might have good intentions to bring more people the joys of companionship, they are misguided in trying to achieve this through a digital simulacrum,” Father Michael Baggot, LC, professor of bioethics at the Pontifical Athenaeum Regina Apostolorum in Rome, told CNA.

The device “suffers from a misnomer, since authentic friendship involves an interpersonal relationship of mutual support,” said Baggot, who studies AI chatbots and works on the development of the Catholic AI platform Magisterium AI.

“The product risks both worsening the loneliness epidemic by isolating users from others and undermining genuine solitude by intruding on quiet moments with constant notifications and surveillance. Friend commodifies connection and may exploit human emotional vulnerabilities for profit,” he said, adding: “It might encourage users to avoid the challenging task of building real relationships with people and encourage them to settle for the easily controllable substitute.” 

Usselmann agreed. “Only by reaching out in genuine compassion and care can another person who feels lonely realize that they matter to someone else,” she said. “We need to get to know our neighbors and not remain so self-centered in our apartments, neighborhoods, communities, or places of work.”

AI device ad campaign causes stir

In a post to social media platform X on Sept. 25, Schiffmann announced the launch of the subway ad campaign. The post has more than 25 million views and nearly 1,000 comments criticizing the pendant and campaign — and some commending them.

Dozens of the ads have since been torn up and written on. People have posted images to social media of the vandalized ads with messages about the surveillance dangers and the general threats of chatbots. One urged the company to “stop profiting off of loneliness,” while another had “AI is not your friend” written on it.

One person added to an ad’s definition of “friend,” writing that a friend is also a “living being.” The same ad carried the message: “Don’t use AI to cure your loneliness. Reach out into the world!”

Usselmann said the particular issue with the campaign and device is “the tech world assuming certain words and giving them different connotations.” 

“A ‘friend’ is someone with whom you have a bond based on mutual affection,” she said. “A machine does not have real affection because it cannot love. It does not have a spiritual soul from which intellect, moral agency, and love stem.”

She continued: “And from a Christian understanding, a friend is someone who exhibits sacrificial love, who supports through the ups and downs of life, and who offers spiritual encouragement and forgiveness. An AI ‘friend’ can do none of those things.”
