Instagram’s parent company, Meta, announced Monday that it is using artificial intelligence to detect accounts operated by individuals under 18 to crack down on underage use.
“We’ve been using artificial intelligence to help determine age ranges for some time, but leveraging it in this way is a big change,” the tech giant wrote in a blog post.
If an account appears to belong to a teenager, Instagram says it will enroll it in its “Teen Accounts,” a setting that gives parents stricter oversight of the content their child can engage with. Teen Accounts debuted last fall, and Meta has since enrolled 54 million teenagers. The company reported that 90% of parents say the feature has been beneficial as they navigate their child’s engagement on the social media app.
“We’re taking steps to ensure our technology is accurate and that we are correctly placing teens we identify into protective, age-appropriate settings, but in case we make a mistake, we’re giving people the option to change their settings,” the blog post reads.
Instagram will also send messages to parents sharing how they can engage in “conversations with their teens on the importance of providing the correct age online.”
While Meta has rolled out AI to verify users’ ages, the company holds that app store operators such as Google and Apple, not social media companies, should be responsible for verifying users’ ages.
Social media companies have come under scrutiny in recent years for the effects of their apps on children.
Congress has failed to pass most legislation related to the issue. One such bill was last year’s Kids Online Safety Act, which would have required social media companies to be more accountable for child safety.

