Earlier this week, Facebook unveiled Messenger Kids, their first product targeted at users under the age of 13. It has launched as a preview in the United States — the company has not yet announced plans for a European expansion — and it will likely roll out to the rest of the world over the coming months. Messenger Kids is a variant of Facebook’s wildly successful messaging platform, Messenger, which as of September was used by 1.3 billion people every month to communicate through messages, stickers, and video. Sheryl Sandberg, Facebook’s COO, led the introduction of the product: “As a mom, I know how meaningful it can be when kids use technology to connect with family and friends. But, I also know how important it is to make sure they’re safe whenever they go online.”
She continued: “[Messenger Kids] is a video chat and messaging app for families and kids, and it’s designed to give parents more control when their kids start to communicate online.” Messenger Kids is distinguished from Messenger in that parents are given full control of their child’s contacts. Using the Facebook app, parents can create a ‘child account’ and authenticate that identity on their shared family devices or on their child’s smartphone or tablet. Messenger Kids is a separate app that allows children to message and video chat freely with approved contacts (those contacts can use the normal Messenger app). The app is currently available only on Apple’s App Store (and supports iPhones and iPads), but Facebook has announced that it will come to the Amazon Appstore and Google Play Store in the coming months.
Parents can grant access to family members as trusted contacts, allowing their child to message them freely from within the Messenger Kids app. Parents can also view the child accounts of their Facebook friends and grant their child permission to communicate with them; the other child’s parent must approve the contact from their own Facebook app before the two children can talk. Facebook moderates chats with a hybrid of artificial intelligence and trained human moderators, and reports issues to parents as they arise. Parents are also notified when their child reports content or blocks a user. Facebook has been quick to defend the safety and integrity of their product: they hosted an unusual press briefing last week in San Francisco to answer questions from the media, as reported by BuzzFeed.
Antigone Davis, Facebook’s Public Policy Director and Global Head of Safety, heavily emphasized the company’s work to make Messenger Kids a safe environment. She highlighted Facebook’s partnership with leading child development experts, and announced a $1 million research fund to support academics and partners studying the impact of technology and social media on children. A Facebook study conducted with the National PTA found that 60% of Americans under the age of 13 already use messaging apps, social media, or both. An independent study from Dubit found that the overwhelming majority of 6 to 12 year olds in the U.S. either have access to a tablet or smartphone or own one themselves.
Facebook’s positioning of Messenger Kids is clear: children are already using social media and messaging, and Facebook is striving to make the experience safer. Until now, Facebook had not officially supported these users (in America) across their platforms (Facebook, Messenger, WhatsApp, and Instagram) because the United States enforces the Children’s Online Privacy Protection Act (COPPA), a federal law aimed at protecting the online privacy of children under the age of 13. Facebook’s David Marcus, VP of Messaging Products, described the overlooked demographic to BuzzFeed: “It’s such a big, unmet need, and no one has actually done a really good job with apps like this.” Messenger Kids is COPPA compliant, and Facebook will likely strive for compliance with the similar regulations of other countries.
Over the last year, public perception of Facebook has changed dramatically. In November of last year, Facebook’s CEO Mark Zuckerberg very publicly denied that Facebook had played any role in the 2016 United States presidential election. As the company and the media investigated further, it emerged that the company’s News Feed had been a hotbed of fake news aimed at swaying votes or confusing voters. Nearly every single American had viewed targeted fake news through the platform, and the estimated scale of the problem is still growing. Facebook’s policies have since shifted substantially, and Mark Zuckerberg has firmly emphasized that these issues are now a very real concern for the company and that it is investing heavily to fix its platforms’ faults.
Put simply: users don’t trust Facebook or its content the way they once did. The AP’s Barbara Ortutay opened her piece with “Facebook is coming for your kids,” and The Verge’s Casey Newton questioned Facebook’s interest in building products for children absent a clear revenue model: “the benefits of Messenger Kids to Facebook are too obvious, and too little acknowledged by its creators.” Facebook users have every right to question the company’s practices: data collection and ad targeting have never before been conducted at this scale. Facebook is unprecedented: it very well may have disrupted democracy in America, and a recent report suggests that a News Feed change disrupted a Cambodian revolution. Facebook is powerful, yet it lacks the tools to wield that power responsibly.