Current Discourses on Online Child Safety: Exacerbating Stranger Danger or Redefining Genuine Responsibility?

The following post was created as part of the assigned work for Henry Jenkins's PhD seminar, Public Intellectuals. The goal of the class is to help communication and media studies students develop the skills and conceptual framework necessary to do more public-facing work. They learn how to write op-eds, blog posts, interviews, podcasts, and dialogic writing, and they consider examples of contemporary and historic public intellectuals from around the world. The definition of public intellectuals goes beyond a celebrity-focused approach to think about all of the work that gets done to engage publics -- at all scales -- with scholarship and critiques concerning the media, politics, and everyday life. Our assumption is that most scholars and many nonscholars do work that informs the public sphere, whether that means speaking on national television or to a local PTA meeting.


Figure 1: Images used on the official sites of social VR platforms, from left to right: Roblox, Zepeto, Second Life

Which of the avatars pictured above can be identified as an adult? Which would be considered minors? Do you even bother to make such distinctions when engaging with an avatar in a virtual world?

If you, as an adult user, encounter younger-looking avatars in a virtual world designed for adults that condones sexual roleplay, would you consider their youthful appearance worrisome when engaging with them? Or would you assume that the platform did its due diligence in screening out users who lied about their age at registration?

These are only some of the questions I constantly ask myself as a researcher interested in how the integration of social XR technologies will affect the ways we interact with one another, and in how we can better define and interpret such mediated interactions. With younger users becoming one of the fastest-growing demographics on platforms that brand themselves as “Metaverses,” such as Roblox or VRChat, many companies are asking themselves the same questions about how to ensure a safe environment for minors.

If you are active on major social networking platforms, you are probably familiar with seeing words such as “trust” and “safety” used profusely in platforms’ policies and press releases, as well as in the new kinds of resources platforms have created for protecting younger users in the form of “Safety Centers,” “Guardian’s Guides,” or “Parent Portals.” While no one would argue that a child’s safety online is unimportant, are platforms certain that their definitions of safety and harm accurately represent what users think? Whom do these definitions really serve, especially when we take into consideration younger users’ rights to expression, play, or privacy?

Figure 2: Images of different child safety measures that major platforms have taken in the form of “Safety Centers” (Zepeto), “Guardian’s Guide” (TikTok), “Parent Portal” (Meta), “Kid Safety and Community Guidelines” (Roblox)

Who Can Be Considered a “Child” Online?

There is no doubt that this increase in interest and investment in younger users’ wellbeing is commendable and necessary. Efforts to protect children from seeing harmful content or from being exploited online are essential. However, there are also fundamental issues that need to be explored, some of which raise questions about how platforms currently conceptualize the rights of younger users, since such understandings affect all aspects of a platform’s development, from policy to product decisions.

For one, who should be classified as a child on XR platforms? Current Web 2.0 social networking platforms, such as YouTube or Facebook, have strict age-related guidelines on who can join their platforms, and users are required to input their date of birth to prove they are old enough to enter. Still, who can verify the truthfulness of this data when users first join? How will platforms successfully weed out users who lie about their age?
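To see why, consider what a registration-time age gate can actually check. The sketch below is a minimal illustration in Python, with a hypothetical minimum age of 13 (roughly the threshold COPPA has made standard); it evaluates only what the user claims, and nothing in it can detect a visitor who simply types in an earlier birth year.

```python
from datetime import date

MINIMUM_AGE = 13  # a common sign-up threshold, assumed here for illustration


def is_old_enough(self_declared_birthdate: date, today: date | None = None) -> bool:
    """Return True if the self-declared date of birth meets the minimum age.

    This checks only what the user claims at registration; it cannot detect
    a user who enters an earlier birth year than their real one.
    """
    today = today or date.today()
    had_birthday = (today.month, today.day) >= (self_declared_birthdate.month,
                                                self_declared_birthdate.day)
    age = today.year - self_declared_birthdate.year - (0 if had_birthday else 1)
    return age >= MINIMUM_AGE


# A ten-year-old who claims to have been born in 2000 passes the gate.
print(is_old_enough(date(2000, 1, 1)))  # True, regardless of the user's actual age
```

In other words, the gate verifies arithmetic, not identity, which is precisely why platforms end up looking for other signals.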

Another issue relates to the identification of minors who can join platforms but who are granted restricted access to certain features and whose information and consent must be processed differently. When users do not provide reliable age data, many Web 2.0 platforms strive to identify minors through other means, such as markers of physical maturity measured by the Tanner Scale, a classification system that tracks the secondary sex characteristics of humans during puberty. This scale, however, has been critiqued for not taking into account developmental differences across ethnicities and among individuals who experience puberty at faster or slower rates.

In the world of virtual reality, determining a user’s age is further complicated by the fact that users are represented as avatars. How do you determine the age of an avatar? Here are some thoughts I have, after using the South Korean social VR platform Zepeto (340 million users, 70 percent of whom are female and between the ages of 13 and 21), on how platforms can pinpoint additional behavioral and contextual indicators for identifying minor users in VR, both through human content reviewers and through AI that infers a user’s actual age when age-related data is not reliable.

Figure 3: Avatars have their own social media feeds on Zepeto; here are some examples of short video reels from other avatars’ accounts that I saw through my avatar’s feed

Figure 4: Some potential identifiers of minor users that do not rely on avatar appearance or self-declared age data
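To make this idea concrete, here is a minimal sketch of how several weak behavioral signals might be combined into a score that routes an account to a human reviewer. The signal names, weights, and threshold below are purely illustrative assumptions on my part, not Zepeto’s or any platform’s actual criteria, and not the specific items in Figure 4.

```python
from dataclasses import dataclass


@dataclass
class UserSignals:
    """Hypothetical behavioral and contextual signals a platform might log."""
    mentions_school_terms: bool       # e.g., "homework" or "my teacher" in posts
    active_mostly_after_school: bool  # activity clustered around after-school hours
    share_of_minor_friends: float     # fraction of connections already identified as minors
    self_declared_age: int | None     # may be missing or unreliable


def likely_minor_score(s: UserSignals) -> float:
    """Combine weak signals into a 0-1 score used to queue human review,
    not to trigger automatic enforcement."""
    score = 0.0
    if s.mentions_school_terms:
        score += 0.3
    if s.active_mostly_after_school:
        score += 0.2
    score += 0.4 * s.share_of_minor_friends
    if s.self_declared_age is not None and s.self_declared_age < 18:
        score += 0.1
    return min(score, 1.0)


# Accounts scoring above some threshold would be queued for a human reviewer.
print(likely_minor_score(UserSignals(True, True, 0.8, None)))  # roughly 0.82
```

The reason to keep a human reviewer in the loop is that each of these signals is individually unreliable; automating a decision as consequential as age classification on weak proxies would raise exactly the privacy and expression concerns discussed below.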

Whose Definition of Safety?

Currently, many platforms seem to define minor safety in the way that takes them down the path of lowest legal liability and reputational risk: a minor’s freedom from situations that would expose them to physical and graphic violence, sexual exploitation, cyberbullying, hate speech, and so on.

Figure 5

But can platforms also consider a more comprehensive definition of safety that ensures minor users the freedom to exercise their full spectrum of rights? Is a risk-minimization approach inadvertently restricting younger users’ experiences online and consequently pushing them away from these platforms? Does platforms’ emphasis on ensuring the safety of minor users simultaneously ensure minors’ dignity as users?

The UNCRC, or the United Nations Convention on the Rights of the Child, adopted in 1989, is one of the most reputable documents outlining the rights of the child. In 2021, the UN Committee on the Rights of the Child adopted General Comment 25, which explains how member states should implement the UNCRC in relation to the digital environment, including a child’s right to:

●     Privacy: the importance of protecting a child’s personal information and digital activities, and of obtaining informed consent when collecting data from them (e.g., ensuring that a child’s data is not collected unknowingly through third-party ads)

●     Association: the right to meet, exchange, and deliberate with others online (e.g., when children meet each other in VR worlds)

●     Play: age ratings or features should not curtail children’s access to the digital environment or interfere with their opportunities for leisure (e.g., the introduction of new features that restrict access to certain content)

●     Economic exploitation: not only protecting children from trafficking and harmful goods, but also ensuring their right not to be commercially exploited by a platform’s monetization practices (e.g., an economy in which children labor excessively to obtain in-platform currency)

 

Age-Gating and Children’s Freedom of Expression

Figure 6

In the process of implementing age-gating features to protect minors from potential predators and harmful content, how are we also considering a minor’s right to freedom of expression or to exercise creativity on the platform? In other words, does safety always trump other values that may be equally important to minors from their own perspectives?

When a platform decides to restrict minors from using a feature such as live streaming or from talking to certain users in a virtual world, does a blanket, age-based ban on that function take the group’s freedom of expression into account? If a platform abruptly disallows certain features for minor users after receiving bad press, how will those decisions affect user experience?

Currently, Facebook and TikTok have separate portals for parents and guardians that provide them with strategies for ensuring their children’s safety and privacy on their platforms. Less discussed, however, is whether publishing more guidelines for parents actually shifts the platform’s responsibility onto individual users. Not all guardians have the same ability to monitor their children’s activities online: some simply do not have the time, and others do not possess the level of digital literacy required to do so. Nor do all children live in a stable household where parents or guardians are continuously present, facts that reveal the assumptions platforms make about the concept and role of family and parenting.

 

Age Labeling and Children’s Privacy Rights

Figure 7

Another area platforms prioritize when it comes to children’s safety is the safeguarding of children’s privacy, that is, ensuring the safety of minor users’ data. There are already strict rules around collecting and retaining minors’ data, such as the Children’s Online Privacy Protection Act (COPPA), which restricts the collection of data from users younger than 13.

Data on minor users’ ages may help platforms develop age verification technologies, but how much consent is obtained from the users themselves when requesting this data? According to the Child Rights International Network, no personal data should be collected from children without their informed consent, which should take the form of “a clear, accessible and unambiguous statement of how it is to be treated” and should allow children to withdraw their consent. Children who have yet to develop the capacity to grant such consent should not be asked to provide any personal data online.

Sonia Livingstone, a professor of Social Psychology at the London School of Economics who specializes in children’s rights on the Internet, goes beyond the issue of consent and raises the issue of age identification as potentially restricting freedom of expression. She calls the surveillance of children online “a digital panopticon” that can always identify a child, as well as their whereabouts and needs. In a similar vein, scholars such as Deborah Lupton and Ben Williamson have noted that children today are datafied “in utero,” even before birth, with their health and personal data shared not only with companies but also with healthcare providers and, after birth, with public and private education systems. Constant surveillance of datafied children may help parents and educators understand a child more comprehensively, but it comes at the cost of exposing the child to “social sorting,” where data determine important future life decisions such as employment, rendering children into “algorithmic assemblages” that obscure who they are as people, with personality traits and potential that cannot be quantified.

 

Conclusion

All this is not to say that minor users should be given free rein to engage in illegal activity or be exposed to harmful content, but to call for platforms to be more comprehensive in their definitional work on harm and safety. Are there ways for platforms to place trust in minors to take charge of certain safety decisions that affect them? Are there also ways for younger users to contribute their views on what a fair sanction for certain behaviors would be, or even to help moderate in areas where our technology is not yet developed, so that platforms and users can reach a better communal agreement?

For example, what does “bullying and harassment,” a category often included in platforms’ community guidelines, really mean for younger users? VRChat’s policy on bullying in its community guidelines simply tells users not to engage in “Repeatedly approaching an individual with the intent to disturb or upset.” Roblox merely tells users not to single out another user “for ridicule or abuse.” Yet, for younger users, having gossip spread behind their backs or being accused of copying another user’s world or avatar design may feel just as caustic as physical shoving or name-calling. In this sense, are there gaps in understanding harm and safety between the platforms’ adult policy-makers and the minor users who are governed by such norms?

As younger users are incentivized to go on quests to earn Zems, Minecoins, or Robux, how can platforms incentivize these users to voluntarily act in ways that benefit other members of the community? Are there ways platforms can entrust minors with the role of community moderators who help monitor their own communities and provide insights in areas where commercial content moderation teams have no visibility? For example, if user A is reported by more than X fellow users within the last hour for the same violation, could the company take this as legitimate evidence and temporarily disable that user for 24 hours before they can come back? Users would be able to provide insights to the platform on what types of conduct and behavior-based interactions their current policy may lack. What’s more, allowing minors to partake in communal governance may provide a valuable lesson for younger users on the importance of prosocial behavior.
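A rule like the one above is simple enough to sketch. The snippet below is a minimal illustration of that hypothetical threshold, not any platform’s actual enforcement logic; the threshold, window, and suspension length are placeholder assumptions.

```python
from datetime import datetime, timedelta

# Placeholder values standing in for the "X reports in the last hour" rule above;
# a real platform would tune these and pair them with appeals and audit logs.
REPORT_THRESHOLD = 5
WINDOW = timedelta(hours=1)
SUSPENSION = timedelta(hours=24)


def temporary_suspension_until(reports_by_user: dict[str, list[datetime]],
                               now: datetime) -> datetime | None:
    """Return when a temporary suspension should end, or None if no action is taken.

    Counting distinct reporters rather than raw reports makes it harder for one
    user to trigger a suspension by spamming the report button.
    """
    recent_reporters = {
        reporter
        for reporter, times in reports_by_user.items()
        if any(now - t <= WINDOW for t in times)
    }
    if len(recent_reporters) >= REPORT_THRESHOLD:
        return now + SUSPENSION
    return None
```

Even in this toy form, the design choices matter: requiring distinct reporters, keeping the suspension temporary, and logging the decision are all ways of letting the community’s judgment count without handing any single user the power to silence another.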

Another way for platforms to gain direct feedback from users is through the implementation of text boxes during reporting, where users can describe the issue in their own words to help the platform better determine whether content should be removed, instead of offering only an option to flag bad content. Allowing users to report in their own words has been shown to provide emotional release and to increase gratitude towards the platform. Although implementation at scale may be challenging for growing platforms, some platforms are doing this already, such as Zepeto, which allows younger users to directly voice their concerns and provide context through a free-form text box that accompanies a user report.

Figure 8: Text box on Zepeto where users can, in their own words, provide context about content that made them feel uncomfortable before they submit a report to the platform
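As a rough sketch of what such a report might look like in code (the field names here are my own assumptions, not Zepeto’s actual schema), the key addition is simply a free-form description field alongside the usual category flag:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class UserReport:
    """Hypothetical shape of a report that carries the reporter's own words."""
    reporter_id: str
    reported_content_id: str
    category: str          # picked from a fixed list, e.g., "bullying"
    description: str = ""  # free-form text box: context in the user's own voice
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


report = UserReport(
    reporter_id="user_123",
    reported_content_id="post_456",
    category="bullying",
    description="They keep copying my avatar design and telling others I stole it.",
)
# The description gives human reviewers context that a bare category flag cannot.
```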

At the end of the day, however, allowing minors to provide input on safety is as important as, and perhaps more empowering than, only teaching them to be wary of potential dangers and imposing restrictions. By actively contributing to decisions that affect them, younger users will develop a heightened awareness of how their decisions online can affect other users and, in the same process, inform platforms on how to approach safety in a more inclusive and accountable manner.

 

Biography

Kyooeun Jang is a doctoral student at USC's Annenberg School for Communication and Journalism. She is interested in the influence of big tech on shaping platform governance and developing norms for interpersonal communication in social extended-reality platforms. Before coming to USC, Kyooeun also worked in the tech industry, specializing in ads solutions as well as in user content operations in the trust and safety space.