The concept of the metaverse has gained significant attention in recent years, promising digital worlds where users can interact, learn, create, and explore. As underlying technologies such as generative AI, haptic suits, and eye-tracking evolve, it becomes crucial to examine the metaverse's impact on children's experiences and to address the concerns surrounding their online safety. The VIRRAC (Virtual Reality Risks Against Children) research project, funded by REPHRAIN, aims to do exactly this. It is led by Professor Julia Davidson OBE, Director of the Institute of Connected Communities (University of East London), and Dr Elena Martellozzo, Associate Director at the Centre for Abuse and Trauma Studies (CATS, Middlesex University), supported by Paula Bradbury, Dr Ruby Farr, and Boglarka Meggyesfalvi. As part of the project, a recent roundtable discussion chaired by Nina Jane-Patel brought together high-profile experts from various fields to share their insights and perspectives on this topic.
Embracing the Positive Aspects
Experts highlighted the transformative potential of emerging technologies, particularly of immersive experiences, for children. Unlike traditional online and console platforms, the metaverse enables physically engaging experiences that approximate real-world social cues and interactions. The sense of embodiment offered by digital worlds makes them more inclusive, creating opportunities for neurodiverse individuals (for example, those with autism spectrum disorder, ASD) and people living with disabilities, as it removes barriers to communication and fosters connection with others.
Challenges in Online Safety for Children
However, this new frontier is not without its risks. Children and young people are particularly vulnerable to cyberbullying and harassment within digital environments. Although this is not a new issue, it is amplified in an immersive world designed for maximum impact on users, one that lacks consistent monitoring and reporting protocols and safety standards. The high level of anonymity encourages a subset of users to behave abusively towards others, creating unsafe spaces in which conflicts can escalate quickly without an adequate guardian to intervene.
Griefing, a form of online harassment in which users exploit game mechanics to intentionally upset others, is a prevalent issue. Other disturbing behaviours include the misuse of creative freedom to spread hateful messages and incidents of virtual sexual assault. Such experiences can be particularly traumatic because of the high level of perceptual realism and embodiment associated with this technology. Sextortion, a form of blackmail in which sexual acts or images are demanded in exchange for certain favours, including the digital assets children often strive for, is a growing concern too.
Moreover, age verification remains a challenge on virtual platforms, which often results in children accessing inappropriate content, entering 18+ spaces, or interacting with adults in ways they should not. Incidents have been reported in which children as young as nine were abused in virtual private rooms or taught gang language. During the roundtable, participants stressed the importance of collaboration between platform providers, the implementation of safety-by-design, and the need to provide caregivers and educators with comprehensive safety guidelines. Parents can find it difficult to oversee and understand what their children are experiencing inside VR headsets, so academics Dr Mohamed Khamis, Dr Mark McGill, and Cristina Fiani from the University of Glasgow are working to develop effective safeguarding tools. Tackling these risks requires technology companies, experts, caregivers, and regulators to work together.
Impacts on Childhood Development
The integration of new technology into the fabric of our lives raises critical and largely unanswered questions about its impact on child development. As Catherine Knibbs, an online harms and cybertrauma expert, highlighted, studies ‘up until recently have all been pre-Internet and none of it has taken into account immersive technologies’. Children under the age of seven are typically still learning to distinguish between fantasy and reality. Spending time in 3D digital worlds at such a young age could have unforeseen effects on their cognitive development (processes of thinking and reasoning) and neuroplasticity (the brain’s capacity to rewire and change), including as-yet-unrecognised triggers for imagined or witnessed trauma. Moreover, there is growing concern about the possible physical harm that prolonged use of VR headsets might inflict on developing eyesight.
These headsets, equipped with a variety of biometric sensors, collect a staggering amount of personal data, such as motion and heart-rate data, that can provide unique insights into a user’s physical and mental state. Ensuring that this data is used responsibly, and collected only minimally, especially when children are involved, should be a priority. The stakes are high, and understanding how to navigate this new reality safely, and setting age limits responsibly based on scientific evidence and ethical considerations, is crucial for our children’s present and future well-being.
Unmasking the Vulnerabilities
Children with special educational needs, those who have experienced (early life) trauma or have attachment difficulties, and those on the neurodiverse spectrum are particularly at risk in the vast landscape of the metaverse. As Catherine Knibbs, who provides therapy for young victims of online child sexual abuse, highlighted, these vulnerabilities can be amplified by these children’s struggles to understand relationships and to distinguish harmful from benign interactions.
Young users often enter the digital realm naive, having started with a simple console or handheld device, and soon find themselves in more complex territories such as VR games, often without sufficient adult guidance. Caregivers’ limited understanding of internet connectivity and of the capabilities of devices such as VR headsets increases the risk.
One of our experts, Shannon Pierson, an affiliate of the University of Cambridge’s Minderoo Centre for Technology and Democracy working on XR cybersecurity, privacy, and governance, emphasised that, regrettably, the metaverse is not a gender-neutral or race-neutral space. Disturbingly, research shows that female and minority avatars tend to be targeted more often for online abuse, frequently experiencing abusive language. The severity and nature of the harassment often align with the user’s perceived gender and race.
Identifying Emerging Areas of Harm and Looking Ahead
As we peer into the horizon of emerging technologies, we must be mindful of the shadows cast by their potential for misuse. Platforms integrating generative AI tools provide unprecedented creative freedom but also set the stage for new forms of abuse. There is also growing concern about the mental health implications for young people who suffer financial loss through cyber theft in the crypto and NFT space.
As technology continues to advance, some professionals dread the seemingly inevitable rise of haptic suits, which could drastically change the dynamics of sexual abuse in the metaverse. Situational threats, such as those posed by people in a user’s physical vicinity while they are immersed in the digital space, also need addressing, as do issues surrounding consent.
Another troubling trend has been noted among youngsters: these digital natives merge mixed-reality gaming with other social platforms, for example playing in the metaverse while chatting on Discord or Telegram. This can lead to unchecked, toxic behaviours that often escalate to harmful extremes, including crashing people’s live streams, ‘swatting’, and ‘doxing’, practices that have already led to deaths in the offline world.
Exploring Solutions to Online Threats
Personal space boundaries, together with mute and block features, put the power of moderation in the user’s hands, but they may overwhelm young or distressed individuals and undermine the usefulness of reactive, user-reporting-based human moderation. To alleviate this burden, some academics and developers are working on automated moderation tools that could alert users or parents and flag potential dangers or misbehaviour, such as toxic speech or virtual slapping. While not yet flawless, these tools represent a shift towards more proactive solutions.
The insights shared in this roundtable discussion shed light on the challenges, and the potential solutions, involved in creating a safer metaverse for children. The VIRRAC project will continue to contribute to raising awareness and building a better digital future based on empirical evidence. Children have the right to enjoy emerging technologies safely and to participate in shaping how online spaces are designed for them. To that end, our team will capture children’s perspectives and incorporate their opinions and experiences into practical safety guidelines for an enjoyable, safe metaverse.
Follow us on this journey to work towards an inclusive metaverse that offers exciting experiences while prioritising the well-being and safety of its youngest users.