Teenagers are increasingly turning to social media and AI to manage their mental health, a trend that has raised questions about how technology affects young people’s wellbeing. But according to a UC Berkeley Psychology study, technology can also help when used carefully.
In his recent paper, “Advancing youth co-design of ethical guidelines for AI-powered digital mental health tools,” Berkeley Psychology Professor Giovanni Ramos found that technologies like social media and AI can make mental health care easier to reach if teens are included in their design and clear rules are set about privacy and data use. He argues that teens should help design these tools because they are the ones actually using them.
“There will never be enough mental health professionals to meet the needs of all teens who require support, and AI-based tools can help fill this gap by providing accessible care anytime and anywhere,” Ramos said.
The study calls for involving adolescents in every stage of research and development to ensure these AI-based tools are safe and supportive. Ramos notes that teenagers often turn to these easily accessible platforms to practice social skills, seek emotional support and talk to someone without fear of judgment.
But Ramos identifies a problem with including teenagers in research and development: to participate in research, teens need parental consent. Those who lack supportive guardians, for whom social media and AI often serve as an outlet, are frequently excluded from the very research that could benefit them the most.
This leaves only a limited group of teens able to participate, yielding insufficient data. Ramos noted that in some cases, studies that waived parental consent have successfully included participants from more diverse racial, ethnic and sexual identity backgrounds, improving equity in mental health research.
“Requiring parental consent skews participation toward youth with more privileged or accepting home environments, leaving out those who are already underrepresented and who may be navigating mental health concerns or identity-related stigma,” Ramos said. “These exclusions undermine both the ethical principle of justice and the validity of research findings.”
Another issue Ramos identifies is the collection of “passive data” from teens. Passive data refers to information automatically collected from phones, apps or smartwatches while teens use them. This data can include social interaction patterns, sleep habits and geolocation.
While this data can provide valuable, real-time insights into everyday behavior and teen mental health, it raises ethical questions. If teen participants are not aware of what is being collected from them, their confidentiality may be threatened. It is also unclear whether researchers have a responsibility to intervene when passive data shows signs of crisis, such as when a teen’s online searches suggest self-harm.
These challenges, Ramos explains, reveal why teens should have an active role in shaping how technology and research are used. With the right boundaries and input from teenagers, technology could shift from being a potential risk to becoming an important resource in supporting young people’s mental health.
Ramos added that inclusive research and co-design can help ensure that AI tools are designed responsibly to offer meaningful mental health support, while reducing risks such as biased responses, overreliance on AI chatbots and privacy issues.