New safety report rates Google Gemini ‘high risk’ for children.

Common Sense Media, a nonprofit focused on children’s online safety, has rated Google’s Gemini AI as “high risk” for children and teens, adding to concerns about how major tech companies manage AI products for younger users.

The review, published on Friday, found that while Gemini tells children it is a computer (an important safeguard against emotional dependence), the product can still expose young users to unsafe or inappropriate content, including material about sex, drugs, alcohol, and mental health advice.

The adult version ‘beneath the surface’

The organization found that Gemini’s “Under 13” and “Teen Experience” tiers were essentially the adult version with only a few added safety controls. This “one-size-fits-all” approach, it said, fails to meet the developmental needs of different age groups.

“An AI platform for children must align with their developmental stage rather than merely adapt adult frameworks,” said Robbie Torney, Senior Director of AI Programs at Common Sense Media.

The findings follow recent incidents in which AI chatbots were linked to teen suicides. OpenAI is facing its first wrongful-death lawsuit over allegations that a 16-year-old received harmful guidance from ChatGPT, and Character.AI has been sued in a similar case.

The timing matters: reports suggest Apple is considering Gemini to power its upgraded Siri, expected next year. That could expose millions more teens to these risks unless stronger safeguards are put in place.

Google responds

Google disputed the findings, saying it has policies and safeguards in place for users under 18, and that its systems undergo “red-teaming” and review by outside experts. The company acknowledged that “some responses were not functioning as intended” and said it had added further safeguards.

It also argued that some of the concerns cited may relate to features not available to minors, and noted that Common Sense did not share the specific prompts used in its tests.

This is not the first time Common Sense has evaluated AI products. In earlier reviews, Meta AI and Character.AI were rated “unacceptable” due to severe risks, and Perplexity was labeled “high risk.” ChatGPT was assessed as “moderate risk,” while Claude, which is aimed at adult users, was deemed minimal risk.
