These new models offer additional tools for developers and researchers, contributing to ongoing efforts toward a secure and transparent AI future.
Google has introduced three new generative AI models aimed at improving the safety and transparency of artificial intelligence. The models, part of Google’s Gemma 2 series, are designed to be safer, more efficient, and more transparent than many existing models.
A blog post on the company’s website states that the new models — Gemma 2 2B, ShieldGemma, and Gemma Scope — build upon the foundation established by the original Gemma 2 series, which was launched in May.
A new era of AI with Gemma 2
Unlike Google’s Gemini models, the Gemma series is open, with model weights made freely available to developers. This approach mirrors Meta’s strategy with its Llama models, aiming to provide accessible, robust AI tools to a broader audience.
Gemma 2 2B is a lightweight model for generating and analyzing text. It is versatile enough to run on various hardware, including laptops and edge devices. Its ability to function across different environments makes it an attractive option for developers and researchers looking for flexible AI solutions.
Meanwhile, the ShieldGemma model focuses on enhancing safety by acting as a collection of safety classifiers. ShieldGemma is built to detect and filter out toxic content, including hate speech, harassment, and sexually explicit material. It operates on top of Gemma 2, providing a layer of content moderation crucial in today’s digital landscape.
According to Google, ShieldGemma can filter both the prompts sent to a generative model and the content that model produces, making it a valuable tool for maintaining the integrity and safety of AI-generated content.
Gemma Scope, meanwhile, allows developers to gain deeper insights into the inner workings of Gemma 2 models. According to Google, Gemma Scope consists of specialized neural networks, known as sparse autoencoders, that help unpack the dense, complex information processed by Gemma 2.
By expanding this information into a more interpretable form, Gemma Scope helps researchers better understand how Gemma 2 identifies patterns, processes data, and makes predictions. This transparency is vital for improving AI reliability and trustworthiness.
Addressing safety concerns and government endorsement
Google’s launch of these models also comes in the wake of a warning from Microsoft engineer Shane Jones, who raised concerns that Google’s AI tools could create violent and sexual images and ignore copyrights.
The release of these new models coincides with a preliminary report from the US Commerce Department endorsing open AI models. The report highlights the benefits of making generative AI more accessible to smaller companies, researchers, nonprofits, and individual developers.
However, it also emphasizes the importance of monitoring these models for potential risks, underscoring the need for safety measures like those implemented in ShieldGemma.
This article first appeared at Cointelegraph.com News