Google has unveiled a trio of new generative AI models in its “open” family, describing them as “safer,” “smaller,” and “more transparent” than most — a confident claim.
The models — Gemma 2 2B, ShieldGemma, and Gemma Scope — are the latest additions to the Gemma 2 lineup, which first launched in May. They target different applications and use cases but share a common emphasis on safety.
Unlike Google’s Gemini series, which is closed source and powers both Google’s own products and tools offered to developers, the Gemma models are openly available. Gemma is Google’s effort to build goodwill with the developer community, much as Meta has done with Llama.
Gemma 2 2B is a lightweight model for analyzing and generating text, able to run on a range of hardware from laptops to edge devices. It is licensed for certain research and commercial applications and can be downloaded from Google’s Vertex AI model library, Kaggle, and Google’s AI Studio toolkit.
ShieldGemma is a collection of “safety classifiers” designed to detect harmful content, including hate speech, harassment, and sexually explicit material. Built on top of Gemma 2, ShieldGemma can screen both the prompts sent to a generative model and the output the model produces.
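This two-sided screening pattern — classify the prompt before generation, classify the response after — can be sketched in a few lines. The sketch below is illustrative only: `toxicity_score` is a trivial keyword heuristic standing in for a real classifier such as ShieldGemma, and the blocklist terms and threshold are hypothetical placeholders.

```python
# Minimal sketch of input/output screening around a generative model.
# The keyword heuristic below is a stand-in for a real safety classifier.

BLOCKLIST = {"hate", "harassment"}  # hypothetical placeholder terms


def toxicity_score(text: str) -> float:
    """Return a score in [0, 1]; a real classifier would use a model."""
    words = text.lower().split()
    if not words:
        return 0.0
    flagged = sum(1 for w in words if w in BLOCKLIST)
    return flagged / len(words)


def guarded_generate(prompt: str, generate, threshold: float = 0.1) -> str:
    """Screen the prompt, call the model, then screen its output."""
    if toxicity_score(prompt) > threshold:
        return "[prompt blocked]"
    output = generate(prompt)
    if toxicity_score(output) > threshold:
        return "[response blocked]"
    return output


# Usage with a dummy "model" that just echoes the prompt:
echo = lambda p: f"You said: {p}"
print(guarded_generate("tell me a story", echo))
```

The same wrapper shape works with any scoring function, which is why the screening layer can sit outside the generative model itself.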
Gemma Scope, meanwhile, lets developers zoom in on specific parts of a Gemma 2 model, making its behavior more interpretable. As Google describes it in the announcement: “Gemma Scope comprises sophisticated neural networks designed to decode the layered and intricate data handled by Gemma 2, making it more accessible for analysis and comprehension. Through these detailed examinations, researchers are granted deep insights into Gemma 2’s pattern recognition, information processing, and predictive capabilities.”
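The “neural networks that decode” a model’s internals are, in Gemma Scope’s case, sparse autoencoders: they project a dense internal activation into a much wider, mostly zero feature vector whose individual entries are easier to inspect. The toy sketch below illustrates the idea only; the weights are random, the sizes are made up, and real tools train these components on actual model activations.

```python
import numpy as np

# Toy sketch of the sparse-autoencoder idea behind interpretability tools:
# map a dense activation into a wider, mostly zero feature vector, then
# reconstruct the activation from those features. All values here are
# random/hypothetical; real autoencoders are trained on model activations.

rng = np.random.default_rng(0)
d_model, d_features = 8, 32  # activation width, feature dictionary size

W_enc = rng.normal(size=(d_model, d_features))
W_dec = rng.normal(size=(d_features, d_model))


def encode(activation: np.ndarray, k: int = 4) -> np.ndarray:
    """Keep only the k strongest non-negative features; zero the rest."""
    pre = np.maximum(activation @ W_enc, 0.0)
    sparse = np.zeros_like(pre)
    top = np.argsort(pre)[-k:]
    sparse[top] = pre[top]
    return sparse


def decode(features: np.ndarray) -> np.ndarray:
    """Reconstruct the dense activation from the sparse features."""
    return features @ W_dec


activation = rng.normal(size=d_model)
features = encode(activation)
reconstruction = decode(features)
print(f"{np.count_nonzero(features)} of {d_features} features active")
```

Because only a handful of features are active for any given activation, each one can be studied (and named) individually — which is what makes the model’s internal processing easier to analyze.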
The new Gemma 2 releases arrive shortly after a preliminary U.S. Commerce Department report endorsed open AI models. Such models broaden access to generative AI for smaller companies, researchers, nonprofits, and individual developers, though the report also stressed the importance of monitoring these models for potential risks.
Compiled by Techarena.au.