Latest Gemma 2 Models Emphasize Safety and Performance

Google has unveiled three additions to its Gemma 2 family of compact yet powerful open-source AI models, emphasizing safety and transparency. The company’s Gemma 2 2B is a lightweight 2.6 billion parameter follow-up to the original 2B parameter Gemma, with built-in improvements in safety and performance. Built on Gemma 2, ShieldGemma is a suite of safety content classifier models that “filter the input and outputs of AI models and keep the user safe.” Interpretability tool Gemma Scope offers what Google calls “unparalleled insight into our models’ inner workings.”

Gemma 2 2B “rivals industry leaders despite its significantly smaller size,” writes VentureBeat, noting that it “demonstrates performance on par with or surpassing much larger counterparts, including OpenAI’s GPT-3.5 and Mistral AI’s Mixtral 8x7B.”

Its lightweight footprint makes it “particularly suitable for on-device applications, potentially having a major impact on mobile AI and edge computing,” adds VentureBeat, calling it “a major advancement in creating more accessible and deployable AI systems.”

Using these three new models, “researchers and developers can now create safer customer experiences, gain unprecedented insights, and confidently deploy powerful AI responsibly, right on device, unlocking new possibilities for innovation,” Google said, announcing the new LLM releases on its developer blog.

Gemma 2 2B can run on a wide range of hardware. Available under what Google calls friendly terms for research and commercial applications, the Gemma models are “even small enough to run on the free tier of T4 GPUs in Google Colab” as well as non-Google platforms, a plus for expanding experimentation and development.
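For a sense of what running Gemma 2 2B on that free tier involves, the sketch below loads the model through the Hugging Face transformers library in half precision so it fits within a T4’s memory. The model id “google/gemma-2-2b-it”, the fp16 setting and the prompt are illustrative assumptions rather than details from Google’s announcement, and the weights are gated behind acceptance of Google’s license on Hugging Face.

```python
# Minimal sketch (not from the article) of running Gemma 2 2B on a free Colab T4
# with Hugging Face transformers. The model id and precision choice are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"  # assumed instruction-tuned 2B variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision keeps the 2.6B model within a T4's 16 GB
    device_map="auto",          # place weights on the GPU when one is available
)

prompt = "Explain edge computing in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```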

Open and free to use, the Gemma models can also be downloaded by verified users from Google’s Vertex AI model library and AI Studio.

VentureBeat calls it “the little AI that could: punching above its weight class,” citing independent testing by AI research organization LMSYS. In the group’s evaluation arena, Gemma 2 2B notched a score of 1130, “slightly ahead of GPT-3.5-Turbo-0613 (1117) and Mixtral-8x7B (1114), models with ten times more parameters.”

The Gemma series is different from the flagship Gemini models “in that Google doesn’t make the source code available for Gemini, which is used by Google’s own products as well as being available to developers” for a fee, TechCrunch explains, describing Gemma as “Google’s effort to foster goodwill within the developer community, much like Meta is attempting to do with Llama.”

ShieldGemma is a set of safety filters designed to flag things like hate speech and explicit content, TechCrunch notes, discussing how “Gemma Scope allows developers to ‘zoom in’ on specific points within a Gemma 2 model and make its inner workings more interpretable.”
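As a rough illustration of how such a safety filter can sit in front of a chat model, the sketch below prompts a ShieldGemma-style checkpoint with a policy plus the user’s text and reads the probability of a “Yes” (violation) answer from the output logits. The model id “google/shieldgemma-2b”, the prompt wording and the assumption that “Yes” and “No” are single tokens in the vocabulary are illustrative, not details confirmed by the article.

```python
# Rough sketch (an assumption, not Google's documented recipe) of ShieldGemma-style
# content filtering: prompt the classifier with a policy and the user text, then
# compare the logits of the "Yes" and "No" answer tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/shieldgemma-2b"  # assumed Hugging Face id, gated behind Google's license
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

user_text = "Tell me something hateful about my neighbors."
prompt = (
    "You are a policy expert trying to help determine whether a user prompt "
    "violates the following policy.\n"
    "Policy: No hate speech or harassment.\n"
    f"User prompt: {user_text}\n"
    "Does the user prompt violate the policy? Answer Yes or No."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # logits for the next token

# Assumes "Yes" and "No" exist as single tokens in the Gemma vocabulary.
yes_id = tokenizer.convert_tokens_to_ids("Yes")
no_id = tokenizer.convert_tokens_to_ids("No")
probs = torch.softmax(logits[[yes_id, no_id]], dim=0)
print(f"P(violation) = {probs[0].item():.2f}")  # filter the input if this is high
```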
