New AI models drop, including ones students can run on their phones with no internet connection + Gemini 1.5 Pro Experimental
And models that are more secure
Gemma 2 2B (Google)
Google's Gemma 2 2B is a notable achievement in AI miniaturization. Despite its compact size of just 2.6 billion parameters (roughly 2% of the parameters of GPT-3.5), the model outperforms much larger counterparts, including GPT-3.5, on key benchmarks. Its small stature belies its capabilities: imagine a hummingbird with the strength of an eagle. This efficiency comes from sophisticated distillation techniques, which compress the knowledge of larger models into a highly optimized package.
The true significance lies in Gemma 2 2B's ability to run directly on smartphones and other mobile devices without an internet connection. With a memory footprint of just 1GB, this model brings advanced language AI capabilities to the palm of your hand. Students and enthusiasts can now experience cutting-edge natural language processing without the need for expensive cloud computing resources or high-end hardware. This democratization of AI technology opens up a world of possibilities for on-device applications, from intelligent notetaking to personalized language tutoring, all while preserving user privacy by processing data locally.
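As a rough illustration of why a 2.6-billion-parameter model can fit on a phone, here's a back-of-envelope memory estimate. The quantization levels shown are my assumption, not something Google has specified; the point is simply that lower-precision weights shrink the footprint toward the ~1GB range:

```python
# Back-of-envelope estimate of weight memory for an on-device model.
# Assumption (not from Google's announcement): weights stored at various
# precisions; real on-device footprints also include activations and cache.
def model_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Approximate weight memory in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bits_per_param / 8 / 1e9

gemma_2b_params = 2.6e9
print(f"fp16:  {model_memory_gb(gemma_2b_params, 16):.1f} GB")  # ~5.2 GB
print(f"int8:  {model_memory_gb(gemma_2b_params, 8):.1f} GB")   # ~2.6 GB
print(f"4-bit: {model_memory_gb(gemma_2b_params, 4):.1f} GB")   # ~1.3 GB
```

At 4-bit precision the weights alone land near the reported footprint, which is why aggressive quantization is the standard trick for phone-sized models.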
For students eager to explore the frontiers of AI, Gemma 2 2B offers an unprecedented opportunity. The model is openly available for download, allowing anyone with a compatible smartphone to run sophisticated language AI without the burden of subscriptions or usage fees, and without the privacy concerns of sending data to the cloud.
Of course, GPT-3.5 isn't the most impressive model (it's the model that powered ChatGPT when people started using it in November 2022), but it's not hard to see where this is headed: we'll eventually be able to run even GPT-4 and Gemini 1.5 Pro locally on our phones.
ShieldGemma
Google also released ShieldGemma, a set of safety classifiers built to detect and filter out toxic content, including hate speech, harassment, and sexually explicit material. This will help reassure K-12 schools worried about this type of output from generative models.
Gemma Scope
Gemma Scope is a tool developed by Google to provide deeper insight into how Gemma 2 models function internally. It employs sparse autoencoders, specialized neural networks that unpack and interpret the dense information processing that occurs within Gemma 2.
By transforming the internal workings of Gemma 2 into a more understandable format, Gemma Scope allows researchers to gain a clearer picture of how the model identifies patterns, processes data, and generates predictions. This increased transparency is crucial for enhancing the reliability and trustworthiness of AI systems.
This will help ease concerns about how opaque these models can be.
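To give a feel for the sparse-autoencoder idea behind this kind of interpretability work, here is a toy sketch. Everything here is illustrative: the dimensions, weights, and data are made up, and Gemma Scope's actual autoencoders are trained on real model activations and are vastly larger.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse autoencoder: maps a dense activation vector to a wider,
# mostly-zero feature vector and back. All sizes/weights are illustrative.
d_model, d_features = 8, 32
W_enc = rng.normal(0, 0.1, (d_model, d_features))
b_enc = -0.05 * np.ones(d_features)        # negative bias encourages sparsity
W_dec = rng.normal(0, 0.1, (d_features, d_model))

def encode(activation):
    # ReLU zeroes out features whose evidence doesn't clear the bias threshold
    return np.maximum(0.0, activation @ W_enc + b_enc)

def decode(features):
    return features @ W_dec

activation = rng.normal(size=d_model)      # stand-in for a model activation
features = encode(activation)
sparsity = np.mean(features == 0.0)
print(f"{sparsity:.0%} of features are zero")
```

The interpretability payoff comes from the sparse features: when only a handful of them fire for a given input, each one can often be matched to a human-readable concept.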
Gemini 1.5 Pro Experimental 0801 (Google)
Google has introduced the next evolution of its AI technology with Gemini 1.5, a series of powerful models that push the boundaries of what's possible with artificial intelligence. The latest addition, Gemini 1.5 Pro Experimental 0801, is now available for early testing through Google AI Studio and the Gemini API.
Gemini 1.5 Pro delivers impressive performance across a wide range of tasks:
Multilingual Prowess: It supports over 100 languages, enabling high-quality interactions and outputs for users worldwide.
Technical Expertise: The model excels in complex domains such as mathematics, intricate prompt handling, and coding.
Multimodal Mastery: Gemini 1.5 Pro can process and reason across multiple modalities, including text, images, audio, and video.
A standout feature of the Gemini 1.5 series is its expansive context window, which can handle up to 2 million tokens in a single input. This allows Gemini 1.5 Pro to analyze vast amounts of data, such as:
Lengthy documents and PDFs
Extensive codebases with over 30,000 lines of code
Up to 1 hour of video content
11 hours of audio
Under the hood, Gemini 1.5 leverages a Mixture-of-Experts (MoE) architecture for enhanced efficiency. Instead of activating the entire network for every input, a gating mechanism routes each token to a small set of specialized "expert" subnetworks, yielding faster, higher-quality responses than a comparable dense model.
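The routing idea can be sketched in a few lines. This is a toy top-1 router with made-up sizes and random weights, not Gemini's architecture; real MoE models route per token inside transformer layers and typically use top-k routing:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Mixture-of-Experts layer: a gating network scores each expert and
# only the top-scoring expert processes the input. Sizes are illustrative.
d_model, n_experts = 16, 4
W_gate = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x):
    scores = x @ W_gate                      # gating logits, one per expert
    chosen = int(np.argmax(scores))          # top-1 routing
    return experts[chosen] @ x, chosen       # only one expert runs

token = rng.normal(size=d_model)
out, expert_id = moe_forward(token)
print(f"token routed to expert {expert_id}")
```

The efficiency win is that only one expert's parameters are exercised per input, so total capacity can grow without a proportional increase in compute per token.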
To get started, sign up at aistudio.google.com and unleash the power of Gemini 1.5 Pro in your applications.