In December, Google introduced Gemini, its most advanced and versatile model to date. Since the announcement, select customers including Samsung and Palo Alto Networks have been leveraging Gemini models within Vertex AI to create sophisticated AI agents, significantly enhancing productivity, personalized learning, and other user experiences. Google is now announcing further updates and broader access to its Gemini models:
- Gemini 1.0 Pro, designed to scale across diverse AI tasks, is now available to all Vertex AI users, and as of today developers can use it in their production environments. Gemini 1.0 Pro offers an optimal blend of quality, efficiency, and cost-effectiveness for a wide range of AI applications, such as content creation, editing, summarization, and classification (see the SDK sketch after this list).
- Gemini 1.0 Ultra, Google's most advanced model designed for intricate tasks, is now accessible on Vertex AI to select customers through an allowlist. This model excels at complex instruction-following, coding, reasoning, and multilingual tasks, delivering outputs of the highest quality.
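As referenced above, calling Gemini 1.0 Pro from Vertex AI takes only a few lines with the Vertex AI SDK for Python. The following is a minimal sketch, not a definitive implementation: the project ID, region, and prompt are placeholders, and the module path assumes a recent google-cloud-aiplatform release.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholders: substitute your own project ID and a supported region.
vertexai.init(project="your-project-id", location="us-central1")

# Gemini 1.0 Pro handles text tasks such as summarization and classification.
model = GenerativeModel("gemini-1.0-pro")
response = model.generate_content(
    "Summarize the following support ticket in two sentences: ..."
)
print(response.text)
```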
Moreover, Google is thrilled to introduce the next wave of innovation with Gemini 1.5, which offers enhanced performance on an even more efficient architecture.
The first release in this new wave is Gemini 1.5 Pro, currently in private preview on Vertex AI. This mid-size multimodal model is optimized for broad task applicability and rivals the performance of 1.0 Ultra, Google's largest model to date. Gemini 1.5 Pro also debuts an experimental long-context capability, offering the longest context window of any large-scale foundation model yet: applications can process up to 1 million tokens in a single request, enough to analyze an hour of video, 11 hours of audio, codebases exceeding 30,000 lines, or documents containing over 700,000 words.
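As a sketch of how that long context might be used from the same SDK, the example below passes an entire exported codebase to the model in one request. The model ID, bucket path, and file are illustrative placeholders, and allowlisted access to the Gemini 1.5 Pro private preview is assumed.

```python
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="your-project-id", location="us-central1")

# Assumes access to the private preview; the model ID shown here is illustrative.
model = GenerativeModel("gemini-1.5-pro")

# A single request can carry a very large input thanks to the long context window.
contents = [
    Part.from_uri("gs://your-bucket/full-codebase-export.txt", mime_type="text/plain"),
    "Identify inconsistencies, likely bugs, and dead code across this codebase.",
]
response = model.generate_content(contents)
print(response.text)
```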
This capability for larger context windows means models can reference more information, understand narrative structures, maintain coherence over longer texts, and generate responses that are rich in context. For instance, Gemini 1.5 Pro enables enterprises to:
- Thoroughly analyze entire code libraries in a single query, without fine-tuning the model, surfacing subtle errors, inefficiencies, and inconsistencies that developers might overlook.
- Navigate through extensive documents, comparing contract details, synthesizing and analyzing themes and opinions across analyst reports, research studies, or a collection of books.
- Examine and compare hours of video content, identifying specific details in sports footage or summarizing information from video meeting recordings to support precise question answering.
- Equip chatbots with the ability to sustain lengthy conversations without losing track of details, even through complex tasks or multiple follow-up interactions (see the chat-session sketch after this list).
- Facilitate hyper-personalized user experiences by incorporating relevant user information directly into the prompts, avoiding the complexities associated with model fine-tuning.
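To make the chatbot scenario above concrete, the sketch below keeps a multi-turn conversation in a single chat session so that earlier details remain available to later turns. The prompts and policy text are placeholders; the session follows the Vertex AI SDK's start_chat/send_message pattern.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")
model = GenerativeModel("gemini-1.0-pro")

# A chat session accumulates history, so follow-up questions stay grounded
# in details supplied in earlier turns.
chat = model.start_chat()
print(chat.send_message("Here is our expense policy: meals up to $50 per day, ...").text)
print(chat.send_message("Given that policy, can I expense a $65 client dinner?").text)

# chat.history holds the prior turns that provide this continuity.
print(len(chat.history))
```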
Innovations by Customers with Gemini Models
Vertex AI has witnessed significant adoption, with API requests increasing nearly 6X from H1 to H2 last year. We're immensely proud of the innovative ways our customers are utilizing Gemini models, particularly given their multimodal capabilities and adeptness at handling complex reasoning.
Samsung, Palo Alto Networks, Jasper, and Quora are just a few examples of organizations that are pushing the boundaries of what's possible with Gemini models, leveraging them for summarization, product interaction enhancement, content creation, and even powering creator monetization on AI chat platforms.
Develop cutting-edge applications using the Gemini API in Vertex AI
The introduction of the Gemini API within Vertex AI marks a significant leap forward in the development of AI-driven applications and agents. The API enables the creation of advanced AI solutions capable of processing and integrating information across a diverse range of modalities, including text, code, images, and video. For organizations and developers aiming to craft enterprise-level applications and bring them to market, the integration of Gemini models into the Vertex AI platform represents a pivotal resource.
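As a brief illustration of that multimodality, the sketch below sends an image alongside a text instruction in a single request. The bucket path is a placeholder, and availability of the vision-capable model variant in your project is assumed.

```python
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="your-project-id", location="us-central1")

# The vision-capable Gemini variant accepts images and video alongside text.
model = GenerativeModel("gemini-1.0-pro-vision")

image = Part.from_uri("gs://your-bucket/product-photo.jpg", mime_type="image/jpeg")
response = model.generate_content(
    [image, "Write a one-paragraph product description based on this photo."]
)
print(response.text)
```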
Vertex AI distinguishes itself as the premier cloud AI platform by offering a unified, comprehensive solution encompassing models, tools, and infrastructure. This integration ensures that applications developed with Gemini models are not only seamlessly deployable but also maintainable with ease, addressing a critical need in the application development lifecycle.
One of the standout features of the Gemini API in Vertex AI is the support for adapter-based tuning, such as Low-Rank Adaptation (LoRA), which offers developers a cost-effective and efficient method for customizing models to meet specific business requirements. The platform is also set to introduce additional customization techniques, including reinforcement learning from human feedback (RLHF) and distillation, broadening the scope for model personalization.
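A supervised, adapter-based tuning run can be started from the SDK along the lines of the sketch below. This is an assumption-laden illustration rather than the canonical workflow: the module path, base model version, and dataset URI may differ by SDK release, and the training data would be a JSONL file of example records in the format the tuning service expects.

```python
import time

import vertexai
from vertexai.tuning import sft  # may live under vertexai.preview.tuning in some releases

vertexai.init(project="your-project-id", location="us-central1")

# Adapter-based (LoRA-style) supervised tuning on your own training examples.
tuning_job = sft.train(
    source_model="gemini-1.0-pro-002",             # base model version is illustrative
    train_dataset="gs://your-bucket/train.jsonl",  # JSONL of training examples (placeholder path)
)

# Poll until the managed tuning job finishes, then use the tuned model endpoint.
while not tuning_job.has_ended:
    time.sleep(60)
    tuning_job.refresh()
print(tuning_job.tuned_model_endpoint_name)
```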
Furthermore, Vertex AI enhances the Gemini models' functionality by enabling the integration of real-time data, thereby improving the accuracy and relevance of responses. The platform's support for fully managed grounding and function calling allows developers to enrich the models' outputs and connect them to external APIs for real-world actions, paving the way for more dynamic and interactive applications.
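Function calling, for example, lets the model return a structured call to an external API instead of free-form text. The sketch below declares a hypothetical get_current_weather function; the declaration, parameters, and prompt are all illustrative, and the application remains responsible for actually executing the call.

```python
import vertexai
from vertexai.generative_models import FunctionDeclaration, GenerativeModel, Tool

vertexai.init(project="your-project-id", location="us-central1")

# Hypothetical function the application exposes; the declaration describes it to the model.
get_weather = FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather for a city",
    parameters={
        "type": "object",
        "properties": {"city": {"type": "string", "description": "City name"}},
        "required": ["city"],
    },
)
weather_tool = Tool(function_declarations=[get_weather])

model = GenerativeModel("gemini-1.0-pro", tools=[weather_tool])
response = model.generate_content("What's the weather like in Zurich right now?")

# Rather than plain text, the model can return a structured function call
# that the application executes before sending the result back to the model.
function_call = response.candidates[0].content.parts[0].function_call
print(function_call.name, dict(function_call.args))
```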
The management and scaling of Gemini models in production are facilitated by Vertex AI's purpose-built tools, including Automatic Side by Side (AutoSxS) evaluation. This feature aids developers in assessing model performance against a standard set of criteria, enabling fine-tuning based on the insights gained.
Additionally, Vertex AI empowers developers to construct search and conversational agents with minimal coding expertise, significantly reducing development time from weeks to just hours or days. The platform's search and conversational capabilities, enhanced by the Gemini models, deliver Google Search-quality information retrieval and enable the creation of sophisticated AI-powered chatbots. These advancements promise to drive more personalized, informative, and engaging AI experiences in applications, showcasing the transformative potential of the Gemini API in Vertex AI for the future of AI application development.
What's next in the Gemini Era
As we embark on the Gemini era, we're eager to see the innovative applications and agents our customers will develop. Stay at the forefront of this exciting journey by collaborating with our technical team and ensuring your organization is poised to test upcoming Gemini models.