The Biggest AI News From Google’s I/O 2023 Keynote

Google announced major updates to its gen AI investments, including new models, deeper integrations within Google products and efforts to curb disinformation.
Published on May 10, 2023 by Alton Zenon III

Today, Google held its annual developer conference, Google I/O ‘23, in Mountain View, California. This year, CEO Sundar Pichai and a roster of other executive leaders spent close to two hours dropping announcement after announcement about their deepening investments in generative AI technology.

The company has been largely silent about its AI developments since it announced Bard, its rival to the ChatGPT-powered Bing search engine, in February. But Google roared back onto the AI scene today by announcing big updates to Bard, which is now powered by PaLM 2: the successor to the large language model that originally powered the tool. It announced a second new LLM called Gemini, as well as deeper AI integrations within Google Search, the Google Workspace suite and even how it’s offering AI to enterprises. Speakers at the event also addressed how Google is working to curb disinformation and harmful outputs generated by AI in its products.

The Alphabet company made a lot of exciting AI announcements today, ones that will have a profound effect on how the general public and businesses use and think about generative AI. While the event touched on a lot, this piece will cover the main generative AI news around Bard, Search and other areas, setting aside the many hardware and developer-focused announcements that also came out of the event.

Now, let’s dive in. 

Major Updates to Models 

One of the more significant announcements out of I/O was PaLM 2. This new model is said to be “faster and more efficient than previous models — and it comes in a variety of sizes, which makes it easy to deploy for a wide range of use cases,” according to Google DeepMind VP Zoubin Ghahramani. In an announcement blog that accompanied the keynote, he wrote: “We’ll be making PaLM 2 available in four sizes from smallest to largest: Gecko, Otter, Bison and Unicorn. Gecko is so lightweight that it can work on mobile devices and is fast enough for great interactive applications on-device, even when offline.”

It can translate text in over 100 languages, including nuanced content like poems and scientific papers with mathematical expressions. The model also demonstrates improvements in logic and common sense reasoning.

The model currently powers Bard and 25 additional products and features in the Google product library. “Workspace features to help you write in Gmail and Google Docs, and help you organize in Google Sheets are all tapping into the capabilities of PaLM 2,” said Ghahramani.

Gemini is a brand-new multimodal AI model that Google built from the ground up with integrations for Google tools and other APIs in mind. CEO Sundar Pichai said it’s currently being fine-tuned for safety, but it will eventually be available in various sizes and capabilities, just like PaLM 2. One unique feature of Gemini is its ability to watermark image outputs and infuse them with metadata that identifies them as AI-generated. This is particularly important because it works toward increasing transparency and preventing the spread of harmful or misleading information as generative AI grows more widespread (more on this later).
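The metadata half of that approach is easy to picture in miniature. The sketch below is my own toy illustration, not Google's actual pipeline: it uses the standard PNG tEXt chunk mechanism (the `Source` keyword and the 1x1 demo image are invented for the example) to build a tiny PNG, stamp it with an "AI-generated" marker, and read the marker back.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: 4-byte length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def tag_ai_generated(png: bytes, note: str) -> bytes:
    """Insert a tEXt chunk right after IHDR marking the image AI-generated."""
    ihdr_len = struct.unpack(">I", png[8:12])[0]
    ihdr_end = 20 + ihdr_len  # sig(8) + length(4) + type(4) + data + crc(4)
    text = chunk(b"tEXt", b"Source\x00" + note.encode("latin-1"))
    return png[:ihdr_end] + text + png[ihdr_end:]

def read_text_chunks(png: bytes) -> dict:
    """Walk the chunk list and collect tEXt keyword/value pairs."""
    out, pos = {}, 8
    while pos < len(png):
        length = struct.unpack(">I", png[pos:pos + 4])[0]
        ctype = png[pos + 4:pos + 8]
        if ctype == b"tEXt":
            key, _, val = png[pos + 8:pos + 8 + length].partition(b"\x00")
            out[key.decode()] = val.decode()
        pos += 12 + length
        if ctype == b"IEND":
            break
    return out

# Build a minimal 1x1 grayscale PNG to tag.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
raw = zlib.compress(b"\x00\x00")  # one scanline: filter byte + one pixel
minimal = (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr)
           + chunk(b"IDAT", raw) + chunk(b"IEND", b""))

tagged = tag_ai_generated(minimal, "AI-generated (demo)")
print(read_text_chunks(tagged))  # → {'Source': 'AI-generated (demo)'}
```

A real provenance system pairs metadata like this with a watermark baked into the pixels themselves, since plain metadata is trivially stripped.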

Changes to Bard and Workspace

Eventually, Bard will move onto the Gemini model. That shift will let users prompt Bard with images, receive images and better Google Search results in response to their prompts, and use other APIs and Google services directly within the tool. For example, users will be able to port their AI outputs from Bard directly to Google Docs and Gmail.

Overall, Bard will get more capable. During the event, Bard was shown helping an 18-year-old identify a college major and locate colleges in his area that had that major on Google Maps. Bard then built a comparison doc in Google Sheets with categorized information about each of the schools.

It was also announced that Bard is currently available in 180 countries and users can speak to the tool in Korean and Japanese. There’s even a dark mode for users like me who are into that sort of thing. 

Bard Exporting a user-made table into Google Sheets [image via Google I/O]

It was also announced that Google Workspace has new generative AI prompting features that make it easier to create content in Docs, Sheets and Slides (and likely elsewhere soon). These features allow users to generate medium- or long-form content, like job descriptions or tables, based on prompts within the document.

With an implementation known as Duet AI, Workspace now offers contextual prompts in a sidebar based on what you are already working on. This feature, known as your “Sidekick,” can suggest new prompts and also pull information from files in other Google tools you’re using, all while citing its sources. Additionally, Duet AI can create speaker notes for a presentation in Slides.

In another productivity-related update, Google also showcased an implementation called Project Tailwind. Its goal is to make note-taking smarter by using AI to organize and summarize notes, generate study guides and answer natural language questions based on the specific documents that users upload to Google Drive. From there, it creates a private AI model with expertise in that information. Students, writers or researchers can employ the model to suggest questions based on their notes and highlight key concepts, and the model will cite all of its sources within the docs users provide.
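Tailwind's answer-from-your-own-notes-with-citations behavior can be sketched in a few lines. This is my own illustration, not Google's implementation: the filenames are invented for the demo, and the word-overlap scoring stands in for what a real system would do with embeddings and an LLM.

```python
# Hypothetical stand-ins for documents a user uploaded to their Drive.
docs = {
    "bio-notes.txt": "Mitochondria are the powerhouse of the cell.",
    "history-notes.txt": "The printing press was invented around 1440.",
}

def answer_with_citation(question: str) -> tuple[str, str]:
    """Return the best-matching note passage plus the document it came from."""
    q_terms = set(question.lower().split())
    # Score each doc by how many query terms it shares (a toy ranker).
    best = max(docs, key=lambda name: len(q_terms & set(docs[name].lower().split())))
    return docs[best], best

text, source = answer_with_citation("when was the printing press invented")
print(f"{text}  [source: {source}]")
```

The key product idea is the second return value: every answer stays traceable to a specific document the user supplied.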

Google Search Is Getting More Conversational

During the presentation, we got a live demonstration of (what I’m calling) Google Search 1.5, which includes an AI-driven snapshot at the top of the page providing a conversational response to searches. It offers sources so you can intuitively click around and see where it pulled its insights from. Users can ask a follow-up question in what was called “Conversational mode,” and the search tool understands the full context of the exchange to surface more specific results on nuanced queries like “good bike for 5 mile commute with hills.”
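That carried-over context can be illustrated with a toy query rewriter. This is my own sketch, not Google's ranking logic: it simply folds terms from earlier queries into a follow-up so that downstream retrieval sees the whole session rather than the follow-up in isolation.

```python
def contextualize(history: list[str], follow_up: str) -> str:
    """Expand a follow-up query with terms carried over from the session."""
    carried = []
    seen = set(follow_up.lower().split())
    for past in history:
        for term in past.lower().split():
            if term not in seen:  # keep each carried term once
                carried.append(term)
                seen.add(term)
    return follow_up + (" (" + " ".join(carried) + ")" if carried else "")

history = ["good bike for 5 mile commute with hills"]
print(contextualize(history, "what about an e-bike"))
# → what about an e-bike (good bike for 5 mile commute with hills)
```

A production system would resolve pronouns and intent with a language model instead of copying raw terms, but the shape of the problem is the same: the follow-up only makes sense with the session attached.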

Google Search Labs was also announced. It’s an experimental platform that lets testers try new, more advanced AI-powered search features before they roll out to the general public.

If you’re an SEO-driven content producer, check out this rundown on how Google’s advancements in search may impact SEO from my very talented colleague Krista Doyle.

Google's new search in action [image via Google I/O]

AI Evolutions for Enterprise

Google’s AI-driven announcements are set to impact the business world beyond just SEO rankings, however. The company said it’s expanding access to its tensor and graphics processing units, as well as its AI models and tooling, to enterprise companies via Google Cloud’s Vertex AI.

With Vertex AI, users, whether they’re individual developers or entire enterprise organizations, can build their own generative AI platform based on the models of their choice, which they can fine-tune with specific prompts and custom compute clusters.

Enterprise Search allows users to search across data from their company’s own knowledge base. And Google is introducing three new models: Imagen for image generation, Codey for custom code building and Chirp for speech-to-text translation in 300 languages. These features are currently available in preview. Additionally, Duet AI for Google Cloud uses gen AI to assist developers with code auto-completion and code review.

Google's enterprise AI suite [image via Google I/O]

Bold and Responsible AI

“Our approach to AI must be bold and responsible,” said James Manyika during the event. Manyika is the senior VP of Google’s brand new Technology and Society division, which will assess and outline Google’s view on how tech (and AI in particular) influences society. He continued, “The only way to be bold in the long term is to be responsible from the start.”

The company’s work on all its AI applications is first looked at through the lens of its 2017 AI principles, where its team asks questions like, “Will it be socially beneficial?” or “Could it lead to harm in any way?” before development. Discussions like these are clearly not new for the Alphabet company and that’s a good thing. 

Manyika said misinformation is a subject that’s top of mind for Google. So it’s offering tools to evaluate information online, such as assessing the validity of images. It’s introducing an "About this image" feature to show whether an image in question has previously appeared in news or social media, which will offer context on whether it may be fake. And every image generated by Google’s AI will carry metadata about its creation; other creators and publishers will be able to attach that metadata as well, so it appears on images in Google Search results.

Google also announced an experimental Universal Translator service that can dub a person's speech and lip movement into a different language. However, the powerful technology will only be available to chosen partners in order to reduce the risk of harmful deepfake videos. Manyika said that Google is also launching "Automated adversarial testing," where experts work to counter the probability of problematic outputs. 

Lastly, Manyika said that Perspective, an API originally made for publishers to evaluate toxicity, is now the standard framework for fighting toxicity, one that many other AI producers (including OpenAI) use in their models.

A Hot Summer for G(oogle)enerative AI

Bard’s announcement in February was a big deal, but this round of AI news may be even bigger. We got pretty significant insights into what a more capable Bard 2.0 will look like, the new Google Search experience, Google's two newest AI models (multimodal is the future), how enterprise teams can leverage generative power and (*catches breath*) the company's efforts to curb misinformation and the proliferation of deepfakes.

It’s clear that Google has been very, very busy since ChatGPT sent it into a development “code red” around six months ago. It’s also evident that Google is, in a way, betting a significant part of its future success on AI. The technology is now in essentially every part of its business, or will soon be. Most of the features announced today are rolling out over the next few months (if they aren’t already out today). So this summer is going to be one of heavy stress testing by users. And since Google’s products, particularly Search, are used by billions each day, the stakes are high for the Alphabet company. But Sundar Pichai and his team seem confident in the current state of their AI developments based on this presentation. And they should be — there’s a lot to be excited about.

Meet the author

Alton Zenon III
Jasper Content Marketing Manager