A Newsletter Focused on *What AI and Robotics Can DO* #2
With an additional section on what it means for schools
On March 18, I posted that I was going to add a weekly blog about what AI can now do. I’m finally getting to the second release of that, and I’ve added notes on what it means for schools.
Media Generation
According to The Decoder:
Google has updated several generative AI models for media generation, adding features across video, music, images, and voice synthesis. The Veo 2 video model now supports editing functions such as inpainting (removing elements), outpainting (expanding content), camera control, and interpolation for smoother transitions. The new Google Vids app allows users to create realistic video content using the Veo 2 model—without needing external editing software. These features are available in preview for selected testers.
What it means for schools:
(1) “Google Schools” (many) will soon be able to offer a full suite of AI-enabled tools for creative arts, graphic design, etc.
(2) Have I convinced you to start an entrepreneurship program yet? Students already have access to tools to help them tell their business's story!
Soon, you'll be able to generate high-quality, original video clips directly within Vids, powered by our advanced Veo 2 model. Need a specific shot to illustrate a point or add visual flair? Generate clips with realistic motion and diverse styles to supplement your content and tell a more powerful story — no complex editing software required. This feature will roll out to alpha customers in the coming weeks.
Generally, video generation quality is improving substantially, with newer tools able to maintain consistency across longer videos.
Similarly, Adobe is building AI agents for Photoshop and Premiere Pro. The agents can perform complex tasks to help students get started on projects.
Midjourney has released Version 7 with personalization. This is something we’ll see more of as memory expands (and just today we saw it from ChatGPT). Ideogram also upgraded its model. ChatGPT-4o image generation seems to have given everyone a little push.
Anyhow, create away!
Data Analysis
Do your students need help with their new business's data?
Drowning in data? Wish everyone on your team could easily find the story hidden in the numbers, without needing to be a spreadsheet wizard? We're giving everyone access to an on-demand analyst, available 24/7, by building Help me analyze in Google Sheets.
This on-demand analyst provides guidance to get you started, points out interesting trends you might have missed, suggests next steps for digging deeper, and creates clear, interactive charts to bring your data to life. This makes powerful analysis accessible to your entire team, spreadsheet expert or not. Help me analyze is coming to Sheets later this year.
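Until Help me analyze arrives, students can do this kind of trend-spotting themselves with a few lines of code. Here's a minimal sketch in Python with pandas; the business name and sales figures are invented for illustration:

```python
import pandas as pd

# Hypothetical monthly revenue for a student-run business
sales = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr", "May", "Jun"],
    "revenue": [120, 150, 145, 210, 260, 310],
})

# Month-over-month growth shows where the story in the numbers is
sales["growth_pct"] = sales["revenue"].pct_change().mul(100).round(1)

# A plain-language summary anyone on the team can read
best = sales.loc[sales["growth_pct"].idxmax()]
print(f"Biggest jump: {best['month']} ({best['growth_pct']}% over the prior month)")
# prints: Biggest jump: Apr (44.8% over the prior month)
```

The same few steps — load, compute a derived column, pull out the headline — are exactly what tools like Help me analyze automate.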
Writing
According to The Verge:
Another feature coming to Docs is a prompt to “Help me refine.” Rather than just doing the writing for you, it will leave comments with suggestions about how you can tighten up an existing draft. I’m familiar with this concept as an editor, and they’re hella useful. If you don’t have access to, you know, a person editor, an AI version might not be a terrible idea. This one will be available “later this quarter.”
What it means for schools: Students will have free, tutor-style writing support available to them.
Traditional Academic Research
Google has announced the launch of its upgraded Deep Research feature, now powered by the Gemini 2.5 Pro model.
This updated version, available to Gemini Advanced subscribers, aims to improve how users find, analyze, and understand information. With a focus on accuracy and insight, the tool is designed to simplify complex research tasks and deliver high-quality reports across various topics.
What this means for schools: Student research projects will be able to draw on one of the best research and reasoning models available, taking student research to new heights.
An Expanding Universal Tutor
Google has expanded NotebookLM's functionality with a new web search feature that helps users discover and incorporate online sources directly into their notebooks.
The system lets users describe what they're looking for, then automatically finds and organizes relevant sources from across the internet. Users can add these discovered sources to their notebooks with a single click, making them available for NotebookLM's existing features like creating briefings and generating FAQs. Google says the feature is currently rolling out to all NotebookLM users.
What this means for schools: As soon as I saw NotebookLM, I thought it was the foundation of an emerging universal tutor from Google and now I’m even more convinced. We have podcasters who will discuss any topic, including those in your Google docs. And the same tool will generate a mind map, search the web for additional support, and even engage in a conversation with the user. Isn’t this the foundation of a universal tutor?
Learning from Images
According to The Verge:
Google is adding multimodal capabilities to its search-centric AI Mode chatbot that enable it to “see” and answer questions about images, as it expands access to AI Mode to “millions more” users.
The update combines a custom version of Gemini AI with the company’s Lens image recognition tech, allowing AI Mode Search users to take or upload a picture and receive a “rich, comprehensive response with links” about its contents. The multimodal update for AI Mode is available starting today and can be accessed in the Google app on Android and iOS.
What this means for schools: Students who want to learn more about an image they come across can easily do so.
UK Murder Prediction Tool
The UK is creating a ‘murder prediction’ tool to identify the people most likely to kill.
What this means for schools: There are so many learning opportunities here. Students can explore fundamental AI concepts like predictive analytics and machine learning, learning how algorithms identify behavioral patterns based on data. This also opens discussions on data ethics and privacy, especially since the tool draws from sensitive sources like mental health records and domestic abuse reports, raising questions about informed consent and the limits of surveillance. The initiative highlights the risks of bias in AI, as predictive tools often reflect and amplify systemic inequalities, disproportionately targeting marginalized communities. Legal and human rights dimensions can be examined, such as the conflict between predictive policing and the presumption of innocence, along with broader implications for civil liberties. Students can also assess the societal consequences, including the potential for stigmatization and eroded trust in law enforcement. Ultimately, this example challenges students to critically evaluate the role of AI in governance and public safety, weighing its potential benefits against ethical, legal, and social costs, equipping them with the skills needed to navigate an increasingly AI-integrated world.
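Students can see the bias-amplification effect concretely with a deliberately tiny simulation. This is a classroom sketch with invented numbers, not a description of how the UK tool actually works: if one neighborhood is patrolled more heavily, identical behavior produces more arrest records there, and a naive model trained on those records learns the patrolling pattern rather than the behavior.

```python
from collections import Counter

# Toy dataset of (neighborhood, was_arrested) records. Neighborhood A is
# patrolled twice as heavily, so the same behavior yields more arrests there.
# All values are invented for classroom illustration only.
records = (
    [("A", True)] * 40 + [("A", False)] * 60
    + [("B", True)] * 20 + [("B", False)] * 80
)

# A naive "risk model": predicted risk = historical arrest rate per neighborhood
arrests = Counter(n for n, arrested in records if arrested)
totals = Counter(n for n, _ in records)
risk = {n: arrests[n] / totals[n] for n in totals}

print(risk)  # {'A': 0.4, 'B': 0.2}
# The model scores everyone in A as twice as "risky" — it has learned
# the policing pattern in its training data, not the underlying behavior.
```

Asking students why the two scores differ, and what data would be needed to tell patrolling intensity apart from actual behavior, makes the abstract fairness debate tangible.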
AI Capabilities and Human-Level Intelligence
According to the new Stanford AI Index 2025, the intelligence of the models keeps rapidly advancing.
A new report suggests that we could even reach human-level intelligence in machines by 2027.
Google recently suggested similar capabilities by 2030.
What this means for schools: Schools will need to fundamentally reimagine curricula to emphasize uniquely human skills that complement rather than compete with AI, such as creative thinking, ethical reasoning, and interpersonal collaboration. Educational institutions must simultaneously integrate AI literacy across all subjects while developing clear policies on appropriate AI use in assignments and assessments. This technological shift will likely accelerate personalized learning through AI tutors and assistants while requiring significant professional development for educators, ultimately demanding that schools balance technological implementation with maintaining equitable access and preserving crucial human elements of the educational experience.
Robot Acceleration
Robotics continues to explode.
Physical AI research, which is designed to support robotics, continues to expand.
We now even have robot swarms designed to build smart aircraft. Robots aren't just cars or humanoids; we also have tiny hopping robots.
Hyundai is going to buy tens of thousands of robots.
Eventually, advances in physical AI will be combined with advances in digital AI in a drive to produce more generalized intelligences.
What This Means for Schools: Educational institutions must now expand their curricula to include robotics education at all levels, from elementary schools to universities, preparing students for a future where human-robot collaboration will be commonplace. This shift requires schools to develop new teaching methodologies that integrate hands-on robotics experience with traditional subjects, creating interdisciplinary learning environments that foster innovation and technological literacy. As companies like Hyundai invest in tens of thousands of robots, and as we see developments like robot swarms building aircraft and tiny hopping robots, schools must focus not only on technical skills but also on nurturing critical thinking and ethical understanding about automation's societal impact.
Thank you for supporting my work!