At Google Cloud Next 2025, AI dominated the headlines. But for cloud engineers, the most impactful developments came in the form of infrastructure upgrades, deployment automation, and new tools for operational visibility. These launches expand what teams can build—and how quickly they can bring it to production.
This article and video recap the most relevant technical updates for platform teams, data engineers, and builders deploying GenAI workloads.
TPUs Are Ready for Production at Scale
Google launched its seventh-generation TPU, Ironwood, which delivers a 3600x performance improvement over the first generation and up to 40x greater energy efficiency. Ironwood is now available to customers, giving teams access to the same family of chips Google uses to train models such as Gemini 2.5.
Key advantages:
- Higher availability across regions compared to GPUs
- More efficient for Transformer workloads
- Lower operational cost per training or inference job
For teams optimizing for scale, sustainability, or cost-efficiency, TPUs are worth evaluating as the default accelerator for GenAI training and inference.
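To make this concrete, the sketch below shows what targeting TPUs looks like from JAX: it lists the accelerators JAX can see and runs an XLA-compiled matrix multiply on them. It assumes a Cloud TPU VM with the JAX TPU backend installed; the shapes and the function itself are illustrative only.

```python
# Minimal JAX sketch for a Cloud TPU VM (assumes jax[tpu] is installed).
import jax
import jax.numpy as jnp

# List the accelerator devices JAX can see; on a TPU VM these are TPU devices.
print(jax.devices())

@jax.jit  # compile with XLA so the function runs on the TPU
def matmul(a, b):
    return jnp.dot(a, b)

key = jax.random.PRNGKey(0)
a = jax.random.normal(key, (4096, 4096), dtype=jnp.bfloat16)
b = jax.random.normal(key, (4096, 4096), dtype=jnp.bfloat16)

# block_until_ready() forces execution so the result reflects actual TPU work.
result = matmul(a, b).block_until_ready()
print(result.shape)
```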
App Hub and Database Center Improve GCP Observability
App Hub introduces a new way to organize cloud services. Teams can tag and group infrastructure (Cloud Run, GKE, Cloud SQL, etc.) into logical applications, regardless of project or environment.
Database Center gives centralized visibility across all databases, including configuration, engine versions, backup settings, and performance health.
These tools help platform teams:
- Standardize application management across projects
- Detect compliance risks or misconfigurations early
- Prioritize modernization work with better data
App Hub and Database Center provide operational clarity for managing large, distributed GCP environments.
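For teams that want to script this organization rather than click through the console, here is a hedged sketch of listing App Hub applications with the generic Google API client. It assumes the App Hub API exposes the standard projects.locations.applications collection; the project and location names are placeholders.

```python
# Hedged sketch: listing App Hub applications with the generic Google API client.
# Assumes Application Default Credentials and that the App Hub discovery document
# follows the usual projects.locations.applications layout.
from googleapiclient import discovery

service = discovery.build("apphub", "v1")
parent = "projects/my-project/locations/us-central1"  # placeholder project and location

response = service.projects().locations().applications().list(parent=parent).execute()
for app in response.get("applications", []):
    print(app["name"], app.get("displayName", ""))
```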
Firestore Now Supports MongoDB APIs
Firestore with MongoDB compatibility, now in preview, lets applications talk to Firestore through the MongoDB API directly. Developers can keep their existing MongoDB drivers and tools while benefiting from Firestore’s serverless, auto-scaling architecture.
Benefits include:
- No manual sharding, backup management, or maintenance
- Easy migration from self-managed MongoDB
- Ideal for event-driven and real-time workloads
This makes Firestore a practical choice for teams building mobile backends or modern web apps that previously depended on Mongo.
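To illustrate, the snippet below uses the standard pymongo driver against a Firestore database with MongoDB compatibility enabled. The connection URI, database, and collection names are placeholders; substitute the connection string that Firestore provides for your database.

```python
# Sketch: talking to Firestore through its MongoDB-compatible endpoint with pymongo.
from pymongo import MongoClient

client = MongoClient("mongodb://USERNAME:PASSWORD@HOST:27017/?tls=true")  # placeholder URI
db = client["orders_db"]   # hypothetical database name
orders = db["orders"]      # hypothetical collection

# Standard MongoDB driver calls work unchanged against the Firestore backend.
orders.insert_one({"order_id": "A-1001", "status": "shipped", "total": 42.50})
print(orders.find_one({"order_id": "A-1001"}))
```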
BigQuery Adds Agent Capabilities and Lakehouse Support
BigQuery continues its evolution into a full data platform. Several new features launched or reached general availability at Next 2025:
- BigQuery Data Agent: Generates SQL queries and dashboards from natural language prompts
- Serverless Spark Pipelines: Now GA, offering integrated PySpark and pipeline execution inside BigQuery
- Iceberg and Delta Lake integration: Enables use of open table formats in BigQuery and BigLake
Together, these upgrades reduce the effort required for prototyping, building, and deploying production-grade data pipelines.
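As a simple illustration of how these pieces come together in code, the sketch below runs a SQL aggregation with the google-cloud-bigquery client. The dataset and table are hypothetical; the same query pattern applies whether the table lives in native BigQuery storage or is an Iceberg table registered through BigLake.

```python
# Sketch: running a query against BigQuery from Python; the table name is hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

sql = """
    SELECT status, COUNT(*) AS orders
    FROM `my-project.sales.orders`   -- hypothetical dataset/table, could be a BigLake Iceberg table
    GROUP BY status
    ORDER BY orders DESC
"""

for row in client.query(sql).result():
    print(row["status"], row["orders"])
```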
Agent Engine and Agent Development Kit Enable End-to-End Agentic Workflows
To support agentic AI development, Google introduced:
- Agent Development Kit (ADK): An open-source framework for building multi-agent systems using reusable components and open protocols (MCP, A2A)
- Agent Engine: A fully managed runtime for hosting and serving agents
- Agentspace: An interface layer for business users interacting with AI agents
These tools allow teams to deploy complex agent systems with built-in communication protocols, monitoring, and integration options for both Gemini and third-party models.
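To give a feel for the developer experience, here is a minimal agent definition following the pattern from the ADK quickstart. The tool function, agent name, and configuration are illustrative assumptions, and class or parameter names may vary between ADK releases.

```python
# Minimal ADK-style agent sketch (assumes the google-adk package is installed).
from google.adk.agents import Agent

def get_order_status(order_id: str) -> dict:
    """Hypothetical tool: look up an order's status in a backend system."""
    return {"order_id": order_id, "status": "shipped"}

# A single agent backed by a Gemini model, exposing one tool.
root_agent = Agent(
    name="order_support_agent",
    model="gemini-2.0-flash",
    description="Answers customer questions about order status.",
    instruction="Use the get_order_status tool when the user asks about an order.",
    tools=[get_order_status],
)
```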
At Xebia, we use this stack to build agentic platforms for our clients, supporting customer service, business operations, and domain-specific automation.