MotherDuck: Serverless Analytics Without the Overhead

Data engineers know the pain: wrestling with cluster configurations at 2 AM, optimizing Spark jobs that take hours to run, or explaining why that "simple" aggregation query costs $500 in compute. The promise of modern data warehouses was scale, but it came at the cost of elegance and simplicity.
Enter MotherDuck – a serverless analytics platform that fundamentally rethinks data warehousing. Built on DuckDB's blazingly fast columnar engine, MotherDuck delivers sub-second queries without the overhead of large distributed systems. We’re excited to partner with MotherDuck on their launch of a new EU region, offering companies performant, efficient analytics with full European data residency.
Let’s dive into what makes MotherDuck such a compelling option in data warehousing.
Vertical scaling with per-user tenancy
Traditional data warehouses force organizations into a one-size-fits-all distributed computing model, even when 95% of workloads could run on a single powerful node. MotherDuck takes a different approach with its per-user tenancy model.
Each user connects to their own dedicated DuckDB instance (called a “Duckling”) that scales vertically based on workload requirements. Each instance is completely isolated, yet stays connected to the shared central data warehouse. No more warehouse over-provisioning, fighting over cluster resources, or dealing with noisy neighbor problems where users compete for compute in a multi-tenant environment.
Ducklings scale vertically to meet the needs of each workload: from Pulse instances for ad hoc queries and data exploration, to Jumbo instances for consistent heavy workloads, all the way up to Giga instances for massive backfills and other jobs where raw compute power is paramount.
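In practice, per-user tenancy is visible the moment you connect: every connection you open is served by your own Duckling. Here's a minimal sketch of that from Python, where the database name is a placeholder and the authentication details depend on your setup:
# Each user authenticates with their own MotherDuck token, so the connection
# below is served by that user's own, isolated Duckling.
import duckdb

# 'md:my_database' is a placeholder database name; authentication typically
# uses a motherduck_token environment variable or a browser-based login prompt.
con = duckdb.connect('md:my_database')
con.sql("SELECT 42 AS answer").show()
Because the instance is yours alone, a colleague's heavy backfill never competes with your exploratory queries.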
Local development, cloud scale
One of the most powerful features of MotherDuck’s architecture is the tight coupling between local development and scaled, serverless analytics in the cloud.
With MotherDuck’s dual execution model, users can leverage the power of their local machines in concert with the cloud. Queries are intelligently split between each environment, allocating resources optimally to process data closest to where it lives. Small datasets and development work run locally at zero cost, while production workloads seamlessly scale to the cloud.
On the workflow side, local development in DuckDB translates easily to cloud scale with MotherDuck. You can easily create a sample of your data, develop a query locally, and then merge to production.

The beauty is in the simplicity – a single ATTACH command bridges local and cloud. Here's an example workflow that shows how to query cloud data alongside local data:
# Local DuckDB development
import duckdb

# Start with local development - analyzing a CSV file
con = duckdb.connect('local.db')
con.sql("CREATE TABLE sales AS SELECT * FROM 'sales_data.csv'")
con.sql("SELECT product, SUM(revenue) FROM sales GROUP BY product").show()

# Scale to the cloud with one line - just attach MotherDuck
con.sql("ATTACH 'md:my_warehouse'")

# Now seamlessly query cloud data alongside local data
con.sql("""
    SELECT
        l.product,
        l.revenue AS local_revenue,
        c.revenue AS cloud_revenue
    FROM sales l
    JOIN my_warehouse.production.sales c
        ON l.product = c.product
""").show()

# Or push local development work to the cloud for production
con.sql("CREATE TABLE my_warehouse.production.sales_v2 AS SELECT * FROM sales")
Real-time query development with Instant SQL
Query development is inherently iterative: write a complex transformation, run it, wait, find an error in CTE #7, comment out code to debug, run again, wait again.
MotherDuck’s Instant SQL feature streamlines this painful cycle by making query development real-time: result set previews update as you type, so you can explore query results and iterate faster.
Complex CTEs that usually take hours to debug? Click around and instantly visualize any CTE in seconds. That mysterious NULL in your business metric calculation? Break apart your column expressions in your result table to pinpoint exactly what's happening.

Whether working with massive tables in MotherDuck, Parquet files in S3, Postgres tables, SQLite, MySQL, Iceberg, or Delta, you get instant feedback on every keystroke. Even AI-generated suggestions become trustworthy because you immediately see the suggestion applied to the result set before committing.
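Under the hood these are ordinary DuckDB queries, so the same sources can also be reached programmatically. Here's a hedged sketch of querying Parquet in S3 and an attached Postgres database; the bucket path, credentials, and connection string are all placeholders:
import duckdb

con = duckdb.connect('md:my_warehouse')  # or a purely local DuckDB connection

# Query Parquet files directly from S3 (needs S3 credentials, for example
# configured with DuckDB's CREATE SECRET; the path below is a placeholder)
con.sql("""
    SELECT COUNT(*) AS events
    FROM read_parquet('s3://my-bucket/events/*.parquet')
""").show()

# Attach an operational Postgres database read-only via DuckDB's postgres
# extension (the connection string is a placeholder)
con.sql("ATTACH 'dbname=app host=localhost user=readonly' AS pg (TYPE postgres, READ_ONLY)")
con.sql("SELECT * FROM pg.public.orders LIMIT 10").show()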
By combining DuckDB's local-first design with MotherDuck's dual execution architecture, you can speedrun ad-hoc queries as you type. Test transformations locally with cached samples, validate logic in real-time, then scale to production datasets – all without leaving your flow state.
MotherDuck for European teams
MotherDuck's EU region is now in private preview, with general availability arriving in October. This means European organizations can finally leverage DuckDB's blazing-fast analytics at cloud scale. Your data stays in the EU throughout its entire lifecycle while maintaining the sub-second query performance that MotherDuck customers like Trunkrs and Layers rely on. Combined with co-location benefits that eliminate cross-region egress fees and simplified compliance for GDPR requirements, European teams can now access fast, efficient analytics without the overhead of distributed systems.
We’re joining the team at MotherDuck for a webinar on October 14th at 11 AM CET, where we’ll share a demo of the platform and discuss where it fits into real-world data engineering use cases, including the benefits and the trade-offs.

Written by
Diederik Greveling