How Canva Collects 25 Billion Events a Day

By building a disciplined data pipeline, Canva collects, processes, and distributes on the order of 25 billion events per day while keeping technical debt and cloud costs under control. The system is organized into three core stages: Structure (defining strict schemas), Collect (ingesting and enriching events), and Deliver (routing events to the appropriate destinations). Canva applied a series of optimizations around schema definition, client unification, asynchronous processing, and decoupled routing to protect data quality, system stability, and scalability. Through these practices, Canva not only handles data at massive scale but has also achieved significant infrastructure cost savings.

🏷️ Strict schemas: Canva locked down its analytics schema from the start. Every event must conform to a strictly defined Protobuf schema that guarantees full transitive compatibility, and the Datumgen tool enforces the schema rules automatically, so the data warehouse always knows the shape of incoming data.

📱 Unified client: Rather than building and maintaining separate analytics SDKs for iOS, Android, and web, Canva uses a single unified TypeScript analytics client that runs inside a WebView shell, reducing duplication and drift.

🚀 Asynchronous Kinesis pipeline: Canva’s ingestion layer relies on the unified client and an asynchronous, AWS Kinesis-backed enrichment pipeline. After schema validation, events are pushed asynchronously into Amazon Kinesis Data Streams (KDS), decoupling the ingestion endpoint from downstream processing and keeping latency low.

🚦 Decoupled router service: By splitting the pipeline cleanly, Canva avoids tight coupling between ingestion and delivery. A Router service sits between enrichment and consumption and sends each event to the right downstream consumer without letting any consumer slow down the others.

ACI.dev: The Only MCP Server Your AI Agents Need (Sponsored)

ACI.dev’s Unified MCP Server provides every API your AI agents will need through just one MCP server and two functions. One connection unlocks 600+ integrations with built-in multi-tenant auth and natural-language permission scopes.

Skip months of infra plumbing; ship the agent features that matter.

Star us on GitHub!


Disclaimer: The details in this post have been derived from the articles written by the Canva engineering team. All credit for the technical details goes to the Canva Engineering Team. The links to the original articles and videos are present in the references section at the end of the post. We’ve attempted to analyze the details and provide our input about them. If you find any inaccuracies or omissions, please leave a comment, and we will do our best to fix them.

Every product team wants data. Not just numbers, but sharp, trustworthy, real-time answers to questions like: Did this new feature improve engagement? Are users abandoning the funnel? What’s trending right now?

However, collecting meaningful analytics at scale is less about dashboards and more about plumbing.

At Canva, analytics isn’t just a tool for dashboards but a part of the core infrastructure. Every design viewed, button clicked, or page loaded gets translated into an event. Multiply that across hundreds of features and millions of users, and it becomes a firehose: 25 billion events every day, flowing with five nines of uptime.

Achieving that kind of scale requires deliberate design choices: strict schema governance, batch compression, fallback queues, and a router architecture that separates ingestion from delivery. 

This article walks through how Canva structures, collects, and distributes billions of events daily without drowning in tech debt or runaway cloud bills.

Their system is organized into three core stages: Structure, Collect, and Deliver.

Let’s look at each stage in detail.

Structure

Most analytics pipelines start with implementation speed in mind, resulting in undocumented types and incompatible formats. It works until someone asks why this metric dropped, and there is no satisfactory answer.

Canva avoided that trap by locking down its analytics schema from day one. Every event, from a page view to a template click, flows through a strictly defined Protobuf schema.

Instead of treating schemas as an afterthought, Canva treats them like long-term contracts. Every analytics event must conform to a Protobuf schema that guarantees full transitive compatibility: a consumer on any version of the schema can read events written against any other version.

Breaking changes like removing a required field or changing types aren’t allowed. If something needs to change fundamentally, engineers ship an entirely new schema version. This keeps years of historical data accessible and analytics queries future-proof.
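To make the compatibility rule concrete, here is a minimal TypeScript sketch of what generated event types might look like under such a policy. The event and field names are invented for illustration; they are not Canva’s actual schemas.

```typescript
// Hypothetical generated types -- a sketch of what full transitive
// compatibility implies for event definitions, not Canva's real schemas.

// v1: the schema can grow over time, but only by optional additions.
export interface TemplateClickEvent {
  eventId: string;        // required since day one; can never be removed
  templateId: string;     // required since day one; its type can never change
  experimentId?: string;  // added later, so optional -- older events simply omit it
}

// A genuinely breaking change (say, templateId becomes a number) is not an
// edit to the type above; it ships as a brand-new, separately versioned schema.
export interface TemplateClickEventV2 {
  eventId: string;
  templateId: number;     // the incompatible change lives only in the new version
}
```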

To enforce these schema rules automatically, Canva built Datumgen: a layer on top of protoc that goes beyond standard code generation.

Datumgen handles several parts of the schema workflow beyond standard code generation, including the documentation that ends up in Snowflake and the Event Catalog.

Every event schema must also list two human owners.

Fields must also include clear, human-written comments that explain what they mean and why they matter. These aren’t just helpful for teammates. They directly power the documentation shown in Snowflake and the Event Catalog.
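As a rough illustration of how that ownership and comment metadata could surface in a catalog, here is a hypothetical entry. The shape, the field names, and the assumption that the two owners are a technical contact and a business contact are ours, not Datumgen’s actual output.

```typescript
// A hypothetical event-catalog entry built from schema comments and owner
// annotations. Purely illustrative; not the real Datumgen data model.
export interface EventCatalogEntry {
  event: string;            // fully qualified schema name
  technicalOwner: string;   // assumed: engineer accountable for the definition
  businessOwner: string;    // assumed: analyst/PM accountable for how it's used
  fields: Array<{
    name: string;
    type: string;
    comment: string;        // the human-written explanation surfaced in Snowflake
  }>;
}

export const templateClickEntry: EventCatalogEntry = {
  event: "editing.template_click.v1",
  technicalOwner: "eng-analytics-platform",
  businessOwner: "growth-analytics",
  fields: [
    { name: "template_id", type: "string", comment: "ID of the template the user clicked." },
  ],
};
```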

Collect

The biggest challenge with analytics pipelines isn’t collecting one event, but collecting billions, across browsers, devices, and flaky networks, without turning the ingestion service into a bottleneck or a brittle mess of platform-specific hacks.

Canva’s ingestion layer solves this by betting on two things: a unified client and an asynchronous, AWS Kinesis-backed enrichment pipeline. Rather than building (and maintaining) separate analytics SDKs for iOS, Android, and web, Canva went the other way: every frontend platform uses the same TypeScript analytics client, running inside a WebView shell.

Only a thin native layer is used to grab platform-specific metadata like device type or OS version. Everything else, from event structure to queueing to retries, is handled in one shared codebase.
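A minimal sketch of how that split can look: the native layer contributes only platform metadata, while event structuring, queueing, and retries live once in shared TypeScript. Every name here is hypothetical, not Canva’s SDK.

```typescript
// Sketch of a unified analytics client backed by a thin native bridge.
// Interfaces and field names are illustrative assumptions.

interface NativeBridge {
  // The only per-platform code: report device/OS metadata to the shared client.
  getPlatformMetadata(): { deviceType: string; osVersion: string };
}

export class AnalyticsClient {
  private queue: object[] = [];

  constructor(private bridge: NativeBridge, private endpoint: string) {}

  // Shared once for iOS, Android, and web: structure the event and queue it.
  track(name: string, payload: Record<string, unknown>): void {
    this.queue.push({
      name,
      ...payload,
      ...this.bridge.getPlatformMetadata(),
      clientTimestamp: Date.now(),
    });
  }

  // Shared once: batch delivery with a simple retry-by-requeue on failure.
  async flush(): Promise<void> {
    if (this.queue.length === 0) return;
    const batch = this.queue.splice(0, this.queue.length);
    try {
      await fetch(this.endpoint, { method: "POST", body: JSON.stringify(batch) });
    } catch {
      this.queue.unshift(...batch); // keep the events; a later flush retries them
    }
  }
}
```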

This pays off in a few key ways, most obviously in less duplication and less drift between platforms.

Once events leave the client, they land at a central ingestion endpoint.

Before anything else happens, each event is checked against the expected schema. If it doesn’t match (for example, if a field is missing, malformed, or just plain wrong), it’s dropped immediately. This upfront validation acts as a firewall against bad data.

Valid events are then pushed asynchronously into Amazon Kinesis Data Streams (KDS), which acts as the ingestion buffer for the rest of the pipeline.

The key move here is the decoupling: the ingestion endpoint doesn’t block on enrichment or downstream delivery. It validates fast, queues fast, and moves on. That keeps response times low and isolates ingest latency from downstream complexity.
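A sketch of that hot path using the AWS SDK for JavaScript v3: validate, drop what fails, and write the rest to a Kinesis stream without waiting on anything downstream. The stream name and the validator below are placeholders, not Canva’s implementation.

```typescript
// Sketch of the ingestion endpoint's hot path: schema firewall, then an
// asynchronous hand-off to Kinesis Data Streams. Names are hypothetical.
import { KinesisClient, PutRecordsCommand } from "@aws-sdk/client-kinesis";

const kinesis = new KinesisClient({});
const RAW_EVENTS_STREAM = "analytics-raw-events"; // assumed stream name

// Stand-in for validation generated from the Protobuf definitions.
function matchesSchema(event: unknown): event is { eventId: string; name: string } {
  const e = event as { eventId?: unknown; name?: unknown };
  return typeof e?.eventId === "string" && typeof e?.name === "string";
}

export async function ingest(batch: unknown[]): Promise<void> {
  // Firewall: events that don't match the expected shape are dropped here.
  const valid = batch.filter(matchesSchema);
  if (valid.length === 0) return;

  // Queue fast and return; enrichment and delivery happen downstream.
  await kinesis.send(new PutRecordsCommand({
    StreamName: RAW_EVENTS_STREAM,
    Records: valid.map((e) => ({
      Data: new TextEncoder().encode(JSON.stringify(e)),
      PartitionKey: e.eventId, // spreads writes across shards
    })),
  }));
}
```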

The Ingest Worker pulls events from the initial KDS stream and handles the heavy lifting that the client can’t or shouldn’t do, such as enriching each event with trusted server-side context.

Once events are enriched, they’re forwarded to a second KDS stream that acts as the handoff to the routing and distribution layer.
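A sketch of that enrichment hop, assuming the worker already receives decoded batches (the consumption mechanics, such as KCL or enhanced fan-out, are omitted). The stream name and the specific enrichment fields are assumptions for illustration.

```typescript
// Sketch of an enrichment worker: take raw events, add server-side context,
// and forward them to the second stream. Names and fields are illustrative.
import { KinesisClient, PutRecordsCommand } from "@aws-sdk/client-kinesis";

const kinesis = new KinesisClient({});
const ENRICHED_EVENTS_STREAM = "analytics-enriched-events"; // assumed name

interface RawEvent { eventId: string; name: string; clientIp?: string }
interface EnrichedEvent extends RawEvent { serverTimestamp: number; country?: string }

// Placeholder for a real geo-IP or similar lookup service.
function lookupCountry(_ip?: string): string | undefined {
  return undefined;
}

function enrich(event: RawEvent): EnrichedEvent {
  return {
    ...event,
    serverTimestamp: Date.now(),            // trusted server-side clock
    country: lookupCountry(event.clientIp), // context the client can't provide reliably
  };
}

export async function handleBatch(events: RawEvent[]): Promise<void> {
  const enriched = events.map(enrich);
  await kinesis.send(new PutRecordsCommand({
    StreamName: ENRICHED_EVENTS_STREAM,
    Records: enriched.map((e) => ({
      Data: new TextEncoder().encode(JSON.stringify(e)),
      PartitionKey: e.eventId,
    })),
  }));
}
```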

This staging model brings two major benefits: the ingestion endpoint stays fast because enrichment happens off the request path, and downstream consumers get a single, enriched stream as their handoff point.

Deliver

A common failure mode in analytics pipelines isn’t losing data but delivering it too slowly. When personalization engines lag, dashboards go blank, or real-time triggers stall, it usually traces back to one culprit: tight coupling between ingestion and delivery.

Canva avoids this trap by splitting the pipeline cleanly. Once events are enriched, they flow into a decoupled router service.

The router service sits between enrichment and consumption. Its job is simple in theory but crucial in practice: get each event to the right place, without letting any consumer slow down the others.

In practice, the router consumes the enriched KDS stream and fans each event out to its registered downstream destinations.

Why decouple routing from the ingest worker? Because coupling them would tie enrichment to the pace of the slowest consumer: one slow or failing destination could back up the entire pipeline.
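Conceptually, the router can be sketched as a fan-out over independent destinations, where a slow or failing consumer affects only its own delivery path. The interfaces below are illustrative, not Canva’s router.

```typescript
// Sketch of router fan-out with per-destination isolation. Illustrative only.
interface AnalyticsEvent { eventId: string; name: string }

interface Destination {
  name: string;
  wants(event: AnalyticsEvent): boolean;            // routing rule for this consumer
  deliver(events: AnalyticsEvent[]): Promise<void>; // e.g. write to a queue, stream, or loader
}

export async function route(
  events: AnalyticsEvent[],
  destinations: Destination[],
): Promise<void> {
  // Fan out in parallel; allSettled keeps one failing or slow destination
  // from blocking delivery to the others (its retries are its own concern).
  await Promise.allSettled(
    destinations.map((d) => d.deliver(events.filter((e) => d.wants(e)))),
  );
}
```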

Canva delivers analytics events to a few key destinations, each optimized for a different use case, from the Snowflake warehouse behind analytics queries to the real-time consumers powering personalization and dashboards.

This multi-destination setup lets each consumer pick the trade-off it cares about: speed, volume, simplicity, or cost.

The platform guarantees “at-least-once” delivery. In other words, an event may be delivered more than once, but never silently dropped. That means each consumer is responsible for deduplication, whether by using idempotent writes, event IDs, or windowing logic.

This trade-off favors durability over purity. In large-scale systems, it’s cheaper and safer to over-deliver than to risk permanent data loss due to transient failures.
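For instance, a consumer might deduplicate by event ID along these lines. This is a toy sketch; a production consumer would bound the memory with a TTL or a time window, or lean on idempotent writes instead.

```typescript
// Toy consumer-side dedup for at-least-once delivery: remember recent IDs,
// skip redeliveries. Real consumers would expire entries or write idempotently.
const seen = new Set<string>();

export function dedupe<T extends { eventId: string }>(events: T[]): T[] {
  const fresh: T[] = [];
  for (const event of events) {
    if (seen.has(event.eventId)) continue; // duplicate redelivery -- drop it
    seen.add(event.eventId);
    fresh.push(event);
  }
  return fresh;
}
```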

Infrastructure Cost Optimization

Here’s how the team brought infrastructure costs down by over 20x, without sacrificing reliability or velocity.

SQS + SNS

The MVP version of Canva’s event delivery pipeline leaned on AWS SQS and SNS for queueing and fan-out, which made the initial build quick and convenient.

But convenience came at a cost. Over time, SQS and SNS accounted for 80% of the platform’s operating expenses.

That kicked off a comparison of streaming alternatives.

The numbers made the decision easy: KDS delivered an 85% cost reduction compared to the SQS/SNS stack, with only a modest latency penalty (10–20ms increase). The team made the switch and cut costs by a factor of 20.

Compress First, Then Ship

Kinesis charges by volume, not message count. That makes compression a prime lever for cost savings. Instead of firing events one by one, Canva batches events together and compresses each batch before writing it to Kinesis, as sketched below.
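This sketch assumes a batch is serialized as newline-delimited JSON and gzipped into a single Kinesis record; the real batching format and sizes aren’t specified in the source.

```typescript
// Sketch: pack many events into one compressed Kinesis record to cut the
// billable bytes. Batching format and partition key choice are assumptions.
import { gzipSync } from "node:zlib";
import { KinesisClient, PutRecordCommand } from "@aws-sdk/client-kinesis";

const kinesis = new KinesisClient({});

export async function shipBatch(events: object[], streamName: string): Promise<void> {
  // One record per batch: newline-delimited JSON, gzipped.
  // (Keep the compressed payload under the 1 MB per-record limit.)
  const payload = gzipSync(events.map((e) => JSON.stringify(e)).join("\n"));
  await kinesis.send(new PutRecordCommand({
    StreamName: streamName,
    Data: payload,                    // far fewer billable bytes than raw JSON
    PartitionKey: String(Date.now()), // simple spread across shards
  }));
}
```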

This tiny shift delivered a big impact on payload volume, and therefore on the Kinesis bill.

KDS Tail Latency

Kinesis isn’t perfect. While average latency stays around 7ms, tail latency can spike over 500ms, especially when shards approach their 1MB/sec write limits.

This poses a threat to frontend response times. Waiting on KDS means users wait too. That’s a no-go.

The fix was a fallback to SQS whenever KDS misbehaves: if a KDS write is too slow or fails, the event is diverted to an SQS queue instead, so the frontend never waits on a misbehaving shard.

This fallback also acts as a disaster recovery mechanism. If KDS ever suffers a full outage, the system can redirect the full event stream to SQS with no downtime. 
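A sketch of how such a fallback could be wired up, assuming a latency budget, stream name, and queue URL that are purely illustrative:

```typescript
// Sketch of a KDS-first write with an SQS fallback when the write is slow or
// fails. The 300 ms budget and the queue URL are illustrative assumptions.
import { Buffer } from "node:buffer";
import { KinesisClient, PutRecordCommand } from "@aws-sdk/client-kinesis";
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const kinesis = new KinesisClient({});
const sqs = new SQSClient({});
const FALLBACK_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/analytics-fallback";

export async function shipWithFallback(payload: Uint8Array, streamName: string): Promise<void> {
  const kdsWrite = kinesis.send(new PutRecordCommand({
    StreamName: streamName,
    Data: payload,
    PartitionKey: String(Date.now()),
  }));
  const budget = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error("KDS write exceeded latency budget")), 300),
  );

  try {
    // Whichever settles first wins: a fast KDS write, or the latency budget.
    await Promise.race([kdsWrite, budget]);
  } catch {
    // KDS is slow or failing: divert to SQS; a drain worker can replay later.
    await sqs.send(new SendMessageCommand({
      QueueUrl: FALLBACK_QUEUE_URL,
      MessageBody: Buffer.from(payload).toString("base64"), // SQS bodies are text
    }));
  }
}
```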

Conclusion

Canva’s event collection pipeline is a great case of fundamentals done right: strict schemas, decoupled services, typed clients, smart batching, and infrastructure that fails gracefully. Nothing in the architecture is wildly experimental, and that’s the point.

Real systems break when they’re over-engineered for edge cases or under-designed for scale. Canva’s approach shows what it looks like to walk the line: enough abstraction to stay flexible, enough discipline to stay safe, and enough simplicity to keep engineers productive.

For any team thinking about scaling its analytics, the lesson is to build for reliability, cost, and long-term clarity. That’s what turns billions of events into usable insight.

References:


SPONSOR US

Get your product in front of more than 1,000,000 tech professionals.

Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.

Space Fills Up Fast - Reserve Today

Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing sponsorship@bytebytego.com.
