Notification Infrastructure

How Notification Batching and Digests Actually Work (2026)

Nikita Navral
May 11, 2026

Last Updated: May 2026

Last year, a team we were working with shipped a collaboration feature. Users could comment on shared documents, tag teammates, leave reactions. Within two weeks, their most active workspace had generated over 12,000 comment notifications in a single day. Their top users started turning off notifications entirely. The feature that was supposed to drive engagement was actively pushing people away.

The fix was not to send fewer notifications. It was to send them smarter. Notification batching and digest systems solve this exact problem: they take a stream of individual events and compress them into a single, useful message. "Sarah, Alex, and 4 others commented on your document" instead of six separate pings. It sounds simple, but getting it right in production is where things get interesting.

This post covers how batching actually works under the hood, the architecture decisions that matter, and the non-obvious design choices we have learned from building and operating these systems. If you have ever tried to implement notification grouping and found it more complex than expected, this is the context you were probably missing.

What Is Notification Batching (and How Is It Different from a Digest)?

These two terms get used interchangeably in most documentation, and that is where confusion starts. They are related but distinct concepts.

Batching is the mechanism. It is the process of collecting multiple notification events over a time window or until a count threshold is reached, and then processing them as a group instead of individually. Batching happens at the system level. The user does not necessarily see the word "batch" anywhere. They just see fewer, more useful notifications.

A digest is the output format. It is the actual message the user receives that summarizes the batched events. A daily email that says "Here is what happened in your projects today" is a digest. The batching system decided to hold those events. The digest template decided how to present them.

Here is why the distinction matters in practice: you can batch notifications and still deliver them individually (just delayed), or you can batch them and deliver a single digest. The batching logic and the rendering logic are separate concerns, and treating them as one leads to rigid systems that are hard to extend.

Consider how GitHub handles PR reviews. When three teammates review your pull request within a few minutes, GitHub batches those events and sends you a single notification: "3 new reviews on your PR." That is batching (collecting the events) plus a digest (the summary format). But GitHub also lets you expand each review inline in their notification center. The same batch, two different renderings depending on the channel.

Notification aggregation, notification grouping, notification bundling: these are all variations of the same concept. The core idea is always the same: multiple events in, far fewer messages out.

Why Batching Reduces Fatigue Without Losing Engagement

The data on notification overload is stark. Per Business of Apps' Push Notifications Statistics, the average US smartphone user receives 46 push notifications per day. That number has been climbing steadily, and the user response is predictable: they start ignoring or disabling notifications entirely.

The same source finds that sending 3 to 6 push notifications per week causes 40% of users to disable notifications entirely. The threshold is lower than most teams assume, and it falls inside the cadence many products consider normal.

But here is the part most teams miss: the problem is not notification volume. It is notification interrupt volume. Users do not mind getting a lot of information. They mind getting interrupted a lot. A single digest email summarizing 15 project updates is useful. Fifteen separate push notifications over three hours is hostile.

Batching lets you decouple the volume of events your system generates from the number of interrupts the user experiences. Your backend can fire hundreds of notification events per user per day. The batching layer absorbs that volume and delivers it in a cadence the user can process. SuprSend's best practices for batching and digest docs cover the same trade-off in more depth.

The engagement benefit is counterintuitive. You would expect that fewer notifications means less engagement. In practice, the opposite happens. When every notification a user receives is substantive (a digest with 8 meaningful updates vs. 8 individual pings they have learned to ignore), open rates go up, not down. Users start trusting their notification channel again because the signal-to-noise ratio improved.

Batch-on-Write vs Batch-on-Read: Two Architecture Approaches

This is the first major architectural decision you face when building a notification batching system, and it has cascading consequences for complexity, latency, and reliability.

Batch-on-Write

In this model, you decide at the time an event arrives whether to deliver it immediately or hold it in a batch. When a new comment event comes in, your system checks: is there an open batch window for this user and this context? If yes, append the event to the batch. If no, start a new batch window.

The batch window is typically time-based (collect events for the next 5 minutes) or count-based (collect up to 10 events) or both (whichever threshold hits first). When the window closes, the system processes the batch: renders the digest template with all collected events and sends it.

How it works in practice:

  1. Event arrives ("new comment on post #42 by Alex")
  2. System checks for an open batch window for this user + grouping key (post #42)
  3. If no open window: create one, set a 5-minute timer, store the event
  4. If open window exists: append the event to the batch
  5. When the timer fires: render a digest with all collected events and deliver
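The steps above can be sketched in a few lines. This is an illustrative in-memory version: a production system would persist open batches and drive the timers from a durable job queue, and names like `Batcher` and `flush_due` are our own, not any platform's API.

```python
import time

class Batcher:
    """In-memory sketch of batch-on-write with time and count thresholds."""

    def __init__(self, window_secs=300, max_items=10):
        self.window_secs = window_secs      # time threshold (5 minutes)
        self.max_items = max_items          # count threshold
        self.open_batches = {}              # (user_id, group_key) -> batch dict

    def add_event(self, user_id, group_key, event, now=None):
        """Append to an open batch, or open a new window. Returns the closed
        batch immediately if the count threshold was hit, else None."""
        now = now if now is not None else time.time()
        key = (user_id, group_key)
        batch = self.open_batches.get(key)
        if batch is None:
            # No open window: start one with a deadline window_secs from now.
            batch = {"deadline": now + self.window_secs, "events": []}
            self.open_batches[key] = batch
        batch["events"].append(event)
        if len(batch["events"]) >= self.max_items:
            return self.open_batches.pop(key)   # count threshold closes early
        return None

    def flush_due(self, now=None):
        """Close and return every batch whose timer expired. In production this
        is driven by a scheduler or job queue, not a polling loop."""
        now = now if now is not None else time.time()
        due = {k: b for k, b in self.open_batches.items() if b["deadline"] <= now}
        for k in due:
            del self.open_batches[k]
        return due
```

When a batch comes back from `flush_due`, the caller renders the digest template with the collected events and hands it to the delivery layer.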

Why batch-on-write is almost always the right choice: It is simpler to reason about, easier to debug, and more predictable for the user. Each batch has a clear lifecycle: open, collecting, closed, delivered. You can inspect any batch and see exactly what events it contains and when it will fire. When something goes wrong (a user did not get a notification), you can trace it to a specific batch window.

The main complexity is managing the timers. In a distributed system, you need reliable scheduled execution. If your batch timer fires at T+5 minutes and the worker handling it crashes, you need to make sure another worker picks it up. This is solvable with any robust job queue (SQS with visibility timeouts, Redis-based delay queues, database polling) but it does require thought.
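One way to get crash-safe timers is a lease pattern, the same idea behind SQS visibility timeouts: a worker claims a due batch for a lease period, and if it crashes before acknowledging, the lease expires and another worker can pick the batch up. A minimal sketch under those assumptions (the `TimerStore` name and the in-memory dict are illustrative; in production this state lives in a database or Redis):

```python
class TimerStore:
    """Sketch of lease-based batch timers so no batch is lost to a crash."""

    def __init__(self, lease_secs=30):
        self.lease_secs = lease_secs
        self.timers = {}   # batch_id -> {"fire_at": t, "leased_until": t}

    def schedule(self, batch_id, fire_at):
        self.timers[batch_id] = {"fire_at": fire_at, "leased_until": 0}

    def claim_due(self, now):
        """Return one due, unleased batch id and lease it, else None."""
        for batch_id, t in self.timers.items():
            if t["fire_at"] <= now and t["leased_until"] <= now:
                t["leased_until"] = now + self.lease_secs   # take the lease
                return batch_id
        return None

    def ack(self, batch_id):
        """Worker finished delivery: remove the timer permanently."""
        self.timers.pop(batch_id, None)
```

If worker A claims a batch and dies, worker B sees the same batch become claimable once the lease expires, so the digest is delayed rather than dropped.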

Batch-on-Read

In this model, every event is stored individually. The batching happens when the notification is actually read or delivered. A scheduled job runs (say, every morning at 9 AM) and queries all undelivered events for each user, groups them, renders digests, and sends them.

This model is conceptually simpler at write time (just store the event) but pushes complexity to the read side. The morning digest job has to query potentially millions of users, fetch their undelivered events, group them by context, and render unique digests for each. That is a heavy batch job, and if it fails or runs slowly, everyone's digest is delayed.
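The read-side job boils down to one query plus a grouping pass. A sketch using an illustrative schema (table and column names are ours, not from any particular product):

```python
import sqlite3
from collections import defaultdict

def run_digest_job(conn):
    """Batch-on-read: group all undelivered events per user, per context,
    then mark them delivered. The caller renders one digest per user."""
    rows = conn.execute(
        """SELECT user_id, context, payload FROM events
           WHERE delivered = 0 ORDER BY user_id, created_at"""
    ).fetchall()
    digests = defaultdict(lambda: defaultdict(list))
    for user_id, context, payload in rows:
        digests[user_id][context].append(payload)   # group by user, then context
    conn.execute("UPDATE events SET delivered = 1 WHERE delivered = 0")
    return digests
```

At scale this single query becomes the hard part: it has to be sharded or paginated across millions of users, which is exactly the operational weight the batch-on-write model avoids.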

When batch-on-read makes sense: Fixed-schedule digests. If your product sends a "Daily Summary" email at 9 AM regardless of activity volume, batch-on-read is a natural fit because the delivery schedule is predetermined. You are not making a per-event decision; you are running a scheduled report.

When it falls apart: Dynamic batching. If you want "batch comments for 5 minutes after the first one," batch-on-read does not work well because the batch window is event-driven, not clock-driven. You would have to simulate per-event windows in your read query, which gets messy fast.

Most production systems use batch-on-write for event-driven batches and batch-on-read for scheduled digests. They are not mutually exclusive. Linear, for example, uses event-driven batching for real-time notification grouping in their inbox and a separate daily digest email that is more of a batch-on-read pattern.

Common Batching Patterns (with Real Examples from Slack, GitHub, Linear, Figma)

After working with dozens of teams implementing notification batching, we see the same patterns recurring. Here are the ones that matter, illustrated with how real products handle them.

Pattern 1: Activity-Based Batching (Slack)

Slack batches channel activity when you are away. Instead of 47 individual push notifications for messages in #engineering, you get a single "You have 47 new messages in #engineering." The batching is context-aware: it only batches when you are not actively viewing the channel. The moment you open the channel, the batch resets.

The key design decision here is the grouping key. Slack groups by channel, not globally. You still get separate notifications for #engineering and #design because those are different contexts the user cares about independently. This sounds obvious, but many teams start with global batching ("batch all notifications for user X") and then realize users need more granular grouping.
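In code, the difference is just what goes into the key. A hypothetical sketch:

```python
def grouping_key(user_id, event, per_context=True):
    """The grouping key decides which events share a batch."""
    if per_context:
        # Per-context: #engineering and #design batch separately.
        return (user_id, event["channel_id"])
    # Global: every event for this user lands in one batch.
    return (user_id,)
```

Starting global and retrofitting per-context keys later usually means migrating open batches, so it pays to pick the granular key up front.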

Pattern 2: Entity-Scoped Digests (GitHub)

GitHub batches activity around a specific entity: a pull request, an issue, a discussion. "3 new reviews on your PR" is scoped to one PR. If you have reviews on two different PRs, you get two separate notifications. The entity (PR, issue) is the grouping key.

GitHub also demonstrates a critical batching pattern: different rendering per channel. In your email, you get a full digest with each reviewer's comments inline. In the notification bell on the web, you get a grouped entry you can expand. On mobile push, you get a count. Same batch, three different digest templates. This is where most homegrown batching systems fall short. They build batching for email and then realize they need completely different rendering for push and in-app.

Pattern 3: Time-Windowed Project Digests (Linear)

Linear sends daily digest emails that summarize all issue updates, comments, and status changes across your projects. This is the classic batch-on-read pattern: a scheduled job collects everything that happened since the last digest and renders a summary.

What Linear does well is information hierarchy in the digest. High-priority issues and direct mentions are at the top. Status changes are in the middle. Comments on issues you are watching are at the bottom. The digest is not a flat list; it is a prioritized summary. This is a design decision that separates useful digests from ones users learn to ignore.

Pattern 4: Thread-Scoped Comment Batching (Figma)

Figma batches comment threads on design files. "5 new comments on Design File" with inline previews of the actual comments. The grouping key is the design file, and the batch window is activity-based: if comments keep coming, the batch keeps growing until there is a pause.

Figma demonstrates another important pattern: batch item limits. In the notification, you see the first 3 comments and then "+2 more." The digest does not try to show all 50 comments if a thread gets heated. It shows the most recent or most relevant ones and provides a link to see the rest. Without truncation logic, digests become as overwhelming as the individual notifications they were meant to replace.
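Truncation logic is small but easy to get wrong: keep the displayed slice short while preserving the true total so the template can still render "+2 more". A sketch (the function name and return shape are ours):

```python
def truncate_batch(events, retain=3, keep="last"):
    """Keep only the first or last N items for display, but preserve the
    real total so the digest can show an accurate overflow count."""
    total = len(events)
    shown = events[-retain:] if keep == "last" else events[:retain]
    return {"shown": shown, "total": total, "overflow": max(0, total - retain)}
```

For comment threads, `keep="last"` (most recent) is usually right; for task assignments, `keep="first"` preserves the context-setting event.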

Pattern 5: Critical Alert Bypass

This is the pattern teams forget until it causes an incident. Some notifications should never be batched. A payment failure, a security alert, a system outage notification: these need to arrive immediately, regardless of any batching window that is open.

The implementation is a priority flag or category check that runs before the batching logic. If the event is flagged as critical, skip the batch entirely and deliver immediately. Every batching system needs this escape hatch, and it should be configurable per notification type, not hardcoded.
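A sketch of that gate, with a hypothetical category set standing in for what should really be per-type configuration:

```python
# Illustrative config: which categories skip batching entirely.
BYPASS_CATEGORIES = {"payment_failure", "security_alert", "system_outage"}

def route_event(event, batcher_add, deliver_now):
    """Priority check that runs before any batching logic."""
    if event.get("priority") == "critical" or event.get("category") in BYPASS_CATEGORIES:
        deliver_now(event)          # skip the batch window entirely
        return "immediate"
    batcher_add(event)              # normal path: hold in the open batch
    return "batched"
```

The important property is that the check runs before the batcher ever sees the event, so a critical alert can never be trapped behind an already-open window.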

How to Design a Digest Template That Users Actually Read

The batching logic is only half the problem. The other half is rendering a digest that is actually useful. And this is where most implementations get lazy.

The Variable-Length Problem

Your digest template needs to handle 3 items and 300 items gracefully. This is the single biggest design challenge in digest notifications, and most teams do not think about it until they see a digest email that is 40 screens long.

The solution is a tiered rendering approach:

  • 1 to 3 items: Show full detail for each. The user can process this without summarization.
  • 4 to 10 items: Show a headline for each with a link to the full context. Compact but still individually addressable.
  • 11+ items: Show the top 3 to 5 by priority or recency, then a count: "+47 more updates. View all." Anything beyond this threshold becomes noise if rendered individually.
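The tiers above translate directly into a small dispatch function. Thresholds here are the ones from the list; tune them per product:

```python
def digest_tier(items, top_n=5):
    """Pick a rendering mode based on batch size (tiered truncation)."""
    n = len(items)
    if n <= 3:
        return {"mode": "full", "items": items}          # full detail per item
    if n <= 10:
        return {"mode": "headlines", "items": items}     # headline + link each
    # 11+: show a top slice, summarize the rest as a count
    return {"mode": "summary", "items": items[:top_n], "more": n - top_n}
```

The `more` count is what the template renders as "+47 more updates. View all."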

This is not just about email layout. Each channel needs its own truncation rules. A push notification showing "3 new comments" is fine. A push notification trying to show 15 items is not.

Multi-Channel Rendering

A single batch of events needs to render differently depending on where it is delivered. This is a non-obvious architectural requirement that trips up most teams.

  • Email: Full summary table with details, links, and context. This is where you can be verbose because the user chose to read it.
  • Push notification: Count only. "5 new comments on your document." No detail. The goal is to get the user to open the app, not to deliver the content in the push payload.
  • In-app notification center: Expandable group. Show the summary line, let the user click to expand and see individual items. This is where interaction design matters most.
  • Slack/chat: Brief summary with a deep link. "3 updates on Project Alpha. View details."

The template engine needs access to the full array of batched items, a count, and metadata about the batch (time range, grouping key). This lets template authors make rendering decisions: "if batch.count > 5, show summary view; else show detail view."
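Here is what per-channel rendering of one batch context might look like. The field names (`count`, `items`, `entity`) are illustrative stand-ins for whatever your template engine actually exposes:

```python
def render(channel, batch):
    """One batch, three channel renderings."""
    count, items = batch["count"], batch["items"]
    if channel == "push":
        # Count only: the goal is to pull the user into the app.
        return f"{count} new comments on {batch['entity']}"
    if channel == "email":
        # Verbose: one line per item, with full detail.
        lines = [f"- {i['actor']}: {i['text']}" for i in items]
        return "\n".join([f"{count} new comments on {batch['entity']}:"] + lines)
    if channel == "in_app":
        # Summary line plus the items for an expandable group.
        return {"summary": f"{count} new comments", "expandable": items}
    raise ValueError(f"unknown channel: {channel}")
```

Note that all three renderings read from the same batch object; the batching layer never needs to know how each channel presents the result.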

User-Controlled Digest Frequency

The best batching system is one where users can choose their own cadence. Some users want real-time notifications for everything. Others want a single weekly summary. Your notification preference center should expose digest frequency as a per-category setting: real-time, hourly, daily, weekly. SuprSend's user preferences docs show how to model this per-category and read it from the workflow.

This means your batching logic has to read user preferences at batch-open time. If a user has set "daily digest" for project updates, the batch window for that category is 24 hours, not the default 5 minutes. This interaction between batching and preferences is where the two systems need tight integration.
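A sketch of reading the preference at batch-open time and deriving the window from it (the mapping values and the 5-minute default are illustrative):

```python
# Illustrative mapping from a user's per-category preference to a window.
WINDOW_SECS = {
    "real_time": 0,          # no batching: deliver immediately
    "hourly": 60 * 60,
    "daily": 24 * 60 * 60,
}

def batch_window_for(prefs, user_id, category, default=300):
    """Look up the user's choice for this category; fall back to the
    product default (here, 5 minutes) when no preference is set."""
    choice = prefs.get((user_id, category))
    if choice is None:
        return default
    return WINDOW_SECS[choice]
```

A window of 0 is the signal to skip batching entirely, which conveniently reuses the same bypass path as critical alerts.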

What to Look For in a Notification Platform's Batching Support

Most notification platforms support batching at a basic level. The interesting question is what kind of batching, and how configurable it is. If you are evaluating platforms for a workload where batching matters, here are the capabilities worth checking on each vendor's docs before committing.

Two batching primitives, not one

Some platforms expose a single "batch" or "digest" step. Others expose both: an event-driven batch (timing relative to the first event) and a scheduled digest (timing fixed to a recurring schedule). The two solve different problems. Comment threads need event-driven batching. Daily summary emails need scheduled digests. If your product needs both, a platform that conflates them into a single primitive will force awkward workarounds.

Custom grouping keys

Can you set a grouping key per workflow (post_id, project_id, channel_id)? Or are batches scoped only to user-level? Per-entity grouping is the difference between a useful "5 comments on your post" digest and an overwhelming "47 updates across your account."

Time-window flexibility

Does the platform support fixed windows (the window closes a set time after the first event) and dynamic windows (each new event extends the window)? Both have valid use cases. Also check whether the window can be a function of user preferences, not just a static value in the workflow.
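The two timing models differ only in which event anchors the deadline. A sketch, here labeling the first-event-anchored window "fixed" and the activity-extended window "dynamic" (naming varies by platform):

```python
def window_deadline(event_times, window_secs, mode="fixed"):
    """Compute when a batch window closes from the events it has collected."""
    if mode == "fixed":
        return event_times[0] + window_secs    # anchored to the first event
    if mode == "dynamic":
        return event_times[-1] + window_secs   # each new event pushes it out
    raise ValueError(f"unknown mode: {mode}")
```

With a dynamic window, a busy comment thread keeps the batch open until activity pauses, which is the Figma-style behavior described earlier; with a fixed window, delivery latency is bounded no matter how long the activity burst runs.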

Batch item limits

When 200 events land in one batch, what does the digest look like? Platforms that let you retain only the first N or last N items prevent runaway digest emails. Platforms without this control will happily render a 200-row table.

Critical alert bypass

Can you conditionally skip the batch for high-priority events (payment failures, security alerts) without forking the workflow into two parallel paths? Native bypass is much cleaner than the manual branching most platforms force.

Multi-channel digest rendering

Does the same batch render differently across email, push, and in-app, or does the platform force a single template across all channels? This is one of the most common gaps in homegrown batching, and not every platform solves it either.

Template-side access to batch metadata

Can the digest template iterate over the actual batched events with a known variable name, render the count, and access metadata about the batch (time range, grouping key value)? Without this, you cannot render "5 comments by Sarah, Alex, and 3 others on Document Z" — only generic count strings.

User preference integration

Does digest frequency live in the user's preference center as a per-category setting, with the batching engine reading the user's choice at runtime? Or does the workflow have to hard-code one frequency per audience? Per-user, per-category control matters more than most teams think on day one.

Run this checklist against each vendor's public docs before signing. The differentiator across platforms is not whether they support batching — it is how granular and configurable the batching primitives are.

How SuprSend Handles Batching and Digest

SuprSend exposes two distinct workflow nodes for this problem: a Batch node for event-driven batching (timing relative to the first event) and a Digest node for recurring scheduled summaries (daily, weekly, timezone-aware). Keeping them as workflow nodes means you can branch before the batch (skip batching for critical alerts) and branch after (route to different channels based on batch size). Splitting batch from digest means each primitive does one thing well rather than overloading a single node with conflicting timing models.

Here is how the key design decisions play out:

Grouping keys. Every Batch and Digest node takes a batch key parameter. If you are building a social or collaborative application, this is the difference between "batch all comment notifications for this user" (overwhelming daily digest) and "batch comments on post #42 for this user" (useful per-entity summary). The batch key is what makes batching granular enough to be helpful.

Batch windows and item limits. You configure a time window (fixed, dynamic, or relative) and a Retain Items count (2–100, default 10). When the window closes, the digest renders. You can choose to retain the first N items (show the earliest activity) or the last N items (show the most recent). This matters more than you would think: for a comment thread, the most recent comments are usually most relevant. For task assignments, the first one sets the context.

Critical alert bypass. A conditional branch before the Batch node checks the event priority. If the event is flagged as critical (payment failure, security alert, SLA breach), the workflow skips the batch entirely and routes to immediate delivery. This is not an edge case. Every team we have worked with eventually needs this, and retrofitting it after the fact is painful. SuprSend's best practices for batching and digest docs walk through the same pattern.

Multi-channel digest rendering. When the batch closes, the digest renders separately for each delivery channel. The template has access to $batched_events (the array of event properties for every batched event) and $batched_events_count (the total count, which stays accurate even when Retain Items truncates the array). The email template iterates over $batched_events to render a summary table with links. The push notification template uses $batched_events_count to render only the count. The in-app template renders an expandable group. You are not forced to compromise on one format that works poorly everywhere.

User preference integration. The Digest node supports recipient-based timezones and recurring schedules per user. If a user has set "daily digest" for a notification category, the Digest node closes the window on that user's local schedule, not a global clock. No custom logic needed per workflow.

The whole system is configurable through a visual workflow builder. You drag a Batch or Digest node into the workflow, set the batch key and window, connect it to channel nodes, and configure templates per channel using $batched_events. No code required to set up or modify batching rules.

FAQ

What is the difference between notification batching and a digest?

Batching is the mechanism of collecting multiple notification events over a time window before processing them. A digest is the formatted summary message that presents those batched events to the user. Batching is the system behavior; the digest is the user-facing output. You can batch events and still deliver them individually (just delayed), or batch them and deliver a single digest summary.

What is a good batch window duration for notifications?

It depends on the notification type and urgency. For collaborative features (comments, reactions), 2 to 10 minutes works well because activity often comes in bursts. For project updates and status changes, hourly or daily digests are more appropriate. For transactional notifications like payment confirmations, do not batch at all. Start with 5 minutes for activity-based notifications and adjust based on user feedback and engagement data.

Should critical notifications be batched?

No. Payment failures, security alerts, account lockouts, and system outage notifications should always bypass batching and deliver immediately. Your batching system needs a conditional check that evaluates event priority before adding it to a batch. If you do not build this escape hatch, you will eventually have an incident where a critical alert sat in a batch window for 5 minutes instead of arriving instantly.

How do grouping keys work in notification batching?

A grouping key determines which events get batched together. Without a grouping key, all notifications of a type batch into one digest ("You have 50 updates"). With a grouping key like post_id or project_id, events batch per entity ("5 new comments on your blog post" and "3 updates on Project Alpha" as separate notifications). Granular grouping keys produce more useful digests because each one has a clear context.

Can users control their own digest frequency?

Yes, and they should be able to. Best practice is to expose digest frequency as a per-category setting in your notification preference center: real-time (no batching), hourly, daily, or weekly. The batching system reads the user's preference and adjusts the batch window accordingly. This gives power users real-time delivery while allowing others to consolidate into daily summaries.

How should digest notifications render differently across channels?

Email digests can include full detail: summary tables, individual item descriptions, and links. Push notifications should only show a count ("5 new comments on your document") because the goal is to bring the user to the app, not deliver all content in the push payload. In-app notifications work best as expandable groups: a summary line the user can click to see individual items. Each channel needs its own digest template for the same batch of events.

What is batch-on-write vs batch-on-read?

Batch-on-write decides at event arrival time whether to hold the event in a batch or deliver immediately. The system maintains active batch windows and appends events as they arrive. Batch-on-read stores all events individually and groups them at delivery time (e.g., a daily job that collects all events from the past 24 hours). Batch-on-write is better for dynamic, event-driven batching. Batch-on-read is better for fixed-schedule digests like daily summaries.

How do I avoid digest emails that are too long?

Use tiered rendering based on batch size. For 1 to 3 items, show full detail. For 4 to 10, show headlines with links. For 11 or more, show the top 3 to 5 items by priority or recency, then a count with a "View all" link. Also set batch item limits: retain only the first N or last N items per batch to cap the maximum digest length regardless of how many events occurred.

TL;DR

  • Batching is the mechanism (collecting events over a window). Digest is the output format (the summary message users see). They are separate concerns.
  • Batch-on-write (decide at event arrival) works best for dynamic, event-driven batching. Batch-on-read (group at delivery time) works best for fixed-schedule digests.
  • Grouping keys are the single most important configuration. Batch by entity (post_id, project_id), not globally. "5 comments on your post" is useful. "47 updates across your account" is not.
  • Multi-channel rendering is mandatory. Email gets a full summary table. Push gets a count. In-app gets an expandable group. One template does not fit all channels.
  • Critical alerts bypass batching. Always. Payment failures and security alerts should never sit in a batch window.
  • Digest template design must handle variable-length batches. 3 items and 300 items need different rendering logic (tiered truncation).
  • User preferences should control digest frequency. Let users choose real-time, daily, or weekly per notification category.
  • SuprSend handles all of this through dedicated Batch and Digest nodes in the workflow engine, with configurable batch keys, $batched_events / $batched_events_count template variables, per-channel digest templates, critical alert bypass, and user preference integration.

If you are building notification batching from scratch, start with batch-on-write, entity-scoped grouping keys, and a critical alert bypass. Add multi-channel digest rendering and user preference controls from day one. These are not nice-to-haves you can add later; they are the core requirements that determine whether your users find your notifications useful or just turn them off.

Ready to add batching and digest to your notification stack? Explore SuprSend's workflow engine to see how batching works as a visual workflow node with batch keys, per-channel digest templates, and user preference integration built in.

Written by:
Nikita Navral
Co-Founder, SuprSend