Most production applications need work that does not fit the request/response cycle: sending emails, processing uploads, running AI pipelines, syncing third-party data, generating reports. The traditional answer is a queue (Redis, SQS, RabbitMQ), a worker fleet, a scheduler, and a fragile pile of glue code that breaks on every deploy.
Trigger.dev collapses that stack into a single TypeScript SDK. You write functions, you call them from anywhere, and the platform handles queuing, retries, observability, scheduling, and durable execution. Tasks run for as long as they need - no 10-second serverless timeout, no lost work on redeploys.
## Why Trigger.dev
The shift in 2026 is durable execution. Workflows must survive restarts, crashes, deploys, and rate limits. They must also stream progress back to the UI in real time and pause for human input. Trigger.dev was rebuilt around these requirements with version 3 and continues to expand its AI infrastructure surface.
The model is simple: you define tasks as exports, the SDK picks them up, the platform schedules and runs them in isolated containers, and the run state is persisted so you can resume, retry, and observe.
## Getting Started
### Initialize a project
```bash
npx trigger.dev@latest login
npx trigger.dev@latest init
```
This creates a `trigger.config.ts` file and a `trigger/` directory with example tasks. The config file is the source of truth for your project: which directories contain tasks, build settings, lifecycle hooks, and runtime options.
```ts
// trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk";

export default defineConfig({
  project: "proj_abc123",
  runtime: "node",
  logLevel: "log",
  maxDuration: 3600,
  retries: {
    enabledInDev: true,
    default: {
      maxAttempts: 3,
      factor: 2,
      minTimeoutInMs: 1000,
      maxTimeoutInMs: 30_000,
    },
  },
  dirs: ["./trigger"],
});
```
### Run tasks locally
```bash
npx trigger.dev@latest dev
```
The dev server connects to the cloud, registers your tasks, and streams runs through your local code. You set breakpoints in your editor and hit them on real triggers - the same loop you would use in any normal Node.js project.
## Defining a Task

A task is an object exported with a unique `id` and a `run` function. The SDK introspects exports across `dirs` and registers them automatically.
```ts
// trigger/send-welcome-email.ts
import { task } from "@trigger.dev/sdk";
import { Resend } from "resend";

const resend = new Resend(process.env.RESEND_API_KEY);

export const sendWelcomeEmail = task({
  id: "send-welcome-email",
  retry: {
    maxAttempts: 5,
    factor: 1.8,
    minTimeoutInMs: 500,
    maxTimeoutInMs: 30_000,
  },
  run: async (payload: { email: string; name: string }) => {
    const { data, error } = await resend.emails.send({
      from: "hello@spinny.dev",
      to: payload.email,
      subject: `Welcome, ${payload.name}`,
      html: `<p>Glad you are here, ${payload.name}.</p>`,
    });
    if (error) throw error;
    return { messageId: data?.id };
  },
});
```
Three things to notice:
- No timeout in the run body. The platform manages execution time through `maxDuration` in config, not the runtime.
- Throws are retries. The SDK catches exceptions and re-runs with exponential backoff according to the `retry` policy.
- The return value is persisted. Other tasks and your frontend can read `run.output` from anywhere.
## Triggering Tasks
You call a task from your backend, your API routes, or another task.
```ts
import { sendWelcomeEmail } from "@/trigger/send-welcome-email";

const handle = await sendWelcomeEmail.trigger(
  { email: "user@example.com", name: "Alex" },
  {
    idempotencyKey: `welcome-${userId}`,
    concurrencyKey: `tenant-${tenantId}`,
    queue: { name: "emails", concurrencyLimit: 50 },
    delay: "30s",
    ttl: "10m",
  }
);

console.log(handle.id); // run_xyz - use this to track or display progress
```
The options unlock a lot of behavior in one call:
- `idempotencyKey` - if a run with the same key already exists, the SDK returns the existing handle instead of duplicating work.
- `concurrencyKey` - serializes runs sharing the key so you do not overrun a per-tenant rate limit.
- `queue.concurrencyLimit` - global cap for the queue across all keys.
- `delay` - schedules the run for a future time.
- `ttl` - if the run has not started by then, expire it automatically.
### Batch trigger

For fan-out workloads, `batchTrigger` accepts up to 500 items per call and creates one run per item.
```ts
await sendWelcomeEmail.batchTrigger(
  newUsers.map((u) => ({
    payload: { email: u.email, name: u.name },
    options: { idempotencyKey: `welcome-${u.id}` },
  }))
);
```
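Past that per-call cap you have to split the batch yourself. A minimal sketch of a generic chunking helper (the helper name and the loop over slices are ours, not part of the SDK):

```ts
// Split an array into slices of at most `size` items, so each slice
// stays under batchTrigger's per-call item limit.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Usage sketch: one batchTrigger call per slice of up to 500 users.
// for (const slice of chunk(newUsers, 500)) {
//   await sendWelcomeEmail.batchTrigger(slice.map(toItem));
// }
```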
## Scheduled Tasks
Cron jobs become first-class declarations. The schedule itself is a separate object you can attach to a task multiple times.
```ts
// trigger/daily-digest.ts
import { schedules } from "@trigger.dev/sdk";

export const dailyDigest = schedules.task({
  id: "daily-digest",
  cron: "0 9 * * *",
  run: async (payload) => {
    console.log("Scheduled at:", payload.timestamp);
    console.log("Last run:", payload.lastTimestamp);
    console.log("Timezone:", payload.timezone);
    console.log("Next 5 runs:", payload.upcoming);
    await sendDigestForDate(payload.timestamp);
  },
});
```
For per-tenant schedules - say, one cron per customer - you create them dynamically through the management API.
```ts
import { schedules } from "@trigger.dev/sdk";

await schedules.create({
  task: "daily-digest",
  cron: "0 9 * * *",
  timezone: "America/New_York",
  externalId: `customer_${customerId}`,
  deduplicationKey: `digest-${customerId}`,
});
```
The `deduplicationKey` makes the call idempotent: re-running the same code at deploy time does not stack duplicate schedules.
## Queues, Concurrency, and Idempotency
Three primitives cover most rate-limiting and ordering needs.
A common pattern: a queue per tenant with a small per-key concurrency to respect a vendor's rate limit, plus an idempotency key to make retries safe.
```ts
await syncShopifyOrders.trigger(
  { shopId },
  {
    queue: { name: `shopify-${shopId}`, concurrencyLimit: 2 },
    concurrencyKey: shopId,
    // Floors the current time to a minute bucket, so retriggers within
    // the same minute share a key and deduplicate.
    idempotencyKey: `sync-${shopId}-${(Date.now() / 60_000) | 0}`,
  }
);
```
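The `Date.now() / 60_000 | 0` expression floors the timestamp to a whole-minute bucket, so any retriggers inside the same minute collapse into one run. Extracted into named helpers (the names are ours) the intent is easier to read and to test:

```ts
// Floor a millisecond timestamp to a whole-minute bucket. Two calls
// within the same minute produce the same bucket value.
function minuteBucket(nowMs: number): number {
  return Math.floor(nowMs / 60_000);
}

// Derive the idempotency key used in the trigger call above: same shop,
// same minute => same key => the platform dedupes the run.
function syncIdempotencyKey(shopId: string, nowMs: number = Date.now()): string {
  return `sync-${shopId}-${minuteBucket(nowMs)}`;
}
```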
## Waits and Long-Running Work
Tasks can pause without holding a connection or burning compute. The platform persists state and resumes the function when the wait completes.
```ts
import { task, wait } from "@trigger.dev/sdk";

export const onboarding = task({
  id: "onboarding",
  run: async (payload: { userId: string }) => {
    await sendWelcomeEmail.triggerAndWait({ userId: payload.userId });
    await wait.for({ days: 1 });
    await sendTipsEmail.trigger({ userId: payload.userId });
    await wait.until({ date: oneWeekFromSignup(payload.userId) });
    await sendUpgradeOffer.trigger({ userId: payload.userId });
  },
});
```
`triggerAndWait` is the killer feature: it triggers a child task and suspends the parent until the child completes. You compose tasks like async functions, but the orchestration runs durably across days or weeks.
### Human-in-the-loop with `wait.forToken`

For approval flows and AI gates, `wait.forToken` pauses until your application calls back with a result.
```ts
import { task, wait } from "@trigger.dev/sdk";

export const publishPost = task({
  id: "publish-post",
  run: async (payload: { draftId: string }) => {
    const draft = await generateAIContent(payload.draftId);

    const token = await wait.createToken({ timeout: "7d" });
    await notifyEditor({ draftId: draft.id, token: token.id });

    const decision = await wait.forToken<{ approved: boolean; notes?: string }>(
      token.id
    );

    if (decision.approved) {
      return await publish(draft);
    }
    return await applyFeedback(draft, decision.notes);
  },
});
```
The editor opens a UI, reviews the draft, clicks Approve, and your backend completes the token. The task picks up where it left off - even if hours or days have passed.
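The completing side is an ordinary backend call. A sketch of what that approval endpoint might look like, assuming `wait.completeToken` takes the token id and the typed result (the handler name and request shape are illustrative; check the SDK docs for the exact signature):

```ts
import { wait } from "@trigger.dev/sdk";

// Hypothetical handler behind the editor's "Approve" button: completing
// the token resumes the suspended publishPost run with this payload.
export async function approveDraft(req: {
  tokenId: string;
  approved: boolean;
  notes?: string;
}) {
  await wait.completeToken<{ approved: boolean; notes?: string }>(req.tokenId, {
    approved: req.approved,
    notes: req.notes,
  });
}
```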
## Lifecycle Hooks

You can attach `init`, `onStart`, `onSuccess`, and `onFailure` to a task or globally in `trigger.config.ts`. Use these for tracing, error reporting, and shared setup.
```ts
// trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk";
import * as Sentry from "@sentry/node";

export default defineConfig({
  // ...
  init: async () => {
    Sentry.init({ dsn: process.env.SENTRY_DSN });
  },
  onFailure: async ({ error, ctx }) => {
    Sentry.captureException(error, {
      tags: { taskId: ctx.task.id, runId: ctx.run.id },
    });
  },
});
```
`init` runs once per worker container at boot, not per run, so it is the right place to set up clients and pools.
## Realtime in the Frontend
Trigger.dev publishes run state changes - status, metadata, output - over a streaming API. The React hooks subscribe to that stream and re-render automatically.
```ts
// trigger/process-video.ts
import { task, metadata } from "@trigger.dev/sdk";

export const processVideo = task({
  id: "process-video",
  run: async (payload: { videoId: string }) => {
    metadata.set("stage", "transcoding");
    await transcode(payload.videoId);

    metadata.set("stage", "thumbnails");
    await generateThumbnails(payload.videoId);

    metadata.set("stage", "uploading");
    const url = await uploadToCDN(payload.videoId);

    return { url };
  },
});
```
```tsx
// components/VideoStatus.tsx
"use client";
import { useRealtimeRun } from "@trigger.dev/react-hooks";
import type { processVideo } from "@/trigger/process-video";

export function VideoStatus({
  runId,
  publicAccessToken,
}: {
  runId: string;
  publicAccessToken: string;
}) {
  const { run, error } = useRealtimeRun<typeof processVideo>(runId, {
    accessToken: publicAccessToken,
  });

  if (error) return <p>Error: {error.message}</p>;
  if (!run) return <p>Loading...</p>;

  return (
    <div>
      <p>Status: {run.status}</p>
      <p>Stage: {String(run.metadata?.stage ?? "queued")}</p>
      {run.output?.url && <video src={run.output.url} controls />}
    </div>
  );
}
```
You generate the public access token server-side, scoped to a specific run, and ship it to the client. The hook handles auth, reconnection, and incremental updates.
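Minting that token is a single server-side SDK call. A sketch assuming the `auth.createPublicToken` API with a run-scoped read grant (the scope shape shown here is an assumption; verify it against the Realtime docs):

```ts
import { auth } from "@trigger.dev/sdk";

// Server-side: create a public token that can only read this one run,
// then return it to the client alongside the run id.
export async function tokenForRun(runId: string) {
  return await auth.createPublicToken({
    scopes: { read: { runs: [runId] } },
  });
}
```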
For trigger-and-subscribe in one shot:
```tsx
import { useRealtimeTaskTrigger } from "@trigger.dev/react-hooks";

const { submit, run, isLoading } = useRealtimeTaskTrigger<typeof processVideo>(
  "process-video",
  { accessToken: publicAccessToken }
);

<button onClick={() => submit({ videoId })} disabled={isLoading}>
  Process video
</button>;
```
## AI Agents and Streaming
Trigger.dev has become a popular runtime for AI agents because the same primitives - durable execution, retries, waits, real-time metadata, human-in-the-loop - are exactly what agents need. You stream tokens from a model provider into metadata while the run is happening, the frontend renders them live, and the run survives long-running tool calls without burning a serverless timeout.
```ts
import { task, metadata } from "@trigger.dev/sdk";
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

export const researchAgent = task({
  id: "research-agent",
  maxDuration: 1800,
  run: async (payload: { question: string }) => {
    const result = streamText({
      model: anthropic("claude-opus-4-7"),
      system: "You are a research assistant. Use the web.",
      prompt: payload.question,
      tools: { webSearch }, // a tool defined elsewhere in the project
    });

    let fullText = "";
    for await (const chunk of result.textStream) {
      fullText += chunk;
      metadata.set("partial", fullText);
    }

    return { answer: fullText, usage: await result.usage };
  },
});
```
The frontend uses `useRealtimeRun` and reads `run.metadata.partial` to render the streaming response, the same way you would render a chat completion - except this one survives a full page reload.
## Deploying
Deploys compile your tasks into a versioned bundle, build a container, and atomically swap traffic. Old in-flight runs keep using the previous version.
```bash
npx trigger.dev@latest deploy --env prod
```
In CI you typically wire this into the same workflow that ships your app:
```yaml
# .github/workflows/deploy.yml
- name: Deploy Trigger.dev
  env:
    TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
  run: npx trigger.dev@latest deploy --env prod
```
For preview environments, pass `--env preview --branch ${{ github.head_ref }}` and Trigger.dev creates an isolated environment per branch, mirroring how Vercel handles preview deployments.
## Self-Hosting vs Cloud
Trigger.dev is open source under the Apache 2.0 license. You can self-host on any container platform (Docker Compose, Kubernetes, Fly.io) or use the managed cloud at trigger.dev.
| Aspect | Cloud | Self-hosted |
|---|---|---|
| Setup | Sign up, run init | Run docker-compose or Helm chart |
| Scaling | Automatic | Your responsibility |
| Pricing | Per run + per compute | Infra cost only |
| Compliance | SOC 2 | Whatever your environment provides |
| Best for | Most teams | Strict data residency, custom infra |
The SDK and CLI are identical between modes - you change a profile flag and point at your own instance.
## Best Practices
### 1. Make payloads small and serializable
Pass IDs and references, not full objects. Pull the data inside the task. This keeps the queue small, payloads cheap to log, and lets you change the data source without re-triggering.
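In practice the payload carries an id and the task re-fetches. A sketch of the pattern, where `db` and `indexUser` are illustrative stand-ins for your own data and search layers:

```ts
import { task } from "@trigger.dev/sdk";
import { db } from "@/lib/db"; // illustrative: your data layer
import { indexUser } from "@/lib/search"; // illustrative: your search layer

export const reindexUser = task({
  id: "reindex-user",
  run: async (payload: { userId: string }) => {
    // The payload is just a reference. Re-fetching inside the task means
    // a retry always sees the user's current state, not a stale snapshot.
    const user = await db.user.findUnique({ where: { id: payload.userId } });
    if (!user) return { skipped: true };
    await indexUser(user);
    return { skipped: false };
  },
});
```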
### 2. Idempotency keys on every external call

Combine `idempotencyKey` on the task trigger with idempotency keys at your vendor APIs (Stripe, OpenAI, etc.). Retries will be safe end to end.
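One way to keep the two layers aligned is to derive the vendor key from the same stable identifiers on every attempt, so a retried run replays an identical key downstream. A minimal sketch (the helper name is ours; the Stripe usage in the comment assumes the run id is available from the task context):

```ts
// Build a vendor-side idempotency key from stable parts of the run.
// A retried run recomputes the identical string, so the vendor treats
// the replayed request as a duplicate rather than a new operation.
function vendorIdempotencyKey(runId: string, step: string): string {
  return `${runId}:${step}`;
}

// Usage sketch with Stripe's per-request idempotency option:
// await stripe.paymentIntents.create(
//   { amount, currency: "usd" },
//   { idempotencyKey: vendorIdempotencyKey(ctx.run.id, "charge") }
// );
```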
### 3. Use `triggerAndWait` for orchestration, not `Promise.all` of triggers

A parent that calls `triggerAndWait` durably composes child tasks. A parent that triggers and resolves immediately loses observability of the chain.
### 4. Tag runs

Add tags to triggers (`tags: ["user:123", "feature:onboarding"]`) so you can filter the dashboard and the management API by business dimensions.
### 5. Keep `init` idempotent
It runs on every cold start. Avoid migrations or one-shot side effects there.
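A guard that makes setup safe to repeat is usually enough. A minimal sketch of the pattern, with an illustrative client shape standing in for a real SDK:

```ts
type Client = { dsn: string };
let client: Client | null = null;

// Safe to call from init on every cold start: the client is constructed
// only the first time; later calls return the same instance unchanged.
function initOnce(dsn: string): Client {
  if (!client) {
    client = { dsn };
  }
  return client;
}
```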
## Conclusion
Trigger.dev removes the categories of work that used to require building a job system from scratch. You write async TypeScript, you call it from anywhere, and the platform gives you durable execution, scheduling, queues, retries, real-time updates, and human-in-the-loop patterns out of the box.
The same surface that powers a nightly cron is the surface that powers a multi-step AI agent that streams to the frontend and pauses for review. That convergence is what makes the framework worth a serious look in 2026, whether you are running a SaaS that needs reliable background work or shipping AI features that outlive a serverless timeout.
Getting Started Checklist:

- Sign up at trigger.dev or run the self-hosted Docker stack
- `npx trigger.dev@latest init` in your project
- Define your first task with `task({ id, run })`
- Trigger it from your API and watch the run in the dashboard
- Add `idempotencyKey` and `concurrencyKey` for production safety
- Wire `useRealtimeRun` into a status component
- Deploy with `trigger.dev deploy --env prod` from CI