If your SaaS does anything beyond a CRUD form — sending emails, charging cards, running AI inference, syncing with a third-party API — you eventually run into the same wall: HTTP requests are not the right place for work that takes longer than 10 seconds, can fail, and must complete. That is a background job problem, and in 2026 the field has split into four serious contenders: Trigger.dev v3, Inngest, Hatchet, and Temporal.
I have been pushing async work into queues since the days of Laravel’s database driver and Node’s Bull. Across the 50+ projects we have shipped at wardigi.com — from the SmartExam AI Generator processing essay scoring jobs, to ContentForge AI Studio orchestrating long image-generation pipelines, to the seven aggregator sites I run that pull 100–200 records every night — I have stress-tested every flavor of this problem. This guide is the comparison I wish I had when I was picking between these four platforms last quarter for a client’s billing reconciliation system.
I will be direct: none of these four is the right answer for everyone. The right answer depends on whether you live in TypeScript, what your latency budget looks like, how much you trust a vendor with your control plane, and how comfortable you are running Postgres yourself. Let’s break it down.
Why this matters more in 2026 than it did in 2023
Three things changed and made this category interesting again.
One: AI workflows became normal. A two-minute LLM chain with retries, fan-out, and human-in-the-loop steps does not fit in a Vercel serverless function with a 60-second timeout. Every SaaS team building anything with OpenAI, Anthropic, or self-hosted models needs durable execution, not just a queue.
Two: Serverless got more honest about its limits. The hard truth I learned the painful way: a queue + a worker is not enough. You need step-level retries, idempotency, observability, and the ability to resume a workflow at the failed step without re-running the expensive parts. Old-school Bull or Sidekiq can fake this, but not without writing a lot of state-management code that you will regret in six months.
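To make "resume at the failed step" concrete, here is a minimal sketch of step-level checkpointing (my own illustration, not any vendor's API): each step's result is persisted under a stable key, so a resumed run skips completed steps instead of re-running the expensive parts. A `Map` stands in for the Postgres or Redis store a real engine would use.

```typescript
// Checkpoint store: step key -> recorded result. A real engine persists
// this durably; a Map stands in here for illustration.
type Checkpoints = Map<string, unknown>;

function runStep<T>(store: Checkpoints, key: string, fn: () => T): T {
  if (store.has(key)) return store.get(key) as T; // resume: skip finished work
  const result = fn();
  store.set(key, result); // checkpoint before moving on
  return result;
}

// A three-step "workflow". If the process crashes after step two,
// re-running with the same store re-executes only step three.
function reconcileInvoice(store: Checkpoints, amountCents: number): number {
  const charged = runStep(store, "charge", () => ({ amountCents }));
  const emailed = runStep(store, "email", () => `receipt for ${charged.amountCents}`);
  return runStep(store, "ledger", () => emailed.length);
}
```

This is the state-management code you end up hand-writing on top of Bull or Sidekiq, which is exactly the part the four platforms below do for you.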
Three: The pricing models diverged sharply. Inngest charges per "step". Trigger.dev charges per second of compute. Temporal charges per "action". Hatchet charges per task. Picking the wrong model for your workload pattern can mean a 10x bill difference for the same work. I have seen a client get a $4,200 surprise on Temporal Cloud after migrating from a $300/month VPS-hosted Sidekiq, because their workflows generated far more actions than the headline pricing suggested. That is not a vendor problem — it is a workload-fit problem.
The 30-second comparison table
| Feature | Trigger.dev v3 | Inngest | Hatchet | Temporal |
|---|---|---|---|---|
| License | Apache 2.0 | Source-available, cloud only | MIT, fully open | MIT, fully open |
| Self-host | Yes, mature path | No (cloud only) | Yes, Postgres only | Yes, but heavy infra |
| Languages | TypeScript first | TypeScript, Python, Go | TypeScript, Python, Go, Ruby | Go, Java, TypeScript, Python, .NET, PHP, Ruby |
| Free tier | Yes, generous | Yes, but step-capped | Cloud is invite-only; OSS free | Dev tier free, $200 Growth |
| Sweet spot | Long AI tasks, indie SaaS | Fast onboarding for serverless apps | Fine-grained concurrency, AI agents | Mission-critical financial / compliance workflows |
| Pricing start | From $10/mo (Hobby paid) | From $75/mo (Basic) | OSS free; cloud quote-based | From $200/mo (Growth) |
If you stop reading here and just want a snap recommendation: most SaaS teams I work with should start with Trigger.dev v3. Most teams that already live entirely in Vercel/Netlify and want zero infra should pick Inngest. Teams running Postgres-heavy AI workloads with hard concurrency requirements should look at Hatchet. Teams doing money movement or healthcare workflows where a missed step is a regulatory event should pick Temporal.
Trigger.dev v3 — the indie-friendly default
Trigger.dev shipped v3 last year and the rewrite was substantial — they replaced their old function-as-a-task model with a proper orchestration runtime that supports task durations of up to an hour out of the box, with self-hosted instances pushing that limit further.
What I like, from real use:
- The mental model is honest. You write tasks as TypeScript files. Each task has steps. Steps are checkpointed. If the function crashes mid-step, the next run resumes from the last checkpoint. There is no hidden DSL.
- The dashboard is the best in the category. Drilling into a failed run, seeing the exact step input and output, and replaying with one click is the kind of DX that saves you on the worst day of an outage.
- Self-host actually works. I ran the Docker Compose self-host setup on a Hostinger VPS for a client who had compliance requirements blocking cloud usage. It came up in about 40 minutes and has been stable for the four months it has been in production.
What I do not like:
- TypeScript only, in practice. They have other SDKs in alpha but the polished story is TS. If your backend is Laravel or Django, you are looking at a sidecar Node service, which adds operational weight.
- Pricing per second of compute means very long-running but mostly-idle tasks (think: polling a slow webhook for 10 minutes) can get expensive on Cloud, although their idle billing improvements landed in late 2025 and helped.
For the BizChat Revenue Assistant project we built in 2025, Trigger.dev v3 cloud handled the nightly client-data summarization pipeline — about 400 tasks per night, each running 30–90 seconds — for about $14/month on the Hobby paid tier. That is in a different universe of pricing than what Temporal would have charged for the same workload.
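The task-file mental model is easy to sketch in isolation. The following is a self-contained imitation, not the actual `@trigger.dev/sdk` API: `defineTask` and `maxAttempts` are hypothetical names I am using to show the shape, where a task file exports an id, a retry policy, and a run function, and the runtime owns the retry loop.

```typescript
// Hypothetical task definition: id + retry policy + run function.
type TaskDef<In, Out> = {
  id: string;
  maxAttempts: number;
  run: (payload: In) => Out;
};

function defineTask<In, Out>(def: TaskDef<In, Out>) {
  return {
    id: def.id,
    trigger(payload: In): Out {
      let lastError: unknown;
      for (let attempt = 1; attempt <= def.maxAttempts; attempt++) {
        try {
          return def.run(payload);
        } catch (err) {
          lastError = err; // a real runtime would back off and checkpoint here
        }
      }
      throw lastError;
    },
  };
}

// Example task: fails twice with a transient error, succeeds on attempt three.
let attempts = 0;
const scoreEssay = defineTask({
  id: "score-essay",
  maxAttempts: 3,
  run: (essay: string) => {
    attempts++;
    if (attempts < 3) throw new Error("transient model error");
    return essay.length;
  },
});
```

The point of the shape is that your business logic is an ordinary function; everything around it (retries, checkpoints, observability) belongs to the runtime.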
Inngest — the "just add the SDK" option
Inngest’s pitch has always been: you do not run any infrastructure, you just deploy functions to your existing Vercel/Netlify/Lambda app, and Inngest invokes them. They figure out the queue, the retries, the steps. You write code that looks like a normal serverless function.
This is the easiest onboarding of any platform in this comparison. I have onboarded Inngest on a Next.js project in literally 20 minutes — install SDK, create one route, define one function, deploy. It works.
The architectural tradeoff is worth understanding: Inngest stores no data on your infrastructure and runs no compute on your infrastructure. They invoke your endpoints over HTTP and track state on their side. That is a beautiful operational model when it fits, and a hard wall when it does not.
Where it fits well:
- Teams running on Vercel/Netlify who do not want to think about Redis, queues, or workers ever.
- Workflows that fan out to many small fast steps. Their step-based pricing is favorable for short steps.
- Teams that need event-driven triggers with a clean event-sourcing model. Their `inngest.send()` + listener pattern is genuinely nice.
Where I have hit walls:
- You cannot self-host the platform. They have a free dev server for local use, but production runs on Inngest Cloud only. For some clients (financial, healthcare, defense) this is a non-starter.
- The pricing math gets unfriendly with long steps. A long-running step counts as one step regardless of duration in some plans, but plan caps on concurrent steps and total step minutes can bite you. Read the plan limits carefully.
- Function timeouts are still tied to your hosting platform. If your Vercel function caps at 60 seconds, your Inngest step caps at 60 seconds. You can chain steps to work around this, but for genuinely long single computations (a 5-minute LLM batch), you need somewhere else to run the work, or you need to upgrade your hosting tier.
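The step-chaining workaround is worth seeing concretely. A sketch, under the assumption that the long computation can carry a cursor between short invocations: each `step` call represents one serverless invocation that stays inside the platform timeout, and the orchestrator stitches the chain together.

```typescript
// State passed between chained steps: where we are, what we have so far.
type Chunk = { cursor: number; sum: number; done: boolean };

// One short invocation: process at most `budget` items, then yield.
function step(items: number[], prev: Chunk, budget: number): Chunk {
  const end = Math.min(prev.cursor + budget, items.length);
  let sum = prev.sum;
  for (let i = prev.cursor; i < end; i++) sum += items[i];
  return { cursor: end, sum, done: end >= items.length };
}

// The orchestrator's view: invoke `step` repeatedly until done, so no
// single invocation runs longer than one chunk's worth of work.
function runChained(items: number[], budget: number): number {
  let state: Chunk = { cursor: 0, sum: 0, done: items.length === 0 };
  while (!state.done) state = step(items, state, budget);
  return state.sum;
}
```

For a genuinely indivisible 5-minute computation (one LLM call with no intermediate output), no amount of chaining helps, which is why that case pushes you to different hosting.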
Hatchet — the Postgres-native AI workload pick
Hatchet is the youngest serious entrant and the one I have been watching most closely. Their core thesis: a durable task queue should be built on Postgres, not Kafka or a custom data store, because most teams already run Postgres and adding another stateful system is operational pain. They are MIT-licensed, fully open source, and the cloud version is in invite-only beta as of April 2026.
What stands out:
- Concurrency control is best-in-class. If you are running AI agents that need to throttle to N concurrent calls per user or per API key (think: rate-limiting OpenAI usage per tenant), Hatchet’s concurrency primitives are the cleanest I have used. None of the others are close on this specific axis.
- Self-host is genuinely simple. Postgres + the Hatchet binary. That is the whole stack. I tested a self-host on a single Hetzner CX22 instance with 10k tasks per hour and it stayed under 30% CPU. For comparison, Temporal self-hosted requires Cassandra/Postgres + Elasticsearch + the server cluster + worker pool — an order of magnitude more infrastructure.
- Multi-language SDKs. TypeScript, Python, Go, Ruby. The Python SDK is mature, which matters if your AI workload is in FastAPI or Django.
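The per-tenant throttling idea is simple to sketch, even though Hatchet's real primitives are richer. This is a minimal admission-control sketch of my own, not Hatchet's API: admit a task only while the tenant is under its cap, the way a per-API-key OpenAI throttle would.

```typescript
// Per-tenant concurrency limiter: at most `maxPerTenant` tasks running
// per tenant key. A real engine would park rejected tasks in the queue.
class TenantLimiter {
  private running = new Map<string, number>();

  constructor(private maxPerTenant: number) {}

  tryAcquire(tenant: string): boolean {
    const current = this.running.get(tenant) ?? 0;
    if (current >= this.maxPerTenant) return false; // over cap: queue it
    this.running.set(tenant, current + 1);
    return true;
  }

  release(tenant: string): void {
    const current = this.running.get(tenant) ?? 0;
    this.running.set(tenant, Math.max(0, current - 1));
  }
}
```

The hard part in production is not this bookkeeping; it is making it durable and fair across workers, which is precisely what Hatchet's engine handles and what you do not want to hand-roll on Redis.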
What is rough:
- The dashboard is functional but not polished. Compared to Trigger.dev, it feels engineer-built rather than designer-built. You get what you need; it is not a joy to look at.
- Cloud pricing is opaque. Invite-only with quote-based pricing means you are calling sales for any production-scale eval. Not great for indie dev evaluation.
- Smaller community. The Stack Overflow / GitHub Discussions volume is a fraction of Inngest’s. You will be doing more first-principles debugging.
I am running Hatchet self-hosted in a sandbox project right now to evaluate it for a client doing AI document processing with strict per-customer throughput limits. After three weeks, I would say it is the platform I would pick today if my workload had complex concurrency rules.
Temporal — the bulletproof but heavy choice
Temporal is the oldest and most battle-tested platform here. It powers workflows at Stripe, Snap, Datadog, and a long list of other companies where a missed workflow step is a financial or legal incident.
The trade is straightforward: you get the strongest correctness guarantees in the industry, but you pay for them in operational and pricing complexity.
Strengths I have seen in production:
- Workflow-as-code with deterministic replay. A Temporal workflow is just code. The runtime guarantees that if a worker crashes mid-execution, the workflow will resume exactly where it left off, replaying the deterministic parts and skipping the parts already done. This is what you want for money movement, refunds, multi-step provisioning, anything where doing a step twice is a customer-visible bug.
- Multi-language and polyglot. Temporal has the broadest language support of any platform here. If your team has a Go backend and a Python data team and a TypeScript frontend, all three can speak the same workflow protocol.
- Self-host is mature. Many large companies run their own Temporal clusters. The open-source story is the most production-tested in the category.
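Deterministic replay is easier to grasp with a toy model. The following is a sketch of the idea, not Temporal's SDK: side-effecting "activities" record their results in a history, and on replay the workflow code re-executes deterministically while recorded activities return the saved result instead of running again.

```typescript
// Recorded activity results, in execution order.
type History = unknown[];

function makeContext(history: History) {
  let cursor = 0;
  return {
    // On first execution: run the activity and record its result.
    // On replay: return the recorded result without re-running it.
    activity<T>(fn: () => T): T {
      if (cursor < history.length) return history[cursor++] as T;
      const result = fn();
      history.push(result);
      cursor++;
      return result;
    },
  };
}

// Workflow with two activities. Replaying after a crash must not
// charge (or refund) a second time.
let gatewayCalls = 0;
function refundWorkflow(history: History): number {
  const ctx = makeContext(history);
  const amount = ctx.activity(() => { gatewayCalls++; return 100; });
  const fee = ctx.activity(() => 3);
  return amount - fee;
}
```

This is also why Temporal's determinism rules exist: the workflow code between activities must produce the same decisions on replay, or the history stops lining up.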
What you have to accept:
- Cloud pricing scales fast. The headline $200/month Growth tier covers 1 million actions, but real workflows generate far more actions than people expect. A workflow with 5 steps and 2 retries each can be 15+ actions. I have seen teams budget $500/month and end the first month at $3,800. The pricing is fair on its own terms; the surprise is from underestimating action counts.
- The learning curve is real. Determinism rules. Workflow vs activity separation. Versioning workflows during deploy. None of this is hard once you understand it, but it is a 2–4 week ramp for a senior engineer, not an afternoon.
- Self-host is heavy. Cassandra or Postgres for persistence, Elasticsearch for visibility, the server tier, the worker pool. For a small team, the operational overhead is real.
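The action-count surprise is simple arithmetic. A back-of-envelope estimator, with the caveat that real Temporal Cloud metering counts more event types than this (timers, signals, queries), so treat it as a floor, not a quote:

```typescript
// Rough floor on billed actions: each step attempt is at least one
// action, so retries multiply the count. Real metering counts more.
function estimateActions(
  workflows: number,
  stepsPerWorkflow: number,
  retriesPerStep: number
): number {
  const attemptsPerStep = 1 + retriesPerStep;
  return workflows * stepsPerWorkflow * attemptsPerStep;
}
```

Plugging in the example above: 1 workflow with 5 steps and 2 retries each is already 15 actions, and 50,000 workflows a month with 5 steps and an average of one retry is 500,000 actions, half the Growth tier's headline allowance before you have done anything fancy.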
If you are processing payments, syncing financial data, running compliance workflows, or doing anything where a missed step ends in a lawsuit — Temporal is worth the weight. For everyone else, it is overkill.
Pricing in plain numbers
Here is what each platform actually costs for a representative SaaS workload — about 50,000 background tasks per month, average task duration 30 seconds, with retries and observability turned on. These are list prices as of April 2026; your real numbers will vary based on plan caps and usage patterns.
| Platform | Cheapest paid plan | Estimated monthly for 50k tasks | Concurrency cap on entry plan |
|---|---|---|---|
| Trigger.dev v3 Cloud | $10/mo Hobby | ~$25–$60 | 5 concurrent runs (Hobby), higher tiers scale up |
| Inngest | $75/mo Basic | ~$75–$150 depending on step count | 5 concurrent steps on Basic |
| Hatchet Cloud | Quote-based (invite) | Unknown publicly | Configurable |
| Temporal Cloud | $200/mo Growth (1M actions) | ~$200–$500 | Limited by Namespace settings |
| Self-hosted Hatchet | $0 + $5–$20 VPS | ~$10–$30 infra | Whatever your VPS handles |
The self-hosted Hatchet line is what I would tell a bootstrapped indie SaaS founder to start with if they have the basic Linux and Postgres skills. For a few hours of one-time setup, you get a real durable workflow engine for the price of a small VPS.
How to actually pick: a decision matrix
I run through this checklist with every client now:
- Do you absolutely need to self-host? If yes, eliminate Inngest. Choose between Trigger.dev, Hatchet, and Temporal based on questions below.
- Is your stack TypeScript-first or polyglot? TypeScript-first → Trigger.dev or Inngest. Polyglot or Python-heavy → Hatchet or Temporal.
- Are you doing money movement, compliance workflows, or anything where a duplicated step is a real-world incident? If yes, default to Temporal. The correctness guarantees are worth the operational weight.
- Do you need fine-grained per-tenant rate limiting or concurrency rules? If yes, lean Hatchet.
- Do you want zero ops and live entirely on Vercel/Netlify already? Inngest.
- None of the above — just generic SaaS background work, AI tasks, scheduled jobs? Trigger.dev v3.
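The checklist encodes cleanly as a function, which is roughly how I sanity-check it with clients. The precedence below is my reading of the list (correctness constraints beat convenience ones), not a formal rule:

```typescript
// Inputs mirror the checklist questions above.
type Constraints = {
  mustSelfHost: boolean;
  typescriptFirst: boolean;
  moneyOrCompliance: boolean;
  perTenantConcurrency: boolean;
  zeroOpsOnVercel: boolean;
};

function pickPlatform(c: Constraints): string {
  if (c.moneyOrCompliance) return "Temporal";    // correctness beats everything
  if (c.perTenantConcurrency) return "Hatchet";  // concurrency primitives win
  if (c.mustSelfHost) return c.typescriptFirst ? "Trigger.dev v3" : "Hatchet";
  if (c.zeroOpsOnVercel) return "Inngest";       // zero infra, cloud-only
  return "Trigger.dev v3";                       // the generic default
}
```

It is a heuristic, not a verdict; the point of writing it down is that the ordering of the questions is itself an opinion, and you should argue with it.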
That covers about 95% of the SaaS teams I have advised in the last year.
What I would do today, if I were starting fresh
For my own next project — a small SaaS where the backend is Node.js and the workload is mostly LLM calls and API integrations with retries — I would start on Trigger.dev v3 Cloud Hobby. Total cost under $20/month, dashboard pays for itself the first time something fails at 2am, and if I outgrow it, I can self-host and keep my code unchanged.
For a client building a healthcare data pipeline, I would pick Temporal self-hosted and budget for the operational ramp-up.
For a client running an AI-heavy SaaS that needed strict per-customer throughput controls (the document processing project I mentioned), I am leaning toward Hatchet self-hosted on Hetzner. The Postgres-only architecture is a perfect fit and the concurrency primitives are genuinely better than the alternatives for that use case.
I would only pick Inngest if my entire deployment was already Vercel and the team had explicitly said they did not want any background-job infra, ever, even if it cost more long term.
FAQ
Can I migrate between these platforms later?
Sort of. The function bodies (the actual work you are doing) port easily. The orchestration metadata — how steps are defined, how retries work, how events are dispatched — does not. Plan to rewrite the orchestration layer if you migrate. I have done a Trigger.dev to Hatchet migration; it took about a week for a 30-task project.
What about Vercel Cron + Vercel Queues?
Fine for very simple use cases. The moment you need step-level retries, complex fan-out, or human-in-the-loop pauses, you outgrow them. They are not really competitors in this category — they are the "just use the simplest thing" baseline.
What about BullMQ, Sidekiq, or Laravel Horizon?
These are queues, not durable workflow engines. They handle the task-pickup-and-retry layer, but you write all the workflow state machine code yourself. For simple async jobs (send this email, charge this card), they are perfect and I still use them. For multi-step durable workflows with checkpointing, they are not the same category as the four platforms in this article.
Is open source actually meaningful here?
For me, yes. The platform that runs your billing or your data pipeline is core infrastructure. Being able to read the source, fork it if needed, and self-host without a vendor escape hatch is risk reduction. Inngest is excellent software, but the closed-source cloud-only model is a legitimate concern for some teams.
How do I handle observability across all of these?
All four ship OpenTelemetry traces. I push them to Grafana Tempo for a unified view across our stack. Trigger.dev’s dashboard is the only one good enough that I sometimes do not bother with the OTEL pipeline for small projects.
Final word
The honest answer to "which background job platform should I use in 2026?" is "the one whose tradeoffs match your team’s actual constraints." That sounds like a non-answer, but the failure mode I see most is teams adopting Temporal because it is bulletproof and then drowning in operational complexity, or adopting Inngest because it is easy and then hitting the cloud-only wall when compliance shows up.
Pick the one that matches where you are now, with eyes open about where you might outgrow it. Most teams I work with end up on Trigger.dev v3 and stay there happily for years. A meaningful minority — the ones with real concurrency or compliance constraints — need Hatchet or Temporal. I would advise almost no one to start on Inngest first, unless they have explicitly written off ever leaving Vercel.
Whichever you pick, write your task bodies as boring, idempotent, single-purpose functions. The orchestration layer is replaceable. The business logic inside the tasks is what really matters and it should not be locked to any vendor’s API surface. That is the one piece of advice that survives any of these comparisons going stale.