Benchmarks

OJS publishes reproducible performance benchmarks for all backends. Results are generated automatically in CI and tracked over time to detect regressions. This page covers reference results, methodology, interpretation guidance, and instructions for running benchmarks yourself.

The following table shows representative throughput numbers from the v0.1.0 release, measured on GitHub Actions CI runners. These serve as a baseline for tracking improvements over time.

| Operation  | Lite (ops/sec) | Redis (ops/sec) | PostgreSQL (ops/sec) |
| ---------- | -------------- | --------------- | -------------------- |
| Enqueue    | 40,233         | 15,741          | 8,410                |
| Fetch      | 30,947         | 12,701          | 6,716                |
| Round-trip | 12,093         | 4,779           | 2,635                |

The Lite backend serves as the upper-bound baseline — it runs entirely in-process with no network or storage overhead. The gap between Lite and Redis/PostgreSQL reflects the cost of real storage I/O and network round-trips.

Key takeaways from v0.1.0:

  • Redis achieves roughly 2–3× the throughput of PostgreSQL across all operations
  • The Lite backend is roughly 2.5–4.5× faster than Redis, showing the cost of network + storage
  • Round-trip (enqueue → fetch → ack) is the most demanding operation, as expected

| Operation         | Description                                  |
| ----------------- | -------------------------------------------- |
| Enqueue           | Single job enqueue via HTTP POST             |
| EnqueueBatch      | Batch enqueue (10, 50, 100 jobs) via HTTP POST |
| Fetch             | Single job fetch via HTTP POST               |
| Ack               | Job acknowledgment after processing          |
| RoundTrip         | Full lifecycle: enqueue → fetch → ack        |
| ConcurrentEnqueue | Parallel enqueue from N goroutines           |

| Metric    | Description                               |
| --------- | ----------------------------------------- |
| ops/sec   | Operations per second (derived from ns/op) |
| ns/op     | Nanoseconds per operation                 |
| B/op      | Bytes allocated per operation             |
| allocs/op | Heap allocations per operation            |

| Backend    | Setup          | Characteristics                                        |
| ---------- | -------------- | ------------------------------------------------------ |
| Lite       | In-process     | Zero dependencies, in-memory, baseline measurement     |
| Redis      | Redis 7+       | Lua-scripted atomicity, production-grade               |
| PostgreSQL | PostgreSQL 16+ | SKIP LOCKED dequeue, LISTEN/NOTIFY, production-grade   |
| Go SDK     | Client library | Measures SDK overhead separately from backend          |

Understanding what the numbers mean is as important as the numbers themselves.

  • Higher ops/sec = better. This is the primary throughput metric.
  • ops/sec is derived from the Go benchmark’s ns/op measurement: ops/sec = 1,000,000,000 / ns_per_op.
  • The Lite backend represents the theoretical ceiling — it runs in-process with no network or storage overhead.
  • Redis vs PostgreSQL tradeoff: Redis is faster for pure throughput; PostgreSQL offers stronger transactional guarantees and durability.
  • Round-trip is the most realistic benchmark — it measures the full lifecycle (enqueue → fetch → ack) that a real application would exercise.
  • Network overhead is always included. Every benchmark operation is a full HTTP round-trip to reflect real-world deployment patterns, not just a function call.
  • B/op (bytes per operation) measures how much memory is allocated during each operation. Lower is better.
  • allocs/op (allocations per operation) counts the number of heap allocations. Fewer allocations means less GC pressure and more predictable latency.
  • These metrics are especially useful for detecting memory regressions in the SDK and serialization layers.

All benchmarks follow a rigorous, reproducible process.

  • Go benchmark framework: standard `testing.B` with `-benchmem` to capture allocation metrics
  • Iterations: `-count=5` for each benchmark, so benchstat has enough samples for statistical comparison
  • Timer reset: `b.ResetTimer()` is called after setup to exclude initialization costs (server startup, database seeding) from the measurement
  • Parallel benchmarks: use `b.RunParallel()` with `GOMAXPROCS` goroutines to measure concurrent throughput
  • Each operation is a full HTTP round-trip — the benchmark client sends an HTTP request to the OJS server and waits for the response. This includes JSON serialization, network I/O, server-side processing, and storage I/O.
  • Backends start with a clean state before each benchmark suite: no pre-existing jobs, no warmed caches, and no leftover state from previous runs.
  • The benchmark binary and the OJS server run as separate processes, connected over localhost HTTP — matching how OJS is deployed in production.
  • Running with `-count=5` produces five independent measurements for each benchmark.
  • The CI pipeline uses `benchstat` to compute means and detect statistically significant changes between runs.
  • Outliers from CI runner variability (noisy neighbors, GC pauses) are smoothed out by the multiple iterations.

Benchmarks run in a controlled CI environment to maximize reproducibility.

| Parameter   | Value                                        |
| ----------- | -------------------------------------------- |
| CI Platform | GitHub Actions, Ubuntu latest runners        |
| CPU         | 2-core (x86_64)                              |
| RAM         | 7 GB                                         |
| Go Version  | 1.22+                                        |
| Redis       | 7+ (Docker container)                        |
| PostgreSQL  | 16+ (Docker container)                       |
| Network     | Loopback (localhost) — no real network latency |
| Docker      | Used for Redis and PostgreSQL backends       |

The CI workflow automatically detects performance regressions:

  1. Baseline caching: After each main branch run, results are cached as the baseline for the next run
  2. Threshold: Any metric (ns/op, B/op, allocs/op) that worsens by more than 10% is flagged
  3. PR comments: If regressions are detected on a pull request, a comment is posted with details
  4. Trend alerts: The github-action-benchmark action monitors for sustained degradation
To run the benchmarks locally, you will need:

  • Go 1.22+
  • Docker (for Redis and PostgreSQL backends)

```sh
cd ojs-benchmarks

# Run against the Lite backend (no dependencies needed)
make bench-lite

# Start infrastructure for other backends
docker compose up -d

# Run all backends
make bench-all

# Generate a comparison report (Markdown + JSON)
make report

# Generate a report with regression detection against a baseline
make report-with-baseline
```
To benchmark a single backend:

```sh
# Lite (in-process, no dependencies)
make bench-lite

# Redis (requires Redis on localhost:6379)
make bench-redis

# PostgreSQL (requires PostgreSQL on localhost:5432)
make bench-postgres
```

The `make report` command generates two files in `results/`:

  • `RESULTS.md` — Human-readable Markdown with summary tables and backend comparisons
  • `benchmark-results.json` — Machine-readable JSON for programmatic consumption

The JSON report includes structured data for each benchmark result, cross-backend comparisons, and any detected regressions.

| Environment Variable | Default                 | Description                        |
| -------------------- | ----------------------- | ---------------------------------- |
| `OJS_BENCH_URL`      | `http://localhost:8080` | Target server URL                  |
| `OJS_BENCH_API_KEY`  | (empty)                 | API key for authenticated endpoints |

OJS backends use the same proven patterns as widely-adopted job processing systems, but exposed through a standardized protocol.

The OJS Redis backend uses Lua scripts for atomic multi-key operations — the same approach used by Sidekiq (Ruby), BullMQ (Node.js), and other Redis-based job queues. This ensures that complex operations like enqueue-with-dedup or fetch-and-lock execute atomically without distributed locking.

Performance characteristics are comparable to other Redis-backed job systems operating over HTTP, though direct comparison is difficult due to differing protocols (OJS uses HTTP/gRPC while others use native Redis protocol or language-specific drivers).

The OJS PostgreSQL backend uses `SELECT ... FOR UPDATE SKIP LOCKED` for non-blocking job dequeue — the same pattern used by Postgres-based queues like Graphile Worker, Oban (Elixir), and good_job (Ruby). It also uses `LISTEN/NOTIFY` for real-time push notifications to waiting workers, reducing polling overhead.

PostgreSQL trades some raw throughput for stronger durability guarantees: jobs survive crashes, support transactional enqueue (enqueue within an application transaction), and benefit from PostgreSQL’s mature replication and backup ecosystem.

  • Protocol differences: OJS operates over HTTP/gRPC, while many job systems use language-native drivers or custom protocols
  • Feature differences: OJS implements a full 8-state lifecycle with middleware, workflows, and extensions — not all competitors offer the same feature set
  • Measurement differences: Some systems benchmark at the library level (function calls), while OJS benchmarks at the protocol level (HTTP round-trips)

Despite these differences, OJS throughput is competitive with production-grade job systems, and the standardized protocol enables multi-language interoperability that language-specific systems cannot offer.

If you need to maximize throughput in production:

  • Use batch enqueue (EnqueueBatch) for bulk inserts — amortizes HTTP and storage overhead across many jobs
  • Tune worker concurrency — increase GOMAXPROCS and worker pool size to match available CPU cores
  • Co-locate workers with the backend — minimize network latency by deploying workers in the same region or availability zone as your Redis/PostgreSQL instance
  • Monitor allocs/op — high allocation counts lead to GC pressure, which causes latency spikes under load
  • Consider the Lite backend for testing — it eliminates storage overhead entirely, making it ideal for integration tests and local development

Benchmark results are published on every push to main and on a weekly schedule (Sundays at 2am UTC).

The benchmark workflow (.github/workflows/benchmarks.yml) runs:

  • On every push to main that modifies backend or SDK code
  • On a weekly schedule (Sundays at 2am UTC)
  • On workflow_dispatch (manual trigger)
  • On pull requests that modify backend or SDK code

Results are uploaded as artifacts with 90-day retention and published to the benchmarks branch for trend tracking.