
Goal

Offload work to a named queue so it runs asynchronously with built-in retries, concurrency control, and optional FIFO ordering. Target functions receive data normally — no handler changes required.
Queues use the Enqueue trigger action. If you are new to trigger actions, read Trigger Actions first to understand the difference between synchronous, Void, and Enqueue invocations.

Steps

1. Define named queues in config

Declare one or more named queues under queue_configs in your iii-config.yaml. Each queue has independent retry, concurrency, and ordering settings.
iii-config.yaml
modules:
  - class: modules::queue::QueueModule
    config:
      queue_configs:
        default:
          max_retries: 5
          concurrency: 10
          type: standard
        payment:
          max_retries: 10
          concurrency: 2
          type: fifo
          message_group_field: transaction_id
        email:
          max_retries: 8
          concurrency: 5
          type: standard
          backoff_ms: 2000
      adapter:
        class: modules::queue::BuiltinQueueAdapter
        config:
          store_method: file_based
          file_path: ./data/queue_store
You can define as many named queues as your system requires. Each queue name is referenced when enqueuing work.
See the Queue module reference for every field, type, and default value.
2. Enqueue work via trigger action

From any function, enqueue a job by calling trigger() with TriggerAction.Enqueue and the target queue name. The caller does not wait for the job to be processed — it receives an acknowledgement (messageReceiptId) once the engine accepts the job.
import { registerWorker, TriggerAction } from 'iii-sdk'

const iii = registerWorker(process.env.III_URL ?? 'ws://localhost:49134')

const receipt = await iii.trigger({
  function_id: 'orders::process-payment',
  payload: { orderId: 'ord_789', amount: 149.99, currency: 'USD' },
  action: TriggerAction.Enqueue({ queue: 'payment' }),
})

console.log(receipt.messageReceiptId) // "msg_abc123"
The target function (orders::process-payment in this example) receives the payload as its input — it does not need to know it was invoked via a queue.
Unlike TriggerAction.Void(), which is fire-and-forget, Enqueue validates that the queue exists and (for FIFO) checks the message_group_field. The messageReceiptId lets you correlate enqueue operations with DLQ entries or retry events. See Trigger Actions for a detailed comparison.
3. Handle the enqueue result

The enqueue call can fail synchronously if the queue name is unknown or FIFO validation fails. Always handle the result.
try {
  const receipt = await iii.trigger({
    function_id: 'orders::process-payment',
    payload: { orderId: 'ord_789', amount: 149.99 },
    action: TriggerAction.Enqueue({ queue: 'payment' }),
  })
  console.log('Enqueued:', receipt.messageReceiptId)
} catch (err) {
  if (err.enqueue_error) {
    console.error('Queue rejected job:', err.enqueue_error)
  }
}
Common rejection reasons:
  • The queue name does not exist in queue_configs
  • A FIFO queue’s message_group_field is missing or null in the payload
4. Use FIFO queues for ordered processing

When processing order matters — for example, financial transactions for the same account — use a FIFO queue. Set type: fifo and specify message_group_field, the field in your payload whose value determines the ordering group. Jobs sharing the same group value are processed strictly in order.
iii-config.yaml (excerpt)
queue_configs:
  payment:
    max_retries: 10
    concurrency: 2
    type: fifo
    message_group_field: transaction_id
The payload must contain the field named by message_group_field, and its value must be non-null. The engine rejects enqueue requests that violate this.
await iii.trigger({
  function_id: 'payments::process',
  payload: { transaction_id: 'txn-abc-123', amount: 49.99, currency: 'USD' },
  action: TriggerAction.Enqueue({ queue: 'payment' }),
})
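A minimal sketch of the group-key validation described above, using a hypothetical fifoGroupKey helper (the engine's internal check is not exposed by iii-sdk):

```typescript
// Hypothetical helper illustrating the check the engine performs on FIFO
// enqueue: the payload field named by message_group_field must exist and
// be non-null. Not part of iii-sdk.
function fifoGroupKey(
  payload: Record<string, unknown>,
  messageGroupField: string,
): string {
  const value = payload[messageGroupField]
  if (value === undefined || value === null) {
    throw new Error(`enqueue rejected: "${messageGroupField}" is missing or null`)
  }
  return String(value)
}

// Jobs sharing the same key are processed strictly in order.
console.log(fifoGroupKey({ transaction_id: 'txn-abc-123' }, 'transaction_id')) // "txn-abc-123"
```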
5. Configure retries and backoff

Every named queue retries failed jobs automatically. Configure max_retries (total delivery attempts before the job moves to the dead-letter queue) and backoff_ms (base delay between retries). Backoff is exponential:
delay = backoff_ms × 2^(attempt - 1)
| Attempt | backoff_ms: 1000 | backoff_ms: 2000 |
| --- | --- | --- |
| 1 | 1 000 ms | 2 000 ms |
| 2 | 2 000 ms | 4 000 ms |
| 3 | 4 000 ms | 8 000 ms |
| 4 | 8 000 ms | 16 000 ms |
| 5 | 16 000 ms | 32 000 ms |
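As a sketch, the schedule follows directly from the formula delay = backoff_ms × 2^(attempt - 1) (hypothetical helper, not part of iii-sdk):

```typescript
// delay = backoff_ms * 2^(attempt - 1)
function retryDelayMs(backoffMs: number, attempt: number): number {
  return backoffMs * 2 ** (attempt - 1)
}

// The schedule for backoff_ms: 2000 across five attempts:
const schedule = [1, 2, 3, 4, 5].map((attempt) => retryDelayMs(2000, attempt))
console.log(schedule) // [2000, 4000, 8000, 16000, 32000]
```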
iii-config.yaml (excerpt)
queue_configs:
  email:
    max_retries: 8
    backoff_ms: 2000
    concurrency: 5
    type: standard
After all retries are exhausted, the job moves to a dead-letter queue (DLQ) where it is preserved for inspection or manual reprocessing.
See Manage Failed Triggers for DLQ configuration, inspection, and redrive.
6. Control concurrency

The concurrency field sets the maximum number of jobs the engine processes simultaneously from a single queue. This limit applies per engine instance.
iii-config.yaml (excerpt)
queue_configs:
  default:
    concurrency: 10    # up to 10 jobs in parallel
    type: standard
  payment:
    concurrency: 2     # ignored for ordering — FIFO uses prefetch=1
    type: fifo
    message_group_field: transaction_id
  • Standard queues: the engine pulls up to concurrency jobs simultaneously.
  • FIFO queues: the engine processes one job at a time (prefetch=1) to preserve ordering, regardless of the concurrency value.
Use low concurrency to protect downstream systems from overload (e.g. rate-limited APIs). Use high concurrency for embarrassingly parallel work (e.g. image resizing).
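The per-queue limit behaves like a small worker pool. A sketch of that behavior, assuming a simple shared-cursor pool (not the engine's actual scheduler):

```typescript
// Sketch of per-queue concurrency: a fixed pool of workers pulls jobs from a
// shared cursor, so at most `limit` handlers run at once.
async function runWithConcurrency<T, R>(
  jobs: T[],
  limit: number,
  handler: (job: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(jobs.length)
  let next = 0
  async function worker(): Promise<void> {
    while (next < jobs.length) {
      const i = next++ // the single-threaded event loop makes this safe
      results[i] = await handler(jobs[i])
    }
  }
  const workers = Array.from({ length: Math.min(limit, jobs.length) }, worker)
  await Promise.all(workers)
  return results
}
```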

Standard vs FIFO Queues

The two queue types solve fundamentally different problems. Standard queues maximize throughput. FIFO queues guarantee ordering.
| Dimension | Standard | FIFO |
| --- | --- | --- |
| Processing model | Up to concurrency jobs in parallel | One job at a time (prefetch=1) |
| Ordering | No guarantees; jobs may complete in any order | Strictly ordered within a message group |
| message_group_field | Not required | Required; must be present and non-null in every payload |
| Throughput | High; scales with concurrency | Lower; trades throughput for ordering |
| Use cases | Email sends, image processing, notifications | Payments, ledger entries, state machines |
| Retries | Retried independently; other jobs continue | Retried inline; blocks the queue until success or DLQ |

Standard queue flow

Jobs are dequeued and processed concurrently. Each job is independent.

FIFO queue flow

Jobs within the same message group are processed one at a time, strictly in order.

Retry and dead-letter flow

When a job fails, the engine retries it with exponential backoff. After all retries exhaust, the job moves to the DLQ.

Real-World Scenarios

Scenario 1: E-Commerce Order Pipeline

An order API must respond fast. Payment processing is critical and must happen in order per transaction. Email confirmation should be reliable. Analytics is best-effort. Queue configuration:
iii-config.yaml
modules:
  - class: modules::queue::QueueModule
    config:
      queue_configs:
        payment:
          max_retries: 10
          concurrency: 2
          type: fifo
          message_group_field: orderId
        email:
          max_retries: 5
          concurrency: 10
          type: standard
          backoff_ms: 2000
      adapter:
        class: modules::queue::BuiltinQueueAdapter
        config:
          store_method: file_based
          file_path: ./data/queue_store
import { registerWorker, TriggerAction, Logger } from 'iii-sdk'

const iii = registerWorker(process.env.III_URL ?? 'ws://localhost:49134')

iii.registerFunction({ id: 'orders::create' }, async (req) => {
  const logger = new Logger()
  const order = { id: crypto.randomUUID(), ...req.body }

  await iii.trigger({
    function_id: 'orders::process-payment',
    payload: { orderId: order.id, amount: order.total, currency: 'USD' },
    action: TriggerAction.Enqueue({ queue: 'payment' }),
  })

  await iii.trigger({
    function_id: 'emails::confirmation',
    payload: { email: order.email, orderId: order.id },
    action: TriggerAction.Enqueue({ queue: 'email' }),
  })

  await iii.trigger({
    function_id: 'analytics::track',
    payload: { event: 'order_created', orderId: order.id },
    action: TriggerAction.Void(),
  })

  logger.info('Order created', { orderId: order.id })
  return { status_code: 201, body: { orderId: order.id } }
})

iii.registerTrigger({
  type: 'http',
  function_id: 'orders::create',
  config: { api_path: '/orders', http_method: 'POST' },
})
This example uses all three trigger actions: Enqueue for payment (reliable, ordered) and email (reliable, parallel), and Void for analytics (best-effort).

The target function for a queued job is an ordinary registered function that receives the payload as its input:
process-order.ts
import { registerWorker, TriggerAction, Logger } from 'iii-sdk'

const iii = registerWorker(process.env.III_URL ?? 'ws://localhost:49134')

iii.registerFunction({ id: 'orders::process-order' }, async (order) => {
  const logger = new Logger()
  logger.info('Processing payment', { orderId: order.id })
  // ...payment logic...
  return { processed: true }
})
A worker can also enqueue further work, creating processing pipelines:
iii.registerFunction({ id: 'orders::process-order' }, async (order) => {
  // ...charge the customer...

  await iii.trigger({
    function_id: 'notifications::send',
    payload: { orderId: order.id, type: 'payment-confirmed' },
    action: TriggerAction.Enqueue({ queue: 'default' }),
  })

  return { processed: true }
})

Scenario 2: Bulk Email Delivery with Rate Limiting

A marketing system sends thousands of emails. The SMTP provider enforces a rate limit. A standard queue with low concurrency protects the provider from overload while transient SMTP failures are retried. Queue configuration:
iii-config.yaml (excerpt)
queue_configs:
  bulk-email:
    max_retries: 5
    concurrency: 3
    type: standard
    backoff_ms: 5000
import { registerWorker, TriggerAction } from 'iii-sdk'

const iii = registerWorker(process.env.III_URL ?? 'ws://localhost:49134')

iii.registerFunction({ id: 'campaigns::launch' }, async (campaign) => {
  for (const recipient of campaign.recipients) {
    await iii.trigger({
      function_id: 'emails::send',
      payload: {
        to: recipient.email,
        subject: campaign.subject,
        body: campaign.body,
      },
      action: TriggerAction.Enqueue({ queue: 'bulk-email' }),
    })
  }

  return { enqueued: campaign.recipients.length }
})

iii.registerFunction({ id: 'emails::send' }, async (email) => {
  const response = await fetch('https://smtp-provider.example/send', {
    method: 'POST',
    body: JSON.stringify(email),
    headers: { 'Content-Type': 'application/json' },
  })

  if (!response.ok) {
    throw new Error(`SMTP error: ${response.status}`)
  }

  return { sent: true }
})
With concurrency: 3, at most three emails are in-flight at any time. Failed sends retry with exponential backoff (5s, 10s, 20s, 40s, 80s), protecting the SMTP provider from overload.

Scenario 3: Financial Transaction Ledger

A banking system processes account transactions. Transactions for the same account must be applied in order to prevent balance inconsistencies. Different accounts can process in parallel. Queue configuration:
iii-config.yaml (excerpt)
queue_configs:
  ledger:
    max_retries: 15
    concurrency: 1
    type: fifo
    message_group_field: account_id
    backoff_ms: 500
import { registerWorker, TriggerAction } from 'iii-sdk'

const iii = registerWorker(process.env.III_URL ?? 'ws://localhost:49134')

iii.registerFunction({ id: 'transactions::submit' }, async (req) => {
  const { account_id, type, amount } = req.body

  const receipt = await iii.trigger({
    function_id: 'ledger::apply',
    payload: { account_id, type, amount },
    action: TriggerAction.Enqueue({ queue: 'ledger' }),
  })

  return { status_code: 202, body: { receiptId: receipt.messageReceiptId } }
})

iii.registerFunction({ id: 'ledger::apply' }, async (txn) => {
  const { account_id, type, amount } = txn
  if (type === 'deposit') {
    await db.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, account_id])
  } else if (type === 'withdraw') {
    const { rows } = await db.query('SELECT balance FROM accounts WHERE id = $1', [account_id])
    if (rows[0].balance < amount) {
      throw new Error('Insufficient funds')
    }
    await db.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, account_id])
  }
  return { applied: true }
})
Because the ledger queue is FIFO with message_group_field: account_id, a deposit submitted for an account always completes before a later withdrawal for that same account. Without FIFO ordering, the withdrawal could execute first and fail with “Insufficient funds” even though the deposit was submitted first.
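To see why the ordering matters, here is a self-contained simulation with an in-memory balance standing in for the database (illustrative only; it mirrors the ledger::apply logic above):

```typescript
type Txn = { account_id: string; type: 'deposit' | 'withdraw'; amount: number }

const balances = new Map<string, number>([['acct_123', 0]])

// Mirrors the ledger::apply handler, minus the database.
function apply(txn: Txn): void {
  const balance = balances.get(txn.account_id) ?? 0
  if (txn.type === 'withdraw' && balance < txn.amount) {
    throw new Error('Insufficient funds')
  }
  balances.set(
    txn.account_id,
    balance + (txn.type === 'deposit' ? txn.amount : -txn.amount),
  )
}

// FIFO guarantees the deposit submitted first is applied first.
const submitted: Txn[] = [
  { account_id: 'acct_123', type: 'deposit', amount: 100 },
  { account_id: 'acct_123', type: 'withdraw', amount: 50 },
]
for (const txn of submitted) apply(txn)
console.log(balances.get('acct_123')) // 50
```

Reversing the two transactions makes the withdrawal fail, which is exactly the inconsistency FIFO ordering prevents.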

Choosing an Adapter

The queue adapter determines where messages are stored and how they are distributed. Your choice depends on your deployment topology.
| Scenario | Recommended Adapter | Why |
| --- | --- | --- |
| Local development | BuiltinQueueAdapter (in_memory) | Zero dependencies, fast iteration |
| Single-instance production | BuiltinQueueAdapter (file_based) | Durable across restarts, no external infra |
| Multi-instance production | RabbitMQAdapter | Distributes messages across engine instances |
Regardless of which adapter you choose, retry semantics, concurrency enforcement, and FIFO ordering behave identically — the engine owns these behaviors, not the adapter.
See the Queue module reference for adapter configuration and the adapter comparison table for a feature matrix.
When using the RabbitMQ adapter, iii creates exchanges and queues using a predictable naming convention. For a queue named payment, the main queue is iii.__fn_queue::payment, the retry queue is iii.__fn_queue::payment::retry.queue, and the DLQ is iii.__fn_queue::payment::dlq.queue. See Dead Letter Queues for the full resource map. For the design rationale behind this topology, see Queue Architecture.
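A sketch of that convention as a helper (hypothetical; iii derives these names internally):

```typescript
// Reproduces the documented RabbitMQ naming convention for a queue name.
function rabbitResourceNames(queue: string) {
  const main = `iii.__fn_queue::${queue}`
  return {
    main,                          // e.g. iii.__fn_queue::payment
    retry: `${main}::retry.queue`, // e.g. iii.__fn_queue::payment::retry.queue
    dlq: `${main}::dlq.queue`,     // e.g. iii.__fn_queue::payment::dlq.queue
  }
}

console.log(rabbitResourceNames('payment').main) // "iii.__fn_queue::payment"
```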

Queue Config Reference

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| max_retries | u32 | 3 | Maximum delivery attempts before routing to DLQ |
| concurrency | u32 | 10 | Maximum concurrent workers for this queue (standard only) |
| type | string | "standard" | "standard" for concurrent processing; "fifo" for ordered processing |
| message_group_field | string | (none) | Required for FIFO; the JSON field in the payload used for ordering groups (must be non-null) |
| backoff_ms | u64 | 1000 | Base retry backoff in milliseconds, applied exponentially: backoff_ms × 2^(attempt - 1) |
| poll_interval_ms | u64 | 100 | Worker poll interval in milliseconds |
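For readers working in TypeScript, a hypothetical shape mirroring these fields (the engine's actual config schema may differ):

```typescript
// Hypothetical TypeScript shape for a queue entry in queue_configs.
type QueueType = 'standard' | 'fifo'

interface QueueConfig {
  max_retries?: number         // default 3
  concurrency?: number         // default 10; standard queues only
  type?: QueueType             // default "standard"
  message_group_field?: string // required when type is "fifo"
  backoff_ms?: number          // default 1000
  poll_interval_ms?: number    // default 100
}

// Sketch of the one cross-field rule documented above.
function validateQueueConfig(name: string, cfg: QueueConfig): void {
  if (cfg.type === 'fifo' && !cfg.message_group_field) {
    throw new Error(`queue "${name}": fifo queues require message_group_field`)
  }
}
```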
For the full module configuration including adapter settings, see the Queue module reference.

Next Steps

Trigger Actions

Understand synchronous, Void, and Enqueue invocation modes

Dead Letter Queues

Handle and redrive failed queue messages

Queue Module Reference

Full configuration reference for queues and adapters

Queue Architecture

Design rationale behind retry, dead-lettering, and multi-resource topology