Operational reverse ETL depends on visibility: you need to know when a run succeeded, how many rows moved, and which rows failed validation or API rules. Planasonix surfaces run history, status, and a dead letter queue (DLQ) so you can fix data and reprocess without rerunning the entire warehouse.

Run history

Each sync execution appears in run history with:
  • Start and end time, duration, and triggering user or schedule
  • Rows read from the warehouse vs rows accepted by the destination
  • API errors summarized by code and message
Open a run to see step-level detail (query execution, batch uploads, per-batch responses). Export logs when you need to attach evidence to a vendor support ticket.
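The per-code error summary shown in run history can be reproduced from an exported run log. A minimal sketch, assuming exported per-batch errors are dicts with `code` and `message` keys (the field names here are illustrative, not the actual export schema):

```python
from collections import Counter

# Hypothetical export of per-batch API errors from a single run's log.
errors = [
    {"code": 400, "message": "INVALID_EMAIL"},
    {"code": 400, "message": "INVALID_EMAIL"},
    {"code": 409, "message": "DUPLICATE_EXTERNAL_ID"},
]

# Summarize by (code, message), mirroring the run-history rollup.
summary = Counter((e["code"], e["message"]) for e in errors)
for (code, message), count in summary.most_common():
    print(f"{code} {message}: {count}")
```

This is handy when attaching evidence to a vendor ticket: the counts make it obvious whether one error dominates or failures are spread across codes.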

Sync status

Syncs display a current status such as healthy, failing, or paused. Status rolls up from recent runs and configuration issues (for example, expired OAuth). A sync is healthy when its recent runs completed within the thresholds you define and its error rate is below your alert rules. Subscribe to notifications so owners learn about regressions before business teams report stale data.
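The rollup logic can be sketched as a small function. This is an illustrative model, not the product's actual algorithm; the run fields (`rows_failed`, `rows_read`, `duration_s`) and default thresholds are assumptions standing in for your own alert rules:

```python
def sync_status(recent_runs, paused=False, config_error=None,
                max_duration_s=900, max_error_rate=0.01):
    """Roll a status up from recent runs and configuration issues.

    Hypothetical logic: real thresholds come from your alert rules.
    """
    if paused:
        return "paused"
    if config_error:  # e.g. expired OAuth
        return "failing"
    for run in recent_runs:
        rate = run["rows_failed"] / max(run["rows_read"], 1)
        if run["duration_s"] > max_duration_s or rate > max_error_rate:
            return "failing"
    return "healthy"

# A run with a 5% failure rate trips the default 1% threshold.
print(sync_status([{"rows_failed": 5, "rows_read": 100, "duration_s": 60}]))
```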

Dead letter queue (DLQ)

The DLQ stores rows that could not be written after retries. Typical causes include validation errors, duplicate key conflicts, missing required fields, and permission errors on specific fields.
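Before fixing anything, it helps to bucket DLQ entries by cause so you can batch similar fixes. A sketch, assuming exported entries carry a `row_id` and an `error_type` field (illustrative names, not the actual DLQ schema):

```python
from collections import defaultdict

# Hypothetical DLQ entries, as exported from the queue.
dlq = [
    {"row_id": "a1", "error_type": "validation", "code": 400},
    {"row_id": "b2", "error_type": "duplicate_key", "code": 409},
    {"row_id": "c3", "error_type": "validation", "code": 400},
]

# Group row IDs by error type to plan one fix per bucket.
by_type = defaultdict(list)
for entry in dlq:
    by_type[entry["error_type"]].append(entry["row_id"])
```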
1. Open the DLQ for the sync
   From the sync detail page, open Dead letter queue to filter by error type, time range, and destination response code.
2. Inspect the payload
   Compare the stored payload to your warehouse row. Often the fix is a mapping change, a SQL correction, or a data quality rule upstream.
3. Fix upstream or in the mapper
   Update the view or mapping, then clear or reprocess affected DLQ entries as described below.
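The payload comparison in the inspection step can be sketched as a field-level diff. A minimal example, assuming both the stored payload and the warehouse row are available as dicts:

```python
def payload_diff(dlq_payload, warehouse_row):
    """Return fields whose values differ, to spot mapping or coercion bugs."""
    keys = dlq_payload.keys() | warehouse_row.keys()
    return {k: (dlq_payload.get(k), warehouse_row.get(k))
            for k in keys
            if dlq_payload.get(k) != warehouse_row.get(k)}

# A mismatched "plan" value points at a mapping or upstream data issue.
print(payload_diff({"email": "x@y.z", "plan": "pro"},
                   {"email": "x@y.z", "plan": "enterprise"}))
```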

Error resolution

  • Transient failures: rate limits and short outages usually resolve with automatic retries. If failures persist, check vendor status and reduce batch size or concurrency in sync advanced settings when available.
  • Row-level data errors: fix the row in the warehouse or adjust coercion in SQL, then reprocess from the DLQ so only corrected rows are resent.
  • Schema changes: when the destination adds required fields or changes types, refresh the connector schema, update mappings, and run a preview before clearing the DLQ backlog.
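The automatic retry behavior for rate limits is typically exponential backoff with jitter. A minimal sketch of that pattern, where `send_batch` is a hypothetical callable returning an HTTP status code (not a Planasonix API):

```python
import random
import time

def send_with_backoff(send_batch, batch, max_attempts=5, base_delay=1.0):
    """Retry a batch upload on rate limits (HTTP 429), backing off with jitter."""
    status = None
    for attempt in range(max_attempts):
        status = send_batch(batch)
        if status != 429:
            return status
        # Exponential backoff with jitter to avoid synchronized retry storms.
        time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random() / 2))
    return status
```

Reducing batch size has a similar effect: smaller payloads consume rate-limit budget more gradually and make per-batch failures cheaper to retry.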

Bulk operations and export

You can select multiple DLQ entries to reprocess, discard, or export (CSV/JSON) for analysis in a spreadsheet or ticket. Discarding acknowledges that a row should not be retried—for example, when the business retires a legacy ID space.
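For the export path, the standard library covers the CSV case. A sketch, assuming DLQ entries were already fetched as dicts (field names are illustrative):

```python
import csv
import io

def export_dlq_csv(entries,
                   fieldnames=("row_id", "error_type", "code", "message")):
    """Render DLQ entries as CSV for a spreadsheet or support ticket."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(entries)
    return buf.getvalue()

print(export_dlq_csv([{"row_id": "a1", "error_type": "validation",
                       "code": 400, "message": "bad email"}]))
```

`extrasaction="ignore"` drops any extra keys an entry carries, so the export stays stable even if the entry shape grows.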
Reprocessing after changing only the destination configuration, without fixing the underlying warehouse data, can recreate the same error. Always confirm the root cause before large bulk retries.

Tie-in to observability

For organization-wide visibility, use the Observability dashboard and alerts so reverse ETL failures appear next to batch pipeline incidents your team already monitors.