⛔ Files ignored due to path filters (4)
📒 Files selected for processing (41)
📝 Walkthrough

This PR introduces comprehensive audit evidence collection, packaging, and verification services; extends audit journaling with sequence numbering and structured metadata fields; refactors payout operations to transfer-owned per-entry flows; and adds archival/retention workflows with sink integration to the audit trail system.

Sequence Diagram(s)

sequenceDiagram
actor Client
participant Payout as Payout Service
participant Dispersal as Dispersal Bridge
participant Transfer as Transfer Engine
participant Ledger as Cash Ledger
participant Audit as Audit Journal
Client->>Payout: triggerImmediatePayout (per mortgage group)
Payout->>Dispersal: findEligibleEntries (filter by transferRequestId)
Dispersal-->>Payout: dispersalEntries[]
loop for each dispersalEntry
Payout->>Transfer: executeTransferOwnedPayout
Transfer->>Transfer: processSingleDisbursement (create/reuse transfer)
Transfer->>Transfer: initiateTransferInternal (invoke provider)
alt Manual Provider
Transfer->>Transfer: confirmManualTransferInternal (immediate settlement)
Transfer->>Ledger: post LENDER_PAYOUT receipt (via transfer effect)
Ledger->>Audit: record cashJournalEntry + linkedTransferId
end
Transfer->>Transfer: patch dispersalEntry (processingAt, disbursedAt, transferRequestId)
Transfer->>Audit: record SETTLEMENT_RECORDED + LEDGER_LINK_RECORDED entries
Transfer-->>Payout: { transferId, confirmed, created }
end
Payout->>Payout: updateLenderPayoutDateRef (if confirmed > 0 && failures == 0)
Payout-->>Client: { payoutCount, failures }
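The per-mortgage grouping step in the diagram above can be sketched as a pure helper. The `DispersalEntry` shape and the helper name are illustrative assumptions, not the actual module API:

```typescript
// Assumed, simplified shape of a dispersal entry (field names taken from the diagrams).
type DispersalEntry = { _id: string; mortgageId: string; lenderId: string; amount: number };

// Group eligible entries by mortgage so each group can be processed entry-by-entry,
// mirroring the "loop for each dispersalEntry" step above.
function groupByMortgage(entries: DispersalEntry[]): Map<string, DispersalEntry[]> {
  const groups = new Map<string, DispersalEntry[]>();
  for (const entry of entries) {
    const group = groups.get(entry.mortgageId) ?? [];
    group.push(entry);
    groups.set(entry.mortgageId, group);
  }
  return groups;
}
```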
sequenceDiagram
actor Admin
participant AE as Audit Evidence
participant AuditJournal as Audit Journal
participant DB as Database
participant Verify as Verification
Admin->>AE: generateAuditPackage { asOf, scope, format }
AE->>AuditJournal: collectAuditEvidenceDataImpl (time-filter to asOf)
AuditJournal->>DB: query auditJournal + cashLedger (scope-filtered)
DB-->>AuditJournal: entries[]
AuditJournal-->>AE: { entities, ledgerBalances }
AE->>DB: createCSV/JSON artifacts (chunked with SHA-256)
AE->>DB: insert auditEvidencePackages + auditEvidencePackageArtifacts
AE->>AuditJournal: appendAuditJournalEntry PACKAGE_GENERATED
AE-->>Admin: { packageId, artifacts }
Admin->>Verify: verifyAuditPackage { packageId }
Verify->>DB: queryPackage + loadArtifacts
Verify->>AuditJournal: collectAuditEvidenceDataImpl (re-collect)
Verify->>Verify: validate sequenceNumber monotonicity
Verify->>AuditJournal: appendAuditJournalEntry AUDIT_PACKAGE_VERIFIED
alt sequenceValid
Verify->>DB: patch packageDoc.verificationJson
end
Verify-->>Admin: { isSequenceValid, verifiedAt }
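The "validate sequenceNumber monotonicity" step above can be illustrated with a small pure function. The entry shape is assumed from the schema's `int64 sequenceNumber` field, and the real verifier may apply stricter rules (e.g. gap detection):

```typescript
// Minimal sketch: journal entries carry a bigint sequence number per the schema.
type JournalEntry = { sequenceNumber: bigint };

// Returns true only if sequence numbers strictly increase in order.
function isSequenceMonotonic(entries: JournalEntry[]): boolean {
  for (let i = 1; i < entries.length; i++) {
    if (entries[i].sequenceNumber <= entries[i - 1].sequenceNumber) return false;
  }
  return true;
}
```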
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

🚥 Pre-merge checks: 1 passed, 1 warning, 1 inconclusive
Reviewer's Guide

Implements structured audit evidence packaging and canonical audit envelopes, makes transfer confirmation the sole owner of cash ledger settlement and payout flows, enriches dispersal/servicing records, and updates payout and collection attempt flows and tests accordingly.

Sequence diagram for transfer confirmation owning cash ledger and dispersal state

sequenceDiagram
participant Engine as EngineTransition
participant Transfer as TransferRequests
participant Effects as TransferEffects
participant Ledger as CashLedger
participant Dispersal as DispersalEntries
participant Attempt as CollectionAttempts
participant Audit as AuditJournal
Engine->>Transfer: load transfer(entityId)
Note over Engine,Transfer: executeTransition FUNDS_SETTLED already ran
Engine->>Effects: publishTransferConfirmed(effectPayload)
activate Effects
Effects->>Transfer: patch {confirmedAt, settledAt}
Effects->>Audit: appendTransferMutationAuditEntry SETTLEMENT_RECORDED
alt direction == inbound
Effects->>Ledger: postCashReceiptForTransfer(transferRequestId)
Ledger-->>Effects: receiptEntry
else direction == outbound
Effects->>Ledger: postLenderPayoutForTransfer(transferRequestId)
Ledger-->>Effects: payoutEntry
else no_direction
Effects-->>Effects: throw Error(no direction)
end
alt cashEntryId present
Effects->>Transfer: patch {cashJournalEntryIds += cashEntryId}
Effects->>Audit: appendTransferMutationAuditEntry LEDGER_LINK_RECORDED
end
alt inbound && collectionAttemptId set
Effects->>Attempt: reconcileAttemptLinkedInboundSettlement(transfer, settledAt)
Attempt-->>Effects: reconciledAttempt?
alt !reconciledAttempt
Effects-->>Effects: throw Error(reconciliation failed)
end
end
alt transfer.dispersalEntryId set
Effects->>Dispersal: load dispersalEntry
alt exists
Effects->>Dispersal: patch { status: disbursed, disbursedAt: settledAt, payoutDate, processingAt, reversedAt: undefined, transferRequestId }
Effects->>Audit: appendAuditJournalEntry on dispersalEntry
else missing
Effects-->>Effects: log error but do not fail
end
end
deactivate Effects
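The direction dispatch in this effect (inbound posts a cash receipt, outbound posts a lender payout, and a missing direction throws) can be sketched as follows; the return labels are illustrative, not the actual poster function names:

```typescript
type Direction = "inbound" | "outbound" | undefined;

// Pick the ledger posting for a confirmed transfer based on its direction.
// A transfer without a direction is a data defect and must fail loudly.
function selectLedgerPosting(direction: Direction): "cash_receipt" | "lender_payout" {
  if (direction === "inbound") return "cash_receipt";
  if (direction === "outbound") return "lender_payout";
  throw new Error("transfer has no direction; cannot post to the cash ledger");
}
```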
Sequence diagram for transfer-owned payout via batch and admin flows

sequenceDiagram
actor Admin
participant Cron as PayoutBatchCron
participant AdminAction as AdminPayoutAction
participant PayoutFlow as TransferOwnedFlow
participant Bridge as DisbursementBridge
participant Transfer as TransferRequests
participant TransferAPI as TransferEngine
rect rgb(230,230,255)
Cron->>Cron: getEligibleDispersalEntries()
Cron->>Cron: group entries by lender and mortgage
loop per mortgage group
Cron->>PayoutFlow: processMortgageGroup(entries, mortgageId, source)
activate PayoutFlow
loop per entry
PayoutFlow->>Bridge: processSingleDisbursement(dispersalEntryId, providerCode)
Bridge-->>PayoutFlow: {transferId, created}
PayoutFlow->>TransferAPI: initiateTransferInternal(transferId)
TransferAPI-->>PayoutFlow: initiated
alt confirmSettlement && providerCode is manual
PayoutFlow->>TransferAPI: confirmManualTransferInternal(manualSettlement, transferId)
TransferAPI-->>PayoutFlow: transition FUNDS_SETTLED
end
PayoutFlow->>TransferAPI: getTransferInternal(transferId)
TransferAPI-->>PayoutFlow: transfer(status)
alt status == confirmed
PayoutFlow-->>Cron: increment confirmedCount, totalAmount
else
PayoutFlow-->>Cron: record failure
end
end
deactivate PayoutFlow
end
end
rect rgb(230,255,230)
Admin->>AdminAction: triggerImmediatePayout(lenderId)
activate AdminAction
AdminAction->>AdminAction: getEligibleDispersalEntries(lender)
AdminAction->>AdminAction: group entries by mortgage
loop per mortgage group
loop per entry
AdminAction->>PayoutFlow: executeTransferOwnedPayout(entry, manual, source)
end
end
AdminAction-->>Admin: result or ConvexError with failures
deactivate AdminAction
end
ER diagram for new audit and dispersal calculation entities

erDiagram
auditJournal {
string _id
string entityType
string entityId
string eventType
string eventCategory
string eventId
string previousState
string newState
string outcome
string effectiveDate
string organizationId
string legalEntityId
string originSystem
string requestId
string actorId
string actorType
string channel
string ip
string sessionId
int64 sequenceNumber
number timestamp
any payload
any beforeState
any afterState
any delta
any linkedRecordIds
string idempotencyKey
string correlationId
string reason
}
auditJournalSequenceCounters {
string _id
string name
int64 nextSequenceNumber
number updatedAt
}
auditEvidencePackages {
string _id
any scope
number asOf
string format
string manifestJson
string eventsJson
string eventsCsv
string entitiesCsv
string balancesCsv
string linkageCsv
string reconstructionNotes
string verificationJson
number createdAt
string createdBy
}
audit_events {
string _id
string entityId
string entityType
string eventType
string actorId
string beforeState
string afterState
string canonicalEnvelope
string prevHash
string hash
boolean emitted
number emittedAt
string sinkReference
number emitFailures
number archivedAt
number retentionUntilAt
number timestamp
}
audit_outbox {
string _id
string eventId
string idempotencyKey
string status
number emitFailures
number createdAt
number emittedAt
string sinkReference
number archivedAt
number retentionUntilAt
number lastFailureAt
string lastFailureReason
}
audit_evidence_objects {
string _id
string eventId
string idempotencyKey
string sinkReference
string contentType
string payload
number createdAt
}
dispersalCalculationRuns {
string _id
string orgId
string mortgageId
string obligationId
string idempotencyKey
number settledAmount
string settledDate
string paymentMethod
string payoutEligibleAfter
string calculationVersion
any inputs
any outputs
any source
number createdAt
}
dispersalEntries {
string _id
string orgId
string mortgageId
string obligationId
string lenderId
string lenderAccountId
string calculationRunId
string transferRequestId
number amount
string dispersalDate
string paymentMethod
string payoutEligibleAfter
string idempotencyKey
string status
string payoutDate
number processingAt
number disbursedAt
number reversedAt
number createdAt
}
servicingFeeEntries {
string _id
string calculationRunId
string mortgageId
string obligationId
number amount
number feeCashApplied
number feeDue
number feeReceivable
number principalBalance
string date
string feeCode
string mortgageFeeId
number annualRate
string policyVersion
string sourceObligationType
number createdAt
}
cash_ledger_journal_entries {
string _id
string debitAccountId
string creditAccountId
string mortgageId
string obligationId
string lenderId
string transferRequestId
string entryType
string effectiveDate
string causedBy
string postingGroupId
string idempotencyKey
string reason
any metadata
any source
int64 sequenceNumber
number amount
number timestamp
}
transferRequests {
string _id
string orgId
string status
string direction
string transferType
number amount
string currency
string providerCode
string providerRef
string idempotencyKey
string collectionAttemptId
string dispersalEntryId
string mortgageId
string obligationId
string dealId
string lenderId
string borrowerId
any manualSettlement
any machineContext
number createdAt
number lastTransitionAt
string confirmedAt
string settledAt
string reversedAt
string failedAt
string failureCode
string failureReason
string reversalRef
string[] cashJournalEntryIds
}
auditJournalSequenceCounters ||--o{ auditJournal : assigns_sequence
auditJournal ||--o{ audit_events : drives_hash_chain
audit_events ||--o{ audit_outbox : queued_for_sink
audit_events ||--o{ audit_evidence_objects : materialized_payload
dispersalCalculationRuns ||--o{ dispersalEntries : produces_shares
dispersalCalculationRuns ||--o{ servicingFeeEntries : produces_fee
dispersalEntries }o--|| transferRequests : paid_by_transfer
transferRequests ||--o{ cash_ledger_journal_entries : settled_into_ledger
cash_ledger_journal_entries }o--|| auditJournal : audited_as_events
auditEvidencePackages }o--o{ auditJournal : packages_events
auditEvidencePackages }o--o{ cash_ledger_journal_entries : packages_ledger
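The `auditJournalSequenceCounters` → `auditJournal` relationship above can be illustrated with an in-memory counter. In Convex the read-increment-write would run inside a single transactional mutation; this standalone sketch only shows the contract (each allocation returns a unique, monotonically increasing bigint):

```typescript
// In-memory stand-in for an auditJournalSequenceCounters row.
class SequenceCounter {
  private next = 1n; // corresponds to nextSequenceNumber in the table

  allocate(): bigint {
    const assigned = this.next;
    this.next += 1n; // in Convex this increment would be persisted atomically
    return assigned;
  }
}
```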
Class diagram for audit evidence packaging and sink infrastructure

classDiagram
class AuditJournalEntry {
+EntityType entityType
+string entityId
+string eventType
+string eventCategory
+string eventId
+string previousState
+string newState
+string outcome
+string effectiveDate
+string organizationId
+string legalEntityId
+string originSystem
+string requestId
+string actorId
+ActorType actorType
+CommandChannel channel
+string ip
+string sessionId
+Record~string,unknown~ payload
+Record~string,unknown~ beforeState
+Record~string,unknown~ afterState
+Record~string,unknown~ delta
+Record~string,unknown~ linkedRecordIds
+string idempotencyKey
+string correlationId
+string reason
+bigint sequenceNumber
+number timestamp
}
class AuditEvidenceScope {
+string entityId
+EntityType entityType
+string lenderId
+string mortgageId
+string obligationId
+string transferRequestId
}
class AuditEvidenceSink {
<<interface>>
+emit(ctx, contentType, eventId, idempotencyKey, payload) Promise~SinkResult~
}
class ComponentTableAuditEvidenceSink {
+emit(ctx, contentType, eventId, idempotencyKey, payload) Promise~SinkResult~
}
class SinkResult {
+string sinkReference
}
class AuditEvidenceServices {
+collectAuditEvidenceData(asOf, scope)
+recordAuditEvidenceAccess(action, actorId, asOf, packageId, scope)
+persistAuditEvidencePackage(asOf, format, scope, artifacts)
+generateAuditPackage(asOf, actorId, format, scope)
+verifyAuditPackage(actorId, packageId, scope)
+reconstructEntityState(actorId, asOf, entityId, entityType)
+reconstructLedgerBalances(actorId, asOf, scope)
}
class AuditEvidencePackageDoc {
+string _id
+AuditEvidenceScope scope
+number asOf
+string format
+string manifestJson
+string eventsJson
+string eventsCsv
+string entitiesCsv
+string balancesCsv
+string linkageCsv
+string reconstructionNotes
+string verificationJson
+number createdAt
+string createdBy
}
class AuditTrailInsertArgsBuilder {
+buildAuditTrailInsertArgs(entry) CashLedgerAuditArgs
}
class CashLedgerAuditArgs {
+string entityId
+string entityType
+string eventType
+string actorId
+string beforeState
+string afterState
+string canonicalEnvelope
+string metadata
+number timestamp
}
class AuditTrailComponent {
+insert(entityId, entityType, eventType, actorId, beforeState, afterState, canonicalEnvelope, metadata, timestamp)
+emitPending()
+processOutbox()
+processRetention()
}
AuditEvidenceSink <|.. ComponentTableAuditEvidenceSink
AuditEvidenceServices --> AuditEvidencePackageDoc : creates
AuditEvidenceServices --> AuditJournalEntry : queries_events
AuditEvidenceServices --> CashLedgerAuditArgs : queries_ledger
AuditEvidenceServices --> AuditTrailComponent : appends_access_events
AuditTrailInsertArgsBuilder --> CashLedgerAuditArgs : builds
AuditTrailComponent --> AuditEvidenceSink : uses_via_emitAuditEvidence
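The sink contract from the diagram above can be sketched in TypeScript. The real `emit` also receives a Convex `ctx` argument (omitted here), and the in-memory class below is a hypothetical stand-in for `ComponentTableAuditEvidenceSink`:

```typescript
interface SinkResult { sinkReference: string }

interface AuditEvidenceSink {
  emit(contentType: string, eventId: string, idempotencyKey: string, payload: string): Promise<SinkResult>;
}

// Illustrative in-memory sink: idempotent on idempotencyKey, returns a stable reference.
class InMemorySink implements AuditEvidenceSink {
  private store = new Map<string, string>();

  async emit(_contentType: string, _eventId: string, idempotencyKey: string, payload: string): Promise<SinkResult> {
    // Re-emitting the same key must not rewrite the stored payload.
    if (!this.store.has(idempotencyKey)) this.store.set(idempotencyKey, payload);
    return { sinkReference: `mem:${idempotencyKey}` };
  }
}
```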
Class diagram for transfer-owned payout flow and manual settlement

classDiagram
class ManualSettlementDetails {
+string instrumentType
+number settlementOccurredAt
+string externalReference
+string enteredBy
+string location
+string[] evidenceAttachmentIds
}
class TransferRequestInput {
+string transferType
+string direction
+string providerCode
+number amount
+string currency
+string idempotencyKey
+ManualSettlementDetails manualSettlement
+Record~string,unknown~ metadata
}
class InitiateResult {
+string providerRef
+string status
+Record~string,unknown~ providerData
+number settledAt
}
class ManualTransferProvider {
+initiate(request TransferRequestInput) InitiateResult
+confirm(ref string) InitiateResult
}
class TransferOwnedPayoutCtx {
<<interface>>
+runMutation(name, args) Promise
+runQuery(name, args) Promise
+runAction(name, args) Promise
}
class TransferOwnedPayoutResult {
+number amount
+boolean confirmed
+boolean created
+string transferId
}
class TransferOwnedFlow {
+executeTransferOwnedPayout(confirmSettlement, ctx, entry, providerCode, source) TransferOwnedPayoutResult
-buildManualSettlement(entry, occurredAt, source) ManualSettlementDetails
}
class DispersalEntryDoc {
+string _id
+string mortgageId
+string obligationId
+string lenderId
+number amount
+string status
+string payoutEligibleAfter
+string paymentMethod
+string calculationRunId
+string transferRequestId
}
class TransferDoc {
+string _id
+string status
+string providerCode
+ManualSettlementDetails manualSettlement
}
ManualTransferProvider --> ManualSettlementDetails : uses
TransferOwnedFlow --> DispersalEntryDoc : consumes
TransferOwnedFlow --> ManualSettlementDetails : constructs
TransferOwnedFlow --> TransferOwnedPayoutCtx : depends_on
TransferOwnedFlow --> TransferDoc : reads_status
TransferOwnedFlow --> TransferOwnedPayoutResult : returns
Hey - I've found 2 issues, and left some high-level feedback:
- In `collectionPlan/__tests__/runner.test.ts`, the `processDuePlanEntries` test redeclares `const replayTransfers` in the same scope, which will fail TypeScript/ESLint; rename one of the variables or reuse the existing binding.
- The audit evidence collectors (e.g. `collectAuditEvidenceDataImpl`) currently read the entire `auditJournal` and `cash_ledger_journal_entries` tables into memory and then filter/sort in JS; consider pushing the `asOf` cutoff and scope constraints into the Convex queries (and/or adding supporting indexes) to keep package generation and verification efficient on larger datasets.
- The logic for building `linkedRecordIds` and transfer snapshots is duplicated between `appendTransferCreationAuditEntry` and `buildTransferLinkedRecordIds`; factoring this into a shared helper would reduce the risk of the audit linkage fields diverging over time.
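The shared-helper suggestion above could look roughly like this; field names follow the `transferRequests` schema in this PR, but the helper itself is hypothetical:

```typescript
// Assumed subset of a transfer document relevant to audit linkage.
type TransferLike = {
  mortgageId?: string;
  obligationId?: string;
  lenderId?: string;
  collectionAttemptId?: string;
  dispersalEntryId?: string;
};

// Single source of truth for linkedRecordIds, usable by both the creation and
// mutation audit-entry builders so the linkage fields cannot diverge.
function buildLinkedRecordIds(transfer: TransferLike): Record<string, string> {
  const links: Record<string, string> = {};
  if (transfer.mortgageId) links.mortgageId = transfer.mortgageId;
  if (transfer.obligationId) links.obligationId = transfer.obligationId;
  if (transfer.lenderId) links.lenderId = transfer.lenderId;
  if (transfer.collectionAttemptId) links.collectionAttemptId = transfer.collectionAttemptId;
  if (transfer.dispersalEntryId) links.dispersalEntryId = transfer.dispersalEntryId;
  return links;
}
```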
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `collectionPlan/__tests__/runner.test.ts`, the `processDuePlanEntries` test redeclares `const replayTransfers` in the same scope, which will fail TypeScript/ESLint; rename one of the variables or reuse the existing binding.
- The audit evidence collectors (e.g. `collectAuditEvidenceDataImpl`) currently read entire `auditJournal` and `cash_ledger_journal_entries` tables into memory and then filter/sort in JS; consider pushing the `asOf` cutoff and scope constraints into the Convex queries (and/or adding supporting indexes) to keep package generation and verification efficient on larger datasets.
- The logic for building `linkedRecordIds` and transfer snapshots is duplicated between `appendTransferCreationAuditEntry` and `buildTransferLinkedRecordIds`; factoring this into a shared helper would reduce the risk of the audit linkage fields diverging over time.
## Individual Comments
### Comment 1
<location path="convex/engine/effects/transfer.ts" line_range="227-236" />
<code_context>
- ctx,
- {
- transfer,
+ await appendTransferMutationAuditEntry(ctx, {
+ beforeState: {
+ ...transfer,
+ _id: `${transfer._id}`,
+ },
+ afterState: {
+ ...transfer,
+ _id: `${transfer._id}`,
+ confirmedAt: transfer.confirmedAt ?? settledAt,
settledAt,
- source: args.source,
- }
- );
+ },
+ eventType: "SETTLEMENT_RECORDED",
+ payload: {
+ settledAt,
+ },
</code_context>
<issue_to_address>
**suggestion:** Reuse a single timestamp for `failedAt`/`reversedAt` and the audit snapshot to avoid subtle state mismatches.
Using separate `Date.now()` calls in `publishTransferFailed` and `publishTransferReversed` means the audit `afterState.failedAt`/`reversedAt` may differ slightly from the value actually persisted on the transfer. Capturing a single `const timestamp = Date.now()` and reusing it for both the `ctx.db.patch` and `afterState` will keep the audit log aligned with the stored state and simplify later investigation.
</issue_to_address>
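The single-timestamp pattern suggested in Comment 1 can be sketched like this; the patch and snapshot shapes are illustrative, not the actual effect code:

```typescript
// Capture Date.now() once and reuse it for both the persisted patch and the
// audit snapshot so the two can never drift apart.
function buildFailurePatchAndSnapshot(transfer: { _id: string; status: string }) {
  const failedAt = Date.now(); // single timestamp for both writes
  const patch = { status: "failed", failedAt };
  const afterState = { ...transfer, status: "failed", failedAt };
  return { patch, afterState };
}
```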
### Comment 2
<location path="convex/auditEvidence/services.ts" line_range="195-197" />
<code_context>
+ const scope = normalizeScope(
+ (args.scope as Parameters<typeof normalizeScope>[0] | undefined) ?? null
+ );
+ const [journalEvents, cashEntries] = await Promise.all([
+ ctx.db.query("auditJournal").collect(),
+ ctx.db.query("cash_ledger_journal_entries").collect(),
+ ]);
+
</code_context>
<issue_to_address>
**suggestion (performance):** Avoid full-table scans when collecting audit evidence; push scope/asOf filters into the queries.
`auditJournal` and `cash_ledger_journal_entries` are both fully loaded and only then filtered by `asOf` and `scope`, which will be expensive and may hit memory/query limits as they grow. Since you already have useful indexes (`by_entity`, `by_mortgage`, `by_obligation`, timestamps/sequence numbers, etc.), please push these filters into the DB queries instead—for example, constrain by `timestamp <= asOf` (or `sequenceNumber`) and, when `scope` is present, add the corresponding indexed predicates (entity, mortgage, obligation, lender). This will keep evidence generation scalable for large, long‑lived systems.
Suggested implementation:
```typescript
const scope = normalizeScope(
(args.scope as Parameters<typeof normalizeScope>[0] | undefined) ?? null
);
const { asOf } = args;
// Build scoped auditJournal query
let auditJournalQuery = ctx.db.query("auditJournal");
if (scope) {
if (scope.entityId) {
// Prefer most-specific index available for entity scoping
auditJournalQuery = auditJournalQuery.withIndex("by_entity", (q) =>
q.eq("entityId", scope.entityId)
);
} else if (scope.mortgageId) {
auditJournalQuery = auditJournalQuery.withIndex("by_mortgage", (q) =>
q.eq("mortgageId", scope.mortgageId)
);
} else if (scope.obligationId) {
auditJournalQuery = auditJournalQuery.withIndex("by_obligation", (q) =>
q.eq("obligationId", scope.obligationId)
);
} else if (scope.lenderId) {
auditJournalQuery = auditJournalQuery.withIndex("by_lender", (q) =>
q.eq("lenderId", scope.lenderId)
);
} else if (scope.transferRequestId) {
auditJournalQuery = auditJournalQuery.withIndex("by_transfer_request", (q) =>
q.eq("transferRequestId", scope.transferRequestId)
);
}
}
// Constrain auditJournal to events at-or-before asOf
auditJournalQuery = auditJournalQuery.filter((q) =>
// Use the appropriate temporal/ordering field here: "timestamp" or "sequenceNumber"
q.lte(q.field("timestamp"), asOf)
);
// Build scoped cash_ledger_journal_entries query
let cashLedgerQuery = ctx.db.query("cash_ledger_journal_entries");
if (scope) {
if (scope.entityId) {
cashLedgerQuery = cashLedgerQuery.withIndex("by_entity", (q) =>
q.eq("entityId", scope.entityId)
);
} else if (scope.mortgageId) {
cashLedgerQuery = cashLedgerQuery.withIndex("by_mortgage", (q) =>
q.eq("mortgageId", scope.mortgageId)
);
} else if (scope.obligationId) {
cashLedgerQuery = cashLedgerQuery.withIndex("by_obligation", (q) =>
q.eq("obligationId", scope.obligationId)
);
} else if (scope.lenderId) {
cashLedgerQuery = cashLedgerQuery.withIndex("by_lender", (q) =>
q.eq("lenderId", scope.lenderId)
);
} else if (scope.transferRequestId) {
cashLedgerQuery = cashLedgerQuery.withIndex("by_transfer_request", (q) =>
q.eq("transferRequestId", scope.transferRequestId)
);
}
}
// Constrain cash ledger entries to entries at-or-before asOf
cashLedgerQuery = cashLedgerQuery.filter((q) =>
// Use the appropriate temporal/ordering field here: "timestamp" or "sequenceNumber"
q.lte(q.field("timestamp"), asOf)
);
const [journalEvents, cashEntries] = await Promise.all([
auditJournalQuery.collect(),
cashLedgerQuery.collect(),
]);
```
The above changes assume:
- The `auditJournal` and `cash_ledger_journal_entries` tables each have:
- A temporal/ordering field named `timestamp` (or alternatively `sequenceNumber`).
- Indexes named `by_entity`, `by_mortgage`, `by_obligation`, `by_lender`, and `by_transfer_request` with the corresponding fields.
To finalize this change, you should:
1. Adjust the `q.field("timestamp")` references to the actual temporal/ordering field used in each table (e.g. `sequenceNumber` if that is the canonical as-of key).
2. Align the `.withIndex(...)` index names and the field names (`entityId`, `mortgageId`, etc.) with the actual schema/index definitions.
3. If the `scope` normalization guarantees a particular precedence or supports additional scope fields, mirror that precedence in the index-selection logic so that the most selective/indexed predicate is used.
</issue_to_address>
Pull request overview
This PR introduces an “audit-ready” execution spine for collections/transfers/payouts by (1) enriching the Layer-1 audit journal with canonical, reconstructable event envelopes + sequencing, (2) adding durable audit-trail emission/retention + evidence sinks, and (3) moving dispersal/payout cash posting to be transfer-owned for end-to-end traceability.
Changes:
- Add audit journal sequencing + richer event envelopes (before/after/delta/linkage) and persist more domain-write events.
- Add audit evidence package generation/verification/reconstruction flows plus durable audit-trail outbox emission + retention archival.
- Refactor payout/disbursement flows to create/confirm canonical transfer requests and link resulting cash-ledger entries back to transfers/dispersal entries.
Reviewed changes
Copilot reviewed 39 out of 40 changed files in this pull request and generated 6 comments.
Show a summary per file
| File | Description |
|---|---|
| convex/test/moduleMaps.ts | Registers new Convex modules for test/runtime imports. |
| convex/schema.ts | Extends schema for audit journal sequencing, evidence packages, dispersal calculation runs, transfer/manual settlement evidence. |
| convex/payments/transfers/validators.ts | Adds manual settlement evidence validators. |
| convex/payments/transfers/providers/manual.ts | Carries manual settlement evidence into provider initiation/confirmation results. |
| convex/payments/transfers/mutations.ts | Adds transfer creation audit entries; adds manual confirmation flows (public + internal); propagates manual settlement fields. |
| convex/payments/transfers/interface.ts | Defines ManualSettlementDetails and extends provider initiation result contract. |
| convex/payments/transfers/collectionAttemptReconciliation.ts | Returns executeTransition success result instead of always true. |
| convex/payments/transfers/tests/inboundFlow.integration.test.ts | Updates bridge/settlement expectations to transfer-owned cash posting. |
| convex/payments/transfers/tests/handlers.integration.test.ts | Adds integration coverage for persisted manual settlement evidence. |
| convex/payments/payout/transferOwnedFlow.ts | Introduces canonical “transfer-owned” payout execution helper. |
| convex/payments/payout/queries.ts | Excludes dispersal entries already linked to a transfer. |
| convex/payments/payout/batchPayout.ts | Refactors scheduled payouts to execute per-entry transfer-owned payouts. |
| convex/payments/payout/adminPayout.ts | Refactors admin payouts to execute per-entry transfer-owned payouts and report per-entry failures. |
| convex/payments/payout/tests/batchPayout.test.ts | Updates E2E assertions for per-entry transfer-owned payouts and links. |
| convex/payments/payout/tests/adminPayout.test.ts | Updates admin payout integration coverage for transfer-owned payouts and evidence. |
| convex/payments/obligations/createCorrectiveObligation.ts | Adds richer audit journal entry data for corrective obligations. |
| convex/payments/collectionPlan/execution.ts | Adds audit journal entry for collection attempt creation. |
| convex/payments/collectionPlan/tests/runner.test.ts | Adds replay assertions to ensure reprocessing is no-op. |
| convex/payments/cashLedger/transferReconciliationCron.ts | Changes “healing” behavior to escalation/audit defect surfacing for confirmed-without-ledger. |
| convex/payments/cashLedger/hashChain.ts | Adds canonical envelope to cash-ledger audit trail hashing payloads. |
| convex/obligations/mutations.ts | Adds richer audit journal entry data for obligation creation. |
| convex/engine/validators.ts | Extends entityType validator for new audited entities. |
| convex/engine/types.ts | Extends EntityType + AuditJournalEntry shape and maps new entities to tables. |
| convex/engine/transition.ts | Adds before/after snapshots to journal entries; runs certain transfer effects inline instead of scheduling. |
| convex/engine/machines/collectionAttempt.machine.ts | Adds same-state settlement observation effect and DRAW_FAILED transition. |
| convex/engine/hashChain.ts | Adds canonical envelope to hash-chain inserts for audit journal entries. |
| convex/engine/effects/transfer.ts | Makes transfers authoritative for cash posting; records ledger links + richer transfer mutation audit entries. |
| convex/engine/effects/registry.ts | Registers the new recordSettlementObserved effect. |
| convex/engine/effects/collectionAttempt.ts | Adds recordSettlementObserved no-op effect for traceability. |
| convex/engine/auditJournal.ts | Adds normalization defaults + monotonic sequence numbers for auditJournal entries. |
| convex/dispersal/disbursementBridge.ts | Links dispersal entries to transfers; reuses active transfers; creates canonical transfers via createTransferRequestRecord. |
| convex/dispersal/createDispersalEntries.ts | Adds dispersal calculation runs + audit journal events for calculation, dispersal entries, and servicing fee entries. |
| convex/components/auditTrail/sink.ts | Adds configurable durable evidence sink abstraction and default component-table sink. |
| convex/components/auditTrail/schema.ts | Adds canonical envelope, sink refs, and retention fields/tables to the auditTrail component. |
| convex/components/auditTrail/lib.ts | Incorporates canonical envelopes into hashing and emission payloads; adds retention timestamps. |
| convex/components/auditTrail/internal.ts | Implements durable outbox emission via sink and retention archival by retention index. |
| convex/components/auditTrail/crons.ts | Updates retention cron semantics/description for archival. |
| convex/auditTrailClient.ts | Extends client API to accept canonicalEnvelope. |
| convex/auditEvidence/services.ts | Implements audit evidence data collection, package generation, verification, and reconstruction mutations/queries. |
| convex/_generated/api.d.ts | Adds generated API typing for new payout module. |
Actionable comments posted: 16
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
convex/engine/machines/collectionAttempt.machine.ts (1)
64-71: ⚠️ Potential issue | 🔴 Critical

Remove the duplicated `DRAW_FAILED` transition.

Lines 68–71 redefine the same key already declared on lines 64–67. In the `initiated` state's `on` object, duplicate keys cause the second to silently overwrite the first, resulting in incomplete or incorrect transition logic.

✂️ Minimal fix

  DRAW_FAILED: {
    target: "failed",
    actions: ["incrementRetryCount"],
  },
- DRAW_FAILED: {
-   target: "failed",
-   actions: ["incrementRetryCount"],
- },

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/engine/machines/collectionAttempt.machine.ts` around lines 64 - 71, The initiated state's on object contains a duplicated DRAW_FAILED transition; remove the duplicate entry so there is a single DRAW_FAILED mapping (target: "failed", actions: ["incrementRetryCount"]) in the collectionAttempt.machine definition to avoid silent overwrite and ensure the retry logic in the initiated state is applied exactly once.convex/engine/transition.ts (1)
335-344: ⚠️ Potential issue | 🟠 Major

Don't journal an `afterState` that was never persisted.

The same-state-with-effects branch does not patch the entity, but `afterState` now injects `nextSnapshot.context`/`status`. If an `assign` changes context without a state change, the audit journal will claim a document state that never existed in storage. Either persist those fields here or keep `afterState` equal to `currentEntitySnapshot`. Based on learnings: "Implement auditability from the ground floor - every state transition must be journaled, hash-chained, and queryable for 5-year regulatory retention".
Verify each finding against the current code and only fix it if needed. In `@convex/engine/transition.ts` around lines 335 - 344, The audit journal is recording an afterState that includes nextSnapshot.context and status even when the same-state-with-effects path did not persist those changes; update the logic around appendAuditJournalEntry so afterState accurately reflects what was written to storage: either persist the context/status before journaling (ensure the patch that applies assign effects is saved) or set afterState to currentEntitySnapshot when no DB write occurred. Locate the call to appendAuditJournalEntry in transition.ts and use currentEntitySnapshot for afterState in the same-state-with-effects branch, or defer journaling until after persisting nextSnapshot (referencing currentEntitySnapshot, nextSnapshot and appendAuditJournalEntry to find the right branch).convex/dispersal/disbursementBridge.ts (1)
343-347: ⚠️ Potential issue | 🟠 Major

Version retry keys for `reversed` transfers too.

`isReusableTerminalTransferStatus()` now allows `"reversed"`, but this branch still only suffixes retries for `"failed"`/`"cancelled"`. A reversed transfer will therefore try to recreate with the original idempotency key and collide with the old row, which makes that retry path fail.

Suggested fix:

```diff
 const effectiveIdempotencyKey =
   existing &&
-  (existing.status === "failed" || existing.status === "cancelled")
+  isReusableTerminalTransferStatus(existing.status)
     ? `${idempotencyKey}:retry:${Date.now()}`
     : idempotencyKey;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/dispersal/disbursementBridge.ts` around lines 343 - 347, The retry idempotency-key branch for effectiveIdempotencyKey only checks for existing.status === "failed" || "cancelled" and therefore doesn't handle "reversed" transfers; update the condition so reversed retries also get a unique suffix (either include `"reversed"` in the OR list or, better, call the canonical helper like isReusableTerminalTransferStatus(existing.status) to decide when to append `:retry:${Date.now()}`) so that effectiveIdempotencyKey is unique for reversed transfers and avoids colliding with the original row.

convex/payments/transfers/mutations.ts (1)
578-585: ⚠️ Potential issue | 🟠 Major

Immediate-confirm transfers still miss provider-ref journaling.

`persistProviderRef` is only used on the synchronous-confirmation path where `recordTransferProviderRef` never fires. This patch writes `providerRef` with no `auditJournal` entry, so those transfers are missing the provider-reference mutation from the evidence package even though the async path now emits `PROVIDER_REF_RECORDED`.

As per coding guidelines, "Implement auditability from the ground floor - every state transition must be journaled, hash-chained, and queryable for 5-year regulatory retention."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/transfers/mutations.ts` around lines 578 - 585, persistProviderRef currently updates providerRef via ctx.db.patch without creating an auditJournal entry, so immediate-confirm transfers miss the PROVIDER_REF_RECORDED journal record; update the persistProviderRef handler to perform the same audit journaling as the async path by either invoking the existing recordTransferProviderRef routine (or the underlying audit-append helper it uses) after patching, ensuring an auditJournal entry is appended (hash-chained, including providerRef and actor/context) and the PROVIDER_REF_RECORDED event is emitted so the provider reference mutation is queryable and retained.

convex/engine/effects/transfer.ts (1)
756-847: ⚠️ Potential issue | 🟠 Major

This reversal path can silently lose the reversal marker.

`reversedAt`/`REVERSAL_RECORDED` are written first, but any later throw on Line 791 or Line 843 aborts the whole mutation. That means a provider-confirmed reversal can disappear from both `transferRequests` and the audit journal if the linked attempt/journal-entry invariant is broken. Persist the reversal state first and surface downstream repair work without rethrowing from this handler.

Based on learnings, all operations inside a single `internalMutation` commit atomically, so a later throw rolls back earlier `postEntry` and `ctx.db.patch` calls.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/engine/effects/transfer.ts` around lines 756 - 847, The handler currently writes reversedAt and a REVERSAL_RECORDED audit entry inside the same atomic mutation as follow-up work (ctx.db.patch and appendTransferMutationAuditEntry are followed by runPaymentReversalCascadeForPlanEntry/postTransferReversal), so any later throw rolls back those writes; fix by persisting the reversal marker and audit entry in their own committed mutation (or an immediate non-throwing internalMutation) first (use ctx.db.patch and appendTransferMutationAuditEntry for transfer._id), then perform the attempt-linked or cash-reversal work (runPaymentReversalCascadeForPlanEntry, postTransferReversal, reconcileAttemptLinkedInboundReversal) inside a try/catch that logs errors, enqueues a repair/healing task, and does NOT rethrow so the reversal state remains durable and downstream fixes can be applied.

convex/schema.ts (1)
1444-1504: ⚠️ Potential issue | 🟠 Major

Verify there is a backfill for preexisting `auditJournal` rows.

`convex/auditEvidence/services.ts` now assumes every journal row has `sequenceNumber`, `eventId`, `eventCategory`, `effectiveDate`, and `originSystem`. If older rows exist without those fields, package generation and verification will mis-sort or fail as soon as they touch legacy history. I don't see the backfill in the provided context, so please confirm it ships with this schema change.

Run this read-only check to look for the migration/backfill and current writers:
```bash
#!/bin/bash
set -euo pipefail

echo "=== auditJournal schema/writer usage ==="
rg -n --type=ts -C2 'appendAuditJournalEntry\(|insert\("auditJournal"|query\("auditJournal"|sequenceNumber|eventId|effectiveDate|eventCategory|originSystem' convex

echo
echo "=== potential migrations/backfills ==="
rg -n --type=ts -C3 'backfill|migrat|auditJournalSequenceCounters|sequenceNumber|eventId|effectiveDate|eventCategory|originSystem' convex
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/schema.ts` around lines 1444 - 1504, The schema change requires a migration/backfill so all existing auditJournal rows have sequenceNumber, eventId, eventCategory, effectiveDate, and originSystem; add a migration that scans auditJournal, computes/fills missing fields (using existing timestamp/actor/context or stable defaults), writes updates and initializes auditJournalSequenceCounters.nextSequenceNumber to avoid collisions, and ensure writers (appendAuditJournalEntry / insert("auditJournal") in convex/auditEvidence/services.ts and any other writers found by searching for insert/query/sequenceNumber/eventId/effectiveDate/eventCategory/originSystem) now always set those fields; include the migration/backfill in the deployment path and a read-only validation check to confirm no rows remain missing these fields.
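If the backfill does turn out to be missing, a per-row fill along these lines could work as a starting point. This is a hedged sketch: the `LegacyAuditRow` shape and the defaults (`"LEGACY"`, `"backfill"`, deriving `effectiveDate` from `timestamp`) are assumptions for illustration, not the production schema or values.

```typescript
// Hypothetical per-row backfill for legacy auditJournal documents.
// Defaults below are assumptions, not the real production values.
type LegacyAuditRow = {
  timestamp: number;
  sequenceNumber?: number;
  eventId?: string;
  eventCategory?: string;
  effectiveDate?: number;
  originSystem?: string;
};

function backfillAuditRow(
  row: LegacyAuditRow,
  nextSequenceNumber: number,
): Required<LegacyAuditRow> {
  return {
    timestamp: row.timestamp,
    // Assign a fresh monotonic sequence number only when missing, so
    // already-migrated rows keep their position in the chain.
    sequenceNumber: row.sequenceNumber ?? nextSequenceNumber,
    eventId: row.eventId ?? `legacy:${row.timestamp}:${nextSequenceNumber}`,
    eventCategory: row.eventCategory ?? "LEGACY",
    effectiveDate: row.effectiveDate ?? row.timestamp,
    originSystem: row.originSystem ?? "backfill",
  };
}
```

The migration would also need to seed `auditJournalSequenceCounters.nextSequenceNumber` past the highest assigned value so new writers cannot collide with backfilled rows.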
🧹 Nitpick comments (1)
convex/payments/transfers/__tests__/inboundFlow.integration.test.ts (1)
592-603: Assert the single-apply monetary side effects here too.

This still passes if the second invocation doubles `amountSettled` or posts a second `CASH_RECEIVED` row while leaving the obligation in `settled`. Add an assertion on `amountSettled` and/or the transfer/cash-ledger row count so the idempotency regression is actually pinned down.
Verify each finding against the current code and only fix it if needed. In `@convex/payments/transfers/__tests__/inboundFlow.integration.test.ts` around lines 592 - 603, Add assertions that pin the monetary side-effects are applied exactly once: after fetching refreshedAttempt and refreshedObligation (using attemptId and obligationId), assert refreshedObligation.amountSettled equals the expected single-settlement value (not doubled) and query the cash/ledger entries (e.g., via the same db.query used for "transferRequests" but targeting the cash/ledger table or by type "CASH_RECEIVED") to assert there is exactly one ledger row for this settlement; ensure you reference transferRequests/bridgeTransfers and the obligation.amountSettled fields so the idempotency regression is covered.
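The exactly-once condition the nitpick asks for reduces to a small predicate. The obligation and ledger-row shapes below are illustrative stand-ins, not the real test fixtures:

```typescript
// Illustrative exactly-once check: the settled amount must equal one
// settlement's worth, and exactly one CASH_RECEIVED ledger row may exist.
type LedgerRow = { type: string };

function settlementAppliedOnce(
  obligation: { amountSettled: number },
  expectedSettledCents: number,
  ledgerRows: LedgerRow[],
): boolean {
  const cashReceived = ledgerRows.filter((r) => r.type === "CASH_RECEIVED");
  return (
    obligation.amountSettled === expectedSettledCents &&
    cashReceived.length === 1
  );
}
```

Asserting this after the replay invocation pins both the doubled-`amountSettled` and the duplicate-ledger-row regressions.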
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@convex/auditEvidence/services.ts`:
- Around line 195-198: The current code eagerly materializes entire tables via
ctx.db.query("auditJournal").collect() and
ctx.db.query("cash_ledger_journal_entries").collect(), causing O(total history)
work; change these to query only the indexed, scope- and asOf-bounded rows
before collecting. Specifically, replace the global collects for auditJournal
and cash_ledger_journal_entries with scoped/indexed queries that apply the scope
filter and asOf time window (e.g., filter/where on scope and timestamp or use a
range/index) and only then call collect() so journalEvents and cashEntries are
built from the narrowed result set.
- Around line 402-410: The listAuditPackages filtering only checks
scope.entityType and scope.entityId, so callers filtering by mortgageId,
lenderId, obligationId, or transferRequestId still receive unrelated packages;
update the packages.filter predicate in listAuditPackages to additionally check
each optional scope field (mortgageId, lenderId, obligationId,
transferRequestId) against the corresponding pkg.scope property when that field
is present on the incoming scope, returning false if any provided scope value
doesn't match; keep the existing entityType/entityId checks and ensure all
comparisons are simple equality checks on pkg.scope.<field>.
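The full scope match described in the second comment can be sketched as one predicate. The `Scope` shape mirrors the fields named above but is an assumption, not copied from `services.ts`:

```typescript
// Sketch of a listAuditPackages scope filter: every field the caller
// supplies must match; absent fields act as wildcards.
type Scope = {
  entityType?: string;
  entityId?: string;
  mortgageId?: string;
  lenderId?: string;
  obligationId?: string;
  transferRequestId?: string;
};

const SCOPE_FIELDS = [
  "entityType",
  "entityId",
  "mortgageId",
  "lenderId",
  "obligationId",
  "transferRequestId",
] as const;

function scopeMatches(filter: Scope, pkgScope: Scope): boolean {
  return SCOPE_FIELDS.every(
    (field) => filter[field] === undefined || filter[field] === pkgScope[field],
  );
}
```

Using `packages.filter((pkg) => scopeMatches(scope, pkg.scope))` would then cover all six fields with simple equality checks.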
In `@convex/components/auditTrail/internal.ts`:
- Around line 78-102: The retention query is returning already-archived rows and
filling the batch with them; change the audit_events query (the
ctx.db.query("audit_events").withIndex("by_retention", ...).take(100) call) to
only select unarchived rows by adding a filter for archivedAt == null (or the
Convex equivalent filter function) before take(100), and likewise when looking
up the related audit_outbox entry only consider outbox rows where archivedAt is
null so you don't repeatedly touch already-archived entries; alternatively
implement pagination to skip archived rows, but simplest fix: add a filter for
archivedAt being null on both queries so only unarchived events are processed.
- Around line 18-21: The catch path currently tries to patch the missing
audit_event row which will throw if ctx.db.get(entry.eventId) returned null;
update the error-handling logic in internal.ts so that when
ctx.db.get(entry.eventId) is null you do NOT call ctx.db.patch(entry.eventId,
...) — instead either mark the outbox entry failed directly (update the outbox
row) or skip the event patch entirely; specifically adjust the catch branches
around the ctx.db.get(entry.eventId) check and the subsequent ctx.db.patch calls
(also apply the same change at the other occurrence around lines 61-63) so
missing audit_events do not cause a second patch that can roll back the outbox
failure update.
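The retention filter asked for in the first comment can be sketched with a plain array as a stand-in for the Convex query; the row shape and the `limit` default are assumptions:

```typescript
// Plain-array stand-in for the audit_events retention query: only rows
// that are due AND not yet archived should fill the batch.
type AuditEventRow = { id: string; retentionUntilAt: number; archivedAt?: number };

function nextRetentionBatch(
  rows: AuditEventRow[],
  now: number,
  limit = 100,
): AuditEventRow[] {
  return rows
    .filter((r) => r.archivedAt === undefined && r.retentionUntilAt <= now)
    .slice(0, limit);
}
```

In the real code the `archivedAt` check would go into the query's filter before `take(100)`, so archived rows never occupy batch slots.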
In `@convex/components/auditTrail/lib.ts`:
- Around line 344-347: The loop in emitPending currently does
ctx.db.get(entry.eventId) and simply continues when the event is missing,
leaving orphaned outbox rows forever; change that logic so when
ctx.db.get(entry.eventId) returns null you mark the outbox entry (use entry.id)
as terminal/failed rather than skipping it (e.g., update the outbox row's status
field and optionally record a reason or log), so subsequent emitPending runs
won't repeatedly reprocess the orphaned row; update the branch around
ctx.db.get(entry.eventId) in emitPending to perform the status update via the
same DB helper used elsewhere for outbox updates and include a short log message
referencing entry.id and entry.eventId.
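The terminal-marking behavior described above can be sketched as follows; the `OutboxRow` shape, status values, and helper names are assumptions, not the real `emitPending` API:

```typescript
// Sketch: when the referenced audit event is gone, mark the outbox row
// failed (terminal) instead of skipping it, so later emitPending runs
// do not reprocess the orphan forever.
type OutboxRow = { id: string; eventId: string; status: "pending" | "emitted" | "failed" };

function resolveOrphan(
  entry: OutboxRow,
  getEvent: (eventId: string) => unknown,
  markFailed: (id: string, reason: string) => void,
): boolean {
  if (getEvent(entry.eventId) !== null) return false; // event exists; not an orphan
  markFailed(entry.id, `audit event ${entry.eventId} missing`);
  return true;
}
```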
In `@convex/components/auditTrail/schema.ts`:
- Around line 51-60: The audit_evidence_objects table is missing retention
metadata; add retentionUntilAt and archivedAt fields to its defineTable schema
(e.g., add retentionUntilAt: v.number(), archivedAt: v.number()) so it mirrors
audit_events/audit_outbox retention semantics, and update any relevant indexes
if needed (e.g., add an index like "by_retention" on ["retentionUntilAt"] or
"by_archived" on ["archivedAt"]) and ensure any code that inserts durable
evidence (the logic that writes to audit_evidence_objects) is updated to
populate these fields consistently.
In `@convex/components/auditTrail/sink.ts`:
- Around line 28-37: The current idempotency check in the audit_evidence_objects
lookup (withIndex("by_idempotency_key") using args.idempotencyKey) returns
existing.sinkReference unconditionally, which can silently collapse different
events; change the logic to compare the stored record's canonical fields (e.g.,
existing.eventId and the stored evidence fingerprint/digest or payload) against
the incoming values from args (eventId and computed evidence digest/payload) and
only return { sinkReference: existing.sinkReference } when they match exactly;
if they differ, throw an error or return a failure (fail closed) so callers
cannot reuse an idempotencyKey for a different event. Ensure you compute
whatever canonical digest the system uses for evidence and reference the same
fields as stored in audit_evidence_objects when performing the equality check.
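The fail-closed comparison can be sketched like this; the `digest` field is an assumption about how the evidence payload is canonicalized:

```typescript
// Fail-closed idempotency sketch: reuse the sink reference only when the
// stored record matches the incoming event exactly.
type EvidenceObject = { eventId: string; digest: string; sinkReference: string };

function resolveIdempotentSinkWrite(
  existing: EvidenceObject | null,
  incoming: { eventId: string; digest: string },
): { sinkReference: string } | null {
  if (existing === null) return null; // first write; caller proceeds
  if (existing.eventId === incoming.eventId && existing.digest === incoming.digest) {
    return { sinkReference: existing.sinkReference };
  }
  // Same idempotency key, different event: never collapse silently.
  throw new Error("idempotency key reused for a different audit event");
}
```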
In `@convex/dispersal/createDispersalEntries.ts`:
- Around line 471-476: When reusing an existingCalculationRun (queried from
dispersalCalculationRuns by idempotencyKey) validate that its mortgageId and
obligationId match args.mortgageId and args.obligationId before linking new
dispersalEntries/servicingFeeEntries; if they differ, throw an explicit error
(e.g., IdempotencyKeyConflict) instead of reusing the row. Add the same
validation wherever existingCalculationRun is fetched in this file (the other
occurrence around the 514-573 region) so a cross-request idempotency-key
collision cannot attach entries to a calculation run for a different mortgage or
obligation.
In `@convex/engine/effects/transfer.ts`:
- Around line 246-287: The current flow posts the authoritative cash entries via
postCashReceiptForTransfer / postLenderPayoutForTransfer and records the link
with recordCashJournalLink, but still performs
reconcileAttemptLinkedInboundSettlement inside the same internalMutation and
throws on failure—causing Convex to roll back the cash post and link if
reconciliation fails. Fix by removing the throw path: either (A) move the call
to reconcileAttemptLinkedInboundSettlement out of this internalMutation into a
follow-up/background step (e.g., schedule a separate job) so the cash post and
recordCashJournalLink remain committed, or (B) if you must keep it here, replace
the Error throw with an integrity-defect audit/log (do not throw) so failures
are recorded but do not roll back
postCashReceiptForTransfer/postLenderPayoutForTransfer or recordCashJournalLink;
update publishTransferConfirmed logic accordingly to ensure ledger linkage is
durable.
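Option (B) above can be sketched as a guard around the reconciliation call. The real calls are async Convex helpers; this sketch is synchronous for brevity, and `recordIntegrityDefect` is a hypothetical stand-in for whatever defect log the system uses:

```typescript
// Record an integrity defect instead of throwing, so the committed cash
// post and journal link survive a reconciliation failure.
function reconcileWithoutRollback(
  reconcile: () => void,
  recordIntegrityDefect: (reason: string) => void,
): boolean {
  try {
    reconcile();
    return true;
  } catch (err) {
    // Deliberately no rethrow: a throw here would roll back the whole
    // atomic mutation, including the authoritative cash entries.
    recordIntegrityDefect(err instanceof Error ? err.message : String(err));
    return false;
  }
}
```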
In `@convex/engine/hashChain.ts`:
- Around line 69-73: The serializer currently replaces undefined snapshot states
with {"status":"none"} by JSON.stringify(entry.beforeState ?? { status:
entry.previousState }) and similarly for afterState, causing hash-chain mismatch
with normalizeAuditJournalEntry() which expects snapshots to be left undefined
for "none"; update the serialization to only JSON.stringify when a real snapshot
exists or the status is not "none" — e.g. for beforeState, if entry.beforeState
is defined use JSON.stringify(entry.beforeState), else if entry.previousState
!== "none" use JSON.stringify({ status: entry.previousState }), otherwise set
beforeState to undefined (same change for afterState with entry.afterState and
entry.newState); keep canonicalEnvelope JSON.stringify as-is.
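The conditional serialization described above can be sketched as one helper applied to both `beforeState` and `afterState`:

```typescript
// Only emit a snapshot string when one actually exists, matching the
// normalizeAuditJournalEntry() convention that "none" means no snapshot.
function serializeSnapshot(
  snapshot: Record<string, unknown> | undefined,
  fallbackStatus: string,
): string | undefined {
  if (snapshot !== undefined) return JSON.stringify(snapshot);
  if (fallbackStatus !== "none") return JSON.stringify({ status: fallbackStatus });
  return undefined; // "none" stays unserialized so recomputed hashes line up
}
```

`serializeSnapshot(entry.beforeState, entry.previousState)` and `serializeSnapshot(entry.afterState, entry.newState)` would then feed the canonical envelope unchanged.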
In `@convex/payments/collectionPlan/__tests__/runner.test.ts`:
- Around line 296-313: Remove the duplicated replay block: delete the second
invocation and assertions that redeclare replay and the follow-up checks (the
block that calls internal.payments.collectionPlan.runner.processDuePlanEntries
and redefines replay, replayAttempts, and replayTransfers). Keep only the first
valid replay flow and its assertions; ensure references to
getAttemptsForPlanEntry(planEntryId) and getTransfersForAttempt(attempt._id)
appear only once so the test file parses and the transfer-count assertion is not
duplicated.
In `@convex/payments/collectionPlan/execution.ts`:
- Around line 355-403: The audit journal is recording a pre-patch placeholder
(machineContext.attemptId: "" and missing transferRequestId) because
appendAuditJournalEntry is called before the atomic patch that finalizes the
collectionAttempt; move the appendAuditJournalEntry call to after the mutation
that patches the collectionAttempt (or alternatively construct afterState from
the final patched values returned by that patch) so the journaled afterState
reflects the committed row; update references to machineContext.attemptId and
transferRequestId in afterState accordingly and ensure
idempotencyKey/linkedRecordIds still match the committed attempt.
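Building `afterState` from the committed values can be sketched as a guard; the patch shape here is illustrative, not the real `collectionAttempt` row:

```typescript
// Construct the journaled afterState from the committed patch rather
// than the pre-patch machine context, so placeholder values like
// attemptId: "" never reach the audit journal.
type CommittedAttempt = { attemptId: string; transferRequestId?: string; status: string };

function journaledAfterState(committed: CommittedAttempt): {
  status: string;
  context: { attemptId: string; transferRequestId?: string };
} {
  if (committed.attemptId === "") {
    throw new Error("journaling before the attempt id was committed");
  }
  return {
    status: committed.status,
    context: {
      attemptId: committed.attemptId,
      transferRequestId: committed.transferRequestId,
    },
  };
}
```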
In `@convex/payments/payout/adminPayout.ts`:
- Around line 102-121: The loop processes each dispersal entry using
executeTransferOwnedPayout and collects failures; do not advance the lender
payout cadence (i.e. do not call updateLenderPayoutDateRef / update
lastPayoutDate) if any per-entry failures occurred. After the for-loop that
updates payoutCount, totalAmountCents and pushes into failures, guard the
existing call to updateLenderPayoutDateRef (or any code that sets
lastPayoutDate) with a check like if (failures.length === 0) { /* existing
updateLenderPayoutDateRef call */ } so the lastPayoutDate is only advanced when
the run completed with zero failures (you can also ensure you only advance when
payoutCount > 0 to avoid no-op advances).
In `@convex/payments/payout/batchPayout.ts`:
- Around line 157-164: The handler currently updates a lender's lastPayoutDate
even if some entries failed; change the logic in the block that handles the
return value from processMortgageGroup(...) so lastPayoutDate is only updated
when there are no lender-level failures: check result.failures (or
result.failures.length) and only perform the lastPayoutDate update when it is
empty (you can still increment lenderPayoutCount, totalAmountCents and push
result.failures regardless), ensuring partial successes do not advance
lastPayoutDate and leave pending dispersals stranded.
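The cadence guard asked for in both payout comments reduces to one predicate; the result shape matches the fields named above but is an assumption:

```typescript
// Advance lastPayoutDate only on a clean, non-empty run: any per-entry
// failure (or a no-op run) must leave the cadence untouched so pending
// dispersals are retried next cycle.
function shouldAdvanceLastPayoutDate(result: {
  payoutCount: number;
  failures: unknown[];
}): boolean {
  return result.failures.length === 0 && result.payoutCount > 0;
}
```

Wrapping the `updateLenderPayoutDateRef` call in `if (shouldAdvanceLastPayoutDate(result))` would apply the same rule to both `adminPayout.ts` and `batchPayout.ts`.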
In `@convex/payments/transfers/__tests__/handlers.integration.test.ts`:
- Around line 659-672: The test is forcing types to never when calling the
public mutation; remove the unnecessary "as never" casts so the call uses the
real mutation signature: update the auth.mutation invocation that passes
api.payments.transfers.mutations.confirmManualTransfer and the second argument
object (the manualSettlement payload) to drop both "as never" casts, letting
TypeScript validate against confirmManualTransfer's declared input (which
accepts manualSettlement with optional location and evidenceAttachmentIds as per
manualSettlementValidator).
In `@convex/payments/transfers/providers/manual.ts`:
- Around line 30-45: The current code sets settledAt from
request.manualSettlement regardless of status, which lets pending outbound
payouts carry a settlement timestamp and leaves inbound confirms without a
fallback; fix by computing status first (status = request.direction ===
"outbound" ? "pending" : "confirmed") and only set settledAt when status ===
"confirmed" — use request.manualSettlement?.settlementOccurredAt ?? Date.now()
for the confirmed/inbound path, and ensure settledAt is undefined for the
outbound/pending path; leave providerRef and providerData construction
unchanged.
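The status-first ordering for the manual provider can be sketched directly from the comment; the request shape is trimmed to the fields mentioned above:

```typescript
// Compute status first, then derive settledAt only for confirmed
// (inbound) settlements; pending outbound payouts carry no timestamp.
type ManualTransferRequest = {
  direction: "inbound" | "outbound";
  manualSettlement?: { settlementOccurredAt?: number };
};

function resolveManualStatus(
  request: ManualTransferRequest,
  now: () => number = Date.now,
): { status: "pending" | "confirmed"; settledAt?: number } {
  const status = request.direction === "outbound" ? "pending" : "confirmed";
  const settledAt =
    status === "confirmed"
      ? request.manualSettlement?.settlementOccurredAt ?? now()
      : undefined;
  return { status, settledAt };
}
```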
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 93626a62-2b27-4f5e-8c2a-40076d92ea67
⛔ Files ignored due to path filters (1)
`convex/_generated/api.d.ts` is excluded by `!**/_generated/**`
📒 Files selected for processing (39)
convex/auditEvidence/services.ts
convex/auditTrailClient.ts
convex/components/auditTrail/crons.ts
convex/components/auditTrail/internal.ts
convex/components/auditTrail/lib.ts
convex/components/auditTrail/schema.ts
convex/components/auditTrail/sink.ts
convex/dispersal/createDispersalEntries.ts
convex/dispersal/disbursementBridge.ts
convex/engine/auditJournal.ts
convex/engine/effects/collectionAttempt.ts
convex/engine/effects/registry.ts
convex/engine/effects/transfer.ts
convex/engine/hashChain.ts
convex/engine/machines/collectionAttempt.machine.ts
convex/engine/transition.ts
convex/engine/types.ts
convex/engine/validators.ts
convex/obligations/mutations.ts
convex/payments/cashLedger/hashChain.ts
convex/payments/cashLedger/transferReconciliationCron.ts
convex/payments/collectionPlan/__tests__/runner.test.ts
convex/payments/collectionPlan/execution.ts
convex/payments/obligations/createCorrectiveObligation.ts
convex/payments/payout/__tests__/adminPayout.test.ts
convex/payments/payout/__tests__/batchPayout.test.ts
convex/payments/payout/adminPayout.ts
convex/payments/payout/batchPayout.ts
convex/payments/payout/queries.ts
convex/payments/payout/transferOwnedFlow.ts
convex/payments/transfers/__tests__/handlers.integration.test.ts
convex/payments/transfers/__tests__/inboundFlow.integration.test.ts
convex/payments/transfers/collectionAttemptReconciliation.ts
convex/payments/transfers/interface.ts
convex/payments/transfers/mutations.ts
convex/payments/transfers/providers/manual.ts
convex/payments/transfers/validators.ts
convex/schema.ts
convex/test/moduleMaps.ts
Merge activity
4390711 to 83e739a (compare)
c983751 to a4498bf (compare)
- Add audit evidence package generation, verification, and reconstruction flows
- Persist audit trail emissions with canonical envelopes, retention, and sink refs
- Record dispersal and payout flows in the audit journal for audit readiness
83e739a to 4390711 (compare)
a4498bf to 482d3ab (compare)

collection attempt execution spine
sorry
Add audit evidence packaging and durable sink
Summary by Sourcery
Introduce a canonical, audit-ready evidence spine across transfers, collections, cash ledger, dispersals, and payouts, and route lender payouts through transfer-owned settlement while enriching audit journaling and durability.
New Features:
Bug Fixes:
Enhancements:
Tests:
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes & Improvements