Conversation
Sorry @Connorbelez, your pull request is larger than the review limit of 150000 diff characters
📝 Walkthrough

Adds provider-managed recurring collection schedules (pad_rotessa): DB schema, types/validators, Rotessa API client and adapters, activation/poller/ingestion flows with leasing/idempotency, webhook routing into shared ingestion, cron registration, demo/UI, and extensive tests and helpers.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as App/Client
    participant Activation as Activation Action
    participant DB as Database
    participant Rotessa as Rotessa API
    participant Plan as PlanEntries
    App->>Activation: activateRecurringSchedule(bankAccount, mortgage, planEntryIds)
    Activation->>DB: loadActivationSnapshot(...)
    DB-->>Activation: snapshot + eligible plan entries
    Activation->>DB: beginRecurringScheduleActivation(idempotencyKey,...)
    DB-->>Activation: scheduleId (activating)
    Activation->>Rotessa: createSchedule(cadence, startDate, endDate)
    Rotessa-->>Activation: { externalScheduleRef, providerData }
    Activation->>DB: commitRecurringScheduleActivation(scheduleId, externalScheduleRef)
    DB-->>Plan: patch covered entries → provider_scheduled + executionMode/provider linkage
    Activation-->>App: { scheduleId, externalScheduleRef }
```
```mermaid
sequenceDiagram
    participant Poller as Polling Spine
    participant DB as Database
    participant Rotessa as Rotessa API
    participant Ingest as Occurrence Ingestion
    participant Transfer as Transfer Processing
    Poller->>DB: listSchedulesEligibleForPolling(asOf, limit)
    DB-->>Poller: candidate schedules
    loop each schedule
        Poller->>DB: claimExternalCollectionScheduleSync(scheduleId, leaseOwner)
        DB-->>Poller: { claimed: true } or reason
        Poller->>Rotessa: pollOccurrenceUpdates(externalScheduleRef, cursor range)
        Rotessa-->>Poller: normalized events
        loop each event
            Poller->>Ingest: ingestExternalOccurrenceEvent(event, receivedVia: poller)
            Ingest->>DB: find/create collectionAttempt and transferRequest
            Ingest->>Transfer: executeTransition(mappedTransferEvent)
            Transfer->>DB: update transfer/attempt, settle obligation
            DB-->>Ingest: success outcome
        end
        Poller->>DB: recordExternalCollectionScheduleSyncSuccess(scheduleId, nextCursor, nextPollAt)
    end
    Poller-->>Poller: return aggregate counts
```
```mermaid
sequenceDiagram
    participant Webhook as Rotessa Webhook
    participant Handler as Webhook Handler (Action)
    participant API as RotessaApiClient
    participant Ingest as Occurrence Ingestion
    participant Transfer as Transfer Processing
    participant DB as Database
    Webhook->>Handler: processRotessaPadWebhook(payload)
    Handler->>API: findTransactionReportRow(transactionId) [if needed]
    API-->>Handler: transaction row or null
    Handler->>Ingest: ingestExternalOccurrenceEvent(normalized event, receivedVia: webhook)
    Ingest->>DB: materialize/resolve attempt & transfer
    Ingest->>Transfer: executeTransition(mappedTransferEvent)
    Transfer->>DB: update records, audit
    DB-->>Handler: ingestion outcome (applied|already_applied|unresolved)
    Handler->>DB: mark webhook processed or failed
    Handler-->>Webhook: response
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pull request overview
Implements a provider-managed recurring collection schedule flow (initially for Rotessa PAD) alongside the existing app-owned collection runner, including activation, occurrence ingestion via webhook/poller normalization, polling cron wiring, and expanded schema + tests to prevent double-execution.
Changes:
- Adds recurring schedule domain (schema/table + provider adapters) with activation, occurrence ingestion, and polling spine.
- Extends collection plan execution model with `executionMode` + `provider_scheduled` status and updates due-entry queries/runner isolation.
- Updates Rotessa webhook handling to support provider-managed lifecycle ingestion and adds integration/contract tests for webhook + polling flows.
Reviewed changes
Copilot reviewed 36 out of 38 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| src/test/convex/payments/rotessaManagedRecurringLifecycle.test.ts | Adds lifecycle contract tests for Rotessa webhook + polling status normalization. |
| src/test/convex/payments/helpers.ts | Extends test seeding helpers to support provider-managed execution mode/status. |
| src/routeTree.gen.ts | Regenerated TanStack Router route tree output (demo listings children). |
| specs/15-support-provider-managed-recurring-collection-schedules/tasks.md | Adds master task list for the provider-managed schedules workstream. |
| specs/15-support-provider-managed-recurring-collection-schedules/PRD.md | Adds PRD describing dual execution modes and operational requirements. |
| specs/15-support-provider-managed-recurring-collection-schedules/gap-analysis.md | Documents current-state gaps and risk controls for the new execution model. |
| specs/15-support-provider-managed-recurring-collection-schedules/design.md | Adds detailed design for schedule activation, ingestion, polling, and schema. |
| reviews/04-11-provider_managed_payment_schedule/74b377d61.md | Stores prior automated review notes for the PR. |
| package.json | Adds fast-check + Vitest integration for property-based testing support. |
| docs/architecture/state-machines.md | Adds a comprehensive developer guide for Governed Transitions state machines. |
| convex/schema.ts | Extends schema for execution ownership + adds externalCollectionSchedules and related indexes/fields. |
| convex/payments/webhooks/rotessaPad.ts | Expands Rotessa PAD webhook processing to support provider-managed occurrence ingestion path. |
| convex/payments/webhooks/handleReversal.ts | Adjusts reversal transfer status typing for compatibility with “completed”. |
| convex/payments/transfers/providers/rotessa.ts | Adds read-only Rotessa transfer provider implementation for poll-based status retrieval. |
| convex/payments/transfers/providers/registry.ts | Registers pad_rotessa provider for status retrieval. |
| convex/payments/rotessa/financialTransactions.ts | Adds Rotessa transaction_report mapping to normalized occurrence events + transfer status results. |
| convex/payments/rotessa/api.ts | Adds Rotessa API client (schedules + transaction_report + row search). |
| convex/payments/recurringSchedules/validators.ts | Adds Convex validators for execution modes, schedule status, and normalized occurrence events. |
| convex/payments/recurringSchedules/types.ts | Adds shared types for recurring schedules, normalized occurrence events, and Rotessa rows. |
| convex/payments/recurringSchedules/queries.ts | Adds internal queries for activation snapshot + schedule lookup + polling eligibility/detail. |
| convex/payments/recurringSchedules/providers/rotessaRecurring.ts | Implements Rotessa recurring schedule provider adapter (create/cancel/status/poll). |
| convex/payments/recurringSchedules/providers/registry.ts | Adds provider registry for recurring schedule providers. |
| convex/payments/recurringSchedules/poller.ts | Implements schedule polling action + lease claiming + success/failure recording mutations. |
| convex/payments/recurringSchedules/occurrenceIngestion.ts | Adds shared normalized occurrence ingestion/match/materialization + transfer transition dispatch. |
| convex/payments/recurringSchedules/activation.ts | Implements two-phase activation (begin/commit/fail) + Rotessa schedule creation. |
| convex/payments/recurringSchedules/tests/providerManaged.integration.test.ts | Adds integration coverage for activation, webhook ingestion, poller recovery, and NSF decline behavior. |
| convex/payments/collectionPlan/queries.ts | Extends due-entry selection to exclude provider-managed entries via executionMode. |
| convex/payments/collectionPlan/mutations.ts | Extends collection plan entry creation contract with provider-managed fields/status. |
| convex/payments/collectionPlan/initialScheduling.ts | Extends scheduling types/statuses + persists execution ownership fields. |
| convex/payments/collectionPlan/executionContract.ts | Updates plan entry status union to include provider_scheduled. |
| convex/payments/collectionPlan/execution.ts | Minor refactor/formatting in execution staging + audit payload source fields. |
| convex/payments/collectionPlan/admin.ts | Extends admin status validator to include provider_scheduled. |
| convex/payments/collectionPlan/tests/runner.test.ts | Adds runner test ensuring provider-managed entries are skipped by app-owned runner. |
| convex/crons.ts | Adds provider-managed schedule polling spine cron. |
| convex/_generated/api.d.ts | Updates generated Convex API types for new modules. |
| CLAUDE.md | Updates GitNexus index metadata. |
| bun.lock | Locks newly added fast-check dependencies. |
| AGENTS.md | Updates GitNexus index metadata. |
Actionable comments posted: 12
🧹 Nitpick comments (3)
reviews/04-11-provider_managed_payment_schedule/74b377d61.md (1)
1-74: Avoid committing transient review transcripts into the repo.

This file is generated review output (not enduring product/spec documentation) and will become stale quickly; consider removing it from version control.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@reviews/04-11-provider_managed_payment_schedule/74b377d61.md` around lines 1 - 74, Remove the risky fallback in Rotessa transaction lookup by changing the matched assignment to only use rows.find(...) (do not fallback to rows[0]) and keep the existing if (!matched) throw new Error(...) message so a missing providerRef fails explicitly (refer to matched, rows, providerRef and Rotessa in the function). In recurringSchedules/queries.ts, stop silently falling back to borrowerLink[0]: assign primaryBorrowerLink = borrowerLink.find(link => link.role === "primary") and if it is undefined throw a ConvexError that includes args.mortgageId and a brief serialization or count of borrowerLink (refer to primaryBorrowerLink, borrowerLink, ConvexError, args.mortgageId). Finally, make the time behavior deterministic by removing the Date.now() fallback (replace const asOf = args.asOf ?? Date.now() with requiring args.asOf) and update the activateRecurringSchedule args/type and its callers to supply asOf, or alternatively add clear JSDoc on activateRecurringSchedule/args.asOf if you prefer to keep the runtime default (refer to activateRecurringSchedule and args.asOf).

convex/payments/rotessa/api.ts (1)
112-119: Consider adding request timeout for external API calls.

The `fetch` call has no timeout, which could cause the Convex action to hang if Rotessa's API is unresponsive. Convex actions have execution time limits, but an explicit timeout would provide cleaner failure handling.

💡 Optional: Add AbortController timeout

```diff
 const response = await this.fetchImpl(url.toString(), {
   method: args?.method ?? "GET",
   headers: {
     Authorization: buildRotessaAuthHeader(this.apiKey),
     "Content-Type": "application/json",
   },
   body: args?.body ? JSON.stringify(args.body) : undefined,
+  signal: AbortSignal.timeout(30_000), // 30 second timeout
 });
```

Note: `AbortSignal.timeout()` requires Node 17.3+ / modern runtimes.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@convex/payments/rotessa/api.ts` around lines 112 - 119, The fetch call in this.fetchImpl(...) lacks a timeout and can hang; update the Rotessa API request to use an AbortController signal (or AbortSignal.timeout when available) and pass signal to this.fetchImpl, create a timer that calls controller.abort() after a configured timeout (e.g., 5–10s), and clear the timer after the response to avoid leaks; locate the request by the this.fetchImpl(...) invocation and ensure the headers (buildRotessaAuthHeader(this.apiKey)) and body handling remain unchanged while adding the signal option to the fetch arguments.

convex/payments/recurringSchedules/types.ts (1)
42-60: Make customer reference a compile-time one-of contract

`customerId` and `customIdentifier` are both optional, so invalid input (neither set) type-checks today. This is already enforced at runtime in activation, so encoding it in the type will prevent misuse at call sites.

Suggested fix

```diff
-export interface RecurringCollectionScheduleInput {
+type RotessaCustomerReference =
+  | { customerId: number; customIdentifier?: never }
+  | { customerId?: never; customIdentifier: string };
+
+export type RecurringCollectionScheduleInput = RotessaCustomerReference & {
   amount: number;
   bankAccountId: Id<"bankAccounts">;
   comment?: string;
-  customerId?: number;
-  customIdentifier?: string;
   frequency:
     | "Once"
     | "Weekly"
@@
   installments?: number;
   processDate: string;
   providerCode: Extract<ProviderCode, "pad_rotessa">;
-}
+};
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/recurringSchedules/types.ts` around lines 42 - 60, RecurringCollectionScheduleInput currently allows both customerId and customIdentifier to be omitted; change the declaration so the type enforces at compile-time that exactly one customer reference is provided by replacing the single interface with a union (or a shared base plus two variants) where one variant requires customerId (and disallows customIdentifier) and the other requires customIdentifier (and disallows customerId); update references to RecurringCollectionScheduleInput accordingly so callers must supply either customerId or customIdentifier, not neither.
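The one-of contract described above can be sketched in plain TypeScript, together with a runtime guard for untyped payloads. Names here mirror the review comment, not necessarily the PR's actual exports:

```typescript
// Hypothetical sketch of the suggested one-of contract: the `never` markers
// make "both set" or "neither set" a compile error at call sites.
type RotessaCustomerReference =
  | { customerId: number; customIdentifier?: never }
  | { customerId?: never; customIdentifier: string };

// Runtime counterpart for untyped input (e.g. deserialized payloads):
// exactly one of the two references must be present.
function isValidCustomerReference(input: {
  customerId?: number;
  customIdentifier?: string;
}): input is RotessaCustomerReference {
  return (input.customerId !== undefined) !== (input.customIdentifier !== undefined);
}
```

The XOR on presence keeps the runtime check and the compile-time union aligned, so validation logic cannot drift from the type.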
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@convex/payments/collectionPlan/mutations.ts`:
- Around line 18-33: Add cross-field validation in createEntryImpl: when the
incoming payload has status === "provider_scheduled", assert executionMode ===
"provider_managed" and that externalCollectionScheduleId and
externalOccurrenceOrdinal are present (non-null/defined); if any of these checks
fail, throw a validation/error (e.g., BadRequest) to reject the mutation. Locate
createEntryImpl in this file, check the incoming fields status, executionMode,
externalCollectionScheduleId, and externalOccurrenceOrdinal, and add a short
guard block that enforces the invariant before any DB writes.
In `@convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts`:
- Around line 403-425: The afterEach cleanup currently only clears
ROTESSA_API_KEY; update the afterEach block to also reset DISABLE_GT_HASHCHAIN
and DISABLE_CASH_LEDGER_HASHCHAIN on testGlobal.process.env (the same env object
used in beforeEach) so those flags are set back to undefined (or their previous
values) to avoid leaking into other Vitest suites; locate the afterEach function
in providerManaged.integration.test.ts and mirror the cleanup for
env.DISABLE_GT_HASHCHAIN and env.DISABLE_CASH_LEDGER_HASHCHAIN alongside
ROTESSA_API_KEY.
In `@convex/payments/recurringSchedules/activation.ts`:
- Around line 216-223: The patch currently hardcodes status and
lastProviderScheduleStatus to "active" when calling ctx.db.patch for
args.scheduleId; instead, use the actual provider/local status values passed in
(e.g., args.status or args.providerScheduleStatus /
args.lastProviderScheduleStatus) and set lastProviderScheduleStatus from that
value rather than forcing "active"; also only update
lastTransitionAt/activatedAt based on the provided timestamps or when the status
actually changes (use args.activatedAt or args.lastTransitionAt as appropriate).
Make the same change in the other ctx.db.patch site that sets these fields (the
analogous block later in the file) so the stored lifecycle and poll timing
reflect the provider-returned state instead of being overwritten.
- Around line 115-122: The idempotency key built by the return that joins
"provider-managed-schedule", args.providerCode, args.primaryBankAccountId,
args.planEntryIds[0], lastPlanEntryId, and args.planEntryIds.length can collide
for different planEntryIds sets; change the key to include a deterministic
representation of the full planEntryIds (e.g., sort the array and join all IDs,
or compute a stable hash of JSON.stringify(sorted planEntryIds)) instead of only
first+last+length so each distinct selection yields a unique idempotency key;
update the code that constructs the key (the return expression using
args.planEntryIds and lastPlanEntryId) to incorporate the full sorted/joined IDs
or their hash.
- Around line 146-149: The file uses raw internalMutation/internalAction exports
(e.g., beginRecurringScheduleActivation) instead of the fluent-convex builder
pattern; update each internal effect handler in this file (including
beginRecurringScheduleActivation and the other handlers noted around lines 199,
248, 268) to use the fluent builder and explicitly mark visibility with
.internal(), i.e., replace internalMutation/internalAction usage with the
corresponding fluent-convex builder call and append .internal() at the export
site so the function signatures and args stay the same but follow the
.internal() visibility pattern.
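The idempotency-key fix suggested above (incorporate every plan entry ID, sorted for determinism) can be sketched as follows; the prefix and joining scheme are illustrative assumptions:

```typescript
// Deterministic activation key: sorting makes the key independent of the
// order the caller supplies planEntryIds, and including every ID prevents
// the first+last+length collision the review flags.
function buildActivationIdempotencyKey(
  providerCode: string,
  primaryBankAccountId: string,
  planEntryIds: string[]
): string {
  const sortedIds = [...planEntryIds].sort();
  return ["provider-managed-schedule", providerCode, primaryBankAccountId, ...sortedIds].join(":");
}
```

Two selections with the same first ID, last ID, and length but different middles now produce distinct keys.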
In `@convex/payments/recurringSchedules/poller.ts`:
- Around line 98-131: The mutation recordExternalCollectionScheduleSyncSuccess
always sets status: "active" causing finished schedules to be re-polled; change
it to determine the new local status from the provider status (use
args.lastProviderScheduleStatus ?? schedule.lastProviderScheduleStatus) and set
status to "completed" when that value is a terminal state (e.g., ===
"completed"), otherwise set it to "active" before calling ctx.db.patch so the
persisted row reflects the provider's terminal status.
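A minimal sketch of the status derivation described above, assuming "completed" is the only terminal provider state relevant here:

```typescript
// Prefer the freshly polled provider status, fall back to the stored one,
// and persist "completed" for terminal states instead of forcing "active"
// (which would keep finished schedules in the polling rotation).
function deriveLocalScheduleStatus(
  polledProviderStatus: string | undefined,
  storedProviderStatus: string | undefined
): "active" | "completed" {
  const effective = polledProviderStatus ?? storedProviderStatus;
  return effective === "completed" ? "completed" : "active";
}
```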
In `@convex/payments/recurringSchedules/providers/rotessaRecurring.ts`:
- Around line 54-131: pollOccurrenceUpdates advances nextCursor using the full
events array even though it only returns events.slice(0, maxEvents), which can
skip occurrences; compute latestOccurredAt from the same emitted array used in
the response. Specifically, after building events and before returning, take
const emitted = events.slice(0, maxEvents) and derive latestOccurredAt from
emitted (not events), then set nextCursor from that emitted-derived timestamp
and return emitted as events; update references to events.slice(0, maxEvents)
accordingly in pollOccurrenceUpdates so nextCursor matches the last emitted
occurrence.
In `@convex/payments/recurringSchedules/queries.ts`:
- Around line 43-69: The current selection logic treats an explicit empty
args.planEntryIds as "select all future planned entries" and silently drops
missing IDs; change the branch so that if args.planEntryIds is defined (even if
empty) you only resolve those IDs via ctx.db.get for each id (using the existing
map over args.planEntryIds) and do not fall back to the index scan on
collectionPlanEntries; additionally, after fetching, detect any null/missing
entries and surface an error (or return a clear failure) instead of silently
ignoring them so callers get feedback; keep the downstream filters/sort
(filteredPlanEntries) unchanged and reference the planEntryIds, ctx.db.get, and
collectionPlanEntries symbols when making the change.
In `@convex/payments/rotessa/financialTransactions.ts`:
- Around line 16-18: The parseRotessaAmountToCents helper currently converts the
provider string directly and can emit NaN/Infinity; update
parseRotessaAmountToCents to parse the input with Number(amount), then assert
Number.isFinite(parsed) before doing the cents conversion, and if it's not
finite throw a descriptive Error (including the raw amount string and context
like "invalid provider amount") so callers fail fast instead of producing bogus
numeric values.
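A hedged sketch of the fail-fast parser described above; the real helper may differ in rounding policy:

```typescript
// Parse a provider amount string to integer cents, rejecting anything that
// is not a finite number so callers fail fast instead of storing NaN.
function parseRotessaAmountToCents(amount: string): number {
  const parsed = Number(amount);
  if (!Number.isFinite(parsed)) {
    throw new Error(`invalid provider amount: ${JSON.stringify(amount)}`);
  }
  return Math.round(parsed * 100);
}
```

`Math.round` absorbs the floating-point drift of `parsed * 100` (e.g. 12.34 × 100 is not exactly 1234 in IEEE 754).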
In `@convex/payments/transfers/providers/rotessa.ts`:
- Around line 30-55: getStatus currently only calls
this.apiClient.getTransactionReport with page: 1 which can miss transactions on
later pages; update getStatus to use the paginated lookup
(this.apiClient.findTransactionReportRow) to locate the row across pages (or
implement a loop that fetches successive pages up to the same max used by
RotessaApiClient) and assign the matched row to a variable, throw the existing
not-found Error if no row is returned, then return
buildRotessaTransferStatusResult(matched) as before; reference getStatus,
this.apiClient.getTransactionReport, this.apiClient.findTransactionReportRow,
and buildRotessaTransferStatusResult when making the change.
In `@convex/payments/webhooks/handleReversal.ts`:
- Around line 4-12: The ReversalTransferRecord query-return type narrows the
persisted status to TransferStatus | "completed", which omits legacy persisted
values; update the status field to use PersistedTransferStatus (the persisted DB
type for transferRequests) instead of TransferStatus | "completed" so query
boundaries retain full persisted typing and preserve migration safety for
transferRequests reads in handleReversal.ts.
In `@src/test/convex/payments/helpers.ts`:
- Around line 167-173: The fixture currently defaults executionMode to
"app_owned" even when status is "provider_scheduled", causing inconsistent
state; update the fixture that constructs this object (the code using the
executionMode, status, method, scheduledDate fields) so that if status ===
"provider_scheduled" the executionMode default becomes "provider_managed" (or
explicitly set executionMode = "provider_managed" when status is
"provider_scheduled"); otherwise keep the existing default ("app_owned"). Ensure
this logic is applied where the fixture assigns executionMode so tests that set
status: "provider_scheduled" get a consistent executionMode.
---
Nitpick comments:
In `@convex/payments/recurringSchedules/types.ts`:
- Around line 42-60: RecurringCollectionScheduleInput currently allows both
customerId and customIdentifier to be omitted; change the declaration so the
type enforces at compile-time that exactly one customer reference is provided by
replacing the single interface with a union (or a shared base plus two variants)
where one variant requires customerId (and disallows customIdentifier) and the
other requires customIdentifier (and disallows customerId); update references to
RecurringCollectionScheduleInput accordingly so callers must supply either
customerId or customIdentifier, not neither.
In `@convex/payments/rotessa/api.ts`:
- Around line 112-119: The fetch call in this.fetchImpl(...) lacks a timeout and
can hang; update the Rotessa API request to use an AbortController signal (or
AbortSignal.timeout when available) and pass signal to this.fetchImpl, create a
timer that calls controller.abort() after a configured timeout (e.g., 5–10s),
and clear the timer after the response to avoid leaks; locate the request by the
this.fetchImpl(...) invocation and ensure the headers
(buildRotessaAuthHeader(this.apiKey)) and body handling remain unchanged while
adding the signal option to the fetch arguments.
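The signal wiring suggested above can be isolated into a small helper; the option shape is an assumption, and only the `AbortSignal` plumbing is the point:

```typescript
// Minimal fetch-option shape for illustration; the real client passes more.
interface TimedFetchOptions {
  method?: string;
  headers?: Record<string, string>;
  body?: string;
  signal?: AbortSignal;
}

// AbortSignal.timeout (Node 17.3+) aborts the request after timeoutMs with
// no manually managed timer to clear, avoiding the leak the comment warns about.
function withRequestTimeout(options: TimedFetchOptions, timeoutMs: number): TimedFetchOptions {
  return { ...options, signal: AbortSignal.timeout(timeoutMs) };
}
```

Compared with a hand-rolled `AbortController` plus `setTimeout`, this variant cannot forget to clear its timer after the response.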
In `@reviews/04-11-provider_managed_payment_schedule/74b377d61.md`:
- Around line 1-74: Remove the risky fallback in Rotessa transaction lookup by
changing the matched assignment to only use rows.find(...) (do not fallback to
rows[0]) and keep the existing if (!matched) throw new Error(...) message so a
missing providerRef fails explicitly (refer to matched, rows, providerRef and
Rotessa in the function). In recurringSchedules/queries.ts, stop silently
falling back to borrowerLink[0]: assign primaryBorrowerLink =
borrowerLink.find(link => link.role === "primary") and if it is undefined throw
a ConvexError that includes args.mortgageId and a brief serialization or count
of borrowerLink (refer to primaryBorrowerLink, borrowerLink, ConvexError,
args.mortgageId). Finally, make the time behavior deterministic by removing the
Date.now() fallback (replace const asOf = args.asOf ?? Date.now() with requiring
args.asOf) and update the activateRecurringSchedule args/type and its callers to
supply asOf, or alternatively add clear JSDoc on
activateRecurringSchedule/args.asOf if you prefer to keep the runtime default
(refer to activateRecurringSchedule and args.asOf).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2342684f-7342-4263-9b2e-8f6005b6759f
⛔ Files ignored due to path filters (2)
- `bun.lock` is excluded by `!**/*.lock`
- `convex/_generated/api.d.ts` is excluded by `!**/_generated/**`
📒 Files selected for processing (36)
- `AGENTS.md`
- `CLAUDE.md`
- `convex/crons.ts`
- `convex/payments/collectionPlan/__tests__/runner.test.ts`
- `convex/payments/collectionPlan/admin.ts`
- `convex/payments/collectionPlan/execution.ts`
- `convex/payments/collectionPlan/executionContract.ts`
- `convex/payments/collectionPlan/initialScheduling.ts`
- `convex/payments/collectionPlan/mutations.ts`
- `convex/payments/collectionPlan/queries.ts`
- `convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts`
- `convex/payments/recurringSchedules/activation.ts`
- `convex/payments/recurringSchedules/occurrenceIngestion.ts`
- `convex/payments/recurringSchedules/poller.ts`
- `convex/payments/recurringSchedules/providers/registry.ts`
- `convex/payments/recurringSchedules/providers/rotessaRecurring.ts`
- `convex/payments/recurringSchedules/queries.ts`
- `convex/payments/recurringSchedules/types.ts`
- `convex/payments/recurringSchedules/validators.ts`
- `convex/payments/rotessa/api.ts`
- `convex/payments/rotessa/financialTransactions.ts`
- `convex/payments/transfers/providers/registry.ts`
- `convex/payments/transfers/providers/rotessa.ts`
- `convex/payments/webhooks/handleReversal.ts`
- `convex/payments/webhooks/rotessaPad.ts`
- `convex/schema.ts`
- `docs/architecture/state-machines.md`
- `package.json`
- `reviews/04-11-provider_managed_payment_schedule/74b377d61.md`
- `specs/15-support-provider-managed-recurring-collection-schedules/PRD.md`
- `specs/15-support-provider-managed-recurring-collection-schedules/design.md`
- `specs/15-support-provider-managed-recurring-collection-schedules/gap-analysis.md`
- `specs/15-support-provider-managed-recurring-collection-schedules/tasks.md`
- `src/routeTree.gen.ts`
- `src/test/convex/payments/helpers.ts`
- `src/test/convex/payments/rotessaManagedRecurringLifecycle.test.ts`
Actionable comments posted: 6
♻️ Duplicate comments (1)
convex/payments/recurringSchedules/queries.ts (1)
10-180: ⚠️ Potential issue | 🟠 Major

Use fluent-convex builders for internal query exports in this file.
This file defines exported Convex endpoints with raw `internalQuery(...)` instead of the fluent builder pattern used elsewhere in this PR (`convex.query().input().handler().internal()`), which weakens consistency with the project's Convex contract style.

```shell
#!/bin/bash
# Verify raw internalQuery exports in this file
rg -n 'export const .*=\s*internalQuery\(' convex/payments/recurringSchedules/queries.ts

# Compare with fluent builder usage
rg -n 'convex\.(query|mutation|action)\(\)' convex/payments/recurringSchedules/queries.ts
```

As per coding guidelines: “convex/**/*.ts: Use fluent-convex as the canonical way of writing Convex functions. Exported queries, mutations, and actions must use the fluent builder and end with an explicit `.public()` or `.internal()` visibility marker.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/recurringSchedules/queries.ts` around lines 10 - 180, Several exported endpoints (loadActivationSnapshot, getExternalCollectionScheduleByProviderRef, listSchedulesEligibleForPolling, getExternalCollectionScheduleDetail, listExternalCollectionScheduleSyncIssues) use raw internalQuery(...) instead of the project's fluent-convex pattern; switch each export to use the fluent builder (convex.query().input(...).handler(...).internal()) preserving the existing arg schemas (v.* validators) and the handler logic, keep the same exported constant names, and ensure you end with .internal() to match visibility rules and project style.
🧹 Nitpick comments (2)
convex/payments/recurringSchedules/occurrenceIngestion.ts (1)
241-245: Type assertion after insert assumes non-null return.

Line 244 casts the result of `ctx.db.get(attemptId)` to `Promise<CollectionAttemptDoc>`, but `get()` returns `Doc | null`. While practically safe within a single mutation (OCC guarantees), a defensive null check would be more explicit.

♻️ Defensive alternative

```diff
- return args.ctx.db.get(attemptId) as Promise<CollectionAttemptDoc>;
+ const inserted = await args.ctx.db.get(attemptId);
+ if (!inserted) {
+   throw new ConvexError(
+     `Collection attempt ${attemptId} could not be loaded after insertion.`
+   );
+ }
+ return inserted;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/recurringSchedules/occurrenceIngestion.ts` around lines 241 - 245, The code currently casts ctx.db.get(attemptId) to Promise<CollectionAttemptDoc> which ignores that get() can return null; update the function that calls ctx.db.get(attemptId) (in occurrenceIngestion.ts around the variable attemptId) to await the result into a local (e.g., attemptDoc) and perform a null check—if null, throw a descriptive error (or handle accordingly) instead of force-casting—so callers of the function rely on an explicit runtime guarantee that a CollectionAttemptDoc was returned.

convex/payments/recurringSchedules/activation.ts (1)
329-333: Use generic parameter on `makeFunctionReference` to avoid unsafe type casts.

Instead of `makeFunctionReference(...) as unknown as FunctionReference<"mutation">`, use the generic form: `makeFunctionReference<FunctionType>("path:function")`. The generic approach preserves type information for args and return values at compile time, preventing drift and catching validation errors early.

This pattern is already used elsewhere in the codebase (e.g., `convex/fees/__tests__/config.test.ts`, `convex/seed/seedAll.ts`). Apply it consistently across all three locations in this file (lines 329–333, 360–370, 380–390).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/recurringSchedules/activation.ts` around lines 329 - 333, Replace the unsafe cast pattern makeFunctionReference(... ) as unknown as FunctionReference<"mutation"> with the generic makeFunctionReference<T>("path:function") form for all three makeFunctionReference usages in this file used with ctx.runMutation; e.g., for the payments/recurringSchedules/activation:beginRecurringScheduleActivation call, supply the appropriate function type as the generic parameter so the reference preserves the args/return types at compile time, and do the same for the other two activation-related makeFunctionReference calls in this file (the ones passed into ctx.runMutation).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@convex/payments/recurringSchedules/activation.ts`:
- Around line 167-174: The current logic treats any existing schedule without
externalScheduleRef as "in progress" and throws forever; change it to allow
retries when the existing record has failed activation. In the block that checks
existing (the variable named existing and properties
existing.externalScheduleRef and args.activationIdempotencyKey), return the
existing._id only if existing.externalScheduleRef is present; otherwise, if
existing.status === 'activation_failed' allow the flow to proceed with retry
(e.g., do not throw and either recreate/update the record or clear the
idempotency marker), and only throw the ConvexError when existing.status is not
'activation_failed' (i.e., truly in-progress). Ensure you reference
existing.status and args.activationIdempotencyKey in the updated condition.
- Around line 219-230: The patch call in activation.ts directly mutates the
schedule.status and lastTransitionAt (ctx.db.patch(args.scheduleId, { status:
localScheduleStatus, ... lastTransitionAt: ... })), which bypasses the required
Governed Transitions/Audit system; replace this direct status mutation with a
call into the Transition Engine (e.g., the project’s schedule transition
API/TransitionEngine method) to request the lifecycle change for args.scheduleId
to localScheduleStatus so the engine journals and hash-chains the transition,
passing provider metadata (providerData, lastProviderScheduleStatus, nextPollAt,
activatedAt) as transition context; only update non-status metadata via db.patch
after the Transition Engine confirms the transition (and use schedule.status and
schedule.lastTransitionAt to compute lastTransitionAt when invoking the
transition rather than writing it directly).
In `@convex/payments/recurringSchedules/queries.ts`:
- Around line 116-138: The current selection takes up to limit rows (variables
active and syncError) then filters out leased items (syncLeaseExpiresAt) which
can underfill; update the DB queries against externalCollectionSchedules (the
withIndex calls using "by_status_and_next_poll") to include the lease condition
in the query itself (e.g., require syncLeaseExpiresAt to be null OR <=
args.asOf) so the DB returns only unleased rows before .take(limit); if the
query language lacks OR support, fetch a larger page (e.g., limit * 3) for
active and sync_error then apply the in-memory filter to reach up to limit
results, ensuring the final combined result length is limited to limit.
- Around line 50-61: The code that resolves args.planEntryIds currently maps and
fetches each id which allows duplicate IDs and can create duplicate
installments; before calling Promise.all check for duplicates in
args.planEntryIds (e.g., compare new Set(args.planEntryIds).size to
args.planEntryIds.length) and if any duplicates exist throw a ConvexError with a
clear message including the duplicate ids; keep the rest of the logic (the
ctx.db.get loop and ConvexError when an entry is missing) unchanged so you fail
fast on duplicate planEntryIds instead of resolving them.
In `@convex/payments/transfers/providers/rotessa.ts`:
- Around line 51-55: The initiate method in rotessa.ts currently lacks the
required parameter from the TransferProvider interface; update the async
initiate signature to accept request: TransferRequestInput (while still
returning Promise<InitiateResult>), even if unused, to match the interface
contract and preserve type clarity; ensure the function signature reads async
initiate(request: TransferRequestInput): Promise<InitiateResult> and keep the
existing thrown Error body unchanged.
- Around line 18-28: The cancel and confirm method signatures in rotessa.ts do
not include the required providerRef: string parameter from the TransferProvider
interface; update the async cancel and async confirm definitions to accept
providerRef: string (e.g., async cancel(providerRef: string):
Promise<CancelResult> and async confirm(providerRef: string):
Promise<ConfirmResult>) and keep the existing thrown Errors (or log/use
providerRef if desired) so the method signatures match the interface contract
while preserving current behavior.
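Both rotessa.ts items reduce to keeping stub methods signature-compatible with the interface. A minimal sketch — the `TransferProvider` shape and result types here are assumed from the comments above, not copied from the codebase:

```typescript
// Hypothetical minimal versions of the types named in the review comments.
type TransferRequestInput = { amountCents: number; bankAccountId: string };
type InitiateResult = { providerRef: string };
type CancelResult = { cancelled: boolean };
type ConfirmResult = { confirmed: boolean };

interface TransferProvider {
  initiate(request: TransferRequestInput): Promise<InitiateResult>;
  cancel(providerRef: string): Promise<CancelResult>;
  confirm(providerRef: string): Promise<ConfirmResult>;
}

// Even while unimplemented, the stub declares the full parameter list so it
// satisfies the interface and callers type-check against the real contract.
class RotessaTransferProvider implements TransferProvider {
  async initiate(request: TransferRequestInput): Promise<InitiateResult> {
    throw new Error("pad_rotessa transfers are provider-managed; initiate is not supported");
  }
  async cancel(providerRef: string): Promise<CancelResult> {
    throw new Error(`pad_rotessa cancel not implemented (ref ${providerRef})`);
  }
  async confirm(providerRef: string): Promise<ConfirmResult> {
    throw new Error(`pad_rotessa confirm not implemented (ref ${providerRef})`);
  }
}
```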
---
Duplicate comments:
In `@convex/payments/recurringSchedules/queries.ts`:
- Around line 10-180: Several exported endpoints (loadActivationSnapshot,
getExternalCollectionScheduleByProviderRef, listSchedulesEligibleForPolling,
getExternalCollectionScheduleDetail, listExternalCollectionScheduleSyncIssues)
use raw internalQuery(...) instead of the project's fluent-convex pattern;
switch each export to use the fluent builder
(convex.query().input(...).handler(...).internal()) preserving the existing arg
schemas (v.* validators) and the handler logic, keep the same exported constant
names, and ensure you end with .internal() to match visibility rules and project
style.
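For orientation, the chain shape these migration comments describe (`convex.query().input(...).handler(...).internal()`) can be approximated with a toy builder; this is a sketch of the pattern, not the project's actual fluent-convex implementation:

```typescript
// Hypothetical builder that mimics the chain shape. Visibility becomes part
// of the exported value, so it is declared right at the export site.
type Visibility = "internal" | "public";

function query() {
  let schema: Record<string, unknown> = {};
  return {
    input(s: Record<string, unknown>) {
      schema = s;
      return this;
    },
    handler<Args, Ret>(fn: (ctx: unknown, args: Args) => Ret) {
      return {
        internal: () => ({ kind: "query", visibility: "internal" as Visibility, schema, fn }),
        public: () => ({ kind: "query", visibility: "public" as Visibility, schema, fn }),
      };
    },
  };
}

// Shape of an export after migration: same args schema and handler body,
// but the chain ends with an explicit visibility call.
const loadActivationSnapshotSketch = query()
  .input({ mortgageId: "id" })
  .handler((_ctx, args: { mortgageId: string }) => ({ mortgageId: args.mortgageId }))
  .internal();
```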
---
Nitpick comments:
In `@convex/payments/recurringSchedules/activation.ts`:
- Around line 329-333: Replace the unsafe cast pattern makeFunctionReference(...
) as unknown as FunctionReference<"mutation"> with the generic
makeFunctionReference<T>("path:function") form for all three
makeFunctionReference usages in this file used with ctx.runMutation; e.g., for
the payments/recurringSchedules/activation:beginRecurringScheduleActivation
call, supply the appropriate function type as the generic parameter so the
reference preserves the args/return types at compile time, and do the same for
the other two activation-related makeFunctionReference calls in this file (the
ones passed into ctx.runMutation).
In `@convex/payments/recurringSchedules/occurrenceIngestion.ts`:
- Around line 241-245: The code currently casts ctx.db.get(attemptId) to
Promise<CollectionAttemptDoc> which ignores that get() can return null; update
the function that calls ctx.db.get(attemptId) (in occurrenceIngestion.ts around
the variable attemptId) to await the result into a local (e.g., attemptDoc) and
perform a null check—if null, throw a descriptive error (or handle accordingly)
instead of force-casting—so callers of the function rely on an explicit runtime
guarantee that a CollectionAttemptDoc was returned.
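The last nitpick — trusting `ctx.db.get()` to return a non-null document — generalizes to a small guard. A sketch; the `requireDoc` name is ours, not from the codebase:

```typescript
// Narrow a possibly-null lookup result into a guaranteed document, failing
// loudly with context instead of letting a force-cast hide the null case.
function requireDoc<T>(doc: T | null | undefined, what: string): T {
  if (doc === null || doc === undefined) {
    throw new Error(`Expected ${what} to exist, but the lookup returned null`);
  }
  return doc;
}

// Usage shape, assuming a db-like async getter:
async function loadAttempt(
  get: (id: string) => Promise<{ status: string } | null>,
  attemptId: string
) {
  // Instead of: (await get(attemptId)) as CollectionAttemptDoc
  return requireDoc(await get(attemptId), `collectionAttempt ${attemptId}`);
}
```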
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 72a3bcb9-c582-4e29-9e5e-50a55a76bbcc
📒 Files selected for processing (16)
- convex/payments/collectionPlan/__tests__/runner.test.ts
- convex/payments/collectionPlan/initialScheduling.ts
- convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts
- convex/payments/recurringSchedules/activation.ts
- convex/payments/recurringSchedules/occurrenceIngestion.ts
- convex/payments/recurringSchedules/poller.ts
- convex/payments/recurringSchedules/providers/rotessaRecurring.ts
- convex/payments/recurringSchedules/queries.ts
- convex/payments/rotessa/financialTransactions.ts
- convex/payments/transfers/providers/rotessa.ts
- convex/payments/transfers/types.ts
- convex/payments/webhooks/handleReversal.ts
- reviews/04-11-provider_managed_payment_schedule/74b377d61.md
- specs/15-support-provider-managed-recurring-collection-schedules/tasks.md
- src/test/convex/payments/helpers.ts
- src/test/convex/payments/rotessaManagedRecurringLifecycle.test.ts
✅ Files skipped from review due to trivial changes (2)
- convex/payments/webhooks/handleReversal.ts
- specs/15-support-provider-managed-recurring-collection-schedules/tasks.md
🚧 Files skipped from review as they are similar to previous changes (4)
- convex/payments/collectionPlan/__tests__/runner.test.ts
- convex/payments/rotessa/financialTransactions.ts
- src/test/convex/payments/rotessaManagedRecurringLifecycle.test.ts
- convex/payments/recurringSchedules/poller.ts
Actionable comments posted: 4
♻️ Duplicate comments (2)
convex/payments/webhooks/rotessaPad.ts (1)
334-347: 🛠️ Refactor suggestion | 🟠 Major
Use fluent-convex for these new webhook endpoints.
These exports revert to raw `internalMutation`/`internalAction`, so visibility is no longer declared at the export site like the rest of the migrated Convex surface.
As per coding guidelines, "Use `fluent-convex` as the canonical way of writing Convex functions; exported Convex queries, mutations, and actions must use the fluent builder and end with an explicit `.public()` or `.internal()` for visibility clarity."
Also applies to: 523-632
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/webhooks/rotessaPad.ts` around lines 334 - 347, The exported Convex mutation processRotessaPadTransferWebhookMutation is defined with raw internalMutation instead of the fluent-convex builder; replace its use with the fluent builder pattern (e.g., use mutation(...) / action(...) / query(...) builder that matches the original kind) and attach the explicit visibility method (.internal()) and keep the same args and handler that call processRotessaPadTransferWebhook; apply the same change to the other exported endpoints in this file that still use internalMutation/internalAction (the similar exports around the processRotessa* handlers).
convex/payments/recurringSchedules/queries.ts (1)
35-173: 🛠️ Refactor suggestion | 🟠 Major
Migrate these internal queries to fluent-convex.
The new query exports still use raw `internalQuery`, which breaks the repo’s current Convex pattern and hides visibility away from the export site.
As per coding guidelines, "Use `fluent-convex` as the canonical way of writing Convex functions; exported Convex queries, mutations, and actions must use the fluent builder and end with an explicit `.public()` or `.internal()` for visibility clarity."
Also applies to: 175-190, 192-247, 249-275, 277-287
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/recurringSchedules/queries.ts` around lines 35 - 173, The exported query loadActivationSnapshot (and other exported queries/mutations/actions in this file) still use the raw internalQuery API; migrate each exported Convex function to use the fluent-convex builder (e.g., the fluent query/mutation/action builder) and replace internalQuery(...) with the fluent form, then call the appropriate visibility method (.internal() for internal-only exports or .public() for public exports) on the builder; locate loadActivationSnapshot and the other exported functions in this file and refactor their signatures/handlers to the fluent-convex pattern while preserving existing args validation, handler logic, and return shape.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@convex/payments/recurringSchedules/activation.ts`:
- Around line 398-438: The provider.createSchedule() call can succeed before the
durable commit, so persist the provider-side identifiers immediately and
atomically in a small mutation before calling commitRecurringScheduleActivation:
after receiving providerSchedule from provider.createSchedule(), call
ctx.runMutation with a narrow mutation (e.g., a new or existing mutation that
writes externalScheduleRef, providerData, providerStatus, activatedAt/nextPollAt
and planEntryIds) to durable storage, then call
commitRecurringScheduleActivation to apply the broader local patch;
additionally, if commitRecurringScheduleActivation fails, invoke a compensating
provider cancellation (e.g., provider.cancelSchedule using
providerSchedule.externalScheduleRef) inside the catch path and record the
cancellation via failRecurringScheduleActivation so external schedules are not
orphaned.
- Around line 194-236: The guard relying only on
mortgage.activeExternalCollectionScheduleId allows concurrent activations
because that mortgage field is only set in commitRecurringScheduleActivation();
to fix, before inserting into externalCollectionSchedules query the
externalCollectionSchedules table for rows with mortgageId === args.mortgageId
and check for any non-terminal schedule (use
isTerminalExternalCollectionScheduleStatus) and compare
activationIdempotencyKey: if a non-terminal row exists with the same idempotency
key return created: false with that schedule._id, and if a non-terminal row
exists with a different idempotency key throw a ConvexError rejecting the
concurrent activation; perform this check in the activation mutation (before the
ctx.db.insert and before calling the external provider) so Convex OCC will
serialize contenders.
In `@convex/payments/recurringSchedules/queries.ts`:
- Around line 55-66: The current logic uses borrowerLink.find(...) to pick a
primaryBorrowerLink from the mortgageBorrowers query, which can silently pick
one when multiple links have role === "primary"; change the check to explicitly
collect all primary links (e.g., filter borrowerLink for link.role ===
"primary"), and if the resulting array length is 0 throw the existing
ConvexError, and if >1 throw a new ConvexError rejecting the invalid state
(include borrowerLink.length and the conflicting primary link ids or count in
the message) so activation fails deterministically instead of arbitrarily
binding to one row.
In `@convex/payments/webhooks/rotessaPad.ts`:
- Around line 592-597: The unresolved branch is calling
markTransferWebhookFailed(...) and then throwing, which causes the surrounding
catch to call markTransferWebhookFailed again and double-count the failure;
modify the control flow so only one call occurs: either remove the initial mark
in the ingestionResult.outcome === "unresolved" branch and just throw so the
outer catch handles marking, or keep the initial mark and rethrow a special
error (or set a flag) so the catch in the surrounding try/catch (the handler
that currently also calls markTransferWebhookFailed) skips calling
markTransferWebhookFailed again; update the logic around ingestionResult and the
surrounding try/catch to ensure markTransferWebhookFailed is invoked exactly
once per webhook (references: ingestionResult.outcome === "unresolved",
markTransferWebhookFailed, and the surrounding try/catch in rotessaPad.ts).
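The first activation.ts item above is the classic persist-then-commit pattern with a compensating action. A sketch with stubbed provider and storage hooks (all dependency names are illustrative):

```typescript
type ProviderSchedule = { externalScheduleRef: string };

// Orchestrates: create remotely, durably record the provider ref, then
// commit the broader local state; on commit failure, cancel the remote
// schedule so it is never orphaned, and record the failure.
async function activateWithCompensation(deps: {
  createSchedule: () => Promise<ProviderSchedule>;
  persistRef: (ref: string) => Promise<void>; // narrow, durable write
  commit: (ref: string) => Promise<void>; // broader local patch
  cancelSchedule: (ref: string) => Promise<void>; // compensating action
  recordFailure: (ref: string, reason: string) => Promise<void>;
}): Promise<{ externalScheduleRef: string }> {
  const schedule = await deps.createSchedule();
  await deps.persistRef(schedule.externalScheduleRef);
  try {
    await deps.commit(schedule.externalScheduleRef);
  } catch (err) {
    await deps.cancelSchedule(schedule.externalScheduleRef);
    await deps.recordFailure(schedule.externalScheduleRef, String(err));
    throw err;
  }
  return { externalScheduleRef: schedule.externalScheduleRef };
}
```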
---
Duplicate comments:
In `@convex/payments/recurringSchedules/queries.ts`:
- Around line 35-173: The exported query loadActivationSnapshot (and other
exported queries/mutations/actions in this file) still use the raw internalQuery
API; migrate each exported Convex function to use the fluent-convex builder
(e.g., the fluent query/mutation/action builder) and replace internalQuery(...)
with the fluent form, then call the appropriate visibility method (.internal()
for internal-only exports or .public() for public exports) on the builder;
locate loadActivationSnapshot and the other exported functions in this file and
refactor their signatures/handlers to the fluent-convex pattern while preserving
existing args validation, handler logic, and return shape.
In `@convex/payments/webhooks/rotessaPad.ts`:
- Around line 334-347: The exported Convex mutation
processRotessaPadTransferWebhookMutation is defined with raw internalMutation
instead of the fluent-convex builder; replace its use with the fluent builder
pattern (e.g., use mutation(...) / action(...) / query(...) builder that matches
the original kind) and attach the explicit visibility method (.internal()) and
keep the same args and handler that call processRotessaPadTransferWebhook; apply
the same change to the other exported endpoints in this file that still use
internalMutation/internalAction (the similar exports around the processRotessa*
handlers).
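The exactly-one-primary rule from the queries.ts comment above reduces to a pure selection that can be unit-tested in isolation. A sketch with simplified types:

```typescript
type BorrowerLink = { _id: string; role: "primary" | "co_borrower" };

// Require exactly one primary link; zero or many is invalid data and must
// fail activation deterministically rather than bind to an arbitrary row.
function selectPrimaryBorrowerLink(links: BorrowerLink[]): BorrowerLink {
  const primaries = links.filter((link) => link.role === "primary");
  if (primaries.length !== 1) {
    throw new Error(
      `Expected exactly one primary borrower link, found ${primaries.length} ` +
        `of ${links.length} links: ${primaries.map((l) => l._id).join(", ")}`
    );
  }
  return primaries[0];
}
```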
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: cde138f3-f449-422d-8aed-b8bd6d0d22d1
📒 Files selected for processing (5)
- convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts
- convex/payments/recurringSchedules/activation.ts
- convex/payments/recurringSchedules/queries.ts
- convex/payments/transfers/providers/rotessa.ts
- convex/payments/webhooks/rotessaPad.ts
102424e to a99f4e2 (force-push)
Actionable comments posted: 17
♻️ Duplicate comments (3)
convex/payments/recurringSchedules/queries.ts (1)
55-66: ⚠️ Potential issue | 🟠 Major
Reject multiple primary borrower links instead of picking one arbitrarily.
`.find()` still makes activation depend on row order when bad data contains two `role === "primary"` links. That binds the schedule to an arbitrary borrower and makes downstream ownership checks nondeterministic. Require exactly one primary link here and fail fast otherwise.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/recurringSchedules/queries.ts` around lines 55 - 66, Currently the code uses borrowerLink.find(...) which silently picks the first "primary" when there are multiple; update the logic in the mortgageBorrowers query handling so you collect all links with role === "primary" (e.g., filter borrowerLink for role), then require that the count is exactly one: if zero or >1 throw a ConvexError from the same block (include borrowerLink.length and primaryCount in the message) instead of proceeding, and set primaryBorrowerLink from the single item when primaryCount === 1; this makes activation deterministic and fails fast on bad data.
convex/demo/__tests__/ampsExecutionModes.test.ts (1)
28-34: ⚠️ Potential issue | 🟡 Minor
Restore the mutated env flags after each harness.
`createHarness()` writes `DISABLE_GT_HASHCHAIN` and `DISABLE_CASH_LEDGER_HASHCHAIN` into `process.env` but never restores the previous values. That can leak the hashchain kill-switches into later Vitest suites running in the same worker and mask unrelated failures. Snapshot and restore these vars in a local `try/finally` or shared `afterEach`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/demo/__tests__/ampsExecutionModes.test.ts` around lines 28 - 34, The createHarness function mutates process.env.DISABLE_GT_HASHCHAIN and process.env.DISABLE_CASH_LEDGER_HASHCHAIN without restoring them; update createHarness (or add a shared afterEach) to snapshot the prior values of these two env vars before setting them and restore them after the harness is used (use a try/finally around the harness setup/teardown or register an afterEach that resets process.env for DISABLE_GT_HASHCHAIN and DISABLE_CASH_LEDGER_HASHCHAIN), keeping the existing convexTest/auditLogTest.register calls unchanged.
convex/payments/webhooks/rotessaPad.ts (1)
592-612: ⚠️ Potential issue | 🟠 Major
Mark unresolved lifecycle webhooks failed only once.
The `"unresolved"` branch calls `markTransferWebhookFailed()` and then throws into the surrounding `catch`, which calls the same helper again. That double-counts a single delivery and can overwrite the stored failure metadata.
Suggested fix:

      if (ingestionResult.outcome === "unresolved") {
    -   await markTransferWebhookFailed(ctx, {
    -     webhookEventId: args.webhookEventId,
    -     error: ingestionResult.reason,
    -   });
        throw new Error(ingestionResult.reason);
      }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/webhooks/rotessaPad.ts` around lines 592 - 612, The unresolved branch currently calls markTransferWebhookFailed(...) then throws, which triggers the surrounding catch to call markTransferWebhookFailed(...) again; to fix, after calling markTransferWebhookFailed in the ingestionResult.outcome === "unresolved" branch, throw an Error that carries a flag (e.g., set a property like alreadyMarked = true on the Error object) or throw a small custom error class, and in the catch block check that flag (or instance) and skip calling markTransferWebhookFailed if alreadyMarked is true; reference the ingestionResult.outcome === "unresolved" branch, markTransferWebhookFailed, and the surrounding catch to implement this conditional skip.
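Of the two options in this comment, the flagged-error variant can be sketched as follows; the error class and dependency names are ours, not from the codebase:

```typescript
// Error that signals the webhook failure has already been recorded, so the
// outer catch must not record it again.
class WebhookAlreadyMarkedError extends Error {
  readonly alreadyMarked = true;
}

async function handleLifecycleWebhook(deps: {
  ingest: () => Promise<{ outcome: string; reason?: string }>;
  markFailed: (error: string) => Promise<void>;
}): Promise<void> {
  try {
    const result = await deps.ingest();
    if (result.outcome === "unresolved") {
      await deps.markFailed(result.reason ?? "unresolved");
      throw new WebhookAlreadyMarkedError(result.reason ?? "unresolved");
    }
  } catch (err) {
    // Skip the second mark when the unresolved branch already recorded it.
    if (!(err instanceof WebhookAlreadyMarkedError)) {
      await deps.markFailed(String(err));
    }
    throw err;
  }
}
```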
🧹 Nitpick comments (2)
convex/payments/webhooks/rotessaPad.ts (1)
65-90: Align the function-reference return type with the actual ingestion result.
`ingestExternalOccurrenceEventReference` is typed as returning only `{ transferRequestId? }`, but this handler immediately branches on `outcome` and `reason` via a cast. That erases the real mutation contract and makes type drift here invisible to the compiler. Model the full union on the function reference and drop the cast.
Suggested fix:

    +type IngestExternalOccurrenceResult =
    +  | {
    +      outcome: "applied" | "already_applied" | "materialized";
    +      transferRequestId?: Id<"transferRequests">;
    +    }
    +  | {
    +      outcome: "unresolved";
    +      reason: string;
    +      transferRequestId?: Id<"transferRequests">;
    +    };
    +
     const ingestExternalOccurrenceEventReference = makeFunctionReference<
       "mutation",
     @@
    -  Promise<{ transferRequestId?: Id<"transferRequests"> }>
    +  Promise<IngestExternalOccurrenceResult>
     >(
       "payments/recurringSchedules/occurrenceIngestion:ingestExternalOccurrenceEvent"
     );
     @@
    -  const ingestionResult = (await ctx.runMutation(
    +  const ingestionResult = await ctx.runMutation(
         ingestExternalOccurrenceEventReference,
         {
           event: {
     @@
    -    }
    -  )) as
    -    | {
    -        outcome: "applied" | "already_applied" | "materialized";
    -        transferRequestId?: Id<"transferRequests">;
    -      }
    -    | {
    -        outcome: "unresolved";
    -        reason: string;
    -        transferRequestId?: Id<"transferRequests">;
    -      };
    +    }
    +  );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/webhooks/rotessaPad.ts` around lines 65 - 90, The function reference ingestExternalOccurrenceEventReference is currently typed to return only { transferRequestId? } which hides the real mutation's union result and forces a runtime cast; update the generic return type passed to makeFunctionReference (for "payments/recurringSchedules/occurrenceIngestion:ingestExternalOccurrenceEvent") to the full union shape the mutation actually returns (including outcome, reason, and any other discriminants and payloads) and remove the subsequent cast so TypeScript can statically enforce the branching on outcome/reason in the handler.
convex/payments/rotessa/financialTransactions.ts (1)
87-112: Tighten `providerCode` to Rotessa instead of casting away the check.
This helper accepts any `ProviderCode` and then coerces it to `"pad_rotessa"` with a cast. That means a future caller can pass a non-Rotessa provider and still get a type-safe-looking event that fails validation later. Restrict the parameter to `Extract<ProviderCode, "pad_rotessa">` or drop the override entirely.
Verify each finding against the current code and only fix it if needed. In `@convex/payments/rotessa/financialTransactions.ts` around lines 87 - 112, The function buildNormalizedOccurrenceFromRotessaRow accepts a broad ProviderCode and then force-casts it to "pad_rotessa", which lets callers pass non-Rotessa providers and masks the error; change the parameter type providerCode to be optional but constrained to Extract<ProviderCode, "pad_rotessa"> (or remove the providerCode override entirely) and update any use of args.providerCode in the function to rely on that narrower type so you no longer need the unsafe cast in the returned providerCode field; ensure the signature and the returned providerCode value consistently use the constrained type (function name: buildNormalizedOccurrenceFromRotessaRow, param: providerCode).
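In isolation, the `Extract` constraint the comment suggests looks like this — the `ProviderCode` union members other than `pad_rotessa` are assumed for illustration:

```typescript
// Assumed provider union; only "pad_rotessa" is valid for this helper.
type ProviderCode = "pad_rotessa" | "eft_other" | "manual";

type NormalizedOccurrence = { providerCode: "pad_rotessa"; amountCents: number };

// Constraining the parameter to the Rotessa member of the union removes the
// need for a cast: callers passing any other ProviderCode fail to compile.
function buildOccurrence(
  amountCents: number,
  providerCode: Extract<ProviderCode, "pad_rotessa"> = "pad_rotessa"
): NormalizedOccurrence {
  return { providerCode, amountCents };
}
```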
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@convex/demo/ampsExecutionModes.ts`:
- Around line 1260-1267: The demo currently calls the global runner
internal.payments.collectionPlan.runner.processDuePlanEntries with only { asOf,
batchSize }, which can advance app-owned entries for unrelated mortgages; change
this to a demo-scoped call by either (A) invoking a dedicated helper (e.g.,
processDuePlanEntriesForWorkspace) that confines work to the current workspace,
or (B) pass and apply a workspace-scoped filter (e.g., workspaceId or
mortgageId) into processDuePlanEntries so it only processes plan entries owned
by state.workspace (refer to state.workspace.executionMode and
state.workspace.id or mortgage id) before running the runner; implement the
filtered runner or new helper and call that from the demo branch instead of the
global processDuePlanEntries.
- Around line 635-645: Reseeding currently only removes
demo_collection_external_occurrences and leaves the prior demo graph (property,
mortgage, bank account) and any provider-managed schedule rows active, causing
orphaned demo state; update the reseed logic (the block referencing
existingWorkspace and demo_collection_external_occurrences) to also find and
remove or deactivate all demo graph entities tied to existingWorkspace._id —
specifically query and delete rows from the tables that store demo properties,
mortgages, bank accounts (e.g., queries over property, mortgage, bankAccount
tables or whatever entity table names are used) and any
externalCollectionSchedules rows for that workspace, or at minimum set those
schedules to inactive instead of leaving them active; use the same
ctx.db.query/.withIndex(...) pattern and ctx.db.delete (or an update to set
active=false) so all prior demo records are removed/deactivated before creating
new demo rows.
- Around line 534-596: Convert the two exported internal callables to the
fluent-convex builder pattern: replace internalQuery({...}) for
getCollectionExecutionWorkspaceInternal and
getPendingManualReviewTransferInternal with
convex.query().input({...}).handler(async (ctx, args) => { /* move existing
handler body unchanged */ }).internal(); ensure you import convex from
"../fluent" if not already present and keep the existing argument schemas
(v.string(), v.optional(...), v.id("mortgages")) and return types identical
while moving only the handler body into the fluent .handler(...) call.
In `@convex/payments/recurringSchedules/occurrenceIngestion.ts`:
- Around line 512-631: The export currently uses raw internalMutation(...)
(ingestExternalOccurrenceEvent) and must be converted to the repo's
fluent-convex builder style and explicitly marked internal; replace the
top-level export so ingestExternalOccurrenceEvent is built with the fluent
builder (preserve the same args object and handler function body) and end the
chain with .internal() instead of calling internalMutation directly; keep all
referenced helpers (resolveExistingTransfer, resolveScheduleId,
resolveOccurrenceMatch, ensureCollectionAttempt,
syncCollectionAttemptProviderMirror, patchPlanEntryProviderMirror,
ensureTransferRequest, ensurePendingBaseline, executeTransition) unchanged and
wire them into the handler as-is.
- Around line 381-415: ensurePendingBaseline currently ignores the {success,
reason} result from executeTransition calls (for DRAW_INITIATED and
PROVIDER_INITIATED) and thus may report outcome: "applied" even if the
transition engine rejected the event; update ensurePendingBaseline to capture
the return value of each executeTransition call (the DRAW_INITIATED and
PROVIDER_INITIATED calls and the final mapped transfer event mentioned in the
review), check success and if false throw or return an error that fails the
mutation (including the reason) so the function fails fast instead of proceeding
to patch mirrors; apply the same pattern to the other similar block referenced
(the code around the second location mentioned, e.g., the calls between lines
~615-629) to ensure all executeTransition results are validated before reporting
applied.
In `@convex/payments/recurringSchedules/poller.ts`:
- Around line 75-85: When setting status to "sync_error" in the ctx.db.patch
calls (the block that patches args.scheduleId and the similar block later), also
set lastTransitionAt to args.asOf so the transition timestamp is updated; update
the patch payloads to include lastTransitionAt: args.asOf alongside
lastSyncAttemptAt/lastSyncErrorAt and other fields to keep transition metadata
consistent when status changes to "sync_error".
- Around line 62-302: The four Convex functions
claimExternalCollectionScheduleSync,
recordExternalCollectionScheduleSyncSuccess,
recordExternalCollectionScheduleSyncFailure, and pollProviderManagedSchedules
are defined using raw internalMutation/internalAction; convert each to the
fluent builder form used in activation.ts (e.g.,
convex.mutation().input(...).handler(async (ctx,args)=>{...}).internal() or
convex.action().input(...).handler(...).internal()) preserving the same arg
schemas and handler bodies, and export those fluent builders instead of the raw
internal* calls so the functions have explicit .internal() visibility and
consistent middleware composition.
In `@convex/payments/recurringSchedules/providers/rotessaRecurring.ts`:
- Around line 93-98: The current cursor logic uses only
occurrenceEvent.occurredAt and a strict greater-than filter (args.sinceCursor ->
occurrenceEvent.occurredAt > Date.parse(args.sinceCursor)), which will
permanently skip tied rows; change the cursor to a composite sort key (e.g.
serialize occurredAt plus a unique id like providerRef or externalOccurrenceRef
into nextCursor) and update both cursor generation (where nextCursor is created)
and the filter (where args.sinceCursor is parsed) to compare the full composite
key rather than only the timestamp; alternatively implement an inclusive replay
(use >=) and add de-duplication by providerRef/externalOccurrenceRef before
emitting to avoid duplicates—update code paths that set nextCursor and the
filter that uses args.sinceCursor/occurrenceEvent.occurredAt to reflect the
chosen approach.
In `@convex/payments/recurringSchedules/queries.ts`:
- Around line 35-173: Replace the raw internalQuery export for
loadActivationSnapshot with the repo’s fluent-convex builder pattern: locate the
loadActivationSnapshot symbol and convert the internalQuery({...}).handler
export into using the exported builder (e.g., query or internalQuery builder)
chain that ends with .internal(), moving the args and handler into the
builder-style calls (so visibility is explicit via .internal()); apply the same
conversion to the other new callables referenced around lines 175-287 so all
exported Convex functions use the fluent builder and end with .internal().
In `@convex/payments/rotessa/api.ts`:
- Around line 96-139: The request<T> method currently calls this.fetchImpl
without a timeout; wrap the fetch call in an AbortController and start a timer
(configurable e.g. this.requestTimeoutMs or a fixed ms) that calls
controller.abort() after the timeout, pass controller.signal to this.fetchImpl,
clear the timer after completion, and handle the abort case by catching the
AbortError and throwing a provider-timeout style Error (instead of leaving the
request to hang); update any places referencing fetchImpl or request<T>
accordingly and keep existing error-path logic (RotessaErrorResponse,
formatRotessaErrors, buildRotessaAuthHeader) intact.
In `@convex/schema.ts`:
- Around line 835-840: The occurrence lookup must be scoped to a schedule: add a
new field externalCollectionScheduleId to the occurrence schema (so it sits
alongside externalOccurrenceRef/externalLastIngestedVia) and create a compound
index keyed by externalCollectionScheduleId + externalOccurrenceRef (name it
e.g. by_external_collection_schedule_id_and_external_occurrence_ref), then
update the resolver in convex/payments/recurringSchedules/occurrenceIngestion.ts
(around the current external lookup at lines where by_external_occurrence_ref is
used) to first query the new compound index for (externalCollectionScheduleId,
externalOccurrenceRef) and only fall back to the global
by_external_occurrence_ref if the scoped lookup returns nothing.
In `@specs/15-support-provider-managed-recurring-collection-schedules/design.md`:
- Around line 41-45: The pollOccurrenceUpdates declaration currently returns
NormalizedExternalCollectionOccurrenceEvent[] but the implementation returns a
cursorized payload; update the pollOccurrenceUpdates signature to return a
Promise of an object shape containing events:
NormalizedExternalCollectionOccurrenceEvent[], nextCursor: string (or null), and
optional providerData: any; locate and update the interface/method declaration
named pollOccurrenceUpdates and any tests or callers expecting an array (e.g.,
places that destructure or iterate the return) to use response.events and
response.nextCursor/providerData accordingly so the contract matches the
implementation.
- Around line 125-159: The spec incorrectly describes
externalCollectionSchedules as a governed Transition Engine entity; update the
document to state that externalCollectionSchedules is managed outside the
Transition Engine (it is not included in EntityType, has no machine
registration, and its lifecycle/status fields are mutated directly via
ctx.db.patch). Replace any wording that implies Transition Engine governance or
machine fields for externalCollectionSchedules and add a short note explaining
the intended implementation pattern: direct DB patches for status, no transition
registrations, and reference to existing usage of ctx.db.patch and the absence
from EntityType/machine registry.
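The cursorized `pollOccurrenceUpdates` contract described above can be sketched as follows; event fields beyond those named in the comment are simplified placeholders:

```typescript
// Sketch of the cursorized return shape for pollOccurrenceUpdates.
interface NormalizedExternalCollectionOccurrenceEvent {
  externalOccurrenceRef: string;
  status: string; // simplified; the real type carries more fields
}

interface PollOccurrenceUpdatesResult {
  events: NormalizedExternalCollectionOccurrenceEvent[];
  nextCursor: string | null;
  providerData?: unknown;
}

// Callers read result.events / result.nextCursor instead of a bare array.
function summarizePoll(result: PollOccurrenceUpdatesResult): string {
  return `${result.events.length} event(s), nextCursor=${result.nextCursor ?? "none"}`;
}
```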
In
`@specs/15-support-provider-managed-recurring-collection-schedules/gap-analysis.md`:
- Around line 23-46: The gap list is stale—remove or mark as implemented the
items now provided by the new recurring-schedule modules and schema changes and
rewrite the section to only describe remaining gaps; specifically, update
entries referencing a missing external schedule aggregate, missing
`provider_scheduled` plan-entry state, missing Rotessa initiation
(`pad_rotessa`), and missing recurring-schedule polling to reflect that these
are implemented by the new recurring-schedule modules and schema migrations
(look for the recurring-schedule modules, the `externalCollectionSchedules`
aggregate/schema changes, the `provider_scheduled` status field addition, and
the Rotessa initiation implementation), and then describe only the true
outstanding gaps and any next steps for follow-ups.
In `@src/routes/demo/amps/-execution-modes.tsx`:
- Around line 498-510: The payload to runAction("advance", advanceMonth(...))
incorrectly branches on the local executionMode variable; update it to read the
execution mode from the live workspace state instead (e.g., the current
workspace object or selector that represents the active workspace) so
providerOutcome and providerChannel are included only when the workspace's
execution mode is "provider_managed". Change the branching inside the
advanceMonth call (the providerOutcome/providerChannel ternaries) to use the
workspace's executionMode (for example workspace.executionMode or the
selector/getter you already use elsewhere) rather than the local executionMode
variable so the sent payload reflects the live workspace mode.
- Around line 425-548: The UI treats workspace === undefined (loading) the same
as null (no data), causing a flash of the EmptySurface; change the render logic
around workspace so undefined shows a loading/skeleton state while null shows
the EmptySurface. Concretely, update the conditional that currently checks
"workspace ? (...) : (<EmptySurface />)" to explicitly handle three cases: if
workspace === undefined render a loader/skeleton (or disable buttons/inline
spinner), else if workspace === null render the EmptySurface, else render the
existing workspace UI (the block using workspace.mortgage, workspace.workspace,
nextInstallment, canAdvance, runAction, advanceMonth, executionMode,
providerOutcome, providerChannel, pendingAction, confirmPendingManualReview).
Apply the same pattern where workspace is used elsewhere (lines ~604-837).
In `@src/routeTree.gen.ts`:
- Around line 415-418: The parent component DemoListingsRoute does not render
nested routes because it omits the router's Outlet; open
src/routes/demo.listings.tsx, import Outlet from '@tanstack/react-router' (the route tree in src/routeTree.gen.ts is generated by TanStack Router, not react-router-dom) and update
the DemoListingsRoute component (the function or component that returns the
<section> with <ListingsClient />) to include <Outlet /> where child routes
should render (e.g., alongside or below <ListingsClient />), ensuring nested
routes like /demo/listings/$listingid are displayed.
---
Duplicate comments:
In `@convex/demo/__tests__/ampsExecutionModes.test.ts`:
- Around line 28-34: The createHarness function mutates
process.env.DISABLE_GT_HASHCHAIN and process.env.DISABLE_CASH_LEDGER_HASHCHAIN
without restoring them; update createHarness (or add a shared afterEach) to
snapshot the prior values of these two env vars before setting them and restore
them after the harness is used (use a try/finally around the harness
setup/teardown or register an afterEach that resets process.env for
DISABLE_GT_HASHCHAIN and DISABLE_CASH_LEDGER_HASHCHAIN), keeping the existing
convexTest/auditLogTest.register calls unchanged.
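One way to implement the snapshot/restore pattern; the helper name is hypothetical, and the restore half could equally live in an `afterEach`:

```typescript
// Sketch: set env flags for a test harness and always restore prior values,
// including the "was unset" case, so no state leaks across tests.
function withEnvFlags<T>(flags: Record<string, string>, run: () => T): T {
  const previous = new Map<string, string | undefined>();
  for (const [key, value] of Object.entries(flags)) {
    previous.set(key, process.env[key]);
    process.env[key] = value;
  }
  try {
    return run();
  } finally {
    for (const [key, value] of previous) {
      if (value === undefined) delete process.env[key];
      else process.env[key] = value;
    }
  }
}
```

Distinguishing "previously unset" (delete) from "previously empty string" (restore) matters, since `process.env[key] = undefined` would stringify to `"undefined"`.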
In `@convex/payments/recurringSchedules/queries.ts`:
- Around line 55-66: Currently the code uses borrowerLink.find(...) which
silently picks the first "primary" when there are multiple; update the logic in
the mortgageBorrowers query handling so you collect all links with role ===
"primary" (e.g., filter borrowerLink for role), then require that the count is
exactly one: if zero or >1 throw a ConvexError from the same block (include
borrowerLink.length and primaryCount in the message) instead of proceeding, and
set primaryBorrowerLink from the single item when primaryCount === 1; this makes
activation deterministic and fails fast on bad data.
In `@convex/payments/webhooks/rotessaPad.ts`:
- Around line 592-612: The unresolved branch currently calls
markTransferWebhookFailed(...) then throws, which triggers the surrounding catch
to call markTransferWebhookFailed(...) again; to fix, after calling
markTransferWebhookFailed in the ingestionResult.outcome === "unresolved"
branch, throw an Error that carries a flag (e.g., set a property like
alreadyMarked = true on the Error object) or throw a small custom error class,
and in the catch block check that flag (or instance) and skip calling
markTransferWebhookFailed if alreadyMarked is true; reference the
ingestionResult.outcome === "unresolved" branch, markTransferWebhookFailed, and
the surrounding catch to implement this conditional skip.
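A sketch of the marker-error pattern; the class name, the counter, and the local stand-in for the real `markTransferWebhookFailed` mutation are all hypothetical:

```typescript
// Sketch: mark the webhook failed once in the unresolved branch, then throw
// an error the outer catch can recognize and skip re-marking.
class AlreadyMarkedError extends Error {
  readonly alreadyMarked = true;
}

let markFailedCalls = 0;
function markTransferWebhookFailed(): void {
  markFailedCalls += 1; // stand-in for the real mutation
}

function processWebhook(outcome: "applied" | "unresolved"): void {
  try {
    if (outcome === "unresolved") {
      markTransferWebhookFailed();
      throw new AlreadyMarkedError("occurrence unresolved");
    }
    // ...normal processing...
  } catch (error) {
    if (!(error instanceof AlreadyMarkedError)) {
      markTransferWebhookFailed(); // only mark when the inner branch did not
    }
    throw error;
  }
}
```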
---
Nitpick comments:
In `@convex/payments/rotessa/financialTransactions.ts`:
- Around line 87-112: The function buildNormalizedOccurrenceFromRotessaRow
accepts a broad ProviderCode and then force-casts it to "pad_rotessa", which
lets callers pass non-Rotessa providers and masks the error; change the
parameter type providerCode to be optional but constrained to
Extract<ProviderCode, "pad_rotessa"> (or remove the providerCode override
entirely) and update any use of args.providerCode in the function to rely on
that narrower type so you no longer need the unsafe cast in the returned
providerCode field; ensure the signature and the returned providerCode value
consistently use the constrained type (function name:
buildNormalizedOccurrenceFromRotessaRow, param: providerCode).
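A small illustration of the `Extract` narrowing; the extra `ProviderCode` members below are made up for the example, and the function body is reduced to the relevant field:

```typescript
// Hypothetical provider union; only "pad_rotessa" is taken from the review.
type ProviderCode = "pad_rotessa" | "eft_acme" | "card_example";

interface NormalizedOccurrence {
  providerCode: "pad_rotessa";
}

// Constraining the override type rejects non-Rotessa callers at compile time
// and removes the need for an unsafe cast in the return value.
function buildNormalizedOccurrenceFromRotessaRow(args: {
  providerCode?: Extract<ProviderCode, "pad_rotessa">;
}): NormalizedOccurrence {
  return { providerCode: args.providerCode ?? "pad_rotessa" };
}
```

With this signature, `buildNormalizedOccurrenceFromRotessaRow({ providerCode: "eft_acme" })` fails to compile instead of being silently coerced at runtime.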
In `@convex/payments/webhooks/rotessaPad.ts`:
- Around line 65-90: The function reference
ingestExternalOccurrenceEventReference is currently typed to return only {
transferRequestId? } which hides the real mutation's union result and forces a
runtime cast; update the generic return type passed to makeFunctionReference
(for
"payments/recurringSchedules/occurrenceIngestion:ingestExternalOccurrenceEvent")
to the full union shape the mutation actually returns (including outcome,
reason, and any other discriminants and payloads) and remove the subsequent cast
so TypeScript can statically enforce the branching on outcome/reason in the
handler.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: aa649203-90fa-458a-b5b2-27a8d3dfd600
⛔ Files ignored due to path filters (2)
- `bun.lock` is excluded by `!**/*.lock`
- `convex/_generated/api.d.ts` is excluded by `!**/_generated/**`
📒 Files selected for processing (42)
- AGENTS.md
- CLAUDE.md
- convex/crons.ts
- convex/demo/__tests__/ampsExecutionModes.test.ts
- convex/demo/ampsExecutionModes.ts
- convex/payments/collectionPlan/__tests__/runner.test.ts
- convex/payments/collectionPlan/admin.ts
- convex/payments/collectionPlan/execution.ts
- convex/payments/collectionPlan/executionContract.ts
- convex/payments/collectionPlan/initialScheduling.ts
- convex/payments/collectionPlan/mutations.ts
- convex/payments/collectionPlan/queries.ts
- convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts
- convex/payments/recurringSchedules/activation.ts
- convex/payments/recurringSchedules/occurrenceIngestion.ts
- convex/payments/recurringSchedules/poller.ts
- convex/payments/recurringSchedules/providers/registry.ts
- convex/payments/recurringSchedules/providers/rotessaRecurring.ts
- convex/payments/recurringSchedules/queries.ts
- convex/payments/recurringSchedules/types.ts
- convex/payments/recurringSchedules/validators.ts
- convex/payments/rotessa/api.ts
- convex/payments/rotessa/financialTransactions.ts
- convex/payments/transfers/providers/registry.ts
- convex/payments/transfers/providers/rotessa.ts
- convex/payments/transfers/types.ts
- convex/payments/webhooks/handleReversal.ts
- convex/payments/webhooks/rotessaPad.ts
- convex/schema.ts
- convex/test/moduleMaps.ts
- docs/architecture/state-machines.md
- package.json
- reviews/04-11-provider_managed_payment_schedule/74b377d61.md
- specs/15-support-provider-managed-recurring-collection-schedules/PRD.md
- specs/15-support-provider-managed-recurring-collection-schedules/design.md
- specs/15-support-provider-managed-recurring-collection-schedules/gap-analysis.md
- specs/15-support-provider-managed-recurring-collection-schedules/tasks.md
- src/routeTree.gen.ts
- src/routes/demo/amps/-execution-modes.tsx
- src/routes/demo/amps/-index.tsx
- src/test/convex/payments/helpers.ts
- src/test/convex/payments/rotessaManagedRecurringLifecycle.test.ts
✅ Files skipped from review due to trivial changes (9)
- CLAUDE.md
- package.json
- AGENTS.md
- convex/payments/collectionPlan/executionContract.ts
- convex/payments/webhooks/handleReversal.ts
- convex/payments/collectionPlan/__tests__/runner.test.ts
- docs/architecture/state-machines.md
- convex/payments/recurringSchedules/validators.ts
- src/test/convex/payments/rotessaManagedRecurringLifecycle.test.ts
🚧 Files skipped from review as they are similar to previous changes (12)
- convex/payments/transfers/types.ts
- convex/payments/collectionPlan/admin.ts
- convex/payments/transfers/providers/registry.ts
- convex/crons.ts
- convex/payments/collectionPlan/execution.ts
- convex/payments/collectionPlan/queries.ts
- convex/payments/recurringSchedules/providers/registry.ts
- convex/payments/collectionPlan/mutations.ts
- src/test/convex/payments/helpers.ts
- specs/15-support-provider-managed-recurring-collection-schedules/tasks.md
- convex/payments/recurringSchedules/types.ts
- convex/payments/recurringSchedules/activation.ts
Actionable comments posted: 7
🧹 Nitpick comments (1)
convex/demo/ampsExecutionModes.ts (1)
932-967: Extract a shared workspace upsert helper to avoid branch drift.

The insert/patch blocks for `demo_collection_execution_workspaces` are duplicated across the provider-managed and app-owned branches. A shared helper would reduce maintenance risk. Based on learnings: apply the DRY (Don't Repeat Yourself) principle; if writing the same code more than once, abstract it into a reusable function or module.
Also applies to: 1044-1079
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/demo/ampsExecutionModes.ts` around lines 932 - 967, Duplicate insert/patch logic for demo_collection_execution_workspaces (the existingWorkspace check, ctx.db.insert and ctx.db.patch blocks that set ownerAuthId, workspaceKey, executionMode, paymentRail, mortgageId, borrowerId, brokerId, propertyId, bankAccountId, externalCollectionScheduleId, currentMonthIndex, currentDate, startDate, createdAt/updatedAt, lastAdvancedAt) should be extracted into a single helper (e.g. upsertDemoCollectionExecutionWorkspace) that returns the workspaceId; implement that helper to accept ctx, args and computed fields (startDate, now, workspaceKey, paymentRail, etc.), perform insert when existingWorkspace is falsy and patch when present, and then replace both the current branch and the similar block at lines ~1044-1079 to call the helper and use its returned id. Ensure the helper updates lastAdvancedAt to undefined in the patch path and keeps createdAt only on insert.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@convex/demo/__tests__/ampsExecutionModes.test.ts`:
- Around line 34-40: createHarness currently mutates
process.env.DISABLE_GT_HASHCHAIN and process.env.DISABLE_CASH_LEDGER_HASHCHAIN
without restoring them, causing global state leakage; modify createHarness to
record the previous values of those two env vars, set the flags as now, and
register a teardown (via the test harness returned by convexTest or
auditLogTest.register, e.g., using t.after or the harness cleanup hook) to
restore the original values after each test completes so other tests don't
inherit the disabled hashchain state; reference the createHarness function and
the two env names when implementing the snapshot/restore logic.
In `@convex/demo/ampsExecutionModes.ts`:
- Around line 1341-1358: The workspace clock is advanced before downstream
processing, causing persistent month advancement on downstream failure; move the
call to internal.demo.ampsExecutionModes.advanceWorkspaceClockInternal so it
runs only after downstream work succeeds: remove the early call with params
(workspaceId, currentMonthIndex, currentDate, advancedAt) and instead invoke
advanceWorkspaceClockInternal at the end of the app_owned path immediately after
a successful processDuePlanEntries run (the app_owned runner success block) and
at the end of the provider_managed path immediately after the poller/event
ingestion sync completes; keep transitionWorkspaceObligationsInternal (with
mortgageId/asOf) where it is but ensure advanceWorkspaceClockInternal runs last
in both paths.
In `@convex/payments/recurringSchedules/activation.ts`:
- Around line 333-349: The loop that patches plan entries (iterating
args.planEntryIds and calling ctx.db.patch) and the subsequent patch of
schedule.mortgageId blindly overwrite rows; before rebinding each plan entry to
provider_scheduled you must re-fetch and revalidate the current plan entry state
(e.g., via a read/get for each plan entry id) to ensure its status,
externalCollectionScheduleId and any other relevant fields still match the
snapshot/provider-create expectations, and if any mismatch occurs fail the
activation (or skip and surface an explicit error) instead of patching; only
call ctx.db.patch for the plan entries that pass validation and only update the
mortgage (collectionExecutionMode/activeExternalCollectionScheduleId) if all
plan entries validated successfully.
In `@convex/payments/recurringSchedules/occurrenceIngestion.ts`:
- Around line 311-318: When reusing args.existingTransfer you backfill
transfer.collectionAttemptId but never set the reverse backlink on the attempt,
so add logic to set args.attempt.transferRequestId to the existing transfer id
when missing; specifically, in the branch that returns args.existingTransfer,
after patching the transfer (or if transfer already had collectionAttemptId),
conditionally call args.ctx.db.patch(args.attempt._id, { transferRequestId:
args.existingTransfer._id }) only if args.attempt.transferRequestId is falsy so
the attempt and transfer are mutually linked (use the same symbols:
args.existingTransfer, args.attempt, collectionAttemptId, transferRequestId,
planEntryId).
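The two-way linkage can be sketched with plain objects; field names follow the comment, while the actual `ctx.db.patch` calls are elided into comments:

```typescript
// Sketch: when reusing an existing transfer, backfill both directions of the
// attempt <-> transfer linkage, writing only fields that are missing.
interface CollectionAttempt {
  _id: string;
  transferRequestId?: string;
}

interface TransferRequest {
  _id: string;
  collectionAttemptId?: string;
}

function linkAttemptAndTransfer(
  attempt: CollectionAttempt,
  transfer: TransferRequest
): void {
  if (!transfer.collectionAttemptId) {
    transfer.collectionAttemptId = attempt._id; // ctx.db.patch(transfer._id, ...)
  }
  if (!attempt.transferRequestId) {
    attempt.transferRequestId = transfer._id; // ctx.db.patch(attempt._id, ...)
  }
}
```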
In `@convex/payments/recurringSchedules/poller.ts`:
- Around line 252-279: The loop over occurrenceUpdates.events ignores the return
value from ingestExternalOccurrenceEventRef which can yield { outcome:
"unresolved" }, causing the poll to record success and advance nextCursor
prematurely; change the ingestion loop in payments/recurringSchedules/poller so
you capture the result of each ctx.runMutation(ingestExternalOccurrenceEventRef)
call, and if any result has outcome === "unresolved" fail the poll instead of
proceeding — i.e., do not call recordExternalCollectionScheduleSyncSuccess (the
makeFunctionReference to
"payments/recurringSchedules/poller:recordExternalCollectionScheduleSyncSuccess")
when unresolved is encountered; instead either call a failure/rollback mutation
or return/throw to ensure nextCursor is not advanced and the event will be
retried.
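The cursor-advance guard reduces to a pure check over the ingestion results; the result union here is abbreviated from what the comment describes:

```typescript
// Sketch: only record sync success (and advance nextCursor) when every
// ingested event resolved; any "unresolved" outcome must fail the poll.
type IngestOutcome =
  | { outcome: "applied"; transferRequestId?: string }
  | { outcome: "unresolved"; reason: string };

function shouldRecordSyncSuccess(results: IngestOutcome[]): boolean {
  return results.every((result) => result.outcome !== "unresolved");
}
```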
In `@convex/payments/rotessa/api.ts`:
- Around line 219-245: The findTransactionReportRow function currently uses a
hard cap (while (page <= 10)) that can silently truncate searches; remove that
arbitrary limit and iterate pages until the API indicates no more rows: change
the loop to an unbounded loop (e.g., for (let page = 1;; page++) or while
(true)) calling getTransactionReport(page) and keep the existing checks (return
matched when found, return null when rows.length < 1000); this ensures you
exhaust all pages instead of stopping at 10. Use the existing symbols
findTransactionReportRow and getTransactionReport and keep the page increment
logic and the rows.length < 1000 termination to avoid infinite loops.
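The unbounded-but-terminating loop can be sketched generically; `getPage` stands in for `getTransactionReport`, and the page size of 1000 mirrors the comment:

```typescript
// Sketch: scan report pages until a match is found or a short page signals
// the end of data; no arbitrary page cap, no infinite loop.
async function findReportRow<T>(
  getPage: (page: number) => Promise<T[]>,
  matches: (row: T) => boolean,
  pageSize = 1000
): Promise<T | null> {
  for (let page = 1; ; page++) {
    const rows = await getPage(page);
    const matched = rows.find(matches);
    if (matched !== undefined) return matched;
    if (rows.length < pageSize) return null; // short page: nothing left to scan
  }
}
```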
In `@src/routes/demo/amps/-execution-modes.tsx`:
- Around line 263-307: Wrap each option group container (the arrays mapped in
EXECUTION_MODE_OPTIONS, currentRailOptions and the provider options mapped
later) with role="radiogroup" and add accessible labelling, then change each
option button (the elements rendered in the maps that call setExecutionMode,
setPaymentRail, etc.) to include role="radio" and set aria-checked={paymentRail
=== option.value} (or aria-checked={executionMode === option.value} / provider
state respectively); keep the existing onClick handlers (setExecutionMode,
setPaymentRail, etc.), update selectorButtonClass usage unchanged, and ensure
only one element per group has aria-checked=true so screen readers can announce
the active selection.
---
Nitpick comments:
In `@convex/demo/ampsExecutionModes.ts`:
- Around line 932-967: Duplicate insert/patch logic for
demo_collection_execution_workspaces (the existingWorkspace check, ctx.db.insert
and ctx.db.patch blocks that set ownerAuthId, workspaceKey, executionMode,
paymentRail, mortgageId, borrowerId, brokerId, propertyId, bankAccountId,
externalCollectionScheduleId, currentMonthIndex, currentDate, startDate,
createdAt/updatedAt, lastAdvancedAt) should be extracted into a single helper
(e.g. upsertDemoCollectionExecutionWorkspace) that returns the workspaceId;
implement that helper to accept ctx, args and computed fields (startDate, now,
workspaceKey, paymentRail, etc.), perform insert when existingWorkspace is falsy
and patch when present, and then replace both the current branch and the similar
block at lines ~1044-1079 to call the helper and use its returned id. Ensure the
helper updates lastAdvancedAt to undefined in the patch path and keeps createdAt
only on insert.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: bce6b4a9-f1bc-4402-8bc7-8857aa99921c
📒 Files selected for processing (17)
- convex/demo/__tests__/ampsExecutionModes.test.ts
- convex/demo/ampsExecutionModes.ts
- convex/payments/collectionPlan/queries.ts
- convex/payments/collectionPlan/runner.ts
- convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts
- convex/payments/recurringSchedules/activation.ts
- convex/payments/recurringSchedules/occurrenceIngestion.ts
- convex/payments/recurringSchedules/poller.ts
- convex/payments/recurringSchedules/providers/rotessaRecurring.ts
- convex/payments/recurringSchedules/queries.ts
- convex/payments/rotessa/api.ts
- convex/payments/webhooks/rotessaPad.ts
- convex/schema.ts
- specs/15-support-provider-managed-recurring-collection-schedules/design.md
- specs/15-support-provider-managed-recurring-collection-schedules/gap-analysis.md
- src/routes/demo.listings.tsx
- src/routes/demo/amps/-execution-modes.tsx
🚧 Files skipped from review as they are similar to previous changes (2)
- convex/payments/collectionPlan/queries.ts
- convex/schema.ts
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
convex/payments/collectionPlan/queries.ts (1)
106-113: ⚠️ Potential issue | 🟠 Major

Treat `provider_scheduled` as covered in the idempotency helpers too. Adding a second live occurrence state here makes `getEntryForObligation()` and `getPlannedEntriesForObligations()` above stale, since both still only scan `status = "planned"`. After activation, cron generation can miss an existing provider-managed row, create a second plan entry for the same obligation, and later send that duplicate through the app-owned path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/collectionPlan/queries.ts` around lines 106 - 113, The status enum now includes "provider_scheduled" but the idempotency helpers still only scan for status = "planned", which can create duplicate plan entries; update the idempotency logic in getEntryForObligation and getPlannedEntriesForObligations to treat "provider_scheduled" the same as "planned" by changing their queries/filters to include both statuses (e.g., status IN ("planned","provider_scheduled")) and any related checks that compare to the literal "planned" so provider-managed rows are considered covered.
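The "treat both as covered" check could look like the following; the surrounding status union is abbreviated, with only `planned` and `provider_scheduled` taken from the finding:

```typescript
// Sketch: statuses that mean an obligation already has a live plan entry,
// so cron generation must not create a duplicate.
type PlanEntryStatus = "planned" | "provider_scheduled" | "processing" | "failed";

const COVERED_STATUSES: ReadonlySet<PlanEntryStatus> = new Set([
  "planned",
  "provider_scheduled",
]);

function obligationIsCovered(entries: { status: PlanEntryStatus }[]): boolean {
  return entries.some((entry) => COVERED_STATUSES.has(entry.status));
}
```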
🧹 Nitpick comments (3)
src/routes/demo/amps/-execution-modes.tsx (2)
343-412: Minor UX consideration: provider selectors visibility.

The provider channel/outcome selectors are shown based on the seed-form's `executionMode` state, but the advance action sends these values based on `workspaceExecutionMode`. If an operator changes the seed form to `app_owned` while the active workspace is still `provider_managed`, the selectors disappear but the last-selected values are still sent on advance. This isn't a bug (the stale values are valid), but showing the selectors when the workspace is provider-managed (regardless of the seed form state) could improve clarity:
♻️ Optional: show provider selectors when workspace is provider_managed
```diff
-{executionMode === "provider_managed" ? (
+{(executionMode === "provider_managed" || workspaceExecutionMode === "provider_managed") ? (
   <>
     {/* provider channel and outcome selectors */}
   </>
```
Verify each finding against the current code and only fix it if needed. In `@src/routes/demo/amps/-execution-modes.tsx` around lines 343 - 412, The provider selector blocks are currently rendered when executionMode === "provider_managed" which hides them if the seed form changes even though workspaceExecutionMode may still be provider-managed; update the rendering condition(s) that wrap the PROVIDER_CHANNEL_OPTIONS and PROVIDER_OUTCOME_OPTIONS blocks to check workspaceExecutionMode === "provider_managed" (or workspaceExecutionMode === "provider_managed" || executionMode === "provider_managed" if you prefer showing them when either is managed) so the UI reflects the active workspace mode; no other state handling (providerChannel, providerOutcome, setProviderChannel, setProviderOutcome) needs changing.
211-225: Consider removing the unused generic type parameter.

The `runAction<T>` generic is declared but never referenced in the function body; the return type is always `Promise<void>`. If the result is intentionally discarded, simplifying to a non-generic signature makes the intent clearer.

♻️ Optional simplification
```diff
-async function runAction<T>(
+async function runAction(
   action: "seed" | "advance" | "confirm",
-  promise: Promise<T>,
+  promise: Promise<unknown>,
   successMessage: string
 ) {
```
Verify each finding against the current code and only fix it if needed. In `@src/routes/demo/amps/-execution-modes.tsx` around lines 211 - 225, The runAction<T> function declares an unused generic T; remove the generic and make the promise parameter non-generic (e.g., Promise<unknown> or Promise<any>) and ensure the function signature reflects it returns Promise<void>; update the runAction declaration (the function named runAction) to drop <T> and adjust the promise parameter type accordingly so the intent of discarding the result is clear.

convex/payments/webhooks/rotessaPad.ts (1)
334-347: Use the repo's fluent builder for the new webhook endpoints.

Both new exports are introduced via raw `internalMutation(...)`/`internalAction(...)`, so they miss the standard fluent surface and the explicit `.internal()` marker. Since this logic is new, it's a good time to land it on the builder before more call sites grow around the raw helpers. As per coding guidelines: "Use `fluent-convex` for writing Convex functions with explicit `.public()` or `.internal()` visibility at export site" and "Exported Convex queries, mutations, and actions must use the fluent builder and end with explicit `.public()` or `.internal()` for visibility clarity".
fluent-convexfor writing Convex functions with explicit.public()or.internal()visibility at export site" and "Exported Convex queries, mutations, and actions must use the fluent builder and end with explicit.public()or.internal()for visibility clarity".Also applies to: 523-628
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/webhooks/rotessaPad.ts` around lines 334 - 347, The exported Convex function processRotessaPadTransferWebhookMutation is created with raw internalMutation(...) instead of the repo’s fluent builder; replace that export to use the fluent-convex builder (e.g., start from the module’s mutation/action builder import), move the same args and handler into the builder chain, and append the explicit .internal() visibility marker at the export site; do the same for the other new webhook exports referenced in the diff range (lines ~523-628) so all new webhook endpoints use the fluent builder and end with .internal().
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@convex/payments/recurringSchedules/activation.ts`:
- Around line 100-102: The thrown ConvexError message in
recurringSchedules/activation.ts lists only
bankAccount.metadata.rotessaCustomerId and
bankAccount.metadata.rotessaCustomerCustomIdentifier but omits the accepted
rotessaCustomIdentifier variant; update the ConvexError text (the throw new
ConvexError(...) line) to mention all supported
keys—bankAccount.metadata.rotessaCustomerId,
bankAccount.metadata.rotessaCustomerCustomIdentifier, and
bankAccount.metadata.rotessaCustomIdentifier—to match the validation logic that
accepts rotessaCustomIdentifier.
- Around line 177-189: The retry path that handles existing.status ===
"activation_failed" updates the DB and returns early (referencing existing,
ctx.db.patch and needsProviderCreate), which skips the mortgage-level
live-schedule exclusivity check later in this function; change the flow so that
after patching the record you do NOT return immediately but instead either (A)
fall through to the same mortgage/live-schedule validation that runs later (the
check at the mortgage-level that ensures only one live schedule per mortgage) or
(B) explicitly re-run that exclusivity check here (compare mortgage
pointer/existing.externalScheduleRef vs any current live schedule) and
abort/reset to activation_failed if a different live schedule exists; apply the
same adjustment to the analogous retry block noted (lines ~228-249) so no retry
can bypass the live-schedule guard.
In `@convex/payments/recurringSchedules/providers/rotessaRecurring.ts`:
- Around line 124-193: The loop currently stops when events.length >= maxEvents
which is unsafe because you sort and slice after fetching; instead continue
paginating until there are no more pages to fetch so the post-sort slice is
stable. Modify the loop that uses this.apiClient.getTransactionReport (the block
populating events, page and pagesVisited) to not break based on
events.length/maxEvents; iterate until rows.length < 1000 (page-size) or the API
indicates no more pages, then perform the sort/slice and compute nextCursor with
buildOccurrenceSortKey and encodeOccurrenceCursor as before.
In `@convex/payments/recurringSchedules/queries.ts`:
- Around line 241-248: The current logic fetches "sync_error" only after filling
"active" to limit, starving retries; change the selection to reserve a retry
quota and/or interleave results: compute a retryQuota (e.g. Math.max(1,
Math.floor(limit * 0.1)) or another policy), call
collectEligibleSchedules("active", limit - retryQuota) and
collectEligibleSchedules("sync_error", retryQuota), then merge results; if
either side returns fewer than requested, top up from the other status up to
limit so you still return up to limit items. Use the existing symbols
collectEligibleSchedules, active, syncError, and limit when implementing this.
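One concrete selection policy matching the comment, sketched over in-memory arrays; the 10% floor is the comment's example, not a requirement, and the real code would issue two `collectEligibleSchedules` calls instead of slicing:

```typescript
// Sketch: reserve a retry quota for sync_error schedules so they are never
// starved by a full active pool, then top up from whichever side has spares.
function selectSchedules<T>(active: T[], syncError: T[], limit: number): T[] {
  const retryQuota = Math.max(1, Math.floor(limit * 0.1));
  const retries = syncError.slice(0, retryQuota);
  const actives = active.slice(0, limit - retries.length);
  const remaining = limit - actives.length - retries.length;
  const topUp = syncError.slice(retries.length, retries.length + remaining);
  return [...actives, ...retries, ...topUp];
}
```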
In `@convex/payments/webhooks/rotessaPad.ts`:
- Around line 556-598: If findRotessaFinancialTransactionForWebhook() returns a
row but buildNormalizedOccurrenceFromRotessaRow(...) returns undefined, the
current code silently falls through to the transfer-fallback path and drops
provider-managed lifecycle updates; instead add an explicit branch when row
exists and event is falsy that treats this as an unresolved occurrence: call
patchPersistedTransferWebhookMetadata(...) with the webhookEventId and
normalizedEventType (and any transferRequestId if available), then surface the
unresolved state by throwing an Error (or returning an unresolved result) with a
clear message so the webhook is not marked processed (avoid calling
markTransferWebhookProcessed(...) in this path); update logic in the block
around buildNormalizedOccurrenceFromRotessaRow,
ingestExternalOccurrenceEventReference, patchPersistedTransferWebhookMetadata,
and markTransferWebhookProcessed to implement this behavior.
In `@specs/15-support-provider-managed-recurring-collection-schedules/design.md`:
- Around line 352-355: Update the documented match order in design.md to match
the implemented logic in resolvePlanEntryByOccurrence(): first match existing
`transferRequest` by `providerCode + providerRef`, then match
`collectionPlanEntry` by `externalCollectionScheduleId + externalOccurrenceRef`
(scoped occurrence), then fall back to `collectionPlanEntry` by unscoped
`externalOccurrenceRef`, then `collectionPlanEntry` by
`externalCollectionScheduleId + externalOccurrenceOrdinal`, and finally
`collectionPlanEntry` by `externalCollectionScheduleId + scheduledDate` when
unique; ensure the list references the exact symbols
(`resolvePlanEntryByOccurrence`, `transferRequest`, `collectionPlanEntry`,
`externalCollectionScheduleId`, `externalOccurrenceRef`,
`externalOccurrenceOrdinal`, `scheduledDate`) so the doc reflects the
implemented order.
---
Outside diff comments:
In `@convex/payments/collectionPlan/queries.ts`:
- Around line 106-113: The status enum now includes "provider_scheduled" but the
idempotency helpers still only scan for status = "planned", which can create
duplicate plan entries; update the idempotency logic in getEntryForObligation
and getPlannedEntriesForObligations to treat "provider_scheduled" the same as
"planned" by changing their queries/filters to include both statuses (e.g.,
status IN ("planned","provider_scheduled")) and any related checks that compare
to the literal "planned" so provider-managed rows are considered covered.
---
Nitpick comments:
In `@convex/payments/webhooks/rotessaPad.ts`:
- Around line 334-347: The exported Convex function
processRotessaPadTransferWebhookMutation is created with raw
internalMutation(...) instead of the repo’s fluent builder; replace that export
to use the fluent-convex builder (e.g., start from the module’s mutation/action
builder import), move the same args and handler into the builder chain, and
append the explicit .internal() visibility marker at the export site; do the
same for the other new webhook exports referenced in the diff range (lines
~523-628) so all new webhook endpoints use the fluent builder and end with
.internal().
In `@src/routes/demo/amps/-execution-modes.tsx`:
- Around line 343-412: The provider selector blocks are currently rendered when
executionMode === "provider_managed" which hides them if the seed form changes
even though workspaceExecutionMode may still be provider-managed; update the
rendering condition(s) that wrap the PROVIDER_CHANNEL_OPTIONS and
PROVIDER_OUTCOME_OPTIONS blocks to check workspaceExecutionMode ===
"provider_managed" (or workspaceExecutionMode === "provider_managed" ||
executionMode === "provider_managed" if you prefer showing them when either is
managed) so the UI reflects the active workspace mode; no other state handling
(providerChannel, providerOutcome, setProviderChannel, setProviderOutcome) needs
changing.
- Around line 211-225: The runAction<T> function declares an unused generic T;
remove the generic and make the promise parameter non-generic (e.g.,
Promise<unknown> or Promise<any>) and ensure the function signature reflects it
returns Promise<void>; update the runAction declaration (the function named
runAction) to drop <T> and adjust the promise parameter type accordingly so the
intent of discarding the result is clear.

ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d21f2622-c08e-445c-830a-cf83604fb14a
📒 Files selected for processing (19)
- convex/demo/__tests__/ampsE2e.test.ts
- convex/demo/__tests__/ampsExecutionModes.test.ts
- convex/demo/ampsExecutionModes.ts
- convex/payments/collectionPlan/queries.ts
- convex/payments/collectionPlan/runner.ts
- convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts
- convex/payments/recurringSchedules/activation.ts
- convex/payments/recurringSchedules/occurrenceIngestion.ts
- convex/payments/recurringSchedules/poller.ts
- convex/payments/recurringSchedules/providers/rotessaRecurring.ts
- convex/payments/recurringSchedules/queries.ts
- convex/payments/rotessa/api.ts
- convex/payments/webhooks/rotessaPad.ts
- convex/schema.ts
- specs/15-support-provider-managed-recurring-collection-schedules/design.md
- specs/15-support-provider-managed-recurring-collection-schedules/gap-analysis.md
- src/routes/demo.listings.tsx
- src/routes/demo/amps/-execution-modes.tsx
- src/test/convex/payments/rotessaManagedRecurringLifecycle.test.ts
✅ Files skipped from review due to trivial changes (5)
- src/routes/demo.listings.tsx
- convex/demo/__tests__/ampsE2e.test.ts
- src/test/convex/payments/rotessaManagedRecurringLifecycle.test.ts
- convex/demo/__tests__/ampsExecutionModes.test.ts
- convex/demo/ampsExecutionModes.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- convex/payments/collectionPlan/runner.ts
Actionable comments posted: 2
♻️ Duplicate comments (6)
specs/15-support-provider-managed-recurring-collection-schedules/gap-analysis.md (1)
23-29: ⚠️ Potential issue | 🟡 Minor — Remove the composite-cursor item from the “remaining gaps” list.
The Rotessa poller in this branch already moved to a composite cursor (`occurredAt` + secondary sort key). Leaving this bullet here makes the gap analysis stale again and points follow-up work at something the code now implements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@specs/15-support-provider-managed-recurring-collection-schedules/gap-analysis.md` around lines 23 - 29, Remove the outdated bullet that says "The Rotessa poll cursor still needs a composite sort key so equal timestamps cannot drop rows" from the "What does not fit the target design yet" list in specs/15-support-provider-managed-recurring-collection-schedules/gap-analysis.md; search for the exact sentence referencing the Rotessa poll cursor and delete that list item so the gap analysis no longer claims the composite-cursor work is outstanding.
specs/15-support-provider-managed-recurring-collection-schedules/design.md (1)
350-355: ⚠️ Potential issue | 🟡 Minor — Put the schedule-scoped occurrence ref back ahead of the global fallback.
This section still documents the older global-first order. The implementation now prefers `externalCollectionScheduleId + externalOccurrenceRef` before an unscoped `externalOccurrenceRef` lookup, so the doc currently points follow-up work back at the cross-schedule ambiguity this PR fixes.
💡 Suggested fix

```diff
 1. existing `transferRequest` by `providerCode + providerRef`
-2. `collectionPlanEntry` by `externalOccurrenceRef`
-3. `collectionPlanEntry` by `externalCollectionScheduleId + externalOccurrenceOrdinal`
-4. `collectionPlanEntry` by `externalCollectionScheduleId + scheduledDate` when unique
-5. otherwise fail closed:
+2. `collectionPlanEntry` by `externalCollectionScheduleId + externalOccurrenceRef`
+3. `collectionPlanEntry` by `externalOccurrenceRef`
+4. `collectionPlanEntry` by `externalCollectionScheduleId + externalOccurrenceOrdinal`
+5. `collectionPlanEntry` by `externalCollectionScheduleId + scheduledDate` when unique
+6. otherwise fail closed:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@specs/15-support-provider-managed-recurring-collection-schedules/design.md` around lines 350 - 355, Update the documented "Recommended match order" to match the implementation by moving the schedule-scoped occurrence lookup ahead of the global fallback: list matching by externalCollectionScheduleId + externalOccurrenceRef (i.e., schedule-scoped occurrence ref) before the unscoped externalOccurrenceRef lookup. Keep the other items (transferRequest by providerCode+providerRef, collectionPlanEntry by externalOccurrenceRef, collectionPlanEntry by externalCollectionScheduleId+externalOccurrenceOrdinal, collectionPlanEntry by externalCollectionScheduleId+scheduledDate when unique) but reorder so externalCollectionScheduleId+externalOccurrenceRef precedes the unscoped externalOccurrenceRef entry.
convex/payments/recurringSchedules/providers/rotessaRecurring.ts (1)
124-168: ⚠️ Potential issue | 🟠 Major — Don’t stop paginating before the final sort/slice is stable.
This loop exits as soon as the unsorted accumulator hits `maxEvents`, but the method then reorders everything by `(occurredAt, sortKey)` before choosing the emitted subset. Unless Rotessa guarantees report pages already arrive in that same order, a later page can contain an earlier occurrence that gets skipped once `nextCursor` advances from the post-sort slice.
💡 Suggested fix

```diff
-  while (events.length < maxEvents) {
+  while (true) {
     const rows = await this.apiClient.getTransactionReport({
       startDate: args.startDate,
       endDate: args.endDate,
       status: "All",
       page,
     });
@@
     events.push(...filtered);
     if (rows.length < 1000) {
       break;
     }
     page += 1;
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/recurringSchedules/providers/rotessaRecurring.ts` around lines 124 - 168, The loop stops fetching further pages once the unsorted events accumulator reaches maxEvents, which can miss earlier events that appear on later pages because you sort by (occurredAt, sortKey) after collecting; change the pagination loop (where getTransactionReport, page, rows, and events are used) to continue fetching all pages (i.e., remove the events.length < maxEvents loop guard and use a loop that only breaks when rows.length < pageSize / no-more-rows) so you collect every matching row, then apply buildNormalizedOccurrenceFromRotessaRow, dedupe/sort with buildOccurrenceSortKey, and only afterwards take/slice to maxEvents and calculate nextCursor relative to the final sorted list (preserve use of sinceCursor during post-filtering as now).
convex/payments/recurringSchedules/queries.ts (1)
241-248: ⚠️ Potential issue | 🟠 Major — Don't let `sync_error` schedules starve behind active backlog.
This still only considers `sync_error` rows after `active` has spare capacity. If the due active set stays at or above `limit`, recovery polls never run and transient sync failures can remain stuck indefinitely. Reserve some retry budget or interleave the two statuses before topping up.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/recurringSchedules/queries.ts` around lines 241 - 248, The current logic only fetches "sync_error" schedules after filling up "active" to limit which starves retries; change collectEligibleSchedules usage in this block to reserve or interleave slots for retries instead of waiting for spare capacity: compute a retryBudget (e.g., a fixed minimum like 1 or a percentage of limit), call collectEligibleSchedules("active", limit - retryBudget) and collectEligibleSchedules("sync_error", retryBudget), then merge the results (or alternately fetch both with smaller chunks and interleave them) so "sync_error" entries get polled even when "active" is at capacity; update variables active, syncError and the final return to reflect the merged/interleaved list.
convex/payments/recurringSchedules/occurrenceIngestion.ts (1)
410-427: ⚠️ Potential issue | 🟠 Major — Don't reopen terminal plan entries on provider replays.
Line 411 still forces every matched entry back to `executing`. A replayed `Approved` or later `Chargeback` against an already `completed`, `cancelled`, or `rescheduled` entry will reopen the row even though only the provider mirror fields changed. Only promote live scheduled/provider-owned rows; preserve terminal states.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/recurringSchedules/occurrenceIngestion.ts` around lines 410 - 427, The patch unconditionally sets status to "executing" and reopens terminal plan entries; change the logic in occurrenceIngestion.ts around the args.ctx.db.patch call so you only set status: "executing" when the existing args.planEntry.status is not a terminal state (e.g., exclude "completed", "cancelled", "rescheduled" — use whatever terminal constants your codebase uses); compute a newStatus or conditionally include the status field in the patch for live/provider-owned rows only, but always update the provider mirror fields (externalCollectionScheduleId, externalOccurrenceOrdinal, externalOccurrenceRef, externalProviderEventStatus, externalProviderReason, externalLastReportedAt, externalLastIngestedVia, collectionAttemptId, executedAt) so replayed provider updates don’t reopen completed/cancelled/rescheduled entries.
convex/payments/webhooks/rotessaPad.ts (1)
556-598: ⚠️ Potential issue | 🟠 Major — Fail when a matched financial row can't be normalized.
If `row` exists but `buildNormalizedOccurrenceFromRotessaRow()` returns nothing, this block silently falls through to the legacy transfer mutation. For provider-managed occurrences that have not materialized a transfer yet, that drops the lifecycle update instead of surfacing an unresolved occurrence. Treat `!event` as a failure/unresolved path instead of falling back.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/webhooks/rotessaPad.ts` around lines 556 - 598, When buildNormalizedOccurrenceFromRotessaRow(...) returns falsy, treat that as an unresolved occurrence instead of falling through: call ctx.runMutation(ingestExternalOccurrenceEventReference, { event: { externalScheduleRef: `${row.transaction_schedule_id}`, receivedVia: "webhook", rawProviderReason: args.reason ?? args.returnCode, /* include the raw row or other provider fields needed */ } }) to persist an unresolved occurrence, then call patchPersistedTransferWebhookMetadata with webhookEventId: args.webhookEventId, normalizedEventType, and the returned transferRequestId, and if the ingestion result has outcome === "unresolved" throw an Error(ingestionResult.reason); this mirrors the existing happy/error paths used after buildNormalizedOccurrenceFromRotessaRow and prevents silently falling back to the legacy transfer mutation.
🧹 Nitpick comments (1)
convex/payments/webhooks/rotessaPad.ts (1)
334-347: Use the repo’s fluent-convex builder for these webhook exports.
These new/changed exported Convex callables are still on raw `internalMutation(...)` / `internalAction(...)`, so visibility is no longer obvious at the export site and this file diverges from the repo’s standard builder surface.
As per coding guidelines "Use fluent-convex as the canonical way of writing Convex functions in this application" and "Exported Convex queries, mutations, and actions must use the fluent-convex builder and end with an explicit `.public()` or `.internal()` so visibility is obvious at the export site".
Also applies to: 523-628
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/webhooks/rotessaPad.ts` around lines 334 - 347, The exported Convex callable processRotessaPadTransferWebhookMutation (and other similar exports in this file, e.g., the ones between the noted ranges) currently uses raw internalMutation/internalAction; replace these with the repo's fluent-convex builder pattern so visibility is explicit at the export site (e.g., use the fluent builder to construct the mutation/action and call .internal() or .public() as appropriate). Locate exports named like processRotessaPadTransferWebhookMutation and any other internalMutation/internalAction exports in the 523-628 area, refactor their definitions to use the fluent-convex builder API, and ensure each ends with .internal() or .public() to match repository guidelines.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@convex/demo/__tests__/ampsE2e.test.ts`:
- Around line 36-47: In restoreEnv(), when previousGtHashchain or
previousCashLedgerHashchain is undefined you should remove the env var instead
of assigning undefined; replace assignments like
process.env.DISABLE_GT_HASHCHAIN = undefined and
process.env.DISABLE_CASH_LEDGER_HASHCHAIN = undefined with delete
process.env.DISABLE_GT_HASHCHAIN and delete
process.env.DISABLE_CASH_LEDGER_HASHCHAIN respectively, and keep setting the
variables to their previousGtHashchain / previousCashLedgerHashchain values when
they are defined.
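The reason `delete` is required: Node coerces every `process.env` assignment to a string, so writing `undefined` stores the literal string `"undefined"` and leaves the key set. A minimal helper showing the fix (the name `restoreEnvVar` is ours, not the test file's):

```typescript
// Restore an env var to its pre-test value. When there was no previous value,
// the key must be *deleted* — assigning undefined would leave the key present
// with the string value "undefined", because process.env stringifies writes.
function restoreEnvVar(name: string, previous: string | undefined): void {
  if (previous === undefined) {
    delete process.env[name];
  } else {
    process.env[name] = previous;
  }
}
```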
In `@convex/payments/webhooks/rotessaPad.ts`:
- Around line 542-555: The condition guarding the occurrence-ingestion path only
checks isRotessaFinancialLifecycleStatus(args.eventType) so webhook event types
like "transaction.pending", "transaction.completed", and "transaction.nsf" never
trigger the row lookup; update the guard so the handler also treats mapped
webhook event types as financial lifecycle events—either extend
isRotessaFinancialLifecycleStatus to return true for those mapped strings or add
an OR check (e.g., isRotessaFinancialLifecycleStatus(args.eventType) ||
isRotessaTransactionWebhookEvent(args.eventType) or explicit checks for
"transaction.pending"/"transaction.completed"/"transaction.nsf") before calling
findRotessaFinancialTransactionForWebhook, ensuring the occurrence-ingestion
branch runs for provider-managed webhooks.
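A hedged sketch of the widened guard; the exact status and event-type strings are assumptions drawn from the review text, not verified against the Rotessa client:

```typescript
// Raw Rotessa financial lifecycle statuses (illustrative set — the review
// mentions Approved and Chargeback; Pending/Declined are assumed siblings).
const FINANCIAL_LIFECYCLE_STATUSES: ReadonlySet<string> = new Set([
  "Pending",
  "Approved",
  "Declined",
  "Chargeback",
]);

// Mapped webhook event types that should also route into occurrence ingestion,
// per the review comment.
const TRANSACTION_WEBHOOK_EVENTS: ReadonlySet<string> = new Set([
  "transaction.pending",
  "transaction.completed",
  "transaction.nsf",
]);

// Widened guard: either form of event triggers the row lookup.
function isFinancialLifecycleEvent(eventType: string): boolean {
  return (
    FINANCIAL_LIFECYCLE_STATUSES.has(eventType) ||
    TRANSACTION_WEBHOOK_EVENTS.has(eventType)
  );
}
```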
---
Duplicate comments:
In `@convex/payments/recurringSchedules/occurrenceIngestion.ts`:
- Around line 410-427: The patch unconditionally sets status to "executing" and
reopens terminal plan entries; change the logic in occurrenceIngestion.ts around
the args.ctx.db.patch call so you only set status: "executing" when the existing
args.planEntry.status is not a terminal state (e.g., exclude "completed",
"cancelled", "rescheduled" — use whatever terminal constants your codebase
uses); compute a newStatus or conditionally include the status field in the
patch for live/provider-owned rows only, but always update the provider mirror
fields (externalCollectionScheduleId, externalOccurrenceOrdinal,
externalOccurrenceRef, externalProviderEventStatus, externalProviderReason,
externalLastReportedAt, externalLastIngestedVia, collectionAttemptId,
executedAt) so replayed provider updates don’t reopen
completed/cancelled/rescheduled entries.
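One way to sketch that guard (the status names follow the prompt; the patch-builder helper is hypothetical):

```typescript
// Plan-entry statuses as described in the review; names are illustrative.
type EntryStatus =
  | "provider_scheduled"
  | "executing"
  | "completed"
  | "cancelled"
  | "rescheduled";

const TERMINAL_STATUSES: ReadonlySet<EntryStatus> = new Set([
  "completed",
  "cancelled",
  "rescheduled",
]);

// Always mirror provider fields, but only promote live rows to "executing";
// a replayed provider event must never reopen a terminal entry.
function buildEntryPatch(
  currentStatus: EntryStatus,
  providerMirrorFields: Record<string, unknown>,
): Record<string, unknown> {
  const patch: Record<string, unknown> = { ...providerMirrorFields };
  if (!TERMINAL_STATUSES.has(currentStatus)) {
    patch["status"] = "executing";
  }
  return patch;
}
```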
In `@convex/payments/recurringSchedules/providers/rotessaRecurring.ts`:
- Around line 124-168: The loop stops fetching further pages once the unsorted
events accumulator reaches maxEvents, which can miss earlier events that appear
on later pages because you sort by (occurredAt, sortKey) after collecting;
change the pagination loop (where getTransactionReport, page, rows, and events
are used) to continue fetching all pages (i.e., remove the events.length <
maxEvents loop guard and use a loop that only breaks when rows.length < pageSize
/ no-more-rows) so you collect every matching row, then apply
buildNormalizedOccurrenceFromRotessaRow, dedupe/sort with
buildOccurrenceSortKey, and only afterwards take/slice to maxEvents and
calculate nextCursor relative to the final sorted list (preserve use of
sinceCursor during post-filtering as now).
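The exhaust-then-sort loop this prompt asks for can be sketched as follows; `fetchPage` stands in for the real `getTransactionReport` call, and the 1000-row page size mirrors the review text:

```typescript
interface ReportRow {
  occurredAt: number;
  sortKey: string;
}

const PAGE_SIZE = 1000;

// Collect *every* page first, then sort by the composite key, then slice.
// Stopping early at maxEvents on the unsorted accumulator could drop an
// earlier occurrence that only appears on a later page.
async function collectOccurrences(
  fetchPage: (page: number) => Promise<ReportRow[]>,
  maxEvents: number,
): Promise<ReportRow[]> {
  const all: ReportRow[] = [];
  let page = 1;
  while (true) {
    const rows = await fetchPage(page);
    all.push(...rows);
    if (rows.length < PAGE_SIZE) break; // short page => no more rows
    page += 1;
  }
  // Composite (occurredAt, sortKey) order is only stable once all pages are in.
  all.sort(
    (a, b) => a.occurredAt - b.occurredAt || a.sortKey.localeCompare(b.sortKey),
  );
  return all.slice(0, maxEvents);
}
```

The cursor for the next poll would then be derived from the final sorted slice, not from the raw accumulator.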
In `@convex/payments/recurringSchedules/queries.ts`:
- Around line 241-248: The current logic only fetches "sync_error" schedules
after filling up "active" to limit which starves retries; change
collectEligibleSchedules usage in this block to reserve or interleave slots for
retries instead of waiting for spare capacity: compute a retryBudget (e.g., a
fixed minimum like 1 or a percentage of limit), call
collectEligibleSchedules("active", limit - retryBudget) and
collectEligibleSchedules("sync_error", retryBudget), then merge the results (or
alternately fetch both with smaller chunks and interleave them) so "sync_error"
entries get polled even when "active" is at capacity; update variables active,
syncError and the final return to reflect the merged/interleaved list.
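A minimal sketch of the reserved retry budget, assuming a `collect(status, max)` helper shaped like `collectEligibleSchedules`:

```typescript
type ScheduleStatus = "active" | "sync_error";

interface Schedule {
  id: string;
  status: ScheduleStatus;
}

// Reserve part of every polling batch for sync_error recovery so a saturated
// active backlog can never starve retries. Budget formula is illustrative.
function pickPollingBatch(
  collect: (status: ScheduleStatus, max: number) => Schedule[],
  limit: number,
): Schedule[] {
  const retryBudget = Math.max(1, Math.floor(limit * 0.2));
  const retries = collect("sync_error", retryBudget);
  // Active schedules fill whatever the retries did not actually use.
  const active = collect("active", limit - retries.length);
  return [...active, ...retries];
}
```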
In `@convex/payments/webhooks/rotessaPad.ts`:
- Around line 556-598: When buildNormalizedOccurrenceFromRotessaRow(...) returns
falsy, treat that as an unresolved occurrence instead of falling through: call
ctx.runMutation(ingestExternalOccurrenceEventReference, { event: {
externalScheduleRef: `${row.transaction_schedule_id}`, receivedVia: "webhook",
rawProviderReason: args.reason ?? args.returnCode, /* include the raw row or
other provider fields needed */ } }) to persist an unresolved occurrence, then
call patchPersistedTransferWebhookMetadata with webhookEventId:
args.webhookEventId, normalizedEventType, and the returned transferRequestId,
and if the ingestion result has outcome === "unresolved" throw an
Error(ingestionResult.reason); this mirrors the existing happy/error paths used
after buildNormalizedOccurrenceFromRotessaRow and prevents silently falling back
to the legacy transfer mutation.
In `@specs/15-support-provider-managed-recurring-collection-schedules/design.md`:
- Around line 350-355: Update the documented "Recommended match order" to match
the implementation by moving the schedule-scoped occurrence lookup ahead of the
global fallback: list matching by externalCollectionScheduleId +
externalOccurrenceRef (i.e., schedule-scoped occurrence ref) before the unscoped
externalOccurrenceRef lookup. Keep the other items (transferRequest by
providerCode+providerRef, collectionPlanEntry by externalOccurrenceRef,
collectionPlanEntry by externalCollectionScheduleId+externalOccurrenceOrdinal,
collectionPlanEntry by externalCollectionScheduleId+scheduledDate when unique)
but reorder so externalCollectionScheduleId+externalOccurrenceRef precedes the
unscoped externalOccurrenceRef entry.
In
`@specs/15-support-provider-managed-recurring-collection-schedules/gap-analysis.md`:
- Around line 23-29: Remove the outdated bullet that says "The Rotessa poll
cursor still needs a composite sort key so equal timestamps cannot drop rows"
from the "What does not fit the target design yet" list in
specs/15-support-provider-managed-recurring-collection-schedules/gap-analysis.md;
search for the exact sentence referencing the Rotessa poll cursor and delete
that list item so the gap analysis no longer claims the composite-cursor work is
outstanding.
---
Nitpick comments:
In `@convex/payments/webhooks/rotessaPad.ts`:
- Around line 334-347: The exported Convex callable
processRotessaPadTransferWebhookMutation (and other similar exports in this
file, e.g., the ones between the noted ranges) currently uses raw
internalMutation/internalAction; replace these with the repo's fluent-convex
builder pattern so visibility is explicit at the export site (e.g., use the
fluent builder to construct the mutation/action and call .internal() or
.public() as appropriate). Locate exports named like
processRotessaPadTransferWebhookMutation and any other
internalMutation/internalAction exports in the 523-628 area, refactor their
definitions to use the fluent-convex builder API, and ensure each ends with
.internal() or .public() to match repository guidelines.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 55da0063-cf54-4782-b4a4-062ff79d12ce
📒 Files selected for processing (19)
- convex/demo/__tests__/ampsE2e.test.ts
- convex/demo/__tests__/ampsExecutionModes.test.ts
- convex/demo/ampsExecutionModes.ts
- convex/payments/collectionPlan/queries.ts
- convex/payments/collectionPlan/runner.ts
- convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts
- convex/payments/recurringSchedules/activation.ts
- convex/payments/recurringSchedules/occurrenceIngestion.ts
- convex/payments/recurringSchedules/poller.ts
- convex/payments/recurringSchedules/providers/rotessaRecurring.ts
- convex/payments/recurringSchedules/queries.ts
- convex/payments/rotessa/api.ts
- convex/payments/webhooks/rotessaPad.ts
- convex/schema.ts
- specs/15-support-provider-managed-recurring-collection-schedules/design.md
- specs/15-support-provider-managed-recurring-collection-schedules/gap-analysis.md
- src/routes/demo.listings.tsx
- src/routes/demo/amps/-execution-modes.tsx
- src/test/convex/payments/rotessaManagedRecurringLifecycle.test.ts
✅ Files skipped from review due to trivial changes (3)
- src/routes/demo.listings.tsx
- convex/payments/recurringSchedules/activation.ts
- convex/demo/ampsExecutionModes.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- convex/payments/collectionPlan/runner.ts
- convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts
👮 Files not reviewed due to content moderation or server errors (2)
- convex/demo/__tests__/ampsExecutionModes.test.ts
- convex/payments/recurringSchedules/poller.ts
Actionable comments posted: 4
🧹 Nitpick comments (3)
convex/demo/__tests__/ampsE2e.test.ts (1)
78-245: Optional DRY refactor: extract a small harness runner helper.
The repeated `createHarness` + try/finally + timer cleanup pattern can be centralized to reduce future missed-cleanup risk.
♻️ Optional refactor sketch

```diff
+async function withHarness(
+  options: { fakeTimers?: boolean },
+  run: (ctx: {
+    t: ReturnType<typeof convexTest>;
+    admin: ReturnType<ReturnType<typeof convexTest>["withIdentity"]>;
+  }) => Promise<void>,
+) {
+  if (options.fakeTimers) vi.useFakeTimers();
+  const { t, restoreEnv } = createHarness();
+  const admin = t.withIdentity(ADMIN_IDENTITY);
+  try {
+    await run({ t, admin });
+  } finally {
+    if (options.fakeTimers) vi.useRealTimers();
+    restoreEnv();
+  }
+}
```

Based on learnings: "Apply DRY (Don't Repeat Yourself) principle; extract shared logic into reusable functions or modules when the same code is written more than once."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/demo/__tests__/ampsE2e.test.ts` around lines 78 - 245, Extract a small helper to centralize the repeated harness setup/teardown and fake-timer handling: create a function (e.g. runWithHarness or withHarnessCleanup) that calls createHarness(), invokes vi.useFakeTimers(), yields { t, admin, runId } or accepts a callback to run the test body, and ensures vi.useRealTimers() and restoreEnv() are called in a finally block using t.finishAllScheduledFunctions where needed; replace the repeated patterns in the tests that call createHarness, vi.useFakeTimers/vi.useRealTimers, restoreEnv, and t.finishAllScheduledFunctions to call this helper and keep existing calls to api.demo.ampsE2e.* actions/queries unchanged.
convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts (2)
389-399: Share the activation idempotency-key builder with production code.
The helper at Lines 389-399 hard-codes the same key format the activation flow depends on, and the seeded resume/retry setups at Lines 712-721 and 875-887 only work while that format stays in lockstep. If the production key changes, these tests stop exercising the intended path. Move the key construction into a shared non-endpoint helper and import it here.
Based on learnings: Apply DRY (Don't Repeat Yourself) principle; extract shared logic into reusable functions or modules when the same code is written more than once.
Also applies to: 712-721, 875-887
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts` around lines 389 - 399, The test-local buildActivationIdempotencyKey function duplicates production idempotency-key logic and must be extracted to a shared helper so tests and production use the same implementation; create an exported helper (e.g., buildActivationIdempotencyKey) in a non-endpoint/shared module used by production activation flow, ensure it accepts the same args shape (bankAccountId: Id<"bankAccounts">, planEntryIds: Id<"collectionPlanEntries">[]), remove the local function from this test and the duplicated versions at the other spots (the blocks around the existing 712-721 and 875-887 regions), and import the shared function into the test so both tests and production derive the idempotency key from the single canonical implementation.
230-281: Assert the Rotessa create-schedule payload.
The POST stub at Lines 247-255 returns success for any create-schedule request, and the happy-path activation check at Line 534 only verifies that `fetch` ran. A regression in amount, installment count, date window, or the resolved customer identifier would still leave this suite green. Capture the request JSON in the harness and assert the fields that define the provider-managed schedule.
Also applies to: 483-535
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts` around lines 230 - 281, The POST stub in installRotessaFetchHarness accepts any create-schedule request without validating payload; modify installRotessaFetchHarness to capture and expose the request JSON for POSTs to the create endpoints (transaction_schedules and transaction_schedules/create_with_custom_identifier) by reading init.body and parsing JSON inside the vi.fn(fetchMock) and storing it (e.g., lastCreatePayload) alongside fetchMock, setNextProcessDate and setTransactionRows; then update the tests that assert activation (the happy-path around Line 534 / 483-535) to read that captured payload and assert the critical fields (amount, installments, next_process_date/process_date window, and the resolved customer identifier) so regressions in those values fail the suite. Ensure TRANSACTION_SCHEDULE_PATH_RE behavior is unchanged and the harness still returns scheduleResponse for status responses.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts`:
- Around line 449-462: The test sets a crypto polyfill by directly assigning
globalThis.crypto in the beforeEach block (see beforeEach and the use of
webcrypto), which can leak across tests; replace that direct assignment with
Vitest's vi.stubGlobal('crypto', webcrypto) so the global is stubbed and will be
restored by vi.unstubAllGlobals() in afterEach—update the beforeEach to call
vi.stubGlobal when webcrypto is needed and remove the globalThis.crypto ??=
webcrypto assignment to avoid test contamination.
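For intuition, this is roughly what `vi.stubGlobal` / `vi.unstubAllGlobals` provide: record whether the global existed before overwriting, so teardown can either restore the prior value or delete the key entirely (a simplified re-implementation for illustration, not Vitest's actual code):

```typescript
// Remember each global's pre-stub state so teardown can undo it exactly.
const stubbedGlobals = new Map<string, { had: boolean; value: unknown }>();

function stubGlobal(name: string, value: unknown): void {
  const g = globalThis as Record<string, unknown>;
  if (!stubbedGlobals.has(name)) {
    stubbedGlobals.set(name, { had: name in g, value: g[name] });
  }
  g[name] = value;
}

function unstubAllGlobals(): void {
  const g = globalThis as Record<string, unknown>;
  for (const [name, prev] of stubbedGlobals) {
    if (prev.had) {
      g[name] = prev.value; // restore the original value
    } else {
      delete g[name]; // the global never existed: remove it entirely
    }
  }
  stubbedGlobals.clear();
}
```

A bare `globalThis.crypto ??= webcrypto` has no record of the prior state, which is exactly why it leaks across tests.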
In `@convex/payments/recurringSchedules/occurrenceIngestion.ts`:
- Around line 599-605: The code currently returns an "unresolved" outcome when
planEntry is missing, which drops the event; instead, persist a durable
audit/rejection record before returning by creating/using the audit persistence
path used elsewhere (e.g., call a function like
insertAuditEntry/createRejectionRecord/persistProviderEventAudit) with the
original external occurrence payload, metadata (timestamp, provider id, event
id), and reason text; ensure the audit entry is write-confirmed (await the
insert) and include a unique ID/hash so operators can query/replay the event,
then return the same unresolved object after the persistence completes.
- Around line 210-230: The mirror-field patches (e.g., in the branch that
updates existingByOccurrenceKey and the block updating existingAttempt when
args.planEntry.collectionAttemptId is set) unconditionally write
rawProviderStatus/rawProviderReason and
providerLastReportedAt/externalLastReportedAt, allowing delayed replays to
overwrite newer timestamps; change these db.patch calls to first read the
current providerLastReportedAt/externalLastReportedAt (from
existingByOccurrenceKey or existingAttempt) and only perform the patch when
args.event.occurredAt is absent (treat as always apply) or when
args.event.occurredAt >= (existing providerLastReportedAt || 0); otherwise skip
updating the raw mirror fields (you can still apply canonical state transitions
separately). Apply the same timestamp-guarded update logic to the other similar
blocks referenced (around the other ranges) so
providerLastReportedAt/externalLastReportedAt and raw lifecycle fields are never
regressed by stale events.
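The timestamp guard can be reduced to a small predicate; the field names follow the prompt, and the shapes are illustrative:

```typescript
interface MirrorState {
  providerLastReportedAt?: number;
}

interface ProviderEvent {
  occurredAt?: number;
  rawProviderStatus: string;
}

// Only apply raw provider mirror fields when the incoming event is not older
// than the last applied report. Events without a timestamp always apply,
// matching the behavior the review comment describes.
function shouldApplyMirrorUpdate(existing: MirrorState, event: ProviderEvent): boolean {
  if (event.occurredAt === undefined) return true;
  return event.occurredAt >= (existing.providerLastReportedAt ?? 0);
}
```

Canonical state transitions can still be applied separately; the guard only protects the raw mirror fields from delayed replays.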
In `@convex/payments/recurringSchedules/queries.ts`:
- Around line 118-163: The current code re-sorts requestedPlanEntries by
scheduledDate (creating sortedRequestedPlanEntries), which loses the canonical
order from eligibleFuturePlanEntries and can mis-order same-day installments;
instead derive the requested subset in the order of eligibleFuturePlanEntries by
filtering eligibleFuturePlanEntryIds (or eligibleFuturePlanEntries) to include
only the requestedPlanEntryIdStrings so the canonical eligible-window order is
preserved; replace the scheduledDate sort and creation of
requestedPlanEntryIdStrings with logic that builds sortedRequestedPlanEntries by
iterating eligibleFuturePlanEntries and collecting entries whose _id is in the
requested set (preserving that order) before running the eligibility checks and
contiguous-window validation.
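Sketched concretely (the `Entry` shape is illustrative), deriving the requested subset by filtering the canonical list preserves same-day ordering instead of re-sorting:

```typescript
interface Entry {
  _id: string;
  scheduledDate: string;
}

// Keep the canonical eligible-window order: iterate the eligible list and
// retain only the requested IDs, rather than re-sorting the requested subset
// by scheduledDate (which can mis-order same-day installments).
function selectRequestedInCanonicalOrder(
  eligibleFuturePlanEntries: Entry[],
  requestedIds: string[],
): Entry[] {
  const requested = new Set(requestedIds);
  return eligibleFuturePlanEntries.filter((e) => requested.has(e._id));
}
```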
---
Nitpick comments:
In `@convex/demo/__tests__/ampsE2e.test.ts`:
- Around line 78-245: Extract a small helper to centralize the repeated harness
setup/teardown and fake-timer handling: create a function (e.g. runWithHarness
or withHarnessCleanup) that calls createHarness(), invokes vi.useFakeTimers(),
yields { t, admin, runId } or accepts a callback to run the test body, and
ensures vi.useRealTimers() and restoreEnv() are called in a finally block using
t.finishAllScheduledFunctions where needed; replace the repeated patterns in the
tests that call createHarness, vi.useFakeTimers/vi.useRealTimers, restoreEnv,
and t.finishAllScheduledFunctions to call this helper and keep existing calls to
api.demo.ampsE2e.* actions/queries unchanged.
In
`@convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts`:
- Around line 389-399: The test-local buildActivationIdempotencyKey function
duplicates production idempotency-key logic and must be extracted to a shared
helper so tests and production use the same implementation; create an exported
helper (e.g., buildActivationIdempotencyKey) in a non-endpoint/shared module
used by production activation flow, ensure it accepts the same args shape
(bankAccountId: Id<"bankAccounts">, planEntryIds:
Id<"collectionPlanEntries">[]), remove the local function from this test and the
duplicated versions at the other spots (the blocks around the existing 712-721
and 875-887 regions), and import the shared function into the test so both tests
and production derive the idempotency key from the single canonical
implementation.
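One possible shape for the shared helper, using plain strings in place of the Convex `Id` types; the exact key format is an assumption for illustration, not the production value:

```typescript
// Hypothetical shared helper; the real one should live in a non-endpoint module
// and accept Id<"bankAccounts"> / Id<"collectionPlanEntries">[]. Sorting a copy
// of the plan entry ids makes the key independent of caller ordering without
// mutating the input array.
function buildActivationIdempotencyKey(
  bankAccountId: string,
  planEntryIds: string[]
): string {
  const sortedIds = [...planEntryIds].sort();
  return `recurring-activation:${bankAccountId}:${sortedIds.join(",")}`;
}
```

With tests and production importing this single function, a format change in one place can no longer silently diverge from the other.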
- Around line 230-281: The POST stub in installRotessaFetchHarness accepts any
create-schedule request without validating payload; modify
installRotessaFetchHarness to capture and expose the request JSON for POSTs to
the create endpoints (transaction_schedules and
transaction_schedules/create_with_custom_identifier) by reading init.body and
parsing JSON inside the vi.fn(fetchMock) and storing it (e.g.,
lastCreatePayload) alongside fetchMock, setNextProcessDate and
setTransactionRows; then update the tests that assert activation (the happy-path
around Line 534 / 483-535) to read that captured payload and assert the critical
fields (amount, installments, next_process_date/process_date window, and the
resolved customer identifier) so regressions in those values fail the suite.
Ensure TRANSACTION_SCHEDULE_PATH_RE behavior is unchanged and the harness still
returns scheduleResponse for status responses.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8b7e4012-163b-4691-93f7-66cc18995931
📒 Files selected for processing (8)
- convex/demo/__tests__/ampsE2e.test.ts
- convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts
- convex/payments/recurringSchedules/activation.ts
- convex/payments/recurringSchedules/occurrenceIngestion.ts
- convex/payments/recurringSchedules/providers/rotessaRecurring.ts
- convex/payments/recurringSchedules/queries.ts
- convex/payments/webhooks/rotessaPad.ts
- specs/15-support-provider-managed-recurring-collection-schedules/design.md

✅ Files skipped from review due to trivial changes (1)
- convex/payments/recurringSchedules/activation.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- convex/payments/webhooks/rotessaPad.ts
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@convex/payments/recurringSchedules/queries.ts`:
- Around line 199-205: The handler currently accepts fractional limits because
v.number() allows decimals; normalize args.limit to an integer immediately
(e.g., const limit = Math.max(1, Math.min(Math.floor(args.limit ?? 25), 100)))
so all subsequent math (syncErrorBudget, targetCount, pageSize) uses the same
integer; apply the same normalization inside
listExternalCollectionScheduleSyncIssues to keep both callers consistent and
ensure .slice(0, limit) matches the computed counts.
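The suggested normalization can be extracted into a helper so both queries share one clamp (bounds taken from the comment):

```typescript
// Floor to an integer first, then clamp to [1, 100]. The default of 25 and the
// cap of 100 mirror the values in the review comment.
function normalizeLimit(limit: number | undefined): number {
  return Math.max(1, Math.min(Math.floor(limit ?? 25), 100));
}
```

All downstream math (error budget, target count, page size) then works from the same integer, and `.slice(0, limit)` matches the computed counts.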
- Around line 45-69: Capture the asOf snapshot before any awaits by moving the
line that computes asOf (const asOf = args.asOf ?? Date.now()) to the start of
the handler (before the first ctx.db.get call). Specifically, compute asOf
immediately after entering the .handler(async (ctx, args) => { block so that
mortgage, bankAccount, borrowerLink and primaryBorrowerLinks reads use the same
invocation-time boundary rather than a time measured after the awaited DB reads.
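A minimal ordering sketch, with the DB read abstracted to an injected async function (the real handler reads mortgage, bank account, and borrower links via `ctx.db.get`):

```typescript
// Capture the time boundary before any awaited reads so every query in the
// handler shares the same invocation-time snapshot.
async function handlerSketch(
  args: { asOf?: number },
  readDb: () => Promise<string>
): Promise<{ asOf: number; doc: string }> {
  const asOf = args.asOf ?? Date.now(); // computed before the first await
  const doc = await readDb();
  return { asOf, doc };
}
```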
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 73fe4e3a-0f3f-4b32-869b-87e204c9527f
📒 Files selected for processing (3)
- convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts
- convex/payments/recurringSchedules/occurrenceIngestion.ts
- convex/payments/recurringSchedules/queries.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts
- convex/payments/recurringSchedules/occurrenceIngestion.ts
Force-pushed baed938 to 5d8fa22 (compare)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
convex/payments/collectionPlan/initialScheduling.ts (1)
99-154: ⚠️ Potential issue | 🟠 Major

Reject external schedule linkage on non-`provider_scheduled` rows. The validation only protects the happy path. A caller can still pass `externalCollectionScheduleId`/`externalOccurrenceOrdinal` with `status: "planned"`, and this function will default `executionMode` to `"app_owned"`. That leaves the row eligible for the internal runner even though it already carries provider-managed linkage, which is exactly the kind of split-brain ownership this PR is trying to avoid.

🔧 Proposed fix

```diff
 export async function createEntryImpl(
   ctx: Pick<MutationCtx, "db">,
   args: CreateCollectionPlanEntryArgs
 ): Promise<Id<"collectionPlanEntries">> {
   const executionMode = args.executionMode ?? "app_owned";
+  const hasExternalScheduleLinkage =
+    args.externalCollectionScheduleId !== undefined ||
+    args.externalOccurrenceOrdinal !== undefined;
   const [firstObligationId] = args.obligationIds;
   if (!firstObligationId) {
     throw new ConvexError(
       "Collection plan entries require at least one obligation"
     );
@@
   if (args.status === "provider_scheduled") {
     if (executionMode !== "provider_managed") {
       throw new ConvexError(
         'Provider-scheduled collection plan entries require executionMode = "provider_managed".'
       );
@@
     if (args.externalOccurrenceOrdinal === undefined) {
       throw new ConvexError(
         "Provider-scheduled collection plan entries require externalOccurrenceOrdinal."
       );
     }
+  } else if (hasExternalScheduleLinkage) {
+    throw new ConvexError(
+      "External schedule linkage is only valid for provider_scheduled collection plan entries."
+    );
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/collectionPlan/initialScheduling.ts` around lines 99 - 154, The code currently allows externalCollectionScheduleId/externalOccurrenceOrdinal to be set when status !== "provider_scheduled", creating split-brain; inside the initial scheduling logic (in initialScheduling.ts, around where args.status and executionMode are handled) add a guard that if args.status !== "provider_scheduled" and (args.externalCollectionScheduleId is present or args.externalOccurrenceOrdinal is not undefined) then throw a ConvexError rejecting external schedule linkage; ensure this check runs before inserting the collectionPlanEntries and complements the existing provider_scheduled branch that enforces executionMode === "provider_managed".
♻️ Duplicate comments (1)
convex/payments/recurringSchedules/activation.ts (1)
177-189: ⚠️ Potential issue | 🔴 Critical

Retry reopening still skips the live-schedule exclusivity guard. This `activation_failed` branch still returns before the mortgage-level live-schedule checks run. If another schedule became live after the first failure, a same-key retry can reopen the stale row and later overwrite `mortgage.activeExternalCollectionScheduleId`.

Also applies to: 202-249
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/recurringSchedules/activation.ts` around lines 177 - 189, The activation_failed branch returns early and skips the mortgage-level live-schedule exclusivity check, allowing a reopened schedule to overwrite mortgage.activeExternalCollectionScheduleId; remove the early return and instead run the same live-schedule exclusivity logic used later in the flow before marking the row "activating" and returning. Concretely: in the block that handles existing.status === "activation_failed" (where you call ctx.db.patch on existing._id), call the mortgage-level check that inspects mortgage.activeExternalCollectionScheduleId and enforces the single-live-schedule rule (the same logic used in the later 202-249 path), and only proceed to patch status to "activating" and return { needsProviderCreate, scheduleId } if the exclusivity check permits activation; otherwise handle the conflict path the same way the later branch does.
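The exclusivity rule can be sketched as a small predicate; this is a hypothetical simplification of the mortgage-level check, not the production logic:

```typescript
// Only one live schedule per mortgage: reopening an activation_failed row is
// allowed when no other schedule is live, or when the live one is the very row
// being retried. Any other live schedule blocks the reopen.
function canReopen(
  mortgageActiveScheduleId: string | undefined,
  reopeningScheduleId: string
): boolean {
  return (
    mortgageActiveScheduleId === undefined ||
    mortgageActiveScheduleId === reopeningScheduleId
  );
}
```

Running this predicate on every path that can mark a row "activating", including the retry path, is what closes the gap the comment describes.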
🧹 Nitpick comments (1)
convex/payments/recurringSchedules/poller.ts (1)
302-318: Consider exponential backoff for retry intervals. The poller tracks `consecutiveSyncFailures` (lines 182, 188) but always schedules the next retry at a fixed `POLL_INTERVAL_MS` (line 312). For schedules with persistent failures (e.g., missing provider data, auth issues), this creates unnecessary polling load without increasing the retry delay.

💡 Suggested approach

```diff
+const MAX_BACKOFF_MS = 24 * 60 * 60 * 1000; // 24 hours
+
+function computeRetryDelay(consecutiveFailures: number): number {
+  const baseDelayMs = POLL_INTERVAL_MS;
+  const backoffMs = Math.min(
+    baseDelayMs * Math.pow(2, consecutiveFailures),
+    MAX_BACKOFF_MS
+  );
+  return backoffMs;
+}

 // In the catch block:
+const schedule = await ctx.runQuery(...); // fetch to get consecutiveSyncFailures
 await ctx.runMutation(
   makeFunctionReference(
     "payments/recurringSchedules/poller:recordExternalCollectionScheduleSyncFailure"
   ) as unknown as FunctionReference<"mutation">,
   {
     asOf,
     errorMessage,
-    nextPollAt: asOf + POLL_INTERVAL_MS,
+    nextPollAt: asOf + computeRetryDelay(schedule?.consecutiveSyncFailures ?? 0),
     scheduleId: candidate._id,
     leaseOwner,
   }
 );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/recurringSchedules/poller.ts` around lines 302 - 318, The catch block always sets nextPollAt to asOf + POLL_INTERVAL_MS; change it to compute an exponential backoff using the schedule's consecutiveSyncFailures (e.g., backoff = POLL_INTERVAL_MS * 2 ** Math.min(consecutiveSyncFailures, MAX_BACKOFF_EXP)), optionally add jitter and cap with a MAX_BACKOFF_MS, then pass asOf + backoff as nextPollAt when calling the mutation (recordExternalCollectionScheduleSyncFailure via makeFunctionReference) while preserving other args (asOf, errorMessage, scheduleId: candidate._id, leaseOwner); define MAX_BACKOFF_EXP / MAX_BACKOFF_MS near POLL_INTERVAL_MS and ensure consecutiveSyncFailures is read from the candidate record used earlier.
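A minimal, runnable sketch of the suggested backoff, assuming an illustrative `POLL_INTERVAL_MS` value (the real constant lives in the poller module):

```typescript
const POLL_INTERVAL_MS = 15 * 60 * 1000; // illustrative: 15 minutes
const MAX_BACKOFF_MS = 24 * 60 * 60 * 1000; // cap at 24 hours

// Double the base poll interval per consecutive failure, capped so a schedule
// with persistent errors settles into at most one retry per day.
function computeRetryDelay(consecutiveFailures: number): number {
  return Math.min(POLL_INTERVAL_MS * 2 ** consecutiveFailures, MAX_BACKOFF_MS);
}
```

Adding jitter (e.g. multiplying by a random factor near 1) would further spread retries across schedules that failed at the same time.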
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@convex/payments/recurringSchedules/occurrenceIngestion.ts`:
- Around line 743-753: The code is overwriting a real providerRef and breaking
resolveExistingTransfer() keying; change the branch that patches providerRef
(around transfer, args.event.providerRef and ctx.db.patch) to only replace when
the current transfer.providerRef is the synthetic placeholder (e.g.
startsWith("provider-managed:") or otherwise empty/synthetic); if
transfer.providerRef is a different real ref, do not patch and instead fail
closed (throw or return an error) so you don't mutate the key used by
resolveExistingTransfer(), ensuring any update only replaces synthetic refs tied
to transfer._id.
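The fail-closed guard can be sketched as a pure decision function; the `provider-managed:` placeholder prefix comes from the comment, and the rest is a hypothetical simplification of the ingestion branch:

```typescript
// Decide whether an incoming providerRef may replace the stored one.
// Only synthetic placeholders (or missing refs) are replaceable; a differing
// real ref fails closed so the resolveExistingTransfer() key is never mutated.
function resolveProviderRefUpdate(
  currentRef: string | undefined,
  incomingRef: string
): { action: "patch" | "keep" } {
  if (!currentRef || currentRef.startsWith("provider-managed:")) {
    return { action: "patch" };
  }
  if (currentRef !== incomingRef) {
    throw new Error(
      `Refusing to overwrite real providerRef ${currentRef} with ${incomingRef}`
    );
  }
  return { action: "keep" };
}
```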
In `@convex/payments/rotessa/financialTransactions.ts`:
- Around line 123-125: The current providerRef assignment uses a decorated
fallback `rotessa_financial_transaction:${args.row.id}` which prevents later
Rotessa status lookups from matching; change the providerRef fallback to the raw
Rotessa id by using String(args.row.id) when args.row.transaction_number is
absent, while preserving the decorated value on externalOccurrenceRef; update
the providerRef expression (the branch that currently reads
args.row.transaction_number ?? `rotessa_financial_transaction:${args.row.id}`)
to use args.row.transaction_number ?? String(args.row.id) so status refreshes
can resolve via transaction_number or String(row.id).
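The suggested split between the two refs, sketched over a simplified row shape (the real `RotessaTransactionReportRow` carries more fields):

```typescript
// Hypothetical simplified row shape for illustration.
type RotessaRowLike = { id: number; transaction_number?: string };

// providerRef falls back to the raw Rotessa id so later status lookups can
// match on transaction_number or String(row.id); externalOccurrenceRef keeps
// the decorated form for occurrence bookkeeping.
function buildRefs(row: RotessaRowLike): {
  providerRef: string;
  externalOccurrenceRef: string;
} {
  return {
    providerRef: row.transaction_number ?? String(row.id),
    externalOccurrenceRef: `rotessa_financial_transaction:${row.id}`,
  };
}
```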
In `@specs/15-support-provider-managed-recurring-collection-schedules/design.md`:
- Around line 41-48: The spec's pollOccurrenceUpdates signature is incomplete;
update the documented input contract for pollOccurrenceUpdates (on
RecurringCollectionScheduleProvider) to include the required startDate plus
optional endDate and maxEvents in addition to externalScheduleRef and
sinceCursor, and describe types and semantics (e.g., startDate: ISO date string
required to anchor polling, endDate?: ISO date string to cap returned
occurrences, maxEvents?: number to limit results, sinceCursor?: string for
incremental polling, and externalScheduleRef: string), so adapters/tests produce
the same shape the implementation expects.
In `@specs/15-support-provider-managed-recurring-collection-schedules/PRD.md`:
- Around line 71-79: Update UC-2 to include the explicit primary-borrower
prerequisite: state that the mortgage must have a linked borrower with role ===
"primary" before activation; mention this alongside the existing "Mortgage is
eligible for provider-managed PAD collection and future plan entries exist"
precondition so the PRD matches the implementation that rejects activation
without a primary borrower.
---
Outside diff comments:
In `@convex/payments/collectionPlan/initialScheduling.ts`:
- Around line 99-154: The code currently allows
externalCollectionScheduleId/externalOccurrenceOrdinal to be set when status !==
"provider_scheduled", creating split-brain; inside the initial scheduling logic
(in initialScheduling.ts, around where args.status and executionMode are
handled) add a guard that if args.status !== "provider_scheduled" and
(args.externalCollectionScheduleId is present or args.externalOccurrenceOrdinal
is not undefined) then throw a ConvexError rejecting external schedule linkage;
ensure this check runs before inserting the collectionPlanEntries and
complements the existing provider_scheduled branch that enforces executionMode
=== "provider_managed".
---
Duplicate comments:
In `@convex/payments/recurringSchedules/activation.ts`:
- Around line 177-189: The activation_failed branch returns early and skips the
mortgage-level live-schedule exclusivity check, allowing a reopened schedule to
overwrite mortgage.activeExternalCollectionScheduleId; remove the early return
and instead run the same live-schedule exclusivity logic used later in the flow
before marking the row "activating" and returning. Concretely: in the block that
handles existing.status === "activation_failed" (where you call ctx.db.patch on
existing._id), call the mortgage-level check that inspects
mortgage.activeExternalCollectionScheduleId and enforces the
single-live-schedule rule (the same logic used in the later 202-249 path), and
only proceed to patch status to "activating" and return { needsProviderCreate,
scheduleId } if the exclusivity check permits activation; otherwise handle the
conflict path the same way the later branch does.
---
Nitpick comments:
In `@convex/payments/recurringSchedules/poller.ts`:
- Around line 302-318: The catch block always sets nextPollAt to asOf +
POLL_INTERVAL_MS; change it to compute an exponential backoff using the
schedule's consecutiveSyncFailures (e.g., backoff = POLL_INTERVAL_MS * 2 **
Math.min(consecutiveSyncFailures, MAX_BACKOFF_EXP)), optionally add jitter and
cap with a MAX_BACKOFF_MS, then pass asOf + backoff as nextPollAt when calling
the mutation (recordExternalCollectionScheduleSyncFailure via
makeFunctionReference) while preserving other args (asOf, errorMessage,
scheduleId: candidate._id, leaseOwner); define MAX_BACKOFF_EXP / MAX_BACKOFF_MS
near POLL_INTERVAL_MS and ensure consecutiveSyncFailures is read from the
candidate record used earlier.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1b054673-4a35-4a05-870c-06a6badfb335
⛔ Files ignored due to path filters (2)
- `bun.lock` is excluded by `!**/*.lock`
- `convex/_generated/api.d.ts` is excluded by `!**/_generated/**`
📒 Files selected for processing (50)
- AGENTS.md
- CLAUDE.md
- convex/crons.ts
- convex/demo/__tests__/ampsE2e.test.ts
- convex/demo/__tests__/ampsExecutionModes.test.ts
- convex/demo/ampsExecutionModes.ts
- convex/payments/collectionPlan/__tests__/runner.test.ts
- convex/payments/collectionPlan/admin.ts
- convex/payments/collectionPlan/execution.ts
- convex/payments/collectionPlan/executionContract.ts
- convex/payments/collectionPlan/initialScheduling.ts
- convex/payments/collectionPlan/mutations.ts
- convex/payments/collectionPlan/queries.ts
- convex/payments/collectionPlan/runner.ts
- convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts
- convex/payments/recurringSchedules/activation.ts
- convex/payments/recurringSchedules/occurrenceIngestion.ts
- convex/payments/recurringSchedules/poller.ts
- convex/payments/recurringSchedules/providers/registry.ts
- convex/payments/recurringSchedules/providers/rotessaRecurring.ts
- convex/payments/recurringSchedules/queries.ts
- convex/payments/recurringSchedules/types.ts
- convex/payments/recurringSchedules/validators.ts
- convex/payments/rotessa/api.ts
- convex/payments/rotessa/financialTransactions.ts
- convex/payments/transfers/providers/registry.ts
- convex/payments/transfers/providers/rotessa.ts
- convex/payments/transfers/types.ts
- convex/payments/webhooks/handleReversal.ts
- convex/payments/webhooks/rotessaPad.ts
- convex/schema.ts
- convex/test/moduleMaps.ts
- docs/architecture/rbac-and-permissions.md
- docs/architecture/state-machines.md
- docs/architecture/unified-payment-rails-technical-design.md
- docs/cash-ledger-developer-guide.md
- docs/reviews/2026-04-08-security-rbac-review.md
- docs/technical-design/unified-payment-rails.md
- package.json
- reviews/04-11-provider_managed_payment_schedule/74b377d61.md
- specs/15-support-provider-managed-recurring-collection-schedules/PRD.md
- specs/15-support-provider-managed-recurring-collection-schedules/design.md
- specs/15-support-provider-managed-recurring-collection-schedules/gap-analysis.md
- specs/15-support-provider-managed-recurring-collection-schedules/tasks.md
- src/routeTree.gen.ts
- src/routes/demo.listings.tsx
- src/routes/demo/amps/-execution-modes.tsx
- src/routes/demo/amps/-index.tsx
- src/test/convex/payments/helpers.ts
- src/test/convex/payments/rotessaManagedRecurringLifecycle.test.ts
✅ Files skipped from review due to trivial changes (16)
- docs/reviews/2026-04-08-security-rbac-review.md
- convex/payments/transfers/types.ts
- convex/payments/webhooks/handleReversal.ts
- package.json
- CLAUDE.md
- convex/payments/collectionPlan/__tests__/runner.test.ts
- convex/payments/collectionPlan/execution.ts
- docs/architecture/unified-payment-rails-technical-design.md
- convex/payments/recurringSchedules/providers/registry.ts
- docs/architecture/rbac-and-permissions.md
- convex/test/moduleMaps.ts
- docs/technical-design/unified-payment-rails.md
- specs/15-support-provider-managed-recurring-collection-schedules/tasks.md
- docs/architecture/state-machines.md
- src/test/convex/payments/rotessaManagedRecurringLifecycle.test.ts
- convex/payments/recurringSchedules/validators.ts
🚧 Files skipped from review as they are similar to previous changes (15)
- convex/crons.ts
- convex/payments/collectionPlan/executionContract.ts
- AGENTS.md
- convex/payments/collectionPlan/admin.ts
- convex/payments/transfers/providers/registry.ts
- convex/payments/collectionPlan/runner.ts
- src/routes/demo.listings.tsx
- convex/payments/collectionPlan/mutations.ts
- src/test/convex/payments/helpers.ts
- convex/payments/transfers/providers/rotessa.ts
- convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts
- convex/payments/webhooks/rotessaPad.ts
- convex/payments/rotessa/api.ts
- convex/payments/recurringSchedules/providers/rotessaRecurring.ts
- convex/payments/recurringSchedules/types.ts
Actionable comments posted: 2
🧹 Nitpick comments (2)
convex/payments/rotessa/financialTransactions.ts (1)
87-112: Narrow `providerCode` at the input type instead of force-casting at output. At Line 109, `as Extract<ProviderCode, "pad_rotessa">` can mask accidental non-Rotessa values passed in `args.providerCode`. Prefer constraining the arg type to Rotessa up front and dropping the cast.

💡 Proposed refactor

```diff
 export function buildNormalizedOccurrenceFromRotessaRow(args: {
   externalScheduleRef: string;
-  providerCode?: ProviderCode;
+  providerCode?: Extract<ProviderCode, "pad_rotessa">;
   receivedVia: "poller" | "webhook";
   row: RotessaTransactionReportRow;
 }): NormalizedExternalCollectionOccurrenceEvent | null {
@@
-    providerCode: (args.providerCode ?? "pad_rotessa") as Extract<
-      ProviderCode,
-      "pad_rotessa"
-    >,
+    providerCode: args.providerCode ?? "pad_rotessa",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/rotessa/financialTransactions.ts` around lines 87 - 112, The providerCode is being force-cast in buildNormalizedOccurrenceFromRotessaRow; tighten the input type instead: change the args type for providerCode from ProviderCode (or optional ProviderCode) to only the Rotessa-specific value (e.g., providerCode?: Extract<ProviderCode, "pad_rotessa"> or providerCode?: "pad_rotessa"), and then remove the as Extract<...> cast in the returned object and simply use (args.providerCode ?? "pad_rotessa") for providerCode; this ensures mapRotessaFinancialStatusToTransferEvent and buildNormalizedOccurrenceFromRotessaRow only accept/return the correct Rotessa provider code without unsafe casting.

convex/payments/recurringSchedules/occurrenceIngestion.ts (1)
263-286: Avoid double-writing collection-attempt provider mirror fields in the same ingestion pass.
`ensureCollectionAttempt` already patches mirror fields for existing attempts, and Line 729 then patches again via `syncCollectionAttemptProviderMirror`. Consolidating to a single write path will reduce OCC churn and write volume.

Also applies to: 729-733
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/payments/recurringSchedules/occurrenceIngestion.ts` around lines 263 - 286, The code is double-writing collection-attempt provider mirror fields: ensureCollectionAttempt (which patches mirror fields for existing attempts) and the later call to syncCollectionAttemptProviderMirror both update the same fields; remove the redundant patch to avoid OCC churn by consolidating to a single write path. Specifically, stop calling buildCollectionAttemptProviderMirrorPatch/args.ctx.db.patch for the already-handled attempts (references: ensureCollectionAttempt, existingByOccurrenceKey, existingAttempt, args.planEntry.collectionAttemptId) and let syncCollectionAttemptProviderMirror perform the canonical update (or vice versa)—pick one location to perform the patch and delete the duplicate patching logic so only one write occurs per ingestion pass.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@specs/15-support-provider-managed-recurring-collection-schedules/design.md`:
- Around line 98-104: The design table for Rotessa status mappings omits the
Chargeback row while the implementation maps `Chargeback` to
`TRANSFER_REVERSED`; update the mapping table in
specs/15-support-provider-managed-recurring-collection-schedules/design.md by
adding a row for `Chargeback` that persists the exact local raw mirror value and
maps to `TRANSFER_REVERSED` (include any notes about reusing existing
reversal/ledger effects as appropriate) so the design matches the runtime
behavior.
In `@specs/15-support-provider-managed-recurring-collection-schedules/PRD.md`:
- Around line 145-150: Add "Chargeback" to the Rotessa lifecycle values list and
note its mapping to the canonical transfer event (Chargeback ->
TRANSFER_REVERSED) so the lifecycle contract matches the implementation; update
the section that currently lists `Future`, `Pending`, `Approved`, `Declined` to
include `Chargeback` and a short sentence stating the local system maps these
raw statuses (including Chargeback) to canonical transfer events for settlement
behavior.
---
Nitpick comments:
In `@convex/payments/recurringSchedules/occurrenceIngestion.ts`:
- Around line 263-286: The code is double-writing collection-attempt provider
mirror fields: ensureCollectionAttempt (which patches mirror fields for existing
attempts) and the later call to syncCollectionAttemptProviderMirror both update
the same fields; remove the redundant patch to avoid OCC churn by consolidating
to a single write path. Specifically, stop calling
buildCollectionAttemptProviderMirrorPatch/args.ctx.db.patch for the
already-handled attempts (references: ensureCollectionAttempt,
existingByOccurrenceKey, existingAttempt, args.planEntry.collectionAttemptId)
and let syncCollectionAttemptProviderMirror perform the canonical update (or
vice versa)—pick one location to perform the patch and delete the duplicate
patching logic so only one write occurs per ingestion pass.
In `@convex/payments/rotessa/financialTransactions.ts`:
- Around line 87-112: The providerCode is being force-cast in
buildNormalizedOccurrenceFromRotessaRow; tighten the input type instead: change
the args type for providerCode from ProviderCode (or optional ProviderCode) to
only the Rotessa-specific value (e.g., providerCode?: Extract<ProviderCode,
"pad_rotessa"> or providerCode?: "pad_rotessa"), and then remove the as
Extract<...> cast in the returned object and simply use (args.providerCode ??
"pad_rotessa") for providerCode; this ensures
mapRotessaFinancialStatusToTransferEvent and
buildNormalizedOccurrenceFromRotessaRow only accept/return the correct Rotessa
provider code without unsafe casting.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 02c0919e-fe90-47ba-a747-c162c932019b
📒 Files selected for processing (6)
- convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts
- convex/payments/recurringSchedules/occurrenceIngestion.ts
- convex/payments/rotessa/financialTransactions.ts
- specs/15-support-provider-managed-recurring-collection-schedules/PRD.md
- specs/15-support-provider-managed-recurring-collection-schedules/design.md
- src/test/convex/payments/rotessaManagedRecurringLifecycle.test.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/test/convex/payments/rotessaManagedRecurringLifecycle.test.ts
- convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts
| Rotessa status | Local raw mirror | Mapped transfer event | Notes |
|---|---|---|---|
| `Future` | persist exact value | `PROCESSING_UPDATE` | The occurrence exists remotely and may lazily materialize local attempt/transfer rows. |
| `Pending` | persist exact value | `PROCESSING_UPDATE` | Keeps the local occurrence on the non-terminal path. |
| `Approved` | persist exact value | `FUNDS_SETTLED` | Reuses existing settlement and ledger effects. |
| `Declined` | persist exact value plus reason such as `NSF` | `TRANSFER_FAILED` | Reuses existing failure and retry policy while preserving provider decline reason. |
Include Chargeback in the Rotessa mapping table.
The mapping contract currently skips Chargeback, but the implementation handles it as TRANSFER_REVERSED. Add the row to keep design and runtime behavior aligned.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@specs/15-support-provider-managed-recurring-collection-schedules/design.md`
around lines 98 - 104, The design table for Rotessa status mappings omits the
Chargeback row while the implementation maps `Chargeback` to
`TRANSFER_REVERSED`; update the mapping table in
specs/15-support-provider-managed-recurring-collection-schedules/design.md by
adding a row for `Chargeback` that persists the exact local raw mirror value and
maps to `TRANSFER_REVERSED` (include any notes about reusing existing
reversal/ledger effects as appropriate) so the design matches the runtime
behavior.
- The first delivery must preserve the raw Rotessa lifecycle values:
  - `Future`
  - `Pending`
  - `Approved`
  - `Declined`
- The local system still maps those statuses into canonical transfer events for settlement behavior.
Add Chargeback to the documented Rotessa lifecycle expectations.
The implemented mapping includes Chargeback -> TRANSFER_REVERSED; omitting it here leaves the lifecycle contract incomplete.
📝 Suggested doc update

```diff
 - The first delivery must preserve the raw Rotessa lifecycle values:
   - `Future`
   - `Pending`
   - `Approved`
   - `Declined`
+  - `Chargeback`
 - The local system still maps those statuses into canonical transfer events for settlement behavior.
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```markdown
- The first delivery must preserve the raw Rotessa lifecycle values:
  - `Future`
  - `Pending`
  - `Approved`
  - `Declined`
  - `Chargeback`
- The local system still maps those statuses into canonical transfer events for settlement behavior.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@specs/15-support-provider-managed-recurring-collection-schedules/PRD.md`
around lines 145 - 150, Add "Chargeback" to the Rotessa lifecycle values list
and note its mapping to the canonical transfer event (Chargeback ->
TRANSFER_REVERSED) so the lifecycle contract matches the implementation; update
the section that currently lists `Future`, `Pending`, `Approved`, `Declined` to
include `Chargeback` and a short sentence stating the local system maps these
raw statuses (including Chargeback) to canonical transfer events for settlement
behavior.
Merge activity

- provider managed payment schedule
- red-tests
- adding consideration to poll based flows in red-tests and specs
- implementation for provider-managed recurring schedule flow
The fixes are in convex/payments/transfers/providers/rotessa.ts (line 44), convex/payments/recurringSchedules/queries.ts (line 4), and convex/payments/recurringSchedules/activation.ts (line 10). pad_rotessa status lookup no longer falls back to the first report row, activation now requires an explicit primary borrower instead of silently picking the first linked borrower, and the asOf default behavior is documented where the snapshot is loaded and invoked.
I also tightened the tests in convex/payments/recurringSchedules/__tests__/providerManaged.integration.test.ts (line 481) and src/test/convex/payments/rotessaManagedRecurringLifecycle.test.ts (line 272) so the reviewed edge cases stay covered: missing primary borrower, missing polled provider ref, lifecycle normalization, and the current NSF retry behavior.
Summary by CodeRabbit
New Features
Schema
Bug Fixes / Improvements
Tests
Documentation
Chores