diff --git a/DEPLOY.md b/DEPLOY.md
index d1fc9af3..bf8e2d89 100644
--- a/DEPLOY.md
+++ b/DEPLOY.md
@@ -119,13 +119,12 @@ Cloudflare Worker secrets are set. In CI, the deploy workflow runs
 ### Scopes
 
-The login flow requests `scope=repo read:org notifications`:
+The login flow requests `scope=repo read:org`:
 
 | Scope | Used for |
 |-------|----------|
 | `repo` | Read issues, PRs, check runs, workflow runs (includes private repos) |
 | `read:org` | `GET /user/orgs` — list user's organizations for the org selector |
-| `notifications` | `GET /notifications` — polling optimization gate (304 = skip full fetch) |
 
 **Note:** The `repo` scope grants write access to repositories, but this app never performs write operations (POST/PUT/PATCH/DELETE on repo endpoints). It is read-only by design.
diff --git a/README.md b/README.md
index 14de0e0f..d52514d3 100644
--- a/README.md
+++ b/README.md
@@ -59,7 +59,7 @@ A second, faster poll loop (default 30s, configurable 10–120s) targets only in
 ### Desktop Notifications
 
-Browser notifications for new issues, PRs, and failed runs. Per-type toggles in settings. Notification permission requested on first enable. Uses the GitHub Notifications API as a change-detection gate when the `notifications` scope is available.
+Browser notifications for new issues, PRs, and failed runs. Per-type toggles in settings. Notification permission requested on first enable. New items are detected via the Events API polling loop and full refresh cycles.
 
 ### Repo Pinning and Reordering
@@ -87,7 +87,7 @@ Hide specific items with a persistent ignore list. An "N ignored" badge on the r
 ### ETag Caching and Auto-Refresh
 
-Conditional requests using `If-None-Match` headers — GitHub doesn't count 304 responses against the rate limit. Background polling keeps data fresh even when the tab is hidden (when the notifications scope is available for efficient change detection).
+Conditional requests using `If-None-Match` headers — GitHub doesn't count 304 responses against the rate limit. A 60-second events poll uses ETag requests to detect changes and trigger targeted per-repo refreshes, keeping data fresh even in background tabs at zero rate-limit cost.
 
 ## Tech Stack
diff --git a/docs/USER_GUIDE.md b/docs/USER_GUIDE.md
index 13d73138..dcd58f3d 100644
--- a/docs/USER_GUIDE.md
+++ b/docs/USER_GUIDE.md
@@ -45,9 +45,9 @@ GitHub Tracker is a dashboard that aggregates open issues, pull requests, and Gi
 ### OAuth Sign-In
 
-OAuth is the recommended sign-in method. Click **Sign in with GitHub** on the login page and authorize the application. GitHub will redirect you back with a token that grants access to `repo`, `read:org`, and `notifications` scopes.
+OAuth is the recommended sign-in method. Click **Sign in with GitHub** on the login page and authorize the application. GitHub will redirect you back with a token that grants access to `repo` and `read:org` scopes.
 
-OAuth tokens work across all organizations you belong to and support the notifications optimization that reduces API usage in background tabs.
+OAuth tokens work across all organizations you belong to.
 
 ### Personal Access Token Sign-In
@@ -55,8 +55,8 @@ If you prefer not to use OAuth, you can sign in with a GitHub Personal Access To
 Two token formats are accepted:
 
-- **Classic tokens** (starts with `ghp_`) — recommended. Works across all organizations you belong to. Required scopes: `repo`, `read:org` (under admin:org), `notifications`.
-- **Fine-grained tokens** (starts with `github_pat_`) — also work, but have limitations: they only access one organization at a time, do not support the `notifications` scope, and therefore cannot use the background-poll optimization. Required permissions: Actions (read), Contents (read), Issues (read), Pull requests (read).
+- **Classic tokens** (starts with `ghp_`) — recommended. Works across all organizations you belong to. Required scopes: `repo`, `read:org` (under admin:org).
+- **Fine-grained tokens** (starts with `github_pat_`) — also work, but only access one organization at a time. Required permissions: Actions (read), Contents (read), Issues (read), Pull requests (read).
 
 The token is validated against the GitHub API before being stored. It is saved permanently in your browser's `localStorage` — you will not need to re-enter it on revisit.
@@ -316,10 +316,10 @@ Hover the rate limit display in the dashboard footer to see detailed remaining c
 When the tab is hidden:
 
 - The **hot poll always pauses** (it provides only visual feedback).
-- The **full poll continues in background** when the notifications gate is available (OAuth or classic PAT with `notifications` scope). The gate uses `If-Modified-Since` headers for near-zero-cost 304 checks that do not count against your rate limit.
-- When the notifications gate is **unavailable** (fine-grained PAT or classic PAT missing the `notifications` scope), the full poll also pauses in background tabs to conserve API budget.
+- The **full refresh pauses** in background tabs — GraphQL requests have no 304 shortcut and every poll consumes real rate-limit budget.
+- The **events poll continues in background** — it uses ETag conditional requests (`If-None-Match`) that return 304 when nothing has changed, costing zero rate-limit points. When changes are detected, targeted per-repo refreshes run immediately.
 
-When you return to a tab that has been hidden for more than 2 minutes, a catch-up fetch fires immediately regardless of where the timer is in its cycle.
+When you return to a tab that has been hidden for more than 2 minutes, a catch-up full refresh fires immediately regardless of where the timer is in its cycle.
 
 ---
 
@@ -473,19 +473,19 @@ These are UI preferences that persist across sessions but are not included in th
 The tracker uses GitHub's GraphQL and REST APIs. Each poll cycle consumes some of your 5,000 request hourly budget. Tracking many repos, tracked users, or having a short refresh interval increases consumption. Increasing the refresh interval or reducing the number of tracked repos will reduce API usage.
 
-OAuth tokens and classic PATs use the notifications gate (304 shortcut), which significantly reduces per-cycle cost when nothing has changed. Fine-grained PATs do not support this optimization.
+A 60-second events poll uses ETag conditional requests to detect changes at near-zero cost, triggering targeted per-repo refreshes only when needed.
 
 For detailed per-source API call counts, see Settings > API Usage.
 
 **PAT vs OAuth: what is the difference?**
 
-OAuth tokens (from "Sign in with GitHub") work across all your organizations and support all features including the notifications background-poll optimization. Classic PATs with the correct scopes (`repo`, `read:org`, `notifications`) behave identically to OAuth.
+OAuth tokens (from "Sign in with GitHub") work across all your organizations and support all features. Classic PATs with the correct scopes (`repo`, `read:org`) behave identically to OAuth.
 
-Fine-grained PATs are limited to one organization at a time, do not support the `notifications` scope, and therefore cannot use the background-poll optimization — the full poll pauses in hidden tabs, and a warning appears in the notification drawer.
+Fine-grained PATs are limited to one organization at a time. Required permissions: Actions (read), Contents (read), Issues (read), Pull requests (read).
 
 **Data looks stale after switching back to the tab.**
 
-When a tab has been hidden for more than 2 minutes, a catch-up fetch fires automatically on return. If the notifications gate is unavailable (fine-grained PAT), polling was paused while the tab was hidden — the catch-up fetch provides a single refresh on return. To ensure continuous background updates, use OAuth or a classic PAT with the `notifications` scope.
+When a tab has been hidden for more than 2 minutes, a catch-up fetch fires automatically on return. The events poll continues running in background tabs using ETag conditional requests (zero rate-limit cost), so changes are detected even while the tab is hidden. **I want to stop tracking a repository.** diff --git a/src/app/components/dashboard/DashboardPage.tsx b/src/app/components/dashboard/DashboardPage.tsx index dd50fcb7..336d2298 100644 --- a/src/app/components/dashboard/DashboardPage.tsx +++ b/src/app/components/dashboard/DashboardPage.tsx @@ -16,7 +16,9 @@ import { fetchOrgs } from "../../services/api"; import { createPollCoordinator, createHotPollCoordinator, + createEventsPollCoordinator, rebuildHotSets, + seedHotSetsFromTargeted, clearHotSets, getHotPollGeneration, fetchAllData, @@ -24,7 +26,8 @@ import { } from "../../services/poll"; import { expireToken, user, onAuthCleared, DASHBOARD_STORAGE_KEY } from "../../stores/auth"; import { updateRelaySnapshot } from "../../lib/mcp-relay"; -import { pushNotification } from "../../lib/errors"; +import { pushNotification, pushError } from "../../lib/errors"; +import { detectNewItems, dispatchNotifications } from "../../lib/notifications"; import { getClient, getGraphqlRateLimit, fetchRateLimitDetails } from "../../services/github"; import { formatCount, prSizeCategory, rateLimitCssClass } from "../../lib/format"; import { setsEqual } from "../../lib/collections"; @@ -137,6 +140,11 @@ onAuthCleared(() => { hotCoord.destroy(); if (_hotCoordinator() === hotCoord) _setHotCoordinator(null); } + const eventsCoord = _eventsCoordinator(); + if (eventsCoord) { + eventsCoord.destroy(); + if (_eventsCoordinator() === eventsCoord) _setEventsCoordinator(null); + } clearHotSets(); }); @@ -167,101 +175,96 @@ async function pollFetch(): Promise { }); } }); - // When notifications gate says nothing changed, keep existing data - if (!data.skipped) { - const hasErrors = data.errors.length > 0; - setLastFetchHadErrors(hasErrors); - 
- // When the fetch had errors and returned no data, keep stale dashboard - // visible rather than wiping it to empty. This prevents the summary strip, - // tab counts, and tracked items from vanishing during rate limiting. - if (hasErrors && data.issues.length === 0 && data.pullRequests.length === 0 && data.workflowRuns.length === 0) { - setDashboardData("loading", false); - return data; - } + const hasErrors = data.errors.length > 0; + setLastFetchHadErrors(hasErrors); - setHasFetchedFresh(true); - const now = new Date(); - - if (phaseOneFired) { - // Phase 1 fired — use fine-grained merge for the light→enriched - // transition. Only update heavy fields to avoid re-rendering the - // entire list (light fields haven't changed within this poll cycle). - const enrichedMap = new Map(); - for (const pr of data.pullRequests) enrichedMap.set(pr.id, pr); - - setDashboardData(produce((state) => { - state.issues = data.issues; - state.workflowRuns = data.workflowRuns; - state.loading = false; - state.lastRefreshedAt = now; - - let canMerge = state.pullRequests.length === enrichedMap.size; - if (canMerge) { - for (let i = 0; i < state.pullRequests.length; i++) { - if (!enrichedMap.has(state.pullRequests[i].id)) { canMerge = false; break; } - } + // When the fetch had errors and returned no data, keep stale dashboard + // visible rather than wiping it to empty. This prevents the summary strip, + // tab counts, and tracked items from vanishing during rate limiting. + if (hasErrors && data.issues.length === 0 && data.pullRequests.length === 0 && data.workflowRuns.length === 0) { + setDashboardData("loading", false); + return data; + } + + setHasFetchedFresh(true); + const now = new Date(); + + if (phaseOneFired) { + // Phase 1 fired — use fine-grained merge for the light→enriched + // transition. Only update heavy fields to avoid re-rendering the + // entire list (light fields haven't changed within this poll cycle). 
+ const enrichedMap = new Map(); + for (const pr of data.pullRequests) enrichedMap.set(pr.id, pr); + + setDashboardData(produce((state) => { + state.issues = data.issues; + state.workflowRuns = data.workflowRuns; + state.loading = false; + state.lastRefreshedAt = now; + + let canMerge = state.pullRequests.length === enrichedMap.size; + if (canMerge) { + for (let i = 0; i < state.pullRequests.length; i++) { + if (!enrichedMap.has(state.pullRequests[i].id)) { canMerge = false; break; } } + } - if (canMerge) { - for (let i = 0; i < state.pullRequests.length; i++) { - const e = enrichedMap.get(state.pullRequests[i].id)!; - const pr = state.pullRequests[i]; - pr.headSha = e.headSha; - pr.assigneeLogins = e.assigneeLogins; - pr.reviewerLogins = e.reviewerLogins; - pr.checkStatus = e.checkStatus; - pr.additions = e.additions; - pr.deletions = e.deletions; - pr.changedFiles = e.changedFiles; - pr.comments = e.comments; - pr.reviewThreads = e.reviewThreads; - pr.totalReviewCount = e.totalReviewCount; - pr.enriched = e.enriched; - pr.nodeId = e.nodeId; - pr.surfacedBy = e.surfacedBy; - pr.starCount = e.starCount; - } - } else { - state.pullRequests = data.pullRequests; + if (canMerge) { + for (let i = 0; i < state.pullRequests.length; i++) { + const e = enrichedMap.get(state.pullRequests[i].id)!; + const pr = state.pullRequests[i]; + pr.headSha = e.headSha; + pr.assigneeLogins = e.assigneeLogins; + pr.reviewerLogins = e.reviewerLogins; + pr.checkStatus = e.checkStatus; + pr.additions = e.additions; + pr.deletions = e.deletions; + pr.changedFiles = e.changedFiles; + pr.comments = e.comments; + pr.reviewThreads = e.reviewThreads; + pr.totalReviewCount = e.totalReviewCount; + pr.enriched = e.enriched; + pr.nodeId = e.nodeId; + pr.surfacedBy = e.surfacedBy; + pr.starCount = e.starCount; } - })); - } else { - // Phase 1 did NOT fire (cached data existed or subsequent poll). - // Full atomic replacement — all fields (light + heavy) may have - // changed since the last cycle. 
Preserve scroll position: SolidJS - // DOM updates are synchronous within the setter, so save/restore - // around it to prevent scroll reset from DOM rebuild. - withScrollLock(() => { - setDashboardData({ - issues: data.issues, - pullRequests: data.pullRequests, - workflowRuns: data.workflowRuns, - loading: false, - lastRefreshedAt: now, - }); - }); - } - rebuildHotSets(data); - // Persist for stale-while-revalidate on full page reload. - // Errors are transient and not persisted. Deferred to avoid blocking paint. - const cachePayload = { - _v: CACHE_VERSION, - issues: data.issues, - pullRequests: data.pullRequests, - workflowRuns: data.workflowRuns, - lastRefreshedAt: now.toISOString(), - }; - setTimeout(() => { - try { - localStorage.setItem(DASHBOARD_STORAGE_KEY, JSON.stringify(cachePayload)); - } catch { - pushNotification("localStorage:dashboard", "Dashboard cache write failed — storage may be full", "warning"); + } else { + state.pullRequests = data.pullRequests; } - }, 0); + })); } else { - setDashboardData("loading", false); + // Phase 1 did NOT fire (cached data existed or subsequent poll). + // Full atomic replacement — all fields (light + heavy) may have + // changed since the last cycle. Preserve scroll position: SolidJS + // DOM updates are synchronous within the setter, so save/restore + // around it to prevent scroll reset from DOM rebuild. + withScrollLock(() => { + setDashboardData({ + issues: data.issues, + pullRequests: data.pullRequests, + workflowRuns: data.workflowRuns, + loading: false, + lastRefreshedAt: now, + }); + }); } + rebuildHotSets(data); + // Persist for stale-while-revalidate on full page reload. + // Errors are transient and not persisted. Deferred to avoid blocking paint. 
+ const cachePayload = { + _v: CACHE_VERSION, + issues: data.issues, + pullRequests: data.pullRequests, + workflowRuns: data.workflowRuns, + lastRefreshedAt: now.toISOString(), + }; + setTimeout(() => { + try { + localStorage.setItem(DASHBOARD_STORAGE_KEY, JSON.stringify(cachePayload)); + } catch { + pushNotification("localStorage:dashboard", "Dashboard cache write failed — storage may be full", "warning"); + } + }, 0); return data; } catch (err) { // Handle 401 auth errors @@ -285,6 +288,97 @@ async function pollFetch(): Promise { const [_coordinator, _setCoordinator] = createSignal | null>(null); const [_hotCoordinator, _setHotCoordinator] = createSignal<{ destroy: () => void } | null>(null); +const [_eventsCoordinator, _setEventsCoordinator] = createSignal<{ destroy: () => void } | null>(null); + +// Mutates data.issues[].surfacedBy and data.pullRequests[].surfacedBy in-place before merging. +function handleTargetedData(data: DashboardData, affectedRepos: string[]): void { + const affectedSet = new Set(affectedRepos.map(r => r.toLowerCase())); + + // Build surfacedBy index from old store items BEFORE the merge + const oldSurfacedByIssues = new Map(); + for (const i of dashboardData.issues) { + if (affectedSet.has(i.repoFullName.toLowerCase()) && i.surfacedBy?.length) { + oldSurfacedByIssues.set(i.id, i.surfacedBy); + } + } + const oldSurfacedByPRs = new Map(); + for (const pr of dashboardData.pullRequests) { + if (affectedSet.has(pr.repoFullName.toLowerCase()) && pr.surfacedBy?.length) { + oldSurfacedByPRs.set(pr.id, pr.surfacedBy); + } + } + + // Merge surfacedBy into targeted results before appending + for (const item of data.issues) { + const oldSb = oldSurfacedByIssues.get(item.id); + if (oldSb) { + item.surfacedBy = [...new Set([...(item.surfacedBy ?? []), ...oldSb])]; + } + } + for (const pr of data.pullRequests) { + const oldSb = oldSurfacedByPRs.get(pr.id); + if (oldSb) { + pr.surfacedBy = [...new Set([...(pr.surfacedBy ?? 
[]), ...oldSb])]; + } + } + + withScrollLock(() => { + setDashboardData(produce((state) => { + // ID-based merge: replace targeted items, keep unaffected + tracked-user-only items + const newIssueIds = new Set(data.issues.map(i => i.id)); + state.issues = [ + ...state.issues.filter(i => + !affectedSet.has(i.repoFullName.toLowerCase()) || + !newIssueIds.has(i.id) + ), + ...data.issues, + ]; + const newPRIds = new Set(data.pullRequests.map(pr => pr.id)); + state.pullRequests = [ + ...state.pullRequests.filter(pr => + !affectedSet.has(pr.repoFullName.toLowerCase()) || + !newPRIds.has(pr.id) + ), + ...data.pullRequests, + ]; + const newRunIds = new Set(data.workflowRuns.map(r => r.id)); + state.workflowRuns = [ + ...state.workflowRuns.filter(r => + !affectedSet.has(r.repoFullName.toLowerCase()) || + !newRunIds.has(r.id) + ), + ...data.workflowRuns, + ]; + })); + }); + + for (const err of data.errors) { + pushError(err.repo, err.message, err.retryable); + } + + const newItems = detectNewItems(data); + dispatchNotifications(newItems, config); + + seedHotSetsFromTargeted(data); + + // Capture snapshot eagerly — a concurrent hot poll can mutate the store via produce + // between here and the deferred setTimeout write. + const lastRefreshed = dashboardData.lastRefreshedAt; + const snapshot = { + _v: CACHE_VERSION, + issues: [...dashboardData.issues], + pullRequests: [...dashboardData.pullRequests], + workflowRuns: [...dashboardData.workflowRuns], + lastRefreshedAt: lastRefreshed?.toISOString() ?? 
null, + }; + setTimeout(() => { + try { + localStorage.setItem(DASHBOARD_STORAGE_KEY, JSON.stringify(snapshot)); + } catch { + // Non-fatal + } + }, 0); +} export default function DashboardPage() { const [hotPollingPRIds, setHotPollingPRIds] = createSignal>(new Set()); @@ -404,6 +498,21 @@ export default function DashboardPage() { _setCoordinator(createPollCoordinator(() => config.refreshInterval, pollFetch)); } + if (!_eventsCoordinator()) { + _setEventsCoordinator(createEventsPollCoordinator( + () => user()?.login ?? "", + () => { + const repos = new Set(); + for (const r of [...config.selectedRepos, ...(config.upstreamRepos ?? []), ...(config.monitoredRepos ?? [])]) { + repos.add(`${r.owner}/${r.name}`.toLowerCase()); + } + return repos; + }, + () => _coordinator()?.isRefreshing() ?? false, + handleTargetedData, + )); + } + if (!_hotCoordinator()) { _setHotCoordinator(createHotPollCoordinator( () => config.hotPollInterval, @@ -487,10 +596,13 @@ export default function DashboardPage() { onCleanup(() => { const coord = _coordinator(); const hotCoord = _hotCoordinator(); + const eventsCoord = _eventsCoordinator(); coord?.destroy(); if (_coordinator() === coord) _setCoordinator(null); hotCoord?.destroy(); if (_hotCoordinator() === hotCoord) _setHotCoordinator(null); + eventsCoord?.destroy(); + if (_eventsCoordinator() === eventsCoord) _setEventsCoordinator(null); clearHotSets(); clearInterval(clockInterval); }); diff --git a/src/app/lib/oauth.ts b/src/app/lib/oauth.ts index 217e8093..7ab6e725 100644 --- a/src/app/lib/oauth.ts +++ b/src/app/lib/oauth.ts @@ -29,8 +29,8 @@ export function buildAuthorizeUrl(options?: { returnTo?: string }): string { const params = new URLSearchParams({ client_id: clientId, redirect_uri: redirectUri, - // repo: read issues/PRs; read:org: list orgs; notifications: gate - scope: "repo read:org notifications", + // repo: read issues/PRs; read:org: list orgs + scope: "repo read:org", state, prompt: "select_account", }); diff --git 
a/src/app/pages/LoginPage.tsx b/src/app/pages/LoginPage.tsx index 824c5ecb..0c0b127a 100644 --- a/src/app/pages/LoginPage.tsx +++ b/src/app/pages/LoginPage.tsx @@ -129,7 +129,6 @@ export default function LoginPage() {
  • repo
  • read:org (under admin:org)
- • notifications
@@ -142,7 +141,7 @@ export default function LoginPage() { > Fine-grained tokens - {" "}also work, but only access one org at a time and do not support notifications. Add read-only permissions for Actions, Contents, Issues, and Pull requests. + {" "}also work, but only access one org at a time. Add read-only permissions for Actions, Contents, Issues, and Pull requests.

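The ETag-gated polling that this change introduces (and that the doc edits above describe) boils down to a small cache-merge pattern. A minimal, self-contained sketch — the type and function names here are invented for illustration and are not part of this codebase:

```typescript
// Sketch of the ETag conditional-request pattern: send the last ETag as
// If-None-Match; a 304 means "unchanged" and costs no rate-limit points.
type ConditionalResponse<T> =
  | { status: 200; etag: string; data: T } // changed — new payload + new ETag
  | { status: 304 };                       // unchanged — empty body

interface EtagCache<T> {
  etag: string | null;
  data: T | null;
}

// Pure merge step: a 304 keeps the cached data untouched; a 200 replaces
// both the data and the stored ETag used on the next conditional request.
function applyConditionalResponse<T>(
  cache: EtagCache<T>,
  res: ConditionalResponse<T>,
): EtagCache<T> {
  if (res.status === 304) return cache;
  return { etag: res.etag, data: res.data };
}

// First poll seeds the cache; a later 304 leaves it untouched.
let cache: EtagCache<string[]> = { etag: null, data: null };
cache = applyConditionalResponse(cache, { status: 200, etag: 'W/"abc"', data: ["PushEvent"] });
cache = applyConditionalResponse(cache, { status: 304 });
console.log(cache); // ETag and data survive the 304
```

This is the same shape `fetchUserEvents` uses with module-level `_eventsETag` state, except Octokit surfaces the 304 as a thrown `RequestError` rather than a status field.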
diff --git a/src/app/services/api-usage.ts b/src/app/services/api-usage.ts index 4c604779..33c82e24 100644 --- a/src/app/services/api-usage.ts +++ b/src/app/services/api-usage.ts @@ -8,7 +8,7 @@ import { onApiRequest, type ApiRequestInfo } from "./github"; const API_CALL_SOURCES = [ "lightSearch", "heavyBackfill", "forkCheck", "globalUserSearch", "unfilteredSearch", - "upstreamDiscovery", "workflowRuns", "hotPRStatus", "hotRunStatus", "notifications", + "upstreamDiscovery", "workflowRuns", "hotPRStatus", "hotRunStatus", "userEvents", "validateUser", "fetchOrgs", "fetchRepos", "rateLimitCheck", "graphql", "rest", ] as const; @@ -28,7 +28,7 @@ export const SOURCE_LABELS: Record = { workflowRuns: "Workflow Runs", hotPRStatus: "Hot PR Status", hotRunStatus: "Hot Run Status", - notifications: "Notifications", + userEvents: "Events", validateUser: "Validate User", fetchOrgs: "Fetch Orgs", fetchRepos: "Fetch Repos", @@ -195,7 +195,7 @@ export function updateResetAt(resetAt: number): void { // /^\/user$/ uses $ to avoid shadowing /user/orgs and /user/repos. // /actions/runs/\d+$ must precede /actions/runs/ (specific before general). 
 const REST_SOURCE_PATTERNS: Array<[RegExp, ApiCallSource]> = [
-  [/^\/notifications/, "notifications"],
+  [/^\/users\/[^/]+\/events/, "userEvents"],
   [/^\/users\/[^/]+$/, "validateUser"],
   [/^\/user$/, "fetchOrgs"],
   [/^\/user\/orgs/, "fetchOrgs"],
diff --git a/src/app/services/events.ts b/src/app/services/events.ts
new file mode 100644
index 00000000..e7489d4f
--- /dev/null
+++ b/src/app/services/events.ts
@@ -0,0 +1,174 @@
+import { getClient } from "./github";
+import { onAuthCleared } from "../stores/auth";
+
+// ── Types ─────────────────────────────────────────────────────────────────────
+
+export interface GitHubEvent {
+  id: string;
+  type: string;
+  actor: { id: number; login: string };
+  repo: { id: number; name: string }; // "owner/repo" format
+  payload: Record<string, unknown>;
+  created_at: string;
+}
+
+export interface RepoEventSummary {
+  repoFullName: string; // "owner/repo"
+  eventTypes: Set<string>; // which event types fired
+  hasIssueActivity: boolean;
+  hasPRActivity: boolean;
+  hasWorkflowActivity: boolean; // PushEvent can trigger workflows
+  latestEventAt: string; // ISO timestamp of newest event
+}
+
+// PullRequestReviewEvent presence on the user events endpoint is unverified;
+// included optimistically — it's harmless if absent.
+export const ACTIONABLE_EVENT_TYPES = [
+  "IssuesEvent",
+  "IssueCommentEvent",
+  "PullRequestEvent",
+  "PullRequestReviewEvent",
+  "PullRequestReviewCommentEvent",
+  "PushEvent",
+] as const;
+
+// ── Module-level ETag state ───────────────────────────────────────────────────
+
+let _eventsETag: string | null = null;
+let _lastEventId: string | null = null;
+
+// ── Auth cleanup ──────────────────────────────────────────────────────────────
+
+export function resetEventsState(): void {
+  _eventsETag = null;
+  _lastEventId = null;
+}
+
+// Self-contained cleanup — same pattern as api-usage.ts onAuthCleared registration
+onAuthCleared(resetEventsState);
+
+// ── fetchUserEvents ───────────────────────────────────────────────────────────
+
+type GitHubOctokit = NonNullable<ReturnType<typeof getClient>>;
+
+export async function fetchUserEvents(
+  octokit: GitHubOctokit,
+  username: string,
+): Promise<{ events: GitHubEvent[]; changed: boolean }> {
+  // Empty login would hit the public /users//events endpoint
+  if (!username) {
+    return { events: [], changed: false };
+  }
+
+  const headers: Record<string, string> = {};
+  if (_eventsETag) {
+    headers["If-None-Match"] = _eventsETag;
+  }
+
+  try {
+    const response = await octokit.request("GET /users/{username}/events", {
+      username,
+      per_page: 100,
+      headers,
+    });
+
+    // Store ETag for next conditional request
+    const etag = (response.headers as Record<string, string | undefined>)["etag"];
+    if (etag) {
+      _eventsETag = etag;
+    }
+
+    const allEvents = (response.data as GitHubEvent[]);
+
+    // First call: no ID filter — seed _lastEventId and return all events
+    if (_lastEventId === null) {
+      if (allEvents.length > 0) {
+        _lastEventId = allEvents[0].id; // events are newest-first
+      }
+      return { events: allEvents, changed: allEvents.length > 0 };
+    }
+
+    // Subsequent calls: filter to only events newer than _lastEventId
+    // Use numeric comparison — event IDs are numeric strings; lexicographic
+    // comparison would break for IDs of different lengths (e.g. "9" > "10").
+    const lastIdNum = parseInt(_lastEventId, 10);
+    const newEvents = allEvents.filter(
+      (e) => parseInt(e.id, 10) > lastIdNum,
+    );
+
+    if (newEvents.length > 0) {
+      _lastEventId = allEvents[0].id; // newest event is always first
+    }
+
+    return { events: newEvents, changed: newEvents.length > 0 };
+  } catch (err) {
+    // Octokit throws RequestError on 304 — same pattern as hasNotificationChanges()
+    if (
+      typeof err === "object" &&
+      err !== null &&
+      (err as { status?: number }).status === 304
+    ) {
+      return { events: [], changed: false };
+    }
+    // Silent fallback for all other errors — full refresh handles reconciliation
+    console.warn("[events] fetchUserEvents error:", err instanceof Error ? err.message : String(err));
+    return { events: [], changed: false };
+  }
+}
+
+// ── parseRepoEvents ───────────────────────────────────────────────────────────
+
+const ACTIONABLE_SET = new Set<string>(ACTIONABLE_EVENT_TYPES);
+
+export function parseRepoEvents(
+  events: GitHubEvent[],
+  trackedRepoNames: Set<string>,
+): Map<string, RepoEventSummary> {
+  const result = new Map<string, RepoEventSummary>();
+
+  for (const event of events) {
+    if (!ACTIONABLE_SET.has(event.type)) continue;
+
+    const repoNameLower = event.repo.name.toLowerCase();
+    if (!trackedRepoNames.has(repoNameLower)) continue;
+
+    // Use the canonical casing from the event payload
+    const repoFullName = event.repo.name;
+
+    let summary = result.get(repoNameLower);
+    if (!summary) {
+      summary = {
+        repoFullName,
+        eventTypes: new Set(),
+        hasIssueActivity: false,
+        hasPRActivity: false,
+        hasWorkflowActivity: false,
+        latestEventAt: event.created_at,
+      };
+      result.set(repoNameLower, summary);
+    }
+
+    summary.eventTypes.add(event.type);
+
+    if (event.type === "IssuesEvent" || event.type === "IssueCommentEvent") {
+      summary.hasIssueActivity = true;
+    }
+    if (
+      event.type === "PullRequestEvent" ||
+      event.type === "PullRequestReviewEvent" ||
+      event.type === "PullRequestReviewCommentEvent"
+    ) {
+      summary.hasPRActivity = true;
+    }
+    if (event.type === "PushEvent") {
+      summary.hasWorkflowActivity = true;
+    }
+
+    // Track latest timestamp (events are newest-first, but don't assume order)
+    if (event.created_at > summary.latestEventAt) {
+      summary.latestEventAt = event.created_at;
+    }
+  }
+
+  return result;
+}
diff --git a/src/app/services/poll.ts b/src/app/services/poll.ts
index c42fa8de..b9a29125 100644
--- a/src/app/services/poll.ts
+++ b/src/app/services/poll.ts
@@ -18,8 +18,9 @@ import {
   type HotWorkflowRunUpdate,
   resetEmptyActionRepos,
 } from "./api";
+import { fetchUserEvents, parseRepoEvents, resetEventsState, type RepoEventSummary } from "./events";
 import { detectNewItems, dispatchNotifications, _resetNotificationState } from "../lib/notifications";
-import { pushError, pushNotification, getNotifications, dismissNotificationBySource, startCycleTracking, endCycleTracking, resetNotificationState } from "../lib/errors";
+import { pushError, getNotifications, dismissNotificationBySource, startCycleTracking, endCycleTracking, resetNotificationState } from "../lib/errors";
 
 // ── Types ────────────────────────────────────────────────────────────────────
 
@@ -28,8 +29,6 @@ export interface DashboardData {
   pullRequests: PullRequest[];
   workflowRuns: WorkflowRun[];
   errors: ApiError[];
-  /** True when notifications gate determined nothing changed — consumer should keep existing data */
-  skipped?: boolean;
 }
 
 export interface PollCoordinator {
@@ -39,11 +38,6 @@
   destroy: () => void;
 }
 
-// ── Notifications gate ───────────────────────────────────────────────────────
-
-let _notifLastModified: string | null = null;
-let _notifGateDisabled = false; // Disabled after 403 (notifications scope not granted)
-
 // ── Hot poll state ──────────────────────────────────────────────────────────
 
 /** PRs with pending/null check status: maps GraphQL node ID → databaseId */
@@ -73,16 +67,7 @@ export function clearHotSets(): void {
   _hotRuns.clear();
 }
 
-/** Simulate 403 on /notifications — disables the notifications gate.
- * Used by tests to exercise the conditional background-poll guard. */
-export function disableNotifGate(): void {
-  _notifGateDisabled = true;
-}
-
 export function resetPollState(): void {
-  _notifLastModified = null;
-  _lastSuccessfulFetch = null;
-  _notifGateDisabled = false;
   _hotPRs.clear();
   _hotPRsByDbId.clear();
   _hotRuns.clear();
@@ -90,6 +75,8 @@
   _resetNotificationState();
   resetEmptyActionRepos();
   resetNotificationState();
+  resetEventsState();
+  _repoLastTargeted.clear();
 }
 
 // Auto-reset poll state on logout (avoids circular dep with auth.ts)
@@ -103,11 +90,26 @@ onAuthCleared(resetPollState);
 // NOTE: Mount flags are intentionally permanent (module lifetime) and NOT cleared
 // by resetPollState(). The createRoot runs once at module load; the effects
 // continue tracking config changes across auth cycles without re-mounting.
+let _userLoginMounted = false;
+let _userLoginKey = "";
 let _trackedUsersMounted = false;
 let _trackedUsersKey = "";
 let _monitoredReposMounted = false;
 let _monitoredReposKey = "";
 
 createRoot(() => {
+  createEffect(() => {
+    const key = user()?.login ?? "";
+    if (!_userLoginMounted) {
+      _userLoginMounted = true;
+      _userLoginKey = key;
+      return;
+    }
+    if (key !== _userLoginKey) {
+      _userLoginKey = key;
+      untrack(() => resetEventsState());
+    }
+  });
+
   createEffect(() => {
     const key = (config.trackedUsers ?? []).map((u) => u.login).sort().join(",");
     if (!_trackedUsersMounted) {
@@ -117,8 +119,10 @@
     }
     if (key !== _trackedUsersKey) {
       _trackedUsersKey = key;
-      _lastSuccessfulFetch = null; // Force next poll to bypass notifications gate
-      untrack(() => _resetNotificationState());
+      untrack(() => {
+        _resetNotificationState();
+        resetEventsState();
+      });
     }
   });
 
@@ -131,79 +135,14 @@
     }
     if (key !== _monitoredReposKey) {
       _monitoredReposKey = key;
-      _lastSuccessfulFetch = null; // Force next poll to bypass notifications gate
-      untrack(() => _resetNotificationState());
+      untrack(() => {
+        _resetNotificationState();
+        resetEventsState();
+      });
     }
   });
 });
 
-/**
- * Checks if anything changed since last poll using the Notifications API.
- * Returns true if there are new notifications (or first check), false if unchanged.
- * Uses If-Modified-Since for zero-cost 304 checks (doesn't count against rate limit).
- *
- * Auto-disables after a 403 (notifications scope not granted) to stop wasting
- * rate limit tokens on requests that will always fail.
- */
-async function hasNotificationChanges(): Promise<boolean> {
-  if (_notifGateDisabled) return true;
-
-  const octokit = getClient();
-  if (!octokit) return true;
-
-  try {
-    const headers: Record<string, string> = {};
-    if (_notifLastModified) {
-      headers["If-Modified-Since"] = _notifLastModified;
-    }
-
-    const response = await octokit.request("GET /notifications", {
-      per_page: 1,
-      headers,
-    });
-
-    // Store Last-Modified for next conditional request
-    const lastMod = (response.headers as Record<string, string | undefined>)["last-modified"];
-    if (lastMod) {
-      _notifLastModified = lastMod;
-    }
-
-    return true; // 200 = something changed
-  } catch (err) {
-    // 304 and 403 are still real API calls — tracked automatically by the hook
-    if (
-      typeof err === "object" &&
-      err !== null &&
-      (err as { status?: number }).status === 304
-    ) {
-      return false; // Nothing changed since last check
-    }
-    // 403 = notifications scope not granted — disable gate permanently
-    // to stop burning rate limit tokens on every poll cycle
-    if (
-      typeof err === "object" &&
-      err !== null &&
-      (err as { status?: number }).status === 403
-    ) {
-      console.warn("[poll] Notifications API returned 403 — disabling gate");
-      pushNotification("notifications", config.authMethod === "pat"
-        ? "Notifications API returned 403 — fine-grained tokens do not support notifications; classic tokens need the notifications scope. Background refresh in hidden tabs is disabled."
-        : "Notifications API returned 403 — check that the notifications scope is granted. Background refresh in hidden tabs is disabled.", "warning");
-      _notifGateDisabled = true;
-    }
-    return true;
-  }
-}
-
-// ── Incremental fetch timestamps ─────────────────────────────────────────────
-
-let _lastSuccessfulFetch: Date | null = null;
-
-// Force a full fetch if the notifications gate has been skipping for too long.
-// Notifications don't cover all change types (e.g., workflow runs on unwatched
-// repos, label changes without notification), so we cap staleness.
-const MAX_GATE_STALENESS_MS = 10 * 60 * 1000; // 10 minutes
-
 // ── fetchAllData orchestrator ─────────────────────────────────────────────────
 
 /**
@@ -217,20 +156,7 @@ export async function fetchAllData(
 ): Promise<DashboardData> {
   const octokit = getClient();
   if (!octokit) {
-    return { issues: [], pullRequests: [], workflowRuns: [], errors: [], skipped: true };
-  }
-
-  // On subsequent polls, check notifications first (free when 304)
-  if (_lastSuccessfulFetch) {
-    const staleness = Date.now() - _lastSuccessfulFetch.getTime();
-    if (staleness < MAX_GATE_STALENESS_MS) {
-      const changed = await hasNotificationChanges();
-      if (!changed) {
-        console.info("[poll] No notification changes — skipping full fetch");
-        return { issues: [], pullRequests: [], workflowRuns: [], errors: [], skipped: true };
-      }
-    }
-    // If staleness >= MAX_GATE_STALENESS_MS, skip the gate and force a full fetch
+    return { issues: [], pullRequests: [], workflowRuns: [], errors: [] };
   }
 
   const userLogin = user()?.login ?? "";
@@ -298,14 +224,6 @@ export async function fetchAllData(
     ...(runData?.errors ?? []),
   ];
 
-  // Only activate the notifications gate if at least one fetch succeeded.
-  // If all failed (e.g., network outage), we don't want the gate to
-  // suppress retries on the next poll cycle.
-  const anySucceeded = issuesAndPrsData !== null || runData !== null;
-  if (anySucceeded) {
-    _lastSuccessfulFetch = new Date();
-  }
-
   return {
     issues: issuesAndPrsData?.issues ?? [],
     pullRequests: issuesAndPrsData?.pullRequests ??
[], @@ -320,7 +238,7 @@ const REJITTER_WINDOW_MS = 30_000; // ±30 seconds jitter const REVISIT_THRESHOLD_MS = 2 * 60 * 1000; // 2 minutes // Sources managed by the poll coordinator — used for reconciliation -const POLL_MANAGED_SOURCES = new Set(["poll", "graphql", "rate-limit", "notifications", "search/issues", "search/prs"]); +const POLL_MANAGED_SOURCES = new Set(["poll", "graphql", "rate-limit", "search/issues", "search/prs"]); function withJitter(intervalMs: number): number { const jitter = (Math.random() * 2 - 1) * REJITTER_WINDOW_MS; @@ -332,10 +250,7 @@ function withJitter(intervalMs: number): number { * - Triggers an immediate fetch on init * - Polls at getInterval() seconds (reactive — restarts when interval changes) * - If getInterval() === 0, disables auto-polling - * - Continues polling in background tabs when notifications gate is available - * (304 responses make background polls near-zero cost). When the gate is - * disabled (fine-grained PAT or missing notifications scope), background - * polling pauses to conserve API budget. + * - Skips background polls when hidden (GraphQL POST has no 304 shortcut) * - On re-visible after >2 min hidden, fires catch-up fetch (safety net for * browser tab throttling/freezing — Safari purge, Chrome Energy Saver) * - Applies ±30 second jitter to poll interval @@ -371,7 +286,6 @@ export function createPollCoordinator( try { const data = await fetchAll(); - if (data.skipped) return; // finally handles endCycleTracking + setIsRefreshing setLastRefreshAt(new Date()); // Surface per-repo API errors globally for (const err of data.errors) { @@ -409,20 +323,18 @@ export function createPollCoordinator( const intervalMs = withJitter(intervalSec * 1000); intervalId = setInterval(() => { - // Without the notifications gate (403 — scope not granted), every background - // poll is a full fetch with no 304 shortcut. Skip background polls to avoid - // burning API budget; the catch-up handler still fires on tab return. 
-      if (document.visibilityState === "hidden" && _notifGateDisabled) return;
+      // Full refresh (GraphQL POST) has no 304 shortcut — skip background tabs
+      // to avoid burning API budget. The catch-up handler fires on tab return,
+      // and the events poll continues in background tabs (ETag 304 = zero cost).
+      if (document.visibilityState === "hidden") return;
       void doFetch();
     }, intervalMs);
   }
 
-  // Safety net for browser-level tab throttling/freezing. Background polling
-  // continues via setInterval, but browsers may throttle or freeze timers in
-  // hidden tabs (Chrome Energy Saver, Safari tab purge, Firefox timer capping).
-  // When the tab becomes visible again after >2 min, this handler fires a
-  // catch-up fetch in case the browser suppressed scheduled polls. The
-  // notifications gate (304) makes redundant fetches near-zero cost.
+  // Safety net for browser-level tab throttling/freezing. Background polls are
+  // skipped (no 304 shortcut for GraphQL), but browsers may also freeze hidden
+  // tab timers (Chrome Energy Saver, Safari tab purge, Firefox timer capping).
+  // When the tab becomes visible again after >2 min, fire a catch-up fetch.
   function handleVisibilityChange(): void {
     if (document.visibilityState === "hidden") {
       hiddenAt = Date.now();
@@ -727,3 +639,225 @@ export function createHotPollCoordinator(
 
   return { destroy };
 }
+
+// ── Targeted refresh (events-driven) ─────────────────────────────────────────
+
+const MAX_TARGETED_REPOS = 10;
+const TARGETED_COOLDOWN_MS = 2 * 60 * 1000;
+const _repoLastTargeted = new Map<string, number>();
+
+export async function fetchTargetedRepoData(
+  repoSummaries: Map<string, RepoEventSummary>,
+): Promise<DashboardData> {
+  const octokit = getClient();
+  if (!octokit) {
+    return { issues: [], pullRequests: [], workflowRuns: [], errors: [] };
+  }
+
+  const userLogin = user()?.login ?? "";
+
+  // Skip repos refreshed recently — prevents API amplification when multiple events fire for the same repo
+  const now = Date.now();
+  let entries = [...repoSummaries.entries()].filter(([key]) => {
+    const lastTargeted = _repoLastTargeted.get(key);
+    return !lastTargeted || (now - lastTargeted) >= TARGETED_COOLDOWN_MS;
+  });
+
+  // Cap targeted repos per cycle — prioritize by most recent event to focus on active work
+  if (entries.length > MAX_TARGETED_REPOS) {
+    entries.sort((a, b) => b[1].latestEventAt.localeCompare(a[1].latestEventAt));
+    entries = entries.slice(0, MAX_TARGETED_REPOS);
+  }
+
+  if (entries.length === 0) {
+    return { issues: [], pullRequests: [], workflowRuns: [], errors: [] };
+  }
+
+  // Record cooldown timestamps
+  for (const [key] of entries) {
+    _repoLastTargeted.set(key, now);
+  }
+
+  const targetRepos = entries
+    .map(([, summary]) => {
+      const parts = summary.repoFullName.split("/");
+      if (parts.length !== 2) return null;
+      return { owner: parts[0], name: parts[1], fullName: summary.repoFullName };
+    })
+    .filter((r): r is NonNullable<typeof r> => r !== null);
+
+  const workflowRepos = entries
+    .filter(([, summary]) => summary.hasWorkflowActivity)
+    .map(([, summary]) => {
+      const parts = summary.repoFullName.split("/");
+      if (parts.length !== 2) return null;
+      return { owner: parts[0], name: parts[1], fullName: summary.repoFullName };
+    })
+    .filter((r): r is NonNullable<typeof r> => r !== null);
+
+  const [issuesAndPrsResult, runResult] = await Promise.allSettled([
+    fetchIssuesAndPullRequests(octokit, targetRepos, userLogin),
+    workflowRepos.length > 0
+      ? fetchWorkflowRuns(octokit, workflowRepos, config.maxWorkflowsPerRepo, config.maxRunsPerWorkflow)
+      : Promise.resolve({ workflowRuns: [] as WorkflowRun[], errors: [] as ApiError[] }),
+  ]);
+
+  const errors: ApiError[] = [];
+  if (issuesAndPrsResult.status === "rejected") {
+    const err = issuesAndPrsResult.reason;
+    errors.push({ repo: "targeted-issues", statusCode: null, message: err instanceof Error ? err.message : String(err), retryable: true });
+  }
+  if (runResult.status === "rejected") {
+    const err = runResult.reason;
+    errors.push({ repo: "targeted-runs", statusCode: null, message: err instanceof Error ? err.message : String(err), retryable: true });
+  }
+
+  const issuesAndPrsData = issuesAndPrsResult.status === "fulfilled" ? issuesAndPrsResult.value : null;
+  const runData = runResult.status === "fulfilled" ? runResult.value : null;
+
+  return {
+    issues: issuesAndPrsData?.issues ?? [],
+    pullRequests: issuesAndPrsData?.pullRequests ?? [],
+    workflowRuns: runData?.workflowRuns ?? [],
+    errors: [...errors, ...(issuesAndPrsData?.errors ?? []), ...(runData?.errors ?? [])],
+  };
+}
+
+// ── Hot set seeding from targeted refresh ────────────────────────────────────
+
+export function seedHotSetsFromTargeted(data: DashboardData): void {
+  for (const pr of data.pullRequests) {
+    if (pr.enriched && pr.checkStatus === "pending" && pr.nodeId) {
+      if (_hotPRs.size >= MAX_HOT_PRS) break;
+      if (!_hotPRs.has(pr.nodeId)) {
+        _hotPRs.set(pr.nodeId, pr.id);
+        _hotPRsByDbId.set(pr.id, pr.nodeId);
+      }
+    }
+  }
+
+  for (const run of data.workflowRuns) {
+    if (run.status === "queued" || run.status === "in_progress") {
+      if (_hotRuns.size >= MAX_HOT_RUNS) break;
+      if (!_hotRuns.has(run.id)) {
+        const parts = run.repoFullName.split("/");
+        if (parts.length === 2) {
+          _hotRuns.set(run.id, { owner: parts[0], repo: parts[1] });
+        }
+      }
+    }
+  }
+}
+
+// ── Events poll coordinator ──────────────────────────────────────────────────
+
+// Fixed at 60s: GitHub's Events API has a ~60s server-side cache, so polling
+// more frequently returns stale data and wastes rate-limit quota.
+const EVENTS_POLL_INTERVAL_MS = 60_000; + +export function createEventsPollCoordinator( + getUsername: () => string, + getTrackedRepoNames: () => Set, + isFullRefreshing: () => boolean, + onTargetedData: (data: DashboardData, affectedRepos: string[]) => void, +): { destroy: () => void } { + let timeoutId: ReturnType | null = null; + let chainGeneration = 0; + let consecutiveFailures = 0; + const MAX_BACKOFF_MULTIPLIER = 8; + + function destroy(): void { + chainGeneration++; + consecutiveFailures = 0; + if (timeoutId !== null) { + clearTimeout(timeoutId); + timeoutId = null; + } + } + + function schedule(myGeneration: number, delayMs: number): void { + if (myGeneration !== chainGeneration) return; + const backoff = Math.min(2 ** consecutiveFailures, MAX_BACKOFF_MULTIPLIER); + timeoutId = setTimeout(() => void cycle(myGeneration), delayMs * backoff); + } + + async function cycle(myGeneration: number): Promise { + if (myGeneration !== chainGeneration) return; + + const username = getUsername(); + if (!username) { + consecutiveFailures = 0; + schedule(myGeneration, EVENTS_POLL_INTERVAL_MS); + return; + } + + const octokit = getClient(); + if (!octokit) { + consecutiveFailures = 0; + schedule(myGeneration, EVENTS_POLL_INTERVAL_MS); + return; + } + + if (isFullRefreshing()) { + consecutiveFailures = 0; + schedule(myGeneration, EVENTS_POLL_INTERVAL_MS); + return; + } + + try { + const { events, changed } = await fetchUserEvents(octokit, username); + if (myGeneration !== chainGeneration) return; + + if (!changed || events.length === 0) { + consecutiveFailures = 0; + schedule(myGeneration, EVENTS_POLL_INTERVAL_MS); + return; + } + + const repoSummaries = parseRepoEvents(events, getTrackedRepoNames()); + if (repoSummaries.size === 0) { + consecutiveFailures = 0; + schedule(myGeneration, EVENTS_POLL_INTERVAL_MS); + return; + } + + if (isFullRefreshing()) { + consecutiveFailures = 0; + schedule(myGeneration, EVENTS_POLL_INTERVAL_MS); + return; + } + + const preGeneration = 
getHotPollGeneration(); + + const data = await fetchTargetedRepoData(repoSummaries); + if (myGeneration !== chainGeneration) return; + + if (preGeneration !== getHotPollGeneration()) { + consecutiveFailures = 0; + schedule(myGeneration, EVENTS_POLL_INTERVAL_MS); + return; + } + + if (isFullRefreshing()) { + consecutiveFailures = 0; + schedule(myGeneration, EVENTS_POLL_INTERVAL_MS); + return; + } + + const affectedRepos = [...repoSummaries.values()].map((s) => s.repoFullName); + onTargetedData(data, affectedRepos); + consecutiveFailures = 0; + } catch (err) { + consecutiveFailures++; + console.warn("[events-poll] cycle error:", err instanceof Error ? err.message : String(err)); + } + + schedule(myGeneration, EVENTS_POLL_INTERVAL_MS); + } + + // First cycle fires immediately (delay=0) to establish ETag baseline + const gen = chainGeneration; + timeoutId = setTimeout(() => void cycle(gen), 0); + + return { destroy }; +} diff --git a/tests/components/DashboardPage.test.tsx b/tests/components/DashboardPage.test.tsx index 8fa105a5..331d4fe9 100644 --- a/tests/components/DashboardPage.test.tsx +++ b/tests/components/DashboardPage.test.tsx @@ -40,6 +40,13 @@ vi.mock("../../src/app/services/github", () => ({ getClient: () => null, })); +// Mock notifications lib +vi.mock("../../src/app/lib/notifications", () => ({ + detectNewItems: vi.fn(() => []), + dispatchNotifications: vi.fn(), + _resetNotificationState: vi.fn(), +})); + // Mock errors lib — return empty by default vi.mock("../../src/app/lib/errors", () => ({ getErrors: vi.fn().mockReturnValue([]), @@ -66,6 +73,8 @@ let capturedOnHotData: (( runUpdates: Map, generation: number, ) => void) | null = null; +// capturedOnTargetedData is populated by the createEventsPollCoordinator mock +let capturedOnTargetedData: ((data: DashboardData, affectedRepos: string[]) => void) | null = null; // DashboardPage and pollService are imported dynamically after each vi.resetModules() // so the module-level _coordinator variable is always 
fresh (null) per test. @@ -115,7 +124,14 @@ beforeEach(async () => { return { destroy: vi.fn() }; } ), + createEventsPollCoordinator: vi.fn().mockImplementation( + (_getUsername: unknown, _trackedRepoNames: unknown, _isFullRefreshing: unknown, onTargetedData: typeof capturedOnTargetedData) => { + capturedOnTargetedData = onTargetedData; + return { destroy: vi.fn() }; + } + ), rebuildHotSets: vi.fn(), + seedHotSetsFromTargeted: vi.fn(), clearHotSets: vi.fn(), getHotPollGeneration: vi.fn().mockReturnValue(0), })); @@ -132,6 +148,7 @@ beforeEach(async () => { mockLocationReplace.mockClear(); capturedFetchAll = null; capturedOnHotData = null; + capturedOnTargetedData = null; vi.mocked(authStore.clearAuth).mockClear(); vi.mocked(authStore.expireToken).mockClear(); vi.mocked(pollService.fetchAllData).mockResolvedValue({ @@ -582,61 +599,6 @@ describe("DashboardPage — data flow", () => { screen.getByRole("status"); }); - it("skipped fetch (notifications gate) keeps existing data", async () => { - const issues = [makeIssue({ id: 5, title: "Existing issue" })]; - // First call: returns real data; subsequent calls: skipped=true - vi.mocked(pollService.fetchAllData) - .mockResolvedValueOnce({ issues, pullRequests: [], workflowRuns: [], errors: [] }) - .mockResolvedValue({ issues: [], pullRequests: [], workflowRuns: [], errors: [], skipped: true }); - render(() => ); - await waitFor(() => { - // Repo group header visible (collapsed — verify data reached the tab) - screen.getByText("owner/repo"); - screen.getByText("1 issue"); - }); - - // Trigger a second fetch via the captured callback — skipped result should not erase data - await capturedFetchAll?.(); - // Data still present (collapsed repo group summary persists) - screen.getByText("1 issue"); - }); - - it("auto-prune runs after first non-skipped poll even if a skipped poll occurred first", async () => { - configStore.updateConfig({ - enableTracking: true, - selectedRepos: [{ owner: "org", name: "repo", fullName: "org/repo" 
}], - }); - viewStore.updateViewState({ - trackedItems: [{ - id: 555, - number: 55, - type: "issue" as const, - repoFullName: "org/repo", - title: "Will be pruned after non-skipped poll", - addedAt: Date.now(), - }], - }); - - // First call: skipped — hasFetchedFresh must stay false, no pruning - vi.mocked(pollService.fetchAllData) - .mockResolvedValueOnce({ issues: [], pullRequests: [], workflowRuns: [], errors: [], skipped: true }) - // Second call: real data with empty issues — item 555 absent means closed - .mockResolvedValueOnce({ issues: [], pullRequests: [], workflowRuns: [], errors: [] }); - - render(() => ); - - // After the first (skipped) fetch, tracked item must NOT be pruned yet - await waitFor(() => { - expect(viewStore.viewState.trackedItems.length).toBe(1); - }); - - // Trigger a second fetch — non-skipped, sets hasFetchedFresh=true, triggers prune - await capturedFetchAll?.(); - - await waitFor(() => { - expect(viewStore.viewState.trackedItems.length).toBe(0); - }); - }); }); describe("DashboardPage — auth error handling", () => { @@ -1662,3 +1624,168 @@ describe("DashboardPage — tabCounts applies filterPreset", () => { }); }); }); + +describe("DashboardPage — events poll targeted merge", () => { + it("preserves tracked-user-only items from affected repos", async () => { + const trackedUserIssue = makeIssue({ id: 99, title: "Tracked user only", repoFullName: "org/repo", surfacedBy: ["other-user"] }); + vi.mocked(pollService.fetchAllData).mockResolvedValue({ + issues: [trackedUserIssue, makeIssue({ id: 1, title: "My issue", repoFullName: "org/repo" })], + pullRequests: [], + workflowRuns: [], + errors: [], + }); + + render(() => ); + await waitFor(() => { screen.getByText("org/repo"); }); + + const targetedData: DashboardData = { + issues: [makeIssue({ id: 1, title: "My issue updated", repoFullName: "org/repo" })], + pullRequests: [], + workflowRuns: [], + errors: [], + }; + capturedOnTargetedData?.(targetedData, ["org/repo"]); + + await waitFor(() 
=> { + screen.getByText("2 issues"); + }); + }); + + it("merges surfacedBy annotations via union for issues", async () => { + const sharedIssue = makeIssue({ id: 50, title: "Shared", repoFullName: "org/repo", surfacedBy: ["primary", "tracked-user"] }); + vi.mocked(pollService.fetchAllData).mockResolvedValue({ + issues: [sharedIssue], + pullRequests: [], + workflowRuns: [], + errors: [], + }); + + render(() => ); + await waitFor(() => { screen.getByText("org/repo"); }); + + const targetedIssue = makeIssue({ id: 50, title: "Shared updated", repoFullName: "org/repo", surfacedBy: ["primary"] }); + const targetedData: DashboardData = { + issues: [targetedIssue], + pullRequests: [], + workflowRuns: [], + errors: [], + }; + capturedOnTargetedData?.(targetedData, ["org/repo"]); + + await waitFor(() => { + screen.getByText("1 issue"); + }); + + // handleTargetedData mutates data items in-place before merging into the store + expect(targetedIssue.surfacedBy).toEqual(expect.arrayContaining(["primary", "tracked-user"])); + expect(targetedIssue.surfacedBy).toHaveLength(2); + }); + + it("merges surfacedBy annotations via union for pull requests", async () => { + const sharedPR = makePullRequest({ id: 60, repoFullName: "org/repo", surfacedBy: ["primary", "tracked-user"] }); + vi.mocked(pollService.fetchAllData).mockResolvedValue({ + issues: [], + pullRequests: [sharedPR], + workflowRuns: [], + errors: [], + }); + + render(() => ); + await waitFor(() => expect(capturedOnTargetedData).not.toBeNull()); + + const targetedPR = makePullRequest({ id: 60, repoFullName: "org/repo", surfacedBy: ["primary"] }); + const targetedData: DashboardData = { + issues: [], + pullRequests: [targetedPR], + workflowRuns: [], + errors: [], + }; + capturedOnTargetedData?.(targetedData, ["org/repo"]); + + // handleTargetedData mutates data items in-place before merging into the store + expect(targetedPR.surfacedBy).toEqual(expect.arrayContaining(["primary", "tracked-user"])); + 
expect(targetedPR.surfacedBy).toHaveLength(2); + }); + + it("calls detectNewItems and dispatchNotifications after targeted merge", async () => { + vi.mocked(pollService.fetchAllData).mockResolvedValue({ + issues: [], + pullRequests: [], + workflowRuns: [], + errors: [], + }); + + render(() => ); + await waitFor(() => expect(capturedOnTargetedData).not.toBeNull()); + + const notifLib = await import("../../src/app/lib/notifications"); + vi.mocked(notifLib.detectNewItems).mockClear(); + vi.mocked(notifLib.dispatchNotifications).mockClear(); + + const targetedData: DashboardData = { + issues: [makeIssue({ id: 200, title: "New via events", repoFullName: "org/repo" })], + pullRequests: [], + workflowRuns: [], + errors: [], + }; + capturedOnTargetedData?.(targetedData, ["org/repo"]); + + expect(vi.mocked(notifLib.detectNewItems)).toHaveBeenCalledWith(targetedData); + expect(vi.mocked(notifLib.dispatchNotifications)).toHaveBeenCalled(); + }); + + it("calls seedHotSetsFromTargeted after targeted merge", async () => { + vi.mocked(pollService.fetchAllData).mockResolvedValue({ + issues: [], + pullRequests: [], + workflowRuns: [], + errors: [], + }); + + render(() => ); + await waitFor(() => expect(capturedOnTargetedData).not.toBeNull()); + + vi.mocked(pollService.seedHotSetsFromTargeted).mockClear(); + + const targetedData: DashboardData = { + issues: [], + pullRequests: [makePullRequest({ id: 300, repoFullName: "org/repo" })], + workflowRuns: [], + errors: [], + }; + capturedOnTargetedData?.(targetedData, ["org/repo"]); + + expect(vi.mocked(pollService.seedHotSetsFromTargeted)).toHaveBeenCalledWith(targetedData); + }); + + it("does not update lastRefreshedAt after targeted merge (MCP relay exclusion)", async () => { + vi.mocked(pollService.fetchAllData).mockResolvedValue({ + issues: [makeIssue({ id: 1, repoFullName: "org/repo" })], + pullRequests: [], + workflowRuns: [], + errors: [], + }); + + render(() => ); + await waitFor(() => { screen.getByText("org/repo"); }); + + // 
The targeted merge callback does NOT call setDashboardData with a new + // lastRefreshedAt — it uses produce() which only modifies issues/PRs/runs. + // This means the MCP relay effect (which tracks lastRefreshedAt) won't fire. + // We verify this by checking that rebuildHotSets is NOT called (it's only + // called on full refresh, not targeted merge). + vi.mocked(pollService.rebuildHotSets).mockClear(); + + const targetedData: DashboardData = { + issues: [makeIssue({ id: 1, title: "Updated", repoFullName: "org/repo" })], + pullRequests: [], + workflowRuns: [], + errors: [], + }; + capturedOnTargetedData?.(targetedData, ["org/repo"]); + + // seedHotSetsFromTargeted is called (additive), NOT rebuildHotSets (full replacement) + expect(vi.mocked(pollService.rebuildHotSets)).not.toHaveBeenCalled(); + expect(vi.mocked(pollService.seedHotSetsFromTargeted)).toHaveBeenCalledWith(targetedData); + }); +}); diff --git a/tests/components/settings/ApiUsageSection.test.tsx b/tests/components/settings/ApiUsageSection.test.tsx index 2a7e657f..df340293 100644 --- a/tests/components/settings/ApiUsageSection.test.tsx +++ b/tests/components/settings/ApiUsageSection.test.tsx @@ -35,7 +35,7 @@ vi.mock("../../../src/app/services/api-usage", () => ({ lightSearch: "Light Search", heavyBackfill: "PR Backfill", forkCheck: "Fork Check", globalUserSearch: "Tracked User Search", unfilteredSearch: "Unfiltered Search", upstreamDiscovery: "Upstream Discovery", workflowRuns: "Workflow Runs", - hotPRStatus: "Hot PR Status", hotRunStatus: "Hot Run Status", notifications: "Notifications", + hotPRStatus: "Hot PR Status", hotRunStatus: "Hot Run Status", userEvents: "Events", validateUser: "Validate User", fetchOrgs: "Fetch Orgs", fetchRepos: "Fetch Repos", rateLimitCheck: "Rate Limit Check", graphql: "GraphQL (other)", rest: "REST (other)", }, @@ -208,7 +208,7 @@ describe("ApiUsageSection — source label display", () => { ["workflowRuns", "Workflow Runs"], ["hotPRStatus", "Hot PR Status"], 
["hotRunStatus", "Hot Run Status"], - ["notifications", "Notifications"], + ["userEvents", "Events"], ["validateUser", "Validate User"], ["fetchOrgs", "Fetch Orgs"], ["fetchRepos", "Fetch Repos"], diff --git a/tests/lib/oauth.test.ts b/tests/lib/oauth.test.ts index 222d2506..3bc4361f 100644 --- a/tests/lib/oauth.test.ts +++ b/tests/lib/oauth.test.ts @@ -79,9 +79,9 @@ describe("oauth helpers", () => { expect(url.searchParams.get("scope")).toBeTruthy(); }); - it("scope value is 'repo read:org notifications'", () => { + it("scope value is 'repo read:org'", () => { const url = new URL(buildAuthorizeUrl()); - expect(url.searchParams.get("scope")).toBe("repo read:org notifications"); + expect(url.searchParams.get("scope")).toBe("repo read:org"); }); it("URL contains state param matching sessionStorage", () => { diff --git a/tests/services/api-usage.test.ts b/tests/services/api-usage.test.ts index 08d4a33f..e74e7d49 100644 --- a/tests/services/api-usage.test.ts +++ b/tests/services/api-usage.test.ts @@ -96,7 +96,7 @@ describe("trackApiCall — increment and record creation", () => { }); it("tracks separate records for different pool types", () => { - mod.trackApiCall("notifications", "core"); + mod.trackApiCall("userEvents", "core"); mod.trackApiCall("lightSearch", "graphql"); const snapshot = mod.getUsageSnapshot(); expect(snapshot).toHaveLength(2); @@ -125,7 +125,7 @@ describe("getUsageSnapshot — sorting", () => { }); it("returns records sorted by count descending", () => { - mod.trackApiCall("notifications", "core", 1); + mod.trackApiCall("userEvents", "core", 1); mod.trackApiCall("lightSearch", "graphql", 5); mod.trackApiCall("workflowRuns", "core", 3); const snapshot = mod.getUsageSnapshot(); @@ -136,7 +136,7 @@ describe("getUsageSnapshot — sorting", () => { it("tiebreaks by lastCalledAt descending when counts are equal", () => { vi.setSystemTime(new Date("2026-01-01T10:00:00Z")); - mod.trackApiCall("notifications", "core", 2); + mod.trackApiCall("userEvents", "core", 
2); vi.setSystemTime(new Date("2026-01-01T10:00:10Z")); mod.trackApiCall("lightSearch", "graphql", 2); @@ -144,7 +144,7 @@ describe("getUsageSnapshot — sorting", () => { const snapshot = mod.getUsageSnapshot(); // lightSearch called more recently — should be first expect(snapshot[0].source).toBe("lightSearch"); - expect(snapshot[1].source).toBe("notifications"); + expect(snapshot[1].source).toBe("userEvents"); }); }); @@ -440,8 +440,8 @@ describe("deriveSource — URL pattern matching", () => { } it.each([ - ["/notifications", "notifications"], - ["/notifications?per_page=1", "notifications"], + ["/users/testuser/events", "userEvents"], + ["/users/testuser/events?per_page=100", "userEvents"], ["/users/octocat", "validateUser"], ["/user", "fetchOrgs"], ["/user/orgs", "fetchOrgs"], diff --git a/tests/services/events-poll.test.ts b/tests/services/events-poll.test.ts new file mode 100644 index 00000000..65a59a6e --- /dev/null +++ b/tests/services/events-poll.test.ts @@ -0,0 +1,823 @@ +import { describe, it, expect, vi, beforeEach, afterEach } from "vitest"; +import { createRoot } from "solid-js"; +import { makePullRequest, makeWorkflowRun } from "../helpers/index"; + +// ── Mocks ───────────────────────────────────────────────────────────────────── + +const mockGetClient = vi.fn(); +vi.mock("../../src/app/services/github", () => ({ + getClient: () => mockGetClient(), + fetchRateLimitDetails: vi.fn(() => Promise.resolve(null)), + cachedRequest: vi.fn(), + updateGraphqlRateLimit: vi.fn(), + updateRateLimitFromHeaders: vi.fn(), + onApiRequest: vi.fn(), + initClientWatcher: vi.fn(), +})); + +vi.mock("../../src/app/lib/errors", () => ({ + pushError: vi.fn(), + clearErrors: vi.fn(), + getErrors: vi.fn(() => []), + getNotifications: vi.fn(() => []), + dismissNotificationBySource: vi.fn(), + startCycleTracking: vi.fn(), + endCycleTracking: vi.fn(() => new Set()), + pushNotification: vi.fn(), + clearNotifications: vi.fn(), + resetNotificationState: vi.fn(), + addMutedSource: 
vi.fn(), + isMuted: vi.fn(() => false), + clearMutedSources: vi.fn(), +})); + +vi.mock("../../src/app/lib/notifications", () => ({ + detectNewItems: vi.fn(() => []), + dispatchNotifications: vi.fn(), + _resetNotificationState: vi.fn(), +})); + +const mockFetchUserEvents = vi.fn(); +const mockResetEventsState = vi.fn(); +const mockParseRepoEvents = vi.fn(); + +vi.mock("../../src/app/services/events", () => ({ + fetchUserEvents: (...args: unknown[]) => mockFetchUserEvents(...args), + parseRepoEvents: (...args: unknown[]) => mockParseRepoEvents(...args), + resetEventsState: () => mockResetEventsState(), +})); + +const mockFetchIssuesAndPullRequests = vi.fn(); +const mockFetchWorkflowRuns = vi.fn(); +vi.mock("../../src/app/services/api", () => ({ + fetchIssuesAndPullRequests: (...args: unknown[]) => mockFetchIssuesAndPullRequests(...args), + fetchWorkflowRuns: (...args: unknown[]) => mockFetchWorkflowRuns(...args), + fetchHotPRStatus: vi.fn(async () => ({ results: new Map(), hadErrors: false })), + fetchWorkflowRunById: vi.fn(async () => ({ id: 1, status: "completed", conclusion: "success", updatedAt: "2026-01-01T00:00:00Z", completedAt: "2026-01-01T00:05:00Z" })), + pooledAllSettled: vi.fn(async (tasks: (() => Promise)[]) => { + const results = await Promise.allSettled(tasks.map((t) => t())); + return results; + }), + resetEmptyActionRepos: vi.fn(), +})); + +import { fetchHotPRStatus, fetchWorkflowRunById } from "../../src/app/services/api"; + +vi.mock("../../src/app/stores/config", () => ({ + config: { + selectedRepos: [], + maxWorkflowsPerRepo: 5, + maxRunsPerWorkflow: 3, + hotPollInterval: 30, + trackedUsers: [], + monitoredRepos: [], + }, +})); + +vi.mock("../../src/app/stores/auth", () => ({ + user: vi.fn(() => null), + onAuthCleared: vi.fn(), +})); + +vi.mock("../../src/app/services/api-usage", () => ({ + checkAndResetIfExpired: vi.fn(), +})); + +vi.mock("@sentry/solid", () => ({ + captureException: vi.fn(), +})); + +// Import AFTER mocks +import { + 
resetPollState, + fetchTargetedRepoData, + fetchHotData, + seedHotSetsFromTargeted, + createEventsPollCoordinator, + getHotPollGeneration, + clearHotSets, + rebuildHotSets, + type DashboardData, +} from "../../src/app/services/poll"; + +// ── Helpers ─────────────────────────────────────────────────────────────────── + +const emptyData: DashboardData = { + issues: [], + pullRequests: [], + workflowRuns: [], + errors: [], +}; + +function makeOctokit() { + return { + request: vi.fn(() => Promise.resolve({ data: {}, headers: {} })), + graphql: vi.fn(() => Promise.resolve({ nodes: [], rateLimit: { limit: 5000, remaining: 4999, resetAt: "2026-01-01T00:00:00Z" } })), + hook: { before: vi.fn() }, + }; +} + +function makeRepoSummary(overrides: { + repoFullName?: string; + hasIssueActivity?: boolean; + hasPRActivity?: boolean; + hasWorkflowActivity?: boolean; + latestEventAt?: string; +} = {}) { + return { + repoFullName: overrides.repoFullName ?? "owner/repo", + eventTypes: new Set(), + hasIssueActivity: overrides.hasIssueActivity ?? false, + hasPRActivity: overrides.hasPRActivity ?? false, + hasWorkflowActivity: overrides.hasWorkflowActivity ?? false, + latestEventAt: overrides.latestEventAt ?? 
"2026-01-01T00:00:00Z", + }; +} + +async function flushPromises(): Promise { + for (let i = 0; i < 10; i++) await Promise.resolve(); +} + +// ── Tests ───────────────────────────────────────────────────────────────────── + +describe("fetchTargetedRepoData", () => { + beforeEach(() => { + resetPollState(); + vi.clearAllMocks(); + mockFetchIssuesAndPullRequests.mockResolvedValue({ issues: [], pullRequests: [], errors: [] }); + mockFetchWorkflowRuns.mockResolvedValue({ workflowRuns: [], errors: [] }); + }); + + it("returns empty data when no octokit client", async () => { + mockGetClient.mockReturnValue(null); + const summaries = new Map([["owner/repo", makeRepoSummary()]]); + + const result = await fetchTargetedRepoData(summaries); + + expect(result.issues).toHaveLength(0); + expect(result.pullRequests).toHaveLength(0); + expect(mockFetchIssuesAndPullRequests).not.toHaveBeenCalled(); + }); + + it("calls fetchIssuesAndPullRequests with target repos", async () => { + mockGetClient.mockReturnValue(makeOctokit()); + const summaries = new Map([ + ["owner/repo-a", makeRepoSummary({ repoFullName: "owner/repo-a" })], + ]); + + await fetchTargetedRepoData(summaries); + + expect(mockFetchIssuesAndPullRequests).toHaveBeenCalledTimes(1); + const calledRepos = mockFetchIssuesAndPullRequests.mock.calls[0][1] as Array<{ owner: string; name: string }>; + expect(calledRepos).toContainEqual(expect.objectContaining({ owner: "owner", name: "repo-a" })); + }); + + it("calls fetchWorkflowRuns only for repos with hasWorkflowActivity=true", async () => { + mockGetClient.mockReturnValue(makeOctokit()); + const summaries = new Map([ + ["owner/repo-a", makeRepoSummary({ repoFullName: "owner/repo-a", hasWorkflowActivity: true })], + ["owner/repo-b", makeRepoSummary({ repoFullName: "owner/repo-b", hasWorkflowActivity: false })], + ]); + + await fetchTargetedRepoData(summaries); + + expect(mockFetchWorkflowRuns).toHaveBeenCalledTimes(1); + const workflowRepos = 
mockFetchWorkflowRuns.mock.calls[0][1] as Array<{ owner: string; name: string }>; + expect(workflowRepos).toHaveLength(1); + expect(workflowRepos[0]).toMatchObject({ owner: "owner", name: "repo-a" }); + }); + + it("skips fetchWorkflowRuns when no repos have hasWorkflowActivity", async () => { + mockGetClient.mockReturnValue(makeOctokit()); + const summaries = new Map([ + ["owner/repo", makeRepoSummary({ hasWorkflowActivity: false })], + ]); + + await fetchTargetedRepoData(summaries); + + expect(mockFetchWorkflowRuns).not.toHaveBeenCalled(); + }); + + it("caps targeted repos at MAX_TARGETED_REPOS=10 and selects the 10 most recent by latestEventAt", async () => { + mockGetClient.mockReturnValue(makeOctokit()); + + const summaries = new Map<string, ReturnType<typeof makeRepoSummary>>(); + for (let i = 0; i < 12; i++) { + const name = `owner/repo-${i}`; + const ts = i < 2 + ? `2026-01-0${i + 1}T00:00:00Z` + : `2026-02-${String(i).padStart(2, "0")}T00:00:00Z`; + summaries.set(name.toLowerCase(), makeRepoSummary({ repoFullName: name, latestEventAt: ts })); + } + + await fetchTargetedRepoData(summaries); + + const calledRepos = mockFetchIssuesAndPullRequests.mock.calls[0][1] as Array<{ owner: string; name: string }>; + expect(calledRepos).toHaveLength(10); + + const calledNames = calledRepos.map((r) => r.name); + expect(calledNames).not.toContain("repo-0"); + expect(calledNames).not.toContain("repo-1"); + }); + + it("applies per-repo cooldown: skips repos targeted within TARGETED_COOLDOWN_MS", async () => { + mockGetClient.mockReturnValue(makeOctokit()); + const summaries = new Map([ + ["owner/repo", makeRepoSummary({ repoFullName: "owner/repo" })], + ]); + + // First call — repo is targeted + await fetchTargetedRepoData(summaries); + const firstCallRepos = mockFetchIssuesAndPullRequests.mock.calls[0][1] as unknown[]; + expect(firstCallRepos).toHaveLength(1); + + // Second immediate call — repo is on cooldown, should be skipped + mockFetchIssuesAndPullRequests.mockClear(); + await fetchTargetedRepoData(summaries); 
+ + // fetchTargetedRepoData returns early (entries.length === 0) without calling fetchIssuesAndPullRequests + expect(mockFetchIssuesAndPullRequests).not.toHaveBeenCalled(); + }); + + it("re-targets repo after TARGETED_COOLDOWN_MS has elapsed", async () => { + vi.useFakeTimers(); + try { + mockGetClient.mockReturnValue(makeOctokit()); + const summaries = new Map([ + ["owner/repo", makeRepoSummary({ repoFullName: "owner/repo" })], + ]); + + await fetchTargetedRepoData(summaries); + expect(mockFetchIssuesAndPullRequests).toHaveBeenCalledTimes(1); + + vi.setSystemTime(Date.now() + 120_001); // TARGETED_COOLDOWN_MS + 1ms + mockFetchIssuesAndPullRequests.mockClear(); + + await fetchTargetedRepoData(summaries); + expect(mockFetchIssuesAndPullRequests).toHaveBeenCalledTimes(1); + } finally { + vi.useRealTimers(); + } + }); +}); + +// ── seedHotSetsFromTargeted ─────────────────────────────────────────────────── + +describe("seedHotSetsFromTargeted", () => { + beforeEach(() => { + resetPollState(); + mockGetClient.mockReturnValue(makeOctokit()); + vi.mocked(fetchHotPRStatus).mockClear(); + vi.mocked(fetchWorkflowRunById).mockClear(); + }); + + it("adds enriched pending-checkStatus PRs with nodeId to hot set", async () => { + seedHotSetsFromTargeted({ + ...emptyData, + pullRequests: [ + makePullRequest({ id: 1, checkStatus: "pending", enriched: true, nodeId: "PR_a" }), + ], + }); + + await fetchHotData(); + + // fetchHotPRStatus should be called with the seeded node ID + expect(fetchHotPRStatus).toHaveBeenCalledTimes(1); + const calledNodeIds = vi.mocked(fetchHotPRStatus).mock.calls[0][1] as string[]; + expect(calledNodeIds).toContain("PR_a"); + }); + + it("does NOT add PRs with checkStatus=null to hot set", async () => { + seedHotSetsFromTargeted({ + ...emptyData, + pullRequests: [ + makePullRequest({ id: 2, checkStatus: null, enriched: true, nodeId: "PR_b" }), + ], + }); + + await fetchHotData(); + + // No PRs in hot set — fetchHotPRStatus not called + 
expect(fetchHotPRStatus).not.toHaveBeenCalled(); + }); + + it("does NOT add PRs that are not enriched", async () => { + seedHotSetsFromTargeted({ + ...emptyData, + pullRequests: [ + makePullRequest({ id: 3, checkStatus: "pending", enriched: false, nodeId: "PR_c" }), + ], + }); + + await fetchHotData(); + + expect(fetchHotPRStatus).not.toHaveBeenCalled(); + }); + + it("does NOT remove existing hot items (additive only)", async () => { + // Seed existing hot set via rebuildHotSets + rebuildHotSets({ + ...emptyData, + pullRequests: [ + makePullRequest({ id: 10, checkStatus: "pending", enriched: true, nodeId: "PR_existing" }), + ], + }); + + // seedHotSetsFromTargeted adds new PR without clearing the existing one + seedHotSetsFromTargeted({ + ...emptyData, + pullRequests: [ + makePullRequest({ id: 11, checkStatus: "pending", enriched: true, nodeId: "PR_new" }), + ], + }); + + await fetchHotData(); + + expect(fetchHotPRStatus).toHaveBeenCalledTimes(1); + const calledNodeIds = vi.mocked(fetchHotPRStatus).mock.calls[0][1] as string[]; + expect(calledNodeIds).toContain("PR_existing"); + expect(calledNodeIds).toContain("PR_new"); + }); + + it("does NOT increment _hotPollGeneration", () => { + const genBefore = getHotPollGeneration(); + + seedHotSetsFromTargeted({ + ...emptyData, + pullRequests: [ + makePullRequest({ id: 20, checkStatus: "pending", enriched: true, nodeId: "PR_gen" }), + ], + }); + + expect(getHotPollGeneration()).toBe(genBefore); + }); + + it("adds queued/in_progress workflow runs to hot set", async () => { + seedHotSetsFromTargeted({ + ...emptyData, + workflowRuns: [ + makeWorkflowRun({ id: 42, status: "in_progress", conclusion: null, repoFullName: "owner/repo" }), + makeWorkflowRun({ id: 43, status: "queued", conclusion: null, repoFullName: "owner/repo" }), + ], + }); + + await fetchHotData(); + + // fetchWorkflowRunById called once per run via pooledAllSettled + expect(fetchWorkflowRunById).toHaveBeenCalledTimes(2); + }); +}); + +// ── 
createEventsPollCoordinator ─────────────────────────────────────────────── + +describe("createEventsPollCoordinator", () => { + beforeEach(() => { + vi.useFakeTimers(); + resetPollState(); + vi.clearAllMocks(); + mockGetClient.mockReturnValue(makeOctokit()); + }); + + afterEach(() => { + vi.useRealTimers(); + }); + + it("fires first cycle immediately (delay=0)", async () => { + mockFetchUserEvents.mockResolvedValue({ events: [], changed: false }); + + let coordinator: { destroy: () => void }; + createRoot((dispose) => { + coordinator = createEventsPollCoordinator( + () => "testuser", + () => new Set(["owner/repo"]), + () => false, + vi.fn(), + ); + dispose(); + }); + + // Trigger the immediate setTimeout(..., 0) then clean up + vi.advanceTimersByTime(0); + await flushPromises(); + coordinator!.destroy(); + + expect(mockFetchUserEvents).toHaveBeenCalledTimes(1); + }); + + it("calls onTargetedData when events indicate changes in tracked repos", async () => { + const event = { + id: "100", + type: "IssuesEvent", + actor: { id: 1, login: "user" }, + repo: { id: 1, name: "owner/repo" }, + payload: {}, + created_at: "2026-01-01T00:00:00Z", + }; + mockFetchUserEvents.mockResolvedValue({ events: [event], changed: true }); + mockParseRepoEvents.mockReturnValue( + new Map([["owner/repo", makeRepoSummary({ repoFullName: "owner/repo" })]]) + ); + mockFetchIssuesAndPullRequests.mockResolvedValue({ issues: [], pullRequests: [], errors: [] }); + + const onTargetedData = vi.fn(); + + let coordinator: { destroy: () => void }; + createRoot((dispose) => { + coordinator = createEventsPollCoordinator( + () => "testuser", + () => new Set(["owner/repo"]), + () => false, + onTargetedData, + ); + dispose(); + }); + + vi.advanceTimersByTime(0); + await flushPromises(); + coordinator!.destroy(); + + expect(onTargetedData).toHaveBeenCalledTimes(1); + }); + + it("does NOT call onTargetedData when changed=false", async () => { + mockFetchUserEvents.mockResolvedValue({ events: [], changed: 
false }); + + const onTargetedData = vi.fn(); + + let coordinator: { destroy: () => void }; + createRoot((dispose) => { + coordinator = createEventsPollCoordinator( + () => "testuser", + () => new Set(["owner/repo"]), + () => false, + onTargetedData, + ); + dispose(); + }); + + vi.advanceTimersByTime(0); + await flushPromises(); + coordinator!.destroy(); + + expect(onTargetedData).not.toHaveBeenCalled(); + }); + + it("does NOT call parseRepoEvents when changed=true but events.length=0 (defense-in-depth)", async () => { + // After the fetchUserEvents fix, changed=true with empty events can't occur in production. + // This tests the coordinator's defensive || guard at poll.ts: if (!changed || events.length === 0). + mockFetchUserEvents.mockResolvedValue({ events: [], changed: true }); + + const onTargetedData = vi.fn(); + + let coordinator: { destroy: () => void }; + createRoot((dispose) => { + coordinator = createEventsPollCoordinator( + () => "testuser", + () => new Set(["owner/repo"]), + () => false, + onTargetedData, + ); + dispose(); + }); + + vi.advanceTimersByTime(0); + await flushPromises(); + coordinator!.destroy(); + + expect(mockFetchUserEvents).toHaveBeenCalledTimes(1); + expect(mockParseRepoEvents).not.toHaveBeenCalled(); + expect(onTargetedData).not.toHaveBeenCalled(); + }); + + it("does NOT call onTargetedData when parseRepoEvents returns empty map (untracked repos)", async () => { + const event = { + id: "300", + type: "IssuesEvent", + actor: { id: 1, login: "user" }, + repo: { id: 1, name: "other/untracked" }, + payload: {}, + created_at: "2026-01-01T00:00:00Z", + }; + mockFetchUserEvents.mockResolvedValue({ events: [event], changed: true }); + mockParseRepoEvents.mockReturnValue(new Map()); + + const onTargetedData = vi.fn(); + + let coordinator: { destroy: () => void }; + createRoot((dispose) => { + coordinator = createEventsPollCoordinator( + () => "testuser", + () => new Set(["owner/repo"]), + () => false, + onTargetedData, + ); + dispose(); + 
}); + + vi.advanceTimersByTime(0); + await flushPromises(); + coordinator!.destroy(); + + expect(mockFetchUserEvents).toHaveBeenCalledTimes(1); + expect(mockParseRepoEvents).toHaveBeenCalledTimes(1); + expect(onTargetedData).not.toHaveBeenCalled(); + }); + + it("skips cycle when isFullRefreshing becomes true after fetchUserEvents resolves", async () => { + const event = { + id: "200", + type: "IssuesEvent", + actor: { id: 1, login: "user" }, + repo: { id: 1, name: "owner/repo" }, + payload: {}, + created_at: "2026-01-01T00:00:00Z", + }; + mockFetchUserEvents.mockResolvedValue({ events: [event], changed: true }); + mockParseRepoEvents.mockReturnValue( + new Map([["owner/repo", makeRepoSummary({ repoFullName: "owner/repo" })]]) + ); + + const isFullRefreshing = vi.fn().mockReturnValueOnce(false).mockReturnValue(true); + const onTargetedData = vi.fn(); + + let coordinator: { destroy: () => void }; + createRoot((dispose) => { + coordinator = createEventsPollCoordinator( + () => "testuser", + () => new Set(["owner/repo"]), + isFullRefreshing, + onTargetedData, + ); + dispose(); + }); + + vi.advanceTimersByTime(0); + await flushPromises(); + coordinator!.destroy(); + + expect(mockFetchUserEvents).toHaveBeenCalledTimes(1); + expect(mockFetchIssuesAndPullRequests).not.toHaveBeenCalled(); + expect(onTargetedData).not.toHaveBeenCalled(); + }); + + it("skips cycle when isFullRefreshing=true", async () => { + mockFetchUserEvents.mockResolvedValue({ events: [], changed: false }); + + let coordinator: { destroy: () => void }; + createRoot((dispose) => { + coordinator = createEventsPollCoordinator( + () => "testuser", + () => new Set(["owner/repo"]), + () => true, // full refresh in progress + vi.fn(), + ); + dispose(); + }); + + vi.advanceTimersByTime(0); + await flushPromises(); + coordinator!.destroy(); + + // fetchUserEvents not called when isFullRefreshing=true + expect(mockFetchUserEvents).not.toHaveBeenCalled(); + }); + + it("skips cycle when username is empty", async () 
=> { + mockFetchUserEvents.mockResolvedValue({ events: [], changed: false }); + + let coordinator: { destroy: () => void }; + createRoot((dispose) => { + coordinator = createEventsPollCoordinator( + () => "", + () => new Set(["owner/repo"]), + () => false, + vi.fn(), + ); + dispose(); + }); + + vi.advanceTimersByTime(0); + await flushPromises(); + coordinator!.destroy(); + + expect(mockFetchUserEvents).not.toHaveBeenCalled(); + }); + + it("skips cycle when no octokit client", async () => { + mockGetClient.mockReturnValue(null); + mockFetchUserEvents.mockResolvedValue({ events: [], changed: false }); + + let coordinator: { destroy: () => void }; + createRoot((dispose) => { + coordinator = createEventsPollCoordinator( + () => "testuser", + () => new Set(["owner/repo"]), + () => false, + vi.fn(), + ); + dispose(); + }); + + vi.advanceTimersByTime(0); + await flushPromises(); + coordinator!.destroy(); + + expect(mockFetchUserEvents).not.toHaveBeenCalled(); + }); + + it("destroy before first cycle fires prevents any cycle from running", async () => { + mockFetchUserEvents.mockResolvedValue({ events: [], changed: false }); + + let coordinator: { destroy: () => void } | null = null; + + createRoot((dispose) => { + coordinator = createEventsPollCoordinator( + () => "testuser", + () => new Set(["owner/repo"]), + () => false, + vi.fn(), + ); + dispose(); + }); + + coordinator!.destroy(); + + vi.advanceTimersByTime(300_000); + await flushPromises(); + + expect(mockFetchUserEvents).not.toHaveBeenCalled(); + }); + + it("destroy after initial cycle fires stops all subsequent cycles", async () => { + mockFetchUserEvents.mockResolvedValue({ events: [], changed: false }); + + let coordinator: { destroy: () => void } | null = null; + + createRoot((dispose) => { + coordinator = createEventsPollCoordinator( + () => "testuser", + () => new Set(["owner/repo"]), + () => false, + vi.fn(), + ); + dispose(); + }); + + vi.advanceTimersByTime(0); + await flushPromises(); + + 
expect(mockFetchUserEvents).toHaveBeenCalledTimes(1); + + coordinator!.destroy(); + + vi.advanceTimersByTime(300_000); + await flushPromises(); + + expect(mockFetchUserEvents).toHaveBeenCalledTimes(1); + }); + + it("applies exponential backoff after consecutive failures", async () => { + mockFetchUserEvents.mockRejectedValue(new Error("API error")); + + let coordinator: { destroy: () => void }; + createRoot((dispose) => { + coordinator = createEventsPollCoordinator( + () => "testuser", + () => new Set(["owner/repo"]), + () => false, + vi.fn(), + ); + dispose(); + }); + + // Trigger first cycle (delay=0) + vi.advanceTimersByTime(0); + await flushPromises(); + + // After first error, backoff = 2^1 = 2x base interval (60s * 2 = 120s). + // Advancing 60s should NOT trigger the next cycle yet. + const callsAtBase = mockFetchUserEvents.mock.calls.length; + vi.advanceTimersByTime(60_000); + await flushPromises(); + + expect(mockFetchUserEvents.mock.calls.length).toBe(callsAtBase); + + // Advancing the remaining 60s (total 120s) should trigger it + vi.advanceTimersByTime(60_000); + await flushPromises(); + + expect(mockFetchUserEvents.mock.calls.length).toBeGreaterThan(callsAtBase); + coordinator!.destroy(); + }); + + it("resets backoff to base interval after a successful cycle following failures", async () => { + // First cycle: error → consecutiveFailures = 1 + mockFetchUserEvents.mockRejectedValueOnce(new Error("API error")); + // Second cycle: success → consecutiveFailures = 0, next schedule at base interval + mockFetchUserEvents.mockResolvedValue({ events: [], changed: false }); + + let coordinator: { destroy: () => void }; + createRoot((dispose) => { + coordinator = createEventsPollCoordinator( + () => "testuser", + () => new Set(["owner/repo"]), + () => false, + vi.fn(), + ); + dispose(); + }); + + // First cycle (delay=0) — errors + vi.advanceTimersByTime(0); + await flushPromises(); + const callsAfterError = mockFetchUserEvents.mock.calls.length; + 
expect(callsAfterError).toBe(1); + + // After error: backoff = 2^1 = 2x → next at 120s + // Advance 120s to trigger the recovery cycle + vi.advanceTimersByTime(120_000); + await flushPromises(); + expect(mockFetchUserEvents.mock.calls.length).toBe(2); + + // After success: consecutiveFailures = 0, backoff = 2^0 = 1x → next at 60s + const callsAfterRecovery = mockFetchUserEvents.mock.calls.length; + vi.advanceTimersByTime(60_000); + await flushPromises(); + + // Should fire at base interval, not backed-off interval + expect(mockFetchUserEvents.mock.calls.length).toBeGreaterThan(callsAfterRecovery); + coordinator!.destroy(); + }); + + it("discards targeted data when hot poll generation changes during fetchTargetedRepoData", async () => { + const event = { + id: "400", + type: "IssuesEvent", + actor: { id: 1, login: "user" }, + repo: { id: 1, name: "owner/repo" }, + payload: {}, + created_at: "2026-01-01T00:00:00Z", + }; + mockFetchUserEvents.mockResolvedValue({ events: [event], changed: true }); + mockParseRepoEvents.mockReturnValue( + new Map([["owner/repo", makeRepoSummary({ repoFullName: "owner/repo" })]]) + ); + // Simulate a full refresh completing during fetchTargetedRepoData: + // rebuildHotSets increments _hotPollGeneration, so we call it inside + // the mock to simulate concurrent full refresh + mockFetchIssuesAndPullRequests.mockImplementation(async () => { + rebuildHotSets(emptyData); // increments _hotPollGeneration + return { issues: [], pullRequests: [], errors: [] }; + }); + + const onTargetedData = vi.fn(); + + let coordinator: { destroy: () => void }; + createRoot((dispose) => { + coordinator = createEventsPollCoordinator( + () => "testuser", + () => new Set(["owner/repo"]), + () => false, + onTargetedData, + ); + dispose(); + }); + + vi.advanceTimersByTime(0); + await flushPromises(); + coordinator!.destroy(); + + // fetchTargetedRepoData ran (fetchIssuesAndPullRequests was called), + // but generation changed during the fetch → targeted data 
discarded + expect(mockFetchIssuesAndPullRequests).toHaveBeenCalled(); + expect(onTargetedData).not.toHaveBeenCalled(); + }); +}); + +// ── Config-change effects ───────────────────────────────────────────────────── + +describe("config-change effects (QA-007)", () => { + // These effects are registered at module load via createRoot in poll.ts. + // We test them by checking resetEventsState is called when config signals change. + // Because the config mock is a plain object (not reactive), we test the + // resetEventsState integration via resetPollState() which calls it directly. + + it("resetPollState calls resetEventsState (integration: resetEventsState is part of full reset)", () => { + // resetPollState is what gets called on auth clear, and it internally calls resetEventsState. + // Verify the module wiring is correct by checking resetPollState resets module state. + resetPollState(); + + // After resetPollState, the generation is 0 (resetEventsState clears ETag/lastEventId) + expect(getHotPollGeneration()).toBe(0); + expect(mockResetEventsState).toHaveBeenCalled(); + }); + + it("clearHotSets does NOT increment generation (different from rebuildHotSets)", () => { + rebuildHotSets(emptyData); + expect(getHotPollGeneration()).toBe(1); + + clearHotSets(); + // clearHotSets clears sets but does not touch generation + expect(getHotPollGeneration()).toBe(1); + }); +}); diff --git a/tests/services/events.test.ts b/tests/services/events.test.ts new file mode 100644 index 00000000..2e673711 --- /dev/null +++ b/tests/services/events.test.ts @@ -0,0 +1,363 @@ +import { describe, it, expect, vi, beforeEach } from "vitest"; + +// Mock auth store — events.ts calls onAuthCleared() at module scope +vi.mock("../../src/app/stores/auth", () => ({ + onAuthCleared: vi.fn(), + user: vi.fn(() => null), +})); + +// Mock github module (not directly used by events.ts, but imported transitively) +vi.mock("../../src/app/services/github", () => ({ + getClient: vi.fn(() => null), +})); + 
+// Import AFTER mocks +import { fetchUserEvents, parseRepoEvents, resetEventsState } from "../../src/app/services/events"; + +// ── Helpers ─────────────────────────────────────────────────────────────────── + +function makeOctokit(requestImpl: (...args: unknown[]) => unknown) { + return { + request: vi.fn(requestImpl), + hook: { before: vi.fn() }, + }; +} + +function makeEvent(overrides: { + id?: string; + type?: string; + repoName?: string; + created_at?: string; +} = {}) { + return { + id: overrides.id ?? "100", + type: overrides.type ?? "PushEvent", + actor: { id: 1, login: "user" }, + repo: { id: 1, name: overrides.repoName ?? "owner/repo" }, + payload: {}, + created_at: overrides.created_at ?? "2026-01-01T00:00:00Z", + }; +} + +// ── fetchUserEvents ─────────────────────────────────────────────────────────── + +describe("fetchUserEvents", () => { + beforeEach(() => { + resetEventsState(); + vi.clearAllMocks(); + }); + + it("returns events and changed=true on 200 response", async () => { + const event = makeEvent({ id: "500" }); + const octokit = makeOctokit(() => + Promise.resolve({ + data: [event], + headers: { etag: '"abc123"' }, + }) + ); + + const result = await fetchUserEvents(octokit as never, "someuser"); + + expect(result.changed).toBe(true); + expect(result.events).toHaveLength(1); + expect(result.events[0].id).toBe("500"); + }); + + it("returns empty events and changed=false on 304", async () => { + const octokit = makeOctokit(() => Promise.reject({ status: 304 })); + + const result = await fetchUserEvents(octokit as never, "someuser"); + + expect(result.changed).toBe(false); + expect(result.events).toHaveLength(0); + }); + + it("returns empty events and changed=false on network error without throwing", async () => { + const octokit = makeOctokit(() => Promise.reject(new Error("Network failure"))); + + const result = await fetchUserEvents(octokit as never, "someuser"); + + expect(result.changed).toBe(false); + expect(result.events).toHaveLength(0); 
+ }); + + it("sends If-None-Match header on second call after ETag received", async () => { + const octokit = makeOctokit(() => + Promise.resolve({ + data: [makeEvent({ id: "200" })], + headers: { etag: '"etag-value"' }, + }) + ); + + // First call — seeds ETag + await fetchUserEvents(octokit as never, "someuser"); + + // Second call — ETag should be sent + await fetchUserEvents(octokit as never, "someuser"); + + const secondCallHeaders = (octokit.request.mock.calls[1][1] as { headers?: Record<string, string> }).headers ?? {}; + expect(secondCallHeaders["If-None-Match"]).toBe('"etag-value"'); + }); + + it("does NOT send If-None-Match on first call", async () => { + const octokit = makeOctokit(() => + Promise.resolve({ data: [], headers: {} }) + ); + + await fetchUserEvents(octokit as never, "someuser"); + + const firstCallHeaders = (octokit.request.mock.calls[0][1] as { headers?: Record<string, string> }).headers ?? {}; + expect(firstCallHeaders["If-None-Match"]).toBeUndefined(); + }); + + it("returns all events on first call (no ID filter)", async () => { + const events = [ + makeEvent({ id: "300" }), + makeEvent({ id: "299" }), + makeEvent({ id: "298" }), + ]; + const octokit = makeOctokit(() => + Promise.resolve({ data: events, headers: {} }) + ); + + const result = await fetchUserEvents(octokit as never, "someuser"); + + expect(result.events).toHaveLength(3); + expect(result.changed).toBe(true); + }); + + it("filters to only events with IDs > lastEventId on subsequent calls", async () => { + // First call: seed lastEventId = "300" + const firstOctokit = makeOctokit(() => + Promise.resolve({ + data: [makeEvent({ id: "300" })], + headers: {}, + }) + ); + await fetchUserEvents(firstOctokit as never, "someuser"); + + // Second call: events with IDs 301 (new) and 299 (old) + const secondOctokit = makeOctokit(() => + Promise.resolve({ + data: [makeEvent({ id: "301" }), makeEvent({ id: "299" })], + headers: {}, + }) + ); + const result = await fetchUserEvents(secondOctokit as never, "someuser"); + + 
expect(result.events).toHaveLength(1); + expect(result.events[0].id).toBe("301"); + expect(result.changed).toBe(true); + }); + + it("uses numeric comparison for event ID filtering (not lexicographic)", async () => { + // Seed with lastEventId = "9" + const firstOctokit = makeOctokit(() => + Promise.resolve({ data: [makeEvent({ id: "9" })], headers: {} }) + ); + await fetchUserEvents(firstOctokit as never, "someuser"); + + // "10" > "9" numerically but NOT lexicographically + const secondOctokit = makeOctokit(() => + Promise.resolve({ + data: [makeEvent({ id: "10" }), makeEvent({ id: "8" })], + headers: {}, + }) + ); + const result = await fetchUserEvents(secondOctokit as never, "someuser"); + + expect(result.events).toHaveLength(1); + expect(result.events[0].id).toBe("10"); + }); + + it("returns changed=false when no new events since last ID", async () => { + // First call: seed lastEventId = "500" + const firstOctokit = makeOctokit(() => + Promise.resolve({ data: [makeEvent({ id: "500" })], headers: {} }) + ); + await fetchUserEvents(firstOctokit as never, "someuser"); + + // Second call: no new events (all IDs <= 500) + const secondOctokit = makeOctokit(() => + Promise.resolve({ + data: [makeEvent({ id: "500" }), makeEvent({ id: "499" })], + headers: {}, + }) + ); + const result = await fetchUserEvents(secondOctokit as never, "someuser"); + + expect(result.changed).toBe(false); + expect(result.events).toHaveLength(0); + }); + + it("returns empty events and changed=false for empty username (SEC-IMPL-001)", async () => { + const octokit = makeOctokit(() => Promise.resolve({ data: [], headers: {} })); + + const result = await fetchUserEvents(octokit as never, ""); + + expect(result.changed).toBe(false); + expect(result.events).toHaveLength(0); + expect(octokit.request).not.toHaveBeenCalled(); + }); +}); + +// ── parseRepoEvents ─────────────────────────────────────────────────────────── + +describe("parseRepoEvents", () => { + it("returns empty map for empty events 
array", () => { + const result = parseRepoEvents([], new Set(["owner/repo"])); + expect(result.size).toBe(0); + }); + + it("filters out events for untracked repos", () => { + const events = [ + makeEvent({ type: "IssuesEvent", repoName: "owner/tracked" }), + makeEvent({ type: "IssuesEvent", repoName: "owner/untracked" }), + ]; + const result = parseRepoEvents(events, new Set(["owner/tracked"])); + + expect(result.size).toBe(1); + expect([...result.keys()]).toContain("owner/tracked"); + }); + + it("filters out non-actionable event types", () => { + const events = [ + makeEvent({ type: "CreateEvent", repoName: "owner/repo" }), + makeEvent({ type: "DeleteEvent", repoName: "owner/repo" }), + makeEvent({ type: "WatchEvent", repoName: "owner/repo" }), + ]; + const result = parseRepoEvents(events, new Set(["owner/repo"])); + + expect(result.size).toBe(0); + }); + + it("sets hasIssueActivity for IssuesEvent and IssueCommentEvent", () => { + const events = [ + makeEvent({ type: "IssuesEvent", repoName: "owner/repo" }), + makeEvent({ type: "IssueCommentEvent", repoName: "owner/repo" }), + ]; + const result = parseRepoEvents(events, new Set(["owner/repo"])); + const summary = result.get("owner/repo")!; + + expect(summary.hasIssueActivity).toBe(true); + expect(summary.hasPRActivity).toBe(false); + expect(summary.hasWorkflowActivity).toBe(false); + }); + + it("sets hasPRActivity for PullRequestEvent, PullRequestReviewEvent, PullRequestReviewCommentEvent", () => { + const events = [ + makeEvent({ type: "PullRequestEvent", repoName: "owner/repo" }), + makeEvent({ type: "PullRequestReviewEvent", repoName: "owner/repo" }), + makeEvent({ type: "PullRequestReviewCommentEvent", repoName: "owner/repo" }), + ]; + const result = parseRepoEvents(events, new Set(["owner/repo"])); + const summary = result.get("owner/repo")!; + + expect(summary.hasPRActivity).toBe(true); + expect(summary.hasIssueActivity).toBe(false); + }); + + it("sets hasWorkflowActivity for PushEvent", () => { + const 
events = [makeEvent({ type: "PushEvent", repoName: "owner/repo" })]; + const result = parseRepoEvents(events, new Set(["owner/repo"])); + + expect(result.get("owner/repo")!.hasWorkflowActivity).toBe(true); + }); + + it("does case-insensitive repo matching: Owner/Repo vs owner/repo", () => { + const events = [ + makeEvent({ type: "IssuesEvent", repoName: "Owner/Repo" }), + ]; + const result = parseRepoEvents(events, new Set(["owner/repo"])); + + expect(result.size).toBe(1); + }); + + it("picks the max timestamp for latestEventAt", () => { + const events = [ + makeEvent({ type: "IssuesEvent", repoName: "owner/repo", created_at: "2026-01-01T10:00:00Z" }), + makeEvent({ type: "PushEvent", repoName: "owner/repo", created_at: "2026-01-01T12:00:00Z" }), + makeEvent({ type: "PullRequestEvent", repoName: "owner/repo", created_at: "2026-01-01T08:00:00Z" }), + ]; + const result = parseRepoEvents(events, new Set(["owner/repo"])); + + expect(result.get("owner/repo")!.latestEventAt).toBe("2026-01-01T12:00:00Z"); + }); + + it("groups multiple events for the same repo into one summary", () => { + const events = [ + makeEvent({ type: "IssuesEvent", repoName: "owner/repo" }), + makeEvent({ type: "PushEvent", repoName: "owner/repo" }), + ]; + const result = parseRepoEvents(events, new Set(["owner/repo"])); + + expect(result.size).toBe(1); + const summary = result.get("owner/repo")!; + expect(summary.hasIssueActivity).toBe(true); + expect(summary.hasWorkflowActivity).toBe(true); + expect(summary.eventTypes.size).toBe(2); + }); + + it("handles mix of event types across tracked and untracked repos", () => { + const events = [ + makeEvent({ type: "IssuesEvent", repoName: "owner/a" }), + makeEvent({ type: "PushEvent", repoName: "owner/b" }), + makeEvent({ type: "PullRequestEvent", repoName: "owner/c" }), // untracked + makeEvent({ type: "CreateEvent", repoName: "owner/a" }), // non-actionable + ]; + const result = parseRepoEvents(events, new Set(["owner/a", "owner/b"])); + + 
expect(result.size).toBe(2); + expect(result.get("owner/a")!.hasIssueActivity).toBe(true); + expect(result.get("owner/b")!.hasWorkflowActivity).toBe(true); + }); +}); + +// ── resetEventsState ────────────────────────────────────────────────────────── + +describe("resetEventsState", () => { + it("clears ETag so next call sends no If-None-Match header", async () => { + const octokit = makeOctokit(() => + Promise.resolve({ + data: [makeEvent({ id: "100" })], + headers: { etag: '"etag-123"' }, + }) + ); + + // First call — seeds ETag + await fetchUserEvents(octokit as never, "someuser"); + + // Reset + resetEventsState(); + + // Next call should have no If-None-Match + await fetchUserEvents(octokit as never, "someuser"); + + const secondCallHeaders = (octokit.request.mock.calls[1][1] as { headers?: Record<string, string> }).headers ?? {}; + expect(secondCallHeaders["If-None-Match"]).toBeUndefined(); + }); + + it("clears lastEventId so next call returns all events (first-call semantics)", async () => { + // First call: seed lastEventId = "100" + const firstOctokit = makeOctokit(() => + Promise.resolve({ data: [makeEvent({ id: "100" })], headers: {} }) + ); + await fetchUserEvents(firstOctokit as never, "someuser"); + + // Reset + resetEventsState(); + + // After reset, next call should behave like first call (return all events, not filter) + const secondOctokit = makeOctokit(() => + Promise.resolve({ + data: [makeEvent({ id: "100" }), makeEvent({ id: "99" })], + headers: {}, + }) + ); + const result = await fetchUserEvents(secondOctokit as never, "someuser"); + + // All events returned — no ID filtering since _lastEventId was cleared + expect(result.events).toHaveLength(2); + expect(result.changed).toBe(true); + }); +}); diff --git a/tests/services/poll-fetchAllData.test.ts b/tests/services/poll-fetchAllData.test.ts index 524be7c1..d6804775 100644 --- a/tests/services/poll-fetchAllData.test.ts +++ b/tests/services/poll-fetchAllData.test.ts @@ -73,11 +73,11 @@ afterEach(() => { 
vi.clearAllMocks(); }); -// ── qa-1: First call returns data and updates _lastSuccessfulFetch ──────────── +// ── qa-1: fetchAllData returns data ────────────────────────────────────────── describe("fetchAllData — first call", () => { - it("returns data from all fetches on first call", async () => { + it("returns data from all fetches", async () => { vi.resetModules(); const { getClient } = await import("../../src/app/services/github"); @@ -96,10 +96,9 @@ describe("fetchAllData — first call", () => { expect(result.pullRequests).toEqual([]); expect(result.workflowRuns).toEqual([]); expect(result.errors).toEqual([]); - expect(result.skipped).toBeUndefined(); }); - it("calls both fetch functions on first call (no notification gate)", async () => { + it("calls both fetch functions unconditionally on every call", async () => { vi.resetModules(); const { getClient } = await import("../../src/app/services/github"); @@ -114,9 +113,16 @@ describe("fetchAllData — first call", () => { await fetchAllData(); - // First call: no _lastSuccessfulFetch, so notifications gate is skipped + // No notification gate — both data fetches always run + expect(mockOctokit.request).not.toHaveBeenCalled(); + expect(fetchIssuesAndPullRequests).toHaveBeenCalledOnce(); + expect(fetchWorkflowRuns).toHaveBeenCalledOnce(); + + // Second call — still unconditional, no gate check + vi.mocked(fetchIssuesAndPullRequests).mockClear(); + vi.mocked(fetchWorkflowRuns).mockClear(); + await fetchAllData(); expect(mockOctokit.request).not.toHaveBeenCalled(); - // Both data fetches should run expect(fetchIssuesAndPullRequests).toHaveBeenCalledOnce(); expect(fetchWorkflowRuns).toHaveBeenCalledOnce(); }); @@ -145,111 +151,10 @@ describe("fetchAllData — first call", () => { config.maxRunsPerWorkflow ); }); - - it("sets _lastSuccessfulFetch so second call checks notification gate", async () => { - vi.resetModules(); - - const { getClient } = await import("../../src/app/services/github"); - const { 
fetchIssuesAndPullRequests, fetchWorkflowRuns } = await import("../../src/app/services/api");
-    const mockOctokit = makeMockOctokit();
-    vi.mocked(getClient).mockReturnValue(mockOctokit as unknown as ReturnType<typeof getClient>);
-    vi.mocked(fetchIssuesAndPullRequests).mockResolvedValue(emptyIssuesAndPrsResult);
-    vi.mocked(fetchWorkflowRuns).mockResolvedValue(emptyRunResult);
-
-
-    const { fetchAllData } = await import("../../src/app/services/poll");
-
-    // First call — no gate check
-    await fetchAllData();
-    expect(mockOctokit.request).not.toHaveBeenCalled();
-
-    // Second call — _lastSuccessfulFetch is set, gate checks notifications
-    // Return 200 for notifications (something changed)
-    mockOctokit.request.mockResolvedValueOnce({
-      data: [],
-      headers: { "last-modified": "Thu, 20 Mar 2026 12:00:00 GMT" },
-    });
-
-    await fetchAllData();
-
-    expect(mockOctokit.request).toHaveBeenCalledOnce();
-    expect(mockOctokit.request).toHaveBeenCalledWith(
-      "GET /notifications",
-      expect.objectContaining({ per_page: 1 })
-    );
-  });
 });
-// ── qa-1: Notification gate skips full fetch when nothing changed ─────────────
-describe("fetchAllData — notification gate skip", () => {
-  afterEach(() => {
-    vi.useRealTimers();
-  });
-
-  it("returns { skipped: true } when hasNotificationChanges returns false (304)", async () => {
-    vi.resetModules();
-
-    const { getClient } = await import("../../src/app/services/github");
-    const { fetchIssuesAndPullRequests, fetchWorkflowRuns } = await import("../../src/app/services/api");
-    const mockOctokit = makeMockOctokit();
-    vi.mocked(getClient).mockReturnValue(mockOctokit as unknown as ReturnType<typeof getClient>);
-    vi.mocked(fetchIssuesAndPullRequests).mockResolvedValue(emptyIssuesAndPrsResult);
-    vi.mocked(fetchWorkflowRuns).mockResolvedValue(emptyRunResult);
-
-
-    const { fetchAllData } = await import("../../src/app/services/poll");
-
-    // First call to set _lastSuccessfulFetch
-    await fetchAllData();
-
-    vi.mocked(fetchIssuesAndPullRequests).mockClear();
-    
vi.mocked(fetchWorkflowRuns).mockClear();
-
-    // Simulate 304 from notifications — nothing changed
-    mockOctokit.request.mockRejectedValueOnce({ status: 304 });
-
-    const result = await fetchAllData();
-
-    expect(result.skipped).toBe(true);
-    // Data fetches should NOT have been called
-    expect(fetchIssuesAndPullRequests).not.toHaveBeenCalled();
-    expect(fetchWorkflowRuns).not.toHaveBeenCalled();
-  });
-
-  it("forces full fetch when staleness exceeds 10 minutes even if gate would skip", async () => {
-    vi.useFakeTimers();
-    vi.resetModules();
-
-    const { getClient } = await import("../../src/app/services/github");
-    const { fetchIssuesAndPullRequests, fetchWorkflowRuns } = await import("../../src/app/services/api");
-    const mockOctokit = makeMockOctokit();
-    vi.mocked(getClient).mockReturnValue(mockOctokit as unknown as ReturnType<typeof getClient>);
-    vi.mocked(fetchIssuesAndPullRequests).mockResolvedValue(emptyIssuesAndPrsResult);
-    vi.mocked(fetchWorkflowRuns).mockResolvedValue(emptyRunResult);
-
-
-    const { fetchAllData } = await import("../../src/app/services/poll");
-
-    // First call — sets _lastSuccessfulFetch
-    await fetchAllData();
-    vi.mocked(fetchIssuesAndPullRequests).mockClear();
-
-    // Advance time past 10 minutes
-    vi.advanceTimersByTime(11 * 60 * 1000);
-
-    // Even though notifications would 304, staleness cap forces a full fetch
-    mockOctokit.request.mockRejectedValueOnce({ status: 304 });
-
-    const result = await fetchAllData();
-
-    // Should NOT be skipped — staleness cap bypasses the gate
-    expect(result.skipped).toBeUndefined();
-    expect(fetchIssuesAndPullRequests).toHaveBeenCalled();
-  });
-});
-
-// ── qa-1: All fetches fail — errors aggregated, _lastSuccessfulFetch not updated ──
+// ── All fetches fail — errors aggregated ─────────────────────────────────────
 
 describe("fetchAllData — all fetches fail", () => {
 
   it("aggregates top-level errors when all fetches reject", async () => {
@@ -275,10 +180,9 @@
 
expect(result.issues).toEqual([]); expect(result.pullRequests).toEqual([]); expect(result.workflowRuns).toEqual([]); - expect(result.skipped).toBeUndefined(); }); - it("does NOT update _lastSuccessfulFetch when all fetches reject", async () => { + it("fetches are still attempted on subsequent calls even after all fail", async () => { vi.resetModules(); const { getClient } = await import("../../src/app/services/github"); @@ -291,20 +195,18 @@ describe("fetchAllData — all fetches fail", () => { const { fetchAllData } = await import("../../src/app/services/poll"); - // First call — all fail, so _lastSuccessfulFetch should NOT be set await fetchAllData(); - // Second call — if _lastSuccessfulFetch were set, a notification request would be made - // Since all failed, it should NOT be set → no notification request - mockOctokit.request.mockClear(); + // Second call — fetches run again (no gate to suppress them) + vi.mocked(fetchIssuesAndPullRequests).mockClear(); + vi.mocked(fetchWorkflowRuns).mockClear(); vi.mocked(fetchIssuesAndPullRequests).mockRejectedValue(new Error("fail")); vi.mocked(fetchWorkflowRuns).mockRejectedValue(new Error("fail")); - await fetchAllData(); - // No notification gate check — _lastSuccessfulFetch was never set - expect(mockOctokit.request).not.toHaveBeenCalled(); + expect(fetchIssuesAndPullRequests).toHaveBeenCalled(); + expect(fetchWorkflowRuns).toHaveBeenCalled(); }); }); @@ -362,234 +264,6 @@ describe("fetchAllData — no client", () => { }); }); -// ── qa-4: resetPollState after logout re-enables notification gate ──────────── - -describe("fetchAllData — resetPollState via onAuthCleared", () => { - it("re-enables notification gate after logout (onAuthCleared callback invocation)", async () => { - vi.resetModules(); - - const { getClient } = await import("../../src/app/services/github"); - const { fetchIssuesAndPullRequests, fetchWorkflowRuns } = await import("../../src/app/services/api"); - const { onAuthCleared } = await 
import("../../src/app/stores/auth");
-    const mockOctokit = makeMockOctokit();
-    vi.mocked(getClient).mockReturnValue(mockOctokit as unknown as ReturnType<typeof getClient>);
-    vi.mocked(fetchIssuesAndPullRequests).mockResolvedValue(emptyIssuesAndPrsResult);
-    vi.mocked(fetchWorkflowRuns).mockResolvedValue(emptyRunResult);
-
-
-    // Import poll.ts — this triggers onAuthCleared(resetPollState) at module scope.
-    // api-usage.ts also registers clearUsageData, so onAuthCleared is called multiple times.
-    const { fetchAllData } = await import("../../src/app/services/poll");
-
-    // onAuthCleared mock must have been called (multiple registrations expected now).
-    // Collect all callbacks and invoke them all — mirrors real clearAuth() behavior,
-    // which fires every registered callback. This avoids fragile positional indexing.
-    expect(vi.mocked(onAuthCleared)).toHaveBeenCalled();
-    const allAuthClearedCallbacks = vi.mocked(onAuthCleared).mock.calls.map((c) => c[0] as () => void);
-    const capturedAuthClearedCb = () => { for (const cb of allAuthClearedCallbacks) cb(); };
-
-    // First call — sets _lastSuccessfulFetch
-    await fetchAllData();
-
-    // Second call — gate fires a 403, which sets _notifGateDisabled = true
-    mockOctokit.request.mockRejectedValueOnce({ status: 403 });
-    await fetchAllData();
-
-    // Gate is now disabled; third call should NOT call GET /notifications
-    mockOctokit.request.mockClear();
-    vi.mocked(fetchIssuesAndPullRequests).mockResolvedValue(emptyIssuesAndPrsResult);
-    vi.mocked(fetchWorkflowRuns).mockResolvedValue(emptyRunResult);
-    await fetchAllData();
-    expect(mockOctokit.request).not.toHaveBeenCalled();
-
-    // Invoke the logout callback — resets _notifGateDisabled and _lastSuccessfulFetch
-    capturedAuthClearedCb();
-
-    // First call after logout: _lastSuccessfulFetch is null → no gate check
-    mockOctokit.request.mockClear();
-    vi.mocked(fetchIssuesAndPullRequests).mockResolvedValue(emptyIssuesAndPrsResult);
-    
vi.mocked(fetchWorkflowRuns).mockResolvedValue(emptyRunResult);
-    await fetchAllData();
-    // No notification gate on first call after reset (no _lastSuccessfulFetch)
-    expect(mockOctokit.request).not.toHaveBeenCalled();
-
-    // Second call after logout: _lastSuccessfulFetch is now set, gate fires again
-    mockOctokit.request.mockResolvedValueOnce({
-      data: [],
-      headers: { "last-modified": "Thu, 20 Mar 2026 12:00:00 GMT" },
-    });
-    vi.mocked(fetchIssuesAndPullRequests).mockResolvedValue(emptyIssuesAndPrsResult);
-    vi.mocked(fetchWorkflowRuns).mockResolvedValue(emptyRunResult);
-    await fetchAllData();
-    // GET /notifications was called — gate is active again (not disabled)
-    expect(mockOctokit.request).toHaveBeenCalledWith(
-      "GET /notifications",
-      expect.objectContaining({ per_page: 1 })
-    );
-  });
-});
-
-// ── qa-5: If-Modified-Since header on second notification call ────────────────
-
-describe("fetchAllData — If-Modified-Since header", () => {
-  it("sends If-Modified-Since header from first response on second GET /notifications call", async () => {
-    vi.resetModules();
-
-    const { getClient } = await import("../../src/app/services/github");
-    const { fetchIssuesAndPullRequests, fetchWorkflowRuns } = await import("../../src/app/services/api");
-    const mockOctokit = makeMockOctokit();
-    vi.mocked(getClient).mockReturnValue(mockOctokit as unknown as ReturnType<typeof getClient>);
-    vi.mocked(fetchIssuesAndPullRequests).mockResolvedValue(emptyIssuesAndPrsResult);
-    vi.mocked(fetchWorkflowRuns).mockResolvedValue(emptyRunResult);
-
-
-    const { fetchAllData } = await import("../../src/app/services/poll");
-
-    // First call — no gate (no _lastSuccessfulFetch), sets _lastSuccessfulFetch
-    await fetchAllData();
-
-    // Second call — gate fires 200 response with last-modified header
-    const lastModified = "Fri, 21 Mar 2026 08:00:00 GMT";
-    mockOctokit.request.mockResolvedValueOnce({
-      data: [],
-      headers: { "last-modified": lastModified },
-    });
-    
vi.mocked(fetchIssuesAndPullRequests).mockResolvedValue(emptyIssuesAndPrsResult);
-    vi.mocked(fetchWorkflowRuns).mockResolvedValue(emptyRunResult);
-    await fetchAllData();
-
-    // Third call — gate should send If-Modified-Since from the second call's response
-    mockOctokit.request.mockResolvedValueOnce({
-      data: [],
-      headers: {},
-    });
-    vi.mocked(fetchIssuesAndPullRequests).mockResolvedValue(emptyIssuesAndPrsResult);
-    vi.mocked(fetchWorkflowRuns).mockResolvedValue(emptyRunResult);
-    await fetchAllData();
-
-    // Inspect the third GET /notifications call for the If-Modified-Since header
-    const notifCalls = mockOctokit.request.mock.calls.filter(
-      (c) => c[0] === "GET /notifications"
-    );
-    expect(notifCalls.length).toBeGreaterThanOrEqual(2);
-    const thirdCallParams = (notifCalls[notifCalls.length - 1] as unknown[])[1] as Record<string, unknown>;
-    expect((thirdCallParams["headers"] as Record<string, string>)["If-Modified-Since"]).toBe(lastModified);
-  });
-});
-
-// ── qa-2: hasNotificationChanges 403 auto-disable ────────────────────────────
-
-describe("fetchAllData — notification gate 403 auto-disable", () => {
-  it("disables notification gate after 403 and skips it on subsequent calls", async () => {
-    vi.resetModules();
-
-    const { getClient } = await import("../../src/app/services/github");
-    const { fetchIssuesAndPullRequests, fetchWorkflowRuns } = await import("../../src/app/services/api");
-    const { pushNotification } = await import("../../src/app/lib/errors");
-    const mockOctokit = makeMockOctokit();
-    vi.mocked(getClient).mockReturnValue(mockOctokit as unknown as ReturnType<typeof getClient>);
-    vi.mocked(fetchIssuesAndPullRequests).mockResolvedValue(emptyIssuesAndPrsResult);
-    vi.mocked(fetchWorkflowRuns).mockResolvedValue(emptyRunResult);
-
-
-    const { fetchAllData } = await import("../../src/app/services/poll");
-
-    // First call — sets _lastSuccessfulFetch
-    await fetchAllData();
-    vi.mocked(fetchIssuesAndPullRequests).mockClear();
-
-    // Second call — gate checks notifications, gets 403
-    
mockOctokit.request.mockRejectedValueOnce({ status: 403 });
-    await fetchAllData();
-
-    // Gate received 403 → _notifGateDisabled = true → pushNotification called
-    expect(pushNotification).toHaveBeenCalledWith(
-      "notifications",
-      expect.stringContaining("403"),
-      "warning"
-    );
-
-    // Third call — gate should be DISABLED, no notifications request
-    mockOctokit.request.mockClear();
-    vi.mocked(fetchIssuesAndPullRequests).mockClear();
-    vi.mocked(fetchWorkflowRuns).mockClear();
-    vi.mocked(fetchIssuesAndPullRequests).mockResolvedValue(emptyIssuesAndPrsResult);
-    vi.mocked(fetchWorkflowRuns).mockResolvedValue(emptyRunResult);
-
-    await fetchAllData();
-
-    expect(mockOctokit.request).not.toHaveBeenCalled();
-    // The data fetches still run
-    expect(fetchIssuesAndPullRequests).toHaveBeenCalled();
-    expect(fetchWorkflowRuns).toHaveBeenCalled();
-  });
-
-  it("still fetches data on the same call that triggers the 403", async () => {
-    vi.resetModules();
-
-    const { getClient } = await import("../../src/app/services/github");
-    const { fetchIssuesAndPullRequests, fetchWorkflowRuns } = await import("../../src/app/services/api");
-    const mockOctokit = makeMockOctokit();
-    vi.mocked(getClient).mockReturnValue(mockOctokit as unknown as ReturnType<typeof getClient>);
-    vi.mocked(fetchIssuesAndPullRequests).mockResolvedValue(emptyIssuesAndPrsResult);
-    vi.mocked(fetchWorkflowRuns).mockResolvedValue(emptyRunResult);
-
-
-    const { fetchAllData } = await import("../../src/app/services/poll");
-
-    // First call — sets _lastSuccessfulFetch
-    await fetchAllData();
-    vi.mocked(fetchIssuesAndPullRequests).mockClear();
-    vi.mocked(fetchWorkflowRuns).mockClear();
-
-    // Second call — gate returns 403; hasNotificationChanges returns true → full fetch runs
-    mockOctokit.request.mockRejectedValueOnce({ status: 403 });
-
-    const result = await fetchAllData();
-
-    expect(result.skipped).toBeUndefined();
-    expect(fetchIssuesAndPullRequests).toHaveBeenCalled();
-    expect(fetchWorkflowRuns).toHaveBeenCalled();
-  
});
-
-  it("shows PAT-specific 403 notification when authMethod is 'pat'", async () => {
-    vi.resetModules();
-
-    // Override config mock to include authMethod: "pat" for this test
-    vi.doMock("../../src/app/stores/config", () => ({
-      config: {
-        selectedRepos: [{ owner: "octocat", name: "Hello-World", fullName: "octocat/Hello-World" }],
-        maxWorkflowsPerRepo: 5,
-        maxRunsPerWorkflow: 3,
-        authMethod: "pat",
-      },
-    }));
-
-    const { getClient } = await import("../../src/app/services/github");
-    const { fetchIssuesAndPullRequests, fetchWorkflowRuns } = await import("../../src/app/services/api");
-    const { pushNotification } = await import("../../src/app/lib/errors");
-    const mockOctokit = makeMockOctokit();
-    vi.mocked(getClient).mockReturnValue(mockOctokit as unknown as ReturnType<typeof getClient>);
-    vi.mocked(fetchIssuesAndPullRequests).mockResolvedValue(emptyIssuesAndPrsResult);
-    vi.mocked(fetchWorkflowRuns).mockResolvedValue(emptyRunResult);
-
-    const { fetchAllData } = await import("../../src/app/services/poll");
-
-    // First call — sets _lastSuccessfulFetch
-    await fetchAllData();
-
-    // Second call — gate fires a 403
-    mockOctokit.request.mockRejectedValueOnce({ status: 403 });
-    await fetchAllData();
-
-    // PAT-specific message should mention fine-grained tokens
-    expect(pushNotification).toHaveBeenCalledWith(
-      "notifications",
-      expect.stringContaining("fine-grained tokens do not support notifications"),
-      "warning"
-    );
-  });
-});
 
 // ── Upstream repos + tracked users integration ────────────────────────────────
 
diff --git a/tests/services/poll-notification-effects.test.ts b/tests/services/poll-notification-effects.test.ts
deleted file mode 100644
index 809eb4b8..00000000
--- a/tests/services/poll-notification-effects.test.ts
+++ /dev/null
@@ -1,198 +0,0 @@
-/**
- * Tests for the module-scope reactive effects in poll.ts that reset notification
- * state when config.trackedUsers or config.monitoredRepos change.
- * - * Uses the REAL reactive config store (not a static mock) so that updateConfig() - * triggers the reactive effects registered by poll.ts at module load. - */ -import "fake-indexeddb/auto"; -import { describe, it, expect, vi, beforeEach } from "vitest"; - -const { mockResetNotifState } = vi.hoisted(() => ({ - mockResetNotifState: vi.fn(), -})); - -// Mock github client -vi.mock("../../src/app/services/github", () => ({ - getClient: vi.fn(), - onApiRequest: vi.fn(), -})); - -// Mock auth store — onAuthCleared is called at poll.ts module scope -vi.mock("../../src/app/stores/auth", () => ({ - user: vi.fn(() => ({ login: "octocat", avatar_url: "https://github.com/images/error/octocat_happy.gif", name: "Octocat" })), - onAuthCleared: vi.fn(), -})); - -// Mock API functions -vi.mock("../../src/app/services/api", () => ({ - fetchIssuesAndPullRequests: vi.fn(), - fetchWorkflowRuns: vi.fn(), - fetchHotPRStatus: vi.fn(), - fetchWorkflowRunById: vi.fn(), - pooledAllSettled: vi.fn(), - resetEmptyActionRepos: vi.fn(), -})); - -// Mock notifications — spy on _resetNotificationState -vi.mock("../../src/app/lib/notifications", () => ({ - detectNewItems: vi.fn(() => []), - dispatchNotifications: vi.fn(), - _resetNotificationState: mockResetNotifState, -})); - -// Mock errors store -vi.mock("../../src/app/lib/errors", () => ({ - pushError: vi.fn(), - pushNotification: vi.fn(), - getErrors: vi.fn().mockReturnValue([]), - dismissError: vi.fn(), - getNotifications: vi.fn().mockReturnValue([]), - getUnreadCount: vi.fn().mockReturnValue(0), - markAllAsRead: vi.fn(), - startCycleTracking: vi.fn(), - endCycleTracking: vi.fn(), - resetNotificationState: vi.fn(), - dismissNotificationBySource: vi.fn(), -})); - -// Use REAL config store — the reactive effects in poll.ts subscribe to this -import { updateConfig, resetConfig } from "../../src/app/stores/config"; - -// Import poll.ts — triggers createRoot + createEffect registration at module scope -import { fetchAllData, resetPollState } 
from "../../src/app/services/poll"; -import { getClient } from "../../src/app/services/github"; -import { fetchIssuesAndPullRequests, fetchWorkflowRuns } from "../../src/app/services/api"; - -describe("poll.ts — notification reset reactive effects", () => { - beforeEach(() => { - resetConfig(); - mockResetNotifState.mockClear(); - }); - - it("resets notification state when monitoredRepos changes", () => { - updateConfig({ - selectedRepos: [{ owner: "org", name: "repo", fullName: "org/repo" }], - monitoredRepos: [{ owner: "org", name: "repo", fullName: "org/repo" }], - }); - - expect(mockResetNotifState).toHaveBeenCalled(); - }); - - it("resets notification state when trackedUsers changes", () => { - updateConfig({ - trackedUsers: [{ - login: "octocat", - avatarUrl: "https://avatars.githubusercontent.com/u/583231", - name: "Octocat", - type: "user" as const, - }], - }); - - expect(mockResetNotifState).toHaveBeenCalled(); - }); - - it("does not reset when config update does not change the key", () => { - updateConfig({ theme: "dark" }); - - expect(mockResetNotifState).not.toHaveBeenCalled(); - }); - - it("resets notification state when monitoredRepos cleared to empty", () => { - updateConfig({ - selectedRepos: [{ owner: "org", name: "repo", fullName: "org/repo" }], - monitoredRepos: [{ owner: "org", name: "repo", fullName: "org/repo" }], - }); - mockResetNotifState.mockClear(); - - updateConfig({ monitoredRepos: [] }); - - expect(mockResetNotifState).toHaveBeenCalled(); - }); - - it("detects swap at same array length (key-based comparison)", () => { - updateConfig({ - selectedRepos: [ - { owner: "org", name: "a", fullName: "org/a" }, - { owner: "org", name: "b", fullName: "org/b" }, - ], - monitoredRepos: [{ owner: "org", name: "a", fullName: "org/a" }], - }); - mockResetNotifState.mockClear(); - - updateConfig({ - monitoredRepos: [{ owner: "org", name: "b", fullName: "org/b" }], - }); - - expect(mockResetNotifState).toHaveBeenCalled(); - }); -}); - 
-describe("poll.ts — notifications gate bypass on config change", () => { - const mockRequest = vi.fn(); - - beforeEach(() => { - resetPollState(); - resetConfig(); - mockRequest.mockReset(); - mockRequest.mockResolvedValue({ - data: [], - headers: { "last-modified": "Thu, 20 Mar 2026 12:00:00 GMT" }, - }); - vi.mocked(getClient).mockReturnValue({ - request: mockRequest, - graphql: vi.fn(), - hook: { before: vi.fn() }, - } as never); - vi.mocked(fetchIssuesAndPullRequests).mockResolvedValue({ - issues: [], pullRequests: [], errors: [], - }); - vi.mocked(fetchWorkflowRuns).mockResolvedValue({ - workflowRuns: [], errors: [], - } as never); - }); - - it("bypasses notifications gate after monitoredRepos change", async () => { - // First call — no _lastSuccessfulFetch, gate skipped - await fetchAllData(); - expect(mockRequest).not.toHaveBeenCalled(); - - // Second call — _lastSuccessfulFetch set, gate fires - await fetchAllData(); - expect(mockRequest).toHaveBeenCalledWith("GET /notifications", expect.anything()); - mockRequest.mockClear(); - - // Change monitoredRepos — should null _lastSuccessfulFetch - updateConfig({ - selectedRepos: [{ owner: "org", name: "repo", fullName: "org/repo" }], - monitoredRepos: [{ owner: "org", name: "repo", fullName: "org/repo" }], - }); - - // Third call — gate bypassed because _lastSuccessfulFetch was nulled - await fetchAllData(); - expect(mockRequest).not.toHaveBeenCalled(); - }); - - it("bypasses notifications gate after trackedUsers change", async () => { - // First call — sets _lastSuccessfulFetch - await fetchAllData(); - - // Second call — gate fires - await fetchAllData(); - mockRequest.mockClear(); - - // Change trackedUsers — should null _lastSuccessfulFetch - updateConfig({ - trackedUsers: [{ - login: "octocat", - avatarUrl: "https://avatars.githubusercontent.com/u/583231", - name: "Octocat", - type: "user" as const, - }], - }); - - // Next call — gate bypassed - await fetchAllData(); - 
expect(mockRequest).not.toHaveBeenCalled(); - }); -}); diff --git a/tests/services/poll.test.ts b/tests/services/poll.test.ts index 8389485a..676f5f45 100644 --- a/tests/services/poll.test.ts +++ b/tests/services/poll.test.ts @@ -1,7 +1,7 @@ import "fake-indexeddb/auto"; import { describe, it, expect, vi, beforeEach, afterEach } from "vitest"; import { createRoot, createSignal } from "solid-js"; -import { createPollCoordinator, disableNotifGate, resetPollState, type DashboardData } from "../../src/app/services/poll"; +import { createPollCoordinator, type DashboardData } from "../../src/app/services/poll"; import * as githubMod from "../../src/app/services/github"; // Mock pushError so we can spy on it @@ -135,7 +135,7 @@ describe("createPollCoordinator", () => { }); }); - it("continues polling when document is hidden (notifications gate enabled)", async () => { + it("pauses polling when document is hidden (no 304 shortcut for GraphQL)", async () => { const randomSpy = vi.spyOn(Math, "random").mockReturnValue(0.5); // jitter = 0 const fetchAll = makeFetchAll(); @@ -152,8 +152,8 @@ describe("createPollCoordinator", () => { vi.advanceTimersByTime(61_000); await flushPromises(); - // Should have fetched while hidden (background refresh) - expect(fetchAll.mock.calls.length).toBeGreaterThan(callsAfterInit); + // Should NOT have fetched — background polls skipped (no 304 shortcut) + expect(fetchAll.mock.calls.length).toBe(callsAfterInit); dispose(); }); @@ -186,31 +186,6 @@ describe("createPollCoordinator", () => { }); }); - it("pauses background polling when hidden and notifications gate is disabled", async () => { - disableNotifGate(); - const fetchAll = makeFetchAll(); - - await createRoot(async (dispose) => { - createPollCoordinator(makeGetInterval(60), fetchAll); - await Promise.resolve(); // initial fetch - - const callsAfterInit = fetchAll.mock.calls.length; - - // Hide document - setDocumentVisible(false); - - // Advance past the interval - 
vi.advanceTimersByTime(90_000); - await Promise.resolve(); - - // Should NOT have fetched — gate disabled means no cheap 304, skip background polls - expect(fetchAll.mock.calls.length).toBe(callsAfterInit); - dispose(); - }); - - resetPollState(); // restore gate for other tests - }); - it("does NOT trigger immediate refresh on re-visible within 2 minutes", async () => { // Pin jitter to 0 so 300s interval is exactly 300s (no background poll in 90s) const randomSpy = vi.spyOn(Math, "random").mockReturnValue(0.5); @@ -238,7 +213,7 @@ describe("createPollCoordinator", () => { randomSpy.mockRestore(); }); - it("resets timer on re-visible after >2 min, preventing double-fire with background polls", async () => { + it("resets timer on re-visible after >2 min, fires catch-up then waits full interval", async () => { // Pin jitter to 0 so 60s interval is exactly 60s const randomSpy = vi.spyOn(Math, "random").mockReturnValue(0.5); const fetchAll = makeFetchAll(); @@ -249,20 +224,20 @@ describe("createPollCoordinator", () => { const callsAfterInit = fetchAll.mock.calls.length; - // Hide for >2 min — background polls fire at 60s and 120s + // Hide for >2 min — background polls are SKIPPED (no 304 shortcut) setDocumentVisible(false); vi.advanceTimersByTime(130_000); await flushPromises(); - const callsWhileHidden = fetchAll.mock.calls.length; - expect(callsWhileHidden).toBeGreaterThan(callsAfterInit); + // No polls while hidden + expect(fetchAll.mock.calls.length).toBe(callsAfterInit); // Restore visibility — catch-up fetch fires + timer resets setDocumentVisible(true); await flushPromises(); const callsAfterRevisible = fetchAll.mock.calls.length; - expect(callsAfterRevisible).toBeGreaterThan(callsWhileHidden); + expect(callsAfterRevisible).toBeGreaterThan(callsAfterInit); // Advance 30s — should NOT fire (timer was reset to full 60s interval) vi.advanceTimersByTime(30_000); @@ -459,68 +434,6 @@ describe("createPollCoordinator", () => { }); }); - // ── qa-5: fetchAll returns 
skipped:true — lastRefreshAt not updated ────────── - - it("does not update lastRefreshAt and does not push errors when fetchAll returns skipped:true", async () => { - mockPushError.mockClear(); - - const skippedData: DashboardData = { - issues: [], - pullRequests: [], - workflowRuns: [], - errors: [], - skipped: true, - }; - const fetchAll = vi.fn().mockResolvedValue(skippedData); - - await createRoot(async (dispose) => { - const coordinator = createPollCoordinator(makeGetInterval(0), fetchAll); - - // Wait for the in-flight fetch to settle - await Promise.resolve(); - await Promise.resolve(); - await Promise.resolve(); - - // lastRefreshAt must remain null — skipped fetch should not record a refresh time - expect(coordinator.lastRefreshAt()).toBeNull(); - - // isRefreshing must be cleared — the finally block always runs - expect(coordinator.isRefreshing()).toBe(false); - - // pushError must NOT have been called — per-repo errors are only processed on non-skipped fetches - expect(mockPushError).not.toHaveBeenCalled(); - - dispose(); - }); - }); - - // ── qa-3a: doFetch skipped path — no restore (reconciliation replaces snapshot/restore) ── - - it("skipped fetch does NOT call pushError for previous errors (no restore logic)", async () => { - mockPushError.mockClear(); - - const skippedData: DashboardData = { - issues: [], - pullRequests: [], - workflowRuns: [], - errors: [], - skipped: true, - }; - const fetchAll = vi.fn().mockResolvedValue(skippedData); - - await createRoot(async (dispose) => { - createPollCoordinator(makeGetInterval(0), fetchAll); - - await Promise.resolve(); - await Promise.resolve(); - await Promise.resolve(); - - // No pushError calls on skip — notifications persist naturally - expect(mockPushError).not.toHaveBeenCalled(); - dispose(); - }); - }); - // ── qa-3b: reconciliation — resolved error is dismissed ─────────────────────── it("dismisses resolved errors: source in previous cycle but not pushed in current cycle", async () => { @@ -578,33 
+491,6 @@ describe("createPollCoordinator", () => { mockEndCycleTracking.mockReturnValue(new Set()); }); - // ── qa-3d: endCycleTracking called on skipped path ──────────────────────────── - - it("endCycleTracking is called on skipped path (no tracking state leak)", async () => { - mockEndCycleTracking.mockClear(); - mockStartCycleTracking.mockClear(); - - const skippedData: DashboardData = { - issues: [], - pullRequests: [], - workflowRuns: [], - errors: [], - skipped: true, - }; - const fetchAll = vi.fn().mockResolvedValue(skippedData); - - await createRoot(async (dispose) => { - createPollCoordinator(makeGetInterval(0), fetchAll); - await Promise.resolve(); - await Promise.resolve(); - await Promise.resolve(); - - // startCycleTracking called, endCycleTracking called in finally - expect(mockStartCycleTracking).toHaveBeenCalled(); - expect(mockEndCycleTracking).toHaveBeenCalled(); - dispose(); - }); - }); // ── qa-11: Jitter test with fixed Math.random to make interval deterministic ──