39 changes: 38 additions & 1 deletion ai/rules/task-creator.mdc
@@ -77,7 +77,7 @@ epicTemplate() {
"""
# ${EpicName} Epic

**Status**: 📋 PLANNED
**Status**: 📋 PLANNED
**Goal**: ${briefGoal}

## Overview
@@ -86,6 +86,34 @@ epicTemplate() {

---

## Interfaces

Define key types and interfaces in SudoLang format:

```sudo
$InterfaceName {
...otherInterfaceToMixIn
[$keyName]: $type
}
```

Example:
```sudo
type ID = string(cuid2)
type Timestamp = number(epoch ms)

User {
id: ID
name: string
createdAt: Timestamp
meta {
lastSignedIn: Timestamp
}
}
```

---

## ${TaskName}

${briefTaskDescription}
@@ -104,18 +132,27 @@ epicConstraints {
Explain what gaps are being addressed
Keep it terse

// Interfaces:
Define all key types and interfaces in SudoLang format
Interfaces take precedence - requirements must align with defined interfaces
Include types for configuration, events, payloads, and return values
Use clear, descriptive names

// Tasks:
No task numbering (use task names only)
Brief description (1 sentence max)
Requirements section with bullet points ONLY using "Given X, should Y" format
Include ONLY novel, meaningful, insightful requirements
Requirements must be consistent with defined interfaces
NO extra sections, explanations or text
}

reviewEpic() {
After creating the epic file, verify:

1. Single paragraph overview starting with WHY
1. Interfaces section includes all key types in SudoLang format
1. Requirements align with defined interfaces
1. No task numbering
1. All requirements follow "Given X, should Y" format
1. Only novel/insightful requirements remain (eliminate obvious boilerplate)
13 changes: 10 additions & 3 deletions plan.md
@@ -11,7 +11,14 @@

### 📋 Context7 GitHub Action Epic

**Status**: 📋 PLANNED
**File**: [`tasks/context7-github-action-epic.md`](./tasks/context7-github-action-epic.md)
**Goal**: Integrate Context7 GitHub Action to automatically maintain up-to-date code documentation for LLMs and AI code editors
**Status**: 📋 PLANNED
**File**: [`tasks/context7-github-action-epic.md`](./tasks/context7-github-action-epic.md)
**Goal**: Integrate Context7 GitHub Action to automatically maintain up-to-date code documentation for LLMs and AI code editors
**Tasks**: 6 tasks (configuration, workflow creation, API integration, release integration, testing, documentation)

### 📋 Unified Logger Epic

**Status**: 📋 PLANNED
**File**: [`tasks/unified-logger-epic.md`](./tasks/unified-logger-epic.md)
**Goal**: Create event-driven logging framework for unified telemetry across client and server
**Tasks**: 9 tasks (core infrastructure, client implementation, server implementation, event configuration, security/privacy, transport, schema validation, testing, documentation)

Copilot AI Dec 7, 2025

Incorrect task count and incomplete task list. The summary states "9 tasks" but the epic contains 10 task sections. Additionally, the task list omits "Server Event Endpoint" (which is distinct from the Client Transport Layer). Update to: "10 tasks (core infrastructure, client implementation, server implementation, event configuration, security/privacy, client transport, server endpoint, schema validation, testing, documentation)".

Suggested change
**Tasks**: 9 tasks (core infrastructure, client implementation, server implementation, event configuration, security/privacy, transport, schema validation, testing, documentation)
**Tasks**: 10 tasks (core infrastructure, client implementation, server implementation, event configuration, security/privacy, client transport, server endpoint, schema validation, testing, documentation)

266 changes: 266 additions & 0 deletions tasks/unified-logger-epic.md
@@ -0,0 +1,266 @@
# Unified Logger Epic

**Status**: 📋 PLANNED
**Goal**: Create event-driven logging framework for unified telemetry across client and server

## Overview

Developers need a unified logging solution that works consistently across client and server environments to enable reliable telemetry, debugging, and monitoring without coupling to specific dispatch implementations. This logger subscribes to framework events and handles sampling, sanitization, batching, and transport with built-in offline resilience and GDPR compliance.

---

## Interfaces

```sudo
// Base types
type ID = string(cuid2)
type Timestamp = number(epoch ms)
type LogLevel = "debug" | "info" | "warn" | "error" | "fatal"
type Sanitizer = (payload: any) => any
type Serializer = (payload: any) => string
type RxJSOperator = function(Observable) => Observable
// Framework interfaces (out of scope - mocked for testing)
type Dispatch = (event: Event) => void // synchronous, returns nothing
type Events$ = Observable<Event> // RxJS Observable
// Event structures (Redux-compatible action objects)
Event {
type: string
payload: any // typically LogPayload but can be any
}
LogPayload {
timestamp: Timestamp // Date.now() at creation
message: string
logLevel: LogLevel
sanitizer?: Sanitizer // optional override
serializer?: Serializer // optional override
context?: Record<string, any> // contextual fields
props?: Record<string, any> // additional structured data
}
EnrichedEvent {
...Event
schemaVersion: number // = 1
eventId: ID // cuid2()
userPseudoId: string
requestId?: ID
sessionId?: ID
appVersion?: string
route?: string
locale?: string
createdAt?: Timestamp // server ingestion time
}
// Configuration
LoggerOptions {
endpoint?: string // POST target (client: '/api/events', server: 'console')

Copilot AI Dec 7, 2025

Inconsistent endpoint definition. Line 58 defines the client endpoint as '/api/events' but the Client Transport Layer (line 201, 207) and Server Event Endpoint (line 214) sections describe the endpoint as /api/events/[id] or /api/events/[eventId]. The endpoint should be consistently defined. Based on the idempotent delivery design mentioned in line 201, the endpoint in LoggerOptions should likely be '/api/events' as a base, with the implementation appending /${eventId} when making requests.

Suggested change
endpoint?: string // POST target (client: '/api/events', server: 'console')
endpoint?: string // POST base path (e.g., '/api/events'); implementation appends '/${eventId}' for idempotent delivery (client), or 'console' for server

payloadSanitizer?: Sanitizer // (any) => any
headerSanitizer?: (headers: Record<string, string>) => Record<string, string>
serializer?: Serializer // (any) => string (default: JSON.stringify)
batchSizeMin?: number // default 10
batchSizeMax?: number // default 50 events OR 64KB bytes, whichever first
flushIntervalMs?: number // timer for auto-flush when online
maxLocalBytes?: number // cap for localStorage pool
maxByteLength?: number // max request body size (server)
skewWindow?: number // acceptable timestamp skew (default: 24h in ms)
futureSkewWindow?: number // acceptable future timestamp skew (default: 1h5m in ms)
consentProvider?: () => { analytics: boolean }
getIds?: () => {
sessionId?: ID
userPseudoId?: string
requestId?: ID
}
clock?: () => Timestamp // default: Date.now
level?: LogLevel // default: 'info'
sampler?: RxJSOperator // default: takeEvery (pass-through)
events?: Record<string, EventConfig> // per-event overrides
}
Comment on lines +57 to +79

Copilot AI Dec 7, 2025

Missing interface definition. The Server Event Endpoint requirement (line 219) references allowedOrigins for origin validation, but this configuration option is not defined in the LoggerOptions interface. Add allowedOrigins?: string[] to the LoggerOptions interface to align with this requirement, or clarify how allowed origins are configured.

EventConfig {
shouldLog?: boolean // default: true
sampler?: RxJSOperator // rxjs pipeable operator

Copilot AI Dec 7, 2025

Inconsistent capitalization of "RxJS". The library name "RxJS" should be consistently capitalized. Line 83 uses lowercase "rxjs pipeable operator" and line 249 uses lowercase "rxjs". These should be "RxJS pipeable operator" and "RxJS" respectively to match the official library name capitalization used in the type name "RxJSOperator".

Suggested change
sampler?: RxJSOperator // rxjs pipeable operator
sampler?: RxJSOperator // RxJS pipeable operator

sanitizer?: Sanitizer // (payload) => payload
serializer?: Serializer // (payload) => string
level?: LogLevel // default: 'info'

Copilot AI Dec 7, 2025

Missing interface definition. The Schema Validation requirement (line 238) mentions that developers can "register EventConfig with schema", but the EventConfig interface (lines 81-87) does not include a schema field. Add schema?: object or a more specific JSON schema type to the EventConfig interface to align with this requirement.

Suggested change
level?: LogLevel // default: 'info'
level?: LogLevel // default: 'info'
schema?: object // optional JSON schema for validation

}
// Return types
Logger {
log: (message: string, context?: Record<string, any>) => void
info: (message: string, context?: Record<string, any>) => void
warn: (message: string, context?: Record<string, any>) => void
error: (message: string | Error, context?: Record<string, any>) => void
debug: (message: string, context?: Record<string, any>) => void
fatal: (message: string | Error, context?: Record<string, any>) => void
dispose?: () => void // cleanup subscriptions
}
ClientLogger {
...Logger
withLogger: (Component: any) => any // HOC for React components
}
ServerLogger {
...Logger
withLogger: ({ request, response }: {
request: any
response: any
}) => Promise<{ request: any, response: any }>
}
// Storage adapter (for testing)
StorageAdapter {
getItem: (key: string) => Promise<string | null>
setItem: (key: string, value: string) => Promise<void>
removeItem: (key: string) => Promise<void>
keys: () => Promise<string[]>
}
```

---

## Core Logger Infrastructure

Create base logger factory, event subscription mechanism, and error handling.

**Requirements**:
- Given createLogger is called with LoggerOptions, should return Logger with all methods
- Given framework emits Event via events$ Observable, should subscribe and process matching events
- Given logger receives Event, should apply global LoggerOptions and per-event EventConfig
- Given serializer or sanitizer throws error, should log to console.error and continue processing
- Given logger is created with invalid LoggerOptions, should throw descriptive TypeError
- Given logger.dispose is called, should unsubscribe from events$ and clean up resources
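
A minimal TypeScript sketch of how these requirements could hang together, assuming the `Events$` Observable and the option shapes from the Interfaces section; `LogEvent`, the `log/*` action types, and the console transport are illustrative placeholders, not the final design:

```ts
import { Observable, Subscription } from "rxjs";

// LogEvent mirrors the Event shape from the Interfaces section.
interface LogEvent { type: string; payload: any }

interface LoggerOptions {
  serializer?: (payload: any) => string;
  events?: Record<string, { shouldLog?: boolean }>;
}

interface Logger {
  info: (message: string, context?: Record<string, any>) => void;
  error: (message: string | Error, context?: Record<string, any>) => void;
  dispose?: () => void;
}

export const createLogger = (
  events$: Observable<LogEvent>,
  options: LoggerOptions = {},
): Logger => {
  const serializer = options.serializer ?? JSON.stringify;

  const handle = (event: LogEvent) => {
    if (options.events?.[event.type]?.shouldLog === false) return; // per-event override wins
    try {
      console.log(serializer(event.payload));
    } catch (err) {
      // Requirement: a throwing serializer/sanitizer must not break the stream.
      console.error("logger: serializer failed", err);
    }
  };

  const subscription: Subscription = events$.subscribe(handle);

  return {
    info: (message, context) => handle({ type: "log/info", payload: { message, context } }),
    error: (message, context) => handle({ type: "log/error", payload: { message, context } }),
    dispose: () => subscription.unsubscribe(), // cleans up the events$ subscription
  };
};
```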

---

## Client Logger Implementation

Implement browser-based logger with localStorage buffering and network resilience.

**Requirements**:
- Given createLogger is called in browser, should return ClientLogger
- Given browser is online and Event received, should batch to localStorage and flush in background
- Given browser is offline and Event received, should append to localStorage without flushing
- Given browser reconnects (online event), should auto-flush all pooled buffers from localStorage
- Given localStorage exceeds maxLocalBytes quota, should evict oldest EnrichedEvent FIFO
- Given navigator.sendBeacon is available and batch ready, should prefer sendBeacon over fetch
- Given navigator.sendBeacon unavailable, should fallback to fetch POST
- Given batch reaches batchSizeMax events OR 64KB bytes, should flush immediately
- Given batch not flushed within flushIntervalMs, should flush on timer
- Given POST to endpoint fails, should log error to console and ignore (fire-and-forget)
- Given consentProvider returns analytics false, should skip logging non-essential events
- Given Event payload, should enrich with schemaVersion, eventId, userPseudoId, sessionId from getIds
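
A hedged sketch of the offline-aware flush path, assuming a JSON array buffer in localStorage; the `logger:buffer` key and helper names are assumptions for illustration:

```ts
// Illustrative flush helper: prefers sendBeacon, falls back to fetch,
// and treats delivery as fire-and-forget per the requirements above.
const BUFFER_KEY = "logger:buffer"; // hypothetical localStorage key

const readBuffer = (): unknown[] =>
  JSON.parse(localStorage.getItem(BUFFER_KEY) ?? "[]");

const writeBuffer = (events: unknown[]) =>
  localStorage.setItem(BUFFER_KEY, JSON.stringify(events));

export const flush = (endpoint: string): void => {
  if (!navigator.onLine) return; // offline: keep events pooled in localStorage

  const batch = readBuffer();
  if (batch.length === 0) return;

  const body = JSON.stringify({ events: batch });

  const sent =
    typeof navigator.sendBeacon === "function" &&
    navigator.sendBeacon(endpoint, new Blob([body], { type: "application/json" }));

  if (!sent) {
    fetch(endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body,
      keepalive: true,
    }).catch((err) => console.error("logger: flush failed", err)); // fire-and-forget
  }

  writeBuffer([]); // optimistic clear; delivery is best-effort by design
};

// Auto-flush when connectivity returns.
window.addEventListener("online", () => flush("/api/events"));
```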

---

## Server Logger Implementation

Implement Node.js logger with middleware integration.

**Requirements**:
- Given createLogger is called on server, should return ServerLogger
- Given withLogger called with request and response, should set response.locals.logger to Logger
- Given server logger receives Event via events$, should call appropriate logger method (log/error/etc)
- Given logger method called, should dispatch to transport (default: console.log/console.error)
- Given endpoint is 'console', should log to console with appropriate level
- Given endpoint is custom function, should call function with EnrichedEvent
- Given Event payload, should enrich with schemaVersion, eventId, requestId from response.locals
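
A sketch of the middleware shape, assuming an Express-style `response.locals` bag; this illustrates the contract from the Interfaces section rather than the final implementation:

```ts
// Illustrative ServerLogger with withLogger: attaches a request-scoped logger
// to response.locals so downstream handlers can log with the requestId.
const createServerLogger = (base: Console = console) => {
  const logger = {
    info: (message: string) => base.log(message),   // default transport: console
    error: (message: string | Error) => base.error(message),
  };

  return {
    ...logger,
    withLogger: async ({ request, response }: { request: any; response: any }) => {
      response.locals = response.locals ?? {};
      response.locals.logger = logger; // requirement: response.locals.logger is set
      return { request, response };
    },
  };
};
```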

---

## Event Configuration System

Implement per-event sampling, sanitization, and serialization overrides.

**Requirements**:
- Given EventConfig has custom sampler RxJSOperator, should pipe events$ through operator
- Given EventConfig has custom sanitizer, should apply before global payloadSanitizer
- Given EventConfig has custom serializer, should use instead of global serializer
- Given Event type not in events map, should use global LoggerOptions defaults
- Given EventConfig includes invalid RxJSOperator, should fail fast with TypeError
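
One way the per-event sampler could be wired, sketched under the assumption that samplers are standard RxJS pipeable operators keyed by event type; `applySamplers` and the event names are illustrative:

```ts
import { Subject, groupBy, mergeMap, throttleTime, type OperatorFunction } from "rxjs";

interface LogEvent { type: string; payload: any }

type EventConfigMap = Record<
  string,
  { sampler?: OperatorFunction<LogEvent, LogEvent> }
>;

// Split the stream by event type and pipe each group through its configured
// sampler, defaulting to pass-through when no sampler is registered.
export const applySamplers = (
  events$: Subject<LogEvent>,
  configs: EventConfigMap,
) =>
  events$.pipe(
    groupBy((event) => event.type),
    mergeMap((group$) => {
      const sampler = configs[group$.key]?.sampler;
      return sampler ? group$.pipe(sampler) : group$;
    }),
  );

// Example config: throttle noisy scroll events to at most one per second.
const configs: EventConfigMap = {
  "ui/scroll": { sampler: throttleTime(1000) },
};
```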

---

## Security and Privacy Layer

Implement sanitizers, consent checking, and PII scrubbing utilities.

**Requirements**:
- Given server receives Event with headers, should apply headerSanitizer before logging
- Given server receives Event with request/response data, should apply payloadSanitizer before logging
- Given client consentProvider returns analytics false, should skip all telemetry events
- Given payload matches PII detection patterns, should redact before storage or transport
- Given developer integrates logger, should have documentation on GDPR-compliant PII scrubbing
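
A small illustration of the sanitizer hooks; the header list, email regex, and helper names are assumptions, not a complete GDPR-grade PII policy:

```ts
// Illustrative sanitizers for the headerSanitizer / payloadSanitizer hooks.
const SENSITIVE_HEADERS = ["authorization", "cookie", "x-api-key"];
const EMAIL_PATTERN = /[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}/gi;

export const headerSanitizer = (
  headers: Record<string, string>,
): Record<string, string> =>
  Object.fromEntries(
    Object.entries(headers).map(([key, value]) => [
      key,
      SENSITIVE_HEADERS.includes(key.toLowerCase()) ? "[REDACTED]" : value,
    ]),
  );

export const payloadSanitizer = (payload: any): any => {
  if (payload == null) return payload;
  // Redact email-shaped strings anywhere in the serialized payload.
  return JSON.parse(JSON.stringify(payload).replace(EMAIL_PATTERN, "[REDACTED]"));
};

// Consent gate: non-essential events are skipped without analytics consent.
export const shouldLog = (consent: { analytics: boolean }, essential = false) =>
  essential || consent.analytics;
```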

---

## Client Transport Layer

Implement batching and idempotent delivery to /api/events/[id] endpoint.

**Requirements**:
- Given client online and queued events reach batchSizeMax, should flush immediately
- Given client online and flushIntervalMs elapsed, should flush on timer
- Given POST to /api/events/[id] fails, should log error to console and ignore (fire-and-forget)
- Given batch to flush, should POST to /api/events/[eventId] with Content-Type application/json
- Given multiple events to flush, should batch into single POST body with events array

Bug: Batching Conflicts with URL Design

The client transport requirements for flushing events are contradictory. The design specifies POSTing to an endpoint with a single event ID in the URL path, but also requires batching multiple events into a single request body. This creates an ambiguous URL structure for batched requests and conflicts with the server's idempotency handling, which expects individual event IDs.

---

## Server Event Endpoint

Implement POST /api/events/[id] handler with validation and idempotency.

**Requirements**:
- Given request method is not POST, should reject with 405 Method Not Allowed
- Given Content-Type is not application/json, should reject with 415 Unsupported Media Type
- Given request origin not in allowedOrigins, should reject with 403 Forbidden
- Given request referer origin does not match origin header, should reject with 400 Bad Request

Copilot AI Dec 7, 2025

The requirement mentions checking the "referer origin", but the HTTP header is spelled "Referer" (the misspelling is intentional in the HTTP standard). The check itself is also unclear: typically you would verify that the Origin header is in the allowed origins, or that the Referer's origin matches the request origin, so "request referer origin does not match origin header" needs clarification. Consider rewording it as "Given request Referer header origin does not match Origin header, should reject with 400 Bad Request", or removing it if it is already covered by the allowedOrigins check on line 219.

Suggested change
- Given request referer origin does not match origin header, should reject with 400 Bad Request

- Given request body exceeds maxByteLength, should reject with 413 Payload Too Large
- Given Event timestamp outside skewWindow (past), should reject with 400 Bad Request
- Given Event timestamp outside futureSkewWindow (future), should reject with 400 Bad Request
- Given duplicate eventId received, should respond 204 No Content without processing
- Given Event passes all validations, should enqueue and respond 204 No Content
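
The validation order could look roughly like this sketch; `isDuplicate` stands in for whatever idempotency store the implementation uses, and `allowedOrigins` is assumed to come from configuration (see the review note above about adding it to LoggerOptions):

```ts
// Illustrative guard chain for POST /api/events/[id]; status codes follow the
// requirements above.
interface EndpointOptions {
  allowedOrigins: string[];
  maxByteLength: number;    // e.g. 64 * 1024
  skewWindow: number;       // default 24h in ms
  futureSkewWindow: number; // default 1h5m in ms
}

export const handleEvent = (
  req: { method: string; headers: Record<string, string>; body: string },
  event: { eventId: string; payload: { timestamp: number } },
  opts: EndpointOptions,
  isDuplicate: (eventId: string) => boolean, // assumed idempotency-store lookup
): number => {
  if (req.method !== "POST") return 405;
  if (!(req.headers["content-type"] ?? "").includes("application/json")) return 415;
  if (!opts.allowedOrigins.includes(req.headers["origin"] ?? "")) return 403;
  if (Buffer.byteLength(req.body) > opts.maxByteLength) return 413;

  const now = Date.now();
  const ts = event.payload.timestamp;
  if (now - ts > opts.skewWindow) return 400;       // too far in the past
  if (ts - now > opts.futureSkewWindow) return 400; // too far in the future

  if (isDuplicate(event.eventId)) return 204; // already processed: no-op
  // enqueue(event) would happen here in the real handler
  return 204;
};
```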

---

## Schema Validation

Implement JSON schema validation for Event types using Ajv.

**Requirements**:
- Given Event posted to server, should validate Event.payload against registered JSON schema
- Given Event fails schema validation, should reject with 400 Bad Request and detailed error
- Given Event type has no registered schema, should validate against default Event schema
- Given schemas defined at initialization, should compile with Ajv once on startup
- Given developer registers EventConfig with schema, should validate schema is valid JSON Schema Draft 7
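
A sketch of the compile-once pattern with Ajv; the default schema shown is an assumption standing in for the project's real Event schema:

```ts
import Ajv, { type ValidateFunction } from "ajv";

const ajv = new Ajv();

// Hypothetical fallback schema used when an Event type has no registered schema.
const defaultEventSchema = {
  type: "object",
  properties: {
    type: { type: "string" },
    payload: { type: "object" },
  },
  required: ["type", "payload"],
};

// Compile once on startup, per the requirements above.
const validators: Record<string, ValidateFunction> = {
  default: ajv.compile(defaultEventSchema),
};

export const validateEvent = (event: { type: string; payload: unknown }) => {
  const validate = validators[event.type] ?? validators.default;
  const valid = validate(event);
  return valid ? { valid: true } : { valid: false, errors: validate.errors };
};
```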

---

## Testing Infrastructure

Create test utilities, mocks, and adapters for isolated testing.

**Requirements**:
- Given tests need storage isolation, should provide mock StorageAdapter
- Given tests need dispatch spy, should provide vi.fn mock for Dispatch
- Given tests need events$ stub, should provide Subject from rxjs for controllable event emission

Copilot AI Dec 7, 2025

Inconsistent capitalization of "RxJS". The library name should be "RxJS" (not "rxjs") to match the official library name capitalization used elsewhere in the document (e.g., "RxJSOperator" type name).

Suggested change
- Given tests need events$ stub, should provide Subject from rxjs for controllable event emission
- Given tests need events$ stub, should provide Subject from RxJS for controllable event emission

- Given tests need deterministic time, should inject clock function returning fixed Timestamp
- Given tests run in parallel, should not share localStorage state between test suites
- Given logger created in test, should expose dispose method for cleanup
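
The test utilities described above might look like this sketch, assuming Vitest and RxJS; `createMemoryStorage` and `createTestHarness` are illustrative names:

```ts
import { Subject } from "rxjs";
import { vi } from "vitest";

interface StorageAdapter {
  getItem: (key: string) => Promise<string | null>;
  setItem: (key: string, value: string) => Promise<void>;
  removeItem: (key: string) => Promise<void>;
  keys: () => Promise<string[]>;
}

// In-memory StorageAdapter so parallel suites never share localStorage state.
export const createMemoryStorage = (): StorageAdapter => {
  const store = new Map<string, string>();
  return {
    getItem: async (key) => store.get(key) ?? null,
    setItem: async (key, value) => { store.set(key, value); },
    removeItem: async (key) => { store.delete(key); },
    keys: async () => [...store.keys()],
  };
};

// Controllable event stream, dispatch spy, and fixed clock for logger tests.
export const createTestHarness = () => ({
  events$: new Subject<{ type: string; payload: any }>(),
  dispatch: vi.fn(),
  clock: () => 1_700_000_000_000, // fixed Timestamp for determinism
});
```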

---

## Documentation and Examples

Document integration patterns, PII guidelines, and usage examples.

**Requirements**:
- Given developer integrates logger, should have clear client setup example with createLogger
- Given developer integrates logger, should have clear server middleware example with withLogger
- Given developer needs PII scrubbing, should have comprehensive guidelines with examples
- Given developer configures events, should have RxJS operator examples (takeEvery, sampleTime, throttleTime)
- Given developer handles retention, should have documentation referencing GDPR Article 5 and Article 17
- Given developer creates Event schemas, should have Ajv JSON Schema examples for Redux action objects
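
For the RxJS operator examples called out above, a documentation snippet could look like the following; the event type names are hypothetical, and `sampleTime`/`throttleTime` are standard RxJS operators:

```ts
import { sampleTime, throttleTime } from "rxjs";

// Hypothetical per-event sampling configuration for the docs.
const loggerOptions = {
  events: {
    "ui/mousemove": { sampler: sampleTime(2_000) }, // at most one event every 2s
    "api/request": { sampler: throttleTime(500) },  // drop bursts within 500ms
    "auth/error": { shouldLog: true },              // always log, never sampled
  },
};
```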