# TV Guide XMLTV Server

A Fastify-based API that ingests TV listings from tvtv.us into a local database and serves a Jellyfin-compatible XMLTV feed. Optional FlareSolverr integration automates Cloudflare cookie handling.
## Features
- Ingest tvtv.us lineup channels and 7-day programme grid with batching and retries
- Serve streaming XMLTV output compatible with Jellyfin/TVH
- Optional FlareSolverr for automated Cloudflare challenge bypass
- SQLite for development, Postgres for production
- Scheduler + Worker with DB-backed job queue
- Observability: structured request logs and a basic metrics endpoint (`/metrics.json`)
- Tvpass support: snapshot ingestion and independent XMLTV output
- Tvpass M3U playlist caching with TTL and conditional requests (ETag/Last-Modified)
## Architecture (high level)

```mermaid
flowchart LR
  subgraph Client
    A[User / Jellyfin]
  end
  subgraph API[Fastify Server]
    R1["GET /lineups/:id/channels"]
    R2["GET /lineups/:id/xmltv"]
    R3["GET /xmltv/tvpass"]
    R4["GET /tvpass/playlist.m3u"]
    R5["GET /tvpass/playlist.meta"]
  end
  subgraph Services
    S1[TvtvXmltvService]
    S2[TvpassXmltvService]
    S3[TvpassPlaylistService]
    S4[ChannelService]
  end
  subgraph Worker
    W1[Scheduler]
    W2[Job Worker]
  end
  subgraph DB[SQLite / Postgres]
    DBCore[(DB)]
    T1[lineups]
    T2[channels]
    T3[programmes]
    T4[jobs]
    T5[cache_items]
  end
  subgraph Upstream
    U1[(tvtv.us)]
    U2[(tvpass.org)]
    U3[(FlareSolverr)]
  end
  A --> R1
  A --> R2
  A --> R3
  A --> R4
  A --> R5
  R1 --> S4 --> DBCore
  R2 --> S1 --> DBCore
  R3 --> S2 --> DBCore
  R4 --> S3 --> DBCore
  R5 --> S3 --> DBCore
  W1 -->|enqueue| T4
  W2 -->|lease/process| T4
  W2 -->|ingest| DBCore
  W2 -->|HTTP| U1
  W2 -->|HTTP| U2
  W2 -. optional .-> U3
```

See more detailed diagrams and data flows in `architecture.md`.
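The Scheduler/Worker pair above revolves around a DB-backed job queue: the scheduler enqueues jobs, the worker leases and processes them. The lease step can be sketched as follows — this is an illustrative in-memory model, not the project's actual schema or code; field names like `nextRunAt` and `leasedUntil` are assumptions:

```typescript
// Illustrative in-memory model of a DB-backed job queue lease step.
// Field names (status, nextRunAt, leasedUntil) are assumptions, not the
// project's actual schema.
type Job = {
  id: number;
  type: string;
  status: "queued" | "leased" | "done" | "failed";
  nextRunAt: number;    // epoch ms: when the job becomes due
  leasedUntil?: number; // epoch ms: lease expiry, so crashed workers' jobs go stale
};

// Lease up to `max` due jobs. Jobs whose lease has expired are treated as
// stale and re-leased (the admin "reset stale leases" action covers the same
// failure mode on the real queue).
function leaseJobs(jobs: Job[], now: number, leaseMs: number, max: number): Job[] {
  const leased: Job[] = [];
  for (const job of jobs) {
    if (leased.length >= max) break;
    const staleLease =
      job.status === "leased" && job.leasedUntil !== undefined && job.leasedUntil <= now;
    if ((job.status === "queued" && job.nextRunAt <= now) || staleLease) {
      job.status = "leased";
      job.leasedUntil = now + leaseMs;
      leased.push(job);
    }
  }
  return leased;
}
```

The lease expiry is what makes a single-table queue safe with multiple workers: a worker that dies mid-job simply lets its lease lapse.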
## Quick Start (Development)

1. Copy env and adjust:

   ```sh
   cp .env.example .env
   # adjust DB_CLIENT (sqlite for dev) and any upstream headers/cookies if needed
   ```

2. Install deps:

   ```sh
   npm ci
   ```

3. Run migrations (dev):

   ```sh
   npm run db:migrate
   ```

4. Start the API (dev):

   ```sh
   npm run dev
   ```

The API listens on http://localhost:3000.
## Docker Compose (Production-like)

The Compose stack includes Postgres and the API. FlareSolverr is available behind a profile.

```sh
# Build and start db+api
docker compose up --build -d

# If you want FlareSolverr too:
docker compose --profile flaresolverr up -d flaresolverr
```

Environment variables for the API service can be overridden via an `.env` file or inline in Compose. See `docker-compose.yml` and the Environment Variables section below.

Notes:
- The API service runs DB migrations on startup and exposes a healthcheck that calls `GET /healthz`.
- Postgres has a healthcheck via `pg_isready`; the API waits for it before starting.
## Environment Variables

- Runtime
  - `PORT` (default `3000`)
  - `NODE_ENV` (`development`|`production`)
  - `LOG_LEVEL` (`debug`|`info`|`warn`|`error`)
- Database
  - `DB_CLIENT` (`sqlite`|`postgres`)
  - `DATABASE_URL` (e.g. `postgres://user:pass@host:5432/db`, or a SQLite file path)
- Defaults & behavior
  - `DEFAULT_LINEUP_ID` (default `USA-OTA80022`)
  - `TIMEZONE` (default `UTC`)
  - `CONCURRENCY_MAX` (default `20`)
  - `CONCURRENCY_PER_LINEUP` (default `4`)
  - `RATE_LIMIT_RPS` (default `20`)
- Upstream HTTP
  - `UPSTREAM_UA`
  - `UPSTREAM_ACCEPT_LANGUAGE`
  - `UPSTREAM_REFERER`
  - `UPSTREAM_COOKIES` (cookie header string; auto-managed if FlareSolverr is enabled)
- FlareSolverr (optional)
  - `FLARESOLVERR_ENABLED` (default `false`)
  - `FLARESOLVERR_URL` (default `http://localhost:8191`)
  - `FLARESOLVERR_TIMEOUT_MS` (default `90000`)
  - `FLARESOLVERR_AUTO_WRITE_ENV` (default `false`; writes `UPSTREAM_COOKIES` to `.env`)
- Feature flags
  - `FEATURE_TVPASS` (default `true`; stubbed for future)
  - `FEATURE_UI` (default `true`; placeholder)
  - `FEATURE_REFRESH` (default `true`)
- Tvpass
  - `TVPASS_LINEUP_ID` (lineup label for the tvpass dataset)
  - `TVPASS_SNAPSHOT_URL` (JSON snapshot source)
  - `TVPASS_PLAYLIST_URL` (M3U playlist URL)
  - `TVPASS_PLAYLIST_TTL_MINUTES` (playlist cache TTL; default set in config)
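For orientation, a development-leaning `.env` might combine the keys above like this. Values are placeholders; only the defaults listed above are authoritative:

```
# Runtime
PORT=3000
NODE_ENV=development
LOG_LEVEL=debug

# Database (SQLite for dev)
DB_CLIENT=sqlite
DATABASE_URL=./data/dev.sqlite3

# Defaults
DEFAULT_LINEUP_ID=USA-OTA80022
TIMEZONE=UTC

# Tvpass
FEATURE_TVPASS=true
TVPASS_LINEUP_ID=tvpass
TVPASS_PLAYLIST_TTL_MINUTES=60
```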
## Ingestion

- Channels (once per lineup or daily): `npm run ingest:channels`
- Grid (windowed slices; see the script for slice config): `npm run ingest:grid`
- Scheduler (enqueue channels daily plus 7-day grid slices): `npm run schedule:once`
- Job summary: `npm run jobs:summary`
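The windowed grid ingestion above (and the 6-hour slices mentioned under `POST /refresh` below) amounts to splitting a multi-day window into fixed-size time slices. A minimal sketch — the helper name and shapes are illustrative, not the project's actual API:

```typescript
// Split a multi-day grid window into fixed-size time slices
// (e.g. 6-hour windows across 7 days). Illustrative helper only.
type Slice = { start: Date; end: Date };

function gridSlices(from: Date, days: number, sliceHours: number): Slice[] {
  const slices: Slice[] = [];
  const sliceMs = sliceHours * 60 * 60 * 1000;
  const endMs = from.getTime() + days * 24 * 60 * 60 * 1000;
  for (let t = from.getTime(); t < endMs; t += sliceMs) {
    // Clamp the last slice so it never overshoots the overall window.
    slices.push({ start: new Date(t), end: new Date(Math.min(t + sliceMs, endMs)) });
  }
  return slices;
}
```

Slicing keeps each upstream grid request small, which plays well with the rate limits (`RATE_LIMIT_RPS`) and retry behavior described above.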
## API Routes
- Health
  - Method/Path: `GET /healthz`
  - Response: `{ ok: boolean, db: 'ok'|'error', workerLagSec: number }`
  - Content-Type: `application/json; charset=utf-8`
- Metrics
  - Method/Path: `GET /metrics.json`
  - Response: JSON snapshot of internal metrics
  - Content-Type: `application/json; charset=utf-8`
- Channels (by lineup)
  - Method/Path: `GET /lineups/:externalLineupId/channels`
  - Params (path): `externalLineupId` (e.g. `USA-OTA80022`)
  - Responses:
    - 200 JSON: `{ lineupId, source, channels: [{ stationId, callSign, channelNumber, logoUrl }] }`
    - 404 JSON: `{ error: 'lineup-not-found', externalLineupId }`
  - Content-Type: `application/json; charset=utf-8`
- XMLTV (tvtv dataset)
  - Method/Path: `GET /lineups/:externalLineupId/xmltv`
  - Query: `days` (int 1–14, default 7), `stationIds` (CSV of station IDs, optional)
  - Response: XMLTV stream
  - Content-Type: `application/xml; charset=utf-8`
  - Behavior: opportunistically schedules ingestion if data is missing or sparse (`channels.refresh`, `grid.refreshSlice`). Falls back to Tvpass data if tvtv data is unavailable; returns 404 if nothing can be produced.
  - Caching: no explicit cache headers set by the server (clients may cache as desired).
- XMLTV (tvpass dataset)
  - Method/Path: `GET /xmltv/tvpass`
  - Query: `days` (int 1–14, default 7), `stationIds` (CSV of Tvpass station IDs; matches the M3U `tvg-id`)
  - Response: XMLTV stream
  - Content-Type: `application/xml; charset=utf-8`
  - Behavior: opportunistically schedules Tvpass snapshot/playlist refresh if data is missing or sparse.
  - Caching: no explicit cache headers.
- Tvpass Playlist (cached with TTL, ETag/Last-Modified)
  - Method/Path: `GET /tvpass/playlist.m3u`
  - Query: `force` (boolean: `1|true|yes|on` to bypass the cache and fetch upstream)
  - Response: M3U content
  - Content-Type: upstream content-type, or `audio/x-mpegurl; charset=utf-8`
  - Caching behavior:
    - Cache stored in DB table `cache_items` (key `tvpass:playlist.m3u`).
    - Freshness TTL controlled by `TVPASS_PLAYLIST_TTL_MINUTES` (default set in config).
    - Uses conditional requests with `If-None-Match` (ETag) and `If-Modified-Since` (Last-Modified).
    - On upstream `304 Not Modified`, refreshes the cache timestamp without downloading the body.
    - On upstream non-200, serves the stale cache if available.
- Tvpass Playlist cache meta
  - Method/Path: `GET /tvpass/playlist.meta`
  - Response: `{ cached, fetchedAt, ageMinutes, fresh, etag, lastModified, contentType, ttlMinutes, playlistUrl }`
  - Content-Type: `application/json; charset=utf-8`
- EPGShare XMLTV (proxy)
  - Method/Path: `GET /epgshare/xmltv`
  - Query: `id` (default `US_LOCALS2`, token pattern `[A-Z0-9_]+`)
  - Response: streams and gunzips XMLTV from epgshare01
  - Content-Type: `application/xml; charset=utf-8`
  - Caching: `Cache-Control: public, max-age=900` (15 minutes)
  - Errors: `400 { error: 'invalid-id' }`; on upstream non-200 returns `{ error: 'upstream-error', status, body }`; on network errors returns `502 { error: 'bad-gateway', message }`
- EPGShare XMLTV (match by callsigns)
  - Method/Path: `GET /epgshare/xmltv/match`
  - Query: `id` (default `US_LOCALS2`), `lineup` (default: server `DEFAULT_LINEUP_ID`), `callsigns` (CSV; optional explicit allowlist)
  - Behavior: if `callsigns` is omitted, allowed callsigns are derived from the current TvTV channels for `lineup` (DB fallback). Filters channels/programmes by normalized callsigns.
  - Response: filtered XMLTV stream
  - Content-Type: `application/xml; charset=utf-8`
  - Caching: `Cache-Control: public, max-age=900`
  - Errors: `400 { error: 'invalid-lineup' }` for a bad lineup param; `404 { error: 'no-callsigns-for-lineup-tvtv', lineup }` if none could be determined.
- Refresh scheduler
  - Method/Path: `POST /refresh`
  - Body (JSON): `{ lineupId: string, source?: 'tvtv'|'tvpass'|'all' (default 'all'), full?: boolean }`
  - Behavior: enqueues ingestion jobs. For tvtv: `channels.refresh` and, when `full=true`, `grid.refreshSlice` across the next 7 days in 6-hour windows. For tvpass: `tvpass.refreshSnapshot` and, when `full=true`, `tvpass.refreshPlaylist`.
  - Feature flags: skipped when `FEATURE_REFRESH=false` (tvtv) or `FEATURE_TVPASS=false` (tvpass).
  - Response: `{ ok, scheduled, skipped, details: [{ type, status: 'scheduled'|'skipped', reason? }] }`
  - Content-Type: `application/json; charset=utf-8`
- UI (HTML dashboard; requires `FEATURE_UI=true`)
  - `GET /ui`
    - Response: full HTML dashboard
    - Content-Type: `text/html; charset=utf-8`
  - `GET /ui/partials/guide`
    - Query: `lineupId` (string; defaults to server `DEFAULT_LINEUP_ID` if omitted), `hours` (int 1–24; default 3), `slot` (int minutes 5–120; default 30), `start` (optional ISO/local datetime), `q` (optional search filter for channel call sign/number/station id), `limit` (int 5–200; default 25)
    - Response: HTML fragment of the guide grid
    - Content-Type: `text/html; charset=utf-8`
    - Errors: 500 HTML snippet on render failure
  - `GET /ui/partials/jobs-summary`
    - Response: HTML fragment summarizing jobs by type/status with the next run time per type
    - Content-Type: `text/html; charset=utf-8`
  - `GET /ui/partials/health`
    - Response: HTML fragment with DB health (`ok`/`error`)
    - Content-Type: `text/html; charset=utf-8`
  - `POST /ui/run-jobs`
    - Body (form or JSON): `{ max?: number }` (default 10; 1–100)
    - Behavior: manually leases and processes up to `max` queued jobs using the same handlers as the worker
    - Response: HTML fragment listing processed/failed job types
    - Content-Type: `text/html; charset=utf-8`
  - `POST /ui/refresh`
    - Body (JSON via HTMX): `{ lineupId: string, source?: 'tvtv'|'tvpass'|'all' (default 'all'), full?: boolean }`
    - Behavior: proxies to `POST /refresh` and returns an HTML summary of scheduled/skipped jobs
    - Response status mirrors the underlying `/refresh` call
    - Content-Type: `text/html; charset=utf-8`
  - Admin actions from the UI:
    - The UI provides an input for an admin token that is stored in `localStorage` and automatically attached as `Authorization: Bearer <TOKEN>` for UI elements marked with `data-admin-auth="true"`.
    - You can trigger admin endpoints (e.g., `POST /admin/maintenance/reset-stale`) from the UI. Admin routes also accept `x-admin-token`.
- Admin (requires token)
  - Auth: provide `Authorization: Bearer <TOKEN>` or `x-admin-token: <TOKEN>`. Configure the token in server config.
  - `POST /admin/maintenance/reset-stale`
    - Effect: resets stale job leases. Returns JSON `{ reset }`, or a small HTML snippet when `HX-Request: true`.
  - `POST /admin/jobs/force-process` (query or body `minutes=1..60`, default 5)
    - Effect: pulls forward queued jobs whose `next_run_at` is within the window. Returns JSON `{ pulledForward, minutes }`, or HTML when `HX-Request: true`.
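The playlist caching rules for `GET /tvpass/playlist.m3u` (TTL freshness plus conditional revalidation) can be sketched as two small pure helpers. Names and shapes here are illustrative, not the project's actual implementation:

```typescript
// Sketch of the playlist cache decisions described above:
// - within TTL -> serve the cached body directly;
// - past TTL  -> revalidate upstream with If-None-Match / If-Modified-Since
//   (a 304 then just refreshes the timestamp; a non-200 falls back to stale).
// The CacheItem shape is an assumption, not the real cache_items schema.
type CacheItem = { fetchedAt: number; etag?: string; lastModified?: string };

function isFresh(item: CacheItem, ttlMinutes: number, now: number): boolean {
  return now - item.fetchedAt < ttlMinutes * 60_000;
}

// Build the conditional-request headers for a revalidation fetch.
function conditionalHeaders(item: CacheItem): Record<string, string> {
  const headers: Record<string, string> = {};
  if (item.etag) headers["If-None-Match"] = item.etag;
  if (item.lastModified) headers["If-Modified-Since"] = item.lastModified;
  return headers;
}
```

The `force` query parameter described above would simply skip the `isFresh` check and go straight to an unconditional upstream fetch.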
## Smoke Tests

Run these quick checks after starting the server to validate end-to-end wiring:

```sh
# Health
curl -sS http://127.0.0.1:3000/healthz

# Channels for default lineup (show the first ~1.5 KB)
curl -sS http://127.0.0.1:3000/lineups/USA-OTA80022/channels | head -c 1500

# XMLTV (1 day, a few station IDs) – stream just the first lines
curl -sS "http://127.0.0.1:3000/lineups/USA-OTA80022/xmltv?days=1&stationIds=35366,35856,100218" | head -n 30
```
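For reference, the XMLTV these endpoints stream follows the standard XMLTV shape: a `<tv>` root containing `<channel>` and `<programme>` elements. A minimal illustrative generator — not the project's actual serializer, just the shape the smoke test above should show:

```typescript
// Build a minimal XMLTV document from channel/programme rows.
// Mirrors the standard XMLTV element shape, not the project's exact output.
type Channel = { id: string; name: string };
type Programme = { channel: string; start: string; stop: string; title: string };

// Escape the XML special characters allowed in text content.
const esc = (s: string) =>
  s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");

function xmltv(channels: Channel[], programmes: Programme[]): string {
  const parts = ['<?xml version="1.0" encoding="UTF-8"?>', "<tv>"];
  for (const c of channels) {
    parts.push(
      `<channel id="${esc(c.id)}"><display-name>${esc(c.name)}</display-name></channel>`
    );
  }
  for (const p of programmes) {
    // XMLTV timestamps look like "20240101120000 +0000".
    parts.push(
      `<programme start="${p.start}" stop="${p.stop}" channel="${esc(p.channel)}">` +
        `<title>${esc(p.title)}</title></programme>`
    );
  }
  parts.push("</tv>");
  return parts.join("\n");
}
```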
## Minimal UI

Served when `FEATURE_UI=true` (default). Visit `http://localhost:3000/ui/`.

- Overview
  - Built with Franken UI (CDN) + HTMX; no build step required.
  - Sections: Jobs Summary (auto-refresh), Admin, Trigger Refresh, TV Guide grid, Health.
- Jobs Summary
  - Auto-refreshes every 5s via `GET /ui/partials/jobs-summary`.
  - "Run queued" posts to `POST /ui/run-jobs` with `max` (1–100; the UI defaults to 5, the server default is 10) and shows a result snippet.
- Admin
  - Enter your admin token; it is stored locally in your browser.
  - The UI attaches `Authorization: Bearer <TOKEN>` for actions marked with `data-admin-auth="true"`.
  - "Reset stale leases" triggers `POST /admin/maintenance/reset-stale` and renders the result.
- Trigger Refresh
  - Enqueues work via `POST /ui/refresh` (proxied to `POST /refresh`).
  - Fields: lineupId, source (`all`|`tvtv`|`tvpass`), full (grid slices for tvtv).
- TV Guide grid
  - Loads via `GET /ui/partials/guide` with controls for lineup, search, hours, slot (min), start time, and channel limit.
  - Features: sticky first column, current-time indicator line, tooltips with title/subtitle/time/description.
- Health
  - `GET /ui/partials/health` shows DB status.
- Metrics
  - Raw metrics endpoint: `GET /metrics.json` (JSON snapshot).

Toggle the UI off by setting `FEATURE_UI=false`.
## Cloudflare and FlareSolverr

- If requests to tvtv.us are blocked, enable FlareSolverr:
  - Set `FLARESOLVERR_ENABLED=true`
  - Run FlareSolverr (see the Compose profile)
- When the app detects a block, it requests cookies from FlareSolverr and retries once.
- If `FLARESOLVERR_AUTO_WRITE_ENV=true`, new cookies are persisted to `.env` as `UPSTREAM_COOKIES`.

Manual cookie fetch (fallback):

```sh
npm run get:cookies
# follow the headful browser to solve challenges; the cookies are then printed
# and can be copied to UPSTREAM_COOKIES
```
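When FlareSolverr solves a challenge, its JSON response typically carries the solved cookies as a `solution.cookies` array of `{ name, value, ... }` objects; joining those yields an `UPSTREAM_COOKIES`-style header string. A sketch under that assumption — verify the response shape against your FlareSolverr version:

```typescript
// Turn a FlareSolverr-style cookie array into a Cookie header string
// suitable for UPSTREAM_COOKIES. The { name, value } shape matches the
// usual FlareSolverr `solution.cookies` payload; treat it as an assumption.
type SolverCookie = { name: string; value: string };

function toCookieHeader(cookies: SolverCookie[]): string {
  return cookies.map((c) => `${c.name}=${c.value}`).join("; ");
}
```

This is essentially what auto-managed cookies amount to: the solver's cookie jar serialized into the `Cookie` request header used for upstream fetches.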
## Troubleshooting

- 403/Cloudflare blocks: enable FlareSolverr and verify it is reachable; ensure the UA/Referer headers are realistic.
- Empty XMLTV: confirm ingestion ran and the DB contains channels/programmes for the lineup.
- Postgres connection: check `DATABASE_URL` and that the `db` service is healthy in Compose.
- SQLite file path: the default dev DB is `./data/dev.sqlite3`; ensure the `data/` directory exists (it is auto-created).
## Build & Run

- Typecheck: `npm run typecheck`
- Build: `npm run build`
- Start (compiled): `npm start`
## Testing

Unit and integration tests (Vitest):

```sh
npm run test:unit         # unit tests only
npm run test:integration  # integration tests only (in-memory SQLite)
npm run test:run          # all tests once
```

Notes:
- Tests use a dedicated setup (`tests/setup.ts`) that configures in-memory SQLite and runs migrations automatically.
- Integration tests validate tvpass ingestion end-to-end and XMLTV generation.
The Docker image's startup command runs migrations before the server starts:

```sh
node dist/db/migrate.js && node dist/server.js
```
## CI/CD and Deployment (Forgejo + Portainer)

- Workflow file: `.forgejo/workflows/ci.yml`
- Deploy helper: `deploy-portainer.sh`
- Docker ignore: `.dockerignore`

What it does:
- Test job: installs deps, type-checks (`npm run typecheck`), and runs tests (`npm run test:run`).
- Build job: builds and pushes the Docker image to your registry using Buildx/BuildKit.
- Deploy jobs:
  - `develop` branch → deploys to the staging stack.
  - `main` branch → deploys to the production stack.
  - Deploy uses `deploy-portainer.sh` to trigger your Portainer stack webhook with the image tag set to `${GITHUB_SHA}`.

Configure these in `.forgejo/workflows/ci.yml`:
- `REGISTRY`: e.g. `registry.aqueous.network`
- `IMAGE_NAME`: e.g. `tv-guide-xmltv`
- `PORTAINER_URL`: e.g. `https://port.aqueous.network`
- `PORTAINER_STACK_ID_STAGING` / `PORTAINER_STACK_ID_PRODUCTION`: your Portainer stack IDs

Required Forgejo repository secrets:
- `PORTAINER_API_KEY`: Portainer API key (for status checks and the deploy script)
- `PORTAINER_WEBHOOK_URL`: webhook URL for the target stack
- Optional (only if your registry requires auth):
  - `REGISTRY_USERNAME`
  - `REGISTRY_PASSWORD`

Notes:
- The workflow tags pushed images as `${sha}`, `${ref_name}`, and `latest`.
- If your registry requires auth, enable `docker/login-action@v3` in the build job and set the above credentials as secrets.
- The deploy script supports an optional `PORTAINER_ENDPOINT_ID` (defaults to `1`).
- Example deploy step usage in CI:

  ```sh
  ./deploy-portainer.sh <STACK_ID> staging "$GITHUB_SHA"
  ```
## Production Deployment Guide

This section summarizes practical ways to run in production. See the full list of configuration keys in the "Environment Variables" section above.

- Option A: Docker Compose (single host)
  - Prepare env: copy `.env.example` → `.env`; set `DB_CLIENT=postgres`, `DATABASE_URL`, `DEFAULT_LINEUP_ID`, feature flags (`FEATURE_TVPASS`, `FEATURE_UI`), and optionally FlareSolverr (`FLARESOLVERR_ENABLED=true`).
  - Start services: `docker compose up -d db api` (and, for the optional Cloudflare solver, `docker compose --profile flaresolverr up -d flaresolverr`).
  - Queue the initial 7-day work (scheduler): `docker compose exec api node dist/scripts/schedule.js`
  - Start a background worker (continuously processes jobs): `docker compose run -d --name worker --no-deps api node dist/scripts/worker.js`
  - Verify: `/healthz`, `/metrics.json`, the UI at `/ui/`, and XMLTV at `/lineups/<LINEUP_ID>/xmltv?days=7`.
- Option B: Portainer Stack (recommended)
  - Create a stack using your built image. Include:
    - `api` service (this repo's image) with env from your prod `.env`/stack vars
    - `worker` service running `node dist/scripts/worker.js`
    - `scheduler` service that runs daily: `sh -lc "while true; do node dist/scripts/schedule.js; sleep 86400; done"`
    - a Postgres service or managed DB; point `DATABASE_URL` accordingly
    - optional `flaresolverr` service; set `FLARESOLVERR_ENABLED=true`
- Option C: CI/CD (Forgejo → Registry → Portainer)
  - Set registry/Portainer secrets in Forgejo. Push to `main` (prod) or `develop` (staging). The workflow builds/pushes and triggers your Portainer stack via webhook.
## Recommended services (Compose/Stack snippets)

Worker:

```yaml
worker:
  image: ${REGISTRY}/${IMAGE_NAME}:${API_IMAGE_TAG:-latest}
  restart: unless-stopped
  environment:
    NODE_ENV: production
    LOG_LEVEL: ${LOG_LEVEL:-info}
    DB_CLIENT: postgres
    DATABASE_URL: ${DATABASE_URL}
    DEFAULT_LINEUP_ID: ${DEFAULT_LINEUP_ID:-USA-OTA80022}
    TIMEZONE: ${TIMEZONE:-UTC}
    CONCURRENCY_MAX: ${CONCURRENCY_MAX:-20}
    CONCURRENCY_PER_LINEUP: ${CONCURRENCY_PER_LINEUP:-4}
    RATE_LIMIT_RPS: ${RATE_LIMIT_RPS:-20}
    UPSTREAM_UA: ${UPSTREAM_UA:-Mozilla/5.0 (X11; Linux x86_64; rv:137.0) Gecko/20100101 Firefox/137.0}
    UPSTREAM_ACCEPT_LANGUAGE: ${UPSTREAM_ACCEPT_LANGUAGE:-en-US,en;q=0.5}
    UPSTREAM_REFERER: ${UPSTREAM_REFERER:-https://tvtv.us}
    UPSTREAM_COOKIES: ${UPSTREAM_COOKIES:-}
    FLARESOLVERR_ENABLED: ${FLARESOLVERR_ENABLED:-false}
    FLARESOLVERR_URL: ${FLARESOLVERR_URL:-http://flaresolverr:8191}
    FLARESOLVERR_TIMEOUT_MS: ${FLARESOLVERR_TIMEOUT_MS:-90000}
    FEATURE_TVPASS: ${FEATURE_TVPASS:-true}
    TVPASS_LINEUP_ID: ${TVPASS_LINEUP_ID:-tvpass}
    TVPASS_SNAPSHOT_URL: ${TVPASS_SNAPSHOT_URL:-}
    TVPASS_PLAYLIST_URL: ${TVPASS_PLAYLIST_URL:-https://tvpass.org/playlist/m3u}
    TVPASS_PLAYLIST_TTL_MINUTES: ${TVPASS_PLAYLIST_TTL_MINUTES:-60}
  command: ["node", "dist/scripts/worker.js"]
```

Scheduler (daily):

```yaml
scheduler:
  image: ${REGISTRY}/${IMAGE_NAME}:${API_IMAGE_TAG:-latest}
  restart: unless-stopped
  environment:
    # same env as worker/api
    DB_CLIENT: postgres
    DATABASE_URL: ${DATABASE_URL}
    DEFAULT_LINEUP_ID: ${DEFAULT_LINEUP_ID:-USA-OTA80022}
    FEATURE_TVPASS: ${FEATURE_TVPASS:-true}
  command: ["sh", "-lc", "while true; do node dist/scripts/schedule.js; sleep 86400; done"]
```

Optional FlareSolverr:

```yaml
flaresolverr:
  image: ghcr.io/flaresolverr/flaresolverr:latest
  restart: unless-stopped
  environment:
    LOG_LEVEL: info
```
## Initial data population

- tvtv: ensure `UPSTREAM_COOKIES` is valid or set `FLARESOLVERR_ENABLED=true`. Run the scheduler once and keep the worker running.
- tvpass: set `TVPASS_SNAPSHOT_URL`. The scheduler enqueues `tvpass.refreshSnapshot` and the worker ingests it. Playlist warm-up is handled by `tvpass.refreshPlaylist` on a TTL.
## Operational tips

- Tune `CONCURRENCY_*` and `RATE_LIMIT_RPS` if you see 429/5xx responses from tvtv.us.
- Health: `GET /healthz` → `{ ok, db, workerLagSec }`. Metrics: `GET /metrics.json`.
- Jellyfin XMLTV URL: `http://<host>:3000/lineups/<LINEUP_ID>/xmltv?days=7`, or for tvpass: `http://<host>:3000/xmltv/tvpass`.
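One plausible reading of the `workerLagSec` health figure is "the age of the oldest job that is due but not yet processed". The exact formula is not documented here, so treat this sketch as an assumption rather than the project's actual computation:

```typescript
// Plausible derivation of a worker-lag metric: how long the oldest due,
// still-unprocessed job has been waiting. This formula is an assumption,
// not the project's documented implementation.
function workerLagSec(oldestDueRunAtMs: number | undefined, nowMs: number): number {
  if (oldestDueRunAtMs === undefined) return 0; // no due jobs -> no lag
  return Math.max(0, Math.floor((nowMs - oldestDueRunAtMs) / 1000));
}
```

Under this reading, a persistently growing value would mean the worker is down or saturated, which is exactly what a `/healthz` consumer wants to alert on.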
## License

MIT