
TV Guide XMLTV Server

Fastify-based API that ingests TV listings from tvtv.us into a local database and serves a Jellyfin-compatible XMLTV feed. Optional FlareSolverr integration automates Cloudflare cookie handling.

Features

  • Ingest tvtv.us lineup channels and 7-day programme grid with batching and retries
  • Serve streaming XMLTV output compatible with Jellyfin/TVH
  • Optional FlareSolverr for automated Cloudflare challenge bypass
  • SQLite for development, Postgres for production
  • Scheduler + Worker with DB-backed job queue
  • Observability: structured request logs and basic metrics endpoint (/metrics.json)
  • Tvpass support: snapshot ingestion and independent XMLTV output
  • Tvpass M3U playlist caching with TTL and conditional requests (ETag/Last-Modified)

Architecture (high level)

flowchart LR
  subgraph Client
    A[User / Jellyfin]
  end

  subgraph API[Fastify Server]
    R1["GET /lineups/:id/channels"]
    R2["GET /lineups/:id/xmltv"]
    R3["GET /xmltv/tvpass"]
    R4["GET /tvpass/playlist.m3u"]
    R5["GET /tvpass/playlist.meta"]
  end

  subgraph Services
    S1[TvtvXmltvService]
    S2[TvpassXmltvService]
    S3[TvpassPlaylistService]
    S4[ChannelService]
  end

  subgraph Worker
    W1[Scheduler]
    W2[Job Worker]
  end

  subgraph DB[SQLite / Postgres]
    DBCore[(DB)]
    T1[lineups]
    T2[channels]
    T3[programmes]
    T4[jobs]
    T5[cache_items]
  end

  subgraph Upstream
    U1[(tvtv.us)]
    U2[(tvpass.org)]
    U3[(FlareSolverr)]
  end

  A --> R1
  A --> R2
  A --> R3
  A --> R4
  A --> R5

  R1 --> S4 --> DBCore
  R2 --> S1 --> DBCore
  R3 --> S2 --> DBCore
  R4 --> S3 --> DBCore
  R5 --> S3 --> DBCore

  W1 -->|enqueue| T4
  W2 -->|lease/process| T4
  W2 -->|ingest| DBCore

  W2 -->|HTTP| U1
  W2 -->|HTTP| U2
  W2 -. optional .-> U3

See more detailed diagrams and data flows in architecture.md.

Quick Start (Development)

  1. Copy env and adjust
cp .env.example .env
# adjust DB_CLIENT (sqlite for dev) and any upstream headers/cookies if needed
  2. Install deps
npm ci
  3. Run migrations (dev)
npm run db:migrate
  4. Start API (dev)
npm run dev

The API listens on http://localhost:3000.
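A minimal development .env might look like this (values are illustrative; see the Environment Variables section for the full list):

```shell
# Development settings (illustrative; adjust to your setup)
PORT=3000
NODE_ENV=development
LOG_LEVEL=debug
DB_CLIENT=sqlite
DATABASE_URL=./data/dev.sqlite3
DEFAULT_LINEUP_ID=USA-OTA80022
FLARESOLVERR_ENABLED=false
```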

Docker Compose (Production-like)

The Compose stack includes Postgres and the API. FlareSolverr is available behind a profile.

# Build and start db+api
docker compose up --build -d

# If you want FlareSolverr too:
docker compose --profile flaresolverr up -d flaresolverr

Environment variables for the API service can be overridden via an .env file or inline in Compose. See docker-compose.yml and the Environment section below.

Notes:

  • The API service runs DB migrations on startup and exposes a healthcheck that calls GET /healthz.
  • Postgres has a healthcheck via pg_isready; the API waits for it before starting.
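The healthcheck wiring described above looks roughly like this in Compose (a sketch for orientation; the repo's docker-compose.yml is authoritative):

```yaml
services:
  db:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
  api:
    depends_on:
      db:
        condition: service_healthy   # API waits for Postgres to pass pg_isready
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost:3000/healthz || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
```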

Environment Variables

  • Runtime

    • PORT (default 3000)
    • NODE_ENV (development|production)
    • LOG_LEVEL (debug|info|warn|error)
  • Database

    • DB_CLIENT (sqlite|postgres)
    • DATABASE_URL (e.g. postgres://user:pass@host:5432/db or sqlite file path)
  • Defaults & behavior

    • DEFAULT_LINEUP_ID (default USA-OTA80022)
    • TIMEZONE (default UTC)
    • CONCURRENCY_MAX (default 20)
    • CONCURRENCY_PER_LINEUP (default 4)
    • RATE_LIMIT_RPS (default 20)
  • Upstream HTTP

    • UPSTREAM_UA
    • UPSTREAM_ACCEPT_LANGUAGE
    • UPSTREAM_REFERER
    • UPSTREAM_COOKIES (cookie header string; auto-managed if FlareSolverr enabled)
  • FlareSolverr (optional)

    • FLARESOLVERR_ENABLED (default false)
    • FLARESOLVERR_URL (default http://localhost:8191)
    • FLARESOLVERR_TIMEOUT_MS (default 90000)
    • FLARESOLVERR_AUTO_WRITE_ENV (default false; writes UPSTREAM_COOKIES to .env)
  • Feature Flags

    • FEATURE_TVPASS (default true; stubbed for future)
    • FEATURE_UI (default true; placeholder)
    • FEATURE_REFRESH (default true)
  • Tvpass

    • TVPASS_LINEUP_ID (lineup label for tvpass dataset)
    • TVPASS_SNAPSHOT_URL (JSON snapshot source)
    • TVPASS_PLAYLIST_URL (M3U playlist URL)
    • TVPASS_PLAYLIST_TTL_MINUTES (cache TTL for playlist; default in config)

Ingestion

  • Channels (once per lineup or daily):
    npm run ingest:channels
    
  • Grid (windowed slices; see script for slice config):
    npm run ingest:grid
    
  • Scheduler (enqueue channels daily + 7-day grid slices) and Job summary:
    npm run schedule:once
    npm run jobs:summary
    

API Routes

  • Health

    • Method/Path: GET /healthz
    • Response: { ok: boolean, db: 'ok'|'error', workerLagSec: number }
    • Content-Type: application/json; charset=utf-8
  • Metrics

    • Method/Path: GET /metrics.json
    • Response: JSON snapshot of internal metrics
    • Content-Type: application/json; charset=utf-8
  • Channels (by lineup)

    • Method/Path: GET /lineups/:externalLineupId/channels
    • Params (path): externalLineupId (e.g. USA-OTA80022)
    • Responses:
      • 200 JSON: { lineupId, source, channels: [{ stationId, callSign, channelNumber, logoUrl }] }
      • 404 JSON: { error: 'lineup-not-found', externalLineupId }
    • Content-Type: application/json; charset=utf-8
  • XMLTV (tvtv dataset)

    • Method/Path: GET /lineups/:externalLineupId/xmltv
    • Query:
      • days (int 1..14, default 7)
      • stationIds (CSV of station IDs, optional)
    • Response: XMLTV stream
    • Content-Type: application/xml; charset=utf-8
    • Behavior: opportunistically schedules ingestion if data is missing/sparse (channels.refresh, grid.refreshSlice). Falls back to Tvpass data if tvtv data is unavailable; returns 404 if nothing can be produced.
    • Caching: no explicit cache headers set by server (client may cache as desired).
  • XMLTV (tvpass dataset)

    • Method/Path: GET /xmltv/tvpass
    • Query:
      • days (int 1..14, default 7)
      • stationIds (CSV of Tvpass station IDs; matches M3U tvg-id)
    • Response: XMLTV stream
    • Content-Type: application/xml; charset=utf-8
    • Behavior: opportunistically schedules Tvpass snapshot/playlist refresh if missing/sparse.
    • Caching: no explicit cache headers.
  • Tvpass Playlist (cached with TTL, ETag/Last-Modified)

    • Method/Path: GET /tvpass/playlist.m3u
    • Query: force (boolean: 1|true|yes|on to bypass cache and fetch upstream)
    • Response: M3U content
    • Content-Type: upstream content-type or audio/x-mpegurl; charset=utf-8
    • Caching behavior:
      • Cache stored in DB table cache_items (key tvpass:playlist.m3u).
      • Freshness TTL controlled by TVPASS_PLAYLIST_TTL_MINUTES (default in config).
      • Uses conditional requests with If-None-Match (ETag) and If-Modified-Since (Last-Modified).
      • On upstream 304 Not Modified, refreshes cache timestamp without downloading body.
      • On upstream non-200, serves stale cache if available.
  • Tvpass Playlist cache meta

    • Method/Path: GET /tvpass/playlist.meta
    • Response: { cached, fetchedAt, ageMinutes, fresh, etag, lastModified, contentType, ttlMinutes, playlistUrl }
    • Content-Type: application/json; charset=utf-8
  • EPGShare XMLTV (proxy)

    • Method/Path: GET /epgshare/xmltv
    • Query: id (default US_LOCALS2, token pattern [A-Z0-9_]+)
    • Response: Streams and gunzips XMLTV from epgshare01
    • Content-Type: application/xml; charset=utf-8
    • Caching: Cache-Control: public, max-age=900 (15 minutes)
    • Errors: 400 { error: 'invalid-id' }; on upstream non-200 returns { error: 'upstream-error', status, body }; on network errors returns 502 { error: 'bad-gateway', message }
  • EPGShare XMLTV (match by callsigns)

    • Method/Path: GET /epgshare/xmltv/match
    • Query:
      • id (default US_LOCALS2)
      • lineup (default server DEFAULT_LINEUP_ID)
      • callsigns (CSV; optional explicit allowlist)
    • Behavior: if callsigns is omitted, the allowed callsigns are derived from the current tvtv channels for the lineup (DB fallback). Filters channels/programmes by normalized callsigns.
    • Response: filtered XMLTV stream
    • Content-Type: application/xml; charset=utf-8
    • Caching: Cache-Control: public, max-age=900
    • Errors: 400 { error: 'invalid-lineup' } for bad lineup param; 404 { error: 'no-callsigns-for-lineup-tvtv', lineup } if none could be determined.
  • Refresh scheduler

    • Method/Path: POST /refresh
    • Body (JSON): { lineupId: string, source?: 'tvtv'|'tvpass'|'all' (default 'all'), full?: boolean }
    • Behavior: enqueues ingestion jobs. For tvtv: channels.refresh and, when full=true, grid.refreshSlice across next 7 days in 6-hour windows. For tvpass: tvpass.refreshSnapshot and, when full=true, tvpass.refreshPlaylist.
    • Feature flags: skipped when FEATURE_REFRESH=false (tvtv) or FEATURE_TVPASS=false (tvpass).
    • Response: { ok, scheduled, skipped, details: [{ type, status: 'scheduled'|'skipped', reason? }] }
    • Content-Type: application/json; charset=utf-8
  • UI (HTML dashboard; requires FEATURE_UI=true)

    • GET /ui
      • Response: Full HTML dashboard
      • Content-Type: text/html; charset=utf-8
    • GET /ui/partials/guide
      • Query:
        • lineupId (string; defaults to server DEFAULT_LINEUP_ID if omitted)
        • hours (int 1..24; default 3)
        • slot (int minutes 5..120; default 30)
        • start (datetime; optional ISO/local datetime)
        • q (string; optional search filter for channel call sign/number/station id)
        • limit (int 5..200; default 25)
      • Response: HTML fragment of the guide grid
      • Content-Type: text/html; charset=utf-8
      • Errors: 500 HTML snippet on render failure
    • GET /ui/partials/jobs-summary
      • Response: HTML fragment summarizing jobs by type/status with next run time per type
      • Content-Type: text/html; charset=utf-8
    • GET /ui/partials/health
      • Response: HTML fragment with DB health (ok/error)
      • Content-Type: text/html; charset=utf-8
    • POST /ui/run-jobs
      • Body (form or JSON): { max?: number } (default 10; range 1..100)
      • Behavior: manually leases and processes up to max queued jobs using the same handlers as the worker
      • Response: HTML fragment listing processed/failed job types
      • Content-Type: text/html; charset=utf-8
    • POST /ui/refresh
      • Body (JSON via HTMX): { lineupId: string, source?: 'tvtv'|'tvpass'|'all' (default 'all'), full?: boolean }
      • Behavior: proxies to POST /refresh and returns an HTML summary of scheduled/skipped jobs
      • Response status mirrors the underlying /refresh call
      • Content-Type: text/html; charset=utf-8
    • Admin actions from UI:
      • The UI provides an input for an admin token that is stored in localStorage and automatically attached as Authorization: Bearer <TOKEN> for UI elements marked with data-admin-auth="true".
      • You can trigger admin endpoints (e.g., POST /admin/maintenance/reset-stale) from the UI. Admin routes also accept x-admin-token.
  • Admin (requires token)

    • Auth: provide Authorization: Bearer <TOKEN> or x-admin-token: <TOKEN>. Configure token in server config.
    • POST /admin/maintenance/reset-stale
      • Effect: resets stale job leases. Returns JSON { reset } or a small HTML snippet when HX-Request: true.
    • POST /admin/jobs/force-process (query or body minutes=1..60, default 5)
      • Effect: pulls forward queued jobs whose next_run_at is within the window. Returns JSON { pulledForward, minutes } or HTML when HX-Request: true.
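As a quick illustration of the refresh and playlist caching behavior above (commands are sketches; adjust host, port, and lineup to your deployment):

```shell
# Enqueue a full tvtv + tvpass refresh for a lineup
curl -sS -X POST http://127.0.0.1:3000/refresh \
  -H 'Content-Type: application/json' \
  -d '{"lineupId":"USA-OTA80022","source":"all","full":true}'

# Fetch the cached playlist; force=1 bypasses the TTL cache and hits upstream
curl -sS "http://127.0.0.1:3000/tvpass/playlist.m3u?force=1" -o playlist.m3u

# Inspect cache metadata (fetchedAt, ageMinutes, etag, ...)
curl -sS http://127.0.0.1:3000/tvpass/playlist.meta
```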

Smoke Tests

Run these quick checks after starting the server to validate end-to-end wiring:

# Health
curl -sS http://127.0.0.1:3000/healthz

# Channels for default lineup (show first KB)
curl -sS http://127.0.0.1:3000/lineups/USA-OTA80022/channels | head -c 1500

# XMLTV (1 day, a few station IDs); stream just the first lines
curl -sS "http://127.0.0.1:3000/lineups/USA-OTA80022/xmltv?days=1&stationIds=35366,35856,100218" | head -n 30

Minimal UI

Served when FEATURE_UI=true (default). Visit http://localhost:3000/ui/.

  • Overview

    • Built with Franken UI (CDN) + HTMX; no build step required.
    • Sections: Jobs Summary (auto-refresh), Admin, Trigger Refresh, TV Guide grid, Health.
  • Jobs Summary

    • Auto-refreshes every 5s via GET /ui/partials/jobs-summary.
    • "Run queued" posts to POST /ui/run-jobs with max (1100; UI defaults to 5, server default is 10) and shows a result snippet.
  • Admin

    • Enter your admin token; it is stored locally in your browser.
    • UI attaches Authorization: Bearer <TOKEN> for actions marked with data-admin-auth="true".
    • "Reset stale leases" triggers POST /admin/maintenance/reset-stale and renders the result.
  • Trigger Refresh

    • Enqueues work via POST /ui/refresh (proxied to POST /refresh).
    • Fields: lineupId, source (all|tvtv|tvpass), full (grid slices for tvtv).
  • TV Guide grid

    • Loads via GET /ui/partials/guide with controls for lineup, search, hours, slot (min), start time, and channel limit.
    • Features: sticky first column, current-time indicator line, tooltips with title/subtitle/time/desc.
  • Health

    • GET /ui/partials/health shows DB status.
  • Metrics

    • Raw metrics endpoint: GET /metrics.json (JSON snapshot).
  • Toggle UI off by setting FEATURE_UI=false.
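Outside the UI, the same admin endpoints can be called directly (a sketch; ADMIN_TOKEN stands for whatever token you configured, and either auth header form works):

```shell
# Reset stale job leases (Bearer header form)
curl -sS -X POST http://127.0.0.1:3000/admin/maintenance/reset-stale \
  -H "Authorization: Bearer $ADMIN_TOKEN"

# Pull queued jobs forward by up to 10 minutes (x-admin-token header form)
curl -sS -X POST "http://127.0.0.1:3000/admin/jobs/force-process?minutes=10" \
  -H "x-admin-token: $ADMIN_TOKEN"
```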

Cloudflare and FlareSolverr

  • If requests to tvtv.us are blocked, enable FlareSolverr:
    • Set FLARESOLVERR_ENABLED=true
    • Run FlareSolverr (see Compose profile)
  • When the app detects a block, it will request cookies from FlareSolverr and retry once.
  • If FLARESOLVERR_AUTO_WRITE_ENV=true, new cookies are persisted to .env as UPSTREAM_COOKIES.

Manual cookie fetch (fallback):

npm run get:cookies
# follow the headful browser to solve challenges, then cookies are printed and can be copied to UPSTREAM_COOKIES

Troubleshooting

  • 403/Cloudflare blocks: enable FlareSolverr and verify it is reachable; ensure UA/Referer headers are realistic
  • Empty XMLTV: confirm ingestion ran and DB contains channels/programmes for the lineup
  • Postgres connection: check DATABASE_URL and that the db service is healthy in Compose
  • SQLite file path: default dev DB is ./data/dev.sqlite3; ensure the data/ directory exists (auto-created)

Build & Run

  • Typecheck: npm run typecheck
  • Build: npm run build
  • Start (compiled): npm start

Testing

  • Unit and integration tests (Vitest):
npm run test:unit         # unit tests only
npm run test:integration  # integration tests only (in-memory SQLite)
npm run test:run          # all tests once

Notes:

  • Tests use a dedicated setup (tests/setup.ts) that configures in-memory SQLite and runs migrations automatically.
  • Integration tests validate tvpass ingestion end-to-end and XMLTV generation.

Docker image includes a startup command that runs migrations before the server starts:

node dist/db/migrate.js && node dist/server.js

CI/CD and Deployment (Forgejo + Portainer)

  • Workflow file: .forgejo/workflows/ci.yml
  • Deploy helper: deploy-portainer.sh
  • Docker ignore: .dockerignore

What it does:

  • Test job: installs deps, type-checks (npm run typecheck), and runs tests (npm run test:run).
  • Build job: builds and pushes Docker image to your registry using Buildx/BuildKit.
  • Deploy jobs:
    • develop branch → Deploys to staging stack.
    • main branch → Deploys to production stack.
    • Deploy uses deploy-portainer.sh to trigger your Portainer stack webhook with image tag set to ${GITHUB_SHA}.

Configure these in .forgejo/workflows/ci.yml:

  • REGISTRY: e.g. registry.aqueous.network
  • IMAGE_NAME: e.g. tv-guide-xmltv
  • PORTAINER_URL: e.g. https://port.aqueous.network
  • PORTAINER_STACK_ID_STAGING / PORTAINER_STACK_ID_PRODUCTION: your Portainer stack IDs

Required Forgejo repository secrets:

  • PORTAINER_API_KEY: Portainer API key (for status checks and script)
  • PORTAINER_WEBHOOK_URL: Webhook URL for the target stack
  • Optional (only if your registry requires auth):
    • REGISTRY_USERNAME
    • REGISTRY_PASSWORD

Notes:

  • The workflow tags pushed images as: ${sha}, ${ref_name}, and latest.

  • If your registry requires auth, enable docker/login-action@v3 in the build job and set the above credentials as secrets.

  • The deploy script supports optional PORTAINER_ENDPOINT_ID (defaults to 1).

  • Example deploy step usage in CI:

    ./deploy-portainer.sh <STACK_ID> staging "$GITHUB_SHA"
    

Production Deployment Guide

This section summarizes practical ways to run in production. See the full list of configuration keys in the "Environment Variables" section above.

  • Option A: Docker Compose (single host)

    1. Prepare env: copy .env.example to .env; set DB_CLIENT=postgres, DATABASE_URL, DEFAULT_LINEUP_ID, feature flags (FEATURE_TVPASS, FEATURE_UI), and optionally FlareSolverr (FLARESOLVERR_ENABLED=true).
    2. Start services:
      docker compose up -d db api
      # Optional Cloudflare solver
      docker compose --profile flaresolverr up -d flaresolverr
      
    3. Queue initial 7-day work (scheduler):
      docker compose exec api node dist/scripts/schedule.js
      
    4. Start a background worker (continuously processes jobs):
      docker compose run -d --name worker --no-deps api node dist/scripts/worker.js
      
    5. Verify: /healthz, /metrics.json, UI at /ui/, XMLTV at /lineups/<LINEUP_ID>/xmltv?days=7.
  • Option B: Portainer Stack (recommended)

    • Create a stack using your built image. Include:
      • api service (this repo's image) with env from your prod .env/stack vars
      • worker service running node dist/scripts/worker.js
      • scheduler service that runs daily: sh -lc "while true; do node dist/scripts/schedule.js; sleep 86400; done"
      • Postgres service or managed DB; point DATABASE_URL accordingly
      • Optional flaresolverr service; set FLARESOLVERR_ENABLED=true
  • Option C: CI/CD (Forgejo → Registry → Portainer)

    • Set registry/Portainer secrets in Forgejo. Push to main (prod) or develop (staging). The workflow builds/pushes and triggers your Portainer stack via webhook.

Worker:

  worker:
    image: ${REGISTRY}/${IMAGE_NAME}:${API_IMAGE_TAG:-latest}
    restart: unless-stopped
    environment:
      NODE_ENV: production
      LOG_LEVEL: ${LOG_LEVEL:-info}
      DB_CLIENT: postgres
      DATABASE_URL: ${DATABASE_URL}
      DEFAULT_LINEUP_ID: ${DEFAULT_LINEUP_ID:-USA-OTA80022}
      TIMEZONE: ${TIMEZONE:-UTC}
      CONCURRENCY_MAX: ${CONCURRENCY_MAX:-20}
      CONCURRENCY_PER_LINEUP: ${CONCURRENCY_PER_LINEUP:-4}
      RATE_LIMIT_RPS: ${RATE_LIMIT_RPS:-20}
      UPSTREAM_UA: ${UPSTREAM_UA:-Mozilla/5.0 (X11; Linux x86_64; rv:137.0) Gecko/20100101 Firefox/137.0}
      UPSTREAM_ACCEPT_LANGUAGE: ${UPSTREAM_ACCEPT_LANGUAGE:-en-US,en;q=0.5}
      UPSTREAM_REFERER: ${UPSTREAM_REFERER:-https://tvtv.us}
      UPSTREAM_COOKIES: ${UPSTREAM_COOKIES:-}
      FLARESOLVERR_ENABLED: ${FLARESOLVERR_ENABLED:-false}
      FLARESOLVERR_URL: ${FLARESOLVERR_URL:-http://flaresolverr:8191}
      FLARESOLVERR_TIMEOUT_MS: ${FLARESOLVERR_TIMEOUT_MS:-90000}
      FEATURE_TVPASS: ${FEATURE_TVPASS:-true}
      TVPASS_LINEUP_ID: ${TVPASS_LINEUP_ID:-tvpass}
      TVPASS_SNAPSHOT_URL: ${TVPASS_SNAPSHOT_URL:-}
      TVPASS_PLAYLIST_URL: ${TVPASS_PLAYLIST_URL:-https://tvpass.org/playlist/m3u}
      TVPASS_PLAYLIST_TTL_MINUTES: ${TVPASS_PLAYLIST_TTL_MINUTES:-60}
    command: ["node","dist/scripts/worker.js"]

Scheduler (daily):

  scheduler:
    image: ${REGISTRY}/${IMAGE_NAME}:${API_IMAGE_TAG:-latest}
    restart: unless-stopped
    environment:
      # same env as worker/api
      DB_CLIENT: postgres
      DATABASE_URL: ${DATABASE_URL}
      DEFAULT_LINEUP_ID: ${DEFAULT_LINEUP_ID:-USA-OTA80022}
      FEATURE_TVPASS: ${FEATURE_TVPASS:-true}
    command: ["sh","-lc","while true; do node dist/scripts/schedule.js; sleep 86400; done"]

Optional FlareSolverr:

  flaresolverr:
    image: ghcr.io/flaresolverr/flaresolverr:latest
    environment:
      LOG_LEVEL: info
    restart: unless-stopped

Initial data population

  • tvtv: ensure UPSTREAM_COOKIES are valid or set FLARESOLVERR_ENABLED=true. Run scheduler once and keep the worker running.
  • tvpass: set TVPASS_SNAPSHOT_URL. The scheduler enqueues tvpass.refreshSnapshot and the worker ingests it. Playlist warm-up is handled by tvpass.refreshPlaylist on a TTL.

Operational tips

  • Tune CONCURRENCY_* and RATE_LIMIT_RPS if you see 429/5xx from tvtv.us.
  • Health: GET /healthz returns { ok, db, workerLagSec }. Metrics: GET /metrics.json.
  • Jellyfin XMLTV URL: http://<host>:3000/lineups/<LINEUP_ID>/xmltv?days=7 or, for tvpass, http://<host>:3000/xmltv/tvpass.

License

MIT