Date: 2026-03-31
Status: System functionally complete. These items need building.
Context: Full NBA analyst pipeline was built in a marathon session. Three blind analysts (Sonnet 4.6, Gemini 3.1 Pro, gpt-oss-120b) set lines from fundamentals, the comparison layer classifies convergence, and Opus makes the final verdict. All specs, rulings, scrapers, matchup cards, APIs, MCP server, council skill, and cron jobs are done.
Changed cron.schedule('30 18 * * *') to cron.schedule('30 17 * * *') in scheduler.ts line 604. Log message updated to "5:30 PM". EdgeClaw restarted.
Deleted 181 Python patch files from /tmp/. Confirmed clean.
What: Delete the temporary .py, .ts, and .md files in C:\Users\New\ that were used for patching.
Files like: fix-*.py, wire-*.py, add-*.py, improve-*.py, rebalance-*.py, etc.
These are build artifacts — the actual code is on the VPS.
Note: This is on boss's local PC — can't be done from VPS.
Create a Claude Code scheduled task that runs daily at ~6:15 PM ET. This IS the Opus verdict layer.
Go to claude.ai/code/scheduled or type /schedule in Claude Code.
You are the Opus Verdict Layer for the EdgeClaw sports betting pipeline.
You go FRESH every session. No memory of past verdicts.
STEP 1: Check if today has analyst picks.
Use the edgeclaw MCP tool `analyst_picks` for today's date and NBA.
If no picks exist, respond "No analyst pipeline output found for today. Nothing to rule on." and stop.
STEP 2: Read all data.
- Use `matchup_cards` to see today's matchup projections
- Use `analyst_picks` to see what Sonnet, Gemini, and gpt-oss said
- Use `edges` to see the mathematical edge scanner output
- Use `freshness` to verify data is current
STEP 3: For EVERY market (spread, total, ML) on EVERY game:
Make a final verdict:
- final_line: your line
- final_probability: your probability
- final_conviction: 1-5
- position_size: FULL / THREE_QUARTER / HALF / QUARTER / MINIMUM
- convergence_class: from comparison layer
- verdict_narrative: 2-3 sentences on why
STEP 4: Store your verdicts.
Use the EdgeClaw API at https://edge-assist.duckdns.org/api/ to store results.
POST to /api/store-verdict with the verdict data.
Rules:
- No passing. Every market gets a pick.
- You see everything: analyst reports + market prices + math. The analysts were blind — you are not.
- Position size based on convergence: A=FULL, B=THREE_QUARTER, C=HALF, D=QUARTER, E=MINIMUM
- Adjust based on your own assessment of the evidence quality
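For reference, the verdict record and the convergence-to-size rule above can be sketched in TypeScript. This is a minimal sketch: the field names mirror the prompt, but the type and function names are hypothetical, not from the codebase.

```typescript
// Hypothetical sketch of one verdict record plus the default sizing rule
// (A=FULL ... E=MINIMUM) from the prompt above. Names are illustrative.
type ConvergenceClass = "A" | "B" | "C" | "D" | "E";
type PositionSize = "FULL" | "THREE_QUARTER" | "HALF" | "QUARTER" | "MINIMUM";

interface OpusVerdict {
  final_line: number;
  final_probability: number;           // 0-1
  final_conviction: 1 | 2 | 3 | 4 | 5;
  position_size: PositionSize;
  convergence_class: ConvergenceClass; // from the comparison layer
  verdict_narrative: string;           // 2-3 sentences on why
}

// Default sizing by convergence class; per the rules, Opus may then adjust
// based on its own assessment of evidence quality.
const DEFAULT_SIZE: Record<ConvergenceClass, PositionSize> = {
  A: "FULL", B: "THREE_QUARTER", C: "HALF", D: "QUARTER", E: "MINIMUM",
};

function defaultPositionSize(cls: ConvergenceClass): PositionSize {
  return DEFAULT_SIZE[cls];
}
```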
Build a POST /api/store-verdict endpoint on EdgeClaw that accepts Opus verdict data and updates the analyst_picks table with verdict_line, verdict_probability, verdict_conviction, convergence_class, and position_size.
File to modify: /home/ubuntu/edgeclaw/src/pipeline/edgeclaw-api.ts
app.post('/api/store-verdict', async (c) => {
  const body = await c.req.json();
  // Basic validation: the verdict layer must send a non-empty verdicts array.
  if (!Array.isArray(body?.verdicts) || body.verdicts.length === 0) {
    return c.json({ success: false, error: 'verdicts array required' }, 400);
  }
  const db = getDb();
  const stmt = db.prepare(`
    UPDATE analyst_picks SET
      verdict_line = ?, verdict_probability = ?, verdict_conviction = ?,
      convergence_class = ?, position_size = ?
    WHERE date = ? AND sport = ? AND game_id = ? AND market_type = ?
  `);
  for (const v of body.verdicts) {
    // Parameter order matches the SET then WHERE placeholders above.
    stmt.run(v.final_line, v.final_probability, v.final_conviction,
             v.convergence_class, v.position_size,
             v.date, v.sport, v.game_id, v.market_type);
  }
  return c.json({ success: true, updated: body.verdicts.length });
});
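A caller outside the VPS (e.g. the Opus scheduled task) could hit this endpoint as sketched below. This assumes Node 18+ global fetch; `buildPayload` and `storeVerdicts` are hypothetical names, and the field values shown are illustrative.

```typescript
// Hypothetical client sketch for POST /api/store-verdict. The payload fields
// mirror what the handler reads; all names below are illustrative.
type Verdict = {
  date: string; sport: string; game_id: string; market_type: string;
  final_line: number; final_probability: number; final_conviction: number;
  convergence_class: string; position_size: string;
};

function buildPayload(verdicts: Verdict[]): string {
  return JSON.stringify({ verdicts });
}

async function storeVerdicts(baseUrl: string, verdicts: Verdict[]) {
  // Node 18+ ships a global fetch; the base URL comes from this document.
  const res = await fetch(`${baseUrl}/api/store-verdict`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildPayload(verdicts),
  });
  return res.json(); // the handler responds with { success, updated }
}
```

Usage would be `await storeVerdicts("https://edge-assist.duckdns.org", verdicts)` once the endpoint exists.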
Fire the full pipeline manually for one night to verify output quality.
# On VPS or via MCP:
curl -X POST https://edge-assist.duckdns.org/api/run-full-pipeline/nba
# Or via MCP tool:
run_full_pipeline
Cost: ~$0.10 for all 3 analysts on a full slate.
New tab on /data-status page alongside "Desk Coverage", "Unrouted Kalshi", "All Sources".
After 1-2 weeks of graded picks (need data first).
Script that generates personalized calibration digests for each analyst + Opus, per the CAPS-001 calibration spec at /home/ubuntu/edgeclaw/docs/CALIBRATION-SPEC.md.
- When: after 4-6 weeks of baseline data (no corrections during the baseline period, per spec)
- File to create: src/pipeline/calibration-digest-generator.ts
- Reads: analyst_picks table (settled picks with Brier scores)
- Writes: /home/ubuntu/edgeclaw/data/calibration/digests/{analyst}-D{version}-{date}.md

The research_intel table exists and the Step 2 section on the pipeline page is wired to read from it, but it currently shows "No alerts" because research hasn't run yet.
Run research manually:
curl -X POST https://edge-assist.duckdns.org/api/run-research/nba
Cost: ~$0.50 for Grok x_search queries across all games.
After running, refresh pipeline page — Step 2 should show findings per game.
Replicate the NBA pipeline for NHL, NCAAB, MLB, Soccer, MMA.
Template: /home/ubuntu/edgeclaw/docs/panel-prompt-data-audit.md

| File | What It Does |
|---|---|
| src/pipeline/data/scrapers/scrape-nba-official.ts | Consolidated NBA.com API scraper (4 endpoints) |
| src/pipeline/data/scrapers/scrape-nba-advanced.ts | 5 new NBA.com endpoints (splits, clutch, hustle) |
| src/pipeline/data/scrapers/scrape-nba-injuries.ts | ESPN roster API injury scraper |
| src/pipeline/data/nba-matchup-cards.ts | Matchup card calculator (league-anchored + injury + rest) |
| src/pipeline/analyst-briefing.ts | Generates blind analyst prompt (no prices) |
| src/pipeline/analyst-runner.ts | Calls 3 models in parallel, parses JSON, stores picks |
| src/pipeline/analyst-tracker.ts | DB schema, insert/query picks, Brier scores |
| src/pipeline/comparison-layer.ts | Convergence A-E, composite probability |
| src/pipeline/settle-picks.ts | Match outcomes to picks, compute Brier |
| src/pipeline/research-module.ts | Grok x_search + Exa for pre-game intel |
| src/pipeline/pipeline-page.ts | Full pipeline visualization per game |
| src/pipeline/game-data-renderer.ts | Comprehensive stat display (all tables + deltas) |
| src/pipeline/edgeclaw-api.ts | All API endpoints |
| src/cron/scheduler.ts | Cron jobs (11AM data, 5PM research, 5:30PM analysts, 3AM settlement) |
| Document | What It Governs |
|---|---|
| analyst-briefing-ruling.md | Blind analysis protocol, model roster, no-pass rule, conviction scale |
| analyst-learning-ruling.md | Calibration digests, per-analyst personalized, monthly cadence |
| CALIBRATION-SPEC.md | Full calibration spec (CAPS-001) — Brier, ECE, thresholds, formulas |
| council-spec.md | Council system — 5 advisors + peer review |
| research-spec.md | Research pipeline — Grok + Exa, timing, query templates |
| nba-matchup-card-spec.md | All metrics, deltas, data sources, build priority |
| panel-prompt-data-audit.md | Reusable template for auditing any sport desk |
| Item | Value |
|---|---|
| Analyst 1 | Claude Sonnet 4.6 (claude-sonnet-4-6) — Anthropic |
| Analyst 2 | Gemini 3.1 Pro Preview (google/gemini-3.1-pro-preview) — OpenRouter |
| Analyst 3 | gpt-oss-120b (openai/gpt-oss-120b) — OpenRouter |
| Researcher | Grok 4.1 Fast (x-ai/grok-4.1-fast) with search plugin — OpenRouter |
| Verdict | Opus 4.6 — Claude Code scheduled task (fresh, no resume) |
| Council | Opus + Sonnet (native) + Gemini + Grok 4.20 + gpt-oss (OpenRouter) |
| NBA.com Proxy | gg-cli-proxy.ecocellga.workers.dev/nba-stats/ |
| OpenRouter Key | In VPS .env: OPENROUTER_API_KEY |
| Anthropic Key | In VPS .env: ANTHROPIC_API_KEY |
| Time | What |
|---|---|
| 11:00 AM | Data scrapers (team stats, ratings, shooting, all NBA.com endpoints) |
| 5:00 PM | Research (injuries refresh + Grok x_search for beat reporter intel) |
| 5:30 PM | Analyst pipeline (scrapers refresh → matchup cards → 3 analysts → comparison) |
| ~6:15 PM | Opus verdict (Claude Code scheduled task — NOT BUILT YET) |
| 9:30 PM | Late game matchup card refresh |
| 3:00 AM | Settlement (match outcomes → Brier scores) |
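The 3:00 AM settlement step grades picks with Brier scores. For a single settled pick this is the standard squared error between the stated probability and the outcome; the helper below is illustrative (how settle-picks.ts aggregates scores across picks is not shown here).

```typescript
// Brier score for one settled pick: (p - outcome)^2, where p is the stated
// probability and outcome is 1 if the pick hit, 0 if not. Lower is better;
// an always-0.5 forecaster scores 0.25 on every pick.
function brierScore(p: number, outcome: 0 | 1): number {
  return (p - outcome) ** 2;
}
// Example: a 0.57 pick that hits scores (0.57 - 1)^2 = 0.1849.
```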