Future Additions Tracker
Consolidated from all memory files, CLAUDE.md, and TODO-PRIORITY.md on Mar 14, 2026.
Everything in one place so nothing gets lost.
SHORT TERM (Do Now / Next 1-2 Weeks)
1. Fix EdgeClaw Chat (BLOCKER)
- Source: TODO-PRIORITY.md
- getMessages() in store.ts line 84 loads the OLDEST messages instead of the most recent
- Add context windowing for only last N relevant messages
- Clean memory pollution (detect/clean contradictory entries)
- Remove [COREF] and [COREF-DEBUG] debug logs from agent.ts
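A minimal sketch of the windowing fix, assuming messages carry a numeric timestamp (the actual column/field names in store.ts may differ): select the newest N, then reverse so the chat still renders oldest-first.

```typescript
interface Message {
  id: number;
  ts: number; // unix ms timestamp
  text: string;
}

// Hypothetical helper: keep only the most recent N messages,
// returned in chronological order for rendering.
function windowMessages(all: Message[], limit: number): Message[] {
  return [...all]
    .sort((a, b) => b.ts - a.ts) // newest first (the bug: old code kept the oldest)
    .slice(0, limit)             // context window of the last N
    .reverse();                  // back to oldest-first for display
}
```

In SQL terms the same fix is `ORDER BY ts DESC LIMIT ?` plus a reverse in JS, instead of `ORDER BY ts ASC LIMIT ?`.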
2. Pipeline Code Fixes (5 Issues)
- Source: memory/pipeline-test-results.md
- a) Wire up prompts.ts — 680 lines of better prompts sitting unused. Each pipeline file has inferior inline versions.
- b) Fix settlement horizon hardcode — settlement.ts line 215 always writes to intraday folder. Use actual prediction horizon.
- c) Fix verdict deliberation bug — verdict.ts line 250: market === 'memory' should be judgeType === 'memory'
- d) Extract duplicate parseJsonResponse() from 7 files into shared util
- e) Remove unused callGemini import from llm-router.ts line 10
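For item d, a sketch of the shared util (the name comes from this tracker; the exact signature is an assumption): strip markdown fences, then grab the first JSON object even if the model added surrounding prose.

```typescript
// Hypothetical shared util replacing the seven duplicated copies.
// Handles fenced answers and leading/trailing prose around the JSON.
export function parseJsonResponse<T = unknown>(raw: string): T {
  // Drop markdown code fences (``` or ```json) if the model wrapped its answer
  const unfenced = raw.replace(/`{3}(?:json)?/gi, "").trim();
  // Fall back to the first {...} span if there is surrounding prose
  const start = unfenced.indexOf("{");
  const end = unfenced.lastIndexOf("}");
  if (start === -1 || end === -1 || end < start) {
    throw new Error("No JSON object found in LLM response");
  }
  return JSON.parse(unfenced.slice(start, end + 1)) as T;
}
```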
3. Weather Monitor — Rebuild on EdgeClaw
- Source: memory/weather-lock-in-analysis.md
- Old Vultr monitor broke Mar 9 (down to 1-2 snapshots/day)
- NOT fixing old monitor — building fresh on EdgeClaw
- Bring over backtest data (weather_monitor.json + market_depth.json) from Vultr
- Key goal: nail forecast modeling (predict high earlier, not just react after drop)
- Add city-specific peak time filters to eliminate false signals
- Investigate recovery patterns to filter them out
4. Gemini Relay Port on Vultr
- Source: memory/build-plan.md
- Relay deployed at /opt/gemini-relay/relay.js on Vultr (port 3100)
- Need to open port 3100 in Vultr cloud firewall for Oracle IP 64.181.198.130
- Bypasses Google's geo-block on Oracle Cloud IPs
5. Review validator.ts
- Source: memory/pipeline-test-results.md (Issue 6)
- The other 13 new pipeline files were code-reviewed; validator.ts was tested but not yet read in detail.
6. Research Briefs — Boss Final Approval
- Source: memory/research-briefs.md
- Opus reviewed and marked changes. Pending Boss sign-off before build.
7. Fix Pinnacle/FanDuel Distinction
- Source: memory/research-briefs.md line 56
- Needs correction in the briefs doc.
7b. Probability Curve Phase 2 & 3 (DATA-GATED)
- Source: docs/phase2-phase3-timeline.md
- Phase 1 DONE (Mar 19, 2026): Normal CDF, de-vig, sigma fitting, settlement rules, Kalshi safeguards, platform pricing, SHADOW_MODE
- Phase 2 — Build after 1-2 weeks of live pipeline runs (10+ games with settlement results):
- Poisson distributions for NHL/soccer totals and discrete player props
- DNP probability estimator from historical injury data
- Fat-tail sigma boost 15% beyond |z| > 2.0 (stub exists in probability-curve.ts)
- Negative binomial for overdispersed count props (assists, rebounds)
- Phase 3 — Build after 4+ weeks (50+ settled predictions with Brier scores):
- Brier score tracking per distribution type, per sport, per z-score range
- Automatic sigma calibration from backtesting
- Per-platform edge threshold tuning based on observed win rates
- Reminder: Edge should alert Boss at the 2-week and 4-week marks after the first live Sports desk run
- Full details: /home/ubuntu/edgeclaw/docs/phase2-phase3-timeline.md
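The Phase 2 Poisson piece is small in code terms. A sketch of the core calculation (the real version would presumably live alongside probability-curve.ts; the fitted rate lambda is an input from elsewhere): for a half-point line like 5.5 goals, P(over) is 1 - CDF(floor(line)).

```typescript
// Poisson pmf, computed in log space for numerical stability.
function poissonPmf(k: number, lambda: number): number {
  let logFact = 0;
  for (let i = 2; i <= k; i++) logFact += Math.log(i);
  return Math.exp(k * Math.log(lambda) - lambda - logFact);
}

// P(total > line) for a discrete total with rate lambda (goals, props, etc.).
function probOver(line: number, lambda: number): number {
  let cdf = 0;
  for (let k = 0; k <= Math.floor(line); k++) cdf += poissonPmf(k, lambda);
  return 1 - cdf;
}
```

Negative binomial for overdispersed props would follow the same shape, with an extra dispersion parameter.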
MEDIUM TERM (Weeks 2-8)
8. Multi-User Memory Isolation (HARD — SaaS Blocker)
- Source: TODO-PRIORITY.md, CLAUDE.md, launch-roadmap.md
- 8 database tables need user_id column: cold_tags, memory_blocks, entities, claims, episodes, tag_embeddings, retrieval_log, tag_links
- Every query function needs WHERE user_id clauses
- Requires SQLite schema migration + data migration to tag existing data
- File: src/memory/tiered.ts (~2800 lines)
- Last major multi-user gap
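The schema half of this migration can be generated mechanically. A sketch (table list from this tracker; since SQLite cannot add a NOT NULL column without a default, existing rows get a sentinel owner, here assumed to be 'boss', and each table gets an index for the new WHERE clauses):

```typescript
const TABLES = [
  "cold_tags", "memory_blocks", "entities", "claims",
  "episodes", "tag_embeddings", "retrieval_log", "tag_links",
];

// Emit one migration script: add user_id with a default so all existing
// data is tagged to the original single user, then index it.
function buildUserIdMigration(defaultUser = "boss"): string[] {
  return TABLES.flatMap((t) => [
    `ALTER TABLE ${t} ADD COLUMN user_id TEXT NOT NULL DEFAULT '${defaultUser}';`,
    `CREATE INDEX IF NOT EXISTS idx_${t}_user_id ON ${t}(user_id);`,
  ]);
}
```

The harder half, adding WHERE user_id clauses to every query function in tiered.ts, still has to be done by hand.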
9. EdgeClaw Chat Brain Restructure (84% Cost Cut)
- Source: memory/edgeclaw-chat-brain.md
- Replace Sonnet with GPT-4.1-mini for tool-use (97% cheaper)
- Replace Haiku with Qwen 3.5 9B local for coreference (free)
- Replace Haiku with GPT-4.1-mini for file content filter (90% cheaper)
- Keep Grok 4.1 Fast for chat (has built-in web/X search)
- Files: router.ts, agent.ts, client.ts, config.ts
- Status: APPROVED, not built
10. Architecture Foundation (Panel-Mandated, Pre-Build)
- Source: spec-panel-report.md Sections 3+4, Boss-approved Mar 15
- a) Centralized API client — shared module for all desks (rate limiting, caching, retries, logging). Build once, every desk uses it.
- b) SQLite WAL mode + write queue — one-line DB setting + write queue so 12 desks don't block each other. 5 lines of config.
- c) Backtest framework BASIC — simple replay engine in Phase 2-3. Loop old data through signals, log what would have happened. Catches bad weight assumptions early.
- d) Data freshness monitoring — heartbeat table, check every 5 min, alert Boss on Telegram if stale, flag signals as "degraded confidence."
- e) Graceful degradation — each desk defines fallback behavior (last known value, backup source, suppress signals). Pinnacle→Odds API fallback + alert is the model example.
- f) Backtest framework FULL — walk-forward testing, Monte Carlo, automated weight optimization. Phase 5, after real data flowing from multiple desks.
- g) Human analyst tracking (cross-desk) — track every public human prediction we can find (earnings, oil, crops, GDP, CPI, rates, sports picks, weather, forex bank forecasts). Log prediction vs actual outcome. Build accuracy scorecard per analyst, per sector. Use to identify: who to follow, who to fade (contrarian trade), consensus vs individual accuracy. Same Brier scoring system as AI models. Applies to ALL desks.
- Note: a/b/d/e are small (hours-days each). c is medium (days). f is the big lift (weeks). g is ongoing collection from day one.
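Item b really is tiny. A sketch of the pragmas plus a minimal promise-chain write queue (pragma choices beyond journal_mode, and all names here, are assumptions, not decided in the spec):

```typescript
// The "5 lines of config": run once at startup on the shared connection.
const WAL_PRAGMAS = [
  "PRAGMA journal_mode = WAL;",   // readers no longer block the writer
  "PRAGMA synchronous = NORMAL;", // safe under WAL, cheaper fsync profile
  "PRAGMA busy_timeout = 5000;",  // wait instead of throwing SQLITE_BUSY
];

// Minimal write queue: serializes writes from all desks onto one chain
// so 12 desks never contend for SQLite's single write lock.
class WriteQueue {
  private tail: Promise<unknown> = Promise.resolve();
  enqueue<T>(write: () => Promise<T>): Promise<T> {
    const next = this.tail.then(write, write); // run after the previous write settles
    this.tail = next.catch(() => undefined);   // keep the chain alive on errors
    return next;
  }
}
```

Each desk calls `queue.enqueue(() => db.run(...))` instead of writing directly.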
10b. Spec Additions — Include From Start (NOT deferred)
- Source: spec-panel-report.md Section 4, Boss-approved Mar 15. Boss rule: if it's free and simple, add it now, don't wait.
- a) Weather data (NOAA API, free) — Sports, Futures, MLB, TA-Futures
- b) Options implied volatility (IV) — via Polygon.io (key added). Options, TA-Stocks, TA-Futures, Futures
- c) Disaggregated COT data (CFTC, free) — Forex, Futures, TA-Futures
- d) Cross-platform wallet event matching — Wallet Intelligence, Crypto
- e) OpEx/triple witching calendar (known dates) — Options, Stocks, TA-Stocks, Futures
- f) Umpire/referee tendency data (free public sources) — Sports, MLB, UFC
- g) Central bank speech sentiment (Fed/ECB/BOJ websites, free) — Forex, Stocks, Futures
- h) Exchange margin requirement changes (CME publishes, free) — Futures, TA-Futures, Crypto
- i) Consensus analyst estimates (FRED, Estimize free tier, USDA, EIA) — Futures, Stocks, MLB. Feeds into human analyst tracking (10g).
- j) Funding rate / liquidation maps (Binance API, free) — Crypto, TA-Crypto, Wallet Intelligence
- k) Baltic Dry Index / freight rates (free, 1 data point/day) — Futures
- l) Sector rotation data (SPDR ETFs, free via Polygon) — Stocks, TA-Stocks
- m) Geopolitical event tracking via news NLP (free news feeds) — Futures, Forex
- Deferred to v2: MEV data (complex), injury leak monitoring (no reliable free source), DEX liquidity depth (complex)
11. Research Pipeline V1 — Build All At Once
- Source: memory/build-plan.md, spec-panel-report.md, Boss decision Mar 15: NO phasing, build everything in V1 simultaneously
- Boss rule: Everything in V1 gets built at the same time. No phased rollout.
- Shared infrastructure: Centralized API client, Polygon.io, SQLite WAL + write queue, data freshness monitoring, graceful degradation, economic calendar (FRED), basic backtest, Human Intelligence desk, S&R Zone Engine (10 methods), Signal Tracking (23 variants), Volatility Regime Classifier
- All 16 desks: Crypto, Wallet Intelligence, Forex, Options, Stocks, Futures (ES/CL/GC/NQ), Sports, TA-Crypto, TA-Stocks, TA-Futures, MLB, UFC, Human Intelligence, Weather, DFS, Player Props
- All A+ upgrades and spec additions (10b) baked into each desk from day one
- Already done: DB schema (db.ts), LLM router (llm-router.ts), Filesystem (filesystem.ts), Polygon key in .env
- Still needed: Kalshi/Polymarket market adapters, FRED economic data collector, VIX/volatility data, SEC EDGAR pipeline, Odds scraper (Pinnacle first, Odds API fallback)
- Decision needed: Are Gemini judges called via Telegram relay (Boss forwards prompts) or direct Gemini API?
- Note: 424B2 NLP parsing deferred to V2
13. Image Generation Enhancement
- Source: MEMORY.md
- Prompt enhancement for realism (auto-add "photorealistic, film grain" etc.)
- Sharp post-processing to reduce AI look
- Boss agreed to Option 1+2 approach
- Status: NOT YET DONE
14. Cloudflare Proxy Auth + Gemini Subscription Routing
- Source: CLAUDE.md, security audit, Boss request Mar 15
- gg-cli-proxy and edge-proxy Cloudflare workers have no auth
- Can't modify from this server — needs Cloudflare dashboard access
- NEW: Route Gemini subscription (oauth-personal) through Cloudflare proxy to bypass Oracle geo-block
- Google account ([email protected]) is already logged in, OAuth creds cached at ~/.gemini/oauth_creds.json
- Currently gets 403 from Google because Oracle IPs are blocked
- Fix: update Cloudflare worker to also proxy the cloudcode-pa.googleapis.com OAuth endpoint
- Goal: use Gemini subscription for all bot calls (free), fall back to OpenRouter if rate limited
- Track cost savings: compare subscription usage vs API pay-as-you-go
15. Wire Up Dead Memory Functions
- Source: CLAUDE.md
- Skill memory (storeSkill/findSkills) — never wired up
- reflectOnFailure — never wired up
- storeAntiPattern — never wired up
15b. Time-Based Memory Tiers (PANEL APPROVED — Mar 16, 2026)
- Source: 5-model panel + Gemini synthesis + Opus final ruling (A- grade)
- Ruling: /home/ubuntu/edgeclaw/results/panel-results/memory-tiers-final-ruling.md
- Replace usage-only decay with hybrid time + usage tiers
- Five tiers: Evergreen (tier 0, never compressed) → Hot (days 1-3) → Warm (days 4-10) → Cool (days 11-20) → Cold (day 21+)
- 2 profiles: standard (3/10/21 days) and fast (2/5/14 days). Per-desk configs deferred until desks are running
- Token budget: Evergreen 30%, Hot 35%, Warm 20%, Cool 10%, Cold 5%
- Single table with tier column (NOT separate tables). 1 new table (compression_log). ~380 lines total across 3 phases.
- Phase 1 (schema + scoring, ~80 lines) → Phase 2 (tier transitions + Haiku compression, ~200 lines) → Phase 3 (deep compression + desk profiles, ~100 lines)
- KG is separate and untiered — permanent ground truth, facts don't decay. Design COOL-to-COLD compression to output graph triples later as seed data.
- Compression keeps originals forever (~22MB/year, negligible). compressed_note used for retrieval only.
- Corrections are evergreen (tier 0) with supersession tracking. Cold memories are read-only inject, never promoted back.
- Two staggered cron jobs: 11:30 PM (transitions + maintenance) and midnight (Haiku compression batch)
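The token-budget split above is a straight proportional allocation. A sketch (tier names and percentages come from the ruling; the function name is hypothetical, and this minimal version does not redistribute the share of an empty tier):

```typescript
// Budget shares from the panel ruling: Evergreen 30%, Hot 35%,
// Warm 20%, Cool 10%, Cold 5%.
const TIER_BUDGET: Record<string, number> = {
  evergreen: 0.30, hot: 0.35, warm: 0.20, cool: 0.10, cold: 0.05,
};

// Split a context budget (in tokens) across tiers.
// Math.round keeps the split stable; drift of a few tokens is acceptable.
function allocateBudget(totalTokens: number): Record<string, number> {
  const out: Record<string, number> = {};
  for (const [tier, share] of Object.entries(TIER_BUDGET)) {
    out[tier] = Math.round(totalTokens * share);
  }
  return out;
}
```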
15c. Weekly Post-Mortem Review System (PANEL APPROVED — Mar 16, 2026)
- Source: 6-model panel + Opus final ruling (A grade)
- Ruling: /home/ubuntu/edgeclaw/results/panel-results/postmortem-final-ruling.md
- Replace per-failure reactive LLM analysis with weekly batch reviewing ALL outcomes (winners AND losers) across all 19 desks
- Three buckets: CONFIRMED (take action) → WATCH LIST (carry forward, 3-week max) → DISMISSED (normal variance)
- Real-time classifyWin() added for winners alongside existing classifyFailure(). edge_alignment: did the stated edge drive the outcome?
- Cross-desk pattern detection. Watch list = informational only, zero impact on betting.
- Haiku 4.5 for weekly LLM calls (~$3/year). Coaching updates scoped by domain + market type with 2-week regression check.
- Cron Sunday 2:00 AM ET, 10-prediction min. 4-week calibration mode before CONFIRMED findings enabled.
- Build: 6 phases, ~8 days, ~500-700 lines, 4 new tables.
15d. Memory Visualization (Obsidian Vault Style)
- Source: Gigabrain project evaluation (Mar 15, 2026)
- Visual browsing of what the AI knows — entities, beliefs, connections between memories
- Useful for debugging memory issues (why did it forget something? why pulling wrong context?)
- Gigabrain does this with Obsidian vault export. Could build similar for Edge's tiered memory system.
- Not urgent — quality-of-life improvement for memory debugging
LONG TERM (Months 2-6+)
15. SaaS Launch — Full Roadmap
- Source: memory/launch-roadmap.md
- Phase 1 (Weeks 1-3): Multi-tenant foundation + auth + frontend shell
- Phase 2 (Weeks 3-6): Usage metering + Stripe billing + credit ledger
- Phase 3 (Weeks 6-9): SQLite → PostgreSQL migration + chat UI + usage dashboard
- Phase 4 (Weeks 9-12): Security hardening + legal (LLC, ToS) + monitoring + beta test
- Phase 5 (Week 12+): Launch ($49/mo Starter, $149/mo Pro)
- Phase 6 (Month 3+): Research pipeline access, team accounts, API access, mobile app
16. Research Pipeline — Phase 2 (Finance Desks)
- Source: memory/build-plan.md
- Options, Stocks, Futures, Crypto, Forex desks
- Clone Phase 1 architecture + desk-specific collectors
17. Research Pipeline — Phase 3 (Everything Else)
- Source: memory/build-plan.md
- Weather, Soccer, MLB, UFC, research-only desks
- Code pipeline update to new audition-winner flow
- Self-improving prompts system (min 20 settled predictions, max 1 proposal/desk/week)
- Advanced data sources: SEC 424B2 NLP parser, satellite data (Sentinel-5P NO2), ship AIS tracking, GridStatus.io
18. Research Pipeline — Phase 4 (Dashboard & Monitoring)
- Source: memory/build-plan.md
- Performance dashboard
- Automated alerts
- Wire up @OpusGodBot for weekly Sunday review
19. PostgreSQL Migration
- Source: memory/launch-roadmap.md
- FTS5 → PostgreSQL full-text search (query syntax differs)
- Add pgvector extension for embeddings
- Use Drizzle ORM + pgloader for data migration
- Risk: network latency (1-5ms vs local SQLite)
20. Edge Trading Bot — Live Trading Path
- Source: memory/edge-trading-expansion.md
- Phase 1: Prediction tracking only (current — build accuracy history)
- Phase 2: Paper trading (simulated bankroll, Kelly sizing, P&L tracking)
- Phase 3: Live trading (bankroll determined by Phase 2 track record)
- Add all sports: NHL, NBA, NFL, MLB, Tennis, Golf, MMA/UFC, Soccer, Boxing, NASCAR/F1
- Dual-platform execution: Kalshi + Polymarket
- DraftKings DFS analysis
- Research needed: Polymarket sports coverage, DK salary data source, DK contest entry API
21. Hardware Upgrades + Local Models
- Source: memory/boss-vision.md, memory/pipeline-auditions.md
- Current: 45GB disk, 87% used — can't fit 30B+ models
- Ollama stuck/hung, needs sudo to kill
- Goal: run 70B+ models locally
- Qwen 3 14B is universal backup (~10GB VRAM)
22. Forex Institutional-Grade Data (When Budget Allows)
- Source: memory/forex-data-collection.md
- Tier 1: Options-derived intelligence (IV, risk reversal, gamma) — ~$200-500/mo
- Tier 2: Interbank market depth (EBS/Reuters, Level 2/3) — ~$500+/mo
- Tier 3: Alternative data (RavenPack NLP, SWIFT flows, CLS) — ~$1000+/mo
- Tier 4: Advanced analytics (GARCH, PCA, DCC-GARCH) — free but compute-heavy, could run on EdgeClaw's 21GB RAM
23. Agency / Business Arm
- Source: memory/boss-vision.md, memory/agency-gbp-playbook.md, memory/research-upgrades.md
- GBP audit playbook ready (8-prompt stack, Day 1 revenue)
- Programmatic SEO service
- AI audit service
- Base44 client app builder
- Growth marketing engine
- GEO (Generative Entity Optimization)
- Agency agent templates
- Reference code: /home/ubuntu/agency-agents-reference/
- Priority: AFTER pipeline, not now
24. V2 Deferred Features (Add Back After Core Is Validated)
- Source: spec-panel-report.md Section 5, Boss-approved Mar 15
- Why deferred: Each needs either expensive data, heavy compute, or infrastructure that doesn't exist yet. Build core first, prove it works, then layer these on.
- When to add back: After the desk they belong to has been running for 4+ weeks with real data and validated results.
- a) Wallet similarity/clustering — Wallet Intelligence. Why wait: computationally expensive, need wallet data flowing first to cluster against. When: after 4+ weeks of wallet tracking data collected.
- b) HMM/PCA statistical models — TA-Futures, TA-Stocks. Why wait: no proven edge over simpler methods. When: after simpler signals are validated and you want to squeeze out more alpha.
- c) Full Market Profile / TPO analysis — TA-Futures. Why wait: needs tick data feed. When: after Polygon or similar tick source is justified by v1 results. Use Volume Profile from 1-min candles in the meantime.
- d) Dual-track RTH/ETH full indicator pipeline — TA-Futures. Why wait: doubles compute for unproven benefit. When: after v1 shows RTH-specific metrics (VWAP, IB) add value. Run full session for now, just add RTH overlays.
- e) Dark pool real-time data — TA-Stocks, Stocks. Why wait: not available free. When: FINRA delayed data is fine for v1. Upgrade if delayed data proves too slow.
- f) Live fight tracking (1-min intervals) — UFC. Why wait: UFC actively blocks this, events are only every 2 weeks. When: if UFC opens an API or a reliable source appears.
- g) GEX dealer positioning model — Options. Why wait: oversimplified assumptions produce bad signals. When: after IV surface is solid and you have enough options data to validate the model.
- h) Tier 3-4 analytics (all specs) — Multiple. Why wait: advanced features on top of unproven core. When: after Tier 1-2 analytics are running and producing validated results.
- i) MEV data (miner extractable value) — Crypto, Wallet Intelligence. Why wait: complex to interpret, niche signal. When: after crypto desk is running and you want deeper on-chain edge.
- j) Injury leak monitoring — UFC. Why wait: no reliable free source, would need social media scraping. When: if a structured data source appears.
- k) DEX liquidity depth (Uniswap V3 ticks) — Crypto, TA-Crypto. Why wait: complex data, heavy to store. When: after crypto desk is running and order book analysis proves valuable.
25a. Kronos Price Prediction Model — Shadow Signal (LOW PRIORITY)
- Source: Boss evaluation of Awesome-finance-skills repo (Mar 27, 2026)
- What: Kronos is a 102M param transformer trained on 12B candlestick records from 45 exchanges. Predicts full OHLCV candle shapes. MIT license, free, runs locally.
- Purpose: Run alongside our primary sharp-line anchors (Pinnacle, Deribit options-implied, OANDA) as a secondary confirmation signal. NOT a replacement for anything. Shadow-log predictions, check periodically if they correlate with our edge calls.
- Applicable desks: Stocks ("AAPL above $X"), Crypto ("BTC above $X"), Futures (commodity price contracts), Forex-Macro
- How it would work: Generate 100 Monte Carlo candle paths → count paths where close > target → use as independent probability estimate → compare to our primary anchors → log agreement/disagreement
- Hardware: Kronos-small (25M params, ~100MB) runs on our ARM VPS CPU. 5-10 sec per prediction.
- Red flags: RankIC of 0.025 (marginal), backtested only on Chinese A-shares, best model (Large) is closed-source, no real-money track record
- When: After Stocks and Crypto desks have live price feeds flowing. Needs OHLCV candle data as input.
- Boss note: "Don't pay attention to it, just let it run in the background, check on it every once in a while." Not a heavily discussed metric. Evaluate after collecting a few weeks of shadow data.
- Repo: github.com/shiyu-coder/Kronos (11.3k stars), HuggingFace: NeoQuasar/Kronos-base, Paper: arxiv:2508.02739 (AAAI 2026)
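The Monte Carlo step in the flow above is independent of Kronos itself. A toy version with a deterministic RNG and a plain random-walk path generator standing in for Kronos's sampled candles (everything here is illustrative; the real input would be Kronos's own sampled paths):

```typescript
// Deterministic LCG so shadow-log runs are reproducible.
function makeRng(seed: number): () => number {
  let s = seed >>> 0;
  return () => {
    s = (s * 1664525 + 1013904223) >>> 0;
    return s / 4294967296;
  };
}

// Stand-in for "sample one Kronos path": a random walk with
// toy +/-1% steps. Kronos would return a full OHLCV candle path here.
function samplePathClose(start: number, steps: number, rng: () => number): number {
  let price = start;
  for (let i = 0; i < steps; i++) {
    price *= 1 + (rng() - 0.5) * 0.02;
  }
  return price;
}

// P(close > target) = fraction of sampled paths finishing above target,
// used as an independent probability estimate to compare against anchors.
function probAboveTarget(start: number, target: number, paths = 100, seed = 42): number {
  const rng = makeRng(seed);
  let above = 0;
  for (let i = 0; i < paths; i++) {
    if (samplePathClose(start, 24, rng) > target) above++;
  }
  return above / paths;
}
```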
25b. XGBoost/SHAP for Correlation Mining Tier 2 (DATA-GATED)
- Source: Correlation mining panel (Mar 22, 2026) — Grok and Gemini recommended, Opus and Sonnet said not yet
- What: Use XGBoost machine learning + SHAP interaction values to automatically discover non-obvious feature pairs that predict outcomes
- Why not now: Sports data is too small — 82 games per team per season. Once you slice by conditions (B2B + road + backup goalie) you're down to 15-20 games. XGBoost would overfit (find fake patterns that look real but aren't)
- When: After 500+ settled games per sport with full feature data in signal_features table. Roughly 2-3 full seasons of data collection.
- How: Train XGBoost on full feature set, use SHAP interaction values to find top 50 feature pairs, then feed those into the existing statistical validation pipeline (BH correction, minimum N, forward validation)
- Prerequisite: Correlation mining Phase 1 (hand-picked hypotheses, pure SQL+math) must be running and validated first
25c. LLM Knowledge Wiki / Compiled Intelligence (Karpathy Pattern)
- Source: Karpathy gist (gist.github.com/karpathy/442a6bf555914893e9891c11519de94f), Apr 4, 2026
- What: Instead of just storing raw data in tables, periodically have an LLM "compile" collected data into persistent, interlinked knowledge articles that build on each other over time
- How it works: Raw data (park factors, line movements, weather, settlements, odds) → LLM reads and compiles into structured markdown wiki pages (e.g. "Coors Field Edge Patterns", "Kalshi Morning Mispricing Trends") → pages get updated/refined weekly as new data arrives → edge scanner references compiled articles instead of re-analyzing raw data every time
- Three operations: Ingest (new data updates 10-15 wiki pages), Query (ask questions against compiled knowledge), Lint (periodic checks for contradictions, gaps, stale info)
- Why it matters: Knowledge compounds. After 3 months, the wiki "knows" things no single day's data would show. Patterns across desks become visible.
- Applicable to: All desks — sports (team tendencies, venue patterns), crypto (regime patterns, exchange behavior), weather (city-specific forecast accuracy), forex (central bank reaction patterns)
- Complementary to: 25d AutoAgent tuning (wiki compiles knowledge, AutoAgent tunes parameters based on it)
- When: Phase 4+, after data collection is mature across multiple desks and there's enough raw material to compile
25d. AutoAgent-Style Edge Threshold Tuning (DATA-GATED)
- Source: github.com/kevinrgu/autoagent evaluation (Apr 4, 2026)
- What: Use an automated meta-agent loop to tune edge thresholds — try different minimum edge % cutoffs, confidence floors, and desk-specific parameters, then score each config against actual settlement results to find optimal settings
- How it works: Define a scoring function (ROI, hit rate, or Sharpe over rolling 30-day window) → meta-agent tries parameter variations → keeps improvements, discards regressions → repeats overnight
- Applicable to: All desks with settlement data — edge thresholds, minimum probability gaps, confidence cutoffs, Kelly fraction sizing
- Why not now: Need months of live settlement results to score against. Without enough data, the optimizer would overfit to noise.
- When: After 3+ months of live settled bets across multiple desks (500+ settlements minimum). Phase 4+.
- Prerequisite: Weekly postmortem system running, settlement tracking mature, enough bet volume to produce statistically meaningful scores
- Repo reference: github.com/kevinrgu/autoagent
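The meta-agent loop reduces to "score each candidate config on settled history, keep the best." A toy single-parameter version over edge thresholds (the scoring function, data shape, and names are all assumptions; the real loop would also tune confidence floors and Kelly fractions):

```typescript
interface Settlement {
  edgePct: number; // pre-bet edge estimate
  won: boolean;
  payout: number;  // profit if won
  stake: number;
}

// ROI of taking only the bets whose pre-bet edge cleared the threshold.
function scoreThreshold(history: Settlement[], minEdge: number): number {
  const taken = history.filter((s) => s.edgePct >= minEdge);
  if (taken.length === 0) return -Infinity;
  const pnl = taken.reduce((sum, s) => sum + (s.won ? s.payout : -s.stake), 0);
  const staked = taken.reduce((sum, s) => sum + s.stake, 0);
  return pnl / staked;
}

// One "generation" of the tuning loop: try candidates, keep the best scorer.
function tuneEdgeThreshold(history: Settlement[], candidates: number[]): number {
  let best = candidates[0];
  let bestScore = -Infinity;
  for (const c of candidates) {
    const s = scoreThreshold(history, c);
    if (s > bestScore) { bestScore = s; best = c; }
  }
  return best;
}
```

This is exactly why the data gate matters: with a small history, the best-scoring threshold is mostly noise.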
26. System Upgrades (No Timeline)
- Source: memory/research-upgrades.md
- Prompt degradation detection after model updates
- A/B prompt testing framework
- Config versioning system (Git repo of all AI configs across 3 servers)
- Auto-optimization loop (predictions → Brier → detect underperformance → generate new prompts → A/B test → commit)
- Cross-desk signal integration
- Multimodal embeddings (image/audio/video in memory system)
25. Budget Optimization Option
- Source: memory/research-pipeline-jobs.md
- If budget needs cutting: swap Sonar Pro ($45/mo) for Gemini Flash with Google Grounding ($3/mo)
- Tradeoff: loses citation quality
DECISIONS PENDING
| Decision | Context | Source |
| --- | --- | --- |
| Gemini judge relay method | Direct API via Cloudflare proxy OR Boss relays prompts to Telegram bots? | build-plan.md |
| Research briefs approval | Opus reviewed, Boss hasn't signed off | research-briefs.md |
| Sonar Pro vs cheaper alternative | $45/mo biggest variable cost — cut if needed? | research-pipeline-jobs.md |
| Crypto desk search strategy | Not written yet — only desk missing strategy doc | build-plan.md |
Last updated: Apr 4, 2026
Source: ~/.claude/projects/-home-ubuntu-edgeclaw/memory/future-additions-tracker.md