Step 1: League-Level Trends
Before projecting individual players, we need to understand the macro environment: how many total touchdowns, yards, and turnovers does the NFL produce, and where is the trend headed?
These distributions show all 32 teams each season. The violin shape captures the full range of outcomes. The white line tracks the league mean, and the dashed projection uses recency-weighted regression (recent seasons weighted exponentially higher) to estimate 2025-2026 baselines.
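For readers who want the mechanics, here is a minimal sketch of a recency-weighted trend fit of this kind. The exponential decay value, function name, and example numbers are illustrative, not the production script.

```python
import numpy as np

def project_league_baseline(seasons, values, target_season=2026, decay=0.7):
    """Fit a recency-weighted linear trend and extrapolate to the target season.

    seasons/values: historical league-wide means (e.g., TDs per team per season).
    decay < 1 makes recent seasons count exponentially more in the fit.
    """
    seasons = np.asarray(seasons, dtype=float)
    values = np.asarray(values, dtype=float)
    weights = decay ** (seasons.max() - seasons)  # most recent season gets weight 1.0
    slope, intercept = np.polyfit(seasons, values, deg=1, w=weights)
    return slope * target_season + intercept

# Illustrative numbers only: league-average TDs per team by season
print(project_league_baseline([2021, 2022, 2023, 2024, 2025],
                              [41.0, 40.2, 40.5, 39.8, 39.6]))
```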
What the trends tell us
Touchdowns are roughly flat at ~40 per team, with a slight downward drift (-0.3/yr) after the 2020 peak. Passing TDs have declined from their 2020 high while rushing TDs hold steady around 15-16. The variance across teams remains wide (IQR ~10 TDs) — team context matters enormously.
Yards show a clear passing-to-rushing shift. Passing yards are trending down ~40 yards/yr since 2021, while rushing yards trend up ~20 yards/yr. Total yards are essentially flat, but the composition is changing. This has direct fantasy implications: RB volume is rising league-wide.
Turnovers (INTs thrown) are stable at ~13 per team with a slight decline. Lower interception rates favor QB fantasy floors but also reduce defensive scoring opportunities.
These league baselines anchor the team-level projections in the next step: each team's projected output is expressed as a deviation from the league trend, not as an absolute number.
Step 2: Team Output Allocation
The league-level projections give us a total budget — ~1,309 TDs, ~188,000 passing yards, etc. Now we distribute that budget across all 32 teams as a zero-sum allocation: what one team gains, another must lose.
Each team starts with a backtest-validated projection model and is then normalized to match the league total from Step 1; coaching and QB context flags are displayed alongside each team but do not move the numbers.
Stage 1: Gross totals (3yr_wavg_mr25)
Base projection uses a 3-year recency-weighted average (2025 weighted 50%, 2024 weighted 30%, 2023 weighted 20%) blended with 25% mean reversion toward the league average. This projects each team's total TDs and total yards — the size of their offensive output pie.
Why mean reversion? A team that scores 55 TDs has roughly a 70% chance of scoring fewer the next year. The weighted average captures true team quality better than a single season, and the 25% league pull brings outliers back toward the pack without over-correcting stable teams near the mean. No trend extrapolation — the backtest showed that dampened linear trends actually hurt accuracy versus a simple weighted average.
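As a concrete sketch, the Stage 1 calculation reduces to a few lines. The constants mirror WAVG_WEIGHTS and MEAN_REVERSION_PCT from the constants reference in the appendix; the example inputs are made up.

```python
WAVG_WEIGHTS = {-1: 0.50, -2: 0.30, -3: 0.20}  # Y-1 / Y-2 / Y-3 recency weights
MEAN_REVERSION_PCT = 0.25                      # pull toward the league average

def project_team_total(team_history, league_avg):
    """3yr_wavg_mr25: recency-weighted average blended 25% toward the league average."""
    wavg = sum(WAVG_WEIGHTS[k] * team_history[k] for k in WAVG_WEIGHTS)
    return (1 - MEAN_REVERSION_PCT) * wavg + MEAN_REVERSION_PCT * league_avg

# A 55-TD outlier season gets pulled back toward the ~40-TD league mean:
print(project_team_total({-1: 55, -2: 48, -3: 44}, league_avg=40.0))  # ~48.0
```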
Stage 2: Pass/rush splits (3yr_wavg_mr15 + coach_blend)
Once we have total TDs and total yards, we need to decompose them into passing and rushing components. Each team's pass/rush split ratios (pass_td%, rush_td%, pass_yard%, rush_yard%) are computed as a 3-year weighted average (50/30/20) with 15% mean reversion toward league average splits.
For HC-change teams, the split is further blended 30% toward the incoming coach's historical pass tendency from coach_scheme_profile. A coach like John Harbaugh (0.44 pass_pct) moving to NYG shifts their split toward more rushing TDs, while a pass-heavy coordinator would shift the opposite direction. Teams whose incoming coach has no historical profile fall back to the base model.
We backtested red zone features (goal-line rush rate, inside-20 rush rate) and personnel packages (21/22 formation rates) as split predictors. Neither added predictive value — the team's historical TD ratio already contains this information. Red zone rush rates got zero coefficients in ridge regression. Personnel features made accuracy worse (+6.6% error). The 15% mean reversion + coach blend combination improves TD split RMSE by ~4-5% over the previous production model.
Multiplying Stage 1 totals by Stage 2 ratios gives us: pass_tds, rush_tds, pass_yards, rush_yards.
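Here is a minimal sketch of that Stage 2 logic, including the coach blend for HC-change teams. Function and argument names are illustrative, and the example ratios are invented.

```python
WAVG_WEIGHTS = {-1: 0.50, -2: 0.30, -3: 0.20}
SPLIT_MR_PCT = 0.15      # pull split ratios 15% toward league-average splits
COACH_BLEND_PCT = 0.30   # HC-change teams: 30% toward the incoming coach's tendency

def project_split(team_split_history, league_avg_split, coach_pass_pct=None):
    """3yr_wavg_mr15 (+ coach_blend) for one split ratio such as pass_td%."""
    wavg = sum(WAVG_WEIGHTS[k] * team_split_history[k] for k in WAVG_WEIGHTS)
    split = (1 - SPLIT_MR_PCT) * wavg + SPLIT_MR_PCT * league_avg_split
    if coach_pass_pct is not None:  # incoming coach with a historical scheme profile
        split = (1 - COACH_BLEND_PCT) * split + COACH_BLEND_PCT * coach_pass_pct
    return split

# Stage 1 total x Stage 2 ratio -> component projections
total_tds = 48.0
pass_share = project_split({-1: 0.66, -2: 0.62, -3: 0.64}, 0.63, coach_pass_pct=0.44)
pass_tds, rush_tds = total_tds * pass_share, total_tds * (1 - pass_share)
```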
Interceptions (int_mr50)
Interceptions are projected standalone: prior year INTs blended 50/50 with the league average. Every INT model we tested produced negative R2 — interceptions are effectively random year-to-year. Heavy mean reversion simply minimizes the damage.
Contextual flags (visual only)
The chart displays flags for structural changes — new head coach, new QB, rookie QB — as visual context next to each team. These flags do not adjust the projection numbers.
Why not? We backtested flag-adjusted models against their non-flag counterparts. Adding flag weights (HC change, QB change) to the 3-year weighted average actually made accuracy worse — nRMSE increased from 0.9706 to 1.0000 and TD R2 dropped from 0.044 to -0.012. The mean reversion already captures what flags are trying to do: a team that just lost its HC probably had a turbulent year, and mean reversion naturally pulls them back toward the league average. The flags are redundant signal on top of that.
Normalization (zero-sum)
Once the Stage 1, Stage 2, and interception projections are assembled, the 32 team projections are rescaled so their sum exactly matches the league total from Step 1. This enforces the zero-sum constraint — the league can only produce so many touchdowns, and every team's projection must be consistent with that total.
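The rescaling itself is a one-line proportional scale. A minimal sketch, assuming team projections are held in a dict keyed by team:

```python
def normalize_to_league_total(team_values, league_total):
    """Rescale the 32 team projections so they sum exactly to the league budget."""
    scale = league_total / sum(team_values.values())
    return {team: value * scale for team, value in team_values.items()}

# e.g., raw team TD projections summing to 1,340 get scaled down to a 1,309 budget
```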
Reading the chart
Sort by Value to see which teams project highest. Sort by Change to see the biggest year-over-year movers — typically teams whose recent output sits far from the league average, where mean reversion pulls hardest. The sparkline shows each team's 5-year trajectory with the 2026 projection as the final dashed point. The vertical white line marks the league average.
Model Selection: Walk-Forward Backtest
How did we choose this model? We backtested 23 Stage 1 models (totals), 5 split models (pass/rush ratios), and 5 interception models across every season from 2017-2025. For each target season, only prior-year data was visible — no peeking at the future.
The dominant finding: mean reversion is the strongest signal in team projections. Ridge regression with heavy regularization (nRMSE=0.9005) finished first, but the simpler 3-year weighted average + 25% mean reversion (nRMSE=0.9228) finished close behind with zero overfitting risk and full interpretability. We chose the simpler model as the production baseline — it's easier to explain, easier to audit, and nearly identical in accuracy.
The previous production model (3-year recency-weighted trend with dampened linear extrapolation) ranked 19th of 23 models — trend extrapolation actively hurts team-level projections.
The best model combination improves on the previous production baseline by 10-16% MAE across all metrics. Interceptions remain nearly unpredictable (negative R2 across all models) — 50% mean reversion simply minimizes the damage.
Step 2.5: Positional Share Budgets
Before distributing shares to individual players, we need to answer a structural question: what percentage of a team's targets and carries go to each position group?
This matters because player-level "when-active" rates can't simply be summed. When a starting RB gets injured and the backup takes over, both may show 40%+ carry share — but that doesn't mean the RB position commands 80% of carries when the starter returns. The position group only gets its historical budget.
Target budgets
We pull each team's historical target distribution from team_target_share — the same data visible in the Target Share Distribution chart on each team's profile page. Using a recency-weighted average (2025 weighted 4x, 2024 weighted 2x, 2023 weighted 1x), we compute a positional target budget for each team:
- WR group: Typically 45-60% of team targets. WR-heavy offenses (BAL 62%, SF 57%) get higher budgets; TE-heavy offenses (KC 49%) get lower ones.
- TE group: Ranges from 10-25%. KC's historical TE share (23-32%) is well above the league average (~18%) — a scheme signature that persists across seasons.
- RB group: Usually 12-20% of targets. Teams that feature pass-catching backs (SF 23%) allocate more here.
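A sketch of the budget computation, assuming each team's historical position-group shares are available per season; the 4x/2x/1x weights match the scheme described above, and the example shares are invented.

```python
SHARE_SEASON_WEIGHTS = {2025: 4, 2024: 2, 2023: 1}

def positional_budget(season_shares):
    """Recency-weighted positional budget for one team.

    season_shares: {season: {"WR": share, "TE": share, "RB": share}} from
    team_target_share (or team_carry_share for carry budgets).
    """
    total_weight = sum(SHARE_SEASON_WEIGHTS[s] for s in season_shares)
    budgets = {}
    for season, shares in season_shares.items():
        for pos, share in shares.items():
            budgets[pos] = budgets.get(pos, 0.0) + SHARE_SEASON_WEIGHTS[season] * share / total_weight
    return budgets

print(positional_budget({2025: {"WR": 0.58, "TE": 0.22, "RB": 0.15},
                         2024: {"WR": 0.55, "TE": 0.24, "RB": 0.16},
                         2023: {"WR": 0.52, "TE": 0.26, "RB": 0.17}}))
```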
Carry budgets
Similarly, we pull from team_carry_share — the Carry Share Distribution chart on the team profile. Carry budgets are more position-concentrated than targets:
- RB group: Commands 70-86% of carries. The variation reflects scheme — power-run teams (DAL ~80%) vs. QB-run teams (BAL ~68%).
- QB group: Ranges from 5-22%. Mobile QBs like Lamar Jackson (BAL ~20%) or Jalen Hurts create a systematically higher QB carry budget. Pocket passers like Dak Prescott sit at ~12%.
- WR/TE: Typically 1-8% combined (jet sweeps, end-arounds).
Why this matters
Without positional budgets, the projection would let two RBs who timeshared in 2025 both project their "when-active" carry rates simultaneously — inflating the RB group well beyond what the offense actually allocates. The budget enforces the constraint: the RB position gets X% of carries, and individual RBs compete for shares within that budget.
These budgets feed directly into Step 3, where individual player shares are projected within their position group's allocation.
Step 3: Roster Share Distribution
Now we connect players to their team's production budget. Each team's projected TDs and yards from Step 2 need to be distributed among the players actually on the roster — constrained by the positional budgets from Step 2.5.
The key question: how much of the offense is accounted for, and how much is up for grabs?
For each team, we look at the current 2026 roster (from Sleeper roster data) and pull each player's 2025 production — targets, carries, snap share, fantasy PPG (half-PPR). Players are classified as returning (same team as 2025) or new arrivals (traded/signed from another team). The target and carry shares tell us how much of the offense is locked up with known commodities vs. being redistributed.
Projection methodology
The projected shares in the rightmost columns use a tiered, ratio-preserving approach:
- Roster viability filter — Players with near-zero production AND low SHAPE scores (below the 40th percentile) are identified as likely roster cuts and excluded from projections. This prevents practice-squad-caliber players from diluting shares among likely contributors.
- When-active rates — Each player's target and carry shares are calculated as their per-game rate divided by the team's per-game rate. This gives the true usage rate when the player was on the field, not diluted by games missed to injury.
- Ratio preservation — The stickiest stat year-over-year isn't raw target share — it's the ratio of targets-per-snap (Tgt%/Snap%) and carries-per-snap (Car%/Snap%). A WR who is targeted on 37% of his snaps in 2025 should project similarly in 2026. These ratios are mildly regressed toward positional means (3% for Tier 1 anchors, 10% for depth players).
- Tiered anchoring — The WR1, WR2, RB1 (and RB2 if the team runs a committee), and TE1 are identified as Tier 1 anchors. Their snap shares are held steady (no regression) and their ratios get minimal regression. Tier 2 players fill the remaining positional budget proportionally.
- Positional budget normalization — Within each position group, projected shares are scaled to match the team's historical budget from Step 2.5 (see the sketch after this list). This prevents overlapping when-active rates from inflating position totals beyond what the offense actually allocates.
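Two of these steps carry most of the constraint logic: when-active rates and the Step 2.5 budget fit. A minimal sketch with illustrative names and numbers:

```python
def when_active_share(player_per_game, team_per_game):
    """Usage rate while on the field, undiluted by games missed to injury."""
    return player_per_game / team_per_game if team_per_game else 0.0

def fit_group_to_budget(projected_shares, group_budget):
    """Scale a position group's projected shares to its Step 2.5 budget."""
    raw_total = sum(projected_shares.values())
    if raw_total == 0:
        return projected_shares
    scale = group_budget / raw_total
    return {player: share * scale for player, share in projected_shares.items()}

# Two RBs whose when-active rates sum past the team's RB carry budget (78%)
# are scaled back so the group total matches the budget:
print(fit_group_to_budget({"RB1": 0.52, "RB2": 0.41}, group_budget=0.78))
```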
What to look for
Teams with high returning share (BAL 98%, CIN 98%) have stable offenses — the same players are running the same routes. Projection confidence is higher.
Teams with low returning share and heavy new arrivals (WAS 42% new, MIA 34% new, TEN 35% new) are volatile. These rosters have significant turnover in skill positions, meaning opportunity shares are uncertain and projectable ceilings are wider.
The unaccounted share represents targets/carries that went to players no longer on any NFL roster (retired, cut, unsigned free agents). This share gets redistributed among the returning players and new arrivals in the final projection.
Blue-highlighted names are new to the team. The green cell shading shows share intensity — darker green means a larger slice of the team's opportunity.
Step 4: Player Stat Projections
With shares assigned and team budgets locked, we now convert opportunity into counting stats. This is where efficiency enters the model — not every target is equal, not every carry produces the same yards, and TD allocation depends heavily on red zone usage.
Efficiency aggregation
We aggregate play-by-play data from pbp_raw across 2023-2025, computing per-player rates for:
- Receiving: catch rate, yards/target, yards/reception, YAC/reception, air yards/target
- Rushing: yards/carry
- Passing: completion %, yards/attempt, TD rate, INT rate
- Red zone: RZ targets, RZ carries, RZ TD rates (both receiving and rushing), goal-to-go splits
These rates are recency-weighted (2025 at 4x, 2024 at 2x, 2023 at 1x) and volume-weighted (a 120-target season influences the average more than a 30-target season). The career-weighted averages smooth out single-season noise while staying responsive to recent trajectory.
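A sketch of the combined recency and volume weighting, assuming per-season (volume, rate) pairs are already aggregated; the numbers are illustrative.

```python
EFF_SEASON_WEIGHTS = {2025: 4, 2024: 2, 2023: 1}

def weighted_rate(seasons):
    """Recency- and volume-weighted efficiency rate (e.g., yards per target).

    seasons: list of (season, volume, rate) tuples such as (2025, 120, 9.5).
    Effective weight = recency multiplier x volume, so a 120-target season
    moves the average more than a 30-target season.
    """
    numerator = sum(EFF_SEASON_WEIGHTS[s] * vol * rate for s, vol, rate in seasons)
    denominator = sum(EFF_SEASON_WEIGHTS[s] * vol for s, vol, _ in seasons)
    return numerator / denominator if denominator else 0.0

# An ascending 7.0 -> 8.2 -> 9.5 yards/target profile lands near 8.9, not the flat ~8.2 average
print(weighted_rate([(2023, 80, 7.0), (2024, 100, 8.2), (2025, 120, 9.5)]))
```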
Efficiency by view
Toggle between receiving, rushing, passing, and red zone views to see each player's efficiency profile alongside their projected shares. The efficiency columns show career-weighted rates with minimum thresholds (5 targets, 5 carries, 10 pass attempts) to avoid noisy small samples.
What to look for
- High share + high efficiency = safe projection floor (e.g., elite WR1 with 25% target share and 8+ yards/target)
- High share + low efficiency = volume-dependent (e.g., RB1 with 45% carry share but 3.8 yards/carry)
- Red zone concentration = TD upside. Players with high RZ target share AND high RZ TD rate will claim a disproportionate share of team TDs
- New arrivals with no efficiency data for the current team will use their prior-team rates, regressed toward positional means
Step 5: Distribution Validation
The final step validates our projections against reality. We compare the 2026 projected half-PPR fantasy point distribution against actual 2023-2025 distributions by position rank — WR1-100, RB1-50, TE1-50, QB1-40.
The goal isn't to predict outlier ceiling seasons. The goal is distribution shape fidelity: does our WR25 project close to where a real WR25 has historically scored? Does the curve from WR1 to WR100 follow a realistic shape?
Model evolution: v1 to v3
The projection model went through three major iterations to reach this accuracy:
v1 (budget-normalized): Every player's raw yards and TDs were scaled to sum exactly to the team budget. This caused severe compression — elite players got scaled down, backup players got inflated. QB RMSE was 61.1 points, WR bias was -18.7.
v2 (rate-based, no budget normalization): Yards and TDs computed directly from individual efficiency rates. Removed the budget scaling that was compressing the distribution. WR RMSE improved dramatically (39 to 17), but top-end players were still under-projected because career-average rates regress recent elite performance toward the mean.
v3 (weighted rates + total TD rate + FPPG blend): Three key changes:
- Total TD rate from season data — The v1/v2 models used a red-zone-only TD formula: targets x RZ_target_rate x RZ_TD_rate. This captured only touchdowns scored inside the 20-yard line, but elite players score many TDs from outside the red zone on long catches and runs. Switching to targets x total_TD_rate (rec_tds / targets from play-by-play) added 4-8 TDs per season for elite WRs and RBs. This was the single biggest accuracy improvement.
- Weighted season efficiency — Instead of flat career averages that regress peaks, efficiency rates are now weighted by recency (2025 at 4x, 2024 at 2x, 2023 at 1x). A player whose yards/target improved from 7.0 to 9.5 over three seasons gets projected closer to 9.0 than 8.0. This preserves trajectory for ascending players.
- Historical FPPG blend — For WR and RB, the structural model is blended with historical fantasy points per game (scaled to 17 games). The blend weight scales with production: a 20 FPPG player gets ~31% historical weight, a 5 FPPG player gets ~8% (sketched below). This captures unmeasured "star quality" — scheme priority, matchup advantages, talent — that raw efficiency rates miss. TEs are excluded from the blend (it over-inflated their projections) and instead get a position-specific dampening that's stronger for low-usage TEs, whose small-sample efficiency is less reliable.
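The exact blend-weight curve lives in projection_validation.py. As an illustrative stand-in (not the production formula), a linear ramp capped at FPPG_BLEND_MAX reproduces the quoted anchor points (~31% at 20 FPPG, ~8% at 5 FPPG):

```python
FPPG_BLEND_MAX = 0.35  # cap on the historical weight for WR/RB

def fppg_blend_weight(prior_fppg, scale=64.0):
    """Illustrative production-scaled weight: a linear ramp capped at the max.

    Assumption for this sketch only; it yields ~0.31 at 20 FPPG and ~0.08 at 5 FPPG.
    """
    return min(FPPG_BLEND_MAX, max(0.0, prior_fppg / scale))

def blend_projection(structural_points, prior_fppg, games=17):
    """Blend the structural projection with historical FPPG x 17 (WR/RB only)."""
    w = fppg_blend_weight(prior_fppg)
    return (1 - w) * structural_points + w * prior_fppg * games

print(blend_projection(250.0, prior_fppg=20.0))  # leans ~31% on history -> ~278 points
```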
Position-specific findings
WR (RMSE 8.7, MAE 5.3, Grade A): The strongest fit. WR10 is within 3.5 points of the historical average, WR25 within 6, WR50 within 2.4. The top end (WR1-3) is under-projected by ~40-50 points, but this is expected — the historical WR1 slot captures whoever had the best season that year, often an outlier performance we can't structurally predict.
RB (RMSE 18.2, MAE 14.2, Grade B): RB5 is within 2 points, RB25 is dead-on (+0.3 points). The top end is under-projected (RB1 at -63) for the same outlier reason. RB50 is slightly over-projected (+20) — low-end RBs with small samples get inflated by default efficiency rates.
TE (RMSE 12.8, MAE 11.3, Grade B): TE1-7 track well. The position has a persistent positive bias (+10.8) because TE TD rates are volatile and small samples regress insufficiently. The dampening factor reduces this but doesn't eliminate it — TEs at the bottom of the rankings (TE30-50) are still over-projected.
QB (RMSE 64.9, Grade D): QB projection compression is structural and expected. Every team has a starting QB with ~95% projected snap share, so the model projects ~32 viable starting QBs. In reality, injuries and benchings split output between 2 QBs on many rosters, creating a wider distribution we cannot project for. QB is not optimized in this model — it's a known limitation.
Methodology notes
- Validation is rank-based, not player-based. We compare "our projected WR5" against "the historical average WR5," not "our projection for Rashee Rice" against "what Rashee Rice actually scored." This tests distribution shape, not individual accuracy.
- Actual averages use 2023-2025 with equal weight. Players must have played 4+ games to qualify for a rank slot.
- The model does not try to predict ceiling seasons. A WR1 averaging 340 points historically includes breakout seasons (Ja'Marr Chase 2024 at 370+). Our WR1 projection of ~290 represents a strong-but-not-historic season — the expected mean, not the upside.
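A sketch of the rank-based comparison, assuming projected season totals and historical rank averages are already computed; bias is reported as projected minus actual.

```python
import numpy as np

def rank_validation(projected_points, historical_rank_avgs):
    """Compare a position's projected distribution to historical averages by rank."""
    proj = np.sort(np.asarray(projected_points, dtype=float))[::-1]  # rank 1 = highest
    n = min(len(proj), len(historical_rank_avgs))
    actual = np.asarray(historical_rank_avgs, dtype=float)[:n]
    errors = proj[:n] - actual
    return {
        "rmse": float(np.sqrt(np.mean(errors ** 2))),
        "mae": float(np.mean(np.abs(errors))),
        "bias": float(np.mean(errors)),  # positive = over-projection
    }
```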
Appendix: Pipeline Execution Guide
This appendix documents exactly how to re-run the 2026 projection pipeline. Follow these steps in order whenever rosters change (post-draft, free agency, trades, cuts) or when underlying data is updated.
Prerequisites
- Supabase credentials in backend/.env (SUPABASE_URL, SUPABASE_SERVICE_ROLE_KEY)
- Python 3.10+ with packages: supabase, numpy, python-dotenv
- Node.js for the dev server (npm run dev)
- The following tables must be populated through the current season: team_offensive_trends, historic_player_season, coach_assignments, coach_scheme_profile, team_flag_weights, team_scheme_detail, team_personnel_usage
Step 1: League-Level Trends
Script: scripts/league_trends_analysis.py
Output: public/data/league-trends.json
What it does: Projects league-wide totals (total TDs, yards, turnovers) for 2026 using recency-weighted regression across all 32 teams. These totals become the zero-sum budget that all team projections must sum to.
When to re-run: Only when prior season actuals are finalized or methodology changes. Rarely needs re-running mid-offseason.
python scripts/league_trends_analysis.py
Step 2: Team Output Projections
Script: scripts/team_projections_analysis.py
Output: public/data/team-projections.json
What it does: Distributes the league budget from Step 1 across 32 teams.
Production model (validated via walk-forward backtest 2017-2025):
- Stage 1 — Gross totals: 3yr_wavg_mr25 — 3-year weighted average (50/30/20 for Y-1/Y-2/Y-3) blended with 25% mean reversion toward league average. Projects total_tds and total_yards per team. Mean reversion is the dominant signal; the old trend extrapolation model ranked 19th/23.
- Stage 2 — Pass/rush splits: 3yr_wavg_mr15 + coach_blend — 3-year weighted average of each team's pass/rush TD and yard ratios + 15% mean reversion toward league average splits. For HC-change teams, the split is further blended 30% toward the incoming coach's historical avg_pass_pct from coach_scheme_profile. Red zone and personnel features were backtested and rejected (zero predictive value).
- Interceptions: int_mr50 — Prior year team INTs blended 50/50 with league average. All INT models have negative R2; this minimizes error.
- Flags: Detected and displayed in the chart (New HC, New QB, Rookie QB) but do NOT adjust projections. Backtested and confirmed to add no accuracy.
- Normalization: All 32 teams scaled so each metric sums to the league total from Step 1.
Before running: Update MANUAL_NEW_VET_QB and MANUAL_ROOKIE_QB lists at the top of the script (around line 80) with the current year's QB changes.
python scripts/team_projections_analysis.py
Step 2.5: Positional Share Budgets
Part of: scripts/team_share_analysis.py (runs as a precursor to Step 3)
What it does: Computes what percentage of each team's targets and carries go to WR/RB/TE position groups using recency-weighted historical averages from team_target_share and team_carry_share tables.
Step 3: Roster Share Distribution
Script: scripts/team_share_analysis.py
Output: public/data/team-shares.json
What it does: Distributes each team's projected output to individual players on the current roster.
Key methodology:
- Depth ranking: SHAPE-led formula (40% SHAPE, 35% PPG, 15% snap rate, 10% games played) determines position rank (WR1, WR2, RB1, etc.) — no starter locks, no leapfrog logic, pure formula.
- Share projection: Each player gets their depth rank's historical team-level share as base, blended with their own prior share (70/30 for Tier 1 players, 30/70 for Tier 2, 100% rank budget for new arrivals).
- Normalization: Target shares and carry shares each independently sum to 100% across all active roster players. QBs excluded from target share logic.
- Roster data source: Sleeper API for current rosters; player stats from historic_player_season and player_shape_scores.
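A sketch of the SHAPE-led depth formula: the 40/35/15/10 weights come from the list above, while the normalization constants (SHAPE on a 0-100 scale, a 25-PPG cap, 17 games) are assumptions made for this illustration.

```python
DEPTH_WEIGHTS = {"shape": 0.40, "ppg": 0.35, "snap_rate": 0.15, "games": 0.10}

def depth_score(shape, ppg, snap_rate, games,
                shape_max=100.0, ppg_max=25.0, games_max=17.0):
    """Weighted depth score; each input is normalized to 0-1 before weighting."""
    features = {
        "shape": min(shape / shape_max, 1.0),
        "ppg": min(ppg / ppg_max, 1.0),
        "snap_rate": snap_rate,
        "games": min(games / games_max, 1.0),
    }
    return sum(DEPTH_WEIGHTS[k] * features[k] for k in DEPTH_WEIGHTS)

def rank_position_group(players):
    """Order a team's position group by depth score: index 0 becomes WR1/RB1/TE1."""
    return sorted(players, key=lambda p: depth_score(**p), reverse=True)

print(rank_position_group([
    {"shape": 82, "ppg": 14.1, "snap_rate": 0.88, "games": 16},
    {"shape": 64, "ppg": 9.3, "snap_rate": 0.61, "games": 17},
]))
```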
Before running: Ensure Sleeper rosters are synced and player_shape_scores is current for the latest season.
python scripts/team_share_analysis.py
Step 4: Player Efficiency Aggregation
Script: scripts/player_efficiency_aggregation.py
Output: public/data/player-efficiency.json
What it does: Aggregates play-by-play data into per-player efficiency metrics including red zone splits, catch rates, yards/target, yards/carry, and fumble data.
Key methodology:
- PBP aggregation: Fetches all pass and rush plays from pbp_raw for 2023-2025, computing per-player receiving, rushing, and passing stats.
- Red zone splits: yardline_100 <= 20 for red zone, goal_to_go = 1 for goal line. Tracks RZ targets, carries, TDs, and conversion rates.
- Career weighting: Season rates weighted by recency (4x/2x/1x) and volume (more touches = more influence on the average).
- Team totals: Also computes team-level pass attempts, rush attempts, RZ attempts, and TD totals for budget reconciliation.
Before running: Ensure pbp_raw is loaded for all relevant seasons and players_master positions are current.
python scripts/player_efficiency_aggregation.py
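To make the red zone splits concrete, here is a sketch of the per-receiver aggregation over pbp_raw rows using plain dicts. The column names (receiver_player_id, yardline_100, pass_touchdown) follow the nflfastR convention and are assumptions about the pbp_raw schema.

```python
def red_zone_receiving_rates(plays):
    """Per-receiver red zone target counts and TD conversion from pass plays."""
    stats = {}
    for play in plays:
        pid = play.get("receiver_player_id")
        if not pid:
            continue
        s = stats.setdefault(pid, {"targets": 0, "rz_targets": 0, "rz_tds": 0})
        s["targets"] += 1
        if play["yardline_100"] <= 20:  # red zone snap
            s["rz_targets"] += 1
            s["rz_tds"] += int(play.get("pass_touchdown", 0))
    for s in stats.values():
        s["rz_target_rate"] = s["rz_targets"] / s["targets"]
        s["rz_td_rate"] = s["rz_tds"] / s["rz_targets"] if s["rz_targets"] else 0.0
    return stats
```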
Step 5: Projection Validation
Script: scripts/projection_validation.py
Output: public/data/projection-validation.json
What it does: Replicates the v3 projection model in Python, projects fantasy points for all rostered players, compares the rank-based distribution against actual 2023-2025 half-PPR fantasy point distributions.
Validation model (v3):
- Volume: Team attempt budget (weighted avg) x player shares (from Step 3)
- Yards: Volume x weighted season efficiency rates (NOT budget-normalized)
- TDs: Volume x total TD rate from PBP season data (captures both RZ and non-RZ TDs)
- FPPG blend: WR/RB projections blended with historical FPPG x 17 (weight scales with production)
- TE dampening: Position-specific scale factor (0.85-0.95) based on target share to prevent over-projection
- Output: RMSE, MAE, and bias by position rank vs historical averages
When to re-run: After any change to Steps 1-4 outputs, or when methodology changes.
python scripts/projection_validation.py
Step 6: Rankings Integration
Source: src/lib/rankings/rankingsEngine.ts, src/hooks/useRankings.ts, src/lib/database/players.ts
Table: player_projections_current
What it does: Connects the projection pipeline output to the live fantasy rankings page. When player_projections_current contains rows, the rankings engine uses projected stats instead of paced prior-season actuals.
How it works:
- PlayersDatabase.getProjections() fetches all rows from player_projections_current (keyed by nflfastr player_id)
- The useRankings hook loads projections in parallel with player data and SHAPE scores on mount
- computeAllScores() checks for a projection per player: if found, it converts it to a PlayerSeasonStats shape via projectionToStats() and uses it for scoring/PAPR; if not found, it falls back to paced actuals (the old behavior)
- SHAPE scores remain based on actual prior-season production (not projections) so the SHAPE signal stays independent of the projection model
- The projected team is used when available, so FA/trade moves reflected in projections propagate to rankings automatically
Graceful degradation: If the player_projections_current table is empty or the fetch errors, the rankings engine falls back entirely to paced actuals — no crash, no blank page. This means the pipeline is additive: rankings always work, projections make them better.
To populate the table: Run the backend transform after Steps 1-5 are complete:
# Populate player_projections_current from the validated v3 model
python -m backend.transformations.transform_runner layer4_player_projections_v2
# Or run directly:
cd backend && python -c "from transformations.transforms.layer4_player_projections_v2 import PlayerProjectionsV2Transform; PlayerProjectionsV2Transform().run()"
When to re-run: After any change to Steps 1-5, or after roster moves (FA, trades, cuts) that change team assignments or depth charts.
Full Pipeline (all steps in order)
# Step 1: League trends (run once, rarely changes)
python scripts/league_trends_analysis.py
# Step 2: Team projections (re-run after coach/QB changes)
python scripts/team_projections_analysis.py
# Step 3: Player share distribution (re-run after any roster change)
python scripts/team_share_analysis.py
# Step 4: Player efficiency aggregation (re-run after PBP data update)
python scripts/player_efficiency_aggregation.py
# Step 5: Projection validation (re-run after any methodology change)
python scripts/projection_validation.py
# Step 6: Populate rankings table (re-run after any step above)
python -m backend.transformations.transform_runner layer4_player_projections_v2
Backtest Scripts (validation only — not part of the production pipeline)
These scripts were used to select the production models. Re-run them only if you want to add new candidate models or validate with additional seasons of data.
- scripts/team_projection_backtest.py — Tests 23 Stage 1 models, 5 split models, 5 INT models. Output: public/data/team-projection-backtest.json
- scripts/split_model_backtest.py — Tests 16 split models including red zone, personnel, and coaching features. Output: public/data/split-model-backtest.json
Model Constants Reference
| Constant | Value | Location | Meaning |
|---|---|---|---|
| WAVG_WEIGHTS | {-1: 0.50, -2: 0.30, -3: 0.20} | team_projections_analysis.py | 3-year recency weights |
| MEAN_REVERSION_PCT | 0.25 | team_projections_analysis.py | Stage 1 gross totals: 75% team / 25% league |
| SPLIT_MR_PCT | 0.15 | team_projections_analysis.py | Stage 2 split ratios: 85% team / 15% league |
| COACH_BLEND_PCT | 0.30 | team_projections_analysis.py | HC-change teams: 70% team / 30% incoming coach |
| INT_MEAN_REVERSION_PCT | 0.50 | team_projections_analysis.py | INTs: 50% team / 50% league |
| SEASON_WEIGHTS | {2025: 4, 2024: 2, 2023: 1} | projection_validation.py | Recency weights for per-player efficiency rates |
| FPPG_BLEND_MAX | 0.35 | projection_validation.py | Max historical FPPG blend weight for WR/RB studs |
| TE_DAMPEN_RANGE | 0.85–0.95 | projection_validation.py | TE scale factor based on target share (0.15 = full) |
| PROJECTION_YEAR | 2026 | team_projections_analysis.py | Update annually |
Architecture: Rankings Integration
The projection pipeline feeds the fantasy rankings page through a clean data contract:
Scripts (Steps 1-5) Backend Transform (Step 6) Frontend (Rankings)
───────────────────── ────────────────────────── ──────────────────
league-trends.json ─┐
team-projections.json ├──▶ layer4_player_projections_v2 ──▶ player_projections_current (Supabase)
team-shares.json │ │
player-efficiency.json ┘ ▼
PlayersDatabase.getProjections()
│
▼
useRankings hook
│
▼
computeAllScores(players, stats, ..., projections)
│
▼
Rankings Page (/football/fantasy/rankings)
Fallback behavior: If player_projections_current is empty, computeAllScores falls back to pacing prior-season actuals to 17 games (the pre-pipeline behavior). Rankings always render.
Post-Draft Checklist
- Update MANUAL_ROOKIE_QB list in team_projections_analysis.py with drafted rookie QBs expected to start
- Verify coach_assignments table has 2026 entries for all 32 teams
- Run SHAPE transform for drafted rookies so they have SHAPE scores
- Run Steps 1-5 in order
- Run Step 6 to populate the player_projections_current table
- Verify article charts render with updated data at /football/fantasy/research/projection-engine-2026
- Verify the rankings page at /football/fantasy/rankings reflects projected stats (not paced actuals)