Methodology

How the DisplaceIndex is calculated - data sources, scoring approach, and known limitations.

Overview

The DisplaceIndex is a composite 0–100 indicator of AI-driven job displacement pressure in the US labor market, scored so that higher is better: higher scores reflect a healthy, growing labor market with limited displacement evidence, while lower scores indicate rising pressure from automation, layoffs, or deteriorating employment conditions.

The index is computed every 6 hours by an automated pipeline combining two scored layers: objective hard data from Federal Reserve economic series (70% weight) and AI-scored real-time market sentiment (30% weight). Hard data anchors the index to verified statistics; sentiment captures fast-moving signals that official data - with its publication lags - cannot yet reflect. A third layer, confirmed AI displacement events, is tracked alongside the composite and described below.

Layer 1 - Hard Data (70%)

Layer 1 uses six Federal Reserve (FRED) economic series as objective inputs. Series are selected for their coverage of both cyclical labor market health and AI-specific displacement signals.

Each series is normalised against its long-run historical distribution so that a reading at the most favorable end of its historical range contributes a strong positive signal, and a reading near the worst-ever level contributes a strong negative signal. Both the current level and recent direction of change are factored into each series score - where a market is positioned historically matters, and so does whether conditions are improving or deteriorating.
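The exact normalisation is internal to the pipeline; the sketch below shows the general approach described above - percentile rank for the historical level plus a momentum term for recent direction. The 70/30 level/trend weights here are illustrative, not the production values.

```python
def series_score(history, current, prev, level_weight=0.7, trend_weight=0.3):
    """Score one economic series on a 0-100 scale (sketch, not production code).

    For "higher is worse" series (e.g. jobless claims, layoffs), readings
    would be inverted before scoring. Weights here are illustrative.
    """
    # Level: percentile rank of the current reading within the series'
    # long-run history (100 = best-ever end of the range, 0 = worst-ever).
    rank = sum(1 for x in history if x <= current) / len(history) * 100

    # Direction: recent change mapped to a 0-100 momentum term
    # (50 = flat, above 50 = improving, below 50 = deteriorating).
    span = max(history) - min(history)
    momentum = 50 + 50 * (current - prev) / span if span else 50.0
    momentum = max(0.0, min(100.0, momentum))

    return level_weight * rank + trend_weight * momentum
```

A reading at its best-ever level that is still improving scores near 100; the same level with a deteriorating trend scores lower, reflecting both position and direction.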

Series are weighted based on their assessed reliability as leading indicators of displacement pressure. Higher-cadence series with more predictive power receive greater weight; series subject to significant revisions or long lags receive lower weight.

  • Initial Jobless Claims: Highest-frequency hard data. First to move when labor conditions deteriorate.
  • Job Openings (JOLTS): Employer demand signal. Falling openings precede rising unemployment by 3–6 months.
  • Quits Rate (JOLTS): Worker confidence. Workers only quit voluntarily when they expect to find something better - rising AI displacement anxiety shows up as falling quits.
  • Layoffs & Discharges (JOLTS): Official involuntary separation data - the hard-data equivalent of tracked layoff events.
  • Information Sector Employment: AI-exposed sector headcount. The closest public data proxy for GenAI-driven displacement, having declined since late 2022.
  • Unemployment Rate: Included for credibility and comparability. Significantly lagged and subject to composition effects.

Layer 2 - Sentiment (30%)

Layer 2 captures real-time market sentiment by analysing current news headlines, Reddit posts, and industry RSS feeds using our proprietary scoring engine. This layer reflects fast-moving signals that hard economic data, with its publication lags, cannot capture.

Signals are filtered to require co-occurrence of AI/automation terms and employment terms - purely political or economic news with no labor market relevance is excluded before scoring. Cross-source duplicate stories are identified and deduplicated so that the same event reported by multiple outlets is only scored once.
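A simplified sketch of the relevance filter and deduplication; the keyword lists are illustrative and the production filter and duplicate matching are more sophisticated.

```python
import re

AI_TERMS = {"ai", "automation", "genai"}   # illustrative, not exhaustive
JOB_TERMS = {"layoff", "layoffs", "jobs", "hiring", "employment", "workers"}

def is_relevant(headline: str) -> bool:
    """Require co-occurrence of an AI/automation term and an employment term.

    Matching whole words avoids false hits like "airlines" containing "ai".
    """
    text = headline.lower()
    words = set(re.findall(r"[a-z]+", text))
    has_ai = bool(words & AI_TERMS) or "machine learning" in text
    return has_ai and bool(words & JOB_TERMS)

def dedupe(headlines):
    """Collapse cross-source duplicates via a naive normalised word-set key."""
    seen, unique = set(), []
    for h in headlines:
        key = frozenset(re.findall(r"[a-z]+", h.lower()))
        if key not in seen:
            seen.add(key)
            unique.append(h)
    return unique
```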

Each unique signal is scored on a −1.0 to +1.0 scale from the perspective of US workers and job-seekers. Scores are aggregated into an overall sentiment reading, which our engine also summarises in plain language along with an assessment of what current signals suggest about the near-term outlook.
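The aggregation details are internal; one simple way to roll per-signal scores into a 0–100 layer score, assuming an unweighted mean and a linear rescale (the production aggregation may weight signals by source or recency):

```python
def sentiment_layer_score(signal_scores):
    """Aggregate per-signal scores (-1.0..+1.0) into a 0-100 layer score.

    Simple mean plus linear rescale: -1.0 maps to 0, 0.0 to 50, +1.0 to 100.
    """
    if not signal_scores:
        return 50.0  # neutral reading when no signals are available
    mean = sum(signal_scores) / len(signal_scores)
    return (mean + 1) / 2 * 100
```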

Layer 3 - Confirmed Displacement Events

Layer 3 tracks verified AI-attributed layoff events — cases where a company publicly cited AI or automation as an explicit reason for cutting jobs. Unlike the macro FRED series (which measure general labor market health) or sentiment signals (which capture narrative tone), Layer 3 records actual confirmed job losses attributed to AI.

Events are extracted automatically from news headlines on every cron run using a structured AI extraction model, and augmented with a curated historical backfill of major AI-attributed layoffs since 2022. Each event records the company, sector, job count, date, and the verbatim reason AI was cited.

WHAT COUNTS AS A CONFIRMED EVENT

  • A specific company is identified by name
  • A specific number of jobs cut is reported
  • AI, automation, or machine learning is explicitly cited as the reason
  • The event is sourced from a news article or company press release
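The four criteria above can be encoded as a simple validator. The field names below are hypothetical, not the production schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DisplacementEvent:            # hypothetical schema for illustration
    company: Optional[str]          # specific company named
    jobs_cut: Optional[int]         # specific number of jobs reported
    ai_cited_reason: Optional[str]  # verbatim reason citing AI/automation
    source_url: Optional[str]       # news article or press release

def is_confirmed(event: DisplacementEvent) -> bool:
    """An event counts as confirmed only when all four criteria hold."""
    return (bool(event.company)
            and event.jobs_cut is not None and event.jobs_cut > 0
            and bool(event.ai_cited_reason)
            and bool(event.source_url))
```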

The primary metric derived from Layer 3 is the AI Displacement Ratio:

AI Displacement Ratio = AI-attributed job cuts ÷ Total US layoffs (FRED JTSLDL) × 100

A rising ratio means AI is becoming a larger share of overall displacement — even when total layoffs are stable.

Using FRED's official Layoffs & Discharges total (JTSLDL) as the denominator normalises AI-attributed cuts against the economic cycle. An absolute increase in AI layoffs during a recession is less significant than the same increase during a stable market — the ratio captures this distinction.
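A worked example of the formula with hypothetical figures:

```python
def ai_displacement_ratio(ai_attributed_cuts: int, total_layoffs: int) -> float:
    """AI-attributed job cuts as a percentage of total US layoffs (FRED JTSLDL)."""
    return ai_attributed_cuts / total_layoffs * 100

# Hypothetical month: 18,000 AI-attributed cuts against 1.8M total layoffs
# from JTSLDL yields a ratio of 1.0%.
ratio = ai_displacement_ratio(18_000, 1_800_000)
```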

SCORING INTEGRATION — PHASES

Phase 1 (current): Layer 3 data is displayed on the Trends page as context. Confirmed displacement events receive a sentiment score floor of −0.85 in Layer 2, ensuring real events are never under-scored by the model. No change to composite weights yet.

Phase 2 (planned): After ~90 days of live event collection, a 30-day rolling AI job-cut total will be added as a dedicated hard-data series (10% weight), reducing FRED hard data from 70% to 60% and producing a revised formula: Hard Data (60%) + AI Events (10%) + Sentiment (30%).
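The Phase 1 floor amounts to clamping a matched signal's sentiment; a minimal sketch, assuming the floor is applied per signal:

```python
def apply_event_floor(score: float, matches_confirmed_event: bool,
                      floor: float = -0.85) -> float:
    """Clamp a signal's sentiment to `floor` or below when it corresponds to
    a confirmed displacement event, so real events are never scored too mildly."""
    return min(score, floor) if matches_confirmed_event else score
```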

Limitations: AI-attributed events are self-reported by companies and sourced from public announcements. This data almost certainly under-counts displacement — companies rarely announce AI cuts directly, and indirect displacement (productivity gains reducing future headcount) is not captured. The ratio should be interpreted as a lower bound on AI's contribution to layoffs, not a definitive measure.

Composite Score

The final index score is a weighted blend of the two layer scores, each normalised to a 0–100 scale:

DisplaceIndex = Hard Data Score × 70% + Sentiment Score × 30%

The 70/30 split reflects our assessment that official government statistics are more reliable than real-time sentiment, while still giving meaningful weight to leading signals not yet captured in published data.
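In code, the composite is a one-line weighted blend of the two layer scores:

```python
def displace_index(hard_data_score: float, sentiment_score: float) -> float:
    """Blend the two 0-100 layer scores with the published 70/30 weights."""
    return 0.70 * hard_data_score + 0.30 * sentiment_score
```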

As confirmed AI displacement event data accumulates (Layer 3), the weights will be adjusted to introduce a dedicated AI events component. See the Layer 3 section for the planned Phase 2 formula.

Score Labels

75–100 - Strong Growth: Labor market is expanding robustly. Low displacement risk.

60–74 - Cautious Growth: Market is stable but growing moderately. Some displacement signals present.

40–59 - Transitional: Mixed signals. Neutral to uncertain displacement environment.

25–39 - Displacement Pressure: Significant stress evident. Notable displacement risk.

0–24 - High Displacement: Severe displacement pressure. Rapid automation-driven job losses likely.
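The bands above map scores to labels with simple threshold checks:

```python
def score_label(score: float) -> str:
    """Map a 0-100 DisplaceIndex score to its published label band."""
    if score >= 75:
        return "Strong Growth"
    if score >= 60:
        return "Cautious Growth"
    if score >= 40:
        return "Transitional"
    if score >= 25:
        return "Displacement Pressure"
    return "High Displacement"
```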

Occupation Risk Scoring

Each occupation page shows an AI Task Coverage score (0–100) representing how much of the occupation's day-to-day work today's AI can perform autonomously. Scores are derived from role-specific task analysis, with each task rated by two independent AI models to reduce single-model bias.

How it works

  1. Task sourcing — for occupations in the O*NET database (US Dept. of Labor), we use official task statements. For modern roles not yet in O*NET (e.g. SEO Manager, AI/ML Engineer), a set of 8–10 realistic, role-specific tasks is generated by Claude Sonnet 4-6 and reviewed for accuracy.
  2. AI tool enumeration — for each occupation, up to 8 real, commercially deployed AI tools that automate or augment work in that role are generated and stored (e.g. GitHub Copilot for software engineers, Aidoc for radiologists). These are injected into the scoring prompt so both models assess task coverage with awareness of the actual tools causing displacement — not just general AI capability.
  3. Per-task scoring by two models — each task is independently scored 0–100 by both Claude Sonnet 4-6 and GPT-4o, representing how much of that specific task current AI can handle autonomously. 0 = AI cannot help; 100 = AI fully replaces the human for that task.
  4. Consensus average — the two model scores are averaged per task to produce a consensus score. Hover any task bar on an occupation page to see the individual model scores. This two-model approach corrects for each model's tendency to over- or under-estimate its own capabilities.
  5. Weighted composite — Core tasks are weighted 2× over Supplemental tasks. The weighted average across all tasks produces the final AI Task Coverage score for the occupation.
  6. Risk bucket — the composite score maps to an automation risk level: Very High (80+), High (65–79), Medium (45–64), Low (25–44), Very Low (<25).
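Steps 3–6 can be sketched as follows, with per-task scores as plain tuples (the production data model is richer):

```python
def task_coverage_score(tasks):
    """Composite AI Task Coverage score from per-task model scores.

    `tasks` is a list of (claude_score, gpt_score, is_core) tuples; core
    tasks are weighted 2x over supplemental ones, per the methodology above.
    """
    weighted_sum = total_weight = 0.0
    for claude, gpt, is_core in tasks:
        consensus = (claude + gpt) / 2     # two-model consensus average
        weight = 2.0 if is_core else 1.0   # Core tasks weighted 2x
        weighted_sum += consensus * weight
        total_weight += weight
    return weighted_sum / total_weight

def risk_bucket(score: float) -> str:
    """Map the composite score to the published automation-risk level."""
    if score >= 80:
        return "Very High"
    if score >= 65:
        return "High"
    if score >= 45:
        return "Medium"
    if score >= 25:
        return "Low"
    return "Very Low"
```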

Why two models?

In our testing across 57 occupations, Claude Sonnet 4-6 consistently scored tasks 7–12 points higher than GPT-4o. This is a known pattern: language models tend to overestimate their own capabilities when self-evaluating, as their training optimises for appearing helpful and capable. GPT-4o conversely tends to anchor on the hardest human-judgment component of each task, producing more conservative estimates.

The consensus average sits between both extremes and is more robust than either model alone. All per-model scores are stored and visible on each occupation page.

O*NET data is public domain, produced by the US Department of Labor's Employment and Training Administration. We use the O*NET 29.1 bulk database release.

Data Sources

Source | Series | Frequency | Lag | Layer | Notes
--- | --- | --- | --- | --- | ---
FRED / Federal Reserve | Unemployment Rate | Monthly | ~4 weeks | Layer 1 | U-3 headline rate; inversely weighted
FRED / Federal Reserve | Job Openings (JOLTS) | Monthly | ~6 weeks | Layer 1 | Total nonfarm demand signal; 3–6 month leading indicator
FRED / Federal Reserve | Initial Jobless Claims | Weekly | 1 week | Layer 1 | Highest cadence hard data; inversely weighted
FRED / Federal Reserve | Quits Rate (JOLTS) | Monthly | ~6 weeks | Layer 1 | Worker confidence signal; rising quits indicate job market optimism
FRED / Federal Reserve | Layoffs & Discharges (JOLTS) | Monthly | ~6 weeks | Layer 1 | Official involuntary separation count; inversely weighted
FRED / Federal Reserve | Information Sector Employment | Monthly | ~4 weeks | Layer 1 | AI-exposed tech sector headcount; closest public proxy for GenAI displacement
FRED / Federal Reserve | Nonfarm Payrolls | Monthly | ~4 weeks | Supporting | Displayed for context; subject to significant revision
NewsAPI | AI & employment headlines | Every 6 hours | Real-time | Layer 2 | Filtered to require AI/automation + employment co-occurrence
Reddit | r/layoffs community posts | Every 6 hours | Real-time | Layer 2 | Raw worker sentiment; public JSON API
BLS / Reuters RSS | Official press releases | Every 6 hours | Real-time | Layer 2 | Official government economic releases
Layoffs.fyi | Tech layoff announcements | Every 6 hours | Hours | Layer 2 | Curated tracker of employer-confirmed layoff events
News reports / press releases | AI-attributed layoff events | Every 6 hours + backfill | Hours–days | Layer 3 | Structured events extracted by AI: company, job count, AI as stated reason
FRED / Federal Reserve | Layoffs & Discharges (JTSLDL) | Monthly | ~6 weeks | Layer 3 | Used as denominator to compute AI displacement ratio (AI cuts ÷ total layoffs)
O*NET (US Dept. of Labor) | Occupational task statements, skills, technology tools | Annual release | ~1 year | Occupation | Task statements for official occupations; modern roles use Claude-generated tasks. Role-specific AI tools enumerated per occupation. Scored by Claude Sonnet 4-6 + GPT-4o with tool context; consensus average used.

Limitations

Data lags

FRED monthly series are published 4–6 weeks after the reference period. The index reflects the most recently available data, which may not capture very recent structural shifts.

Sentiment noise

News sentiment is inherently noisy. A single high-profile layoff announcement can shift Layer 2 scores significantly even if the broader market is stable. The 70/30 weighting is designed to mitigate this.

US-centric data

All hard data series measure the US labor market. While news signals include international coverage, the index should not be interpreted as a global indicator. Regional expansion is planned for a future version.

AI scoring subjectivity

Automated sentiment classifications reflect patterns in training data and may exhibit biases. Individual classifications are not validated against human labels.

Not a forecast

The DisplaceIndex is a snapshot of current conditions, not a prediction of future employment levels. Leading indicators in the model provide some forward signal, but the index is primarily descriptive.

Sector attribution

Hard data series measure general labor market conditions, not AI-caused displacement specifically. A recession and an AI displacement wave can produce similar index readings. The Information Sector Employment series is included as the closest available public proxy for AI-specific impact.

AI self-assessment bias (occupation scores)

Occupation task scores are produced by AI models evaluating AI capabilities — an inherent conflict of interest. Both models used (Claude Sonnet 4-6 and GPT-4o) are trained to appear capable and helpful, which biases scores upward. The two-model consensus mitigates this but does not eliminate it. Scores should be treated as informed estimates, not ground truth. Human expert validation is planned for a future version.

LLM scoring is not occupation-specific

Claude Sonnet 4-6 and GPT-4o are used as general AI capability assessors, not as the tools that actually displace workers in each role. For physical and specialist occupations - truck drivers, radiologists, graphic designers - the real displacement comes from autonomous vehicles, medical imaging AI, and generative image tools respectively. The current pipeline mitigates this by scoring each occupation's tasks with role-specific, commercially deployed AI tools provided as context, grounding scores in real-world capability rather than general LLM self-assessment; the scoring judgment itself, however, still comes from general-purpose language models.

Occupation scores are snapshots, not forecasts

Task coverage scores reflect what AI can do today, not what it will be able to do in 12 or 24 months. Scores are recalculated periodically as model capabilities evolve. The assessedAgainstModel field on each occupation records which model version was used, enabling longitudinal comparison.

How to Cite

If you reference the DisplaceIndex in research or reporting, please cite:

DisplaceIndex (2026). AI Job Displacement Index [Data source]. Retrieved March 15, 2026, from https://displaceindex.com

Underlying FRED data is public domain courtesy of the Federal Reserve Bank of St. Louis. Occupational task data from O*NET, produced by the US Department of Labor's Employment and Training Administration. Sentiment analysis powered by our proprietary scoring engine.