Simulating Upside Inflation: A Reproducible Monte Carlo Model
2026-01-23 12:00:00

Open-source Monte Carlo model to quantify 2026 inflation tail risk from metals spikes, tariffs and Fed-independence shocks. Reproducible and actionable.

Why your data-hungry models still miss inflation tail risk

Technology teams, analysts and quant shops struggle with one recurring pain point: reliable, reproducible models that expose tail inflation risk from realistic shock channels. You have CPI series, you can calibrate AR models, but when metals spike, tariffs change, or central-bank credibility is threatened, the simple models break. This piece gives you an open-source, reproducible Monte Carlo framework tuned for 2026 that explicitly encodes metals shocks, supply-side tariffs and probabilistic Fed independence shocks so you can quantify upside inflation outcomes — not just expected paths.

Executive summary

We present a transparent, reproducible Monte Carlo framework you can run locally or in CI that simulates headline inflation outcomes for 2026 under multiple shock channels. Key features:

  • Compound-process metals shocks (Poisson jumps with heavy-tailed sizes) calibrated to historical LME spikes;
  • Tariff/policy shock module that maps tariff changes to producer price shocks with lagged pass-through;
  • Fed independence shock as a discrete event changing inflation expectations and policy reaction function;
  • Reproducibility: deterministic random seeds, requirements.txt, Dockerfile, and notebook-ready code on GitHub (open-source MIT license);
  • Outputs: full distribution of headline and core inflation for 2026, percentiles, and tail probabilities (e.g., P(inflation > 5%)).

Why this matters in 2026

Macro conditions entering 2026 are different from the contained-disinflation narrative of early 2024–25. Late-2025 signals include persistent services inflation, episodic commodity upticks, and heightened political rhetoric around central bank independence. Technology teams building products or risk managers pricing contracts need numerical tail estimates, not binary scenarios. This model answers: how much does a metals spike combined with tariff hikes and a Fed credibility shock lift the right-hand tail of inflation in 2026?

Model overview: components and logic

The model is modular: core inflation dynamics + additive shock modules + observation/pass-through layers. Each Monte Carlo draw produces a simulated 2026 annual inflation outcome (headline and core). Repeat N times (default N = 200,000) to build the distribution.
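The outer loop can be sketched as follows. Here `draw_one_year` is a stand-in for the sum of the baseline process and the shock modules described in the sections that follow; the function names are illustrative, not the repo's actual API:

```python
import numpy as np

def draw_one_year(rng):
    """Placeholder for one simulated 2026 outcome: in the full model this
    sums baseline core dynamics plus the metals, tariff and Fed modules."""
    return rng.normal(3.0, 1.0)  # stand-in distribution, percent per year

def run_simulation(n=200_000, seed=42):
    """Outer Monte Carlo loop: n independent draws build the distribution."""
    rng = np.random.default_rng(seed)
    return np.fromiter((draw_one_year(rng) for _ in range(n)),
                       dtype=float, count=n)

results = run_simulation(n=10_000)
```

Because the draws are independent, the loop parallelizes trivially (e.g., one seeded sub-stream per worker).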

1) Baseline inflation process (core dynamics)

We model baseline core inflation (ex-food & energy) as a mean-reverting stochastic process with AR(1) + Gaussian error:

core_t = phi * core_{t-1} + (1-phi) * mu + epsilon_t, where epsilon_t ~ N(0, sigma^2)

Calibration notes:

  • mu is set to the 2023–2025 average core inflation (core PCE or core CPI, depending on dataset) — default ~2.8% annualized but configurable;
  • phi is estimated from monthly core series AR(1) fit covering 2010–2025 to capture persistence;
  • sigma is the historical residual standard deviation; allow user override for stress tests.
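As a minimal sketch of the recursion above (phi, sigma, and the starting value are illustrative placeholders for the fitted estimates):

```python
import numpy as np

def simulate_core_path(rng, months=12, phi=0.9, mu=2.8, sigma=0.25, core0=3.0):
    """Simulate core_t = phi*core_{t-1} + (1-phi)*mu + eps_t,
    eps_t ~ N(0, sigma^2). Values are annualized percent; phi=0.9 and
    sigma=0.25 are placeholders for estimates from calibrate.py."""
    path = np.empty(months)
    prev = core0
    for t in range(months):
        prev = phi * prev + (1.0 - phi) * mu + rng.normal(0.0, sigma)
        path[t] = prev
    return path

path = simulate_core_path(np.random.default_rng(42))
```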

2) Metals shock module (compound Poisson)

Metals shocks drive input-cost inflation for manufacturing and construction, and they have heavy tails. We model metals events as a compound Poisson process:

  • Number of jumps in the year ~ Poisson(lambda_m); lambda_m is calibrated from 2010–2025 LME extreme events (copper, nickel, aluminum). For 2026 baseline, we set lambda_m = 0.9 (i.e., ~1 sizable event per year on average) but let users increase it to reflect geopolitical risk.
  • Jump sizes drawn from a lognormal (mu_j, sigma_j) to reflect right-skew; calibrate mu_j/sigma_j to historical % moves on jump days (20–60% for severe spikes).
  • The metals shock feeds into producer price inflation (PPI) with a pass-through factor alpha_m and a lag distribution (e.g., 3–9 months distributed lag).
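A sketch of the compound Poisson draw. The jump-size parameters are chosen so the median jump is ~30%, inside the 20–60% range cited above; in practice calibrate.py would replace them with fitted values:

```python
import numpy as np

def annual_metals_shock(rng, lam=0.9, median_jump=0.30, sigma_j=0.5):
    """Compound Poisson: Poisson(lam) jump events per year, lognormal
    jump sizes. Returns the total fractional metals-price shock for the
    year (e.g., 0.35 means +35%)."""
    n_jumps = rng.poisson(lam)
    if n_jumps == 0:
        return 0.0
    # lognormal parameterized so the median jump equals `median_jump`
    sizes = rng.lognormal(mean=np.log(median_jump), sigma=sigma_j, size=n_jumps)
    return float(sizes.sum())

rng = np.random.default_rng(42)
shocks = [annual_metals_shock(rng) for _ in range(10_000)]
```

With lam = 0.9, roughly 40% of simulated years see no jump at all, which is what produces the heavy right tail relative to a Gaussian shock.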

3) Tariff module (policy-driven persistent shock)

Tariffs are modeled as deterministic or stochastic policy moves that raise import costs and pass through to consumer prices over time. Mechanically:

  • Tariff event indicator T = 0/1 for whether new tariffs (or tariff increases) occur in 2026. Assign probability p_t based on observed USTR actions and early-2026 trade policy signals; default p_t = 0.25 but tunable.
  • Tariff magnitude mapped to an effective ad-valorem shock s_t (e.g., a 10% tariff on a producer basket yields an effective input-cost shock equal to weighted exposure).
  • Pass-through modeled with a polynomial distributed lag across 12 months; long-run pass-through factor alpha_tariff in range 0.2–0.7 depending on sector.
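A minimal distributed-lag sketch; triangular weights stand in for the polynomial lag, and the alpha and s_t defaults follow the ranges above:

```python
import numpy as np

def tariff_passthrough_path(s_t=0.10, alpha=0.5, months=12):
    """Spread a one-off effective tariff shock s_t across `months` using
    front-loaded triangular weights that sum to alpha (the long-run
    pass-through). Returns each month's contribution to consumer-price
    inflation, in the same units as s_t."""
    w = np.arange(months, 0, -1, dtype=float)  # 12, 11, ..., 1
    w *= alpha / w.sum()                       # normalize so sum(w) == alpha
    return s_t * w

path = tariff_passthrough_path()
```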

4) Fed independence shock (credibility channel)

The Fed independence shock is modeled as an event that probabilistically occurs and alters the expectations and policy reaction function:

  • Event probability p_fed calculated from political calendar and late-2025 rhetoric signals; default p_fed = 0.10 (10%).
  • If the event occurs, inflation expectations temporarily rebase upward by delta_e (e.g., 50–120 bps) and the policy response weakens for a horizon H months (modeled as a reduced phi or an exogenous additive inflation term).
  • Crucially, this is not a shock to real rates only; it affects nominal expectations and thus wage/price-setting, raising pass-through of other shocks.

5) Correlations and conditional interactions

Shocks are not independent. We encode conditional dependencies:

  • Tariff increases raise the probability of metals supply disruptions (higher lambda_m) because tariffs often follow trade restrictions;
  • A Fed independence event increases pass-through parameters (alpha_m and alpha_tariff) because expectations shift;
  • Users can override correlation matrices or use copulas to model tail dependence explicitly.
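One simple way to impose joint dependence is a Gaussian copula over the event triggers. Note that a Gaussian copula has zero tail dependence, so for the explicit tail clustering the bullet above describes, a t-copula is the stricter choice:

```python
import math
import numpy as np

def correlated_uniforms(rng, rho=0.6, dim=3):
    """Gaussian copula: draw `dim` uniforms whose latent normals share
    pairwise correlation rho; compare each uniform against the event
    probabilities (metals, tariff, Fed) to trigger shocks jointly."""
    cov = np.full((dim, dim), rho)
    np.fill_diagonal(cov, 1.0)
    z = rng.multivariate_normal(np.zeros(dim), cov)
    std_normal_cdf = lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0)))
    return np.array([std_normal_cdf(v) for v in z])

u = correlated_uniforms(np.random.default_rng(42))
```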

Implementation and reproducibility

We provide a production-ready reference implementation in Python with the following reproducibility guarantees:

  • Deterministic PRNG with configurable seed (numpy.random.default_rng(seed)).
  • Requirements in requirements.txt with pinned versions (numpy, pandas, scipy, statsmodels, matplotlib, seaborn; altair optional).
  • Dockerfile and a Makefile for one-command runs: make run-n100k builds the container and runs 100k simulations.
  • Notebook with step-by-step calibration: data download scripts (BLS API, BEA API, LME CSV), parameter-estimation cells, and sensitivity analysis sections.
  • MIT license and GitHub Actions workflow to run test simulations and regenerate reproducible figures.

Key files in the repo

  • simulate_inflation.py — core Monte Carlo loop;
  • calibrate.py — functions to estimate phi, sigma, lambda_m and jump params from historical series;
  • data_fetch.py — automated scripts to pull BLS CPI, BEA PCE, LME CSVs and USTR tariff notices;
  • notebooks/2026_inflation_scenarios.ipynb — walkthrough and charts;
  • docker/Dockerfile and ci/workflow.yaml.

Calibration: data sources and parameter choices

We anchor calibration to public datasets so outputs are auditable and citable:

  • BLS CPI (headline and core monthly series) for baseline and residual estimation;
  • BEA PCE Core for an alternative baseline;
  • London Metal Exchange (LME) daily prices for copper, aluminum, nickel — used to calibrate jump frequency and sizes;
  • USTR tariff actions and US harmonized code changes to map to tariff probabilities and magnitudes;
  • Federal Reserve public statements and FOMC minutes to construct a simple Fed-independence risk indicator (the frequency of public political criticism and legislative proposals in late 2025 raises p_fed).

We make all calibration scripts available so you can replace inputs, e.g., use alternative metals lists or regional tariff baskets.

Running scenarios: default and stress

Default run (recommended): N = 200k, seed = 42, baseline parameters anchored to 2010–2025 history. Output includes median, 75th, 90th, 95th, 99th percentiles for 2026 headline and core inflation.

Stress scenarios to run:

  1. Metals-only: double lambda_m and increase jump sigma_j by 50% to model synchronized commodity runs;
  2. Tariffs-only: set p_t=1 and s_t to represent a 10% effective basket shock with 50% long-run pass-through;
  3. FedShock-only: set p_fed=1 with delta_e=100 bps and H=12 months;
  4. Combined tail: all three modules stressed simultaneously, with correlation enforcement (copula rho=0.6) to simulate worst-case dependencies.
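One way to encode those stress runs is a scenario dictionary of parameter overrides. This is a hypothetical sketch — the key names mirror the symbols used in this article, not necessarily the repo's actual CLI flags:

```python
# Baseline parameters from the module sections; key names are illustrative.
BASELINE = {"lambda_m": 0.9, "sigma_j": 0.5, "p_t": 0.25, "s_t": 0.10,
            "p_fed": 0.10, "delta_e": 1.0, "horizon": 12, "copula_rho": 0.0}

SCENARIOS = {
    "metals_only":   {**BASELINE, "lambda_m": 1.8, "sigma_j": 0.75},
    "tariffs_only":  {**BASELINE, "p_t": 1.0, "alpha_tariff": 0.5},
    "fed_only":      {**BASELINE, "p_fed": 1.0, "delta_e": 1.0},
    "combined_tail": {**BASELINE, "lambda_m": 1.8, "sigma_j": 0.75,
                      "p_t": 1.0, "p_fed": 1.0, "copula_rho": 0.6},
}
```

Keeping scenarios as plain data like this makes them easy to version-control and diff alongside the simulation code.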

Interpreting outputs: examples and caveats

Example (illustrative): a baseline run returns a median 2026 headline inflation of 3.0% with a 95th percentile at 5.8%. The combined-tail scenario pushes the 95th to 8.9% and the 99th to 12% — signaling true tail events where policy and input costs amplify each other.
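Computing those percentiles and tail probabilities from a vector of simulated draws is straightforward; the normal draws below are only a placeholder for real model output:

```python
import numpy as np

def summarize(draws, thresholds=(5.0, 6.0)):
    """Percentiles and tail probabilities P(inflation > thr) from a vector
    of simulated annual inflation outcomes (percent per year)."""
    pct = {p: float(np.percentile(draws, p)) for p in (50, 75, 90, 95, 99)}
    tails = {thr: float(np.mean(draws > thr)) for thr in thresholds}
    return pct, tails

rng = np.random.default_rng(42)
draws = rng.normal(3.0, 1.2, size=200_000)  # placeholder for model output
pct, tails = summarize(draws)
```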

Important caveats:

  • Model outputs are conditional on parameter choices and calibration windows. Always run sensitivity sweeps.
  • Events like wars, global supply-chain rearrangements, or sudden energy shocks are captured only if encoded as shock types — extend the model accordingly.
  • We use reduced-form pass-throughs; sector-specific models (auto, semiconductors, construction) require disaggregation.

Actionable recommendations for technology and risk teams

Use the model to operationalize inflation tail risk:

  • Embed the simulation in scenario pipelines: schedule weekly runs that update with the latest metals prices and tariff announcements.
  • Trigger alerts based on tail-probability thresholds (e.g., if P(inflation > 6%) > 5%), and link alerts to your hedging playbooks and observability stack.
  • Stress-test contracts that have CPI-indexing or long-term pricing clauses; compute expected worst-case profit/loss at the 95th percentile.
  • Visualize uncertainty for non-technical stakeholders: cumulative probability charts, fan charts, and scenario matrices help communicate risk without model-detail overload.
  • Keep it reproducible: pin dependency versions, use deterministic seeds, and store calibration artifacts (fitted phi, sigma, lambda_m) in version control.

Advanced extensions (for quant teams)

If you’re building a production-grade risk engine, consider these extensions:

  • Use multivariate copulas to model tail dependence between metals, energy prices, and FX;
  • Replace the AR(1) core with a state-space model (Kalman filter) that estimates unobserved trend inflation and decomposes shocks;
  • Introduce structural supply-chain network models to propagate localized tariff/embargo shocks across sectors;
  • Calibrate jump processes with Extreme Value Theory (EVT) for more conservative tail estimates;
  • Integrate market-based expectations (TIPS spreads, inflation swaps) to inform p_fed and delta_e through option-implied densities.

Transparency and methodology notes

We follow four transparency principles:

  1. All data-fetch and calibration scripts are public so any user can recompute parameters;
  2. Random seeds and run metadata are logged for each simulation;
  3. Assumptions (e.g., pass-through rates, event probabilities) are documented and exposed as CLI flags or notebook variables;
  4. Model limitations are listed prominently in the README and notebooks so results are not taken out of context.

Practical example: quick-start

Steps to reproduce a default 200k-run on your laptop or cloud instance:

  1. Clone the repo (git clone <repo>).
  2. Create a virtualenv and install pinned requirements: pip install -r requirements.txt.
  3. Run data_fetch.py to pull the BLS and LME series.
  4. Run calibrate.py to compute baseline parameters.
  5. Run simulate_inflation.py --n 200000 --seed 42 --out results_200k.csv.
  6. Open notebooks/2026_inflation_scenarios.ipynb for visualizations and sensitivity sweeps.

What the 2026 signal looks like today (late-2025/early-2026 context)

Market signals in late-2025 include stronger-than-expected wage growth in select sectors, episodic LME moves tied to geopolitical supply restrictions, and a wave of tariff notices in specific categories. Political commentary around the Fed has elevated p_fed in our baseline compared to pre-2024 levels. The model's job is not to predict whether a shock will happen but to quantify the consequences if it does — especially when shocks interact.

Conclusion: use the model to move from qualitative fear to quantitative preparedness

Tail risks matter. For engineering and finance teams, the cost of being underprepared for an inflation shock can be large. This Monte Carlo framework converts qualitative threats — metals spikes, tariffs, Fed independence risks — into probabilistic outcomes you can act on. It’s open-source, reproducible, and configurable for your data and risk tolerance.

Data-first, transparent modeling beats guesswork. Run the simulation, stress the parameters, and embed the outputs into your risk workflows.

Call to action

Get the code, run the baseline, and produce a tailored scenario report for your product or portfolio this week. Visit the GitHub repo (MIT license) to clone the project, try the notebooks, and open issues or pull requests with alternative calibrations or extensions. If you embed the simulation into a CI pipeline, share your alerts and dashboards with the community so we can improve the model together.

Ready to quantify 2026 upside inflation risk? Clone, run, and report back — the repository includes a template report that you can adapt for leadership or regulatory submissions.
