
Engine Methodology

A finance-grade walkthrough of the Ambrosia rNPV engine: how it values assets, how calibration works, and where the model is honestly not ready. Written for BD teams, buy-side analysts, and pharma finance professionals who need a model they can defend in front of an investment committee.

1. What the engine computes

The engine takes a pharmaceutical asset profile and produces a risk-adjusted net present value (rNPV) plus an implied deal value decomposition (upfront, milestones, royalties). It is calibrated to Phase 2 / Phase 3 licensing and co-development deals — the segment where intrinsic-value modeling matches how real negotiations anchor. Early-stage strategic upfronts, acquisitions, and approved-asset commercialization handoffs are explicitly out of scope (see limitations).

2. Fourteen modeling dimensions

Modern biopharma deal valuation cannot be reduced to a single rNPV number. The Ambrosia engine composes fourteen independently calibrated dimensions, each a distinct correction to the simplistic “peak sales × PoS × discount rate” frame (minimal sketches of dimensions 1-4, 9, and 12 follow the list):

  1. rNPV core. Phase-transition probabilities × peak sales × discount rate × market-access delay, summed over projected cash flows.
  2. Monte Carlo. 10,000-path simulation with phase-dependent correlation matrix, conditional floors, and fat-tailed peak sales distribution.
  3. Real options. CRR binomial-lattice overlay for early-stage assets where compound options dominate intrinsic NPV.
  4. Ensemble valuation. Inverse-variance blend of rNPV, comparable-transaction median, and real-options value — headline number robust to method-specific bias.
  5. Deal structure optimizer. Runs all five deal types (licensing, acquisition, codev, option, collaboration), ranks them by total value to the licensor, and surfaces a recommendation only when it is ≥20% better than the current structure.
  6. Counterparty premiums. Per-buyer historical premium vs. peer medians across 39 large buyers (quarterly refresh from disclosed deals).
  7. RWE auto-tuning. Weekly backtest against 164 approved-drug trajectories produces bounded (±15%) delta proposals for key calibration parameters.
  8. Risk decomposition. 4-bucket attribution (clinical, commercial, manufacturing, regulatory) via counterfactual re-runs with each risk source neutralized.
  9. Macro factors. Live 10Y Treasury yield, biotech beta, equity risk premium (daily FRED refresh), clamped so final WACC stays in [5%, 25%].
  10. Subpopulation modeling. Mutation status, demographic, severity — 31 evidence-cited entries (FDA labels, IQVIA 2024-2025).
  11. Patent cliff timing. Per-indication market-leader LOE + biosimilar dates, mapped to the asset’s projected launch year.
  12. Combination therapy. Effective revenue multiplier when the asset is used in a combo regimen (comboFraction × revenueShare, floored at 0.25).
  13. Geographic decomposition. US / EU5 / Japan / China / RoW revenue curves with per-geo launch delay, ramp, and pricing multiplier.
  14. Time-windowed PoS. Rolling cohort phase-transition rates (2014-2024, 2019-2024, 2021-2024) for indications where PoS has shifted materially (Alzheimer’s, NASH, obesity).

Dimensions 1-4 always run. Dimensions 5-14 are opt-in via calculator inputs or feature flags — default off until the backtest validates each one individually.
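For concreteness, here is a minimal sketch of the dimension-1 arithmetic in TypeScript. The names are illustrative, not the engine's actual API; the point is the shape of the computation: risk-adjust, delay, discount, sum.

// Dimension 1 sketch: scale each projected cash flow by the cumulative
// phase-transition probability, shift it out by the market-access delay,
// and discount at the WACC. All identifiers here are hypothetical.
interface YearFlow {
  yearFromToday: number;
  netCashFlow: number; // projected net cash flow, USD millions
}

function rnpvCore(
  flows: YearFlow[],
  cumulativePoS: number,    // product of phase-transition probabilities
  wacc: number,             // discount rate, e.g. 0.12
  accessDelayYears: number  // market-access delay pushes every flow out
): number {
  return flows.reduce(
    (sum, f) =>
      sum +
      (cumulativePoS * f.netCashFlow) /
        Math.pow(1 + wacc, f.yearFromToday + accessDelayYears),
    0
  );
}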
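Dimension 2, reduced to its core, looks like the sketch below. The phase-dependent correlation matrix and conditional floors are omitted, and a lognormal draw stands in for the fat-tailed peak sales distribution; everything here is a simplified stand-in rather than the engine's code.

// Dimension 2 sketch: per path, draw a Bernoulli phase outcome and a
// lognormal (fat right tail) peak sales figure, then average across paths.
function standardNormal(): number {
  // Box-Muller transform; 1 - Math.random() avoids log(0)
  const u1 = 1 - Math.random();
  const u2 = Math.random();
  return Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

function monteCarloValue(
  paths: number,             // e.g. 10_000
  pos: number,               // probability of reaching market
  logMean: number,           // mean of log peak sales
  logSd: number,             // standard deviation of log peak sales
  valuePerPeakDollar: number // discounted value per unit of peak sales
): number {
  let total = 0;
  for (let i = 0; i < paths; i++) {
    if (Math.random() < pos) {
      const peakSales = Math.exp(logMean + logSd * standardNormal());
      total += peakSales * valuePerPeakDollar;
    }
  }
  return total / paths;
}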
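Dimension 3 overlays a Cox-Ross-Rubinstein binomial lattice. The engine handles compound options; the sketch below prices a single deferred-investment option only, to illustrate the lattice mechanics (parameter names are ours, not the engine's).

// CRR sketch: value the option to pay `investment` at expiry for an
// underlying program worth `value` today, via backward induction.
function crrOptionValue(
  value: number,      // present value of the underlying asset
  investment: number, // strike: cost to exercise (e.g. next-phase spend)
  sigma: number,      // annual volatility of the underlying value
  riskFree: number,   // continuously compounded risk-free rate
  years: number,
  steps: number
): number {
  const dt = years / steps;
  const u = Math.exp(sigma * Math.sqrt(dt)); // up factor
  const d = 1 / u;                           // down factor
  const p = (Math.exp(riskFree * dt) - d) / (u - d); // risk-neutral up prob.

  // Terminal payoffs at each lattice node
  let payoffs: number[] = [];
  for (let i = 0; i <= steps; i++) {
    const terminal = value * Math.pow(u, steps - i) * Math.pow(d, i);
    payoffs.push(Math.max(terminal - investment, 0));
  }

  // Backward induction to today
  for (let step = steps; step > 0; step--) {
    const next = payoffs;
    payoffs = [];
    for (let i = 0; i < step; i++) {
      payoffs.push(Math.exp(-riskFree * dt) * (p * next[i] + (1 - p) * next[i + 1]));
    }
  }
  return payoffs[0];
}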
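Dimension 4's blend is standard inverse-variance weighting: each method's estimate is weighted by the reciprocal of its variance, so noisier methods contribute less to the headline number. A minimal sketch (again with illustrative names):

// Dimension 4 sketch: inverse-variance blend across the three methods
// (rNPV, comparable-transaction median, real options).
interface MethodEstimate {
  value: number;    // the method's valuation
  variance: number; // the method's estimated error variance
}

function ensembleValue(estimates: MethodEstimate[]): number {
  const weights = estimates.map((e) => 1 / e.variance);
  const totalWeight = weights.reduce((a, b) => a + b, 0);
  const weightedSum = estimates.reduce(
    (sum, e, i) => sum + weights[i] * e.value,
    0
  );
  return weightedSum / totalWeight;
}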
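Two of the scalar adjustments are simple enough to state inline. For dimension 9, the CAPM-style sum below is an assumption on our part (the list above specifies only the three inputs and the clamp); dimension 12's multiplier is given directly in the list.

// Dimension 9 sketch: assemble a discount rate from live macro inputs,
// then clamp so the final WACC stays in [5%, 25%]. The CAPM-style sum is
// an assumption; only the inputs and the clamp are specified above.
function clampedWacc(
  treasury10y: number,
  biotechBeta: number,
  equityRiskPremium: number
): number {
  const raw = treasury10y + biotechBeta * equityRiskPremium;
  return Math.min(0.25, Math.max(0.05, raw));
}

// Dimension 12: effective revenue multiplier for combo regimens,
// comboFraction × revenueShare, floored at 0.25.
const comboMultiplier = (comboFraction: number, revenueShare: number): number =>
  Math.max(0.25, comboFraction * revenueShare);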

3. Calibration framework (Option B rigor)

Most engines publish source citations and call that rigor. Ours does that and also measures accuracy empirically. The sections below lay out the discipline we follow: how accuracy is measured (section 4), where the model is not ready (section 5), and how to reproduce the backtest yourself (section 7).

4. Accuracy measurement

We score the engine’s predicted implied deal value against real disclosed terms from 251 biopharma licensing, co-development, acquisition, and collaboration deals (2017–2026). For each deal, the engine is fed the asset profile known at deal date and computes an implied upfront. The hit rate is the share of deals where the absolute error falls within a tolerance band.
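As a sketch of the scoring loop (the 30% band and the field names below are illustrative, not the published tolerance):

// Hit-rate sketch: the share of deals whose relative error on the implied
// upfront falls within a tolerance band.
interface ScoredDeal {
  predictedUpfront: number; // engine output at deal date, USD millions
  disclosedUpfront: number; // actual disclosed upfront
}

function hitRate(deals: ScoredDeal[], tolerance = 0.3): number {
  const hits = deals.filter(
    (d) =>
      Math.abs(d.predictedUpfront - d.disclosedUpfront) / d.disclosedUpfront <=
      tolerance
  );
  return hits.length / deals.length;
}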

Core scope (Phase 2 / 3 licensing + codev, n=69) is the primary calibration target — the segment where intrinsic-value modeling maps onto market clearing price. Full scope (all 251) is reported for transparency but includes segments (early-stage option value, approved commercialization) where single-asset rNPV is structurally the wrong frame.

Accuracy is measured on every calibration round against the held-out test set. We track failed rounds alongside wins in the internal iteration log — the calibration journey itself is part of the model’s methodology.

5. Honest limitations

The model is not ready for early-stage strategic upfronts, acquisitions, or approved-asset commercialization handoffs: the out-of-scope segments named in section 1, where single-asset rNPV is structurally the wrong frame (see the full-scope caveat in section 4).

6. Software architecture

The engine has a five-layer bug-detection system designed to prevent silent drift.
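The individual layers are not enumerated here, but one pattern consistent with the design is the baseline diff used in section 7: re-run the backtest, compare per-deal errors to the committed baseline, and fail loudly on any unexplained movement. The sketch below assumes a flat { dealId: error } shape for baseline-errors.json, which is our assumption, not the repo's documented schema.

// Hypothetical drift guard (illustrative): read the committed baseline,
// compare against the current run, and throw if any per-deal error moved.
import { readFileSync } from "fs";

function assertNoSilentDrift(
  current: Record<string, number>,
  tolerance = 1e-6
): void {
  const baseline: Record<string, number> = JSON.parse(
    readFileSync("__tests__/backtest/baseline-errors.json", "utf8")
  );
  for (const [dealId, baseErr] of Object.entries(baseline)) {
    const nowErr = current[dealId];
    if (nowErr === undefined || Math.abs(nowErr - baseErr) > tolerance) {
      throw new Error(`Backtest drift on ${dealId}: ${baseErr} -> ${nowErr}`);
    }
  }
}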

7. Reproducing the backtest

The full backtest is reproducible in the open-source repo:

# Clone and install
git clone https://github.com/ikildani/ambrosia-benchmarker
cd ambrosia-benchmarker && npm install

# Run baseline
npx tsx scripts/run-deal-backtest.ts

# A/B test individual flags
TIER4_MACRO=on npx tsx scripts/run-deal-backtest.ts
TIER4_SUBPOP=on npx tsx scripts/run-deal-backtest.ts

# Per-round calibration diff
git diff __tests__/backtest/baseline-errors.json

Every commit to __tests__/backtest/baseline-errors.json updates the live accuracy state read by the internal accuracy tracking system.

8. Model governance

For regulated users (banks, asset managers subject to SR 11-7), we treat the model like a model.

Questions, critique, or your own backtest?

We welcome rigorous challenge. Email issa@ambrosiaventures.co with a deal you think we got wrong, and we’ll include it in the next calibration round with the reason documented in the iteration log.