

Brier Score

Accuracy score for probabilistic forecasts — mean squared difference between predicted probability and actual outcome. Lower is better; perfect = 0, worst = 1. Closelook scores every Predictions call with Brier alongside log-loss so the public scoreboard rewards calibration, not confident wrongness.

Definition & Context

The Brier Score grades probabilistic forecasts: for each binary event you predict a probability p, and once the event resolves (0 or 1), the error is (p − outcome)². Averaging across all forecasts gives a Brier Score between 0 (perfect on every call) and 1 (maximally wrong on every call). Unlike plain accuracy, Brier penalises both overconfidence and underconfidence — calling something a 90% favourite and being wrong hurts more than a 60% call going against you.
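The definition above translates directly into a few lines of code. This is a minimal sketch, not Closelook's actual scoring code — the function name and inputs are illustrative:

```python
def brier_score(probs, outcomes):
    """Mean squared difference between forecast probability (0..1)
    and the resolved binary outcome (0 or 1). Lower is better."""
    assert len(probs) == len(outcomes) and len(probs) > 0
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Overconfidence costs more: a 90% call that misses scores
# (0.9 - 0)^2 = 0.81, while a 60% call that misses scores only
# (0.6 - 0)^2 = 0.36.
```

A perfect forecaster scores 0; averaging confident misses like the two above yields 0.585, well into coin-flip-or-worse territory.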

Brier is decomposable into reliability (are your 70% forecasts right 70% of the time?), resolution (do you separate likely from unlikely outcomes?) and uncertainty (the inherent base rate of the events themselves). Combined with log-loss — which punishes confident wrongness more aggressively — it is the industry standard for forecasting tournaments, from Metaculus to weather services. A constant 50% forecast scores exactly 0.25, so forecasts that look clever but run at Brier > 0.25 are doing worse than a coin flip.

Why It Matters for Investors

Investing is forecasting in disguise: every trade is an implicit probability bet. Most investors never keep score because opinion feels more comfortable than calibration. Closelook’s Predictions subsite exists to invert that habit — each call carries an explicit probability, a horizon and an invalidation level, and the public scoreboard is ranked by Brier + log-loss. A forecaster with modest accuracy but tight calibration often beats a loud forecaster with spotty accuracy.

Related Concepts

Brier connects to Alpha (risk-adjusted outperformance is the financial analogue of calibrated forecasting) and the Directional Alpha framework behind Closelook’s pattern library.

← Back to Glossary