Learning in the Wild: How Our Insight Engine Improves Over Time

August 28, 2025

We don’t just publish picks; we also track how the system learns from real games. This post explains the evaluation behind our Insights Result dashboard and updates the headline numbers: monthly accuracy, plus our “learning curve” view (keep only the most recent months, dropping the earliest), which together make model progress visible.

Current snapshot

By insight type

These counts cover finished matches only, restricted to insights whose predictionInsight source accuracy is ≥ 90%, each graded WON or LOST against the actual outcome.
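The filtering and grading step can be sketched as follows. This is a minimal illustration, not our production code; the field names (`match_status`, `source_accuracy`, `predicted_outcome`, `actual_outcome`) are hypothetical stand-ins for whatever the real schema uses.

```python
def grade_insights(insights, threshold=0.90):
    """Keep insights from finished matches whose source accuracy clears the
    threshold, and grade each one WON or LOST against the actual result.

    Field names here are illustrative, not our actual schema.
    """
    graded = []
    for ins in insights:
        # Pending or abandoned fixtures never enter the tallies.
        if ins["match_status"] != "FINISHED":
            continue
        # Only high-confidence insights (source accuracy >= 90%) are counted.
        if ins["source_accuracy"] < threshold:
            continue
        result = "WON" if ins["predicted_outcome"] == ins["actual_outcome"] else "LOST"
        graded.append({**ins, "result": result})
    return graded
```

Everything in the tables below is an aggregate over insights that survive this filter.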


Why a “learning” view?

Early-season data is noisy (transfers, managers, tactical shifts). As the engine ingests fresh matches and recalibrates priors, we expect performance to stabilize and improve. To visualize that, we publish two lenses:

  1. Monthly performance (won / total): how we did in each calendar month.
  2. Learning curve (drop oldest months): for each step i, drop the first i months and recompute accuracy on the kept months only. If accuracy rises as we keep more recent data, the model is learning useful season-specific signal.

Monthly performance (won / total)

| Month | Won | Total | Win % |
| --- | --- | --- | --- |
| Sept 2024 | 62 | 110 | 56% |
| Oct 2024 | 13 | 30 | 43% |
| Nov 2024 | 3 | 4 | 75% |
| Jan 2025 | 4 | 4 | 100% |
| Feb 2025 | 10 | 18 | 56% |
| Mar 2025 | 0 | 3 | 0% |
| Apr 2025 | 1 | 4 | 25% |
| May 2025 | 8 | 13 | 62% |
| Aug 2025 | 13 | 15 | 87% |

Notes:

  - Dec 2024 and Jun–Jul 2025 do not appear in the table: no insights cleared the ≥ 90% accuracy bar in those months.
  - Several months have very small samples (3–4 matches), so their win rates swing hard on single results.

Learning curve: keep only recent months

At step i, we drop the first i month(s) and recompute accuracy on the kept range.
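The table below can be reproduced directly from the monthly counts above. This is a small sketch of the computation (plain Python, no dependencies); the data literal just restates the monthly table.

```python
# Monthly (month, won, total) counts from the table above.
# Months with no qualifying insights are omitted.
monthly = [
    ("Sept 2024", 62, 110), ("Oct 2024", 13, 30), ("Nov 2024", 3, 4),
    ("Jan 2025", 4, 4), ("Feb 2025", 10, 18), ("Mar 2025", 0, 3),
    ("Apr 2025", 1, 4), ("May 2025", 8, 13), ("Aug 2025", 13, 15),
]

def learning_curve(rows):
    """At step i, drop the first i months and recompute won/total on the rest."""
    curve = []
    for i in range(len(rows)):
        kept = rows[i:]
        won = sum(w for _, w, _ in kept)
        total = sum(t for _, _, t in kept)
        curve.append((kept[0][0], kept[-1][0], won, total, round(100 * won / total)))
    return curve

for first, last, won, total, pct in learning_curve(monthly):
    print(f"{first} → {last}: {won}/{total} = {pct}%")
# First line: Sept 2024 → Aug 2025: 114/201 = 57%
# Last line:  Aug 2025 → Aug 2025: 13/15 = 87%
```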

| Kept range | Won | Total | Win % |
| --- | --- | --- | --- |
| Sept 2024 → Aug 2025 | 114 | 201 | 57% |
| Oct 2024 → Aug 2025 | 52 | 91 | 57% |
| Nov 2024 → Aug 2025 | 39 | 61 | 64% |
| Jan 2025 → Aug 2025 | 36 | 57 | 63% |
| Feb 2025 → Aug 2025 | 32 | 53 | 60% |
| Mar 2025 → Aug 2025 | 22 | 35 | 63% |
| Apr 2025 → Aug 2025 | 22 | 32 | 69% |
| May 2025 → Aug 2025 | 21 | 28 | 75% |
| Aug 2025 → Aug 2025 | 13 | 15 | 87% |

Takeaways:

  - Accuracy over the full season is 57% (114/201); dropping the earliest, noisiest months lifts it steadily, reaching 75% for May → Aug and 87% in Aug 2025 alone.
  - The broadly rising trend is consistent with the “learning” hypothesis above: the engine gains from recalibrating on in-season data.
  - The later steps rest on small samples (only 15 matches in Aug 2025), so treat the tail of the curve with caution.

How we keep it honest

  - Only finished matches count; pending fixtures never enter the tallies.
  - Every insight is graded WON or LOST against the actual result, with the same ≥ 90% source-accuracy filter applied uniformly across all months.
  - We publish losing months (Mar 2025: 0/3) alongside winning ones; nothing is dropped after the fact.

What’s next

If there’s a specific league or market you want us to slice, ping us—we’ll add it to the dashboard and keep learning in public.
