Learning in the Wild: How Our Insight Engine Improves Over Time
August 28, 2025
We don’t just publish picks; we also track how the system learns from real games. This post explains the evaluation behind our Insights Result dashboard and updates the headline numbers, the monthly accuracy breakdown, and our “learning curve” view (drop the earliest months, keep only the most recent) to make model progress visible.
Current snapshot
- Matches included: 161
- Insights (all): 201
- Won: 114
- Lost: 87
- Win rate: 57% (114 / 201)
By insight type

| Insight type | Won | Total | Win % |
|---|---|---|---|
| OVER/UNDER 2.5 | 39 | 71 | 55% |
| BTTS | 43 | 71 | 61% |
| RESULT (1X2) | 32 | 59 | 54% |
All figures cover finished matches only, include insights whose predictionInsight source accuracy is ≥ 90%, and are validated as WON/LOST against actual outcomes.
Why a “learning” view?
Early-season data is noisy (transfers, managerial changes, tactical shifts). As the engine ingests fresh matches and recalibrates its priors, we expect performance to stabilize and improve. To visualize that, we publish two lenses:
- Monthly performance (won / total): how we did in each calendar month.
- Learning curve (drop oldest months): at each step i, drop the first i months and recompute accuracy on the kept months only; a minimal sketch of this computation follows this list. If accuracy rises as we keep more recent data, the model is learning useful season-specific signal.
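To make the two lenses concrete, here is a minimal sketch of how both could be computed from monthly (won, total) tallies. The tallies are copied from the tables below; the function and variable names are ours for illustration, not the dashboard’s actual code.

```python
# Monthly (won, total) tallies in chronological order, copied from the tables below.
MONTHLY = [
    ("Sept 2024", 62, 110),
    ("Oct 2024", 13, 30),
    ("Nov 2024", 3, 4),
    ("Jan 2025", 4, 4),
    ("Feb 2025", 10, 18),
    ("Mar 2025", 0, 3),
    ("Apr 2025", 1, 4),
    ("May 2025", 8, 13),
    ("Aug 2025", 13, 15),
]

def monthly_performance(months):
    """Lens 1: win rate for each calendar month."""
    return [(name, won, total, won / total) for name, won, total in months]

def learning_curve(months):
    """Lens 2: at step i, drop the first i months and recompute accuracy
    over the kept months only (a pure temporal mask, no re-tuning)."""
    curve = []
    for i in range(len(months)):
        kept = months[i:]
        won = sum(w for _, w, _ in kept)
        total = sum(t for _, _, t in kept)
        curve.append((kept[0][0], kept[-1][0], won, total, won / total))
    return curve

for start, end, won, total, rate in learning_curve(MONTHLY):
    print(f"{start} -> {end}: {won}/{total} = {rate:.1%}")
```

Running this reproduces the learning-curve table below (up to rounding).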
Monthly performance (won / total)
| Month | Won | Total | Win % |
|---|---|---|---|
| Sept 2024 | 62 | 110 | 56% |
| Oct 2024 | 13 | 30 | 43% |
| Nov 2024 | 3 | 4 | 75% |
| Jan 2025 | 4 | 4 | 100% |
| Feb 2025 | 10 | 18 | 56% |
| Mar 2025 | 0 | 3 | 0% |
| Apr 2025 | 1 | 4 | 25% |
| May 2025 | 8 | 13 | 62% |
| Aug 2025 | 13 | 15 | 87% |
Notes:
- September & February are large, mixed slates as leagues settle—solid but not peak accuracy.
- August 2025 (start of new cycles with refreshed data) shows 87% on a small but meaningful sample.
- Lows in March/April reflect thin slates plus variance; we keep them in for transparency.
Learning curve: keep only recent months
At step i, we drop the first i month(s) and recompute accuracy on the kept range.
| Kept range | Won | Total | Win % |
|---|---|---|---|
| Sept 2024 → Aug 2025 | 114 | 201 | 57% |
| Oct 2024 → Aug 2025 | 52 | 91 | 57% |
| Nov 2024 → Aug 2025 | 39 | 61 | 64% |
| Jan 2025 → Aug 2025 | 36 | 57 | 63% |
| Feb 2025 → Aug 2025 | 32 | 53 | 60% |
| Mar 2025 → Aug 2025 | 22 | 35 | 63% |
| Apr 2025 → Aug 2025 | 22 | 32 | 69% |
| May 2025 → Aug 2025 | 21 | 28 | 75% |
| Aug 2025 → Aug 2025 | 13 | 15 | 87% |
Takeaways:
- Dropping the earliest months lifts accuracy into the low-to-mid 60s, reaching ~69% from April onward, ~75% from May onward, and ~87% in August alone.
- This pattern is consistent with season-conditioned learning: as we onboard more current-season evidence (tactics, rotations, travel effects, injuries), our priors adapt and edges sharpen.
How we keep it honest
- Fixed inclusion rule: only finished matches with source accuracy ≥ 90% for the given insight type.
- Strict WON/LOST mapping: we record the outcome against the specific pick (e.g., BTTS=YES, O/U 2.5=UNDER, RESULT=AWAY WIN); a sketch of this grading follows this list.
- No curve-fitting: the learning curve is a temporal mask (drop earliest months), not a re-tuned threshold.
- Public ledger: you can inspect every match card and its insight tags on the dashboard page:
➡︎ Insights Result
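To make that mapping concrete, here is a minimal sketch of how a single pick could be graded from a final score. The function signature and string labels are illustrative assumptions, not the dashboard’s actual schema.

```python
def grade_pick(insight_type: str, pick: str, home_goals: int, away_goals: int) -> str:
    """Grade one insight against the final score. Names here are illustrative."""
    total = home_goals + away_goals
    if insight_type == "OVER/UNDER 2.5":
        outcome = "OVER" if total > 2.5 else "UNDER"
    elif insight_type == "BTTS":
        outcome = "YES" if home_goals > 0 and away_goals > 0 else "NO"
    elif insight_type == "RESULT (1X2)":
        if home_goals > away_goals:
            outcome = "HOME WIN"
        elif home_goals < away_goals:
            outcome = "AWAY WIN"
        else:
            outcome = "DRAW"
    else:
        raise ValueError(f"unknown insight type: {insight_type}")
    return "WON" if outcome == pick else "LOST"

# Example: a 0-2 away win grades BTTS=YES as LOST and RESULT=AWAY WIN as WON.
print(grade_pick("BTTS", "YES", 0, 2))               # LOST
print(grade_pick("RESULT (1X2)", "AWAY WIN", 0, 2))  # WON
```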
What’s next
- Expand breakdowns by league, home/away, and price band to isolate where edges are strongest.
- Add uncertainty bands (Wilson intervals) for months with small samples; see the sketch after this list.
- Track feature drift explicitly (e.g., rotation depth, rest days, travel) and surface when those signals drive a pick.
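As a preview of those uncertainty bands, here is a minimal sketch of a 95% Wilson score interval applied to the small August sample (13/15). The formula is standard statistics; the function name is ours, not dashboard code.

```python
from math import sqrt

def wilson_interval(won: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion (z = 1.96)."""
    if total == 0:
        return (0.0, 1.0)
    p = won / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return (center - half, center + half)

# August 2025: 13 wins out of 15 insights.
lo, hi = wilson_interval(13, 15)
print(f"86.7% point estimate, 95% CI: {lo:.0%}-{hi:.0%}")  # roughly 62%-96%
```

The wide interval (roughly 62% to 96%) is exactly why we flag small monthly samples rather than headline them.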
If there’s a specific league or market you want us to slice, ping us—we’ll add it to the dashboard and keep learning in public.