I didn’t get a chance to analyse the ‘fun’ bets from the first part of this season until now, but I think it’s worth looking back to see what we can glean. It was also the first time I’ve shared the value % information, and the first time anyone has seen the output from my ratings, although I’ve discussed what it looks like before on the blog.
It’s important to remember what we’re trying to do with these ratings of mine. Simply put, the fundamental point of using ratings is to identify the best bets from each set of games. I personally don’t care what my ratings say about games where I don’t see any value in backing any of the teams to win, so I don’t include any of these games in my results, as you’ll know by now.
This is a debatable point I suspect, but I’d go a step further and say I don’t really care whether my ratings make a profit when backing all games to level stakes. I’d like my ratings to make a profit from backing every team defined as value but, as I’ve said before on the blog, it isn’t compulsory. I don’t want to be judged on my ratings; I want to be judged on the systems built from the ratings. It’s also for this reason that I don’t share the ratings for all the games each week, as in the wrong hands, my ratings could do some damage to a betting bank!
So, keeping the above in mind, how do the results from this early season trial look compared to the historical results?
The first observation is that backing every team defined as value to win created a 4.54pts loss across 63 games. The same 63 games showed a profit of 0.28pts on DNB. Is this disappointing?
Well, to answer this, you need to look at last season. Last season, my ratings threw up 765 unique games as value. These generated an ROI of 2%. For comparison, backing DNB in these games gave a return of 3%.
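For anyone unfamiliar with Draw No Bet (DNB): the stake is refunded if the game is drawn, which is why DNB returns can hold up better when a few value picks draw. A minimal sketch of how a DNB bet settles (the odds and results below are hypothetical, not the actual trial data):

```python
def dnb_profit(stake, dnb_odds, result):
    """Profit for a Draw No Bet wager: a win pays out at the DNB odds,
    a draw refunds the stake, and a loss loses the stake."""
    if result == "win":
        return stake * (dnb_odds - 1)
    if result == "draw":
        return 0.0  # stake returned, no profit or loss
    return -stake

# Hypothetical example: three 1pt DNB bets
bets = [(1.0, 1.8, "win"), (1.0, 2.1, "draw"), (1.0, 1.9, "loss")]
total = sum(dnb_profit(s, o, r) for s, o, r in bets)
print(round(total, 2))  # 0.8 + 0.0 - 1.0 = -0.2
```

Note the draw costs you nothing on DNB, whereas on a straight win bet it would cost the full stake.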
I’ve spent a long time looking at the historical correlation between my base ratings and the systems I use. There isn’t a simple formula (or at least, I’ve not found one!) but I reckon that when my ratings break even, you’ll make a 5%-8% return from using the systems. If the ratings make a 2%-3% return, you’ll make 8%-11% from using the systems. If the ratings make 3%-5%, you’re talking about a return of anywhere up to 15% from the systems. If the ratings make in excess of 5%, the returns from the systems can be above 15%.
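That rule of thumb can be written down as a simple lookup. The band edges below are just my reading of the paragraph above, not an exact formula:

```python
def expected_systems_roi(ratings_roi):
    """Rough rule of thumb mapping base-ratings ROI to the ROI range
    typically seen from the systems. Band edges are approximate."""
    if ratings_roi < 0.0:
        return None            # no guidance given for losing ratings
    if ratings_roi <= 0.02:
        return (0.05, 0.08)    # break-even-ish ratings -> 5%-8% systems
    if ratings_roi <= 0.03:
        return (0.08, 0.11)    # 2%-3% ratings -> 8%-11% systems
    if ratings_roi <= 0.05:
        return (0.11, 0.15)    # 3%-5% ratings -> up to ~15% systems
    return (0.15, None)        # 5%+ ratings -> systems above 15%

print(expected_systems_roi(0.025))  # (0.08, 0.11)
```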
In the first half of last season, my ratings were making 10% return and the systems were making 25% return overall with the best systems making 40% return.
Therefore, you’re looking at roughly 2.5-4 times the return from using my systems compared to the base ratings.
Obvious question then… why is this? Well, it’s partly the way I’ve structured the ratings but also the way I structure the systems.
In my humble opinion, the results from my systems look so good because the ratings are able to identify the strength of each bet. Hence, a team that is high value is a much better bet in the long run than a team that is marginal value. That seems like common sense to everyone, but anyone who has built a rating algorithm will realise how difficult it is to get it to work the way you want it to!
If we look at the results from this trial, we can see that the systems made a profit of 28% and yet, the ratings lost money! ;)
Simply, the highest value bets won and the lowest value bets lost during the trial. This is nearly perfect in terms of what we’re trying to achieve, so I’m pretty chuffed with how the ratings performed during the free trial: it lets people see how the ratings work, and also how good they can be when they do well!
The top 4 value rated teams won during the trial, at odds of 5, 4.2, 3.25 and 2.5. By anyone’s standards, that’s good going. The 5th rated team drew at 3.25.
The top 13 rated teams produced 7 winners, 2 draws and only 4 losers. At average odds of 4.6, that’s a good summary of what my ratings can do with high-odds teams, I think.
The top 17 rated teams had 9 winners, 3 draws and 5 losers.
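As a quick sanity check on the top-4 figure: a 1pt level-stakes winner at decimal odds o returns o - 1 pts, so those four winners alone account for most of the profit from the highest value bets:

```python
# Profit in points for 1pt level stakes: (odds - 1) on each winner.
top4_odds = [5.0, 4.2, 3.25, 2.5]  # the four winning odds quoted above
profit = sum(o - 1 for o in top4_odds)
print(round(profit, 2))  # 10.95pts from just 4 bets
```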
As you work down the value %, the profit falls, and because there were a lot of very low value bets during the trial, you end up with an overall loss if backing every team.
At the end of the day though, no one is backing teams based on the value % alone. If they were, they could choose a value % cut-off and follow that, but no one has to do this work as my systems do it all for them!
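If someone did want to work from the raw value % rather than the systems, the cut-off idea is just a filter. The records, names and threshold below are purely illustrative, not real ratings output:

```python
# Hypothetical bet records: (team, value_pct, odds).
bets = [
    ("Team A", 14.0, 5.0),
    ("Team B", 9.5, 3.25),
    ("Team C", 2.1, 1.9),
    ("Team D", 0.8, 2.2),
]

CUTOFF = 5.0  # only follow bets rated at least 5% value (illustrative)

selected = [b for b in bets if b[1] >= CUTOFF]
print([name for name, _, _ in selected])  # ['Team A', 'Team B']
```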
As you can see from what I posted, the lower value teams appear on systems 6 and 21, as these are the weakest bets my ratings find. As you work up the value %, the teams appear on fewer and fewer systems, until the very highest value bets appear only on the best systems.
That, I think, is what makes the TFA systems pretty unique. If my ratings weren’t so good at identifying value, I’d never manage to have systems that create returns in excess of 10%, like so many of my systems do.
As you can see from the trial, the systems did their job perfectly: they took 63 games that made a loss and turned them into a profit of 28%, which is an example of what they can do. Obviously, there is a lot of luck involved here, as I wouldn’t expect every higher-value team to win as happened during the trial, but it gives an indication of what can happen over a small number of bets. I wish the highest value teams always won like they did in the trial!
The 46 lowest value bets created a loss of 21.89pts during the trial. The 17 highest value bets created a profit of 17.35pts during the trial.
Importantly, the systems turned those 46 bets into 101 system bets, generating a loss of 43.10pts on the systems.
However, the systems took the 17 highest value bets and created a profit of 102.45pts.
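Those numbers reconcile with the headline figures quoted earlier: the base bets net out to the 4.54pts loss, while the system bets net out to a healthy profit in points:

```python
# Figures quoted above, in points at level stakes
low_value_pl = -21.89   # 46 lowest-value base bets
high_value_pl = 17.35   # 17 highest-value base bets
print(round(low_value_pl + high_value_pl, 2))  # -4.54, the overall loss

system_low_pl = -43.10   # the 101 system bets from the low-value teams
system_high_pl = 102.45  # the system bets from the 17 high-value teams
print(round(system_low_pl + system_high_pl, 2))  # 59.35pts net on systems
```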
This sums up the potential of the TFA systems…
Anyway, that’s a quick summary of the trial. Hopefully this rolls forward to this season. ;)