Displayed on this page are the predictions and results recorded by my AFLW tipping model in season 2023. An FAQ is available at the bottom of the page.
There are three elements to each prediction in the table: tip, the team the model is tipping to win; margin, the number of points the model is tipping that team to win by; and chance, the percentage likelihood of the tipped team winning by any margin. The table also includes methods of evaluating all three elements of each prediction. These are discussed in detail in the other FAQs.
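As a rough illustration only, a single prediction could be represented along the following lines. The class name, field names and values below are hypothetical, invented for this sketch, and are not the model's actual output format.

```python
from dataclasses import dataclass

# Hypothetical sketch of one row of the predictions table; names and
# values are illustrative only.
@dataclass
class Prediction:
    tip: str       # the team the model tips to win
    margin: float  # points the tipped team is expected to win by
    chance: float  # probability (%) of the tipped team winning by any margin

example = Prediction(tip="Melbourne", margin=12.5, chance=68.0)
```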
The model is still in development and as such I am not currently releasing the details of how it makes predictions, but I may decide to do so in the future. You are welcome to use the information below as you see fit, but please note that you do so at your own risk. I do not recommend or encourage the use of this information for betting.

Accuracy, quite simply, records whether the tip was right or wrong. If the team being tipped to win the game does win, you will see a green tick. For any other result you will see a red cross. Until the result is known, you will see a black question mark.
Most users of a tipping model will be primarily concerned with how often it correctly picks the winning team. Accuracy provides a direct answer to this question.
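In arithmetic terms, accuracy is just the proportion of tips that picked the winning team, as in this minimal sketch (the results list here is invented for illustration):

```python
# Illustrative only: accuracy is the share of tips that picked the winner.
tips_correct = [True, False, True, True]   # invented results for four tips
accuracy = sum(tips_correct) / len(tips_correct)
print(f"{accuracy:.0%}")  # 75%
```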
However, past accuracy is not always the best predictor of future accuracy. This is why other methods of evaluation are included.

The acronym MAE stands for Mean Absolute Error.
The error of a tip is the amount by which the real-world margin differs from the predicted margin. For example, if the model tips the home team to win by 20 points, and they instead win by 35, the error for that tip will be 15. If they win by 5, it will be -15.
The absolute error is the same number with the sign removed, so it is always positive. In the scenario above, whether the home team wins by 35 or by 5, the absolute error of the tip will be 15.
Therefore, the mean absolute error is the average number of points by which the margin prediction was wrong. The lower the MAE, the better the model.
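Here is a minimal sketch of how MAE could be calculated from predicted and actual margins. The function name and inputs are illustrative, not the model's internal code; the values reproduce the scenario above.

```python
# A minimal sketch of computing MAE from predicted and actual margins.
def mean_absolute_error(predicted_margins, actual_margins):
    """Average of the absolute differences between prediction and result."""
    errors = [actual - predicted
              for predicted, actual in zip(predicted_margins, actual_margins)]
    return sum(abs(error) for error in errors) / len(errors)

# Tipped to win by 20, with actual margins of 35 and 5: the errors are
# +15 and -15, so the absolute error of each tip is 15.
print(mean_absolute_error([20, 20], [35, 5]))  # 15.0
```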
In comparison to accuracy, MAE is a superior predictor of future accuracy. Why is this?
Imagine a match with tips provided by two models. Model A predicts the home team will win by 2 points. Model B predicts the away team will win by 30 points. The match is played, and the away team wins by 4 points.
Model A better predicted the outcome of the match - it was only 6 points off, while Model B missed by 26. But Model B picked the right team, and Model A the wrong one. MAE would reflect how much closer Model A came, while accuracy alone would incorrectly rate Model B as the better tipper for this match.
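The same example worked through in a small sketch, using a sign convention I have chosen for illustration (positive margins for a home win, negative for an away win):

```python
# The worked example above: positive margin = home win, negative = away win.
actual = -4                          # the away team wins by 4
models = {"Model A": 2, "Model B": -30}

for name, tip in models.items():
    picked_winner = (tip > 0) == (actual > 0)   # accuracy: right team?
    absolute_error = abs(actual - tip)          # MAE input: how far off?
    print(name, "correct" if picked_winner else "wrong", absolute_error)

# Model A wrong 6
# Model B correct 26
```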
This is an outcome that, understandably, can feel counter-intuitive. If you’re interested in reading more, I highly recommend Tony Corke’s article on the topic on his website, Matter of Stats.

The bits measurement evaluates ‘chance’, the probability of the tipped team winning the match by any margin.
If the tip is correct, the model is awarded a number of bits that depends on how probable it declared the tip to be: the more confident the model was in its tip, the more bits it receives.
However, the same is true in reverse. If the tip is wrong, the model loses bits, and the loss grows with how confident the model was in the faulty prediction. A draw result also sees the model lose bits, albeit not as many.
As such, bits provides something of a compromise between accuracy and MAE. It’s a score that goes up or down depending on the real world outcome of the match, but with the acknowledgement that the confidence in a prediction is just as important as the prediction itself.
Bits are used to score the Monash University Probabilistic Footy Tipping Competition. You can find more information on how they are calculated on their website.
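For readers who want the arithmetic, the sketch below uses the bits formulas as they are commonly described for the Monash competition. I have not verified it against the official scoring code, so treat the Monash site as the authoritative source.

```python
import math

# Bits scoring as commonly described for the Monash competition (sketch).
# p is the probability assigned to the tipped team (0 < p < 1).
def bits(p, result):
    """result: 'win' if the tipped team won, 'loss' if it lost, 'draw' for a draw."""
    if result == "win":
        return 1 + math.log2(p)              # confident correct tips earn more
    if result == "loss":
        return 1 + math.log2(1 - p)          # confident wrong tips lose more
    return 1 + 0.5 * math.log2(p * (1 - p))  # a draw scores at most zero (at p = 0.5)

print(round(bits(0.75, "win"), 3))   # 0.585
print(round(bits(0.75, "loss"), 3))  # -1.0
print(round(bits(0.75, "draw"), 3))  # -0.208
```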
Last updated: 6:50pm, 24 Nov 2023 (AEDT).
Feedback, corrections, coding tips, questions and suggestions are always welcome.