And then the Madness really begins


Well, there is always a moment in the tournament when the whole thing descends into insanity. It is amazing to say, but Norfolk State was just the tip of the iceberg.

In honor of having the 5th ever 15 seed win a tournament game, the Cinderellas banded together to sweep the 7 pm games. First came the smaller upsets: 9 seed Saint Louis took out Memphis, and my beloved Boilermakers, after getting up by double digits, held on to sneak away with a 3 point victory over St. Mary's.

But those were small upsets in comparison. MAC Champion Ohio played a great game to take out 4 seed Michigan. And then, not to be outdone, Patriot Champion Lehigh became the 6th ever 15 seed to win a tournament game, taking out number 2 Duke.

I know if I were Michigan State, Kansas, Temple, or Notre Dame in the nightcap, I would try as hard as I could to get up early and remove any doubt. The momentum has started, and if they are not careful, they will become another statistic in this crazy evening.

I might have been wrong about it being unlikely that all 8 favorites would win on Thursday afternoon. But I am very sure that it was extremely unlikely to have two 15 seeds, a 13 seed, a 10 seed, and a 9 seed win in 5 consecutive games. But that is what we love about March!!!!


2 responses to “And then the Madness really begins”

  1. I’m beginning to question my Bradley-Terry model (argh!)

    The core assumption in the BT model is that play during the tournament will be a statistical replay of play during the season. (Yes, I do overweight later-season results, but that seems to have only a small effect.)
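    A minimal sketch of the Bradley-Terry setup described here, assuming each team gets a single strength parameter on the logit scale (the theta values below are made-up illustrations, not real team ratings):

    ```python
    import math

    def bt_win_prob(theta_i, theta_j):
        """Bradley-Terry: P(team i beats team j) is the logistic
        function of the difference in strengths on the logit scale."""
        return 1.0 / (1.0 + math.exp(-(theta_i - theta_j)))

    # Hypothetical strengths: a clear favorite vs. an underdog.
    favorite, underdog = 1.5, 0.0
    print(bt_win_prob(favorite, underdog))  # about 0.82
    print(bt_win_prob(underdog, favorite))  # about 0.18
    ```

    Equal strengths give a 50/50 game, and only the *difference* in thetas matters, which is why the whole scale can be shifted or shrunk without changing which team is rated higher.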

    Tom was surprised by the lower seeds going 8 for 8 to start round two. It turned out the straight BT model said that was an 18% probability. But it was starting to call into question whether the BT framework was adequately separating the teams--maybe the lower seeds were really much better than the higher seeds, not just a little better.

    But then the other 24 games got played. And we had multiple significant upsets. So now I am asking if the BT model is overly confident–separating teams by more than is right. In many levels of professional sports the BT validates very well–there is no psychological effect, no hot hand, no clutch hitting. The pros are just too good. Too consistent. Unflappable. And, maybe at the highest level of collegiate athletics, they also are consistent. But, maybe, for 19 year-olds in mid-conference programs, they really do step up to the plate and in the national tournament perform at a higher level than they ever have before.

    So, Round Two results, according to my BT model, were a 1.3% event. Pretty unusual. I can keep playing Vizzini (The Princess Bride) and simply say "Inconceivable!" Or I can change my belief, i.e. change my model.

    With my simulator I have worked out various possible adjustment factors. If I shrink my estimates by 30%, this round becomes only a 5% event--still rare, but not inconceivable. The maximum likelihood estimate is to shrink my BT estimates by 75%. At that point the model hardly does anything--you almost might as well pick randomly (not quite, but not far either).

    Well, welcome to Bayesian analysis–you have to pick your prior. In this case I am sweating model specification uncertainty. I don’t want this “Big Dance Adjustment Factor” in the model, but given last year’s weird results, and this year’s round two weird results, sometimes you are forced by the data to actually learn something.

    So, I think I will now start to use a 30% adjustment factor (this is on the logit scale) for my parameters.
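    The shrinkage idea above can be sketched as multiplying every logit-scale strength gap by (1 - BDAF) before converting to a win probability. The gaps below are hypothetical, chosen only to show the direction of the effect, not to reproduce the 1.3% or 5% figures:

    ```python
    import math

    def shrunk_win_prob(theta_gap, shrink=0.30):
        """Apply a 'Big Dance Adjustment Factor': shrink the logit-scale
        strength gap toward zero, pulling win probabilities toward 50/50."""
        return 1.0 / (1.0 + math.exp(-theta_gap * (1.0 - shrink)))

    def joint_upset_prob(gaps, shrink):
        """Probability that the underdog wins every game, given the
        favorite's logit-scale advantage in each game."""
        p = 1.0
        for g in gaps:
            p *= 1.0 - shrunk_win_prob(g, shrink)
        return p

    # Hypothetical favorite advantages in five consecutive games.
    gaps = [1.2, 1.0, 0.9, 0.8, 0.6]
    print(joint_upset_prob(gaps, 0.00))  # raw model: a rarer event
    print(joint_upset_prob(gaps, 0.30))  # shrunk model: less extreme
    ```

    Note that any shrink factor between 0 and 1 preserves the sign of every gap, so the model's rank-ordering of teams is unchanged--only the implied odds move.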

    None of this will change any of my selections–it just changes the betting odds I might hypothetically offer or accept. All my rank-orderings of team quality are invariant with respect to this BDAF (Big Dance Adjustment Factor). I suppose, someday I should integrate like 10 years of Tom’s regular season and tournament data and see if the BDAF is bigger for the weaker teams.

    Maybe the BDAF is not due to individual players maturing and stepping up their performance (as this might be the first time they have been on national TV, one can imagine that they might perform at a higher level) but rather due to the coaches using a different strategy? Knowing a regular approach against a superior team is sure to lose, maybe they design some higher-variance strategy that increases the tail probability of winning? As I have never watched a basketball game (on TV or in person), or played in one, and can't make 1 in 10 free throws personally, I clearly have absolutely no idea if such high-variance strategies exist or are used.

    But regardless of the reason, statistically, it is looking like teams really do perform differently in the tournament.

    Very interesting--nearly every one of hundreds of published empirical tests of BT has validated it. We may be seeing something new here.

    -Bill
