Last spring, Myles had the idea that instead of just compiling our dumb predictions about the upcoming season, we should do something a bit more interesting. The idea was for each of the OV authors to pick a group of teams and track those teams throughout the season to see whose group did the best. Originally, we were going to pick teams through a sort of combination draft/white elephant gift exchange, but we ended up settling on an auction. The six of us (Berselius, DMick, IL424, Myles, myself, and Sitrick) decided to meet up at OV headquarters, use an imaginary budget, and auction off the teams.
Everything was going smoothly until Sitrick decided that not only did he not want to participate in this silly exercise, but he also didn’t want to write about the Cubs or see any of us again for as long as we all should live. Thus far, history has smiled on that decision. Unfailingly steadfast in the face of rejection, however, the rest of us soldiered on and auctioned off six teams apiece rather than five, with a budget of 126 fake dollars per person.
My strategy going in was to avoid all the worst teams. I wanted at least two good teams in the hope that one would finish above 90 wins, and four mediocre teams that would collectively finish above .500. Everything went according to plan except that one of my “good” teams ended up being the Red Sox. (That, and the fact that DMick laid waste to the rest of us during the auction. Bleh…)
[table id=9 /]
As you can see, DMick managed to win comfortably despite buying the DBacks, who ended up as the worst team in baseball.
The one thing I was curious about going into the auction was whether we, as a group, would add any value over and above information in the public sphere. The short answer is: not really. The results were pretty stratified. Nine teams went for $30 or more, which translates into an 89-or-better win projection, and ten teams went for $10 or less, a projection of less than 70 wins. Only two teams went for between $11 and $21 (71 to 81 wins), whereas nine teams actually finished in that range. In fantasy terms, the group went with a stars-and-scrubs approach. As a result, you would have been much better off relying on PECOTA or some other system to predict win totals.
But what about the identity of our stars and our scrubs? Were the teams we picked as stars more likely to exceed their projections, and the scrubs more likely to tank? To answer that question, I looked at BP’s projections at the time. I took the difference between our effective projections and BP’s, and compared that to the actual difference at the end of the year. Take the Tigers, for example. DMick bought them for $35, which translates into a projection of approximately 94 wins, whereas BP had them at 88. So our auction had them at +6 relative to BP. They finished at 90 wins, or +2 wins relative to BP. In that case, the group was right to be optimistic. Repeating that for all teams and running a correlation for the two values revealed a slightly positive result (r = 0.07). That is, teams that we valued above or below BP’s season win projection were slightly more likely to overperform or underperform, respectively. When I ran the same exercise using Vegas over/unders as a baseline, however, things didn’t work out as well. The correlation for those results was negative (r = -0.11).
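For anyone who wants to replicate the exercise, here is a minimal sketch of the calculation. Only the Tigers row comes from the article; the other three rows are made-up placeholder numbers for illustration, so the r it produces will not match the 0.07 reported above.

```python
# Each team maps to (auction-implied wins, BP projected wins, actual wins).
# Tigers numbers are from the article; the rest are hypothetical.
teams = {
    "Tigers": (94, 88, 90),
    "Team B": (92, 95, 91),  # hypothetical
    "Team C": (68, 72, 70),  # hypothetical
    "Team D": (99, 96, 94),  # hypothetical
}

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# How far above/below BP the auction valued each team...
auction_delta = [auc - bp for auc, bp, _ in teams.values()]
# ...versus how far above/below BP each team actually finished.
actual_delta = [act - bp for _, bp, act in teams.values()]

r = pearson(auction_delta, actual_delta)
print(f"r = {r:.2f}")
```

Swapping the BP column for Vegas over/unders gives the second comparison mentioned above.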
On the whole, I think all that was probably random. Listening to us was not a great idea if you were headed to Vegas. We will try to do better next year.