It’s a common occurrence for many of us when visiting a course: you walk off the 18th after your round thinking “Gosh, that course was a lot easier/harder than the Slope rating suggests.” Or maybe there are a few individual holes at your home club that seem to always play much easier/harder than the index on the scorecard. There could be, say, a hole where you regularly have two putts for a plus in Par comps, yet another hole where you regularly take a wipe before you reach the green.
You’re not alone. Every month I get countless emails about these sorts of situations. And it’s a significant gripe for many.
But the solution, I believe, is quite simple: Real data.
First, let’s look at Hole Indexing, which ranks a course’s holes from 1-18 based on difficulty.
Hole indexing for a course is determined in a variety of ways. Traditionally it was a result of the course designer’s or committee’s collective opinion, stating “This hole seems the hardest, or plays the longest, so let’s make it the Number 1 index”, etc. Scorecards were then printed, the numbers etched in stone, and that was that.
The problem with this is that it’s opinion-based, and doesn’t always reflect reality. I know plenty of courses, for example, where the holes rated 1 or 2 feel/play easier than other holes. The reverse is also true – there are holes that may be rated 17 or 18 on the card, but play much harder.
While it seems a minor point, this essentially gives some players an advantage via extra (unnecessary) shots on certain holes, while withholding a needed shot from a middle or low marker. And while some argue that this will “even out” on the flip side—via a shot given/lost elsewhere—this isn’t necessarily true, especially in Par comps where an extra shot on an easy hole gives no benefit (unless you want to count a double-plus!).
Traditional indexing also doesn’t necessarily account for how differently a hole plays in summer versus winter (which is definitely a factor in the southern states). Nor does it reflect the inevitable evolution of the course itself over time, advancements in technology, and so on.
But the times are changing.
Indexing is slowly shifting to a more statistics-based model, where data is gathered over a certain period of time, and the holes are rated according to how players ACTUALLY score on them.
For clubs that have advanced/electronic scorekeeping, the analysis of past round data can reveal, for example, how differently a hole plays from the red/white/blue/black tees, in summer vs winter, with/against the prevailing winds, etc. This is more accurate (and potentially fairer to all players) than traditional methods.
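To make the idea concrete, here’s a minimal sketch of what a statistics-based re-index might look like. The data is entirely made up (in reality it would come from the club’s scorekeeping system or a GolfLink export), and real guidelines would involve far more nuance, but the core of it is just “rank the holes by how players actually score on them”:

```python
from statistics import mean

# Hypothetical sample data: for each recorded round, strokes over par
# on each hole. A real exercise would use thousands of rounds and all
# 18 holes; four holes and three rounds keep the sketch readable.
rounds = [
    {1: 1, 2: 0, 3: 2, 4: 1},
    {1: 2, 2: 1, 3: 1, 4: 0},
    {1: 0, 2: 1, 3: 2, 4: 0},
]

def data_driven_index(rounds):
    """Rank holes by average strokes over par: hardest hole gets
    index 1, easiest gets the highest index."""
    holes = rounds[0].keys()
    avg_over_par = {h: mean(r[h] for r in rounds) for h in holes}
    ranked = sorted(avg_over_par, key=avg_over_par.get, reverse=True)
    return {hole: idx + 1 for idx, hole in enumerate(ranked)}

print(data_driven_index(rounds))
# Hole 3 plays hardest on average, so it becomes the Number 1 index.
```

The same loop could easily be run twice—once over summer rounds, once over winter—to produce the seasonal indexes mentioned above.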
There are many courses already doing this—The National in Victoria, and Mandurah in WA are two that spring to mind—which review and adjust the index of holes according to the ‘played-to’ stats. And anecdotal evidence suggests that the players and members approve (and/or whinge less!).
Golf Australia has published guidelines/recommendations for this sort of process. Unfortunately, many courses/committees stick with the status quo and still use the old, traditional version. (They usually cite reasons like “but we already have scorecards printed up” or “we don’t have the resources to analyse stats like this”.) But with the tens of thousands of rounds (on average) recorded annually for each club in GolfLink, surely the ‘powers that be’ can wave their magic wands and help these smaller clubs out?
Speaking of the ‘powers that be’…why not expand this process to adjust Slope?
I play many courses across Australia on a regular basis. And one thing I’ve noticed over the last couple of years is that the Slope ratings for many of these courses seem to be out of whack. The problem, I believe, is that the calculations for determining Slope aren’t necessarily relevant to all courses in Australia (our links-style courses play far differently to the pin-seeking dartboards in the U.S.). They also don’t generally take real-world statistics into account.
I have spoken to many people in the industry, from Course Architects to General Managers to Industry Professionals and the like. And very, very few of them can easily explain the intricacies of Slope without developing a nervous twitch.
In its simplest terms, the Slope rating system determines the level of difficulty of a course for a bogey golfer (versus a scratch golfer). It looks at obstacles/hazards at set distances down the fairway, the course topography, altitude, a ‘psychological factor’, etc. In truth, it’s a complex system that, at the end of the day, still relies on a rater’s opinion/interpretation on whichever day(s) the rating visit happens to fall.
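For all that complexity in the rating visit, the final arithmetic is surprisingly simple: under the USGA system, the Slope rating is just a fixed multiplier (5.381 for men, 4.24 for women) applied to the gap between the Bogey Rating and the Course Rating. In code:

```python
def slope_rating(bogey_rating, course_rating, gender="men"):
    """USGA Slope formula: a fixed multiplier converts the gap between
    the bogey and scratch ratings onto the familiar 55-155 scale,
    where 113 represents a course of standard difficulty."""
    factor = 5.381 if gender == "men" else 4.24
    return round(factor * (bogey_rating - course_rating))

# e.g. a course rated 72.0 for scratch golfers and 95.0 for bogey golfers:
print(slope_rating(95.0, 72.0))  # 5.381 * 23 -> 124
```

All the rater’s opinion/interpretation is baked into those two rating numbers before the formula ever runs—which is exactly the weak link.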
So, instead of relying on an opinion, why don’t we simply look at the real-world stats by golfers who play the course every week?
If we want to compare the difficulty of a course between a “scratch” golfer and a “bogey” golfer, we could, for example, hop into GolfLink and compile the previous twelve months’ worth of rounds from a set group of low markers, and compare the data to an equal number of middle/high markers to determine how much more difficult the course ACTUALLY plays from each set of tees. Stroke/Medal rounds could be given a higher weighting than Stableford and Par rounds, while the DSRs could also be factored in. You could even look at summer/winter slopes, etc.
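As a back-of-the-envelope sketch of that idea (my illustration, not any official method, and with entirely hypothetical scores): take the observed scoring gap between the two groups and run it through the same multiplier the USGA formula uses, to produce an “empirical” slope-like number:

```python
from statistics import mean

# Hypothetical twelve months of gross scores from one set of tees:
# a low-marker group standing in for "scratch", and a middle/high-marker
# group standing in for "bogey". Real data would come from GolfLink.
low_marker_scores = [73, 75, 72, 74, 76, 73]
high_marker_scores = [95, 98, 93, 97, 96, 94]

def empirical_slope(low_scores, high_scores, factor=5.381):
    """Estimate a slope-like number from observed scoring averages,
    borrowing the USGA men's multiplier (5.381). Purely illustrative:
    the official system rates obstacles, not raw scores, and a real
    version would also weight Stroke rounds higher and adjust for DSR."""
    gap = mean(high_scores) - mean(low_scores)
    return round(factor * gap)

print(empirical_slope(low_marker_scores, high_marker_scores))
```

Run the same calculation separately over summer and winter rounds, or per tee colour, and you have the seasonal/tee-specific slopes suggested above—no magic wand required.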
I’m not saying that my example above is THE answer. But in this age of statistics, surely there is a better way than what we have?
As always, I welcome your thoughts.