May 3, 2011
To put it simply, Rich Meyer is a numbers guy. Despite several years programming major-market stations, Rich's love of stats and analysis led him to co-found Mediabase, where he stayed for 25 years. Two years ago, he left, only to see radio adopt a new monitoring service, PPM, which produces reams of new data to analyze. Enter his new venture, Airplay Intel. Already his analysis has yielded some interesting observations about how radio programming can impact PPM results, both positively and otherwise. Here's just a taste of what he has found...
After 25 years at Mediabase, as its co-founder and president, you left in 2009. What have you been up to since then?
I've always loved number crunching, which is basically what I did for more than two decades at Mediabase. So I looked at opportunities that would let me continue to do the thing I love ... analyzing airplay data. I have been intrigued by the PPM data since its inception. I studied it and analyzed trends over a long period of time, and decided there was a business opportunity there for me to pursue. That opportunity has manifested itself into Airplay Intel.
There's so much information you can distill from raw PPM data if you take the time to really pore through it. One can compare and contrast the numbers of not just one station, but of all of the relevant stations in a market. Further, when you begin looking at numerous similarly-formatted radio stations, trends tend to emerge very quickly. You can become an expert at the science, if you study it long enough. It's pretty much like anything else in life. If you study it long enough, you can speak fairly intelligently about it.
Everyone knows how overworked today's programmers are day-to-day. I wrap everything up in a nice package and deliver it to them every Sunday night, just in time for their music meetings on Monday.
On the content side, the same principle applies; that is, if you study the information and the correlated meters as you go through a programming segment, trends pop up and you say to yourself, "Wow!" One example: from the personality-driven morning shows and syndicated shows I have examined, I have concluded that, generally speaking, it is better to come out of a song, do a quick but effective tease of a feature or bit that's "coming up in less than four minutes," run the spots and come back to content, than it is to go music, content, spots. People might stick around thinking the bit or feature is going to be cool. But if you do the bit or feature first, and the listener gets bored 15 seconds into it, you've probably lost the average quarter-hour. I don't consider this a "game" of trying to fool the meters; I just think it's the sensible thing to do as a programmer.
Meantime, on the music side, I see interesting trends as well. For example, the epic rock songs like "I've Seen All Good People," "Do You Feel Like We Do," "Blinded By The Light," "November Rain" and several more seem to ALWAYS test at the top. I see it station after station. Given the fact that those are all long songs, it seems reasonable to conclude that those songs are most likely to get you the most bang for your buck. Again, not a "trick" ... just sound programming judgment and a pretty simple deduction.
That last conclusion just sent shivers up my spine, because being a teenager/collegian in the '70s, I heard "Stairway To Heaven" so often that now I can't get past the guitar intro before flipping the station.
"Stairway To Heaven" is an anomaly in and of itself, just because it IS "Stairway To Heaven." Zeppelin often produces polarizing numbers, just because of the exposure they have received over the past 40-plus years. As for "Stairway," it is usually the very top or the very bottom, but having said that, this brings up another interesting point.
When you examine data for shows hosted by the Jim Ladds of the world, or some of those Sunday shows with The Beatles, for instance, it doesn't really seem to make much of a difference what they play. Listeners are there for the experience; there's not a lot of button pushing going on.
One of the other interesting things I've found is that listeners tend to be much more patient with their P1 radio station. Ryan Seacrest listeners are tuned in to catch the buzz, just as much as they are the music, which obviously makes them more tolerant if they hear a song they do not particularly like.
That says something for the power of heritage.
Absolutely. You look at Z100 and KIIS' numbers and you can see it immediately. There is a reason why the songs' numbers produced on those stations are usually strong from top to bottom. Station branding and loyalty go miles on legendary stations like Z100 and KIIS.
Further, if you look at a format like Country, when there's only one Country station in the market, the deviation from top to bottom in retention scores is very slight because listeners have nowhere else to go. The same holds true for AC stations that own the lion's share of in-office listening in any given market.
How do you come up with such conclusions?
I look at the raw music scores as presented by Mscore, and I apply a 1-100 point scale for all the stations. The song that retains the most meters is awarded a score of 100; an index if you will. Every other song played will fall somewhere below 100. The scale is very similar to many callout scoring systems, and the scores that it produces are very, very stable week-to-week.
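The indexing Meyer describes can be sketched in a few lines: the song that retains the most meters anchors the scale at 100, and every other song is expressed relative to it. This is a minimal illustration of that arithmetic, not Airplay Intel's actual code; the function and data names are hypothetical.

```python
def index_scores(raw_retention):
    """Map raw meter-retention scores to a 100-point index.

    The best-retaining song scores 100; every other song is
    scaled proportionally below it.
    """
    top = max(raw_retention.values())
    return {song: round(score / top * 100, 1)
            for song, score in raw_retention.items()}

# Illustrative raw retention values (hypothetical):
raw = {"Song A": 0.92, "Song B": 0.46, "Song C": 0.69}
indexed = index_scores(raw)
# "Song A" anchors the scale at 100.0; the rest fall below it.
```

Because every week's scores are re-expressed against that week's top song, the index stays comparable across weeks even as raw meter counts fluctuate.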
When crunching the PPM numbers to create the Airplay Intel, did any of the conclusions that you could draw from them surprise you?
Actually, a number of things. First, I found that almost every real hit burns much more slowly than listeners might tell you in other forms of research. Songs like "Viva La Vida" or "Hey Soul Sister," and now, "DJ Got Us Fallin' In Love." "DJ" is riding along now as a recurrent with 20% of the airplay that the #1 current has in spins. PDs are so used to running through the traditional song cycle, which is understandable, but this new intelligence now available to PDs and MDs really makes you take a step back and take a fresh look at everything.
Secondly, in most but not all cases, songs will not run a life-cycle on a bell curve, as they do in airplay. Generally speaking, a programmer can tell pretty early whether or not each song is a hit. Remember, by the time the data is even presented to programmers, it has already been on the air for more than a month. I believe that's ample time and a sufficient sample to begin drawing conclusions.
What else has been a surprise to you after analyzing so much data?
People often tell you one thing, and then do something completely differently. I often use the example of the movie, "Tommy Boy" with Chris Farley and David Spade. In one scene they are driving down a mountain, the top is ripped off the car, everything is pretty much a mess, and they are punching around and can't find anything they want to listen to on the radio. They finally stumble onto the Carpenters, "Superstar." They kind of shrug their shoulders and agree, "Eh, okay, I guess." In the next scene, they are coming around the mountain singing the hook at the top of their lungs with the radio cranked to 11. In any other controlled environment, neither would have probably admitted liking the song, but in real life, something totally unexpected happened.
That's what PPM is all about ... what happens in real life. I could surmise that I might get 10 years for robbing the liquor store down the street. I could probably ask a number of people and come to that conclusion. But what if I DO rob the liquor store and the judge sends me away for 25 years? The judges, in radio ratings today, are those people carrying around those precious meters. Like it or not, that's reality.
Is the converse true as well?
Yes, it is. People might tell you they like a particular song or artist because they think they are supposed to like that artist. So, they will likely verbally respond positively, but in reality, play it ... and you may very well lose them. Grunge bands from the '80s are a good example.
So the oft-heard notion that a song needs 100-150 spins to see if it pans out as a hit ... not so much?
If the song doesn't test well initially, it likely won't test well as time goes on. However, there are always exceptions. Cee Lo Green tested terribly for several weeks. Cee Lo then got some tremendous television exposure and turned "Forget You" into a #1 song.
The good news for the label promotion reps is that if they have a song that IS testing well initially, they can gear their efforts to accelerate the positive movement that is already there. The other good news for label executives is that programmers often see songs test toward the top that they are barely playing. Usually, they will tell you they were not sure that the tempo or texture of the track was right for their radio station. I've seen songs that would have never gotten out of light rotation for that reason, which ended up being powers.
Arbitron has been putting considerable effort in increasing sample size, especially for minorities. Have you seen the impact of their efforts in your data?
I create a larger sample by using more data. I obtain more data by averaging over a greater period of time. I start with the same base data from Mscore, which I truly appreciate, and temper it to reduce the wobble or bounce from week to week.
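The tempering Meyer describes amounts to averaging each song's score over a longer window so that no single week dominates. This is a hedged sketch of that idea; the four-week window and the function name are assumptions for illustration, not Airplay Intel's actual methodology.

```python
def tempered_score(weekly_scores, window=4):
    """Average the most recent `window` weeks of a song's scores.

    A single bad (or great) week is damped by the surrounding
    weeks, so the published score stays stable week to week.
    """
    recent = weekly_scores[-window:]
    return sum(recent) / len(recent)

# Three strong weeks and one outlier (62):
weeks = [88, 91, 62, 90]
score = tempered_score(weeks)
# The outlier pulls the average down only modestly,
# rather than tanking the song on its own.
```

This is also why, as Meyer notes later, one terrible PPM week won't kill a song under his scoring: only a consistent run of weak weeks moves the tempered number meaningfully.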
Coleman Insights unveiled a PPM study at a recent Arbitron PPM Client briefing. The main gist of the presentation was to warn programmers not to make knee-jerk reactions to a bad score coming out of one particular daypart or day. I assume you concur.
That's what I've done with Airplay Intel; my first goal was to create a score where PDs would have no reason to make a knee-jerk reaction. Again, the data we produce is tempered to a degree where even if a song has a terrible PPM score in one particular week, that one week is not going to kill the song, or even damage it significantly. If it happens over a series of weeks, then yes, the scores would either fall or rise consistently every week.
Is there a noticeable difference in Airplay Intel data between a new song from a new act and one from an established act?
It depends. As I mentioned earlier, I've seen songs from unfamiliar artists finish in the top five on major radio stations that barely play them, which has caused that station to play it more and more until it reached power. Conversely, I've seen stations pound new releases from superstars out of the box -- without data to support the airplay. From time to time, the first week scores appear, and they are very scary. You never know until you have the data in-hand.
Do you measure new music the same way you do gold and recurrents?
No. I separate currents from recurrents and gold. I believe you have to do that to generate actionable music research. Statistically, you should always win when you play nothing but recurrents because research indicates that recurrents test the highest. That's probably because recurrents have lasted on a radio station long enough to become hits. On the other hand, not every current on a Top 40 is a proven hit, so there's not as high a tolerance level as with a recurrent or gold. The separation of the categories provides a clearer picture in my opinion.
Do all recurrents age gracefully into golds ... and are all gold equal?
I've noticed some trends with them. Novelty songs generally do not test well. I get the impression listener response is "Enough ... that was cute 20 years ago! I don't need to hear that again." I've also seen some songs consistently test at the bottom that you wouldn't expect.
Finally, it certainly looks like you've created your own niche in the industry. Has this experience changed the way you look at your career from here on out?
I just am who I am. My philosophy is to find something you really enjoy and figure out a way to make a living out of it ... my days on the air in the '70s and '80s, my days as a PD at some incredible, legendary Rock stations, the creation of Mediabase, and now the creation of Airplay Intel.
The thing I am enjoying more than anything is having the privilege of working day-to-day with the brightest programming minds in the radio industry. They are really the people who were the inspiration behind Mediabase ... and they are the professionals who provided the feedback and input to make it what it is today. I will continue to apply the same philosophy at Airplay Intel.