January 21, 2014
Few people have a better long-range perspective on the evolution of radio than Jon Coleman, who has been consulting stations and groups for upwards of 35 years. Yet his continued success is not dependent on just the "Big Picture," but on keeping a sharp eye on the latest trends and changes in the industry. After recently posting an essay on how the PPM can be misread by radio management and programmers, here he delves deeper into that issue and several other pressing topics.
What did you do before you got into consulting?
I worked in radio for a few years in San Francisco, then I went to work for Frank Magid and Associates in Iowa, where I did media research for a few more years before I formed my own company in 1978.
Obviously, you've witnessed enormous changes in the radio business in that time, not the least of which are the consolidation boom and the growing use of "multi-market programming" in the form of syndication and voicetracking. How has that changed the way you consult radio?
Intellectually, not very much. Our goal is still to provide clarity in times of change and to provide perspective on the issues media companies have with consumers. They want to (and need to) understand their brands, how consumers use their brands and what motivates them. We advise clients on how to maximize their audience, so in that respect, things also haven't changed very much at all. The technology of research and the nature of our clients -- those kinds of things have certainly changed.
Research techniques have evolved over time; our methods of collecting data have changed and our client base has evolved. For one thing, a phone sample used to be representative of the universe; this is no longer the case. Now we use multiple methods of reaching consumers, so that we reach the people most likely to carry a PPM meter.
We have also greatly evolved our insight tools, from focus groups and phone perceptual studies back in the day to new techniques like mediaEKG, where we measure moment-to-moment reactions and talk content. We also do personal anthropological studies where we literally follow consumers. Our music testing has been enhanced over the years: beyond just popularity, we now measure Fit, TSL weighting, Compatibility, infinite rotations, etc. Much more advanced thinking goes into our research now.
Also, our client base has changed. We used to work for individual radio stations or small companies; now we work for larger companies and for groups of stations in individual markets. If you had a rolodex (remember that?) of your customers back in the '80s and '90s, you'd be in 70 different markets with 70 different stations. Now, you might have 25 different corporate customers, but work with over 200 different station entities as clients.
Today, we work more closely with a smaller group of people who work for CBS, Clear Channel, Hubbard, Bonneville and so on. Working with the management of a few companies offers a more intimate relationship with the companies and their corporate people; however, what our customers really hire us to do hasn't changed much.
This change in ownership structure has changed their priorities and how we do research. One practical implication of this is that not every station is a competitor. So often, it is about increasing ratings by better aligning your two stations and it is less about taking audience away from a competitor.
In a recent blog, you discussed how PPM results can be misread. Is that because some people don't understand that it's a passive listening measurement and not an engagement measurement?
I would say that passive listening measurement like PPM is not good at understanding motivation. PPM cannot tell the difference between a listener who is engaged at an emotional level and one who is not engaged. So, stations focus their attention on tune-out, not positive engagement. Unfortunately, there is not a "tune-in" measure in PPM.
So exactly how are some parties misreading PPM data?
When PPM started, it provided a set of data on behavior that radio never experienced before. TV had experienced it for years; it was able to track, minute by minute, the audience level in TV programs. They were able to see the arc - viewer growth, its peak and decline -- and over time, draw inferences from that, but they didn't have anything like that in radio until PPM came along.
The first thing PPM data allowed radio to do was to capture station switching and tune-out in real time. The problem is that tune-out is only a small part of how ratings are generated. PPM does not enable anyone to capture tune-in or ongoing satisfaction; it can only capture the current audience level.
So, let's say you start with 100 people, and each minute you are adding or losing listeners. The problem is that you don't know, from the meter, why someone tunes in -- there is no understanding of the motivation for listening. But you can pinpoint what the station was doing when people tuned out. Because you can capture tuning-away events, people put a great deal of energy into trying to understand the motivation behind that change. People have come to believe that switching to another station expresses the motivation of the whole audience: if you don't tune out, you must like what the station is doing; if you do tune out, you don't. So if you lose 5% of your audience in minute five or six, you assume the audience had a negative motive. We think this is overly simplistic.
While it may be true that an event motivates 5% to switch, we don't know what happens to the other 95% who keep listening. When it comes to people engaging with radio, we have found or inferred that the very thing that causes 5% of the audience to tune out sometimes causes the other 95% to keep listening -- and maybe to tune in again later in the day or tomorrow.
Quantitatively, we know that tuning in more times a day or week is how you generate ratings. There is little evidence that preventing tune-out accounts for more than a small portion of a station's ratings. You can't fully discern the health of radio programming from negative numbers alone. Some of the things that a small percentage of people don't want to listen to can be extremely engaging to the other 95% -- the great majority of the audience.
In other words, those who program solely by eliminating negative events are essentially programming not to lose.
That's a good way to put it -- and if you do that long enough, you'll see a slow attrition of your audience. That's what happens if you never program to engage your audience, but just pacify them.
Are there ways to interpret PPM data to find out just what does engage the audience?
To some degree, but as I said before, PPM doesn't measure the engagement level or what people tune into. It can't tell you why someone tunes in, switches to another station or comes back. It's very hard to tell from PPM data alone. You can use that data, on some level, as an overlay to other research, such as people's attitude about the overall programming, the personalities, the contests and music. When you get a sense of people liking those things or not and then overlay it with PPM data, you'll probably get a sense of engagement -- but it's certainly not a direct measure of it.
How does any research measure good content, such as a good morning bit or a contest?
There are several research companies and consultants who can offer either hard data on what is good content, or have a good understanding from their years of experience to identify good content. Some PDs have that knowledge intuitively, but there are research companies -- ours included -- that measure music appeal, personality appeal and content appeal. We can measure individual features and bits, and get down to the granular level.
There are also some smart PDs who are doing consumer research -- either the paid-for kind or research they do just by listening to their customers and paying attention. If you want to call that "gut instinct," fine. I call it "intelligent gut" -- gut developed by formal and informal research.
When a PD or morning man goes out to a remote and three or four people mention how much they loved a certain bit, hearing reactions to those kinds of events over time enables a PD to eventually find out what people do and don't like. It's not quite as accurate as real research, but smart programmers know how to use it. That's their gut instinct based on their experience with consumer feedback.
On top of that there is a lot of formal research. Big radio companies now spend a lot of money testing music every single day of the year, testing personalities and discovering which features and content listeners find appealing.
Do you feel that programmers have a tendency to use negative PPM data to pull content and even air personalities too soon?
Absolutely. Some people in programming and at some companies probably pull the trigger too fast, just as there are those who pull it too slowly. Some only use PPM data to make such decisions and some use a combination of PPM data and other attitudinal research. While there's the potential of pulling the trigger too fast, that doesn't mean everyone's doing it.
But if you have PPM data that says 40% of your audience is abandoning you every day at 10:10a for four straight weeks when you air certain content, it doesn't take a genius to figure out where the problem is. Where it gets dicier is with smaller numbers, where the percentage of audience lost is smaller. What should the cut-off be? Forty percent sounds like a no-brainer, but what about 10% or 5%?
But to answer the question, "Has there been a tendency to pull the trigger too quickly on some things," the answer is yes, because they're responding to tune-out with no idea of the positive. An example might be a spoof call on the morning show: as the feature starts, you can see tune-out because some people just don't like the concept, but we also know from attitudinal research that the feature can be one of the most compelling for listeners.
How do you, as a consultant, counsel patience to clients who feel like pulling the trigger because of PPM tune-out, and instead encourage adventurous content?
It's not an either/or question. It's not a choice between the two. That's actually two questions. When it comes to patience, yes, in many instances, some stations and programmers should be more patient. When it comes to being more adventuresome, that depends on what counts as adventuresome. Do you want to do a morning show feature that rocks the boat and might take a while to catch on? You're going to need to suffer a bit of negative tune-out before you turn it positive. If you consider that adventurous, I'd probably say that's a good idea, within reason.
If it's smartly done, you ought to be patient. Take a morning show, for example -- and this happens with some frequency: a new show is introduced on a station that has 100,000 listeners. By the end of the first week it has 70,000, then 50,000 by the end of the second, and in the third week it stays at 50,000, and the week after that, and so on. Do you pull the show? Yes? No? In many cases we have seen "no" be the answer, because after a certain number of weeks the audience can grow back from 50,000 to 70,000, and then by the 20th week it might be back to 100,000, and a month after that, 150,000. That's where patience is needed and where sometimes management needs to put a tight seat belt on and wait. The best approach is to know whether the show is really popular with your type of audience. If you steal a show just because it has big numbers, you could get into trouble.
On another topic, there seems to be considerable debate over how much a station should balance its terrestrial efforts with developing its digital platforms. Where do you stand on this?
That's a practical question for business. The fact is that for broadcast stations, digital in general has not been a win economically, so during the recent tough times over the last four or five years, many companies have been gun-shy in investing in digital platforms and content.
Today, however, I believe radio needs to invest in digital platforms and content simply because that is where the consumers are going to consume a large portion of their product. In the future, will all of their revenue come from digital? Will digital kill over-the-air radio? Probably not. History tells us that there will likely be a place for both, but you need to have your content at the places where consumers consume -- and if your product is not available to them in those areas or you're providing content that doesn't interest them, they're not going to participate in that audience in terms of ratings or revenue.
That said, it may be possible that a company could decide to be the best terrestrial broadcaster there ever was -- a company that is only going to invest in terrestrial broadcasting because that is their business model. If the overall audience for terrestrial radio is down by 30-40% over the next few years, they may say "so be it." If they can be very profitable with the remaining 60-70% of the audience, then they may not have to worry about audience loss to other media. In contrast, most other companies believe they need to connect with 100% of their potential audience, so they have to invest in digital platforms as well.
Finally, what is your view on the present and future of the radio business?
I'm optimistic radio will have an important place in consumers' lives. However, stations need to know what audience they are serving and what they want and expect. If they keep going with exactly the same model as they did in 1995, they may be trying to appeal to a type of consumer who wants something entirely different.
As a business, I am also optimistic, as long as the owner got into the business at the right price and their debt is not overly burdensome. With moderate revenue growth, you can still make a 40-50% profit. In that case, the company can be whole in five to seven years.