The Experience Is Everything
July 3, 2007
Make It Memorable, And You Make It Last
The experiences we have each day are shaped by our senses - what we see, smell, touch and hear. A delicious meal, a beautiful sunset, the scent of flowers, favorite songs on the radio - these simple pleasures cause us delight and help to make a difference in our mood, our lives and how we perceive our surroundings. Our senses help to set the emotional state, enrich our days and even allow for a little exploration.
It's for these reasons that we put such an emphasis on the sensory experience for all of our listeners. Now that summer is here, we must be aware of how the audience temperament has changed. Kids are out of school. Parents are on vacation. There is a whole summer feeling. Listeners use radio differently in the summer, and let's not forget that the days are longer. Because of all of these things, there is a need to lean the station a little younger. This is particularly true for urban adult stations.
In addition, today's program and music directors must be more intelligent and better able to read between the research lines. Years ago, programmers picked a stack of songs and hoped they were the right ones. Now the music selection process has become more sophisticated and we are better able to determine if well-testing songs are compatible with the format and have the ability to become "favorite songs on the radio." Ours is a never-ending quest to find and play the most favorite songs at the right time. We can't effectively do this without delving into the topic of music research. Some of the approaches that worked in the late 90's are still valid. Others are not. Indeed, what we've learned is that many of the most reliable techniques still have some risks attached.
As the endless summer sails on, we find ourselves still gathering data, examining call-out research and hoping to make our listeners feel comfortable every minute of their stay with us. In our relentless ratings pursuit, we urge you to take a step further and ask yourself, what are most of your listeners and potential listeners doing while they're listening (dayparting) and what do they really want to hear?
For many of us in urban radio, we wondered if there wasn't a better way to reduce the risk and get better answers to these questions. But there were concerns. One concern was the quality of listeners' feedback. When our stations are unable to give a new song enough spins for it to become familiar enough to show reasonable passion scores, there is a problem. When we interrupt someone's life with a phone call, barrage them with hooks of the songs we want to test, and then put them on the spot regarding how they felt about each tune, that's a problem. Was the approach getting top-quality results? Or would we have to accept just any old answer so the interview could end and the listeners could return to dinner or whatever we interrupted? Callout on currents, done more frequently with fewer titles, is much stronger and can be effective provided the weekly spin totals get over 50.
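For the spreadsheet-minded, here's a small sketch (our own illustration, not any vendor's system) of how a music director might flag which currents have enough weekly spins to be worth putting into callout, using the 50-spin guideline above. Titles and spin counts are hypothetical.

```python
# Illustrative sketch: flag which currents have built enough weekly spins
# (more than 50, per the guideline above) to be worth testing in callout.
# Song titles and spin counts are made-up examples.

WEEKLY_SPIN_THRESHOLD = 50  # currents below this haven't had time to become familiar

weekly_spins = {
    "Current A": 74,
    "Current B": 51,
    "Current C": 38,   # too few spins -- familiarity and passion scores will lag
}

eligible = [title for title, spins in weekly_spins.items()
            if spins > WEEKLY_SPIN_THRESHOLD]

print("Test in callout this week:", eligible)
```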
Stations have to be careful when they try to test their entire library through current callout. If you don't reach the target passive audience, you can wind up with a distorted view of what the real summer audience wants to hear from the oldies base.
Using At-Home Panels
It occurred to researchers and consultants that certain types of listeners didn't want to be hassled with an intrusive callout interview. So what did they do? They set up at-home panels. There are several ways to accomplish this, but the bottom line is that listeners are recruited (either by a brief phone call, by responding to an over-the-air message, or via a mailer sent out to your key zip codes) to rate records, but at their convenience and at home. That way, there are no long, intrusive calls and no listening to hook after hook. Instead, these panelists receive cassettes or CDs weekly, containing the cuts the station wants feedback on. They then listen to the songs at the time and place of their choice, when supposedly they're able to better reflect on how each song really hits their ears. Responses can then be returned weekly: phoned in, mailed back (a questionnaire can be included with each cassette) or e-mailed back to the station. Another way to achieve this is to use the Internet to send and receive the information, but we have found that response rates drop significantly because the respondent's attention is divided and some of our potential listeners still do not have Internet access.
Each listener is typically involved in the music research panel for a month. This allows for tracking of their perspectives over a several-week period. Usually, in return for their cooperation, listeners are sent premiums by the station - anything from station merchandise to coupons for restaurant trade to cash. One major-market programmer we know even sent a CD player and an iPod to each respondent, and when the person was through being on the panel they got to keep both.
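As a rough illustration of what that month of tracking can look like, here's a minimal sketch of tabulating weekly panel returns so you can watch a title build or burn across the panel period. The panelists, titles and ratings are made up, and the 1-5 scale is an assumption.

```python
# Hypothetical sketch of rolling up at-home panel returns week by week.
# Each panelist rates the week's cuts on an assumed 1-5 scale; averaging the
# same title across weeks shows whether it is building or burning.

from collections import defaultdict

# (panelist, week, song, rating) -- sample data invented for illustration
returns = [
    ("P01", 1, "Song X", 4), ("P01", 2, "Song X", 5),
    ("P01", 3, "Song X", 5), ("P01", 4, "Song X", 3),
    ("P02", 1, "Song X", 3), ("P02", 2, "Song X", 4),
]

by_week = defaultdict(list)
for _, week, song, rating in returns:
    by_week[(song, week)].append(rating)

for (song, week), ratings in sorted(by_week.items()):
    avg = sum(ratings) / len(ratings)
    print(f"{song}  week {week}:  avg {avg:.1f}  (n={len(ratings)})")
```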
There are some drawbacks. For one thing, it's difficult to keep the sponsoring station or group confidential, which can affect the opinions of your panelists. In research it's usually best to keep the individual's responses as objective as possible by withholding the name of the project sponsor. Also, you can't always be sure who's filling out the questionnaire (which is the same problem Arbitron has with its diaries). When it's completed at home, the targeted person may give it to a brother, sister or friend to fill in, maybe as a lark, and you may wind up with data from a person not in your target.
Now with the PPM, there could potentially be another problem, since Arbitron is now going to measure children starting at age six. Suppose a seven-year-old is in the car wearing a meter and his or her parents are listening to a news/talk station. The PPM is going to pick up that listening and cause the ratings to reflect that this child was listening to a news/talk station because the parents had tuned it in. Remember, the PPM doesn't measure listening as much as it measures exposure, and it is going to pick up a lot of unintended listening.
Nevertheless, at-home panels are an interesting alternative to callout. It's one approach that a few urban stations have used.
Accurate Auditorium Music Tests (AMTs)
The most widely used form of music research is still the AMT. It's rare when a music research technique generates strong controversy, but recently auditorium music tests have done just that. Why? Well, let's look at the reasons for some of the controversies.
Several researchers, including myself, have modified and adapted this technique for urban radio. The basic process is really simple. Approximately 100 target listeners are recruited, screened and placed in an auditorium or ballroom, then exposed to (ideally) 250-300 hooks they're supposed to rate. The result is a list of songs that are "OK" to play - and just as importantly, a tally of songs that are burned, unfamiliar or are tune-out factors.
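To make that tally concrete, here is a hedged sketch of how hook scores might be rolled up into those buckets. The 1-7 scale, the thresholds and the sample tallies are assumptions for illustration, not a standard formula.

```python
# Rough sketch (our own illustration) of rolling up AMT responses per hook.
# Assumes each of ~100 respondents rates a hook 1-7, or marks it "unfamiliar"
# or "tired of hearing it". All cut-off values below are arbitrary examples.

def classify_hook(scores, unfamiliar, tired, n_respondents):
    """Return a rough bucket for one hook based on panel responses."""
    if unfamiliar / n_respondents > 0.30:      # much of the room doesn't know it
        return "unfamiliar"
    if tired / n_respondents > 0.35:           # burn is setting in
        return "burned"
    avg = sum(scores) / len(scores)
    if avg < 3.5:                              # low appeal = potential tune-out
        return "tune-out"
    return "OK to play"

# Hypothetical tallies for one hook out of the 250-300 tested
print(classify_hook(scores=[6, 5, 7, 4, 6] * 16, unfamiliar=8, tired=12,
                    n_respondents=100))
```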
AMTs are best used in specialized cases. This approach should primarily be used to gauge reactions to gold or recurrents. As such, AMTs have become relatively widely used, primarily by urban AC stations. As opposed to ongoing music research such as callout or at-home panels, AMTs are used sporadically (largely due to cost factors). Most who use this technique look at it as a way to get a "fitness check-up" just before the start of a key sweep. It helps to ensure your play list is fine-tuned for the crucial survey.
While AMTs have become a very much relied upon form of music research, there are concerns.
Many record companies, for example, complain that such tests are becoming too dictatorial - taking PD judgment out of the picture - and that the resulting play lists are too conservative. Some broadcasters have the same feeling: that stations that rely on AMTs become boring clones, which then have to spend huge dollars in marketing to stand out from the competition (rather than having a product that musically sounds different). Luther Vandross, Marvin Gaye and Lionel Richie tracks often test well in certain 25-49 demos, yet what happens if all stations are airing the same "safe" songs?
Another key factor is cost. Well-done music tests should run between $20,000-$40,000. This covers recruits (you'll need to recruit at least 200 people in order to have 100 actually show up), renting the site for the evening's test (which usually takes about 90 minutes), cash premiums as incentives for people to come out after work and do this (premiums run between $25-$50 per person, depending on the city/location and the demos involved) and data processing - not to mention the skilled moderator and the creation of the tape with the 300 or so hooks. If a researcher quotes you a figure way below that range, beware! They may be taking shortcuts that can hurt the quality of the data you receive.
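As a back-of-the-envelope sanity check on that range, here's an illustrative cost build-up. Only the 200-recruits-for-100-seats ratio and the $25-$50 premium figure come from the text; every other line item and dollar amount is an assumption.

```python
# Back-of-the-envelope check (illustrative figures only) of why a well-done
# AMT lands in the $20,000-$40,000 range. Amounts beyond those quoted in the
# text are assumptions, not vendor pricing.

attendees               = 100     # people who actually show up
recruits_needed         = 200     # roughly two recruits per seat filled
premium_per_head        = 40      # within the $25-$50 range cited
cost_per_recruit        = 30      # assumed recruiting cost per completed recruit
venue_rental            = 2_500   # assumed ballroom/auditorium for the evening
hook_tape_and_moderator = 5_000   # assumed tape production plus skilled moderator
data_processing         = 6_000   # assumed tabulation and reporting

total = (attendees * premium_per_head
         + recruits_needed * cost_per_recruit
         + venue_rental + hook_tape_and_moderator + data_processing)

print(f"Estimated cost: ${total:,}")   # roughly $23,500 with these assumptions
```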
Given the amount of a typical investment in an AMT, you should expect top quality. As you're considering doing such a test, ask prospective researchers if they can customize the music results to your needs, providing you information in a form you can understand. (Or do you have to fit into their standard software?) Also be vigilant regarding recruiting - the key to any successful research effort. We know a top station that was told by its researchers that recruits would be targeted urban fans - people who shopped in the neighborhood, those who attended urban concerts, etc. It turned out that even though the station paid extra for this special recruiting, the people were just standard recruits, not urban-targeted folks.
The final decision-making word once the audience research data has been coordinated rests with the local program director. Good researchers have a good national overview of what the real audience wants to hear from their tests. When the research is completed it's important that the PD listens to the researcher's perspective. But it's the PD's responsibility to create the station's sound by carving music into different categories and rotations. It's easier for programmers to make decisions when they're offered many different perspectives. The programmer must take an active role in the process.
Done well, AMTs are a very useful and effective approach to finding favorite songs to put on the radio. However, as with any of the music research systems discussed, be careful how you use and implement these techniques. The objective is to find and properly rotate the songs that will consistently reward your summer listeners with a sensory encounter that elicits a tremendously high emotional response: that "favorite song on the radio" feeling. That experience is everything. Make it memorable, and you make it last. Make it last, and you instantly increase TSL. And it's one more way to keep those fickle fingers faithful to your frequency.
Word.