The fundamental idea of business-to-business CRM is often described as enabling a big business to be as responsive to the needs of its customers as a small business. In the early days of CRM, however, "responsive" too often became "reactive". Successful larger businesses recognise that they need to be proactive in seeking out the views, concerns, needs and levels of satisfaction of their customers. Paper-based surveys, such as those left in hotel bedrooms, tend to have a low response rate and are usually completed by customers who have a complaint. Telephone-based interviews tend to be affected by the Cassandra phenomenon. Face-to-face interviews are expensive and can be led by the interviewer.
A large international hotel chain wanted to attract more business travellers. It decided to conduct a customer satisfaction survey to find out what it needed to improve in its services for this type of guest. A written survey was placed in every room and guests were encouraged to fill it out. However, when the survey period ended, the hotel found that the only people who had completed the surveys were children and their grandparents!
A large manufacturing company conducted the first year of what was intended to be an annual customer satisfaction survey. In the first year, the satisfaction score was 94%. In the second year, with the same basic survey topics but a different survey vendor, the satisfaction score dropped to 64%. Ironically, over the same period, the company's overall revenues doubled! What had changed?
The questions were simpler and phrased differently. The order of the questions was different. The format of the survey was different. The targeted respondents were at a different management level. The Overall Satisfaction question was placed at the end of the survey.
Although all customer satisfaction surveys are used for gathering people's opinions, survey designs vary dramatically in length, content and format. Analysis techniques may utilise a wide variety of charts, graphs and narrative interpretations. Companies often use a survey to test their business strategies, and many base their business plans on their survey's results. BUT…troubling questions often emerge.
Are the results always accurate? …Sometimes accurate? …At all accurate? Are there "hidden pockets of customer discontent" that the survey overlooks? Can the survey information be trusted enough to take major action with assurance?
As the examples above show, different survey designs, methodologies and population characteristics will dramatically alter the results of a survey. It therefore behoves a company to make absolutely certain that its survey process is accurate enough to produce a true representation of its customers' opinions. Failing that, there is no way the business can use the results for precise action planning.
The characteristics of the survey's design, as well as the data collection methodologies employed to conduct the survey, require careful forethought to ensure comprehensive and accurate results. The discussion that follows summarises several key "rules of thumb" that must be followed if a survey is to become a company's most valued strategic business tool.
Survey questions should be categorised into three types: the Overall Satisfaction question – "How satisfied are you overall with XYZ Company?"; Key Attributes – satisfaction with key areas of the business, e.g. Sales, Marketing, Operations, etc.; and Drill Down – satisfaction with issues that are unique to each attribute, and upon which action can be taken to directly remedy that Key Attribute's problems.
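The three-tier structure can be sketched as a simple data model. This is a minimal illustration only; the attribute names and drill-down questions are invented, not taken from any real survey:

```python
from dataclasses import dataclass, field

@dataclass
class KeyAttribute:
    """Satisfaction with one key area of the business."""
    name: str                                        # e.g. "Sales", "Operations"
    drill_down: list = field(default_factory=list)   # actionable sub-questions

# Hypothetical survey skeleton following the three question types
survey = {
    "key_attributes": [
        KeyAttribute("Sales", ["Responsiveness of your account manager",
                               "Clarity of quotations"]),
        KeyAttribute("Operations", ["On-time delivery",
                                    "Order accuracy"]),
    ],
    # Placed last, so respondents answer it only after
    # considering all the other questions
    "overall_satisfaction": "How satisfied are you overall with XYZ Company?",
}
```

Each drill-down item maps to a concrete action that can remedy its parent Key Attribute, which is what makes the results usable for action planning.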
The Overall Satisfaction question is placed at the end of the survey so that its answer benefits from more thorough reflection, the respondent having first considered answers to all the other questions. A survey, if constructed properly, will yield an abundance of information. These elements of design should be taken into account: First, the survey should be kept to a reasonable length. More than 60 questions in a written survey becomes tiring. Anything over 8-12 questions begins to tax the patience of participants in a phone survey.
Second, the questions should use simple sentences with short words. Third, questions should ask for an opinion on just one topic at a time. For example, the question "How satisfied are you with our products and services?" cannot be answered effectively because a respondent might have conflicting opinions on products versus services.
Fourth, superlatives such as "excellent" or "very" should not be used in questions. Such words tend to lead a respondent toward an opinion.
Fifth, "feel good" questions yield subjective answers upon which little specific action can be taken. For example, the question "How do you feel about XYZ Company's industry position?" produces responses that are of no practical value in terms of improving an operation.
Although the fill-in-the-dots format is one of the most common kinds of survey, it has significant flaws that can discredit the results. First, all prior answers are visible, which invites comparison with the current question and undermines candour. Second, some respondents subconsciously tend to seek symmetry in their responses, and are guided by the pattern of their answers rather than their true feelings. Third, because paper surveys are typically organised into topic sections, a respondent is more apt to fill down a column of dots within a category while giving little consideration to each question. Some Internet surveys, constructed in the same "dots" format, produce the same tendencies, especially if inconvenient sideways scrolling is needed to answer a question.
In a survey conducted by Xerox Corporation, over one third of all responses were discarded because the participants had clearly run down the columns in each category instead of carefully considering each question.
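One practical way to screen for this "run down the column" pattern is to flag responses whose answers within each topic section show no variation at all. This is a hypothetical screening rule for illustration, not Xerox's actual method:

```python
def straight_lined(section_answers, min_questions=3):
    """Flag a topic section whose answers are all identical (e.g. all 4s),
    which suggests the respondent filled a column without reading."""
    return (len(section_answers) >= min_questions
            and len(set(section_answers)) == 1)

def suspect_response(sections):
    """A whole response is suspect if every topic section is straight-lined."""
    return all(straight_lined(s) for s in sections)

# A respondent who marked 4 for every question in both sections
print(suspect_response([[4, 4, 4, 4], [4, 4, 4]]))   # True
# A respondent with varied answers in at least one section
print(suspect_response([[4, 4, 4, 4], [2, 5, 3]]))   # False
```

In practice a rule this strict would still miss near-patterns (alternating 4s and 5s, say), but even a simple check like this catches the column-filling behaviour described above.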
TELEPHONE SURVEYS Though a telephone survey yields more accurate responses than a paper survey, it may likewise have inherent flaws that impede quality results, such as:
First, when a respondent's identity is clearly known, concern over the possibility of being challenged or confronted with negative responses at a later date produces a strong positive bias in their replies (the so-called "Cassandra Phenomenon").
Second, studies have shown that people become friendlier as a conversation grows longer, thus influencing their responses to later questions.
Third, human nature says that people like to be liked. Gender biases, accents, perceived intelligence and compassion therefore all influence responses. Similarly, senior managers' egos often emerge as they try to convey their wisdom.
Fourth, telephone surveys are an intrusion on a senior manager's time. An unannounced phone call may create an initial negative impression of the survey, and many respondents may be partly focused on the clock rather than the questions. Optimum responses depend on a respondent's clear mind and spare time, two things senior managers often lack. In a recent multi-national survey in which targeted respondents were offered the choice of a telephone interview or other methods, ALL chose the other methods.
Taking precautionary steps, such as keeping the survey brief and using only highly-trained callers who minimise idle conversation, can help mitigate the issues above, but will not eliminate them.
The objective of any survey is to capture a representative cross-section of opinions across a group of people. Unfortunately, unless most of the group participates, two factors will skew the results:
First, dissatisfied people tend to answer a survey more often than satisfied ones, because human nature encourages the "venting" of negative emotions. A low response rate will therefore usually produce more negative results (see drawing).
Second, a smaller portion of a population is less representative of the whole. For instance, if 12 people are asked to take a survey and 25% respond, then the opinions of the other nine people are unknown and could be entirely different. If 75% respond, however, then only three opinions are unknown, and the other nine are far more likely to represent the opinions of the whole group. One can assume that the higher the response rate, the more accurate the snapshot of opinions.
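The arithmetic behind this example can be made explicit. A minimal sketch, using the same 12-person group as the text:

```python
def unknown_opinions(population, response_rate):
    """Number of people whose opinions remain unknown
    at a given response rate."""
    responded = round(population * response_rate)
    return population - responded

# 12 people, 25% respond: 3 answers received, 9 opinions unknown
print(unknown_opinions(12, 0.25))   # 9
# 12 people, 75% respond: 9 answers received, only 3 opinions unknown
print(unknown_opinions(12, 0.75))   # 3
```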
Totally Satisfied vs. Very Satisfied ……Debates have raged over the scales used to measure degrees of customer satisfaction. Recently, however, research has shown that a "totally satisfied" customer is between three and ten times more likely to initiate a repurchase, and that measuring this "top-box" category is considerably more precise than any other means. Surveys that measure the percentage of "totally satisfied" customers, rather than the traditional sum of "very satisfied" and "somewhat satisfied", therefore provide a far more accurate indicator of business growth.
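The difference between "top-box" scoring and the traditional combined measure can be shown on a standard five-point scale. The response counts below are invented for illustration:

```python
# Counts on a 5-point scale: 5 = totally satisfied ... 1 = totally dissatisfied
responses = {5: 40, 4: 35, 3: 15, 2: 7, 1: 3}
total = sum(responses.values())

# "Top-box": percentage of totally satisfied customers only
top_box = 100 * responses[5] / total

# Traditional measure: totally + somewhat satisfied combined
combined = 100 * (responses[5] + responses[4]) / total

print(f"top-box {top_box:.0f}%, combined {combined:.0f}%")
# top-box 40%, combined 75%
```

The combined figure looks comfortably high while the top-box figure shows that most customers are not in the category most likely to repurchase, which is exactly the gap the top-box measure is meant to expose.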
Other scale issues…..There are further rules of thumb often used to ensure more valuable results:
Many surveys offer a "neutral" choice on a five-point scale for those who may not wish to answer a question, or who are unable to make a decision. This "bail-out" option reduces the number of opinions gathered, thus diminishing the survey's validity. Surveys that instead use "insufficient information" as a more definitive middle-box choice push a respondent to make a decision, unless they genuinely lack the knowledge to answer the question.
Scales of 1-10 (or 1-100%) are perceived differently by different age groups. People who were schooled under a percentage grading system often regard 59% as "flunking". These deep-rooted tendencies can skew different people's perceptions of survey results.
There are several additional details that can enhance the overall polish of a survey. While a survey should be an exercise in communications excellence, the experience of taking it should also be positive for the respondent, as well as valuable for the survey sponsor.
First, People – Those responsible for acting on issues revealed by the survey should be fully involved in the survey development process. A "team leader" should be responsible for ensuring that all pertinent business categories are included (up to 10 is ideal), and that designated individuals take responsibility for addressing the results for each Key Attribute.
Second, Respondent Validation – Once the names of potential survey respondents have been selected, each is individually called and "invited" to participate. This step ensures the person is willing to take the survey and elicits a commitment to do so, thus improving the response rate. It also verifies that the person's name, title, and address are correct, an area in which inaccuracies are commonplace.
Third, Questions – Open-ended questions are typically best avoided in favour of simple, concise, single-subject questions. The questions should also be randomised, mixing the topics so that the respondent is continually thinking about a new subject rather than building on a response to the previous question. Finally, questions should be phrased in positive tones, which not only helps maintain an unbiased and uniform attitude while answering the survey, but also allows uniform interpretation of the results.
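Randomising the question order to mix the topics is straightforward to implement. A minimal sketch; the questions are invented, and the seed is fixed only to make the example reproducible:

```python
import random

questions = [
    ("Sales", "How satisfied are you with the responsiveness of our sales team?"),
    ("Sales", "How satisfied are you with the clarity of our quotations?"),
    ("Operations", "How satisfied are you with on-time delivery?"),
    ("Operations", "How satisfied are you with order accuracy?"),
]

rng = random.Random(7)      # fixed seed for a reproducible example
shuffled = questions[:]     # copy, so the master list keeps its topic grouping
rng.shuffle(shuffled)       # mix topics so answers don't build on each other

for topic, text in shuffled:
    print(f"[{topic}] {text}")
```

The shuffled list still contains exactly the original questions; only the presentation order changes, so results can be regrouped by topic for analysis afterwards.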
Fourth, Results – Each respondent receives a synopsis of the survey results, either in writing or – preferably – in person. Offering at the outset to discuss the survey's results with each respondent generates interest in the process, increases the response rate, and leaves the company with a standing invitation to return to the customer later and close the communication loop. Not only does this provide a way of exploring and dealing with identified issues on a personal level, it also often increases an individual's willingness to participate in later surveys.
A properly structured customer satisfaction survey can offer a wealth of invaluable market intelligence that human nature would not otherwise allow access to. Properly done, it can be a means of establishing performance benchmarks, measuring improvement over time, building individual customer relationships, identifying customers at risk of being lost, and improving overall customer satisfaction, loyalty and revenues. If a company is not careful, however, a survey can become a source of misguided direction, wrong decisions and wasted money.