They ensure the customer satisfaction survey starts out in the right direction and is put on firm ground for future implementations. This involves conducting in-depth interviews, face-to-face or over the phone, with selected customers to make sure you understand what matters most to customers, what their needs are, what they think of your company, and how they feel about providing feedback on your products and services.
This two-way communication early in the process has been found to be a critical success factor for obtaining actionable information and ensuring that your survey asks the right questions of the right people in the right way.
The success of a customer satisfaction survey is measured by the actions it drives. It is critical that the survey ask the right questions and address the needs of customers, but the feedback system also needs to be integrated into the operations of your company. Since your customer satisfaction survey is going to be an ongoing part of running your business, it becomes very important for the survey to have a broad base of support and understanding.
A strategy session should bring together key people from sales, marketing, development and operations for a four- to six-hour meeting to work on the survey objectives and how they tie to the company mission and values.
The tools are not mutually exclusive: a self-completion element could be used in a face-to-face interview, and a postal questionnaire could be preceded by a telephone interview that collects data and seeks co-operation for the self-completion element. When planning the fieldwork, there is likely to be a debate as to whether the interview should be carried out without disclosing the identity of the sponsor.
If the questions in the survey are about a particular company or product, it is obvious that the identity has to be disclosed. When the survey is carried out by phone or face to face, co-operation is helped if an advance letter is sent out explaining the purpose of the research.
Logistically this may not be possible, in which case the explanation for the survey would be built into the interviewer's introductory script. If the survey covers a number of competing brands, disclosing the research sponsor will bias the response.
If the interview is carried out anonymously, without disclosing the sponsor, bias will result from a considerably reduced strike rate or from guarded responses. This is usually overcome by the interviewer explaining at the outset that the sponsor will be disclosed at the end of the interview.
Customers express their satisfaction in many ways. When they are satisfied, they mostly say nothing but return again and again to buy or use more. When asked how they feel about a company or its products in open-ended questioning they respond with anecdotes and may use terminology such as delighted, extremely satisfied, very dissatisfied etc.
Collecting this motley variety of adjectives from open-ended responses would be problematic in a large survey. To overcome this problem, market researchers ask people to describe a company using verbal or numeric scales with words that measure attitudes. People are used to the concept of rating things with numerical scores, and these can work well in surveys. Once respondents have been given the anchors of the scale, they can readily give a number to express their level of satisfaction.
Typically, scales of 5, 7 or 10 points are used, where the lowest figure indicates extreme dissatisfaction and the highest extreme satisfaction. The stem of the scale is usually kept short, since a longer scale would prove too demanding for rating the dozens of specific issues that are often on the questionnaire. Measuring satisfaction is only half the story: the measurement of expectations or importance is more difficult than the measurement of satisfaction. Many people do not know, or cannot admit even to themselves, what is important.
Consumers do not spend their time rationalising why they do things, their views change and they may not be able to easily communicate or admit to the complex issues in the buying argument.
The same interval scales of words or numbers are often used to measure importance, with 5, 7 or 10 meaning very important and 1 meaning not at all important. However, most of the issues being researched are of some importance; otherwise they would not be considered in the study.
As a result, the mean scores on importance may show little differentiation between vital issues such as product quality, price and delivery, and nice-to-have factors such as knowledgeable representatives and long opening hours. Ranking can indicate the importance of a small list of up to six or seven factors, but respondents struggle to place things in rank order once the first four or five are out of the way.
It would not work for determining the importance of 30 attributes. Derived importance is calculated by correlating the satisfaction level of each attribute with the overall level of satisfaction. Where there is a high correlation with an attribute, it can be inferred that the attribute is driving customer satisfaction. The scores achieved in customer satisfaction studies are used to create a customer satisfaction index, or CSI.
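A minimal sketch of this derived-importance calculation, assuming each respondent rates several attributes plus overall satisfaction (the attribute names and scores below are hypothetical illustrations, not data from any real survey):

```python
# Derived importance: correlate each attribute's satisfaction scores
# with overall satisfaction across respondents (Pearson correlation).
# Attribute names and ratings below are hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

overall = [9, 4, 7, 8, 5, 9]          # overall satisfaction per respondent
attributes = {
    "product quality": [9, 3, 7, 8, 4, 9],
    "opening hours":   [5, 6, 5, 6, 5, 6],
}

# A high correlation suggests the attribute is driving overall satisfaction;
# a near-zero one suggests it is not, however well it scores on its own.
derived = {name: pearson(scores, overall) for name, scores in attributes.items()}
```

In this toy data, product quality tracks overall satisfaction closely while opening hours does not, so product quality would be inferred to be the driver.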
There is no single definition of what comprises a customer satisfaction index. Some use only the rating given to overall performance. Some use an average of the two key measurements: overall performance and the intention to re-buy (an indication of loyalty). Yet others bring together a wider basket of issues to form a CSI.
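As a sketch, a CSI built from the two key measurements might look like the following; this is one simple illustrative definition among the many mentioned above, and the respondent data are hypothetical:

```python
# One common form of CSI: the mean of overall performance and
# intention to re-buy, both rated on the same 10-point scale.
# The respondent data here are hypothetical.

def csi(overall_performance, intent_to_rebuy):
    """Average the two key measurements for one respondent."""
    return (overall_performance + intent_to_rebuy) / 2

respondents = [(9, 8), (7, 6), (10, 10), (5, 4)]   # (performance, re-buy)
scores = [csi(perf, rebuy) for perf, rebuy in respondents]
company_csi = sum(scores) / len(scores)            # index for the whole sample
```

A wider-basket CSI would simply average more components, usually with weights reflecting their importance.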
The average or mean score of satisfaction given to each attribute provides a league table of strengths and weaknesses. As a guide, a consistent interpretation of scores emerges across many different satisfaction surveys. Someone once told me that the halfway point in a marathon is 22 miles. Given that a marathon is 26.2 miles, their point was that it requires as much energy to run the last 4.2 miles as the first 22. The same principle holds in the marathon race of customer satisfaction.
The halfway point is not a mean score of 5 out of 10 but 8 out of 10. Improving the mean score beyond 8 takes as much energy as it does to get to 8, and incremental points of improvement are hard to achieve. It is argued that scores of 9 or 10 are required to create genuine satisfaction and loyalty. If suppliers fail to achieve such high ratings, customers show indifference and will shop elsewhere.
Capricious consumers are at risk of being wooed by competitors, readily switching suppliers in the search for higher standards. This raises an interesting question: what is achievable, and how far can we go in the pursuit of customer satisfaction?
As marketers we know that we must segment our customer base. It is no good trying to satisfy everyone, as we do not aim our products at everyone. What matters is that we achieve high satisfaction scores in the segments in which we play. Obtaining scores of 9 or 10 from around half to two thirds of targeted customers, on issues that are important to them, should be the aim. Plotting the customer satisfaction scores against the importance scores will show where the strengths and weaknesses lie (see diagram 2); the main objective is to move all issues into the top right box.
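The importance/satisfaction plot can be reduced to a simple quadrant classification; the threshold and attribute values below are hypothetical choices for a 10-point scale:

```python
# Classify attributes into quadrants of the importance/satisfaction map.
# Attributes in "high importance, low satisfaction" need attention first;
# the goal is to move everything to "high importance, high satisfaction".
# The threshold and data values are hypothetical.

THRESHOLD = 7.5   # cut-off chosen for a 10-point scale

attributes = {           # name: (importance, satisfaction)
    "product quality": (9.4, 8.6),
    "delivery":        (9.1, 6.2),
    "opening hours":   (5.3, 8.8),
}

def quadrant(importance, satisfaction, t=THRESHOLD):
    imp = "high importance" if importance >= t else "low importance"
    sat = "high satisfaction" if satisfaction >= t else "low satisfaction"
    return f"{imp}, {sat}"

for name, (imp, sat) in attributes.items():
    print(f"{name}: {quadrant(imp, sat)}")
```

Here delivery falls into the high-importance, low-satisfaction quadrant, marking it as the priority for improvement.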
XY graph to show where customer satisfaction needs to improve

How To Use A Customer Satisfaction Survey To Greatest Effect

No company can truly satisfy its customers unless top management is fully behind the programme. This does not just mean that they endorse the idea of customer satisfaction studies, but that they are genuinely customer orientated. Northwest Airlines, for example, has an excellent website and a sophisticated telephone sales centre; a senior executive responsible for customer service; a well thought-out frequent flyer programme; and twelve customer service commitments signed by Richard Anderson, its chief executive.
It also appears to be carrying planeloads of disgruntled passengers. The American Customer Satisfaction Index, based on interviews with a random sample of 65,000 consumers, gave Northwest a score of 56 out of a possible 100. So what is Northwest doing wrong? It is salutary to look at what rivals such as Continental Airlines have been doing right. Continental was the success story of the 1990s in the US airline industry and, according to the ACSI, the only scheduled carrier to maintain customer satisfaction throughout the decade.
It makes sense to reward, in Pavlovian fashion, immediately after the event and not six or twelve months down the line when the effect will have been forgotten.
Two companies, both ostensibly committed to customer satisfaction, but one markedly outperforming the other. The customer satisfaction scores are only part of the story. A customer satisfaction index is a snapshot at a point in time.
Developing a customer satisfaction programme is not just about carrying out a customer service survey. Surveys provide the reading that shows where attention is required but, in many respects, this is the easy part.
The most common method for measuring customer satisfaction is a survey, carried out either during or after the call. While this provides some intelligence, customer satisfaction analysis goes a step further for more accurate results: it takes raw satisfaction scores and pairs them with other sources of data to find the root causes driving the scores. Studying the root cause arms companies with the information needed to act. All in all, the key to customer satisfaction analysis is to bring together as much relevant data as you can.
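A minimal sketch of pairing satisfaction scores with another data source to surface a root cause; the operational field (hold time) and all records are hypothetical:

```python
# Pair each satisfaction score with an operational attribute and
# compare group means to surface a candidate root cause.
# The field names and records below are hypothetical.

records = [
    {"score": 9, "hold_minutes": 1},
    {"score": 8, "hold_minutes": 2},
    {"score": 4, "hold_minutes": 11},
    {"score": 3, "hold_minutes": 14},
    {"score": 7, "hold_minutes": 3},
]

def mean_score(rows):
    return sum(r["score"] for r in rows) / len(rows)

long_hold = [r for r in records if r["hold_minutes"] > 5]
short_hold = [r for r in records if r["hold_minutes"] <= 5]

# A large gap between the groups points at hold time as a driver
# of dissatisfaction, which a raw score alone would not reveal.
gap = mean_score(short_hold) - mean_score(long_hold)
```

The same pattern extends to any operational field you can join to a survey response: delivery delay, complaint history, channel used, and so on.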