Sending out a customer or employee survey is just the beginning. The magic happens only when you start to analyze and break down the insights you receive. In this article, we present 5 quick tips on how to analyze survey data.
1. Remind yourself of why you did the survey
If you did your preparatory survey planning, the first thing you should do is refer back to your notes. Reminding yourself of the purpose behind your survey will help you to make sure that you’re getting the most out of your analysis.
- What were your main goals?
- What insights did you hope to receive?
- How do you plan to work with the insights you receive?
Keeping these questions in mind will help you to remain focused when analyzing your survey data. It will allow you to more easily navigate through the feedback to find the data that is most valuable for you. Of course, you may also find results that are entirely unexpected, but it’s wise to keep your initial intentions in mind when you first begin your analysis.
2. Consider how the survey performed
While survey performance isn’t directly linked to how to analyze survey data, it can a) offer some additional insights when considering your results and b) serve as a useful guide for your next feedback project.
- First consider your response rate. Was it as expected, maybe better, or worse? Can you find any reasons why?
- Was there a high drop-off rate for any questions? This is great to know when you’re creating your next survey. It helps you to try and correct the issue so more people complete the survey.
- Did you distribute the survey in different ways? If so, how did the different channels perform?
When it comes to how to analyze your survey data, performance can also be important when considering the validity of the feedback you receive. For example, if very few people responded, do you have enough data on which to base a solid conclusion?
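To put a number on that worry, a rough rule of thumb is the margin of error for an estimated proportion. The stdlib Python sketch below is illustrative only (the 1.96 factor corresponds to 95% confidence, and the sample sizes are made up); it shows how quickly small samples get fuzzy:

```python
import math

def margin_of_error(sample_size, z=1.96, proportion=0.5):
    """Worst-case 95% margin of error for an estimated proportion."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# 50 responses -> roughly +/-13.9 percentage points;
# 400 responses -> about +/-4.9 points.
print(round(margin_of_error(50) * 100, 1))   # 13.9
print(round(margin_of_error(400) * 100, 1))  # 4.9
```

A survey with only a few dozen responses can still be directionally useful, but conclusions that hinge on a few percentage points need a much larger sample.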
3. Get an overview of the results
Start with an overview of all the data you received. What was the percentage distribution among parameters such as gender, age, geographic location, departments or industry? Then you can continue to analyze questions that will give you the real value of the survey while keeping this background data in mind.
1. How content are your employees overall with their workplace?
2. How many of your customers have bought your new product?
3. How many in your target group have actually seen your latest ad campaign?
4. Take notes on the most interesting results – they are a good starting place when it’s time to break down the results.
4. Break down your results and find the hidden treasures
Looking at the overview of your findings will give you a fair insight into the ‘big picture’ of your results. The real treasures, however, are likely to appear when you begin breaking your results down and looking more closely at the data you have received.
An example: You are analyzing the results of a product survey. In the overview you notice that your customers have average knowledge of your latest product, but you want to find out whether this applies to all of them. So you break down the question “Are you aware of our new product X?” by the question in which respondents filled in their region of residence. Now you find that awareness of the new product is much higher in metropolitan areas and below average in the rest of the country. You continue by adding age to the breakdown, and now you see that awareness of your new product is significantly higher among people under 30.
So, what do you know now? The launch of your new product was successful in most parts of the country, but the fact is you seem to have missed people over 30 years old who don’t live in cities. How do you reach them then? Well, that’s up to you, but now when you know who to target, you will save a lot of time and money for the marketing department!
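A breakdown like the one described above is easy to reproduce on exported data. Here is a minimal stdlib Python sketch; the regions, ages, and awareness flags are entirely invented for illustration:

```python
from collections import defaultdict

# Hypothetical responses: (aware_of_product, region, age)
responses = [
    (True,  "metro", 27), (True,  "metro", 24), (False, "rural", 45),
    (True,  "metro", 55), (False, "rural", 62), (True,  "rural", 29),
    (False, "rural", 51), (True,  "metro", 33),
]

def awareness_by(key_fn):
    """Share of 'yes' answers within each segment produced by key_fn."""
    totals, aware = defaultdict(int), defaultdict(int)
    for is_aware, region, age in responses:
        seg = key_fn(region, age)
        totals[seg] += 1
        aware[seg] += is_aware
    return {seg: aware[seg] / totals[seg] for seg in totals}

print(awareness_by(lambda region, age: region))
print(awareness_by(lambda region, age: "under 30" if age < 30 else "30+"))
```

With this made-up data, awareness is 100% in metro areas versus 25% elsewhere, and 100% under 30 versus 40% for everyone else: exactly the kind of hidden pattern an overview alone would miss.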
5. Examine open-text responses carefully to get in-depth insights
This is often the most important part of how to analyze survey data because it gives you a great opportunity to find out why people answered as they did. If you included, as we recommend, the option for respondents to add elaborative comments to some questions, you will find that a surprisingly high number of people like to add a small comment to their answer. Reading these will help you get a picture of why some of the results are high or low. They will probably also give you a bunch of new ideas about how to improve your products or services.
By making a word cloud of all of the written answers to a question, you can get a quick overview of the most prominent words your respondents used. It helps you quickly see whether there are a lot of alarming comments; if “lousy” and “support” both stand out, you should probably start reading the comments soon. (Bonus tip: word clouds also look great in your presentation of the results and help viewers quickly get a picture of the answers!)
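If your survey tool doesn’t build word clouds for you, the underlying word counts are simple to compute. A hedged sketch in plain Python, with made-up comments and a tiny illustrative stopword list:

```python
import re
from collections import Counter

# Hypothetical open-text answers to one question.
comments = [
    "Support was lousy and slow",
    "Great product, lousy support",
    "Love the new design",
]
STOPWORDS = {"the", "and", "was", "a"}  # expand for real data

words = (w for c in comments for w in re.findall(r"[a-z']+", c.lower()))
freq = Counter(w for w in words if w not in STOPWORDS)
print(freq.most_common(3))
```

The top counts are what a word-cloud library would scale into font sizes; here “support” and “lousy” each appear twice, which is the cue to go read the comments.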
Start sharing your results
This is not part of the five-step guide to survey analysis, but maybe even more important: make sure you share your results with every person it concerns. Do it via email, Excel reports, PowerPoint slides, logins directly to survey results or via real-time dashboards. You have done a great job and most likely found feedback and ideas on how the company can improve. Make sure you spread them!
Congratulations on completing your recent employee engagement survey! Getting employee feedback is the first step towards creating a high-trust, high-performing organization.
Now it’s time to:
- Make sense of your data
- Conduct listening sessions
- Decide what focus areas you want to improve on
Focusing on the areas that have the most impact on company culture will increase the likelihood of seeing a tangible ROI. But how do you know which areas those are?
Over the past 11+ years, I’ve worked with hundreds of companies who have used our Trust Index™ employee survey to get feedback and change their cultures. Here’s what I’ve learned about identifying which areas will be most (and least) impactful as you work toward meaningful culture change.
Don’t get distracted by your lowest scoring areas
It may seem counterintuitive, but one of the least effective approaches a company can take is to simply focus on the areas with the lowest scores.
Across companies, employees tend to be most critical about the same topics, including employees at the Fortune 100 Best Companies to Work For®:
- Fair compensation
- Fair promotions
- Workplace politics
These areas often get the lowest satisfaction marks from employees, regardless of company size or industry.
Low scores in these areas should really only be a focal point if they’re low compared to an industry benchmark.
One way to benchmark your employee experience is to compare it against the Fortune 100 Best by using our Trust Index. You may find that what seems like a low score to you is in fact better than many other companies’ scores.
Many companies’ first inclination after seeing their lowest score in these areas is to:
- Increase salaries
- Give bonuses
- Define processes
- Do more staff celebrations
They naturally think these actions are the key to quickly improving their overall employee experience.
While equitable compensation and benefits are essential foundations for employees’ well-being, they alone do not have a high impact on overall employee experience.
It’s easy to understand why companies work on these areas: these are also the most common topics employees bring up in listening sessions.
The question many companies fail to ask is: Why are these the areas our employees are bringing up?
In my experience, employees feel safer speaking up about tangible things such as money, IT equipment, facilities, or time-off instead of intangibles such as whether their bosses treat them with respect.
This is why it’s so important to dig deeper in listening sessions to get beneath the surface and learn what the real issues are. There’s always more to the employee experience than what employees will initially say.
Focus on high-impact areas
Between our Trust Model methodology and my direct experience working with clients, I’ve learned that some areas are more impactful than others on company culture.
If most employees have a consistently positive experience in these high-impact areas, then all areas of the employee experience tend to improve (even those unpopular topics above!):
- Showing appreciation for everyone in the organization
- Seeking and responding to people’s ideas
- Involving people in important decisions
- Making leaders approachable
- Ensuring employees can get straight answers from leaders
I’ve seen dozens of companies focus their efforts on just one or two of these high-impact areas consistently with all their leaders. When they do this well, their next employee survey results show a significant overall improvement, especially when it comes to perceptions of compensation and fairness.
Look at gaps between managerial levels
Rather than focusing on survey scores for individual managers, it is more effective to compare scores between manager levels.
Look for differences between the experiences of:
- Your individual contributors
- Their managers
- Other leaders in the organization, including executives
How widely does the workplace experience vary between these groups?
Once you identify that a particular level is having a less positive experience, you can support their leaders to improve. This approach helps to increase leaders’ accountability by clearly showing them how they influence their employees’ experiences.
Our research shows that reducing gaps between managerial levels leads to increased revenue growth and innovation. When experiences vary widely depending on the job level, organizations miss the benefits of agility and adaptability.
Make choices you can sustain
When defining your actions for improvement, it’s important to choose only one or two areas where you can continually support as many leaders as possible to deliver a better experience to their teams. This is far more impactful than a series of one-time initiatives.
Exchange knowledge among leaders
One of the most powerful things I see companies doing is sharing insights from their “pockets of greatness” with the rest of the organization. In almost all companies there are positive examples already happening — you simply need to uncover them!
At the Best Workplaces™, leaders leverage the knowledge of managers who are already creating great experiences for their teams to help other leaders who want to improve.
By evaluating your employee survey results, sharpening your focus and creating a workable action plan, you can drive steady, sustainable, positive change in your company culture.
Emprising™, our culture management platform, allows you to analyze your data to make these kinds of data-driven people decisions. If you’re not analyzing these focus areas in your employee survey, that’s OK. Reach out to us if you would like to learn more about Emprising and our process here.
Lorena Martinez is a former employee survey implementation consultant at Great Place to Work®. With a background in change management consulting and culture transformation, Lorena helped drive global growth within Great Place to Work by building strategic business transformation capabilities in international key markets.
Sarah Cho
Greetings, dear customers! The brand new Survey Research team here at SurveyMonkey is making its official debut on the blog today and we couldn’t be more excited to share best practices and tips with all of you. Let’s jump right in, shall we?
Using SurveyMonkey Audience, a powerful tool for targeting a specific demographic, we recently launched a political survey in seven swing states asking people their opinions on the Affordable Care Act, aka “Obamacare.” Each of these states was identified by the nonpartisan online newsletter Cook Political Report as having competitive statewide races in the 2014 election year.
We wanted to dig deeper into one of the open-ended questions we asked on the survey, and let you in on how you can make sense of those responses and ensure the quality of your data. After all, smart data leads to even smarter decisions.
Here’s our open-ended question: “In a few words how would you describe your feelings about the health care law known as Obamacare?” And a peek at some of the responses we received:
One of the great things about open-ended questions? People aren’t limited to a predetermined set of possible answer choices, so you end up collecting a rich pool of genuine opinions from folks on your survey topic. However, they also present an analytical challenge: just how do you make sense of all these unique answers?
For starters, thanks to our partnership with NVivo you can easily import your SurveyMonkey data into the NVivo platform to analyze the text. Alternatively, you can do a basic analysis right inside our Analyze tool and categorize the responses to provide not only a detailed picture of what people’s opinions are in their own words, but also to know how many people feel that way. To use the Categorize feature, just tick the box next to each response in order to place it into a category. So! Ready to become an open-ended expert?
Five tips to get you started
- Read through a couple of responses to get a sense of what folks are saying. As always, know your data. By glancing at the images of the responses above, you’ve already done the first step.
- Map out a few general categories to put each of the responses in. As you read the responses above, you can already see that folks are pretty much divided into three camps–those that like the law, those that hate it, and those who don’t have an opinion either way–so we created three categories: positive opinions about the law, negative opinions, and neutral opinions.
- Create sub-categories underneath your general ones to provide even richer detail. As you move along and put responses in their respective buckets (positive, negative, or neutral in this example), you’ll see some recurring themes popping up within each general category. Group and tag those themes together to create new sub-categories. For example, we saw lots of respondents saying that the law was “socialistic,” so we created a sub-category for that underneath “negative feelings.” This extra information is totally optional, so if you don’t need this level of detail, feel free to skip on down to step 4.
- Double check and re-categorize. Go back and re-read responses to make sure they properly fit in the categories that you’ve assigned them to. If you’ve added sub-categories along the way, you’ll be able to tag previous responses to the new sub-categories on this second look-through. Also, don’t be afraid to assign multiple categories to one response since often open-ended comments cover more than one category. But note that since each response can be assigned to multiple categories, your percentages may not add up to 100.
- Put a number on it! Congrats, you’re done! Click over to the My Categories tab in your question summary to see the percentages. After finishing our open-ended response analysis in our political survey, we found that, among those we surveyed, 35% had positive things to say about the law, 55% had negative comments, and 10% had neutral comments, similar to national opinion on the law.
Since we added extra sub-categories, we were able to tell a more detailed story than if we had only looked at positive, negative and neutral as categories. For example, 10% of folks mentioned that the law was a good idea, but that due to the technical problems with the roll-out of the Obamacare website, the law needed work.
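The arithmetic behind those percentages is simple to sketch. Assuming each response carries one or more category tags (the tags below are hypothetical, not from the actual survey), the counts look like this in plain Python:

```python
# Hypothetical: each response may carry multiple category tags.
tagged = [
    {"negative"}, {"positive"}, {"negative", "socialistic"},
    {"neutral"}, {"positive", "needs work"}, {"negative"},
]

n = len(tagged)
counts = {}
for tags in tagged:
    for t in tags:
        counts[t] = counts.get(t, 0) + 1

percentages = {t: round(100 * c / n, 1) for t, c in counts.items()}
print(percentages)
```

Because a single response can land in several categories, the percentages here sum to more than 100, which is exactly the caveat noted in step 4 above.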
That’s it for today, everyone. We hope these tips help out the next time you want to make sense of your open-ended responses. And don’t be shy; let us know your thoughts!
We’re looking for more of your questions on all things Survey Science. Need advice on how to keep your next survey project methodologically sound? Please let Sarah and the Survey Research team know in the Comments section below.
You can view and analyze your results at any time during the collection process in the Analyze section of the survey. Here you can see a summary view of your data; browse individual responses; create and export dynamic charts; use filter, compare, and show rules to analyze specific data views and segments; view and categorize open-ended responses; and easily download your results in multiple formats.
Viewing Survey Responses
You can view summaries of each survey question, or browse through individual survey responses.
Viewing Question Summaries, the default Analyze view, gives you quick insight into the overall results of your survey. For close-ended questions, dynamic charts are generated automatically for visual analysis.
Viewing Individual Responses is useful if you’d like to view each respondent’s complete set of answers to your survey. Additionally, each individual response includes respondent metadata, allowing you more insight into:
- Who submitted the response (if you tracked responses)
- The collector through which the response was submitted
- The start and end date and time
- The time spent entering a response
- Response completeness
- The IP address of the respondent
To browse individual responses, click the Individual Responses tab toward the top of the Analyze page. Use the left and right arrows to navigate through each response.
You can view open-ended responses under the Question Summaries tab and under the Individual Responses tab. When you view an open-ended question in the Question Summaries area, you may need to click the Responses link to view all responses.
With some paid plans, you can use the text analysis features to identify and tag recurring words or themes in your responses.
Using Rules to Analyze Data
After viewing the overall Question Summaries, you can create rules to answer more specific questions about your data. Filter, Compare, and Show rules allow you to focus in on specific subsets of your data, so you can analyze your results in a way that’s most meaningful to you.
Paid users may create an unlimited number of rules. Free users may create one rule.
Use filter rules to focus on a specific subset of your data based on certain criteria that you define. When a filter is applied, only results that meet those criteria will show in the filtered view. Filters carry over to both the Question Summaries and Individual Responses tabs.
For example, if you only want to view responses submitted within a certain time frame, create a Filter by Time Period. If you only want to see responses from Females who submitted responses within that time frame, create a Filter by Question & Answer and apply both filters at once.
You can filter your results by:
- Question & Answer
- Time Period
- Respondent Metadata
To create a Filter rule, click +FILTER in the left sidebar.
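If you later export the raw responses, the same stacked-filter idea (a time window plus a question-and-answer condition) is just a pair of boolean tests. A plain-Python sketch on invented rows; the field names here are assumptions for illustration, not SurveyMonkey’s actual export schema:

```python
from datetime import datetime

# Hypothetical exported responses.
responses = [
    {"gender": "Female", "submitted": datetime(2024, 3, 2), "score": 9},
    {"gender": "Male",   "submitted": datetime(2024, 3, 5), "score": 6},
    {"gender": "Female", "submitted": datetime(2024, 4, 9), "score": 7},
]

# Filter by Time Period: keep only March submissions.
start, end = datetime(2024, 3, 1), datetime(2024, 3, 31)
in_window = [r for r in responses if start <= r["submitted"] <= end]

# Filter by Question & Answer, applied on top of the first filter.
females_in_window = [r for r in in_window if r["gender"] == "Female"]
print(len(females_in_window))  # 1
```

Applying both filters at once, as the example above describes, is equivalent to chaining the two list comprehensions.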
Compare rules allow you to cross-tabulate your data to compare the answer choices to one question across the rest of the survey. In statistical terms, it is a joint distribution between two (or more) discrete variables such as product usage and demographics.
For example, if you included a survey question asking respondents to select their gender, you can create a Compare rule to cross-tabulate and compare the survey results from each gender side by side.
To create a Compare rule, click +COMPARE in the left sidebar.
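Outside the Analyze tool, a cross-tabulation is simply a count over pairs of answers. A minimal stdlib Python sketch on invented exported rows:

```python
from collections import Counter

# Hypothetical rows: (gender, answer to "Would you recommend us?")
rows = [
    ("Female", "Yes"), ("Male", "No"), ("Female", "Yes"),
    ("Male", "Yes"), ("Female", "No"), ("Male", "No"),
]

# The Counter over pairs is the joint distribution of the two variables.
crosstab = Counter(rows)
for (gender, answer), count in sorted(crosstab.items()):
    print(f"{gender:6} {answer:3} {count}")
```

Each cell of the resulting table is how many respondents fall into that combination, which is exactly what a Compare rule displays side by side.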
You can use Show rules to display only certain survey questions or pages in the result summary. If you’re only interested in analyzing certain questions or pages in your survey at a time, creating a Show rule will help you focus on those parts of the survey without the clutter of the rest of your survey.
For example, if you used skip logic in your survey to direct certain respondents to different questions based on their answers to previous questions, you can create a Show rule that contains only the questions included in that logic path for easier analysis.
To create a Show rule, click +SHOW in the left sidebar.
A View is a snapshot of your data made up of any Filter, Compare, or Show rules that you apply to the survey results. With any paid plan, you can save views so that you can easily toggle back and forth between different views of your data at any time, without having to constantly recreate them.
With any paid plan, you can download your results in a variety of formats. You can keep an offline copy of your survey results, send the exports to others, download individual responses for printing, or export your raw data for further analysis.
Exports are available on paid plans.
Summary data exports contain the response percentages, response counts, and open-ended responses (optional). The PDF, PPT, and Excel exports also include presentation-ready graphs and charts.
To export Summary Data, click the Save As button in the upper right corner of the Analyze page, select Export file, and select All summary data.
All responses data exports allow you to download your survey’s raw data for further analysis. All responses data spreadsheet exports are available in many formats. You may also download individual responses in PDF format for easy printing and sharing.
To export All Responses data, click the Save As button in the upper right corner of the Analyze page, select Export file, and select All responses data.
Exporting Individual Responses produces a PDF document that contains each respondent’s full answer set, as well as respondent metadata. You can also export one single response under the Individual Responses tab in the Analyze section.
To export Individual Responses to PDF, click the Save As button in the upper right corner of the Analyze page, select Export file, and select All individual responses.
Under the Question Summaries tab (the default view), charts are generated for each close-ended question and numerical textbox question in the survey.
To export a chart:
- Click Export in the upper right corner of the question.
- Choose Question chart only.
- Click Export.
This channel is underwritten by QuestionPro, makers of online survey software that allows users to generate the insights they need to make better business decisions. The software includes polling, tablet and smartphone research, and data visualization for analysis.
Surveys can be a great source of information about your customers or your employees. But in order to get the most out of that information, you need to be able to analyze and interpret the results. Finding the most valuable information within piles of survey results requires some work. Here are a few tips to help you find that information and use it to improve your business.
Choose Questions Based on What You Want to Learn
Accurately analyzing results actually starts before you even receive the responses. When crafting your survey, it’s important that you first have a clear, single goal in mind. Then, write your questions in a way that will get you the information you need while also being sure they are all relevant to the goal you have for the survey.
For example, if you need to determine an accurate median age or income level for your respondents, don’t include ranges for them to choose from. If you don’t know the exact numbers, you can’t accurately determine a median or average. Tailor your questions to the exact type of information you are looking to find.
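The point about ranges is easy to see with a tiny example: a median is only well defined when you have the exact values. A sketch with invented ages:

```python
from statistics import median

# Exact answers: the median is well defined.
exact_ages = [24, 31, 37, 42, 58]
print(median(exact_ages))  # 37

# With range answers such as "30-39" you only know the bucket, so the
# best you can do is approximate with bucket midpoints; the true median
# of the underlying ages is lost.
```

If precision matters for your goal, ask for the number itself (with an open input field) rather than a bracket.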
Do a Quick Review of the Results
It might sound obvious that the first step in analyzing results once you receive them is to read them. But it can be tempting for some researchers to immediately begin organizing and categorizing results. A quick read-through, however, can help you get the overall picture of the results, ensure that you don’t miss anything important, and also help you avoid bias.
Sometimes, when people perform surveys, they go into it with a hypothesis about the results. When collecting and analyzing the results, it’s important not to just jump right in and see if your hypothesis is correct. People might have differing views than you on the topic of your survey, or they might have something important to add about another area of your research altogether. Don’t miss important information by skipping ahead.
Once you have gone through all of the results, then it’s time to find patterns. Depending on what type of questions and format you used, this could involve counting out the responses or going through the basic stats on your online survey software.
While doing this, you need to look for the most popular responses among your respondents. But also keep an eye out for surprises. For instance, if the majority of the respondents in your customer satisfaction survey seem satisfied in most areas, but unsatisfied in one, that’s probably where you should focus your energy.
Don’t be shy to use what can look like advanced analytics from survey software. For example, QuestionPro offers banner tables and crosstabs that can be used to segment your data based on answers to other questions. This could allow you to see data based on demographic information, or even to see how people who said they were unsatisfied with one area answered other questions, giving you clues about what might be affecting their dissatisfaction.
Create a Visual Representation
It can also be helpful to put the results into visual formats like charts, graphs or word clouds. Seeing the results in one or more of these formats can help you better understand how all the responses measure up against one another.
Determine What Action the Results Warrant
Once you’ve found the patterns, it’s time for you to figure out what to do about them. If one demographic of your customers is dissatisfied with an area of your business, you can create plans to address the issues. You can also use analysis to help drive how to communicate and market to your customers.
Be aware: not all surveys warrant action. If you run an employee satisfaction survey and people seem happy, you might be best served to just maintain the status quo. But all surveys at least warrant consideration of action. So carefully go over the results and keep an open mind about what they might mean.
The survey results are in: Customer feedback gives you valuable information about how to improve your business. The next step is to analyze that survey data. You don’t need an advanced math degree to understand the results. Instead, a few simple tools can uncover the most helpful information.
Analyze four types of survey questions
Now it’s time to look at the information gathered through the survey questions. Customize this analysis based on the type of question. Most survey questions fit into one of these four categories:
- Categorical data. When the customer chooses an answer from a list of responses, this is known as categorical data. For example, “What is your favorite product feature?” might have responses such as ease of use, size, and affordability. Categorical data is simple to evaluate because the analysis involves counting and dividing the information to identify the most popular responses.
- Ordinal data. If the responses fall into a logical order, then you are gathering ordinal data. An example is a survey question like “How often do you visit our restaurant?” with potential answers including rarely, once a year, once a quarter, once a month, and weekly. This information will help you see how often your customer is interacting with your product or service.
- Ratio data. Any question that asks for precise information falls in the category of ratio data. For example, you might ask about the customer’s budget, with an open-ended input field. Ratio data can be helpful in looking at measures of variance or calculating averages.
- Interval data. Using interval questions can be valuable in segmenting your customers so that each group gets relevant questions. For example, you might ask a question about their preferred budget, with potential answers listing predetermined dollar amounts: <$20, $21–$100, $101–$200, and $201+. When the intervals are sized equally, you can calculate data averages to summarize the information.
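To make the four types concrete, here is a small stdlib Python sketch (all answer data invented) showing a sensible summary statistic for each:

```python
from collections import Counter
from statistics import mean

# Categorical: count answers to find the most popular response.
features = ["ease of use", "size", "ease of use", "affordability"]
print(Counter(features).most_common(1))

# Ordinal: order matters, so map answers to ranks before summarizing.
visit_rank = {"rarely": 0, "once a year": 1, "once a quarter": 2,
              "once a month": 3, "weekly": 4}
visits = ["weekly", "once a month", "rarely"]
print(mean(visit_rank[v] for v in visits))

# Ratio: exact numbers support true averages and variance.
budgets = [18.0, 45.0, 120.0]
print(mean(budgets))

# Interval: bucket midpoints give a rough average (the midpoints
# below are illustrative assumptions, and the approximation is only
# reliable when buckets are equally sized).
midpoints = {"<$20": 10, "$21-$100": 60, "$101-$200": 150, "$201+": 250}
answers = ["<$20", "$101-$200", "$21-$100"]
print(mean(midpoints[a] for a in answers))
```

The key design point: the statistic you may legitimately compute depends on the question type, which is why it pays to decide the type before writing the question.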
It’s usually best to look at the numbers before you review long-form answers. Crunching the numbers first means that you have a foundation to use when reading through the open-ended responses.
If you want to gain the most insight from these survey results, include cross-tabulation in your analysis. This process gives the data context so you can see the factors that might affect specific outcomes. Additionally, it can be helpful to distinguish how different groups of people respond.
With cross-tabulation, you can see differing satisfaction levels between age groups or other categories. For example, if you determine that 56 percent of the participants were highly satisfied with the product, and 44 percent were dissatisfied, then consider mapping another variable to identify factors influencing their experience. You might find that the satisfaction levels change with age, budget, or any other category relevant to your target demographic.
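Computed by hand, that mapping is a per-group satisfaction rate. A plain-Python sketch with hypothetical age groups and satisfaction flags:

```python
# Hypothetical: (age_group, satisfied?) pairs from an exported survey.
pairs = [
    ("18-29", True), ("18-29", True), ("18-29", False),
    ("30-49", True), ("30-49", False), ("50+", False), ("50+", False),
]

# Tally (responses seen, satisfied count) per age group.
groups = {}
for age_group, satisfied in pairs:
    seen, happy = groups.get(age_group, (0, 0))
    groups[age_group] = (seen + 1, happy + satisfied)

for age_group, (seen, happy) in groups.items():
    print(age_group, f"{100 * happy / seen:.0f}% satisfied")
```

With this invented data the rate falls from 67% for the youngest group to 0% for the oldest, the kind of age effect the paragraph above describes.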
Tools for survey analysis
Depending on the number of participants and the frequency of survey responses, it can be helpful to use a survey tool to analyze the responses. Manual analysis is feasible if you have fewer than 100 responses. But it can turn into a time-intensive project when you need to evaluate hundreds or thousands of survey responses.
When you consider the time requirements for manual analysis, it’s important to note that this method could actually cost more money overall. Plus, manual methods increase the likelihood of inaccurate analysis. Excel is sometimes used as a DIY analysis tool, but this only works if you know how to use the program’s more technical features.

If you’re wondering how to analyze survey data effectively, you need to start at the beginning: How are you gathering the information? Efficient survey analysis tools include software programs with built-in features for consolidating data and comparing results. The Jotform online survey maker provides a simple solution to share an online survey and collect responses instantly.
How to analyze data with Jotform Report Builder
Jotform Report Builder seamlessly transforms data into charts and graphs that make your findings easy to understand.
All you have to do is create an online form, share it, and then use Jotform Report Builder to build an elegant presentation using the data you collected through your form.
You can automatically generate a professional-looking report — with either one or two graphs per page — or start with a blank slate.
Regardless of which you choose, Jotform Report Builder has a wide variety of design options that enable you to personalize your report and display data in any number of ways. For instance, you could add a business logo, upload a background photo, change the fonts, and add fun icons to give your report some character.
Along with changing the types of graphs or charts displayed in your report, you can also indicate how many people answered a question, show how many people selected a specific answer, and illustrate how selected answers to a question represent a certain percentage of all responses.
You can even make your data easier to understand by rearranging the order in which it’s displayed and adding a legend or table grid.
Jotform Report Builder can analyze responses submitted within a certain time frame or filter data in your report based on specific answers to certain questions so you can glean meaningful and relevant insights.
After creating a thorough and attractive report that meets your needs, you can download the report as a PDF, print it, share it in an email with a personalized message, distribute it using a dedicated link, or embed it on a website.
There are even safeguards that allow you to control who can view your report, what they can do with it, and how long they can access it.
Whenever someone fills out and submits an online form that’s tied to a report, the information in the report is updated instantly. That means you no longer need to spend valuable time manually recreating a report.
When all is said and done, Jotform stands out as a powerful, all-in-one solution that allows you to gather data and leverage it to seamlessly create professional-looking reports in no time. With a robust set of features that display only relevant data and make it easier to understand, Jotform Report Builder can uncover critical insights that drive informed decisions and inspire creative ideas.
You can analyze survey results at any time during the response collection process by accessing the Analyse Result section. Here you can examine each individual response or combine responses to spot data trends. You can further tabulate them, apply rules, make charts, use filters, and export the results to Excel or SPSS format.
Viewing Survey Responses
Once you have started collecting responses to the survey via Survey Collector, you can view the results in the analysis section. Here you can manually analyze individual responses and also check data trends that indicate whether your customers are happy with you. You can check the responses to each question and each answer choice.
Using rules to manage and analyze data
In the Analyse Results section, you can use rules and combination methods to answer your more complex data needs. Methods like cross-tabulation and derived variables let you check one set of data against another, and filters let you narrow the data down to just the subset you need. You can also check the data from dropouts to further improve your future surveys.
While you can customize and manage the survey responses, you can also export the data in your preferred format. You can download it and share it with others. Read more about exports here.
Now that you have your data, it's time to put it to use. There are quite literally hundreds of things that can be done with your data in order to interpret it, which is why statistics can sometimes be fickle. For instance, I could say that the average weight for a baby is 12 pounds. Based on this number, any person having a baby would expect it to weigh approximately that much. However, depending on the standard deviation (the average difference from the mean), babies might rarely weigh anywhere close to 12 pounds. After all, the average of 1 and 23 is also 12. So here's how you can figure it all out!
Finding the Arithmetic Mean
The mean is the average value. You probably learned this in grade school, but here's a short refresher just in case you've forgotten: to find the mean, add together all the values and then divide by the total number of values.
Here's an example. Suppose our data set contains ten values, and the added total of all x values is 212. Divide the sum of all x values, 212, by the count of 10, and you'll have your mean!
21.2 is the mean of this number set.
Now this number can sometimes be a very decent representation of the data; as in the example of babies' weights above, however, it can sometimes be a very poor representation. To measure whether it's a decent representation or not, standard deviation can be used.
Standard deviation is the average distance numbers lie from the mean. In other words, if the standard deviation is a large number, the mean might not represent the data very well. Whether a standard deviation counts as large is in the eyes of the beholder: it could be equal to one and be considered large, or it could be in the millions and still be considered small. It depends on what's being measured. For instance, when judging the reliability of radiometric dating, the standard deviation might be in the millions of years, but the measurements themselves are on a scale of billions of years, so being a few million off wouldn't be such a big deal. By contrast, if I'm measuring the size of the average television screen and the standard deviation is 32 inches, the mean obviously doesn't represent the data well, because screen sizes do not vary on a very large scale.
From the Survey123 website, you can view any results submitted to your survey, including analyzing the results through graphs, viewing the results on a map, and downloading all collected responses.
You can visualize and explore the results of your survey on the Analyze page of the Survey123 website. The Analyze page summarizes your survey data in charts and tables, providing insight into your survey responses and helping you to identify trends.
Use the Navigation dialog box to hide and show questions, as well as quickly navigate to questions in the survey. You can also filter responses by content or date submitted.
The visualizations available for each survey question depend on the question type. Supported question types and their visualizations are as follows:
- Word cloud view and individual responses
- Column, bar, and pie charts and the map view
- Column and bar charts
- Column, bar, and pie charts and the map view
- Column, bar, and pie charts and the map view
Some question types have additional settings to further control the way responses are presented.
When using the word cloud view for text questions, there is a setting to exclude stop words such as "the" and "is." This option is only available in English.
For numeric questions, you can choose one of five classification methods: Equal interval, Natural breaks (Jenks), Quantile, Standard deviation, and Manual interval, with the default method being Equal interval. You can drag the dividers between the classes to set your own class breaks, which sets the method to Manual interval if not already selected.
For ranking questions, there are settings to show and hide specific answers, as well as to alter the score allocated to each possible answer. By default, the first answer in a response is given the highest score, with each answer afterward scored successively less. For example, in a ranking question with five choices, the choice selected first in a response gets a score of 5, the choice after that a 4, and so on. You can modify these scores on the Analyze page, including assigning a value of 0 to answers that you don’t want factored into results.
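The default scoring rule just described is straightforward to express in code. Here's a short sketch; the choice labels are made up for illustration:

```python
n_choices = 5
ranking = ["B", "D", "A", "E", "C"]  # one respondent's answer, ranked first to last

# The first-ranked choice gets n_choices points, the next one point less, and so on
scores = {choice: n_choices - position for position, choice in enumerate(ranking)}
print(scores)
```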
Print current view
Click the Navigation button to open a checklist of questions, sorted by group. Uncheck any question to remove it from the display of questions on the Analyze page. Show All and Hide All options are also available.
This allows you to narrow down your display for the Print current view button, which prints only visible questions as they currently appear. You can also change the appearance of individual questions, including hiding the results table and categories with no responses in them, which also carries through to the printout.
View all data
On the Data page of the Survey123 website, you can also use the data table to view all collected records in your survey.
The data table lists the answers to all questions, using one column for each question. If your survey uses repeats, the answers to questions within those repeats are available in another tab of the table view. The Layer List icon in the upper right of the map view allows you to show or hide any tables in the feature layer, while the drop-down menu in the upper right of the table view allows you to choose whether selecting a record also selects related records. If you check Form view, clicking a record also shows the answers to all questions for that record, including attachments, to the right of the table. In the form view section, you can also print the results, as well as open the Settings menu to choose whether to display related repeat records and to set the display size of any attached images.
The individual response panel also provides an Edit button, to allow the survey owner or users who have been given permission to edit their own features to edit existing records. The survey owner can also edit responses in the data table by double-clicking an individual field.
For spatial analysis, click Open in Map Viewer. You can also download the results.
The three scenarios where downloading your survey data is useful are as follows:
- Analysis in third-party tools—Stata, SPSS, SAS, Tableau, and Microsoft Excel are examples of the many tools available to analyze data. You can download data captured with ArcGIS Survey123 in formats that these tools understand, such as CSV. You can also download your data in shapefile and file geodatabase formats if you want to use GIS tools compatible with these formats.
- Backup—Download the data you capture so you have an extra copy.
- Enterprise integration—Download your data so you can load it into your own database (Microsoft SQL Server, Oracle, and so on) or refine it before you bring it into your own enterprise system.
To download your data using the Survey123 website, complete the following steps:
Open the Survey123 website. Sign in to your ArcGIS organizational account.
- Percent Agree (78%): An old marketing trick is to summarize the percentage of respondents who agreed with the item. Here, 14 of the 18 respondents chose a 4 or 5 (the agrees), giving 78%.
- Top-Box (56%) or Top-Two-Box (78%) scoring: For 5-point scales, the top box is strongly agree, which generates a score of 56%. The top-two-box score is the same as the agree score.
- Net Top Box (50%): Count the number of respondents who select the top choice (strongly agree) and subtract the number who select the bottom choice (strongly disagree). The popular Net Promoter Score uses a variation on this one (it subtracts the bottom six boxes from the top two). A Forrester annual report called the Customer Experience Index (the CxPi) subtracts the bottom-two responses from the top-two responses.
- Z-Score to Percentile Rank (56%): This is a Six Sigma technique. It converts the raw score into a normal score, because rating scale means often follow a normal or close-to-normal distribution. We just need a reasonable benchmark to compare the mean to. I’ve found that 80% of the number of points in a scale is a good place to start (a meta-analysis by Nielsen & Levy also found this). For a 5-point scale use a 4 (5*.80=4), for a 7-point scale use 5.6, and for an 11-point scale use 8.8. Next follow these three steps.
- Subtract the benchmark from the mean: 4.167-4 = .167
- Divide the difference by the standard deviation: .167/1.21 = .138. This is called a z-score (or normal score) and tells us how many standard deviations a score of 4.167 falls above or below the benchmark.
- Convert the z-score to a percentile rank: Using the properties of the normal curve, find what proportion of the area falls below .138 standard deviations above the mean (with a calculator or lookup table). We get .555, or roughly 56%.
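The three steps above can be sketched in Python using the standard library's NormalDist, plugging in the numbers from the 5-point example:

```python
from statistics import NormalDist

mean_score = 4.167  # mean of the 18 responses on the 5-point scale
benchmark = 4.0     # 80% of a 5-point scale (5 * .80)
sd = 1.21           # standard deviation of the responses

z = (mean_score - benchmark) / sd  # how many SDs the mean sits above the benchmark
percentile = NormalDist().cdf(z)   # area under the normal curve below z

print(f"z = {z:.3f}, percentile rank = {percentile:.1%}")
```

This reproduces the roughly 56% figure without a lookup table.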
As you can see, many of the methods generate reassuringly similar results. Here’s another example using 15 responses to a 7-point scale on perceived ease of use:
7, 5, 2, 3, 6, 1, 5, 7, 7, 6, 6, 6, 7, 7, 6
This generates a mean of 5.4 and a standard deviation of 1.92.
I’ve summarized the results in the table below along with the results of the five point scale.
                5-Point Example   7-Point Example
Percent Agree   78%               80%
Top-2-Box       78%               67%
Top-Box         56%               33%
Net Top Box     50%               27%
Z-Score to %    56%               46%
CV              29%               36%
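The 7-point figures can be reproduced directly from the fifteen responses listed earlier. Here's a sketch, using the 5.6 benchmark from above and the coefficient of variation (CV = standard deviation divided by the mean):

```python
import statistics
from statistics import NormalDist

responses = [7, 5, 2, 3, 6, 1, 5, 7, 7, 6, 6, 6, 7, 7, 6]
n = len(responses)

mean = statistics.mean(responses)  # 5.4
sd = statistics.stdev(responses)   # ~1.92

percent_agree = sum(r >= 5 for r in responses) / n  # a 5, 6, or 7 counts as agreement
top_box = sum(r == 7 for r in responses) / n
top_two_box = sum(r >= 6 for r in responses) / n
net_top_box = (sum(r == 7 for r in responses) - sum(r == 1 for r in responses)) / n
z_to_pct = NormalDist().cdf((mean - 7 * 0.80) / sd)  # benchmark = 5.6
cv = sd / mean

print(f"agree {percent_agree:.0%}, top-2 {top_two_box:.0%}, top {top_box:.0%}, "
      f"net-top {net_top_box:.0%}, z-to-% {z_to_pct:.0%}, CV {cv:.0%}")
```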
Which is the best approach?
The “best” approach depends on the context and your situation. I’ve used all these at some point but I prefer the z-score approach for three reasons.
- It’s the only metric that includes variability in the score.
- It offers the most precision because it uses the mean.
- It tends to generate results in the middle of the others.
However, there are times when executive comprehension is more important than statistical precision. If you find it hard to explain the z-score approach and are unsure whether others will be comfortable with it, one of the other approaches will generate similar results (albeit less precisely).
The metrics are even more meaningful with confidence intervals, but that’s a topic for another blog. To help you get started, you can download an Excel file with the appropriate calculations for 5 and 7 point scales.
HR analytics is quickly becoming a powerful tool for CHROs to apply survey data analysis to strategic decisions and business outcomes. Once your employee survey results are in, it’s time to ensure that your survey analysis yields an intelligent, business-oriented interpretation of the data. You’ll need to decide whether or not to use benchmarking for tracking progress, and understand how to use results to drive change and improvement.
- Intelligent survey data analysis with HR analytics
Having reliable data to inform strategic workforce decisions means understanding the relationship between your survey results and the underlying issues impacting your organization. HR teams need to be more data-driven in their survey analysis, relying not merely on descriptive analytics but focusing on predictive analytics. Thus, survey data analysis with HR analytics should connect your survey data to actionable results that lead to better business decisions as well as more engaged employees who are equipped with the right tools to improve.
How will you structure your results? Are you looking for insight at organizational, team and individual levels?
If you’re designing your own employee engagement survey, keep in mind that the most useful results are those that are tailored and personalized for each team and every employee. Professional surveys often offer greater insight at this stage. Effectory’s reports, based on years of scientific research, include best practices, guidance, tips, and ideas for improvement to ensure that employees feel that making improvements and contributing to the organization is within their control.
Your survey analysis results should give you a concrete list of what’s working and what’s not working across your organization, rather than vague complaints. Structure your data to reveal:
- The successes
- The areas in need of improvements
Celebrating progress and achievements is as important as addressing underlying issues.
How to gather feedback from your employees
The step-by-step guide to creating your employee engagement survey.
Benchmarking, if done correctly, can be used to measure progress and facilitate further improvements. There are two types of benchmarking that you may want to consider for your survey data analysis: internal benchmarks, in which you compare results of your current survey with those of previous surveys, and external benchmarking, in which you put your results in perspective against a reliable global index.
Benefits of internal benchmarking include:
- Consistent comparisons that reveal trends and priorities within your organization
- Comparisons between teams or across departments to encourage a culture of continuous improvement
In addition to the above, benefits of Effectory’s independent benchmark include:
- Independent and consistent data
- Adjusted for cultural differences
- Comparisons across all regions and specified for 56 major economies
- Detailed assessment of all relevant HR themes (engagement, leadership, etc.)
If you choose to focus on internal benchmarking, ensure that you define your benchmarking metrics scientifically to avoid misleading or biased results. Relying only on internal benchmarks can be limiting if you’re looking for fresh ideas to improve employee engagement, or insightful ways to grow your business and strengthen customer satisfaction.
Many high-performing organizations evaluate the effectiveness of their employee engagement strategies against that of their leading competitors, or other innovative organizations across different industries.
Global Employee Engagement Index™
A comprehensive overview of employee engagement with benchmarks from 57 countries with essential lessons for your HR strategy .
Much time can be wasted in employee survey analysis by drawing hasty, incomplete conclusions. For example, it is common for a theme like remuneration to get a low overall score. No one wants low scores, so you may conclude that you should act on it. This, however, is not always wise.
A good benchmark can tell you where your organisation stands in comparison to others. In this example, an independent benchmark could reveal that despite your low score, your organisation scores better than many comparable organisations. Such survey data analysis insights can really help you decide where to take action, and further ensure that you do not devote unnecessary time and money to an area where it is not needed.
Survey data analysis to set priorities
What do your employees consider important, and in which areas do your teams or your organization score (relatively) low? A statistical program is a useful tool in this prioritisation. It enables you to measure the effect of each factor on various HR themes. In this way, your survey analysis yields a list of priorities showing which aspects employees are proud of and which ones call for improvement.
You can see at a glance where the priorities lie and which points have a direct impact on the way your employees perceive their work. Furthermore, it immediately becomes clear which elements make you stand out as an employer in the labour market.
Compare your current scores with those from the previous survey, and with your competitors’ scores if you are using an external benchmark, in order to follow trends in the survey analysis results. This will provide insight into the effectiveness of the improvement measures you have taken. Once again, communicate this clearly to your entire organization. Make sure your employees can see the impact of the survey results in action!
Keep an eye out for the next article in this series on action planning and the follow up process. Subscribe to our newsletter if you’d like to get the latest HR insight delivered to your email.
Read the whole series on how to create engaging employee surveys:
Book a free demo. See our solutions in action.
Effectory is Europe’s leading provider of employee listening solutions. Schedule a product demo and discover how to enhance your employees’ engagement.
Sending surveys to your customers means so much more than simply crafting questions and sending them out. Your surveys provide you a wealth of information that you can then use to make decisions and changes at your business.
What you do with those survey results is important to your overall success.
In this article, we look at how to analyze survey results to identify improvements.
Study Your Data
Your first step is to analyze the data. Gather your respondents’ answers and compile them in an organized fashion.
Filter your data and separate it so you can analyze the results and then move forward with your own conclusions.
Only once you’ve analyzed and compiled that data can you make conclusions and ultimately a plan of action for improvements.
Present Your Results
When working with your team to identify ways to improve your business, you want to think about your presentation.
There are a few ways to look at your results with your staff:
- Create a chart or a graph. These are easy on the eyes and great for your visual team. Charts and graphs can help your team identify ways to make your business better in a way that is straightforward and easy to understand.
- Create a data table when your information is numerical. This is also easy for team members to gather information.
- Make an infographic. This is another great tool for visual team members. Your staff can easily digest the results so you can get started identifying improvements.
The most important thing you can do with your survey data is analyze it, report it, and then act on it.
When you identify improvements, you can work with your staff to make any needed changes so your next survey comes back with very positive results.
Bottom line: analyzing your survey analytics helps you understand your customers so you can improve your products and services.
Surveys can help you get valuable customer feedback. You can then use this feedback to improve your business. Are you ready to get started with your Survey Town account? Start with your account today.
You want to give your survey respondents the opportunity to answer open-ended questions or elaborate on their responses. But how do you analyze the free-form text data from your survey? I’ll show you three different methods and explain when you might want to use each.
What is Free-Form Text Data from Surveys?
Customer feedback surveys often allow respondents to answer questions in their own words. For example, a question may ask “What is the first brand that comes to mind when you think of insurance?” or “What don’t you like about Tom Cruise?”. The data generated from these questions is known variously as text data, free-form text data, verbatims, and open-ended data. There are three main ways of analyzing such data: coding, text analytics, and word clouds.
How do you analyze free-form text data?
1. Coding
The traditional approach to analyzing text data is to code the data. Coding works as follows:
- One or two people read through some of the data (e.g., 200 randomly selected responses), and use their judgment to identify some main categories. For example, for the question asked about attitudes to Tom Cruise, the categories may be: 1. Like him; 2. Hate him; 3. Don’t know who he is; and 4. Other. The list of categories and their associated codes is known as a code frame.
- Then someone reads all the data text and manually assigns a value or values to each response. The assigned value reflects the code created in the previous stage. If the person said, “I really love Tom!”, the code assigned would be 1. Depending on the data, each response will be assigned either one value (single response), or multiple values (multiple response). In the case of the question “What don’t you like about Tom Cruise?” it would be appropriate to permit multiple responses.
- Variables created in the previous step are then analyzed (e.g., using frequency tables or crosstabs).
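Once every response carries a code, the final step is ordinary quantitative analysis. A minimal sketch of a frequency table over hypothetical single-response codes:

```python
from collections import Counter

# Hypothetical coded responses using the Tom Cruise code frame:
# 1 = Like him, 2 = Hate him, 3 = Don't know who he is, 4 = Other
coded = [1, 1, 2, 4, 1, 3, 2, 1, 1, 2]

freq = Counter(coded)
for code, count in sorted(freq.items()):
    print(f"code {code}: {count} responses ({count / len(coded):.0%})")
```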
2. Text analytics
Text analytics involves using algorithms to automatically convert text to numbers to perform a quantitative analysis. For example, sentiment analysis automatically calculates the sentiment of phrases based on the number of positive and negative words that appear.
All else being equal, text analytics is less informative than coding, as humans are better at correctly interpreting meaning in text than algorithms. For example, it is hard to train a computer to correctly analyze “I love Coke. Not!” or “Coke is wicked.”
However, coding is very expensive, so text analytics is the usual method for larger quantities of text.
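As a toy illustration of the word-counting approach, and of exactly why it struggles with negation, here's a deliberately naive sentiment scorer. The word lists are tiny stand-ins, not a real lexicon:

```python
# Naive sentiment scoring: positive word count minus negative word count.
# The word sets below are illustrative stand-ins for a real sentiment lexicon.
POSITIVE = {"love", "great", "good", "awesome"}
NEGATIVE = {"hate", "bad", "awful", "boring"}

def sentiment(text: str) -> int:
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("I love Coke. Not!"))  # scores +1: the negation is invisible to word counting
```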
3. Word clouds
A word cloud is a visualization that shows all the words in text, packaged closely together, with the font size indicating the frequency with which words appear, with less interesting words (e.g., “the”) automatically excluded. A word cloud of answers to “What don’t you like about Tom Cruise?” is shown below. This is the most simplistic approach to analyzing text data but also the cheapest and fastest.
You can adjust this word cloud to take out words that are not useful, like ‘Tom’ or ‘Cruise’. You can also tidy your word cloud using text analytics or easily show sentiment in word clouds.
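Under the hood, a word cloud is just a frequency count with stop words and dominant-but-uninformative terms removed. A sketch of that counting step over made-up answers:

```python
from collections import Counter

STOP_WORDS = {"the", "a", "is", "and", "i", "his", "are", "all"}
EXCLUDED = {"tom", "cruise"}  # frequent but uninformative, as suggested above

answers = [  # hypothetical open-ended answers
    "I don't like his movies",
    "Tom Cruise seems arrogant",
    "His movies are all the same",
]

counts = Counter(
    word.strip(".,!?").lower()
    for answer in answers
    for word in answer.split()
)
cloud_words = {w: c for w, c in counts.items() if w not in STOP_WORDS | EXCLUDED}
print(cloud_words)  # font size in the cloud would scale with these counts
```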
Your engagement survey results are in. After weeks of brainstorming questions and nudging employees to participate, you’re ready to roll up your sleeves and act on their feedback. To take that step, you’ll need more than a bird’s-eye view of the data.
Survey says: Getting to the bottom of your results requires you to crunch the numbers and read between the lines. Before you draft an action plan, follow these steps to get more out of your engagement survey results.
1. Look at cross-sections.
Right off the bat, you’ll be tempted to look at your overall scores. But while high-level averages might give you a pulse on how the organization is doing as a whole, you won’t be able to diagnose issues without more detail. Dive even deeper by filtering your survey results by specific characteristics, including:
- Job category (full-time, part-time, contractor)
- Job level
- Months since last title change
- Office location
- Salary band
In doing so, you might discover both positive and problematic outliers. For example, if certain demographics report feeling disengaged or excluded from company culture, that might call for new diversity and inclusion goals. Conversely, maybe one department or office location is significantly more engaged than the rest of the company. In setting any next steps, you may want to look at what that segment is doing differently.
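Mechanically, filtering by a characteristic is a group-by over a segment field. A minimal sketch with hypothetical scores:

```python
from collections import defaultdict

# Hypothetical (job category, engagement score) pairs on a 1-5 scale
responses = [
    ("Full-time", 4), ("Full-time", 5), ("Part-time", 3),
    ("Contractor", 2), ("Part-time", 4), ("Contractor", 3),
]

by_segment = defaultdict(list)
for segment, score in responses:
    by_segment[segment].append(score)

averages = {seg: sum(s) / len(s) for seg, s in by_segment.items()}
print(averages)  # outliers in either direction warrant a closer look
```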
Have a hypothesis that you’d like to test with your survey? Try to keep potential cross-sections in mind before sending out your next survey. Having these fields in your survey tool or HR information system (HRIS) will make it much easier to reconcile your data and filter results.
2. Consider context and benchmarks.
If you’re seeing a shift in engagement since your last survey, consider what happened in that timeframe. Did your business just go through a major event like a layoff? Conversely, did headcount increase significantly? A change in leadership may have also impacted a department’s scores. Considering scenarios like these will give your analysis context.
Remember that scores can also ebb and flow based on seasonality. You’ll want to compare apples to apples. If year-end is your company’s busiest time of year, make sure you’re looking at fourth-quarter results over the last few years. Seeing change along this dimension gives you something more tangible to work with. For example, while your recent engagement scores may have dipped from last quarter, maybe they marked a significant improvement from last year’s results. Whatever changes you implemented since your last busy season may have had a positive impact.
Similarly, consider industry benchmarks for a sense of what “good” looks like. Alone, your scores in the “collaboration” or “work relationships” category might seem troubling. While you always want to improve, maybe those scores are actually on par or better than the industry average. The added perspective can help you prioritize your action planning. If you use engagement software like Lattice, you’ll have access to these benchmarks for specific questions and your overall results.
3. Read the comments.
When crafting your survey, you should always make it an option to leave comments. While harder to analyze, these notes can give you the most telling and direct feedback. Even subtleties like word choice can speak volumes. Using sentiment analysis software can help you distinguish positive, neutral, and negative responses more efficiently.
For smaller teams, reading the comments might take just a few minutes. It can take hours or even days at larger companies. Either way, don’t skimp on this part of your analysis. You’ll want to identify recurring themes along the cross-sections mentioned earlier (department, age, ethnicity, etc.) for even greater insight. While it might be tempting to focus on negative comments, don’t overlook positive feedback. Those notes can help you identify what’s working and whether you need to apply those practices elsewhere.
Some teams find it helpful to aggregate results using a word cloud. In addition to helping you identify themes, word clouds also make for powerful visuals when it comes time to share your survey results with leadership and the rest of the company.
4. Filter by performance.
Disengagement is always bad news — but when you discover that it affects your high performers the most, that’s really bad news. Cross-reference your survey results with your latest performance ratings to identify pressing attrition risks. If high performers are consistently bringing up things like lack of career growth or development opportunities in their comments, those insights can help you formulate a response.
When you maintain engagement and performance data in separate platforms, it can be hard to reconcile them while maintaining employee anonymity. When these data sets live together in a people management platform like Lattice, you can easily filter results by performance, manager, and other criteria without compromising employee trust. Learn more about our analytics dashboards here.
5. Experiment with visuals.
A picture is worth a thousand words — don’t limit your analysis to spreadsheets. Some insights only become apparent when you visualize the survey data. Import your results into a data visualization tool and experiment with heat maps, scatterplots, spider charts, and other kinds of graphs. In addition to giving you a new perspective on the data, these visuals are powerful storytelling devices when it comes time to present your findings.
If your engagement survey tool has an open API, you’ll be able to export results into your visualization tool of choice. Some HR platforms have built-in dashboards that allow you to visualize data on the fly. Lattice makes it easy to view engagement and performance data using heat maps, nine-box scatterplots, and other visuals.
When you ask employees to complete a survey, they expect that their responses will be put to good use. Analyzing their feedback is the first step in making good on that promise. Next, you’ll need to decide which opportunities to focus on and set goals accordingly. Learn how to make that important leap by reading our ebook, How to Turn Engagement Survey Results Into Action.
Today, we’ll tackle a common problem with importing data to an SQL database, using a real-life example. Suppose your company conducted a survey on the most popular programming trends and preferences, striving to meet the expectations of its users. Your user base was overwhelmingly responsive, as the questionnaire was completed by more than 15,000 people! Unfortunately, someone has to analyze those results—someone like you. So, let’s get right to the task!
Among many others, users had to answer the following questions, with possible answers listed in parentheses:
- How old are you? (17, 19, 25, 42, 33)
- What is your gender? (M, F, N–prefer not to answer, O–other)
- What is your current employment status? (E–employed; U–unemployed)
- What other topics would you like to learn? (Data science, Statistics, Data visualization, etc.)
- Do you have any experience in programming? (Yes, No)
- What is your preferred method of learning? (Course, Video, Articles, Webinar, Mix)
We’ve stored our data in an Excel file called survey.xlsx. Here’s what our table looks like:
Id   age   gender   emp_   …        other_topic    experience   pref_
1    17    M        E      Python   Data science   yes          Video
2    21    F        E      SQL      Statistics     yes          Course
3    24    F        E      Python   Statistics     no           Course
4    38    N        E      Python   Excel          no           Mix
Of course, this is only a small sample of data from a file that has around 15,000 rows containing the results of our survey.
If you already have some experience with SQL, you may know that you can insert existing data into a table with the INSERT INTO command:
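For a single response, the command might look like the following. The spelled-out column names (emp_status, pref_method) are assumptions based on the truncated headers in the sample table:

```sql
-- Manually inserting one survey response (column names partly assumed)
INSERT INTO survey (id, age, gender, emp_status, other_topic, experience, pref_method)
VALUES (1, 17, 'M', 'E', 'Data science', true, 'Video');
```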
If you have no idea how to insert data into a table using SQL, I recommend you take our free Operating on data in SQL course before reading any further. The INSERT INTO command can come in handy if you need to manually create a few rows of data. However, it would be tedious to fill in a table with 100,000 rows of data this way unless you were using a generated script. Fortunately, many database systems already provide a means of importing data from a spreadsheet into a database.
Save XLSX File as a CSV
To make this work, we first need to convert our .xlsx file (or any other common spreadsheet format) to .csv, which is short for comma-separated values (CSV), a format that stores tabular data as plain text. In most spreadsheet applications, all you have to do is click Save As and choose the .csv extension. You may be asked to specify a field delimiter when you do this; you can use any character you like, as long as it does not appear anywhere in the data itself.
Create a Table
Now that our file is ready, we can begin preparing the table that will store all our survey data. We’ll create the table with the same column names as in our CSV file to avoid confusion.
Regardless of their names, the columns must contain the correct data types and be defined in the order in which they appear in the CSV file. As you’ve already seen, each column of our CSV file stores numbers, single characters, or strings. In our case, we’ve opted to use the following data types:
- int for numbers
- char(n) for single-character responses, where n is the exact number of characters stored in a given column's entry
- varchar(n) for strings, where n is the maximum number of characters that may be stored in a given column's entry
- boolean for yes/no responses
Remember, the order of the columns in our table must match the order of the columns in the CSV file.
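Putting those choices together, the table definition could look like this (column names and varchar lengths are assumptions based on the sample data shown earlier):

```sql
-- Columns are declared in the same order as they appear in survey.csv.
CREATE TABLE survey (
    id            int,
    age           int,
    gender        char(1),      -- M, F, N, O
    emp_status    char(1),      -- E or U
    language      varchar(50),
    other_topic   varchar(50),
    experience    boolean,      -- yes/no response
    pref_learning varchar(50)
);
```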
Fill the Table With Data
Finally, we can import data from our CSV into our database. We’ll use the COPY command to do that, like so:
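In PostgreSQL, for example, the command could look like this (the file path is illustrative):

```sql
COPY survey
FROM '/home/user/survey.csv'
CSV HEADER DELIMITER ',';
```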
What exactly does this query do? Basically, we tell our database to copy data from the given path (note the quotation marks) to the survey table, specifying the following parameters:
- CSV: informs the database that the file we're copying from is a CSV file.
- HEADER: indicates that the first row of the file is a header and should be skipped.
- DELIMITER ',': sets the field delimiter to a comma.
Keep in mind that the order in which we list these parameters after the path does not matter. We could've just as well written DELIMITER ',' HEADER CSV, and everything would have worked fine. You also have to remember that the COPY command appends data to the table: every time you use it, the database adds the data to whatever already exists in the table rather than replacing it.
The NULL Parameter
Sometimes, the dataset we’d like to import is incomplete, meaning responses are missing from some of the cells. By default, the COPY command assumes that nulls are represented as empty, unquoted strings in a CSV file—in other words, as just an empty field. Obviously, this is not always the case. Someone could leave the field empty, but another person may write “NO DATA” instead to indicate that a response was not provided. We can handle this by using NULL AS ‘something’ to inform our database what should be interpreted as a null value:
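For instance (the file path is illustrative, as before):

```sql
COPY survey
FROM '/home/user/survey.csv'
CSV HEADER DELIMITER ','
NULL AS 'NO DATA';
```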
In the above example, all CSV fields that have ‘NO DATA’ written inside them will be converted to nulls.
Analyze Your Data
Awesome! Our survey responses are now in the database! The only thing that remains is analyzing the information we’ve gathered. We can query almost anything we’d like. It all just depends on what you’d like to know. The following example is a simple demonstration of some SQL queries you could execute:
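For example, a query along these lines (column names assumed from the sample table shown earlier) counts users interested in Data Science who want to learn Python, split by whether they already have programming experience:

```sql
SELECT experience, COUNT(*) AS user_count
FROM survey
WHERE language = 'Python'
  AND other_topic = 'Data science'
GROUP BY experience;
```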
In the above query, we compared the number of users with prior programming experience who want to learn Python and are interested in Data Science to the number of users without prior experience who share the same interests. It's easy, readable, and simple!
SQL’s quite a powerful and useful language to learn, especially if you plan to analyze data in your line of work. If you’d like to learn about data types, creating tables, and much more, check out our Creating Tables in SQL course. Or, if you’re just getting started with SQL, you can learn the core principles of the language by signing up for our SQL Basics course.
While there's no better way to find out what your customers want than to hear it directly from the source, customer surveys will only help an organization if it can quickly analyze the results, understand the feedback, and translate it into meaningful action that best serves its customer base. If you don't take action on your insights, your customers won't feel heard, and your credibility is at risk.
One of our clients is a large medical membership association with members across the country operating their own clinical practices. Our client needed a clear understanding of how their members were being impacted by COVID-19 and what approaches they were taking to deal with the drastic change in business operations. They created a 50-question survey, with questions about operational changes, client numbers, PPE, and financial impact, to be sent to more than 2,000 members.
Aware of the limitations of their existing survey structure, they knew a survey of this magnitude would take weeks to clean and aggregate the data into a format from which they could draw actionable insights.
Modernizing the Data Architecture For Faster, More Insightful Survey Analysis
Instead of investing in new technology and learning new tools, we took advantage of managed services offered in their Microsoft Azure environment.
With a more flexible and agile data architecture, the process to analyze their survey data was cut down from weeks to under one hour.
Here’s how we did it:
- Azure Data Lake Gen 2 became our landing zone for the raw files output from the vendor.
- The creation of the blob file in the Data Lake kicked off a Trigger in Azure Data Factory which ran an Azure Databricks notebook that used Python to clean the data into an acceptable format before landing it in another Data Lake Gen 2 container.
- Once the file had been manually reviewed by a member of the client’s Economics team, it was loaded into a new container which started another Azure Databricks notebook. This one aggregated and manipulated the data into a new format which was then loaded into a SQL Database.
- We then employed our standard three-tier ETL architecture to move the data through the database and store it in the data warehouse.
Not only did this infrastructure significantly improve the speed with which our client could begin to analyze the data, it also served as a central repository of all survey information. By loading other types of surveys in the same format, we created a central data store which can be used by the client's data scientists to perform more advanced analysis.
Translating Results into Action
The real value of survey results comes when your organization makes positive changes from gleaned insights. Our client is using the survey data to help their members better plan, strategize, and succeed during COVID-19 and beyond:
- Identified and shared strategies working most successfully for some practices that can be replicated by others
- Guided the development of tools and resources to support practices
- Provided economic forecasts to prepare for what’s ahead
- Provided survey data at the state level to help guide local support efforts
Through efficient and informed use of their data, our client is providing value to their customer base during a difficult time and helping shape effective policy moving into the next phase of recovery.
Learn more about responding to customer needs
In this on-demand webinar, data experts take part in an interactive panel discussion on how Customer Behavior, Supply Chain, and HR/People Analytics can help you handle new challenges during the recession.
Quantitative data is information comprising numbers or quantities. This kind of data is used in many research projects and is generally obtained from questionnaires, polls, and surveys. While qualitative surveys ask open-ended questions, quantitative surveys mostly use a closed-ended question format. Quantitative surveys may ask questions like "How much?", "How fast?", or "How often?", or may ask respondents for simple "yes" and "no" answers. There are also scaled-question surveys, where consumers rate their experience with a particular product on a scale, for example, one to ten. Surveys also make it possible to predict customer behavior, such as whether customers will buy a given product. Collecting survey data is one thing, but successfully analyzing it is another, and it can be a challenging task. Here are some steps and methods that can be useful in this process.
1. Sample size
Make sure you have enough feedback on your survey before analyzing the data. The response rate may vary depending on which method you are using to conduct a survey. Online surveys reach a broader audience compared with traditional ones. If you are not sure how many people you need to survey, you can do a sample size calculation. There are many online tools that let you determine the right number of responses.
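As a rough illustration of what such an online calculator does under the hood, here is Cochran's formula with a finite-population correction, sketched in Python (the function name and default values are our own):

```python
import math

def sample_size(population, confidence_z=1.96, margin_of_error=0.05, proportion=0.5):
    """Estimate how many survey responses are needed, using Cochran's
    formula with a finite-population correction.

    confidence_z=1.96 corresponds to 95% confidence; proportion=0.5 is
    the most conservative assumption about response variability.
    """
    # Sample size for an effectively unlimited population
    n0 = (confidence_z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
    # Adjust downward for a finite population
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# For a 2,000-member population at 95% confidence and a ±5% margin of error:
print(sample_size(2000))  # → 323
```

So a survey of the 2,000-member association mentioned elsewhere in this piece would need roughly 323 completed responses for a ±5% margin of error at 95% confidence.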
2. Survey participants and questions
You can get hundreds of respondents to take part in your survey and still not get the results you want. That's because you need a clear target group, especially if you are doing a market survey. For instance, sending surveys to both long-time and new customers might produce different outcomes: these two groups may not feel the same about your product, because one has been using it for a long time and may give a biased response, while the other is a new customer who hasn't yet formed an opinion. You can run two surveys with different sets of questions according to that criterion and get two sets of responses.
3. Cross-tabulation

If your participants come from different backgrounds, demographics, etc., you can use cross-tabulation to analyze the data. This method puts respondents into subgroups according to their place of living, gender, age, social status, and so on. Using cross-tabulation ensures that you know how your target group answered the questions and that your surveys aren't overrun by the "non-target" group. You should have a large enough sample size, because every time you split people into small subgroups, the sample size of each subgroup decreases. As mentioned in step one, a sample size calculation is helpful in this situation.
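As a toy illustration, a cross-tabulation is straightforward to produce with pandas (the data here is made up):

```python
import pandas as pd

# Hypothetical responses: customer type vs. satisfaction
df = pd.DataFrame({
    "customer_type": ["new", "long-time", "new", "long-time", "new"],
    "satisfied":     ["yes", "yes", "no", "yes", "no"],
})

# Rows are subgroups, columns are answers, cells are counts
table = pd.crosstab(df["customer_type"], df["satisfied"])
print(table)
```

Each cell tells you how many respondents in a subgroup gave a particular answer, which is exactly the view you need to check whether your target group and the "non-target" group answered differently.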
4. How are the questions answered?
The next step is to look at the feedback on the different questions that are part of your survey. Check for common responses and see if they are repeated enough times to form a trend. It is generally accepted that you can draw a conclusion from your research survey if you see a trend. It is also possible to compare your survey results with previous ones if they are available. This way, you can check different patterns that have changed between past and current surveys.
5. Use survey analysis tools

Large amounts of data can sometimes make it impossible to do the analysis without a proper tool. SurveyMonkey, Google Forms, Zoho Survey, Typeform, and similar software can make the process fully automated and reduce the turnaround time.
As soon as your survey has gone live and collected one completed response, the results page of your survey goes live too. From the results page, you have access to all of the analytical tools. No need to export data to do the analysis.
The Survature Results Dashboard
Normally, the first bits of information to look at are the operational statistics of the survey. You can see the Start Date, End Date, Mean Completion Time, and Last Response Time. Visited shows how many survey takers arrived at the opening page of your survey. Started counts those who clicked the Continue button to actually start taking the survey. Completed counts those who finished the survey, i.e. saw the closing page. Completion Rate is computed by dividing Completed by Started.
You can then glance through the results page, question by question, to see whether the results confirm what you already know about your business or reveal interesting trends. Don’t worry about the average scores too early. When you have fewer than 15–20 responses, average scores can vary widely with every new survey response.
The next step is to create some segments using the Segment Builder. This, of course, requires that you have prepared your survey with the kind of segments you need. Commonly used segments come in two kinds: how happy/satisfied the participants are vs. who the participants are (such as first-timers or returning customers).
If you have used Data Overlays, you can also segment against the various overlay variables, which can include the customer’s actual annual purchasing amount, how often they have bought a certain product, or how long they have been your customer.
With segments, you can look at AnswerCloud question results and make direct comparisons in the priority charts, just like in this real example from our Tennessee Theatre case study.
The Survature Priority Matrix
You can also use crosstab like in this real example, also from our Tennessee Theatre case study. The table at the bottom shows residual analysis by default.
The Survature crosstab
In the table, the count in each cell is a clickable link that takes you to a segmented results page, such as if you had already created a segment using two variables. For example: first-time customers that think the price is “Lower than I expected” Dashboard Link.
In addition, note that in the Row Selection and Column Selection, you can choose multiple questions, segments, or survey targets to be in the crosstab.
If you have a satisfaction question in your survey, or another question where users are asked to rate something on a scale, you can convert the responses into useful mean values by analysing them with a score. You can then have a column that summarises the satisfaction value of all the data.
Scores allow you to assign a value to each code, including a No Response value.
This worksheet shows how to create a score to analyse the satisfaction values in the Crocodile Rock survey supplied with Snap. It explains how to discard the “Don’t know” responses.
Scores are a way of manipulating your data for analysis. You can assign a score to each question code, and then calculate analyses using the score instead of the code. This is generally used for scoring satisfaction surveys, so that positive ratings are given positive values and negative ratings are given negative values. You can then summarise the whole satisfaction by calculating the mean of all the cases, so you know whether people are generally satisfied or not, by how high or low the mean is.
If people have marked a question as "Don't know" or "No opinion", this has different effects on mean values, according to whether it has a value assigned to it or not (even if that value is 0). For example, suppose you have five respondents to a survey who have given the following satisfaction values (from a range of -2 to +2):
Person 1: -1
Person 2: +1
Person 3: +2
Person 4: +2
Person 5: Don't know
To calculate the mean, you sum the values and divide by the number of cases. If you score "Don't know" as 0, your mean would be (-1 + 1 + 2 + 2 + 0) / 5 = 4/5 = 0.8. If you discard the "Don't know", your mean would be (-1 + 1 + 2 + 2) / 4 = 4/4 = 1.0. It's obviously quite important to decide what to do with your "Don't know" values when judging the satisfaction of your respondents. By using a score, you can choose to score them as a neutral value or discard them from the calculation.
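The same arithmetic, sketched in Python (None stands in for a "Don't know" response):

```python
responses = [-1, 1, 2, 2, None]  # None represents "Don't know"

# Option 1: score "Don't know" as 0 (divide by all 5 respondents)
mean_as_zero = sum((r if r is not None else 0) for r in responses) / len(responses)

# Option 2: discard "Don't know" (divide by the 4 valid responses)
valid = [r for r in responses if r is not None]
mean_discarded = sum(valid) / len(valid)

print(mean_as_zero, mean_discarded)  # 0.8 1.0
```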
In the Crocodile Rock survey provided with Snap, question 6 is a satisfaction question. Respondents are asked to rate how satisfied they are with aspects of the service.
Summary of steps
Step 1: Creating the score weight
The questions to be scored are 6.a to 6.e in the Crocodile Rock survey.
To check which question this is, open the Crocodile Rock survey (snCrocodile) and scroll down to question 6.
Step 2: Using the score in an analysis
- Open the Analysis Definition window for a table.
- Create a new analysis.
- Set the analysis value to Q6.a to Q6.e (this includes all questions from Q6.a to Q6.e).
Step 3: Showing the effect of including and excluding Don’t Know responses
You can examine the effects of including the Don’t Know responses as zeroes by changing the score.
- Leave the window showing your table open.
- If the weights window is not already open, press [Ctrl]+[W] to open it.
- Double-click the scoreSat to open the score window.
- Edit the last score in the list, changing it from NR to 0.
Jess Henderson – September 28, 2021 22:30
For the most part, you shouldn’t notice any major changes to your analysis and reporting when using Skip Logic rules in your survey. However, you should be aware of your respondents’ journeys through the survey and analyse the data accordingly.
For example, a common conditional question for skip logic is “Are you responding as an individual or on behalf of an organisation?”. This means you effectively have two surveys – one for organisations and the other for individuals. You can then analyse the data for each group independently by using filters.
Let’s say we wanted to start by looking at our ‘Individuals’ responses:
1. Go to Responses Organised by Question.
2. Then select your ‘conditional’ question (in this case, “Are you responding as an individual or on behalf of an organisation?”)
3. To look at all the responses given by anyone that said they were an individual, select the total number to take you through to the next page showing those responses
4. Finally, download your .xlsx file of filtered responses. This will contain what every person who said they were an ‘Individual’ has answered for each question
Things to note:
- It is possible that some responses may contain answers for more than one route through the survey. This can happen if a respondent chooses one answer to a conditional question, then goes back to the beginning of the survey, changes their answer to the conditional question and then follows a new route through the survey. Citizen Space does not delete answers to the un-followed route, as this allows the respondent to go back and change their mind several times, and their original answers will still be available to them. Because of this, you should not make assumptions about a respondent’s final route based on the presence or absence of answers on destination pages; you should follow the steps above to filter your responses based solely on answers to the conditional question.
- On the “Responses by Question” pages, percentages are all based on the total number of respondents (or number of respondents matching the current filters), and not the number of respondents who answered (or saw) a specific page. This is the same as when analysing non-linear surveys in Citizen Space, where your respondents may have only answered questions in one or two chapters.
- On the “Analyse Responses” page you will see every question on all pages, even if the respondent did not see that page. This is so you are able to apply tags on certain questions if required.
We anchored the survey items in the Diversity and Inclusion Survey around engagement because we have decades of academic convergence showing that engagement ties to business outcomes. Culture Amp administrators will quickly recognize that the survey dashboard looks similar to the Engagement dashboard. While the mechanisms for analyzing the results are similar, the method and approach are slightly different.
A popular approach:
1. Participation tab: Look at participation by each demographic group and ask: are there any groups who participated at a low rate and are thus underrepresented in the data?
2. Insight tab: Use the Insight tab to get the 30,000-foot view of your results. Begin to identify a few hypotheses you'd like to explore as you dig further into the results. Maybe one of the factors stands out to you as the lowest overall, or the lowest compared to the benchmark. Maybe one of the top 5 questions impacting engagement for your organization surprises you.
3. Questions tab: There’s a lot to dig into on the Questions tab:
- Look at the favorability scores for each question (i.e. the breakdown of favorable, neutral, and unfavorable): did you score significantly higher or lower on any items than you anticipated?
- Look at each question’s comparison to the benchmark: where did you score at least 5 points above benchmark or below benchmark?
- Using the Impact Analysis and the Focus Agent filters, click into a few questions and go a little deeper. Explore the spread of scores by demographic – you will likely notice that, like engagement, your scores and culture are not evenly distributed. You will have pockets of high scores and low scores on a single question.
4. Heatmap tab: Go through every demographic option in the dropdown. Pay attention to where disparities exist among each demographic group. Gender is a common place to start. Ask yourself – is one gender scoring higher than others by a large, statistically meaningful margin (which the colors can help you identify)? What about on race/ethnicity lines? Are these disparities happening on questions that were identified as high impact from the Questions tab? Are the sizes of these groups vastly different?
5. Custom tab: Explore how intersecting identities experience specific topics differently by creating intersectional data lines. For example, consider creating a data line for “Women of Color” using a combination of Gender Identity and Race/Ethnicity. What other intersectional identities could you compare their experience to? What other demographic groups are critical to your workforce that you want to explore? How are these populations scoring on high impact questions?
6. Text Analytics: Explore the questions that have piqued your interest throughout this process. Read comments to get more context and color as to why people feel positively or negatively about that topic.
Market research sometimes requires that a fairly large number of ideas or attributes be sorted and classified according to relationships or attributes. Often, market researchers ask consumers, customers, or clients to organize their ideas. Sometimes it is the market researchers themselves who must classify data. Three ways to organize and analyze qualitative data are described here: affinity diagram, card sort, and constant comparison.
Affinity diagrams are primarily used to organize information compiled during a brainstorming session. Problems and solutions are often “worked through” by using an affinity diagram. An affinity diagram is one way to organize ideas or attributes. Use of an affinity diagram is also referred to as the KJ Method, named after Kawakita Jiro, who popularized the method in quality improvement circles. Creating an affinity diagram is a six-step process.
- Determine the reason for doing the process
- Identify a logical set of classifications
- List factors related to the classifications
- Place each factor or idea under a classification
- Reduce the classifications by combining and simplifying
- Analyze the diagram—the total group of classifications
Card Sort is a Low-Tech Way to Gain Research Insights
Card sort studies have been used in psychology and cognition research since the military tested soldiers before and during World War II. Today, card sort strategies are often used to test the usability of software architecture. Card sort methods generate information about how respondents associate and group ideas, constructs, or products. As a qualitative process, card sorting helps to support the development of insights.
To participate in a card sort activity, respondents organize unsorted cards into groups. They may also be asked to label the categories they create. There are two versions of the card sort activity: the closed card sort and the open card sort. In an open card sort, respondents create their own categories. In a closed card sort, respondents sort cards into categories that have been identified in advance by the market researcher.
Card sorting is a very low-tech method that employs Post-It™ notes or index cards. There are, as you might guess, software packages that support the creation of digital card sort activities. Card sorting can be conducted with individual respondents, with a small group in which concurrent card sorting is conducted, or as a hybrid activity in which respondents individually perform a card sort and then come together as a group to discuss how they approached the task and compare their outcomes.
A card sorting study produces quantitative data in the form of a set of similarity scores. The similarity scores are a measure of the match for various pairs of cards. For example, given a pair of cards, if all the respondents sorted the pair of cards into the same category then the similarity score would be 100 percent. If exactly half of the respondents sorted the two cards into the same category, but the other half sorted the cards into different categories, then the similarity score would be 50 percent.
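To make this concrete, here is a small sketch of how similarity scores could be computed from card-sort results (the cards, categories, and function are hypothetical):

```python
# Each respondent's sort: a mapping of card -> the category they placed it in
sorts = [
    {"Login": "Account", "Sign up": "Account", "Refunds": "Billing"},
    {"Login": "Account", "Sign up": "Account", "Refunds": "Account"},
    {"Login": "Access",  "Sign up": "Account", "Refunds": "Billing"},
]

def similarity(card_a, card_b, sorts):
    """Percentage of respondents who placed both cards in the same category."""
    same = sum(1 for s in sorts if s[card_a] == s[card_b])
    return 100 * same / len(sorts)

# Two of three respondents grouped "Login" with "Sign up"
print(round(similarity("Login", "Sign up", sorts), 1))  # 66.7
```

Computed over every pair of cards, these scores form the similarity matrix that a card-sort analysis (or a clustering step built on top of it) works from.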
It is interesting to note that the card sorting technique, which is a qualitative research process, has been used to replace a quantitative technique known as exploratory factor analysis. The citation for this study is as follows: Santos, G. J. (2006), “Card sort technique as a qualitative substitute for quantitative exploratory factor analysis,” Corporate Communications: An International Journal, 11 (3), 288–302.
Constant Comparison for Coding Naturalistic Research Data
The constant comparison method is a well-known qualitative research method first described and refined by naturalistic research teams such as Glaser & Strauss and Lincoln & Guba. The constant comparison method is carried out in four stages: (a) comparing data that is applicable to each category, as the categories emerge; (b) integrating the categories and their properties to reduce the data set and data noise; (c) further delimiting the theory based on reduced data set; and (d) writing the theory.
Unlike quantitative research methods, in which a hypothesis is generated before the research even begins, the constant comparison method generates theory as the research progresses. Instead of having a hypothesis to direct the research, themes emerge as the data is coded and analyzed. This is called naturalistic research or grounded theory. Because theory is continually built through analysis, the discovery of relationships begins as soon as the initial observations are analyzed. The coding is integral to both data collection and data analysis, so the theory undergoes continuous refinement as the research proceeds.
The narrative content of interviews and open-ended survey questions is analyzed for key patterns. The patterns are identified, categorized, and coded in order to uncover themes. Constant comparison is an inductive research process: the categories and their meanings emerge from the data rather than being imposed on the data before it is collected or analyzed.