Data quality. Two words that I hear in every conversation, at every conference, and thrown into the title of article after article in the market research space. But what is encapsulated in those two words? Data quality is an important topic and, arguably, when discussing survey data and the issues around fraudsters, bots, speedsters, and professional survey takers, it could be considered “the” problem. I see why, in a world of consumable insights, we throw a range of issues into a “data quality box” and make the problems the industry faces easier to divert or put on someone else’s plate.
Here’s the industry-shaking challenge. As companies branch further into “Big Data” or even “Small Data”, data quality overshadows, and often oversimplifies, the problems of the current data landscape, to the point that most people ignore other issues in research or push them to the background altogether. To get quality insights, you need quality data behind them, which means building a robust research methodology that also accounts for the biggest problems with big data.
Here are 4 other equally important problems facing the data economy and the market research industry.
When it comes to security, it is sometimes easier to protect one centralized data lake, the Fort Knox of data, than to secure multiple individual data silos. But with hackers shooting arrows at a central target, your system is a sitting duck. If you collect, store, and secure your data in one place, you’re always one arrow away from your company being featured in a data breach headline.
Here are some alternative methods to centralized repositories to think about:
For data from things like steps on a Fitbit, YouTube subscriptions, or even Spotify playlists:
At the root of privacy is the power and choice to share data, to deny its use at both broad and granular levels, to delete it or revoke access to it, and to minimize the information needed to get to an insight.
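To make those four primitives concrete, here is a minimal Python sketch of what a panelist-controlled consent record might look like. Everything here (the `ConsentRecord` class, its category names, the grant/revoke/minimize methods) is hypothetical, offered only to illustrate the share/deny/revoke/minimize idea, not any real system.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the privacy primitives described above:
# share (grant), deny, delete/revoke, and minimize.

@dataclass
class ConsentRecord:
    """Tracks what a panelist has agreed to share, per data category."""
    granted: set = field(default_factory=set)   # e.g. {"steps", "playlists"}
    denied: set = field(default_factory=set)    # explicit denials win over grants

    def grant(self, category: str) -> None:
        self.denied.discard(category)
        self.granted.add(category)

    def revoke(self, category: str) -> None:
        self.granted.discard(category)
        self.denied.add(category)

    def minimize(self, record: dict) -> dict:
        """Return only the fields the panelist has actively shared."""
        return {k: v for k, v in record.items()
                if k in self.granted and k not in self.denied}

consent = ConsentRecord()
consent.grant("steps")
consent.grant("playlists")
consent.revoke("playlists")
print(consent.minimize({"steps": 8200, "playlists": ["jazz"], "email": "x@y.z"}))
# -> {'steps': 8200}
```

The key design choice is that minimization is the default: a researcher only ever sees the categories a panelist has actively granted, and a revocation immediately removes a category from every future query.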
The market research industry (especially big fish like Nielsen) should stop cutting backdoor deals with companies like Netflix and instead use its power and influence to open up APIs that have been conveniently shut down to protect walled gardens. At the end of the day, it’s the users’ viewership, so shouldn’t they be central to the process of accessing and sharing this data?
As an industry, those with power should embrace direct collaboration between brands (who often hold massive amounts of consumer data) and panelists (who have access to the most complete sets of their own data), so that data silos can be opened up to individual users to share as they choose for the purposes of research.
Let’s look at the YouTube API. YouTube gives access to subscriptions, likes, and playlists, but leaves out the most important data of all: watch history. Yet, thanks to recent legislation, a Google account holder can now download their entire history without needing Google’s permission. To protect its assets, Google makes this copy of the data intentionally difficult to use, and less secure in the process.
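As one illustration of what “difficult to utilize” means in practice, here is a hedged Python sketch that flattens a Takeout-style watch-history export into usable rows. The field names (`title`, `titleUrl`, `time`) and the inline sample reflect my assumptions about the export format at the time of writing, not a documented contract, and Google may change them at any point.

```python
import json

# Assumed shape of a Google Takeout "watch-history.json" export.
# The field names below are assumptions, not a stable, documented API.
SAMPLE = """[
  {"title": "Watched Example Video",
   "titleUrl": "https://www.youtube.com/watch?v=abc123",
   "time": "2021-05-01T12:00:00Z"},
  {"title": "Watched Another Video",
   "time": "2021-05-02T09:30:00Z"}
]"""

def parse_watch_history(raw: str) -> list:
    """Flatten the export into (timestamp, title, url) tuples."""
    history = []
    for entry in json.loads(raw):
        history.append((
            entry.get("time", ""),
            entry.get("title", "").removeprefix("Watched "),
            entry.get("titleUrl", ""),  # often absent for removed videos
        ))
    return history

for row in parse_watch_history(SAMPLE):
    print(row)
```

The point of the sketch is the asymmetry: the user legally owns a copy of this history, but turning it into something a panelist could actually share for research takes custom parsing like this, while the polished API omits the data entirely.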
We all know large companies make deals behind the scenes, or break terms and conditions, to access this data. But what if the industry came together to empower individuals with their data? As personal data becomes more accessible to the people generating it, will they begin to choose which research firms or brands are worthy of their information? Only with true competition in the data marketplace (between the creators and the controllers) can data-driven insights become more ubiquitous.