Data

Top 10 Data Trends for 2023

We take a look at the top 10 trends emerging from the world of data.

Data is a commodity as valuable as, or more valuable than, oil – the parallels are striking. A barrel of oil is worth $85 at the time of writing; a single data list from an approved source (e.g. Cognism) can set you back more than $100[1]. Whilst we can’t compare measurements (how many email addresses fit into a barrel?), the comparison shows that data is valuable enough that people want to trade it.

The major difference is that whilst we may one day run out of oil, we’re unlikely to run out of data: more people own more devices, use more apps and online accounts, and interact with businesses digitally than at any point in history. As a result, there is more data stored on planet Earth than ever before.

You likely know the benefits that a data-driven approach can bring: more informed, smarter decision-making, a better understanding of your customers, and the time and cost efficiencies that come from focusing on what matters.

That’s why it’s so important to understand the direction in which data technology and practice are moving. This blog lists the top 10 data trends for 2023.

1. Artificial intelligence

Surprise! Well, perhaps not: AI has been in the news a lot this year, in large part because of ChatGPT. A glance at Google Trends will show you that AI-related searches spiked around the time that ChatGPT was released to the public.

However, don’t let this obscure the bigger picture: Artificial Intelligence has a lot to offer when it comes to data. Organisations are already using out-of-the-box solutions such as Google Cloud and AWS (amongst others) to automate extraction, cleaning and analysis of data, freeing up analysts to focus on turning data points into business value.

What’s more, artificial intelligence learns, and learns quickly. The quality of the predictive analytics that AI solutions can provide continues to increase, because AI can make sense of swathes of data that a human brain could not process, in a fraction of the time. Google Maps is an example of this: using user travel data as input, Maps can accurately estimate your time of arrival (ETA) by forecasting traffic patterns based on what it has learnt.[2]

2. Unstructured data

Unstructured data is information that cannot be organised in a pre-defined manner. It is usually text-heavy and irregular: for example, customer reviews cannot be easily sorted as they vary in content and length.

Unstructured data now makes up 80% of all data, and far from being written off as “junk data”, it is increasingly valuable: advances in technology make it possible to extract insights and detect patterns that were previously out of reach. For example, in customer service you can use natural-language processing (a form of AI) to analyse your customer reviews and quickly surface the most common complaints and comments.
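
To make this concrete, here is a minimal sketch in Python of mining common complaint terms from review text. The reviews, the stop-word list and the simple keyword-frequency approach are illustrative assumptions; a real pipeline would use a proper NLP library, but the principle – turning free text into countable signals – is the same.

```python
import re
from collections import Counter

# Illustrative reviews; in practice these would come from your feedback platform.
reviews = [
    "Delivery was late and the box arrived damaged.",
    "Great product, but delivery was late again.",
    "Customer support never replied to my email.",
]

# A tiny illustrative stop-word list; real NLP libraries ship full ones.
stop_words = {"was", "and", "the", "but", "to", "my", "a", "of"}

def tokens(text: str) -> list[str]:
    """Lower-case the text and keep alphabetic words that aren't stop words."""
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in stop_words]

# Count every remaining word across all reviews to surface recurring themes.
counts = Counter(word for review in reviews for word in tokens(review))
print(counts.most_common(3))  # e.g. [('delivery', 2), ('late', 2), ...]
```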

Take Intel and Siemens, for example: they trained an AI model on cardiac MRI scans that automates the identification of each heart chamber (usually a manual process). What does this mean? It means the unstructured image data comes to the consultant pre-annotated, speeding up the time to diagnosis.[3]

3. Automation

This is closely linked to AI, but not dependent on it – automation can be achieved without AI.

The data visualisation tools market (think PowerBI, Tableau, Data Studio) is forecast to reach $19.5bn in value by 2030[4], and a core advantage of visualisation tools over manual reporting is the automation of many data functions.

Whilst you need to set a report up manually, the data is updated automatically: no need to copy new data into a spreadsheet, change formulas or record macros. As AI continues to evolve, it will likely play an increasing role in running and designing data reports, but one thing will not change: you’ll need a person to ensure the automation is achieving something useful.
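
As a rough illustration of the principle, here is a minimal sketch in Python. The fetch_sales function is a hypothetical stand-in for a live data source; in a visualisation tool the refresh is configured rather than coded, but the idea is the same: the report definition is written once, and fresh data flows through it on every run.

```python
from datetime import date
import random

def fetch_sales() -> list[float]:
    """Hypothetical data source: stands in for a database or API query."""
    return [round(random.uniform(100, 500), 2) for _ in range(5)]

def build_report(sales: list[float]) -> str:
    """The report definition is fixed; only the data changes between runs."""
    return f"{date.today()}: {len(sales)} orders, total £{sum(sales):.2f}"

# Each scheduled run pulls fresh data automatically – no copy-pasting,
# no formula edits, no macros.
print(build_report(fetch_sales()))
```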

4. The human element

Point #3 ties nicely into this one: the human element. Without people, crude oil is just crude oil – it needs to be refined to have any value, and the same is true of data.

Whilst AI and automation can, for lack of a better term, eliminate some of the grunt work, the direction of a person is still necessary to ensure that the data is used ethically, legally and in a way that drives business value. The advantage will be with companies who teach their people how to use AI, rather than “leaving AI to its own devices”.

5. Self-service

As organisations move to establish a data culture and make data-driven decision making the focus, non-technical stakeholders are becoming more involved in data analysis.

According to McKinsey, organisations that make the most progress in data accessibility before 2025 stand to capture the highest value from data-supported capabilities.[5] For example, with Salesforce or other CRM platforms, salespeople can see revenue figures and customer touchpoints at the touch of a button, allowing them to make informed decisions quickly and tailor their conversations.

This is not without its challenges: many people are hostile to the idea of data analysis (see below) and there are compliance considerations to overcome.

6. Data for all

Becoming a data-driven organisation is simply not possible without buy-in from everybody – after all, the data is going to be informing business decisions, so if non-technical business professionals are not on board, nothing will change.

Self-service is one way of making data accessible; another is dispelling misconceptions about data through data literacy programmes. Many people wrongly believe that making data-driven decisions is beyond their capability, often because they were traumatised by their experience of maths at school.

7. Risk and compliance

Data is a valuable commodity and, in the wrong hands, a powerful weapon. The price of illegally sourced data is estimated at between $20 and $200 per set of login details, whilst the value of that data to a criminal is anything they can get their hands on using the login.[6] With data now so vast, generated by more devices than ever and more accessible via self-service, there are more human touchpoints than ever before. With more than 80% of data breaches caused by human error[7], you need to make sure you’re scaling up your data capability in a secure way.

In-built compliance is now essential, particularly when we consider how data is transferred and stored across the different elements of our tech stacks. This needs to be considered both when procuring systems (e.g. cloud computing) and when architecting them.

8. Hybrid cloud approaches

Making data secure and accessible means having it in the right place at the right time. Organisations with hybrid clouds can protect sensitive data effectively by keeping it on their own private servers, under their own security protocols. That’s not to say public clouds aren’t catching up: providers like Google are developing public cloud infrastructure that better conforms to legal and regulatory requirements.

What’s more, you can scale up your storage as and when you need it, rather than having to invest in physical server infrastructure.

9. Edge computing

The internet of things (IoT) means that data is no longer generated only in central databases, but also on smartphones, in cloud systems and in web applications, with much of it unstructured. Traditionally, organisations using such data would process it centrally, meaning that IoT devices carried little of the burden of storing and interpreting it. The sheer amount of data being generated, combined with bandwidth limitations and latency, makes this approach increasingly unsuitable.

New edge computing techniques are set to shift the onus of storage away from central databases to the “edge”, the place where the data is generated. As an example, think about smartwatches: they are all connected to an app. Imagine if the healthcare or tech company had to interpret the smartwatch data centrally before they could share it with the user. Simple information like heart rate, steps and the like would be more expensive to provide and difficult to deliver in real time – not much use if you’re on a treadmill.
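
As a minimal sketch of that idea, the Python below summarises raw heart-rate readings locally and only ships a compact summary upstream. The sample values and the send_to_cloud stub are illustrative assumptions, not a real device API.

```python
from statistics import mean

# Illustrative readings: one minute of heart-rate samples captured on the watch.
raw_samples_bpm = [72, 74, 71, 90, 95, 93, 88]

def summarise(samples: list[int]) -> dict:
    """Reduce raw readings on-device to the small payload the app actually needs."""
    return {"avg_bpm": round(mean(samples)), "max_bpm": max(samples)}

def send_to_cloud(payload: dict) -> None:
    """Stand-in for a network call; a real device would batch, compress and retry."""
    print("uploading:", payload)

# The wearer sees avg/max instantly on the watch; only the summary leaves the device.
send_to_cloud(summarise(raw_samples_bpm))
```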

10. Skills gaps

This is perhaps the most concerning trend. Over half of large and medium-sized businesses are struggling to recruit for data roles[8].

The key is to work with your existing talent and factor reskilling and upskilling into your plan for building data capability. Creating effective routes for people to widen their skillsets or switch to a career in data makes good business sense: you’re more likely to retain employees who feel invested in, you’re building capability without having to go through an expensive hiring process, and ultimately you realise the benefits that a data-driven approach brings.

Learn more about building a data strategy in your business, and the in-demand data skills your business needs. 

Learn more about our data training and courses