In 2017, Peter Thomas, chief operating officer of the Leasing Foundation, wrote a series of blogs on new technologies that will impact the leasing industry. This article explains what Big Data is.

What is Big Data?

There seems to be nothing complicated going on in the phrase ‘Big Data’: it’s just data. Lots of it. All collected together.

So the question of what Big Data is seems simple, but it's the detail that is important.

You could argue that Big Data has been going on for a long time – ever since we needed databases to store the increasing amounts of information that computers generate.

But, as many of the articles you will read about Big Data say, we are entering a different world where the volume of data available to us, how we store it, and the tools we can use to analyse it, are opening up new possibilities.

And those articles are right.

What is Big Data and how does it work?

One of the key technologies that has driven, and has been driven by, large volumes of data is the database.

Traditional databases were designed for ‘structured’ data – information stored in columns and rows. These ‘relational databases’ (called this because the columns and rows are organised in groups of related data items) have been around for a long time.
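To make 'structured' concrete, here is a minimal sketch using Python's built-in sqlite3 module. The tables and column names are invented for illustration, not taken from any real leasing system, but they show the idea: data lives in rows and columns, organised in groups of related items that can be joined together.

```python
import sqlite3

# Two related tables: customers, and the leases that belong to them.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE leases (id INTEGER PRIMARY KEY, "
    "customer_id INTEGER REFERENCES customers(id), asset TEXT, monthly REAL)"
)
conn.execute("INSERT INTO customers VALUES (1, 'Acme Ltd')")
conn.execute("INSERT INTO leases VALUES (1, 1, 'Forklift', 450.0)")

# The 'relational' part: a query that joins related rows across tables.
row = conn.execute(
    "SELECT c.name, l.asset, l.monthly "
    "FROM leases l JOIN customers c ON c.id = l.customer_id"
).fetchone()
print(row)  # ('Acme Ltd', 'Forklift', 450.0)
```

A tweet, an image or a free-text email does not fit neatly into this column-and-row shape, which is exactly the gap the next generation of databases addresses.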

Now, with the need to store different types of unrelated data – for example tweets, Facebook posts, emails, text documents, images, location data – we need more powerful and flexible ‘unstructured’ databases.

One example of this new type of database is Hadoop – a kind of database that not only allows for the storage and retrieval of many types of data, but does it across thousands of computer servers very quickly, allowing it to deal with volumes of data that it would have been impossible to handle before. Hadoop is one of the technologies that helps run companies like Uber and Netflix, and is behind applications that are run on Amazon and Google’s cloud services.
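Hadoop itself is a large ecosystem, but its core processing idea, popularised as 'MapReduce', can be sketched in a few lines of plain Python. In this toy version each text chunk stands in for one server's share of the data: every 'server' counts its own chunk independently (map), and the partial counts are then merged (reduce).

```python
from collections import Counter
from functools import reduce

# Each chunk plays the role of the data held on one server.
chunks = ["big data is data", "lots of data collected together"]

def map_chunk(chunk):
    # Map step: each node counts words in its own chunk independently.
    return Counter(chunk.split())

def reduce_counts(a, b):
    # Reduce step: partial counts from the nodes are merged together.
    return a + b

total = reduce(reduce_counts, map(map_chunk, chunks))
print(total["data"])  # 3
```

The point of Hadoop is that this same pattern still works when the chunks are terabytes and the 'nodes' are thousands of real machines.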

To give you an idea of the scale of the Big Data that Hadoop handles, every second we generate 40,000 Google search queries – that’s 1.2 trillion searches a year, all of them stored by Google; 300 hours of video are uploaded to YouTube every minute; by 2020, there will be over 50 billion connected devices that collect and share data – all of that data stored for analysis.
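The 40,000-searches-per-second figure can be sanity-checked with one line of arithmetic:

```python
per_second = 40_000
# Seconds in a (non-leap) year: 60 * 60 * 24 * 365
per_year = per_second * 60 * 60 * 24 * 365
print(f"{per_year:,}")  # 1,261,440,000,000 -- roughly 1.2 trillion
```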

The challenge, though, is not just storing data, but doing something useful with it – creating insights and then performing actions. It’s here where the value of Big Data, and its biggest challenges, lie.

One of these challenges is to figure out which of the many thousands of data points are meaningful at any point in time. To do that we need tools that are sensitive and flexible enough to adapt to changing circumstances – circumstances, of course, which might change as we collect possibly relevant new data.

What is Big Data and why is it important?

Big Data – the sources of data available, the technologies to store it and the tools to analyse it – is now starting to touch every area of our lives.

It is finding applications in every industry, from retail – working out what popular products might be, forecasting demand, optimising pricing, identifying the right customers and working out what to sell them next; through manufacturing – optimising production schedules based on data about suppliers and customers, machine availability and cost constraints (what is now being called ‘Industry 4.0’), to sports – integrating athlete statistics, opponent performance data and video analysis of events, to fine-tune competition performance.

Consider Netflix: it has 65m streaming media users in 50 countries who are said to account for one-third of peak-time internet traffic in the US and who watch 100 million hours of TV shows and movies every day.

Data from subscribers is collected to understand viewing preferences – which is why Netflix shows you some movies and not others, and how it chooses what content to make available. What makes Netflix a Big Data company is not the amount of data, but the sophisticated analysis tools that create insights about TV viewing from that data.

Or think about Uber. It stores data on every trip we take – not just routes but road and traffic conditions – and uses it to predict demand and set fares (on-demand or ‘surge’ pricing).

Uber also collects data on road conditions and public transport network usage, and combines this with its own data to develop predictive models of travel habits.
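As a toy illustration – not Uber's actual formula, which is proprietary – a surge multiplier might simply rise when ride requests outstrip available drivers in an area, with a cap to keep fares sane:

```python
def surge_multiplier(requests, available_drivers, cap=3.0):
    # Hypothetical demand-based pricing: the fare multiplier grows with
    # the ratio of requests to drivers, floored at 1.0 and capped.
    if available_drivers == 0:
        return cap
    return min(cap, max(1.0, requests / available_drivers))

print(surge_multiplier(10, 20))  # 1.0 -- plenty of drivers, normal fare
print(surge_multiplier(30, 10))  # 3.0 -- demand outstrips supply, capped
```

The real value of the Big Data here is in predicting the `requests` side of that ratio before it happens, so drivers can be positioned in advance.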

And, as we see the emergence of new devices that have sensors built in – especially those connected to what is being called the Internet of Things (IoT, about which more in a later column) – that create a constant stream of data from things like cars, buildings or a wide range of industrial machinery, there are going to be more challenges around Big Data.

For example, it’s not feasible to send all the data from every camera in a massive CCTV installation across the network and store all of it on a server; this would be hugely expensive, and in any case much of the data would be video of nothing happening. Instead, we can use something called ‘distributed analytics’, which means designing enough intelligence into devices to allow them to do some simple analysis of what might be important before sending it into the network.

This approach is also called ‘edge analytics’ because the analysis is done at the edge, not the centre, of a network. In applications such as transport – especially large fleets of vehicles – this will be important, as it will be necessary to optimise the use of expensive bandwidth. We can expect vehicles to analyse their own data.
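A hypothetical edge filter can be sketched in a few lines: the device compares each frame with the previous one and only forwards frames that have changed noticeably, so the network never sees the hours of 'nothing happening'. The frames here are just flattened lists of pixel values, and the threshold is arbitrary.

```python
def mean_abs_diff(a, b):
    # Crude change detection: mean absolute pixel difference.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def frames_worth_sending(frames, threshold=10):
    # Runs on the device itself; only 'interesting' frames leave it.
    sent = []
    previous = None
    for frame in frames:
        if previous is None or mean_abs_diff(frame, previous) > threshold:
            sent.append(frame)
        previous = frame
    return sent

# Five tiny 'frames': two static, then a change, static again, then another change.
frames = [[0] * 4, [0] * 4, [200] * 4, [200] * 4, [0] * 4]
print(len(frames_worth_sending(frames)))  # 3 of 5 frames sent upstream
```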

What is Big Data and what impact will it have?

It should be clear by now what impact Big Data will have. It is already revolutionising all areas of industry and commerce, and will increasingly do so.

And financial services – and leasing and asset finance – will be massively transformed by Big Data.

To return to the previous example, vehicles and the process of financing them will be transformed. The data that is generated by vehicles, by fleets of vehicles and by the use of those vehicles, will increasingly be captured, turned into insights that allow manufacturers and lessors to predict usage, offer value-added services – or even trade the data.

And that is not just driving-style and collision data for usage-based insurance, or mechanical data for predictive servicing, but also drivers’ media habits, network usage and location-based data, should they choose to share them as part of their contract.

In other areas, fintech companies – and some mainstream lenders – are now using Big Data from social networks, mobile activity, bank transactions and behavioural data as part of their credit risk analysis to create improved underwriting models.
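As a purely hypothetical sketch, such a model might combine alternative data points into a logistic score. The features, weights and bias below are invented for illustration; real underwriting models are fitted to historical repayment outcomes rather than set by hand.

```python
import math

# Invented 'alternative data' features and hand-picked weights.
WEIGHTS = {
    "bank_txn_regularity": 2.0,      # how regular the bank transactions are (0-1)
    "mobile_topup_consistency": 1.0, # consistency of mobile activity (0-1)
    "social_profile_age_years": 0.3, # age of social media presence
}
BIAS = -2.5

def default_probability(applicant):
    # Logistic model: a higher creditworthiness score means lower risk.
    score = BIAS + sum(WEIGHTS[k] * applicant.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + math.exp(score))

applicant = {
    "bank_txn_regularity": 0.9,
    "mobile_topup_consistency": 0.8,
    "social_profile_age_years": 5.0,
}
p = default_probability(applicant)
print(f"estimated default probability: {p:.2f}")
```

The interesting shift is not the model itself, which is decades old, but the breadth of data being fed into it.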

We can only expect this trend to continue. The challenge for leasing and asset finance companies is to understand the ways that Big Data will impact their business models and the technology challenges that need to be met to maximise ROI.