What’s the BIG deal about BIG data?

Big? You ain’t seen nothing yet! We are struggling to process what already exists and the words “tip” and “iceberg” spring to mind…

In the beginning there was cathode ray storage. And it was tiny. Technology is a rapidly evolving beast, though. Consider that just sixty-six years after the very first airplane flight, we landed on the moon. Humanity gets a bad rap, and let’s face it, we deserve it much of the time, but you know what?

We’re amazing.

Less than three hundred years since the industrial revolution, the planet has been transformed into something unrecognisable and spectacular. Don’t get me wrong – there are many appalling prices to be paid for this incredible progress, and we often don’t understand the damage we are causing until it is far too late.

We’re still amazing though, for all our flaws and hubris.

Information technology is the new frontier (sorry, Captain Kirk), and what is driving it forward right now is data, and our capacity to create and store it. It is estimated that there are currently about three zettabytes of data in the digital universe, which may not sound like a lot – only three? – but it’s a staggeringly large amount of information.

I feel like I need to explain a zettabyte. I realise that the more tech-savvy readers will be raising their eyebrows at this point, but I think it’s important to break it down so that we can appreciate exactly how much data we are talking about.

It all starts with a bit. This is a solitary piece of data. Eight bits make a byte, and 1024 (it’s always a factor of 1024 – it’s a binary thing) bytes make a kilobyte. The very first home computers in the early eighties dealt in kilobytes. Perhaps the most successful – for a while – was the Sinclair ZX Spectrum, and when it was released, it had a memory capacity of 16 kilobytes. 16 KB to do everything. It only did one thing at a time, and that thing was usually games. Every game written for the original incarnation of the Spectrum had to work within this tiny limitation. Let me put it in perspective – a social media post with a photo today will use around 500 kilobytes of data.
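To see just how cramped that is, here is a quick sketch of the comparison in Python, using the two figures above (16 KB of Spectrum memory, roughly 500 KB for a modern post with a photo):

```python
# Figures from the text: the 16K Spectrum's entire memory vs one modern post.
spectrum_kb = 16   # total memory of the original ZX Spectrum, in kilobytes
post_kb = 500      # rough size of a social media post with a photo, in kilobytes

ratio = post_kb / spectrum_kb
print(f"One post is about {ratio:.0f} times the Spectrum's entire memory")
```

In other words, a single throwaway post would overflow the machine's whole memory roughly thirty times over.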

The early games developers were alchemists, using the tiny amount of memory they had to remarkable effect.

Right, I’ll escalate this. 1024 kilobytes make a megabyte, followed by a gigabyte. Next is a terabyte, and at this point we are still using jargon that just about anyone who uses a computer is comfortable with. Beyond this, though, we veer into terminology that is not so commonplace.

1024 terabytes make a petabyte. Then we have exabytes, and at last we have reached zettabytes. And when we reach 1024 of them, we have reached the theoretical summit (for the time being) with a yottabyte. Okay, so there’s a brontobyte too, but at some point, you have to think that someone is making these up for the sake of it.

We started with a bit, and by the time we have reached the current zenith that is a zettabyte, we have more than 1,000,000,000,000,000,000,000 of them. And that is a greater number than every grain of sand on the planet.
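The whole ladder can be sketched in a few lines of Python, assuming the binary (1024-based) convention used above, where each unit is 1024 of the one before it:

```python
# Each rung of the ladder is a factor of 1024 (2**10) bytes.
units = ["byte", "kilobyte", "megabyte", "gigabyte", "terabyte",
         "petabyte", "exabyte", "zettabyte", "yottabyte"]

for power, name in enumerate(units):
    print(f"1 {name} = 2**{10 * power} bytes = {1024 ** power:,} bytes")

# A zettabyte, counted in individual bits (8 bits per byte):
zettabyte_bits = 8 * 1024 ** 7
print(f"1 zettabyte = {zettabyte_bits:,} bits")
```

Run it and the zettabyte line comes out well past the twenty-one-zero mark: a zettabyte is 2**70 bytes, which is more than a sextillion bits however you slice it.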

That’s an inconceivable amount of data. And this is just the start.

You see, the production of data is increasing enormously. Smartphones have maybe ten sensors in each one, and every sensor is capable of collecting a range of data. And there are about two billion smartphones out there right now, and this number is going to do nothing but rise. Massively.

And this is but one example. Every time you do pretty much anything on a phone, a computer, any device, your activity is recorded in some way, and that data is stored away somewhere. Not all of this data is expected to be particularly useful, with estimates that about a third of it could be productively processed.

At the moment, we are getting through about 0.5% of it.

And most of that processing is being done by the big players in the digital world – Amazon, Facebook, Google, Netflix and YouTube – but this is changing. Pretty much any corporate entity is going to have to embrace the processing of big data if it wants to stay ahead of the pack. You see, what drives this process is profit.

Most of it – perhaps all of it – is an attempt to get under the skin of consumers, and to wrestle their cash off them.

That’s the way of the world right now.

So, we already have a massive amount of data, and we are struggling to do anything with nearly all of it. And this is just the start. We sit at three zettabytes, but various estimates suggest that by 2025, this figure will leap to 163 zettabytes. And it won’t be long before we hit the first yottabyte.

The burgeoning AI robotics industry will be at the heart of this. Robotic Process Automation (RPA) is already handling much of this routine processing, and in the coming decades, this fascinating technology will take the world to places that we can’t even understand yet.

The Internet of Things (IoT) is just beginning. This is where smart devices communicate with one another, without the need for human input. It’s happening now. When Google Maps tells you your estimated journey time, it is doing this in real time, monitoring traffic flow and accidents. We are on a path now – we have been on it for some time – and the tipping point is way behind us. We are utterly reliant on technology, and this reliance is only going to get stronger. Cue future-shock stories of our impending doom…

Autonomous vehicles are another coming development, and the technology behind these generates gigantic amounts of data, as companies like Google and Uber fight to corner the market. A decade ago, the concept of a self-driving car was little more than fanciful science-fiction, but big data is changing all that.

Where else will it take us? It’s impossible to say, but there is no doubt. I mean absolutely none. Big data – soon to be transformed into huge data – is the most important technological process in the world.

And it isn’t the future. It’s now.