The present approach in this era of Big Data seems to be: let’s get all the data, all the time, everything we can. We aren’t really sure what to do with it, but one day it might be useful!
I recently heard a speaker at a trade show proudly proclaim that each of their trucks produced over two gigabytes of data an hour, and that they had 50 of them. What they did with all this data, he wasn't sure – but it's a big number, so that must be good, right?
The issue most people have with Big Data is the name. By its nature, Big Data sounds like you need lots of it, the more the better, when in fact that couldn’t be further from the truth. What businesses actually need is the right data, in the right place, and at the right time.
In my experience, mining companies have historically struggled with this fact.
On one hand, many companies already use data analysis to, for example, track engine temperatures and link rising operating temperatures to a fault, trending the data to spot failures before they happen. A conclusion drawn from a single metric can be a useful trigger for predictive maintenance, but it won't show you clearly how several factors combine to cause a particular event.
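The single-metric approach described above can be sketched very simply: watch one value, and raise a flag when its recent trend crosses a limit. This is a minimal illustration only – the sensor values, window size and temperature limit here are hypothetical, not taken from any real fleet.

```python
# Single-metric trending: flag an engine whose recent temperature
# readings average above a limit. Values and thresholds are
# hypothetical, purely for illustration.

def rising_trend(readings, window=3, limit=105.0):
    """Return True if the average of the last `window` readings
    exceeds `limit` (degrees C)."""
    if len(readings) < window:
        return False
    recent = readings[-window:]
    return sum(recent) / window > limit

engine_temps = [92.0, 95.5, 101.0, 104.0, 107.5, 109.0]
print(rising_trend(engine_temps))  # True - the recent average is above 105
```

Useful as a maintenance trigger, but note what it cannot do: it says nothing about whether oil pressure, load or driver behaviour contributed to the rise – exactly the limitation the paragraph above describes.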
On the other hand, some companies capture and keep every data point they produce, which often causes capacity problems at the network level as the data is sent back to the applications needed to process it. With this approach you should end up with enough data to draw better conclusions, but are the right data points being analysed, economically, to provide insight? Often not.
To my mind, the ideal approach is leveraging intelligent edge computing (processing data at the site, rather than sending everything to the cloud) to harness Big Data. This means you can monitor multiple data points at the edge, but you only surface the essential points or actionable alarms – which for a vehicle could be engine temperatures, oil pressure, and driver behaviour – to identify when failures will happen and which behaviours cause them.
This approach means you don’t have to send lots of data back to the cloud, clogging up the network. Key to this approach is the idea that the volumes make little difference. It’s all in the variables, the exception and the analysis – you don’t need all the metrics and factors available in your applications, just the smart ones.
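The edge pattern described above – monitor everything locally, transmit only the exceptions – can be sketched in a few lines. The metric names and alarm limits below are hypothetical examples, not a real vehicle specification.

```python
# Exception-based edge filtering: evaluate every metric on-site,
# but surface only the readings that breach their alarm limits.
# Metric names and limits are hypothetical (low, high) bounds.

ALARM_LIMITS = {
    "engine_temp_c": (0.0, 105.0),
    "oil_pressure_kpa": (150.0, 700.0),
    "harsh_braking_per_hr": (0.0, 5.0),  # simple driver-behaviour proxy
}

def filter_at_edge(reading):
    """Return only the metrics outside their limits - the actionable
    alarms worth sending over the network."""
    alarms = {}
    for metric, value in reading.items():
        low, high = ALARM_LIMITS.get(metric, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            alarms[metric] = value
    return alarms

reading = {
    "engine_temp_c": 112.0,
    "oil_pressure_kpa": 420.0,
    "harsh_braking_per_hr": 2.0,
}
print(filter_at_edge(reading))  # {'engine_temp_c': 112.0}
```

Three metrics are evaluated locally, but only the over-temperature alarm would cross the network – the "exceptions, not volumes" idea in practice.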
What miners need are effective data management strategies, and luckily we have five key rules to keep in mind when designing yours:
- What problem are you trying to solve? What do you need to understand better?
- What are the minimum data points you need to draw these conclusions? Are they exceptions? What are the time intervals that will yield the patterns you need?
- What data can be processed at the edge? What data needs to be sent to the cloud?
- What connectivity networks do you need to get data to its intended destination?
- Who needs to access the data? Are they on-site or remote?
Intelligent edge computing is vital in ensuring only the data you need is where it needs to be. Paired with smart, simple, easy-to-use networks, these systems dramatically reduce the complexity of capturing and analysing data.
Inmarsat has been in the game of remote data management for 40 years. Over time we’ve learnt that what really matters is getting the right data to the right people and machines quickly, reliably and securely.
Talk to us about how we can help you develop intelligent edge solutions that support smarter insights and operations.