- 16th April 2018
- Posted by: Manolis
Data lakes didn’t quite pan out, so now it’s all about an abstraction layer and machine learning to save the day. Hopefully, machine learning can cleanse your data on the fly, since humans have proved repeatedly that they aren’t meticulous enough.
Will there ever be a technology that can fix decades of poor data hygiene? Probably not, but that isn’t going to stop technology vendors from trying. The good news: Machine learning may come closest to saving your data management hide.
Luckily, technology vendors have a magic elixir to sell you…again. The latest concept is to create an abstraction layer that can manage your data, bring analytics to the masses and use machine learning to make predictions and create business value. And the grand setup for this analytics nirvana is to use machine learning to do all the work that enterprises have neglected.
I know you’ve heard this before. The last magic box was the data lake, where you’d throw in all of your information, structured and unstructured, and then use a Hadoop cluster and a few other technologies to make sense of it all. Before big data, the data warehouse, along with business intelligence and enterprise resource planning, was going to give you insights and solve all your problems. But without data hygiene in the first place, enterprises replicated a familiar but failed strategy: Poop in. Poop out. And you wouldn’t want to make your in-demand data scientists deal with poo.
Seth Dobrin, IBM’s chief data officer, said “the idea that you could use a data lake and Hadoop (MapReduce) instance where you can dump all this crap in is a mistake.” Not too surprisingly, IBM has its Watson Data Platform and a series of tools that use machine learning to clean data, append metadata and make connections between data stores. The platform sounds like a mix of middleware and operating system, but you get the idea. It will also recommend models and algorithms.
Other vendors in the space include Alation and Io-Tahoe, as well as Cloudera and Hortonworks. While the approaches vary, the general idea is to use machine learning to make data more usable. Ovum’s Tony Baer, also a ZDNet contributor, is betting that this data abstraction layer will be a key 2018 trend for big data, data science and machine learning.
Know this: Every technology vendor you deal with will have some spin on this data abstraction layer to pitch AI and analytics. Also know this: You’ll listen, since your data hygiene has been terrible and you need a bailout.
Salesforce, at its Dreamforce powwow, preached the democratization of artificial intelligence and analytics. Its Einstein platform will provide a bevy of insights. Data hygiene presumably won’t be a problem, since the enterprises that go with Einstein already keep most of their data with Salesforce.
And Salesforce isn’t alone. One argument for the cloud is that data can be standardized and live on one platform and one data model. Substitute Oracle, SAP or Workday for Salesforce and the concept is basically the same. Microsoft has its Common Data Platform. In the end, the subtext is the same: Dear enterprise, put all of your data with us.
I noted a few weeks ago how the Internet of Things and the cloud muddy the data ownership waters. Now it’s worth pondering which vendors will own your queries. IBM is betting that its open strategy will win the day and that it can be the abstraction layer across multiple data stores (with cleansing on the fly). Toss Tableau into the mix as another contender to own your queries. We’ll see. The only certainty is that data hygiene will be an ongoing issue that scales.