According to a recent Gartner report, 64% of enterprises surveyed indicate that they’re deploying or planning Big Data projects. Yet even more acknowledge that they still don’t know what to do with Big Data. Have the inmates officially taken over the Big Data asylum? …
That’s a big jump (64% in 2013 compared to 58% in 2012), and it reflects a growing confidence that Big Data can help enhance the customer experience (54% cited this as their driving motivation), improve process efficiency (42%) and launch new products or business models (39%).
So, slowly but surely, Big Data is making significant inroads in the enterprise, right?
Not so fast. I had my doubts about big-data enterprise adoption when I wrote Too Big to Ignore and, if anything, they’ve only solidified in the past year. For every Amazon, Apple, Facebook, Twitter, Netflix, and Google, I would wager that thousands of midsized and large organizations are doing nothing with Big Data beyond giving it lip service. That is, the fact that a CXO has heard of Big Data is hardly the same thing as her company actually doing anything with the massive amounts of unstructured data flying at us faster than ever.
This raises two simple yet critical questions: Why the lack of adoption? And how can organizations overcome the obstacles currently impeding them?
In short, most organizations today are making one or more of the following mistakes around Big Data:
- They are trying to ascertain the ROI of Big Data. That’s a big mistake (pun intended). They need to embrace uncertainty and data discovery. They need to disabuse themselves of the notion that they know what they’ll find. Certainty is a myth.
- They don’t know where to start. Sure, anyone can go to Kaggle and post a project. For a relatively small amount of money, you can crowdsource a sea of data scientists. But to fully unleash the power of Big Data throughout the organization, one needs the commitment of everyone. Digital advertising company Quantcast (covered in Too Big to Ignore) spent a great deal of financial resources to fork Apache Hadoop‘s distributed file system HDFS to make Hadoop even bigger, improve its performance, and meet Quantcast’s specific needs. Forking Hadoop in this manner certainly isn’t for the faint of heart, and it demonstrates the executive commitment at Quantcast to Big Data.
- They are failing to embrace new technologies. Many companies erroneously believe that traditional BI tools and relational databases can handle Big Data. They can’t. They need to make some new investments (e.g., Hadoop, NoSQL databases, etc.).
- They are thinking of Big Data as “IT projects” and, as I know all too well, organizations’ batting averages with IT projects are abysmal. Healthcare.gov differs from thousands of CRM and ERP failures only in degree.
- They are looking at what big-data heavyweights like Amazon, Apple, Facebook, Google, Twitter, and Netflix do and are justifiably intimidated. They are thinking that they cannot begin to do the same things. They fail to realize, however, that these companies have built their internal data management and discovery capabilities over the course of more than a decade. You don’t go from zero to Google overnight.
- They are confused (again, justifiably) by the incessant noise around Big Data. Social media has given every software vendor a “platform” to tout its wares. This is a bit self-serving, but I believe that organizations would strongly benefit from the advice of independent thought leaders with zero skin in the game.
What say you? Are you implementing Big-Data tools and strategies or just talking about them?
This post originally ran on InformationWeek.