
Big Data: The Most Overhyped, Underutilized, and Misunderstood BI Trend

By Charles Caldwell

As an engineer in a sales support role, I get the opportunity to speak with hundreds of companies about their near-term and long-term technology projects. I am frequently asked, “How does your technology work with Big Data?” And I often turn the question around and ask, “What are your plans for working with Big Data?” The most common response, you ask?

Big Data hype and confusion make the Cloud look 19th century. People have been comfortable not quite understanding how computers work for a long time, so just thinking of the cloud as a “big computer out there” seems to work for folks (it isn’t, but that’s OK for now). But Big Data has everyone in a tizzy. People either see it as the great technology savior that will solve all problems, or they see it as “BS”.

Is Big Data real? Of course it is. Amazon revolutionized the business model in retail, putting companies like Circuit City and Borders out of business. Facebook, Netflix, Google, and many more have transformed the social and economic landscape. The ability to process ever larger data volumes and apply increasingly complex algorithms to derive value from them is a game changer. And frankly, there’s no going back. Big Data has changed the way we live and do business.

So why all the hype? Well, Clarke’s third law certainly applies. “Big Data” isn’t really any one thing. It is a non-specific term for a collection of technologies and processes for dealing with a data problem that is too big to solve with “traditional computing” (whatever that really means). Basically, the problem got too big for any single machine (even, perhaps, Deep Thought), so we figured out how to set a bunch of machines against the problem and “brute force” a solution. Big Data is actually a bunch of different technologies and tools used for solving similar, but distinct, types of problems. Hide that nuance behind an umbrella term, mix in a little advanced statistics and some computing algorithms, and there’s little hope that many people will really understand what is going on in there. It’s… well… magic.
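For readers who want to see what “a bunch of machines against the problem” actually looks like, here is a minimal sketch of the split-process-merge pattern behind MapReduce-style tools. This is a hypothetical toy in plain Python, not any real framework’s API; the chunking simply stands in for data spread across nodes:

```python
# Toy illustration of the MapReduce idea: split the data into chunks,
# process each chunk independently (the "map" step, one per machine),
# then merge the partial results (the "reduce" step).
from collections import Counter
from functools import reduce

def map_chunk(lines):
    # Each "machine" counts words only in its own slice of the data.
    return Counter(word for line in lines for word in line.split())

def reduce_counts(a, b):
    # Partial counts merge by addition, which is why the work parallelizes.
    return a + b

data = ["big data is not magic", "big data is many machines"]
chunks = [data[:1], data[1:]]   # pretend each chunk lives on a different node
total = reduce(reduce_counts, (map_chunk(c) for c in chunks))
print(total["big"])             # → 2 ("big" appears once in each chunk)
```

The point of the pattern is that no single step ever needs the whole dataset in one place; each machine only sees its chunk, and the merge step works on small summaries.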

And this is really how it goes with most “general purpose technologies.” Until they are mapped to distinct problems that real people can recognize, they are misunderstood and underutilized. The Fast Fourier Transform isn’t anything my children care about, but they love my 4G phone streaming cartoons into the back seat on long road trips. “Big Data” doesn’t solve any “real world” problem on its own. It provides capabilities that can be applied to “real world” problems. And therein lies the rub. Behind the hype is the hard truth that Big Data doesn’t solve any problems for you out of the box. You have to map Big Data capabilities to your real-world problems.

And that, frankly, is the biggest inhibitor to adoption and to realizing value from Big Data. You have to know your business well, identify key leverage points and “interesting questions,” and then have the operational excellence to act on any answers you might get from Big Data. Big Data didn’t create Amazon, or Netflix, or Facebook. Those companies started by asking really interesting, high-value questions that were hard to solve. They then invented Big Data technologies to generate solutions. And then came the hard part: they out-executed the competition.


Originally published May 20, 2014; updated on August 9th, 2017

About the Author

Charles Caldwell is the Vice President of Product Management at Logi Analytics. Charles came to Logi Analytics with a decade of experience in data warehousing and business intelligence (BI). He has built data warehouses and reporting systems for Fortune 500 organizations, and has also developed high-quality technical teams in the BI space throughout his career. He completed his MBA at George Washington with a focus on the decision sciences and has spoken at industry conferences on topics including advanced analytics and agile BI.