Edvancer's Knowledge Hub

Three real-world examples of big data objectives


Like many new information technologies, big data can bring about dramatic cost reductions or new product and service offerings. Like traditional analytics, it can also support internal business decisions. You can achieve a variety of objectives using big data, but it is essential to focus on just one goal, at least at first. You must be clear on what exactly you want to achieve through the big data initiative. What are the business outcomes? What are your expectations? Which key areas of the business would you like to enhance using big data analytics? Most importantly, ensure that your big data objective is aligned with your company's business goals. Without further ado, here are the three examples:

Develop new products and services

The era of big data has created substantial opportunities for developing products aligned with consumer demands. Using big data to inform new product development has many benefits: firms can build products that connect with the consumer, deliver increased consumer value, and minimize the risks associated with a new product's launch. Through data mining, firms can also identify needs they might not otherwise have captured.

For example, LinkedIn has used big data and data science to develop a broad array of product offerings and features. The firm employs big data to improve its job listings and its "Who's viewed your profile" and "Who's viewed your posts" features. These offerings have brought millions of new customers to LinkedIn and have helped retain them as well.

Another strong contender for the best at developing products and services based on big data is Google. The company, of course, uses big data to refine its core search and ad-serving algorithms, and it is continually developing new products and services with big data algorithms for search or ad placement at their core, including Gmail, Google+, Google Apps, and others. Google even describes the self-driving car and Google Glass as big data applications.
If an organisation is serious about creating new products with big data, it should develop a process for testing these products on a small scale before releasing them to customers. Most importantly, business leaders should spearhead these projects rather than a technician or data scientist.

Support internal business decisions

The primary purpose of traditional small-data analytics was to support internal business decisions. Which offers will resonate with your customers? Which customers are most likely to churn? How much inventory should you hold in the warehouse? What should your product's price be during a particular season? Previously, companies analyzed only structured data sources to answer these questions, whereas today much of the data about how customers feel is unstructured: data from social media, emails, customer calls, and so on. The difficulty with unstructured data is that it lacks the structure you expect from structured data: tables of records with meaningful fields and connections between tables. In the past, unstructured data couldn't be analyzed because the advanced big data tech stack didn't yet exist. Now, however, businesses use natural language processing techniques to identify customers whose words express strong dissatisfaction, and then intervene to identify the source of that dissatisfaction.

Cost reduction from big data technologies

If you're primarily seeking cost reduction, you're probably aware that MIPS (millions of instructions per second, i.e. how fast a computer system crunches data) and terabyte storage for structured data are now most cheaply delivered through big data technologies like Hadoop clusters (Hadoop is an open-source software framework built specifically to process large amounts of data, from terabytes to petabytes and beyond).
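To make the unstructured-data point above concrete, here is a minimal, keyword-based sketch of flagging dissatisfied customers. A production system would use a trained sentiment model rather than a fixed term list; the term list, function name, and sample messages here are invented for illustration.

```python
import re

# Hypothetical list of strong-dissatisfaction terms; a real NLP pipeline
# would learn sentiment from data instead of using a hand-picked list.
DISSATISFACTION_TERMS = {"terrible", "awful", "cancel", "refund", "furious", "worst"}

def flag_dissatisfied(messages):
    """Return the (customer_id, text) pairs whose text contains
    at least one strong-dissatisfaction term."""
    flagged = []
    for customer_id, text in messages:
        # Lowercase and tokenize into words before matching.
        words = set(re.findall(r"[a-z']+", text.lower()))
        if words & DISSATISFACTION_TERMS:
            flagged.append((customer_id, text))
    return flagged

sample = [
    ("c1", "This is the worst service, I want a refund."),
    ("c2", "Thanks, the delivery arrived on time."),
]
print(flag_dissatisfied(sample))  # flags only c1
```

Customers flagged this way could then be routed to a retention team, which is the kind of intervention described above.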
In an interview, Cloudera vice president Charles said that the cost of a Hadoop data management system, including hardware, software, and other expenses, comes to about $1,000 a terabyte, which is far less than other data management systems. Of course, these comparisons are not entirely fair, because implementing a new Hadoop cluster and all its associated tools may require hiring some expensive engineers and data scientists. But if you already have the necessary people, a Hadoop-based approach to big data could be a great bargain for your company. I hope the above examples will enable you to make confident choices when choosing your big data objective.
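Taking the quoted figure at face value, a quick back-of-envelope comparison shows why the difference matters at scale. The $10,000-per-terabyte figure for a traditional system below is an illustrative assumption, not a number from the interview.

```python
def cluster_cost(terabytes, cost_per_tb):
    """Total system cost for a given capacity at a flat per-terabyte rate."""
    return terabytes * cost_per_tb

hadoop = cluster_cost(500, 1_000)        # 500 TB at ~$1,000/TB (quoted figure)
traditional = cluster_cost(500, 10_000)  # hypothetical $10,000/TB system
print(f"Hadoop: ${hadoop:,}; traditional: ${traditional:,}")
# Hadoop: $500,000; traditional: $5,000,000
```

Even this crude arithmetic ignores staffing costs, which, as noted above, can erode the advantage if you need to hire new engineers and data scientists.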

Manu Jeevan

Manu Jeevan is a self-taught data scientist and loves to explain data science concepts in simple terms. You can connect with him on LinkedIn, or email him at manu@bigdataexaminer.com.