Big Data Analytics’ Experimental Phase
Great analytics – like most teenagers, budding artists and even entrepreneurs – needs to experiment: find answers to questions, test theories, chart unexplored territory and discover more.
Variances between simulated and actual results may be opportunities not only for correction, but for experimentation. Using analytic learning loops in conjunction with systematic experimental design, companies may discover opportunities that are not evident to competitors and gain forward-looking insights into how customer behavior is evolving.
The quantity and variety of data available increases the range of experiments companies can conduct in a short amount of time while still producing statistically significant results. In addition to testing variations on well-performing decision strategies, companies should test strategies that lie beyond the edges of business as usual and organizational comfort zones. This edge-probing deliberately introduces controlled variation into the production data, thereby expanding what can be learned from it.
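To make the idea of controlled variation concrete, here is a minimal sketch of how a champion-challenger experiment might be evaluated. The conversion rates, sample sizes, and strategy names are hypothetical, and the two-proportion z-test shown is just one common way to check whether an "edge" strategy's lift is statistically significant:

```python
import math
import random

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Simulated outcomes: the current "champion" strategy vs. an edge-of-envelope
# "challenger". All rates here are illustrative, not real benchmarks.
random.seed(42)
n = 20_000  # large samples let even small lifts reach significance quickly
champion = sum(random.random() < 0.050 for _ in range(n))    # ~5.0% baseline
challenger = sum(random.random() < 0.056 for _ in range(n))  # ~5.6% variant

z, p = two_proportion_z_test(champion, n, challenger, n)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With big-data sample sizes, even a fraction-of-a-percent lift from an unconventional strategy can clear the significance bar, which is what makes rapid edge-probing experiments practical.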
Moreover, bold front-end experiments like these will increasingly be supported by new approaches to back-end exploratory analysis. Emerging methods in the era of Big Data involve analyzing large amounts of data in an initially rough or “good enough” way, then iteratively working toward narrower, more precise results. Such methods can surface unexpected predictive attributes to be used in unconventional decision strategies for edge-of-envelope experiments in the production environment. As the consulting company Ovum states, “Big Data is a change of mindset regarding the art of the possible.”
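One simple way to picture this rough-then-refine style of analysis is progressive sampling: compute a quick estimate on a small sample, then grow the sample only until the answer is precise enough. The dataset, tolerance, and stopping rule below are all hypothetical, a sketch of the pattern rather than any particular vendor's method:

```python
import random
import statistics

random.seed(7)
# Hypothetical "big" dataset: a million skewed transaction amounts.
population = [random.lognormvariate(3, 1) for _ in range(1_000_000)]

def progressive_mean(data, start=1_000, tolerance=0.5):
    """Estimate the mean on growing samples, stopping once the
    approximate 95% confidence half-width is within tolerance."""
    n = start
    while True:
        sample = random.sample(data, n)
        mean = statistics.fmean(sample)
        half_width = 1.96 * statistics.stdev(sample) / n ** 0.5
        if half_width <= tolerance or n >= len(data):
            return mean, half_width, n
        n = min(n * 4, len(data))  # refine: quadruple the sample each pass

mean, hw, n_used = progressive_mean(population)
print(f"mean ≈ {mean:.2f} ± {hw:.2f} (n = {n_used:,})")
```

The “good enough” first pass touches a fraction of the data; later passes spend compute only where the early answer was too imprecise.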
For more information on this topic, check out our Insights paper (registration required).