Harnessing Big Data for Product Innovation & Deeper Customer Insights
About The Company
The subject of this case study is the leading independent broker and agency writer of automobile insurance in California, and one of the fastest-growing automobile insurers in the nation. It is ranked as the fourth largest private passenger automobile insurer in California, with total assets over $4 billion. The company also writes automobile insurance in Arizona, Florida, Georgia, Illinois, Nevada, New Jersey, New York, Oklahoma, Texas and Virginia, and offers other lines of insurance in various states, including mechanical breakdown and homeowners insurance.
A New Wave of Innovation in the Insurance Industry
The insurance business is arguably one of the most data-centric industries to date. From driving business operations and managing risk to understanding their customers, insurance vendors depend on their data management systems and capabilities to maintain a competitive edge in these dynamic markets. While the insurance industry has struggled for decades to get a good handle on its data, both on the transactional and risk management sides, the recent rise of “Big Data” – new sources of data extending beyond traditional ones – has renewed interest in new approaches to data management. However, the volume, velocity and variety of these data sources are pushing traditional relational database management technologies to their limits.
“In the future, the creative sourcing of data and the distinctiveness of analytics methods will be much greater sources of competitive advantage in insurance. New sources of external data, new tools for underwriting risk, and behavior-influenced data monitoring are the key developments that are shaping up as game changers.”
Given the relative nascency of enterprise-level integration of Big Data technologies and utilities, insurance organizations are still only beginning to understand the use cases and applications for which Big Data investments would be most suitable. Moreover, even once a particular application has been identified, there remains a dearth of best-practice resources and methodologies for integrating it into existing environments.
“While more data, better tools, and new applications are creating opportunity in the insurance industry, to adapt and thrive in this emerging world of advanced analytics, insurers need to manage complex and large-scale organizational change.”
When the leader in insurance approached Systech for a Big Data solution, they wanted a way to automatically process vast amounts of unstructured XML data in order to perform indicative analysis, solve existing ETL challenges, and improve the accuracy of their rating engine’s scoring. Given the significant technical challenges posed by such an undertaking, they knew that a careful reassessment of their current data management practices & enterprise processes was in order. This case study presents how Systech helped this enterprise, a leading Insurance & Financial Services provider, implement & drive enterprise-wide adoption of a Big Data strategy, to be even more competitive in the vast Personal Auto Insurance markets.
“The Company operates in the highly competitive property and casualty insurance industry subject to competition on pricing, claims handling, consumer recognition, coverage offered and product features, customer service, and geographic coverage… Reputation for customer service and price are the principal means by which the Company competes with other insurers.”
– General Annual Report, 2014
Improving Customer Acquisition & Retention with Big Data
The Old Approach to Calculating Personal Auto Insurance Quotes
Previously, when a potential customer would approach the leader in insurance for a Personal Auto Insurance policy, the company collected a limited set of predetermined customer details (e.g., driving history, vehicle details, locality, etc.) that would be fed into its Rating Engine (RE). The RE would then process, score and return a personalized premium quote based on its calculated risk assessment for the individual. Each transaction through the RE also incurred a cost to the insurance provider. The potential customer would then weigh their options and choose whether or not to use the leader in insurance as their provider. With the goal of closing more deals, the company wanted to understand more about the customers it was winning and those it was losing, and why.
The leader in insurance knew that if they could amass, process and analyze larger scales of data on their customers, the Rating Engine could be much more finely attuned to generate more precise risk assessments, and to develop more personalized portfolio options & quotes appealing to the customers they would otherwise lose.
For the enterprise, early investments in analytics were largely managed as IT projects. However, with over 25 million XML files of customer data in tow (gathered from external sources), the challenge would be processing, storing, and then integrating & analyzing this Big Data in the context of internal information & account details. The scale of such an undertaking rendered the traditional ETL models in use impractical. It was clear that the project demanded a thoughtful approach to transforming their data management ecosystem. With their IT teams facing the unprecedented challenges of handling these new data sources, and an imperative to deliver in a timely manner, they approached Systech to help achieve their goals.
1. Volume: They presented a batch of 25 million XML files, with each file containing over 250 variables of customer information. Traditional data models would be limited by the number of variables that could be exposed, essentially reducing the granularity at which unique aspects of individual customer profiles could be understood.
2. Variance/Inconsistency: The details gathered on each customer and encoded within the XML structure varied wildly between file sets. With standard data processing approaches, this would entail running a different process for each type of file set.
3. Handling Unstructured Data: The lack of standardization in data types and the semi-structured XML file format would place cost-inefficient demands on traditional data management architectures.
4. Time to insights: The batched XML files would have to be extracted and processed individually – a cumbersome process entailing 1-2 months of work for IT teams before even being touched by analysts, placing a heavy burden on resources.
In today’s digital world, customer profiles evolve on much shorter time scales than ever before. This places a high value on the speed at which analyses can be performed and actionable strategic insights derived to boost a competitive edge.
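The variance challenge above can be illustrated with a minimal sketch. The field names and XML layout here are hypothetical, not the insurer's actual schema; the point is that semi-structured records with only partially overlapping fields can be flattened into a single tabular form, with every variable seen in any file becoming a column.

```python
import xml.etree.ElementTree as ET

# Two hypothetical customer records whose fields only partially overlap.
FILE_A = "<customer><age>34</age><zip>90210</zip><priors>1</priors></customer>"
FILE_B = "<customer><age>52</age><vehicle_year>2012</vehicle_year></customer>"

def flatten(xml_text):
    """Expose every leaf element of a customer record as a column value."""
    root = ET.fromstring(xml_text)
    return {child.tag: child.text for child in root}

def to_table(xml_docs):
    """Union the columns across all records; absent fields become None."""
    rows = [flatten(doc) for doc in xml_docs]
    columns = sorted({key for row in rows for key in row})
    return columns, [[row.get(col) for col in columns] for row in rows]

columns, table = to_table([FILE_A, FILE_B])
print(columns)  # ['age', 'priors', 'vehicle_year', 'zip']
print(table)    # one row per customer, None where a field was absent
```

A one-process-per-file-set approach would instead require a bespoke parser for each layout; deriving the schema from the union of observed fields is what lets a single pipeline absorb inconsistent inputs.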
An Automated Customer Analytics & Pricing Engine
The Big Data architecture that Systech designed, developed & deployed was created not only to inform and refine the rating engine, but also to establish a standardized, robust way for the leader in insurance to perform powerful analytics on large-scale data sets – a relatively new challenge for businesses only recently learning to capture the value locked within Big Data.
1. Cost-Effective Infrastructure: Designed & implemented a low-cost data storage architecture accommodating an average of 800 GB – 2.4 TB of information at a time
2. Data Processing: Developed tools to parse & expose every variable within the large volumes of semi-structured (XML) data, and effectively map it into a more standardized tabular format.
3. Automation: Systech built the tools to automate batch processing tasks on a regular basis, accelerating the data ingestion process and establishing a method to consistently perform on data of any scale.
4. Reduced Timelines: This implementation reduced the time in which useful, pre-processed data could be delivered to analysts from 1-2 months to near real-time speeds.
5. Organization Enablement with Self-Service Analytics: Systech oversaw the integration & adoption process of the new data management strategies into everyday workflows. The new environment enabled analysts to access the critical data they need, when they need it. Additionally, data was fully prepped to be analyzed and visualized for immediate decision-making purposes.
In their closing 2014 Annual Report, the leader in insurance made it clear that they had their sights set on expansion, and the company has continued to transform its operations & technology to:
Short-term goals: Expand into new markets
Breadth: Enter new territories, expand service offerings & customer base to other states, while remaining profitable.
Depth: Penetrate deeper into niche markets with non-standard or customized product offerings
Long-term goals: Increase profit margins
Create better policies that maximize their underwriting profit margin
The enterprise’s imperative to grow their current market share underscored a need to revitalize customer analytics capabilities throughout the organization and gain a deeper understanding of the customers they have won and lost. For example, in their 2014 Annual Report, the leader in insurance stated that the losses incurred during efforts to expand services into new states could be accounted for by unprecedented risk factors in the new customer types. A more robust customer analytics portfolio would help them foresee & plan accordingly for such challenges. Overall, these insights would lend themselves to their larger strategic goals of increasing customer acquisition and retention, while reducing customer churn.