With big data now a hot topic for insurers, Zack Schmiesing, director of thought leadership at Verisk Commercial Property, takes a look at some of the pitfalls that await the unwary.
The value of big data is a hot topic in business media circles today. But the insurance industry has embraced and relied on big data for decades to help refine pricing and set short-, medium-, and long-term strategies. Most insurers know that data quality, although essential, is often misunderstood or sacrificed for the sake of expediency. An insurer can have the best model and processing available, but poor data quality very often leads to incorrect pricing. Inferior data can have a significant impact, particularly in commercial property underwriting.
Residential property exposures tend to be homogeneous in their structural and engineering complexity, save for a few variances in aesthetics and cladding materials. Commercial property exposures, on the other hand, tend to have more intricate structural, architectural, and material characteristics. When contents coverage is added to the mix, the risk can become very complex to represent in the modeled space.
Inconsistencies and errors often begin at the very start of the policy process, when an insured either is asked by an agent for exposure information or volunteers it. Is the insured technically trained to accurately define construction type, physical features, or even the year a structure was built? The agent then typically enters the data manually into a quoting portal to find the best price (or the preferred carrier, based on ease of use or commission). The insurer often provides a premium quote based on its proprietary pricing model, on the assumption that the data is accurate. It's like a real-life version of the telephone game: How much accuracy is lost between the true nature of the property, the insured's assessment, the agent's interpretation, and data entry? A minor error at any point in the process can have a significant impact.