Aggregates: what you don’t know can hurt you

by Artemis on December 4, 2013

Dmitry Mnushkin, president of Treefrog Consulting Ltd (sponsor), a Bermuda-based risk software firm which specialises in solutions for the reinsurance and ILS industry, writes about the importance of risk portfolio aggregation and how software can help you make sense of it.

Know your aggregates

Recently I had a chance to sit down with a reinsurance industry veteran. It was a normal business dinner; we made some small talk and shared stories about the kids. Then I asked how often they did their portfolio aggregations and how long it took. The answer started off typically enough: “Once every few months; it takes 3-4 weeks each time.” Then it got interesting: “Our portfolio 1-in-250 PML increased by 20% and we have no idea why.”

I was dismayed to hear this because a move of this magnitude without a rock-solid explanation would have rating agencies swarming for a closer look and institutional investors getting nervous. It also brought home the hard fact that without a systematic approach to portfolio rollup and aggregation, reinsurance companies are constantly in danger of losing control of their risk exposure.
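For readers unfamiliar with the metric: a 1-in-250 PML (probable maximum loss) is simply the annual loss level the portfolio is expected to exceed once every 250 years, i.e. the 99.6th percentile of the annual loss distribution. A minimal sketch of the calculation, assuming a simulated year-loss table (the loss figures and distribution here are purely illustrative, not from any real portfolio):

```python
import numpy as np

def pml(annual_losses, return_period):
    """Probable Maximum Loss at a given return period: the loss level
    exceeded with probability 1/return_period, i.e. the (1 - 1/rp)
    quantile of the annual loss distribution."""
    return float(np.quantile(annual_losses, 1.0 - 1.0 / return_period))

# Hypothetical year-loss table: simulated annual portfolio losses (USD m).
rng = np.random.default_rng(42)
losses = rng.lognormal(mean=3.0, sigma=1.2, size=100_000)

print(f"1-in-250 PML: {pml(losses, 250):.1f}m")
```

A 20% jump in this single number, with no trail back to the deals or branches that drove it, is exactly the situation the veteran described.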

It should surprise no one that the most widely used system for managing risk portfolios is Excel. Actuaries, underwriters, analysts and assistants spend the majority of their time searching through and updating these spreadsheets, often touching them a dozen times during a deal submission. Data is entered in multiple places, extracted manually for management review, double-entered into accounting systems, lost and retrieved, cursed but relied upon in the absence of anything better.

Lack of a central system also begets corporate silos and an us-vs-them attitude. The Australia branch does things one way; the UK branch does something different. Each has evolved its own set of tools to solve its local problems, with the end result that getting an overall picture of corporate risk becomes a vastly time-consuming, manual and error-prone exercise. If not caught early and managed carefully, this can also lead to internal tool sponsors vying for corporate supremacy, stonewalling and resisting integration in a process as wasteful as it is unproductive.

In the end, the inability to roll up a corporate risk portfolio and look at aggregate PML on demand is bad for business. How can the company decide the best place to allocate capital if the feedback loop is several months long and the end result likely inaccurate? Worse, how can a company compete against nimble startups or technologically advanced market players who can impress clients and investors with shiny systems permitting all risks to be viewed from every angle at the touch of a button?

The first step is to recognize the magnitude of the problem and understand that without solving it the business is at a significant disadvantage. With startups, the challenge is keeping systems development apace with capital deployment. There tend to be a large number of distractions in the first year or two, and building an application to help guide business decisions can fall by the wayside. Established companies faced with this problem have a different challenge. There the issue is overcoming the inertia inherent in doing things one way for a long time. People don’t like change, so getting everyone on board requires a combination of strong leadership and a vocal proponent from the management team.

The next step is to determine what kind of system is required and who will build or provide it. Many companies decide to develop systems internally. This is a natural desire as it promises to keep IP in house, utilizes existing resources and appears to be the least expensive option. However, this approach suffers from several key issues. The first is that rarely does a company possess business people technical enough to properly describe the problem to their developers. Even if they do, the developers are generally unable to anticipate future requirements and end up building systems that have to be rewritten in a year or two. The subsequent wasted effort chews up time and money at a remarkable pace as additional developers are hired to overcome technical debt created by the first, inadequately architected efforts. This is why established technological leaders can each spend tens of millions annually on software development efforts to maintain their lead.

One of the most common arguments used to justify the cost of internal development is the preservation of IP. If a company relies on outsiders to develop their system, what stops others from receiving the same secret sauce? The answer is quite simple: there really is no secret sauce. The requirements of a proper risk portfolio management system are well understood: marginal pricing, portfolio aggregation, treaty term application, comprehensive reporting, and so on. The difficulty comes in actually building a system capable of handling all these requirements. Success here requires experience, and this is where outside assistance cuts years and millions off the development effort.
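Marginal pricing, the first of those well-understood requirements, is a good illustration of why none of this is secret sauce: a candidate deal is evaluated by how much it moves the portfolio's tail risk, not by its standalone numbers. A hedged sketch, assuming simulated annual losses on a common set of simulation years (all figures and distributions below are illustrative assumptions, not a real pricing method of any firm):

```python
import numpy as np

def pml(annual_losses, return_period):
    # Loss level exceeded with probability 1/return_period.
    return float(np.quantile(annual_losses, 1.0 - 1.0 / return_period))

# Hypothetical simulated annual losses (USD m) on the same event years
# for the existing portfolio and a candidate deal.
rng = np.random.default_rng(7)
portfolio = rng.lognormal(3.0, 1.0, 50_000)
deal = rng.lognormal(1.0, 1.5, 50_000)

# Marginal view: how much writing the deal moves the 1-in-250 PML.
# A pricing system would charge for this capital impact, not just
# for the deal's expected loss on its own.
marginal_pml = pml(portfolio + deal, 250) - pml(portfolio, 250)
print(f"Marginal 1-in-250 PML contribution: {marginal_pml:.2f}m")
```

The hard part, as the article notes, is not this formula but building a system that runs it consistently across every treaty, branch and currency on demand.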

For companies that have acknowledged the need for such a system without the desire to build their own, the next logical choice is to buy one off the shelf. Until quite recently, this would have involved approaching a competitor who had developed one and negotiating to license their technology. In the past three years some other options have become available, typically consisting of all-inclusive packages that lock you into a single vendor, or partial solutions that automate some, but not all, of the pricing and management processes. Consultancies such as the company I represent have also emerged, allowing customers to rent everything from architectural know-how to entire development teams that will build applications to spec and subsequently hand over the source code to the customer.

In many cases a company is not ready to invest in a full-blown underwriting platform. The business may not be sufficiently complex to warrant such a commitment or the funds may not yet exist to permit significant development efforts. Our recommendation to clients in this position is to expand and automate as much as they can using the tools they are already familiar with. For instance, if a CAT fund manager already maintains much of their portfolio in a spreadsheet, they could continue to do so but tweak that spreadsheet to eliminate tedious human intervention, start treating the data as more of a database and introduce powerful pivot table reports to allow access to all that information they’ve been collecting. This relatively small effort (weeks, not months) results in markedly improved usability, makes users think about other improvements that can be made and fosters the tool-building mindset.
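The "treat the spreadsheet as a database" step above can be sketched with a few lines of pandas. The column names and figures here are illustrative assumptions about how deal records might live in such a sheet, not a real client schema; the point is that one pivot replaces a pile of manual cross-sheet lookups:

```python
import pandas as pd

# Hypothetical deal records as they might sit in a spreadsheet export
# (limits and premiums in USD m -- illustrative numbers only).
deals = pd.DataFrame([
    {"branch": "Australia", "peril": "Cyclone", "limit": 10.0, "premium": 1.2},
    {"branch": "Australia", "peril": "Quake",   "limit": 5.0,  "premium": 0.4},
    {"branch": "UK",        "peril": "Flood",   "limit": 8.0,  "premium": 0.9},
    {"branch": "UK",        "peril": "Quake",   "limit": 6.0,  "premium": 0.5},
])

# One pivot gives aggregate exposure by branch and peril, with totals,
# instead of hand-maintained summary tabs.
report = deals.pivot_table(index="branch", columns="peril",
                           values="limit", aggfunc="sum",
                           fill_value=0.0, margins=True,
                           margins_name="Total")
print(report)
```

Once the data sits in tidy rows like this, swapping the aggregation dimension (peril, year, cedant) is a one-line change rather than a new worksheet.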

Bottom line, if you don’t know where you’ve been you won’t know where you’re going. Having the tools to understand your aggregates in a timely and accurate fashion shows you where you’ve been. Where you’re going is up to you.

This article from our sponsor:

Dmitry Mnushkin has been architecting and developing software for the reinsurance industry for the past 14 years. He is currently the president of Treefrog Consulting Ltd (www.treefrogconsulting.com), a Bermuda-based software developer specializing in custom risk portfolio management systems.
