Applying Machine Learning to Insurance: Practical Principles for Success
By Dan Taylor, General Manager, Innovation, TAL Australia
Our team of underwriters handles thousands of customer applications through our direct channel every year, and the sheer volume and complexity of those applications creates a use case well suited to machine learning. Whilst it was previously impractical to review every application manually, our WunderWriter tool now uses machine learning algorithms to review 100 percent of applications and prioritises those for our QA team to audit. As a result, we now focus on the right cases rather than just a random sample, and we have real confidence in the cases not audited by the team.
This solution is just one example of how machine learning can transform a process and the related customer experience. Machine learning is particularly exciting because it can transform not just the speed of a process but also the outcome: it uses vast quantities of data to automate decision making, rather than simply automating tasks, as Robotic Process Automation does.
In a data-rich industry such as life insurance, machine learning brings the potential to make faster, more consistent and more tailored decisions than ever before, which could transform customer targeting, underwriting and claims processes, and importantly, the resulting customer experience.
Yet for many of us, machine learning remains a ‘magic black box’ with huge potential but limited understanding of how to best use it in practice.
Whilst partnerships are clearly important to benefit from external expertise, it is also necessary to build our own internal capabilities and understanding of how artificial intelligence (AI) works and how to best use it, as we believe that it will be a core competence for our future business. Therefore, TAL created a dedicated internal team to experiment with the technology and build out the best use cases.
Understanding the challenges
There are two major challenges in deploying machine learning into decision making processes.
1. Data: I have worked in many organisations and have never come across one with perfect data. Data is the lifeblood of machine learning (it's called 'machine learning', not 'machine knowing', for a reason!) and there are a few specific 'watch outs' to be aware of when considering the data sets:
a. Quality – There are always data gaps, inconsistencies between different databases, typos from manual entry and so on that make it very difficult for a machine to learn correctly.
b. 'Edge cases' – This is compounded further in life insurance by the uniqueness of individual applications and claims, which can result in 'edge cases' so rare that there is insufficient volume of data for the machine to learn from accurately.
c. Bias – The machine will typically learn any bias already present in the underlying data. When Amazon reportedly trained an algorithmic CV-screening tool on 10 years of its own hiring data to improve recruitment, the resulting tool exhibited a gender bias against female candidates. It is important to understand what biases are present and whether they are acceptable.
d. Historical data – Machine learning is trained on historical data to predict future outcomes. That works until your current parameters drift away from the historical ones. For us, every time we change an underwriting rule, or the way a question is phrased, we need to retrain the model.
2. Probability-based outcomes: We are all used to calculators and computers that always produce a definite answer to a given problem, because a formula governed by a set of rules has been hardcoded in. Machine learning, however, predicts outcomes as a probability based on historical data. Without a clear reason why from the machine, that statement of probability, rather than certainty, raises issues of trust in the machine's outcome.
Curiously, humans often weigh the same conflicting data as the machine, but we typically land on a clear recommendation rather than attaching a level of confidence or probability to a decision.
This probabilistic outcome can be challenging to apply to business decisions. Imagine a machine calculating the outcome of an insurance claim with an 80 percent probability: how can you confidently use that recommendation, and defend it, when there is a 20 percent chance it is wrong, particularly if the machine cannot explain why?
Our customers deserve better than that.
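One common way to act on probabilistic outputs, and the idea behind triaging cases by confidence, can be shown in a minimal sketch. This is a hypothetical illustration, not TAL's actual system: the `triage` function and its 0.95 threshold are assumptions chosen for the example.

```python
# Hypothetical illustration (not TAL's actual system): a model emits a
# probability, and a triage rule only automates high-confidence cases.

def triage(probability, threshold=0.95):
    """Route a case based on model confidence.

    Cases the model is confidently positive or confidently negative about
    are handled automatically; everything in between is referred to a human.
    """
    if probability >= threshold or probability <= 1 - threshold:
        return "automate"
    return "refer_to_human"

# An 80 percent probability is nowhere near certain enough to automate.
print(triage(0.80))   # refer_to_human
print(triage(0.99))   # automate
print(triage(0.02))   # automate (confidently negative)
```

The threshold is a business decision, not a technical one: the higher it is set, the fewer cases are automated, but the greater the confidence in those that are.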
So how do you overcome these challenges to leverage machine learning?
Taking a principle-based approach
At TAL, we have been guided by some clear principles to make sure that we are comfortable with any machine learning decisions:
1) Triage use cases – As machine learning can only confidently predict some, rather than all, outcomes, it is best used to triage (and automate) simple decisions and enable humans to focus on the complex ones. TAL’s WunderWriter tool triages which applications should be audited and provides confidence in the simpler cases.
2) Commercial confidence – In order to gain trust in the machine learning outcomes, we run all of our algorithms in ‘shadow mode’ for a period before deploying them fully. To do this, our algorithms run in parallel to our current processes and we compare live decisions by humans against those of the machine to gain real confidence in the outcomes rather than simply running tests on historical data sets.
3) Transparency – In an ongoing quest for greater transparency, we are currently working on applying attention modelling to our solutions in order to provide some insight into why the machine is making a decision. Our QA team also audits the machine's decisions, just as it does manual ones.
4) The customer can't lose – We only deploy machine learning in a way that cannot negatively impact the customer. For example, if WunderWriter finds an error by one of our underwriters, it only changes the outcome for the customer if the change is in their favour; otherwise the original decision stands.
5) Virtuous circle – Simply deploying a tool is not enough. The real value comes from ensuring the feedback creates an ever-improving loop, both to improve the existing processes and training, and to continue to train the machine to achieve ever-greater accuracy.
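The 'shadow mode' approach in principle 2 can be sketched in a few lines. This is an assumed illustration, not TAL's code: the function and field names (`shadow_mode_agreement`, `human_decision`, `risk_score`) are invented for the example, and the stand-in model is deliberately trivial.

```python
# Hypothetical sketch of 'shadow mode': the model runs alongside the live
# human process, its predictions are logged but never acted on, and the
# agreement rate informs any go-live decision.

def shadow_mode_agreement(cases, model):
    """Compare live human decisions with the model's shadow predictions."""
    agree = 0
    for case in cases:
        human_decision = case["human_decision"]   # the decision actually used
        model_decision = model(case["features"])  # logged, never applied
        if model_decision == human_decision:
            agree += 1
    return agree / len(cases)

# Toy example: a trivial stand-in model and three illustrative cases.
toy_model = lambda f: "approve" if f["risk_score"] < 0.5 else "review"
cases = [
    {"features": {"risk_score": 0.2}, "human_decision": "approve"},
    {"features": {"risk_score": 0.7}, "human_decision": "review"},
    {"features": {"risk_score": 0.4}, "human_decision": "review"},
]
print(f"Agreement: {shadow_mode_agreement(cases, toy_model):.0%}")
```

Because the comparison runs against live decisions rather than a historical test set, it also surfaces disagreements worth investigating: each one is either a model error to fix or, as with WunderWriter, a human error caught.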
By understanding some of the caveats behind the ‘magic black box’ of machine learning and practically applying these principles, we can deliver a world where decisions can be faster, more consistent, more accurate and, ultimately, better.