
AI Case Study

ANZ Bank identifies high-risk loans and predicts customer defaults with deep learning

ANZ Bank has collaborated with Nvidia and Monash University researchers to develop deep learning technology. A neural network trained on customer credit card data can assess risk on a much more frequent basis than current practices allow and predict which clients are likely to default on payments. The system is currently a proof of concept: the bank has stated that it needs to fully understand how the model works before commercialising it.


Financial Services


Project Overview

"An ANZ proof of concept built in partnership with Nvidia and Monash University researchers has shown that deep learning techniques can be combined with customer data to better assess risk and to also do so on a more frequent basis than has previously been possible.
The proof of concept sought to use a neural network to predict which customers were likely to default on payments.

Identifying high risk loans is key to reducing the bank’s exposure and the need to maintain large asset reserves, ANZ’s head of retail risk Jason Humphrey today told the Nvidia AI Conference in Sydney."

He also said that "before the bank moves forward with its neural network-based model, a number of health checks need to be cleared." (zdnet)

Reported Results

ANZ’s head of retail risk Jason Humphrey reports that the POC delivered strong results, "including raising the Gini coefficient used in assessing risk from 0.78 to 0.82. Testing the model with some 200,000 accounts took only 30 minutes."
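The Gini coefficient mentioned above is a standard credit-risk discrimination measure, related to ROC AUC by Gini = 2 × AUC − 1. A minimal sketch of how it is computed (the labels and scores below are illustrative toy data, not ANZ's):

```python
def gini(y_true, y_score):
    """Gini coefficient of a scoring model: Gini = 2 * AUC - 1.

    AUC is computed via pairwise (Mann-Whitney) comparison: the
    fraction of (defaulter, non-defaulter) pairs where the defaulter
    received the higher score, counting ties as half.
    """
    pos = [s for y, s in zip(y_true, y_score) if y == 1]  # defaulters
    neg = [s for y, s in zip(y_true, y_score) if y == 0]  # non-defaulters
    concordant = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = concordant / (len(pos) * len(neg))
    return 2 * auc - 1

# Toy example: 1 = default, 0 = no default
y_true = [0, 0, 1, 0, 1, 1, 0, 1]
y_score = [0.1, 0.65, 0.7, 0.2, 0.9, 0.6, 0.4, 0.8]
print(gini(y_true, y_score))  # 0.875
```

On this scale, 0 means the model ranks defaulters no better than chance and 1 means perfect separation, so the reported move from 0.78 to 0.82 is a meaningful lift in ranking power.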


"The project used a feedforward neural network trained on a TensorFlow compute cluster running Nvidia’s DGX-1 platform with dual 20-core Intel CPUs, eight Nvidia Tesla P100 GPUs and 512GB of RAM."





"Historically banks have used two key methods for assessing risk. The first is application scoring, which is a static score calculated at the point of a customer’s application for a product.
[Head of retail risk Jason] Humphrey said that since the 1960s, the approach to assessing application risk hasn’t fundamentally shifted from applying a mathematical model that combines the information contained in a customer’s application with the information the bank holds and credit bureau information. All that information is used to generate a score that represents the probability of default for a particular customer, he said.
[The second method is behaviour scoring.] However behaviour scoring has its own limitations and hasn’t significantly changed since the ’80s and ’90s, he said. It tends to be limited by the availability, accuracy and amount of data and the frequency at which you can reassess customers or update your models."
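The classic scorecard approach described above — a mathematical model combining application, bank, and bureau information into a probability of default — is typically a linear model passed through a logistic function. A minimal sketch, where the feature names, weights, and intercept are entirely illustrative:

```python
import math

# Hypothetical scorecard: weights and attributes are invented for
# illustration, not taken from any real bank's model.
WEIGHTS = {
    "income_band": -0.8,      # higher income band lowers risk
    "missed_payments": 1.2,   # bureau-reported missed payments raise risk
    "utilisation": 0.9,       # high credit utilisation raises risk
}
INTERCEPT = -2.0

def probability_of_default(applicant):
    """Linear score over applicant attributes, mapped to (0, 1) by a logistic."""
    z = INTERCEPT + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

applicant = {"income_band": 2, "missed_payments": 1, "utilisation": 0.6}
print(round(probability_of_default(applicant), 3))
```

The static nature of such a score is exactly the limitation Humphrey describes: it is computed once from a fixed set of inputs, whereas the deep learning approach lets the bank reassess customers far more frequently.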



"The bank used credit card information from 1 million accounts, with data from 700,000-800,000 accounts used for training and the remainder for testing.
Data was prepared for ingestion using a five-node Spark compute cluster, with each node running an eight-core CPU and 48GB of RAM."
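The reported division — roughly 700,000–800,000 of 1 million accounts for training, the remainder for testing — is a conventional random holdout split. A minimal sketch, using 750,000 as an assumed point within the reported range:

```python
import numpy as np

rng = np.random.default_rng(42)

n_accounts = 1_000_000
n_train = 750_000  # assumed value within the reported 700k-800k range

# Shuffle account indices, then take the first n_train for training
# and reserve the rest for testing.
idx = rng.permutation(n_accounts)
train_idx, test_idx = idx[:n_train], idx[n_train:]
print(len(train_idx), len(test_idx))  # 750000 250000
```

Holding out unseen accounts in this way is what makes the reported Gini figures a fair estimate of how the model would rank customers it was never trained on.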
