Banking Challenges in Cash Management

Cash-outs lead to customer dissatisfaction

  1. Inaccurate econometric models
  2. Idle cash in many ATMs
  3. Unnecessary replenishment visits
  4. Increased cost of cash management
  5. ATMs running out of cash
  6. Manual estimation of ATM cash demand
  7. Rapid changes in consumer behaviour
  8. Vandalism of ATMs

Custom-Made Solutions

Problem

85% of big data projects fail (Gartner, 2017)

87% of data science projects never make it to production (VentureBeat, 2019)

“Through 2022, only 20% of analytic insights will deliver business outcomes” (Gartner, 2019)

Some of the most important reasons that lead to project failure include:

  • Data Management
  • Data Integration
  • Technology Complexity
  • Insufficient resources
  • Challenges in creating an ML workflow
  • Difficulty in ensuring continuous training and deployment

From raw data to added value

1

Ingest Data

✓ Data Warehouse
✓ Hadoop HDFS
✓ External Sources

2

Data Preparation

✓ Outlier Detection
✓ Handling Missing Values
✓ Normalisation
✓ Feature Engineering
✓ Feature Selection
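
The preparation steps above can be sketched end to end; a minimal NumPy illustration (the thresholds, split ratio, and function name are ours for illustration, not ExepnoCash internals):

```python
import numpy as np

def prepare(withdrawals: np.ndarray, train_frac: float = 0.8):
    """Toy data-preparation step: missing-value handling, outlier
    clipping, normalisation, and a chronological train/test split."""
    x = withdrawals.astype(float)

    # Handle missing values: impute NaNs with the series median.
    x = np.where(np.isnan(x), np.nanmedian(x), x)

    # Outlier detection: clip values beyond 3 standard deviations.
    mu, sigma = x.mean(), x.std()
    x = np.clip(x, mu - 3 * sigma, mu + 3 * sigma)

    # Normalisation: z-score scaling.
    x = (x - x.mean()) / (x.std() + 1e-9)

    # Time series: split chronologically, never shuffle.
    cut = int(len(x) * train_frac)
    return x[:cut], x[cut:]
```

Feature engineering and selection would extend this with calendar features (day of week, paydays, holidays) before the split.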

3

Train Model

✓ Random Forests
✓ Neural Networks
✓ Boosted Trees
✓ Econometrics

4

Validate Model

✓ Error metrics
✓ Key performance indicators
✓ Model comparisons
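
Step 4 boils down to scoring each candidate model on held-out data and keeping the best; a minimal sketch using RMSE (the metric choice and helper names here are illustrative):

```python
import math

def rmse(actual, predicted):
    """Root mean squared error between actual and predicted values."""
    return math.sqrt(
        sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    )

def best_model(actual, candidates):
    """Pick the candidate (name -> test-set predictions) with the lowest RMSE."""
    return min(candidates, key=lambda name: rmse(actual, candidates[name]))
```

In practice the same comparison is run over several metrics and business KPIs, not RMSE alone.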

5

Deploy

✓ Reporting
✓ Productionalising
✓ Automating
✓ Creating pipelines and APIs

How a Machine Learns

1

Uses large volumes of data

✓ Structured
✓ Unstructured

2

Utilises advanced algorithmic models

✓ Algebra
✓ Calculus

3

Learns without being explicitly programmed

✓ No if-then-else functions

4

Discovers hidden patterns

✓ Uncovers the underlying rules

End-goal: Collective Intelligence

  • The synergy between man and machine makes for more efficient and effective decision making
  • A stack of technologies, frameworks, platforms and scientific domains

Our Solution

ExepnoCash for ATM Cash Replenishment Optimisation

Options

Data-Driven Models considered

Standard Econometric Models

AR
ARMA
ARIMA
SARIMA

Machine Learning Models

Support Vector Machines
Random Forests
Gradient Boosted Trees

Neural Network Models

Feed Forward Neural Networks
Entity Embeddings
Wide & Deep
Recurrent LSTMs
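
As a flavour of the econometric family listed above, an AR(p) model can be fitted by ordinary least squares; a minimal NumPy sketch (the lag order and helper names are ours, not the product's implementation):

```python
import numpy as np

def fit_ar(series: np.ndarray, p: int) -> np.ndarray:
    """Fit an AR(p) model y_t = c + a_1*y_{t-1} + ... + a_p*y_{t-p}
    by ordinary least squares; returns [c, a_1, ..., a_p]."""
    rows = [series[t - p:t][::-1] for t in range(p, len(series))]
    X = np.hstack([np.ones((len(rows), 1)), np.array(rows)])
    y = series[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(series: np.ndarray, coeffs: np.ndarray) -> float:
    """One-step-ahead forecast from the fitted AR coefficients."""
    p = len(coeffs) - 1
    return coeffs[0] + coeffs[1:] @ series[-p:][::-1]
```

ARMA, ARIMA, and SARIMA build on this same regression idea by adding moving-average, differencing, and seasonal terms.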

Why Vertex AI?

1

Build, deploy, and scale ML models faster.

2

We have created a set of training models that can fit the data set of any bank.

3

Vertex AI evaluates the models already in production against newly trained ones and promotes the most accurate.

4

Reduces the complexity of managing large-scale deployments of ML models.

Google Vertex AI

Vertex AI Advantages

Top use cases of Vertex AI

Vertex AI Training

Vertex AI Training provides a set of pre-built algorithms but also allows users to bring custom code to train models. It is a fully managed training service for users who require greater flexibility and customisation, or who run training in cloud environments.

Model serving

Vertex AI Prediction makes it easy to deploy models into production, for online serving via HTTP or batch prediction for bulk scoring. You can deploy custom models built on any framework (including TensorFlow, PyTorch, Scikit or XGBoost) to Vertex AI Prediction, with built-in tooling to track your models’ performance.

Model monitoring

Continuous monitoring offers easy and proactive monitoring of model performance over time for models deployed in the Vertex AI Prediction service. Continuous monitoring identifies signals for your model’s predictive performance and gives alerts when the signals deviate. It then diagnoses the cause of the deviation and triggers model-retraining pipelines or collects relevant training data.

Model management

Vertex ML Metadata enables easier auditability and governance by automatically tracking inputs and outputs to all components in Vertex Pipelines for artefact, lineage, and execution tracking for your ML workflow. Track custom metadata directly from your code and query metadata using a Python SDK.

OUR IMPLEMENTATION - Components

get-atms-data

This step loads the training data from Cloud Storage into BigQuery and makes it available to the pipeline.

prepare-atms-data

This step preprocesses the ingested training data, adds all the features/predictors, and splits the data into training and testing datasets.

train-atms-model

This step uses the preprocessed training data to train a model.

atms-model-evaluation

This step evaluates the trained model using the test dataset.

Deploy based on condition

If current RMSE < previous RMSE, then deploy the trained model for predictions
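
That condition is a simple champion/challenger gate; a minimal sketch of the logic (the function name is hypothetical, not the actual pipeline component — in Vertex Pipelines the comparison is typically expressed with the SDK's condition construct):

```python
from typing import Optional

def should_deploy(current_rmse: float, previous_rmse: Optional[float]) -> bool:
    """Deploy the newly trained model only if it beats the model
    currently in production (or if no model is deployed yet)."""
    if previous_rmse is None:
        return True  # first run: nothing to compare against
    return current_rmse < previous_rmse
```

Keeping the comparison in one place makes the promotion rule auditable and easy to tighten (e.g. require a minimum improvement margin).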

ExepnoCash

ExepnoCash provides an extremely reliable cash management solution by removing manual calculations from the ATM replenishment process. It reduces unwanted cash reserves, accurately predicts cash shortfalls, and reduces emergency cash deliveries. It utilises the power of machine learning and shortens the time it takes to go reliably from data ingestion to deploying a new ML model into production based on updated facts.
  • ExepnoCash allows your bank to automate, monitor, and optimise cash management
  • It provides important insights to managers and stakeholders
  • Known events and bank holidays, when cash demand is likely to fluctuate, can be added to the system in advance
  • Its AI brain continually compares predictions with actual results to keep improving prediction accuracy
  • Built on the virtually unrestricted computational power of Google Cloud Platform, it has an enormous advantage over traditional econometric models

The key points of Vertex AI components:

  • Each pipeline step (component instance) has inputs, outputs, and an associated container image. These inputs and outputs may depend on other component steps.
  • Input and output dependencies between pipeline steps form a directed acyclic graph (DAG). The Pipelines SDK uses this graph to run each step as soon as its inputs are available, concurrently where possible and sequentially where required.
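
The scheduling behaviour described above can be illustrated with the standard library's topological sorter: each step becomes ready as soon as every step it depends on has finished (step names mirror the components above; the scheduler itself is illustrative, not the Pipelines SDK):

```python
from graphlib import TopologicalSorter

# Each step maps to the set of steps whose outputs it consumes.
pipeline = {
    "get-atms-data": set(),
    "prepare-atms-data": {"get-atms-data"},
    "train-atms-model": {"prepare-atms-data"},
    "atms-model-evaluation": {"train-atms-model"},
    "deploy-model": {"atms-model-evaluation"},
}

def run_order(dag):
    """Group steps into batches: steps within a batch have all inputs
    ready and could run concurrently; batches run sequentially."""
    ts = TopologicalSorter(dag)
    ts.prepare()
    batches = []
    while ts.is_active():
        ready = sorted(ts.get_ready())
        batches.append(ready)
        ts.done(*ready)
    return batches
```

For this linear pipeline every batch holds one step; a DAG with independent branches (say, two models trained from the same prepared data) would yield batches that run concurrently.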

Pipeline Components

Pipeline components are self-contained sets of code that perform one part of a pipeline's workflow, such as data preprocessing, data transformation, and training a model.

Components are composed of a set of inputs, a set of outputs, and the location of a container image. A component's container image is a package that includes the component's executable code and a definition of the environment that the code runs in.

ExepnoCash combines custom components with prebuilt components such as AutoML.

Components

Monitor model performance

Evaluate the performance of the current model and improve as needed

Route Optimisation

Determine cash-in-transit truck routing to curtail costs further

Extend the ATM use case

Scale to the entire ATM network / Include the entire branch cash demand

Explore similar cases

Credit card payment account optimisation

Use Case:

ExepnoCash

Roadmap

Artificial Intelligence Banking Applications

Credit Risk

Money Laundering Detection

Fraud Detection

Clustering / Customer Segmentation

Churn

Cross Selling

Recommendation Systems

Natural Language Processing

Channel Optimisation

Chat Bots

Image Recognition

ATM Cash Replenishment Optimisation

Contact Us