MACHINE LEARNING
Getting your machine learning right
As we enter the age of Siri and Alexa, machine learning carries new implications for the financial services industry. So how do we get it right?
V. Ramkumar
Senior Partner, Cedar Management Consulting International LLC
Machine learning (ML), in its simplest form, is when a computer learns automatically and uses that learning to predict a future course of action. The learning can draw on data in all its forms across the universe of Big Data, deducing further information, relating it to insights and driving predictions and decisions on that basis.
While ML and artificial intelligence (AI) find applications across a variety of sectors ranging from healthcare to hospitality, the impact they have already made in the financial services sector relates to a few specific applications, including fraud detection, cyber security threat reduction, automated smart-loan origination, preemptive collections, virtual assistance and algorithmic portfolio management. In addition to reducing operational costs, ML environments allow for improved accuracy and a better customer experience.
The estimated financial benefits in each use case have generally been positive. For instance, ML is estimated to drive down bad debt provision by more than 35%, and similar estimates exist for each of the applications above. While there are multiple use cases that one sees every day, from chatbots to optical character recognition (OCR) and sentiment analysis, here we will explore the effectiveness of ML in the context of fraud detection. This is about building a correlation with a seemingly unrelated set of data, and helping to determine potentially fraudulent activity.
ML allows for identifying and preventing fraud which might otherwise not have been detected as regular fraudulent activity. For example, a self-adaptive ML program would be able to connect the dots that link geo-location data with past transaction behaviour to flag a potentially fraudulent card transaction. A natural corollary is where geo-location details are used to increase customer intimacy and wallet share; an obvious example is an instant discount offer to customers at outlets in the neighbourhood where they are located at that point in time.
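As an illustration of how such dots might be connected, here is a minimal sketch (not the author's or any vendor's implementation) that scores a new card transaction against a customer's past behaviour using an isolation forest; the features, data and contamination rate are assumptions made for the example.

```python
# Hypothetical sketch: flagging a card transaction by combining
# geo-location distance with deviation from past spending behaviour.
# Feature names, data and the contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical transactions for one customer:
# [distance_from_home_km, amount, hour_of_day]
history = np.array([
    [2.1,  45.0, 10],
    [0.8,  12.5, 18],
    [5.3,  80.0, 13],
    [1.2,  30.0, 20],
    [3.4,  55.0, 11],
    [2.7,  22.0, 15],
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(history)

# A new transaction far from the usual locations, unusually large, at 3am.
new_txn = np.array([[420.0, 950.0, 3]])
score = model.decision_function(new_txn)[0]      # lower score = more anomalous
is_suspicious = model.predict(new_txn)[0] == -1  # -1 means flagged as outlier

print(f"anomaly score: {score:.3f}, flag for review: {is_suspicious}")
```

In practice the behavioural features would be derived from far richer transaction and location histories, but the principle is the same: the model learns what "normal" looks like for that customer and surfaces deviations.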
The successful deployment of ML is a function of effective training of the system and accurate scoring: understanding the data, creating models, generating insights and validating the output. This validation, confirming that there are no 'false positives' and training the system to detect such occurrences the next time, is an iterative process. This iteration, also called the Data Sciences Loop, is key to getting the ML right. A case in point is where financial transactions are scanned for suspicious behaviour. While the effectiveness of the ML tool deployed lies in predicting fraudulent transactions accurately, it is equally critical to minimise wrongly flagging legitimate transactions, which would result in excessive interference and customer dissatisfaction.
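A minimal sketch of such an iterative loop is shown below, assuming a synthetic, imbalanced transaction set and a simple rule that tightens the decision threshold whenever too many legitimate transactions are flagged; the data, model choice and thresholds are illustrative assumptions only.

```python
# Hypothetical sketch of the iterative "Data Sciences Loop" described above:
# train, score, validate against false positives, adjust, repeat.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced "transactions": label 1 = fraudulent, 0 = legitimate.
X, y = make_classification(n_samples=5000, weights=[0.98], random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
threshold = 0.5

for iteration in range(5):
    prob = model.predict_proba(X_val)[:, 1]
    flagged = (prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_val, flagged).ravel()
    fp_rate = fp / (fp + tn)   # legitimate transactions wrongly flagged
    recall = tp / (tp + fn)    # fraudulent transactions caught
    print(f"pass {iteration}: threshold={threshold:.1f} "
          f"false-positive rate={fp_rate:.3%} recall={recall:.1%}")
    if fp_rate <= 0.01:        # acceptable level of interference (assumed)
        break
    threshold += 0.1           # tighten the flagging rule and validate again
    # In practice each pass would also retrain the model on newly
    # labelled outcomes, not just adjust the decision threshold.
```

The trade-off the loop makes explicit is the one described above: every gain in fraud detection has to be weighed against the customer friction created by wrongly blocked legitimate transactions.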
So how does one get ML right, and what are the right components of an effective ML platform? We explore the three steps that are critical to making ML truly effective.
Managing Big Data: getting the data model right
As ML platforms tend to be agnostic to data schema and the data sources tend to increase over time, the ability to absorb new feeds of data becomes a key factor. Enterprise-level model development is taking centre stage, and the model outputs are now part of regulatory and business intelligence processes. A key challenge is where the data required by models sits outside the IT governance framework. Big Data management systems and data reservoirs have become the name of the game. Given that the data now available spans both human-generated and machine-generated sources in structured and unstructured forms – including social media, biometric, legacy and transactional data – having the right framework to clean, transform, store and enrich different types of data within a high-volume distributed file system is the first step towards effective ML. This, in turn, allows for a granular view of micro-segments that drives predictability. The key to success, however, comes with the responsibility of more vigilant data security and governance.
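As a rough illustration of such a framework, the PySpark sketch below lands a structured transaction feed and a semi-structured channel-events feed in a distributed file store, cleans them and enriches the transactions with recent geo-location before persisting them to a data reservoir; the paths, column names and enrichment join are assumptions made for the example.

```python
# Hypothetical ingestion sketch for a "data reservoir": clean, transform,
# enrich and store mixed feeds on a distributed file system.
# All paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("data-reservoir-ingest").getOrCreate()

# Structured transactional feed (e.g. a core banking extract), assumed to
# include customer_id, amount and txn_date columns.
txns = (spark.read.option("header", True).csv("hdfs:///landing/transactions/")
        .withColumn("amount", F.col("amount").cast("double"))
        .dropna(subset=["customer_id", "amount"]))

# Semi-structured feed (e.g. JSON events from digital channels), assumed to
# include customer_id, event_time, latitude and longitude.
events = spark.read.json("hdfs:///landing/channel_events/")

# Enrich each transaction with the customer's most recent geo-location event.
latest_geo = (events.groupBy("customer_id")
              .agg(F.max("event_time").alias("event_time"))
              .join(events, ["customer_id", "event_time"]))

enriched = txns.join(latest_geo.select("customer_id", "latitude", "longitude"),
                     on="customer_id", how="left")

# Persist to the reservoir in a columnar format, partitioned for scale.
enriched.write.mode("overwrite").partitionBy("txn_date").parquet(
    "hdfs:///reservoir/enriched_transactions/")
```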