
Trusting the Tech: How Honest Algorithms Can Boost ML Applications in Financial Organisations

Companies across all industries, and especially the financial sector, now collect vast amounts of data. These data sets are far too large for humans to process, so algorithms are employed to make sense of them and perform high-volume machine learning (ML) analyses. But these algorithms have become the subject of some negative press. In high-profile examples, such as Amazon’s recruitment tool and the Apple Card’s credit limit, the algorithms introduced gender bias into the ML-driven outcome.

An algorithm’s ability to process and calculate data in a fraction of the time taken by a human makes it a valuable tool for the financial industry. The technology is highly effective at replicating human expertise within an organisation to solve business problems. However, if the industry is to keep realising the benefits of ML applications, it must adopt ethical algorithms that steer clear of bias.

Removing the mystery behind ML

Hedge funds and private markets funds that use machine learning applications want to do so without discrimination, especially the biased decision-making caused by a lack of transparency in the process. According to PwC, 84% of CEOs believe AI-based decision-making must be explainable in order to be trusted. This means ensuring algorithms are fully transparent in how they reach a conclusion and can be easily validated and monitored by a human expert.

Machine learning tools must incorporate full accountability to evolve beyond unexplainable ‘black box’ solutions. Only by embracing AI and ML solutions with ‘baked-in’ transparency can we take advantage of the humble and honest algorithms that produce unbiased, categorical predictions and consistently provide context, explainability and accuracy insights.

Finding the origin of bias

The earliest stages of the machine learning process, such as the first data upload and review stage, can be the initial point where bias is introduced. There are hundreds of parameters to take into consideration during data preparation, so it can often be difficult to strike a balance between removing bias and retaining useful data.

Sometimes it is the most useful parameters – such as gender – that cause bias in an analysis. Gender may be a useful descriptor when applied to identify specific health risks, but using it in many other scenarios risks leading to discrimination. Machine learning models will inevitably exploit parameters such as gender wherever they appear in the data sets they are given, so it is vital for users to understand how a model reached a specific conclusion and whether that conclusion was ethically acceptable.
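
Simply deleting a sensitive column is rarely enough, because correlated proxy features can reintroduce the same bias. As a minimal sketch of how this can be checked, assuming a hypothetical data set "applications.csv" with "gender" and "approved" columns (the file and column names are illustrative, not any particular platform's API):

```python
# A minimal sketch of a proxy-bias check. Dropping the sensitive column
# alone is not enough: the model may still recover it through correlated
# proxy features, which this check surfaces.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("applications.csv")          # hypothetical data set
X = df.drop(columns=["approved", "gender"])   # exclude target and sensitive attribute
y = df["approved"]

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, df["gender"], test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

# If predicted approval rates differ markedly by gender even though the
# column was removed, proxy features are carrying the bias.
print(pd.Series(scores, index=g_test.index).groupby(g_test).mean())
```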

End-to-end visibility

The term ‘explainability’ refers to the degree of visibility into a machine learning platform’s workings. When using a truly explainable solution, users can trace back the steps of a machine learning process and identify the reasoning behind choosing and deploying a certain model for an analysis. Users are then suitably equipped to justify the outcome.

Visibility is essential right from the first step of an ML analysis. ML tools should be embedded with features to allow the inspection of data and provide metrics on model accuracy and health, including the ability to visualise what the model is doing. Key to this is the platform alerting users to potential bias during the preparation stage.
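
One common way to visualise what a model is doing is to measure how much each feature drives its predictions. The sketch below uses scikit-learn's permutation importance for this, reusing the hypothetical model and test split from the earlier sketch; it is one illustrative technique, not the only way a platform might surface model health:

```python
# A minimal sketch of model-health inspection via permutation importance:
# shuffle each feature in turn and measure how much the model's score degrades.
from sklearn.inspection import permutation_importance

result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Rank features by impact on accuracy; a proxy for a sensitive attribute
# appearing near the top is an early warning of bias.
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{X_test.columns[idx]:<24} {result.importances_mean[idx]:.3f}")
```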

A truly transparent ML solution should offer this level of visibility throughout the analysis to demonstrate full explainability to users. Users need to be able to track each step through a consistent audit trail. This records how and when data sets have been imported, prepared and manipulated during the data science process. It also helps ensure compliance with national and industry regulations – such as the European Union’s GDPR ‘right to explanation’ clause – and demonstrate transparency to consumers.
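
In practice, an audit trail can be as simple as an append-only log that records each preparation step with a timestamp and a fingerprint of the resulting data. The sketch below is a minimal, hypothetical illustration of that idea, not any particular platform's API or a description of what GDPR itself mandates:

```python
# A minimal, hypothetical audit trail: each preparation step is recorded
# with a UTC timestamp and a hash of the resulting data, so how and when a
# data set was manipulated can be reconstructed later.
import hashlib
import json
from datetime import datetime, timezone

import pandas as pd

audit_log = []

def record_step(action: str, df: pd.DataFrame) -> None:
    """Append one pipeline step to the audit trail."""
    fingerprint = hashlib.sha256(
        pd.util.hash_pandas_object(df, index=True).values.tobytes()
    ).hexdigest()
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "rows": len(df),
        "data_sha256": fingerprint,
    })

df = pd.read_csv("applications.csv")            # hypothetical data set
record_step("imported applications.csv", df)
df = df.dropna()
record_step("dropped rows with missing values", df)
print(json.dumps(audit_log, indent=2))
```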

To build greater visibility into data preparation and model deployment, we should look towards ML platforms that incorporate testing features, where users can test the model against a new data set and receive scores of its performance. This helps users identify bias and adjust the model accordingly.
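
One concrete form such a test can take is scoring the model on fresh data and breaking the results down by a sensitive group, since a large gap between groups is a direct signal of bias. A minimal sketch, reusing the hypothetical model and column names from the earlier snippets:

```python
# A minimal sketch of testing a model on a fresh data set and breaking
# accuracy down by a sensitive group. File and column names are illustrative.
import pandas as pd
from sklearn.metrics import accuracy_score

new_df = pd.read_csv("new_applications.csv")
X_new = new_df.drop(columns=["approved", "gender"])
y_new = new_df["approved"]

preds = model.predict(X_new)
print("overall accuracy:", accuracy_score(y_new, preds))

# A large accuracy gap between groups flags bias to address before the
# model is (re)deployed.
for group in new_df["gender"].unique():
    mask = (new_df["gender"] == group).to_numpy()
    print(group, accuracy_score(y_new[mask], preds[mask]))
```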

Putting accountability first

A common fault with ML tools is the lack of any option to review or validate the model that has been selected for a specific analysis. There will inevitably be numerous model types available, so users having no autonomy over model selection is a major challenge. Deep neural network models, for example, are inherently less transparent than probabilistic methods: they enable rapid data preparation and deployment, but offer users little to no opportunity for the visual inspection needed to identify data and model issues.

An effective ML platform must be able to help identify and advise on resolving possible bias in a model during the preparation stage, and provide support through to creation – where it will visualise what the chosen model is doing and provide accuracy metrics – and then on to deployment, where it will evaluate model certainty and provide alerts when a model requires retraining.
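
The certainty evaluation and retraining alert described above can be illustrated in a few lines using class probabilities: when a model's average confidence in its own predictions sags, the live data has likely drifted away from the training data. A minimal sketch, reusing the hypothetical model and data from the earlier snippets; both thresholds are illustrative assumptions, not recommendations:

```python
# A minimal sketch of deployment-time certainty monitoring: flag individual
# low-confidence predictions and raise a retraining alert when average
# certainty degrades.
certainty = model.predict_proba(X_new).max(axis=1)   # confidence of winning class

low_confidence = certainty < 0.6                     # assumed per-prediction threshold
print(f"{low_confidence.mean():.1%} of predictions are low-confidence")

if certainty.mean() < 0.7:                           # assumed fleet-level threshold
    print("ALERT: model certainty degraded - consider retraining")
```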

Putting users back in the driving seat

When working with transparent ML solutions, the end goal is to put power directly into the hands of the users, enabling them to actively explore, visualise and manipulate data at each step – rather than simply delegating to an ML tool and risking the introduction of bias.

During model deployment, machine learning platforms should also extract extra features from data that are otherwise difficult to identify and help the user interpret what information the data conveys beyond the most obvious insights.
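
As a small, hypothetical illustration of such derived features: neither raw column below may be strongly predictive on its own, but their ratio often is. The column names are assumptions about the data set, not a real schema:

```python
# Hypothetical derived features that a platform might surface automatically.
import pandas as pd

df = pd.read_csv("applications.csv")
df["debt_to_income"] = df["total_debt"] / df["annual_income"]
df["application_month"] = pd.to_datetime(df["application_date"]).dt.month
```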

Removing the complexity of the data science procedure will help users discover and address bias faster – and better understand the expected accuracy and outcomes of deploying a particular model.

Ethical algorithm ambassadors will help reduce unethical practices

Putting transparency and accountability at the heart of platforms is a key step towards ML in the alternative investment industry becoming more ethical – but it is important that efforts are taken further. Investment management firms such as hedge funds, the service providers that develop ML solutions for them, and other industry experts all need to act as ambassadors and educators, informing users of the dangers of bias in machine learning and helping them identify and avoid unethical practices. Raising awareness in this manner will be vital to establishing trust in AI and ML for sensitive deployments such as financial decision-making, medical diagnoses and criminal sentencing.

Choosing transparency today will pay dividends for the financial sector

There is great potential ahead for ML platforms, and there are numerous ways that hedge funds and private equity funds can implement the technology, whether in an advanced application such as algorithmic trading or a more routine use such as process automation. But, with the majority of G7 countries expected to establish dedicated associations to oversee AI and ML design by 2023, ethical practices will need to be at the heart of ML operations for firms to remain compliant.

For the alternative investment industry to prosper with ML solutions, firms will need to prioritise platforms that operate with ethical and unbiased algorithms from the outset. Organisations that opt for transparency and explainability will stand up to scrutiny under ever-tighter regulations, and their ethical practices will help foster trust in the technology across the industry.

**********

Davide Zilli is Client Services Director at Mind Foundry

***

The views expressed in this article are those of the author and do not necessarily reflect the views of AlphaWeek or its publisher, The Sortino Group

