Why fairness is vital to ensuring data algorithms remain beneficial to those they were created to help

It is important that we ensure algorithms remain fair and beneficial for those they were intended to help

Preparations for GDPR are now complete and, as processes and procedures for good data practice should now be embedded in the working environments of all businesses, policy makers are turning their attention to wider questions such as data ethics.

With the UK government recently establishing the Centre for Data Ethics and Innovation and the European Commission consulting on the use of data, it’s clear that this is an important topic for politicians across the UK and Europe.

Just last week, a parliamentary report on bias in algorithms, specifically those used for decision making in the public sector, was released, sparking a conversation around algorithms that are meant to deliver improved or more efficient services.

Although the report reiterates the positive contribution that algorithms and the wide range of data they use can make, it expresses widespread concern that algorithms can reproduce prevailing biases or reflect the biases of the people or institutions who created them. The report suggests that the Centre for Data Ethics and Innovation should examine these biases, and that organisations should ensure their algorithm development teams include a wide cross-section of society, with the clear goal of ensuring algorithms are fair.

So how do data-driven companies like ours ensure that the algorithms we develop are fair and beneficial to those affected by them?

The Floow uses telematics data to develop algorithms designed to deliver fairer car insurance by identifying risk from actual driving, so we know that the way we handle and use that data is critical to ensuring fairness, consistency and unbiased results for our clients and their customers.

Every day, we collect millions of miles of journey data from vehicles and mobile sensors. This data is anonymised and fed into our scoring algorithms, where it is analysed and drivers are scored on behaviours such as speed control and smoothness of driving, all of which can be interpreted from the data collected throughout a driver’s journey to inform fair views on risk.
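As an illustration only, and emphatically not The Floow’s actual scoring method, the sketch below shows how behaviour features might be derived from an anonymised speed trace and combined into a score; the thresholds, weights and 0–100 scale are hypothetical assumptions.

    # Illustrative sketch only: a hypothetical behaviour score from a journey's speed trace.
    # Thresholds, weights and the 0-100 scale are assumptions, not The Floow's algorithm.
    from statistics import mean

    WEIGHTS = {"harsh_events": 5.0, "smoothness": 10.0}  # assumed feature weights

    def journey_features(speeds_mps, interval_s=1.0):
        """Derive simple behaviour features from speed samples (metres per second)."""
        accels = [(b - a) / interval_s for a, b in zip(speeds_mps, speeds_mps[1:])]
        harsh_events = sum(1 for a in accels if abs(a) > 3.0)  # hypothetical harshness threshold
        smoothness = mean(abs(a) for a in accels) if accels else 0.0  # lower is smoother
        return {"harsh_events": harsh_events, "smoothness": smoothness}

    def behaviour_score(features):
        """Combine features into a 0-100 score; higher suggests lower observed risk."""
        penalty = sum(WEIGHTS[name] * value for name, value in features.items())
        return max(0.0, 100.0 - penalty)

    # Example: one anonymised journey's speed trace, sampled once per second.
    speeds = [0.0, 5.0, 9.0, 12.0, 12.5, 8.0, 3.0, 0.0]
    print(round(behaviour_score(journey_features(speeds)), 1))  # prints 44.3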

Telematics data adds an extra layer to the traditional insurance algorithm because it focuses specifically on a driver’s behaviour behind the wheel, which is exactly the risk being insured. The insights telematics delivers therefore allow insurers to predict risk profiles and price policies more accurately, and give drivers visibility of their own behaviour along with the opportunity to make positive improvements. Our scores help drivers achieve fairer priced insurance and make the roads a safer and smarter place for everyone.
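Purely as a hypothetical illustration of how such a score might translate into fairer pricing (real premium models are set by insurers and are far richer than this), a simple banding could look like the following; the bands and factors are assumptions.

    # Hypothetical mapping from a behaviour score to a premium adjustment factor.
    # The bands and factors are illustrative assumptions, not real insurer pricing.
    def premium_factor(score):
        if score >= 80:
            return 0.85  # assumed discount for consistently low-risk driving
        if score >= 50:
            return 1.00  # assumed baseline premium
        return 1.20      # assumed loading for higher observed risk

    print(premium_factor(44.3))  # the example score above attracts the highest loading: 1.2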

The proliferation of software and sensors, and the amount of data collected daily, led to legislation on the appropriate handling of data. It has also prompted a conversation about algorithms and how we ensure they remain unbiased. But in a world full of data, building algorithms helps us understand the information we collect and create products and services that benefit consumers and businesses.

At The Floow, we believe that by respecting the data we collect and harnessing the power of technology, data science and social science, we can deliver telematics solutions that bring significant, fairer benefits to drivers, insurers and mobility as a whole.

To find out more about The Floow’s approach to data management, please contact us at info@thefloow.com

Get in touch