Mental health as algorithm bias

The Guardian has a story on insurance companies refusing life insurance cover for those with mental illness. A concern is raised that:

The suspicion is that insurers are cherry-picking customers to minimise risk and boost the bottom line.

(Marsh, 2018) So is the algorithm’s health being shown up by human health?

It may or may not be the case, but I think it shows a new aspect of algorithm bias. Writers such as Zeynep Tufekci explore this (in her TED talk), alongside Tim Wu’s work on The Attention Merchants, Sara Wachter-Boettcher’s Technically Wrong, which examines racial bias in algorithms, and ProPublica’s series on machine bias.

The declined application suggests that a series of operations has determined success or failure: a new kind of credit score with a hidden set of values, judgements and operations, as described by Frank Pasquale in The Black Box Society. As a knee-jerk reaction, I saw this as another encroachment on the human by the mathetic world, the mathematical and scientific view that appears to have arisen as part of the Enlightenment. The algorithm creates a model to present an ordered and scientific view of the human to the human.

I leave the irony to the reader. But does it reveal something deeper?

The insurers do have a response, one that echoes my brief time working for an intermediary:

applications for life insurance go through careful assessment and are evidence based.

(Marsh, 2018)

Various questions arise from this statement: What evidence? How is it gathered and collated? How often is it updated? Who gathered it, and from where? It also suggests that a model of the human is being created in order to set the insurance premium. The form the applicant fills in creates this model, which is then fed into another model, the insurer’s own.
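To make that two-model picture concrete, here is a minimal, entirely hypothetical sketch in Python of how form answers might be reduced to a feature vector and then scored. Every field name, weight and threshold below is invented for illustration; the real pipeline is precisely the black box discussed next.

```python
# Hypothetical sketch: form answers become a feature vector (a "model of the human"),
# which is then scored by the insurer's own model. All names and weights are invented.

from dataclasses import dataclass

@dataclass
class Application:
    age: int
    smoker: bool
    declared_mental_illness: bool  # a single tick-box standing in for a whole history

def to_features(app: Application) -> dict:
    """The form reduces a person to a handful of numbers."""
    return {
        "age": float(app.age),
        "smoker": 1.0 if app.smoker else 0.0,
        "mental_illness": 1.0 if app.declared_mental_illness else 0.0,
    }

# The insurer's model: hidden values and judgements expressed as weights.
WEIGHTS = {"age": 0.02, "smoker": 1.5, "mental_illness": 2.0}
DECLINE_THRESHOLD = 2.5

def assess(app: Application) -> str:
    score = sum(WEIGHTS[name] * value for name, value in to_features(app).items())
    return "decline" if score > DECLINE_THRESHOLD else "offer cover"

print(assess(Application(age=30, smoker=False, declared_mental_illness=True)))  # -> decline
```

Even in this toy form the hidden judgements are visible: the weight attached to the tick-box, and the threshold it pushes the score over, are business choices rather than medical facts.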

This other model is essentially a black box. We have no way of identifying how the algorithm(s) and inference(s) operate, or how the data is merged, filtered and reduced. But there is a human element here as well. Business choices have been made around this issue, perhaps prompted by the increased attention it has received in the media and the workplace, and these choices are described and rationalised as use cases, from which a quantitative model is created. This is then coded by a developer or team, tests are run and a quality assessment is made before it goes into production.
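Again purely as an illustrative sketch, reusing the hypothetical assess function above: by the time the rule reaches production it has been specified, coded, tested and signed off, which is what makes the decline a deliberate, reviewed decision rather than an accident of the mathematics.

```python
# Hypothetical acceptance tests for the rule above, of the kind a team might run
# before release. They encode the business choice as an expected outcome.

def test_declared_mental_illness_is_declined():
    app = Application(age=30, smoker=False, declared_mental_illness=True)
    assert assess(app) == "decline"

def test_otherwise_identical_applicant_is_offered_cover():
    app = Application(age=30, smoker=False, declared_mental_illness=False)
    assert assess(app) == "offer cover"
```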

Humans are nuanced beings, able to recover from illness, but can this ever be understood or known by machine learning techniques? It raises deep questions about how humans are being seen: not only through racial or sexual bias, but in the cases where a human presents breaks in the perfect form.

This does raise technical and critical questions that I do not yet have answers to. However, I am left wondering how mental illness is being treated by these systems, and I am curious how this might be tested. It does show the industry as being out of step with society and its discourse on the issue.

Marsh, S. (2018). People with mental illnesses refused access to insurance cover. The Guardian. https://www.theguardian.com/society/2018/jan/19/people-with-mental-illnesses-refused-access-to-insurance-cover (last accessed 20 January 2018).
