Bias in AI applications

Hi everyone!

In the last few days, I’ve been thinking a lot about bias in AI. I’m sure you’ve all heard stories similar to the one about Amazon’s recruiting system, which discriminated against women because the training dataset contained mostly samples of men.

Without having a specific project in mind, I was wondering if some of you have experience with bias in AI and can share a few best practices on how to avoid it?

Cheers,
John

Hello John,

Bias is quite a complex topic. All of us have inherent internal biases, and at the end of the day, it’s us who have to make decisions about data, and in doing so, we inject our biases into it. Taking a more pragmatic approach: it mainly comes down to who is responsible for checking for bias in the data, and how they would do it. I have a friend doing his PhD in AI ethics, and I’ll quote his opinion, which I basically agree with: it’s not the job of the ML engineer, but rather of the person or team who gathers the data.

Technically speaking, bias is not necessarily a bad thing. Depending on the use case, you sometimes want a biased dataset. If you’re analyzing female soccer players, you’ll want only women in the dataset and no men.

So it’s more a question for social scientists and careful data selection than for the ML engineer who’s trying to build a performant model.

Good evening Hasnain!

Thanks for your reply, it was quite interesting to read.

But I have to say that I disagree with you to a certain extent. ML engineers can’t just say, “Nah, not my problem.” Wouldn’t it be a solution if the ML engineers checked the dataset for bias and then went back to the person selecting the data to ask whether the bias is appropriate for the use case?

Let me know what you think.

Cheers,
John

Hello John,

I like the direction you’re going in. Establishing collaboration between the ML engineer and the people overseeing the model in production definitely makes sense.

But I’d propose another system: whenever the model performs poorly on a sample (either in production or in testing), the sample should be flagged so that efforts can be made to reduce bias towards certain kinds of samples. The model is then retrained on the old data plus the newly flagged and collected samples. This way, we can mitigate the risk of bias without having to make strong (and unclear) assumptions at the beginning.

Tesla, for example, trains its computer vision models this way, and this is how they became so good (if you look at autonomous driving done with vision only).
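
To make that loop a bit more concrete, here is a minimal sketch of what the flagging step could look like. The confidence threshold, file layout, and function names are placeholders for illustration, not any specific framework’s API.

```python
import json
from pathlib import Path

FLAGGED_DIR = Path("flagged_samples")   # hypothetical folder for samples to review
CONFIDENCE_THRESHOLD = 0.6              # assumed cut-off; tune per use case

def flag_if_uncertain(sample_id: str, features: dict, confidence: float) -> bool:
    """Store samples the model is unsure about so they can be reviewed for
    systematic bias and added to the next training round."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return False
    FLAGGED_DIR.mkdir(exist_ok=True)
    record = {"id": sample_id, "confidence": confidence, "features": features}
    (FLAGGED_DIR / f"{sample_id}.json").write_text(json.dumps(record))
    return True

def build_retraining_set(old_data: list) -> list:
    """Combine the original training data with the reviewed, flagged samples."""
    flagged = [json.loads(p.read_text()) for p in FLAGGED_DIR.glob("*.json")]
    return old_data + flagged
```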

But to sum it up: social scientists and AI ethics personnel should be included in the process of designing data collection guidelines. Model builders and engineers should provide feedback on any bias they notice in production, and this should be dealt with promptly.

Though at that point it might already be too late, since the biased decisions of an algorithm in the real world can cause actual harm before this feedback loop is completed. But I think this can be mitigated by good testing.

Hi Hasnain!

I like the approach and will definitely try it out.

Thanks for the help,
John

Really good question and interesting to see the responses!

I head a data collection and annotation company, and we’ve worked on a lot of projects over the last couple of years. So I can give some very practical insight into this part of AI projects, even though the conversation is much wider (e.g. project design, model implementation, etc.).

Bias is especially dangerous when humans are involved (e.g. applications that use biometric data or detect age, gender, “race”, etc.), because then it can actually affect people’s lives. In computer vision applications meant for industrial use, for example (e.g. detecting nails and bolts), it’s still undesirable, but the consequences are more limited.

In terms of data collection, some companies don’t have access to real-life data before the model is deployed, so they use some kind of proxy (e.g. I want to detect medical masks in public on security cameras, so I collect a dataset of people wearing masks from online sources). Bias comes in when the dataset you are using is not representative of the actual data the model will be running on (e.g. most of the photos in your dataset are selfies while it’s supposed to run on CCTV footage, or most of the people who appear are Asian while it will be applied in Africa, or you didn’t realize that men make up 70% of the dataset compared to other genders).
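
To make the “not representative” failure mode concrete, here is a small sketch of the kind of sanity check one could run over dataset metadata before training. The attribute name and the expected proportions are purely illustrative; in practice the expected distribution should come from the deployment context.

```python
from collections import Counter

def distribution_gap(samples, attribute, expected):
    """Per attribute value, return the difference between the share observed
    in the dataset and the share expected at deployment time."""
    counts = Counter(s[attribute] for s in samples)
    total = sum(counts.values())
    observed = {k: v / total for k, v in counts.items()}
    return {k: round(observed.get(k, 0.0) - expected.get(k, 0.0), 2)
            for k in set(observed) | set(expected)}

# Hypothetical example data and expected shares, not from any real project:
# the dataset skews 70% male while deployment expects rough parity.
dataset = [{"gender": "male"}] * 70 + [{"gender": "female"}] * 30
print(distribution_gap(dataset, "gender", {"male": 0.5, "female": 0.5}))
# {'male': 0.2, 'female': -0.2} -> men are over-represented by 20 percentage points
```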

And then, in terms of annotation, a very important step is the class definition: say you want to detect gender (which is quite a problematic endeavor in itself) and you set “male” and “female” as the two classes for annotation. In this case, maybe include a “non-binary”, “other”, or “difficult to say” class as well? This is obvious when we talk about constructs like gender and race, but it’s valid for all types of projects (e.g. a drink detection app: are you detecting Cola, Sprite, and Fanta, are you grouping them as “soft drinks”, and what happens with Schweppes, which doesn’t appear anywhere in your class list?)
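
As a tiny illustration of the taxonomy point, using the drink example, here is the kind of check one could run over raw labels before annotation starts. The class list and labels are made up for illustration.

```python
# Hypothetical class list and labels, purely for illustration.
TAXONOMY = {"cola", "sprite", "fanta"}   # the classes annotators are allowed to use

def unmapped_labels(raw_labels):
    """Labels found in the data that have no place in the class list; these
    need either a new class or an explicit 'other' bucket."""
    return {label.lower() for label in raw_labels} - TAXONOMY

print(unmapped_labels(["Cola", "Fanta", "Schweppes"]))
# {'schweppes'} -> Schweppes does not fit the taxonomy and would be forced into a wrong class
```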

We’re actually publishing a whitepaper called “How to avoid bias in AI through better dataset collection and annotation” soon, so stay tuned!

Hi Iva,

Thanks for your reply! Really looking forward to the white paper.

One question in advance already: do you have any best practices you can share on how to detect bias in datasets?

Best,
John

I’m not sure what your use case is and whether you are referring to asset curation (image selection) or to the annotations.

However, one tool that is quite new and interesting for asset curation is Lightly.ai, which can review all of your data and pick the most diverse samples from it.

In terms of annotations, beyond checking the class distribution and whether all classes are represented fairly, think about classes that are completely missing from the taxonomy and that the model might come across.
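
To illustrate the class-distribution part, here is a very small sketch of what such a check could look like; the labels and the 5% threshold are arbitrary examples, not a general rule.

```python
# Hypothetical annotation labels and threshold, purely for illustration.
from collections import Counter

def underrepresented_classes(labels, min_share=0.05):
    """Return {class: share} for classes below min_share of all annotations."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: n / total for cls, n in counts.items() if n / total < min_share}

labels = ["car"] * 480 + ["truck"] * 500 + ["bicycle"] * 20
print(underrepresented_classes(labels))
# {'bicycle': 0.02} -> bicycles make up only 2% of the annotations
```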

I hope this helps!

Yes, I’ll definitely check out Lightly.ai. Please also don’t forget to share the white paper; I really want to know what other expertise you can share.