After my first mappings, I found out that the recommendation system of YouTube, the digital content sharing platform, is driven by an extremely sophisticated algorithm. It is powered by Google's artificial intelligence (AI) and machine learning, which uses deep neural networks to infer what kind of content might “interest” you.
It is therefore no surprise that YouTube represents one of the largest-scale and most sophisticated industrial recommendation systems in existence. These algorithms, however, learn from vast amounts of data, which often leads to large-scale manipulation of both the content and the viewers, reinforcing already existing biases, for example those concerning gender.
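The feedback loop behind this reinforcement can be sketched in a few lines of Python. This is a deliberately simplified toy model, not YouTube's actual system: the video names, starting click counts, and the 90% click-through rate are all invented for illustration. The point is only that a tiny initial imbalance grows when a recommender keeps promoting whatever was clicked most before.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Hypothetical starting data: two videos with a nearly equal history,
# but a slight initial bias toward video_a.
clicks = {"video_a": 11, "video_b": 10}

def recommend(counts):
    # A naive recommender: always suggest the most-clicked video so far.
    return max(counts, key=counts.get)

# Simulate 100 user visits. Users mostly click whatever is recommended,
# so the initially favored video accumulates ever more clicks.
for _ in range(100):
    chosen = recommend(clicks)
    if random.random() < 0.9:  # assumed 90% click-through on recommendations
        clicks[chosen] += 1

share_a = clicks["video_a"] / sum(clicks.values())
print(f"video_a share of clicks: {share_a:.2f}")
```

After the loop, video_a holds the overwhelming majority of clicks even though it started only one click ahead: the recommender has amplified its own past output, which is exactly how existing biases in the data get reinforced rather than corrected.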
After finding out what really interests me about the YouTube environment and what I think is important to address nowadays, I rephrased my research question as:
“HOW CAN THE YOUTUBE ALGORITHM REINFORCE IMPLICIT BIAS?”
Conclusions based on my research
However, this is not the end of HelloBias! but just the beginning of a new fascination of mine. The project will continue as an experiment and will remain an open space for discussion about AI, how we feel about this relatively new technology, and how we can solve its issues. I am also currently working with a student from the Erasmus University who is writing a thesis about AI and the effects of its bias on different businesses and on their users/clients, so I am always open to collaboration.
In my theory essay, I will explain more thoroughly what AI is, how it is possible for a machine to be biased, and how we can solve these issues, which in most cases are a matter of language and translation. I would like to end this presentation by recommending a documentary produced by the DeepMind company that I saw at the beginning of my research into AI, “AlphaGo – a Documentary” (available on YouTube), and leave you with a question.
We are programming these learning machines and, at the same time, asking computer scientists to solve issues that have been embedded in our minds for centuries, such as those concerning gender and race. In the end, which is the real threat to our society: AI or humans themselves? How can we solve the problem in machines we are programming ourselves before coming to terms with our own biases?