
Developing an intersectional framework to analyze biases in artificial intelligence and deep neural networks

Kimberley Paradis

Artificial intelligence (AI) is rapidly gaining prominence as a problem-solving approach across a wide variety of disciplines, notably due to recent breakthroughs in deep learning (DL). AI is the simulation of human intelligence in machines programmed to think and act like humans. DL is a subfield of AI that uses algorithms inspired by the biological structure and functioning of the brain. DL relies on deep neural networks (DNNs): a hierarchical, layered architecture of artificial neurons that, like neurons in the brain, transmit signals in response to received input, forming a complex network that learns via a feedback process (Moolayil, 2020). DNNs have achieved extraordinary performance in computer vision, natural language processing, and complex strategy video games, but they do so by training on historical data that reflect existing biases and discrimination.

Training involves learning complex correlations between input and output features, often without any consideration of whether those correlations are causal. For example, a DNN may learn a correlation between height and educational level and proceed with the ‘knowledge’ that taller people have higher educational attainment, ignoring the fact that children and teenagers are both shorter and less educated than adults by virtue of their age and development. The false equivalence between correlation and causation is a significant challenge facing DL developers.

The rise of deepfakes is one outcome of the rapid growth of DNNs with marginalizing effects. Deepfakes superimpose one person’s face onto another’s to produce a video that appears lifelike. Ninety-six per cent of deepfakes are pornographic in nature, and 99 per cent involve swapping female celebrities’ faces with those of porn stars without consent. This exploitative and sexist use of DL reinforces historical gender biases (Wagner and Blewer, 2019), among other negative outcomes.

The reinforcement of hegemonic power in the development of DL highlights the amplification of biases within AI. This underlines the importance of “ethical tech” (Bannister et al., 2020) and of rigorous checks to identify and contextualize or discard discriminatory training data at all stages of training (Obermeyer et al., 2019), as sketched in the examples below.
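
To make the layered-network description above concrete, here is a minimal sketch of a DNN: a two-layer feedforward network trained by gradient feedback (backpropagation) on a toy task. The layer sizes, learning rate, and synthetic data are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task (an assumption for illustration): predict whether the
# sum of two inputs is positive.
X = rng.normal(size=(100, 2))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

# Two layers of "neurons": 2 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 4)) * 0.5, np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)) * 0.5, np.zeros(1)

lr = 0.5
for _ in range(2000):
    # Forward pass: each layer transforms the previous layer's signal.
    hidden = sigmoid(X @ W1 + b1)
    pred = sigmoid(hidden @ W2 + b2)

    # Backward pass -- the "feedback process": gradients of the
    # cross-entropy loss flow back and adjust every connection.
    d_pred = (pred - y) / len(X)
    d_hidden = (d_pred @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_pred
    b2 -= lr * d_pred.sum(axis=0)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0)

print("training accuracy:", ((pred > 0.5) == y).mean())
```

Note that the network only ever sees input-output pairs; nothing in the feedback loop asks whether an input actually causes the output, which is exactly the limitation the next example exposes.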
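
The height/education example above can be reproduced with synthetic data: both variables are driven by age, so a strong correlation appears even though neither causes the other. Every number below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Age drives both variables: this is the confounder the model never sees.
age = rng.uniform(5, 40, size=5000)

# Height rises with age until roughly adulthood, then plateaus.
height = 100 + 4 * np.minimum(age, 18) + rng.normal(0, 5, size=age.shape)

# Years of education also rise with age, then plateau.
education = np.clip(age - 5, 0, 17) + rng.normal(0, 1, size=age.shape)

# A strong correlation appears even though height causes nothing.
print("corr(height, education):", np.corrcoef(height, education)[0, 1])

# Holding the confounder roughly fixed (adults only) makes the
# spurious relationship largely disappear.
adults = age >= 18
print("corr among adults:", np.corrcoef(height[adults], education[adults])[0, 1])
```

The first correlation is strong; the second, computed with the confounder held roughly fixed, collapses toward zero. That conditioning step is the causal check a purely correlational learner never performs on its own.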
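
As one example of the kind of rigorous check mentioned above, a simple audit might measure how positive outcomes in the training labels are distributed across a protected attribute (a demographic parity gap). The function, toy data, and warning threshold here are assumptions for illustration, not a method from the cited papers.

```python
import numpy as np

def demographic_parity_gap(labels: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-label rates across groups."""
    rates = [labels[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Toy data: one outcome label and one protected attribute per record.
labels = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "b", "b", "b", "b", "b"])

gap = demographic_parity_gap(labels, groups)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.2:  # illustrative threshold, not a standard
    print("warning: training labels are skewed across groups")
```

Run before training, a check like this flags label skew early enough to contextualize or discard the offending data rather than bake the skew into the model.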

May 22, 2023