
Beyond the Screen: Addressing Algorithmic Bias in Advertising 


June 24, 2024 By Arin Phumma, Director of Digital Solutions, dentsu X Thailand

Algorithmic bias in advertising is a global concern, and Thailand is no exception. This article explores the potential pitfalls of AI-powered ad targeting, shedding light on how it could perpetuate social inequalities. Moreover, it presents innovative solutions, specifically focusing on the challenges and opportunities presented by the phasing out of third-party cookies, all while championing a key dentsu X ethos of "Responsibility Beyond Reach".  
Algorithmic bias stems from two main sources: the data used to train AI models, and the design choices guiding those models. When biased data influences algorithms, it can inadvertently favor or disadvantage specific groups. 

Algorithms also often rely on proxies, such as zip code or browsing history, to target individuals. Because these proxies can correlate with protected attributes, they can lead to discriminatory outcomes, even if unintended.  
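To make the proxy problem concrete, a simple delivery audit can reveal when a proxy such as zip code produces uneven ad exposure across neighborhoods. The sketch below uses entirely made-up data and hypothetical zip codes; it is an illustration of the idea, not a production audit tool:

```python
from collections import Counter

# Hypothetical ad-delivery log: (zip_code, was_shown_prime_loan_ad).
# The zip codes stand in for neighborhoods with different demographics.
impressions = [
    ("10110", True), ("10110", True), ("10110", False),
    ("10250", False), ("10250", False), ("10250", True),
    ("10110", True), ("10250", False),
]

shown = Counter()
total = Counter()
for zip_code, was_shown in impressions:
    total[zip_code] += 1
    shown[zip_code] += was_shown  # True counts as 1

# Delivery rate per zip code: a large gap between areas suggests the
# proxy is steering opportunity toward some neighborhoods and away
# from others, even though no one chose to target by demographics.
rates = {z: shown[z] / total[z] for z in total}
print(rates)  # → {'10110': 0.75, '10250': 0.25}
```

In this toy log, one neighborhood sees the prime-loan ad three times as often as the other, which is exactly the kind of disparity an advertiser should investigate before assuming the targeting is neutral.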

Biased algorithms might restrict individuals' exposure to job listings, educational opportunities, or housing options based on factors like race, gender, or socioeconomic status. This can widen existing inequalities and obstruct social progress. Biased advertising can also reinforce negative stereotypes by disproportionately targeting certain groups with specific types of ads; this not only perpetuates harmful beliefs but can also limit individual potential. Finally, algorithmic features can create "echo chambers" in which individuals are exposed only to information that aligns with their existing views, deepening biases and impeding societal advancement. 
Imagine being denied a loan due to an algorithmic oversight. This scenario is becoming all too common as AI enters the realm of loan advertising, potentially discriminating against deserving borrowers.  

Redlining's Digital Cousin: 

Imagine a young couple with a stable income constantly seeing loan ads for high-interest payday loans because their zip code is flagged as "risky" by the algorithm. This digital redlining mirrors historical discriminatory lending practices, unfairly restricting access to prime loans for certain neighborhoods. 

The Stereotype Shuffle: 

A young entrepreneur from a minority community is inundated with ads for small business grants, while their white counterpart receives offers for business expansion loans. These algorithmic assumptions restrict access to the most suitable financial products based on stereotypes, not creditworthiness. 

Data Echoes of the Past: 

Advertising algorithms trained on historical loan data might perpetuate biases against certain demographics or neighborhoods, creating a cycle where past discrimination shapes future lending opportunities. 

The Filtering Fallacy: 

Highly targeted loan ads could overlook potential borrowers altogether. A blue-collar worker with a strong credit history might not see any loan ads because their online activity doesn't align with traditional borrower profiles. This creates blind spots in the digital advertising landscape.  

Fighting Bias  
Let's say a bank uses AI to target loan ads. Here's how they can operate responsibly:  

Clean Up the Data:

Merely removing biased data points isn't enough. Banks should work with data providers who collect information fairly, ensuring their AI systems start on the right footing.  
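One practical step in cleaning up training data is to flag features that act as proxies before they ever reach a targeting model. The sketch below is a hypothetical pre-training check on made-up rows: it computes the correlation between each candidate feature and a protected attribute, and flags anything above an illustrative threshold. Feature names, values, and the cut-off are all assumptions for demonstration:

```python
def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up training columns, plus a protected attribute (encoded 0/1)
# that the targeting model must not be able to learn indirectly.
features = {
    "income":        [55, 30, 60, 35, 33, 58],
    "zip_risk_flag": [0, 0, 1, 1, 0, 1],  # tracks the protected group exactly
}
protected = [0, 0, 1, 1, 0, 1]

THRESHOLD = 0.8  # illustrative cut-off; tune per use case
proxies = [name for name, col in features.items()
           if abs(correlation(col, protected)) > THRESHOLD]
print(proxies)  # → ['zip_risk_flag']
```

Here `zip_risk_flag` correlates perfectly with the protected attribute and is flagged, while `income` passes. Real audits use richer measures than a single correlation, but the principle is the same: inspect the inputs before trusting the outputs.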

Transparency Matters: 

Banks should advocate for clear explanations of how ad platforms target people. This allows for better decision-making and collaboration to reduce bias across the industry.  

The Human Touch: 

Diversity experts who understand different demographics should review ad campaigns. This ensures a critical eye is cast on the process, considering not just technical aspects, but also the potential social impact.   

The Cookie-less Challenge and Opportunity  

With the phasing out of third-party cookies, traditional targeting methods are becoming less effective. This presents an opportunity to embrace contextual targeting, embodying our "Responsibility Beyond Reach" ethos: by forgoing individual user data, contextual targeting reduces reliance on biased data and minimizes the risk of perpetuating biases present in historical data.  
In addition, by focusing on relevant content, ads can reach a wider audience regardless of socioeconomic background or location. This can help promote financial inclusion by ensuring advertisements reach individuals in all areas, not just those historically favored by geographic data.  
Contextual targeting may not be as precise as individual user targeting, however, potentially trading a broader reach for a lower conversion rate. Analyzing website and ad content for placement also requires more manual effort and expertise than automated user-data targeting.  
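At its simplest, contextual targeting matches an ad to the content of the page, never to a user profile. The sketch below shows the core idea with hypothetical ads and keyword sets (all names are invented for illustration); real systems use far richer content analysis than keyword overlap:

```python
# Hypothetical ad inventory: each ad carries a set of context keywords.
ads = {
    "first_home_loan": {"mortgage", "home", "property", "buying"},
    "small_biz_loan":  {"business", "startup", "entrepreneur"},
}

def match_ad(page_text):
    """Pick the ad whose keywords overlap the page content most.

    No user data is consulted: the only signal is the page itself.
    Returns None when nothing is relevant enough to show.
    """
    words = set(page_text.lower().split())
    best, best_score = None, 0
    for ad_name, keywords in ads.items():
        score = len(keywords & words)
        if score > best_score:
            best, best_score = ad_name, score
    return best

page = "A guide to buying your first home and comparing mortgage rates"
print(match_ad(page))  # → first_home_loan
```

Note what the function cannot do: it has no zip code, no browsing history, and no demographic proxy to discriminate on, which is precisely the appeal of the approach, at the cost of the precision discussed above.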

The Dawn of Responsible Digital Marketing  

The deprecation of cookies presents a crossroads for digital marketing. Despite its challenges, it offers a golden opportunity for brands to reimagine advertising strategies. By embracing contextual targeting and prioritizing "Responsibility Beyond Reach," we can usher in a new era of ethical and inclusive marketing. This approach mitigates algorithmic biases by ensuring messages reach a broader, more relevant audience. Ultimately, this responsible use of AI and digital marketing paves the way for a fairer and more equitable advertising landscape, benefiting the client, consumer and society.  

For more wisdom and insights from other dentsu X leaders worldwide, download our report, ahead 2024: branding beyond impact.