From the moment electric power illuminated our gas-lit world to the mass adoption of the internet, groundbreaking innovations have sparked apprehension. The rise of Artificial Intelligence (AI) is no exception. It’s natural to fear the unknown, especially when change unfolds at the speed and scale of AI.
The technology has enhanced human capabilities and reshaped how we live. Automating repetitive tasks, simulating experiments, and advancing research and development are just a few of the ways AI is disrupting industries—eliminating some jobs and creating others. As AI gains influence in critical areas like hiring, lending, and policing, its impact becomes increasingly consequential. Those who find this unsettling have valid cause for concern. “We must address biases in [AI] systems that impact people’s lives,” said Dr. Rachel Gillum.
As Vice President of Ethical & Humane Use of Technology at Salesforce, she implements policies to protect the company’s technology from exploitation and misuse. Her experience as a government intelligence analyst, a key figure on the U.S. Chamber of Commerce’s AI Commission, and a consultant working alongside former U.S. Secretary of State Condoleezza Rice makes the Stanford University scholar an early expert in the burgeoning field.
Gillum is well-versed in both the boundless potential of AI and the serious harm it can inflict on individuals and communities when used irresponsibly. “There are immediate harms we know about,” she told ESSENCE. “Things like [prison] sentence lengths and job prospects.” The transparent assessment is, frankly, refreshing. Still, Gillum remains cautiously optimistic that proactive interventions can mitigate these risks. “We can design AI with intentionality,” Gillum affirmed. Roles like the one she occupies were nonexistent in organizations just a few years ago.
Today, more and more companies are expanding their workforces to focus on the ethical oversight of AI. At Google, Tiffany Martin Deng leads that charge. The Director of Technical Program Management and Chief of Staff for Responsible AI ensures the company’s products align with Google’s AI Principles, embedding inclusivity and equity throughout.
Given Google’s unparalleled reach and influence, the standards Deng and her team implement impact billions and set the global benchmark for ethical AI. Managing a role of this magnitude requires precision, and Deng approaches it with a strong operational framework. “We have rigorous, end-to-end checks and balances for every product and team across the company,” she explained. Her robust career history complements her process-oriented mindset. Deng’s background as a U.S. Army intelligence officer, Pentagon consultant, and algorithmic fairness specialist with Meta leaves her uniquely prepared for the unprecedented tasks at hand.
Tiffany Martin Deng and Rachel Gillum are at the forefront of shaping the emerging field of ethical and equitable AI—a powerful notion when you think about it. These Black women wield substantial influence over transformative technology within systems that, historically, would have excluded their intersecting identities. Far from symbolic, their placement has the potential to redefine how systems, industries, and institutions serve and protect us all—building equity, accessibility, and ethical use into the very foundation of AI tools.
For ESSENCE, I spoke with them about how their work is shaping this transformative technology.
Tiffany, there has been a lot of discussion regarding the risks and possibilities of bias in AI. How do biases make their way into these systems?
Deng: Absolutely. Much of it comes down to how programs are trained. Take a speech recognition system, for example—if it’s trained on a narrow subset of voices, it may struggle to understand people from different backgrounds or fail to recognize diverse accents and dialects, resulting in systems that perform better for some groups than others. For instance, research has shown that voice recognition devices—used at home, in cars, or on phones—often fail to accurately recognize Black voices.
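To make the disparity Deng describes concrete, here is a minimal illustrative sketch (hypothetical transcripts, not any system or methodology Google uses): an evaluation team might compute word error rate separately for each speaker group and compare the averages, where a wide gap signals exactly the uneven performance she mentions.

```python
# Illustrative sketch with hypothetical data: comparing how a speech
# recognizer performs across speaker groups using word error rate (WER).

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[-1][-1] / max(len(ref), 1)

# Hypothetical evaluation data: (speaker group, reference transcript, model output).
results = [
    ("group_a", "turn on the kitchen lights", "turn on the kitchen lights"),
    ("group_a", "call my sister after work", "call my sister after work"),
    ("group_b", "turn on the kitchen lights", "turn on the chicken lights"),
    ("group_b", "call my sister after work", "call my system after work"),
]

# Average WER per group; a large gap reveals the disparity described above.
by_group: dict[str, list[float]] = {}
for group, ref, hyp in results:
    by_group.setdefault(group, []).append(word_error_rate(ref, hyp))

for group, scores in by_group.items():
    print(f"{group}: mean WER = {sum(scores) / len(scores):.2f}")
```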
That context is helpful. In a scenario like the one you described, how do you approach solutions?
Deng: We create robust datasets to help machines learn diverse speech patterns. One project we’ve dedicated significant time and energy to is called Elevate Black Voices (EBV). This initiative is led by Courtney Heldreth, a wonderful researcher here at Google. We’re also collaborating with Howard University to broaden perspectives—engaging experts from diverse disciplines to help us anticipate challenges and uncover insights we might otherwise miss. These efforts ultimately allow us to serve all users more effectively.
Rachel, given your background and expertise in scoping out AI biases, what do you see as one of the most pressing concerns?
Gillum: I’m thinking a lot about the serious implications as AI continues to evolve and embed itself into systems. The most vital work right now is ensuring that AI doesn’t perpetuate existing biases. Training systems is a huge part of that, but it’s tricky—even with a perfect dataset, there is still a risk of amplifying societal biases. How the data is collected and structured plays a significant role in shaping what the AI learns.
Can you share a specific example of how this might result in real-world harm?
Gillum: Sure. If you look at the criminal justice system, for instance, African Americans are disproportionately represented in arrest and incarceration statistics, and we know systemic bias contributes to these numbers. When AI encounters this data, depending on how it’s structured, it often lacks the nuance to process these complexities.
Unless we’re intentional, AI can “learn” associations that are neither accurate nor fair. In high-stakes areas like sentencing recommendations or crime prediction, we’ve seen systems exhibit heavier bias against Black individuals, even when the specifics of the case don’t justify those conclusions. When the data reflects bias, the outcomes will inevitably carry that bias forward, often with harmful consequences.
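As a purely illustrative sketch (the numbers and the toy model are hypothetical, not anything Gillum described deploying), the snippet below shows the mechanism in miniature: a naive risk score learned only from skewed historical records hands different predictions to otherwise identical cases.

```python
# Hypothetical numbers, not any deployed system: a naive risk model that
# learns base rates from skewed historical records carries that skew
# straight into its predictions.

# Hypothetical historical records: (group, was_arrested). The imbalance here
# stands in for the disproportionate arrest statistics discussed above.
history = (
    [("group_a", 1)] * 30 + [("group_a", 0)] * 70
    + [("group_b", 1)] * 10 + [("group_b", 0)] * 90
)

# "Training": estimate the historical positive rate for each group.
counts: dict[str, list[int]] = {}
for group, outcome in history:
    totals = counts.setdefault(group, [0, 0])
    totals[0] += outcome   # positive outcomes
    totals[1] += 1         # total records

learned_rate = {g: pos / total for g, (pos, total) in counts.items()}

# "Prediction": score new individuals using only the learned group rates.
# Two people with identical case details receive different risk scores,
# the unfair association described above.
for group in ("group_a", "group_b"):
    print(f"{group}: predicted risk = {learned_rate[group]:.2f}")
```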
That’s actually terrifying. How do you go about rooting out that bias?
Gillum: It starts at the very beginning—testing models, checking for bias, and addressing toxicity in the design of the product itself. Something people really value is being able to understand the system well enough to trust it. To build that trust, we incorporate features that provide transparency throughout the process, so that if something does go wrong, users can understand what happened and offer feedback.
It’s about keeping humans in control during the design phase and, ultimately, in how the technology is deployed into the world. My team specifically focuses on deployment—ensuring the technology is used responsibly.
Answers are edited slightly for brevity and clarity.