21/04/2026
Timnit Gebru was born in Ethiopia in the early 1980s. Her father, an electrical engineer with a PhD, died when she was five. Her mother raised her alone in Addis Ababa. Life was quiet, full of books, and built on the belief that education could carry a child out of almost anything.
Then the war came.
When Gebru was a teenager, the Eritrean-Ethiopian War erupted. Her family, who had Eritrean roots, was forced to flee the only country she had ever known. Denied a visa to the United States at first, she lived briefly in Ireland before eventually being granted political asylum in America. She later described the whole experience as miserable.
She resettled in Massachusetts and started high school as a refugee. She was not yet fluent in English. And still, school administrators tried to keep her out of advanced classes, even though she was one of the strongest students there. She fought her way in anyway. That quiet refusal to accept a smaller version of her own future would shape everything that came after.
Years later, she earned her PhD from Stanford University, home to one of the most respected artificial intelligence programs in the world.
Then she did something that changed her entire field.
In 2018, together with researcher Joy Buolamwini, she co-authored a study called Gender Shades. They tested commercial facial recognition systems from major technology companies and found something shocking. For lighter-skinned men, the systems were wrong less than 1 percent of the time. For darker-skinned women, the systems were wrong up to 34.7 percent of the time. The technology being sold to police departments and governments around the world was dramatically failing the very people most vulnerable to being misidentified by it.
The paper became a landmark. It changed the way the entire technology industry talked about bias.
That same year, Google came calling. They hired her as technical co-lead of their Ethical AI team, alongside researcher Margaret Mitchell. The company publicly pointed to her as proof that it cared about responsible AI. Under her leadership, the team became one of the most respected and most diverse AI research groups at any major technology company in the world.
Then came the paper that changed everything.
In 2020, Gebru and several co-authors wrote a study titled On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? It raised three serious warnings about the large language models that were quickly becoming the heart of Google's business.
First, bias. These models learn from enormous amounts of internet text, which carries centuries of racism, sexism, and hatred within it. The systems absorb those patterns and then repeat them back into the world at scale.
Second, environmental cost. One study found that training a single large AI model can emit more than 626,000 pounds of carbon dioxide, roughly equivalent to the lifetime emissions of five average American cars. And the models keep getting bigger.
Third, a lack of accountability. The datasets are so vast that no single team can fully audit them. Nobody can guarantee what these systems will do or who they might harm.
The paper gently asked the industry to slow down.
Google did not want to slow down.
Two months before she was fired, Gebru had actually been promoted and received strong performance reviews. But after she submitted the paper, Google leadership asked her to retract it or remove her name. She asked who had reviewed it. She asked for specific feedback. She asked for transparency. She also sent a frustrated email inside Google expressing concern about the company's handling of diversity and bias.
A few days later, on 2 December 2020, Google ended her employment. The company framed it as a resignation, but she said she had never offered one. In her own words, she had been fired.
The response was immediate and enormous.
Within days, more than 2,700 Google employees signed a protest letter in support of her. More than 4,300 academics and researchers joined them. Nine members of the United States Congress sent a letter directly to Google CEO Sundar Pichai demanding answers.
But Gebru did something even more remarkable than protesting.
She built.
Exactly one year later, on 2 December 2021, she launched DAIR, the Distributed Artificial Intelligence Research Institute. It began with 3.7 million dollars in funding from the Ford Foundation, the MacArthur Foundation, the Open Society Foundations, and the Rockefeller Foundation. Independent. No shareholders. No executives deciding which findings got published and which got buried. Researchers from Africa, Europe, North America, and Australia now work together to document how AI harms the most vulnerable people on the planet.
The recognition followed quickly.
In 2021, Fortune named her one of the world's 50 Greatest Leaders. That same year, Nature listed her among the 10 scientists who had shaped science that year. In 2022, Time magazine named her one of the 100 Most Influential People in the World. In 2023, BBC named her one of the 100 most inspiring and influential women on the planet.
Think about the arc of her life for a moment.
Fled war. Denied visas. Rebuilt everything from nothing. Earned a PhD at Stanford. Exposed discrimination in technology used around the world. Joined Google to help fix AI from the inside. Got promoted. Told the truth about dangerous technology. Got fired for it. Then built her own institution free from every company that had tried to silence her.
Google hired her to find problems.
She found them.
Google fired her for it.
The problems are still there.
The models keep getting bigger. The biases keep getting deeper. The companies keep getting richer.
And Timnit Gebru keeps telling the truth.
Whether the world is ready to hear it or not.
~Old Photo Club