What 60 Minutes Taught Me About the Dark Sides of Artificial Intelligence

Introduction: A Wake-Up Call from the Future

I recently watched a 60 Minutes episode that truly stopped me in my tracks. Titled “Dark Sides of Artificial Intelligence,” this investigation by Lesley Stahl was more than just a news segment — it was a sobering glimpse into the hidden costs of the AI revolution we’re all so eagerly embracing.

As someone who’s always been fascinated by the potential of AI (from ChatGPT to self-driving cars), I hadn’t really paused to think about who makes these systems possible or what risks they carry. This episode did more than inform; it unsettled me, and its message deserves a much wider audience.

Behind the Code: The Invisible Workers Powering AI

One of the most powerful segments focused on a workforce I had never considered: the people in Kenya, and likely elsewhere in the Global South, who label, flag, and clean the data AI uses to learn.

These aren’t highly paid engineers in Silicon Valley. They’re low-paid workers — many earning just a few dollars an hour — who spend their days moderating deeply disturbing content: violence, abuse, hate speech. All so that our AI systems can “understand” what’s acceptable and what’s not.

The term “ghost work” suddenly took on a painfully real meaning. It made me think: every polite chatbot response and every filtered social media post has a human behind it, and often a suffering one.

The Ethical Minefield of AI Decisions

60 Minutes didn’t stop there. The episode dove headfirst into the ethical quagmires we’re currently ignoring. AI doesn’t just suggest the next YouTube video or autocomplete a sentence. It’s increasingly involved in decisions about:

  • Who gets a job interview.

  • Who gets a loan.

  • Who gets flagged by law enforcement.

What happens when these decisions are made by systems trained on biased data? As the episode shows, the potential for injustice isn’t hypothetical — it’s already happening.

We’re creating systems we don’t fully understand, and then asking them to judge humans. That’s not just risky. That’s reckless.
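
To make that risk concrete, here’s a minimal, purely illustrative sketch (synthetic data and invented feature names, not anything from the episode): a model trained on historically biased decisions can reproduce that bias through a correlated “proxy” feature, even when the protected attribute itself is withheld.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Protected attribute, plus a correlated proxy (think: a region code).
    group = rng.integers(0, 2, n)
    proxy = np.where(rng.random(n) < 0.8, group, 1 - group)  # matches group ~80% of the time

    # Skill is identically distributed, but historical decisions penalized group 1.
    skill = rng.normal(0.0, 1.0, n)
    hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

    # Train WITHOUT the protected attribute: only skill and the proxy.
    model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

    # Score two equally skilled applicants who differ only in the proxy feature.
    p0 = model.predict_proba([[0.0, 0]])[0, 1]
    p1 = model.predict_proba([[0.0, 1]])[0, 1]
    print(f"P(hired | skill=0, proxy=0) = {p0:.2f}")  # higher
    print(f"P(hired | skill=0, proxy=1) = {p1:.2f}")  # lower: the old bias leaked in

The model never sees “group,” yet it scores identical skill levels differently because the proxy smuggles the historical bias back in. Real hiring, lending, and policing systems face the same failure mode, only with far messier data.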

AI and the Battlefield: Autonomy with a Trigger

Another chilling portion of the episode looked at the military use of AI — specifically, autonomous weapons.

Imagine drones or robotic systems that can select and kill targets without human intervention. That’s not sci-fi anymore. It’s the near future, if not the present.

Experts warn that without global regulation, this could spark a new arms race. Unlike nuclear weapons, these tools are cheap, scalable, and within reach of bad actors. There’s no Geneva Convention for AI, at least not yet.

It made me wonder: will we only act after tragedy strikes?

The Push for Accountability and Regulation

Thankfully, not all hope is lost. The episode also highlighted global efforts to regulate AI. From the EU’s AI Act to growing public pressure on tech companies, the conversation around responsible AI is gaining steam.

Still, it’s a race against time.

As one expert noted in the episode: “We’re building the plane as we’re flying it.” That’s a terrifying metaphor — especially when the plane might be armed and making decisions we can’t override.

Final Thoughts: Where Do We Go from Here?

Watching this 60 Minutes episode made one thing very clear to me: AI is not just a tool. It’s a reflection of us — our priorities, our ethics, and our blind spots.

If we treat it like a magic black box that solves problems without consequences, we’re in for a rude awakening. But if we open our eyes to the labor, pain, and risk behind the algorithms, maybe we can build something better — more ethical, more humane.

I still believe in AI’s potential. But now, I believe just as strongly in the need for transparency, regulation, and most of all, accountability.

Let’s not wait for a crisis to wake up.

Further Reading: The Dark Sides of Artificial Intelligence

  1. https://hai.stanford.edu/research/ai-ethics

  2. https://www.technologyreview.com/2022/12/19/1065214/ai-artificial-intelligence-training-data-labor-exploitation/

  3. https://time.com/6247678/facebook-moderation-kenya/

  4. https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/

  5. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

  6. https://futureoflife.org/open-letter/autonomous-weapons-open-letter/

  7. https://www.stopkillerrobots.org/

  8. https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

  9. https://www.whitehouse.gov/ostp/ai-bill-of-rights/

  10. https://ainowinstitute.org/reports.html 
