ETHICS Blog 2: Crafting Arguments
Should We Trust AI Without Regulation?

As a Black Muslim woman artist living in the West, I'm always thinking about who is included when new technologies are created. When I read the Harvard article "How to Regulate AI" by Sy Boles, I wondered what it means for people like me, not just for tech companies or governments. The article argues that artificial intelligence is becoming powerful very quickly and that our laws are not ready for the risks it could create. Even though regulation sounds like something far away from my daily life, it could affect my identity, my art, and my safety.
The Article’s Argument
P1: AI systems are already more persuasive than skilled human negotiators.
P2: AI is being adopted rapidly into business and financial systems.
P3: Fast adoption creates risks we do not fully understand or control.
P4: Our current legal and corporate systems are not built to handle these new risks.
C: Therefore, AI should be regulated to prevent major harm before it happens.

The idea is that AI is moving faster than our ability to manage its impact, so we need rules to protect society.

Rebuttal: Where the Argument Is Weak

I agree that technology can be dangerous if nobody sets limits. But the article assumes that harm is guaranteed if we don't regulate immediately. That looks like a slippery slope fallacy: it jumps from "AI is powerful" to "AI will cause large-scale harm" without strong evidence that this harm is already happening.

Challenges to the Premises

Challenge to P1: Just because AI can persuade well in tests does not mean it will automatically harm people in real business situations.
Challenge to P3: Many of the risks described are hypothetical. The article does not clearly show examples of AI causing major damage today.

New Rebuttal Argument

Here is a different way to think about it:
P1: AI's persuasive abilities alone do not guarantee harm.
P2: There is limited evidence of serious, widespread harm from AI in business so far.
P3: Regulating too early, based only on imagined risks, might slow innovation that could help people.
C: Therefore, strict regulation based only on predicted dangers may be premature, and we need more research before making broad rules.

This shows that the article may be overstating the risk to strengthen its point.

My Own Position

Even though the original argument has a flaw, I still believe we need regulation, especially for communities that are usually left out of these conversations. As a Black Muslim woman, I know technology can repeat the biases of the real world.
AI tools have already misidentified Black women's faces, filtered out Muslim names in job searches, and used artists' work without consent. These are not abstract issues for me; they are personal. So my conclusion is:

C2: AI should be used only if there are clear protections against bias and misuse, especially for marginalized communities, and if those communities are included in the decision-making process.

AI can create opportunities, but only if the rules are built to protect the people most at risk, not just the people with power.
