The Panic Is Old, The Technology Is New: How Anti-AI Arguments Are Just Recycled Rhetoric from Creationists, Anti-Abortion Activists, and Every Other Movement Afraid of Losing Its Privileged Status
By Brian Ragle
Author Positioning
Before we dive into this delicious hypocrisy buffet, let me be crystal clear about where I’m coming from. I am not a tech-bro evangelist. I am not here to sell you on the glorious future where we all upload our consciousness to the cloud and sing hymns to our robot overlords. I am a writer, photographer, journalist, cooking-video tinkerer, and small-time leftist menace whose entire livelihood could be absolutely wrecked tomorrow by some poorly regulated corporate AI deciding that “journalism” means “regurgitating press releases.”
I am an anti-capitalist, pro-labor, pro-worker socialist who would rather chew glass than defend Silicon Valley’s latest scheme to extract value from human misery. I support privacy rights, strong AI regulation, public ownership of essential systems, and democratic oversight of major technologies. I think Mark Zuckerberg looks like someone tried to 3D-print a human from memory. I believe Elon Musk is what happens when you give unlimited money to someone who peaked at age fourteen after reading Ayn Rand, eventually becoming an incel LibertAryan.
But here’s the thing: I am not threatened by AI critique. I am threatened by bad critique that hides its real anxieties behind moral posturing. I am threatened by people who claim to be progressive while recycling arguments from the same reactionary playbook that brought us “evolution is just a theory” and “vaccines cause autism.”
I am writing this because the anti-AI panic has become a perfect storm of intellectual laziness, where people who consider themselves enlightened free-thinkers are literally copy-pasting their arguments from Young Earth Creationists, just with “God” crossed out and “human specialness” written in crayon above it.
So buckle up. We’re about to take a tour through the recycling bin of bad-faith arguments, and I promise you, the smell is exactly what you’d expect.
Introduction: Same Script, Different Villain
Every few decades, humanity invents something new and a predictable subset of people absolutely lose their shit about it. In the nineteenth century, it was electricity. People literally believed electric lights would steal their life force. In the twentieth century, we panicked about radio waves (they’ll cook your brain!), comic books (they’ll make you a delinquent!), rock music (it’s the devil’s rhythm!), and video games (they’ll turn you into a school shooter!).
Now it’s the twenty-first century, and we’ve found our new boogeyman: artificial intelligence.
The script is so predictable you could automate it with a three-line Python script: “This technology is unnatural, will destroy society, corrupt our children, steal our souls, and is secretly controlled by evil elites who want to [INSERT CONSPIRACY HERE].”
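And because I can never resist a cheap joke, here is that script, a tongue-in-cheek sketch (slightly more than three lines once you add comments, but the panic industry has always padded its output):

```python
# The all-purpose moral-panic generator. Works for electricity, comic
# books, rock music, video games, and now AI -- only one variable changes.
TEMPLATE = ("This technology is unnatural, will destroy society, corrupt "
            "our children, steal our souls, and is secretly controlled by "
            "evil elites who want to {conspiracy}.")

def panic(conspiracy: str) -> str:
    """Fill in the single blank the script has ever needed."""
    return TEMPLATE.format(conspiracy=conspiracy)

print(panic("harvest your data"))
print(panic("replace honest working musicians with the devil's rhythm"))
```

Swap the conspiracy, ship the op-ed, collect the engagement.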
What makes this round particularly hilarious, or depressing, depending on your caffeine levels, is how many of the loudest anti-AI voices belong to people who consider themselves progressive, rational, evidence-based thinkers. These are folks who mock creationists at dinner parties and roll their eyes at anti-vaxxers on Facebook. Yet here they are, using the exact same arguments, sometimes nearly word-for-word, that those groups have been peddling for decades.
The crunchy technophobe left and AI doomers act like they’re bravely resisting some unprecedented threat to humanity. In reality, they’re reading from a script so old it has coffee stains from the Scopes Monkey Trial.
Let’s go through the greatest hits, shall we?
1. “AI Is Eugenics, Just Digital This Time!”
The Modern AI Version:
“Algorithms are implementing a new eugenics regime.”
“AI is digital eugenics that ranks and eliminates ‘undesirable’ people.”
“Data science is fundamentally eugenic in nature.”
There are absolutely real concerns about algorithmic bias. Scholars like Ruha Benjamin (Race After Technology) and Safiya Noble (Algorithms of Oppression) have documented how biased datasets can perpetuate racist outcomes in housing, credit, healthcare, and policing. This is serious work that deserves serious attention.
But that’s not what you see in most online discourse. What you see is the intellectual equivalent of a drive-by shooting: “AI = EUGENICS. PERIOD. NO FURTHER QUESTIONS.”
Where This Was Stolen From:
This argument is lifted almost wholesale from anti-abortion rhetoric. For decades, the religious right has been screaming that abortion is “Black genocide” and that Planned Parenthood is secretly running a eugenics program. They take Margaret Sanger’s actual (and absolutely problematic) entanglements with early twentieth-century eugenics and inflate them into a modern-day Nazi plot.
The formula is embarrassingly simple:
- Find a real historical evil (eugenics)
- Find something you already hate (abortion/AI)
- Mash them together like a toddler playing with Play-Doh
- Declare moral victory
In AI discourse, this becomes “any use of classification, prediction, or pattern recognition = LITERAL GENOCIDE.” It is the kind of argument that sounds profound if you do not think about it for more than three seconds, which, coincidentally, is exactly how long most people think about their hot takes before posting them.
The cherry on top? Many of the same people screaming “AI IS EUGENICS!” have no problem with standardized testing, ZIP-code-based school funding, or the criminal justice system. You know, actual systems with documented eugenic histories and ongoing discriminatory impacts. But sure, it is the chatbot that is the real Nazi.
2. “Humans Are Special Snowflakes That Can Never Be Replicated!”
The Modern AI Version:
“No machine can ever be truly creative.”
“AI will never have real consciousness or empathy.”
“Machines can’t understand intuition, emotion, or what makes us human.”
This is metaphysics cosplaying as technical analysis. These are not arguments about what AI can or cannot do. They are desperate assertions about what the speaker needs to believe to feel special.
Where This Was Stolen From:
Young Earth Creationism and Intelligent Design. I am not even being hyperbolic here; this is literally the same argument.
Creationists have spent the last century insisting that humans are a “special creation,” made in the “image of God,” with an immaterial soul that cannot arise from natural processes. The Discovery Institute (the Vatican of Intelligent Design) publishes books with titles like Non-Computable You: What You Do That Artificial Intelligence Never Will, which might as well be titled Please God, Let Me Still Be Special.
The argument structure is identical:
- Creationist version: “Humans cannot have evolved because we have souls/consciousness/divine spark that cannot come from matter.”
- Anti-AI version: “AI cannot be intelligent because it lacks soul/consciousness/that special sauce that makes us human.”
Strip out the Bible verses and you have the modern anti-AI argument. They just replaced “God” with “ineffable human essence” and called it a day.
The goalpost-moving is Olympic-level. Twenty years ago: “AI will never beat humans at chess.” Ten years ago: “AI will never understand natural language.” Five years ago: “AI will never create art.” Today: “AI will never have… uh… genuine creativity with authentic emotional resonance and true understanding.”
By next Tuesday, they will be arguing that AI lacks a je ne sais quoi that can only be detected by a special committee of philosophers who all happen to agree that humans are super special for reasons they cannot quite articulate but are definitely very real and not at all made up.
3. “AI Will Destroy All Jobs and Society Will Collapse!”
The Modern AI Version:
“AI will automate everything and throw millions into poverty!”
“We must stop AI before it destroys work, democracy, and the very concept of human meaning!”
Are there legitimate concerns about automation and labor? Absolutely. Should we have robust discussions about retraining, universal basic income, and wealth redistribution? You bet. But the conversation we are actually having is more like: “THE ROBOTS ARE COMING AND WE ARE ALL GOING TO DIE IN THE BREAD LINES!”
Where This Was Stolen From:
Neo-Luddites and two centuries of technological panic. The original Luddites were not actually anti-technology. They were skilled textile workers who quite reasonably objected to factory owners using machines to slash wages and working conditions. Modern neo-Luddites took this nuanced labor struggle and turned it into “MACHINES BAD! DESTROY MACHINES!”
We have been playing this exact song for 200 years:
- 1810s: “Power looms will destroy society!”
- 1960s: “Computers will end all jobs!”
- 1980s: “Robots will replace all workers!”
- 1990s: “The Internet will destroy human interaction!”
- 2010s: “Social media will end democracy!”
- 2020s: “AI will end humanity itself!”
Economists literally have a term for this — the “Luddite Fallacy” — because we have been through this cycle so many times it has its own Wikipedia page.
I wrote about this cycle in more detail here: https://medium.com/@brian.ragle/automation-ai-agents-and-the-post-labor-future-f02dd0bbb90c
The real issue, then and now, is not the technology. It is who owns it and who benefits from it. But “democratize the means of production” does not get as many retweets as “SKYNET IS COMING FOR YOUR CHILDREN!”
4. “AI Is Theft! It’s Stealing Our Precious Bodily Fluids — I Mean, Data!”
The Modern AI Version:
“Training on public data is theft!”
“AI art is just automated plagiarism!”
“These models are stealing from artists and writers!”
Some of these concerns are legitimate, especially when it comes to compensation and attribution. But the maximalist “all training = theft” position is pure hysteria dressed up as principle.
Where This Was Stolen From:
Part 1: The music piracy wars. Remember when the RIAA insisted that MP3s would literally destroy music forever? When they sued grandmothers for millions of dollars because their grandkids downloaded “My Heart Will Go On”? Same energy, different decade.
The courts have consistently ruled that reading, indexing, and learning from text (as in Authors Guild v. Google) is not the same as copying and distributing it. Try explaining that to someone who is convinced that a neural network looking at their DeviantArt portfolio is literally the same as someone breaking into their house and stealing their paintings.
Part 2: Anti-immigrant “job stealing” rhetoric. The “AI steals jobs” argument is structurally identical to “immigrants steal jobs.” It treats the economy as a zero-sum game where any gain for X must be a loss for Y. It is scapegoating dressed up as economic analysis.
The twist is that the people making these arguments often consider themselves progressive while literally recycling xenophobic talking points with “AI” subbed in for “foreigners.” Congratulations, you have invented woke nationalism, but for robots.
5. “We Can’t Trust AI Because Corporations Are Evil!”
The Modern AI Version:
“AI is just Big Tech’s tool to control and manipulate us!”
“Any technology made by corporations is inherently evil!”
“Regulation is pointless because the technology itself is corrupt!”
Tech corporations do sketchy shit. Water is wet. The sky is blue. These are facts. But the leap from “corporations behave badly” to “therefore all AI research is an evil plot” is conspiracy thinking with a graduate degree.
Where This Was Stolen From:
Big Pharma conspiracies and anti-GMO panic. This is the exact same playbook:
- Anti-vaxxers: “Vaccines are a Big Pharma plot to make us sick for profit!”
- Anti-GMO: “Monsanto is poisoning us with Frankenfoods for money!”
- Anti-AI: “Big Tech is destroying humanity with algorithms for clicks!”
The structure never changes:
- Corporations exist
- Corporations sometimes suck
- Therefore, everything they touch is part of an evil master plan
- The only solution is to reject the technology entirely
This is how you get people who will happily use antibiotics (made by Big Pharma), eat food (often involving GMO ingredients), and scroll Twitter for twelve hours a day (Big Tech), while simultaneously insisting that AI is uniquely evil because… reasons.
“Microsoft is involved” is not an argument against linear algebra. “Google is creepy” is not a rebuttal to machine learning. These are political and economic problems that require political and economic solutions, not Luddite LARPing.
6. “If AI Isn’t Perfect, It’s Worthless and Dangerous!”
The Modern AI Version:
“This AI made a mistake, therefore all AI is unsafe!”
“If it’s biased at all, it can never be fixed or trusted!”
“It’s not real intelligence, so it’s fraud!”
This is the Nirvana Fallacy having a baby with purity politics, and that baby was raised by people who have never met a goalpost they did not want to move.
Where This Was Stolen From:
Every anti-science movement in history:
- Climate deniers: “Climate models are not 100% accurate, so they are useless!”
- Anti-vaxxers: “Vaccines are not 100% safe, so they are poison!”
- Creationists: “Evolution cannot explain everything, so God did it!”
We do not apply this standard to anything else. Courts convict innocent people. Doctors misdiagnose patients. Teachers fail students who deserved to pass. Journalists (hi) get facts wrong. But somehow AI is supposed to be perfect on day one or it is an existential threat to humanity.
The beautiful irony is that many of the people demanding perfection from AI cannot even get their own facts straight about how it works. But sure, let us shut down all research because a chatbot once confused the capital of Montana.
7. “AI Will Either Destroy Us or Save Us — No Middle Ground!”
The Modern AI Version:
“AGI will either solve all problems or end humanity!”
“We must align AI perfectly or we’re doomed!”
“The singularity is coming and it’s either rapture or apocalypse!”
This is what happens when people who do not believe in God still desperately want a religious narrative. They have replaced the Second Coming with the Singularity, and it is exactly as ridiculous as it sounds.
Where This Was Stolen From:
Religious apocalypticism, science fiction literalism, and good old-fashioned millenarian panic. You can find identical rhetoric in:
- 1980s Satanic Panic (“D&D will corrupt our children!”)
- 1990s UFO cults (“The aliens are coming to save/destroy us!”)
- Y2K hysteria (“Civilization will collapse at midnight!”)
- Every end-times preacher since forever
The same people who mock religious fundamentalists for believing in the Rapture are literally preparing for the Robot Rapture. They have just replaced “accept Jesus Christ” with “align the AI’s utility function” and called it rationalism.
On the flip side, you have people like Peter Thiel out there literally talking about the Antichrist and positioning himself and his fellow broligarchs as guardians against it. I wrote about that particular bit of billionaire theology cosplay here: https://medium.com/@brian.ragle/the-devil-wears-loro-piana-how-silicon-valleys-philosopher-king-weaponized-the-antichrist-c3b1aa3feacd
8. “AI’s Very Existence Is Violence Against Marginalized Groups!”
The Modern AI Version:
“AI systems harm marginalized communities by existing!”
“Using AI for anything is inherently oppressive!”
“If you’re not 100% against AI, you’re complicit in oppression!”
There are absolutely real issues with algorithmic bias harming marginalized communities. Researchers like Joy Buolamwini and Timnit Gebru have done crucial work documenting these problems. This work is important and necessary.
Take the discourse around Black students and Black English. There are real harms when AI detectors and grading tools flag African American Vernacular English as “incorrect,” “plagiarized,” or “suspicious,” and when language models stumble on AAVE while treating white-coded English as the unmarked default. Timnit Gebru and others have pointed out how these systems reproduce existing power hierarchies in language and education. That is absolutely worth fighting. But turning every failure of a chatbot to handle Black English into proof that “AI itself is anti-Black violence” is moral inflation. It takes a concrete, fixable problem and upgrades it to metaphysical evil because that sounds more dramatic in a quote tweet.
But online, this morphs into: “AI as a concept is violence,” which then becomes a purity test where any nuanced discussion makes you a fascist sympathizer.
Where This Was Stolen From:
Every moral panic that confused tools with intentions:
- 1980s anti-porn feminism: “Pornography IS violence against women!”
- 1990s video game panic: “Games CAUSE violence!”
- 2000s anti-Harry-Potter Christians: “Reading about wizards IS witchcraft!”
The move is always the same: collapse the distinction between a tool, its use, and its context into one essential moral category. Nuance is complicity. Discussion is collaboration. The only acceptable position is total rejection.
This is how you get people claiming that using AI to help dyslexic kids read is “ableist” while simultaneously insisting that not using AI to help dyslexic kids is also “ableist.” It is not about the technology or even the politics. It is about maintaining moral superiority in an attention economy that rewards the most extreme position.
The Real Fear: We’re Not That Special After All
Here is what is actually happening: AI threatens identity. Not jobs. Not creativity. Identity.
For centuries, humans have built our self-worth on a series of “only humans can do X” claims:
- Only humans can use tools (oops, chimps and dolphins)
- Only humans can communicate (whoops, dolphins again)
- Only humans can play chess (damn you, Deep Blue)
- Only humans can create art (uh oh, DALL-E)
- Only humans can write poetry (GPT has entered the chat)
Each time technology or science knocks down one of these pillars, we frantically erect new ones. Now we are down to arguing about “genuine” creativity and “authentic” consciousness, concepts so vague we cannot even define them for humans, let alone machines.
The people most invested in AI panic are often those whose identity is most tied to human specialness. Writers insisting no machine can “really” write. Artists claiming AI cannot “truly” create. Philosophers arguing for special human essence. They are not defending humanity. They are defending their egos.
This fear is ancient. It is the same terror that made the church persecute Galileo for suggesting Earth was not the center of the universe. The same horror that made Victorians reject evolution. The same panic that makes people insist there must be something, anything, that makes us fundamentally different from and superior to everything else.
But here is the thing: we do not need to be magically special to have value. We do not need to be the only intelligent things in the universe to matter. We do not need to be irreplaceable to be worthy of dignity, rights, and respect.
The real threat is not that AI will replace us. It is that we have built our entire sense of self-worth on being irreplaceable, and now we are frantically defending that delusion with arguments borrowed from people we claim to despise.
Conclusion: The Problems Are Real, The Panic Is Recycled
Let me be absolutely clear: there are real, serious issues with AI that need addressing:
- Algorithmic bias perpetuating discrimination
- Job displacement without social safety nets
- Privacy violations and surveillance capitalism
- Concentration of power in tech monopolies
- Environmental costs of training large models
- Lack of transparency and accountability
These problems deserve serious engagement, thoughtful regulation, and systematic reform. They deserve better than recycled culture-war arguments from the Museum of Bad Faith.
Instead of having those conversations, we are stuck relitigating the Scopes Monkey Trial with robots. We are using arguments against AI that were embarrassing when they were used against evolution, birth control, vaccines, GMOs, and every other technology that threatened someone’s worldview.
The anti-AI movement has become a fascinating coalition of the anxious, united not by coherent critique but by existential dread. You have artists worried about their commissions standing shoulder-to-shoulder with religious fundamentalists worried about their souls, while Luddite leftists link arms with conspiracy theorists, all of them using the same recycled arguments that have failed every single time they have been deployed.
The technology is new. The panic is ancient. The arguments are embarrassing.
If we want a future where AI serves the public good rather than corporate profits, we need better critics than people who think “AI is eugenics!” is a mic-drop moment. We need people who understand both the technology and the politics, who can separate legitimate concerns from recycled hysteria, who can imagine better futures rather than just fearing different ones.
We need to stop asking “Will AI replace us?” and start asking “Who owns it, who benefits from it, and how do we democratize it?”
We need to stop treating AI like it is either the Second Coming or the Apocalypse and start treating it like what it is: a tool that will be as good or bad as the society that wields it.
We need to stop recycling arguments from movements we claim to oppose and start developing new frameworks for new technologies.
Most importantly, we need to stop defining human worth by what machines cannot do and start defining it by what we choose to do, with or without artificial assistance.
The machines are not coming for your soul. But the bad arguments might be coming for your brain.
Do not let them win.
If my work has made you think, laugh, or rage-scroll, buy me a coffee and keep the words coming.
Bibliography
Primary Sources on AI Criticism and Rhetoric
Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. Polity, 2019.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021.
Harari, Yuval Noah, Tristan Harris, and Stuart Russell. “AI Could Seize the Master Key of Civilization.” The Economist, April 19, 2023. https://www.economist.com/by-invitation/2023/04/19/yuval-noah-harari-tristan-harris-and-ais-existential-risk
Larson, Erik J. The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do. Harvard University Press, 2021.
Marks, Robert J. II. Non-Computable You: What You Do That Artificial Intelligence Never Will. Discovery Institute Press, 2022.
Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018.
Historical Parallels: Creationism and Intelligent Design
Forrest, Barbara, and Paul R. Gross. Creationism’s Trojan Horse: The Wedge of Intelligent Design. Oxford University Press, 2004.
Pennock, Robert T. Tower of Babel: The Evidence Against the New Creationism. MIT Press, 1999.
Discovery Institute. “The Wedge Document: ‘So What?’” Discovery Institute, 2006. https://www.discovery.org/id/faqs/wedge/
Technology Panics and Moral Panics
Arthur, W. Brian. The Nature of Technology: What It Is and How It Evolves. Free Press, 2009.
Barkun, Michael. A Culture of Conspiracy: Apocalyptic Visions in Contemporary America. University of California Press, 2013.
Binfield, Kevin, ed. Writings of the Luddites. Johns Hopkins University Press, 2004.
Winner, Langdon. The Whale and the Reactor: A Search for Limits in an Age of High Technology. University of Chicago Press, 1986.
Anti-Science Movements and Conspiracy Theories
“Big Pharma Conspiracy Theory.” BMJ Medical Humanities 47, no. 4 (2021): 253–259. https://mh.bmj.com/content/47/4/253
Blancke, Stefaan, Frank Van Breusegem, Geert De Jaeger, Johan Braeckman, and Marc Van Montagu. “Fatal Attraction: The Intuitive Appeal of GMO Opposition.” Trends in Plant Science 20, no. 7 (2015): 414–418.
Kata, Anna. “A Postmodern Pandora’s Box: Anti-Vaccination Misinformation on the Internet.” Vaccine 28, no. 7 (2010): 1709–1716.
Kahan, Dan M. “Cultural Cognition of Scientific Consensus.” Journal of Risk Research 14, no. 2 (2011): 147–174.
Oreskes, Naomi, and Erik M. Conway. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Bloomsbury Press, 2010.
Specter, Michael. Denialism: How Irrational Thinking Hinders Scientific Progress, Harms the Planet, and Threatens Our Lives. Penguin Press, 2009.
Copyright, Digital Rights, and Labor
Lessig, Lawrence. Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity. Penguin Press, 2004.
Authors Guild v. Google, 804 F.3d 202 (2d Cir. 2015).
A&M Records, Inc. v. Napster, Inc., 239 F.3d 1004 (9th Cir. 2001).
Philosophy of Mind and Consciousness
Bergstrom, Carl T., and Jevin D. West. Calling Bullshit: The Art of Skepticism in a Data-Driven World. Random House, 2020.
Deutsch, David. The Beginning of Infinity: Explanations That Transform the World. Viking, 2011.
Lukianoff, Greg, and Jonathan Haidt. The Coddling of the American Mind: How Good Intentions and Bad Ideas Are Setting Up a Generation for Failure. Penguin Press, 2018.
Media Coverage and Online Discourse
“Anti-AI People Are Today What Anti-Trans People Were Yesterday.” Reddit r/artificial, 2024. https://www.reddit.com/r/artificial/comments/1ccu4e2/anti_ai_people_are_today_what_anti_trans_people/
“Dystopian or Divine? Decoding the Rise of AI-Generated Artwork.” The New Indian Express, July 23, 2023. https://www.newindianexpress.com/magazine/2023/Jul/23/dystopian-or-divine-decoding-the-rise-of-ai-generated-artwork-2597053.html
Musk, Elon. “With Artificial Intelligence We Are Summoning the Demon.” Interview. The Guardian, October 27, 2014. https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-summoning-the-demon
Bioethics and “Playing God” Arguments
Bioetikos. “‘Playing God’: Is It a Valid Objection?” Bioethics Resources, 2023. https://bioetikos.gr/playing-god-is-it-a-valid-objection/
Evans, John H. Playing God?: Human Genetic Engineering and the Rationalization of Public Bioethical Debate. University of Chicago Press, 2002.
GMO Controversies and “Frankenfood”
“Feeding the Debate: Myths About Genetically Modified Crops.” American Association for the Advancement of Science, 2023. https://www.aaas.org/news/feeding-debate-myths-about-genetically-modified-crops
“The Idea of ‘Frankenfood’.” The New York Times, October 3, 1999. https://www.nytimes.com/1999/10/03/weekinreview/idea-frankenfood.html
U.S. Right to Know. “Monsanto Papers.” https://usrtk.org/gmo/monsanto/
Religious Perspectives on AI
Catholic Answers. “Can A.I. Have a Soul?” Catholic Answers Magazine, 2023. https://www.catholic.com/magazine/online-edition/can-a-i-have-a-soul
Answers in Genesis. “AI and the Soul.” 2023. https://answersingenesis.org/artificial-intelligence/ai-and-the-soul/
Discovery Institute. “Mind Matters: AI Commentary Hub.” https://mindmatters.ai/
Historical Context
“Luddite.” Encyclopedia Britannica. https://www.britannica.com/event/Luddite
“The Luddite Fallacy.” Library of Economics and Liberty. https://www.econlib.org/library/Enc/LudditeFallacy.html
Roberts, Dorothy. Killing the Black Body: Race, Reproduction, and the Meaning of Liberty. Vintage Books, 1997.
Rutherford, Adam. How to Argue with a Racist: What Our Genes Do (and Don’t) Say About Human Difference. The Experiment, 2020.
Srinivasan, Amia. The Right to Sex: Feminism in the Twenty-First Century. Farrar, Straus and Giroux, 2021.