It’s difficult to leverage technical tools when everything lies on shifting sands owned by billionaires donating money to fascists.
In the years before I left corporate America a year ago, there was an enormous push for everyone to start using our enterprise AI tool to sharpen our writing and to strengthen and standardize our reports and documentation. The goal was to ease everyone’s workload with copy that was easier to digest, more compelling, more engaging. Amid simultaneous efforts to boost metrics and reduce headcount, everyone jumped on board in the name of productivity. In time this sowed doubt in everyone’s natural ability. Should everything be run through these tools to ensure the right polish, the right “oomph”? While everyone was trying to deliver their “best,” they were actually undermining their own abilities and proving themselves replaceable.
This has been happening across the country, and it’s creating the most turmoil in hiring processes. Companies use AI to screen resumes (often with inaccurate results); they even use AI to conduct screening interviews. Yet they are increasingly using tools to detect applicants’ use of AI and then using that to discredit them. When hiring departments use AI, it’s a cost-saving strategy; when applicants are disqualified for doing the same, it reveals a power imbalance. Throughout the industry there are no clear guidelines, expectations, or guardrails.
Worse still, readers have caught on to the telltale signs of AI use in copy. Beyond the famous em-dash (which long predates AI and punishes legitimate authors who use it properly), there are certain common conventions and a certain cadence that reveal popular tool use. There are also a number of new tools available (by subscription, of course) that promise to detect AI writing. Many are now reverting to analogue tools and processes as they struggle to adapt to this competitive and fickle environment. Will this result in writing output that is slower and clunkier in order to prove authenticity? Will this negatively impact “productivity metrics”? Or will only those who are economically disenfranchised be held to the analogue standard? It’s a chaotic time in many ways; this is just one more log on the dumpster fire.

But let’s return to two large elephants in the room, one of which I touched on at the beginning. One: there’s currently a boycott of ChatGPT brewing over the massive donations OpenAI’s co-president has sent to Trump. Two: the disastrous impact GenAI data centers have had on local water and energy resources.
I don’t have an easy conclusion or neat and tidy next steps. If I ran this piece through GenAI for optimization and restructuring, it might come back with something pithy and punchy, but it could also flatten my overall voice and derail my intentions. These paragraphs could end up sounding like the dozens we see regularly on LinkedIn.
None of these impacts were my intention when I set out to use these tools simply to refine awkward transitions or clunky passages. Even with the best prompts, GenAI tends to dilute the voice and agency of authors. These tools can also be beguiling, flattering, and so incredibly helpful, which is dangerously attractive to writers dealing with critical, unsupportive employers or readers. But the trade-off is the weakening of one’s own instincts, creative judgement, and voice, and in this current environment that could be intellectually and culturally disastrous.
—–
Notes:
If this piece resonated with you and you’d like to explore related work or support me in other ways – please consider:
💡 On Patreon, I share additional writing, behind-the-scenes notes, and works-in-progress that don’t always fit here. You can find it here.
📖 I also self-published my MA thesis on Butoh, which looks at performance, embodiment, and cultural history. If that interests you, it’s available here.
💸 And if a smaller, one-time gesture feels more your speed, you can leave a $3 tip via Ko-fi here. It’s always appreciated.
