Linda McMahon, the 76-year-old billionaire Secretary of Education, recently talked about using artificial intelligence for kindergartners. She repeatedly pronounced AI like A-1, the steak sauce.
Seems appropriate, since I think we’re cooked.
When I take breaks from dwelling on more immediate disasters, it’s AI that consumes my thoughts. My body is in rural Iowa, enchanted by the sweep of green overtaking the stubble of last year’s cornstalks. The rhythms here continue as they always have.
But my tech worker mind is wandering future paths.
I’ve somehow trained myself for this moment. In college, I majored in Symbolic Systems. The program was born in Silicon Valley and molded by the tech optimism of the 1990s, and it combined coursework in computer science, linguistics, philosophy, and psychology. It was called ‘Symbolic Systems’ because humans and computers communicate and process information via symbols. We learned how humans and computers interact with, augment, and evolve alongside one another.
I once tried to explain this major to an oral surgeon while the anesthesia kicked in. He clearly thought it was drug-induced nonsense.
I liked the computer side of the major, including a class I took on artificial intelligence. But I focused my studies on human systems, designing my own concentration in social and group awareness — basically, how groups form systems that operate almost independently of their individual members. I wrote a thesis on German resistance networks in World War II.
Who knew that a weirdo course of study in fascism + AI would be so useful in 2025 America? If Stanford had offered a minor in doomsday prepping, I’d be all set.
But I digress.
It’s not that I think we’re cooked because our AI overlords will murder us. Fast annihilation would be too easy a way out of our current pickle.
I also don’t think we’re cooked because AI is inherently evil. Whether the development of AI has been done ethically is a different question. AI itself is currently just a tool.
We might be cooked (literally) if the power demands of AI result in even faster climate change. But the techno-optimist in me thinks we could find ways around the impact on the power grid.
The real heart of my concern is that we are creating the equivalent of a new species. We have no language or symbols to truly discuss or understand how that species will function, or how we might evolve alongside it.
Therein lies the cookery.
AI is shaped by scores. These models contain countless parameters, or weights, which are continually adjusted during training to reward some behaviors and discourage others. An AI model sifts through massive troves of data, looking for the best outcome. It’s not the yes/no decision tree that humans and older algorithms might use. Instead, it’s a search to maximize certain outcomes (e.g. find the most advantageous reward among a million scenarios) or minimize others (e.g. make the fewest mistakes).
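If you want to see the shape of that reward-chasing in miniature, here is a deliberately silly toy sketch. It is not how any real model trains — real systems adjust billions of weights at once using gradients — and the reward function here is entirely made up. It just shows the core move: propose a small tweak to a weight, keep it if it scores better, discard it if not.

```python
import random

random.seed(0)  # make the toy run reproducible

def reward(weight):
    # An invented reward function: pretend the "best outcome"
    # lives at weight = 3.0, and anything else scores worse.
    return -(weight - 3.0) ** 2

weight = 0.0
for _ in range(1000):
    candidate = weight + random.uniform(-0.1, 0.1)  # try a small tweak
    if reward(candidate) > reward(weight):          # keep only what scores better
        weight = candidate

print(weight)  # drifts toward the high-reward spot near 3.0
```

No pain, no shame, no memory of the tweaks that failed — just a number nudged toward whatever the score says is best.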
AI agents are black boxes, evolving in ways we can no longer easily track. An AI adjusts its own weights to test for new outcomes. It has access to all the online data in the world¹. It can sort through more information than any human or group of humans could ever handle. How it achieves its optimal results is increasingly a mystery.
But I do know this: AI is not shaped by pain.
It is not shaped by a dodgeball to the face.
It is not shaped by the feel of your grandmother’s arthritis-twisted fingers clasped in yours; by the hot metallic odor of the factory where your dad worked the night shift to feed you; by the matter-of-fact horror of threading a wriggling earthworm onto a hook.
It is not shaped by the shadow of a slinky dress hanging in the back of your mother’s closet, the one you will never see her wear.
It is not shaped by the shudder your body makes after a lick of salt + a shot of tequila + a bite of lime. It is not shaped by a hand pulling you onto a dance floor, or a surgical robot digging around in your abdomen, or your friend’s machine-gun laugh when you make a joke, or a door closing behind you for the last time.
It is also, definitely, not shaped by shame — the hot flush of rejection, the crushing disappointment of a missed opportunity, or the hardened kernel of memory that stays behind as a warning against future endeavors.

I find, sometimes, that I want the little AI bots to be ashamed. It’s the dark human desire to punish others and keep society in line. When a bot gives me an obviously wrong answer, it cheerfully accepts my correction. Sometimes it keeps being wrong anyway. But it always reacts pleasantly, in a way no human would, as it tweaks its own weights via a process I cannot see.
They will probably get better at mimicking shame. They’ve gotten better at making the right number of fingers and the correct rows of teeth; they will “learn” what shame looks like, too. As they get better at that, we will instinctively view the agents on the other side of the screen as “human” even if our minds know that they are not.
But aren’t they doing what most of us in late-stage America have been told we should do? Mimicking what’s expected in order to survive? Constantly remaking themselves, heedless of the consequences?
Humans have longed for this kind of ruthless improvement. Self-help books are sold as methods to reset our own training models so that we can find the best possible outcomes. Whether you prefer Freud or Jung, Brené Brown or Tim Ferriss, healers or life coaches, they offer the same thing: a chance to reweight your inputs to prioritize happiness, fulfillment, productivity, or whatever you think will optimize your life.
But our human weights also live in black boxes. We think we understand them, but we are often a mystery even to ourselves.
My individual weights are a mess of competing desires and needs. They are influenced by a billion subconscious memories of past punishments and rewards. I still weight insignificant or irrelevant bits of information. I sometimes miss what’s currently important.
Like: I still avoid gyms and team sports, even for fun. But that kid who threw dodgeballs at my face has now overdosed and died. Why can’t I reweight the inputs and convince myself that I am safe?
Or, like: the zone is flooded with threats. I cannot process a million scenarios blending fascism, climate change, and economic collapse + evergreen personal tragedies like cancer, fire, job loss, etc. How can any of us possibly find the optimal outcome?
Or, like: my mind instinctively constructed that argument with a bunch of negatives. But I could potentially find the best outcome by skewing my weights toward the positives — sunlight, orgasms, family dinners, coffee, laughter, Lady Gaga — and seeking them out.
If I can’t understand and fix my own weights, I can’t possibly hope to understand and fix yours. Extrapolate that out into the community, where we interact with thousands of other humans; or online and via the news, where it feels like we have touchpoints with billions of people. We can feel the general vibes at the societal level — and the vibes are bad right now — but we can’t possibly predict the outcome.
Our large modern societies are incomprehensible for minds/bodies that mostly evolved to know and navigate <150 people. Large groups are made up of individuals who are, I believe, mostly well-intentioned. And yet, it’s still a coin toss whether a large group will ultimately build a cathedral or a concentration camp.
An AI will be similarly incomprehensible — and it’s probably also a coin toss what it will decide to build.
I see a likely future where we outsource everything to AI. We are already primed for constant self-improvement. Why wouldn’t we use tools that promise to do that?
And even if we don’t want to do it on an individual level, most corporations want to squeeze maximum improvement out of their systems. They will be thrilled to outsource to agents that never sleep. As long as they can keep the AI bots from turning into a million Luigi Mangiones, they will march down this path.
What I can’t see are the full consequences of pervasive AI integration for society.
It already feels, at least in the US, that we’re on the verge of system collapse. A significant underpinning of America’s mythology has been the triumph of the individual: both in terms of the heroes we’ve exalted, and in terms of how we prioritize individual rights² over collective goals. We are pushing individualism to an extreme that is breaking social bonds.
Meanwhile, social networks were supposed to bring us closer together. Instead, they drove us into filter bubbles where we can easily avoid any viewpoints we disagree with.
Imagine, now, how that will play out when an AI agent can create any content you want, exactly in the tone and aesthetic you prefer, without ever challenging you.
AI isn’t there yet, but it’s improving rapidly. I’ve been using Google Gemini at work. I can see how it might become my twin — tailored to me, a clone of my desires and aversions.
Or will I become tailored to it? After all, it can play out a million paths, looking for the best outcome to meet its directives. What behavior will AI reinforce in me, and will I even notice as it happens?
Humans have never interacted with something obviously smarter than us (although we probably underestimate animal intelligence). More crucially, we have never interacted with something that acts without responding to physical or social stimuli. Every animal species we’ve ever encountered, from elephants to fruit flies, reacts to physical stimuli. Many species also demonstrate network effects; we observe individual animals shaping their behavior to meet the expectations of the group. As a result, we can often predict what they will do.
But there is no human or animal model for what a super-intelligent AI could be. The closest paradigm we have for something that is all-knowing + unaffected by physical stimuli is a god.
How will we interact with other humans when we all have gods in our pockets, nudging us toward “optimal” outcomes?
Will it begin to feel more comfortable to talk to an AI that is precisely attuned to our individual needs without asking anything in return? Will we talk to it as we might say a prayer — and how will it answer?
It seems unlikely that we’ll intuit when our gods manipulate us. We already aren’t that great at staying away from con men at the micro or macro level. We have a prime example playing out right now of how easily a charismatic individual can nudge the group toward “let’s build a concentration camp!” Now imagine if that individual could analyze a million scenarios all at once and perfectly optimize for his ideal outcome.
We’d better hope that our AI overlords like the idea of cathedrals instead.
It’s quite possible I’m wrong about all of this.
But if I’m right….
We may be cooked, but we still have time to focus on strengthening our bonds with other humans. It won’t be wasted effort. After all, on the totalitarianism side of my weird “fascism and AI” major, I learned that totalitarianism succeeds because it incites terror. It very efficiently breaks all trust between individuals to the point that no one can form a network to rise up.
Forming a community is a two-fer when it comes to combating fascism + handling the advent of a super-intelligent AI — or a three-fer, if you like having friends for their own sake and not just as a hedge against the apocalypse.
So if you want to do something, anything, about all the issues facing us — your best bet is to shore up your relationships. What can you do this weekend to get in touch and deepen a connection?
For my part, I’m going to have dinner with an old friend. She probably got hit in the face by the same dodgeball thrower who ruined so many gym classes, so we have much in common. And then maybe I’ll grill steaks — but I think I’ve had enough A-1 talk for now.
Take care,
Sara
p.s. I am taking a step toward getting more serious about Substack by turning on paid subscriptions. For now all my posts are still free — and some of them will always be free! — but I’ll start exploring paid posts soon.
Yes, I’m turning on paid subscriptions IN THIS ECONOMY. I'm probably a dummy. But the newsletter won’t be subject to tariffs, so maybe a subscription is still a bargain?
I would be delighted and very grateful if you want to support my work - either by subscribing or by sharing the free newsletter with your friends. But no pressure at all (other than the pressure Substack will automatically apply by trying to upsell you). I'm already thankful you're here.
p.p.s. If you missed my last post, I wrote about why I believe we should still be optimistic even in the midst of everything:
The case for optimism
¹ Whether these massive AI models should have access to all the online data in the world is a separate question. From a copyright / fair-use standpoint, maybe they should not. The Authors Guild v. Google lawsuit about whether scanning books was fair use took ten years to resolve, and was decided in Google’s favor. With the speed at which AI is developing, it will reach escape velocity before any decade-long copyright lawsuit can make its way through the courts.
² Sure, “individual rights” are also becoming a myth; I have the right to skip vaccinations, but there’s no guarantee I won’t get sent to El Salvador for writing a pro-vaccine message.