BBN is a free newsletter about my upcoming publications, appearances, and giveaways, as well as my take on the publishing industry and the writing life. | View in Browser | Sign up here
Another year, another newsletter. As I've said before, thanks for reading. It means a lot.
Some great things are on the horizon which I will mention soon. . . .
Last year I said that I might jump into the topic of Artificial Intelligence at some point and now is as good a time as any, so here we go―
The Artificial Intelligence Trap
“A.I. undermines the human value of attention and individuality that flows from that.”
―Meghan O’Rourke
I took my first class in artificial intelligence in 1982 (yes, 1982). I was a graduate student in computer science at the University of Pittsburgh. The instructor was John Hoagland, a protégé of Marvin Minsky, one of the pioneers of A.I. I'd listened to Minsky deliver a lecture on artificial intelligence two years before, at Carnegie Mellon, and was intrigued by his claims.
Researchers back then were making the same wild claims about A.I. that researchers are making today: that intelligent machines would put everyone out of work, take over the world, become the next stage of evolution.
Does anyone remember ELIZA, the "psychotherapist" Joseph Weizenbaum built in the mid-1960s? Here she is in action:
ELIZA: “How are you today, Brian?"
Me: “I don’t know ..."
(I shrug. ELIZA can't see this, of course, because she doesn’t have eyes.)
ELIZA: "You seem distressed."
Me: “I’ve been thinking a lot lately about my childhood."
ELIZA: “What have you been thinking about your childhood?"
Me: “About my mother."
ELIZA: "Tell me more about your mother."
Me: "Well doctor, I don’t think she ever liked me."
ELIZA: “Why do you think she never liked you?"
Me: “She threw a shoe at me once and said she hated me."
ELIZA: “Why do you think she threw a shoe at you?"
And so it went (this was an actual “conversation” I had with ELIZA in 1982).
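For the technically curious: ELIZA's whole trick was keyword matching. She scanned your sentence for a trigger word, plugged any captured fragment into a canned template, and fell back on a stock phrase when nothing matched. Here's a minimal sketch of that idea; the rules below are illustrative, not Weizenbaum's original DOCTOR script:

```python
import re

# A few keyword rules in the spirit of ELIZA's "psychotherapist" script.
# These patterns and templates are made up for illustration.
RULES = [
    (r"\bmother\b", "Tell me more about your mother."),
    (r"I(?:'ve| have) been thinking (?:a lot lately )?about (.*)",
     "What have you been thinking about {0}?"),
    (r"\bshe (.*)", "Why do you think she {0}?"),
    (r"I don't know", "You seem distressed."),
]
DEFAULT = "Please go on."


def reply(text: str) -> str:
    """Return the response for the first rule that matches, echoing
    any captured fragment back at the 'patient'."""
    text = text.strip().rstrip(".!? ")  # drop trailing punctuation
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

No eyes, no memory, no understanding: just string matching and echo. That a handful of rules like these convinced people they were being listened to says more about us than about the machine.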
The A.I. systems of today are no different. They're just more sophisticated. More sophisticated sophists (Plato defined sophists as "superficial manipulators of rhetoric and dialectic").
A.I. bots are not intelligent and never will be. They're next-word calculators. Statistical regurgitators. Dead-metaphor machines. Tricksters, in essence. These large language models (LLMs), as they are called―ChatGPT, Gemini, Claude, all of them―are predicated upon a fallacy, for language is not essential to intelligence (just look at my cat!). Nor is it indicative of intelligence (are you listening, Siri?).
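What does "next-word calculator" mean, concretely? At bottom: count which word tends to follow which in a pile of text, then always emit the likeliest successor. The toy sketch below does exactly that with word pairs; real LLMs use neural networks over subword tokens and far more context, but the objective is the same kind of statistical prediction:

```python
from collections import Counter, defaultdict


def train(corpus: str) -> dict:
    """Count, for each word in the corpus, how often each word follows it."""
    words = corpus.lower().split()
    successors = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        successors[prev][nxt] += 1
    return successors


def predict(successors: dict, word: str):
    """Return the statistically likeliest next word, or None if unseen."""
    counts = successors.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]


model = train("the cat sat on the mat the cat ran")
```

Feed it "the" and it answers "cat", because that's what followed "the" most often. Nowhere in there is meaning, intention, or a mother who threw a shoe.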
Personal opinion: A computer is nothing like the human mind. A computer is a hunk of hardware that doesn't care about you or your creative work (it doesn't care about anything, actually). It's a mechanical Oz. A string-puller. Without warmth, heart, or feeling.
To paraphrase Mark Twain, “The difference between the human mind and a computer is like the difference between lightning and a lightning bug." A.I. is a model, and models are simply that: models. Consider the difference between an ocean liner and a plastic replica of one. The replica can look a lot like the real thing, and if you put it in a tub of water it will float, but it's a qualitatively different object.
Now, LLMs are excellent excavators. But that's all they are. The danger is that one comes to rely on them, and here is where I'll connect them to the literary world. One minute you're asking them to do research, then to proofread an initial draft, then to rewrite it. It's a slippery slope, and the result ends up being more "it" than you. “Ah, but I can always disregard its suggestions,” you say. And yes, that’s true. But by that point, A.I. is guiding the ship, not you.
Publishers are being inundated with manuscripts written, partially written, or enhanced by A.I. (what they term "slop"). In fact, it's becoming common for publishers to require a statement from the author that A.I. was not used in the preparation of the manuscript in any way, shape, or form.
Personally, what I find most enjoyable is the editing and the research, so I've never been tempted by A.I. What would be the point? It would take all the fun out of it!
If you'd like to delve further into this topic, here's an excellent article recently published in The Verge by Benjamin Riley, the founder of Cognitive Resonance, an organization dedicated to helping people understand human cognition and generative A.I. It's called "Large Language Mistake"―
https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
This Month’s Quote
“The soul is dyed the color of its thoughts. Think only on those things that are in line with your principles and can bear the light of day. The content of your character is your choice. Day by day, what you choose, what you think, and what you do is who you become.”
―Heraclitus
Heraclitus was an ancient Greek pre-Socratic philosopher from the city of Ephesus. A misanthrope, he was known as "the weeping philosopher." He wrote only a single work, of which only fragments survive.
That’s all for now
Thanks for reading, and hit me up with questions/comments anytime (simply reply to this email). I’ll do my best to respond quickly. And feel free to forward this email to anyone you think might enjoy my writing.
You can access my newsletters anytime, as well as a page with links to some of my favorite sites.
Best,
Brian