Has the Rejection of AI Gone Too Far?
So I saw a post on Facebook talking about some recent policies regarding AI. While I am certainly not in favor of using AI to write passages of work for you, I think perhaps some of these policies have gone too far?

“The Fabulist” – “…The Fabulist is not open to works that include AI processes of any sort, including the generation of prompts, titles, names, outlines, dialogue, plot elements, descriptive passages, etc. We affirm to our readers and contributors that The Fabulist Magazine is, first and foremost, a venue for connections and encounters with unadulterated human creative works.”
“Clarkesworld Magazine” – “We will not consider any submissions written, developed, or assisted by these tools. Attempting to submit these works may result in being banned from submitting works in the future.”
“Reckoning” – “We don’t publish work which was created using any of the tools now conventionally referred to as ‘AI’.”
For example, The Fabulist doesn’t want you to use AI to generate names for you.
Let’s say I need to find out the most popular names for baby girls in 1950. Does it really matter whether I ask ChatGPT for those names, or go to BabyCenter or the Social Security Administration for the data? I’m going to get about the same answers. And while I certainly wouldn’t trust ChatGPT with data accurate enough for a nonfiction article about trends in baby girl names over the last 75 years, for fiction it really wouldn’t matter whether I used Linda or Mary from ChatGPT, or Peggy or Betsy from BabyCenter (those are just off the top of my head).
Or say I didn’t know the temperature at which water freezes. Would it matter whether I asked ChatGPT, googled it, or consulted a science textbook? I’d get the same answer in each case.
I certainly wouldn’t want AI to write my work for me. The other day I needed to know what would happen if someone walked a short distance, barefoot, in the snow. ChatGPT offered to write the passage for me. Uh… no? For one thing, AI doesn’t know what I’m trying to accomplish in that passage; it also doesn’t know the voice of the character. And it wouldn’t matter how good AI got: unless and until it can read minds, it still wouldn’t know any of that (unless I told it, of course). And if I’m going to explain all of that to AI, I might as well write the passage myself.
In my opinion, AI is a tool. In fact, I used it to tweak the theme I’m using on this website. The theme was Capitalizing The First Letter Of Every Word And It Was So Damned Annoying! I also don’t want people thinking I’m an idiot who doesn’t know how to capitalize a sentence, but mainly, it’s annoying. (I don’t know how to fix that in the text editor, so I’m just writing my posts in Word before cutting and pasting.) After about an hour of fruitless Google searches, ChatGPT gave me a couple of lines of code in about five minutes.
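For the curious: a theme that capitalizes every word is usually doing it with a CSS `text-transform` rule, and the fix is to override it. I’m not reproducing the exact code here; the selectors below are guesses that would depend on the particular theme, so treat this as a sketch of the kind of fix involved:

```css
/* Hypothetical fix: the theme likely applies text-transform: capitalize
   somewhere. Overriding it with "none" restores normal casing.
   The selectors are placeholders — inspect your own theme to find
   which elements the rule is actually applied to. */
.entry-title,
.entry-content {
    text-transform: none;
}
```

In most themes you’d drop a snippet like this into the “Additional CSS” box in the customizer rather than editing the theme files directly.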
I also thought it would be a useful tool for asking questions like “what are the effects of getting high?” but I guess book publishers frown on that? I actually didn’t use ChatGPT for that one (I used Google to find some relevant sites), but I did discover that the imaginary drug one of my characters uses in my book would fall into the category of empathogens in order to produce the effects on him that I’d like. But would it really have made a difference if I’d used ChatGPT instead? Since it’s a completely imaginary drug in an imaginary universe, there are no details I actually have to get right.
So what is your opinion on using AI to ask questions?