Is AI writing assistance ethical?

What precisely is the difference between using a human editor and an AI editor? (Quality semantics notwithstanding.)
By the parameters of this discussion: nothing. Maybe that's the crux of the debate. Not the human writer vs the generative AI writer, but the human editor vs the AI auxiliary editor. Having watched this debate over the last few years, and assuming we can stop clutching our pearls about machines imitating humans, that would seem to be the real question.
 
To be great, truly great, you have to be the kind of person who makes the others around you great.

That's something Mark Twain said. You know, the fella we quote for every birth, death or marriage. The sharp wit who has said something that's applicable whether you stub your toe or run for office.

I typed that into a Word document recently and the whateveryoucallit suggested a rewrite: "Use more concise language." The AI has the temerity to suggest editorial changes for one of the most everlasting voices of wisdom, wit, and intellectual slam-down: "Use 'must' instead."

We should let that thing go nuts on Huckleberry Finn.

AI is trying to murder Mark Twain.

Ethics, to my mind, come in when there's deceit or a lack of transparency in usage. My reasons for not using AI in writing aren't connected to an ethics dilemma; it's more that the steering, however lightly applied, robs the writing of its individuality, which is what brought us to it in the first place. Others view that differently and say they can impose their own vision regardless of using some features of AI. I don't think I can, and would prefer to be known by my imperfections.
 
Digging deeper into the ethics part, how many of you, like me, have downloaded or streamed books, films and songs for free? This is where hypocrisy can come into it, i.e. we condemn the one thing in which our creativity is involved, but not the others where it is not. And also (because I'm on a roll after so much coffee): is DJ music, i.e. the kind that mixes songs and tunes and stuff, also a type of AI, since it is artificially created? (Phew!) I'd better have a lie down now.
 
I typed that into a Word document recently and the whateveryoucallit suggested a rewrite: "Use more concise language." The AI has the temerity to suggest editorial changes for one of the most everlasting voices of wisdom, wit, and intellectual slam-down: "Use 'must' instead."

The AI doesn't have the temerity to do anything. It's simply doing what it's being told to do. It doesn't make a judgement on the value of that sentence, because it hasn't been told to. If anyone is to blame for that, it is Microsoft. But this is why generative AI shouldn't be used to actually do any writing, or to make word choices. You can also turn that particular check off in Word.

But, you know, the debate of ethics has been around long before AI. Is it ethical to put your name on a ghost-written "autobiography"?
 
DJ music i.e. the ones that mix songs and tunes and stuff also a type of AI as it is artificially created.

It isn't AI. There are two words in "AI", and the second one is "intelligence". Where is the "intelligence" in that process, if a human is choosing what, and how, to remix the music? If you only focus on the word "artificial", then almost everything, from most of the food you eat, the house you live in, and the cup you drink from, to a hole someone dug in the ground, is "AI", because it wasn't created "naturally".
 
It isn't AI. There are two words in "AI", and the second one is "intelligence". Where is the "intelligence" in that process, if a human is choosing what, and how, to remix the music? If you only focus on the word "artificial", then almost everything, from most of the food you eat, the house you live in, and the cup you drink from, to a hole someone dug in the ground, is "AI", because it wasn't created "naturally".
But the way it's produced uses all sorts of AI to improve it. And I think the definition of AI is a very grey area.
 
And I think the definition of AI is a very grey area.
That's because it's a catchall marketing term, like "digital" was 25 years ago. My new clothes dryer is powered by AI that can tell when my clothes are dry. The one I had before it was a digital dryer that could tell when my clothes were dry. The one before that predated tech marketing (but not domestic marketing) and was just a clothes dryer. It also could tell when my clothes were dry.

And also (because I'm on a roll after so much coffee) is DJ music i.e. the ones that mix songs and tunes and stuff also a type of AI as it is artificially created.
Was Pro Tools considered AI twenty years ago, when it did the same thing?

It's all a bunch of marketing nonsense. Don't fall for it.
 
And I think the definition of AI is a very grey area.
It's not, actually. In this thread, and almost always, when we are discussing AI in writing, we refer to generative AI, specifically Large Language Models, and even then, specifically when it is used to write prose. Far fewer people have an objection to using it for research.

The grey area is really where it's used for critique or editing.

I mean, who cares if you use an AI coffee machine to brew your double soy latte with extra non-dairy cream to keep you going while you write?
 
I think we're getting close to the crux of the argument here.

By the parameters of this discussion: nothing. Maybe that's the crux of the debate. Not the human writer vs the generative AI writer, but the human editor vs the AI auxiliary editor.

Yes. It's the difference between "AI Bot, write me a story about a cat and a grasshopper that live in Chicago in the Roaring Twenties" and "AI Bot, here is a story about a cat and a grasshopper that live in Chicago in the Roaring Twenties. Please look it over for glaring defects in phrasing and for possible anachronisms."

If we're talking about using AI within the context of this forum, the rules are probably much different from those of the world at large. We expect that what we get from our fellow writers is the unalloyed product of their labor. When we enter a contest here, we expect the same. The training wheels are off. So using AI in this context would be unethical, in that we are not living up to the expectations of the other people on the forum.

But if you took something you've written, even when it's first posted on the forum, and then processed it through AI and released it into the wild, then the ethics are much blurrier. I might submit such a piece to my local paper, but would hesitate to send it to the New Yorker.

To be great, truly great, you have to be the kind of person who makes the others around you great.

(snip)
I typed that into a Word document recently and the whateveryoucallit suggested a rewrite: "Use more concise language." The AI has the temerity to suggest editorial changes for one of the most everlasting voices of wisdom, wit, and intellectual slam-down: "Use 'must' instead."
The AI substitutes "must" for "have to be" because its algorithm has been adjusted to do it. It presumes that regional variations and colloquialisms have no place in its purified output. Mark Twain knew better. He had the choice between "must" and "have to be" and chose the latter, because the former is more commonly used by a superior dictating something to an inferior, whereas the latter is more commonly used between equals.

Similarly, E. B. White pointed out in The Elements of Style that Mr. Lincoln was flirting with disaster when he used the phrase "four score and seven years ago" rather than the more concise "eighty-seven years ago", but Lincoln went with cadence rather than economy. When AI has the ability to discern between the two, and to choose cadence, then we'll have a real problem with AI quality.

It bears repeating what Twain said in his foreword to Huckleberry Finn:

In this book a number of dialects are used, to wit: the Missouri negro dialect; the extremest form of the backwoods Southwestern dialect; the ordinary “Pike County” dialect; and four modified varieties of this last. The shadings have not been done in a haphazard fashion, or by guesswork; but painstakingly, and with the trustworthy guidance and support of personal familiarity with these several forms of speech.

I make this explanation for the reason that without it many readers would suppose that all these characters were trying to talk alike and not succeeding.

I can see AI obliterating these nuances of dialect, resulting in the deletion of a major theme of the book.

In sum, bear in mind what Twain said: "The difference between the right word and the almost-right word is the difference between the lightning and the lightning bug." We can tell the difference when we see it. But can AI do the same?
 
But then there is the question of class and cost. Poor people won't be able to access human editors, in most cases. And I won't begrudge them AI editing work for their projects.
Great point. In addition to your example, an aspiring author may have a cognitive challenge where generative AI assistance could help them more accurately or conveniently express themselves.
 
I would be curious to see a general opinion of the publishing attribution percentages. As in, for books that get published with a line like "This work was written 1.5% by AI."
This strikes me as a pragmatic approach. I like that it aids transparency and may help to build trust with an audience.
 
This strikes me as a pragmatic approach. I like that it aids transparency and may help to build trust with an audience.
That chain goes all the way back to the submission level, where agents and editors are already using AI to vet manuscripts. I'd like them to disclose that as well. It's amazing how quickly the righteous indignation from the publishing sector quieted down when an opportunity to seize the means of production presented itself.

 
And @transplant I hope you didn't feel I was picking on you specifically. Your post was just a convenient quote for the point I had seen several people make.
@defaux Not a problem. Pick away. I enjoyed the discussion.

I still think the bottom line is that if a creative writer claims ownership, the test is "Is it yours?" A person should be ethically or honor bound to give credit for significant edits, but that doesn't mean it's somebody else's work. The current versions of public AI are a step above the tools that for decades have been steadily getting smarter and more useful. With AI, they have progressed to the point where cheating is easy; not great, but for most not an issue.

@JLT makes a good point in that rules and expectations change depending on where you are in the real world, but there should still be an ethical bottom line. Maybe it's as simple as "No plagiarism."

This is off-topic, but the term "AI" is still the misnomer it was 40 years ago, when it was applied to well-coded decision trees that were more properly called expert systems. LLMs and the other technologies are amazing, and a big leap forward. LLMs can pass the 'indistinguishable from a human in conversation' test, but not yet the 'general intelligence' test. When it reaches that level, I suppose it could then properly be called "AI". And, as @Homer Potvin says, it's become a marketing buzzword.

Where can I buy an AI dryer? Will it tell me when my shirts are out of fashion, or hopefully, back in fashion?
 
If we're talking about using AI within the context of this forum, the rules are probably much different from those of the world at large. We expect that what we get from our fellow writers is the unalloyed product of their labor. When we enter a contest here, we expect the same. The training wheels are off. So using AI in this context would be unethical, in that we are not living up to the expectations of the other people on the forum.
Right. Here we have a good example of crossing an ethical boundary. Using a tool (AI) in a context where that tool is explicitly prohibited. That boundary is deceit, as @Rigor Mortis also highlighted. I would argue that the act of deception itself is the ethical breaking point, and not the specific use of the tool.

If I carry a knife through an airport, I'm going to get arrested. But that doesn't inherently imply that knives are bad or wrong. I'm just doing it in the wrong context. And if I knew it was wrong when I brought it in, there is the ethical boundary - I chose to break the rules.

I can see AI obliterating these nuances of dialect, resulting in the deletion of a major theme of the book.
I completely agree, and I think it's an exceedingly sad state of affairs, the path artistic creation is tracking down this road. I also take issue with publishers re-releasing books like Roald Dahl's, edited to remove any sensitive "trigger" words. And frankly, I find the latter more ethically offensive than the former. It offends the same sensibilities @Rigor Mortis felt against having Twain reworded. The primary difference is attribution: I can accept that the AI is not really choosing to do this, but human editors and publishers chose to modify and republish those books. Far more damning, IMHO.

bear in mind what Twain said: "The difference between the right word and the almost-right word is the difference between the lightning and the lightning bug." We can tell the difference when we see it. But can AI do the same?
I would argue that it can, given the correct set of parameters. But you have hit on the point people struggle with in using these systems for editing: they don't clearly specify requirements. If you just give it a Twain quote, without any instructions, it is going to try to modernise the language; that's essentially the default behaviour. If you give it your own work to critique, and specify how you want the voice to sound, the tone, or that you're writing a character in a regional dialect, then it's able to scrutinise much more accurately within those parameters.

I'm not trying to convince anyone of anything, just putting information on the table.

This strikes me as a pragmatic approach. I like that it aids transparency and may help to build trust with an audience.
Here, though, I disagree. I think an author stating that 1.5% of their work is AI is attaching a stigma to their own name (in the current climate). People are far more likely to read it as "This author uses AI, doesn't write his own words". Now, if your book is 100k words, that means 85k of your lovingly crafted work is going to be labelled "AI slop" by association.

Conversely, my earlier point was, I don't see how using editing services is any different. You wouldn't publish with "My editor rewrote 1.5% of my book" and no-one would complain that an author doesn't produce his own work because he uses an editor.

With AI they have progressed to the point where cheating is easy, not great, but for most not an issue.
Cheating is certainly another ethical boundary. Perhaps this is where the "AI, write me a story ..." generative approach fits. But I don't think critiquing or editing falls into the category of cheating.
Where this is most concerning for me is within the education system. If things continue the way they are at present, I foresee university degrees becoming completely worthless, and the collapse of the entire tertiary education system as we know it. (This, however, may not be a discussion for this forum.) I've never done any writing-based education, BA or MFA or anything, so maybe someone who has might comment?

Maybe it's as simple as "No plagiarism."
I feel like plagiarism is a more ill-defined area than a simple label. We all, as writers, take influence from the things we read and love. What if we borrow a specific word, or phrase, or line from our favourite author, does that become plagiarism? Some might call it homage. And I expect quantity is relevant, though again that ratio is not defined.

It made me think, as a thought experiment, if AI takes a word from each of a million sources, and puts them together to make something different, does that fall under plagiarism? It hasn't explicitly written any of it, it's all entirely borrowed from others. Yet I doubt any specific source could be cited or recognised within the result.
This is my biggest ethical quandary with AI, and it comes back to how the training data was supplied.

I suppose, in the end, one could argue that every word I've ever written, I've first read from somewhere else.
There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope.
- Twain

LLMs can pass the 'indistinguishable from a human in conversation' test
This is curious. My experience has been that the longer the conversation goes on, the more inconsistencies creep in. Kind of like having a conversation with someone who has dementia and can't remember the beginning of the conversation or things they've already said.

It can be very difficult to separate the hype from the facts, and there is a lot of hype around AI in both directions (for and against).
 
This is curious. My experience has been that the longer the conversation goes on, the more inconsistencies creep in. Kind of like having a conversation with someone who has dementia and can't remember the beginning of the conversation or things they've already said.

It can be very difficult to separate the hype from the facts, and there is a lot of hype around AI in both directions (for and against).
I think the test keeps evolving as LLMs evolve. Could one's grandmother tell the difference in a blind test today?

It made me think, as a thought experiment, if AI takes a word from each of a million sources, and puts them together to make something different, does that fall under plagiarism? It hasn't explicitly written any of it, it's all entirely borrowed from others. Yet I doubt any specific source could be cited or recognised within the result.
This is my biggest ethical quandary with AI, and it comes back to how the training data was supplied.
I love thought experiments: move to an extreme to test or isolate a factor. In the case of an LLM, though, its probabilities have been defined by the other works, so I'm not sure we can isolate "one word." If a human cut out one million words from one million magazine articles, he's just taking the words. The AI model knew all the words basically from the start. It's the contextual weighting that matters.


Here's a question that I think follows the point of ownership: what do we mean when we write/say "Written by X"?

Most literally, if I dictate a story to someone typing it, have I written it?

Next, if I only write a rough draft and don't edit a single passage, instead someone else performs all editing and revisions, is it still reasonable to say I am the author?

Further, if I ask my wife to supply the story details, character names, events, even themes, but I write all the prose, am I the author?
 
AI generated:
Miss Cordelia Finch arrived in town with the quiet confidence of a person who had long ago decided that the world was mostly nonsense but that she meant to get on with it anyway. She was not tall, nor particularly grand in dress, yet she carried herself with such steady composure that people found themselves stepping aside without quite knowing why. Her eyes had the habit of observing everything and commenting on nothing, which made the talkative uneasy and the foolish suspicious. The men of the town first noticed her because she had a way of listening that suggested she understood them better than they understood themselves, and the women noticed her because she never once appeared impressed by the men. It was therefore settled, before a fortnight had passed, that Miss Finch must either be a person of uncommon sense or a person of extremely dangerous intentions—and the town could not yet decide which.

I told it to do it in the style of Mark Twain. I have no idea how accurate this is, since I haven't read Twain since I was in primary school.

Robert Louis Stevenson:
Miss Cordelia Finch came into the town as quietly as the tide that steals upon a sleeping shore, and yet her presence altered the place in ways that no one could immediately name. She was a woman of modest height and sober dress, but there was in her bearing a certain resolution, as though she carried within her some private compass by which the world itself might be judged. Her eyes were keen, grey things, attentive to every motion and word, and though her speech was gentle, one sensed behind it a mind both lively and watchful. The townsfolk, who had long been accustomed to reading one another with easy certainty, found Miss Finch strangely difficult to reckon with; for she moved among them with courtesy and composure, yet gave the impression—faint but persistent—that she knew more of the world’s turns and tempers than she had any present intention of saying.

Sir Arthur Conan Doyle:
Miss Cordelia Finch made her appearance in our town under circumstances that were, at first glance, entirely ordinary; yet it soon became evident to those with any habit of observation that she was a person of rather uncommon character. She was a lady of moderate stature, plainly but neatly dressed, whose composed manner and steady grey eyes suggested both intelligence and self-possession. There was about her conversation a clarity and restraint which set her apart from the more excitable spirits of the place, and one had the curious sense that she was perpetually taking note of small particulars which escaped the rest of us altogether. Indeed, within a very short time it was remarked that Miss Finch seemed to understand certain matters in the town with surprising accuracy, though no one could quite determine how she had come by the knowledge. It was a circumstance which, I confess, excited my curiosity from the first moment I observed her.

I'm not suggesting these are particularly good; just pointing out that what AI can do these days is not the AI of two or three years ago. It's better than many human writers could manage. If you give it the right prompt, it can make a reasonable attempt at something other than a standard, modern, minimalist style. I wouldn't underestimate it. That's a reason to fear what it can do, as well as a reason to use it (by which, I have to repeat yet again, I do not mean having it write for you). Although, if you get it to criticise these passages in the styles of these authors, it will also point out all sorts of issues, so it's not consistent. YMMV.
 
I'm not suggesting these are particularly good; just pointing out that what AI can do these days is not the AI of two or three years ago. It's better than many human writers could manage. If you give it the right prompt, it can make a reasonable attempt at something other than a standard, modern, minimalist style. I wouldn't underestimate it. That's a reason to fear what it can do, as well as a reason to use it (by which, I have to repeat yet again, I do not mean having it write for you). Although, if you get it to criticise these passages in the styles of these authors, it will also point out all sorts of issues, so it's not consistent. YMMV.

Honest question - how does any of this help someone become a better writer?
 
In the case of an LLM, though, its probabilities have been defined by the other works, so I'm not sure we can isolate "one word." If a human cut out one million words from one million magazine articles, he's just taking the word. AI gen knew all the words basically from the start. It's the contextual weighting that matters.
Right, it's like the million monkeys bashing away at a million keyboards, statistically, would eventually write the works of Shakespeare. Only, the LLM knows how to weight the probabilities.
 
Although, if you get it to criticise these passages in the styles of these authors, it will also point out all sorts of issues, so it's not consistent. YMMV.
Right, and that was my point. You need to instruct it how to critique your work in order to get useful results.
 