Is AI writing assistance ethical?

It's not just spelling. For example, in the UK, we say "in hospital". Apparently, Americans prefer "in the hospital". I didn't know that, and I have no desire to learn fluent Yankee just to do it.
And Brits don't know that a large portion of the US would be highly offended to be called a "yankee" lmao.

I will stop now, before someone starts getting annoyed.
Honestly. And it's so silly.
 
Thanks for all your replies so far. Our red lines seem to vary. I do wonder how many editors etc. use AI for quickness and efficiency. Spellcheckers might have been seen as AI back in the day, or even using software (computers) to write our stories at all. Where do you all draw the line?
 
Where do you all draw the line?
I think the field needs to be established first. What kind of AI? The generative AI that writes for you is a zillion miles from the editing and beta-reading AI many of the above are discussing. And those are a million miles away from the killer AI robots. The term "AI" is a marketing buzzword, the likes of which we haven't seen since "digital" entered the consumer's lexicon. It needs several steps of qualification to be discussed intelligently.

I do wonder how many editors etc. use AI for quickness and efficiency.
See my above remark. What kind of AI, and how is it being used? I'm sure there are plenty of editors that will take your money and plug your manuscript into an AI model, something you probably could have done yourself for free. Then there are plenty that use it as an assist tool, because as others have pointed out, more clients, more output. Then there are plenty that won't touch it at all, but I imagine that population is shrinking. I'm sure you'll have your big-time, best-selling, legendary authors who will be fine, but EOTD there hasn't been a massive technological advance in human history that the "keep it real" crowd have survived, as far as I know. It's just not economically or temporally possible.

And I'd ask what you mean by "ethical" in the overall sense. It isn't a right/wrong sort of thing.
 
And I'd ask what you mean by "ethical" in the overall sense. It isn't a right/wrong sort of thing.
It's just that I hear 'ethical' being used for this kind of thing. Hearing that word triggered my thought process in the first place.
 
It's just that I hear 'ethical' being used for this kind of thing. Hearing that word triggered my thought process in the first place.
Exactly. And that's another warning sign, as the term (another trigger word, this time from the psychological realm) presupposes a set of legal/licensing guidelines that professionals are obligated to follow, like doctors, lawyers, financial advisors, and the like. No such thing exists in the writing/publishing world that I'm aware of, beyond the standard business practices of any industry. And a lot of the vitriol and righteous indignation you'll find on both sides of the debate is a result of the more ideological crowd believing such a thing existed in the first place. There's a lot of One True God pervading the whole thing.

Misrepresenting yourself as a person or company that does not use AI but in fact does would certainly be unethical, but there's no moral imperative filtering the debate one way or another.

You can say AI is unethical in that many LLMs used copyrighted works without permission to train themselves, but that's irrelevant to this part of the debate. That's like saying the medical benefits or appeal of a certain drug are negated because it was unethically tested on animals or humans.

You can sort of say it's unethical in that it will take a bunch of human jobs, but the lever and inclined plane have blood on their hands too in that department.
 
Doesn't society at large decide what is ethical and what is not? A majority of us have agreed that murder is wrong. In the same vein, if a majority of people decide that heavy AI (editing assistance) usage is wrong, then I guess that is what the majority thinks/wants to enforce against. Though majorities can sometimes be wrong...

If you publish your book, having used a lot of Gen AI for its creation, and it turns out a majority of the population has been laid off by various AIs and an AI boycott is in effect, you'd be doing yourself a disservice.

However, if AI cures cancer and is the holy grail to human advancement and everyone changes their opinion on it to favour it for everything, you'd be doing yourself a disservice not using it.

The issue is that no one can predict how it will all turn out. We can make guesses, maybe even educated guesses, but then something completely out of the blue could happen that turns the tech 180 degrees.
 
The ethics of it, in the end, is: is it your work?

If yes, you have the right to put your name on it. Is it co-written by ChatGPT? Then ethically, the author is 'Joe Schmo and ChatGPT'. Maybe the next James Patterson best seller will be 'James Patterson in collaboration with ChatGPT!' If you were to give it a prompt and the AI wrote a story, then the author changed a few lines and put his/her name on it... not ethical. If the work was edited by a human, the editor would be given credit, so maybe 'Edited by ChatGPT'?

Last week I had ChatGPT critique a story I've been working on. It was the first time I've done that, and it was surprisingly useful. It hit things I knew needed work and brought up other points that I both agree and disagree with. I specifically told it not to rewrite anything or give any examples, because I don't want to be influenced that way. I'll use it going forward as a teacher, but I want the work to be my own.

I don't have any ethical concerns using grammar, punctuation and spell checkers, or using the thesaurus add-in. Will future writers feel the same way about generated prose? In the past those basic skills were a necessary part of writing, and a purist today might not agree with my ethics in using tools so I don't have to learn them (again).
 
I couldn’t imagine letting an AI choose words for me, or judge the emotional tone I am aiming for. It’s my story, and I am the best judge of every part of it, not some machine. It’s not a question of ethics. It’s a question of artistry.

So – I prompted ChatGPT to complete this metaphor - Giving up word choice to an AI is like

And it responded –

…handing your paintbrush to a very skilled assistant who has never seen the landscape you’re trying to remember.
…letting someone else choose the spices for a meal that only you know the taste of.
…outsourcing your fingerprints.
…sending a ghost to sign your name.
…asking an echo to decide what you meant to shout.
…playing a violin while someone else chooses which notes your fingers land on.
…borrowing a voice that fits perfectly but remembers none of your wounds.
…letting a mirror write your diary.

So, AI can do it. But then it wouldn’t be mine.
 
The ethics of it, in the end, is: is it your work?
This seems to be the question that everyone is circling. Ultimately I don't feel this is an accurate or complete representation of the ethical dilemma.

Forget for a moment of illustration how AI currently functions.

Imagine, as a computer programmer, I create a program that can write, or draw, or whatever. In order for it to achieve a result, good, bad or otherwise, I have had to program in the instructions on how to do that. The word choices or a methodology for choosing from a known dictionary. The placement of a line or colour. Everything the program knows how to do follows directly from an instruction that I gave it. So, would you say that the result is a product of my own work?

Consider human assistants. Michelangelo did not paint the Sistine Chapel by himself, he had a dozen assistants who performed perfunctory tasks, but also contributed to the painting. So, which parts or what percentage of the resulting artwork would you consider to be Michelangelo's?

Imagine a human assistant who had studied under Tolstoy, Tolkien, Dickens, Austen, Woolf, and anyone else you consider to be a literary giant. Imagine this assistant came to your home and offered to help proofread, edit, and make suggestions to your text. You would, I expect, evaluate the suggestions with the gravitas due that assistant's experience. Assuming you adopted some of their suggestions into your work, does that make it less yours? At what point or percentage does the work become not yours?

Now, let's return to our AI assistant. At what point or percentage of using an automated assistant to edit or make suggestions, or even generate portions of text - which presumably will be reviewed and accepted by the author - does the work become no longer the author's?

Where I believe ethics may creep in - or at least some morally grey area - is in how the AI obtains its knowledge. It is not the case that the author here programmed in specific language instructions, so the results are not entirely of his own creative efforts. And, unlike our human assistant, the AI's expert knowledge was not shared voluntarily by the massive number of creatives that it has consumed. Was the scraping and training of AI unethical? Is its redistribution of this knowledge unethical? (These questions may well be for a different topic.)

Indulge me in another hypothetical. Imagine a human assistant, but this time they don't study under the artists. Maybe they go to the library for years on end, reading and consuming all the books and art they can get their hands on. Maybe they sat in the Sistine Chapel and watched Michelangelo apply plaster to the walls, and pigments to the plaster. Maybe they trained themselves without anyone ever noticing. Granted, the lifetime for a human to match the volume of data consumed by an AI is probably unrealistic and unobtainable. But, for the hypothesis, if this assistant came to your door and offered their services, would it be unethical to accept?
 
Ahh, the advance of technology, dragging civilization along behind it like an old, worn-out comfort blanket.

Who can forget the social unrest caused by the invention of gas lighting, and then electricity and the lightbulb? Our very human sleeping patterns have never been the same.

I think about the horse sometimes. The world standard in land transportation for thousands of years, until the invention of the automobile. A large swath of civilized society was up in arms about these noisy, dangerous new contraptions. Now the horse is relegated to race tracks and 'horse people'.

Man has yearned for flight since he witnessed the first bird take to wing and explore the skies more freely than any human could ever wish to... until the airplane came along. Now we cry about crowding and sickness, TSA agents, metal detectors and strip searches... but we still board the plane.

I have misgivings about bearing witness to the birth of the internet. Now it seems I am one of those who throws their arms to the skies and cries out for justice against this intrusion of technology into our ignorant society.

"Smart" phones...heads down, brainless and getting dumber with each generation.

Should I wail ineffectively against the rise of AI as well?

To what end...
 
I don't think automobiles and TikTok are the same thing.

Though, let it be known, I don't think fiction writers and dock workers are the same thing either; I'm not concerned about AI stealing the jobs of creatives, at least fiction authors. I suppose visual creators and multimedia creators have already lost work given the AI ads we're seeing.

I also don't think spell check is the same thing as an LLM. Nor are paid assistants. Nor are human brains.
Exactly. And that's another warning sign, as the term (another trigger word, this time from the psychological realm) presupposes a set of legal/licensing guidelines that professionals are obligated to follow, like doctors, lawyers, financial advisors, and the like. No such thing exists in the writing/publishing world that I'm aware of, beyond the standard business practices of any industry.
You can say AI is unethical in that many LLMs used copyrighted works without permission to train themselves, but that's irrelevant to this part of the debate. That's like saying the medical benefits or appeal of a certain drug are negated because it was unethically tested on animals or humans.
Well, not to put too fine a point on it, but I can appreciate the link being drawn. The starting point, or even legal guideline to use your terms, is: "It is unethical to take someone's work and publish it as your own." That's setting aside the environmental angle, etc.

Machine-gun nests and trenches are in place around the question of where data harvesting to feed an algorithm fits into that guideline. It's obviously not the same as copying and pasting a passage from a Stephen King book (well, usually; there are exceptions). However, it's not the same as prose emerging from a work-inspired mind either. It's somewhere in the middle. We do not agree on where it is in the middle.

For your example on medication, it would fit if producing the medication required continuously and unethically using animals or humans. There might be some hesitation to use semaglutide if it required puppy spinal fluid... oh, who am I kidding, it wouldn't matter.
 
I don't think anyone is talking about the generative AI, but rather the editing auxiliary. Unless we're just going for the full Terminator kill bots here...
Oh, not from you, I guess. There was some stuff about the Sistine Chapel, gen AI in a published book, AI choosing one's next word. It felt to me that it was blending into the discussion, which makes sense, since editing/gen is a fuzzier line than is being presented, but I'm happy to pull back.

I'll just defer to my other post: the question of ethics in that particular case should only relate to the tool and its corporation itself. I'm not going to know the difference if someone's creative decisions (i.e. emotional tone, what should happen next?) are being at least partially guided by the algorithmic product of multibillion dollar corporations. The fact that it would bother me shouldn't bother anyone else. I think it's an ego thing at that point, but I don't mean that disparagingly. It's similar to: will you ever take the time to write out a multi-step unit conversion or a quadratic equation? You have no reason to ever again. It's a complete waste of time, right? Well, if someone shrugs and says "Yes Stuart, it is a waste of time." I don't have a sharp rebuke or anything loaded in the chamber for them.

That's more or less a blabbier way of saying what Louanne said: it being a matter of artistry rather than ethics.
 
This seems to be the question that everyone is circling. Ultimately I don't feel this is an accurate or complete representation of the ethical dilemma.
So, "Is it yours?" is either an oversimplification or doesn't take into account the ethics of the AI tool itself? Yes, easier said than defined.

The first part, "Is it yours?" is either a personal choice, or if a commercial success, will be fought out in the courts.
make suggestions to your text.
Whether suggestions come from AI or from a human, the crux of the issue is that they are based on your text. A word, sentence, or flow suggestion that is based on your original work, is then presented to you, and is incorporated by you is, IMHO, still your work. I don't see that as any different than workshopping, editing, beta reading, etc.

At what point or percentage of using an automated assistant to edit or make suggestions, or even generate portions of text - which presumably will be reviewed and accepted by the author - does the work become no longer the author's?
Difficult to answer. I would make a distinction between editing and verbatim insertion/replacement. If one paragraph out of a thousand is written by a collaborator, I think the work is still yours. Michelangelo's assistants could not have painted the Sistine Chapel without him; the work is his.

Where I believe ethics may creep in - or at least some morally grey area - is in how the AI obtains its knowledge
I think AI is here to stay. It's becoming a fact of life, like cell phones. Was its training data ethically obtained, i.e. not stolen? We know that OpenAI took some data illegally. All of our individual data is collected and sold to anyone who wants it, then turned around to advertise to us or used for law enforcement. I never gave permission for my data to be collected (notwithstanding user agreements never read). However, I benefit every day by using sites that depend on advertising to exist.

Is AI bad for society? Probably in some ways; it will probably be great for society in others. I worry about the loss of skill when individuals can depend on AI to think, research, and come to an informed opinion for them. The same way we've already lost a tremendous number of skills to replaceable parts and mass-produced furniture, to name just a couple. AI will accelerate that negative trend into new areas.

But, for the hypothesis, if this assistant came to your door and offered their services, would it be unethical to accept?
I don't think so, no. That would be like ignoring all the knowledge that came before us and attempting to start from zero.


BTW @defaux -- big fan of your writing.
 
It's somewhere in the middle. We do not agree on where it is in the middle.
This. And I don't think that is going to be an easy agreement to come to. I can imagine popular opinion will swing wildly either way until some external factor comes to make a decision for us. The bubble may pop. Or the hype may simply fade over time as the current generation grows up with the technology being the norm.

"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."
-Max Planck

So, AI can do it. But then it wouldn’t be mine.
I agree. But then this does not define the choice as a question of ethics.

Is AI bad for society? Probably in some ways; it will probably be great for society in others. I worry about the loss of skill when individuals can depend on AI to think, research, and come to an informed opinion for them. The same way we've already lost a tremendous number of skills to replaceable parts and mass-produced furniture, to name just a couple. AI will accelerate that negative trend into new areas.
There is a risk that such technology poses at a societal level. However, this is along the same lines as any technological advancement since the industrial revolution. Is it a loss for society, the skills that no longer exist? Perhaps, but this is largely opinion based. For the most part, society readjusts to accommodate. And again, this does not frame the issue as a question of ethics.

"Is it yours?" is either a personal choice, or if a commercial success, will be fought out in the courts.
Legally, as it stands, the courts refuse to assign copyright ownership to AI for generated works. So, legally the ownership does lie with the user/artist who used the AI tool to create the work. Does that imply the creative process is also theirs? Perhaps another matter of opinion, not ethics.

easier said than defined.
That was largely my point. Not that the question was incorrect, but that it does not encompass the entirety of what should be considered. And there is a reason I framed my post as questions - I don't presume to have the answers.

BTW @defaux -- big fan of your writing.
Thank you. That is truly appreciated.

And @transplant I hope you didn't feel I was picking on you specifically. Your post was just a convenient quote for the point I had seen several people make.

I would be curious to see a general opinion of the publishing attribution percentages. As in, for books that get published with a line like "This work was written 1.5% by AI." Is this fair? Should it be required? I certainly see a stigma getting attached to it.

On the other hand, you wouldn't see a book with a line like "This was written 1.5% by Such-and-such Editorial Services." Which may be equally true, if the author gets a manuscript back from the editor and accepts their revisions. How often is that the case? This is beyond my scope of experience. While the editor may be noted in the publication details, they certainly wouldn't be attached as co-author in a byline. Neither would there be such an expectation, nor stigma associated.

What precisely is the difference between using a human editor and an AI editor? (Quality semantics notwithstanding.)
 
On the other hand, you wouldn't see a book with a line like "This was written 1.5% by Such-and-such Editorial Services." Which may be equally true, if the author gets a manuscript back from the editor and accepts their revisions. How often is that the case? This is beyond my scope of experience. While the editor may be noted in the publication details, they certainly wouldn't be attached as co-author in a byline. Neither would there be such an expectation, nor stigma associated.
Another example that has come to mind in the past, in regard to "how much of this is written by you": at what point is one's spouse essentially a co-author? Though she'll probably at least get a mention in the back matter.
 