Is AI writing assistance ethical?

It's not just spelling. For example, in the UK, we say "in hospital". Apparently, Americans prefer "in the hospital". I didn't know that, and I have no desire to learn fluent Yankee just to do it.
And Brits don't know that a large portion of the US would be highly offended to be called a "Yankee", lmao.

I will stop now, before someone starts getting annoyed.
Honestly. And it's so silly.
 
Thanks for all your replies so far. Our red lines seem to vary. I do wonder how many editors etc. use AI for quickness and efficiency. Spellcheckers might have been seen as AI back in the day, or even the software (computers) we use to write our stories. Where do you all draw the line?
 
Where do you all draw the line?
I think the field needs to be established first. What kind of AI? The generative AI that writes for you is a zillion miles from the editing and beta-reading AI many of the above are discussing. And those are a million miles away from the killer AI robots. The term "AI" is a marketing buzzword, the likes of which we haven't seen since "digital" entered the consumer's lexicon. It needs several steps of qualification to be discussed intelligently.

I do wonder how many editors etc. use AI for quickness and efficiency.
See my above remark. What kind of AI, and how is it being used? I'm sure there are plenty of editors who will take your money and plug your manuscript into an AI model, which you probably could have done yourself for free. Then there are plenty who use it as an assist tool, because as others have pointed out: more clients, more output. Then there are plenty who won't touch it at all, but I imagine that population is shrinking. I'm sure you'll have your big-time, best-selling, legendary authors who will be fine, but EOTD there hasn't been a massive technological advance in human history that any of the "keep it real" crowd have survived, as far as I know. It's just not economically or temporally possible.

And I'd ask what you mean by "ethical" in the overall sense. It isn't a right/wrong sort of thing.
 
And I'd ask what you mean by "ethical" in the overall sense. It isn't a right/wrong sort of thing.
It's just that I hear "ethical" being used for this kind of thing. Hearing that word is what triggered my thought process in the first place.
 
It's just that I hear "ethical" being used for this kind of thing. Hearing that word is what triggered my thought process in the first place.
Exactly. And that's another warning sign, as the term (another trigger word, this time from the psychological realm) presupposes a set of legal/licensing guidelines that professionals are obligated to follow, like doctors, lawyers, financial advisors, and the like. No such thing exists in the writing/publishing world that I'm aware of, beyond the standard business practices of any industry. And a lot of the vitriol and righteous indignation you'll find on both sides of the debate is a result of the more ideological crowd believing such a thing existed in the first place. There's a lot of One True God pervading the whole thing.

Misrepresenting yourself as a person or company that does not use AI when in fact you do would certainly be unethical, but there's no moral imperative filtering the debate one way or another.

You can say AI is unethical in that many LLMs were trained on copyrighted works without permission, but that's irrelevant to this part of the debate. That's like saying the medical benefits or appeal of a certain drug are negated because it was unethically tested on animals or humans.

You can sort of say it's unethical in that it will take a bunch of human jobs, but the lever and inclined plane have blood on their hands too in that department.
 
Doesn't society at large decide what is ethical and what is not? A majority of us have agreed that murder is wrong. In the same vein, if a majority of people decide that heavy AI (editing assistance) usage is wrong, then I guess that is what the majority thinks and wants to enforce against. Though majorities can sometimes be wrong...

If you publish your book, having used a lot of Gen AI for its creation, and it turns out a majority of the population has been laid off by various AIs and an AI boycott is in effect, you'd be doing yourself a disservice.

However, if AI cures cancer and is the holy grail to human advancement and everyone changes their opinion on it to favour it for everything, you'd be doing yourself a disservice not using it.

The issue is that no one can predict how it will all turn out. We can make guesses, maybe even educated guesses, but then something completely out of the blue could happen that turns the tech 180 degrees.
 
The ethics of it, in the end, is: is it your work?

If yes, you have the right to put your name on it. Is it co-written by ChatGPT? Then ethically, the author is "Joe Schmo and ChatGPT". Maybe the next James Patterson best-seller will be "James Patterson in collaboration with ChatGPT"! If you were to give it a prompt and the AI wrote a story, and then the author changes a few lines and puts his/her name on it... not ethical. If the work was edited by a human, the editor would be given credit, so maybe "Edited by ChatGPT"?

Last week I had ChatGPT critique a story I've been working on. It was the first time I'd done that, and it was surprisingly useful. It hit things I knew needed work and brought up other points that I both agree and disagree with. I specifically told it not to rewrite anything or give any examples, because I don't want to be influenced that way. I'll use it going forward as a teacher, but I want the work to be my own.

I don't have any ethical concerns about using grammar, punctuation, and spell checkers, or the thesaurus add-in. Will future writers feel the same way about generated prose? In the past those basic skills were a necessary part of writing, and a purist today might not agree with my ethics in using tools so I don't have to learn them (again).
 
I couldn’t imagine letting an AI choose words for me, or judge the emotional tone I am aiming for. It’s my story, and I am the best judge of every part of it, not some machine. It’s not a question of ethics. It’s a question of artistry.

So I prompted ChatGPT to complete this metaphor: "Giving up word choice to an AI is like..."

And it responded:

…handing your paintbrush to a very skilled assistant who has never seen the landscape you’re trying to remember.
…letting someone else choose the spices for a meal that only you know the taste of.
…outsourcing your fingerprints.
…sending a ghost to sign your name.
…asking an echo to decide what you meant to shout.
…playing a violin while someone else chooses which notes your fingers land on.
…borrowing a voice that fits perfectly but remembers none of your wounds.
…letting a mirror write your diary.

So, AI can do it. But then it wouldn’t be mine.
 