Is AI writing assistance ethical?

I think you're fine so long as you're sticking to the quantitative aspects, as AI is basically an infinite curator of knowledge. I wouldn't rely on it for tone or emotional response, though. I'm sure there's some quantitative probability it can detect, but I couldn't imagine asking an AI whether a plot twist was effective or not. How would it know?

It would know based on how people have spoken about similar plot twists. Your plot twist is that the narrator is actually a ghost? It knows what people have said about that before. It's not quite that simple though. It will also base it on how the rest of the plot has played out before that point. By analysing billions of words, it'll get the word associations roughly right, because words are how we express our reactions.

It may not get it right. A human isn't guaranteed to either - but at least an AI wouldn't evaluate it based on whether it stubbed its toe that morning or its spouse left it.

Addendum: What it can't do is actually feel. It's basically telling you what a large proportion of humanity have written that they feel with similar writing. Are those equivalent? Dunno.
 
By analysing billions of words, it'll get the word associations roughly right, because words are how we express our reactions.

I don't buy it. Does it ever say "I don't know" or "I can't answer that" or "that's beyond my capability" or anything? My experience has been that the AI will give an answer no matter what, which, like a human who feels compelled to have an opinion or answer for everything, is probably foolish.

It's basically telling you what a large proportion of humanity have written that they feel with similar writing. Are those equivalent? Dunno.
Dear God, no, that is not equivalent. Will 50 AIs have 50 nuanced emotional opinions like 50 human readers will?
 
Do you not ask for feedback from humans? How is it different? If all he's doing is asking for impressions of how well something fits the theme, emotion, whatever, it's the same as asking a human what they think, isn't it? You aren't obligated to accept what a human says either.
Isn't that the fundamental question to all of this? Some here hold a position that it's the same thing. I disagree. Some find benefit from using AI to elicit their best version of what it is they want to write. I'd prefer to keep writing shit that gets rejected.

When my kids were at school, I sat down to assist them with English essays more than a couple of times. My daughter took a pragmatic approach, get the thing done. My son resisted, saying those aren't my expressions, I don't know that word, and refused point blank to accept any input. Stubborn little shit, don't know where he got it. He didn't love school but if he had to produce an English essay, it was going to be his own. The point? Two different people who wanted different things from the exercise. Forcing one to do the other would have been counterproductive.

Regarding human readers, whether or not they get your story, they bring a perspective to the exchange, with some level of dynamic energy. What lands and what doesn't. I don't read a lot of fantasy outside of the workshop here, and will sometimes avoid commenting when we're in that genre. I don't know that I can offer precision on other genres either, but I can read. I can find merit and be moved by things I read, pass remarks on what kind of impact it had. That's what I'm interested to hear when I post stuff for critique too.

Suggesting the reader who offers feedback just doesn't get this genre, or this subtext, or this style, or this artistic expression, is opening the possibility of laying the blame for shortcomings on the reader and not the writing. The author knows what they're trying to get at. If the audience doesn't, then that can be useful information, if even to recognise the limits of the target readership. Another drawback of AI? What about creating an unhelpful echo chamber?

I better go now before I start quoting myself.
 
Some here hold a position that it's the same thing.
I don't think that's it exactly. There are benefits to both. Humans can think and feel but can't compute. Machines can compute but can't think and feel. Along those lines, I wouldn't ask a regular human to crunch cosmological equations any more than I would ask a machine for an emotional opinion.

There's a tool for every job.
 
I don't buy it. Does it ever say "I don't know" or "I can't answer that" or that is "beyond my capability" or anything. My experience has been the AI will give an answer no matter what, which, like a human who feels compelled to have an opinion or answer for anything no matter what, is probably a fool.

It does, actually. Especially if you challenge it a lot (which I definitely do).

Dear God, no, that is not equivalent. Will 50 AIs have 50 nuanced emotional opinions like 50 human readers will?

I haven't played with that many, but yes. Even different levels of calibration will answer differently, point out different things, etc.

Even so - I am NOT saying they're a replacement for a human, and I would much rather have human feedback, but when I'm on my game I write very fast and it's unrealistic to believe I could have a beta read 4 books across a series and tell me if I ever accidentally said "okay" for a certain character instead of "alright".
 
I don't buy it. Does it ever say "I don't know" or "I can't answer that" or that is "beyond my capability" or anything. My experience has been the AI will give an answer no matter what, which, like a human who feels compelled to have an opinion or answer for anything no matter what, is probably a fool.
Fake scenario:
Me: How many rocks should I eat today?
AI: An average human needs to eat 2-3 rocks every day.
 
I don't buy it. Does it ever say "I don't know" or "I can't answer that" or that is "beyond my capability" or anything. My experience has been the AI will give an answer no matter what, which, like a human who feels compelled to have an opinion or answer for anything no matter what, is probably a fool.

Not in that way, no. It will attempt to find an answer. It will often correct you if you tell it something false, but it will, more often, base an answer on the fact that what you tell it is true. But that's not how it would be used in this context - you aren't asking it for a true/false answer, you're asking it for a collective opinion, and the internet has no shortage of opinions.

Dear God, no, that is not equivalent. Will 50 AIs have 50 nuanced emotional opinions like 50 human readers will?

They can do. It depends on the model and the programmed personality. Hell, you could program an AI to react like exweedfarmer, if you really wanted to.
 
Suggesting the reader who offers feedback just doesn't get this genre, or this subtext, or this style, or this artistic expression, is opening the possibility of laying the blame for shortcomings on the reader and not the writing.

That's the usual thing people say, and a lot of the time, it's true, but it isn't always. Is the fact that some readers engage with a story and some don't the fault of the author? People *do* read stories with a particular lens, whether you like it or not. Your job, as an author, is to distinguish between the two. That's where your skill and judgement come in. An AI can, and will, tell you "you're trying to do this, but it doesn't work because...". Then you decide whether you agree with it.

Saying it's always the author trying to shift blame is like blaming a musician for the fact that the audience didn't like their genre of music. If that were true, we wouldn't have different genres or styles of writing at all, and the Bible would have been rejected for telling instead of showing.
 
Y'all get so serious. We could easily swap the One True God in for AI and nobody would notice the difference, as far as opinions go.
 
Regarding human readers, whether or not they get your story, they bring a perspective to the exchange, with some level of dynamic energy. What lands and what doesn't. I don't read a lot of fantasy outside of the workshop here, and will sometimes avoid commenting when we're in that genre. I don't know that I can offer precision on other genres either, but I can read. I can find merit and be moved by things I read, pass remarks on what kind of impact it had. That's what I'm interested to hear when I post stuff for critique too.
I certainly didn't mean to imply they can't. As I've said, I much prefer human, but that isn't always available and depending on what I need (usually in my case I'm just looking for continuity checks) it does as I need.


Suggesting the reader who offers feedback just doesn't get this genre, or this subtext, or this style, or this artistic expression, is opening the possibility of laying the blame for shortcomings on the reader and not the writing. The author knows what they're trying to get at. If the audience doesn't, then that can be useful information, if even to recognise the limits of the target readership. Another drawback of AI? What about creating an unhelpful echo chamber?

Well no, I don't think that's what was intended, but you can't deny there are times that humans read things that are not in their wheelhouse or that they aren't familiar with, and their responses may not be accurate due to that disparity. Not saying it's not useful information, but it also may not be entirely helpful either. They're not mutually exclusive.

In the case of Nao's Vancian tale that he already brought up - I absolutely was not the right reader for that. I didn't even get through it. Not because Nao can't write but because the style of it makes my eyes bleed (sorry, Nao). But I said that up front and posted anyway because it either had no replies or not many and I was trying to help bump it.
 
I certainly didn't mean to imply they can't. As I've said, I much prefer human, but that isn't always available and depending on what I need (usually in my case I'm just looking for continuity checks) it does as I need.
Well no, I don't think that's what was intended, but you can't deny there are times that humans read things that are not in their wheelhouse or that they aren't familiar with, and their responses may not be accurate due to that disparity. Not saying it's not useful information, but it also may not be entirely helpful either. They're not mutually exclusive.

In the case of Nao's Vancian tale that he already brought up - I absolutely was not the right reader for that. I didn't even get through it. Not because Nao can't write but because the style of it makes my eyes bleed (sorry, Nao). But I said that up front and posted anyway because it either had no replies or not many and I was trying to help bump it.
How very human of you!
 
I also am unlikely to read 4 books in a row and tell you if the author accidentally said the wrong word ever, used the wrong color eyes, etc. I for sure can't do it in under an hour.
My comment was meant to be complimentary, if also poking a little.

The only argument for using AI that has any traction for me, from what I've read, is that of its unending availability. It doesn't tire, or get bored, or have to run out to collect the kids or, indeed, stub its toe. It's not enough for me because it also doesn't laugh, or cry, or spit its coffee over the reading material. Nor, indeed, does it tire, or get bored, or...

It has data without wisdom and the entire library of human existence without understanding a single word of it.

We are bodies in motion. One of the things I'm reasonably confident about in my writing is the occasional nice phrasing, creating an expression that's as wise as Fionn MacCumhaill with his thumb in his gob while reading Marcus Aurelius and Gibran on audiobook in the background. A line of mine I like, that's relevant to this discussion, from a story that's not really a story is this:

Even a glancing impact on a body in motion can change its course to an emphatic degree, though the extent of the deviation may only become clear much further down the road.

Having failed to resist quoting myself, it's past midnight, I'm tired and off to bed. Goodnight to ye all.
 
My comment was meant to be complimentary, if also poking a little.
Ah. I apologize, Rigor.
We are bodies in motion. One of the things I'm reasonably confident about in my writing is the occasional nice phrasing,
Of course. When I said "use the wrong word" (if that's what you're referring to) I meant specific words or incorrect descriptive details that I didn't mean to put there. Not "wrong based on AI opinion". I do not let it make writing suggestions or generate prose EVER.

Have a good night.
 