If we're talking about using AI within the context of this forum, the rules are probably much different from those of the world at large. We expect that what we get from our fellow writers is the unalloyed product of their labor. When we enter a contest here, we expect the same. The training wheels are off. So using AI in this context would be unethical, in that we are not living up to the expectations of the other people on the forum.
Right. Here we have a good example of crossing an ethical boundary: using a tool (AI) in a context where that tool is explicitly prohibited. That boundary is deceit, as @Rigor Mortis also highlighted. I would argue that the act of deception itself is the ethical breaking point, not the specific use of the tool.
If I carry a knife through an airport, I'm going to get arrested. But that doesn't inherently imply that knives are bad or wrong. I'm just doing it in the wrong context. And if I knew it was wrong when I brought it in, there is the ethical boundary - I chose to break the rules.
I can see AI obliterating these nuances of dialect, resulting in the deletion of a major theme of the book.
I completely agree, and I think it's exceedingly sad, the path artistic creation is tracking down this road. I also take issue with publishers re-releasing books like Roald Dahl's, edited to remove any sensitive "trigger" words. And frankly, I find the latter more ethically offensive than the former. It offends the same sensibilities @Rigor Mortis felt against having Twain reworded. The primary difference is attribution: I can accept that the AI is not really choosing to do this, but human editors and publishers chose to modify and republish those books. Far more damning, IMHO.
Bear in mind what Twain said: "The difference between the right word and the almost-right word is the difference between the lightning and the lightning bug." We can tell the difference when we see it. But can AI do the same?
I would argue that it can, given the correct set of parameters. But you have hit on the point people struggle with in using these systems for editing: they don't clearly specify requirements. If you just give it a Twain quote without any instructions, it is going to try to modernise the language; that's essentially its default behaviour. If you give it your own work to critique and specify "this is how I want the voice to sound", the tone, or "I'm writing this character in a regional dialect", then it's able to scrutinise much more accurately within those parameters.
I'm not trying to convince anyone of anything, just putting information on the table.
This strikes me as a pragmatic approach. I like that it aids transparency and may help to build trust with an audience.
Here, though, I disagree. I think an author stating that 1.5% of their work is AI is attaching a stigma to their own name (in the current climate). People are far more likely to read it as "This author uses AI; he doesn't write his own words". Now, if your book is 100k words, that means the other 98.5k of your lovingly crafted work is going to be labelled "AI slop" by association.
Conversely, my earlier point was that I don't see how using editing services is any different. You wouldn't publish with "My editor rewrote 1.5% of my book", and no one would complain that an author doesn't produce his own work because he uses an editor.
With AI they have progressed to the point where cheating is easy, not great, but for most not an issue.
Cheating is certainly another ethical boundary. Perhaps this is where the "AI, write me a story ..." generative approach fits. But I don't think critiquing or editing falls into the category of cheating.
Where this is most concerning for me is within the education system. If things continue the way they are at present, I foresee university degrees becoming completely worthless, and the collapse of the entire tertiary education system as we know it. (This, however, may not be a discussion for this forum.) I've never done any writing-based education, BA or MFA or anything, so maybe someone who has might comment?
Maybe it's as simple as "No plagiarism."
I feel like plagiarism is too ill-defined an area for a simple label. We all, as writers, take influence from the things we read and love. If we borrow a specific word, or phrase, or line from our favourite author, does that become plagiarism? Some might call it homage. And I expect quantity is relevant, though again that threshold is not defined.
It made me think, as a thought experiment: if AI takes a word from each of a million sources and puts them together to make something different, does that fall under plagiarism? It hasn't explicitly written any of it; it's all borrowed from others. Yet I doubt any specific source could be cited or recognised within the result.
This is my biggest ethical quandary with AI, and it comes back to how the training data was supplied.
I suppose, in the end, one could argue that every word I've ever written, I've first read from somewhere else.
There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope.
- Twain
LLMs can pass the 'indistinguishable from a human in conversation' test
This is curious. My experience has been that the longer the conversation goes on, the more inconsistencies creep in. It's rather like having a conversation with someone who has dementia and can't remember the beginning of the conversation, or things they've already said.
It can be very difficult to separate the hype from the facts, and there is a lot of hype around AI in both directions (for and against).