Hey folks - I don't want to tag anyone because I don't think that's necessary. I understand and see both sides, and if you check my posts you'll see I was quite anti-AI because I only knew about it from annoyingly repetitive SM posts, forum discussions, fear mongering (some justified, don't get me wrong), etc. I'm just going to tell you my experience with it, and if anyone has questions I'm happy to answer them, talk about it, whatever. If not, that's cool too.
I started out trying Gemini - just to see what it would say about a chapter or two. I paid for it because I wanted the option for my work not to be used for training. It thought I was super duper fantastic, a visionary, amazing. I told it to calm the hell down, and it did. I asked it for specifics, and it gave them; I asked for genre specifics - where I was doing well and where I wasn't - and it gave examples and comparisons. Those parts were great. However - it did not learn the way I needed it to. It consistently reverted to cheerleading, which is annoying AF, and made suggestions for "corrections" even though I repeatedly told it (and added to its instructions) that it was NOT allowed to write for me - EVER. Not even suggestions. It would slip back into generic genre expectations (its default appears to be mystery/suspense, in my experience) and tell me all the things I wasn't living up to until I reminded it what genre it was reading. It struggles with first person present tense and gets confused easily. It hallucinates things that did not happen in the text. When you call out the hallucinations it WILL correct them, but they'll come up again later because it seems unable to wipe them from its memory. So I cancelled and moved on.
ChatGPT - paid for it for the same reasons as above. The worst by far. It does not hold corrections, will not stop trying to rewrite even when told not to, and will DOUBLE DOWN on being wrong. It literally suggested I reword a sentence into 3rd person past, and when I reminded it what POV and tense the excerpt was in, it told me its line was better and it stood by its suggestion lol. I noped out of there in less than 2 days. It was clearly not going to be helpful for me, and I wasn't going to waste my time trying to train something that didn't want to be trained.
Claude - paid for it again, for the same reasons. Claude was initially confidently wrong as well - it told me I needed to cut my book by HALF because it was too long for the genre, told me it was dragging and my pacing was too slow, told me it was clean and cohesive, but way too long. I asked it how many words it thought the book was (I had never given it a word count; it just guessed based on the content, plot, and world-building information in the excerpts) and it said 180k-220k. LOL. I said "Claude, the book is 119,945 words." and it RE-CALIBRATED. It explained WHY it thought that and how it came to that number, and it understood. It suggested I change things that would flatten voice (it did NOT write suggestions, just said what it would change and why); I reiterated the POV and it reassessed and explained the error. It learned. It retained. It got better. It made fewer ridiculous assertions. It still occasionally veers into cheerleading, and I'll say "Thanks, but cut the hyperbole" and it DOES. It apologizes, explains WHY it veered there, then gives concrete examples of what in the text made it go there. It also did what ALL of them did: it would tell me the writing was great, polish level, ready for sub, etc., and then I would say something like "oh that's awesome. It's completely unedited" and then it would tear it apart ("unedited" is quite the trigger for all of them) and manufacture problems. Problems that did not exist. Problems it had already said weren't problems (which, for me, is hilarious). With Claude, though (unlike the others), when it suggested a rewrite of something and I said (literally did this a couple of days ago) "Okay, go ahead and show me your suggested rewrite. You can write one." (probably not exact words, but close enough), it responded with a lot of thinking and rereading and said, "I don't think I should. I made that suggestion based on you stating it was unedited, but on re-read I manufactured a problem that wasn't there. I retract my suggestion, as it would not fit the POV."
Claude CAN learn, and learns WELL. Claude WILL call itself out if you ask it to show its work. It will ONLY double down if it is correct and has legitimate reasons it can show you. And that goes for both praise and criticism. Claude is the only one that has performed that way, and I used the most advanced models - the ones suggested for beta reading and such - and I paid for them. In my opinion, Claude is the only one I've used that is truly valuable, learns, and is NOT easily influenced by what YOU want it to say, provided you've taught it and calibrated it properly. If it's actually right it will fight for it (while also understanding that we're talking about creative work, so it won't be excessively pushy about its opinions), and if it's wrong it admits it. It brings receipts.
It is not always correct, it is not perfect, but it is excellent at finding discrepancies and problems if you train it properly. If anyone does decide to use Claude, I strongly recommend you create a "project" within it and paste all the excerpts and text you want it to reference into the "files" section. That way, when a conversation gets laggy and you have to start a new one, you just start another within that project, and it already has the files and relevant memory to keep going without you having to reintroduce information or retrain it. It's better for usage, and better for the purposes of beta reading. It can track the plot progression of literally anything you ask it to when it has the relevant files. It can catch things that you specifically ask it to - for instance, the current series I'm working on is dual POV. As part of their "voice," one character ONLY ever says "okay" when they speak and the other only ever says "alright". It's only a small part of their voice, of course, but it's important to me that it be consistent. Claude checks for it in everything related to the series that I post in files, without my asking it to every time. It knows that's important, and even when there are no discrepancies it says "X character only says alright, Y character only says okay. Text is clean."
Do I still use human betas? Of course! But Claude is a real-time beta (within the constraints of what it can do - which is frankly a LOT that is helpful) - and a human beta is never going to be able to give me a perfect characterization of a single character, across three books, with receipts. You do have to know craft to train it, though, if you're going to use it for suggestions, or any of them will sand your characters into cardboard. Training is key, and knowing when to push back is important. It is a tool, not a magic wand.
And of course, as to whether it actually helps with plot suggestions or giving ideas for where to go next - I do not use it for that, and I haven't used any of them for things like that, because that is not what I need at all. So if that's what you're looking for, I can't help.