AI as a dynamic beta-reader

ChatGPT - paid for it for the same reasons as above. The worst by far. It doesn't hold corrections, won't stop trying to rewrite even when told not to, and will DOUBLE DOWN on being wrong. It literally suggested I reword a sentence into third person past tense, and when I reminded it what POV and tense the excerpt was in, it told me its line was better and stood by its suggestion lol. I noped out of there in less than two days. It clearly wasn't going to be helpful for me, and I wasn't going to waste my time trying to train something that didn't want to be trained.

That's interesting. I had the same problems with Gemini as you did. It's improved, but it doesn't remember things between chats. But when ChatGPT (from version 4 onwards) added memory, I was able to get the things I told it to stick. I told it never to suggest rewrites unless I specifically asked, and to use British English, and it has. It definitely didn't use to, so that's a big improvement. It also remembers a lot about the way I write and what I tend to focus on now as well. Claude, I believe, is pretty good too, from what I hear from everyone.
 
I was using 4 and it definitely did not hold memories well at all for me. Claude is the best by far for my usage needs.
 
Ffs, I'm the bad guy now for trying to have the exact conversation @Trish so excellently outlined above.

So, Trish. I've found a couple of things. First, it helps if you stick to one chat when discussing your work with the LLM. When you stick to one chat, it tracks better and doesn't repeat the same mistakes over and over. If you use it long enough, though, the memory gets bogged down in that chat, and it slows to the point you can't use it - but that's after heavy use, like a week straight. It works better in the same chat; when you finally have to switch, it retains a lot of the framework and knowledge of past discussions, but it also loses a lot.

What also helps is understanding that you can set rules for your project. For example, I stated directly, "As a rule, in the future, please don't ever suggest anything that uses an -ly adverb." It went blip blip blip blip and replied, "Okay, no revisions I suggest will ever use an -ly adverb." Then I stated, "As a rule, please don't use any analogies in anything you suggest." Then I asked it not to suggest any scene rewrites unless I specifically asked for them. I set up one that forbade it from straying from a tight close third person POV. There are a bunch more, but once I got the rules straight, things hummed along better. If you have any rules that helped you, let me know.
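For anyone who ends up doing this through the API instead of the chat window, the same trick works by pinning your standing rules into the system message, so they get re-sent on every turn instead of relying on memory. A rough Python sketch using the OpenAI client (the model name and exact rule wording are just placeholders, not what I actually use):

```python
# Sketch: encoding standing beta-reader rules as a system prompt so they
# apply to every request. Rule wording and model name are placeholders.

RULES = [
    "Never suggest a revision that uses an -ly adverb.",
    "Never use analogies in anything you suggest.",
    "Do not suggest scene rewrites unless I explicitly ask for one.",
    "Stay in tight close third person POV at all times.",
]

def build_system_prompt(rules):
    """Join the standing rules into one system message the model sees every turn."""
    numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(rules, 1))
    return "You are my beta reader. Follow these rules without exception:\n" + numbered

def ask(client, excerpt, question, model="gpt-4o"):
    """One feedback request; the system prompt re-asserts the rules each call."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": build_system_prompt(RULES)},
            {"role": "user", "content": f"{question}\n\n---\n{excerpt}"},
        ],
    )
    return resp.choices[0].message.content
```

Because the rules travel with every request, there's nothing for the model to "forget" between turns - which is the whole point.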

Then, there's the patronization you mentioned. It does that, and you have to be careful. Oh, you're such a great writer, etc. I suppose I could set a rule that asks it to give frank feedback, but I haven't tried it yet. But, you're right. It can lull you into a false sense of grandeur.

Finally, you're going along, you've worked on a couple of chapters, then it gives feedback that makes you wonder if it forgot specifics from earlier chapters. To solve this, whenever I felt I needed a big-picture view of things, I reposted everything preceding it before I posed the question. Usually I said "don't reply until I'm done reposting", and it will wait.
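If you're on the API side, the repost-everything step can be automated by bundling all the chapters so far into a single message before asking the big-picture question. A minimal sketch (the file layout, naming scheme, and model name are hypothetical - adjust to however you store your drafts):

```python
# Sketch: rebuilding the full context in one request so the model answers
# big-picture questions against the actual text, not a stale memory of it.
# File naming convention and model name are hypothetical.

from pathlib import Path

def gather_chapters(folder):
    """Concatenate chapter files in name order, with a header between each."""
    parts = []
    for path in sorted(Path(folder).glob("chapter_*.txt")):
        parts.append(f"=== {path.stem} ===\n{path.read_text()}")
    return "\n\n".join(parts)

def big_picture_question(client, folder, question, model="gpt-4o"):
    """Send the whole manuscript plus the question in a single message."""
    manuscript = gather_chapters(folder)
    prompt = f"{manuscript}\n\nNow, with all of that in front of you: {question}"
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

Same idea as reposting by hand, just without the copy-paste - mind the context window if the manuscript gets long.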

Regarding ChatGPT vs. Claude, I agree with you wholeheartedly. My problem is that I started with ChatGPT and stuck with it despite its limitations.

I'm probably banned from the site and writing all this is a waste of time. If so, then so be it.
 
So, Trish. I've found a couple of things. First, it helps if you stick to one chat when discussing your work with the LLM. When you stick to one chat, it tracks better and doesn't repeat the same mistakes over and over. If you use it long enough, though, the memory gets bogged down in that chat, and it slows to the point you can't use it - but that's after heavy use, like a week straight. It works better in the same chat; when you finally have to switch, it retains a lot of the framework and knowledge of past discussions, but it also loses a lot.

What also helps is understanding that you can set rules for your project. For example, I stated directly, "As a rule, in the future, please don't ever suggest anything that uses an -ly adverb." It went blip blip blip blip and replied, "Okay, no revisions I suggest will ever use an -ly adverb." Then I stated, "As a rule, please don't use any analogies in anything you suggest." Then I asked it not to suggest any scene rewrites unless I specifically asked for them. I set up one that forbade it from straying from a tight close third person POV. There are a bunch more, but once I got the rules straight, things hummed along better. If you have any rules that helped you, let me know.
As I said, unfortunately I was unable to get anything to stay in memory in ChatGPT, so I have no suggestions for it specifically. Its ability to be so confidently wrong and then double down told me all I needed to know for my needs.
Then, there's the patronization you mentioned. It does that, and you have to be careful. Oh, you're such a great writer, etc. I suppose I could set a rule that asks it to give frank feedback, but I haven't tried it yet. But, you're right. It can lull you into a false sense of grandeur.
Again, as I said, I told it to stop, and it really didn't. Claude does stop, and it brings receipts, so it was better for me.
Finally, you're going along, you've worked on a couple of chapters, then it gives feedback that makes you wonder if it forgot specifics from earlier chapters. To solve this, whenever I felt I needed a big-picture view of things, I reposted everything preceding it before I posed the question. Usually I said "don't reply until I'm done reposting", and it will wait.
I had the same issue with GPT. Claude's Projects feature fixed all of this for me, as I said in my post.
Regarding ChatGPT vs. Claude, I agree with you wholeheartedly. My problem is that I started with ChatGPT and stuck with it despite its limitations.
I get it, but I can't really help with GPT because I wasn't willing to stick with something that wasn't helping me at all.
I'm probably banned from the site and writing all this is a waste of time. If so, then so be it.
You're still typing and posting, and Homer doesn't tend to hesitate, so unless that's your goal, I'd probably drop that line, personally. You were corrected for making it personal, that's all.
 
What I'm curious about is whether it can flag structural things. Like, this particular scene plays better in chapter 2 than chapter 11. Or whether the voice or tone of the content strays out of balance. For instance, will it recognize the incongruity of one murder scene told in elongated, intimate detail vs. a gleeful pile of mass murders that isn't presented very seriously? I mean, it can't tell if a story is disturbing on a visceral level, obviously.
 
There's kinda no point in arguing which GPT is "best" anyway. Whatever works best for you. Different people will have different experiences.
 
What I'm curious about is whether it can flag structural things. Like, this particular scene plays better in chapter 2 than chapter 11. Or whether the voice or tone of the content strays out of balance. For instance, will it recognize the incongruity of one murder scene told in elongated, intimate detail vs. a gleeful pile of mass murders that isn't presented very seriously? I mean, it can't tell if a story is disturbing on a visceral level, obviously.

It can definitely do the second one. I gave it one story written in an archaic voice, and it pointed out that in certain places the voice veers close to modern, and that I might want to consider whether those spots needed revising. It noted where I was framing one fight scene very clinically, so it felt like murder rather than a heroic fight, and it knew that was what I had intended. But since I write short stories, I don't know how well ChatGPT could do that over chapters - Claude, with its project structure, could probably do it better.
 
What I'm curious about is whether it can flag structural things. Like, this particular scene plays better in chapter 2 than chapter 11. Or whether the voice or tone of the content strays out of balance. For instance, will it recognize the incongruity of one murder scene told in elongated, intimate detail vs. a gleeful pile of mass murders that isn't presented very seriously? I mean, it can't tell if a story is disturbing on a visceral level, obviously.
Claude absolutely does for me. I have some really dark, horrible shit in my series. It most definitely flags tone switches and gravity, and whether something seems gratuitous or plot-driven (no, I don't mean sex, lol) or only there for shock value, and it will tell you why it thinks the way it does. Based on my experience with the others, I'd agree that they can't tell you if something is disturbing on a visceral level, but Claude has done very well at explaining why it is or isn't. I can't post the actual scene or its response here (too graphic), but if you'd like me to message or email it to you as an example, I'm more than willing.
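For anyone who'd rather run that kind of audit through the API than in the chat window, here's roughly how I'd phrase it using the Anthropic Python client. The model name is a placeholder (check the current docs), and the prompt wording is just one guess at what works, not a tested recipe:

```python
# Sketch: asking Claude to audit structure and tone balance across a
# manuscript in one shot. Model name and prompt wording are placeholders.

AUDIT_PROMPT = (
    "Act as a structural beta reader. For the manuscript below:\n"
    "1. Flag scenes whose placement feels wrong and say where they might sit better.\n"
    "2. Flag shifts in voice or tone, especially where violence is treated with\n"
    "   inconsistent gravity (intimate detail in one scene, glibness in another).\n"
    "3. For each flag, quote the passage and explain your reasoning."
)

def tone_audit(client, manuscript, model="claude-sonnet-4-20250514"):
    """One-shot structural/tone audit over the full text."""
    resp = client.messages.create(
        model=model,
        max_tokens=2000,
        messages=[{"role": "user",
                   "content": f"{AUDIT_PROMPT}\n\n---\n{manuscript}"}],
    )
    return resp.content[0].text
```

Asking it to quote the passage and explain its reasoning is what gets you the "receipts" rather than a vague thumbs-up.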

There's kinda no point in arguing which GPT is "best" anyway. Whatever works best for you. Different people will have different experiences.
If that was in reference to me - I'm not at all doing that. I'm saying what works best for me and why. I thought that was the question.
 
If that was in reference to me - I'm not at all doing that. I'm saying what works best for me and why. I thought that was the question.

No, it wasn't aimed at you. I meant that all GPTs have their own capabilities, and what works better for one person won't work as well for someone else. So if people argue that GPT A is objectively better than GPT B (which is subjective), it's a bit like arguing if Android or iPhone is better. :)

PS, it's Android.
 
But since I write short stories, I don't know how well ChatGPT could do that over chapters - Claude, with its project structure, could probably do it better.
That's an excellent point. The other two definitely did better with my short stories, but they were not at all helpful overall for the series. Claude has been a million times better for me, and that may be exactly why. The shorts aren't where I need help with continuity, tone, characterization, etc. It's the series, and that's a lot harder.
 
To Homer's question, I had it tell me that a particular violent scene was likely too much and threatened general acceptance. Then I asked, "What would you do?" and it said that if I simply alluded to it (rather than walking the reader through it) I'd get the same impact without the outrage, which was a decent point. And like Trish said, it assesses whether things seem gratuitous rather than earned.
 