The ethical question, in the end, is: is it your work?
This seems to be the question everyone is circling. Ultimately, I don't feel this is an accurate or complete representation of the ethical dilemma.
For the sake of illustration, forget for a moment how AI currently functions.
Imagine that, as a computer programmer, I create a program that can write, or draw, or whatever. In order for it to achieve a result, good, bad or otherwise, I have had to program in the instructions on how to do that: the word choices, or a methodology for choosing from a known dictionary; the placement of a line or a colour. Everything the program knows how to do follows directly from an instruction that I gave it. So, would you say that the result is a product of my own work?
Consider human assistants. Michelangelo did not paint the Sistine Chapel by himself; he had a dozen assistants who performed perfunctory tasks but also contributed to the painting. So, which parts, or what percentage, of the resulting artwork would you consider to be Michelangelo's?
Imagine a human assistant who had studied under Tolstoy, Tolkien, Dickens, Austen, Woolf - anyone you consider to be a literary giant, everyone. Imagine this assistant came to your home and offered to help proofread, edit, and make suggestions on your text. You would, I expect, evaluate the suggestions with the gravitas due that assistant's experience. Assuming you adopted some of their suggestions into your work, does that make it less yours? At what point, or at what percentage, does the work become not yours?
Now, let's return to our AI assistant. At what point or percentage of using an automated assistant to edit, make suggestions, or even generate portions of text - which presumably will be reviewed and accepted by the author - does the work become no longer the author's?
Where I believe ethics may creep in - or at least some morally grey area - is in how the AI obtains its knowledge. It is not the case that the author here programmed in specific language instructions, so the results are not entirely of his own creative effort. And, unlike our human assistant, the AI was not voluntarily given expert knowledge by the massive number of creatives whose work it has consumed. Was the scraping and training of AI unethical? Is its redistribution of this knowledge unethical? (These questions may well be for a different topic.)
Indulge me in another hypothetical. Imagine a human assistant, but this time they don't study under the artists. Maybe they go to the library for years on end, reading and consuming all the books and art they can get their hands on. Maybe they sat in the Sistine Chapel and watched Michelangelo apply plaster to the walls, and pigments to the plaster. Maybe they trained themselves without anyone ever noticing. Granted, a human matching in one lifetime the volume of data consumed by an AI is probably unrealistic and unobtainable. But, for the sake of the hypothetical: if this assistant came to your door and offered their services, would it be unethical to accept?