The only real exception I can think of would be to train an AI ENTIRELY on your own personally created material. No sources from other people AT ALL. Used purely for personal use, and not made available to the public.
I think the public domain would be fair game as well, and the fact that AI companies don’t limit themselves to those works really gives away the game. An LLM that can write in the style of Shakespeare or Dickens is impressive, but people will pay for an LLM that will write their White Lotus fan fiction for them.
Possibly, but the intention behind it is more about not exploiting other people. If it’s only trained on my work, and only used by me, I’m the only one harmed by it, and that’s my choice to make.
That’s very deontological. Suppose you train a model that is just as good as other models, but only on your own work. (If you were a billionaire, you could commission enough works to achieve this, perhaps.) Either way, you end up with an AI that lets you produce content without hiring artists. If the end result is just as bad for artists, why is using one of them any more ethical than the other?
True, but that’s why I specified that it could only be used for my own personal use. Once you start publishing the output, you’ve entered unethical territory.
I don’t see the relevance of its personal use here. If it is ethical to use your own AI for personal use, why is it unethical to use an AI trained on stolen data for personal use?
This is a very IP-brained take, and it isn’t the reason AI is harmful.
That’s what my workplace has been doing since 1985!