It’s possible for ‘AI art’ to not be crap.
One can use sophisticated tools, like depth maps and ControlNet, to compose an image or video in all sorts of ways. One can spend hours touching up a generation in Photoshop, like, you know, an artist who actually cares about what they’re presenting. One can use models that don’t feed blood-sucking corporations. And like you said, one can disclose the whole process, upfront.
It’s just that the vast majority is crap from a few keywords mashed into ChatGPT, with zero deliberate thought in the work and that full ‘tech bro quick scam’ vibe.
So I guess what I’m saying is this:
“Tell people that’s what you’re doing.”
Is an unlikely scenario.
“AI artists” seem to be scammers. They will lie about their process. That’s who will attend things like this.
Meanwhile, the few hobbyist artists with diffusion in their creative pipeline would never dare show up to a place like this, because of the scammers ruining any hope of a civil reception.
It’s also important to remember these models are trained by sampling (imitating aspects of) images they don’t have the rights to use directly. I think it’s justified to be angry about someone using your work (insignificantly mashed together with millions of other people’s work) without your permission, even if it’s just to extend a background by 10 pixels lol
Not all of them. Some are trained on purely public-domain data (though admittedly most folks running locally are probably using Flux or Stable Diffusion out of convenience).
And IMO that’s less of an issue if money isn’t changing hands. If the model is free, and the “art” is free, that’s a transformative work and fair use.
It’s like publishing a fanfic based on a copyrighted body of work. But try to sell the fic (or sell a service to facilitate such a thing), and that’s a whole different duck.
50 Shades of Grey be like…
Yeah, that’s an interesting case.
I guess there was no incentive for Stephenie Meyer and E. L. James (and their movie-adaptation money banks, Lionsgate and Universal) to sue. But apparently it was brought up in a lawsuit over an actual pornographic adaptation:
“In June 2012, the film company Smash Pictures announced its intent to film a pornographic version of the Fifty Shades book trilogy…
Smash Pictures responded to the lawsuit by issuing a counterclaim and requesting a continuance, stating that ‘much or all’ of the Fifty Shades material was part of the public domain because it was originally published in various venues as a fan fiction based on the Twilight series. A lawyer for Smash Pictures further commented that the federal copyright registrations for the books were ‘invalid and unenforceable’ and that the film ‘did not violate copyright or trademark laws’. The lawsuit was eventually settled out of court for an undisclosed sum, and Smash Pictures agreed to stop any further production or promotion of the film.”
IMO, training software on the corpus of human art without payment or attribution is not good for society and art in general, but humans who create non-abstract art are trained, and honestly create, in a strikingly similar way. The person hired to make an art piece of Catherine the Great doesn’t disclose that they looked at Alexander Roslin’s painting of her and are greatly copying the look and feel of the face, or the Google search they used to find options for 1700’s royal clothing. The big difference in process between AI and an artist with reference art is the removal of the human element, and that’s super important.
But instead, we focus on how it was trained, when we train much the same way, or we call it all slop regardless of the actual quality, instead of calling out the real problem, the one problem we can actually do something about: it’s taking a living away from humans.
Slop machines have more in common with tracing than with learning.
https://www.youtube.com/watch?v=iv-5mZ_9CPY
You can hate the output, you can hate copying the IP, you can hate the people involved, but the process is nothing like tracing. It’s closer to black fucking magic :)
It’s not magic.
You’re in a cult.
funny, I didn’t feel like a kettle this morning.
anyway have a good one
for 1700s* royal clothing