Regarding AI art – I use it (though mostly for conceptual stuff) and I don’t apologize for it.
Of course it can be abused. I don't think anybody's saying it's cool to deliberately craft a series of prompts that steer the engine away from its default style to fully emulate the work of a specific living artist instead of paying them, and allowing artists to opt out of having their material used as training data is the right thing to do.
At the same time, there's a lot of ignorance out there about how neural networks operate and how MidJourney's plain, no-specific-artist-name-attached style prompt works. Please research it before accusing the engine of inherently stealing everything it produces. As foolish as the far right looks when they argue from ignorance, we don't look much better when we say "computers can't learn to make original content, they can only steal!"
I'm by no means an expert on neural networks, but I happened to build one as part of a college final project (though it was waaaay simpler than the networks Stable Diffusion uses), so I know a little more than the average layperson. They're modeled after human brains for a reason: to process or, in this case, generate content they've never seen before, based on what they learned from images they saw in the past. So yes, this is genuinely similar to what humans do when they study images from the web and incorporate what they learn into their own style.
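To make the "learning a pattern, not storing copies" point concrete, here's a toy single-neuron sketch in plain Python (a hypothetical teaching example I wrote for this post, nothing remotely like the scale of Stable Diffusion): it trains on four examples of a simple rule, then answers correctly on inputs it never saw, because what it stored is a handful of weights, not the examples themselves.

```python
import math
import random

random.seed(0)

# Training data for a simple rule: output 1 only when both inputs are high
# (a logical AND over two values).
train = [((0.0, 0.0), 0.0), ((1.0, 0.0), 0.0),
         ((0.0, 1.0), 0.0), ((1.0, 1.0), 1.0)]

# The network's entire "memory" is three numbers: two weights and a bias.
w1, w2, b = random.random(), random.random(), 0.0
lr = 0.5

def predict(x1, x2):
    # Weighted sum squashed into (0, 1) by a sigmoid.
    s = w1 * x1 + w2 * x2 + b
    return 1.0 / (1.0 + math.exp(-s))

for _ in range(5000):
    for (x1, x2), y in train:
        err = predict(x1, x2) - y
        # Logistic-loss gradient step: nudge the weights toward the
        # underlying pattern; no single example is stored verbatim.
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b -= lr * err

# Neither of these inputs appears in the training set, yet the learned
# weights handle them correctly.
print(round(predict(0.9, 0.9)))  # high, rounds to 1
print(round(predict(0.1, 0.1)))  # low, rounds to 0
```

The analogy is loose (diffusion models are vastly bigger and work differently), but the core idea scales: what persists after training is a compressed statistical summary, not a library of the originals.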
MidJourney's default style (when you don't directly emulate a specific artist) is a unique blend, and there's nothing wrong or illegal about that. It is not a dice roll of "which exact style should I emulate from these artworks?"; it's its own cocktail. The signatures you sometimes see aren't copied and pasted: they come from the engine imitating what it's seen by "signing" its output (honestly, signatures should probably be weeded out of the training data).
I'm sure there will be a lot of lawsuits around this in the future. I don't know how the courts will rule on them, as human judges can be unpredictable even in the most apparently straightforward cases (read "Hello World" for a scary example), but there's a good chance that if what the AI is doing isn't illegal for a human to do (memorize publicly displayed art, file it away in their brain, incorporate some techniques from that art into part of their unique style), then it won't be illegal at all.
The resolution I see coming is to 1) block user prompts from directing the AI to emulate the specific style of any artist who hasn't given permission, while still allowing the default style and more general prompts like "hyperrealism", and 2) allow artists to opt out of having their material used for training.