I wanted to make sure I understood which side of the equation I fell on. And I must say, a lot of people in the "weak" camp, myself included, aren't helpless martyrs. People are excited and enthusiastic about AI and are actively reaping the benefits of progress. I don't think your analogy is quite apt.
Define "a lot"? Most people barely know how to use their email. Even among the minority who do actively use "AI" and are excited about it, few outside of ML engineering are well informed about what data is used for training, or even what training means and how these models work in the first place.
> People are excited and enthusiastic about AI and are actively reaping the benefits of progress.
Except the terms were already violated in the initial training phase before the services were even public and saw adoption.
That's like pointing at a rape victim who got some form of compensation later and saying: see how she's "reaping the benefits."
So let's not play the people wanted it card.
By the time some people started raising concerns, OpenAI claimed the cat was already out of the bag: "if we didn't do it, someone else would, so deal with it."
As with privacy: just because some people don't care, lack awareness, or don't want the hassle of fighting for it doesn't justify taking it away from others.
Your argument seems to be that the majority of the world are the weak being sacrificed but are too ignorant to realize it. I wholeheartedly disagree with this theory.
Yes. Your intellectual labor to the maximum degree possible will be exploited by "AI" companies who are anything but.
This is repackaging content, laundering it, and reselling it.
As others have noted, IP law has lots of problems; Sam Altman et al. are exploiting the gap between the speed of technology and that of law, imposing their own version of social good without waiting for the consent of those they're exploiting.