Brief thoughts on AI writing/art data-scraping and subsequent content production, & the conclusion I've come to.
Thought #1: There has been a lot of discussion about whether AI is or is not art theft (or writing theft); from my understanding, every model works slightly differently. What isn't up for debate, though, is that all AI models require data to function, and that data has to come from somewhere. The companies developing AI have a strong incentive to get data by any means possible; the internet is the easiest place to start, but there's no way to get permission from every single person who has ever put something on the internet to use their work in developing the AI, even if every single person were inclined to give it.
Conclusion #1: It doesn't matter whether the AI's output is a copyright violation; feeding that data to the AI in the first place was already a violation of copyright, making the AI itself inherently legally problematic.
("BRIEF" DO NOT @ ME OKAY. SEE BELOW FOR THE REST OF MY BIG ASS ESSAY. I WILL REBLOG WITH THE SHORTEST TL;DR I CAN MANAGE.)
Thoughts #2&3: Due to how easy it is to scrape data online, and the way technology is currently progressing (Silicon Valley motto of Never Ask "Should I Do It," Just "Can I Do It"), there is almost no way to prevent these AIs from being developed with stolen data, and there's enough out there to make them very, very good. They've gotten immeasurably better in just the past few years. Also, preventing them from scraping one thing (i.e. archive-locking fic) is probably not going to do anything about the problem as a whole, even if it stops that one thing from getting used (and that's assuming it even does prevent that thing from being used; I'm not sure there aren't ways to get around that kind of obstacle).
Conclusions #2&3: Can't stop the technology from developing, and trying to prevent your data from being accessed through technological barriers is at best small potatoes and at worst futile.
Thought #4: What is the incentive for people to do this? Money. These AI are being developed in hopes that they can be used to do things humans can currently do, for cheaper, so they can sell them to companies who will then use them to replace human labor. Will it produce results as good as human labor? No. Will that matter? Not enough, and not in all circumstances.
Conclusion #4: How do we prevent this from happening in a way that costs people jobs (or costs the fewest jobs, or at least protects creative work, or at least does the whole thing slowly enough to save your job and my job)? Make it so companies cannot legally make money by using the output of these AIs.
WHICH... takes us back to Conclusion #1 -- due to the copyright violation inherent in these programs, it is important to make sure the output can't be copyrighted. Which, at the moment, legal precedent says it can't be. But that's something that companies which stand to make money off AI-generated work are going to try to change.
THEREFORE... we gotta fight those fuckers every step of the way to make sure that AI generated work can't be copyrighted. Which, IMO, means:
educating people about how these models are developed using data theft
making the connection between AI development and its potential harms clear (both things like face recognition tech and hurting creatives by replacing them in jobs)
encouraging people to fight legally instead of technologically; i.e. instead of archive-locking work on AO3, continue to throw a fit at the AI company, file legal copyright complaints, etc. (any useful suggestions here would be great!)
And then, bonus, if your company is considering using this kind of technology to replace artists or writers, throw a giant fucking shit-fit. Bring up possible legal ramifications. Bring up possible public backlash ramifications. Bring up ramifications of you personally quitting and being a huge bitch about it the whole time. Whatever you can safely do!
I don't think we can prevent AIs, nor do I necessarily think they're inherently evil; I DO think they are being made by people who do not care if they are being used or made in an evil way or not. I'm not sure we can prevent their usage to replace creative jobs entirely, but I think we should try. And I am willing to put my money where my mouth is on that. Which is all I can say about it!
NOTE: I am not a technical expert or legal expert on AI; I am some guy online, but I have a vested interest in this both as someone who pays to have art made and who makes art myself. I have recently done a fair amount of research into this, and this is the conclusion I came to personally. If you have more information from a legal or technical perspective that contradicts this, I'd love to hear it!