Head for Analytics

Monday, December 5, 2022

Discuss - I Low Key Hate AI Art



I low-key kind of hate a lot of forms of AI Art.


The trend this weekend was for a lot of people to post their selfies using Lensa, and I get it: it looks cute, it's funny, and to the average layperson it must seem no different than applying a goofy Photoshop filter or two and, bam, you get an instant new avatar.


It's not that. And you should know a little bit about what AI art is, and what it's doing.


First off, so you know, and so we're all clear... when you use Lensa (the AI avatar image-generating app) to make an avatar image, you should know that Lensa takes the rights to use the photos you submit, and everything it creates from them, in perpetuity, for marketing, study, or really... whatever they want.


This isn't particularly new, nor particularly shattering. Lots of image sites and posting setups have the same ToS. But let's keep it in mind. What you send in, and what you create, is theirs and becomes part of the model. (And it is exceptionally difficult to extract, unlike the photo of your birthday party on Facebook that might end up in marketing material somewhere.)


Everyone has a crack joke about "AI is coming anyway" or "I don't own anything personally that they aren't already taking," and that's sort of true. But take it from someone who literally studied AI ethics... there's a lot of dangerous ground being rapidly lost here, and people are not asking even the simplest questions about this technology, questions that are probably worth asking.


"Well Lester, I don't care about them using my image for marketing"


Sure, fair. Are you concerned if they sell your photos and images to a military or national power for surveillance? What about to a private company? What about to individuals? Do you care if your image gets sold to a country with perhaps less-democratic intentions, like... Iran?

Do you care if someone with low scruples, like Musk and Twitter, suddenly owns the rights to create with your likeness? Or to create derivatives of your likeness?


Do you care if policing institutions are perhaps sorting likenesses in order to pre-screen people for crimes based on race, facial markers, tattoos, hair colors and styles, or genetic indicators?

Would you be mad if, say, a private company suddenly had photos of you? Like, what if it was the United Conservative Party? What if it was Rebel Media? It probably wouldn't even be that hard.


Stable Diffusion (the AI model that Lensa is built on) also relies on methodologies trained on illegally acquired medical records (specifically facial imaging, images from studies, and medical scans), acquired without the consent (or knowledge) of patients. As well, its makers disavowed 'awareness' of, or responsibility for, the fact that their image-gathering techniques could be sourcing potentially problematic material (such as illegal pornography).


The LAION dataset (from which Stable Diffusion sources its data) is notoriously predatory, and was built without an opt-out system. When confronted about how to get materials removed from it, they said the best way is to remove the image from the internet itself, to take it off a website. Think about that. That's not a solution. It also says nothing about the image already being in the model, nor about how the image might've already been used. They don't know. And they don't care to figure it out. Images taken from medical telemetry, images taken from non-consensual porn, images taken of children, images taken from ANYWHERE.

And let me tell you, hiding behind a convenient "I didn't know it did that," or "It's an AI, we can no longer control what it samples or how it is used," is not an ethical way for humans to be interacting with it.

I know this particular technology is not going back in the box. I'm not so foolish as to think we can all just convince people to 'stop using AI art'. It represents a shortcut, and the shortcut is attractive. But studying AI ethics is 75% a look at how we treat learning technologies and their expansive growth with an eye to the future. The other 25% is how we treat one another with respect to AI. I wish to point out, as pointedly as I can, that the last time we granted personhood to a non-person, ahem, corporations... it has not worked out so well. In an ideal society, we would not make the same mistakes with AI.

Before we go too far down that route...


You might want to ask yourself some questions, and take the time to educate yourself about your own values around it.