AI 🌯 Ads & Deepfake 🍆🍑
Hey ChatGPT, can you write me an intro?
“Certainly, here is a possible introduction.” Hey, if an AI-written intro is good enough for the editorial team at a prestigious peer-reviewed scientific journal, why not you, dear reader? This week we’re looking at states that are beginning to restrict AI usage in election marketing, as well as the insidious deepfake porn industry. Let's get into it. As always, send us your thoughts/tips at ai@goodbad.ai—or you can just reply to this email.
The Good
Illegal Meme Squad
As a follow-up to last week’s depressing news about AI election misinformation, it looks like some state governments are responding to the issue, and, unrelatedly, pigs have been seen flying overhead. The Idaho and Georgia Senates have both advanced bills addressing AI-created audio, video, and image content in elections. Idaho’s legislation would require a disclaimer but rely on lawsuits in civil court to enforce the law. Georgia’s proposal, on the other hand, would make the undisclosed, deceptive use of AI by political actors a felony, with a penalty of 2–5 years in prison! Anybody know if the person who made the TSwift image below lives in Georgia? I just wanna talk.
The constitutionality of these laws will inevitably be challenged, and their value will ultimately have a LOT to do with the courts’ definitions of “deceptive” and “intentional.” But while it’s possible that the courts will undercut the laws’ efficacy, it’s important to note that both of these bills are bipartisan (what?! really?!), so that’s good news for similar laws that may be developing in other states.
The Bad
There’s probably a Taylor Swift lyric that’s perfect for this section header, someone reply and tell me
You may have seen news swirling around recently about AI-generated images of Taylor Swift. In a truly depraved display of the dark side of AI, the images ran the gamut from election denialism to deepfake pornography. One silver lining: the incidents prompted a rush of interest in bills around the country addressing non-consensual AI-generated images of women and children. It’s nice to see these bills advancing with urgency, but this is not a new problem, nor one that’s going away anytime soon.
With the proliferation of increasingly realistic AI generation (with some notable exceptions—RIP Kate Middleton), making convincing deepfakes on demand keeps getting cheaper. The danger is obvious. Deepfake sex tapes of celebrities or political figures would be bad enough, but cheap, fast deepfake pornography of exes, crushes, or anyone who turned someone down seems just around the corner—and given the spotty legal protections against revenge porn, it’s hard to see legislators responding quickly to this potential deluge of sexual AI generations.
The AI
The Weird, Wild, and Unnerving Side of AI
If you only click on one link in this email, please click on this one. As a heartfelt tribute to their local burrito chain, somebody used AI to make a psychotic, Hot Topic-looking-ass video backed by an equally insane song. For some reason, the restaurant has not embraced this, but they do have their own Burrito Video on their website, which, while definitely not made with AI, is still just as uncanny and wonderfully weird.
If you only click on two links in this email, make this your second: We wanna do the fun stuff.
One of my favorite AI-related movies, I, Robot, is set in the year 2035. When it was released in 2004, that seemed way too soon for mobile, articulate, thinking robots. After watching the demo video of Figure 01, a robot powered by OpenAI, it doesn’t anymore.
Donald Trump has taken to dismissing any legitimate video of himself that he doesn’t like as AI. I shouldn’t have expected any different.
Just because an app tells you something won’t kill you doesn’t mean you should automatically believe it. People are getting sick after trusting spurious apps that purport to use AI to tell you whether a wild mushroom is safe or deadly. Australian scientists tested the top apps and found that the very best one correctly identified dangerous mushrooms only 44% of the time, meaning it missed more than half of the toxic mushrooms it was shown. 44%!
Speaking of dangerous mushrooms (perfect segue 😙🤌), the new app Calmara claims to use AI to identify STIs from an uploaded picture of your or a partner’s “mushroom”. Guys, c’mon. Don’t send a picture of your funky fungi to some company through an app. Talk to a doctor, for god’s sake. Good god.
Quick Hit News to Know
The Apple rumor mill is abuzz with news that, earlier this year, Apple acquired DarwinAI, a startup specializing in using AI to spot manufacturing defects. The rumors are flying that DarwinAI may underpin new consumer AI experiences at Apple, but I’m not buying it. Apple manufactures a lot of tech to exacting specs. This feels like a QC acquisition, not the new Siri.
Underscoring that, Apple is apparently in talks with Google to bring Gemini, Alphabet’s generative AI model, to iOS devices, which I feel like Apple wouldn’t do if it were close to shipping its own generative solution.
Caveat emptor! The “AI-powered” service you’re buying may not be AI-powered at all, and the SEC is cracking down on financial firms that promise AI in their products but don’t deliver.
Until next time, don’t use AI to identify mushrooms or your STIs! See ya next week.
Kyle, John, & Sven