AI Ads & Deepfakes
Hey ChatGPT, can you write me an intro?
"Certainly, here is a possible introduction." Hey, if an AI-written intro is good enough for the editorial team at a prestigious peer-reviewed scientific journal, why not you, dear reader? This week we're looking at states that are beginning to restrict AI usage in election marketing, as well as the insidious deepfake porn industry. Let's get into it. As always, send us your thoughts/tips at ai@goodbad.ai, or you can just reply to this email.
The Good
Illegal Meme Squad
As a follow-up to last week's depressing news about AI Election Misinformation, it looks like some state governments are responding to the issue, and unrelatedly, pigs have been seen flying overhead. Idaho's and Georgia's Senates both pushed forward laws that address AI-created audio, video, and image content in elections. Idaho's legislation would require a disclaimer but rely on lawsuits in civil courts to enforce the law. Georgia's proposal, on the other hand, would make the undisclosed deceptive use of AI by political actors a felony. With a penalty of 2 to 5 years in prison! Anybody know if the person who made the TSwift image below lives in Georgia? I just wanna talk.
The constitutionality of these laws will inevitably be challenged, and their value will ultimately have a LOT to do with the courts' definitions of "deceptive" and "intentional." But while it's possible that the courts will undercut the laws' efficacy, it's important to note that both of these bills are bipartisan (what?! really?!), so that's good news for similar laws that may be developing in other states.
The Bad
There's probably a Taylor Swift lyric that's perfect for this section header, someone reply and tell me
You may have seen news swirling around recently about AI-generated images of Taylor Swift. In a truly depraved display of the dark side of AI, the images ran the gamut from election denialism to deepfake pornography. One silver lining was that these incidents prompted a rush of interest in laws around the country concerning non-consensual AI-generated images of women and children. The urgency of these bills' advancement is nice to see, but this is not a new problem, or one that's going away anytime soon.
With the proliferation of increasingly realistic AI generation (with some notable exceptions; RIP, Kate Middleton), making convincing deepfakes on demand is getting cheaper. The danger of this is obvious. Deepfake sex tapes of celebrities or political figures would be bad enough, but cheap, fast deepfake pornography of exes, crushes, or rejections seems just around the corner, and with the spotty legal protections against revenge porn, it's hard to see legislators responding quickly to this potential deluge of sexual AI generations.
The AI
The Weird, Wild, and Unnerving Side of AI
If you only click on one link in this email, please click on this one. As a heartfelt tribute to their local burrito chain, somebody used AI to make a psychotic, Hot Topic-looking-ass video backed with an equally insane song. For some reason, the restaurant has not embraced this, but they do have their own Burrito Video on their website, which, while definitely not made with AI, is still just as uncanny and wonderfully weird.
If you only click on two links in this email, make this your second: We wanna do the fun stuff.
One of my favorite AI-related movies, I, Robot, is set in the year 2035. In 2004, when it was released, that seemed way too soon for mobile, articulate, thinking robots. It doesn't anymore, not after the demo video of Figure 01, a robot powered by OpenAI.
Donald Trump has taken to calling any legitimate video of himself that he doesn't like "AI." I shouldn't have expected any different.
Just because an app tells you something won't kill you, don't automatically believe it. People are getting sick after trusting spurious apps that purport to use AI to tell you whether a wild mushroom is safe or deadly. Australian scientists tested the top apps and found that even the best one correctly identified dangerous mushrooms only 44% of the time. 44%!
Speaking of dangerous mushrooms (perfect segue), the new app Calmara claims to use AI to identify STIs by letting you upload a picture of your or a partner's "mushroom." Guys, c'mon. Don't send a picture of your funky fungi to some company through an app. Talk with a doctor, for god's sake. Good god.
Quick Hit News to Know
The Apple rumor mill is abuzz with the news that earlier this year, Apple acquired DarwinAI, a startup specializing in using AI to identify manufacturing defects. The rumors are flying that DarwinAI's tech may underpin new consumer AI experiences at Apple, but I'm not buying it. Apple manufactures a lot of tech to exacting specs. This feels like a QC acquisition, not the new Siri.
Underscoring that, Apple is apparently in talks with Google to leverage Gemini, Alphabet's generative AI, on iOS devices, which I feel like Apple wouldn't do if they were close to their own generative solution.
Caveat emptor! The "AI-powered" service you're buying may not be AI-powered at all, and the SEC is cracking down on financial firms that promise AI in their products but don't deliver.
Until next time, don't use AI to identify mushrooms or your STIs! See ya next week.
Kyle, John, & Sven