No Way, San Jose
It’s good, but I’m not happy about it.
This week, we’ve got an AI homelessness detection system, the feds greenlighting AI in government, a hilarious autocorrect-sounding propaganda video, and Elon being sort of right about something? Let’s get into it. As always, send us your thoughts/tips at ai@goodbad.ai—or you can just reply to this email.
The Good
Grok Goes Open-Something
I can’t believe I’m saying this, but I... agree with... Elin Mosk. No, wait. Elong Mask? I agree with Elon Musk! There, fine, bleh. I’m not referring to his freak views on race, or “free speech”, or just about anything, really, but to his views on transparency in AI. What happened is that Elon’s xAI company announced that their Grok chatbot will be open-source, making them one of the few AI outfits pursuing that approach. In this case, what they’ve released are the source code and the model weights, which I guess makes it open-weight, technically. This goes a lot further than, say, OpenAI is willing to go, but we still don’t have the actual sources of the data it was trained on. I’m assuming it’s not only tweets, uh, posts, but nonetheless, we have more information about Grok than we do for most LLMs.
The move is clearly part of the larger spat between Elon and OpenAI over how “open” AI should be and its profit motivation (disregard Elon’s cringe “ClosedAI” joke). Now, Elon hasn’t exactly been perfectly consistent in either of those areas, as we’ve mentioned before, but one thing Elon is definitely good at is changing the conversation. Who better to make the public think about a thing and drag piles of money along with him than the One, True bored ape, Elon Musk? If he’s gonna tweet a bunch of “great replacement theory” garbage, the least he can do is also get people interested in AI transparency.
If you believe that AI has the potential to upend nearly every facet of life as we know it, then the public has the right to know what goes into these things and how they work (to say nothing of government funding). Having a large, open community of people working on your system helps ensure that issues will be uncovered, bugs will be fixed, and the platform will have the trust and resilience to last.
Proponents of closed-source AI say that China/the boogeyman is gonna steal our tech and use it against us if we go open source, to which I say: if China wants to copy our AI that badly, they’ll do it regardless of whether we let them. Making AI closed-source isn’t going to stop China from mastering AI; it’s just going to let a handful of companies monopolize it. So it may only be a small step in the right direction, but I’ll give Elon this one point: our future robot overlords should be transparent, even if it takes some time to get there.
The Bad
San Jose Is Doing Robocop Shit
San Jose, California has started a pilot program using AI to have cameras on municipal vehicles take images that will be, per The Guardian, “fed into computer vision software and used to train the companies’ algorithms to detect the unwanted objects.” Unwanted objects? Like what, you ask? You know, potholes, overgrown trees, normal municipal stuff, says San Jose. Oh, and the homeless. Haha, almost forgot!
To collect the data for the AI, they’re sending these Google-Street-View-From-Hell cars into areas known to have homeless people. Another way of saying that is that we already know where the homeless people are in San Jose. Remind me again why we need AI to help detect lived-in cars or tents? In addition to being incredibly dystopian, this is another entry in the long tradition of AI Innovations being a solution in search of a problem.
San Jose says it’s working to maintain people’s privacy and that the program is meant to help the city provide services to people (towing the cars people live in), but they also want this to be a scalable (sellable) technology for other cities. It doesn’t take a superintelligence to guess how this is going to get used when most cities don’t have big budgets for homeless services.
This is also happening at the same time as California’s recently passed Proposition 1, which supporters say is a necessary overhaul of the state’s homeless programs and opponents say will create a system of locked-door mental institutions. However Prop 1 affects the homelessness situation in California, I feel confident saying that a system with the potential to recreate old-timey, out-of-sight, out-of-mind lunatic asylums should not be armed with an AI-powered surveillance state.
Guys, we have enough anti-homeless stuff and creepy surveillance tech. Please don’t create the Torment Nexus.
Bonus: Referring to other potential uses for the aforementioned torment nexus, “The target objects could expand to include lost cats and dogs,” said one official. “Give me a break,” said me.
The AI
The Weird, Wild, and Unnerving Side of AI
In a story almost dystopian enough to be our “bad” story of the week, a woman has come forward amidst the firestorm of comments about an AI-generated influencer on TikTok to say that she’s the model and that she authorized it, getting paid something like a couple hundred bucks to license her entire likeness to talk about whatever the company wants. Can you say AI Gig Economy?
And so the Amazon AI race to the bottom begins. A man took to X/Twitter to share the crockpot recipe book his parents gave him and his wife, and it seems to be almost definitely completely AI-generated. AI recipes actually frighten me a little. Call me old-fashioned, but I’d like a human to have at least tried the food before I make it for my kids.
This article about AI images taking over Boomer Facebook Groups is interesting, but really, just click on the story to see the wild image of Shrimp Jesus.
In another Torment Nexus story, researchers are predicting a future where all the AIs get together and become basically The Borg from Star Trek.
Quick Hit News to Know
You could use AI to hunt for homeless people, or you could use it to count hedgehogs. The National Hedgehog Monitoring Programme (I love this organization already) is training AI-enabled cameras to count hedgehogs and support conservation efforts. More of this, please.
Amazon drops $2.75 billion on Anthropic, makers of the Claude AI chatbot. Anthropic was started by former OpenAI folks, including siblings Daniela and Dario Amodei. The Amodeis may not have the same ring as the Winklevoss twins, but then again, Bezos is giving the Amodeis $2.75B while the Winklevosses’ Gemini exchange is shelling out $1.1B to customers after a settlement, so you be the judge.
Prez Sez to step up the use of AI, but to vet the crap out of it. The OMB just released the first government-wide guidance on how federal agencies should use AI. This could be a real big opportunity, as the feds have been somewhat skittish about AI so far.
One day, perhaps not too far off, AI-generated video is going to be a really cool addition to filmmakers’ toolboxes. Until then, we’ve got Sora, OpenAI’s generative video model. OpenAI had a bunch of filmmakers use Sora to show off what it can do, and the results are... fine. Lots of potential, but still very much in the “mobile game ad on Twitter with a community note saying it sucks” stage right now.
Speaking of overhyped, mediocre-to-the-point-of-comedy videos, China is making anti-American propaganda using AI video generation. Don’t miss the part where a giant hand crushes a guy.
Oh look, here’s Elon backtracking on his 2023 worries that AI tech is moving too fast and his call for a pause on AI development. This surely has nothing to do with Grok being released.
Whatever you do, I beg you, please don’t create the Torment Nexus.
John & Kyle