ChatGPT-4o is better at jokes and worse at nuance than I want it to be
A special preview of our paid newsletter.
Hey friends, Kyle here with a behind-the-scenes look at this week’s newsletter and the AI we use to build it. We’ve just moved over to Substack, and we’ll be doing this every so often as part of our paid newsletter here. Today, everyone’s getting a sneak peek. If you want more of this, you can subscribe now for 50% off, making our membership just $2.50 a month or $25/year. There are other benefits, too, but for now, let’s take a look at our AI process.
A quick summary of how tGtB&tAI is produced
Up until last week’s issue, we had been making this newsletter the old-fashioned way: researching, writing, editing, and sending it by hand. Each issue took about 12 hours altogether to put out.
The July 9th edition took just an hour and a half. How? Well, AI.
I was introduced to the software Cassidy at work. Cassidy is an incredibly powerful tool for businesses to leverage AI: building chatbots that reference an internal database of content, automating content workflows, and more. But I wanted to see if I could use Cassidy for something a little more oddball.
So for the past month I’ve been tweaking an AI in Cassidy we’ve called Hux (named after Aldous Huxley, the author of Brave New World).
Cassidy has a workflow feature where you tell it in plain language what you want AI to do and it builds a workflow to achieve it. I described our newsletter, fed it sample content from the most recent issues, and it spit out a hell of a good option on just its third try.
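For the curious, the core of that kind of workflow is simpler than it sounds. This isn’t Cassidy’s actual code (their builder is no-code, and I don’t know what runs under the hood); it’s just a minimal sketch of the same idea using OpenAI’s Python SDK and GPT-4o. The prompt text, the draft_story function name, and the temperature setting are all my own assumptions for illustration.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# A plain-language description of the newsletter's voice, roughly like what I gave Cassidy.
SYSTEM_PROMPT = (
    "You write short, snarky blurbs for an AI newsletter with Good/Bad/Roundup sections. "
    "Summarize the article, add jokes, and keep a skeptical edge."
)

def draft_story(article_text: str) -> str:
    """Ask the model for one newsletter-style blurb about a pasted-in article."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Write a blurb about this article:\n\n{article_text}"},
        ],
        temperature=0.9,  # a bit of randomness helps the jokes
    )
    return response.choices[0].message.content
```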
Here’s the AI’s third attempt at a news story with some snarky commentary influenced by our previous issues:
Consulting Firms' AI Boom: The New Snake Oil Salesmen
Just when you thought consulting firms had run out of things to charge exorbitant fees for, AI came strutting along like a gift-wrapped unicorn. The New York Times reports that businesses, jittery about the latest tech trends, have flocked to consulting giants like Boston Consulting Group, McKinsey, and KPMG. These firms are now the early winners of the AI boom, peddling generative AI strategies like it’s the newest elixir for corporate success.
Picture this: Your company's IT guy, who still can't find the power button on a Mac, is now leading an AI transformation initiative. Enter the consultants, ready and willing to “guide” you for the low price of your annual budget. Need your employees trained? No problem. The consulting firm can do it. Need to integrate AI into your customer service? Easy, for a few thousand dollars more. And if it all goes south, at least you can say you were on the cutting edge—right before you cut half your staff to pay for it.
But hey, at least these consulting firms are providing a service that keeps everyone’s job interesting. Your Monday morning meeting just went from discussing paperclip budgets to pondering the existential questions about the future of AI in your industry. So, here's to the new age of AI—a time when even the clueless get to feel like tech innovators, guided by the handsomely paid prophets from McKinsey.
The jokes were pretty funny and it had the edge I was looking for, but in a little dystopian twist, Hux had considered this the “Good” story of the week. We’ll go deeper in future paid-subscriber posts about how we built and tweaked Hux, but for now I want to show you what Hux produced this week and how it compared to what ended up in your inbox.
Here’s Hux’s output this week compared to the final newsletter.
THE GOOD
Hux:
Imagine swapping your third soda of the afternoon for water with lemon because your AI health coach told you it would be a good idea. No, we're not talking about your mom's passive-aggressive texts, but rather an AI-driven initiative that's transforming healthcare as we know it. According to TIME, the latest wave in AI technology might just make your Fitbit look like a fossil.
Turns out, these AI health coaches can make really precise recommendations tailored to each individual. Whether it's suggesting you take a 10-minute walk with your child to burn off some steam or swapping that calorie-heavy coffee for a green juice, these smart systems seem to know you better than your Netflix recommendations. They might even put your favorite diet guru out of business.
But let’s take a step back—this sounds a bit dystopian, right? An AI that knows your every unhealthy habit and nudges you towards better choices? What next, an AI that can detect when you're lying about sticking to your diet? Oh wait, we might be giving them ideas.
The upside is huge, though. With a smirk, we say goodbye to the days of generic health advice and embrace a future where our AI buddies cheer us on towards healthier lives. So, don’t be surprised if your AI health coach becomes your new BFF, minus the judgmental looks you get from actual people.
In a world where AI continuously gets a bad rap for job-stealing and data breaches, it’s refreshing to see a story where AI might quite literally save your life—or at least make it a bit more bearable.
Here’s to a future of AI-driven health nudges—because if anything can convince you to swap soda for water, it might as well be an algorithm.
Not bad, but Hux was mostly repeating the article, jokes and all, and missed the context that the article was an editorial penned by the very founders of the company. All in all, he sounded a little like a salesman for Thrive. I decided to take out most of the ending that just sounded sales-pitchy and take a slightly longer look at what will worry most people: the data privacy side.
Here’s the final:
Imagine swapping your afternoon coffee for water with lemon because you got a text to remind you of your health goals. No, we're not talking about your mom's passive-aggressive missives, but rather an AI-driven initiative that's transforming healthcare as we know it. According to Sam Altman and Arianna Huffington writing for TIME, the latest wave in AI technology might just make your Fitbit look like a fossil.
Huffington and OpenAI’s new venture, Thrive AI Health, looks to use AI health coaches to make really precise recommendations tailored to each individual. Whether it's suggesting you take a 10-minute walk after picking your kid up from school or swapping that calorie-heavy coffee for a green juice, these smart systems are being designed to know you better than your Netflix recommendations. They might even put your favorite diet influencer out of business.
While giving any company a huge amount of personal data as sensitive as your health info is a little scary, the upside is huge. In a world where AI continuously gets a bad rap for job-stealing and training on copyrighted data, it’s refreshing to see AI applied in a way that improves lives and maybe even mental health. A non-judgmental AI assistant might help people get over the shame hump of getting help from a real person.
Now, on to The Bad section.
THE BAD
Hux’s first pass:
In the latest episode of "What Fresh Hell Is This?", even Google's experts are sounding alarms about the potential for AI to distort reality to unfathomable extents. CNET reports that high-ranking officials within Google are warning that AI-generated content, from deepfakes to photorealistic fabrications, could fundamentally alter our perception of what's real. It seems we're just one algorithm away from mistaking a toaster for the President of the United States.
The commentary from Google’s experts paints a rather grim picture: AI's ability to manipulate visuals and text is reaching a point where fact-checking could become as practical as herding digital cats. This isn't your run-of-the-mill fake news; it's an entire alternate reality constructed with machine-learned sleight of hand. When you can no longer trust your eyes, what do you rely on? Your gut feeling that the moon landing probably didn't involve a green screen?
So, what's the game plan? Google's answer seems to be a combination of improving digital literacy and deploying more sophisticated AI to catch AI-generated lies—a sort of Skynet policing Skynet. However, this feels a bit like handing out fire extinguishers after the building is already engulfed in flames.
In a world where our grasp on reality is as slippery as a greased algorithm, we might just have to face the possibility that our future is one big, elaborate game of digital "Gotcha!"—and the AI always wins.
Again, not bad, but this time the jokes fell a little flat for me, so I edited or removed them and added a little more commentary to the end.
Here’s the final:
In the latest episode of "What Fresh Hell Is This?", Google's experts are sounding alarms about the potential for AI to distort reality to unfathomable extents. CNET reports that high-ranking officials within Google are warning that AI-generated content, from deepfakes to photorealistic fabrications, could fundamentally alter our perception of what's real.
The commentary from Google’s experts paints a rather grim picture: AI's ability to manipulate visuals and text is reaching a point where fact-checking could become impractical, gish-galloping us with misinformation on a global scale. When you can no longer trust your eyes, what do you rely on? Your gut feeling that the moon landing probably didn't involve Stanley Kubrick? Honestly the shots would have been framed better.
So, what's the game plan? Google's answer seems to be a combination of improving digital literacy (which can’t possibly move fast enough) and deploying more sophisticated AI to catch AI-generated lies (which I’m not currently optimistic about, given Google wasn’t even able to catch its own AIs recommending glue on pizza).
In the AI roundup section, I borrowed from several of the most recent tests and used about half the jokes straight out of the AI. Hux wrote the jokes about your cat asking for the WiFi password, “Trust me bro,” and the Nigerian Prince. For the others, I was influenced by his draft but tweaked the jokes so they (hopefully) land a little better.
Interestingly, Hux often linked to the same stories he summarized above, along with several press releases, basically turning himself into a shill for AI companies.
That’s where I’ll leave us today, with these parting thoughts: I have been pretty impressed with Hux’s ability to craft a decent joke. I laughed out loud at several he wrote over the last month of testing. But at the same time, Hux does a terrible job parsing context and nuance. A press release and an op-ed read the same to him as a scholarly article, and he’s wont to take all of them at face value. Interestingly, that may be his most human trait, but it’s one I hope to eventually train out of him.
If you want to stick around for more of these breakdowns and additional commentary from us, don’t forget to subscribe.
Until next time,
Kyle