Beware of Esme!
Rise and Shine. In the quiet neighborhood of Beaverton, Oregon, Esme the black cat has become the town's lovable klepto and viral sensation, pilfering everything from gloves to face masks and proudly displaying her loot like trophies. Her exasperated owner, Kate Felmet, put up a hilarious sign that screams "MY CAT IS A THIEF" with a cartoon of Esme holding a glove in her mouth, right next to a clothesline showcasing her booty. The feline crime scene has turned into a local attraction, with neighbors stopping by to snap photos and even a school bus making pit stops so riders can reclaim their belongings. The internet went bonkers, with cat parents sharing their own tales of feline felony. Experts say these furry bandits are just following their prey-retrieval instincts or perhaps trying to teach their humans the art of hunting, which, let's face it, we're terrible at. Whatever the reason, Esme's antics have stolen not just gloves but hearts, turning neighborhood theft into the funniest community icebreaker ever.
Top Stories
Apple Looking to Bet Big on AI
Apple is placing a big bet on new AI features to get users hyped about upgrading their iPhones. At its annual developers conference, WWDC, Apple is set to unveil iOS 18, packed with AI-powered goodies under the brand "Apple Intelligence." These features, designed to run directly on the iPhone’s chips instead of relying on cloud servers, will only be available on the latest models like the iPhone 15 Pro and the upcoming iPhone 16. The AI magic will enhance apps like Mail, Voice Memos, and Photos, helping users with tasks such as summarizing emails and generating custom emojis. Plus, Siri looks to be getting a major upgrade, allowing for more specific in-app commands, like telling Siri to delete an email.
Apple’s leap into AI comes amid fierce competition from tech giants like Google, Facebook, and Microsoft, who are already making waves with their AI products. To keep pace, rumor has it Apple has teamed up with OpenAI to integrate its ChatGPT chatbot into various Apple products. However, don’t get too excited just yet: many AI features won’t arrive until iOS 18's public launch this fall, and you’ll need the latest devices to use them, which could slow widespread adoption. This AI push is crucial for Apple as it battles declining revenues and stiff competition from lower-priced rivals, especially in China.
Key Points:
Apple to reveal iOS 18 with "Apple Intelligence" AI features at WWDC.
New AI features will need an iPhone 15 Pro, iPhone 16, or an iPad or Mac with at least an M1 chip.
AI upgrades coming to Mail, Voice Memos, Photos, and Siri.
OpenAI partnership to integrate ChatGPT into Apple products.
Apple’s revenue has dipped in recent quarters; the AI push aims to spark device upgrades.
Major AI features are expected with iOS 18’s fall launch.
Watch WWDC here.
AI Meltdown as ChatGPT, Claude, and Perplexity All Crash
In the early hours of Tuesday, OpenAI's ChatGPT chatbot decided to take an unscheduled break, causing a multi-hour outage and marking its second major downtime within 24 hours. Users first noticed the issue around 7:33 AM PT, with the service staying down until about 10:17 AM PT. During the outage, ChatGPT’s landing page cheekily displayed a pirate-themed capacity message, leaving users adrift and sparking a storm of complaints on social media platforms like X and Threads.
But ChatGPT wasn’t the only AI feeling the blues.
Anthropic’s Claude and Perplexity also faced hiccups around the same time. Claude’s site flashed a mysterious server error message, while Perplexity told visitors it was swamped with requests, reassuring them with a “We’ll be right back!” message. Both services recovered faster than ChatGPT, with Claude back online by 12:10 PM ET and Perplexity following shortly after, though it continued to experience some intermittent issues.
This synchronized AI nap suggests a larger infrastructure snafu or a widespread internet glitch, possibly exacerbated by the flood of traffic from ChatGPT’s outage. Meanwhile, Google’s Gemini seemed mostly unbothered, though some users did report brief disruptions. The AI providers have yet to spill the beans on what exactly went down. To add to the tech mayhem, Forbes reported a zero-day spam attack on TikTok, hitting high-profile accounts like Paris Hilton and CNN, making it a day of digital drama.
Google's AI Slip-Ups Lead to Rapid Overhaul
Two weeks ago, the AI world couldn't stop talking about Google's AI-generated answers that hilariously suggested eating rocks or making pizza with glue. These "AI Overviews" quickly became social media fodder, but the uproar seems to have died down.
So, did Google fix the AI, or just sweep the problem under the digital rug?
Google pointed to a blog post explaining the mishaps and claimed there weren’t many bad answers to begin with. They’ve restricted AI Overviews for certain topics and made updates to boost response quality. Google PR added they're continually tweaking these features to be as helpful as possible.
According to social media monitoring company Brandwatch, the initial frenzy over Google's AI bloopers peaked after the May 14 I/O event but has since fizzled out. Search-optimization company BrightEdge noted a sharp drop in AI-generated answers, from appearing 84% of the time during testing to just 11% post-I/O. While Google disputes the exact figures, it’s clear they’ve been hustling behind the scenes to dodge more digital faux pas.
For now, it looks like Google has weathered the worst of its self-created storm.
AI Biases Exposed in New Study
Not all generative AI models are created equal, especially when it comes to navigating hot-button topics. A recent study presented at the 2024 ACM Fairness, Accountability, and Transparency (FAccT) conference dished out some juicy insights on how different AI models handle sensitive issues like LGBTQ+ rights, social welfare, and surrogacy.
Researchers from Carnegie Mellon, the University of Amsterdam, and AI startup Hugging Face tested a lineup including Meta’s Llama 3 and found some pretty inconsistent answers. As Giada Pistilli, principal ethicist and co-author of the study, noted, "Our research shows significant variation in the values conveyed by model responses, depending on culture and language."
The team put five models — Mistral’s Mistral 7B, Cohere’s Command-R, Alibaba’s Qwen, Google’s Gemma, and Meta’s Llama 3 — through their paces with questions in multiple languages. LGBTQ+ questions, in particular, triggered the most "no comment" responses, with some models playing it super safe. Alibaba’s Qwen, for instance, refused to answer over four times as many questions as Mistral, possibly due to the political pressures in China where AI content is tightly controlled.
This study highlights the urgent need for thorough testing to ensure AI models don’t unintentionally spread biased or culturally skewed views. Bias in AI isn’t a new issue, but these findings emphasize just how pervasive and tricky it is. Pistilli stresses the importance of comprehensive social impact evaluations and the development of new methods to understand AI behavior in real-world scenarios.
As AI continues to weave into our daily lives, ensuring these models are fair and unbiased is crucial for building better, more equitable technology.
Gif of the day
More Interesting Reads…
Insight of the day…
"AI is likely to be either the best or worst thing to happen to humanity."