The Barnyard Escapees

Rise and shine. Cedar Point's petting zoo has become the talk of the town as its animals keep making daring escapes. This week, a herd of goats found a hole in the fence and casually strolled out of The Barnyard like they owned the place. The breakout was captured on video, and the goat escapade follows last week's camel adventure, in which Sampson and Artie took an impromptu tour of the park. Cedar Point is now seriously focused on preventing future escapes, working with Honey Hill Farm to step up security, including 24-hour animal surveillance. Park spokesman Tony Clark must be wondering what kind of mischief the animals will get into next.

Top Stories

Neo Bees

Earlier this month, Apple pulled out all the stops to unveil its entry into the AI world, Apple Intelligence, during one of its most hyped product reveals. But the rollout is shaping up to be more exclusive than the initial hype suggested.

Who’s getting it: Apple Intelligence, designed to help users write, express themselves, and get things done seamlessly, will only be available on the latest and priciest devices. Specifically, the beta version launching this fall is limited to the iPhone 15 Pro, iPhone 15 Pro Max, and newer iPad and Mac models with M1 chips or later.

Challenges: The tech also faces significant roadblocks in China, one of Apple's biggest markets. Apple Intelligence relies on OpenAI’s ChatGPT, which isn’t allowed in China due to regulatory preferences for local AI. Apple is negotiating with Chinese companies to find a workaround. Meanwhile, in the European Union, the rollout is delayed amid regulatory scrutiny under the Digital Markets Act, aimed at ensuring compliance and promoting competition among tech giants. Apple is working to address these concerns to bring Apple Intelligence to its European customers without compromising safety.

Why it could work: Interestingly, this cautious rollout might actually be a smart move. Other tech companies, like Meta and Google, have faced user backlash over their AI features being too intrusive. Meta AI has annoyed Facebook and Instagram users by embedding itself into the search bar with no opt-out option. Google had to backtrack on its AI-generated search answers after a particularly embarrassing suggestion involving glue.

By taking a more measured approach, Apple can better manage user expectations and regulatory hurdles, ensuring a smoother integration of Apple Intelligence into its product lineup.

Anthropic

Anthropic has just unveiled its latest generative AI model, Claude 3.5 Sonnet, and it's already making waves. This model can analyze text and images while generating text, outperforming its predecessor, Claude 3 Sonnet, and even the previous flagship Claude 3 Opus on various benchmarks. But let’s not get too carried away—this is more of a steady step forward than a giant leap. In fact, Claude 3.5 Sonnet just barely edges out OpenAI’s latest GPT-4o in some benchmark tests, highlighting the incremental nature of recent AI advancements.

Alongside Claude 3.5 Sonnet, Anthropic is rolling out a new feature called Artifacts, a workspace for users to edit and enhance AI-generated content like code and documents. Currently in preview, Artifacts aims to boost collaboration and knowledge management in the near future.

Speed is another highlight: Anthropic claims Claude 3.5 Sonnet is twice as fast as its predecessor, which is a big win for developers needing quick responses, like those working on customer service chatbots. Its improved ability to interpret complex images and transcribe text from imperfect visuals is a significant leap forward.
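For developers curious to try that image-transcription ability, a minimal sketch using Anthropic's Python SDK might look like the following. The model identifier and the input file are illustrative assumptions, not details from the announcement.

    # Hedged sketch: ask Claude 3.5 Sonnet to transcribe text from an imperfect image
    # via Anthropic's Python SDK (pip install anthropic).
    import base64

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Hypothetical input: a blurry photo of a receipt, base64-encoded for the API.
    with open("blurry_receipt.jpg", "rb") as f:
        image_data = base64.b64encode(f.read()).decode("utf-8")

    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # assumed model ID for Claude 3.5 Sonnet
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "image",
                        "source": {
                            "type": "base64",
                            "media_type": "image/jpeg",
                            "data": image_data,
                        },
                    },
                    {"type": "text", "text": "Transcribe all legible text from this image."},
                ],
            }
        ],
    )

    print(message.content[0].text)  # the model's transcription

Whether the speed claims hold up for a given workload is something teams would want to benchmark themselves against their existing model of choice.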

While tech enthusiasts are excited about these upgrades, Anthropic is keeping an eye on the bigger picture. The company is addressing potential legal challenges regarding the use of training data, a hot topic in the AI community.

Despite the cautious rollout, Anthropic is confident that Claude 3.5 Sonnet will meet business needs more effectively than previous models. With a strategic focus on building an ecosystem around its models, along with tools for steering AI and enhanced integrations, Anthropic aims to stay ahead in the competitive AI landscape. As the capabilities gap between models narrows, Anthropic's focus on utility and efficiency might be the key to setting it apart.

Safe Superintelligence

Ilya Sutskever, one of OpenAI's co-founders, is diving into a new venture with the launch of Safe Superintelligence Inc. (SSI), just a month after parting ways with OpenAI. Joining forces with former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy, Sutskever is laser-focused on AI safety. During his tenure at OpenAI, he spearheaded initiatives to tackle the risks associated with superintelligent AI.

Sutskever has been ringing alarm bells about the potential perils of advanced AI for years. In a 2023 blog post, he forecasted that AI with intelligence surpassing humans could emerge within the next decade and might not be so friendly. This cautionary stance is the backbone of SSI's mission. In a tweet, Sutskever declared that SSI's singular focus on safety and capabilities would drive their product roadmap, promising revolutionary solutions through cutting-edge engineering and scientific breakthroughs.

Breaking away from OpenAI’s nonprofit roots, SSI is designed from the get-go as a for-profit entity. With hubs in Palo Alto and Tel Aviv, SSI is on the lookout for top-notch technical talent. Daniel Gross is confident that raising funds won't be an issue, thanks to the team's stellar credentials and the current buzz around AI. While Sutskever remains mum on SSI’s funding details, the company’s dedicated focus on AI safety is setting it up to be a major player in the AI industry.

Perplexity

Perplexity, the AI search engine with a billion-dollar valuation and backers like Jeff Bezos and Nvidia, is catching serious heat on social media. The controversy? Allegedly ripping off publishers' work without giving proper credit. Here’s the rundown:

  • The Drama Begins: On June 6, Forbes published an explosive article on Eric Schmidt's AI-drone startup. The very next day, Perplexity published an AI-generated webpage summarizing the story through its "Perplexity Pages" feature and pushed it to subscribers. Forbes' executive editor John Paczkowski was quick to call them out on X (formerly Twitter), accusing Perplexity of copying their reporting and burying their citations under other sources, including a Business Insider article that referenced Forbes' work.

  • Perplexity's Response: CEO Aravind Srinivas admitted the product has "rough edges" and acknowledged that sources should be more visible. The company promptly updated the webpage to highlight Forbes' contributions, but an AI-generated podcast still omitted Forbes from the audio. Forbes then accused Perplexity of copying articles from CNBC, Bloomberg, and other outlets. Srinivas defended Perplexity, describing it as an information aggregator rather than a content thief, and thanked Paczkowski for pointing out the issue.

  • Forbes' Legal Threats & Wired's Investigation: Forbes has demanded that Perplexity revise its citation practices and reimburse them for any ad revenue generated from their content, threatening legal action if they don't receive a response within 10 days. Meanwhile, Wired's investigation revealed that Perplexity's AI was paraphrasing their stories and bypassing anti-scraping measures. They even found a "secret IP address" linked to Perplexity. Srinivas responded by saying Wired misunderstood how Perplexity and the internet work. This AI saga is heating up, so stay tuned for more twists and turns!

Gif of the day



More Interesting Reads…

Insight of the day…

“I believe this artificial intelligence is going to be our partner. If we misuse it, it will be a risk. If we use it right, it can be our partner.”

Masayoshi Son