Waymo 1, Phoenix PD 0
Rise and shine. In Phoenix, a Waymo self-driving car decided to have a little adventure, running a red light and cruising into oncoming traffic like a daring stunt driver. The car finally found its way to a parking lot, where a puzzled police officer approached, ticket book in hand, only to find no one inside. The whole scene was caught on the officer's bodycam, which captured his bemusement at having no driver to cite. The dispatch notes humorously reported that the car "FREAKED OUT," leaving the officer scratching his head and chuckling, unable to issue a citation to a rogue robot. Waymo later explained that its adventurous car got spooked by some tricky construction signs and promised its robo-cars will stick to the straight and narrow next time.
Top Stories
OpenAI recently experienced a security breach that has raised eyebrows across the industry.
The incident was first mentioned by former employee Leopold Aschenbrenner on a podcast, making it sound like a major scare. However, sources clarified that the hacker only accessed an employee discussion forum.
While this might seem like a minor hiccup, it’s a stark reminder that AI companies have painted targets on their backs. With vast amounts of high-quality training data, billions of user interactions, and sensitive customer info, these firms are a goldmine for cyber intruders.
Despite OpenAI’s reassurances that no critical systems or sensitive models were compromised, this breach is a wake-up call. It highlights just how valuable and vulnerable AI companies have become.
Those meticulously curated training datasets? They're pure gold for competitors and adversaries. And the treasure trove of user data, encompassing billions of conversations, offers deep insights that marketers, analysts, and developers would love to get their hands on.
As AI firms like OpenAI navigate this increasingly risky landscape, they need to amp up their cybersecurity measures. Robust security protocols are now more essential than ever.
The industry must stay vigilant because, as they say in the tech world, it’s not a matter of if, but when the next attack will come.
The tech industry is in the midst of an all-out war for top AI talent.
Companies like Google, Microsoft, OpenAI, and Meta are fiercely competing to lure the best minds in artificial intelligence with sky-high compensation packages.
The real winners? The AI professionals, who are cashing in big time.
Ram Srinivasan, a future of work expert and managing director at consulting firm JLL, told Business Insider that the race for AI talent is creating a "promising landscape for tech professionals." With companies offering staggering compensation, AI experts are seeing their value soar.
Some of the eye-popping offers include:
- Total compensation exceeding $1 million, with substantial equity stakes
- Entry-level AI engineers in the US earning $300,600 as of March 2024, up from $231,000 in August 2022
- Median total compensation of $140,823 for machine learning and AI software engineers
According to PwC's 2024 Global AI Jobs Barometer, workers who master AI skills are poised for bright futures, even as AI reshapes the job market. The demand for AI expertise is driving salaries through the roof, with US-based job postings for AI roles offering a 25% wage premium. Levels.fyi reports show that entry-level AI engineers now earn 8.6% more than their non-AI counterparts, while senior AI engineers command nearly 11% more.
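For the number-curious, here's a quick sanity check on those figures. The dollar amounts below come straight from the story; the percentage calculation and the $150k base salary used to illustrate PwC's 25% premium are our own back-of-the-envelope additions.

```python
# Back-of-the-envelope math on the compensation figures quoted above.
# The pay figures come from the story; the $150k base is hypothetical.

entry_level_aug_2022 = 231_000  # entry-level AI engineer pay, August 2022
entry_level_mar_2024 = 300_600  # entry-level AI engineer pay, March 2024

growth = (entry_level_mar_2024 - entry_level_aug_2022) / entry_level_aug_2022
print(f"Entry-level AI pay growth over ~19 months: {growth:.1%}")  # -> 30.1%

def with_premium(base: float, premium: float) -> float:
    """Gross up a base salary by a percentage premium."""
    return base * (1 + premium)

# PwC's reported 25% AI wage premium applied to a hypothetical $150k role:
print(f"${with_premium(150_000, 0.25):,.0f}")  # -> $187,500
```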
As the AI gold rush continues, tech professionals with the right skills are finding themselves in a prime position to negotiate top-tier pay and benefits.
It seems like artificial intelligence is everywhere these days, but some of the biggest companies are finally acknowledging the risks it poses.
Since OpenAI unveiled ChatGPT to the public in November 2022, AI has dominated the conversation.
Companies like Google, Meta, Microsoft, and others have been pouring money into their AI projects, eager to lead the charge in this new frontier. But amidst the AI rush, these industry leaders are starting to quietly admit that there might be some clouds on the horizon.
In its 2023 annual report, Google’s parent company, Alphabet, pointed out that AI products and services "raise ethical, technological, legal, regulatory, and other challenges, which may negatively affect our brands and demand." Similarly, filings from Meta, Microsoft, and Oracle to the Securities and Exchange Commission have all flagged concerns about AI, with Microsoft highlighting that their generative AI features "may be susceptible to unanticipated security threats from sophisticated adversaries."
Meta’s 2023 annual report delves deeper into the potential pitfalls, listing misinformation (especially during elections), harmful content, intellectual-property infringement, and data privacy issues as risks associated with generative AI.
These concerns are not just theoretical—there's a real fear that AI could harm users and expose companies to litigation. The public, too, has voiced worries about AI making some jobs obsolete, the use of personal data for training large language models, and the spread of misinformation.
Adding fuel to the fire, a group of current and former OpenAI employees signed a letter on June 4, urging tech companies to do more to mitigate AI risks and protect employees who raise safety concerns. The letter outlined fears ranging from "the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction."
This serves as a stark reminder that while AI holds incredible potential, its development and deployment must be handled with great care.
In the age of generative AI, the line between innovation and intellectual property infringement can get pretty blurry.
Enter Perplexity AI, a startup that's been making waves with its unique blend of a search engine and a language model. Unlike the makers of ChatGPT and Claude, Perplexity doesn't train its own models; it uses open or commercially available ones to generate detailed responses from internet-sourced content.
Sounds cool, right? But hold onto your hats—recent accusations suggest Perplexity might be skirting the edges of ethical boundaries.
In June, Forbes and Wired threw some serious shade at Perplexity. Forbes accused the startup of plagiarizing its news articles, while Wired claimed Perplexity was sneaking past web protocols to scrape content illicitly. Despite these claims, Perplexity, backed by the likes of Nvidia and Jeff Bezos, insists it’s playing by the rules. The company argues that summarizing content directly requested by users doesn’t constitute web crawling, and they maintain they honor publishers' requests not to scrape content. But, as Wired discovered, when you ask Perplexity to summarize a URL, its bot shows up like an uninvited guest at a party, sometimes copying text verbatim.
So, where does this leave us? AI’s rise is undeniably thrilling, but it also comes with a truckload of ethical dilemmas. Perplexity’s situation spotlights the nuances of the Robots Exclusion Protocol and the grey areas of fair use in copyright law. The startup is treading a fine line, aiming to offer innovative AI-driven summaries without crossing into plagiarism territory. As AI continues to evolve, the industry must grapple with these challenges to ensure technology enhances our lives without trampling on creators' rights.
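For readers wondering what the Robots Exclusion Protocol actually looks like in practice, here's a minimal sketch using Python's standard-library robotparser. The publisher rules and bot names are hypothetical, and real robots.txt files are often more elaborate; the point is simply that a compliant crawler checks these rules before fetching a page, which is exactly the courtesy Wired says Perplexity's bot skipped.

```python
# A minimal sketch of the Robots Exclusion Protocol, assuming a
# hypothetical publisher's robots.txt and made-up bot names.
from urllib import robotparser

ROBOTS_TXT = """\
User-agent: ExampleBot
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())  # parse the rules without a network fetch

url = "https://publisher.example/articles/some-story"
print(rp.can_fetch("ExampleBot", url))  # False: ExampleBot is banned site-wide
print(rp.can_fetch("OtherBot", url))    # True: only /private/ is off-limits
```

Worth noting: robots.txt is a voluntary convention, not an enforcement mechanism, which is why the dispute over Perplexity's behavior is about ethics and norms rather than technical access control.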
Gif of the day
More Interesting Reads…
Insight of the day…
“To say that AI will start doing what it wants for its own purposes is like saying a calculator will start making its own calculations.”