The Stanford Institute for Human-Centered AI publishes its Artificial Intelligence Index Report 2024, one of the most authoritative sources for data and insights on #AI.
Link to the full report:
Below are its top 10 takeaways:
xAI launched Grok-1.5V, a new multimodal AI that can understand text, images, diagrams, and more. xAI claims Grok-1.5V outperforms competitors on key benchmarks such as RealWorldQA.
Self-driving cars do not get drunk, they do not fall asleep, they do not get distracted by text messages, and experts and manufacturers agree they could be the answer to slashing the road toll.
It’s one of the reasons why autonomous vehicles are in the spotlight again, with Tesla promising to unveil a robotaxi in August and Hyundai showing off the results of its driverless car trial in Las Vegas.
But debate is raging in the industry over whether the technology is or will ever be ready to drive in busy, unpredictable environments without any human oversight.
Artificial intelligence (AI) systems, such as the chatbot ChatGPT, have become so advanced that they now very nearly match or exceed human performance in tasks including reading comprehension, image classification and competition-level mathematics, according to a new report (see ‘Speedy advances’). Rapid progress in the development of these systems also means that many common benchmarks and tests for assessing them are quickly becoming obsolete.
These are just a few of the top-line findings from the Artificial Intelligence Index Report 2024, which was published on 15 April by the Institute for Human-Centered Artificial Intelligence at Stanford University in California. The report charts the meteoric progress in machine-learning systems over the past decade.
In particular, the report says, new ways of assessing AI systems — for example, evaluating their performance on complex tasks such as abstraction and reasoning — are increasingly necessary. “A decade ago, benchmarks would serve the community for 5–10 years,” whereas now they often become irrelevant in just a few years, says Nestor Maslej, a social scientist at Stanford and editor-in-chief of the AI Index. “The pace of gain has been startlingly rapid.”
The CEO of Europe’s brightest new AI firm is calling bull on the quest for so-called “artificial general intelligence” (AGI), which he says is akin to the desire to create God.
In an interview with the New York Times, Arthur Mensch, the CEO of the AI firm Mistral, sounded off on his fellow AI executives’ “very religious” obsession with building AGI.
“The whole AGI rhetoric is about creating God,” the Mistral CEO told the newspaper. “I don’t believe in God. I’m a strong atheist. So I don’t believe in AGI.”
With standard deep learning architectures, errors tend to accumulate across layers and over time. Taichi’s design nips the problems of sequential processing in the bud: when faced with a problem, it distributes the workload across multiple independent clusters, making it easier to tackle larger problems with minimal errors.
The strategy paid off.
Taichi has the computational capacity of 4,256 total artificial neurons, with nearly 14 million parameters mimicking the brain connections that encode learning and memory. When sorting images into 1,000 categories, the photonic chip was nearly 92 percent accurate, comparable to “currently popular electronic neural networks,” wrote the team.
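The divide-and-conquer idea described above can be illustrated with a toy sketch. This is an assumption-laden illustration, not Taichi’s actual implementation: a coarse routing stage picks one of several shallow, independent “clusters,” and each cluster classifies only within its own subset of the 1,000 categories, so no single deep sequential pipeline has to carry the whole task (the random weights here merely stand in for trained photonic weights).

```python
# Toy sketch of distributing a 1,000-way classification task across
# independent shallow clusters (illustrative only; not Taichi's design).
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES, N_CLUSTERS, DIM = 1000, 4, 64
per_cluster = N_CLASSES // N_CLUSTERS  # each cluster covers 250 categories

# Each cluster is a single shallow linear stage, independent of the others.
clusters = [rng.normal(size=(DIM, per_cluster)) for _ in range(N_CLUSTERS)]
router = rng.normal(size=(DIM, N_CLUSTERS))  # coarse stage: choose a cluster

def classify(x):
    k = int(np.argmax(x @ router))   # coarse routing decision
    scores = x @ clusters[k]         # shallow, independent fine stage
    return k * per_cluster + int(np.argmax(scores))

label = classify(rng.normal(size=DIM))
assert 0 <= label < N_CLASSES
```

Because each cluster is shallow and runs independently, an error in one cluster cannot compound through a long chain of downstream layers, which is the intuition behind the minimal-error claim.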
The streaming audio giant’s suite of recommendation tools has grown over the years: the Spotify Home feed, Discover Weekly, Blend, Daylist, and Made for You Mixes. And in recent years, there have been signs that the approach is working. According to data released by Spotify at its 2022 Investor Day, monthly artist discoveries on Spotify had reached 22 billion, up from 10 billion in 2018, “and we’re nowhere near done,” the company stated at the time.
Over the past decade or more, Spotify has been investing in AI and, in particular, in machine learning. Its recently launched AI DJ may be its biggest bet yet that technology will let subscribers better personalize listening sessions and discover new music. The AI DJ mimics the feel of radio by announcing song names and providing lead-ins to tracks, a feature aimed in part at easing listeners out of their comfort zones. An existing pain point for recommendation algorithms, which can be excellent at giving listeners what they know listeners already like, is anticipating when a listener wants to break out of that comfort zone.
With AI tools and technology so readily available, people are putting AI into a wide range of products and services, including applications where AI is a dubious fit at best. Organizations often feel motivated by fear of missing out, and perhaps by customer or shareholder pressure, to add AI capability to their offerings. It should come as no surprise that many of these AI projects are half thought-out at best and often fail to deliver the desired results, if the desired results were even defined ahead of time.
Sometimes AI projects serve a high-level, big-picture vision that focuses the AI effort. Other times, AI is applied to smaller tasks or shoehorned into existing applications. The challenge is that successful AI projects need both: a larger vision for where AI could add value, and smaller, focused projects that let organizations prove real-world value before investing more deeply in AI capabilities.
Dive into Yann LeCun’s perspective on the shortcomings of generative AI and his advocacy for Objective-Driven AI, offering a transformative approach.