Archive for the ‘robotics/AI’ category: Page 310
Apr 2, 2024
Samsung to Battle Nvidia in 2025 With Its New Mach-1 AI Chip
Posted by Shailesh Prasad in categories: futurism, robotics/AI
The company is pouring billions into R&D with plans to disrupt the AI market in the near future.
Apr 2, 2024
People liked AI art — when they thought it was made by humans
Posted by Dan Kummer in category: robotics/AI
More proof, if any were needed, that most people are full of it.
But people were bad at assessing whether images were made by artificial intelligence or an artist.
Apr 2, 2024
AI chatbots beat humans at persuading their opponents in debates
Posted by Dan Kummer in category: robotics/AI
When people were challenged to debate contentious topics with a human or GPT-4, they were more likely to be won over by the artificial intelligence.
Apr 2, 2024
U.S., U.K. Will Partner to Safety Test AI
Posted by Kelvin Dafiaghor in categories: government, health, robotics/AI
“I think of [the agreement] as marking the next chapter in our journey on AI safety, working hand in glove with the United States government,” Donelan told TIME in an interview at the British Embassy in Washington, D.C. on Monday. “I see the role of the United States and the U.K. as being the real driving force in what will become a network of institutes eventually.”
The U.K. and U.S. AI Safety Institutes were established just one day apart, around the inaugural AI Safety Summit hosted by the U.K. government at Bletchley Park in November 2023. While the two organizations’ cooperation was announced at the time of their creation, Donelan says that the new agreement “formalizes” and “puts meat on the bones” of that cooperation. She also said it “offers the opportunity for them—the United States government—to lean on us a little bit in the stage where they’re establishing and formalizing their institute, because ours is up and running and fully functioning.”
The two AI safety testing bodies will develop a common approach to AI safety testing that involves using the same methods and underlying infrastructure, according to a news release. The bodies will look to exchange employees and share information with each other “in accordance with national laws and regulations, and contracts.” The release also stated that the institutes intend to perform a joint testing exercise on an AI model available to the public.
Apr 1, 2024
EMERGENCY EPISODE: Ex-Google Officer Finally Speaks Out On The Dangers Of AI!
Posted by Kelvin Dafiaghor in category: robotics/AI
If You Enjoyed This Episode You Must Watch This One With Mustafa Suleyman Google AI Exec: https://youtu.be/CTxnLsYHWuI
0:00 Intro
02:54 Why is this podcast im…
Apr 1, 2024
Google’s AI Still Giving Idiotic Answers Nearly a Year After Launch
Posted by Kelvin Dafiaghor in category: robotics/AI
Despite having been in public beta mode for nearly a year, Google’s search AI is still spitting out confusing and often incorrect answers.
As the Washington Post found when assessing Google’s Search Generative Experience, or SGE for short, the AI-powered update to the tech giant’s classic search bar is still giving incorrect or misleading answers nearly a year after it was introduced last May.
While SGE no longer tells users that they can melt eggs or that slavery was good, it does still hallucinate, which is AI terminology for confidently making stuff up. A search for a made-up Chinese restaurant called “Danny’s Dan Dan Noodles” in San Francisco, for example, spat out references to “long lines and crazy wait times” and even gave phony citations about 4,000-person lines and a two-year waitlist.
Apr 1, 2024
The Neuron vs the Synapse: Which One Is in the Driving Seat?
Posted by Dan Breeden in categories: physics, robotics/AI
A new theoretical framework for plastic neural networks predicts dynamical regimes where synapses rather than neurons primarily drive the network’s behavior, leading to an alternative candidate mechanism for working memory in the brain.
The brain is an immense network of neurons, whose dynamics underlie its complex information processing capabilities. A neuronal network is often classed as a complex system, as it is composed of many constituents, neurons, that interact in a nonlinear fashion (Fig. 1). Yet, there is a striking difference between a neural network and the more traditional complex systems in physics, such as spin glasses: the strength of the interactions between neurons can change over time. This so-called synaptic plasticity is believed to play a pivotal role in learning. Now David Clark and Larry Abbott of Columbia University have derived a formalism that puts neurons and the connections that transmit their signals (synapses) on equal footing [1]. By studying the interacting dynamics of the two objects, the researchers take a step toward answering the question: Are neurons or synapses in control?
Clark and Abbott are the latest in a long line of researchers to use theoretical tools to study neuronal networks with and without plasticity [2, 3]. Past studies—without plasticity—have yielded important insights into the general principles governing the dynamics of these systems and their functions, such as classification capabilities [4], memory capacities [5, 6], and network trainability [7, 8]. These works studied how temporally fixed synaptic connectivity in a network shapes the collective activity of neurons. Adding plasticity to the system complicates the problem because then the activity of neurons can dynamically shape the synaptic connectivity [9, 10].
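To make the neuron-versus-synapse coupling concrete, here is a minimal toy simulation in Python. It is an illustrative sketch only, not the formalism Clark and Abbott derive: a small rate network in which both the activity vector x and the weight matrix J evolve in time, the weights under a simple Hebbian rule with decay.

```python
import numpy as np

# Toy plastic rate network (illustrative sketch, not the Clark-Abbott
# formalism): neuronal rates x and synaptic weights J are both dynamical.

rng = np.random.default_rng(0)
N = 100                    # number of neurons
dt = 0.01                  # integration step
tau_x, tau_J = 1.0, 50.0   # neuronal vs. synaptic time constants (synapses slower)
eta = 0.1                  # Hebbian learning rate

x = rng.standard_normal(N)                          # neuronal state
J = 1.5 * rng.standard_normal((N, N)) / np.sqrt(N)  # random initial connectivity

for _ in range(20000):
    r = np.tanh(x)                                    # firing rates
    x = x + dt / tau_x * (-x + J @ r)                 # neuronal dynamics
    J = J + dt / tau_J * (-J + eta * np.outer(r, r))  # Hebbian plasticity with decay

print("mean |rate|  :", np.abs(np.tanh(x)).mean())
print("mean |weight|:", np.abs(J).mean())
```

Which set of variables ends up dominating the network's behavior depends on choices such as the time-constant ratio tau_J/tau_x and the learning rate eta, which is the kind of regime question the new framework is built to answer.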
Apr 1, 2024
Theory of Coupled Neuronal-Synaptic Dynamics
Posted by Dan Breeden in categories: physics, robotics/AI
A new theoretical framework for plastic neural networks predicts dynamical regimes where synapses rather than neurons primarily drive the network’s behavior, leading to an alternative candidate mechanism for working memory in the brain.
Apr 1, 2024
When It Comes to Making Generative AI Food Smart, Small Language Models Are Doing the Heavy Lifting
Posted by Shubham Ghosh Roy in categories: food, health, robotics/AI
Since ChatGPT debuted in the fall of 2022, much of the interest in generative AI has centered around large language models. Large language models, or LLMs, are the giant, compute-intensive models powering the chatbots and image generators that seemingly everyone is using and talking about nowadays.
While there’s no doubt that LLMs produce impressive and human-like responses to most prompts, the reality is that most general-purpose LLMs struggle with deep domain knowledge in areas like health, nutrition, or the culinary arts. Not that this has stopped folks from using them, with occasionally bad or even laughable results when we ask for a personalized nutrition plan or a recipe.
LLMs’ shortcomings in creating credible and trusted results around those specific domains have led to growing interest in what the AI community is calling small language models (SLMs). What are SLMs? Essentially, they are smaller and simpler language models that require less computational power and fewer lines of code, and often, they are specialized in their focus.
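For a sense of what “smaller and simpler” means in practice, an SLM-class model can be run locally in a few lines of Hugging Face transformers code. This is an illustrative sketch only: the model name below is just an example of a small open model, not one mentioned in the article, and a food-focused SLM would additionally be fine-tuned on nutrition and recipe data.

```python
# Illustrative sketch: running a small language model locally with
# Hugging Face transformers. The model is an example of an SLM-class
# model, not one named in the article; a food-focused SLM would
# typically also be fine-tuned on nutrition/recipe data.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # ~1.1B parameters
)

prompt = "Suggest a simple high-protein vegetarian dinner using lentils."
out = generator(prompt, max_new_tokens=120, do_sample=False)
print(out[0]["generated_text"])
```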