Archive for the ‘robotics/AI’ category: Page 248

Apr 12, 2024

Meta unveils new, faster AI chips

Posted in category: robotics/AI

The company is aiming to ramp up its artificial intelligence efforts and reduce reliance on outside suppliers like Nvidia.

Apr 12, 2024

Sanctuary AI and Magna Partner to Deploy Humanoid Bots in Automotive Manufacturing

Posted in category: robotics/AI

Herbert Ong, Brighter with Herbert.

Apr 11, 2024

Spinal Cord Learns and Remembers Movements Autonomously

Posted in category: robotics/AI

A team of researchers at the Leuven-based Neuro-Electronics Research Flanders (NERF) details how two different neuronal populations enable the spinal cord to adapt and recall learned behavior in a way that is completely independent of the brain.

These remarkable findings, published today in Science, shed new light on how spinal circuits might contribute to mastering and automating movement. The insights could prove relevant in the rehabilitation of people with spinal injuries.

The spinal cord modulates and finetunes our actions and movements by integrating different sources of sensory information, and it can do so without input from the brain.

Apr 11, 2024

Semiconductor Companies by Industry Revenue Share

Posted in category: robotics/AI

Nvidia is coming for Intel’s crown. Samsung is losing ground. AI is transforming the space. We break down revenue for semiconductor companies.

Apr 11, 2024

How blue-collar workers will train the humanoids that take their jobs

Posted in categories: information science, robotics/AI, transportation

Carnegie Mellon University (CMU) researchers have developed H2O – Human2HumanOid – a reinforcement learning-based framework that allows a full-sized humanoid robot to be teleoperated by a human in real time using only an RGB camera. This raises the question: will manual labor soon be performed remotely?

A teleoperated humanoid robot can perform complex tasks that are – at least at this stage – too difficult for a robot to handle on its own. But achieving whole-body control of a human-sized humanoid so that it replicates our movements in real time is a challenging task. That’s where reinforcement learning (RL) comes in.
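
To make the pipeline concrete, here is a minimal, self-contained sketch of the control flow described above: an RGB frame is turned into a human pose estimate, the pose is retargeted onto the robot’s joints, and a learned whole-body policy tracks that reference. The function names, dimensions, and dummy math are illustrative assumptions, not H2O’s actual code.

```python
import numpy as np

NUM_KEYPOINTS = 17   # assumed number of tracked human keypoints
NUM_JOINTS = 19      # assumed number of actuated robot joints

def estimate_pose(frame: np.ndarray) -> np.ndarray:
    """Stand-in for an RGB pose-estimation network; returns (NUM_KEYPOINTS, 3) keypoints."""
    return np.zeros((NUM_KEYPOINTS, 3))

def retarget(keypoints: np.ndarray) -> np.ndarray:
    """Map human keypoints onto reference joint targets for the humanoid."""
    # A real retargeter solves an inverse-kinematics-style mapping;
    # here we only project to the robot's joint dimensionality.
    return np.resize(keypoints.flatten(), NUM_JOINTS)

def policy_act(robot_state: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Placeholder for the learned whole-body controller (an RL policy)."""
    # A trained policy would output commands that track the reference
    # while keeping the robot balanced; here, a simple proportional step.
    return 0.1 * (reference - robot_state)

def teleoperation_step(frame: np.ndarray, robot_state: np.ndarray) -> np.ndarray:
    """One control step: RGB frame -> pose -> retargeted reference -> command."""
    keypoints = estimate_pose(frame)
    reference = retarget(keypoints)
    return robot_state + policy_act(robot_state, reference)

if __name__ == "__main__":
    state = np.zeros(NUM_JOINTS)
    for _ in range(3):                      # three dummy camera frames
        state = teleoperation_step(np.zeros((480, 640, 3)), state)
    print(state.shape)                      # -> (19,)
```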


Apr 11, 2024

35-gram Hopcopter revolutionizes robotics with its hops and flight

Posted in category: robotics/AI

Engineers develop a new hybrid robot that hops, flies, adjusts jump heights, and executes tight turns with high frequency and agility.

Apr 11, 2024

How AI Powerhouse Nvidia Validates Humanoid Robots With New Initiative

Posted in category: robotics/AI

Humanoid robots leaped from a curiosity to the next big thing after Nvidia Chief Executive Jensen Huang touted the emerging technology.

Apr 11, 2024

European car manufacturer will pilot Sanctuary AI’s humanoid robot

Posted in categories: Elon Musk, robotics/AI, transportation

Sanctuary AI announced that it will be delivering its humanoid robot to a Magna manufacturing facility. Based in Canada, with auto manufacturing facilities in Austria, Magna manufactures and assembles cars for a number of Europe’s top automakers, including Mercedes, Jaguar and BMW. As is often the nature of these deals, the parties have not disclosed how many of Sanctuary AI’s robots will be deployed.

The news follows similar deals announced by Figure and Apptronik, which are piloting their own humanoid systems with BMW and Mercedes, respectively. Agility also announced a deal with Ford at CES in January 2020, though that agreement found the American carmaker exploring the use of Digit units for last-mile deliveries. Agility has since put that functionality on the back burner, focusing on warehouse deployments through partners like Amazon.

For its part, Magna invested in Sanctuary AI back in 2021 — right around the time Elon Musk announced plans to build a humanoid robot to work in Tesla factories. The company would later dub the system “Optimus.” Vancouver-based Sanctuary unveiled its own system, Phoenix, back in May of last year. The system stands 5’7” (a pretty standard height for these machines) and weighs 155 pounds.

Apr 11, 2024

Researchers at Stanford and MIT Introduced the Stream of Search (SoS): A Machine Learning Framework that Enables Language Models to Learn to Solve Problems by Searching in Language without Any External Support

Posted in categories: information science, policy, robotics/AI

Language models often lack exposure to fruitful mistakes during training, which hinders their ability to anticipate consequences beyond the next token. LMs need better capacity for complex decision-making, planning, and reasoning. Transformer-based models struggle with planning because errors snowball and lookahead is difficult. While some efforts have integrated symbolic search algorithms to address these issues, they merely supplement language models at inference time. Enabling language models to search during training, by contrast, could facilitate self-improvement, fostering more adaptable strategies for tackling challenges such as error compounding and lookahead tasks.

Researchers from Stanford University, MIT, and Harvey Mudd have devised a method to teach language models how to search and backtrack by representing the search process as a serialized string, Stream of Search (SoS). They proposed a unified language for search, demonstrated through the game of Countdown. Pretraining a transformer-based language model on streams of search increased accuracy by 25%, while further finetuning with policy improvement methods led to solving 36% of previously unsolved problems. This showcases that language models can learn to solve problems via search, self-improve, and discover new strategies autonomously.

Recent studies integrate language models into search and planning systems, employing them to generate and assess potential actions or states. These methods use symbolic search algorithms such as BFS or DFS as the exploration strategy, but the LM serves only at inference time, so its underlying reasoning ability is not improved. Alternatively, in-context demonstrations can illustrate search procedures in language, enabling the LM to conduct tree searches accordingly, yet such methods are limited to the demonstrated procedures. Process supervision involves training an external verifier model to provide detailed feedback for LM training; it outperforms outcome supervision but requires extensive labeled data.
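
To make the core idea concrete, the sketch below serializes a depth-first search over a Countdown-style puzzle (combine numbers with +, -, *, / to reach a target) into a single text string, including dead ends and backtracking steps, which is the kind of “stream of search” a language model could be trained on. The trace format and function names are illustrative assumptions, not the paper’s exact implementation.

```python
from itertools import combinations

# Allowed arithmetic operations; division only when it yields an integer.
OPS = [
    ("+", lambda a, b: a + b),
    ("-", lambda a, b: a - b),
    ("*", lambda a, b: a * b),
    ("/", lambda a, b: a / b if b != 0 and a % b == 0 else None),
]

def search(numbers, target, trace):
    """Depth-first search that logs every expansion and backtrack as text."""
    trace.append(f"state: {sorted(numbers)} target: {target}")
    if target in numbers:
        trace.append("solved")
        return True
    if len(numbers) == 1:
        trace.append("dead end, backtrack")
        return False
    for a, b in combinations(numbers, 2):
        rest = list(numbers)
        rest.remove(a)
        rest.remove(b)
        for name, fn in OPS:
            value = fn(a, b)
            if value is None or value < 0:
                continue
            trace.append(f"try {a} {name} {b} = {value}")
            if search(rest + [value], target, trace):
                return True
    trace.append("dead end, backtrack")
    return False

def stream_of_search(numbers, target):
    """Return one serialized search trace suitable as a training string."""
    trace = []
    search(numbers, target, trace)
    return "\n".join(trace)

if __name__ == "__main__":
    # Example Countdown-style instance: reach 24 from [3, 8, 8].
    print(stream_of_search([3, 8, 8], 24))
```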

Apr 11, 2024

Robot dogs train at 6,000ft in snow-clad mountains for moon missions

Posted in categories: robotics/AI, space

A multidisciplinary team is teaching dog-like robots to navigate the moon’s craters and other challenging planetary surfaces.

As part of the research funded by NASA, researchers from various universities and NASA Johnson Space Center tested a quadruped named Spirit at Palmer Glacier on Oregon’s Mount Hood.


Page 248 of 2,431