
Archive for the ‘robotics/AI’ category: Page 43

Oct 18, 2024

AI Detectors Falsely Accuse Students of Cheating—With Big Consequences

Posted in category: robotics/AI

It is morally wrong to use AI detectors when they produce false positives that smear students, cause real harm, and leave them no way to prove their innocence.

While some educators…


About two-thirds of teachers report regularly using tools for detecting AI-generated content. At that scale, even tiny error rates can add up quickly.
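As a rough back-of-the-envelope illustration of how small error rates compound at that scale (the 1% false-positive rate, class size, and essay counts below are assumptions for illustration, not figures from the article):

```python
# Back-of-the-envelope: how a small false-positive rate scales across many essays.
# The 1% rate, student count, and essays per student are illustrative assumptions,
# not data reported in the article.
false_positive_rate = 0.01          # detector wrongly flags 1% of human-written work
essays_per_student_per_year = 10
students = 200                      # e.g., one teacher's yearly load

checks = students * essays_per_student_per_year
expected_false_accusations = checks * false_positive_rate
print(f"{expected_false_accusations:.0f} innocent essays flagged out of {checks}")
# -> 20 innocent essays flagged out of 2000

# Probability that at least one of a single student's essays is falsely flagged:
p_student_flagged = 1 - (1 - false_positive_rate) ** essays_per_student_per_year
print(f"{p_student_flagged:.1%} chance per student per year")   # ~9.6%
```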


Oct 18, 2024

The huge protein database that spawned AlphaFold and biology’s AI revolution

Posted in categories: biological, robotics/AI

It’s easy to marvel at the technical wizardry behind breakthroughs such as AlphaFold.


Pioneering crystallographer Helen Berman helped to set up the massive collection of protein structures that underpins the Nobel-prize-winning tool’s success.

Oct 18, 2024

Boston Dynamics teams with TRI to bring AI smarts to Atlas humanoid robot

Posted in category: robotics/AI

Boston Dynamics and Toyota Research Institute (TRI) on Wednesday revealed plans to bring AI-based robotic intelligence to the electric Atlas humanoid robot. The collaboration will leverage TRI’s work on large behavior models (LBMs), which operate along lines similar to the more familiar large language models (LLMs) behind platforms like ChatGPT.

Last September, TechCrunch paid a visit to TRI’s Bay Area campus for a closer look at the institute’s work on robot learning. In research revealed at last year’s Disrupt conference, institute head Gill Pratt explained how the lab had gotten robots to 90% accuracy on household tasks such as flipping pancakes after overnight training.

“In machine learning, up until quite recently there was a tradeoff, where it works, but you need millions of training cases,” Pratt explained at the time. “When you’re doing physical things, you don’t have time for that many, and the machine will break down before you get to 10,000. Now it seems that we need dozens. The reason for the dozens is that we need to have some diversity in the training cases. But in some cases, it’s less.”

Oct 18, 2024

DBS CEO Says Only Half of Banks Are Making Enough Tech Progress

Posted in categories: business, finance, robotics/AI

The head of Singapore’s biggest lender said that only about half of banks have made sufficient progress in transforming their businesses to embrace digitalization and artificial intelligence.

Oct 18, 2024

AI Could Predict Breast Cancer Risk via ‘Zombie Cells’

Posted in categories: biotech/medical, health, robotics/AI

New research from the University of Copenhagen shows that women worldwide could see better treatment with new AI technology, which enables better detection of damaged cells and more precise prediction of breast cancer risk.

Breast cancer is one of the most common types of cancer. In 2022, the disease caused 670,000 deaths worldwide. Now, a new study from the University of Copenhagen shows that AI can help women receive improved treatment by scanning for irregular-looking cells and providing a better risk assessment.

The study, published in The Lancet Digital Health, found that the AI technology was far better at predicting the risk of cancer than current clinical benchmarks for breast cancer risk assessment.

Oct 18, 2024

DeepSeek AI Releases Janus: A 1.3B Multimodal Model with Image Generation Capabilities

Posted in category: robotics/AI

Multimodal AI models are powerful tools capable of both understanding and generating visual content. However, existing approaches often use a single visual encoder for both tasks, which leads to suboptimal performance due to the fundamentally different requirements of understanding and generation. Understanding requires high-level semantic abstraction, while generation focuses on local details and global consistency. This mismatch results in conflicts that limit the overall efficiency and accuracy of the model.

Researchers from DeepSeek-AI, the University of Hong Kong, and Peking University propose Janus, a novel autoregressive framework that unifies multimodal understanding and generation by employing two distinct visual encoding pathways. Unlike prior models that use a single encoder, Janus introduces a specialized pathway for each task, both of which are processed through a unified transformer. This design alleviates conflicts inherent in prior models and provides enhanced flexibility, enabling the encoding method best suited to each task. The name “Janus” aptly represents this duality: like the two-faced Roman god, the model looks toward both understanding and generation at once.

The architecture of Janus consists of two main components: an Understanding Encoder and a Generation Encoder, each tasked with handling multimodal inputs differently. For multimodal understanding, Janus uses a high-dimensional semantic feature extraction approach through SigLIP, transforming the features into a sequence compatible with the language model. For visual generation, Janus utilizes a VQ tokenizer that converts visual data into discrete representations, enabling detailed image synthesis. Both tasks are processed by a shared transformer, enabling the model to operate in an autoregressive fashion. This approach allows the model to decouple the requirements of each visual task, simplifying implementation and improving scalability.
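The decoupled design is easiest to see in code. The sketch below is a minimal, hypothetical PyTorch illustration of the idea described above, with two separate encoding pathways feeding one shared autoregressive transformer; the module names, dimensions, and toy patch/codebook inputs are assumptions for illustration, not DeepSeek’s actual implementation.

```python
import torch
import torch.nn as nn

class UnderstandingEncoder(nn.Module):
    """Stand-in for a SigLIP-style semantic encoder: image patches -> feature sequence."""
    def __init__(self, patch_dim=768, d_model=512):
        super().__init__()
        self.proj = nn.Linear(patch_dim, d_model)

    def forward(self, patches):            # patches: (batch, num_patches, patch_dim)
        return self.proj(patches)          # (batch, num_patches, d_model)

class GenerationEncoder(nn.Module):
    """Stand-in for a VQ tokenizer's codebook: discrete visual codes -> embeddings."""
    def __init__(self, codebook_size=1024, d_model=512):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, d_model)

    def forward(self, codes):              # codes: (batch, seq_len) integer ids
        return self.codebook(codes)        # (batch, seq_len, d_model)

class SharedAutoregressiveTransformer(nn.Module):
    """A single transformer consumes either pathway's embeddings autoregressively."""
    def __init__(self, d_model=512, nhead=8, num_layers=4, vocab_size=1024):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, vocab_size)   # next-token logits (text or visual)

    def forward(self, x):
        causal = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        return self.head(self.backbone(x, mask=causal))

if __name__ == "__main__":
    understand, generate = UnderstandingEncoder(), GenerationEncoder()
    lm = SharedAutoregressiveTransformer()
    image_patches = torch.randn(2, 64, 768)          # understanding pathway input
    visual_codes = torch.randint(0, 1024, (2, 32))   # generation pathway input
    print(lm(understand(image_patches)).shape)       # torch.Size([2, 64, 1024])
    print(lm(generate(visual_codes)).shape)          # torch.Size([2, 32, 1024])
```

The point of the sketch is only that the two pathways never share an encoder; the shared transformer is what lets one model serve both understanding and generation.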

Oct 18, 2024

Top “Reasoning” AI Models Can be Brought to Their Knees With an Extremely Simple Trick

Posted in category: robotics/AI

A team of Apple researchers has found that advanced AI models’ alleged ability to “reason” isn’t all it’s cracked up to be.

“Reasoning” is a word that’s thrown around a lot in the AI industry these days, especially when it comes to marketing the advancements of frontier AI language models. OpenAI, for example, recently dropped its “Strawberry” model, which the company billed as its next-level large language model (LLM) capable of advanced reasoning. (That model has since been renamed just “o1.”)

But marketing aside, there’s no agreed-upon, industrywide definition of what reasoning actually means. Like other AI industry terms such as “consciousness” or “intelligence,” reasoning is a slippery, ephemeral concept; as it stands, AI reasoning can be chalked up to an LLM’s ability to “think” its way through queries and complex problems in a way that resembles human problem-solving.

Oct 18, 2024

Where in the world is all the battery storage?

Posted in categories: economics, robotics/AI

Investing in humanoid robotics could revolutionize economies, boosting productivity and prosperity exponentially. Find out why countries should prioritize this game-changing technology.

Oct 18, 2024

TSMC Hikes Revenue Outlook in Show of Confidence in AI Boom

Posted in categories: robotics/AI, sustainability

The world’s largest maker of advanced chips has been one of the biggest beneficiaries of a global race to develop artificial intelligence.


Taiwan Semiconductor Manufacturing Co. shares hit a record high after the chipmaker topped quarterly estimates and raised its target for 2024 revenue growth, allaying concerns about global chip demand and the sustainability of an AI hardware boom.

Oct 17, 2024

The Future of Lunar Resource Extraction: Teleoperation and Simulation

Posted in categories: robotics/AI, space

“One option could be to have astronauts use this simulation to prepare for upcoming lunar exploration missions,” said Joe Louca.


How will future missions to the Moon extract valuable resources that can be used for scientific research or lunar settlement infrastructure? This is the question a recent study, presented this week at IROS 2024 (the IEEE/RSJ International Conference on Intelligent Robots and Systems), hopes to address. A team of researchers from the University of Bristol investigated how combining virtual simulations with robotic commands could enhance teleoperated robotic exploration of the lunar surface on future missions.

For the study, the researchers used a method called model-mediated teleoperation (MMT) to create simulated regolith and send commands to a robot that carried out the task. They found that the effectiveness and trustworthiness of the simulated regolith, relative to the robot conducting the tasks, were 100 percent and 92.5 percent, respectively. Teleoperated robots are essential because of the communication lag between the Earth and the Moon, and extracting resources from the lunar surface, known as in-situ resource utilization (ISRU), is considered an essential step toward building lunar infrastructure for future astronauts.
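As a rough sketch of the model-mediated teleoperation idea, the toy Python example below lets an operator’s commands update a local simulation instantly while the same commands reach the remote robot only after the Earth–Moon signal delay; the delay value, command format, and regolith numbers are simplifying assumptions for illustration, not details from the study.

```python
# Toy model-mediated teleoperation loop: the operator sees an immediate prediction
# from a local simulation, while commands travel to the remote robot with a delay.
from collections import deque

ONE_WAY_DELAY_S = 1.3                       # approx. Earth-Moon light-time, one way (assumed)

class LocalRegolithSim:
    """Stands in for the simulated regolith model the operator interacts with instantly."""
    def __init__(self):
        self.scooped_kg = 0.0
    def apply(self, command):
        if command == "scoop":
            self.scooped_kg += 0.5          # predicted outcome, shown with no lag (toy value)
        return self.scooped_kg

class DelayedLink:
    """Queues commands and releases them once the transmission delay has elapsed."""
    def __init__(self, delay_s):
        self.delay_s = delay_s
        self.queue = deque()                # entries of (send_time, command)
    def send(self, t, command):
        self.queue.append((t, command))
    def deliver(self, t):
        out = []
        while self.queue and t - self.queue[0][0] >= self.delay_s:
            out.append(self.queue.popleft()[1])
        return out

sim, link = LocalRegolithSim(), DelayedLink(ONE_WAY_DELAY_S)
for t in [0.0, 0.5, 1.0]:                   # operator issues three scoop commands
    predicted = sim.apply("scoop")          # immediate feedback from the local model
    link.send(t, "scoop")
    print(f"t={t:.1f}s predicted regolith scooped: {predicted} kg")
print("commands reaching the lunar robot by t=2.0s:", link.deliver(2.0))
```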


Page 43 of 2,426