
Johns Hopkins computer scientists have created an artificial intelligence system capable of “imagining” its surroundings without having to physically explore them, bringing AI closer to humanlike reasoning.

The new system—called Generative World Explorer, or GenEx—needs only a single still image to conjure an entire world, giving it a significant advantage over previous systems that required a robot or agent to physically move through a scene to map the surrounding environment, which can be costly, unsafe, and time-consuming. The team’s results are posted to the arXiv preprint server.

“Say you’re in an area you’ve never been before—as a human, you use environmental cues, past experiences, and your knowledge of the world to imagine what might be around the corner,” says senior author Alan Yuille, the Bloomberg Distinguished Professor of Computational Cognitive Science at Johns Hopkins.

Genesis supports parallel simulation, making it ideal for efficiently training reinforcement learning (RL) locomotion policies. In this tutorial, we will walk you through a complete training example for obtaining a basic locomotion policy that enables a Unitree Go2 robot to walk. With Genesis, you will be able to train a locomotion policy that's deployable in the real world in less than 26 seconds (benchmarked on an RTX 4090).
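The Genesis API itself isn't shown in this excerpt, so as a hedged illustration of *why* parallel simulation speeds up RL training, here is a toy vectorized environment in NumPy. The class name, dimensions, and reward are all invented for illustration; the point is the pattern parallel simulators exploit: one batched array operation steps thousands of environments instead of looping over them in Python.

```python
import numpy as np

class ToyVectorEnv:
    """Toy stand-in for a parallel simulator: steps N environments at once.

    This is NOT the Genesis API -- just a sketch of the vectorized pattern
    (one batched call per step instead of N per-env Python calls) that
    parallel simulators exploit for fast RL training.
    """

    def __init__(self, n_envs, obs_dim=4, seed=0):
        self.n_envs = n_envs
        self.obs_dim = obs_dim
        self.rng = np.random.default_rng(seed)
        self.state = np.zeros((n_envs, obs_dim))

    def reset(self):
        self.state = self.rng.normal(size=(self.n_envs, self.obs_dim))
        return self.state

    def step(self, actions):
        # One batched update for all environments: shape (n_envs, obs_dim).
        self.state = self.state + 0.1 * actions
        rewards = -np.linalg.norm(self.state, axis=1)  # reward driving state to zero
        return self.state, rewards

env = ToyVectorEnv(n_envs=4096)
obs = env.reset()
obs, rewards = env.step(-obs)       # a trivial "policy": push state toward zero
print(obs.shape, rewards.shape)     # (4096, 4) (4096,)
```

On a GPU-backed simulator the same idea applies with far heavier physics per step, which is what makes sub-minute training runs plausible.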

Acknowledgement: This tutorial is inspired by and builds on several core concepts from Legged Gym.

Your safety framework must include content filtering, output validation, rate limiting, and detailed audit logging. I’ve found that implementing circuit breakers—automatic capability disablers triggered by anomalies—prevents small issues from becoming major incidents. For example, if an agent starts generating an unusual number of error responses, the system should automatically restrict its capabilities and alert the operations team.
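The circuit-breaker idea above can be sketched in a few lines. This is a minimal illustration, not a production component: the thresholds, window size, and alert hook are assumptions chosen for the example.

```python
import time

class CircuitBreaker:
    """Minimal circuit-breaker sketch for an AI agent.

    After `max_errors` failures within a sliding `window` of seconds,
    the breaker "trips": requests are blocked and an alert is fired.
    """

    def __init__(self, max_errors=5, window=60.0, alert=print):
        self.max_errors = max_errors
        self.window = window
        self.alert = alert          # hook for paging the operations team
        self.error_times = []
        self.tripped = False

    def record_error(self, now=None):
        now = time.monotonic() if now is None else now
        self.error_times.append(now)
        # Keep only the errors that fall inside the sliding window.
        self.error_times = [t for t in self.error_times if now - t <= self.window]
        if len(self.error_times) >= self.max_errors and not self.tripped:
            self.tripped = True
            self.alert("circuit open: agent capabilities restricted")

    def allow_request(self):
        return not self.tripped

breaker = CircuitBreaker(max_errors=3, window=60.0)
for t in (0.0, 1.0, 2.0):        # three errors in quick succession
    breaker.record_error(now=t)
print(breaker.allow_request())   # False -- the breaker has tripped
```

In a real deployment you would also add a half-open state that periodically re-tests the agent and closes the breaker once error rates recover.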

Last year, I spoke to a tech company whose AI assistant became a victim of its own success. The system that flawlessly handled 1,000 daily requests crashed when usage jumped to 100,000 requests after a successful product launch. The incident underscores the importance of building for scale from day one. Even well-established companies like Netflix occasionally face challenges with scale, as seen during the recent live-streaming outages for the Jake Paul vs. Mike Tyson fight.

A production-ready architecture needs several key components working in harmony. The core engine should be modular, making updates and maintenance straightforward. Your integration layer should connect smoothly with enterprise systems through standardized APIs. Comprehensive monitoring helps you spot issues before they impact users, and robust memory management ensures consistent context handling across interactions.
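One way to keep those components decoupled is to put each behind a narrow interface so the integration layer only depends on contracts, not implementations. The sketch below is illustrative; every class and method name here is an assumption invented for the example, not a reference to any particular product.

```python
from typing import Protocol

class Engine(Protocol):
    """Core engine behind a stable interface, so internals can be swapped."""
    def respond(self, prompt: str) -> str: ...

class Monitor(Protocol):
    """Monitoring hook: record events so issues surface before users see them."""
    def record(self, event: str) -> None: ...

class EchoEngine:
    """Trivial engine used as a stand-in for a real model backend."""
    def respond(self, prompt: str) -> str:
        return f"echo: {prompt}"

class ListMonitor:
    """Collects events in memory; a real monitor would export metrics."""
    def __init__(self):
        self.events = []
    def record(self, event: str) -> None:
        self.events.append(event)

class Assistant:
    """Integration layer: wires the engine, monitoring, and a simple
    per-session memory together without coupling them to each other."""
    def __init__(self, engine: Engine, monitor: Monitor):
        self.engine = engine
        self.monitor = monitor
        self.memory = []             # consistent context across interactions

    def handle(self, prompt: str) -> str:
        self.monitor.record(f"request: {prompt}")
        self.memory.append(prompt)
        return self.engine.respond(prompt)

assistant = Assistant(EchoEngine(), ListMonitor())
print(assistant.handle("hello"))     # echo: hello
```

Because `Assistant` only sees the `Engine` and `Monitor` protocols, the core engine can be updated or replaced without touching the integration layer, which is the modularity the paragraph above argues for.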

Unifying machine learning and physics.


In this video, Dr. Ardavan (Ahmad) Borzou will discuss the history of unifications in physics and how we can unify physics and machine learning.

CompuFlair Website:
www.compu-flair.com.

Video Footage Courtesy of CERN:
Video link:
https://videos.cern.ch/record/2020780
Terms of use:
“CERN provides the image free of charge for educational and informational use”
http://copyright.web.cern.ch.

The outgoing head of the US Department of Homeland Security believes Europe’s “adversarial” relationship with tech companies is hampering a global approach to regulating artificial intelligence that could result in security vulnerabilities.

Alejandro Mayorkas told the Financial Times the US — home of the world’s top artificial intelligence groups, including OpenAI and Google — and Europe are not on a “strong footing” because of a difference in regulatory approach.

He stressed the need for “harmonisation across the Atlantic”, expressing concern that relationships between governments and the tech industry are “more adversarial” in Europe than in the US.

Atomic simulations deepen the mystery of how engineered materials known as refractory high-entropy alloys can suffer so little damage from radiation.

Refractory high-entropy alloys are materials made from multiple high-melting-point metals in roughly equal proportions. Those containing tungsten exhibit minimal changes in mechanical properties when exposed to continuous radiation and could be used to shield the crucial components of future nuclear reactors. Now Jesper Byggmästar and his colleagues at the University of Helsinki have performed atomic simulations that explore the uncertain origins of this radiation resistance [1]. The findings could help scientists design novel materials that are even more robust than these alloys in extreme environments.

The researchers studied a tungsten-based refractory high-entropy alloy using state-of-the-art simulations guided by machine learning. In particular, they modeled the main mechanism by which radiation can disrupt such an alloy’s atomic structure. In this mechanism, the incoming radiation causes one atom in the alloy to displace another atom, forming one or more structural defects. The team determined the threshold energy needed to induce such displacements and its dependence on the masses of the two involved atoms.

A new visual recognition approach improved a machine learning technique’s ability to both identify an object and determine how it is oriented in space, according to a study presented in October at the European Conference on Computer Vision in Milan, Italy.

Self-supervised learning is a machine learning approach that trains on unlabeled data, which broadens generalizability to real-world data. While it excels at identifying objects—a task called semantic classification—it can struggle to recognize objects in new poses.

This weakness quickly becomes a problem in situations like autonomous vehicle navigation, where an algorithm must assess whether an approaching car is a head-on collision threat or side-oriented and just passing by.

Artificial intelligence (AI) systems tend to take on human biases and amplify them, causing people who use that AI to become more biased themselves, finds a new study by UCL researchers.

Human and AI biases can consequently create a feedback loop, with small initial biases increasing the risk of human error, according to the findings published in Nature Human Behaviour.

The researchers demonstrated that AI bias can have real-world consequences, as they found that people interacting with biased AIs became more likely to underestimate women’s performance and overestimate white men’s likelihood of holding high-status jobs.
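The amplification dynamic described above can be made concrete with a toy numerical model. All of the numbers and both functions below are invented for illustration; the actual study measured judgment tasks, not this equation. The sketch only shows how a small bias can compound when an AI amplifies what it learns from people and people then drift toward the AI.

```python
# Toy model of a human-AI bias feedback loop. Parameters are illustrative.
def amplify(bias, gain=1.2, cap=1.0):
    """An AI trained on biased data exaggerates the bias slightly."""
    return min(cap, bias * gain)

def adopt(human_bias, ai_bias, weight=0.5):
    """Humans interacting with the AI shift partway toward its bias."""
    return human_bias + weight * (ai_bias - human_bias)

human = 0.10                 # small initial human bias
history = [human]
for _ in range(10):
    ai = amplify(human)      # AI learns from humans and amplifies
    human = adopt(human, ai) # humans drift toward the AI's output
    history.append(human)

print(round(history[0], 3), round(history[-1], 3))  # 0.1 0.259
```

Each round multiplies the bias by 1.1 in this toy setup, so after ten interactions the initial 0.10 bias has more than doubled, mirroring the paper's point that small initial biases can grow through repeated human-AI interaction.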