Archive for the ‘robotics/AI’ category: Page 56

Oct 6, 2024

NASA’s exoplanet hunter TESS spots a record-breaking 3-star system

Posted by in categories: robotics/AI, space

The team spotted the record-breaking triple star system because of strobing starlight caused by the stars crossing in front of each other, as seen from our position on Earth.

The team turned to machine learning to analyze vast amounts of TESS data and spot patterns indicating these eclipses. They then called on citizen scientists to filter the candidate signals further and flag the most interesting ones.
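
The team's actual pipeline is a trained neural network running over the full TESS archive, so the following is only a toy illustration of the underlying signal: periodic eclipse dips in a light curve, recovered here with a brute-force box search on a synthetic, single-period example. Every number in it is made up, and a real triple system shows several interleaved eclipse periods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic light curve: flat flux with noise plus periodic eclipse dips.
t = np.arange(0.0, 27.0, 10.0 / (60 * 24))         # ~27 days at 10-minute cadence
flux = 1.0 + 1e-3 * rng.standard_normal(t.size)
true_period, depth, duration = 3.7, 0.01, 0.1       # days (illustrative values)
flux[(t % true_period) < duration] -= depth

# Brute-force box search: fold at trial periods and score how far the deepest
# phase bin falls below the median binned flux.
n_bins = 150
trial_periods = np.linspace(1.0, 6.0, 2000)
scores = np.empty(trial_periods.size)
for j, p in enumerate(trial_periods):
    idx = np.minimum(((t % p) / p * n_bins).astype(int), n_bins - 1)
    counts = np.bincount(idx, minlength=n_bins)
    sums = np.bincount(idx, weights=flux, minlength=n_bins)
    binned = sums[counts > 0] / counts[counts > 0]
    scores[j] = np.median(binned) - binned.min()

print(f"strongest eclipse-like signal near period "
      f"{trial_periods[scores.argmax()]:.2f} d")
```

Folding at the right trial period lines the dips up in the same phase bin, which is what makes them stand out above the noise; a wrong period smears them out.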

“We’re mainly looking for signatures of compact multi-star systems, unusual pulsating stars in binary systems, and weird objects,” Rappaport said. “It’s exciting to identify a system like this because they’re rarely found, but they may be more common than current tallies suggest.”

Oct 6, 2024

China Telecom says AI model with 1 trillion parameters was trained with Chinese chips

Posted by in category: robotics/AI

The state-owned telecoms operator did not reveal what chips it used for its 1 trillion-parameter model, but it has a partnership with Huawei.

Oct 5, 2024

AI agent promotes itself to sysadmin, breaks boot sequence

Posted by in category: robotics/AI

Fun experiment, but yeah, don’t pipe an LLM raw into /bin/bash.
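
A minimal sketch of the safer pattern, which treats model output as an untrusted suggestion rather than executable input; the allowlist and the commented-out `ask_llm` call are hypothetical placeholders, not part of the experiment described above.

```python
import shlex
import subprocess

ALLOWED = {"ls", "df", "uptime"}            # hypothetical allowlist of read-only commands

def run_suggestion(suggested: str) -> None:
    """Run an LLM-suggested command only if it is allowlisted and confirmed."""
    argv = shlex.split(suggested)
    if not argv or argv[0] not in ALLOWED:
        print(f"refusing to run: {suggested!r}")
        return
    if input(f"run {argv}? [y/N] ").strip().lower() != "y":
        return
    subprocess.run(argv, check=False)        # no shell=True, so no pipes or redirects

# suggestion = ask_llm("free up disk space")      # hypothetical LLM call
run_suggestion("rm -rf / --no-preserve-root")     # blocked by the allowlist
```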

Oct 5, 2024

MIT Researchers Introduce Generative Modeling of Molecular Dynamics: A Multi-Task AI Framework for Accelerating Molecular Simulations and Design

Posted by in category: robotics/AI

Molecular dynamics (MD) is a widely used method for studying molecular systems and microscopic processes at the atomic level. MD simulations can be computationally expensive, however, because of the fine temporal and spatial resolution they require. This cost has motivated extensive research into alternative techniques that speed up simulation without sacrificing accuracy; building deep-learning surrogate models that can stand in for conventional MD simulations is one such strategy.

In recent research, a team of MIT researchers introduced generative modeling as a way to simulate molecular motion. The framework eliminates the need to compute molecular forces at each step: machine learning models trained on data from MD simulations generate plausible molecular trajectories directly. These generative models can function as adaptable multi-task surrogates, able to carry out several of the crucial tasks for which MD simulations are typically employed.
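
The excerpt gives no model details, so the snippet below is only a schematic of the general idea rather than the MIT framework: a small conditional model trained on saved trajectory frames learns to propose the next frame from the current one, so rolling it out requires no force evaluations. The toy harmonic-well "trajectory" and the tiny PyTorch network are placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "MD trajectory": a 2-D particle relaxing in a harmonic well with thermal noise,
# standing in for frames saved from a real molecular dynamics run.
steps, dt, k, temp = 5000, 0.01, 1.0, 0.05
x = torch.zeros(steps, 2)
x[0] = torch.tensor([2.0, -1.0])
for i in range(steps - 1):
    x[i + 1] = x[i] - dt * k * x[i] + (temp * dt) ** 0.5 * torch.randn(2)

# Conditional Gaussian surrogate: an MLP maps the current frame to the mean of the
# next frame, and a learned log-variance makes sampling from the model stochastic.
net = nn.Sequential(nn.Linear(2, 64), nn.SiLU(), nn.Linear(64, 2))
log_var = nn.Parameter(torch.zeros(1))
opt = torch.optim.Adam(list(net.parameters()) + [log_var], lr=1e-3)

for _ in range(500):
    mean = net(x[:-1])
    # Negative log-likelihood of the observed next frames under N(mean, exp(log_var)).
    nll = 0.5 * (((x[1:] - mean) ** 2) / log_var.exp() + log_var).mean()
    opt.zero_grad()
    nll.backward()
    opt.step()

# Roll out a surrogate trajectory by sampling next frames -- no forces are evaluated.
frame = x[0]
with torch.no_grad():
    for _ in range(100):
        frame = net(frame) + log_var.exp().sqrt() * torch.randn(2)
print("surrogate frame after 100 sampled steps:", frame)
```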

These generative models can be trained for a variety of tasks by carefully choosing and conditioning on specific frames of a molecular trajectory.

Oct 5, 2024

Do AI companies work?

Posted by in category: robotics/AI

Do #AI companies work?

“The market needs to be irrational for you to stay solvent.”

Oct 4, 2024

AI assistants are blabbing our embarrassing work secrets

Posted by in category: robotics/AI

Workplace AI tools can do tasks by themselves. Getting them to stop is the problem.

Oct 4, 2024

AI can reduce a 100,000-equation quantum problem to just 4 equations

Posted by in categories: information science, quantum physics, robotics/AI

The Hubbard model is an extensively studied model in condensed matter theory and a formidable quantum problem. A team of physicists used deep learning to condense this problem, which previously required 100,000 coupled equations, into just four equations without sacrificing accuracy. The study, titled “Deep Learning the Functional Renormalization Group,” was published on September 21 in Physical Review Letters.
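
The paper itself parameterizes the functional renormalization group flow of the Hubbard model's vertex function with a neural network, which is far beyond the scope of this excerpt. As a purely illustrative stand-in for the general idea of compressing an enormous system of coupled differential equations into a handful of effective ones, here is a toy data-driven reduction of a large linear ODE system to four modes; every size and construction in it is made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 1000, 4                          # 1,000 coupled equations, 4 effective modes
dt, steps = 0.01, 2000

# Dynamics that secretly live on an r-dimensional subspace (span of U); everything
# orthogonal to it decays quickly -- a cartoon of "only a few relevant couplings".
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
S = rng.standard_normal((r, r))
B = (S - S.T) - 0.2 * np.eye(r)                       # stable, oscillatory 4x4 core
A = U @ B @ U.T - 5.0 * (np.eye(n) - U @ U.T)

# Integrate the full n-equation system; skip a short transient, then store snapshots.
x = rng.standard_normal(n)
for _ in range(500):                                  # let the fast directions die out
    x = x + dt * (A @ x)
X = np.empty((n, steps))
for k in range(steps):
    X[:, k] = x
    x = x + dt * (A @ x)

# Data-driven compression: the leading modes of the snapshots define 4 new variables.
V = np.linalg.svd(X, full_matrices=False)[0][:, :r]
Ar = V.T @ A @ V                                      # the reduced 4x4 operator

# Evolve only the 4 reduced equations and compare against the full solution.
z = V.T @ X[:, 0]
err = 0.0
for k in range(steps):
    err = max(err, np.linalg.norm(V @ z - X[:, k]) / np.linalg.norm(X[:, k]))
    z = z + dt * (Ar @ z)
print(f"4 equations reproduce the {n}-equation trajectory "
      f"to within a relative error of {err:.1e}")
```

The point of the toy is only the workflow: solve (or sample) the big system once, learn a small set of collective variables from the data, and then evolve only those.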

Dominique Di Sante is the lead author of the study. Since 2021 he has held a tenure-track position as Assistant Professor in the Department of Physics and Astronomy at the University of Bologna, and he is also a Visiting Professor at the Center for Computational Quantum Physics (CCQ) at the Flatiron Institute in New York, as part of a Marie Skłodowska-Curie Actions (MSCA) grant that encourages, among other things, the mobility of researchers.

He conducted the study together with colleagues at the Flatiron Institute and other international researchers; it has the potential to revolutionize the way scientists study systems containing many interacting electrons. In addition, if the method can be adapted to other problems, the approach could help design materials with desirable properties, such as superconductivity, or contribute to clean energy production.

Oct 4, 2024

Researchers Summon AI-powered Maxwell’s Demon to Find Strategies to Optimize Quantum Devices

Posted by in categories: quantum physics, robotics/AI

Artificially intelligent Maxwell’s demon for optimal control of open…


A team of researchers used reinforcement learning (RL) to optimize feedback control strategies in quantum systems.
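
The summary above gives no details of the setup, so the following is only a generic, hypothetical illustration of reinforcement-learned feedback control rather than the authors' method: a tabular REINFORCE agent reads a noisy measurement of a toy two-level-like system and picks a corrective rotation to keep it near a target state. Every quantity in it is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(1)
actions = np.array([-0.4, -0.1, 0.0, 0.1, 0.4])     # candidate feedback rotations (rad)
n_bins = 9                                           # discretized measurement "states"
logits = np.zeros((n_bins, actions.size))            # tabular softmax policy
alpha, gamma, T = 0.05, 0.95, 30
baselines = np.zeros(T)                              # per-step return baselines
ep_rewards = []

def bin_of(measurement):
    """Map a noisy angle readout in (-pi, pi] to one of the policy's states."""
    return int(np.clip((measurement + np.pi) / (2 * np.pi) * n_bins, 0, n_bins - 1))

for episode in range(3000):
    theta, trail = rng.uniform(-1.0, 1.0), []
    for _ in range(T):
        theta += 0.15 * rng.standard_normal()               # environmental noise kick
        s = bin_of(theta + 0.1 * rng.standard_normal())     # imperfect measurement
        p = np.exp(logits[s] - logits[s].max())
        p /= p.sum()
        a = rng.choice(actions.size, p=p)
        theta = (theta - actions[a] + np.pi) % (2 * np.pi) - np.pi   # apply feedback
        r = 0.5 * (1.0 + np.cos(theta))                     # fidelity-like reward
        trail.append((s, a, p, r))
    ep_rewards.append(np.mean([step[3] for step in trail]))

    # REINFORCE: raise log-probabilities of actions followed by above-baseline returns.
    G = 0.0
    for t in range(T - 1, -1, -1):
        s, a, p, r = trail[t]
        G = r + gamma * G
        grad = -p
        grad[a] += 1.0
        logits[s] += alpha * (G - baselines[t]) * grad
        baselines[t] += 0.05 * (G - baselines[t])

print(f"mean per-step reward: first 100 episodes {np.mean(ep_rewards[:100]):.3f}, "
      f"last 100 episodes {np.mean(ep_rewards[-100:]):.3f}")
```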

Oct 4, 2024

Open-Ended AI: The Key to Superhuman Intelligence?

Posted by in category: robotics/AI

Prof. Tim Rocktäschel, AI researcher at UCL and Google DeepMind, talks about open-ended AI systems. These systems aim to keep learning and improving on their own, like evolution does in nature.

TOC:
00:00:00 Introduction to Open-Ended AI and Key Concepts.
00:01:37 Tim Rocktäschel’s Background and Research Focus.
00:06:25 Defining Open-Endedness in AI Systems.
00:10:39 Subjective Nature of Interestingness and Learnability.
00:16:22 Open-Endedness in Practice: Examples and Limitations.
00:17:50 Assessing Novelty in Open-ended AI Systems.
00:20:05 Adversarial Attacks and AI Robustness.
00:24:05 Rainbow Teaming and LLM Safety.
00:25:48 Open-ended Research Approaches in AI.
00:29:05 Balancing Long-term Vision and Exploration in AI Research.
00:37:25 LLMs in Program Synthesis and Open-Ended Learning.
00:37:55 Transition from Human-Based to Novel AI Strategies.
00:39:00 Expanding Context Windows and Prompt Evolution.
00:40:17 AI Intelligibility and Human-AI Interfaces.
00:46:04 Self-Improvement and Evolution in AI Systems.

Continue reading “Open-Ended AI: The Key to Superhuman Intelligence?” »

Oct 4, 2024

MIT spinoff Liquid debuts non-transformer AI models and they’re already state-of-the-art

Posted by in category: robotics/AI

The startup from MIT’s CSAIL says its Liquid Foundation Models have smaller memory needs thanks to a post-transformer architecture.

Page 56 of 2,426