Jan 21, 2025
Deepseek-ai/DeepSeek-R1 · Hugging Face
Posted by Cecile G. Tamura in category: robotics/AI
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
UBTech has partnered with Foxconn, Apple’s manufacturing partner, to help build iPhones using its Walker S1 humanoid robots.
AI-powered data analysis tools have the potential to significantly improve the quality of scientific publications. A new study by Mathias Christmann, a chemistry professor at Freie Universität Berlin, has uncovered shortcomings in chemistry publications.
Using a Python script developed with the help of modern AI language models, Christmann analyzed more than 3,000 scientific papers published in Organic Letters over the past two years. The analysis revealed that only 40% of the chemical research papers contained error-free mass measurements. The AI-based data analysis tool used for this purpose could be created without any prior programming knowledge.
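The study’s script itself isn’t reproduced here, but the core check is simple enough to sketch. The snippet below is a minimal, illustrative version: it recomputes the monoisotopic [M+H]+ value from a molecular formula and flags reported values that don’t match to four decimal places. The formula and m/z values are hypothetical examples, and the element table covers only a handful of common elements.

```python
import re

# Monoisotopic masses (Da) for a few common elements; a real checker
# would cover the full periodic table and other adduct types.
MASS = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915, "S": 31.972071}
PROTON = 1.007276  # mass added for an [M+H]+ ion

def monoisotopic_mass(formula: str) -> float:
    """Sum monoisotopic masses for a formula string like 'C10H12N2O'."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += MASS[element] * (int(count) if count else 1)
    return total

def hrms_is_consistent(formula: str, reported_mz: float) -> bool:
    """Flag reported [M+H]+ values that don't match the formula to 4 decimals."""
    calc = monoisotopic_mass(formula) + PROTON
    return abs(calc - reported_mz) < 0.0001

# Hypothetical entry: calcd for C10H12N2O [M+H]+ should be 177.1022
print(hrms_is_consistent("C10H12N2O", 177.1022))  # True  -> consistent
print(hrms_is_consistent("C10H12N2O", 177.1025))  # False -> would be flagged
```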
“The results demonstrate how powerful AI-powered tools can be in everyday research. They not only make complex analyses accessible but also improve the reliability of scientific data,” explains Christmann.
Millions of PHP servers compromised by Python bots using GSocket to target Indonesian users with gambling redirects.
Gould’s thesis has sparked widespread debate ever since, with some advocating for determinism and others supporting contingency. In his 1952 short story “A Sound of Thunder,” science fiction author Ray Bradbury imagined a time traveler whose simple act of stepping on a butterfly in the age of the dinosaurs changed the course of the future. Gould made a similar point: “Alter any early event, ever so slightly and without apparent importance at the time, and evolution cascades into a radically different channel.”
Scientists have been exploring this problem through experiments designed to recreate evolution in the lab or in nature, or by comparing species that have emerged under similar conditions. Today, a new avenue has opened up: AI. In New York, a group of former researchers from Meta — the parent company of social networks Facebook, Instagram, and WhatsApp — founded EvolutionaryScale, an AI startup focused on biology. The EvolutionaryScale Model 3 (ESM3) system created by the company is a generative language model — the same kind of platform that powers ChatGPT. However, while ChatGPT generates text, ESM3 generates proteins, the fundamental building blocks of life.
ESM3 feeds on sequence, structure, and function data from existing proteins to learn the biological language of these molecules and create new ones. Its creators trained it on 771 billion tokens derived from 3.15 billion sequences, 236 million structures, and 539 million functional annotations. Training consumed more than one trillion teraflops of computation, the most computing power ever applied to biology, according to the company.
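The scale is the point, but the underlying mechanism, a model that learns which token is likely to follow which, can be illustrated at toy size. The sketch below fits a bigram model over amino-acid letters and samples a new sequence from it; the training strings are made up for illustration, and nothing here resembles ESM3’s actual architecture or training.

```python
import random
from collections import defaultdict

# Toy training data: short, made-up amino-acid strings. A real protein
# language model like ESM3 trains on billions of sequences instead.
sequences = ["MKTAYIAK", "MKVLAAGI", "MKTLLLTL", "MKVAVFGA"]

# Count bigram transitions between residues ('^' = start, '$' = end).
counts = defaultdict(lambda: defaultdict(int))
for seq in sequences:
    tokens = ["^"] + list(seq) + ["$"]
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1

def sample_protein(max_len: int = 20) -> str:
    """Sample a new sequence from the learned transition statistics."""
    out, state = [], "^"
    while len(out) < max_len:
        nxt = random.choices(list(counts[state]),
                             weights=list(counts[state].values()))[0]
        if nxt == "$":
            break
        out.append(nxt)
        state = nxt
    return "".join(out)

random.seed(0)
print(sample_protein())  # a short made-up sequence recombining the training strings
```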
On a broader level, by pushing AI toward more human-like processing, Titans could mean AI that thinks more deeply than humans — challenging our understanding of human uniqueness and our role in an AI-augmented world.
At the heart of Titans’ design is a concerted effort to more closely emulate the functioning of the human brain. While previous models like Transformers introduced the concept of attention—allowing AI to focus on specific, relevant information—Titans takes this several steps further. The new architecture incorporates analogs to human cognitive processes, including short-term memory, long-term memory, and even the ability to “forget” less relevant information. Perhaps most intriguingly, Titans introduces a concept that’s surprisingly human: the ability to prioritize surprising or unexpected information. This mimics the human tendency to more easily remember events that violate our expectations, a feature that could lead to more nuanced and context-aware AI systems.
The key technical innovation in Titans is the introduction of a neural long-term memory module. This component learns to memorize historical context and works in tandem with the attention mechanisms that have become standard in modern AI models. The result is a system that can effectively utilize both immediate context (akin to short-term memory) and broader historical information (long-term memory) when processing data or generating responses.
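Google’s paper has the exact formulation; as a rough sketch of the idea, the module below pairs window-level attention (short-term) with a small MLP memory that is updated in proportion to its own prediction error, a stand-in for the “surprise” signal, and gently decays its weights to “forget.” The class names, update rule, and hyperparameters are illustrative assumptions, not Titans’ actual equations.

```python
import torch
import torch.nn as nn

class LongTermMemory(nn.Module):
    """Tiny MLP that memorizes key -> value associations on the fly (illustrative)."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))

    def surprise_update(self, key, value, lr=0.1, forget=0.01):
        # "Surprise" stand-in: the bigger the memory's prediction error,
        # the bigger the weight update, so unexpected inputs are stored
        # more strongly. Mild weight decay plays the role of forgetting.
        loss = ((self.net(key) - value) ** 2).mean()
        grads = torch.autograd.grad(loss, list(self.net.parameters()))
        with torch.no_grad():
            for p, g in zip(self.net.parameters(), grads):
                p.mul_(1 - forget).sub_(lr * g)

class TitansStyleBlock(nn.Module):
    """Short-term attention over the window plus long-term memory retrieval."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.memory = LongTermMemory(dim)
        self.mix = nn.Linear(2 * dim, dim)

    def forward(self, x):                       # x: (batch, seq, dim)
        short, _ = self.attn(x, x, x)           # short-term: attend within the window
        retrieved = self.memory.net(x)          # long-term: query the memory MLP
        self.memory.surprise_update(x.detach(), x.detach())  # memorize context
        return self.mix(torch.cat([short, retrieved], dim=-1))

block = TitansStyleBlock()
print(block(torch.randn(2, 16, 64)).shape)      # torch.Size([2, 16, 64])
```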
AI-powered DIMON solves complex partial differential equations faster, boosting medical diagnostics and engineering simulations.
This article explores why the convergence of these technologies could represent the next quantum leap in artificial intelligence.
Neural network models that are able to make decisions or store memories have long captured scientists’ imaginations. In these models, a hallmark of the computation being performed by the network is the presence of stereotyped sequences of activity, akin to one-way paths. This idea was pioneered by John Hopfield, who was notably co-awarded the 2024 Nobel Prize in Physics. Whether one-way activity paths are used in the brain, however, has been unknown.
A collaborative team of researchers from Carnegie Mellon University and the University of Pittsburgh designed a clever experiment to causally test this question using a brain-computer interface (BCI). Their findings provide empirical support for one-way activity paths in the brain and for the computational principles long hypothesized by neural network models.
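One-way paths are easy to visualize in a textbook model: an asymmetric Hopfield-style network whose outer-product weights chain each stored pattern to the next can only move forward through the sequence. The sketch below is that classic construction, not the study’s actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 4                                  # neurons, sequence length
patterns = rng.choice([-1, 1], size=(T, N))    # stored activity patterns

# Asymmetric outer-product rule: each pattern points to the NEXT one,
# building a one-way path p0 -> p1 -> p2 -> p3 through state space.
W = sum(np.outer(patterns[t + 1], patterns[t]) for t in range(T - 1)) / N

# Start from a noisy version of the first pattern (10% of units flipped).
state = patterns[0] * np.where(rng.random(N) < 0.1, -1, 1)

for step in range(T - 1):
    state = np.sign(W @ state)                 # synchronous update
    overlaps = patterns @ state / N            # similarity to each stored pattern
    print(f"step {step + 1}: overlaps = {np.round(overlaps, 2)}")

# Activity marches forward through the stored sequence and cannot run
# backward, because W maps pattern t toward t+1 but not the reverse.
```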
In today’s AI news, OpenAI CEO Sam Altman is trying to calm the online hype surrounding his company. On Monday, the tech boss took to X to quell viral rumors that the company had achieved artificial general intelligence. “Twitter hype is out of control again,” he wrote. “We are not gonna deploy AGI next month, nor have we built it.”
In other advancements, Stuttgart, Germany-based Sereact has secured €25mn to advance its embodied AI software that enables robots to carry out tasks they were never trained to do. “With our technology, robots act situationally rather than following rigidly programmed sequences. They adapt to dynamic tasks in real-time, enabling an unprecedented level of autonomy,” said Ralf Gulde, CEO of Sereact (short for “sense, reason, act”).
Seven years and seven months ago, Google changed the world with the Transformer architecture, which lies at the heart of generative AI applications like OpenAI’s ChatGPT. Now Google has unveiled a new architecture called Titans, a direct evolution of the Transformer that takes us a step closer to AI that can think like humans.