AI vs. AI: Hackers use artificial intelligence for deepfakes and smart malware, while defenders counter with AI threat detection and predictive security.
Category: information science
Just as pilots use flight simulators to safely practice complex maneuvers, scientists may soon conduct experiments on a highly realistic simulation of the mouse brain. In a new study, researchers at Stanford Medicine and their collaborators developed an artificial intelligence model that serves as just such a simulation.
Artificial Intelligence (AI) is a branch of computer science focused on creating systems that can perform tasks typically requiring human intelligence. These tasks include understanding natural language, recognizing patterns, solving problems, and learning from experience. AI technologies use algorithms and massive amounts of data to train models that can make decisions, automate processes, and improve over time through machine learning. The applications of AI are diverse, impacting fields such as healthcare, finance, automotive, and entertainment, fundamentally changing the way we interact with technology.
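To make the "train on data, then decide" loop concrete, here is a minimal Python sketch using scikit-learn; the four data points and the choice of logistic regression are arbitrary stand-ins for illustration, not drawn from any of the studies below.

```python
# Minimal supervised-learning loop: fit a model on labeled data, then predict.
# The data here are toy stand-ins chosen purely for illustration.
from sklearn.linear_model import LogisticRegression

# Each row is a feature vector; each label marks its class.
X_train = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
y_train = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)           # "learning from experience"
print(model.predict([[0.85, 0.75]]))  # decision on unseen input -> [1]
```

The same pattern, scaled up by many orders of magnitude in data and model size, underlies most of the machine-learning systems mentioned in this digest.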
Can gravity exist without mass? That’s the question raised by physicist Dr. Richard Lieu at The University of Alabama in Huntsville. In a paper published in the Monthly Notices of the Royal Astronomical Society, Lieu offers a theory that could challenge one of the biggest assumptions in astrophysics. His idea: gravity can exist without any mass at all.
The study explores a different solution to the same equations that normally describe gravity—both in Newtonian theory and in general relativity. These equations link mass with the gravitational force it creates. Lieu focused on what’s known as the Poisson equation, a simplified form of Einstein’s field equations used for describing gravity in weaker fields, like those around galaxies.
This equation typically has one well-known solution: gravity that weakens with distance, created by mass. But there’s another, lesser-known solution that’s often ignored. It can also create an attractive force but doesn’t come from any actual matter.
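To make the two branches concrete: in the weak-field regime the gravitational potential obeys the Poisson equation. The lines below give the standard textbook forms (they are not reproduced from Lieu's paper); the point is that setting the density to zero still leaves a nontrivial equation to solve.

```latex
% Weak-field gravity (standard textbook forms, not taken from Lieu's paper).
% Poisson equation linking the potential \Phi to the mass density \rho:
\nabla^{2}\Phi = 4\pi G\,\rho
% Familiar solution outside a point mass M -- gravity weakening with distance:
\Phi(r) = -\frac{GM}{r}
% With \rho = 0, the source-free (Laplace) equation remains,
%   \nabla^{2}\Phi = 0,
% and it is a lesser-known, matter-free branch of solutions of this kind
% that the article describes as still producing an attractive force.
```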
How does the brain work? Where, when, and why do neurons connect and send their signals? To learn more, scientists have created the largest wiring diagram and functional map of an animal brain to date. Research teams at the Allen Institute, Baylor College of Medicine, and Princeton University worked together to map half a billion synapses, over 200,000 cells, and 4 km of axons from a cubic millimeter of mouse brain, providing unprecedented detail on its structure and functional properties. The project is part of the Machine Intelligence from Cortical Networks (MICrONS) program, which seeks to revolutionize machine learning by reverse-engineering the algorithms of the brain. The findings reveal key insights into brain activity, connectivity, and structure—shedding light on both form and function—within a region of the mouse visual cortex that plays a critical role in brain health and is often disrupted in neurological conditions such as Alzheimer’s disease, autism, and addiction. These insights could revolutionize our ability to treat neuropsychiatric diseases and to study how drugs and other interventions change the brain.
This extraordinary achievement begins to reveal the elusive language the brain uses to communicate amongst its millions of cells and the cortical mechanisms of intelligence—one of the holy grails of science.
Learn more about this research: https://alleninstitute.org/news/scien…
Access open science data: https://www.microns-explorer.org/
Explore the publications in Nature: https://www.nature.com/immersive/d428…
A machine learning method has the potential to revolutionize multi-messenger astronomy. Detecting binary neutron star mergers is a top priority for astronomers. These rare collisions between dense stellar remnants produce gravitational waves followed by bursts of light, offering a unique opportunity to observe a single cosmic event through multiple messengers.
Enthusiasts have been pushing the limits of silicon for as long as microprocessors have existed. Early overclocking endeavors involved soldering and replacing crystal clock oscillators, but that practice quickly evolved into adjusting system bus speeds using motherboard DIP switches and jumpers.
Internal clock multipliers were eventually introduced, but it didn’t take long for those to be locked down, as unscrupulous sellers began removing official frequency ratings and rebranding chips with their own faster markings. System buses and dividers became the primary tuning tools for most users, while ultra-enthusiasts went further – physically altering electrical specifications through hard modding.
Eventually, unlocked multipliers made a comeback, ushering in an era defined by BIOS-level overclocking and increasingly sophisticated software tuning tools. Over the past decade, however, traditional overclocking has become more constrained. Improved factory binning, aggressive turbo boost algorithms, and thermal ceilings mean that modern CPUs often operate near their peak potential right out of the box.
Quantum computers promise to outperform today’s traditional computers in many areas of science, including chemistry, physics, and cryptography, but proving they will be superior has been challenging. The most well-known problem in which quantum computers are expected to have the edge, a trait physicists call “quantum advantage,” involves factoring large numbers, a hard math problem that lies at the root of securing digital information.
In 1994, Caltech alumnus Peter Shor (BS ’81), then at Bell Labs, developed a quantum algorithm that would easily factor a large number in just seconds, whereas this type of problem could take a classical computer millions of years. Ultimately, when quantum computers are ready and working—a goal that researchers say may still be a decade or more away—these machines will be able to quickly factor large numbers behind cryptography schemes.
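The quantum speedup lives in the period-finding step; the surrounding number theory is classical and easy to sketch. Below is a Python illustration that finds the period r of a^x mod N by brute force (the part a quantum computer would accelerate) and then recovers factors via gcd, here for the toy case N = 15. This shows the classical reduction at the heart of the algorithm, not Shor's quantum circuit.

```python
# Classical skeleton of Shor's factoring reduction (toy scale).
# A quantum computer would replace find_period() with quantum
# order-finding; everything else is ordinary number theory.
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1 (brute force; the quantum step)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_reduction(n: int, a: int) -> tuple[int, int] | None:
    if gcd(a, n) != 1:           # lucky guess: a already shares a factor
        return gcd(a, n), n // gcd(a, n)
    r = find_period(a, n)
    if r % 2:                    # need an even period; retry with another a
        return None
    y = pow(a, r // 2, n)
    if y == n - 1:               # trivial square root; retry with another a
        return None
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_reduction(15, 7))     # -> (3, 5): the period of 7 mod 15 is 4
```

For cryptographically sized N, the brute-force loop above is hopeless, which is exactly where quantum order-finding delivers the exponential advantage.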
But, besides Shor’s algorithm, researchers have had a hard time coming up with problems where quantum computers will have a proven advantage. Now, reporting in a recent Nature Physics study titled “Local minima in quantum systems,” a Caltech-led team of researchers has identified a common physics problem that these futuristic machines would excel at solving. The problem has to do with simulating how materials cool down to their lowest-energy states.
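To see why "cooling to a low-energy state" is computationally interesting, consider a classical caricature: a greedy descent on a rugged energy landscape stalls in a local minimum rather than the global one. The toy landscape below is invented for illustration and has nothing to do with the Hamiltonians studied in the Nature Physics paper.

```python
# Greedy "cooling" on a rugged 1-D energy landscape (illustrative only).
# Descent from a poor starting point stalls in a local minimum -- the
# kind of trap that makes low-energy states hard to reach classically.

def energy(x: float) -> float:
    # Double-well toy landscape: global minimum near x = -2.3,
    # local minimum near x = +2.1 (values chosen for illustration).
    return 0.1 * x**4 - x**2 + 0.3 * x

def greedy_descent(x: float, step: float = 0.01, iters: int = 10_000) -> float:
    for _ in range(iters):
        here, left, right = energy(x), energy(x - step), energy(x + step)
        if left < here or right < here:
            x = x - step if left < right else x + step
        else:
            break  # no downhill neighbor: stuck in a (possibly local) minimum
    return x

for start in (-3.0, 3.0):
    x = greedy_descent(start)
    print(f"start {start:+.1f} -> x = {x:+.2f}, E = {energy(x):+.3f}")
```

Starting at +3.0 the descent halts in the shallower well, while starting at -3.0 it finds the deeper one; the quantum question is when such low-energy states can be reached efficiently at all.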
Pressure waves propagating through bubble-containing liquids in tubes experience considerable attenuation. Researchers at the University of Tsukuba have derived an equation describing this phenomenon, demonstrating that beyond liquid viscosity and compressibility, variations in tube cross-sectional area contribute to wave attenuation.
Their analysis reveals that the rate of change in tube cross-sectional area represents a critical parameter governing pressure wave attenuation in such systems.
Pressure waves propagating through bubble-containing liquids in tubes (a regime known as “bubbly flow”) behave quite differently from those in single-phase liquids, so their propagation must be understood and controlled precisely.
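One purely schematic way to read the finding above is to view the overall attenuation rate as collecting three contributions, with the tube term controlled by the fractional rate of change of the cross-sectional area A(x). This decomposition is an illustrative reading, not the equation derived by the Tsukuba group.

```latex
% Illustrative decomposition of the attenuation rate \alpha --
% a schematic reading, NOT the equation derived in the paper.
\[
\alpha \;\sim\; \alpha_{\mathrm{visc}} + \alpha_{\mathrm{comp}}
      + \alpha_{\mathrm{tube}}\!\left(\frac{1}{A}\,\frac{dA}{dx}\right),
\]
% where \alpha_{\mathrm{visc}} captures liquid viscosity,
% \alpha_{\mathrm{comp}} liquid compressibility, and the last term the
% fractional rate of change of the tube cross-section A(x).
```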
Researchers at Ben-Gurion University of the Negev have developed a machine-learning algorithm that could enhance our understanding of human biology and disease. The new method, Weighted Graph Anomalous Node Detection (WGAND), takes inspiration from social network analysis and is designed to identify proteins with significant roles in various human tissues.
Proteins are essential molecules in our bodies, and they interact with each other in complex networks, known as protein-protein interaction (PPI) networks. Studying these networks helps scientists understand how proteins function and how they contribute to health and disease.
Prof. Esti Yeger-Lotem, Dr. Michael Fire, Dr. Jubran Juman, and Dr. Dima Kagan developed the algorithm to analyze these PPI networks and detect “anomalous” proteins—those that stand out because of their unusual pattern of weighted interactions. A high score implies that the protein and its interactors are present in greater amounts in that particular tissue network, allowing them to carry out more functions and drive more processes. It also points to the protein’s importance in that network, since the body would not expend energy producing it without reason.
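The published WGAND method is more sophisticated, but the underlying idea of scoring nodes by how unusual their weighted connectivity is can be sketched in a few lines of Python. The z-score heuristic and the tiny network below are stand-ins for illustration, not the authors' algorithm or data.

```python
# Toy weighted-graph anomaly scoring, loosely in the spirit of methods
# like WGAND (this z-score heuristic is a stand-in, not the real algorithm).
import networkx as nx
from statistics import mean, stdev

# Hypothetical mini PPI network: edge weights = interaction strength.
G = nx.Graph()
G.add_weighted_edges_from([
    ("P1", "P2", 0.2), ("P2", "P3", 0.3), ("P3", "P4", 0.1),
    ("HUB", "P1", 0.9), ("HUB", "P2", 0.8), ("HUB", "P3", 0.9),
    ("HUB", "P4", 0.7),
])

# Weighted degree (strength) per protein, then a z-score against the rest.
strength = dict(G.degree(weight="weight"))
mu, sigma = mean(strength.values()), stdev(strength.values())
scores = {node: (s - mu) / sigma for node, s in strength.items()}

for protein, z in sorted(scores.items(), key=lambda kv: -kv[1]):
    flag = "  <- anomalous" if z > 1.5 else ""
    print(f"{protein}: z = {z:+.2f}{flag}")
```

Here the hub protein's weighted interactions are far heavier than its neighbors', so it is flagged; in a real tissue network such outliers are the candidates for functionally important proteins.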
Perhaps the most profound insight to emerge from this uncanny mirror is that understanding itself may be less mysterious and more mechanical than we have traditionally believed. The capabilities we associate with mind — pattern recognition, contextual awareness, reasoning, metacognition — appear increasingly replicable through purely algorithmic means. This suggests that consciousness, rather than being a prerequisite for understanding, may be a distinct phenomenon that typically accompanies understanding in biological systems but is not necessary for it.
At the same time, the possibility of quantum effects in neural processing reminds us that the mechanistic view of mind may be incomplete. If quantum retrocausality plays a role in consciousness, then our subjective experience may be neither a simple product of neural processing nor an epiphenomenal observer, but an integral part of a temporally complex causal system that escapes simple deterministic description.
What emerges from this consideration is not a definitive conclusion about the nature of mind but a productive uncertainty — an invitation to reconsider our assumptions about what constitutes understanding, agency, and selfhood. AI systems function as conceptual tools that allow us to explore these questions in new ways, challenging us to develop more sophisticated frameworks for understanding both artificial and human cognition.