Archive for the ‘supercomputing’ category: Page 43

Apr 7, 2022

Scientists Just Broke The Record For Calculating Pi, And Infinity Never Felt So Close

Posted by in categories: mathematics, supercomputing

Circa 2021


Swiss researchers said Monday they had calculated the mathematical constant pi to a new world-record level of exactitude, hitting 62.8 trillion figures using a supercomputer.

“The calculation took 108 days and nine hours” using a supercomputer, the Graubuenden University of Applied Sciences said in a statement.
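Records like this are produced with specialized software (such as y-cruncher) running the Chudnovsky series, each term of which contributes roughly 14 digits of precision. As a minimal sketch of the underlying mathematics, not the record-setting implementation, the series can be summed with Python's standard `decimal` module:

```python
from decimal import Decimal, getcontext

def chudnovsky_pi(digits):
    # Chudnovsky series: each term adds ~14 correct digits of pi
    getcontext().prec = digits + 10
    C = 426880 * Decimal(10005).sqrt()
    K, M, X, L = 6, 1, 1, 13591409
    S = Decimal(L)
    for i in range(1, digits // 14 + 2):
        M = M * (K**3 - 16 * K) // i**3   # exact integer recurrence
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    return C / S

print(str(chudnovsky_pi(50))[:52])  # 3.14159...
```

The record computations use far more sophisticated techniques (binary splitting, disk-based arithmetic), but the series itself is the same.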


Apr 2, 2022

How China Made An Exascale Supercomputer Out Of Old 14 Nanometer Tech

Posted by in categories: robotics/AI, supercomputing

If you need any proof that it doesn’t take the most advanced chip manufacturing processes to create an exascale-class supercomputer, you need look no further than the Sunway “OceanLight” system housed at the National Supercomputing Center in Wuxi, China.

Some of the architectural details of the OceanLight supercomputer came to our attention as part of a paper published by Alibaba Group, Tsinghua University, DAMO Academy, Zhejiang Lab, and Beijing Academy of Artificial Intelligence, which describes running a pretrained machine learning model called BaGuaLu across more than 37 million cores with 14.5 trillion parameters (presumably in FP32 single precision), with the capability to scale to 174 trillion parameters (approaching what is called “brain scale,” where the number of parameters starts to approach the number of synapses in the human brain). But, as it turns out, some of these architectural details were hinted at in three of the six nominations for the Gordon Bell Prize last fall, which we covered here. To our chagrin and embarrassment, we did not dive into the details of the architecture at the time (we had not seen that they had been revealed), and the BaGuaLu paper gives us a chance to circle back.

Before this slew of papers was announced with details on the new Sunway many-core processor, we did take a stab at figuring out how the National Research Center of Parallel Computer Engineering and Technology (known as NRCPC) might build an exascale system, scaling up from the SW26010 processor used in the Sunway “TaihuLight” machine that took the world by storm back in June 2016. The 260-core SW26010 processor was etched by Chinese foundry Semiconductor Manufacturing International Corporation using 28 nanometer processes – not exactly cutting edge. The SW26010-Pro processor, etched using 14 nanometer processes, is not on an advanced node either, but China is perfectly happy to burn a lot of coal to power and cool the OceanLight kicker system based on it. (OceanLight is also known as the Sunway exascale system or the New Generation Sunway supercomputer.)

Mar 31, 2022

DeepMind Mafia, DishBrain, PRIME, ZooKeeper AI, Instant NeRF

Posted by in categories: biological, climatology, robotics/AI, supercomputing

Mar 31, 2022


Our 91st episode with a summary and discussion of last week’s big AI news!


Mar 21, 2022

Cluster Your Pi Zeros In Style With 3D Printed Cray-1

Posted by in categories: energy, supercomputing

From a performance standpoint we know building a homebrew Raspberry Pi cluster doesn’t make a lot of sense, as even a fairly run-of-the-mill desktop x86 machine is sure to run circles around it. That said, there’s an argument to be made that rigging up a dozen little Linux boards gives you a compact and affordable playground to experiment with things like parallel computing and load balancing. Is it a perfect argument? Not really. But if you’re anything like us, the whole thing starts making a lot more sense when you realize your cluster of Pi Zeros can be built to look like the iconic Cray-1 supercomputer.
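As a flavor of the kind of parallel-computing exercise such a cluster invites, here is a toy sketch (our own illustration, not from the project): splitting a Monte Carlo estimate of pi across worker processes with Python's `multiprocessing`. On an actual cluster you would distribute across boards with something like MPI rather than local processes.

```python
import random
from multiprocessing import Pool

def count_hits(n):
    # Count random points that land inside the unit quarter-circle
    rng = random.Random()
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))

def monte_carlo_pi(total=400_000, workers=4):
    # Split the sample budget evenly, one chunk per "node"
    per_worker = total // workers
    with Pool(workers) as pool:
        hits = pool.map(count_hits, [per_worker] * workers)
    return 4 * sum(hits) / (per_worker * workers)

if __name__ == "__main__":
    print(monte_carlo_pi())
```

The embarrassingly parallel structure (no communication between chunks until the final sum) is exactly why it is a favorite first workload for homebrew clusters.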


Mar 21, 2022

This Insane Chinese Supercomputer Changes EVERYTHING

Posted by in categories: government, robotics/AI, supercomputing

The smartest scientists in both China and the United States are working hard to create the fastest hardware for future supercomputers in the exaflop and zettaflop performance range. Companies such as Intel, Nvidia, and AMD are continuing Moore’s Law with the help of impressive new process nodes from TSMC. These supercomputers are secret government projects aimed at beating the other side in the tech industry and preparing for artificial intelligence.

TIMESTAMPS:
00:00 A new Superpower in the making.
00:46 A Brain-Scale Supercomputer?
02:47 China Tech vs USA Tech.
05:30 Chinese Semiconductor Technology.
07:39 Last Words.

#china #computing #usa

Mar 12, 2022

Faster analog computer could be based on mathematics of complex systems

Posted by in categories: mathematics, quantum physics, supercomputing

Researchers have proposed a novel principle for a unique kind of computer that would use analog technology in place of digital or quantum components.

The unique device would be able to carry out complex computations extremely quickly—possibly even faster than today’s supercomputers, and at vastly lower cost than any existing quantum computer.

The principle is designed to overcome the barriers in optimization problems (choosing the best option from a large number of possibilities), such as Google searches, which aim to find the optimal results matching the search request.
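The excerpt doesn't spell out the mathematics, but the general idea behind analog optimizers of this kind is to encode a cost function in continuous-time dynamics that physically relax toward a minimum. As a toy numerical illustration (not the researchers' actual scheme), gradient flow dx/dt = -grad f(x) can be integrated with Euler steps:

```python
def gradient_flow(grad, x0, dt=0.01, steps=5000):
    # Euler-integrate dx/dt = -grad(x): the continuous-time
    # relaxation an analog circuit would perform physically
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - dt * gi for xi, gi in zip(x, g)]
    return x

# Toy cost f(x, y) = (x - 3)^2 + (y + 1)^2 with minimum at (3, -1)
grad_f = lambda v: [2 * (v[0] - 3.0), 2 * (v[1] + 1.0)]
print(gradient_flow(grad_f, [0.0, 0.0]))  # converges toward [3.0, -1.0]
```

An analog machine performs this relaxation in hardware, in continuous time, rather than step by step in software; the hard part for real optimization problems is escaping the many local minima that a convex toy like this one lacks.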

Mar 12, 2022

Synthetic synapses get more like a real brain

Posted by in categories: biological, chemistry, food, nanotechnology, robotics/AI, supercomputing

The human brain, fed on just the calorie input of a modest diet, easily outperforms state-of-the-art supercomputers powered by full-scale power-station energy inputs. The difference stems from the multiple states of brain processes versus the two binary states of digital processors, as well as the brain’s ability to store information without power consumption—non-volatile memory. These inefficiencies in today’s conventional computers have prompted great interest in developing synthetic synapses for use in computers that can mimic the way the brain works. Now, researchers at King’s College London, UK, report in the ACS journal Nano Letters an array of nanorod devices that mimic the brain more closely than ever before. The devices may find applications in artificial neural networks.

Efforts to emulate biological synapses have revolved around types of memristors with different resistance states that act like memory. However, unlike the brain, the devices reported so far have all needed a reverse polarity to reset them to the initial state. “In the brain a change in the chemical environment changes the output,” explains Anatoly Zayats, a professor at King’s College London who led the team behind the recent results. The King’s College London researchers have now been able to demonstrate this brain-like behavior in their synthetic synapses as well.

Zayats and his team built an array of gold nanorods, each connected through a polymer junction (poly-L-histidine, PLH) to a metal contact. Either light or an electrical voltage can excite plasmons—collective oscillations of electrons. The plasmons release hot electrons into the PLH, gradually changing the chemistry of the polymer and hence giving it different levels of conductivity or light emissivity. How the polymer changes depends on whether oxygen or hydrogen surrounds it. A chemically inert nitrogen environment will preserve the state without any energy input, so the device acts as non-volatile memory.
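The behavior described above can be caricatured in a few lines of code. This is purely an illustrative model, not the King's College device physics: a weight that steps with each stimulus pulse, whose update direction is set by the chemical environment rather than by voltage polarity, and which persists with no power applied. Which gas potentiates versus depresses is an assumption here.

```python
class ToySynapse:
    """Toy non-volatile synaptic element: conductance g steps on each
    pulse and is retained indefinitely between pulses (no refresh)."""

    def __init__(self, g=0.0, g_min=0.0, g_max=1.0, step=0.05):
        self.g, self.g_min, self.g_max, self.step = g, g_min, g_max, step

    def pulse(self, environment="hydrogen"):
        # Update direction depends on the surrounding chemistry,
        # not on reversing the drive polarity (the brain-like trait)
        delta = self.step if environment == "hydrogen" else -self.step
        self.g = min(self.g_max, max(self.g_min, self.g + delta))
        return self.g

s = ToySynapse()
for _ in range(3):
    s.pulse("hydrogen")   # potentiate
s.pulse("oxygen")         # depress, same drive polarity
print(round(s.g, 2))      # 0.1
```

The clamp to [g_min, g_max] mimics the saturation any physical device exhibits once the polymer is fully converted.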

Mar 11, 2022

Supercomputers Simulated a Black Hole And Found Something We’ve Never Seen Before

Posted by in categories: cosmology, supercomputing

While black holes might always be black, they do occasionally emit some intense bursts of light from just outside their event horizon. Previously, what exactly caused these flares had been a mystery to science.

That mystery was solved recently by a team of researchers that used a series of supercomputers to model the details of black holes’ magnetic fields in far more detail than any previous effort. The simulations point to the breaking and remaking of super-strong magnetic fields as the source of the super-bright flares.

Scientists have known for some time that black holes have powerful magnetic fields surrounding them. Typically, these fields are just one part of a complex dance of forces, material, and other phenomena around a black hole.

Mar 6, 2022

Detailed Supercomputer Simulation of the Universe Creates Structures Very Similar to the Milky Way

Posted by in categories: cosmology, evolution, physics, supercomputing

In their pursuit of understanding cosmic evolution, scientists rely on a two-pronged approach. Using advanced instruments, astronomical surveys attempt to look farther and farther into space (and back in time) to study the earliest periods of the Universe. At the same time, scientists create simulations that attempt to model how the Universe has evolved based on our understanding of physics. When the two match, astrophysicists and cosmologists know they are on the right track!

In recent years, increasingly detailed simulations have been run on increasingly sophisticated supercomputers, yielding increasingly accurate results. Recently, an international team of researchers led by the University of Helsinki conducted the most accurate simulations to date. Known as SIBELIUS-DARK, these simulations accurately predicted the evolution of our corner of the cosmos from the Big Bang to the present day.

In addition to the University of Helsinki, the team was comprised of researchers from the Institute for Computational Cosmology (ICC) and the Centre for Extragalactic Astronomy at Durham University, the Lorentz Institute for Theoretical Physics at Leiden University, the Institut d’Astrophysique de Paris, and The Oskar Klein Centre at Stockholm University. The team’s results are published in the Monthly Notices of the Royal Astronomical Society.

Mar 3, 2022

Simulation of a Human-Scale Cerebellar Network Model on the K Computer

Posted by in categories: neuroscience, robotics/AI, supercomputing

Circa 2020 Simulation of the human brain.


Computer simulation of the human brain at an individual neuron resolution is an ultimate goal of computational neuroscience. The Japanese flagship supercomputer, K, provides unprecedented computational capability toward this goal. The cerebellum contains 80% of the neurons in the whole brain. Therefore, computer simulation of the human-scale cerebellum will be a challenge for modern supercomputers. In this study, we built a human-scale spiking network model of the cerebellum, composed of 68 billion spiking neurons, on the K computer. As a benchmark, we performed a computer simulation of a cerebellum-dependent eye movement task known as the optokinetic response. We succeeded in reproducing plausible neuronal activity patterns that are observed experimentally in animals. The model was built on dedicated neural network simulation software called MONET (Millefeuille-like Organization NEural neTwork), which calculates layered sheet types of neural networks with parallelization by tile partitioning. To examine the scalability of the MONET simulator, we repeatedly performed simulations while changing the number of compute nodes from 1,024 to 82,944 and measured the computational time. We observed a good weak-scaling property for our cerebellar network model. Using all 82,944 nodes, we succeeded in simulating a human-scale cerebellum for the first time, although the simulation ran 578 times slower than real time. These results suggest that the K computer is already capable of creating a simulation of a human-scale cerebellar model with the aid of the MONET simulator.
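The weak-scaling test described in the abstract (work per node held fixed while the node count grows, so the ideal runtime stays flat) reduces to a simple ratio. Here is a sketch with hypothetical timings; the numbers are illustrative, not the paper's measurements:

```python
def weak_scaling_efficiency(timings):
    # Weak scaling: problem size grows with node count, so perfect
    # scaling means constant runtime; efficiency = t(smallest) / t(N)
    base = timings[0][1]
    return {nodes: base / t for nodes, t in timings}

# Hypothetical (nodes, runtime-in-seconds) pairs for a scaling sweep
timings = [(1024, 100.0), (8192, 104.0), (82944, 112.0)]
eff = weak_scaling_efficiency(timings)
print(eff[82944])  # ~0.89
```

An efficiency that stays near 1.0 out to the full machine, as the authors report for their cerebellar model, is what "good weak scaling" means in practice.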

Computer simulation of the whole human brain is an ambitious challenge in the field of computational neuroscience and high-performance computing (Izhikevich, 2005; Izhikevich and Edelman, 2008; Amunts et al., 2016). The human brain contains approximately 100 billion neurons. While the cerebral cortex occupies 82% of the brain mass, it contains only 19% (16 billion) of all neurons. The cerebellum, which occupies only 10% of the brain mass, contains 80% (69 billion) of all neurons (Herculano-Houzel, 2009). Thus, we could say that 80% of human-scale whole brain simulation will be accomplished when a human-scale cerebellum is built and simulated on a computer. The human cerebellum plays crucial roles not only in motor control and learning (Ito, 1984, 2000) but also in cognitive tasks (Ito, 2012; Buckner, 2013). In particular, the human cerebellum seems to be involved in human-specific tasks, such as bipedal locomotion, natural language processing, and use of tools (Lieberman, 2014).
