Archive for the ‘supercomputing’ category: Page 46

Jan 10, 2022

Newcomer Conduit Leverages Frontera to Understand SARS-CoV-2 ‘Budding’

Posted by in categories: biotech/medical, genetics, supercomputing

I am happy to say that my recently published computational COVID-19 research has been featured in a major news article by HPCwire! I led this research as CTO of Conduit. My team utilized one of the world’s top supercomputers (Frontera) to study the mechanisms by which the coronavirus’s M proteins and E proteins facilitate budding, an understudied part of the SARS-CoV-2 life cycle. Our results may provide the foundation for new ways of designing antiviral treatments which interfere with budding. Thank you to Ryan Robinson (Conduit’s CEO) and my computational team: Ankush Singhal, Shafat M., David Hill, Jr., Tamer Elkholy, Kayode Ezike, and Ricky Williams.


Conduit, created by MIT graduate (and current CEO) Ryan Robinson, was founded in 2017. But it may not have found its true calling until a few years later, when the pandemic started. While Conduit's commercial division is busy developing a Covid-19 test called nanoSPLASH, its nonprofit arm was granted access to one of the most powerful supercomputers in the world, Frontera, at the Texas Advanced Computing Center (TACC), to model the "budding" process of SARS-CoV-2.

Budding, the researchers explained, is how the virus's genetic material is encapsulated in a spherical envelope, and the process is key to the virus's ability to infect. Despite that, they say, it has hitherto been poorly understood:

Continue reading “Newcomer Conduit Leverages Frontera to Understand SARS-CoV-2 ‘Budding’” »

Jan 5, 2022

Bug in backup software results in loss of 77 terabytes of research data at Kyoto University

Posted by in categories: cybercrime/malcode, supercomputing

Computer maintenance workers at Kyoto University have announced that, due to an apparent bug in software used to back up research data, researchers using the university's Hewlett Packard Enterprise Cray computing system, whose storage runs the Lustre file system, have lost approximately 77 terabytes of data. The team at the university's Institute for Information Management and Communication posted a Failure Information page detailing what is known so far about the data loss.

The team, part of the university's Information Department Information Infrastructure Division (Supercomputing), reported that files in the /LARGE0 directory (on the DataDirect ExaScaler storage system) were lost during a system backup procedure. Some in the press have suggested that the problem arose from a faulty script that was supposed to delete only old, unneeded log files. The team noted that it was originally thought that approximately 100TB of files had been lost, but that figure has since been pared down to 77TB. They also note that the failure occurred on December 16 between 5:50 pm and 7:00 pm, and that affected users were immediately notified via email. Approximately 34 million files were lost, belonging to 14 known research groups. The team did not release the names of the research groups or describe the research they were conducting, but did note that data from another four groups appears to be restorable.
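The reports do not include the faulty script itself, so any reconstruction is speculative. The sketch below (the `/LARGE0/logs` path and ten-day threshold are hypothetical, for illustration only) shows the defensive pattern such a cleanup job can use: three independent guards on directory, filename, and age, so that no single bug can widen a log purge into a deletion of research data.

```python
import os

LOG_DIR = "/LARGE0/logs"   # hypothetical log directory, for illustration only
MAX_AGE_DAYS = 10          # hypothetical retention threshold

def select_old_logs(entries, now, log_dir=LOG_DIR, max_age_days=MAX_AGE_DAYS):
    """Given (path, mtime) pairs, return only files that are (a) inside
    log_dir, (b) named *.log, and (c) older than max_age_days.  The three
    guards are independent, so a bug in any one of them cannot by itself
    expand the selection to user data outside the log tree."""
    cutoff = now - max_age_days * 86400
    selected = []
    for path, mtime in entries:
        if not path.startswith(log_dir + os.sep):
            continue                      # guard 1: stay inside the log tree
        if not path.endswith(".log"):
            continue                      # guard 2: only log files
        if mtime >= cutoff:
            continue                      # guard 3: only sufficiently old files
        selected.append(path)
    return selected
```

Selecting first and deleting in a separate, reviewable step (rather than piping a pattern straight into a delete command) is the design choice that makes this class of bug survivable.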

Jan 1, 2022

Kyoto University Loses 77 Terabytes of Research Data After Supercomputer Backup Error

Posted by in categories: climatology, engineering, quantum physics, supercomputing, sustainability

Unfortunately, some of the data is lost forever.


A routine backup procedure meant to safeguard data of researchers at Kyoto University in Japan went awry and deleted 77 terabytes of data, Gizmodo reported. The incident occurred between December 14 and 16, first came to light on the 16th, and affected as many as 14 research groups at the university.

Continue reading “Kyoto University Loses 77 Terabytes of Research Data After Supercomputer Backup Error” »

Dec 26, 2021

One of the World’s Most Powerful Supercomputers Uses Light Instead of Electric Current

Posted by in categories: quantum physics, robotics/AI, supercomputing

France's Jean Zay supercomputer, one of the most powerful computers in the world and part of the TOP500 list, is now the first HPC system to have a photonic coprocessor, meaning it transmits and processes information using light. The development represents a first for the industry.

The breakthrough was made during a pilot program in which LightOn collaborated with GENCI and IDRIS. Igor Carron, LightOn's CEO and co-founder, said in a press release: "This pilot program integrating a new computing technology within one of the world's Supercomputers would not have been possible without the particular commitment of visionary agencies such as GENCI and IDRIS/CNRS. Together with the emergence of Quantum Computing, this world premiere strengthens our view that the next step after exascale supercomputing will be about hybrid computing."

Over the next few months, the technology will be offered to select users of the Jean Zay research community, who will use the device for research on machine learning foundations, differential privacy, satellite imaging analysis, and natural language processing (NLP) tasks. LightOn's technology has already been used successfully by a community of researchers since 2018.

Dec 24, 2021

LightOn Photonic Co-processor Integrated Into European AI Supercomputer

Posted by in categories: information science, robotics/AI, supercomputing

PARIS, Dec. 23, 2021 – LightOn announces the integration of one of its photonic co-processors in the Jean Zay supercomputer, one of the Top500 most powerful computers in the world. Under a pilot program with GENCI and IDRIS, the insertion of a cutting-edge analog photonic accelerator into High Performance Computers (HPC) represents a technological breakthrough and a world-premiere. The LightOn photonic co-processor will be available to selected users of the Jean Zay research community over the next few months.

LightOn's Optical Processing Unit (OPU) uses photonics to speed up randomized algorithms at very large scale while working in tandem with standard silicon CPUs and NVIDIA's latest A100 GPU technology. The technology aims to reduce overall computing time and power consumption in an area deemed "essential to the future of computational science and AI for Science" by a 2021 U.S. Department of Energy report, "Randomized Algorithms for Scientific Computing."
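LightOn has described its OPU as optically performing very large random projections, the core primitive behind many randomized algorithms. The snippet below is a purely illustrative software sketch of that operation (plain Python standing in for light passing through a scattering medium); it is not LightOn's API, and the hardware additionally measures intensities rather than raw values.

```python
import random

def random_projection(x, out_dim, seed=0):
    """Multiply input vector x by a fixed random Gaussian matrix,
    regenerated deterministically from the seed.  An optical
    random-projection co-processor performs this multiply in a single
    pass of light, regardless of the matrix size; here it is emulated
    slowly, row by row, in software."""
    rng = random.Random(seed)
    return [sum(rng.gauss(0.0, 1.0) * xi for xi in x) for _ in range(out_dim)]
```

Because the random matrix is fixed by the seed, the map is linear: doubling the input doubles every output coordinate, which is what lets such projections preserve distances between data points (the Johnson-Lindenstrauss idea) while compressing or expanding dimensionality.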

INRIA (France's Institute for Research in Computer Science and Automation) researcher Dr. Antoine Liutkus provided additional context on the integration of LightOn's coprocessor in the Jean Zay supercomputer: "Our research is focused today on the question of large-scale learning. Integrating an OPU in one of the most powerful nodes of Jean Zay will give us the keys to carry out this research, and will allow us to go beyond a simple 'proof of concept.'"

Dec 22, 2021

Elon Musk Company News | Will Starship problems lead to SpaceX bankruptcy?

Posted by in categories: Elon Musk, robotics/AI, space travel, supercomputing

https://www.youtube.com/watch?v=mlwZawQhIKM


You are on the PRO Robots channel, and in this video we invite you to find out what is new with Elon Musk: what has been done and what is yet to come, the difficulties facing the Starlink project, why problems with the launch of Starship may lead to the bankruptcy of SpaceX, and what is new at Tesla, including the new products the company will unveil next year (and not just electric cars). All this and much more in this episode of news about Elon Musk!

Continue reading “Elon Musk Company News | Will Starship problems lead to SpaceX bankruptcy?” »

Dec 11, 2021

Exotic six-quark particle predicted by supercomputers

Posted by in categories: particle physics, supercomputing

RIKEN researchers have predicted the existence of an exotic particle made up of six of the elementary particles known as quarks; the prediction could deepen our understanding of how quarks combine to form the nuclei of atoms.

Quarks are the fundamental building blocks of matter. The nuclei of atoms consist of protons and neutrons, which are in turn made up of three quarks each. Particles consisting of three quarks are collectively known as baryons.

Scientists have long pondered the existence of systems containing two baryons, which are known as dibaryons. Only one stable dibaryon exists in nature: the deuteron, a hydrogen nucleus made up of a proton and a neutron that are very lightly bound to each other. Glimpses of other dibaryons have been caught in nuclear-physics experiments, but they had very fleeting existences.

Dec 10, 2021

Toward achieving megatesla magnetic fields in the laboratory

Posted by in categories: cosmology, particle physics, supercomputing

Recently, a research team at Osaka University has successfully demonstrated the generation of megatesla (MT)-order magnetic fields via three-dimensional particle simulations of laser-matter interaction. MT-order magnetic fields are 1-10 billion times stronger than geomagnetism (0.3-0.5 G), and such fields are expected to exist only in the close vicinity of celestial bodies such as neutron stars or black holes. This result should facilitate an ambitious experiment, now in progress, to achieve MT-order magnetic fields in the laboratory.

Scientists have long strived to achieve ever-higher magnetic fields in the laboratory; to date, the highest fields observed there are of kilotesla (kT) order. In 2020, Masakatsu Murakami at Osaka University proposed a novel scheme called microtube implosions (MTI) to generate ultrahigh magnetic fields of MT order. Irradiating a micron-sized hollow cylinder with ultraintense laser pulses generates hot electrons with velocities close to the speed of light. Those hot electrons launch a cylindrically symmetric implosion of the inner-wall ions toward the central axis. An applied pre-seeded magnetic field of kilotesla order, parallel to the central axis, bends the trajectories of ions and electrons in opposite directions because of the Lorentz force. Near the target axis, those bent trajectories of ions and electrons collectively form a strong spin current that generates MT-order magnetic fields.
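The opposite bending of ion and electron trajectories follows directly from the Lorentz force on a particle of charge q moving with velocity v in electric field E and magnetic field B:

```latex
\mathbf{F} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right)
```

Since ions and electrons carry charges of opposite sign, the v x B term from the pre-seeded axial field deflects them in opposite azimuthal directions, and those counter-rotating flows add up to the spin current described above.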

In this study, one of the researchers, Didar Shokov, conducted extensive three-dimensional simulations using the supercomputer OCTOPUS at Osaka University's Cybermedia Center. As a result, a distinct scaling law was found relating the performance of magnetic-field generation by MTI to external parameters such as applied laser intensity, laser energy, and target size.

Dec 8, 2021

Algorithm to increase the efficiency of quantum computers

Posted by in categories: information science, quantum physics, supercomputing

Quantum computers have the potential to solve important problems that are beyond reach even for the most powerful supercomputers, but they require an entirely new way of programming and creating algorithms.

Universities and major tech companies are spearheading research on how to develop these new algorithms. In a recent collaboration between the University of Helsinki, Aalto University, the University of Turku, and IBM Research Europe-Zurich, a team of researchers has developed a new method to speed up calculations on quantum computers. The results are published in PRX Quantum, a journal of the American Physical Society.

“Unlike classical computers, which use bits to store ones and zeros, information is stored in the qubits of a quantum processor in the form of a quantum state, or a wavefunction,” says postdoctoral researcher Guillermo García-Pérez from the Department of Physics at the University of Helsinki, first author of the paper.
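As a minimal illustration of that difference (not the paper's method), a single qubit's state can be written as two complex amplitudes whose squared magnitudes give the probabilities of reading 0 or 1:

```python
import math

# A single-qubit state |psi> = a|0> + b|1>, stored as two complex amplitudes.
# Here: the equal superposition produced by a Hadamard gate acting on |0>.
a = complex(1 / math.sqrt(2), 0.0)
b = complex(1 / math.sqrt(2), 0.0)

# Unlike a classical bit, the state is not 0 or 1 but a normalized vector:
norm = abs(a) ** 2 + abs(b) ** 2   # must equal 1

# Measurement probabilities follow the Born rule, |amplitude|^2:
p0 = abs(a) ** 2                   # probability of measuring 0
p1 = abs(b) ** 2                   # probability of measuring 1
```

An n-qubit processor generalizes this to 2^n amplitudes, which is what makes faithfully simulating or even reading out such states on classical hardware so costly, and why algorithm design for quantum processors differs so fundamentally.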

Dec 2, 2021

Microsoft’s Azure AI Supercomputer delivers record MLPerf benchmarks

Posted by in categories: robotics/AI, supercomputing

Recently, Microsoft Azure joined the Top 10 club of the TOP500 supercomputer rankings by delivering 30.05 petaflops. The result was achieved on Microsoft's recently announced Azure NDm A100 80GB v4 instances, available on demand. These Azure NDm A100 v4 instances are powered by NVIDIA GPU acceleration and NVIDIA InfiniBand networking.

Microsoft today highlighted the latest (December 2021) MLPerf 1.1 results in which Azure delivered the #2 performance overall and the #1 performance by a cloud provider.
