Archive for the ‘information science’ category: Page 131

Jun 28, 2022

Spotting Unfair or Unsafe AI using Graphical Criteria

Posted by in categories: education, information science, robotics/AI, transportation

How to use causal influence diagrams to recognize the hidden incentives that shape an AI agent’s behavior.


There is rightfully a lot of concern about the fairness and safety of advanced machine learning systems. To attack the root of the problem, researchers can analyze the incentives posed by a learning algorithm using causal influence diagrams (CIDs). Among others, DeepMind Safety Research has written about their research on CIDs, and I have written before about how they can be used to avoid reward tampering. However, while there is some writing on the types of incentives that can be found using CIDs, I haven’t seen a succinct write-up of the graphical criteria used to identify such incentives. To fill this gap, this post will summarize the incentive concepts and their corresponding graphical criteria, which were originally defined in the paper Agent Incentives: A Causal Perspective.

A causal influence diagram is a directed acyclic graph where different types of nodes represent different elements of an optimization problem. Decision nodes represent values that an agent can influence, utility nodes represent the optimization objective, and chance nodes represent the remaining variables, such as the state. The arrows show how the nodes are causally related, with dotted arrows indicating the information that an agent uses to make a decision. Below is the CID of a Markov Decision Process, with decision nodes in blue and utility nodes in yellow:
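As a toy illustration of the structure just described, a two-step MDP CID can be written down as plain data – decision, utility, and chance nodes, plus solid causal edges and dotted information edges – and checked for acyclicity. The node names here are illustrative, not taken from the paper:

```python
# A CID as plain data for a two-step MDP (illustrative naming):
# chance/state nodes S1, S2; decision nodes A1, A2; utility nodes R1, R2.
kinds = {"S1": "chance", "S2": "chance",
         "A1": "decision", "A2": "decision",
         "R1": "utility", "R2": "utility"}

# Solid causal edges: state and action determine the reward and the next state.
causal = [("S1", "S2"), ("A1", "S2"),
          ("S1", "R1"), ("A1", "R1"),
          ("S2", "R2"), ("A2", "R2")]
# Dotted information edges: the agent observes the current state before acting.
information = [("S1", "A1"), ("S2", "A2")]

def is_dag(edges):
    """Depth-first search for a back edge; a CID must be acyclic."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    state = {}  # node -> "visiting" | "done"
    def visit(node):
        if state.get(node) == "visiting":
            return False  # found a cycle
        if state.get(node) == "done":
            return True
        state[node] = "visiting"
        ok = all(visit(child) for child in graph.get(node, []))
        state[node] = "done"
        return ok
    return all(visit(n) for n in list(kinds))

assert is_dag(causal + information)  # the diagram above is a valid CID
```

The graphical criteria in the paper are then questions about paths in exactly this kind of graph, for example whether a node lies on a directed path from a decision to a utility node.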

Continue reading “Spotting Unfair or Unsafe AI using Graphical Criteria” »

Jun 28, 2022

In Its Greatest Biology Feat Yet, AI Unlocks the Complex Proteins Guarding Our DNA

Posted by in categories: biotech/medical, genetics, information science, robotics/AI, security

Yet when faced with enormous protein complexes, AI faltered. Until now. In a mind-bending feat, a new algorithm deciphered the structure at the heart of inheritance—a massive complex of roughly 1,000 proteins that helps channel DNA instructions to the rest of the cell. The AI model is built on AlphaFold by DeepMind and RoseTTAfold from Dr. David Baker’s lab at the University of Washington, both of which were released to the public for further experimentation.

Our genes are housed in a planet-like structure, dubbed the nucleus, for protection. The nucleus is a high-security castle: only specific molecules are allowed in and out to deliver DNA instructions to the outside world—for example, to protein-making factories in the cell that translate genetic instructions into proteins.

Continue reading “In Its Greatest Biology Feat Yet, AI Unlocks the Complex Proteins Guarding Our DNA” »

Jun 28, 2022

OpenAI’s New AI Learned to Play Minecraft

Posted by in categories: information science, robotics/AI

Never mind the cost of computing, OpenAI said the Upwork contractors alone cost $160,000. Though to be fair, manually labeling the whole data set would’ve run into the millions and taken considerable time to complete. And while the computing power wasn’t negligible, the model was actually quite small. VPT’s hundreds of millions of parameters are orders of magnitude fewer than GPT-3’s hundreds of billions.

Still, the drive to find clever new approaches that use less data and computing is valid. A kid can learn Minecraft basics by watching one or two videos. Today’s AI requires far more to learn even simple skills. Making AI more efficient is a big, worthy challenge.

In any case, OpenAI is in a sharing mood this time. The researchers say VPT isn’t without risk—they’ve strictly controlled access to algorithms like GPT-3 and DALL-E partly to limit misuse—but the risk is minimal for now. They’ve open sourced the data, environment, and algorithm and are partnering with MineRL. This year’s contestants are free to use, modify, and fine-tune the latest in Minecraft AI.

Jun 28, 2022

Spacecraft in ‘warp bubble’ could travel faster than light

Posted by in categories: cosmology, information science, space travel

Special relativity famously dictates that no known object can travel faster than the speed of light in a vacuum – making it unlikely that humans will ever send spacecraft to explore beyond our local area of the Milky Way. However, new research by Erik Lentz at the University of Göttingen suggests there could be a way beyond this limit. The only catch is that his scheme requires vast amounts of energy and so may never actually be able to propel a spacecraft (Class. Quant. Grav. 38 075015).

Lentz proposes that conventional energy sources could arrange the structure of space–time in the form of a soliton – a robust singular wave. This soliton would act like a “warp bubble”, contracting space in front of it and expanding space behind. Unlike objects within it, space–time itself can bend, expand or warp at any speed. A spacecraft contained in a hyperfast bubble could therefore arrive at its destination faster than light would in normal space without breaking any physical laws.

It had been thought that the only way to produce a warp drive was by generating vast amounts of negative energy – perhaps by using some sort of undiscovered exotic matter or by manipulating dark energy. To get around this problem, Lentz constructed an unexplored geometric structure of space–time to derive a new family of solutions to Einstein’s general relativity equations called positive-energy solitons. Though Lentz’s solitons appear to conform to Einstein’s general theory of relativity and remove the need to create negative energy, space agencies will not be building warp drives any time soon, if ever. Part of the reason is that Lentz’s positive-energy warp drive requires a huge amount of energy. According to Lentz, a 100 m radius spacecraft would require the energy equivalent to “hundreds of times the mass of Jupiter”.

Jun 28, 2022

Capillary condensation follows classical law even at the nanoscale

Posted by in categories: information science, law, nanotechnology

When water vapour spontaneously condenses inside capillaries just 1 nm thick, it behaves according to the 150-year-old Kelvin equation – defying predictions that the theory breaks down at the atomic scale. Indeed, researchers at the University of Manchester have shown that the equation is valid even for capillaries that accommodate only a single layer of water molecules (Nature 588 250).

Condensation inside capillaries is ubiquitous and many physical processes – including friction, stiction, lubrication and corrosion – are affected by it. The Kelvin equation relates the surface tension of water to its temperature and the diameter of its meniscus. It predicts that if the ambient humidity is between 30% and 50%, then flat capillaries less than 1.5 nm thick will spontaneously fill with water that condenses from the air.
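As a rough, order-of-magnitude sketch (not from the paper itself), that prediction can be reproduced by evaluating the Kelvin condition for a flat slit with standard room-temperature constants for water, assuming a cylindrical meniscus and zero contact angle:

```python
import math

# Kelvin-equation estimate of the largest flat slit that fills by capillary
# condensation. Constants for water at ~20 degrees C; zero contact angle assumed.
GAMMA = 0.072   # surface tension of water, N/m
V_M = 1.8e-5    # molar volume of water, m^3/mol
R = 8.314       # gas constant, J/(mol K)
T = 293.0       # temperature, K

def critical_slit_width(humidity):
    """Slit width (m) below which water condenses at the given relative humidity.

    For a flat slit the meniscus is cylindrical (one radius of curvature),
    so condensation occurs for widths below 2 * r_K, where the Kelvin radius is
    r_K = gamma * V_m / (R * T * ln(1 / RH)).
    """
    r_kelvin = GAMMA * V_M / (R * T * math.log(1.0 / humidity))
    return 2.0 * r_kelvin

# At 30-50% relative humidity the critical width comes out at roughly
# 0.9-1.5 nm, consistent with the figure quoted above.
for rh in (0.3, 0.5):
    print(f"RH {rh:.0%}: fills below {critical_slit_width(rh) * 1e9:.2f} nm")
```

The remarkable result of the Manchester experiments is that this continuum formula keeps working even when the "slit" holds a single molecular layer of water.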

Real-world capillaries can be even smaller, but for them it is impossible to define the curvature of a liquid’s meniscus, so the Kelvin equation should no longer hold. However, because such tight confinement is difficult to achieve in the laboratory, this prediction had yet to be tested.

Jun 28, 2022

Atomic quantum processors make their debut

Posted by in categories: computing, information science, particle physics, quantum physics

Two research groups demonstrate quantum algorithms using neutral atoms as qubits. Tim Wogan reports.

The first quantum processors that use neutral atoms as qubits have been produced independently by two US-based groups. The result offers the possibility of building quantum computers that could be easier to scale up than current devices.

Two technologies have dominated quantum computing so far, but they are not without issues. Superconducting qubits must be constructed individually, making it nearly impossible to fabricate identical copies, so the probability of the output being correct is reduced – lowering what is known as “gate fidelity”. Moreover, each qubit must be cooled close to absolute zero. Trapped ions, on the other hand, have the advantage that each ion is guaranteed to be indistinguishable by the laws of quantum mechanics. But while ions in a vacuum are relatively easy to isolate from thermal noise, they are strongly interacting and so require electric fields to move them around.

Jun 26, 2022

The Next Generation Of IBM Quantum Computers

Posted by in categories: computing, information science, quantum physics

IBM is building accessible, scalable quantum computing by focusing on three pillars:

· Increasing qubit counts.

· Developing advanced quantum software that can abstract away infrastructure complexity and orchestrate quantum programs.

Continue reading “The Next Generation Of IBM Quantum Computers” »

Jun 26, 2022

‘Killer robots’ are coming. Is the US ready for the consequences?

Posted by in categories: information science, military, robotics/AI

🤖 Officially, they’re called “lethal autonomous weapons systems.” Colloquially, they’re called “killer robots.” Either way you’re going to want to read about their future in warfare. 👇


The commander must also be prepared to justify his or her decision if and when the LAWS is wrong. As with the application of force by manned platforms, the commander assumes risk on behalf of his or her subordinates. In this case, a narrow, extensively tested algorithm with an extremely high level of certainty (for example, 99 percent or higher) should meet the threshold for a justified strike and absolve the commander of criminal accountability.

Lastly, LAWS must also be tested extensively in the most demanding possible training and exercise scenarios. The methods they use to make their lethal decisions—from identifying a target and confirming its identity to mitigating the risk of collateral damage—must be publicly released (along with statistics backing up their accuracy). Transparency is crucial to building public trust in LAWS, and confidence in their capabilities can only be built by proving their reliability through rigorous and extensive testing and analysis.

Continue reading “‘Killer robots’ are coming. Is the US ready for the consequences?” »

Jun 24, 2022

DeepMind Researchers Develop ‘BYOL-Explore’: A Curiosity-Driven Exploration Algorithm That Harnesses The Power Of Self-Supervised Learning To Solve Sparse-Reward Partially-Observable Tasks

Posted by in categories: information science, policy, robotics/AI

Reinforcement learning (RL) requires exploration of the environment, and exploration is even more critical when extrinsic rewards are sparse or difficult to obtain. In rich settings the environment is so large that visiting every location is impractical, so the question becomes: how can an agent decide which areas of the environment are worth exploring? Curiosity-driven exploration is a viable approach to this problem. It entails (i) learning a world model, a predictive model of specific knowledge about the world, and (ii) exploiting disparities between the world model’s predictions and experience to create intrinsic rewards.

An RL agent that maximizes these intrinsic rewards steers itself toward situations where the world model is unreliable or unsatisfactory, thereby generating fresh data for the world model to learn from. In other words, the quality of the exploration policy is shaped by the world model, which in turn improves as the policy collects new data. Therefore, it might be crucial to approach learning the world model and learning the exploratory policy as one cohesive problem rather than two separate tasks. With this in mind, DeepMind researchers introduced BYOL-Explore, a curiosity-driven exploration algorithm. Its attraction stems from its conceptual simplicity, generality, and excellent performance.
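To make the intrinsic-reward loop concrete, here is a deliberately simplified toy in which the world model is a linear predictor and the intrinsic reward is its squared prediction error. BYOL-Explore itself computes this error in a learned latent space using a BYOL-style target network, so this sketch illustrates only the general recipe:

```python
import numpy as np

# Toy curiosity loop: a world model predicts the next observation, and its
# squared prediction error is paid out as intrinsic reward. As the model
# learns a transition, the reward for revisiting it decays toward zero.
rng = np.random.default_rng(0)

class LinearWorldModel:
    """Predicts the next observation from (obs, action) with a linear map."""
    def __init__(self, obs_dim, act_dim, lr=0.01):
        self.W = np.zeros((obs_dim, obs_dim + act_dim))
        self.lr = lr

    def predict(self, obs, action):
        return self.W @ np.concatenate([obs, action])

    def update(self, obs, action, next_obs):
        """One SGD step on the prediction error; returns the intrinsic reward."""
        x = np.concatenate([obs, action])
        error = self.predict(obs, action) - next_obs
        self.W -= self.lr * np.outer(error, x)  # gradient of 0.5 * ||error||^2
        return float(error @ error)             # intrinsic reward

# Repeating one fixed, learnable transition: curiosity fades with familiarity.
model = LinearWorldModel(obs_dim=4, act_dim=2)
obs, act = rng.normal(size=4), rng.normal(size=2)
next_obs = obs + 0.1 * act.sum()
rewards = [model.update(obs, act, next_obs) for _ in range(200)]
assert rewards[-1] < rewards[0]  # the visited transition stops being novel
```

An agent maximizing this reward is pushed toward transitions the model still predicts badly, which is precisely the coupling between world model and exploration policy described above.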

Continue reading “DeepMind Researchers Develop ‘BYOL-Explore’: A Curiosity-Driven Exploration Algorithm That Harnesses The Power Of Self-Supervised Learning To Solve Sparse-Reward Partially-Observable Tasks” »

Jun 23, 2022

Github’s AI-Powered Copilot Can Make Developers’ Job Way Easier

Posted by in categories: information science, robotics/AI

Microsoft-owned GitHub is launching its Copilot AI tool today, which helps suggest lines of code to developers inside their code editor. GitHub originally teamed up with OpenAI last year to launch a preview of Copilot, and it’s generally available to all developers today. Priced at US$10 per month or US$100 a year, GitHub Copilot is capable of suggesting the next line of code as developers type in an integrated development environment (IDE) like Visual Studio Code, Neovim, and JetBrains IDEs. Copilot can suggest complete methods and complex algorithms alongside boilerplate code and assistance with unit testing. More than 1.2 million developers signed up to use the GitHub Copilot preview over the past 12 months, and it will remain a free tool for verified students and maintainers of popular open-source projects. In files where it’s enabled, GitHub says nearly 40 percent of code is now being written by Copilot.

“Over the past year, we’ve continued to iterate and test workflows to help drive the ‘magic’ of Copilot,” Ryan J. Salva, VP of product at GitHub, told TechCrunch via email. “We not only used the preview to learn how people use GitHub Copilot but also to scale the service safely.”

“We specifically designed GitHub Copilot as an editor extension to make sure nothing gets in the way of what you’re doing,” GitHub CEO Thomas Dohmke says in a blog post. “GitHub Copilot distills the collective knowledge of the world’s developers into an editor extension that suggests code in real-time, to help you stay focused on what matters most: building great software.”