
When water vapour spontaneously condenses inside capillaries just 1 nm thick, it behaves according to the 150-year-old Kelvin equation – defying predictions that the theory breaks down at the atomic scale. Indeed, researchers at the University of Manchester have shown that the equation is valid even for capillaries that accommodate only a single layer of water molecules (Nature 588 250).

Condensation inside capillaries is ubiquitous, and many physical processes – including friction, stiction, lubrication and corrosion – are affected by it. The Kelvin equation relates the surface tension of water to its temperature and the diameter of its meniscus. It predicts that if the ambient humidity is between 30% and 50%, then flat capillaries less than 1.5 nm thick will spontaneously fill with water that condenses from the air.
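For a rough feel for these numbers, the back-of-the-envelope sketch below evaluates the textbook Kelvin radius for water at room temperature. The constants and the simple meniscus form are standard textbook values, not the paper's own calculation:

```python
import math

# Back-of-the-envelope Kelvin radius for water at ~20 °C;
# textbook constants, illustrative only.
GAMMA = 0.072   # surface tension of water, N/m
V_M = 1.8e-5    # molar volume of liquid water, m^3/mol
R = 8.314       # gas constant, J/(mol*K)
T = 293.0       # temperature, K

def kelvin_radius(relative_humidity: float) -> float:
    """Meniscus radius below which vapour condenses, in metres."""
    return 2 * GAMMA * V_M / (R * T * math.log(1.0 / relative_humidity))

for rh in (0.3, 0.4, 0.5):
    print(f"RH = {rh:.0%}: Kelvin radius ≈ {kelvin_radius(rh) * 1e9:.2f} nm")
```

At 30–50% humidity this gives roughly 0.9–1.5 nm, consistent with the sub-1.5 nm capillaries quoted above.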

Real-world capillaries can be even smaller, but for them it is impossible to define the curvature of a liquid’s meniscus, so the Kelvin equation should no longer hold. However, because such tight confinement is difficult to achieve in the laboratory, this prediction had never been tested.

Two research groups demonstrate quantum algorithms using neutral atoms as qubits. Tim Wogan reports.

The first quantum processors that use neutral atoms as qubits have been produced independently by two US-based groups. The result offers the possibility of building quantum computers that could be easier to scale up than current devices.

Two technologies have dominated quantum computing so far, but they are not without issues. Superconducting qubits must be constructed individually, making it nearly impossible to fabricate identical copies; this reduces the probability of an operation producing the correct output – a measure known as “gate fidelity”. Moreover, each qubit must be cooled close to absolute zero. Trapped ions, on the other hand, have the advantage that each ion is guaranteed to be indistinguishable by the laws of quantum mechanics. But while ions in a vacuum are relatively easy to isolate from thermal noise, they interact strongly and so require electric fields to move them around.
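To get a rough feel for why gate fidelity matters, consider a back-of-the-envelope estimate: if each gate succeeds with probability f, a circuit of n gates succeeds with probability of roughly f^n (ignoring error correction and the detailed structure of errors).

```python
# Back-of-the-envelope: probability that a circuit of n gates runs without
# error, if each gate independently succeeds with probability f.
for f in (0.99, 0.999):
    for n in (100, 1000):
        print(f"gate fidelity {f}, {n} gates -> ~{f**n:.2e} chance of a clean run")
```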

IBM is building accessible, scalable quantum computing by focusing on three pillars:

- Increasing qubit counts.
- Developing advanced quantum software that can abstract away infrastructure complexity and orchestrate quantum programs.
- Growing an ecosystem of quantum-ready enterprises, organizations, and communities.

The next step in IBM’s goal of building a frictionless development experience will be the release of Qiskit Runtime in 2022, which will allow developers to build workflows in the cloud, offering greater flexibility. Bringing a serverless approach to quantum computing will also provide the flexibility to distribute workloads intelligently and efficiently across quantum and classical systems.

To help speed the work of developers, IBM launched Qiskit Runtime primitives earlier this year. The primitives implement common quantum hardware queries used by algorithms to simplify quantum programming. In 2023, IBM plans to expand these primitives, as well as the capability to run on the next generation of parallelized quantum processors.
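As a flavour of the primitives programming model, here is a minimal sketch using Qiskit's local reference primitives, which expose the same Sampler interface that Qiskit Runtime serves against real hardware; exact signatures vary between Qiskit versions, so treat the snippet as illustrative rather than canonical.

```python
from qiskit import QuantumCircuit
from qiskit.primitives import Sampler  # local reference implementation

# Bell-state circuit: the usual "hello world" of quantum programs.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

# The Sampler primitive abstracts away transpilation and execution details
# and returns quasi-probabilities over measurement outcomes.
result = Sampler().run(qc).result()
print(result.quasi_dists[0])  # expect ~equal weight on outcomes 00 and 11
```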


🤖 Officially, they’re called “lethal autonomous weapons systems.” Colloquially, they’re called “killer robots.” Either way, you’re going to want to read about their future in warfare. 👇


The commander must also be prepared to justify his or her decision if and when the LAWS is wrong. As with the application of force by manned platforms, the commander assumes risk on behalf of his or her subordinates. In this case, a narrow, extensively tested algorithm with an extremely high level of certainty (for example, 99 percent or higher) should meet the threshold for a justified strike and absolve the commander of criminal accountability.

Lastly, LAWS must also be tested extensively in the most demanding possible training and exercise scenarios. The methods they use to make their lethal decisions—from identifying a target and confirming its identity to mitigating the risk of collateral damage—must be publicly released (along with statistics backing up their accuracy). Transparency is crucial to building public trust in LAWS, and confidence in their capabilities can only be built by proving their reliability through rigorous and extensive testing and analysis.

The decision to employ killer robots should not be feared, but it must be well thought-out and meticulously debated. While the future offers unprecedented opportunity, it also comes with unprecedented challenges for which the United States and its allies and partners must prepare.

DeepMind Researchers Develop ‘BYOL-Explore’, A Curiosity-Driven Exploration Algorithm That Harnesses The Power Of Self-Supervised Learning To Solve Sparse-Reward Partially-Observable Tasks


Reinforcement learning (RL) requires exploration of the environment. Exploration is even more critical when extrinsic incentives are few or difficult to obtain. In rich settings, the environment is so vast that visiting every state is impractical and only a fraction of exploration paths are helpful. Consequently, the question is: how can an agent decide which areas of the environment are worth exploring? Curiosity-driven exploration is a viable approach to this problem. It entails (i) learning a world model – a predictive model of specific knowledge about the world – and (ii) exploiting disparities between the world model’s predictions and experience to create intrinsic rewards.

An RL agent that maximizes these intrinsic incentives steers itself toward situations where the world model is unreliable or unsatisfactory, generating fresh experience for the world model to learn from. In other words, the characteristics of the world model shape the quality of the exploration policy, which in turn helps the world model by collecting new data. It can therefore be crucial to treat learning the world model and learning the exploratory policy as one cohesive problem rather than two separate tasks. With this in mind, DeepMind researchers introduced BYOL-Explore, a curiosity-driven exploration algorithm whose appeal stems from its conceptual simplicity, generality, and excellent performance.

The strategy is based on Bootstrap Your Own Latent (BYOL), a self-supervised latent-predictive method in which a network predicts an earlier version of its own latent representation. BYOL-Explore tackles building the world model’s representation and training the curiosity-driven policy with a single loss: it learns a world model with a self-supervised prediction loss, and trains the policy on that same loss. This bootstrapping approach has already proved successful in computer vision, graph representation learning, and representation learning in RL. BYOL-Explore goes one step further: it not only learns a flexible world model but also exploits the world model’s loss to motivate exploration.
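The core loop is easy to caricature. The sketch below is a generic curiosity-style reward, not DeepMind's released code – the network shapes, the plain MSE loss and the stop-gradient target are all assumptions for illustration – but it shows how one latent-prediction error can double as the world model's training loss and the policy's intrinsic reward:

```python
import torch
import torch.nn as nn

# Toy curiosity-style intrinsic reward: the world model's error when
# predicting the next latent state is the reward signal. Generic
# illustration only; sizes and the stop-gradient target are assumptions.
obs_dim, act_dim, latent_dim = 16, 4, 32

encoder = nn.Sequential(nn.Linear(obs_dim, latent_dim), nn.ReLU(),
                        nn.Linear(latent_dim, latent_dim))
# World model: predicts the next latent from the current latent + action.
world_model = nn.Linear(latent_dim + act_dim, latent_dim)

def intrinsic_reward(obs, action, next_obs):
    z = encoder(obs)
    with torch.no_grad():                     # target side (cf. BYOL's stop-gradient)
        z_next_target = encoder(next_obs)
    z_next_pred = world_model(torch.cat([z, action], dim=-1))
    # Per-transition prediction error serves as both the world model's
    # training loss and the policy's intrinsic reward.
    return ((z_next_pred - z_next_target) ** 2).mean(dim=-1)

obs = torch.randn(8, obs_dim)
act = torch.randn(8, act_dim)
print(intrinsic_reward(obs, act, torch.randn(8, obs_dim)))
```

In the full method the prediction target comes from a slowly updated copy of the encoder, as in BYOL itself, which stabilizes the bootstrapped representation.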

Microsoft-owned GitHub is launching its Copilot AI tool today, which helps suggest lines of code to developers inside their code editor. GitHub originally teamed up with OpenAI last year to launch a preview of Copilot, and it’s generally available to all developers today. Priced at US$10 per month or US$100 a year, GitHub Copilot is capable of suggesting the next line of code as developers type in an integrated development environment (IDE) like Visual Studio Code, Neovim, and JetBrains IDEs. Copilot can suggest complete methods and complex algorithms alongside boilerplate code and assistance with unit testing. More than 1.2 million developers signed up to use the GitHub Copilot preview over the past 12 months, and it will remain a free tool for verified students and maintainers of popular open-source projects. In files where it’s enabled, GitHub says nearly 40 percent of code is now being written by Copilot.

“Over the past year, we’ve continued to iterate and test workflows to help drive the ‘magic’ of Copilot,” Ryan J. Salva, VP of product at GitHub, told TechCrunch via email. “We not only used the preview to learn how people use GitHub Copilot but also to scale the service safely.”

“We specifically designed GitHub Copilot as an editor extension to make sure nothing gets in the way of what you’re doing,” GitHub CEO Thomas Dohmke says in a blog post. “GitHub Copilot distills the collective knowledge of the world’s developers into an editor extension that suggests code in real-time, to help you stay focused on what matters most: building great software.”

The talk is provided on a Free/Donation basis. If you would like to support my work then you can paypal me at this link:
https://paypal.me/wai69
Or to support me longer term Patreon me at: https://www.patreon.com/waihtsang.

Unfortunately my internet link went down in the second Q&A session at the end and the recording cut off. Shame, as loads of great information came out about FPGA/ASIC implementations, AI for VR/AR, C/C++ and a whole load of other riveting techie topics. But thankfully the main part of the talk was recorded.

TALK OVERVIEW
This talk is about the realization of the ideas behind the Fractal Brain theory and the unifying theory of life and intelligence discussed in the last Zoom talk, in the form of useful technology. The Startup at the End of Time will be the vehicle for the development and commercialization of a new generation of artificial intelligence (AI) and machine learning (ML) algorithms.

We will show in detail how the theoretical fractal brain/genome ideas lead to a whole new way of doing AI and ML that overcomes most of the central limitations of, and problems associated with, existing approaches. A compelling feature of this approach is that it is based on how neurons and brains actually work – unlike existing artificial neural networks, which, though making sensational headlines, are impeded by severe limitations and based on an out-of-date understanding of neurons from about 70 years ago. We hope to convince you that this new approach really is the path to true AI.

In the last Zoom talk, we discussed a great unifying of scientific ideas relating to life and brain/mind science through the application of the mathematical idea of symmetry. In turn, the same symmetry approach leads to a unifying of a mass of ideas relating to computer and information science. There’s been talk in recent years of a ‘master algorithm’ of machine learning and AI. We’ll explain that it goes far deeper than that, and show that the most important fundamental algorithms in use in the world today – those behind data compression, databases, search engines and existing AI/ML – can be unified into a single algorithm. Furthermore, and importantly, this algorithm is completely fractal, or scale-invariant: the same algorithm that performs all these functionalities can run on a microcontroller unit (MCU), a mobile phone, a laptop or a workstation, right up to a supercomputer.

The applications and utility of this new technology are endless. We will discuss the road map by which the sort of theoretical ideas I’ve been discussing in the Zoom, academic and public talks over the past few years, and which I’ve written about in the Fractal Brain Theory book, will become practical technology – and how the Java/C/C++ code running on my workstation and mobile phones will become products and services.

Algorithms, Shor’s Quantum Factoring Algorithm for breaking RSA Security, and the Future of Quantum Computing.

▬ In this video ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
I talk about my PhD research at MIT in Quantum Artificial Intelligence. I also explain the basic concepts of quantum computers, and why they are superior to conventional computers for specific tasks. Prof. Peter Shor, the inventor of Shor’s algorithm and one of the founding fathers of Quantum Computing, kindly agreed to participate in this video.
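As a taste of why Shor's algorithm threatens RSA: once the period r of f(x) = a^x mod N is known, the factors of N fall out classically via a gcd computation. The toy sketch below finds the period by brute force – exactly the step that Shor's quantum period-finding replaces for cryptographically large N:

```python
# Toy illustration of the classical post-processing in Shor's algorithm:
# given the period r of f(x) = a^x mod N, factor N with two gcds.
from math import gcd

N, a = 15, 7                      # small example: factor 15 with base 7

# Brute-force period finding -- the step a quantum computer does
# efficiently for large N.
r = 1
while pow(a, r, N) != 1:
    r += 1

assert r % 2 == 0                 # for a=7, N=15 the period is r=4
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print(f"period r = {r}, factors: {p} x {q}")   # prints 3 x 5
```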

▬ Follow me ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
LinkedIn: https://www.linkedin.com/in/samuel-bosch/
Instagram: https://www.instagram.com/samuel.bosch/

▬ Credits ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Some of the animations were taken from “Quanta Magazine” (Quantum Computers, Explained With Quantum Physics): https://www.youtube.com/channel/UCTpmmkp1E4nmZqWPS-dl5bg.

Other animations are from “Josh’s Channel” (How Quantum Computers Work): https://www.youtube.com/channel/UCnNEI3UdreSoQ6XUNcKoUeg.

The quantum circuit animations are from “Kurzgesagt – In a Nutshell” (Quantum Computers Explained – Limits of Human Technology): https://www.youtube.com/channel/UCsXVk37bltHxD1rDPwtNM8Q

In forthcoming years, everyone will get to observe how the metaverse evolves towards immersive experiences in hyperreal virtual environments filled with avatars that look and sound exactly like us. Neal Stephenson’s Snow Crash describes a vast world full of amusement parks, houses, entertainment complexes, and worlds within themselves, all connected by a virtual street tens of thousands of miles long. For those still unfamiliar with the metaverse, it is a virtual world in which users can put on virtual reality goggles and navigate a stylized version of themselves, known as an avatar, through virtual workplaces, entertainment venues and other activities. The metaverse will be an immersive version of the internet with interactive features built on technologies such as virtual reality (VR), augmented reality (AR), 3D graphics, 5G, holograms, NFTs, blockchain, haptic sensors, and artificial intelligence (AI). To scale personalized content experiences to billions of people, one potential answer is generative AI, the process of using AI algorithms on existing data to create new content.

In computing, procedural generation is a method of creating data algorithmically as opposed to manually, typically through a combination of human-generated assets and algorithms coupled with computer-generated randomness and processing power. In computer graphics, it is commonly used to create textures and 3D models.
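As a concrete miniature, here is one classic procedural-generation trick – 1-D midpoint displacement, commonly used for terrain height maps. The parameter values are purely illustrative:

```python
import random

# Minimal procedural generation: 1-D midpoint displacement. Each level
# inserts a randomly displaced midpoint between every pair of heights.
def midpoint_displacement(levels=4, roughness=0.5, seed=42):
    random.seed(seed)
    heights = [0.0, 0.0]                  # endpoints of the terrain
    amplitude = 1.0
    for _ in range(levels):
        nxt = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + random.uniform(-amplitude, amplitude)
            nxt += [a, mid]
        nxt.append(heights[-1])
        heights = nxt
        amplitude *= roughness            # finer detail at each level
    return heights

print([round(h, 2) for h in midpoint_displacement()])
```

Each level halves the feature size while shrinking the random amplitude, which is what gives the output its fractal, terrain-like character.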

Algorithmic difficulty is typically seen in Diablo-style RPGs and some roguelikes, which use instancing of in-game entities to create randomized items. Less frequently, it can be used to determine the relative difficulty of hand-designed content that is subsequently placed procedurally, as with the monster design in Unangband: the designer can rapidly create content while leaving it to the game to determine how challenging that content is to overcome, and consequently where in the procedurally generated environment it will appear. Notably, the Touhou series of bullet-hell shooters uses algorithmic difficulty. Though players can only choose among certain difficulty values, several community mods enable ramping the difficulty beyond the offered values.

For years, physicists have been making major advances and breakthroughs in the field using their minds as their primary tools. But what if artificial intelligence could help with these discoveries?

Last month, researchers at Duke University demonstrated that incorporating known physics into machine-learning algorithms can yield new insights into material properties, according to a press release from the institution. In a first-of-its-kind project, they constructed a machine-learning algorithm to deduce the properties of a class of engineered materials known as metamaterials and to determine how they interact with electromagnetic fields.
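The Duke model built Lorentz-oscillator physics directly into its network; the sketch below is only a generic illustration of the broader pattern – train on data while penalizing predictions that violate a known physical law (here a toy Hooke's-law relation, an assumption for illustration):

```python
import torch
import torch.nn as nn

# Generic "physics-informed" training sketch: fit noisy measurements while
# a second loss term penalizes violations of a known law (F = -k*x here).
torch.manual_seed(0)
k = 2.0
x = torch.linspace(-1, 1, 64).unsqueeze(1)
force = -k * x + 0.05 * torch.randn_like(x)      # noisy "measurements"

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(500):
    pred = model(x)
    data_loss = ((pred - force) ** 2).mean()
    physics_loss = ((pred + k * x) ** 2).mean()  # deviation from F = -k*x
    loss = data_loss + 0.5 * physics_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.4f}")
```

The physics term acts as a prior, steering the network toward physically plausible fits even where the data are sparse or noisy.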