Jul 27, 2024
Tesla FSD V12.5 Rollout: Significant Improvement and Potential for Tesla-Owned Fleet
Posted by Chris Smedley in categories: robotics/AI, transportation
Brighter with Herbert.
Quantitative phase imaging (QPI) is a powerful technique that reveals variations in optical path length caused by weakly scattering samples, enabling high-contrast images of transparent specimens. Traditional 3D QPI methods, while effective, are limited by the need for multiple illumination angles and extensive digital post-processing for 3D image reconstruction, which can be time-consuming and computationally intensive.
In this innovative study, the research team developed a wavelength-multiplexed diffractive optical processor capable of all-optically transforming phase distributions of multiple 2D objects at various axial positions into intensity patterns, each encoded at a unique wavelength channel.
This design allows for the capture of quantitative phase images of input objects located at different axial planes using an intensity-only image sensor, eliminating the need for digital phase recovery algorithms.
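To give a sense of why an intensity-only sensor can still capture phase information once the light has been transformed, here is a minimal numerical sketch using standard angular-spectrum propagation in NumPy. It is not the team's diffractive processor design, and the grid size, wavelength, and propagation distance are illustrative assumptions: a phase-only object produces no intensity contrast at the object plane, but after a short free-space propagation its phase structure shows up as measurable intensity variation.

```python
# Minimal sketch (assumed parameters, not the study's diffractive design):
# a thin phase-only object is propagated a short distance with the
# angular-spectrum method, turning invisible phase variations into
# intensity contrast that an ordinary image sensor can record.
import numpy as np

N = 512                 # grid size in pixels (assumed)
dx = 1e-6               # pixel pitch: 1 micron (assumed)
wavelength = 532e-9     # illumination wavelength: 532 nm (assumed)
z = 200e-6              # propagation distance: 200 microns (assumed)

# Phase-only object: a transparent disk that delays the wavefront by 1 radian
y, x = np.indices((N, N)) - N // 2
phase = np.where(x**2 + y**2 < 40**2, 1.0, 0.0)
field = np.exp(1j * phase)                     # unit amplitude everywhere

# At the object plane the intensity is flat, so the object is invisible
print("intensity contrast at object plane:", np.ptp(np.abs(field)**2))

# Free-space propagation by the angular-spectrum method
fx = np.fft.fftfreq(N, d=dx)
FX, FY = np.meshgrid(fx, fx)
k = 2 * np.pi / wavelength
kz = np.sqrt(np.maximum(k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2, 0.0))
H = np.exp(1j * kz * z)                        # free-space transfer function
field_z = np.fft.ifft2(np.fft.fft2(field) * H)

# After propagation, the phase structure appears as intensity contrast
print("intensity contrast after propagation:", np.ptp(np.abs(field_z)**2))
```

The diffractive processor described in the study engineers this kind of phase-to-intensity conversion all-optically and assigns each axial plane to its own wavelength channel, which is what removes the need for digital phase recovery.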
Modern computer chips can have features built on a nanometer scale. Until now it has been possible to form such small structures only on top of a silicon wafer, but a new technique can now create nanoscale features in a layer below the surface. The approach has promising applications in both photonics and electronics, say its inventors, and could one day enable the fabrication of 3D structures throughout the bulk of the wafer.
The technique relies on the fact that silicon is transparent to certain wavelengths of light. This means the right kind of laser can travel through the surface of the wafer and interact with the silicon below. But designing a laser that can pass through the surface without causing damage and still carry out precise nanoscale fabrication below is not simple.
Researchers from Bilkent University in Ankara, Türkiye, achieved this by using spatial light modulation to create a needlelike laser beam that gave them greater control over where the beam’s energy was deposited. By exploiting physical interactions between the laser light and the silicon, they were able to fabricate lines and planes with different optical properties that could be combined to create nanophotonic elements below the surface.
Beyond the hype surrounding artificial intelligence (AI) in the enterprise lies the next step—artificial consciousness. The first piece in this practical AI innovation series outlined the requirements for this technology, which delved deeply into compute power—the core capability necessary to enable artificial consciousness. This piece looks at the control and storage technologies and requirements that are not only necessary for enterprise AI deployment but also essential to achieve the state of artificial consciousness.
Controlling unprecedented compute power
While artificial consciousness is impossible without a dramatic rise in compute capacity, that is only part of the challenge. Organizations must harness that compute power with the proper control plane nodes—the familiar backbone of the high availability server clusters necessary to deliver that power. This is essential for managing and orchestrating complex computing environments efficiently.
Machine learning influences numerous aspects of modern society, powers new technologies from AlphaGo to ChatGPT, and increasingly materializes in consumer products such as smartphones and self-driving cars. Despite the vital role and broad applications of artificial neural networks, we lack systematic approaches, such as network science, to understand their underlying mechanisms. The difficulty is rooted in the many possible model configurations, each with different hyper-parameters and weighted architectures determined by noisy data. We bridge the gap by developing a mathematical framework that maps a neural network's performance to the network characteristics of the line graph governed by the edge dynamics of stochastic gradient descent differential equations. This framework enables us to derive a neural capacitance metric that universally captures a model's generalization capability on a downstream task and predicts model performance using only early training results. Numerical results on 17 pre-trained ImageNet models across five benchmark datasets and one NAS benchmark indicate that our neural capacitance metric is a powerful indicator for model selection based only on early training results and is more efficient than state-of-the-art methods.
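As a rough illustration of the workflow this enables (choosing among candidate pre-trained models from only a few early training epochs), the sketch below ranks hypothetical candidates by extrapolating a simple saturating curve fitted to their first few validation-accuracy readings. The scoring function, model names, and numbers are illustrative stand-ins, not the paper's neural capacitance metric.

```python
# Sketch of early-training model selection with a stand-in score
# (NOT the paper's neural capacitance metric): fit a saturating curve
# to the first few epochs and extrapolate it as a crude estimate.
import numpy as np
from scipy.optimize import curve_fit

def predicted_final_accuracy(early_val_acc, horizon=50):
    """Fit a - b*exp(-c*t) to early validation accuracy and extrapolate."""
    t = np.arange(1, len(early_val_acc) + 1, dtype=float)
    curve = lambda t, a, b, c: a - b * np.exp(-c * t)
    p0 = (max(early_val_acc), 0.5, 0.1)            # rough starting guess
    params, _ = curve_fit(curve, t, early_val_acc, p0=p0, maxfev=10000)
    return curve(horizon, *params)

# Hypothetical early results (validation accuracy, epochs 1-5) for three
# fine-tuned candidate backbones; the numbers are made up for illustration
candidates = {
    "resnet50":     [0.42, 0.55, 0.61, 0.64, 0.66],
    "efficientnet": [0.38, 0.52, 0.60, 0.65, 0.68],
    "mobilenet_v3": [0.45, 0.53, 0.57, 0.59, 0.60],
}

scores = {name: predicted_final_accuracy(acc) for name, acc in candidates.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: predicted accuracy ~ {score:.3f}")
print("selected model:", max(scores, key=scores.get))
```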
Tesla CEO Elon Musk confirmed a number of key details about the Robotaxi during the company’s Q2 2024 Update Letter and earnings call. These include the site of the Robotaxi’s production, as well as the manufacturing process that would be used on the vehicle.
It would not be an exaggeration to state that the Robotaxi unveiling on October 10, 2024, is poised to be Tesla's most important event this year. And considering Elon Musk's noticeable focus on Full Self-Driving (FSD), it was no surprise that several questions during the Q2 2024 earnings call focused on the Robotaxi.
As per Tesla’s Q2 2024 Update Letter, its plans for new vehicles, including more affordable models, are still on track for the start of production in the first half of 2025. These vehicles will utilize aspects of its next-generation and current platforms, and they could be produced on the same manufacturing lines as the company’s current vehicle line-up. As for the Robotaxi, however, Tesla was clear.
“The technological focus is on significant increases in range through advances in energy density and the reduction of charging times,” Mercedes explained, noting that the partnership cements a reliable EV battery cell supply chain while providing financial support for Farasis to build a factory in Germany.
How Sustainable Is A Million-Mile EV Battery?
Today’s athletes are always on the lookout for new techniques and equipment to help them train more effectively. Modern coaches and sports trainers use intelligent data monitoring through videos and wearable sensors to help enhance athletic conditioning. However, traditional video analysis and wearable sensor technologies often fall short when tasked with producing a comprehensive picture of an athlete’s performance.
Researchers from Lyuliang University have developed a low-cost, flexible, and customizable sensor for badminton players that overcomes current monitoring constraints. The work is published in APL Materials.
Badminton is known for its many technical movements and the dynamic speed and precision required to play successfully. Monitoring the postures, footwork, arm swings, and muscle strength shown by badminton players is limited by video shooting angles and the discomfort of rigid wearable sensors.
Kawasaki performed the world’s first public demonstration of its hydrogen-fueled prototype motorcycle this past weekend in Japan.
Vayu is already seeing significant traction, with as many as 20 enterprises piloting its novel technology and over 100 on a waitlist.