Archive for the ‘robotics/AI’ category: Page 284

Mar 19, 2024

Tesla Rolls Out Full Self-Driving Beta v12.3 Update

Posted by in category: robotics/AI

The Tesla v12 software update introduces what Musk has been calling “end-to-end neural nets.” The biggest difference from previous FSD updates is that the vehicle’s controls are now handled by neural nets rather than being coded by programmers.

Mar 19, 2024

Method identified to double computer processing speeds

Posted by in category: robotics/AI

Hung-Wei Tseng, a UC Riverside associate professor of electrical and computer engineering, has laid out a paradigm shift in computer architecture to do just that in a recent paper titled, “Simultaneous and Heterogeneous Multithreading.”

Tseng explained that today’s computer devices increasingly have graphics processing units (GPUs), hardware accelerators for artificial intelligence (AI) and machine learning (ML), or digital signal processing units as essential components. These components process information separately, moving information from one processing unit to the next, which in effect creates a bottleneck.

In their paper, Tseng and UCR computer science graduate student Kuan-Chieh Hsu introduce what they call “simultaneous and heterogeneous multithreading” or SHMT. They describe their development of a proposed SHMT framework on an embedded system platform that simultaneously uses a multi-core ARM processor, an NVIDIA GPU, and a Tensor Processing Unit hardware accelerator.
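The core idea described above — dispatching parts of a workload to different kinds of processors at the same time, instead of pipelining data serially from one unit to the next — can be sketched in a few lines. This is an illustrative toy, not the authors’ SHMT framework: the three functions are stand-ins for the ARM cores, GPU, and TPU, and threads stand in for real heterogeneous dispatch.

```python
# Toy sketch of SHMT-style dispatch (not the paper's implementation):
# split the data and let heterogeneous "units" work on their slices
# concurrently, avoiding the serial hand-off bottleneck.
from concurrent.futures import ThreadPoolExecutor

def cpu_part(xs):   # stand-in for the multi-core ARM processor
    return [x * 2 for x in xs]

def gpu_part(xs):   # stand-in for the NVIDIA GPU
    return [x ** 2 for x in xs]

def tpu_part(xs):   # stand-in for the TPU hardware accelerator
    return [x + 1 for x in xs]

def shmt_style(xs):
    # Each unit processes its own slice of the input simultaneously.
    third = len(xs) // 3
    slices = [xs[:third], xs[third:2 * third], xs[2 * third:]]
    units = [cpu_part, gpu_part, tpu_part]
    with ThreadPoolExecutor(max_workers=3) as pool:
        results = pool.map(lambda pair: pair[0](pair[1]), zip(units, slices))
    return [y for chunk in results for y in chunk]

print(shmt_style(list(range(6))))  # → [0, 2, 4, 9, 5, 6]
```

The speedup in the real framework comes from the units genuinely running in parallel on independent hardware; the slicing strategy and how results are recombined are where the research complexity lives.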

Mar 19, 2024

Samsung Creates Lab to Research Chips for AI’s Next Phase

Posted by in category: robotics/AI

Samsung Electronics Co. has set up a research lab dedicated to designing an entirely new type of semiconductor needed for artificial general intelligence, a long-standing aspiration in AI development.

Mar 19, 2024

Elon Musk Releases the AI Model Behind Grok, a Competitor to OpenAI’s ChatGPT

Posted by in categories: Elon Musk, robotics/AI

Key Takeaways

  • Elon Musk’s xAI rolled out its ChatGPT competitor, Grok, in December.

  • On Sunday, xAI publicly released the AI model behind Grok.

  • Musk is suing OpenAI and its co-founders Sam Altman and Greg Brockman to make the company’s research and technology publicly available.

Mar 19, 2024

Tesla Summon to get major improvements, Autopark gets a new name

Posted by in categories: Elon Musk, robotics/AI, transportation

Tesla Summon and Autopark are set to gain major improvements next month, according to company CEO Elon Musk. Autopark is also getting a new name, Musk said, as it appears to be on its way to being called “Banish.”

After Musk stated earlier this month that Tesla would have some “really cool stuff coming this month and next,” owners and fans of the company were left to imagine what could be coming.

While many owners have wished for improvements to features like the Auto Wipers, Tesla has been working behind the scenes to improve some of its semi-autonomous driving features and certain parts of Enhanced Autopilot, including Summon and Autopark.

Mar 19, 2024

What makes Black Holes Grow and New Stars Form? Machine Learning helps Solve the Mystery

Posted by in categories: cosmology, robotics/AI

It takes more than a galaxy merger to make a black hole grow and new stars form: machine learning shows that cold gas is also needed to initiate rapid growth, new research finds.

When they are active, supermassive black holes play a crucial role in the way galaxies evolve. Until now, growth was thought to be triggered by the violent collision of two galaxies followed by their merger. However, new research led by the University of Bath suggests galaxy mergers alone are not enough to fuel a black hole; a reservoir of cold gas at the centre of the host galaxy is needed too.

The new study, published this week in the journal Monthly Notices of the Royal Astronomical Society, is believed to be the first to use machine learning to classify galaxy mergers with the specific aim of exploring the relationship between galaxy mergers, supermassive black-hole accretion and star formation. Until now, mergers were classified (often incorrectly) through human observation alone.

Mar 19, 2024

Nvidia adds generative AI to power humanoid robots

Posted by in category: robotics/AI

Nvidia on Monday announced a hardware and software platform for building human-like robots that includes generative artificial intelligence features.

Mar 19, 2024

‘We Created a Processor for the Generative AI Era,’ NVIDIA CEO Says

Posted by in category: robotics/AI

Nvidia’s Blackwell isn’t taking any prisoners.

A monster of a chip that combines two dies.


Mar 19, 2024

Jensen Huang unveils new Nvidia super-chip before robots come onstage: ‘Everything that moves in the future will be robotic’

Posted by in categories: futurism, robotics/AI

Nvidia, the $2 trillion AI giant, is moving to lap the market once again.

Mar 19, 2024

Natural language instructions induce compositional generalization in networks of neurons

Posted by in categories: biological, robotics/AI

In this study, we use the latest advances in natural language processing to build tractable models of the ability to interpret instructions to guide actions in novel settings and the ability to produce a description of a task once it has been learned. RNNs can learn to perform a set of psychophysical tasks simultaneously, using a pretrained language transformer to embed a natural language instruction for the current task. Our best-performing models can leverage these embeddings to perform a brand-new task with an average performance of 83% correct. Instructed models that generalize performance do so by leveraging the shared compositional structure of instruction embeddings and task representations, such that an inference about the relations between practiced and novel instructions leads to a good inference about what sensorimotor transformation is required for the unseen task. Finally, we show a network can invert this information and provide a linguistic description for a task based only on the sensorimotor contingency it observes.
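The basic architecture the abstract describes — a recurrent network whose dynamics are conditioned on a fixed language-model embedding of the task instruction — can be sketched compactly. This is an illustrative stand-in, not the authors’ code: the weights are random, and the instruction embedding is a random vector standing in for a pretrained transformer’s output.

```python
# Illustrative sketch of an instruction-conditioned RNN (not the paper's
# implementation): the instruction embedding is injected at every timestep,
# so the same recurrent weights can implement different tasks.
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM, HIDDEN_DIM, INPUT_DIM, OUTPUT_DIM = 8, 16, 4, 2

# Random weights stand in for trained parameters.
W_in = rng.standard_normal((HIDDEN_DIM, INPUT_DIM)) * 0.1
W_rec = rng.standard_normal((HIDDEN_DIM, HIDDEN_DIM)) * 0.1
W_embed = rng.standard_normal((HIDDEN_DIM, EMBED_DIM)) * 0.1
W_out = rng.standard_normal((OUTPUT_DIM, HIDDEN_DIM)) * 0.1

def run_instructed_rnn(instruction_embedding, stimuli):
    """Roll the RNN over one trial; the instruction embedding biases the
    hidden state at every step, selecting which task the network performs."""
    h = np.zeros(HIDDEN_DIM)
    for x in stimuli:
        h = np.tanh(W_rec @ h + W_in @ x + W_embed @ instruction_embedding)
    return W_out @ h  # readout, e.g. a motor response

instruction = rng.standard_normal(EMBED_DIM)   # stand-in for a transformer embedding
trial = rng.standard_normal((5, INPUT_DIM))    # 5 timesteps of sensory input
response = run_instructed_rnn(instruction, trial)
print(response.shape)  # (2,)
```

Generalization to an unseen task then amounts to feeding in a novel instruction embedding: if the embedding space shares compositional structure with the task representations, the same trained weights produce the appropriate sensorimotor transformation.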

Our models make several predictions for what neural representations to expect in brain areas that integrate linguistic information in order to exert control over sensorimotor areas. Firstly, the CCGP analysis of our model hierarchy suggests that when humans must generalize across (or switch between) a set of related tasks based on instructions, the neural geometry observed among sensorimotor mappings should also be present in semantic representations of instructions. This prediction is well grounded in the existing experimental literature, where multiple studies have observed that the type of abstract structure we find in our sensorimotor-RNNs also exists in sensorimotor areas of biological brains3,36,37. Our models theorize that the emergence of an equivalent task-related structure in language areas is essential to instructed action in humans. One intriguing candidate for an area that may support such representations is the language-selective subregion of the left inferior frontal gyrus. This area is sensitive to both lexico-semantic and syntactic aspects of sentence comprehension, is implicated in tasks that require semantic control, and lies anatomically adjacent to another functional subregion of the left inferior frontal gyrus, which is implicated in flexible cognition38,39,40,41. We also predict that individual units involved in implementing sensorimotor mappings should modulate their tuning properties on a trial-by-trial basis according to the semantics of the input instructions, and that failure to modulate tuning in the expected way should lead to poor generalization. This prediction may be especially useful to interpret multiunit recordings in humans. Finally, given that grounding linguistic knowledge in the sensorimotor demands of the task set improved performance across models (Fig. 2e), we predict that during learning the highest level of the language processing hierarchy should likewise be shaped by the embodied processes that accompany linguistic inputs, for example, motor planning or affordance evaluation42.

One notable negative result of our study is the relatively poor generalization performance of GPTNET (XL), which used at least an order of magnitude more parameters than other models. This is particularly striking given that activity in these models is predictive of many behavioral and neural signatures of human language processing10,11. Given this, future imaging studies may be guided by the representations in both autoregressive models and our best-performing models to delineate a full gradient of brain areas involved in each stage of instruction following, from low-level next-word prediction to higher-level structured-sentence representations to the sensorimotor control that language informs.
Page 284 of 2,432