ChatGPT Says: AI Will Change EVERYTHING

99% of the following speech was written by ChatGPT. I made a few changes here and there and cut and pasted a couple of paragraphs for better flow. This is the prompt with which I started the conversation:

Write a TED Talks style speech explaining how AI will be the next cross-platform operating system, entertainment service, and search engine as well as source of news and accurate information. Elaborate further in this speech about how this future AI could produce tailored entertainment experiences for the end-user. Explain its application in creating real-time, personally-tailored and novel media including mixed reality, virtual reality, extended reality, and augmented reality media as well as in written fiction and nonfiction, music, video and spoken-word entertainment for its end users. Write a strong and compelling opening paragraph to this speech and end it memorably. Add as much detail as you can on each point. The speech should last at least 15 minutes.

I used an online service called colossyan.com to produce the clips with metahumans. I used the Reface app to put my face on some of the metahumans, but it unfortunately degraded the video quality. I apologize for the blurriness.

AI Will Be a Public Good With Emad Mostaque | EP #16 Moonshots and Mindsets

In this episode, Emad and Peter discuss everything from AI-generated content and property rights to ethical implications, as well as the coming wave of hyper-disruptive technology across all industries.

Emad Mostaque is the CEO and Co-Founder of Stability AI, a company funding the development of open-source music- and image-generating systems such as Dance Diffusion and Stable Diffusion.

Learn about Stability AI: https://platform.stability.ai/

Access Stable Diffusion: https://github.com/CompVis/stable-diffusion.

This episode is brought to you by Levels: real-time feedback on how diet impacts your health. https://levels.link/peter.

Consider a journey to optimize your mind and body by visiting http://mylifeforce.com/peter.

Musicians Wage War Against Evil Robots

After the release of The Jazz Singer in 1927, all bets were off for live musicians who played in movie theaters. Thanks to synchronized sound, the use of live musicians was unnecessary — and perhaps a larger sin, old-fashioned. In 1930 the American Federation of Musicians formed a new organization called the Music Defense League and launched a scathing ad campaign to fight the advance of this terrible menace known as recorded sound.

The evil face of that campaign was the dastardly, maniacal robot. The Music Defense League spent over $500,000 running ads in newspapers throughout the United States and Canada. The ads pleaded with the public to demand that their music (be it in movie or stage theaters) be played by humans rather than by some cold, unseen machine. A typical ad read like this one from the September 2, 1930, Syracuse Herald in New York:

Tho’ the Robot can make no music of himself, he can and does arrest the efforts of those who can.

Part 1: Universal Media Synthesis, The Innovation Pyramid and Autolism

AI can now generate images and text as well as a human can. What happens when AI can generate every kind of media as well as a human can?

Remember, the future is unknowable. I do not know the future. I speculate on what might happen given a set of starting assumptions. I can speculate about what is possible, but make sure to separate speculation from fact. If you understand these premises, then let us speculate about the future of technology.

Special Thanks to the following individuals for creating such great background music:

https://freesound.org/people/Rorschakk/sounds/636989/

https://freesound.org/people/ShortRecord/sounds/544416/

CHIP Landmark Ideas: Ray Kurzweil

Rewriting Biology with Artificial Intelligence.

Ray Kurzweil.

Ray Kurzweil is one of the world’s leading inventors, thinkers, and futurists, with a thirty-year track record of accurate predictions. Called “the restless genius” by The Wall Street Journal and “the ultimate thinking machine” by Forbes magazine, he was selected as one of the top entrepreneurs by Inc. magazine, which described him as the “rightful heir to Thomas Edison.” PBS selected him as one of the “sixteen revolutionaries who made America.” Ray was the principal inventor of the first CCD flat-bed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition software. Ray received a Grammy Award for outstanding achievements in music technology; he is the recipient of the National Medal of Technology, was inducted into the National Inventors Hall of Fame, holds twenty-one honorary doctorates, and has received honors from three U.S. presidents. Ray has written five national best-selling books, including New York Times best sellers The Singularity Is Near (2005) and How To Create A Mind (2012). He is Co-Founder of Singularity Group and a Principal Researcher and AI Visionary at Google, looking at the long-term implications of technology and society.

The Computational Health Informatics Program (CHIP)

CHIP, founded in 1994, is a multidisciplinary applied research and education program at Boston Children’s Hospital. For more information, please visit our website www.chip.org.

The CHIP Landmark Ideas Series.

A brief history of diffusion, the tech at the heart of modern image-generating AI

Text-to-image AI exploded this year as technical advances greatly enhanced the fidelity of art that AI systems could create. Controversial as systems like Stable Diffusion and OpenAI’s DALL-E 2 are, platforms including DeviantArt and Canva have adopted them to power creative tools, personalize branding and even ideate new products.

But the tech at the heart of these systems is capable of far more than generating art. Called diffusion, it’s being used by some intrepid research groups to produce music, synthesize DNA sequences and even discover new drugs.

So what is diffusion, exactly, and why is it such a massive leap over the previous state of the art? As the year winds down, it’s worth taking a look at diffusion’s origins and how it advanced over time to become the influential force that it is today. Diffusion’s story isn’t over — refinements on the techniques arrive with each passing month — but the last year or two especially brought remarkable progress.
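
For readers who want the gist in concrete terms, here is a minimal sketch of the standard denoising-diffusion recipe (DDPM-style noise prediction). It is a generic textbook illustration, not code from Stable Diffusion, DALL-E 2, or any other system mentioned here, and `model` stands in for a hypothetical noise-prediction network:

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)                # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)   # how much clean signal survives by step t

def add_noise(x0, t, noise):
    # Forward process q(x_t | x_0): blend clean data with Gaussian noise.
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

def training_loss(model, x0):
    # Train the network to predict the noise that was injected (epsilon-prediction).
    t = torch.randint(0, T, (x0.shape[0],))
    noise = torch.randn_like(x0)
    x_t = add_noise(x0, t, noise)
    return torch.nn.functional.mse_loss(model(x_t, t), noise)
```

Sampling then runs the learned denoiser in reverse: start from pure Gaussian noise and remove a little of it at each of the T steps until a clean sample remains, whether that sample is an image or, with the right training data, another kind of signal.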

How an AI Stole $35 Million

Artificial intelligence has seen many advances recently, with new technologies like deepfakes, deep-voice synthesis, and GPT-3 completely changing how we see the world. These new technologies bring obvious benefits for workflow and entertainment, but when technology like this exists, there are those who will try to use it for evil. Today we will take a look at how AI is giving hackers and cybercriminals more ways to pull off heists, focusing on the story of a $35 million hack that was carried out using artificial intelligence and deep-voice software.

0:00 The History of Social Engineering.
1:12 Early Social Engineering Attacks.
5:02 How Hackers are using Artificial Intelligence.
7:37 The $35 Million Heist.

Join as a member to help support the channel and get perks!
https://www.youtube.com/channel/UCHoRvL_JN1_pqRmMlgfUVsQ/join.

Join the community discord:
https://discord.gg/hYNQN45MdN

All music from Epidemic Sound.

Prof. DAVID CHALMERS — Consciousness in LLMs [Special Edition]

Support us! https://www.patreon.com/mlst.

If you don’t like the background music, we published a version with it all removed here — https://anchor.fm/machinelearningstreettalk/episodes/Music-R…on-e1sf1l7

David Chalmers is a professor of philosophy and neural science at New York University, and an honorary professor of philosophy at the Australian National University. He is the co-director of the Center for Mind, Brain, and Consciousness, as well as the PhilPapers Foundation. His research focuses on the philosophy of mind, especially consciousness, and its connection to fields such as cognitive science, physics, and technology. He also investigates areas such as the philosophy of language, metaphysics, and epistemology. With his impressive breadth of knowledge and experience, David Chalmers is a leader in the philosophical community.

The central challenge for consciousness studies is to explain how something immaterial, subjective, and personal can arise out of something material, objective, and impersonal. This is illustrated by the example of a bat, whose sensory experience is much different from ours, making it difficult to imagine what it’s like to be one. Thomas Nagel’s “inconceivability argument” has its advantages and disadvantages, but ultimately it is impossible to solve the mind-body problem due to the subjective nature of experience. This is further explored by examining the concept of philosophical zombies, which are physically and behaviorally indistinguishable from conscious humans yet lack conscious experience. This has implications for the Hard Problem of Consciousness, which is the attempt to explain how mental states are linked to neurophysiological activity. The Chinese Room Argument is used as a thought experiment to explain why physicality may be insufficient to be the source of the subjective, coherent experience we call consciousness. Despite much debate, the Hard Problem of Consciousness remains unsolved. Chalmers has been working on a functional approach to decide whether large language models are, or could be conscious.

Filmed at #neurips22

Discord: https://discord.gg/aNPkGUQtc5

Dream Fusion A.I — Everyone Can Now Easily Make 3D Art With Text!

The Future is Upon Us.
DreamFusion: https://bit.ly/3UQWIjh.
Nvidia Get3D: https://bit.ly/3dU9KMa.
Dream Textures A.I: Seamless Texture Creator.

Explore Sketch to 3D [Monster Mash]: https://youtu.be/YLGWsMfAc50
See Nvidia Nerf: https://youtu.be/Sp03EJCTrTI

Join Weekly Newsletter: https://bit.ly/3lpfvSm.
Find BLENDER Addons: https://bit.ly/3jbu8s7
Learn to Animate in Blender: https://bit.ly/3A1NWac.
See FiberShop — Realtime Hair Tool: https://tinyurl.com/2hd2t5v.
GET Character Creator 4 — https://bit.ly/3b16Wcw.
GET AXYZ ANIMA: https://bit.ly/2GyXz73
GET ICLONE 8 — https://bit.ly/38QDfbb.
Check Out Unity3D Bundles: https://bit.ly/384jRuy.
████████████████████████████
DISCORD: https://discord.gg/G2kmTjUFGm.
Twitter: https://bit.ly/3a0tADG
Music Platform: https://tinyurl.com/v7r8tc6j.
Patreon: https://www.patreon.com/asknk.
████████████████████████████
GET HEADSHOT CC: https://bit.ly/2XpspUw.
GET SKIN GEN: https://bit.ly/2L8m3G2
ICLONE UNREAL LIVE LINK: https://bit.ly/3hXBD3N
GET ACTORCORE: https://bit.ly/3adV9XK

███ BLENDER ADDONS & TUTORIALS ███

Get Free 3D Content Here: https://tinyurl.com/bdh28tb5

#ai #b3d #update #art #asknk #blender3d #blender3dart #3dart #assets #free #free3d #3D

00:00 Intro.

Riffusion’s AI generates music from text using visual sonograms

On Thursday, a pair of tech hobbyists released Riffusion, an AI model that generates music from text prompts by creating a visual representation of sound and converting it to audio for playback. It uses a fine-tuned version of the Stable Diffusion 1.5 image synthesis model, applying visual latent diffusion to sound processing in a novel way.

Since a sonogram is a type of picture, Stable Diffusion can process it. Riffusion’s creators, Forsgren and Martiros, trained a custom Stable Diffusion model on example sonograms linked to descriptions of the sounds or musical genres they represented. With that knowledge, Riffusion can generate new music on the fly from text prompts that describe the type of music or sound you want to hear, such as “jazz,” “rock,” or even the sound of typing on a keyboard.
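
To make the idea concrete, here is a rough sketch of that sonogram-to-audio pipeline. The checkpoint id, the pixel-to-amplitude mapping, and the Griffin-Lim phase reconstruction below are assumptions for illustration; the real project ships its own conversion code, so treat this as the general shape of the approach rather than Riffusion’s actual implementation:

```python
import numpy as np
import torch
import torchaudio
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
# Assumed fine-tuned checkpoint id; swap in whatever sonogram-trained model you have.
pipe = StableDiffusionPipeline.from_pretrained("riffusion/riffusion-model-v1").to(device)

# 1. Generate a 512x512 sonogram image from a text prompt.
image = pipe("jazz piano, slow tempo", width=512, height=512).images[0]

# 2. Treat pixel brightness as log-scaled mel-spectrogram magnitude.
pixels = np.array(image.convert("L"), dtype=np.float32) / 255.0
pixels = np.ascontiguousarray(pixels[::-1])         # image rows run top-down; flip so low frequencies sit in row 0
mel = torch.from_numpy(np.expm1(pixels * 6.0))      # hypothetical amplitude scaling

# 3. Invert mel -> linear spectrogram, then Griffin-Lim to estimate phase and recover a waveform.
to_linear = torchaudio.transforms.InverseMelScale(n_stft=1025, n_mels=512, sample_rate=44100)
griffin_lim = torchaudio.transforms.GriffinLim(n_fft=2048, hop_length=512)
waveform = griffin_lim(to_linear(mel))

torchaudio.save("riffusion_sketch.wav", waveform.unsqueeze(0), 44100)
```

Because the sonogram is just an image, everything Stable Diffusion can already do with images (text conditioning, fine-tuning on a custom dataset) carries over to sound, which is the core of the trick described above.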