GPT-4 is here, and you've probably heard a good bit about it already. It's a smarter, faster, more powerful engine for AI programs such as ChatGPT. It can turn a hand-sketched design into a functional website and help with your taxes. It got a 5 on the AP Art History test. There were already fears about AI coming for white-collar work, disrupting education, and so much else, and there was some healthy skepticism about those fears. So where does a more powerful AI leave us?
Perhaps overwhelmed or even tired, depending on your leanings. I feel both at once. It's hard to argue that new large language models, or LLMs, aren't a genuine engineering feat, and it's exciting to experience advancements that feel magical, even if they're just computational. But nonstop hype around a technology that is still nascent risks grinding people down, because being constantly bombarded by promises of a future that will look very little like the past is both exhausting and unnerving. Any announcement of a technological achievement at the scale of OpenAI's newest model inevitably sidesteps crucial questions, ones that simply don't fit neatly into a demo video or blog post. What does the world look like when GPT-4 and similar models are embedded into everyday life? And how are we supposed to conceptualize these technologies at all when we're still grappling with their still quite novel, but certainly less powerful, predecessors, including ChatGPT?
Over the past few weeks, I've put questions like these to AI researchers, academics, entrepreneurs, and people who are currently building AI applications. I've become obsessive about trying to wrap my head around this moment, because I've rarely felt less oriented toward a piece of technology than I do toward generative AI. When reading headlines and academic papers or simply stumbling into discussions between researchers or boosters on Twitter, even the near future of an AI-infused world feels like a mirage or an optical illusion. Conversations about AI quickly veer into unfocused territory and become kaleidoscopic, broad, and vague. How could they not?