An anonymous reader shared a report: Recent headlines have made it clear that when AI does a surprisingly good job at a human task, the task is often actually being performed by a human. When George Carlin's estate sued the producers of a podcast for using AI to create a stand-up routine in the late comedian's style, the podcasters claimed the script had actually been written by a human named Chad. (The two sides recently settled the lawsuit.) A company that makes AI-powered voice interfaces for fast-food drive-thrus says that only 30% of orders are completed without the help of humans reviewing the work. And Amazon plans to drop its automated “Just Walk Out” checkout system from new stores; the system relied on far more human verification than the company had hoped.
We've seen this before, though it may already be lost to Silicon Valley's pathologically short memory. Back in 2015, AI chatbots were gaining traction, and tech giants and startups alike promoted them as always-available, always-chipper, always-reliable assistants. One startup, x.ai, promoted an AI assistant that could read emails and schedule meetings. Another, GoButler, suggested its assistant could book a flight or order fries through a delivery app. Facebook also tested a do-it-all concierge service called “M,” which could answer seemingly any question, perform almost any task, and draw on demand. But in all of these services, the “AI assistant” was often just a human. In 2016, I wrote an article about this and interviewed the workers whose job was to hide behind the bots, making sure they didn't make mistakes or spout nonsense.