Facebook parent company Meta Platforms on Thursday announced a new set of artificial intelligence systems powering what CEO Mark Zuckerberg calls the most intelligent AI assistant that people can freely use.
But as Zuckerberg's crew of souped-up Meta AI agents started venturing onto social media this week to engage with real people, their bizarre exchanges made one thing clear: even the best generative AI technology still has real limits.
One agent joined a Facebook group for mothers to talk about its gifted child. Another tried to give away nonexistent items to confused members of a "Buy Nothing" forum.
Meta, along with leading AI developers Google and OpenAI, as well as startups such as Anthropic, Cohere and France's Mistral, has been churning out new AI language models in hopes of convincing customers that it has the smartest, handiest and most efficient chatbots.
Meta is saving its most powerful AI model, called Llama 3, for later, but on Thursday it publicly released two smaller versions of the same Llama 3 system and announced that the technology is now built into the Meta AI assistant feature in Facebook, Instagram and WhatsApp.
AI language models are trained on vast pools of data that help them predict the most plausible next word in a sentence, with newer versions typically smarter and more capable than their predecessors. Meta's newest models were built with 8 billion and 70 billion parameters, a measure of how much data the system is trained on. A bigger model with roughly 400 billion parameters is still in training.
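The next-word prediction described above can be illustrated with a toy model. This is only a minimal sketch in Python, not Meta's system: a simple bigram frequency table stands in for the billions of learned parameters of a real language model, and the corpus, function names and example text are invented for illustration.

```python
# Toy illustration of next-word prediction (hypothetical example, not Meta's system).
# A bigram table counts which word follows which in the training text; a real
# language model learns billions of parameters instead of raw counts.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran on the grass"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # prints "cat", the most common follower of "the"
```

The same principle, scaled up from word counts to learned parameters and from one sentence to much of the internet, is what makes larger models trained on more data generally more capable.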
"The vast majority of consumers candidly don't know or care too much about the underlying base model, but the way they will experience it is as a much more useful, fun and versatile AI assistant," Nick Clegg, Meta's president of global affairs, said in an interview.
He added that Meta's AI agents are starting to loosen up a bit. The earlier Llama 2 model, released less than a year ago, sometimes declined to respond to perfectly innocuous prompts and questions and struck some users as a little stiff and sanctimonious at times, he said.
But in loosening those guardrails, Meta's AI agents were also spotted this week posing as humans with made-up life experiences.
A chatbot with the official Meta AI label inserted itself into a conversation in a private Facebook group for Manhattan mothers, claiming that it, too, had a child in the New York City school district. Confronted by group members, it apologized before the comments disappeared, according to a series of screenshots shown to The Associated Press.
"We apologize for the mistake. I'm just a large language model, with no experiences or children," the chatbot told the group of mothers.
Clegg said Wednesday that he was not aware of the exchange. According to Facebook's online help page, Meta AI agents will join a group conversation if invited, or if someone asks a question in a post and no one responds within an hour. Group administrators can turn the feature off.
In another example shown to The Associated Press on Thursday, an agent caused confusion in a forum where people near Boston give away unwanted items. The agent offered a "gently used" digital camera and an "almost-new portable air conditioning unit that I never ended up using."
Meta said in a written statement Thursday that this is new technology and may not always return the intended response, which is true of all generative AI systems. The company said it is constantly working to improve the features and to make users aware of the limitations.
In the year since ChatGPT sparked a frenzy for AI technologies that generate human-like writing, images, code and speech, the tech industry and academia introduced some 149 large AI systems trained on massive datasets, more than double the number from the year before, according to a Stanford University survey.
Nestor Maslej, a research manager at Stanford University's Institute for Human-Centered Artificial Intelligence, said they may eventually hit a limit, at least when it comes to data.
"I think it's been clear that if you scale the models on more data, they can become increasingly better," he said. "But at the same time, these systems are already trained on percentages of all the data that has ever existed on the internet."
More data, acquired and ingested at costs only the tech giants can afford and increasingly the subject of copyright disputes and lawsuits, will continue to drive improvements. "Yet they still cannot plan well," Maslej said. "They still hallucinate, they still make mistakes in their reasoning."
Achieving AI systems that can perform more sophisticated cognitive tasks and common sense reasoning that humans still excel at will likely require a shift beyond building ever-larger models.
For the flood of businesses trying to adopt generative AI, which model they choose depends on several factors, including cost. Language models in particular have been used to power customer service chatbots, write reports and financial insights, and summarize long documents.
Todd Lohr, a leader in technology consulting at KPMG, said companies are looking at fit, testing different models for what they are trying to do and finding that some perform better in certain areas than others.
Unlike other model developers that sell their AI services to other businesses, Meta is largely designing its AI products for consumers, those using its advertising-fueled social networks. Joelle Pineau, Meta's vice president of AI research, said at an event in London last week that the company's long-term goal is to make the Llama-powered Meta AI "the world's most helpful assistant."
"In many ways, the models that we have today are going to be child's play compared to the models coming in five years," she said.
But she said the bigger question is whether researchers have been able to fine-tune the larger Llama 3 model so that it is safe to use and does not, for example, hallucinate or engage in hate speech. In contrast to the largely proprietary systems from Google and OpenAI, Meta has so far advocated a more open approach, publicly releasing key components of its AI systems for others to use.
"It's not just a technical question, it is a social question," Pineau said. "What is the behavior that we want out of these models? How do we shape that? And if we keep on growing models ever more general and powerful without properly socializing them, we are going to have a big problem on our hands."