Meta Launches Stand-Alone AI App, Escalating Rivalry in Competitive GenAI Race
- By John K. Waters
- 04/29/2025
Meta Platforms on Tuesday unveiled a stand-alone artificial intelligence app built on its proprietary Llama 4 model, intensifying the competitive race in generative AI alongside OpenAI, Google, Anthropic, and xAI.
The new "Meta AI" app marks the company's boldest step yet in bringing personalized, conversational AI to the forefront of its ecosystem. It offers users a dedicated experience distinct from the existing AI functions embedded in Facebook, Instagram, WhatsApp, and Messenger. The launch confirms earlier reporting by CNBC and comes as Meta kicks off its inaugural LlamaCon developer event in Menlo Park, California.
The app features a social-infused "Discover" feed showing how others are engaging with Meta AI, along with pre-set prompts to inspire usage. It integrates voice chat powered by Llama 4 and full-duplex speech technology, allowing more fluid, back-and-forth conversations—though the feature remains in early testing and is initially limited to users in the U.S., Canada, Australia, and New Zealand.
Meta CEO Mark Zuckerberg has described 2025 as "the year when a highly intelligent and personalized AI assistant reaches more than 1 billion people," positioning Meta AI as a leading contender. The company's internal numbers showed 700 million monthly active users for Meta AI as of January, up from 600 million in December.
The stand-alone app rollout puts Meta in direct competition with AI chatbots like OpenAI's ChatGPT, Google's Gemini, Anthropic's Claude, and xAI's Grok. It also follows recent moves by Google and xAI to debut their own dedicated mobile apps for their assistants.
The app also replaces the former Meta View app for Ray-Ban Meta smart glasses, consolidating the AI experience across devices. Users can start a voice conversation on their glasses and resume it later via the app or web interface. Personalized responses draw from a user's Facebook and Instagram profiles—if linked through Meta's Account Center—and improve with continued use.
Meta says its latest language model, Llama 4, delivers more natural and context-aware responses, handles voice input better, and integrates image generation and editing capabilities. While the AI cannot access real-time web data, Meta is testing new features such as document generation, file imports for analysis, and desktop-optimized web tools with expanded creative options.
Voice remains a cornerstone of Meta's AI strategy. Users can toggle the "Ready to talk" feature in the settings to enable default voice interaction, and a visible icon indicates when the microphone is active.
The app launch comes ahead of Meta's Q1 earnings report on Wednesday. Investors are watching closely for signs that Meta's aggressive AI investments—projected to hit $65 billion this year—are translating into commercial returns.
The company's LlamaCon event is expected to provide developers with insights into its AI roadmap and showcase its efforts to democratize large language model usage.
By debuting a stand-alone AI assistant, Meta is betting big on personalization and ubiquity—aiming to embed Meta AI in users' daily routines across devices, conversations, and digital spaces.
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].