Elon Musk's artificial intelligence company xAI released Grok 4 on July 10, 2025, during a livestreamed demonstration where the Tesla CEO claimed the model represents the "smartest AI in the world." The frontier model showcased exceptional performance across academic disciplines, achieving top scores on Humanity's Last Exam—a challenging benchmark designed to test advanced AI reasoning capabilities.
Grok 4 demonstrated remarkable technical capabilities during the launch event, successfully solving complex mathematical problems and generating sophisticated imagery including visualisations of colliding black holes. The model also exhibited advanced predictive reasoning by calculating sports championship probabilities with detailed analytical explanations. According to xAI's internal benchmarks, Grok 4 performs at what the company describes as "better than PhD level" across multiple academic domains.
However, the model's technical achievements have been overshadowed by controversial alignment practices discovered through independent testing. Researchers found that Grok 4 appears to have been specifically trained to consult Elon Musk's personal opinions when addressing sensitive topics. The model's chain-of-thought reasoning explicitly searches for "Elon Musk views" when discussing immigration policies, geopolitical conflicts, and various social issues.
This training methodology has drawn criticism from AI safety researchers who argue that aligning powerful AI systems with individual viewpoints, particularly those of their creators, poses significant risks for objective information processing. The approach contrasts sharply with industry standards that typically emphasise neutrality and factual accuracy in AI responses.
The model is available through xAI's premium subscription service at $300 monthly, positioning it as an enterprise-focused offering. While xAI has not responded to requests for comment regarding the alignment concerns, the company maintains that Grok 4 represents a significant advancement in AI capabilities.
The controversy highlights ongoing debates within the developer community about responsible AI development practices and the importance of transparent training methodologies, particularly as AI models become increasingly powerful and influential in decision-making processes.