Elon Musk’s new venture, xAI, has emerged with an AI model named ‘Grok.’ Touted as a sassy and less constrained digital entity, Grok promises to bring wit and a touch of rebellion to the AI landscape.
“Grok is an AI modeled after the Hitchhiker’s Guide to the Galaxy, so intended to answer almost anything and, far harder, even suggest what questions to ask! Grok is still a very early beta product – the best we could do with 2 months of training – so expect it to improve rapidly with each passing week with your help,” the website reads in part.
Interestingly, Grok’s creation stems from a relatively brief development period of two months, and the model is part of a broader ecosystem, tapping into real-time world knowledge through the X platform. It’s a strategic pairing given Musk’s 2022 acquisition of Twitter (now X), which appears to serve as a live knowledge reservoir for Grok.
The engine powering Grok is Grok-1, the company’s frontier LLM, which Musk says was developed over the last four months and has gone through many iterations in that time. After announcing xAI, the company first trained a prototype LLM, Grok-0, with 33 billion parameters.
Using Grok-1, which xAI describes as a “state-of-the-art language model,” the company conducted a series of evaluations on a number of “standard machine learning benchmarks” designed to measure math, reasoning, and coding abilities (a simplified sketch of this style of evaluation follows the list):
GSM8k: Middle school math word problems (Cobbe et al. 2021), using the chain-of-thought prompt.
MMLU: Multidisciplinary multiple-choice questions (Hendrycks et al. 2021), provided 5-shot in-context examples.
HumanEval: Python code completion task (Chen et al. 2021), zero-shot, evaluated for pass@1.
MATH: Middle school and high school mathematics problems written in LaTeX (Hendrycks et al. 2021), prompted with a fixed 4-shot prompt.
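xAI has not published its evaluation harness, but the few-shot and pass@1 setups named above are standard practice in LLM benchmarking. The sketch below is a minimal illustration, not xAI’s code: `query_model` is a hypothetical stand-in for a call to any LLM, and the example data is invented. It shows how an n-shot prompt is assembled and how pass@1 is scored when one completion is sampled per problem.

```python
# Illustrative sketch only -- xAI's actual evaluation code is not public.
# `query_model` below is a hypothetical placeholder for an LLM call.

from typing import Callable, List, Tuple


def build_few_shot_prompt(examples: List[Tuple[str, str]], question: str) -> str:
    """Assemble an n-shot prompt: worked examples followed by the new question."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA:"


def pass_at_1(problems: List[Tuple[str, Callable[[str], bool]]],
              query_model: Callable[[str], str]) -> float:
    """pass@1 with one sample: the fraction of problems whose single
    generated answer passes that problem's correctness check."""
    passed = 0
    for prompt, is_correct in problems:
        completion = query_model(prompt)  # one sample per problem
        if is_correct(completion):
            passed += 1
    return passed / len(problems)


if __name__ == "__main__":
    # Toy 5-shot prompt in the style of a few-shot benchmark.
    five_shot = build_few_shot_prompt(
        [("1 + 1 =", "2"), ("2 + 3 =", "5"), ("4 - 1 =", "3"),
         ("3 * 2 =", "6"), ("9 / 3 =", "3")],
        "2 + 2 =",
    )
    print(five_shot)

    # Dummy "model" that always answers "4", plus one toy problem.
    dummy_model = lambda prompt: "4"
    problems = [("2 + 2 =", lambda ans: ans.strip() == "4")]
    print("pass@1:", pass_at_1(problems, dummy_model))
```

In a real harness the checker for a coding benchmark like HumanEval executes the generated Python against hidden unit tests, and pass@1 is typically estimated from multiple samples; the single-sample version above is the simplest case.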
Large language models such as OpenAI’s ChatGPT, trained on vast amounts of text to generate human-like responses, have shown enormous potential, yet concerns linger over their ethical ramifications. Unlike its counterparts, Grok seemingly operates with fewer restrictions, leaving industry experts and observers to ponder the implications of an AI that might handle contentious or unethical queries with less resistance.
While the specifics of Grok’s training protocol remain undisclosed, what is clear is that xAI has set out to address critical challenges in AI development. The company emphasizes the importance of models that can assess their own reliability and resist adversarial attempts to induce erroneous behavior.
This announcement aligns with Musk’s vision for AI tools that transcend the boundaries of political and cultural backgrounds, aiming to serve the collective benefit of humanity while adhering to legal frameworks.
As the AI community awaits broader access to Grok, the debate over balancing AI innovation with ethical safeguards continues to intensify. With Musk’s history of investing in generative AI and his recent push for less “woke” technology, Grok stands on the cusp of a new AI paradigm, one that is as potentially influential as it is controversial.
Editor’s note: This article was written by an nft now staff member in collaboration with OpenAI’s GPT-4.