Is The AI IQ Test free to play?
Yes. The free round is 10 questions and takes about 5 minutes. Share the link in any AI-builder Slack channel or post your score on X; anyone can play without signing up.
For AI Builders
Test what you actually know about the 2023–2026 AI stack — models, techniques, economics, infra, evals. Share your score with #ai-builders and see who's deepest in the material. AI-hosted, mobile-first, built for LinkedIn/X-shareable scorecards.
Play it first
Jump straight in. AI-hosted, no signup, shareable score at the end.
Built For
AI builders, ML engineers, founder-engineers, applied-AI practitioners, AI research readers, OpenAI/Anthropic/Hugging Face followers
What You Get
Suggested starter topic: AI builder trivia covering models, evals, infra, RAG, and economics
About This Pack
The AI IQ Test is built for the population that reads the system cards, follows every Anthropic and OpenAI release the hour they drop, and has strong opinions about the difference between Chain-of-Thought and ReAct. It's not an intro-to-AI quiz — it's a knowledge check for people already shipping with the models. Marcus hosts the round with the pacing of an applied-AI engineer leading a whiteboard session at a startup interview. Calm, technical, respectful of the audience's time.
Questions span the current applied-AI stack end-to-end. The model landscape as of 2025–2026 — GPT-5 and the 4o/4.1/5 progression, Claude Sonnet and Opus and Haiku across 3/3.5/4/4.5/4.6, Gemini 2.5 Flash and Pro, Llama 3 and 4 open weights, DeepSeek V3 and R1, Qwen 2.5 and 3, Mistral's recent releases. Inference economics — $/1K tokens across frontier vs open, context window scaling, MoE architectures, the cost differential between reasoning and non-reasoning modes. Evaluation — MMLU, GPQA Diamond, SWE-bench Verified, Chatbot Arena ELO, the 2025 agentic benchmarks, Humanity's Last Exam. RAG patterns — embedding models, hybrid retrieval, reranking, the shift from naive RAG to agentic retrieval. Agent frameworks — MCP, tool use, the evolution from ReAct to planning agents to orchestrators. Research canon — from 'Attention Is All You Need' through InstructGPT through the 2024–2026 reasoning-model papers. Every fact is cross-checked.
AI-builder channels share work the exact way crypto Twitter used to — a score drops on X or LinkedIn, peers have to either match it or admit they're behind on the reading. The pack is built for that flex. The share card previews the pack name and score natively on X, LinkedIn, and Slack. On X, the format fits alongside the 'read the release notes' flex. On LinkedIn, it fits alongside the founder-engineer post pattern. In Slack #ai-builders, it fits as the weekly icebreaker for teams that ship with LLMs.
Custom rounds are where this gets operationally useful. Paste your team's architecture doc and get a round new hires play in their first week. Paste a model's system card and get a quiz your team runs after a release. Describe the angle — 'only Anthropic's 2026 releases,' 'every DeepSeek paper from V2 through R1,' 'the full agentic-benchmark arc from SWE-bench through 2025,' 'the current state of long-context retrieval' — and Trivana generates a hosted round. AI content creators use the pattern for X/LinkedIn challenge posts. Applied-AI team leads use it for retros and offsites. Founders use it as a common-vocabulary check before a product strategy session.
Marcus is the default voice for AI-builder content because the tone fits the audience. Not a hype-voice, not a podcaster-voice — closer to a senior engineer who's deeply familiar with the field and is running the round because someone had to. He hosts in English and keeps the same tone in Spanish, French, Portuguese, and Japanese. On paid plans, Smart Host voice reactions mean Marcus reacts specifically to each answer, which makes the round feel less like a form and more like a pairing session with a peer who reads the same papers.
How To Play + Share
What topics does The AI IQ Test cover?
The current applied-AI stack: the 2024–2026 model landscape (GPT-5, Claude Sonnet and Opus, Gemini 2.5, Llama 3 and 4, DeepSeek, Qwen, Mistral), the economics of inference ($/1K tokens, context window scaling, MoE architectures), evaluation (MMLU, GPQA, SWE-bench, agentic benchmarks, the Chatbot Arena), RAG patterns, agent frameworks, and the research canon from 'Attention Is All You Need' through the 2024–2026 reasoning-model papers.
Can I create a custom round from my own material?
Yes. Paste a Google Doc URL, upload an architecture PDF, or drop in a founder memo, and Trivana generates a hosted round keyed to that material. AI teams use this for new-hire onboarding (quiz them on the team's architecture), retros (run a quiz on what shipped), and offsites (run a quiz on the year's research roundup).
Can I use it for interviews or team onboarding?
It's built for that. A 5-minute hosted round at the start of a take-home review or a new hire's first week gives both sides a low-stakes signal. It's not a technical screen — it's a common-vocabulary check. Managers use it for applied-AI team icebreakers and founder-engineer AMAs.
How often is the content updated?
The pack is refreshed as new models and papers land — we version it every few weeks. You can also generate a fresh custom round in under a minute keyed to whatever just dropped (a new OpenAI release, a new Anthropic benchmark, a new DeepSeek paper), which is how most AI builders use the product once they've played the default pack.
Any era, any topic, any language — AI-hosted, shareable, instant.
Create your own game