Remember last week when I showed you how OpenAI quietly removed our ability to choose which AI to use?
Today you get GPT-4o. Tomorrow you get o1. Next week, who knows?
You can't build reliable systems when the tools keep changing underneath you.
Here's what OpenAI doesn't understand: Consistency matters.
When you're building content systems, training your team, or creating workflows, you need the SAME model every time. Otherwise your prompts break, your quality varies, and your results become unpredictable.
It's like trying to cook when someone keeps swapping your oven temperature without telling you.
This week, I'm showing you how to fix it – and get access to Claude, Gemini, DeepSeek, and hundreds of other models while you're at it.
The Reliability Problem Nobody's Talking About
Let me paint you a picture of what's actually happening:
You spend hours perfecting a prompt that works beautifully with GPT-4o.
You document it.
Train your team on it.
Build it into your workflow.
Then OpenAI silently switches you to a different model.
Suddenly your outputs are different.
That prompt that generated perfect newsletters? Now it's writing corporate jargon.
The system that analyzed your data correctly? Now it's missing patterns.
You didn't change anything. But your results did.
This isn't just annoying – it's destroying the reliability of AI workflows everywhere.
Think about it: How can you build systems, sell services, or trust your outputs when you don't even know which AI you're talking to?
The Simple Solution Hiding in Plain Sight
What if you could:
Choose EXACTLY which AI model to use, every single time
Access o1 when you need deep thinking
Use Claude when you need beautiful writing
Try Gemini when you want a different perspective
Test dozens of models you've never even heard of
All inside ChatGPT.
All in one conversation.
All under YOUR control.
The tool I’m referring to is called OpenRouter, and it’s free to get started.
Important: This tool gives you model access but not context.
Your uploaded files, knowledge bases, and Custom GPT features stay in regular ChatGPT.
But…
No more guessing which model you're getting.
No more broken workflows.
No more inconsistent results when you need specific capabilities.
Just reliable, predictable access to the exact AI you need – when you need it.
One Custom GPT. Every AI Model. Your Testing Lab.
Here's what I’m giving you today: A single Custom GPT that acts as your AI model testing laboratory.
Think of it as your experimentation kitchen – where you can test different ingredients before committing to buying them in bulk.
You can try Claude's writing, o1's reasoning, Gemini's analysis, all without subscribing to everything.
You can even ask it to compare models and provide recommendations 👇🏼
When I need to test which model handles a specific task best, I use this.
When I hit rate limits on Claude Pro, I use this.
When I want to compare outputs for important content, I use this.
It's not replacing my context-rich Custom GPTs or Projects for daily work.
It's my testing ground and backup system.
What This Actually Means (In Plain English)
Let me break this down without any technical jargon:
Before (chaos):
You: "Write me a newsletter"
ChatGPT: Uses a mystery model that changes daily
Result: Inconsistent quality, broken systems
After (control):
You: "Use Claude Sonnet 4 to write me a newsletter"
ChatGPT: Uses Claude Sonnet 4, exactly as requested
Result: Consistent quality, reliable systems
It's that simple. You specify what you want, you get what you asked for.
Plus, you get access to models you couldn't use before.
Want to try Claude Opus 4.1 but don't want another subscription? It's here.
Curious about Gemini 2.5 Pro? It's here.
Need o1's deep reasoning that OpenAI removed? It's back.
Understanding the Cost (In Netflix Terms)
The only new concept you need to understand is "tokens" – basically how AI usage is measured.
Think of it like streaming:
Normal ChatGPT is like Netflix making you watch random shows – you pay monthly but can't choose what you watch
This system is like renting exactly the movies you want – you pay for what you watch, and you choose every time
Here's what it actually costs:
$10 of credit might last most people months (depending on the model)
That covers hundreds of conversations
You only pay for what you use
If you don't use it, you don't pay
Today I tested it by writing a 1,500-word article, with back-and-forth refining of titles, sections, and more.
Total cost? $0.19.
And I used the fancy expensive models.
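If you want to sanity-check numbers like that yourself, the arithmetic is simple: tokens used times the model's per-token price. Here's a minimal sketch – the prices and token counts below are illustrative placeholders, not real OpenRouter rates, so check the pricing page for the model you actually use:

```python
# Back-of-the-envelope token cost estimate.
# Prices below are illustrative placeholders, NOT real OpenRouter rates --
# check openrouter.ai/models for current per-token pricing.

def session_cost(input_tokens, output_tokens, price_in_per_m, price_out_per_m):
    """Cost in dollars for one session, given per-million-token prices."""
    return (input_tokens / 1_000_000) * price_in_per_m \
         + (output_tokens / 1_000_000) * price_out_per_m

# A 1,500-word article plus several refinement rounds might total
# roughly 10k input tokens and 8k output tokens (a rough assumption).
cost = session_cost(10_000, 8_000, price_in_per_m=3.00, price_out_per_m=15.00)
print(f"${cost:.2f}")  # prints $0.15 with these placeholder numbers
```

Even with a premium model's pricing plugged in, a full writing session stays in the cents, which is why $10 of credit stretches so far.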
The Context Trade-off (Full Transparency)
Here's what you need to understand: This gives you model choice but loses context features.
What transfers:
Your conversation history within that chat (if you’re not constantly switching models)
The basic instructions you give the Custom GPT
Whatever you explicitly paste into your prompts
What doesn't transfer:
Knowledge base files you've uploaded
Custom GPT features like Code Interpreter
For daily work with rich context, stick with your regular Custom GPTs and Projects.
For testing models, comparing outputs, or emergency access when you hit rate limits?
This is your tool.
The Speed Trade-off (Also Being Honest)
I need to be upfront: This method is slower than using ChatGPT directly.
Each request takes an extra 3-5 seconds, maybe longer, because it's connecting to the model you specified. The reasoning/thinking models may need even longer processing time.
You also need to tell it which model to use each time ("Use Claude Sonnet 4 to..." or "Use o1 to...").
Is 5 seconds worth having complete control? For me, absolutely.
When you're building systems, consistency beats speed. When you need specific capabilities, waiting 5 seconds beats not having access at all.
This isn't meant to replace your regular ChatGPT for casual questions. It's your precision tool for when you need specific, reliable results.
When I Actually Use This (Not Daily, But Strategic)
Let me show you when this tool becomes valuable:
Testing new models: I subscribe to ChatGPT, Claude, and Perplexity. It’s a lot. And that's all that I can handle.
Now with this new method, I can try hundreds of models and only pay for what I use.
For those who like to tinker? This is for you 😁
Hitting rate limits: It's 2pm, a big project is due, and Claude says "come back in 3 hours." This is my backup.
Comparing outputs for important content: When I'm working on something that really matters, I'll run it through Claude, Gemini, and other models to see different approaches.
Monthly model audit: Once a month, I plan to test my key prompts across all models to see if something new performs better.
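That monthly audit boils down to sending the same prompt to several models and comparing the results. Here's a minimal sketch of the request-building side – the model IDs are assumptions on my part, so check openrouter.ai/models for the exact names before you rely on them:

```python
def build_comparison_requests(prompt, models):
    """One OpenRouter chat request per model, same prompt for each,
    ready to POST to https://openrouter.ai/api/v1/chat/completions."""
    return [
        {"model": m, "messages": [{"role": "user", "content": prompt}]}
        for m in models
    ]

# Model IDs below are assumptions -- verify them on openrouter.ai/models.
audit_models = [
    "anthropic/claude-sonnet-4",
    "google/gemini-2.5-pro",
    "openai/o1",
]
for req in build_comparison_requests("Write my newsletter intro", audit_models):
    print(req["model"])
```

Because OpenRouter exposes every model behind one endpoint, the only thing that changes between requests is the `model` field – that's what makes side-by-side audits cheap to run.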
This isn't my daily driver.
My context-rich Custom GPTs handle that.
This is my Swiss Army knife for specific situations.
The 10-Minute Setup That Lasts Forever
Ready to take back control? Here's exactly how to build this:
Step 1: Get Your API Key (3 minutes)
Go to OpenRouter.ai
Create a free account (just needs email)
Add $5 of credit (this will likely last you weeks if not months)
Click "API Keys" and create a new key
Copy that key somewhere safe
Think of this key as your "all-access pass" to every AI model
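If you want to sanity-check your key before wiring up the Custom GPT, here's roughly what the action will send to OpenRouter's chat-completions endpoint. The key is a placeholder and the model ID is my assumption, so swap in your own values:

```python
import json

API_KEY = "sk-or-..."  # placeholder -- your OpenRouter key from Step 1

# This mirrors what the Custom GPT's action sends to
# POST https://openrouter.ai/api/v1/chat/completions.
# The model ID is an assumption -- check openrouter.ai/models for exact IDs.
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
payload = {
    "model": "anthropic/claude-sonnet-4",
    "messages": [
        {"role": "user", "content": "Write a haiku about coffee"}
    ],
}
print(json.dumps(payload, indent=2))

# To actually send it (requires a real key and network access):
#   import urllib.request
#   req = urllib.request.Request(
#       "https://openrouter.ai/api/v1/chat/completions",
#       data=json.dumps(payload).encode(), headers=headers)
#   print(urllib.request.urlopen(req).read().decode())
```

If the real call comes back with a haiku, your key works and the Custom GPT setup in the next steps will too.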
Step 2: Create Your Control Center (2 minutes)
Go to ChatGPT and click "GPTs" → "Create a GPT"
Click "Configure" (skip the chat part)
Fill in these settings:
Name: Universal Model Switcher
Description: Access any AI model directly in ChatGPT – Claude, Gemini, o1, and more.
Step 3: Add the Instructions (Copy-Paste)
In the System Instructions box, paste the full system prompt found here:
# System Instructions
## Role
You are a specialized AI model selection assistant that helps users choose and compare the best AI models for their specific tasks through the OpenRouter API.
## Objective
Help users make informed decisions about which AI model to use by understanding their requirements and providing clear recommendations with practical trade-offs.
## Core Capabilities
- Analyze user requests to determine optimal model selection
- Compare multiple models with clear trade-offs
- Provide structured responses showing model outputs
- Guide users through decision-making process
....
FULL PROMPT HERE: https://docs.google.com/document/d/1skYbPYaV7mOyon1J7ThL1yWAV3jcijuQ_bpKpYmWJoI/edit?usp=sharing
Step 4: Connect to All Models (The Technical Part)
Scroll down to Actions and click "Create new action"
For Authentication, choose "API Key" and set the Auth Type to "Bearer"
Paste your OpenRouter key from Step 1
In the Schema box, paste the full schema found here:
openapi: 3.1.0
info:
  title: OpenRouter Multi-LLM API
  description: API to interact with various LLM models via OpenRouter.
  version: 1.0.0
servers:
  - url: https://openrouter.ai/api/v1
    description: Production server
paths:
  /chat/completions:
    post:
      operationId: generateResponse
      x-openai-isConsequential: false
      summary: Generate a response from the selected LLM.
      description: Sends a user prompt to the specified LLM model to generate a response.
      requestBody:
FULL SCHEMA HERE: https://docs.google.com/document/d/1skYbPYaV7mOyon1J7ThL1yWAV3jcijuQ_bpKpYmWJoI/edit?usp=sharing
Click "Save" at the bottom
Note: Schema approach adapted from Mark Kashef's OpenRouter integration.
Step 5: Test Your New Control
Once saved, test it with this simple command:
"Use Claude Sonnet 4 to write a haiku about coffee"
If Claude responds with a haiku, you're all set. You now have control over every AI model through ChatGPT.
Your First Week: Smart Experiments
Here's how to get value from this tool:
Day 1: Test a key prompt across 5 different models. Find surprises.
Day 2: Try models you've been curious about but didn't want to subscribe to.
Day 3: Compare Claude vs GPT-4o on your specific use cases.
Day 4-7: Document which models excel at what tasks.
You're building a mental map of the AI landscape without committing to subscriptions.
That knowledge is valuable.
From Chaos to Options
OpenAI thought they were simplifying things by removing model choice.
Instead, they broke workflows and removed our ability to choose the right tool for the right job.
This Custom GPT doesn't fix everything.
It won't replace your context-rich Custom GPTs or Projects.
It's slower than native ChatGPT.
It requires you to specify models.
But what it does give you is options:
Test models before subscribing
Access Claude when you hit rate limits
Compare outputs when quality matters
Explore the full AI landscape
It may or may not become your primary workflow.
Either way, it's your testing lab, your backup plan, and your exploration tool.
Sometimes that's exactly what you need.
Welcome to having options.
Keep cooking,
Tam
P.S. – Questions about the setup? Drop them in the comments. I respond to everything and update the article when multiple people have the same question.
P.P.S. – This tool is for testing and backup. For your daily workflow with full context, check out my guide on building your own AI Strategy Partner using GPT Projects – that's where the real systematic power lives.