Are you tired of JanitorLLM's repetitive responses? Or is your character forgetting important details during roleplay? Then it's time to set up OpenRouter on JanitorAI and switch to better LLM models.
JanitorAI’s in-house JanitorLLM (JLLM) is free to use and has no usage limits or filters, which is great. However, its context size (memory) is limited, and its responses become repetitive and boring fairly quickly.
What Is A Proxy On JanitorAI?
A proxy on JanitorAI lets users access different LLM models, such as GPT-4, Claude, and DeepSeek, instead of the default JLLM for roleplays. These models can be accessed directly from their official providers or through services like OpenRouter.
Why Use OpenRouter On JanitorAI?
Larger LLM models are trained on a broader range of data. They have larger context sizes, generate more creative responses, and follow instructions more effectively. You can enhance your roleplaying experience by using OpenRouter to access more advanced LLM models.
Increased Context Size
Context Size refers to the number of tokens (data) that the LLM references when generating a response. In simple terms, it’s your AI character’s memory. A larger context size allows your AI character to remember more details of your conversation.
Also Read: Understanding Tokens And Context Size
However, even models that can handle larger context sizes struggle to stay coherent past a certain point. The optimal context size range is between 8,192 and 16,384 tokens.
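As a rough rule of thumb, one token corresponds to about four characters of English text. The sketch below uses that heuristic to estimate whether a chat log still fits inside a given context window; the function names and the 4-characters-per-token ratio are illustrative assumptions, since every model uses its own tokenizer.

```python
# Rough token estimate using the common ~4-characters-per-token
# heuristic for English text. Real tokenizers vary by model, so
# treat this as a ballpark figure, not an exact count.

def estimate_tokens(text: str) -> int:
    """Approximate the token count of a piece of text."""
    return max(1, len(text) // 4)

def fits_in_context(chat_log: str, context_size: int = 8192) -> bool:
    """Check whether a chat log still fits in a given context window."""
    return estimate_tokens(chat_log) <= context_size
```

By this estimate, a chat log of around 33,000 characters would already fill an 8,192-token window, which is roughly where coherence starts to slip.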
Creative Writing
JLLM tends to be repetitive and reuses phrases too often. While this is common with all LLMs, larger models are trained on more data and don’t feel as repetitive or boring during extended roleplaying sessions.
Faster Responses
In most cases, your responses will generate faster when using OpenRouter on JanitorAI compared to JLLM. If you're using a free model, you might encounter issues like slow responses or network errors during periods of high traffic.
Better Custom Prompts
While you can use custom prompts with JLLM, other models are better at following instructions. You can also use a custom prompt with a higher token count without worrying about it affecting your AI character’s memory.
Roleplay With More Characters
With a larger context size, you can comfortably roleplay with characters that have over 1,500 permanent tokens, such as character personality and scenario. When using JLLM, these permanent tokens alone can take up about 25% of the context size.
What Is OpenRouter?
OpenRouter is a proxy service that offers a unified interface for accessing different free and paid LLM models. OpenRouter does not host any LLMs; it intelligently routes your requests through one of its multiple providers.
Also Read: Use DeepSeek On JanitorAI
OpenRouter lets you send 50 free messages daily to their free models. You can add $10 to your OpenRouter account to increase the daily free message limit to 1,000.
How To Configure OpenRouter On JanitorAI
- Register: Sign up for an OpenRouter account.
- API Key: Create an API key and save it somewhere safe (e.g., in Notepad), as it's shown to you only once, at creation.
Start a new chat or open an existing one with any character on JanitorAI, then click on the “using janitor” button on the menu bar at the top to access the API settings menu.

Click on the “Proxy” option in the API settings menu and then on “+ Add Configuration.” There are three key options for configuring OpenRouter on JanitorAI.

- Model: Select the LLM model you want to use. You must enter the exact model name provided by OpenRouter.
- Other API/Proxy URL: https://openrouter.ai/api/v1/chat/completions
- API Key: Enter the API key you created on OpenRouter's site.
Make sure there are no unwanted spaces or typos in your input. It’s best to copy and paste these details instead of typing them manually. Save your settings and refresh your chat page. You should now be ready to use the LLM model from OpenRouter on JanitorAI.
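Under the hood, the proxy settings above describe an ordinary OpenAI-style chat-completions request. The sketch below shows what that request looks like, using only the Python standard library; the model name is one of the paid models listed later in this article, and the API key is a placeholder you would replace with your own.

```python
# Minimal sketch of the request a proxy sends to OpenRouter's
# chat-completions endpoint. The URL matches the proxy settings
# above; the model name and API key are placeholder examples.
import json
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_payload(model: str, user_message: str) -> dict:
    """Assemble the OpenAI-style chat-completions body OpenRouter expects."""
    return {
        "model": model,  # must match OpenRouter's exact model name
        "messages": [{"role": "user", "content": user_message}],
    }

def send_request(api_key: str, payload: dict) -> dict:
    """POST the payload to OpenRouter; requires a valid API key."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    payload = build_payload("deepseek/deepseek-chat-v3-0324", "Hello!")
    # send_request("sk-or-...", payload)  # uncomment with your real key
```

Running a quick test like this outside JanitorAI is a handy way to confirm your API key and model name are valid before blaming the site.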
Common OpenRouter Models For AI Roleplay
Below are some common OpenRouter models used for AI roleplay. Copy and paste the exact model name into the Proxy settings menu on JanitorAI.
- google/gemini-2.5-pro [PAID]
- deepseek/deepseek-v3.2 [PAID]
- deepseek/deepseek-chat-v3-0324 [PAID]
- deepseek/deepseek-r1-0528 [PAID]
- z-ai/glm-4.6 [PAID]
- moonshotai/kimi-k2-thinking [PAID]
You can browse models on OpenRouter’s site and copy the model name by clicking the clipboard icon next to any model’s name.

Errors After Setting Up A Proxy On JanitorAI
Errors can pop up if you make a mistake while configuring a proxy on JanitorAI or if OpenRouter experiences a service interruption or outage.
Double Check Before Troubleshooting
- Your API/Proxy URL is correct, with no extra spaces or typos.
- You clicked “Save Settings” and then refreshed your chat page.
- Your API key is correct, with no extra spaces or typos.
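If the checklist above comes up clean, the status code in the error usually points at the cause. The small lookup below summarizes the errors covered in the sections that follow; note that mapping the rate-limit error to HTTP 429 is an assumption, since OpenRouter's message doesn't show an explicit status code.

```python
# Convenience lookup for the common proxy errors covered in this
# article. Mapping the rate-limit error to HTTP 429 is an
# assumption; OpenRouter's message doesn't show an explicit code.

LIKELY_CAUSES = {
    400: "Incorrect model name in JanitorAI's proxy settings",
    404: "Free provider blocked by your OpenRouter privacy settings",
    429: "Daily free-message limit reached (wait or add credits)",
}

def diagnose(status_code: int) -> str:
    """Map a proxy error status code to its most likely cause."""
    return LIKELY_CAUSES.get(status_code, "Provider outage or unknown error")
```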
PROXY ERROR 400: {"error":{"message":"is not a valid model ID","code":400}}
You have entered the incorrect model name in your JanitorAI Proxy settings.

You can copy the model name from OpenRouter’s website by clicking the clipboard icon next to any model’s name.
PROXY ERROR 404: {"error":{"message":"No endpoints found matching your data policy."}}
You need to turn ON the “Enable training and logging (chatroom and API)” option in your OpenRouter Privacy settings for most free model providers.
Rate limit exceeded: free-models-per-day. Add 10 credits to unlock 1000 free model requests per day
You have reached your limit of 50 free messages with OpenRouter. You need to wait until the limit refreshes or add $10 to your account.
PROXY ERROR: Unknown response: [object Object]
Same cause as the previous error: you have reached your limit of 50 free messages with OpenRouter. You need to wait until the limit refreshes or add $10 to your account.
Provider Returned Error
OpenRouter’s provider for the specific model you are using is offline or experiencing service interruptions. You’ll need to wait until the provider is back online.







