This took me a while to get going: I had to pass `"api-version": "2025-01-01-preview"`, and then I kept hitting MAX TOKEN LENGTH issues when using a smaller model.
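If you hit the same MAX TOKEN LENGTH errors, one workaround that may help is declaring the model's real limits on its entry in the config below, so opencode doesn't assume a bigger window than your deployment supports. This is only a sketch: the `limit` block and the 16384/4096 numbers are assumptions, so match them to your model and check them against your opencode version's config schema.

```json
"MY_MODEL": {
  "name": "<CUSTOM_MODEL_DISPLAY_NAME_IN_OPENCODE>",
  "limit": {
    "context": 16384,
    "output": 4096
  }
}
```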
- Configure the provider in your `opencode.json` file (I did this in my WSL instance):

  ```sh
  mkdir -p ~/.config/opencode/
  touch ~/.config/opencode/opencode.json
  code ~/.config/opencode/opencode.json
  ```

  ```json
  {
    "$schema": "https://opencode.ai/config.json",
    "provider": {
      "azure-foundry": {
        "npm": "@ai-sdk/openai-compatible",
        "name": "Azure Foundry",
        "options": {
          "baseURL": "https://<MY_AI_FOUNDRY_INSTANCE>.cognitiveservices.azure.com/openai/deployments/<MY_DEPLOYMENT_NAME>/",
          "queryParams": {
            "api-version": "2025-01-01-preview"
          }
        },
        "models": {
          "MY_MODEL": {
            "name": "<CUSTOM_MODEL_DISPLAY_NAME_IN_OPENCODE>"
          }
        }
      }
    }
  }
  ```

- To use this configuration in my `devcontainer.json`, I add a mount to map this file:

  ```json
  {
    ...,
    "mounts": [
      "source=${localEnv:HOME}/.config/opencode,target=/home/node/.config/opencode,type=bind",
      ...
    ],
    ...
  }
  ```

- Assuming you have opencode installed (`npm install -g opencode-ai`), in your devcontainer run `opencode auth login` (you can sanity-check the endpoint and key first with the `curl` sketch after this list)
- Arrow key up to `Other`
- Enter `azure-foundry`
- Enter `YOUR_SECURE_API_TOKEN`
- Run `/models` in opencode and arrow key up to select your Azure AI Foundry model
🎉 Happy coding!
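As a side note, before (or instead of) going through the auth flow, you can verify that the base URL, `api-version`, and key all line up by calling the deployment's chat completions endpoint directly. A minimal sketch — `<MY_AI_FOUNDRY_INSTANCE>`, `<MY_DEPLOYMENT_NAME>`, and the `AZURE_API_KEY` variable are placeholders you'd substitute with your own values:

```sh
# Hypothetical values — substitute your own resource, deployment, and key
curl "https://<MY_AI_FOUNDRY_INSTANCE>.cognitiveservices.azure.com/openai/deployments/<MY_DEPLOYMENT_NAME>/chat/completions?api-version=2025-01-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_API_KEY" \
  -d '{"messages": [{"role": "user", "content": "ping"}], "max_tokens": 16}'
```

A 200 response with a completion means the provider block above should work; a 404 usually means the deployment name or `api-version` is off.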
Possibly after you download Ollama on your machine and get an MCP client (Claude Desktop seems to be good to start with, and you don't have to buy a subscription), add the following and then you have something to try your models on locally:
"ollama": {
"command": "uv",
"args": [
"--directory",
"C:/Source/MCP/PythonServer/weather",
"run",
"ollama.py"
]
},
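For context, this fragment goes under the `mcpServers` key of the client's config file — for Claude Desktop that is `claude_desktop_config.json`. A sketch with the same example paths in place:

```json
{
  "mcpServers": {
    "ollama": {
      "command": "uv",
      "args": [
        "--directory",
        "C:/Source/MCP/PythonServer/weather",
        "run",
        "ollama.py"
      ]
    }
  }
}
```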