The API in Front of the AI: Part 2

Filed under: Cloud Engineering · AI Infrastructure · Local Lab

Picking Up Where We Left Off

In Part 1, we got Bifrost running locally on your Mac, wired up Ollama with qwen3.5, and verified the whole stack was humming: requests flowing through the gateway, streaming working, tool calling confirmed. Now we're going deeper. This post is about MCP, the Model Context Protocol. If Part 1 was about giving your AI a reliable phone line, Part 2 is about giving it hands. By the end you'll have a local MCP server running on your machine that exposes real tools, connected through Bifrost, so qwen3.5 can actually do things on your behalf (check system info, run safe shell commands, do math) all without touching the cloud. ...

March 31, 2026 · 8 min · Anthony Mineer