Model Context Protocol

10 February 2026
AIMCPC#LLM

The pace at which AI tools for the development workflow evolve is staggering. It is not easy to keep up, and not every new development is reliable or helpful. In this post I want to focus on one that started as a game-changer about a year ago and has already become a new standard: the Model Context Protocol.

What is It?

The Model Context Protocol (MCP) was introduced in November 2024 by Anthropic as an open standard that allows AI models to access external data, tools and applications. In its documentation, Anthropic describes MCP as the USB-C port for AI applications, as it provides a standardized way to connect AI applications to external systems. MCP had its “USB Moment” in March 2025, when OpenAI announced full support for the standard. At the end of last year, Anthropic donated the protocol to the Agentic AI Foundation, making it a community-governed standard. A bit more than a year after its inception, it is now widely used and supported by most AI tools.

The Use-Case

I came across the whole MCP topic due to a use-case that came up in my daily work. We use GitLab for version control, but have GitHub Copilot as our main AI subscription. Connecting the two would be helpful to support merge request reviews with the help of AI agents. By now, GitLab offers its own MCP server, but I decided to implement my own little MCP server to get more familiar with the protocol. The MCP server is used by an agent running in Visual Studio Code. I will show the implementation of the MCP server itself at the end, but first want to focus on the inner workings of the Model Context Protocol, using this application as an example.

Components

MCP uses a client-server architecture in which an MCP host can establish a connection to one or multiple MCP servers. To establish this connection, the MCP host creates an MCP client for each MCP server. The MCP host can, for example, be Visual Studio Code. It can connect to a local file system MCP server by internally instantiating an MCP client which establishes and maintains the connection to this MCP server. MCP consists of two distinct layers: the data layer and the transport layer. The data layer defines what the transmitted data looks like; the transport layer defines how it is transmitted between server and client.
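To make these roles concrete, here is a toy Python sketch of the host-client relationship (my own simplified model, not real SDK code): the host owns one client object per configured server.

```python
class McpClient:
    """One client per server connection; it establishes and
    maintains the session with a single MCP server."""

    def __init__(self, server_name: str):
        self.server_name = server_name
        self.initialized = False

    def connect(self):
        # A real client would run the initialize handshake here
        self.initialized = True


class McpHost:
    """The host (e.g. an editor) creates one McpClient per MCP server."""

    def __init__(self):
        self.clients = {}

    def add_server(self, server_name: str):
        client = McpClient(server_name)
        client.connect()
        self.clients[server_name] = client


host = McpHost()
host.add_server("filesystem")
host.add_server("gitlab-mcp")
print(len(host.clients))  # 2: one client per server connection
```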

The Data Layer

JSON-RPC 2.0 as Data Format

The data transmitted by MCP uses the data structures defined by JSON-RPC 2.0. JSON-RPC defines three distinct message types.

  1. Request When the AI wants to “do” something, it sends a request object. This object contains an id, which is also returned in the response object to enable linking request and response.
{
  "jsonrpc": "2.0",
  "id": "req-001",
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "Zurich" }
  }
}
  2. Response The server sends back the result or an error, with the same id.
{
  "jsonrpc": "2.0",
  "id": "req-001",
  "result": {
    "content": [{ "type": "text", "text": "It's 12°C and rainy in Zurich." }]
  }
}
  3. Notifications A notification is a special request object without an id. Notifications are used when no confirmation is needed (fire and forget).
{
  "jsonrpc": "2.0",
  "method": "notifications/resources/list_changed"
}
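These three message shapes are easy to build and check programmatically. The following Python sketch (my own helper functions, not an SDK API) constructs them and shows how the shared id links a request to its response:

```python
def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request object."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

def make_notification(method, params=None):
    """A notification is a request without an id (fire and forget)."""
    msg = {"jsonrpc": "2.0", "method": method}
    if params is not None:
        msg["params"] = params
    return msg

def correlate(request, response):
    """Responses are matched to requests via the shared id."""
    return request["id"] == response.get("id")

req = make_request("req-001", "tools/call",
                   {"name": "get_weather", "arguments": {"city": "Zurich"}})
res = {"jsonrpc": "2.0", "id": "req-001",
       "result": {"content": [{"type": "text", "text": "12°C and rainy"}]}}
print(correlate(req, res))  # True: same id on both sides

note = make_notification("notifications/resources/list_changed")
print("id" in note)  # False: notifications carry no id
```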

MCP Data Layer

The data layer protocol used in MCP defines how developers can share context from MCP servers to MCP clients; it is the heart and soul of how MCP works.

Initialization

MCP begins with an initialization step, which acts as a capability negotiation. The client sends an initialize request to establish the connection and negotiate supported features. In my example, this is done as soon as the tool is triggered for the first time from Visual Studio Code. The initialization request looks like this:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-11-25",
    "capabilities": {
      "roots": { "listChanged": true },
      "sampling": {},
      "elicitation": { "form": {}, "url": {} },
      "tasks": {
        // more task capabilities
      },
      "extensions": {
        "io.modelcontextprotocol/ui": {
          "mimeTypes": ["text/html;profile=mcp-app"]
        }
      }
    },
    "clientInfo": { "name": "Visual Studio Code", "version": "1.109.0" }
  }
}
  • id: This is the identifier of the request.
  • protocolVersion: The request contains the MCP protocol version to ensure compatible protocols and prevent communication errors.
Info

The client should always provide the latest version supported. If the server supports the same version, it must respond with that version. Otherwise it should respond with the latest version it supports. If the client does not support the version of the server, it should disconnect.

  • capabilities: The capabilities contains the features that each participant supports. In this request, the client declares that it supports the roots, sampling, elicitation and tasks capabilities.
    • roots allows the server to request the available root folders from the client. "listChanged": true means that the client will notify the server when the list of roots changes.
    • sampling is a relatively new capability of the client. Using sampling, the server can send requests to the client to generate completions using the host’s LLM. This means that the MCP server can leverage LLM capabilities without its own access to LLM models.
    • elicitation allows the server to request additional data through sending an elicitation/create request.
    • tasks indicates to the server that the client is equipped to handle tasks. This is mostly used for long-running operations and allows the client or server, among other things, to list, check the status of, or cancel long-running tasks.
  • extensions: Here the client can declare additional functionality it supports. In this case it indicates to the server that it can render HTML in the UI.
  • clientInfo: The client information object contains additional identification of the client and versioning information.

The response to this initialization request could be:
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2025-06-18",
    "capabilities": {
      "logging": {},
      "tools": { "listChanged": true }
    },
    "serverInfo": { "name": "GitlabMcpServer", "version": "1.0.0.0" }
  }
}
  • id: Identifier of the response which matches the request.
  • protocolVersion: Used to ensure compatibility.
  • capabilities: Here the server lists its own capabilities. In this case:
    • logging shows that this server supports logging functionality. It can send log messages as notifications and the client can control the log level if required.
    • tools indicates that the server supports the tools primitive. This means the client can query the server for available tools. The server also sends notifications when its tools list changed ("listChanged": true).
  • serverInfo: The equivalent of the clientInfo above.

After successful initialization, the client sends a notification to indicate that it is ready:
{
  "method": "notifications/initialized",
  "jsonrpc": "2.0"
}
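The version rule from the Info box above is simple enough to sketch. This hypothetical helper (my own, not SDK code) shows the server's side of the negotiation; since MCP versions are ISO date strings, plain string comparison orders them correctly:

```python
def negotiate_version(client_version, server_supported):
    """Server side of the MCP version negotiation: echo the client's
    version if the server supports it, otherwise answer with the
    latest version the server supports."""
    if client_version in server_supported:
        return client_version
    # MCP versions are dates like "2025-06-18", so string max is the latest
    return max(server_supported)

# The server in this post only supports an older revision, so it answers
# with its own latest version; the client then decides whether to continue.
print(negotiate_version("2025-11-25", ["2024-11-05", "2025-06-18"]))  # 2025-06-18
```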

Logging

Now that the contract is negotiated, the client immediately sets the log level:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "logging/setLevel",
  "params": { "level": "debug" }
}

This request gets a new id and calls the method logging/setLevel with the level parameter set to debug, meaning that all log messages down to the debug level should be transmitted from the server to the client. The server responds with a simple confirmation:

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {}
}
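The effect of logging/setLevel can be illustrated with a small sketch. MCP uses syslog-style severities; a server would filter its log notifications against the configured level before sending them (hypothetical helper, not SDK code):

```python
# Syslog-style severities used by MCP logging, least to most severe
LEVELS = ["debug", "info", "notice", "warning", "error",
          "critical", "alert", "emergency"]

def should_send(message_level, configured_level):
    """Server-side check: only emit log notifications at or above
    the severity the client set via logging/setLevel."""
    return LEVELS.index(message_level) >= LEVELS.index(configured_level)

print(should_send("info", "debug"))     # True: "debug" lets everything through
print(should_send("debug", "warning"))  # False: below the configured level
```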

Tools List

Next, the client requests a list of the available tools from the server with the following request:

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/list",
  "params": {}
}

This calls the method tools/list with no parameters, resulting in the following response:

{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "tools": [
      {
        "name": "ReviewMergeRequest",
        "description": "Fetches title, description and diff of a GitLab MR for review when no project ID is provided.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "mrId": {
              "description": "The Merge Request IID (e.g. 42)",
              "type": "integer"
            }
          },
          "required": ["mrId"]
        }
      },
      {
        "name": "ReviewMergeRequestWithProjectId",
        "description": "Fetches title, description and diff of a GitLab MR for review when a project id is provided.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "projectId": {
              "description": "The Project ID (e.g. 123)",
              "type": "string"
            },
            "mrId": {
              "description": "The Merge Request IID (e.g. 42)",
              "type": "integer"
            }
          },
          "required": ["projectId", "mrId"]
        }
      }
    ]
  }
}

This response tells the client that the server supports two different tools:

  1. ReviewMergeRequest: This tool allows the client to request the information of a merge request given by the provided merge request id, using the standard project id defined in the server. Most of our code is in one big monorepo, which I use as the standard when no specific project id is given in the request. The response also shows that an input object is required, which must contain a field mrId with the id of the merge request to be fetched.
  2. ReviewMergeRequestWithProjectId: This tool is the same as the first tool, but allows the client to provide a specific project id along with the merge request id.
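To illustrate how a client can use the inputSchema, here is a minimal sketch that checks the required fields of a tools/call before sending it (hand-rolled for illustration; a real client would run full JSON Schema validation):

```python
def check_arguments(tool, arguments):
    """Return the names of required inputSchema fields that are
    missing from a prospective tools/call."""
    schema = tool["inputSchema"]
    return [f for f in schema.get("required", []) if f not in arguments]

# The second tool from the tools/list response above
tool = {
    "name": "ReviewMergeRequestWithProjectId",
    "inputSchema": {
        "type": "object",
        "properties": {
            "projectId": {"type": "string"},
            "mrId": {"type": "integer"},
        },
        "required": ["projectId", "mrId"],
    },
}

print(check_arguments(tool, {"mrId": 42}))  # ['projectId']: call is incomplete
```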

Actual Tool Call

With the tool list, Visual Studio Code knows which tools are available and will use them if the LLM is asked to review a merge request. This can be seen in the thinking step of the LLM; in this case I am using Claude Sonnet 4.5:

The user is asking me to get the changes from merge request 42. Looking at the available tools, I can see there are GitLab merge request tools available:

- `mcp_gitlab-mcp_ReviewMergeRequest` - Fetches title, description and diff of a GitLab MR for review when no project ID is provided.
- `mcp_gitlab-mcp_ReviewMergeRequestWithProjectId` - Fetches title, description and diff of a GitLab MR for review when a project id is provided.

Since the user hasn't specified a project ID, I should try the first tool with just the MR ID (42).

It then creates a request which is sent to the MCP server:

{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "ReviewMergeRequest",
    "arguments": { "mrId": 42 },
    "_meta": {
      "progressToken": "516156c8-4a01-42e4-84d1-9632c7ca8cc4",
      "vscode.conversationId": "c3050472-ad72-48fe-992b-8a169992c741",
      "vscode.requestId": "2b018cc4-7d45-4865-9055-a309b403ff7a"
    }
  }
}

In this request the method tools/call is invoked with the name of the tool (ReviewMergeRequest) and the arguments required by the tool (in this case the merge request id "mrId": 42). Some additional information is sent in the _meta section of the request:

  • progressToken: This is a core MCP feature which allows the server to send asynchronous progress notifications. The server could send a notification back to the client to indicate the progress of the request. This is not used in this example, but such a notification could look like the following:
{
  "jsonrpc": "2.0",
  "method": "notifications/progress",
  "params": {
    "progressToken": "516156c8-4a01-e4-84d1-9632c7ca8cc4",
    "progress": 50,
    "total": 100,
    "message": "Fetching merge request..."
  }
}
  • vscode.conversationId: This id acts as a context link. It links the specific tool call to an agent session. If a tool fails or needs more information, Visual Studio Code uses this ID to map it to the correct chat window.
  • vscode.requestId: This is a unique identifier of this specific interaction sequence. It can be used for tracing or cancellation, for example when the user clicks the stop button in the chat window.

Once the server has executed its logic to retrieve the data from GitLab, it sends back the response with the content of the merge request:
{
  "result": {
    "content": [
      {
        "type": "text",
        "text": "# Review: New Feature Implementation (MR !)\r\n **Author**: AuthorName\r\n #Description\r\n This merge request introduces a new feature that allows users to do XYZ.\r\n#Changes\r\n... list of changes ..."
      }
    ]
  },
  "id": 4,
  "jsonrpc": "2.0"
}

This data can then be consumed by the LLM to do its code review. All changes are contained in the response from the MCP server, and depending on the prompt the LLM can also be instructed to read files in the vicinity of the changes to look for problems or potential incompatibilities.

Transport Layer

The previous chapter explained the structure of the data sent between the client and the server, whereas the transport layer defines how this data is transmitted. MCP supports two different communication mechanisms:

  • STDIO: This uses standard input/output streams for direct process communication. It is used when the MCP server runs locally and provides optimal performance without any networking overhead.
  • Streamable HTTP transport: HTTP POST is used for client-to-server messaging, optionally supported by Server-Sent Events for streaming functionality. This is used for remote MCP servers, and OAuth is recommended to handle authentication.

The transport layer abstracts the actual communication mechanism from the protocol layer, which allows the JSON-RPC 2.0 message format to be used across all transport mechanisms.
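For illustration, a minimal STDIO transport loop could look like the following sketch (my own simplification, assuming one newline-delimited JSON-RPC message per line; the handler function is hypothetical):

```python
import json
import sys

def serve_stdio(handle):
    """Skeleton of an STDIO transport: each line on stdin is one
    JSON-RPC message; responses are written as one line on stdout.
    `handle` maps a request dict to a response dict, or to None for
    notifications, which get no response."""
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        message = json.loads(line)
        response = handle(message)
        if response is not None:
            sys.stdout.write(json.dumps(response) + "\n")
            sys.stdout.flush()

def echo_handler(message):
    """Toy handler: acknowledge requests, ignore notifications."""
    if "id" not in message:  # notification: fire and forget
        return None
    return {"jsonrpc": "2.0", "id": message["id"], "result": {}}

# A real server would call serve_stdio(echo_handler) and block on stdin.
```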

How to Implement an MCP Server

Because MCP very quickly became the standard for integrating external data sources and tools into AI agents, a whole ecosystem of supporting libraries developed rapidly. SDKs are available for all common programming languages in the GitHub repository of the Model Context Protocol.
Since I mostly develop in C#, this was my language of choice for my MCP server. There is also a handy blog post by Microsoft on how to build an MCP server in C# available here. I used the ModelContextProtocol package provided by the Model Context Protocol initiative itself. The starting point is the Program.cs file, which contains the logic to start the server:

// 1. Create Host ApplicationBuilder
HostApplicationBuilder builder = Host.CreateApplicationBuilder(args);

// Enables usage of secrets for tokens and other configuration
builder.Configuration.AddUserSecrets<Program>();

// 2. Configure GitLab Client
string gitLabUrl = builder.Configuration["GitLab:Url"];
string gitLabToken = builder.Configuration["GitLab:Token"]
  ?? throw new InvalidOperationException("GitLab token not configured");

// 3. Add HTTP client as a service
builder.Services.AddHttpClient("GitLabClient", client =>
{
  client.BaseAddress = new Uri(gitLabUrl);
  client.DefaultRequestHeaders.Add("Private-Token", gitLabToken);
  client.Timeout = TimeSpan.FromSeconds(10);
});

// 4. Add singleton for the GitLabService
builder.Services.AddSingleton<IGitLabService, GitLabService>();

// 5. Configure MCP Server
builder.Services
       .AddMcpServer()
       .WithStdioServerTransport()
       .WithToolsFromAssembly();

IHost app = builder.Build();

// 6. Run the server
await app.RunAsync();

Here we build a simple application using the HostApplicationBuilder. We retrieve the URL and token, create the HTTP client, and register the service singleton. What turns this application into an MCP server are the lines

builder.Services
       .AddMcpServer()
       .WithStdioServerTransport()
       .WithToolsFromAssembly();

The first extension method adds the MCP server functionality, the second enables STDIO as the transport layer, and the third uses reflection to scan the current assembly. It automatically registers any class marked with [McpServerToolType] and exposes its methods as tools. This eliminates the need to manually register every single tool, keeping your Program.cs clean. The tool itself is defined in the GitLabTools.cs file:

[McpServerToolType]
public class GitLabTools(IGitLabService gitLabService)
{
  [McpServerTool(Name = "ReviewMergeRequest")]
  [Description("Fetches title, description and diff of a GitLab MR for review when no project ID is provided.")]
  public async Task<string> ReviewMergeRequest(
    [Description("The Merge Request IID (e.g. 42)")]
    int mrId)
  {
    // Our standard project has project ID 123
    try
    {
      return await GetMergeRequestAsString("123", mrId);
    }
    catch (HttpRequestException ex)
    {
      return $"API Error: {ex.Message}. Check your Project ID and Token.";
    }
    catch (Exception ex)
    {
      return $"Unexpected Error: {ex.Message}";
    }
  }

  [McpServerTool(Name = "ReviewMergeRequestWithProjectId")]
  [Description("Fetches title, description and diff of a GitLab MR for review when a project id is provided.")]
  public async Task<string> ReviewMergeRequestWithProjectId(
    [Description("The Project ID (e.g. 123)")]
    string projectId,
    [Description("The Merge Request IID (e.g. 42)")]
    int mrId)
  {
    try
    {
      return await GetMergeRequestAsString(projectId, mrId);
    }
    catch (HttpRequestException ex)
    {
      return $"API Error: {ex.Message}. Check your Project ID and Token.";
    }
    catch (Exception ex)
    {
      return $"Unexpected Error: {ex.Message}";
    }
  }
}

The attributes McpServerToolType and McpServerTool define which methods are exposed as tools by the MCP server. The description of the methods and their parameters is not just documentation: it is part of the prompt sent to the LLM and is included in responses like the tools/list response you have seen above. The logic for retrieving the data from GitLab is out of scope for this post; only standard GitLab API calls are used there. With just these two classes, you can set up an MCP server and connect it to any system you want your LLM agents to integrate with. To ultimately make it available to Visual Studio Code, you can add it through “MCP: Add Server”, which will add it to your mcp.json configuration file. In my case, this looks like the following for my C#-based MCP server:

{
  "servers": {
    "gitlab-mcp": {
      "type": "stdio",
      "command": "dotnet",
      "args": ["run", "--project", "path\\to\\project.csproj"]
    }
  },
  "inputs": []
}
Note

The configuration above uses dotnet run, which is excellent for development because it recompiles your changes on the fly. For a permanent installation, you should compile the project (dotnet publish) and point the command directly to the generated .exe or .dll to improve the startup time of the agent.
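For example, after dotnet publish the configuration could point at the published DLL instead. The path and assembly name below are placeholders, not the actual paths from my setup:

```json
{
  "servers": {
    "gitlab-mcp": {
      "type": "stdio",
      "command": "dotnet",
      "args": ["path\\to\\publish\\GitlabMcpServer.dll"]
    }
  },
  "inputs": []
}
```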

Further Documentation

There is much more to discover about MCP. Besides tools, servers can also expose resources (access to actual data sources) or prompts (reusable templates like system prompts or examples), and with the recent addition of tasks there are even more options for attaching external systems to your LLMs. I found the official documentation at https://modelcontextprotocol.io/ very helpful and extensive. The moment this blog post is published, it will already be outdated: MCP is moving fast and new functionality is added quickly. Refer to the official documentation for the most recent information.