Developer SDKs

Use MCP or the Sandbox SDK to integrate MarcoPolo into custom LangChain agents or autonomous Python scripts.

Building your own AI agents or autonomous scripts that need to work with data? Connect them to a MarcoPolo workspace using MCP or the LangChain Sandbox SDK. Your agent gets the same tools, security model, and persistent state that the built-in integrations use.

API tokens

Both MCP and the Sandbox SDK authenticate using API tokens. Generate them from the MarcoPolo web UI:

  1. Go to mcp.marcopolo.dev/app
  2. Open the Developer page
  3. Click + Create Token

Each token is scoped to your workspace. Your agent connects with the same data sources, credentials, and persistent state as your conversational sessions. Use the token for MCP authentication or pass it to the Sandbox SDK.
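Agents typically read the token from the environment rather than hard-coding it. As a minimal sketch, here is a helper that loads the token (the `MARCOPOLO_API_TOKEN` variable name matches the Sandbox SDK example below; the Bearer scheme is an assumption, not a documented contract) and builds the HTTP auth headers an MCP client would send:

```python
import os

def auth_headers() -> dict[str, str]:
    """Build HTTP headers carrying the MarcoPolo API token.

    Assumes the token is exported as MARCOPOLO_API_TOKEN and that the
    server accepts a standard Bearer scheme (an assumption here).
    """
    token = os.environ["MARCOPOLO_API_TOKEN"]  # raises KeyError if unset
    return {"Authorization": f"Bearer {token}"}
```

Keeping the token out of source code also means the same script works unchanged across workspaces: swap the environment variable, not the code.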

Option 1: MCP (Model Context Protocol)

Any agent or framework that supports MCP can connect to MarcoPolo's remote server:

https://mcp.marcopolo.dev

Your agent authenticates with an API token and gets access to all MarcoPolo tools: list_datasources, query, get_schema, browse, download, upload, execute_command, create_data_view, and generate_connector_url.

# Connecting via MCP with the Python SDK
import asyncio
import os

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main():
    # Pass your API token; a standard Bearer scheme is assumed here
    headers = {"Authorization": f"Bearer {os.environ['MARCOPOLO_API_TOKEN']}"}
    async with streamablehttp_client("https://mcp.marcopolo.dev", headers=headers) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            sources = await session.call_tool("list_datasources", {
                "context": "Listing available datasources for the user."
            })
            print(sources.content)

asyncio.run(main())

See the MCP Python SDK for current API details.

Option 2: LangChain Sandbox SDK

For agents built with LangChain or LangGraph, langchain-marcopolo wraps MarcoPolo's workspace as a LangChain-compatible tool provider.

pip install langchain-marcopolo

import os

from dotenv import load_dotenv
from langchain_marcopolo import MarcopoloSandbox

load_dotenv()  # loads MARCOPOLO_API_TOKEN from .env

sandbox = MarcopoloSandbox(api_key=os.environ["MARCOPOLO_API_TOKEN"])
tools = sandbox.get_tools()

# Use tools in your LangChain agent
# (assumes you have already defined an `llm` and `prompt` for your agent)
from langchain.agents import create_tool_calling_agent
agent = create_tool_calling_agent(llm, tools, prompt)

Install from: github.com/immersa-co/langchain-marcopolo

Same workspace, any interface

Your custom agent operates in the same workspace you use from Claude, ChatGPT, or Cursor. Queries your agent writes show up in the web UI. Context your agent builds (RULES.md updates, cached results, scripts) is available in your next conversational session.

One workspace per user, regardless of how many agents or tools connect to it.

When to use which

Use MCP if your agent framework supports it natively, or if you're building a lightweight integration. MCP is a standard protocol with broad ecosystem support.

Use the Sandbox SDK if you're deep in the LangChain/LangGraph ecosystem and want tighter integration with those frameworks' tool and execution abstractions.

Both authenticate with the same API token, connect to the same workspace, and use the same tools and security model.
