
Introduction

The Labellerr MCP Server enables seamless interaction with the Labellerr platform using natural language through AI assistants such as Claude Desktop and Cursor. Built on the Model Context Protocol (MCP), it provides 23 comprehensive tools for managing datasets, projects, annotations, and exports through conversational interfaces.

Quick Start

Prerequisites

  • Python 3.8 or higher
  • Git installed on your system
  • Labellerr API credentials (API Key, API Secret, Client ID) - Get your credentials
  • Claude Desktop or Cursor installed

Installation

First, clone the SDKPython repository:

```shell
git clone https://github.com/Labellerr/SDKPython.git
cd SDKPython
```

Then install the required dependencies:

```shell
pip install -r requirements.txt
```

Configuration

Add to `~/.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "labellerr": {
      "command": "python3",
      "args": ["/FULL/PATH/TO/SDKPython/labellerr/mcp_server/server.py"],
      "env": {
        "LABELLERR_API_KEY": "your_api_key",
        "LABELLERR_API_SECRET": "your_api_secret",
        "LABELLERR_CLIENT_ID": "your_client_id"
      }
    }
  }
}
```
Important: Use the absolute path to server.py. Restart your AI assistant completely after configuration.
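If tools still don't appear after restarting, a quick sanity check on the config file often finds the culprit. The helper below is illustrative (it is not part of the SDK): it parses the JSON and flags the two most common mistakes, a relative path and a missing credential:

```python
import json

REQUIRED_ENV = {"LABELLERR_API_KEY", "LABELLERR_API_SECRET", "LABELLERR_CLIENT_ID"}

def check_mcp_config(text):
    """Parse an mcp.json document and return a list of likely problems."""
    problems = []
    cfg = json.loads(text)  # raises json.JSONDecodeError on syntax errors
    server = cfg.get("mcpServers", {}).get("labellerr")
    if server is None:
        return ["no 'labellerr' entry under 'mcpServers'"]
    args = server.get("args", [])
    # Absolute paths start with "/" on macOS/Linux or a drive letter on Windows
    if not args or not (args[0].startswith("/") or args[0][1:3] == ":\\"):
        problems.append("path to server.py is not absolute")
    missing = REQUIRED_ENV - set(server.get("env", {}))
    if missing:
        problems.append("missing env vars: %s" % sorted(missing))
    return problems
```

Run it against the contents of `~/.cursor/mcp.json`; an empty list means the basics are in order.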

Available Tools

The MCP server provides 23 comprehensive tools organized into 5 functional categories to manage your complete annotation workflow:
| Tool | Description |
| --- | --- |
| `project_create` | Create a new annotation project (requires `dataset_id` and `template_id`) |
| `project_list` | List all projects in your workspace |
| `project_get` | Get detailed information about a specific project |
| `project_update_rotation` | Update annotation rotation configuration |
| `dataset_create` | Create a dataset with automatic file upload and status polling |
| `dataset_upload_files` | Upload individual files to create a dataset |
| `dataset_upload_folder` | Upload an entire folder of files |
| `dataset_list` | List all datasets with filtering options |
| `dataset_get` | Get detailed information about a dataset |
| `template_create` | Create an annotation template with questions/guidelines |
| `annotation_export` | Export project annotations in various formats (JSON, COCO, CSV, PNG) |
| `annotation_check_export_status` | Check the status of export jobs |
| `annotation_download_export` | Get the download URL for completed exports |
| `annotation_upload_preannotations` | Upload pre-annotations (synchronous) |
| `annotation_upload_preannotations_async` | Upload pre-annotations (asynchronous) |
| `monitor_system_health` | Check MCP server health and connection status |
| `monitor_active_operations` | List all active operations and their status |
| `monitor_project_progress` | Get progress statistics for a project |
| `monitor_job_status` | Monitor background job status |
| `query_project_statistics` | Get detailed statistics for a project |
| `query_dataset_info` | Get detailed information about a dataset |
| `query_operation_history` | Query the history of operations performed |
| `query_search_projects` | Search for projects by name or type |

Usage Examples

Interact with the MCP server using natural language commands through your AI assistant. Below are common usage patterns:

Create a Complete Project

Create an image annotation project called "Product Detection" with 
bounding boxes for "Product" and "Defect". Upload images from 
/Users/me/products and use my email [email protected]
Automated workflow:
  1. Upload images from the specified folder
  2. Create a dataset with uploaded files
  3. Generate an annotation template with bounding box questions
  4. Create and link the project with all resources

Monitor Project Progress

What's the progress on all my annotation projects?

Export Annotations

Export all accepted annotations from project abc123 in COCO JSON format

Upload Additional Data

Upload images from /Users/me/more-data to dataset xyz789

Project Creation Workflow

The MCP server implements a structured three-step workflow for creating annotation projects. When you request a complete project, the server automatically executes all steps in sequence.
Automated Workflow: Describe your requirements in natural language, and the MCP server handles the complete workflow automatically.

Step 1: Create Dataset

The server uploads your files and creates a dataset.

Process:
  • Files are uploaded to cloud storage
  • Dataset is created and linked
  • Processing status is monitored until completion
Example request:
Create a dataset called "Training Data" from folder /path/to/images
Server response:
Uploaded 150 images
Dataset created: dataset_abc123
Dataset ready for annotation

Step 2: Create Template

The server creates an annotation template with your specified questions.

Process:
  • Annotation questions are defined
  • Question types are configured (BoundingBox, Polygon, etc.)
  • Template is validated and saved
Example request:
Create an annotation template with bounding boxes for "Car" and "Truck"
Server response:
Template created: template_xyz789
Questions: Car (BoundingBox), Truck (BoundingBox)
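Conceptually, a template like this boils down to a list of question definitions, one per label. The sketch below is illustrative only (the field names are assumptions, not the actual Labellerr template schema):

```python
def bounding_box_questions(labels):
    """Build one BoundingBox question per label (illustrative schema)."""
    return [{"question": label, "type": "BoundingBox"} for label in labels]
```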

Step 3: Create Project

The server links the dataset and template to create your project.

Process:
  • Dataset and template are validated
  • Project is created with proper configuration
  • Resources are linked and ready for annotation
Example request:
Create a project using dataset dataset_abc123 and template template_xyz789
Server response:
Project created: project_final_456
Linked dataset: dataset_abc123 (150 files)
Linked template: template_xyz789
Ready to start annotation
Unified Command: You can create everything in one natural language request:
Create an image annotation project called "Vehicle Detection" with 
bounding boxes for "Car" and "Truck". Upload images from 
/Users/me/vehicles and use my email [email protected]
The MCP server will automatically execute all three steps and provide you with the final project ID.
Workflow Benefits:
  • Proper resource management
  • Clear project structure
  • Validation at each step
  • Automatic error handling
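The three-step sequence can be pictured as a small orchestration function. This is a sketch of the flow, not the server's actual implementation; the tool calls are injected as plain callables so no SDK signatures are assumed:

```python
def create_complete_project(folder, labels, email,
                            create_dataset, create_template, create_project):
    """Run the three-step workflow: dataset -> template -> project.

    The three callables stand in for the MCP tools dataset_upload_folder,
    template_create, and project_create; each step consumes the IDs
    returned by the previous ones, which is why the order is fixed.
    """
    dataset_id = create_dataset(folder)        # Step 1: upload files, get dataset ID
    template_id = create_template(labels)      # Step 2: define annotation questions
    return create_project(dataset_id, template_id, email)  # Step 3: link everything
```

Because each step returns the identifier the next step needs, a failure in any step stops the chain before an inconsistent project can be created.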

Supported Data Types

The MCP server supports the following data types across all tools:
| Data Type | Supported Formats |
| --- | --- |
| Image | JPG, PNG, TIFF |
| Video | MP4 |
| Audio | MP3, WAV |
| Document | PDF |
| Text | TXT |

Export Formats

Annotations can be exported in multiple industry-standard formats:
| Format | Description |
| --- | --- |
| `json` | Labellerr native format with complete annotation metadata |
| `coco_json` | COCO dataset format for computer vision tasks |
| `csv` | Comma-separated values for tabular data analysis |
| `png` | Segmentation masks for pixel-level annotations |

Error Handling

The MCP server provides descriptive error messages to help diagnose and resolve issues:
Error: dataset_id is required
Message: Please create a dataset first using dataset_upload_folder
All operations are logged in the operation history and can be queried using natural language:
Show me the history of operations I've performed

Troubleshooting

AI Assistant Doesn’t Show Tools

Resolution steps:
  1. Completely restart your AI assistant (quit and reopen the application)
  2. Verify the path to server.py is absolute (starts with / on macOS/Linux or C:\ on Windows)
  3. Test the server manually: python3 /path/to/server.py
  4. Check the MCP configuration file for syntax errors

Authentication Errors

Resolution steps:
  1. Obtain fresh credentials from your Labellerr workspace
  2. Update the configuration file with the correct API credentials
  3. Ensure all three credentials (API Key, API Secret, Client ID) are present
  4. Restart your AI assistant after updating credentials

File Upload Issues

Common causes and solutions:
  • Verify file paths are absolute, not relative
  • Check file system permissions for read access
  • Confirm file formats match the specified data type
  • Ensure sufficient disk space for upload operations
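A pre-flight check along these lines can catch most of the issues above before an upload starts. This is a sketch, not SDK code; the `SUPPORTED` map covers only the image formats from the "Supported Data Types" table and would need extending for other data types:

```python
from pathlib import Path

# Image formats from the "Supported Data Types" table; other types omitted.
SUPPORTED = {"image": {".jpg", ".jpeg", ".png", ".tif", ".tiff"}}

def preflight(folder, data_type="image"):
    """Return a list of problems that would likely break an upload."""
    issues = []
    root = Path(folder)
    if not root.is_absolute():
        issues.append("path must be absolute, not relative")
    if not root.is_dir():
        issues.append("folder does not exist or is not readable")
        return issues
    allowed = SUPPORTED[data_type]
    unsupported = [p.name for p in root.iterdir()
                   if p.is_file() and p.suffix.lower() not in allowed]
    if unsupported:
        issues.append("unsupported files: %s" % unsupported[:5])
    return issues
```

Run it on the folder you plan to pass to `dataset_upload_folder`; an empty list means the common failure modes are ruled out.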

Advanced Features

Operation History

The server maintains a comprehensive log of all operations with timestamps, durations, and status information. Query the operation history using natural language:
Show me failed operations from the last hour

Resource Caching

Active projects and datasets are cached in memory for improved performance and faster access. View currently cached resources:
What Labellerr resources are currently active?

Status Polling

Dataset creation includes automatic status polling to monitor processing completion. Configure timeout settings as needed:
```python
dataset_create(
  folder_path="/path/to/data",
  wait_for_processing=True,
  processing_timeout=300  # 5 minutes
)
```
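The polling behaviour amounts to a loop with a deadline. This generic sketch (not the SDK's code) shows how a `processing_timeout`-style parameter bounds the wait:

```python
import time

def poll_until_ready(check_status, timeout=300, interval=5):
    """Call check_status() until it returns 'ready', 'failed', or time runs out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = check_status()
        if status == "ready":
            return True
        if status == "failed":
            raise RuntimeError("dataset processing failed")
        time.sleep(interval)  # wait between status checks
    raise TimeoutError("dataset not ready after %s seconds" % timeout)
```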
