MCP Server for Salesforce

Your AI assistant can operate Salesforce safely.

Read everything. Plan changes. Apply only with a locked plan hash. g-gremlin gives AI agents in Claude Desktop, Cursor, or Windsurf structured Salesforce access: SOQL queries, object introspection, deterministic snapshots, metadata deployment, drift detection, and report listing.

Every mutation requires a SHA-256 plan hash.

Public beta is live. Read tools work without a FoundryOps license (Salesforce auth still required). Start a 30-day trial to unlock licensed write and admin tools.

Claude Desktop

You:

Show me the schema for our Lead object in Salesforce

Claude:

Running sfdc.describe with sobject="Lead"...

Lead Object Schema

Fields: 72 | Custom: 18 | Required: 5

Key fields:

Email (Email, required)

Company (String, required)

Lead_Score__c (Number, custom)

Status (Picklist: Open, Working, Closed)

Claude Desktop / Cursor / Windsurf Config

Add this to your MCP client configuration. Read and analyze tools are available by default.

{
  "mcpServers": {
    "g-gremlin-sfdc": {
      "command": "g-gremlin",
      "args": ["mcp", "serve", "--provider", "sfdc"]
    }
  }
}

To enable write tools, add "--enable-writes" to the args array.
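For example, the same block with writes enabled — only the extra flag changes, everything else stays as-is:

```json
{
  "mcpServers": {
    "g-gremlin-sfdc": {
      "command": "g-gremlin",
      "args": ["mcp", "serve", "--provider", "sfdc", "--enable-writes"]
    }
  }
}
```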

Safety Contract

How Writes Are Governed

Four layers of protection between an AI agent and your Salesforce org.

1. Default: Read-Only

8 read + 4 analyze tools are registered by default. No write tools are exposed unless explicitly enabled.

2. Writes Require --enable-writes

Write tools are only registered when the server starts with the explicit flag. You control this at the server level, not the AI agent level.

3. Mutations Are Plan → Apply

Every write operation has a corresponding plan tool. The plan step produces a preview and a SHA-256 plan_hash. No plan, no mutation.

4. Hash Mismatch = Rejected

Apply requires the exact plan_hash from the plan step. If anything changed — different parameters, stale plan, org drift — the hash won't match and the operation is rejected.
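Conceptually, the contract works like content-addressing: hash a deterministic serialization of the plan, then require the same hash at apply time. A minimal sketch in Python of that idea (assumed logic for illustration, not g-gremlin's actual implementation):

```python
import hashlib
import json

def plan_hash(plan: dict) -> str:
    """Hash a deterministic serialization of a plan (illustrative only)."""
    # Sorted keys + fixed separators make the serialization canonical,
    # so the same plan always yields the same hash.
    canonical = json.dumps(plan, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()

plan = {"type": "metadata_pack", "components": 12}
locked = plan_hash(plan)

# At apply time, recompute and compare: any change to the plan --
# different parameters, a drifted baseline -- changes the hash,
# and a mismatch means the apply is rejected.
drifted = {"type": "metadata_pack", "components": 13}
assert plan_hash(drifted) != locked
```

The real server may hash different inputs (baseline fingerprints, component digests), but the property is the same: apply only proceeds against the exact plan that was previewed.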

What Salesforce Teams Actually Need

Every feature exists because someone hit a wall trying to use AI with Salesforce.

AI assistants can't run SOQL from Claude Desktop

sfdc.query runs any SOQL query and returns structured rows. Filter, aggregate, join — whatever your org allows.
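For instance, an agent's call to sfdc.query might look roughly like this — the payload shape is illustrative, not the exact wire format, and the SOQL simply reuses fields from the Lead example earlier on this page:

```json
{
  "tool": "sfdc.query",
  "arguments": {
    "query": "SELECT Id, Email, Lead_Score__c FROM Lead WHERE Status = 'Open' LIMIT 5"
  }
}
```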

You need to understand an object before you can work with it

sfdc.describe returns the full SObject schema: fields, types, picklist values, relationships. Ask Claude to explain your data model.

No way to get a point-in-time CRM state from an AI assistant

sfdc.snapshot exports deterministic manifests with row hashes and field checksums. Compare snapshots to see exactly what changed.
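A snapshot manifest can be pictured roughly like this — the field names here are assumptions for illustration, not the tool's actual output format:

```json
{
  "object": "Lead",
  "rows": [
    { "Id": "00Q...", "row_hash": "sha256:..." }
  ],
  "field_checksums": { "Email": "sha256:..." }
}
```

Because row hashes and checksums are deterministic, diffing two manifests pinpoints exactly which records and fields changed between snapshots.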

Metadata deployments from an AI agent sound terrifying

Every write requires a plan_hash from the preview step. If anything changed between plan and apply, the hash won't match and the operation is rejected.

Metadata drifts between environments and nobody notices

sfdc.metadata_pack.drift compares your local source against the live org. See exactly what's out of sync, by component.

Generating package.xml by hand is error-prone

sfdc.manifest_generate scans a source directory and produces a correct package.xml. Supports Flow, ApexClass, LWC, Layout, and more.

Salesforce reports are locked inside the browser UI

sfdc.reports.list surfaces every report in your org via MCP. Report export is available via the CLI. Feed report metadata into Claude for analysis without leaving your IDE.

14 MCP Tools

Structured JSON responses designed for AI agent consumption, not human terminal output.

Tier 1: Read & Discover (READ)

sfdc.whoami

Check auth, show org identity (username, org ID, instance URL)

sfdc.doctor

Health diagnostics (sf CLI version, auth status, API connectivity)

sfdc.query

Run a SOQL query and return structured rows

sfdc.describe

Full SObject schema (fields, types, picklist values, relationships)

sfdc.snapshot

Deterministic snapshot with row hashes and field checksums

sfdc.audit

Cross-check record IDs from a CSV against live Salesforce records

sfdc.reports.list

List all Salesforce reports in the org

sfdc.manifest_generate

Generate package.xml from a local source directory

Tier 2: Analyze & Plan (ANALYZE)

sfdc.metadata_pack.plan

Plan metadata pack changes, lock baseline fingerprint

sfdc.metadata_pack.drift

Compare local metadata source against live org

sfdc.metadata_pack.verify

Run post-deploy smoke assertions on metadata

sfdc.pack.plan

Plan data pack updates against snapshot baseline

Tier 3: Mutate (WRITE)

sfdc.metadata_pack.apply

Deploy metadata changes (requires plan_hash from plan step)

sfdc.pack.apply

Apply data pack changes (requires plan_hash from plan step)

Two-Phase Safety on Every Mutation

Nothing writes to your Salesforce org until you've reviewed the plan. Every mutation requires a cryptographic hash.

1. Plan (default)

Tool runs without making changes. Returns a full preview of what would happen, plus a SHA-256 plan_hash.

"plan": { "components": 12, "type": "metadata_pack" },
"plan_hash": "sha256:a3f8c2e9...",
"baseline_locked": true

2. Apply (explicit)

Caller passes the plan_hash. If the hash doesn't match — wrong plan, stale data, org drift — rejected with a clear error.

"plan_hash": "sha256:a3f8c2e9...",
"ok": true, "deployed": 12

Built for Your Role

⚙️

Salesforce Admins

Stop clicking through Setup. Ask Claude to describe objects, check metadata drift, generate package.xml, and preview deployments — with guardrails that won't let anything deploy until you verify the plan.

🛠️

Developers / Architects

14 MCP tools. Structured JSON responses. Two-phase mutation safety with SHA-256 plan hashing. Supports Flow, ApexClass, LWC, Layout, and more. Requires the MCP-enabled g-gremlin build.

💡

AI-Curious Ops Teams

You've heard AI can help with Salesforce work. But raw sf CLI is too complex for AI agents to drive safely. This gives them structured tools with built-in safety.

PUBLIC BETA

Start Free Trial

Public beta is live. Start a 30-day free trial for full access.

Setup steps:

1. Install
   pipx install 'g-gremlin[mcp]'

2. Authenticate with sf CLI
   sf org login web --alias myorg

3. Add to your MCP client
   Claude Desktop / Cursor / Windsurf: one JSON block in your MCP config.

FAQ

Common Questions

How is this different from using sf CLI directly?

sf CLI is a terminal tool for humans. The MCP server wraps it into structured tools that AI agents in Claude Desktop, Cursor, or Windsurf can call directly. Responses are structured JSON designed for programmatic consumption.

Is it safe to let an AI deploy metadata to my Salesforce org?

Every mutation is gated behind two-phase safety. The plan step produces a preview and a SHA-256 plan_hash. The apply step requires both --enable-writes on the server AND the exact plan_hash. If the org changed between plan and apply, the hash won't match and the operation is rejected.

What metadata types are supported?

Flow, FlexiPage, Layout, ApexClass, ApexTrigger, LightningComponentBundle (LWC), RecordType, CustomField, PermissionSet, ValidationRule, and Queue. Coverage is expanding with each release.

Which MCP clients are supported?

Claude Desktop, Cursor, Windsurf, and any MCP-compatible client. The server uses stdio transport.

How do I install it?

pipx install 'g-gremlin[mcp]', authenticate with sf CLI (sf org login web), and add the server to your MCP client config. Requires the MCP-enabled g-gremlin build — check release notes for availability.

Can I use read tools without enabling writes?

Yes. By default, only read and analyze tools are exposed. Write tools are only registered when you start the server with --enable-writes. Most workflows — querying, describing objects, taking snapshots, checking drift — work without writes enabled.

Your AI assistant just got Salesforce admin access.

Public beta is live. Start a 30-day free trial for full access.
