Ableton Meets AI: How I Turned Claude Into My Studio Partner
I need to tell you about the most fun I've had coding in years.
Picture this: it's 3 AM, I'm in the studio, I've got a groove going in Ableton but I can't nail the chord progression. Normally I'd spend the next hour trying random voicings until something clicks. Instead, I opened Claude and typed "give me a jazzy chord progression in Eb minor with some tension." And I watched the MIDI notes appear in my session. In real time. Inside Ableton.
That moment — that's when I knew this project was going to eat my life for a while. And honestly? Totally worth it.
What Is AbletonMCP?
The original AbletonMCP was created by Siddharth Ahuja — a brilliant concept that connects Ableton Live to Claude AI through the Model Context Protocol (MCP). Claude can see your session, create tracks, write MIDI, load instruments, add effects, control playback. Through natural language. No mouse needed.
I saw it and my brain exploded. As someone who's been producing music for years and writing code even longer, this was the perfect collision of my two worlds. But when I actually tried to use it for real production work, I kept hitting walls. Only 15 basic tools. Blocking connections that froze everything. No music theory knowledge. One massive file that was impossible to extend.
So I did what I always do. I forked it and went to town.
My fork: github.com/Jeff909Dev/ableton-mcp
How It Works
Three layers talking to each other. Simple concept, powerful result:
┌──────────────────────────────────────┐
│              CLAUDE AI               │
│                                      │
│   "Create a synthwave track with     │
│    drums, bass, and a pad in Am"     │
└──────────────┬───────────────────────┘
               │ MCP Protocol
               ▼
┌──────────────────────────────────────┐
│         MCP SERVER (Python)          │
│                                      │
│  ┌────────────────────────────────┐  │
│  │  Async Connection Layer        │  │
│  │  Response Cache (TTL-based)    │  │
│  │  Batch Command Support         │  │
│  └────────────────────────────────┘  │
│                                      │
│  8 Tool Modules:                     │
│  Session · Track · Clip · Scene      │
│  Device · Transport · Browser · AI   │
└──────────────┬───────────────────────┘
               │ TCP JSON (port 9877)
               ▼
┌──────────────────────────────────────┐
│        ABLETON REMOTE SCRIPT         │
│        (Inside Ableton Live)         │
│                                      │
│  Socket Server → Command Router      │
│  61 handlers → Ableton Live API      │
└──────────────────────────────────────┘
When you ask Claude "add some reverb to the drums," it figures out which tools to call, sends JSON commands over TCP to a Remote Script living inside Ableton, and that script talks directly to Ableton's Python API. Results flow back the same way. The whole thing takes milliseconds.
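To make the wire format concrete, here's a minimal sketch of what a client-side command serializer could look like. The field names ("type", "params") and the command name are my illustrative assumptions, not the project's documented schema:

```python
import json

def build_command(command_type: str, params: dict) -> bytes:
    """Serialize one command as newline-delimited JSON, ready for sock.sendall().
    Field names here are illustrative, not the project's exact schema."""
    return (json.dumps({"type": command_type, "params": params}) + "\n").encode("utf-8")

# e.g. "add some reverb to the drums" might boil down to something like:
msg = build_command("load_device", {"track_index": 0, "device_name": "Reverb"})
```

The newline delimiter makes it trivial for the Remote Script to split messages off the TCP stream.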
What I Changed (And Why It Matters)
From 15 Tools to 67
The original could create a track and drop some notes. Cool for a demo. But try to actually produce a song and you'll need to delete tracks, duplicate clips, adjust device parameters, search the browser, undo mistakes... I added everything a real session needs:
- Track management: create, delete, duplicate, arm, solo, mute, volume, pan, sends
- Clip operations: read notes, remove notes, duplicate, loop settings, quantize
- Scene control: create, delete, duplicate, fire, stop all clips
- Device control: get parameters, set by name or index, toggle on/off, delete
- Transport: undo, redo, metronome, loop, capture MIDI, tap tempo
- Browser: text search with aggressive caching for instant lookups
That's nearly 4.5x the tool count of the original. Every single one was added because I personally needed it while producing.
Async Everything (No More Freezing)
The original used blocking TCP sockets with time.sleep() calls everywhere. Every. Single. Command. Froze the server while waiting for Ableton's response. If you asked Claude to create 5 tracks, it would go: create... wait... create... wait... create... wait...
I ripped all of that out and replaced it with asyncio. Now Claude fires commands without waiting. The difference is ridiculous — what used to feel laggy now feels instant.
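The pattern, in miniature: fire every command concurrently and await them together, so the total wait is roughly one round-trip instead of five. The `send_command` below just simulates a TCP round-trip; the real connection layer awaits actual responses from Ableton:

```python
import asyncio

async def send_command(name: str) -> str:
    # Stand-in for an awaitable TCP round-trip to Ableton
    await asyncio.sleep(0.01)
    return f"{name}: ok"

async def create_tracks(n: int) -> list:
    # Fire all n commands at once instead of create... wait... create... wait
    return await asyncio.gather(*(send_command(f"create_track {i}") for i in range(n)))

results = asyncio.run(create_tracks(5))
```

With blocking sockets, five commands cost five full round-trips back to back; with `asyncio.gather` they overlap, which is exactly why the difference feels instant.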
Smart Caching
Here's something I learned the hard way: Claude asks Ableton "what's your current state?" a LOT. Like, before almost every operation. And every time, that's a TCP round-trip.
So I built a TTL-based cache. Session state gets cached for 2 seconds (it changes when you're working). Browser items get cached for 60 seconds (your sound library doesn't change mid-session). This alone cut redundant network calls by ~80%.
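A minimal TTL cache along those lines looks like this — the class name and API are illustrative, not the fork's actual cache.py interface:

```python
import time

class TTLCache:
    """Tiny TTL cache sketch: entries expire after a per-key time-to-live."""

    def __init__(self):
        self._store = {}  # key -> (expiry timestamp, value)

    def set(self, key, value, ttl):
        self._store[key] = (time.monotonic() + ttl, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.monotonic() >= expiry:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

cache = TTLCache()
cache.set("session_state", {"tempo": 124}, ttl=2.0)   # volatile while you work
cache.set("browser_items", ["Operator"], ttl=60.0)    # library rarely changes
```

The trick is picking TTLs per data class: anything the user can change mid-thought gets seconds, anything static gets minutes.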
Batch Commands
Instead of one command per TCP call, I added batching. When Claude wants to create 5 tracks and set their volumes, that's 1 network call instead of 10. Simple optimization, massive impact.
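Sketched out, batching is just wrapping many commands in one payload — field names here are my assumption for illustration:

```python
import json

def build_batch(commands) -> bytes:
    """Pack a list of commands into a single TCP payload (shape illustrative)."""
    return (json.dumps({"type": "batch", "commands": commands}) + "\n").encode("utf-8")

# 5 track creations + 5 volume sets = 1 network call instead of 10
payload = build_batch(
    [{"type": "create_midi_track"} for _ in range(5)]
    + [{"type": "set_track_volume", "params": {"track_index": i, "volume": 0.85}}
       for i in range(5)]
)
```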
Full Session Snapshot
Classic N+1 problem: the old code queried the session, then each track individually, then each clip on each track. I added get_full_session_state — one call, entire session. 70% latency reduction on the most common operation.
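The arithmetic behind that fix is simple but brutal — round-trips needed to read a session of t tracks with c clips each, old pattern versus snapshot:

```python
def n_plus_one_round_trips(tracks: int, clips_per_track: int) -> int:
    # Old pattern: one session query, one per track, one per clip
    return 1 + tracks + tracks * clips_per_track

def snapshot_round_trips(tracks: int, clips_per_track: int) -> int:
    # New pattern: one call returns the entire session
    return 1

# e.g. 8 tracks x 4 clips: 41 round-trips collapse to 1
```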
The Music Theory Engine
OK, this is the part where I got a little obsessed.
I built a 1,128-line AI music theory module from scratch. This isn't just "pick random notes in a scale." This is proper music theory:
23 chord types — major, minor, dim, aug, sus2, sus4, every 7th variation you can think of, add9, 6th chords. When Claude places a Cmaj7, it knows exactly which four notes that is and how to voice it.
15 scales — major, all three minors, dorian, phrygian, lydian, mixolydian, both pentatonics, blues, whole tone, diminished. Ask for "a melody in lydian" and every note will be correct.
10 rhythm styles — straight, swing, shuffle, funk, triplet, syncopated... Each with timing offsets that make beats feel human, not robotic.
Intelligent generation — chord progressions that follow real harmonic rules, basslines that lock with the chords, melodies that respect scale and rhythm constraints, harmonic analysis of existing note sequences.
When you tell Claude "create a jazz progression in Bb," it picks from ii-V-I patterns, uses appropriate extensions (9ths, 13ths, altered dominants), applies the right voicings, and places notes with musical timing. It's not guessing. It knows.
This is what happens when a DJ who studied computer engineering builds a music theory engine at 4 AM. You get something unreasonably thorough.
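The core idea behind the chord side of it: each chord type is an interval formula applied to a root note. The table below is a small excerpt (this is standard theory), but the function name and shape are illustrative, not the module's actual API:

```python
# Interval formulas in semitones above the root (standard music theory)
CHORD_INTERVALS = {
    "maj":  (0, 4, 7),
    "min":  (0, 3, 7),
    "maj7": (0, 4, 7, 11),
    "min7": (0, 3, 7, 10),
    "dom7": (0, 4, 7, 10),
}
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def chord_to_midi(root: str, quality: str, octave: int = 4) -> list:
    """Turn a chord symbol into concrete MIDI note numbers."""
    root_midi = 12 * (octave + 1) + NOTE_NAMES.index(root)  # C4 = MIDI 60
    return [root_midi + interval for interval in CHORD_INTERVALS[quality]]

print(chord_to_midi("C", "maj7"))  # [60, 64, 67, 71] — C, E, G, B
```

Scales, voicings, and progressions all build on the same primitive: intervals over a root, constrained by context.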
Modular Architecture
The original was one giant server.py. Fine for a prototype. Impossible to maintain or extend.
I split everything into clean modules:
MCP_Server/
├── connection.py            # Async TCP, auto-reconnect
├── cache.py                 # TTL-based response cache
└── tools/
    ├── session_tools.py     # Session/tempo/master
    ├── track_tools.py       # Track CRUD
    ├── clip_tools.py        # Clip ops & MIDI notes
    ├── scene_tools.py       # Scene management
    ├── device_tools.py      # Effect/instrument params
    ├── transport_tools.py   # Play/stop/record/undo
    ├── browser_tools.py     # Library search & nav
    └── ai_tools.py          # Music theory engine
Want to add a new tool? Drop a function in the right module. It auto-registers. Zero merge conflicts, zero spaghetti code.
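One common way to get that auto-registration behavior is a decorator that records each tool at import time — the registry shape and decorator name below are my assumptions, not necessarily the fork's exact mechanism:

```python
TOOL_REGISTRY = {}

def tool(fn):
    """Register a function as an MCP tool under its own name."""
    TOOL_REGISTRY[fn.__name__] = fn
    return fn

@tool
def set_tempo(bpm: float) -> dict:
    # In the real server this would be forwarded to the Remote Script
    return {"type": "set_tempo", "params": {"bpm": bpm}}
```

Because registration happens as a side effect of the decorator, importing a tools module is enough; there's no central list anyone has to edit, which is what kills the merge conflicts.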
How To Set It Up
Want to try it yourself? Here's the quick version:
Prerequisites
- Ableton Live (10 or 11+)
- Claude Desktop with MCP support
- Python 3.8+
Installation
1. Clone my fork:
git clone https://github.com/Jeff909Dev/ableton-mcp.git
cd ableton-mcp
2. Install the Remote Script in Ableton:
# Automated install
./install.sh
# Or manually: copy the AbletonMCP folder to
# macOS: /Applications/Ableton Live*/Contents/App-Resources/MIDI Remote Scripts/
# Windows: C:\ProgramData\Ableton\Live*\Resources\MIDI Remote Scripts\
3. Enable the Remote Script in Ableton:
- Open Ableton Preferences → Link, Tempo & MIDI
- Under Control Surface, select "AbletonMCP"
- You should see "Connected" in Ableton's status bar
4. Configure Claude Desktop:
Add this to your Claude Desktop MCP config (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
{
  "mcpServers": {
    "ableton-mcp": {
      "command": "python",
      "args": ["/path/to/ableton-mcp/MCP_Server/server.py"]
    }
  }
}
5. Restart Claude Desktop and start producing!
Open Ableton, open Claude, and try: "What's the current state of my session?"
If Claude responds with your session details, you're connected.
Real Examples (Things I Actually Do)
Here's how I use this in my production workflow:
Quick idea sketching: "Create 4 MIDI tracks — drums, bass, synth lead, and a pad. Set the tempo to 124 BPM. Give me a tech house drum pattern with a kick on every beat and an offbeat hi-hat." I go from zero to a working sketch in under a minute.
Harmony exploration: "I have this melody on track 2. What key is it in? Now create a chord progression that works with it, something moody but not too dark." Claude analyzes the notes, suggests chords, and writes them.
Sound design: "Find me a dark, detuned saw bass in the browser and load it on track 3. Add a low-pass filter and some distortion." Claude navigates my library, finds the right sound, and sets up the signal chain.
Quick fixes: "The snare on track 1 is too loud. Drop it 3dB and add a bit of reverb to glue it with the room." Takes 2 seconds instead of hunting through mixer controls.
Learning: "Explain the chord progression on track 4 and suggest where it could go next." Claude reads the MIDI, does harmonic analysis, and teaches you music theory in context.
The Hard Parts
A few things that made this challenging:
Python 2/3 compatibility: Ableton 10 runs Python 2. Ableton 11+ runs Python 3. The Remote Script has to work in both. Conditional imports, careful string/bytes handling, and lots of "wait, which Python is this running in?" moments.
Main thread constraint: Ableton's API is not thread-safe. Every state-modifying call must happen on the main thread. The Remote Script uses schedule_message() to dispatch safely while reads happen on the socket thread.
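The pattern boils down to a thread-safe queue: the socket thread only enqueues work, and a main-thread tick (driven by `schedule_message()` in the real Remote Script) is the only place Live state gets touched. Names below are illustrative:

```python
import queue

_pending = queue.Queue()

def on_socket_command(command):
    """Runs on the socket thread — never calls the Live API directly."""
    _pending.put(command)

def main_thread_tick(apply_to_live):
    """Runs on Ableton's main thread — the only safe place to mutate state."""
    while not _pending.empty():
        apply_to_live(_pending.get_nowait())
```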
No pip inside Ableton: The Remote Script runs in a sandbox. No external packages. Everything is built with Python's standard library and Ableton's internal _Framework API. You appreciate requests a lot more when you have to write raw socket handling from scratch.
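Here's the flavor of plumbing that constraint forces you to write by hand — reading one newline-delimited JSON message off a socket, stdlib only (a simplified sketch, not the Remote Script's actual loop):

```python
import json
import socket

def read_json_line(sock):
    """Accumulate bytes until a newline, then decode one JSON message."""
    buf = b""
    while not buf.endswith(b"\n"):
        chunk = sock.recv(4096)
        if not chunk:  # peer closed mid-message
            break
        buf += chunk
    return json.loads(buf.decode("utf-8"))
```

You can exercise it locally with `socket.socketpair()` — no Ableton required.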
What's Next
I keep adding tools as I need them. Some things on my radar:
- Automation lane control (draw volume curves, filter sweeps)
- Audio clip manipulation (warp markers, stretch modes)
- Better instrument rack navigation
- Export/bounce from Claude
- A web UI to visualize what Claude is doing
Links
- My fork (with all improvements): github.com/Jeff909Dev/ableton-mcp
- Original project by Siddharth Ahuja: github.com/ahujasid/ableton-mcp
- Smithery registry: smithery.ai/@ahujasid/ableton-mcp
- MCP Protocol docs: modelcontextprotocol.io
Credits
Massive respect to Siddharth Ahuja for creating AbletonMCP and proving that this crazy idea actually works. The foundation was rock solid. I just... couldn't stop adding to it.
If you're a producer who uses Ableton and Claude, there's genuinely no reason not to try this. It's free, it's open source, and it will change how you think about your production workflow.
The future of music isn't AI replacing artists. It's artists with AI making things that neither could make alone. And honestly? We're just getting started.