LOL yeah I agree, we're definitely building in a crowded space. I'm very hopeful though about the amount of utility that'll be created in the agent orchestration space! There's a lot that can be built if we successfully make developers 10x more productive.
There's something you're not explaining (at least I couldn't find it, sorry if you do cover it): how do you manage app state? Basically, databases?
Most of these agent solutions focus on git branches and worktrees, but none of them seem to mention databases. How do you handle them? For example, in my projects this would mean I'd need ten different copies of my database. What about the other microservices involved, like Redis, Celery, etc.? Are you duplicating (10-plicating) all of them?
If this works flawlessly it would be very powerful, but I think it still needs to solve more issues than just filesystem conflicts.
Great question! Currently Superset manages worktrees and runs setup/teardown scripts that you define at project setup. Those scripts can install dependencies, transfer env variables, and spin up branching services.
For example:
• if you’re using Neon/Supabase, your setup script can create a DB branch per workspace
• if you’re using Docker, the script can launch isolated containers for Redis/Postgres/Celery/etc (see the sketch just below the list)
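To make that Docker case concrete, here's a minimal sketch of a per-workspace setup script. The WORKSPACE variable, container names, and .env.local path are illustrative assumptions, not Superset's actual script contract:

    import os
    import socket
    import subprocess

    # Hypothetical: however your tool hands the workspace/worktree name to scripts.
    ws = os.environ.get("WORKSPACE", "default")

    def free_port() -> int:
        # Ask the OS for an unused port (tiny race window; fine for dev).
        with socket.socket() as s:
            s.bind(("127.0.0.1", 0))
            return s.getsockname()[1]

    redis_port, pg_port = free_port(), free_port()

    # One isolated Redis and Postgres per workspace.
    subprocess.run(["docker", "run", "-d", "--name", f"redis-{ws}",
                    "-p", f"{redis_port}:6379", "redis:7"], check=True)
    subprocess.run(["docker", "run", "-d", "--name", f"pg-{ws}",
                    "-e", "POSTGRES_PASSWORD=dev",
                    "-p", f"{pg_port}:5432", "postgres:16"], check=True)

    # Point this worktree's app at its own services.
    with open(".env.local", "a") as f:
        f.write(f"REDIS_URL=redis://127.0.0.1:{redis_port}\n")
        f.write(f"DATABASE_URL=postgresql://postgres:dev@127.0.0.1:{pg_port}/postgres\n")

The matching teardown script would just remove those containers (docker rm -f); a Neon/Supabase variant would swap the docker calls for their branch-create CLI/API calls.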
Currently we only orchestrate when the scripts run, and let the user define what they do for each project, because every stack is different. This is a point of friction we're working to reduce by adding features that help users automatically generate setup/teardown scripts that work for their projects.
We're also building cloud workspaces that will hopefully solve this issue for you, without limiting users to their local hardware.
For PG you can use that (close to) instant database-copy feature they have: copy the DB, do your work against the copy in a git worktree, then tear it down. With SQLite obviously you would just copy the file.
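The "magic copy" here is presumably Postgres template databases (CREATE DATABASE ... TEMPLATE ...), which do a fast file-level copy. A minimal sketch with psycopg2; the database names are illustrative:

    import psycopg2
    from psycopg2 import sql

    # CREATE DATABASE can't run inside a transaction, so use autocommit.
    conn = psycopg2.connect("dbname=postgres user=postgres host=127.0.0.1")
    conn.autocommit = True

    def branch_db(template: str, branch: str) -> None:
        # Note: fails if anything is still connected to the template DB.
        with conn.cursor() as cur:
            cur.execute(sql.SQL("CREATE DATABASE {} TEMPLATE {}").format(
                sql.Identifier(branch), sql.Identifier(template)))

    def drop_branch(branch: str) -> None:
        with conn.cursor() as cur:
            cur.execute(sql.SQL("DROP DATABASE IF EXISTS {}").format(
                sql.Identifier(branch)))

    branch_db("myapp", "myapp_ws1")  # setup: near-instant copy
    drop_branch("myapp_ws1")         # teardown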
Why aren’t you mocking your dependencies? I should be able to run a microservice without its third-party services and have it still work. If it doesn’t, it’s a distributed monolith.
For databases, if you can’t see a connection string in env vars, use sqlite:///:memory: and make a test DB like you do for unit testing.
For redis, provide a mock impl that gets/sets keys in a hash table or dictionary.
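A sketch of what that can look like: a dict-backed Redis stand-in plus an in-memory SQLite, implementing only the calls the app actually makes (the names and schema here are made up):

    import sqlite3

    class FakeRedis:
        """Dict-backed stand-in; implement only what the app calls."""
        def __init__(self):
            self._data = {}
        def get(self, key):
            return self._data.get(key)
        def set(self, key, value):
            self._data[key] = value
            return True
        def delete(self, *keys):
            return sum(self._data.pop(k, None) is not None for k in keys)

    # In-memory DB with the schema the app expects to exist.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

    cache = FakeRedis()
    cache.set("user:1", "a@example.com")
    assert cache.get("user:1") == "a@example.com"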
What higher fidelity do you get with a real Postgres over an in-memory SQLite, or even PGlite, or whatever?
The point isn’t that you shouldn’t have a database; the point is: what are your concerns? For me and my teams, we care about our code, the performance of that code, and the correctness of that code, and we don’t test against a live database, so that the separation of concerns between our app and its storage stays clear. We expect a database to be there. We expect it to have such-and-such schema. We don’t expect it to live at a certain address or have a certain configuration, as that is the database’s concern.
We tell our app at startup where that address is, or we don’t. The app should only care whether we did; if we didn’t, it needs to make one itself to work.
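Expressed as code, that startup logic is just a fallback. A minimal sketch, assuming DATABASE_URL as the env var and psycopg2 as the driver for the real case:

    import os
    import sqlite3

    url = os.environ.get("DATABASE_URL")
    if url:
        import psycopg2          # assumed driver for a real Postgres URL
        conn = psycopg2.connect(url)
    else:
        # Nobody told us where the database lives, so make one that
        # satisfies the same contract: same schema, throwaway storage.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")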
This is the same logic with unit testing. If you’re unit testing against a real database, that isn’t unit testing, that’s an integration test.
If you do care about the speed of your database and how your app scales, you aren’t going to be testing that on your local machine anyway.

Stop bringing your whole house to the camp site.

What does that mean in this context?
Hey there, I'm another member of the superset team! I think it's definitely something you have to get used to, and it is somewhat task dependent.
For bug fixes and quick changes I can definitely get to 5-7 in parallel, but for real work I can only do 2-3 agents in parallel.
Human review still remains the /eventual/ bottleneck, but I find even when I'm in the "review phase" of a PR, I have enough downtime to get another agent the context it needs between agent turns.
We're looking into ways to reduce the amount of human interaction next. I think there are a lot of cool ideas in that space, but the goal is that over time the tools improve to require less and less human intervention.

And yeah, the next frontier is definitely offloading to agents in sandboxes; Kiet has that as one of his top priorities.
Use review bots (CodeRabbit, Sourcery, and CodeScene together work for me). This is for my own projects outside of work, of course. I use Terragon for this. Ten parallel Rust builds would kill my computer. Got a Threadripper on its way though, so superset sounds like something I need to give a go.
The real bottleneck isn’t human review per se, it’s unstructured review. Parallel agents only make sense if each worktree has a tight contract: scoped task, invariant tests, and a diff small enough to audit quickly. Without that, you’re just converting “typing time” into “reading time,” which is usually worse. Tools like this shine when paired with discipline: one hypothesis per agent, automated checks gate merges, and humans arbitrate intent—not correctness.
Recently I gave Catnip a try and it works very smoothly. It works on the web via GitHub Codespaces and also has a mobile app.
https://github.com/wandb/catnip

How is this different?
Thanks, Catnip looks pretty cool! Honestly it's pretty similar; I think ours is a bit more lightweight (it seems they have remote sandboxes where they host your code, whereas we host your code locally using git worktrees).
The mobile app is a pretty cool feature though - will definitely take a peek at that soon.
I have my own VMs with agents installed inside. Is there a tool which supports calling a codex/claude in a particular directory through a particular SSH destination?
Basically BringYourOwnAgentAndSandbox support.
Or which supports plugins so I can give it a small script which hooks it up to my available agents.
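The plumbing for this is small enough to script yourself. A sketch, assuming the agent CLI is on PATH on the remote VM; claude -p is Claude Code's non-interactive print mode (swap in your agent's equivalent), and the host and path are made-up examples:

    import shlex
    import subprocess

    def run_agent(ssh_dest: str, workdir: str, prompt: str) -> str:
        # Run the agent non-interactively in a given directory over SSH.
        remote = f"cd {shlex.quote(workdir)} && claude -p {shlex.quote(prompt)}"
        out = subprocess.run(["ssh", ssh_dest, remote],
                             capture_output=True, text=True, check=True)
        return out.stdout

    print(run_agent("dev@my-vm", "/home/dev/repos/myapp", "fix the failing tests"))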
Noticed this is built with Electron (nice job with the project architecture btw, I appreciate the cleanness). Any particular reason a Windows build isn't available yet?
We do plan to ship Windows (and Linux) builds; Electron makes that feasible. But for the first few releases we focused on macOS so we could keep the surface area small and make sure the core experience was solid, since none of us use Windows or Linux machines to properly test the app in those environments.
But it's on the roadmap, and glad to know there's interest there :)
I’ve been following this space and there are a lot of good apps:
Conductor
Chorus
Vibetunnel
VibeKanban
Mux
Happy
AutoClaude
ClaudeSquad
All of these allow you to work on multiple terminals at once. Some support worktrees and others don’t. Some work on your phone and others are desktop only.

Superset seems like a great addition!
My issue with most of them is xterm.js, which can't handle terminals that get really large. Even Conductor (great app, I love Conductor and the team behind it) had to drop their "big-terminal" mode. I'm hacking together a native solution for this that I personally like, built on Ghostty + SwiftTerm [0].

0. https://github.com/coder/ghostty-web
Thanks for the question. Most traditional web apps using frameworks like Next.js, Vite, etc. will automatically try the next port if theirs is in use (3000 -> 3001 -> 3002). We give a visualization of which ports are running from each worktree so you can see at a glance what's where.
For more complex setups, if your app has hardcoded ports or multiple services that need coordination, you can use setup/teardown scripts to manage this: either dynamically assigning ports or killing the previous server before starting a new one (you can also kill the previous server manually).
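One way to script the dynamic-port case: derive a stable port from the worktree directory name, so each workspace always lands on its own port with no coordination needed. A sketch; PORT is honored by many dev servers, but check yours:

    import os
    import subprocess
    import zlib
    from pathlib import Path

    # Stable per-worktree port in 3000-3999, derived from the directory name.
    worktree = Path.cwd().name
    port = 3000 + zlib.crc32(worktree.encode()) % 1000

    env = dict(os.environ, PORT=str(port))
    subprocess.run(["npm", "run", "dev"], env=env, check=True)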
In practice most users aren't running all 10 agents' dev servers at once (yet); you're usually actively previewing 1-2 at a time while the others are working (writing code, running tests, reviewing, etc). But please give it a try and let me know if you encounter anything you want us to improve :)
In the past I've worked with devs who complain about the cost of context switching when they're asked to work on more than one thing in a sprint. I have no idea how they'd cope with a tool like this. They'd probably complain a lot and just not bother using it.
https://superset.apache.org/