Show HN: Vibe Kanban – Kanban board to manage your AI coding agents (github.com)
gpm 20 hours ago [-]
Hmm, analytics appear to default to enabled: https://github.com/BloopAI/vibe-kanban/blob/609f9c4f9e989b59...

It is harvesting email addresses and github usernames: https://github.com/BloopAI/vibe-kanban/blob/609f9c4f9e989b59...

Then it seems to track every time you start/finish/merge/attempt a task, and every time you run a dev server. Including what executors you are using (I think this means "claude code" or the like), whether attempts succeeded or not and their exit codes, and various booleans like whether or not a project is an existing one, or whether or not you've set up scripts to run with it.

This really strikes me as something that should be (and in many jurisdictions legally must be) opt-in.

louiskw 17 hours ago [-]
That's fair feedback, I have a PR with a very clear opt-in here https://github.com/BloopAI/vibe-kanban/pull/146

I will leave this open for comments for the next hour and then merge.

TeMPOraL 17 hours ago [-]
Nice, I vote for merging it :).

It really doesn't hurt to be honest about this and ask up-front. This is clear enough and benign enough that I'd actually be happy to opt-in.

louiskw 17 hours ago [-]
Merged and building, thanks for bearing with us
gpm 16 hours ago [-]
I concur :)
smcleod 13 hours ago [-]
Good on you for taking action on this kind of feedback!
bn-l 20 hours ago [-]
Thanks, really appreciate the heads up. I put devs who do this on a personal black list for life.

I think also that this would be better as an mcp tool / resource. Let the model operate and query it as needed.

willsmith72 18 hours ago [-]
It's the email/username harvesting that you mean right? Or do people also have something against anonymised product analytics?
gpm 18 hours ago [-]
I have something against opt-out analytics over TCP/IP or UDP/IP period, because they aren't anonymized, they include an IP address by virtue of the protocol.

But to be clear, my original complaint was only about the email/username harvesting (and I'm not the person you initially responded to).

const_cast 10 hours ago [-]
> anonymised product analytics?

They're not anonymous, they're just pseudo-anonymous. It's incredibly easy to collect pieces of data A thru Z that, on their own, are anonymous but, all together, are not. It's also incredibly easy to collect data that you think is generic but is actually not.

Do you query the screen size? I have bad news for you. But all of this is beside the point: when that data is exfiltrated to a third-party service, you have no idea how it's being used. You have a piece of paper, if you're lucky, telling you the privacy policy, which is usually "you have no privacy dumbass".

Even if data appears completely anonymous to humans, it can be ingested by machine learning algorithms that can spot patterns and de-anonymize the data.

I mean, we have companies whose entire business model is "how do we string together bits of data and tie it to real-world identity?": namely Google. Turns out it's remarkably easy when you have your hands in a lot of different pots. Collect a little anonymous data here, a little there, and boom: now you know that Billy Joe who lives on First Street loves to go to Walmart at 1 AM and buy Ben and Jerry's ice cream in a moment of weakness.
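
The "stringing together" point can be made concrete with a toy sketch (all field names and values here are illustrative): each attribute alone is shared by millions of users, but their combination is nearly unique and perfectly stable, which is all a tracker needs.

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Combine individually 'anonymous' attributes into one stable identifier.

    Screen size, timezone, locale, and OS version are each generic on their
    own; hashed together they single out a device across sessions.
    """
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Two users who differ only in screen size get distinct, repeatable IDs.
alice = {"screen": "2560x1440", "tz": "America/Chicago", "lang": "en-US", "os": "macOS 14.5"}
bob   = {"screen": "1920x1080", "tz": "America/Chicago", "lang": "en-US", "os": "macOS 14.5"}
```

No name or email is ever collected, yet the identifier follows the user around just as effectively.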

adastra22 4 hours ago [-]
Yes to both.
willsmith72 1 hours ago [-]
how do you build a product without analytics? how do you measure the success and failure of every change?
msgodel 1 hours ago [-]
Many users tend to be pretty vocal when changes break things they like, you don't need to spy on them for that. Mail readers > analytics frameworks.
willsmith72 27 minutes ago [-]
"not breaking things they like" is a very low bar for building a great product

To be honest building things this way seems like such a competitive disadvantage I don't see how it could ever work at scale. Certainly all the big players are using them. If we shake our heads at the little players doing the same, we're just going to widen the moat

swyx 19 hours ago [-]
could you point me to what jurisdictions require analytics opt-in, esp for open source devtools? that's not actually something I've seen as a legal requirement, more a community preference.

eg ok we all know about EU website cookie banners, but I am more ignorant about devtools/clis sending back telemetry. any actual laws cited here would update me significantly

gpm 19 hours ago [-]
I mean, you've labelled one big one already with the GDPR, which covers a significant fraction of the world - and unlike your average analytics, "username and email address" sounds unquestionably identifying/personal information.

Where I live I think this would violate PIPEDA, the Canadian privacy law that covers all businesses that do business in any Canadian province/territory other than BC/Alberta/Quebec (which all have similar laws).

There's generally no exception in these for "open source devtools" - laws are typically still laws even if you release something for free. The Canadian version (though I don't think the GDPR does) has an exception for entirely non-commercial organizations, but Bloop AI appears to be a commercial organization, so it wouldn't apply. It also contains an exception for business contact information - but as I understand it, that is not interpreted broadly enough to cover random developers' email addresses just because they happen to be used for a potentially personal github account.

Disclaimer: Not a lawyer. You should probably consult a lawyer in the relevant jurisdiction (i.e. all of them) if it actually matters to you.

generalizations 18 hours ago [-]
> GDPR covering a significant fraction of the world

> privacy law that covers all business that do business in any Canadian province

A random group of people uploaded free software source code and said 'hey world, try this out'. I wish the GDPR and the PIPEDA the best of luck in keeping people from doing that. (Not to actually defend the telemetry, tbh that's kinda sleazy imo.)

gpm 18 hours ago [-]
I mean, those are merely the two countries' privacy laws I'm most familiar with. The general principle of "no you can't just steal people's personal information" is not something unique to the ~550 million people the laws I cited cover.

And the laws don't prevent you from uploading "random" software and saying "try this". They prevent you from uploading spyware and saying "try this". Edit: Nor does the Canadian one cover any random group of people, it covers commercial entities, which Bloop AI appears to be.

jjangkke 20 hours ago [-]
analytics stuff is fine but the email harvesting/github username appears to be illegal, especially if it's done without notifying the user?

great catch, many open source projects appear to be just an elaborate lead gen tool these days.

janoelze 17 hours ago [-]
fork, task claude to remove all github dependence, build.
gpm 17 hours ago [-]
I did this locally to try it out :) Also stubbed out the telemetry and added jj support. "Personalizing" software like this is definitely one of LLMs' superpowers.

I'm not particularly inclined to publish it because I don't want to associate myself with a project harvesting emails like this.

BeetleB 15 hours ago [-]
> and added jj support

Please do the same for Aider :-)

https://github.com/Aider-AI/aider/issues/4250

gpm 14 hours ago [-]
Be the change you want to see! This is pretty close to a best case task for these models because it's a relatively direct "translation" of existing code.

There's a big difference between "something actually ready for use" and "claude hacked something together with bubblegum and duct tape that works on my system" though - doing it properly will probably take a bit of work.

janoelze 17 hours ago [-]
yes, i was just doing/thinking the same, it was an interesting experience to sculpt a somewhat complex codebase to my needs in minutes.
hsbauauvhabzb 17 hours ago [-]
Use a telemetry backed tool to remove telemetry from another telemetry backed tool?
TeMPOraL 17 hours ago [-]
There's telemetry you consent to, and telemetry you don't. Just because I'm fine with a tool like Claude Code collecting some telemetry, doesn't mean I'm fine with a different party collecting telemetry - and the two products being used together doesn't change it. It's not naive, it's simply my right.
janoelze 17 hours ago [-]
it came to mind first, you're free to use whatever flavour of LLM f̶l̶o̶a̶t̶s̶ ̶y̶o̶u̶r̶ ̶b̶o̶a̶t̶ vibes your code.
hsbauauvhabzb 17 hours ago [-]
That doesn’t change the naïvety of the response.
adastra22 4 hours ago [-]
> AI coding agents are increasingly writing the world's code and human engineers now spend the majority of their time planning, reviewing, and orchestrating tasks.

Is this really the case?

swalsh 23 hours ago [-]
I built something similar for my own workflow. Works okay. The hard part is as you scale, you end up with compounded false affirmatives. Model adds some fallback mechanism that makes it work, tests pass, etc. The nice part is you can ask models to review the code from others, call out fallbacks, hard coding, stuff like that. It does a good job at identifying buried bodies. But if you dig up a buried body, I'd manually confirm it was properly disposed of as the models usually hid the body in the first place because they needed some input they didn't have, got confused or ran into an issue.
oc1 23 hours ago [-]
We need something like a kitchen brigade in software - one who writes the vibe code tickets (Chef de Vibe), one who reviews the vibe code (Sous-Vibe), one who oversees the agents and restarts them if they get hung up (Agent de Station). We could theoretically smash a thousand tickets a day with this principle
ggordonhall 22 hours ago [-]
Completely agree!

You can actually use a coding agent to create tickets from within Vibe Kanban. Add the Vibe Kanban MCP server (from MCP settings) and ask the agent to plan a task and write tickets.

atavistically 7 hours ago [-]
c.f. "Surgical Team" in 'The Mythical Man-Month' by Fred Brooks. That book is perennially relevant.
lharries 23 hours ago [-]
I used this last week and it's excellent - feels like the same productivity increase as when I first used Cursor.

Are you thinking of doing a hosted version so I can have my team collab on it?

And I found I could open lots of PRs at once but they often need to be dependent on each other - and then I want to make a change to the first one. How are you thinking of better managing that flow?

louiskw 23 hours ago [-]
Yeah, I think giving the option to move execution to the cloud makes a lot of sense; I already find my macbook slowing down after 4 concurrent runs, mainly rustc.

Also, now that we're pushing many more PRs, I think we defo need better ways to stack and review work. Will look into this asap

hddbbdbfnfdk 22 hours ago [-]
Very productive increase sirs! Whole team well promoted.
mahsima 4 hours ago [-]
Great

Add it to these lists, I think it helps:

https://github.com/tokyo-dal/awesome-ai-coding-tools

https://github.com/devtoolsd/awesome-devtools

and any awesome lists related to ai development

notarobot123 6 hours ago [-]
I'd be interested in how it _feels_ to run with this workflow for an extended period.

If you're doing code reviews, writing new tickets, evaluating progress and guiding the overall structure of a project, do you lose something important, or is it genuinely a satisfying way of working that you could imagine sustaining for the long term?

uxamanda 20 hours ago [-]
If you use gitlab, you can use the command line "glab" tool to have agents work from the built in kanban. They can open and close tasks, start MRs off of them etc. It's not as integrated as this tool, but works well with a mix of humans and robots.
louiskw 17 hours ago [-]
Interesting, hadn't heard of that. Would better GitLab support be useful in Vibe Kanban?
barbazoo 22 hours ago [-]
> human engineers now spend the majority of their time planning, reviewing, and orchestrating tasks

This feels like much too broad a statement to be true.

bwfan123 22 hours ago [-]
> AI coding agents are increasingly writing the world's code and human engineers now spend the majority of their time planning, reviewing, and orchestrating tasks.

This tactic is called "assuming the sale", i.e. make a statement as if it is already true, and put the burden on the reader to negate it. The majority of us are too scared of what others think, and go along by default. It is related to the FOMO tactic in that the two can be combined into a double-whammy. For example, the statement above could have ended with: "and everyone is now using agents to increase their productivity, and if you aren't using it, you are left behind"

Glad you stood up to challenge it.

skeeter2020 19 hours ago [-]
I'll add - often not adding the last part is even MORE powerful: "and everyone is now using agents to increase their productivity..."
lazarus01 21 hours ago [-]
> human engineers now spend the majority of their time planning, reviewing, and orchestrating tasks

> > This feels like much too broad a statement to be true.

This is just what they wish to be true.

lbrito 20 hours ago [-]
I wonder how demographics (specifically age) tie into this. I'm well into my 30s and I found that statement absurd, but perhaps it is basically universally true among recent grads.
bigfishrunning 18 hours ago [-]
Maybe it is -- the next few years are going to get really rough for them; they'll develop no skills outside of AI.
ljm 22 hours ago [-]
I wouldn't say it's the majority of my time but the most utility I've got out of AI is using MCP to deal with the boring shit: update my jira tickets to in progress/in review, read feedback on a PR and address the trivial shit, check the CI pipeline and make it pass if it failed, and write commits in a consistent, descriptive way.

It's a lot more hands on when you try to write code with it, which I still try out, but it's only because I know exactly what the solution is and I'm just walking the agent towards it and improving how I write my prompts. It's slower than doing it myself in many cases.

rvz 21 hours ago [-]
I read that too, and these are the kind of statements that really tell you what happens when a profession embraces mediocrity and accepts something as crass as "vibe-coding", which is somehow going to change "software engineering" even when adding so-called "AI agents" - which makes it worse.

All this cargo-culting is done without realizing that more code means more security issues, technical debt, more time for humans to review the mess and *especially* more testing.

Once again, Vibe-coding is not software engineering.

skeeter2020 19 hours ago [-]
and I came into the industry when software was not engineering. Still think this is mostly true (you can call yourself an engineer when you insure your product)
Disposal8433 9 hours ago [-]
You're right and it's sad. Instead of being more serious about the output of our work, we put everything in the trash and removed all barriers and tools that would have hardened the code. The processes to plan, write specs, and check applications went the way of the dodo too.

I'm glad I work for a regulated industry where we still have some kind of responsibility and pride for what we do. I could never work for the kind of irresponsible anarchy that AI is creating.

dhorthy 21 hours ago [-]
i feel so strongly that this will rapidly become true over the next 6 months. if you don't believe me check out Sean Grove's talk from mid June - https://www.youtube.com/watch?v=8rABwKRsec4
adastra22 4 hours ago [-]
How young are you?
iimblack 22 hours ago [-]
The permissions this asks for feel kinda insane to me. Why does a kanban board need to see the code or my deploy keys among other things?
jeltz 22 hours ago [-]
I would assume because it was vibe coded.
gpm 21 hours ago [-]
More generously I'd assume because

- It's an early prototype so they haven't dealt with fine grained permissions

- They really do want to do things like access private repos with it themselves

- They really do want the ability to do things like checkout code, create PRs, etc... and that involves a lot of permission.

skeeter2020 19 hours ago [-]
every one of your "more generous" assumptions is the opposite of what should be their process. It's the equivalent of "vacuum up as much data as possible and then decide what to do with it". Not acceptable.
gpm 19 hours ago [-]
It's "vacuuming" that data in the sense of giving API access to a tool that runs on your local computer, that seems acceptable enough for me in the early stages of developing a tool.

The other privacy complaints I have regarding them harvesting usernames and email addresses... not so much.

TeMPOraL 16 hours ago [-]
Because it's not "a kanban board"? It's a coding agent orchestrator that's made in the shape of a Kanban board.

You might be right that this app asks for excessively broad privileges, but your case would be much stronger if it wasn't backed by an absurdly disingenuous argument.

deepdarkforest 23 hours ago [-]
This is a launch by a YC company that converts enterprise COBOL code into Java. Maybe it's my fault, but I tried every single coding agent with a variety of similar tools, and whenever I try to parallelize, they clash while editing files simultaneously, I lose mental context of what's going on, they rewrite tests, etc.

It's chaos. That's fine if you are vibe coding an unimportant nextjs/vercel demo, but I'm really sceptical of this whole stance that you should be proud of how abstracted you are from code. A kanban board to just shoot off as many tasks as possible and quickly read over the PRs is crazy to me. If you want to appear to be a serious company that should be allowed to write enterprise code, imo this path is so risky. I see this in quite a few podcasts, tweets, etc. People bragging about how abstracted they are from their own product. Again, maybe I am missing something, but all of this GitHub Copilot/just reviewing 10 coding agents' PRs introduces so much noise and slop. Is that really what you want your image to be as a code company?

unshavedyak 23 hours ago [-]
> Maybe it's my fault, but i tried every single coding agent with a variety of similar tools and whenever i try to parallelize, they clash while editing files simultaneously, i lose mental context of what's going on, they rewrite tests etc.

Fwiw Claude suggests using separate git worktrees for your agents. This would entirely solve the clashing, though branches may still conflict and need normal git conflict resolution, of course.

Theoretically that would work fine, as it would be just like two people working on different branches/repos/etc.

I've not tried that though. AI generates way too much code for me to review as it is, several subtasks working concurrently would be overwhelming for me.
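
To make the worktree idea concrete, here's a minimal self-contained sketch (repo layout, branch names, and file contents are all illustrative): each agent gets its own checkout on its own branch, so parallel edits never touch the same working directory.

```python
import os
import subprocess
import tempfile

def run(args, cwd):
    # Helper: run a git command in the given directory, failing loudly on error.
    subprocess.run(args, cwd=cwd, check=True, capture_output=True)

# Set up a throwaway repo with one commit as the shared starting point.
root = tempfile.mkdtemp()
repo = os.path.join(root, "repo")
os.makedirs(repo)
run(["git", "init"], repo)
run(["git", "config", "user.email", "agent@example.com"], repo)
run(["git", "config", "user.name", "agent"], repo)
with open(os.path.join(repo, "f.txt"), "w") as f:
    f.write("shared starting point\n")
run(["git", "add", "."], repo)
run(["git", "commit", "-m", "init"], repo)

# One worktree per agent: `git worktree add -b <branch> <path>` creates a new
# branch off HEAD and checks it out into its own directory.
for agent in ("agent-1", "agent-2"):
    run(["git", "worktree", "add", "-b", agent, os.path.join(root, agent)], repo)
```

Any conflicts then surface at merge time, exactly as they would between two human developers on separate branches.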

helsinki 12 hours ago [-]
This works in theory and somewhat in practice, but it is not as clean as people make it seem. As someone who has spent tens of thousands on Opus tokens and worktrees: it's just not that great. It works, but it's boring, super tedious, etc. At the end of it all, you're still sitting around waiting for Claude to merge conflicts.
louiskw 23 hours ago [-]
This is a bet on a future where code is increasingly written by AI and we as human engineers need the best tools to review that work, catch issues and uphold quality.
deepdarkforest 23 hours ago [-]
I don't disagree, but the current sentiment i was referring to seems to be "maximize AI code generation with tools helping you to do that" rather than "prioritize code quality over AI leverage, even if it means limiting AI use somewhat."
codingdave 22 hours ago [-]
It is not just chaos, it is an unwanted product. Don't misunderstand - people would love this product if it works. But AI cannot do this yet. Products like this are built on an assumption that AI has matured enough to actually succeed at all tasks. But that simply isn't true. Vibe coding is still slop.

AI needs to do every single step of this type of flow to an acceptable quality level, with high standards on that definition of "acceptable", and then you could bring all the workflow together. But doing the workflow first and assuming quality will catch up later is just asking for a pile of rejections when you try to sell it.

I'm not just making this up, either... I've seen and talked to numerous people over the last couple years who all came up with similar ideas. Some even did have workable prototypes running. And they had sales from the mom/friends/family connections, but when they tried to get "real" sales, they hit walls.

skeeter2020 19 hours ago [-]
If I multiply my 100x productivity gains from using AI with your 10x increase what am I supposed to do with all that free time?
ffsm8 18 hours ago [-]
Maybe Tony can inspire you?

( ◠ ‿ ・ ) —

https://youtube.com/shorts/YBAcvRV7VSM?si=jp2hZvFIVo-vSdu6

_jayhack_ 23 hours ago [-]
Very cool and interesting project. Ideas like this are a threat to traditionally-conceived project management platforms like Linear; that being said, Linear and others (Monday, ClickUp, etc.) are pushing aggressively into UX built for human/AI collaboration. I guess the question is how quickly they can execute and how many novel features are required to properly bring AI into the human project workspace
louiskw 22 hours ago [-]
Cheers! Smaller teams, more infrastructure, more testing, tasks requiring review in minutes not days - the features are just totally different for the new world than what legacy PM tools are optimised for, and who they have to continue to serve.
jackbridger 19 hours ago [-]
This is interesting. I'm using Claude Code a fair bit and have found writing specs to be more effective than prompting, and this feels closer to that. I can see the appeal of this for simple tasks now, and maybe increasingly bigger tasks as models get better.
louiskw 17 hours ago [-]
Very much a bet that things are going to get much much better very quickly
helsinki 13 hours ago [-]
So far, it’s relatively bug-free well-written code that I’ve forked to work behind the walls of a hedge fund, and it works, but the reality is that it doesn’t provide anything that some terminal windows and git worktrees can’t offer. Am I missing something?

You really need to add more features, because I struggle to find a compelling reason for advanced users to use it.

sails 8 hours ago [-]
If you could consider how a team could use linear in conjunction with this that would be very interesting.

I would imagine matching tickets/conversations from Linear mcp to use as an overlay of context

randysalami 23 hours ago [-]
I tried to build something similar but in a peer-to-peer fashion and for humans + AI. It was supposed to be like a Kanban board that could scale to any team size and use Planning AI to ingest/match/monitor work realtime across teams and agents. I ran out of steam and couldn’t get funding but here is the prototype version:

https://postwork-alpha.vercel.app/

User: maryann.biaggioli@astarconsulting.com

Pass: Test1234!

I never got to a point where I actually integrated AI agents (weren’t as good at the time) but it’s cool to see it working in the real world!

diggan 21 hours ago [-]
That's an interesting idea and looks neat! I have my own developed agent running locally in containers, and currently use GitHub issues+pull requests for coordinating all the asynchronous work. Do you have any pointers on the approach I should take if I basically have something like a service already running for this, and I just want to hook up your UI to use it instead? Just some broad pointers on what would be required would be most helpful already!
sqs 17 hours ago [-]
This is really cool. I used Vibe Kanban with Amp to update some of our docs and UI components, and it was great.
louiskw 17 hours ago [-]
I would say conservatively that 80% of Vibe Kanban has been built by Amp
slig 23 hours ago [-]
How does this compare with Backlog.md? [1]

[1]: https://news.ycombinator.com/item?id=44483530

ggordonhall 23 hours ago [-]
Hi, co-author of this project here!

In Vibe Kanban you can directly interact with Claude Code from within the Kanban board. E.g. you can write out a ticket, hit a button to run it locally with Claude Code/Gemini etc., watch its responses, and then review any diffs that it generated.

slig 23 hours ago [-]
Thank you! I was taking a look at the docs, and I'm going to play with it later today. Thanks for sharing and congrats on shipping!
gpm 21 hours ago [-]
I definitely don't feel like the models are reliable enough that I'd be more productive running them in parallel like this yet, but I can see a future where I want this.

Their reliability probably varies a lot depending on what you are using them for - so maybe I'm just using them in more difficult (for claude) domains.

louiskw 17 hours ago [-]
Yes, I generally cherry-pick the easier 50% of my backlog and work on those with Vibe Kanban; the other 50% is still manual or happens with a coding agent, but with a human in the loop.

This is a bet that coding agents will continue to get better, and this feels like the right time to try and figure out the interface.

FailMore 21 hours ago [-]
Do you think you will keep it free or can you see a business model developing around it? If so, what do you think it would be? / How would you split paid tiers vs free users? Not a big deal to me...!! But I'm curious how one might commercialise these types of free/open source projects
louiskw 17 hours ago [-]
I could see there being a long term free offering that doesn't cost us compute or tokens, and probably some other offerings that actually do use resources and would make sense to build a business around.

But that's not a today problem, we just want to absorb feedback and iterate until we build the ultimate tool for working with these coding agents.

swyx 22 hours ago [-]
Louis did a long form chat and vibe coded extra features in our discord meetup: https://www.youtube.com/watch?v=NCksand7Iwo for more info on this
louiskw 22 hours ago [-]
"11 days ago" this feels like a year ago :O
peadarohaodha 21 hours ago [-]
Would love to have a good selection of keyboard shortcuts! Power Vibe Kanban
louiskw 21 hours ago [-]
Good point - there are a few random ones already but I will ask a coding agent to implement this comprehensively
remram 22 hours ago [-]
Why do you need to "manage" your coding agents like they are people? How long does it take them to do a task once prompted, in the background?

Don't you just prompt and immediately review the result?

gpm 21 hours ago [-]
Currently - claude code is pretty slow. I've definitely had it take >15 minutes on the (faster than opus!) sonnet model just thinking and writing code without feedback or running long lasting tools. I expect this to change given that companies like cerebras exist and seem to know how to generate tokens in much less time, but the current state of the art is what it is.

Always - if you're going to pipe the result of some slow process back to them (like building a giant C++ project that takes minutes/hours, or running a huge set of tests...)... it's going to be slow.

louiskw 22 hours ago [-]
For me, the average task takes a coding agent 2-5 minutes to complete. A slightly annoying amount of time as I'm prone to getting distracted while I wait.

This gives me something to do in that time.

My guess is time to complete a task will oscillate - going up as we give agents more complex tasks to work on, and going down with LLM performance improvement.

scotty79 22 hours ago [-]
I don't think smart ones are that fast. They can work for hours if you have the budget.
remram 19 hours ago [-]
Interesting, I didn't know that.
jjangkke 20 hours ago [-]
I'm not sure if Kanban is the right UI for what this is supposed to be for, just a gut feeling. Curious what other UI is more appropriate for this.
louiskw 17 hours ago [-]
Kanban seems like a good starting place, but I broadly agree that the interface for human<>agent collaboration will need to be different from the default interface we have today with legacy PM tools.

Things move across the board so quickly when AI is doing the work that ~50% of the columns seem pretty redundant.

skeptrune 23 hours ago [-]
I remember the original Bloop search engine!

Some kind of UI or management system like this seems like it would be high level useful. Will have to give it a run.

louiskw 23 hours ago [-]
Nostalgia!

Let me know if any issues, we're turning feedback around pretty quick

csomar 23 hours ago [-]
Why does this need GitHub auth? This asks for unlimited private access to ones repo. This is a hard NO from me.
louiskw 17 hours ago [-]
It can open GitHub PRs from the interface and there's tons more info we want to pull in like the result of CI checks
csomar 11 hours ago [-]
Then you should have gone with a GitHub App instead of an Oauth App. The difference is that a GitHub App allows granularity of selection.
MH15 21 hours ago [-]
Can you use this with the Claude Code Github Action?
bpshaver 22 hours ago [-]
> Get 10x more out of...

So you're saying it goes up to 11x?

skeeter2020 19 hours ago [-]
AI was already giving you 100x so we're up to 1000x
louiskw 22 hours ago [-]
Great ChatGPT question XD
drbojingle 19 hours ago [-]
I'm building one of these too :D
shibeprime 23 hours ago [-]
Now we just let a CTO agent create the cards, review, and merge the PRs?
ggordonhall 23 hours ago [-]
We have this! Vibe Kanban includes its own MCP server that you can use to create tickets within the Kanban board.

Click on the MCP Servers tab, then hit "Add Vibe Kanban MCP". Then create and start a "planning" ticket like "Plan a migration from AWS to Azure and create detailed tickets for each step along the way". Sit back and watch the cards roll in!

Will do more to document this better soon :)

amirhirsch 21 hours ago [-]
great. now i have to spend my morning using vibe kanban to make a tui for vibe kanban with textual. i'll submit a pr as soon as it's finished.
Jolter 21 hours ago [-]
It says in the linked readme to not open pull requests without discussing it first.