I realised that many companies offer no-code platforms to their users for automating workflows, and the numbers behind that market were kinda shocking.
I spent a week deep-diving into Gumloop and other no-code platforms.
They're well-designed, but here's the problem: they're not built for agents. They're built for workflows. There's a difference.
Agents need customisation. They have to make decisions, route dynamically, and handle complex tool orchestration. Most platforms treat these as afterthoughts. I wanted to fix that.
It's not production-ready, and it's nowhere close to handling the request volume of companies like Gumloop. But it's meant to showcase how robust vibe coding has become and how easily you can build sophisticated apps in a matter of days. You can also carry the work forward and improve it.
Picking my tech stack
NextJS was the obvious choice for the vibe-coding stack. Could I have used FastAPI with a React frontend?
Sure — but just thinking about coordinating deployments, managing CORS, and syncing types made me tired.
For adding a near-unlimited suite of SaaS app integrations, Composio was the obvious choice. It features a JS SDK that enables you to add agent integrations easily.
When it comes to agent frameworks, JS lacks the buffet Python has.
It boiled down to two frameworks: LangGraph and the AI SDK (I’d heard about Mastra AI, but I didn’t want to spend the weekend getting familiar with it).
I chose LangGraph over AI SDK because LangGraph’s entire mental model is nodes and edges — exactly how visual agent builders should work. Every agent is just a graph; every workflow, a path through that graph. AI SDK is great, but not convenient for graph-based agents.
Coding with Vibes
If you’re a vibe-code hater, skip ahead.
Frontend is entirely vibe-coded. I didn’t use Lovable or Bolt.new because it’s easier to open the code in Cursor and tweak it there.
My setup
- GPT-4.1 – The sniper: does exactly what you ask, nothing more, nothing less. Great for precise component tweaks.
- Gemini 2.5 Pro – The machine-gun: rewrites entire components and understands context across files. Perfect for major refactors.
- 21st Dev’s MCP Server – uses the Cursor Agent to build beautiful shadcn components. Instead of copy-pasting docs, I just describe what I want.
The canvas where users drag-and-drop nodes? Built with React Flow plus a moving grid background from 21st Dev. Took ~30 minutes; doing it by hand would’ve exhausted me.
Building the Components
Strip away the marketing fluff; an AI agent is two things:
- An LLM that makes decisions
- The tools it can use to take action
That's it. So I built exactly four fundamental nodes:
- Input Node – where data enters the system
- Output Node – where results emerge
- LLM Node – makes decisions
- Tool Node – takes actions
…and an Agent Node that combines an LLM + Tools for convenience. Every complex workflow is just a remix of these primitives.
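In TypeScript, these primitives fall out naturally as a discriminated union. This is a sketch, not the platform's actual types; the names and `data` fields are illustrative guesses based on the JSON shown later in the post:

```typescript
// The five node primitives, modeled as a discriminated union.
// Type names and data fields are illustrative, not the real codebase's.
type WorkflowNode =
  | { id: string; type: "customInput"; data: { label: string } }
  | { id: string; type: "customOutput"; data: { label: string } }
  | { id: string; type: "llm"; data: { apiKey: string } }
  | { id: string; type: "tool"; data: { action: string } }
  | { id: string; type: "agent"; data: { apiKey: string; tools: string[] } };

// The `type` discriminant lets execution logic switch exhaustively.
function describe(node: WorkflowNode): string {
  switch (node.type) {
    case "customInput": return "data enters here";
    case "customOutput": return "results emerge here";
    case "llm": return "makes decisions";
    case "tool": return "takes actions";
    case "agent": return "LLM + tools combined";
  }
}
```

The payoff of the discriminated union is that TypeScript forces the executor to handle every node type; add a sixth primitive and the compiler flags every switch that forgot it.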
Composio for adding unlimited tool integrations
Writing tool integrations is painful. Managing auth for those tools? That’s where developers go to die.
Every tool has a different auth flow. Multiply that by 100+ tools and you have a maintenance nightmare.
Composio fixes this: one SDK, hundreds of pre-built tools, auth handled automatically. Ship in a weekend instead of spending months on OAuth.
API Routes
Each workflow is a JSON graph. Here’s a tiny example:
```json
{
  "nodes": [
    {
      "id": "input_1",
      "type": "customInput",
      "position": { "x": 100, "y": 100 },
      "data": { "label": "User Query" }
    }
  ],
  "edges": [
    { "source": "input_1", "target": "agent_1" }
  ]
}
```
I wanted one API route that takes the entire graph and executes it.
When a user hits Run, this happens:
- Graph Validation – find the Input node, verify edges connect, check for cycles
- Topological Sort – determine execution order (LangGraph does this beautifully)
- Node Execution – each node type has its own execution logic
- State Management – pass data between nodes while maintaining context
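Steps 1 and 2 above can be sketched in plain TypeScript with Kahn's algorithm, which does cycle detection and topological ordering in one pass (in the actual app LangGraph handles this; this standalone version just shows the idea):

```typescript
type Edge = { source: string; target: string };

// Kahn's algorithm: returns node ids in a valid execution order,
// or throws if the graph contains a cycle.
function topoSort(nodeIds: string[], edges: Edge[]): string[] {
  const indegree = new Map(nodeIds.map((id) => [id, 0]));
  const adjacent = new Map<string, string[]>(nodeIds.map((id) => [id, []]));
  for (const { source, target } of edges) {
    adjacent.get(source)!.push(target);
    indegree.set(target, (indegree.get(target) ?? 0) + 1);
  }
  // Seed with nodes that have no incoming edges (the Input nodes).
  const queue = nodeIds.filter((id) => indegree.get(id) === 0);
  const order: string[] = [];
  while (queue.length > 0) {
    const id = queue.shift()!;
    order.push(id);
    for (const next of adjacent.get(id) ?? []) {
      indegree.set(next, indegree.get(next)! - 1);
      if (indegree.get(next) === 0) queue.push(next);
    }
  }
  // If a cycle exists, the nodes on it never reach indegree 0.
  if (order.length !== nodeIds.length) throw new Error("Graph contains a cycle");
  return order;
}
```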
```typescript
// Sample snippet from the node execution step
switch (node.type) {
  case 'llm': {
    const model = getModelFromApiKey(node.data.apiKey);
    result = await model.invoke(previousOutput);
    break;
  }
  case 'tool': {
    const tool = await composio.getTool(node.data.action);
    result = await tool.execute(previousOutput);
    break;
  }
  case 'agent': {
    // The agent node resolves its own model before wiring in tools
    const model = getModelFromApiKey(node.data.apiKey);
    const tools = await composio.getTools(node.data.tools);
    const agent = createReActAgent(model, tools);
    result = await agent.invoke(previousOutput);
    break;
  }
}
```
Managing Authentication with Tools
Authentication was my personal nightmare.
Composio solved the technical part, but the UX? That took three rewrites.
v1 pain-stack
- Manually type action names (spelled perfectly)
- Leave my app to authenticate on Composio’s dashboard
- Come back and hope it worked
I added a drop-down of actions, but auth was still clunky. So I:
- Pulled every available tool from Composio’s API and cached it locally.
- Built a modal showing each toolkit, its tools and connection status.
- Adapted the UI to the tool’s auth type:
- API Keys – password input + link to get the key
- OAuth2 (hosted) – Connect button opens a pop-up
- OAuth2 (custom) – form for client credentials
- Other – dynamic form built from required fields
Once authenticated, the same modal lets you search and add tools in one click.
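That per-auth-type branching boils down to a small mapping. A sketch, with made-up scheme names (Composio's real enum values may differ):

```typescript
// Map each auth scheme to the form controls the modal should render.
// Scheme names here are illustrative, not Composio's actual enum.
type AuthScheme = "API_KEY" | "OAUTH2_HOSTED" | "OAUTH2_CUSTOM" | "OTHER";

function formForScheme(scheme: AuthScheme): string[] {
  switch (scheme) {
    case "API_KEY":
      return ["password-input", "get-key-link"];
    case "OAUTH2_HOSTED":
      return ["connect-button"]; // opens a pop-up to the provider
    case "OAUTH2_CUSTOM":
      return ["client-id-input", "client-secret-input"];
    case "OTHER":
      return ["dynamic-form"]; // built from the tool's required fields
  }
}
```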
Agent Orchestration Patterns
Anthropic’s guide “Building Effective Agents” lists several patterns. I created nodes that instantiate these instantly.
1. Prompt Chaining
- Pattern: Sequential; output of one agent feeds the next.
- Node example:
customInput → agent_1 → agent_2 → customOutput
2. Parallelisation
- Pattern: Agents run in parallel and their results are aggregated.
- Node example:
customInput → agent_1 (parallel)
customInput → agent_2 (parallel)
both → aggregator → customOutput
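In code, this pattern is just Promise.all over the agent calls followed by an aggregation step. A sketch with stub agent signatures (the real agents would be LangGraph invocations):

```typescript
type Agent = (input: string) => Promise<string>;

// Run independent agents concurrently, then combine their outputs.
async function runParallel(
  input: string,
  agents: Agent[],
  aggregate: (outputs: string[]) => string
): Promise<string> {
  const outputs = await Promise.all(agents.map((agent) => agent(input)));
  return aggregate(outputs);
}
```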
3. Routing
- Pattern: A router agent decides which branch to use.
- Node example:
customInput → router_agent
router_agent → agent_1 | agent_2 → customOutput
4. Evaluator-Optimiser
- Pattern: Generator agent produces solutions; evaluator checks them; loop until good.
- Node example:
customInput → generator_agent → solution_output
↘ evaluator_agent ↗
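The generate-evaluate loop can be sketched as follows, with stub generator/evaluator types and a capped iteration count so it can't spin forever:

```typescript
type Generator = (task: string, feedback?: string) => Promise<string>;
type Evaluator = (solution: string) => Promise<{ ok: boolean; feedback: string }>;

// Generate a solution, evaluate it, feed criticism back into the
// generator, and stop when the evaluator approves or we hit the cap.
async function evaluatorOptimiser(
  task: string,
  generate: Generator,
  evaluate: Evaluator,
  maxIters = 3
): Promise<string> {
  let feedback: string | undefined;
  let solution = "";
  for (let i = 0; i < maxIters; i++) {
    solution = await generate(task, feedback);
    const verdict = await evaluate(solution);
    if (verdict.ok) return solution;
    feedback = verdict.feedback;
  }
  return solution; // best effort after maxIters
}
```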
5. Augmented LLM
- Pattern: An agent node is augmented with tool calls / external data.
- Node example:
customInput → agent(with tools) → customOutput
After 48 hours of rapid development, I had a working agent platform.
The barrier to building agents has collapsed. You don’t need a 20-person team and six months; you need:
- Clear thinking about what agents are (decision-makers with tools)
- The right abstractions (everything is a graph)
- The wisdom to reuse existing solutions instead of rebuilding them
The irony? I spent more time perfecting the auth modal than building the execution engine. In the age of vibe-code, the hardest problems aren’t technical — they’re about understanding users and having the taste to build well.
The code lives on GitHub. Fork it, break it, make it better.
Finally, the fruits of 48 hours of vibe coding:
This was all about vibe coding my way to an actual product. It's maybe not fully ready for the real world, but it's 80% of the way there after one weekend, which would have taken months before.
Top comments (42)
I MS Painted a $10 trillion painting in a few minutes to be displayed at Louvre Museum, here's how:
I throw a bunch of meaningless crap together...
Then I clickbait.
Then I say it's 80% there.
LOL
☝️
clickbait: must clarify that a weekend of vibe-coded garbage did not get into YC, nor is it valued at 20M.
This is like making a line follower robot on your desk and calling it Tesla.
Sorry to break your bubble
and you thought "the world needs more of that"?
Made any money off it yet? Then there's the 80/20 rule of development.
This is for the love of the game
What is the 80/20 rule of development? 🤔
If 80% is done with 20% of the effort, the remaining 20% will take the remaining 80% of the effort
Appreciate the transparency at the end there. That last line about it not being fully ready for the real world stood out... felt like the only human breath in a sea of AI-gen content.
If you're ever interested in diving into agent systems built for autonomy and real-world deployment, I've been pushing the boundaries with my DataOps Terminal v2.0 NLP to CL, dual-screen interface, and a full cognitive agent loop with live data, memory, and observability.
GitHub: github.com/GodsIMiJ1/dataops-terminal
Post: dev.to/ghostking314/dataops-termin...
(Open-source AI agent interface, built for research & education.)
Total useless...
crazy how much you cranked out just by rolling with it tbh - ever hit a point where the speed starts biting back or does moving fast help you spot what matters most?
We've come a long way with these AI models. Great work, Sunil!
The fact that this level of complexity is now weekend-buildable is wild....love the breakdown
Thanks @parag_nandy_roy
Yes! The real struggle is always UX and auth flows, not the code itself. Curious - if you could pick one thing to polish next, what would it be?
It's definitely the UX; the most difficult part was app integrations for agentic flow, but Composio took care of it.
sweet
Just use NodeRed...