
How People Are Using AI For Vibe Coding

In 2025, people “vibe code” by asking Claude, Cursor/o3, and Comet to build and modify software end-to-end: pushing PRs without reading the code, one-shotting games and web apps, wiring databases via MongoDB’s MCP, and even generating thousands of projects (one team reports $48,952.95 in model spend with zero human-written code). The tradeoffs show up quickly: posts describe production friction and outdated dependencies, loss of code understanding, fast-accumulating tech debt, and a security failure where a vibe-coded app exposed user data via a simple GET request; a practical mitigation is to ask the agent to clean up, de-bloat, and document its work after it finishes. Tools keep lowering the bar (Gemini 2.5 Pro for coding; OSS agent platforms that read and write files and run commands), but the prevailing guidance is to use vibe coding for prototypes or tightly scoped features and to apply human review before scaling.
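
The GET exposure mentioned above is a recognizable bug class: an endpoint that returns a full record for any id, with no authentication or ownership check. Below is a minimal sketch, assuming a hypothetical Express app with an in-memory userStore and a stand-in requireAuth middleware (none of these names come from the actual incident); it contrasts the vulnerable shape with one that checks the caller and trims the response.

```typescript
import express, { type Request, type Response, type NextFunction } from "express";

// Hypothetical in-memory store and auth middleware, only to make the sketch self-contained.
const userStore = new Map([["u1", { id: "u1", name: "Ada", email: "ada@example.com" }]]);

function requireAuth(req: Request, res: Response, next: NextFunction) {
  const userId = req.header("x-user-id"); // stand-in for real session/token validation
  if (!userId) return res.status(401).json({ error: "unauthenticated" });
  res.locals.userId = userId;
  next();
}

const app = express();

// Vulnerable shape (the "simple GET" failure described above): any caller who can
// guess or enumerate an id gets the whole record back, sensitive fields included.
// app.get("/api/users/:id", (req, res) => res.json(userStore.get(req.params.id)));

// Safer shape: authenticate, check ownership, and return only non-sensitive fields.
app.get("/api/users/:id", requireAuth, (req, res) => {
  if (req.params.id !== res.locals.userId) {
    return res.status(403).json({ error: "forbidden" });
  }
  const user = userStore.get(req.params.id);
  if (!user) return res.status(404).json({ error: "not found" });
  res.json({ id: user.id, name: user.name }); // omit email and anything secret
});

app.listen(3000);
```

Asking the agent to add exactly these checks, and to document them, is the kind of post-generation cleanup the posts recommend.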

🤖 AI summary based on 26 tweets

Popular demos from official product accounts, team members, or affiliated creators.


It blows my mind that you can do this now! I opened @get_mocha and built an entire application in less than 30 minutes:
• the frontend
• the backend
• with a database
• with Stripe integration
I used to charge clients 5 figures to do all of this, and I finished it today in half an hour. Please, stop and appreciate this for a second: You can now turn one English sentence into an entire functional application. If this is not the definition of "mind-blowing", I don't know what is. I've been using @get_mocha for quite some time, and they've significantly improved their platform. They are collaborating with me on this post. Take 10 minutes of your day, open this link, and ask it to build something for you: https://t.co/uSqIsGyAKp

Introducing Lovable Cloud & AI, a new chapter for vibe coding. Anyone can now build apps with complex AI and backend functionality, just by prompting. 100k+ new ideas, tools, and sites are built on Lovable daily. Today, we're redefining what's possible: https://t.co/pcZO05SmTu

We shipped an OSS 'vibe coding platform' (like @v0) built with @vercel AI SDK, Gateway and Sandbox. We worked with @openai to tune the GPT-5 agent loop. It can write/read files, run commands, install packages, autofix errors… Demo oneshotting a multiplayer Pong in Go ↓ https://t.co/3wthQVqPy8
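
For readers curious what that kind of agent loop looks like structurally, here is a rough sketch: the model proposes one tool call at a time (read or write a file, run a command), the host executes it, and the result, including errors, is appended to the transcript so the model can self-correct. This is not the @vercel AI SDK or the GPT-5 tooling itself; callModel and the tool schema are assumptions for illustration only.

```typescript
import { readFile, writeFile } from "node:fs/promises";
import { execSync } from "node:child_process";

// One step of the agent's output: either a tool request or a completion signal.
type ToolCall =
  | { tool: "read_file"; path: string }
  | { tool: "write_file"; path: string; content: string }
  | { tool: "run_command"; command: string }
  | { tool: "done"; summary: string };

// Assumption: some model client that returns the next tool call given the transcript.
declare function callModel(transcript: string[]): Promise<ToolCall>;

async function runAgent(task: string, maxSteps = 20): Promise<string> {
  const transcript = [`TASK: ${task}`];
  for (let step = 0; step < maxSteps; step++) {
    const call = await callModel(transcript);
    if (call.tool === "done") return call.summary;
    let result: string;
    try {
      if (call.tool === "read_file") {
        result = await readFile(call.path, "utf8");
      } else if (call.tool === "write_file") {
        await writeFile(call.path, call.content);
        result = `wrote ${call.path}`;
      } else {
        // A real platform would run this inside an isolated sandbox, not the host shell.
        result = execSync(call.command, { encoding: "utf8", timeout: 60_000 });
      }
    } catch (err) {
      result = `ERROR: ${String(err)}`; // errors go back to the model so it can autofix
    }
    transcript.push(JSON.stringify(call), result);
  }
  return "stopped: step limit reached";
}
```

A production platform adds sandboxing, package installation, and streaming output, but the loop shape stays the same.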

SQL and Python alone can feed your family for decades. I learned SQL 25 years ago, and it's been one of the best decisions I've ever made. Back then, I had to go to college to learn, but you don't need to do that anymore. Today, you can build your own "college" if you want. This sounds crazy, but such are the times we are living. I've been working with https://t.co/mSbtU01Cq4 for a while now, and it's one of the easiest ways to build *anything* I want. They've just released version 2, and it's quite impressive. They are collaborating with me on this post. I use vibe-coding primarily to test different ideas. Before, I had to imagine things, and now I can actually try them out without writing a single line of code. For those of you wanting to learn SQL, try something like this:
• Go to https://t.co/mSbtU01Cq4
• Ask it to build an app to translate English into SQL
• Deploy it
10 minutes later, you'll have a complete application that will help you learn SQL. Or Python. Or anything you want. Sometimes, I think we are living in a simulation. It's pretty incredible how much we can build using English alone.
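
The core of the English-to-SQL app suggested above is a single model call that pairs the user's question with a database schema. A minimal sketch follows, assuming the OpenAI Node SDK purely as an example client; the two-table schema, model name, and prompt wording are illustrative choices, not from the tweet.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Invented example schema the model can target.
const SCHEMA = `
CREATE TABLE orders (id INT, customer_id INT, total NUMERIC, created_at DATE);
CREATE TABLE customers (id INT, name TEXT, country TEXT);
`;

export async function englishToSql(question: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content: `Translate the user's question into a single SQL query for this schema. Reply with SQL only.\n${SCHEMA}`,
      },
      { role: "user", content: question },
    ],
  });
  return response.choices[0].message.content ?? "";
}

// Example: englishToSql("total revenue per country last month") should return a
// SELECT with a JOIN and GROUP BY that you can read, run, and learn from.
```

Everything else in such an app, the form, the result table, the deploy step, is what the vibe-coding platform generates around this call.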

Today's THE day, Folks! 🥁🥁🥁 https://t.co/iKbRNEoXMm is LIVE! I was lucky to beta test it, easily the most capable AI web agent I’ve used!🔥 Built on Stagehand (100% open source), it turns natural language into flawless browser automation. FREE & zero coding needed! 🧵 ↓ https://t.co/TnA6C83vez
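
For context on what "natural language into browser automation" looks like in code, here is a hedged sketch following Stagehand's documented act/extract pattern; treat the package path, constructor options, and exact call shapes as assumptions and check the project's README before running, and the target site and instructions here are arbitrary examples.

```typescript
import { Stagehand } from "@browserbasehq/stagehand";
import { z } from "zod";

async function main() {
  // Assumption: a locally driven browser; a hosted environment is also possible.
  const stagehand = new Stagehand({ env: "LOCAL" });
  await stagehand.init();
  const page = stagehand.page;

  await page.goto("https://news.ycombinator.com");
  await page.act("click the link to the top story"); // plain-English action
  const { title } = await page.extract({
    instruction: "extract the title of the current page",
    schema: z.object({ title: z.string() }),
  });
  console.log(title);

  await stagehand.close();
}

main();
```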

An LLM with native access to find and run agents is a huge thing! This large language model (called ASI:One and developed by @Fetch_ai) can do this natively when you show it a problem:
• Reason about the problem
• Identify live agents and tools that can help solve it
• Dynamically invoke these agents and tools
Honestly, this is pretty nuts! You don't need to set up anything beforehand. You don't need to write any code. Here is the best part: ASI:One has access to any agent published on Agentverse, an open directory of agents! This gives the model access to *millions* of tools it can execute dynamically when solving a task. Here is a simple example (the one in the attached video):
• I went to Google Maps
• I copied the latitude and longitude of a location
• I asked ASI:One about relevant places around that location
• ASI:One went to the marketplace
• It identified and called the Google Places agent
• It returned a list of museums, restaurants, and landmarks near my location
Pretty awesome! You can try this now by going to https://t.co/r0F6dhl5kG. You don't need to pay or register to test it out. If you want to get ideas of what's possible, check Agentverse, the agent marketplace: https://t.co/zpPzdUDzAL. The model will have access to all of those tools. This is pretty awesome and opens up a whole new world of possibilities! Thanks to the team for partnering with me on this post!
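
The workflow described in that thread, reason about the problem, discover a live agent, invoke it, can be sketched generically. Everything below (AgentInfo, searchAgents, invoke, the coordinates) is hypothetical and only illustrates the discover-then-invoke flow, not Fetch.ai's actual ASI:One or Agentverse API.

```typescript
// Minimal shape of an agent listed in some registry.
interface AgentInfo {
  id: string;
  description: string;
  invoke(input: Record<string, unknown>): Promise<unknown>;
}

// Assumption: a registry search that returns candidate agents ranked by relevance.
declare function searchAgents(query: string): Promise<AgentInfo[]>;

async function placesNear(lat: number, lon: number) {
  // 1. Reason about the problem -> "I need a places / points-of-interest lookup".
  const candidates = await searchAgents("find points of interest near coordinates");
  if (candidates.length === 0) throw new Error("no matching agent found");

  // 2. Pick a live agent and invoke it dynamically with the task's inputs.
  const placesAgent = candidates[0];
  return placesAgent.invoke({ latitude: lat, longitude: lon, radiusMeters: 2000 });
}

// Usage mirroring the thread's example: coordinates copied from Google Maps.
placesNear(48.8584, 2.2945).then((results) => console.log(results));
```

The point of the product is that the model performs both steps itself; the sketch just makes the two steps visible.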

Very excited to share the best coding model we’ve ever built! Today we’re launching Gemini 2.5 Pro Preview 'I/O edition' with massively improved coding capabilities. Ranks no.1 on LMArena in Coding and no.1 on the WebDev Arena Leaderboard. It’s especially good at building interactive web apps - this demo shows how it can be helpful for prototyping ideas. Try it in @GeminiApp, Vertex AI, and AI Studio https://t.co/7FbP3R1cmF Enjoy the pre-I/O goodies!
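
As a concrete picture of "prototyping ideas" with the model, here is a hedged sketch that asks Gemini for a self-contained HTML demo via the @google/generative-ai JS SDK; the model id string and the prompt are assumptions, so substitute whichever 2.5 Pro preview id is current.

```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";
import { writeFile } from "node:fs/promises";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? "");

async function prototype() {
  // Assumed model id; check the current Gemini 2.5 Pro preview name before use.
  const model = genAI.getGenerativeModel({ model: "gemini-2.5-pro-preview-05-06" });
  const result = await model.generateContent(
    "Write a single self-contained index.html implementing a simple interactive " +
      "color-mixing demo with sliders. Return only the HTML."
  );
  await writeFile("index.html", result.response.text());
  console.log("Prototype written to index.html; open it in a browser.");
}

prototype();
```

This one-prompt-to-running-page loop is the prototyping pattern the demo highlights; anything headed for real users still needs the human review the summary calls for.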
