Noticing myself adopting a certain rhythm in AI-assisted coding (i.e. code I actually and professionally care about, in contrast to vibe code):

1. Stuff everything relevant into context. This can take a while in big projects; if the project is small enough, just stuff everything, e.g. `files-to-prompt . -e ts -e tsx -e css -e md --cxml --ignore node_modules -o prompt.xml`.
2. Describe the next single, concrete incremental change we're trying to implement. Don't ask for code; ask for a few high-level approaches with pros/cons. There are almost always a few ways to do things, and the LLM's judgement is not always great. Optionally, ask it to make an approach concrete.
3. Pick one approach and ask for first-draft code.
4. Review / learning phase: (manually...) pull up the API docs in a side browser for any functions I haven't called before or am less familiar with; ask for explanations, clarifications, changes; wind back and try a different approach if needed.
5. Test.
6. Git commit. Ask for suggestions on what we could implement next. Repeat.

Something like this (the scriptable parts are sketched below) feels more like the inner loop of AI-assisted development. The emphasis is on keeping a very tight leash on this new over-eager junior intern savant with encyclopedic knowledge of software, who also bullshits you all the time, has an over-abundance of courage, and shows little to no taste for good code. The emphasis is also on being slow, defensive, careful, and paranoid, and on always taking the inline learning opportunity rather than delegating. Many of these stages are clunky and manual, and aren't made explicit or super well supported yet in existing tools. We're still very early, and so much can still be done on the UI/UX of AI-assisted coding.
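As a rough sketch of the scriptable parts of that loop, assuming Simon Willison's `files-to-prompt` and `llm` CLIs are installed with a default model configured; the feature named in the prompts is purely illustrative:

```bash
#!/usr/bin/env bash
set -euo pipefail

# 1. Stuff everything relevant into context.
files-to-prompt . -e ts -e tsx -e css -e md --cxml \
  --ignore node_modules -o prompt.xml

# 2. Ask for approaches, not code.
llm "Propose 2-3 high-level approaches to adding per-user rate limiting, \
with pros/cons. No code yet." < prompt.xml > approaches.md

# 3. Pick one and ask for a first draft.
cat prompt.xml approaches.md | \
  llm "Write a first-draft implementation of approach 2." > draft.md

# 5-6. After the manual review/learning phase: test, then commit.
npm test
git add -A && git commit -m "per-user rate limiting, first pass"
```

The review step (4) is deliberately left out of the script; that's the part to keep manual.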
How People Are Using AI For Coding
In 2025, professional developers pair tightly with AI to ship production work: multiplayer backends arrive via Cursor and Grok in hours, and Claude Code can dump ~20k LOC in a 2.5-hour run that still needs review. Teams route output through guardrails like Codacy's MCP server and terminal agents such as Gemini CLI (1M-token context, rate-limited), while editors move to built-in AI modes (VS Code). The working rhythm tilts toward "tight-leash" AI development (load context, ask for options, draft, review docs, test, commit), with model swaps for stability (e.g., Sonnet → Gemini 2.5), error audits (Chip Huyen reports "content not found" errors 20–30% of the time), and repo restructuring to cut agent steps (8→7). Organizations report scale effects (claims that more than a third of code at Google is now AI-generated), while leaders emphasize speed and cross-language execution (Andrew Ng). Agents also act beyond code (sending emails or automating flows), pushing teams to enforce explicit permissions, CI checks, and human review before merge.
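That last point lends itself to mechanical enforcement. As one hedged example, a local git pre-push hook can keep agent-written changes from leaving a machine untested before CI and human review even see them; `npm test` here is an assumed stand-in for your project's test runner:

```bash
#!/usr/bin/env bash
# .git/hooks/pre-push (make executable with chmod +x)
# Minimal local guardrail sketch: refuse to push if the test suite fails,
# so agent-generated changes get at least one check before CI and review.
set -euo pipefail

echo "pre-push: running tests..."
if ! npm test; then
  echo "pre-push: tests failed; push blocked. Review the changes first." >&2
  exit 1
fi
```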
🤖 AI summary based on 25 tweets
Insights from builders, researchers, investors, and domain experts. Opinions are the authors' own.