🧑‍💻

How People Are Using AI For Coding

In 2025, professional developers pair tightly with AI to ship production work: multiplayer backends arrive via Cursor and Grok in hours, and Claude Code can dump ~20k LOC in a 2.5-hour run that still needs review. Teams route output through guardrails like Codacy’s MCP server, lean on terminal agents such as Gemini CLI (1M-token context, rate-limited), and work in editors that are adding AI modes (VS Code). The working rhythm tilts toward “tight-leash” AI development (load context, ask for options, draft, review docs, test, commit), with model swaps for stability (e.g., Sonnet → Gemini 2.5), error audits (Chip Huyen: “content not found” 20–30%), and repo restructuring to cut agent steps (8→7). Organizations report scale effects (claims of >⅓ of code generated by AI at Google), while leaders emphasize speed and cross-language execution (Andrew Ng). Agents also act beyond code (sending emails or automating flows), pushing teams to enforce explicit permissions, CI checks, and human review before merge.

🤖 ai summary based on 25 tweets

Insights from builders, researchers, investors, and domain experts. Opinions are the authors’ own.


The jump from "agents are nowhere close to working" to "okay, narrow agents for research and coding work pretty well" to (very recently) "general purpose agents are actually useful for a range of tasks" has happened quickly enough (in less than a year) that most people have missed it.

There’s a window right now where AI agents will get built for every vertical and domain. The playbook is to go deep on the context engineering required for the vertical or particular space, figure out the right UX that ties into existing workflows naturally, and connect to the relevant data sources and tools.

Especially early on, it’s useful to get as close to key customers as possible to figure out what’s working and what’s not, and to constantly bring those learnings back to the mothership as improvements. AI is moving so fast right now that there’s a huge premium on making quick updates and seeing how they improve the customer’s workflows. It’s also important to price the agents for maximum adoption, with simple subscription prices or a clear consumption model, and to expect to ride out the cost improvements from AI efficiency. Don’t get too greedy on price right now; market share is likely what matters most.

It can be helpful to go after use cases that are constrained by the availability or high cost of talent. Any incremental boost in productivity in these spaces offers high ROI for the customer, so customers will always be willing to try AI agents to finally get around to solving their problems. This is why AI coding agents, security agents, and legal agents are taking off first: these are all areas where demand for solving the problem has always exceeded the level of talent available. But every vertical has examples of this. There’s a clear moment right now where the next generation of these AI agents will get built across every space.

One interesting thing I learned talking with GitHub CEO @ashtom last week is that GitHub is hiring *more* early-career devs (interns / new grads) than before. I asked: why? Given that lots of companies are hiring less, assuming AI might be at the level of a junior dev (cont'd) https://t.co/XXBDT8ufAK

AI’s ability to make tasks not just cheaper, but also faster, is underrated in its importance in creating business value. For the task of writing code, AI is a game-changer. It takes so much less effort, and is so much cheaper, to write software with AI assistance than without. But beyond reducing the cost of writing software, AI is shortening the time from idea to working prototype, and the ability to test ideas faster is changing how teams explore and invent. When you can test 20 ideas per month, it dramatically changes what you can do compared to testing 1 idea per month. This is a benefit that comes from AI-enabled speed rather than AI-enabled cost reduction.

That AI-enabled automation can reduce costs is well understood. For example, providing automated customer service is cheaper than operating human-staffed call centers. Many businesses are more willing to invest in growth than just in cost savings; and, when a task becomes cheaper, some businesses will do a lot more of it, thus creating growth. But another recipe for growth is underrated: making certain tasks much faster (whether or not they also become cheaper) can create significant new value. I see this pattern across more and more businesses. Consider the following scenarios:

- If a lender can approve loans in minutes using AI, rather than days waiting for a human to review them, this creates more borrowing opportunities (and also lets the lender deploy its capital faster). Even if human-in-the-loop review is needed, using AI to get the most important information to the reviewer might speed things up. The ability to provide loans quickly opens up the market to new customers in need of rapid funds and helps customers who need a quick positive or negative decision to accept the loan or move on.
- If an academic institution gives homework feedback to students in minutes (via sophisticated autograding) rather than days (via human grading), not only is the automation cheaper, the rapid feedback facilitates better learning.
- If an online seller can approve purchases faster, this can lead to more sales. For example, many platforms that accept online ad purchases have an approval process that can take hours or days; if approvals can be done faster, they can earn revenue faster. Further, for customers buying ads, being able to post an ad in minutes lets them test ideas faster and also makes the ad product more valuable.
- If a company’s sales department can prioritize leads and respond to prospective customers in minutes or hours rather than days (closer to when the customers’ buying intent first led them to contact the company), sales representatives might close more deals. Likewise, a business that can respond more quickly to requests for proposals may win more deals.

I’ve written previously about looking at the tasks a company does to explore where AI can help. Many teams already do this with an eye toward making tasks cheaper, either to save costs or to do those tasks many more times. If you’re doing this exercise, consider also whether AI can significantly speed up certain tasks. One place to examine is the sequence of tasks on the path to earning revenue. If some of the steps can be sped up, perhaps this can help revenue growth. Growth is more interesting to most businesses than cost savings, and if there are loops in your business that, when sped up, would drive growth, AI might be a tool to unlock this growth. [Original text: https://t.co/qx2Ir6pkSp ]

Noticing myself adopting a certain rhythm in AI-assisted coding (i.e. code I actually and professionally care about, in contrast to vibe code).

1. Stuff everything relevant into context (this can take a while in big projects; if the project is small enough, just stuff everything, e.g. `files-to-prompt . -e ts -e tsx -e css -e md --cxml --ignore node_modules -o prompt.xml`).
2. Describe the next single, concrete incremental change we're trying to implement. Don't ask for code; ask for a few high-level approaches, with pros/cons. There are almost always a few ways to do things, and the LLM's judgement is not always great. Optionally, make it concrete.
3. Pick one approach, ask for first draft code.
4. Review / learning phase: (manually...) pull up all the API docs in a side browser for functions I haven't called before or am less familiar with, ask for explanations, clarifications, changes; wind back and try a different approach.
5. Test.
6. Git commit. Ask for suggestions on what we could implement next. Repeat.

Something like this feels more along the lines of the inner loop of AI-assisted development. The emphasis is on keeping a very tight leash on this new over-eager junior intern savant with encyclopedic knowledge of software, but who also bullshits you all the time, has an over-abundance of courage and shows little to no taste for good code. And emphasis on being slow, defensive, careful, paranoid, and on always taking the inline learning opportunity, not delegating. Many of these stages are clunky and manual and aren't made explicit or super well supported yet in existing tools. We're still very early and so much can still be done on the UI/UX of AI-assisted coding.
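As a rough illustration of that inner loop, here is a minimal TypeScript sketch: it loads the `prompt.xml` context produced by `files-to-prompt`, asks the model for approaches before code, then asks for a first draft. The `askModel` helper, its endpoint, the response shape, and the example feature ("optimistic updates to the comment form") are all assumptions standing in for whatever chat client and task you actually have; the review, test, and commit steps stay manual on purpose.

```ts
import { readFileSync } from "node:fs";

// Stand-in for your real chat-completion client (OpenAI, Anthropic, a local
// server, etc.). The endpoint URL and { reply } response shape are assumptions.
async function askModel(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8000/v1/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: [{ role: "user", content: prompt }] }),
  });
  const data = await res.json();
  return data.reply;
}

async function innerLoop() {
  // Step 1: context was built beforehand with files-to-prompt (see above).
  const context = readFileSync("prompt.xml", "utf8");

  // Step 2: ask for approaches, not code.
  const approaches = await askModel(
    context +
      "\n\nNext change: add optimistic updates to the comment form." +
      " Give 2-3 high-level approaches with pros/cons. No code yet."
  );
  console.log(approaches);

  // Step 3: pick one approach and ask for a first draft.
  const draft = await askModel(
    context +
      "\n\nGo with approach 1. Draft a minimal change; touch as few files as possible."
  );
  console.log(draft);

  // Steps 4-6 (review the docs, test, git commit) are deliberately left to the human.
}

innerLoop().catch(console.error);
```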

Even though I’m a much better Python than JavaScript developer, with AI assistance, I’ve been writing a lot of JavaScript code recently. AI-assisted coding, including vibe coding, is making specific programming languages less important, even though learning one is still helpful to make sure you understand the key concepts. This is helping many developers write code in languages we’re not familiar with, which lets us get code working in many more contexts!

My background is in machine learning engineering and back-end development, but AI-assisted coding is making it easy for me to build front-end systems (the part of a website or app that users interact with) using JavaScript (JS) or TypeScript (TS), languages that I am weak in. Generative AI is making syntax less important, so we can all simultaneously be Python, JS, TS, C++, Java, and even Cobol developers. Perhaps one day, instead of being “Python developers" or “C++ developers,” many more of us will just be “developers”!

But understanding the concepts behind different languages is still important. That’s why learning at least one language like Python still offers a great foundation for prompting LLMs to generate code in Python and other languages. If you move from one programming language to another that carries out similar tasks but with different syntax (say, from JS to TS, C++ to Java, or Rust to Go), once you’ve learned the first set of concepts, you’ll know a lot of the concepts needed to prompt an LLM to code in the second language. (Although TensorFlow and PyTorch are not programming languages, learning the concepts of deep learning behind TensorFlow will also make it much easier to get an LLM to write PyTorch code for you, and vice versa!) In addition, you’ll be able to understand much of the generated code (perhaps with a little LLM assistance).

Different programming languages reflect different views of how to organize computation, and understanding the concepts is still important. For example, someone who does not understand arrays, dictionaries, caches, and memory will be less effective at getting an LLM to write code in most languages. Similarly, a Python developer who moves toward doing more front-end programming with JS would benefit from learning the concepts behind front-end systems. For example, if you want an LLM to build a front end using the React framework, it will benefit you to understand how React breaks front ends into reusable UI components, and how it updates the DOM data structure that determines what web pages look like. This lets you prompt the LLM much more precisely, and helps you understand how to fix issues if something goes wrong. Similarly, if you want an LLM to help you write code in CUDA or ROCm, it helps to understand how GPUs organize compute and memory.

Just as people who are fluent in multiple human languages can communicate more easily with other people, LLMs are making it easier for developers to build systems in multiple contexts. If you haven’t already done so, I encourage you to try having an LLM write some code in a language you’d like to learn but perhaps haven’t yet gotten around to, and see if it helps you get some new applications to work. [Original text: https://t.co/NdjaPgwwuk ]
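For readers coming from Python, a tiny TSX sketch of the React concepts mentioned in the post above may help: components as reusable building blocks, and a state change that React turns into a DOM update. The component and prop names here (`Greeting`, `App`) are illustrative inventions, not from the original post.

```tsx
import { useState } from "react";

type GreetingProps = { name: string };

// A reusable UI component: the same building block rendered with different props.
function Greeting({ name }: GreetingProps) {
  return <p>Hello, {name}!</p>;
}

export default function App() {
  const [count, setCount] = useState(0);

  // Calling setCount changes state; React re-renders and updates the DOM for us,
  // so we never manipulate document.* directly.
  return (
    <div>
      <Greeting name="Ada" />
      <Greeting name="Grace" />
      <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>
    </div>
  );
}
```

Knowing that the framework owns DOM updates is exactly the kind of concept that lets you prompt an LLM precisely ("add a reusable `Greeting` component and update it via state") rather than asking for vague front-end changes.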

One quick check...