Notes on Four Blog Posts on How I Use LLMs

Over the past few weeks, several top software engineers have published blog posts about how they use AI. Here are the four posts I came across in various forums:

- Why I use Cline for AI Engineering by Addy Osmani
- How I Use “AI” by Nicholas Carlini
- How I Use AI: Meet My Promptly Hired Model Intern by Armin Ronacher
- Building personal software with Claude by Nelson Elhage

Below, I’ve compiled my personal notes on these posts. I’ll highlight key points, share my thoughts, and reflect on what stood out to me as particularly interesting or novel.

Why I use Cline for AI Engineering by Addy Osmani

Author’s Bio: Addy Osmani is an Irish software engineer and engineering leader currently working on the Google Chrome web browser.

Cline is a coding-agent VS Code extension. From the description in the GitHub repo:

Autonomous coding agent right in your IDE, capable of creating/editing files, executing commands, using the browser, and more with your permission every step of the way.

In this blog post, Addy Osmani presents an interesting mental model for thinking about Cline not as an interactive Q&A system, but as a system tool for suggesting or modifying code blocks.

Cline approaches AI assistance differently from most tools in the market. Rather than focusing solely on code generation or completion, it operates as a systems-level tool that can interact with your entire development environment. This becomes particularly valuable when dealing with complex debugging scenarios, large-scale refactoring, or integration testing.

The DeepSeek-R1 + Sonnet hybrid approach

Recent benchmarks and user experiences have shown that combining DeepSeek-R1 for planning with Claude 3.5 Sonnet for implementation can reduce costs by up to 97% while improving overall output quality.

The combination is interesting and looks similar to plumbing various Unix commands through pipes to achieve the desired output rather than using a single command.

Cline’s ability to switch between models seamlessly makes this hybrid approach practical. With the v3.2.6 update, the system even remembers your preferred model for each mode, making it effortless to maintain optimal model selection for different types of tasks. You’re not stuck with a single model’s trade-offs - you can optimize for cost, capability, or speed depending on the specific task at hand.
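The pipe analogy above can be sketched as a two-stage pipeline with Simon Willison’s llm CLI. Everything here is an illustrative assumption: the model names, the prompts, and especially the stubbed llm function, which stands in for the real CLI so the sketch runs offline (Cline does this switching internally, not via a shell script).

```shell
#!/bin/sh
# Plan with a reasoning model, implement with a coding model.
# `llm` is stubbed here so the sketch runs without API keys; in practice
# each stage would call the real llm CLI.
llm() {
  # Stub: echo which model would handle which prompt.
  printf '[%s] %s\n' "$2" "$3"
}

TASK="add pagination to the /users endpoint"

# Stage 1: a reasoning model drafts the plan.
PLAN=$(llm -m deepseek-reasoner "Draft a numbered implementation plan for: $TASK")

# Stage 2: a coding model implements the plan.
llm -m claude-3-5-sonnet-latest "Implement this plan: $PLAN"
```

The point of the split is the same as with Unix pipes: each stage does one thing, and the cheaper model's output becomes the stronger model's input.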

Checkpoints: Version control beyond git

The system operates independently of your regular git workflow, preventing the need to pollute commit history with experimental changes.

This is the first time I have come across this concept, and I am intrigued to try it out.

Computer Use: Runtime awareness

Above, Cline was able to launch Chrome to verify that a set of changes rendered correctly. It noticed a Next.js error and could proactively address it without me copy/pasting issues back and forth. This is a game-changer.

This bridges a crucial gap between static code analysis and runtime behavior - something particularly valuable when dealing with complex web applications or distributed systems.

This looks promising if you do a lot of web and front-end development.

Conclusion

The trade-off of additional complexity for greater control and capability makes sense for serious development work. While simpler tools might be sufficient for basic tasks, Cline’s system-level approach provides unique value for complex engineering challenges.

Cline’s philosophy of being a coding agent is what stands out.

How I Use “AI” by Nicholas Carlini

Author Bio: Nicholas Carlini is a research scientist at Google DeepMind.

But the reason I think that the recent advances we’ve made aren’t just hype is that, over the past year, I have spent at least a few hours every week interacting with various large language models, and have been consistently impressed by their ability to solve increasingly difficult tasks I give them. And as a result of this, I would say I’m at least 50% faster at writing code for both my research projects and my side projects as a result of these models.

The approach of tinkering with LLMs to solve coding problems on a regular basis is noteworthy.

If I were to categorize these examples into two broad categories, they would be “helping me learn” and “automating boring tasks”. Helping me learn is obviously important because it means that I can now do things I previously would have found challenging; but automating boring tasks is (to me) actually equally important because it lets me focus on what I do best, and solve the hard problems.

Rather than thinking of an LLM as a replacement for your job, treating it as a companion that improves your skill set and expands your knowledge seems to be a common pattern.

As a tutor for new technologies

But today, I’ll just ask a language model to teach me Docker. So here’s me doing just that.

This is a recurring theme, and a lot of folks are doing it. Last week, I was using DeepSeek to do something similar and was impressed by the accuracy and reliability (though there’s still a long way to go for unpopular languages). A year back, LLMs had high false positive rates for suggestions (anecdotal). Recently, at least for the top six languages, the quality of the suggestions has significantly improved.
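The tutoring pattern works well as a reusable prompt template. A minimal sketch with the llm CLI follows; the prompt wording is my own, and the actual llm invocation is left commented out since it needs the tool and an API key.

```shell
#!/bin/sh
# Build a tutoring prompt for any topic (wording is my assumption).
topic="Docker"
prompt="Teach me $topic from first principles. After each concept, give me one hands-on command to try and explain its output."
printf '%s\n' "$prompt"
# To actually run it: llm "$prompt"
```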

To simplify code

Now golly has a CLI tool that does what I want—all I needed was a way to call into it correctly. The first step of this was to take the C++ code that supports something like 50 different command line options and just get it to do exactly the one thing I wanted. So I just dumped all 500 lines of C++ into the LLM and asked for a shorter file that would do the same thing.

And you know what? It worked flawlessly. Then I just asked for a Python wrapper around the C++ code. And that worked too.

This is a fabulous testimonial. The concept of using it for code reviews, combined with a reasoning model, can significantly enhance one’s journey in mastering a particular language. Overall, I can see the scientist at work here. It’s an excellent use case for automating mundane tasks and increasing utilitarian value. The article is perfect for anyone hesitant to try LLMs but looking for ways to improve their quality of life through automation.

How I Use AI: Meet My Promptly Hired Model Intern by Armin Ronacher

Author Bio: Armin is a well-known software engineer who has created various popular libraries such as Flask and Jinja, and is a co-founder of Sentry, a SaaS product.

#!/bin/sh
MODEL=phi4:latest
if ping -q -c1 google.com >/dev/null 2>&1; then
  MODEL=claude-3-5-sonnet-latest
fi
OLD_TEXT="$(cat)"
llm -m "$MODEL" "$OLD_TEXT" -s "fix spelling and grammar in the given text,
    and reply with the improved text and no extra commentary.
    Use double spacing."

This script can automatically switch between a local model (phi4 via Ollama) and a remote one (claude-3-5-sonnet-latest) based on internet connectivity. With a command like !llm-spell in Vim, I can fix up sentences with a single step.
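The fallback logic at the heart of the script can be isolated into a small function. In this sketch the connectivity probe is passed in as a command, so the branching can be exercised offline; the model names mirror the script above.

```shell
#!/bin/sh
# Choose a model based on a connectivity probe, mirroring Armin's script.
# The probe command is a parameter so the logic is testable without a network.
pick_model() {
  if "$@" >/dev/null 2>&1; then
    echo "claude-3-5-sonnet-latest"   # online: remote model
  else
    echo "phi4:latest"                # offline: local Ollama model
  fi
}

# Real usage would be: MODEL=$(pick_model ping -q -c1 google.com)
pick_model true    # prints claude-3-5-sonnet-latest
pick_model false   # prints phi4:latest
```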

This is relatable to me because I use grammar correction tools both at work and for personal blog posts—ensuring my writing is clear and polished. Like Armin, I face a similar challenge as a non-native English speaker: maintaining a consistent voice and keeping the same level of engagement throughout a post. To address this, I use the llm command and also invoke it through Raycast as a script command.

Writing with AI

Here are some of the things I use AI for when writing:

Grammar checking: I compare the AI’s suggested revisions side by side with my original text and pick the changes I prefer.

Restructuring: AI often helps me see when my writing is too wordy. In the days before AI, I often ended up with super long articles that did not read well and that I did not publish. Models like o1 are very helpful in identifying things that don’t need to be said.

Writing Notes and finding key points: Here, I ask the AI to read through a draft “like a Computer Science 101 student” and take notes. This helps me see if what it absorbed matches what I intended to convey.

Roast my Article: I have a few prompts that ask the AI to “roast” or criticize my article, as if commenting on Reddit, Twitter, or Hacker News. Even though these critiques seem shallow, they can sting, and they often highlight weaknesses in my argument or lack of clarity. Even if they don’t necessarily impact the writing, they prime me for some of the feedback I inevitably receive.

Identifying jargon: If I worry there’s too much jargon, I use AI to resolve acronyms and point out technical terms I’ve used without explanation, helping me make the text more accessible.
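For the grammar-checking comparison, a plain diff works well once the AI’s revision is saved to a file. The filenames and sample sentences below are my own assumptions; in practice revised.txt would come from something like `llm-spell < original.txt > revised.txt`.

```shell
#!/bin/sh
# Compare an original draft against an AI-revised copy.
printf 'Ths sentence have a typo in it.\n' > original.txt
printf 'This sentence has a typo in it.\n' > revised.txt

# Side-by-side view; diff exits nonzero when files differ, so mask the status.
diff -y original.txt revised.txt || true
```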

I find three use cases particularly helpful: (1) writing notes and identifying key points, (2) having my article critiqued, and (3) identifying jargon.

Writing notes and identifying key points: This approach provides valuable feedback on your article by placing the LLM in the reader’s shoes.

Talking to Her

ChatGPT is also incredibly helpful when having to work with multiple languages. For a recent example, my kids have Greek friends and we tried to understand the difference between some Greek words that came up. I have no idea how to write it, Google translate does not understand my attempts of pronouncing them either. However, ChatGPT does. If I ask it in voice mode what “pa-me-spee-tee” in Greek means it knows what I tried to mumble and replies in a helpful manner.

Lately, I’ve been thinking about improving my pronunciation of English words using LLMs. For context, I grew up in Tamil Nadu, in southern India, and I speak with a thick accent. I’ve often had to repeat myself multiple times due to my pronunciation. I hate it when my jokes fall flat because of it. Now, it’s time to experiment with LLMs to improve this.

Final Thoughts

My approach isn’t about outsourcing thinking, but augmenting it: using LLMs to accelerate grunt work, untangle mental knots, and prototype ideas faster. Skepticism is healthy, but dismissing AI outright risks missing its potential as a multiplier for those willing to engage critically.

I like the use of the word “augmenting”; it feels apt.

Building personal software with Claude by Nelson Elhage

Working between defined interfaces

When working with Claude, I found myself instinctively choosing to break down problems into ones with relatively well-defined and testable interfaces. For instance, instead of asking it to make changes to the Rust and elisp code in one query, I would ask for a feature to be added to the Rust side, which I would then spot-check by inspecting the output JSON, and then ask for the corresponding elisp changes.

This is something I often do in my codebase. When I don’t like certain parts of a larger task, I ask the LLM to handle tasks A, B, and C separately. For example, when writing a long SQL query, I iteratively develop smaller pieces and then combine them—often starting with the most exciting parts first!

The post details how the author fixed a performance issue with an Emacs Lisp function that interacted with Obsidian.md. It’s a fantastic showcase of using LLMs and their coding capabilities.
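The piece-by-piece approach can be sketched in shell: develop each clause of a long query separately (each one small enough to hand to an LLM and verify on its own), then plug the components together. The table and column names here are made up for illustration.

```shell
#!/bin/sh
# Hypothetical illustration: build a long SQL query from separately
# developed, individually verifiable pieces.
select_part="SELECT user_id, COUNT(*) AS order_count"
from_part="FROM orders"
where_part="WHERE created_at >= '2024-01-01'"
group_part="GROUP BY user_id"

# Finally, plug the components together.
query="$select_part $from_part $where_part $group_part"
printf '%s\n' "$query"
```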

Conclusion

I enjoyed reading all these articles, especially seeing how everyone perceives and utilises the power of LLMs to improve their work and life.

One thing is clear: to get real value out of LLMs, you need curiosity and a willingness to invest time in learning. The rewards come with consistent effort.

Each of the four blog posts took a unique approach: Osmani treats Cline as a systems-level coding agent, Carlini uses LLMs to learn new things and automate boring tasks, Ronacher leans on them for writing and language, and Elhage builds personal software against well-defined interfaces.

Disclaimer: This post is not AI-generated slop (no summarization), but an LLM was used to improve grammar.