7 Ways I Use AI to Boost My Productivity as a Software Engineer
My automation stack that turns coding tasks into review sessions
Since GitHub Copilot's public release in 2022, which was the first AI coding tool to achieve massive adoption, the landscape has evolved rapidly. We've gone from simple autocomplete suggestions to sophisticated development environments like Cursor and other intelligent editors, and now to fully "vibe-coded" software. What started as helpful code completion has transformed into comprehensive workflow automation that fundamentally changes how we build software.
While the productivity improvement has yet to be proven at a macro level, I've systematically tested and integrated these tools throughout my entire development process, and I dare claim the results are measurable: features that previously took me days now ship in hours, and debugging sessions that once consumed hours now resolve in minutes. To give a very concrete example, the WordPress integration for the SaaS I'm currently working on, BlogSEO, was written entirely by Claude Code. I knew nothing about WordPress, and yet I was able to fully integrate this CMS into my SaaS in 6 hours (and with almost no bugs ;)).
In this post, I'm sharing the specific tools and workflows that have dramatically boosted my productivity.
Important caveat: I still write some of my backend code manually, particularly for complex and critical logic. Building these systems myself makes debugging future issues significantly easier and ensures I can efficiently update and extend the code when requirements change.
1. Autocomplete beyond single lines
Tools: Cursor, GitHub Copilot
This is the most common use of AI in software engineering. You install an extension like GitHub Copilot in your IDE (such as VS Code), or install a dedicated IDE like Cursor, and press Tab for the win as the lines autocomplete themselves. If you don't use this today, I really advise you to try it. It barely changes your existing coding workflow; it's like boosting your IDE's IntelliSense with AI. You still write the code, the AI just makes your IDE's suggestions more powerful.
These tools have evolved to the point where they can also suggest the next edit, in a different part of the code, after you accept a suggestion, so they can really save you time when you're writing common boilerplate, or heavily boilerplated code (like closing tags in frontend JSX).
My personal take: don't use SuperMaven, an alternative to the tools I mentioned. It was marketed as the fastest autocomplete tool, but however fast it is, all the code it has produced for me was pure garbage.
2. Ticket → Feature automation with AI agents
Tools: Claude Code, Gemini CLI
My favorite way to use AI. If you master this one, it's the real time saver. Most of the tools mentioned above also provide an "agent mode," but from my tests, Claude Code and Gemini CLI are the best ones so far. If you haven't tried Claude Code or Gemini CLI, you can think of it as your software engineering intern. You ask it to do tasks, and it implements them. Simple. Provided your instructions are clear enough, it'll manage to do what you ask.
Limitations: the agent only has the context of what is in your git repository. So if you have a microservices architecture with 50 RPCs in your repo, it might not work very well, because the agent can't read the code of the remote procedures (but you'd have problems with humans as well :}). If you define your cloud architecture directly in the web console, without any Infrastructure as Code files checked into git (this is a bad practice, but it happens), you'll also have problems. The same goes for your database schema: if the agent doesn't have access to the migrations or the schema, it will have a hard time interacting with your database.
Bonus: if you use Linear or a similar ticket management tool, you can connect your agent to it using MCP. If your tickets are well written, you can essentially tell it "Plan then implement ticket XXXX" and go make yourself a coffee. The ticket will be done when you come back.
Bonus 2: you can configure your agent to commit, push, and create PRs/MRs for you, so sometimes you can fully automate the ticket handling; you just have to review or test the feature.
My personal take: Claude Code is the real deal. If you use it in "plan mode" (Shift+Tab), it will think for several minutes before proposing an implementation plan that you can approve or reject, and then it writes everything without you needing to confirm every change. What I've observed is that if the implementation plan is good, you can be almost sure it will implement the feature the way you want, so the only things left to do are testing the feature and reviewing the code.

3. Replacing complex heuristics with API calls
Tools: OpenAI API, AWS Bedrock
Sometimes, you cannot write code that works 100% of the time. Some tasks are inherently statistical. In such cases, engineers often rely on heuristics: a set of rules, based on observations, that covers most cases and solves the task a good percentage of the time. An example is formatting data from heterogeneous sources when doing data scraping. You can write rules to try to standardize the information, but given the heterogeneous nature of your sources, you can always expect the formatting/standardization rules to break for some cases.
That's where AI can really be a game changer. Instead of pulling your hair out trying to cover every possible scenario in your code, you can just make an API call to an AI model provider with the task at hand, and the intelligent, robust nature of the model will often perform better than a thousand if statements.
Example: I recently had to write code that detects whether a webpage is a 404 or "not found" page. I could write complex heuristics: first check the HTTP status, then, if it's a 200 status code (you would be surprised by how many websites return a 200 status with a "not found" page…), parse the HTML, remove the boilerplate elements, and look for words such as "not found", etc. Or I can just make an API call with a cheap and fast model, ask it "is this a not found page?", and it will answer for less than 1/1000th of a dollar, in under a second.
Bottom line: it's quite obvious, but the more logic you delegate to the AI, the bigger your OpenAI invoice. Make sure to have the right monitoring systems in place so you don't go out of business.
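For illustration, here is a minimal sketch of the kind of guardrail I mean: a counter that estimates spend from the token usage your provider reports and refuses further calls past a budget. The per-million-token prices are placeholders I made up for the example; check your provider's current pricing, and use their billing alerts on top of something like this.

```python
# Minimal spend guardrail for LLM API usage: estimate cost from token counts
# and stop before blowing the budget. Prices are placeholders, not real rates.
class BudgetGuard:
    def __init__(self, budget_usd: float,
                 input_price_per_mtok: float = 0.15,    # placeholder rate
                 output_price_per_mtok: float = 0.60):  # placeholder rate
        self.budget = budget_usd
        self.spent = 0.0
        self.in_price = input_price_per_mtok
        self.out_price = output_price_per_mtok

    def record(self, input_tokens: int, output_tokens: int) -> None:
        """Add one call's estimated cost, from the usage the API returned."""
        self.spent += (input_tokens * self.in_price
                       + output_tokens * self.out_price) / 1_000_000

    def allow_next_call(self) -> bool:
        return self.spent < self.budget

# Usage: record each call's reported usage, check before the next one.
guard = BudgetGuard(budget_usd=50.0)
guard.record(input_tokens=2_000, output_tokens=500)
```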
4. Screenshot → React component conversion
Tools: Claude Code, Cursor, GitHub Copilot
It's as easy as it sounds. You take a screenshot, give it to your AI agent, and ask it to implement it in whatever frontend framework you're using. And you've got it. Sometimes the result doesn't match exactly, so you can just screenshot the result, give it to the agent, and rinse and repeat until it reaches your desired state. Stupidly simple.
My personal take: frontend work at companies that have designers is basically a translation job. Implementing a Figma design is not something I would call high-value-added work. And it turns out AI is very good at transforming screenshots or Figma designs into React components. So I very rarely write frontend code anymore. I'm much faster at describing what I want, or providing a screenshot to Claude Code, than at writing the JSX myself. The only things I still write by hand are usually the callbacks; other than that, I haven't written much frontend code in the past 6 months.
5. Automatic PR/MR reviews
Tools: GitHub Copilot, CodeRabbit
Even if you don't use AI to write some of your code, you might find it useful as a first layer of review. Tools like GitHub Copilot and CodeRabbit now integrate with GitHub and GitLab to review your code when you open a merge/pull request. If you have access to these tools, you might as well enable the feature, as it can catch some nice bugs before they obliterate your dev environment :)
My personal take: sometimes GitHub Copilot reviews code written by GitHub Copilot and points out issues in it. That's when I knew my job was still safe for at least a few years. 🤞

6. PR/MR summaries
Tools: GitHub Copilot, Claude Code, Cursor
The aforementioned tools also make it possible to summarize your changes when you open a PR/MR.
My personal take: in my experience, GitHub Copilot writes garbage summaries that merely paraphrase the diff, so I prefer using Claude Code for this.
7. Debugging
Tools: GitHub Copilot, Claude Code, Cursor
I can't remember the last time I looked something up on Stack Overflow. AI can be a huge time saver when it comes to fixing bugs. Instead of trying to understand the bug, looking up a solution online, translating it into your context, and finally implementing a fix, you can now grab the red text in your terminal, copy-paste it to your favorite agent, ask "fix pls", and watch it do the work for you. You just need to review the fix to understand what went wrong, which makes debugging almost 3-4x faster for me. It sounds stupid put like that, but it's faster and much more efficient in 99% of my cases.
The same thing also applies if you are good enough at describing your bugs. You don't need an error message for this to work; just providing enough context for the agent to investigate potential issues can save you hours of debugging. A screenshot also works for frontend bugs.
I'm sure I have forgotten some AI tools and use cases in this post. I tried to focus on the tools I have tested extensively, but I might have missed some other very good ones. If you have suggestions, feel free to reach out; I'd love to hear your opinion!
Personal Update
Since the last issue of my newsletter, the number of subscribers has really started taking off, so thanks a lot for taking the time to read it; it means the world to me! If you have any feedback, I'd love to hear what you think - don't hesitate to reach out! (You can just reply to the email if you're reading this from your inbox.)
This post was a bit more technical than usual, but the next issue will be more business-focused, I promise!
In June, I made my latest project, BlogSEO, publicly available. It's only been 4 months since I started working on it, but it has already exceeded my expectations: we have already generated 300 articles across 20+ different websites, and all the customers are satisfied with the tool so far.
Acquisition-wise, I have managed to keep growing organically thanks to users' referrals and word of mouth, and I can't thank them enough for their trust and support; they truly are my heroes.
The next month will be key as I keep improving the product to fully address my users' issues with new features and higher-quality articles. I plan on launching the tool on big platforms like Product Hunt by the end of August, but before that, I need to make sure the tool is 100% ready for self-serve usage, meaning it needs to be more intuitive, and a lot of documentation & help resources have to be prepared.
If you have a website, and want to grow your organic traffic while you sleep, you should give it a try! 👇
Cheers!