
Discussions about AI-assisted coding are everywhere—and for good reason. The topic tends to stir up a mix of emotions. Some people are curious about the possibilities, some are excited about improving their day-to-day efficiency, and others are worried these tools will eventually get “smart” enough to replace them.
In this article, I will share my own experiences using AI as a coding assistant in my daily workflow.
For context, I’m a full stack engineer with 12 years of web development experience. My current focus is UI development with React and TypeScript.
Depending on the project, I use a variety of LLMs and AI tools, including Claude and GitHub Copilot.
Why Context Matters So Much
Regardless of which model you use, getting good results requires preparation. LLMs produce dramatically better output when they’re given sufficient context about:
- The problem space
- The tech stack
- Architectural constraints
- Coding standards and preferences
For example, if the only instruction provided is:
“Create a reusable React dropdown component”
…the response could reasonably be:
- A fully custom component with inline styles (sketched after this list)
- A ShadCN-based implementation assuming Tailwind
- A wrapper around a Bootstrap dropdown
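To make that concrete, here is a sketch of the first option: the kind of fully custom, inline-styled component a model might plausibly produce with no further context. This is illustrative only, not output from any particular model.

```tsx
// Hypothetical "context-free" dropdown: it works, but it uses inline
// styles and skips keyboard navigation and ARIA wiring entirely.
import { useState } from "react";

type DropdownProps = {
  label: string;
  options: string[];
  onSelect: (option: string) => void;
};

export function Dropdown({ label, options, onSelect }: DropdownProps) {
  const [open, setOpen] = useState(false);

  return (
    <div style={{ position: "relative", display: "inline-block" }}>
      <button onClick={() => setOpen(!open)}>{label}</button>
      {open && (
        <ul
          style={{
            position: "absolute",
            margin: 0,
            padding: 4,
            listStyle: "none",
            border: "1px solid #ccc",
            background: "#fff",
          }}
        >
          {options.map((option) => (
            <li key={option}>
              <button
                style={{ display: "block", width: "100%", textAlign: "left" }}
                onClick={() => {
                  onSelect(option);
                  setOpen(false);
                }}
              >
                {option}
              </button>
            </li>
          ))}
        </ul>
      )}
    </div>
  );
}
```

It compiles and renders, but it assumes nothing about your stack, which is exactly the problem.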
Without more information, the LLM has no idea:
- Which version of React you’re using
- Whether the app uses SSR
- How important accessibility is
- What design system or component library is standard in your project
Many LLMs won’t ask follow-up questions; they’ll just guess the “most likely” solution.
Global Instructions: The Real Productivity Unlock
You could solve this by writing extremely detailed prompts, but that quickly becomes tedious and undermines the efficiency gains AI is supposed to provide.
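For example, a one-off prompt detailed enough to remove the ambiguity might look like this (the stack details here are invented for illustration):

“Create a reusable dropdown component for React 18 with TypeScript. We use ShadCN components styled with Tailwind, the app is server-rendered, and keyboard navigation and ARIA attributes are required. No inline styles.”

Repeating a preamble like that in every prompt gets old fast.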
A better approach is to supply global context that applies to every prompt.
When using AI tools inside your IDE, this often means configuration files like:
- CLAUDE.md (for Claude)
- copilot-instructions.md (for GitHub Copilot)
These files are typically generated during a one-time setup. The AI scans the repository and records important assumptions, such as:
- “This application uses .NET 8.0”
- “UI components use ShadCN with Tailwind and Radix primitives”
- “Authentication is handled via Microsoft Entra ID”
You can also manually update these files or even ask the LLM to update them for you.
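As an illustration, a trimmed instruction file might look something like this (the project details and paths are examples, not a real setup):

```markdown
# CLAUDE.md

## Stack
- .NET 8.0 backend; React + TypeScript frontend
- UI components use ShadCN with Tailwind and Radix primitives
- Authentication is handled via Microsoft Entra ID

## Conventions
- Reuse existing components in src/components/ui before writing new ones
- No inline styles; use Tailwind utility classes
```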
If you ask for a “reusable React dropdown component” before and after generating these instruction files, the difference in output quality is usually dramatic. The AI can move faster and align with your repository’s conventions.
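With instructions like the ones above in place, the same prompt tends to come back looking more like this. The sketch assumes a standard ShadCN setup where the generated dropdown-menu components live under @/components/ui:

```tsx
// Hypothetical "with context" dropdown: builds on the project's existing
// ShadCN/Radix primitives instead of reinventing them.
import {
  DropdownMenu,
  DropdownMenuTrigger,
  DropdownMenuContent,
  DropdownMenuItem,
} from "@/components/ui/dropdown-menu";

type DropdownProps = {
  label: string;
  options: string[];
  onSelect: (option: string) => void;
};

export function Dropdown({ label, options, onSelect }: DropdownProps) {
  return (
    <DropdownMenu>
      <DropdownMenuTrigger className="rounded-md border px-3 py-2 text-sm">
        {label}
      </DropdownMenuTrigger>
      <DropdownMenuContent>
        {options.map((option) => (
          <DropdownMenuItem key={option} onSelect={() => onSelect(option)}>
            {option}
          </DropdownMenuItem>
        ))}
      </DropdownMenuContent>
    </DropdownMenu>
  );
}
```

Same prompt, but the output now inherits Radix’s accessibility behavior and the repository’s styling conventions instead of guessing at both.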
Tip: It can be beneficial to separate your instructions into smaller, more specific files in a docs folder (auth.md, data-fetching.md, etc.), and point to them from your LLM-specific files. This lets you keep a single source of truth while allowing multiple LLMs to work efficiently in your project.
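In practice, that can be as simple as keeping the LLM-specific files thin and pointing outward (the third file name here is invented for illustration):

```markdown
# CLAUDE.md (and copilot-instructions.md, kept in sync)

Read the topic guides before generating code:
- docs/auth.md: authentication and token handling
- docs/data-fetching.md: data-fetching and caching conventions
- docs/ui-components.md: component and styling standards
```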
The Limits of Context (and Hallucinations)
Even with excellent context, LLMs aren’t magic.
They’re still prone to hallucination (confidently producing content that is incorrect or completely fabricated). A common pattern looks like this:
“I understand now! The fix is…”
…followed by code that’s:
- More complicated
- Harder to reason about
- Still incorrect
This leads to the real question:
When is it actually efficient to use LLMs, and what are they best at?
The strengths and limitations below reflect typical, out-of-the-box usage. In practice, the more effort you invest in context, instruction files, and guidance, the better the results tend to be.
Where AI Shines
In my experience, AI is most effective in these scenarios:
- Quick prototypes, where code quality isn’t the top priority
- Translating logic from one programming language to another
- Single-purpose functions with complex logic that would normally require stepping through a debugger
Common examples:
- Parsing authentication tokens (see the sketch below)
- Formatting dates or strings in very specific ways
- Creating and explaining regular expressions
- Investigating and narrowing down error causes
- Writing CSS or Tailwind classes
Styling is a bit of a toss-up. The AI often adds unnecessary styles, but if CSS isn’t your strong suit, it can still be a big help.
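To make the token-parsing example concrete, here is the kind of single-purpose helper I would happily hand to an LLM. It is a hypothetical sketch; real token validation belongs in a vetted library.

```ts
// Decode a JWT's payload segment without verifying the signature.
// Useful for debugging; never use this for authorization decisions.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const [, payload] = token.split(".");
  if (!payload) throw new Error("Malformed JWT: missing payload segment");

  // JWTs use base64url encoding; map it back to standard base64
  // and restore the "=" padding before decoding.
  const base64 = payload.replace(/-/g, "+").replace(/_/g, "/");
  const padded = base64.padEnd(Math.ceil(base64.length / 4) * 4, "=");

  return JSON.parse(atob(padded));
}
```

This is exactly the category where an LLM saves real time: fiddly encoding details, a narrow contract, and easy-to-write tests.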
Where AI Falls Short
There are also clear areas where AI is far less effective (without additional guidance or setup):
- High-level architecture and long-term planning
LLMs don’t naturally think ahead unless explicitly told to, and even then the results often fall short of what an experienced architect would expect.
- Producing high-quality, maintainable code quickly
AI can generate a lot of code fast, but well-structured, modular code often takes longer to review and refactor than writing it yourself. I frequently spend significant time cleaning up AI-generated code.
Final Thoughts
After using AI in my everyday work, my conclusion is fairly simple:
AI is excellent at increasing speed and efficiency, but it does not replace good engineering judgment.
On its own, AI tends to optimize for immediacy rather than long-term maintainability. When left unguided, it will readily generate solutions that work today while introducing architectural fragility or technical debt tomorrow. That’s where skepticism is warranted.
That said, well-architected software is achievable with AI when the right conditions are in place. With strong global context, clearly defined architectural constraints, well-maintained instruction files, and, most importantly, a developer who understands what good architecture looks like, AI can become a genuinely effective collaborator.
Used thoughtfully, AI becomes a powerful accelerator. Used blindly, it becomes a source of technical debt.