Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
151,466 stories · 33 followers

Coding with AI, not letting AI code for me


With generative AI trending, here is my take on it: it is very powerful if we use it in the right way. A knife can be a cooking tool or something else entirely... you get the idea.

I just prepared a document that works as a library of prompts for coding with AI without letting AI code for you. It is only my perspective, but I currently think that if I want to be a software developer nowadays, I really have to understand the code, not just copy and paste it. That said, as I mentioned, it is only my point of view, not the truth.

At the same time, I think we should take advantage of AI. So I am sharing the online doc where this library of prompts will live: https://docs.google.com/document/d/1KH1O48it3-r-jUrLNenUeavv-1NJdVyprzoQUHqGGp8/edit?usp=sharing

Thank you very much for reading.

Read the whole story
alvinashcraft
8 hours ago
Pennsylvania, USA

Does AI Really Make Coders Faster?

One developer tells MIT Technology Review that AI tools weaken the coding instincts he used to have. And beyond that, "It's just not fun sitting there with my work being done for me." But is AI making coders faster? "After speaking to more than 30 developers, technology executives, analysts, and researchers, MIT Technology Review found that the picture is not as straightforward as it might seem..." For some developers on the front lines, initial enthusiasm is waning as they bump up against the technology's limitations. And as a growing body of research suggests that the claimed productivity gains may be illusory, some are questioning whether the emperor is wearing any clothes....

Data from the developer analytics firm GitClear shows that most engineers are producing roughly 10% more durable code — code that isn't deleted or rewritten within weeks — since 2022, likely thanks to AI. But that gain has come with sharp declines in several measures of code quality. Stack Overflow's survey also found trust and positive sentiment toward AI tools falling significantly for the first time. And most provocatively, a July study by the nonprofit research organization Model Evaluation & Threat Research (METR) showed that while experienced developers believed AI made them 20% faster, objective tests showed they were actually 19% slower...

Developers interviewed by MIT Technology Review generally agree on where AI tools excel: producing "boilerplate code" (reusable chunks of code repeated in multiple places with little modification), writing tests, fixing bugs, and explaining unfamiliar code to new developers. Several noted that AI helps overcome the "blank page problem" by offering an imperfect first stab to get a developer's creative juices flowing. It can also let nontechnical colleagues quickly prototype software features, easing the load on already overworked engineers. These tasks can be tedious, and developers are typically glad to hand them off. But they represent only a small part of an experienced engineer's workload. For the more complex problems where engineers really earn their bread, many developers told MIT Technology Review, the tools face significant hurdles...

The models also just get things wrong. Like all LLMs, coding models are prone to "hallucinating" — it's an issue built into how they work. But because the code they output looks so polished, errors can be difficult to detect, says James Liu, director of software engineering at the advertising technology company Mediaocean. Put all these flaws together, and using these tools can feel a lot like pulling a lever on a one-armed bandit. "Some projects you get a 20x improvement in terms of speed or efficiency," says Liu. "On other things, it just falls flat on its face, and you spend all this time trying to coax it into granting you the wish that you wanted and it's just not going to..."

There are also more specific security concerns, she says. Researchers have discovered a worrying class of hallucinations where models reference nonexistent software packages in their code. Attackers can exploit this by creating packages with those names that harbor vulnerabilities, which the model or developer may then unwittingly incorporate into software.

Other key points from the article:

  • LLMs can only hold limited amounts of information in context windows, so "they struggle to parse large code bases and are prone to forgetting what they're doing on longer tasks."
  • "While an LLM-generated response to a problem may work in isolation, software is made up of hundreds of interconnected modules. If these aren't built with consideration for other parts of the software, it can quickly lead to a tangled, inconsistent code base that's hard for humans to parse and, more important, to maintain."
  • "Accumulating technical debt is inevitable in most projects, but AI tools make it much easier for time-pressured engineers to cut corners, says GitClear's Harding. And GitClear's data suggests this is happening at scale..."
  • "As models improve, the code they produce is becoming increasingly verbose and complex, says Tariq Shaukat, CEO of Sonar, which makes tools for checking code quality. This is driving down the number of obvious bugs and security vulnerabilities, he says, but at the cost of increasing the number of 'code smells' — harder-to-pinpoint flaws that lead to maintenance problems and technical debt."

Yet the article cites a recent Stanford University study that found employment among software developers aged 22 to 25 dropped nearly 20% between 2022 and 2025, "coinciding with the rise of AI-powered coding tools." The story is part of MIT Technology Review's new Hype Correction series of articles about AI.
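The package-hallucination attack described above suggests a straightforward mitigation: verify that every dependency an AI assistant adds actually exists on the package index before anything is installed. Below is a minimal Python sketch, not from the article; the PyPI JSON endpoint (`https://pypi.org/pypi/<name>/json`) is real, but the function names and workflow are illustrative:

```python
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Ask PyPI's JSON API whether a package name is registered (network call)."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        # 404 (unknown package) or network failure: treat as "not verified".
        return False


def find_suspect_packages(requirements, exists=package_exists_on_pypi):
    """Return the requirement names the index does not know about.

    Anything returned here may be a hallucinated dependency and deserves
    human review before `pip install` ever runs.
    """
    return [name for name in requirements if not exists(name)]
```

Running a check like this in CI over a lockfile's package names would flag hallucinated dependencies before they are installed; the injectable `exists` callback keeps the logic testable without network access.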

Read more of this story at Slashdot.


A reality check on AI engineering: Lessons from the trenches of an early stage startup


Like most tech leaders, I’ve spent the last year swimming in the hype: AI will replace developers. Anyone can build an app with AI. Shipping products should take weeks, not months.

The pressure to use AI to rapidly ship products and features is real. I’ve lost track of how many times I’ve been asked something to the effect of, “Can’t you just build it with AI?” But the reality on the ground is much different.

AI isn’t replacing engineers. It’s replacing slow engineering.

At Replify, we’ve built our product with a small team of exceptional full-stack engineers using AI as their copilot. It has transformed how we plan, design, architect, and build, but it’s all far more nuanced than the narrative suggests.

What AI is great at today

It can turn an unacceptable timeline into a same-day release. One of our engineers estimated a change to our voice AI orchestrator would take three days. I sanity-checked the idea with ChatGPT, had it generate a Cursor prompt, and Cursor implemented the change correctly on the first try. We shipped the whole thing in one hour: defined, coded, reviewed, tested, and deployed.

Getting it right on the first try is rare, but that kind of speed is now often possible.

It’s better than humans at repo-wide, difficult debugging. We had a tricky user-reported bug that one of our developers spent two days chasing. With one poorly written prompt, Cursor found the culprit in minutes and generated the fix. We pushed a hot fix to prod in under 30 minutes.

Architecture decisions are faster and better. What used to take months and endless meetings in enterprise environments now takes a few focused hours. We’ll dump rambling business requirements into an LLM, ask it to stress-test ideas, co-write the documentation, and iterate through architectural options with pros, cons, and failure points. It instantly surfaces scenarios and ideas we didn’t think of and produces clean artifacts for the team.

The judgment and most ideas are still ours, but the speed and completeness of the thinking are on a completely different level.

Good-enough UI and documentation come for free. When you don’t need a design award, AI can generate a good, clean user interface quickly. Same with documentation: rambling notes in, polished documentation out.

Prototype speed is now a commodity. In the early days of a product, AI lets you get to “something that works” shockingly fast. Technology is rarely the competitive moat anymore; the moat is things like distribution, customers, and operational excellence.

Where AI still falls flat

It confidently gives wrong answers. We spent an entire day trying to get ChatGPT and Gemini to solve complex AWS Amplify redirect needs. Both insisted they had the solution. Both were absolutely wrong. Reading the docs and solving “the old-fashioned way” took two hours and revealed the LLMs’ approaches weren’t even possible.

Two wasted engineers, one lost day.

You still need to prompt carefully and review everything. AI is spectacular at introducing subtle regressions if you’re not explicit about constraints and testing. It will also rewrite perfectly fine code if you tell it something is broken (and you’re wrong).

It accelerates good engineering judgment. It also accelerates bad direction.

Infra, security, and scaling require real expertise. Models can talk about architecture and infrastructure, but coding assistants still struggle to produce secure, scalable infrastructure-as-code. They don’t always see downstream consequences like cost spikes or exposure risks without a knowledgeable prompter. 

Experts still determine the most robust solution.

Speed shifts the bottlenecks. Engineering moves faster with AI, so product, UI/UX, architecture, QA, and release must move faster, too. 

One bonus non-AI win helping us here: Loom videos for instant ticket creation (as opposed to laborious requirement documentation) result in faster handoffs, fewer misunderstandings, more accurate output, and better async velocity.

So what does this mean for startups?

  • AI lets great engineers become superhuman: Small teams can now ship at speeds that used to require entire departments.
  • The bar for engineers goes up, not down: Fewer people, but they must be excellent.
  • Technology alone is no longer a reliable moat: Everyone has AI. Your defensibility comes from things like distribution, network, brand, and operational excellence.
  • AI won’t 10x everything: Some parts will fly. Others still depend on time, people, and judgment.
  • Leaders must be hands-on with AI and technical strategy: Without that, AI only introduces new bottlenecks and issues.

The reality check

AI isn’t replacing engineers. It’s replacing slow feedback loops, tedious work, and barriers to execution.

We’re not living in a world where AI writes, deploys, and scales your entire product (yet). But we are living in a world where a three-person team can compete with a 30-person team — if they know how to wield AI well.


Claude Code’s Slack Beta Pushes “Chat-First” Software Development Forward

Anthropic is rolling out an integration between Claude Code and Slack, allowing developers to trigger coding tasks directly from team conversations.

Manage Agents, Instructions, Prompts, & Skills in Seconds with this VS Code Extension




The Most Important AI Stories This Week

From: AIDailyBrief
Duration: 27:22
Views: 272

A rapid-fire roundup of the biggest AI stories of the week, from Google’s Gemini 3 Flash pushing the speed and efficiency frontier to fresh OpenAI fundraising rumors that highlight the escalating cost of compute and shifting cloud alliances. Amazon’s AI reorganization and leadership changes signal a tighter focus on models, agents, and custom silicon, while ChatGPT’s new app directory points toward an AI platform layer that plugs into everyday tools. The episode closes on the politics of AI infrastructure, including chip supply tension and the backlash around proposals to pause data center construction, with major implications for innovation, access, and competition in 2026.

Brought to you by:
KPMG – Go to www.kpmg.us/ai to learn more about how KPMG can help you drive value with our AI solutions.
Vanta – Simplify compliance – https://vanta.com/nlw

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at
Join our Discord: https://bit.ly/aibreakdown
