
'The Downside To Using AI for All Those Boring Tasks at Work'

The promise of AI-powered workplace tools that sort emails, take meeting notes, and file expense reports is finally delivering meaningful productivity gains -- one software startup reported a 20% boost around mid-2025 -- but companies are discovering an unexpected tradeoff: employees are burning out from the relentless pace of high-level cognitive work. Roger Kirkness, CEO of 14-person software startup Convictional, noticed that after AI took the scut work off his team's plates, their days became consumed by intensive thinking, and they were mentally exhausted and unproductive by Friday. The company transitioned to a four-day workweek; the same amount of work gets done, Kirkness says. The underlying problem, according to Boston College economist and sociologist Juliet Schor, is that businesses tend to simply reallocate the time AI saves. Workers who once mentally downshifted for tasks like data entry are now expected to maintain intense focus through longer stretches of data analysis. "If you just make people work at a high-intensity pace with no breaks, you risk crowding out creativity," Schor says.

Read more of this story at Slashdot.


Database in an "inconsistent state" and other errors


Hello. I was directed here for help by Microsoft Tech Support because the problem is "file specific" and not at the app level. I am not computer savvy and have never used a forum like this. Tech support just kind of dumped me here and said "have at it," so any guidance, help, etc. would be appreciated.

I use Microsoft Office 365 as a personal/individual subscriber, and my Access program started issuing error messages over the past week. The biggest concern is a database called Comic Books, and I've attached the error message. When I click OK, it attempts to repair, fails and issues the second message. Another database -- an earlier copy of Comic Books from mid-December -- opens OK, but has its own problems, such as when I attempt to simply copy a table (see third error message). I don't know if these are related. Can anyone help me? Is there any hope of ever regaining access to the Comic Books database, which was last accessible and working on January 4? Thank you.



A New JavaScript Framework? In this Economy?

Sigment is a new framework that simplifies by leaving things out.

After 20 years in software, full-stack developer Yaniv Soussana is tired of complexity in React-based JavaScript frameworks. So the creator of the app conversion tool Wappaa did what developers tend to do: He built a new framework, called Sigment.

“I wanted to create something better than React and Angular, because I’m already tired of all this — I wanted to create something simple for developers,” he said.

Sigment simplifies by exemption, which means it does not:

  • Combine HTML with JavaScript;
  • Use JSX; or
  • Create a virtual DOM.

So when would you use Sigment?

Well, for starters, if you’re fed up with React-based frameworks or don’t want to learn React, it might be a good option. The open source framework claims maximum runtime performance with minimal overhead, full control over rendering through its API, zero-config development without transpilation, and fine-grained reactivity.

Let’s dig in.

The Problem With Mixing HTML and JavaScript

React mixes JavaScript with HTML syntax (JSX), and, frankly, Soussana isn’t a fan. Sigment does not mix the two, which he says makes the syntax shorter and easier. This makes the framework more accessible to those who know vanilla JavaScript but don’t want to learn React (which created and uses JSX).

That makes it possible for the framework to do more than build single-page applications; it also supports an HTML-first architecture, he said. Plus, Sigment supports dynamic or incremental rendering.

“The developer can create a small website and then when the user starts to move, it will, on time, on the fly, create the new element and everything,” he explained, adding it will also put that element into the cache for better performance.

Why Sigment Doesn’t Use JSX

JSX stands for JavaScript XML. It’s a syntax extension that allows developers to write HTML-like code directly inside their JavaScript.

For context, React relies on JSX syntax, as do other React-based frameworks. Preact, Qwik and Solid JS also use JSX. With JSX, developers write JavaScript that generates HTML.

The issue with JSX is it requires transpilation, or conversion, of the code, plus additional tooling such as Babel, Webpack or Vite. And while that feels declarative, it adds complexity to the build process, according to Soussana.

Sigment relies on Templates, which means the UI is written in a specialized version of HTML that the framework engine understands. Svelte, by the way, also rather famously uses this approach, as do Angular and Vue.

Instead of JSX, Sigment relies on JavaScript tag functions. Instead of writing:

<div class="container">,

…a developer might write: div({ class: 'container' }).

This results in “lightning-fast” performance and faster iteration because the code is already valid JavaScript, according to Sigment’s website. Also, because Sigment doesn’t use JSX, developers can create websites with pure HTML and simple syntax, Soussana told The New Stack. It also means the framework works without creating a virtual DOM, he explained.
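The tag-function idea is easy to sketch in plain JavaScript. The sketch below is an illustrative approximation, not Sigment's actual API; in a browser the functions would call document.createElement, while plain objects stand in for DOM nodes here:

```javascript
// Illustrative sketch of tag functions (not Sigment's actual API).
// Each tag name becomes a function that builds a node directly,
// so the code is already valid JavaScript and needs no transpilation.
const el = (tag) => (props = {}, ...children) => ({ tag, props, children });

// Plain objects stand in for document.createElement results.
const div = el('div');
const span = el('span');

const tree = div({ class: 'container' }, span({}, 'Hello'));
console.log(tree.tag, tree.props.class); // div container
```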

No Virtual DOM

I asked Soussana why he decided not to use a virtual DOM, which is a lightweight, simplified copy of the “real” DOM; i.e., the actual HTML elements on the screen. A virtual DOM acts as a “drafting board” between the developer’s code and the actual browser.

Soussana pointed out that Svelte and SolidJS also do not use the virtual DOM.

“We are in the new generation,” he said. “We don’t need the virtual DOM anymore. It’s just add[ing] more complexity and heaviness, and also, it takes more time to compile.”

Instead, Sigment uses Signals. Angular and Qwik creator Miško Hevery once explained a Signal as a value you place in a bucket, comparing it to a traffic cop that tells the framework whenever that value is accessed. When a Signal is read, the framework notes that someone read the value and then moves on to the next one.

That keeps performance light, Soussana explained: Sigment renders on first run, then at runtime renders new elements only when the user needs them and saves each one to the cache, so the next request can be served directly from the cache.
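The read-tracking behavior Hevery describes can be sketched in a few lines of plain JavaScript. This is a generic illustration of how Signals work, not Sigment's implementation:

```javascript
// Generic Signal sketch (illustrative, not Sigment's implementation):
// reads register the currently running effect, writes re-run only those effects.
let activeEffect = null;

function createSignal(value) {
  const subscribers = new Set();
  let current = value;
  const read = () => {
    if (activeEffect) subscribers.add(activeEffect); // record who read the value
    return current;
  };
  const write = (next) => {
    current = next;
    subscribers.forEach((fn) => fn()); // notify only the dependents
  };
  return [read, write];
}

function effect(fn) {
  activeEffect = fn;
  fn(); // first run registers the signals this effect reads
  activeEffect = null;
}

const [count, setCount] = createSignal(0);
let doubled;
effect(() => { doubled = count() * 2; });
setCount(5);
console.log(doubled); // 10
```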

No virtual DOM and no JSX means a smaller bundle size as well, Soussana said, adding that this creates better performance and better experiences for the user and developer.

The post A New JavaScript Framework? In this Economy? appeared first on The New Stack.


Owners, not renters: Mozilla’s open source AI strategy


The future of intelligence is being set right now, and the path we’re on leads somewhere I don’t want to go. We’re drifting toward a world where intelligence is something you rent — where your ability to reason, create, and decide flows through systems you don’t control, can’t inspect, and didn’t shape. In that world, the landlord can change the terms anytime, and you have no recourse but to accept what you’re given. 

I think we can do better. Making that happen is now central to what Mozilla is doing.

What we did for the web

Twenty-five years ago, Microsoft Internet Explorer controlled 95% of the browser market, which meant Microsoft controlled how most people experienced the internet and who could build what on what terms. Mozilla was born to change this, and Firefox succeeded beyond what most people thought possible — dropping Internet Explorer’s market share to 55% in just a few years and ushering in the Web 2.0 era. The result was a fundamentally different internet. It was faster and richer for everyday users, and for developers it was a launchpad for open standards and open source that decentralized control over the core technologies of the web.

There’s a reason the browser is called a “user agent.” It was designed to be on your side — blocking ads, protecting your privacy, giving you choices that the sites you visited never would have offered on their own. That was the first fight, and we held the line for the open web even as social networks and mobile platforms became walled gardens. 

Now AI is becoming the new intermediary. It’s what I’ve started calling “Layer 8” — the agentic layer that mediates between you and everything else on the internet. These systems will negotiate on our behalf, filter our information, shape our recommendations, and increasingly determine how we interact with the entire digital world. 

The question we have to ask is straightforward: Whose side will your new user agent be on?

Why closed systems are winning (for now)

We need to be honest about the current state of play: Closed AI systems are winning today because they are genuinely easier to use. If you’re a developer with an idea you want to test, you can have a working prototype in minutes using a single API call to one of the major providers. GPUs, models, hosting, guardrails, monitoring, billing — it all comes bundled together in a package that just works. I understand the appeal firsthand, because I’ve made the same choice myself on late-night side projects when I just wanted the fastest path from an idea in my head to something I could actually play with.

The open-source AI ecosystem is a different story. It’s powerful and advancing rapidly, but it’s also deeply fragmented — models live in one repository, tooling in another, and the pieces you need for evaluation, orchestration, guardrails, memory, and data pipelines are scattered across dozens of independent projects with different assumptions and interfaces. Each component is improving at remarkable speed, but they rarely integrate smoothly out of the box, and assembling a production-ready stack requires expertise and time that most teams simply don’t have to spare. This is the core challenge we face, and it’s important to name it clearly: What we’re dealing with isn’t a values problem where developers are choosing convenience over principle. It’s a developer experience problem. And developer experience problems can be solved.

The ground is already shifting

We’ve watched this dynamic play out before and the history is instructive. In the early days of the personal computer, open systems were rough, inconsistent, and difficult to use, while closed platforms offered polish and simplicity that made them look inevitable. Openness won anyway — not because users cared about principles, but because open systems unlocked experimentation and scale that closed alternatives couldn’t match. The same pattern repeated on the web, where closed portals like AOL and CompuServe dominated the early landscape before open standards outpaced them through sheer flexibility and the compounding benefits of broad participation.

AI has the potential to follow the same path — but only if someone builds it.  And several shifts are already reshaping the landscape: 

  • Small models have gotten remarkably good. 1 to 8 billion parameters, tuned for specific tasks — and they run on hardware that organizations already own;
  • The economics are changing too. As enterprises feel the constraints of closed dependencies, self-hosting is starting to look like sound business rather than ideological commitment (companies like Pinterest have attributed millions of dollars in savings to migrating to open-source AI infrastructure);
  • Governments want control over their supply chain. Governments are becoming increasingly unwilling to depend on foreign platforms for capabilities they consider strategically important, driving demand for sovereign systems; and,
  • Consumer expectations keep rising. People want AI that responds instantly, understands their context, and works across their tools without locking them into a single platform.

The capability gap that once justified the dominance of closed systems is closing fast. What remains is a gap in usability and integration. The lesson I take from history is that openness doesn’t win by being more principled than the alternatives. Openness wins when it becomes the better deal — cheaper, more capable, and just as easy to use.

Where the cracks are forming

If openness is going to win, it won’t happen everywhere at once. It will happen at specific tipping points — places where the defaults haven’t yet hardened, where a well-timed push can change what becomes normal. We see four.

The first is developer experience. Developers are the ones who actually build the future — every default they set, every stack they choose, every dependency they adopt shapes what becomes normal for everyone else. Right now, the fastest path runs through closed APIs, and that’s where most of the building is happening. But developers don’t want to be locked in any more than users do. Give them open tools that work as well as the closed ones, and they’ll build the open ecosystem themselves.

The second is data. For a decade, the assumption has been that data is free to scrape — that the web is a commons to be harvested without asking. That norm is breaking, and not a moment too soon. The people and communities who create valuable data deserve a say in how it’s used and a share in the value it creates. We’re moving toward a world of licensed, provenance-based, permissioned data. The infrastructure for that transition is still being built, which means there’s still a chance to build it right.

The third is models. The dominant architecture today favors only the biggest labs, because only they can afford to train massive dense transformers. But the edges are accelerating: small models, mixtures of experts, domain-specific models, multilingual models. As these approaches mature, the ability to create and customize intelligence spreads to communities, companies, and countries that were previously locked out.

The fourth is compute. This remains the choke point. Access to specialized hardware still determines who can train and deploy at scale. More doors need to open — through distributed compute, federated approaches, sovereign clouds, idle GPUs finding productive use.

What an open stack could look like

Today’s dominant AI platforms are building vertically integrated stacks: closed applications on top of closed models trained on closed data, running on closed compute. Each layer reinforces the next — data improves models, models improve applications, applications generate more data that only the platform can use. It’s a powerful flywheel. If it continues unchallenged, we arrive at an AI era equivalent to AOL, except far more centralized. You don’t build on the platform; you build inside it.

There’s another path. The combination of Linux, Apache, MySQL, and PHP — the LAMP stack — won because it became easier to use than the proprietary alternatives, and because it let developers build things that no commercial platform would have prioritized. The web we have today exists because that stack existed.

We think AI can follow the same pattern. Not one stack controlled by any single party, but many stacks shaped by the communities, countries, and companies that use them:

  • Open developer interfaces at the top. SDKs, guardrails, workflows, and orchestration that don’t lock you into a single vendor;
  • Open data standards underneath. Provenance, consent, and portability built in by default, so you know where your training data came from and who has rights to it;
  • An open model ecosystem below that. Smaller, specialized, interchangeable models that you can inspect, tune to your values, and run where you need them; and 
  • Open compute infrastructure at the foundation. Distributed and federated hardware across cloud and edge, not routed through a handful of hyperscalers.

Pieces of this stack already exist — good ones, built by talented people. The task now is to fill in the gaps, connect what’s there, and make the whole thing as easy to use as the closed alternatives. That’s the work.

Why open source matters here

If you’ve followed Mozilla, you know the Manifesto. For almost 20 years, it’s guided what we build and how — not as an abstract ideal, but as a tool for making principled decisions every single day. Three of its principles are especially urgent in the age of AI:

  • Human agency. In a world of AI agents, it’s more important than ever that technology lets people shape their own experiences — and protects privacy where it matters most;
  • Decentralization and open source. An open, accessible internet depends on innovation and broad participation in how technology gets created and used. The success of open-source AI, built around transparent community practices, is critical to making this possible; and
  • Balancing commercial and public benefit. The direction of AI is being set by commercial players. We need strong public-benefit players to create balance in the overall ecosystem.

Open-source AI is how these principles become real. It’s what makes plurality possible — many intelligences shaped by many communities, not one model to rule them all. It’s what makes sovereignty possible — owning your infrastructure rather than renting it. And it’s what keeps the door open for public-benefit alternatives to exist alongside commercial ones.

What we’ll do in 2026

The window to shape these defaults is still open, but it won’t stay open forever. Here’s where we’re putting our effort — not because we have all the answers, but because we think these are the places where openness can still reset the defaults before they harden.

Make open AI easier than closed. Mozilla.ai is building any-suite, a modular framework that integrates the scattered components of the open AI stack — model routing, evaluation, guardrails, memory, orchestration — into something coherent that developers can actually adopt without becoming infrastructure specialists. The goal is concrete: Getting started with open AI should feel as simple as making a single API call. 

Shift the economics of data. The Mozilla Data Collective is building a marketplace for data that is properly licensed, clearly sourced, and aligned with the values of the communities it comes from. It gives developers access to high-quality training data while ensuring that the people and institutions who contribute that data have real agency and share in the economic value it creates.

Learn from real deployments. Strategy that isn’t grounded in practical experience is just speculation, so we’re deepening our engagement with governments and enterprises adopting sovereign, auditable AI systems. These engagements are the feedback loops that tell us where the stack breaks and where openness needs reinforcement.

Invest in the ecosystem. We’re not just building; we’re backing others who are building too. Mozilla Ventures is investing in open-source AI companies that align with these principles. Mozilla Foundation is funding researchers and projects through targeted grants. We can’t do everything ourselves, and we shouldn’t try. The goal is to put resources behind the people and teams already doing the work.

Show up for the community. The open-source AI ecosystem is vast, and it’s hard to know what’s working, what’s hype, and where the real momentum is building. We want to be useful here. We’re launching a newsletter to track what’s actually happening in open AI. We’re running meetups and hackathons to bring builders together. We’re fielding developer surveys to understand what people actually need. And at MozFest this year, we’re adding a dedicated developer track focused on open-source AI. If you’re doing important work in this space, we want to help it find the people who need to see it.

Are you in?

Mozilla is one piece of a much larger movement, and we have no interest in trying to own or control it — we just want to help it succeed. There’s a growing community of people who believe the open internet is still worth defending and who are working to ensure that AI develops along a different path than the one the largest platforms have laid out. Not everyone in that community uses the same language or builds exactly the same things, but something like a shared purpose is emerging. Mozilla sees itself as part of that effort.

We kept the web open not by asking anyone’s permission, but by building something that worked better than the alternatives. We’re ready to do that again.

So: Are you in?

If you’re a developer building toward an open source AI future, we want to work with you. If you’re a researcher, investor, policymaker, or founder aligned with these goals, let’s talk. If you’re at a company that wants to build with us rather than against us, the door is open. Open alternatives have to exist — that keeps everyone honest.

The future of intelligence is being set now. The question is whether you’ll own it, or rent it.

We’re launching a newsletter to track what’s happening in open-source AI — what’s working, what’s hype, and where the real momentum is building. Sign up here to follow along as we build.

Read more here about our emerging strategy, and how we’re rewiring Mozilla for the era of AI.

The post Owners, not renters: Mozilla’s open source AI strategy  appeared first on The Mozilla Blog.


How to Not Be Overwhelmed by AI – A Developer’s Guide to Using AI Tools Effectively


If you’re a developer, you’ll likely want to use AI to boost your productivity and help you save time on menial, repetitive tasks. And nearly every recruiter these days will expect you to understand how to work with AI tools effectively. But there’s no real manual for this – you figure it out by doing.

While AI tools can be very helpful, some people believe that using them makes you less of a developer. But I don’t believe that’s the case.

The problem begins when you accept an AI’s output without review or understanding and push it straight to production. This increases debugging time and introduces avoidable errors, especially since AI can hallucinate when it lacks proper context. As the developer, you must always remain in control.

I had an interview where I was given four project use cases, each with a strict time slot, and all deliverables had to be built and pushed within 24 hours. They asked me if I knew how to use AI to boost productivity, and I confidently said yes. What I did not realize at the time was that the technical assessment itself was designed to test exactly that. It wasn’t just about whether I could write code, but whether I could also use AI effectively while still thinking like an engineer.

If there is one skill worth adding to your toolkit this year as an engineer, it’s learning how to use AI properly. That means understanding prompt engineering, knowing when to rely on AI, and most importantly, staying in control as the driver while AI remains the tool.

In this guide, we’ll move beyond the hype and look at the practical reality of engineering in the age of AI. We’ll cover the mental models required to use these tools safely, how to avoid the "verification gap" where bugs hide in plain sight, and take a tour of the current toolkit, from simple editors to autonomous agents. Finally, we’ll walk through a real-world Flutter workflow to show you exactly how to integrate these skills into your daily coding routine.

Table of Contents:

  1. Prerequisites

  2. How to Work Effectively with AI

  3. Understanding the Machine: Why It Hallucinates

  4. The Reality of AI Development

  5. The Skill of the Future: Context Management

  6. A Tour of a Few Toolkits: What to Use and Why

  7. A Crash Course in Prompt Engineering

  8. How to Actually Get Started

    • A Simple Practical Workflow Example
  9. Security and Ethics

  10. Conclusion

  11. References

Prerequisites

Before you install every extension in the marketplace, you need to ground yourself in the fundamentals. AI is a multiplier, not a substitute. If you multiply zero by a million, you still get zero.

So here are the key skills you’ll need if you want to use AI effectively:

  1. Code literacy is non-negotiable: You must be able to read and understand code faster than you can write it. If you can’t spot a logic error or a security vulnerability in an AI-generated snippet, you are introducing technical debt that will be difficult to pay off later.

  2. System design thinking: AI is great at writing functions, but terrible at architecture. You need to know how the pieces fit together – database schemas, API contracts, state management – before you ask AI to build them.

  3. Debugging skills: When AI code fails (and it will), it often fails in obscure ways. You need the grit and knowledge to dig into stack traces without relying on the AI to "fix it" blindly in an infinite loop.

How to Work Effectively with AI

To truly master AI, you need to look beyond the tools themselves. While knowing which extension to install is helpful, a comprehensive approach requires addressing the workflow changes and psychological shifts that come with AI-assisted development.

Many resources out there touch on the "what," but to move from a junior user to a senior practitioner, you must understand the "how." The following five concepts focus on the Senior Engineer’s perspective: managing risk, maintaining quality, and ensuring that your skills grow rather than atrophy.

Concept 1: The "Junior Intern" Mental Model

The biggest mistake developers make is treating AI like a senior architect when it should be viewed as a talented but inexperienced junior intern: it’s fast and can type faster than you, it’s eager and will always give an answer even when it’s guessing, and it lacks context about the full history and nuanced business logic behind a codebase.

The reason for this specific mindset is about trust and verification. When a junior developer starts on their first day, you likely don’t trust them to push to production immediately – not because they aren't smart, but because they lack the historical context of the codebase and haven't proven their judgment yet. Instead, you review their pull requests line-by-line.

You should treat AI with that same level of initial scrutiny. If you wouldn’t blindly merge a PR from a new hire without understanding how it handles edge cases, you shouldn’t blindly merge code from ChatGPT or Gemini, either.

Concept 2: The Verification Gap

There is a cognitive phenomenon every AI user encounters: it’s much harder to read code than to write it. This is the case because when you write code yourself you build a mental map of the logic as you type.

But when AI generates fifty lines of code in a second, you skip that mental mapping process, and the danger is that you glance at the code, it looks correct syntactically, and you accept it – with the consequence that two weeks later, when a bug appears, you have no memory of how that function works since you never actually “wrote” it.

In this case, the solution is to force yourself to trace the execution and, if you don’t immediately grasp the logic, ask the AI to explain the code line-by-line before you accept it.

Concept 3: AI-Driven Test Driven Development (TDD)

If you’re worried about AI writing buggy code, the best safety net is writing the tests first, since surprisingly AI is often better at writing tests than implementation code. This is because tests describe behavior, which LLMs excel at parsing.

The workflow is to first prompt the test – for example, “Write a Jest unit test for a function that calculates tax, handling 0%, negative numbers, and missing inputs” – then verify that the test cases make sense and cover edge cases. Only after that should you ask the AI to generate the function to pass those specific tests.

This reverses the risk: instead of hoping the AI code works, you define “working” first via the test and force the AI to meet that standard.
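As a sketch of that workflow, here is a hypothetical tax function with its tests defined first. Plain assertions are used instead of Jest so the example stays self-contained; the function name and edge cases are illustrative:

```javascript
// Test-first sketch: the tests are written (and reviewed) before the
// implementation, so they define what "working" means.
function testCalculateTax(calculateTax) {
  console.assert(calculateTax(100, 0) === 0, '0% rate yields 0');
  console.assert(calculateTax(100, 0.5) === 50, '50% of 100 is 50');
  let threw = false;
  try { calculateTax(-5, 0.1); } catch { threw = true; }
  console.assert(threw, 'negative amounts are rejected');
}

// Only now is the implementation written (or AI-generated) to pass those tests.
function calculateTax(amount, rate) {
  if (amount == null || rate == null) throw new Error('missing input');
  if (amount < 0) throw new Error('negative amount not allowed');
  return amount * rate;
}

testCalculateTax(calculateTax);
```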

Concept 4: The "Blank Page" Paralysis vs. Refactoring

AI is a “velocity tool,” but it works differently depending on the phase of work. From 0 to 1 (creation), AI is excellent because it kills the “blank page syndrome” by giving you a skeleton to start with. From 1 to N (refactoring), AI truly shines but is often underused.

So don’t just use AI to write new code. You can also use it to clean old code with prompts like “Rewrite this function to be more readable,” “Convert this promise-chain syntax to async/await,” or “Identify any potential race conditions in this block.”
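For example, the promise-chain conversion might look like this before and after. fetchUser and fetchOrders are hypothetical stand-ins for real async calls:

```javascript
// Hypothetical async helpers standing in for real network calls.
function fetchUser(id) { return Promise.resolve({ id, name: 'Ada' }); }
function fetchOrders(user) { return Promise.resolve([{ user: user.id, total: 42 }]); }

// Before: promise-chain syntax.
function loadBefore(id) {
  return fetchUser(id)
    .then((user) => fetchOrders(user))
    .then((orders) => orders[0].total);
}

// After: the async/await rewrite you would ask the AI to produce.
async function loadAfter(id) {
  const user = await fetchUser(id);
  const orders = await fetchOrders(user);
  return orders[0].total;
}

loadAfter(1).then((total) => console.log(total)); // prints 42
```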

Concept 5: Fighting Skill Atrophy

There’s a legitimate fear that relying on AI will make you a “worse” developer over time. If you’re working with Flutter and you never write a TextFormField validator or a StreamBuilder function again, will you forget how they work?

To prevent this, use the “Tutor” Strategy: use AI to teach, not just to solve. Avoid prompts like “Write a regex to validate an email,” which only gives you code, and instead ask for explanations like “Explain how to implement an email validator in Flutter, breaking down each part of the logic”. By doing this, you gain both knowledge and code.

Make it a habit to ask “Why?” whenever AI suggests a widget, package, or pattern you haven’t used. Have it compare alternatives, and turn each coding session into a learning session that strengthens your Flutter or general development skills.

Understanding the Machine: Why It Hallucinates

To control an AI tool, you must understand its nature. Large Language Models (LLMs) are not "knowledge bases" or "search engines" in the traditional sense. Rather, they are prediction engines.

When you ask an AI to write a Dart function, it isn't "thinking" about computer science logic. It’s calculating the statistical probability of the next token (word or character) based on the millions of lines of code it has seen during training.

  1. The trap: It prioritizes plausibility over truth. It will confidently invent a library import that doesn't exist because the name sounds like a library that should exist.

  2. The fix: Treat AI output as a "suggestion," not a solution. If you don't understand why the code works, you are not ready to commit it.

The Reality of AI Development

AI likely isn’t going to replace your job, and it’s not going to stop junior developers from being hired. What puts developers at risk is relying on AI without understanding the fundamentals.

As Sundar Pichai once shared, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This allows engineers to move faster and focus on higher-impact work. That’s the reality today.

No product manager expects you to take longer to build a feature, fix a bug, or optimize performance. You are expected to be an expert at programming and competent at using AI assistants to get work done efficiently.

The Skill of the Future: Context Management

If there’s one technical limitation you must understand, it’s the Context Window. Think of the context window as the AI's "short-term working memory." Every time you chat with an AI, you are feeding it data. But this bucket has a limit. Here are a couple of issues you’ll need to be aware of:

  1. Context rot: If you have a chat session that is 400 messages long, the AI often "forgets" the instructions you gave it at the start.

  2. Context pollution: If you paste five different files that aren't relevant to the bug you are fixing, you confuse the model. It’s like trying to solve a math problem while someone shouts random history facts at you.

To combat these issues, you’ll need to learn to curate context. Don't just dump your whole repo into a chat. Select only the specific files, interfaces, and error logs relevant to the immediate task.
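Curating context can be as simple as a small helper that assembles only the files you chose, under an explicit budget. This sketch (Python for brevity; the file names, budget, and layout are all illustrative) shows the shape of the idea, using a character count as a rough stand-in for tokens:

```python
def build_context(files, error_log, char_budget=12_000):
    """Assemble a prompt context from only the relevant files.

    `files` is a list of (name, text) pairs; `char_budget` is a rough
    character budget standing in for the model's token limit -- both
    values are illustrative and should be tuned per model and task.
    """
    parts = [f"## Error log\n{error_log}"]
    used = len(parts[0])
    for name, text in files:
        chunk = f"\n## {name}\n{text}"
        if used + len(chunk) > char_budget:
            break  # stop rather than pollute the window with extra files
        parts.append(chunk)
        used += len(chunk)
    return "".join(parts)
```

The key design choice is that the error log goes first and files are added in your order of relevance, so when the budget runs out, it is the least relevant material that gets dropped.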

A Tour of a Few Toolkits: What to Use and Why

I haven’t fully mastered AI development myself, but I started intentionally embracing it in the middle of last year – and my perspective has changed. While some AI tools still feel experimental, many are genuinely helping developers solve problems.

Here is a breakdown of the current landscape, from simple helpers to full-blown agents.

1. The In-Editor Assistants (The "Co-Pilots")

These tools live in your IDE. They are your pair programmers.

GitHub Copilot:

Copilot provides both autocomplete and a chat interface, making it ideal for generating boilerplate code, writing unit tests, or explaining legacy code.

To get started, install the VS Code extension, then start typing a function name or write a descriptive comment like // function to parse CSV and return JSON, and let Copilot autocomplete the implementation for you. You can read more about Copilot’s features here.
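To make that concrete, here is the kind of implementation an assistant might autocomplete from a one-line comment like that (shown in Python rather than the comment's JavaScript-style syntax, purely for brevity; always review whatever the tool actually emits before accepting it):

```python
import csv
import io
import json

# parse CSV and return JSON -- the sort of code an assistant
# might autocomplete from a single descriptive comment.
def csv_to_json(csv_text: str) -> str:
    """Parse CSV text (first row = headers) into a JSON array of objects."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows)

print(csv_to_json("name,age\nAda,36"))  # -> [{"name": "Ada", "age": "36"}]
```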

GIF of GitHub Copilot Edits in Visual Studio

Gemini Code Assist:

Gemini Code Assist is Google’s enterprise-grade AI for developers. It can read your entire codebase thanks to its massive context window, allowing it to answer questions, suggest refactors, and help navigate complex, multi-file projects. It’s especially useful for large codebases and cloud-native GCP development.

To start using it, install the plugin in IntelliJ or VS Code, connect your Google Cloud project, and use the chat to ask about functions, classes, or files across your repo. You can read more about its features here.

GIF of Gemini Code Assist

2. The AI-Native Editors

These aren't just plugins. Instead, the entire editor is built around AI.

Cursor

Cursor is a fork of VS Code that integrates AI deeply into your workflow, allowing it to “see” your terminal errors, documentation, and entire codebase. It’s best for rapid iteration, with features like “Tab” that predict your next edit, not just your next word.

To get started, download the Cursor IDE (it imports your VS Code settings), open a file, hit Cmd+K (or Ctrl+K), and type a prompt like “Refactor this component to use React Hooks” to let AI assist you directly in your code. You can learn more about Cursor here.

GIF of Cursor

Firebase Studio & Google AI Studio

Firebase Studio is a web-based, agentic environment for full-stack development, letting you go from zero to a deployed app quickly using Google’s ecosystem, including Auth, Firestore, and hosting. It combines Project IDX with Gemini to scaffold backend and frontend code simultaneously, making it ideal for building production-ready applications fast.

Google AI Studio, on the other hand, is focused on AI-assisted prototyping and code generation, letting you experiment with prompts, generate snippets, test models, and explore AI-driven ideas before integrating them into a full workflow like Firebase Studio.

To get started, you can read the docs for Firebase Studio and Google AI Studio.

GIF of Google AI Studio

GIF of Firebase Studio

Flutter in Firebase Studio

Google Antigravity (Agentic AI Development Platform):

Google Antigravity is an agentic AI–first integrated development environment (IDE) created by Google that embeds autonomous AI agents directly into the coding workflow. This lets them understand codebases, plan and execute multi-step engineering tasks such as feature implementation, refactoring, and debugging, and produce reviewable outputs. It goes beyond traditional autocomplete tools to focus on completing real software development work.

You can learn more about Antigravity here.

GIF of Google Antigravity

3. The "Agentic" Tools (CLI and Servers)

These tools don't just write code – they perform actions (run commands, manage files).

Gemini CLI / Claude Code

Gemini CLI and Claude Code are AI-powered command-line interfaces that let you chat with the AI and have it execute terminal commands for you. They’re best for DevOps tasks, complex refactors across multiple files, and setting up development environments.

To get started, install the CLI via your terminal, authenticate, and then type commands like gemini "analyze the logs in /var/log and summarize errors" or claude "scaffold a new Next.js project with Tailwind" to let AI handle the work directly in your terminal.

You can read more about Gemini CLI and Claude Code in their respective docs.

GIF of Google's Gemini CLI

MCP Servers (Model Context Protocol)

MCP is an open standard by Anthropic that lets AI securely connect to your data sources, databases, Slack, local files, and more, so it can “know” your specific business context. It’s best for building custom AI workflows that require direct access to proprietary or internal data.

To get started, the process is a bit more advanced than it is for other AI tools. You’ll need to run an MCP server (similar to a local server) that exposes your database to an AI client like Claude Desktop, allowing the AI to safely query your data. For an additional reference, check out the Figma MCP server documentation.
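As one concrete reference point, Anthropic's MCP quickstart wires its reference filesystem server into Claude Desktop with a JSON config along these lines (the package name mirrors the reference server; the server name and directory path are placeholders you'd replace with your own):

```json
{
  "mcpServers": {
    "project-files": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/projects/my-flutter-app"
      ]
    }
  }
}
```

Once the client restarts, the AI can list and read files under that directory through the server, without you pasting anything into the chat by hand.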

A screenshot of an image gallery next to the codebase. The codebase has a React and Tailwind code representation of the design.

4. The Generators (UI & Full Stack)

These tools focus on generating visual layouts or entire app structures.

v0 / Lovable / Stitch

v0 is a text-to-app tool that converts plain-language prompts into functional UIs. It typically generates React components with Tailwind styling, making it ideal for quickly prototyping dashboards or MVPs.

Lovable focuses on rapid frontend prototyping by turning design ideas or written prompts into live web interfaces without manual coding, helping teams iterate visually.

And Stitch specializes in creating complex UI layouts from text, supporting interactive and responsive components, so developers can generate production-ready React/Tailwind code for multi-component pages and copy it directly into their projects.

To get started with these tools, you can check out their docs here:

  1. v0 docs

  2. Lovable docs

  3. Stitch docs

GIF of Google Stitch

Lovable in Action

GenUI SDK for Flutter

This SDK is a tool that lets AI generate UI widgets dynamically based on user conversations, transforming chatbots from simple text interfaces into interactive experiences – like showing a flight picker or other screens. It’s best for building chatbots that need to render “screens” instead of just responding with text.

To get started, you can check out the google/flutter-genui repository, set up a Flutter project that listens to an LLM stream, and render widgets on the fly as the AI responds.

GitHub - flutter/genui

Builder.io Figma Plugin

The Builder.io Figma plugin allows you to take designs created in Figma and automatically convert them into production-ready frontend code or Builder.io components. It bridges the gap between design and development by letting designers and developers quickly turn visual layouts into working web pages or app interfaces, without manually recreating the design in code.

It also supports interactive elements and responsive layouts, making it ideal for rapid prototyping and accelerating the design-to-development workflow.

builder.io to Figma

Builder.io Figma Plugin

Now that you’re familiar with some of the most popular AI tools out there right now, you’ll need to know the basics of prompt engineering techniques so you can effectively talk to your LLM.

A Crash Course in Prompt Engineering

"Prompt Engineering" sounds like a buzzword, but it’s actually just referring to effective communication with an LLM. A lot of the bad code generated by AI is the result of lazy or ineffective prompting.

Instead of typing something vague and relatively unhelpful, like "Write a function to sort a list," use the C.A.R. framework:

  1. Context: Who is the AI? What is the environment?

    Example: "Act as a Senior Go Engineer. We are working in a cloud-native environment using AWS Lambda."

  2. Action: What specifically do you want?

    Example: "Write a function that sorts a list of User objects by 'LastLogin' date. Handle edge cases where the date is null."

  3. Result: How do you want the output formatted?

    Example: "Provide only the code snippet and one unit test. Do not add conversational filler."

By constraining the AI, you force it to narrow its probabilistic search, resulting in much higher-quality code.
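One way to make the framework a habit is to refuse to send a prompt until all three parts exist. A trivial helper (illustrative only; the labels are just a convention) makes that explicit:

```python
def car_prompt(context: str, action: str, result: str) -> str:
    """Assemble a Context / Action / Result prompt.

    The section labels are only a convention -- the real value is
    forcing yourself to supply all three parts before hitting send.
    """
    return (
        f"Context: {context}\n"
        f"Action: {action}\n"
        f"Result: {result}"
    )

prompt = car_prompt(
    "Act as a Senior Go Engineer working in AWS Lambda.",
    "Write a function that sorts users by LastLogin, handling null dates.",
    "Provide only the code snippet and one unit test.",
)
```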

How to Actually Get Started

You do not need to learn how to use all of these tools – but being familiar with some of them and aware of what’s out there will help prepare you for where software development is heading.

Here’s how you can combat the overwhelm and actually get started honing your skills:

  1. Pick one tool: Start with Cursor or GitHub Copilot. They have the lowest barrier to entry.

  2. Start changing your workflow: Instead of Googling a regex or Dart string-splitting syntax, ask the AI to show you an example and explain how it works.

  3. Review everything: Treat the AI like a junior intern. It’s eager to please but often wrong, so make sure you read every line of code it generates and understand how it works.

  4. Prompt iterate: If the output is bad, don't just delete it. Refine your prompt and work with the AI to improve the code. You can say things like "This code is inefficient," or "Use the repository pattern for this."

A Simple Practical Workflow Example

Let’s look at what this looks like in practice. Imagine you need to build a luxury car rental page that displays car categories and vehicle types. This is a classic UI challenge involving structured layouts, clean visual hierarchy, and smooth user interaction.

Step 1: Create a Context-Rich Prompt

Instead of typing "make a car app home page," type this detailed request into Cursor or Copilot:

"Create a Flutter HomePage widget for a luxury car rental app. Use a CustomScrollView with a SliverAppBar that expands to show a high-res image of a Featured Car. Below that, include a horizontal ListView for categories (SUV, Sports, Electric) and a vertical list of CarCard widgets. Use a dark theme with Colors.grey[900] background and gold accents."

IMG of Copilot with prompt entry

Step 2: The Review (The "Junior Intern" Check)

The AI generates the code, but you won’t want to run it yet. Instead, read through it carefully to catch common Flutter pitfalls, such as placing a vertical ListView inside a CustomScrollView without using SliverList or SliverToBoxAdapter, hardcoding widget heights that can cause overflows on smaller screens, and using NetworkImage without a placeholder or error builder.

IMG of Copilot with generated code

Step 3: The Verification

Before adding the widget to your main navigation, carefully review the AI-generated code to ensure it meets quality standards.

You’ll want to check that it follows Flutter best practices, such as proper widget composition and use of const where possible. Make sure it’s memory-safe with no dangling controllers or listeners, and that the code is readable and maintainable with clear variable naming, indentation, comments, and structure. You’ll also want to check that performance is optimized for smooth scrolling, efficient image loading, and minimal widget rebuilds.

For this project, which is just a UI prototype, you don’t need to check things like error handling, accessibility, or security – but for general projects, those additional checks should also be considered.

Only once the code passes these checks should you integrate it into your main project. This step ensures you’re not blindly trusting the AI output but actively confirming that it’s robust, clean, and production-ready.

I copied the code, opened Android Studio, and pasted it into main.dart in a new Flutter project. You can also easily run it on DartPad.dev. Here are the screenshots showing it in action:

IMG of Running the app in Android Studio

IMG of running app on Dartpad.dev

Step 4: The Iteration

If you look at the project preview now, you’ll notice the category chips look plain. You can reply to the AI:

"The category chips look boring. Refactor the horizontal list to use ChoiceChip widgets with a custom border radius, and add a simple Hero animation to the car images so they transition smoothly to a details page."

IMG of Copilot with prompt

By following this loop – Prompt, Review, Verify, Iterate – you can solve complex, highly specific Flutter problems without getting stuck in the weeds, while ensuring the final code is memory-safe and robust.

The quality of the output is also determined by the model you use. Strong reasoning-focused models like Claude Opus 4.5, Gemini 3 Pro, and similar high-capacity models tend to produce more accurate architectural decisions, cleaner Flutter patterns, and fewer subtle lifecycle or performance issues.

Security and Ethics

As we rush to adopt these tools, it is easy to overlook the implications of sending our code to third-party servers.

The primary security risk is data leakage. When you paste API keys, database credentials, or proprietary algorithms into a public LLM, that data leaves your local machine. If the model providers use your chat history to train future versions of their models, your trade secrets or private keys could theoretically be surfaced in another user's autocomplete suggestions months later. This is why "sanitizing" your input, removing secrets and PII (Personally Identifiable Information), is non-negotiable.
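Sanitizing can be partially automated. The sketch below (illustrative patterns only; real secret scanners such as gitleaks or trufflehog use far larger rule sets) shows the idea of scrubbing obvious secrets and PII before text ever reaches a prompt:

```python
import re

# Illustrative patterns only -- extend this list before relying on it.
_PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED_API_KEY]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"(?i)(password|secret)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]

def sanitize(text: str) -> str:
    """Scrub obvious secrets and PII from text before pasting it into an LLM."""
    for pattern, replacement in _PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

A pass like this is a safety net, not a guarantee; treat it as one layer on top of simply not pasting credentials in the first place.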

Beyond security, there are significant ethical and legal gray areas regarding copyright and ownership. Since LLMs are trained on billions of lines of open-source code, there is an ongoing debate about whether AI-generated code infringes on existing licenses. If an AI reproduces a specific, licensed algorithm verbatim without attribution, using that code in a commercial product could expose your company to legal liability.

To combat these risks, you should advocate for enterprise-grade agreements (like GitHub Copilot Business), which contractually guarantee that your code will not be used for model training. If you cannot afford enterprise tiers, consider using local, open-weights models (using tools like Ollama) for sensitive tasks, ensuring your data never leaves your network.

Finally, always keep a "human in the loop." AI should be treated as a drafting tool, not a decision-maker, ensuring that a human is always accountable for the final output.

Conclusion

I haven’t fully mastered using AI myself, but my perspective has shifted: while some tools still feel experimental, many are already solving real problems and making development easier, the very purpose computers were designed for.

Don’t let the fear of being “replaced” paralyze you. The developers at the most risk are those who refuse to adapt. Take control, experiment, and integrate AI into your workflow.

Now is the time to put this into practice. Start small by testing a specific prompt in a tool like Cursor or Gemini, or challenge yourself with a timed mini-project to simulate an AI-assisted workflow, similar to an interview scenario. These exercises will give you hands-on experience and reveal how AI can amplify your skills, streamline repetitive tasks, and unlock new ways of solving problems.

The future of development isn’t about AI replacing you. Rather, it’s about using it to make you a faster, smarter, and more capable developer.

References:

1. General AI in Software Engineering

  1. Sundar Pichai on AI Code at Google: On Alphabet’s Q3 2024 earnings call, CEO Sundar Pichai revealed that more than 25% of all new code at Google is generated by AI, then reviewed and accepted by engineers. This is the benchmark cited in "The Reality of AI Development" above.

  2. The Model Context Protocol (MCP) Announcement: This is the official introduction of the open standard covered in the "Agentic Tools" section. It was created by Anthropic and recently donated to the Agentic AI Foundation under the Linux Foundation.

  3. The Google Antigravity Announcement: This is the official introduction of Google Antigravity, an agentic AI development platform by Google that embeds autonomous AI agents directly into the software development workflow. It introduces an agent-first IDE experience where AI can plan, execute, and verify complex engineering tasks across the editor, terminal, and connected tools, moving beyond traditional code completion or chat-based assistance.

2. Deep Dives into the Toolkit

  1. Cursor’s "Composer" and Visual Editor: Cursor recently released a visual editor that allows you to drag-and-drop elements and edit code through a browser preview, which bridges the gap between design and code.

  2. GitHub Copilot Agents & MCP: GitHub has officially integrated MCP into Copilot, allowing the coding agent to connect to external tools like Slack, Jira, or your own local databases.

  3. Claude Code CLI (Autonomous Tasks): Documentation on how the Claude CLI handles "checkpointing," allowing you to rewind code if an autonomous agent goes down the wrong path.

3. Frontend & UI Generation

  1. v0 by Vercel: Vercel’s official platform for "Generative UI." It uses React, Tailwind, and Shadcn UI to turn prompts into full-screen previews.

  2. GenUI SDK for Flutter: The official documentation for the Google/Flutter team's "Generative UI" experiment, which allows AI to render widgets on the fly.

4. Developer Productivity Research

  1. GitHub Data on Developer Velocity: GitHub’s research shows that developers using AI complete tasks up to 55% faster than those who don't.



Building ChatGPT Apps with React Components: Introducing KendoReact + Apps SDK

Our new KendoReact + ChatGPT repo makes it easier to build Apps SDK experiences using KendoReact Free, a collection of free enterprise-grade components for React.

The OpenAI Apps SDK opens the door to a new category of applications: chat-native apps that combine conversation, data and UI into a single interface. But for many React developers, figuring out exactly how to get UI components to show up and behave correctly inside the Apps SDK model is a hurdle that can be difficult to jump.

If that sounds like you, then good news! Our new KendoReact + ChatGPT repo makes it easier to build Apps SDK experiences using Progress KendoReact Free, a collection of free enterprise-grade components for React.

This repo exists to help remove some of that guesswork. It includes working examples using KendoReact Free components, so you can understand exactly how to implement them in this new context. It also includes a Node.js server setup and end-to-end patterns that you can adapt for your own Apps SDK projects.

For Example?

For example, one of the most challenging aspects of working with the Apps SDK is the narrow view size: the chat view space is only 768px wide. That means that any UI components a developer wants to use must already be completely responsive or adaptive, or it won’t work well in the limited space available.

By using the KendoReact components, this is already handled! Our components have been highly responsive for years, built to work just as well on mobile as on desktop—making them a perfect match for the smaller view of the chat UI. Similarly, our components already have built-in interactivity with the view, meaning that clients won’t need to do the same level of extensive prompting and testing they would with other components.

We designed this as a practical starting point: a place to see what our optimized, functional components look like in a real Apps SDK context, so you can fork the examples and use them to launch your own projects.

Why This Matters to You

The Apps SDK introduces a new way to build software: chat-native applications where UI elements are rendered as part of a conversational interface. KendoReact Free gives you a collection of our most popular and beloved UI components, and this repo shows how to bring them into that new environment with minimal friction.

That means that all the things you get with KendoReact Free are now available for your ChatGPT apps, including:

  • 50 extensively tested and enterprise-quality components
  • A professional design system created by our dedicated team
  • Four design themes that align with popular existing third-party design systems (Material, Fluent, Bootstrap and Kendo)
  • Responsive and mobile interactivity—perfect for ChatGPT’s narrow conversation-oriented view
  • Easy template extensibility
  • A library of components that are screen-reader friendly, WCAG and Section 508 compliant, and human-tested for accessibility

If you’re experimenting with the Apps SDK, building internal tools or prototyping a new chat-first workflow, this project gives you a faster way to get hands-on.

Try It Out!

To get started, just clone the repo, follow the setup instructions, explore the examples and adapt from there. Everything is MIT-licensed and open to use and extend.

If you have feedback, questions or ideas, open an issue or start a discussion in the repo. We look forward to seeing what you build!
