
Building 'Ask AI'


This entry is part of the 2025 F# Advent Calendar, a month-long celebration across the community. It’s a privilege to contribute alongside so many talented developers who share a passion for this ever-growing language ecosystem.

Why ‘Ask AI’ Exists

Every blog accumulates something beyond articles: an implicit body of knowledge. Years of technical writing create relationships between concepts, build upon shared foundations, and form an interconnected corpus. In the case of this site, adding portfolio descriptions and company information to the blog entries brings the indexed document count to nearly 100.

That’s a substantial corpus covering everything from low-level memory safety to cloud architecture to hardware security. A visitor arriving with a specific question currently must navigate manually, scanning titles and skimming content to find relevant material. Even with filtering and sorting in the blog listing, narrowing down to what’s relevant for a particular visitor’s context is challenging. The friction is real.

An intelligent query system transforms passive content into an active knowledge resource. But the implementation matters. Ask AI doesn’t simply retrieve text and ask a language model to summarize it. It returns a synthesized answer alongside a rank-ordered list of source documents, each with relevance scores that explain why they contributed to the response. This transparency changes the nature of the interaction: the visitor can trust the answer because they can verify it.

The business case is straightforward. For a ‘deep tech’ startup developing complex hardware and software offerings, the blog represents intellectual capital. Making that capital discoverable through natural language questions reduces the barrier between a visitor’s curiosity and the company’s expertise. When someone asks a question, they get an answer grounded in actual content, with links to dive deeper.

This turns the concept of the static “FAQ” on its head. A resource that’s often outdated as soon as it’s deployed is now a living system that updates automatically as the site gains information and additional context.

What Cloudflare “AI Search” Provides

Cloudflare’s AI Search, the new name for what was previously AutoRAG, is a managed Retrieval-Augmented Generation (RAG) pipeline. The service handles the complex orchestration that RAG systems require: document ingestion, intelligent chunking, embedding generation, vector storage, similarity search, and response synthesis. For developers who want RAG capabilities without building infrastructure, it’s a compelling starting point.

This example implementation works like this: content (markdown documents from the site - blog entries, product descriptions and company information) is synchronized into an R2 bucket. AI Search monitors that bucket, automatically processing new or modified documents. When content changes, the system chunks the text, generates embeddings using their BGE model, and stores everything in a Vectorize index. At query time, the question is embedded, similar chunks are retrieved, and an LLM synthesizes a response using those chunks as context.

What makes this particularly useful is the metadata flow. Custom attributes attached to R2 objects propagate through the pipeline and appear in query responses. This means we can attach document titles, URLs, publication dates, and other context that helps both the LLM and the end user understand where information comes from.
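Concretely, attaching that metadata is just a matter of adding custom keys when uploading to R2. Here’s a minimal sketch using the same S3-compatible client the sync script (shown later) relies on; the exact key names are illustrative assumptions, not the site’s actual schema:

// Sketch: attach title/url metadata at upload time so it can propagate
// through AI Search into query responses (key names are assumptions)
let putWithMetadata (client: AmazonS3Client) (bucket: string) (key: string) (body: Stream) (title: string) (url: string) =
    let request = PutObjectRequest(
        BucketName = bucket,
        Key = key,
        InputStream = body,
        ContentType = "text/markdown",
        DisablePayloadSigning = true)
    request.Metadata.Add("title", title)   // stored as x-amz-meta-title
    request.Metadata.Add("url", url)       // stored as x-amz-meta-url
    client.PutObjectAsync(request)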

[Diagram: Ask AI ingestion and query pipelines]

The diagram shows two pipelines: content flows from markdown through the sync script into R2, where AI Search indexes it; queries flow from the user through the ask-ai worker, which calls AI Search to retrieve context and generate responses.

CloudflareFS: The Enabling Infrastructure

Here’s where the story becomes interesting. Cloudflare provides extensive TypeScript SDKs and a CLI tool called Wrangler for development and deployment. But for teams working in F#, context-switching to TypeScript for infrastructure feels unnecessary when, frankly, a better option in F# exists.

CloudflareFS is a collection of F# bindings that provide type-safe access to Cloudflare’s runtime and management APIs. The project leverages Fable to transpile F# to JavaScript, meaning F# code runs indirectly in Cloudflare Workers. But the bindings didn’t appear from thin air. They’re the product of a toolchain that transforms Cloudflare’s own specifications into idiomatic F#.

The Binding Generation Pipeline

Two foundational tools make CloudflareFS possible: Hawaii and Glutinum. Both represent years of community investment in F# tooling, and Ask AI wouldn’t exist without them.

Hawaii processes OpenAPI specifications and generates F# HTTP clients. Cloudflare publishes their management APIs as OpenAPI specs, which means Hawaii can automatically generate typed clients for provisioning R2 buckets, D1 databases, Workers deployments, and more. The generated code handles serialization, HTTP mechanics, and error responses, leaving application developers to focus on business logic.

Glutinum takes a different approach, transforming TypeScript definitions into F# interface bindings. Cloudflare’s worker runtime types, their AI SDK, the D1 database client, and the R2 storage APIs are all published as TypeScript. Glutinum parses those definitions and produces F# interfaces that Fable can emit as the correct JavaScript calls.

The combination is powerful. Hawaii provides the management layer for provisioning and deploying resources from CI/CD pipelines or local development tools. Glutinum provides the runtime layer for code that actually executes inside Workers. Together, they enable a full F# workflow from development through deployment.

// Hawaii-generated management client example
// Provisions a D1 database via Cloudflare's API
let createDatabase (config: CloudflareConfig) (name: string) =
    async {
        let client = D1Client(config.ApiToken, config.AccountId)
        let! result = client.CreateDatabase({ Name = name })
        match result with
        | Ok database ->
            printfn "Created D1 database: %s (ID: %s)" name database.Uuid
            return Ok database.Uuid
        | Error e ->
            return Error $"Failed to create database: {e.Message}"
    }

// Glutinum-generated runtime bindings example
// Used inside a Worker at request time
let queryDatabase (db: D1Database) (question: string) =
    promise {
        let sql = "SELECT * FROM query_log WHERE query_text LIKE ?"
        let stmt = db.prepare(sql).bind($"%%{question}%%")
        let! result = stmt.all<QueryLogEntry>()
        return result.results |> Option.defaultValue (ResizeArray())
    }

Notice the different computation expressions. Management operations use standard F# async workflows since they run in .NET on developer machines or CI/CD systems. Runtime operations use promise, a Fable-provided computation expression that compiles to JavaScript Promises. This isn’t a departure from F# idioms; it’s an adaptation. The promise CE provides familiar let! and return syntax while ensuring the generated JavaScript uses native Promise semantics that Workers expect.

The ask-ai Worker Implementation

With bindings in place, implementing the worker is remarkably clean. The worker exposes two endpoints: a streaming /ask-stream for the interactive UI and a non-streaming /ask for simpler integrations.

Worker Environment Bindings

Cloudflare Workers receive their configuration through environment bindings, typed references to resources like databases, AI services, and storage buckets. CloudflareFS defines these as F# interfaces:

[<AllowNullLiteral>]
[<Interface>]
type WorkerEnv =
    inherit Env
    abstract member DB: D1Database with get
    abstract member AI: Ai<obj> with get
    abstract member ALLOWED_ORIGIN: string with get
    abstract member AUTORAG_NAME: string with get

The Ai binding provides access to the full Workers AI surface, including the autorag method that returns an AI Search client. The D1Database binding enables analytics logging. Both are injected by Cloudflare at runtime, meaning the worker code never manages credentials or connection strings.

Handling Queries with Persona Context

One design decision worth highlighting: Ask AI accepts optional persona and interest parameters that adjust how responses are framed. A business leader asking about the Fidelity framework receives different emphasis than an engineer asking the same question.

/// Build context prefix based on user persona and interests
let buildContextPrefix (persona: string option) (interests: string array) : string =
    let personaText =
        match persona |> Option.defaultValue "engineer" with
        | "business" -> "I am a business leader."
        | "academic" -> "I am an academic."
        | "security" -> "I am a security professional."
        | _ -> "I am an engineer."

    let interestText =
        let interestNames =
            interests
            |> Array.choose (function
                | "fidelity" -> Some "the Fidelity framework"
                | "cloudflarefs" -> Some "CloudflareFS"
                | _ -> None)
        if interestNames.Length > 0 then
            " I am principally interested in " + (String.concat " and " interestNames) + "."
        else
            ""

    personaText + interestText

This context prefix prepends the user’s question before it reaches AI Search, subtly steering the response toward the user’s perspective. The LLM sees “I am an engineer. I am principally interested in CloudflareFS. How do you deploy workers?” rather than just the bare question.
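For example, tracing through the implementation above, a business persona with a single interest produces:

buildContextPrefix (Some "business") [| "fidelity" |]
// => "I am a business leader. I am principally interested in the Fidelity framework."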

Streaming Response Architecture

The streaming endpoint deserves attention because it demonstrates a pattern that’s increasingly common in AI applications: Server-Sent Events (SSE) that deliver both structured data and incremental text.

/// Handle streaming /ask-stream POST endpoint
/// Returns SSE with sources first, then streams AI response chunks
let handleAskStreamRequest (request: Request) (env: WorkerEnv) (ctx: ExecutionContext) : JS.Promise<Response> =
    promise {
        let startTime = DateTime.UtcNow
        let queryId = Guid.NewGuid().ToString()

        let! body = request.json<AskRequest>()
        let question = body.question.Trim()

        if String.IsNullOrWhiteSpace(question) then
            return jsonResponse { error = "Question is required" } 400
        else

        // Build context prefix from persona and interests
        let interests = if isNullOrUndefined body.interests then [||] else body.interests
        let contextPrefix = buildContextPrefix body.persona interests
        let fullQuery = if String.IsNullOrEmpty(contextPrefix) then question else $"{contextPrefix} {question}"

        let autorag = env.AI.autorag(env.AUTORAG_NAME)

        // Step 1: Get sources via search (no LLM, fast)
        let searchRequest: AutoRagSearchRequest = !!createObj [
            "query" ==> fullQuery
            "max_num_results" ==> 5
        ]
        let! searchResult = autorag.search(searchRequest)
        let sources = extractSources searchResult.data

        // Step 2: Get streaming AI response
        let streamRequest: AutoRagAiSearchRequestStreaming = !!createObj [
            "query" ==> fullQuery
            "max_num_results" ==> 5
            "stream" ==> true
        ]
        let! streamResponse = autorag.aiSearch(streamRequest)

        // Create a TransformStream to build our SSE response
        let transformStream: obj = emitJsExpr () "new TransformStream()"
        let readable: obj = transformStream?readable
        let writable: obj = transformStream?writable
        let writer: obj = writable?getWriter()
        let encoder: obj = emitJsExpr () "new TextEncoder()"

        // ... stream processing logic ...

        // Return SSE response immediately with the readable stream
        let headers = Globals.Headers.Create()
        headers.set("Content-Type", "text/event-stream")
        headers.set("Cache-Control", "no-cache")
        headers.set("Connection", "keep-alive")

        return Globals.Response.Create(!!readable, !!createObj [
            "status" ==> 200
            "headers" ==> headers
        ])
    }

The key here is the two-phase approach. First, we call search to get sources without invoking the LLM, which is fast. Those sources are sent to the client as an SSE event as soon as the AI summary begins to stream tokens. Then the streaming AI response continues, and chunks are forwarded as they arrive. The user sees sources first and then watches the answer materialize; this creates a responsive experience even though the full result may take several seconds to generate.
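The elided stream-processing logic comes down to writing SSE frames through the writer. Here is a minimal sketch of that framing, assuming the dynamic (?) operator from Fable.Core.JsInterop already used above; the event names are illustrative, not the worker’s actual protocol:

// Hypothetical SSE framing helper: "event: <name>\ndata: <json>\n\n"
let writeSseEvent (writer: obj) (encoder: obj) (eventName: string) (json: string) : JS.Promise<unit> =
    let frame = $"event: {eventName}\ndata: {json}\n\n"
    writer?write(encoder?encode(frame))

// Usage inside the handler (illustrative):
//   do! writeSseEvent writer encoder "sources" (JS.JSON.stringify sources)
//   then forward each AI chunk as it arrives before closing the writer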

Source Extraction and URL Generation

AI Search returns metadata attached to retrieved documents, but the format requires transformation for the frontend:

/// Extract sources from AutoRAG search response data
let private extractSources (data: ResizeArray<AutoRagSearchResponse.data>) : SourceReference array =
    data.ToArray()
    |> Array.map (fun item ->
        let attrs = item.attributes
        let titleVal: obj = attrs.["title"]
        let urlVal: obj = attrs.["url"]

        let title =
            if isNullOrUndefined titleVal then filenameToTitle item.filename
            else string titleVal

        // Generate clean URL from filename if url metadata missing
        // Filename format: "section--slug.md" (e.g., "blog--my-post.md")
        let url =
            if isNullOrUndefined urlVal then
                let filename = item.filename.Replace(".md", "")
                if filename.Contains("--") then
                    let parts = filename.Split([|"--"|], StringSplitOptions.None)
                    let section = parts.[0]
                    let slug = parts.[1]
                    $"/{section}/{slug}/"
                else
                    $"/blog/{filename}/"
            else string urlVal

        { title = title; url = url; relevance = item.score }
    )
    |> Array.distinctBy (fun s -> s.url)
    |> Array.sortByDescending (fun s -> s.relevance)

This logic handles cases where metadata might be missing by deriving URLs from filenames. The filename convention, section--slug.md, encodes enough information to reconstruct the URL path. Deduplication ensures the same document doesn’t appear multiple times if multiple chunks matched.

The Content Synchronization Script

Content reaches AI Search through R2, and the sync script manages that pipeline. Rather than using a Worker for this task, we use an F# script (sync-content.fsx) that runs in .NET, leveraging the AWS S3 SDK since R2 is S3-compatible.

#!/usr/bin/env dotnet fsi
// Sync site content to R2 bucket for AutoRAG indexing

#r "nuget: AWSSDK.S3, 3.7.305"

open System
open System.IO
open System.Net.Http
open System.Security.Cryptography
open System.Text
open Amazon.S3
open Amazon.S3.Model

// Content directories to sync (relative to hugo/content)
let contentDirs = [
    "blog", "/blog/"           // blog posts -> /blog/{slug}/
    "company", "/company/"     // company pages -> /company/{slug}/
    "portfolio", "/portfolio/" // portfolio pages -> /portfolio/{slug}/
]

// Helper: normalize filename to R2 key with section prefix
// e.g., ("blog", "my-post.md") -> "blog--my-post.md"
let toR2Key (section: string) (filename: string) =
    let normalizedFilename = filename.ToLowerInvariant().Replace(" ", "-")
    $"{section}--{normalizedFilename}"

// Upload new or modified files
for KeyValue(key, (localPath, section)) in localFiles do
    let localBytes = File.ReadAllBytes(localPath)
    let localMD5 = computeMD5 localBytes

    let needsUpload =
        match r2Objects.TryFind key with
        | Some etag -> etag <> localMD5  // ETag is MD5 for single-part uploads
        | None -> true  // New file

    if needsUpload then
        use stream = new MemoryStream(localBytes)
        let request = PutObjectRequest(
            BucketName = bucketName,
            Key = key,
            InputStream = stream,
            ContentType = "text/markdown",
            DisablePayloadSigning = true  // Required for R2 compatibility
        )
        let response = client.PutObjectAsync(request) |> Async.AwaitTask |> Async.RunSynchronously
        // ...
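The computeMD5 helper referenced above isn’t shown in the excerpt; a minimal sketch would hex-encode the digest to match single-part ETags (note that the S3 API returns ETags wrapped in quotes, so the comparison may need to strip them first):

// Hex-encoded MD5 of the file contents, comparable to single-part ETags
let computeMD5 (bytes: byte[]) : string =
    use md5 = MD5.Create()
    md5.ComputeHash(bytes)
    |> Array.map (fun b -> b.ToString("x2"))
    |> String.concat ""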

The script uses MD5 hashes to detect changes, only uploading files that have actually been modified. This makes it efficient to run frequently, whether manually during development or automatically in CI/CD. After uploading changes, the script triggers AI Search’s full scan endpoint:

// Trigger AutoRAG indexing if there were changes
if hasChanges then
    use httpClient = new HttpClient()
    let url = sprintf "https://api.cloudflare.com/client/v4/accounts/%s/autorag/rags/%s/full_scan" acctId ragName

    use request = new HttpRequestMessage(HttpMethod.Patch, url)
    request.Headers.Add("Authorization", sprintf "Bearer %s" token)
    request.Content <- new StringContent("{}", Encoding.UTF8, "application/json")

    let response = httpClient.SendAsync(request) |> Async.AwaitTask |> Async.RunSynchronously
    // ...

CI/CD Pipeline Potential

The architecture obviates the need for Wrangler, Cloudflare’s CLI tool, in favor of direct API calls through CloudflareFS bindings. This design choice enables something powerful: the entire deployment pipeline can be expressed in F# and integrated into .NET build systems.

Consider what a CI/CD pipeline might look like:

// Conceptual deployment orchestration
let deploy (config: CloudflareConfig) =
    async {
        // 1. Provision resources if they don't exist
        let! r2Result = R2.ensureBucket config "speakez-content"
        let! d1Result = D1.ensureDatabase config "speakez-ask-ai"

        // 2. Compile F# to JavaScript via Fable
        let! compileResult = Fable.compile "workers/ask-ai"

        // 3. Deploy worker with bindings
        let! deployResult =
            Workers.deploy config "ask-ai" {
                MainModule = "dist/Main.js"
                Bindings = [
                    AIBinding "AI"
                    D1Binding ("DB", d1Result.Id)
                    PlainText ("AUTORAG_NAME", "speakez-rag")
                ]
            }

        // 4. Sync content to R2
        let! syncResult = ContentSync.sync config "hugo/content"

        return deployResult
    }

This isn’t just a convenience; it’s a paradigm shift. Infrastructure provisioning, application deployment, and content synchronization become testable F# code rather than shell scripts calling CLI tools. Teams can use the same CI/CD patterns they apply to their application code.

A git-aware deployment can take this further. By analyzing diffs, the pipeline can determine the minimal deployment scope: cosmetic changes might only require a static site refresh, while worker code changes trigger full synchronization to the AI Search instance. Not only is the site smarter, but the deployment pipeline is more intelligent and nuanced as well.
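A minimal sketch of that idea, with purely illustrative paths and scope names:

// Hypothetical deployment-scope classifier driven by changed file paths
type DeployScope =
    | StaticOnly   // styling/template tweaks: redeploy the static site
    | ContentSync  // markdown changes: re-sync R2 and re-index
    | FullDeploy   // worker source changes: recompile and redeploy

let classifyPath (path: string) =
    if path.StartsWith("workers/") then FullDeploy
    elif path.StartsWith("hugo/content/") then ContentSync
    else StaticOnly

// The widest scope across the diff wins (DU cases compare in declaration order)
let deploymentScope (changedPaths: string list) =
    changedPaths |> List.map classifyPath |> List.max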

The Open Source Foundation

CloudflareFS wouldn’t exist without the F# community’s sustained investment in tooling. The project builds directly on three pillars:

Hawaii, designed by Zaid Ajaj, has been quietly transforming how F# developers consume REST APIs for years. Its Swagger/OpenAPI-to-F# generation provides the management layer that makes programmatic infrastructure control possible.

Glutinum, from Maxime Mangel and contributors, solves the complementary problem of TypeScript interop. As JavaScript ecosystems publish TypeScript definitions for everything, Glutinum makes those definitions accessible to F# developers.

Fable, the F# to JavaScript compiler, is the foundation everything else rests on. Alfonso Garcia-Caro and the Fable community have built something remarkable: a way to write F# and run it anywhere JavaScript runs. Without Fable, there would be no F# on Cloudflare Workers.

CloudflareFS is fully available as a free, open source resource. The goal is to expand what’s possible for F# developers targeting modern edge infrastructure. The bindings, the generators, the samples, all are available for the community to use, examine, and improve.

Closing Thoughts

Building Ask AI was an exercise in practical application of tools that others created. The feature works because Hawaii generates management clients, because Glutinum produces runtime bindings, because Fable transpiles F# to JavaScript that runs at the edge. The implementation code, the handlers and types and sync scripts, is the visible layer built atop invisible years of open source effort.

For those considering similar projects, the path is clearer than it’s ever been. F# developers can target Cloudflare’s global edge network without leaving their preferred language. They can provision infrastructure, deploy workers, and manage content through type-safe APIs. The friction that once made polyglot development feel mandatory has been engineered away by people who shared their work freely.

Thank you to everyone who contributed to making this possible. The F# community’s generosity continues to expand what individual developers can accomplish.

And don’t forget to try Ask AI for yourself!

If you’re curious to see a sample of the code, it’s available in the samples directory of the CloudflareFS repo.


C# Advent 2025 - Extension Members


Hero image credit: Image generated by AI

Donny, the head of Elf IT, has a problem. Liza wrote a basic app to maintain data about the various workshops across Santa Toys Global Manufacturing & Distribution, Ltd., but then she quit and didn’t leave any source code. All Donny has is some documentation and a copy of the compiled class library. As best he can tell, it’s got a couple of classes that look something like this.

public class Workshop
{
    public List<Kid> NiceList = new List<Kid>();
    public List<Kid> NaughtyList = new List<Kid>();
    public int PresentsMade = 0;
    public int PresentsPerNiceKid = 1;
    public int LumpsOfCoalInReserve = 0;
}

public class Kid
{
    public string Name { get; set; }
    public int Age { get; set; }
}

Sure, he could rewrite that, but the library also has a bunch of other functionality that would take him too long to rewrite. And he’s under a lot of pressure to improve their IT systems performance. Then Donny remembers about extension methods.

Extension methods are nothing new. They’ve been around in .NET forever. If you aren’t familiar with them, extension methods allow you to add methods to an existing type or class without having to create a new derived type. Creating them is fairly straightforward: you create a static class and add a static method whose first parameter is the type to be extended, marked with the “this” modifier, like so:

public static class MyExtensions
{
    public static void MineCoal(this Workshop workshop, int amountMined){
        workshop.LumpsOfCoalInReserve += amountMined;
    }
}

This creates a method on the Workshop class that adds an amount of mined coal to the reserves. You can then use that method on any Workshop variable.

NorthAmericaWorkshop.MineCoal(200);

Easy, right?

New Syntax

With C# 14, there’s a new syntax available to you called “extension members” that can make your code a bit clearer. You still create a static class to put your extensions into, but now you can gather related extensions into a logical grouping called an “extension block”. The docs define extension blocks as:

A block in a non-nested, nongeneric, static class that contains extension members for a type or an instance of that type.

You’ll get a better idea by looking at the following example:

public static class ExtensionClass
{
    extension(Workshop workshop)
    {
        public void AddToNiceList(Kid kid)
        {
            workshop.NiceList.Add(kid);
        }
        public void AddToNaughtyList(Kid kid)
        {
            workshop.NaughtyList.Add(kid);
        }
        public void MineCoal(int amount)
        {
            workshop.LumpsOfCoalInReserve += amount;
        }
    }
}

Here you see three extension methods created for the Workshop class. They do the same job as the previous syntax, but the code is cleaner, more organized, and more readable than the old approach with its “static” and “this” on every line. A single receiver declaration at the top serves every method in the block, so you no longer need to add the this keyword to each method.

Usage in the rest of your code remains unchanged.

NorthAmericaWorkshop.MineCoal(200);

You can also invoke them like any other static class method.

ExtensionClass.MineCoal(NorthAmericaWorkshop, 200);

Important note: Remember that methods defined in a class or interface take precedence over extension methods for that type. So, if you have a class that defines a method called MineCoal and an extension method for that class also called MineCoal, the extension method will never be called; the compiler always picks the method defined in the class or its interface. So watch your naming.
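A quick illustration with hypothetical names:

public class Sleigh
{
    public void MineCoal(int amount) { /* instance method: always wins */ }
}

public static class SleighExtensions
{
    extension(Sleigh sleigh)
    {
        public void MineCoal(int amount) { /* never chosen by instance syntax */ }
    }
}

// sleigh.MineCoal(5) binds to the instance method.
// SleighExtensions.MineCoal(sleigh, 5) is the only way to reach the extension.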

Important note #2: Remember that structs are passed by value, not by reference. So, if your extension method modifies values within the struct, you need to make sure that the extension method declares its receiver with the ref keyword.

extension(ref int input){
    public void Increment() => input++;
}
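(That block must itself live inside a static class.) Usage then updates the caller’s variable in place:

int lumps = 41;
lumps.Increment();  // lumps is now 42 because the receiver was taken by ref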

Good to know, right?

Extension Properties and Operators

Another thing this new syntax allows you to do is declare extension properties and operators in addition to extension methods. This is something you couldn’t do before, and it adds to the usefulness of the new format. How about a property that tells me how many total kids are covered by a particular workshop? Easy.

public static class ExtensionClass
{
    extension(Workshop workshop)
    {
        public int TotalKids => workshop.NiceList.Count + workshop.NaughtyList.Count;
    }
}
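Reading it then looks like any built-in property:

int covered = NorthAmericaWorkshop.TotalKids;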

Uh, oh! Donny just got a notification in Discord. Santa has announced that they are merging several groups of workshops together to promote “SYNERGY”! Donny hates corporate buzzwords. But thanks to the new extension method support for operators, he can quickly write an extension that will make it easy and clean to merge the data from multiple workshops into a single workshop.

public static class ExtensionClass
{
    extension(Workshop) { 
        public static Workshop operator + (Workshop left, Workshop right)
        {
            return new Workshop
            {
                NiceList = left.NiceList.Concat(right.NiceList).ToList(),
                NaughtyList = left.NaughtyList.Concat(right.NaughtyList).ToList(),
                PresentsMade = left.PresentsMade + right.PresentsMade,
                PresentsPerNiceKid = Math.Max(left.PresentsPerNiceKid, right.PresentsPerNiceKid),
                LumpsOfCoalInReserve = left.LumpsOfCoalInReserve + right.LumpsOfCoalInReserve
            };
        }
    }
}

Note that the syntax for an operator is slightly different: an extension operator must be declared with the static keyword, and the extension block names only the type, with no receiver parameter. Now that he’s written the extension operator, getting a new workshop object to represent the merger in code is as simple as:

var EuroMuricaWorkshop = NorthAmericaWorkshop + EuropeWorkshop;

Now he can easily merge all that data for the reporting purposes without having to redo all his back end data. For now, anyway. No time for that during the holiday rush. He’ll add a task to the backlog to circle back to that massive undertaking later. And we all know what that means!!!

Conclusion

The old syntax is still 100% supported. This isn’t a breaking change, and you don’t have to rewrite any of your existing code. But the new extension members syntax in C# 14, along with the addition of extension properties and operators, can make your code cleaner and much more readable. And we all like clean code, don’t we? Future you, the next dev up, (and Donny) will thank you!


Tagless Final in F# - Part 1: Froggy Tree House


FsAdvent 2025: This is Part 1 of a 6-part series on Tagless-Final in F#.

This blog series came about from a chance conversation with the brilliant and funny Dr. Vaishnavi S. I’m going to bury the lede on this one, but if you read the whole series, the point of confluence - and indeed the inspiration to go off and do some research on this topic - will become apparent! Thank you! :)


Froggy Tree House: A Tiny DSL for a Tiny Game

Welcome to Froggy Tree House! 🐸

This is a series about building a game. Well, not just a game. It’s about how we talk to computers, how we define meaning, and how we can say one thing but mean two (or three, or four) different things.

But mostly, it’s about a frog named Froggy.

Froggy lives in a tree. He likes to jump. He likes to croak. Sometimes, if he’s lucky, he catches a fly. We want to write a program to control Froggy.

The “Ugly” Way

If we were writing standard F#, we might model Froggy’s actions as a list of commands.

type Action = 
    | Jump 
    | Croak 
    | EatFly

let myProgram = [ Jump; Croak; Jump; EatFly ]

This is okay, but it’s a bit… static. What if we want to do things based on the state of the world? What if we want to chain actions together more naturally?

We could write functions:

let runFroggy (frog: Frog) =
    let frog2 = frog.Jump()
    let frog3 = frog2.Croak()
    frog3.EatFly()

That’s a bit clunky. We have to thread that frog state through everything. If we forget to pass frog2 to the next function and pass frog instead, we’ve introduced a bug where time didn’t move forward.

Imagine if we had 50 lines of this. One typo, and Froggy teleports back in time. We want something that handles that plumbing for us.

The “Cute” Way: A Froggy DSL

What we really want is to write code that looks like a story. We want a Domain Specific Language (DSL) just for Froggy.

Wouldn’t it be nice if we could write this?

let adventure = frog {
    jump
    croak
    jump
    eat_fly
}

This looks clean. It looks like a script. But how do we make F# understand it?

Making It Work (The Magic)

To make that frog { ... } syntax work, we need a Computation Expression. But before we build the builder, we need to decide what our “instructions” actually are.

In this series, we’re going to use a technique where an Interpreter is just a record of functions.

type FrogInterpreter<'a> = {
    Jump : unit -> 'a
    Croak : unit -> 'a
    EatFly : unit -> 'a
    // We need a way to glue instructions together
    Bind : 'a -> (unit -> 'a) -> 'a 
    Return : unit -> 'a
}

Wait, don’t panic at the types! All this says is: “If you want to be a Frog Interpreter, you need to know how to handle a Jump, a Croak, and Eating a Fly.”

The generic type 'a represents the result of our program.

  • If we are printing a story, 'a might be string.
  • If we are simulating a game, 'a might be FrogState -> FrogState.
  • If we are drawing a picture, 'a might be Image.

The Bind function is the glue. It says: “Give me the result of the previous instruction ('a), and a function that generates the next instruction (unit -> 'a), and I will combine them into a new result ('a).”

Now, our frog builder just uses this interpreter.

// A FrogProgram is just a function that takes an interpreter and returns a result
type FrogProgram<'a> = FrogInterpreter<'a> -> 'a

type FrogBuilder() =
    // 'Yield' is called when we have a simple value (like 'return' in C#)
    member _.Yield(()) = fun (i: FrogInterpreter<'a>) -> i.Return ()
    
    // Custom operations allow us to add keywords like 'jump' and 'croak'
    [<CustomOperation("jump")>]
    member _.Jump(state: FrogProgram<'a>) = 
        fun (i: FrogInterpreter<'a>) -> i.Bind (state i) (fun () -> i.Jump())

    [<CustomOperation("croak")>]
    member _.Croak(state: FrogProgram<'a>) = 
        fun (i: FrogInterpreter<'a>) -> i.Bind (state i) (fun () -> i.Croak())

    [<CustomOperation("eat_fly")>]
    member _.EatFly(state: FrogProgram<'a>) = 
        fun (i: FrogInterpreter<'a>) -> i.Bind (state i) (fun () -> i.EatFly())

let frog = FrogBuilder()

Note: This is a simplified view. In a real tagless-final encoding, we might do this slightly differently, but let’s stick to the idea that a program is just a function waiting for an interpreter.

Interpreter 1: The Storyteller

Now that we have our adventure defined, it doesn’t actually do anything. It’s just a description. To run it, we need an interpreter.

Let’s build a Pretty Printer. This interpreter doesn’t simulate physics; it just tells a story.

let storyTeller : FrogInterpreter<string> = {
    Jump = fun () -> "Froggy jumps up!"
    Croak = fun () -> "Ribbit!"
    EatFly = fun () -> "Yum, a fly!"
    Return = fun () -> ""
    Bind = fun prev next -> 
        let n = next()
        if prev = "" then n else prev + "\n" + n
}

// Run it!
let result = adventure storyTeller
printfn "%s" result

Output:

Froggy jumps up!
Ribbit!
Froggy jumps up!
Yum, a fly!

Interpreter 2: The Simulator

That was fun, but is Froggy actually getting anywhere? Let’s build a Simulator that tracks Froggy’s height and hunger.

type FrogState = { Height: int; Hunger: int }

// Our 'a is now a function: FrogState -> FrogState
let simulator : FrogInterpreter<FrogState -> FrogState> = {
    Jump = fun () -> fun s -> { s with Height = s.Height + 1; Hunger = s.Hunger + 1 }
    Croak = fun () -> fun s -> { s with Hunger = s.Hunger + 1 } // Croaking takes energy
    EatFly = fun () -> fun s -> { s with Hunger = 0 }
    Return = fun () -> id
    Bind = fun prev next -> fun s -> 
        let s' = prev s
        let nextAction = next()
        nextAction s'
}

// Run it!
let finalState = adventure simulator { Height = 0; Hunger = 0 }
printfn "Final Height: %d, Hunger: %d" finalState.Height finalState.Hunger

Output:

Final Height: 2, Hunger: 0

Wait, what if Froggy runs out of energy? We could add logic to Jump to check s.Hunger.

    Jump = fun () -> fun s -> 
        if s.Hunger > 10 then 
            printfn "Froggy is too tired to jump!"
            s 
        else 
            { s with Height = s.Height + 1; Hunger = s.Hunger + 1 }

This is the beauty of it: The adventure code doesn’t know about hunger limits. The interpreter enforces the rules of physics.

Sidebar: Interpreters Are Just Records of Functions

Notice something cool? The adventure code didn’t change.

let adventure = frog {
    jump
    croak
    jump
    eat_fly
}

This single piece of code has two different meanings depending on which interpreter we give it.

  • To the storyTeller, it’s a string generator.
  • To the simulator, it’s a state transition function.

This is the power of separating the description of the program from its execution. In fancy math terms, the FrogInterpreter type defines an Algebra, and our storyTeller and simulator are two different Instances of that algebra. The adventure is a program written against that algebra.
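To drive the separation home, here is one more instance of the same algebra (not from the original post, just a sketch): an interpreter that merely counts instructions.

// A third interpreter: count how many actions the program performs
let actionCounter : FrogInterpreter<int> = {
    Jump = fun () -> 1
    Croak = fun () -> 1
    EatFly = fun () -> 1
    Return = fun () -> 0
    Bind = fun prev next -> prev + next()
}

// adventure actionCounter evaluates to 4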

What’s Next?

Right now, Froggy just follows a script. But the world is scary and unpredictable! What if Froggy wants to make choices? What if there are multiple paths up the tree?

In the next post, we’ll introduce Nondeterminism and let Froggy explore the multiverse. 🌌


This post is part of FsAdvent 2025.

Next: Maps, Branches, and Choices »


Azure Resource Mover: What Actually Moves, What Doesn’t

All sample PowerShell companion code for this blog can be found here. Azure has plenty of tools that do one thing really well, and Azure Resource Mover fits right into that category. If you need to move supported resources across regions without rebuilding from scratch, this is your tool. The trick is knowing what it was built for, what it refuses to touch, and how to use it without creating a surprise outage. This guide walks through what Resource Mover is good at, what it is not, how to...


Swimming in Tech Debt — Practical Techniques to Keep Your Team from Drowning in Its Codebase | Lou Franco


BONUS: Swimming in Tech Debt — Practical Techniques to Keep Your Team from Drowning in Its Codebase

In this fascinating conversation, veteran software engineer and author Lou Franco shares hard-won lessons from decades at startups, Trello, and Atlassian. We explore his book "Swimming in Tech Debt," diving deep into the 8 Questions framework for evaluating tech debt decisions, personal practices that compound over time, team-level strategies for systematic improvement, and leadership approaches that balance velocity with sustainability. Lou reveals why tech debt is often the result of success, how to navigate the spectrum between ignoring debt and rewriting too much, and practical techniques individuals, teams, and leaders can use starting today.

The Exit Interview That Changed Everything

"We didn't go slower by paying tech debt. We went actually faster, because we were constantly in that code, and now we didn't have to run into problems." — Lou Franco

 

Lou's understanding of tech debt crystallized during an exit interview at Atalasoft, a small startup where he'd spent years. An engineer leaving the company confronted him: "You guys don't care about tech debt." Lou had been focused on shipping features, believing that paying tech debt would slow them down. But this engineer told a different story — when they finally fixed their terrible build and installation system, they actually sped up. They were constantly touching that code, and removing the friction made everything easier. This moment revealed a fundamental truth: tech debt isn't just about code quality or engineering pride. It's about velocity, momentum, and the ability to move fast sustainably. Lou carried this lesson through his career at Trello (where he learned the dangers of rewriting too much) and Atlassian (where he saw enterprise-scale tech debt management). These experiences became the foundation for "Swimming in Tech Debt."

Tech Debt Is the Result of Success

"Tech debt is often the result of success. Unsuccessful projects don't have tech debt." — Lou Franco

 

This reframes the entire conversation about tech debt. Failed products don't accumulate debt — they disappear before it matters. Tech debt emerges when your code survives long enough to outlive its original assumptions, when your user base grows beyond initial expectations, when your team scales faster than your architecture anticipated. At Atalasoft, they built for 10 users and got 100. At Trello, mobile usage exploded beyond their web-first assumptions. Success creates tech debt by changing the context in which code operates. This means tech debt conversations should happen at different intensities depending on where you are in the product lifecycle. Early startups pursuing product-market fit should minimize tech debt investments — move fast, learn, potentially throw away the code. Growth-stage companies need balanced approaches. Mature products benefit significantly from tech debt investments because operational efficiency compounds over years. Understanding this lifecycle perspective helps teams make appropriate decisions rather than applying one-size-fits-all rules.

The 8 Questions Framework for Tech Debt Decisions

"Those 8 questions guide you to what you should do. If it's risky, has regressions, and you don't even know if it's gonna work, this is when you're gonna do a project spike." — Lou Franco

 

Lou introduces a systematic framework for evaluating whether to pay tech debt, inspired by Bob Moesta's push-pull forces from product management. The 8 questions create a complete picture:

 

  1. Visibility — Will people outside the team understand what we're doing?

  2. Alignment — Does this match our engineering values and target architecture?

  3. Resistance — How hard is this code to work with right now?

  4. Volatility — How often do we touch this code?

  5. Regression Risk — What's the chance we'll introduce new problems?

  6. Project Size — How big is this to fix?

  7. Estimate Risk — How uncertain are we about the effort required?

  8. Outcome Uncertainty — How confident are we the fix will actually improve things?

 

High volatility and high resistance with low regression risk? Pay the debt now. High regression risk with no tests? Write tests first, then reassess. Uncertain outcomes on a big project? Do a spike or proof of concept. The framework prevents both extremes — ignoring costly debt and undertaking risky rewrites without proper preparation.

Personal Practices That Compound Daily

"When I sit down at my desk, the first thing I do is I pay a little tech debt. I'm looking at code, I'm about to change it, do I even understand it? Am I having some kind of resistance to it? Put in a little helpful comment, maybe a little refactoring." — Lou Franco

 

Lou shares personal habits that create compounding improvements over time. Start each coding session by paying a small amount of tech debt in the area you're about to work — add a clarifying comment, extract a confusing variable, improve a function name. This warms you up, reduces friction for your actual work, and leaves the code slightly better than you found it. The clean-as-you-go philosophy means tech debt never accumulates faster than you can manage it. But Lou's most powerful practice comes at the end of each session: mutation testing by hand. Before finishing for the day, deliberately break something — change a plus to minus, a less-than to less-than-or-equal. See if tests catch it. Often they don't, revealing gaps in test coverage. The key insight: don't fix it immediately. Leave that failing test as the bridge to tomorrow's coding session. It connects today's momentum to tomorrow's work, ensuring you always start with context and purpose rather than cold-starting each day.

Mutation Testing: Breaking Things on Purpose

"Before I'm done working on a coding session, I break something on purpose. I'll change a plus to a minus, a less than to a less than equals, and see if tests break. A lot of times tests don't break. Now you've found a problem in your test." — Lou Franco

 

Manual mutation testing — deliberately breaking code to verify tests catch the break — reveals a critical gap in most test suites. You can have 100% code coverage and still have untested behavior. A line of code that's executed during tests isn't necessarily tested — the test might not actually verify what that line does. By changing operators, flipping booleans, or altering constants, you discover whether your tests protect against actual logic errors or just exercise code paths. Lou recommends doing this manually as part of your daily practice, but automated tools exist for systematic discovery: Stryker (for JavaScript, C#, Scala) and MutMut (for Python) can mutate your entire codebase and report which mutations survive uncaught. This isn't just about test quality — it's about understanding what your code actually does and building confidence that changes won't introduce subtle bugs.

Team-Level Practices: Budgets, Backlogs, and Target Architecture

"Create a target architecture document — where would we be if we started over today? Every PR is an opportunity to move slightly toward that target." — Lou Franco

 

At the team level, Lou advocates for three interconnected practices. First, create a target architecture document that describes where you'd be if starting fresh today — not a detailed design, but architectural patterns, technology choices, and structural principles that represent current best practices. This isn't a rewrite plan; it's a North Star. Every pull request becomes an opportunity to move incrementally toward that target when touching relevant code. Second, establish a budget split between PM-led feature work and engineering-led tech debt work — perhaps 80/20 or whatever ratio fits your product lifecycle stage. This creates predictable capacity for tech debt without requiring constant negotiation. Third, hold quarterly tech debt backlog meetings separate from sprint planning. Treat this backlog like PMs treat product discovery — explore options, estimate impacts, prioritize based on the 8 Questions framework. Some items fit in sprints; others require dedicated engineers for a quarter or two. This systematic approach prevents tech debt from being perpetually deprioritized while avoiding the opposite extreme of engineers disappearing into six-month "improvement" projects with no visible progress.

The Atlassian Five-Alarm Fire

"The Atlassian CTO's 'five-alarm fire' — stopping all feature development to focus on reliability. I reduced sync errors by 75% during that initiative." — Lou Franco

 

Lou shares a powerful example of leadership-driven tech debt management at scale. The Atlassian CTO called a "five-alarm fire" — halting all feature development across the company to focus exclusively on reliability and tech debt. This wasn't panic; it was strategic recognition that accumulated debt threatened the business. Lou worked on reducing sync errors, achieving a 75% reduction during this focused period. The initiative demonstrated several leadership principles: willingness to make hard calls that stop revenue-generating feature work, clear communication of why reliability matters strategically, trust that teams will use the time wisely, and commitment to see it through despite pressure to resume features. This level of intervention is rare and shouldn't be frequent, but it shows what's possible when leadership truly prioritizes tech debt. More commonly, leaders should express product lifecycle constraints (startup urgency vs. mature product stability), give teams autonomy to find appropriate projects within those constraints, and require accountability through visible metrics and dashboards that show progress.

The Rewrite Trap: Why Big Rewrites Usually Fail

"A system that took 10 years to write has implicit knowledge that can't be replicated in 6 months. I'm mostly gonna advocate for piecemeal migrations along the way, reducing the size of the problem over time." — Lou Franco

 

Lou lived through Trello's iOS navigation rewrite — a classic example of throwing away working code to start fresh, only to discover all the edge cases, implicit behaviors, and user expectations baked into the "old" system. A codebase that evolved over several years contains implicit knowledge — user workflows, edge case handling, performance optimizations, and subtle behaviors that users rely on even if they never explicitly requested them. Attempting to rewrite this in six months inevitably misses critical details. Lou strongly advocates for piecemeal migrations instead. The Trello "Decaffeinate Project" exemplifies this approach — migrating from CoffeeScript to TypeScript incrementally, with public dashboards showing the percentage remaining, interoperable technologies allowing gradual transition, and the ability to pause or reverse if needed. Keep both systems running in parallel during migrations. Use runtime observability to verify new code behaves identically to old code. Reduce the problem size steadily over months rather than attempting big-bang replacements. The only exception: sometimes keeping parallel systems requires scaffolding that creates its own complexity, so evaluate whether piecemeal migration is actually simpler or if you're better off living with the current system.

Making Tech Debt Visible Through Dashboards

"Put up a dashboard, showing it happen. Make invisible internal improvements visible through metrics engineering leadership understands." — Lou Franco

 

One of tech debt's biggest challenges is invisibility — non-technical stakeholders can't see the improvement from refactoring or test coverage. Lou learned to make tech debt work visible through dashboards and metrics. The Decaffeinate Project tracked percentage of CoffeeScript files remaining, providing a clear progress indicator anyone could understand. When reducing sync errors, Lou created dashboards showing error rates declining over time. These visualizations serve multiple purposes: they demonstrate value to leadership, create accountability for engineering teams, build momentum as progress becomes visible, and help teams celebrate wins that would otherwise go unnoticed. The key is choosing metrics that matter to the business — error rates, page load times, deployment frequency, mean time to recovery — rather than pure code quality metrics like cyclomatic complexity that don't translate outside engineering. Connect tech debt work to customer experience, reliability, or developer productivity in ways leadership can see and value.

Onboarding as a Tech Debt Opportunity

"Unit testing is a really great way to learn a system. It's like an executable specification that's helping you prove that you understand the system." — Lou Franco

 

Lou identifies onboarding as an underutilized opportunity for tech debt reduction. When new engineers join, they need to learn the codebase. Rather than just reading code or shadowing, Lou suggests having them write unit tests in areas they're learning. This serves dual purposes: tests are executable specifications that prove understanding of system behavior, and they create safety nets in areas that likely lack coverage (otherwise, why would new engineers be confused by the code?). The new engineer gets hands-on learning, the team gets better test coverage, and everyone wins. This practice also surfaces confusing code — if new engineers struggle to understand what to test, that's a signal the code needs clarifying comments, better naming, or refactoring. Make onboarding a systematic tech debt reduction opportunity rather than passive knowledge transfer.

Leadership's Role: Constraints, Autonomy, and Accountability

"Leadership needs to express the constraints. Tell the team what you're feeling about tech debt at a high level, and what you think generally is the appropriate amount of time to be spent on it. Then give them autonomy." — Lou Franco

 

Lou distills leadership's role in tech debt management to three elements. First, express constraints — communicate where you believe the product is in its lifecycle (early startup, rapid growth, mature cash cow) and what that means for tech debt tolerance. Are we pursuing product-market fit where code might be thrown away? Are we scaling a proven product where reliability matters? Are we maintaining a stable system where operational efficiency pays dividends? These constraints help teams make appropriate trade-offs. Second, give autonomy — once constraints are clear, trust teams to identify specific tech debt projects that fit those constraints. Engineers understand the codebase's pain points better than leaders do. Third, require accountability — teams must make their work visible through dashboards, metrics, and regular updates. Autonomy without accountability becomes invisible engineering projects that might not deliver value. Accountability without autonomy becomes micromanagement that wastes engineering judgment. The balance creates space for teams to make smart decisions while keeping leadership informed and confident in the investment.

AI and the Future of Tech Debt

"I really do AI-assisted software engineering. And by that, I mean I 100% review every single line of that code. I write the tests, and all the code is as I would have written it, it's just a lot faster. Developers are still responsible for it. Read the code." — Lou Franco

 

Lou has a chapter about AI in his book, addressing the elephant in the room: will AI-generated code create massive tech debt? His answer is nuanced. AI can accelerate development tremendously if used correctly — Lou uses it extensively but reviews every single line, writes all tests himself, and ensures the code matches what he would have written manually. The problem emerges with "vibe coders" — non-developers using AI to generate code they don't understand, creating unmaintainable messes that become someone else's problem. Developers remain responsible for all code, regardless of how it's generated. This means you must read and understand AI-generated code, not blindly accept it. Lou also raises supply chain security concerns — dependencies can contain malicious code, and AI might introduce vulnerabilities developers miss. His recommendation: stay six months behind on dependency updates, let others discover the problems first, and consider separate sandboxed development machines to limit security exposure. AI is a powerful tool, but it doesn't eliminate the need for engineering judgment, testing discipline, or code review practices.

The Style Guide Beyond Formatting

"Have a style guide that goes beyond formatting to include target architecture. This is the kind of code we want to write going forward." — Lou Franco

 

Lou advocates for style guides that extend beyond tabs-versus-spaces formatting rules to include architectural guidance. Document patterns you want to move toward: how should components be structured, what state management approaches do we prefer, how should we handle errors, what testing patterns should we follow? This creates a shared understanding of the target architecture without requiring a massive design document. When reviewing pull requests, teams can reference the style guide to explain why certain approaches align with where the codebase is headed versus perpetuating old patterns. This makes tech debt conversations less personal and more objective — it's not about criticizing someone's code, it's about aligning with team standards and strategic direction. The style guide becomes a living document that evolves as the team learns and technology changes, capturing collective wisdom about what good code looks like in your specific context.


About Lou Franco

 

Lou Franco is a veteran software engineer and author of Swimming in Tech Debt. With decades of experience at startups, as well as Trello, and Atlassian, he's seen both sides of debt—as coder and leader. Today, he advises teams on engineering practices, helping them turn messy codebases into momentum.

 

You can link with Lou Franco on LinkedIn and learn more at LouFranco.com.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20251213_Lou_Franco_BONUS.mp3?dest-id=246429

Why Agentic AI Will Be Tech's Biggest Winner of 2026 – How to Integrate Agentic AI Into Your Business Today


Agentic AI is no longer science fiction — it’s the most powerful shift in artificial intelligence since the launch of ChatGPT. In 2026, agentic AI will separate winners from losers in tech, business, and productivity.

 

This article explains everything you need to know: what agentic AI actually is, why it’s exploding in 2026, real-world examples, and — most importantly — how you can start integrating agentic AI into your workflow or company right now.

 


 


What Is Agentic AI? The Simple Definition

Agentic AI refers to artificial intelligence systems that can independently set goals, make decisions, take actions, and iterate until the objective is completed — without constant human guidance.

 

Unlike traditional chatbots that only respond when you talk to them, an agentic AI can:

  • Understand a high-level goal (“Book me the cheapest business-class flight to Tokyo next month with a hotel under $300/night”)
  • Break it into subtasks
  • Use tools (search the web, open browsers, send emails, fill forms)
  • Handle obstacles and re-plan in real time
  • Deliver the final result (itinerary + bookings) while you sleep

 

Dario Amodei (CEO of Anthropic), Sam Altman (OpenAI), and even Elon Musk have all publicly stated that the future belongs to agentic systems. In 2025 we already see the first generation; by 2026 they will be mainstream.

 

 

Reactive AI vs Agentic AI: The Critical Difference

                   Reactive AI (ChatGPT, Claude, Gemini today)    Agentic AI (2025–2026 wave)
Initiative         Only acts when prompted                        Can start tasks autonomously
Tool Use           Limited or one-at-a-time                       Orchestrates dozens of tools in loops
Memory & Context   Short conversation window                      Long-term memory + project history
Error Handling     Gives up or asks you                           Retries, re-plans, finds workarounds
Goal Completion    Provides information                           Delivers finished outcome

 

 

Why 2026 Will Be the Year of Agentic AI

Several unstoppable trends are converging in 2026:

  1. Model capability leap – GPT-5 class models + o3-style reasoning chains are live.
  2. Tool-use & browser control – Chrome extensions, APIs, and computer-use endpoints are now reliable.
  3. Memory & state – Vector databases + infinite context make long-running agents possible.
  4. Price collapse – Reasoning tokens are dropping below $1 per million; running an agent for hours costs pennies.
  5. Enterprise adoption – Salesforce, Microsoft Copilot Studio, and ServiceNow all ship agent builders in 2025–2026.
  6. Startup explosion – Over $20 billion poured into agentic startups in 2024–2025 alone.

 

Prediction: By the end of 2026, the majority of knowledge workers will have at least one personal agentic AI working for them 24/7.

 

 

7 Real-World Agentic AI Examples Already Winning in 2025–2026

  1. Devin by Cognition – The first fully autonomous software engineer (already writing production code at startups).
  2. Adept ACT-1 & OpenAI Operator – Can operate any software through the browser like a human.
  3. Salesforce Agentforce – Autonomous sales & service agents closing deals without human reps.
  4. Microsoft Copilot Workspace – Turns a GitHub issue into merged PR automatically.
  5. MultiOn + BrowserGPT – Personal agents that shop, book travel, apply to jobs for you.
  6. ManasAI & Replit Agent – Full-cycle app development in minutes.
  7. Anthropic Computer Use – Claude can now control your Mac/PC and get work done while you sleep.

 

 

How to Integrate Agentic AI Into Your Business Today (Step-by-Step)

Step 1: Identify repetitive high-value workflows
Step 2: Choose your stack (see tools below)
Step 3: Start with narrow scoped agents
Step 4: Add memory (Pinecone, Supabase Vector, or Qdrant)
Step 5: Implement human-in-the-loop approvals for money/risk
Step 6: Scale to multi-agent teams

 

Real example: A marketing agency reduced content production cost by 89% using a researcher → writer → editor → publisher agent swarm.

 

 

Top 10 Agentic AI Tools & Platforms You Can Use Right Now (2025–2026)

  1. CrewAI – Open-source multi-agent framework (most popular)
  2. LangGraph (by LangChain) – State machine for complex agents
  3. AutoGen (Microsoft) – Multi-agent conversation framework
  4. n8n + OpenAI – No-code agent builder
  5. Browserless + Puppeteer agents – Full browser control
  6. Adept, Anthropic Computer Use, OpenAI Operator – Official endpoints
  7. Replit Agent – Best for developers
  8. MultiOn – Consumer-facing personal agent
  9. Salesforce Agentforce – Enterprise-grade
  10. Zapier Central – No-code agents connected to 6000+ apps

 

 

Risks and Challenges of Agentic AI (And How to Mitigate Them)

  • Infinite loops → Set budget & time caps
  • Hallucinated actions → Use structured output + human review
  • Security → Never give full credentials; use OAuth scopes
  • Cost overruns → Monitor token usage in real time

 

The Future Beyond 2026: Multi-Agent Systems & Artificial Superintelligence

By 2027–2028 we will see:

  • Companies made of 100% AI agents (no human employees)
  • Personal AI operating systems that manage your entire digital life
  • The first sparks of AGI emerging from massive agent swarms

 

 

Frequently Asked Questions About Agentic AI

What exactly is agentic AI?

Agentic AI is artificial intelligence that can autonomously pursue complex goals using tools, reasoning, and iteration — without constant human input.

When will agentic AI become mainstream?

2026 is the breakout year. The required capabilities (reasoning, tool use, memory) are already here in late 2025.

Can I build my own agentic AI today?

Yes! With CrewAI, LangGraph, or even Zapier Central you can have a working agent in under an hour.

Is agentic AI the same as AGI?

No. Agentic AI is narrow-to-medium scope autonomy. AGI would be human-level or beyond across every domain.

 

 

Summary & Key Takeaways

  • Agentic AI = AI that independently completes goals using tools and reasoning.
  • 2026 is the inflection point — the tech is ready now in late 2025.
  • Early adopters are already 5–10× more productive.
  • You can start integrating agentic AI today with free/open-source tools.
  • The winners of the next decade will be those who master agentic workflows first.

 

Don’t wait for 2026. The agentic AI revolution has already started — and the gap between those who adopt now and those who wait will be measured in years, not months.

 
