
F# Weekly #50, 2025 – Making of A Programming Language


Welcome to F# Weekly,

A roundup of F# content from this past week:

News

🚀 Excited to announce SharpIDE – A Modern, Cross-Platform IDE for .NET! I'm thrilled to share my latest open-source project, just in time for .NET 10: SharpIDE, a brand new IDE for .NET, built with .NET and Godot! 🎉 🔗 Check it out on GitHub: github.com/MattParkerDe……

Matt Parker (@mattparker.dev) 2025-11-11T23:24:13.521Z

Videos

Let's build a Programming Language – together! The goal? Combine the developer experience of Python with the safety of TypeScript. In Episode 0: Syntax + Type Inference. 📺 http://www.youtube.com/watch?v=fSRT… #TypeInference #TypeScript #CSharp #FSharp #Rust #Haskell

SchlenkR (@schlenkr.bsky.social) 2025-12-13T08:58:15.904Z

Blogs & FsAdvent

F# vNext

Highlighted projects

New Releases

That’s all for now. Have a great week.

If you want to help keep F# Weekly going, click here to jazz me with Coffee!


Rust in Linux's Kernel 'is No Longer Experimental'

Steven J. Vaughan-Nichols files this report from Tokyo: At the invitation-only Linux Kernel Maintainers Summit here, the top Linux maintainers decided, as Jonathan Corbet, Linux kernel developer, put it, "The consensus among the assembled developers is that Rust in the kernel is no longer experimental — it is now a core part of the kernel and is here to stay. So the 'experimental' tag will be coming off." As Linux kernel maintainer Steven Rostedt told me, "There was zero pushback."

This has been a long time coming. The shift caps five years of sometimes-fierce debate over whether the memory-safe language belonged alongside C at the heart of the world's most widely deployed open source operating system... It all began when Alex Gaynor and Geoffrey Thomas at the 2019 Linux Security Summit said that about two-thirds of Linux kernel vulnerabilities come from memory safety issues. Rust, in theory, could avoid these through its inherently safer application programming interfaces (APIs)... In those early days the plan was not to rewrite Linux in Rust (it still isn't), but to adopt it selectively where it can provide the most security benefit without destabilizing mature C code. In short, new drivers, subsystems, and helper libraries would be the first targets...

Despite the fuss, more and more programs were ported to Rust. By April 2025, the Linux kernel contained about 34 million lines of C code, with only 25 thousand lines written in Rust. At the same time, more and more drivers and higher-level utilities were being written in Rust. For instance, the Debian Linux distro developers announced that going forward, Rust would be a required dependency in its foundational Advanced Package Tool (APT). This change doesn't mean everyone will need to use Rust. C is not going anywhere. Still, as several maintainers told me, they expect to see many more drivers being written in Rust. In particular, Rust looks especially attractive for "leaf" drivers (network, storage, NVMe, etc.), where the Rust-for-Linux bindings expose safe wrappers over kernel C APIs. Nevertheless, for would-be kernel and systems programmers, Rust's new status in Linux hints at a career path that blends a deep understanding of C with fluency in Rust's safety guarantees. This combination may define the next generation of low-level development work.

Read more of this story at Slashdot.


Why AI Advantage Compounds

From: AIDailyBrief
Duration: 11:45
Views: 1,058

AI advantage compounds as organizations integrate GenAI into workflows and scale beyond isolated experiments. Surveys reveal widespread productivity and financial gains, attribution challenges, and gaps between expected and actual AI investment. Reinvestment into AI capabilities and a shift from time-saving tasks to decision-making, revenue generation, and autonomous agents creates a self-reinforcing flywheel with non-linear ROI.

Brought to you by:
KPMG – Go to www.kpmg.us/ai to learn more about how KPMG can help you drive value with our AI solutions.
Vanta - Simplify compliance - https://vanta.com/nlw

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at
Join our Discord: https://bit.ly/aibreakdown


Don't Sleep on GPT-5.2, It's a coding BEAST!


Dynamic data-driven scrollable button menu construction kit for Snap Spectacles part 2 - how it works

1 Share

In part 1 I described how this component can be used, and I promised to go deeper into the details of how it works in a follow-up post. This is that post.

The ScrollMenu prefab

I usually build up my prefabs in such a way that the top-level SceneObject has a kind of controller script, while a single child SceneObject holds the actual visible part (in this case, a menu). That way, I can let the controller script handle the display state and behavior through its methods, without callers having to mess with the internal structure of the prefab, which could turn off parts that carry vital controlling scripts, mess up the workings of the app, and create issues that way.

The controller script in this case is called - very originally - UIKitScrollMenuController. It features a few input fields. You can change the first three (in fact, you must set the Scroll Button Prefab field after you drag the prefab onto your scene, as I explained in part 1). The last three are best left undisturbed.

The first field is the vertical size a button uses (including padding), the second the horizontal size. The control lays buttons out in two columns and as many rows as necessary; if you want more columns, you will have to adapt the code (see the sketch below). Since the buttons have to be pressed by finger, I don't anticipate much narrower buttons, so you probably won't need to change Column Size very often, but Y Offset you might. It is currently tailored to my sample button.
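If you do want more columns, the placement math generalizes fairly naturally. Here is a minimal sketch of what that could look like; it is not part of the actual component, and the names numColumns, columnSpacing and rowSize are made up for illustration:

// Purely illustrative: generalizing the two-column placement to N columns.
// numColumns, columnSpacing and rowSize are hypothetical names, not inputs of the actual script.
function getButtonLocalPosition(index: number, numColumns: number,
    columnSpacing: number, rowSize: number, yStart: number): vec3 {
    const col = index % numColumns;             // which column this button falls into
    const row = Math.floor(index / numColumns); // which row this button falls into
    const xPos = (col - (numColumns - 1) / 2) * columnSpacing; // center the columns around x = 0
    return new vec3(xPos, yStart - rowSize * row, 0.1);
}

With numColumns set to 2 and columnSpacing set to twice Column Size, this reproduces the ±Column Size positions the actual createButtons loop uses further down.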

The MenuFrame component contains the Frame script showing the UI canvas, as well as the HeadLock and Billboard scripts that keep the UI more or less in view.

One thing of note - if you are using Billboard, please remember to disable “Allow translation”, otherwise you can still grab and move the floating window, but you will more or less be fighting the HeadLock and the Billboard scripts, which is not desirable. Either the user decides where a window goes, or the system - but not both.

Some other details:

  • ScrollWindowAnchor determines where on the floating screen the scroll window will appear. You can use this mostly to decide the vertical starting point, should you need to change that.
  • ScrollWindow itself decides the actual size of the scroll area.
  • Scrollbar determines the vertical position of the scrollbar.
  • Slider determines the size of the scrollbar.

If you change either ScrollWindowAnchor or ScrollWindow, be prepared to fiddle with Scrollbar and Slider until it all fits nicely together again, with sizes aligning visually, etc.

Scripts

The whole thing works using only three custom scripts.

So let’s start with the easy part:

BaseUIKitScrollButtonController

This is a fairly simple script, but still requires some explanation. Let’s start with the header and the events.

@component
export class BaseUIKitScrollButtonController extends BaseScriptComponent {
    @input buttonText: Text;
    @input uiKitButton: BaseButton;

    // Fired when the button is pressed; carries the data this button was created from.
    private onButtonPressedEvent = new Event<BaseScrollButtonData>();
    public readonly onButtonPressed = this.onButtonPressedEvent.publicApi();

    // Fired with true/false when the user starts or stops hovering over the button.
    public onHoveredEvent = new Event<boolean>();
    public onHovered = this.onHoveredEvent.publicApi();

Remember this can be used as a parent class component for your own button script. Here you can see what it does behind the curtains.

  • buttonText should point to the Text component that will show the button's text, which is set from the data fed to this button's setButtonData method (as explained before).
  • It exposes an onButtonPressed event that is triggered when the button is pressed, and returns the BaseScrollButtonData that was used to create this button in the first place.
  • It also exposes an event onHovered that tells the interested listener whether the button is hovered over by the user.

Although both events are public, they are typically only used internally, by the UIKitScrollMenuController, as will become clear later.

The setButtonData method is used by UIKitScrollMenuController to feed the actual button data to a newly created button:

public setButtonData(scrollButtonData: BaseScrollButtonData): void {
    if (this.uiKitButton != null) {
        this.uiKitButton.onHoverEnter.add(() => this.onHoveredEvent.invoke(true));
        this.uiKitButton.onHoverExit.add(() => this.onHoveredEvent.invoke(false));
        this.uiKitButton.onTriggerDown.add(() => this.onButtonPressedEvent.invoke(scrollButtonData));
        this.buttonText.text = scrollButtonData.buttonText;
        this.applyCustomSettings(scrollButtonData);
    }
}

protected applyCustomSettings(scrollButtonData: BaseScrollButtonData): void {
}

It wires up the button's internal events to the BaseUIKitScrollButtonController's onButtonPressed and onHovered, both of which will be consumed by the UIKitScrollMenuController. It also sets the button's text, and finally calls the (here empty) applyCustomSettings method, which you can override in a child class if you need to perform some custom actions for your custom button. I showed an example of that here.
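For reference, such a child class could look like the minimal sketch below. It is purely illustrative: MyScrollButtonData and its subText field are made up and not part of the actual project.

// Hypothetical data class carrying one extra field besides the base button text.
export class MyScrollButtonData extends BaseScrollButtonData {
    subText: string;
}

// Hypothetical button controller that shows the extra field on a second Text component.
@component
export class MyScrollButtonController extends BaseUIKitScrollButtonController {
    @input subTextField: Text;

    protected applyCustomSettings(scrollButtonData: BaseScrollButtonData): void {
        // setButtonData hands us the same data object the menu was fed, so the cast is safe here.
        const data = scrollButtonData as MyScrollButtonData;
        this.subTextField.text = data.subText;
    }
}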

UIKitScrollMenuController

This is basically the magic wand that ties it all together. The start is simple enough:

@component
export class UIKitScrollMenuController extends BaseScriptComponent {
    @input yOffset: number = 5;
    @input columnSize: number = 4;
    @input scrollButtonPrefab: ObjectPrefab;
    @input scrollWindow: ScrollWindow;
    @input menuRoot: SceneObject;
    @input closeButton: BaseButton;

    private onButtonPressedEvent = new Event<BaseScrollButtonData>();
    public readonly onButtonPressed = this.onButtonPressedEvent.publicApi();
    private scrollArea: SceneObject;

At the top we see the six inputs already discussed, then again an onButtonPressed event that can inform interested listeners which button in the list was pressed (as shown here). The scrollArea we will need for a peculiar thing later.

Next is the setting up in onAwake:

private onAwake(): void {
    this.scrollArea = this.scrollWindow.getSceneObject();
    this.setMenuVisible(false);
    const delayedEvent = this.createEvent("DelayedCallbackEvent");
    delayedEvent.bind(() => {
        this.initializeUI();
    });
    delayedEvent.reset(0.1);
}

We grab the SceneObject holding the scrollWindow, hide the menu for now, and then start a delayed event to initialize the UI. This is necessary because, for some reason, the close button is not yet awake during onAwake, so you get a "Component not yet awake" error otherwise.

The methods for initializing the UI, as well as opening and closing the menu are as follows:

protected initializeUI(): void {
    this.closeButton.onTriggerDown.add(() => this.closeMenu());
}

closeMenu() {
    // Delay hiding the menu so the button's 'click' sound has time to play before the UI disappears.
    const delayedEvent = this.createEvent("DelayedCallbackEvent");
    delayedEvent.bind(() => {
        this.setMenuVisible(false);
    });
    delayedEvent.reset(0.25);
}

public setMenuVisible(visible: boolean): void {
    this.menuRoot.enabled = visible;
}

In closeMenu I keep a standard 0.25-second delay so the button's 'click' sound has time to play; otherwise it would not play, or would be clipped, because the menuRoot SceneObject that holds all the UI gets hidden. The menuRoot input should be set to the first child in the prefab, as said before; otherwise the UIKitScrollMenuController will essentially disable itself.

The meat of the matter is the createButtons method, which essentially creates all buttons and the structure to support events to the outside world. Your own code should call it, feeding it an array of BaseScrollButtonData (or a child class of that). It starts as follows:

public createButtons(scrollButtonData: BaseScrollButtonData[]): void {
    var lines = Math.ceil(scrollButtonData.length / 2);
    var initOffset = lines % 2 != 0 ? this.yOffset : this.yOffset / 2;
    var yStart = Math.ceil(lines / 2) * this.yOffset - initOffset;
    var line = 0;
    this.scrollWindow.onInitialized.add(() => {
        this.scrollWindow.setScrollDimensions(new vec2(0, lines * this.yOffset));
    });
    this.setMenuVisible(true);

It first calculates how many rows of buttons are required, then the initial offset from the center, and from that the y coordinate at which the buttons need to start (this will be used later). Then the scroll window's scroll dimensions are set to the vertical row size times the number of lines. Since we only scroll vertically, we only need to set the y part of it.

In hindsight I should have called yOffset “rowSize”, but what the heck.
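To make the numbers concrete: with seven buttons and the default yOffset of 5, lines = Math.ceil(7 / 2) = 4; since 4 is even, initOffset = 5 / 2 = 2.5; and yStart = Math.ceil(4 / 2) * 5 - 2.5 = 7.5. The four rows therefore sit at local y positions 7.5, 2.5, -2.5 and -7.5 (nicely centered around zero), and the scroll dimensions are set to 4 * 5 = 20 vertically.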

for (let i = 0; i < scrollButtonData.length; i++) {
    var button = this.scrollButtonPrefab.instantiate(this.scrollArea);
    var buttonTransform = button.getTransform();
    var xPos = (i % 2 == 0) ? -this.columnSize : this.columnSize;
    buttonTransform.setLocalPosition(
      new vec3(xPos, yStart - this.yOffset * line, 0.1));
    button.enabled = true;
    if (i % 2 != 0) {
        line++;
    }

    const buttonController =
       getComponent<BaseUIKitScrollButtonController>(button,
                   BaseUIKitScrollButtonController);
    buttonController.setButtonData(scrollButtonData[i]);
    buttonController.onHovered.add((p) => {
        this.scrollWindow.vertical = !p;
    });
    buttonController.onButtonPressed.add((data) =>
       this.onButtonPressedEvent.invoke(data));
}
this.updateScrollPosition();

So, for every entry in scrollButtonData this code:

  • Creates a button
  • Places it on the left or right side of the list based on whether it’s an odd or even button
  • Enables the button
  • Increases the line after every two buttons
  • Gets a reference to BaseUIKitScrollButtonController
  • Feeds the BaseScrollButtonData entry to its setButtonData method - this will hook up the button
  • Makes sure the window scrolling is disabled when you hover over a button
  • Routes a button’s pressed event to the outside so you can listen to all buttons in one place
  • And finally calls updateScrollPosition

The third-to-last bullet deserves a bit of explanation. I noticed it was pretty hard to press a button in a scrollable list, especially in poor lighting conditions: you tend to accidentally drag or move the list if you just miss a button, or the Spectacles cameras miss you trying to press it. Disabling scrolling while a button is hovered makes the list a lot more usable.

updateScrollPosition is also a bit of a hack, because if you fill up a UIKit scroll list, it tends to set its scroll position about halfway down the list. Why that is, I don't know.

private updateScrollPosition(): void {
    const delayedEvent = this.createEvent("DelayedCallbackEvent");
    delayedEvent.bind(() => {
        this.scrollWindow.scrollPositionNormalized = new vec2(0, 1);
        this.menuRoot.getTransform().setLocalScale(new vec3(1, 1, 1));
    });
    delayedEvent.reset(1);
}

It basically sets scrollPositionNormalized to (0, 1), which translates to "scrolled to the vertical top"; (0, 0) is the vertical center and (0, -1) the vertical bottom. If you don't call updateScrollPosition, the list starts out scrolled roughly halfway down instead of at the top.
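With all the pieces in place, here is a minimal sketch of what driving the menu from your own script could look like. It is illustrative only: the labels are made up, and it assumes a BaseScrollButtonData can simply be constructed and have its buttonText set, as setButtonData above suggests.

// Illustrative driver script; the labels are examples, while createButtons and onButtonPressed
// are the public surface of UIKitScrollMenuController shown above.
@component
export class DemoMenuDriver extends BaseScriptComponent {
    @input menuController: UIKitScrollMenuController;

    private onAwake(): void {
        // Build simple data entries; a child class of BaseScrollButtonData works here too.
        const entries = ["Red", "Green", "Blue"].map(label => {
            const entry = new BaseScrollButtonData();
            entry.buttonText = label;
            return entry;
        });

        // Listen in one place for whichever button in the list gets pressed.
        this.menuController.onButtonPressed.add(data => print(`Pressed: ${data.buttonText}`));

        // Create the buttons; createButtons also makes the menu visible.
        this.menuController.createButtons(entries);
    }
}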

Conclusion

So that’s kind of it. I do hope it’s useful for you and also gives you some insights into how you can cajole some UIKit elements into the shape you want. There is actually another little helper class in here, but I will deal with that later in yet another blog post.

The demo project is (still) here at GitHub.


Why most enterprise AI coding pilots underperform (Hint: It's not the model)


Gen AI in software engineering has moved well beyond autocomplete. The emerging frontier is agentic coding: AI systems capable of planning changes, executing them across multiple steps and iterating based on feedback. Yet despite the excitement around “AI agents that code,” most enterprise deployments underperform. The limiting factor is no longer the model. It’s context: The structure, history and intent surrounding the code being changed. In other words, enterprises are now facing a systems design problem: They have not yet engineered the environment these agents operate in.

The shift from assistance to agency

The past year has seen a rapid evolution from assistive coding tools to agentic workflows. Research has begun to formalize what agentic behavior means in practice: The ability to reason across design, testing, execution and validation rather than generate isolated snippets. Work such as dynamic action re-sampling shows that allowing agents to branch, reconsider and revise their own decisions significantly improves outcomes in large, interdependent codebases. At the platform level, providers like GitHub are now building dedicated agent orchestration environments, such as Copilot Agent and Agent HQ, to support multi-agent collaboration inside real enterprise pipelines.

But early field results tell a cautionary story. When organizations introduce agentic tools without addressing workflow and environment, productivity can decline. A randomized controlled study this year showed that developers who used AI assistance in unchanged workflows completed tasks more slowly, largely due to verification, rework and confusion around intent. The lesson is straightforward: Autonomy without orchestration rarely yields efficiency.

Why context engineering is the real unlock

In every unsuccessful deployment I've observed, the failure stemmed from context. When agents lack a structured understanding of a codebase (specifically its relevant modules, dependency graph, test harness, architectural conventions and change history), they often generate output that appears correct but is disconnected from reality. Too much information overwhelms the agent; too little forces it to guess. The goal is not to feed the model more tokens. The goal is to determine what should be visible to the agent, when and in what form.

The teams seeing meaningful gains treat context as an engineering surface. They create tooling to snapshot, compact and version the agent’s working memory: What is persisted across turns, what is discarded, what is summarized and what is linked instead of inlined. They design deliberation steps rather than prompting sessions. They make the specification a first-class artifact, something reviewable, testable and owned, not a transient chat history. This shift aligns with a broader trend some researchers describe as “specs becoming the new source of truth.”
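To make that concrete, here is one purely illustrative way such a working-memory snapshot and specification could be typed. The shape and field names are assumptions for the sake of the example, not any particular vendor's schema:

// Illustrative only: one possible shape for a versioned agent-context snapshot.
interface ContextSnapshot {
    version: string;                    // snapshots are versioned so a run can be replayed later
    persistedAcrossTurns: string[];     // facts deliberately carried from turn to turn
    summarized: Record<string, string>; // long artifacts compacted into short summaries
    linkedNotInlined: string[];         // references (file paths, ticket URLs) kept as links
    discarded: string[];                // what was dropped, recorded for auditability
}

// The specification as a first-class, reviewable artifact rather than transient chat history.
interface Specification {
    owner: string;             // someone owns it
    acceptanceTests: string[]; // it is testable
    revision: number;          // it is versioned and reviewable
}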

Workflow must change alongside tooling

But context alone isn’t enough. Enterprises must re-architect the workflows around these agents. As McKinsey’s 2025 report “One Year of Agentic AI” noted, productivity gains arise not from layering AI onto existing processes but from rethinking the process itself. When teams simply drop an agent into an unaltered workflow, they invite friction: Engineers spend more time verifying AI-written code than they would have spent writing it themselves. The agents can only amplify what’s already structured: Well-tested, modular codebases with clear ownership and documentation. Without those foundations, autonomy becomes chaos.

Security and governance, too, demand a shift in mindset. AI-generated code introduces new forms of risk: Unvetted dependencies, subtle license violations and undocumented modules that escape peer review. Mature teams are beginning to integrate agentic activity directly into their CI/CD pipelines, treating agents as autonomous contributors whose work must pass the same static analysis, audit logging and approval gates as any human developer. GitHub’s own documentation highlights this trajectory, positioning Copilot Agents not as replacements for engineers but as orchestrated participants in secure, reviewable workflows. The goal isn’t to let an AI “write everything,” but to ensure that when it acts, it does so inside defined guardrails.

What enterprise decision-makers should focus on now

For technical leaders, the path forward starts with readiness rather than hype. Monoliths with sparse tests rarely yield net gains; agents thrive where tests are authoritative and can drive iterative refinement. This is exactly the loop Anthropic calls out for coding agents. Pilot in tightly scoped domains (test generation, legacy modernization, isolated refactors); treat each deployment as an experiment with explicit metrics (defect escape rate, PR cycle time, change failure rate, security findings burned down). As your usage grows, treat agents as data infrastructure: Every plan, context snapshot, action log and test run is data that composes into a searchable memory of engineering intent, and a durable competitive advantage.

Under the hood, agentic coding is less a tooling problem than a data problem. Every context snapshot, test iteration and code revision becomes a form of structured data that must be stored, indexed and reused. As these agents proliferate, enterprises will find themselves managing an entirely new data layer: One that captures not just what was built, but how it was reasoned about. This shift turns engineering logs into a knowledge graph of intent, decision-making and validation. In time, the organizations that can search and replay this contextual memory will outpace those who still treat code as static text.

The coming year will likely determine whether agentic coding becomes a cornerstone of enterprise development or another inflated promise. The difference will hinge on context engineering: How intelligently teams design the informational substrate their agents rely on. The winners will be those who see autonomy not as magic, but as an extension of disciplined systems design: Clear workflows, measurable feedback, and rigorous governance.

Bottom line

Platforms are converging on orchestration and guardrails, and research keeps improving context control at inference time. The winners over the next 12 to 24 months won’t be the teams with the flashiest model; they’ll be the ones that engineer context as an asset and treat workflow as the product. Do that, and autonomy compounds. Skip it, and the review queue does.

Context + agent = leverage. Skip the first half, and the rest collapses.

Dhyey Mavani is accelerating generative AI at LinkedIn.

Read more from our guest writers. Or, consider submitting a post of your own! See our guidelines here.


