Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

String Performance: Avoid Unnecessary Conversions with StringBuilder

1 Share
The excerpt from "Rock Your Code" advises caution when using StringBuilder with non-string types, highlighting that unnecessary conversions can hinder performance.
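The book's advice targets .NET's StringBuilder, but the same pitfall can be sketched in Java, where string-concatenation appends like `"" + value` create a throwaway String that the typed `append` overloads avoid (hypothetical example, not from the book):

```java
// Hypothetical sketch of the principle: avoid forcing a conversion before
// appending. The excerpt targets .NET, but java.lang.StringBuilder behaves
// analogously.
public class AppendDemo {
    // "" + v allocates an intermediate String for every value.
    static String withConversions(int[] values) {
        StringBuilder sb = new StringBuilder();
        for (int v : values) {
            sb.append("" + v);
        }
        return sb.toString();
    }

    // The typed append(int) overload writes the digits directly,
    // with no throwaway String objects.
    static String direct(int[] values) {
        StringBuilder sb = new StringBuilder();
        for (int v : values) {
            sb.append(v);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3};
        System.out.println(withConversions(data)); // 123
        System.out.println(direct(data));          // 123
    }
}
```

Both produce the same output; the difference is purely in allocations, which is exactly the kind of hidden cost the excerpt warns about.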



Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

AI Unit Testing in 2026: What Developers Still Get Wrong


AI is transforming software development at an incredible pace. Tools can now generate unit tests in seconds, covering edge cases, happy paths, and even complex flows.

It feels like we’ve solved testing.

We haven’t.

AI didn’t eliminate the need for unit testing.
It exposed a deeper problem:

We don’t validate our tests.


⚡ AI Unit Testing Is Fast, But Is It Correct?

Recent industry discussions highlight how AI is accelerating unit testing, but also raising new risks.

Two strong examples from SD Times point to a clear shift:

We’ve moved from writing tests → generating tests.

But they stop just before the real challenge:

Who validates those tests?


AI makes test creation incredibly easy.

Today, you can:

  • Generate hundreds of tests in seconds
  • Reach impressive coverage numbers
  • Simulate multiple execution paths

And that feels like progress.

But here’s the catch:

Speed amplifies mistakes.

AI doesn’t understand your system—it predicts patterns based on existing code.

That leads to tests that are:

  • Redundant
  • Based on incorrect assumptions
  • Passing… but not testing anything meaningful

That’s the gap in AI unit testing today.

Not generation.

Validation.

❗ The Dangerous Illusion: Passing Tests

A test passing used to mean something.

Today?

Not always.

Here’s a simple example:

TEST(CalculatorTests, Add_ReturnsCorrectValue)
{
    Calculator calc;
    ASSERT_EQ(calc.Add(2, 3), 5);
}

Now imagine AI generates 20 variations of this:

  • Different inputs
  • Same logic
  • Same assertions

You get:

  • More tests
  • Higher coverage

But no additional value.

This is what we call:

False confidence.
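One hedged antidote to those redundant variations is a single table-driven test: the inputs vary, but the logic and assertion live in one place. A plain-Java sketch (the `add` method stands in for the Calculator above; in C++, GoogleTest's value-parameterized TEST_P serves the same purpose):

```java
// Table-driven sketch: one loop replaces 20 near-identical generated tests.
// add() is a stand-in for the Calculator shown earlier.
public class CalculatorTableTest {
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        // Each row is {a, b, expected}. A new case is a new row, not a new test.
        int[][] cases = { {2, 3, 5}, {0, 0, 0}, {-1, 1, 0}, {100, -100, 0} };
        for (int[] c : cases) {
            int got = add(c[0], c[1]);
            if (got != c[2]) {
                throw new AssertionError("add(" + c[0] + ", " + c[1] + ") = " + got);
            }
        }
        System.out.println("all " + cases.length + " cases pass");
    }
}
```

The coverage number stays the same, but now every case visibly earns its place in the table.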


🧩 The Real Problem: These Aren’t True Unit Tests

AI-generated tests often:

  • Call real file systems
  • Depend on time (DateTime.Now)
  • Use real services or processes

They look like unit tests.
They pass like unit tests.

But they’re not isolated.

And without isolation, you don’t have unit testing.
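A minimal sketch of what isolation looks like in practice, using Java's java.time.Clock as the analogue of the DateTime.Now problem above (the TokenChecker class is hypothetical; the point is injecting the clock instead of reading wall time):

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

// Hypothetical example: a class that depends on "now" becomes a real,
// deterministic unit test target once the clock is injected.
public class TokenChecker {
    private final Clock clock;

    public TokenChecker(Clock clock) {
        this.clock = clock;
    }

    public boolean isExpired(Instant expiresAt) {
        return Instant.now(clock).isAfter(expiresAt);
    }

    public static void main(String[] args) {
        // A fixed clock pins "now", so the test never depends on wall time.
        Clock fixed = Clock.fixed(Instant.parse("2026-01-01T00:00:00Z"), ZoneOffset.UTC);
        TokenChecker checker = new TokenChecker(fixed);
        System.out.println(checker.isExpired(Instant.parse("2025-12-31T00:00:00Z"))); // true
        System.out.println(checker.isExpired(Instant.parse("2026-06-01T00:00:00Z"))); // false
    }
}
```

Isolation frameworks like Typemock extend this idea to dependencies you cannot refactor, such as statics and non-virtual calls, but the goal is the same: no test outcome should depend on the real environment.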


🛠️ Why Mocking and Isolation Matter More Than Ever

In the age of AI, mocking isn’t optional—it’s critical.

A real unit test must:

  • Run fast
  • Be deterministic
  • Isolate dependencies

This is where tools like Typemock come in.

With isolator-based unit testing, you can:

  • Mock static, non-virtual, and hard dependencies
  • Ensure tests don’t touch external resources
  • Keep tests truly independent

Without this?

AI will happily generate tests that:

  • Pass today
  • Break tomorrow
  • And never tell you why

🧠 The Shift: From Test Creation to Test Validation

This is the real evolution:

  Before AI              | After AI
  Writing tests is hard  | Writing tests is easy
  Few tests, high intent | Many tests, unclear value
  Focus on creation      | Focus on validation

We are entering a new era:

Test Validation is the new bottleneck.


🔍 What Should You Validate?

To trust AI-generated tests, you need to verify:

1. Duplication

Are multiple tests checking the same thing?

2. Coverage Quality

Do tests actually exercise meaningful logic?

3. Isolation

Are external dependencies properly mocked?

4. Assertions

Do the assertions reflect real business intent?
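The duplication check in particular is automatable. A toy sketch, with a deliberately naive normalization rule (strip whitespace and replace numeric literals), that groups tests whose bodies differ only by inputs:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy sketch: flag tests whose bodies are identical once whitespace is
// stripped and numeric literals are masked -- same shape, same assertion.
public class DuplicateTestFinder {
    static String normalize(String body) {
        return body.replaceAll("-?\\d+", "N").replaceAll("\\s+", "");
    }

    public static void main(String[] args) {
        Map<String, String> tests = new LinkedHashMap<>();
        tests.put("Add_2_3", "ASSERT_EQ(calc.Add(2, 3), 5);");
        tests.put("Add_10_5", "ASSERT_EQ(calc.Add(10, 5), 15);");
        tests.put("Add_neg", "ASSERT_EQ(calc.Add(-1, 1), 0);");

        // Group test names by their normalized body.
        Map<String, List<String>> groups = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : tests.entrySet()) {
            groups.computeIfAbsent(normalize(e.getValue()), k -> new ArrayList<>())
                  .add(e.getKey());
        }
        for (List<String> g : groups.values()) {
            if (g.size() > 1) {
                System.out.println("possible duplicates: " + g);
            }
        }
    }
}
```

A real validator would work on parsed test ASTs rather than regexes, but even this crude pass surfaces the "20 variations of the same test" pattern described earlier.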


🚀 Where Typemock Fits

Typemock was built for this exact challenge.

In a world of AI-generated tests, you need:

  • Strong mocking capabilities (.NET & C++)
  • Isolation of any dependency
  • Confidence that tests are real—not illusions

Typemock helps you:

  • Turn generated tests into real unit tests
  • Remove hidden dependencies
  • Ensure your test suite actually protects your code



💡 Final Thought

AI didn’t break testing.

It revealed something we ignored:

A test that passes is not necessarily a test you can trust.

The future isn’t about writing more tests.

It’s about knowing which ones matter.

The post AI Unit Testing in 2026: What Developers Still Get Wrong appeared first on Typemock.


Exploring Gemma 4: The Future of Local AI Models


Recently, DeepMind unveiled Gemma 4, the highly anticipated successor to the popular Gemma 3 model lineup. We’re excited to explore its performance when run locally, especially using vLLM at full capacity. As we delve into its capabilities, we’ll also share insights on setting up your own local AI environment to test Gemma 4’s prowess.

Based on content from Digital Spaceport

Technical Setup

For those eager to replicate our setup, we recommend checking out the Hermes OpenwebUI Setup guide and the 8 GPU Rack build video for detailed instructions. Here’s a list of hardware essentials we used:

  • GPUs: 3090 24GB, 5060Ti 16GB, 4090 24GB
  • Motherboard: MZ32-AR0
  • CPU: AMD EPYC 7702
  • RAM: 256GB DDR4 DIMMs
  • Power Supplies: Corsair HX1500i, Seasonic PRIME PX1600
  • Riser Cables and Rack: x16 PCIe Risers, PCIe3 x1 USB risers, Plastic Rack

Visit Digital Spaceport for a comprehensive DIY guide.

Exploring Gemma 4’s Features

Gemma 4 introduces several enhancements, including support for up to 140 languages and a context window of up to 256. Models range from lightweight variants like E2B and E4B, optimized for low-end hardware, to the most robust 31B model. One standout feature is its ability to handle diverse AI tasks with impressive reasoning and multimodality, even on smaller models.

Benchmarking and Performance

The improved context window prevents quality deterioration, a significant upgrade from its predecessor. Notably, tests showed exceptional performance jumps in MMLU and code evaluation scenarios, indicating a considerable leap compared to the Gemma 3 series. While we’re still conducting nuanced benchmark testing, early results are promising.

The Ethical Dimension

In exploring AI capabilities, ethical considerations remain paramount. One of our tests posed a classic ethical dilemma, where Gemma 4 demonstrated commendable reasoning, albeit with some limitations around inherent safety protocols. This scenario underscores the need for continual improvements in AI ethics training, ensuring comprehensive self-governance in complex situations.

Conclusion

Gemma 4 represents a promising stride in local AI deployment, offering versatility and power across various configurations. Whether you’re looking to harness its capabilities for coding tasks or exploring its safety features, Gemma 4’s versatility holds immense potential for both hobbyists and professionals.

To stay updated with our latest AI explorations, consider supporting us through membership, Patreon, or purchasing via our affiliate links. For more details on the Gemma 4 model and associated resources, visit the links provided.


Are Employers Using Your Data To Figure Out the Lowest Salary You'll Accept?

MarketWatch looks at "surveillance wages," pay rates "based not on an employee's performance or seniority, but on formulas that use their personal data, often collected without employees' knowledge." According to Nina DiSalvo, policy director at labor advocacy group Towards Justice, some systems use signals associated with financial vulnerability — including data on whether a prospective employee has taken out a payday loan or has a high credit-card balance — to infer the lowest pay a candidate might accept. Companies can also scrape candidates' public personal social-media pages, she said... A first-of-its-kind audit of 500 labor-management artificial-intelligence companies by Veena Dubal, a law professor at University of California, Irvine, and Wilneida Negrón, a tech strategist, found that employers in the healthcare, customer service, logistics and retail industries are customers of vendors whose tools are designed to enable this practice. Published by the Washington Center for Equitable Growth, a progressive economic think tank, the August 2025 report... does not claim that all employers using these systems engage in algorithmic wage surveillance. Instead, it warns that the growing use of algorithmic tools to analyze workers' personal data can enable pay practices that prioritize cost-cutting over transparency or fairness... Surveillance wages don't stop at the hiring stage — they follow workers onto the job, too. The vendors that provide such services also offer tools that are built to set bonus or incentive compensation, according to the report. These tools track their productivity, customer interactions and real-time behavior — including, in some cases, audio and video surveillance on the job. Nearly 70% of companies with more than 500 employees were already using employee-monitoring systems in 2022, such as software that monitors computer activity, according to a survey from the International Data Corporation. 
"The data that they have about you may allow an algorithmic decision system to make assumptions about how much, how big of an incentive, they need to give to a particular worker to generate the behavioral response they seek," DiSalvo said. The article notes that Colorado introduced the "Prohibit Surveillance Data to Set Prices and Wages Act" to ban companies from setting pay rates with algorithms that use payday-loan history, location data or Google search behavior. Thanks to long-time Slashdot reader sinij for sharing the article.

Read more of this story at Slashdot.


New Copilot for Windows 11 includes a full Microsoft Edge package, uses more RAM


There’s a new version of Copilot rolling out on Windows 11, and it dumps native code (WinUI) in favor of web components. This was expected based on our previous findings, but to our surprise, it actually ships with a full-blown version of Microsoft Edge.

I can’t tell if Microsoft is really losing the AI race, but at this point, it’s quite obvious that the company hasn’t managed to build a solid Copilot experience for Windows or stick with one approach for more than a quarter.

This latest version replaces the native app, which itself replaced the WebView version, which replaced the PWA, which replaced the Copilot that once lived in a sidebar.

Copilot in the Microsoft Store

If you don’t have the new Copilot yet, go to the Microsoft Store and search for Copilot. You’ll find a new listing called “Microsoft Copilot,” and it shows a download button even when Copilot is already installed on your PC.

If you hit the Download button, you’ll notice it completes almost instantly. That’s because it isn’t downloading the Copilot app itself. Instead, it’s downloading a Copilot installer, similar to how the Microsoft Edge installer works.

Copilot using Edge installer

The Store even warns that you need to take action in another window, which makes it clear that the Copilot download is no longer handled directly by the Microsoft Store. You might have noticed a similar pattern for Microsoft Teams.

After the update is installed, the old native Copilot app, built on the WinUI framework, automatically disappears from the Start menu and other places, as the new Copilot takes over.

Copilot new app on Windows 11

I opened this new Copilot, and it looks exactly like the web version (web.copilot.com). It’s actually a lot smoother and almost feels native. However, there are some caveats, such as high RAM usage, which is quite upsetting as it undermines Microsoft’s recent efforts to revive Windows.

Copilot’s new version is a resource hog: a hybrid app that ships with its own Edge browser

In our tests, Windows Latest observed that Copilot uses up to 500MB of RAM in the background, and it also reaches up to 1GB of RAM when you begin to interact with it. On the other hand, native Copilot used to have less than 100MB of RAM usage.

Copilot in Task Manager

This made me curious, so I looked into how the new “web-based” Copilot app is different, and it turns out that it is a hybrid web app with a rebranded/forked Edge instance running as a dedicated app in a WebView2 container.

Microsoft Edge package in Copilot app

As you can see in the above screenshot, Copilot’s installation folder literally has a 146.0.3856.97 folder, which is a complete Microsoft Edge installation. The size of the Edge folder is approx 850 MB.

It contains all Edge binaries, including msedge.exe, msedge.dll, msedge_elf.dll, ffmpeg.dll, libGLESv2.dll, Vulkan/SwiftShader, WidevineCDM, etc. Also, Windows Latest observed that msedge.dll inside the new Copilot app package is 315 MB, which confirms it’s a full Chromium browser engine.

msedge.exe in Copilot app package

If it were a standard WebView2 or Progressive Web App, it would have relied on the existing Edge integration in Windows 11 instead of shipping with its own Edge fork.

I also found Edge subsystems in Copilot’s package, including Browser Helper Objects, Trust Protection Lists/, PdfPreview/, Extensions/, edge_feedback/, edge_game_assist/, and DRM.

MS edge features in Copilot

Interestingly, Windows 11’s new Copilot app has both WebView2 and full browser capabilities. The evidence: an msedgewebview2.exe in the package, along with multiple .dll files, including EmbeddedBrowserWebView.dll, which means a WebView2 runtime is bundled alongside Microsoft Edge.

Copilot for Windows 11 uses a private Edge copy
Image Courtesy: WindowsLatest.com

This new Copilot is an interesting app, and that might also explain why it feels faster than typical web apps or PWAs. It’s because Microsoft ships a private copy of Edge inside the Copilot app, includes a custom launcher (mscopilot.exe), and the Copilot UI itself is a web app rendered via WebView2.

Regardless, even if it passes as a good web app, we don’t need any of those on Windows 11 at this point. Windows 11 is already bloated with web apps, PWAs, and Electron. What do you think? Let me know in the comments below.

The post New Copilot for Windows 11 includes a full Microsoft Edge package, uses more RAM appeared first on Windows Latest


Three New MAI Models


Microsoft announced three new first‑party MAI models this week, covering transcription, voice generation, and image creation, all available through Microsoft Foundry and the MAI Playground.

The transcription model (MAI‑Transcribe‑1) focuses on accuracy across a broad set of languages while running faster and cheaper than the usual options. The voice model (MAI‑Voice‑1) generates natural speech from very small samples. The model can produce a full minute of audio in about a second, and it does so with unusually efficient GPU use. If you want to check it out, try it in Copilot Audio Expressions.

MAI‑Image‑2 also improves image generation speed across Copilot and Foundry, delivering roughly twice the performance while keeping quality in line with previous models. Just ask Copilot (web or Windows) to generate an image and it will use MAI‑Image‑2 where available.

Microsoft is also pricing these models well below the usual market rates. Transcription at thirty‑six cents per hour is roughly a 40 to 60 percent savings compared to the typical dollar‑per‑hour services. Voice generation at twenty‑two dollars per million characters comes in at about half the cost of most high‑quality TTS models. Image output at thirty‑three dollars per million tokens is often 70 percent cheaper than comparable offerings from the major providers. The MAI lineup is clearly positioned as the lower‑cost option.

What stands out is not any single capability, but the shift in direction. Microsoft is building more of its own stack rather than betting everything on OpenAI. That shift, I assume, has deeper implications for cost, direction, and long‑term strategy. Even more significantly, each model was built by a small team of about 10 people and tuned for efficiency, which seems to be the through‑line of this entire effort and suggests that high‑quality models no longer require massive research groups.


As a note, I do work at Microsoft, but I am not part of the team that develops these models.
