Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
147417 stories
·
33 followers

Is a secure AI assistant possible?

1 Share

AI agents are a risky business. Even when stuck inside the chatbox window, LLMs will make mistakes and behave badly. Once they have tools that they can use to interact with the outside world, such as web browsers and email addresses, the consequences of those mistakes become far more serious.

That might explain why the first breakthrough LLM personal assistant came not from one of the major AI labs, which have to worry about reputation and liability, but from an independent software engineer, Peter Steinberger. In November of 2025, Steinberger uploaded his tool, now called OpenClaw, to GitHub, and in late January the project went viral.

OpenClaw harnesses existing LLMs to let users create their own bespoke assistants. For some users, this means handing over reams of personal data, from years of emails to the contents of their hard drive. That has security experts thoroughly freaked out. The risks posed by OpenClaw are so extensive that it would probably take someone the better part of a week to read all of the security blog posts on it that have cropped up in the past few weeks. The Chinese government took the step of issuing a public warning about OpenClaw’s security vulnerabilities.

In response to these concerns, Steinberger posted on X that nontechnical people should not use the software. (He did not respond to a request for comment for this article.) But there’s a clear appetite for what OpenClaw is offering, and it’s not limited to people who can run their own software security audits. Any AI companies that hope to get in on the personal assistant business will need to figure out how to build a system that will keep users’ data safe and secure. To do so, they’ll need to borrow approaches from the cutting edge of agent security research.

Risk management

OpenClaw is, in essence, a mecha suit for LLMs. Users can choose any LLM they like to act as the pilot; that LLM then gains access to improved memory capabilities and the ability to set itself tasks that it repeats on a regular cadence. Unlike the agentic offerings from the major AI companies, OpenClaw agents are meant to be on 24-7, and users can communicate with them using WhatsApp or other messaging apps. That means they can act like a superpowered personal assistant who wakes you each morning with a personalized to-do list, plans vacations while you work, and spins up new apps in its spare time.

But all that power has consequences. If you want your AI personal assistant to manage your inbox, then you need to give it access to your email—and all the sensitive information contained there. If you want it to make purchases on your behalf, you need to give it your credit card info. And if you want it to do tasks on your computer, such as writing code, it needs some access to your local files. 

There are a few ways this can go wrong. The first is that the AI assistant might make a mistake, as when a user’s Google Antigravity coding agent reportedly wiped his entire hard drive. The second is that someone might gain access to the agent using conventional hacking tools and use it to either extract sensitive data or run malicious code. In the weeks since OpenClaw went viral, security researchers have demonstrated numerous such vulnerabilities that put security-naïve users at risk.

Both of these dangers can be managed: Some users are choosing to run their OpenClaw agents on separate computers or in the cloud, which protects data on their hard drives from being erased, and other vulnerabilities could be fixed using tried-and-true security approaches.

But the experts I spoke to for this article were focused on a much more insidious security risk known as prompt injection. Prompt injection is effectively LLM hijacking: Simply by posting malicious text or images on a website that an LLM might peruse, or sending them to an inbox that an LLM reads, attackers can bend it to their will.

And if that LLM has access to any of its user’s private information, the consequences could be dire. “Using something like OpenClaw is like giving your wallet to a stranger in the street,” says Nicolas Papernot, a professor of electrical and computer engineering at the University of Toronto. Whether or not the major AI companies can feel comfortable offering personal assistants may come down to the quality of the defenses that they can muster against such attacks.

It’s important to note here that prompt injection has not yet caused any catastrophes, or at least none that have been publicly reported. But now that there are likely hundreds of thousands of OpenClaw agents buzzing around the internet, prompt injection might start to look like a much more appealing strategy for cybercriminals. “Tools like this are incentivizing malicious actors to attack a much broader population,” Papernot says. 

Building guardrails

The term “prompt injection” was coined by the popular LLM blogger Simon Willison in 2022, a couple of months before ChatGPT was released. Even back then, it was possible to discern that LLMs would introduce a completely new type of security vulnerability once they came into widespread use. LLMs can’t tell apart the instructions that they receive from users and the data that they use to carry out those instructions, such as emails and web search results—to an LLM, they’re all just text. So if an attacker embeds a few sentences in an email and the LLM mistakes them for an instruction from its user, the attacker can get the LLM to do anything it wants.

Prompt injection is a tough problem, and it doesn’t seem to be going away anytime soon. “We don’t really have a silver-bullet defense right now,” says Dawn Song, a professor of computer science at UC Berkeley. But there’s a robust academic community working on the problem, and they’ve come up with strategies that could eventually make AI personal assistants safe.

Technically speaking, it is possible to use OpenClaw today without risking prompt injection: Just don’t connect it to the internet. But restricting OpenClaw from reading your emails, managing your calendar, and doing online research defeats much of the purpose of using an AI assistant. The trick of protecting against prompt injection is to prevent the LLM from responding to hijacking attempts while still giving it room to do its job.

One strategy is to train the LLM to ignore prompt injections. A major part of the LLM development process, called post-training, involves taking a model that knows how to produce realistic text and turning it into a useful assistant by “rewarding” it for answering questions appropriately and “punishing” it when it fails to do so. These rewards and punishments are metaphorical, but the LLM learns from them as an animal would. Using this process, it’s possible to train an LLM not to respond to specific examples of prompt injection.

But there’s a balance: Train an LLM to reject injected commands too enthusiastically, and it might also start to reject legitimate requests from the user. And because there’s a fundamental element of randomness in LLM behavior, even an LLM that has been very effectively trained to resist prompt injection will likely still slip up every once in a while.

Another approach involves halting the prompt injection attack before it ever reaches the LLM. Typically, this involves using a specialized detector LLM to determine whether or not the data being sent to the original LLM contains any prompt injections. In a recent study, however, even the best-performing detector completely failed to pick up on certain categories of prompt injection attack.

The third strategy is more complicated. Rather than controlling the inputs to an LLM by detecting whether or not they contain a prompt injection, the goal is to formulate a policy that guides the LLM’s outputs—i.e., its behaviors—and prevents it from doing anything harmful. Some defenses in this vein are quite simple: If an LLM is allowed to email only a few pre-approved addresses, for example, then it definitely won’t send its user’s credit card information to an attacker. But such a policy would prevent the LLM from completing many useful tasks, such as researching and reaching out to potential professional contacts on behalf of its user.

“The challenge is how to accurately define those policies,” says Neil Gong, a professor of electrical and computer engineering at Duke University. “It’s a trade-off between utility and security.”

On a larger scale, the entire agentic world is wrestling with that trade-off: At what point will agents be secure enough to be useful? Experts disagree. Song, whose startup, Virtue AI, makes an agent security platform, says she thinks it’s possible to safely deploy an AI personal assistant now. But Gong says, “We’re not there yet.” 

Even if AI agents can’t yet be entirely protected against prompt injection, there are certainly ways to mitigate the risks. And it’s possible that some of those techniques could be implemented in OpenClaw. Last week, at the inaugural ClawCon event in San Francisco, Steinberger announced that he’d brought a security person on board to work on the tool.

As of now, OpenClaw remains vulnerable, though that hasn’t dissuaded its multitude of enthusiastic users. George Pickett, a volunteer maintainer of the OpenGlaw GitHub repository and a fan of the tool, says he’s taken some security measures to keep himself safe while using it: He runs it in the cloud, so that he doesn’t have to worry about accidentally deleting his hard drive, and he’s put mechanisms in place to ensure that no one else can connect to his assistant.

But he hasn’t taken any specific actions to prevent prompt injection. He’s aware of the risk but says he hasn’t yet seen any reports of it happening with OpenClaw. “Maybe my perspective is a stupid way to look at it, but it’s unlikely that I’ll be the first one to be hacked,” he says.

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

WPF to React

1 Share

 I've spent the last year migrating a huge Windows WPF application to a modern React app.  The battle continues.

Here I'll share the journey and some of the pain-points that I found and fixed along the way

Read the whole story
alvinashcraft
2 hours ago
reply
Pennsylvania, USA
Share this story
Delete

Exploring Compiled Bindings in .NET MAUI

1 Share

Learn about compiled bindings in .NET MAUI and how they affect performance and debugging.

Imagine you’re about to buy a house … but you can only discover its defects once you’re already living in it.

You’d definitely want to know that information beforehand to make better decisions, right? ️

The same thing happens in applications: depending on the mechanism we choose, some processes can be slower, consume more resources and—even worse—show errors way too late.

Today, you’ll learn what Compiled Bindings are and the latest enhancements available in .NET MAUI to boost performance and avoid unpleasant surprises when the app is already in the users’ hands.

Data Bindings in .NET MAUI

In .NET MAUI, we usually work with data bindings, but when using them, it’s important to keep two key points in mind:

  • There’s no compile-time validation of binding expressions, which means that if a binding is invalid, we’ll only find out when the app is already running. This can cost us valuable time, when we could have detected the issue during the compilation process.

  • And another point, related to the one above, is that runtime verification bindings aren’t very efficient in terms of performance, since they do all their work while the app is running. To “check” the information they need, they must inspect objects over and over again. This makes the process slower and, of course, increases resource consumption.

Compiled bindings enhance the performance of data bindings in .NET MAUI by resolving binding expressions during compile time instead of waiting until the app is running. This allows the system to detect issues early, so if a binding is incorrect, it will show up as a build error. As a result, debugging becomes easier and you avoid dealing with unexpected binding failures at runtime.

Compile Time vs. Runtime

We’ve been talking a lot about compile time and runtime, so if those terms still feel a bit blurry for you, here’s a simple explanation to make everything clearer.

Compile Time: This is the moment when your code is checked to make sure everything is correct before the app can run. Here is where issues like typos in property names, missing variables, or incompatible types are detected. If something is wrong, the build stops and alerts you so you can fix it first. This saves a lot of headaches—it’s always better to catch errors at this stage rather than later on.

Runtime: This occurs when the app is already running. If an error appears at this point, it can definitely be fixed—but it may take a bit more time, since you often need to analyze more carefully to understand exactly what caused the issue. And worst of all, it can crash or freeze the app right in front of the user.

Summarizing it in a table, these would be the differences between the two:

Description: Compile Time, When does it happen? Before running the app, If there’s an error?: It stops the build and alerts you - Description: Runtime, When does it happen? While the app is running, If there’s an error? The app may crash while it’s running

Ways to Use Compiled Bindings

There are two ways to use compiled bindings in .NET MAUI: you can apply them directly in your XAML files or configure them through C# code.

Let’s explore each one of them:

Compiled Bindings in XAML

In .NET MAUI 8, compiled bindings aren’t applied when a XAML binding uses the Source property, and they also don’t support multi-bindings.

Starting in .NET MAUI 9, the compiler now shows warnings when bindings aren’t using compiled bindings by default. You can learn more about this behavior in the article: XAML compiled binding warnings.

Compiled Bindings in C# Code

Working with compiled bindings brings many benefits. One of the biggest ones—besides performance improvements—is that it gives us a much better experience as developers. It saves us a lot of time by showing problems before running the application. Otherwise, we would have to debug, investigate exactly where the issue happens, identify the error and then fix it. Now, IntelliSense can warn us right away.

When using bindings on C#, we normally rely on string-based property paths, such as "Text".

These paths are resolved at runtime using reflection, and that process consumes resources. On top of that, the performance cost can vary depending on the platform where the app is running (Android, iOS, Windows, etc.).

Example:

// .NET 8
MyLabel.SetBinding(Label.TextProperty, "Text");

Now, in .NET MAUI 9, a new SetBinding extension method was introduced that allows defining bindings using a lambda expression (Func) instead of a string.

Like this:

// .NET 9 
MyLabel.SetBinding(Label.TextProperty, static (Entry entry) => entry.Text);

When using SetBinding, you need to be careful with the lambda expressions you define. Only simple property access expressions are allowed. This is because compiled bindings need to know exactly which property is being bound in order to generate the optimized binding code at compile time.

A valid example would be directly accessing the Name property:

static (PersonViewModel vm) => vm.Name;

Besides property access, you can also use:

Array or list index access

static (PersonViewModel vm) => vm.PhoneNumbers[0];

Indexer access ([])

static (PersonViewModel vm) => vm.Config["Font"];

Casts to access the correct property

static (Label label) => (label.BindingContext as PersonViewModel).Name;

static (Label label) => ((PersonViewModel)label.BindingContext).Name;

❌ What is not allowed?

Just as it’s important to know what is supported, it’s equally important to understand what isn’t. Here are some examples:

➖ Method calls

static (PersonViewModel vm) => vm.GetAddress();

➖ Creating new values by combining multiple properties

static (PersonViewModel vm) => vm.Address?.Street + " " + vm.Address?.City;

➖ String interpolation

static (PersonViewModel vm) => $"Name: {vm.Name}";

BindingBase.Create

Another improvement in .NET MAUI 9 is BindingBase.Create.

In addition to using lambda expressions directly with SetBinding, .NET MAUI 9 also adds a helper called BindingBase.Create. This is useful for more complex bindings, such as a MultiBinding.

Example:

.NET 8

myEntry.SetBinding(Entry.TextProperty, new MultiBinding 
{ 
    Bindings = new Collection<BindingBase> 
    { 
    new Binding(nameof(Entry.FontFamily), source: RelativeBindingSource.Self), 
    new Binding(nameof(Entry.FontSize), source: RelativeBindingSource.Self), 
    new Binding(nameof(Entry.FontAttributes), source: RelativeBindingSource.Self), 
    }, 
Converter = new StringConcatenationConverter()

});

Now in .NET 9

// in .NET 9

myEntry.SetBinding(Entry.TextProperty, new MultiBinding 
    { 
    Bindings = new Collection<BindingBase> 
    { 
    Binding.Create(static (Entry entry) => entry.FontFamily, source: RelativeBindingSource.Self), 
    Binding.Create(static (Entry entry) => entry.FontSize, source: RelativeBindingSource.Self), 
    Binding.Create(static (Entry entry) => entry.FontAttributes, source: RelativeBindingSource.Self), 
    }, 
    Converter = new StringConcatenationConverter() 
    });

Compiler Error

⚠️ Important: The official documentation says: “A CS0272 compiler error will occur if the set accessor for a property or indexer is inaccessible. If this occurs, increase the accessibility of the accessor.”

Let’s understand it in a simpler way.

This error may appear when you try to assign a value to a property or indexer, but its set accessor isn’t accessible. For example:

public string Name { get; private set; }

Here, the Name property can be read, but the set is private—so if you try to assign a value, it won’t be possible:

person.Name = "Leo"; // ❌ CS0272

And what would be the solution to avoid a CS0272 error? Just change the accessibility of the set so that it can be modified when needed, for example:

public string Name { get; set; } // ✅ Now it can be updated

Conclusion

And that’s it! Now you know what compiled bindings are, why they matter for performance and debugging, and how to use them in C#—including the enhancements introduced in .NET MAUI 9.

By applying compiled bindings, your apps become faster, more reliable and much easier to troubleshoot … before your users ever see an error. I hope this guide helps you improve the quality and performance of your applications starting today!

If you have any questions or want to see more related topics, feel free to leave them in the comments—I’d love to help you!

See you in the next article! ‍♀️

References

The explanation was based on the official documentation, and includes most of the code examples presented there.

Read the whole story
alvinashcraft
2 hours ago
reply
Pennsylvania, USA
Share this story
Delete

GitHub Copilot Testing for .NET Brings AI-powered Unit Tests to Visual Studio 2026

1 Share

GitHub Copilot testing for .NET makes it dramatically faster and more convenient to generate high-quality unit tests without leaving your code. Today, we’re excited to announce that this capability is now generally available in Visual Studio 2026 v18.3.

This release brings GitHub Copilot testing for .NET to everyone, with richer IDE integration, more natural prompting, and new entry points that make unit test generation feel effortless and intuitive.

Informed by real-world usage and feedback, this GA release focuses on removing friction and helping developers go from code to tested confidence with just a few interactions. If you have tried the feature in Visual Studio Insiders, this release builds directly on your feedback. If you are new to the experience, this is the best time to start.

Screenshot of test agent summary output in a Visual Studio window

Purpose-built AI for Unit Testing

GitHub Copilot testing for .NET is designed specifically for unit testing, with built-in awareness of your solution structure, test frameworks, and build system. It operates as an end-to-end testing workflow rather than a single-response prompt.

You can generate tests at the scope that makes sense for your task, whether that is a single member, a class, a file, an entire project, a full solution, or your current git diff. GitHub Copilot testing then uses that scope to determine what to generate, how to organize the tests, and how to execute them within Visual Studio. The agent also works with any model you have selected in Copilot Chat, allowing you to use your preferred model while taking advantage of the purpose-built testing workflow.

Screenshot of example prompt in GitHub Copilot chat window demonstrating use of @Test

When you start a testing request, GitHub Copilot testing:

  • Generates unit tests scoped to your selected code
  • Builds and runs those tests automatically
  • Detects failures and attempts to fix them
  • Reruns until you have a stable starting point

Throughout this process, the agent uses your project configuration and chosen test framework to ensure tests are organized, discovered, and executed correctly within Visual Studio.

Once test generation completes, Copilot provides a structured summary that helps you understand what changed:

  • Test files and projects created or modified
  • Before-and-after coverage information
  • Pass/fail signals and unstable cases
  • Insights into testability gaps
  • Direct links to the generated tests so that you can review and iterate immediately

This approach shortens the feedback loop between writing code and validating it, helping you move forward with confidence. All of this is driven by how you interact with the agent. The first step is simply telling GitHub Copilot what you want to test.

More natural, free-form prompting

GitHub Copilot testing for .NET now supports free-form prompting, making it easier to describe what you want to test in your own words. You still target the testing agent with @Test, but you are no longer constrained to a rigid command format.

You can write prompts using natural language, and the agent will interpret your intent while handling test generation, execution, and recovery behind the scenes.

With free-form prompting you can

  • Reference any code either precisely or vaguely
    • @Test generate unit tests for my core business logic”
    • @Test class Foo”
    • @Test the requests parsing logic”
  • Mention your current Git changes
    • @Test write unit tests for my current changes”
    • @Test #git_changes
  • Ask to fix specific or all failing tests
    • @Test fix my failing tests”
  • Ask for achieving specific target code coverage
    • @Test class Bar, targeting 80% code coverage”
  • Specify testing preferences and conventions
    • @Test generate tests using xUnit and FluentAssertions”

For scenarios where you want explicit scoping, the structured syntax is still supported:

  • @Test #<target>
    • Where <target> can be a member, class, file, project, solution, or git diff

This prompting flexibility makes it easier to express intent, whether you want to expand coverage, stabilize failing tests, or generate tests that align with your team’s existing conventions.

New entry points that meet you where you work

GitHub Copilot testing for .NET is designed to be easy to find and easy to use so that generating unit tests fits naturally into your existing development workflow.

You can always invoke the testing experience directly in Copilot Chat by starting a prompt with @Test. In addition, new entry points surface the same workflow from familiar places in the IDE, helping you discover and use the capability without changing how you work.

Right-click in the editor

  • Right-click → Copilot Actions → Generate Tests
  • The scope is inferred from where you right-click, following the same behavior as other Copilot Actions (for example, member, class, or file)
  • The command will launch the dedicated testing experience for C# projects, with the appropriate @Test context applied automatically

GIF showing flow from right-clicking in editor to Copilot Actions to the test agent prompt being populated in Copilot Chat

Copilot chat icebreakers

  • Selecting the prompt related to writing unit tests launches the testing agent when your editor focus is on C# code
  • The prompt is automatically populated with @Test, scoped to your active document by default

Screenshot of available icebreaker options as entry point to test agent

Getting Started

Pre-requisites: Visual Studio 18.3, C# code, and a paid GitHub Copilot license.

  1. Open your C# project or solution in Visual Studio 18.3 and make sure it builds without errors to simplify the process.
  2. Start a new Copilot Chat thread and start your prompt with @Test. You may use natural language or the structured syntax as demonstrated above to define your request. Screenshot of the proper way to prompt with @Test in the Copilot chat window
  3. In the Chat window, select Send. GitHub Copilot testing for .NET will then initiate an iterative process. Copilot will analyze your code and create a test project if one doesn’t exist. Then it generates tests, builds, and runs them, all automatically.

    Screenshot of the Copilot chat window as the test are being generated

  4. Test Explorer shows the results as the tests are generated. If Test Explorer is not automatically opened, you may do so by selecting Test → Test Explorer.
  5. After test generation is complete, GitHub Copilot testing will provide a summary in Copilot Chat.Screenshot of the post-generation summary in Copilot chat

 

For more usage information, check out our Getting Started docs.

What’s next?

The GA release of GitHub Copilot testing for .NET reflects what we have learned from developers using the experience in real projects. Feedback from early usage directly shaped improvements to prompting, discoverability, and how the testing workflow fits into everyday development.

General availability is an important milestone, but it is not the end of the journey. We continue to run user studies and gather feedback to understand how developers use GitHub Copilot testing for .NET in real-world scenarios, especially as requests grow in size and complexity.

One area we are actively exploring is a planning phase for more advanced testing requests. For larger scopes or more specific requirements, developers want greater control up front, including the ability to clarify intent, confirm assumptions, and review a proposed plan before tests are generated. We are investigating how this kind of experience could be integrated directly into Visual Studio to better support complex workflows and precise needs.

If you have had a chance to try the tool, we would love to hear your thoughts through this survey, including feedback on the experience today and where you would like to see this capability go next, including beyond Visual Studio.

Give feedback

As AI continues to influence how developers build and validate software, our focus remains on making these capabilities practical, predictable, and well-integrated into the tools you already use.

If you haven’t yet, try GitHub Copilot testing for .NET in Visual Studio 18.3 and let us know what you think! You can use the Give Feedback button in Visual Studio to let us know what is working well and where we can continue to improve. Your feedback is greatly appreciated and plays a direct role in shaping future product decisions.

The post GitHub Copilot Testing for .NET Brings AI-powered Unit Tests to Visual Studio 2026 appeared first on .NET Blog.

Read the whole story
alvinashcraft
2 hours ago
reply
Pennsylvania, USA
Share this story
Delete

Unlock language-specific rich symbol context using new find_symbol tool

1 Share

Refactoring at scale is a time-consuming and error-prone process for developers. In large codebases, developers have relied on manual searches and incremental edits across multiple files to accomplish these tasks.

Modern development workflows depend on fast and accurate code navigation to avoid these pitfalls. When developers refactor existing code, explore unfamiliar areas of a large codebase, or make targeted changes, they naturally rely on IDE language service features such as Find All References, Go to Definition, and Go to Implementation to understand how code is structured and connected.

Agent mode now has access to these same language-aware capabilities through the new find_symbol tool. This goes beyond traditional text or file search traditionally available in agent mode by enabling symbol-level reasoning powered by enterprise-grade language services.

Find symbol tool selected in VS Copilot Chat

What is find_symbol?

Find_symbol exposes rich, language-specific symbol information to Copilot Agent Mode, allowing the agent to reason about symbols (including functions, classes, interfaces, and variables).

Specifically, this tool allows Copilot agent mode to:

  • View all references of a given symbol across the entire codebase
  • Understand symbol metadata such as type, declaration, implementation, and scope

The find_symbol tool is available today in the latest Visual Studio 2026 Insiders version 18.4. Supported languages include: C++, C#, Razor, TypeScript, and any other language for which you have a supported Language Server Protocol (LSP) extension installed.

For best results, write clear prompts and use AI models that support tool-calling. Learn more at AI model comparison – GitHub Docs

Example scenarios

All examples below were showcased using bullet3, an open-source C++ physics simulation engine.

Adding additional functionality to existing code

As applications evolve, you often need to enhance existing functions without breaking current behavior. This can include adding logging or performance metrics.

These tools help the agent quickly identify all relevant references, ensuring complete and accurate updates for feature additions.

Find symbol references found 4 references to growtable

API Refactoring

Refactoring an API, such as hardening it, requires deep understanding of how the API is consumed across a codebase. With symbol-level insight, the agent can discover all usages, distinguish between call paths, and propose safe refactors with minimal breakage.

find symbol able to find 18 references to a hash table and reoslve accesses

Tell us what you think

We appreciate the time you’ve spent reporting issues/suggestions and hope you continue to give us feedback when using Visual Studio on what you like and what we can improve. You can share feedback with us via Developer Community: report any bugs or issues via report a problem and share your suggestions for new features or improvements to existing ones.

Stay connected with the Visual Studio team by following us on YouTube, X, LinkedInTwitch and on Microsoft Learn

 

The post Unlock language-specific rich symbol context using new find_symbol tool appeared first on Visual Studio Blog.

Read the whole story
alvinashcraft
2 hours ago
reply
Pennsylvania, USA
Share this story
Delete

How do I suppress the hover effects when I put a Win32 common controls ListView in single-click mode?

1 Share

A customer had a Win32 common controls ListView in single-click mode. This has a side effect of enabling hover effects: When the mouse hovers over an item, the cursor changes to a hand, and the item gets highlighted in the hot-track color. How can they suppress these hover effects while still having single-click activation?

When the user hovers over an item, the ListView sends a LVN_HOT­TRACK notification, and you can suppress all hot-tracking effects by returning 1.

    // WndProc
    case WM_NOTIFY:
    {
        auto nm = (NMLISTVIEW*)lParam;
        if (nm->hdr.code == LVN_HOTTRACK)
        {
            return 1;
        }
    }
    break;

If you are doing this from a dialog box, you need to set the DWLP_MSG­RESULT to the desired return value, which is 1 in this case, and then return TRUE to say “I handled the message; use the value I put into DWLP_MSG­RESULT.”

    // DlgProc
    case WM_NOTIFY:
    {
        auto nm = (NMLISTVIEW*)lParam;
        if (nm->hdr.code == LVN_HOTTRACK)
        {
            SetWindowLongPtr(hDlg, DWLP_MSGRESULT, 1);
            return TRUE;
        }
    }
    break;

The post How do I suppress the hover effects when I put a Win32 common controls ListView in single-click mode? appeared first on The Old New Thing.

Read the whole story
alvinashcraft
2 hours ago
reply
Pennsylvania, USA
Share this story
Delete
Next Page of Stories