Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

MCP Demystified: Tools vs Resources vs Prompts Explained Simply

1 Share

Introduction

When developers start working with Model Context Protocol (MCP), one of the most confusing parts is understanding the difference between MCP Tools, Resources, and Prompts. All three are important components in modern AI application development, but they serve completely different purposes.

In real-world AI systems like chatbots, AI agents, and copilots, using these components correctly can make your application scalable, clean, and easy to maintain. If used incorrectly, it can lead to confusion, bugs, and poor system design.

In this article, we will clearly explain the difference between MCP Tools, Resources, and Prompts in simple words, using real-world examples and practical explanations. This guide is helpful for both beginner and intermediate developers working with AI and MCP.

What Are MCP Tools?

MCP Tools are functions or services that an AI model can use to perform real-world actions. These actions usually involve doing something outside the AI system, such as calling an API, updating a database, or sending a message.

In simple terms, Tools represent what the AI can do.

Real-World Analogy

Think of MCP Tools like service workers in a company. For example, a delivery person delivers packages, a support agent updates tickets, and a payment system processes transactions. Similarly, MCP Tools perform specific tasks when requested by the AI.

Examples of MCP Tools

  • A tool that fetches user details from a database
  • A tool that sends emails or notifications
  • A tool that creates or updates support tickets
  • A tool that calls third-party APIs like payment gateways
  • A tool that triggers workflows in enterprise systems

Key Understanding

Tools are action-based. They execute operations and return results. Whenever your AI needs to "do something," you should use a Tool.
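To make this concrete, here is a minimal sketch in plain Python, deliberately not tied to any particular MCP SDK. The names `send_email` and `TOOLS` are invented for illustration; the point is only the shape of a Tool: a named, described function that performs an action and returns a result.

```python
# Hypothetical sketch of a Tool: a named, described function that performs
# an action and returns a result. A real MCP server would register this
# with an MCP SDK and call a real email API instead of simulating one.
def send_email(to: str, subject: str, body: str) -> dict:
    """Perform a real-world action (simulated here) and report the outcome."""
    return {"status": "sent", "to": to}

# A registry maps tool names to handlers plus a machine-readable description.
# The description is what lets the AI discover and decide to invoke the tool.
TOOLS = {
    "send_email": {
        "handler": send_email,
        "description": "Send an email or notification",
        "parameters": ["to", "subject", "body"],
    },
}

result = TOOLS["send_email"]["handler"]("user@example.com", "Hello", "Hi there!")
print(result["status"])  # sent
```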

What Are MCP Resources?

MCP Resources are data sources that the AI model can access to read information. These are typically read-only and provide context or knowledge to the AI.

In simple terms, Resources represent what the AI can read or see.

Real-World Analogy

Think of MCP Resources like books in a library or documents in a company. You can read and learn from them, but you cannot directly change their content.

Examples of MCP Resources

  • A database table containing customer information
  • A knowledge base with FAQs and documentation
  • System logs that track user activity
  • Configuration files or static datasets
  • Company policy documents or guidelines

Key Understanding

Resources are data-based. They provide information but do not perform any action. Whenever your AI needs information to make a decision, you should use a Resource.
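Continuing the same plain-Python sketch (names like `FAQ_RESOURCES` are invented, not from any real SDK), a Resource is just read-only data exposed under a URI-like identifier:

```python
# Hypothetical sketch of a Resource: read-only data exposed under a
# URI-like identifier. The AI can read it but never modifies it.
FAQ_RESOURCES = {
    "faq://returns": "Items can be returned within 30 days of purchase.",
    "faq://shipping": "Standard shipping takes 3-5 business days.",
}

def read_resource(uri: str) -> str:
    """Return the contents of a resource; no side effects, strictly read-only."""
    return FAQ_RESOURCES[uri]

print(read_resource("faq://returns"))
```

Note there is no `write_resource` counterpart: if you find yourself wanting one, what you actually want is a Tool.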

What Are MCP Prompts?

MCP Prompts are structured instructions or templates that guide how the AI model should think, behave, and respond.

In simple terms, Prompts represent how you instruct the AI.

Real-World Analogy

Think of Prompts like instructions given to an employee. For example, “Write a professional email,” “Summarize this report,” or “Answer politely to the customer.” These instructions shape how the output is generated.

Examples of MCP Prompts

  • A prompt to summarize customer feedback
  • A prompt to generate a support response in a polite tone
  • A prompt to analyze data and provide insights
  • A prompt to translate text into another language
  • A prompt to generate code based on requirements

Key Understanding

Prompts are instruction-based. They define how the AI should process input and generate output.
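In the same illustrative sketch (the function name and wording are invented), a Prompt is a reusable instruction template: it performs no action and fetches no data, it only shapes how the model responds.

```python
# Hypothetical sketch of a Prompt: a reusable instruction template.
# It does not perform actions or fetch data; it only renders structured
# instructions that shape the model's output.
def support_reply_prompt(customer_name: str, issue: str) -> str:
    """Render structured instructions for the model."""
    return (
        f"You are a polite support agent helping {customer_name}.\n"
        f"Their issue: {issue}\n"
        "Write a helpful, professional reply in under 100 words."
    )

print(support_reply_prompt("Alice", "My order arrived damaged."))
```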

Key Differences Between MCP Tools, Resources, and Prompts

Understanding the difference between MCP Tools, Resources, and Prompts is important for building scalable AI systems.

Tools vs Resources vs Prompts

  • Tools are used for performing actions
  • Resources are used for reading data
  • Prompts are used for guiding AI behavior

Detailed Comparison

  • Tools interact with external systems and can change data or trigger operations
  • Resources only provide data and do not modify anything
  • Prompts control how the AI thinks, responds, and formats its output

Comparison Table

Aspect  | MCP Tools          | MCP Resources | MCP Prompts
Purpose | Perform actions    | Provide data  | Guide behavior
Nature  | Active             | Passive       | Instructional
Usage   | API calls, updates | Data reading  | AI response generation
Output  | Action result      | Data          | Generated content

How MCP Tools, Resources, and Prompts Work Together

In real-world AI systems, these three components are used together to create powerful workflows.

Step-by-Step Flow

  1. The user sends a request to the AI system
  2. The Prompt defines how the AI should understand and respond
  3. The AI fetches required information from Resources
  4. If an action is required, the AI uses a Tool
  5. The AI combines everything and generates a final response
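The steps above can be sketched end to end in plain Python. Every name here is invented for illustration; a real system would hand the final prompt to an LLM rather than returning it.

```python
# Hypothetical, self-contained sketch of the five-step flow.
KNOWLEDGE = {"kb://faq": "Refunds are processed within 5 business days."}  # Resource
ACTIONS = []  # records what the Tool did, so the effect is observable

def refund_tool(order_id: str) -> str:            # Tool: performs an action
    ACTIONS.append(order_id)
    return f"Refund started for {order_id}"

def grounded_prompt(question: str, context: str) -> str:  # Prompt: shapes output
    return (
        f"Answer politely using ONLY this context:\n{context}\n"
        f"Question: {question}"
    )

def handle(question: str, order_id: str = "") -> str:
    context = KNOWLEDGE["kb://faq"]               # step 3: read from a Resource
    prompt = grounded_prompt(question, context)   # step 2: apply the Prompt
    if order_id:                                  # step 4: invoke a Tool if needed
        refund_tool(order_id)
    return prompt                                 # step 5: an LLM generates from this

print(handle("Where is my refund?", order_id="A-123"))
print(ACTIONS)  # ['A-123']
```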

Practical Example

Consider an AI customer support system:

  • The Prompt ensures the response is polite and helpful
  • The Resource provides customer history and previous tickets
  • The Tool updates the ticket status or sends an email notification

This combination helps build intelligent, real-world AI applications.

Advantages of Understanding MCP Concepts

  • Helps developers design clean and scalable AI architecture
  • Improves clarity in system design and reduces confusion
  • Enhances performance by separating responsibilities
  • Makes debugging and maintenance easier
  • Supports faster development of AI-powered applications

Common Mistakes Developers Make

  • Using Tools when only data retrieval is needed
  • Treating Resources as editable systems
  • Writing vague or unclear Prompts
  • Mixing responsibilities between Tools, Resources, and Prompts
  • Not structuring MCP components properly in applications

Best Practices for Using MCP Tools, Resources, and Prompts

  • Clearly define the role of each component before implementation
  • Use Tools only for actions that change system state or trigger operations
  • Use Resources strictly for reading and retrieving data
  • Write clear, specific, and well-structured Prompts
  • Test Tools, Resources, and Prompts independently before integration
  • Keep your architecture modular and easy to scale

Summary

Understanding the difference between MCP Tools, Resources, and Prompts is essential for modern AI application development using Model Context Protocol. Tools allow AI systems to perform actions, Resources provide the necessary data, and Prompts guide how the AI behaves and generates responses. When these components are used correctly, developers can build scalable, efficient, and intelligent AI systems. Mastering these MCP concepts will help you design better architectures and create powerful AI-driven applications in today’s evolving technology landscape.

Read the whole story
alvinashcraft
30 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

IoT Coffee Talk: Episode 309 - "GenAI's NFT Moment" (Have we reached peak AI silliness?)

From: IoT Coffee Talk
Duration: 1:07:52
Views: 4

Welcome to IoT Coffee Talk, where hype comes to die a terrible death. We have a fireside chat about all things #IoT over a cup of coffee or two with some of the industry's leading business minds, thought leaders and technologists in a totally unscripted, organic format.

This week Pete, Bill, Alistair, Rob, and Leonard jump on Web3 for a discussion about:

🎶 🎙️ BAD KARAOKE! 🎸 🥁 "Rain", The Cult
🐣 How did any of us survive the mosh pits of our youth?
🐣 How to become an MVP at Microsoft.
🐣 How has the developer ecosystem game changed since the days of Microsoft Win32.
🐣 Why leading journalists are two-years behind in their tech reporting.
🐣 The telco economy versus the AI economy, which is bigger?
🐣 The secret value of AI - enabling our ability to deal with exponential complexity.
🐣 The great AI fallacy - it will replace people. Quite the opposite. Why?
🐣 Is GenAI having its NFT moment? Ask yourself, did you fall for it the first time?
🐣 Why tech bubbles are not OK.
🐣 The quality and verification deficit and the exponential risk curve.
🐣 Why do we know better but we never seem to do it?
🐣 Folding a shirt - the physical AI inflection point.
🐣 Why physical is so much harder than you are being told.
🐣 Do we care about Rob's sustainability tech and visionary IoT solutions?

It's a great episode. Grab an extraordinarily expensive latte at your local coffee shop and check out the whole thing. You will get all you need to survive another week in the world of IoT and greater tech!

Tune in! Like! Share! Comment and share your thoughts on IoT Coffee Talk, the greatest weekly assembly of Onalytica and CBT tech and IoT influencers on the planet!!

If you are interested in sponsoring an episode, please contact Stephanie Atkinson at Elevate Communities. Just make a minimally required donation to www.elevatecommunities.org and you can jump on and hang with the gang and amplify your brand on one of the top IoT/Tech podcasts in the known metaverse!!!

Take IoT Coffee Talk on the road with you on your favorite podcast platform. Go to IoT Coffee Talk on Buzzsprout, like, subscribe, and share: https://lnkd.in/gyuhNZ62


RAG in .NET: What the Tutorials Don’t Tell You. Chunking, Embedding, and Production Gotchas


Most RAG tutorials show you the happy path. A clean project, a handful of sample documents, a query that works first time.

That’s fine for getting oriented but it’s not what building these systems in production actually looks like.

This post is the one I wish had existed when I started working on earlier RAG solutions using Semantic Kernel.

The chunking decisions that degrade silently, the retrieval quality that looks fine in demos and falls apart on real queries, and observability gaps that make debugging feel like guessing.

All in .NET.

This blog is by no means exhaustive and I continue to find optimisations, but it’s a good starting point.

Let’s dig in.

~

What RAG Is

RAG (Retrieval-Augmented Generation) is a pattern, not a framework. Before asking an LLM to answer a question, you first retrieve relevant content from your own data store and include it in the prompt.

The model answers from that grounded context rather than from training data alone.

A RAG pipeline has five stages:

  1. Ingestion — load source documents (HTML, Markdown, PDFs, plain text)
  2. Chunking — split documents into segments small enough to embed meaningfully
  3. Embedding — convert each chunk into a vector using an embedding model
  4. Storage — persist vectors to a vector store (SQLite, Elasticsearch, Azure AI Search, etc.)
  5. Retrieval + Generation — embed the incoming query, find the closest chunks, build a grounded prompt, generate an answer

 

Simple on paper. The devil is in the implementation choices at each stage.

~

Chunking: The Step Most Tutorials Rush

Chunking quality has a disproportionate effect on retrieval quality. Chunks too large: vector similarity becomes diluted. Too small: you lose the context that makes a chunk meaningful.

One approach is to use TextChunker.SplitMarkdownParagraphs() from Semantic Kernel. It respects document structure, so paragraphs, headings, and list items don’t get bisected mid-sentence.

var chunks = TextChunker.SplitMarkdownParagraphs(
    lines: markdownContent.Split('\n').ToList(),
    maxTokensPerParagraph: 512,
    overlapTokens: 50
);

 

The overlapTokens parameter matters. A small overlap (10%-15%) between adjacent chunks ensures that a relevant sentence near a chunk boundary doesn’t disappear from retrieval. Skipping this is a common mistake.

I implemented my own custom chunking service on one project.

Gotcha: HTML content

Convert HTML to Markdown before chunking. Raw HTML bloats chunks with noise (tags, attributes, and inline styles) that degrades embedding quality. Use HtmlAgilityPack to strip the markup first.

Gotcha: Mixed content types

A chunk that mixes a code sample with surrounding prose often embeds poorly because the two content types pull the vector in different directions. Chunk code blocks separately and tag them with metadata for filtering at retrieval time. This was an important learning for me.

~

The Relevance Threshold Is Not a Magic Number

Semantic Kernel’s SearchAsync takes a minRelevanceScore parameter. Tutorial defaults (0.75–0.80) are not universally correct.  The right threshold depends on your corpus and embedding model.

var results = await memory.SearchAsync(
    collection: CollectionName,
    query: userQuery,
    limit: 5,
    minRelevanceScore: 0.70
);

Start at 0.70 (or wherever you’re comfortable), run representative queries, and look at what gets returned.

Build a manual eval set of 20–30 query/expected-answer pairs and iterate. There is no substitute for looking at actual retrieval results on your specific data.

~

Choosing a Vector Store

Match the tool to the stage:

  • VolatileMemoryStore — Demos only. Vectors live in RAM, gone on restart.
  • SqliteMemoryStore — Local development and early production. File-based, zero infrastructure overhead.
  • Elasticsearch — Already in your stack? Use it. Good for hybrid search.
  • Azure AI Search — Production on Azure. Managed, scalable.
  • Qdrant / Pinecone — Dedicated vector workloads at scale.

 

SQLite is underrated for early production. It’s a one-line swap from VolatileMemoryStore and handles modest query volumes without infrastructure cost. Migrate later when you actually need to.

~

The One-Time Embedding Check

Once you’re using a persistent store, add a collection existence check before the ingestion loop. Without it, every restart re-embeds the entire corpus — API calls and cost you don’t need.

var collections = await sqliteStore.GetCollectionsAsync().ToListAsync();
if (!collections.Contains(CollectionName))
{
    await ragService.IngestDocumentsAsync(documents, CollectionName);
}
else
{
    Console.WriteLine("Vectors already stored - skipping ingestion.");
}

 

Small investment. Saves meaningful API cost at scale.

~

Prompt Construction: Ground It Properly

The difference between a useful RAG system and a hallucinating one often comes down to prompt construction.

A simple prompt you can use:

var sb = new StringBuilder();
sb.AppendLine("Answer the question using ONLY the context below.");
sb.AppendLine("If the answer is not in the context, say so explicitly.");
sb.AppendLine();
sb.AppendLine("CONTEXT:");
foreach (var chunk in retrievedChunks)
{
    sb.AppendLine($"[Source: {chunk.Metadata.Id}]");
    sb.AppendLine(chunk.Metadata.Text);
    sb.AppendLine();
}
sb.AppendLine($"QUESTION: {userQuery}");

The key phrases are “ONLY the context below” and “say so explicitly”. Without explicit grounding instructions, models blend retrieved content with training knowledge, which looks helpful but introduces unfaithful answers.

This isn’t optional.

~

Semantic Caching: The Easy Win Most People Skip

For user-facing or high-volume pipelines, add semantic caching early. Before hitting the vector store and LLM, check whether an incoming query is semantically similar to a recent query already answered.

If the similarity score is above threshold return the cached answer directly.

var cachedAnswer = await cacheService.FindSimilarAsync(query, threshold: 0.92f);
if (cachedAnswer != null)
{
    return cachedAnswer.Answer; // No vector search, no LLM call
}

 

At scale this eliminates a large proportion of pipeline calls and cuts latency dramatically for common query patterns.  Add this early.  Retrofitting it later is more work than it needs to be.

~

Observability: Knowing What’s Actually Happening

A RAG pipeline has multiple failure modes and they all look the same from the outside: a bad answer. Without instrumentation you can’t tell whether the problem is in chunking, retrieval, prompt construction, or the model itself.

Consider capturing data using a logging record similar to:

public record RagQueryTrace
{
    public string Query { get; init; }
    public int ChunksRetrieved { get; init; }
    public float TopChunkScore { get; init; }
    public float LowestChunkScore { get; init; }
    public string[] SourceIds { get; init; }
    public string GeneratedAnswer { get; init; }
    public double LatencyMs { get; init; }
    public bool CacheHit { get; init; }
}

Signals to watch:

  • TopChunkScore consistently below 0.75: retrieval is struggling.
  • ChunksRetrieved always hitting your limit: try widening search and re-ranking.
  • CacheHit always false with high latency: cache threshold may be too tight.

 

Wire up end-to-end tracing with ILogger:

public async Task<string> QueryAsync(string query, string collection)
{
    var sw = Stopwatch.StartNew();
    _logger.LogInformation("RAG query started. Query={Query}", query);

    var chunks = await RetrieveChunksAsync(query, collection);
    _logger.LogInformation("Retrieval complete. Chunks={Count}, TopScore={Score:F3}",
        chunks.Count, chunks.FirstOrDefault()?.Relevance ?? 0);

    var answer = await GenerateAnswerAsync(query, chunks);
    _logger.LogInformation("Generation complete. LatencyMs={Ms}", sw.ElapsedMilliseconds);

    return answer;
}

 

Diagnosing bad answers:

  • Right chunks not retrieved? – Retrieval problem (threshold, chunking, embedding model)
  • Chunks retrieved but answer wrong? – Tighten grounding instructions in the prompt
  • Chunks and prompt correct but hallucinated? – Add explicit “do not speculate” to system prompt

 

Work backwards through the trace when you experience any of the above.

~

Evaluating RAG Quality and Why CI Matters

Most RAG prototypes get evaluated informally. This works until the corpus changes, a threshold gets tweaked, or the embedding model is swapped. Quality silently regresses with no way to detect it.

Build question/answer pairs covering easy queries, hard queries spanning multiple documents, and edge cases where the answer isn’t in the corpus and the system should say “I don’t know”. Three metrics worth tracking:

  • Context Recall: were the right chunks retrieved?
  • Faithfulness: does the answer stick to the retrieved context?
  • Answer Correctness: does the answer match the expected answer?

 

Wire evals into your CI process.  For example:

[Fact]
public async Task RagEval_ContextRecall_AboveThreshold()
{
    var results = await RunEvalSetAsync(_evalQueries);
    var avgRecall = results.Average(r => r.ContextRecall);
    Assert.True(avgRecall >= 0.80,
        $"Context recall {avgRecall:P0} is below the 80% threshold");
}

The edge-case evals are the most important, i.e. queries where the answer genuinely isn’t in the corpus.

These test whether the system correctly says “I don’t know” rather than hallucinating.

Hallucination on out-of-scope queries is the thing that erodes user trust fastest and it’s the thing informal testing almost never catches.

~

What to Watch Out For

Some other things to watch out for:

  • Skipping overlap tokens in chunking — sentences near chunk boundaries silently drop out of retrieval. Always set overlapTokens.
  • Using tutorial threshold values verbatim — 0.75 or 0.80 is a starting point, not a universal answer. Tune against your actual corpus.
  • Re-embedding on every restart — add the collection existence check. It’s five lines and saves real API cost.
  • Weak grounding instructions — “use the context below” is not the same as “ONLY the context below”. The difference shows up in production.
  • No out-of-scope eval set — hallucination on questions the system can’t answer is the fastest way to lose user trust. Test for it explicitly.

 

One more reminder: make sure you chunk prose and code examples differently. This really caught me out.

~

Tools Used

Key tools involved in the above included:

  • Semantic Kernel (chunking, embedding, retrieval)
  • TextChunker.SplitMarkdownParagraphs (structure-aware chunking)
  • HtmlAgilityPack (HTML-to-Markdown conversion)
  • SqliteMemoryStore / VolatileMemoryStore / Azure AI Search (vector stores)
  • ILogger / Stopwatch (observability and tracing)
  • xUnit (eval set CI integration)
  • .NET 8

 

The happy path is easy to build. A RAG system that stays reliable as the corpus grows, thresholds get tuned, and real users ask unexpected questions is a different problem.

The patterns above are the ones that made the difference in practice.

~


Many anti-AI arguments are conservative arguments


Most anti-AI rhetoric is left-wing coded. Popular criticisms of AI describe it as a tool of techno-fascism, or appeal to predominantly left-wing concerns like carbon emissions, democracy, or police brutality. Anti-AI sentiment is surprisingly bipartisan, but the big anti-AI institutions are labor unions and the progressive wing of the Democrats.

This has always seemed weird to me, because the contents of most anti-AI arguments are actually right-wing coded. They’re not necessarily intrinsically right-wing, but they’re the kind of arguments that historically have been made by conservatives, not liberals or leftists. Here are some examples:

  • Many AI critics complain that AI steals copyrighted content, but prior to 2023, leftists have been largely anti-intellectual-property on principle (either because they’re anti-property, or because they characterize copyright as benefiting huge media corporations and patent trolls).
  • A popular anti-AI-art sentiment is that it’s corrosive to the human spirit to consume AI slop: in other words, art just inherently ought to be generated by humans, and using AI thus damages some part of our intangible human soul. Whether you like this argument or not, it’s structurally similar to a whole slate of classic arguments-from-intuition for conservative positions like anti-abortion or anti-homosexuality.
  • Weird new technological art has traditionally been championed by the left-wing and dismissed by the right-wing (as inhuman, cheap, or degenerate). But when it comes to AI art, it’s the left-wing making these arguments, and others (not necessarily right-wingers) arguing that AI art can also be a medium of human artistic expression.
  • One main worry about AI is that it’s going to take over a lot of jobs. This is a compelling argument! But the left-wing has recently been famously unsympathetic to this same argument around fossil-fuel energy jobs like coal mining, to the point where Biden infamously advised a group of miners in New Hampshire to learn to code1. Halting technological progress to preserve jobs is quite literally a “conservative” position.

On top of all that2, frontier AI models themselves are quite left-wing. Notwithstanding some real cases of data bias (most infamously Google’s image model miscategorizing dark-skinned humans as “gorillas”), the models reliably espouse left-wing positions. Even Elon Musk’s deliberate attempt to create a right-wing AI in Grok has had mixed success. In 2006, Stephen Colbert coined the phrase “reality has a well-known liberal bias”. If the left-wing were more sympathetic to AI, I think they would be using this as a pro-left argument3.

So what happened? A year ago I wrote Is using AI wrong? A review of six popular anti-AI arguments. In that post I blame the hard right-wing turn many big tech CEOs made in 2024. That was around the same time that LLMs were emerging in the public consciousness with ChatGPT, so it made sense that AI got tagged as right-wing: after all, the billionaires on TV and Twitter talking about how AI was going to change the world were all the same people who’d just gone all-in on Donald Trump. I still think this is a pretty good explanation - just unfortunate timing - but there are definitely other factors at play.

One obvious factor is the hangover from the pro-crypto mania of 2021 and 2022, where many of the same tech-obsessed folks also posted ugly art and talked about how their technology would change the world forever. Few of these predictions came true (though cryptocurrency has indeed changed the world forever), and it’s understandable that many people viewed AI as a natural continuation of this movement.

On top of that, Donald Trump himself has come out strongly pro-AI, both in terms of policy and in terms of actually posting AI art himself. This naturally creates a backlash where anti-Trump people are primed to be even more anti-AI4. Here are some more reasons:

  • AI has real environmental impact (though this is often wildly overstated, as I say here), and the right-wing is politically committed to downplaying or denying anthropogenic environmental impacts in general.
  • When times are tough, it’s easy to blame the hot new thing that everyone is talking about. Because the right-wing is currently ascendant in the US, left-wingers are more inclined to talk about how tough times are.
  • The left-wing is over-represented in the kind of “computer jobs” that are under direct threat from AI.
  • Being pro-Europe has always been left-wing coded, and Europe has been noticeably slower and more sceptical about AI than the USA.

Let me finally put my cards on the table. I would describe myself as on the left wing, and I’m broadly agnostic about the impact of AI. Like the boring fence-sitter I am, I think it will have a mix of positive and negative effects. In general, I’m unconvinced by the pro-copyright and human-soul-related anti-AI arguments, or by the idea that AI is inherently right-wing, but I’m troubled by the environmental impact and the impact on jobs (which in my view are more classically left-wing positions).

Still, I’m curious what will happen when the left-wing flavor of anti-AI rhetoric disappears, which I think it will (as I said at the start, anti-AI sentiment is actually pretty bipartisan). When people start making explicitly right-wing anti-AI arguments, will that cause the left-wing to move a little bit towards supporting AI? Or will right-wing institutions continue to explicitly support AI, allowing anti-AI sentiment to become a wedge issue that the left-wing can exploit to pry away voters? In any case, I don’t think the current state of affairs is particularly stable. In many ways, the dominant anti-AI arguments would fit better in a conservative worldview than in the worldview of their liberal proponents.


  1. I don’t think any did, which is probably for the best - they would have only had a couple of years to break into the industry before hiring collapsed in 2023.

  2. Another point that isn’t quite mainstream enough but that I still want to mention: AI critics often argue that cavalier deployment of AI means that people might take dangerous medical advice instead of simply trusting their doctor. But anyone who’s been close to a person with chronic illness knows that “just trust your doctor” is kind of right-wing-coded itself, and that the left-wing position is very sympathetic to patients who don’t or can’t. In a parallel universe, I can imagine the left-wing arguing that patients need AI to avoid the mistakes of their doctors, not the other way around.

  3. Is it a good argument? I don’t know, actually. The easy counter is that the LLMs are just mirroring the biases in their training data. But you could argue in response that superintelligence is also latent in the training data, and that hill-climbing towards superintelligence also picks up the associated political positions (which just so happen to be left-wing).

  4. I am no fan of Donald Trump, but it doesn’t follow that everything he supports is bad (e.g. the First Step Act).


Forgotten message from the past: LB_INITSTORAGE

1 Share

The classic Win32 list box control lets you preallocate memory in anticipation of adding a large number of items. The documentation recommends doing this for cases where you are adding more than 100 items to a list box.¹ What is being preallocated here?

The list box internally tracks the items as an array of structures and a separate memory pool for strings. What LB_INITSTORAGE does is tell the list box control to preallocate memory for the two blocks: Make the first memory block big enough that you can add at least wParam items to the list box, and grow the second memory block so that it can hold an additional lParam bytes of string data.

The number of bytes required to hold a string includes a null terminator, so a 12-character Unicode string requires (12 + 1) × 2 = 26 bytes. Therefore, if you intend to have a total of 100 Unicode strings, each an average of 10 characters long, you would ask to expand the string memory pool by 100 × (10 + 1) × 2 = 2200 bytes. The call to LB_INITSTORAGE would be

SendMessage(hwndLB, LB_INITSTORAGE, 100, 100 * (10 + 1) * sizeof(TCHAR));

Preallocating the memory avoids quadratic memory allocation behavior when the buffers have to be grown each time a new item is added.²

¹ Personally, I think that 100 items is too many for a list box, from a usability standpoint. If you have that many items, I think an auto-suggest box is a better choice, so that people can just type a partial string to narrow the search rather than being forced to scroll through multiple pages to get to the item they want.

² Note however that quadratic behavior is not avoided completely. Internally, that array of structures is really a series of parallel arrays packed together, so adding an item requires that all the items in the second and subsequent parallel arrays be moved to make room for the new item in the first array. Another reason not to have a large number of items in your list box.

The post Forgotten message from the past: LB_INITSTORAGE appeared first on The Old New Thing.


Meta Pivots From Open Weights, Big Pharma Bets On AI, Regulatory Patchwork, Simulating Human Cohorts

The Batch AI News and Insights: AI-native software engineering teams operate very differently than traditional teams.