Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

How to Make Your GitHub Profile Stand Out

If you have a GitHub profile, you might overlook the many ways you can customize it – and that's completely understandable. After all, at its core, GitHub is a home for your code.

But beyond repositories and commits, your profile can say a lot about you as a developer.

When used intentionally, GitHub becomes more than a code hosting platform. It becomes your CV for your codebase. It tells your story, showcases your skills, and gives people a reason to trust your work.

In this article, we'll break down the different ways to make your GitHub profile stand out. From setting up your GitHub account to engaging storytelling for your repositories, there's lots you can do.

Let's get started!

Step 1: Sign Up for a GitHub Account

To begin, you'll need a GitHub account. If you don’t have one, you can set one up here.

Once you have your account set up and you're logged in, we can move on to the next step.

Step 2: Add a Profile Image

Your profile image is often the first thing people notice. It could be a professional photo of yourself, or an image or avatar that represents you or your interests.

As long as it’s appropriate, you’re good to go.

To add a profile image, you'll need to:

  • Open your profile menu/dashboard

  • Click on the image icon at the left

  • Click "Edit" on the image icon

  • Select the image to set as your profile picture

  • Click the "Set new profile picture" button

So, you should have something like this:

Image showing the new Profile image added

GitHub link to this page: https://github.com/settings/profile

And there you have it, your GitHub profile image is set.

On to the next one…

Step 3: Add Profile Details

This step is all about credibility and discoverability.

At the center of your profile settings you'll see fields like email, location, social media links, and so on. Add those details to take advantage of the discoverability they lend to your profile.

Image showing public profile settings tab

GitHub link to this page: https://github.com/settings/profile

For this step, you'll want to add as much detail as possible (apart from your home address – I think we both know why).

For the location, you can just put in your city or country so others have a general idea of where you are in the world.

Step 4: Add a Profile README File

This is where you introduce yourself properly and tell your story.

A Profile README is a special repository named exactly the same as your GitHub username. Your README file appears directly on your profile page.

The README should answer the following questions:

  • Who are you?

  • What are your project highlights?

  • What are you currently working on or learning?

  • Your hobbies or interests (optional)

While answering these questions, you should aim to keep it minimal and yet interesting. You don't want to overwhelm the visitor.
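As a rough sketch (every name and link below is a placeholder, not a real account), a README answering those questions might look like:

```markdown
## Hi, I'm Jane 👋

Frontend developer focused on accessible web apps.

**Project highlights**
- [Artsy](https://github.com/your-username/Artsy) – a gallery explorer built with React

**Currently** learning TypeScript and contributing to open source.

**Outside of code:** hiking and board games.
```

A handful of short sections like this answers all four questions without overwhelming the visitor.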

Here's how to create your README:

  • Click New repository

  • Name the repository exactly the same as your GitHub username

  • Check “Add a README file”

  • Make sure the repository is public

  • Click Create repository

Profile README file setup:

Image showing profile README file being created

GitHub link to this page: https://github.com/new

So if you answered the questions listed above, your README file should look something like this:

Image showing Profile README section already created

GitHub link to this page: https://github.com/chinazachisom/chinazachisom

It should also be showing directly on your GitHub profile like below:

Github Profile Showing the Newly Added README file

GitHub link to this page: https://github.com/chinazachisom

Step 5: Tell a Story About Each Repository

Now, this is where you can tell a story about each of your repositories using a README file.

NB: Each repository should have its own separate README file.

What to include in a repository README:

  • Project title

  • What the project is

  • The purpose (the “why”)

  • Key features

  • Challenges you faced and how you solved them

  • Setup or usage instructions (or a live link if hosted)

  • Technical concepts used (e.g., throttling, caching, lazy loading) (optional)

  • Images or video demos

You may also include badges, charts, contribution graphs or other visual enhancements that help highlight project quality, activity and impact.

With the above structure, you can tell the stories behind your projects, show your problem-solving skills, and make your work easier to understand and evaluate.

Repository README File Sample:

Image showing the README file for the new repository

GitHub link to this page: https://github.com/chinazachisom/Artsy

Conclusion

Your GitHub profile is more than just a storage space for your codebase. It's your developer identity as well.

Following these basic steps can help turn your GitHub into a portfolio infused with your personal brand. It makes your GitHub profile stand out, which can help open doors to more opportunities.

Treat it like a CV for your code and let your work speak for you.

About the Author

Hi there! I'm Chinaza Chukwunweike, a Software Engineer passionate about building robust, scalable systems that make a real world impact. I'm also an advocate for continuous learning and improvement.

If you found this useful, please share it! And follow me for more Software Engineering tips, AI learning strategies, and productivity frameworks.




SSIS Extension Updates – Apr 2026

Back in June 2024, I announced I was changing the way I report updates to the SQL Server Integration Services extension for Visual Studio (in a post titled SSIS Extension Updates – Jun 2024). I have a calendar reminder to check the links every quarter, and it did its job admirably earlier this month, but I was heads down building updates to SSIS Catalog Compare and SSIS Framework for the Data Integration Lifecycle Management Suite.

Speaking of DILM Suite, I was building a fresh set of demo projects in Visual Studio 2026 Insiders Community Edition, preparing to do battle with the deployment process, when I noticed an update to the SSIS 2022+ extension.

There Are Two Integration Services Extensions

In 2024, the Microsoft SSIS team forked the Integration Services extension into pre-2022 and 2022+ versions.
Since that time, there haven’t been many updates to the pre-2022 version.

Given there are no new updates to the pre-2022 extension at this time, I decided not to report on that version.
If the Microsoft SSIS Team updates the pre-2022 extension in the future, I’ll include a section describing the update. But for now, I’ll simply include the link.

Extension Page Links

The links to each extension are:

SSIS 2022+: https://marketplace.visualstudio.com/items?itemName=SSIS.MicrosoftDataToolsIntegrationServices

SSIS Pre-2022: https://marketplace.visualstudio.com/items?itemName=SSIS.SqlServerIntegrationServicesProjects&ssr=false#overview

SSIS 2022+ Update

A new update is available for the SQL Server Integration Services Projects 2022+ extension:

The latest update is version 2.2, released 1 Apr 2026.

Bug fixes:

  • Fixed issues in Import Project Wizard and SSIS in Azure Connection Wizard for VS 2026 18.4.
  • Fixed an installer issue where an existing OLE DB driver could be removed during SSIS installation.
  • Fixed a dark mode issue where SSIS design surfaces (such as Control Flow and Data Flow) appeared with white backgrounds.
  • Upgraded the SSIS VSTA dependency version to help prevent setup failures in environments with stricter security settings.
  • Improved ODBC buffer management by adding a cache for realignment.

Known issues:

  • Upgrading from earlier versions currently depends on an upcoming Visual Studio Installer fix. Until then, to design and execute Analysis Task and related connections, install Microsoft Analysis Services Projects 2022+ as a workaround.
  • In Visual Studio's context menu (right-click) on objects in the project (e.g., the solution, a package), many of the entries appear multiple times. This happens only when Microsoft Analysis Services Projects 2022+ is also installed.

Conclusion

Lots of enterprises continue to use SSIS – especially for on-premises data engineering. In a recent conversation with Enterprise Data & Analytics data engineers, we surmised that SSIS will likely remain available for as long as SQL Server is supported on-premises.

It’s a guess, yes; but an educated and somewhat informed guess.

Ivan Peev and I had a recent conversation about the state and future of SSIS. You can view a video of the livestream here. Ivan is founder of COZYROC, a third-party SSIS controls vendor, and creator of the SSIS+ Component Suite. COZYROC is a member of the DILM Integration Circle.


MCP Demystified: Tools vs Resources vs Prompts Explained Simply

Introduction

When developers start working with Model Context Protocol (MCP), one of the most confusing parts is understanding the difference between MCP Tools, Resources, and Prompts. All three are important components in modern AI application development, but they serve completely different purposes.

In real-world AI systems like chatbots, AI agents, and copilots, using these components correctly can make your application scalable, clean, and easy to maintain. If used incorrectly, it can lead to confusion, bugs, and poor system design.

In this article, we will clearly explain the difference between MCP Tools, Resources, and Prompts in simple words, using real-world examples and practical explanations. This guide is helpful for both beginner and intermediate developers working with AI and MCP.

What Are MCP Tools?

MCP Tools are functions or services that an AI model can use to perform real-world actions. These actions usually involve doing something outside the AI system, such as calling an API, updating a database, or sending a message.

In simple terms, Tools represent what the AI can do.

Real-World Analogy

Think of MCP Tools like service workers in a company. For example, a delivery person delivers packages, a support agent updates tickets, and a payment system processes transactions. Similarly, MCP Tools perform specific tasks when requested by the AI.

Examples of MCP Tools

  • A tool that fetches user details from a database
  • A tool that sends emails or notifications
  • A tool that creates or updates support tickets
  • A tool that calls third-party APIs like payment gateways
  • A tool that triggers workflows in enterprise systems

Key Understanding

Tools are action-based. They execute operations and return results. Whenever your AI needs to "do something," you should use a Tool.

What Are MCP Resources?

MCP Resources are data sources that the AI model can access to read information. These are typically read-only and provide context or knowledge to the AI.

In simple terms, Resources represent what the AI can read or see.

Real-World Analogy

Think of MCP Resources like books in a library or documents in a company. You can read and learn from them, but you cannot directly change their content.

Examples of MCP Resources

  • A database table containing customer information
  • A knowledge base with FAQs and documentation
  • System logs that track user activity
  • Configuration files or static datasets
  • Company policy documents or guidelines

Key Understanding

Resources are data-based. They provide information but do not perform any action. Whenever your AI needs information to make a decision, you should use a Resource.

What Are MCP Prompts?

MCP Prompts are structured instructions or templates that guide how the AI model should think, behave, and respond.

In simple terms, Prompts represent how you instruct the AI.

Real-World Analogy

Think of Prompts like instructions given to an employee. For example, “Write a professional email,” “Summarize this report,” or “Answer politely to the customer.” These instructions shape how the output is generated.

Examples of MCP Prompts

  • A prompt to summarize customer feedback
  • A prompt to generate a support response in a polite tone
  • A prompt to analyze data and provide insights
  • A prompt to translate text into another language
  • A prompt to generate code based on requirements

Key Understanding

Prompts are instruction-based. They define how the AI should process input and generate output.

Key Differences Between MCP Tools, Resources, and Prompts

Understanding the difference between MCP Tools, Resources, and Prompts is important for building scalable AI systems.

Tools vs Resources vs Prompts

  • Tools are used for performing actions
  • Resources are used for reading data
  • Prompts are used for guiding AI behavior

Detailed Comparison

  • Tools interact with external systems and can change data or trigger operations
  • Resources only provide data and do not modify anything
  • Prompts control how the AI thinks, responds, and formats its output

Comparison Table

Aspect  | MCP Tools          | MCP Resources | MCP Prompts
--------|--------------------|---------------|------------------------
Purpose | Perform actions    | Provide data  | Guide behavior
Nature  | Active             | Passive       | Instructional
Usage   | API calls, updates | Data reading  | AI response generation
Output  | Action result      | Data          | Generated content

How MCP Tools, Resources, and Prompts Work Together

In real-world AI systems, these three components are used together to create powerful workflows.

Step-by-Step Flow

  1. The user sends a request to the AI system
  2. The Prompt defines how the AI should understand and respond
  3. The AI fetches required information from Resources
  4. If an action is required, the AI uses a Tool
  5. The AI combines everything and generates a final response
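The steps above can be sketched in plain Python (not the MCP SDK; every name here is invented for the example) to show how the three component kinds divide the work:

```python
# Illustrative only: the three MCP component kinds as plain Python,
# wired together in the order described in the step-by-step flow.
# None of these names come from the MCP SDK.

# Resource: read-only data the AI can consult (step 3).
TICKETS = {"T-101": {"customer": "Ada", "status": "open"}}

def read_ticket(ticket_id):
    """Resource access: returns data, changes nothing."""
    return dict(TICKETS[ticket_id])  # copy, so callers can't mutate the store

# Tool: an action that changes external state (step 4).
def close_ticket(ticket_id):
    """Tool: performs an operation and returns a result."""
    TICKETS[ticket_id]["status"] = "closed"
    return f"{ticket_id} closed"

# Prompt: a template guiding how the response is shaped (step 2).
PROMPT = "Reply politely. Customer: {customer}. Ticket status was: {status}."

def handle_request(ticket_id):
    ticket = read_ticket(ticket_id)   # read from the Resource
    action = close_ticket(ticket_id)  # perform the action via the Tool
    # Step 5: combine prompt, data, and action result into the final response.
    return PROMPT.format(**ticket) + f" Action taken: {action}"

print(handle_request("T-101"))
```

The separation mirrors the comparison above: only close_ticket mutates state, read_ticket only reads, and PROMPT only shapes the output.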

Practical Example

Consider an AI customer support system:

  • The Prompt ensures the response is polite and helpful
  • The Resource provides customer history and previous tickets
  • The Tool updates the ticket status or sends an email notification

This combination helps build intelligent, real-world AI applications.

Advantages of Understanding MCP Concepts

  • Helps developers design clean and scalable AI architecture
  • Improves clarity in system design and reduces confusion
  • Enhances performance by separating responsibilities
  • Makes debugging and maintenance easier
  • Supports faster development of AI-powered applications

Common Mistakes Developers Make

  • Using Tools when only data retrieval is needed
  • Treating Resources as editable systems
  • Writing vague or unclear Prompts
  • Mixing responsibilities between Tools, Resources, and Prompts
  • Not structuring MCP components properly in applications

Best Practices for Using MCP Tools, Resources, and Prompts

  • Clearly define the role of each component before implementation
  • Use Tools only for actions that change system state or trigger operations
  • Use Resources strictly for reading and retrieving data
  • Write clear, specific, and well-structured Prompts
  • Test Tools, Resources, and Prompts independently before integration
  • Keep your architecture modular and easy to scale

Summary

Understanding the difference between MCP Tools, Resources, and Prompts is essential for modern AI application development using Model Context Protocol. Tools allow AI systems to perform actions, Resources provide the necessary data, and Prompts guide how the AI behaves and generates responses. When these components are used correctly, developers can build scalable, efficient, and intelligent AI systems. Mastering these MCP concepts will help you design better architectures and create powerful AI-driven applications in today’s evolving technology landscape.


IoT Coffee Talk: Episode 309 - "GenAI's NFT Moment" (Have we reached peak AI silliness?)

From: IoT Coffee Talk
Duration: 1:07:52
Views: 4

Welcome to IoT Coffee Talk, where hype comes to die a terrible death. We have a fireside chat about all things #IoT over a cup of coffee or two with some of the industry's leading business minds, thought leaders and technologists in a totally unscripted, organic format.

This week Pete, Bill, Alistair, Rob, and Leonard jump on Web3 for a discussion about:

🎶 🎙️ BAD KARAOKE! 🎸 🥁 "Rain", The Cult
🐣 How did any of us survive the mosh pits of our youth?
🐣 How to become an MVP at Microsoft.
🐣 How has the developer ecosystem game changed since the days of Microsoft Win32.
🐣 Why leading journalists are two-years behind in their tech reporting.
🐣 The telco economy versus the AI economy, which is bigger?
🐣 The secret value of AI - enabling our ability to deal with exponential complexity.
🐣 The great AI fallacy - it will replace people. Quite the opposite. Why?
🐣 Is GenAI having its NFT moment? Ask yourself, did you fall for it the first time?
🐣 Why tech bubbles are not OK.
🐣 The quality and verification deficit and the exponential risk curve.
🐣 Why do we know better but we never seem to do it?
🐣 Folding a shirt - the physical AI inflection point.
🐣 Why physical is so much harder than you are being told.
🐣 Do we care about Rob's sustainability tech and visionary IoT solutions?

It's a great episode. Grab an extraordinarily expensive latte at your local coffee shop and check out the whole thing. You will get all you need to survive another week in the world of IoT and greater tech!

Tune in! Like! Share! Comment and share your thoughts on IoT Coffee Talk, the greatest weekly assembly of Onalytica and CBT tech and IoT influencers on the planet!!

If you are interested in sponsoring an episode, please contact Stephanie Atkinson at Elevate Communities. Just make a minimally required donation to www.elevatecommunities.org and you can jump on and hang with the gang and amplify your brand on one of the top IoT/Tech podcasts in the known metaverse!!!

Take IoT Coffee Talk on the road with you on your favorite podcast platform. Go to IoT Coffee Talk on Buzzsprout, like, subscribe, and share: https://lnkd.in/gyuhNZ62


RAG in .NET: What the Tutorials Don’t Tell You. Chunking, Embedding, and Production Gotchas

Most RAG tutorials show you the happy path. A clean project, a handful of sample documents, a query that works first time.

That’s fine for getting oriented but it’s not what building these systems in production actually looks like.

This post is the one I wished existed when I started working on earlier RAG solutions using Semantic Kernel.

The chunking decisions that degrade silently, the retrieval quality that looks fine in demos and falls apart on real queries, and observability gaps that make debugging feel like guessing.

All in .NET.

This blog is by no means exhaustive and I continue to find optimisations, but it's a good starting point.

Let's dig in.

~

What RAG Is

RAG (Retrieval-Augmented Generation) is a pattern, not a framework. Before asking an LLM to answer a question, you first retrieve relevant content from your own data store and include it in the prompt.

The model answers from that grounded context rather than from training data alone.

A RAG pipeline has five stages:

  1. Ingestion — load source documents (HTML, Markdown, PDFs, plain text)
  2. Chunking — split documents into segments small enough to embed meaningfully
  3. Embedding — convert each chunk into a vector using an embedding model
  4. Storage — persist vectors to a vector store (SQLite, Elasticsearch, Azure AI Search, etc.)
  5. Retrieval + Generation — embed the incoming query, find the closest chunks, build a grounded prompt, generate an answer

 

Simple on paper. The devil is in the implementation choices at each stage.

~

Chunking: The Step Most Tutorials Rush

Chunking quality has a disproportionate effect on retrieval quality. Chunks too large: vector similarity becomes diluted. Too small: you lose the context that makes a chunk meaningful.

One approach is to use TextChunker.SplitMarkdownParagraphs() from Semantic Kernel. It respects document structure, so paragraphs, headings, and list items don't get bisected mid-sentence.

var chunks = TextChunker.SplitMarkdownParagraphs(
    lines: markdownContent.Split('\n').ToList(),
    maxTokensPerParagraph: 512,
    overlapTokens: 50
);

 

The overlapTokens parameter matters. A small overlap (10%-15%) between adjacent chunks ensures that a relevant sentence near a chunk boundary doesn’t disappear from retrieval. Skipping this is a common mistake.

I implemented my own custom chunking service on one project.

Gotcha: HTML content

Convert HTML to Markdown before chunking. Raw HTML bloats chunks with noise such as tags, attributes, and inline styles. These degrade embedding quality. Use the HtmlAgilityPack to strip structure first.

Gotcha: Mixed content types

A chunk that mixes a code sample with surrounding prose often embeds poorly because the two content types pull the vector in different directions. Chunk code blocks separately and tag them with metadata for filtering at retrieval time. This was an important learning for me.

~

The Relevance Threshold Is Not a Magic Number

Semantic Kernel’s SearchAsync takes a minRelevanceScore parameter. Tutorial defaults (0.75–0.80) are not universally correct.  The right threshold depends on your corpus and embedding model.

var results = await memory.SearchAsync(
    collection: CollectionName,
    query: userQuery,
    limit: 5,
    minRelevanceScore: 0.70
);

Start at 0.70 (or whatever your comfort level is), run representative queries, and look at what gets returned.

Build a manual eval set of 20–30 query/expected-answer pairs and iterate. There is no substitute for looking at actual retrieval results on your specific data.
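To make that iteration concrete, here is a hedged sketch of the tuning loop in Python (the scores, document IDs, and the retrieve stub are all made up; in a real pipeline retrieve would wrap your vector store's search call):

```python
# Sweep candidate relevance thresholds against a tiny eval set.
# All scores and IDs below are fabricated for illustration.

def retrieve(query):
    # Stand-in for a real vector search returning (chunk_id, score) pairs.
    fake = {
        "reset password": [("docs/auth", 0.82), ("docs/billing", 0.55)],
        "cancel subscription": [("docs/billing", 0.74), ("docs/auth", 0.48)],
    }
    return fake[query]

eval_set = {
    "reset password": "docs/auth",
    "cancel subscription": "docs/billing",
}

results = {}
for threshold in (0.60, 0.70, 0.80):
    hits = 0
    for query, expected in eval_set.items():
        kept = [cid for cid, score in retrieve(query) if score >= threshold]
        if expected in kept:
            hits += 1
    results[threshold] = hits
    print(f"threshold={threshold:.2f}  recall={hits}/{len(eval_set)}")
```

Here the 0.80 run silently drops the 0.74-scoring chunk, which is exactly the kind of regression you only see by looking at real retrieval results.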

~

Choosing a Vector Store

Match the tool to the stage:

  • VolatileMemoryStore — Demos only. Vectors live in RAM, gone on restart.
  • SqliteMemoryStore — Local development and early production. File-based, zero infrastructure overhead.
  • Elasticsearch — Already in your stack? Use it. Good for hybrid search.
  • Azure AI Search — Production on Azure. Managed, scalable.
  • Qdrant / Pinecone — Dedicated vector workloads at scale.

 

SQLite is underrated for early production. It’s a one-line swap from VolatileMemoryStore and handles modest query volumes without infrastructure cost. Migrate later when you actually need to.

~

The One-Time Embedding Check

Once you’re using a persistent store, add a collection existence check before the ingestion loop. Without it, every restart re-embeds the entire corpus — API calls and cost you don’t need.

var collections = await sqliteStore.GetCollectionsAsync().ToListAsync();
if (!collections.Contains(CollectionName))
{
    await ragService.IngestDocumentsAsync(documents, CollectionName);
}
else
{
    Console.WriteLine("Vectors already stored - skipping ingestion.");
}

 

Small investment. Saves meaningful API cost at scale.

~

Prompt Construction: Ground It Properly

The difference between a useful RAG system and a hallucinating one often comes down to prompt construction.

A simple prompt you can use:

var sb = new StringBuilder();
sb.AppendLine("Answer the question using ONLY the context below.");
sb.AppendLine("If the answer is not in the context, say so explicitly.");
sb.AppendLine();
sb.AppendLine("CONTEXT:");
foreach (var chunk in retrievedChunks)
{
    sb.AppendLine($"[Source: {chunk.Metadata.Id}]");
    sb.AppendLine(chunk.Metadata.Text);
    sb.AppendLine();
}
sb.AppendLine($"QUESTION: {userQuery}");

The key phrases are “ONLY the context below” and “say so explicitly”. Without explicit grounding instructions, models blend retrieved content with training knowledge which looks helpful but introduces unfaithful answers.

This isn’t optional.

~

Semantic Caching: The Easy Win Most People Skip

For user-facing or high-volume pipelines, add semantic caching early. Before hitting the vector store and LLM, check whether an incoming query is semantically similar to a recent query already answered.

If the similarity score is above the threshold, return the cached answer directly.

var cachedAnswer = await cacheService.FindSimilarAsync(query, threshold: 0.92f);
if (cachedAnswer != null)
{
    return cachedAnswer.Answer; // No vector search, no LLM call
}

 

At scale this eliminates a large proportion of pipeline calls and cuts latency dramatically for common query patterns. Add this early. Retrofitting it later is more work than it needs to be.
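The core of such a cache is nothing more than cosine similarity over stored query embeddings. A toy Python sketch, with hand-made 3-dimensional vectors standing in for real embedding output:

```python
import math

def cosine(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Cache of (embedding, answer) pairs; embeddings here are toy 3-d vectors.
cache = [([0.9, 0.1, 0.0], "Go to Settings > Billing to cancel.")]

def find_similar(query_embedding, threshold=0.92):
    # Return a cached answer if any stored query is close enough.
    for emb, answer in cache:
        if cosine(query_embedding, emb) >= threshold:
            return answer
    return None

print(find_similar([0.88, 0.12, 0.01]))  # near-duplicate query: cache hit
print(find_similar([0.10, 0.20, 0.95]))  # unrelated query: None
```

A real implementation would embed the incoming query with the same model used for ingestion and store the cache entries with a TTL, but the similarity check itself is this simple.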

~

Observability: Knowing What’s Actually Happening

A RAG pipeline has multiple failure modes and they all look the same from the outside: a bad answer. Without instrumentation you can’t tell whether the problem is in chunking, retrieval, prompt construction, or the model itself.

Consider capturing data using a logging record similar to:

public record RagQueryTrace
{
    public string Query { get; init; }
    public int ChunksRetrieved { get; init; }
    public float TopChunkScore { get; init; }
    public float LowestChunkScore { get; init; }
    public string[] SourceIds { get; init; }
    public string GeneratedAnswer { get; init; }
    public double LatencyMs { get; init; }
    public bool CacheHit { get; init; }
}

Signals to watch:

  • TopChunkScore consistently below 0.75: retrieval is struggling.
  • ChunksRetrieved always hitting your limit: try widening search and re-ranking.
  • CacheHit always false with high latency: cache threshold may be too tight.

 

Wire up end-to-end tracing with ILogger:

public async Task<string> QueryAsync(string query, string collection)
{
    var sw = Stopwatch.StartNew();
    _logger.LogInformation("RAG query started. Query={Query}", query);

    var chunks = await RetrieveChunksAsync(query, collection);
    _logger.LogInformation("Retrieval complete. Chunks={Count}, TopScore={Score:F3}",
        chunks.Count, chunks.FirstOrDefault()?.Relevance ?? 0);

    var answer = await GenerateAnswerAsync(query, chunks);
    _logger.LogInformation("Generation complete. LatencyMs={Ms}", sw.ElapsedMilliseconds);

    return answer;
}

 

Diagnosing bad answers:

  • Right chunks not retrieved? – Retrieval problem (threshold, chunking, embedding model)
  • Chunks retrieved but answer wrong? – Tighten grounding instructions in the prompt
  • Chunks and prompt correct but hallucinated? – Add explicit “do not speculate” to system prompt

 

Work backwards through the trace when you experience any of the above.

~

Evaluating RAG Quality and Why CI Matters

Most RAG prototypes get evaluated informally. This works until the corpus changes, a threshold gets tweaked, or the embedding model is swapped. Quality silently regresses with no way to detect it.

Build question/answer pairs covering easy queries, hard queries spanning multiple documents, and edge cases where the answer isn’t in the corpus and the system should say “I don’t know”. Three metrics worth tracking include:

  • Context Recall: were the right chunks retrieved?
  • Faithfulness: does the answer stick to the retrieved context?
  • Answer Correctness: does the answer match the expected answer?
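Of the three, Context Recall is the simplest to compute yourself: the fraction of expected chunks that retrieval actually returned. A pipeline-agnostic Python sketch (the chunk IDs are hypothetical):

```python
# Context Recall for one query: how many of the expected chunks came back.

def context_recall(retrieved_ids, expected_ids):
    if not expected_ids:
        return 1.0  # nothing was required, so nothing was missed
    found = sum(1 for cid in expected_ids if cid in retrieved_ids)
    return found / len(expected_ids)

# One eval case: the answer spans two documents, retrieval found one of them.
score = context_recall(
    retrieved_ids={"pricing.md#2", "faq.md#7"},
    expected_ids={"pricing.md#2", "refunds.md#1"},
)
print(f"context recall: {score:.2f}")  # prints: context recall: 0.50
```

Averaging this over the whole eval set gives the number the CI assertion below this checks against.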

 

Wire evals into your CI process.  For example:

[Fact]
public async Task RagEval_ContextRecall_AboveThreshold()
{
    var results = await RunEvalSetAsync(_evalQueries);
    var avgRecall = results.Average(r => r.ContextRecall);
    Assert.True(avgRecall >= 0.80,
        $"Context recall {avgRecall:P0} is below the 80% threshold");
}

The edge case evals are the most important, i.e. queries where the answer genuinely isn't in the corpus.

These test whether the system correctly says “I don’t know” rather than hallucinating.

Hallucination on out-of-scope queries is the thing that erodes user trust fastest and it’s the thing informal testing almost never catches.

~

What to Watch Out For

Some other things to watch out for:

  • Skipping overlap tokens in chunking — sentences near chunk boundaries silently drop out of retrieval. Always set overlapTokens.
  • Using tutorial threshold values verbatim — 0.75 or 0.80 is a starting point, not a universal answer. Tune against your actual corpus.
  • Re-embedding on every restart — add the collection existence check. It’s five lines and saves real API cost.
  • Weak grounding instructions — “use the context below” is not the same as “ONLY the context below”. The difference shows up in production.
  • No out-of-scope eval set — hallucination on questions the system can’t answer is the fastest way to lose user trust. Test for it explicitly.

 

Another reminder: make sure you chunk prose and code examples differently. This really caught me out.

~

Tools Used

Key tools involved in the above included:

  • Semantic Kernel (chunking, embedding, retrieval)
  • TextChunker.SplitMarkdownParagraphs (structure-aware chunking)
  • HtmlAgilityPack (HTML-to-Markdown conversion)
  • SqliteMemoryStore / VolatileMemoryStore / Azure AI Search (vector stores)
  • ILogger / Stopwatch (observability and tracing)
  • xUnit (eval set CI integration)
  • .NET 8

 

The happy path is easy to build. A RAG system that stays reliable as the corpus grows, thresholds get tuned, and real users ask unexpected questions is a different problem.

The patterns above are the ones that made the difference in practice.

~


Many anti-AI arguments are conservative arguments

1 Share

Most anti-AI rhetoric is left-wing coded. Popular criticisms of AI describe it as a tool of techno-fascism, or appeal to predominantly left-wing concerns like carbon emissions, democracy, or police brutality. Anti-AI sentiment is surprisingly bipartisan, but the big anti-AI institutions are labor unions and the progressive wing of the Democrats.

This has always seemed weird to me, because the contents of most anti-AI arguments are actually right-wing coded. They’re not necessarily intrinsically right-wing, but they’re the kind of arguments that historically have been made by conservatives, not liberals or leftists. Here are some examples:

  • Many AI critics complain that AI steals copyrighted content, but prior to 2023, leftists were largely anti-intellectual-property on principle (either because they’re anti-property, or because they characterize copyright as benefiting huge media corporations and patent trolls).
  • A popular anti-AI-art sentiment is that it’s corrosive to the human spirit to consume AI slop: in other words, art just inherently ought to be generated by humans, and using AI thus damages some part of our intangible human soul. Whether you like this argument or not, it’s structurally similar to a whole slate of classic arguments-from-intuition for conservative positions like anti-abortion or anti-homosexuality.
  • Weird new technological art has traditionally been championed by the left-wing and dismissed by the right-wing (as inhuman, cheap, or degenerate). But when it comes to AI art, it’s the left-wing making these arguments, and others (not necessarily right-wingers) arguing that AI art can also be a medium of human artistic expression.
  • One main worry about AI is that it’s going to take over a lot of jobs. This is a compelling argument! But the left-wing has recently been famously unsympathetic to this same argument around fossil-fuel energy jobs like coal mining, to the point where Biden infamously advised a group of miners in New Hampshire to learn to code1. Halting technological progress to preserve jobs is quite literally a “conservative” position.

On top of all that2, frontier AI models themselves are quite left-wing. Notwithstanding some real cases of data bias (most infamously Google’s image model miscategorizing dark-skinned humans as “gorillas”), the models reliably espouse left-wing positions. Even Elon Musk’s deliberate attempt to create a right-wing AI in Grok has had mixed success. In 2006, Stephen Colbert quipped that “reality has a well-known liberal bias”. If the left wing were more sympathetic to AI, I think they would be using this as a pro-left argument3.

So what happened? A year ago I wrote Is using AI wrong? A review of six popular anti-AI arguments. In that post I blame the hard right-wing turn many big tech CEOs made in 2024. That was around the same time that LLMs were emerging in the public consciousness with ChatGPT, so it made sense that AI got tagged as right-wing: after all, the billionaires on TV and Twitter talking about how AI was going to change the world were all the same people who’d just gone all-in on Donald Trump. I still think this is a pretty good explanation - just unfortunate timing - but there are definitely other factors at play.

One obvious factor is the hangover from the pro-crypto mania of 2021 and 2022, where many of the same tech-obsessed folks also posted ugly art and talked about how their technology would change the world forever. Few of these predictions came true (though cryptocurrency has indeed changed the world forever), and it’s understandable that many people viewed AI as a natural continuation of this movement.

On top of that, Donald Trump himself has come out strongly pro-AI, both in terms of policy and in terms of actually posting AI art himself. This naturally creates a backlash where anti-Trump people are primed to be even more anti-AI4. Here are some more reasons:

  • AI has real environmental impact (though this is often wildly overstated, as I say here), and the right-wing is politically committed to downplaying or denying anthropogenic environmental impacts in general.
  • When times are tough, it’s easy to blame the hot new thing that everyone is talking about. Because the right-wing is currently ascendant in the US, left-wingers are more inclined to talk about how tough times are.
  • The left-wing is over-represented in the kind of “computer jobs” that are under direct threat from AI.
  • Being pro-Europe has always been left-wing coded, and Europe has been noticeably slower and more sceptical about AI than the USA.

Let me finally put my cards on the table. I would describe myself as on the left wing, and I’m broadly agnostic about the impact of AI. Like the boring fence-sitter I am, I think it will have a mix of positive and negative effects. In general, I’m unconvinced by the pro-copyright and human-soul-related anti-AI arguments, or by the idea that AI is inherently right-wing, but I’m troubled by the environmental impact and the impact on jobs (which in my view are more classically left-wing positions).

Still, I’m curious what will happen when the left-wing flavor of anti-AI rhetoric disappears, which I think it will (as I said at the start, anti-AI sentiment is actually pretty bipartisan). When people start making explicitly right-wing anti-AI arguments, will that cause the left-wing to move a little bit towards supporting AI? Or will right-wing institutions continue to explicitly support AI, allowing anti-AI sentiment to become a wedge issue that the left-wing can exploit to pry away voters? In any case, I don’t think the current state of affairs is particularly stable. In many ways, the dominant anti-AI arguments would fit better in a conservative worldview than in the worldview of their liberal proponents.


  1. I don’t think any did, which is probably for the best - they would have only had a couple of years to break into the industry before hiring collapsed in 2023.

  2. Another point that isn’t quite mainstream enough but that I still want to mention: AI critics often argue that cavalier deployment of AI means that people might take dangerous medical advice instead of simply trusting their doctor. But anyone who’s been close to a person with chronic illness knows that “just trust your doctor” is kind of right-wing-coded itself, and that the left-wing position is very sympathetic to patients who don’t or can’t. In a parallel universe, I can imagine the left-wing arguing that patients need AI to avoid the mistakes of their doctors, not the other way around.

  3. Is it a good argument? I don’t know, actually. The easy counter is that the LLMs are just mirroring the biases in their training data. But you could argue in response that superintelligence is also latent in the training data, and that hill-climbing towards superintelligence also picks up the associated political positions (which just so happen to be left-wing).

  4. I am no fan of Donald Trump, but it doesn’t follow that everything he supports is bad (e.g. the First Step Act).
