Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
147,916 stories · 33 followers

Why Stack Overflow and Cloudflare launched a pay-per-crawl model

1 Share
Inside the pay-per-crawl model co-launched by Stack Overflow and Cloudflare.
Read the whole story
alvinashcraft
51 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Turn your Raspberry Pi into an AI agent with OpenClaw


The tech corners of the internet are buzzing with talk of OpenClaw, an open source AI agent. I’ve been playing around with it in the Maker Lab at Pi Towers for the past couple of weeks to find out what it’s really capable of.

By now, most of us are familiar with generative AI chatbots such as ChatGPT or Claude. These tools simulate conversation and generate responses based on prompts, using large language models (LLMs) to answer questions, write code, brainstorm ideas, or help analyse information. They’re incredibly useful — like having a knowledgeable assistant you can ask anything.

But traditional chatbots are fundamentally reactive; you ask a question, they respond. They can help you think through a problem, but the actual execution is still up to you.

This is where AI agents come in

OpenClaw takes the same generative AI capabilities and adds the missing piece: action. Instead of just generating text, an AI agent can use tools, run commands, interact with APIs, manage workflows, and carry out tasks on your behalf.

But, as Spider-Man wisely reminds us, with great power comes great responsibility. Installing OpenClaw on your main computer gives it deep access to your system, potentially allowing it to browse websites, fill in forms, and interact with personal data. That level of capability is incredibly powerful, but it can also pose a very real security risk.

Running OpenClaw on a standalone device like a Raspberry Pi is a great way to mitigate these security concerns. You gain isolation, control, and peace of mind, all while benefiting from a system that’s always on, energy efficient, and quietly ‘doing’ in the background.

Installing OpenClaw

On a freshly installed and updated Raspberry Pi OS, run the following terminal command:

curl -fsSL https://openclaw.ai/install.sh | bash

This will install everything you need and take you through the setup process.

Nice day for a wedding photo booth

As an illustrative example, I’ll share my own first OpenClaw experiment using a Raspberry Pi: a wedding photo booth. You know the type — guests step up, photos are taken, and then they’re instantly shared. I’d previously built one myself in Python (though “built” might be generous). It worked, but it wasn’t pretty.

Later, I experimented with ‘vibe coding’, copying and pasting code between ChatGPT and my Raspberry Pi’s file system. The outcome was much better than my original attempt, but it still required a fair bit of time and manual effort.

Finally, I decided to give it the OpenClaw treatment. I installed the agent on my Raspberry Pi 5 (though a Raspberry Pi 4 with 8GB of RAM works well too), added a VPN service (Tailscale integrates seamlessly with OpenClaw), and configured my OpenAI API key as the primary AI provider.

Next, I set up a fresh installation of Raspberry Pi OS on another Raspberry Pi, which I planned to use as the brains of my photo booth with a Raspberry Pi Camera Module 2 attached. I provided OpenClaw with my login credentials for this new Raspberry Pi and asked it to SSH into the device.

From there, I simply chatted with OpenClaw in plain English, explaining exactly how I wanted the photo booth to behave with simple prompts like “Change the font to…”, “Centre the text…”, and so on. Within just a couple of hours, I had created this:

Everything was completed without a single Bash or Python command, and no coding on my part whatsoever. The AI agent created all the necessary files, built the webpage, configured the Wi-Fi hotspot for photo downloads, and set up admin access. From start to finish, it handled everything I needed.

Top tip:

We recommend using a high-quality SD card for your OpenClaw build. Better yet, you could add an M.2 HAT+ and run the OS from an SSD (just use ‘SD Card Copier’ in ‘Accessories’ on Raspberry Pi OS). This makes OpenClaw super snappy.

Using OpenClaw offline

By connecting OpenClaw to a locally hosted model via tools like Ollama, llama.cpp, or LocalAI, all reasoning and processing can happen directly on your Raspberry Pi, keeping your data private, reducing latency, and eliminating API costs. While local AI models may not always match the raw capabilities of large cloud-based models, they excel at fast, iterative tasks and can be combined with cloud providers as an intelligent fallback.
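By way of illustration, Ollama serves an OpenAI-compatible API on localhost, so any agent that accepts an OpenAI-style provider can be pointed at it. The exact configuration keys OpenClaw expects are an assumption here; the endpoint and model-name formats are Ollama's:

```typescript
// Sketch only: the key names are assumptions, but the endpoint is Ollama's
// standard OpenAI-compatible API.
const localProvider = {
  type: "openai",
  baseUrl: "http://localhost:11434/v1", // Ollama's OpenAI-compatible endpoint
  apiKey: "ollama",                     // Ollama ignores the key; most SDKs still require one
  model: "qwen2.5-coder:1.5b",          // any model previously pulled with `ollama pull`
};
```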

PicoClaw on Raspberry Pi Zero 2 W

While OpenClaw is a powerful AI system for managing workflows and tools, PicoClaw is a slimmed-down agent designed to run locally and execute tasks on minimal hardware, making it perfect for devices like Raspberry Pi Zero, Raspberry Pi Zero 2 W, or Raspberry Pi 3. Since these boards don’t use LPDDR4 memory, you can build an AI agent that’s insulated from supply constraints and price fluctuations in that market.

To try it out, I installed PicoClaw on a Raspberry Pi Zero 2 W and, 30 seconds later, created a test webpage…

The shift towards edge-driven intelligence

Starting with something as simple as hosting webpages quickly shows that OpenClaw is less about replacing tools and more about changing how we interact with them. Tools like OpenClaw, whether they’re used for testing new concepts, managing infrastructure, or supporting real-world deployments, illustrate the potential for shifting inferencing from cloud-based LLMs to low-cost, local devices like Raspberry Pi.

The post Turn your Raspberry Pi into an AI agent with OpenClaw appeared first on Raspberry Pi.


Agentic Code Fixing with GitHub Copilot SDK and Foundry Local


Introduction

AI-powered coding assistants have transformed how developers write and review code. But most of these tools require sending your source code to cloud services, a non-starter for teams working with proprietary codebases, air-gapped environments, or strict compliance requirements. What if you could have an intelligent coding agent that finds bugs, fixes them, runs your tests, and produces PR-ready summaries, all without a single byte leaving your machine?

The Local Repo Patch Agent demonstrates exactly this. By combining the GitHub Copilot SDK for agent orchestration with Foundry Local for on-device inference, this project creates a fully autonomous coding workflow that operates entirely on your hardware. The agent scans your repository, identifies bugs and code smells, applies fixes, verifies them through your test suite, and generates a comprehensive summary of all changes, completely offline and secure.

This article explores the architecture behind this integration, walks through the key implementation patterns, and shows you how to run the agent yourself. Whether you're building internal developer tools, exploring agentic workflows, or simply curious about what's possible when you combine GitHub's SDK with local AI, this project provides a production-ready foundation to build upon.

Why Local AI Matters for Code Analysis

Cloud-based AI coding tools have proven their value—GitHub Copilot has fundamentally changed how millions of developers work. But certain scenarios demand local-first approaches where code never leaves the organisation's network.

Consider these real-world constraints that teams face daily:

  • Regulatory compliance: Financial services, healthcare, and government projects often prohibit sending source code to external services, even for analysis
  • Intellectual property protection: Proprietary algorithms and trade secrets can't risk exposure through cloud API calls
  • Air-gapped environments: Secure facilities and classified projects have no internet connectivity whatsoever
  • Latency requirements: Real-time code analysis in IDEs benefits from zero network roundtrip
  • Cost control: High-volume code analysis without per-token API charges

The Local Repo Patch Agent addresses all these scenarios. By running the AI model on-device through Foundry Local and using the GitHub Copilot SDK for orchestration, you get the intelligence of agentic coding workflows with complete data sovereignty. The architecture proves that "local-first" doesn't mean "capability-limited."

The Technology Stack

Two core technologies make this architecture possible, working together through a clever integration called BYOK (Bring Your Own Key). Understanding how they complement each other reveals the elegance of the design.

GitHub Copilot SDK

The GitHub Copilot SDK provides the agent runtime, the scaffolding that handles planning, tool invocation, streaming responses, and the orchestration loop that makes agentic behaviour possible. Rather than managing raw LLM API calls, developers define tools (functions the agent can call) and system prompts, and the SDK handles everything else.

Key capabilities the SDK brings to this project:

  • Session management: Maintains conversation context across multiple agent interactions
  • Tool orchestration: Automatically invokes defined tools when the model requests them
  • Streaming support: Real-time response streaming for responsive user interfaces
  • Provider abstraction: Works with any OpenAI-compatible API through the BYOK configuration

Foundry Local

Foundry Local brings Azure AI Foundry's model catalog to your local machine. It automatically selects the best available hardware acceleration—GPU, NPU, or CPU—and exposes models through an OpenAI-compatible API on localhost. Models run entirely on-device with no telemetry or data transmission.

For this project, Foundry Local provides:

  • On-device inference: All AI processing happens locally, ensuring complete data privacy
  • Dynamic port allocation: The SDK auto-detects the Foundry Local endpoint, eliminating configuration hassle
  • Model flexibility: Swap between models like qwen2.5-coder-1.5b, phi-3-mini, or larger variants based on your hardware
  • OpenAI API compatibility: Standard API format means the GitHub Copilot SDK works without modification

The BYOK Integration

The entire connection between the GitHub Copilot SDK and Foundry Local happens through a single configuration object. This BYOK (Bring Your Own Key) pattern tells the SDK to route all inference requests to your local model instead of cloud services:

const session = await client.createSession({
  model: modelId,
  provider: {
    type: "openai",               // Foundry Local speaks OpenAI's API format
    baseUrl: proxyBaseUrl,        // Streaming proxy → Foundry Local
    apiKey: manager.apiKey,
    wireApi: "completions",       // Chat Completions API
  },
  streaming: true,
  tools: [ /* your defined tools */ ],
});

This configuration is the key insight: with one config object, you've redirected an entire agent framework to run on local hardware. No code changes to the SDK, no special adapters—just standard OpenAI-compatible API communication.

Architecture Overview

The Local Repo Patch Agent implements a layered architecture where each component has a clear responsibility. Understanding this flow helps when extending or debugging the system.

┌──────────────────────────────────────────────────────────┐
│                 Your Terminal / Web UI                   │
│                 npm run demo / npm run ui                │
└──────────────┬───────────────────────────────────────────┘
               │
┌──────────────▼───────────────────────────────────────────┐
│          src/agent.ts  (this project)                    │
│                                                          │
│  ┌──────────────────────────┐     ┌──────────────────┐   │
│  │  GitHub Copilot SDK      │     │  Agent Tools     │   │
│  │  (CopilotClient)         │     │  list_files      │   │
│  │  BYOK → Foundry          │     │  read_file       │   │
│  └────────────┬─────────────┘     │  write_file      │   │
│               │                   │  run_command     │   │
│               │                   └──────────────────┘   │
└───────────────┼──────────────────────────────────────────┘
                │ JSON-RPC
┌───────────────▼──────────────────────────────────────────┐
│          GitHub Copilot CLI  (server mode)               │
│          Agent orchestration layer                       │
└───────────────┬──────────────────────────────────────────┘
                │ POST /v1/chat/completions   (BYOK)
┌───────────────▼──────────────────────────────────────────┐
│          Foundry Local  (on-device inference)            │
│          Model: qwen2.5-coder-1.5b via ONNX Runtime      │
│          Endpoint: auto-detected (dynamic port)          │
└──────────────────────────────────────────────────────────┘

The data flow works as follows: your terminal or web browser sends a request to the agent application. The agent uses the GitHub Copilot SDK to manage the conversation, which communicates with the Copilot CLI running in server mode. The CLI, configured with BYOK, sends inference requests to Foundry Local running on localhost. Responses flow back up the same path, with tool invocations happening in the agent.ts layer.

The Four-Phase Workflow

The agent operates through a structured four-phase loop, each phase building on the previous one's output. This decomposition transforms what would be an overwhelming single prompt into manageable, verifiable steps.

Phase 1: PLAN

The planning phase scans the repository and produces a numbered fix plan. The agent reads every source and test file, identifies potential issues, and outputs specific tasks to address:

// Phase 1 system prompt excerpt
const planPrompt = `
You are a code analysis agent. Scan the repository and identify:
1. Bugs that cause test failures
2. Code smells and duplication
3. Style inconsistencies

Output a numbered list of fixes, ordered by priority.
Each item should specify: file path, line numbers, issue type, and proposed fix.
`;

The tools available during this phase are list_files and read_file—the agent explores the codebase without modifying anything. This read-only constraint prevents accidental changes before the plan is established.
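As a sketch, the Phase 1 read-only tools can be defined in the same shape as the write_file tool shown in Phase 2; the tool names come from the article, while the parameter details here are assumptions:

```typescript
// Sketch of the Phase 1 read-only tool definitions. Tool names are from the
// article; the parameter schemas are assumptions following the write_file shape.
const planTools = [
  {
    name: "list_files",
    description: "List files under a directory in the repository",
    parameters: {
      type: "object",
      properties: {
        path: { type: "string", description: "Directory path relative to repo root" },
      },
      required: ["path"],
    },
  },
  {
    name: "read_file",
    description: "Read the contents of a single file",
    parameters: {
      type: "object",
      properties: {
        path: { type: "string", description: "File path relative to repo root" },
      },
      required: ["path"],
    },
  },
];
```

Because neither tool can write or execute anything, the worst a bad plan can do in this phase is waste tokens.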

Phase 2: EDIT

With a plan in hand, the edit phase applies each fix by rewriting affected files. The agent receives the plan from Phase 1 and systematically addresses each item:

// Phase 2 adds the write_file tool
const editTools = [
  {
    name: "write_file",
    description: "Write content to a file, creating or overwriting it",
    parameters: {
      type: "object",
      properties: {
        path: { type: "string", description: "File path relative to repo root" },
        content: { type: "string", description: "Complete file contents" }
      },
      required: ["path", "content"]
    }
  }
];

The write_file tool is sandboxed to the demo-repo directory; path traversal attempts are blocked, preventing the agent from modifying files outside the designated workspace.
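A minimal sketch of such a sandbox check, assuming the project resolves requested paths against the workspace root (the helper name here is hypothetical):

```typescript
import path from "node:path";

// Resolve the requested path against the workspace root and reject anything
// that escapes it. A path is inside the sandbox only if it equals the root
// or starts with the root plus a path separator.
const REPO_ROOT = path.resolve("demo-repo");

function resolveSandboxed(requested: string): string {
  const resolved = path.resolve(REPO_ROOT, requested);
  if (resolved !== REPO_ROOT && !resolved.startsWith(REPO_ROOT + path.sep)) {
    throw new Error(`Path escapes sandbox: ${requested}`);
  }
  return resolved;
}
```

Comparing resolved absolute paths (rather than scanning for `..` substrings) also catches traversal hidden behind symlink-free tricks like `a/../../b`.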

Phase 3: VERIFY

After making changes, the verification phase runs the project's test suite to confirm fixes work correctly. If tests fail, the agent attempts to diagnose and repair the issue:

// Phase 3 adds run_command with an allowlist
const allowedCommands = ["npm test", "npm run lint", "npm run build"];

const runCommandTool = {
  name: "run_command",
  description: "Execute a shell command (npm test, npm run lint, npm run build only)",
  execute: async (command: string) => {
    if (!allowedCommands.includes(command)) {
      throw new Error(`Command not allowed: ${command}`);
    }
    // Execute and return stdout/stderr
  }
};

The command allowlist is a critical security measure. The agent can only run explicitly permitted commands—no arbitrary shell execution, no data exfiltration, no system modification.

Phase 4: SUMMARY

The final phase produces a PR-style Markdown report documenting all changes. This summary includes what was changed, why each change was necessary, test results, and recommended follow-up actions:

## Summary of Changes

### Bug Fix: calculateInterest() in account.js
- **Issue**: Division instead of multiplication caused incorrect interest calculations
- **Fix**: Changed `principal / annualRate` to `principal * (annualRate / 100)`
- **Tests**: 3 previously failing tests now pass

### Refactor: Duplicate formatCurrency() removed
- **Issue**: Identical function existed in account.js and transaction.js
- **Fix**: Both files now import from utils.js
- **Impact**: Reduced code duplication, single source of truth

### Test Results
- **Before**: 6/9 passing
- **After**: 9/9 passing

This structured output makes code review straightforward: reviewers can quickly understand what changed and why without digging through diffs.

The Demo Repository: Intentional Bugs

The project includes a demo-repo directory containing a small banking utility library with intentional problems for the agent to find and fix. This provides a controlled environment to demonstrate the agent's capabilities.

Bug 1: Calculation Error in calculateInterest()

The account.js file contains a calculation bug that causes test failures:

// BUG: should be principal * (annualRate / 100)
function calculateInterest(principal, annualRate) {
  return principal / annualRate;  // Division instead of multiplication!
}

This bug causes 3 of 9 tests to fail. The agent identifies it during the PLAN phase by correlating test failures with the implementation, then fixes it during EDIT.
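A corrected version consistent with the fix the article describes (the comment in the buggy snippet above spells out the intended formula):

```typescript
// Fixed: interest = principal * (annualRate / 100)
function calculateInterest(principal: number, annualRate: number): number {
  return principal * (annualRate / 100);
}
```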

Bug 2: Code Duplication

The formatCurrency() function is copy-pasted in both account.js and transaction.js, even though a canonical version exists in utils.js. This duplication creates maintenance burden and potential inconsistency:

// In account.js (duplicated)
function formatCurrency(amount) {
  return '$' + amount.toFixed(2);
}

// In transaction.js (also duplicated)
function formatCurrency(amount) {
  return '$' + amount.toFixed(2);
}

// In utils.js (canonical, but unused)
export function formatCurrency(amount) {
  return '$' + amount.toFixed(2);
}

The agent identifies this duplication during planning and refactors both files to import from utils.js, eliminating redundancy.
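After the refactor, utils.js remains the single source of truth and the other two files import from it. A sketch of the result (file layout per the article):

```typescript
// utils.js: the one canonical implementation.
export function formatCurrency(amount: number): string {
  return "$" + amount.toFixed(2);
}

// account.js and transaction.js then drop their local copies and use:
//   import { formatCurrency } from "./utils.js";
```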

Handling Foundry Local Streaming Quirks

One technical challenge the project solves is Foundry Local's behaviour with streaming requests. As of version 0.5, Foundry Local can hang on stream: true requests. The project includes a streaming proxy that works around this limitation transparently.

The Streaming Proxy

The streaming-proxy.ts file implements a lightweight HTTP proxy that converts streaming requests to non-streaming, then re-encodes the single response as SSE (Server-Sent Events) chunks—the format the OpenAI SDK expects:

// streaming-proxy.ts simplified logic
async function handleRequest(req: Request): Promise<Response> {
  const body = await req.json();
  
  // If it's a streaming chat completion, convert to non-streaming
  if (body.stream === true && req.url.includes('/chat/completions')) {
    body.stream = false;
    
    const response = await fetch(foundryEndpoint, {
      method: 'POST',
      body: JSON.stringify(body),
      headers: { 'Content-Type': 'application/json' }
    });
    
    const data = await response.json();
    
    // Re-encode as SSE stream for the SDK
    return createSSEResponse(data);
  }
  
  // Non-streaming and non-chat requests pass through, re-serialising the already-read body
  return fetch(foundryEndpoint, { method: req.method, headers: req.headers, body: JSON.stringify(body) });
}

This proxy runs on port 8765 by default and sits between the GitHub Copilot SDK and Foundry Local. The SDK thinks it's talking to a streaming-capable endpoint, while the actual inference happens non-streaming. The conversion is transparent; no changes to the SDK configuration are needed.
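The re-encoding step (shown above as createSSEResponse) boils down to wrapping the completed response in SSE framing. A simplified sketch that builds just the SSE payload string; a full implementation would also convert the completion's `message` into a streaming `delta` chunk:

```typescript
// Wrap one complete chat completion as a single SSE data event followed by
// the [DONE] sentinel that OpenAI-style clients use to detect end of stream.
function encodeAsSSE(completion: unknown): string {
  return `data: ${JSON.stringify(completion)}\n\n` + "data: [DONE]\n\n";
}
```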

Text-Based Tool Call Detection

Small on-device models like qwen2.5-coder-1.5b sometimes output tool calls as JSON text rather than using OpenAI-style function calling. The SDK won't fire tool.execution_start events for these text-based calls, so the agent includes a regex-based detector:

// Pattern to detect tool calls in model output
const toolCallPattern = /\{[\s\S]*"name":\s*"(list_files|read_file|write_file|run_command)"[\s\S]*\}/;

function detectToolCall(text: string): ToolCall | null {
  const match = text.match(toolCallPattern);
  if (match) {
    try {
      return JSON.parse(match[0]);
    } catch {
      return null;
    }
  }
  return null;
}

This fallback ensures tool calls are captured regardless of whether the model uses native function calling or text output, keeping the dashboard's tool call counter and CLI log accurate.

Security Considerations

Running an AI agent that can read and write files and execute commands requires careful security design. The Local Repo Patch Agent implements multiple layers of protection:

  • 100% local execution: No code, prompts, or responses leave your machine—complete data sovereignty
  • Command allowlist: The agent can only run npm test, npm run lint, and npm run build—no arbitrary shell commands
  • Path sandboxing: File tools are locked to the demo-repo/ directory; path traversal attempts like ../../../etc/passwd are rejected
  • File size limits: The read_file tool rejects files over 256 KB, preventing memory exhaustion attacks
  • Recursion limits: Directory listing caps at 20 levels deep, preventing infinite traversal

These constraints demonstrate responsible AI agent design. The agent has enough capability to do useful work but not enough to cause harm. When extending this project for your own use cases, maintain similar principles: grant minimum necessary permissions, validate all inputs, and fail closed on unexpected conditions.
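As one concrete example, the 256 KB read limit from the list above can be enforced with a small guard before file contents ever reach the model (the function name here is hypothetical):

```typescript
// Reject oversized files before handing their contents to read_file,
// per the article's stated 256 KB limit.
const MAX_FILE_BYTES = 256 * 1024;

function guardFileSize(content: Buffer): Buffer {
  if (content.byteLength > MAX_FILE_BYTES) {
    throw new Error(`File too large: ${content.byteLength} bytes (limit ${MAX_FILE_BYTES})`);
  }
  return content;
}
```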

Running the Agent

Getting the Local Repo Patch Agent running on your machine takes about five minutes. The project includes setup scripts that handle prerequisites automatically.

Prerequisites

Before running the setup, ensure you have:

  • Node.js 18 or higher: Download from nodejs.org (LTS version recommended)
  • Foundry Local: Install via winget install Microsoft.FoundryLocal (Windows) or brew install foundrylocal (macOS)
  • GitHub Copilot CLI: Follow the GitHub Copilot CLI install guide

Verify your installations:

node --version    # Should print v18.x.x or higher
foundry --version
copilot --version

One-Command Setup

The easiest path uses the provided setup scripts that install dependencies, start Foundry Local, and download the AI model:

# Clone the repository
git clone https://github.com/leestott/copilotsdk_foundrylocal.git
cd copilotsdk_foundrylocal

# Windows (PowerShell)
.\setup.ps1

# macOS / Linux
chmod +x setup.sh
./setup.sh

When setup completes, you'll see:

━━━ Setup complete! ━━━

  You're ready to go. Run one of these commands:

    npm run demo     CLI agent (terminal output)
    npm run ui       Web dashboard (http://localhost:3000)

Manual Setup

If you prefer step-by-step control:

# Install npm packages
npm install
cd demo-repo && npm install --ignore-scripts && cd ..

# Start Foundry Local and download the model
foundry service start
foundry model run qwen2.5-coder-1.5b

# Copy environment configuration
cp .env.example .env

# Run the agent
npm run demo

The first model download takes a few minutes depending on your connection. After that, the model runs from cache with no internet required.

Using the Web Dashboard

For a visual experience with real-time streaming, launch the web UI:

npm run ui

Open http://localhost:3000 in your browser. The dashboard provides:

  • Phase progress sidebar: Visual indication of which phase is running, completed, or errored
  • Live streaming output: Model responses appear in real-time via WebSocket
  • Tool call log: Every tool invocation logged with phase context
  • Phase timing table: Performance metrics showing how long each phase took
  • Environment info: Current model, endpoint, and repository path at a glance

Configuration Options

The agent supports several environment variables for customisation. Edit the .env file or set them directly:

Variable                 Default              Description
FOUNDRY_LOCAL_ENDPOINT   auto-detected        Override the Foundry Local API endpoint
FOUNDRY_LOCAL_API_KEY    auto-detected        Override the API key
FOUNDRY_MODEL            qwen2.5-coder-1.5b   Which model to use from the Foundry Local catalog
FOUNDRY_TIMEOUT_MS       180000 (3 min)       How long each agent phase can run before timing out
FOUNDRY_NO_PROXY         (unset)              Set to 1 to disable the streaming proxy
PORT                     3000                 Port for the web dashboard

Using Different Models

To try a different model from the Foundry Local catalog:

# Use phi-3-mini instead
FOUNDRY_MODEL=phi-3-mini npm run demo

# Use a larger model for higher quality (requires more RAM/VRAM)
FOUNDRY_MODEL=qwen2.5-7b npm run demo

Adjusting for Slower Hardware

If you're running on CPU-only or limited hardware, increase the timeout to give the model more time per phase:

# 5 minutes per phase instead of 3
FOUNDRY_TIMEOUT_MS=300000 npm run demo

Troubleshooting Common Issues

When things don't work as expected, these solutions address the most common problems:

Problem                              Solution
foundry: command not found           Install Foundry Local (see Prerequisites section)
copilot: command not found           Install GitHub Copilot CLI (see Prerequisites section)
Agent times out on every phase       Increase FOUNDRY_TIMEOUT_MS (e.g., 300000 for 5 min); CPU-only machines are slower
Port 3000 already in use             Set PORT=3001 npm run ui
Model download is slow               First download can take 5-10 min; subsequent runs use the cache
Cannot find module errors            Run npm install again, then cd demo-repo && npm install --ignore-scripts
Tests still fail after agent runs    The agent edits files in demo-repo/; reset with git checkout demo-repo/ and run again
PowerShell blocks setup.ps1          Run Set-ExecutionPolicy -Scope Process Bypass first, then .\setup.ps1

Diagnostic Test Scripts

The src/tests/ folder contains standalone scripts for debugging SDK and Foundry Local integration issues. These are invaluable when things go wrong:

# Debug-level SDK event logging
npx tsx src/tests/test-debug.ts

# Test non-streaming inference (bypasses streaming proxy)
npx tsx src/tests/test-nostream.ts

# Raw fetch to Foundry Local (bypasses SDK entirely)
npx tsx src/tests/test-stream-direct.ts

# Start the traffic-inspection proxy
npx tsx src/tests/test-proxy.ts

These scripts isolate different layers of the stack, helping identify whether issues lie in Foundry Local, the streaming proxy, the SDK, or your application code.

Key Takeaways

  • BYOK enables local-first AI: A single configuration object redirects the entire GitHub Copilot SDK to use on-device inference through Foundry Local
  • Phased workflows improve reliability: Breaking complex tasks into PLAN → EDIT → VERIFY → SUMMARY phases makes agent behaviour predictable and debuggable
  • Security requires intentional design: Allowlists, sandboxing, and size limits constrain agent capabilities to safe operations
  • Local models have quirks: The streaming proxy and text-based tool detection demonstrate how to work around on-device model limitations
  • Real-time feedback matters: The web dashboard with WebSocket streaming makes agent progress visible and builds trust in the system
  • The architecture is extensible: Add new tools, change models, or modify phases to adapt the agent to your specific needs

Conclusion and Next Steps

The Local Repo Patch Agent proves that sophisticated agentic coding workflows don't require cloud infrastructure. By combining the GitHub Copilot SDK's orchestration capabilities with Foundry Local's on-device inference, you get intelligent code analysis that respects data sovereignty completely.

The patterns demonstrated here (BYOK integration, phased execution, security sandboxing, and streaming workarounds) transfer directly to production systems. Consider extending this foundation with:

  • Custom tool sets: Add database queries, API calls to internal services, or integration with your CI/CD pipeline
  • Multiple repository support: Scan and fix issues across an entire codebase or monorepo
  • Different model sizes: Use smaller models for quick scans, larger ones for complex refactoring
  • Human-in-the-loop approval: Add review steps before applying fixes to production code
  • Integration with Git workflows: Automatically create branches and PRs from agent-generated fixes

Clone the repository, run through the demo, and start building your own local-first AI coding tools. The future of developer AI isn't just cloud—it's intelligent systems that run wherever your code lives.

Resources


Celebrating Community and Curiosity at Swetugg Stockholm 2026


Every February, Swetugg transforms Stockholm into a vibrant gathering place for developers, architects, cloud practitioners, and community builders from across the Nordics and beyond. The 2026 edition continued that tradition with even more energy, deeper technical conversations, and an unmistakable sense of belonging—one that mirrors the very spirit of the Microsoft MVP community. 

As a long-running grassroots conference built by developers for developers, Swetugg is a space where curiosity fuels everything. This year brought together local user groups, first‑time speakers, seasoned MVP experts, and community members eager to explore new possibilities across the Microsoft ecosystem. 

Though rooted in Sweden’s tech scene, Swetugg has grown into a truly global touchpoint. Attendees traveled from across Scandinavia and Europe, highlighting how interconnected and collaborative our developer communities have become. 

Real‑world insights, engaged audience — hands‑on learning at Swetugg Stockholm 2026.

MVPs at Swetugg: Driving Knowledge, Inclusion, and Innovation 

One of the most inspiring parts of Swetugg is seeing how many MVPs take the stage—not only to deliver top-tier technical sessions but also to uplift emerging voices. 

From talks on .NET performance tuning to practical Azure governance patterns to real‑world demonstrations of AI tooling, MVPs showcased the value of hands-on, experience-driven learning. But what resonated even more deeply was the emphasis on sharing the stage: inviting newer speakers into panels, amplifying diverse perspectives, and creating mentoring moments throughout the hallway track. 

Curiosity in action. Developers, architects, and community builders coming together at Swetugg Stockholm 2026 to learn, share, and connect.

MVP Barbara Forbes: “Swetugg combines strong technical content with a very warm community feeling. Very impressive when the weather is freezing!”

Hands-On Learning: The Heart of Swetugg 

True to its developer‑first roots, Swetugg goes far beyond traditional presentations. Deep‑dive sessions, live demos, and spontaneous hallway troubleshooting gave attendees the chance to explore ideas in depth, experiment, and learn directly from one another.

MVPs played a pivotal role here—jumping into live demos, offering real world guidance, and helping participants break down complex challenges into actionable next steps.  

Looking Ahead: Who Will Share Their Story Next? 

If you left Swetugg feeling inspired, wondering how you can contribute more visibly to the tech community, that spark is exactly how many MVP stories begin. So we invite you to ask yourself: 
What’s your community story—and who might it inspire next? Whether you're thinking about nominating someone for the MVP Award, stepping on stage for the first time, or sharing your first tutorial online, your voice matters. And events like Swetugg show how powerful it can be when shared. 

 


WHY WE USE ALL CAPS TO SHOUT, with Glenn Fleishman


1161. Today, we look at the history of writing in all-uppercase letters. Tech historian Glenn Fleishman explains how capitals transitioned from a sign of importance to a convention for shouting. Plus, we discuss his research tracking the association between yelling and capital letters back to 1856 and why early newspapers used all capitals to make tiny type seem larger.

Glenn Fleishman's website.

🔗 Join the Grammar Girl Patreon.

🔗 Share your familect recording in Speakpipe or by leaving a voicemail at 833-214-GIRL (833-214-4475)

🔗 Watch my LinkedIn Learning writing courses.

🔗 Subscribe to the newsletter.

🔗 Take our advertising survey

🔗 Get the edited transcript.

🔗 Get Grammar Girl books

| HOST: Mignon Fogarty

| Grammar Girl is part of the Quick and Dirty Tips podcast network.

  • Audio Engineer: Dan Feierabend, Maram Elnagheeb
  • Director of Podcast: Holly Hutchings
  • Advertising Operations Specialist: Morgan Christianson
  • Marketing and Video: Nat Hoopes, Rebekah Sebastian
  • Podcast Associate: Maram Elnagheeb

| Theme music by Catherine Rannus.

| Grammar Girl Social Media: YouTube, TikTok, Facebook, Threads, Instagram, LinkedIn, Mastodon, Bluesky.







Download audio: https://dts.podtrac.com/redirect.mp3/media.blubrry.com/grammargirl/stitcher.simplecastaudio.com/e7b2fc84-d82d-4b4d-980c-6414facd80c3/episodes/32c78ead-bd08-4420-816d-b8312e3b4f73/audio/128/default.mp3?aid=rss_feed&awCollectionId=e7b2fc84-d82d-4b4d-980c-6414facd80c3&awEpisodeId=32c78ead-bd08-4420-816d-b8312e3b4f73&feed=XcH2p3Ah

How to Set Up SMTP for WordPress Emails and Contact Forms


Imagine setting up a lead capture or newsletter registration form, only to later find out that subscribers haven’t been receiving your emails. It’s very frustrating, and unfortunately, it’s one of the most common problems WordPress users face. 

This is why you need to set up SMTP (Simple Mail Transfer Protocol). Without it, emails containing contact form submissions, password reset links, new user registration notices, and WooCommerce order confirmations will likely end up in spam folders — or won’t arrive at all.

Below, we’ll take a closer look at SMTP and how it improves email deliverability. We’ll then show you how to set it up in WordPress using a plugin and troubleshoot common SMTP issues. 

How WordPress sends emails by default

WordPress uses the wp_mail() function to send emails. This is built on the basic mail() function provided by PHP, the programming language that WordPress software is based on.

While wp_mail() gets the job done, it’s not secure or reliable. This is because it doesn’t authenticate emails sent from your website. 

As a result, your emails will lack the trust signals that email clients like Gmail and Outlook use to verify the authenticity of the sender. This means that they’ll likely be flagged as spam or blocked entirely.

Additionally, many hosting servers aren’t configured to handle complex email protocols. They’re primarily optimized to serve web pages. Therefore, messages sent via wp_mail() may be blocked, delayed, or marked as suspicious by receiving servers. 

Plus, some providers block common SMTP ports or throttle email traffic to prevent abuse. This can further impact email deliverability.

Note that web hosting and email delivery are two different services. Web hosting serves your website to visitors, while email delivery requires configured mail servers, authentication protocols, and sender reputation management.

So, even if your hosting provider lets you create a custom email address for your website, you’ll still want to use a dedicated SMTP service to significantly improve the chances of your emails arriving in your recipients’ inboxes.

What is SMTP (Simple Mail Transfer Protocol)?

SMTP is the standard communication protocol for sending emails across the Internet. Unlike the default wp_mail() method in WordPress, SMTP requires authentication. It also uses encryption to ensure that your messages are delivered securely.

SMTP connects to a mail server (such as Gmail, Outlook, or a dedicated transactional email service) using a username and password. The connection is encrypted with SSL or TLS, which protects both the sender and the recipient. 
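Outside of WordPress, this handshake can be sketched in a few lines of Python with the standard library's `smtplib`. The sketch below mirrors what an SMTP plugin does on your behalf; the host, port, credentials, and addresses are placeholders, not real values:

```python
import smtplib
from email.message import EmailMessage

def build_message(sender: str, recipient: str, subject: str, body: str) -> EmailMessage:
    """Assemble a simple plain-text email message."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_via_smtp(msg: EmailMessage, host: str, port: int,
                  user: str, password: str) -> None:
    """Open a connection, upgrade it to TLS, authenticate, and send:
    the steps an SMTP plugin performs for every outgoing email."""
    with smtplib.SMTP(host, port) as server:
        server.starttls()              # encrypt the connection
        server.login(user, password)   # authenticate with your provider's credentials
        server.send_message(msg)

msg = build_message("site@example.com", "visitor@example.com",
                    "Contact form submission", "Hello from WordPress!")
# Uncomment and fill in real credentials to actually send:
# send_via_smtp(msg, "smtp.example.com", 587, "site@example.com", "app-password")
```

The `starttls()` and `login()` calls are exactly the authentication and encryption steps that the default `wp_mail()` path skips.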

How SMTP enhances WordPress email deliverability

Now that you have a clearer idea of what SMTP is, let’s talk about how it boosts WordPress email deliverability. 

For starters, SMTP requires you to authenticate your identity using credentials provided by your email service provider. This ensures that the messages originate from a trusted source (i.e. your website).

Encrypted connections (using SSL or TLS) add another layer of security. These ensure that your emails can’t be intercepted by cyber criminals during transit.

When emails are sent through a trusted SMTP server, they are more likely to be accepted by receiving servers. This improves your domain’s sender reputation over time, so your messages have a lower chance of being marked as spam.

SMTP is supported by all major email services, including Gmail, Outlook/Office 365, Zoho, SendGrid, Mailgun, and Amazon SES. These services provide the credentials you need to send authenticated emails through your WordPress site.

How to set up SMTP in WordPress (a step-by-step guide)

Now, let’s show you how to set up SMTP with a WordPress plugin.

1. Install a WordPress SMTP plugin

One of the best SMTP plugins on the market is MailPoet. Built by Automattic (the people behind WordPress.com), this transactional email service comes with a built-in SMTP solution that’s very easy to activate, and it boasts a near-99% global delivery rate.

To get started, go to Plugins → Add Plugin and use the search bar to find the tool.

MailPoet listing in the WordPress plugin repository

Click on Install Now and Activate. WordPress will direct you to MailPoet’s setup page.

MailPoet page with the text "better email without leaving WordPress"

Note that you’ll need a MailPoet account to continue. You can create one for free.

option to connect to a MailPoet account

Once you’ve created an account, you can simply connect it to your WordPress website. 

2. Configure SMTP plugin settings

Now, you can configure your SMTP plugin settings. In your WordPress dashboard, go to MailPoet → Settings and select the Send With tab.

MailPoet sending service settings

Here, you can select MailPoet Sending Service to reroute all WordPress emails through the plugin’s SMTP solution. This is available for free for up to 5,000 emails per month. 

You also have the option to connect the plugin to another SMTP service like SendGrid and Amazon SES. To do this, select Other and follow the instructions to connect to your chosen service. 

How to test your WordPress SMTP setup

Most SMTP plugins have a built-in test email feature. You can use it to send a message to your own address and confirm a successful delivery.

You can also submit a form from your website to make sure that contact form notifications are delivered correctly. This is especially useful for checking formatting, reply-to headers, and other content. 

If you use WooCommerce, you could even do a test purchase to check that order notifications and confirmations are received. 

Once you’ve received your test email, you’ll want to examine its headers to ensure that your domain is passing authentication checks. This also helps you confirm that your DNS records (SPF, DKIM, and DMARC) are properly configured and that your SMTP connection is working as it should.

Open the email in your inbox and locate the option to “view original message” or “view headers.” This varies by email client, so you might need to refer to their documentation to see how it’s done. 

If you’re using Gmail, click on the three dots near the sender information and look for the Show original option.

Show original option in Gmail

In the page that opens, you’ll see more information about the message. Look for the following authentication results:

  • spf=pass: Shows that the IP address sending the email is authorized by the domain’s SPF record
  • dkim=pass: Confirms that the email hasn’t been altered during transmission, and that the signature matches the domain’s DKIM key
  • dmarc=pass: Means that SPF and/or DKIM align with your domain’s policies

If any of these checks fail, there might be a problem with your DNS configuration or how your SMTP service handles message signing.
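If you check headers often, extracting the verdicts can be scripted. Here is a small Python sketch that pulls the pass/fail results out of a raw `Authentication-Results` header; the header string below is a made-up example in the shape Gmail displays, not real output:

```python
import re

def parse_auth_results(header: str) -> dict:
    """Extract the spf/dkim/dmarc verdicts from an Authentication-Results header."""
    results = {}
    for check in ("spf", "dkim", "dmarc"):
        match = re.search(rf"\b{check}=(\w+)", header)
        if match:
            results[check] = match.group(1)
    return results

# A fabricated header shaped like the one shown under Gmail's "Show original":
header = ("Authentication-Results: mx.google.com; "
          "spf=pass (sender IP is 192.0.2.1); "
          "dkim=pass header.d=example.com; "
          "dmarc=pass (p=NONE) header.from=example.com")

verdicts = parse_auth_results(header)
print(verdicts)  # {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'pass'}

all_pass = all(v == "pass" for v in verdicts.values())
```

Any verdict other than `pass` points you at the record (SPF, DKIM, or DMARC) that needs attention.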

Troubleshooting common SMTP issues

Setting up SMTP is a straightforward process, but you might encounter some issues. Let’s look at the most commonly reported problems, and how to solve them.

Invalid SMTP credentials

Incorrect credentials are one of the most common SMTP issues. So, you’ll want to double-check your username and password or API key. 

If your provider requires an app-specific password, make sure that you’ve generated it correctly in your email provider’s account settings. If you’re using OAuth authentication, check that the token is still valid and hasn’t expired.

Ports blocked by hosting provider

To prevent spam, some web hosting providers block outbound SMTP traffic on ports like 587 and 465. 

If you’ve done everything correctly but your test email fails to send, you’ll want to reach out to your host. They may need to unblock the necessary ports, or provide an alternative method for sending emails via SMTP.
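Before contacting your host, you can test port reachability yourself. This is a minimal Python sketch using the standard `socket` module; the hostname is a placeholder for your provider's actual SMTP host:

```python
import socket

def smtp_port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, which is a
    quick way to tell whether outbound SMTP traffic is being blocked."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the common submission ports against your provider's SMTP host, e.g.:
# for port in (587, 465, 25):
#     print(port, smtp_port_open("smtp.example.com", port))
```

If the connection fails from your server but succeeds from your local machine, the host is almost certainly filtering the port.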

DNS records not propagating

Services like SendGrid, Mailgun, and Amazon SES typically require DNS verification to confirm domain ownership and allow email sending. This involves setting up SPF, DKIM, and sometimes DMARC records. 
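For reference, the three records typically look like the zone-file entries sketched below. These are illustrative values only; your SMTP service supplies the exact hostnames, selectors, and contents:

```text
; Illustrative zone-file entries; replace every value with what your provider supplies
example.com.               IN TXT "v=spf1 include:sendgrid.net ~all"
s1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."  ; public key, truncated here
_dmarc.example.com.        IN TXT "v=DMARC1; p=none; rua=mailto:postmaster@example.com"
```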

These records usually take up to 48 hours to propagate. Until propagation completes, your emails might fail authentication and be marked as suspicious. You can use a DNS propagation checker to monitor their status.

Once the propagation period is over, send a test email again. If it’s not working, there might be another issue. 

Plugin conflicts

If you have multiple plugins on your website that are trying to modify mail settings, it can lead to conflicts that affect email delivery. You might also encounter an issue if you have a form builder on your website that’s not compatible with the SMTP plugin or service that you’re using. 

If emails suddenly stop working, try deactivating recently installed or updated plugins. You’ll want to focus on tools related to email, website security, or performance optimization. 

If your emails work when a particular plugin is deactivated, investigate the issue by reaching out to its developers. 

Two-factor authentication and app passwords

If your account has two-factor authentication (2FA) enabled, you might not be able to use your regular password for SMTP. 

Most providers like Gmail and Outlook offer the option to create an app-specific password, and you’ll want to use this in your SMTP plugin (instead of your main account password). If you’re unsure, you’ll likely find information about 2FA and passwords in your provider’s documentation.

Gmail API quota exceeded

If you’ve configured SMTP using Gmail’s API, you’ll likely encounter daily sending limits. Free Gmail accounts are limited to 500 messages per day. Google Workspace accounts may have higher limits. 

If you exceed these quotas, your emails will fail to send until the quota resets. So, you’ll want to monitor usage and consider switching to a premium service if your traffic grows.

Frequently asked questions (FAQ)

Finally, let’s answer some common questions about setting up SMTP in WordPress. 

What’s the difference between WordPress default email and SMTP?

The default WordPress method (wp_mail()) uses your server’s basic mail function and therefore lacks authentication measures. Meanwhile, SMTP connects to a mail server with secure credentials and encryption, improving email deliverability.

What types of WordPress emails does SMTP improve?

SMTP helps with all emails sent from your WordPress site, including contact form submissions, new user registrations, password reset requests, WooCommerce order confirmations, newsletters, and more. 

How much does it cost to set up SMTP for a WordPress website?

You can set up SMTP for free when using a plugin like MailPoet. Additionally, Gmail, Outlook, and other email providers offer free SMTP services with some limitations, while premium services like SendGrid or Mailgun offer free tiers and charge based on volume.

Do I need to pay for an SMTP service?

Not necessarily. You can use MailPoet, which has a free version that works for most small sites. If you have a more complex site with higher email volumes, you might need to get a paid SMTP service that offers better deliverability, analytics, and support.

Can I use Gmail SMTP for WordPress emails?

Yes, you can connect to Gmail SMTP, but you’ll likely need to enable 2FA and create an app-specific password.

Can I use Outlook or Office 365 SMTP for WordPress emails?

Yes. To do this, use smtp.office365.com as the host and port 587 with TLS. You’ll also need to add your full email address as the SMTP username, and generate an app password if you have 2FA enabled.

Do I need technical skills to configure SMTP in WordPress?

No. Most SMTP plugins offer guided setup wizards. Just follow the prompts and enter information like sender name and email address. If you encounter any issues, you can always contact the plugin developers or your mail providers for assistance. 

Can I use SMTP with any WordPress contact form plugin?

Yes. SMTP works with most major form plugins, including Jetpack Forms, WPForms, Ninja Forms, Contact Form 7, and Gravity Forms.

What are recommended contact form plugins for WordPress?

Jetpack Forms is a powerful form plugin by Automattic, the same people behind WordPress.com. It comes with pre-made templates for lead capture, registration, feedback, contact, and other forms. 

It integrates seamlessly with the block editor, and you can add a form on any page or post on your website. Simply add the Form block, choose a template, and customize the fields and appearance to suit your needs. 

Jetpack Forms is free and is included with the default Jetpack plugin. It also integrates with Akismet to prevent spam and Jetpack AI Assistant to create nearly any form with a simple text prompt. 




