Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

The day of the second killing

1 Share

Steven Garcia, as told to Gaby Del Valle:

I was in the middle of a frozen lake when I got the notification from the Minnesota Star Tribune that there had been a shooting. I was on assignment at a pond hockey event, and someone who was supposed to play later that evening said he probably wouldn't be able to make it - they knew there would be protests and demonstrations happening.

I arrived a little over three hours later. Federal officers had already cleared the scene - the FBI had been there investigating - so the only law enforcement present were state and local officials: the Minneapolis Police Department, their SWAT team, the Hennepin …

Read the full story at The Verge.

Read the whole story
alvinashcraft
5 hours ago
reply
Pennsylvania, USA
Share this story
Delete

Wake: Give Claude Code visibility into your terminal history


Wake is a terminal recording tool that gives Claude Code visibility into your development sessions. Instead of copy-pasting error messages or explaining what you just ran, Claude can see your complete terminal history.

The Problem

Claude Code can't see what happens in your terminal beyond the commands it directly executes. Build failures, error messages, debugging sessions — all invisible. You end up copy-pasting logs or describing what happened.

I got tired of this context-switching, so I built wake.

How It Works

I considered two approaches:

Shell hooks (preexec/precmd) give you command boundaries but miss the actual output — stdout/stderr bypass the shell entirely.

PTY wrapper wraps your shell in a pseudo-terminal, capturing every byte in both directions.

Wake combines both. Shell hooks provide structure (command start/end, exit codes) while the PTY layer captures output. They communicate via Unix socket.
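The PTY half of that design can be sketched in a few lines. Wake itself is written in Rust; this is a minimal Python illustration of the idea only (the function name is mine, not Wake's):

```python
import os
import subprocess

def run_in_pty(cmd):
    """Run cmd with a pseudo-terminal attached, capturing every output byte.

    Illustrates the PTY-wrapper approach only: because the child sees a real
    TTY, it behaves as it would interactively (colors, progress bars). A real
    tool like Wake also echoes the bytes to the user's screen and pairs them
    with shell-hook events over a socket.
    """
    master, slave = os.openpty()  # kernel-backed pseudo-terminal pair
    proc = subprocess.Popen(cmd, stdout=slave, stderr=slave, close_fds=True)
    os.close(slave)  # the child holds its own copy of the slave end

    chunks = []
    while True:
        try:
            data = os.read(master, 4096)
        except OSError:  # raised (EIO on Linux) once the slave side closes
            break
        if not data:
            break
        chunks.append(data)
    proc.wait()
    os.close(master)
    return b"".join(chunks)
```

Because the child is talking to a TTY, output from something like `run_in_pty(["ls", "--color=auto"])` arrives with its ANSI escape codes intact, which is exactly what a plain pipe would lose.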

Implementation

Built in Rust using portable-pty for cross-platform PTY handling and tokio for async I/O. The system stores both raw bytes (with ANSI codes) and plaintext, truncating output at 5MB by default.

All data stays local in SQLite (~/.wake/).
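A toy version of that storage layer, sketched in Python for illustration (the table and column names are my assumptions, not Wake's actual schema; Wake is Rust and keeps its database under ~/.wake/):

```python
import sqlite3

MAX_OUTPUT = 5 * 1024 * 1024  # 5MB truncation cap, matching Wake's default

def open_store(path=":memory:"):
    """Open a local history database. Schema is illustrative only."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS commands (
        id INTEGER PRIMARY KEY,
        session_id TEXT,
        cmd TEXT,
        exit_code INTEGER,
        raw_output BLOB,      -- bytes with ANSI codes intact
        plain_output TEXT     -- decoded text (a real tool would strip ANSI escapes here)
    )""")
    return db

def record(db, session_id, cmd, exit_code, raw):
    """Store one command's result, truncating oversized output."""
    raw = raw[:MAX_OUTPUT]
    plain = raw.decode("utf-8", errors="replace")
    db.execute(
        "INSERT INTO commands (session_id, cmd, exit_code, raw_output, plain_output)"
        " VALUES (?, ?, ?, ?, ?)",
        (session_id, cmd, exit_code, raw, plain),
    )
    db.commit()
```

Storing both the raw bytes and a plaintext rendering is the key design choice: the raw copy can replay a session faithfully, while the plaintext copy is what full-text search runs against.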

MCP Integration

Wake exposes tools through Claude's Model Context Protocol:

  • List recent terminal sessions
  • Retrieve commands from a session
  • Search across terminal history

Once connected, Claude Code can query your terminal history directly.
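Under the hood, a "search across terminal history" tool reduces to a query over that local store. A hedged sketch in Python (the `commands` table and its columns are hypothetical, as above; this is not Wake's documented MCP interface):

```python
import sqlite3

def search_history(db, needle, limit=20):
    """Return (session_id, cmd) rows whose captured output mentions needle.

    Models the kind of lookup an MCP search tool performs; the schema
    here is an assumption for illustration.
    """
    return db.execute(
        "SELECT session_id, cmd FROM commands "
        "WHERE plain_output LIKE ? ORDER BY id DESC LIMIT ?",
        (f"%{needle}%", limit),
    ).fetchall()
```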

Try It

curl -sSf https://raw.githubusercontent.com/joemckenney/wake/main/install.sh | sh

Add to your shell config:

eval "$(wake init zsh)"  # or bash

Connect to Claude Code:

claude mcp add wake-mcp -- wake-mcp

Then just run wake shell and work normally. When you ask Claude for help, it already knows what happened.

GitHub repo — feedback welcome.


Using Azure Device Provisioning Service with Self Signed X.509 Certificates – Part 1


For testing, demo and development reasons, it can be useful (and a fair amount cheaper!) to use self-signed X.509 certificates when using the Azure Device Provisioning Service.

This guide is only for testing and demo purposes…

Please don’t use Self-Signed X.509 certificates in production!

This can involve a few steps and comes with a few pitfalls… This guide will hopefully help you navigate your way through that and get your Azure Device Provisioning (DPS) setup and IoT Devices provisioning nicely!

What we’re going to do is create some self-signed X.509 certificates, create an Azure DPS instance and Enrollment Group, create a simulated device, provision it through our DPS and finally have it connect to an IoT Hub.

A word of caution… This is a long series of posts with lots of steps… However, there will also be a script available to just run for those who just want to get it done, rather than to learn why and how.

Contents:

  1. A Primer on DPS (This post!)
  2. Creating DPS and IoT Hub Instances and Linking Them
  3. Creating X.509 Certificates with OpenSSL
  4. Uploading to the Cloud
  5. Verifying X.509 Certificates (Proof of Possession)
  6. Creating X.509 Enrollment Groups
  7. Setting up a Simulated IoT Device (C#)

A Primer on Azure Device Provisioning Service (DPS)


What is Device Provisioning Service?

Let’s start with the basics… Before we write code or create Azure resources, we need to understand what DPS does and why it matters for your IoT deployment.

Azure IoT Hub Device Provisioning Service (DPS) is a helper service for IoT Hub that enables zero-touch, just-in-time provisioning to the correct IoT Hub without requiring human intervention.

The allocation of devices can be based on a set of rules including geographical location for latency, an evenly weighted distribution, a static configuration or a custom allocation based on pattern matching.
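Those allocation rules can be modelled as a tiny function. This is a toy illustration of the policies named above, not the real DPS service or SDK (the hub dictionaries and policy names are my own):

```python
def allocate_hub(device_region, hubs, policy="geo"):
    """Toy model of DPS-style allocation; not the actual Azure API.

    hubs: list of dicts like {"name": str, "region": str, "devices": int}
    """
    if policy == "geo":
        # Geographical allocation: prefer a hub in the device's own region
        # for latency, falling back to any hub, least-loaded first.
        local = [h for h in hubs if h["region"] == device_region]
        candidates = local or hubs
        return min(candidates, key=lambda h: h["devices"])["name"]
    if policy == "weighted":
        # Evenly weighted distribution: least-loaded hub wins.
        return min(hubs, key=lambda h: h["devices"])["name"]
    raise ValueError(f"unknown policy: {policy}")
```

A static configuration is just a lookup table keyed by enrollment, and a custom allocation policy replaces this function with your own webhook logic.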

Why Use DPS?

You might be wondering: “Can’t I just register devices manually in IoT Hub?” You can, but let’s see why that doesn’t scale:

Traditional Approach (Manual Registration)

  • Register each device manually in IoT Hub
  • Hardcode connection strings in device firmware
  • Difficult to reassign devices to different hubs
  • Doesn’t scale beyond a few devices

DPS Approach (Zero-Touch Provisioning)

  • Devices automatically register themselves
  • No connection strings hardcoded
  • Devices can be reassigned dynamically
  • Scales to millions of devices

Key Benefits

Now that you understand the difference, here are the concrete advantages of using DPS:

  • Scale: Provision thousands or millions of devices without manual intervention. Devices can self-register when they first come online.
  • Security: No connection strings hardcoded in device firmware. Devices use cryptographic attestation (symmetric keys, TPM, or X.509 certificates) to prove identity.
  • Flexibility: Devices can be reassigned to different IoT Hubs based on business logic, geolocation, or load balancing.
  • Multi-tenancy: Different devices can automatically be directed to different IoT Hubs based on enrollment configuration.

Attestation Methods

Attestation is how devices prove their identity to DPS. Think of it like showing ID before boarding a flight. DPS supports three attestation mechanisms:

1. Symmetric Key

  • Simplest method
  • Uses shared secret (enrollment group key)
  • Good for development and testing
  • Less secure than certificate-based methods

2. TPM (Trusted Platform Module)

  • Uses hardware security module
  • Very secure
  • Requires TPM chip on device
  • Unique per device

3. X.509 Certificate (This is what we’ll focus on)

  • Uses public key infrastructure (PKI)
  • Industry standard for security
  • Supports certificate rotation
  • Scalable with enrollment groups

High-Level Provisioning Flow

┌──────────┐
│  Device  │
└────┬─────┘
     │ 1. Initial connection with attestation
     ▼
┌─────────────────┐
│       DPS       │  2. Validates attestation
│                 │  3. Determines target IoT Hub
└────┬────────────┘
     │ 4. Returns assignment
     ▼
┌──────────┐
│  Device  │  5. Connects to assigned IoT Hub
└────┬─────┘
     │
     ▼
┌─────────────┐
│  IoT Hub    │  6. Device is registered and ready
└─────────────┘

When to Use DPS

✅ Use DPS when:

  • Deploying many devices
  • Devices need to be reassigned between hubs
  • You want zero-touch provisioning
  • Security is important (no hardcoded secrets)
  • You need multi-tenancy support

❌ Skip DPS when:

  • Only a few devices (< 10)
  • Devices never move between hubs
  • Prototyping with connection strings is acceptable
  • You don’t need automated provisioning

Next Steps

In the following sections, we’ll:

  1. Create DPS and IoT Hub instances
  2. Set up a complete X.509 certificate hierarchy
  3. Configure DPS to trust our certificates
  4. Create an enrollment group
  5. Provision a device using X.509 attestation

Next: Creating DPS and IoT Hub Instances >

The post Using Azure Device Provisioning Service with Self Signed X.509 Certificates – Part 1 first appeared on Pete Codes.



Setting Up Your Own Android Work Profile

Discover how to create a separate, easily managed work environment on your Android device—even if you're self-employed or outside a corporate setup—with

How to Actually Use AI at Work (When You’re Limited to Copilot)


How To Use AI at Work Effectively

“AI doesn’t help you do more work.
It helps you decide better, earlier — so less work is wasted.”
— JD Meier

Most advice on using AI at work assumes freedom you don’t have.

You’re limited by policies, tools, and accountability —
often to Copilot alone.

This is how AI actually creates leverage anyway.

Key Takeaways

  • Most advice about AI at work assumes freedom you don’t have; real leverage comes from working within constraints.

  • Prompts are a starting point, but they don’t scale — workflows do.

  • AI creates value when it reduces thinking friction and compresses time to clarity.

  • You don’t need agents or automation first; you need repeatable ways of shaping work.

  • When AI is embedded into workflows, impact spreads naturally — without forcing adoption.


Overview Summary

Most people encounter AI at work after something breaks:
teams get cut, scope stays the same, and your pressure increases.

They’re told to “use AI,” but are limited by policies, tools, and accountability.

Often to Copilot alone.

The result is frustration: AI is present, but nothing actually feels easier.

This guide explains how real progress with AI actually happens inside those constraints.

It walks through the natural progression most people experience:
from basic prompting, to shaping work, to embedding AI into workflows that compound value.

The focus isn’t on tools, agents, or clever prompts.

It’s on redesigning how work flows so fewer people can make better decisions, earlier, with less wasted effort.


The Situation Many People Are In

Teams get cut.
The work doesn’t.

People are told to “use AI,” but are constrained by:

  • enterprise policies

  • security rules

  • limited tools (often Copilot only)

  • no time to experiment

  • real accountability for outcomes

This creates a gap between expectation and capability.

The real question becomes:

How do I actually achieve more — safely, credibly, and within guardrails — instead of just “trying prompts”?


The Real Constraints (Let’s Name Them Clearly)

Most people are operating under four hard limits:

  1. Tool constraint
    Often limited to Copilot inside M365

  2. Data constraint
    No external uploads
    No customer data outside tenant
    No custom models

  3. Time constraint
    Fewer people, same scope, less slack

  4. Credibility constraint
    Outputs must be explainable, defensible, and reviewable

This means:

  • “Just automate everything” is fantasy

  • Prompt libraries won’t save anyone

  • Real leverage comes from how work is shaped, not which model is used


What “Doing More with AI” Actually Means

Not:

  • automation everywhere

  • replacing judgment

  • clever prompts

It means:

  • reducing thinking friction

  • compressing time-to-clarity

  • reshaping how work flows through a person

AI creates leverage only when the shape of work changes.


Where Most People Actually Start

Almost everyone starts with prompts.

They ask AI to:

  • draft

  • summarize

  • rewrite

  • polish

That’s not wrong.
It’s how you learn the surface area.

But it creates a trap:

AI becomes something you visit, not something that changes how work happens.

Progress stalls here.

This is when people ask:

  • “Do I need better prompts?”

  • “Do I need agents?”

  • “Do I need custom GPTs?”

Those feel like the next step — but they’re not.


The First Real Shift (and Most People Miss It)

The shift happens when someone notices:

“I keep doing the same kind of thinking over and over.”

Not tasks.
Thinking.

Examples:

  • turning messy input into a decision

  • preparing before meetings

  • explaining tradeoffs

  • figuring out what actually matters this week

This is where AI stops being about doing work
and starts being about shaping work.

Instead of asking:

“Can you write this?”

The question becomes:

“What does ‘good’ look like for this situation?”

Once that structure is named, AI suddenly becomes consistent.


When AI Starts to Feel Useful (Even with Copilot)

At this point:

  • the same questions get asked every week

  • the same outputs are useful every time

  • the same friction shows up in the same places

AI becomes a work surface, not a tool.

People stop thinking in terms of prompting and start thinking in terms of:

  • before / during / after

  • inputs / outputs

  • decisions / consequences

This works even in Copilot-only environments.

The leverage was never the model.
It was:

  • clarifying earlier

  • deciding sooner

  • reusing thinking instead of recreating it


Why Agents and Custom GPTs Disappoint Early

Agents and custom GPTs are tempting because they promise scale.

But they usually fail when introduced too early.

Why?

  • clarity isn’t stable yet

  • inputs aren’t disciplined

  • outputs aren’t trusted

  • judgment hasn’t been shaped

People try to scale before they know what should scale.

That’s why agents are usually organizational leverage — not personal leverage first.


How Impact Actually Scales Inside a Company

Scaling does not happen by sharing prompts.

That almost never sticks.

What spreads is:

  • clearer meetings

  • tighter briefs

  • better decisions

  • less rework

Others don’t copy the AI.
They copy the way work shows up.

AI made that easier — but the thing that scaled was the shape of the work.


The Quiet Truth Underneath Everything

AI doesn’t help you do more work.

It helps you decide better, earlier, so less work is wasted.

When that clicks:

  • speed increases

  • quality rises

  • energy comes back

  • fewer people can carry more responsibility without breaking

That’s the real progression.


Workflows for the Win

This is where leverage actually compounds.

Everything up to this point leads here.

Prompts don’t scale.
Tools don’t scale.
Even individual brilliance doesn’t scale.

Workflows do.

A workflow is simply:

  • when work starts

  • how it gets shaped

  • what “good” looks like at the end

  • and what happens next

This is where AI stops being helpful and starts being decisive.

The mistake most people make is thinking workflows are:

  • automation

  • process diagrams

  • enterprise initiatives

They’re not.

At the individual level, a workflow is just a repeatable way of thinking with support.

For example:

  • how the week gets framed

  • how meetings turn into decisions

  • how options get compared

  • how commitments get made

  • how learning feeds forward

When AI is embedded inside these moments, three things happen at once:

  1. Work stops restarting

    • Less re-explaining

    • Less re-deciding

    • Less cleanup after the fact

  2. Quality becomes consistent

    • Same questions get asked

    • Same criteria applied

    • Same standards enforced

  3. Others can step in

    • Because the thinking is visible

    • Because the output is predictable

    • Because judgment has a shape

This is why workflows beat prompts.

A prompt helps you once.
A workflow helps anyone every time.

And this is also why Copilot-only environments aren’t a blocker.

You don’t need agents to:

  • start every week the same way

  • extract the same decision signals from meetings

  • pressure-test assumptions before committing

  • close loops instead of leaving residue

You just need:

  • a defined input moment

  • a defined output artifact

  • AI consistently in the middle

Once that exists, scaling happens quietly.

Not because people are told to “use AI,”
but because they experience less friction and better outcomes.

They adopt the workflow because it works.
AI comes along for the ride.

That’s the win.


Final Thoughts

AI at work isn’t a tool problem.
It’s a work design problem.

The biggest gains don’t come from smarter models or more automation.

They come from clarifying earlier, deciding sooner, and reusing thinking instead of recreating it every week.

When AI is built into workflows — not bolted on as a side tool — work gets calmer, decisions get cleaner, and your capacity quietly returns.

That’s how AI actually earns its place at work.

You Might Also Like

AI Hub

Getting Started with AI

The AI Mindset

AI Metaphors

The post How to Actually Use AI at Work (When You’re Limited to Copilot) appeared first on JD Meier.


Don't "Trust the Process"



Jenny Wen, Design Lead at Anthropic (and previously Director of Design at Figma) gave a provocative keynote at Hatch Conference in Berlin last September.

Don't "Trust the process" slide, speaker shown on the left

Jenny argues that the Design Process - user research leading to personas leading to user journeys leading to wireframes... all before anything gets built - may be outdated for today's world.

Hypothesis: In a world where anyone can make anything — what matters is your ability to choose and curate what you make.

In place of the Process, designers should lean into prototypes. AI makes these much more accessible and less time-consuming than they used to be.

Watching this talk made me think about how AI-assisted programming significantly reduces the cost of building the wrong thing. Previously if the design wasn't right you could waste months of development time building in the wrong direction, which was a very expensive mistake. If a wrong direction wastes just a few days instead we can take more risks and be much more proactive in exploring the problem space.

I've always been a compulsive prototyper though, so this is very much playing into my own existing biases!

Via @jenny_wen

Tags: design, prototyping, ai, generative-ai, llms, ai-assisted-programming, vibe-coding
