
Bringing SendGrid and Segment to Twilio.com: A More Unified Web Experience

SendGrid and Segment are merging into Twilio.com. Learn what’s changing, what stays the same, and how a unified platform helps you build faster.

F# Weekly #4, 2026 – F# event / (un)conference in 2026?


Welcome to F# Weekly,

A roundup of F# content from this past week:

News

I wonder… would there be any interest in some F# event / (un)conference in 2026? I always wanted to do something similar to the F# Creators Workshop (which @dsyme.bsky.social organised years ago in Cambridge) or Elm Camp (which has been running for the last couple of years). #fsharp

Krzysztof Cieslak (@kcieslak.io) 2026-01-21T18:53:56.833Z

Videos

Blogs

Highlighted projects

New Releases

🚀 Agent.NET has evolved significantly since the alpha.1 announcement — alpha.2 and now alpha.3 bring proper MAF execution, symmetrical InProcess/Durable workflows, and a more cohesive workflow CE with ROP built in. Full release history: github.com/JordanMarr/A… #fsharp #dotnet #aiagents

Jordan Marr (@jordanmarr.bsky.social) 2026-01-21T16:46:54.511Z

That’s all for now. Have a great week.

If you want to help keep F# Weekly going, click here to jazz me with Coffee!


Day 24: Deployment Automation: Let AI Generate the Scary Parts


Migrations are scary.

Not the code. The code is usually simple. It’s the stakes. Migrations run once. They touch production data. They can’t always be undone. Get it wrong, and you’re in trouble.

Deployment scripts, rollback procedures, runbooks. Same story. You write them hoping you’ll never need them. You don’t test them because testing them is almost as scary as needing them.

This is exactly the kind of work AI should help with. Not because AI writes perfect deployment scripts, but because AI writes first drafts that include the things you’d forget. Rollback steps. Verification queries. The edge cases that bite you when you’re already stressed.

Having something to review beats staring at a blank file wondering what you’re forgetting.

The Migration Generation Prompt

For database migrations:

Generate a database migration for this change.

Current schema:
[describe or paste current state]

Desired schema:
[describe what you want]

Database: [PostgreSQL/MySQL/SQLite/MongoDB]
ORM: [Prisma/TypeORM/Drizzle/raw SQL]

Generate:
1. Forward migration (up)
2. Rollback migration (down)
3. Data migration if needed
4. Verification query to confirm success

Consider:
- Table locks (can this run without downtime?)
- Index creation (CONCURRENTLY where possible)
- Default values for new non-null columns
- Data transformation for existing rows
- Foreign key constraints

A Real Migration Example

I needed to add a status field to a trades table with millions of rows.

My prompt:

Generate a PostgreSQL migration.

Current: trades table has no status field
Desired: trades table has status field (enum: pending, completed, cancelled)

Requirements:
- Zero downtime
- Table has 2 million rows
- All existing trades should be 'completed'
- New trades will set status explicitly

AI generated:

-- Migration: add_trade_status
-- Generated: 2024-01-15

-- Step 1: Create enum type
CREATE TYPE trade_status AS ENUM ('pending', 'completed', 'cancelled');

-- Step 2: Add column with default (allows concurrent writes)
ALTER TABLE trades
ADD COLUMN status trade_status DEFAULT 'completed';

-- Step 3: Backfill in batches (avoids long locks)
-- Run this separately, monitoring for lock contention
DO $$
DECLARE
  batch_size INT := 10000;
  updated INT;
BEGIN
  LOOP
    UPDATE trades
    SET status = 'completed'
    WHERE id IN (
      SELECT id FROM trades
      WHERE status IS NULL
      LIMIT batch_size
      FOR UPDATE SKIP LOCKED
    );

    GET DIAGNOSTICS updated = ROW_COUNT;
    EXIT WHEN updated = 0;

    RAISE NOTICE 'Updated % rows', updated;
    PERFORM pg_sleep(0.1); -- Brief pause between batches
  END LOOP;
END $$;

-- Step 4: After backfill complete, add NOT NULL constraint
ALTER TABLE trades ALTER COLUMN status SET NOT NULL;

-- Step 5: Remove default (new inserts must specify)
ALTER TABLE trades ALTER COLUMN status DROP DEFAULT;

-- Verification
SELECT status, COUNT(*) FROM trades GROUP BY status;
-- Expected: all rows should be 'completed'

-- Rollback (if needed before NOT NULL constraint)
ALTER TABLE trades DROP COLUMN status;
DROP TYPE trade_status;

The AI knew about batched updates, SKIP LOCKED, and the order of operations. Things I might have gotten wrong writing it myself.

The Rollback Script Prompt

For any deployment:

Generate a rollback script for this deployment.

What's being deployed:
[describe the changes]

Dependencies:
[what other systems are affected]

Data changes:
[any data migrations]

Generate:
1. Rollback steps in order
2. Verification after each step
3. Point of no return (if any)
4. Estimated rollback time
5. Who needs to be notified

Consider:
- Can data changes be reversed?
- What if new data was created after deploy?
- What services need to be restarted?
- What caches need to be cleared?
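
As with the migration above, the point is the structure: ordered steps, each followed by a check before you move on. Here is a rough sketch of what a generated rollback runner could look like in Python; the commands, URL, and step names are hypothetical placeholders, not output from any particular tool.

# rollback.py -- sketch of an ordered rollback with a verification after each step.
# Every command, URL, and name below is a hypothetical placeholder.
import subprocess
import sys
import urllib.request

def service_healthy() -> bool:
    # Verify the service answers 200 on its (placeholder) health endpoint.
    try:
        with urllib.request.urlopen("https://example.internal/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

# (description, command to run, verification to run afterwards)
STEPS = [
    ("Redeploy previous image tag",
     ["kubectl", "rollout", "undo", "deployment/web"],  # placeholder command
     service_healthy),
    ("Clear application cache",
     ["redis-cli", "FLUSHDB"],                          # placeholder command
     lambda: True),                                     # no automated check; verify manually
]

for description, command, verify in STEPS:
    print(f"== {description}")
    subprocess.run(command, check=True)                 # abort immediately if a step fails
    if not verify():
        sys.exit(f"Verification failed after: {description}. Stop and escalate.")

print("Rollback complete. Notify stakeholders per the checklist.")

The point of no return and the estimated timing still belong in the written plan; a script like this only automates the mechanical part.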

The Deployment Checklist Prompt

For complex deployments:

Create a deployment checklist for this feature.

Feature: [describe it]
Components affected: [list services, databases, etc.]
Dependencies: [what must happen in order]

Generate a checklist covering:

Pre-deployment:
- Tests to run
- Approvals needed
- Communication to send
- Maintenance windows

Deployment steps:
- Exact sequence
- Commands to run
- Verification after each step
- Expected behavior

Post-deployment:
- Smoke tests
- Monitoring to watch
- How to confirm success
- Timeline for all-clear

Rollback triggers:
- What indicates we should roll back
- Decision criteria

The Runbook Prompt

For when things go wrong:

Create a runbook for when this breaks in production.

Feature: [describe it]
Common failure modes: [what you expect could go wrong]

For each failure mode, create a section:
1. How to identify this failure (symptoms)
2. Immediate mitigation steps
3. Root cause investigation
4. Resolution steps
5. Verification
6. Communication templates

The runbook should be usable by an on-call engineer at 3am who isn't
familiar with this feature.

Infrastructure as Code

When you need infrastructure:

Generate Terraform/CloudFormation/Pulumi for this infrastructure.

What I need:
[describe the infrastructure]

Cloud provider: [AWS/GCP/Azure]
Requirements:
- [security requirements]
- [networking requirements]
- [scaling requirements]

Generate:
1. Resource definitions
2. Variables for configuration
3. Outputs needed
4. Documentation comments

Follow:
- Least privilege principle
- No hardcoded secrets
- Meaningful resource names
- Tags for cost tracking
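
For a sense of what the generated definitions look like, here is a minimal sketch using Pulumi's Python SDK (assuming the pulumi and pulumi_aws packages). The bucket, config keys, and tags are hypothetical placeholders, not a recommended layout.

# __main__.py -- minimal Pulumi sketch: one hypothetical private S3 bucket.
# Config keys, names, and tags are placeholders.
import pulumi
import pulumi_aws as aws

# Variables for configuration: read from `pulumi config`, never hardcoded.
config = pulumi.Config()
environment = config.require("environment")  # e.g. "staging" or "production"
team = config.require("team")                # used for cost-tracking tags

# Resource definition, tagged for cost tracking.
artifacts_bucket = aws.s3.Bucket(
    f"artifacts-{environment}",
    tags={
        "Environment": environment,
        "Team": team,
        "ManagedBy": "pulumi",
    },
)

# Output that other stacks and scripts can read.
pulumi.export("artifacts_bucket_name", artifacts_bucket.id)

Secrets stay out of the file entirely; anything sensitive goes through pulumi config set --secret or the provider's own secret store.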

CI/CD Pipeline Generation

For deployment pipelines:

Generate a CI/CD pipeline for this project.

Platform: [GitHub Actions/GitLab CI/CircleCI/Jenkins]
Tech stack: [languages, frameworks]

Pipeline should:
1. Run tests on PR
2. Build and verify
3. Deploy to staging on merge to main
4. Deploy to production manually or on tag

Include:
- Caching for speed
- Parallel jobs where possible
- Environment secrets handling
- Deployment notifications
- Rollback capability

The Pre-Deployment Verification Prompt

Before you deploy:

Help me verify this deployment is ready.

Changes: [describe what's being deployed]
Migration: [yes/no, describe if yes]

Verify:
1. All tests pass?
2. Migration tested on staging?
3. Migration is reversible?
4. Feature flags in place?
5. Monitoring dashboards ready?
6. Rollback plan documented?
7. Stakeholders notified?
8. On-call aware?

What else should I check before deploying?

The Post-Deployment Verification Prompt

After you deploy:

Generate post-deployment verification steps.

What was deployed: [describe]
Expected behavior: [what should work now]

Generate:
1. Smoke tests to run manually
2. Queries to verify data integrity
3. Metrics to check
4. Log patterns to look for
5. Timeline: when can we call this successful?
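
Those steps can also come back as a small script instead of a document. A rough sketch using only the Python standard library, with a hypothetical health endpoint and latency threshold; swap in your own URLs, fields, and expectations.

# smoke_test.py -- post-deployment smoke test sketch. URL and threshold are placeholders.
import json
import sys
import time
import urllib.request

HEALTH_URL = "https://example.internal/health"  # hypothetical endpoint
MAX_LATENCY_SECONDS = 1.0                       # hypothetical threshold

def check_health() -> bool:
    # The service should answer 200 quickly and report the newly deployed version.
    start = time.monotonic()
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            status = resp.status
            body = json.loads(resp.read())
    except (OSError, ValueError) as exc:
        print(f"health check failed: {exc}", file=sys.stderr)
        return False
    elapsed = time.monotonic() - start
    print(f"status={status} latency={elapsed:.2f}s version={body.get('version')}")
    return status == 200 and elapsed < MAX_LATENCY_SECONDS

if __name__ == "__main__":
    sys.exit(0 if check_health() else 1)

Data-integrity queries (like the GROUP BY count from the migration example) fit the same pattern: run them, compare against expectations, exit non-zero on surprises.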

Feature Flag Integration

For safe rollouts:

Add feature flag to this deployment.

Feature: [describe]
Flag name: [suggested name]

Generate:
1. Flag check code
2. Gradual rollout plan (1%, 10%, 50%, 100%)
3. Metrics to monitor at each stage
4. Kill switch procedure
5. Flag cleanup after full rollout
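
For the flag check itself, the common trick is deterministic bucketing: hash a stable user ID into the range 0 to 99 and compare it against the rollout percentage, so the same user stays in or out as you ramp from 1% to 100%. A minimal sketch follows; the flag name and the hardcoded dict standing in for a flag service are hypothetical.

# flags.py -- percentage rollout sketch: deterministic bucketing plus a kill switch.
import hashlib

# Hypothetical flag store; in practice this comes from a flag service or config.
FLAGS = {
    "new-trade-status": {"enabled": True, "rollout_percent": 10},
}

def bucket(flag_name: str, user_id: str) -> int:
    # Map (flag, user) to a stable bucket in [0, 100).
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(flag_name: str, user_id: str) -> bool:
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:  # kill switch: set enabled to False
        return False
    return bucket(flag_name, user_id) < flag["rollout_percent"]

# Ramp by raising rollout_percent (1 -> 10 -> 50 -> 100), watching metrics at each stage.
if is_enabled("new-trade-status", user_id="user-123"):
    ...  # new code path
else:
    ...  # existing behavior

Once the flag has sat at 100% long enough to trust, delete the check and the flag entry so the cleanup step in the prompt actually happens.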

The Deployment Documentation Prompt

For the future:

Document this deployment for future reference.

What was deployed: [describe]
When: [date]
Why: [reason for the change]

Generate documentation covering:
1. What changed (technical summary)
2. How to verify it's working
3. Known issues or limitations
4. How to roll back
5. Related future work
6. Who to contact with questions

Tomorrow

You’ve deployed. The feature is live. But the code is messy. AI writes working code, but working doesn’t mean maintainable. Tomorrow I’ll show you how to refactor AI code from working to maintainable.


Try This Today

  1. Think of a deployment you have coming up
  2. Run the deployment checklist prompt
  3. See what you forgot

The checklist you generate won’t be perfect for your organization. But it’s a starting point. Customize it, save it, reuse it.

The goal isn’t for AI to deploy for you. It’s for AI to help you not forget the boring but critical stuff.


0.0.394


2026-01-24

  • Deduplicate identical model instruction files to save context
  • Exit summary displays accurate usage metrics instead of zeros
  • Getting git branch works in repositories with no commits
  • Add support for GitHub Enterprise Cloud (*.ghe.com) in /delegate command
  • Directory path uses consistent muted text color with git branch and model display
  • Plugin skills work in agent responses
  • Timeline hides startup messages to reduce noise
  • Fixed timeline entry regression where read_agent and other tools showed incorrect content
  • Git status updates on-demand instead of polling every 15 seconds
  • SDK supports infinite sessions with automatic context compaction
  • Memory loading errors are handled gracefully without user warnings
  • /delegate command accepts optional prompt, uses conversation context
  • Auto-update no longer removes old CLI package versions
  • Improve task completion with clearer detached process guidance
  • Simplified bottom bar by hiding some keyboard hints
  • Queue slash commands alongside messages using Ctrl+D
  • Press / to search sessions in /resume

python-1.0.0b260123


[1.0.0b260123] - 2026-01-23

Added

  • agent-framework-azure-ai: Add support for rai_config in agent creation (#3265)
  • agent-framework-azure-ai: Support reasoning config for AzureAIClient (#3403)
  • agent-framework-anthropic: Add response_format support for structured outputs (#3301)

Changed

  • agent-framework-core: [BREAKING] Simplify content types to a single class with classmethod constructors (#3252)
  • agent-framework-core: [BREAKING] Make response_format validation errors visible to users (#3274)
  • agent-framework-ag-ui: [BREAKING] Simplify run logic; fix MCP and Anthropic client issues (#3322)
  • agent-framework-core: Prefer runtime kwargs for conversation_id in OpenAI Responses client (#3312)

Fixed

  • agent-framework-core: Verify types during checkpoint deserialization to prevent marker spoofing (#3243)
  • agent-framework-core: Filter internal args when passing kwargs to MCP tools (#3292)
  • agent-framework-core: Handle anyio cancel scope errors during MCP connection cleanup (#3277)
  • agent-framework-core: Filter conversation_id when passing kwargs to agent as tool (#3266)
  • agent-framework-core: Fix use_agent_middleware calling private _normalize_messages (#3264)
  • agent-framework-core: Add system_instructions to ChatClient LLM span tracing (#3164)
  • agent-framework-core: Fix Azure chat client asynchronous filtering (#3260)
  • agent-framework-core: Fix HostedImageGenerationTool mapping to ImageGenTool for Azure AI (#3263)
  • agent-framework-azure-ai: Fix local MCP tools with AzureAIProjectAgentProvider (#3315)
  • agent-framework-azurefunctions: Fix MCP tool invocation to use the correct agent (#3339)
  • agent-framework-declarative: Fix MCP tool connection not passed from YAML to Azure AI agent creation API (#3248)
  • agent-framework-ag-ui: Properly handle JSON serialization with handoff workflows as agent (#3275)
  • agent-framework-devui: Ensure proper form rendering for int (#3201)

Leak: Nvidia is about to challenge ‘Intel Inside’ with as many as eight Arm laptops

This is not an Nvidia Arm laptop, but the old image seemed thematically appropriate.

Intel and AMD have split the Windows laptop market for years, but the x86 players may be getting outnumbered. It's not just Apple MacBooks and MediaTek-based Chromebooks using Arm chips anymore. There are finally competent Qualcomm Snapdragon laptops running Windows, and - as soon as this spring - Nvidia will finally power Windows consumer laptops with Arm chips all by itself.

They won't have an Nvidia graphics chip next to an Intel CPU, but rather an Nvidia N1 system-on-chip at the helm - and overnight, a Lenovo leak revealed that the company has built six laptops on the upcoming N1 and N1X processors, including a 15-inch gaming machine.

Read the full story at The Verge.
