Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

AWS Launches Generative AI Essentials Course on Coursera and edX

1 Share

The course teaches developers Amazon Q, Bedrock, security guardrails, and agent workflows.

The post AWS Launches Generative AI Essentials Course on Coursera and edX appeared first on TechRepublic.

Read the whole story
alvinashcraft
29 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

AI’s Infrastructure Boom: Opportunity, Responsibility, and the Race for Sustainable Scale


Welcome to the Cloud Wars Minute — your daily cloud news and commentary show. Each episode provides insights and perspectives around the “reimagination machine” that is the cloud.

In today’s Cloud Wars Minute, I explore how the AI infrastructure boom is reshaping the global economy — and why sustainability has become a defining battleground for the industry’s biggest players.

Highlights

00:09 — The biggest AI firms in the world are currently in the midst of the largest and fastest global infrastructure build-out since the Industrial Revolution. With this infrastructure rally, we’re seeing investment in underserved geographies and communities, job creation on a massive scale, and the bare bones that will power the most transformative technological revolution in history.

00:35 — But we’re also seeing something else: the potential, and I want to reiterate the word potential here, for dramatic environmental consequences if issues like clean water, clean energy, and the extraction of raw materials aren’t adequately addressed. For the most part, and for most leaders, they are.

00:58 — But this isn’t simply a checkbox. It’s an ongoing, evolving initiative that’s become an integral part of strategic planning for AI innovations. In one of the latest examples, Microsoft has agreed to a deal with Varaha, an Indian startup that works with smallholders in Asia on carbon removal projects.

AI Agent & Copilot Summit is an AI-first event to define opportunities, impact, and outcomes with Microsoft Copilot and agents. Building on its 2025 success, the 2026 event takes place March 17-19 in San Diego. Get more details.

01:23 — Microsoft has committed to acquiring over 100,000 tons of carbon dioxide removal credits over the coming three years through the startup. In practice, this will equate to the building of 18 industrial gasification reactors that will convert cotton stalks from smallholder farms in India into biochar.

01:55 — As well as supporting Microsoft’s goals, the project will improve air quality in India’s Maharashtra region by utilizing crop residue that would otherwise be burnt in the open air. These are unprecedented times, and they call for creative thinking. That’s one thing Microsoft and its fellow Cloud Wars leaders have access to in abundance.

02:29 — This is leading to the emergence of new schemes that not only ensure companies are fulfilling their corporate promises and responsibilities, but also have knock-on effects on the communities where these schemes are based.


The post AI’s Infrastructure Boom: Opportunity, Responsibility, and the Race for Sustainable Scale appeared first on Cloud Wars.


Microsoft is experimenting with a top menu bar for Windows 11


Microsoft's PowerToys team is contemplating building a top menu bar for Windows 11, much like Linux, macOS, or older versions of Windows. The menu bar, or Command Palette Dock as Microsoft calls it, would be a new optional UI that provides quick access to tools, monitoring of system resources, and much more.

Microsoft has provided concept images of what it's looking to build, and is soliciting feedback on whether Windows users would use a PowerToy like this. "The dock is designed to be highly configurable," explains Niels Laute, a senior product manager at Microsoft. "It can be positioned on the top, left, right, or bottom edge of the scree …

Read the full story at The Verge.


82 percent of hackers now use AI

A future of cybersecurity powered by AI promises a world where it's not just about defending against threats, but preemptively shaping a resilient digital landscape. Of course, the technology is equally attractive to attackers, which means ethical hackers need to adopt it too. A new study from Bugcrowd finds that 82 percent of hackers now use AI in their workflows, up from 64 percent in 2023, with AI primarily used for automating tasks, accelerating learning, and analyzing data. The report’s findings show a decisive shift toward human-augmented intelligence, with hackers integrating AI into their workflows at significantly higher… [Continue Reading]

The Five Skills I Actually Use Every Day as an AI PM (and How You Can Too)

This post first appeared on Aman Khan’s AI Product Playbook newsletter and is being republished here with the author’s permission.

Let me start with some honesty. When people ask me “Should I become an AI PM?” I tell them they’re asking the wrong question.

Here’s what I’ve learned: Becoming an AI PM isn’t about chasing a trendy job title. It’s about developing concrete skills that make you more effective at building products in a world where AI touches everything.

Every PM is becoming an AI PM, whether they realize it or not. Your payment flow will have fraud detection. Your search bar will have semantic understanding. Your customer support will have chatbots.

Think of AI product management as less of an OR and more of an AND. For example: AI x health tech PM or AI x fintech PM.

The Five Skills I Actually Use Every Day

This post was adapted from a conversation with Aakash Gupta on The Growth Podcast. You can find the episode here.

After ~9 years of building AI products (the last three of which have been a complete ramp-up using LLMs and agents), here are the skills I use constantly—not the ones that sound good in a blog post, but the ones I literally used yesterday.

  • AI prototyping
  • Observability, akin to telemetry
  • AI evals: The New PRD for AI PMs
  • RAG versus fine-tuning versus prompt engineering
  • Working with AI engineers

1. Prototyping: Why I code every week

Last month, our design team spent two weeks creating beautiful mocks for an AI agent interface. It looked perfect. Then I spent 30 minutes in Cursor building a functional prototype, and we immediately discovered three fundamental UX problems the mocks hadn’t revealed.

The skill: Using AI-powered coding tools to build rough prototypes.
The tool: Cursor. (It’s VS Code but you can describe what you want in plain English.)
Why it matters: AI behavior is impossible to understand from static mocks.

How to start this week:

  1. Download Cursor.
  2. Build something stupidly simple. (I started with a personal website landing page.)
  3. Show it to an engineer and ask what you did wrong.
  4. Repeat.

You’re not trying to become an engineer. You’re trying to understand constraints and possibilities.

2. Observability: Debugging the black box

Observability is how you actually peek underneath the hood and see how your agent is working.

The skill: Using traces to understand what your AI actually did.
The tool: Any APM that supports LLM tracing. (We use our own at Arize, but there are many.)
Why it matters: “The AI is broken” is not actionable. “The context retrieval returned the wrong document” is.
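To make the idea concrete, here's a minimal, hypothetical trace recorder in plain Python. Real tools (Arize, OpenTelemetry-based tracers, and other APMs) are far richer, but the core concept is the same: every pipeline step becomes a named span with attributes you can inspect afterwards. The span names and attributes below are illustrative, not any vendor's API.

```python
import time
from contextlib import contextmanager

# A minimal, hypothetical trace recorder; real APMs provide richer versions of this.
TRACE = []

@contextmanager
def span(name, **attrs):
    """Record the duration and attributes of one step in an LLM pipeline."""
    start = time.perf_counter()
    try:
        yield attrs
    finally:
        attrs["duration_s"] = time.perf_counter() - start
        TRACE.append((name, attrs))

# Example pipeline: each stage becomes a span you can inspect afterwards.
with span("retrieve", query="refund policy") as s:
    s["doc_id"] = "kb-042"          # pretend retrieval result
with span("generate", model="some-llm") as s:
    s["output"] = "Refunds take 5-7 days."

for name, attrs in TRACE:
    print(name, attrs)
```

When "the AI is broken," the trace is what lets you say "the retrieve span returned the wrong document" instead.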

Your first observability exercise:

  1. Pick any AI product you use daily.
  2. Try to trigger an edge case or error.
  3. Write down what you think went wrong internally.
  4. This mental model building is 80% of the skill.

3. Evaluations: Your new definition of “done”

If you haven’t checked it out yet, this is a primer on Evals I worked with Lenny on.

Vibe coding works if you’re shipping prototypes. It doesn’t really work if you’re shipping production code.

The skill: Turning subjective quality into measurable metrics.
The tool: Start with spreadsheets, graduate to proper eval frameworks.
Why it matters: You can’t improve what you can’t measure.

Build your first eval:

  1. Pick one quality dimension (conciseness, friendliness, accuracy).
  2. Create 20 examples of good and bad. Label them “verbose” or “concise.”
  3. Score your current system. Set a target: 85% of responses should be “just right.”
  4. That number is now your new North Star. Iterate until you hit it.
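The steps above fit in a few lines of Python before you ever need an eval framework. In this sketch, a trivial word-count rule stands in for your real labeling step (a human or an LLM-as-judge); the examples and the 30-word cutoff are made up for illustration.

```python
# A spreadsheet-sized eval: label responses, score them, compare to a target.
TARGET = 0.85  # 85% of responses should be "just right"

def label(response: str) -> str:
    # Hypothetical rule: anything over 30 words counts as "verbose".
    return "verbose" if len(response.split()) > 30 else "concise"

examples = [
    ("How do I reset my password?", "Click 'Forgot password' on the sign-in page."),
    ("Where is my order?", "Your order shipped yesterday and arrives Friday."),
    ("Cancel my plan.", " ".join(["word"] * 40)),  # deliberately verbose
]

scores = [label(resp) == "concise" for _, resp in examples]
pass_rate = sum(scores) / len(scores)
print(f"pass rate: {pass_rate:.0%} (target {TARGET:.0%})")
```

Once the pass rate is a number, "make the bot less wordy" becomes "raise this metric above 85%," which is something you can iterate against.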

4. Technical intuition: Knowing your options

Suppose you need your AI assistant’s responses to match your brand voice. Your options, roughly ordered by effort:

  • Prompt engineering (1 day): Add brand voice guidelines to the system prompt.
  • Few-shot examples (3 days): Include examples of on-brand responses.
  • RAG with style guide (1 week): Pull from our actual brand documentation.
  • Fine-tuning (1 month): Train a model on our support transcripts.

Each has different costs, timelines, and trade-offs. My job is knowing which to recommend.
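As a sketch of how the first two options differ in practice, here's what they look like in a generic chat-message format. The message shapes are assumptions (no specific vendor's API), and the actual model call is omitted.

```python
# Option 1 vs. Option 2: where the brand voice lives in the request.
BRAND_VOICE = "Friendly, concise, no jargon."

def prompt_engineering(user_msg):
    # Option 1: brand voice guidelines live in the system prompt only.
    return [
        {"role": "system", "content": f"You are our support bot. Voice: {BRAND_VOICE}"},
        {"role": "user", "content": user_msg},
    ]

def few_shot(user_msg):
    # Option 2: same system prompt plus worked on-brand examples.
    return prompt_engineering(user_msg)[:1] + [
        {"role": "user", "content": "Where's my refund?"},
        {"role": "assistant", "content": "It's on the way! Refunds land in 5-7 days."},
        {"role": "user", "content": user_msg},
    ]

print(len(prompt_engineering("Hi")), len(few_shot("Hi")))  # → 2 4
```

RAG and fine-tuning change the system rather than the request: RAG adds a retrieval step that injects style-guide passages, and fine-tuning bakes the voice into the weights.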

Building intuition without building models:

  1. When you see an AI feature you like, write down three ways they might have built it.
  2. Ask an AI engineer if you’re right.
  3. Wrong guesses teach you more than right ones.

5. The new PM-engineer partnership

The biggest shift? How I work with engineers.

Old way: I write requirements. They build it. We test it. Ship.

New way: We label training data together. We define success metrics together. We debug failures together. We own outcomes together.

Last month, I spent two hours with an engineer labeling whether responses were “helpful” or not. We disagreed on a lot of them. This taught me that I need to start collaborating on evals with my AI engineers.
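You can quantify a labeling session like that with simple percent agreement. The labels below are hypothetical; the point is that every disagreement is a spot where your eval criteria are underspecified.

```python
# Hypothetical labels from a PM and an engineer on the same 10 responses.
pm  = ["helpful", "helpful", "not", "helpful", "not",
       "helpful", "not", "helpful", "helpful", "not"]
eng = ["helpful", "not",     "not", "helpful", "helpful",
       "helpful", "not", "not",     "helpful", "not"]

# Percent agreement: the fraction of responses both raters labeled the same way.
agreement = sum(a == b for a, b in zip(pm, eng)) / len(pm)
print(f"agreement: {agreement:.0%}")
```

A low number isn't a failure; it's the agenda for your next eval-definition conversation.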

Start collaborating differently:

  • Next feature: Ask to join a model evaluation session.
  • Offer to help label test data.
  • Share customer feedback in terms of eval metrics.
  • Celebrate eval improvements like you used to celebrate feature launches.

Your Five-Week Transition Plan

Week 1: Tool setup

  • Install Cursor.
  • Get access to your company’s LLM playground.
  • Find where your AI logs/traces live.
  • Build one tiny prototype (took me three hours to build my first).

Week 2: Observation

  • Trace five AI interactions in products you use.
  • Document what you think happened versus what actually happened.
  • Share findings with an AI engineer for feedback.

Week 3: Measurement

  • Create your first 20-example eval set.
  • Score an existing feature.
  • Propose one improvement based on the scores.

Week 4: Collaboration

  • Join an engineering model review.
  • Volunteer to label 50 examples.
  • Frame your next feature request as eval criteria.

Week 5: Iteration

  • Take your learnings from prototyping and build them into a production proposal.
  • Set the bar with evals.
  • Use your AI intuition for iteration: which knobs should you turn?

The Uncomfortable Truth

Here’s what I wish someone had told me three years ago: You will feel like a beginner again. After years of being the expert in the room, you’ll be the person asking basic questions. That’s exactly where you need to be.

The PMs who succeed in AI are the ones who are comfortable being uncomfortable. They’re the ones who build bad prototypes, ask “dumb” questions, and treat every confusing model output as a learning opportunity.

Start this week

Don’t wait for the perfect course, the ideal role, or for AI to “stabilize.” The skills you need are practical, learnable, and immediately applicable.

Pick one thing from this post, commit to doing it this week, and then tell someone what you learned. This is how you’ll begin to accelerate your own feedback loop for AI product management.

The gap between PMs who talk about AI and PMs who build with AI is smaller than you think. It’s measured in hours of hands-on practice, not years of study.

See you on the other side.




Testing Python Code for Scalability & What's New in pandas 3.0


How do you create automated tests to check your code for degraded performance as data sizes increase? What are the new features in pandas 3.0? Christopher Trudeau is back on the show this week with another batch of PyCoder’s Weekly articles and projects.

Christopher digs into an article about building tests to make sure your software is fast, or at least doesn’t get slower as it scales. The piece focuses on testing Big-O scaling and its implications for algorithms.
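The idea can be sketched without any framework: count operations instead of wall-clock time so the test stays deterministic, then assert that doubling the input doesn't more than double the work for an algorithm that should be linear. The function below is a hypothetical stand-in for the code under test.

```python
# Scaling test sketch: count steps rather than timing, so results are deterministic.
def count_steps(data, target):
    steps = 0
    for item in data:          # linear search: steps grow with len(data)
        steps += 1
        if item == target:
            break
    return steps

small = count_steps(list(range(1_000)), -1)   # worst case: 1,000 steps
large = count_steps(list(range(2_000)), -1)   # worst case: 2,000 steps

ratio = large / small
assert ratio <= 2.5, f"worse than linear scaling: {ratio:.1f}x"
print(f"2x data -> {ratio:.1f}x steps (linear)")
```

A real suite would sample several sizes and fit the growth curve, but even this crude check catches an accidental O(n²) regression.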

We also discuss another article covering the top features in pandas 3.0, including the new dedicated string dtype, a cleaner way to perform column-based operations, and more predictable default copying behavior with Copy-on-Write.
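Based on the episode's description of Copy-on-Write (the default in pandas 3.0), here's a quick sketch of the behavior it guarantees: a DataFrame derived from another acts as an independent copy, so writing to it never mutates the parent. (This particular list-based selection also copies in older pandas versions; the bigger 3.0 change is that chained-assignment surprises go away entirely.)

```python
import pandas as pd

# Under Copy-on-Write, a derived DataFrame behaves like an independent copy.
df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

subset = df[["a"]]            # derived frame
subset.loc[0, "a"] = 99       # write to the child only

print(df.loc[0, "a"])         # parent is untouched
```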

We share several other articles and projects from the Python community, including a collection of recent releases and PEPs, a profiler for targeting individual functions, a quiz to test your Django knowledge, when to use each of the eight versions of UUID, the hard-to-swallow truths about being a software engineer, an offline reverse geocoding library, and a library for auto-generating CLIs from any Python object.

Our live Python cohorts start February 2, and we’re down to the last few seats. There are two tracks: Python for Beginners or Intermediate Deep Dive. Eight weeks of live instruction, small groups, and real accountability. Grab your seat at realpython.com/live.

This episode is sponsored by Honeybadger.

Course Spotlight: Intro to Object-Oriented Programming (OOP) in Python

Learn Python OOP fundamentals fast: master classes, objects, and constructors with hands-on lessons in this beginner-friendly video course.

Topics:

  • 00:00:00 – Introduction
  • 00:03:28 – Python 3.15.0 Alpha 4 Released
  • 00:03:50 – Django Bugfix Release: 5.2.10, 6.0.1
  • 00:04:22 – PEP 819: JSON Package Metadata
  • 00:04:41 – PEP 820: PySlot: Unified Slot System for the C API
  • 00:04:59 – PEP 822: Dedented Multiline String (d-String)
  • 00:06:04 – What’s New in pandas 3.0
  • 00:13:11 – pandas 3.0.0 documentation
  • 00:13:44 – Sponsor: Honeybadger
  • 00:14:30 – Unit Testing Your Code’s Performance
  • 00:17:51 – Introducing tprof, a Targeting Profiler
  • 00:23:03 – Video Course Spotlight
  • 00:24:31 – Django Quiz 2025
  • 00:24:56 – 8 Versions of UUID and When to Use Them
  • 00:29:17 – 10 hard-to-swallow truths they won’t tell you about software engineer job
  • 00:44:02 – gazetteer: Offline Reverse Geocoding Library
  • 00:46:13 – python-fire: A library for automatically generating command-line interfaces
  • 00:47:40 – Thanks and goodbye






Download audio: https://dts.podtrac.com/redirect.mp3/files.realpython.com/podcasts/RPP_E282_03_PyCoders.4852e6c3c1a7.mp3