Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Spotify will let you edit your Taste Profile to control your recommendations

When you edit your Taste Profile, you'll influence personalized features such as your Discover Weekly playlist, recommendations, and Wrapped.
Read the whole story
alvinashcraft
29 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Live Nation Execs Brag About 'Robbing' Ticket Buyers In Slack DMs

An anonymous reader quotes a report from Pitchfork: Earlier this week, the U.S. Department of Justice and Live Nation reached a settlement in the DOJ's antitrust lawsuit against the concert giant. During the trial, which lasted only a week, representatives for Live Nation had moved to exclude a collection of Slack direct messages from 2022 between two of the company's regional directors from the evidence presented to the jury. Bloomberg and a number of other publications have, as of today (March 12), successfully petitioned New York federal judge Arun Subramanian to release the chats. The conversations are between Ben Baker, now head of ticketing for Venue Nation, and Jeff Weinhold, currently a senior director in the ticketing department. Baker and Weinhold joke about overcharging and price-gouging fans -- "Robbing them blind, baby," Baker brags in one exchange pertaining to a Kid Rock show in Tampa Bay -- as well as being able to raise prices on ancillary services such as parking seemingly at will. "These people are so stupid," Baker writes. "I almost feel bad taking advantage of them BAHAHAHAHAHA." Live Nation described the messages as "off-the-cuff banter, not policy, decision-making, or facts of consequence." In a statement the company has since added: "The Slack exchange from one junior staffer to a friend absolutely doesn't reflect our values or how we operate."

Read more of this story at Slashdot.


Taylor Soper named director of Seattle’s AI House after remarkable run at GeekWire

Editor and reporter Taylor Soper joined GeekWire in 2012 out of the University of Washington. (GeekWire Photo / Kurt Schlosser)

After more than 13 years as a GeekWire reporter and editor, Taylor Soper is preparing for his next big assignment: he’ll soon join AI2 Incubator as director of AI House, the Seattle startup hub that has quickly become a gathering place for AI founders, practitioners, and researchers.

This is a big change for all of us. We are going to miss Taylor deeply, and we know GeekWire’s readers and community will, as well. But we’re excited to see what he’ll do in his new role, and we’ll be using the opportunity to bring fresh eyes to GeekWire’s coverage of startups and the broader tech community in the Seattle region and the rest of the Pacific Northwest. 

In a post announcing Taylor’s new role, AI2 Incubator Managing Director Yifan Zhang says that he “brings a unique combination of skills that fits our thesis for today’s AI era: over a decade of deep relationships across Seattle tech, an intense and insatiable curiosity, and a talent for asking the right questions.”

We can vouch for that. Taylor was GeekWire’s first editorial hire, back in 2012, straight out of the University of Washington, when we were in a 10×10 foot office next to the Ballard Bridge. 

In the years that followed, he became one of the most connected and respected business and tech reporters in the Pacific Northwest, covering everything from early-stage startups to Microsoft and Amazon at the highest levels, breaking funding scoops and acquisition news. 

In recent years, as GeekWire’s editor, he has led and coordinated our news team and guided our reporting on everything from civic coverage to artificial intelligence.

Taylor Soper on the job through the years, including the video (center) that he submitted in support of his original GeekWire job application.

Taylor’s impact goes well beyond what he’s reported and published. The Seattle tech community is better for the long hours and dedication he brought to the job, year after year.

“I’m grateful to GeekWire for giving me an opportunity and supporting my growth, and to all of my colleagues for the work we did together reporting on the Seattle tech ecosystem,” he said. “I’m excited to work alongside founders and help supercharge the next generation of startups in this AI era.”

GeekWire remains as committed as ever to covering startups and the tech community. GeekWire co-founder Todd Bishop will be stepping back into the role of editor while continuing to report and write, working with staff reporters Lisa Stiffler and Kurt Schlosser, co-founder and publisher John Cook, and regular contributors including Alan Boyle and Thomas Wilde.

We’re also looking to add reinforcements to our news team over time.

Taylor’s last day at GeekWire will be March 25, and if the pattern holds, he’ll be reporting and publishing stories to the end. We feel fortunate that GeekWire has been his home for so long, and we have no doubt his impact will continue for many years to come.


AI Agent & Copilot Podcast: Gina Montgomery on Designing Trusted Copilot, Agent Experiences


Gina Montgomery, Director of AI, Automation & Analytics, Armanino, joins Giuseppe Ianni on this episode of the AI Agent & Copilot Podcast. Montgomery details the learning objectives of her session at the upcoming AI Agent & Copilot Summit, and the impact the event has on building AI awareness across the community.

Key Takeaways

  • Shift in AI focus: As Director of AI at Armanino, Montgomery explains that organizations are shifting from last year’s AI experimentation and demos toward defining real business use cases and operationalizing AI: embedding it into processes, insights, governance, and workforce interactions to transform how the business runs.
  • Session overview: One of her sessions, “From Insight to Intuition: Designing Copilot Experiences that Understand People,” will provide “a practical blueprint for designing Copilot and agent experiences that people can trust and use,” addressing the gap between building AI systems and thoughtfully designing how employees interact with them.
  • Learning objectives: Turning on Copilot often leads to early experimentation but a dip in trust as users encounter vague outputs, prompt fatigue, and unclear accountability because the experience wasn’t intentionally designed. Montgomery’s masterclass introduces an “agent experience” framework called CARE — context, awareness, relationships, and empathy — to help organizations design AI systems that are trustworthy, accountable, and effective in business workflows.
  • Event relevance: The event, explains Montgomery, comes at “an inflection point with AI adoption across businesses,” bringing together technical and business leaders to help organizations move from exploring AI’s possibilities to deploying it responsibly and at scale across their operations.

The post AI Agent & Copilot Podcast: Gina Montgomery on Designing Trusted Copilot, Agent Experiences appeared first on Cloud Wars.


Microsoft Copilot Studio Case Study Shows 61% Faster AI Support With Multi-Agent Architecture


Welcome to the Cloud Wars Minute — your daily cloud news and commentary show. Each episode provides insights and perspectives around the “reimagination machine” that is the cloud.

In today’s Cloud Wars Minute, I explore how Microsoft is using Copilot Studio and multi-agent orchestration to dramatically improve customer support performance.

Highlights

00:09 — Now, one of the best ways to assess the impact of Microsoft Copilot is to examine case studies of the technology in action. Microsoft has announced details of a recent project delivered through Copilot Studio, aimed at enhancing the customer support experience on microsoft.com, building on the Ask Microsoft web agent created using Microsoft Copilot Studio.

00:51 — This new approach resulted in a 61% reduction in latency and up to 70% fewer human escalations. The Microsoft team tested and refined the original web assistant, getting it live within just a few weeks using Copilot Studio tools.

AI Agent & Copilot Summit is an AI-first event to define opportunities, impact, and outcomes with Microsoft Copilot and agents. Building on its 2025 success, the 2026 event takes place March 17-19 in San Diego. Get more details.

01:11 — However, it was the multi-agent orchestration feature that truly enhanced this project, enabling the team to connect the main agent to sub-agents with domain-specific knowledge in areas such as Azure or Microsoft 365.
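Copilot Studio wires this up through its own configuration tools, but the underlying pattern — a main agent classifying a question and delegating to a domain sub-agent — can be sketched in plain Python. Everything below is illustrative: the agent names, the keyword routing, and the responses are assumptions standing in for the intent classification and knowledge sources a real orchestrator would use, not Copilot Studio APIs.

```python
from typing import Callable

# Hypothetical domain sub-agents; in a real deployment each would be
# backed by its own knowledge sources (e.g., Azure or Microsoft 365 docs).
def azure_agent(question: str) -> str:
    return f"[Azure agent] answering: {question}"

def m365_agent(question: str) -> str:
    return f"[Microsoft 365 agent] answering: {question}"

def fallback_agent(question: str) -> str:
    return f"[General agent] answering: {question}"

# Simple keyword routing stands in for the intent classification
# a real orchestrator would perform.
SUB_AGENTS: dict[str, Callable[[str], str]] = {
    "azure": azure_agent,
    "365": m365_agent,
}

def main_agent(question: str) -> str:
    lowered = question.lower()
    for keyword, agent in SUB_AGENTS.items():
        if keyword in lowered:
            return agent(question)  # delegate to the matching sub-agent
    return fallback_agent(question)

print(main_agent("How do I create an Azure storage account?"))
```

The design point is the one made in the episode: keeping domain knowledge in separate sub-agents lets each be edited and redeployed independently, while the main agent only handles routing.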

01:34 — Firstly, Microsoft is presenting a very tangible use case for Copilot Studio here. Secondly, it highlights the speed at which Copilot Studio can be used to rapidly deploy and easily edit agentic workflows. And finally, it serves as a really good advertisement for multi-agent architecture and orchestration, which I believe unlocks the most capable AI performance.


The post Microsoft Copilot Studio Case Study Shows 61% Faster AI Support With Multi-Agent Architecture appeared first on Cloud Wars.


Capability Architecture for AI-Native Engineering


A few years into the AI shift, the gap between engineers is not talent. It’s coordination: shared norms and a shared language for how AI fits into everyday engineering work. Some teams are already getting real value. They’ve moved beyond one-off experiments and started building repeatable ways of working with AI. Others haven’t, even when the motivation is there. The reason is often simple: The cost of orientation has exploded. The landscape is saturated with tools and advice, and it’s hard to know what matters, where to start, and what “good” looks like once you care about production realities.

The missing map

What’s missing is a shared reference model. Not another tool. A map. Which engineering activities can AI responsibly support? What does quality mean for those outputs? What changes when part of the workflow becomes probabilistic? And what guardrails keep integration safe, observable, and accountable? Without that map, it’s easy to drown in novelty, and easy to confuse widespread experimentation with reliable integration. Teams with the least time, budget, and local support pay the highest price, and the gap compounds.

That gap is now visible at the organizational level. More organizations are trying to turn AI into business value, and the difference between hype and integration is showing up in practice. It’s easy to ship impressive demos. It’s much harder to make AI-assisted work reliable under real-world constraints: measurable quality, controllable failure modes, clear data boundaries, operational ownership, and predictable cost and latency. This is where engineering discipline matters most. AI does not remove the need for it; it amplifies the cost of missing it. The question is how we move from scattered experimentation to integrated practice without burning cycles on tool churn. To do that at scale, we need shared scaffolding: a public model and shared language for what “good” looks like in AI-native engineering.

We have seen why this kind of shared scaffolding matters before. In the early internet era, promise and noise moved faster than standards and shared practice. What made the internet durable was not a single vendor or methodology but a cultural infrastructure: open knowledge sharing, global collaboration, and shared language that made practices comparable and teachable. AI-native engineering needs the same kind of cultural infrastructure, because integration only scales when the industry can coordinate on what “good” means. AI does not remove the need for careful engineering. On the contrary, it punishes the absence of it.

A public scaffold for AI-native engineering

In the second half of 2025, I began to notice growing unease among engineers I worked with and friends in IT. There was a clear sense that AI would change our work in profound ways, but far less clarity on what that actually meant for a person’s role, skills, and daily practice. There was no shortage of trainings, guides, blogs, or tools, but the more resources appeared, the harder it became to judge what was relevant, what was useful, and where to begin. It felt overwhelming. How do you know which topics truly matter to you when suddenly everything is labeled AI? How do you move from hype to useful integration?

I was feeling much of that same uncertainty myself. I was trying to make sense of the shift too, and for a while I think I was waiting for a clearer structure to emerge from elsewhere. It was only when friends started reaching out to me for help and guidance that I realized I might have something meaningful to contribute. I do not consider myself an AI expert. I am finding my way through these changes just like many other engineers. But over the years, I had become known for my work in IT workforce development, skill and capability frameworks, and engineering excellence and enablement. I know how to help people navigate complexity in a practical and sustainable way, and I enjoy bringing clarity to chaos.

That is what led me to start working on the AI Flower as a hobby project in early October 2025, building on frameworks and methods I already had experience with.

When I began sharing it with friends in IT to gather feedback, I saw how much it resonated. It helped them make sense of the complexity around AI, think more clearly about their own upskilling, and begin shaping AI adoption strategies of their own. That is when I realized this casual experiment held real value, and decided I wanted to publish it so it could help empower other engineers and IT organizations in the same way it had helped my friends.

With the AI Flower, I’m offering a public scaffold for AI-native engineering work: a shared reference model that helps engineers, teams, and organizations adopt and integrate AI sustainably and reliably. It’s meant to steer and organize the conversation around AI-assisted engineering, and to invite targeted feedback on what breaks, what’s missing, and what “good” should mean in real production contexts. It’s not meant to be perfect. It’s meant to be useful, freely available, open to contribution, and shaped by the strongest resource our industry has: collective intelligence.

Open knowledge sharing and collaboration cannot be optional. If AI is becoming part of how we design, build, operate, secure, and govern systems, we need more than tools and enthusiasm. Many of us work on systems people rely on every day. When those systems fail, the impact is real. That’s why we owe it to the people who depend on these systems to do this with care, and why we won’t get there in isolation. We need the industry, globally, to converge on shared standards for dependable practice.

The AI Flower visualized: Petals represent engineering disciplines, and each encompasses core engineering activities, best practices, learning resources, AI risk and considerations, and AI guidance per activity.

About the AI Flower

The AI Flower maps the core activities that make up engineering work across the main engineering disciplines. For each activity, it defines what good looks like, based on practices that should already feel familiar to engineers. It then helps people explore how AI can support those activities in practice, providing guidance on how to begin using AI in that work, sharing links to useful learning resources, and outlining the main risks, trade-offs, and mitigations.
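The structure described above — disciplines ("petals") containing activities, each carrying practices, guidance, resources, and risks — can be sketched as a small data model. This is a minimal illustration only; the field names and example content are assumptions, not part of the published framework.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """One core engineering activity within a discipline."""
    name: str
    what_good_looks_like: list[str] = field(default_factory=list)
    ai_guidance: list[str] = field(default_factory=list)
    learning_resources: list[str] = field(default_factory=list)
    risks_and_mitigations: list[str] = field(default_factory=list)

@dataclass
class Petal:
    """One engineering discipline ('petal') and its activities."""
    discipline: str
    activities: list[Activity] = field(default_factory=list)

# Hypothetical example petal, for illustration only.
flower = [
    Petal(
        discipline="Testing",
        activities=[
            Activity(
                name="Writing unit tests",
                what_good_looks_like=["Fast, isolated, deterministic tests"],
                ai_guidance=["Use AI to draft cases; review every assertion"],
                risks_and_mitigations=["Hallucinated assertions: run and inspect"],
            )
        ],
    )
]
```

The value of modeling it this way is that the activity, not the tool, is the unit of organization — which is what lets the map stay stable as individual AI tools churn.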

But the AI landscape is changing quickly. This activity-based approach helps engineers understand how AI can support core engineering tasks, where risks may arise, and how to start building practical experience. But on its own, it isn’t enough as a long-term model for AI adoption.

As AI capabilities evolve, many engineering activities will become more abstracted, more automated, or absorbed into the infrastructure layer. That means engineers will need to do more than learn how to use AI within today’s activities. They will also need to work with emerging approaches such as context engineering and agentic workflows, which are already reshaping what we consider core engineering work. A concept I call the Skill Fossilization Model captures that progression. It shows how both engineering skills and AI-related skills evolve over time, and how some of them become less visible as work moves to a higher level of abstraction. Together, the AI Flower and the Skill Fossilization Model are meant to help engineers stay adaptable as the field continues to shift.

The main purpose of the AI Flower is to help engineers find their way through these rapid changes and grow with them. While I provide content for each section and activity, the real value lies in the framework and structure itself. To become truly valuable, it will need the insight, care, and contribution of engineers across disciplines, perspectives, and regions.

I genuinely believe the AI Flower, as an open and freely available framework, can serve as a scaffold for that work. This is my contribution to a changing industry. But it will only be useful—it will only “bloom”—if the community tests it, challenges it, and improves it over time.

And if any industry can turn open critique and contribution into shared standards at a global scale, it’s ours, isn’t it?

Join me at AI Codecon to learn more

If the AI Flower resonates and you want the full walkthrough, I’ll be presenting it at O’Reilly’s upcoming AI Codecon. (Registration is free and open to all.)

If you’re concerned about how quickly AI engineering patterns are evolving, that concern is valid. We’ve already seen the center of gravity shift from ad hoc prompt work, to context engineering, to increasingly agentic workflows, and there is more coming. A core design goal of the AI Flower is to stay stable across those shifts by focusing on underlying capabilities rather than specific techniques. I’ll go deeper on that stability principle, including the Skill Fossilization Model, at AI Codecon as well.


