Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

AI Agent and Copilot Podcast: Cisco Engineering Leader on AI’s Impact in Product Support


Welcome to this AI Agent & Copilot Podcast, where we analyze the opportunities, impact, and outcomes that are possible with AI.

In this episode, I speak with Nik Kale, a principal engineer with Cisco and the architect of an AI-powered system — called AI Support Fabric — that’s used in product support inside Cisco and by customers of the tech giant.

Highlights

AI Support Fabric Background (02:24)

AI Support Fabric powers in-product guidance, AI-assisted support, and human escalation workflows within Cisco products. Customers often navigate multiple tabs and portals to find answers, leading to a disconnected experience. AI Support Fabric aims to provide a more connected experience across the entire product ecosystem, moving from reactive to proactive support.

Implementation and Results of AI Support Fabric (04:50)

AI Support Fabric runs in Cisco security and enterprise environments. It has over 200,000 unique users and 15,000 unique customers engaging weekly. The system brings AI and human intelligence directly into the product, providing personalized and predictive support. The in-product layer ensures customers receive targeted remediation content and guidance, reducing noise. Examples include handling zero-day vulnerabilities by pushing targeted remediation content directly to affected customers.

Unified Data Foundation (09:49)

AI Support Fabric is built on a unified data foundation called Tron, which acts as a single source of truth; Tron ingests millions of customer interactions, categorizing them into actionable outcomes like defects or documentation issues. The Digital Intellectual Capital Ecosystem (DICE) distills knowledge from years of support operations into reusable content, enabling omni-channel delivery. The principle of “build once, deliver everywhere” ensures content is reusable across various customer interaction channels.

AI Agent & Copilot Summit is an AI-first event to define opportunities, impact, and outcomes with Microsoft Copilot and agents. Building on its 2025 success, the 2026 event takes place March 17-19 in San Diego. Get more details.

Layers of AI Support (12:34)

AI Support Fabric consists of three layers: proactive guidance, AI assistant, and human escalation. The proactive guidance layer uses the Cisco Digital Adoption Platform to surface relevant guidance at the moment of friction. The AI assistant is a multi-agent system that coordinates at machine speeds, acting like a task force for complex issues. The human escalation layer packages all relevant diagnostic information for human engineers to resolve complex issues.
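The three layers described above can be pictured as a fallthrough pipeline: each layer gets a chance to resolve the issue, and anything unresolved is escalated to a human along with the context gathered so far. This is a hypothetical sketch of that pattern, not Cisco's implementation; all names and matching rules here are illustrative.

```python
# Hypothetical sketch of a three-layer support flow: try proactive
# guidance first, then the AI assistant, then escalate to a human
# with the diagnostic context gathered along the way.
from typing import Callable, List, Optional, Tuple

Layer = Callable[[str], Optional[str]]  # returns an answer, or None

def resolve(issue: str, layers: List[Tuple[str, Layer]]) -> str:
    context: List[str] = []
    for name, layer in layers:
        answer = layer(issue)
        context.append(f"{name} checked")
        if answer is not None:
            return answer
    # No layer resolved it: package context for a human engineer.
    return f"escalated to human with context: {context}"

layers = [
    ("proactive", lambda i: "known-issue doc" if "login" in i else None),
    ("assistant", lambda i: "config fix" if "config" in i else None),
]

print(resolve("login loop", layers))    # resolved by the proactive layer
print(resolve("kernel panic", layers))  # falls through to human escalation
```

The point of the pattern is that the escalation step never starts from scratch: it carries everything the earlier layers learned.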

Human Escalation and Safety in AI Systems (18:50)

Human escalation is especially important in security products. The human escalation layer treats escalation as a first-class feature, ensuring AI recommendations are validated and logged. The system reduces mean time to resolution by 15% to 20%, saving time and effort for both customers and engineers.

Customer Outcomes and ROI (21:38)

ROI is framed across three dimensions: resolution speed, knowledge leverage, and shift from reactive to proactive support. Resolution speed is improved by providing contextual health and reducing initial calls for information. Knowledge leverage multiplies the value of institutional support knowledge, making it available when needed. The shift from reactive to proactive support prevents issues before they become problems.

More Cisco and AI Insights:


The post AI Agent and Copilot Podcast: Cisco Engineering Leader on AI’s Impact in Product Support appeared first on Cloud Wars.

Read the whole story
alvinashcraft
25 minutes ago
Pennsylvania, USA

How AI agents could destroy the economy

Citrini Research imagines a report from two years in the future, in which unemployment has doubled and the total value of the stock market has fallen by more than a third.

Inside Microsoft’s big Xbox leadership shake-up


Xbox fans had been anticipating the retirement of Microsoft Gaming CEO Phil Spencer for years, but what most hadn't expected was the departure of Xbox president Sarah Bond too. For many outside the company, Bond seemed like Spencer's natural successor, a deputy of sorts.

Microsoft CEO Satya Nadella and Microsoft CFO Amy Hood clearly didn't agree.

Instead of picking Bond for the role, Microsoft promoted Asha Sharma, a former Microsoft AI executive, to the top of Xbox. The decision to overlook Bond might have surprised many Xbox fans, but for the more than a dozen current and former Microsoft employees I've been speaking to, it's felt inev …

Read the full story at The Verge.


AI Agent & Copilot Podcast: Stoneridge Software CEO Eric Newell on Building Secure AI Strategies


In this episode of the AI Agent & Copilot Podcast, John Siefert is joined by Eric Newell, CEO of Stoneridge Software, who previews his session at AI Agent & Copilot Summit NA 2026 on how organizations can build a productive AI strategy that moves beyond experimentation.

Key Takeaways

  • Session overview: Newell will be leading a session as part of the M365 & Work IQ masterclass, “Executive’s Guide to Rolling Out M365 Copilot.” The session will focus on how organizations can move beyond AI experimentation to build a secure and productive AI strategy. “AI is incredibly powerful,” he explains. “But you need to just make sure that you’re set up to take advantage of it, and then you build some organizational capacity to do it.”
  • AI executive briefings: For customers and other leaders, Newell shares executive-level AI education and practical guidance, grounding other leaders in what AI, LLMs, and Microsoft’s tools can do for productivity. He notes that some of these learnings will be a part of his session at the event.
  • Final thoughts: In closing, Newell adds that he’s looking forward to his session and hopes attendees bring questions focused on practical guidance.


The post AI Agent & Copilot Podcast: Stoneridge Software CEO Eric Newell on Building Secure AI Strategies appeared first on Cloud Wars.


Defense against uploads: Q&A with OSS file scanner, pompelmi

API and network traffic get all the press, but some folks are still trying to build a better upload scanner.

AI Tool Switching Is Stealth Friction – Beat It at the Access Layer


Has your team’s sprint velocity actually improved since you approved all those AI coding tools?

If not, recent research by JetBrains and UC Irvine shows your developers may be facing a new dimension of context switching that resists the usual fixes.  

The key finding was that most AI-assisted developers switched in and out of their IDEs more, but 74% of those surveyed didn’t notice it. When context switching doesn’t feel like context switching, behavioral policies won’t catch it.

Consolidating AI tools would catch it, but at the cost of flexibility. Model capabilities evolve constantly, and locking into one vendor limits your team’s ability to learn, experiment, and stay competitive.

The good news is that there’s a solution that sidesteps both challenges – consolidating the access layer. 

Here’s the research behind it, why it works, and how to apply it. 

Developers complain about switching, just not this kind

In general, developers are outspoken about context switching killing productivity. Atlassian’s State of Developer Experience Report 2025 found developers citing switching context between tools as one of their biggest drags on productivity.

At the same time, developers report record productivity thanks to an ever-increasing array of AI tools. In the 2025 DORA State of AI-Assisted Software Development Report, respondents said that AI had a positive impact on delivery throughput, code quality, and almost every other key performance outcome. 

Paradoxically, DORA also found no relationship between AI adoption and reduced friction or burnout. The organizational wins weren’t translating to a lighter day-to-day experience.

This disconnect between experience and performance points to something deeper. When researchers combine self-reported perceptions with objective behavioral data, the gap becomes clear.

  • In the JetBrains/UC Irvine study mentioned above, 74% of surveyed AI-assisted developers didn’t notice an increase in their switching. Telemetry on 151 million IDE window activations across 800 developers told a different story. Over the two-year study period, AI users’ monthly window switching trended upward while non-AI users’ did not. This divergence was mostly invisible to those experiencing it. Conducted from October 2022 to October 2024, the research spanned ChatGPT’s launch and the initial scramble to adopt AI coding tools.

74% said switching hadn’t gone up.

Telemetry disagreed.

  • Experienced open-source developers in a 2025 METR study believed AI tools made them 20% faster. Screen recordings showed the opposite.

All this research suggests that AI’s productivity benefits come with a hidden cost when distributed across different tools and interfaces. The switching feels productive and voluntary, so it is nearly impossible to manage behaviorally. When developers don’t perceive the friction, they can’t self-correct. When they don’t report it, you can’t coach around it.

The solution isn’t measuring or managing – it’s architectural. And there’s a proven pattern for architectural solutions to developer friction.

The platform-engineering lesson: Consolidation reduces cognitive load

Platform engineering is all about building internal tooling and infrastructure that lets developers self-service what they need without hitting speed bumps like tickets or approvals. The goal is to create “golden paths” that make the right ways the easy ways.

Traditionally, platform engineering has focused on the “outer loop” of everything after git push. This includes CI/CD pipelines, deployment automation, infrastructure provisioning, and security scanning.

AI tools, on the other hand, fragment the “inner loop” of everything before git push. GitLab’s 2025 Global DevSecOps Report found that 49% of development teams use more than five AI tools across use cases like code generation, testing, and documentation. 

Standardization was the top motivation for platform initiatives according to Weave Intelligence’s State of AI in Platform Engineering 2025 report, but standardizing around a single AI tool doesn’t work when different models are better at different tasks. 

Reducing developers’ cognitive load was the second-highest motivation. Apply that principle to AI tools: consolidate the access layer, not the options.
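The “consolidate the access layer, not the options” idea can be sketched in a few lines: a single routing interface sits in front of interchangeable model backends, so developers call one thing while the backends stay swappable. This is an illustrative sketch of the principle, not JetBrains’ implementation; the `AccessLayer` class and stub backends are hypothetical.

```python
# Illustrative sketch: one access layer in front of several
# interchangeable model backends, registered per task.
from typing import Callable, Dict

class AccessLayer:
    """Single entry point that routes prompts to whichever backend
    is registered for a task, so callers never switch tools."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[str], str]] = {}

    def register(self, task: str, backend: Callable[[str], str]) -> None:
        self._backends[task] = backend

    def ask(self, task: str, prompt: str) -> str:
        if task not in self._backends:
            raise KeyError(f"no backend registered for task: {task}")
        return self._backends[task](prompt)

# Stub backends stand in for real model providers; swapping one out
# changes nothing for the caller.
layer = AccessLayer()
layer.register("codegen", lambda p: f"[model-a] {p}")
layer.register("docs", lambda p: f"[model-b] {p}")

print(layer.ask("codegen", "write a test"))  # routed to model A
```

Because the routing table is the only thing that changes when a better model appears, the options stay open while the access point stays fixed.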

One environment, multiple AI tools

Since our study data was finalized in 2024, we’ve shipped two features that make JetBrains IDEs the consolidated access layer for your team’s AI tools of choice: 

Bring Your Own Key (BYOK) lets your team use OpenAI, Anthropic, or any OpenAI-compatible provider with existing API keys. You maintain cost visibility through provider dashboards while developers access models directly in the IDE.

No browser tabs required. LLMs work inside the IDE.
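The BYOK pattern works because “OpenAI-compatible” reduces a provider to two inputs: a base URL and an API key. A minimal sketch of that idea, assuming illustrative endpoints (the local one mimics a self-hosted OpenAI-compatible server; the env-var names are conventions, not requirements):

```python
# Minimal sketch of the bring-your-own-key pattern: any
# OpenAI-compatible provider is just a base URL plus an API key.
import os
from typing import Dict

PROVIDERS: Dict[str, Dict[str, str]] = {
    "openai": {"base_url": "https://api.openai.com/v1",
               "key_env": "OPENAI_API_KEY"},
    # A locally hosted OpenAI-compatible server (illustrative):
    "local": {"base_url": "http://localhost:11434/v1",
              "key_env": "LOCAL_API_KEY"},
}

def client_config(provider: str) -> Dict[str, str]:
    """Return the kwargs an OpenAI-style client constructor expects."""
    cfg = PROVIDERS[provider]
    return {
        "base_url": cfg["base_url"],
        "api_key": os.environ.get(cfg["key_env"], ""),
    }

# e.g. OpenAI(**client_config("local")).chat.completions.create(...)
print(client_config("openai")["base_url"])
```

Swapping providers means editing the table, not the calling code, which is exactly the cost-visibility-without-lock-in trade the post describes.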

Agent Client Protocol (ACP) support means any ACP-compatible coding agent can work within JetBrains IDEs. ACP is an open standard we’re partnering with Zed on to ensure agents function across editors without vendor lock-in. The recently launched ACP Registry makes finding and configuring agents quick and easy.

All ACP-compatible agents are available in the IDE.

Takeaway

AI-related switching doesn’t surface the same way as shifts between meetings, projects, or traditional tools. Developers notice it less, so they report it less. Behavioral policies can’t apply to what isn’t visible.

The fix is architectural, not managerial. In platform engineering, this principle applies to post-commit workflows. Apply it to pre-commit AI workflows by standardizing where developers access the tools: in the environment where they already write, test, and debug code.
