Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Browser Extensions Turn Nearly 1 Million Browsers Into Website-Scraping Bots

Over 240 browser extensions with nearly a million total installs have been covertly turning users' browsers into web-scraping bots. "The extensions serve a wide range of purposes, including managing bookmarks and clipboards, boosting speaker volumes, and generating random numbers," reports Ars Technica. "The common thread among all of them: They incorporate MellowTel-js, an open source JavaScript library that allows developers to monetize their extensions." Ars Technica reports:

Some of the data swept up in the collection free-for-all included surveillance videos hosted on Nest; tax returns, billing invoices, business documents, and presentation slides posted to, or hosted on, Microsoft OneDrive and Intuit.com; vehicle identification numbers of recently bought automobiles along with the names and addresses of the buyers; patient names and the doctors they saw; travel itineraries hosted on Priceline, Booking.com, and airline websites; and Facebook Messenger attachments and Facebook photos, even when the photos were set to be private. The dragnet also collected proprietary information belonging to Tesla, Blue Origin, Amgen, Merck, Pfizer, Roche, and dozens of other companies.

Security researcher Tuckner said in an email Wednesday that the most recent status of the affected extensions is:

- Of 45 known Chrome extensions, 12 are now inactive. Some were removed explicitly for malware; others have removed the library.
- Of 129 Edge extensions incorporating the library, eight are now inactive.
- Of 71 affected Firefox extensions, two are now inactive.

Some of the inactive extensions were removed explicitly for malware; others have removed the library in more recent updates. A complete list of extensions found by Tuckner is here.



Microsoft Touts $500 Million in AI Savings While Slashing Jobs

Microsoft is keen to show employees how much AI is transforming its own workplace, even as the company cuts thousands of jobs. From a report: During a presentation this week, Chief Commercial Officer Judson Althoff said artificial intelligence tools are boosting productivity in everything from sales and customer service to software engineering, according to a person familiar with his remarks. Althoff said AI saved Microsoft more than $500 million last year in its call centers alone and increased both employee and customer satisfaction, according to the person, who requested anonymity to discuss an internal matter. The company is also starting to use AI to handle interactions with smaller customers, Althoff said. This effort is nascent but already generating tens of millions of dollars, he said.



OpenAI is reportedly releasing an AI browser in the coming weeks

OpenAI's browser is said to use AI to rethink how users browse the web, keeping some user interactions inside ChatGPT instead of linking out to websites.

OpenAI closes its deal to buy Jony Ive’s io and build AI hardware


OpenAI has officially closed its nearly $6.5 billion acquisition of io, the hardware startup co-founded by famed former Apple designer Jony Ive, the company announced Wednesday. But it was careful to only refer to the startup as io Products Inc.

The blog post initially announcing the acquisition was also scrubbed from OpenAI’s website due to a trademark lawsuit from Iyo, the hearing device startup spun out of Google’s moonshot factory.

Now it’s back up, with a new note: “We’re thrilled to share that the io Products, Inc. team has officially merged with OpenAI. Jony Ive and LoveFrom remain independent and have assumed deep design and creative responsibilities across OpenAI.”

OpenAI originally announced the deal and plans to create dedicated AI hardware, with a video on social media featuring its CEO, Sam Altman, and Ive. That video has been scrubbed from its website and social media channels due to the lawsuit and has not returned. “The io team, focused on developing products that inspire, empower and enable, will now merge with OpenAI to work more intimately with the research, engineering and product teams in San Francisco,” the blog post states.


Will LLMs and Vibe Coding Fuel a Developer Renaissance?

Panel discussion at AWS Builder Loft on the future of developers in the AI era, with five speakers seated on stage.

Between companies increasingly using large language models (LLMs) in their development process, such as Microsoft writing up to 30% of its codebase using AI, and site reliability engineers (SREs) adopting incident vibing, it is clear that software practices are evolving.

Despite model makers pushing the narrative that fully autonomous AI development agents are coming very soon, the consensus remains that having a human in the loop is here to stay, at least for some time. So will AI fuel a developer renaissance?

The Shift to Multi-Agent Workflows

I recently moderated a Rootly AI Labs panel on the topic. Solomon Hykes, CEO of Dagger and founder of Docker, argued that while the industry has been busy figuring out single-agent approaches, multi-agent setups represent the next frontier.

For example, the startup Factory introduced “Droids,” software agents designed to handle parallel development tasks. One agent could manage code refactoring, while another could conduct a code review, and yet another could handle the task backlog on Linear, prioritizing and assigning tickets.

These setups shift a developer’s role from direct technical tasks to managing and verifying the work of these agents, turning developers into engineering managers.

Anthropic recently released a blueprint on building multi-agent systems, based on lessons from its Research feature, which coordinates multiple Claude agents to explore complex topics.

The report highlights that in agentic systems, small issues that would be minor in traditional software can compound and derail workflows entirely, making the gap between prototype and production wider than expected.

Getting multi-agent systems to work reliably turns the “last mile” into most of the journey, and developers are the ones responsible for making it happen.

Developer Roles Are Expanding

As developers transition into managing teams of AI agents, their roles naturally broaden beyond purely technical tasks. Malika Aubakirova, partner on the AI infrastructure team at Andreessen Horowitz, highlighted the rise of "nano unicorns": fast-growing, high-revenue startups with small teams, like Cursor, which reached $300M ARR with just 20 employees.

She noticed consistent patterns across these companies. First, they augment their teams with AI agents across engineering, product development and customer-facing functions. In this model, AI isn’t a side tool; it’s treated as infrastructure and is central to how work gets done.

Second, these startups frequently employ generalists rather than specialists. For example, in such environments, engineers aren't limited to backend or frontend tasks; they contribute across the entire application life cycle and even assist with go-to-market initiatives. This shift is redefining team structures, tooling and what it means to scale a modern software company.

A panel discussion at AWS Builder Loft on the future of developers in the AI era.

This trend isn’t limited to startups; it’s also playing out inside large, established tech companies. A senior engineering leader at LinkedIn, who asked to remain anonymous, noted that role expectations have significantly expanded. Engineers are now expected to operate across multiple functions, acting not just as developers but also as project managers, data scientists, and SREs, while leveraging AI agents to execute across these domains.

While the actual usefulness of LLMs is still debated, one thing is certain: engineers are being asked to do more with less.

Challenges for Reliability and SRE Teams

Despite the productivity boosts from AI agents, their adoption poses reliability challenges. Kevin Van Gundy, CEO at Hypermode and former COO at Vercel, emphasizes that the non-deterministic nature of LLMs, which can produce hallucinations, is causing very unusual incidents. Handling incidents in deterministic systems was already complex; now imagine doing it when the system itself can’t be trusted to behave the same way twice.

Hykes noted that as LLMs become embedded at every stage of the software development life cycle, from authoring application code and creating other agents, to running tests, provisioning infrastructure and handling monitoring, the number of places where things can go wrong is increasing.

SREs, as the last line of defense between sanity and chaos, may need to be concerned about the volume and complexity of incidents heading their way. The good news, however, is that they’ll also become more sought-after talent, and platform teams, more specifically, will hold the keys to providing the infrastructure for these agentic workflows at scale.

Navigating the AI-Driven Market: Skills for Developers

So, what should engineers do to stay up to date? Van Gundy encourages engineers to keep building using the latest and hottest tools, such as Replit and Lovable. The focus should be on expanding beyond purely technical skills and developing strong product intuition and UX expertise. Rapid development capabilities alone won’t guarantee success without a polished product.

Conversely, Mike Chambers, specialist developer advocate for machine learning at Amazon Web Services, recommends that developers have a deep understanding of the underlying technology behind LLMs. Learning foundational AI concepts, such as transformers, can significantly improve engineers’ effectiveness when leveraging these tools. Like any other system, LLMs have strengths and weaknesses, and you shouldn’t use a hammer for a screw.

The Developer Renaissance Is Underway

The panel consensus was that LLMs indeed offer a potential renaissance for developers by significantly expanding their roles. Success in this new era will likely be highly based on blending human oversight with AI capabilities, balancing technical depth with product sensibility.

In the future, engineers’ performance reviews may focus less on individual execution and more on how effectively engineers manage and direct their agent workforce.

The post Will LLMs and Vibe Coding Fuel a Developer Renaissance? appeared first on The New Stack.


Multi-Agent Systems and MCP Tools Integration with Azure AI Foundry

The Power of Connected Agents: Building Multi-Agent Systems

Imagine trying to build an AI system that can handle complex workflows like managing support tickets, analyzing data from multiple sources, or providing comprehensive recommendations. Sounds challenging, right? That's where multi-agent systems come in!

The Develop a multi-agent solution with Azure AI Foundry Agent Services module introduces you to the concept of connected agents, a game-changing approach that allows you to break down complex tasks into specialized roles handled by different AI agents.

Why Connected Agents Matter

As a student developer, you might wonder why you'd need multiple agents when a single agent can handle many tasks. Here's why this approach is transformative:

1. Simplified Complexity: Instead of building one massive agent that does everything (and becomes difficult to maintain), you can create smaller, specialized agents with clearly defined responsibilities.

2. No Custom Orchestration Required: The main agent naturally delegates tasks using natural language - no need to write complex routing logic or orchestration code.

3. Better Reliability and Debugging: When something goes wrong, it's much easier to identify which specific agent is causing issues rather than debugging a monolithic system.

4. Flexibility and Extensibility: Need to add a new capability? Just create a new connected agent without modifying your main agent or other parts of the system.

How Multi-Agent Systems Work

The architecture is surprisingly straightforward:

1. A main agent acts as the orchestrator, interpreting user requests and delegating tasks
2. Connected sub-agents perform specialized functions like data retrieval, analysis, or summarization
3. Results flow back to the main agent, which compiles the final response

For example, imagine building a ticket triage system. When a new support ticket arrives, your main agent might:
- Delegate to a classifier agent to determine the ticket type
- Send the ticket to a priority-setting agent to determine urgency
- Use a team-assignment agent to route it to the right department

All this happens seamlessly without you having to write custom routing logic!
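
To make the triage example concrete, here is a minimal Python sketch assuming the azure-ai-agents package used with Azure AI Foundry Agent Service. The endpoint, model name, instructions, and tool names are placeholders, and exact class or parameter names may vary across SDK versions.

```python
# Minimal sketch (not the module's exact code): three specialized sub-agents
# exposed as connected-agent tools via the azure-ai-agents package.
# Endpoint, model, and instructions are placeholders.
from azure.ai.agents import AgentsClient
from azure.ai.agents.models import ConnectedAgentTool
from azure.identity import DefaultAzureCredential

client = AgentsClient(
    endpoint="https://<your-foundry-project-endpoint>",
    credential=DefaultAzureCredential(),
)

# Each sub-agent has one narrow, clearly defined responsibility.
classifier = client.create_agent(
    model="gpt-4o", name="ticket-classifier",
    instructions="Classify the support ticket as a bug, feature request, or question.",
)
prioritizer = client.create_agent(
    model="gpt-4o", name="ticket-prioritizer",
    instructions="Assign a low, medium, or high priority to the support ticket.",
)
assigner = client.create_agent(
    model="gpt-4o", name="team-assigner",
    instructions="Route the ticket to the frontend, backend, or infrastructure team.",
)

# Expose each sub-agent to the main agent as a connected-agent tool.
triage_tools = [
    ConnectedAgentTool(id=classifier.id, name="classify_ticket",
                       description="Determines the ticket type."),
    ConnectedAgentTool(id=prioritizer.id, name="set_priority",
                       description="Determines the ticket urgency."),
    ConnectedAgentTool(id=assigner.id, name="assign_team",
                       description="Routes the ticket to the right team."),
]
```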

Setting Up a Multi-Agent Solution

The module walks you through the entire process:

1. Initializing the agents client
2. Creating connected agents with specialized roles
3. Registering them as tools for the main agent
4. Building the main agent that orchestrates the workflow
5. Running the complete system
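
Continuing the hedged sketch from the triage example above, the main agent registers the connected-agent tools, and a thread/message/run sequence drives the workflow. The method names follow the common azure-ai-agents pattern and may differ slightly in your SDK version.

```python
# Continuing the sketch: the main agent orchestrates via the connected tools.
main_agent = client.create_agent(
    model="gpt-4o",
    name="triage-orchestrator",
    instructions=(
        "You triage support tickets. Use the connected tools to classify the "
        "ticket, set its priority, and assign it to a team, then summarize."
    ),
    # Each ConnectedAgentTool exposes its tool definitions for registration.
    tools=[d for t in triage_tools for d in t.definitions],
)

# Run the complete system: create a thread, post a ticket, process the run.
thread = client.threads.create()
client.messages.create(
    thread_id=thread.id,
    role="user",
    content="Users report the checkout page crashes when applying a coupon.",
)
run = client.runs.create_and_process(thread_id=thread.id, agent_id=main_agent.id)
print("Run status:", run.status)

# The main agent's compiled response comes back on the same thread.
for message in client.messages.list(thread_id=thread.id):
    print(message.role, message.content)
```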

Taking It Further: Integrating MCP Tools with Azure AI Agents

Once you've mastered multi-agent systems, the next level is connecting your agents to external tools and services. The Integrate MCP Tools with Azure AI Agents module teaches you how to use the Model Context Protocol (MCP) to give your agents access to a dynamic catalog of tools.

What is Dynamic Tool Discovery?

Traditionally, adding new tools to an AI agent meant hardcoding each one directly into your agent's code. But what if tools change frequently, or if different teams manage different tools? This approach quickly becomes unmanageable.

Dynamic tool discovery through MCP solves this problem by:

1. Centralizing Tool Management: Tools are defined and managed in a central MCP server
2. Enabling Runtime Discovery: Agents discover available tools during runtime through the MCP client
3. Supporting Automatic Updates: When tools are updated on the server, agents automatically get the latest versions

The MCP Server-Client Architecture

The architecture involves two key components:

1. MCP Server: Acts as a registry for tools, hosting tool definitions decorated with `@mcp.tool`. Tools are exposed over HTTP when requested.

2. MCP Client: Acts as a bridge between your MCP server and Azure AI Agent. It discovers available tools, generates Python function stubs to wrap them, and registers those functions with your agent.

This separation of concerns makes your AI solution more maintainable and adaptable to change.
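
As a rough illustration of the server side, here is a small MCP server sketch using FastMCP from the MCP Python SDK. The inventory data and tool names are invented for the example, and the transport option may vary with your SDK version.

```python
# Minimal sketch of an MCP server acting as a tool registry, using FastMCP
# from the MCP Python SDK. Inventory data and tool names are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-tools")

# A toy in-memory "inventory" standing in for a real data source.
INVENTORY = {"widgets": 42, "gadgets": 7}

@mcp.tool()
def get_stock_level(item: str) -> int:
    """Return the current stock level for an inventory item."""
    return INVENTORY.get(item, 0)

@mcp.tool()
def restock_item(item: str, quantity: int) -> str:
    """Increase the stock level for an item and confirm the new total."""
    INVENTORY[item] = INVENTORY.get(item, 0) + quantity
    return f"{item} restocked; new total is {INVENTORY[item]}."

if __name__ == "__main__":
    # Expose the tools over HTTP so MCP clients can discover and call them.
    mcp.run(transport="streamable-http")
```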

Setting Up MCP Integration

The module guides you through the complete process:

1. Setting up an MCP server with tool definitions
2. Creating an MCP client to connect to the server
3. Dynamically discovering available tools 
4. Wrapping tools in async functions for agent use
5. Registering the tools with your Azure AI agent

Once set up, your agent can use any tool in the MCP catalog as if it were a native function, without any hardcoding required!
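
Below is a hedged sketch of the client side: discovering tools at runtime, generating async wrapper functions, and preparing them for registration with an Azure AI agent. The server URL is a placeholder, AsyncFunctionTool is assumed to come from the azure-ai-agents package, and the module's exercise may structure this differently. Note that the wrappers forward calls through the live session, so the MCP session must stay open for as long as the agent can invoke the tools.

```python
# Sketch of the MCP client side: discover the server's tools at runtime, wrap
# each one in an async Python function, and register those stubs with an
# Azure AI agent. Assumes the mcp and azure-ai-agents packages.
import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
from azure.ai.agents.models import AsyncFunctionTool

MCP_SERVER_URL = "http://localhost:8000/mcp"  # placeholder address

def make_wrapper(session: ClientSession, tool_name: str, description: str):
    """Generate an async stub that forwards calls to the MCP server."""
    async def wrapper(**kwargs) -> str:
        result = await session.call_tool(tool_name, arguments=kwargs)
        return str(result.content)
    # Carry the tool's metadata over so the agent knows what the stub does.
    wrapper.__name__ = tool_name
    wrapper.__doc__ = description
    return wrapper

async def main():
    async with streamablehttp_client(MCP_SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            listed = await session.list_tools()  # runtime discovery
            stubs = {
                make_wrapper(session, t.name, t.description or t.name)
                for t in listed.tools
            }
            functions = AsyncFunctionTool(functions=stubs)
            # Register the stubs when creating the agent, and keep this
            # session open while the agent runs, e.g.:
            # agent = client.create_agent(..., tools=functions.definitions)
            print(f"Discovered {len(stubs)} MCP tools")

asyncio.run(main())
```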

Practical Applications for Student Developers

As a student developer, how might you apply these concepts in real projects?

Classroom Projects:
- Build a research assistant that delegates to specialized agents for different academic subjects
- Create a coding tutor that uses different agents for explaining concepts, debugging code, and suggesting improvements

Hackathons:
- Develop a sustainability app that uses connected agents to analyze environmental data from different sources
- Create a personal finance advisor with specialized agents for budgeting, investment analysis, and financial planning

Personal Portfolio Projects:
- Build a content creation assistant with specialized agents for brainstorming, drafting, editing, and SEO optimization
- Develop a health and wellness app that uses MCP tools to connect to fitness APIs, nutrition databases, and sleep tracking services

Getting Started

Ready to dive in? Both modules include hands-on exercises where you'll build real working examples:
- A ticket triage system using connected agents
- An inventory management assistant that integrates with MCP tools

The prerequisites are straightforward:
- Experience with deploying generative AI models in Azure AI Foundry
- Programming experience with Python or C#

Conclusion

Multi-agent systems and MCP tools integration represent the next evolution in AI application development. By mastering these concepts, you'll be able to build more sophisticated, maintainable, and extensible AI solutions - skills that will make you stand out in internship applications and job interviews.

The best part? These modules are designed with practical, hands-on learning in mind - perfect for student developers who learn by doing. So why not give them a try? Your future AI applications (and your resume) will thank you for it!

Want to learn more about the Model Context Protocol (MCP)? See MCP for Beginners.

Happy coding!
