
IBM Plans To Triple Entry-Level Hiring in the US

IBM said it will triple entry-level hiring in the US in 2026, even as AI appears to be weighing on broader demand for early-career workers. From a report: While the company declined to disclose specific hiring figures, it said the expansion will be "across the board," affecting a wide range of departments. "And yes, it's for all these jobs that we're being told AI can do," said Nickle LaMoreaux, IBM's chief human resources officer, speaking at a conference this week in New York. LaMoreaux said she overhauled entry-level job descriptions for software developers and other roles to make the case internally for the recruitment push. "The entry-level jobs that you had two to three years ago, AI can do most of them," she said at Charter's Leading With AI Summit. "So, if you're going to convince your business leaders that you need to make this investment, then you need to be able to show the real value these individuals can bring now. And that has to be through totally different jobs."

Read more of this story at Slashdot.


Copilot Studio agent security: Top 10 risks you can detect and prevent


Organizations are rapidly adopting Copilot Studio agents, but threat actors are equally fast at exploiting misconfigured AI workflows. Mis-sharing, unsafe orchestration, and weak authentication create new identity and data‑access paths that traditional controls don’t monitor. As AI agents become integrated into operational systems, exposure becomes both easier and more dangerous. Understanding and detecting these misconfigurations early is now a core part of AI security posture.

Copilot Studio agents are becoming a core part of business workflows: automating tasks, accessing data, and interacting with systems at scale.

That power cuts both ways. In real environments, we repeatedly see small, well‑intentioned configuration choices turn into security gaps: agents shared too broadly, exposed without authentication, running risky actions, or operating with excessive privileges. These issues rarely look dangerous until they are abused.

If you want to find and stop these risks before they turn into incidents, this post is for you. We break down ten common Copilot Studio agent misconfigurations we observe in the wild and show how to detect them using Microsoft Defender Advanced Hunting and the relevant community hunting queries.

Short on time? Start with the table below. It gives you a one‑page view of the risks, their impact, and the exact detections that surface them. If something looks familiar, jump straight to the relevant scenario and mitigation.

Each section then dives deeper into a specific risk and recommended mitigations, so you can move from awareness to action, fast.

To find the queries: Security portal > Advanced hunting > Queries > Community queries > AI Agent folder.

1. Agent shared with entire organization or broad groups
   Security impact: Unintended access, misuse, expanded attack surface
   Queries: • AI Agents – Organization or Multi‑tenant Shared

2. Agents that do not require authentication
   Security impact: Public exposure, unauthorized access, data leakage
   Queries: • AI Agents – No Authentication Required

3. Agents with HTTP Request actions using risky configurations
   Security impact: Governance bypass, insecure communications, unintended API access
   Queries: • AI Agents – HTTP Requests to connector endpoints
            • AI Agents – HTTP Requests to non‑HTTPS endpoints
            • AI Agents – HTTP Requests to non‑standard ports

4. Agents capable of email‑based data exfiltration
   Security impact: Data exfiltration via prompt injection or misconfiguration
   Queries: • AI Agents – Sending email to AI‑controlled input values
            • AI Agents – Sending email to external mailboxes

5. Dormant connections, actions, or agents
   Security impact: Hidden attack surface, stale privileged access
   Queries: • AI Agents – Published Dormant (30d)
            • AI Agents – Unpublished Unmodified (30d)
            • AI Agents – Unused Actions
            • AI Agents – Dormant Author Authentication Connection

6. Agents using author (maker) authentication
   Security impact: Privilege escalation, separation‑of‑duties bypass
   Queries: • AI Agents – Published Agents with Author Authentication
            • AI Agents – MCP Tool with Maker Credentials

7. Agents containing hard‑coded credentials
   Security impact: Credential leakage, unauthorized system access
   Queries: • AI Agents – Hard‑coded Credentials in Topics or Actions

8. Agents with Model Context Protocol (MCP) tools configured
   Security impact: Undocumented access paths, unintended system interactions
   Queries: • AI Agents – MCP Tool Configured

9. Agents with generative orchestration lacking instructions
   Security impact: Prompt abuse, behavior drift, unintended actions
   Queries: • AI Agents – Published Generative Orchestration without Instructions

10. Orphaned agents (no active owner)
    Security impact: Lack of governance, outdated logic, unmanaged access
    Queries: • AI Agents – Orphaned Agents with Disabled Owners
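Most of these detections follow the same Advanced Hunting shape: take the latest inventory record per agent, filter on the risky property, and project what an analyst needs for triage. The sketch below illustrates that shape only. AiAgentInventory and its columns are hypothetical placeholders, not a documented Defender schema, so rely on the community queries in the AI Agent folder for the real table and column names.

  // Minimal sketch of the common query shape (hypothetical schema).
  // 'AiAgentInventory' and every column here are placeholders.
  AiAgentInventory
  | where Timestamp > ago(30d)
  | summarize arg_max(Timestamp, *) by AgentId   // keep the latest record per agent
  | project Timestamp, AgentId, AgentName, Owner, SharingScope, AuthenticationMode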

Top 10 risks you can detect and prevent

Imagine this scenario: A help desk agent is created in your organization with simple instructions.

The maker, someone from the support team, connects it to an organizational Dataverse using an MCP tool, so it can pull relevant customer information from internal tables and provide better answers. So far, so good.

Then the maker decides, on their own, that the agent doesn’t need authentication. After all, it’s only shared internally, and the data belongs to employees anyway (see the example in Figure 1). That might already sound suspicious to you. But it doesn’t to everyone.

You might be surprised how often agents like this exist in real environments and how rarely security teams get an active signal when they’re created. No alert. No review. Just another helpful agent quietly going live.

Now here’s the question: Out of the 10 risks described in this article, how many do you think are already present in this simple agent?

The answer comes at the end of the blog.

Figure 1 – Example Help Desk agent.

1: Agent shared with the entire organization or broad groups

Sharing an agent with your entire organization or broad security groups exposes its capabilities without proper access boundaries. While convenient, this practice expands the attack surface. Users unfamiliar with the agent’s purpose might unintentionally trigger sensitive actions, and threat actors with minimal access could use the agent as an entry point.

In many organizations, this risk occurs because broad sharing is fast and easy, often lacking controls to ensure only the right users have access. This results in agents being visible to everyone, including users with unrelated roles or inappropriate permissions. This visibility increases the risk of data exposure, misuse, and unintended activation of sensitive connectors or actions.
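As a rough illustration, a hunting query for this risk only needs to filter agent inventory on its sharing scope. This is a hedged sketch over the same hypothetical AiAgentInventory schema as above, with an arbitrary group-size threshold; the community query "AI Agents – Organization or Multi‑tenant Shared" is the supported detection.

  // Sketch: agents shared org-wide or with very large groups (hypothetical schema).
  AiAgentInventory
  | summarize arg_max(Timestamp, *) by AgentId
  | where SharingScope == "Organization"
      or (SharingScope == "SecurityGroup" and GroupMemberCount > 500)  // threshold is an assumption
  | project AgentId, AgentName, Owner, SharingScope, GroupMemberCount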

2: Agents that do not require authentication

Agents that you can access without authentication, or that only prompt for authentication on demand, create a significant exposure point. When an agent is publicly reachable or unauthenticated, anyone with the link can use its capabilities. Even if the agent appears harmless, its topics, actions, or knowledge sources might unintentionally reveal internal information or allow interactions that were never intended for public access.

This gap appears because authentication was deactivated for testing, left in its default state, or misunderstood as optional. The result is an agent that behaves like a public entry point into organizational data or logic. Without proper controls, this creates a risk of data leakage, unintended actions, and misuse by external or anonymous users.
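Detection here reduces to a simple predicate on the agent’s authentication setting. A minimal sketch over the hypothetical inventory schema from earlier; the community query "AI Agents – No Authentication Required" implements the supported logic.

  // Sketch: published agents reachable without end-user authentication (hypothetical schema).
  AiAgentInventory
  | summarize arg_max(Timestamp, *) by AgentId
  | where IsPublished and AuthenticationMode == "None"
  | project AgentId, AgentName, Owner, SharingScope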

3: Agents with HTTP Request actions using risky configurations

Agents that perform direct HTTP requests introduce unique risks, especially when those requests target non-standard ports, insecure schemes, or sensitive services that already have built-in Power Platform connectors. These patterns often bypass the governance, validation, throttling, and identity controls that connectors provide. As a result, they can expose the organization to misconfigurations, information disclosure, or unintended privilege escalation.

These configurations often appear unintentionally. A maker might copy a sample request, test an internal endpoint, or use HTTP actions for flexibility and convenience during testing. Without proper review, this can lead to agents issuing unsecured calls over HTTP or invoking critical Microsoft APIs directly through URLs instead of secured connectors. Each of these behaviors represents an opportunity for misuse or accidental exposure of organizational data.
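The three HTTP community queries in the table map to checks you can sketch with KQL’s built-in parse_url() function. AiAgentActions and its columns are hypothetical placeholders; the URL parsing itself is standard KQL.

  // Sketch: HTTP Request actions using an insecure scheme or a non-standard port.
  AiAgentActions                                      // hypothetical table
  | where ActionType == "HttpRequest"
  | extend ParsedUrl = parse_url(TargetUrl)
  | extend Scheme = tostring(ParsedUrl.Scheme), Port = toint(ParsedUrl.Port)
  | where Scheme =~ "http"                            // unencrypted endpoint
      or (isnotnull(Port) and Port !in (80, 443))     // non-standard port
  | project AgentId, AgentName, TargetUrl, Scheme, Port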

4: Agents capable of email-based data exfiltration

Agents that send emails using dynamic or externally controlled inputs present a significant risk. When an agent uses generative orchestration to send email, the orchestrator determines the recipient and message content at runtime. In a successful cross-prompt injection (XPIA) attack, a threat actor could instruct the agent to send internal data to external recipients.

A similar risk exists when an agent is explicitly configured to send emails to external domains. Even for legitimate business scenarios, unaudited outbound email can allow sensitive information to leave the organization. Because email is an immediate outbound channel, any misconfiguration can lead to unmonitored data exposure.

Many organizations create this gap unintentionally. Makers often use email actions for testing, notifications, or workflow automation without restricting recipient fields. Without safeguards, these agents can become exfiltration channels for any user who triggers them or for a threat actor exploiting generative orchestration paths.
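One way to sketch this detection is to compare each recipient’s domain against an allowlist. The schema is hypothetical and the domain list is a stand-in for your own; the two email community queries in the table are the supported detections.

  // Sketch: send-email actions targeting recipients outside approved domains.
  let ApprovedDomains = dynamic(["contoso.com", "contoso.net"]);  // replace with your domains
  AiAgentActions                                                  // hypothetical table
  | where ActionType == "SendEmail"
  | extend RecipientDomain = tostring(split(RecipientAddress, "@")[1])
  | where isnotempty(RecipientDomain) and RecipientDomain !in (ApprovedDomains)
  | project AgentId, AgentName, RecipientAddress, RecipientDomain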

5: Dormant connections, actions, or agents within the organization

Dormant agents and unused components might seem harmless, but they can create significant organizational risk. Unmonitored entry points often lack active ownership. These include agents that haven’t been invoked for weeks, unpublished drafts, or actions using maker authentication. When these elements stay in your environment without oversight, they might contain outdated logic or sensitive connections that don’t meet current security standards.

Dormant assets are especially risky because they often fall outside normal operational visibility. While teams focus on active agents, older configurations are easily forgotten. Threat actors frequently target exactly these blind spots. For example:

  • A published but unused agent can still be called.
  • A dormant maker-authenticated action might trigger elevated operations.
  • Unused actions in classic orchestration can expose sensitive connectors if they are activated.

Without proper governance, these artifacts linger as an unmanaged attack surface.
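Dormancy checks are naturally expressed as an anti-join between inventory and recent invocations. Both tables below are hypothetical stand-ins; the four dormancy community queries in the table implement the supported versions.

  // Sketch: published agents with no invocations in the last 30 days.
  AiAgentInventory                                  // hypothetical table
  | summarize arg_max(Timestamp, *) by AgentId
  | where IsPublished
  | join kind=leftanti (
      AiAgentInvocations                            // hypothetical table
      | where Timestamp > ago(30d)
      | distinct AgentId
    ) on AgentId
  | project AgentId, AgentName, Owner, LastModifiedTime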

6: Agents using author authentication

When agents use the maker’s personal authentication, they act on behalf of the creator rather than the end user. In this configuration, every user of the agent inherits the maker’s permissions. If those permissions include access to sensitive data, privileged operations, or high-impact connectors, the agent becomes a path for privilege escalation.

This exposure often happens unintentionally. Makers might allow author authentication for convenience during development or testing because it is the default setting of certain tools. However, once published, the agent continues to run with elevated permissions even when invoked by regular users. In more severe cases, Model Context Protocol (MCP) tools configured with maker credentials allow threat actors to trigger operations that rely directly on the creator’s identity.

Author authentication weakens separation of duties and bypasses the principle of least privilege. It also increases the risk of credential misuse, unauthorized data access, and unintended lateral movement.
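In query terms this is another inventory filter, this time on how the agent’s connections authenticate. Hypothetical schema as before; "AI Agents – Published Agents with Author Authentication" and "AI Agents – MCP Tool with Maker Credentials" are the supported detections.

  // Sketch: published agents whose connections run under the maker's identity.
  AiAgentInventory                                  // hypothetical table
  | summarize arg_max(Timestamp, *) by AgentId
  | where IsPublished and ConnectionAuthenticationMode == "Maker"
  | project AgentId, AgentName, Owner, ConnectionAuthenticationMode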

7: Agents containing hard-coded credentials

Agents that contain hard-coded credentials inside topics or actions introduce a severe security risk. Clear-text secrets embedded directly in agent logic can be read, copied, or extracted by unintended users or automated systems. This often occurs when makers paste API keys, authentication tokens, or connection strings during development or debugging, and the values remain embedded in the production configuration. Such credentials can expose access to external services, internal systems, or sensitive APIs, enabling unauthorized access or lateral movement.

Beyond the immediate leakage risk, hard-coded credentials bypass the standard enterprise controls normally applied to secure secret storage. They are not rotated, not governed by Key Vault policies, and not protected by environment variable isolation. As a result, even basic visibility into agent definitions may expose valuable secrets.
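A rough detection can regex-scan agent definitions for secret-shaped strings. The table and columns are hypothetical and the patterns are illustrative, not exhaustive; matches regex is standard KQL.

  // Sketch: secret-shaped strings embedded in topics or actions.
  AiAgentDefinitions                                // hypothetical table
  | where Definition matches regex @"(?i)(api[_-]?key|client[_-]?secret|password)\s*[:=]\s*\S{8,}"
      or Definition matches regex @"AKIA[0-9A-Z]{16}"   // AWS access key ID format
  | project AgentId, AgentName, ComponentType, ComponentName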

8: Agents with Model Context Protocol (MCP) tools configured

AI agents that include Model Context Protocol (MCP) tools provide a powerful way to integrate with external systems or run custom logic. However, if these MCP tools aren’t actively maintained or reviewed, they can introduce undocumented access patterns into the environment.

This risk arises when MCP configurations are:

  • Activated by default
  • Copied between agents
  • Left active after the original integration is no longer needed

Unmonitored MCP tools might expose capabilities that exceed the agent’s intended purpose. This is especially true if they allow access to privileged operations or sensitive data sources. Without regular oversight, these tools can become hidden entry points through which users or threat actors might trigger unintended system interactions.
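Inventorying MCP tools for periodic review is again a simple filter, here over a hypothetical per-tool table; the community query "AI Agents – MCP Tool Configured" is the supported version.

  // Sketch: all agents with an MCP tool configured, grouped for review.
  AiAgentTools                                      // hypothetical table
  | where ToolType == "MCP"
  | summarize Tools = make_set(ToolName) by AgentId, AgentName, Owner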

9: Agents with generative orchestration lacking instructions

AI agents that use generative orchestration without defined instructions face a high risk of unintended behavior. Instructions are the primary way to align a generative model with its intended purpose. If instructions are missing, incomplete, or misconfigured, the orchestrator lacks the context needed to limit its output. This makes the agent more vulnerable to influence from user inputs or hostile prompts.

A lack of guidance can cause an agent to:

  • Drift from its expected behaviors. The agent might not follow its intended logic.
  • Use unexpected reasoning. The model might follow logic paths that don’t align with business needs.
  • Interact with connected systems in unintended ways. The agent might trigger actions that were never planned.

For organizations that need predictable and safe behavior, missing instructions are a significant configuration gap.
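A sketch of this check filters generative-orchestration agents whose instructions are empty or trivially short. Schema and threshold are assumptions; see "AI Agents – Published Generative Orchestration without Instructions" for the supported query.

  // Sketch: generative agents with missing or minimal instructions.
  AiAgentInventory                                  // hypothetical table
  | summarize arg_max(Timestamp, *) by AgentId
  | where IsPublished and OrchestrationMode == "Generative"
  | where isempty(Instructions) or strlen(Instructions) < 50   // threshold is an assumption
  | project AgentId, AgentName, Owner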

10: Orphaned agents

Orphaned agents are agents whose owners have left the organization or whose accounts have been deactivated. Without a valid owner, no one is responsible for oversight, maintenance, updates, or lifecycle management. These agents might continue to run, interact with users, or access data without an accountable individual ensuring the configuration remains secure.

Because ownerless agents bypass standard review cycles, they often contain outdated logic, deprecated connections, or sensitive access patterns that don’t align with current organizational requirements.
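Orphaned agents can be sketched as a join between agent owners and disabled accounts. AiAgentInventory is a hypothetical placeholder, but IdentityInfo is a real advanced hunting table; treating IsAccountEnabled as available is an assumption about your schema version.

  // Sketch: agents owned by accounts that are no longer enabled.
  AiAgentInventory                                  // hypothetical table
  | summarize arg_max(Timestamp, *) by AgentId
  | join kind=inner (
      IdentityInfo
      | summarize arg_max(Timestamp, *) by AccountObjectId
      | where IsAccountEnabled == false             // column availability is an assumption
    ) on $left.OwnerObjectId == $right.AccountObjectId
  | project AgentId, AgentName, OwnerObjectId, AccountUpn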

Remember the help desk agent we started with? That simple agent setup quietly checked off more than half of the risks in this list.

Keep running the Advanced Hunting queries in the AI Agent folder to find agents carrying these risks in your own environment before it’s too late.

Figure 2 – The example Help Desk agent detected by a query for unauthenticated agents.

From findings to fixes: A practical mitigation playbook

The 10 risks described above manifest in different ways, but they consistently stem from a small set of underlying security gaps: over‑exposure, weak authentication boundaries, unsafe orchestration, and missing lifecycle governance.

Figure 3 – Underlying security gaps.

Damage doesn’t begin with the attack. It starts when risks are left untreated.

The section below is a practical checklist of validations and actions that help close common agent security gaps before they’re exploited. Read it once, apply it consistently, and save yourself the cost of cleaning up later. Fixing security debt is always more expensive than preventing it.

1. Verify intent and ownership

Before changing configurations, confirm whether the agent’s behavior is intentional and still aligned with business needs.

  • Validate the business justification for broad sharing, public access, external communication, or elevated permissions with the agent owner.
  • Confirm whether agents without authentication are explicitly designed for public use and whether this aligns with organizational policy.
  • Review agent topics, actions, and knowledge sources to ensure no internal, sensitive, or proprietary information is exposed unintentionally.
  • Ensure every agent has an active, accountable owner. Reassign ownership for orphaned agents or retire agents that no longer have a clear purpose. For step-by-step instructions, see Microsoft Copilot Studio: Agent ownership reassignment.
  • Validate whether dormant agents, connections, or actions are still required, and decommission those that are not.
  • Perform periodic reviews for agents and establish a clear organizational policy for agents’ creation. For more information, see Configure data policies for agents.

2. Reduce exposure and tighten access boundaries

Most Copilot Studio agent risks are amplified by unnecessary exposure. Reducing who can reach the agent, and what it can reach, significantly lowers risk.

  • Restrict agent sharing to well‑scoped, role‑based security groups instead of entire organizations or broad groups. See Control how agents are shared.
  • Establish and enforce organizational policies defining when broad sharing or public access is allowed and what approvals are required.
  • Enforce full authentication by default. Only allow unauthenticated access when explicitly required and approved. For more information, see Configure user authentication.
  • Limit outbound communication paths:
    • Restrict email actions to approved domains or hard‑coded recipients.
    • Avoid AI‑controlled dynamic inputs for sensitive outbound actions such as email or HTTP requests.
  • Perform periodic reviews of shared agents to ensure visibility and access remain appropriate over time.

3. Enforce strong authentication and least privilege

Agents must not inherit more privilege than necessary, especially through development shortcuts.

  • Replace author (maker) authentication with user‑based or system‑based authentication wherever possible. For more information, see Control maker-provided credentials for authentication and Configure user authentication for actions.

  • Review all actions and connectors that run under maker credentials and reconfigure those that expose sensitive or high‑impact services.
  • Audit MCP tools that rely on creator credentials and remove or update them if they are no longer required.
  • Apply the principle of least privilege to all connectors, actions, and data access paths, even when broad sharing is justified.

4. Harden orchestration and dynamic behavior

Generative agents require explicit guardrails to prevent unintended or unsafe behavior.

  • Ensure clear, well‑structured instructions are configured for generative orchestration to define the agent’s purpose, constraints, and expected behavior. For more information, see Orchestrate agent behavior with generative AI.
  • Avoid allowing the model to dynamically decide:
    • Email recipients
    • External endpoints
    • Execution logic for sensitive actions
  • Review HTTP Request actions carefully:
    • Confirm endpoint, scheme, and port are required for the intended use case.
    • Prefer built‑in Power Platform connectors over raw HTTP requests to benefit from authentication, governance, logging, and policy enforcement.
    • Enforce HTTPS and avoid non‑standard ports unless explicitly approved.

5. Eliminate dead weight and protect secrets

Unused capabilities and embedded secrets quietly expand the attack surface.

  • Remove or deactivate:
    • Dormant agents
    • Unpublished or unmodified agents
    • Unused actions
    • Stale connections
    • Outdated or unnecessary MCP tool configurations
  • Clean up Maker‑authenticated actions and classic orchestration actions that are no longer referenced.
  • Move all secrets to Azure Key Vault and reference them via environment variables instead of embedding them in agent logic.
  • When Key Vault usage is not feasible, enable secure input handling to protect sensitive values.
  • Treat agents as production assets, not experiments, and include them in regular lifecycle and governance reviews.

Effective posture management is essential for maintaining a secure and predictable Copilot Studio environment. As agents grow in capability and integrate with increasingly sensitive systems, organizations must adopt structured governance practices that identify risks early and enforce consistent configuration standards.

The scenarios and detection rules presented in this blog provide a foundation to help you:

  • Discover common security gaps
  • Strengthen oversight
  • Reduce the overall attack surface

By combining automated detection with clear operational policies, you can ensure that your Copilot Studio agents remain secure, aligned, and resilient.

This research is provided by Microsoft Defender Security Research with contributions from Dor Edry and Uri Oren.

Learn more

The post Copilot Studio agent security: Top 10 risks you can detect and prevent appeared first on Microsoft Security Blog.


Harness the power of your data with SQL Server 2025, the AI-ready enterprise database storyboard

From: Microsoft Azure Developers
Duration: 20:54
Views: 20

Would you like to take your search in SQL Server to the next level? In this episode, check out how to use vector search, securely built into SQL Server 2025. #sqlserver2025 #sqlai

🌮 Chapter Markers:
0:00 – Introduction
01:10 – Evolution of SQL
02:44 – Intelligent searching with SQL and AI
06:16 – Model definition
08:55 – Generate embeddings
13:19 – Vector index
15:25 – Vector search
17:50 – Azure OpenAI
19:36 – Wrap up

🌮 Resources
Announcement: https://aka.ms/sqlserver2025blog
Sign-up: https://aka.ms/getsqlserver2025
Learn Docs: https://aka.ms/sqlserver2025docs
Product page: https://aka.ms/sqlserver2025

🌮 Follow us on social:
Scott Hanselman | @SHanselman – https://x.com/SHanselman
Azure Friday | @AzureFriday – https://x.com/AzureFriday
Bob Ward | @bobwardms – https://x.com/bobwardms

Blog - https://aka.ms/azuredevelopers/blog
Twitter - https://aka.ms/azuredevelopers/twitter
LinkedIn - https://aka.ms/azuredevelopers/linkedin
Twitch - https://aka.ms/azuredevelopers/twitch

#azuredeveloper #azure


AI Makes Verifying the Bottleneck with Dave Farley @ModernSoftwareEngineeringYT

From: Hangar DX podcast
Duration: 37:17
Views: 4

"The way that AI is changing software engineering is a bigger shift than object-oriented programming, the internet, and Agile together.", says Dave Farley, author of Continuous Delivery and Modern Software Engineering.

Dave also shares why programming languages were designed to help engineers decompose problems into smaller chunks, the three fundamental problems of AI coding, why verification becomes the bottleneck in AI-assisted coding, and why engineering discipline, test-driven development, and behavior-driven development matter even more in this new era.

00:00 Introduction to Developer Productivity and Experience
02:13 Dave Farley's Journey in Software Engineering
08:23 The Impact of AI on Software Development
11:00 AI Tools and Their Role in Coding
16:39 The Importance of TDD and BDD in AI Development
20:37 Testing and Feedback Loops in AI Programming
25:30 Navigating Ambiguity in Specifications
29:29 Future of Software Architecture with AI
34:55 Adapting to AI in Software Engineering Practices
37:28 Conclusion and Future Perspectives

About Dave Farley
Dave is a pioneer of continuous delivery, a thought leader and expert practitioner in CD, DevOps, TDD, and software design, and shares his expertise through his consultancy, YouTube channel @ModernSoftwareEngineeringYT , books, and training courses. Dave co-authored the definitive book on Continuous Delivery and has published Modern Software Engineering.

About Hangar DX (https://dx.community/)
The Hangar is a community of senior DevOps and senior software engineers focused on developer experience. This is a space where vetted, experienced professionals can exchange ideas, share hard-earned wisdom, troubleshoot issues, and ultimately help each other in their projects and careers.

We invite developers who work in DX and platform teams at their respective companies or who are interested in developer productivity.
More: https://dx.community/


Build Mode is back: Building world-class founding teams


It takes a village to build something great. In Season 2 of Build Mode, we go deep on how to assemble a founding team that signals ambition, execution, and long-term success. Founders and investors share candid lessons on hiring, structuring, and scaling teams that actually win. New episodes coming February 19.





Download audio: https://traffic.megaphone.fm/TCML4556447444.mp3

PPP 497 | A Practical System for Navigating Chaos, with author Richard Carson


Summary

In this episode, Andy talks with Richard Carson, author of The Book of Change. If you feel like you barely finish one change before the next one hits, this conversation is for you. Richard shares his deeply researched and battle-tested framework called People Sustained Organizational Change Management, or PSOCM. Unlike many change management books, this is not about certifications or slogans. It is about building a repeatable system to diagnose problems, distinguish adaptive from transformational change, and gain executive traction when support is not automatic.

You will hear why so many change efforts fail before they even begin, how to craft a clear problem statement, and what leaders often misunderstand about the type of change they are facing. Richard also explains why he chose the phrase "People Sustained" and how thinking structurally about change can even help at home.

If you're looking for practical, grounded insights on leading through continuous change, this episode is for you!

Sound Bites

  • "My advice to you is to anticipate change and manage change before it manages you."
  • "Different change models have been introduced in the literature, but there has not been one coherent model for managing organizational change."
  • "PSOCM is driven by defined actions with statistical metrics that produce measurable results."
  • "You get a free book and the next thing you know you're getting the pitch to hire them at an exorbitant amount of money per hour."
  • "Organizations consist of people, and it is the people who are primarily the problem."
  • "Change management is proactive. Emergency management is reactive."
  • "It is not productive to put the organization on the couch and ask, 'Well, what do you think?'"
  • "You can change a process, but you cannot change a person's underlying psychology."
  • "You now own it, or it now owns you."

Chapters

  • 00:00 Introduction
  • 01:40 Start of Interview
  • 01:54 Family Culture and Early Influences
  • 03:58 Criticisms of Change Management Books and Certifications
  • 06:15 Defining Organizational Change Management in Plain Talk
  • 07:44 What Surprised Him in the History of Change
  • 10:57 Adaptive vs. Transformational Change
  • 14:23 Why He Named It People Sustained Organizational Change Management
  • 20:03 Problem Identification and Writing Effective Problem Statements
  • 24:31 Getting Executive Support When Change Is Not Top Down
  • 26:49 When Benefits Do Not Move Leaders
  • 28:21 One More Idea to Anticipate Change Before It Manages You
  • 30:03 Applying Change Lessons at Home as a Parent
  • 31:36 End of Interview
  • 32:38 Andy Comments After the Interview
  • 35:31 Outtakes

Learn More

You can learn more about Richard and his work at RichardCarson.org. Make sure to get the free ebook download.

For more learning on this topic, check out:

  • Episode 343 with Gary Lloyd. He has a clever metaphor of thinking about change like a gardener, not a mechanic. It's a great discussion that I think you'll find quite practical.
  • Episode 344 with Peter Bregman and Howie Jacobson. Their book is about change, but not at the organizational level. They think you can change other people, which sounds presumptuous at the least. But they back that up in the interview so check out episode 344 for more.
  • Episode 53 with John Kotter. He's one of the most famous names when it comes to change management. Go way back to episode 53 to hear from John directly.

Pass the PMP Exam

If you or someone you know is thinking about getting PMP certified, we've put together a helpful guide called The 5 Best Resources to Help You Pass the PMP Exam on Your First Try. We've helped thousands of people earn their certification, and we'd love to help you too. It's totally free, and it's a great way to get a head start.

Just go to 5BestResources.PeopleAndProjectsPodcast.com to grab your copy. I'd love to help you get your PMP this year!

Join Us for LEAD52

I know you want to be a more confident leader—that's why you listen to this podcast. LEAD52 is a global community of people like you who are committed to transforming their ability to lead and deliver. It's 52 weeks of leadership learning, delivered right to your inbox, taking less than 5 minutes a week. And it's all for free. Learn more and sign up at GetLEAD52.com. Thanks!

Talent Triangle: Business Acumen

Topics: Change Management, Organizational Change, Leadership, Executive Sponsorship, Problem Identification, Adaptive Change, Transformational Change, Strategic Thinking, Organizational Culture, Project Leadership, Continuous Improvement, Stakeholder Engagement

The following music was used for this episode:

Music: Lullaby of Light feat Cory Friesenhan by Sascha Ende
License (CC BY 4.0): https://filmmusic.io/standard-license

Music: Tropical Vibe by WinnieTheMoog
License (CC BY 4.0): https://filmmusic.io/standard-license

Thank you for joining me for this episode of The People and Projects Podcast!





Download audio: https://traffic.libsyn.com/secure/peopleandprojectspodcast/497-RichardCarson.mp3?dest-id=107017