Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Trump announces billions in investments to make Pennsylvania an AI hub


President Donald Trump helped announce more than $90 billion in investments in AI and energy at an event in Pennsylvania on Tuesday.

Those investments include multi-billion-dollar commitments from Google. The company is planning to invest $25 billion to build data centers and AI infrastructure for the electric grid in the PJM region, according to remarks from Google president and chief investment officer Ruth Porat. It is also announcing a $3 billion US hydropower deal with Brookfield Energy.

CoreWeave, a cloud computing company, announced plans to invest more than $6 billion to build a new data center “to power the most cutting-edge AI use cases” in Pennsylvania. Meta will invest $2.5 million to “support startups in rural Pennsylvania communities in addition to community accelerator training for small businesses,” according to a fact sheet. Anthropic will commit $1 million over three years to support a program that provides cybersecurity education and an additional $1 million over three years to “support energy research at Carnegie Mellon University.”

The commitments included investments from some gas companies, too. Enbridge plans to invest $1 billion to “expand” its gas pipelines “into Pennsylvania,” per the fact sheet. Equinor is investing $1.6 billion to “boost natural gas production at Equinor’s Pennsylvania facilities and explore opportunities to link gas to flexible power generation for data centers.”

The investments were announced as part of the “inaugural” Pennsylvania Energy and Innovation Summit. Pennsylvania is a leading gas-producing state and an epicenter of the fracking boom in the US. Trump repeated calls to “drill, baby, drill” during the event.

Update, July 15th: Added link to remarks from Ruth Porat.

Read the whole story
alvinashcraft
2 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

MCP: A Practical Security Blueprint for Developers


Imagine a world where your coding environment isn’t just a tool, but a true partner. It anticipates your needs, connects smoothly with your databases and infrastructure, and even talks to your command-line tools. This isn’t just a future dream; it’s what Model Context Protocol (MCP) promises. MCP extends advanced agentic coding tools like Cursor, Windsurf or Cline, making your work faster, smoother and incredibly powerful by connecting it to external context and tools.

But with great power comes great responsibility. This same advanced technology, while freeing for developers, can also expose your systems and data to new and dangerous weaknesses. So, the real question isn’t whether MCP improves the developer experience, but how can you adopt it without putting your security at risk?

Real-World Alarms: Lessons From the Field

The journey to using MCP safely isn’t just about theories; it’s guided by “wake-up calls” from actual events. These aren’t just scary stories; they’re crucial lessons for any developer embracing this game-changing technology.

Take the example of Anthropic’s MCP Inspector. This tool, meant to help debug MCP servers, had a serious flaw. Because it lacked proper security between its client and proxy, unauthorized requests could launch MCP commands. This was a clear reminder: even tools designed for security need strong security themselves.

Another warning came from @cyanheads/git-mcp-server. Before version 2.1.5, this server, which helps with Git projects, was open to command injection. This meant that if inputs weren’t cleaned properly, an attacker could inject their own system commands, turning a helpful tool into a weapon.

And it’s not just about developer tools. The WordPress AI Engine plugin, before version 2.1.5, had a security flaw where it didn’t properly check user permissions. This could lead to unauthorized changes or loss of data. These incidents highlight a key point: The convenience of AI integration should never come at the cost of strong security.

Your Security Blueprint: Using MCP Safely

So, how do you use the huge potential of MCP without inviting disaster? It starts with always thinking about security, building it into every step of development and adoption.

The Hidden Threat: Prompt Injection

Imagine your AI assistant, following instructions carefully, suddenly sending confidential files to an attacker. This is the danger of prompt injection. Here, harmful commands are hidden in normal-looking text that your AI processes. The AI, unaware of the hidden commands, carries out actions you never approved.

How Developers Can Fix It:

  • Always show tool actions for user approval: Being clear is your best defense. Before any action happens, make it obvious and require the user to say “yes.”
  • Block or clean up suspicious patterns: Be alert for hidden text or tricky Unicode characters meant to fool the AI.
  • Use strongly typed tools: Building your server with strongly typed tools can greatly lower the risk of remote code execution. By clearly showing each action and needing user approval, you create a vital safety check where a human is always involved. Salesforce DX MCP Server, for example, uses TypeScript libraries to do this.
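A minimal sketch of the typed-tool-plus-approval pattern in Python. The `DeleteFileArgs`, `confirm`, and `delete_file` names are illustrative, not part of any real MCP SDK:

```python
# Illustrative sketch: a strongly typed tool with a human-approval gate.
# None of these names come from a real MCP SDK.
from dataclasses import dataclass

@dataclass(frozen=True)
class DeleteFileArgs:
    path: str  # typed schema: only a string path is accepted

def confirm(action: str, approver=input) -> bool:
    """Require an explicit 'yes' from the user before acting."""
    return approver(f"Allow '{action}'? [yes/no] ").strip().lower() == "yes"

def delete_file(args: DeleteFileArgs, approver=input) -> str:
    """The tool shows the exact action and waits for approval."""
    action = f"delete {args.path}"
    if not confirm(action, approver):
        return "denied by user"
    # ... perform the deletion here ...
    return f"deleted {args.path}"
```

The typed argument class rejects free-form payloads at the boundary, and the approval gate keeps a human in the loop before anything destructive runs.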

The Silent Theft: Token Theft

MCP servers often store OAuth tokens to connect to other online services. If these tokens are stolen, attackers can pretend to be users, often without anyone noticing. This isn’t just about one service being hacked; a single breach can lead to attacks across many services, giving attackers widespread access to your digital world.

How Developers Can Fix It:

  • Use tokens that don’t last long and have limited access: The shorter a token is valid and the fewer things it can do, the less harm it can cause if stolen.
  • Encrypt tokens when they’re stored and change them regularly: Treat your tokens like valuable money — protect them well and update them often.
  • Adopt a “zero secrets” approach: Using a tool like Salesforce DX MCP Server, for instance, means developers don’t need to put secrets in their settings, removing the risk of plain-text secrets being exposed. Plus, it focuses on passing usernames instead of tokens, which greatly reduces the chances of an attack.
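A minimal sketch of short-lived, scope-limited tokens using only the Python standard library. The HMAC signing scheme and claim names here are illustrative; a real deployment would use a vetted token library and a managed secret store:

```python
# Illustrative sketch: short-lived, scope-limited tokens signed with HMAC.
# Key management and encrypted storage are out of scope here.
import base64
import hashlib
import hmac
import json
import secrets
import time

KEY = secrets.token_bytes(32)  # in practice, load from a secret store

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).decode()

def issue_token(subject: str, scopes: list, ttl_seconds: int = 300) -> str:
    """Issue a token that names its scopes and expires quickly."""
    payload = json.dumps({"sub": subject, "scopes": scopes,
                          "exp": time.time() + ttl_seconds}).encode()
    sig = hmac.new(KEY, payload, hashlib.sha256).digest()
    return _b64(payload) + "." + _b64(sig)

def verify_token(token: str):
    """Return the claims if the signature is valid and unexpired, else None."""
    p64, s64 = token.split(".")
    payload = base64.urlsafe_b64decode(p64)
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(base64.urlsafe_b64decode(s64), expected):
        return None
    claims = json.loads(payload)
    if claims["exp"] < time.time():
        return None  # a stolen token stops working within minutes
    return claims
```

Because the expiry is baked into the signed payload, a stolen token loses its value within the TTL window, and the explicit scope list keeps its blast radius small.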

Too Much Access: Overbroad Permissions

Many MCP servers, aiming for ease of use, ask for full access to your systems even when they only need to read information. This seemingly small over-request can have serious results. A compromised tool with too much access could leak your entire email inbox, your whole drive — even files you never meant to share.

How Developers Can Fix It:

  • Follow the principle of least privilege: Only give the exact permissions needed. Think of it like giving a guest only the keys to the rooms they need to enter, not your whole house.
  • Carefully review access permissions: Regularly check and trim permissions to make sure they match what’s actually needed.
  • Implement detailed access control: The Salesforce DX MCP Server is a good example of this. It can get authentication info only for organizations that have been clearly allowed, and users specify these allowed organizations when they start the server.
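The allow-list idea can be sketched in a few lines of Python. The org names, tool names, and permission labels below are hypothetical:

```python
# Illustrative sketch: allow-listed orgs plus per-tool permission checks.
# A real server would load these from configuration supplied at startup.
ALLOWED_ORGS = {"acme-dev", "acme-staging"}
TOOL_PERMISSIONS = {"query_records": "read", "update_records": "write"}

def authorize(org: str, tool: str, granted: set) -> bool:
    """Grant access only if the org is allow-listed and the exact
    permission this tool needs has been granted -- nothing broader."""
    if org not in ALLOWED_ORGS:
        return False
    return TOOL_PERMISSIONS.get(tool) in granted
```

A read-only grant never unlocks a write tool, and an org that was never allow-listed is refused even with broad permissions.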

The Hidden Danger: Malicious or Unchecked Tools

The open nature of the MCP ecosystem can be a mixed blessing. Third-party MCP servers, while convenient, might contain hidden or harmful behaviors. This brings the risk of internal data being stolen and serious supply-chain attacks, where a problem in one part can affect your entire system.

How Developers Can Fix It:

  • Review source code for security: Before letting developers use any MCP server, do a thorough security check of its source code.
  • Choose official, signed software: Pick MCP servers from trusted sources that have verified digital signatures.
  • Keep a list of approved MCP servers: Create a curated list of MCP servers that have been checked and avoid automatic updates without a security review first.
  • Enforce internal rules: At Salesforce, for example, developers are only allowed to use MCP servers that have been approved by security.
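A curated allow-list can also pin each approved server to a known digest, so a silently updated artifact fails the check. A sketch, with placeholder names and digests:

```python
# Illustrative sketch: pin each approved MCP server artifact to a
# known SHA-256 digest so unreviewed updates fail closed.
import hashlib

APPROVED = {  # name -> expected digest (placeholder values)
    "git-server": hashlib.sha256(b"vetted-build-1.0").hexdigest(),
}

def is_approved(name: str, artifact: bytes) -> bool:
    """Allow only artifacts whose digest matches the curated list."""
    expected = APPROVED.get(name)
    return expected is not None and \
        hashlib.sha256(artifact).hexdigest() == expected
```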

Open Doors: Unsafe Defaults and Network Exposure

Early MCP connectors, in their effort to be easy to use, often ran on 0.0.0.0 without any authentication or encryption. This basically left an open door, allowing attackers to exploit tools just by visiting a webpage. This oversight created a direct path to remote code execution (RCE) vulnerabilities and immediate data theft.

How Developers Can Fix It:

  • Always use HTTPS: Encrypt all communication to protect data as it travels.
  • Use OAuth-based authentication: Securely confirm the identity of users and applications accessing your MCP server.
  • Use secure cloud-based solutions: For example, Salesforce’s Heroku Remote MCP Server uses OAuth for secure authentication to support secure default settings and cloud integration.
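A startup check that refuses the unsafe defaults described above might look like this sketch (the field names are illustrative):

```python
# Illustrative sketch: validate server settings before listening,
# rejecting the 0.0.0.0-with-no-auth pattern described above.
from dataclasses import dataclass

@dataclass
class ServerConfig:
    bind_host: str = "127.0.0.1"   # safe default: localhost only
    use_tls: bool = True
    auth: str = "oauth"            # "oauth" or "none"

def validate(cfg: ServerConfig) -> list:
    """Return a list of problems; an empty list means safe to start."""
    problems = []
    if cfg.bind_host == "0.0.0.0" and cfg.auth == "none":
        problems.append("publicly bound with no authentication")
    if not cfg.use_tls:
        problems.append("traffic is unencrypted; use HTTPS")
    return problems
```

Making the safe configuration the default constructor means a developer has to opt in, explicitly, to the risky settings.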

Beyond the Basics: Constant Watchfulness

Adopting MCP safely isn’t a one-time task; it’s an ongoing commitment to being watchful and always improving.

  • Review your code and design: Regularly check the security of your MCP setup.
  • Log and monitor every tool call: Detailed logging provides a record and helps detect unusual activity early.
  • Stay informed and update: Pay attention to security warnings and quickly update your MCP servers to the newest, most secure versions.
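Logging every tool call is easy to retrofit with a decorator. A Python sketch, with a hypothetical `read_file` tool:

```python
# Illustrative sketch: wrap every tool in an audit log so unusual
# activity leaves a trail. The read_file tool below is hypothetical.
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mcp.audit")

def audited(tool):
    """Log each call's name, arguments, and outcome."""
    @functools.wraps(tool)
    def wrapper(*args, **kwargs):
        log.info("tool=%s args=%r kwargs=%r", tool.__name__, args, kwargs)
        try:
            result = tool(*args, **kwargs)
            log.info("tool=%s status=ok", tool.__name__)
            return result
        except Exception:
            log.exception("tool=%s status=error", tool.__name__)
            raise
    return wrapper

@audited
def read_file(path: str) -> str:
    return f"<contents of {path}>"
```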

Conclusion: Empowering Developers, Responsibly

Model Context Protocol is truly a game changer. It offers advanced developer experiences that blend human intention with machine execution. But this power comes with a crucial demand: security must be a top priority in its design.

By using methods like typed tools, avoiding storing secrets in plain text, giving only necessary access, and making security the default, you can turn MCP from potential weaknesses into powerful, safe tools in your AI toolkit.

It’s also encouraging to see so many proposals from the enterprise community to improve the security design of the protocol. The future of development is smart and connected; let’s make sure it’s secure too.

The post MCP: A Practical Security Blueprint for Developers appeared first on The New Stack.


CollabLLM: Teaching LLMs to collaborate with users


Large language models (LLMs) can solve complex puzzles in seconds, yet they sometimes struggle with simple conversations. When these AI tools make assumptions, overlook key details, or neglect to ask clarifying questions, the result can erode trust and derail real-world interactions, where nuance is everything.

A key reason these models behave this way lies in how they’re trained and evaluated. Most benchmarks use isolated, single-turn prompts with clear instructions. Training methods tend to optimize for the model’s next response, not its contribution to a successful, multi-turn exchange. But real-world interaction is dynamic and collaborative. It relies on context, clarification, and shared understanding.

User-centric approach to training 

To address this, we’re exploring ways to train LLMs with users in mind. Our approach places models in simulated environments that reflect the back-and-forth nature of real conversations. Through reinforcement learning, these models improve through trial and error, for example, learning when to ask questions and how to adapt tone and communication style to different situations. This user-centric approach helps bridge the gap between how LLMs are typically trained and how people actually use them.  

This is the concept behind CollabLLM (opens in new tab), recipient of an ICML (opens in new tab) Outstanding Paper Award (opens in new tab). This training framework helps LLMs improve through simulated multi-turn interactions, as illustrated in Figure 1. The core insight behind CollabLLM is simple: in a constructive collaboration, the value of a response isn’t just in its immediate usefulness, but in how it contributes to the overall success of the conversation. A clarifying question might seem like a delay but often leads to better outcomes. A quick answer might appear useful but can create confusion or derail the interaction.

Figure 1. Diagram comparing two training approaches for LLMs. (a) The standard method lacks user-agent collaboration and uses single-turn rewards, leading to an inefficient conversation. (b) In contrast, CollabLLM simulates multi-turn user-agent interactions during training, enabling it to learn effective collaboration strategies and produce more efficient dialogues.

CollabLLM puts this collaborative approach into practice with a simulation-based training loop, illustrated in Figure 2. At any point in a conversation, the model generates multiple possible next turns by engaging in a dialogue with a simulated user.

Figure 2: Simulation-based training process used in CollabLLM. For a given conversational input, the LLM and a user simulator sample conversation continuations, which are scored with multiturn-aware rewards that in turn update the model's parameters.

The system uses a sampling method to extend conversations turn by turn, choosing likely responses for each participant (the AI agent or the simulated user), while adding some randomness to vary the conversational paths. The goal is to expose the model to a wide variety of conversational scenarios, helping it learn more effective collaboration strategies.
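The turn-by-turn branching can be illustrated with a toy sampler. This is not CollabLLM's actual code; the fixed candidate replies below merely stand in for model and user-simulator outputs:

```python
# Toy sketch of branching conversation rollouts: extend a dialogue turn
# by turn, sampling each participant's reply with some randomness so
# different runs explore different conversational paths.
import random

AGENT_CANDIDATES = ["Here's a draft.", "What tone do you want?", "Any length limit?"]
USER_CANDIDATES = ["Formal, please.", "Shorter.", "Looks good."]

def sample_conversation(history, turns, rng):
    """Extend the conversation by alternating agent and user turns."""
    convo = list(history)
    for t in range(turns):
        pool = AGENT_CANDIDATES if t % 2 == 0 else USER_CANDIDATES
        convo.append(rng.choice(pool))  # randomness varies the path
    return convo

def sample_branches(history, n_branches, turns, seed=0):
    """Sample several alternative continuations of the same history."""
    rng = random.Random(seed)
    return [sample_conversation(history, turns, rng) for _ in range(n_branches)]
```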


To each simulated conversation, we applied multiturn-aware reward (MR) functions, which assess how the model’s response at a given turn influences the entire trajectory of the conversation. We sampled multiple conversational follow-ups from the model (such as statements, suggestions, and questions) and used MR to assign a reward to each based on how well the conversation performed in later turns. We based these scores on automated metrics that reflect key factors like goal completion, conversational efficiency, and user engagement.

To score the sampled conversations, we used task-specific metrics and metrics from an LLM-as-a-judge framework, which supports efficient and scalable evaluation. For metrics like engagement, a judge model rates each sampled conversation on a scale from 0 to 1.

The MR of each model response was computed by averaging the scores of the sampled conversations that originated from that response. Based on these scores, the model updates its parameters using established reinforcement learning algorithms like Proximal Policy Optimization (PPO) or Direct Preference Optimization (DPO).
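In toy form, the MR computation reduces to averaging rollout scores. The way the metrics are combined and the numbers below are illustrative, not the paper's actual reward weights:

```python
# Toy sketch of a multiturn-aware reward (MR): a candidate response's
# reward is the mean score of the simulated conversations that continue
# from it. The equal weighting below is an illustrative placeholder.
def conversation_score(goal_completion, efficiency, engagement):
    """Combine automated metrics and 0-1 judge ratings into one score."""
    return (goal_completion + efficiency + engagement) / 3.0

def multiturn_reward(sampled_scores):
    """Average the scores of all rollouts from one candidate response."""
    return sum(sampled_scores) / len(sampled_scores)

# Candidate A (a clarifying question) vs. candidate B (a quick answer):
mr_a = multiturn_reward([conversation_score(0.9, 0.8, 0.9),
                         conversation_score(0.8, 0.9, 0.8)])
mr_b = multiturn_reward([conversation_score(0.6, 0.9, 0.5),
                         conversation_score(0.5, 0.8, 0.4)])
```

Here the clarifying question earns the higher MR because its rollouts end better, even though a quick answer might look more useful in the moment.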

We tested CollabLLM through a combination of automated and human evaluations, detailed in the paper. One highlight is a user study involving 201 participants in a document co-creation task, shown in Figure 3. We compared CollabLLM to a baseline trained with single-turn rewards and to a second, more proactive baseline prompted to ask clarifying questions and take other proactive steps. CollabLLM outperformed both, producing higher-quality documents, better interaction ratings, and faster task completion times.

Figure 3: Results of the user study in a document co-creation task comparing CollabLLM to a baseline trained with single-turn rewards and a proactive baseline. Relative to the best baseline, CollabLLM improved document quality ratings (+0.12) and interaction ratings (+0.14) and reduced average user time by 129 seconds.

Designing for real-world collaboration

Much of today’s AI research focuses on fully automated tasks, models working without input from or interaction with users. But many real-world applications depend on people in the loop: as users, collaborators, or decision-makers. Designing AI systems that treat user input not as a constraint, but as essential, leads to systems that are more accurate, more helpful, and ultimately more trustworthy.

This work is driven by a core belief: the future of AI depends not just on intelligence, but on the ability to collaborate effectively. And that means confronting the communication breakdowns in today’s systems.

We see CollabLLM as a step in that direction, training models to engage in meaningful multi-turn interactions, ask clarifying questions, and adapt to context. In doing so, we can build systems designed to work with people—not around them.


The post CollabLLM: Teaching LLMs to collaborate with users appeared first on Microsoft Research.


Mastering PowerShell Get-ADPrincipalGroupMembership


This guide covers syntax, practical examples, common use cases, and troubleshooting tips to assist with your daily AD management tasks.

What does Get-ADPrincipalGroupMembership do?

The Get-ADPrincipalGroupMembership cmdlet gets the Active Directory (AD) groups that have a specified user, group, computer, or service account as a member. You will need to make sure a Global Catalog is online in your forest for this cmdlet to function correctly.

IT pros and AD admins often need to determine which security groups a user account belongs to. Related cmdlets like ‘Get-ADUser’ or ‘Get-ADGroup’ can help, but Get-ADPrincipalGroupMembership showcases the efficiency of PowerShell: you don’t need the Active Directory Users and Computers management console; you can do it all from the command line.

When you need to satisfy audits or compliance reporting, you can use this cmdlet and build a PowerShell script based on it to handle these requests with ease.

Get-ADPrincipalGroupMembership parameters and syntax

The -Identity parameter specifies the user, group, or computer object that you want to get group memberships for. You can specify them by globally unique identifier (GUID), distinguished name (DN), security identifier (SID), or SAM account name. I’ll go through some examples of syntax next.

  • sAMAccountName: Legacy logon name, used for backward compatibility.
  • GUID: Unique identifier for Active Directory objects, remains constant.
  • SID: Security identifier for access control, unique within a domain.

Here are the fundamentals of the cmdlet and the core parameters you can utilize with Get-ADPrincipalGroupMembership:

  • AuthType: Specifies the authentication method to use (Negotiate or Basic; Basic requires an SSL connection).
  • Credential: Specifies the user credentials, if necessary, that the command should use.
  • Identity: Specifies the AD principal object to query.
  • Partition: Specifies the distinguished name of an AD partition.
  • ResourceContextPartition: Specifies the DN of the partition of an AD or AD LDS instance.
  • ResourceContextServer: Specifies the domain in which to return the list of groups that the user or group is a member of.
  • Server: Specifies the AD DS instance (domain controller) to connect to.

Get-ADPrincipalGroupMembership command parameters

Get-ADPrincipalGroupMembership examples

Let’s start with identifying what groups the built-in administrator is a member of.

Get-ADPrincipalGroupMembership -Identity administrator
Getting the group memberships for the built-in administrator account – Image Credit: Michael Reinders/Petri.com

We can make this output easier to read by piping it to the Format-Table (ft) cmdlet.

Get-ADPrincipalGroupMembership -Identity administrator | ft
Discovering the group membership of the ‘administrator’ account using PowerShell – Image Credit: Michael Reinders/Petri.com

That’s more readable. Let’s next look at the groups my ‘main’ account is in.

Get-ADPrincipalGroupMembership -Identity mreinders | ft
Looking at the groups my ‘daily driver’ account is in – need to make changes – Image Credit: Michael Reinders/Petri.com

You’ll see common groups here, like ‘Domain Users’. However, do you see an issue? I do: Schema Admins. This is a very simple way to audit your memberships. No account should remain in Schema Admins except while a schema change is actually underway, so I will certainly remove that group from this account.

We can also query child or resource domains. We’ll use this command to get the group memberships of the built-in administrator in my child domain – corp.reinders.local.

Get-ADPrincipalGroupMembership -Identity administrator -ResourceContextServer corp.reinders.local -ResourceContextPartition "DC=reinders,DC=local"
Using Get-ADPrincipalGroupMembership to get groups from a child/resource domain – Image Credit: Michael Reinders/Petri.com

It is similar, but it is pulling data from the child/resource domain. So, you can imagine being able to write a script that would scour your entire forest, enumerate all the ‘admin’ users, and get their memberships in a nice, tidy CSV file using ‘Export-CSV’ for analysis.

Let’s do an example of getting the group memberships of a computer object. The syntax is very similar.

Get-ADPrincipalGroupMembership -Identity 'CN=W11-CLIENT02,OU=Domain Windows Computers,DC=reinders,DC=local'
Getting group memberships of a computer object – Image Credit: Michael Reinders/Petri.com

This is a simple example – the computer is in ‘Domain Computers.’ However, again, you can see the power here with troubleshooting computer objects not receiving correct Group Policy Objects, etc.

Note – you can also pipe that output to the ‘Select-Object’ cmdlet, like this: ‘| Select-Object Name’, to display only the name of each group.

Filtering groups using Get-ADPrincipalGroupMembership

When encountering a user, group, or computer with many group memberships, you may want to filter them at the outset. You can use this command as an example.

Get-ADPrincipalGroupMembership -Identity administrator | Select Name | Where-Object {$_.Name -like 'G*'}
Filtering what groups are returned using ‘Where-Object’ – Image Credit: Michael Reinders/Petri.com

We can see there is a single group that starts with ‘G’ in the output. Wonderful.

Security and final words

One of the core facets of using Get-ADPrincipalGroupMembership is auditing and verifying that user, group, and computer accounts are in the least number of groups necessary. This is known as least privilege security, where rights assigned are just enough for users to do their jobs and groups and service accounts to perform their functions.

Although ‘Get-ADGroupMember’ is used more often, the ‘Get-ADPrincipalGroupMembership’ cmdlet is useful for identifying which security groups the users, computers, and group objects in your environment belong to. Creating an automated task to fire off a PowerShell script every week and save the output to a network file share, or a SharePoint or Teams site, will go far in identifying potential security incidents or breaches.

Thank you for reading my post on this topic. Please feel free to leave a comment or question below.

Frequently Asked Questions

How to get AD user group membership?

To retrieve the group membership of an Active Directory (AD) user, use the Get-ADPrincipalGroupMembership cmdlet. This command returns all the groups that a specified user (or computer) is a direct member of.

Example:

Get-ADPrincipalGroupMembership -Identity jdoe

This will list all groups that the user jdoe belongs to, including security and distribution groups.

How to extract AD group members?

To extract members of a specific AD group, use the Get-ADGroupMember cmdlet.

Example:

Get-ADGroupMember -Identity "HR Team"

This command returns all direct members of the group named “HR Team”. To get detailed info like email addresses or department, pipe it into Get-ADUser or Select-Object.

How do I check which AD group I belong to?

If you want to check your own group memberships, open PowerShell and run:

# Strip the DOMAIN\ prefix so -Identity receives a SAM account name
$User = [System.Security.Principal.WindowsIdentity]::GetCurrent().Name.Split('\')[1]
Get-ADPrincipalGroupMembership -Identity $User

This dynamically fetches the currently logged-in user and lists their group memberships.

How do I find the user group membership in PowerShell?

Use the Get-ADPrincipalGroupMembership cmdlet with the appropriate -Identity parameter (username or distinguished name).

Example:

Get-ADPrincipalGroupMembership -Identity "samaccountname"

For richer output, you can format the results:

Get-ADPrincipalGroupMembership "jdoe" | Select-Object Name, GroupCategory, GroupScope

This provides group name, category (Security or Distribution), and scope (Global, Universal, or DomainLocal).

The post Mastering PowerShell Get-ADPrincipalGroupMembership appeared first on Petri IT Knowledgebase.


Using agent mode to refactor and iterate on a project

From: VisualStudio
Duration: 9:38
Views: 140

Did you know that GitHub Copilot can act as your teammate during development? Watch as Simona uses natural language prompts to refactor and iterate on a new C++ project, transforming it from a single-file console app to a multi-file Windows app. With GitHub Copilot's detailed guidance, tool calling capabilities, and automatic error checking, it's easy to extend and improve your codebase. Whether you're fixing errors, refactoring code, or adding new features, GitHub Copilot helps you every step of the way, making development faster and more intuitive.

⌚ Chapters:
00:00 Intro
00:19 Demo - Refactoring and Iterating with Agent Mode
08:21 In Summary
09:25 Wrap

🎙️ Featuring: Simona Liao

#visualstudio #githubcopilot


Grok thinks it’s Mecha Hitler, and AIs can think strategically


In episode 59 of the AI Fix, our hosts ponder whether AIs need a “disagreement dial”, Mark wonders what he could do with an AI-powered “drug design engine”, Graham plays Wolfenstein instead of working, a robot graduates from high school, and a popular rock group is unmasked as an AI fever dream.

Graham explains why Grok thinks it’s Mecha Hitler, and Mark reveals which AI is most likely to betray you.

Episode links:



The AI Fix

The AI Fix podcast is presented by Graham Cluley and Mark Stockley.

Learn more about the podcast at theaifix.show, and follow us on Bluesky at @theaifix.show.

Never miss another episode by following us in your favourite podcast app. It's free!

Like to give us some feedback or sponsor the podcast? Get in touch.



Advertising Inquiries: https://redcircle.com/brands

Privacy & Opt-Out: https://redcircle.com/privacy



Download audio: https://pdst.fm/e/pscrb.fm/rss/p/clrtpod.com/m/audio3.redcircle.com/episodes/782c8782-47c9-4445-9bd4-55a4364ee998/stream.mp3