
How we test a web framework


An AI generated image of Wasp mascot thinking about testing the web framework

Wasp is a compiler-driven full-stack web framework; it takes configuration and source files with your unique logic, and it generates the complete source code of your web app.

As a result of our approach and somewhat unique design, we have a large surface area to test. Every layer can break in its own creative way, and a strong suite of automated tests is what keeps us (somewhat) sane.

In this article, our goal is to demonstrate the practical side of testing in a compiler-driven full-stack framework, where traditional testing intersects with code generation and developer experience.

Overview of Wasp ecosystem

Our approach to tests

If we wanted to reduce our principle to a single sentence, it would be: We believe that test code deserves the same care as production code.

Bad tests slow you down. They make you afraid to change things. So our principle is simple: if a piece of test code matters enough to catch a bug, it matters enough to be well-designed. We refactor it. We name things clearly. We make it easy to read and reason about.

It’s not new or revolutionary; it’s just consistent care, applied where most people stop caring.

Tests that explain themselves

Our guiding principle is that tests should be readable at a glance, without requiring an understanding of the machinery hiding underneath. That’s why we write them so that the essence of the test, the input and expected output, comes first. Supporting logic and setup come afterward, for those who want to dig into the details.

spec_kebabToCamelCase :: Spec
spec_kebabToCamelCase = do
  "foobar" ~> "foobar"
  "foo-bar-bar" ~> "fooBarBar"
  "foo---bar-baz" ~> "fooBarBaz"
  "-foo-" ~> "foo"
  -- ...
  "--" ~> ""
  "" ~> ""
  where
    kebab ~> camel = it (kebab ++ " -> " ++ camel) $ do
      kebabToCamelCase kebab `shouldBe` camel

That rule naturally connects to the next one: tests should be descriptive enough that you can understand their essence without additional comments. That’s why sometimes we end up with beautifully long descriptions like this:

spec_WriteFileDrafts :: Spec
spec_WriteFileDrafts =
  describe "fileDraftsToWriteAndFilesToDelete" $ do
    it "should write and delete nothing if there are no checksums and no file drafts" $
      -- ...
    it "should write new (not in checksums list) and updated (in checksums list but different checksum) files drafts and delete redundant files (in checksums but have no corresponding file draft)" $ do
      -- ...

The nice thing about writing tests in Haskell is how easy it is to build tiny DSLs that make tests readable. And for us, reading code is much more important than writing it; we even leaned into Unicode operators for math operations. But the boundary between clarity and productivity can be tricky when you realize nobody remembers how to type “⊆”.

  describe "isSubintervalOf" $ do
-- ...
[vi| [4, inf) |][vi| [3, inf) |] ~> True
[vi| (3, inf) |][vi| [3, inf) |] ~> True
-- ...
[vi| (inf, inf) |][vi| (inf, inf) |] ~> True
[vi| [2, 2.5) |][vi| (1, 2.6] |] ~> True

Courage not coverage

Chasing 100% coverage is tempting. It feels complete. But it’s also hard to achieve, and it can push you to spend time testing code paths that don’t really matter. The number looks good in the report, but while getting there, you miss out on testing potentially important things.

The goal is for our combined tests to catch nearly all meaningful errors. We aim for “courage”: confidence that if something breaks, we’ll know fast.

TDD (but not the one you think)

We've always liked the idea of test-driven development, but it never really stuck for us. In practice, we’d start coding, and only after something worked would we add tests.

One thing we do love is strong typing (we use TypeScript and Haskell): describing what the feature should look like and how data should flow. Once the types make sense, the implementation becomes straightforward; you lean on the compiler to guide you along the way. For us, that rhythm feels more natural: Type-Driven Development.
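To illustrate the rhythm with a tiny, made-up TypeScript example: pin down the types first, and the implementation almost writes itself.

```ts
// A tiny, made-up illustration of the types-first rhythm: describe the data
// shapes, then let the compiler guide the implementation.
type Task = { id: string; description: string; isDone: boolean };
type CreateTaskInput = { description: string };

// With the signature pinned down, the body is mostly filling in the blanks.
function createTask(input: CreateTaskInput): Task {
  // crypto.randomUUID() is available in modern Node.js and browsers.
  return { id: crypto.randomUUID(), description: input.description, isDone: false };
}
```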

Testing the compiler

At the core of our framework sits the compiler, written in Haskell. It takes a configuration file and user source code as input, and it assembles a full-stack web app as output.

Overview of Wasp compilation process

Although Haskell gives us excellent reliability and type safety (e.g., check out our library for type-safe paths), tests are still necessary. We use unit tests to ensure our compiler’s logic is correct. But the compiler’s most important product is the generated code, which exists outside the Haskell domain. To verify the generated code, we use end-to-end (e2e) tests.

Our E2E tests story

The purpose of our e2e tests is to verify that the Wasp binary works as expected. We are not concerned with the internal implementation, only its interface and outputs.

The interface is the Wasp CLI (called waspc). Every command is treated as a black box: we feed it input, observe its side effects, and verify the output.

The primary output of waspc is a Wasp app. So we validate that each command correctly generates or modifies an app. Secondary outputs are installer behavior, uninstall flow, bash completions, etc.

Tracking each and every change

Wasp generates a considerable amount of code, and even small compiler tweaks can cause the weirdest changes in the output — a real-life butterfly effect. We want to be sure that each PR doesn't cause any unexpected changes.

Snapshot tests are the crown jewel of our e2e story. We use them to track changes to the compiler’s code generation in the form of golden vs. current snapshots: we compare the actual (current) output against the expected (golden) output.

They are an efficient way to gain high confidence in the generated output with relatively little test code, a good fit for code generation. Because we track golden snapshots with Git, every pull request clearly shows how the generated code changes.
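To make the mechanism concrete, here is a minimal TypeScript sketch of the golden-vs-current comparison (purely illustrative; our real snapshot tests are the Haskell SnapshotTests shown below, and the paths and helper here are made up):

```ts
// Illustrative sketch of the golden-vs-current comparison; file layout and
// helper names are hypothetical.
import { existsSync, readFileSync, writeFileSync } from "node:fs";

export function checkSnapshot(goldenPath: string, currentOutput: string): void {
  if (!existsSync(goldenPath)) {
    // No golden snapshot yet: record one so Git starts tracking it.
    writeFileSync(goldenPath, currentOutput);
    return;
  }
  const golden = readFileSync(goldenPath, "utf8");
  if (golden !== currentOutput) {
    // A mismatch asks a human to review the diff and, if the change is
    // expected, re-record the golden snapshot.
    throw new Error(`Snapshot mismatch for ${goldenPath}; review the diff.`);
  }
}
```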

To make it clear what we are testing, we build our test cases up from simple ones:

waspNewSnapshotTest :: SnapshotTest
waspNewSnapshotTest =
  makeSnapshotTest
    "wasp-new"
    [createSnapshotWaspProjectFromMinimalStarter]

To more complex ones, feature by feature (command by command):

waspMigrateSnapshotTest :: SnapshotTest
waspMigrateSnapshotTest =
  makeSnapshotTest
    "wasp-migrate"
    [ createSnapshotWaspProjectFromMinimalStarter,
      withInSnapshotWaspProjectDir
        [ waspCliCompile,
          appendToPrismaFile taskPrismaModel,
          waspCliMigrate "foo"
        ]
    ]
  where
    taskPrismaModel = -- ... details ...

What does this look like in practice? Suppose while modifying a feature, we accidentally added a stray character (e.g., a dot) while editing a Mustache template, which means it will also appear in the generated code. If we now run snapshot tests to compare the current output of the compiler with the golden (expected) one, it will detect the change in the generated files and ask us to review it:

A terminal window showing a stray character diff

Now we can check the change and accept it if it was expected, or fix it if not. Finally, when we are satisfied with the current snapshot, we record it as a new golden snapshot.

Untangling TypeScript from Mustache

Mustache templates make up the core of our code generation. Any file with dynamic content is a Mustache template, be it TypeScript, HTML, or a Dockerfile. This made sense, since we need the compiler to inject relevant data into them.

While this gave us a lot of control and flexibility while generating the code, it also created development challenges. Mustache templates aren’t valid TypeScript, so they broke TypeScript’s own ecosystem: linters, formatters, and tests.

Mustache template TypeScript file opened in code editor showcasing broken linters and formatters

This inconvenience turned our usual development workflow into: generate a Wasp app, modify the generated files, carry the changes back to the template, and repeat until we get it right.

That is why we’re migrating most of the TypeScript logic from Mustache templates into dedicated npm packages. This will leave templates as mostly simple import/export wrappers, while allowing us to build and test the TypeScript side of the source code with full type safety and normal tooling.
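To give a rough idea of the end state (a hypothetical sketch, not our actual generated code; the package and function names are invented), a template that once held real logic can shrink to a thin wrapper like this:

```ts
// Hypothetical "thin wrapper" template after the migration: the logic lives in
// a published npm package (package name invented for illustration), and the
// compiler only injects a little app-specific configuration.
import { createAuthRouter } from "@wasp-illustrative/server-auth";

export const authRouter = createAuthRouter({
  providers: ["email"],
});
```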

Testing the Wasp apps

Besides the compiler, we also ship many Wasp apps ourselves, including starter templates and example apps. We maintain and update them together with the compiler. Our goal is to test these Wasp apps at runtime, and we use Playwright e2e tests for that.

Starter templates

Typically, every Wasp app starts from a starter template. Starter templates are prebuilt Wasp apps that you generate through the Wasp CLI to get you started. As they are our first line of UX (or DX?), it's essential to keep the experience as smooth and flawless as possible.

What is most important is to test the starter templates themselves. Each starter represents a different promise that we have to validate. We test their domains rather than the framework itself.

Interestingly, since starter templates are Mustache templates, we can’t test them directly. Instead, we initialize new projects through the Wasp CLI and run the prebuilt Playwright e2e tests on them.

Example apps

Starter templates get you started, but Wasp offers many more features. To test the entire framework end-to-end, we had to build additional Wasp apps: example apps. They serve a dual purpose: public examples of what can be built with Wasp and how, and a testing suite on which we run extensive tests.

We test each framework feature with Playwright. On each PR, we build the development version of Wasp, and each example app runs its e2e tests in isolation. While golden snapshots provide clarity into code generation changes, these tests ensure that no framework feature's behavior was broken.
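For a rough sense of what such a check looks like (an illustrative sketch only, not a verbatim test from our suite; the URL, labels, and expected text are hypothetical):

```ts
// Rough shape of a Playwright feature check against a running example app;
// the route, labels, and expected text are hypothetical.
import { test, expect } from "@playwright/test";

test("user can sign up and reach the dashboard", async ({ page }) => {
  await page.goto("http://localhost:3000/signup");
  await page.getByLabel("Email").fill("test@example.com");
  await page.getByLabel("Password").fill("correct-horse-battery-staple");
  await page.getByRole("button", { name: "Sign up" }).click();
  await expect(page.getByText("Dashboard")).toBeVisible();
});
```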

Kitchen sink

The kitchen sink app is the "holy grail" of example apps. We test most of the framework's features in this single application (smartly named kitchen-sink). If you’re not familiar with the term “kitchen sink application”, think of it as a Swiss Army knife of framework features.

Login page of the kitchen-sink application

Kitchen sink is also one of the applications we snapshot in our snapshot tests. So kitchen-sink not only tests that the code works at runtime, but also tracks any changes to code generation.

We have one golden rule when modifying/adding framework features: “There must be a test in the example applications which covers this feature.”

When Kitchen Sink is not enough (or too much)

Previously, I mentioned that kitchen-sink tests most of the framework features. Most, because Wasp has mutually exclusive features: for example, usernameAndPassword authentication vs. email authentication (yes, email authentication also uses a password; I didn’t design the name). So we pick up the scraps with the rest of the smaller example apps.

While the kitchen-sink application is suitable for showcasing the framework's power to users, it’s impossible to test all of the features in a single application. Nor is it the proper way to test Wasp end-to-end.

This is how our “variants” idea was born. The idea is to build variants on top of the minimal starter, e.g., “Wasp app but using the SendGrid email sender”, “Wasp app but using the Mailgun email sender”…

An AI generated image of Wasp mascot putting Wasp app variants on the conveyor belt to the testing pipeline

For each feature that exists, a Wasp application exercising that feature should exist. It's something we haven't solved yet, but we plan to address it as we approach the Wasp 1.0 release. For now, the kitchen-sink app serves us well enough.

Building tools for your tests

Wasp applications are complex systems with many parts: front-end, back-end, database, and specific requirements and differences between the development and production versions of the application. This makes test automation cumbersome.

You can do it manually, but you really don’t want to repeat the process. So we’ve packaged it into our own driver called wasp-app-runner. It exposes two simple commands: dev and build. It’s not meant for development (nor deployment), but for testing it’s perfect. Tooling for your tests is tooling for your sanity.

Testing the deployment

Wasp CLI can automatically deploy your Wasp applications to certain supported providers. You set the production environment variables, and the command does everything else.

To ensure deployment continues to work correctly, each code merge on the Wasp repository triggers a test deployment of the kitchen-sink example app using the development version of Wasp, followed by basic smoke tests on the client and server to confirm everything runs smoothly. Finally, we clean up the deployed app.

When releasing a new version of our framework, we follow the same procedure described above, but for all the example apps, not just the kitchen-sink one: we redeploy their test deployments using this new version of the framework. However, these deployments remain permanent, as we use example apps to showcase Wasp to users.

Testing the docs (kind of)

APIs change fast in a startup building a pre-1.0 framework, and documentation lags even further behind. You tweak a feature, push the code, and somewhere a forgotten code example lingers.

We’re careful about updating documentation when features change, but some references hide in unexpected corners. It’s a recurring pain: docs are the primary way developers experience your tool, yet they’re often the easiest part to let rot. So we started treating documentation more like code.

Keeping code examples honest

You modify a feature and update the API, but some part of the docs still shows an old example. Users (myself included) prefer copy-pasting examples over reading API documentation. We copy and paste broken snippets and expect things to work, but they don’t.

Wouldn’t it be nice if docs’ code examples were also tested like Wasp app examples? Why not combine the two?

We agreed that the docs examples must reference the source code of example apps. Each code snippet in the docs must declare a source file in one of the example apps where that same code resides (with some caveats). We can automatically verify that the reference is correct and the code matches; if not, the CI fails.
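As a rough sketch of the idea (assuming refs of the form path:L&lt;start&gt;-&lt;end&gt;; the actual implementation may differ in details), the check boils down to comparing the snippet against the referenced lines of the example app source:

```ts
// Sketch of the ref check, assuming refs like "path:L<start>-<end>";
// the real implementation may differ in details.
import { readFileSync } from "node:fs";

export function snippetMatchesRef(snippet: string, ref: string): boolean {
  const match = ref.match(/^(.+):L(\d+)-(\d+)$/);
  if (!match) throw new Error(`Malformed ref: ${ref}`);
  const [, filePath, start, end] = match;
  const referenced = readFileSync(filePath, "utf8")
    .split("\n")
    .slice(Number(start) - 1, Number(end))
    .join("\n");
  // CI fails when the docs snippet no longer matches the example app source.
  return referenced.trim() === snippet.trim();
}
```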

We are implementing this as a Docusaurus plugin called code-ref-checker. It’s still a work in progress, but we’re happy with the early results (notice the code ref in the header):

```ts title="src/auth.ts" ref="waspc/examples/todoApp/src/auth/signup.ts:L1-14"
import { defineUserSignupFields } from "wasp/server/auth";

export const userSignupFields = defineUserSignupFields({
  address: (data) => {
    if (typeof data.address !== "string") {
      throw new Error("Address is required.");
    }
    if (data.address.length < 10) {
      throw new Error("Address must be at least 10 characters long.");
    }
    return data.address;
  },
});
```

An additional benefit is that, besides keeping code examples in the docs from going stale, it forces us to test every feature: when we write documentation and add a code example, that example can’t exist without first being implemented in an example app.

Making tutorials testable

We have a “Todo App” tutorial in our documentation that, before every release, we would manually review and verify to ensure it was still valid. Someone had to execute all the steps, and once they finally finished them, they still had to test the resulting Wasp app.

While code-ref-checker solved the drift in examples, tutorials add a time dimension. They evolve as the reader builds the app: files appear, disappear, and change with each step. So we opted for a new solution.

Looking at our tutorial, each step changes the project: run a CLI command or apply a diff, then move on. We realized the tutorial basically repeats those two actions over and over.

So we built a small CLI tool integrating with the Docusaurus plugin to formalize that process:

  1. Each step defines an action.
  2. The CLI can replay all steps to rebuild the final app automatically.
  3. Steps are easily editable in isolation.
  4. That final app is then tested like any other Wasp app.

We call it TACTE, the Tutorial Action Executor.
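Conceptually, a tutorial becomes a list of actions that a small runner can replay (an illustrative sketch only; the type and helper names below are invented and not TACTE's actual API):

```ts
// Illustrative sketch of tutorial steps as replayable actions; names are
// hypothetical and not TACTE's real API.
import { execSync } from "node:child_process";

type TutorialAction =
  | { id: string; action: "INIT_APP"; starterTemplateName: string }
  | { id: string; action: "APPLY_PATCH"; patchFile: string };

export function replay(steps: TutorialAction[], projectDir: string): void {
  for (const step of steps) {
    if (step.action === "INIT_APP") {
      // Same command the tutorial tells the reader to run.
      execSync(`wasp new ${projectDir} -t ${step.starterTemplateName}`);
    } else {
      // Each patch captures the diff the reader would apply at this step.
      execSync(`git apply ${step.patchFile}`, { cwd: projectDir });
    }
  }
}
```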

In TACTE, each step is declared via a JSX component that lives next to the tutorial content itself, and the CLI helps us define the actions to make the process work.

To set up a new Wasp project, run the following command in your terminal:

<TutorialAction
  id="create-wasp-app"
  action="INIT_APP"
  starterTemplateName="minimal"
/>

```sh
wasp new TodoApp -t minimal
```

# ...

Start by cleaning up the starter project and removing unnecessary code and files.

<TutorialAction id="prepare-project" action="APPLY_PATCH" />

First, remove most of the code from the `MainPage` component:

```tsx title="src/MainPage.tsx" auto-js
export const MainPage = () => {
  return <div>Hello world!</div>;
};
```

TACTE is still in development, but we are planning to publish it as a library in the near future.

Conclusion

See Our approach to tests.


Microsoft delays Xbox Game Pass Ultimate price hikes for some subscribers


Microsoft is holding off on its Xbox Game Pass Ultimate price hikes for some existing subscribers in select countries. After announcing a 50 percent price increase to Game Pass Ultimate last week, Microsoft now says this price increase will only currently affect new purchases and not existing subscribers in markets like Austria, Germany, Ireland, Israel, Korea, Poland, and India.

“At this time, these increases will only affect new purchases and will not affect your current subscription for the market in which you reside, as long as you are on an auto-recurring plan,” explains Microsoft in an email that was sent to some Xbox Game Pass Ultimate subscribers overnight. “Should you choose to cancel your plan and repurchase, you will be charged at the new current rate.”

Microsoft has confirmed to The Verge that the email is genuine, and it’s not impacting subscribers in the US or UK. “Our recent Game Pass update remains unchanged. Current subscribers in certain countries will continue renewing at their existing price for now, in line with local requirements. We’ll provide advance notice before price adjustments take effect in these countries,” says Kari Perez, head of Xbox communications, in a statement to The Verge.

The change in these countries is likely related to local regulations on subscription price changes, and it means in Ireland existing subscribers with auto-renew enabled will still be charged at the €17.99 monthly rate, instead of the new €26.99 pricing. Microsoft notes in its email that existing subscribers in these markets will be notified “at least 60 days in advance” of price changes, meaning the changes won’t go into effect for at least two more months.

The halt in price increases in select markets is a change to what Microsoft announced last week. “This updated pricing will go into effect on October 1st for new subscribers, and then at the next billing cycle, likely to be November 4th, for current subscribers,” said Dustin Blackwell, director of gaming and platform communications at Microsoft, in a briefing with The Verge last week.

Update, October 7th: Article updated with comment from Microsoft.


Mastodon is taking cues from Bluesky with plans for its own starter ‘Packs’

Mastodon is planning to make it easier for newcomers to discover curated collections of users to follow by launching a new starter packs feature.

Disrupting threats targeting Microsoft Teams


The extensive collaboration features and global adoption of Microsoft Teams make it a high-value target for both cybercriminals and state-sponsored actors. Threat actors abuse its core capabilities – messaging (chat), calls and meetings, and video-based screen-sharing – at different points along the attack chain. This raises the stakes for defenders to proactively monitor, detect, and respond.

While default security has been strengthened by design under Microsoft’s Secure Future Initiative (SFI), defenders still need to make the most of customer-facing security capabilities. This blog therefore recommends countermeasures and controls across the identity, endpoint, data, app, and network layers to help harden enterprise Teams environments. To frame these defenses, we first examine the relevant stages of the attack chain. This guidance complements, but doesn’t repeat, the guidance built into the Microsoft Security Development Lifecycle (SDL) as outlined in the Teams Security Guide; we instead focus on disrupting adversarial objectives based on relatively recently observed attempts to exploit Teams infrastructure and capabilities.

Attack chain

Diagram showing the stages of attack and relevant attacker behavior abusing Microsoft Teams features
Figure 1. Attack techniques that abuse Teams along the attack chain

Reconnaissance

Every Teams user account is backed by a Microsoft Entra ID identity. Each team member is an Entra ID object, and a team is a collection of channel objects. Teams may be configured for the cloud or a hybrid environment and supports multi-tenant organizations (MTO) and cross-tenant communication and collaboration. There are anonymous participants, guests, and external access users. From an API perspective, Teams is an object type that can be queried and stored in a local database for reconnaissance, enumerating directory objects and mapping relationships and privileges. For example, the tenant federation configuration, which indicates whether the tenant allows external communication, can be inferred from API responses reflecting the effective tenant federation policy.

While not unique to Teams, there are open-source frameworks that can specifically be leveraged to enumerate less secure users, groups, and tenants in Teams (mostly by repurposing the Microsoft Graph API or gathering DNS), including ROADtools, TeamFiltration, TeamsEnum, and MSFT-Recon-RS. These tools facilitate enumerating teams, members of teams and channels, tenant IDs and enabled domains, as well as permissiveness for communicating with external organizations and other properties, like presence. Presence indicates a user’s current availability and status outside the organization if Privacy mode is not enabled, which could then be exploited if the admin has not disabled external meetings and chat with people and organizations outside the organization (or at least limited it to specified external domains).

Many open-source tools are modular Python packages including reusable libraries and classes that can be directly imported or extended to support custom classes, meaning they are also interoperable with other custom open-source reconnaissance and discovery frameworks designed to identify potential misconfigurations.

Resource development

Microsoft continuously enhances protections against fraudulent Microsoft Entra ID Workforce tenants and the abuse of free tenants and trial subscriptions. As these defenses grow stronger, threat actors are forced to invest significantly more resources in their attempts to impersonate trusted users, demonstrating the effectiveness of our layered security approach. This includes threat actors trying to compromise weakly configured legitimate tenants, or even purchasing legitimate ones outright if they are confident they can ultimately profit. It should come as no surprise that if they can build a persona for social engineering, they will take advantage of the same resources as legitimate organizations, including custom domains and branding, especially if it lends credibility to impersonating an internal help desk, admin, or IT support team, which can then serve as a convincing pretext to compromise targets through chat messages and phone calls. Sophisticated threat actors use the very same resources as trustworthy organizations, such as acquiring multiple tenants for staging development or running separate operations across regions, and using everyday Teams features like scheduling private meetings through chat, plus audio, video, and screen-sharing capabilities.

Initial access

Tech support scams remain a popular pretext for delivering malicious remote monitoring and management (RMM) tools and information-stealing malware, leading to credential theft, extortion, and ransomware. There are always new variants designed to bypass security awareness defenses, such as the rise in email bombing to create a sense of stress and urgency to restore normalcy. In 2024, for instance, Storm-1811 impersonated tech support, claiming to be addressing junk email issues that it had itself initiated, and used RMM tools to deliver ReedBed, a malware loader used for ransomware payloads and remote command execution. Meanwhile, Midnight Blizzard has successfully impersonated security and technical support teams, persuading targets, under the pretext of protecting their accounts, to enter authentication codes that actually complete the authentication flow the attackers use to break into those accounts.

Similarly in May, Sophos identified a 3AM ransomware (believed to be a rebranding of BlackSuit) affiliate adopting techniques from Storm-1811, including flooding employees with unwanted emails followed by voice and video calls on Teams impersonating help desk personnel, claiming they needed remote access to stop the flood of junk emails. The threat actor reportedly spoofed the IT organization’s phone number.

With threat actors leveraging deepfakes, perceived authority makes this kind of social engineering even more effective. Threat actors who spoof automated workflow notifications and interactions can naturally extend this to spoofing legitimate bots and agents as those gain traction, especially as threat actors turn to language models to facilitate their objectives.

Prevalent threat actors associated with ransomware campaigns, including the access broker tracked as Storm-1674, have used sophisticated red teaming tools, like TeamsPhisher, to distribute DarkGate malware and other malicious payloads over Teams. In December 2024, for example, Trend Micro reported an incident in which a threat actor impersonated a client during a Teams call to persuade a target to install AnyDesk; the remote access was then reportedly used to deploy DarkGate. Threat actors may also simply use Teams for drive-by-compromise activity, directing users to malicious websites to gain initial access.

Widely available admin tools, including AADInternals, could be leveraged to deliver malicious links and payloads directly into Teams. Teams branding (like any communications brand asset) makes for effective bait, and has been used by adversary-in-the-middle (AiTM) actors like Storm-00485. Threat actors could place malicious advertisements in search results for a spoofed app like Teams to misdirect users to a download site hosting credential-stealing malware. In July 2025, for instance, Malwarebytes reported observing a malvertising campaign delivering credential-stealing malware through a fake Microsoft Teams for Mac installer.

Whether it is a core app that is part of Teams, an app created by Microsoft, a partner app validated by Microsoft, or a custom app created by your own organization, no matter how secure an app is, it can still be spoofed to gain a foothold in a network. And just as they leverage a trusted brand like Teams, threat actors will also continue to try to take advantage of trusted relationships to gain Teams access, whether by leveraging an account with access or by abusing delegated administrator relationships to reach a target environment.

Persistence

Threat actors employ a variety of persistence techniques to maintain access to target systems—even after defenders attempt to regain control. These methods include abusing shortcuts in the Startup folder to execute malicious tools, or exploiting accessibility features like Sticky Keys (as seen in this ransomware case study). Threat actors could try to create guest users in target tenants or add their own credentials to a Teams account to maintain access.

Part of the reason device code phishing has been used to access target accounts is that it could enable persistent access for as long as the tokens remain valid. In February, Microsoft reported that Storm-2372 had been capturing authentication tokens by exploiting device code authentication flows, partially by masquerading as Microsoft Teams meeting invitations and initiating Teams chats to build rapport, so that when the targets were prompted to authenticate, they would use Storm-2372-generated device codes, enabling Storm-2372 to steal the authenticated sessions from the valid access tokens.

Teams phishing lures themselves can sometimes be a disguised attempt to maintain persistence. For example, in July 2025, the financially motivated Storm-0324 most likely relied on TeamsPhisher to send Teams phishing lures delivering JSSloader, a custom malware that the ransomware operator Sangria Tempest uses as an access vector to maintain a foothold.

Execution

Apart from admin accounts, which are an attractive target because they come with elevated privileges, threat actors try and trick everyday Teams users into clicking links or opening files that lead to malicious code execution, just like through email.

Privilege escalation

If threat actors successfully compromise accounts or register actor-controlled devices, they often try to change permission groups to escalate privileges. If a threat actor successfully compromises a Teams admin role, they can abuse the permissions and admin tools that belong to that role.

Credential access

With a valid refresh token, actors can impersonate users through Teams APIs. There is no shortage of administrator tools that can be maliciously repurposed, such as AADInternals, to intercept access to tokens with custom phishing flows. Tools like TeamFiltration could be leveraged just like for any other Microsoft 365 service for targeting Teams. If credentials are compromised through password spraying, threat actors use tools like this to request OAuth tokens for Teams and other services. Threat actors continue to try and bypass multifactor authentication (MFA) by repeatedly generating authentication prompts until someone accepts by mistake, and try to compromise MFA by adding alternate phone numbers or intercepting SMS-based codes.

For instance, the financially motivated threat actor Octo Tempest uses aggressive social engineering, including over Teams, to take control of MFA for privileged accounts. They consistently socially engineer help desk personnel, targeting federated identity providers using tools like AADInternals to federate existing domains, or spoof legitimate domains by adding and then federating new domains to forge tokens.

Discovery

To refine targeting, threat actors analyze Teams configuration data from API responses, enumerate Teams apps if they obtain unauthorized access, and search for valuable files and directories by leveraging toolkits for contextualizing potential attack paths. For instance, Void Blizzard has used AzureHound to enumerate a compromised organization’s Microsoft Entra ID configuration and gather details on users, roles, groups, applications, and devices. In a small number of compromises, the threat actor accessed Teams conversations and messages through the web client. AADInternals can also be used to discover Teams group structures and permissions.

The state-sponsored actor Peach Sandstorm has delivered malicious ZIP files through Teams, then used AD Explorer to take snapshots of the on-premises Active Directory database and related files.

Lateral movement

A threat actor that manages to obtain Teams admin access (whether directly or indirectly by purchasing an admin account through a rogue online marketplace) could potentially leverage external communication settings and enable trust relationships between organizations to move laterally. In late 2024, in a campaign dubbed VEILdrive by Hunters’ Team AXON, the financially motivated cybercriminal threat actors Sangria Tempest and Storm-1674 used previously compromised accounts to impersonate IT personnel and convince a user in another organization through Teams to accept a chat request and grant access through a remote connection.

Collection

Threat actors often target Teams to try and collect information from it that could help them to accomplish their objectives, such as to discover collaboration channels or high-privileged accounts. They could try to mine Teams for any information perceived as useful in furtherance of their objectives, including pivoting from a compromised account to data accessible to that user from OneDrive or SharePoint. AADInternals can be used to collect sensitive chat data and user profiles. Post-compromise, GraphRunner can leverage the Microsoft Graph API to search all chats and channels and export Teams conversations.

Command and control

Threat actors attempt to deliver malware through file attachments in Teams chats or channels. A cracked version of Brute Ratel C4 (BRc4) includes features to establish C2 channels with platforms like Microsoft Teams by using their communications protocols to send and receive commands and data.

Post-compromise, threat actors can use red teaming tool ConvoC2 to send commands through Microsoft Teams messages using the Adaptive Card framework to embed data in hidden span tags and then exfiltrate using webhooks. But threat actors can also use legitimate remote access tools to try and establish interactive C2 through Teams.

Exfiltration

Threat actors may use Teams messages or shared links to direct data exfiltration to cloud storage under their control. Tools like TeamFiltration include an exfiltration module that relies on a valid access token to extract recent contacts and download chats and files through OneDrive or SharePoint.

Impact

Threat actors try to use Teams messages to support financial theft through extortion, social engineering, or technical means.

Octo Tempest has used communication apps, including Teams, to send taunting and threatening messages to organizations, defenders, and incident response teams as part of extortion and ransomware payment pressure tactics. After gaining control of MFA through socially engineered password resets, they sign in to Teams to identify sensitive information supporting their financially motivated operations.

Mitigation and protection guidance

Strengthen identity protection

Harden endpoint security

Secure Teams clients and apps

Implementing some of these recommendations will require Teams Administrator permissions.

Protect sensitive data

Raise awareness

  • Get started using attack simulation training. The Teams attack simulation training is currently in private preview. Build organizational resilience by raising awareness of QR code phishing, deepfakes including voice, and about protecting your organization from tech support and ClickFix scams.
  • Train developers to follow best practices when working with the Microsoft Graph API. Apply these practices when detecting, defending against, and responding to malicious techniques targeting Teams.
  • Learn more about some of the frequent initial access threats impacting SharePoint servers. SharePoint is a front end for Microsoft Teams and an attractive target.

Configure detection and response

  • Verify the auditing status of your organization in Microsoft Purview to make sure you can investigate incidents. In Threat Explorer, Content malware includes files detected by Safe Attachments for Teams, and URL clicks include all user clicks in Teams.
  • Customize how users report malicious messages, and then view and triage them.
    • If user reporting of messages is turned on in the Teams admin center, it also needs to be turned on in the Defender portal. We encourage you to submit user reported Teams messages to Microsoft here.
  • Search the audit log for events in Teams.
    • Refer to the table listing the Microsoft Teams activities logged in the Microsoft 365 audit log. With the Office 365 Management Activity API, you can retrieve information about user, admin, system, and policy actions and events including from Entra activity logs.
  • Familiarize yourself with relevant advanced hunting schema and available tables.
    • Advanced hunting supports guided and advanced modes. You can use the advanced hunting queries in the advanced hunting section to hunt with these tables for Teams-related threats.
    • Several tables covering Teams-related threats are available in preview and populated by Defender for Office 365, including MessageEvents, MessagePostDeliveryEvents, MessageUrlInfo, and UrlClickEvents. These tables provide visibility into ZAP events and URLs in Teams messages, including allowed or blocked URL clicks in Teams clients. You can join these tables with others to gain more comprehensive insight into the progression of the attack chain and end-to-end threat activity.
  • Connect Microsoft 365 to Microsoft Defender for Cloud Apps.
    • To hunt for Teams messages without URLs, use the CloudAppEvents table, populated by Defender for Cloud Apps. This table also includes chat monitoring events, meeting and Teams call tracking, and behavioral analytics. To make sure advanced hunting tables are populated by Defender for Cloud Apps data, go to the Defender portal and select Settings > Cloud apps > App connectors. Then, in the Select Microsoft 365 components page, select the Microsoft 365 activities checkbox. Control Microsoft 365 with built-in policies and policy templates to detect and notify you about potential threats.
  • Create Defender for Cloud Apps threat detection policies.
    • Many of the detection types enabled by default apply to Teams and do not require custom policy creation, including sign-ins from geographically distant locations in a short time, access from a country not previously associated with a user, unexpected admin actions, mass downloads, activity from anonymous IP addresses or from a device flagged as malware-infected by Defender for Endpoint, as well as OAuth app abuse (when app governance is turned on).
    • Defender for Cloud Apps enables you to identify high-risk use and cloud security issues, detect abnormal user behavior, and prevent threats in your sanctioned cloud apps. You can integrate Defender for Cloud Apps with Microsoft Sentinel (preview) or use the supported APIs.
  • Detect and remediate illicit consent grants in Microsoft 365.
  • Discover and enable the Microsoft Sentinel data lake in Defender XDR. Sentinel data lake brings together security logs from data sources like Microsoft Defender and Microsoft Sentinel, Microsoft 365, Microsoft Entra ID, Purview, Intune, Microsoft Resource Graph, firewall and network logs, identity and access logs, DNS, plus sources from hundreds of connectors and solutions, including Microsoft Defender Threat Intelligence. Advanced hunting KQL queries can be run directly on the data lake. You can analyze the data using Jupyter notebooks.

Microsoft Defender detections

Microsoft Defender XDR customers can refer to the list of applicable detections below. Microsoft Defender XDR coordinates detection, prevention, investigation, and response across endpoints, identities, email, and apps to provide integrated protection against attacks like the threats discussed in this blog.

Customers with provisioned access can also use Microsoft Security Copilot in Microsoft Defender to investigate and respond to incidents, hunt for threats, and protect their organization with relevant threat intelligence.

Microsoft Defender XDR

The following alerts might indicate threat activity associated with this threat.

  • Malicious sign in from a risky IP address
  • Malicious sign in from an unusual user agent
  • Account compromised following a password-spray attack
  • Compromised user account identified in Password Spray activity
  • Successful authentication after password spray attack
  • Password Spray detected via suspicious Teams client (TeamFiltration)

Microsoft Entra ID Protection

Any type of sign-in or user risk detection might also indicate threat activity associated with this threat. Examples are listed below. These alerts, however, can be triggered by unrelated threat activity.

  • Impossible travel
  • Anomalous Microsoft Teams login from web client

Microsoft Defender for Endpoint

The following alerts might indicate threat activity associated with this threat.

  • Suspicious module loaded using Microsoft Teams

The following alerts might also indicate threat activity associated with this threat. These alerts, however, can be triggered by unrelated threat activity and are not monitored in the status cards provided with this report.

  • Suspicious usage of remote management software

Microsoft Defender for Office 365

The following alerts might indicate threat activity associated with this threat.

  • Malicious link shared in Teams chat
  • User clicked a malicious link in Teams chat

When Microsoft Defender for Cloud Apps is enabled, the following alert might indicate threat activity associated with this threat.

  • Potentially Malicious IT Support Teams impersonation post mail bombing

The following alerts might also indicate threat activity associated with this threat. These alerts, however, can be triggered by unrelated threat activity and are not monitored in the status cards provided with this report.

  • A potentially malicious URL click was detected
  • Possible AiTM phishing attempt

Microsoft Defender for Identity

The following Microsoft Defender for Identity alerts can indicate associated threat activity:

  • Account enumeration reconnaissance
  • Suspicious additions to sensitive groups
  • Account Enumeration reconnaissance (LDAP)

Microsoft Defender for Cloud Apps

The following alerts might indicate threat activity associated with this threat.

  • Consent granted to application with Microsoft Teams permissions
  • Risky user installed a suspicious application in Microsoft Teams
  • Compromised account signed in to Microsoft Teams
  • Microsoft Teams chat initiated by a suspicious external user
  • Suspicious Teams access via Graph API

The following alerts might also indicate threat activity associated with this threat. These alerts, however, can be triggered by unrelated threat activity and are not monitored in the status cards provided with this report.

  • Possible mail exfiltration by app

Microsoft Security Copilot

Microsoft Security Copilot customers can use the Copilot in Defender embedded experience to check the impact of this report and get insights based on their environment’s highest exposure level in Threat analytics, Intel profiles, Intel Explorer and Intel projects pages of the Defender portal.

You can also use Copilot in Defender to speed up analysis of suspicious scripts and command lines by inspecting them below the incident graph on an incident page and in the timeline on the Device entity page without using external tools.

Threat intelligence reports

Microsoft customers can use the following reports in Microsoft products to get the most up-to-date information about the threat actor, malicious activity, and techniques discussed in this blog. These reports provide the intelligence, protection information, and recommended actions to prevent, mitigate, or respond to associated threats found in customer environments.

Microsoft Defender XDR threat analytics

Microsoft Security Copilot customers can also use the Microsoft Security Copilot integration in Microsoft Defender Threat Intelligence, either in the Security Copilot standalone portal or in the embedded experience in the Microsoft Defender portal to get more information about this threat actor.

Hunting queries

Microsoft Defender XDR

Advanced hunting allows you to view and query all the data sources available within the unified Microsoft Defender portal, which include Microsoft Defender XDR and various Microsoft security services.

After onboarding to the Microsoft Sentinel data lake, auxiliary log tables are no longer available in Microsoft Defender advanced hunting. Instead, you can access them through data lake exploration Kusto Query Language (KQL) queries in the Defender portal. For more information, see KQL queries in the Microsoft Sentinel data lake.

You can design and tweak custom detection rules using the advanced hunting queries and set them to run at regular intervals, generating alerts and taking response actions whenever there are matches. You can also link the generated alert to this report so that it appears in the Related incidents tab in threat analytics. Custom detection rules can automatically take actions on devices, files, users, or emails that are returned by the query. To make sure you’re creating detections that trigger true alerts, take time to review your existing custom detections by following the steps in Manage existing custom detection rules.

Detect potential data exfiltration from Teams

let timeWindow = 1h; 
let messageThreshold = 20; 
let trustedDomains = dynamic(["trustedpartner.com", "anothertrusted.com"]); 
CloudAppEvents 
| where Timestamp > ago(1d) 
| where ActionType == "MessageSent" 
| where Application == "Microsoft Teams" 
| where isnotempty(AccountObjectId)
| where tostring(parse_json(RawEventData).ParticipantInfo.HasForeignTenantUsers) == "true" 
| where tostring(parse_json(RawEventData).CommunicationType) in ("OneOnOne", "GroupChat") 
| extend RecipientDomain = tostring(parse_json(RawEventData).ParticipantInfo.ParticipatingDomains[1])
| where RecipientDomain !in (trustedDomains) 
| extend SenderUPN = tostring(parse_json(RawEventData).UserId)
| summarize MessageCount = count() by bin(Timestamp, timeWindow), SenderUPN, RecipientDomain
| where MessageCount > messageThreshold 
| project Timestamp, MessageCount, SenderUPN, RecipientDomain
| sort by MessageCount desc  

Detect mail bombing that sometimes precedes technical support scams on Microsoft Teams

EmailEvents 
   | where Timestamp > ago(1d) 
   | where DetectionMethods contains "Mail bombing" 
   | project Timestamp, NetworkMessageId, SenderFromAddress, Subject, ReportId

Detect malicious Teams content from MessageEvents

MessageEvents 
   | where Timestamp > ago(1d) 
   | where ThreatTypes has "Phish"                
       or ThreatTypes has "Malware"               
       or ThreatTypes has "Spam"                    
   | project Timestamp, SenderDisplayName, SenderEmailAddress, RecipientDetails, IsOwnedThread, ThreadType, IsExternalThread, ReportId

Detect communication with external help desk/support representatives

MessageEvents  
| where Timestamp > ago(5d)  
 | where IsExternalThread == true  
 | where (RecipientDetails contains "help" and RecipientDetails contains "desk")  
	or (RecipientDetails contains "it" and RecipientDetails contains "support")  
	or (RecipientDetails contains "working" and RecipientDetails contains "home")  
	or (SenderDisplayName contains "help" and SenderDisplayName contains "desk")  
	or (SenderDisplayName contains "it" and SenderDisplayName contains "support")  
	or (SenderDisplayName contains "working" and SenderDisplayName contains "home")  
 | project Timestamp, SenderDisplayName, SenderEmailAddress, RecipientDetails, IsOwnedThread, ThreadType

Expand detection of communication with external help desk/support representatives by searching for linked process executions

let portableExecutable  = pack_array("binary.exe", "portable.exe"); 
let timeAgo = ago(30d);
MessageEvents
  | where Timestamp > timeAgo
  | where IsExternalThread == true
  | where (RecipientDetails contains "help" and RecipientDetails contains "desk")
      or (RecipientDetails contains "it" and RecipientDetails contains "support")
      or (RecipientDetails contains "working" and RecipientDetails contains "home")
  | summarize spamEvent = min(Timestamp) by SenderEmailAddress
  | join kind=inner ( 
      DeviceProcessEvents  
      | where Timestamp > timeAgo
      | where FileName in (portableExecutable)
      ) on $left.SenderEmailAddress == $right.InitiatingProcessAccountUpn 
  | where spamEvent < Timestamp

Surface Teams threat activity using Microsoft Security Copilot

Microsoft Security Copilot in Microsoft Defender comes with a query assistant capability in advanced hunting. You can also run the following prompt in Microsoft Security Copilot pane in the Advanced hunting page or by reopening Copilot from the top of the query editor:

Show me recent activity in the last 7 days that matches attack techniques described in the Microsoft Teams technique profile. Include relevant alerts, affected users and devices, and generate advanced hunting queries to investigate further.

Microsoft Sentinel

Possible Teams phishing activity

This query specifically monitors Microsoft Teams for one-on-one chats involving impersonated users (e.g., 'Help Desk', 'Microsoft Security').

let suspiciousUpns = DeviceProcessEvents
    | where DeviceId == "alertedMachine"
    | where isnotempty(InitiatingProcessAccountUpn)
    | project InitiatingProcessAccountUpn;
    CloudAppEvents
    | where Application == "Microsoft Teams"
    | where ActionType == "ChatCreated"
    | where isempty(AccountObjectId)
    | where RawEventData.ParticipantInfo.HasForeignTenantUsers == true
    | where RawEventData.CommunicationType == "OneonOne"
    | where RawEventData.ParticipantInfo.HasGuestUsers == false
    | where RawEventData.ParticipantInfo.HasOtherGuestUsers == false
    | where RawEventData.Members[0].DisplayName in ("Microsoft  Security", "Help Desk", "Help Desk Team", "Help Desk IT", "Microsoft Security", "office")
    | where AccountId has "@"
    | extend TargetUPN = tolower(tostring(RawEventData.Members[1].UPN))
    | where TargetUPN in (suspiciousUpns)

Files uploaded to Teams and access summary

This query identifies files uploaded to Microsoft Teams chat files and their access history, specifically mentioning operations from SharePoint. It allows tracking of potential file collection activity through Teams-related storage.

OfficeActivity 
    | where RecordType =~ "SharePointFileOperation"
    | where Operation =~ "FileUploaded" 
    | where UserId != "app@sharepoint"
    | where SourceRelativeUrl has "Microsoft Teams Chat Files" 
    | join kind= leftouter ( 
       OfficeActivity 
        | where RecordType =~ "SharePointFileOperation"
        | where Operation =~ "FileDownloaded" or Operation =~ "FileAccessed" 
        | where UserId != "app@sharepoint"
        | where SourceRelativeUrl has "Microsoft Teams Chat Files" 
    ) on OfficeObjectId 
    | extend userBag = bag_pack(UserId1, ClientIP1) 
    | summarize make_set(UserId1, 10000), make_bag(userBag, 10000) by TimeGenerated, UserId, OfficeObjectId, SourceFileName 
    | extend NumberUsers = array_length(bag_keys(bag_userBag))
    | project timestamp=TimeGenerated, UserId, FileLocation=OfficeObjectId, FileName=SourceFileName, AccessedBy=bag_userBag, NumberOfUsersAccessed=NumberUsers
    | extend AccountName = tostring(split(UserId, "@")[0]), AccountUPNSuffix = tostring(split(UserId, "@")[1])
    | extend Account_0_Name = AccountName
    | extend Account_0_UPNSuffix = AccountUPNSuffix

References

Learn more

For the latest security research from the Microsoft Threat Intelligence community, check out the Microsoft Threat Intelligence Blog.

To get notified about new publications and to join discussions on social media, follow us on LinkedIn, X (formerly Twitter), and Bluesky.

To hear stories and insights from the Microsoft Threat Intelligence community about the ever-evolving threat landscape, listen to the Microsoft Threat Intelligence podcast.



New Microsoft Secure Future Initiative (SFI) patterns and practices: Practical guides to strengthen security


Building on the momentum of our initial launch of the Microsoft Secure Future Initiative (SFI) patterns and practices, this second installment continues our commitment to making security implementation practical and scalable. The first release introduced a foundational library of actionable guidance rooted in proven architectures like Zero Trust. Now, we’re expanding that guidance with new examples that reflect our ongoing learnings—helping customers and partners understand our strategic approach more deeply and apply it effectively in their own environments.

This next set of SFI patterns and practices articles include practical, actionable guidance built by practitioners, for practitioners, in the areas of network, engineering systems, and security response. Each of the six articles includes details on how Microsoft has improved our security posture in each area so customers, partners, and the broader security community can do the same.

| Pattern name | SFI pillar | What it helps you do |
| --- | --- | --- |
| Network isolation | Protect networks | Contain breaches by default. Strongly segment and isolate your network (through per-service ACLs, isolated virtual networks, and more) to prevent lateral movement and limit cyberattackers if they get in. |
| Secure all tenants and their resources | Protect tenants and isolate systems | Help eliminate “shadow” tenants. Apply baseline security policies, such as multifactor authentication (MFA), Conditional Access, and more, to every cloud tenant and retire unused ones, so cyberattackers can’t exploit forgotten, weakly-secured environments. |
| Higher security for Entra ID apps | Protect tenants and isolate systems | Close identity backdoors. Enforce high security standards for all Microsoft Entra ID (Azure AD) applications—removing unused apps, tightening permissions, and requiring strong authorization—to block common misconfigurations cyberattackers abuse for cross-tenant attacks. |
| Zero Trust for source code access | Protecting engineering systems | Secure the dev pipeline. Require proof-of-presence MFA for critical code commits and merges to help ensure only verified developers can push code and stop cyberattackers from surreptitiously injecting changes. |
| Protect the software supply chain | Protecting engineering systems | Lock down builds and dependencies. Govern your continuous integration and continuous delivery (CI/CD) pipelines and package management—use standardized build templates, internal package feeds, and automated scanning to block supply chain cyberattacks before they reach production. |
| Centralize access to security logs | Monitoring and detecting threats | Speed up investigations. Standardize and centralize your log collection (with longer retention) so that security teams have unified visibility and can detect and investigate incidents faster—even across complex, multi-cloud environments. |

More about SFI patterns and practices

Just as software design patterns provide reusable solutions to common engineering problems, SFI patterns and practices offer repeatable, proven approaches to solving complex cybersecurity challenges. Each pattern is crafted to address a specific security risk, such as legacy infrastructure or inconsistent CI/CD pipelines, and is grounded in Microsoft’s own experience. Like design patterns in software architecture, these security patterns are modular, extensible, and built for reuse across diverse environments.

Additionally, each pattern in the SFI patterns and practices library follows a consistent and purposeful structure. Every article begins with a pattern name—a concise handle that captures the essence of the cybersecurity challenge. The problem section outlines the security risk and its real-world context, helping readers understand why it matters. The solution describes how Microsoft addressed the issue internally. The guidance section provides practical recommendations that customers can consider applying in their own environments. Finally, the implications section outlines the outcomes and trade-offs of implementing the pattern, helping organizations anticipate both the benefits and the operational considerations.

This structure offers a framework for understanding, applying, and evolving security practices.

Next steps with SFI

April 2025 Progress Report

Read the report ↗

Security is a journey, and Microsoft is committed to sharing our insights from SFI. Watch for more actionable advice in coming months. SFI patterns and practices provide a roadmap for putting secure architecture into practice. Embracing these approaches enables organizations to advance their security posture, minimize deployment hurdles, and establish environments that are secure by design, by default, and in operations.

To get access to the full library, visit our new SFI patterns and practices webpage. And check out the new SFI video on our redesigned website to hear directly from Microsoft leadership about how we are putting security above all else.

Let’s build a secure future, together

Talk to your Microsoft account team to integrate these practices into your roadmap.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 

The post New Microsoft Secure Future Initiative (SFI) patterns and practices: Practical guides to strengthen security appeared first on Microsoft Security Blog.


How a top bug bounty researcher got their start in security


As we kick off Cybersecurity Awareness Month, the GitHub Bug Bounty team is excited to spotlight one of the top performing security researchers who participates in the GitHub Security Bug Bounty Program, @xiridium!

GitHub is dedicated to maintaining the security and reliability of the code that powers millions of development projects every day. GitHub’s Bug Bounty Program is a cornerstone of our commitment to securing both our platform and the broader software ecosystem.

With the rapid growth of AI-powered features like GitHub Copilot, GitHub Copilot coding agent, GitHub Spark, and more, our focus on security is stronger than ever—especially as we pioneer new ways to assist developers with intelligent coding. Collaboration with skilled security researchers remains essential, helping us identify and resolve vulnerabilities across both traditional and emerging technologies.

We have also been closely auditing the researchers participating in our public program—to identify those who consistently demonstrate expertise and impact—and inviting them to our exclusive VIP bounty program. VIP researchers get direct access to:

  • Early previews of beta products and features before public launch
  • Dedicated engagement with GitHub Bug Bounty staff and the engineers behind the features they’re testing 😄
  • Unique Hacktocat swag—including this year’s brand new collection!

Explore this blog post to learn more about our VIP program and discover how you can earn an invitation!

To celebrate Cybersecurity Awareness Month this October, we’re spotlighting one of the top contributing researchers to the bug bounty program and diving into their methodology, techniques, and experiences hacking on GitHub. @xiridium is renowned for uncovering business logic bugs and has found some of the most nuanced and impactful issues in our ecosystem. Despite the complexity of their submissions, they excel at providing clear, actionable reproduction steps, streamlining our investigation process and reducing triage time for everyone involved.


How did you get involved with Bug Bounty? What has kept you coming back to it?

I was playing CTFs (capture the flag) when I learned about bug bounties. It was my dream to get my first bounty. I was thrilled by people finding bugs in real applications, so it was a very ambitious goal to be among the people who help fix real threats. To be honest, the community gives me professional validation, which is pretty important to me at the moment. This, combined with improving my technical skills, keeps me coming back to bug bounties!

What do you enjoy doing when you aren’t hacking?

At the age of 30, I started playing music and learning how to sing. This was my dream from a young age, but I was fighting internal blocks on starting. This also helps me switch the context from work and bug bounty to just chill. (Oh! I also spend a lot of bounties on Lego 😆.)

How do you keep up with and learn about vulnerability trends?

I try to learn on demand. Whenever I see some protobuf (Protocol Buffers) code looking interesting or a new cloud provider is used, that is the moment when I say to myself, “Ok, now it’s time to learn about this technology.” Apart from that, I would consider subscribing to Intigriti on Twitter. You will definitely find a lot of other smart people and accounts on X, too; however, don’t blindly use all the tips you see. They help, but only when you understand where they come from. Running some crazily clever one-liner rarely grants success.

What tools or workflows have been game-changers for your research? Are there any lesser-known utilities you recommend?

Definitely ChatGPT and other LLMs. They are a lifesaver for me when it comes to coding. I recently heard some very good advice: “Think of an LLM as though it is a junior developer that was assigned to you. The junior knows how to code, but has a hard time tackling bigger tasks. So always split tasks into smaller ones, approve ChatGPT’s plan, and then let it code.” It helps with smaller scripts, verifying credentials, and getting an overview of new technologies.

You’ve found some complex and significant bugs in your work—can you talk a bit about your process?

Doing bug bounties, for me, is about diving deep into one app rather than going wide. In such apps, there is always something you don’t fully understand. So my goal is to get very good at the app. My milestone is when I can say to myself, “Okay, I know every endpoint and request parameter well enough. I could probably write the same app myself (if I knew how to code 😄).” At that point, I try to think about the scariest impact for the company and what could go wrong in the development process. Reading the program rules once again actually helps a lot.

Whenever I dive into an app, I try to make notes on things that look strange. For example, there are two different endpoints for the same thing: `/user` and `/data/users`. I start thinking, “Why would there be two different things for the same data?” Likely, two developers or teams didn’t sync with each other on this. That leads to ambiguity and complexity in the system.

Another good example is when I find 10 different subdomains, nine on AWS and one on GCP. That is strange, so there might be different people managing those two environments. The probability of bugs doubles!
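
As a rough illustration of the `/user` versus `/data/users` observation above, here is a minimal, hypothetical Python sketch (the base URL, token, and paths are made up) that fetches both endpoints with the same session and diffs the fields they return; mismatches are often where one implementation forgot a check the other has.

import requests

# Hypothetical target and token, purely to illustrate diffing two endpoints
# that appear to expose the same data.
BASE = "https://app.example.com"
TOKEN = "redacted-session-token"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def fetch_fields(path: str) -> set[str]:
    """Fetch an endpoint and return the set of top-level JSON keys it exposes."""
    response = requests.get(f"{BASE}{path}", headers=HEADERS, timeout=10)
    response.raise_for_status()
    body = response.json()
    # Normalize: some endpoints wrap the object in a list.
    record = body[0] if isinstance(body, list) else body
    return set(record.keys())

old_fields = fetch_fields("/user")
new_fields = fetch_fields("/data/users")

# Fields present in one implementation but not the other are often where
# access-control or validation rules were forgotten.
print("only in /user:      ", old_fields - new_fields)
print("only in /data/users:", new_fields - old_fields)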

What are your favorite classes of bugs to research and why?

Oh, this is a tough one. I think I am good at looking for leaked credentials and business logic bugs. Diving deep and finding small nuances is my speciality. Also, a good tip for finding leaked data is to search for unique endpoints you spot while diving into the web app; GitHub code search works well for that. Another interesting approach is to use Google dorks against Slideshare, Postman, Figma, and other developer or management tools, looking for your target company. While these findings rarely yield direct vulnerabilities, they can help you better understand how the app works.
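
As a hedged sketch of the GitHub-search tip above, the snippet below queries GitHub's code search REST API for a distinctive endpoint string; the query string and token are placeholders, and any hits are leads to review manually, not findings in themselves.

import requests

# Hypothetical values: the unique endpoint string you spotted in the app,
# and a GitHub personal access token (code search requires authentication).
QUERY = '"api-internal.example.com/v2/export"'
TOKEN = "ghp_redacted"

response = requests.get(
    "https://api.github.com/search/code",
    params={"q": QUERY, "per_page": 20},
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {TOKEN}",
    },
    timeout=10,
)
response.raise_for_status()

# Each hit is a file that mentions the endpoint -- a starting point for
# spotting leaked credentials or internal documentation, not proof of a bug.
for item in response.json().get("items", []):
    print(item["repository"]["full_name"], item["path"], item["html_url"])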

Do you have any advice or recommended resources for researchers looking to get involved with Bug Bounty?

Definitely PortSwigger’s Web Security Academy labs and Hacker101. It is a good idea to go through the easiest tasks in each category and find something that looks interesting to you. Then learn everything you can about your favorite bug class: read reports, solve CTFs, try Hack The Box, and work through any labs you can find.

What’s one thing you wish you’d known when you first started?

Forget about “Definitely this is not vulnerable” or “I am sure this asset has been checked enough.” I have seen so many cases where other hackers found bugs on the www domain of a public program.

Bonus thought: if you know some rare vulnerability classes, don’t hesitate to run a couple of tests. I once found a padding oracle in a web app’s authentication cookie. Now I look for those on every target I come across.
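
For readers curious what “looking for those” might involve, here is a minimal, hypothetical detection probe (not a decryptor): it corrupts one byte of a CBC-encrypted cookie so the final block’s padding breaks, then compares the server’s response to the baseline. The URL, cookie name, captured value, and block size are all assumptions, and this should only ever be run against targets you are authorized to test.

import base64
import requests

# Hypothetical target, cookie name, and captured value -- a detection probe
# for authorized testing only, not a decryption exploit.
URL = "https://app.example.com/account"
COOKIE_NAME = "auth"
CAPTURED_COOKIE_B64 = base64.b64encode(bytes(32)).decode()  # replace with a real captured value
BLOCK_SIZE = 16  # assumption: AES-CBC with 16-byte blocks

def probe(raw: bytes) -> tuple[int, int]:
    """Send the (possibly corrupted) cookie and record status code and body length."""
    cookie = base64.b64encode(raw).decode()
    response = requests.get(URL, cookies={COOKIE_NAME: cookie}, timeout=10)
    return response.status_code, len(response.content)

original = base64.b64decode(CAPTURED_COOKIE_B64)

# Flip the last byte of the block *before* the final block (often the IV).
# In CBC that XORs straight into the final plaintext byte, i.e. the padding,
# so a server that reports "bad padding" differently from "bad data" leaks an oracle.
corrupted = bytearray(original)
corrupted[-(BLOCK_SIZE + 1)] ^= 0x01

baseline = probe(original)
mutated = probe(bytes(corrupted))
print("baseline response:", baseline)
print("mutated response: ", mutated)
print("responses differ -> worth a closer look" if baseline != mutated else "no obvious oracle")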


Thank you, @xiridium, for participating in GitHub’s bug bounty researcher spotlight! Each submission to our bug bounty program is a chance to make GitHub, our products, and our customers more secure, and we continue to welcome and appreciate collaboration with the security research community. So, if this inspired you to go hunting for bugs, feel free to report your findings through HackerOne.

The post How a top bug bounty researcher got their start in security appeared first on The GitHub Blog.
