Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

5 Windows developer features you didn't know about

From: kayla.cinnamon
Duration: 7:03
Views: 302

In this video, I show off the newest developer features that have been added to Windows.

Links:
Edit: https://github.com/microsoft/edit
Sudo: https://github.com/microsoft/sudo
Dev Drive: https://learn.microsoft.com/windows/dev-drive

Intro: (00:00)
End Task: (00:26)
File Explorer with Git integration: (01:12)
Edit: (02:16)
Sudo: (04:14)
Dev Drive: (05:08)
Outro: (06:47)

Socials:
👩‍💻 GitHub: https://github.com/cinnamon-msft
🐤 X: https://x.com/cinnamon_msft
🦋 Bluesky: https://bsky.app/profile/kaylacinnamon.bsky.social
🐘 Mastodon: https://hachyderm.io/@cinnamon

Disclaimer: I've created everything on my channel in my free time. Nothing is officially affiliated or endorsed by Microsoft in any way. Opinions and views are my own! 🩷

#windows #developer #sudo #git #development #performance #terminal

Read the whole story
alvinashcraft
27 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Join the Agent 365 'Ask Microsoft Anything' session on March 18


 

Sign up and participate in the March 18 Agent 365 AMA

🌐 aka.ms/Agent365AMA

 

 

Learn more about the capabilities of Agent 365 in this live 'Ask Microsoft Anything' with product and engineering team experts! Get your questions answered about capabilities for agent observability, security, and governance, developer resources, and how to get started as you confidently scale agents in your organization.

Meet the panel of Agent 365 experts

Irina Nechaeva
General Manager

Kendra Springer
Principal Group Product Manager

Pratap Ladhani
Technical Customer Architect

Gad Epstein
Principal Group Product Manager

Neta Haiby
Principal Group Product Manager
 

We're looking forward to you joining the AMA and bringing your questions. See you March 18th!

 

Watch the AMA on March 18 (or later) 

🌐 Live AMA: Microsoft Agent 365 | Microsoft Community Hub

 

 

Additional Agent 365 resources

Security in the agentic era: A new paradigm


Trust is the new control plane: A security leader's guide to the age of AI agents

Artificial intelligence is no longer a background utility — it's becoming an autonomous workforce. As AI agents take on independent roles in business processes, organizations must confront a new class of security, governance, and compliance challenges that traditional frameworks were never designed to handle.

The question is no longer whether to adopt AI. It's whether your organization can do so without creating unacceptable risk. This guide examines the three pillars every security and technology leader must address: securing AI systems, governing AI behavior, and complying with a rapidly evolving regulatory landscape.

 

Why the Stakes Are Higher Than Ever

By 2027, Microsoft projects that more than 1.3 billion AI agents will be deployed globally — roughly four times the population of the United States. These aren't passive tools waiting for instructions. They are autonomous systems operating around the clock, accessing privileged data, executing decisions, and interacting with customers, suppliers, and internal workflows — all without a human explicitly in the loop for every action.

For security leaders, the implications are immediate and stark:

  • 1.3 billion non-human identities to manage and secure
  • 1.3 billion autonomous workflows capable of triggering real-world consequences
  • 1.3 billion new attack surfaces that existing security architectures weren't built to defend

The Microsoft Work Trend Index reinforces the urgency: 95% of organizations are already using AI in some form. Yet only a single-digit percentage of leaders feel confident enough to deploy generative or agentic AI at enterprise scale. The gap between adoption and readiness is significant — and it's widening.

Closing that gap starts with one thing: trust. Nobody builds a digital workforce on a foundation they can't rely on.

 

The Shifting Threat Landscape

The World Economic Forum's top global risks through 2035 paint a sobering picture of where we're headed. Misinformation and disinformation rank among the top five risks — and AI is accelerating both.

AI-generated phishing attacks now achieve click rates up to four times higher than traditional attempts. There are no spelling mistakes. The logos are perfect. The tone matches your organization's communication style. Employees who were trained to spot obvious red flags are encountering attacks that look entirely legitimate.

But phishing is only part of the threat surface. The broader risks include:

  • Data oversharing — AI agents inadvertently exposing sensitive information across organizational and jurisdictional boundaries
  • Adverse AI outcomes — biased, inaccurate, or non-compliant outputs that create legal liability and reputational damage
  • Cyber espionage — the same autonomous capabilities that make AI agents powerful make them attractive targets for nation-state actors and sophisticated adversaries

The fundamental principles of good security hygiene haven't changed. But the autonomy and scale at which AI systems now operate means any failure can propagate faster and further than anything we've dealt with before.

 

Pillar one — securing AI: Getting the foundation right

Rethinking data loss prevention for the agentic era

Controlling what data AI agents can access and share is one of the most immediate operational challenges organizations face. Traditional Data Loss Prevention (DLP) systems were built around known identifiers — matching patterns in email traffic, file transfers, and endpoint activity. Agentic systems operating through APIs and Model Context Protocol (MCP) servers don't fit those models.

To address this, organizations must retrofit existing DLP controls for API-based and agent-driven data flows, while also building new capabilities:

  • Discover and classify data before AI indexes it — legacy data that was never formally classified becomes a serious liability the moment an AI system can read and surface it at machine speed
  • Implement agent-aware role-based access controls (RBAC) — access policies designed for human users often fail to account for the broader reach of autonomous agents
  • Manage data lifecycle actively — data retained beyond its useful or legal life becomes an unnecessary risk vector; organizations need clear policies on retention, restriction, and secure deletion
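As a minimal sketch of the agent-aware RBAC idea above (all names, roles, and policies here are hypothetical, not a specific Microsoft API), an agent's effective access can be computed as the intersection of its role's grants and an explicit per-agent allow-list, so an autonomous agent never silently inherits the full reach of a human role:

```python
# Hypothetical agent-aware RBAC sketch: effective access is the intersection of
# the role's grants and a per-agent scope that is reviewed like an employee's.

ROLE_GRANTS = {
    "support-agent": {"tickets", "kb-articles", "customer-contacts"},
    "finance-agent": {"invoices", "ledger"},
}

AGENT_SCOPES = {
    # Each non-human identity gets an explicit allow-list of datasets.
    "agent-001": {"role": "support-agent", "scope": {"tickets", "kb-articles"}},
}

def effective_access(agent_id: str) -> set:
    entry = AGENT_SCOPES[agent_id]
    return ROLE_GRANTS[entry["role"]] & entry["scope"]

def can_read(agent_id: str, dataset: str) -> bool:
    return dataset in effective_access(agent_id)
```

Here `can_read("agent-001", "customer-contacts")` is denied even though the underlying role would allow it, because the agent's own scope was never granted that dataset.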

Prompt injection and the new attack surface

Agentic AI has introduced a genuinely new class of attack. An adversary no longer needs to deploy malware in the conventional sense. Instead, they can embed hidden instructions in a document, an email, or an API response — and if an AI agent reads it, it may execute those instructions as legitimate commands.

Prompt injection attacks represent a fundamental shift in the threat model. Every input channel to an agent — a document upload, an API call, a user prompt, an external data feed — is a potential attack vector. Sanitizing and validating inputs before they reach the model is no longer optional security hygiene; it's a core architectural requirement.
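A minimal illustration of such an input gate, with made-up patterns (real deployments combine filtering with provenance tagging and model-side defenses; nothing here is a production-grade detector):

```python
import re

# Illustrative input-sanitization gate: flag untrusted content that looks like
# an instruction aimed at the agent before it ever reaches the model.
SUSPECT_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"exfiltrate",
]

def screen_untrusted_input(text: str):
    """Return (is_clean, matched_patterns) for a piece of untrusted content."""
    hits = [p for p in SUSPECT_PATTERNS if re.search(p, text, re.IGNORECASE)]
    return (not hits, hits)
```

The same gate would sit in front of every input channel the paragraph above lists: document uploads, API responses, user prompts, and external feeds.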

Other significant threats emerging in this space include:

  • Model theft — adversaries targeting the fine-tuned intellectual property embedded in proprietary AI models
  • Hallucination exploitation — fabricated or omitted information leading to biased, incorrect, or legally non-compliant outputs
  • Autonomous decision overreliance — trusting agent outputs without adequate human review, creating exposure to data leakage and incorrect actions executed at scale

Tackling shadow AI

A decade ago, "shadow IT" — employees using unapproved technology outside of IT oversight — was a governance nightmare. Shadow AI is the next chapter of that problem, and it's already here.

Employees are adopting AI tools independently, often without any visibility from security or compliance teams. Without centralized oversight, there's no way to verify that those tools follow responsible AI principles, handle data appropriately, or comply with applicable regulations.

A centralized AI tool policy — clearly defining which systems are approved and why — is essential. The goal isn't to limit innovation. It's to ensure that when AI is used, it can be trusted, explained, and audited.
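In its simplest form, such a policy is just a single approved-tools registry that records why each tool is allowed, so usage can be audited. A sketch, with illustrative tool names and fields:

```python
# Hypothetical centralized AI tool registry: one place that answers
# "which systems are approved, and why" for security and compliance teams.
APPROVED_AI_TOOLS = {
    "copilot-m365": {"data_residency": "tenant", "reviewed": "2025-09"},
    "internal-summarizer": {"data_residency": "tenant", "reviewed": "2025-11"},
}

def is_approved(tool_id: str) -> bool:
    return tool_id in APPROVED_AI_TOOLS

def approval_record(tool_id: str):
    """Auditable justification for an approval, or None if unapproved."""
    return APPROVED_AI_TOOLS.get(tool_id)
```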

 

Pillar two — governing AI: Accountability at scale

Security controls protect the system from external threats. Governance determines how the system behaves — and who is accountable when things go wrong.

Data governance as the foundation

AI systems are only as trustworthy as the data they are trained on and operate against. Poor data governance doesn't just create compliance risk — it undermines the reliability of every AI output. Effective data governance in the agentic era requires:

  • Clear data ownership and access policies that explicitly cover non-human identities
  • Centralized data catalogs with lineage tracking, quality controls, and metadata management
  • AI agent identity lifecycle management — onboarding, access reviews, and offboarding for agents, managed with the same rigor applied to human employees

Some leading organizations are already registering AI agents in HR systems and assigning human managers to each one. It's an imperfect analogy, but it enforces one critical discipline: making someone responsible for what the agent does.
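The agent-as-employee discipline can be sketched as a registry record that carries a job description, an accountable human manager, and a lifecycle state (all structures here are hypothetical, not any particular HR system's schema):

```python
from dataclasses import dataclass, field

# Sketch of agent identity lifecycle management: every agent is registered with
# a human manager and moves through onboarding, reviews, and offboarding,
# mirroring how human employees are managed.
@dataclass
class AgentRecord:
    agent_id: str
    job_description: str
    manager: str                  # the accountable human
    status: str = "onboarded"     # onboarded -> active -> offboarded
    access_reviews: list = field(default_factory=list)

def offboard(agent: AgentRecord) -> AgentRecord:
    """Offboarding revokes the agent just like a departing employee."""
    agent.status = "offboarded"
    return agent
```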

A framework for AI governance

Effective AI governance rests on two foundations: core principles and an implementation framework that spans the full AI lifecycle.

Core principles should address transparency in decision-making, clear accountability for outcomes, privacy protections for sensitive data, and meaningful human oversight for high-stakes decisions. These aren't abstract values — they need to be operationalized into specific technical controls and policy requirements.

Governance itself typically runs through three phases:

  • Selection — evaluating AI tools and models for safety, transparency, and compliance fit before adoption
  • Deployment — aligning policies and controls to measurable business objectives so success can actually be verified
  • Ongoing monitoring — continuously assessing performance, fairness, and regulatory compliance over the system's lifetime

One area that remains persistently underinvested: monitoring outputs, not just inputs. Most organizations build controls around what goes into an AI system. But AI operates at machine scale and speed — organizations equally need the capability to watch what's coming out, in real time.
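Output monitoring can start as simply as scanning what an agent emits before it leaves the system. A toy sketch, where one regex (a US SSN-like string) stands in for a real classifier:

```python
import re

# Illustrative output monitor: inspect what the agent *emits*, not just its
# inputs, and redact sensitive matches in real time before release.
SSN_LIKE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def release_output(text: str) -> str:
    """Redact SSN-like strings from agent output before it is released."""
    return SSN_LIKE.sub("[REDACTED]", text)
```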

The agent-as-employee mental model

One of the most practical governance frameworks emerging in this space is treating every AI agent like a new employee. Define its job description: what is it authorized to do? What data can it access? Under what conditions does it need a human manager's approval before acting?

If an agent needs access to a new database, who approves that request? If it begins exhibiting anomalous behavior, who receives the alert? If it needs to be decommissioned, what does offboarding look like?

These questions aren't just governance formalities. They are the building blocks of accountability in the agentic era — and the organizations that answer them clearly will be far better positioned than those that don't.

 

Pillar three — compliance: Shift left or fall behind

With 127 countries now enforcing privacy laws, and AI-specific regulations evolving roughly every four to five days, compliance has become a fast-moving target. The EU AI Act, GDPR, DORA, the Cyber Resilience Act — these frameworks overlap, interact, and frequently conflict in ways that create genuine decision fatigue for compliance teams.

The answer is not to wait for the landscape to stabilize. It won't. The only viable strategy is to shift compliance left — embedding regulatory requirements into architecture and process design from day one, rather than trying to retrofit them after deployment.

In practice, this means:

  • Aligning to established baselines such as ISO 42001 (AI Management Systems) rather than custom frameworks that can't be readily demonstrated to external regulators or auditors
  • Mapping extraterritorial regulatory scope — understanding which laws apply to your data flows based on where data originates, where it's processed, and where it's used, not just where your headquarters sits
  • Operationalizing responsible AI principles — translating commitments to privacy, explainability, and fairness into concrete technical controls, documented policies, and auditable practices

The principle of shared responsibility — well understood in cloud computing — applies equally to AI deployment. A platform provider can build AI systems assessed against responsible AI principles, with transparency records publicly available. But the organization deploying those systems still owns its governance choices, its data handling practices, and its accountability to regulators and customers.

 

Microsoft Marketplace: Where trust becomes a procurement decision

All of the security, governance, and compliance principles discussed above ultimately converge at a practical decision point: where and how you acquire AI agents matters as much as how you deploy them. This is where Microsoft Marketplace enters the picture — and it is significantly more than a software storefront.

In September 2025, Microsoft unified Azure Marketplace and AppSource into a single Microsoft Marketplace, with a dedicated "AI Apps and Agents" category featuring over 3,000 solutions. The strategic intent is clear: make trusted AI agents as easy to discover, evaluate, and procure as any enterprise software — with governance and security built into the acquisition process itself.

Two agent types, different security profiles

For software development companies publishing agents and enterprises evaluating them, Microsoft Marketplace distinguishes between two categories, each with distinct security implications:

  • Azure agents — general-purpose AI solutions running on Azure infrastructure, either hosted by the publisher as a SaaS offering or deployed into the customer's own tenant via container offers. These are suited to cloud-based agentic workflows with custom compute requirements.
  • Microsoft 365 agents — agents integrated directly into Copilot and M365 applications like Teams, Outlook, Word, and Excel. These enhance productivity within the Microsoft 365 environment and are distributed through the Agent Store within the M365 Copilot experience.

Marketplace as a governance lever for buyers

From the enterprise buyer's perspective, Marketplace isn't just about convenience — it's a governance tool. When an organization acquires an agent through Microsoft Marketplace, it is provisioned and distributed in alignment with the organization's existing security and governance standards. This means IT retains control over what gets deployed, to whom, and under what conditions — directly addressing the shadow AI problem discussed earlier.

What to Look for When Evaluating Marketplace Agents

Not all agents in the Marketplace are equal from a security standpoint. When evaluating third-party agents for enterprise deployment, security leaders should assess:

  • Responsible AI documentation — does the publisher provide transparent records of how the agent was assessed against safety and fairness principles?
  • Data handling disclosures — where is data processed, stored, and for how long? Does it leave your tenant?
  • Offer type architecture — is the agent SaaS-hosted or tenant-deployed, and does that match your data sovereignty requirements?
  • Compliance certifications — does the publisher hold relevant certifications (SOC 2, ISO 27001, regional data protection compliance) that align with your regulatory obligations?

The Marketplace's trusted channel doesn't eliminate due diligence — but it does provide a structured, governed starting point that unvetted direct procurement cannot.
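That due diligence can be made mechanical. A minimal sketch of a vetting gate over the checklist above (the field names are illustrative, not a real Marketplace offer schema):

```python
# Hypothetical due-diligence gate: given an offer's disclosed attributes,
# report which evaluation-checklist items are missing or empty.
REQUIRED = ("responsible_ai_docs", "data_handling_disclosed", "certifications")

def vetting_gaps(offer: dict) -> list:
    """Return the checklist items an offer fails to satisfy."""
    return [item for item in REQUIRED if not offer.get(item)]
```

An offer with documentation and data-handling disclosures but no certifications would come back with `["certifications"]`, prompting a follow-up with the publisher before deployment.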

 

Trust Is the New Control Plane

Behind every efficiency gain, every automated workflow, and every AI-powered product is a human being who needs to trust the system they're relying on. When that trust isn't present, adoption fails — and adoption failure is not a neutral outcome when your competitors are building a more capable AI-powered workforce around you.

But trust in this context means more than security compliance. It requires three things:

  • Explainability — the ability to show regulators, customers, and employees how a decision was reached
  • Resilience — the assurance that the system continues to operate safely even when interconnected components fail
  • Inclusivity — confidence that the system produces fair, unbiased outcomes across the full range of people it affects

Organizations that embed trust into the foundation of their AI strategy — not as a checkbox, but as a design principle — will hold a structural competitive advantage. Those that treat it as a formality will eventually face what's already playing out: billion-dollar lawsuits over unauthorized data use, regulatory fines for opaque deployment practices, and loss of the customer confidence that makes AI investment worthwhile in the first place.

 

A Practical Starting Point

The scope of this challenge can feel overwhelming. The path forward is not. Start here:

  • Establish security fundamentals first — data classification, access management, input sanitization, and output monitoring are non-negotiable foundations
  • Build your governance framework — define agent ownership, document responsibilities, and implement human-in-the-loop checkpoints for decisions that carry real-world consequences
  • Align to regulations proactively — adopt established frameworks, map your full regulatory exposure, and maintain documentation that can be externally verified
  • Invest in AI literacy across the organization — the single biggest barrier to responsible AI adoption is not technology. It is helping people understand how to use these systems well, when to trust them, and when to question them

The age of AI agents has arrived. The organizations that lead in this environment will be the ones that treat trust not as a constraint on innovation — but as the foundation it stands on.

 

This post is based on a presentation covering the security, governance, and compliance dimensions of AI in the agentic era. To view the full session recording, visit Security for SDC Series: Securing the Agentic Era Episode 1

For more on Microsoft's Responsible AI principles and Azure AI Foundry, visit Responsible AI: Ethical policies and practices | Microsoft AI.


Process Explorer v17.1, SDelete v2.06, and Sysmon 1.5.1 for Linux


Process Explorer v17.1

This update to Process Explorer, an advanced process, DLL, and handle viewing utility, fixes a crash generated by processes with long names.
 

SDelete v2.06

This update to SDelete, a command line utility for secure file deletion, adds support for long file paths and restricts MFT optimization to NTFS partitions.
 

Sysmon 1.5.1 for Linux

This update to Sysmon for Linux, a tool that monitors and logs system activity including process lifetime, network connections, file system writes, and more, fixes a Red Hat Enterprise Linux 9 eBPF program validation bug.
 

Introducing GPT-5.4 in Microsoft Foundry


Today, we’re announcing that OpenAI’s GPT‑5.4 will soon be generally available in Microsoft Foundry: a model designed to help organizations move from planning work to reliably completing it in production environments. As AI agents are applied to longer, more complex workflows, consistency and follow‑through become as important as raw intelligence. GPT‑5.4 combines stronger reasoning with built‑in computer use capabilities to support automation scenarios and dependable execution across tools, files, and multi‑step workflows at scale.

GPT-5.4: Enhanced Reliability in Production AI

GPT-5.4 is designed for organizations operating AI in real production environments, where consistency, instruction adherence, and sustained context are critical to success. The model brings together advances in reasoning, coding, and agentic workflows to help AI systems not only plan tasks but complete them with fewer interruptions and reduced manual oversight.

Compared with earlier generations, GPT-5.4 emphasizes stability across longer interactions, enabling teams to deploy agentic AI with greater confidence in day-to-day production use.

GPT-5.4 introduces advancements aimed at production-grade AI:

  • More consistent reasoning over time, helping maintain intent across multi‑turn and multi‑step interactions
  • Enhanced instruction alignment to reduce prompt tuning and oversight
  • Improved latency for responsive, real-time workflows
  • Integrated computer use capabilities for structured orchestration of tools, file access, data extraction, guarded code execution, and agent handoffs
  • More dependable tool invocation, reducing the need for human oversight
  • Higher‑quality generated artifacts, including documents, spreadsheets, and presentations with more consistent structure

Together, these improvements support AI systems that behave more predictably as tasks grow in length and complexity.

From capability to real-world outcomes

GPT‑5.4 delivers practical value across a wide range of production scenarios where follow‑through and reliability are essential:

  • Agent‑driven workflows, such as customer support, research assistance, and business process automation
  • Enterprise knowledge work, including document drafting, data analysis, and presentation‑ready outputs
  • Developer workflows, spanning code generation, refactoring, debugging support, and UI scaffolding
  • Extended reasoning tasks, where logical consistency must be preserved across longer interactions

Teams benefit from reduced task drift, fewer mid‑workflow failures, and more predictable outcomes when deploying GPT‑5.4 in production.

GPT-5.4 Pro: Deeper analysis for complex decision workflows

GPT‑5.4 Pro is a premium variant designed for scenarios where analytical depth and completeness are prioritized over latency.

Additional capabilities include:

  • Multi‑path reasoning evaluation, allowing alternative approaches to be explored before selecting a final response
  • Greater analytical depth, supporting problems with trade‑offs or multiple valid solutions
  • Improved stability across long reasoning chains, especially in sustained analytical tasks
  • Enhanced decision support, where rigor and thoroughness outweigh speed considerations

Organizations typically select GPT‑5.4 Pro when deeper analysis is required, such as in scientific research and complex problem solving, while GPT‑5.4 remains the right choice for workloads that prioritize reliable execution and agentic follow‑through.

Microsoft Foundry: Enterprise‑Grade Control from Day One

GPT‑5.4 and GPT‑5.4 Pro are available through Microsoft Foundry, which provides the operational controls organizations need to deploy AI responsibly in production environments. Foundry supports policy enforcement, monitoring, version management, and auditability, helping teams manage AI systems throughout their lifecycle.

By deploying GPT‑5.4 through Microsoft Foundry, organizations can integrate advanced agentic capabilities into existing environments while aligning with security, compliance, and operational requirements from day one.

Get Started with GPT-5.4 in Microsoft Foundry

GPT‑5.4 sets a new bar for production‑ready AI by combining stronger reasoning with dependable execution. Through enterprise‑grade deployment in Microsoft Foundry, organizations can move beyond experimentation and confidently build AI systems that complete complex work at scale. Computer use capabilities will be introduced shortly after launch.

GPT‑5.4 in Microsoft Foundry is priced at $2.50 per million input tokens, $0.25 per million cached input tokens, and $15.00 per million output tokens. It is available at launch in Standard Global and Standard Data Zone (US), with additional deployment options coming soon. GPT‑5.4 Pro is priced at $30.00 per million input tokens, and $180.00 per million output tokens, and is available at launch in Standard Global.
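Using the list prices quoted above, a rough per-request cost estimate for GPT‑5.4 can be sketched as follows (prices as stated at launch; actual billing may differ, and this helper is illustrative, not a Foundry API):

```python
# Cost estimate for a GPT-5.4 call using the per-million-token list prices
# above: $2.50 input, $0.25 cached input, $15.00 output (USD).
PRICES = {"input": 2.50, "cached_input": 0.25, "output": 15.00}

def estimate_cost(input_toks: int, output_toks: int, cached_toks: int = 0) -> float:
    """Estimate USD cost, counting cached input tokens at the cached rate."""
    fresh = input_toks - cached_toks
    usd = (fresh * PRICES["input"]
           + cached_toks * PRICES["cached_input"]
           + output_toks * PRICES["output"]) / 1_000_000
    return round(usd, 6)
```

For example, a call with fully cached input pays only the $0.25 per million cached rate on the input side, which matters for agentic workloads that resend long system prompts on every step.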

Build agents for real-world workloads. Start building with GPT‑5.4 in Microsoft Foundry today.


Instagram and Facebook deliver instant playback and boost user engagement with Media3 PreloadManager


Posted by Mayuri Khinvasara Khabya, Developer Relations Engineer (LinkedIn and X)






In the dynamic world of social media, user attention is won or lost quickly. Meta apps (Facebook and Instagram) are among the world's largest social platforms and serve billions of users globally. For Meta, delivering videos seamlessly isn't just a feature; it's the core of their user experience. Short-form videos, particularly Facebook Newsfeed and Instagram Reels, have become a primary driver of engagement. They enable creative expression and rapid content consumption, connecting and entertaining people around the world.


This blog post takes you through the journey of how Meta transformed video playback for billions by delivering true instant playback.


The latency gap in short-form videos


Short-form videos lead to fast-paced interactions as users quickly scroll through their feeds. Delivering a seamless transition between videos in an ever-changing feed introduces unique hurdles for instantaneous playback. This calls for solutions that go beyond traditional disk caching and standard reactive playback strategies.


The path forward with Media3 PreloadManager


To address the shift in consumption habits driven by the rise of short-form content, and the limitations of traditional long-form playback architecture, Jetpack Media3 introduced PreloadManager. This component allows developers to move beyond disk caching, offering granular control and customization to keep media ready in memory before the user hits play. Read this blog series to understand the technical details of media playback with PreloadManager.


How Meta achieved true instant playback

Existing Complexities


Previously, Meta used a combination of warmup (to get players ready) and prefetch (to cache content on disk) for video delivery. While these methods helped improve network efficiency, they introduced significant challenges. Warmup required instantiating multiple player instances sequentially, which consumed significant memory and limited preloading to only a few videos. This high resource demand meant that a more scalable, robust solution was needed to deliver the instant playback expected on modern, fast-scrolling social feeds.


Integrating Media3 PreloadManager

To achieve truly instant playback, Meta's Media Foundation Client team integrated the Jetpack Media3 PreloadManager into Facebook and Instagram. They chose the DefaultPreloadManager to unify their preloading and playback systems. This integration required refactoring Meta's existing architecture to enable efficient resource sharing between the PreloadManager and ExoPlayer instances. This strategic shift provided a key architectural advantage: the ability to parallelize preloading tasks and manage many videos using a single player instance. This dramatically increased preloading capacity while eliminating the high memory complexities of their previous approach.







Optimization and Performance Tuning

The team then performed extensive testing and iterations to optimize performance across Meta's diverse global device ecosystem. Initial aggressive preloading sometimes caused issues, including increased memory usage and scroll performance slowdowns. To solve this, they fine-tuned the implementation by using careful memory measurements, considering device fragmentation, and tailoring the system to specific UI patterns.


Fine tuning implementation to specific UI patterns

Meta applied different preloading strategies and tailored the behavior to match the specific UI patterns of each app:


  • Facebook Newsfeed: The UI prioritizes the video currently coming into view. The manager preloads only the current video to ensure it starts the moment the user pauses their scroll. This "current-only" focus minimizes data and memory footprints in an environment where users may see many static posts between videos. While the system is presently designed to preload just the video in view, it can be adjusted to also preload upcoming (future) videos. 


  • Instagram Reels: This is a pure video environment where users swipe vertically. For this UI, the team implemented an "adjacent preload" strategy. The PreloadManager keeps the videos immediately after the current Reel ready in memory. This bi-directional approach ensures that whether a user swipes up or down, the transition remains instant and smooth. The result was a dramatic improvement in the Quality of Experience (QoE) including improvements in Playback Start and Time to First Frame for the user.
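The two strategies above can be reduced to a small window function. This is a language-agnostic sketch of the selection logic only, not the actual Media3 PreloadManager API:

```python
# Sketch of the two preload strategies described above, expressed as the set of
# feed indices whose media should be kept ready in memory.
def preload_targets(current: int, total: int, strategy: str) -> set:
    if strategy == "current-only":
        # Facebook Newsfeed: only the video coming into view.
        return {current}
    if strategy == "adjacent":
        # Instagram Reels: one item in each direction, clamped to the feed.
        return {i for i in (current - 1, current, current + 1) if 0 <= i < total}
    raise ValueError(f"unknown strategy: {strategy}")
```

The "adjacent" window is what makes the Reels transition instant regardless of swipe direction, while "current-only" keeps the memory footprint minimal in a mixed feed.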


Scaling for a diverse global device ecosystem

Scaling a high-performance video stack across billions of devices requires more than just aggressive preloading; it requires intelligence. Meta faced initial challenges with memory pressure and scroll lag, particularly on mid-to-low-end hardware. To solve this, they built a Device Stress Detection system around the Media3 implementation. The apps now monitor I/O and CPU signals in real-time. If a device is under heavy load, preloading is paused to prioritize UI responsiveness.


This device-aware optimization ensures that the benefit of instant playback doesn't come at the cost of system stability, allowing even users on older hardware to experience a smoother, uninterrupted feed.
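The stress-detection gate amounts to a simple check over real-time load signals. The thresholds below are made-up examples for illustration, not Meta's actual values:

```python
# Illustrative device-stress gate: pause preloading when CPU or I/O load
# exceeds a threshold, so UI responsiveness is prioritized on weaker hardware.
CPU_LIMIT = 0.85   # fraction of CPU busy
IO_LIMIT = 0.90    # fraction of I/O bandwidth in use

def should_preload(cpu_load: float, io_load: float) -> bool:
    """Allow preloading only when the device has headroom on both signals."""
    return cpu_load < CPU_LIMIT and io_load < IO_LIMIT
```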




Architectural wins and code health

Beyond the user-facing metrics, the migration to Media3 PreloadManager offered long-term architectural benefits. While the integration and tuning process needed multiple iterations to balance performance, the resulting codebase is more maintainable. The team found that the PreloadManager API integrated cleanly with the existing Media3 ecosystem, allowing for better resource sharing. For Meta, the adoption of Media3 PreloadManager was a strategic investment in the future of video consumption.


By adopting preloading and adding device-intelligent gates, they successfully increased total watch time on their apps and improved the overall engagement of their global community. 


Resulting impact on Instagram and Facebook


The proactive architecture delivered immediate and measurable improvements across both platforms. 


  • Facebook experienced faster playback starts, decreased playback stall rates, and a reduction in bad sessions (such as rebuffering, delayed start times, or lower quality), which together resulted in higher watch time.


  • Instagram saw faster playback starts and an increase in total watch time. Eliminating join latency (the interval from the user's action to the first frame display) directly increased engagement metrics, and fewer interruptions from reduced buffering meant users watched more content.


Key engineering learnings at scale


As media consumption habits evolve, the demand for instant experiences will continue to grow. Implementing proactive memory management and optimizing for scale and device diversity ensures your application can meet these expectations efficiently.


  • Prioritize intelligent preloading

Focus on delivering a reliable experience by minimizing stutters and loading times through preloading. Rather than simple disk caching, leveraging memory-level preloading ensures that content is ready the moment a user interacts with it.


  • Align your implementation with UI patterns

Customize preloading behavior to match your app's UI. For example, use a "current-only" focus for mixed feeds like Facebook to save memory, and an "adjacent preload" strategy for vertical environments like Instagram Reels.

  • Leverage Media3 for long-term code health

Integrating with Media3 APIs rather than a custom caching solution allows for better resource sharing between the player and the PreloadManager, enabling you to manage multiple videos with a single player instance. This results in a future-proof codebase that engineering teams can maintain and optimize over time while also benefiting from the latest feature updates.

  • Implement device aware optimizations

Broaden your market reach by testing on various devices, including mid-to-low-end models. Use real-time signals like CPU, memory, and I/O to adapt features and resource usage dynamically.

Learn More


To get started and learn more, visit 


Now you know the secrets for instant playback. Go try them out!


