Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Microsoft’s new Xbox mode on Windows has leaked for any handheld

1 Share

Microsoft is getting ready to launch its Xbox full-screen experience on the new Xbox Ally devices next month, but it looks like you won’t need new hardware to get it. Windows enthusiasts have discovered a way to enable this new Xbox mode early in Windows 11, thanks to the latest 25H2 update to the operating system.

The method, which involves installing a Release Preview version of Windows 11 and making lots of tweaks, works on a variety of handheld gaming PCs — including MSI’s Claw devices and Asus’ ROG Ally range. I’ve been trying it out on the original ROG Ally today, and it lets the device bypass Asus’ own software in favor of Microsoft’s Xbox app at boot.

The new Xbox full-screen experience doesn’t load the full Windows desktop or a bunch of background processes, freeing up more memory for games. Essentially, it skips loading the Explorer shell, saving around 2GB of memory by suppressing the unnecessary parts of a typical Windows 11 installation.

You launch straight into the Xbox PC app instead, which includes all of your PC games from the Microsoft Store, Battle.net, Steam, and other storefronts. There’s a Game Bar for navigating around, and a new task view that’s a lot more handheld-friendly.

You can also still swap into a Windows desktop mode, or access Windows apps and games directly in this full-screen Xbox mode. Microsoft warns that you’re exiting to the Windows desktop and that you should use touch or a mouse and keyboard “for the best experience,” and it’s the exact same Windows experience that exists on multiple devices right now.

If you want to try this out for yourself, it’s a relatively easy process to get going. But be warned: fiddling with registry settings or the Windows Feature Store (known as Velocity) could result in system instability. If you’re willing to risk issues that might need rolling back, or even a reinstall of Windows, there’s a handy guide on Reddit with all the settings required.

Read the whole story
alvinashcraft
1 minute ago
reply
Pennsylvania, USA
Share this story
Delete

Introducing new update policy for Azure SQL Managed Instance


We’re excited to introduce the new SQL Server 2025 update policy for Azure SQL Managed Instance. Now in preview, the SQL Server 2025 update policy brings you the latest SQL engine innovation while retaining database portability to the new major release of SQL Server.

Update policy is an instance configuration option that gives you the flexibility to choose between instant access to the latest SQL engine features and a fixed SQL engine feature set corresponding to the 2022 or 2025 major releases of SQL Server. Regardless of the update policy chosen, you continue to benefit from Azure SQL platform innovation. New features and capabilities not related to the SQL engine – everything that makes Azure SQL Managed Instance a true PaaS service – are delivered to your Azure SQL Managed Instance resources on an ongoing basis.

Update policy for each modernization strategy

Always-up-to-date is a “perpetual” update policy. It has no end of lifetime and brings new SQL engine features to instances as soon as they are available in Azure. It enables you to always be at the forefront – to quickly adopt new yet production-ready SQL engine features, benefit from them in everyday operations and keep a competitive edge without waiting for the next major release of SQL Server.

In contrast, the SQL Server 2022 and SQL Server 2025 update policies contain fixed sets of SQL engine features corresponding to the respective releases of SQL Server. They’re optimized to fulfill regulatory compliance, contractual, or other requirements for database/workload portability from managed instance to SQL Server. Over time, they get security patches, fixes, and incremental functional improvements in the form of Cumulative Updates, but not new SQL engine features. They also have a limited lifetime, aligned with the mainstream support period of the corresponding SQL Server releases. As the end of mainstream support for an update policy approaches, you should upgrade instances to a newer policy; instances still on the policy at the end of mainstream support will be automatically upgraded to the next more recent policy.

What’s new in SQL Server 2025 update policy

In short, instances with the SQL Server 2025 update policy benefit from all the SQL engine features that were gradually added to the Always-up-to-date policy over the past few years and that are not available in the SQL Server 2022 update policy. The most notable features, along with the complete list, are available in the update policy documentation.

Best practices with the update policy feature

  • Plan for the end of lifetime of SQL Server 2022 update policy if you’re using it today, and upgrade to a newer policy on your terms before automatic upgrade kicks in.
  • Make sure to add update policy configuration to your deployment templates and scripts, so that you don’t rely on system defaults that may change in the future.
  • Be aware that using some of the newly introduced features may require changing the database compatibility level.

Summary and next steps

Azure SQL Managed Instance just got a new update policy: SQL Server 2025. It brings the same set of SQL engine features that exists in the new SQL Server 2025 release (currently in preview). Consider it if you have regulatory compliance, contractual, or other reasons for database/workload portability from Azure SQL Managed Instance to SQL Server 2025. Otherwise, use the Always-up-to-date policy, which always provides the latest features and benefits available to Azure SQL Managed Instance.

For more details, visit the update policy documentation. To stay up to date with the latest feature additions to Azure SQL Managed Instance, subscribe to the Azure SQL video channel, subscribe to the Azure SQL Blog feed, or bookmark the regularly updated What’s new in Azure SQL Managed Instance article.


Android 16 QPR2 Beta 2 is Here

Posted by Matthew McCullough, VP of Product Management, Android Developer

Android 16 QPR2 has reached Platform Stability today with Beta 2! That means the API surface is locked and the app-facing behaviors are final, so you can incorporate them into your apps and take advantage of our latest platform innovations.

New in the QPR2 Beta



At this later stage in the development cycle, we're focused on the critical work of readying the platform for release. Here are a few impactful changes we want to highlight:

Testing developer verification

To better protect Android users from repeat offenders, Android is introducing developer verification, a new requirement to make app installation safer by preventing the spread of malware and scams. Starting in September 2026 and in specific regions, Android will require apps to be registered by verified developers to be installed on certified Android devices, with an exception made for installs made through the Android Debug Bridge (ADB).

As a developer, you are free to install apps without verification by using ADB, so you can continue to test apps that are not intended or not yet ready to distribute to the wider consumer population.

For apps that enable user-initiated installation of app packages, Android 16 QPR2 Beta 2 contains new APIs that support developer verification during installation, along with a new adb command to let you force a verification outcome for testing purposes.

adb shell pm set-developer-verification-result

By using this command (see adb shell pm help for full details), you can now simulate verification failures. This allows you to understand the end-to-end user experience for both successful and unsuccessful verification, so you can prepare accordingly before enforcement begins.

We encourage all developers who distribute apps on certified Android devices to sign up for early access to get ready and stay updated.

SMS OTP Protection

The delivery of messages containing an SMS retriever hash will be delayed for most apps by three hours to help prevent OTP hijacking. The RECEIVE_SMS broadcast will be withheld and SMS provider database queries will be filtered. The SMS will be available to these apps after the three-hour delay.

Certain apps, such as the default SMS, assistant, and dialer apps, along with connected device companion apps and system apps, will be exempt from this delay, and apps can continue to use the SMS Retriever API to access messages intended for them in a timely manner.

Custom app icon shapes

Android 16 QPR2 allows users to select from a list of icon shapes that apply to all app icons and folder previews. Check to make sure that your adaptive icon works well with any shape the user selects.

More efficient garbage collection

The Android Runtime (ART) now includes a Generational Concurrent Mark-Compact (CMC) Garbage Collector in Android 16 QPR2 that focuses collection efforts on newly allocated objects, which are more likely to be garbage. You can expect reduced CPU usage from garbage collection, a smoother user experience with less jank, and improved battery efficiency.

Native step tracking and expanded exercise data in Health Connect

Health Connect now automatically tracks steps using the device's sensors. If your app has the READ_STEPS permission, this data will be available from the "android" package. Not only does this simplify the code needed to do step tracking, it's more power efficient as well.

Also, the ExerciseSegment and ExerciseSession data types have been updated. You can now record and read weight, set index, and Rate of Perceived Exertion (RPE) for exercise segments. Since Health Connect is updated independently of the platform, checking for feature availability before writing the data will ensure compatibility with the current local version of Health Connect.

// Check if the expanded exercise features are available
val newFieldsAvailable = healthConnectClient.features.getFeatureStatus(
    HealthConnectFeatures.FEATURE_EXPANDED_EXERCISE_RECORD
) == HealthConnectFeatures.FEATURE_STATUS_AVAILABLE

val segment = ExerciseSegment(
    //...
    // Conditionally add the new data fields
    weight = if (newFieldsAvailable) Mass.fromKilograms(50.0) else null,
    setIndex = if (newFieldsAvailable) 1 else null,
    rateOfPerceivedExertion = if (newFieldsAvailable) 7.0f else null
)

A minor SDK version

QPR2 marks the first Android release with a minor SDK version, allowing us to innovate more rapidly with new platform APIs delivered outside of our usual once-yearly timeline. Unlike the major platform release (Android 16) in 2025-Q2, which included behavior changes that impact app compatibility, the changes in this release are largely additive and designed to minimize the need for additional app testing.

Android 16 SDK release cadence

Your app can safely call the new APIs on devices where they are available by using SDK_INT_FULL and the respective value from the VERSION_CODES_FULL enumeration.

if (Build.VERSION.SDK_INT_FULL >= Build.VERSION_CODES_FULL.BAKLAVA_1) {
    // Call new APIs from the Android 16 QPR2 release
}

You can also use the Build.getMinorSdkVersion() method to get just the minor SDK version number.

val minorSdkVersion = Build.getMinorSdkVersion(VERSION_CODES_FULL.BAKLAVA)

The original VERSION_CODES enumeration can still be used to compare against SDK_INT for APIs declared in non-minor releases.

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.BAKLAVA) {
    // Call new APIs from the Android 16 release
}

Since minor releases aren't intended to have breaking behavior changes, they cannot be used in the uses-sdk manifest attributes.

Get started with the Android 16 QPR2 beta

You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio.  If you are already in the Android Beta program, you will be offered an over-the-air update to Beta 2. We’ll update the system images and SDK regularly throughout the Android 16 QPR2 release cycle.

If you are in the Canary program and would like to enter the Beta program, you will need to wipe your device and manually flash it to the beta release.

For the best development experience with Android 16 QPR2, we recommend that you use the latest Canary version of Android Studio Narwhal Feature Drop.

We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release. Thank you for helping to shape the future of the Android platform.


Random.Code() - Fixing Bugs (Once More) in Rocks

From: Jason Bock
Duration: 1:16:24
Views: 6


Scott Hunter


Conversation with Scott Hunter (VP, Microsoft) about Visual Studio, Copilot agents, MCP, Azure, Azure Functions, and much more.

Photo Scott Hunter

Note: ZenCastr ate Scott’s links, but you can get started at https://learn.microsoft.com


A Guide to AI Data Management


AI data management is emerging as a crucial discipline for organizations aiming to maximize the value of their AI initiatives. Unlike traditional data practices, it must handle massive volumes of diverse, fast-changing data while ensuring reliability, fairness, and compliance. When done well, it streamlines model development, reduces risk, and makes AI projects more scalable and sustainable. In the future, advances in automation and governance will likely make AI data management increasingly self-directed, adaptive, and integral to enterprise strategy.

What is AI data management?

AI data management involves collecting, organizing, storing, and governing data so that it can be used to train AI models. Because AI models depend on large, varied datasets to generate accurate predictions and insights, AI data management focuses less on consistency and accessibility (prioritized in traditional data management) and more on the quality, diversity, and scalability of data.

Key facets of AI data management include preparing raw data for machine learning, handling unstructured formats such as text, images, and video, complying with data privacy regulations, and facilitating access for data scientists and engineers. By establishing a reliable foundation for data, AI data management allows organizations to fully realize the potential of their AI initiatives while also minimizing bias, errors, and regulatory violations.

How is AI data management different from traditional data management?

While traditional data management and AI data management share the fundamental goal of organizing and utilizing data, AI models require a specialized approach. Traditional data management focuses on storing and delivering data for reporting and operations, while AI data management focuses on addressing the unique needs of machine learning algorithms. These needs include massive data volumes, real-time processing capabilities, and stringent quality standards. The comparison chart below illustrates the biggest differences between these two approaches.

 

Aspect | Traditional data management | AI data management
Primary goal | Ensure accurate, reliable, and consistent data for business processes and reporting | Provide high-quality, diverse, and scalable datasets for training and deploying AI/ML models
Types of data | Structured data (tables, transactions, logs) | Structured, semi-structured, and unstructured data (text, images, audio, video, sensor data)
Processes | Data storage, integration, governance, and compliance | Data labeling, preprocessing, feature engineering, model-specific data pipelines
Scale | Moderate, focused on operational data | Massive, often petabyte-scale, optimized for AI workloads
Change cycle | Relatively static, with periodic updates | Highly iterative and dynamic, requiring continuous updates and feedback loops
Challenges | Accuracy, consistency, compliance | Bias mitigation, data diversity, scalability, model alignment

AI data management use cases

Because AI models depend on massive datasets, the data they’re fed must be properly collected, organized, stored, and governed. Below are some use cases that demonstrate why proper AI data management makes a difference:

    • Training data pipelines: Building automated workflows that move raw data through cleaning, labeling, and feature engineering steps ensures it’s ready for AI training.
    • Unstructured data management: Organizing and storing diverse formats like images, audio, and text properly allows them to be accessible to machine learning models.
    • Metadata and lineage tracking: Recording data origins, transformations, and usage ensures transparency, reproducibility, and trust in AI outputs.
    • Scalable storage solutions: Managing petabyte-scale datasets in cloud or hybrid environments supports large, compute-intensive training tasks.
    • Data governance for AI: Applying rules and policies ensures data quality, security, and compliance with regulations when preparing data for AI.
    • Bias detection and mitigation: Monitoring datasets for imbalance or skew helps reduce harmful bias in model training and outcomes.
    • Continuous data refresh: Updating training datasets with new, real-world information allows models to remain accurate and relevant over time.
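To make the first of these use cases concrete, here is a minimal Python sketch of such a pipeline that moves raw records through cleaning, labeling, and feature engineering. Everything in it is illustrative: the field names, the toy labeling rule, and the engineered features are invented for the example, and a real pipeline would use dedicated tooling for each step.

```python
# Minimal sketch of a training-data pipeline: clean -> label -> feature-engineer.
# All field names and the labeling rule are hypothetical.

def clean(records):
    # Drop records with missing text and normalize whitespace.
    return [
        {**r, "text": " ".join(r["text"].split())}
        for r in records
        if r.get("text")
    ]

def label(records):
    # Toy labeling rule standing in for a human or model annotator.
    return [
        {**r, "label": "positive" if "great" in r["text"].lower() else "negative"}
        for r in records
    ]

def featurize(records):
    # Simple engineered features: token count and character length.
    return [
        {**r, "num_tokens": len(r["text"].split()), "num_chars": len(r["text"])}
        for r in records
    ]

raw = [
    {"text": "  This product is great!  "},
    {"text": None},
    {"text": "Disappointing battery life"},
]

training_ready = featurize(label(clean(raw)))
for row in training_ready:
    print(row["label"], row["num_tokens"])
```

Each stage takes and returns plain records, so the stages can be tested independently and reordered or swapped as the pipeline evolves.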

Benefits of AI data management

AI data management provides organizations with a foundation for driving successful AI initiatives. With accurate, accessible, and well-governed data, businesses can train more reliable models, accelerate development cycles, and minimize risks. Beyond improving model quality, effective data management also makes it easier to scale AI efforts. Here’s a more detailed breakdown of the business and technical benefits:

Business benefits

    • Higher model accuracy: Clean, well-organized, and representative datasets improve the performance and reliability of AI models.
    • Reduced bias and risk: Governance and monitoring practices help detect and mitigate bias, ensuring fairer and more ethical AI outcomes.
    • Improved compliance: Strong data governance ensures alignment with privacy and regulatory requirements, such as GDPR, HIPAA, or CCPA.
    • Scalability: Structured data pipelines and scalable storage help organizations manage increasingly large and complex datasets for AI training.
    • Faster AI development: Streamlined data preparation and organization accelerate the process of building and deploying models.
    • Greater transparency and trust: Metadata management and lineage tracking provide visibility into where data comes from and how it’s used in training.
    • Operational efficiency: Automating data workflows reduces manual effort, lowers costs, and frees teams to focus on higher-value AI development tasks.

Technical benefits

    • Data pipeline automation: AI data management orchestrates the ingestion, preprocessing, labeling, and transformation of data to ensure that training-ready datasets are consistently delivered.
    • Metadata and lineage tracking: Detailed records of data versions, transformations, and sources are maintained, which ensures reproducibility and enables thorough auditability.
    • Feature store integration: Engineered features are centralized for reuse across multiple models, reducing duplication of work and accelerating experimentation.
    • Scalable storage and compute: The system supports petabyte-scale datasets and integrates with distributed computing environments to handle high-performance AI training workloads.
    • Continuous data refresh: New data streams are automatically incorporated into training pipelines, allowing models to be retrained efficiently without manual intervention.
    • Bias and quality checks: Automated validation is embedded into workflows to detect data skew, imbalances, or missing values before they negatively affect model performance.
    • Model-aligned governance: Access control, security, and compliance rules are enforced in alignment with AI workflows and the handling of sensitive datasets.
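As an illustration of the bias and quality checks described above, here is a small Python sketch that validates a dataset for missing labels and class imbalance before training. The thresholds and field names are arbitrary choices for the example, not drawn from any particular platform.

```python
# Sketch of automated data-quality and bias checks run before training.
# The threshold and field names are illustrative only.
from collections import Counter

def quality_report(records, label_field="label", max_imbalance=0.8):
    issues = []

    # Missing-value check.
    missing = sum(1 for r in records if r.get(label_field) is None)
    if missing:
        issues.append(f"{missing} record(s) missing '{label_field}'")

    # Class-imbalance check: flag when one class dominates the dataset.
    counts = Counter(
        r[label_field] for r in records if r.get(label_field) is not None
    )
    total = sum(counts.values())
    for cls, n in counts.items():
        if total and n / total > max_imbalance:
            issues.append(f"class '{cls}' is {n / total:.0%} of the data")

    return issues

data = [{"label": "spam"}] * 9 + [{"label": "ham"}] + [{"label": None}]
report = quality_report(data)
print(report)
```

A check like this would sit at the end of the ingestion pipeline, failing the run (or alerting a human) before a skewed or incomplete dataset reaches training.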

Challenges of AI data management

Managing large, diverse datasets requires balancing business priorities like compliance and transparency with technical demands around pipelines, storage, and automation. Understanding the challenges associated with juggling these priorities is the first step toward building strategies that keep AI initiatives effective and sustainable.

Business challenges

    • Regulatory compliance: Organizations must navigate complex data privacy rules, including GDPR, HIPAA, and CCPA, when preparing datasets for AI training.
    • Bias and fairness: Ensuring that datasets are representative and free of bias is crucial for ethical AI, but detecting and mitigating bias can be challenging.
    • Data ownership and governance: Clear policies are required to manage who controls and accesses sensitive data across different teams and systems.
    • Scaling responsibly: Expanding AI initiatives while maintaining transparency, accountability, and trust is a challenge without mature governance frameworks.
    • Resource allocation: Balancing time, budget, and personnel between data preparation, model development, and ongoing management can strain business resources.
    • Change management: Adapting organizational processes to incorporate AI data management practices often meets resistance or requires cultural shifts.
    • Cross-functional coordination: Aligning business units, data teams, and compliance officers to ensure consistent and accurate data handling is a complex task.

Technical challenges

    • Data quality and preparation: Cleaning, labeling, and structuring raw data at scale is an error-prone process requiring significant technical effort.
    • Handling unstructured data: Processing text, images, audio, and video into usable formats for AI training demands advanced tools and specialized infrastructure.
    • Storage and compute scalability: Supporting petabyte-scale datasets and compute-intensive AI training workflows can strain traditional IT systems.
    • Metadata and lineage tracking: Capturing and maintaining accurate records of data sources, transformations, and versions adds operational complexity.
    • Continuous data refresh: Keeping training datasets updated in near real time without disrupting existing pipelines is technically challenging.
    • Integration across systems: Combining data from siloed platforms into unified, training-ready pipelines typically requires custom solutions.
    • Monitoring and error detection: Detecting anomalies, data drift, or pipeline failures in complex AI workflows requires ongoing monitoring and the implementation of automated safeguards.

AI data management tools

Managing data for AI training requires a variety of specialized tools to collect, organize, store, and govern it effectively. The right stack depends on your industry, organization size, and specific AI use cases, but most AI data management ecosystems combine tools from several categories. Here’s a more detailed breakdown of what’s available:

    • Data integration platforms: Tools such as Apache NiFi, Talend, and Fivetran connect and consolidate data from multiple sources so that it flows consistently into AI pipelines.
    • Data labeling and annotation tools: Platforms like Labelbox, Scale AI, and Amazon SageMaker Ground Truth allow you to annotate text, images, audio, and video for supervised machine learning.
    • Data storage and lakehouse solutions: Technologies such as Snowflake, Google BigQuery, and Couchbase Capella provide scalable storage for both structured and unstructured datasets.
    • Metadata and lineage tracking tools: Solutions like Apache Atlas and DataHub provide visibility into the data’s origin, how it changes, and how it’s used in AI training.
    • Feature stores: Platforms like Tecton and Feast centralize engineered features, making them reusable across different models and experiments.
    • Data governance and compliance platforms: Tools such as Collibra and Alation enforce rules, access controls, and privacy policies to help ensure data is handled responsibly.
    • Monitoring and quality assurance tools: Solutions like Monte Carlo and WhyLabs detect anomalies, data drift, and pipeline failures to maintain reliable training data over time.
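To give a feel for the kind of check that monitoring tools in the last category automate, here is a toy Python drift detector that flags when a feature’s mean shifts noticeably between a training-time baseline and fresh data. The rule and threshold are deliberately simplistic; production tools use far more robust statistics.

```python
# Toy data-drift check: compare a numeric feature's distribution between a
# baseline (training-time) sample and fresh production data.
import statistics

def drifted(baseline, current, threshold=0.25):
    # Flag drift when the mean shifts by more than `threshold` times
    # the baseline standard deviation. The threshold is arbitrary.
    base_mean = statistics.mean(baseline)
    base_sd = statistics.stdev(baseline)
    shift = abs(statistics.mean(current) - base_mean)
    return shift > threshold * base_sd

baseline = [10.0, 11.0, 9.5, 10.5, 10.2]
stable = [10.1, 10.4, 9.9]     # small shift from baseline
shifted = [14.0, 15.2, 13.8]   # large shift from baseline

print(drifted(baseline, stable))
print(drifted(baseline, shifted))
```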

No single platform covers every aspect of AI data management, so organizations typically combine integration, storage, governance, and monitoring tools to create a more cohesive stack. By selecting the right mix, you can ensure that your data is reliable, compliant, and optimized for training AI models at scale.

The future of AI in data management

In the future, AI data management will evolve from simply preparing data for training models into a fully intelligent, adaptive system. As data volumes and complexity continue to increase, organizations will rely on AI-driven automation, smarter governance, and self-optimizing pipelines to keep up. Rather than just supporting AI, data management will increasingly be powered by AI, making the process faster, more scalable, and more resilient.

    • Fully autonomous pipelines: AI data management will shift toward self-managing pipelines that can ingest, clean, label, and transform data with little to no human oversight.
    • Proactive governance: Instead of static compliance rules, governance systems will predict risks and automatically enforce evolving regulatory and ethical standards.
    • Self-healing infrastructure: Storage and compute systems will detect bottlenecks, failures, or inefficiencies and reconfigure themselves in real time to maintain performance.
    • Real-time multimodal integration: AI will unify structured, unstructured, streaming, and multimodal data (text, vision, audio, IoT) into single, usable datasets.
    • Continuous bias mitigation: Future platforms will detect bias dynamically during both training and inference, adjusting datasets and features to ensure fairness.
    • Standardized AI-native ecosystems: Industry-wide frameworks for feature sharing, metadata exchange, and model-ready datasets will improve platform interoperability.
    • Human-AI co-management: Data teams will collaborate with AI copilots that proactively recommend optimizations, simulate governance impacts, and even generate training-ready datasets on demand.

The long-term trajectory of AI data management points toward systems that are not only scalable but also adaptive and self-governing. As automation continues and governance becomes more proactive, organizations will be able to trust their data pipelines to operate with minimal oversight while maintaining transparency and fairness. Ultimately, the future of AI data management lies in seamless collaboration between humans and AI. In this world, people will focus on strategy and innovation, while AI focuses on making data reliable, compliant, and ready to fuel the next generation of models.


Key takeaways and additional resources

By focusing on the quality, diversity, and governance of data, rather than just storage and accessibility, businesses can build stronger models, reduce risks, and gain a competitive edge. Below are the most important insights to remember:

Key takeaways

    1. AI data management goes beyond traditional data practices by prioritizing the quality, diversity, and scalability of datasets to support machine learning.
    2. Unlike traditional data management, it must handle structured, semi-structured, and unstructured formats such as text, images, audio, and video.
    3. Building reliable training pipelines requires automation for tasks like data cleaning, labeling, and feature engineering at scale.
    4. Strong governance and metadata tracking are essential to ensure transparency, compliance, and trust in AI outcomes.
    5. Effective AI data management reduces bias and risk by continuously monitoring datasets for fairness and representativeness.
    6. The right mix of integration, storage, governance, and monitoring tools creates a cohesive ecosystem optimized for AI workloads.
    7. The future of AI data management will be defined by adaptive, autonomous systems that enable human-AI collaboration while maintaining compliance and fairness.

FAQs

Why is AI data management important for businesses? AI data management ensures that data is accurate, organized, and governed, which helps businesses build reliable AI models, reduce risks, and scale their initiatives more effectively.

How is AI transforming data management? AI is automating tasks like data cleaning, labeling, integration, and monitoring, making data pipelines more efficient and adaptive while reducing the need for manual intervention.

How is AI used in database management? AI enhances database management by optimizing queries, automating indexing, detecting anomalies, and predicting performance issues before they disrupt operations.

How does AI data management handle unstructured data? It uses techniques like natural language processing, computer vision, and embedding models to extract meaning and structure from text, images, audio, and video.

How do you integrate AI data management into existing systems? Integration typically involves layering AI-driven tools onto existing data infrastructure, such as data lakes, warehouses, and pipelines, through APIs and connectors that minimize disruption.

The post A Guide to AI Data Management appeared first on The Couchbase Blog.
