
Welcoming Web Content to Native Apps


The web is ubiquitous and accessible, but native apps are desirable and personal. Developers/teams maintaining web apps likely have a plethora of web content in HTML, CSS and JavaScript. Thanks to modern WebViews, like the WebView2 in Uno Platform, web content is now very welcome on native platform targets. Let’s dive in to explore.

A Modern WebView

Web apps are universal. From informational dashboards to entire line-of-business systems, rich interactive experiences are now routinely delivered as web content, reachable from any device with a browser and continuously updated without redeployment.

At the same time, native mobile/desktop apps remain essential. They offer performance, platform integration, offline capabilities, and access to device features that the browser alone cannot always provide.

Historically, there was a sharp divide: web content lived in the browser, and native apps had to re-implement functionality to bring those experiences to devices. That divide is rapidly disappearing.

Modern WebView technologies have matured to the point where web content can be treated as a first-class citizen inside native apps. Developers can now embed web experiences directly into desktop and mobile apps, while retaining full control over navigation, lifecycle, and host-to-web communication. Rather than replacing native UI, web content can now complement it — used where it makes sense, shared across platforms, and updated independently of the app itself.

Uno Platform embraces this hybrid reality. By providing a unified, cross-platform WebView2 UI component, Uno Platform enables developers to seamlessly welcome web content into native apps across mobile, desktop and the web itself. The result is a pragmatic approach: reuse existing web investments, accelerate development, and blend web and native UI without sacrificing the strengths of either. Let's take a look at the developer experience.

Working with Web Content

Modern WebViews are excellent UI components – they have the full rendering power of the underlying browser engine and are smart enough to render through the right abstraction for whichever target platform Uno Platform is running on. What type of things can WebView2 render? As long as the content speaks the web's love languages – HTML, CSS and JavaScript – everything is fair game.

Remote Content

Let’s start with the simplest scenario – there is some remote web content meant to be rendered in browsers. One can add a WebView2 to UI markup, like so:

<WebView2 x:Name="MyWebView" />

And simply point the WebView to the website.

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    MyWebView.Source = new Uri("https://platform.uno/");
}

When housed in a UI container that doesn't constrain it, WebView2 simply takes up all available space and renders the website faithfully – responsively too, if the site supports it – just inside a native desktop/mobile app.
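
Because the host stays in charge of navigation, remote content can also be fenced in to the sites it is meant to show. Here is a minimal sketch, assuming the WinUI-style NavigationStarting event (with Uri and Cancel on its args) that WebView2 exposes:

// Cancel any navigation that would leave the intended site.
MyWebView.NavigationStarting += (s, args) =>
{
    if (!args.Uri.StartsWith("https://platform.uno", StringComparison.OrdinalIgnoreCase))
    {
        args.Cancel = true; // keep users inside the embedded experience
    }
};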

DOM Manipulation

WebView2 is also perfectly happy to render local HTML/CSS – let’s change up the UI markup first:

<WebView2 x:Name="MyWebView" />

Once the container XAML page loads up, we can feed the WebView some random HTML to render – a DIV tag in this case.

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    // Wire up the handler before navigating so the event isn't missed.
    MyWebView.NavigationCompleted += async (s, args) =>
    {
        await MyWebView.ExecuteScriptAsync("document.getElementById('mydiv').style.backgroundColor = 'red';");
    };

    // Illustrative local HTML - any markup works here.
    MyWebView.NavigateToString("<div id='mydiv' style='background-color: blue;'>Hello from a native app!</div>");
}

Once the WebView has rendered the HTML, Document Object Model (DOM) manipulation works as well – thanks to JavaScript execution, the DIV color changes from blue to red.

Execute JavaScript

Need the WebView to execute local JavaScript? It’s essentially a browser – so, things work as expected:

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    MyWebView.NavigationCompleted += async (s, args) =>
    {
        await MyWebView.ExecuteScriptAsync("document.getElementById('mydiv').style.backgroundColor = 'red';");

        var jsExecution = await MyWebView.ExecuteScriptAsync("1 + 1");
        await MyWebView.ExecuteScriptAsync("document.getElementById('myH1').textContent =" + jsExecution + ";");
    };

    // Illustrative local HTML: the H1 will display the script result.
    MyWebView.NavigateToString("<h1 id='myH1'></h1><div id='mydiv' style='background-color: blue;'>Hello from a native app!</div>");
}

The additional header tag in the HTML shows the result of the JavaScript expression adding two numbers – unlike non-deterministic AI, JS execution means actual math.
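
One detail worth knowing: ExecuteScriptAsync hands results back JSON-encoded, so structured data can round-trip too. A minimal sketch, assuming the JSON-encoding behavior documented for the underlying WebView2 API and System.Text.Json on the host side:

// The script returns a JSON string, and ExecuteScriptAsync JSON-encodes
// that result again - so it gets unwrapped twice on the C# side.
var json = await MyWebView.ExecuteScriptAsync("JSON.stringify({ sum: 1 + 1, label: 'math' })");
var inner = System.Text.Json.JsonSerializer.Deserialize<string>(json);
using var doc = System.Text.Json.JsonDocument.Parse(inner);
var sum = doc.RootElement.GetProperty("sum").GetInt32(); // 2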

C#-JS Communication

Uno Platform apps are essentially .NET cross-platform apps with C#/XAML code, while the WebView2 component works with web UI in HTML/CSS/JavaScript. If needed, the web content in the WebView can interact with the host – yes, C# and JavaScript can communicate easily. Let's add a native text element to our XAML UI markup:

<WebView2 x:Name="MyWebView" />
<TextBlock x:Name="MyTextBlock" />

Now, let’s write up a JavaScript function with some platform-specific checks – the goal is to post a message from the WebView to the C#/XAML host:

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    // MyWebView.Source = new Uri("https://platform.uno/");

    MyWebView.NavigationCompleted += async (s, args) =>
    {
        await MyWebView.ExecuteScriptAsync("document.getElementById('mydiv').style.backgroundColor = 'red';");

        var jsExecution = await MyWebView.ExecuteScriptAsync("1 + 1");
        await MyWebView.ExecuteScriptAsync("document.getElementById('myH1').textContent =" + jsExecution + ";");

        var jsFunction = @"
            function postWebViewMessage(message){
                try{
                    if (window.hasOwnProperty('chrome') && typeof chrome.webview !== 'undefined') {
                        // Windows
                        chrome.webview.postMessage(message);
                    }
                    else if (window.hasOwnProperty('unoWebView')) {
                        // Android
                        unoWebView.postMessage(JSON.stringify(message));
                    }
                    else if (window.hasOwnProperty('webkit') && typeof webkit.messageHandlers !== 'undefined') {
                        // iOS and macOS
                        webkit.messageHandlers.unoWebView.postMessage(JSON.stringify(message));
                    }
                } catch (ex){
                    alert('Error occurred: ' + ex);
                }
            }";

        await MyWebView.ExecuteScriptAsync(jsFunction);
        await MyWebView.ExecuteScriptAsync("postWebViewMessage('Hello world from JS!');");
    };

    MyWebView.WebMessageReceived += (s, args) =>
    {
        MyTextBlock.Text = args.WebMessageAsJson;
    };

    // Illustrative local HTML, as in the earlier snippets.
    MyWebView.NavigateToString("<h1 id='myH1'></h1><div id='mydiv' style='background-color: blue;'>Hello from a native app!</div>");
}

When C# receives the message from the WebView, it simply updates the XAML text element with what came in from JavaScript – and communication works the other way as well.
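
Going the other way needs no new machinery – the host can inject a JavaScript function once and invoke it whenever it has something to say, using the same ExecuteScriptAsync shown above. A minimal sketch; the onHostMessage function name is hypothetical:

// Inject a receiver into the page, then call it from C#.
await MyWebView.ExecuteScriptAsync(
    "function onHostMessage(msg){ document.getElementById('myH1').textContent = msg; }");
await MyWebView.ExecuteScriptAsync("onHostMessage('Hello world from C#!');");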

Inspect That

Just to make sure WebView2 renders web content seamlessly across platforms, let's switch up the target runtime – this time iOS running on an iPad simulator.

Works as expected – this is web content rendered by a WebView, but inside the shell of a native cross-platform .NET app. One thing web developers love is browser developer tools – being able to inspect and manipulate web UI, while the app is running in the browser. Should the same developer experience be possible with native apps hosting a WebView? Sure thing – some platforms might need extra permissions to make web content inspectable, like for iOS at app startup:

public App()
{
    this.InitializeComponent();

#if __IOS__
    Uno.UI.FeatureConfiguration.WebView2.IsInspectable = true;
#endif
}

Now when the app is running, Safari Developer mode sees the app – essentially, noticing the WebView component running inside a native app.

And just like that, any and all web content rendered inside the WebView is inspectable with browser developer tools – developers can manipulate HTML/CSS/JS to their heart's content.
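
On Android, a similar switch exists at the platform level for Chrome DevTools inspection. A hedged sketch, assuming the native Android API is reachable from the app's Android-specific code path – for example alongside the iOS flag above:

#if __ANDroid__
    // Android's native debugging switch for WebView content; placement
    // here is an assumption, not Uno Platform guidance.
    Android.Webkit.WebView.SetWebContentsDebuggingEnabled(true);
#endif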

Native Apps Welcome Web Content

Got web content in the form of HTML, CSS and JavaScript? It is most welcome in native cross-platform apps on mobile and desktop. The WebView2 UI component in Uno Platform warmly accepts web UI – developers get to reuse web investments and keep experiences consistent across web and devices. Rich web experiences often need native counterparts – modern WebViews make that seamless.

But what if developers/teams have investments in building browser experiences with modern web frameworks, like Angular/React/others? Fret not – that works as well. Please stay tuned for upcoming articles to showcase the reusability experience.

Cheers developers!

Next Steps

Ready to boost your productivity and simplify cross-platform .NET development? Try Uno Platform today by downloading our extension for your preferred IDE and join our developer community.

The post Welcoming Web Content to Native Apps appeared first on Uno Platform.


Gemini-Powered Siri Update Coming February – Apple’s Big AI Upgrade Revealed

Apple's Revolutionary Gemini-Powered Siri Upgrade: February Launch Details and Features Revealed

In the ever-evolving world of artificial intelligence, Apple is poised to take a significant leap forward with its Siri assistant. Recent reports indicate that Apple will unveil a groundbreaking Siri update powered by Google's Gemini AI in February. This move comes as part of Apple's broader strategy to enhance its AI capabilities and compete in the rapidly advancing AI market.

 

The anticipation around this Siri upgrade stems from promises made at Apple's WWDC 2024 event, where the company teased more intelligent and personalized features for its virtual assistant. Now, with the integration of Gemini, Siri is expected to deliver on those commitments, offering users a more seamless and powerful experience.

 

Siri 2.0 Powered by Gemini – February Reveal

Understanding the Gemini Integration

Gemini, developed by Google, is a sophisticated AI model known for its multimodal capabilities, handling text, images, and more. By incorporating Gemini into Siri, Apple aims to boost the assistant's intelligence, making it more context-aware and responsive to complex queries.

 

This integration is similar to Apple's existing partnership with OpenAI for ChatGPT features, but Gemini brings unique strengths in reasoning and creativity. Users can look forward to a Siri that not only understands natural language better but also integrates deeply with iOS ecosystems for enhanced functionality.

Release Timeline and Beta Testing

According to reliable sources like Bloomberg's Mark Gurman, Apple plans to demonstrate the new Siri in the second half of February, showcasing the capabilities of the Gemini-powered assistant.

 

The update will be rolled out as part of iOS 18.4. Beta testing is slated to begin in February, with a public release expected in March or April. This phased approach ensures that the Siri update is polished and ready for widespread adoption.

Key Features of the Upgraded Siri

The Gemini-powered Siri promises a host of innovative features designed to make daily interactions smarter and more efficient. Here are some highlights:

  • Enhanced Personalization: Siri will use Gemini to provide tailored responses based on user preferences and history.
  • Advanced Reasoning: Handle complex tasks like multi-step queries and logical deductions with ease.
  • Multimodal Input: Process voice, text, and images for a more versatile assistant experience.
  • Improved Integration: Seamless connectivity with Apple apps and services for better productivity.

 

These features position Siri as a frontrunner in the AI assistant space, potentially surpassing competitors in accuracy and user satisfaction.

Apple-Google Partnership Insights

The collaboration between Apple and Google for Gemini integration marks a strategic alliance in the AI domain. Despite being rivals in many areas, this partnership leverages Google's expertise in AI to enhance Apple's ecosystem.

 

Reports suggest a multi-year deal, ensuring ongoing updates and improvements to Siri. This move also reflects Apple's commitment to privacy, as Gemini processing will align with Apple's stringent data protection standards.

Impact on Users and the AI Landscape

For iPhone users, the Gemini-powered Siri update means a more intuitive device interaction, from setting reminders to managing smart home devices. It could redefine how we use our smartphones daily.

 

In the broader AI landscape, this development intensifies competition, pushing other companies to innovate. Apple's entry with Gemini could accelerate AI adoption across consumer tech.

Frequently Asked Questions

When will Apple unveil the Gemini-powered Siri?

Apple is expected to reveal the new Siri update in the second half of February.

 

What is Gemini AI?

Gemini is Google's advanced AI model that enhances Siri's capabilities with better reasoning and multimodal processing.

 

Which iOS version will include this Siri update?

The Gemini-powered Siri will be part of iOS 18.4, with beta in February and public release in March or April.

 

How does this differ from current Siri?

The upgrade offers more personalized, intelligent responses and deeper integration, powered by Gemini AI.

 

Is the Apple-Google partnership for Siri long-term?

Yes, it's a multi-year deal to continually improve Siri with Gemini technology.

Article Summary: Key Takeaways

  • Apple to unveil Gemini-powered Siri in February, fulfilling WWDC 2024 promises.
  • Integration with Google's Gemini AI for advanced features like personalization and multimodal capabilities.
  • Part of iOS 18.4: Beta in February, public launch in March/April.
  • Strategic Apple-Google partnership enhances AI competition and user experience.
  • Expected to revolutionize daily interactions with smarter, more efficient Siri.


Reading Notes #682


This week’s Reading Notes bring together programming tips, AI experiments, and cloud updates. Learn to build Python CLI tools. Untangle GitHub issue workflows. Try running AI models locally. Catch up on Azure news. And explore ideas around privacy and cloud architecture. Short reads. Useful takeaways.


Programming

AI

Miscellaneous

~frank


Maia 200: The AI accelerator built for inference - The Official Microsoft Blog

1 Share

Today, we’re proud to introduce Maia 200, a breakthrough inference accelerator engineered to dramatically improve the economics of AI token generation. Maia 200 is an AI inference powerhouse: an accelerator built on TSMC’s 3nm process with native FP8/FP4 tensor cores, a redesigned memory system with 216GB HBM3e at 7 TB/s and 272MB of on-chip SRAM, plus data movement engines that keep massive models fed, fast and highly utilized. This makes Maia 200 the most performant first-party silicon from any hyperscaler, with three times the FP4 performance of the third-generation Amazon Trainium, and FP8 performance above Google’s seventh-generation TPU. Maia 200 is also the most efficient inference system Microsoft has ever deployed, with 30% better performance per dollar than the latest generation hardware in our fleet today.

Maia 200 is part of our heterogeneous AI infrastructure and will serve multiple models, including the latest GPT-5.2 models from OpenAI, bringing a performance-per-dollar advantage to Microsoft Foundry and Microsoft 365 Copilot. The Microsoft Superintelligence team will use Maia 200 for synthetic data generation and reinforcement learning to improve next-generation in-house models. For synthetic data pipeline use cases, Maia 200’s unique design helps accelerate the rate at which high-quality, domain-specific data can be generated and filtered, feeding downstream training with fresher, more targeted signals.

Maia 200 is deployed in our US Central datacenter region near Des Moines, Iowa, with the US West 3 datacenter region near Phoenix, Arizona, coming next and future regions to follow. Maia 200 integrates seamlessly with Azure, and we are previewing the Maia SDK with a complete set of tools to build and optimize models for Maia 200. It includes a full set of capabilities, including PyTorch integration, a Triton compiler and optimized kernel library, and access to Maia’s low-level programming language. This gives developers fine-grained control when needed while enabling easy model porting across heterogeneous hardware accelerators.


Engineered for AI inference

Fabricated on TSMC’s cutting-edge 3-nanometer process, each Maia 200 chip contains over 140 billion transistors and is tailored for large-scale AI workloads while also delivering efficient performance per dollar. On both fronts, Maia 200 is built to excel. It is designed for the latest models using low-precision compute, with each Maia 200 chip delivering over 10 petaFLOPS in 4-bit precision (FP4) and over 5 petaFLOPS of 8-bit (FP8) performance, all within a 750W SoC TDP envelope. In practical terms, Maia 200 can effortlessly run today’s largest models, with plenty of headroom for even bigger models in the future.

A close-up of the Maia 200 AI accelerator chip.

Crucially, FLOPS aren’t the only ingredient for faster AI. Feeding data is equally important. Maia 200 attacks this bottleneck with a redesigned memory subsystem, centered on narrow-precision datatypes, a specialized DMA engine, on-die SRAM and a dedicated NoC fabric for high-bandwidth data movement, increasing token throughput.

A table with the title “Industry-leading capability” shows peak specifications for Azure Maia 200, AWS Trainium 3 and Google TPU v7.

Optimized AI systems

At the systems level, Maia 200 introduces a novel, two-tier scale-up network design built on standard Ethernet. A custom transport layer and a tightly integrated NIC unlock performance, strong reliability and significant cost advantages without relying on proprietary fabrics.

Each accelerator exposes:

  • 2.8 TB/s of bidirectional, dedicated scale-up bandwidth
  • Predictable, high-performance collective operations across clusters of up to 6,144 accelerators

This architecture delivers scalable performance for dense inference clusters while reducing power usage and overall TCO across Azure’s global fleet.

Within each tray, four Maia accelerators are fully connected with direct, non‑switched links, keeping high‑bandwidth communication local for optimal inference efficiency. The same communication protocols are used for intra-rack and inter-rack networking using the Maia AI transport protocol, enabling seamless scaling across nodes, racks and clusters of accelerators with minimal network hops. This unified fabric simplifies programming, improves workload flexibility and reduces stranded capacity while maintaining consistent performance and cost efficiency at cloud scale.

A top-down view of the Maia 200 server blade.

A cloud-native development approach

A core principle of Microsoft’s silicon development programs is to validate as much of the end-to-end system as possible ahead of final silicon availability.

A sophisticated pre-silicon environment guided the Maia 200 architecture from its earliest stages, modeling the computation and communication patterns of LLMs with high fidelity. This early co-development environment enabled us to optimize silicon, networking and system software as a unified whole, long before first silicon.

We also designed Maia 200 for fast, seamless availability in the datacenter from the beginning, building out early validation of some of the most complex system elements, including the backend network and our second-generation, closed-loop liquid cooling Heat Exchanger Unit. Native integration with the Azure control plane delivers security, telemetry, diagnostics and management capabilities at both the chip and rack levels, maximizing reliability and uptime for production-critical AI workloads.

As a result of these investments, AI models were running on Maia 200 silicon within days of first packaged part arrival. Time from first silicon to first datacenter rack deployment was reduced to less than half that of comparable AI infrastructure programs. And this end-to-end approach, from chip to software to datacenter, translates directly into higher utilization, faster time to production and sustained improvements in performance per dollar and per watt at cloud scale.

A view of the Maia 200 rack and the HXU cooling unit.

Sign up for the Maia SDK preview

The era of large-scale AI is just beginning, and infrastructure will define what’s possible. Our Maia AI accelerator program is designed to be multi-generational. As we deploy Maia 200 across our global infrastructure, we are already designing for future generations and expect each generation will continually set new benchmarks for what’s possible and deliver ever better performance and efficiency for the most important AI workloads.

Today, we’re inviting developers, AI startups and academics to begin exploring early model and workload optimization with the new Maia 200 software development kit (SDK). The SDK includes a Triton Compiler, support for PyTorch, low-level programming in NPL and a Maia simulator and cost calculator to optimize for efficiencies earlier in the code lifecycle. Sign up for the preview here.

Get more photos, video and resources on our Maia 200 site and read more details.

Scott Guthrie is responsible for hyperscale cloud computing solutions and services including Azure, Microsoft’s cloud computing platform, generative AI solutions, data platforms and information and cybersecurity. These platforms and services help organizations worldwide solve urgent challenges and drive long-term transformation.

Tags: AI, Azure, datacenters


Who Will Adapt Best to AI Disruption?

From: AIDailyBrief
Duration: 9:00
Views: 365

Brought to you by:
KPMG – Go to ⁠www.kpmg.us/ai⁠ to learn more about how KPMG can help you drive value with our AI solutions.
Vanta - Simplify compliance - ⁠⁠⁠⁠⁠⁠⁠https://vanta.com/nlw

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at
Join our Discord: https://bit.ly/aibreakdown


973: The Web’s Next Form: MCP UI (with Kent C. Dodds)


Scott and Wes sit down with Kent C. Dodds to break down MCP, context engineering, and what it really takes to build effective AI-powered tools. They dig into practical examples, UI patterns, performance tradeoffs, and whether the future of the web lives in chat or the browser.

Show Notes

  • 00:00 Welcome to Syntax!
  • 00:44 Introduction to Kent C. Dodds
  • 02:44 What is MCP?
  • 03:28 Context Engineering in AI
  • 04:49 Practical Examples of MCP
  • 06:33 Challenges with Context Bloat
  • 08:08 Brought to you by Sentry.io
  • 09:37 Why not give AI API access directly?
  • 12:28 How is an MCP different from Skills
  • 14:58 MCP optimizations and efficiency levers
  • 16:24 MCP UI and Its Importance
  • 19:18 Where are we at today with MCP
  • 24:06 What is the development flow for building MCP servers?
  • 27:17 Building out an MCP UI
  • 29:29 Returning HTML, when to render
  • 36:17 Calling tools from your UI
  • 37:25 What is Goose?
  • 38:42 Are browsers cooked? Is everything via chat?
  • 43:25 Remix3
  • 47:21 Sick Picks & Shameless Plugs

Sick Picks

Shameless Plugs

Hit us up on Socials!

Syntax: X Instagram Tiktok LinkedIn Threads

Wes: X Instagram Tiktok LinkedIn Threads

Scott: X Instagram Tiktok LinkedIn Threads

Randy: X Instagram YouTube Threads





Download audio: https://traffic.megaphone.fm/FSI8670091234.mp3