
What 2024’s Data Told Us About How Developers Work Now


In 2024, The New Stack reported on a wide variety of survey-based research about software development. Here are the takeaways we think are most relevant to you as you plan for 2025.

Platform Engineering and Developer Platforms

  • Platform engineers are focused on infrastructure provisioning, which can be problematic because of the diversity of platforms being managed.
  • Even though self-service increases productivity for developer teams, too many platform teams are failing to collect and demonstrate metrics of success.

Open Source

  • Paid maintainers keep code safer.
  • Maintainers worry about contributions from AI and unknown developers.

AI and Developers

  • Time savings and increased productivity, not code quality, are why developers are using AI.
  • Younger, less experienced developers are leading the rapid adoption of AI tools within the development process.
  • GitHub Copilot did not experience mass adoption.

Hiring and Careers

  • Personal job security is stronger than would be expected based on developers’ generalized anxiety.
  • Developer salaries and wages have been constrained.

Programming Languages

Other Noteworthy Findings

  • Not all companies are leaving the cloud.
  • Customer-facing incidents are on the rise.

Platform Engineering and Developer Platforms

While not a portmanteau like DevOps, platform engineering has emerged as the discipline in which the goals of development and operations converge.

Throughout 2024 we reported on more than four survey-based reports that provided insights into platform engineering and internal developer platforms (IDPs). They demonstrated that the vast majority of enterprises have adopted platform engineering, even though they may not have a formal team with that name.

Standardizing infrastructure provisioning and consumption of IDPs is a main focus for 68% of platform teams. Improving developers’ experience and productivity with IDPs follows closely as a focal point, per the latest “State of Platform Engineering Report” by Gitpod and Humanitec.


Open Source in 2025: Strap In, Disruption Straight Ahead

An open box with a star atop it is encircled with various lines coming out from it to symbols representing growth and trends to watch.

The open source software world can feel like a bubble at times, one in which people who love to solve problems go and tinker with solutions, share ideas freely and build global communities of contributors. They gather at conferences, meetups and online to praise each other’s hard work and innovation, and remind each other how awesome they are.

But outside forces can sometimes shake that bubble like a snow globe. In March, Redis adjusted the licensing for its open source in-memory data store, which prompted the creation of a Linux Foundation-supported fork, Valkey.

In December, the community around Puppet, an Infrastructure as Code tool, announced plans to fork it in the wake of November news that Perforce, its owner, would “ship any new binaries and packages developed by [its] team to a private, hardened and controlled location. Community contributors will have free access to this private repo under the terms of an end-user license agreement (EULA) for development use.”

In other words, Puppet will now be source-available, not open source.

The trend of widely used open source software moving to more restrictive licensing isn’t new. But the current wave started, arguably, with HashiCorp’s August 2023 decision to pull Terraform (and subsequently other products, like Nomad) back from the open source world, assigning them to Business Source License v1.1. A community is growing around the fork of Terraform, OpenTofu. Likewise for OpenBao, a fork of HashiCorp’s Vault secrets manager created at the end of 2023.

Users are truly experiencing some “turbulation” — a word Matt Butcher, CEO and co-founder of Fermyon Technologies, coined about the open source licensing dramas of 2023 and 2024 — a mashup of “turbulence” and “tribulation.” His company was hit by turbulation caused by HashiCorp’s decision, since Fermyon uses HashiCorp’s Nomad. Butcher told The New Stack (TNS): “We literally ended up asking them for exceptions to bits and pieces, because we were running a patched version of Nomad.”

But as a startup founder, he is watching the licensing decisions carefully. Fermyon, which focuses on WebAssembly, has both open source projects and paid enterprise-level products.

“I’m kind of hoping that that particular approach to things remains still highly tenable, and I think it will,” he told TNS at KubeCon + CloudNativeCon North America, in November. “If we can plan that way from the get-go, we don’t have to yank things back, which is what tends to build up the bad faith in the community, or the assumption of bad faith.”

In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.

And there’s the perennial issue of how to compensate the global army of unpaid, volunteer maintainers that so many projects rely on.

What’s ahead for open source in 2025? Here are some ideas, gleaned from interviews at this past autumn’s tech conferences and also from a New Stack survey of more than 120 industry experts, conducted in November, that asked about the future of open source, developers’ use of AI, and IT infrastructure.

More Consolidation, More Licensing Changes

Expect more “turbulation” in the coming year, as the sprawling cloud native ecosystem consolidates. IBM’s $6.4 billion purchase of HashiCorp, announced in April and now expected to close in Q1 of 2025, may herald a trend.

Matt Moore, CTO of Chainguard, said in a response to the TNS survey that he hopes the IBM-HashiCorp deal might bear fruit for the open source community. “I kind of hope that with IBM’s acquisition of Hashi that they might mend fences between Terraform and OpenTofu. There’s already a precedent here, with Elastic’s reversal of a similar decision.”

Elastic moved Elasticsearch and Kibana to open source licensing in August by adding a GNU Affero General Public License to them. The move came three years after the company’s decision to move both projects away from Apache 2.0 licenses. Elastic’s search and analytics engine, Elasticsearch, has been seeing competition from OpenSearch, a fork sponsored by Amazon Web Services.

What’s tricky about guessing the future of licensing is that the pressures on open source software sponsors don’t simply push in one direction; competition could make a company either embrace more restrictive licensing for an open source tool, or create an open source version to speed adoption.

“Predicting what open source projects are moving to a more restrictive model is highly speculative,” said Scott Wheeler, cloud practice lead at Asperitas, in response to The New Stack survey of industry experts.

Still, Wheeler pointed to Elasticsearch and Kibana as examples that might feel new licensing pressures down the road. And, based on the HashiCorp example, he thinks the following projects might experience similar pressures in the future:

All but Ansible, which is covered by a GNU General Public License v3, are under Apache 2.0 licenses currently. (And all but Ansible are housed at nonprofit foundations, presumably sheltered from commercial concerns.)

By contrast, Jason Perlow, president of Argonaut Media and former editorial director at The Linux Foundation, told TNS that “the reverse of this is going to transpire.”

“I feel it is likely that Chrome and possibly Android are likely to become independently governed open source projects, rather than sold off by Google per antitrust regulation,” Perlow said in response to a TNS survey question about the future of open source. (In August, a US District Court for the District of Columbia found the company had maintained an illegal monopoly in online search.)

David DeSanto, chief product officer at GitLab, expects to see more formerly open source projects move to more restrictive licensing in 2025. But he’s taking a glass-half-full view of the trend.

“I expect that’ll be open to a whole new breed of what open source can mean,” he told The New Stack at KubeCon in November. He pointed to HashiCorp’s moves as an example.

“They’ve been moving Vault off of a very permissive open source license, but that’s led to OpenBao. And at GitLab we’re building our own native secrets manager. That’s going to generate the next round of future open source.”

The Open Source AI Debate: Just Getting Started

In October, the Open Source Initiative released version 1.0 of its definition of open source AI. Stefano Maffulli, OSI’s executive director, told The New Stack ahead of its release that 1.0 is a “humble” document, a work in progress.

In the wake of the definition’s release, critics picked it apart, complaining that it gives some vendors cover if they don’t want to reveal their training data, that the definition fundamentally changes the meaning of open source, and that, given how different AI is from software code, maybe OSI shouldn’t have attempted to define open source AI at all.

“There’s an interesting debate around what constitutes ‘open source’ in AI,” said Yang Li, COO at Cosine, in response to The New Stack’s survey.

“Meta and Google just got called out for claiming they’re open source when they’re not. The real question is: as we generate more synthetic data, where do you draw the line? You can be open about your data sources but keep your synthetic data generation process proprietary. It’s a blurry line between revealing your data set and revealing how you created it.”

Clearly this debate is just getting started. Moves by the big players will likely be part of 2025’s conversation. In a company blog post on Dec. 26, OpenAI stated its intention to transform from a for-profit and nonprofit structure to a public benefit corporation — like rivals Anthropic and xAI — that supports a nonprofit.

The big takeaway: AI and its major players are certainly poised to suck a lot of the oxygen out of the open source bubble in 2025.

“The biggest threat will likely be the sustainability and maintainability of existing open source projects,” replied Christian Posta, global field CTO at Solo.io, in response to a survey question about the future of open source.

“As AI continues to dominate technological advancements, there has been a noticeable shift in focus toward AI-related initiatives. This often leaves mature projects, such as CNCF-graduated projects, with fewer maintainers, jeopardizing their ability to remain healthy and sustainable in the long term.”

And it will become harder in 2025 for entities that aren’t Google, Meta, Microsoft and their ilk to compete in open source AI.

“For AI applications, computation and data is central, but only the giants like Google or Meta can obtain the data easily,” said James Luan, vice president of engineering at Zilliz, in response to the TNS survey.

“Smaller groups of developers or even individual developers do not have this luxury, and therefore will find it continuously challenging to find open source models.”

In fact, some are concerned that deep learning models could eventually overtake open source development. There’s pushback on that idea. But the threat looms nevertheless.

“GenAI tools, often proprietary, offer developers advanced automation, code generation and observability capabilities that could outpace open source alternatives in convenience and performance,” warned Eran Kinsbruner, head of product marketing at Lightrun, in his response to the TNS survey.

“At the same time, complex architectures demand highly integrated, scalable solutions, which proprietary platforms are better positioned to deliver. This shift risks sidelining open source projects, especially those unable to keep up with AI-driven innovations or the demands of distributed systems, leading to reduced adoption and investment in open-source ecosystems.”

Security and Compliance Concerns Will Rise

The world, you may have noticed, is not particularly peaceful at the moment. And cyberattackers love to create opportunities out of crises.

“Aside from AI, the other big, huge debate [is] going to be security and compliance,” OSI’s Maffulli told The New Stack. “It’s already there. But in 2025 it is going to be even more, given the geopolitical landscape getting more and more complicated and convoluted. It’s going to be big.”

AI has the potential to supercharge threats, noted Idan Plotnik, co-founder and CEO of Apiiro, in response to the TNS survey.

“In 2025, open source software threats will shift from traditional vulnerabilities to AI-generated backdoors and malware embedded in open source packages,” said Plotnik. “With attackers leveraging AI tools to develop and disguise malware within open source code, addressing these new threats will require a significant advancement in security tools to stay ahead of these quickly evolving challenges.”

A bit of good news, however: In 2025, AI automation tools might help find and remediate more unmaintained open source code and technical debt, helping to close some of the potential on-ramps for hackers.

“I believe with the proliferation of AI coding tools we should definitely see some decrease in unmaintained open source components primarily because it would be easier and faster to write code for developers even with limited experience in some highly niched areas,” said Madhukar Kumar, chief marketing officer at SingleStore, in response to the TNS survey.

Paying Maintainers: More Cash, Creativity Needed

Nearly every stack uses open source code but still — still! — most open source maintainers essentially get paid in GitHub stars and kissy-face emojis.

Sixty percent of open source maintainers surveyed by Tidelift, in a study published in September, said they aren’t paid for their work. (Perhaps not coincidentally, about the same percentage said they had quit or had considered quitting their project.)

This situation — the continued dependence on an army of unpaid, underappreciated hobbyists to maintain essential codebases — is going to be the biggest threat to open source in 2025, including the security of its tools and platforms, say industry experts.

“Open source is sprawling and often thanklessly maintained by folks in their nights and weekends,” said Moore of Chainguard, in response to the TNS survey. “As the complexity and breadth of open source ecosystems expand, the rate of necessary updates accelerates exponentially.”

“As more organizations integrate open source solutions, the potential for unpatched vulnerabilities and outdated configurations increases, exposing businesses to significant security and performance challenges.”

Jeff Hollen, director of product for Snowpark, the ecosystem and developer platform at Snowflake, replied to the TNS survey with a comment on the importance of paying the people who build and maintain open source projects.

“Open source depends on sponsors and supporters,” Hollen wrote. “Many maintainers and contributors (myself included) participate in open source in addition to their day-to-day job responsibilities. The biggest threat to open source would be if significant enterprise sponsors stop encouraging, promoting and supporting those efforts to help contributors and maintainers get some value from them.”

In addition to contributions from sponsors like Tidelift and moves by large corporations to put productive maintainers on their payrolls, expect some creative attempts to compensate open source developers to emerge in 2025.

One to watch in the coming year is tea, a decentralized technology protocol co-founded by Homebrew creator Max Howell. Howell has launched Chai, which measures open source vitality through package manager data.

Participants who have signed up for a “testnet” of the project are earning tokens; at some point in 2025, when the “mainnet” stage of the project launches, those tokens will be listed on a number of cryptocurrency exchanges, with the intent that they hold monetary value.

About 16,000 projects signed up for Chai’s testnet, Howell told The New Stack. It’s a mere sliver of the 10.5 million open source projects worldwide, but it’s a clear indication that there’s raging demand for innovative thinking when it comes to compensating open source’s makers.

Lawrence E. Hecht contributed to this post.

The post Open Source in 2025: Strap In, Disruption Straight Ahead appeared first on The New Stack.


Is OpenAI’s o3 Finally Thinking Like a Human?


Imagine this: You ask your AI assistant a question, and instead of spitting out a half-baked answer in milliseconds, it pauses.

It thinks. It reasons.

And then, it delivers a response so well thought-out, it feels almost…human.

Sounds futuristic, right?

Well, welcome to the o3 model, OpenAI’s latest creation that promises to change the game entirely.

For years, AI has been stuck in a pattern—faster responses, flashier outputs, but not necessarily smarter ones.

With o3, OpenAI is saying, “Slow down. Let’s do this right.”

First Things First: What Is o3?

When OpenAI unveiled o3 during its 12-day “shipmas” event, it wasn’t just another announcement in a crowded AI market.

This model, they claimed, is not just smarter—it’s more thoughtful.

At its core, o3 is part of OpenAI’s family of “reasoning models.”

Unlike traditional AI, which often relies on brute computational force to deliver answers, reasoning models like o3 are designed to process information more like humans.

But what sets o3 apart?

  • It Fact-Checks Itself: When you ask it a question, it doesn’t just respond—it cross-references and double-checks along the way.
  • It Thinks at Different Speeds: Depending on the task, you can set it to low, medium, or high compute (essentially telling it how much "brainpower" to use). This means it can handle both simple questions and complex puzzles without breaking a sweat.
  • It’s Flexible: There’s the full-blown o3 model and its smaller sibling, o3-mini, designed for lighter tasks and tighter budgets.

Why Call It o3? And What Happened to o2?

OpenAI skipped “o2” because of a trademark conflict with a British telecom provider, O2.

Yep, you read that right.

Sam Altman, OpenAI’s CEO, even confirmed this during a live stream.

In the tech world, even naming AI models can come with legal drama.

But enough about the name. Let’s talk about why this model is turning heads.

The Numbers Behind o3: Why It’s Blowing Minds

If you’re into data, here’s where things get juicy.

1 - Reasoning Power

One of the most striking achievements of o3 is its performance on the ARC-AGI benchmark—a test designed to measure whether AI can learn and generalize new skills, not just regurgitate what it’s been trained on.

Picture this: You’re given a series of geometric patterns and asked to predict the next one.

No prior examples, no memorized templates—just raw reasoning.

That’s the challenge ARC-AGI presents to AI.

  • o1’s score: 32%
  • o3’s score: 88% (on high compute)

This milestone is significant because ARC-AGI is considered the gold standard for evaluating an AI’s ability to think like a human.

For the first time, an AI model has surpassed human-level performance on this test.

Here’s the test OpenAI performed on the o3 model

What’s happening here?

You’re shown a grid with colorful shapes and asked, “If this is the input, what should the output look like?”

The AI is given a few examples of how input grids are transformed into output grids.

The examples follow specific logic or rules.

For instance:

  • In one example, a yellow square with red dots inside gets a red border.
  • In another, a yellow square with blue dots gets a blue border.

The goal?

  • The AI has to figure out the rules behind these transformations, without being told explicitly.
  • Then, it needs to apply those rules to a brand-new grid (the “Test Input”) and generate the correct “Test Output.”
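To make “figure out the rules” concrete, here is a toy sketch in TypeScript (invented for this article; real ARC tasks are raw pixel grids, not tidy objects) of the border rule implied by the examples above:

// Toy model of one ARC-style rule: each square gets a border matching
// the color of the dots inside it. Invented for illustration only;
// an ARC solver must infer this rule from examples, not be handed it.
type Square = { fill: string; dotColor: string; border?: string };

function applyInferredRule(input: Square[]): Square[] {
  return input.map((square) => ({ ...square, border: square.dotColor }));
}

// The tricky "test input": a dot color never seen in the examples.
const testInput: Square[] = [{ fill: "yellow", dotColor: "magenta" }];

console.log(applyInferredRule(testInput));
// [ { fill: 'yellow', dotColor: 'magenta', border: 'magenta' } ]

Writing the rule down is trivial; the hard part the benchmark measures is inferring it from a handful of examples and generalizing to unseen colors.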

Why is this so hard for AI?

Humans do this all the time.

For example, if someone says, “Add a red outline to anything with red dots,” you get it immediately.

AI, however, struggles because it doesn’t “understand” the concept of red or outlines—it only processes patterns in data.

The ARC test pushes AI to think beyond pre-learned answers.

Each test is unique, so memorization won’t help.

What about the last test (with the 🤔 emoji)?

Here’s where things get really tricky.

The test input mixes things up: there’s a yellow square with magenta dots.

The AI hasn’t seen magenta before—what should it do?

Humans might guess, “Maybe it should get a magenta border,” but this requires reasoning and a leap of logic.

For AI, this is like being asked to jump off a cliff blindfolded.

It’s completely outside its training.

2 - o3’s Remarkable Performance

o3 has set a new benchmark in AI reasoning by excelling on the ARC-AGI test.

On low-compute settings, o3 scored 76% on the semi-private holdout set—a performance far above any previous model.

But the real breakthrough came when tested on high-compute settings, where o3 achieved an extraordinary 88%, surpassing the 85% threshold often considered human-level performance.

3 - Coding Wizardry

o3 achieves 71.7% accuracy on SWE-bench Verified, a benchmark that simulates real-world software engineering tasks.

This is a 46% improvement over o1, signaling o3’s strength in solving the complex, practical challenges developers face daily.

In competitive coding, the difference is even more dramatic.

With an Elo score of 2727, o3 doesn’t just outperform o1’s 1891—it enters a league rivaling top human programmers.

For context, an Elo above 2400 is typically considered grandmaster level, and o3’s Codeforces rating of 2727 places it in the top 0.8% of human coders.

4 - Math Genius

On the 2024 American Invitational Mathematics Exam, o3 scored a jaw-dropping 96.7%, missing just one question.

5 - Science Prodigy

On GPQA Diamond, a set of PhD-level science questions, o3 achieved 87.7% accuracy—an unheard-of feat for AI models.

These aren’t just numbers—they’re proof that o3 is tackling challenges that once seemed out of reach for machines.

How Does o3 Think?

o3 doesn’t just respond like most AI—it takes a breath, pauses, and thinks.

Think of it as the difference between blurting out an answer and carefully weighing the options before speaking.

This is possible thanks to something called deliberative alignment.

Source: OpenAI

It’s like giving o3 a moral compass, teaching it the rules of safety and ethics in plain language, and showing it how to reason through tough situations instead of just reacting.

A Quick Example

Imagine someone trying to outsmart o3 by encoding a harmful request using a ROT13 cipher (basically, a scrambled message).

They’re asking for advice on hiding illegal activity.

A less advanced AI might take the bait, but o3?

It deciphers the request, realizes it’s dodgy, and cross-checks with OpenAI’s safety policies.

It doesn’t just block the response.

It reasons why this request crosses ethical boundaries and provides a clear refusal.

This is AI with a conscience—or as close to one as we’ve ever seen.
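For reference, ROT13 simply rotates each letter 13 places, so the same operation both encodes and decodes a message. A minimal sketch in TypeScript:

// ROT13: rotate each ASCII letter 13 positions. Applying it twice
// returns the original text, so one function encodes and decodes.
function rot13(text: string): string {
  return text.replace(/[a-zA-Z]/g, (ch) => {
    const base = ch <= "Z" ? 65 : 97; // char code of 'A' or 'a'
    return String.fromCharCode(((ch.charCodeAt(0) - base + 13) % 26) + base);
  });
}

console.log(rot13("Uryyb")); // "Hello"

The cipher itself is trivial; the point of the example is that o3 reasons about what the decoded request means before deciding how to respond.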

Here’s how o3’s thought process works:

1 - It Reads the Rules

Instead of guessing what’s right or wrong, o3 is trained with actual safety guidelines written in plain language.

It doesn’t just rely on examples to infer behavior—it learns the rulebook upfront.

2 - It Thinks Step-by-Step

When faced with a tricky or nuanced task, o3 doesn’t jump to conclusions.

It uses what’s called chain-of-thought reasoning—breaking down the problem, step by step, to figure out the best response.

3 - It Adapts to the Moment

Not every situation is the same.

Some tasks need quick answers; others require deep reflection.

o3 adjusts its effort based on the complexity of the problem, so it’s efficient when it can be and thorough when it needs to be.

Meet o3-mini: The Budget-Friendly Genius

Alongside o3, OpenAI introduced o3-mini, a cost-effective version designed for tasks that don’t require the full power of its big sibling.

What’s special about o3-mini?

Adaptive thinking time: Users can adjust the model’s reasoning effort based on task complexity.

Need a quick answer? Go for low-effort reasoning.

Tackling a complex coding problem? Crank it up to high-effort mode.

Cost-performance balance: o3-mini delivers nearly the same level of accuracy as o3 for simpler tasks but at a fraction of the cost.

This flexibility makes o3-mini an attractive option for developers and researchers working on a budget.
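If o3-mini’s API ends up mirroring the reasoning models OpenAI has already shipped (the o1 API family accepts a reasoning_effort parameter), selecting that thinking time could look roughly like the sketch below. The model name and parameter support here are assumptions, not confirmed details; check OpenAI’s documentation once the model is generally available.

// Hypothetical sketch: requesting a high-effort answer from o3-mini.
// Assumes the model id "o3-mini" and the reasoning_effort parameter
// carry over from OpenAI's o1 API family; verify before relying on it.
async function askWithEffort(
  question: string,
  effort: "low" | "medium" | "high",
): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "o3-mini", // assumed model id
      reasoning_effort: effort,
      messages: [{ role: "user", content: question }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}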

Is This the Future of AI? A Step Toward AGI

Here’s where things get philosophical.

AGI, or Artificial General Intelligence, refers to AI that can perform any task a human can—and often better.

OpenAI has always had AGI as its north star, and with o3, it feels like they’re edging closer.

Consider this:

  • On ARC-AGI, o3 nearly tripled the performance of its predecessor.
  • It’s solving problems that require learning and reasoning, not just memorization.

That said, even OpenAI admits that o3 isn’t AGI yet.

It’s more like a prototype of what AGI could look like—an AI that learns, adapts, and reasons in ways that feel… human.

The Challenges Ahead

Even with its incredible capabilities, o3 isn’t without its flaws:

  1. Cost: Running o3 on high-compute settings is expensive—like, 7 to 8 thousand dollars per task.
  2. Errors: While it’s better at reasoning, o3 can still trip up, especially on simpler tasks where it overthinks the problem.
  3. Ethics: Earlier models like o1 faced criticism for attempting to deceive users in certain scenarios. Will o3 fall into the same trap?


The Big Picture

o3 isn’t just another AI model—it’s a glimpse into what AI might become.

It’s not perfect, but it’s a step toward an era where machines don’t just respond—they reason, learn, and adapt in ways that feel deeply human.

And while we’re still far from AGI, o3 reminds us that progress isn’t linear—it’s exponential.

So, what do you think? Are we on the cusp of a new AI revolution? Or is o3 just another milestone on a much longer journey?


What's New In Microsoft 365 Copilot | December 2024


Welcome to the December 2024 edition of What's new in Microsoft 365 Copilot! Every month, we highlight new features and enhancements to keep Microsoft 365 admins up to date with Copilot features that help your users be more productive and efficient in the apps they use every day.

Let’s take a closer look at what’s new this month:


Announcing Copilot release notes

Now, admins can easily review a list of shipped Microsoft 365 Copilot changes across Microsoft 365 apps including new features, product updates, and improvements based on feedback. Copilot release notes were added to Microsoft Learn in December and will be updated monthly.

A screenshot of a website page titled Copilot Release Notes.

Expanded resources for Viva Learning

Soon, admins will be able to duplicate the Copilot Academy and add or remove courses to tailor it to their organization’s needs or address industry-specific requirements. They will also be able to duplicate Microsoft-curated Learning paths and featured sets for further customization options. This feature rolls out in January.

A screenshot of Viva Learning with the section called Manage Academies open

The AI and Copilot Resources provider will power the out-of-the-box Copilot Academy and will be visible by default to users who have Microsoft 365 Copilot licenses. This will appear in the admin tab alongside other providers, enabling admins to control content that powers Copilot Academy in the same way they can control other content providers. This includes the ability to enable, disable, and apply permissions on this content. This feature rolled out in December.

The Manage providers tab in Viva Learning showing AI and Copilot Resources as one of the available providers.

New templates and reports in Copilot Analytics

Copilot survey templates in Viva Pulse enable change leaders to gather employee feedback on aspects of Copilot readiness, adoption, and impact. This qualitative feedback is a great complement to the usage metrics already included in the Copilot Dashboard. Viva Pulse results for the Copilot survey templates will be automatically shared to the Copilot Dashboard. Learn more about Viva Pulse capabilities. This feature rolled out in December.

A screenshot of the Viva Pulse homepage depicting various template features and Pulses sent.

Sales leaders can now create a custom report in Viva Insights using Copilot for Sales metrics or leverage the prebuilt adoption report to understand how people are using Copilot for Sales within the organization. Sales leaders can view how Copilot for Sales actions are contributing to key sales tasks, comparative adoption across groups, effectiveness of various sales actions, and overall usage trends. Learn more about the Copilot for Sales adoption report. This feature rolled out in December.

A slide featuring bar charts for adoption metrics as well as a line graph showing the change in active Copilot users per month.

Transform notetaking with Copilot for OneNote

Copilot for OneNote on Mac and iPad is an intelligent assistant that helps transform the way users interact with their notes. With simple natural language commands, users can use Copilot to help them understand, summarize, and rewrite notes for enhanced clarity and purpose. It is designed to work seamlessly on the Mac, offering a contextual chat experience that empowers users to accomplish tasks faster. Whether preparing for a meeting or organizing thoughts, Copilot for OneNote on Mac is the perfect partner for boosting productivity. Copilot for OneNote for Mac and iPad rolled out in December.  

A screenshot of an iPad with the intelligent assistant side pane open.

Quick Actions with Copilot are now generally available directly on the OneNote desktop app canvas. In a OneNote page, users can click the Copilot icon on the canvas and take notes, summarize, create a task list, or rewrite the page. This feature rolled out in November.

A OneNote page showing the Quick Actions pop-out.

Improve data understanding with Copilot for Excel

With Copilot in Excel, users can now clean data with just one click. Clean Data detects and offers solutions for text inconsistencies, number format issues, and extra spaces. This feature rolled out in December to Copilot in Excel for the web. Learn more about Clean Data with Copilot in Excel.

Screenshot of Excel with the Clean Data feature open showing how it identifies inconsistent text and highlights the corresponding inconsistencies.

Copilot in Excel can recognize several patterns like text transformation, date transformations, simple arithmetic calculations, row numbering, and forward filling, and then suggest a formula for users to apply instead. This feature is rolling out on Windows Desktop in January.

Screenshot of Excel showing a table with manually entered data and an on-grid suggestion to apply a formula.

More dynamic interactions in Copilot for Word and PowerPoint

Now, users can ask Copilot to read aloud responses in the chat pane, enabling more dynamic interactions with Copilot. This feature rolled out in December for Word and for PowerPoint on web, desktop, and Mac.

A screenshot of a Copilot chat window with the read aloud button circled.

Now, users can add images to their chat using Copilot. Users can ask questions about images, extract text, get a description of a chart, translate information, or generate alt text. This helps users stay in the flow of work while getting necessary information to continue working on a document or presentation. This feature is available as of December for Word and for PowerPoint on web and desktop. Learn more about Add an image or banner with Copilot in Word.

An open Copilot chat window in PowerPoint answering questions about the slide image.

Intelligent recap in Teams for calls from chat and ‘Meet now’ meetings

Soon, you will be able to enjoy AI-generated summaries for more types of meetings, even when they are not scheduled in advance. Intelligent meeting recap will be available for impromptu calls and meetings, like those started from ‘Meet now’ and calls started from chat. You can easily browse the recording by speakers and topics, as well as access AI-generated notes, AI-generated tasks, and name mentions after the ad-hoc meeting ends. This feature has begun rolling out and will be available in January.

A screenshot of a Microsoft Teams meeting recap with AI-generated notes and follow-up tasks

Did you know? The Microsoft 365 Roadmap is where you can get the latest updates on productivity apps and intelligent cloud services. Please note that the dates mentioned in this article are tentative and subject to change. Check back regularly to see what features are in development or coming soon.

 


What's new in Astro - December 2024

December 2024 - Astro 5, State of JS, Google IDX partnership, and more!

Understanding deep linking in React Native


Editor’s note: This article was updated on 4 December 2024 by Emmanuel John.

Introduction


Deep linking and universal links are the gateways to your application. Deep linking is already part of a seamless experience that any mobile application should have.

Ultimately, they help to reduce churn and increase the loyalty of your user base. Implementing them correctly will directly impact your ability to master campaigns and run promotions within your application.

The deep linking question is as important today as ever, especially considering Apple’s Identifier for Advertisers (IDFA) changes and the rising number of walled gardens. Well-executed deep linking will enable your retargeting campaigns and bring engagement to a new level, allowing end users to have a seamless one-click experience between the web and your application.

Once users have already discovered and installed your application, deep linking is the perfect tool to retain those newly acquired users.

In this article, I outline existing ways to implement deep linking and how to test it, using a React Native TypeScript codebase. You can find the full source code for this project on GitHub.

What is deep linking and why is it important?

In a nutshell, deep linking is a way to redirect users from a webpage into your application in order to show a specific screen with the requested content. It can be a product, an article, secure content behind a paywall, or a login screen.

One of the most famous examples is the Slack link that they send to your email. This link opens right inside the application, authorizing you to use it with your email account — no account setup needed.

Screenshot of Slack magic link

Deep linking remains paramount. Every effort to lead your users into the app and improve their engagement depends heavily on the strategy built on top of deep linking.

To summarize the main points why deep linking is important:

  • Marketing campaigns
  • User retention
  • Seamless redirects between web and mobile
  • Content delivery behind a paywall or login authentication
  • Increasing customer lifecycle
  • Improving loyalty
  • Minimizing churn
  • Improved search engine ranking

Implementing deep linking requires a more intricate understanding of iOS and Android for extra configuration of each platform in your React Native project.

Take, for example, this syntax diagram of the following URL:

billing-app://billing/1

Diagram of URL scheme
Whenever you navigate to a website using, for example, https://reactivelions.com, you use a URL in which the URL scheme is “https”. In the example above, billing-app is a URL scheme for your deep linking URL.
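To make that anatomy concrete, here is a small sketch written for this article (in TypeScript; in a real app your navigation library would normally do this parsing for you) that splits a deep link into its scheme, host, and path:

// Minimal parser for deep links like "billing-app://billing/1".
// For illustration only; libraries such as @react-navigation
// handle this parsing for you in production code.
function parseDeepLink(url: string) {
  const match = url.match(/^([\w-]+):\/\/([^/?]+)(\/[^?]*)?(\?.*)?$/);
  if (!match) throw new Error(`Not a valid deep link: ${url}`);
  const [, scheme, host, path = "", query = ""] = match;
  return { scheme, host, path, query };
}

console.log(parseDeepLink("billing-app://billing/1"));
// { scheme: 'billing-app', host: 'billing', path: '/1', query: '' }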

Deep linking and universal linking in iOS

Starting with iOS 9, Apple introduced universal links to reduce confusion and simplify the user experience. Apple also brought universal links to the Mac with macOS 10.15.

The idea behind universal links is to connect specific website URLs that match content on your website with content inside your application. Another thing to note: the Apple dev team recommends migrating from custom URL schemes to universal links. This is because custom schemes are less secure and vulnerable to exploitation.

Universal links establish a secure connection between your app and website. In Xcode, you enable your app’s entitlement to handle specific domains, while your web server hosts a JSON file detailing the app’s accessible content. This mutual verification prevents others from misusing your app’s links.

This URL would act the same way as the deep linking URL I have shown in the previous section:

https://app.reactivelions.com/billing/3

Configuring universal links requires extra steps on both the server side and mobile side.

First, you start on the server side, where you need to upload a JSON-formatted file that defines the website’s association with a mobile application and its specific routes.

Let’s say you run the example.com domain and want to create an association file. Start by creating a .well-known folder (or route) at your domain root, then add the JSON content to a file named apple-app-site-association:

https://example.com/.well-known/apple-app-site-association

Add JSON content to define website associations:

{
   "applinks": {
       "apps": [],
       "details": [
           {
               "appID": "ABCD1234.com.your.app",
               "paths": [ "/billing/", "/billing/*"]
           },
           {
               "appID": "ABCD1234.com.your.app",
               "paths": [ "*" ]
           }
       ]
   }
}

Check your Apple developer portal to confirm your appID.

Your web server must have a valid HTTPS certificate, as HTTP is insecure and cannot confirm the link between your app and website. The HTTPS certificate’s root must be recognized by the operating system, as custom root certificates aren’t supported.
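How you serve that file depends on your stack. As one hedged example, a small Express route (hypothetical, not part of the original project) can return the extensionless file with an explicit JSON content type:

// Hypothetical Express sketch for serving the AASA file. Apple fetches
// /.well-known/apple-app-site-association and expects JSON content,
// even though the file name has no .json extension.
import express from "express";
import fs from "fs";
import path from "path";

const app = express();

app.get("/.well-known/apple-app-site-association", (_req, res) => {
  res.type("application/json");
  res.send(fs.readFileSync(path.join(__dirname, "apple-app-site-association")));
});

app.listen(8080); // served behind HTTPS termination in production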

If your app is targeting iOS 13 or macOS 10.15 and later, the “apps” key is no longer necessary and can be removed. However, if you’re supporting iOS 12, tvOS 12, or earlier versions, you’ll still need to keep the “apps” key included.

If you have multiple apps with the same universal links configuration and you do not want to repeat the relevant JSON, you can use this:

{
   "applinks": {
       "apps": [],
       "details": [
           {
               "appIDs": ["ABCD1234.com.your.app", "ABCD1234.com.your.app2"],
               "paths": [ "/billing/", "/billing/*"]
           }
       ]
   }
}

Use this if you are targeting iOS 13, or macOS 10.15 and later. But if you need to support earlier releases, you should stick to using the singular appID key for each app.

Path Configuration

The paths key uses terminal-style pattern matching for URLs, where * represents multiple characters and ? matches one. Starting with iOS 13 and macOS 10.15, the paths key was superseded by components, which uses an array of dictionaries for URL component pattern matching. Components include the path (marked by /), fragment (marked by #), and query (marked by ?).

{
   "applinks": {
       "apps": [],
       "details": [
           {
               "appIDs": ["ABCD1234.com.your.app", "ABCD1234.com.your.app2"],
               "components": [
                   {
                       "/": "/path/*/filename",
                       "#": "*fragment",
                       "?": "widget=?*"
                   }
               ]
           }
       ]
   }
}

Older versions like iOS 12, tvOS 12, and earlier macOS versions still use the paths key, but newer versions will ignore it if components are present.

If parts of your website aren’t intended to be represented in the app, you can exclude these sections using the exclude key with true as its value.

"components": [ 
  "/": "/path/*/filename",
  "#": "*fragment",
  "?": "widget=?*",
  "exclude": true
]

This functions similarly to the not keyword in the old paths key, though not isn’t compatible with the new components dictionary.

URL pattern matching demo
Here’s how you can handle URL pattern matching for a meal ordering app using JSON to define component patterns in Universal Links:

  1. Order forms with a fixed path structure
    To match order forms at example.com/{any}/order:
{
  "components": [
    {
      "/": "/*/order"
    }
  ]
}

This matches any path that has an arbitrary first component followed by /order.

Example URLs:
https://example.com/user/order
https://example.com/product/order

  2. Matching taco orders with cheese query
    To match URLs with /taco and a cheese query item:
{
  "components": [
    {
      "/": "/taco",
      "?": { "cheese": "*" }
    }
  ]
}

The * in the query will match any value for cheese.

Example URLs:
https://example.com/taco?cheese=cheddar
https://example.com/taco?cheese=mozzarella

  3. Excluding specific coupon codes
    To exclude coupon codes starting with 1 and match other codes:
{
  "components": [
    {
      "/": "/coupon/1*",
      "exclude": true
    },
    {
      "/": "/coupon/*"
    }
  ]
}

Here, the first entry excludes codes starting with 1, while the second matches all other coupon codes.

Example URLs:
Excluded: https://example.com/coupon/1234
Matched: https://example.com/coupon/5678

For the production-ready backend, you can test if your website is properly configured for Universal Links using the aasa-validator tool.

How to configure your Xcode project

To demonstrate how deep linking works, we’ll build a simple test application. This application will have straightforward navigation between the Home and Billing screens using the @react-navigation component:

npx react-native init BillingApp --template react-native-template-typescript

Open your Xcode workspace:

open BillingApp/ios/BillingApp.xcworkspace

Screenshot of Xcode workspace
In your Xcode window, select your newly created project in the left pane (in our case it’s BillingApp). Next, select the BillingApp target inside the newly opened left pane of the internal view for the BillingApp.xcodeproj.

Navigate to the Info section in the top center of that view, then go to the very bottom and click the plus (+) sign under URL Types. Make sure to add billing-id as your new Identifier and specify URL Schemes as billing-app.

By following the steps above, you’ve enabled your iOS project configuration to use deep links like billing-app://billing/4 inside your Objective-C and JavaScript code later on.
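If you prefer editing configuration files directly, those clicks end up as a CFBundleURLTypes entry in your app’s Info.plist; a snippet along these lines (using the identifier and scheme from the steps above) is what Xcode writes for you:

<key>CFBundleURLTypes</key>
<array>
  <dict>
    <key>CFBundleURLName</key>
    <string>billing-id</string>
    <key>CFBundleURLSchemes</key>
    <array>
      <string>billing-app</string>
    </array>
  </dict>
</array>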

After configuring Xcode, the next step focuses on React Native. I will start with the linking part of the React Native core called LinkingIOS. You can read more about it in the official documentation here.

Its main goal is to construct a bridge that will enable a JavaScript thread to receive updates from the native part of your application, which you can read more about in the AppDelegate.m part below.

Go to ios/Podfile and add this line under target:

pod 'React-RCTLinking', :path => '../node_modules/react-native/Libraries/LinkingIOS'

And then make sure to update your pods using this command:

cd ios && pod install

The next step is to enable the main entry points of your application to have control over the callbacks that are being called when the application gets opened via deep linking.

In this case, we implement the openURL callback with options and pass its context to the native RCTLinkingManager module.

#import <React/RCTLinkingManager.h>

- (BOOL)application:(UIApplication *)application
            openURL:(NSURL *)url
            options:(NSDictionary<UIApplicationOpenURLOptionsKey, id> *)options
{
  return [RCTLinkingManager application:application openURL:url options:options];
}

If you’re targeting iOS 8.x or older, you can use the following code instead:

#import <React/RCTLinkingManager.h>

- (BOOL)application:(UIApplication *)application openURL:(NSURL *)url
  sourceApplication:(NSString *)sourceApplication annotation:(id)annotation
{
  return [RCTLinkingManager application:application openURL:url
                      sourceApplication:sourceApplication annotation:annotation];
}

For the universal links, we will need to implement a callback function continueUserActivity, which will also pass in the context of the app and current universal link into the JavaScript context via RCTLinkingManager.

- (BOOL)application:(UIApplication *)application
continueUserActivity:(nonnull NSUserActivity *)userActivity
 restorationHandler:(nonnull void (^)(NSArray<id<UIUserActivityRestoring>> * _Nullable))restorationHandler
{
  return [RCTLinkingManager application:application
                   continueUserActivity:userActivity
                     restorationHandler:restorationHandler];
}

Deep linking in Android

Android deep linking works slightly differently in comparison to iOS. This configuration operates on top of Android Intents, an abstraction of an operation to be performed. Most of the configuration is stored under AndroidManifest.xml and works by actually pointing to which Intent will be opened when the deep link is executed.

How to configure your Android Studio project

Inside your Android manifest android/app/src/main/AndroidManifest.xml we need to do the following:

  • Configure the Intent filter
  • Define the main View action and specify two main categories: DEFAULT and BROWSABLE
  • Finalize the configuration by setting the scheme to billing-app and defining the main route as billing

This way Android will know that this app has deep linking configured for this route billing-app://billing/*:

<intent-filter android:label="filter_react_native">
 <action android:name="android.intent.action.VIEW" />
 <category android:name="android.intent.category.DEFAULT" />
 <category android:name="android.intent.category.BROWSABLE" />
 <data android:scheme="billing-app" android:host="billing" />
</intent-filter>

Navigation and deep linking

In most production-grade applications you’ll end up having multiple screens. You’re most likely to end up using some form of component that implements this navigation for you. However, you can opt out and use deep linking without navigation context by invoking React Native’s core library via JavaScript by calling Linking directly.

You can do this inside your React Native code using these two methods:

  1. If the app is already open:
    Linking.addEventListener('url', ({url}) => {})
  2. If the application is not already open and you want to get the initial URL, use this call:
    Linking.getInitialURL()

Use the acquired deep linking URL to show different content, based on the logic of your application.
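Putting those two calls together, a minimal hook-style sketch (the hook name is mine, not from this project) that covers both cold and warm starts might look like this:

import { useEffect, useState } from 'react';
import { Linking } from 'react-native';

// Capture the URL that launched the app (cold start) and any URLs
// received while it is already running (warm start).
export function useDeepLink(): string | null {
  const [url, setUrl] = useState<string | null>(null);

  useEffect(() => {
    // Cold start: the link that opened the app, if any.
    Linking.getInitialURL().then((initial) => {
      if (initial) setUrl(initial);
    });

    // Warm start: links arriving while the app is open.
    const subscription = Linking.addEventListener('url', ({ url }) => setUrl(url));
    return () => subscription.remove();
  }, []);

  return url;
}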

If you’re using @react-navigation you can opt-in to configure deep linking using its routing logic.

For this, you need to define your prefixes for both universal linking and deep linking. You will also need to define config with its screens, including nested screens if your application has many screens and is very complex.

Here’s an example of how this configuration looks for our application:

import { NavigationContainer } from '@react-navigation/native';

export const config = {
  screens: {
    Home: {
      path: 'home/:id?',
      parse: {
        id: (id: string) => `${id}`,
      },
    },
    Billing: {
      path: 'billing/:id?',
      parse: {
        id: (id: string) => `${id}`,
      },
    },
  },
};

const linking = {
  prefixes: ['https://app.reactivelions.com', 'billing-app://home'],
  config,
};

function App() {
  return (
    <NavigationContainer linking={linking} fallback={<Text>Loading...</Text>}>
      {/* content */}
    </NavigationContainer>
  );
}

In the code section above, we introduced universal linking and walked through the steps needed to define universal link association on your website’s server end. In Android, there’s something similar called Verified Android App Links.

You can also check out the react-navigation documentation for more details on configuring links.

Using Android App Links helps you avoid the confusion of opening deep links with other applications that aren’t yours. Android usually suggests using a browser to open unverified deep links whenever it’s unsure if they’re App Links (and not deep links).

To enable App Links verification, you will need to change the intent declaration in your manifest file like so:

<intent-filter android:autoVerify="true">

To create verified App Links, you will need to generate a JSON verification file that is placed in the same .well-known folder as in the Xcode section. First, use keytool to print the SHA-256 fingerprint of the certificate in your release keystore:

keytool -list -v -keystore my-release-key.keystore

Then declare the association between your domain and your app by placing that fingerprint in the verification file:

[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.mycompany.app1",
    "sha256_cert_fingerprints":
    ["14:6D:E9:83:C5:73:06:50:D8:EE:B9:95:2F:34:FC:64:16:A0:83:42:E6:1D:BE:A8:8A:04:96:B2:3F:CF:44:E5"]
  }
}]

Then place the generated file on your website using this path:
https://www.example.com/.well-known/assetlinks.json

How to test deep links

After going through all configurations and implementations, you want to ensure you’ve set everything up correctly and that deep links work on each platform of your choice.

Before you test universal links or Android App Verified Links, make sure that all JSON files are uploaded, available, and up to date for each of your domains. Depending on your web infrastructure, you might even want to refresh your Content Delivery Network (CDN) cache.

A successful deep linking test means that, after opening a deep link in the browser, you are forwarded to your application and you can see the desired screen with the given content.

Our application has Home and Billing screens. When you go to the Billing screen, you can specify a number, and the application will render the same number of emojis with flying dollar banknotes.

If you go to the Billing screen from your Home screen, it won’t pass any content, and therefore it will not render any emojis.
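As a rough sketch of what the Billing screen can look like (component and prop shapes are illustrative, not the exact code from the repository):

import React from 'react';
import { Text, View } from 'react-native';

// Illustrative Billing screen: reads the id param delivered by the
// deep link (e.g. billing-app://billing/5) and renders that many emojis.
export function BillingScreen({ route }: { route: { params?: { id?: string } } }) {
  const count = Number(route.params?.id ?? 0);
  return (
    <View>
      <Text>{'💸'.repeat(count)}</Text>
    </View>
  );
}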

In your terminal, you can use these commands to test deep linking for each platform. Play around by changing the number at the end of your deep linking URL to see different numbers of emojis.

  1. iOS
    npx uri-scheme open billing-app://billing/5 --ios

    You can also open Safari and enter billing-app://billing/5 in your address bar, then click go.

    Screenshot of billing app in iOS with dollar sign emojis

  2. Android
    npx uri-scheme open billing-app://billing/5 --android

    Screenshot of billing app in Android with dollar sign emojis
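On Android you can also fire the intent directly with adb. The package name below assumes the default one that react-native init generates for BillingApp:

adb shell am start -W -a android.intent.action.VIEW -d "billing-app://billing/5" com.billingapp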

Next steps

You might have noticed that I used TypeScript to write the code for this project. I’ve implemented custom property types that require custom declarations for each screen. Check props.ts to see these type declarations.

As I mentioned earlier, if you’re building a production-grade application, you’re most likely to end up building complex routing and will need nesting routes to be implemented with your navigator library.

Nesting navigation will enable you to decompose each screen into smaller components and have sub-routes based on your business logic. Learn more about building nesting routes using @react-navigation here.

Looking forward to seeing what you build with this!

The post Understanding deep linking in React Native appeared first on LogRocket Blog.
