Canva users can now create, edit, and manage their designs by describing their requirements to Anthropic’s Claude AI. The connection is the latest of several integrations that allow Claude users to access third-party tools and services, including Figma, Notion, Stripe, and Prisma, without having to leave their conversation with the AI chatbot.
Starting today, Claude users will be able to use natural language prompts to complete design tasks in their linked Canva account, such as creating presentations, resizing images, and automatically filling premade templates. The integration also enables users to search for keywords within Canva Docs, Presentations, and brand templates, and summarize them through the Claude AI interface. The feature requires both a paid Canva account (which starts at $15 per month) and a paid Claude account ($17 per month).
Anthropic is using the Model Context Protocol (MCP) server that Canva launched last month, which gives Claude secure access to users’ Canva content. MCP, often referred to as the “USB-C port” of AI apps, is an open-source standard that lets developers quickly connect their AI models to other apps and services. Companies like Anthropic, Microsoft, Figma, and Canva have embraced MCP to prepare their platforms for a future tech landscape that’s expected to be filled with AI agents.
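For a concrete sense of what that standard looks like in practice, here is a minimal sketch of an MCP server using the official Python SDK; the server name and tool are hypothetical stand-ins, not Canva’s actual server.

```python
# A minimal sketch of an MCP server exposing one tool, using the official
# Python SDK (pip install "mcp[cli]"). The server name and the tool below
# are illustrative stand-ins, not Canva's real MCP server.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("design-helper")

@mcp.tool()
def summarize_design(design_id: str) -> str:
    """Return a short text summary for the given design ID."""
    # A real server would call the host service's API here.
    return f"Summary of design {design_id}"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for a client such as Claude
```

A connected client discovers the tool’s name, signature, and docstring automatically, which is how a chat assistant can call into a service without custom glue code.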
“Instead of uploading or manually transferring ideas, users can now generate, summarize, review, and publish Canva designs, all within a Claude chat,” Canva Ecosystem head Anwar Haneef said in a statement to The Verge. “MCP makes this possible with a simple toggle in settings, marking a powerful shift toward user-friendly, AI-first workflows that combine creativity and productivity in one.”
Claude is the first AI assistant to support Canva design workflows through MCP, but the chatbot already offers similar design-platform capabilities through a partnership with Figma announced last month. A new Claude integrations directory is also launching on web and desktop today, giving users an easy overview of all the tools and connected apps at their disposal.
Microsoft has started testing a new Windows 11 feature that should improve laptop battery life. A new adaptive energy saver mode will soon automatically enable or disable the main energy saver mode based on your laptop’s workload, rather than just how much battery life is left.
The energy saver mode in Windows 11 typically dims display brightness by 30 percent, disables transparency effects, and stops apps from running in the background. Non-critical Windows update downloads are also paused, and certain apps like OneDrive, OneNote, and Phone Link may not sync fully while energy saver is enabled.
The new adaptive mode, which will only be available on devices with a battery, will automatically turn energy saver on or off without affecting screen brightness. That will make it less noticeable on devices like laptops, tablets, and handhelds.
“Adaptive energy saver is an opt-in feature that automatically enables and disables energy saver, without changing screen brightness, based on the power state of the device and the current system load,” explains Microsoft’s Windows Insider team. Microsoft recently began testing the feature with Windows 11 testers in the Canary Channel, so expect it to appear in Windows 11 later this year.
While you won’t be able to use the adaptive mode on desktop PCs, Microsoft brought its main energy saver mode to all PCs last year, allowing even desktops to cut their electricity usage.
Are you running apps on Exchange Web Services (EWS)? With EWS support ending in October 2026, this is the perfect time to modernize. We’re excited to announce a new hands-on tutorial that helps you accelerate migrations from EWS to Microsoft Graph, using AI tools you already have access to. This tutorial teaches you skills that you can apply to working in any legacy code base.
Migrating legacy code can feel daunting. This tutorial turns that challenge into an opportunity to upskill with the latest AI-powered tools and erase years of technical debt along the way.
The tutorial walks you through the migration process using a real-world ASP.NET MVC mail app as a baseline and uses GitHub Copilot to accelerate every step: analyzing legacy code, generating documentation, adding tests, refactoring, and finally swapping out EWS for the Microsoft Graph API. You’ll discover how AI can help you understand unfamiliar code, automate repetitive tasks, and troubleshoot tricky migration issues.
The migration tutorial helps you learn how to:

- Analyze an unfamiliar legacy codebase with GitHub Copilot
- Generate documentation for code you didn’t write
- Add tests so you can refactor with confidence
- Refactor incrementally, verifying behavior at each step
- Swap EWS calls for their Microsoft Graph equivalents (a simplified sketch of that swap follows this list)
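The tutorial’s sample app is ASP.NET MVC, but the shape of the final swap is easy to show in a hedged Python sketch: reading inbox mail through Microsoft Graph, the kind of call that replaces an EWS FindItems request. The package names (msgraph-sdk, azure-identity) are real; the app and tenant IDs are placeholders.

```python
# A hedged sketch of the Graph side of an EWS-to-Graph swap: listing inbox
# message subjects. Requires msgraph-sdk and azure-identity; IDs below are
# placeholders, not working values.
import asyncio

from azure.identity import DeviceCodeCredential
from msgraph import GraphServiceClient

credential = DeviceCodeCredential(
    client_id="YOUR_APP_ID",     # placeholder Entra app registration
    tenant_id="YOUR_TENANT_ID",  # placeholder tenant
)
client = GraphServiceClient(credentials=credential, scopes=["Mail.Read"])

async def list_inbox_subjects() -> None:
    # Rough EWS equivalent: ExchangeService.FindItems(WellKnownFolderName.Inbox, ...)
    messages = await client.me.messages.get()
    for message in messages.value:
        print(message.subject)

asyncio.run(list_inbox_subjects())
```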
Migrations can feel overwhelming, but embracing AI tools like Copilot can transform a tedious upgrade into a growth opportunity. You’ll not only eliminate a looming security risk but also gain practical experience with AI-assisted development—setting yourself and your team up for future success.
Whether you’re a seasoned developer or just curious about how Copilot can help with real-world tasks, this tutorial is a practical resource, packed with tips for making the most of AI tools. Turn a maintenance headache into a hands-on exploration of today’s AI-powered development, and you’ll go from legacy to legendary.
Try out the tutorial and share your experiences or questions in the comments. Let’s learn together and make this transition a win for everyone!
Happy coding,
Thomas & the Exchange Programmability Team
When GitHub first shipped the pull request (PR) back in 2008, it wrapped a plain-text diff in a social workflow: comments, approvals, and a merge button that crucially refused to light up without at least one thumbs up from another developer. That design decision hard-wired accountability into modern software and let maintainers scale far beyond hallway conversations or e-mail patches.
Seventeen years later, just about every “agentic” coding tool, from research demos to enterprise platforms, still funnels its work through that same merge gate. The PR remains the audit log, the governance layer, and the social contract that says nothing ships until a person is willing to own it.
Now that large language models (LLMs) can scaffold projects, file PRs, and even reply to review comments they wrote themselves, the obvious next question is: who is accountable for code that ships when part of it comes from a model?
At GitHub, we think the answer hasn’t fundamentally changed: it’s the developer who hits “Merge.” But what has changed is everything that happens before that click.
In this article, we’ll explore how we’re re-thinking code reviews for a world where developers increasingly work with AI (and how your team can, too).
Earlier this year, the GitHub Copilot code review team conducted in-depth interviews with developers about their code review process and watched them walk through their review workflows. These interviews revealed consistent patterns in how developers approach reviews.
An overarching principle quickly became clear: AI augments developer judgment; it can’t replace it. And our findings, from confidence scores to red-flag explanations, are informing how we’re building Copilot’s code review features.
LLMs are already great at the “grind” layer of a review: flagging null dereferences, suggesting missing docstrings, and catching style inconsistencies and other mechanical issues.
Soon they’ll be able to do even more, such as understanding product and domain context. But they still fall short on architectural judgment, mentoring and knowledge transfer, and the context-specific calls that depend on knowing your product and users.
Those gaps keep developers in the loop and in the pilot’s seat. That principle is foundational for us as we continue to develop GitHub Copilot.
The most effective approach to AI-assisted code reviews starts before you even submit your pull request. Think of it as the golden rule of development: Treat code reviewers the way you’d like them to treat you.
Before pushing your code, run GitHub Copilot code review in your IDE to catch the obvious stuff so your teammates can focus on the nuanced issues that require developer insight. Copilot code review can comb your staged diff, suggest docstrings, and flag null dereferences. From there, you can fix everything it finds before you submit your PR so teammates never see the noise.
Just because you used AI to generate code doesn’t mean it’s not your code. Once you commit code, you’re responsible for it. That means understanding what it does, ensuring it follows your team’s standards, and making sure it integrates well with the rest of your codebase.
If an AI agent writes code, it’s on me to clean it up before my name shows up in git blame.
Jon Wiggins, Machine Learning Engineer at Respondology
Your pipeline should already be running unit tests, secret scanning, CodeQL, dependency checks, style linters. Keep doing that. Fail fast, fail loudly.
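As a sketch of that “fail fast” idea (an illustrative script, not a specific GitHub feature), a local gate can run the same checks your CI does before a teammate ever sees the PR; the tools named here (pytest, ruff, gitleaks) are stand-ins for whatever your pipeline already uses.

```python
# An illustrative local "fail fast" gate: run the same checks your CI runs
# before requesting review. The specific tools are stand-ins for your stack.
import subprocess
import sys

CHECKS = [
    ("unit tests", ["pytest", "-q"]),
    ("style lint", ["ruff", "check", "."]),
    ("secret scan", ["gitleaks", "detect"]),
]

for name, cmd in CHECKS:
    print(f"Running {name}: {' '.join(cmd)}")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"{name} failed -- fix it before requesting review")

print("All checks passed; ready for review")
```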
The real power of AI in code reviews isn’t in replacing developers as the reviewers. It’s in handling the routine work that can bog down the review process, freeing developers to focus where their judgment is most valuable.
Make sure tests pass, coverage metrics are met, and static analysis tools have done their work before developer reviews begin. This creates a solid foundation for more meaningful discussion.
You can use an LLM to catch not just syntax issues, but also patterns, potential bugs, and style inconsistencies. Ironically, LLMs are particularly good at catching the sorts of mistakes that LLMs make, which is increasingly relevant as more AI-generated code enters our codebases.
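As a toy illustration of that idea, not a description of how Copilot code review is built, you can pipe a staged diff to an LLM yourself. This sketch assumes the anthropic Python SDK with an API key in the environment; the model name is a placeholder.

```python
# A toy illustration of LLM-assisted diff review (not Copilot's internals).
# Assumes ANTHROPIC_API_KEY is set; the model name is a placeholder.
import subprocess

from anthropic import Anthropic

diff = subprocess.run(
    ["git", "diff", "--staged"], capture_output=True, text=True
).stdout

client = Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use your preferred model
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Review this diff for bugs, risky patterns, and style "
                   f"inconsistencies. Be specific:\n\n{diff}",
    }],
)
print(response.content[0].text)
```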
Set clear expectations about when AI feedback should be considered and when human judgment takes precedence. Rely on other developers for code architecture and for consistency with business goals and organizational values; AI is especially useful for reviewing long, repetitive PRs where it’s easy to miss the little things.
While AI can handle much of the routine work in code reviews, developer judgment remains irreplaceable. Even as LLMs get smarter, three review tasks remain stubbornly human: architectural decisions that shape the system, mentoring and knowledge transfer, and the context-specific calls that require a deep understanding of your product and users.
The goal is to make developers more effective by letting them focus on what they do best.
Learn more about code reviews with GitHub Copilot >