These are the Office icons Microsoft rejected

The Word concepts next to the final icon on the right.

Microsoft is busy rolling out its new curvy and colorful Office icons, and now it’s revealing a set of design concepts it experimented with before finalizing them. Some of the concepts are radically different from what Microsoft is shipping, with design explorations for Word, Excel, and PowerPoint that more closely resemble the Office for Mac icons of the past.

The Word concept icons (above) include a notepad-like experiment and different ways to visualize stacks of paper, or documents. Microsoft experimented with making the Word lettering the key part of the icon, and also versions where the lettering blends in or is totally absent. Microsoft eventually settled on a design that has three horizontal bars instead of four, and it’s using versions of the icon with and without lettering.

Microsoft focuses heavily on the use of cells in its existing and new Excel icons, and the concepts rarely diverge from this. I really like the X icon, but the rest look similar to what Microsoft landed on for the final design.

PowerPoint has always been about slides, and Microsoft experimented with a variety of ways of visualizing that for its latest PowerPoint icon. A couple of concepts focus on the lettering, turning it into a ribbon-like P or a P with a pie chart hanging off it. The final design is a lot tamer, though, with a slightly more rounded and colorful take on the current PowerPoint icon.

All of Microsoft’s new Office icons — including new Teams, OneDrive, Outlook, and OneNote designs — are starting to roll out across Windows and iOS at the moment. Microsoft appears to be using the versions with letters in Windows, but for iOS it’s opting for icons without the distinctive letters.

What do you think? Are there any concept versions you prefer over the final designs Microsoft picked?


Magic Words: Programming the Next Generation of AI Applications


“Strange was obliged to invent most of the magic he did, working from general principles and half-remembered stories from old books.”

Susanna Clarke, Jonathan Strange & Mr Norrell

Fairy tales, myths, and fantasy fiction are full of magic spells. You say “abracadabra” and something profound happens.[1] Say “open sesame” and the door swings open.

It turns out that this is also a useful metaphor for what happens with large language models.

I first got this idea from David Griffiths’s O’Reilly course on using AI to boost your productivity. He gave a simple example. You can tell ChatGPT “Organize my task list using the Eisenhower four-sided box.” And it just knows what to do, even if you yourself know nothing about General Dwight D. Eisenhower’s approach to decision making. David then suggests his students instead try “Organize my task list using Getting Things Done,” or just “Use GTD.” Each of those phrases is shorthand for systems of thought, practices, and conventions that the model has learned from human culture.

These are magic words. They’re magic not because they do something unworldly and unexpected but because they have the power to summon patterns that have been encoded in the model. The words act as keys, unlocking context and even entire workflows.

We all use magic words in our prompts. We say something like “Update my resume” or “Draft a Substack post” without thinking how much detailed prompting we’d have to do to create that output if the LLM didn’t already know the magic word.

Every field has a specialized language whose terms are known only to its initiates. We can be fanciful and pretend they are magic spells, but the reality is that each of them is really a kind of fuzzy function call to an LLM, bringing in a body of context and unlocking a set of behaviors and capabilities. When we ask an LLM to write a program in JavaScript rather than Python, we are using one of these fuzzy function calls. When we ask for output as an .md file, we are doing the same. Unlike a function call in a traditional programming language, it doesn’t always return the same result, which is why developers have an opportunity to enhance the magic.
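To make the “fuzzy function call” idea concrete, here is a minimal sketch using the OpenAI Python client. The model name and task list are placeholders, not anything from the article; the point is that the code is identical for every call, and only the magic word changes what comes back.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set


def fuzzy_call(magic_word: str, task_list: str) -> str:
    # The shorthand phrase acts like a function name; the model supplies the "implementation."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Organize my task list using {magic_word}:\n{task_list}",
        }],
    )
    return response.choices[0].message.content


tasks = "email Bob, file taxes, plan the offsite, fix the login bug"
print(fuzzy_call("the Eisenhower matrix", tasks))
print(fuzzy_call("Getting Things Done", tasks))  # same code, different spell, different workflow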

From Prompts to Applications

The next light bulb went off for me in a conversation with Claire Vo, the creator of an AI application called ChatPRD. Claire spent years as a product manager, and as soon as ChatGPT became available, began using it to help her write product requirement documents or PRDs. Every product manager knows what a PRD is. When Claire prompted ChatGPT to “write a PRD,” it didn’t need a long preamble. That one acronym carried decades of professional practice. But Claire went further. She refined her prompts, improved them, and taught ChatGPT how to think like her. Over time, she had trained a system, not at the model level, but at the level of context and workflow.

Next, Claire turned her workflow into a product. That product is a software interface that wraps up a number of related magic words into a useful package. It controls access to her customized magic spell, so to speak. Claire added detailed prompts, integrations with other tools, access control, and a whole lot of traditional programming in a next-generation application that uses a mix of traditional software code and “magical” fuzzy function calls to an LLM. ChatPRD even interviews users to learn more about their goals, customizing the application for each organization and use case.

Claire’s quickstart guide to ChatPRD is a great example of what a magic-word (fuzzy function call) application looks like.

You can also see how magic words are crafted into magic spells and how these spells are even part of the architecture of applications like Claude Code through the explorations of developers like Jesse Vincent and Simon Willison.

In “How I’m Using Coding Agents in September, 2025,” Jesse first describes how his claude.md file provides a base prompt that “encodes a bunch of process documentation and rules that do a pretty good job keeping Claude on track.” And then his workflow calls on a bunch of specialized prompts he has created (i.e., “spells” that give clearer and more personalized meaning to specific magic words) like “brainstorm,” “plan,” “architect,” “implement,” “debug,” and so on. Note how inside these prompts, he may use additional magic words like DRY, YAGNI, and TDD, which refer to specific programming methodologies. For example, here’s his planning prompt (boldface mine):

Great. I need your help to write out a comprehensive implementation plan.

Assume that the engineer has zero context for our codebase and questionable
taste. document everything they need to know. which files to touch for each
task, code, testing, docs they might need to check. how to test it.give
them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. frequent commits.

Assume they are a skilled developer, but know almost nothing about our
toolset or problem domain. assume they don't know good test design very
well.

please write out this plan, in full detail, into docs/plans/

But Jesse didn’t stop there. He built a project called Superpowers, which uses Claude’s recently announced plug-in architecture to “give Claude Code superpowers with a comprehensive skills library of proven techniques, patterns, and tools.” Announcing the project, he wrote:

Skills are what give your agents Superpowers. The first time they really popped up on my radar was a few weeks ago when Anthropic rolled out improved Office document creation. When the feature rolled out, I went poking around a bit – I asked Claude to tell me all about its new skills. And it was only too happy to dish…. [Be sure to follow this link! – TOR]

One of the first skills I taught Superpowers was How to create skills. That has meant that when I wanted to do something like add git worktree workflows to Superpowers, it was a matter of describing how I wanted the workflows to go…and then Claude put the pieces together and added a couple notes to the existing skills that needed to clue future-Claude into using worktrees.

After reading Jesse’s post, Simon Willison did a bit more digging into the original document handling skills that Claude had announced and that had sparked Jesse’s brainstorm. He noted:

Skills are more than just prompts though: the repository also includes dozens of pre-written Python scripts for performing common operations.

 pdf/scripts/fill_fillable_fields.py for example is a custom CLI tool that uses pypdf to find and then fill in a bunch of PDF form fields, specified as JSON, then render out the resulting combined PDF.

This is a really sophisticated set of tools for document manipulation, and I love that Anthropic have made those visible—presumably deliberately—to users of Claude who know how to ask for them.
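For a sense of what such a script might look like, here is a hedged sketch of the kind of pypdf operation Simon describes. This is not Anthropic’s actual code; the form and field names are hypothetical.

from pypdf import PdfReader, PdfWriter

# Hypothetical input form and field values; a real skill script would take these as JSON.
reader = PdfReader("form.pdf")
print(reader.get_fields())  # inspect the fillable fields the form exposes

writer = PdfWriter()
writer.append(reader)  # copy all pages into the writer

# Fill in the form fields on the first page, then render out the combined PDF.
writer.update_page_form_field_values(writer.pages[0], {"name": "Jane Doe", "date": "2025-10-15"})
with open("filled.pdf", "wb") as f:
    writer.write(f)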

You can see what’s happening here. Magic words are being enhanced and given a more rigorous definition, and new ones are being added to what, in fantasy tales, they call a “grimoire,” or book of spells. Microsoft calls such spells “metacognitive recipes,” a wonderful term that ought to stick, though for now I’m going to stick with my fanciful analogy to magic.

At O’Reilly, we’re working with a very different set of magic words. For example, we’re building a system for precisely targeted competency-based learning, through which our customers can skip what they already know, master what they need, and prove what they’ve learned. It also gives corporate learning system managers the ability to assign learning goals and to measure the ROI on their investment.

It turns out that there are dozens of learning frameworks (and that is itself a magic word). In the design of our own specialized learning framework, we’re invoking Bloom’s taxonomy, SFIA, and the Dreyfus Model of Skill Acquisition. But when a customer says, “We love your approach, but we use LTEM,” we can invoke that framework instead. Every corporate customer also has its own specialized tech stack. So we are exploring how to use magic words to let whatever we build adapt dynamically not only to our end users’ learning needs but to the tech stack and to the learning framework that already exists at each company.

That would be a nightmare if we had to support dozens of different learning frameworks using traditional processes. But the problem seems much more tractable if we are able to invoke the right magic words. That’s what I mean when I say that magic words are a crucial building block in the next generation of application programming.

The Architecture of Magic

Here’s the important thing: Magic isn’t arbitrary. In every mythic tradition, it has structure, discipline, and cost. The magician’s power depends on knowing the right words, pronounced in the right way, with the right intent.

The same is true for AI systems. The effectiveness of our magic words depends on context, grounding, and feedback loops that give the model reliable information about the world.

That’s why I find the emerging ecosystem of AI applications so fascinating. It’s about providing the right context to the model. It’s about defining vocabularies, workflows, and roles that expose and make sense of the model’s abilities. It’s about turning implicit cultural knowledge into explicit systems of interaction.

We’re only at the beginning. But just as early programmers learned to build structured software without spelling out exact machine instructions, today’s AI practitioners are learning to build structured reasoning systems out of fuzzy language patterns.

Magic words aren’t just a poetic image. They’re the syntax of a new kind of computing. As people become more comfortable with LLMs, they will pass around the magic words they have learned as power user tricks. Meanwhile, developers will wrap more advanced capabilities around those that come with any given LLM once you know the right words to invoke their power. Each application will be built around a shared vocabulary that encodes its domain knowledge. Back in 2022, Mike Loukides called these systems “formal informal languages.” That is, they are spoken in human language, but do better when you apply a bit of rigor.

And at least for the foreseeable future, developers will write “shims” between the magic words that control the LLMs and the more traditional programming tools and techniques that interface with existing systems, much as Claire did with ChatPRD. But eventually we’ll see true AI-to-AI communication.

Magic words and the spells built around them are only the beginning. Once people start using them in common, they become protocols. They define how humans and AI systems cooperate, and how AI systems cooperate with each other.

We can already see this happening. Frameworks like LangChain or the Model Context Protocol (MCP) formalize how context and tools are shared. Teams build agentic workflows that depend on a common vocabulary of intent. What is an MCP server, after all, but a mapping of a fuzzy function call into a set of predictable tools and services available at a given endpoint?
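As a small sketch of that idea (using the official MCP Python SDK’s FastMCP helper; the server name, tool, and description are hypothetical), an MCP server is little more than a set of tools whose descriptions teach the model which magic words it understands:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("task-organizer")  # hypothetical server name


@mcp.tool()
def organize_tasks(tasks: list[str], framework: str = "GTD") -> str:
    """Organize a task list using a named productivity framework (e.g. GTD or
    the Eisenhower matrix). The framework name is the 'magic word'; this
    description is what the calling LLM reads to decide when to invoke the tool."""
    # A real implementation might call an LLM or apply deterministic rules here.
    return "\n".join(f"[{framework}] {task}" for task in tasks)


if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio so an MCP client can discover and call it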

In other words, what was once a set of magic spells is becoming infrastructure. When enough people use the same magic words, they stop being magic and start being standards—the building blocks for the next generation of software.

We can already see this progression with MCP. There are three distinct kinds of MCP servers. Some, like Playwright MCP, are designed to make it easier for AIs to interface with applications originally designed for interactive human use. Others, like the GitHub MCP Server, are designed to make it easier for AIs to interface with existing APIs, that is, with interfaces originally designed to be called by traditional programs. But some are designed as a frontend for a true AI-to-AI conversation. Other protocols, like A2A, are already optimized for this third use case.

But in each case, an MCP server is really a dictionary (or, in magic terms, a spellbook) that explains the magic words it understands and how to invoke them. As Jesse Vincent put it to me after reading a draft of this piece:

The part that feels the most like magic spells is the part that most MCP authors do incredibly poorly. Each tool has a “description” field that tells the LLM how you use the tool. That description field is read and internalized by the LLM and changes how it behaves. Anthropic are particularly good at tool descriptions and most everybody else, in my experience, is…less good.

In many ways, publishing the prompts, tool descriptions, context, and skills that add functionality to LLMs may be a more important frontier of open source AI than open weights. It’s important that we treat our enhancements to magic words not as proprietary secrets but as shared cultural artifacts. The more open and participatory our vocabularies are, the more inclusive and creative the resulting ecosystem will be.


Footnotes

  1. While often associated today with stage magic and cartoons, this magic word was apparently used from Roman times as a healing spell. One proposed etymology suggests that it comes from the Aramaic for “I create as I speak.”




Think a Recruiter Will Land You Your Dream Job? Read This First


If I had a dollar for every time an engineering leader came to me saying, “I don’t need coaching, I just need someone to help me find my next job”, I could retire tomorrow.

Let’s clear something up: recruiters do not find jobs for people.

That’s not their job. That’s not who pays them. That’s not the incentive structure they operate in.

Recruiters find people for jobs, not jobs for people. And if you miss that distinction, you’ll set yourself up for frustration in your job search.

Here’s what every engineering leader needs to understand:

By the way, not everything I see in engineering leadership is “safe” for LinkedIn. That’s why I write NSFL rants and trench notes only for my inner circle. If you want the real stories and insights, subscribe here.

#1 – The Recruiter’s Client Is Not You

A recruiter works for the company. Period.

The company pays the fee. The company creates the job spec. The company is the customer.

So yes, a great recruiter might care about you as a person — but the system is designed for them to care about placing someone in an open role. If you happen to be that person, awesome. If not, they’ll move on.

Want the audio / video format of this? Watch below.

https://www.youtube.com/watch?v=63mHUywRbFU

#2 – You Don’t Need a Recruiter… You Need Many

When people say, “I’m looking for a recruiter to help me,” I know they don’t understand how the game works.

There isn’t a magical recruiter out there whose job is to be your personal talent agent. That person doesn’t exist.

If recruiters are part of your strategy, you need to build relationships with lots of them. Treat it like networking. The more touchpoints, the more chances you’ll overlap with a live opportunity.

#3 – Recruiters Are Only One Slice of the Pie

Even at the executive level, only a fraction of roles come through recruiters.

The majority of opportunities are unlocked through your network. Conversations. Referrals. Relationships.

So yes, leverage recruiters as one spoke in the wheel, but don’t mistake them for the whole strategy. If all you’re doing is waiting for one recruiter to call, you’re leaving 80%+ of your opportunities untouched.

#4 – Coaching Solves a Different Problem

This is where the recruiter vs. coach conversation gets interesting.

A coach doesn’t “place” you into a job. What I do is equip you with the clarity, courage, and strategy to own your search and accelerate the outcome.

If you’re acting out of desperation, coaching helps you slow down, refocus, and make smart decisions.

If you feel isolated, coaching connects you with community and perspective.

If your confidence is shot, coaching rebuilds it so you show up strong in every interview and networking call.

If you want more than just a new job — if you want a lifestyle upgrade — coaching helps you design that vision and go get it.

Recruiters can’t do that for you. It’s not their role.

Let me leave you with this

Recruiters are not your agent. They will not hustle day and night to find you a job. They’re paid by companies to fill open roles. Full stop.

So stop expecting a recruiter to hand you your dream role on a platter.

Instead:

  • Build relationships with recruiters (plural).
  • Invest in coaching if you want to accelerate and maximize your transition.
  • Most of all, take ownership of your career strategy — don’t outsource it.

Because the truth is, your next opportunity won’t come from “a recruiter.” It’ll come from you showing up with clarity, confidence, and the right system.

And that’s exactly what we do together.

I help engineering leaders design a career strategy that actually works — one that gets you out of stagnation and into the roles and lifestyle you want faster.

If you’ve been relying on recruiters or job boards and not getting traction, let’s change that.

👉 Book a quick career growth audit with me here

We’ll look at where you’re stuck, what’s missing in your approach, and map the next best step for you.

No pressure, just clarity.

Have you ever relied too heavily on recruiters in a job search? What happened? Share in the comments.

The post Think a Recruiter Will Land You Your Dream Job? Read This First appeared first on OACO.


Managing Dependencies and Downstream Bottlenecks in Scrum | Renee Troughton


Renee Troughton: Managing Dependencies and Downstream Bottlenecks in Scrum

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

"For the actual product teams, it's not a problem for them... It's more the downstream teams that aren't the product teams, that are still dependencies... They just don't see that work until, hey, we urgently need this."

Renee brings a dual-edged challenge from her current work with dozens of teams across multiple business lines. While quarterly planning happens at a high level, small downstream teams—middleware, AI, data, and even non-technical teams like legal—are not considered in the planning process. These teams experience unexpected work floods with dramatic peaks and troughs throughout the quarter. The product teams are comfortable with ambiguity and incremental delivery, but downstream service teams don't see work coming until it arrives urgently. Through a coaching conversation, Renee and Vasco explore multiple experimental approaches: top-to-bottom stack ranking of initiatives, holding excess capacity based on historical patterns, shared code ownership where downstream teams advise rather than execute changes, and using Theory of Constraints to manage flow into bottleneck teams. They discuss how lack of discovery work compounds the problem, as teams "just start working" without identifying all players who need involvement. The solution requires balancing multiple strategies while maintaining an experimentation mindset, recognizing that complex systems require sensing our way toward solutions rather than predicting them.

Self-reflection Question: Are you actively managing the flow of work to prevent downstream bottlenecks, or are you allowing your "downstream teams" to be repeatedly overwhelmed by last-minute urgent requests?

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn’t just about innovation—it’s about coaching!🔥

Angela thought she was just there to coach a team. But now, she’s caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn’t just about the product—it’s about the people.

🚨 Will Angela’s coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

Buy Now on Amazon

[The Scrum Master Toolbox Podcast Recommends]

About Renee Troughton

Renee is one of the most experienced Agile coaches in the Southern Hemisphere with over two decades of transformation experience across banking, insurance, pharma, and real estate. Since 2002, she's helped organizations go digital, tackle systemic issues, and deliver value faster. Passionate about cutting bureaucracy, Renee champions a return to humanity at work.

Follow Renee’s work at AgileForest.com, her website, as well as her work on the Agile Revolution podcast.

You can link with Renee Troughton on LinkedIn.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20251015_Renee_Troughton_W.mp3?dest-id=246429

Cost Optimization of Azure AI Services


Why Does Cost Optimization in Azure AI Services Matter?

Cost optimization ensures that AI projects remain scalable, predictable, and sustainable, balancing innovation with financial responsibility.

Azure AI services—such as Azure Machine Learning, Azure Cognitive Services, and Azure OpenAI—primarily follow consumption-based pricing models. For example, Azure OpenAI bills based on the number of tokens processed in input and output. Customers with high-volume workloads can also opt for Provisioned Throughput Units (PTUs) to guarantee capacity at predictable rates.

Core Strategies for Cost Optimization

  • Auto-Scaling and Right-Sizing

Enable autoscaling on training and inference compute so clusters scale down (ideally to zero) when idle.
Right-size instance counts and SKUs based on observed utilization rather than peak estimates (see the compute cluster sketch after this list).

  • Choose the Right Compute Resources

Select compute SKUs that align with workload demands.
Use mid-tier GPUs or CPU instances for lighter workloads; reserve top-tier GPUs for demanding training or inference.
Leverage Spot VMs for interrupt-tolerant jobs, which can reduce costs by up to 90% compared to on-demand pricing.
For predictable workloads, Reserved Instances and commitment plans can provide significant discounts.

  • Batching and Request Grouping

Group multiple requests together to maximize GPU/CPU utilization.
Batching is especially useful in high-throughput inference scenarios, significantly lowering the cost per prediction.

  • Caching for Repeated Prompts

Azure OpenAI supports prompt caching, which avoids re-processing repeated or similar tokens.
For long prompts, structure common reusable sections at the beginning to maximize cache hits.
Cached tokens are often billed at a much lower rate—or even free in some deployment modes.

  • Optimize Data Storage and Movement

Keep compute and data in the same region to minimize egress costs.
Apply Azure Blob Storage lifecycle management to move infrequently accessed data to cooler, cheaper tiers.
Use Azure Data Lake optimizations for large-scale training data.

  • Continuous Monitoring and Cost Visibility

Enable Azure Cost Management + budgets + alerts to prevent runaway costs.
Apply resource tagging (project, team, environment) for granular tracking.
Integrate with Power BI dashboards for stakeholder visibility.

  • Governance and Guardrails

Use Azure Policy to restrict the deployment of costly SKUs unless approved.
Apply FinOps practices like showback/chargeback to create accountability across business units.

  • Cost-Aware Development Practices

During experimentation, use smaller models or lower compute tiers before scaling to production.
Sandbox environments help teams iterate quickly without incurring large bills.
Build testing pipelines that validate performance-cost trade-offs.
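As a concrete illustration of the auto-scaling and right-sizing guidance above, here is a minimal sketch (not an official recipe) using the azure-ai-ml Python SDK to create an Azure ML compute cluster that scales to zero when idle and runs on low-priority (spot) capacity. The subscription, resource group, workspace, and cluster names are placeholders.

from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AmlCompute

# Placeholder identifiers; replace with your own subscription, resource group, and workspace.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

cluster = AmlCompute(
    name="cpu-cluster",
    size="Standard_DS3_v2",           # right-size the SKU; reserve GPU sizes for heavy training or inference
    min_instances=0,                  # scale to zero when idle
    max_instances=4,
    idle_time_before_scale_down=120,  # seconds of idleness before scaling down
    tier="low_priority",              # spot capacity for interrupt-tolerant jobs
)

ml_client.compute.begin_create_or_update(cluster).result()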

 

Cost optimization plan for Azure services across AI, data, and app hosting.

Service | Resource Type | Cost Optimization Strategies
Azure ML (Workspace) | Microsoft.MachineLearningServices/workspaces | Use auto-scaling clusters, low-priority/spot VMs, archive unused datasets, right-size GPUs/CPUs
Azure AI Search | Microsoft.Search/searchServices | Scale replicas/partitions dynamically, use Basic tier for non-prod, optimize indexer schedules, remove stale indexes
Azure AI Services / OpenAI | Microsoft.CognitiveServices/accounts | Monitor token usage, enable prompt caching, use Provisioned Throughput Units (PTUs) for predictable costs, batch requests
Azure Kubernetes Service (AKS) | Microsoft.ContainerService/managedClusters | Enable cluster/pod autoscaling, use spot node pools, optimize node pool sizing, reduce logging retention
Azure App Service (Web/Functions) | Microsoft.Web/sites | Use Consumption plan for Functions, autoscale down in off-hours, reserve instances for prod, avoid idle deployment slots
Azure API Management (APIM) | Microsoft.ApiManagement/service | Start with Consumption/Developer tier, enable caching policies, scale Premium only when multi-region HA is needed
Azure Container Apps | Microsoft.App/containerApps | Use pay-per-vCPU/memory billing, scale-to-zero idle apps, optimize container images, use KEDA autoscaling
Azure Cosmos DB | Microsoft.DocumentDB/databaseAccounts | Use Autoscale RU/s, adopt serverless for low workloads, apply TTL for cleanup, consolidate containers
Azure SQL (Database) | Microsoft.Sql/servers/databases | Use serverless auto-pause for dev/test, use Elastic Pools, right-size tiers, enable auto-scaling storage
Azure SQL (Managed Instance) | Microsoft.Sql/managedInstances | Right-size vCores, buy reserved capacity (1/3 years), scale storage separately, move non-critical workloads to SQL DB
MySQL Flexible Server | Microsoft.DBforMySQL/flexibleServers | Use burstable SKUs for dev/test, enable Auto-Stop, optimize storage, adjust backup retention
PostgreSQL Flexible Server | Microsoft.DBforPostgreSQL/flexibleServers | Similar to MySQL: burstable SKUs, auto-stop idle servers, use pooling, avoid unnecessary Hyperscale
AI Foundry | Microsoft.MachineLearningServices/aiFoundry | Consolidate endpoints, autoscale inference, use model compression (ONNX/quantization), archive old models
Storage Accounts | Microsoft.Storage/storageAccounts | Apply lifecycle policies (Hot → Cool → Archive), enable soft delete, batch/compress data, use Premium storage only where needed

Optimizing the cost of Azure AI services is not a static process but an ongoing journey that blends technical insight with strategic action. By staying proactive, leveraging the latest features, and weaving in automation and governance, teams can keep AI innovation thriving within budget boundaries.


Top Features of Notebooks in Microsoft Fabric


The notebook experience in Microsoft Fabric is similar to the notebook experiences on other platforms - an interactive and collaborative environment where you can combine code, output, and documentation for data exploration and processing. However, there are a number of key features that set Fabric notebooks apart. I'll walk through the top features of Fabric notebooks in this blog post.

Native Integration with Lakehouses

Fabric notebooks are natively integrated with your lakehouses in Fabric. You can mount a new or existing lakehouse directly into your Fabric notebook simply by using the 'Lakehouse Explorer' in the notebook interface. The Lakehouse Explorer automatically detects all of the tables and files stored within your lakehouse, which you can then browse and load directly into your notebook. This direct integration with your lakehouses eliminates any need for manual path configuration or setup, making it simple and intuitive to explore your lakehouse data from your Fabric notebook.

Lakehouse Explorer in Fabric Notebooks.
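For example, once a lakehouse is attached via the Lakehouse Explorer, reading its data is a one-liner. This is a minimal sketch; the table and file names are illustrative.

# In a Fabric Spark notebook, "spark" and "display" are predefined.
df = spark.read.table("sales")        # "sales" is an illustrative lakehouse table name
display(df.limit(10))                 # preview the first rows

# Lakehouse files are also reachable by relative path, with no manual mounting required.
orders = spark.read.csv("Files/raw/orders.csv", header=True)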

Built-in File System with Notebook Resources

Fabric notebooks also come with a built-in file system called notebook 'Resources', allowing you to store small files like code modules, CSVs, and images. The notebook 'Resources Explorer' acts like a local file system within the notebook environment; you can manage folders and files here just like you would on your local machine. Within your notebook, you can then read from or write to the built-in file system. The files stored in the file system are tied to the notebook itself and are separate from OneLake. This is useful when you want to store files temporarily to perform quick experiments or ad hoc analysis of data or scripts, or when you simply want to store notebook-specific assets.

Notebook Resources in Fabric Notebooks.
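A small sketch of reading from and writing to the built-in file system, assuming a file called lookup.csv has been uploaded to Resources (the built-in folder is exposed at the relative path builtin/):

import pandas as pd

# Read a file previously uploaded to the notebook's Resources explorer.
lookup = pd.read_csv("./builtin/lookup.csv")

# Write intermediate results back to the built-in file system; these files stay
# with the notebook and are separate from OneLake.
lookup.head(100).to_csv("./builtin/lookup_sample.csv", index=False)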

Drag-and-Drop Data Exploration with Data Wrangler

Fabric notebooks also have a built-in feature called the 'Data Wrangler' which allows you to use the notebook interface to drag and drop files from your lakehouse or built-in file system into your notebook and load the data, all without writing any code. After dropping the file into the notebook, the Data Wrangler autogenerates the code needed to query and load the data. This low-code experience simplifies data loading and lowers the barrier to entry for data exploration. You don't need any coding experience to load your data into your Fabric notebook.

Drag and Drop with Data Wrangler in Fabric Notebooks.

AI Assistance with Copilot

Copilot for Data Science and Data Engineering (Preview) is an AI assistant within Fabric notebooks that helps you to analyse and visualise your data. You can ask Copilot to provide insights on your data, generate code for data transformations or to build visualisations.

In Fabric notebooks, you can access Copilot by using the Copilot chat panel. Here you can ask questions like "Show me the top 10 products by sales", "Show me a bar chart of sales by product" or "Generate code to remove duplicates from this dataframe". Copilot will respond with either natural language explanations or will generate the relevant code snippets that you can copy and paste into your notebook to execute. You can also ask the Copilot chat to provide natural language explanations of notebook cells, and add markdown comments, helping you to understand and document your code. This makes data exploration more accessible, especially for those with a lower level of coding knowledge.

Copilot chat panel in Fabric Notebooks.

Alongside the Copilot chat, you can also interact with Copilot directly within your notebook cells by using the Copilot in-cell panel. Here you can make requests to Copilot and it will provide the necessary code snippet directly in the cell below.

Copilot in cell panel in Fabric Notebooks.

Also built into Fabric notebooks is Copilot's AI-driven inline code completion. Copilot generates code suggestions as you type based on your notebook's context using a model trained on millions of lines of code. This feature minimises syntax errors and helps you to write code more efficiently in your notebooks, accelerating notebook development.

You can also use Copilot to add comments, fix errors, or debug your Fabric notebook code by using Copilot's chat-magics. These are a set of IPython magic commands that help you interact with Copilot. For example, placing the %%add_comments magic command above a cell prompts Copilot to annotate the code with a meaningful explanation. Similarly, the %%fix_errors command analyses code and suggests corrections inline.

Copilot chat magics.
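As a quick illustration, the magic command goes on the first line of a notebook cell and Copilot acts on the code beneath it. The cell body below is a placeholder, and the chat-magics themselves are Preview features, as noted above.

%%add_comments
# Copilot will annotate this cell with explanatory comments.
def dedupe_orders(df):
    return df.dropDuplicates(["order_id"])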

Having spent time working with Copilot in Fabric notebooks, I've found the main advantage is that it saves time. Even if the output needs tweaking, it saves time and effort upfront by doing the bulk of the groundwork. This is especially true for tasks that don't require deep contextual understanding or complex decision-making, for example reading/writing data, creating schemas from dataframes, renaming columns, basic transformations, and so on. It's also true for tasks where there is already a pattern in place within your code base, as you can ask Copilot to base its output on that, and it's generally accurate. I've also found that it's good at debugging your code and can spot things that are not always obvious, and it's pretty good at generating documentation too, all of which also saves time and effort.

However, Copilot doesn't always fully understand the context or intent behind your work. This is especially true for more complex tasks, and sometimes it might suggest code that is unoptimised. This is why you can't be fully reliant on Copilot's suggestions; you still need to review and refine what it generates. That said, even if the output isn't exactly what you need, it is often along the right tracks and can give you inspiration to get started. On the whole it's useful for getting unblocked and speeding up routine tasks, but it should be used as a tool to assist you while you stay in control of the decision-making behind your code. It is also worth noting that most of the Fabric notebook Copilot features described above are currently in Preview.

Faster Spark Start-Up

With Spark-based Fabric notebooks, it is generally very quick to spin up a Spark session. If you have ever used notebooks in Azure Synapse, you will know it takes a few minutes to spin up a Spark session; with Fabric notebooks, it takes a matter of seconds. The fast start-up times for Spark sessions are due to Fabric's starter pool model, which keeps a lightweight Spark runtime ready to serve new sessions. This means that when you initiate a Spark job, it can attach your session to an already running pool and doesn't need to provision a full cluster from scratch.

If you're running Spark sessions anywhere else in your tenant, the Spark runtime should start very quickly. This is because Fabric re-uses active sessions across the tenant. If any Spark session is already active within your tenant, your notebook can essentially piggyback on that runtime, allowing it to start in seconds. However, it is worth noting that if it's the first time you've run a Spark job in a while, it will take slightly longer to spin up a Spark session.

Python Notebooks

Python notebooks in Fabric offer a pure Python coding environment without Spark. They run on a single-node cluster (2 vCores / 16 GB RAM) making them a cost-effective tool for processing small to medium sized datasets where distributed computing is not required. Using the Apache Spark engine for small datasets can get expensive and is often overkill. Depending on your workload size and complexity, Python Notebooks in Fabric may be a more cost-efficient option than using the Spark-based notebook experience in Fabric.

Python Notebooks in Fabric.

The Python notebook runtime comes pre-installed with libraries like delta-rs and DuckDB (see Barry Smart's 3-part series on DuckDB) for reading and writing Delta Lake data, as well as the Polars and Pandas libraries for fast data manipulation and analysis. This environment is ideal for those who want to leverage these libraries without additional setup. These libraries are not available by default in PySpark notebooks in Fabric, meaning you would need to manually install and configure them to access similar functionality. For workflows that benefit from these specific libraries, Python notebooks offer a more ready-to-go experience.
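As a sketch of that ready-to-go experience (the table path and column names are illustrative), Polars can read a Delta table via delta-rs, and DuckDB can query the resulting dataframe directly with SQL:

import duckdb
import polars as pl

# Read a Delta table from the default lakehouse mount (illustrative path).
orders = pl.read_delta("/lakehouse/default/Tables/orders")

# DuckDB can query the in-memory Polars dataframe directly.
top_products = duckdb.sql("""
    SELECT product, SUM(amount) AS total_sales
    FROM orders
    GROUP BY product
    ORDER BY total_sales DESC
    LIMIT 10
""").pl()

print(top_products)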

Integration with Power BI Semantic Models

The final key feature of Fabric notebooks that this blog post is going to cover is their integration with Power BI semantic models through Semantic Link. Semantic Link is a feature in Fabric that connects Power BI semantic models with Fabric notebooks. It enables the propagation of semantic information - like relationships and hierarchies - from Power BI into Fabric notebooks.

Fabric notebooks also have access to Semantic Link Labs, which is an open-source Python library built on top of Semantic Link, which contains over 450 functions that enable you to programmatically manage semantic models and Power BI reports all from within Fabric notebooks. You can do things like rebinding reports to new models, detecting broken visuals, saving reports as .pbip files for version control or deploying semantic models across multiple workspaces with consistent governance.

Python notebooks in Fabric also offer support for the SemPy library. This is another Python library built on top of Semantic Link that enables you to interact with Power BI semantic models using pandas-like operations (but it's not actually pandas under the hood). SemPy introduces a custom object called FabricDataFrame, which behaves similarly to a pandas dataframe but it is semantically enriched. This means it carries metadata from Power BI semantic models - like relationships, hierarchies, and column descriptions. SemPy supports operations like slicing, merging and concatenating dataframes whilst preserving these semantic annotations. This means that you can explore and transform your data, with semantic awareness maintained.

Using the SemPy library in Fabric Notebooks to connect to Power BI semantic models.

Another key feature of the SemPy library is the ability to retrieve and evaluate DAX measures from your Power BI semantic models. For example, you can use SemPy to retrieve DAX measures like "Total Sales" from your semantic model. Similarly, with SemPy, you can also write new DAX expressions and evaluate them within your notebook.

Using the SemPy library in Fabric Notebooks to retrieve DAX measures and write DAX expressions.
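A short sketch of what that looks like with SemPy. The semantic model, measure, and column names here are illustrative, not taken from a real workspace.

import sempy.fabric as fabric

# Evaluate an existing measure, grouped by a column from the semantic model.
sales_by_region = fabric.evaluate_measure(
    "Sales Model",
    measure="Total Sales",
    groupby_columns=["Geography[Region]"],
)

# Evaluate an ad hoc DAX expression against the same model.
top_products = fabric.evaluate_dax(
    "Sales Model",
    """
    EVALUATE
    TOPN(10, SUMMARIZECOLUMNS('Product'[Product], "Sales", [Total Sales]), [Sales], DESC)
    """,
)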

This means you can use business logic, like calculated KPIs or aggregations, defined in Power BI directly in your notebooks, without needing to reimplement the logic manually in Python. Using business logic already defined in Power BI, directly in your Fabric notebooks, reduces duplication and ensures consistency. It also promotes collaboration between data scientists working in Fabric notebooks and business analysts working in Power BI - as both are using a shared semantic layer.

Note that all notebook experiences in Fabric support Semantic Link, but only the Python notebook experience in Fabric offers support for the SemPy library.

Conclusion

Fabric notebooks offer a lot of great features. You can access your lakehouse data easily through the Lakehouse Explorer and store notebook-specific assets in the built-in file system. The drag-and-drop experience with Data Wrangler makes data exploration accessible, and Copilot provides AI assistance for writing, documenting, and debugging code as well as generating insights from your data. Spark sessions are quick to start up thanks to Fabric's starter pool model, which makes spinning up distributed processing much faster than on platforms like Azure Synapse.

Python notebooks provide a lightweight, cost-effective alternative for smaller workloads, and come pre-installed with libraries like Polars, DuckDB, and delta-rs - providing a ready-to-use environment for your analytics. Finally, the integration with Power BI semantic models through Semantic Link, Semantic Link Labs, and SemPy allow you to interact with semantic models programmatically, apply DAX measures directly in notebooks, and maintain semantic integrity across transformations. This shared semantic layer promotes collaboration between data scientists and business analysts, which ensures consistency and reduces duplication of logic across platforms.

Whilst these are the top features that stood out to me, there are also lots of other capabilities within Fabric notebooks, so do go and check them out yourself. My colleague Ed has produced a great YouTube video series on getting started with Fabric notebooks, including Microsoft Fabric: Processing Bronze to Silver using Fabric Notebooks and Microsoft Fabric: Good Notebook Development Practices.
