Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Microsoft MVPs Shine at GitHub Universe 2025

MVP Rob Bos at GitHub Universe 2025

MVPs: Leading, Inspiring, and Innovating at GitHub Universe

The GitHub Universe 2025 conference brought together developers, innovators, and community leaders from around the world to explore the future of software development. Microsoft MVPs truly stood out—not only as experts in their fields, but as passionate mentors, trailblazers, and advocates for the power of open source and AI-driven development. From leading high-impact sessions and launching game-changing resources to inspiring the next generation of tech leaders, MVPs played a pivotal role in shaping the conversations and experiences that defined the event. Their energy, expertise, and commitment to sharing knowledge made GitHub Universe 2025 an unforgettable celebration of what’s possible when community comes together to drive the future of software development.

 

MVP Jesse Houwing Photo Credit: Priyanshi R

Inspiring the Next Generation: MVPs & GitHub Campus Experts

Before the main event, MVPs joined the GitHub Campus Experts - university students passionate about community building and open source - for a special pre-day at the GitHub office. This gathering was designed to inspire young leaders, recognize their contributions, and encourage them to continue making an impact. MVPs shared stories of their own journeys, emphasizing that community involvement is not just a career milestone but a lifelong commitment to learning, sharing, and growing together.

MVP Jesse Houwing shared: “The preday was a way to inspire these young folks. Many people who enter the workforce are scooped up in the torrents of work. This day was part of giving them thanks for their services in the past years and a way to inspire them to continue community building and open-source contributions.”

Jesse also emphasized the value of MVPs engaging with student communities: “MVPs and Stars have all contributed to community in one way or form. Public speaking, blogging, open source, community organizing. Many of us have been doing this all throughout our professional careers and were awarded at some point. The award isn't the start, it's a token of appreciation along the way. The Campus Experts are at the start of their career. And our possible future stars and MVPs.”

Reflecting on the experience, Houwing added: “I spent a couple of hours on the day one with them, and kept seeing them during the conference days. And later connected over LinkedIn. Many have shared their story and how they were inspired. At the same time, they inspired me. Inspired me to keep sharing, to connect to the younger generations and to learn from their enthusiasm.”

 

Microsoft MVPs Randy Pagels and Rob Bos doing a tech check before they lead a session, “From prompts to productivity: Lessons learned with GitHub Copilot and agentic AI”.

Lessons Learned: From Prompts to Productivity

MVPs Randy Pagels and Rob Bos led a dynamic session, “From prompts to productivity: Lessons learned with GitHub Copilot and agentic AI.” The energy in the room was electric, with attendees eager to learn tips for maximizing Copilot’s capabilities. The session emphasized the importance of collaboration - sharing tips and tricks, building internal communities, and supporting each other’s growth. Rob Bos described the energy in the room during his session: “It was amazing, lots of people leaning in to learn some of our tips on how we get the most out of Copilot. We got quite some questions afterwards as well, showing the interest on the topic!” When asked how the audience might apply what they learned in their day-to-day development work, Bos added, “I think the message about collaborating on the way you use Copilot and sharing tips and tricks really resonated with the audience. A lot of folks were agreeing and felt supported with our focus on that. They will take that back home and start an internal community of sharing their knowledge.”

MVP Randy Pagels, Senior Developer Advocate at GitHub April Yoho (who wrote the foreword in The GitHub Copilot Handbook), and MVP Rob Bos

Launching “The GitHub Copilot Handbook”

At GitHub Universe 2025, Microsoft MVPs Randy Pagels and Rob Bos unveiled their new book The GitHub Copilot Handbook - a comprehensive guide designed to help developers harness the full potential of AI-assisted coding. This book goes beyond basic code completion, offering practical strategies to apply GitHub Copilot across the entire software development lifecycle.

From writing smarter prompts and automating tests to streamlining code reviews and integrating Copilot into IDEs, the handbook equips teams with actionable insights to boost productivity. It also addresses onboarding strategies, building a knowledge-sharing culture, and fostering collaboration through Copilot’s chat and agentic AI features.

Why does this matter? As AI becomes central to modern development, this resource empowers developers and organizations to adopt Copilot effectively, reduce friction, and unlock innovation at scale. By launching the book at GitHub Universe - a hub for cutting-edge developer tools - MVPs signal a clear message: AI-driven development isn’t the future; it’s here, and it’s transforming how we build software today.

When asked about launching the book at the event, Rob Bos said: “We timed the release with GitHub Universe as that is THE GitHub event of the year. We visit San Francisco every year to be at this event and meet up with the community and our friends at GitHub, and both learn and share more knowledge on the platform. Of course, we shared the new book with the people that have supported us in writing it: April Yoho and Martin Woodward!”

Rob also shared what inspired the book and its broader impact: “Over the years we have learned so much from the thousands of engineers we trained on GitHub Copilot and seen a lot of the common pitfalls as well, that we had to share this knowledge for everyone to learn from. What better way to make one cohesive overview that takes engineers from just getting started to an experienced user? And we did not stop there, we also included the basics of Generative AI, as well as how to learn together within a local community of people sharing their lessons learned.” Asked for one key takeaway from the book, Bos continued: “You can use GitHub Copilot for more than just coding! We show use cases for everyone in your team that is not an engineer, and share how they can also leverage the tools, just like you, for requirements engineering, writing better user stories, test cases, and much more!” Asked why building a knowledge-sharing culture around Copilot matters for organizations, Bos explained, “The best way to learn Copilot is from sharing, as we all use it in different ways. To prevent being stuck in the way you always have worked, getting some inspiration from other people is key. As trainers, we have learned the most from other trainers and Copilot users in how they think and work with this tool.”

Microsoft MVPs continue to set the standard for technical excellence and community leadership. MVPs didn’t just show up – they took the lead, inspired, and elevated the entire GitHub Universe conference experience. Their contributions at GitHub Universe 2025 exemplify the spirit of innovation, mentorship, and collaboration that defines our program and makes our community so special. Thank you to all MVPs for driving excellence, inspiring the next generation of tech leaders, and shaping the future of software development!

Want to learn more about MVP impact? Visit the Microsoft MVP Program Blog and join the conversation!

 

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

What's New in Excel (November 2025)


Welcome to the November 2025 update. This month, we’re excited to share several enhancements across Excel. Announced at Ignite, Agent Mode in Excel now includes web search and Anthropic model support, and is available in Excel for Windows—via the Frontier program. Excel for Windows introduces a modernized Get Data dialog, providing a clean, simple starting point for connecting to data. Additionally, users on Windows, web, and iOS can preview comments on protected files directly in email notifications. For Insider users, Excel for iOS adds Liquid Glass styling and template filters, introducing a new, modern home experience.

Excel for Windows:
- Agent Mode in Excel enhancements (Frontier)
- Get Data dialog

Excel for Windows, web, and iOS:
- Comment previews on protected files #FIA

Excel for iOS:
- Liquid Glass and template filters (Insiders)

Excel for Windows

Agent Mode in Excel enhancements (Frontier)

1. Web search. At Ignite last week, we introduced web search in Agent mode. Imagine pulling real-time information from the web straight into your spreadsheet workflows—market trends, historical stats, scientific figures—without juggling browser tabs or copy/pasting from a chat window.

For example, you can ask Agent Mode to compile the latest GDP growth and CO₂ emissions data for G20 countries or create a table of this year's Nobel Prize winners with detailed attributes. Copilot can now pull this data from trusted sources into Agent mode's multi-step workflow and build directly in your spreadsheet, saving time and reducing manual effort. Plus, it supports citation links for transparency so you can have confidence in the output. This integration is perfect for analysts, researchers, and anyone who needs up-to-date external data to make informed decisions.

2. Anthropic model support. Choice matters, and we are committed to providing multi-model options in Microsoft 365. Building on Researcher agent and Copilot Studio, Agent mode now offers an option to choose Anthropic’s Claude models to power your experience. Just choose the "Try Claude" option to get started. For enterprise users: your admin must allow access to Anthropic AI models. Learn more about using Claude in Agent mode in Excel.

Claude brings a different approach to spreadsheet generation offering a distinct experience from the default OpenAI models powering Agent Mode. While Claude streams its chain-of-thought and explanations differently, ongoing improvements aim to deliver a smooth experience in this early preview. This flexibility ensures you can pick the model that best fits your needs—whether it’s speed, accuracy, or style.

3. Now available in Excel for Windows. Last month, we introduced Agent mode in Copilot in Excel for Web through the Frontier program. At Ignite, we announced that Agent mode is now available in Excel for Windows too, making AI assistance available for users and professionals who rely on Excel in the desktop app for their work. While Mac support is planned for later, Windows users will benefit immediately from this rollout. Users must be in the Insiders Beta Channel on Windows.

Get Data Dialog

The modern Get Data dialog gives you a clean, simple starting point for connecting to data. With built-in search and quick access to popular data sources, you can easily find the right source and start working on your data. This feature is currently rolling out to Windows Current Channel users. Read more here >

Excel for Windows, web, and iOS

Comment previews on protected files #FIA

Excel now lets you preview comments on protected files directly from your email notifications. When someone adds a comment, the email includes the comment text and its context within the file, so you can quickly review feedback without unlocking or opening the document.

Excel for iOS 

Liquid Glass and template filters (Insiders)

Your favorite Microsoft 365 apps on iPhone, iPad, and Apple Vision Pro now feature Liquid Glass styling. We’ve also made the search experience available from the bottom of the screen, to align with iOS 26’s search patterns and make it easier to use with one hand.

When searching for templates, you’ll now also see quick filter buttons at the top that let you browse by category – like Flyers, Resumes, or Invoices – instead of scrolling through a single long list, so finding the perfect template is faster and more intuitive. Read more here >

 


Check if a specific feature is in your version of Excel


 

 

 


Many of these features are the result of your feedback. THANK YOU! Your continued Feedback in Action (#FIA) helps improve Excel for everyone. Please let us know how you like a particular feature and what we can improve upon—"Give a compliment" or "Make a suggestion." You can also submit new ideas or vote for other ideas via Microsoft Feedback.

Subscribe to our Excel Blog and the Insiders Blog to get the latest updates. Stay connected with us and other Excel fans around the world – join our Excel Community and follow us on X, formerly Twitter.

Special thanks to our Excel MVPs David Benaim, Bill Jelen, Alan Murray, and John Michaloudis for their contribution to this month's What's New in Excel article. David publishes weekly YouTube videos and regular LinkedIn posts about the latest innovations in Excel and more. Bill is the founder and host of MrExcel.com and the author of several books about Excel. Alan is an Excel trainer, author and speaker, best known for his blog computergaga.com and YouTube channel with the same name. John is the Founder & Chief Inspirational Officer at MyExcelOnline.com where he passionately teaches thousands of professionals how to use Excel to stand out from the crowd.

 


How to ensure your expert C# knowledge doesn’t make you a TypeScript noob


Coming from C# can quietly sabotage your TypeScript code in ways that are easy to overlook. Because the two languages share familiar syntax and patterns, it is natural to assume that the same instincts will carry over when modeling data or structuring state in an Angular app. But TypeScript behaves very differently once the types disappear at runtime, and approaches that make perfect sense in C# (like nullable flags, enums for everything, and class-driven thinking) can introduce unnecessary friction. Understanding where these habits diverge sets the stage for writing cleaner, more predictable TypeScript.


For full-stack developers, mental context switching never ends. When you’re moving between APIs and Angular apps, similarities across languages can feel deceptively comforting. Sometimes, this works in your favor. Concepts like ReactiveX and Observability translate cleanly between JavaScript and C#, so reusing mental models feels natural.

But other times, things only look similar on the surface. Looks like a duck, quacks like a duck, but it’s actually an excavator.

Bulldozer With Speech Bubble Reading Quack

The origin story

Many Microsoft-stack developers arrived at TypeScript accidentally. For years, Microsoft’s default web tooling centered on MVC, where the frontend was mostly static pages sprinkled with jQuery for interactivity. As those apps grew more complex, Microsoft needed a clearer frontend path, so they started including Angular templates out of the box.

Suddenly, C# developers found themselves working in Angular because it shipped with their project. TypeScript was not something we chose to learn. It was something we inherited.

This created a predictable misalignment. Developers treated their Angular code as an extension of their C# projects rather than a fundamentally different environment. And since C# is a strongly typed, runtime-aware language with features like reflection, many assumptions did not translate. TypeScript, by contrast, evaporates its types at build time. Your shipped code is plain JavaScript. No runtime types. No metadata. No safety net.

Yet TypeScript still has to support the entire expressive surface of JavaScript. That is why you can rename a .js file to .ts and watch everything still run. TypeScript’s type system must co-exist with JavaScript’s flexibility, which leads to some mind-bending patterns when you first encounter them.

This means one thing: approaching TypeScript with C# instincts can make your apps rigid, confusing, or unnecessarily defensive. I wrote TypeScript like a C# developer for years. It held me back. And if you learned C# first, the same might be happening to you.

It’s all about the types

Not “types” in the philosophical sense. The type keyword.

Many developers start with interfaces for everything. It is what the documentation shows. It is familiar. It feels class-adjacent.

Let’s imagine a simple API response shape:

interface ApiResponse {
  data?: TypedResponseObjectDefinition;
  success?: boolean;
  errorMessage?: string;
  loading: boolean;
}

interface TypedResponseObjectDefinition {
  id: number;
  name: string;
}

You know the drill. Infer loading, success, and error states based on combinations of nullable fields. Then wire up your fetch:

fetchData(): void {
  this.userId++;
  if (this.userId > 12) {
    this.userId = 0;
  }

  this.response$ = this.http.get<TypedResponseObjectDefinition>(
    'https://jsonplaceholder.typicode.com/users/' + this.userId
  ).pipe(
    map((x) => ({
      data: x,
      success: true,
      loading: false
    }) as ApiResponse),
    catchError((error: Error) => of<ApiResponse>({
      success: false,
      errorMessage: error.message,
      loading: false
    })),
    startWith({ loading: true } as ApiResponse)
  );
}

The component template is equally predictable. Check for loading. Then data. Otherwise errorMessage.

The problem is that this structure is incredibly fuzzy. You are relying on nullable properties and human intuition to describe state. It works, but it is brittle. It silently allows impossible states, for example an object with success: false and data populated.
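To see that fuzziness concretely, here is a minimal sketch (with `data` made optional, as the error and loading branches require). It type-checks without complaint, yet encodes a state that can never legitimately occur:

```typescript
interface TypedResponseObjectDefinition {
  id: number;
  name: string;
}

// The nullable-flag shape: state is inferred from field combinations.
interface ApiResponse {
  data?: TypedResponseObjectDefinition;
  success?: boolean;
  errorMessage?: string;
  loading: boolean;
}

// Compiles fine, yet is logically impossible: a "failed" response that
// carries data and still claims to be loading.
const impossible: ApiResponse = {
  data: { id: 1, name: 'Ada' },
  success: false,
  errorMessage: 'Request failed',
  loading: true,
};
```

The compiler has no way to object, because nothing in the type says these fields are mutually exclusive.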

Enter discriminated unions.

How type helps

Our API can only return three logical outcomes:

  • Success
  • Failure
  • Loading

We can express this directly using a union:

type ApiOutcome =
  | { state: 'SUCCESS'; data: TypedResponseObjectDefinition }
  | { state: 'ERROR'; error: string }
  | { state: 'LOADING' };

This unlocks one of TypeScript’s nicest features: exhaustive checking. Your IDE will generate all required branches inside a switch. Access is constrained to what each state actually allows. No nullable gymnastics. No speculative fields.
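Restating the union in a self-contained sketch, the exhaustive `switch` looks like this; the `never` trick in the `default` branch is what turns a forgotten state into a compile error:

```typescript
interface TypedResponseObjectDefinition {
  id: number;
  name: string;
}

type ApiOutcome =
  | { state: 'SUCCESS'; data: TypedResponseObjectDefinition }
  | { state: 'ERROR'; error: string }
  | { state: 'LOADING' };

function describeOutcome(outcome: ApiOutcome): string {
  switch (outcome.state) {
    case 'SUCCESS':
      // `data` is only accessible in this branch
      return `Employee name is ${outcome.data.name}`;
    case 'ERROR':
      // and `error` only in this one
      return `Error: ${outcome.error}`;
    case 'LOADING':
      return 'Loading...';
    default: {
      // If a fourth state is ever added to ApiOutcome, this assignment
      // stops compiling, forcing you to handle the new case.
      const exhaustive: never = outcome;
      return exhaustive;
    }
  }
}
```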

Updated fetch:

fetchData(): void {
  this.userId++;
  if (this.userId > 12) {
    this.userId = 0;
  }

  this.response$ = this.http.get<TypedResponseObjectDefinition>(
    'https://jsonplaceholder.typicode.com/users/' + this.userId
  ).pipe(
    map((x) => ({
      state: 'SUCCESS',
      data: x
    }) as ApiOutcome),
    catchError((error: Error) => of<ApiOutcome>({
      state: 'ERROR',
      error: error.message
    })),
    startWith({ state: 'LOADING' } as ApiOutcome)
  );
}

The template becomes drastically clearer:

@if (response$ | async; as response) {
  @switch (response.state) {
    @case ('LOADING') {
      Loading...
    }
    @case ('SUCCESS') {
      Employees name is {{ response.data.name }}
    }
    @case ('ERROR') {
      Error: {{ response.error }}
    }
  }
  <pre style="background-color: lightgray">{{ response | json }}</pre>
}

This version is expressive, impossible to misuse, and nearly self-documenting. Once you get used to it, it is hard to return to nullable booleans holding everything together with duct tape.

Sad, sad enums

C# developers love enums. They are safe, predictable, and clear. But TypeScript enums behave differently.

Consider:

enum ToddlersWords {
  Panday,
  Bridge,
  Bird,
  Spot,
  Flowers
}

String labeling with numeric backing. Sounds fine. Then you do this:

Object.keys(ToddlersWords);
Object.values(ToddlersWords);

You get both numbers and labels. Equality checks compare numeric representations, not strings. It is technically correct behavior, but rarely what you intend for UI logic or domain modeling.
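Here is a minimal sketch of that surprise; the inline comments show what the runtime object actually contains:

```typescript
// Numeric enums compile to an object with a reverse mapping, so keys and
// values mix labels and numbers - rarely what you want for UI logic.
enum ToddlersWords {
  Panday,
  Bridge,
  Bird,
  Spot,
  Flowers,
}

const keys = Object.keys(ToddlersWords);
// ['0', '1', '2', '3', '4', 'Panday', 'Bridge', 'Bird', 'Spot', 'Flowers']
const values = Object.values(ToddlersWords);
// ['Panday', 'Bridge', 'Bird', 'Spot', 'Flowers', 0, 1, 2, 3, 4]
```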

Union time

If what you want is a set of string options, literal unions are the better tool:

const Words = ['Panday', 'Bridge', 'Bird', 'Spot', 'Flowers'] as const;
type ToddlerWords = typeof Words[number];

This keeps your domain values string-only, type-safe, and iterable without the strange enum duplication.

Defining arrays is straightforward:

toddlerWords: ToddlerWords[] = ['Panday', 'Bridge', 'Bird', 'Spot', 'Flowers'];

If you try to add a value that does not belong, TypeScript will complain at compile time instead of letting a bad state leak into your UI.

Harness the power of TypeScript

Writing clear, intentional TypeScript leads to simpler code and easier maintenance. The language has quirks, and its type system can feel alien when you are coming from a runtime-typed world like C#. But once you adopt patterns that align with JavaScript’s flexibility rather than resisting it, your apps get sturdier and your mental load shrinks.

If you want to explore the examples, feel free to clone the repo. And if your C# instincts shaped how you approached TypeScript, share how it went for you in the comments.

The post How to ensure your expert C# knowledge doesn’t make you a TypeScript noob appeared first on LogRocket Blog.


AI-Powered Alt Text Generation with mostlylucid.llmlltText


Using AI tooling effectively - a lessons learned exercise


TL;DR: Whether you are for or against AI, if you do not at least read up on the landscape and the capabilities out there, you WILL fall behind. These tools can be used in many different ways and provide support where you least expect it.

It is not about taking away the fun tasks: if you do not want it to code for you, then don't let it; if you want to create your own artwork, fine, do that. But do not discount the capabilities of the tools because of preconceptions.

You do not need to use it, but understanding what you CAN use it for better arms you in the road ahead to know what you are up against.

Work smarter, not harder

AI tools are evolving ever faster, as is any technology or area that adopts them. It was already an almost impossible task to “keep up” with advances, causing developers and engineers to keep narrowing their specialties, reducing the mental load needed to take in the latest and greatest.

Since the unveiling of the current generation of AI tooling, the curve keeps getting sharper and sharper with facilities such as:

  • Guides on instructions to provide consistent context between queries.
  • MCP servers and their capabilities have increased.
  • More and more LLM model services for injecting capabilities into a workflow or service.
  • Prompt generation research.
  • New agent modes from Chat -> Agent -> Planning -> …

The list goes on and keeps getting bigger: refinements, new tools, new features. Like any technological “revolution”, its speed is increasing.

However, one thing remains clear and should not be underestimated: AI STILL does not know whether it is right. So always be on your guard and check the facts! That does not mean you should not use the tools; just learn the correct ways to work with them.

Based on my understanding (at the time of writing), this article aims to impart how my workflow has evolved to a level where I can institute “some” level of trust in my processes with the results from AI.

Fun fact: even today, when I started writing this post, I listened to a podcast with Jesse Liberty that featured “James Montemagno on Vibe Coding” (a term I do not like, tbh). In this episode James mentions a few “tricks” I had not considered before, such as using AI to generate a tool or a prompt based on your current activity, to enable you to do that activity faster next time. Effectively, engineering your workflow as you DO IT.

It starts with a prompt - The Misconception

I have said it previously, but it is worth repeating: long gone are the days when you put in a simple question to get an answer or complete a task. Sure, on your phone or via your browser the results are FAR BETTER than dropping the same query into Bing/Google, but for any serious workflow, without context/structure and even architecture, the most basic of queries will not result in anything that resembles what you desire (except for that one day in March when the hares are flocking).

AI is suggestive, not indicative. It NEVER knows if it is right, only that the probabilities align with a result compared to your query.

So let us break down the critical paths as I understand them at the moment.

Generate Instructions/Agent document

Instructions or agent guides are critical to modern AI interactions: they define the background to your request/conversation, define the scope of the work, and, critically, inform the AI of any boundaries it should not cross. Traditionally they are also used to define the architecture to follow and conventions to use, but they can also serve as a thought board.

Do not think of your instructions as a static document; they are a living, breathing entity that should grow and learn as your project evolves. Include tips/tricks, any repeated lessons, or “daft things” the AI performs that you would rather it did not.

The instruction guide is sent as pre-context to EVERY chat/conversation - information the AI should know whenever it is sent off to think about anything!

But do not worry about its length or performance; the local agents “tune” the instructions and summarize them for easy digestion. You would be surprised how much “pre-processing” happens in your client before it even contacts the cloud AI agent.

Simple steps to follow as your project evolves, whether it is a coding session, a Q&A query or just taking notes for a subject:

  • If you have an existing project, use the built-in VS Code (or other tool) “Generate Instructions” command to set an initial framework.
  • If you are beginning thoughts for a project, start a conversation with the aim of generating this instruction guide.
  • Keep refining the instructions, with or without AI help to frame future conversations.

Above ALL, keep reviewing the instructions manually after generation. Do NOT just trust the result; it is meant as a starting point, not a finish line. Keep reviewing, updating, and even deleting content until you are satisfied with the result. And as your project grows, so should your instruction guide.
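As a concrete starting point, here is a minimal sketch of what such an instruction file might look like. The structure and every entry below are illustrative assumptions, not taken from any real project:

```markdown
# Project instructions

## Context
- Angular frontend with a C# (.NET) API backend.
- Follow the existing folder structure under src/app/features.

## Boundaries
- Do not modify anything under /legacy.
- Ask before adding new third-party dependencies.

## Lessons learned (keep growing this list)
- Prefer discriminated unions over nullable flags for state.
- The test runner is already configured; do not scaffold a new one.
```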

Use MCPs relevant to your process

Model Context Protocol servers - MCP for short - take their name from the protocol used by an AI LLM to communicate with a local AI or context agent: a tool used to support the AI on its journey. Their capabilities can range from a simple API serving data or results to full-blown AI agents themselves; the MCP protocol effectively allows AIs to talk to each other and work together to reach a goal.

MCP servers are fast becoming the “go to” tool to assist in local AI support and cut down on the amount of traffic used by cloud agents. They help form, and in some cases generate, results to assist the AI on its journey. These can range from:

  • Simple functions used in calculations or a fact table.
  • Headers for local AI agents or models, such as image generation.
  • Self-contained LLMs, primarily used for documentation or API references - useful for keeping the AI bound to accurate source information rather than just the web.
  • Proxies to other cloud hosted AI agents, woven together to form a mesh.

Do not be afraid to build your own MCP server locally (with AI's help). If you find certain areas where you are constantly correcting the AI agent, or need it to use pre-defined terms in its responses, just generate your own server to manage this for you and update your instructions to defer to that MCP server in responses. This can lighten the load in your instructions if you have an agent maintain it for you.

Each MCP effectively adds another tool - or in some cases MANY tools - the AI can use in constructing a response, as well as off-loading tasks locally to save time or money (credits). And there is a growing movement of using AI to generate your own local MCP servers, especially for repeated tasks or guardrails.

Too many tools can degrade performance. While having lots of MCP servers at the ready is fantastic, be aware that you need to limit which servers/tools are active at any one time. Giving the AI too many options can severely degrade performance, as it needs to check the toolset with each interaction (even within a response) to see if there is a tool to help.

Make sure that each interaction/conversation uses only the tools that will assist with that thread. Not asking it about code or pull requests? Then disable the GitHub MCP tools. The same is critical for any language- or toolset-focused conversations: just use the tools that will help with your query.
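For example, in VS Code, MCP servers can be registered per-workspace (commonly in a `.vscode/mcp.json` file). The server names, commands, and package references below are purely illustrative:

```json
{
  "servers": {
    "team-docs": {
      "command": "node",
      "args": ["./tools/docs-mcp-server.js"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
```

Disabling a server for a conversation then becomes a matter of toggling it in the tool picker rather than editing configuration.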

Create custom prompts for repeated tasks

A recent trick I picked up surprised me with how effective it was. If you find yourself asking the same question, making the same corrections, or even just doing the same thing repeatedly, then stop and make a new customized prompt for yourself.

You see these already in VS Code when you hold Ctrl + Shift + P (or Cmd + Shift + P on Mac) to bring up the command palette; from here you see the regular actions available to you, based on the extensions you have installed and the MCP servers you have registered. Once activated, they complete a set of tasks using what is available to perform an operation, be it cloning a repository or formatting your document (my most used command).

But you can also create your own - and again, use AI to help you generate it. In the middle of a conversation, when you have completed a task, ask the AI to summarize the task that was completed and generate a new command in the tool. You will get the source for the command, allowing you to tweak it to your needs (and likely remove some unnecessary steps) and then save it under a command name. Thus, every time you need to perform the same task, save yourself some keystrokes and just launch your command instead.
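In VS Code, such reusable prompts can also be stored as prompt files (for example under `.github/prompts/`). The file name, front matter, and steps below are a hypothetical sketch, not a prescribed format:

```markdown
---
description: Summarize the completed refactor and update the changelog
---
Summarize the changes made in this conversation, then:
1. Append a dated entry to CHANGELOG.md describing them.
2. List any follow-up tasks as a checklist.
```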

Leverage resources online

A common misconception when working with AI agents is to assume they are trawling the web for the current/latest information. Sadly, this is not true (for the most part); LLMs rely on historical information they have been trained on. If there is newer information, it is best to direct the AI to also read specific documents, websites, or other sources of information in addition to its trained model.

Note that this does not “train” the AI on new information; the data that is gathered is only summarized and used as part of the current conversation. If you ask it again in a new conversation without providing the source, it will not know.

You can add such sources to your instruction guide, although they will only reflect the last time you generated the guide. Alternatively, you can use an MCP server such as “Context7” (a useful docs MCP), or generate your own MCP server that refreshes its memory from online sources you critically rely upon, like your company’s document server.

Cost Management - The hidden tax

While many of the tools we use today offer generous free tiers or fixed monthly subscriptions, the move towards “pay-as-you-go” models for advanced agents and API access brings a new consideration: Cost.

Every interaction with an AI is not just the question you ask; it includes the entire conversation history, your instruction guides, and any files you have attached. This “Context Window” is re-sent with every single query. If you have a long-running chat with megabytes of code attached, you are paying for that data transfer every time you press enter.

Token hygiene

To keep costs (and latency) down, practice good token hygiene:

  • Start fresh: Do not keep one giant chat going forever. Start a new conversation for each distinct task.
  • Limit context: Only attach the files relevant to the specific question.
  • Watch the loops: Autonomous agents that “think”, “plan”, and “verify” can trigger dozens of internal calls for a single user request.

Being mindful of your token usage does not just save money; it often results in faster and more accurate responses because the AI is not distracted by irrelevant noise from 50 messages ago.
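To see why this matters, here is a small back-of-the-envelope sketch; the four-characters-per-token ratio and the price are illustrative assumptions, not real figures. Because the whole history is re-sent each turn, one long chat costs far more in input tokens than several fresh, focused ones:

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token (a common rule of thumb)."""
    return max(1, len(text) // 4)


def conversation_cost(messages: list[str], price_per_1k_tokens: float) -> float:
    """Total input cost when every turn re-sends the entire history so far."""
    total_tokens = 0
    history_tokens = 0
    for message in messages:
        history_tokens += estimate_tokens(message)
        total_tokens += history_tokens  # the full context window goes out again
    return total_tokens / 1000 * price_per_1k_tokens


# Ten questions in one long chat vs. ten fresh single-question chats.
questions = ["How do I parse JSON safely in Python?"] * 10
long_chat = conversation_cost(questions, price_per_1k_tokens=0.01)
fresh_chats = sum(conversation_cost([q], price_per_1k_tokens=0.01) for q in questions)
print(f"one long chat: ${long_chat:.4f} vs ten fresh chats: ${fresh_chats:.4f}")
```

The long chat costs several times more for the same ten questions, which is exactly why starting a new conversation per task pays off.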

Change your workflow

AI tooling should not be just about work; used correctly, the tools can influence or guide almost any task, such as:

  • Guided research.
  • Ideation and thought generation.
  • Validating preconceptions.
  • Applying structure.
  • Architecture compliance.
  • General validation.

And these are just from my own evolving workflow over recent months and years: using AI not just to do work, but to plan, to evolve, and to generate more tools that further accelerate your workflow, the ultimate expression of DRY (Don't Repeat Yourself).

Did you know there are several “modes” of AI usage now? Not just “agent” (agentic) or “ask” (query), but also “plan”, which is great for digging deep and producing a stepped plan of actions that either the AI or you can work from. There are several others, and even custom modes you can create yourself to tailor the experience to your needs.

Guided research

When researching any topic, wading through mounds of documents, websites, and other information in search of the latest (or even historical) material can be very time consuming.

Granted, AI output should be treated as source material and taken with a grain of salt, as critical facts will need to be verified. Even with recent advances in correction and validation, it is still just an approximation. In many cases it is worth asking different agents to read over the results and verify the sources.

You might think: why bother, if the information is potentially unreliable, what is the point? The answer is simple: the AI can search through data and produce a shortlist far quicker than any human can, reducing the scope of your research with summaries.

But do not stop there. Keep iterating on the results, demanding references and lookups to the latest sources to verify them; in the end this gives you data you can rely on for your studies.

Even so, when summarizing the research, use your own words. Treat the data as refined source material, a smaller window onto the wider world contained within the results. The ultimate aim is to save you TIME; if nothing else, the AI can type faster than you can.

Ideation and thought generation

A concept I have also been using AI tooling for of late is idea generation. You get a half sleep-deprived thought or a waking moment of “something”; it sounds fine in principle, but will it work?

This is where, depending on the type of project, I will lean on AI tooling to help validate my thoughts:

  • Has this been done before, and if so, what worked and what did not?
  • Is there any market opportunity for such an idea?
  • What are the options for a go-to-market strategy?

If the idea is a solution, I may even use GitHub Copilot to spin up a new project automatically: generating source, identifying workloads, and proposing deployment options. It will take on the task, create a new repository, and start building out the framework of a potential solution (fantastic for web solutions). Give it enough detail, reference similar things you have in mind, and set it off. All of this can happen without you being involved (or even at your desk), thanks to workflow agents.

It does not need to end up as something real, except to prove to yourself whether the idea is valid or even possible. It gives you a framework to build upon and cuts down the time spent on personal research.

Validating preconceptions

Throughout my development and management life I have seen how we build up our own internal biases; it is not always intentional, and it can lead you to incorrect assumptions. My geography knowledge, for instance, is waaay out of date.

Most consumers of AI will turn to the agent on their phones for a quick query, usually with FAR better results than a one-line Google search (with so many ad results these days). The results are better, but the queries are usually very inefficient.

I have been “trying” to teach my family better etiquette for searching with AI assistance: forming not just the question, but also shaping how the response comes back. For example:

Instead of just:

What are the best locations to go on holiday in June?

Better to give the AI heads up as to what you are really looking for:

I am looking to go on holiday in June; what are the best locations to visit? I prefer hot locations (but not too hot) with spa facilities and a good nightlife. Make sure to also review other visitors' reviews of locations and only include places that reviewed well. Format the results into a table listing the destination, the hotel or cabin, the average review rating, and links to booking information. Also check the sustainability rating for the areas, to ensure they are not overpopulated with tourists and have affordable travel insurance (around £30 pp).

This structure of query is simply more informative while staying natural, and it ticks as many boxes as possible for what you are REALLY looking for:

  • What is the purpose of the search?
  • What are the constraints?
  • How should the results be formatted?
  • Any additional concerns or limitations.

This is just an example, but as time and tide have shown us, the more effort you put into defining your question (and the desired results), the better the output will be, reducing the need for further searches to clarify the results.

Applying structure

One of the hardest parts of any software (or other) design is architecture: how well formed the solution needs to be, what constraints are needed, which conventions should be used, and where the checkpoints need to be in order to avoid pitfalls.

Of the many designs I have done in the past, the longest struggle is always to keep refining the structure around a project, whether in design or implementation, and to ask what other considerations need to be applied. Even with many years of training and experience, you always wonder “what if”, or try to evaluate uncommon scenarios that could affect operation.

Using AI tooling to review the architecture, critically looking for “things you did not consider”, is crucial. Even if you choose not to act on the information because it is not relevant, having another set of eyes with a much larger dataset behind it can really help. I have solved many an issue in a junior's design, as my seniors have done for me, so why not also use the tools available as a third set of eyes?

Defining a well-architected solution leads into the next section: using AI to ENFORCE that architecture and prevent future work from overriding the design (or at least make it question the approach).

Architecture compliance

With any well-defined architecture, adhering to it can be challenging. But you can use AI to provide a ground truth based on the defined architecture, as well as to verify how compliant any development work is.

Additionally, the architecture itself can be used as a guide for any future work, using the implementation, design or instructions to inform work that is being undertaken.
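As a sketch of how that guidance can be made persistent, the rules can live in a repository instructions file that the agent reads with every request; the path below follows the GitHub Copilot convention, and the rules themselves are hypothetical examples for illustration:

```markdown
<!-- .github/copilot-instructions.md -->
## Architecture rules
- Implement new features as services under `src/services/`, one folder per service.
- Services communicate only through the message bus; never import another service directly.
- Every public service method requires a matching unit test under `tests/`.
- When reviewing changes, flag anything that violates these rules and question the approach.
```

With rules like these in place, both generated code and AI reviews are measured against the same documented ground truth.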

As an example, when I was building a new component within a service-driven architecture, the AI tooling correctly identified all the key touchpoints in the architecture for adding the new component, as well as adding unit tests around the feature. It derived everything it needed from:

  • The existing implementations
  • The documentation associated with the project
  • Designed instructions surrounding the architecture and bounds
  • And existing Unit tests for other services (in fact it suggested some additional tests which we then implemented across the board)

This also meant that the new work complied with the existing architecture, which made the final review and update a lot quicker and simpler. I would estimate it reduced the time needed to build the new component by about 80%, meaning I could get more done faster. BUT, I still had to do a fair amount of work reviewing the implementation, though no more than I normally would.

General validation

Checks and balances are key for any professional work. I now regularly use Copilot as a code-review agent, or run the Copilot review process over all my documentation, not only to check the usual grammar and consistency, but also to check my own facts. I prefer to always write myself and use the AI as a reviewer, and NEVER the other way round (well, except that one time).

There are many AI tools available, and you will probably find you have already been using a lot of them, such as the spell checker in your documents (a nascent, strictly contained form of AI), but writing with AI and double-checking your own preconceptions is now far easier. As an author, I used about seven different tools in my personal proofing exercise; with AI added to that workflow, I have been able to reduce that to three (because it pays not to have all your eggs in one basket), which saves me considerable time in review.

Summary

Hopefully this article opens up some additional thought patterns, as mine do almost daily while I use, reuse, and learn from my own usage. I have gone far beyond the level of “generate this method” or “this error occurs, why?”.

Keeping up to date is a struggle, but it does not have to be: use all the tools available to you to make your life easier, or at least smoother. The tools can simply type faster than you, so use that to your advantage.

But even given all these resources, DO NOT trust the result: check it, validate it, challenge the generated preconceptions. Even with the extra steps of reading through what has been created, the workflow is still significantly faster. So use it when it makes sense, and for the things you take joy in, and keep having fun.
