Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Is that allowed? Authentication and authorization in Model Context Protocol

Learn how to protect MCP servers from unauthorized access and how authentication of MCP clients to MCP servers works.

Microsoft 365 Apps for Enterprise Get Stronger Defenses with Latest Security Baseline


Key Takeaways:

  • Microsoft releases security baseline update for M365 Apps for Enterprise version 2512.
  • Excel, PowerPoint, and core system settings receive enhanced protections.
  • These updates block risky links, insecure protocols, and legacy automation features.

Microsoft is beefing up its enterprise defenses this week with a major security and compliance update for Microsoft 365 Apps for Enterprise version 2512. This new baseline strengthens Excel, PowerPoint, and core system settings to help enterprises stay protected against evolving cyber threats.

Specifically, the latest baseline updates several Microsoft 365 Apps components, especially Excel, PowerPoint, and core system settings. These updates address new attacker techniques, customer feedback, and secure‑by‑design principles.

Excel security enhancements

According to Microsoft, File Block now includes external link files, which prevents external links in blocked workbooks from refreshing. “Attempts to create or update links to blocked files return an error. This prevents data ingestion from untrusted or potentially malicious sources,” Microsoft explained.

Blocking insecure protocols

This new security baseline also blocks all non‑HTTPS protocols when opening documents. This capability helps to eliminate unsafe downgrade paths and enforces TLS‑secure communication across apps and cloud services.

Additional hardening

Lastly, Microsoft’s new security baseline blocks risky automation interfaces such as MSGraph.Application and MSGraph.Chart; Microsoft 365 Apps will render the associated objects as static images instead. Moreover, this baseline disables the legacy OrgChart add‑in for security reasons. It also prevents fallback to FrontPage Server Extensions RPC to ensure the use of modern, authenticated file‑access methods.

Starting this week, administrators can deploy the security baseline via Office cloud policies, ADMX policies through Microsoft Intune, or Group Policy for on-premises Active Directory environments. Microsoft has divided the more complex settings into individual Group Policy Objects. These include policies that block Dynamic Data Exchange, legacy file formats, legacy JScript, and unsigned macros.

The post Microsoft 365 Apps for Enterprise Get Stronger Defenses with Latest Security Baseline appeared first on Petri IT Knowledgebase.


Building AI-Powered SaaS Businesses


In preparation for our upcoming Building SaaS Businesses with AI Superstream, I sat down with event chair Jason Gilmore to discuss the full lifecycle of an AI-powered SaaS product, from initial ideation all the way to a successful launch.

Jason Gilmore is CTO of Adalo, a popular no-code mobile app builder. A technologist and software product leader with over 25 years of industry experience, Jason has spent 13 years building SaaS products at companies including Gatherit.co and the highly successful Nomorobo, and has served as CEO of the coding education platform Treehouse. He’s also a veteran of Xenon Partners, where he leads technical M&A due diligence, advises their portfolio of SaaS companies on AI adoption, and previously served as CTO of DreamFactory.

Here’s our interview, edited for clarity and length.

Ideation

Michelle Smith: As a SaaS developer, what are the first steps you take when beginning the ideation process for a new product?

Jason Gilmore: I always start by finding a name that I love, buying the domain, and then creating a logo. Once I’ve done this, I feel like the idea is becoming real. This used to be a torturous process, but thanks to AI, my process is now quite smooth. I generate product names by asking ChatGPT for 10 candidates, refining them until I have three preferred options, and then checking availability via Lean Domain Search. I usually use ChatGPT to help with logos, but interestingly, while I was using Cursor, the popular AI-powered coding editor, it automatically created a logo for ContributorIQ as it set up the landing page. I hadn’t even asked for one, but it looked great, so I went with it!

Once I nail down a name and logo, I’ll return to ChatGPT yet again and use it like a rubber duck. Of course, I’m not doing any coding or debugging at this point; instead, I’m just using ChatGPT as a sounding board, asking it to expand upon my idea, poke holes in it, and so forth.

Next, I’ll create a GitHub repository and start adding issues (basically feature requests). I’ve used the GitHub kanban board in the past and have also been a heavy Trello user at various times. However, these days I keep it simple and create GitHub issues until I feel I have enough to constitute an MVP. Then I’ll use the GitHub MCP server in conjunction with Claude Code or Cursor to pull and implement these issues.

Before committing resources to development, how do you approach initial validation to ensure the market opportunity exists for a new SaaS product?

The answer to this question is simple. I don’t. If the problem is sufficiently annoying that I eventually can’t resist building something to solve it, then that’s enough for me. That said, once I have an MVP, I’ll start telling everybody I know about it and really try to lower the barrier associated with getting started.

For instance, if someone expresses interest in using SecurityBot, I’ll proactively volunteer to help them validate their site via DNS. If someone wants to give ContributorIQ a try, I’ll ask to meet with the person running due diligence to ensure they can successfully connect to their GitHub organization. It’s in these early stages of customer acquisition that you can determine what users truly want rather than merely trying to replicate what competitors are doing.

Execution, Tools, and Code

When deciding to build a new SaaS product, what’s the most critical strategic question you seek to answer before writing any code?

Personally, the question I ask myself is whether I seriously believe I will use the product every day. If the answer is an adamant yes, then I proceed. If it’s anything but a “heck yes,” then I’ve learned that it’s best to sit on the idea for a few more weeks before investing any additional time.

Which tools do you recommend, and why?

I regularly use a number of different tools for building software, including Cursor and Claude Code for AI-assisted coding and development, Laravel Forge for deployment, Cloudflare and SecurityBot for security, and Google Analytics and Search Console for analytics. Check out my comprehensive list at the end of this article for more details.

How do you accurately measure the success and adoption of your product? What key metrics (KPIs) do you prioritize tracking immediately after launch?

Something I’ve learned the hard way is that when you’re in a hurry to launch a product, you tend to neglect adding an appropriate level of monitoring. I’m not necessarily referring to monitoring in the sense of Sentry or Datadog; rather, I’m referring to simply knowing when somebody starts a trial.

At a minimum, you should add a restricted admin dashboard to your SaaS which displays various KPIs such as who started a trial and when. You should also be able to quickly determine when trialers reach a key milestone. For instance, at SecurityBot, that key milestone is connecting their Slack, because once that happens, trialers will periodically receive useful notifications right in the very place where they spend a large part of their day.

On build versus buy: What’s your critical decision framework for choosing to use prebuilt frameworks and third-party platforms?

I think it’s a tremendous mistake to try to reinvent the wheel. Frameworks and libraries such as Ruby on Rails, Laravel, Django, and others are what’s known as “batteries included,” meaning they provide 99% of what developers require to build a tremendously useful, scalable, and maintainable software product. If your intention is to build a successful SaaS product, then you should focus exclusively on building a quality product and acquiring customers, period. Anything else is just playing with computers. And there’s nothing wrong with playing with computers! It’s my favorite thing to do in the world. But it’s not the same thing as building a software business.

Quality and Security

What unique security and quality assurance (QA) protocols does an intelligent SaaS product require that a standard, non-AI application doesn’t?

The two most important are prompt management and output monitoring. To minimize response drift (the LLM’s tendency toward creative, inconsistent interpretation), you should tightly define the LLM prompt and test it rigorously and repeatedly against diverse datasets to ensure consistent, desired behavior.

Developers should look beyond general OpenAI APIs and consider specialized custom models (like the 2.2 million available on Hugging Face) that are better suited for specific tasks.

To ensure quality and prevent harm, you’ll also need to proactively monitor and review the LLM’s output (particularly when it’s low-confidence or potentially sensitive) and continuously refine and tune the prompt. Keeping a human in the loop (HITL) is essential: At Nomorobo, for instance, we manually reviewed low-confidence robocall categorizations to improve the model. At Adalo, we’ve reviewed thousands of app-building prompt responses to ensure desired outcomes.

Critically, businesses must transparently communicate to users exactly how their data and intellectual property are being used, particularly before passing it to a third-party LLM service.

It’s also important to differentiate when AI is truly necessary. Sometimes, AI can be used most effectively to enhance non-AI tools—for instance, using an LLM to generate complex, difficult-to-write scripts or reviewing schemas for database optimization—rather than trying to solve the core problem with a large, general model.

Marketing, Launch, and Business Success

What are your top two strategies for launching a product?

For early-stage growth, founders should focus intently on two core strategies: prioritizing SEO and proactively promoting the product.

I recommend prioritizing SEO early and aggressively. Currently, the majority of organic traffic still comes from traditional search results, not AI-generated answers (GEO). We are, however, seeing GEO account for a growing share of visitors. So while you should focus on Google organic traffic, I also suggest spending time tuning your marketing pages for AI crawlers.

Implement a feature-to-landing page workflow: For SecurityBot, nearly all traffic was driven by creating a dedicated SEO-friendly landing page for every new feature. AI tools like Cursor can automate the creation of these pages, including generating necessary assets like screenshots and promotional tweets. Landing pages for features like Broken Link Checker and PageSpeed Insights were 100% created by Cursor and Sonnet 4.5.

Many technical founders hesitate to promote their work, but visibility is crucial. Overcome founder shyness: Be vocal about your product and get it out there. Share your product immediately with friends, colleagues, and former customers to start gaining early traction and feedback.

Mastering these two strategies is more than enough to keep your team busy and effectively drive initial growth.

On scaling: What’s the single biggest operational hurdle when trying to scale your business from a handful of users to a large, paying user base?

I’ve had the opportunity to see business scaling hurdles firsthand, not only at Xenon but also during the M&A process, as well as within my own projects. The biggest operational hurdle, by far, is maintaining focus on customer acquisition. It is so tempting to build “just one more feature” instead of creating another video or writing a blog post.

Conversely, for those companies that do reach a measure of product-market fit, my observation is they tend to focus far too much on customer acquisition at the cost of customer retention. There’s a concept in subscription-based businesses known as “max MRR”: the point at which revenue lost to customer churn equals the revenue gained through customer acquisition, and your business simply stops growing. In short, at a certain point, you need to focus on both, and that’s difficult to do.

We’ll end with monetization. What’s the most successful and reliable monetization strategy you’ve seen for a new AI-powered SaaS feature? Is it usage-based, feature-gated, or a premium tier?

We’re certainly seeing usage-based monetization models take off these days, and I think for certain types of businesses, that makes a lot of sense. However, my advice to those trying to build a new SaaS business is to keep your subscription model as simple and understandable as possible in order to maximize customer acquisition opportunities.

Thanks, Jason.

For more from Jason Gilmore on developing successful SaaS products, join us on February 10 for our AI Superstream: Building SaaS Businesses with AI. Jason and a lineup of AI specialists from Dynatrace, Sendspark, DBGorilla, Changebot, and more will examine every phase of building with AI, from initial ideation and hands-on coding to launch, security, and marketing—and share case studies and hard-won insights from production. Register here; it’s free and open to all.

Appendix: Recommended Tools

  • AI-assisted coding – Cursor (with Opus 4.5) and Claude Code: coding and AI assistance. Claude Opus 4.5 highly valued.
  • Code management – GitHub: managing code repositories. Standard code management.
  • Deployment – Laravel Forge: deploying projects to Digital Ocean. Highly valued for simplifying deployment.
  • API/SaaS interaction – MCP servers: interacting with GitHub, Stripe, Chrome devtools, and Trello. Centralized interaction point.
  • Architecture – Mermaid: creating architectural diagrams. Used for visualization.
  • Research – ChatGPT: rubber duck debugging and general AI assistance. Dedicated tool for problem-solving.
  • Security – Cloudflare: security services and blocking bad actors. Primarily focused on protection.
  • Marketing and SEO – Google Search Console: tracking marketing page performance. Focuses on search visibility.
  • Analytics – Google Analytics 4 (GA4): site metrics and reporting. Considered a “horrible” but necessary tool due to lack of better alternatives.



Developers, your EGO is the real bug in the system


Picture a team arguing over whether to keep a global Redux store or roll out a context-only solution for a new React component. Minutes later, someone pushes a change that breaks the whole test suite.

A week later, another debate erupts about whether the REST API should be versioned or use a graph-based approach. And there’s always that one Git rebase that turns a quiet sprint into a full-blown firefight.

The line between brilliant innovation and productivity meltdown is thinner than you think – and ego is often the culprit lurking behind every heated Slack thread and passive-aggressive code comment.

What is egoless development?

Egoless development is not about surrendering conviction or becoming passive. It’s about letting the problem drive the solution, not the other way around. Gerald Weinberg, in his seminal work The Psychology of Computer Programming, wrote: “The most powerful learning occurs when someone produces a better solution than you had imagined.” If your ego can’t digest a “better” or “different” answer, you have no business leading a software team.

In practice, egoless engineering means:

  • Staying open to new ideas – even if they come from a junior teammate or an external library.
  • Focusing on the product’s value – not on proving your own architectural choices.
  • Accepting feedback without defensiveness – because every line of code is a hypothesis, not a manifesto.
  • Separating self-worth from code quality – your value isn’t measured by how rarely your code needs revision.

How ego shows up and undermines everything

  • Defensive responses to feedback – “That was how we did it before.” Stifles incremental improvement and discourages team members from speaking up.
  • Not-Invented-Here (NIH) syndrome – “We can’t use that library; it’s not ours.” Drives tech debt, missed bugs, and stagnation while reinventing solved problems.
  • Micro-management – “I will rewrite this myself.” Silences creativity, hurts velocity, and signals distrust.
  • Selective listening – “We only care about the UI.” Neglects backend or infrastructure needs, creating systemic weaknesses.
  • Knowledge hoarding – “I’m the only one who understands this module.” Creates single points of failure and prevents team growth.
  • Status-driven architecture – “Let’s use [trendy tech] because it looks good on my resume.” Prioritizes personal branding over product needs.

Ego erodes trust, inflates the perception of individual contribution, and turns collaboration into a zero-sum game. The consequences ripple outward: missed deadlines, higher bug counts, knowledge silos, and ultimately, a loss of team morale that drives your best people toward the exit.

The psychology behind code ownership

When we write code, our brains treat it as an extension of ourselves. Neuroscience research shows that criticism of our work activates the same neural pathways as physical threats. This is why a simple “This could be refactored” comment can feel like a personal attack.

Understanding this psychological reality helps us build better defenses:

  • Recognize the feeling – When defensiveness arises, pause and acknowledge it
  • Reframe criticism as collaboration – Someone improving your code is helping you, not attacking you
  • Build emotional distance – Use phrases like “the code” instead of “my code”
  • Practice receiving feedback – Like any skill, gracefully accepting criticism improves with practice

Real examples of ego at work

Linus Torvalds, creator of Linux, became known for a harsh communication style on the Linux Kernel Mailing List, often going beyond technical feedback.

In 2015, during the “Brain Damage” incident, he dismissed a developer’s security proposal as “mindless security theater” and called it “brain damaged.” The developer stopped contributing, and the security concern became a real exploit two years later.

The fallout? Talented contributors left, community diversity dropped, and valid concerns were ignored based on who raised them.

After years of this, Torvalds took a break and issued a public apology in 2018, acknowledging that his ego had harmed the project and committing to a Code of Conduct.

Even brilliant technical leaders can cause real damage when ego outweighs empathy – but egoless engineering is a conscious choice.

Now let’s see egoless communication in action

This is my own example. It was 2:37 PM when my code hit production. By 3:00 PM, our monitoring tool was firing alerts everywhere and showing 50 errors. New users couldn’t sign up. Existing users couldn’t complete their profiles.

This is where most teams spiral. Blame gets assigned. Confidence gets crushed. Someone becomes the cautionary tale.

Not this team.

The team lead’s first message: “Okay, let’s work the problem. Ivan, walk us through what happened – no judgment, just facts. Everyone else, what do we need to stabilize right now?”

My response: “I skipped the integration test because I thought it was just an isolated signup flow change. Clearly, I was wrong. I can roll back immediately, then we need to figure out why our CI didn’t catch this.”

Ten minutes later, production was stable. The retrospective didn’t focus on my mistake – it focused on the process gap. Why weren’t integration tests enforced for all changes? How could CI catch this automatically?

The system got fixed, and the whole team started advocating for comprehensive testing.

Lead with purpose, not ego

The antidote to ego-driven culture is a purpose-driven mindset. Put the team first: “Ship better products. Learn together. Celebrate collective wins.”

Adopt core values such as:

  • Open collaboration – Everyone’s input matters, regardless of tenure or title.
  • Continuous improvement – No single person owns the ship; we all keep it afloat.
  • Shared ownership – Success is a team metric, not a résumé headline.
  • Blameless culture – Focus on systems and processes, not individual fault.
  • Intellectual humility – “I don’t know” is a strength, not a weakness.

These values create a safety net that lets people experiment, fail fast, and iterate without fear of ego-related retribution.

How to cultivate egoless teams?

1. Morning coffee feedback sessions

Schedule a quick, informal chat with each teammate once a month.

Purpose: Surface concerns early, show genuine interest, reinforce the “we” narrative.

Pro tip: Keep it casual – grab actual coffee, go for a walk, or just sit outside the usual meeting room environment.

2. Code review as conversation

Treat every PR as a dialogue, not a verdict.

Instead of: “Why did you do that?”
Try: “I’m curious about your approach to X – what tradeoffs did you consider?”

Key practices:

  • Assume good intent always
  • Ask questions before making statements
  • Provide context for your suggestions
  • Praise the good before noting improvements

3. Process-focused retrospectives

Avoid blame, focus on improving the workflow.

Outcome: The team learns to iterate on their own practices, identifying systemic issues rather than scapegoating individuals.

Framework to try: Use “What went well? What could be better? What will we try next?” instead of “Who caused the production incident?”

4. Celebrate team wins

Highlight milestones that benefited the whole product, not individual heroics.

Effect: Reinforces that value comes from collaboration. Consider a “Team Win Thursday” where you spotlight collaborative achievements in your all-hands.

5. Rotate responsibilities

Give everyone a chance to lead a sprint, own a component, or make architectural decisions.

Result: Empowers diverse viewpoints and reduces power-centered thinking. It also prevents knowledge silos and builds empathy for different roles.

6. Create a “learning debt” board

Just like technical debt, track what the team needs to learn.

How it works: Anyone can add skills or knowledge gaps. The team prioritizes learning together through lunch-and-learns, workshops, or dedicated study time. This normalizes not knowing everything.

Purpose-driven teams really excel

The data backs up the philosophy:

  • Faster problem solving – Collective knowledge surfaces solutions faster.
  • Higher-quality products – Diverse eyes catch more bugs; studies show pair programming can reduce bugs by 15-50%.
  • Happier teams – People feel valued beyond ego metrics.
  • Lower turnover – People stay when they feel part of something bigger; psychological safety is the #1 predictor of team success (Google’s Project Aristotle).
  • Innovation acceleration – When failure isn’t punished, experimentation flourishes.

In our own team, we’ve seen the difference. The “Misfit Days” workshops we introduced – short, daily stand-ups where anyone can bring a friction point – have slashed bug-backlog time by 35% and doubled the rate of new feature delivery. More importantly, our employee satisfaction scores jumped 28 points in six months.

Warning signs your team has an ego problem

Watch for these red flags:

  1. Pull requests sit for days because no one wants to critique the senior dev.
  2. “I told you so” moments are celebrated instead of collective problem-solving.
  3. Knowledge hoarding where individuals protect “their” domains.
  4. Defensive Slack threads that spiral into 47-message arguments over syntax.
  5. Silent meetings where juniors don’t speak up for fear of looking stupid.
  6. Resume-driven development where tech choices prioritize personal branding.
  7. Hero culture where all-nighters and firefighting are glorified over prevention.

If you recognize three or more of these in your team, it’s time to intervene.

What should I do?

Ego is a silent saboteur that thrives in the quiet corners of code reviews and pull-request comments. It whispers that your solution is the only solution, that criticism is an attack, that admitting uncertainty is weakness. But the best engineering teams know differently.

By embracing egoless engineering – focusing on shared purpose, valuing every voice, and making collaboration the default – we turn potential productivity meltdowns into engines of innovation. The code improves, the products ship faster, and most importantly, people actually enjoy coming to work.

Remember: The best code you’ll ever write is the code you wrote with others. The best solution you’ll ever implement is the one that made someone else’s idea better. And the best career move you’ll ever make is building a reputation as someone who makes everyone around them better.

Starting tomorrow:

  • Implement a monthly open-feedback routine – Put it on the calendar now.
  • Audit your code review comments – Are you asking questions or issuing verdicts?
  • Celebrate a team win – Find one this week and make noise about it.
  • Try one new strategy – Pick from the list above and experiment for a sprint.

Every idea is respected. Constructive feedback is not just welcome; it’s required. Success is measured by shared achievements, not individual credit.

The post Developers, your EGO is the real bug in the system appeared first on ShiftMag.


Secure Third-Party Tool Calling in LlamaIndex Using Auth0

Learn how to implement secure, user-level third-party tool calling in LlamaIndex using Auth0 Token Vault to act on behalf of users securely.


How Rust does Async differently (and why it matters)


This is the first of a four-part series.

If you are coming from JavaScript, Python or Go, Rust’s asynchronous model can feel like a bit of a culture shock. In those languages, the runtime is a “black box” that just works. In Rust, the hood is wide open, and the engine looks very different.

Why learn this? Most developers “use” async. Very few understand it. By peeling back the layers of Rust’s implementation, you aren’t just learning a language; you’re learning how systems work at the architectural level. You’ll move from wondering why the compiler is complaining about lifetimes to intuitively understanding how your code is being transformed into a high-performance machine.

This four-part series will explore:

  • Part I: The poll-based model (this article) – A look at why Rust futures are “lazy,” how the “pull” model differs from other languages, and how to build a state machine by hand.
  • Part II: The mystery of pinning – Demystifies Pin, explains self-referential structs, and shows why “moving” a future in memory can be dangerous.
  • Part III: Executors and wakers – A dive into the “reactors” that drive code, exploring how the waker tells the executor exactly when to wake up and finish the job.
  • Part IV: Async in practice – Moves beyond theory to look at real-world patterns like joining, selecting and handling timeouts.

1. The ‘pull’ model: Laziness as a virtue

In many languages, async operations are “push-based.” When you create a promise in JavaScript or spawn a Goroutine in Go, the operation starts immediately. The runtime schedules it, and it pushes the result to you when it’s done.

Rust futures are “pull-based.” They are lazy.

If you call an async function in Rust but don’t .await it (or poll it), absolutely nothing happens. The code inside the function is not executed.

Code example: The lazy future

use std::time::Duration;

async fn complex_calculation() {
    println!("(2) Starting calculation...");
    tokio::time::sleep(Duration::from_secs(1)).await;
    println!("(3) Calculation finished!");
}

#[tokio::main]
async fn main() {
    println!("(1) Calling the function...");

    // ⚠️ NOTHING HAPPENS HERE
    // The function is called, but the code inside isn't executed yet.
    // It returns a 'Future' state machine.
    let my_future = complex_calculation();

    println!("(4) I haven't awaited it yet, so nothing printed above.");

    // 🚀 NOW the runtime starts pulling the future
    my_future.await;
}


Think of a Rust future as a state machine that is currently paused. It sits dormant in memory until an executor (the runtime) actively asks it, “Are you done yet?” This querying process is called polling.

The executor polls the future. If the future is waiting on I/O (like a network request), it returns Pending and yields control back to the executor, allowing other tasks to run. When the I/O is ready, the operating system notifies the executor, which then wakes up the future and polls it again.
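
To make that loop concrete, here is a toy, single-task executor – a minimal sketch of the “poll, park on Pending, wake, poll again” cycle described above, not how Tokio actually works internally. The Signal and block_on names are invented for this illustration; the sketch uses only the standard library’s Wake trait and a channel.

use std::future::Future;
use std::sync::mpsc::{sync_channel, SyncSender};
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// The waker just pokes a channel so the loop below can stop sleeping.
struct Signal(SyncSender<()>);

impl Wake for Signal {
    fn wake(self: Arc<Self>) {
        let _ = self.0.try_send(());
    }
}

// A toy "block_on": poll the future; if it's Pending, park until the
// waker fires, then poll it again.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let (tx, rx) = sync_channel(1);
    let waker = Waker::from(Arc::new(Signal(tx)));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(output) => return output,
            Poll::Pending => {
                let _ = rx.recv(); // parked until someone calls wake()
            }
        }
    }
}

fn main() {
    // This async block never returns Pending, so one poll is enough.
    let answer = block_on(async { 21 * 2 });
    println!("{answer}");
}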

2. The future trait: The engine under the hood

At the core of this abstraction is the future trait. Simplified, it looks like this:

pub trait Future {
    type Output;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}

pub enum Poll<T> {
    Ready(T),
    Pending,
}


When you write an async fn, the Rust compiler automatically generates an anonymous struct for you that implements this trait. It transforms your linear code into a state machine, breaking the function at every .await point.
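
As a rough approximation of that transformation (the real compiler output is an anonymous type, uses unsafe pinning, and has more states), here is how a hypothetical async fn add_later(a: u32, b: u32) -> u32 with a single .await point could be written as an explicit state machine. The AddLater name and the Unpin bound on the sub-future are simplifications made for this sketch.

use std::future::Future;
use std::mem;
use std::pin::Pin;
use std::task::{Context, Poll};

// Hand-written stand-in for what the compiler generates from something like:
//   async fn add_later(a: u32, b: u32) -> u32 { wait_for_something().await; a + b }
enum AddLater<F: Future<Output = ()> + Unpin> {
    // Before/at the `.await`: the locals and the sub-future live here.
    Waiting { a: u32, b: u32, sub: F },
    // After completion: polling again is a logic error.
    Done,
}

impl<F: Future<Output = ()> + Unpin> Future for AddLater<F> {
    type Output = u32;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        // This enum is Unpin (all its fields are), so we can take a plain &mut.
        let this = self.get_mut();
        match mem::replace(this, AddLater::Done) {
            AddLater::Waiting { a, b, mut sub } => {
                // This is the `.await` point: delegate the poll to the sub-future.
                match Pin::new(&mut sub).poll(cx) {
                    Poll::Pending => {
                        // Not ready yet: restore the state and suspend.
                        *this = AddLater::Waiting { a, b, sub };
                        Poll::Pending
                    }
                    // Ready: run the code after the `.await` and finish.
                    Poll::Ready(()) => Poll::Ready(a + b),
                }
            }
            AddLater::Done => panic!("future polled after completion"),
        }
    }
}

#[tokio::main]
async fn main() {
    // std::future::ready(()) is already Ready, so one poll completes this.
    let fut = AddLater::Waiting { a: 2, b: 3, sub: std::future::ready(()) };
    println!("{}", fut.await); // 5
}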

Building the state machine by hand

We will create a CountdownFuture. It will:

  • Start with a count (in this case, 3).
  • Every time the runtime polls it, it decrements the count.
  • If the count is not 0, it tells the runtime “I’m not done, ask me again” (returns Pending).
  • If the count is 0, it says “I’m done!” (returns Ready).

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// 1. The State Machine
// This struct holds the state of our operation.
// In a generated async block, this would hold all your local variables.
struct CountdownFuture {
    count: u32,
}

impl CountdownFuture {
    fn new(count: u32) -> Self {
        Self { count }
    }
}

// 2. The Implementation
impl Future for CountdownFuture {
    // This is what the future returns when it finishes.
    type Output = String;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // Access the inner count
        if self.count == 0 {
            // BASE CASE: We are done!
            Poll::Ready("Blastoff! 🚀".to_string())
        } else {
            // PROGRESS CASE: We are not done yet.
            println!("Counting down: {}", self.count);

            // Decrement our state
            self.count -= 1;

            // ⚠️ CRITICAL STEP: The Waker
            // If we returned Pending without doing this, the runtime would
            // put this task to sleep and NEVER check it again (a deadlock).
            // By calling `wake_by_ref()`, we tell the runtime:
            // "I made progress! Put me back in the queue to be polled again immediately."
            cx.waker().wake_by_ref();

            // Return Pending to yield control back to the executor
            Poll::Pending
        }
    }
}

// 3. Using it
#[tokio::main]
async fn main() {
    let countdown = CountdownFuture::new(3);

    // The runtime will poll this ~4 times until it returns Ready
    let result = countdown.await;

    println!("{}", result);
}

3. Breaking down the magic

Let’s break down exactly what is going on in that manual implementation.

The poll signature

fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>)

  • Pin<&mut Self>: This allows us to mutate our state (self.count -= 1). The Pin wrapper guarantees the future won’t be moved in memory once polled, which matters for self-referential futures (though ours isn’t one in this simple example).
  • Context: This carries the waker. The waker is the most important part of the ecosystem. It is the “callback” mechanism.

The return values

  • Poll::Ready(T): The contract is fulfilled. The value “T” is handed to the caller, and the future is dropped.
  • Poll::Pending: The future says, “I cannot complete right now.”

The waker magic

This is the specific line that confuses people:

cx.waker().wake_by_ref();


In a real-world scenario (like reading from a socket), you wouldn’t wake immediately. You would hand this waker to the operating system. The OS would trigger it later when data arrives.

In our simple countdown example, we don’t have an OS waiting for us. We just want to run again immediately. So we wake ourselves up. This tells the executor (Tokio) to put our task back at the end of the “Ready” queue instantly.
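
To see what handing the waker off looks like, here is a minimal sketch modeled on the classic timer example: instead of waking ourselves, the future stores the waker, and a background thread – standing in for the operating system – calls it once the work completes. The TimerFuture and SharedState names are just for this illustration.

use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};
use std::thread;
use std::time::Duration;

// Shared between the future and the "OS" thread.
struct SharedState {
    completed: bool,
    waker: Option<Waker>,
}

struct TimerFuture {
    shared: Arc<Mutex<SharedState>>,
}

impl TimerFuture {
    fn new(duration: Duration) -> Self {
        let shared = Arc::new(Mutex::new(SharedState { completed: false, waker: None }));
        let thread_shared = Arc::clone(&shared);

        // Stand-in for the operating system: signals readiness later.
        thread::spawn(move || {
            thread::sleep(duration);
            let mut state = thread_shared.lock().unwrap();
            state.completed = true;
            if let Some(waker) = state.waker.take() {
                waker.wake(); // "data arrived" -> the executor re-polls us
            }
        });

        TimerFuture { shared }
    }
}

impl Future for TimerFuture {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let mut state = self.shared.lock().unwrap();
        if state.completed {
            Poll::Ready(())
        } else {
            // Hand the waker off instead of waking ourselves immediately.
            state.waker = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}

#[tokio::main]
async fn main() {
    println!("waiting...");
    TimerFuture::new(Duration::from_millis(500)).await;
    println!("timer fired!");
}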

What about Pin?

You might have noticed the Pin type in the function signature above and wondered what exactly it does. While we briefly touched on it, Pin is one of the most complex (and misunderstood) topics in Rust.

Why does the compiler force us to use it? What happens if we move a future in memory while it’s running?
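
As a small preview of the danger (a deliberately simplified sketch using a plain self-referential struct rather than a future, with a made-up SelfRef type), moving a value that points into itself leaves that pointer dangling:

// A deliberately broken-by-design (but safe) illustration: a struct whose
// `ptr` field is meant to point at its own `data` field.
struct SelfRef {
    data: String,
    ptr: *const String, // intended to point at `data` above
}

fn main() {
    let mut a = SelfRef {
        data: "hello".to_string(),
        ptr: std::ptr::null(),
    };
    a.ptr = &a.data; // self-reference established

    println!("before move: data at {:p}, ptr says {:p}", &a.data, a.ptr);

    // Move the value somewhere else in memory (the heap, to make it obvious).
    let moved = Box::new(a);

    // `data` now lives at a new address, but `ptr` still holds the old one:
    // it dangles. (We only print it; dereferencing it would be undefined behavior.)
    println!("after move:  data at {:p}, ptr says {:p}", &moved.data, moved.ptr);
}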

Part II of this series will demystify Pin, explore self-referential structs and explain why pinning is the secret sauce that makes Rust’s zero-cost async possible.

The post How Rust does Async differently (and why it matters) appeared first on The New Stack.
