
Meet the Programmer: Inside the Career Path of a Mobile Software Engineer


What's your current role, and what do you like about it?

At the moment, my main focus is mobile development: not just “button painting,” but mobile architecture, system-level concerns, and solution design for business. While my background is mostly in Android development, I'm also keen on exploring Kotlin Multiplatform (KMP) and Flutter, cross-platform frameworks that let you extract a shared codebase and reuse it across platforms. I'm passionate about solving business problems and improving people's everyday experience through the compact devices they carry everywhere. I also get a real charge out of spending hours on a problem and finally landing on a solution that works as expected; it's a highly exciting feeling, and I adore it.

I'm also responsible for developing a pool of talented programmers: assigning them to projects, conducting their assessments, and running salary reviews. This role lets me improve soft skills such as communication and leadership, and grow horizontally.

How did you get into Programming?

I've been passionate about programming since high school. Around 15 years ago, I took on a challenge to automate the school's alarm system. At the time, I was taking my first steps into software development with Delphi, and I realized I had the skills to design the system and solve the issue, my first real business problem to handle. I started by working out the architecture and the algorithms the future application would need, using a diagramming tool and notes to capture my thinking about the solution. Then I created static forms representing the UI, which let me decompose the whole idea into smaller tasks. In the final stage, I implemented the application that fully automated the alarm process, which motivated and inspired me even more. That first project was so successful that I later used it as my course project in college while diving deeper into Computer Science.

How did you get into writing about Programming?

I started mentoring a couple of years ago and noticed that certain basic topics kept coming up. Students tend to ask similar questions, and it dawned on me to write articles that share those insights and give them resources to refer back to. I prefer a practice-based approach: I usually describe the theory through my open-source project, using a ready-to-use codebase.

What's your earliest memory of learning to code?

In middle school, I first encountered Turbo Pascal in informatics class. I don't remember why, but either I misunderstood or I was told that Turbo Pascal wasn't just a programming language for communicating with a computer but, specifically, a tool for solving quadratic equations, or so I mistakenly thought. Obviously, in that case, the discriminant had to be above 0 ;)
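The joke rests on the discriminant check every beginner meets. As a minimal, purely illustrative sketch (in Python rather than Pascal): for a*x^2 + b*x + c = 0, the sign of b^2 - 4*a*c decides how many real roots exist:

```python
import math

def quadratic_roots(a: float, b: float, c: float) -> list[float]:
    """Real roots of a*x^2 + b*x + c = 0, via the discriminant."""
    if a == 0:
        raise ValueError("not a quadratic equation")
    d = b * b - 4 * a * c  # the discriminant the anecdote jokes about
    if d < 0:
        return []                    # no real roots
    if d == 0:
        return [-b / (2 * a)]        # one repeated root
    s = math.sqrt(d)
    return [(-b - s) / (2 * a), (-b + s) / (2 * a)]

print(quadratic_roots(1, -3, 2))  # x^2 - 3x + 2 = 0 → [1.0, 2.0]
```

With a discriminant above 0 you get two distinct real roots; at exactly 0, one; below 0, none over the reals.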

When Elon Musk achieves his dream of getting us to Mars, what technology do you think would be important on Mars and why?

I'm not a specialist in this field, but I'd guess several technologies would matter. First, it's crucial to develop a mechanism that protects us from radiation; I've heard of initiatives that involve digging underground and building the camp there. We would also have to figure out how to run photosynthesis in that environment, both to generate oxygen and to grow vegetation. Finally, we must think about an energy source for the life support of a future colony. The most promising and efficient energy technology under those conditions is nuclear, so we would need to build a reactor there.

What's a programming language that you would build EVERYTHING and ANYTHING in and why?

I wouldn't separate programming languages from the particular frameworks or platforms that let us build specific products or solve business problems; each framework is good for its own purpose, which makes it valuable in itself. But broadening the question: everything in software development can be seen as a tool for development, including compilers, IDEs, and the frameworks themselves. C++ still remains the standard for systems programming, since it allows direct memory manipulation and offers performance, predictability, and determinism. So if I had to choose, I would choose C++.

What's something you think Software developers do not do enough of?

Some programmers don't pay enough attention to developing soft skills. Technical skills obviously play a crucial role in software development, but it's impossible to imagine building a large project solo. Communicating within a team, planning, and brainstorming together all require the ability to explain and to listen. And that's only a small slice of the soft skills involved.

What is your least favorite thing about programming?

I wouldn't say my least favorite thing is directly related to programming; it's development in general. It's the uncertainty: piecing together, step by step, information scattered across other departments. The human brain is wired so that the unknown feels dangerous. On the other hand, when everything is cleared up, we get a hit of endorphins, which always feels great.

What’s a technology you’re currently learning or excited to learn?

While my main focus is Android, I'm passionate about delving into cross-platform mobile development, in particular Kotlin Multiplatform (KMP). I also follow trends, so AI is an additional interest that lets me delegate some technical and non-technical tasks. I use GitHub Copilot and JetBrains AI Assistant in my daily work, which makes for an ideal combination: AI + KMP :)

What’s your favorite Programming story of all-time on HackerNoon?

A story I definitely loved on HackerNoon was about image handling using TensorFlow for Android. I was really inspired by the approach and results the author demonstrated, which helped me define the tools for my own solutions. Since I'm currently working on an educational project involving skin-condition recognition with on-device AI, the article resonated with me on a practical level.

Time travel 10 years into the past or 10 years into the future? What does technology look like? Give reasons for your answer.

Since my main expertise is in Android development, I'll focus on that technology in the context of AI.

Talking about the past, I remember we had a lot of “technical freedom” in Android development, which meant we had a variety of opportunities to manipulate the OS's resources. For instance, we could run operations in the background without informing our users. Nowadays, the tendency is to limit such aspects, and I believe this trend will continue.

When it comes to the future, my vision is that this trend will continue: more regulation from governments and more limitations from Google. We will need to use resources carefully and expect to ask users for approval before accessing sensitive data. Additionally, AI will finally settle into the niches where it's truly needed, instead of being incorporated everywhere unnecessarily.


What If? AI in 2026 and Beyond


The market is betting that AI is an unprecedented technology breakthrough, valuing Sam Altman and Jensen Huang like demigods already astride the world. The slow progress of enterprise AI adoption from pilot to production, however, still suggests at least the possibility of a less earthshaking future. Which is right?

At O’Reilly, we don’t believe in predicting the future. But we do believe you can see signs of the future in the present. Every day, news items land, and if you read them with a kind of soft focus, they slowly add up. Trends are vectors with both a magnitude and a direction, and by watching a series of data points light up those vectors, you can see possible futures taking shape.

This is how we’ve always identified topics to cover in our publishing program, our online learning platform, and our conferences. We watch what we call “the alpha geeks”: paying attention to hackers and other early adopters of technology with the conviction that, as William Gibson put it, “The future is here, it’s just not evenly distributed yet.” As a great example of this today, note how the industry hangs on every word from AI pioneer Andrej Karpathy, hacker Simon Willison, and AI for business guru Ethan Mollick.

We are also fans of a discipline called scenario planning, which we learned decades ago during a workshop with Lawrence Wilkinson about possible futures for what is now the O’Reilly learning platform. The point of scenario planning is not to predict any future but rather to stretch your imagination in the direction of radically different futures and then to identify “robust strategies” that can survive either outcome. Scenario planners also use a version of our “watching the alpha geeks” methodology. They call it “news from the future.”

Is AI an Economic Singularity or a Normal Technology?

For AI in 2026 and beyond, we see two fundamentally different scenarios that have been competing for attention. Nearly every debate about AI, whether about jobs, about investment, about regulation, or about the shape of the economy to come, is really an argument about which of these scenarios is correct.

Scenario one: AGI is an economic singularity. AI boosters are already backing away from predictions of imminent superintelligent AI leading to a complete break with all human history, but they still envision a fast takeoff of systems capable enough to perform most cognitive work that humans do today. Not perfectly, perhaps, and not in every domain immediately, but well enough, and improving fast enough, that the economic and social consequences will be transformative within this decade. We might call this the economic singularity (to distinguish it from the more complete singularity envisioned by thinkers from John von Neumann, I. J. Good, and Vernor Vinge to Ray Kurzweil).

In this possible future, we aren’t experiencing an ordinary technology cycle. We are experiencing the start of a civilization-level discontinuity. The nature of work changes fundamentally. The question is not which jobs AI will take but which jobs it won’t. Capital’s share of economic output rises dramatically; labor’s share falls. The companies and countries that master this technology first will gain advantages that compound rapidly.

If this scenario is correct, most of the frameworks we use to think about technology adoption are wrong, or at least inadequate. The parallels to previous technology transitions such as electricity, the internet, or mobile are misleading because they suggest gradual diffusion and adaptation. What’s coming will be faster and more disruptive than anything we’ve experienced.

Scenario two: AI is a normal technology. In this scenario, articulated most clearly by Arvind Narayanan and Sayash Kapoor of Princeton, AI is a powerful and important technology but nonetheless subject to all the normal dynamics of adoption, integration, and diminishing returns. Even if we develop true AGI, adoption will still be a slow process. Like previous waves of automation, it will transform some industries, augment many workers, displace some, but most importantly, take decades to fully diffuse through the economy.

In this world, AI faces the same barriers that every enterprise technology faces: integration costs, organizational resistance, regulatory friction, security concerns, training requirements, and the stubborn complexity of real-world workflows. Impressive demos don’t translate smoothly into deployed systems. The ROI is real but incremental. The hype cycle does what hype cycles do: Expectations crash before realistic adoption begins.

If this scenario is correct, the breathless coverage and trillion-dollar valuations are symptoms of a bubble, not harbingers of transformation.

Reading News from the Future

These two scenarios lead to radically different conclusions. If AGI is an economic singularity, then massive infrastructure investment is rational, and companies borrowing hundreds of billions to spend on data centers to be used by companies that haven’t yet found a viable economic model are making prudent bets. If AI is a normal technology, that spending looks like the fiber-optic overbuild of 1999. It’s capital that will largely be written off.

If AGI is an economic singularity, then workers in knowledge professions should be preparing for fundamental career transitions; firms should be thinking how to radically rethink their products, services, and business models; and societies should be planning for disruptions to employment, taxation, and social structure that dwarf anything in living memory.

If AI is normal technology, then workers should be learning to use new tools (as they always have), but the breathless displacement predictions will join the long list of automation anxieties that never quite materialized.

So, which scenario is correct? We don’t know yet, nor even whether this face-off is the right framing of possible futures. But we do know that a year or two from now, we will tell ourselves that the answer was right there, in plain sight. How could we not have seen it? We weren’t reading the news from the future.

Some news is hard to miss: The change in tone of reporting in the financial markets, and perhaps more importantly, the change in tone from Sam Altman and Dario Amodei. If you follow tech closely, it’s also hard to miss news of real technical breakthroughs, and if you’re involved in the software industry, as we are, it’s hard to miss the real advances in programming tools and practices. There’s also an area that we’re particularly interested in, one which we think tells us a great deal about the future, and that is market structure, so we’re going to start there.

The Market Structure of AI

The economic singularity scenario has been framed as a winner-takes-all race for AGI that creates a massive concentration of power and wealth. The normal technology scenario suggests much more of a rising tide, where the technology platforms become dominant precisely because they create so much value for everyone else. Winners emerge over time rather than with a big bang.

Quite frankly, we have one big signal that we’re watching here: Which of OpenAI, Anthropic, and Google first achieves product-market fit? By product-market fit we don’t just mean that users love the product or that one company has dominant market share, but that a company has found a viable economic model, one in which what people are willing to pay for AI-based services exceeds the cost of delivering them.

OpenAI appears to be trying to blitzscale its way to AGI, building out capacity far in excess of the company’s ability to pay for it. This is a massive one-way bet on the economic singularity scenario, which makes ordinary economics irrelevant. Sam Altman has even said that he has no idea what his business will be post-AI or what the economy will look like. So far, investors have been buying it, but doubts are beginning to shape their decisions.

Anthropic is clearly in pursuit of product-market fit, and its success in one target market, software development, is leading the company on a shorter and more plausible path to profitability. Anthropic leaders talk AGI and economic singularity, but they walk the walk of a normal technology believer. The fact that Anthropic is likely to beat OpenAI to an IPO is a very strong normal technology signal. It’s also a good example of what scenario planners view as a robust strategy, good in either scenario.

Google gives us a different take on normal technology: an incumbent looking to balance its existing business model with advances in AI. In Google’s normal technology vision, AI disappears “into the walls” like networks did. Right now, Google is still foregrounding AI with AI overviews and NotebookLM, but it’s in a position to make it recede into the background of its entire suite of products, from Search and Google Cloud to Android and Google Docs. It has too much at stake in the current economy to believe that the route to the future consists in blowing it all up. That being said, Google also has the resources to place big bets on new markets with clear economic potential, like self-driving cars, drug discovery, and even data centers in space. It’s even competing with Nvidia, not just with OpenAI and Anthropic. This is also a robust strategy.

What to watch for: What tech stack are developers and entrepreneurs building on?

Right now, Anthropic’s Claude appears to be winning that race, though that could change quickly. Developers are increasingly not locked into a proprietary stack but are easily switching based on cost or capability differences. Open standards such as MCP are gaining traction.

On the consumer side, Google Gemini is gaining on ChatGPT in terms of daily active users, and investors are starting to question OpenAI’s lack of a plausible business model to support its planned investments.

These developments suggest that the key idea behind the massive investment driving the AI boom, that one winner gets all the advantages, just doesn’t hold up.

Capability Trajectories

The economic singularity scenario depends on capabilities continuing to improve rapidly. The normal technology scenario is comfortable with limits rather than hyperscaled discontinuity. There is already so much to digest!

On the economic singularity side of the ledger, positive signs would include a capability jump that surprises even insiders, such as Yann LeCun’s objections being overcome. That is, AI systems demonstrably have world models, can reason about physics and causality, and aren’t just sophisticated pattern matchers. Another game changer would be a robotics breakthrough: embodied AI that can navigate novel physical environments and perform useful manipulation tasks.

Evidence that AI is a normal technology includes: AI systems that are good enough to be useful but not good enough to be trusted, so they keep requiring human oversight that limits productivity gains; prompt injection and other security vulnerabilities that remain unsolved, constraining what agents can be trusted to do; domain complexity that continues to defeat generalization, so what works in coding doesn’t transfer to medicine, law, or science; regulatory and liability barriers high enough to slow adoption regardless of capability; and professional guilds that successfully protect their territory. These problems may be solved over time, but they won’t just disappear with a new model release.

Regard benchmark performance with skepticism. Benchmarks are already likely to be gamed now, while everyone is afraid of missing out, and they will be gamed even more aggressively once investors start losing enthusiasm.

Reports from practitioners actually deploying AI systems are far more important. Right now, tactical progress is strong. We see software developers in particular making profound changes in development workflows. Watch for whether they are seeing continued improvement or a plateau. Is the gap between demo and production narrowing or persisting? How much human oversight do deployed systems require? Listen carefully to reports from practitioners about what AI can actually do in their domain versus what it’s hyped to do.

We are not persuaded by surveys of corporate attitudes. Having lived through the realities of internet and open source software adoption, we know that, like Hemingway’s marvelous metaphor of bankruptcy, corporate adoption happens gradually, then suddenly, with late adopters often full of regret.

If AI is achieving general intelligence, though, we should see it succeed across multiple domains, not just the ones where it has obvious advantages. Coding has been the breakout application, but coding is in some ways the ideal domain for current AI. It’s characterized by well-defined problems, immediate feedback loops, formally defined languages, and massive training data. The real test is whether AI can break through in domains that are harder and farther away from the expertise of the people developing the AI models.

What to watch for: Real-world constraints start to bite. For example, what if there is not enough power to train or run the next generation of models at the scale company ambitions require? What if capital for the AI build-out dries up?

Our bet is that various real-world constraints will become more clearly recognized as limits to the adoption of AI, despite continued technical advances.

Bubble or Bust?

It’s hard not to notice how the narrative in the financial press has shifted in the past few months, from mindless acceptance of industry narratives to a growing consensus that we are in the throes of a massive investment bubble, with the chief question on everyone’s mind seeming to be when and how it will pop.

The current moment does bear uncomfortable similarities to previous technology bubbles. Famed short investor Michael Burry is comparing Nvidia to Cisco and warning of a worse crash than the dot-com bust of 2000. The circular nature of AI investment—in which Nvidia invests in OpenAI, which buys Nvidia chips; Microsoft invests in OpenAI, which pays Microsoft for Azure; and OpenAI commits to massive data center build-outs with little evidence that it will ever have enough profit to justify those commitments—has reached levels that would be comical if the numbers weren’t so large.

But there’s a counterargument: Every transformative infrastructure build-out begins with a bubble. The railroads of the 1840s, the electrical grid of the 1900s, the fiber-optic networks of the 1990s all involved speculative excess, but all left behind infrastructure that powered decades of subsequent growth. One question is whether AI infrastructure is like the dot-com bubble (which left behind useful fiber and data centers) or the housing bubble (which left behind empty subdivisions and a financial crisis).

The real question when faced with a bubble is What will be the source of value in what is left? It most likely won’t be in the AI chips, which have a short useful life. It may not even be in the data centers themselves. It may be in a new approach to programming that unlocks entirely new classes of applications. But one pretty good bet is that there will be enduring value in the energy infrastructure build-out. Given the Trump administration’s war on renewable energy, the market demand for energy in the AI build-out may be its saving grace. A future of abundant, cheap energy rather than the current fight for access that drives up prices for consumers could be a very nice outcome.

Signs pointing toward economic singularity: Sustained high utilization of AI infrastructure (data centers, GPU clusters) over multiple years; actual demand meets or exceeds capacity; major new applications emerge that just couldn’t exist without AI; continued spiking of energy prices, especially in areas with many data centers.

Signs pointing toward bubble: Continued reliance on circular financing structures (vendor financing, equity swaps between AI companies); enterprise AI projects stall in the pilot phase, failing to scale; a “show me the money” moment arrives, where investors demand profitability and AI companies can’t deliver.

Signs pointing toward a normal technology recovery post-bubble: Strong revenue growth at AI application companies, not just infrastructure providers; enterprises report concrete, measurable ROI from AI deployments.

What to watch: There are so many possibilities that this is an act of imagination! Start with Wile E. Coyote running over a cliff in pursuit of Road Runner in the classic Warner Brothers cartoons. Imagine the moment when investors realize that they are trying to defy gravity.

What made them notice? Was it the failure of a much-hyped data center project? Was it that it couldn’t get financing, that it couldn’t get completed because of regulatory constraints, that it couldn’t get enough chips, that it couldn’t get enough power, that it couldn’t get enough customers?

Imagine one or more storied AI labs or startups unable to complete their next fundraise. Imagine Oracle or SoftBank trying to get out of a big capital commitment. Imagine Nvidia announcing a revenue miss. Imagine another DeepSeek moment coming out of China.

Our bet for the most likely prick to pop the bubble is that Anthropic and Google’s success against OpenAI persuades investors that OpenAI will not be able to pay for the massive amount of data center capacity it has contracted for. Given the company’s centrality to the AGI singularity narrative, a failure of belief in OpenAI could bring down the whole web of interconnected data center bets, many of them financed by debt. But that’s not the only possibility.

Always Update Your Priors

DeepSeek’s emergence in January was a signal that the American AI establishment may not have the commanding lead it assumed. Rather than racing for AGI, China seems to be heavily betting on normal technology, building towards low-cost, efficient AI, industrial capacity, and clear markets. While claims about what DeepSeek spent on training its V3 model have been contested, training isn’t the only cost: There’s also the cost of inference and, for increasingly popular reasoning models, the cost of reasoning. And when these are taken into account, DeepSeek is very much a leader.

If DeepSeek and other Chinese AI labs are right, the US may be intent on winning the wrong race. What’s more, our conversations with Chinese AI investors reveal a much heavier tilt towards embodied AI (robotics and all its cousins) than towards consumer or even enterprise applications. Given the geopolitical tensions between China and the US, it’s worth asking what kind of advantage a GPT-9 with limited access to the real world might provide against an army of drones and robots powered by the equivalent of GPT-8!

The point is that the discussion above is meant to be provocative, not exhaustive. Expand your horizons. Think about how US and international politics, advances in other technologies, and financial market impacts ranging from a massive market collapse to a simple change in investor priorities might change industry dynamics.

What you’re watching for is not any single data point but the pattern across multiple vectors over time. Remember that the AGI versus normal technology framing is not the only or maybe even the most useful way to look at the future.

The most likely outcome, even restricted to these two hypothetical scenarios, is something in between. AI may achieve something like AGI for coding, text, and video while remaining a normal technology for embodied tasks and complex reasoning. It may transform some industries rapidly while others resist for decades. The world is rarely as neat as any scenario.

But that’s precisely why the “news from the future” approach matters. Rather than committing to a single prediction, you stay alert to the signals, ready to update your thinking as evidence accumulates. You don’t need to know which scenario is correct today. You need to recognize which scenario is becoming correct as it happens.

What If? Robust Strategies in the Face of Uncertainty

The second part of scenario planning is to identify robust strategies that will help you do well regardless of which possible future unfolds. In this final section, as a way of making clear what we mean by that, we’ll consider 10 “What if?” questions and ask what the robust strategies might be.

1. What if the AI bubble bursts in 2026?

The vector: We are seeing massive funding rounds for AI foundries and massive capital expenditure on GPUs and data centers without a corresponding explosion in revenue for the application layer.

The scenario: The “revenue gap” becomes undeniable. Wall Street loses patience. Valuations for foundational model companies collapse and the river of cheap venture capital dries up.

In this scenario, we would see responses like OpenAI’s “Code Red” reaction to improvements in competing products. We would see declines in prices for stocks that aren’t yet traded publicly. And we might see signs that the massive fundraising for data centers and power is performative, not backed by real capital. In the words of one commenter, they are “bragawatts.”

A robust strategy: Don’t build a business model that relies on subsidized intelligence. If your margins only work because VC money is paying for 40% of your inference costs, you are vulnerable. Focus on unit economics. Build products where the AI adds value that customers are willing to pay for now, not in a theoretical future where AI does everything. If the bubble bursts, infrastructure will remain, just as the dark fiber did, becoming cheaper for the survivors to use.
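The unit-economics warning is easy to make concrete with a toy calculation. All numbers here are hypothetical, chosen only to show how a subsidy can hide a structurally negative margin:

```python
def gross_margin(price: float, true_cost: float, subsidy_rate: float = 0.0) -> float:
    """Gross margin per unit, given the fraction of true cost someone else pays."""
    paid_cost = true_cost * (1 - subsidy_rate)
    return (price - paid_cost) / price

# Hypothetical numbers: $10 per seat, $12 true inference cost.
print(round(gross_margin(10.0, 12.0, subsidy_rate=0.4), 2))  # 0.28: looks viable
print(round(gross_margin(10.0, 12.0, subsidy_rate=0.0), 2))  # -0.2: underwater
```

The business appears to enjoy a 28% gross margin only while someone else covers 40% of the cost; at the true cost, it loses money on every unit.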

2. What if energy becomes the hard limit?

The vector: Data centers are already stressing grids. We are seeing a shift from the AI equivalent of Moore’s law to a world where progress may be limited by energy constraints.

The scenario: In 2026, we hit a wall. Utilities simply cannot provision power fast enough. Inference becomes a scarce resource, available only to the highest bidders or those with private nuclear reactors. Highly touted data center projects are put on hold because there isn’t enough power to run them, and rapidly depreciating GPUs are put in storage because there aren’t enough data centers to deploy them.

A robust strategy: Efficiency is your hedge. Stop treating compute as infinite. Invest in small language models (SLMs) and edge AI that run locally. If you can run 80% of your workload on a laptop-grade chip rather than an H100 in the cloud, you are at least partially insulated from the energy crunch.

3. What if inference becomes a commodity?

The vector: Chinese labs continue to release open weight models with performance comparable to each previous generation of top-of-the-line US frontier models, but at a fraction of the training and inference cost. What’s more, they are training them on lower-cost chips. And it appears to be working.

The scenario: The price of “intelligence” collapses to near zero. The moat of having the biggest model and the best cutting-edge chips for training evaporates.

A robust strategy: Move up the stack. If the model is a commodity, the value is in the integration, the data, and the workflow. Build applications and services using the unique data, context, and workflows that no one else has.

4. What if Yann LeCun is right?

The vector: LeCun has long argued that auto-regressive LLMs are an “off-ramp” on the highway to AGI because they can’t reason or plan; they only predict the next token. He bets on world models (JEPA). OpenAI cofounder Ilya Sutskever has also argued that the AI industry needs fundamental research to solve basic problems like the ability to generalize.

The scenario: In 2026, LLMs hit a plateau. The market realizes we’ve spent billions on a dead-end technology for true AGI.

A robust strategy: Diversify your architecture. Don’t bet the farm on today’s AI. Focus on compound AI systems that use LLMs as just one component, while relying on deterministic code, databases, and small, specialized models for additional capabilities. Keep your eyes and your options open.
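As a sketch of the compound-systems idea, under obviously simplified assumptions: route a query to deterministic code when a tool can answer it exactly, and only fall back to a model. The `llm_stub` below is a placeholder for a real model client, not an actual API:

```python
import re

def calculator(query: str) -> str:
    """Deterministic component: handle arithmetic exactly, no model needed."""
    # Toy grammar: only digits, whitespace, + - * / ( ) and decimal points.
    if re.fullmatch(r"[\d\s\+\-\*/\(\)\.]+", query):
        return str(eval(query))  # safe enough here because of the strict grammar
    raise ValueError("not arithmetic")

def llm_stub(query: str) -> str:
    """Stand-in for a hosted LLM call; swap in a real client here."""
    return f"[model answer for: {query}]"

def answer(query: str) -> str:
    """Compound system: try deterministic tools first, fall back to the LLM."""
    try:
        return calculator(query)
    except ValueError:
        return llm_stub(query)

print(answer("12 * (3 + 4)"))        # handled by code: 84
print(answer("Summarize this doc"))  # routed to the model stub
```

The model never sees the arithmetic, so that part of the system stays deterministic, auditable, and immune to a plateau in model capability.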

5. What if there is a major security incident?

The vector: We are currently hooking insecure LLMs up to banking APIs, email, and purchasing agents. Security researchers have been screaming about indirect prompt injection for years.

The scenario: A worm spreads through email auto-replies, tricking AI agents into transferring funds or approving fraudulent invoices at scale. Trust in agentic AI collapses.

A robust strategy: “Trust but verify” is dead; use “verify then trust.” Implement well-known security practices like least privilege (restrict your agents to the minimal list of resources they need) and zero trust (require authentication before every action). Stay on top of OWASP’s lists of AI vulnerabilities and mitigations. Keep a “human in the loop” for high-stakes actions. Advocate for and adopt standard AI disclosure and audit trails. If you can’t trace why your agent did something, you shouldn’t let it handle money.
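A minimal sketch of least privilege plus human-in-the-loop, with illustrative action names: the agent may only invoke allowlisted actions, and high-stakes ones fail closed without explicit human approval:

```python
# Minimal "verify then trust" sketch: least privilege via an allowlist,
# plus a human-in-the-loop gate that fails closed for high-stakes actions.
# Action names and the high-stakes split are illustrative assumptions.

ALLOWED_ACTIONS = {"read_inbox", "draft_reply", "transfer_funds"}
HIGH_STAKES = {"transfer_funds"}

def authorize(action: str, human_approved: bool = False) -> bool:
    """Return True only if the action may proceed."""
    if action not in ALLOWED_ACTIONS:      # least privilege: deny by default
        return False
    if action in HIGH_STAKES and not human_approved:
        return False                       # human in the loop for money moves
    return True
```

In a real deployment, the approval flag would come from an audited sign-off step, never from the agent itself.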

6. What if China is actually ahead?

The vector: While the US focuses on raw scale and chip export bans, China is focusing on efficiency and embedded AI in manufacturing, EVs, and consumer hardware.

The scenario: We discover that 2026’s “iPhone moment” comes from Shenzhen, not Cupertino, because Chinese companies integrated AI into hardware better while we were fighting over chatbot and agentic AI dominance.

A robust strategy: Look globally. Don’t let geopolitical narratives blind you to technical innovation. If the best open source models or efficiency techniques are coming from China, study them. Open source has always been the best way to bridge geopolitical divides. Keep your stack compatible with the global ecosystem, not just the US silo.

7. What if robotics has its “ChatGPT moment”?

The vector: End-to-end learning for robots is advancing rapidly.

The scenario: Suddenly, physical labor automation becomes as possible as digital automation.

A robust strategy: If you are in a “bits” business, ask how you can bridge to “atoms.” Can your software control a machine? How might you embody useful intelligence into your products?

8. What if vibe coding is just the start?

The vector: Anthropic and Cursor are changing programming from writing syntax to managing logic and workflow. Vibe coding lets nonprogrammers build apps by just describing what they want.

The scenario: The barrier to entry for software creation drops to zero. We see a Cambrian explosion of apps built for a single meeting or a single family vacation. Alex Komoroske calls it disposable software: “Less like canned vegetables and more like a personal farmer’s market.”

A robust strategy: In a world where AI is good enough to generate whatever code we ask for, value shifts to knowing what to ask for. Coding is much like writing: Anyone can do it, but some people have more to say than others. Programming isn’t just about writing code; it’s about understanding problems, contexts, organizations, and even organizational politics to come up with a solution. Create systems and tools that embody unique knowledge and context that others can use to solve their own problems.

9. What if AI kills the aggregator business model?

The vector: Amazon and Google make money by being the tollbooth between you and the product or information you want. If people get answers from AI, or an AI agent buys for you, it bypasses the ads and the sponsored listings, undermining the business model of internet incumbents.

The scenario: Search traffic (and ad revenue) plummets. Brands lose their ability to influence consumers via display ads. AI has destroyed the source of internet monetization and hasn’t yet figured out what will take its place.

A robust strategy: Own the customer relationship directly. If Google stops sending you traffic, you need an MCP, an API, or a channel for direct brand loyalty that an AI agent respects. Make sure your information is accessible to bots, not just humans. Optimize for agent readability and reuse.
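For instance, the same catalog a human browses on your site could also be exposed as structured JSON that an agent (or an MCP resource) can consume directly. The schema and field names here are hypothetical:

```python
import json

# Sketch: expose the catalog humans browse as machine-readable JSON so an
# AI agent (or an MCP resource) can read and compare products directly.
# The schema and field names are hypothetical.

CATALOG = [
    {"sku": "RAIN-01", "name": "Rain Jacket", "price_usd": 129.0, "in_stock": True},
]

def agent_feed() -> str:
    """What a bot-facing endpoint might return alongside the human UI."""
    return json.dumps({"schema_version": "1.0", "products": CATALOG})
```

Versioning the schema matters: agents cache and reuse structure, so breaking changes cost you their traffic.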

10. What if a political backlash arrives?

The vector: The divide between the AI rich and those who fear being replaced by AI is growing.

The scenario: A populist movement targets Big Tech and AI automation. We see taxes on compute, robot taxes, or strict liability laws for AI errors.

A robust strategy: Focus on value creation, not value capture. If your AI strategy is “fire 50% of the support staff,” you are not only making a shortsighted business decision; you are painting a target on your back. If your strategy is “supercharge our staff to do things we couldn’t do before,” you are building a defensible future. Align your success with the success of both your workers and customers.

In Conclusion

The future isn’t something that happens to us; it’s something we create. The most robust strategy of all is to stop asking “What will happen?” and start asking “What future do we want to build?”

As Alan Kay once said, “The best way to predict the future is to invent it.” Don’t wait for the AI future to happen to you. Do what you can to shape it. Build the future you want to live in.



Read the whole story
alvinashcraft
38 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

SharePoint Framework v1.22: What's in the Latest SPFx Update

1 Share

SPFx v1.22 delivers a major toolchain overhaul, TypeScript 5.8 upgrade, and cleaner npm audits. Discover what’s changed and how it affects your dev workflow.

Read the full article: SharePoint Framework v1.22: What's in the Latest SPFx Update.


Microsoft Learn for Educators Newsletter – December 2025

1 Share

Welcome to your December update—here’s what’s new and how you can get involved! 

🚀 In This Issue 

  • 📅 December Community Call – Add to Your Calendar! 
  • 🌟 Faculty Spotlight: Share Your Success Story! 
  • ✨ Ignite 2025: Top AI Innovations for Educators 
  • 🧑‍💻 New Tools & Resources for Faculty 
  • 💬 How to Get Involved 

📅 December MSLE Community Call – Don’t Miss Out! 

Date: Thursday, December 11, 2025 

Time: 10:00 – 11:00 AM PST 

Agenda: 

  • Microsoft Ignite Recap 
  • Azure for Startups 
  • MSLE Faculty Spotlight 
  • MSLE AI Bootcamps 
  • Academic Research Webinar Series 

 Visit this link to add the call to your calendar: December MSLE Community Call 

🌟 Faculty Spotlight: Share Your Success Story! 

Have you or a colleague leveraged MSLE to make a meaningful impact on teaching or student success? We’d love to hear about it! Recognize your own work or spotlight a colleague who is elevating learning through MSLE. Your story could inspire faculty around the world. 

👉 Share your story in the Faculty Spotlight and help amplify faculty voices across the MSLE community. 

✨ Ignite 2025: Top AI Innovations for Educators 

Microsoft Ignite 2025 introduced groundbreaking tools to help you teach smarter, research faster, and engage students in new ways. Here’s what’s new: 

  • Microsoft 365 Copilot Gets Smarter – Meet your new AI-powered teaching assistant—now with Work IQ for personalized insights, Agent Mode for creating syllabi and research papers, and voice commands for hands-free productivity. 
  • Secure AI with Agent 365 – Deploy departmental AI tools with confidence. Agent 365 ensures compliance and security, so you can innovate while maintaining institutional standards. 
  • Foundry IQ & Fabric IQ – Make data-driven decisions with a unified knowledge layer and real-time analytics for research and student success. 
  • Sora 2 Video Integration – Bring your lessons to life with AI-powered video creation—no editing skills required. 
  • AI Skills Navigator – Explore a dynamic, AI-driven learning platform with personalized playlists, expert-led sessions, and free Microsoft Applied Skills credentials. 

 Learn more about all the exciting updates from Ignite at https://ignite.microsoft.com 

🧑‍💻 Why These Updates Matter to You 

  • Save time on administrative tasks 
  • Engage students with dynamic, AI-powered content 
  • Advance your research with new analytics and secure AI tools 

 💬 Get Involved 

Have questions or want to get more involved? Visit MSLEGetInvolved or connect with our MSLE Community Managers; they’re always here to support you. 

Explore upcoming events and added resources in the MSLE Community. 

We are excited to see how you will use these tools to shape the future of learning. See you at the upcoming Community Call! 


Microsoft Foundry - Everything you need to build AI apps & agents

1 Share

Our unified, interoperable AI platform enables developers to build faster and smarter, while organizations gain fleetwide security and governance in a unified portal. 

Yina Arenas, Microsoft Foundry CVP, shares how to keep your development and operations teams coordinated, ensuring productivity, governance, and visibility across all your AI projects. 

Learn more in this Microsoft Mechanics demo, and start building with Microsoft Foundry at ai.azure.com

Feed your agents multiple trusted data sources.

For accurate, contextual responses, get started with Microsoft Foundry. Start here.

Apply safety & security guardrails. 

Ensure responsible AI behavior. Check it out.

Keep your AI apps running smoothly.

Deploy agents to Teams and Copilot Chat, then monitor performance and costs in Microsoft Foundry. See how it works. 

QUICK LINKS: 

00:54 — Tour the Microsoft Foundry portal 

03:32 — The Build tab and Workflows 

05:03 — How to build an agentic app 

07:02 — Evaluate agent performance 

08:37 — Safety and security 

09:18 — Publish your agentic app 

09:41 — Post deployment 

11:36 — Wrap up

Link References 

Visit https://ai.azure.com and get started today 

Unfamiliar with Microsoft Mechanics? 

As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. 

Keep getting this insider knowledge, join us on social: 


Video Transcript:

-If you are building AI apps and agents and want to move faster with more control, the newly expanded Foundry helps you do exactly that, while integrating directly with your code. It works like a unified AI app and agent factory, with rich tooling and observability. A simple developer experience helps you and your team find the right components you need to start building your agents and move seamlessly from idea all the way to production. It is augmented by powerful new capabilities, such as an agent framework for multi-agentic apps and workflow automation, or multisource knowledge base creation to support deep reasoning. New levels of observability across your fleet of agents then help you evaluate how well they’re operating. And it is easier than ever to ensure security and safety controls are in place to support the right level of trust and much more. 

-Let’s tour the new Microsoft Foundry portal while we build an agentic app. We’ll play the role of a clothing company using AI to research new market opportunities. The homepage at ai.azure.com guides you right through a build experience. It’s simple to start building, to create an agent, design a workflow, and browse available AI models right from here. Alternatively, you can quickly copy the project endpoint, the key, and the region to use it directly in your code with the Microsoft Foundry SDK. One of the most notable improvements is how everything you need to do is aligned to the development lifecycle. 

-If you are just getting started, the Discovery tab makes it simple to find everything you need. Featured models are front and center, from OpenAI, Grok, Meta, DeepSeek, Mistral AI, and now for the first time, Anthropic. You can also browse model collections, including models that you can run on your local device with Foundry Local. Model Leaderboard then helps you reference how the top models compare across quality, safety, throughput, and cost. And you’ll see the featured tools, including MCP servers, that you can connect to. Then moving to the left nav, in Agents, you can find samples for different standalone agent types to quickly get you up and running. 

-In Models, you can browse a massive industry-leading catalog of thousands of foundational, open-source, and specialized models. Click any model to see its capabilities, like this one for GPT-5 Chat. Then clicking into Deploy, we can try it out from here. I’ll add a prompt: “What is a must-have apparel for the fall in the Pacific Northwest?” Now, looking at its generated response with recommendations for outerwear, it looks like GPT-5 Chat knows that it rains quite a bit here. If I move back to the catalog view, we can also see the new model router that automatically routes prompts to the most efficient models in real time, ensuring high-quality results while minimizing costs. I already have it deployed here and ready to use. 

-Under Tools, you’ll find all of the available tools that you can use to connect your agents and apps. You can easily find MCP servers and more than a thousand connectors to add to your workflows. You can add them from here or right as you’re building your agent. Next, to accelerate your efforts, you can access dozens of curated solution templates with step-by-step instructions for coding AI right into your apps. These are customizable code samples with preintegrated Azure services and GitHub-hosted quickstart guides for different app types. So there are plenty of components to discover while designing your agent. 

-Next, the Build tab brings powerful new capabilities, whether you’re creating a single agent or a multi-agentic solution. Build is where you manage the assets you own: agents, workflows, models, tools, knowledge and more. And straightaway it’s easy to get to all your current agents or create new ones. I have a few here already that I’ll be calling later to support our multi-agentic app, including this research agent. In Workflows, you can create and see all your multi-agentic apps and workflow automations. 

-To get started, you can pick from different topologies such as Sequential, Human in the Loop, or Group Chat and more. I have a few here, including this one for research that we’ll use in our agentic app. We’ll go deeper on this in just a moment. As you continue building your app, your deployed models can be viewed in context. Here’s the model router that we saw before. And then further down the left rail you’ll find fine-tuning options where you can customize model behavior and outputs using supervised learning, direct preference optimization, and reinforcement techniques. Under the Tools, it’s easy to see which ones are already connected to your environment. Knowledge then allows you to add knowledge bases from Foundry IQ so you can bring not just one but multiple sources, including SharePoint online, OneLake, which is part of Microsoft Fabric, and your search index to ground your agents. 

-And in Data, you can create synthetic datasets, which are very handy for fine-tuning and evaluation. Now that we have the foundational ingredients for our agentic app collected, let’s actually build it. I’ll start with a multi-agent workflow that my team is working on. Workflows are also a type of agent with similar constructs for development, deployment, and management, and they can contain their own logic as well as other agents. The visualizer lets you easily define and view the nodes in the workflow, as well as all connected agents. You can apply conditions like this to a workflow step. Here we’re assessing the competitiveness of the insights generated as we research opportunities for market expansion. 

-There is also a go-to loop. If the insights are not competitive, we’ll iterate on this step. For many of these connectors, you can add agents. I’m going to add an existing agent after the procurement researcher. I’ll choose an agent that we’ve already started working on, the research agent, and jump into the editor. Note that the Playground tab is the starting point for all agents that you create. You can choose the model you want. I’ll choose GPT-5 Chat and then provide the agent with instructions. I’ll add mine here with high-level details for what the agent should do. Below that, in Tools, you can see that my research agent is already connected to our internal SharePoint site in Microsoft 365. I can also add knowledge bases to ground responses right from here. I can turn on memory for my agent to retain notable context and apply guardrails for safety and security controls. I’ll show you more on that later. Agents are also multimodal, including voice, which is great for mobile apps. Using voice, I’ll prompt it with: “What industry is Zava Corp in, and what goods does it produce?”

-[AI] Zava Corporation operates in the apparel industry. It focuses on producing a wide range of clothing and fashion-related goods.

-Next, I’ll type in a text prompt, and that will retrieve content from our SharePoint site to generate its response. And importantly, as I make these changes to my agent, it will now automatically version them, and I can always revert to a previous version. Then as the build phase continues, it’s easy to evaluate agent performance. 

-In Evaluations, I can see all my agent runs. I’ve already started creating an evaluation for our agent using synthetic data to check that we are hitting our goals for output quality and safety. From the Agent, we can review its runs and traces to diagnose latency bottlenecks. And under the Evaluation tab, you can see that our AI quality and safety scores could be better. Using these insights, let’s update our agent and make improvements. Everything shown in the web portal can also be done with code. So let’s do this update in VS Code. This is the same multi-agentic workflow I showed you before, with all of its logic now represented in code. The folders on the left rail represent our different agents, and the workflow structure describes the multi-agent reasoning process. It’s designed to take incoming requests and route them to the relevant expert agent to complete the tasks. We have an intent classifier agent, a procurement researcher, the market researcher one that we just built, and two more with expertise in negotiation and review. 

-And the workflow is connected to a knowledge base with multiple sources to inform agentic responses. This includes a search index for supplier information, relevant financial data from Microsoft Fabric, product data from SharePoint, and we can connect to available MCP servers like this one from GitHub. Having this rich multisource knowledge base feeding our agentic workflow should ensure more accurate results. In fact, if we look at the evaluation for this workflow, you will see that AI quality is a lot higher overall. But we still have to do some work on safety. We’ll address this by adding the right safety and security controls right from Microsoft Foundry. For that, we’ll head over to Guardrails where you can apply controls based on specific AI risks. 

-I’ll target jailbreak attacks, and then I can apply additional associated controls like content safety and protected materials to ensure our agents also behave responsibly. And I can scope what this guardrail should govern: either a model or an agent; or in my case, I’ll select our workflow to address the low safety score that we saw earlier. And with that, it’s ready to publish. In fact, we’ve made it easier to get your apps and agents into the productivity tools that people use every day. I can publish our agentic app directly into Microsoft Teams and Copilot Chat right from our workflow. And once it is approved by the Microsoft 365 admin, business users can find it in the Agent Store and pin it for easy access. Now, with everything in production, your developer and operation teams can continue working together in Microsoft Foundry, post-deployment and beyond. 

-The Operate tab has the full Foundry control plane. In the overview, you can quickly monitor key operational metrics and spot what needs your attention. This is a full cross-fleet view of your agents. You can also filter by subscription and then by project if you want. The top active alerts are listed right here for me to take action. And I can optionally view all alerts if I want, along with rollout metrics for estimated cost, agent success rates, and total token usage. Below that, we can see the details of agent runs over time, along with top- and bottom-performing agents with trends for each. All performance data is built on OpenTelemetry standards that can be easily surfaced inside Azure Monitor or your favorite reporting tool. 

-Next, under Assets, for every agent, model, and tool in your environment, you can see metrics like status, error rates, estimated cost, token usage, and number of runs. This gives you a quick pulse on performance activity and health for each asset. And you can click in for more details if you want to. Compliance then lets IT teams view and set default policies by AI risk for any asset created. You can add controls and then scope it by the entire subscription or resource group. That way they will automatically inherit governance controls. Under Quota, you can keep all of your costs in check while ensuring that your AI applications and agents stay within your token limits. And finally, under Admin, you can find all of your resources and related configuration controls for each project in one place, and click in to manage roles and access. If you go back, the newly integrated AI gateways also allow you to connect and manage agents, even from other clouds. 

-So that’s how the expanded Microsoft Foundry simplifies the development and operations experience to help you and your team build powerful AI apps and agents faster, with more control, while integrated directly into your code. Visit ai.azure.com to learn more and get started today. Keep watching Microsoft Mechanics for the latest tech updates, and subscribe if you haven’t already. Thanks for watching.

 


Microsoft Agent Pre-Purchase Plan: One Unified Path to Scale AI Agents

1 Share

AI is now essential, and at Microsoft Ignite 2025, we introduced a new foundation for intelligent agents: Work IQ, Fabric IQ, and Foundry IQ. These three IQs represent the intelligence layer that gives agents deep context: understanding how people work, connecting to enterprise data, and orchestrating knowledge across platforms. Together with the launch of Microsoft Agent Factory, organizations now have a unified program to build, deploy, and manage agents powered by these IQs.

However, deploying advanced agents can still be complicated by fragmented procurement, unpredictable budgets, and governance challenges. Organizations often have to choose between platforms like Microsoft Foundry and Copilot Studio. Each has unique strengths, but using just one limits flexibility and prevents organizations from building truly optimized agents.

Microsoft Agent Pre-Purchase Plan (P3) is designed to reduce this friction. By unifying access to agentic services across both Microsoft Foundry and Copilot Studio, Microsoft Agent P3 empowers organizations to harness the full potential of the IQ layer, removing barriers and unlocking the value of truly intelligent, context-driven agents.

What is the Microsoft Agent Pre-Purchase Plan, and how does it work?

  • Microsoft Agent P3 is a one-year pay-upfront option.
  • Customers commit upfront to a lump-sum pool of Agent Commit Units (ACU) that can be used at any time during the one-year term.
  • Every time you consume eligible services within Microsoft Foundry or Copilot Credit*-enabled agentic services, ACUs are automatically drawn down from your P3 balance.
  • If you use up your balance before the year ends, you can add another P3 plan or switch to pay-as-you-go.
  • If you don’t use all your credits by the end of the year, the remaining balance expires.

Pricing*

*Pricing as of November 2025, subject to change.

**Example: if Microsoft Copilot Studio generates a retail cost of $100 based on Copilot Credit and Microsoft Foundry usage, then 100 ACUs are consumed.

What is covered by the Microsoft Agent Pre-Purchase Plan?

* List as of November 2025, subject to change

** Currently in Private Preview

*** Microsoft reserves the right to update Copilot Credit eligible products

Customer Example

Suppose a customer expects to consume 1,500,000 Copilot Credits with custom agents created in Microsoft Copilot Studio. Assuming a pay-as-you-go rate of $0.01 per Copilot Credit, this usage costs $15,000.

In addition, suppose they use 5,000 Microsoft Foundry Provisioned Throughput Units (PTUs). Assuming a pay-as-you-go rate of $1 per PTU, this usage costs $5,000.

Purchasing Tier 1 of Microsoft Agent P3 (20,000 ACUs) at $19,000 therefore yields a 5% saving compared to the $20,000 pay-as-you-go cost for the same usage.
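The arithmetic in this example checks out; here it is as a quick sanity check, using the article's assumed pay-as-you-go rates (pricing as of November 2025, subject to change):

```python
# Reproducing the article's worked example with its assumed PAYG rates.
copilot_credits = 1_500_000
credit_rate = 0.01            # $ per Copilot Credit (assumed PAYG rate)
ptu_units = 5_000
ptu_rate = 1.0                # $ per PTU (assumed PAYG rate)

payg_total = copilot_credits * credit_rate + ptu_units * ptu_rate  # $20,000
p3_price = 19_000             # Tier 1: 20,000 ACUs
savings_pct = (payg_total - p3_price) / payg_total * 100           # 5.0%
```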

How to purchase a Microsoft Agent Pre-Purchase Plan?

  • Sign in to the Azure portal → Reservations → + Add → Microsoft Agent Pre-Purchase Plan.
  • Select your subscription and scope.
  • Choose your tier and complete payment.

What sets Microsoft Agent Pre-Purchase Plan apart?

At the heart of Microsoft Agent Pre-Purchase Plan are four pillars that redefine how organizations consume AI services:

  1. One Plan: A single offer that spans Foundry and Copilot Credit*-enabled agentic services. No more siloed credits or SKU-level complexity, just one pool for all your AI workloads.
  2. Breadth of Services: Access to 32 services, from Azure AI Search and Cognitive Services to orchestration tools and Copilot-enabled experiences. 
  3. One Governance Path: Simplifies procurement and budget management. Procurement teams gain visibility and control without sacrificing agility.
  4. Predictable Savings: Get discounts and avoid surprises when you choose this plan.

Conclusion

The Microsoft Agent Pre-Purchase Plan is designed to make your AI journey simpler, smarter, and more cost-effective. By combining the strengths of Microsoft Foundry and Copilot Studio into a single, unified offer, the plan eliminates the need to choose between platforms or manage multiple contracts. Organizations benefit from predictable budgeting, streamlined procurement, and the flexibility to innovate across more than 32 agentic services, all with one pool of funds.

Whether you’re just starting with AI or scaling enterprise-wide adoption, the Microsoft Agent Pre-Purchase Plan empowers you to unlock the full value of Microsoft’s agentic platform—driving innovation, efficiency, and business impact. And with support for agents built on Work IQ, Fabric IQ, and Foundry IQ, customers can be confident their solutions are grounded in the latest intelligence announced at Ignite.

What’s next

Read the Microsoft Agent P3 Offer MS Learn Doc

Purchase Microsoft Agent P3 in your Azure Portal

* Microsoft Copilot Studio, Dynamics 365 first-party agents, and Copilot Chat. Microsoft reserves the right to update Copilot Credit eligible products.
