Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
152449 stories
·
33 followers

Widely Available Web Platform Features I'd Like To Learn

1 Share
Ben Nadel compiles a list of "Widely Available" features that have been added to the web platform in the last few years....
Read the whole story
alvinashcraft
27 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Signals for 2026

1 Share

We’re three years into a post-ChatGPT world, and AI remains the focal point of the tech industry. In 2025, several ongoing trends intensified: AI investment accelerated; enterprises integrated agents and workflow automation at a faster pace; and the toolscape for professionals seeking a career edge is now overwhelmingly expansive. But the jury’s still out on the ROI from the vast sums that have saturated the industry. 

We anticipate that 2026 will be a year of increased accountability. Expect enterprises to shift focus from experimentation to measurable business outcomes and sustainable AI costs. There are promising productivity and efficiency gains to be had in software engineering and development, operations, security, and product design, but significant challenges also persist.  

Bigger picture, the industry is still grappling with what AI is and where we’re headed. Is AI a worker that will take all our jobs? Is AGI imminent? Is the bubble about to burst? Economic uncertainty, layoffs, and shifting AI hiring expectations have undeniably created stark career anxiety throughout the industry. But as Tim O’Reilly pointedly argues, “AI is not taking jobs: The decisions of people deploying it are.” No one has quite figured out how to make money yet, but the organizations that succeed will do so by creating solutions that “genuinely improve. . .customers’ lives.” That won’t happen by shoehorning AI into existing workflows but by first determining where AI can actually improve upon them, then taking an “AI first” approach to developing products around these insights.

As Tim O’Reilly and Mike Loukides recently explained, “At O’Reilly, we don’t believe in predicting the future. But we do believe you can see signs of the future in the present.” We’re watching a number of “possible futures taking shape.” AI will undoubtedly be integrated more deeply into industries, products, and the wider workforce in 2026 as use cases continue to be discovered and shared. Topics we’re keeping tabs on include context engineering for building more reliable, performant AI systems; LLM posttraining techniques, in particular fine-tuning as a means to build more specialized, domain-specific models; the growth of agents, as well as the protocols, like MCP, to support them; and computer vision and multimodal AI more generally to enable the development of physical/embodied AI and the creation of world models. 

Here are some of the other trends that are pointing the way forward.

Software Development

In 2025, AI was embedded in software developers’ everyday work, transforming their roles—in some cases dramatically. A multitude of AI tools are now available to create code, and workflows are undergoing a transformation shaped by new concepts including vibe coding, agentic development, context engineering, eval- and spec-driven development, and more.

In 2026, we’ll see an increased focus on agents and the protocols, like MCP, that support them; new coding workflows; and the impact of AI on assisting with legacy code. But even as software development practices evolve, fundamental skills such as code review, design patterns, debugging, testing, and documentation are as vital as ever.

And despite major disruption from GenAI, programming languages aren’t going anywhere. Type-safe languages like TypeScript, Java, and C# provide compile-time validation that catches AI errors before production, helping mitigate the risks of AI-generated code. Memory safety mandates will drive interest in Rust and Zig for systems programming: Major players such as Google, Microsoft, Amazon, and Meta have adopted Rust for critical systems, and Zig is behind Anthropic’s most recent acquisition, Bun. And Python is central to creating powerful AI and machine learning frameworks, driving complex intelligent automation that extends far beyond simple scripting. It’s also ideal for edge computing and robotics, two areas where AI is likely to make inroads in the coming year.

Takeaways

Which AI tools programmers use matter less than how they use them. With a wide choice of tools now available in the IDE and on the command line, and new options being introduced all the time, it’s useful to focus on the skills needed to produce good code rather than focusing on the tool itself. After all, whatever tool they use, developers are ultimately responsible for the code it produces.

Effectively communicating with AI models is the key to doing good work. The more background AI tools are given about a project, the better the code they generate will be. Developers have to understand both how to manage what the AI knows about their project (context engineering) and how to communicate it (prompt engineering) to get useful outputs.

AI isn’t just a pair programmer; it’s an entire team of developers. Software engineers have moved beyond single coding assistants. They’re building and deploying custom agents, often within complex setups involving multi-agent scenarios, teams of coding agents, and agent swarms. But as the engineering workflow shifts from conducting AI to orchestrating AI, the fundamentals of building and maintaining good software—code review, design patterns, debugging, testing, and documentation—stay the same and will be what elevates purposeful AI-assisted code above the crowd.

Software Architecture

AI has progressed from being something architects might have to consider to something that is now essential to their work. They can use LLMs to accelerate or optimize architecture tasks; they can add AI to existing software systems or use it to modernize those systems; and they can design AI-native architectures, an approach that requires new considerations and patterns for system design. And even if they aren’t working with AI (yet), architects still need to understand how AI relates to other parts of their system and be able to communicate their decisions to stakeholders at all levels.

Takeaways

AI-enhanced and AI-native architectures bring new considerations and patterns for system design. Event-driven models can enable AI agents to act on incoming triggers rather than fixed prompts. In 2026, evolving architectures will become more important as architects look for ways to modernize existing systems for AI. And the rise of agentic AI means architects need to stay up-to-date on emerging protocols like MCP.

Many of the concerns from 2025 will carry over into the new year. Considerations such as incorporating LLMs and RAG into existing architectures, emerging architecture patterns and antipatterns specifically for AI systems, and the focus on API and data integrations elevated by MCP are critical.

The fundamentals still matter. Tools and frameworks are making it possible to automate more tasks. However, to successfully leverage these capabilities to design sustainable architecture, enterprise architects must have a full command of the principles behind them: when to add an agent or a microservice, how to consider cost, how to define boundaries, and how to act on the knowledge they already have.

Infrastructure and Operations

The InfraOps space is undergoing its most significant transformation since cloud computing, as AI evolves from a workload to be managed to an active participant in managing infrastructure itself. With infrastructure sprawling across multicloud environments, edge deployments, and specialized AI accelerators, manual management is becoming nearly impossible. In 2026, the industry will keep moving toward self-healing systems and predictive observability—infrastructure that continuously optimizes itself, shifting the human role from manual maintenance to system oversight, architecture, and long-term strategy.

Platform engineering makes this transformation operational, abstracting infrastructure complexity behind self-service interfaces, which lets developers deploy AI workloads, implement observability, and maintain security without deep infrastructure expertise. The best platforms will evolve into orchestration layers for autonomous systems. While fully autonomous systems remain on the horizon, the trajectory is clear.

Takeaways

AI is becoming a primary driver of infrastructure architecture. AI-native workloads demand GPU orchestration at scale, specialized networking protocols optimized for model training and inference, and frameworks like Ray on Kubernetes that can distribute compute intelligently. Organizations are redesigning infrastructure stacks to accommodate these demands and are increasingly considering hybrid environments and alternatives to hyperscalers to power their AI workloads—“neocloud” platforms like CoreWeave, Lambda, and Vultr.

AI is augmenting the work of operations teams with real-time intelligence. Organizations are turning to AIOps platforms to predict failures before they cascade, identify anomalies humans would miss, and surface optimization opportunities in telemetry data. These systems aim to amplify human judgment, giving operators superhuman pattern recognition across complex environments.

AI is evolving into an autonomous operator that makes its own infrastructure decisions. Companies will implement emerging “agentic SRE” practices: systems that reason about infrastructure problems, form hypotheses about root causes, and take independent corrective action, replicating the cognitive workload that SREs perform, not just following predetermined scripts.

Data

The big story of the back half of 2025 was agents. While the groundwork has been laid, in 2026 we expect focus on the development of agentic systems to persist—and this will necessitate new tools and techniques, particularly on the data side. AI and data platforms continue to converge, with vendors like Snowflake, Databricks, and Salesforce releasing products to help customers build and deploy agents. 

Beyond agents, AI is making its influence felt across the entire data stack, as data professionals target their workflows to support enterprise AI. Significant trends include real-time analytics, enhanced data privacy and security, and the increasing use of low-code/no-code tools to democratize data access. Sustainability also remains a concern, and data professionals need to consider ESG compliance, carbon-aware tooling, and resource-optimized architectures when designing for AI workloads.

Takeaways

Data infrastructure continues to consolidate. The consolidation trend has not only affected the modern data stack but also more traditional areas like the database space. In response, organizations are being more intentional about what kind of databases they deploy. At the same time, modern data stacks have fragmented across cloud platforms and open ecosystems, so engineers must increasingly design for interoperability. 

A multiple database approach is more important than ever. Vector databases like Pinecone, Milvus, Qdrant, and Weaviate help power agentic AI—while they’re a new technology, companies are beginning to adopt vector databases more widely. DuckDB’s popularity is growing for running analytical queries. And even though it’s been around for a while, ClickHouse, an open source distributed OLAP database used for real-time analytics, has finally broken through with data professionals.

The infrastructure to support autonomous agents is coming together. GitOps, observability, identity management, and zero-trust orchestration will all play key roles. And we’re following a number of new initiatives that facilitate agentic development, including AgentDB, a database designed specifically to work effectively with AI agents; Databricks’ recently announced Lakebase, a Postgres database/OLTP engine integrated within the data lakehouse; and Tiger Data’s Agentic Postgres, a database “designed from the ground up” to support agents.

Security

AI is a threat multiplier—59% of tech professionals cited AI-driven cyberthreats as their biggest concern in a recent survey. In response, the cybersecurity analyst role is shifting from low-level human-in-the-loop tasks to complex threat hunting, AI governance, advanced data analysis and coding, and human-AI teaming oversight. But addressing AI-generated threats will also require a fundamental transformation in defensive strategy and skill acquisition—and the sooner it happens, the better.

Takeaways

Security professionals now have to defend a broader attack surface. The proliferation of AI agents expands the attack surface. Security tools must evolve to protect it. Implementing zero trust for machine identities is a smart opening move to mitigate sprawl and nonhuman traffic. Security professionals must also harden their AI systems against common threats such as prompt injection and model manipulation.

Organizations are struggling with governance and compliance. Striking a balance between data utility and vulnerability requires adherence to data governance best practices (e.g., least privilege). Government agencies, industry and professional groups, and technology companies are developing a range of AI governance frameworks to help guide organizations, but it’s up to companies to translate these technical governance frameworks into board-level risk decisions and actionable policy controls.

The security operations center (SOC) is evolving. The velocity and scale of AI-driven attacks can overwhelm traditional SIEM/SOAR solutions. Expect increased adoption of agentic SOC—a system of specialized, coordinated AI agents for triage and response. This shifts the focus of the SOC analyst from reactive alert triage to proactive threat hunting, complex analysis, and AI system oversight.

Product Management and Design

Business focus in 2025 shifted from scattered AI experiments to the challenge of building defensible, AI-native businesses. Next year we’re likely to see product teams moving from proof of concept to proof of value

One thing to look for: Design and product responsibilities may consolidate under a “product builder”—a full stack generalist in product, design, and engineering who can rapidly build, validate, and launch new products. Companies are currently hiring for this role, although few people actually possess the full skill set at the moment. But regardless of whether product builders become ascendant, product folks in 2026 and beyond will need the ability to combine product validation, good-enough engineering, and rapid design, all enabled by AI as a core accelerator. We’re already seeing the “product manager” role becoming more technical as AI spreads throughout the product development process. Nearly all PMs use AI, but they’ll increasingly employ purpose-built AI workflows for research, user-testing, data analysis, and prototyping.

Takeaways

Companies need to bridge the AI product strategy gap. Most companies have moved past simple AI experiments but are now facing a strategic crisis. Their existing product playbooks (how to size markets, roadmapping, UX) weren’t designed for AI-native products. Organizations must develop clear frameworks for building a portfolio of differentiated AI products, managing new risks, and creating sustainable value. 

AI product evaluation is now mission-critical. As AI becomes a core product component and strategy matures, rigorous evaluation is the key to turning products that are good on paper into those that are great in production. Teams should start by defining what “good” means for their specific context, then build reliable evals for models, agents, and conversational UIs to ensure they’re hitting that target.

Design’s new frontier is conversations and interactions. Generative AI has pushed user experience beyond static screens into probabilistic new multimodal territory. This means a harder shift toward designing nonlinear, conversational systems, including AI agents. In 2026, we’re likely to see increased demand for AI conversational designers and AI interaction designers to devise conversation flows for chatbots and even design a model’s behavior and personality.

What It All Means

While big questions about AI remain unanswered, the best way to plan for uncertainty is to consider the real value you can create for your users and for your teams themselves right now. The tools will improve, as they always do, and the strategies to use them will grow more complex. Being deeply versed in the core knowledge of your area of expertise gives you the foundation you’ll need to take advantage of these quickly evolving technologies—and ensure that whatever you create will be built on bedrock, not shaky ground.



Read the whole story
alvinashcraft
28 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Strengthening GitLab.com security: Mandatory multi-factor authentication

1 Share

To strengthen the security of all user accounts on GitLab.com, GitLab is implementing mandatory multi-factor authentication (MFA) for all users and API endpoints who sign in using a username and password.

Why this is happening

This move is a vital part of our Secure by Design commitment. MFA provides critical defense against credential stuffing and account takeover attacks, which remain persistent threats across the software development industry.

Key information to know

What is changing?

GitLab is making MFA mandatory for sign-ins that authenticate with a username and password. This introduces a critical second layer of security beyond just a password.

Does this apply to me?

  1. Yes, it applies if: You sign in to GitLab.com with a username and a password, or use a password to authenticate to the API.
  2. No, it does not apply if: You exclusively use social sign-on (such as Google) or single sign-on (SSO) for access. (Please note: If you use SSO, but also have a password for direct login, you will still need MFA for any non-SSO, password-based login.)

When is the rollout?

  1. The implementation will be a phased approach over the coming months, intended to both minimize unexpected interruptions and productivity loss for users and prevent account lockouts. Groups of users will be asked to enable MFA over time. Each group will be selected based on the actions they’ve taken or the code they’ve contributed to. You will be notified in the following ways:
    • ✉️ Email notification - prior to the phase where you will be impacted
    • 🔔 Regular in-product reminders - 14 days before
    • ⏱️ After a specific time period (this will be shared via email) - blocked from accessing GitLab until you enable MFA

What action do I need to take?

  1. If you sign in to GitLab.com with a username and a password:
    • We highly recommend you proactively set up one of the available MFA methods today, such as passkeys, an authenticator app, a WebAuthn device, or email verification. This ensures the most secure and seamless transition:
    • Go to your GitLab.com User Settings.
    • Select the Account section.
    • Activate two-factor authentication and configure your preferred method (e.g., authenticator app or a WebAuthn device).
    • Securely save your recovery codes to guarantee you can regain access if needed.
  2. If you use a password to authenticate to the API:
    • We highly recommend you proactively switch to a personal access token (PAT). Read our documentation to learn more.

FAQ

What happens if I don't enable MFA by the deadline?

  • You'll be required to set up MFA before you can sign in.

Does this affect CI/CD pipelines or automation?

  • Yes, unless you're using PATs or deploy tokens instead of passwords.

I use SSO but sometimes sign in directly, do I need MFA?

  • Yes, MFA is required for any password-based authentication, including fallback scenarios.

Specific timelines and further resources will be shared as rollout dates approach. Thank you for your attention to this important change.

Read the whole story
alvinashcraft
28 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Coding Python With Confidence: Beginners Live Course Participants

1 Share

Are you looking for that solid foundation to begin your Python journey? Would the accountability of scheduled group classes help you get through the basics and start building something? This week, two members of the Python for Beginners live course discuss their experiences.

We speak with course instructor Stephen Gruppetta about building a course where the participants start using their knowledge as soon as possible. He describes how he’s evolved his teaching techniques over years of working with beginners. We explore the advantages of having a curated collection of written tutorials, video courses, and a forum for asking those nagging questions.

We also speak with students Louis and Andrew about their experiences learning Python through the course. They discuss learning how to apply their new skills, employing them in their careers, and building confidence to continue their Python learning journey.

Spotlight: Python for Beginners: Code With Confidence

Learn the fundamentals of Python step-by-step in a friendly, interactive cohort. Build confidence writing code and understand the “why” behind Python’s core concepts.

Topics:

  • 00:00:00 – Introduction
  • 00:01:40 – Instructor Stephen Gruppetta
  • 00:02:42 – Designing the course
  • 00:09:01 – Introducing mini-projects early
  • 00:13:22 – How have the questions changed for Python beginners?
  • 00:20:23 – Taking advantage of Real Python resources
  • 00:24:07 – More courses for 2026
  • 00:25:40 – Spotlight: Python for Beginners
  • 00:26:39 – Python for Beginners participants
  • 00:27:50 – Louis’ background in programming
  • 00:30:46 – Andrew’s background in programming
  • 00:37:43 – Starting to use the knowledge with mini-projects
  • 00:42:52 – What were challenges with the language?
  • 00:54:15 – Working on the larger final project
  • 00:59:45 – What advantages did the cohort-style course provide?
  • 01:03:56 – Are you ready for that blank page?
  • 01:12:00 – How do you see yourself using these new skills?
  • 01:17:39 – Thanks and goodbye

Show Links:

Level up your Python skills with our expert-led courses:

Support the podcast & join our community of Pythonistas





Download audio: https://dts.podtrac.com/redirect.mp3/files.realpython.com/podcasts/RPP_E279_04_PfB_Students.26d153bfcdc4.mp3
Read the whole story
alvinashcraft
28 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Why the Best Product Owners Let Go of What They're Best At | Carmela Then

1 Share

Carmela Then: Why the Best Product Owners Let Go of What They're Best At

The Great Product Owner: The Humble Leader Who Served His Team

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

 

"He was there, he was present, he was serving the team." - Carmela Then

 

Carmela worked with a Product Owner at a bank who embodied everything servant leadership should look like. This wasn't a PO who lorded his business expertise over the team—instead, he brought cookies, cracked jokes, and made everyone feel valued regardless of their role. He knew the product landscape intimately and participated in every refinement session, yet remained approachable and coachable. 

When team members came to him confused about stakeholder requests, he willingly stepped in as a mediator. Perhaps most impressively, he actively worked to break down the hierarchical mindset that often plagues traditional organizations. In the beginning, testers felt they couldn't question the business analyst or Product Owner. 

By the end, QA team members were confidently pointing out missing scenarios and use cases—and the PO would respond with genuine appreciation: "Oh yes! We missed it! Let's prioritize that story for the next sprint." This PO understood that his role wasn't to have all the answers, but to create an environment where anyone could contribute their expertise. The result was a truly flat, collaborative Scrum team operating exactly as Scrum was designed to work.

 

Self-reflection Question: How accessible are you to your team, and do you create an environment where anyone—regardless of role—feels comfortable challenging your thinking?

The Bad Product Owner: When Expertise Becomes a Barrier to Collaboration

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

 

"He knows everything himself, and everything is in his head. So nobody else knows what he has in his head." - Carmela Then

 

Carmela describes a Product Owner who wasn't a bad person—in fact, he was incredibly capable. He knew the business from front to back, understood the systems intimately from years of analyst work, and could even write pseudocode himself. The problem? His very competence became a barrier to team collaboration. 

Because he knew so much, he struggled to articulate his ideas to others. Frustrated that developers couldn't read his mind, he started writing the code himself and handing it to developers with instructions to simply implement it. The result was disengaged developers who had no understanding of the bigger picture, and a PO who was drowning in work that wasn't his to do. 

Carmela approached this with humility, asking what she calls "dumb questions" and requesting that he draw things on paper so she could understand. She made excuses about her "bad memory" to create documentation that could be shared with the whole team. 

Over multiple Program Increments, she gently coached him to trust his team: "You are one person. Please let the team help you. The developers are great at what they do—if you share what you're trying to achieve, they can write code that's more efficient and easier to maintain." Eventually, he learned to let go of the coding and focus on what only he could do: sharing his deep business knowledge.

 

Self-reflection Question: As a leader, what tasks are you holding onto that you should be delegating—and what is your reluctance costing your team?

 

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

 

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

 

Buy Now on Amazon

 

[The Scrum Master Toolbox Podcast Recommends]

 

About Carmela Then

 

Carmela is a Senior Business Analyst with 15+ years in financial and mining sectors. A Certified and Advanced ScrumMaster, she excels in leading agile initiatives, delivering business value, and aligning technical outcomes with strategic goals.

 

You can link with Carmela Then on LinkedIn.

 





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260109_Carmela_Then_F.mp3?dest-id=246429
Read the whole story
alvinashcraft
28 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Video: One-Shotting an App with Claude Code – Jan 2026

1 Share

On 08 Jan 2026, I recorded this video of me using Claude Code to “one-shot” the development of a Windows application. What does one-shot mean? Andy’s definition: A one-shot is when a developer interacts with AI and the results are generated correctly on the first attempt.

Enjoy!

Read the whole story
alvinashcraft
29 minutes ago
reply
Pennsylvania, USA
Share this story
Delete
Next Page of Stories