Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

How do developers define their worth when code is written by AI?

1 Share

Lately I’ve been on a few podcasts and in interviews, and one question came up almost every time:

What is left for developers to care about or define themselves with when all the code is written by AI?

Here is the quick answer: being a developer was never about writing code. Code is a tool to achieve the thing we really care about: solving problems. Every developer I know loves solving problems and when there aren’t any problems to solve, we invent them.

This is the reason why so many frameworks and libraries exist. We take a coding solution we built and make it generic so that it can deal with whatever problem is thrown at it. And then we lose interest and start all over again.

Don’t get me wrong, writing code is great fun. Having witnessed and helped new languages and environments evolve and change from an OK concept to a platform almost every software solution relies on is also great. Squeezing the last bit of optimisation out of a script whilst keeping it understandable and maintainable is a great feeling. But the code is not the end goal. If we find a tool that does the job as well, we will use that.

Every great developer I know is open to change and eager to learn about new things to do and try out. Asking if the code is what defines us is a sign that people still do not take the role of programmer as a normal thing for humans to do. We’re not some freaks in the corner whom nobody understands and who stand just outside of “normal” society.

We’re doing a job, and we are constantly honing our craft to find better ways to make computers help others simplify their lives. Creative people thrive doing the thing that makes them happy. Writers write although the web is 90% AI generated and algorithm optimised slop. Musicians play in their garage and then pubs with 10 people because they like making the music they do. Painters paint although a prompt could give you a seemingly perfect picture. People knit, sew and weave although there is already far too much fashion available to ever wear in a lifetime.

Developers use code as a tool to create. So when you ask me if I feel threatened by AI and agents I can safely say that I am not. These things can take the task and the typing and the releasing from me, but I still feel a lot of joy popping open the hood and looking at things the machines created knowing that I can read and understand it. I can take it apart and put it back together. I can make it do things that the machine didn’t think of. I can make it better. I can make it mine. And so can you.

Read the whole story
alvinashcraft
18 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

The Business and Politics of Platform Status Page Details


I like GitHub’s recent blog post on transparency around their status page. Status pages are human and machine-readable properties I’ve tracked for API providers as part of my APIs.json work for over a decade now. API status pages emerged as a standard shortly after we became dependent on these platforms and their APIs, and have become an expected building block of any serious platform. More evolution, discussion, and transparency around our platforms is a good thing, and something we should see more of.

You can see GitHub wrestling with the technical details and the best way to communicate around real or perceived instability. They have added a new “degraded performance” state, per-service uptime metrics, and clearer communication around model availability. They are separating things into three buckets now: degraded performance, partial outage, and major outage, with per-service metrics determining these states.

The GitHub Status Page breaks things down by Git operations, webhooks, API requests, issues, pull requests, Actions, Packages, Pages, Codespaces, and now Copilot, which lets you assess which part of the platform you care about being up or down, and separates the API from the platform. Each platform will have its own way of breaking things down, but I will be looking for the common patterns around what status pages report, what services they use, and the separation of the API from the rest of the platform.

I’ll have to step back and do some thinking about the business and politics of this transparency from GitHub. What does the introduction of “degraded performance” do when it comes to service level agreements or other general expectations? I am curious to think more about what they might be getting ahead of here. But I don’t want to assume any ill intent behind their blog post. I just get nervous when I see “transparency” used, as I have watched transparency pages evolve from something helpful into something used to split business and political hairs in favor of platforms.

I just wanted to write about this so I have a timestamp in the blog, and it is something I can revisit and look at across other providers–then I will likely understand the bigger picture and how GitHub’s changes fit into things.




AI Coding, Streamer Maps, Glitches, and more

From: Fritz's Tech Tips and Chatter
Views: 1

Let's have some more fun building a map for streamers


Agent Building Trends

From: AIDailyBrief
Views: 15

In this Operator's Bonus episode, NLW zooms out from the Agent Madness bracket to share the patterns emerging across nearly 100 agent submissions — from the shift toward AI org charts and "markets of one" software, to the memory gap holding the whole field back. He also previews the Elite Eight matchups.

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at http://patreon.com/aidailybrief
Learn more about the show https://aidailybrief.ai/


How Microsoft Planner helped build the Maia 200 AI Accelerator


Maia 200 is Microsoft’s second-generation first-party AI accelerator built to support large-scale AI inference workloads in Azure. As Maia 200 moved from pre‑silicon design toward deployment, work across multiple engineering organizations was managed within a single program responsible for bringing the accelerator into Azure. At its peak, that effort tracked over 700 active tasks spanning silicon, software, and datacenters. To manage the intricacies of the effort at that scale, the program team turned to Microsoft Planner.

As the Maia 200 program was forming, much of the individual engineering organizations’ work was already in motion across different products including Planner, Excel, and Azure DevOps. Creating a shared view across multiple products proved challenging, as milestones that looked achievable to one team could quietly introduce risks for another, and executive discussions relied on manually assembled status updates rather than a live picture of where the program stood. Program leads needed a shared view of execution to manage dependencies across workstreams, align milestone progress, and identify risks that had the potential to impact delivery timelines.

Creating program-level visibility with Planner

With work already in motion across multiple organizations, the Maia 200 program maintained existing team workflows and tools, but selected Planner as the central coordination layer. As a collaborative work management solution, Planner provides a unified experience that spans individual task tracking through enterprise‑level project management, giving teams a single place to plan, manage, and complete task‑based initiatives. This allowed swim lanes, milestones, critical‑path tasks, and dependencies to be visible regardless of where the underlying work was managed.

Maia 200 program leads began setting up the plan by creating a series of visual swim lanes for each engineering organization. These were built in Planner as parent tasks and subtasks, covering areas like silicon feature readiness, networking enablement, and system validation. Each swim lane was color-coded to the product or engineering owner responsible for that area, and a custom field was added to track the directly responsible individual (DRI) for each task. This structure made ownership immediately clear throughout the plan. While the plan was originally designed for program-level visibility, it quickly became more than that, as some development leads began using these sections as their primary source of truth for day-to-day project execution.

Screenshots reflect the Planner configuration used throughout this program and have been anonymized for confidentiality.

Next, the Maia 200 program team created a structure for tracking overall program milestones. The use of a single parent task called "Top‑Level Program Milestones" visually distinguished milestones from swim lanes within the plan. Underneath, milestones were captured as individual subtasks and grouped by stages of the full program lifecycle, helping align work across phases and clarify what the program was working toward at each stage.

From the start, the same plan served both executive leadership and engineering team leads by providing a consistent view of program progress while still allowing milestone tracking to connect directly to the work‑level details required for execution.

Screenshots reflect the Planner configuration used throughout this program and have been anonymized for confidentiality.

Tracking the critical path and dependencies within Planner

With milestones and swim lanes in place, the next challenge was to track cross-organizational dependencies. The program leads used Planner's dependency feature to track blockers and reinforced a consistent rule in program meetings that any capability on the critical path needed to be represented in Planner. When one group's work depended on another's, both sides created dependency-linked tasks in their respective swim lanes to make the connection visible.

In practice, this meant that when multiple Azure DevOps items rolled up to a single critical-path capability, that capability was tracked in Planner as a task. Dependencies between teams were also represented within the plan, even when the underlying execution remained in other systems. As a result, if a team became blocked, that blocker was visible in the plan and acknowledged by both sides.

The Maia 200 program team also mapped milestones to related tasks within each swim lane, setting each milestone as dependent on the final task (or tasks) required for completion. Planner's scheduling engine then automatically updated milestone dates as work within the swim lanes progressed.

To monitor progress over time, the team used Planner baselines to capture point-in-time snapshots of the plan, allowing them to export task data to Excel, compare multiple baselines, and identify delivery trends as needed.

Tracking risks within Planner

Risks were also tracked directly in Planner, giving program leaders a single view of the work underway and the factors that could impact it. Each risk was represented as a task in the plan and made dependent on the related tasks within each swim lane. Milestones were similarly linked to these risk tasks, keeping mitigation work directly connected to the execution required to reach each milestone.

Screenshots reflect the Planner configuration used throughout this program and have been anonymized for confidentiality.

With critical path work, cross-team dependencies, and associated risks visible in one place, program leads could anticipate risks earlier and respond before changes had a chance to ripple across the program.

Over time, this shifted the nature of program discussions. Instead of spending meeting time providing status updates, program and engineering leads were able to focus on the decisions that mattered, including which risks to prioritize, which tradeoffs to make, and where to act before a dependency became a blocker.

For program leadership, this meant schedule changes never arrived as a surprise, because blockers, dependencies, and risks were already visible and actively managed within the plan. For the engineering teams, it meant a dependency could not be claimed without being represented in the plan and acknowledged by the team responsible for driving the dependent work. By connecting milestones, critical path work, cross‑team dependencies, and risk tracking in a single coordination layer, Planner gave the Maia 200 program the shared operating picture it needed to keep execution aligned across organizations, systems, and every stage of delivery.


End-to-End Database OpenTelemetry, Priority Transactions, and More


New ODP.NET 23.26.2 Features

Key Takeaways

  • ODP.NET 23.26.2 has been released for managed and core providers.
  • New features include end-to-end Oracle AI Database OpenTelemetry tracing, priority transactions, EF Core spatial, post quantum cryptography, in-memory certificates and credentials, TLS state transfer, and metrics filtering.
  • Oracle Database Vector Store Connector for .NET (production), ODP.NET SQL script execution (preview), and Oracle Deep Data Security for ODP.NET will be arriving soon.

ODP.NET 26ai (23.26.2) is now available on NuGet for both core and managed providers. This release adds new capabilities for observability, transaction management, EF Core spatial, and security — helping you build more robust Oracle .NET apps. Let’s take a closer look at each of these features and benefits.

End-to-end App Observability

OpenTelemetry is a popular observability framework and a standard for application end-to-end tracing. Your app, ODP.NET, and Oracle AI Database each can emit their own traces, but spotting bottlenecks requires a single, combined view over the entire stack. Client apps and ODP.NET can unify traces on the same host via one OpenTelemetry collector; the database, however, emits a separate trace — forcing admins to align client and database traces manually.

With end-to-end OpenTelemetry, traces are unified and aligned automatically, making cross‑stack analysis faster and easier. Here’s an ODP.NET ExecuteReader query execution combined trace, visualized in Jaeger, an open‑source distributed tracing tool.

An ODP.NET ExecuteReader query execution trace, visualized in Jaeger, an open-source distributed tracing tool.

The entire operation takes 14 ms. The database activity (orange) runs early on and lasts 3 ms, while ODP.NET activity (blue) spans the entire operation. ODP.NET and Oracle Database OpenTelemetry tags — such as db.namespace, db.response.returned_rows, and db.user — appear combined in an easy‑to‑read graphical view.

Identifying errors and performance issues is easier when database and ODP.NET operations align visually for each round trip. The db.odp.roundtrip.duration and db.odp.roundtrip.count tag data help pinpoint latency and excessive activity.

As an OpenTelemetry collector, Jaeger provides a holistic view of app activity. Here, it aggregates seven query executions from ODP.NET and Oracle Database traces.

Aggregating query executions from ODP.NET and Oracle AI Database traces

In the screenshot, we see most queries finish in about half a second or less — except the fourth, which takes 3.8 seconds. This view makes slow operations easy to spot, then lets you drill into the query’s trace to see whether the delay is on the client or database side — and why.

While this blog post example uses Jaeger, any OpenTelemetry Protocol (OTLP)–compliant collector, such as Prometheus or Splunk, works with Oracle OpenTelemetry. The only other requirement is that ODP.NET and Oracle AI Database must use the same exporter for unified tracing.

Oracle’s end-to-end tracing works by embedding database OpenTelemetry child spans within ODP.NET traces. ODP.NET passes the client app context to the database, which then links its own trace to ODP.NET’s to create a unified view. The combined trace is exported to a collector. Notably, ODP.NET never gains access to the database trace itself.

Database spans are generated on each round trip — such as command executions, reads, and fills. By default, database OpenTelemetry is off. To enable end-to-end tracing, set the ODP.NET property DatabaseOpenTelemetryTracing = true. You can set it on OracleConfiguration, OracleConnection, OracleDataSource, OracleDataSourceBuilder, or in app/web.config.

You can toggle database OpenTelemetry on and off at any point in an ODP.NET connection’s lifetime. This lets you trace only the code you need to without adding unnecessary overhead to the rest of the app.
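As a minimal sketch of the property placement described above (illustrative only, not official sample code; the connection string is a placeholder and the OTLP exporter configuration for your collector is omitted):

```csharp
using Oracle.ManagedDataAccess.Client;

// Process-wide: all connections emit database child spans.
OracleConfiguration.DatabaseOpenTelemetryTracing = true;

// Per-connection: trace only the code paths you care about.
OracleConnection conn = new OracleConnection("User Id=/;Data Source=oracle");
conn.DatabaseOpenTelemetryTracing = true;
conn.Open();
// ... traced round trips here ...
conn.DatabaseOpenTelemetryTracing = false; // toggle off mid-lifetime
conn.Close();
```

Because the property exists at several scopes, a narrow per-connection toggle is usually the cheaper choice when you only need to diagnose one hot path.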

End-to-end tracing gives developers a holistic view of app activity, speeding performance analysis and error resolution.

Unblock Critical Updates with Priority Transactions

Not every app transaction matters equally. Some are more critical to business operations than others. When a low‑priority transaction unintentionally holds row locks, it can block higher‑priority work from updating the same rows. This can occur after an exception leaves the low‑priority transaction uncommitted. In these cases, an administrator must kill the blocker so the app can continue its work. That manual cleanup adds overhead, degrades user experience, and can trigger downtime.

Starting in Oracle AI Database 26ai (23.26.2), Priority Transactions solve this issue. If a low‑priority transaction blocks a higher‑priority one, the former automatically rolls back and the latter can then commit. Low‑priority transactions no longer block high‑priority ones.

While the low-priority transaction session stays active afterwards, the application must acknowledge the rollback — typically by calling ODP.NET Rollback(). Until then, any command execution on the low-priority transaction session returns ORA‑63302 or ORA‑63300.
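In code, acknowledging that rollback might look like the following sketch. The ORA error numbers come from the text above; the exception-filter shape and the cmd/lowPriorityTxn names are hypothetical:

```csharp
try
{
    cmd.ExecuteNonQuery(); // runs on the rolled-back low-priority session
}
catch (OracleException ex) when (ex.Number == 63302 || ex.Number == 63300)
{
    // Acknowledge the automatic rollback before reusing the session.
    lowPriorityTxn.Rollback();
    // The session stays active; the low-priority work can be retried here.
}
```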

In ODP.NET 23.26.2, both managed and core providers support Priority Transactions. The priority levels are High (default), Medium, and Low. High‑priority transactions can roll back Medium and Low ones; Medium transactions can roll back Low ones.

Rollbacks don’t happen immediately when a high‑priority transaction is blocked. The database waits for the target time to elapse, then rolls back lower‑priority transactions.

  • High priority transactions wait until PRIORITY_TXNS_HIGH_WAIT_TARGET.
  • Medium priority transactions wait until PRIORITY_TXNS_MEDIUM_WAIT_TARGET.

Set ODP.NET transaction priority with the OracleTransactionPriority enum (High, Medium, Low). Configure it via OracleConfiguration, OracleConnection, OracleDataSourceBuilder, or in the .NET Framework config file using the TransactionPriority property. You can also set it when starting a transaction using OracleConnection.BeginTransaction.

The code snippet below shows how to use ODP.NET transaction priority. Two transactions are created that update the same row; one is low priority and the other high. The low-priority transaction executes its data change first but does not commit. The high-priority transaction executes and waits until PRIORITY_TXNS_HIGH_WAIT_TARGET expires. When it does, it can then commit the data change.

// Low-priority transaction updates the row but does not commit.
OracleTransaction low_priority_txn = conA.BeginTransaction(OracleTransactionPriority.Low);
OracleCommand cmdA = new OracleCommand("update…sal=sal+1 where empno=7654", conA);
cmdA.ExecuteNonQuery();

// High-priority transaction updates the same row.
OracleTransaction high_priority_txn = conB.BeginTransaction(OracleTransactionPriority.High);
OracleCommand cmdB = new OracleCommand("update…sal=sal+10 where empno=7654", conB);

// Waits for PRIORITY_TXNS_HIGH_WAIT_TARGET seconds, after which the
// low-priority transaction is rolled back and this one can commit.
cmdB.ExecuteNonQuery();
high_priority_txn.Commit();

EF Core Spatial Data

Oracle EF Core spatial support previewed last year. It enables Oracle spatial types to map to EF Core types via NetTopologySuite and performs create, query, update, and delete operations on them. In this release, the Oracle.EntityFrameworkCore.NetTopologySuite library is production-ready. To learn more, I covered these Oracle EF Core spatial features in a prior post.

New Security Features

ODP.NET 23.26.2 adds new security algorithms, configuration options, and performance optimizations to better protect your .NET apps.

Post Quantum Cryptography

Today’s public‑key cryptography relies on math problems that are hard for classical computers to solve. Quantum computers could eventually break many of these schemes, so some bad actors are harvesting encrypted data now to decrypt in the future when quantum computing is powerful enough.

Post‑quantum cryptography (PQC) uses algorithms designed to resist quantum attacks. For security‑minded organizations, moving to PQC is a proactive step to protect data for the long term.

ODP.NET and Oracle AI Database support PQC with the Module Lattice Key Encapsulation Mechanism (ML-KEM) and Module Lattice Digital Signature Algorithm (ML-DSA) algorithms. This support is available in ODP.NET Core, managed, and unmanaged.

In-memory Certificates and Credentials

ODP.NET connections can now load additional in‑memory certificate and credential types, including single sign-on (SSO), PKCS #12 wallet (P12), and Privacy Enhanced Mail (PEM). Each user can hold multiple certificates and credentials, giving apps more configuration choices and fine-grained control. Apps no longer have to read wallets only from directories or URLs — they can configure and load them directly in code.

Three new classes store in‑memory certificates and credentials for OracleConnection:

  • OracleSSO — SSO certificates and credentials (Managed ODP.NET and ODP.NET Core)
  • OracleP12 — P12 certificates and credentials (Managed ODP.NET and ODP.NET Core)
  • OraclePEM — PEM certificates and credentials (ODP.NET Core only)

After configuring these objects, you must make them read‑only before associating them with OracleConnection for use. Here’s an ODP.NET sample that connects using an SSO file.

OracleConnection conn = new OracleConnection("User Id=/;Data Source=oracle");

// Create and populate OracleSSO with SEPS credential.
OracleSSO oracleSSO = new OracleSSO();
byte[] sso = File.ReadAllBytes("C:\\myWallets\\cwallet.sso");

oracleSSO.Set(sso, OracleCertificateFunctionality.None, OracleCredentialFunctionality.SEPS);
oracleSSO.MakeReadOnly();
conn.SetSSO(oracleSSO);
conn.Open();

TLS State Transfer

Managed ODP.NET and ODP.NET Core can share TLS context across processes, avoiding renegotiation and saving compute and network round trips.

You can enable TLS state transfer by setting NET_TLS_STATE_TRANSFER in listener.ora or cman.ora.
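As a sketch, the setting might look like this in listener.ora. The parameter name comes from the text above, but the ON value and exact placement are assumptions to verify against the Oracle Net Services reference:

```
# listener.ora (or cman.ora): enable TLS session state transfer
NET_TLS_STATE_TRANSFER = ON
```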

ODP.NET Metrics Filtering

ODP.NET metrics track connection statistics for monitoring and alerting purposes. By default, they publish at the application domain, connection pool, and database instance levels. If you only need one or two of these levels, ODP.NET 23.26.2 can be configured to filter for only the level(s) needed — so administrators see only the data that matters.

To configure the filter, set the MetricsLevel value using the OracleMetricsLevel enum, which is available in managed ODP.NET config files or through OracleConfiguration in both core and managed providers.
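As a rough sketch, assuming the OracleMetricsLevel members mirror the level names (the exact member name is an assumption, not confirmed API):

```csharp
using Oracle.ManagedDataAccess.Client;

// Publish only connection pool metrics; application domain and
// database instance metrics are filtered out. (Member name assumed.)
OracleConfiguration.MetricsLevel = OracleMetricsLevel.ConnectionPool;
```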

Each metric publishes a “Level” tag to show its source:

  • Application Domain — Level = "AppDomain"
  • Connection Pool — Level = "ConnectionPool"
  • Database Instance — Level = "DbInstance"

Learn More

That’s a quick tour of what’s new in ODP.NET 23.26.2. For details, see the ODP.NET Developer’s Guide. Try these features in your next project and tell me what you think.

Coming Soon

These NuGet packages deliver an initial ODP.NET 23.26.2 feature set — but there’s more coming soon:

  • Oracle Database Vector Store Connector (production): supports .NET vectors, Microsoft Agent Framework, and Semantic Kernel. Build and run AI vectors, agents, LLMs, and workflows using ODP.NET’s and Oracle AI Database’s native AI features seamlessly.
  • SQL Script Execution (preview): execute SQL and PL/SQL scripts directly from files, making database changes easy to apply and automate in .NET.
  • Oracle Deep Data Security for .NET apps: enforce fine-grained, database-level authorization for agentic AI, analytics, and .NET apps — applying controls based on user identity and runtime context to prevent unintended data exposure from prompt injection, excessive agency, and other risks.

I’ll cover these in upcoming posts. Stay tuned.

FAQ

What problem does end-to-end OpenTelemetry tracing solve?

It removes the need to manually line up separate client and database traces by automatically embedding database spans within ODP.NET traces for a unified, collector-friendly view.

How do Priority Transactions change blocking behavior?

If a lower-priority transaction blocks a higher-priority one, the database can roll back the lower-priority transaction (after a wait target), allowing the higher-priority transaction to proceed and commit.

What are the main security and operational enhancements in this release?

It introduces post-quantum cryptography algorithms, lets apps load certain wallets/certificates in memory (SSO/P12/PEM), enables TLS state transfer to reduce renegotiation overhead, and allows filtering ODP.NET metrics by level (AppDomain/ConnectionPool/DbInstance).


End-to-End Database OpenTelemetry, Priority Transactions, and More was originally published in Oracle Developers on Medium, where people are continuing the conversation by highlighting and responding to this story.
