Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Supreme Court May Block Thousands of Lawsuits Over Monsanto's Weed Killer

The U.S. Supreme Court will hear Monsanto's argument that federal pesticide law should shield it and parent company Bayer from tens of thousands of state lawsuits over Roundup, since the Environmental Protection Agency has not required a cancer warning label. The case could determine whether federal rules preempt state failure-to-warn claims without deciding whether glyphosate causes cancer. The Los Angeles Times reports: Some studies have found glyphosate is a likely carcinogen, and others concluded it does not pose a true cancer risk for humans. However, the court may free Monsanto and Bayer from legal claims from more than 100,000 plaintiffs who sued over their cancer diagnoses. The legal dispute involves whether federal regulatory law shields the company from being sued under state law for failing to warn consumers. [...] "EPA has repeatedly determined that glyphosate, the world's most widely used herbicide, does not cause cancer. EPA has consistently reached that conclusion after studying the extensive body of science on glyphosate for over five decades," the company told the court in its appeal. It said the EPA not only refused to add a cancer warning label to products with Roundup, but said such products would be "misbranded" with such a warning. Nonetheless, the "premise of this lawsuit, and the thousands like it, is that Missouri law requires Monsanto to include the precise warning that EPA rejects," the company said. On Friday, the court said in a brief order that it would decide "whether the Federal Insecticide, Fungicide, and Rodenticide Act preempts a label-based failure-to-warn claim where EPA has not required the warning." The court is likely to hear arguments in Monsanto v. Durnell in April and issue a ruling by late June.

Read more of this story at Slashdot.


Biggest Offshore Wind Project In US To Resume Construction

A federal judge has temporarily lifted the Trump administration's suspension of Coastal Virginia Offshore Wind, allowing construction on the largest offshore wind project in the U.S. to resume. CNBC reports: Judge Jamar Walker of the U.S. District Court for the Eastern District of Virginia granted Dominion's request for a preliminary injunction Friday. Dominion called the Trump suspension "arbitrary and illegal" in its lawsuit. "Our team will now focus on safely restarting work to ensure CVOW begins delivery of critical energy in just weeks," a Dominion spokesperson told CNBC in a statement Friday. "While our legal challenge proceeds, we will continue seeking a durable resolution of this matter through cooperation with the federal government," the spokesperson said. Dominion said in December that "stopping CVOW for any length of time will threaten grid reliability for some of the nation's most important war fighting, AI and civilian assets." Coastal Virginia Offshore Wind is a 176-turbine project that would provide enough power for more than 600,000 homes, according to Dominion. It is scheduled to start dispatching power by the end of the first quarter of 2026. In December, the Trump administration paused the leases on all five offshore wind sites currently under construction in the U.S., blaming the decisions on a classified report from the Department of Defense.

Read more of this story at Slashdot.


Inside Octoverse 2025 report: The rise of vibe coding & agentic AI | Episode 7 | The GitHub Podcast

From: GitHub
Duration: 38:23
Views: 146

Andrea and Kedasha sit down with data whisperer, Jeff Luszcz, one of the wizards behind GitHub’s annual Octoverse report, to unpack this year’s biggest shifts. They get into why TypeScript overtook Python on GitHub, how AI-assisted “vibe coding” and agentic workflows are reshaping everyday engineering, and what it means that more than one new developer joins GitHub every second. From 1.12B open source contributions and 518M merged PRs to COBOL’s unexpected comeback, global growth (hello India, Brazil and Indonesia), and “security by default” with CodeQL and Dependabot, this episode turns the numbers into next steps for your career and your open source projects.

Links mentioned in the episode:

https://octoverse.github.com
https://github.com/jeffrey-luszcz
https://github.com/features/copilot
https://codeql.github.com
https://docs.github.com/code-security/dependabot
https://docs.github.com/code-security/secret-scanning/introduction/about-secret-scanning
https://www.typescriptlang.org
https://www.python.org
https://nextjs.org
https://vitejs.dev
https://marketplace.visualstudio.com/items?itemName=GitHub.copilot
https://www.home-assistant.io
https://code.visualstudio.com
https://github.com/explore

The GitHub Podcast is hosted by Abigail Cabunoc Mayes, Kedasha Kerr and Cassidy Williams. The show is edited, mixed and produced by Victoria Marin. Thank you to our production partner, editaudio.

— CHAPTERS —
00:00 - Intro: unpacking the Octoverse report
02:02 - Why TypeScript overtook Python
09:33 - The surprise return of COBOL
12:03 - The year of vibe coding
13:49 - 180 million+ developers on GitHub
18:29 - Skills you need for the AI era
24:30 - Global growth: India, Brazil, Indonesia
27:34 - Agentic workflows and ordinary AI
31:38 - Shifting to secure by default
34:06 - Skills in demand in the future
35:15 - Jeff's top takeaway from the data

Stay up-to-date on all things GitHub by subscribing and following us at:
YouTube: http://bit.ly/subgithub
Blog: https://github.blog
X: https://twitter.com/github
LinkedIn: https://linkedin.com/company/github
Instagram: https://www.instagram.com/github
TikTok: https://www.tiktok.com/@github
Facebook: https://www.facebook.com/GitHub/

About GitHub:
It’s where over 180 million developers create, share, and ship the best code possible. It’s a place for anyone, from anywhere, to build anything—it’s where the world builds software. https://github.com


Random.Code() - Adding a Custom ToString() Format to BigInteger

From: Jason Bock
Duration: 1:16:23
Views: 3

In this stream, I'll work on getting a ToString() extension method built to format a BigInteger such that only a subset of the characters is shown.

https://github.com/JasonBock/SpackleNet/issues/33
https://discord.gg/hVSKVk4RPC
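
As a rough sketch of the kind of extension method the stream works toward (the method name, truncation rule and output format below are my assumptions, not necessarily what ends up in SpackleNet):

using System.Numerics;

public static class BigIntegerExtensions
{
    // Illustrative only: show the first and last few digits of a BigInteger so
    // very large values stay readable.
    // e.g. BigInteger.Pow(2, 128).ToShortenedString() -> "34028...11456 (39 digits)"
    public static string ToShortenedString(this BigInteger value, int edgeDigits = 5)
    {
        var digits = BigInteger.Abs(value).ToString();

        if (digits.Length <= edgeDigits * 2)
        {
            return value.ToString();
        }

        var sign = value.Sign < 0 ? "-" : string.Empty;
        return $"{sign}{digits[..edgeDigits]}...{digits[^edgeDigits..]} ({digits.Length} digits)";
    }
}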


The Future of AI in SRE: Preventing Failures, Not Fixing Them


For years, site reliability engineering (SRE) has centered on one mission: keeping systems healthy while everything else — code, configurations and infrastructure — changes around them. But as complexity grows, even the most experienced SREs hit a wall.

Logs multiply, dependencies intertwine and issues proliferate. The future of reliability engineering is no longer about reacting faster; it’s about preventing failures before they occur.

SRE has evolved through three distinct stages. The first was alerting, where monitoring tools detected symptoms but offered little context. The second introduced AI-assisted triage, where models correlated logs, metrics and traces to pinpoint likely failure points and reduce alert fatigue. The third stage, safe auto-remediation, closed the loop between detection and resolution. AI systems could restart pods, revert misconfigurations or apply hotfixes under strict guardrails.

Each step reduced mean time to recovery, but each still shared one limitation: something had to fail first.

The next frontier is a preventative approach to reliability engineering, where AI learns from the past to harden infrastructure before stress hits.

Using the Past to Protect the Future

Preventative reliability engineering rests on one principle: Every incident contains signals that can prevent the next one. By capturing and structuring these signals, AI systems can learn patterns of instability and use them to harden infrastructure against future outages, performance degradation and other issues.

It starts with historical data. Most organizations already have years of post-mortems, alert histories and runbooks. AI can turn tribal knowledge on symptom, cause, impact and resolution into actionable intelligence.

For example, generative AI doesn’t need to be pre-trained on every failure scenario to spot trouble. It can recognize repeating patterns like “node pressure followed by pod eviction” across logs, metrics and events. When this pattern reappears, it can surface early warnings or highlight configurations that are likely to cause the same issue again.
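
As a toy illustration of that idea (the event names, time window and "PodEvicted" outcome are invented; a real system would learn these patterns from its own history):

using System;
using System.Collections.Generic;
using System.Linq;

public record ClusterEvent(DateTime Timestamp, string Node, string Kind);

public static class PrecursorModel
{
    // Learn which event kinds have historically been followed by a pod eviction
    // on the same node within ten minutes.
    public static HashSet<string> LearnPrecursors(IReadOnlyList<ClusterEvent> history)
    {
        var precursors = new HashSet<string>();
        foreach (var e in history.Where(e => e.Kind != "PodEvicted"))
        {
            bool followedByEviction = history.Any(later =>
                later.Node == e.Node &&
                later.Kind == "PodEvicted" &&
                later.Timestamp > e.Timestamp &&
                later.Timestamp <= e.Timestamp.AddMinutes(10));

            if (followedByEviction)
                precursors.Add(e.Kind);   // e.g. "NodePressure"
        }
        return precursors;
    }

    // Surface an early warning as soon as a learned precursor reappears.
    public static string? Evaluate(ClusterEvent latest, HashSet<string> precursors) =>
        precursors.Contains(latest.Kind)
            ? $"Early warning: '{latest.Kind}' on {latest.Node} has previously preceded pod evictions."
            : null;
}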

As AI ingests more of an organization's historical incidents, configurations and operational data, its ability to predict risky rollouts or flag unsafe deployments becomes increasingly precise and context-aware.

Capacity prediction is another key use case. By modeling historical resource utilization, AI can right-size compute dynamically, forecast saturation points and optimize for both performance and cost. Instead of static thresholds, predictive scaling anticipates demand and mitigates brownouts before they manifest.
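
A deliberately simplified sketch of that kind of forecast (a straight-line trend over recent utilization samples; real predictive scaling would use much richer models and account for seasonality):

using System;
using System.Collections.Generic;
using System.Linq;

public static class CapacityForecast
{
    // Fit a linear trend to (hour, cpuUtilization) samples and estimate how many
    // hours remain before utilization crosses the saturation threshold.
    // Returns null if usage is flat or falling.
    public static double? HoursUntilSaturation(
        IReadOnlyList<(double Hour, double Cpu)> samples, double saturationThreshold = 0.85)
    {
        double meanX = samples.Average(s => s.Hour);
        double meanY = samples.Average(s => s.Cpu);
        double slope = samples.Sum(s => (s.Hour - meanX) * (s.Cpu - meanY)) /
                       samples.Sum(s => (s.Hour - meanX) * (s.Hour - meanX));
        double intercept = meanY - slope * meanX;

        if (slope <= 0) return null;
        double hourAtThreshold = (saturationThreshold - intercept) / slope;
        return Math.Max(0, hourAtThreshold - samples[^1].Hour);
    }
}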

Finally, understanding dependencies is critical to seeing the full picture. Most outages don’t start in the service that fails, but in another service it depends on. By mapping relationships across Kubernetes resources, service meshes and CI/CD pipelines, AI can build a real-time dependency graph. With that context, it can spot single points of failure, estimate the blast radius of an issue, and suggest how to isolate the problem before it spreads.
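
A minimal sketch of the blast-radius idea (service names and edges are invented; in practice the graph would be assembled from Kubernetes, service-mesh and CI/CD metadata):

using System.Collections.Generic;

public static class DependencyGraph
{
    // dependents["payments"] lists the services that call "payments".
    // The blast radius of a failure is everything reachable downstream of it.
    public static HashSet<string> BlastRadius(
        Dictionary<string, List<string>> dependents, string failedService)
    {
        var impacted = new HashSet<string>();
        var queue = new Queue<string>();
        queue.Enqueue(failedService);

        while (queue.Count > 0)
        {
            foreach (var downstream in dependents.GetValueOrDefault(queue.Dequeue(), new List<string>()))
            {
                if (impacted.Add(downstream))
                    queue.Enqueue(downstream);
            }
        }
        return impacted;
    }
}

// If "checkout" depends on "payments" and "payments" depends on "auth", then
// BlastRadius(dependents, "auth") contains both "payments" and "checkout".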

Building the Foundation for Preventative AI SRE

AI can’t prevent what it doesn’t understand. Before organizations can rely on autonomous prevention, they must build the right data and governance pillars. Platform and infrastructure leaders can begin with three critical investments:

1. Structured Incident Knowledge

Unstructured incident data is the biggest barrier to learning systems. Standardize metadata across incident records: service, symptom, root cause, impact and resolution. Link these records to observability artifacts (logs, metrics and traces) so AI can correlate patterns across time and services.

A structured incident dataset becomes the training corpus for AI reliability reasoning. Over time, it enables the system to move from correlation (“these events often occur together”) to causation (“this configuration consistently leads to a crash under load”).
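
As one possible shape for such a record (the field names are illustrative, not a standard schema):

using System;
using System.Collections.Generic;

// A structured incident record that ties the human narrative (symptom, root cause,
// impact, resolution) to the observability artifacts an AI system can learn from.
public record IncidentRecord(
    string Id,
    string Service,
    string Symptom,                      // "p99 latency spike on /checkout"
    string RootCause,                    // "connection pool exhausted after config change"
    string Impact,                       // "12% of checkouts failed for 20 minutes"
    string Resolution,                   // "rolled back config, doubled pool size"
    DateTimeOffset StartedAt,
    DateTimeOffset ResolvedAt,
    IReadOnlyList<Uri> LinkedArtifacts); // dashboards, log queries, traces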

2. Integrated Topology and Dependency Mapping

AI-driven prevention depends on full-stack context. In distributed environments, root causes are often concealed several layers away from where symptoms appear. By integrating topology data from Kubernetes, network layers and external services, AI can model dependencies as a living graph.

This topology model lets AI reason about how upstream failures ripple through the system. A latency spike in an external API might degrade a dependent microservice, trigger retries and saturate worker nodes. With dependency awareness, AI can simulate such cascades before deployment, proactively suggesting architectural mitigations or resource adjustments.

3. AI Guardrails and Governance

The transition to preventative automation requires trust. Engineers must know precisely what the AI observes, suggests and acts upon. Define strict guardrails: which actions are read-only, which require approval and which are fully automated.

Auditability reinforces confidence. Every AI-driven decision should cite the evidence (logs, diffs and metrics) that informed it. Transparency transforms AI from a black box into a verifiable collaborator. As reliability models mature and earn trust, teams can safely increase autonomy, progressing from co-pilot to auto-pilot operation.
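
One way such guardrails and audit trails might be encoded (the action names, tiers and fields are assumptions for illustration):

using System;
using System.Collections.Generic;

public enum AutomationTier { ReadOnly, RequiresApproval, FullyAutomated }

// Guardrail policy: every action the AI may take is mapped to an explicit tier.
public static class Guardrails
{
    public static readonly Dictionary<string, AutomationTier> Policy = new()
    {
        ["describe-pod"]    = AutomationTier.ReadOnly,
        ["restart-pod"]     = AutomationTier.FullyAutomated,
        ["revert-config"]   = AutomationTier.RequiresApproval,
        ["scale-node-pool"] = AutomationTier.RequiresApproval,
    };
}

// Audit entry: every AI-driven decision cites the evidence that informed it.
public record AuditEntry(
    DateTimeOffset At,
    string Action,
    AutomationTier Tier,
    IReadOnlyList<string> EvidenceLinks); // logs, diffs and metrics behind the decision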

From Reactive Firefighting to Reliability by Design

Preventative AI in SRE isn't just a technical evolution; it's a mindset shift. Traditional SRE emphasizes fast recovery when systems break, but reliability is ultimately about building systems that don't fail in the first place and can scale without incidents, staffing or costs growing alongside them. Preventative reliability engineering encourages teams to treat every anomaly or near-miss as valuable input, feeding it back into AI systems to improve foresight and resilience.

The goal isn’t to replace human judgment but to amplify it, freeing SREs to focus on architectural resilience, chaos testing and continuous optimization rather than repetitive triage.

AI has already proven it can accelerate detection and remediation. The next phase is proactive defense: learning from the entire reliability life cycle to reinforce weak spots before they fracture. This transformation demands discipline, clean data, repeatable processes and guardrails that make automation trustworthy.

But the payoff is transformative. Platform engineering is moving toward a reliability-by-design approach that doesn't just react to failure but prevents it: one where AI flags a risky configuration before deployment, explains why it's problematic, references similar past incidents and proposes a safer rollout.

The post The Future of AI in SRE: Preventing Failures, Not Fixing Them appeared first on The New Stack.


A Developer’s Guide to Marshaling Data With JSON


Code and data interplay to form running programs, yet we still tend to concern ourselves more with how code is presented than with how data is presented. While masses of streaming data often move unseen and are effectively meaningless to a human reader, JavaScript Object Notation (aka JSON) was specifically designed as a standard text-based format for representing small amounts of human-readable structured data.

The first time you might come across the need for this type of representation is when you want to persist or save state from a program. Typically, a game developer might want to "save" a game in progress so it can be restarted later; that is, to store the state of all the objects that make up the game at that moment. We use the term "marshaling" or "serialization" when referring to turning the in-memory form of an object into a standard storage format. While most languages have an internal binary way to persist state, using JSON keeps your data open and interchangeable.
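
As a minimal sketch (in C#, with System.Text.Json, and an invented game-state type), marshaling and unmarshaling might look like this:

using System.Text.Json;

// A made-up game state, purely for illustration.
public record GameState(int Level, int Score, double PlayerX, double PlayerY);

public static class SaveGame
{
    private static readonly JsonSerializerOptions Options = new() { WriteIndented = true };

    // Marshal: in-memory object -> human-readable JSON text we can save to a file.
    public static string Marshal(GameState state) =>
        JsonSerializer.Serialize(state, Options);

    // Unmarshal: JSON text -> the object, restoring the saved state.
    public static GameState Unmarshal(string json) =>
        JsonSerializer.Deserialize<GameState>(json, Options)!;
}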

This post explores JSON from a fairly simple level, but hopefully you will appreciate that its form on the page is quite important.

Why JSON Is Better than XML for Data Marshaling

While it is a subset of JavaScript, JSON was initially seen as a welcome replacement for the much-maligned XML (eXtensible Markup Language), which was both verbose and complex to parse, largely because it did more self-description than mere marshaling requires.

But XML was at least human-readable. This meant it could be reviewed by non-programmers, and it helped to expand the development world. (As a heavy Ruby user, I was also familiar with another format called YAML, which is still favoured for configuration files today.)

The idea of “readable” data needs a little explanation. In this case, it does not really mean, “I want to read the data.” It means, “I want to be able to understand the shape of things.” There is a sweet spot where small amounts of data can be viewed efficiently, but in a structure that helps the reader parse it naturally.

So JSON is ideal for readable data that's intended to be stored in files or alongside code. JSON is often used in configuration files, as new data can be read in without touching code. Slightly different versions of the same structure (e.g., extra or missing keys) are quite easy for language parsers to manage: you can ignore new keys, or mark keys as "optional."
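
For instance, with .NET's System.Text.Json, unknown keys are ignored by default and a missing key simply leaves the property at its default value (the config type here is invented for illustration):

using System.Text.Json;

public class AppConfig
{
    public string Name { get; set; } = "";
    public int Port { get; set; }
    public string? Theme { get; set; }   // optional: stays null if the key is absent
}

public static class ConfigLoader
{
    public static AppConfig Load(string json) =>
        JsonSerializer.Deserialize<AppConfig>(json)!;
}

// Both of these parse without complaint:
//   {"Name":"demo","Port":8080}                                -> Theme is null
//   {"Name":"demo","Port":8080,"Theme":"dark","NewKey":true}   -> NewKey is ignored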

The crux of the matter is that if you are working with people who are focusing on content, you rely on tools to let them contribute directly. JSON is good for that.

A Practical Example of JSON: Describing a Cat

I asked ChatGPT to “describe an average cat in JSON”. It replied with the following:

{
  "species": "Felis catus",
  "average_age_years": 4,
  "average_weight_kg": 4.5,
  "body_length_cm": 46,
  "tail_length_cm": 30,
  "fur": {
    "length": "short",
    "texture": "soft",
    "common_colors": ["black", "white", "gray", "brown", "orange"],
    "patterns": ["solid", "tabby", "bicolor", "calico"]
  },
  "eyes": {
    "shape": "round",
    "colors": ["green", "yellow", "blue", "amber"]
  },
  "ears": {
    "shape": "triangular",
    "position": "upright"
  },
  "temperament": {
    "independence": "moderate",
    "affection": "selective",
    "playfulness": "moderate",
    "curiosity": "high"
  },
  "activity_level": "medium",
  "sleep_hours_per_day": 14,
  "diet": "obligate carnivore",
  "vocalization_level": "low to moderate",
  "lifespan_years": {
    "indoor": 15,
    "outdoor": 8
  }
}


JSON doesn't require a prior definition of "cat" before ChatGPT can show us one. In fact, this feels like a reasonable format to describe any small mammalian pet.

Even if you had not seen JSON before (but had dabbled with some programming), this is very readable. There are a bunch of key/value pairs. Strings use quotes; bare values are numbers. You could guess (correctly) that it also recognises true and false for Boolean values. There are substructures and arrays. There are, for example, effectively four attributes of "temperament." Eye colour has four possible values. The keys within "temperament" already seem cat-like, even without the actual values.

And I have no idea what a "medium activity" level is. But the large language model (LLM) was happy to indulge my slightly whimsical request, because JSON is fine with this. In fact, JSON is quite useful in LLM prompts, as it is a sharp way of introducing design nomenclature without writing any code.

I made the same request for XML, but the result was approximately 75% bigger because it has to repeat opening and closing tags to parse properly. (A version of JSON also exists with comments, but that isn’t quite standard.)

How to Parse JSON Data in Your Code

Most languages have common libraries for parsing JSON, which means I could write a bit of Ruby to read the cat back into memory. I’ll allow ChatGPT to do the honors, and write the parsing code for Ruby. This assumes that I saved our previously mentioned JSON generic cat in a file called “cat.json”:

After saving this code as “cats.rb”, I run it in my terminal (assuming that I have a Ruby installation):

Thus, the cat is revived.

Using FracturedJson to Improve Readability

I recently heard about a nice library called FracturedJson, which is “a JSON formatter that produces human-readable but fairly compact output.” I think this moves us a bit nearer to that sweet spot.

This doesn't have a Ruby library yet, so we'll first put it through its paces in the browser. Let's see how it shapes our persisted cat:

It makes only a few choices, such as normalizing the spacing and keeping attributes on the same line where it can.

We could also use this in VS Code, as there is a NuGet library for FracturedJson. As usual, I started VS Code in my terminal, created a console project and added the NuGet library. Then I added the handful of lines to reshape the cat:

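The article's actual listing loads the cat via System.Text.Json and lets FracturedJson handle only the layout; as a rough stand-in, a minimal version using the formatter directly might look like this (the FracturedJson API names here are from memory, so treat them as assumptions and check the package's documentation):

using System;
using System.IO;
using FracturedJson;

var json = File.ReadAllText("cat.json");

// FracturedJson decides which objects and arrays still fit nicely on one line;
// the formatter also exposes options such as maximum line length.
var formatter = new Formatter();
Console.WriteLine(formatter.Reformat(json, 0));
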
You can see from the code that the heavy lifting is done by .NET System.Text.Json services, and that FracturedJson is just a formatter. It also has extra options to explore, as you can see from the browser example.

The Importance of Readable JSON in Development

Parsing JSON data quickly by eye is still a very useful skill in the information-dense forest of computing. And any tool that helps form a quiet glade is welcome.

Small amounts of JSON also work well in LLM prompts, to steer tasks without programming. FracturedJson tries to get your JSON string data nearer the sweet spot of readability and compactness, helping us to admire that persisted cat a little more easily.

The post A Developer’s Guide to Marshaling Data With JSON appeared first on The New Stack.
