Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
152614 stories
·
33 followers

A New Theme for Short-Form Blogging on WordPress.com

1 Share

At WordPress.com, we believe short thoughts deserve a real home. Today we’re introducing a new theme built for quick posts, replies, and reblogs: the kind of writing that lives somewhere between a tweet and a blog post, on a site that’s entirely yours.

If you’ve been thinking about starting your own small, private social network with friends or family, or you want a space to post thoughts freely, or to import your historical posts from Twitter, Mastodon, or Bluesky without handing your words over to someone else’s platform, this one’s for you.

Let’s take a look — or sign up now at wordpress.com/social.

Write now, not later

Click the “Compose” button, type your thoughts, watch the 500-character counter, and tap Post. No blank canvas, no formatting toolbar to navigate first. Just a simple prompt, “What’s happening?”, and a place to answer it.

A profile page that feels familiar

Your profile collects everything in one place: your avatar, bio, and the counts your readers will look for: posts, following, followers. Tabs for Posts, Replies, Media, and Likes let visitors browse the way they already know how. A sidebar keeps Home, Explore, and your profile one click away.

Reblogs that actually work

This is the feature we’re most excited about. Click the reblog icon on any post and it flows into your own feed, credited to the original author, automatically. No screenshots, no copy-paste, no lost attribution.

Every post is a real post

Here’s what makes this different from a social app: every quick thought and every reblog is a real WordPress.com post on a site you own, and every reply is saved as a comment. You get the speed and feel of a social feed, with the permanence and portability of a blog. Export it, back it up, migrate it to another host. It’s yours.

Built for the open web

The theme is fully mobile-responsive, so posting from your phone feels just as natural as from your desktop. Tap Compose from wherever the thought hits you.

And because every blog on WordPress.com comes with RSS out of the box, your readers can follow along in whatever feed reader they already use. No algorithm, no app required, just a URL they can subscribe to and content that shows up when you publish it.
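Because the feed is plain RSS, any reader can consume it with ordinary XML tooling. Here is a minimal sketch that parses an RSS 2.0 feed with Python's standard library; the sample feed content is invented for illustration (by WordPress convention, a site's feed lives at `yoursite.wordpress.com/feed/`).

```python
# Parse an RSS 2.0 feed with only the Python standard library.
# SAMPLE_FEED is a made-up example of what a short-form blog's feed
# might contain; a real reader would fetch the feed URL over HTTP.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>My Social Blog</title>
  <item><title>Quick thought</title>
        <link>https://example.wordpress.com/2026/04/quick-thought/</link></item>
  <item><title>A reblog worth sharing</title>
        <link>https://example.wordpress.com/2026/04/a-reblog/</link></item>
</channel></rss>"""

def post_titles(rss_xml: str) -> list[str]:
    """Return the title of every <item> in an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(post_titles(SAMPLE_FEED))  # ['Quick thought', 'A reblog worth sharing']
```

No API keys, no SDK: a URL and an XML parser are the whole integration surface.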

Give it a try

Head to wordpress.com/social to sign up for a new blog and get started.

We’d love to hear what you think.





Read the whole story
alvinashcraft
24 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Is Your Auth Ready for AI? Why Identity Is the First Thing Developers Need to Fix

Learn why identity infrastructure is the primary bottleneck for scaling AI agents and how to move toward a secure, AI-ready auth model.


Amazon’s color screen Kindles are finally getting a system-wide dark mode

[Image: A person holds an Amazon Kindle Colorsoft with dark mode turned on. Dark mode will soon be available for all parts of the Kindle Colorsoft’s UI. | Image: Amazon]

Most modern Kindle devices with a black-and-white E Ink screen offer an inverted dark mode, with white text against a black background, across their entire user interface. Today, Amazon announced that the same feature is coming to the Kindle Colorsoft and Kindle Scribe Colorsoft, whose color E Ink screens could previously only invert the pages of ebooks. The software update introducing the system-wide dark mode to Colorsoft devices "will be rolling out to readers worldwide" in the coming weeks, and will also be available for download through Amazon's website.

While dark mode will be available system-wide for every section of t …

Read the full story at The Verge.


Our 2026 Direction: AI and Classic Workflows in JetBrains IDEs


Two valid ways of writing code. One place to own it.

Quick version for AI-news-tired readers:

There are two ways developers create code now:

  1. The classic way: By typing, refactoring, debugging, and building up intent line by line.
  2. The new way: Through collaborating with AI – sometimes via autocomplete and other times by using an agent that can draft whole chunks of work.

We don’t think one is better than the other.

Our goal is to ensure both workflows can coexist inside JetBrains IDEs without hindering each other. In practice, this means that:

  • If you want to write code yourself, the IDE should be focused on code writing, and AI shouldn’t compromise the core coding experience.
  • If you want to generate code with AI (or delegate tasks to agents), the IDE should make that feel natural and powerful, both in terms of UX and functionality.

Either way, one thing doesn’t change: A human is responsible for the code that ships. And the best place to read, understand, and own that code is still the IDE.


What “AI in the IDE” means without the snake oil or hype

We’re not limiting this to one “official” workflow. The market is moving too fast for that – and developers are too diverse for a one-size-fits-all approach.

So when we say “AI in JetBrains IDEs”, we mean agentic added value: UX and features that become available as and when useful:

  • In the AI Chat tool window, as a chat-first workflow.
  • In the IDE terminal, where many developers already work with CLI tools.
  • In the new opt-in modes created for agentic systems, where you can run an agent and leave it to work for hours.

Think of it as follows: One IDE, with multiple AI-powered ways to get work done – picked by the user, shaped by the team, and constrained by real development expectations.


The AI strategy: Avoid vendor lock‑in and keep workflows compatible

If there’s one thing we’re confident about, it’s this: The “best” model, provider, or agent today won’t be the best forever – or perhaps even next month.

That’s why we’re deliberately building toward an IDE experience that does not depend on a single vendor’s roadmap.

Practically, that means our AI chat experience supports multiple ways to connect – depending on what’s allowed by providers’ terms, and what users actually want:

  • JetBrains AI-managed setup (with JetBrains AI subscription).
  • BYOK: Bring your own API key.
  • OAuth sign-in for supported provider accounts (where the provider supports it).
  • ACP agents: Connect external coding agents through a standard protocol.

One honest footnote: OAuth isn’t always available. If an agent provider doesn’t offer OAuth (or doesn’t offer it in a way an IDE can use), we can’t invent it.


Agent Client Protocol (ACP): “Bring your own agent”

ACP lets you connect external coding agents to JetBrains IDEs through a standard interface, so the IDE doesn’t need a bespoke integration for every agent. Agents can be installed from a curated registry (or configured manually), and the installed agents appear inside the AI chat. 

A practical example of one that people have been asking for is the Cursor agent. Cursor is already available as an AI agent inside JetBrains IDEs through ACP – you can select it from the agent picker and use its agentic workflow inside your JetBrains IDE. 

This is the shape we want:

  • You choose the agent that fits your workflow or team.
  • You keep working in the IDE you already rely on.
  • Classic IDE workflows don’t get shoved aside for “agent mode”.

“Professional coding with AI” means more responsibility

We’re not anti-AI. We’re anti-confusion.

There’s a kind of coding that’s optimized for disposable output – and it’s totally valid in the right context. But JetBrains IDEs are built for code that isn’t disposable: code that is intended for long-term use.

So here’s the principle we design for: Generated code should be treated like real code. That means it should be possible to:

  • Read it
  • Review it
  • Change it
  • Revert it when it’s wrong
  • Understand its impact on the codebase

In practice, our baseline expectation is boring (in the best possible sense):

  • Changes should be visible
  • Changes should be reversible
  • Your project isn’t left in a broken state (“no red code” is a pretty good starting point)

And yes, agents can edit many files. That can be a superpower – but only if you can fully inspect, understand, and correct the outcome. That’s where the IDE matters: It gives you visibility and control over the code produced by humans or AI.


AI on your terms: our product commitments

1. AI and classic modes live side by side 

Typing-first workflows and AI-first workflows are both valid. We’re not building for developer-replacement narratives, and we’re not building an IDE that nudges you into a single “approved” way of working. We respect both approaches. 

2. AI agents must respect the core IDE promise

Every push toward agents must keep the IDE’s core promise intact: deep code intelligence, safe refactoring, debugging, navigation, inspections, reviews – the stuff professional development is made of.

3. Zero vendor lock-in

Multiple activation pathways (subscription, BYOK, OAuth, where possible, and ACP agents) are not a “nice to have.” We are committed to ensuring your workflow is never tied to a single vendor.

4. Long-term utility over hype

If people keep using these workflows weeks later (real retention, real projects), that’s the signal. A lot of AI-driven workflows today are just hype (I’m talking about you, Ralph-loop). 

5. Prioritizing candid community feedback

We value the honesty of Reddit users, Marketplace reviewers, and community members who don’t owe us politeness. Those are exactly the people we want judging our progress.


AI will create a lot of code. That’s not a prediction anymore – it's the reality in April 2026.

But someone still has to be responsible for that code. Someone still has to read it before it merges. And right now, agents can help you move fast – but they can’t carry the risk for you.

So our commitment is straightforward:

We’ll keep building AI workflows that speed up creation – and we’ll keep strengthening the IDE as the best place to review, understand, and own what gets shipped.

You decide how much AI you want. We’ll make sure both paths – AI-assisted and classic – work great together, but you can stay on the path you prefer.


Securing the git push pipeline: Responding to a critical remote code execution vulnerability


On March 4, 2026, we received a vulnerability report through our Bug Bounty program from researchers at Wiz describing a critical remote code execution vulnerability affecting github.com, GitHub Enterprise Cloud, GitHub Enterprise Cloud with Data Residency, GitHub Enterprise Cloud with Enterprise Managed Users, and GitHub Enterprise Server.

In less than two hours we had validated the finding, deployed a fix to github.com, and begun a forensic investigation that concluded there was no exploitation.

In this post, we want to share what happened, how we responded, and what we are doing to prevent similar issues in the future.

Receiving the bug bounty report

The bug bounty report described a way for any user with push access to a repository, including a repository they created themselves, to achieve arbitrary command execution on the GitHub server handling their git push operation. The attack required only a single command: git push with a crafted push option that leveraged an unsanitized character.

Our security team immediately began validating the bug bounty report. Within 40 minutes, we had reproduced the vulnerability internally and confirmed the severity. This was a critical issue that required immediate action.

Understanding the vulnerability

When a user pushes code to GitHub, the operation passes through multiple internal services. As part of this process, metadata about the push, such as the repository type and the environment it should be processed in, is passed between services using an internal protocol.

The vulnerability stemmed from how user-supplied git push options were handled within this metadata. Push options are an intentional feature of git that allow clients to send key-value strings to the server during a push. However, the values provided by the user were incorporated into the internal metadata without sufficient sanitization. Because the internal metadata format used a delimiter character that could also appear in user input, an attacker could inject additional fields that the downstream service would interpret as trusted internal values.

By chaining several injected values together, the researchers demonstrated that an attacker could override the environment the push was processed in, bypass sandboxing protections that normally constrain hook execution, and ultimately execute arbitrary commands on the server.
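The injection pattern described above can be sketched in a few lines. Everything here is hypothetical: the delimiter, the field names, and the metadata format are invented for illustration, since GitHub's internal protocol is not public. The shape of the bug, and of the fix, is what matters.

```python
# Hypothetical sketch of a delimiter-injection flaw of the kind described.
# Field names, the delimiter, and the format are invented for illustration.

DELIM = "\x00"  # hypothetical field separator in the internal metadata

def build_metadata_unsafe(repo: str, push_option: str) -> str:
    # Vulnerable: the user-controlled push option is concatenated into the
    # metadata without sanitization, so a value containing the delimiter
    # injects extra fields that downstream parsing will treat as trusted.
    return DELIM.join([f"repo={repo}", "env=production", f"opt={push_option}"])

def parse_metadata(blob: str) -> dict:
    # The downstream service trusts every field it parses;
    # a repeated key is overwritten by the later (injected) value.
    fields = {}
    for part in blob.split(DELIM):
        key, _, value = part.partition("=")
        fields[key] = value
    return fields

# The attacker smuggles the delimiter inside a push option value:
evil = "x" + DELIM + "env=debug-sandbox-disabled"
parsed = parse_metadata(build_metadata_unsafe("alice/repo", evil))
print(parsed["env"])  # attacker-controlled: 'debug-sandbox-disabled'

def sanitize_push_option(value: str) -> str:
    # The fix: reject user input containing the delimiter, so it can
    # never be interpreted as additional metadata fields.
    if DELIM in value:
        raise ValueError("illegal character in push option")
    return value
```

Chaining several such injected fields is what let the researchers redirect the push into a different environment and escape the hook sandbox.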

Responding to the vulnerability

With the root cause identified on March 4, 2026, at 5:45 p.m. UTC, our engineering team developed and deployed a fix to github.com at 7:00 p.m. UTC that same day. The fix ensures that user-supplied push option values are properly sanitized and can no longer influence internal metadata fields.

For GitHub Enterprise Server, we prepared patches across all supported releases (3.14.25, 3.15.20, 3.16.16, 3.17.13, 3.18.8, 3.19.4, 3.20.0, or later) and published CVE-2026-3854. These are available today and we strongly recommend that all GHES customers upgrade immediately.

Investigating for exploitation

With the immediate fix in place on github.com, we moved to the pressing question of whether anyone else found and exploited this vulnerability before the researchers reported it.

A key property of this vulnerability gave us confidence in our ability to answer that question. The exploit forces the server to take a code path that is never used during normal operations on github.com. This is not something an attacker can avoid or suppress, as it is an inherent consequence of how the injection works.

We logged this path and queried our telemetry for any instance of this anomalous code path being executed. The results were clear:

  • Every occurrence mapped to the Wiz researchers’ own testing activity.
  • No other users or accounts triggered this code path.
  • No customer data was accessed, modified, or exfiltrated as a result of this vulnerability.

For GHES customers, exploitation would require an authenticated user with push access on your instance. We recommend reviewing your access logs out of an abundance of caution.

Defense in depth

Beyond fixing the immediate input sanitization issue, our investigation surfaced an additional finding worth sharing.

The exploit worked in part because the server had access to a code path that was not intended for the environment it was running in. This code path existed on disk as part of the server’s container image, even though it was only meant to be used in a different product configuration. An older deployment method had correctly excluded this code, but when the deployment model changed, the exclusion was not carried forward.

This is a useful reminder that defense in depth matters. The input sanitization fix is the primary remediation, but we have also removed the unnecessary code path from environments where it should not exist. Even if a similar injection vulnerability were discovered in the future, this additional hardening would limit what an attacker could do with it.

What you should do

GitHub Enterprise Cloud, GitHub Enterprise Cloud with Enterprise Managed Users, GitHub Enterprise Cloud with Data Residency, and github.com were patched on March 4, 2026. No action is required from users of any of these.

As mentioned previously, exploitation on GitHub Enterprise Server requires an authenticated user with push access on your instance. We recommend that you review /var/log/github-audit.log* for any unusual activity, particularly push operations containing unexpected special characters in push options. Updates are available in the following releases:
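A review of that kind can be automated with a short script. This is a sketch only: the log line format and field names below are invented, so adapt the matching to your instance's actual audit log schema before relying on it.

```python
# Hypothetical sketch: flag audit-log lines for push events whose
# recorded push options contain control characters (the kind of
# delimiter smuggling described in this post). The line format and
# field names are invented; adjust to your instance's real schema.
import re

# Control characters excluding tab/newline/CR, which appear in normal logs.
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")

def suspicious_push_lines(lines: list[str]) -> list[str]:
    """Return lines that mention a push and contain control characters."""
    return [ln for ln in lines if "push" in ln and CONTROL_CHARS.search(ln)]

sample = [
    'action=git.push user=alice opts="ci.skip"',
    'action=git.push user=mallory opts="x\x00env=debug"',  # injected delimiter
    'action=repo.create user=alice',
]
print(suspicious_push_lines(sample))  # flags only mallory's line
```

In practice you would stream `/var/log/github-audit.log*` through a filter like this rather than build a list in memory.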

  • GitHub Enterprise Server 3.14.25 or later
  • GitHub Enterprise Server 3.15.20 or later
  • GitHub Enterprise Server 3.16.16 or later
  • GitHub Enterprise Server 3.17.13 or later
  • GitHub Enterprise Server 3.18.7 or later
  • GitHub Enterprise Server 3.19.4 or later
  • GitHub Enterprise Server 3.20.0 or later

We strongly recommend upgrading to the latest patch release as soon as possible. See the GHES release notes for details.

This vulnerability has been assigned CVE-2026-3854.

Acknowledgments

This vulnerability was discovered and responsibly disclosed by researchers at Wiz. Their report was thorough, clearly demonstrated the impact, and enabled us to move quickly from validation to remediation. This finding will receive one of the highest rewards in the history of our Bug Bounty program, which has been a cornerstone of our security program for over a decade.

The post Securing the git push pipeline: Responding to a critical remote code execution vulnerability appeared first on The GitHub Blog.


What are the top database platforms in 2026? A look at the latest data


The database market in 2026 remains stable at the top, with AWS, Microsoft, Oracle, and Google Cloud Platform leading by revenue. However, open-source and cloud-native platforms like MongoDB, Snowflake, and Databricks are gaining popularity – albeit very slowly – as uncovered in the latest Gartner and DB-Engines data.

Leading research and analysis firm Gartner recently revealed its DBMS Market Share Ranks for the 2011-2025 period, and it shows a clear pattern. That is: while the dominant database vendors are losing their stranglehold on the market, it’s happening very slowly – so don’t expect to see big changes at the top any time soon.

It’s a trend already uncovered in the Redgate DB-Engines rankings in recent times, even though DB-Engines uses a very different set of metrics from Gartner’s analysis.

Gartner focuses on a ‘stack’ ranking of revenue, so popular systems like PostgreSQL are only measured as part of commercial services.

DB-Engines, on the other hand, looks at individual system popularity. This includes tracking the number of Google searches for a system, mentions on online forums, LinkedIn and social media, job postings on Indeed, and more.

This means we have two very different, but equally insightful, sources of data for tracking the overall success and popularity of database platforms in 2026.

Summarizing the Gartner and DB-Engines rankings (and the trend they’ve both uncovered)

Both rankings show a clear (but very gradual) rise of open-source, AI, and cloud-native platforms. We’re talking the likes of MongoDB, Snowflake, Databricks, Redis, IBM Db2, and Apache Cassandra. This momentum reflects the evolving nature of a database industry increasingly impacted by AI and the cloud.

Indeed, those systems make up more than half of the top ten in April’s DB-Engines rankings, with Snowflake and Databricks performing particularly well. Together, these platforms are (very) gently chipping away at the market leaders.

Who are the current market leaders?
According to Gartner’s data, they are AWS, Microsoft, Oracle, and Google Cloud Platform (in that order, by revenue). Over on DB-Engines, it’s Oracle, MySQL, SQL Server and PostgreSQL leading the way on popularity – and that order has been static for the past 12 months. 

However, while this trend is continuous, it is very incremental. It’s reasonable to assume that this is partly a consequence of the growth being spread across so many systems – not just one or two dominant players.  

It’s also safe to assume that the momentum will simply continue at this very steady pace – or may even slow further – as new systems are regularly introduced, and the existing platforms ‘battle’ one another. Outside of the DB-Engines top ten, there are hundreds of databases rising and falling in popularity each month. 

Simply put: from what we currently know, don’t expect to see major changes at the top for the time being. Oracle, MySQL, SQL Server and PostgreSQL still dominate the landscape, with MongoDB consistently up there as well.

How are the leading vendors performing against each other? 

As mentioned, there have been no changes to the top four in the DB-Engines rankings in more than a year. Even Gartner’s market share data – covering a significant 14-year period – doesn’t reveal quite as many dramatic moves as you may expect.

What was happening back in 2011?
Gartner’s data is for the 2011-2025 period – a significant amount of time. To put it into perspective, back in 2011 Barack Obama was in the early days of his U.S. presidency, Android smartphones were only just coming into prominence (and BlackBerry was still a thing), Steve Jobs stood down from Apple (which launched both Siri and iCloud alongside the iPhone 4S), Microsoft announced SQL Server 2012 at PASS Summit…and Redgate were working on sending one lucky DBA to space. 

The most notable moves were AWS’s and Google Cloud Platform’s rapid ascents, in line with the increasing popularity of the cloud and cloud-adjacent databases. AWS moved from position seven in 2013 all the way to market leader by 2022, where it still stands today. And Google Cloud Platform jumped from even further down the order to position four by 2021. 

Microsoft and Oracle have both been remarkably consistent across the 14 years. Microsoft have held position two for most of that time (with two years on top in 2020/2021), while Oracle was market leader for 8 years before falling to third in 2019. It still maintains that position today and, as previously noted, has also maintained its position at the top of the DB-Engines rankings.

What was happening elsewhere?

Well, as Adam Ronthal (Vice President Analyst at Gartner) explained in his LinkedIn post (where you can see the graph in full), it’s been a fairly steady few years even slightly further down the order.

“With the exception of Tencent surpassing Huawei,” he explains, “there has been no churn for the top 17 vendors for the past 2 years.”

Churn – to clarify – is the ‘Churn Index’. This is defined by Ronthal as “calculated as a percentage of vendors that either gained or lost market position in the stack ranking”.
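That definition is simple enough to compute directly. The sketch below implements it as described; the vendor names are illustrative stand-ins for the real stack ranking.

```python
# The "Churn Index" as defined above: the percentage of vendors that
# gained or lost market position between two stack rankings.
# Vendor names are illustrative, not Gartner's actual data.

def churn_index(prev_rank: list[str], curr_rank: list[str]) -> float:
    """Percentage of vendors whose position changed between rankings."""
    moved = sum(1 for i, v in enumerate(curr_rank) if prev_rank.index(v) != i)
    return 100.0 * moved / len(curr_rank)

# Tencent and Huawei swap places; everyone else holds position,
# so 2 of 6 vendors moved.
prev = ["AWS", "Microsoft", "Oracle", "GCP", "Huawei", "Tencent"]
curr = ["AWS", "Microsoft", "Oracle", "GCP", "Tencent", "Huawei"]
print(round(churn_index(prev, curr), 1))  # 33.3
```

A single swap is enough to register churn, which is why two years with only the Tencent/Huawei change counts as an unusually static market.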

Where can I see the full rankings of database platforms in 2026?

You’ll find Gartner’s DBMS Market Share Ranks: 2011-2025 graph in Adam Ronthal’s LinkedIn post, alongside further insight and a promise of a more detailed analysis to come.

Over on DB-Engines, the rankings are published monthly, so expect to see the latest update this Friday, May 1st. Both the rankings and the trend chart are free to access. You’ll also find regular analysis of the rankings on the blog, details of all 437 databases currently listed in the rankings, and more.


FAQs: What are the top databases in 2026?

1. Who are the top database vendors in 2026?

The leading vendors are Amazon Web Services, Microsoft, Oracle, and Google Cloud Platform, based on Gartner’s revenue data.

2. Which databases are growing fastest?

Platforms like Snowflake, Databricks, and MongoDB are steadily gaining popularity due to cloud and AI adoption.

3. Will traditional databases lose dominance?

No, not in the short term. Systems like Oracle, MySQL, SQL Server, and PostgreSQL remain dominant, with change happening gradually rather than rapidly.

The post What are the top database platforms in 2026? A look at the latest data appeared first on Simple Talk.
