
Maintaining shadow branches for GitHub PRs


I've created pr-shadow with vibecoding, a tool that maintains a shadow branch for GitHub pull requests (PR) that never requires force-pushing. This addresses pain points I described in Reflections on LLVM's switch to GitHub pull requests#Patch evolution.

The problem

GitHub structures pull requests around branches, enforcing a branch-centric workflow. When you force-push a branch after a rebase, the UI displays "force-pushed the BB branch from X to Y". Clicking "compare" shows git diff X..Y, which includes unrelated upstream commits—not the actual patch difference. For a project like LLVM with 100+ commits daily, this makes the comparison essentially useless.

Inline comments suffer too: they may become "outdated" or misplaced after force pushes.

Additionally, if your commit message references an issue or another PR, each force push creates a new link on the referenced page, cluttering it with duplicate mentions. (You can work around this by adding backticks around the link text, but it is not ideal.)

Due to these difficulties, some recommendations suggest less flexible workflows that only append new commits and discourage rebases. However, this means working with an outdated base, and switching between the main branch and PR branches causes numerous rebuilds, which is especially painful for large repositories like llvm-project.

In a large repository, avoiding rebases isn't realistic—other commits frequently modify nearby lines, and rebasing is often the only way to discover that your patch needs adjustments due to interactions with other landed changes.

The solution

pr-shadow maintains a separate PR branch (e.g., pr/feature) that only receives commits—never force-pushed. You work freely on your local branch (rebase, amend, squash), then sync to the PR branch using git commit-tree to create a commit with the same tree but parented to the previous PR HEAD.

Local branch (feature)     PR branch (pr/feature)
A                          A
|                          |
B (amend)                  C1 "Fix bug"
|                          |
C (rebase)                 C2 "Address review"

Reviewers see clean diffs between C1 and C2, even though the underlying commits were rewritten.
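
Conceptually, the sync step can be reproduced with plain git plumbing. This is a minimal sketch rather than the tool's actual implementation; the branch names feature and pr/feature and the message "Address review" are placeholders:

# Reuse the tree of the rewritten local branch, but parent the new commit on the
# current shadow-branch HEAD so pr/feature only ever moves forward.
new_commit=$(git commit-tree "feature^{tree}" -p pr/feature -m "Address review")
git update-ref refs/heads/pr/feature "$new_commit"
git push origin pr/feature    # a plain push; no --force required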

When a rebase is detected (git merge-base with main/master changed), the new PR commit is created as a merge commit with the new merge-base as the second parent. GitHub displays these as "condensed" merges, preserving the diff view for reviewers.
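
A hedged sketch of the rebase case under the same assumptions: the new merge-base becomes a second parent, so the shadow branch records the base change without a force-push.

# After `git rebase main`, the merge-base between feature and main has moved.
new_base=$(git merge-base feature main)
new_commit=$(git commit-tree "feature^{tree}" -p pr/feature -p "$new_base" -m "Rebase onto main")
git update-ref refs/heads/pr/feature "$new_commit"
git push origin pr/feature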

Usage

# Initialize and create PR
git switch -c feature
prs init # Creates pr/feature, pushes, opens PR
# prs init --draft # Same but creates draft PR

# Work locally (rebase, amend, etc.)
git rebase main
git commit --amend

# Sync to PR
prs push "Fix bug"
prs push --force "Rewrite" # Force push if remote diverged

# Update PR title/body from local commit message
prs desc

# Run gh commands on the PR
prs gh view
prs gh checks

The tool supports both fork-based workflows (pushing to your fork) and same-repo workflows (for branches like user/<name>/feature). It also works with GitHub Enterprise, auto-detecting the host from the repository URL.

The name "prs" is a tribute to spr, which implements a similar shadow branch concept. However, spr pushes user branches to the main repository rather than a personal fork. While necessary for stacked pull requests, this approach is discouraged for single PRs as it clutters the upstream repository. pr-shadow avoids this by pushing to your fork by default.

I owe an apology to folks who receive users/MaskRay/feature branches (if they use the default fetch = +refs/heads/*:refs/remotes/origin/* to receive user branches). I had been abusing spr for a long time after LLVM's GitHub transition to avoid unnecessary rebuilds when switching between the main branch and PR branches.

Additionally, spr embeds a PR URL in commit messages (e.g., Pull Request: https://github.com/llvm/llvm-project/pull/150816), which can cause downstream forks to add unwanted backlinks to the original PR.

If I need stacked pull requests, I will probably use pr-shadow with the base patch and just rebase stacked ones - it's unclear how spr handles stacked PRs.


Podcast: Culture Through Tension: Leading Interdisciplinary Teams with Nick Gillian


In this podcast, Shane Hastie, Lead Editor for Culture & Methods, spoke to Nick Gillian about building cross-functional teams for physical AI innovation, growing engineering culture through positive tensions, and navigating the journey from technical execution to organizational influence.

By Nick Gillian

SSMS 22 Productivity Tips Video


Discover essential features in SSMS 22 to boost your productivity and make your SQL work more efficient and organized.



Daily Reading List – January 22, 2026 (#705)


Once again, a great day of content about practices and how to think about tech topics. Love it.

[article] Hardened containers don’t fix a broken software supply chain. Interesting take on where security really needs to happen, which means building trusted software from the source.

[blog] MCP, Skills, and Agents. So good. Skills don’t “kill” MCP. Poorly done MCP is bad either way, and done well it’s useful. Lots of other great insights here.

[article] Best Practices for Claude Code. I’d like you to use the Gemini CLI, but that doesn’t mean we can’t use and learn from other tools too.

[blog] Conductor: Testing the new Gemini CLI Extension by migrating a Next.js app to Bun. Great read. Agentic IDEs seem eager to get through planning and straight to work. My experience mirrors Esther’s where the Gemini CLI (with Conductor) genuinely wants to plan and co-create with you.

[blog] Results from the 2025 Go Developer Survey. Transparent, interesting data from this team, as always. What are Go devs doing, what are their concerns, and how are they tackling AI? Get the answers here.

[blog] How Google SREs Use Gemini CLI to Solve Real-World Outages. The title says it all. We use tools like the Gemini CLI to help us keep Google running smoothly.

[article] Reimagining LinkedIn’s search tech stack. Lots of LLM stuff in there, which isn’t a surprise. Especially given the graph they need to navigate.

[blog] Personal Intelligence in AI Mode in Search: Help that’s uniquely yours. If you choose to turn it on, you can get personalized answers in Google AI Mode that leverage your Gmail and Google Photos.

[article] When Everything Is a Crisis, Nothing Is: The Numbing Effect of the Infinite Scroll. An important read, especially for those of us that can accidentally be perpetually online for periods of time. We’re not meant for this.

[blog] Review of Google Antigravity for Building Jira Apps. Solid real-world example, with highlights and gotchas. I like that once he had the right app (and corresponding specs) built, he deleted all the code to see if Antigravity could build it correctly just from the spec.


America’s coming war over AI regulation


MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

In the final weeks of 2025, the battle over regulating artificial intelligence in the US reached a boiling point. On December 11, after Congress failed twice to pass a law banning state AI laws, President Donald Trump signed a sweeping executive order seeking to handcuff states from regulating the booming industry. Instead, he vowed to work with Congress to establish a “minimally burdensome” national AI policy, one that would position the US to win the global AI race. The move marked a qualified victory for tech titans, who have been marshaling multimillion-dollar war chests to oppose AI regulations, arguing that a patchwork of state laws would stifle innovation.

In 2026, the battleground will shift to the courts. While some states might back down from passing AI laws, others will charge ahead, buoyed by mounting public pressure to protect children from chatbots and rein in power-hungry data centers. Meanwhile, dueling super PACs bankrolled by tech moguls and AI-safety advocates will pour tens of millions into congressional and state elections to seat lawmakers who champion their competing visions for AI regulation. 

Trump’s executive order directs the Department of Justice to establish a task force that sues states whose AI laws clash with his vision for light-touch regulation. It also directs the Department of Commerce to starve states of federal broadband funding if their AI laws are “onerous.” In practice, the order may target a handful of laws in Democratic states, says James Grimmelmann, a law professor at Cornell Law School. “The executive order will be used to challenge a smaller number of provisions, mostly relating to transparency and bias in AI, which tend to be more liberal issues,” Grimmelmann says.

For now, many states aren’t flinching. On December 19, New York’s governor, Kathy Hochul, signed the Responsible AI Safety and Education (RAISE) Act, a landmark law requiring AI companies to publish the protocols used to ensure the safe development of their AI models and report critical safety incidents. On January 1, California debuted the nation’s first frontier AI safety law, SB 53—which the RAISE Act was modeled on—aimed at preventing catastrophic harms such as biological weapons or cyberattacks. While both laws were watered down from earlier iterations to survive bruising industry lobbying, they struck a rare, if fragile, compromise between tech giants and AI safety advocates.

If Trump targets these hard-won laws, Democratic states like California and New York will likely take the fight to court. Republican states like Florida with vocal champions for AI regulation might follow suit. Trump could face an uphill battle. “The Trump administration is stretching itself thin with some of its attempts to effectively preempt [legislation] via executive action,” says Margot Kaminski, a law professor at the University of Colorado Law School. “It’s on thin ice.”

But Republican states that are anxious to stay off Trump’s radar or can’t afford to lose federal broadband funding for their sprawling rural communities might retreat from passing or enforcing AI laws. Win or lose in court, the chaos and uncertainty could chill state lawmaking. Paradoxically, the Democratic states that Trump wants to rein in—armed with big budgets and emboldened by the optics of battling the administration—may be the least likely to budge.

In lieu of state laws, Trump promises to create a federal AI policy with Congress. But the gridlocked and polarized body won’t be delivering a bill this year. In July, the Senate killed a moratorium on state AI laws that had been inserted into a tax bill, and in November, the House scrapped an encore attempt in a defense bill. In fact, Trump’s bid to strong-arm Congress with an executive order may sour any appetite for a bipartisan deal. 

The executive order “has made it harder to pass responsible AI policy by hardening a lot of positions, making it a much more partisan issue,” says Brad Carson, a former Democratic congressman from Oklahoma who is building a network of super PACs backing candidates who support AI regulation. “It hardened Democrats and created incredible fault lines among Republicans,” he says. 

While AI accelerationists in Trump’s orbit—AI and crypto czar David Sacks among them—champion deregulation, populist MAGA firebrands like Steve Bannon warn of rogue superintelligence and mass unemployment. In response to Trump’s executive order, Republican state attorneys general signed a bipartisan letter urging the FCC not to supersede state AI laws.

With Americans increasingly anxious about how AI could harm mental health, jobs, and the environment, public demand for regulation is growing. If Congress stays paralyzed, states will be the only ones acting to keep the AI industry in check. In 2025, state legislators introduced more than 1,000 AI bills, and nearly 40 states enacted over 100 laws, according to the National Conference of State Legislatures.

Efforts to protect children from chatbots may inspire rare consensus. On January 7, Google and Character Technologies, a startup behind the companion chatbot Character.AI, settled several lawsuits with families of teenagers who killed themselves after interacting with the bot. Just a day later, the Kentucky attorney general sued Character Technologies, alleging that the chatbots drove children to suicide and other forms of self-harm. OpenAI and Meta face a barrage of similar suits. Expect more to pile up this year. Without AI laws on the books, it remains to be seen how product liability laws and free speech doctrines apply to these novel dangers. “It’s an open question what the courts will do,” says Grimmelmann. 

While litigation brews, states will move to pass child safety laws, which are exempt from Trump’s proposed ban on state AI laws. On January 9, OpenAI inked a deal with a former foe, the child-safety advocacy group Common Sense Media, to back a ballot initiative in California called the Parents & Kids Safe AI Act, setting guardrails around how chatbots interact with children. The measure proposes requiring AI companies to verify users’ age, offer parental controls, and undergo independent child-safety audits. If passed, it could be a blueprint for states across the country seeking to crack down on chatbots. 

Fueled by widespread backlash against data centers, states will also try to regulate the resources needed to run AI. That means bills requiring data centers to report on their power and water use and foot their own electricity bills. If AI starts to displace jobs at scale, labor groups might float AI bans in specific professions. A few states concerned about the catastrophic risks posed by AI may pass safety bills mirroring SB 53 and the RAISE Act. 

Meanwhile, tech titans will continue to use their deep pockets to crush AI regulations. Leading the Future, a super PAC backed by OpenAI president Greg Brockman and the venture capital firm Andreessen Horowitz, will try to elect candidates who endorse unfettered AI development to Congress and state legislatures. They’ll follow the crypto industry’s playbook for electing allies and writing the rules. To counter this, super PACs funded by Public First, an organization run by Carson and former Republican congressman Chris Stewart of Utah, will back candidates advocating for AI regulation. We might even see a handful of candidates running on anti-AI populist platforms.

In 2026, the slow, messy process of American democracy will grind on. And the rules written in state capitals could decide how the most disruptive technology of our generation develops far beyond America’s borders, for years to come.


Day 23: Asking AI for Edge Cases You’d Never Think Of


The bug report said it all: “Username displays as undefined undefined.”

Somewhere in my code, I’d assumed every user had a first name and last name. Most do. But one user had only entered an email. No name at all. My concatenation code produced “undefined undefined” and displayed it proudly.

I thought about names. I didn’t think about missing names.

This is the edge case problem. You build for what you expect. You break on what you don’t expect. And there’s always something you don’t expect.

AI has seen millions of edge cases. When you ask it what could break your code, it pulls from all of them. It thinks of the empty strings, the null values, the Unicode characters, the leap years, the timezone boundaries. The things you forget because you’re focused on making it work.

The Edge Case Generation Prompt

For any function:

Generate edge cases that could break this code.

Function: [paste the code]

Think about:
- Empty, null, undefined inputs
- Boundary values (0, -1, MAX_INT)
- Type coercion issues
- Unicode and special characters
- Concurrent access
- Network and database failures
- Time and timezone issues
- Very large or very small values
- Malformed but valid-looking data

For each edge case:
1. The input that breaks it
2. What goes wrong
3. How to handle it

A Real Edge Case Analysis

Here’s a simple function:

function formatUserName(user: User): string {
  return `${user.firstName} ${user.lastName}`;
}

AI’s edge case analysis:

Empty/Missing Values:

  • user.firstName is undefined → “undefined Smith”
  • user.lastName is undefined → “John undefined”
  • Both undefined → “undefined undefined”
  • Empty strings → “ “ (just a space)
  • Whitespace only → “ “ (multiple spaces)

Unicode and Special Characters:

  • Name contains emoji → “John 🚀” (might display wrong)
  • Right-to-left characters → Display order issues
  • Very long names → UI overflow
  • Names with quotes → “John O’Brien” (apostrophe might cause issues in certain contexts)
  • Names containing HTML tags → XSS risk if not escaped

Type Issues:

  • user is null → TypeError
  • user is undefined → TypeError
  • firstName is a number → “123 Smith” (type coercion)

Edge Cases in Valid Data:

  • Single word name (like “Madonna”) → Unclear how to handle
  • Hyphenated names → “Mary Jane Watson-Parker”
  • Multiple middle names → Not handled
  • Prefixes/suffixes → “Dr. John Smith Jr.”

A better implementation:

function formatUserName(user: User | null | undefined): string {
  if (!user) {
    return 'Unknown User';
  }

  const parts = [user.firstName, user.lastName]
    .filter(part => part && part.trim())
    .map(part => part.trim());

  if (parts.length === 0) {
    return user.email || 'Unknown User';
  }

  return parts.join(' ');
}

The Input Validation Prompt

For API endpoints:

Generate all the ways users could send invalid data to this endpoint.

Endpoint: [describe or paste code]
Expected input: [describe expected format]

Categories to consider:
1. Missing required fields
2. Wrong types (string instead of number)
3. Out of range values
4. Invalid formats (email, date, UUID)
5. SQL injection attempts
6. XSS attempts
7. Very large payloads
8. Unusual but valid values
9. Boundary values
10. Unicode edge cases

For each case:
- The invalid input
- What happens without validation
- Proper validation to add
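
To make the output concrete, here is a minimal TypeScript sketch of the kind of validation such a prompt typically suggests; the endpoint shape, field names, and limits are hypothetical:

interface CreateUserRequest {
  email: string;
  age: number;
}

// Reject malformed payloads before they reach business logic.
function parseCreateUserRequest(body: unknown): CreateUserRequest {
  if (typeof body !== 'object' || body === null) {
    throw new Error('Request body must be a JSON object');
  }
  const { email, age } = body as Record<string, unknown>;

  if (typeof email !== 'string' || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    throw new Error('email must be a valid email address');
  }
  if (typeof age !== 'number' || !Number.isInteger(age) || age < 0 || age > 150) {
    throw new Error('age must be an integer between 0 and 150');
  }
  return { email, age };
}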

The State Machine Prompt

For features with multiple states:

Map the state transitions for this feature and find invalid states.

Feature: [describe the feature]
States: [list known states]

Questions:
1. What states can transition to what other states?
2. What transitions are impossible but might be attempted?
3. What happens if the same transition is triggered twice?
4. What happens if transitions happen out of order?
5. What happens if the system crashes mid-transition?
6. Can you end up in an unrecoverable state?

For each problematic scenario, describe what goes wrong and how to prevent it.
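
For example, an explicit transition table turns "impossible but attempted" transitions into hard errors. A TypeScript sketch with hypothetical order states:

type OrderState = 'pending' | 'paid' | 'shipped' | 'cancelled';

// Only the transitions listed here are legal; anything else is rejected.
const allowedTransitions: Record<OrderState, OrderState[]> = {
  pending: ['paid', 'cancelled'],
  paid: ['shipped', 'cancelled'],
  shipped: [],
  cancelled: [],
};

function transition(current: OrderState, next: OrderState): OrderState {
  if (!allowedTransitions[current].includes(next)) {
    throw new Error(`Invalid transition: ${current} -> ${next}`);
  }
  return next;
}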

Time and Date Edge Cases

Time is always harder than you think:

Find time-related edge cases in this code.

Code: [paste code that deals with time]

Consider:
1. Timezone boundaries (UTC vs local)
2. Daylight saving time transitions
3. Midnight (00:00 vs 24:00)
4. End of month/year boundaries
5. Leap years (Feb 29)
6. Dates far in past or future
7. Unix timestamp overflow (2038 problem)
8. Different calendar systems
9. Server time vs user time vs database time
10. Clock skew between systems

What breaks? What should be handled differently?
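
One fix that tends to come back from this prompt: do day arithmetic in UTC so daylight saving transitions (23- and 25-hour days) don't skew the count. A small sketch:

// Naive local-time subtraction assumes every day is 24 hours, which breaks across DST.
// Normalizing both dates to UTC midnight keeps the day count stable.
function daysBetweenUtc(a: Date, b: Date): number {
  const utcA = Date.UTC(a.getUTCFullYear(), a.getUTCMonth(), a.getUTCDate());
  const utcB = Date.UTC(b.getUTCFullYear(), b.getUTCMonth(), b.getUTCDate());
  return Math.round((utcB - utcA) / (24 * 60 * 60 * 1000));
}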

Concurrency Edge Cases

When multiple things happen at once:

Find concurrency issues in this code.

Code: [paste code]

Consider:
1. Two users doing the same action simultaneously
2. Same user on two devices
3. Retry after timeout (request succeeded but response failed)
4. Race condition between read and write
5. Deadlocks between resources
6. Lost updates (both read, both write, one wins)
7. Phantom reads (data changes between queries)

For each issue:
- How to trigger it
- What goes wrong
- How to prevent it
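
As an illustration of preventing lost updates, here is a TypeScript sketch of optimistic locking with a version column; the table, columns, and the db.run interface are hypothetical:

interface Document {
  id: string;
  version: number;
  body: string;
}

// Optimistic locking: the UPDATE only applies if the version we originally read
// is still current, so a concurrent write becomes a visible conflict instead of
// a silent lost update.
async function saveDocument(
  db: { run(sql: string, params: unknown[]): Promise<{ changes: number }> },
  doc: Document,
): Promise<void> {
  const result = await db.run(
    'UPDATE documents SET body = ?, version = version + 1 WHERE id = ? AND version = ?',
    [doc.body, doc.id, doc.version],
  );
  if (result.changes === 0) {
    throw new Error('Conflict: document was modified by someone else');
  }
}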

The Failure Mode Prompt

For integrations and external dependencies:

What happens when things fail?

This code depends on: [list dependencies - database, APIs, services]

For each dependency, what happens when:
1. It's completely down
2. It's very slow (30 second response)
3. It returns an error
4. It returns invalid data
5. It returns partial data
6. Connection is lost mid-request
7. It times out
8. It rate limits you

For each failure mode:
- What does the user experience?
- What gets logged?
- How do you recover?
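
A minimal sketch of handling the "very slow" and "completely down" cases in TypeScript, using the standard fetch and AbortController APIs; the URL and timeout values are up to the caller:

// Fail fast instead of hanging when a dependency is slow or unreachable.
async function fetchWithTimeout(url: string, timeoutMs: number): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}

// Callers still need to treat non-2xx responses and the abort error as explicit
// failure modes, e.g. by logging and returning a fallback value.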

The User Behavior Prompt

Users do unexpected things:

What unexpected user behaviors could break this feature?

Feature: [describe it]

Consider users who:
1. Click submit multiple times rapidly
2. Navigate away mid-operation
3. Refresh during a multi-step process
4. Use browser back button
5. Have multiple tabs open
6. Copy-paste unexpected data
7. Use autofill with wrong data
8. Have slow or flaky connections
9. Use old cached versions
10. Are actively trying to break things

For each behavior:
- What happens?
- Should it be prevented or handled?
- How?
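
For the first behavior (rapid repeated submits), a small client-side guard is often enough. This TypeScript sketch wraps any async submit handler; submitOrder is hypothetical:

// Ignore submissions while one is already in flight, so a double-click
// can't create two orders.
function makeSingleFlight<T>(fn: () => Promise<T>): () => Promise<T | undefined> {
  let inFlight = false;
  return async () => {
    if (inFlight) {
      return undefined; // drop the duplicate click
    }
    inFlight = true;
    try {
      return await fn();
    } finally {
      inFlight = false;
    }
  };
}

// const submitOnce = makeSingleFlight(submitOrder);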

The Data Volume Prompt

Scale reveals edge cases:

What breaks at scale?

Current assumption: [what you're building for]
Scale scenario: [10x, 100x, 1000x]

Consider:
1. Database tables with millions of rows
2. API returning thousands of items
3. User with thousands of records
4. Processing millions of events
5. Files that are gigabytes
6. Requests at 10,000/second

What assumptions break?
What queries become slow?
What memory limits get hit?
What timeouts get exceeded?
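
One common fix this prompt surfaces is replacing OFFSET pagination with keyset (cursor) pagination, which stays fast on tables with millions of rows. A TypeScript sketch; the table, column, and db.all interface are hypothetical:

interface Page<T> {
  items: T[];
  nextCursor: string | null;
}

// Keyset pagination: seek past the last id we returned instead of counting
// and skipping rows with OFFSET.
async function listUsers(
  db: { all(sql: string, params: unknown[]): Promise<{ id: string }[]> },
  cursor: string | null,
  limit = 100,
): Promise<Page<{ id: string }>> {
  const rows = await db.all(
    'SELECT id FROM users WHERE id > ? ORDER BY id LIMIT ?',
    [cursor ?? '', limit + 1],
  );
  const items = rows.slice(0, limit);
  const nextCursor = rows.length > limit ? items[items.length - 1].id : null;
  return { items, nextCursor };
}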

Building Edge Cases Into Your Workflow

Don’t wait until production. Ask for edge cases:

During design: "What inputs could break this?"
During implementation: "What am I not handling?"
During review: "What edge cases should tests cover?"
During testing: "What else should I test?"

Make edge case thinking habitual.

The Edge Case Checklist

For any feature:

□ Null and undefined inputs
□ Empty strings, arrays, objects
□ Boundary values (0, -1, max)
□ Invalid types
□ Malformed data
□ Very large inputs
□ Concurrent access
□ Network failures
□ Database failures
□ Time/timezone issues
□ Unicode edge cases
□ User mistakes
□ Malicious input

Tomorrow

You’ve found the edge cases. Now you need to deploy the fixes. Deployment is scary, especially with AI-generated code. Tomorrow I’ll show you how to use AI to generate deployment automation: migrations, rollback scripts, and runbooks.


Try This Today

  1. Take a feature you built recently
  2. Run the edge case generation prompt
  3. Pick one edge case you didn’t handle
  4. Fix it

There’s always at least one. Usually there are ten. Find them before your users do.
