Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Cory Doctorow: Legalising Reverse Engineering Could End 'Enshittification'

Scifi author/tech activist Cory Doctorow has decried the "enshittification" of our technologies to extract more profit. But Saturday he also described what could be "the beginning of the end for enshittification" in a new article for the Guardian — "our chance to make tech good again".

There is only one reason the world isn't bursting with wildly profitable products and projects that disenshittify the US's defective products: its (former) trading partners were bullied into passing an "anti-circumvention" law that bans the kind of reverse-engineering that is the necessary prelude to modifying an existing product to make it work better for its users (at the expense of its manufacturer)...

Post-Brexit, the UK is uniquely able to seize this moment. Unlike our European cousins, we needn't wait for the copyright directive to be repealed before we can strike article 6 off our own law books and thereby salvage something good out of Brexit...

Until we repeal the anti-circumvention law, we can't reverse-engineer the US's cloud software, whether it's a database, a word processor or a tractor, in order to swap out proprietary, American code for robust, open, auditable alternatives that will safeguard our digital sovereignty. The same goes for any technology tethered to servers operated by any government that might have interests adverse to ours — say, the solar inverters and batteries we buy from China.

This is the state of play at the dawn of 2026. The digital rights movement has two powerful potential coalition partners in the fight to reclaim the right of people to change how their devices work, to claw back privacy and a fair deal from tech: investors and national security hawks. Admittedly, the door is only open a crack, but it's been locked tight since the turn of the century. When it comes to a better technology future, "open a crack" is the most exciting proposition I've heard in decades.

Thanks to Slashdot reader Bruce66423 for sharing the article.



Android Weekly Issue #709

Articles & Tutorials
Sponsored
Code 10x faster. Tell Firebender to create full screens, ship features, or fix bugs - and watch it do the work for you. It's been battle-tested by the best Android teams at companies like Tinder, Adobe, and Instacart.
efe budak explains implementing a Google-Photos-style animated top bar in Compose Multiplatform using scroll behavior and nested scrolling.
Alysson Cirilo shows how to set up Kotest with proper Gradle and source set configuration for Kotlin Multiplatform testing.
Jaewoong Eum presents Landscapist Core, a small, KMP-first image loader with efficient caching and UI integration for Compose Multiplatform.
Oğuzhan Aslan covers using Compose’s Pager APIs, state control, custom layouts, and Paging 3 for advanced paginated UI.
Max Kach explains creating and integrating a VHS glitch shader in Jetpack Compose using AGSL and reusable components.
Veronica Putri Anggraini demonstrates creating a custom glowing bottom navigation in Jetpack Compose using AGSL shaders.
Sergey Drymchenko outlines practical performance tips like keys, immutable data, and content types to optimize LazyColumn lists when moving from RecyclerView to Jetpack Compose.
Sam Edwards describes using agents in IntelliJ IDEA and a Research-Plan-Implement workflow to automate research, planning, and incremental coding tasks in a project.
Jaewoong Eum demonstrates crafting advanced animated custom paywalls in Jetpack Compose with RevenueCat integration and remote content testing.
Te Zov explains DI fundamentals for Kotlin/KMP and gradually introduces Koin as an effective DI solution.
Mohan Sankaran explains how reference leaks in Jetpack Compose arise from improper reference retention and how to diagnose and resolve them.
Place a sponsored post
We reach out to more than 80k Android developers around the world, every week, through our email newsletter and social media channels. Advertise your Android development related service or product!
Libraries & Code
Kotlin-first llama.cpp integration for on-device and remote LLM inference
Trailblaze is an AI-powered mobile testing framework that lets you author and execute tests using natural language.
Videos & Podcasts
Philipp Lackner covers Kotlin 2.3.0's new features, including nested type aliases and data flow checks.
Jov Mit demonstrates how to implement a sticky footer for a ModalBottomSheet in Jetpack Compose.
Enrique Lopez Manas discusses his KotlinConf 2025 talk on using Kotlin for custom financial data visualization.
Philipp Lackner explores AI's coding capabilities using objective metrics and real-world tests.
Mykola Miroshnychenko implements platform-specific dependency injection in a Compose Multiplatform (CMP) application using Koin.

Don't fall into the anti-AI hype

I love writing software, line by line. You could say that my career has been a continuous effort to create software that is well written and minimal, where the human touch is the fundamental feature. I also hope for a society where the last are not forgotten. Moreover, I don't want AI to succeed economically, and I don't care if the current economic system is subverted (I could be very happy, honestly, if it goes in the direction of a massive redistribution of wealth). But I would not respect myself and my intelligence if I let my ideas about software and society impair my vision: facts are facts, and AI is going to change programming forever.

In 2020 I left my job in order to write a novel about AI, universal basic income, and a society adapting to the automation of work while facing many challenges. At the very end of 2024 I opened a YouTube channel focused on AI, its use in coding tasks, and its potential social and economic effects. But while I recognized very early what was going to happen, I thought we had more time, at least a few years, before programming would be completely reshaped. I no longer believe this is the case. Recently, state-of-the-art LLMs have become able to complete large subtasks or medium-sized projects alone, almost unassisted, given a good set of hints about what the end result should be. The degree of success you'll get depends on the kind of programming you do (the more isolated and the more textually representable, the better: system programming is particularly well suited) and on your ability to build a mental representation of the problem to communicate to the LLM. But, in general, it is now clear that for most projects, writing the code yourself is no longer sensible, except for fun.

In the past week, just by prompting and inspecting the code to provide guidance from time to time, I completed the following four tasks in hours instead of weeks:

1. I modified my linenoise library to support UTF-8, and created a framework for testing line editing that uses an emulated terminal able to report what is displayed in each character cell. This is something I always wanted to do, but it was hard to justify the work needed just to test a side project of mine. But if you can just describe your idea and it materializes into code, things are very different.

2. I fixed transient failures in the Redis tests. This is very annoying work: timing-related issues, TCP deadlock conditions, and so forth. Claude Code iterated for as long as needed to reproduce them, inspected the state of the processes to understand what was happening, and fixed the bugs.

3. Yesterday I wanted a pure C library able to run inference for BERT-like embedding models. Claude Code created it in 5 minutes: the same output as PyTorch at comparable speed (about 15% slower), in 700 lines of code, plus a Python tool to convert the GTE-small model.

4. In the past weeks I made changes to the Redis Streams internals. I had a design document for the work I did. I tried giving it to Claude Code, and it reproduced my work in about 20 minutes or less (mostly because I'm slow at checking and authorizing the commands it needed to run).

It is simply impossible not to see the reality of what is happening. Writing code is, for the most part, no longer needed. It is now a lot more interesting to understand what to do and how to do it (and on this second part, LLMs are great partners too). It does not matter if AI companies fail to get their money back and the stock market crashes. All that is irrelevant in the long run. It does not matter if this or that CEO of some unicorn tells you something off-putting or absurd. Programming has changed forever, anyway.

How do I feel, about all the code I wrote that was ingested by LLMs? I feel great to be part of that, because I see this as a continuation of what I tried to do all my life: democratizing code, systems, knowledge. LLMs are going to help us to write better software, faster, and will allow small teams to have a chance to compete with bigger companies. The same thing open source software did in the 90s.

However, this technology is far too important to be in the hands of a few companies. For now, one lab may do pre-training better than another, or do reinforcement learning much more effectively, but the open models, especially the ones produced in China, continue to compete (even if they are behind) with the frontier models of the closed labs. So far there is a sufficient democratization of AI, even if imperfect. But it is absolutely not obvious that it will stay that way forever. I'm scared of the centralization. At the same time, I believe neural networks at scale are simply able to do incredible things, and that there is not enough "magic" inside current frontier AI for the other labs and teams not to catch up (otherwise it would be very hard to explain, for instance, why OpenAI, Anthropic and Google have stayed so close in their results for years now).

As a programmer, I want to write more open source than ever now. I want to improve certain repositories of mine that I abandoned for lack of time. I want to apply AI to my Redis workflow: improve the Vector Sets implementation and then other data structures, like I'm doing with Streams now.

But I'm worried for the folks who will get fired. It is not clear what dynamic will be at play: will companies try to hire more people and build more? Or will they try to cut salary costs, keeping fewer programmers who are better at prompting? And there are other sectors, I fear, where humans will become completely replaceable.

What is the social solution, then? Innovation can't be taken back after all. I believe we should vote for governments that recognize what is happening, and are willing to support those who will remain jobless. And, the more people get fired, the more political pressure there will be to vote for those who will guarantee a certain degree of protection. But I also look forward to the good AI could bring: new progress in science, that could help lower the suffering of the human condition, which is not always happy.

Anyway, back to programming. I have a single suggestion for you, my friend. Whatever you believe the Right Thing should be, you can't control it by refusing what is happening right now. Skipping AI is not going to help you or your career. Think about it. Test these new tools with care, over weeks of work, not in a five-minute test where you just reinforce your own beliefs. Find a way to multiply yourself, and if it does not work for you, try again every few months.

Yes, maybe you think that you worked so hard to learn coding, and now machines are doing it for you. But what was the fire inside you when you coded late into the night to see your project working? It was building. And now you can build more and better, if you find your way to use AI effectively. The fun is still there, untouched.

File I/O Performance: Picking the Fastest Weapon in Your Arsenal

This article provides insights on the fastest file I/O methods for .NET 10, emphasizing benchmarks with 1 MB payloads.






Day 11: Teaching AI Your Patterns With Examples


Back on Day 3, I showed you how to create a design system reference so AI generates UI that matches your application. The idea was simple: instead of describing what you want, show AI an example of what good looks like.

That concept doesn’t just apply to UI.

Every developer has patterns. The way you structure services. How you handle errors. Where you put validation logic. Your naming conventions. The shape of your test files. These patterns accumulate over years of writing code, debugging production issues, and learning what works for you.

AI doesn’t know your patterns. It knows patterns from the internet. From Stack Overflow answers. From GitHub repositories. From training data that represents how millions of developers write code.

That’s the problem. Without guidance, your AI agent will do what it thinks is right in the moment. It’ll generate perfectly reasonable code that looks nothing like the rest of your codebase.

Generic Code vs Your Code

Ask AI to build a service without any context, and you’ll get something like this:

export class UserService {
  async getUser(id: string) {
    try {
      const user = await db.query('SELECT * FROM users WHERE id = ?', [id]);
      return user;
    } catch (error) {
      console.log('Error fetching user:', error);
      throw error;
    }
  }
}

This code works. It’s also generic. Maybe you use dependency injection. Maybe you separate database operations into private methods. Maybe you have specific error handling that logs to your telemetry system instead of console.log. Maybe you validate inputs before querying.

AI doesn’t know any of that unless you tell it.

You could describe your patterns in words:

Build a user service.
Use dependency injection for the database and logger.
Separate database operations into private methods.
Use our standard error handling pattern with telemetry.
Validate inputs before operations.

This is better than nothing. But “standard error handling pattern with telemetry” means different things to different developers. There’s interpretation. Room for AI to guess wrong.

Here’s what works better: show it an existing service and say “match this.”
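
For contrast with the generic version above, here is a minimal sketch of what a pattern-following reference service could look like. The specific names (Database, TelemetryService, User, the import paths) are hypothetical stand-ins for whatever your codebase actually defines:

import { Database } from '../db';
import { TelemetryService } from './telemetryService';
import { User } from '../types'; // hypothetical domain type

export class UserService {
  // PATTERN: dependencies arrive via the constructor, not module globals
  constructor(
    private db: Database,
    private telemetry: TelemetryService
  ) {}

  // PATTERN: public methods hold business logic and have explicit return types
  async getUser(id: string): Promise<User> {
    try {
      this.validateId(id);
      return await this._fetchUser(id);
    } catch (error) {
      // PATTERN: errors go to telemetry, not console.log
      this.telemetry.logError('getUser failed', error, { id });
      throw error;
    }
  }

  // PATTERN: private helpers are the only place that touches the database
  private async _fetchUser(id: string): Promise<User> {
    const rows = await this.db.query('SELECT * FROM users WHERE id = ?', [id]);
    return rows[0] as User;
  }

  // PATTERN: validation lives in its own private method
  private validateId(id: string): void {
    if (!id) throw new Error('User id is required');
  }
}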

The Reference Pattern

Find a file in your codebase that represents your patterns well. Not average code. Your cleanest, most pattern-compliant code. The file you’d point a new team member to and say “do it like this.”

Then reference it explicitly:

Build a NotificationService.

Reference: server/services/UserService.ts

Match:
- Constructor pattern (dependency injection)
- Method structure (public methods call private helpers)
- Error handling (try/catch with telemetry logging)
- Return types (explicit Promise<T> with custom types)
- Input validation approach

Point to specific patterns if the file is long. AI can read the whole file but benefits from knowing what matters.

What Patterns Are Worth Teaching?

Not everything needs a reference example. Save this technique for patterns that:

Repeat across your codebase. If you have 20 services that all follow the same structure, showing AI one example pays off on the other 19.

Are easy to get wrong. Error handling, logging, validation. The stuff that’s boring to write but critical to get right.

Define your codebase’s character. The patterns that make your code feel like your code. When someone opens a file, they should know immediately they’re in your codebase.

AI consistently misses. If you’ve described something twice and AI still gets it wrong, stop describing. Start showing.

Different Types of References

Service Structure

Create OrderService following server/services/UserService.ts.

Same patterns:
- Constructor takes db and logger
- Public methods for business operations
- Private methods for database queries
- Error handling with telemetry

Different content:
- Operations: createOrder, getOrder, cancelOrder
- Tables: orders, order_items

Route Patterns

Add routes for orders at server/routes/orders.ts.

Reference: server/routes/users.ts

Copy:
- Router setup pattern
- Middleware chain (auth, validation)
- Error response format
- Request typing approach

New content:
- POST /orders (create order)
- GET /orders/:id (get order details)
- DELETE /orders/:id (cancel order)
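
If it helps to picture what the referenced file contains, here is a rough sketch of server/routes/users.ts assuming an Express-style router. The middleware names (requireAuth, validateBody), the schema, and the service wiring are hypothetical examples of the patterns the prompt asks AI to copy:

// server/routes/users.ts (illustrative sketch)
import { Router, Request, Response } from 'express';
import { requireAuth } from '../middleware/auth';       // hypothetical auth middleware
import { validateBody } from '../middleware/validate';  // hypothetical validation middleware
import { userSchema } from '../schemas/user';
import { userService } from '../services';              // hypothetical service wiring

export const usersRouter = Router();

// PATTERN: middleware chain runs auth first, then request validation
usersRouter.post('/', requireAuth, validateBody(userSchema), async (req: Request, res: Response) => {
  try {
    const user = await userService.createUser(req.body);
    res.status(201).json({ data: user });
  } catch (error) {
    // PATTERN: shared error response format
    res.status(500).json({ error: { message: 'Failed to create user' } });
  }
});

usersRouter.get('/:id', requireAuth, async (req: Request, res: Response) => {
  const user = await userService.getUser(req.params.id);
  if (!user) {
    res.status(404).json({ error: { message: 'User not found' } });
    return;
  }
  res.json({ data: user });
});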

Test Structure

Write tests for OrderService in server/services/OrderService.test.ts.

Reference: server/services/UserService.test.ts

Copy exactly:
- Test file structure (describe blocks)
- Mock setup pattern
- beforeEach/afterEach usage
- Assertion style

Test these scenarios:
- Create order with valid items
- Create order with empty cart (should fail)
- Get order by ID
- Cancel order
- Cancel already-cancelled order (should fail)
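
As a rough idea of the shape being referenced, here is a sketch of server/services/UserService.test.ts in a Jest/Vitest style. The mocks and the UserService constructor are hypothetical; what matters is the structure the prompt tells AI to copy exactly (describe blocks, mock setup, beforeEach, assertion style):

// server/services/UserService.test.ts (illustrative sketch)
import { describe, it, expect, beforeEach, vi } from 'vitest';
import { UserService } from './UserService';

describe('UserService', () => {
  // PATTERN: fresh mocks per test, wired up in beforeEach
  let db: { query: ReturnType<typeof vi.fn> };
  let telemetry: { logError: ReturnType<typeof vi.fn> };
  let service: UserService;

  beforeEach(() => {
    db = { query: vi.fn() };
    telemetry = { logError: vi.fn() };
    service = new UserService(db as any, telemetry as any);
  });

  describe('getUser', () => {
    it('returns the user when found', async () => {
      db.query.mockResolvedValue([{ id: '42', name: 'Ada' }]);

      const user = await service.getUser('42');

      // PATTERN: assert on results and on collaborator calls
      expect(user).toEqual({ id: '42', name: 'Ada' });
      expect(db.query).toHaveBeenCalledTimes(1);
    });

    it('logs to telemetry and rethrows on failure', async () => {
      db.query.mockRejectedValue(new Error('boom'));

      await expect(service.getUser('42')).rejects.toThrow('boom');
      expect(telemetry.logError).toHaveBeenCalled();
    });
  });
});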

Component Patterns

Build an OrderSummary component in client/components/OrderSummary.tsx.

Reference: client/components/UserProfile.tsx

Match:
- Props interface at top
- Styled-components pattern
- Loading state handling
- Error boundary pattern

Content:
- Display order details
- Show line items with prices
- Total calculation
- Status indicator
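
And for the component reference, a pared-down sketch of what client/components/UserProfile.tsx might look like. The data hook (useUser) is hypothetical; the point is the props interface at the top, the styled-components usage, and the loading/error handling the prompt asks AI to match:

// client/components/UserProfile.tsx (illustrative sketch)
import React from 'react';
import styled from 'styled-components';
import { useUser } from '../hooks/useUser'; // hypothetical data-fetching hook

// PATTERN: props interface at the top of the file
interface UserProfileProps {
  userId: string;
}

const Card = styled.div`
  padding: 16px;
  border-radius: 8px;
`;

export function UserProfile({ userId }: UserProfileProps) {
  const { user, isLoading, error } = useUser(userId);

  // PATTERN: loading and error states handled before the main render
  if (isLoading) return <Card>Loading...</Card>;
  if (error) return <Card>Something went wrong.</Card>;

  return (
    <Card>
      <h2>{user.name}</h2>
      <p>{user.email}</p>
    </Card>
  );
}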

Making Your Best Code Easy to Reference

Some files become your go-to references. Make them good:

Keep them clean. Your reference files are templates. They should be well-structured and exemplary.

Keep them current. When patterns evolve, update reference files first. They’re the source of truth.

Keep them discoverable. Mention them in CLAUDE.md:

## Reference Files

When building new code, reference these files for patterns:

- Services: server/services/UserService.ts
- Routes: server/routes/users.ts
- Components: client/components/UserProfile.tsx
- Tests: server/services/UserService.test.ts

Now AI knows where to look without you specifying each time.

The Pattern Library Approach

Some teams create explicit pattern files that aren’t part of the running application. Just templates:

patterns/
  service-pattern.ts       # Template service
  route-pattern.ts         # Template route
  component-pattern.tsx    # Template component
  test-pattern.test.ts     # Template test

These files include comments explaining the pattern:

// patterns/service-pattern.ts

import { Database } from '../db';
import { TelemetryService } from './telemetryService';

// PATTERN: Services take dependencies via constructor
export class ExampleService {
  constructor(
    private db: Database,
    private telemetry: TelemetryService
  ) {}

  // PATTERN: Public methods are business operations
  async publicMethod(input: InputType): Promise<OutputType> {
    try {
      // PATTERN: Validate at entry point
      this.validateInput(input);

      // PATTERN: Call private helpers for DB operations
      const result = await this._fetchFromDatabase(input);

      return result;
    } catch (error) {
      // PATTERN: Always log with context
      this.telemetry.logError('publicMethod failed', error, { input });
      throw error;
    }
  }

  // PATTERN: Private methods prefixed with underscore
  private async _fetchFromDatabase(input: InputType): Promise<Data> {
    // Database operation here
  }

  // PATTERN: Validation in separate private method
  private validateInput(input: InputType): void {
    // Validation logic
  }
}

Reference these in prompts:

Build PaymentService following patterns/service-pattern.ts.

When AI Misses the Pattern

Sometimes AI reads the example but still misses something. Be specific about the mismatch:

The OrderService you generated doesn't match UserService.

In UserService:
- All database calls are in private methods (lines 45-80)
- Public methods only contain business logic

In your OrderService:
- createOrder has inline database calls

Refactor to match the UserService pattern exactly.

Point to the specific mismatch. AI can fix targeted issues better than vague “make it match.”

Why Examples Beat Descriptions

When you describe a pattern in words, there’s interpretation. “Use dependency injection” could mean constructor injection, property injection, or a service locator. “Handle errors properly” means different things to every developer.

When you show an example, there’s no interpretation. The code is the spec. AI sees exactly what you mean.

This is the same principle from Day 3 with the design system. AI is better at reading code than reading prose. Code shows. Documentation describes. Showing wins.

Tomorrow

Your AI knows your standards (Day 10) and your patterns (today). But what about the mistakes it keeps making? The same wrong assumptions. The same bad habits.

Tomorrow I’ll show you how to build a “common mistakes” file. Document the mistakes once, reference it in prompts, stop repeating corrections.


Try This Today

  1. Find your best service, component, or route file. The one that exemplifies your patterns.
  2. Ask AI to build something similar: “Create X following the pattern in Y”
  3. Compare the output to your original. Did AI match the structure?
  4. Note what matched and what didn’t.
  5. Refine your prompt to call out specific patterns that matter.

The first time you see AI perfectly match your codebase style, you’ll understand why examples beat descriptions.

It’s the difference between “use my patterns” and “use these patterns, exactly, see this file.”

Specificity wins.


AI Fails at Most Remote Work, Researchers Find

A new study "compared how well top AI systems and human workers did at hundreds of real work assignments," reports the Washington Post. They add that at least one example "illustrates a disconnect three years after the release of ChatGPT that has implications for the whole economy."

AI can accomplish many impressive tasks involving computer code, documents or images. That has prompted predictions that human work of many kinds could soon be done by computers alone. Bentley University and Gallup found in a survey [PDF] last year that about three-quarters of Americans expect AI to reduce the number of U.S. jobs over the next decade. But economic data shows the technology largely has not replaced workers.

To understand what work AI can do on its own today, researchers collected hundreds of examples of projects posted on freelancing platforms that humans had been paid to complete. They included tasks such as making 3D product animations, transcribing music, coding web video games and formatting research papers for publication. The research team then gave each task to AI systems such as OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude. The best-performing AI system successfully completed only 2.5 percent of the projects, according to the research team from Scale AI, a start-up that provides data to AI developers, and the Center for AI Safety, a nonprofit that works to understand risks from AI.

"Current models are not close to being able to automate real jobs in the economy," said Jason Hausenloy, one of the researchers on the Remote Labor Index study... The results, which show how AI systems fall short, challenge predictions that the technology is poised to soon replace large portions of the workforce... The AI systems failed on nearly half of the Remote Labor Index projects by producing poor-quality work, and they left more than a third incomplete. Nearly 1 in 5 had basic technical problems such as producing corrupt files, the researchers found.

One test involved creating an interactive dashboard for data from the World Happiness Report, according to the article. "At first glance, the AI results look adequate. But closer examination reveals errors, such as countries inexplicably missing data, overlapping text and legends that use the wrong colors — or no colors at all." The researchers say AI systems are hobbled by a lack of memory, and are also weak on "visual" understanding.

