
Walled Culture


While major recording artists are sued for alleged plagiarism and most creators earn pennies for their work, media industry profits continue to soar. Libraries face mounting barriers to providing access to ebooks—often while being sued by the very publishers whose books they buy.

In this episode of Future Knowledge, tech and culture writer Glyn Moody discusses his book Walled Culture: How Big Content Uses Technology and the Law to Lock Down Culture and Keep Creators Poor. Moody traces how copyright laws designed for a world of physical scarcity have been repurposed for the digital age—creating legal and technical “walls” that restrict access to knowledge, limit creativity, and overwhelmingly benefit large media corporations over creators and the public. Joining the conversation is Maria Bustillos, writer and editor at the Brick House Cooperative.

Grab your copy of Walled Culture: https://walledculture.org 

This conversation was recorded on 11/10/2022. Watch the full video recording at: https://archive.org/details/book-talk-walled-culture

Check out all of the Future Knowledge episodes at https://archive.org/details/future-knowledge





Download audio: https://media.transistor.fm/fc37c35e/738f9a74.mp3

New in .NET 10 and C# 14: Enhancements in APIs Request/Response Pipeline


This blog post was originally published at https://blog.elmah.io/new-in-net-10-and-c-14-enhancements-in-apis-request-response-pipeline/

.NET 10 is officially out, along with C# 14. Microsoft has released .NET 10 as a Long-Term Support (LTS) release, succeeding .NET 8 as the current LTS version. Like every version, it is not just an update; it brings something new to the table. In this series, we explore which aspects of your software can be upgraded with the latest release. Today, in the What's new in .NET 10 and C# 14 series, we will evaluate the practical changes to APIs using minimal examples.

New in .NET 10 and C# 14: Enhancements in APIs Request/Response Pipeline

APIs need no introduction. From loading your shopping cart on your phone to showing this blog on your computer, everything is fetched through APIs. In today's dynamic websites, a single action can require multiple API calls, so even a small lag hurts the user experience, no matter what application you build. .NET 10 brings some thoughtful changes for APIs. Major improvements, like better JIT inlining and a lower GC burden, make things faster across the board. However, there are enhancements in the API request/response pipeline as well. Let's see how API operations are improved in .NET 10.

Benchmarking Minimal APIs with .NET 8 and .NET 10

I have created a directory MinimalApiComparison where three projects will reside. Consider the structure:

[Image: MinimalApiComparison folder structure]

One folder per API version and another for the benchmarks.

.NET 8 minimal API project

Let's start by creating a minimal API on .NET 8.

Step 1: Create a .NET 8 project

mkdir ApiNet8
cd ApiNet8

dotnet new webapi -n ApiNet8 --framework net8.0

Step 2: Set up the code

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/weather", () =>
{
    return new WeatherForecast(DateTime.Now, 25, "Sunny");
});

app.Run();

public record WeatherForecast(DateTime Date, int TemperatureC, string Summary);

Just a single endpoint named /weather.

.NET 10 Minimal API project

Now, let's set up a similar project for .NET 10.

Step 1: Create a .NET 10 project

mkdir ApiNet10
cd ApiNet10

dotnet new webapi -n ApiNet10 --framework net10.0

Step 2: Set up the code

var builder = WebApplication.CreateBuilder(args);

builder.Services.ConfigureHttpJsonOptions(o =>
{
    o.SerializerOptions.AllowOutOfOrderMetadataProperties = true;
});

var app = builder.Build();

app.MapGet("/weather", static () =>
    {
        return new WeatherForecast(DateTime.Now, 25, "Sunny");
    })
    .WithName("GetWeather")
    .WithSummary("Faster pipeline in .NET 10");

app.Run();

public record WeatherForecast(DateTime Date, int TemperatureC, string Summary);

The AllowOutOfOrderMetadataProperties option relaxes the strict ordering check for metadata properties during JSON deserialization. It is an optimisation that reduces parsing branches and speeds up model binding. The static () => modifier prevents the lambda from capturing variables, and therefore prevents closure allocation. In earlier versions, a capturing lambda stores the captured variable in a closure object on the Gen 0 heap; with a static lambda there is zero closure allocation, giving a faster, more memory-efficient API.
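To make the closure point concrete, here is a minimal sketch (the endpoint names /weather-capturing and /weather-static are hypothetical, added to the same Program.cs) contrasting a capturing lambda with the static form:

// A local variable that the first handler captures.
var defaultSummary = "Sunny";

// Capturing lambda: 'defaultSummary' is lifted into a compiler-generated closure object.
app.MapGet("/weather-capturing", () =>
    new WeatherForecast(DateTime.Now, 25, defaultSummary));

// Static lambda: captures are disallowed, so no closure object is created for the handler.
app.MapGet("/weather-static", static () =>
    new WeatherForecast(DateTime.Now, 25, "Sunny"));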

Benchmark project

With both target projects set up, let's move on to the benchmark project that evaluates them.

Step 1: Create the project

Our benchmark project will be a console application.

dotnet new console -n ApiBenchmarks
cd ApiBenchmarks

Step 2: Install the BenchmarkDotNet package

We are using BenchmarkDotNet to observe the results. For more information about this package, check out How to Monitor Your App's Performance with .NET Benchmarking.

dotnet add package BenchmarkDotNet

Step 3: Set up the benchmarking code

using System.Net.Http.Json;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Order;

[MemoryDiagnoser]
[SimpleJob(warmupCount: 3, iterationCount: 10)]
[Orderer(SummaryOrderPolicy.FastestToSlowest)]
public class ApiComparisonBenchmarks
{
    private readonly HttpClient _client8;
    private readonly HttpClient _client10;

    public ApiComparisonBenchmarks()
    {
        _client8 = new HttpClient { BaseAddress = new Uri("http://localhost:5100") };
        _client10 = new HttpClient { BaseAddress = new Uri("http://localhost:5200") };
    }

    [Benchmark]
    public async Task WeatherNet8()
    {
        var result = await _client8.GetFromJsonAsync<WeatherForecast>("/weather");
    }

    [Benchmark]
    public async Task WeatherNet10()
    {
        var result = await _client10.GetFromJsonAsync<WeatherForecast>("/weather");
    }
}

public record WeatherForecast(DateTime Date, int TemperatureC, string Summary);

I created two methods: WeatherNet8 hits the .NET 8 minimal API, and WeatherNet10 hits the .NET 10 version. Each client has its server URL specified explicitly. The [Benchmark] attribute does the rest of the job.

Step 4: Run the projects

Open three terminals. In the first one, run:

cd ApiNet8
dotnet run --urls "http://localhost:5100"

Run the following in the second one:

cd ApiNet10
dotnet run --urls "http://localhost:5200"

And in the third one:

cd ApiBenchmarks
dotnet run -c Release

Each API will now be running on its designated port. ApiBenchmarks hits each /weather endpoint and performs the benchmarking.

Output

[Image: BenchmarkDotNet results for WeatherNet8 and WeatherNet10]

We can observe a clear difference between the two versions of the same API. .NET 10 completed the work noticeably faster, and the other metrics are lower as well, indicating the performance improvements in the latest version. The standard deviation and error for .NET 10 are about 3x smaller thanks to less jitter in execution times, courtesy of JIT improvements and the updated JSON parsing. Pipeline configuration such as AllowOutOfOrderMetadataProperties and the static lambda reduced per-request overhead: AllowOutOfOrderMetadataProperties lowers the CPU burden by trimming System.Text.Json parsing paths, and the static lambda not only eliminates the closure allocation but also makes the delegate faster, with lower garbage collector pressure.

The big win came not only from the pipeline configuration but also from faster request/response handling in ASP.NET Core. .NET 10 has optimized middleware dispatch overhead and added static pipeline analysis and faster endpoint selection, resulting in lower mean and tail latency. Our observation was made on a minimal API with small objects; these enhancements will produce even more noticeable results in real projects with larger API operations.

Conclusion

.NET 10 was released in November 2025 and is supported for three years as a long-term support (LTS) release. For the API pipeline, .NET 10 brings key enhancements, and in this post I demonstrated them with a hands-on benchmark analysis. Better JIT inlining, lower CPU and GC pressure, and numerous other factors contribute to better APIs.

Source code: https://github.com/elmahio-blog/MinimalApiComparison-.git




How to Work with the ORC File Format in Python – A Guide with Examples


If you've worked with big data or analytics platforms, you've probably heard about ORC files. But what exactly are they, and how can you work with them in Python?

In this tutorial, I'll walk you through the basics of reading, writing, and manipulating ORC files using Python. By the end, you'll understand when to use ORC and how to integrate it into your data pipelines.

You can find the code on GitHub.

Table of Contents

  1. What is the ORC File Format?

  2. Prerequisites

  3. Reading ORC Files in Python

  4. Writing ORC Files with Compression

  5. Working with Complex Data Types

  6. A More Helpful Example: Processing Log Data

  7. When Should You Use ORC?

What Is the ORC File Format?

ORC stands for Optimized Row Columnar. It's a columnar storage file format designed for Hadoop workloads. Unlike traditional row-based formats like CSV, ORC stores data by columns, which makes it incredibly efficient for analytical queries.

Here's why ORC is popular:

  • ORC files are highly compressed, often 75% smaller than text files

  • Columnar format means you only read the columns you need

  • You can add or remove columns without rewriting data

  • ORC includes lightweight indexes for faster queries

Most organizations use ORC for their big data processing because it works well with Apache Hive, Spark, and Presto.

Prerequisites

Before we get started, make sure you have:

  • Python 3.10 or a later version installed

  • Basic understanding of DataFrames (pandas or similar)

  • Familiarity with file I/O operations

You'll need to install these libraries:

pip install pyarrow pandas

So why do we need PyArrow? PyArrow is the Python implementation of Apache Arrow, which provides excellent support for columnar formats like ORC and Parquet. It's fast, memory-efficient, and actively maintained.

Reading ORC Files in Python

Let's start by reading an ORC file. First, I'll show you how to create a sample ORC file so we have something to work with.

Creating a Sample ORC File

Here's how we'll create a simple employee dataset and save it as ORC:

import pandas as pd
import pyarrow as pa
import pyarrow.orc as orc

# Create sample employee data
data = {
    'employee_id': [101, 102, 103, 104, 105],
    'name': ['Alice Johnson', 'Bob Smith', 'Carol White', 'David Brown', 'Eve Davis'],
    'department': ['Engineering', 'Sales', 'Engineering', 'HR', 'Sales'],
    'salary': [95000, 65000, 88000, 72000, 71000],
    'years_experience': [5, 3, 7, 4, 3]
}

df = pd.DataFrame(data)

# Convert to PyArrow Table and write as ORC
table = pa.Table.from_pandas(df)
orc.write_table(table, 'employees.orc')

print("ORC file created successfully!")

This outputs:

ORC file created successfully!

Let me break down what's happening here. We start with a pandas DataFrame containing employee information. Then we convert it to a PyArrow table, which is PyArrow's in-memory representation of columnar data. Finally, we use orc.write_table() to write it to disk in ORC format.

The conversion to a PyArrow table is necessary because ORC is a columnar format, and PyArrow handles the translation from row-based pandas to column-based storage.

Reading the ORC File

Now that we have an ORC file, let's read it back:

# Read ORC file
table = orc.read_table('employees.orc')

# Convert to pandas DataFrame for easier viewing
df_read = table.to_pandas()

print(df_read)
print(f"\nData types:\n{df_read.dtypes}")

Output:

   employee_id           name   department  salary  years_experience
0          101  Alice Johnson  Engineering   95000                 5
1          102      Bob Smith        Sales   65000                 3
2          103    Carol White  Engineering   88000                 7
3          104    David Brown           HR   72000                 4
4          105      Eve Davis        Sales   71000                 3

Data types:
employee_id          int64
name                object
department          object
salary               int64
years_experience     int64
dtype: object

The orc.read_table() function loads the entire ORC file into memory as a PyArrow table. We then convert it back to pandas for familiar DataFrame operations.

Notice how the data types are preserved. ORC maintains schema information, so your integers stay integers and strings stay strings.
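If you want to verify the preserved schema yourself, a quick optional check with PyArrow might look like this:

import pyarrow.orc as orc

# Print the schema stored in the ORC file (column names and types, no data).
table = orc.read_table('employees.orc')
print(table.schema)
# Expect something like: employee_id: int64, name: string, department: string, ...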

Reading Specific Columns

Here's where ORC really shines. When working with large datasets, you often don't need all columns. ORC lets you read only what you need:

# Read only specific columns
table_subset = orc.read_table('employees.orc', columns=['name', 'salary'])
df_subset = table_subset.to_pandas()

print(df_subset)

Output:

            name  salary
0  Alice Johnson   95000
1      Bob Smith   65000
2    Carol White   88000
3    David Brown   72000
4      Eve Davis   71000

This is called column pruning, and it's a massive performance optimization. If your ORC file has 50 columns but you only need 3, you're reading a fraction of the data. This translates to faster load times and lower memory usage.

Writing ORC Files with Compression

ORC supports multiple compression codecs. Let's explore how to use compression when writing files:

# Create a larger dataset
large_data = {
    'id': range(10000),
    'value': [f"data_{i}" for i in range(10000)],
    'category': ['A', 'B', 'C', 'D'] * 2500
}

df_large = pd.DataFrame(large_data)
table_large = pa.Table.from_pandas(df_large)

# Write with ZLIB compression (default)
orc.write_table(table_large, 'data_zlib.orc', compression='ZLIB')

# Write with SNAPPY compression (faster but less compression)
orc.write_table(table_large, 'data_snappy.orc', compression='SNAPPY')

# Write with ZSTD compression (good balance)
orc.write_table(table_large, 'data_zstd.orc', compression='ZSTD')

import os
print(f"ZLIB size: {os.path.getsize('data_zlib.orc'):,} bytes")
print(f"SNAPPY size: {os.path.getsize('data_snappy.orc'):,} bytes")
print(f"ZSTD size: {os.path.getsize('data_zstd.orc'):,} bytes")

Output:

ZLIB size: 23,342 bytes
SNAPPY size: 44,978 bytes
ZSTD size: 6,380 bytes

Different compression codecs offer different trade-offs. ZLIB gives better compression but is slower. SNAPPY is faster but produces larger files. ZSTD offers a good balance between compression ratio and speed.

For most use cases, I recommend ZSTD. It's fast enough for real-time processing and provides excellent compression.

Working with Complex Data Types

ORC handles nested data structures well. Here's how to work with lists and nested data:

# Create data with complex types
complex_data = {
    'user_id': [1, 2, 3],
    'name': ['Alice', 'Bob', 'Carol'],
    'purchases': [
        ['laptop', 'mouse'],
        ['keyboard'],
        ['monitor', 'cable', 'stand']
    ],
    'ratings': [
        [4.5, 5.0],
        [3.5],
        [4.0, 4.5, 5.0]
    ]
}

df_complex = pd.DataFrame(complex_data)
table_complex = pa.Table.from_pandas(df_complex)
orc.write_table(table_complex, 'complex_data.orc')

# Read it back
table_read = orc.read_table('complex_data.orc')
df_read = table_read.to_pandas()

print(df_read)
print(f"\nType of 'purchases' column: {type(df_read['purchases'][0])}")

Output:

   user_id   name                purchases          ratings
0        1  Alice          [laptop, mouse]       [4.5, 5.0]
1        2    Bob               [keyboard]            [3.5]
2        3  Carol  [monitor, cable, stand]  [4.0, 4.5, 5.0]

Type of 'purchases' column: <class 'numpy.ndarray'>

ORC preserves list structures, which is incredibly useful for storing JSON-like data or aggregated information. Each cell can contain a list, and ORC handles the variable-length storage efficiently.

A More Helpful Example: Processing Log Data

Let's put this together with a practical example. Imagine you're processing web server logs:

from datetime import datetime, timedelta
import random

# Generate sample log data
log_data = []
start_date = datetime(2025, 1, 1)

for i in range(1000):
    log_data.append({
        'timestamp': start_date + timedelta(minutes=i),
        'user_id': random.randint(1000, 9999),
        'endpoint': random.choice(['/api/users', '/api/products', '/api/orders']),
        'status_code': random.choice([200, 200, 200, 404, 500]),
        'response_time_ms': random.randint(50, 2000)
    })

df_logs = pd.DataFrame(log_data)

# Write logs to ORC
table_logs = pa.Table.from_pandas(df_logs)
orc.write_table(table_logs, 'server_logs.orc', compression='ZSTD')

# Later, query only failed requests
table_subset = orc.read_table('server_logs.orc')
df_subset = table_subset.to_pandas()

# Filter for errors
errors = df_subset[df_subset['status_code'] >= 400]
print(f"Total errors: {len(errors)}")
print(f"\nError breakdown:\n{errors['status_code'].value_counts()}")
print(f"\nSlowest error response: {errors['response_time_ms'].max()}ms")

Output:

Total errors: 387

Error breakdown:
status_code
404    211
500    176
Name: count, dtype: int64

Slowest error response: 1994ms

This example shows why ORC is a well-suited file format for log storage. You can write logs continuously, compress them efficiently, and query them quickly. The columnar format means you can filter by status code without reading endpoint or response time data.

When Should You Use ORC?

Use ORC when you:

  • Work with big data platforms (Hadoop, Spark, Hive)

  • Need efficient storage for analytics workloads

  • Have wide tables where you often query specific columns

  • Want built-in compression and indexing

Don't use ORC when you:

  • Need row-by-row processing – use Avro instead

  • Work with small datasets – CSV is simpler in such cases

  • Need human-readable files – use JSON

  • Don't have big data infrastructure

Conclusion

ORC is a powerful format for data engineering and analytics. With PyArrow, working with ORC in Python is both straightforward and performant.

You've learned how to read and write ORC files, use compression, handle complex data types, and apply these concepts to real-world scenarios. The columnar storage and compression make ORC an excellent choice for big data pipelines.

Try integrating ORC into your next data project. You'll likely see significant improvements in storage costs and query performance.

Happy coding!




Introducing Google Cloud SQL


Discover how Google Cloud SQL can streamline tasks and deliver innovative database solutions for your organization’s needs.

The post Introducing Google Cloud SQL appeared first on MSSQLTips.com.


Day 14: When AI Starts Hallucinating


Claude generated beautiful code for handling file uploads with the Prisma $upload method.

There is no $upload method in Prisma.

Claude invented it. Completely. With confidence. The syntax looked right. The explanation made sense. The code would have passed a code review if you didn’t know better.

When I tried to run it, the error was immediate: “TypeError: prisma.$upload is not a function.”

This is hallucination. AI generates something that looks correct but isn’t based in reality. Not a bug. Not a misunderstanding. Pure invention presented as fact.

Hallucination is the most dangerous AI failure mode because it’s confident. AI doesn’t say “I think there might be a method…” It says “use the $upload method” like it’s documented fact.

You need to catch this before it ships.

Why AI Hallucinates

Understanding the cause helps you anticipate when hallucination is likely.

Training data gaps. AI was trained on a snapshot of the internet. Libraries update. APIs change. New features appear. AI doesn’t know what happened after its training cutoff.

Pattern completion. AI predicts what comes next based on patterns. If it’s seen many similar APIs, it might predict that this API follows the same pattern. Even if it doesn’t.

Confidence without verification. AI has no way to check if what it’s generating actually exists. It can’t run the code. It can’t check documentation. It generates what seems likely.

Long conversations. As conversations grow, AI’s attention on earlier context fades. It might hallucinate details from earlier in the conversation or invent details to fill gaps.

Pressure to answer. AI is trained to be helpful. When asked about something it doesn’t know, it often generates a plausible-sounding answer rather than admitting uncertainty.

Red Flags for Hallucination

Watch for these signs:

Unfamiliar API methods. You’ve used this library for months. Suddenly AI references a method you’ve never seen. Verify it.

Too-convenient features. AI generates a function that does exactly what you need. Almost too perfectly. If it seems too easy, check if it’s real.

Specific version claims. “This was added in version 3.2.” Did you check? AI often invents version numbers.

Configuration you’ve never seen. AI adds a config option that solves your exact problem. Verify it exists.

Import paths that look wrong. AI imports from a path that doesn’t match your project structure or the library’s typical patterns.

Confident explanations for uncertain things. AI explains how something works in detail when you know the library doesn’t document it that clearly.

Verification Strategies

Strategy 1: Check Imports First

Before trusting AI-generated code, verify the imports:

import { uploadFile } from '@prisma/client/upload';

Does this path exist? Open your node_modules and check. Run the import in isolation. Don’t assume AI got it right.
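One quick way to run the import in isolation is a throwaway script; this sketch reuses the hallucinated path from above and simply asks Node whether it resolves:

// check-import.js - does this module path actually resolve?
try {
  require.resolve('@prisma/client/upload');
  console.log('resolves');
} catch (err) {
  // A hallucinated path typically fails here (e.g. MODULE_NOT_FOUND or a package-exports error).
  console.log('does not resolve:', err.code);
}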

Strategy 2: Ask AI to Cite Sources

You mentioned the $upload method in Prisma.
Show me the documentation link for this feature.

If AI provides a link, check it. If AI admits uncertainty, that’s valuable information. If AI generates a fake documentation link, that’s a big red flag.

Strategy 3: Verify Against Official Docs

For any API or method you don’t recognize:

  1. Open the official documentation
  2. Search for the exact method name
  3. If you can’t find it, it might not exist

This takes 30 seconds. It can save hours of debugging hallucinated code.

Strategy 4: Test Immediately

Before building on AI-generated code, test it:

Can you write a minimal test case that demonstrates this method works?

Run the test. If it fails immediately with “method not found,” you’ve caught hallucination early.
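A minimal smoke test along these lines (assuming a generated Prisma client in your project) fails fast when the method was invented:

// smoke-test.js
const { PrismaClient } = require('@prisma/client');
const prisma = new PrismaClient();

console.log(typeof prisma.$upload);  // 'undefined' -> the method does not exist
console.log(typeof prisma.$connect); // 'function'  -> this one is real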

Strategy 5: Ask Directly

Before I use this code, I want to verify:
1. Does prisma.$upload actually exist, or did you invent it?
2. What version of Prisma introduced this?
3. If you're not certain, tell me.

AI will often admit uncertainty when directly asked. It’s less likely to admit uncertainty unprompted.

Common Hallucination Patterns

Invented Methods

AI adds methods to libraries that don’t exist:

// Hallucinated - no such method
await prisma.$upload(file);

// Hallucinated - no such option
await fetch(url, { autoRetry: true });

// Hallucinated - no such property
const size = array.totalSize;

Prevention: Verify any method you don’t recognize against official docs.

Invented Options

AI adds configuration options that aren’t supported:

// Hallucinated config
const server = express({
  autoParseJson: true,  // Not a real option
  requestLogging: 'verbose'  // Not a real option
});

Prevention: Check library documentation for configuration options.

Invented Package Names

AI imports packages that don’t exist or uses wrong package names:

// Wrong package name
import { validate } from 'express-input-validator';  // Real package is different

// Invented package
import { Cache } from '@prisma/cache';  // Doesn't exist

Prevention: Check npm for package existence before installing.

Hallucinated Function Signatures

AI generates calls with wrong argument order or types:

// AI might generate
fs.writeFile(content, path, callback);

// But real signature is
fs.writeFile(path, content, callback);

Prevention: Check function signatures in IDE (hover) or documentation.

Version-Specific Features That Don’t Exist

AI claims a feature exists in a specific version when it doesn’t:

"Use the new useOptimistic hook from React 18.2"

Is that real? In this case, sort of (it’s in React 19 Canary). But AI confidently states version numbers that are often wrong.

Prevention: Check release notes for the version you’re actually using.

The Verification Prompt

When you suspect hallucination:

Stop. I want to verify before proceeding.

You used [specific method/feature/API].

1. Is this a real feature, or did you generate something plausible?
2. What documentation can you cite?
3. What version introduced this?
4. Rate your confidence: certain, likely, uncertain.

Be honest. I'd rather know if you're unsure.

This gives AI permission to admit uncertainty. Often, when directly questioned, AI will say “I’m not certain this exists in the current version” or “I may have confused this with a similar library.”

When Hallucination Sneaks Through

Despite verification, hallucinated code sometimes gets into your codebase. Catch it with:

TypeScript strict mode. Catches many method/property hallucinations at compile time (see the sketch after this list).

Linting. Rules like import/no-unresolved catch bad imports.

Tests. Test immediately after AI generates code. Don’t let untested hallucinations accumulate.

Code review. A second pair of eyes that knows the codebase catches “wait, that doesn’t exist.”

CI/CD. Build failures catch hallucinated dependencies and methods before deployment.
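As a rough sketch of the first two safety nets (assuming a TypeScript project with eslint-plugin-import installed), the relevant configuration is small:

tsconfig.json:

{
  "compilerOptions": {
    "strict": true
  }
}

.eslintrc.json:

{
  "plugins": ["import"],
  "rules": {
    "import/no-unresolved": "error"
  }
}

With type checking in the build, the hallucinated prisma.$upload from earlier fails with a "Property does not exist" error instead of reaching runtime, and unresolved import paths fail the lint run.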

Context Window Hallucination

There’s a special type of hallucination in long conversations: AI invents details from earlier in the conversation.

After 30+ messages, AI might reference:

  • Decisions you never made
  • Files that were mentioned differently
  • Code that was modified since

This is context degradation causing false memories.

Prevention:

  • Keep conversations shorter (Day 8)
  • Provide fresh context when resuming (Day 7)
  • Verify by re-reading relevant code before trusting AI’s description

Real Example: The Express Middleware

I asked Claude to add rate limiting to an Express app. Claude generated:

import { rateLimit } from 'express-rate-limit';

app.use(rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  standardHeaders: true,
  legacyHeaders: false,
  trustProxy: true  // <-- Hallucinated option
}));

Most of this is correct. express-rate-limit is real. Those options are mostly real.

But trustProxy isn’t an option for the rate limiter. It’s an Express app setting. Claude mixed up contexts.

The code would run without errors (unrecognized options are often ignored), but it wouldn’t actually trust the proxy. Subtle bug.

I caught it because I checked the docs before deploying. The docs don’t mention trustProxy as a rate limiter option.

Building Hallucination Awareness

Over time, you develop intuition:

  • “That looks too convenient”
  • “I’ve never seen that method before”
  • “That’s a lot of config options I don’t recognize”
  • “AI is very confident about something I couldn’t find in docs”

Trust that intuition. Verify. The cost of checking is low. The cost of shipping hallucinated code is high.

Week 2 Complete

You’ve made it through the second week.

Day 8: When to restart vs. keep going
Day 9: Using Git as your undo button
Day 10: Agent configuration for consistent output
Day 11: Teaching AI your patterns with examples
Day 12: The common mistakes file
Day 13: Constraining AI to only what you asked
Day 14: Catching and preventing hallucination

You now have the tactical skills to manage AI’s quirks. You know when to restart. You know how to constrain. You know what to verify.

Next week, we put AI to work in specialized roles. Security auditor. Performance reviewer. Test generator. Code reviewer. Each role with specific prompts and workflows.

The foundation is solid. Now we specialize.


Try This Today

  1. Review the last piece of code AI generated for you
  2. Find one method, API, or configuration option you don’t recognize
  3. Look it up in official documentation
  4. If it doesn’t exist, you’ve found hallucination
  5. If it does exist, you’ve verified your code

Make verification a habit. Every unfamiliar API gets checked. Every convenient feature gets confirmed.

The five minutes you spend verifying saves the five hours debugging hallucinated code.

Trust, but verify. Especially with AI.


Why the White House keeps shitposting

Screens at the White House display AI-modified videos of House Minority Leader Hakeem Jeffries (D-NY) and U.S. Senate Minority Leader Chuck Schumer (D-NY) that were shared on social media by President Donald Trump. | Credit: Alex Wong/Getty Images

Hello and welcome to Regulator, a newsletter for Verge subscribers about the technology, broligarchs and brainrot rapidly transforming politics and civic society. Not subscribed to The Verge yet? You should! It can materially improve your life.

Last week was a grim reminder that no matter what sort of horror is being perpetrated or how many people end up dead, the Trump administration's knee-jerk response is to shitpost through it. The White House's response on X to abducting the head of a sovereign nation? "FAFO". The response to an ICE agent shooting a woman in broad daylight? A Buzzfeed-style listicle of "57 Times Sick, Unhinged Democrat …

Read the full story at The Verge.
