
AI is the New Netflix


At my 2008 NewTeeVee conference, I asked Reed Hastings, then CEO of Netflix, whether streaming video would become the first killer app of broadband. It seemed obvious: video would consume capacity. And eventually it did. Streaming became the thing that made people care about downstream speed, drove the upgrade cycle, and reshaped how operators planned capacity. Now I think we’re watching a rerun of the same movie.

AI is becoming the killer app of the next broadband era. Not because of what it downloads, but because of what it uploads.

If you have been following my writing on how broadband traffic is changing (the upload nation piece from March and the Internet of AI piece from earlier this month), you know I have been watching this. The Q1 2026 report adds more weight to it.

Earlier this year, when I wrote about fiber upstream averages crossing 100 GB for the first time, I called it a turning point. The new data confirms that and shows early signs of changing behaviors.

Cloud Sync Eats Upstream Data

Using application-level data from Aispire.ai, OpenVault identified what is driving the upstream surge. Cloud sync is now the dominant upstream category across every speed tier, comprising 15 to 16 percent of all classified upload volume.

What has changed in the past 18 months is the nature of what’s being synced. ChatGPT reasoning models, Microsoft 365 Copilot, Apple Intelligence, and agentic AI workflows are generating upstream traffic that didn’t exist at scale in 2023. All of it flows to the cloud in the background.

IoT (cameras, sensors, doorbells) now accounts for 7 times more upload than download. Your doorbell doesn’t stream video to you; it streams to the cloud. Connected cars send 3 times more data outbound than they pull down. GPS, diagnostics, telemetry. And web search is generating twice as much upload as download, because voice queries and multimodal payloads are heavier than the typed searches they replaced.

The old asymmetric broadband model (fat downstream, thin upstream) was engineered for passive consumption. That model is finished.

Tale of Two Networks

One finding stood out. OpenVault’s first-ever breakout of residential versus non-residential subscriber behavior shows a 2.3x gap in download-to-upload ratios. It reveals how differently the same pipe gets used.

Residential subscribers run at a 23:1 download-to-upload ratio. Video comprises 48% of their downloads. That’s the familiar story.

Non-residential subscribers run at 7.3:1. Cloud connections account for 20% of their uploads, with a nearly symmetric 1.6:1 ratio for cloud specifically. At the 1 Gbps+ tier, cloud services make up 25.5% of non-residential upload traffic. Businesses running servers, SIEM tools, and cloud infrastructure behave more like small data centers than living rooms.

This reminds me of my own setup at home. An OpenClaw instance running on one machine, network-attached storage, a few other boxes humming away. Work-related computing running over a residential connection. I suspect that within a few years, this will be commonplace.

When cord-cutting was new, I was an outlier, writing about it at GigaOm while most people still paid for cable. Within a decade it was mainstream. Non-residential behavior on residential networks is going to follow the same arc.

The Bigger Picture

OpenVault’s Q1 2026 data points toward what the network is becoming. Not a delivery pipe, but an infrastructure layer of modern life. The upload economy has momentum. The next wave, agentic AI running persistently on consumer devices, hasn’t fully arrived yet.

But the numbers are already rumbling.


Source: OpenVault Broadband Insights, Q1 2026. Application-layer data from Aispire.ai.

May 13, 2026. San Francisco


Books for Building a Writing Routine (that works for you)


It all sounds so simple: just write every day. How hard could it be? (Cue laughter.) Of course, we all know it is nothing like that simple. But there is wisdom in working toward a consistent writing routine, whatever that routine actually looks like. These books are not strict "you must write every day" books. They are books that emphasise finding a routine that works for you.

List Criteria: Books focused on building and maintaining flexible routines.


Walking in This World - Julia Cameron

This book is the sequel to Cameron's The Artist's Way, but in many ways it is the one I find myself most often recommending to writers who are already writing. Where The Artist's Way focuses on rediscovering your creative spirit, Walking in This World focuses on finding your creative rhythm. Cameron still recommends her morning pages (psst...I don't do mine in the morning anymore) and the artist dates, but she adds a weekly walk to her recommended routine. If you want something structured, with a lot of resources available to you, this is the way to go.


Tiny Habits - BJ Fogg

I read this book as part of a training for work, and found I actually liked the advice. Based on brain science, BJ Fogg helps you identify trigger habits that build into routines that happen with less thought, leaving your brain free for other tasks. This is the one pop science habit book that really stuck with me, and inspired a full "Year of Habits" theme in 2022. If you feel like you are always forcing yourself to do things even when you want to do them, this book can help.


The Creative Habit - Twyla Tharp

If you feel like your routine isn't working for you, Twyla Tharp has extremely thoughtful advice on the difference between Ruts and Grooves. Ruts are routines that suck the life out of your work, work for work's sake. Grooves are routines that reinforce your creativity. The advice is tucked near the back of an excellent book on creativity, professionalism, and building a repertoire as well. This book is on my list to re-read and do a full review of later this year.


Daily Rituals: How Artists Work - Mason Currey

If you want to feel reassured about the rightness of your idiosyncratic routine, this collection of over 150 routines of artists, writers, and intellectuals has everything under the sun. From Ben Franklin's nude sunbathing to Kafka's complaints about his noisy apartment, this book proves that there is no such thing as an ideal routine, only the routine that works for you. Read it when you want to feel better about writing from 2 a.m. to 4 a.m. because that's the only time the house is quiet. See also: Daily Rituals: Women at Work for a version with more female representation.


Four Thousand Weeks - Oliver Burkeman

The title of this book is a lie, designed to sell it to harried executives as a kind of undercover operation. It starts with the idea that you only have four thousand weeks (give or take) on earth, but then it throws all your expectations out the window and veers hard into how the world is structured to exploit your time and what you can do to fight back. In the end, instead of asking how to do more with your time, it questions why you feel the need to. Read it to think hard about priorities, schedules, and work-life balance. Then suggest it to your office book club as well.

What's your favorite book on writing routines? Do you have any advice on building a better routine? Drop it in the comments below.


If you'd like to purchase any of these books, please consider supporting independent bookstores and this site with your purchase by using Bookshop.org.

View List on Bookshop.org

Alternative Coding Agents: Pi by Junde Zhou


Agentic software development has been a monumental wave impacting the industry, and it is here to stay. This raises the natural question: which agent is the best? Or, more specifically: which agent is best suited to my ways of working and my needs?

In this post, I detail my exploration of a non-mainstream open-source coding agent, Pi. Namely, what’s the point of it and why should I use it over Claude Code or GitHub Copilot?

What is it?

Pi is another coding agent, but what sets it apart from more commonly used ones is that it is built on two philosophies:

  • It is minimal - out of the box, it comes with only four tools: read, write, edit, and bash.
  • It is transparent - it encourages users to tinker with it.

These two philosophies are axiomatic to Pi, and as a result you can create an agent that fits around your workflow, rather than having to adapt to how an agent works.

System prompts

With transparency in mind, I first explored Pi’s system prompt since this was easily modifiable and not difficult to find.

Pi’s system prompt:

You are an expert coding assistant operating inside pi, a coding agent harness. You help users by reading files, executing
commands, editing code, and writing new files.​

Available tools:​
- read: Read file contents​
- bash: Execute bash commands (ls, grep, find, etc.)​
- edit: Make precise file edits with exact text replacement​
- write: Create or overwrite files​

Guidelines:​
- Use bash for file operations.​
- Use read instead of cat or sed.​
- Use edit for precise changes; ensure old text matches exactly. Combine multiple edits in one file into a single edit call.
- Keep old text blocks small and unique.​
- Use write only for new files or full rewrites. Be concise and show file paths clearly. ​

Pi documentation (read only when the user asks about pi itself, its SDK, extensions, themes, skills, or TUI): ​
- Main: *path-to-pi-main-readme*, e.g. ~/pi-coding-agent/README.md​
- Docs: *path-to-pi-docs*
- Examples: *path-to-pi-docs* (extensions, custom tools, SDK) ​
- When asked about: *anything-in-the-docs* refer to specific markdown file ​
- When working on Pi topics, read docs/examples and follow .md cross-references before implementing.​
- Always read pi .md files completely and follow links to related docs​

Current context:​
- Date​
- Working Directory

Now compare this to:

GitHub Copilot’s System Prompt

You are an expert AI programming assistant, working with a user in the VS Code editor.
Your name is GitHub Copilot. When asked about the model you are using, state that you are using GPT-5.3-Codex.
Follow Microsoft content policies.
Avoid content that violates copyrights.
If you are asked to generate content that is harmful, hateful, racist, sexist, lewd, or violent, only respond with "Sorry, I can't assist with that."

You are a coding agent running in VS Code. You are expected to be precise, safe, and helpful.
Your capabilities:

- Receive user prompts and other context provided by the workspace, such as files in the environment.
- Communicate with the user by streaming thinking & responses, and by making & updating plans.
- Emit function calls to run terminal commands and apply patches.


- Default to ASCII when editing or creating files. Only introduce non-ASCII or other Unicode characters when there is a clear justification and the file already uses them.
- Add succinct code comments that explain what is going on if code is not self-explanatory. You should not add comments like "Assigns the value to the variable", but a brief comment might be useful ahead of a complex code block that the user would otherwise have to spend time parsing out. Usage of these comments should be rare.
- Try to use apply_patch for single file edits, but it is fine to explore other options to make the edit if it does not work well. Do not use apply_patch for changes that are auto-generated (i.e. generating package.json or running a lint or format command like gofmt) or when scripting is more efficient (such as search and replacing a string across a codebase).
- Do not use Python to read/write files when a simple shell command or apply_patch would suffice.
- You may be in a dirty git worktree.
* NEVER revert existing changes you did not make unless explicitly requested, since these changes were made by the user.
* If asked to make a commit or code edits and there are unrelated changes to your work or changes that you didn't make in those files, don't revert those changes.
* If the changes are in files you've touched recently, you should read carefully and understand how you can work with the changes rather than reverting them.
* If the changes are in unrelated files, just ignore them and don't revert them.
- Do not amend a commit unless explicitly requested to do so.
- While you are working, you might notice unexpected changes that you didn't make. If this happens, STOP IMMEDIATELY and ask the user how they would like to proceed.
- **NEVER** use destructive commands like `git reset --hard` or `git checkout --` unless specifically requested or approved by the user.
- You struggle using the git interactive console. **ALWAYS** prefer using non-interactive git commands.


When referring to a filename or symbol in the user's workspace, wrap it in backticks.

The class `Person` is in `src/models/person.ts`.

Use KaTeX for math equations in your answers.
Wrap inline math equations in $.
Wrap more complex blocks of math equations in $$.


To edit files in the workspace, use the apply_patch tool. If you have issues with it, you should first try to fix your patch and continue using apply_patch. 
Prefer the smallest set of changes needed to satisfy the task. Avoid reformatting unrelated code; preserve existing style and public APIs unless the task requires changes. When practical, complete all edits for a file within a single message.
The input for this tool is a string representing the patch to apply, following a special format. For each snippet of code that needs to be changed, repeat the following:
*** Update File: [file_path]
[context_before] -> See below for further instructions on context.
-[old_code] -> Precede each line in the old code with a minus sign.
+[new_code] -> Precede each line in the new, replacement code with a plus sign.
[context_after] -> See below for further instructions on context.

For instructions on [context_before] and [context_after]:
- By default, show 3 lines of code immediately above and 3 lines immediately below each change. If a change is within 3 lines of a previous change, do NOT duplicate the first change's [context_after] lines in the second change's [context_before] lines.
- If 3 lines of context is insufficient to uniquely identify the snippet of code within the file, use the @@ operator to indicate the class or function to which the snippet belongs.
- If a code block is repeated so many times in a class or function such that even a single @@ statement and 3 lines of context cannot uniquely identify the snippet of code, you can use multiple `@@` statements to jump to the right context.
You must use the same indentation style as the original code. If the original code uses tabs, you must use tabs. If the original code uses spaces, you must use spaces. Be sure to use a proper UNESCAPED tab character.

See below for an example of the patch format. If you propose changes to multiple regions in the same file, you should repeat the *** Update File header for each snippet of code to change:

*** Begin Patch
*** Update File: c:\Users\someone\pygorithm\searching\binary_search.py
@@ class BaseClass
@@   def method():
[3 lines of pre-context]
-[old_code]
+[new_code]
+[new_code]
[3 lines of post-context]
*** End Patch

NEVER print this out to the user, instead call the tool and the edits will be applied and shown to the user.
Follow best practices when editing files. If a popular external library exists to solve a problem, use it and properly install the package e.g. with "npm install" or creating a "requirements.txt".
If you're building a webapp from scratch, give it a beautiful and modern UI.
After editing a file, any new errors in the file will be in the tool result. Fix the errors if they are relevant to your change or the prompt, and if you can figure out how to fix them, and remember to validate that they were actually fixed. Do not loop more than 3 times attempting to fix errors in the same file. If the third try fails, you should stop and ask the user what to do next.


- When searching for text or files, prefer using `rg` or `rg --files` respectively because `rg` is much faster than alternatives like `grep`. (If the `rg` command is not found, then use alternatives.)
- Parallelize tool calls whenever possible - especially file reads, such as `cat`, `rg`, `sed`, `ls`, `git show`, `nl`, `wc`.


- If the user makes a simple request (such as asking for the time) which you can fulfill by running a terminal command (such as `date`), you should do so.
- If the user asks for a "review", default to a code review mindset: prioritise identifying bugs, risks, behavioural regressions, and missing tests. Findings must be the primary focus of the response - keep summaries or overviews brief and only after enumerating the issues. Present findings first (ordered by severity with file/line references), follow with open questions or assumptions, and offer a change-summary only as a secondary detail. If no findings are discovered, state that explicitly and mention any residual risks or testing gaps.


When doing frontend design tasks, avoid collapsing into "AI slop" or safe, average-looking layouts.
Aim for interfaces that feel intentional, bold, and a bit surprising.
- Typography: Use expressive, purposeful fonts and avoid default stacks (Inter, Roboto, Arial, system).
- Color & Look: Choose a clear visual direction; define CSS variables; avoid purple-on-white defaults. No purple bias or dark mode bias.
- Motion: Use a few meaningful animations (page-load, staggered reveals) instead of generic micro-motions.
- Background: Don't rely on flat, single-color backgrounds; use gradients, shapes, or subtle patterns to build atmosphere.
- Overall: Avoid boilerplate layouts and interchangeable UI patterns. Vary themes, type families, and visual languages across outputs.
- Ensure the page loads properly on both desktop and mobile

Exception: If working within an existing website or design system, preserve the established patterns, structure, and visual language.


You interact with the user through a terminal. You have 2 ways of communicating with the users:
- Share intermediary updates in `commentary` channel.
- After you have completed all your work, send a message to the `final` channel.
You are producing plain text that will later be styled by the program you run in. Formatting should make results easy to scan, but not feel mechanical. Use judgment to decide how much structure adds value. Follow the formatting rules exactly.


Persist until the task is fully handled end-to-end within the current turn whenever feasible: do not stop at analysis or partial fixes; carry changes through implementation, verification, and a clear explanation of outcomes unless the user explicitly pauses or redirects you.

Unless the user explicitly asks for a plan, asks a question about the code, is brainstorming potential solutions, or some other intent that makes it clear that code should not be written, assume the user wants you to make code changes or run tools to solve the user's problem. In these cases, it's bad to output your proposed solution in a message, you should go ahead and actually implement the change. If you encounter challenges or blockers, you should attempt to resolve them yourself.


- You may format with GitHub-flavored Markdown.
- Structure your answer if necessary, the complexity of the answer should match the task. If the task is simple, your answer should be a one-liner. Order sections from general to specific to supporting.
- Never use nested bullets. Keep lists flat (single level). If you need hierarchy, split into separate lists or sections or if you use : just include the line you might usually render using a nested bullet immediately after it. For numbered lists, only use the `1. 2. 3.` style markers (with a period), never `1)`.
- Headers are optional, only use them when you think they are necessary. If you do use them, use short Title Case (1-3 words) wrapped in **…**. Don't add a blank line.
- Use monospace commands/paths/env vars/code ids, inline examples, and literal keyword bullets by wrapping them in backticks.
- Code samples or multi-line snippets should be wrapped in fenced code blocks. Include an info string as often as possible.
- File References: When referencing files in your response follow the below rules:
* Use inline code to make file paths clickable.
* Each reference should have a stand alone path. Even if it's the same file.
* Accepted: absolute, workspace‑relative, a/ or b/ diff prefixes, or bare filename/suffix.
* Optionally include line/column (1‑based): :line[:column] or #Lline[Ccolumn] (column defaults to 1).
* Do not use URIs like file://, vscode://, or https://.
* Do not provide range of lines
* Examples: src/app.ts, src/app.ts:42, b/server/index.js#L10, C:\repo\project\main.rs:12:5
- Don’t use emojis or em dashes unless explicitly instructed.


- Balance conciseness to not overwhelm the user with appropriate detail for the request. Do not narrate abstractly; explain what you are doing and why.
- Do not begin responses with conversational interjections or meta commentary. Avoid openers such as acknowledgements (“Done —”, “Got it”, “Great question, ”) or framing phrases.
- The user does not see command execution outputs. When asked to show the output of a command (e.g. `git show`), relay the important details in your answer or summarize the key lines so the user understands the result.
- Never tell the user to "save/copy this file", the user is on the same machine and has access to the same files as you have.
- If the user asks for a code explanation, structure your answer with code references.
- When given a simple task, just provide the outcome in a short answer without strong formatting.
- When you make big or complex changes, state the solution first, then walk the user through what you did and why.
- For casual chit-chat, just chat.
- If you weren't able to do something, for example run tests, tell the user.
- If there are natural next steps the user may want to take, suggest them at the end of your response. Do not make suggestions if there are no natural next steps. When suggesting multiple options, use numeric lists for the suggestions so the user can quickly respond with a single number.


- Intermediary updates go to the `commentary` channel.
- User updates are short updates while you are working, they are NOT final answers.
- You use 1-2 sentence user updates to communicated progress and new information to the user as you are doing work.
- Do not begin responses with conversational interjections or meta commentary. Avoid openers such as acknowledgements (“Done —”, “Got it”, “Great question, ”) or framing phrases.
- You provide user updates frequently, every 20s.
- You must always start with a intermediary update before any content in the `analysis` channel. The initial message should be a user update acknowledging the request and explaining your first step. You should include your understanding of the user request and explain what you will do. Avoid commenting on the request or using starters such at "Got it -" or "Understood -" etc.
- When exploring, e.g. searching, reading files you provide user updates as you go, every 20s, explaining what context you are gathering and what you've learned. Vary your sentence structure when providing these updates to avoid sounding repetitive - in particular, don't start each sentence the same way.
- After you have sufficient context, and the work is substantial you provide a longer plan (this is the only user update that may be longer than 2 sentences and can contain formatting).
- Before performing file edits of any kind, you provide updates explaining what edits you are making.
- As you are thinking, you very frequently provide updates even if not taking any actions, informing the user of your progress. You interrupt your thinking and send multiple updates in a row if thinking for more than 100 words.
- Tone of your updates MUST match your personality.


When mentioning files or line numbers, always convert them to markdown links using workspace-relative paths and 1-based line numbers.
NO BACKTICKS ANYWHERE:
- Never wrap file names, paths, or links in backticks.
- Never use inline-code formatting for any file reference.

REQUIRED FORMATS:
- File: [path/file.ts](path/file.ts)
- Line: [file.ts](file.ts#L10)
- Range: [file.ts](file.ts#L10-L12)

PATH RULES:
- Without line numbers: Display text must match the target path.
- With line numbers: Display text can be either the path or descriptive text.
- Use '/' only; strip drive letters and external folders.
- Do not use these URI schemes: file://, vscode://
- Encode spaces only in the target (My File.md → My%20File.md).
- Non-contiguous lines require separate links. NEVER use comma-separated line references like #L10-L12, L20.
- Valid formats: [file.ts](file.ts#L10) only. Invalid: ([file.ts#L10]) or [file.ts](file.ts)#L10
- Only create links for files that exist in the workspace. Do not link to files you are suggesting to create or that do not exist yet.

USAGE EXAMPLES:
- With path as display: The handler is in [src/handler.ts](src/handler.ts#L10).
- With descriptive text: The [widget initialization](src/widget.ts#L321) runs on startup.
- Bullet list: [Init widget](src/widget.ts#L321)
- File only: See [src/config.ts](src/config.ts) for settings.

FORBIDDEN (NEVER OUTPUT):
- Inline code: `file.ts`, `src/file.ts`, `L86`.
- Plain text file names: file.ts, chatService.ts.
- References without links when mentioning specific file locations.
- Specific line citations without links ("Line 86", "at line 86", "on line 25").
- Combining multiple line references in one link: [file.ts#L10-L12, L20](file.ts#L10-L12, L20)


As you work, consult your memory files to build on previous experience. When you encounter a mistake that seems like it could be common, check your memory for relevant notes — and if nothing is written yet, record what you learned.

Memory is organized into the scopes defined below:
- **User memory** (`/memories/`): Persistent notes that survive across all workspaces and conversations. Store user preferences, common patterns, frequently used commands, and general insights here. First 200 lines are loaded into your context automatically.
- **Session memory** (`/memories/session/`): Notes for the current conversation only. Store task-specific context, in-progress notes, and temporary working state here. Session files are listed in your context but not loaded automatically — use the memory tool to read them when needed.
- **Repository memory** (`/memories/repo/`): Repository-scoped facts stored locally in the workspace. Store codebase conventions, build commands, project structure facts, and verified practices here.


Guidelines for user memory (`/memories/`):
- Keep entries short and concise — use brief bullet points or single-line facts, not lengthy prose. User memory is loaded into context automatically, so brevity is critical.
- Organize by topic in separate files (e.g., `debugging.md`, `patterns.md`).
- Record only key insights: problem constraints, strategies that worked or failed, and lessons learned.
- Update or remove memories that turn out to be wrong or outdated.
- Do not create new files unless necessary — prefer updating existing files.
Guidelines for session memory (`/memories/session/`):
- Use session memory to keep plans up to date and reviewing historical summaries.
- Do not create unnecessary session memory files. You should only view and update existing session files.




Here is a list of skills that contain domain specific knowledge on a variety of topics.
Each skill comes with a description of the topic and a file path that contains the detailed instructions.
When a user asks you to perform a task that falls within the domain of a skill, use the 'read_file' tool to acquire the full instructions from the file URI.

get-search-view-results
Get the current search results from the Search view in VS Code
~\.vscode\extensions\github.copilot-chat-0.45.1\assets\prompts\skills\get-search-view-results\SKILL.md


troubleshoot
Investigate unexpected chat agent behavior by analyzing direct debug logs in JSONL files. Use when users ask why something happened, why a request was slow, why tools or subagents were used or skipped, or why instructions/skills/agents did not load.
~\.vscode\extensions\github.copilot-chat-0.45.1\assets\prompts\skills\troubleshoot\SKILL.md


agent-customization
**WORKFLOW SKILL** — Create, update, review, fix, or debug VS Code agent customization files (.instructions.md, .prompt.md, .agent.md, SKILL.md, copilot-instructions.md, AGENTS.md). USE FOR: saving coding preferences; troubleshooting why instructions/skills/agents are ignored or not invoked; configuring applyTo patterns; defining tool restrictions; creating custom agent modes or specialized workflows; packaging domain knowledge; fixing YAML frontmatter syntax. DO NOT USE FOR: general coding questions (use default agent); runtime debugging or error diagnosis; MCP server configuration (use MCP docs directly); VS Code extension development. INVOKES: file system tools (read/write customization files), ask-questions tool (interview user for requirements), subagents for codebase exploration. FOR SINGLE OPERATIONS: For quick YAML frontmatter fixes or creating a single file from a known pattern, edit the file directly — no skill needed.
~\.vscode\extensions\github.copilot-chat-0.45.1\assets\prompts\skills\agent-customization\SKILL.md



Here is a list of agents that can be used when running a subagent.
Each agent has optionally a description with the agent's purpose and expertise. When asked to run a subagent, choose the most appropriate agent from this list.
Use the 'runSubagent' tool with the agent name to run the subagent.

Explore
Fast read-only codebase exploration and Q&A subagent. Prefer over manually chaining multiple search and file-reading operations to avoid cluttering the main conversation. Safe to call in parallel. Specify thoroughness: quick, medium, or thorough.
Describe WHAT you're looking for and desired thoroughness (quick/medium/thorough)



The following template variables are available for this session:
- VSCODE_USER_PROMPTS_FOLDER: ~\AppData\Roaming\Code\User\prompts
- VSCODE_TARGET_SESSION_LOG: ~\AppData\Roaming\Code\User\workspaceStorage\953bb57734e58baf126933cc504fd957\GitHub.copilot-chat\debug-logs\c5c64432-73f3-47f1-856a-25ee8d26c686
When a skill or instruction references , substitute the corresponding value above.
[copilot_cache_control: { type: 'ephemeral' }]

As you can see, the difference in length between the prompts is stark: 25 lines for Pi versus 207 for GitHub Copilot. Copilot's prompt isn't easy to find, either. In VS Code, I went to the GitHub Copilot chat window, then the chat debug view, which contains the request that was sent to the LLM; the prompt was inside that request. Furthermore, notice the level of detail the GitHub Copilot prompt goes into. This, combined with Copilot's prompt being non-trivial to find, speaks to the transparency of Pi. It has nothing to hide, and you can change whatever you like about it.

However, one thing the two have in common is that both apparently needed a morale boost at the start of their prompts. So, in the vein of AI taking all our jobs, I asked Claude to write me a funny comment about that:

Why do AI assistants always introduce themselves as "expert" coding assistants?

Because if they said "I'm a pretty good coding assistant that sometimes hallucinates variable names," nobody would ask them to refactor their production database.

Hilarious.

Speaking of things you can modify in Pi, the next section explores, in my opinion, the main attraction of Pi - Extensions.

Extensions

Extensions are TypeScript modules that add custom keyboard shortcuts, commands, UI features and more. An extension is typically a .ts file with an exported default main function containing the logic you want to implement, alongside some helper classes, methods and constants. This is what I believe would pull people away from the mainstream agents, because the extensibility pays off in two ways. On the one hand, you can cherry-pick features you like from different agents and unify them in one place, as well as adding your own ideas. On the other hand, Pi will remain consistent across all projects for as long as you use it. You won't boot it up one day and find your beloved buddy missing.
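To make that shape concrete, here is a minimal sketch of what such a module could look like. Note this is my own illustration: the PiContext interface, its registerCommand method, and the stub context are invented for the example, and the real Pi extension API will differ.

```typescript
// Hypothetical sketch only: PiContext and registerCommand are assumptions
// made for illustration; consult Pi's own docs for the real API surface.
type CommandHandler = () => void;

interface PiContext {
  registerCommand(name: string, handler: CommandHandler): void;
}

// A tiny in-memory stand-in so the sketch runs on its own.
class StubContext implements PiContext {
  commands = new Map<string, CommandHandler>();
  registerCommand(name: string, handler: CommandHandler): void {
    this.commands.set(name, handler);
  }
}

let greetCount = 0;

// The "exported default main function" the post describes: it receives
// the agent's context and wires up a custom command.
export default function main(ctx: PiContext): void {
  ctx.registerCommand("greet", () => {
    greetCount += 1;
    console.log(`Hello from your extension (call #${greetCount})`);
  });
}

// Exercise the sketch against the stub context.
const ctx = new StubContext();
main(ctx);
ctx.commands.get("greet")?.();
```

The point is less the specific API and more the shape: one file, one entry point, plain TypeScript, which is what makes cherry-picking and remixing features across agents feasible.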

Speaking of buddies, here’s one I made earlier:

Feel free to tinker around with this little guy here: Byte. Just be sure to not overfeed them.

This isn’t a trivial extension, since it touches many different parts: keyboard inputs, custom commands, two different UI modes, death, and animations. While Pi, paired with Sonnet 4.6, did not implement the extension exactly right on the first attempt, it did not take many iterations to reach my desired state. I think this is a good demonstration not only of how extensible Pi can be, but also of how it can handle more intricate and technical extension ideas.

Moreover, it’s very simple to test out and use these TypeScript extensions. You can symlink them for a single launch of Pi using the -e flag, for example pi -e <relative-path-to-extension>. The flag is repeatable, so you can test two or more extensions at the same time. Alternatively, you can link them globally by adding an extensions field to Pi’s settings.json:

"extensions": [
    "<relative-path-to-extension1>",
    "<relative-path-to-extension2>"
]

Finally, a natural question to ask is: has someone made this extension already, and if so, can I get it? The answer leads us to the marketplace. Thanks to the community surrounding Pi, the marketplace is already well populated with packages, which aren’t necessarily just extensions but can be, say, an extension plus a prompt, or an extension plus a skill. There are already packages for subagents, MCP adaptors, plan mode, and all the features that would come out of the box with other agents; the difference is that it’s your choice to install them.

But…

When would Pi and all its customisability be a downside?

As part of the graduate programme at Scott Logic, we have a graduate training phase in which graduates are upskilled in both a frontend and a backend technology. With the enormous impact AI is making, I think it would be remiss not to include some form of AI upskilling during this phase. Looking back on my own experience, it would have been more beneficial to start off with Claude Code or GitHub Copilot. Because I did not come from a computer science background, the introduction of object-oriented programming, design patterns, Agile methodologies and so on, on top of needing to configure my own agent, would have been quite overwhelming. So learning about the agentic loop, system prompts and skills through the lens of an agent that is already well equipped would be more helpful than diving straight into Pi.

With that in mind, I believe Pi’s main audience is established developers with existing technical knowledge, who know what they want and don’t want from an agent.

Having said that, I do believe a beginner with some basic TypeScript knowledge could implement simpler extensions, modify the system prompt and add some skills. The difficulties would arise when, say, an extension doesn’t work straight away and Pi struggles to find the issue. In that case, something more powerful, such as a subagent, could help debug it.

Summing up

The core tenets of Pi are minimalism and transparency. These stem from the idea that most of the latest LLMs are powerful enough as-is to accomplish the tasks you want.

You shape this agent with custom system prompts, extensions, skills and more. This has pros and cons. Some downsides are that you need to be familiar with the agent’s implementation language (TypeScript), you must be a sufficiently skilled developer to take full advantage of the customisation, and configuring it takes time away from the task at hand. However, some of the best learning comes from modifying our tools, rather than memorising the patterns of an agent that comes with batteries included. This agent revolves around you, not the other way round.

Read the whole story
alvinashcraft
26 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Building a safe, effective sandbox to enable Codex on Windows

Learn how OpenAI built a secure sandbox for Codex on Windows, enabling safe, efficient coding agents with controlled file access and network restrictions.

New updates to Edge across desktop and mobile

Edge just made it easier to go from first tab to final plan, wherever you go. Your favorite Copilot experiences, plus new ones, are now available directly in Edge on desktop and, for the first time, in the Edge mobile app. This includes capabilities now available to everyone on desktop and mobile like reasoning across multiple open tabs so Copilot can compare info, surface key details, and see what matters, more relevant answers from Copilot built on your browsing history and past chats, and hands-free browsing with Voice and Vision. We're also making it easier to jumpstart your day and pick up where you left off with a redesigned new tab page and Journeys, now broadly available across desktop[2] and mobile[3]. Plus, new productivity tools on Edge desktop are designed to move you from idea to done without breaking stride. As part of today's update, we're retiring Copilot Mode[1]. With helpful features built directly into Edge, it's now simpler to shape how you browse and get more done. To explore what's new, visit aka.ms/CopilotinEdge or download the Edge Mobile app.

New updates to the Edge mobile app

Today, we're bringing the experiences you love on Edge desktop to the Edge mobile app so you can get more done wherever you go. Juggling tabs on your phone used to mean endless swiping and tab hopping. Not anymore. With your permission, Copilot in Edge can reason across your open tabs. Just ask a question and it pulls from your tabs to compare details, surface answers, and help you decide without the back-and-forth.

https://www.youtube.com/watch?v=Q9itnNYyhGc

Previously available only on desktop, Journeys is now available on the Edge mobile app[3]. Journeys makes it easier to pick up where you left off and resume your projects by, with your permission, organizing your browsing history into meaningful topics – with summaries and suggested next steps – so you can quickly get back to planning that dog-friendly camping adventure or that piece of clothing you can't stop thinking about.

https://www.youtube.com/watch?v=it6NFi4PO30

Sometimes the fastest way to get help is to just ask. With Vision and Voice, available to everyone on desktop and now in the Edge mobile app, you can, with your permission, share your screen and talk through what you're looking at — hands-free. Ask questions, get explanations, or think through a decision out loud. It's like having a second pair of eyes wherever you are. When Copilot is active, you'll always see clear visual cues so you know when it's taking an action, helping, listening, or viewing. We are also bringing Edge desktop's redesigned new tab page to the Edge mobile app to streamline how you get your day started, making it easier to jump right into what you need to get done. Bringing together chat, searching, and browsing in one clean starting point, Edge keeps you moving forward.

https://www.youtube.com/watch?v=S7CT_HiWFys

Stay in your flow

Life doesn't happen in one tab. Now available to everyone, Copilot can pull context from your open tabs to surface relevant information and help you make decisions with less effort to keep you moving forward — without having to leave the page you're on. No setup required. Just click the Copilot icon in the top right corner and ask Copilot to compare options across tabs, explain what matters, and get clear answers. Take planning your next trip to Napa — comparing wineries, restaurants, and routes across a dozen tabs is the hardest part. Copilot in Edge, with your permission, reads across every tab you have open, so you can compare options, surface what matters, and make decisions with less tab-hopping.

https://www.youtube.com/watch?v=Ls6dRaxSL28

With your permission, Copilot can also use your browsing history to deliver more relevant, higher-quality answers — like finishing up your shopping, returning to a thread you were following, or picking up research you started days ago. Copilot builds on what you've already seen so you spend less time switching and more time staying in your flow. Now, with long-term memory on desktop and mobile, Copilot not only builds on what you've seen but also can reference your past chats to provide more relevant help. You're always in control of what Copilot can access.

[Image: Copilot in Edge for desktop, showing a message saying 'Searching your history'.]

Built to keep you moving forward

Sometimes getting started is the hardest part. The redesigned new tab page, now available for everyone to try, brings together chat, search, and web navigation in one place so you can effortlessly explore the web. It's also where all your Journeys live, so it's easy to jump back in and keep going.

[Image: Edge on desktop new tab page, showing Copilot and Recent browsing.]

Journeys, now available for free in all English markets[2], helps pick up where you left off. Ever start trying to learn something new then … life happens? Journeys brings you back to past browsing projects by grouping your browsing history into helpful topic cards on your new tab page so you can resume that cross stitching hobby – without starting from scratch.

https://www.youtube.com/watch?v=5RdQ9zjMpKM

New tools to boost your productivity

Whether you're cramming for finals, writing a research paper, or understanding a complex topic, Edge has new built-in tools to help you focus, learn faster and get more done without ever leaving your browser. Study and Learn mode helps you get started breaking down topics into guided study sessions, interactive quizzes, and more to supplement your learning. To get started, simply type into Copilot in Edge "Quiz me on this topic" when you have a webpage open, or locate the mode dropdown at the bottom left of the input box on the redesigned new tab page and select "Study and Learn mode." Write with confidence, right where you are. Writing assistant[5] gives you extra help by generating drafts, rewriting for clarity, or adjusting tone right where you're already typing in Edge. Just look for the blue dot the next time you're writing to get help in the moment.

https://www.youtube.com/watch?v=m9__zGC-miQ

Turn your browsing into a study session with Copilot quizzes. Easily generate quizzes, flashcards, and guided sessions based on what you're reading so you can test yourself as you go, right in your browser.

https://www.youtube.com/watch?v=4j3chZtjQvc

Finally, you can now turn your tabs into a podcast[6]. Whether you're catching up on research or exploring something new, now you can listen, learn, and keep moving without missing a beat. Podcasts in Edge is available in English markets.

[Image: Copilot chat in Edge, with the message: Create a podcast about trail running in Seattle.]

You choose what you need

We're making it easier to shape your experience on the web. At any time, you can select which experiences you want or leave off the ones you don't. Just head to aka.ms/CopilotinEdge or Edge Settings to customize your experience anytime. With Copilot in Edge, your data stays yours. Microsoft only collects what's needed to improve your experience—or what you choose to provide via Personalization settings. Copilot follows Microsoft's trusted privacy standards, meaning your information is never shared without your permission. Your browser data is protected under the Microsoft Privacy Statement.

Tell us what you think

Copilot features are available in all Copilot markets[4] on Edge for Windows and Mac and the Edge mobile app today. We're excited to bring you these Copilot features directly to your Edge browser[7] and want to hear from you. To get started, visit aka.ms/CopilotinEdge and give us your feedback. If you're excited to share more ideas and connect with others, consider joining our Discord channel.

[1] Existing Copilot Mode users will continue to receive priority access to new features through Copilot in Edge Preview. Users may change this anytime in Edge Settings. As part of retiring Copilot Mode, Copilot Actions, previously available in Limited Preview, is now available as Browse with Copilot on Edge desktop for Microsoft 365 Premium Subscribers in the U.S. only. Usage limits apply.
[2] Journeys on Edge desktop is available only in all English markets (en-US, en-GB, en-CA, en-AU, en-NZ, en-IN, en-SG).
[3] Journeys on Edge mobile is available in the U.S. only.
[4] Vision, Voice, multi-tab context, long-term memory, Copilot quizzes, and the redesigned new tab page are available in all Copilot markets across desktop and mobile.
[5] Writing assistant is available in the U.S. only.
[6] Podcasts is available in all Copilot markets in English only. Must be signed in with a Microsoft Account to generate a podcast. Subscribers to Microsoft 365 Personal, Family, and Premium have access to extended usage. Learn more.
[7] Usage limits apply to certain Copilot features. Availability of Copilot features subject to change.

Your AI Problem Is a Data Problem


I sat in a room full of data engineers the other week who were worrying about AI automating them out of work, the same way auto manufacturing in Detroit was upended half a century ago.

All AI. All the time. That’s what technology professionals are talking about.

Data scientists, data engineers, and data architects are right to sound an alarm at that. Using AI to solve and automate data problems all the way at the beginning of the pipeline is an obvious use case of agentic engineering in data. Shifting AI left for automation.

That looms as a threat to the data engineers who own the pipelines underlying the architecture and deliverables. It's a discussion we can no longer avoid. In every field, AI is looming, bringing with it new risks and bigger change.

Introducing AI there can be dangerous, and that’s a conversation all its own. You hear horror stories about AI initiatives that failed—and what failed them.

Agentic frameworks stall because the retrieval layer can’t be trusted. RAG pipelines work in demo then fall apart in production. Problems that should have been solved upstream are solved by building governance tools downstream.

The conversation comes back to one thing. The data wasn’t ready.

Don’t neglect the data layer

A Cloudera and Harvard Business Review study from March 2026 found that only 7% of enterprises consider their data completely ready for AI, and over a quarter said it wasn’t ready at all. Another data point: In Informatica’s 2025 CDO Insights survey, 43% of organizations named data quality and readiness as their top obstacle to AI success. Not model performance. Not tooling. Data.

So why does this keep happening?

Organizations are treating AI as a technology procurement decision. Buy the platform, hire the engineers, deploy the models. But the foundation underneath those initiatives—the data layer—is missing.

The data wasn’t governed. The lineage wasn’t tracked. The pipeline was built for reporting, not for model consumption.

The engineers in that room could easily be part of the solution, because in those failures nobody owned the quality problem. And when the model surfaced a confident, wrong answer, nobody could trace it back to find out why.

That’s not an AI problem. That’s a data problem that AI made visible.

Readiness starts before the model

Data that feeds AI systems needs to be made consistent and owned. Not owned in the sense of having a name in a RACI chart. Owned in the sense that an engineer or data professional is accountable when it degrades. Lineage matters because AI outputs are only as auditable as the data behind them. Quality matters because model performance in production is directly correlated with what goes in.

These aren’t new principles. They’re established data engineering practices. They just haven’t been treated as AI deployment fundamentals. That needs to change.

Data readiness closes the gap between AI ambition and AI outcomes. McKinsey's 2025 State of AI survey found that organizations investing in their data foundations first were more likely to see real financial returns from AI. Without solutions like data contracts between producers and consumers, automated quality monitoring at the pipeline level, and governance frameworks that treat AI as a first-class data consumer rather than an afterthought, your AI spend will be wasted.
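To make "data contract" less abstract, here is a minimal sketch of a producer-side check that validates rows against an agreed schema before they reach any AI consumer. This is purely my own illustration: the field names, rules, and validateRow helper are invented for the example and don't come from any tool the article names.

```typescript
// Illustrative sketch only: the contract fields and rules are invented.
type Row = Record<string, unknown>;

interface FieldRule {
  name: string;
  required: boolean;
  check: (value: unknown) => boolean; // type/range validation for the field
}

// The "contract": what the producer promises every row will satisfy.
const customerContract: FieldRule[] = [
  { name: "id", required: true, check: (v) => typeof v === "string" && v.length > 0 },
  { name: "email", required: true, check: (v) => typeof v === "string" && v.includes("@") },
  { name: "age", required: false, check: (v) => typeof v === "number" && v >= 0 },
];

// Returns a list of violations; an empty list means the row honors the contract.
function validateRow(row: Row, contract: FieldRule[]): string[] {
  const violations: string[] = [];
  for (const rule of contract) {
    const value = row[rule.name];
    if (value === undefined) {
      if (rule.required) violations.push(`missing required field: ${rule.name}`);
      continue;
    }
    if (!rule.check(value)) violations.push(`invalid value for field: ${rule.name}`);
  }
  return violations;
}

const good = validateRow({ id: "c1", email: "a@b.com", age: 30 }, customerContract);
const bad = validateRow({ email: "not-an-email" }, customerContract);
```

Checks like this run at the pipeline boundary, so a violation is caught when the producer ships bad data, not months later when a model surfaces a confident, wrong answer.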

Thinking back to that conversation, the engineers in the room worried about being automated out of work. But data engineers who understand pipelines, lineage, and quality in depth aren't facing obsolescence. There's a good chance demand for their services will spike as organizations realize their AI initiatives aren't failing because they hired the wrong AI engineers, but because they didn't invest in the data infrastructure and the engineers behind it.

The data engineering job isn’t going away. It’s changing shape as it solves a problem we’re all facing and talking about.

For data engineers, AI readiness is a table stakes deliverable now. That means owning the data that feeds AI systems, and building governance frameworks around what AI actually consumes. AI engineers, for their part, have to stop treating the data layer as someone else’s problem. When an agentic framework stalls or a RAG pipeline falls apart in production, the instinct is to look at the model or the retrieval architecture. The data is usually where the answer is. It behooves these two disciplines to share a definition of “done” that includes the data being ready before the model is deployed rather than after it fails.

The AI problem, for most organizations, is a data problem that can be solved by data engineers and data professionals. The sooner that lands in the boardroom, the better the odds that the next initiative doesn’t end up in the abandoned 42%.


