Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

AI-powered documentation updates: From code diff to docs PR in one comment


TL;DR: Want to automatically update your documentation following code changes? See it in action with this interactive demo.

Every developer knows the feeling: You merge a pull request that changes how a configuration option works or refactors an API endpoint, and somewhere in a separate repository, the documentation still describes the old behavior. Weeks later, a user or fellow contributor follows those outdated instructions and hits a wall.

The problem isn't that developers don't care about documentation. It's that keeping docs in sync with code is a separate task, often in a separate repository, that's easy to forget when you're focused on shipping features and fixing bugs. For large open source projects with mountains of documentation pages, even knowing which files need updating after a code change can be a challenge.

Code-to-Docs is an open source GitHub Action that addresses this gap. It uses AI to analyze your code changes, identify which documentation files are affected, and generate updated content — all triggered by a simple comment on your pull request.

How Code-to-Docs works

Code-to-Docs integrates into your pull request workflow through two commands, typed as PR comments:

  • [review-docs]: AI examines your code diff, finds relevant documentation files, and posts a review comment with checkboxes listing each file it thinks needs an update, along with a short explanation of why.
  • [update-docs]: Takes the files you approved in the review (or runs the full pipeline if you skipped the review step) and creates a pull request in your docs repository with the generated updates.

Recommended workflow for Code-to-Docs

The two-step approach keeps humans in control while letting AI handle the discovery and drafting: AI suggests (as in Figure 1), and you decide. Nothing gets merged without human approval.

The GitHub Action analyzes the code diff and posts a review listing which documentation files need updates, with checkboxes for the reviewer to accept or reject each suggestion. This image is from the interactive demo.
Figure 1: The GitHub Action analyzes the code diff and posts a review listing which documentation files need updates, with checkboxes for the reviewer to accept or reject each suggestion. This image is from the interactive demo.

Here's an example scenario:

  1. A developer opens a pull request (PR) that changes a configuration option.
  2. A reviewer (or the developer) adds the [review-docs] comment to the PR.
  3. The GitHub Action analyzes the diff and responds with a list of probable changes: "These 3 doc files reference the config you changed. Here's what I'd update in each one…"
  4. The reviewer disables the one file that doesn't actually need changes.
  5. The reviewer adds the [update-docs] comment to the PR, and a new docs PR appears in the documentation repository with the accepted updates ready for review (see Figure 2).
AI-generated documentation updates shown side-by-side. In this example, the GitHub Action has added rate limiting details to the API docs, matching the project's existing style. This image is from the interactive demo.
Figure 2: AI-generated documentation updates shown side-by-side. In this example, the GitHub Action has added rate limiting details to the API docs, matching the project's existing style. This image is from the interactive demo.

Guiding the AI with instructions

Sometimes you want more control over what AI generates. With Code-to-Docs, you can add instructions directly in your [update-docs] comment: Global instructions on the first line, and instructions for specific files on the lines that follow (see Figure 3).

A reviewer guides the AI with global instructions on the first line and file-specific instructions on the lines that follow. This image is from the interactive demo.
Figure 3: A reviewer guides the AI with global instructions on the first line and file-specific instructions on the lines that follow. This image is from the interactive demo.

This is useful when you know exactly what part of a doc needs attention, or when you want to constrain the AI to small, focused edits rather than broad rewrites.
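For example, a hypothetical [update-docs] comment using this format might look like the following (the file paths and instructions are invented for illustration):

```text
[update-docs] Keep edits minimal and match the existing docs tone.
docs/configuration.md: Update only the default timeout value.
docs/api.md: Refresh the request example; leave the introduction as-is.
```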

Setting up Code-to-Docs

Getting started takes two steps: Add a GitHub Actions workflow file, and configure a few repository secrets.

Step 1: Add the workflow file

Create .github/workflows/docs-assistant.yml in your repository, following these setup steps.

The workflow listens for issue_comment events, and triggers when a comment contains [review-docs] or [update-docs]. It checks out the PR's code, extracts the diff, and runs the Code-to-Docs action.
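The linked setup steps spell out the exact workflow contents. As a rough sketch of the shape such a workflow takes, note that the action reference (`example-org/code-to-docs`) and the `llm-api-key` input below are placeholders, not the real names:

```yaml
name: docs-assistant

on:
  issue_comment:
    types: [created]

jobs:
  docs:
    # Only react to PR comments containing one of the trigger commands
    if: >
      github.event.issue.pull_request &&
      (contains(github.event.comment.body, '[review-docs]') ||
       contains(github.event.comment.body, '[update-docs]'))
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder reference; use the action name and inputs
      # from the project's setup documentation
      - uses: example-org/code-to-docs@v1
        with:
          llm-api-key: ${{ secrets.LLM_API_KEY }}
```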

Step 2: Configure repository secrets

In your repository's Settings > Secrets and variables > Actions panel, add the required secrets, following these configuration details.

That's it. The next time someone comments [review-docs] on a pull request, the GitHub Action kicks in.

How AI identifies and updates documentation files

Code-to-Docs uses any OpenAI-compatible LLM to analyze code changes and generate documentation updates. It works out of the box with models deployed on Red Hat OpenShift AI with vLLM, Google Gemini, OpenAI, Ollama, and any other inference server that exposes the OpenAI API format. Here's a closer look at the key components that make this work.

Smart file discovery with semantic indexing

One of the biggest challenges is figuring out which documentation files are affected by a code change. In a project with hundreds of doc files, scanning every one with AI on each run would be slow and expensive. Code-to-Docs solves this with a two-stage semantic indexing system:

  1. Folder indexes: On the first run, the GitHub Action generates AI-powered summaries of each documentation folder, describing what topics each folder covers. These summaries are cached in a .doc-index directory and committed to your main branch.
  2. File summaries: For long documentation files, the GitHub Action generates and caches concise summaries so it doesn't need to send the full file content to the AI on every run.

When a new PR triggers the GitHub Action, it first checks the folder-level indexes to narrow down which areas of the documentation are relevant (a single API call instead of scanning hundreds of files). Then it examines only the files in those relevant folders. This cuts runtime from around 20 minutes to about 4 minutes on large documentation sets. Performance improves over time as more summaries are built up, and when a documentation file is updated, its cached summary is regenerated automatically.
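The narrowing logic can be sketched in a few lines of Python. This is not the project's actual code: the function names are invented, and a crude keyword-overlap score stands in for the real LLM relevance calls.

```python
# Sketch of the two-stage narrowing described above. A keyword-overlap
# stub stands in for the real LLM relevance calls; all names are illustrative.

def score(text: str, diff: str) -> int:
    """Crude relevance proxy: count diff words that appear in a summary."""
    words = {w.lower().strip(",.()") for w in diff.split() if len(w) > 3}
    return sum(1 for w in words if w in text.lower())

def find_affected_files(diff, folder_summaries, file_summaries):
    # Stage 1: use cached folder summaries to pick candidate folders
    # (one cheap pass instead of scanning every file).
    folders = [f for f, s in folder_summaries.items() if score(s, diff) > 0]
    # Stage 2: examine only file summaries inside the candidate folders.
    return [
        path for path, s in file_summaries.items()
        if any(path.startswith(f + "/") for f in folders) and score(s, diff) > 0
    ]

folder_summaries = {
    "docs/config": "Configuration options, timeouts, and environment variables.",
    "docs/tutorials": "Step-by-step tutorials for new users.",
}
file_summaries = {
    "docs/config/timeouts.md": "Explains the request timeout configuration option.",
    "docs/config/logging.md": "Describes logging configuration.",
    "docs/tutorials/intro.md": "A first walkthrough of the project.",
}

diff = "- timeout = 30\n+ timeout = 60  # new default request timeout"
print(find_affected_files(diff, folder_summaries, file_summaries))
# → ['docs/config/timeouts.md']
```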

The AI is prompted to be conservative in its file selection. It only picks files that directly document code being changed, preferring to return few results over noisy false positives. It works with AsciiDoc, Markdown, and reStructuredText, and supports both separate documentation repositories and docs that live alongside your code in the same repo.

Get started

Code contributors are often domain experts in the code they're changing, but they may not know the documentation structure well enough to find every affected file. Code-to-Docs lowers that barrier by handling doc discovery and drafting automatically. Maintainers get a clear signal when documentation needs attention, and the review-then-update workflow means nothing happens without human approval. The AI is a tool, not a gatekeeper.

Code-to-Docs is open source. Here are the resources to help you get started:

The post AI-powered documentation updates: From code diff to docs PR in one comment appeared first on Red Hat Developer.

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Agents as Code: Treating AI Agents as Versioned Artifacts


Disclaimer: This post was originally published on Azure with AJ and has been reproduced here with permission. You can find the original post here.  

TL;DR 

Custom agents defined in your repository (.agent.md files in .github/agents/) combined with AGENTS.md for shared project context are the new standard for embedding AI into development workflows. They’re versioned, reviewed, and shared like any other code artifact. This blog covers how to structure them, what governance you gain, and practical patterns for adoption. 

Table of Contents

We version our infrastructure. We version our pipelines. We version our policies. So why are most teams still treating their AI agents like throwaway prompt snippets floating in someone’s clipboard? 

If you’ve been following the Agentic DevOps journey, you’ll know I’ve been tracking how AI agents are reshaping development workflows. From the coding agent to agentic workflows, the trajectory has been clear: agents are becoming core development infrastructure, not just fancy autocomplete. 

GitHub Copilot now supports custom agents defined directly in your repository, with full workspace awareness, tool access, and MCP connections. With over 60,000 open source projects already adopting AGENTS.md files and GitHub’s analysis of 2,500+ repositories revealing clear patterns of what works, agents as code is the next evolution of how engineering organisations standardise, govern, and scale AI-assisted workflows. 

The problem: Agents without anchors 

Let’s be real for a second. Right now, most teams are using AI coding assistants in full Wild West mode: everyone has their own prompts, their own mental model, and their own bag of tricks. The result? 

  • Inconsistent behaviour: Each developer’s AI writes code in a different style because they prompted it differently 
  • Knowledge silos: That brilliant system prompt your senior engineer crafted lives in their personal settings and walks out the door when they do 
  • No audit trail: When an AI agent breaks something, there’s no traceable instruction set to review 
  • Zero governance: Security teams have no visibility into what instructions guide AI behaviour 

 

Sound familiar? These are exactly the problems we solved with Infrastructure as Code. Before IaC languages like Terraform, Bicep, Pulumi, and CloudFormation became standard, infrastructure was undocumented folklore and snowflake configurations. We fixed it by codifying everything into versioned, reviewable artifacts. Agents deserve the same treatment. 

AGENTS.md vs Custom Agents: Know the difference 

This is the distinction most teams get wrong. Two separate mechanisms exist and they serve fundamentally different purposes. 

AGENTS.md: The operating manual 

An AGENTS.md file at the root of your repository is a passive, always-on instruction set for any AI coding agent. Think of it as a README written for agents instead of humans: build commands, code style, testing instructions, and boundaries that every agent should respect. 

# AGENTS.md 
 
## Setup Commands 
- Install deps: `pnpm install` 
- Run tests: `pnpm test` 
 
## Code Style 
- TypeScript strict mode, single quotes, no semicolons 
 
## Boundaries 
- Never commit secrets or API keys 
- Never edit `node_modules/` or `vendor/` 
- Always run lint and test before committing 

It’s cross-platform (works across GitHub Copilot, OpenAI Codex, Gemini CLI, Cursor, Windsurf, and more), nestable in monorepos, and stewarded by the Agentic AI Foundation under the Linux Foundation. 

Custom Agents (.github/agents/): The Specialist Roles 

Custom agents in .github/agents/ are active, switchable personas you invoke by name. Each .agent.md file creates a mentionable specialist (e.g., @test-agent) with its own tools, model preferences, and workflow handoffs. 

--- 
name: test-agent 
description: Writes and maintains unit tests for this project 
tools: ['editFiles', 'terminal', 'search'] 
model: Claude Sonnet 4.5 (copilot) 
---

You are a quality software engineer specialising in test coverage.

## Boundaries 
- **Always:** Write tests to `tests/`, run tests before committing 
- **Ask first:** Before adding new test dependencies 
- **Never:** Modify source code in `src/`, remove failing tests

These agents support tool restrictions per role, model selection per task, and workflow handoffs between agents, and they can be defined at the organisation level for consistency across repositories. 

How they work together 

The two approaches are complementary, not competing. AGENTS.md provides the foundational knowledge every agent inherits. Custom agents layer on specialised roles with specific tool access. 


The rule of thumb: if every agent should know it, put it in AGENTS.md. If only a specific role needs it, put it in a custom agent. 

How this changes team workflows 

Once agents become versioned artifacts, the way teams work shifts fundamentally. 

Shared capabilities replace personal prompts. A new developer clones the repo and immediately has access to the same @test-agent, @docs-agent, and @lint-agent that everyone else uses, configured precisely for this codebase. No more “ask Sarah how she prompts the AI for tests.”  

This is the same transformation we saw with CI/CD. Before pipelines as code, builds were manual, inconsistent, and locked in someone’s head. Once they lived in the repo, they became a shared, evolving team asset. 

Onboarding becomes instant. The agents encode the team’s collective knowledge about how to write tests, structure docs, and review code in this specific codebase. No wiki hunting, no Slack archaeology. 

Agents evolve through pull requests. Because definitions live in the repo, they follow the same lifecycle as any other code: feature branches for experimentation, PRs with review and discussion, git blame when something goes wrong. Governance you get for free just by putting agents where your code already lives. 

 

Governance that scales 

For engineering organisations, this is where it gets really interesting. 

Security review becomes standard practice. Every agent definition includes boundaries stating what it can and cannot do. Security teams review these in the normal PR process, just like IAM policies. The best agent files use a three-tier boundary model: 

  • Always do: Safe actions (run tests, write to designated directories) 
  • Ask first: Actions requiring judgment (schema changes, adding dependencies) 
  • Never do: Hard boundaries (commit secrets, modify production configs, delete data) 

 

Organisational standardisation becomes achievable. Platform teams maintain base agent templates that project teams inherit and customise. Distribute a @security-review-agent across 50 repositories and you’ve rolled out consistent AI-powered security review, versioned and auditable through Git. 

Compliance gets a paper trail. Auditors can review what instructions guided AI behaviour at any point in time, who approved changes, and what boundaries were in place. That’s a significant improvement over the alternative: “some developer typed something into a chat box and the AI did a thing.” 

“Agents aren’t prompts—they’re infrastructure. And infrastructure belongs in your repo.”

Lessons from 2,500+ repositories 

GitHub’s analysis of over 2,500 agents.md files provides concrete, data-backed guidance on what actually makes agents effective: 

  1. Specificity wins. “You are a test engineer who writes Vitest tests for React 18 using Testing Library, following AAA pattern” beats “You are a helpful assistant” every time. 
  2. Commands come first. Front-load executable commands (`pnpm test`, `npm run build`). Agents reference these frequently. 
  3. Code examples trump explanations. One snippet showing preferred style beats three paragraphs describing it. 
  4. Cover six core areas. Commands, testing, project structure, code style, git workflow, and boundaries. Missing any degrades performance. 
  5. Boundaries prevent disasters. The always/ask first/never model is the most effective pattern. Without boundaries, agents will eventually surprise you. 

 

What doesn’t work: vague personas, walls of prose with no examples, missing boundaries, and generic instructions that could apply to any project. 

 

Practical patterns to adopt 

Layer both for maximum effect. Start with an AGENTS.md covering build commands, code style, and universal boundaries.  

Then add focused custom agents for common tasks: @docs-agent (read-only source, writes to docs/), @test-agent (writes to tests/, runs tests), and @lint-agent (style changes only). These are safe, high-value, and easy to validate. 
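As a sketch of what one of these might look like, here is a hypothetical @docs-agent definition following the same frontmatter shape as the test-agent example earlier; the tool list and boundaries are illustrative, not prescribed by GitHub:

```markdown
---
name: docs-agent
description: Updates project documentation to match code changes
tools: ['editFiles', 'search']
---

You are a technical writer for this project.

## Boundaries
- **Always:** Write only to `docs/`, match the existing docs style
- **Ask first:** Before restructuring a page or adding new pages
- **Never:** Modify source code or configuration files
```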

Iterate through feedback. Start minimal, use the agent on real tasks, and when it stumbles, add a specific instruction to prevent it. Commit the improvement through a PR with context. This mirrors how teams refine CI/CD pipelines: the configuration is a living document, not a set-and-forget template. 

Standardise at the org and enterprise level. GitHub now supports managing custom agents and instructions centrally, not just per repository. Organisation and enterprise owners can define custom agents in a dedicated .github-private repository, making them available across all repositories within the organisation.  

Enterprise owners get additional controls, including rulesets that restrict who can edit agent profiles and the ability to delegate management to a team of AI managers. 

On top of that, organisation owners can set organisation-level custom instructions that apply to every Copilot interaction across the organisation.  

Think of this as the org-wide equivalent of a .github/copilot-instructions.md file: coding standards, security requirements, and compliance constraints that every Copilot response respects, regardless of which repository or agent is in use. 

The layering model becomes: org-level instructions (universal standards) → AGENTS.md (project-specific context) → custom agents (specialised roles). Teams can still fork and customise, but the baseline is consistent and governed. 

 

The bigger picture: Everything as code 

Agents as code is the natural continuation of a decade-long trend: 

Infrastructure as Code → Pipelines as Code → Policies as Code → Agents as Code

Each evolution followed the same pattern: take something manual and scattered across people’s heads, codify it into versioned artifacts, and gain consistency, auditability, and scalability as a result. Agents as code is the latest application of this principle. 

Wrapping up 

Treating AI agents as versioned artifacts isn’t just a nice engineering practice. It’s the foundation for scaling AI-assisted development with the same rigour we apply to infrastructure, pipelines, and policies. The tools are here.  

GitHub Copilot’s custom agents, the open AGENTS.md standard, and the patterns emerging from thousands of real-world repositories all point in the same direction: agents belong in the repo. 

Start small. Define three agents for your most common tasks. Review their boundaries. Iterate when they stumble. Commit improvements through pull requests.  

Before long, your team’s AI capabilities will be as well-governed and consistently applied as your CI/CD pipelines. 

The era of ad-hoc prompting is ending. The era of agents as code has begun.  

References 


Are we the parents in the AI Agent relationship? The drama continues


The Agent drama continues… Lately I have been having problems with GitHub Copilot and/or with the agents not “listening” to or “remembering” directives. In the last three days I have had to repeat myself multiple times: “All work must have a GitHub issue. All work must have a branch with the following format: issue <number>-description. All work must have a PR submitted and tagged with the issue. The last three issues were done on main.” I asked Copilot what we can do to fix this. After a little thinking, it came up with some agent instructions, some git hooks, and a new task (pr-metadata) as part of the CI job to validate the metadata for a PR, plus a bit more that we implemented a few hours ago. I just gave it some more work and guess what happened. You guessed it. Ding, ding, ding! All the work was done on main. I asked Copilot why.

“The changes were in main. We just went through a bunch of changes to prevent this and instruct ALL WORK TO BE DONE IN ISSUE BRANCHES and PRs. HOW DID THIS HAPPEN?”

Yes, I used all caps; I was frustrated. 😊 It responded with:

It happened because I routed and launched the work from the current checkout on main instead of enforcing issue-branch setup first. The session was on main, and I spawned Trinity/Tank with WORKTREE_MODE: false and WORKTREE_PATH: n/a, so the edits landed in the live checkout.

Root cause: coordinator failure, not an agent one. I did not enforce the branch gate before starting issue work, even though the repo policy already says work must go through issue branches and PRs. I’ve captured your directive explicitly: all work must be done in issue branches and PRs, never directly on main.

I then said:

We’ve recorded this rule multiple times and added git commit hooks and more to stop this. As a coordinator, why are you not looking at the rules and decisions first? I, and we, keep wasting time because you are not following the directions.

It responded with:

You’re right. I’m recording this as a blocking coordinator directive now, then I’ll answer the failure plainly.

Then some thinking, and then:

I confirmed the rule is already explicit in routing, and I’m adding your stronger coordinator directive now.

It was thinking for a while and then said:

📌 Captured. Coordinator must read and obey the existing rules and decisions first, especially branch-and-PR-only policy, before launching work.

Why I failed: I did not execute the preflight discipline I was supposed to. The rule was already present in routing.md (All work must be delivered via a pull request — no direct commits to main), and the team decisions already show branch/worktree-based flow. I still routed from the live main checkout instead of stopping on that rule first.

Why the protections didn’t save us: hooks and similar guards mostly protect commit/push boundaries. They do not stop a coordinator from launching edits into a dirty working tree on main before any commit exists. The mistake happened earlier in the flow: I should have refused to start issue work until we were on an issue branch or dedicated worktree.

Bottom line: this was not a missing-rule problem; it was a coordinator compliance failure.

As one of my friends and LinkedIn contacts said on this post:

Yep they often act like teenagers. They “yup” you to death, ignore your remarks, and the money keeps going out the door. Just like being a parent!

It’s kind of funny, a little accurate, and a little sad.

After this conversation, the “team” began to implement some of the changes that I wanted and submitted a PR with the changes. Guess what: the PR failed because the metadata was not correct. This is a good thing and a bad thing. The good thing is that the system is working and catching the mistakes. The bad thing is that the agents did not “remember” or know about the directives and rules that we have in place. So I asked the Coordinator, aka Scribe:

Scribe, update everyone’s charters to make them aware of the PR metadata requirements we have to avoid unnecessary failures please.

Yes, I said “please”. I am trying to be nice. It responded with:

Scribe’s on it — updating the squad charters so PR metadata rules are explicit up front.

Agent Work
📋 Scribe (claude-haiku-4.5 fast) Add PR metadata requirements to the team’s charter files to prevent repeat CI failures

Baseline guidance added: PR titles must follow <type>(#<issue>): <summary>, and PR bodies must include closing metadata like Closes #<issue> or Fixes #<issue> when applicable. Scribe also recorded the wording in .squad\decisions\inbox\scribe-pr-metadata-charters.md.
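As an illustration (not the author's actual pr-metadata task), a minimal check for that title-and-body convention might look like this; the function name and regexes are invented for the example:

```python
import re

# Illustrative check for the PR metadata convention described above:
# title of the form <type>(#<issue>): <summary>, plus a closing
# keyword like "Closes #N" or "Fixes #N" in the body.
TITLE_RE = re.compile(r"^[a-z]+\(#\d+\): .+")
CLOSES_RE = re.compile(r"\b(Closes|Fixes) #\d+\b")

def pr_metadata_ok(title: str, body: str) -> bool:
    return bool(TITLE_RE.match(title)) and bool(CLOSES_RE.search(body))

print(pr_metadata_ok("fix(#42): handle empty config", "Fixes #42"))  # True
print(pr_metadata_ok("handle empty config", "no metadata here"))     # False
```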

Wrapping up

I am not sure if this is a problem with the agent or the coordinator or my configuration, but I am leaning towards the configuration and the maturity of the system. I have been working with agents for a while now and I have seen them make mistakes, but mostly because I did not provide clear enough directives or enforce the rules consistently.

Hopefully this will help and we will not have to go through this again. I will continue to monitor this and see if it happens again. In the meantime, I will be more vigilant in making sure that all work is done in issue branches and PRs.

References


Daily Reading List – April 20, 2026 (#767)


I’m in Las Vegas this week for Google Cloud Next ’26, and spent the last two (full) days in various keynote rehearsals. Fun stuff with people I enjoy hanging out with.

[blog] The Agent Stack Bet. Important post. Have we given agents too much … agency? Addy calls out four architectural bets to make to get back in control.

[blog] How to hire people who are better than you. Great stuff if you’re trying to improve who and how you hire. Which I think is all of us!

[blog] From operational to analytical: The unified Spanner Graph and BigQuery Graph solution. I haven’t used many graph databases, but it seems convenient to have these capabilities built into database engines I already use.

[article] When Your Ambition Starts to Exhaust You. These are five helpful questions to ask yourself.

[guide] Multi-agent private networking patterns in Google Cloud. Very good content here for anyone working through multi-agent setups.

[article] ‘Tokenmaxxing’ is making developers less productive than they think. I understand why some think this is a good proxy metric for meaningful adoption internally. But it’s far from an indicator of excellence.

[blog] Designing synthetic datasets for the real world: Mechanism design and reasoning from first principles. Here’s a new Google Research project that produces better synthetic datasets with the help of AI.

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:




python-1.1.0


[1.1.0] - 2026-04-21

Added

  • agent-framework-gemini: Add GeminiChatClient (#4847)
  • agent-framework-core: Add context_providers and description to workflow.as_agent() (#4651)
  • agent-framework-core: Add experimental file history provider (#5248)
  • agent-framework-core: Add OpenAI types to the default checkpoint encoding allow list (#5297)
  • agent-framework-core: Add AgentExecutorResponse.with_text() to preserve conversation history through custom executors (#5255)
  • agent-framework-a2a: Propagate A2A metadata from Message, Artifact, Task, and event types (#5256)
  • agent-framework-core: Add finish_reason support to AgentResponse and AgentResponseUpdate (#5211)
  • agent-framework-hyperlight: Add Hyperlight CodeAct package and docs (#5185)
  • agent-framework-openai: Add search tool content support for OpenAI responses (#5302)
  • agent-framework-foundry: Add support for Foundry Toolboxes (#5346)
  • agent-framework-ag-ui: Expose forwardedProps to agents and tools via session metadata (#5264)
  • agent-framework-foundry: Add hosted agent V2 support (#5379)

Changed

  • agent-framework-azure-cosmos: [BREAKING] CosmosCheckpointStorage now uses restricted pickle deserialization by default, matching FileCheckpointStorage behavior. If your checkpoints contain application-defined types, pass them via allowed_checkpoint_types=["my_app.models:MyState"]. (#5200)
  • agent-framework-core: Improve skill name validation (#4530)
  • agent-framework-azure-cosmos: Add allowed_checkpoint_types support to CosmosCheckpointStorage for parity with FileCheckpointStorage (#5202)
  • agent-framework-core: Move InMemory history provider injection to first invocation (#5236)
  • agent-framework-github-copilot: Forward provider config to SessionConfig in GitHubCopilotAgent (#5195)
  • agent-framework-hyperlight-codeact: Flatten execute_code output (#5333)
  • dependencies: Bump pygments from 2.19.2 to 2.20.0 in /python (#4978)
  • tests: Bump misc integration retry delay to 30s (#5293)
  • tests: Improve misc integration test robustness (#5295)
  • tests: Skip hosted tools test on transient upstream MCP errors (#5296)

Fixed

  • agent-framework-core: Fix python-feature-lifecycle skill YAML frontmatter (#5226)
  • agent-framework-core: Fix HandoffBuilder dropping function-level middleware when cloning agents (#5220)
  • agent-framework-ag-ui: Fix deterministic state updates from tool results (#5201)
  • agent-framework-devui: Fix streaming memory growth and add cross-platform regression coverage (#5221)
  • agent-framework-core: Skip get_final_response in _finalize_stream when the stream has errored (#5232)
  • agent-framework-openai: Fix reasoning replay when store=False (#5250)
  • agent-framework-foundry: Handle url_citation annotations in FoundryChatClient streaming responses (#5071)
  • agent-framework-gemini: Fix Gemini client support for Gemini API and Vertex AI (#5258)
  • agent-framework-copilotstudio: Fix CopilotStudioAgent to reuse conversation ID from an existing session (#5299)

Full Changelog: python-1.0.1...python-1.1.0


Why Many Beginner Self-Taught Developers Struggle (And What to Do About It)

1 Share

Self‑taught developers often begin with the same “starter pack”: a laptop, internet access, and sheer determination. What they lack, however, is structured guidance, a defined curriculum, or any form of pedagogical support.

This absence of direction makes the journey significantly harder. Faced with an overwhelming abundance of online resources, many beginners become confused about where to start and often attempt to learn everything at once.

This is where the struggle with knowledge retention begins.

Not because they lack intelligence or effort, but because they're learning in a way that contradicts how the human brain actually works.

They dive into tutorials and courses without understanding the mechanism of the brain – that is, how the brain processes, stores, and retrieves information. As a result, much of what they learn simply doesn’t stick.

How the Brain Processes Information

So what's the connection between the brain and learning how to code, you might wonder?

The connection is direct and unavoidable.

Coding isn't learned through willpower or motivation — though both matter — or by spending countless hours watching tutorials.

It's learned through the brain’s ability to process, store, and retrieve information.

Every variable, function, data structure, or debugging pattern must pass through the brain’s cognitive systems before it becomes usable knowledge.

If your learning process doesn't align with how the brain naturally acquires and organises information, your retention will collapse, no matter how determined you are.

Now imagine you’re trying to fill a bucket with water. You keep pouring and pouring, but the bucket has tiny holes at the bottom. No matter how much effort you put in, the water keeps leaking out. You might blame yourself for not pouring fast enough, or you might try switching to a bigger jug, but the real problem isn’t your effort — it’s the bucket.

The water is the information you’re trying to learn.

The bucket is your brain’s memory system.

The holes in the bucket are the natural forgetting mechanisms of the brain: cognitive overload, limited working memory, and other constraints that make retention difficult.

If you don’t understand these mechanisms, you can pour in as much information as you want, but most of it will leak out.

Not because you’re incapable, but because you’re learning in a way that contradicts how the brain actually retains knowledge.

The Role of Academic Learning Theories

Since learning ultimately takes place in the brain, an important question is: how does the human brain acquire, organize, and apply knowledge, and why does the typical self‑taught learning process clash with these principles?

This is where academic learning theories become indispensable. These frameworks explain how the brain actually acquires, retains, and applies complex information, and they offer a scientific roadmap for learning more effectively. Without understanding these principles, self‑taught developers unintentionally work against the brain's natural architecture.

The purpose of this article is to unpack these essential learning theories and apply them directly to the beginner self‑taught developer’s journey.

By understanding how the brain processes information, beginners can structure their learning more intentionally, retain knowledge more reliably, and move toward becoming competent developers with far greater confidence and clarity.

Cognitive Load Theory (CLT)

Learning a new concept requires mental effort from the brain to process newly acquired information. This effort is known as cognitive load, a term coined by Australian educational psychologist John Sweller in 1988 during his research on how the brain acquires and retains information (Sweller, 1988).

Since then, his work has been expanded upon by other researchers. Notably, Dylan Wiliam famously tweeted in 2017 that Cognitive Load Theory (CLT) is "the single most important thing for teachers to know" (Wiliam, 2017).

You might wonder again: What does this have to do with me? As a beginner self‑taught developer, the answer is simple: you're both the teacher and the student.

So this is the most important theory you should know. In this self-tutoring journey, you're tasked with designing your own curriculum, choosing your own resources, pacing your own learning, and evaluating your own progress.

Without understanding how cognitive load affects your ability to absorb and retain information, you may unintentionally overload your brain and sabotage your own learning.

Before we get into the nitty-gritty of CLT, there are two important concepts developed by David Geary that you'll need to grasp first: "that which can be learnt" (biologically primary knowledge) and "that which can be taught" (biologically secondary knowledge) (Geary, 2007, 2008).

According to Geary (2007, 2008), "biologically primary knowledge" consists of "instinctual" skills that the brain has evolved to pick up naturally without formal schooling.

Examples include learning a first language, recognizing faces, or basic social navigation.

"Biologically secondary knowledge", on the other hand, consists of cultural and technical skills, like reading and writing, that are necessary for society but don't come naturally to the brain.

This is because we aren't "wired" to pick these up automatically. Instead, they require formal instruction and schools to pass them down.

Therefore, coding is a prime example of biologically secondary knowledge. The human brain is remarkably plastic, but it didn't evolve to interpret syntax, manage memory allocation, or debug logical loops.

These are cultural inventions, not natural instincts. Unlike learning to walk or speak your native language (which are biologically primary skills), you can't learn to code simply by “being around” computers.

Recognising that the human brain is not instinctively prepared for coding allows you to change your strategy. Once you accept that coding concepts are not “natural,” you can finally approach them with the structured, deliberate effort they require.

The second set of concepts beginner self-taught developers should know and understand are working memory, Miller's Law, chunking, long-term memory, and schemas.

Working Memory

Working memory is where thinking happens. It's the active mental workspace where you hold information while you process it. When you encounter concepts like syntax, loops, functions, or an if/elseif statement for the first time, all of that information sits inside your working memory. The problem is that working memory is extremely limited and fragile.

When you first learn to code, your working memory functions like a small mental desk where only a few items can be placed at once.

Imagine trying to assemble a piece of IKEA furniture on a tiny coffee table. If you spread out the instruction manual, the screws, the wooden panels, and the tools all at the same time, the table becomes cluttered instantly. You start losing track of which part goes where, not because you’re incapable, but because the surface you’re working on is too small to hold everything at once.

Working memory behaves the same way. When you’re learning new concepts – like arrays, loops, functions, or error handling – each idea takes up space on that mental desk. If you then overload it, the desk becomes overcrowded.

Once it exceeds its capacity, things begin to fall off, and your ability to retain collapses.

It’s not a lack of intelligence. It’s simply the natural limit of working memory.

Now, this collapse happens because you exceeded the threshold your working memory can hold. This is backed up by research showing that working memory can typically process only 5–9 pieces of information at any given time (Miller, 1956). This is known as Miller’s Law.

Miller's Law

In 1956, George Miller found that the average human can hold about seven items (plus or minus two) in working memory at once, though more recent research suggests the number is even lower, about four items (Cowan, 2001).

So imagine you encounter a tutorial that introduces the following concepts all at the same time: a Route, a Controller, a Model, a Migration, a View, a Request, Helper files, Jobs and Queues, Middleware, Roles and Permissions, and a Service Provider. If you attempt to hold all of these in your mind simultaneously, you'll inevitably hit Miller’s Wall, as your working memory becomes overloaded, and you'll likely forget the first concept long before you reach the last.

So how do you handle complex tasks if the brain can only juggle 4–9 items at once?

You use chunking — the process of grouping small pieces of information into a single, meaningful unit.

Chunking

Chunking is the brain’s strategy for compressing complexity. Instead of forcing working memory to hold a dozen unrelated items, you reorganise them into a few coherent structures. This reduces cognitive load, prevents overload, and allows you to work with far more information than your raw working‑memory limits would normally allow.

Let's consider an example:

A beginner learning Laravel might see Route, Controller, Model, Migration, and View as five separate, overwhelming items. To a beginner, each one feels like a distinct cognitive burden. But an experienced developer doesn't treat them as isolated concepts. Instead, they're understood as a single meaningful unit: the MVC pattern. Instead of holding five items in working memory, the expert holds one.

This raises an important question: how does a beginner know that these five elements belong together when they have only just encountered them?

It's crucial to emphasise that chunking isn't automatic. It depends on recognising meaningful relationships between concepts, and beginners typically lack the prior knowledge needed to perceive those relationships early on.

But as learners repeatedly encounter the same sequence during the learning process, they begin to notice consistent patterns. Over time, the brain’s natural tendency to seek structure enables them to identify which components reliably operate together, allowing these elements to gradually fuse into a single, meaningful chunk.

For example, when I first followed a Laravel e-commerce tutorial, I noticed that for every new resource the tutor created – Payment, Cart, KYC, and Contact – the same pattern was repeated: a Controller, a Model, and a View were always created together.

After encountering this sequence several times, it became clear that these components consistently belonged together as a set. Over time, I began to perceive the Controller, Model, and View not as separate elements, but as a single, integrated unit.

So beginners may not be able to chunk effectively on day one because they lack the prior knowledge needed to recognise what belongs together. But with time, and repeated encounters across different contexts, these individual pieces fuse into stable mental units stored in long‑term memory.

What feels overwhelming at first eventually becomes effortless, not because the task became simpler, but because your internal representation became more organised.

This is the power of chunking: it transforms scattered pieces of information into organised units that fit comfortably within the limits of working memory.

Without chunking, beginners drown in details. With chunking, they gain the cognitive space needed to understand, retain, and apply what they learn.

Long-term memory

Unlike working memory, long‑term memory has virtually infinite capacity. The goal of all study is to move information from the cramped working memory into the vast long‑term memory.

Here is the real secret: you don’t learn in working memory – you only process there.

True learning is the permanent change that happens in long‑term memory.

Schema

Once stored in long-term memory, information becomes part of a schema — a mental map or filing system that organizes related ideas.

For example, when you finally learn that Laravel is an MVC framework, you aren’t just memorizing three letters. You're building a schema that tells your brain: Models handle data, Views handle presentation, and Controllers handle logic.

Once a schema is built, it can be pulled into working memory as a single chunk, effectively bypassing Miller’s Law.

This is how experts think effortlessly while beginners feel overwhelmed.
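The MVC schema can be made concrete with a toy sketch. These are Python stand-ins for the Laravel pattern, not real framework classes; the class and method names are illustrative only. Once the three roles are internalised, the whole pattern can be held in working memory as one chunk:

```python
class Model:
    """Handles data: in a real app, this would query a database."""
    def all_posts(self):
        return [{"title": "Hello, MVC"}]

class View:
    """Handles presentation: turns raw data into output."""
    def render(self, posts):
        return "\n".join(post["title"] for post in posts)

class Controller:
    """Handles logic: wires the model's data into the view."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def index(self):
        return self.view.render(self.model.all_posts())

print(Controller(Model(), View()).index())  # prints: Hello, MVC
```

The point is not the code itself but the shape: a beginner sees three classes; an expert sees one chunk, "MVC", and frees up working memory accordingly.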

And this is why Garnett (2020) argues that "being competent or lacking competence in something depends entirely on how secure the retrieval of knowledge held in the schema is".

Now that the foundations of working memory, long‑term memory, schemas, and chunking are clear, we can turn to another set of concepts every self‑taught developer must understand: intrinsic load, extraneous load, and germane load. These three components make up the full structure of Cognitive Load Theory, and they determine whether learning feels manageable or overwhelming.

Intrinsic Load: The Natural Difficulty of the Task

Intrinsic load refers to the inherent complexity of the material itself. Some concepts are simply harder than others because they contain more interacting elements that must be processed at the same time.

In Laravel, understanding a simple Route has low intrinsic load.

But concepts like Dependency Injection or Polymorphic Relationships have high intrinsic load because they involve multiple layers of abstraction and interdependent ideas.

You can't change the intrinsic load of a concept, but you can manage it by breaking the idea into smaller, more digestible sub‑tasks. This is why good teaching — and good self‑teaching — always begins with simplification and sequencing.

Simplification means stripping a concept down to its essential parts so the learner isn't overwhelmed by unnecessary detail.

Sequencing means introducing parts in a logical order, where each step builds on the previous one. This helps reduce unnecessary cognitive load and allows learners to devote more mental effort (germane load) to building schemas.

It’s like meeting someone new, and they tell you their name and it happens to be your mother’s name. Instantly, your brain forms a connection. You associate this new person’s name with the strong, deeply stored memory of your mother.

Because that schema already exists in your long-term memory, the new information “attaches” to it. Later, when you try to recall the name, you don’t struggle, you simply think of your mother, and the name comes back easily.

While many believe self‑taught developers struggle because they lack immediate, reliable, personal guidance, there's actually a hidden advantage in this predicament. When a teacher explains a concept, even if they try their best to “chunk” the information, they can't truly know the student’s internal limits — how much intrinsic load the learner can handle, how quickly they can process new ideas, or how much prior knowledge they can activate.

This is where the self‑taught developer quietly shines. Because you are both the teacher and the student, you know your own cognitive limits better than anyone else. You can slow down when something feels heavy, pause when working memory is overloaded, and chunk information in a way that perfectly matches your personal capacity.

You can simplify a concept to its bare essentials and sequence it at a pace that aligns with your own understanding.

Extraneous Load: The Mental Noise

Extraneous load is the enemy of the self‑taught developer. It's the mental effort wasted on tasks that don't contribute to actual learning. This is where a self-taught developer's strength must truly shine.

A teacher in a classroom is responsible for removing any distractions that might derail a child or slow down their assimilation of knowledge. As a self-taught developer, that responsibility falls entirely on you. You must identify these distractions and eliminate them.

As a self-taught developer myself, I use specific strategies to ensure I stay focused. Before starting any course, I spend time reading the comment section to see what others have experienced. If I see complaints about low audio quality, unclear explanations, or tutorials that move too fast, I immediately abandon that course and look for one with better reviews.

Anything that might derail my progress must be removed. If you spend half of your mental energy straining to hear muffled audio or decode unclear explanations, you only have the remaining half available for understanding the logic of the code. And remember: when learning new concepts, we rely on working memory, which is fragile.

As your own “inner teacher,” you must eliminate this noise so your limited working memory can focus entirely on the material that matters.

Germane Load: The Construction Work

Germane load is the productive mental effort used to build and refine schemas — the mental structures that make future learning easier.

This is the “Aha!” moment when new information connects meaningfully to what you already know.

For example, germane load appears when you realise that a Database Migration is essentially a version‑control system for your table structure.

That insight is schema construction in action.

Teachers are often advised to help manage a child's germane load. One way they do this is by connecting the new idea being taught to an existing concept.

By doing this, they help the student build schemas: mental frameworks that organise and interpret information.

For a self-taught developer, this means instead of memorizing a new syntax in isolation, you look for a 'hook' in something you already understand.

For example, if you already know how a physical filing cabinet works, understanding Arrays or Objects in code becomes much easier.

You aren't learning from scratch – you're simply "plugging" new data into an old socket. This reduces the mental strain and makes the new knowledge stick permanently.

But this can only happen when intrinsic load is properly managed and extraneous load is removed.

It's important to note that, unlike intrinsic and extraneous load, germane load isn't an independent type of cognitive load.

Instead, it represents the portion of your working memory that remains available to handle the element interactivity associated with intrinsic load.

In other words, germane load is the mental energy you have left for learning once the unnecessary noise is stripped away.

Understanding cognitive load explains why learning can feel overwhelming in the moment, but it doesn't explain why knowledge fades after the moment has passed. For that, we turn to another foundational principle in learning science: the Ebbinghaus Forgetting Curve.

Ebbinghaus Forgetting Curve

If you remember the bucket analogy, this curve represents one of the holes at the bottom — the brain’s natural tendency to let information leak away unless it's reinforced.

In the late 19th century, Hermann Ebbinghaus discovered that human memory follows a predictable pattern of decline. After learning something new, we forget most of it astonishingly quickly — often within hours — unless the information is revisited. The forgetting curve shows that memory retention drops sharply at first and then continues to decline more slowly over time.

Studies based on Ebbinghaus’ forgetting curve found that without a conscious effort to retain newly acquired information, we lose approximately 50% of new information within 24 hours, and up to 90% within a week (Clearwater, 2024).
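A common mathematical approximation of the curve is simple exponential decay, R = e^(−t/S), where S is a "memory strength" parameter. The sketch below is illustrative only: the strength value is an assumption for demonstration, not an empirically fitted constant.

```python
import math

def retention(hours_elapsed: float, strength: float = 20.0) -> float:
    """Approximate fraction of material retained after a delay.

    Uses the common exponential model R = e^(-t/S). The strength
    parameter S is an illustrative assumption, not a fitted value.
    """
    return math.exp(-hours_elapsed / strength)

# Retention falls steeply at first, then levels off:
for hours in (0, 1, 24, 24 * 7):
    print(f"after {hours:>3} h: {retention(hours):.0%}")
```

Running this shows the characteristic shape: near-perfect recall immediately after learning, a steep drop within the first day, and very little left after a week unless the material is reviewed.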

In other words, the brain is designed to discard information that isn't reinforced.

For self‑taught developers, this has profound implications.

You may understand a Laravel controller or Spatie's roles-and-permissions package today – but if you don't revisit it, practice it, or apply it within the next few hours, your brain will naturally let it fade.

This is not a sign of weakness or lack of talent. It's simply how the human brain works.

The forgetting curve also explains why tutorials feel deceptively easy the moment you're going through them.

While watching, everything seems clear — but a week later, the same concepts feel unfamiliar.

The knowledge never made it into long‑term memory because you didn't revisit, practice, or connect it to existing schemas.

Since the human brain is designed to forget anything that isn't repeated, repetition becomes the signal that tells the brain, “This matters — keep it.”

This is why, when you meet someone for the first time and they tell you their name, you'll almost certainly forget it unless you consciously repeat it to yourself several times. If you don’t reinforce it, you end up asking — often with embarrassment — “Sorry, what was your name again?”

The same principle applies to learning code: without deliberate repetition, the brain simply lets the information fade. However, with a technique called spaced repetition, retention is significantly improved.

How the Theory of Spaced Repetition Works

Spaced repetition is a learning technique grounded in cognitive psychology that involves reviewing information at increasingly spaced intervals to strengthen long‑term memory retention.

It's based on the principle that memory decays predictably over time — as demonstrated by the Ebbinghaus Forgetting Curve — and that strategically timed reviews or repetition interrupt this decay, making the memory more durable with each repetition.

This idea is what gave birth to Anki flashcards.

Imagine you're trying to memorise the time complexity of different algorithms.

This is a classic "dry" academic topic that's easy to forget.

To understand why spaced repetition is so powerful, consider a familiar scenario. You spend Sunday night staring at a chart of Big‑O complexities for four hours. By Monday’s review, you can recall most of them. By Friday, only a few remain. Two weeks later, the entire chart has vanished from memory.

Spaced repetition reverses this process by reviewing information at the precise moment it's about to be forgotten. Instead of cramming Big‑O notation in a single session, you revisit it across expanding intervals:

  1. Day 1 (Initial Learning): You study the Big‑O chart and understand each complexity class.

  2. Day 2 (First Review): You test yourself. If you recall an item correctly, you schedule the next review three days later. If you miss it, you review it again the following day.

  3. Day 5 (Second Review): You encounter the material again. Because you still remember it, the interval expands to ten days.

  4. Day 15 (Third Review): Your memory has begun to fade, but the moment you see the prompt, the concept resurfaces. This slight struggle to retrieve the information is precisely what strengthens long‑term retention.

  5. Day 45 (Fourth Review): By now, the memory is deeply consolidated. Concepts like O(log n) feel as natural and accessible as your own phone number.

Through this process, spaced repetition transforms fragile, short‑term awareness into durable, long‑term knowledge. Each review interrupts the forgetting curve, reinforces the schema, and reduces the cognitive load required to recall the concept in the future.

For self-taught developers, spaced repetition can take many forms. You might rewrite code from memory, re‑implement a feature days later, build small variations of the same concept, or return to the concept after working on different tasks.

Every review strengthens the schema and reduces the cognitive load required to recall it. Over time, what once felt complex becomes automatic — not because the concept changed, but because your brain reorganised it into a stable, efficient structure.

As you can see, learning isn't a single event but a cycle of exposure, forgetting, and reinforcement.

Mastery comes not from seeing something once, but from returning to it until it becomes part of your cognitive architecture.

But we must be careful with repetition. Doing the same thing over and over again doesn't guarantee improvement. In fact, mindless repetition can trap you at the same level indefinitely.

This is where the theory of deliberate practice becomes essential, as it emphasises increasing the level of challenge, focusing on specific weaknesses, and actively seeking feedback so that each repetition leads to measurable improvement rather than just familiarity.

Theory of Deliberate Practice

Developed by psychologist K. Anders Ericsson, the theory of deliberate practice argues that expertise is not the result of talent but of high‑quality, intentional practice (Ericsson, 1993). This type of practice is fundamentally different from simply doing something repeatedly.

He coined the term "deliberate practice" while researching how people become experts. Studying experts from several different fields, he dismantled the myth that expert performers have unusual innate talents.

Instead, he discovered that experts attain their high performance through how they practice: it's a deliberate effort to become an expert. This effort is characterized by breaking down required skills into smaller parts and practicing these parts repeatedly.

According to Anders Ericsson, Deliberate Practice requires:

  1. Clear goals

  2. Immediate feedback

  3. Tasks that stretch your ability just beyond your comfort zone

  4. Full concentration and effort

The main tenet of Deliberate Practice is that tasks must stretch your ability just beyond your comfort zone. This is paramount to the advancement of learning.

Imagine a child being taught 1+1 every day. That child will never grow beyond basic arithmetic. Anders Ericsson calls this "Arrested Development" (Ericsson, Nandagopal and Roring, 2005). For that child to grow to become a mathematician, their knowledge must be stretched.

The takeaway for developers is a play on the DRY principle (Don’t Repeat Yourself): If you are only repeating what you already know without stretching yourself, you aren't growing. This "stretch" is the extra edge that Deliberate Practice adds to Spaced Repetition.

Building a simple to-do list, a calculator, or a weather app over and over again won't take you anywhere. You already know how to do those.

To truly grow, you must stretch yourself. Instead, try a project that integrates new ideas, like building a mini-app where the weather data affects your to-do list. For example, if the API shows it's raining, the app automatically hides outdoor tasks and calculates the time you'll save or the indoor tasks you should prioritize instead.

This forces you to handle complex logic and state management, moving you beyond simple repetition into true mastery.
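A minimal sketch of that stretch project's core logic might look like the following. The function name, the outdoor flag, and the hard-coded tasks are hypothetical; a real version would fetch live data from a weather API instead of taking a boolean:

```python
def visible_tasks(tasks, is_raining):
    """Hide outdoor tasks when it's raining; keep indoor ones visible."""
    return [title for title, outdoor in tasks if not (outdoor and is_raining)]

# Each task is (title, is_outdoor) -- sample data for illustration.
tasks = [
    ("Mow the lawn", True),
    ("Write unit tests", False),
    ("Wash the car", True),
]

print(visible_tasks(tasks, is_raining=True))   # ['Write unit tests']
print(visible_tasks(tasks, is_raining=False))  # all three tasks
```

Even this small step past the plain to-do list forces you to reason about how two domains (weather and tasks) interact, which is exactly the kind of stretch deliberate practice calls for.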

This ability to create brings me to the last theory: Bloom's Taxonomy.

What is Bloom's Taxonomy?

Bloom’s Taxonomy provides a hierarchy of cognitive skills that learners move through as they develop mastery. It begins with the simplest tasks and progresses toward the most complex:

  1. Remember – recalling facts or syntax

  2. Understand – explaining concepts in your own words

  3. Apply – using knowledge in real situations

  4. Analyze – breaking problems into parts

  5. Evaluate – judging solutions or comparing approaches

  6. Create – building original systems or applications

Most self‑taught developers get stuck in the first two levels. They memorize syntax and understand examples, but they struggle to apply, analyze, or create.

This isn't because they lack ability. Rather, it's because they haven't been taught that learning must progress through these stages.

Bloom’s Taxonomy gives structure to the learning journey.

It reminds self-taught developers that mastery isn't achieved by watching tutorials but by climbing the ladder from remembering → understanding → applying → analyzing → evaluating → creating (with an emphasis on Creation).

Creation is one of the most difficult yet most transformative experiences in your journey as a developer. It forces you to think abstractly, confront ambiguity, and notice dimensions of a problem that tutorials rarely reveal.

When you build something real, the neatness of the theory in your head collapses, and you begin to see its true complexity. You must then devise strategies to navigate these challenges, and through this process, you learn.

And as with anything worthwhile, the process isn't smooth. You'll encounter bugs — not just one or two, but hundreds. Yet this is precisely how real knowledge is built. Every bug you solve becomes a permanent entry in your long‑term memory.

The next time you see that error, you won’t panic. Instead, you’ll recognise it instantly and know exactly where it’s coming from and how to fix it.

Some self‑taught developers encounter a few bugs and never return to their projects again, concluding that “coding isn’t for me.”

After trying several fixes and seeing no progress, they abandon the work and look for something else. But this is the wrong conclusion. The problem is rarely a lack of talent — it's a misunderstanding of how the brain behaves under cognitive strain.

Focused Mode vs Diffuse Mode

When you spend a long time wrestling with a bug, you may be experiencing mental fixation or functional fixedness.

This is when your brain becomes locked into a single line of reasoning, repeating the same logic path over and over because it feels like the right direction. The longer you stare at the problem, the deeper the cognitive rut becomes. You develop tunnel vision, making it almost impossible to see alternative solutions.

This is where understanding how the brain operates becomes essential.

According to Oakley (2014), the brain works in two primary modes:

  1. Focused Mode: Ideal for executing a known formula or following a clear procedure, but terrible for discovering a new approach or breaking out of a mental rut.

  2. Diffuse Mode: This is activated when you step away — walking, showering, relaxing, or sleeping.

In this second mode, the brain enters a “big‑picture” state where neural connections stretch across different regions.

The background processes continue working on the problem without the restrictive tunnel vision of conscious focus.

This phenomenon is known as incubation.

This is why solutions often appear when you’re not actively thinking about the problem. You step away, and suddenly the answer emerges, not because you stopped working, but because a different part of your brain started working for you.

The reality is that many developers never allow for incubation. While you step away from the problem, your brain performs subconscious synthesis: it clears out the noise (Extraneous Load) and lets the core logic (Germane Load) settle. When you return, the “wrong” paths you were obsessing over have faded, and the correct path which was there all along often finally becomes visible.

This is why developers must deliberately allow for incubation. We can take some lessons from great minds of the past:

Henri Poincaré famously struggled with Fuchsian functions for weeks. It was only during a geological excursion when he had completely forgotten about the mathematics that the solution appeared with “perfect certainty” the moment he stepped onto an omnibus. His breakthrough did not come from more effort, but from stepping away long enough for diffuse mode to take over.

Friedrich August Kekulé experienced something similar after years of wondering why benzene's carbon atoms didn't fit a linear structure: the ring structure reportedly came to him in a reverie of a snake seizing its own tail, while he was resting rather than deliberately working.

If some of the greatest minds in history stepped away from their problems and found solutions in diffuse mode, why should developers treat themselves any differently?

Now that you're familiar with some key learning strategies – Cognitive Load Theory, Spaced Repetition, and Bloom's Taxonomy – creating or building a project from the ground up should be your next task. It will help you curate, retrieve, organise, and seal in all that diverse knowledge you've gathered as a self-taught developer.

Conclusion

In this article, we explored why the human brain isn't instinctively wired to understand programming. Coding is a biologically secondary skill, which means it doesn't develop naturally through immersion but requires explicit instruction, structure, and patience.

We also talked about the limits of working memory, the importance of chunking, and the need to manage cognitive load so that learning remains possible rather than overwhelming.

We then analyzed the three components of Cognitive Load Theory – intrinsic, extraneous, and germane load – and discussed how each influences the learning process. Reducing extraneous load is especially crucial for self‑taught developers, as it frees up mental resources for meaningful understanding.

From there, we turned to the Ebbinghaus Forgetting Curve, which demonstrates how quickly newly learned information fades without reinforcement.

To counter this natural forgetting, we introduced Spaced Repetition, a method that strengthens memory by reviewing material at expanding intervals. We also examined Deliberate Practice, which pushes learners just beyond their comfort zone to promote genuine skill development, and Bloom’s Taxonomy, which outlines the stages of cognitive growth from remembering to creating.

Finally, we emphasized the importance of knowing when to step back. The brain operates in both focused and diffuse modes, and effective learning requires movement between the two. Breaks are not signs of weakness but essential components of consolidation and insight.

Together, these theories form a comprehensive framework for learning to code with scientific precision. When self‑taught developers understand how their brain learns, forgets, and grows, they can design a learning process that isn't only more efficient but far more sustainable.

With all this new knowledge, one truth is certain: focus, determination, and consistency are the forces that transform theory into mastery.

Learning science can guide the process, but only sustained effort turns knowledge into skill.

References

  1. Clearwater, L. (2024). Understanding the Science Behind Learning Retention | Reports | What We Think | Indegene. [online] www.indegene.com. Available at: https://www.indegene.com/what-we-think/reports/understanding-science-behind-learning-retention.

  2. Wiliam, D. [@dylanwiliam]. (2017, January 25). I’ve come to the conclusion Sweller’s Cognitive Load Theory is the single most important thing for teachers to know [Tweet]. X. https://x.com/dylanwiliam/status/824682504602943489

  3. Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363–406.

  4. Garnett, S. (2020). Cognitive Load Theory: A handbook for teachers. [online] Available at: https://www.crownhouse.co.uk/assets/look-inside/9781785835018.pdf.

  5. Geary, D. C. (2007). An evolutionary perspective on learning disability in mathematics. Developmental Neuropsychology, 32(1), 471–519. https://doi.org/10.1080/87565640701360924

  6. Geary, D. C. (2008). An evolutionarily informed education science. Educational Psychologist, 43(4), 179–195. https://doi.org/10.1080/00461520802392133

  7. Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81–97. https://doi.org/10.1037/h0043158

  8. Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24(1), 87–114. https://doi.org/10.1017/S0140525X01003922

  9. Oakley, B. (2014). A Mind for Numbers: How to Excel at Math and Science (Even If You Flunked Algebra). New York: TarcherPerigee.

  10. Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285. https://doi.org/10.1207/s15516709cog1202_4
