Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Xbox’s cross-device play history syncs your recently played games on every screen

1 Share

On Thursday, Xbox announced it’s widely rolling out cross-device play history. With the new update, even if you’re on a different Xbox console, Ally handheld, or PC, your recently played game list will remain the same, so you can jump right back in where you left off.

The change, which first started testing with Insiders last month and is now rolling out to everyone, also puts cloud-playable games in your recently played list. As described in the blog post, “That means every cloud-enabled title, from original Xbox classics to Xbox Series X|S exclusives, is now in one place whether you own it or play through Xbox Game Pass.”

On console, you can find your recently played games through the “Play history” tile on the home page. Your recent titles will also surface on the Xbox PC app within the “Play history” tab beneath the “Most Recent” section, as well as in your library.

Read the whole story
alvinashcraft
5 hours ago
reply
Pennsylvania, USA
Share this story
Delete

Typepad is Shutting Down

Typepad, which launched in 2003 to make it easier for the masses to start their blogging journey, is shutting down. From a blog post: “We have made the difficult decision to discontinue Typepad, effective September 30, 2025. After September 30, 2025, access to Typepad -- including account management, blogs, and all associated content -- will no longer be available. Your account and all related services will be permanently deactivated. Please note that after this date, you will no longer be able to access or export any blog content.”

Read more of this story at Slashdot.


How GitHub Models can help open source maintainers focus on what matters


Open source runs on passion and persistence. Maintainers are the volunteers who show up to triage issues, review contributions, manage duplicates, and do the quiet work that keeps projects going.

Most don’t plan on becoming community managers. But they built something useful, shared it, and stayed when people started depending on it. That’s how creators become stewards.

But as your project grows, your time to build shrinks. Instead, you’re writing the same “this looks like a duplicate of #1234” comment, asking for missing reproduction steps, and manually labeling issues. It’s necessary work. But it’s not what sparked your love for the project or open source.

That’s why we built GitHub Models: to help you automate the repetitive parts of project management using AI, right where your code lives and in your workflows, so you can focus on what brought you here in the first place. 

What maintainers told us

We surveyed over 500 maintainers of leading open source projects about their AI needs. Here’s what they reported:

  • 60% want help with issue triage — labeling, categorizing, and managing the flow
  • 30% need duplicate detection — finding and linking similar issues automatically
  • 10% want spam protection — filtering out low quality contributions
  • 5% need slop detection — identifying low quality pull requests that add noise

Folks surveyed indicated that they wanted AI to serve as a second pair of eyes and not to intervene unless asked. They also said that triaging issues, finding similar issues, and helping write minimal reproductions were top of mind; for some, clustering issues by topic or feature was the most important concern of all.

How GitHub Models + GitHub Actions = Continuous maintainer support

We’re calling this pattern Continuous AI: using automated AI workflows to enhance collaboration, just like CI/CD transformed testing and deployment. With GitHub Models and GitHub Actions, you can start applying it today.

Here’s how Continuous AI can help maintainers (you!) manage their projects.

The following examples are designed for you to easily copy and paste into your project. Make sure GitHub Models is enabled for your repository or organization, and then just copy the YAML into your repo’s .github/workflows directory. Customize these code blocks as needed for your project.

Add permissions: models: read to your workflow YAML, and your action will be able to call models using the built-in GITHUB_TOKEN. No special setup or external keys are required for most projects. 

Automatic issue deduplication

Problem: You wake up to three new issues; two of them describe the same bug. You copy and paste links, close duplicates, and move on… until it happens again tomorrow.

Solution: Use GitHub Models in a workflow that automatically checks whether a new issue is similar to existing ones and posts a comment linking them.

name: Detect duplicate issues

on:
  issues:
    types: [opened, reopened]

permissions:
  models: read
  issues: write

concurrency:
  group: ${{ github.workflow }}-${{ github.event.issue.number }}
  cancel-in-progress: true

jobs:
  continuous-triage-dedup:
    if: ${{ github.event.issue.user.type != 'Bot' }}
    runs-on: ubuntu-latest
    steps:
      - uses: pelikhan/action-genai-issue-dedup@v0
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          # Optional tuning:
          # labels: "auto"          # compare within matching labels, or "bug,api"
          # count: "20"             # how many recent issues to check
          # since: "90d"            # look back window, supports d/w/m

This keeps your issues organized, reduces triage work, and helps contributors find answers faster. You can adjust labels, count, and since to fine-tune what it compares against.

Issue completeness

Problem: A bug report lands in your repo with no version number, no reproduction steps, and no expected versus actual behavior. You need that information before you can help.

Solution: Automatically detect incomplete issues and ask for the missing details.

name: Issue Completeness Check

on:
  issues:
    types: [opened]

permissions:
  issues: write
  models: read

jobs:
  check-completeness:
    runs-on: ubuntu-latest
    steps:
      - name: Check issue completeness
        uses: actions/ai-inference@v1
        id: ai
        with:
          prompt: |
            Analyze this GitHub issue for completeness. If missing reproduction steps, version info, or expected/actual behavior, respond with a friendly request for the missing info. If complete, say so.
            
            Title: ${{ github.event.issue.title }}
            Body: ${{ github.event.issue.body }}
          system-prompt: You are a helpful assistant that helps analyze GitHub issues for completeness.
          model: openai/gpt-4o-mini
          temperature: 0.2

      - name: Comment on issue
        if: steps.ai.outputs.response != ''
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: ${{ github.event.issue.number }},
              body: `${{ steps.ai.outputs.response }}`
            })

The bot could respond: “Hi! Thanks for reporting this. To help us investigate, could you please provide: 1) Your Node.js version, 2) Steps to reproduce the issue, 3) What you expected to happen versus what actually happened?”

Or you can take it a step further and ensure the issue is following your contributing guidelines, like ben-balter/ai-community-moderator (MIT License).

Spam and “slop” detection

Problem: You check notifications and find multiple spam pull requests or low effort “fix typo” issues.

Solution: Use AI to flag suspicious or low quality contributions as they come in.

name: Contribution Quality Check

on:
  pull_request:
    types: [opened]
  issues:
    types: [opened]

permissions:
  pull-requests: write
  issues: write
  models: read

jobs:
  quality-check:
    runs-on: ubuntu-latest
    steps:
      - name: Detect spam or low-quality content
        uses: actions/ai-inference@v1
        id: ai
        with:
          prompt: |
            Is this GitHub ${{ github.event_name == 'issues' && 'issue' || 'pull request' }} spam, AI-generated slop, or low quality?
            
            Title: ${{ github.event.issue.title || github.event.pull_request.title }}
            Body: ${{ github.event.issue.body || github.event.pull_request.body }}
            
            Respond with one of: spam, ai-generated, needs-review, or ok
          system-prompt: You detect spam and low-quality contributions. Be conservative - only flag obvious spam or AI slop.
          model: openai/gpt-4o-mini
          temperature: 0.1

      - name: Apply label if needed
        if: steps.ai.outputs.response != 'ok'
        uses: actions/github-script@v7
        with:
          script: |
            const label = `${{ steps.ai.outputs.response }}`;
            const number = ${{ github.event.issue.number || github.event.pull_request.number }};
            
            if (label && label !== 'ok') {
              await github.rest.issues.addLabels({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: number,
                labels: [label]
              });
            }

This workflow auto-screens new issues and new pull requests for spam/slop/low-quality, and auto labels them based on an LLM’s judgment.

Tip: If the repo doesn’t already have spam or needs-review labels, addLabels will create them with default styling. If you want custom colors or descriptions, pre-create them.

You can also check out these related projects: github/ai-assessment-comment-labeler (MIT license) and github/ai-moderator (MIT license).

Continuous resolver

Problem: Your repo has hundreds of open issues, many of them already fixed or outdated. Closing them manually would take hours.

Solution: Run a scheduled workflow that identifies resolved or no-longer-relevant issues and pull requests, and either comments with context or closes them.

name: Continuous AI Resolver


on:
  schedule:
    - cron: '0 0 * * 0' # Runs every Sunday at midnight UTC
  workflow_dispatch:


permissions:
  issues: write
  pull-requests: write


jobs:
  resolver:
    runs-on: ubuntu-latest
    steps:
      - name: Run resolver
        uses: ashleywolf/continuous-ai-resolver@main
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}

Note: The above code references an existing action in ashleywolf/continuous-ai-resolver (MIT license).

This makes it easier for contributors to find active, relevant work. By automatically identifying and addressing stale issues, you prevent the dreaded “issue pileup” that discourages new contributors and makes it harder to spot actual problems that need attention.

New contributor onboarding

Problem: A first time contributor opens a pull request, but they’ve missed key steps from your CONTRIBUTING.md.

Solution: Send them a friendly, AI-generated welcome message with links to guidelines and any helpful suggestions.

name: Welcome New Contributors

on:
  pull_request:
    types: [opened]

permissions:
  pull-requests: write
  models: read

jobs:
  welcome:
    runs-on: ubuntu-latest
    if: github.event.pull_request.author_association == 'FIRST_TIME_CONTRIBUTOR'
    steps:
      - name: Generate welcome message
        uses: actions/ai-inference@v1
        id: ai
        with:
          prompt: |
            Write a friendly welcome message for a first-time contributor. Include:
            1. Thank them for their first PR
            2. Mention checking CONTRIBUTING.md
            3. Offer to help if they have questions
            
            Keep it brief and encouraging.
          model: openai/gpt-4o-mini
          temperature: 0.7

      - name: Post welcome comment
        uses: actions/github-script@v7
        with:
          script: |
            const message = `${{ steps.ai.outputs.response }}`;
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: ${{ github.event.pull_request.number }},
              body: message
            });

This makes contributors feel welcome while setting them up for success by reducing rework and improving merge times.

Why these?

These examples hit the biggest pain points we hear from maintainers: triage, deduplication, spam handling, backlog cleanup, and onboarding. They’re quick to try, safe to run, and easy to tweak. Even one can save you hours per month.

Best practices 

  • Start with one workflow and expand from there
  • Keep maintainers in the loop until you trust the automation
  • Customize prompts so the AI matches your project’s tone and style
  • Monitor results and tweak as needed
  • Avoid one-size-fits-all automation, unreviewed changes, or anything that spams your contributors

Get started today

If you’re ready to experiment with AI:

  1. Enable GitHub Models in your repository settings
  2. Start with the playground to test prompts and models
  3. Save working prompts as .prompt.yml files in your repo
  4. Build your first action using the examples above
  5. Share with the community — we’re all learning together!
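As a sketch of step 3, a saved prompt file might look like the following. The field names (name, model, messages, {{variable}} templating) follow the GitHub Models prompt format as I understand it; verify the exact schema against the GitHub Models documentation before relying on it.

```yaml
# issue-completeness.prompt.yml (hypothetical example)
name: Issue completeness check
description: Ask reporters for any missing reproduction details
model: openai/gpt-4o-mini
modelParameters:
  temperature: 0.2
messages:
  - role: system
    content: You are a helpful assistant that analyzes GitHub issues for completeness.
  - role: user
    content: "Analyze this issue and list any missing details: {{issue_body}}"
```

Keeping prompts in version-controlled files like this lets you review and iterate on them the same way you do code.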

The more we share what works, the better these tools will get. If you build something useful, add it to the Continuous AI Awesome List.

If you’re looking for more, join the Maintainer Community.

The post How GitHub Models can help open source maintainers focus on what matters appeared first on The GitHub Blog.


How we accelerated Secret Protection engineering with Copilot


Accidentally committing secrets to source code is a mistake every developer dreads — and one that’s surprisingly easy to make. GitHub Secret Protection was built for moments like these, helping teams catch exposed credentials before they cause harm.

Secret Protection works by creating alerts for sensitive credentials found in code, and it offers several features to help mitigate leaks even further.

  • Push protection helps stop leaks before they happen by blocking any commits that contain sensitive data and ensuring that credentials do not make it into a code base.
  • Validity checks help users triage alerts by indicating which secrets are active and need immediate attention.
  • The partner program allows for providers to be notified of leaks in public repositories for certain token types. When these types are detected, providers can take immediate action on the exposed secret (e.g. immediate revocation, application of a quarantine policy, notification).
  • Copilot secret scanning detects generic secrets, such as passwords or connection strings that may not be associated with a specific provider.
  • Custom patterns let you define expressions for detecting secrets that are specific to your project or organization.

Aaron and I have worked extensively on validity checks during our time at GitHub. It’s become a core part of our product, and many users rely on it day-to-day as part of their triage and remediation workflows. Secret Protection calculates the validity of a leaked credential by testing it against an unobtrusive API endpoint associated with the token’s provider. 
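As an illustration of the idea (not GitHub’s actual implementation), the status-to-verdict mapping of such a check can be sketched like this. The probe is injected so the sketch stays network-free, and the specific status codes are my assumptions:

```python
# Sketch of a validity check: probe an unobtrusive, read-only endpoint with
# the leaked credential and map the HTTP status to a verdict.
def infer_validity(status_code):
    if status_code == 200:
        return "active"      # token authenticated successfully
    if status_code in (401, 403):
        return "inactive"    # token rejected: likely revoked or expired
    return "unknown"         # transient errors, rate limits, etc.

def check_token(token, probe):
    # probe(token) performs the HTTP request (for example, GET /user on
    # GitHub's API with a bearer header) and returns the status code.
    return infer_validity(probe(token))
```

The injected probe also makes the verdict logic trivially testable without ever touching a live credential.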

We released this feature in 2023, and we started by adding validity checks support for the most common token types we saw leaked in code (e.g., AWS keys, GCP credentials, Slack tokens). Secret Protection got to a point where it was validating roughly 80% of newly created alerts. While the less common token types remained (and continue to remain) important, our team shifted focus to make sure we delivered the greatest value for our customers.

Toward the end of 2024 and into 2025, agentic AI arrived and coding agents started to gain mainstream popularity. Our team got together earlier this year and had a thought: could we use coding agents to help cover this gap?

Augmenting a repeatable workflow

To identify opportunities for automation, we first took a close look at our existing process for adding validation support for new token types. This framework-driven workflow included the following steps for each token type:

  1. We researched the provider to determine a good endpoint for validating the token in question.
  2. We wrote code — a validator — to implement this change.
  3. We darkshipped the validator, which allowed us to update our implementation based on errors we observed.
  4. We fully shipped the validator by removing the darkship configuration.

(Diagram: the framework-driven workflow of research, code, darkship (observe), and release. As needed, the workflow repeats the “code” step after “darkship.”)

The coding and release parts (second and fourth steps) of this process were the obvious first choices for automation.

The first step above involves finding a suitable endpoint to validate a new token type. We typically use /me (or equivalent) endpoints if they exist. Sometimes they do exist, but they’re buried in documentation and not easy to find. We experimented with handing off this research to Copilot, but it sometimes struggled. It could not reliably find the same least-intrusive endpoint an engineer would choose. We also discovered that creating and testing live tokens, and interpreting nuanced API changes, remained tasks best handled by experienced engineers.

Copilot did an excellent job of making code changes. The output of the human-driven research task was fed into a manually dispatched GitHub workflow that created a detailed issue we could assign to the coding agent. The issue served as a comprehensive prompt that included background on the project, links to API documentation, and various examples to look at. We learned that the coding agent sometimes struggled with following links, so we added an extra field for any additional notes. 

Screenshot of the GitHub Actions “Run workflow” form to create a new validator. The form shows a dropdown for branch, and text fields for Token Type, Token Name, Provider, Documentation URL, Endpoint URL, and Other Notes for Validator, all with sample input using a PAT from Example.com as a demonstration.

After assigning an issue to Copilot, the coding agent automatically generated a pull request, instantly turning our research and planning into actionable, feedback-ready code. We treated code generated by the agent just like code written by our team: it went through automated testing, a human review process, and was eventually deployed by engineers. GitHub provided a streamlined process for requesting changes from the agent — just add comments to a pull request. The agent is not perfect, and it did make some mistakes. For example, we expected that Copilot would follow documentation links in a prompt and reference the information there as it implemented its change, but in practice we found that it sometimes missed details or didn’t follow documentation as intended.

Our framework included the ability to darkship a validator. That is, we observed the results of our new code without writing validity inferences to the database. It wasn’t uncommon for our engineers to encounter some amount of drift in API documentation and actual behavior. This stage allowed us to safely fix any errors. When we were ready to fully release a change, we asked Copilot to make a small configuration change to take the new validator out of darkship mode.

The result

Prior to our AI experimentation, progress was steady but slow. We were validating 32 partner token types. It took us several months to get here as engineers balanced onboarding new checks with day-to-day feature development. With Copilot, we onboarded almost 90 new types in just a few weeks as our engineering interns, @inshalak and @matthew-tzong, directed Copilot through this process.

Coding agents are a viable option for accelerating framework-driven, repeatable workflows with automation. In our case, Copilot was a force multiplier. Being able to parallelize the output of N research tasks over N agents was huge. Copilot delivers speed and scale, but it’s no replacement for human engineering judgment. Always review, test, and verify the code it produces. We were successful by grafting Copilot into very specific parts of this framework.

Takeaways and tips

Our experiment using Copilot coding agent made a measurable impact: we dramatically accelerated our coverage of token types, parallelized the most time-consuming parts of the workflow, and freed up engineers to focus on the nuanced research and review stages. Copilot didn’t replace the need for thoughtful engineering, but it did prove to be a powerful teammate for framework-driven, repeatable engineering tasks.

A few things we learned along the way:

  • Automation amplifies repeatability: If you have a process with well-defined steps, coding agents can help you scale your efforts and multiply your impact.
  • Treat Copilot like a team member: Its contributions need the same careful review, testing, and feedback as any human’s code.
  • Prompt quality drives results: Detailed, example-rich prompts (and sometimes extra notes) helped Copilot deliver higher-quality pull requests.
  • Iterate on your process: Prompts often needed refinement, and workflows benefited from small adjustments as we learned what worked best.
  • Parallelization is a superpower: With the right setup, we could assign many tasks at once and unblock work that would have otherwise queued up behind a single engineer.

We see huge potential for coding agents wherever there are repeatable engineering tasks. We are experimenting with similar processes in other onboarding workflows in our project. We’re confident that many other teams and projects across the industry have similar framework-driven workflows that are great candidates for this kind of automation.

If you’re looking to bring automation into your own workflow, take advantage of what’s already repeatable, invest in good prompts, and always keep collaboration and review at the center.

Thanks for reading! We’re excited to see how the next generation of agentic AI and coding agents will continue to accelerate software engineering — not just at GitHub, but across the entire developer ecosystem.

The post How we accelerated Secret Protection engineering with Copilot appeared first on The GitHub Blog.


Install Cursor and Learn Programming With AI Help


I’m not a big fan of using AI as a shortcut. On the other hand, I’m perfectly OK using it as a tool for learning. One thing AI is very good at is teaching programming languages. Yes, you have to be careful to ensure you’re not being misled by mistakes, but with follow-up queries, I’ve found you can almost always get AI to help correct its gaffes.

There are plenty of AI-powered IDEs on the market now and even terminal applications (such as Warp) that include a powerful AI that can do the very same thing.

One such IDE is called Cursor. You can make Cursor a part of your workflow and learn how this new IDE sets the standard for AI-powered programming tools. However, before you get to those stages, you’ll need to get Cursor installed and configured, so it’s usable for helping you interact with AI to learn the art of programming.

Let’s do just that.

What You’ll Need

Because Linux is my go-to OS (and because it’s rising in popularity), I’m going to demonstrate this process on Pop!_OS Linux. You can also install this IDE on macOS and Windows by downloading the installer file from the downloads page and going through the standard motions of installing apps on your platform of choice.

If you’re using Linux, the only option is an AppImage, which is nice because it means Cursor will run on any Linux desktop distribution.

Installing Cursor on Linux

Before you download the Cursor AppImage, I would suggest you install a handy app called Gear Lever, which makes working with AppImages much easier. Gear Lever can create a launcher in your desktop menu so you don’t have to run the AppImage from the command line.

Gear Lever can be installed on any Linux distribution that supports Flatpak with the command:
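The command itself appears to have been lost from the post. Assuming Gear Lever’s Flathub application ID is it.mijorus.gearlever (worth confirming on flathub.org), it would be:

```shell
# Install Gear Lever from Flathub (application ID is an assumption)
flatpak install flathub it.mijorus.gearlever
```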

Once installed, log out and log back in to make sure the Gear Lever launcher is added to your desktop menu.

Next, download the AppImage from the Cursor download page.

After the download is finished, open Gear Lever and click the Open button in the top left corner (Figure 1).

Figure 1 shows a screenshot of the Gear Lever main window, listing the installed applications.

Figure 1. The Gear Lever main window.

Navigate to the folder housing the Cursor AppImage and select the file. You will then be prompted to move the AppImage to the desktop menu. Do that and you’re ready to launch the app.

Using Cursor

When you first launch Cursor, you’ll have to walk through a welcome wizard, during which you’ll need to sign up for an account. You can either use an email address or sign up with an account such as Google.

Once you’re finished, it’s time to start using Cursor.

Let me walk you through the process of using Cursor’s AI features.

The first thing you must do is select a model. As you’re probably aware, some models require an account (paid even) and/or an API. For your first steps, I would suggest opting for one of the free models. To do this, locate the model drop-down in the query field at the bottom right of the Cursor window. From that drop-down (Figure 2), select a model like deepseek-v3.1 (which can be used for free). After you’ve selected the model you want to use, run your query.

Figure 2 is a screenshot of Cursor showing that you can select whatever model you want, but if you choose a paid model, you'll have to use your account and (possibly) an API.

Figure 2. You can select whatever model you want, but if you choose a paid model, you’ll have to use your account and (possibly) an API.

For example, let’s have Cursor write a Python program that accepts input from a user for name, address, email and phone. That prompt might look like this:

Write a Python program that accepts user input for name, address, email and phone and then writes it to a file named users.txt.

Before you hit Enter, it’s important that you switch from Agent to Ask, which is done via the drop-down to the left of the model selector. If you try to use Agent, you’ll be prompted to sign up for a Pro or Business plan.

Cursor will then begin to write the app, and does so very quickly.

At this point, I attempted to use the Run Without Debugging option and was presented with an error. It wasn’t a Python error but, rather, the inability of Cursor to find the __main__ module in my home directory.

To that end, I copied the code, pasted it into the file collect_user_info.py, and then ran it with the command:
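The command is missing from the post here; given the filename above, it would presumably be:

```shell
# Run the saved script with the system's Python 3 interpreter
python3 collect_user_info.py
```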

Guess what? The code ran perfectly.

I then went back for a follow-up and added (via query):

Add to the program the ability to accept user input for gender.

Cursor went to town and generated the new code. I overwrote the original file to see if the new code would work as expected. Blammo — worked like a charm.

At the end of the presented code, Cursor even gives you the steps to run the new Python program.

I’m not saying you should install Cursor and start using it to build all of your applications, but this is a good way for those trying to learn how to code to make that a reality.

Give Cursor a try and see if it doesn’t help with your understanding of whatever programming language you want to learn.

The post Install Cursor and Learn Programming With AI Help appeared first on The New Stack.


Building a Web Based Comic Book Reader


Ok, so I know I've been spending way too much time lately talking about comic books, but I've been reading them for roughly 80% of my life now so they're just a natural part of my life. Now, my best friend Todd Sharp told me this crazy lie that he's never read a comic book before, but surely that's a lie. Surely.

Earlier this week, I took a look at parsing electronic comic books and sending them to GenAI as a way to get summaries of stories. That was a fun experiment and it actually worked quite well. I thought I'd take a stab at trying a similar approach with Chrome's Built-in AI support as well when I discovered that... wait... I don't actually have a way to view comics on the web. Or so I thought.

Way, way, back in 2012 I wrote a post on that very topic: "Building an HTML5 Comic Book Reader". This was back when you would still describe 'modern' web apps as HTML5 apps. Now that looks dated as hell. The code in this post is absolutely outdated now. It made use of the FileSystem API for extraction versus just doing everything in memory. It also only used CBZ files as I wasn't able to find a RAR library for JavaScript back then. I decided to take a stab at updating it to a more modern version and here's what I came up with.

The Stack

For the updated demo, I made use of the following libraries:

  • Shoelace - I love Shoelace's look and web component API, but I have to be honest, I barely used it in my demo and it's probably overkill for what I built. But I like it - so I'm keeping it.
  • zip.js - for supporting CBZ files.
  • Unarchiver.js - for RAR support. Technically this library supports zip files (and more) too, but I came to this after I had zip working well and ... I didn't want to poke the bear. If I were to be shipping this as a 'real' project, I'd probably remove zip.js and just use this library.

And that's it. The application is entirely client-side code. Oh, and no React. Is that allowed?

Drag/Drop Comics

Alright, let's get into the code proper. I began by simply adding a div to the page where you could drop your file. To be honest, I could have supported it on the document as a whole, but I liked the idea of a nice little box.

Here's the HTML I used:

<div id="dropZone">
Drop .CBR/.CBZ here.
</div>

And here's the JavaScript that's going to handle it. To keep things a bit simpler, I'm going to ignore some of the DOM setup code and such. I'll be linking to everything below.

document.addEventListener('DOMContentLoaded', init, false);

async function init() {

	$dropZone = document.querySelector('div#dropZone');
	$dropZone.addEventListener('dragover', e => e.preventDefault());
	$dropZone.addEventListener('drop', handleDrop);

}

The function to handle file drops is below:

function handleDrop(e) {
	e.preventDefault();

	let droppedFiles = e.dataTransfer.files;
	if(!droppedFiles) return;
	let myFile = droppedFiles[0];
	let ext = myFile.name.split('.').pop().toLowerCase();

	if(ext !== 'cbr' && ext !== 'cbz') {
		$filetypeAlert.toast();
		return;
	} 

	$filetypeAlert.hide();
	$dropZone.style.display = 'none';

	// note, for rar, go right to handler 
	if(ext == 'cbr') {
		handleRar(myFile);
		return;
	}

	let reader = new FileReader();
	reader.onload = e => {
		if(ext === 'cbz') handleZip(e.target.result);
	};
	reader.readAsArrayBuffer(myFile);
}

I've got a few things going on. First, I look for the file data associated with the dropped file and check the extension. If it doesn't match what I'm looking for, I show an error toast (provided by Shoelace).

For my RAR files, I can pass the file object directly to a function to work with it. I don't believe zip.js supports this, so for that case I'm reading in the bits and then passing them off to the function that handles it. (This is probably another clue I should have just used Unarchiver.js.)

Parsing the Archives

This is the cool part I think. I wrote two functions, one to handle RARs, and one to handle Zips. My thinking is that these functions would hand off the results, a set of images, to a display function, but I also knew both libraries had a wrapped interface to working with archive entries. So I thought - what if these functions also created a function that literally says, "Given you want page X, here's a function to return that image data."

Here's both those functions, and make note of the inner functions. This is that special handler for images.

async function handleRar(d) {
	const getData = async p => {
		let data = await p.read();
		return URL.createObjectURL(data);
	}

	let archive = await Unarchiver.open(d);

	// todo - remove Thumbs.db if possible
	let entries = archive.entries.filter(e => e.is_file);

	displayComic(entries, getData);
}

async function handleZip(d) {

	const getB64 = async p => {
		let dw = new zip.Data64URIWriter();
		return await p.getData(dw);
	}

	const blob = new Blob([d], { type: 'application/octet-stream' });
	const reader = new zip.ZipReader(new zip.BlobReader(blob));

	const entries = (await reader.getEntries()).filter(e => !e.directory && !e.filename.endsWith('Thumbs.db'));

	displayComic(entries, getB64);
}

Note that I've got code in to filter directories. Many comic book archives begin with a folder of images rather than simply storing the images as is. I also look out for Thumbs.db, at least in my CBZ files.

Rendering the Comic Pages

Next up - actually rendering the pages. I've got a bit of basic HTML for this that will handle rendering a page count, buttons, and the image:

<div id="comicDisplay">
  <div id="comicNav">
    <div id="pageNumbers"></div>
    <div id="pageNavigation">
      <sl-button-group label="Navigation">
        <sl-button id="prevButton">Previous</sl-button>
        <sl-button id="nextButton">Next</sl-button>
      </sl-button-group>
    </div>
  </div>
  <p>
  <img id="currentPage">
  </p>
</div>

And here's the JavaScript:

async function displayComic(pages, reader) {

	const doPrevPage = async () => {
		if(currentPage == 0) return;
		currentPage--;

		$pageNumbers.innerHTML = `Page ${currentPage+1} of ${pages.length}`;
		$currentPage.src = await reader(pages[currentPage]);
	};

	const doNextPage = async () => {
		if(currentPage+1 === pages.length) return;
		currentPage++;
		$pageNumbers.innerHTML = `Page ${currentPage+1} of ${pages.length}`;
		$currentPage.src = await reader(pages[currentPage]);
	};

	let currentPage = 0;
	$comicDisplay.style.display = 'block';
	$pageNumbers.innerHTML = `Page 1 of ${pages.length}`;
	$currentPage.src = await reader(pages[0]);
	$prevButton.addEventListener('click', doPrevPage);
	$nextButton.addEventListener('click', doNextPage);
}

Again, I'm pretty proud of this. I love that the logic for getting the actual bits is passed in by the corresponding zip/rar handlers and this can be done more generic.

The App

I assume most folks won't have electronic comic books handy unless you're a big nerd like me. If you want, head over to ComicBook+ and grab a few. Here's the app before you upload:

App waiting for you to drop the mic...

And here's a sample comic. Note that I could probably render the image a bit better here.

Example rendering a Batman comic

Want to try it yourself? You can play with it here: https://cfjedimaster.github.io/ai-testingzone/comic_web/index.html

And the full code may be found here: https://github.com/cfjedimaster/ai-testingzone/tree/main/comic_web

The next step will be to add AI integration!

Image by kidsnews.hu from Pixabay
