
Copyright vs. Trademark vs. Patent – If You Create, Build, or Brand Anything, This Matters


I’m sitting in Washington, D.C. today, preparing to meet with lawmakers from the Senate and the House, and one thing is crystal clear:

America’s innovation economy runs on ideas.  But having a great idea is only half the battle.

The real question is:

How do you protect it?

Whether you’re an entrepreneur, software developer, artist, startup founder, author, inventor, content creator, or business owner, understanding the difference between Copyright, Trademark, and Patent isn’t optional; it’s essential.

These three forms of intellectual property protection are often confused, sometimes used interchangeably, and frequently misunderstood.

They shouldn’t be.

Because choosing the wrong protection, or no protection at all, can cost you your competitive advantage, your revenue, and in some cases, ownership of what you created.

So let’s break it down.


1. Copyright: Protecting Creative Expression

When people hear “intellectual property,” copyright is usually the first thing that comes to mind.

A copyright protects original creative works that are fixed in a tangible form.

That includes:

  • Books and articles
  • Blog posts
  • Music and lyrics
  • Videos and podcasts
  • Photography
  • Artwork and graphics
  • Movies and screenplays
  • Software code
  • Training materials
  • Presentations
  • Website content

In other words:

Copyright protects how you express an idea, not the idea itself.

If you write a book about AI agents, copyright protects your words, your structure, your diagrams, your code examples.

It does not prevent someone else from writing their own book about AI agents.

When does copyright begin?

This surprises many people:

In the United States, copyright protection begins the moment your original work is created and fixed in a tangible medium.

That means:

  • Saved on your laptop
  • Written on paper
  • Recorded on video
  • Stored in the cloud

Registration is not required for ownership.

However…

Registration with the United States Copyright Office gives you powerful legal advantages, including the ability to bring infringement lawsuits and potentially recover statutory damages.

Current state in the U.S.

Today, copyright law is facing intense pressure from:

  • Generative AI training
  • Synthetic content
  • Ownership of AI-assisted creations
  • Digital piracy
  • Social media redistribution

Major questions being debated right now include:

  • Can AI-generated works be copyrighted?
  • Is training AI on copyrighted content “fair use”?
  • Who owns human-plus-AI creations?

The law is evolving fast.  And lawmakers in Washington know it.


2. Trademark: Protecting Your Brand Identity

If copyright protects creativity…

Trademark protects identity.

A trademark protects the symbols that tell the market:

“This product or service comes from me.”

That includes:

  • Business names
  • Product names
  • Logos
  • Slogans
  • Taglines
  • Distinctive packaging
  • Brand colors (in some cases)
  • Sounds (yes, even sounds)

Think about names like:

  • Apple
  • Nike
  • Coca-Cola

Those names instantly communicate trust, quality, and reputation.

That’s trademark power.

When should you use a trademark?

The moment you launch:

  • A company
  • A product
  • A course
  • A consulting brand
  • A podcast
  • A software platform

If people will associate a name with your reputation…

Trademark matters.

Current state in the U.S.

Trademark law in America remains strong, but businesses are facing new challenges:

  • Domain name conflicts
  • Social media impersonation
  • Marketplace counterfeiting
  • International copycats
  • AI-generated branding collisions

The United States Patent and Trademark Office continues modernizing digital filing and enforcement, but brand owners still need to actively monitor misuse.

Because trademarks aren’t “set it and forget it.”

If you don’t defend your mark…  You can weaken it.


3. Patent: Protecting Inventions and Innovation

Now we get to the heavyweight.

A patent protects new, useful, and non-obvious inventions.

This includes:

  • Machines
  • Manufacturing processes
  • Chemical compositions
  • Medical devices
  • Electronics
  • Software-based inventions (in some cases)
  • Industrial processes
  • Hardware innovations

In plain English:

If you invented something the world has never seen before…

Patent may be your strongest protection.

Types of patents in the U.S.

Utility Patents

  • The most common.
  • Protect how something works.

Design Patents

Protect how something looks.

Plant Patents

Protect new plant varieties.

Yes… plants.

When should you file?

As early as possible.

The U.S. operates under a “first inventor to file” system.

That means speed matters.

Talking publicly about your invention before filing can create serious complications.

Current state in the U.S.

Patent law today is navigating:

  • AI-generated inventions
  • Software patent eligibility
  • Semiconductor innovation
  • Biotechnology acceleration
  • Global competition with China and other innovation economies

One of the hottest legal debates:

Can an AI system be listed as an inventor?

Right now in the United States:

The answer is NO.

Inventorship currently requires a human.


The Simplest Way to Remember It

If you create it…

Copyright.

If you brand it…

Trademark.

If you invent it…

Patent.

Or even simpler:

  • Words, music, code, content → Copyright
  • Names, logos, slogans, reputation → Trademark
  • Products, processes, inventions, technology → Patent

The Reality in 2026: AI Is Changing Everything

We are entering a new era where creators, founders, and innovators are asking entirely new questions:

  • Who owns AI-generated content?
  • Can an AI-designed product be patented?
  • Can an AI-generated logo be trademarked?
  • How do we protect human creativity in a machine-assisted world?

These aren’t future questions.

These are Washington questions.

Right now.

And as I sit here in our nation’s capital preparing to engage with policymakers, one message has never been more important:

Innovation without protection is exposure.

If you create…

Protect it.

If you build…

Protect it.

If you brand…

Protect it.

Because in today’s economy, your ideas may be your most valuable asset.

The post Copyright vs. Trademark vs. Patent – If You Create, Build, or Brand Anything, This Matters appeared first on The Training Boss.

Read the whole story
alvinashcraft
25 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

AI Unit Testing Validation: AI Can Generate Tests – But Who Validates Them?


AI Unit Testing Validation is becoming one of the most important challenges in modern software development.

Developers can now generate unit tests in seconds. Entire test suites appear from a single prompt. Coverage numbers rise instantly. Refactoring feels safer. Delivery feels faster.

It sounds like a dream.

But there is a dangerous assumption hiding underneath:
If AI generated the test, it must be correct.
That assumption is where teams get into trouble.

Why AI Unit Testing Validation Matters More Than Coverage

High code coverage looks impressive.

Dashboards turn green.
Managers feel safe.
Release pipelines move faster.

But coverage does not prove quality.
A test can execute code and still validate nothing meaningful.
AI is excellent at producing tests that look correct:
clean syntax, proper assertions, even edge cases.

But AI does not understand your business logic.
It predicts patterns.
That means it can generate:

  • redundant tests
  • incorrect assumptions
  • false positives
  • tests that verify implementation details instead of behavior
  • tests that pass while protecting nothing important

This creates a new risk:

False confidence.

And false confidence is often worse than missing tests entirely.

The Shift: From Writing Tests to Reviewing Tests

Before AI, writing tests was the hard part.
Now, validation is the hard part.
This is the same shift we saw with search engines.

Before Google, finding information was difficult.
After Google, information became abundant. The challenge became filtering truth from noise.

AI has done the same thing to software development.
Creating tests is now easy.
Knowing whether those tests matter is the real engineering skill.

The question is no longer:
“How do we write more tests?”

It becomes:
“How do we know these tests are worth trusting?”

Good Developers Review Tests Like They Review Code

Nobody would merge production code without review.
Why should tests be different?
Generated tests need the same discipline:

  • Is the assertion meaningful?
  • Is the test validating behavior or just implementation?
  • Does the mock represent reality?
  • Is the dependency isolation correct?
  • Are external resources properly controlled?
  • Is the test preventing regressions or just increasing numbers?

This is where strong mocking and isolation matter.
A bad mock creates a bad test.
A meaningful isolated test creates confidence.
That distinction is critical.

Example of Failed AI Unit Testing Validation

Imagine AI generates this:

[Test]
public void SaveCustomer_ShouldCallRepository()
{
    var repo = Isolate.Fake.Instance<ICustomerRepository>();
    var service = new CustomerService(repo);

    service.Save(new Customer());

    Isolate.Verify.WasCalledWithAnyArguments(() => repo.Save(null));
}

Looks good.
Passes fast.
Great coverage.

But what if the real business rule is:
“Do not save invalid customers.”
The test never checks that.
It verifies mechanics, not behavior.
This is how teams get fooled.
The test passes.
Production fails.
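What a behavior-focused version looks like can be sketched quickly. This is an illustrative example only, written in Python with unittest.mock for brevity; the CustomerService, its repository, and the name-validation rule are hypothetical stand-ins for the business rule described above, not the article's actual code:

```python
from unittest.mock import Mock

# Hypothetical stand-ins for the article's CustomerService example.
class InvalidCustomerError(Exception):
    pass

class CustomerService:
    def __init__(self, repo):
        self.repo = repo

    def save(self, customer):
        # The business rule the generated test never checked:
        # do not save invalid customers.
        if not customer.get("name"):
            raise InvalidCustomerError("customer must have a name")
        self.repo.save(customer)

def test_invalid_customer_is_rejected_and_not_saved():
    repo = Mock()
    service = CustomerService(repo)
    try:
        service.save({"name": ""})
        assert False, "expected InvalidCustomerError"
    except InvalidCustomerError:
        pass
    repo.save.assert_not_called()   # behavior: nothing was persisted

def test_valid_customer_is_saved_once():
    repo = Mock()
    CustomerService(repo).save({"name": "Ada"})
    repo.save.assert_called_once()  # mechanics, now backed by the rule above

test_invalid_customer_is_rejected_and_not_saved()
test_valid_customer_is_saved_once()
```

The difference from the generated test is the first assertion: it pins down what must *not* happen, which is exactly the part pattern-matching tends to miss.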

Validation Is the New Competitive Advantage

The best engineering teams will not be the ones generating the most tests.
They will be the teams validating the right tests.
This is where tools must evolve.

Developers need more than test generation.
They need test review.

They need visibility into:

  • duplicate tests
  • weak assertions
  • hidden external dependencies
  • improper isolation
  • unreliable test patterns

Because speed without trust is technical debt with better marketing.

AI Is Not Replacing TDD

It is exposing whether teams truly understood it.
Test-Driven Development was never about writing more tests.
It was about clarity.
Intent.
Design.
Confidence.

AI does not remove that responsibility.
It amplifies it.
The future of unit testing is not AI-generated tests.
It is AI-assisted validation.
And the teams that understand that will ship faster and safer.

Learn more in this article from SD Times

Final Thought on AI Unit Testing Validation

AI can generate tests.
That part is easy.
The real question is: Who validates them?

Because in modern software development, trust is no longer created by writing tests.
It is created by knowing which tests deserve to exist.

Learn more:

See also this article from SD Times

And this article: AI Unit Testing

The post AI Unit Testing Validation: AI Can Generate Tests – But Who Validates Them? appeared first on Typemock.


From Azure DevOps to GitHub Enterprise: Why Now Is the Right Time to Make the Move


I have to start this blog post by saying I am an Azure DevOps fanboy and always will be. I’ve used it since day one. I’ve worked on a lot of projects over the years, using repos, building pipelines, and using Azure Boards throughout. Let’s get into this subject further.

For years, Azure DevOps has been the default choice for enterprise software teams. It has served organisations well, managing source code, running build pipelines, tracking work, and housing artefacts under one roof. It does the job.

GitHub Enterprise is something different though. Not just a nicer place to host code. It’s a platform built around developers rather than process governance, and the AI sitting on top of it has gotten good enough that it genuinely changes how code gets written. That last part is what shifted my thinking.

If your company is still on Azure DevOps, here’s the honest case for moving.


A Platform Built for Developers

Azure DevOps grew out of Team Foundation Server. Its DNA is project management and governance – that’s what TFS was for, and you can still feel it in the bones of the product. GitHub started somewhere completely different: as a place developers actually wanted to be.

The pull request flow, the way repos are structured, the automation model, the tone of the whole platform – it all points in the same direction. Developers like spending time there. That sounds soft until you try to hire engineers and realise they already have GitHub profiles. Their open-source work lives there. The libraries they pull into your codebase live there. Over 180 million developers use it, depending on which GitHub statistic you read. Moving onto GitHub Enterprise means your team is on the same platform they already use for everything else.


Migrating Is Easier Than You Think

The fear of migration is what kills most of these conversations before they start. In practice it’s not that bad – Microsoft and GitHub have put real engineering effort into making this work.

The GitHub Enterprise Importer (GEI) is the official CLI for moving from Azure DevOps. It carries over Git history, pull request history, repo settings, metadata. For a Git repo, it’s literally one command:

gh gei migrate-repo \
  --github-target-org "my-org" \
  --target-repo "my-repo" \
  --ado-source-org "my-ado-org" \
  --ado-source-team-project "MyProject" \
  --ado-source-repo "MyRepo"

For shops with hundreds of repos, GEI supports bulk scripts so you can queue and sequence the work. If you’re still on TFVC, you’ve got an extra hop – either git-tfs or Azure DevOps’s built-in TFVC-to-Git conversion – before you can run the importer. Annoying, but doable.

I’d recommend migrating in waves. Start with the repos no one will cry over if something goes sideways. Validate. Update integrations. Move to the important stuff once you trust the process. Once a repo lands on GitHub, branch protection, CODEOWNERS files and repo templates are sitting right there, and you’ll likely have better governance on day one than you had in Azure DevOps.
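For instance, a minimal CODEOWNERS file gives you review routing on day one. The paths and team names below are hypothetical, purely to show the shape:

```
# .github/CODEOWNERS -- review is auto-requested from matching owners
*.cs        @my-org/dotnet-team
/infra/**   @my-org/platform-team
```

Combined with a branch protection rule requiring code-owner review, this enforces in the platform what many Azure DevOps shops enforce by convention.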

Actions vs Pipelines

If repo migration is the visible part, pipeline migration is where the actual work is. Azure Pipelines and GitHub Actions both run on YAML and look superficially similar, but the differences are real and you can’t just copy-paste your YAML across. Plan for some rewriting.

What you get for the effort is worth it though. Actions has a marketplace stuffed with pre-built actions – deployments to Azure, AWS and GCP, container builds, security scanning, the lot. And almost any event in GitHub can trigger a workflow. Not just pushes and PRs but releases, issue comments, schedules, manual dispatches. Once you start thinking event-first, it’s hard to go back.

For Azure deployments, ditch service principal secrets and use OIDC. Azure trusts GitHub’s identity provider, short-lived tokens get issued at runtime, and there’s nothing sitting in a vault waiting to leak:

- name: Azure Login (OIDC)
  uses: azure/login@v2
  with:
    client-id: ${{ secrets.AZURE_CLIENT_ID }}
    tenant-id: ${{ secrets.AZURE_TENANT_ID }}
    subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

Multi-stage deployments translate cleanly. Build, dev, staging with approval gates, production – environment protection rules cover all of it. And reusable workflows are the equivalent of Azure Pipelines templates, so your DRY pipeline code lives in a central .github repo and gets pulled in by everything else.
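As a rough sketch of that pattern (the org name, file paths, and build command here are hypothetical, not from any particular migration):

```yaml
# Central repo "my-org/.github": .github/workflows/build.yml
on:
  workflow_call:
    inputs:
      configuration:
        type: string
        default: Release

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: dotnet build -c ${{ inputs.configuration }}

---
# Any consuming repo: .github/workflows/ci.yml
on: push
jobs:
  build:
    uses: my-org/.github/.github/workflows/build.yml@main
    with:
      configuration: Release
```

The caller pulls in the shared job with a single `uses:` line, which is the moral equivalent of an `extends` template in Azure Pipelines.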

Security stops being someone else’s problem

In most Azure DevOps shops, security still works the old way: a scan before release, a pen test before go-live, findings dumped on developers weeks after the code was written. By the time someone’s looking at the finding, they’ve forgotten what they were trying to do when they wrote it.

GitHub Advanced Security pulls security into the pull request itself. Findings show up where the code is being reviewed, not in a separate report you’ll get to next sprint.

Three things drive this:

CodeQL. GitHub’s semantic scanner. Treats your code as a database you can query and goes after the obvious stuff – injection, XSS, insecure deserialisation, dodgy crypto, broken auth. Findings appear as inline annotations on the PR diff, and branch protection can stop the merge until they’re fixed. If you’ve already paid for Semgrep, Snyk, Sonar or Checkmarx, they all speak SARIF and feed into the same view.

Secret scanning with push protection. The bit everyone forgets about is the second half – push protection. Instead of catching secrets after they’re committed (too late, go rotate everything), it blocks the git push itself when it spots one of 200+ known credential patterns. The developer gets the error before the secret ever hits the repo. For partnered providers – AWS, Azure, Stripe, others – GitHub also pings the provider so they can revoke the token automatically.

Dependabot. Watches the dependency graph, raises alerts on known CVEs, and opens PRs to bump vulnerable packages to patched versions. The dependency review action blocks PRs introducing new vulnerable or licence-incompatible dependencies at merge time.

None of this needs third-party integrations or custom pipeline plumbing. It’s just there.

The Copilot question

Everything above is a real step forward. Better repos, better pipelines, security that actually works in the developer loop. But honestly – the thing pushing organisations to move now is AI.

GitHub Copilot is the most adopted AI coding assistant in the world, and unlike a lot of AI tooling that bolts onto the side of your workflow, Copilot is properly woven in. IDE, pull requests, GitHub.com, and increasingly an autonomous agent that can implement whole features.

In the editor it does what most people have seen by now: real-time completions, single-line through to whole functions. In the Enterprise tier it’s also indexed against your own codebase, so it learns your libraries, your naming, your error handling, your internal APIs. That’s the bit that makes it feel different from generic Copilot – it stops suggesting plausible-but-wrong patterns and starts suggesting your team’s patterns.

Copilot Chat lives in the IDE and on GitHub.com. Ask it about unfamiliar code, request a refactor, debug a failing test, get it to write a workflow file from a description. Copilot Code Review does a first-pass review on every PR with inline suggestions, which doesn’t replace human review but does mean your reviewers spend their time on the things that matter rather than the obvious stuff.

The piece I’m watching most closely is Copilot Workspace. You describe a task, Copilot reads the relevant code, proposes a plan across files, makes the changes, writes the tests, drafts the PR description. The developer reviews everything before anything’s committed. It’s not replacing engineers – it’s collapsing the boring middle of the work.

The productivity numbers are healthy if you take vendor stats with the appropriate amount of salt. Around 55% faster on controlled tasks, ~30% suggestion acceptance rate, higher on boilerplate-heavy work. Your mileage will vary. But ask any engineer who’s been using it daily for six months whether they’d give it back, and you’ll get your answer.

Azure DevOps doesn’t really have an answer here. Microsoft has bolted some Copilot features onto it – work item summaries, basic PR descriptions – but it’s surface-level. Copilot has been native to GitHub since 2021, and every feature is built around the developer loop. This is where Microsoft’s AI investment in dev tooling is going. The gap is going to keep growing.

Don’t drag it out

The migration is well-tuned, well-documented, and, at this point, well-trodden by organisations that have already been through it. The ROI shows up in security posture, pipeline performance, developer experience, and the productivity gains from Copilot.

There’s also a cost to sitting on Azure DevOps that’s easy to wave away – not just the features you’re not getting, but the slowly widening gap between how your team works and how the engineering organisations you’d benchmark yourselves against work. Each month, that gap gets a bit wider.

The real question isn’t whether to move. It’s when to start. And there’s no good reason for it not to be now.

The post From Azure DevOps to GitHub Enterprise: Why Now Is the Right Time to Make the Move appeared first on Azure Greg.


How do I inform Windows that I’m writing a binary file?


A customer wanted to know how to inform Windows that they were opening a file in text mode, as opposed to binary mode. That way, Windows can perform text conversions as necessary, like adding carriage returns before linefeeds, or converting ASCII to Unicode.

Windows doesn’t know whether your file is binary or text. As far as Windows is concerned, it’s just a bunch of bytes, and it’s up to you to interpret it. So in a sense, all files are binary files. If you want to insert carriage returns before linefeeds, you will have to do it yourself.

Now, it is often the case that you are using a higher level library, like the C runtime, in which case you can ask the library to do it for you, such as opening the file in "w" mode to indicate that the runtime should treat the file as a text file, or in "wb" to open as a binary file. But this work happens in the runtime library, not in Windows itself. The runtime library performs the necessary transformations and passes binary data to Windows. There are no further transformations once the data hits WriteFile.
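You can see this layering from Python as well, whose runtime performs the translation before the bytes ever reach the operating system. In this small illustrative sketch, the newline argument makes Python do explicitly what the Windows C runtime does implicitly for "w"-mode files, so the result is the same on any platform:

```python
# The runtime, not the OS, performs text-mode newline translation.
# newline="\r\n" tells Python to expand "\n" to "\r\n" on write,
# mimicking Windows C-runtime text mode.
with open("text.txt", "w", newline="\r\n") as f:
    f.write("line\n")          # runtime expands \n before writing

with open("data.bin", "wb") as f:
    f.write(b"line\n")         # binary mode: bytes pass through untouched

print(open("text.txt", "rb").read())   # b'line\r\n' -- 6 bytes on disk
print(open("data.bin", "rb").read())   # b'line\n'   -- 5 bytes on disk
```

In both cases the OS just received a run of bytes; only the library layer knew (or cared) that one file was "text".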

“But wait, there’s an old MS-DOS ioctl AH=4401h (Set device information) where you pass flags in DX, and bit 5 is the raw (binary) mode bit. So what’s the Windows version of this ioctl?”

If you look more closely, that MS-DOS ioctl applies only to character devices. If you try to use it on a disk file, you get ERROR_INVALID_FUNCTION.

ioctl_check_permissions:
        CMP     AL,2
        JAE     ioctl_control_string
        CMP     AL,0
        MOV     AL,BYTE PTR ES:[DI+sf_fcb+fcb_devid]
        JZ      ioctl_read              ; read the byte
        OR      DH,DH
        JZ      ioctl_check_device      ; can I set with this data?
        error   error_invalid_data      ; no DH <> 0

ioctl_check_device:
        TEST    AL,devid_ISDEV          ; can I set this handle?
        JZ      ioctl_bad_fun           ; no, it is a file.

...

ioctl_bad_fun:
        error   error_invalid_function

This IOCTL can be used to tell the console things like whether to perform line buffering on input. The Win32 equivalent is SetConsoleMode, roughly corresponding to the Unix stty.

If you want to perform content transformations on files, you’ll have to do it yourself, or ask someone else (like the runtime library) to do it for you.

The post How do I inform Windows that I’m writing a binary file? appeared first on The Old New Thing.


Image AI models now drive app growth, beating chatbot upgrades

Appfigures finds visual model launches generate 6.5x more downloads — but most don’t convert that spike into revenue.

Microsoft’s OpenClaw team takes on the personal assistant challenge

Microsoft’s unofficial Ninja Cat mascot rides the OpenClaw lobster. (Image via Omar Shahine’s blog)

Bob. Clippy. Cortana. Copilot. Microsoft has been trying to unlock the personal-assistant puzzle for decades. Now a fledgling team inside the company that’s been experimenting with OpenClaw — an open-source framework that acts as both a virtual assistant and a platform for building and managing proactive agents — is taking a stab at the problem.

That team, headed by Corporate Vice President Omar Shahine, already has a working agent prototype and, as of May 1, more than 3,000 daily users inside Microsoft testing “Project Lobster,” the team’s OpenClaw-based desktop environment, up from 100 the previous week.

Not bad for a technology that CEO Satya Nadella dismissed as a security risk akin to “a virus” just a few months ago. A number of other companies, including OpenAI and NVIDIA, are also rushing to integrate the technology with their own.

Omar Shahine. (LinkedIn Photo)

The vision of Shahine’s team is to create “an always-on agent team (a Chief of Staff agent, an Executive Assistant agent, and a roster of specialist agents) that works 24/7 on your behalf within the Microsoft 365 ecosystem,” as he described it in a blog post.

It’s a “persistent runtime that monitors your signals continuously, prepares your day before you wake up, triages your inbox while you’re in meetings, and follows up on action items without being asked,” he explained.

OpenClaw, developed by Peter Steinberger (who, as of Feb. 2026, works for OpenAI) has only been publicly available since Nov. 2025, originally under the name Clawdbot.

Shahine had been dabbling with OpenClaw since earlier this year to automate tasks at home, such as drafting an email or investigating concert-ticket prices. He demonstrated how Lobster works during a presentation to Microsoft’s AI Accelerator group on Feb. 26. And by March 31, he had a new role at Microsoft: To bring OpenClaw and personal agents to Microsoft 365.

Microsoft recently has made forays into the autonomous-agent space with Copilot Tasks, an agent in preview for consumers that is designed to help with chores like triaging email and booking travel. On the business side, Microsoft is integrating Anthropic’s Cowork technology with Microsoft 365 Copilot in the form of “Claude Cowork,” which takes action inside the various Microsoft Office apps.

But neither of these approaches provides a virtual assistant working on users’ behalf 24/7 with access to people’s full, real lives, Shahine maintains. They can’t do things like order from DoorDash if a user is in back-to-back meetings or reschedule a call if it interferes with a family dinner. That gap is why he decided to target knowledge workers, he says.

Shahine’s team, known as Ocean 11, includes a handful of people, each running his/her own Lobster agent. The team is building out the runtime and supporting infrastructure needed to make Lobster work in an enterprise environment.

As Lobster is currently envisioned, it will work across all kinds of apps and Microsoft 365 and other data sources. It won’t need constant prompting, but instead, will suggest courses of action it can take, pending user approval.

And this is why Nadella and other security-minded professionals have qualms about OpenClaw: It works autonomously, can ingest untested inputs, maintains persistent credentials, and could turn things like prompt-injection attacks into action-injected ones.

Microsoft’s own Defender security team’s current guidance states: “OpenClaw should be treated as untrusted code execution with persistent credentials. It is not appropriate to run on a standard personal or enterprise workstation.”

In an interview, Shahine acknowledged that enterprise-hardening Microsoft’s OpenClaw-based offerings needs to be job No. 1. His team is designing prototype agents to have their own Microsoft 365 identities, meaning their own Entra IDs for governance, their own Exchange mailbox, their own Teams presence, and integration with the Microsoft Graph.

“My goal is to contribute to make OpenClaw better but also consume it and run it so that it’s also a reference design, reference pattern that people can look to and say, ‘Well, you know, it’s great. Microsoft figured out how to make this thing enterprise great,’” he said.

Shahine wasn’t ready to talk timetables or deliverables, beyond the Teams plug-in available for OpenClaw. But the team already has developed a Mac and Windows desktop environment called ClawPilot (no relation to clawpilot.ai) that it’s using internally to work with “claw-like agentic workflows.” Shahine said ClawPilot is acting as his personal assistant and goes by “Sebastien” (a nod to “The Little Mermaid”).

Microsoft Vice President Scott Hanselman has built a Windows node for OpenClaw which could get some airtime at Microsoft’s upcoming Build developer conference in San Francisco in June. Shahine said “there will be some concrete information about how we’re working to make Windows a fantastic environment for OpenClaw and other agentic systems to operate.”
