How GitHub could secure npm

In 2025, npm experienced an unprecedented number of compromised packages in a series of coordinated attacks on the JavaScript open source supply chain. These packages ranged from crypto-stealing malware [1] to credential-stealing exploits [2]. While GitHub announced changes [3] to address these attacks, many maintainers (myself included) found the response insufficient.

The impact of compromised packages

The scale of these attacks is staggering. In September 2025 alone, over 500 packages were compromised across two major attack waves. The first wave on September 8 compromised 20 widely-used packages with over 2 billion weekly downloads [4]. Despite being live for only 2 hours, the compromised versions were downloaded 2.5 million times. The second wave, known as Shai-Hulud, was even more insidious: a self-replicating worm that automatically propagated across 500+ packages.

While the total financial damage appears limited (approximately $500 in stolen cryptocurrency), the potential for significant harm is clear. If this was merely a test to gauge the feasibility of self-replicating attacks, we should prepare for more damaging attempts in the future.

The anatomy of an attack

To understand why npm’s latest updates may fall short, it’s important to understand how these attacks proceed.

  1. Steal credentials. Attackers steal the credentials of an existing npm maintainer, preferably someone with access to a high-traffic package or one frequently used by the intended target. Credentials are stolen either by compromising the maintainer’s npm account (as in the case of Qix) or by stealing npm tokens (as in the Nx compromise [5]).
  2. Add a preinstall or postinstall script. The attacker creates a malicious script that executes during the preinstall or postinstall phase of the npm package. This script runs automatically whenever npm install is executed, whether for the malicious package itself or any project using it.
  3. Publish the compromised package. The compromised package is published to the npm registry as a semver-patch update, increasing the likelihood it will be installed before the compromise is discovered.

How compromised packages spread

Compromised packages are often installed quickly after publishing due to npm’s default behavior. When using npm install, packages are added to package.json using a semver range beginning with a caret (^), such as ^1.2.3. This tells npm to install any version at or above the given version (in this case, 1.2.3) and below the next major version (2.0.0). So if 1.2.4 is the latest published version, it will be installed instead of 1.2.3. This behavior assumes packages follow semantic versioning, meaning non-major version bumps are always backwards compatible.
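
To make the range resolution concrete, here is a minimal TypeScript sketch using the semver package (the library npm itself relies on for range matching); the list of published versions is hypothetical:

import * as semver from "semver";

const range = "^1.2.3"; // the range `npm install` writes into package.json by default
const published = ["1.2.3", "1.2.4", "1.3.0", "2.0.0"]; // hypothetical registry versions

// npm installs the highest version the range allows: here 1.3.0, not 1.2.3.
console.log(semver.maxSatisfying(published, range)); // "1.3.0"

// A compromised semver-patch or semver-minor release still satisfies the range...
console.log(semver.satisfies("1.2.4", range)); // true
console.log(semver.satisfies("1.3.0", range)); // true

// ...while a semver-major release falls outside it and is never installed automatically.
console.log(semver.satisfies("2.0.0", range)); // false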

Attackers exploit this behavior by publishing compromised versions as semver-patch or semver-minor increments. This ensures that anyone doing a fresh install of a project using the package will download the compromised version instead of a safe one, provided their version range includes the new version.

While individuals may not do fresh installs frequently, continuous integration (CI) systems typically do. If not properly configured, CI systems can install the compromised package, potentially giving attackers access to cloud credentials that enable further attacks.

Ultimately, package consumers must take extra steps to avoid installing compromised packages, such as using lock files and immutable installs. However, npm’s defaults still make it too easy to use version ranges that result in automatic installation of compromised packages.

GitHub’s response to the attacks

In September, GitHub announced its response [6] to the attacks. The changes included:

  • Limiting publishing to local 2FA, granular access tokens [7], and trusted publishing [8].
  • Deprecating legacy classic tokens and time-based one-time password (TOTP) 2FA.
  • Enforcing shorter expiration windows for granular tokens.
  • Disabling access tokens by default for new packages.

These steps targeted the Shai-Hulud attack [9], which used a compromised package to scan for additional tokens and secrets. Those tokens and secrets were then used to publish more compromised packages, making the attack self-replicating.

GitHub’s response specifically targeted preventing future self-replicating attacks of this nature. Deprecating older, less secure legacy tokens helps limit the scope of an attack if a malicious actor obtains someone’s credentials.

However, the response has some limitations:

  • Reducing the usable lifetime of tokens only ensures that older, possibly forgotten tokens can’t be used in attacks. Infiltration of machines holding up-to-date tokens yields the same results.
  • While promoting trusted publishing as an alternative to tokens makes sense for open source projects hosted on GitHub or GitLab, it leaves others without a viable option. npm currently only supports GitHub and GitLab as OpenID Connect (OIDC) providers, so maintainers not using these systems cannot use this feature.
  • The first publish of a new package can’t use trusted publishing—it must be done with a token or locally using 2FA.
  • Trusted publishing is not yet complete; most notably, it lacks 2FA. This led the OpenJS Foundation to recommend against using trusted publishing [10] for critical packages.
  • Maintainers of many packages now need to rotate tokens at least every 90 days, creating a significant additional maintenance burden [11].
  • Maintainers of many packages must manually update every package through the npm web app, completing multiple 2FA verifications for each package.
  • Removing TOTP means maintainers always need a web browser available in the same environment as the publish operation.
  • The rapid rollout, along with shifting dates and lack of UI to accommodate common use cases, created confusion and frustration [12] among maintainers.

In short, GitHub’s response placed more responsibility on maintainers whose credentials were stolen and packages compromised. This created additional work for maintainers, especially those managing many packages. While these changes may reduce a certain type of attack, they don’t address npm’s systemic problems.

Problems like this require a different approach. To understand what that might look like, it helps to examine another industry facing similar challenges.

How npm is like the credit card industry

The credit card industry faces challenges similar to npm’s, except instead of compromised packages, they deal with fraudulent transactions. The attack vector is similar: both begin with stealing credentials. In this case, the credentials are credit card information rather than an npm login or token. I’m old enough to remember when stores would take imprints of credit cards and process all transactions in a batch at the end of the day. It was easy to commit fraud and never be caught using that system, so the credit card industry adapted.

Today, credit cards have several ways to prevent credential theft:

  • The cards themselves have chips that are difficult to duplicate (as opposed to the old magnetic stripes), making it easier to authenticate a physical card.
  • In some countries, you must enter a PIN along with presenting the chipped card to make a transaction, adding second-factor authentication to the process.
  • When using a credit card online, you need to enter not just the number, but also the expiration date, CVC number, cardholder name, and sometimes the postal code. All of this helps ensure that someone possesses the physical card and not just the card number.

Even so, credit card companies know that cards will still be stolen and used for fraudulent purchases, so they don’t stop at these measures. They also monitor their networks for suspicious activity.

If you’re a frequent credit card user, you’ve likely received a text or phone call asking if you made a particular transaction. That’s the credit card company’s algorithms flagging a transaction as outside your normal spending pattern. Maybe you typically make small purchases and suddenly buy a new kitchen appliance. It’s not fraudulent, but it’s unusual, so it gets flagged for verification. Maybe you travel to another state or country and use your credit card there. Again, it’s not fraudulent, but it doesn’t follow your typical usage pattern, so it’s best to verify before allowing the transaction. This is called anomaly detection, a standard practice for identifying unwanted or unexpected data in data streams.

What npm got wrong

GitHub’s response to the ongoing supply chain attacks focused solely on credential theft, which is why it falls short. We already know how packages become compromised, and while securing credentials is important, we also know that credentials will inevitably be stolen.

Credit card companies understand that fraudulent transactions will still occur regardless of how many additional factors they add to validation. That’s why they invest in anomaly detection in addition to securing credentials. Once credentials are compromised, they still want to protect consumers and merchants from fraud.

GitHub, on the other hand, has not invested in protecting the ecosystem from compromised packages as they are published. The latest changes place most of the responsibility on package maintainers. Long-time maintainer Qix fell victim to a convincing phishing attack—if even experienced maintainers can be compromised, less-seasoned maintainers face even greater risk.

Meanwhile, GitHub continues taking down malicious packages after they’ve already caused damage. However, there are proactive measures GitHub could implement, such as investing in the same kind of anomaly detection that helps credit card companies flag fraudulent transactions.

What GitHub could do with npm

Instead of continuing to focus solely on credential security, GitHub could analyze packages as they are published. (They already do this once they have identified Indicators of Compromise, effectively blocking new packages containing the same IoCs.) Given what we know about malicious packages, there are several ways the npm registry could be made more secure. Each of the following suggestions assumes the maintainer’s npm account has been compromised and therefore we cannot rely on the npm web app for verification.

Location tracking of publishes

Similar to how credit card companies track purchase locations and flag unexpected transactions, the npm registry could flag package publishes that occur from an unexpected location. The npm registry likely already tracks the IP address of operations, which can be used to infer the location of the person or system publishing the package. If an npm publish operation occurs from a location significantly different from the previous publish, npm could require verification via email to at least one maintainer.

Because we are assuming the package owner’s npm account has been compromised, npm 2FA offers little validation of the package owner’s identity. Instead, npm could require the maintainer to retrieve a code sent to their email to publish a package from an unusual location. This would require the attacker to have access to both the npm account and the email account, significantly raising the bar for publishing a compromised package.

What would count as an unusual location? Here are some examples:

  • The publish typically happens from a GitHub Actions datacenter, but this one happens from outside the datacenter.
  • The publish typically happens from a location in Florida, but this one happens in California.
  • The publish typically happens from a location in the United States, but this one happens in China.

These heuristics can be tuned according to the actual patterns observed in the npm registry. Popular web apps like Gmail and Facebook use similar location tracking to proactively intervene when an account appears compromised.
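
As a rough illustration, here is a hypothetical TypeScript sketch of such a check. The types, the helpers, and the two heuristics shown (known CI network and country change) are illustrative simplifications of the examples above, not an existing npm API:

interface PublishEvent {
  packageName: string;
  country: string;      // inferred from the publisher's IP address
  fromKnownCI: boolean; // e.g. the IP belongs to a GitHub Actions datacenter
}

function isAnomalous(current: PublishEvent, previous?: PublishEvent): boolean {
  if (!previous) return false;                                   // first publish: no baseline yet
  if (previous.fromKnownCI && !current.fromKnownCI) return true; // left the usual CI environment
  if (previous.country !== current.country) return true;         // published from a new country
  return false;
}

// Hypothetical helper: email a one-time code to at least one maintainer of the package.
async function sendEmailChallenge(packageName: string): Promise<void> {
  console.log(`Verification code emailed to the maintainers of ${packageName}`);
}

async function gatePublish(
  current: PublishEvent,
  previous?: PublishEvent
): Promise<"allowed" | "pending-verification"> {
  if (isAnomalous(current, previous)) {
    await sendEmailChallenge(current.packageName); // block the publish until the code is confirmed
    return "pending-verification";
  }
  return "allowed";
}

A real implementation would track finer-grained locations (state or city, as in the examples above) and plug the challenge step into npm’s existing email infrastructure.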

Require semver-major version bumps when adding preinstall or postinstall scripts

Because these attacks frequently use preinstall or postinstall scripts on packages that didn’t have one previously, detecting when a package is published with a preinstall or postinstall script for the first time is key. This could be done with a single bit indicating whether a major release line has a preinstall script and a single bit indicating whether it has a postinstall script. For instance, when 1.0.0 is published, the 1.x release line bits are set to indicate whether it has either a preinstall or postinstall script.

When the next version of the package is published in the same major release line (for example, 1.1.0 or 1.0.1), check the bits of the 1.x release line to see whether a preinstall script already exists. If the bit is set, there’s no need to further investigate preinstall for this new version (preinstall is already allowed). If the bit is not set, check package.json to see whether a preinstall script exists. If it does, this is a violation and the package publish must fail. If desired, the package may be published as the next major version (in the previous example, 2.0.0). Repeat the process with the postinstall bit.
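
Here is a minimal TypeScript sketch of that check, under the assumptions in this section; the flag storage and type names are hypothetical, not part of the actual registry:

interface ReleaseLineFlags {
  hasPreinstall: boolean;
  hasPostinstall: boolean;
}

interface PackageManifest {
  version: string;
  scripts?: Record<string, string>;
}

type Verdict = "allow" | "reject-require-major-bump";

// `flags` holds the two bits recorded when the major release line was first published;
// `undefined` means this is the first publish in the line, so the bits are simply recorded.
function checkLifecycleScripts(manifest: PackageManifest, flags?: ReleaseLineFlags): Verdict {
  if (!flags) return "allow";

  const addsPreinstall = Boolean(manifest.scripts?.preinstall) && !flags.hasPreinstall;
  const addsPostinstall = Boolean(manifest.scripts?.postinstall) && !flags.hasPostinstall;

  // A lifecycle script appearing where the line's bit is not set is a violation:
  // the publish fails unless it is re-issued as the next semver-major version.
  return addsPreinstall || addsPostinstall ? "reject-require-major-bump" : "allow";
}

// Example: the 1.x line was published without lifecycle scripts, and 1.0.1 now adds postinstall.
console.log(
  checkLifecycleScripts(
    { version: "1.0.1", scripts: { postinstall: "node setup.js" } },
    { hasPreinstall: false, hasPostinstall: false }
  )
); // "reject-require-major-bump"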

This type of anomaly detection effectively removes one of the attacker’s main weapons: the speed with which a compromised package is installed. Because a forced semver-major version bump places the new version outside the default range for npm dependencies, it will not automatically be installed in most projects. Some projects with customized dependency ranges (such as > 1.0.2) will still be affected, but the majority will be safe. This delay will hopefully both dissuade some attackers and make it easier to detect problems before they affect too many systems.

Require email-based 2FA when adding preinstall or postinstall scripts

In addition to requiring a semver-major version bump when adding preinstall or postinstall scripts, npm could also enforce verification via email to publish a new version with a preinstall or postinstall script where one didn’t previously exist. This could use the same email-based 2FA system as location anomaly detection.

Require double verification for invited maintainers

The current system for inviting maintainers to a package leaves a gap that could allow attackers to circumvent email-based 2FA. Because the invitation process is single opt-in on the part of the invitee, an attacker could compromise an npm account and then invite a separate npm account as a maintainer to receive any email-based 2FA requests. To prevent this, the invite system should be updated so that all current maintainers receive an email asking them to confirm they intended to invite the new maintainer. As long as one of the current maintainers approves, the invite will be sent to the new maintainer.
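
A hypothetical TypeScript sketch of that double opt-in flow (none of these types or helpers exist in npm today):

interface PendingInvite {
  packageName: string;
  invitee: string;
  approvedBy?: string; // set when any current maintainer confirms the invite
}

// Every current maintainer is emailed a confirmation request when the invite is created.
function createInvite(packageName: string, invitee: string, currentMaintainers: string[]): PendingInvite {
  currentMaintainers.forEach((m) =>
    console.log(`Confirmation email sent to ${m} for the invite of ${invitee} to ${packageName}`)
  );
  return { packageName, invitee };
}

// One approval from any existing maintainer is enough to release the invite to the invitee.
function recordApproval(invite: PendingInvite, approvingMaintainer: string): PendingInvite {
  return invite.approvedBy ? invite : { ...invite, approvedBy: approvingMaintainer };
}

function canSendInvite(invite: PendingInvite): boolean {
  return invite.approvedBy !== undefined;
}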

A plea to GitHub

We know you want to be responsible stewards of the JavaScript ecosystem. We know the npm registry requires significant effort to maintain and is costly to run. However, npm’s infrastructure needs more attention and resources. The response to these attacks was reactive and implemented without gathering feedback from the community most affected. Now is the time to invest in proactive security measures that can protect the registry against what is certain to be an increasing number and intensity of attacks.

Conclusion

GitHub has an opportunity to take a more proactive approach to securing the npm registry. Rather than placing the burden solely on maintainers to protect their credentials, GitHub could implement anomaly detection systems similar to those used by the credit card industry. The suggestions outlined here (location tracking, restrictions on lifecycle scripts, and improved verification processes) would create multiple layers of defense that work even after credentials are compromised. These measures wouldn’t eliminate all supply chain attacks, but they would significantly reduce the window of opportunity for attackers and limit the damage compromised packages can cause. Most importantly, they would demonstrate a commitment to protecting the entire JavaScript ecosystem, not just responding to attacks after they’ve already succeeded. The technology and patterns for these protections already exist in other industries. It’s time for GitHub to apply them to npm.

Footnotes

  1. npm Author Qix Compromised via Phishing Email in Major Supply Chain Attack ↩

  2. Popular Tinycolor npm Package Compromised in Supply Chain Attack Affecting 40+ Packages ↩

  3. Our plan for a more secure npm supply chain ↩

  4. New compromised packages identified in largest npm attack in history ↩

  5. Nx Investigation Reveals GitHub Actions Workflow Exploit Led to npm Token Theft, Prompting Switch to Trusted Publishing ↩

  6. Our plan for a more secure npm supply chain ↩

  7. About granular access tokens ↩

  8. Trusted Publishers for All Package Repositories ↩

  9. Updated and Ongoing Supply Chain Attack Targets CrowdStrike npm Packages ↩

  10. Publishing More Securely on npm: Guidance from the OpenJS Security Collaboration Space ↩

  11. Comment: Classic token removal moves to December 9, bundled with new CLI improvements ↩

  12. Update: Classic token removal moves to December 9, bundled with new CLI improvements ↩

MVP Dominick Raimato Showcases Real-Time Intelligence with Microsoft Fabric & IoT

What Happens When Real‑Time Data Gets Hands‑On? MVP Dominick Raimato Shows Us

When you walk into a conference session and someone immediately pulls out a Raspberry Pi, you know you’re in for something fun. And when that someone is Microsoft MVP Dominick Raimato, you’re definitely about to learn something unforgettable.

At the PASS Data Community Summit, Dominick showcased in his session how real‑time intelligence becomes real‑world magic - powered by Microsoft Fabric, a few pocket‑sized devices, and a whole lot of creativity. His goal? Make real‑time data feel tangible, interactive, and exciting for everyone - no technical background required.

 

Dominick Raimato shows the Raspberry Pi devices that power his hands‑on real‑time intelligence demo with Microsoft Fabric

Why Raspberry Pi? Because Seeing Is Believing

Dominick didn’t choose Raspberry Pi because it’s trendy. He used it because people learn best when they can touch what they’re learning. Years ago, at The Hershey Company, he and a colleague built a portable IoT‑to‑ML solution that streamed predictions in real time so stakeholders could literally watch data come alive. That same idea - “let people touch the data” - is what inspired him to bring Raspberry Pi into the Microsoft Fabric world.

With Fabric, he rebuilt the experience so anyone in the room could influence the data themselves. Tap the device. Trigger events. Watch predictions stream instantly. It’s hands‑on learning meets high‑energy show‑and‑tell.

 

The Raspberry Pi devices Dominick Raimato used to bring real‑time intelligence to life during his Microsoft Fabric demo.

The Demo Challenge No One Sees Coming

Conference sessions move fast. Like - really fast. Dominick had just minutes between sessions to set up a network for six devices and connect to a projector. Re‑creating his old Ubuntu‑powered portable network didn’t go smoothly (RIP to the stability issues), but a recommendation for the perfect travel router saved the day. Because of course: behind every great demo is a great piece of travel gear.

Explaining Real‑Time Intelligence… Without the Jargon

If you’re new to the topic, Dominick has a way of making it all click immediately. Take this example he uses: You get alerts on your phone all day - messages, news, reminders. Now imagine getting a proactive heads‑up that you need an umbrella before stepping outside. That’s real‑time intelligence: constant, instant insights that help you decide what to do right now.

Why Real‑Time Data Matters More Than Ever

Businesses aren’t waiting for dashboards anymore - they need information as it happens. In healthcare, fraud detection, retail, and manufacturing, a 15‑minute delay isn’t acceptable. In some cases, it’s life‑or‑death. Real‑Time Intelligence enables immediate action: the moment something changes, your system knows - and responds.

How Fabric Makes It All Easier

Traditionally, real‑time architecture requires stitching together Event Hubs, Stream Analytics, storage containers, ML models, and more. With Microsoft Fabric? You can run the whole pipeline in one place, without spinning up any Azure resources. It’s simpler, faster, and far more accessible - perfect for demos, teaching, and prototyping bold ideas.

Real‑World Impact: From Weather Sensors to… Twizzlers?

One of Dominick’s favorite examples is a project from his time at Hershey. Using concepts like those in his demo, the team created a solution to predict Twizzlers product weights during extrusion - helping operators avoid underweights (rework) and overweights (giving away product). The goal wasn’t to replace workers; it was to empower them with real-time insights for better control. You can read more about how Hershey used IoT for improved efficiency in making Twizzlers.

Want to Try It Yourself?

Dominick points newcomers to a few easy, approachable ways to start experimenting:

No engineering background required - just curiosity.

The Future: Data That Moves People

Dominick sees a world where event‑driven systems are core to modern analytics. Real‑time dashboards shouldn’t be “hotel art”—interesting to look at but driving no action. The future belongs to insights that trigger immediate decisions, escalating eventually into full automation as organizations mature their data strategy.

The Most Underrated Skills for This Work

Here’s the part that surprises many aspiring data pros: Dominick says the most important skills aren’t technical at all. Communication, project management, building business cases, and cross‑team collaboration often matter more than coding. Real‑time projects touch many parts of an organization, and success requires navigating that complexity with people skills.

One Last Piece of Advice

If you’re thinking of taking on a similar project: expect things to go wrong. Dominick’s demo has broken - spectacularly - mid‑presentation three times. But failures are part of innovation. Keep your eyes on the bigger picture and the long-term value your idea can bring.

About MVP Dominick Raimato

Dominick is a Microsoft MVP recognized for expertise in data analytics and cloud technologies. He’s passionate about making complex topics feel simple, approachable, and - most of all - fun. Through hands‑on demos, real‑world stories, and community contributions, he inspires data professionals to explore the future of real-time intelligence and IoT.

Photo credit of Dominick Raimato at Pittsburgh SQL Saturday: MVP Eugene Meidinger 

SKIA Everywhere Demo | Uno Platform Studio + 6.0 Webinar

Windows Package Manager 1.28.90-preview

This is a preview build of WinGet for those interested in trying out upcoming features and fixes. While it has had some use and should be free of major issues, it may have bugs or usability problems. If you find any, please help us out by filing an issue.

New in v1.28

What's Changed

Full Changelog: v1.12.440...v1.28.90-preview

Semantic Reranking with Azure SQL, SQL Server 2025 and Cohere Rerank models

Supporting re‑ranking has been one of the most common requests lately. While not always essential, it can be a valuable addition to a solution when you want to improve the precision of your results. Unfortunately, there isn’t a universal, standardized API for a “re‑rank” call across providers, so the most reliable approach today is to issue a manual REST request and build the payload according to the documentation of the re‑ranker you choose.

How a Re-ranking Model Improves Retrieval

Vector search is excellent for quickly finding likely matches, but it can still surface items that aren’t the best answer. A re‑ranker, typically a cross‑encoder, takes your query and each candidate document, scores each document for semantic relevance to the query, and then sorts the list so the most useful items rise to the top. This two‑stage pattern, retrieve and then re‑rank, can significantly enhance RAG pipelines and enterprise search by improving relevance and reducing noise. Joe Sack wrote a great article about this here: “From 15th Place to Gold Medal”, if you want to learn more.

Azure SQL DB Vector Search Sample

The Azure Samples repository for Azure SQL DB Vector Search demonstrates how to build retrieval with native vector functions, hybrid search, EF Core/SqlClient usage, and more, giving you the first-stage retrieval that produces candidates to feed a re‑ranker. You can plug any re‑ranker behind it via a REST call. I have updated the existing DiskANN sample to include reranking, and I have also created a completely new example using the SemanticShoresDB sample database (which was also created by Joe!)

  • SemanticShoresDB Reranking Sample: Uses Joe Sack’s sample database for a more realistic dataset. His excellent post, From 15th Place to Gold Medal, explains why semantic ranking matters and how it can dramatically improve relevance.
  • Wikipedia Reranking Sample: Provides a simple kickstart and demonstrates a full end-to-end scenario that combines hybrid search (vector search plus full-text search) with semantic re‑ranking.

Making the Re-rank REST call

Cohere’s Rerank models, also available through Azure AI Foundry, accept a query and a list of documents and return each item with a relevance score and a re‑ordered list. The essence of the payload looks like this:

{
  "model": "rerank-v3.5",
  "query": "Reset my SQL login password",
  "documents": [
    { "text": "How to change a password in Azure SQL Database..." },
    { "text": "Troubleshooting login failures in SQL Server..." },
    { "text": "Granting permissions to users..." }
  ],
  "top_n": 3
}

The response contains results with index and relevance_score, which you then use to reorder your candidate set before building the final context for your RAG answer. If you want to pass more than just text to the re-ranker, say, for example, the Id of a product or an article in addition to the description, you need to pass everything as YAML. So, YAML inside JSON. An interesting approach, if you ask me 🤣:

{
  "model": "Cohere-rerank-v4.0-fast",
  "query": "cozy bungalow with original hardwood and charm",
  "top_n": 3,
  "documents": [
    "Id: 48506\nContent: <text>",
    "Id: 29903\nContent: <text>",
    "Id: 12285\nContent: <text>"
  ]
}

One of the most powerful recent updates to the SQL engine is its ability to manipulate JSON and strings directly within T-SQL. This is helpful for building the payload required to communicate with models like Cohere’s Rerank. Thanks to features like REGEXP_SUBSTR, JSON Path expressions, and the string concatenation operator ||, constructing complex JSON or YAML structures is now straightforward and efficient. These capabilities allow you to dynamically assemble the query, documents, and parameters for the REST call without leaving the database context, making integration with external AI services much easier.

Interpreting the result returned from the Cohere model once the REST call has been made via sp_invoke_external_rest_endpoint is also an interesting challenge, as the returned results are position-based:

"results": [
    {
      "index": 4,
      "relevance_score": 0.812034
    },
    {
      "index": 0,
      "relevance_score": 0.8075214
    },
    {
      "index": 1,
      "relevance_score": 0.80415994
    }
  ],

which means that the new ability of SQL Server 2025 and Azure SQL to extract a specific item from a JSON array comes in very handy:

SELECT 
    -- Use REGEXP_SUBSTR to extract the ID from the document text
    CAST(REGEXP_SUBSTR(
        JSON_VALUE(@documents, '$[' || [index] || ']'),  -- Get nth document using JSON path
        'Id: (\d*)\n', 1, 1, '', 1  -- Extract the numeric ID
    ) AS INT) AS property_id,
    *
FROM 
    OPENJSON(@response, '$.result.results')
    WITH (
        [index] INT,
        [relevance_score] DECIMAL(18,10)
    )

Putting It All Together

The general approach is simple: retrieve candidates in Azure SQL using vector functions and optionally combine with full-text for hybrid retrieval, call your chosen re‑ranker via REST with the query and documents, reorder based on the returned relevance scores, and finally assemble the grounded context for your LLM. While not mandatory, adding re‑ranking can be a valuable enhancement to improve precision and deliver more relevant answers.
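
For comparison, here is what the call-and-reorder step could look like outside the database, as a minimal TypeScript sketch. The endpoint URL, API key, and authorization header are placeholders for whatever your Cohere or Azure AI Foundry deployment requires; the payload and response shapes follow the examples shown earlier:

// Placeholders: substitute the endpoint and credentials required by your deployment.
const RERANK_URL = "https://<your-rerank-endpoint>";
const API_KEY = "<your-api-key>";

interface RerankResult { index: number; relevance_score: number; }

// Sends the query and candidate documents to the re-ranker and returns the documents
// reordered by relevance. Requires Node 18+ (global fetch).
async function rerank(query: string, documents: string[], topN: number): Promise<string[]> {
  const response = await fetch(RERANK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${API_KEY}` },
    body: JSON.stringify({ model: "rerank-v3.5", query, documents, top_n: topN }),
  });
  const body = (await response.json()) as { results: RerankResult[] };

  // The results are position-based: `index` points back into the original documents array.
  return body.results.map((r) => documents[r.index]);
}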

Check out the GitHub repo here: https://github.com/Azure-Samples/azure-sql-db-vector-search to start evaluating adoption of semantic re-ranking in your solutions.


The Untold Story of Visual Basic

At 2:00 a.m., when the rest of the world is deep in slumber, a warehouse grinds to a halt. Shipping labels refuse to print—no slow trickle of output, just the abrupt cessation of a critical process. The production line stalls, and expensive idle time ticks by as workers stand helpless, a visual metaphor for the cascading effects of legacy systems in the technology-driven world.

This video is from CodeSource.

This late-night crisis isn’t just a case of outdated software failing at an inopportune time. It’s a profound illustration of how businesses are inexorably tied to the technological tools they rely on—tools that were often not designed for the tasks they now handle but have become critical to operations. The software in question, armed with a nostalgic .exe name reminiscent of a simpler digital age, still carries the weight of modern commerce on its shoulders because, at its core, is Visual Basic—a programming language that bridged the gap between the complex, unforgiving languages used to build earlier software applications and the more intuitive, user-friendly interfaces that have become standard today.

Let’s rewind to the late 1980s and early 1990s, a pivotal time for software development. Windows was burgeoning as the forefront OS, inviting a diverse audience to engage with technology, from tech-savvy administrators to novices like bank tellers. This environment set the stage for what would become a seismic shift in how software development was approached, courtesy of an innovation named Visual Basic.

Created initially as a tool to let users design their own Windows experiences, Visual Basic was almost sidelined internally at Microsoft. It was born from a prototype tool, crafted by Alan Cooper, meant to allow customization of Windows interfaces by its users through a palette of drag-and-drop elements. This approach was groundbreaking—it democratized Windows programming, making it accessible and understandable. The philosophy was simple: don’t create one rigid system but empower users to build an environment tailored to their needs.

The transformative nature of Visual Basic became evident when its potential was recognized by none other than Bill Gates, who pivoted the project away from being just another utility into a core product offering, reimagined through the lens of the BASIC programming language, known for its ease of use and forgiveness in coding syntax. This pivotal moment underscored a fundamental shift from coding being a gatekept domain of specialists to becoming a tool accessible to a broader base, aligning with Windows’ goal to be the platform for everyone regardless of technical skill.

When Visual Basic hit the market in 1991, it fundamentally inverted the development paradigm: no longer did you need to start by establishing a detailed, complex groundwork before adding functionality. Instead, Visual Basic allowed developers to start with a form, an empty canvas, and add elements progressively in a more natural, intuitive manner. This alignment with human thinking—buttons where fingers click, fields where eyes naturally scan—represented not just a technological advancement but a cognitive breakthrough.

The simplicity and intuitiveness of Visual Basic sparked its widespread adoption beyond professional developers to “citizen developers” at businesses—analysts, office managers, IT generalists—who could now craft customized tools quickly without waiting for budget approvals or extensive development cycles. This capability to address immediate business needs cheaply and quickly cemented Visual Basic at the core of many business operations, even as newer, sleeker programming languages and technologies emerged.

Yet, with the arrival of Visual Basic .NET in 2002, the landscape changed again, challenging businesses reliant on the older VB6 with difficult decisions about whether to transition to the new, incompatible version or maintain their existing codebase—often choosing the latter for economic reasons. Thus, while newer software development platforms moved forward, Visual Basic, particularly VB6, remained entrenched in numerous organizations, a quiet but formidable presence dictated not by innovation but by necessity and inertia.

So, did Visual Basic fade into obscurity? Not exactly. It transitioned from a leading technology to a critical legacy system, often invisible until a dire need—like a warehouse’s halted label printer at 2:00 a.m.—calls it back into action. It demonstrates that technology’s lifespan in the business world extends far beyond its last line of official support; it lives as long as it serves a crucial need, becoming a silent partner in the ongoing operation of businesses.

Visual Basic’s legacy is a testament to the power of user-friendly design in technology—a reminder that the tools we build profoundly shape the tasks we do and, by extension, the businesses and lives those tasks support. Like any good tool, it doesn’t disappear just because newer models have emerged; it becomes part of the foundation, holding up the structures built long after its supposed prime. And that resilience, that ability to persist because it works, is perhaps the truest measure of success in technology.
