Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
137244 stories
·
31 followers

Microsoft reportedly pulls back on its data center plans

1 Share
Microsoft has pulled back on data center projects around the world, Bloomberg reports, suggesting that the company is wary of expanding its cloud computing infrastructure too rapidly. Microsoft has halted talks for, or delayed development of, data center sites in the U.K., Australia, North Dakota, Wisconsin, and Illinois, per Bloomberg. A spokesperson told the publication […]
Read the whole story
alvinashcraft
3 hours ago
reply
Pennsylvania, USA
Share this story
Delete

3 ways to level up your studying with NotebookLM

Here’s how NotebookLM can help students prepare for their final exams.

Interview: Microsoft CEO Satya Nadella on the tech giant’s 50th anniversary — and what’s next


[Editor’s Note: Microsoft @ 50 is a year-long GeekWire project exploring the tech giant’s past, present, and future, recognizing its 50th anniversary in 2025.]

Satya Nadella sees in Microsoft’s history a blueprint for its future.

“That very first product of ours — that BASIC interpreter for the Altair — I think says it all,” the Microsoft CEO said in an interview with GeekWire this week, as the company prepared to mark its 50th anniversary.

By developing a programming tool for one of the first personal computers, Nadella explained, Microsoft co-founders Bill Gates and Paul Allen were creating technology to help others create more technology.

“That was true in ’75, and that is true in ’25, and that will be true, I believe, in 2050,” he said. “Technologies will come and go, but the idea that this company can stay relevant by producing technology so that more and more people around the world can create more digital technology … that, I think, is the core thread of Microsoft.”

Nadella is just the third person to serve as Microsoft’s CEO, following Gates and Steve Ballmer — both of whom are expected to join him for a rare joint appearance at Microsoft’s Redmond headquarters Friday in recognition of the company’s first half-century in business. 

Now in his 12th year as Microsoft’s CEO, Nadella has led a resurgence of the venerable tech company, helping Microsoft find its footing in the cloud and stake its claim in the new world of artificial intelligence.

Microsoft is one of the world’s most valuable public companies, with a market capitalization hovering around $2.8 trillion as of this week, second only to its longtime rival and partner Apple by that measure. 

Leveraging its partnership with OpenAI, the company jumped out to an early lead with its GitHub Copilot coding companion. It has been aggressively rolling out AI in an effort to update its flagship franchises like Windows and Office, and offering AI tools via its Azure cloud platform.

But as Gates pointed out in an interview with GeekWire, the competitive landscape is fundamentally shifting.

In the past, Gates explained, the major players in tech have carved out different corners of the tech world. He cited Google in search, Microsoft in Office and Windows, and Amazon in cloud computing and retail.

“Although there’s some intense competition and overlap, we each have some areas of very high strength,” Gates said. 

But now, as all of these companies race into AI, the lines are blurring, the pace is accelerating, and the battle is becoming “hyper competitive.”

“The pace of innovation will have to be very, very fast, despite the capital costs involved. And these tools will just improve very rapidly,” said Gates, who continues to advise Nadella and Microsoft’s product teams. 

With a tone of cautious optimism, the famously competitive and paranoid Microsoft co-founder added, “I hope Microsoft can lead the way.”

In addition to intense competition, challenges for Microsoft will include the massive capital expenditures that come with its AI infrastructure buildout — expected to total $80 billion in the current fiscal year alone.  

Microsoft also needs to navigate its complicated partnership and investment in OpenAI, the AI pioneer best known for developing ChatGPT. 

Nadella (center) with former Microsoft CEOs Bill Gates (left) and Steve Ballmer (right) on the day he was announced as Microsoft’s third CEO. (Microsoft Photo)

Ballmer, the company’s largest individual shareholder, said in an interview that he understood the pragmatic trade-off at the heart of that relationship, given Microsoft’s decades of investment in its own AI research.

“What Satya did with OpenAI, I think was brilliant — and I think it’s fraught with peril, but I know they know that,” he said. “It’s sort of a juggling act.”

One big question looming over all of this: Can Microsoft deliver the killer app for AI — the defining breakthrough that cements its role in the next era of computing? 

There are more parallels here to the early days of Microsoft and the PC, when applications like spreadsheets and word processors opened the eyes of the industry and the public to the power of new technology. 

In the interview this week, Nadella said he sees signs of that same potential in tools like GitHub Copilot, Microsoft’s AI-powered coding assistant, which he described as a turning point that opened his eyes to the potential of generative AI. 

“When I started seeing code completions is when I started believing,” Nadella said. 

The features later expanded to include chat functionality, enabling developers to ask questions and get AI-generated answers directly in their coding environment. Then came multi-file editing, followed by AI agents capable of making changes across entire code repositories.

“We are going from a pair programmer to a peer programmer,” Nadella explained. “That’s the type of system we now have.”

Nadella pointed to similar advances across Microsoft 365, where Copilot tools and agents now assist with everything from research to data analysis — tasks that once required teams of humans or hours of manual work.

Just prior to the interview this week, Nadella said, he had three customer meetings. Beforehand, he asked his Microsoft 365 Copilot Researcher agent to get him up to speed.

It created comprehensive briefing documents comparable to what a human analyst would produce, from internal and external sources including Office documents, a CRM database, and the web.

“It’s unbelievable,” Nadella said. “These are products I use all the time with high intensity. I think we’re beginning to see the value, just like Excel and PowerPoint or Outlook did it back in the day.”

Without divulging Microsoft’s product plans, Nadella offered a deeper explanation of something both he and Gates have alluded to in recent months: the need for a new type of inbox for the AI era.

He described a future in which knowledge workers are supported by fleets of AI agents — researchers, analysts, coders — each performing tasks autonomously or in coordination with their human counterpart. 

In this model, users issue instructions, sometimes staying in the loop, sometimes delegating entirely — while still needing a clear way to coordinate and manage the flow of these AI agents. 

That’s where it starts to feel like “a new type of inbox,” he said, “where the coordination of the work agents do, with us in the loop, will require new types of organizing layers.”

Back in 2014, when Nadella became Microsoft CEO, Ballmer encouraged him to be his own person. “In other words, don’t try to please Bill Gates or anyone else,” Nadella wrote in his 2017 book, Hit Refresh.

In that spirit, Nadella has brought his own global perspective and personality to the role — including his longtime love of poetry. 

In an interview in 2017, after his book’s release, I asked Nadella to cite a line of poetry that he thought best described the future at that time. He quoted a line from Vijay Seshadri’s Imaginary Number: “The soul, like the square root of minus one, is an impossibility that has its uses.”

Microsoft CEO Satya Nadella discusses the company’s Copilot AI technology during a media event in Redmond, May 2024. (GeekWire Photo / Todd Bishop)

Nadella said at the time that the line captured the force inside us “that seeks out the unimaginable, that gets us up to solve the impossible.” 

These days, the line also conjures up images of quantum technologies, a field in which Microsoft recently claimed a breakthrough that it says will advance the world beyond traditional binary computing, promising to ultimately help solve some of the world’s most difficult problems.

So I asked this week, is there another line of poetry that Nadella would cite in 2025 to reflect his feelings about Microsoft, the industry, or the future?

This time, Nadella referenced one of his all-time favorite lines, from the mystical Austrian poet Rainer Maria Rilke, who wrote that “the future enters into us, in order to transform itself in us, long before it happens.” 

Nadella called this “a beautiful thing” for technology builders — the people for whom Microsoft has been making technology for five decades now. To make the future a reality, first you have to live it. And that, the Microsoft CEO said, “is probably the best ‘builder’ line that I’ve ever heard.”

Watch GeekWire’s interview with Microsoft CEO Satya Nadella above.


Sponsor Post

Accenture proudly joins GeekWire in recognizing Microsoft’s 50th anniversary, marking over 35 years as a trusted partner and change driver.

As the 2024 Partner of the Year in Business Transformation for Copilot, our unique alliance with Microsoft and Avanade positions us to reimagine the industry and reinvent the future through the revolutionary impact of AI. Together, we are partners in change.




Microsoft’s miniature Windows 365 Link PC is available to buy now


Microsoft’s business-oriented “Link” mini-desktop PC, which connects directly to the company’s Windows 365 cloud service, is now available to buy for $349.99 in the US and in several other countries. Windows 365 Link, which was announced last November, is a device that is more easily manageable by IT departments than a typical computer, while also reducing the need for hands-on support.

If you’ve worked for a company with an internal IT department in the last decade, you’ve probably come across small “thin client” PCs that run a virtual Windows PC off an on-site server. The Windows 365 Link is basically a modern version of the thin client, but it runs over the internet so that you can work from home or anywhere. It’s also designed to boot in seconds, which sounds like a better experience than the thin clients of the past. Microsoft says that Windows 365 Link was tested in a preview program by over 100 organizations, and that it refined the software experience before putting the device on sale.

Since it is being marketed to businesses, you won’t be able to easily buy it for home use like any consumer PC; instead, you’ll need to contact a Microsoft account team or authorized reseller (and may have to buy more than one). Windows 365 Link is available in the US, Canada, Australia, UK, Germany, Japan, and New Zealand.


Google’s NotebookLM can now find its own sources


Google has added a new feature to NotebookLM that lets the AI note-taking tool find its own web sources to summarize and narrate. Instead of manually uploading sources like documents or YouTube links, users can now tap the “Discover” button and simply describe the topic they want to get a better understanding of, with the tool then gathering web sources around the subject.

Google says the Discover feature started rolling out on Wednesday, and will take “about a week or so” to be available to all users.

NotebookLM will hunt through “hundreds of potential web sources in seconds” according to Google, analyzing the most relevant options and then presenting a list of up to ten recommendations, each with a summary explaining its relevance. Users can select which of these sources they want NotebookLM to reference, and import them to use in other features, including FAQs, Briefing Docs, and podcast-like Audio Overviews that use AI hosts to discuss a topic.

A GIF demonstrating NotebookLM’s new Discover sources feature.

Sources will be saved within NotebookLM to allow users to read them directly and use them as references for citations, note-taking, and question-answering capabilities. Google says that Discover sources is the first of several Gemini-powered NotebookLM features that are being developed to make it easier for users to find relevant notebook reference materials.

Another capability spun from this is “I’m Feeling Curious” — a button that prompts NotebookLM to generate sources on a completely random topic. It’s a good way to see what the feature is capable of, but also a fun way to learn about new subjects, much like Wikipedia’s random article feature.


PowerShell Remoting in a Workgroup Environment: A Step-by-Step Guide

This tutorial guides you through setting up PowerShell remoting between non-domain-joined computers.


A Developer’s Guide to Server-Side JavaScript


Developers turn to databases for storing, retrieving, and manipulating data whenever applications need to handle state. This isn’t debated anymore; using databases for your persistence layer is a proven, mature approach. However, the follow-up decision has been fiercely discussed for decades: Where should you place your application’s business logic?

Client-Side vs. Server-Side Business Logic

On the one hand, developers like to be in control, performing all operations concerning the application’s data in the frontend. Writing stored code in the database requires knowledge of SQL and the procedural language your database supports. Whether that’s PL/SQL, T-SQL or PL/pgSQL, a React developer might not be familiar with it. Writing business logic in the same language as the frontend (or microservice) comes naturally.

The proponents of stored code executed within the database rightly point out that duplicating database functionality inside the application — ensuring atomicity, consistency, isolation and durability — is redundant. Multiply that duplication across many applications and the extra effort becomes painfully apparent. Furthermore, issues might arise regarding data quality, governance, auditing and so on. And we haven’t even talked about the performance benefits of well-written stored code yet.

This discussion appears to have reached a stalemate if you follow social media and websites like Reddit and Stack Overflow.

Wouldn’t it be nice to have the best of both worlds: a familiar programming language to write your business logic, plus all the benefits of running the code where the data resides? JavaScript, for example, is one of the most popular languages, and Oracle Database 23ai is among the databases supporting server-side JavaScript, based on the hugely popular GraalVM; MySQL is another example of such a database management system.

Let’s take a look at how developers can write server-side JavaScript in Oracle Database 23ai.

What Is Multilingual Engine and How Do You Use It?

Multilingual Engine, or MLE for short, allows developers to store and execute JavaScript code inside the Oracle database. It implements the ECMAScript 2023 standard and has many built-in functions.

You can use existing JavaScript modules from a content delivery network or write your code just as you would in PL/SQL. Using existing modules can significantly speed up development, provided the module’s license is compatible with your project and no other compliance issues prevent its use.

Use Case No. 1: Embed Third-Party Modules in Your App

A common database task is to validate input to help ensure data quality. The popular validator library provides a plethora of string validation methods. Let’s assume that your task at hand is to validate email addresses. Using JavaScript, that’s simple.

Start by downloading the validatorjs module from your favorite CDN. The following example was run on macOS; you might have to adapt the curl arguments for Windows.

curl -Lo validator-13.12.0.js 'https://cdn.jsdelivr.net/npm/validator@13.12.0/+esm'


Oracle’s SQL Developer Command Line (SQLcl) offers the most convenient way to deploy the JavaScript module to the database. The following SQLcl command creates a new module named validator_module in the database based on the downloaded file’s contents. It’s good practice to also provide the module version.

mle create-module -filename validator-13.12.0.js -module-name validator_module -version 13.12.0


The module is created as a new schema object; its properties are available in the data dictionary. Before you can use it in SQL and PL/SQL, you must create a so-called call specification. According to its documentation, validatorjs offers a function named isEmail that does precisely what is needed: validate whether a string is an email address. Let’s expose the function to SQL:

create function is_email(p_string varchar2)
return boolean as
 mle module validator_module
 signature 'default.isEmail';
/

That’s all there is to it. Let’s validate some strings:
SQL> with sample_data (email_address) as (
  2   values
  3    ('not a  valid email address'),
  4    ('user@domain.com'),
  5    ('user@'),
  6    ('user~~name@domain.com')
  7  )
  8  select
  9    email_address,
 10    is_email(email_address) valid_email_address
 11  from
 12    sample_data
 13  /

EMAIL_ADDRESS                 VALID_EMAIL_ADDRESS    
_____________________________ ______________________ 
not a  valid email address    false                  
user@domain.com               true                   
user@                         false                  
user~~name@domain.com         true


Any database client capable of executing SQL can call the function.

Use Case No. 2: Writing Custom MLE Modules

Writing custom JavaScript modules is another popular use case. Before diving into the mechanics, it’s essential to understand how module resolution works in Oracle Database. Unlike Node, where you have multiple ways of defining import specifiers, the database stores JavaScript modules as schema objects. Therefore, Oracle’s naming resolution algorithm must map an import specifier to an existing JavaScript module. This is done using an MLE environment, another new schema object introduced in release 23ai.

Continuing the previous example, you can use validatorjs in your code after creating the MLE environment like so:

create mle env newstack_env imports ('validator' module validator_module);


With the environment created, it’s time to turn attention to the JavaScript module. Let’s assume your task is to validate a JSON document your application received via a POST request. The JSON must contain a field named “requestor.” You must then provide a valid email address for the value. Here is an example of how you might perform this validation:

import validator from "validator";

/**
 * Validates a POST request object against certain criteria.
 *
 * @param {object} data - The POST request body to be validated.
 * @throws {Error} If no data is provided or validation fails.
 * @returns {boolean} true if the request is valid
 */
export function validatePOSTRequest(data) {
    // make sure data has been received, fail if that is not the case
    if (data === undefined) {
        throw new Error("please provide the POST request body for validation");
    }

    /**
     * Check if the 'requestor' field exists in the request body and
     * whether its value is a valid email address.
     */
    if ("requestor" in data) {
        if (typeof data.requestor !== "string") {
            throw new Error("the requestor field must provide a value of type 'string'");
        }

        if (!validator.isEmail(data.requestor)) {
            throw new Error("the requestor field does not contain a valid email address");
        }
    } else {
        throw new Error("the required requestor field is missing from the POST request");
    }

    // many more validations
    return true;
}
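Before deploying, the module’s logic can be sanity-checked locally in Node.js. In this sketch, `validator.isEmail` is replaced with a simplified regex stand-in (an assumption for illustration only; the real validatorjs checks are stricter):

```javascript
// Simplified stand-in for validator.isEmail -- NOT the real library,
// just enough shape-checking to exercise validatePOSTRequest locally.
const validator = {
  isEmail: (s) => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(s),
};

// Same logic as the MLE module above, runnable outside the database.
function validatePOSTRequest(data) {
  if (data === undefined) {
    throw new Error("please provide the POST request body for validation");
  }
  if (!("requestor" in data)) {
    throw new Error("the required requestor field is missing from the POST request");
  }
  if (typeof data.requestor !== "string") {
    throw new Error("the requestor field must provide a value of type 'string'");
  }
  if (!validator.isEmail(data.requestor)) {
    throw new Error("the requestor field does not contain a valid email address");
  }
  return true;
}
```

Running the same source both locally and in the database is one of the practical benefits of MLE’s standard ECMAScript module support.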


Next, load the module into the database using SQLcl:

mle create-module -filename newstack.js -module-name validate_post_request_module


Before you can use the JavaScript code in your application, you need to provide a call specification:

create or replace function validate_post_request(
  p_data json
) return boolean as 
mle module validate_post_request_module 
env newstack_env 
signature 'validatePOSTRequest';
/


That’s it! You can now use this function in your application. Again, any client capable of executing SQL and PL/SQL can use this function seamlessly.

Summary

Developers no longer need to feel intimidated when coding server-side business logic. The availability of JavaScript adds another language to developers’ toolbox. There is, of course, a lot more to say about MLE. To learn more, visit the Oracle JavaScript Developer’s Guide and Oracle developer blog.

The post A Developer’s Guide to Server-Side JavaScript appeared first on The New Stack.


Localhost dangers: CORS and DNS rebinding

1 Share

At GitHub Security Lab, one of the most common vulnerability types we find relates to the cross-origin resource sharing (CORS) mechanism. CORS allows a server to instruct a browser to permit loading resources from specified origins other than its own, such as a different domain or port.

Many developers change their CORS rules because users want to connect to third-party sites, such as payment or social media sites. However, developers often don’t fully understand the dangers of relaxing the same-origin policy, and they use unnecessarily broad rules or faulty logic to stop users from filing further issues.

In this blog post, we’ll examine some case studies of how a broad or faulty CORS policy led to dangerous vulnerabilities in open source software. We’ll also discuss DNS rebinding, an attack with similar effects to a CORS misconfiguration that’s not as well known among developers.

What is CORS and how does it work?

CORS is a way to allow websites to communicate with each other directly by bypassing the same-origin policy, a security measure that restricts websites from making requests to a different domain than the one that served the web page. Understanding the Access-Control-Allow-Origin and Access-Control-Allow-Credentials response headers is crucial for correct and secure CORS implementation.

Access-Control-Allow-Origin is the list of origins that are allowed to make cross-site requests and read the response from the webserver. If the Access-Control-Allow-Credentials header is set, the browser is also allowed to send credentials (cookies, HTTP authentication) if the origin requests it. Some requests are considered simple requests and do not need a CORS header in order to be sent cross-site: GET, POST, and HEAD requests with content types restricted to application/x-www-form-urlencoded, multipart/form-data, and text/plain. When a third-party website needs access to account data from your website, adding a concise CORS policy is often one of the best ways to facilitate such communication.
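The “simple request” rules above can be sketched as a small classifier (a hedged illustration of the browser’s behavior, not an exhaustive implementation — headers beyond method and content type also factor into preflight decisions):

```javascript
// Sketch: a cross-site request skips the CORS preflight ("simple request")
// only for these methods and body content types.
const SIMPLE_METHODS = new Set(["GET", "POST", "HEAD"]);
const SIMPLE_CONTENT_TYPES = new Set([
  "application/x-www-form-urlencoded",
  "multipart/form-data",
  "text/plain",
]);

function isSimpleRequest(method, contentType) {
  if (!SIMPLE_METHODS.has(method.toUpperCase())) return false;
  // Requests without a body (e.g. GET, HEAD) carry no content type.
  if (contentType === undefined) return true;
  return SIMPLE_CONTENT_TYPES.has(contentType.toLowerCase());
}
```

The practical consequence: a cross-site POST of form data reaches your server without any preflight, which is exactly why misconfigured credentialed CORS is dangerous.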

To implement CORS, developers can either manually set the Access-Control-Allow-Origin header, or they can utilize a CORS framework, such as RSCors, that will do it for them. If you choose to use a framework, make sure to read the documentation—don’t assume the framework is safe by default. For example, if you tell the CORS library you choose to reflect all origins, does it send back the response with a blanket pattern matching star (*) or a response with the actual domain name (e.g., stripe.com)?

Alternatively, you can create a custom function or middleware that checks the origin to decide whether or not to send the Access-Control-Allow-Origin header. The problem is that rolling your own code invites security mistakes that well-known libraries usually mitigate.

Common mistakes when implementing CORS

For example, when comparing the origin header with the allowed list of domains, developers may use their language’s equivalents of the startsWith, exactMatch, and endsWith string comparison functions. The safest is exactMatch, where the domain must match the allowlist entry exactly. However, what if payment.stripe.com wants to make a request to our backend instead of stripe.com? To get around this, we’d have to add every subdomain to the allowlist. This would inevitably cause users frustration when third-party websites change their APIs.

Alternatively, we can use the endsWith function. If we want connections from Stripe, let’s just add stripe.com to the allowlist and use endsWith to validate and call it a day. Not so fast, since the domain attackerstripe.com is now also valid. We can tell the user to only add full urls to the allowlist, such as https://stripe.com, but then we have the same problem as exactMatch.

We occasionally see developers using the startsWith function in order to validate domains. This also doesn’t work. If the allowlist includes https://stripe.com then we can just do https://stripe.com.attacker.com.

For any origin with subdomains, we must use .stripe.com (notice the extra period) in order to ensure that we are looking at a subdomain. If we combine exactMatch for second level domains and endsWith for subdomains, we can make a secure validator for cross site requests.
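Putting that together, a secure validator might look like the following sketch: parse the origin rather than string-matching it, then combine an exact match for the registered domain with a dot-anchored suffix check for subdomains (the allowlist entry is a placeholder):

```javascript
// Hedged sketch: allow an origin only if its hostname exactly matches an
// allowlisted domain, or ends with "." + that domain (a true subdomain).
const allowedDomains = ["stripe.com"]; // placeholder allowlist

function isAllowedOrigin(origin) {
  let host;
  try {
    host = new URL(origin).hostname; // parse; don't string-match the raw origin
  } catch {
    return false; // malformed Origin header
  }
  return allowedDomains.some(
    (domain) => host === domain || host.endsWith("." + domain)
  );
}
```

This accepts payment.stripe.com while rejecting both attackerstripe.com (no dot boundary) and stripe.com.attacker.com (wrong suffix).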

Lastly, there’s one edge case found in CORS: the null origin should never be added to allowed domains. The null origin can be hardcoded into the code or added by the user to the allowlist, and it’s used when requests come from a file or from a privacy-sensitive context, such as a redirect. However, it can also come from a sandboxed iframe, which an attacker can include in their website. For more practice attacking a website with null origin, check out this CORS vulnerability with trusted null origin exercise in the Portswigger Security Academy.

How can attackers exploit a CORS misconfiguration?

CORS issues allow an attacker to perform actions on behalf of the user when a web application uses cookies (with SameSite None) or HTTP basic authentication, since the browser must send those requests with the required authentication.

Fortunately for users, Chrome now defaults cookies without a SameSite attribute to SameSite=Lax, which has made CORS misconfiguration useless in most scenarios. However, Firefox and Safari are still vulnerable to these issues via bypass techniques found by PTSecurity, whose research we highly recommend reading to understand how someone can exploit CORS issues.

What impact can a CORS misconfiguration have?

CORS issues can give a user the power of an administrator of a web application, so the usefulness depends on the application. In many cases, administrators have the ability to execute scripts or binaries on the server’s host. These relaxed security restrictions allow attackers to get remote code execution (RCE) capabilities on the server host by convincing administrators to visit an attacker-owned website.

CORS issues can also be chained with other vulnerabilities to increase their impact. Since an attacker now has the permissions of an administrator, they are able to access a broader range of services and activities, making it more likely they’ll find something vulnerable. Attackers often focus on vulnerabilities that affect the host system, such as arbitrary file write or RCE.

Real-world examples

A CORS misconfiguration allows for RCE

Cognita is a Python project that allows users to test the retrieval-augmented generation (RAG) ability of LLM models. If we look at how it used to call the FastAPI CORS middleware, we can see it used an unsafe default setting, with allow_origins set to all and allow_credentials set to true. Usually if the browser receives Access-Control-Allow-Origin: * and Access-Control-Allow-Credentials: true, the browser knows not to send credentials with the origin, since the application did not reflect the actual domain, just a wildcard.

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

However, FastAPI CORS middleware is unsafe by default and setting these two headers like this resulted in the origin being reflected along with credentials.
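For contrast, here is a framework-agnostic sketch of the safe behavior: reflect the origin and allow credentials only when the origin is explicitly allowlisted, and send no CORS headers otherwise (the allowlist entry is a placeholder, not from the Cognita codebase):

```javascript
// Hedged sketch: compute CORS response headers for a request origin.
// Reflecting the origin together with Access-Control-Allow-Credentials is
// only safe for an explicit allowlist entry -- never for a wildcard.
const ALLOWED_ORIGINS = new Set(["https://app.example.com"]); // placeholder

function corsHeaders(origin) {
  if (!ALLOWED_ORIGINS.has(origin)) {
    return {}; // unknown origin: send no CORS headers at all
  }
  return {
    "Access-Control-Allow-Origin": origin, // the concrete origin, not "*"
    "Access-Control-Allow-Credentials": "true",
    "Vary": "Origin", // responses differ per origin; keep caches honest
  };
}
```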

Currently, Cognita does not have authentication, but if its developers implemented authentication without fixing the CORS policy, their authentication could be bypassed. As it stands, any website can send arbitrary requests to any endpoint in Cognita, as long as they know how to access it. Due to its lack of authentication, Cognita appears intended to be hosted on intranets or locally.

An attacking website can try guessing the local IP of a Cognita instance by sending requests to local addresses such as localhost, or it can enumerate the internal IP address space by continually making requests until it finds the Cognita instance. With this bug alone, our access is limited to just using the RAG endpoints and possibly deleting data. We want to get a foothold in the network. Let’s look for a real primitive.

We found a simple arbitrary file write primitive; the developers added an endpoint for Docker without considering file sanitization, and now we can write to any file we want. The file.filename is controlled by the request and os.path.join resolves the “..”, allowing file_path to be fully controlled.

@router.post("/upload-to-local-directory")
async def upload_to_docker_directory(
    upload_name: str = Form(
        default_factory=lambda: str(uuid.uuid4()), regex=r"^[a-z][a-z0-9-]*$"
    ),
    files: List[UploadFile] = File(...),
):
...
        for file in files:
            logger.info(f"Copying file: {file.filename}, to folder: {folder_path}")
            file_path = os.path.join(folder_path, file.filename)
            with open(file_path, "wb") as f:
                f.write(file.file.read())

Now that we have an arbitrary file write target, what should we target to get RCE? This endpoint is for Docker users and the Cognita documentation only shows how to install via Docker. Let’s take a look at that Dockerfile.

command: -c "set -e; prisma db push --schema ./backend/database/schema.prisma && uvicorn --host 0.0.0.0 --port 8000 backend.server.app:app --reload"

Looking carefully, there’s the --reload flag when starting up the backend server, so we can overwrite any file on the server and uvicorn will automatically restart the server to apply changes. Thanks, uvicorn! Let’s target the __init__.py files that run on start, and now we have RCE on the Cognita instance. We can use this to read data from Cognita, or use it as a starting point on the network and attempt to connect to other vulnerable devices from there.

Logic issues lead to credit card charges and backdoor access

Next, let’s look at some additional real life examples of faulty CORS logic.

We found the following code on the website https://tamagui.dev. Since the source code is on GitHub, we decided to take a quick look. (Note: The vulnerability has since been reported by our team and fixed by the developer.)

export function setupCors(req: NextApiRequest, res: NextApiResponse) {
  const origin = req.headers.origin

  if (
    typeof origin === 'string' &&
    (origin.endsWith('tamagui.dev') ||
      origin.endsWith('localhost:1421') ||
      origin.endsWith('stripe.com'))
  ) {
    res.setHeader('Access-Control-Allow-Origin', origin)
    res.setHeader('Access-Control-Allow-Credentials', 'true')
  }
}

As you can see, the developer hardcoded the allowed origins. Taking a guess, the developer most likely used Stripe for payments, localhost for local development, and tamagui.dev for subdomain access or to deal with HTTPS issues. In short, it looks like the developer added allowed domains as they became needed.

As we know, using endsWith is insufficient: an attacker can register a domain that satisfies the check, such as eviltamagui.dev. Depending on the tamagui.dev account’s permissions, an attacker could then perform a range of actions on behalf of the user, such as buying products on the website and charging the victim’s credit card.
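A quick Python sketch shows why the suffix check fails (the attacker domain eviltamagui.dev is hypothetical), and one way to tighten it by matching exact origins or a genuine subdomain boundary:

```python
# Why endsWith is not a safe origin check: an attacker-registered domain can
# *end with* the allowed string without being a subdomain of it.
allowed_suffix = "tamagui.dev"
attacker_origin = "https://eviltamagui.dev"  # hypothetical attacker domain

assert attacker_origin.endswith(allowed_suffix)  # the vulnerable check passes

# Safer: exact match against an allowlist, or require a "." before the
# trusted suffix so only real subdomains qualify.
def origin_allowed(origin: str) -> bool:
    allowlist = {"https://tamagui.dev", "http://localhost:1421"}
    if origin in allowlist:
        return True
    return origin.startswith("https://") and origin.endswith(".tamagui.dev")
```

With this version, eviltamagui.dev is rejected while tamagui.dev and its subdomains still pass.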

Lastly, some projects don’t prioritize security, and developers simply write code that works. For example, the following project used the HasPrefix and Contains functions to check the origin, which is easily exploitable. Using this vulnerability, we can trick an administrator into clicking a specific link (say, https://localhost.attacker.com) and use the user-add endpoint to install a backdoor account in the application.

func CorsFilter(ctx *context.Context) {
    origin := ctx.Input.Header(headerOrigin)
    originConf := conf.GetConfigString("origin")
    originHostname := getHostname(origin)
    host := removePort(ctx.Request.Host)

    if strings.HasPrefix(origin, "http://localhost") || strings.HasPrefix(origin, "https://localhost") || strings.HasPrefix(origin, "http://127.0.0.1") || strings.HasPrefix(origin, "http://casdoor-app") || strings.Contains(origin, ".chromiumapp.org") {
        setCorsHeaders(ctx, origin)
        return
    }

func setCorsHeaders(ctx *context.Context, origin string) {
    ctx.Output.Header(headerAllowOrigin, origin)
    ctx.Output.Header(headerAllowMethods, "POST, GET, OPTIONS, DELETE")
    ctx.Output.Header(headerAllowHeaders, "Content-Type, Authorization")
    ctx.Output.Header(headerAllowCredentials, "true")

    if ctx.Input.Method() == "OPTIONS" {
        ctx.ResponseWriter.WriteHeader(http.StatusOK)
    }
}
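The same bypass pattern applies here. This Python sketch (attacker domain hypothetical) shows why prefix and substring checks fail, and how comparing the parsed hostname exactly closes both holes:

```python
from urllib.parse import urlsplit

# HasPrefix bypass: a registered domain can start with "localhost".
origin = "https://localhost.attacker.com"  # hypothetical attacker domain
assert origin.startswith("https://localhost")  # the vulnerable check passes

# Contains bypass: the trusted string can appear anywhere in the origin.
assert ".chromiumapp.org" in "https://x.chromiumapp.org.attacker.com"

# Safer: parse the origin and compare the exact hostname.
def is_local_origin(origin: str) -> bool:
    host = urlsplit(origin).hostname
    return host in {"localhost", "127.0.0.1"}
```

Here urlsplit extracts the full hostname (localhost.attacker.com), so neither trick passes the exact comparison.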

DNS rebinding

Diagram showing how DNS rebinding utilizes the DNS system to exploit vulnerable web applications.

DNS rebinding achieves a similar effect to a CORS misconfiguration, but its capabilities are more limited. DNS rebinding does not require a misconfiguration or bug on the part of the developer or user. Rather, it’s an attack on how the DNS system works.

Both CORS and DNS rebinding vulnerabilities facilitate requests to API endpoints from unintended origins. First, an attacker lures the victim’s browser to a domain that serves malicious JavaScript. The malicious JavaScript makes requests back to a host the attacker controls, and because the attacker runs the resolving DNS server, they can switch the IP address of the domain and its subdomains at will, causing the browser to connect to a local address instead. The malicious JavaScript then scans for open connections and sends its malicious payload requests to them.

This attack is very easy to set up using NCC Group’s Singularity tool. Under the payloads folder, you can view the scripts that interact with Singularity, and even add your own script to tell Singularity how to send requests and respond.

Fortunately, DNS rebinding is very easy to mitigate because the rebound request cannot carry the application’s cookies, so adding simple authentication to all sensitive and critical endpoints will prevent this attack. Since the browser thinks it is contacting the attacker’s domain, it will send any cookies for that domain, not those of the actual web application, and authorization will fail.

If you don’t want to add authentication for a simple application, you should at least check that the Host header matches an approved host name or a local name. Unfortunately, many of the newly created AI projects currently proliferating have none of these protections built in, making any data on those web applications potentially retrievable and any vulnerability remotely exploitable.

   public boolean isValidHost(String host) {

        // Allow loopback IPv4 and IPv6 addresses, as well as localhost
        if (LOOPBACK_PATTERN.matcher(host).find()) {
            return true;
        }

        // Strip port from hostname - for IPv6 addresses, if
        // they end with a bracket, then there is no port
        int index = host.lastIndexOf(':');
        if (index > 0 && !host.endsWith("]")) {
            host = host.substring(0, index);
        }

        // Strip brackets from IPv6 addresses
        if (host.startsWith("[") && host.endsWith("]")) {
            host = host.substring(1, host.length() - 1);
        }

        // Allow only if stripped hostname matches expected hostname
        return expectedHost.equalsIgnoreCase(host);
    }

Because DNS rebinding requires certain parameters to be effective, security scanners do not flag it, for fear of generating many false positives. At GitHub, our DNS rebinding reports to maintainers commonly go unfixed due to the unusual nature of this attack, and we see that only the most popular repos have checks in place.

When publishing software that holds security critical information or takes privileged actions, we strongly encourage developers to write code that checks that the origin header matches the host or an allowlist.
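As a minimal sketch of that recommendation (the allowlist entry is hypothetical), a request handler can verify the Origin header against the Host header before honoring a state-changing request:

```python
from urllib.parse import urlsplit

ALLOWED_ORIGIN_HOSTS = {"app.example.com"}  # hypothetical allowlist

def origin_matches_host(origin_header: str, host_header: str) -> bool:
    """Accept a request only when the Origin's host equals the request's
    Host header (port stripped) or appears in an explicit allowlist."""
    origin_host = urlsplit(origin_header).hostname
    if origin_host is None:
        return False
    # Simplified port stripping; bracketed IPv6 hosts would need extra care,
    # as in the Java example above.
    request_host = host_header.rsplit(":", 1)[0]
    return origin_host == request_host or origin_host in ALLOWED_ORIGIN_HOSTS
```

A rebound or cross-site request arrives with an Origin that doesn’t match, so the check fails closed.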

Conclusion

Using CORS to bypass the same-origin policy has always led to common mistakes. Finding and fixing these issues is relatively simple once you understand CORS mechanics. New and improving browser protections have mitigated some of the risk and may eliminate this bug class altogether in the future. Oftentimes, finding CORS issues is as simple as searching the code for “CORS” or “Access-Control-Allow-Origin” to see whether any insecure presets or logic are used.

Check out the Mozilla Developer Network CORS page if you wish to become better acquainted with how CORS works and the configuration options available when using a CORS framework.

If you’re building an application without authentication that exposes critical functionality, remember to check the Host header as an extra security measure.

Finally, GitHub Code Security can help you secure your project by detecting and suggesting a fix for bugs such as CORS misconfiguration!

The post Localhost dangers: CORS and DNS rebinding appeared first on The GitHub Blog.


From training to inference: The new role of web data in LLMs

Data has always been key to LLM success, but it's becoming key to inference-time performance as well.

Microsoft 365 Certification control spotlight: Data access management


Data access management ensures that only authorized users and applications can securely access sensitive data, restricting access to minimize risk. Only users with a legitimate business need should have access to sensitive data and encryption keys.

Microsoft 365 Certification confirms that ISVs have established a documented process for access requests, following the principles of least privilege, and have a clearly defined access request procedure for their apps.

Certification auditors will verify that ISVs maintain a list of individuals with access to data and/or encryption keys. The list should provide business justification for each individual and include a formal approval process that aligns access privileges with job functions.

The procedure for granting access to data or encryption keys should require approval to confirm that access is essential for an individual’s job responsibilities. This prevents employees without a legitimate reason from gaining access.

When utilizing third parties for the storage or processing of Microsoft 365 data, these entities can represent significant risk factors. Certification requires ISVs to institute a comprehensive due diligence and management process to ensure that third parties are securely storing or processing data and will comply with any legal obligations, such as those required of data processors under GDPR.

ISVs should maintain a detailed record of all third parties with whom they share data to support their applications. This record should include the services provided, the data shared, the reasons for sharing the data, key contact information including a breach notification contact, contract renewal or expiration dates, and legal or compliance obligations such as GDPR, HIPAA, and FedRAMP. Data sharing agreements will be reviewed to ensure that third parties are processing data only as needed and that they understand their security obligations.

Next steps

To learn how Microsoft 365 Certification validates that your application uses the most up-to-date controls for data access management, visit the Microsoft 365 Certification data at rest control evidence requirements.

To start certification, go to the Microsoft Partner Center dashboard, select an app from Marketplace offers overview, and select App Compliance.

The post Microsoft 365 Certification control spotlight: Data access management appeared first on Microsoft 365 Developer Blog.


The Blunt Force Trauma of the Trump Tariffs

The US is barreling toward a recession for no good reason, and dragging the world—and a few thousand penguins on remote Antarctic islands—down with it.

Bringing intelligence to every workflow

Notion is a connected workspace where teams write, plan, and organize everything from meeting notes to product roadmaps. Today, it’s also a deeply AI-powered platform, used by millions to summarize content, generate writing, and ask questions in natural language across their entire workspace.