Content Developer II at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
122083 stories
·
29 followers

Read Satya Nadella’s Microsoft memo on putting security first

1 Share
Illustration of Microsoft CEO Satya Nadella
Image: Laura Normand / The Verge

Microsoft is overhauling its security processes after a series of high-profile attacks in recent years. Security is now Microsoft’s “top priority,” the company outlined today in response to ongoing questions about its security practices and the US Cyber Safety Review Board’s labeling of Microsoft’s security culture as “inadequate.”

Microsoft CEO Satya Nadella is now making it clear to every employee that security should be prioritized above all else. The Verge has obtained a memo from Nadella to Microsoft’s more than 200,000 employees, where he discusses the new security overhaul and how the company is learning from attackers to improve its security processes. Nadella also makes it explicitly clear that employees should not make...

Continue reading…

Read the whole story
alvinashcraft
43 minutes ago
reply
West Grove, PA
Share this story
Delete

Microsoft will base part of senior exec comp on security, add deputy CISOs to product groups

1 Share
Charlie Bell, executive vice president of Microsoft security, speaks at the GeekWire Summit in 2022. (GeekWire Photo / Dan DeLong)

Microsoft is changing its security practices, organizational structure, and executive compensation in an attempt to address a series of major security breaches, under growing pressure from government leaders and big customers.

The company said Friday morning that it will base a portion of senior executive compensation on progress toward security goals, install deputy chief information security officers (CISOs) in each product group, and bring together teams from its major platforms and product teams in “engineering waves” to overhaul security.

“We will take our learnings from security incidents, feed them back into our security standards, and operationalize these learnings as ‘paved paths’ that can enable secure design and operations at scale,” wrote Charlie Bell, the Microsoft Security executive vice president, in a blog post outlining the changes.

Bell said the changes build on the Secure Future Initiative (SFI), introduced last fall.

“Ultimately, Microsoft runs on trust and this trust must be earned and maintained,” he wrote. “As a global provider of software, infrastructure, and cloud services, we feel a deep responsibility to do our part to keep the world safe and secure.”

The changes follow a critical report by the Cyber Safety Review Board (CSRB) that described Microsoft’s security culture as “inadequate” and called on the company to make security its top priority, effectively reviving the spirit of the Trustworthy Computing initiative that Microsoft co-founder Bill Gates instituted in 2002.

The report called for security initiatives to be “overseen directly and closely” by Microsoft’s CEO and board, and said “all senior leaders should be held accountable for implementing all necessary changes with utmost urgency.”

After the CSRB report’s release, Sen. Ron Wyden of Oregon introduced legislation designed in part to reduce the U.S. government’s reliance on Microsoft software, citing the company’s “shambolic cybersecurity practices.”

Bell wrote that Microsoft is “integrating the recent recommendations from the CSRB” as part of the changes announced Friday, in addition to lessons learned from high-profile cyberattacks.

The compensation changes announced Friday will apply to Microsoft’s senior leadership team, the top executives who report to CEO Satya Nadella. The company did not say how much of their compensation will be based on security.

Nadella hinted at these changes last week on the company’s quarterly earnings call when he said the company would be “putting security above all else — before all other features and investments.”

In an internal memo Friday morning, obtained by GeekWire, Nadella delivered a mandate to employees, expanding on the themes outlined in Bell’s public blog post.

“If you’re faced with the tradeoff between security and another priority, your answer is clear: Do security,” the Microsoft CEO told employees. “In some cases, this will mean prioritizing security above other things we do, such as releasing new features or providing ongoing support for legacy systems.”

Bell wrote in his post that the company’s new “security governance framework” will be overseen by Microsoft’s Chief Information Security Office, which is led by Igor Tsyganskiy as Microsoft’s CISO following an executive shakeup in December.

The deputy CISOs in product teams will report directly to Tsyganskiy, according to the company. This change in organizational and reporting structure was first reported by Bloomberg News on Thursday.

“This framework introduces a partnership between engineering teams and newly formed Deputy CISOs, collectively responsible for overseeing SFI, managing risks and reporting progress directly to the Senior Leadership Team,” Bell wrote. “Progress will be reviewed weekly with this executive forum and quarterly with our Board of Directors.”

Microsoft revealed in January of this year that a Russian state-sponsored actor known as Nobelium or Midnight Blizzard accessed its internal systems and executive email accounts. More recently, the company said the same attackers were able to access some of its source code repositories and internal systems.

In another high-profile incident, in May and June 2023, the Chinese hacking group known as Storm-0558 is believed to have compromised the Microsoft Exchange Online mailboxes of more than 500 people and 22 organizations worldwide, including senior U.S. government officials.

Read the whole story
alvinashcraft
43 minutes ago
reply
West Grove, PA
Share this story
Delete

Microsoft overhaul treats security as ‘top priority’ after a series of failures

2 Shares
Vector collage of the Microsoft logo among arrows and lines going up and down.
Image: The Verge

Microsoft is making security its number one priority for every employee, following years of security issues and mounting criticisms. After a scathing report from the US Cyber Safety Review Board recently concluded that “Microsoft’s security culture was inadequate and requires an overhaul,” it’s doing just that by outlining a set of security principles and goals that are tied to compensation packages for Microsoft’s senior leadership team.

Last November, Microsoft announced a Secure Future Initiative (SFI) in response to mounting pressure on the company to respond to attacks that allowed Chinese hackers to breach US government email accounts. Just days after announcing this initiative, Russian hackers managed to breach Microsoft’s...

Continue reading…

Read the whole story
alvinashcraft
44 minutes ago
reply
West Grove, PA
Share this story
Delete

Clean Data, Trusted Model: Ensure Good Data Hygiene for Your LLMs

1 Share

Large language models (LLMs) have emerged as powerful engines of creativity, transforming simple prompts into a world of possibilities.

But beneath their potential capacity lies a critical challenge. The data that flows into LLMs touches countless enterprise systems, and this interconnectedness poses a growing data security threat to organizations.

LLMs are nascent and not always completely understood. Depending on the model, their inner workings may be a black box, even to their creators — meaning that we can’t fully understand what will happen to the data we put in, and how or where it may come out.

To stave off risks, organizations will need to build infrastructure and processes that perform rigorous data sanitization of both inputs and outputs, and can monitor and canvas every LLM on an ongoing basis.

Model Inventory: Take Stock of What You’re Deploying

As the old saying goes, “You can’t secure what you can’t see.” Maintaining a comprehensive inventory of models throughout both production and development phases is critical to achieving transparency, accountability and operational efficiency.

In production, tracking each model is crucial for monitoring performance, diagnosing issues and executing timely updates. During development, inventory management helps track iterations, facilitating the decision-making process for model promotion.

To be clear, this is not a “record-keeping task” — a robust model inventory is absolutely essential in building reliability and trust in AI-driven systems.

Data Mapping: Know What Data You’re Feeding Models

Data mapping is a critical component of responsible data management. It involves a meticulous process to comprehend the origin, nature and volume of data that feeds into these models.

It’s imperative to know where the data originates, whether it contains sensitive information like personally identifiable information (PII) or protected health information (PHI), especially given the sheer quantity of data being processed.

Understanding the precise data flow is a must; this includes tracking which data goes into which models, when this data is utilized and for what specific purposes. This level of insight not only enhances data governance and compliance but also aids in risk mitigation and the preservation of data privacy. It ensures that machine learning operations remain transparent, accountable and aligned with ethical standards while optimizing the utilization of data resources for meaningful insights and model performance improvements.

Data mapping bears striking resemblance to compliance efforts often undertaken for regulations like the General Data Protection Regulation (GDPR). Just as GDPR mandates a thorough understanding of data flows, the types of data being processed and their purpose, the data mapping exercise extends these principles to the realm of machine learning. By applying similar practices to both regulatory compliance and model data management, organizations can ensure that their data practices adhere to the highest standards of transparency, privacy and accountability across all facets of operations, whether it’s meeting legal obligations or optimizing the performance of AI models.

Data Input Sanitation: Weed out Risky Data

“Garbage in, garbage out” has never rung truer than with LLMs. Just because you have vast troves of data to train a model doesn’t mean you should do so. Whatever data you use should have a reasonable and defined purpose.

The fact is, some data is just too risky to input into a model.  Some can carry significant risks, such as privacy violations or biases.

It is crucial to establish a robust data sanitization process to filter out such problematic data points and ensure the integrity and fairness of the model’s predictions. In this era of data-driven decision-making, the quality and suitability of the inputs are just as vital as the sophistication of the models themselves.

One method rising in popularity is adversarial testing on models. Just as selecting clean and purposeful data is vital for model training, assessing the model’s performance and robustness is equally crucial in the development and deployment stages. These evaluations help detect potential biases, vulnerabilities or unintended consequences that may arise from the model’s predictions.

There’s already a growing market of startups specializing in providing services for precisely this purpose. These companies offer invaluable expertise and tools to rigorously test and challenge models, ensuring they meet ethical, regulatory and performance standards.

Data Output Sanitation: Ensure Trust and Coherence

Data sanitation isn’t limited to just the inputs in the context of large language models; it extends to what’s generated as well. Given the inherently unpredictable nature of LLMs, the output data requires careful scrutiny in order to establish effective guard rails.

The outputs should not only be relevant but also coherent and sensible within the context of their intended use. Failing to ensure this coherence can swiftly erode trust in the system, as nonsensical or inappropriate responses can have detrimental consequences.

As organizations continue to embrace LLMs, they will need to pay close attention to the sanitation and validation of model outputs in order to maintain the reliability and credibility of any AI-driven systems.

The inclusion of a diverse set of stakeholders and experts when creating and maintaining the rules for outputs and when building tools to monitor outputs is are key steps toward successfully safeguarding models.

Putting Data Hygiene into Action

Using LLMs in a business context is no longer an option; it’s essential to stay ahead of the competition. This means organizations will have to establish measures to ensure model safety and data privacy. Data sanitization and meticulous model monitoring are a good start, but the landscape of LLMs evolves quickly. Staying abreast of the latest and greatest, as well as regulations, will be key to making continuous improvements to your processes.

The post Clean Data, Trusted Model: Ensure Good Data Hygiene for Your LLMs appeared first on The New Stack.

Read the whole story
alvinashcraft
44 minutes ago
reply
West Grove, PA
Share this story
Delete

HOWTO: Request access to Azure OpenAI Service for Azure Commercial or Government

1 Share

For individuals with Azure Commercial cloud instances:
If you are interested in using Azure OpenAI Service, the Azure subscription administrator will need to request approval to access Azure OpenAI Service by completing the following request form:

Approval can take 2 days or more. Couple notes:

  1. You will need to know the “Azure Subscription ID” of the Azure subscription to be enabled for Azure OpenAI to complete the form.  Instructions on how to obtain this if you don’t have it readily available are available in the form.
  2. Request access to “GPT-3.5, GPT-3.5 Turbo, GPT-4, GPT-4 Turbo, and/or Embeddings Models”.  There are other AI models available such as GPT-4 Turbo Vision, Whisper, etc. however unless explicitly required, most people only need “GPT-3.5, GPT-3.5 Turbo, GPT-4, GPT-4 Turbo, and/or Embeddings Models”. Requesting additional models may delay your approval.

For individuals with Azure Government cloud instances:

Approval can take more than 4 days for Azure Government requests. Again, the notes above about requiring an Azure Government Subscription ID apply.



Read the whole story
alvinashcraft
44 minutes ago
reply
West Grove, PA
Share this story
Delete

Fluid Framework: How SharedTree merges changes

1 Share

SharedTree is a data structure that allows users to collaboratively edit hierarchical data, such as documents, spreadsheets, or outlines. SharedTree is part of the Fluid Framework, which is a platform for building distributed applications that enable real-time collaboration and data synchronization.

The basic unit of a SharedTree is a node. SharedTree supports four types of nodes:

  • Object nodes contain named fields that can hold any type of node. Object nodes are like JSON or JavaScript objects.
  • Map nodes are a collection of key-value pairs, where the keys are strings, and the values can be any type of node. Map nodes are like object nodes, but they allow more flexibility in adding, removing, and updating keys and values.
  • Array nodes are a collection of nodes that are ordered by index. Array nodes are like JSON arrays or JavaScript arrays.
  • Leaf nodes are nodes that have no children and only a single value as their content. The value can be a primitive type, such as a string, a number, a boolean, or null. Leaf nodes are like JSON values or JavaScript primitives.

What are merge semantics?

Merge semantics define how SharedTree reconciles concurrent edits – most importantly, those that may conflict. Concurrent edits are changes that are made by different users or processes at the same time, or in an overlapping time window. More specifically, two changes are concurrent if the clients making each change have not received the other’s change. For example, if Alice and Bob are both editing the same tree, and Alice adds a new node while Bob removes an existing node, these are concurrent edits.

Concurrent edits can lead to situations where two or more edits change the same part of the tree. For example, if Alice and Bob both try to edit the same node, or if Alice tries to add a child to a node that Bob removed. Merge semantics determine how these situations are handled.

How edits are sequenced

Sequencing impacts merge semantics by determining the order in which edits are applied to the tree. Sequencing is determined by the Fluid relay service, which assigns a sequence number to each edit based on the order in which it receives them. The sequence number determines the logical order of edits for all clients.

When edits occur concurrently, each editor has no knowledge of the other editor’s concurrent edit. Once concurrent edits have been sequenced and the clients have received that sequence, the SharedTree merge semantics will determine the correct state of the tree based on the assigned sequence.

Handling concurrent edits

Reconciling concurrent edits is trivial when they affect independent parts of the tree. However, it’s possible for some concurrent edits to affect overlapping parts of the tree. This leads to a situation where there may be multiple reasonable outcomes.

For example, if Alice and Bob concurrently change the color of the same item such that Alice would change it from yellow to red and Bob would change it from yellow to blue, then one could imagine multiple possible outcomes:

  • change the color to red
  • change the color to blue
  • keep the color yellow

Different merge semantics may lead to different outcomes. In this case, the item will either be red or blue depending on how the changes are sequenced.

SharedTree never models edits (or documents) as being “conflicted”, even in the scenario just described. In fact, currently SharedTree never surfaces a conflict where one of two states or one of two edits must be selected as the winner or manually reconciled. Instead of treating edits as conflicted, SharedTree always applies each edit one after the other in sequencing order as long as the edit is valid. For the example above, this means that it is the last edit to be sequenced that determines the final color for the item.

This approach works because SharedTree’s edits work to capture precise intentions, which enable SharedTree to resolve potential conflicts either by accommodating all concurrent intentions, or by picking one to override the others in a deterministic way.

For example, consider the following array, managed by a collaborative TODO application:

Image 1 jpg

Alice might reorder the items so that purchases are grouped together. The resulting state would be as follows:

2.0

Concurrently, Bob may update item #2 to call out the cat by name. The resulting state would be as follows:

Image 3

SharedTree understands that Alice’s intent is to move item 2 to the end of the array and that Bob’s intent is to change the text of item 2 and that these two changes are both valid and mutually compatible.

Image 4

Now consider how a system like Git sees each change. Here is the diff for reordering the items:

Image 5

Here is the diff for updating the text property on item #2:

Image 6

Unlike SharedTree, Git only understands the changes through diffing, which makes it unable to perceive the fact that the intentions behind the changes are not in conflict and can both be accommodated. While this makes Git very flexible, it is forced to rely on human intervention in all but trivial merges.

Moving is not copying

SharedTree allows an item to be moved from one location to another in the tree. Edits made to the item before it is moved will still apply even if they end up (because of concurrency) being applied after the item moves. This works because the move doesn’t involve a new copy of the moved item at the destination and the deletion of the original at the source. It is a true move of the item.

Consider the example above: Alice moves the to do item from one position to another, while Bob concurrently edits the text of the item. If the move were just a copy, then, if Alice’s move were to be sequenced first, Bob’s edit would not apply to the copy at the destination. By contrast, SharedTree’s move semantics ensure that Bob’s edit will be visible on the item at the destination no matter the sequencing order.

Removal is movement

SharedTree allows items to be removed from the tree. This occurs when an element is removed from an array node, when a key is deleted from a map node, or when the field on an object is overwritten or cleared.

Consider the following scenario: Alice removes a whole array of to do items, while Bob concurrently moves an item to a different array. SharedTree’s removal semantics ensure that Bob’s move will still apply, regardless of whether it ends up being sequenced before or after Alice removed the list where the item came from.

If that weren’t the case, then there would be a race between Alice’s and Bob’s edits, where Bob’s edit would not apply if Alice’s edit were sequenced first, and Bob would lose the item he moved.

In the case where an item is changed as it is removed, those modifications may end up being invisible. However, they will be preserved, so that if the removal is undone the changes will be present.

Last write wins

It’s possible for concurrent edits to represent fundamentally incompatible user intentions. Whenever that happens, the edit that is sequenced last will win.

Example 1: Alice and Bob concurrently change the background color of the same item such that Alice changes it from yellow to red, and Bob changes it from yellow to blue. If the edits are sequenced such that Alice’s edit is applied first and Bob’s edit is applied second, then the background color of the item will change from yellow to red and then from red to blue. If the edits are sequenced in the opposite order, the item’s background color will change from yellow to blue and then from blue to red.

Example 2: Alice and Bob concurrently move the same item such that Alice moves it from location X to location A, and Bob moves it from location X to location B. If the edits are sequenced such that Alice’s edit is applied first and Bob’s edit is applied second, then the item will first be moved from X to A and then from A to B. If the edits are sequenced in the opposite order, then the note will first be moved from X to B and then from B to A. Note that, because we treat removal as movement, this is true even when removals are involved: if the removal is sequenced last, then the node will be moved and then removed. If the move is sequenced last, then the node will be removed and then moved.

Constraints

There are some base conditions that SharedTree uses to ensure that a change is valid – these primarily focus on ensuring that the tree is always compliant with its schema after each change. SharedTree is also designed to allow developers to apply additional constraints to a change to help developers preserve their own application’s invariants. This feature is currently still being developed and we are always listening to feedback on what constraints would be helpful. For example, a developer might want to prevent the removal of an item if the item was concurrently changed or moved. If so, they could use a constraint that tests for these cases and drops the removal if the node has been changed or moved.

To learn more about SharedTree merge semantics see: https://aka.ms/fluid/tree/merge_semantics

The post Fluid Framework: How SharedTree merges changes appeared first on Microsoft 365 Developer Blog.

Read the whole story
alvinashcraft
44 minutes ago
reply
West Grove, PA
Share this story
Delete
Next Page of Stories