Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
148372 stories
·
33 followers

Passkeys now available for passwordless sign-in and 2FA on GitLab

1 Share

Passkeys are now available on GitLab, and offer a more secure and convenient way to access your account. You can use passkeys for passwordless sign-in or as a phishing-resistant two-factor authentication (2FA) method. Passkeys offer the ability to authenticate using your device's fingerprint, face recognition, or PIN. For accounts with 2FA enabled, passkeys automatically become available as your default 2FA method.



To register a passkey to your account, go to your profile settings and select Account > Manage authentication.

Passkeys use WebAuthn technology and public-key cryptography, which relies on a key pair: a private key and a public key. Your private key stays securely on your device and never leaves it, while your public key is stored on GitLab. Even if GitLab were to become compromised, attackers could not use your stored credentials to access your account. Passkeys work across desktop browsers (Chrome, Firefox, Safari, Edge), mobile devices (iOS 16+, Android 9+), and FIDO2 hardware security keys, allowing you to register multiple passkeys across your devices for convenient access.

Passkeys sign-in with two-factor authentication

GitLab signed the CISA Secure by Design Pledge, committing to improve our security posture and help customers develop secure software faster. One key objective of the pledge is to increase the use of multi-factor authentication (MFA) across the manufacturer’s products. Passkeys are an integral part of this goal, and provide a seamless, phishing-resistant MFA method that makes signing in to GitLab both more secure and more convenient.

If you have questions, want to share your experience, or would like to engage directly with our team about potential improvements, see the feedback issue.

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Stop Writing Plumbing! Use the New Logic Apps MCP Server Wizard


As part of our continued investment in making Logic Apps a first‑class platform for providing AI agents with connectivity, we’ve introduced the new Logic Apps MCP server configuration experience—a guided, in‑portal workflow that lets you turn any existing Standard logic app into a fully functional MCP server. With just a few clicks, you can configure authentication, generate API keys, create or manage MCP servers, and convert workflows into discoverable MCP tools that agents can call securely and reliably. This experience consolidates everything you need (setup, tooling, and management) into one intuitive surface, so developers can focus on building meaningful agentic capabilities instead of wiring up protocol plumbing.

This new experience builds upon our previous investments with API Center and the Microsoft Foundry tools catalog that introduced the ability to dynamically build MCP servers through a publishing wizard experience that leveraged Azure Logic Apps connectors. The investments made with API Center and Microsoft Foundry enabled the underlying platform components that make Logic Apps workflows accessible over the MCP protocol as MCP tools. What we have subsequently invested in is the ability to enable any existing Logic Apps Standard instance as an MCP server.

Core Capabilities

From an existing Logic Apps Standard instance, you will now discover a new section in our table of contents called Agents. Within this section, you will find an entry point called MCP servers. Clicking this link loads the configuration experience, which gives you the option to create an MCP server using existing workflows or by creating new workflows.

Create new workflows

Using this option allows us to dynamically build MCP tools using the Logic Apps managed connectors, much like the experience we provide in API Center and Microsoft Foundry.

You will start by providing:

  • MCP Server Name
  • MCP Server Description

 

Once this information has been provided, we can select a connector that we would like to use in our MCP server, such as the Salesforce connector.

With our connector selected, we can choose one or more actions. For each action that is selected, a corresponding workflow will be created that enables that connector to be called. For our use case, we will select Create record, Update record and Get Opportunity records from Salesforce.

To connect to Salesforce, we need a connection, so we will create one now.

If we scroll down, we will discover that some actions require configuration, including Create record and Update record. This is due to the design of the Salesforce connector, which supports many different Salesforce object types; delegating the choice of object type to a model will result in a lot of unpredictable behavior. We can increase the consistency of our MCP server by making it more specific.

To improve predictability, we will target a specific object type: Contacts. With this selected, we can also choose which fields we want our model to populate. For any fields that we want to hardcode, we can set the Provided by value to User.

It is also good practice to update the Description of our action to something meaningful, so that the model can achieve higher accuracy when deciding whether to call it.

 

To complete the process, click the Register button. Once this is clicked, the related workflows will automatically be created for us. These underlying workflows will have Request trigger schemas provided and configuration that wires these inputs up into our action.

 

Once the registration process is complete, we will see a notification in the upper right corner indicating that our tools (workflows) have been saved.

We will now see our MCP server created with the corresponding workflow tools. From this screen, we can:

  • Edit the MCP Server Name and Description
  • Add or remove workflows for this MCP server
  • Copy the MCP Server URL
  • Delete the MCP Server

Note: The underlying workflows will remain, even if the MCP Server is deleted.

If we want to explore one of our workflow tools, we can click on the link and be navigated to the underlying workflow. If we want to make changes to this workflow, we are able to do so, including adding additional connectors (managed or built-in), business logic, custom code, and so on.

As we explore this workflow we will discover that we have a trigger schema that is included and we have our Salesforce action mapped. This was all completed by the wizard experience that we just went through. We can now secure and test our MCP server, as discussed later in this article.

 

Use existing workflows

You may already have workflows built that you want to expose as MCP tools. We support these use cases through the Use existing workflows option.

Once again, we will provide an MCP server Name and Description. With these values provided, we can select one or more existing workflow(s) to be included in this MCP Server.

In order for a workflow to be eligible for selection it must satisfy the following conditions:

  • HTTP Trigger
  • HTTP Response Action
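For reference, a minimal eligible workflow might look like the following sketch. This assumes the standard Logic Apps Standard workflow.json layout; the trigger and action names are arbitrary, and the Response action simply echoes the request body:

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
      "When_a_HTTP_request_is_received": {
        "type": "Request",
        "kind": "Http",
        "inputs": {
          "schema": {}
        }
      }
    },
    "actions": {
      "Response": {
        "type": "Response",
        "kind": "Http",
        "runAfter": {},
        "inputs": {
          "statusCode": 200,
          "body": "@triggerBody()"
        }
      }
    },
    "outputs": {}
  },
  "kind": "Stateful"
}
```

Any actions you add between the trigger and the Response action are invisible to the eligibility check; only the trigger and response shape matter.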

You have a lot of creative license for the actions you include between the trigger and the response. This can include calling APIs, custom code, built-in connectors, and additional business logic.

Note: This approach also lets you create workflows with your own naming conventions and supports more advanced scenarios.

I have built the following workflow, which includes my trigger, a ServiceNow action, and a response. To help the model reliably call my MCP server, I have:

  • Added a relevant description for my trigger. This will communicate to the model what this workflow (tool) is able to provide.
  • Within my trigger, provided a JSON schema and included description fields that communicate to the model what these data properties provide.
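As an illustration, a trigger schema for a ServiceNow incident workflow might look like the sketch below. The property names here are hypothetical; the point is that each description tells the model what the field is for:

```json
{
  "type": "object",
  "properties": {
    "short_description": {
      "type": "string",
      "description": "One-line summary of the incident to create in ServiceNow."
    },
    "urgency": {
      "type": "string",
      "description": "Incident urgency: 1 (high), 2 (medium), or 3 (low)."
    }
  },
  "required": ["short_description"]
}
```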

 

Once we have provided a Name and Description, we can select our workflow(s) and click the Create button.

We now have two MCP servers that are hosted in this Logic App. Each MCP server will have a unique URL, but will share the same security scheme.

Authentication

There are two authentication schemes that are available for MCP servers:

  • API Key-based
  • OAuth

Note: It is important to know that these authentication schemes apply to the entire Logic App and not the individual MCP Servers/tools.

API key-based authentication

When we select Key-based from the method dropdown, we can generate keys by clicking the Generate key button. From there, select a Duration and an Access Key, then click the Generate button.

An API key will be presented; copy this value.

Note: When you navigate away from this page, you will not be able to see this value again, so store it in a safe place. However, you can generate additional API keys. Prior API keys will remain valid until they expire or the access keys are regenerated. Regenerating access keys from this page will be available in a future release; for now, it is available via an API call.

OAuth Authentication

Here we are going to take advantage of Easy Auth authentication for Azure Logic Apps. We will start by clicking on the Manage authentication link.

From there we will click on the Add identity provider button.

Select Microsoft as the identity provider, followed by clicking the Add button.

We will now need an App Registration, so in a separate tab navigate to App registrations in the Azure Portal.

 

Start by providing a Name and select the desired account types.

Click on Expose an API

Add a Scope and provide relevant data

Navigate to the Overview page and then copy the values for:

  • Application (client) ID
  • Directory (tenant) ID
  • Application ID URI

Back in our Logic App, add an identity provider and provide the following information from our App Registration:

In the Additional checks section, enable the following:

  • Allow requests from any application
  • Allow requests from any identity
  • Use default restrictions based on issuer

For the App Service authentication settings, enable unauthenticated access.

 

Note: These settings can be modified to address your organization’s specific needs.

 

Click Add to complete the configuration.

 

Testing

You are now ready to test your MCP server from your favorite agent platform, including VS Code, Copilot Studio, Microsoft Foundry, or Azure Logic Apps. If you are using an API key, you will be asked to create an authentication header. In this case, use X-API-KEY as the header name and provide your key value.
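As a rough sketch of what that looks like outside an agent platform, the snippet below builds a JSON-RPC request carrying the X-API-KEY header using only the Python standard library. The URL and key values are placeholders, and the tools/list method assumes a standard MCP endpoint; adjust both for your server:

```python
import json
import urllib.request

# Hypothetical values: substitute your MCP server URL and generated API key.
MCP_SERVER_URL = "https://example-logic-app.azurewebsites.net/api/mcp"
API_KEY = "your-generated-key"

def build_mcp_request(method: str, params: dict, request_id: int = 1) -> urllib.request.Request:
    """Build a JSON-RPC 2.0 POST request with the X-API-KEY header attached."""
    payload = {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}
    return urllib.request.Request(
        MCP_SERVER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-API-KEY": API_KEY},
        method="POST",
    )

# Example: ask the MCP server which tools (workflows) it exposes.
req = build_mcp_request("tools/list", {})
# with urllib.request.urlopen(req) as resp:   # uncomment with real values
#     print(json.load(resp))
```

Sending the request is left commented out so the sketch stays runnable without a live server.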

 

Known issue: We have seen situations where, when using OAuth, the Easy Auth settings need to be re-applied to prevent an authentication error. Issuing the following command from a Cloud Shell should address the problem and allow you to authenticate using Easy Auth.

az webapp auth microsoft update --name "<LogicAppName>" --resource-group "<Your_ResourceGroup>" --client-id "<Your_Client_ID>" --issuer "https://login.microsoftonline.com/<tenant>/v2.0"

Video content

Want to see this content in video format? Check out the following video.

 


When and why you might need the Raspberry Pi AI HAT+ 2


Our friends at Hailo wrote this article about how to make the most of the Raspberry Pi AI HAT+ 2, pinpointing some of their favourite generative AI use cases.

The Raspberry Pi AI HAT+ 2 is the official generative AI PCIe add-on for Raspberry Pi 5, released on 15 January 2026. It pairs a Hailo-10H AI accelerator capable of up to 40 TOPS of inference performance (INT4) with 8GB of dedicated on-board LPDDR4X memory, enabling local vision and small generative AI workloads on one of the most popular single-board computers ever made.

This hardware combination is designed to enable efficient on-device generative AI while allowing the AI HAT+ 2 to operate within edge device requirements. These include low power consumption, no cloud connectivity, low latency, and maximum data privacy. However, as with any embedded hardware, performance trade-offs matter: edge devices are limited in memory, compute resources, and power budget (typically single-digit W).

For this reason, generative AI applications that require general world awareness, continuous learning, or conversations based on extensive context and knowledge-heavy reasoning are better suited to run in the cloud. For latency-sensitive, privacy-critical, knowledge-confined applications, the new AI HAT+ 2 is an ideal fit.

Let’s break down when and where the AI HAT+ 2 is most powerful, and why it’s not just another niche gadget.

Where the AI HAT+ 2 really excels

The AI HAT+ 2 is strongest when running workloads that are compute-heavy up front, rather than workloads that are dominated by token-by-token (TBT) generation. In practice, this means it shines when you need the Raspberry Pi’s CPU to be available and responsive while running generative AI applications with the following profiles:

  1. Fast execution of encoders — when turning a visual, audio, or text input into a prompt embedding
  2. Short time to first token (TTFT)* — when interactivity and user experience are critical
  3. Large prefill — when the input context is larger than the output response
  4. Multi-stage pipelines — when sequential processing is needed, in which the output of one model becomes the input of the next

*Example benchmark figures for 96 prefill tokens, measured on the CPU using llama.cpp:

Model               Raspberry Pi 5 CPU    Hailo-10H
QWEN2.5-1.5B-4int   2039 ms               320 ms
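To put those benchmark numbers in perspective, the prefill figures work out to roughly a 6.4x speedup; a quick check:

```python
# Prefill benchmark from the table above (96 prefill tokens,
# llama.cpp on the Pi 5 CPU vs. the Hailo-10H accelerator).
cpu_ms = 2039
hailo_ms = 320

speedup = cpu_ms / hailo_ms
print(f"Hailo-10H prefill speedup over the Pi 5 CPU: {speedup:.1f}x")
```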

Ideal use cases

Vision-language models (VLMs)

VLMs map naturally to the AI HAT+ 2’s strengths, as the image encoder is a high-compute stage that generates compact token embeddings as output. The Hailo-10H accelerator enables event triggering, logging, indexing, captioning, and smart searching with free text, using a 2B-parameter model that would be prohibitively slow to run on the Raspberry Pi’s CPU alone.

We can think of countless applications in home security and surveillance, such as turning off your alarm when your package is being delivered and notifying you once the delivery is complete, or sending you a log of meaningful pet-monitoring events at the end of each day. The AI HAT+ 2 is also ideal for security and monitoring applications in industries like quality assurance, healthcare, and industrial automation.

Voice to action

Another strong application of the AI HAT+ 2 is a local voice-to-action agent, combining high-compute inference with relatively low-bandwidth interaction. These workflows often rely on a large prefill step, i.e. processing a big, changing input context before generating a short response, which can be much slower on the Raspberry Pi’s CPU alone. This is particularly useful for agents that continuously ingest fresh data (including sensor readings, device states, logs, schedules, and recent events) and then respond locally with a short command or action.

The full sequential pipeline first converts free speech to text using a Whisper-class model, after which a small LLM handles intent understanding, decision-making, and natural free-text interaction, triggering real-world actions locally and reliably. This architecture enables agentic AI and physical AI at the edge by supporting larger Whisper models for improved accuracy, delivering low-cost, responsive, privacy-preserving, real-time voice control for a seamless user experience.
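The two-stage pipeline described above can be sketched as plain function composition. Everything here is illustrative rather than a Hailo API: the `transcribe` and `decide` callables stand in for a Whisper-class speech-to-text model and a small LLM, and the toy lambdas exist only so the wiring can be exercised:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    """A local, real-world action chosen by the language model."""
    name: str
    argument: str

def voice_to_action(
    audio: bytes,
    transcribe: Callable[[bytes], str],    # Whisper-class speech-to-text stage
    decide: Callable[[str, str], Action],  # small LLM: (context, text) -> action
    context: str,                          # fresh sensor/device state, logs, schedules
) -> Action:
    """Sequential edge pipeline: speech -> text -> intent -> local action."""
    text = transcribe(audio)
    return decide(context, text)

# Toy stand-ins so the pipeline runs without any model:
action = voice_to_action(
    b"...",
    transcribe=lambda audio: "turn on the workshop lights",
    decide=lambda ctx, text: Action("lights_on", "workshop"),
    context="workshop lights: off",
)
print(action)  # Action(name='lights_on', argument='workshop')
```

The large-prefill step the article describes corresponds to the `context` argument: it changes on every call, while the generated output stays short.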

There are endless applications here too. For example, local voice to action enables natural, touchless control of devices, eliminating the need to navigate between elaborate menus and submenus or flip through tedious manuals. Another example application is intuitive wayfinding and navigation in public spaces, such as shopping centres, airports, and campuses, where users can state what they want to do rather than the exact location they need to find (e.g. “Where can I buy sunglasses?”, “Where can I get lunch?”, or “How do I reach my gate?”). In robotics and industrial systems, voice to action can facilitate more responsive human–machine interactions and more seamless cooperation.

Advanced vision applications

When it comes to demanding vision workloads, the AI HAT+ 2 enables a step change in performance. Its high compute power and efficient on-device execution translate directly into large performance gains — as much as 100% faster than the previous Raspberry Pi AI HAT+.

The Hailo-10H chip accelerates large convolutional neural networks (CNNs) and transformer-based vision models, including CLIP, zero-shot detection, and high-capacity object detectors, enabling richer perception without increasing bandwidth or power. This makes it possible to build physical AI systems that combine multiple vision stages — detection, embedding, semantic matching, and reasoning — entirely at the edge, unlocking more capable and responsive applications in home automation, security, robotics, retail, industrial automation, and more. With no cloud connectivity, no data leaves the device, and there are no network lags or costs.

Play to its strengths

The Raspberry Pi AI HAT+ 2 is at its most powerful when certain strengths are harnessed for the right applications. Some examples include:

  • Free text operation without cloud dependency: offline home automation and robotics
  • Small language outputs for event triggering, captioning, and summarisation on top of real-time vision: home security
  • Air-gapped generative summarisation of logs and sensor data: secure industrial monitoring
  • Natural speech and zero-queue interaction with information agents: information kiosks

Bottom line: Don’t ask your toaster for history lessons…

The Raspberry Pi AI HAT+ 2 isn’t designed to compete with cloud inferencing; large LLMs will always run better where compute and memory are effectively unconstrained. However, for edge scenarios that value privacy, offline operation, low latency, and low power consumption, it unlocks real capabilities that weren’t feasible on the Raspberry Pi platform before, with or without the original AI HAT+.

You will make the best use of it when you need to run tightly scoped, on-device generative tasks alongside vision or real-world sensor input, particularly when the alternative is cloud dependency or far larger and more expensive hardware.

The robust Hailo Community has thousands of active developers. Recent integrations with Frigate and Home Assistant make the AI HAT+ 2 the most attractive option for anyone looking to make their first steps in physical AI and home automation.

The post When and why you might need the Raspberry Pi AI HAT+ 2 appeared first on Raspberry Pi.


Quick Overview of Uno Platform Studio




The Anti-AI Movement

From: AIDailyBrief
Views: 184

Analysis of the rising anti‑AI movement and constituent groups ranging from existential risk advocates to datacenter protesters. Summary of polling and anecdotes revealing broad public distrust, economic anxiety, and localized pushback. Discussion of industry messaging failures, policy options, and practical steps to address legitimate concerns while supporting beneficial AI adoption.

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at http://patreon.com/aidailybrief
Learn more about the show https://aidailybrief.ai/


Can AI Achieve Consciousness? — With Michael Pollan

1 Share

Michael Pollan is the author of A World Appears: A Journey into Consciousness. Pollan joins Big Technology Podcast to discuss whether AI can ever become conscious and what that question reveals about the nature of mind. Tune in to hear a nuanced debate about whether consciousness is computable, where today’s LLMs fall short, and how researchers might actually test machine consciousness in the future. We also cover materialism vs. spirituality, the “hard problem” of consciousness, psychedelic experiences, and the emerging science of plant sentience. Hit play for a thoughtful, surprising conversation that brings the AI consciousness debate back down to earth while opening up some of its strangest possibilities.

---

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.

Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b


Learn more about your ad choices. Visit megaphone.fm/adchoices





Download audio: https://pdst.fm/e/tracking.swap.fm/track/t7yC0rGPUqahTF4et8YD/pscrb.fm/rss/p/traffic.megaphone.fm/AMPP2068592565.mp3?updated=1772027350