Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Implementing the Backend-for-Frontend (BFF) / Curated API Pattern Using Azure API Management


Modern digital applications rarely serve a single type of client. Web portals, mobile apps, partner integrations, and internal tools often consume the same backend services—yet each has different performance, payload, and UX requirements.

Exposing backend APIs directly to all clients frequently leads to over-fetching, chatty client-server interactions, and tight coupling between UI and backend domain models. This is where a Curated API or Backend-for-Frontend (BFF) design pattern becomes useful.

What Is the Backend-for-Frontend (BFF) Pattern?

The Backend-for-Frontend (BFF)—also known as the Curated API pattern—solves this problem by introducing a client-specific API layer that shapes, aggregates, and optimizes data specifically for the consuming experience. There is very good architectural guidance on this in the Azure Architecture Center [see the 1st link in the Citations section].

The BFF pattern introduces a dedicated backend layer for each frontend experience. Instead of exposing generic backend services directly, the BFF:

  • Aggregates data from multiple backend services
  • Filters and reshapes responses
  • Optimizes payloads for a specific client
  • Shields clients from backend complexity and change

Each frontend (web, mobile, partner) can evolve independently, without forcing backend services to accommodate UI-specific concerns.
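To make the pattern concrete, here is a minimal, hypothetical sketch of BFF-style aggregation in Python. The backend functions, field names, and reshaping rules are all illustrative (a real BFF would call HTTP services); the point is the aggregate-then-reshape flow:

```python
# Minimal BFF aggregation sketch with hypothetical in-process "backends".
from concurrent.futures import ThreadPoolExecutor

def fetch_profile(user_id):
    # Stand-in for a generic backend returning a full domain model.
    return {"id": user_id, "name": "Ada", "ssoGroups": ["a", "b"],
            "auditTrail": []}

def fetch_orders(user_id):
    # Stand-in for a second backend service.
    return [{"orderId": 1, "total": 42.0, "warehouseCode": "X-9"}]

def mobile_profile_view(user_id):
    """Aggregate both backends in parallel and reshape for a mobile client."""
    with ThreadPoolExecutor() as pool:
        profile_f = pool.submit(fetch_profile, user_id)
        orders_f = pool.submit(fetch_orders, user_id)
        profile, orders = profile_f.result(), orders_f.result()
    # Return only what the mobile screen needs; backend internals stay hidden.
    return {
        "name": profile["name"],
        "orderCount": len(orders),
        "lastOrderTotal": orders[-1]["total"] if orders else None,
    }
```

Note that the client sees one small payload shaped for its screen, while the backend models stay generic and reusable.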

Why Azure API Management Is a Natural Fit for BFF

Azure API Management is commonly used as an API gateway, but its policy engine enables much more than routing and security.

Using APIM policies, you can:

  • Call multiple backend services (sequentially or in parallel)
  • Transform request and response payloads to provide a uniform experience
  • Apply caching, rate limiting, authentication, and resiliency policies

All of this can be achieved without modifying backend code, making APIM an excellent place to implement the BFF pattern.

When Should You Use a Curated API in APIM?

Using APIM as a BFF makes sense when:

  • Frontend clients require optimized, experience-specific payloads
  • Backend services must remain generic and reusable
  • You want to reduce round trips from mobile or low-bandwidth clients
  • You want to implement uniform policies for cross-cutting concerns such as authentication/authorization, caching, rate limiting, and logging
  • You want to avoid building and operating a separate aggregation service
  • You need strong governance, security, and observability at the API layer

How the BFF Pattern Works in Azure API Management

There is a GitHub repository [see the 2nd link in the Citations section] that provides a wealth of information and samples on how to create complex APIM policies.

I recently contributed to this repository with a sample policy for Curated APIs [see the 3rd link in the Citations section].

At a high level, the policy follows this flow:

  1. APIM receives a single client request
  2. APIM issues parallel calls to multiple backend services, as shown below:

     <wait for="all">
         <send-request mode="copy" response-variable-name="operation1" timeout="{{bff-timeout}}" ignore-error="false">
             <set-url>@("{{bff-baseurl}}/operation1?param1=" + context.Request.Url.Query.GetValueOrDefault("param1", "value1"))</set-url>
         </send-request>
         <send-request mode="copy" response-variable-name="operation2" timeout="{{bff-timeout}}" ignore-error="false">
             <set-url>{{bff-baseurl}}/operation2</set-url>
         </send-request>
         <send-request mode="copy" response-variable-name="operation3" timeout="{{bff-timeout}}" ignore-error="false">
             <set-url>{{bff-baseurl}}/operation3</set-url>
         </send-request>
         <send-request mode="copy" response-variable-name="operation4" timeout="{{bff-timeout}}" ignore-error="false">
             <set-url>{{bff-baseurl}}/operation4</set-url>
         </send-request>
     </wait>
    A few things to consider:
    • The wait policy allows us to make multiple requests using nested send-request policies. The for="all" attribute means policy execution waits for all nested send-requests to complete before moving to the next policy.
    • {{bff-baseurl}}: This example assumes a single base URL for all endpoints, but it does not have to be one; each call can target any endpoint.
    • The response-variable-name attribute sets a unique variable name to hold the response object from each of the parallel calls. These variables are used later in the policy to transform and produce the curated result.
    • The timeout attribute: this example uses a uniform timeout for every endpoint, but timeouts can vary per call as well.
    • ignore-error: set this to true only when you are not concerned about the response from the backend (a fire-and-forget request); otherwise keep it false so that the response variable captures the response, including any error code.
  3. Once responses from all the requests have been received (or timed out), policy execution moves to the next policy
  4. The responses from all requests are then collected and transformed into a single response:

     <!-- Collect the complete response in a variable. -->
     <set-variable name="finalResponseData" value="@{
         JObject finalResponse = new JObject();
         int finalStatus = 200; // Assumes the final success status (if all backend calls succeed) is 200 - OK; can be customized.
         string finalStatusReason = "OK";

         void ParseBody(JObject element, string propertyName, IResponse response){
             string body = "";
             if(response != null){
                 body = response.Body.As<string>();
                 try{
                     var jsonBody = JToken.Parse(body);
                     element.Add(propertyName, jsonBody);
                 }
                 catch(Exception ex){
                     element.Add(propertyName, body);
                 }
             }
             else{
                 element.Add(propertyName, body); // Add an empty body if the response was not captured
             }
         }

         JObject PrepareResponse(string responseVariableName){
             JObject responseElement = new JObject();
             responseElement.Add("operation", responseVariableName);
             IResponse response = context.Variables.GetValueOrDefault<IResponse>(responseVariableName);
             if(response == null){
                 finalStatus = 207; // If any of the responses is null, the final status will be 207
                 finalStatusReason = "Multi Status";
                 ParseBody(responseElement, "error", response);
                 return responseElement;
             }
             int status = response.StatusCode;
             responseElement.Add("status", status);
             if(status == 200){ // Assumes all the backend APIs return 200; if they return other success codes (e.g. 201), add them here
                 ParseBody(responseElement, "body", response);
             }
             else{ // If any of the response codes is non-success, the final status will be 207
                 finalStatus = 207;
                 finalStatusReason = "Multi Status";
                 ParseBody(responseElement, "error", response);
             }
             return responseElement;
         }

         // Gather responses into a JSON array.
         // Pass each of the response variable names here.
         JArray finalResponseBody = new JArray();
         finalResponseBody.Add(PrepareResponse("operation1"));
         finalResponseBody.Add(PrepareResponse("operation2"));
         finalResponseBody.Add(PrepareResponse("operation3"));
         finalResponseBody.Add(PrepareResponse("operation4"));

         // Populate finalResponse with aggregated body and status information.
         finalResponse.Add("body", finalResponseBody);
         finalResponse.Add("status", finalStatus);
         finalResponse.Add("reason", finalStatusReason);
         return finalResponse;
     }" />

     This code prepares the response as a single JSON object with the help of the PrepareResponse function. The JSON not only collects the response body from each response variable, but also captures the response codes and determines the final response code based on the individual ones. For the purpose of this example, I have assumed all operations are GET operations; if every operation returns 200, the overall response is 200 - OK, otherwise it is 207 - Multi-Status. This can be customized to the actual scenario as needed.
  5. Once the final response variable is ready, construct and return a single response based on the above calculation:

     <!-- This shows how to return the final response code and body. Other response elements (e.g. outbound headers) can be curated and added here the same way. -->
     <return-response>
         <set-status code="@((int)((JObject)context.Variables["finalResponseData"]).SelectToken("status"))" reason="@(((JObject)context.Variables["finalResponseData"]).SelectToken("reason").ToString())" />
         <set-body>@(((JObject)context.Variables["finalResponseData"]).SelectToken("body").ToString(Newtonsoft.Json.Formatting.None))</set-body>
     </return-response>

     

  6. This effectively turns APIM into an experience-specific backend tailored to frontend needs.
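The status-rollup rule used in step 4 can be mirrored in plain code. This hypothetical Python sketch (not part of the policy itself) shows the aggregation rule: all calls returning 200 yield 200 - OK, while any failure or missing response degrades the aggregate to 207 - Multi-Status:

```python
# Hypothetical mirror of the policy's aggregation: each entry in `responses`
# maps an operation name to a (status_code, body) tuple, or None when the
# response was never captured.
def aggregate(responses):
    final_status, reason, parts = 200, "OK", []
    for name, resp in responses.items():
        if resp is None:
            # Missing response: degrade the aggregate and record an empty error.
            final_status, reason = 207, "Multi Status"
            parts.append({"operation": name, "error": ""})
            continue
        status, body = resp
        part = {"operation": name, "status": status}
        if status == 200:
            part["body"] = body
        else:
            # Any non-success backend code degrades the aggregate to 207.
            final_status, reason = 207, "Multi Status"
            part["error"] = body
        parts.append(part)
    return {"status": final_status, "reason": reason, "body": parts}
```

As in the policy, the curated response still carries every backend's individual status, so the client can tell which operation failed.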

When Not to Use APIM for a BFF Implementation

While this approach works well when you want to curate a few responses together and apply a unified set of policies, there are some cases where you might want to rethink it:

  1. When the transformation is complex. Maintaining a lot of code in APIM is not fun. If the response transformation requires substantial code that needs unit tests and is likely to change over time, it might be better to stand up a dedicated curation service. Azure Functions and Azure Container Apps are well suited for this.
  2. When each backend endpoint requires very complex request transformation, that also increases the amount of code, which again points to an independent curation service.
  3. If you are not already using APIM, implementing a BFF alone does not warrant adding one to your architecture.

Conclusion

Using APIM is one of many approaches you can use to create a BFF layer on top of your existing endpoints. Let me know in the comments what you think of this approach.

Citations

  1. Azure Architecture Center – Backend-for-Frontends Pattern
  2. Azure API Management Policy Snippets (GitHub)
  3. Curated APIs Policy Example (GitHub)
  4. Send-request Policy Reference
Read the whole story
alvinashcraft
1 hour ago
reply
Pennsylvania, USA
Share this story
Delete

Announcing Fireworks AI on Microsoft Foundry


We’re excited to announce that starting today, Microsoft Foundry customers can access the high-performance, low-latency inference of popular open models hosted on the Fireworks cloud from their Foundry projects, and even deploy their own customized versions, too!

As part of the Public Preview launch, we’re offering the most popular open models for serverless inference in both pay-per-token (US Data Zone) and provisioned throughput (Global Provisioned Managed) deployments. This includes:

  • Minimax M2.5 🆕
  • OpenAI’s gpt-oss-120b
  • MoonshotAI’s Kimi-K2.5
  • DeepSeek-v3.2

For customers that have been looking for a path to production with models they’ve post-trained, you can now import your own fine-tuned versions of popular open models and deploy them at production scale with Fireworks AI on Microsoft Foundry.

 

The Microsoft Foundry model catalog showing the new Fireworks on Foundry model collection.

Serverless (pay-per-token)

For customers wanting per-token pricing, we’re launching with Data Zone Standard in the United States. You can make model deployments for Foundry resources in the following regions:

  • East US
  • East US 2
  • Central US
  • North Central US
  • West US
  • West US 3

Depending on your Azure subscription type, you’ll automatically receive either a 250K or 25K tokens per minute (TPM) quota limit per region and model. (Azure Student and Trial subscriptions will not receive quota at this time.)

Per-token pricing rates include input, cached input, and output tokens priced per million tokens.

 

| Model | Input Tokens ($/1M tokens) | Cached Tokens ($/1M tokens) | Output Tokens ($/1M tokens) |
|---|---|---|---|
| gpt-oss-120b | $0.17 | $0.09 | $0.66 |
| kimi-k2.5 | $0.66 | $0.11 | $3.30 |
| deepseek-v3.2 | $0.62 | $0.31 | $1.85 |
| minimax-m2.5 | $0.33 | $0.03 | $1.32 |
 

As we work together with Fireworks to launch the latest OSS models, the supported models will evolve as popular research labs push the frontier!

 

Provisioned Throughput

For customers looking to shift or scale production workloads on these models, we’re launching with support for Global provisioned throughput. (Data Zone support will be coming soon!)

Provisioned throughput for Fireworks models works just like it does for Foundry models: PTUs are designed to deliver consistent performance in terms of inter-token latency. Your existing quota for Global PTUs applies, as do any reservation commitments!

 

 

| | gpt-oss-120b | Kimi-K2.5 | DeepSeek-v3.2 | MiniMax-M2.5 |
|---|---|---|---|---|
| Global provisioned minimum deployment | 80 | 800 | 1,200 | 400 |
| Global provisioned scale increment | 40 | 400 | 600 | 200 |
| Input TPM per PTU | 13,500 | 530 | 1,500 | 3,000 |
| Latency Target Value | 99% > 50 Tokens Per Second^ | 99% > 50 Tokens Per Second^ | 99% > 50 Tokens Per Second^ | 99% > 50 Tokens Per Second^ |

^ Calculated as p50 request latency on a per 5 minute basis.
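For rough capacity planning with the figures above, a small sketch (illustrative helpers, not an official API) using gpt-oss-120b's numbers: an 80 PTU minimum deployment, 40 PTU scale increments, and 13,500 input TPM per PTU:

```python
import math

def input_tpm(ptus, tpm_per_ptu=13_500):
    """Total input tokens-per-minute capacity for a deployment."""
    return ptus * tpm_per_ptu

def valid_ptus(requested, minimum=80, increment=40):
    """Round a requested PTU count up to the nearest deployable size."""
    if requested <= minimum:
        return minimum
    return minimum + math.ceil((requested - minimum) / increment) * increment

# A minimum-size gpt-oss-120b deployment serves 80 * 13,500 input TPM.
min_capacity = input_tpm(80)
```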

 

Custom Models

Have you post-trained a model like gpt-oss-120b for your particular use case? With Fireworks on Foundry you can deploy, govern, and scale your custom models all within your Foundry project. This means full fine-tuned versions of models from the following families can be imported and deployed as part of preview:

  • Qwen3-14B
  • OpenAI gpt-oss-120b
  • Kimi K2 and K2.5
  • DeepSeek v3.1 and v3.2

 

The new Custom Models page in the Models experience lets you initiate the import process for copying your model weights into your Foundry project.

 

Importing Custom Models into Microsoft Foundry is available under Build -> Models -> Custom Models.

To perform a high-speed transfer of the files into Foundry, we’ve added a new feature to the Azure Developer CLI (azd) for transferring a directory of model weights. The Foundry UI will give you CLI arguments to copy and paste for quickly running azd ai models create pointed at your Foundry project.

Enabling Fireworks AI on Microsoft Foundry in your Subscription

While in preview, customers must opt-in to integrate their Microsoft Foundry resources with the Fireworks inference cloud to perform model deployments and send inference requests. Opt-in is self-service and available in the Preview features panel within your Azure portal.

 

The Azure Preview features panel for an Azure subscription where you can enable the Fireworks on Foundry experience.

For additional details on finding and enabling the preview feature, please see the new product documentation for Fireworks on Foundry.

 

Frequently Asked Questions

How are Fireworks AI on Microsoft Foundry models different than Foundry Models?

Models provided directly from Azure include some open-source models as well as proprietary models from labs like Black Forest Labs, Cohere, xAI, and others. These models undergo rigorous model safety and risk assessments based on Microsoft’s Responsible AI standard.

For customers needing the latest open-source models from emerging frontier labs, break-neck speed, or the ability to deploy their own post-trained custom models, Fireworks delivers best-in-class inference performance. Whether you’re focused on minimizing latency or just staying ahead of the trends, Fireworks AI on Microsoft Foundry gives you additional choice in the model catalog.

Still need to quantify model safety and risk? Foundry provides a suite of observability tools with built-in risk and safety evaluators, letting you build AI systems confidently.

 

How is model retirement handled?

Customers using serverless per-token offers of models via Fireworks on Foundry will receive notice no less than 30 days before potential model retirement. You’ll be recommended to upgrade to either an equivalent, longer-term supported Azure Direct model or a newer model provided by Fireworks.

Customers looking to use models beyond the retirement period may do so via provisioned throughput deployments.

 

How can I get more quota?

For TPM quota, you may submit requests via our current Fireworks on Foundry quota form.

For PTU quota, please contact your Microsoft account team.

 

Can you support my custom model?

Let’s talk! In general, if your model meets Fireworks’ current requirements, we have you covered. You can reach out to your Microsoft account team or to contacts you may already have at Fireworks.

 


PicoCPC custom board


In the computer playground wars of the 1980s, children would spend hours extolling the virtues of either the ZX Spectrum or Commodore 64, depending on which side of the fence they sat. As the arguments raged, those who owned an Amstrad CPC machine tended to watch from the sidelines; but, deep down, they knew their chosen machine could very much hold its own.

The PicoCPC ROM is used for small data transfers from the add-on to the CPC; the final PicoCPC model’s PCB is going to have an RP2350B microcontroller soldered directly on to it

As time has gone on, there’s been a growing appreciation for the 8-bit computers created by Lord Sugar’s company. In recent years, a small but nonetheless loyal community has been creating a string of games that push the machines to their limits (check out Pinball Dreams as proof). They’ve also been producing hardware to take the CPC to the next level. A good example of this is the M4 board, which not only enables wireless LAN but also allows an SD card to be used for storage.

Joining the hardware roster is the PicoCPC, a new multi-purpose add-on that can be used across the CPC range, including the Plus machines launched in 1990. It’s impossible to sum up its capabilities in one sentence. Rather, this neat little device — built around a Raspberry Pi Pico 2 — provides a whole host of benefits for machines that maxed out at 128kB of memory. 

The prototype PicoCPC fitted with a Raspberry Pi Pico 2 and a small display

For starters, the PicoCPC extends the memory up to 1024kB. It also emulates a floppy disk controller, allows the use of up to 16 emulated ROMs (for instant access to the likes of the Protext word processor and alternative operating systems such as SymbOS and FutureOS), and enables cartridge-loading of software produced for the GX4000 console. It adds six-voice audio, courtesy of the PlayCity sound card emulation, and there’s also a clock. For CPC enthusiasts, it’s fast becoming an essential upgrade.

Key to success

The idea emerged after Stéphane Plantard noticed a problem with many previous CPC add-ons. “I discovered there were a lot of expansions for Amstrad’s early computers, but they were expensive, rare, and made with deprecated chips,” he says. Over time, he became very familiar with the inner workings of the CPC. He produced an external Gotek drive for the CPC 664 and the CPC 6128, as well as a device that powers the CPC and allows it to be connected to a TV instead of the bundled monitor.

The device has gone through a few revisions; Stéphane is also working on a cartridge with an SD card reader for Amstrad’s GX4000 console

“Then my friend Freddy started a project called PicoMEM for old PCs,” he says, of a device based on the RP2040 microcontroller that runs emulated 8-bit ISA boards on a real PC. “I thought I could do a similar card for the CPC, so I started to write some CPC-related code on the PicoMEM to test the SD card readings, which allowed me to become familiar with Raspberry Pi Pico. Freddy then pushed me to create my own card, and I’ve produced three prototypes so far.”

Stéphane says he approached the project according to agile principles, which essentially means he sought to be flexible, open to change, and willing to work closely with the CPC community (there are twelve principles in total, and they’re part of the Agile Manifesto published in 2001). “It means I maintain a backlog of ideas and set up my epics and my sprints,” he explains. “Having a strategy is a requirement when you work alone on such a big project.”

The Amstrad CPC 464 made its debut in 1984; the PicoCPC plugs into the back
(Credit: Bill Bertram, Wikimedia Commons, CC BY-SA 2.5)

In essence, Stéphane has been working in short development cycle bursts, or “sprints”, that tend to stretch to a week. “It has allowed me to achieve some progress and reach goals; it’s often enough to stay motivated,” he notes. Given the project has taken a year so far, motivation has proven important. Stéphane has certainly seen the many benefits of basing the project around a Raspberry Pi Pico 2, and he says it’s preferable to the alternatives he considered.

“The CPC generates a lot of signals that the card has to act upon, so an STM32 microcontroller would not be fast enough,” he says, of one potential choice. He could have opted for an FPGA solution instead, but this, too, posed problems: “It would have required expensive chips, and I don’t know how to work with them,” he adds. “At the end of the day, the Pico 2 is cheap and fast. Its two cores and the PIOs make it a lot more suitable for this kind of task than any other available microcontroller. It uses one core to manage CPC I/O, a second core to run the emulations, and the PIO pilots the multiplexors to get addresses and pull/push data from/to the CPC.”

Quick upgrade

Some of Stéphane’s ideas are still being worked on. “The PicoCPC does not support USB mice and joysticks yet. It’s in the backlog but not done,” he says. But, as it stands, the PicoCPC makes a big difference to the experience of using an Amstrad CPC, and it also addresses a few practical issues.

“An ordinary CPC user would want to use an original computer to play old games from back in the day,” Stéphane says. Since the tape-based CPC 464 is easier to find than the disk-based CPC 6128, the PicoCPC can be used to transform a 464 into a 6128 by adding an emulated floppy drive, more RAM, a later version of BASIC (v1.1), and the C3 mode which allows access to the extra RAM beyond the base 64kB. “All the CPC 6128 games and demos will then work on a CPC 464.”

You can use PicoCPC to boost the amount of memory for any one of Amstrad’s five CPC computers

Likewise, on a CPC 6128, PicoCPC can override the internal floppy. “Users won’t have to take care of deteriorated, old floppy disks,” Stéphane says. But that’s just touching the surface, he adds. “Many new games developed in recent years require 512kB of memory,” he explains. “The Pico does not offer enough memory for that, which is why I added an SRAM chip on the card.”

He hopes the availability of PlayCity will push future software developers to use the expanded audio capabilities, and he’s excited about other possibilities. “I plan to add proper hard-disk emulation, AdLib music card emulation, and new blank floppy disk creation,” he reveals. “But the first plan for the PicoCPC is to make it available. I’m working with retailers in France, the UK, and Spain, and hope it’ll be on sale very soon.”

The post PicoCPC custom board appeared first on Raspberry Pi.


GPT-5.4 Now Available in Microsoft Foundry


The pace of AI innovation continues to accelerate. Just a few months after the release of GPT-5.2 and GPT-5.3, Microsoft Foundry now brings us GPT-5.4—OpenAI’s most capable frontier model to date. This release marks a significant leap forward in reasoning, agentic workflows, and professional-grade automation.

Whether you’re building AI agents, automating complex workflows, or exploring new frontiers in software development and data analysis, GPT-5.4 is designed to deliver faster, more reliable, and more intelligent results.

  1. What’s in GPT-5.4?
  2. Introducing GPT-5.4 Pro
  3. Capabilities
  4. Pricing and Availability
  5. Thoughts
  6. Resources

What’s in GPT-5.4?

GPT-5.4 introduces a range of enhancements that elevate its performance across professional and enterprise scenarios:

  • Built-in agentic workflows for planning and execution
  • Native computer use capabilities (keyboard, mouse, screenshots)
  • Tool Search for navigating large tool ecosystems
  • Support for very large context windows (up to 1,050,000 tokens)
  • Improved token efficiency for faster, lower-cost responses
  • Enhanced coding and software automation reliability
  • Higher factual accuracy and reduced hallucinations

These capabilities make GPT-5.4 a powerful tool for document and spreadsheet creation, coding, data analysis, and long-form reasoning tasks.

Introducing GPT-5.4 Pro

For scenarios where analytical depth and completeness are more important than latency, Microsoft Foundry also offers GPT-5.4 Pro—a premium variant of the model.

GPT-5.4 Pro is designed for deep analytical workflows, such as scientific research, strategic decision-making, and complex problem-solving. It introduces:

  • Multi-path reasoning evaluation to explore alternative solutions
  • Greater analytical depth for problems with trade-offs or multiple valid outcomes
  • Improved stability across long reasoning chains
  • Enhanced decision support where rigor outweighs speed

With a large context window (400,000 tokens at the moment, with expansion coming soon) and the same high output capacity (128,000 tokens), GPT-5.4 Pro is the go-to model for high-assurance, high-complexity tasks.

Capabilities

To help you evaluate the evolution of these models, here’s a side-by-side comparison of GPT-5.4, GPT-5.4 Pro, and GPT-5.2:

| Capability | GPT-5.4 | GPT-5.4 Pro | GPT-5.2 |
|---|---|---|---|
| Reasoning | Stronger reasoning for complex, multi-step tasks | Multi-path reasoning evaluation for deeper analysis | Adaptive reasoning for complex queries |
| Agentic Workflows | Built-in agentic workflows for planning and execution | Enhanced agentic workflows with improved stability | Accelerates agent development |
| Computer Interaction | Native computer use (keyboard, mouse, screenshots) | Same as GPT-5.4 | Not available |
| Tool Management | Tool Search for large tool ecosystems | Same as GPT-5.4 | Reliable tool use and governed integrations |
| Token Efficiency | Improved for faster, lower-cost responses | Same as GPT-5.4 | Improved over GPT-5.1 |
| Coding Reliability | Enhanced software automation and code generation | Same as GPT-5.4 | Reliable code generation and modernization |
| Factual Accuracy | Higher factual accuracy and reduced hallucinations | Same as GPT-5.4 | Greater consistency and accuracy |
| Context Window | 1,050,000 tokens | 400,000 tokens (1,050,000 coming soon) | 400,000 tokens |
| Context Memory – Output | 128,000 tokens | 128,000 tokens | 128,000 tokens |
| Training Data Cutoff | August 2025 | August 2025 | August 2025 |
| Best For | Reliable execution, agentic workflows, software automation | Scientific research, complex decision-making, deep analytical workflows | Long-form reasoning, structured content, enterprise agents |
| Latency vs. Depth | Prioritizes speed and reliability | Prioritizes analytical depth and completeness over latency | Balanced performance |

Pricing and Availability

Microsoft Foundry offers flexible pricing for GPT-5.4 based on context length:

  • GPT‑5.4 (<272K input tokens):
    • $2.50 per million input tokens
    • $0.25 per million cached input tokens
    • $15.00 per million output tokens
  • GPT‑5.4 (>272K input tokens):
    • $5.00 per million input tokens
    • $0.50 per million cached input tokens
    • $22.50 per million output tokens
  • GPT‑5.4 Pro:
    • $30.00 per million input tokens
    • $180.00 per million output tokens

At launch, GPT-5.4 is available in Standard Global and Standard Data Zone (US), while GPT-5.4 Pro is available in Standard Global, with more deployment options expected soon.

Thoughts

GPT-5.4 is a great leap forward in enterprise AI. With its massive context window, built-in agentic capabilities, and native computer interaction, it’s built for the next generation of professional work. And for those who need even more analytical depth, GPT-5.4 Pro offers unmatched reasoning power.

The future of AI in the enterprise is arriving faster than ever—and Microsoft Foundry is making it easier to adopt and scale these capabilities securely and efficiently.

What are you most excited to build with GPT-5.4?


Resources






Privacy's Defender


For more than three decades, Cindy Cohn, the executive director of the Electronic Frontier Foundation (EFF) has been at the center of the fight to protect privacy, free expression, and innovation online—taking on the NSA’s mass surveillance programs, defending encryption, and pushing back against efforts to weaken digital security in the name of safety. In her new book, Privacy's Defender, she reflects on the landmark cases that shaped the modern internet, the values that guide EFF’s work, and why privacy is not about hiding wrongdoing, but about preserving human autonomy and democracy in a networked world. Rainey Reitman, co-founder of the Freedom of the Press Foundation, leads our conversation.

Grab your copy of Privacy's Defender: https://mitpress.mit.edu/9780262051248/privacys-defender/ 

This conversation was recorded on 02/23/2026.

Check out all of the Future Knowledge episodes at https://archive.org/details/future-knowledge





Download audio: https://media.transistor.fm/fb8e3281/92372d70.mp3

MBW 1015: Who Shot Apple Intelligence? - The MacBook Neo


Apple unveiled the MacBook Neo, the company's foray into a low-cost laptop. The iPhone Fold's supposed design has leaked through 3D CAD rendering files. And a toolkit for hacking iPhones has leaked.

  • 18 years later, Apple ships a $599 computer.
  • Apple's TikTok ads for the MacBook Neo are the right kind of weird.
  • Apple creates adorable little Finder guy to promote its adorable little Mac.
  • The new Apple begins to emerge.
  • Apple 'Ultra' products expansion is up next after MacBook Neo launch.
  • iPhone Fold design leaks in purported 3D CAD rendering files.
  • Apple's 'HomePad' gets launch timing update via leaker.
  • Apple's 512GB Mac Studio vanishes, a quiet acknowledgment of the RAM shortage.
  • Apple Music to add Transparency Tags to distinguish AI music, says report.
  • Apple ran a test on the App Store to see if AI could improve search result rankings.
  • Apple geoblocking downloads of ByteDance-owned apps in the US.
  • A toolkit for hacking iPhones, possibly created for the U.S. Government, has leaked.
  • F1: The Stream - how the launch leveraged Apple's entire ecosystem.
  • 'Apple' Review: Reinvention Incorporated.

Picks of the Week

  • Christina's Pick: What's Your JND Game
  • Andy's Pick: Kids, Wait Till You Hear This
  • Jason's Pick: Cloth Pro Max
  • Leo's Pick: Art Bits from HyperCard

Hosts: Leo Laporte, Andy Ihnatko, Jason Snell, and Christina Warren

Download or subscribe to MacBreak Weekly at https://twit.tv/shows/macbreak-weekly.

Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit

Sponsors:





Download audio: https://pdst.fm/e/pscrb.fm/rss/p/mgln.ai/e/294/cdn.twit.tv/megaphone/mbw_1015/ARML9068113820.mp3