
GitHub Copilot CLI 1.0.2


2026-03-06

To commemorate GitHub Copilot CLI reaching general availability last week, we're incrementing the major version to 1.0!

  • Type 'exit' as a bare command to close the CLI
  • Ask_user form now submits with Enter key and allows custom responses in enum fields
  • Support 'command' field as cross-platform alias for bash/powershell in hook configs
  • Hook configurations now accept timeout as alias for timeoutSec
  • Fix handling of meta with control keys (including shift+enter from /terminal-setup)
Read the whole story
alvinashcraft
13 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

What’s new in Microsoft Foundry | February 2026


TL;DR

  • Claude Opus 4.6 + Sonnet 4.6: Anthropic’s frontier models arrive in Foundry with 1M-token context (beta), adaptive thinking, and context compaction — Opus for deep reasoning, Sonnet for cost-efficient scale.
  • GPT-Realtime-1.5 & GPT-Audio-1.5: Next-gen audio models with +7% instruction following, better multilingual support, and +10% alphanumeric transcription accuracy.
  • Grok 4.0 (GA) + Grok 4.1 Fast (Preview): xAI’s reasoning model graduates to GA; the new Fast variant lands at $0.20/M input tokens for high-throughput non-reasoning workloads.
  • FLUX.2 Flex: Text-heavy image generation purpose-built for UI prototyping and typography at $0.05/megapixel.
  • Microsoft Agent Framework (RC): 1.0.0rc1 for Python — API surface locked. Major breaking changes in credentials, sessions, and response patterns. Migration guide published.
  • Durable Agent Orchestration: New HITL pattern pairs Azure Durable Functions with Agent Framework and SignalR for agents that survive restarts and wait days for human approval.
  • Foundry Local — Sovereign Cloud: Large multimodal models now run fully disconnected on local hardware with APIs that mirror the cloud surface.
  • AI Toolkit for VS Code v0.30.0: Tool Catalog, Agent Inspector with F5 debugging, and a redesigned Agent Builder.
  • REST API v1 (GA): The core Foundry REST surface is now production-ready. SDKs across all languages are building on it in pre-release — GA announcements imminent.
  • SDK releases across all languages: Python (2.0.0b4), .NET (2.0.0-beta.1), JS/TS (2.0.0-beta.4), and Java (2.0.0-beta.1) all shipped new betas targeting the GA v1 REST surface with significant breaking changes — tool class renames, credential updates, and preview feature opt-in flags.

Join the community

Connect with 25,000+ developers on Discord, ask questions in GitHub Discussions, or subscribe via RSS to get this digest monthly.


Models

Claude Opus 4.6

Claude Opus 4.6 — Anthropic’s most capable reasoning model — is now available in Microsoft Foundry as a first-party deployment. If you need a model that can hold an entire codebase in context and reason over it end-to-end, this is it.

  • 1 million token context window (beta) — roughly 2,000 pages of documentation or an entire medium-sized repository in a single prompt
  • 128K max output tokens — complete code refactors, full-length analyses, and comprehensive documentation in one response
  • Adaptive thinking — the model dynamically decides how much reasoning a task needs; you control the floor with four effort levels (low, medium, high, max)
  • Context compaction (beta) — in long-running agentic sessions, older context is automatically summarized so agents don’t hit the wall mid-workflow
  • Pricing: $5 / $25 per million tokens (standard); premium tier applies beyond 200K input tokens ($10 / $37.50)

Available via serverless and managed compute deployments across all Azure regions.
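Using the rates quoted above, a quick back-of-the-envelope cost estimate is easy to sketch. One assumption in the code below: the premium rates ($10 / $37.50 per million tokens) are applied to the entire request once input exceeds 200K tokens — the post only says the premium tier "applies beyond 200K input tokens," so check the official pricing page for the exact tiering rules.

```python
# Back-of-the-envelope cost estimator for Claude Opus 4.6 on Foundry,
# built from the rates quoted above. Assumption: premium rates apply
# to the whole request once input exceeds 200K tokens.

STANDARD = (5.00, 25.00)   # $ per million input / output tokens
PREMIUM = (10.00, 37.50)   # assumed to apply when input > 200_000 tokens

def opus_cost(input_tokens: int, output_tokens: int) -> float:
    rate_in, rate_out = PREMIUM if input_tokens > 200_000 else STANDARD
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000

# A 150K-token prompt with a 10K-token answer stays on the standard tier:
print(opus_cost(150_000, 10_000))  # 0.75 + 0.25 = 1.0 dollars
```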

Claude Sonnet 4.6

One week after Opus, Claude Sonnet 4.6 landed in Microsoft Foundry — nearly the same intelligence tier at a substantially lower price point. If Opus is the model you reach for when accuracy is everything, Sonnet is the one you deploy when you need that quality at scale.

  • Same 1M-token context (beta) and 128K output as Opus 4.6
  • Same adaptive thinking and context compaction capabilities
  • Optimized for coding, agentic workflows, and professional content at high throughput
  • Designed for teams prioritizing cost-per-token without sacrificing frontier performance

GPT-Realtime-1.5 & GPT-Audio-1.5

Two new audio models shipped on February 23, targeting the real-time voice and audio processing stack. If you’re building voice agents, IVR deflection, or live transcription pipelines, these are meaningful upgrades.

| Model | Deployment ID | Key Improvements |
| --- | --- | --- |
| GPT-Realtime-1.5 | gpt-realtime-1.5-2026-02-23 | +7% instruction following, improved multilingual support, better tool calling, +5% reasoning (Big Bench Audio) |
| GPT-Audio-1.5 | gpt-audio-1.5-2026-02-23 | +10.23% alphanumeric transcription accuracy, improved multilingual support |

Both maintain low-latency real-time interactions via chat completion APIs. Drop-in replacements for their predecessors — no API surface changes required.

Grok 4.0 (GA) & Grok 4.1 Fast (Preview)

Grok 4.0 from xAI graduated to general availability on February 27 — the first xAI model to reach GA in Microsoft Foundry (it entered preview back in September 2025). Alongside it, Grok 4.1 Fast arrived in public preview as a high-throughput, non-reasoning variant.

| Model | Status | Pricing (per M tokens) | Best For |
| --- | --- | --- | --- |
| Grok 4.0 | GA | $5.50 input / $27.50 output | Complex reasoning, multi-step analysis |
| Grok 4.1 Fast (non-reasoning) | Preview | $0.20 input / $0.50 output | High-throughput classification, extraction, routing |

The Grok 4.1 reasoning variant is coming soon. Both are available via serverless or provisioned throughput deployments.

FLUX.2 Flex

FLUX.2 Flex from Black Forest Labs is purpose-built for text-heavy design work — UI prototyping, infographics, typography, and marketing assets where text rendering fidelity matters. It complements the FLUX.2 [pro] model (previewed in December) with a focus on getting text right in generated images.

  • Adjustable inference steps for speed/quality tradeoff
  • Pricing: $0.05 per megapixel (Global Standard)
  • Strong multi-prompt adherence for complex, text-rich compositions
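At $0.05 per megapixel, per-image cost falls out of the resolution directly. The sketch below assumes a "megapixel" means 1,000,000 pixels of output; the exact billing definition may differ.

```python
# Per-image cost for FLUX.2 Flex at the quoted $0.05/megapixel rate.
# Assumption: a "megapixel" is 1,000,000 output pixels.
RATE_PER_MEGAPIXEL = 0.05

def flux_image_cost(width: int, height: int) -> float:
    return width * height / 1_000_000 * RATE_PER_MEGAPIXEL

# A 1024x1024 image is ~1.05 MP, so roughly five cents:
print(round(flux_image_cost(1024, 1024), 4))  # 0.0524
```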

Model Router — GPT-5 Series Support

The Model Router now supports GPT-5 series models. Deploy the router as a single endpoint and it automatically selects the best underlying chat model based on your prompt characteristics — choose Balanced, Cost, or Quality mode to control the optimization axis. No application-level routing logic required.


Agents

Microsoft Agent Framework Reaches Release Candidate

The Microsoft Agent Framework hit 1.0.0rc1 for Python on February 19 — the API surface is locked, and GA is around the corner. This is the biggest developer-facing milestone of the month. If you’ve been tracking the beta releases, now is the time to migrate.

What’s new in the RC:

  • AgentFunctionApp — host agents on Azure Functions with automatic endpoints for workflow runs, status checks, and HITL responses
  • Fan-out/fan-in orchestration with shared state and configurable timeouts
  • BaseAgent implementations for Claude and GitHub Copilot SDKs — use Anthropic or GitHub models as first-class agent providers
  • Simplified AG-UI run method and Anthropic structured outputs via response_format

Breaking changes — this RC ships significant renames and API surface changes. Here’s what you need to update:

| Area | Before | After |
| --- | --- | --- |
| Credentials | ad_token, ad_token_provider, get_entra_auth_token | Single credential parameter (Azure Identity) |
| Sessions | AgentThread, get_new_thread() | AgentSession, create_session() / get_session() |
| Context | Multiple context_providers | Single context_provider per Agent; middleware as list |
| Responses | text= on ChatResponse/AgentResponse | messages=; updates use contents=[Content.from_text(...)] |
| Exceptions | ServiceException, ServiceResponseException | AgentException, AgentInvalidResponseException |
| Factory | create_agent | as_agent |

Action: Pin agent-framework-core==1.0.0rc1 and agent-framework-azure-ai==1.0.0rc1. Follow the migration guide to update your credential handling and session management code before GA lands.

Durable Agent Orchestration — Human-in-the-Loop

A new architectural pattern dropped on February 26: Durable Agent Orchestration — pairing Azure Durable Functions with the Microsoft Agent Framework and SignalR to build agents that can pause indefinitely for human approval.

The core idea: your agent does the heavy lifting (analyze logs, draft a remediation plan, prepare infrastructure changes), then calls wait_for_external_event and halts until a human approves. The durable orchestration survives process restarts, can wait for days, and picks up exactly where it left off.

  • One-line pause: wait_for_external_event in your orchestrator function
  • Real-time UX via SignalR streaming — humans see results as they’re generated
  • Use cases: incident response, infrastructure provisioning, document review workflows
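The pattern above can be sketched without any Azure dependencies. The toy orchestrator below is a generator that pauses at a wait-for-external-event step, and a driver replays recorded history to resume it — which is the essence of how a durable orchestration survives restarts. Every name here is illustrative; this is not the azure-functions-durable API.

```python
# Toy illustration of the durable HITL pattern: an orchestrator pauses
# on an external event and is resumed by replaying recorded history.
# Illustrative only -- not the real Durable Functions API.

def orchestrator():
    plan = yield ("call_activity", "draft_remediation_plan")
    approved = yield ("wait_for_external_event", "ApprovalEvent")
    return "applied" if approved else "rejected"

def replay(history):
    """Re-run the orchestrator, feeding recorded results back in.
    Returns ('pending', awaited_step) or ('done', result)."""
    gen = orchestrator()
    step = gen.send(None)  # advance to the first yield
    for result in history:
        try:
            step = gen.send(result)
        except StopIteration as stop:
            return ("done", stop.value)
    return ("pending", step)

# First run: the activity completes, then we block on human approval.
print(replay(["plan-v1"]))        # ('pending', ('wait_for_external_event', 'ApprovalEvent'))
# After a restart, the approval is in the history and the run finishes.
print(replay(["plan-v1", True]))  # ('done', 'applied')
```

The real implementation swaps the in-memory history for Durable Functions' persisted event log, which is why the orchestration can wait for days across process restarts.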

Platform

Foundry Local — Large Model Support for Sovereign Cloud

Foundry Local now supports large multimodal AI models in fully disconnected, sovereign environments. Announced February 24 as part of Microsoft’s broader Sovereign Cloud expansion, this is a significant capability jump — previously limited to smaller language models, Foundry Local can now run advanced multimodal models (text, image, audio) on local NVIDIA GPU hardware with zero cloud connectivity required.

  • APIs mirror the cloud surface: Responses API, function calling, agent services — same code, different runtime
  • Part of a unified sovereign stack alongside Azure Local (infrastructure) and Microsoft 365 Local (productivity)
  • Targets government, defense, finance, healthcare, and telecom organizations with strict data sovereignty requirements

AI Toolkit for VS Code v0.30.0

The AI Toolkit for VS Code shipped a major update — v0.30.0 — focused on making agent development more discoverable and debuggable:

  • Tool Catalog: A centralized hub to discover, configure, and integrate tools into your agents — no more hunting through docs to find what’s available
  • Agent Inspector: End-to-end debugging with F5 breakpoints, variable inspection, step-through execution, and real-time streaming visualization
  • Agent Builder redesign: Quick switcher between agents, Foundry prompt agent support, Tool Catalog integration, and an “Inspire Me” feature for generating agent instructions
  • Model Catalog: OpenAI Response API models (including gpt-5.2-codex) now appear in the catalog with improved reliability
  • Build Agent with GitHub Copilot: Generate entire agent workflows from natural language prompts

REST API v1 (GA)

The Foundry REST API v1 is now generally available. The core endpoints that everything else builds on — chat completions, responses, embeddings, files, fine-tuning, models, and vector stores — are production-ready and carry GA SLAs.

GA endpoints:

| Endpoint | Status |
| --- | --- |
| /openai/v1/chat/completions | GA |
| /openai/v1/responses | GA |
| /openai/v1/embeddings | GA |
| /openai/v1/files | GA |
| /openai/v1/fine_tuning/ | GA |
| /openai/v1/models | GA |
| /openai/v1/vector_stores | GA |

Still in preview: /openai/v1/evals and fine-tuning alpha graders.
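Targeting the v1 surface directly means little more than pointing a standard OpenAI-style request at the GA path. The sketch below only builds the request (it sends nothing); the /openai/v1/chat/completions path comes from the table above, while the base URL, the api-key header name, and the model name are placeholders you should verify against the REST reference.

```python
# Build (not send) a request against the GA v1 chat completions endpoint.
# The /openai/v1/chat/completions path is from the GA table above; the
# base URL, api-key header name, and model name are placeholder assumptions.
import json

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str) -> dict:
    return {
        "url": f"{base_url}/openai/v1/chat/completions",
        "headers": {
            "api-key": api_key,  # header name is an assumption
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request("https://YOUR-RESOURCE.example", "YOUR-KEY", "gpt-5", "Hello")
print(req["url"])  # https://YOUR-RESOURCE.example/openai/v1/chat/completions
```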

Why this matters: every Foundry SDK across Python, .NET, JS/TS, and Java is building its pre-release versions on top of this GA REST surface. The REST API going GA is the prerequisite for the SDK GA announcements that are now imminent. If you need production stability today and can’t wait for the SDK GA, you can target the v1 REST API directly — the contract is locked.


SDK & Language Changelog (February 2026)

The big picture: the Foundry REST API v1 went GA this month (see above). Every language SDK is now building its pre-release on that stable REST surface. The SDKs are converging on a unified azure-ai-projects package per language — agents, inference, evaluations, and memory all live under one roof. GA SDK announcements are imminent; here’s where each language stands today.

REST

The v1 REST surface is the foundation everything else builds on. Core endpoints (/openai/v1/chat/completions, /openai/v1/responses, /openai/v1/embeddings, /openai/v1/files, /openai/v1/fine_tuning/, /openai/v1/models, /openai/v1/vector_stores) are GA. Preview endpoints (/openai/v1/evals, fine-tuning alpha graders) remain in preview.

Action: If you need production SLAs today and your SDK is still pre-release, target the v1 REST API directly. The contract is locked.

API Lifecycle · REST Reference

Python

Foundry SDK (azure-ai-projects 2.0.0b4, Feb 24)

The fourth beta in the v2 consolidation line — and the first release explicitly targeting the GA v1 REST APIs. Python SDK GA is next.

Features:

  • Unified AIProjectClient for agents, evaluations, datasets, indexes, and memory stores
  • Bundles openai and azure-identity as direct dependencies
  • Tracing improvements align with OpenTelemetry gen_ai.* conventions

Breaking changes:

Tool class renames to align with OpenAI naming conventions (same pattern as JS/TS):

# Before
from azure.ai.projects import AzureAISearchAgentTool, OpenApiAgentTool, BingGroundingAgentTool

# After — GA tools drop the "Agent" infix
from azure.ai.projects import AzureAISearchTool, OpenApiTool, BingGroundingTool

# Before
from azure.ai.projects import MicrosoftFabricAgentTool, SharepointAgentTool, A2ATool

# After — Preview tools adopt a "PreviewTool" suffix
from azure.ai.projects import MicrosoftFabricPreviewTool, SharepointPreviewTool, A2APreviewTool
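Because the renames are mechanical, a small mapping table can drive a codemod over your imports. The helper below is illustrative and built only from the renames listed in this post; a real migration would also rewrite the import statements themselves.

```python
# Old-to-new tool class names from the 2.0.0b4 renames listed above.
# Illustrative only -- covers just the renames mentioned in this post.
TOOL_RENAMES = {
    # GA tools drop the "Agent" infix
    "AzureAISearchAgentTool": "AzureAISearchTool",
    "OpenApiAgentTool": "OpenApiTool",
    "BingGroundingAgentTool": "BingGroundingTool",
    # Preview tools adopt a "PreviewTool" suffix
    "MicrosoftFabricAgentTool": "MicrosoftFabricPreviewTool",
    "SharepointAgentTool": "SharepointPreviewTool",
    "A2ATool": "A2APreviewTool",
}

def migrate_name(name: str) -> str:
    """Map a pre-2.0.0b4 tool class name to its new name (identity if unchanged)."""
    return TOOL_RENAMES.get(name, name)

print(migrate_name("OpenApiAgentTool"))  # OpenApiTool
```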

Agent creation API changed — create() and update() removed in favor of versioned methods:

# Before
agent = project_client.agents.create(model="gpt-5", name="my-agent", instructions="...")
project_client.agents.update(agent.id, name="updated-agent")

# After — use create_version() for all agent creation
agent = project_client.agents.create_version(model="gpt-5", name="my-agent", instructions="...")

Preview operations moved to .beta subclient:

# Before
project_client.memory_stores.create(name="my-store")
project_client.evaluators.list_latest_versions()

# After — preview operations live under .beta
project_client.beta.memory_stores.create(name="my-store")
project_client.beta.evaluators.list()  # also renamed from list_latest_versions()

Preview features now require explicit opt-in via foundry_features:

# Workflow Agents (preview) — requires opt-in flag
from azure.ai.projects import FoundryFeaturesOptInKeys

project_client.agents.create_version(
    model="gpt-5",
    name="my-workflow-agent",
    foundry_features=FoundryFeaturesOptInKeys.WORKFLOW_AGENTS_V1_PREVIEW,
)

Tracing provider and attribute renames:

# Before
# Provider name: "azure.ai.agents"
# Span names: "responses {model_name}" / "responses {agent_name}"
# Event: gen_ai.system.instructions

# After
# Provider name: "microsoft.foundry"
# Span names: "chat {model_name}" / "invoke_agent {agent_name}"
# Attribute: gen_ai.system_instructions (not an event)

Action: Upgrade to azure-ai-projects==2.0.0b4. This is the last beta before the GA announcement — get your code on it now.

Changelog

.NET

Foundry SDK (Azure.AI.Projects 2.0.0-beta.1, Feb 24)

The .NET SDK joins the v2 consolidation with its first beta targeting the GA v1 REST surface, now with full net10 framework compatibility.

Features:

  • Full net10 framework support — <EnablePreviewFeatures> flagging removed
  • Added Evaluation sample

Breaking changes:

ImageBasedHostedAgentDefinition has been merged into HostedAgentDefinition:

// Before
var agentDef = new ImageBasedHostedAgentDefinition(model, image);

// After — Image is now an optional property on HostedAgentDefinition
var agentDef = new HostedAgentDefinition(model) { Image = image };

Tracing provider name and event format updated:

// Before
// Tracing event: gen_ai.system.instructions
// Provider name: "azure.ai.agents"

// After
// Tracing attribute: gen_ai.system_instructions
// Provider name: "microsoft.foundry"

Known issues: Computer use tool, fine-tuning, red team, and evaluation operations don’t yet support the latest API version — pin to the previous library version for those features until v1 operations are available.

Action: Upgrade to Azure.AI.Projects 2.0.0-beta.1. This is the first .NET beta on the v2 consolidation line — watch for the GA announcement.

Changelog

JavaScript / TypeScript

Foundry SDK (@azure/ai-projects 2.0.0-beta.4)

This release aligns all Azure tool class names with OpenAI naming conventions — a breaking change that will carry through to GA.

Breaking — GA tools drop the Agent infix:

// Before
import { AzureAISearchAgentTool, OpenApiAgentTool, AzureFunctionAgentTool, BingGroundingAgentTool } from "@azure/ai-projects";

// After
import { AzureAISearchTool, OpenApiTool, AzureFunctionTool, BingGroundingTool } from "@azure/ai-projects";

Breaking — Preview tools adopt a PreviewTool suffix:

// Before
import { MicrosoftFabricAgentTool, SharepointAgentTool, BingCustomSearchAgentTool, BrowserAutomationAgentTool, A2ATool } from "@azure/ai-projects";

// After
import { MicrosoftFabricPreviewTool, SharepointPreviewTool, BingCustomSearchPreviewTool, BrowserAutomationPreviewTool, A2APreviewTool } from "@azure/ai-projects";
  • ResponsesUserMessageItemParam removed as a valid ItemUnion member

Action: Upgrade to @azure/ai-projects@2.0.0-beta.4 and update all tool class references. These renames are final.

Changelog

Java

Foundry SDK (azure-ai-projects 2.0.0-beta.1, Feb 25)

The Java SDK joins the v2 consolidation with its first beta targeting the GA v1 REST surface.

Features:

  • buildOpenAIClient() and buildOpenAIAsyncClient() on AIProjectClientBuilder for directly obtaining a Stainless OpenAI client
  • New FoundryFeaturesOptInKeys enum for preview feature opt-in flags: EVALUATIONS_V1_PREVIEW, SCHEDULES_V1_PREVIEW, RED_TEAMS_V1_PREVIEW, INSIGHTS_V1_PREVIEW, MEMORY_STORES_V1_PREVIEW
  • Added ModelSamplingParams and AzureAIModelTarget models

Breaking changes:

Service version updated from 2025-11-15-preview to v1. Credential classes and sub-client methods are renamed — here’s what your code looks like before and after:

Before (pre-v2):

// Credentials used plural names
ApiKeyCredentials creds = new ApiKeyCredentials("your-key");
EntraIdCredentials entraId = new EntraIdCredentials(tokenCredential);
AgenticIdentityCredentials agenticId = new AgenticIdentityCredentials();

// Sub-client methods were generic
DeploymentsClient deployments = client.getDeploymentsClient();
deployments.get("my-deployment");
deployments.delete("my-deployment");

SchedulesClient schedules = client.getSchedulesClient();
schedules.delete("my-schedule");

IndexesClient indexes = client.getIndexesClient();
indexes.createOrUpdate("my-index", indexSpec);

EvaluationsClient evals = client.getEvaluationsClient();
OpenAIClient openai = evals.getOpenAIClient();

After (2.0.0-beta.1):

// ApiKeyCredential and EntraIdCredential drop the plural suffix;
// the agentic identity credential gains a Preview marker instead
ApiKeyCredential creds = new ApiKeyCredential("your-key");
EntraIdCredential entraId = new EntraIdCredential(tokenCredential);
AgenticIdentityPreviewCredentials agenticId = new AgenticIdentityPreviewCredentials();

// Sub-client methods now include the resource name
DeploymentsClient deployments = client.getDeploymentsClient();
deployments.getDeployment("my-deployment");
deployments.deleteDeployment("my-deployment");

SchedulesClient schedules = client.getSchedulesClient();
schedules.deleteSchedule("my-schedule");

IndexesClient indexes = client.getIndexesClient();
indexes.createOrUpdateVersion("my-index", indexSpec);

EvaluationsClient evals = client.getEvaluationsClient();
EvalService evalService = evals.getEvalService();

This rename pattern is pervasive across all sub-clients — see the full changelog for the complete list.

Action: First Java beta on the GA REST surface. Review the breaking changes thoroughly before adopting.

Changelog

Agent Framework

Agent Framework (agent-framework-core 1.0.0rc1 and agent-framework-azure-ai 1.0.0rc1, Feb 19)

Release Candidate — the API surface is locked. See the Agents section above for the full breakdown.

Features:

  • AgentFunctionApp for hosting agents on Azure Functions
  • BaseAgent implementations for Claude and GitHub Copilot SDKs
  • Fan-out/fan-in orchestration with shared state and configurable timeouts
  • Simplified AG-UI run method and Anthropic structured outputs

Breaking — Credential handling unified under Azure Identity:

# Before
from agent_framework_azure_ai import get_entra_auth_token

agent = AzureAIAgent(
    ad_token=get_entra_auth_token(),
    ad_token_provider=my_token_provider,
)

# After — single credential parameter
from azure.identity import DefaultAzureCredential

agent = AzureAIAgent(
    credential=DefaultAzureCredential(),
)

Breaking — Sessions replace Threads:

# Before
thread = agent.get_new_thread()
result = await agent.run("Hello", thread=thread)

# After
session = await agent.create_session()
result = await agent.run("Hello", session=session)

Breaking — Response access pattern changed:

# Before
response = await agent.run("Summarize this")
print(response.text)

# After — use messages list
response = await agent.run("Summarize this")
print(response.messages[-1].content)

# Before — updating responses
update = AgentResponse(text="Updated content")

# After
from agent_framework_core import Content
update = AgentResponse(messages=[], contents=[Content.from_text("Updated content")])

Breaking — Exceptions and factory renames:

# Before
from agent_framework_core import ServiceException, ServiceResponseException
agent = create_agent(config)

# After
from agent_framework_core import AgentException, AgentInvalidResponseException
agent = as_agent(config)

Action: Pin to 1.0.0rc1. Follow the migration guide before GA.

GitHub Releases


Documentation updates

February was a landmark month for Foundry documentation. We shipped 100+ new articles across agents, fine-tuning, safety, and platform setup. Here are the highlights.

New articles

  • Model Context Protocol (MCP) – Connect agents to external tools via MCP
  • Build your own MCP server – Create custom MCP servers for agent tool integration
  • Hosted agents – Deploy and manage cloud-hosted managed agents
  • Agent-to-agent communication – Coordinate multiple agents in collaborative workflows
  • Computer use – Enable agents to interact with desktop applications
  • Foundry IQ – AI-powered assistant for navigating the Foundry platform
  • Agent development lifecycle – End-to-end guide from build to deploy for agents
  • Publish agents to Copilot – Publish Foundry agents to Microsoft 365 Copilot
  • Playgrounds – Interactive environments for testing models and agents
  • Fine-tuning vision models – Customize GPT-4 Vision for your domain data
  • Realtime Audio API – Build voice apps with streaming audio over WebSockets
  • Deep research – Use deep research mode for multi-step reasoning queries

Updated articles

  • Guardrails overview – Comprehensive rewrite of safety policy configuration
  • Content filter prompt shields – New severity levels and prompt injection defenses
  • Configure private link – Revised private networking and managed VNet guidance
  • Model retirement and lifecycle – Updated dates for GPT-4o, GPT-5, and Cohere models
  • Quotas and limits – Expanded with expected outputs and usage examples

Stay Connected

March is shaping up to be a big one — SDK GA announcements are on the horizon, and the Foundry SDK will be the single package you need across agents, inference, evaluations, and memory. Get ahead of it now by upgrading to the latest pre-release and targeting the v1 REST surface.

The post What’s new in Microsoft Foundry | February 2026 appeared first on Microsoft Foundry Blog.


Walking With Claude


I’ve noticed that since I started my journey into AI-driven software development, I’ve almost entirely stopped going for long walks for exercise. Some of this was the cold weather, but most of it was the dopamine.

“Just one more feature.”

I always wanted to stay close to my machine because it would regularly ask for permission or a clarifying question, and I didn’t want it sitting idle while it waited for me.

I’ve been telling myself for months that I should figure out how to configure my machine to allow remote access from my phone. That way, I could kick off new features from the couch, from the car, or even on a walk.

So last weekend I spent some time researching how others had approached this problem. And that’s when I discovered Claude’s “/remote-control” feature, which had been released only four days earlier.

A video illustrates how this works much better than text; click the image below to see it.

So if I can get a 2 or 3 mile walk in, while also making progress developing software? Game changed.


Building Bro Madness



Last week, I talked about how quickly I was able to create a scoring app for a card game. I had a similar experience this week.

Next month in the United States, the college basketball tournament starts. The best 64 teams play 63 games to determine a champion.

A standard bracket for the tournament

The most fun part of this tournament for me is the first weekend of games. Thursday: 16 games. Friday: 16 games. Saturday: 8 games. Sunday: 8 games. This eliminates 75% of the teams from the tournament.

Each year, 25 of my friends rent a giant cabin and spend the weekend watching those 48 games. Plenty of wagers and side contests take place as a result, and they have all traditionally been entered and managed on paper.

It felt like this was an opportunity to use AI to build an app that would make all of these games much simpler to administer and score, but also a way to have everyone connect with each other before the tournament as well.

Here are some of the things that amazed me while building this app.

I didn’t have to explain the bracket concept at all.

With my simple prompt, it was able to establish that there are 4 regions, winning teams flow from one game to the next, and those regions meet in a final four.


I wanted chat features, with notifications and emoji.

Claude was able to build a really compelling chat that is pretty similar to iMessage. It was even able to add Giphy support.


Administration Tools were the most impressive

I asked for a set of administrative tools to handle things like game score updates and to manage the data the contests needed. What I didn’t expect:

  • Full user management, including marking payments for each contest.
  • Trip cost management. I can mark, in several installments if necessary, how much a user has paid for their trip.
  • The system automatically calculates payouts for contests, and gives me a place to mark that I’ve paid the winner.
  • A tool to enter the weekend’s menu options for each meal. No more “what’s for dinner tonight?”
  • Two powerful dev tools. The first is a time simulator, so that I can see what the app will look like at specific moments. For example, did the picks lock after the first game started? Did the default menu update when the day changed?
  • The second dev tool is a user simulator. This allows me to use the site as any user of the system. If someone can’t or won’t use the site, I can still go in and enter their picks for them. I can see what they see in real time.


The tournament starts on March 19th. Can’t wait to see how this performs!


Grammarly is using our identities without permission

A screenshot of a draft Verge post in Google Docs with an AI-generated Grammarly comment using Nilay Patel’s name

Grammarly's "expert review" feature offers to give users writing advice "inspired by" subject matter experts, including recently deceased professors, as Wired reported on Wednesday. When I tried the feature out myself, I found some experts that came as a surprise for a different reason: one of them was my boss.

The AI-generated feedback included comments that appeared to be from The Verge's editor-in-chief, Nilay Patel, as well as editor-at-large David Pierce and senior editors Sean Hollister and Tom Warren, none of whom gave Grammarly permission to include them in the "expert reviews."

The feature, which launched in August, claims to h …

Read the full story at The Verge.


Oracle Reportedly Planning Thousands of Job Cuts Amid Massive AI Spending


Oracle is reportedly planning thousands of layoffs as it pours billions into AI data centers, raising investor concerns over costs and long-term cash flow.

The post Oracle Reportedly Planning Thousands of Job Cuts Amid Massive AI Spending appeared first on TechRepublic.
