Neo: Share Tasks for Collaborative AI Infrastructure Operations

Neo shows its work, but until now that context was only viewable by the user who initiated the conversation. When you wanted a teammate’s input on a decision Neo made, you had to describe it in Slack or screenshot fragments of the conversation. Today we’re introducing task sharing: share a read-only view of any Neo task with anyone in your organization, full context preserved.

To share a Neo task, click the share button to generate a read-only link, then send it to a teammate. They see the complete picture: the original prompt, Neo’s reasoning process, the actions it took, and the outcome. Instead of writing up what happened and losing detail in the retelling, you share the task itself.

We built this with security as a core constraint. The original task system enforced strict RBAC, ensuring users could only see and act on resources they had permission to access. Task sharing preserves these guarantees. Viewers can see the conversation with Neo, but they cannot trigger any actions, and links within the shared task to stacks or resources still enforce the viewer’s existing permissions.
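
The post doesn’t include code, but the described model is easy to sketch. In the Python below, every name (SharedTask, Viewer, render_shared_task) is a hypothetical illustration of the stated rules, not Neo’s actual API:

    # Hypothetical sketch of the described share semantics: viewers get the
    # full conversation, can never trigger actions, and resource links are
    # filtered through the viewer's own RBAC rather than the sharer's.
    from dataclasses import dataclass, field

    @dataclass
    class Viewer:
        org_id: str
        permissions: set = field(default_factory=set)  # resource IDs this viewer may access

    @dataclass
    class SharedTask:
        org_id: str
        conversation: list      # prompt, reasoning, actions taken, outcome (read-only)
        linked_resources: list  # stack/resource IDs referenced in the task

    def render_shared_task(task: SharedTask, viewer: Viewer) -> dict:
        if viewer.org_id != task.org_id:  # shares are limited to the organization
            raise PermissionError("shared tasks are org-internal")
        return {
            "conversation": task.conversation,  # full context preserved
            "actions_enabled": False,           # read-only: viewers never trigger actions
            "visible_resources": [r for r in task.linked_resources
                                  if r in viewer.permissions],
        }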

The feature is available now. The next time you want a second opinion or need to show a colleague how you solved something, share the task. You’re no longer working alone.


Xcode 26.3 - Agentic Coding

Apple released Xcode 26.3 with native support for agentic coding, marking a significant evolution in how development work is performed inside Xcode.

With this release, Xcode can now coordinate multi-step coding tasks end to end. From a single request, Xcode can drive the full workflow: modifying project configuration, adding entitlements, generating new files, integrating APIs, building the project, detecting errors, and iterating until the build succeeds. The result is a complete feature implementation produced directly inside the IDE.

This capability is powered by Xcode’s adoption of the Model Context Protocol (MCP). Through MCP, Xcode exposes its internal tools and context so agents can work directly with project structure, documentation, build systems, and diagnostics. Because MCP is an open standard, developers are free to use any compatible tooling rather than being locked into a single provider.
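
Concretely, MCP runs over JSON-RPC 2.0, so an agent invoking an IDE-exposed tool sends a message shaped like the sketch below. "tools/call" is a standard MCP method; the tool name and arguments are invented here for illustration and are not Xcode’s documented tool surface:

    import json

    # Illustrative MCP request: an agent asking the host (here, hypothetically
    # Xcode) to run one of its exposed tools. Only the "tools/call" method name
    # comes from the MCP spec; everything else is a made-up example.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "build_project",  # hypothetical tool name
            "arguments": {"scheme": "MyApp", "configuration": "Debug"},
        },
    }
    print(json.dumps(request, indent=2))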

Xcode 26.3 also introduces built-in, one-click integration with Anthropic Claude Code and OpenAI Codex. These integrations run natively inside Xcode, update automatically, and are optimized for efficient tool usage during larger tasks.


Roadmap for AI in Visual Studio (February)

After a busy January (catch up here), we’re shifting focus to reliability and refinement. This month is about tightening core workflows, improving agent stability, and building on the MCP foundations we’ve been laying.

These are active areas of work, not delivery commitments. Upvote the features that matter most to you.

Agent Mode & Coding Agents

Reliability is the priority this month. We’re raising the floor on agent-driven scenarios with:

Planning Agent

First steps toward a dedicated agent for multi-step task planning and execution.

Copilot SDK & Platform Integration (Experimental)

We’re also beginning early work to better integrate the Copilot CLI into Visual Studio Copilot.

Model Context Protocol (MCP)

MCP keeps external tools and services connected to VS in a governed, scalable way, and it remains a focus area in February.

Models & Context Management

Under-the-hood work to keep Copilot fast as context grows.

Copilot experience in Editor

Smoother Copilot integration with existing editor behavior.

We’re excited for you to try these improvements as they roll out. As always, feedback is incredibly important—please upvote or comment on the linked Developer Community items so we know what matters most to you.

Thanks for continuing to build with us.


I Built a Trend Radar for Developers — Here's How the Scoring Engine Works

I got tired of finding out about tools and frameworks after everyone else had already written about them. So I built 8of8 — a system that scans 17 data sources every 2 hours and scores every signal from 0 to 100.

Right now it tracks ~120 qualified signals. The top scorer is at 81/100. Here's how the scoring works and what I learned building it.

The Problem

"Trending" is meaningless without context. A repo with 500 GitHub stars means nothing if nobody's actually using it. A ProductHunt launch with 200 upvotes means nothing if it disappears in a week.

I wanted a system that asks: is this signal showing up across multiple independent sources? If something is trending on Hacker News AND companies are hiring for it AND npm downloads are growing AND funding is flowing into that sector — that's not noise. That's signal.

The Data Pipeline

Every 2 hours, a cron job pulls from these sources:

  • Hacker News — top stories and comments
  • GitHub Trending — repos gaining stars
  • ProductHunt — new launches
  • Reddit — 4 developer subreddits (threshold: score 65+)
  • RemoteOK — job postings
  • npm — package download trends (134 packages tracked)
  • GitHub Stars Velocity — star growth rate for 50 repos
  • Hiring Index — 632 companies, 884 active job listings
  • Wikipedia — page view growth on tech topics
  • VC Blogs — 124 posts from investor blogs
  • StackOverflow — unanswered question counts by tag
  • DOE Research — 346 records from research databases
  • Funding Context — sector-level funding heat maps
  • Product Intelligence — engagement and breakout scores for 776 products
  • GPT Reports — AI-generated analysis on ~240 signals

Raw signals go into snapshot files. The scoring engine then processes everything.
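
A minimal version of that collection pass might look like the sketch below; the fetcher functions and snapshot layout are assumptions for illustration, not the actual 8of8 code:

    import json, time
    from pathlib import Path

    def fetch_hn():
        # Hypothetical fetcher; a real one would call the HN API.
        return [{"title": "example story", "points": 120}]

    def fetch_github_trending():
        return [{"repo": "example/repo", "stars_gained": 300}]

    SOURCES = {
        "hackernews": fetch_hn,
        "github_trending": fetch_github_trending,
        # ... the remaining sources register here
    }

    def collect_snapshot(out_dir: str = "snapshots") -> Path:
        snapshot = {"pulled_at": time.time(), "signals": {}}
        for name, fetch in SOURCES.items():
            try:
                snapshot["signals"][name] = fetch()
            except Exception as exc:  # one flaky source shouldn't kill the run
                snapshot["signals"][name] = {"error": str(exc)}
        out = Path(out_dir)
        out.mkdir(exist_ok=True)
        path = out / f"snapshot_{int(snapshot['pulled_at'])}.json"
        path.write_text(json.dumps(snapshot))
        return path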

10 Scoring Dimensions

Each signal gets scored across 10 dimensions. The raw points add up to a max of 135, then get normalized to 0-100; a sketch of that arithmetic follows the list.

1. Snapshot (0-20 pts) — Does the signal exist in the current data pull?

2. Compare (0-15 pts) — How does it compare to signals from previous snapshots?

3. Replay (0-15 pts) — Has this signal appeared consistently over time?

4. Topics (0-15 pts) — Is the keyword growing week-over-week? Checks topic history AND npm download trends.

5. Analytics (0-15 pts) — Cross-validation. Does this signal show up in multiple independent sources? Same URL on 2+ platforms, keywords matching npm packages, matching GitHub repos, matching hiring companies. 3+ validations = 15 points.

6. Intelligence (0-15 pts) — AI analysis. Combines GPT-generated opportunity scores with product intelligence data (engagement + breakout scores across 776 products).

7. Funding (0-10 pts) — Is funding flowing into this signal's sector? Maps signals to funding heat maps. DevTools maps to Technology, AI maps to AI/ML + Technology, etc.

8. Early Signals (0-10 pts) — Research indicators. Wikipedia page view growth, VC blog mentions, StackOverflow activity, DOE research keywords. Individual keywords get indexed (not full phrases) to catch more matches.

9. Tracking (0-10 pts) — Is this signal in our product index of 1,646 tracked products?

10. Hiring (0-10 pts) — Are companies actively hiring for this? Checks company name matches, keyword overlap with job titles, and whether the signal's sector has 10+ companies hiring.
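
Here is a minimal sketch of that arithmetic. The dimension names and per-dimension maximums come straight from the list above; the function name and the example signal are assumed for illustration:

    # Per-dimension maximums from the list above; they sum to 135.
    MAX_POINTS = {
        "snapshot": 20, "compare": 15, "replay": 15, "topics": 15,
        "analytics": 15, "intelligence": 15, "funding": 10,
        "early_signals": 10, "tracking": 10, "hiring": 10,
    }

    def normalize(raw_points: dict) -> float:
        # Clamp each dimension to its max, then scale the 135-point total to 0-100.
        total = sum(min(pts, MAX_POINTS[dim]) for dim, pts in raw_points.items())
        return round(total / sum(MAX_POINTS.values()) * 100, 1)

    # A hypothetical strong signal: present now, cross-validated, hired for.
    example = {"snapshot": 20, "compare": 10, "replay": 10, "topics": 12,
               "analytics": 15, "intelligence": 9, "funding": 7,
               "early_signals": 6, "tracking": 10, "hiring": 10}
    print(normalize(example))  # 109/135 -> 80.7, near the current top score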

The Key Insight: Cross-Validation

The most important dimension is Analytics — the cross-validation check. A single data source can be gamed. Bots can upvote a HN post. Stars can be bought on GitHub. But when something shows up on HN AND has growing npm downloads AND companies are hiring for it AND GitHub repos are gaining stars — that convergence is very hard to fake.
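
A toy version of that convergence check, with field names assumed and partial credit below three validations as a guess (the post only states the 3+ rule):

    def analytics_points(signal: dict) -> int:
        validations = sum([
            len(signal.get("platform_urls", [])) >= 2,      # same URL on 2+ platforms
            bool(signal.get("matching_npm_packages")),      # keyword matches an npm package
            bool(signal.get("matching_github_repos")),      # keyword matches a repo
            bool(signal.get("matching_hiring_companies")),  # companies hiring for it
        ])
        return 15 if validations >= 3 else validations * 5  # assumed partial credit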

This is what separates 8of8 from tools like Exploding Topics (which mostly relies on Google Trends) or raw data dumps from scrapers.

Score Distribution

After running this across ~1,000 raw signals, about 120 make it through the qualification filter (3+ dimension passes AND score >= 25/100).
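
In code, the filter is roughly the following, reusing normalize() from the earlier sketch and assuming a dimension "passes" whenever it scores any points:

    def qualifies(raw_points: dict) -> bool:
        passes = sum(1 for pts in raw_points.values() if pts > 0)
        return passes >= 3 and normalize(raw_points) >= 25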

Current distribution:

  • Top score: 81/100
  • Average score: 43/100
  • Most signals cluster: 35-55 range

Signals scoring 70+ are consistently the ones that turn into real trends within weeks.

The Stack

Nothing fancy:

  • Python scoring engine (ultimate_opportunities.py)
  • Flask frontend
  • Gunicorn + nginx on a single Vultr VPS (Ubuntu 24)
  • Cron jobs for data collection (every 2h) and email alerts (daily 8am UTC); a crontab sketch follows below
  • Stripe for payments
  • Signal card images auto-generated with Pillow
  • Auto-posting to Twitter, Bluesky, Mastodon 3x daily

Total infrastructure cost: ~$12/month for the VPS.
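
The schedule above maps to two crontab entries; the script paths are placeholders, and the daily alert time assumes the server clock is set to UTC:

    # Every 2 hours: pull all 17 sources and re-score.
    0 */2 * * * /usr/bin/python3 /opt/8of8/collect_and_score.py
    # Daily at 08:00 UTC: send email alerts.
    0 8 * * * /usr/bin/python3 /opt/8of8/send_alerts.py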

What I Learned

1. More data sources beat better algorithms. The scoring logic is simple math — weighted points across dimensions. The value comes from the breadth of sources, not clever ML.

2. Normalization matters. Raw scores ranged from 40 to 80, which looked meaningless to users. Normalizing to 0-100 made instant sense.

3. Free tier has to show value. My first version showed 5 signals behind a paywall. Nobody converted. Now the free tier shows all 120+ signals with scores and checks visible. Pro ($29/mo) unlocks detailed breakdowns, intelligence reports, and email alerts.

4. Signal type matters for communication. A trending HN article is not the same as a trending GitHub repo. I had to teach my auto-posting system the difference to avoid calling blog posts "tools" (embarrassing lesson learned in public).

Try It

The dashboard is live at 8of8.xyz. Free tier shows everything. I'm looking for beta testers — honest feedback in exchange for free Pro access. DM me on X or comment below.

The scoring engine runs on a single Python file. No Kubernetes. No microservices. Just cron and conviction.

Built by Jose Marquez Alberti. Powered by ASOF Intelligence.


Unlocking the Codex harness: how we built the App Server

Learn how to embed the Codex agent using the Codex App Server, a bidirectional JSON-RPC API powering streaming progress, tool use, approvals, and diffs.
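
As generic background, "bidirectional" JSON-RPC means both sides of the connection can originate requests and notifications. The schematic exchange below uses invented method names, not the App Server’s actual schema; see the linked post for the real one:

    # Client -> server request: start an agent session (method name invented).
    client_request = {"jsonrpc": "2.0", "id": 1,
                      "method": "session/start",
                      "params": {"prompt": "fix the failing test"}}

    # Server -> client notification (no "id"): streamed progress.
    progress_note = {"jsonrpc": "2.0",
                     "method": "session/progress",
                     "params": {"text": "running tests..."}}

    # Bidirectional: the server can issue its own request, e.g. asking the
    # host app to approve a tool call before it runs (again, hypothetical).
    approval_request = {"jsonrpc": "2.0", "id": 2,
                        "method": "approval/request",
                        "params": {"tool": "shell", "command": "pytest"}}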

OpenClaw’s AI ‘skill’ extensions are a security nightmare


OpenClaw, the AI agent that has exploded in popularity over the past week, is raising new security concerns after researchers uncovered malware in hundreds of user-submitted "skill" add-ons on its marketplace. In a post on Monday, 1Password product VP Jason Meller says OpenClaw's skill hub has become "an attack surface," with the most-downloaded add-on serving as a "malware delivery vehicle."

OpenClaw, first called Clawdbot and then Moltbot, is billed as an AI agent that "actually does things," such as managing your calendar, checking in for flights, cleaning out your inbox, and more. It runs locally on devices, and users can interact with t …

Read the full story at The Verge.
