Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Stanford study outlines dangers of asking AI chatbots for personal advice

While there’s been plenty of debate about AI sycophancy, a new study by Stanford computer scientists attempts to measure how harmful that tendency might be.

You are falling behind because you haven’t fed the insincerity machine in the last 5 minutes


Split sequence from the game Zak McKracken and the Alien Mindbenders, showing two aliens turning on a machine built to make humankind stupid.

I was lucky enough to witness the beginnings of social media, working on the platforms that made it happen. I’ve also seen the decline of its first iterations and products. Currently I am witnessing the idea of a social web being perverted, weaponised and automated out of any trace of human or social aspect…

In my current job I run a 200k+ subscriber newsletter and a quite successful podcast. I've had my own social presence since around 2004, with varying degrees of success. I really don't care for the numbers, and I never in earnest tried to make a living solely off my social presence. So I never tried "growth hacking" or took deliberate steps to reach millions. I use social media as a channel out, a scratchpad to note down ideas and experiments, and a way to invite other people to comment and together create better solutions, share information and joy. Social media to me always meant humans writing things because they wanted to tell the world about them.

Two things that gave me quite some reach over the years have never changed, though: it's important to post a lot at a reliable cadence, and it's important to have a voice, take a stand and voice an opinion.

Whilst collecting tools to cover in our newsletter, I came across one service that annoys the hell out of me.

AI Social Media Writing Assistant for LinkedIn, Twitter & 6 More Platforms
Your AI reputation coach that learns your voice, reads your feeds, and tells you exactly where to show up, then writes comments and posts that sound like you, drawing from your real stories and experience.

Excellent, isn't it? Instead of having to do all the reading, thinking or creating, you point a machine at the things you did in the past and make it appear as you. And not just for posting, but also for commenting and interacting, possibly with people but more likely with other bots. We automate away the human or social part, trading it for growth and numbers.

The speed at which highly successful people publish huge treatises and books lately makes me suspect that tools like that are widely used. I do get about 10 emails a day offering AI tools that automate my job as a developer relations leader.

The thing is that I don’t want that. I don’t want to give the impression that I’m part of a conversation and available for advice when I’m clearly not. I don’t want to publish for the sake of having published at a certain time or in a thread that causes lots of comments.

Social media has become a toxic rage-bait machine, and the companies that run it are clearly OK with that. I would really love people to call it out more when others have obviously been replaced by automation, and to tell the platforms to bugger off when they ask you to create more content geared towards interaction rather than information.

I remember when, a long time ago, Foursquare was a social thing to do. You checked in at a place to show that you were there and ready to interact with people and meet contacts.

I was at an event at the time and bummed out, as my flight to the office was early and I couldn't attend the party with the networking booths. So I told another speaker that this was a shame, and his answer was to go past the venue on the way to the airport and check in on Foursquare, so people would think I had been there and it was their fault for not finding me. I lost a ton of respect for that person that day.

As an actor or author you don't send your body double or stunt double to attend interviews or sell autographs at Comic-Con. Don't create a virtual double that posts for you on social media when you can't be arsed or feel overwhelmed. Take that overwhelming feeling and write about it, showing the world that your mental health is as fragile as that of the people who follow you and read your work. Be human, and only be there when you can be there.

Intro sequence from the game Zak McKracken and the Alien Mindbenders, showing how difficult it is to disguise yourself to blend in with the aliens bent on making humankind stupid.


Upskilling your agents


In this adventure, we sit down with Dan Wahlin, Principal of DevRel for JavaScript, AI, and Cloud at Microsoft, to explore the complexities of modern infrastructure. We examine how cloud platforms like Azure function as "building blocks", which can quickly become overwhelming without the right instruction manuals. To bridge this gap, one potential solution we discuss is the emerging reliance on AI "skills": specialized markdown files that give coding agents the exact knowledge needed to deploy poorly documented, complex open-source projects to container apps without requiring deep infrastructure expertise.
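To make the idea a little more concrete, here is a minimal sketch of how a coding agent might pull such skills into its context: markdown files read from a trusted local directory and prepended to the prompt. The directory layout and function name are assumptions for illustration, not the actual tooling discussed in the episode.

```python
# Hypothetical sketch of the "skills" idea: small markdown files pulled from a
# vetted local directory and prepended to the agent's prompt.
# Paths and names are assumptions, not Microsoft's actual tooling.
from pathlib import Path

SKILLS_DIR = Path("skills")  # assumed location of vetted skill files

def load_skills(names: list[str]) -> str:
    """Concatenate the requested skill markdown files into one context block."""
    sections = []
    for name in names:
        path = SKILLS_DIR / f"{name}.md"
        if not path.is_file():
            raise FileNotFoundError(f"unknown or unvetted skill: {name}")
        sections.append(path.read_text(encoding="utf-8"))
    return "\n\n---\n\n".join(sections)

# e.g. prompt = load_skills(["deploy-container-app"]) + "\n\n" + task_description
```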

         

And we say the silent part out loud as we review how handing the keys over to autonomous agents introduces terrifying new attack vectors: the security nightmare of prompt injection and the careless execution of unvetted AI skills. It's a blast from the past, and we reminisce about how downloading random agent instructions today resembles running untrusted executables from early internet sites. While tools like OpenClaw purport to offer incredible automation, such as allowing agents to scour the internet and execute code without human oversight, this has already led to disastrous leaks of API keys. We emphasize the critical necessity of validating skills through trusted repositories, where even having agents perform security reviews on the code before execution is not enough.

         

Finally, we tackle the philosophical debate around AI productivity and why Dorota's point that LLMs raise the floor, not the ceiling, is so spot on. The standout pick deserves a mention: a fascinating 1983 paper titled "Ironies of Automation" by Lisanne Bainbridge. It perfectly predicts our current dilemma: automating systems often leaves the most complex, difficult tasks to human operators, proving that as automation scales, the need for rigorous human monitoring actually increases, eroding the very value the original innovation was trying to capture.

Download audio: https://dts.podtrac.com/redirect.mp3/api.spreaker.com/download/episode/70959256/download.mp3

Meet Claude Mythos: Leaked Anthropic post reveals the powerful upcoming model

Matt Binder reports: An accidental leak has now been officially confirmed by AI company Anthropic regarding its most powerful AI model yet. The model, now known as “Claude Mythos,” was originally uncovered in a report from Fortune. Anthropic has since confirmed the details about the leak to the outlet. The data leak included details about the upcoming release of the...



Nvidia’s NemoClaw has three layers of agent security. None of them solve the real problem.

Low-poly illustration of a crab with outstretched claws, representing the proliferation of 'Claw'-branded agentic AI tools like OpenClaw and NemoClaw.

The speed of LLM adoption demands that we check its trajectory from time to time. CEO Jensen Huang, talking at the Nvidia GPU Technology Conference, covered the growth of agentic computing. Over a two-year period, there has been a 10,000-fold increase in compute demand per user, with overall usage increasing 100 times. That’s a lot of tokens, which is why AI still sucks up a lot of investment dollars.

As we saw last week, the current star of the agentic world in terms of personal-user popularity is definitely OpenClaw, which appears to deliver on many science-fiction dreams of useful talking computers.

So there is no mystery as to why Nvidia backs OpenClaw all the way. It is the most unrestrained form of token use out there. And of course Mr Huang would also encourage companies to adopt an “OpenClaw strategy”. But just like Anthropic, they know they can only embrace the open-source phenomenon while wearing plenty of armour.

Hence, Nvidia launched NemoClaw, which rides the OpenClaw wave, before adding enough guardrails to make it vaguely safer. But unfortunately, NemoClaw doesn’t replace OpenClaw; it sits on top of it.

Hugging the crab

As we see from recent articles, there will be many opportunities to make OpenClaw safer. And just like Anthropic, Nvidia believes the answer to OpenClaw is to let Nvidia protect you from it. For this, they add three security architecture components.

The first piece is policy enforcement — a system heavily used in the last few decades. This is the boundary-setting governance layer that hopes to make sure the teenager returns home before evening.

By constraining filesystem and network access, the hope is that an agent will reason about why it is blocked and propose a policy update that the human user can approve. But if it leaves through the bedroom window, it can bypass you altogether, with you being none the wiser. And this multiplies for multi-agent systems.
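As a rough illustration of that mechanism, a policy-enforcement gate could check an agent's filesystem and network requests against an allow-list and queue a human-approvable policy update when something is blocked. The class and field names below are hypothetical, not NemoClaw's actual API.

```python
# Minimal sketch of a boundary-setting policy layer: deny anything outside the
# allow-lists and queue the request for human review instead of executing it.
# Names are illustrative, not NemoClaw's actual implementation.
from dataclasses import dataclass, field

@dataclass
class Policy:
    allowed_paths: set = field(default_factory=lambda: {"/workspace"})
    allowed_hosts: set = field(default_factory=lambda: {"api.internal"})
    pending_requests: list = field(default_factory=list)

    def check_file(self, path: str) -> bool:
        if any(path.startswith(p) for p in self.allowed_paths):
            return True
        self.pending_requests.append(f"filesystem access to {path}")
        return False  # blocked until a human approves a policy update

    def check_host(self, host: str) -> bool:
        if host in self.allowed_hosts:
            return True
        self.pending_requests.append(f"network access to {host}")
        return False

policy = Policy()
policy.check_file("/etc/passwd")   # blocked and queued for human review
print(policy.pending_requests)     # ['filesystem access to /etc/passwd']
```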

There is an inherent inefficiency in letting self-evolving agents install packages, learn skills, and spawn subagents only to stop them at the door because you don’t like what they are wearing.


Overall, the more skills the system knows, the less effective policy enforcement is, as it really only learns after the fact. You either stop tasks so often that they are no longer autonomous, or hope you can out-guess a mastermind that you are paying to solve problems 24/7. In reality, the success of any such system will come down to the experience (and cynicism) of the engineers employed to manage it.

The second piece is privacy routing. This is a good way to both control expenses and to stop giving up quite so much of your IP to the cloud providers. (But this doesn’t stop agents from emailing your passwords out because a third party asked nicely.)

Set up well, you decide what stays local and what queries go to the larger cloud models. A router can make decisions about model selection based on cost and an advanced privacy policy. Unlike cloud providers, Nvidia can make good money selling more chips if you try to run heavy inference on your own machines. But it is always sensible to select the right model for the task.
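A router in this spirit can be sketched in a few lines. The model labels and the naive keyword rule below are assumptions for illustration, not Nvidia's implementation; a real router would use a far richer privacy policy and cost model.

```python
# Rough sketch of a privacy/cost router: keep sensitive or small queries on a
# local model, send heavy ones to the cloud.
import re

SENSITIVE = re.compile(r"password|api[_ ]?key|secret|credential", re.IGNORECASE)

def route(prompt: str, local_word_budget: int = 2000) -> str:
    """Return which backend should handle the prompt."""
    if SENSITIVE.search(prompt):
        return "local-model"      # sensitive material never leaves the machine
    if len(prompt.split()) > local_word_budget:
        return "cloud-model"      # too heavy for local inference
    return "local-model"          # default to local to keep costs down

print(route("rotate the api_key for the billing service"))  # local-model
```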

The third piece is sandboxed execution. This is vital to prevent a bad process from having simple access to neighbouring agent processes, but it also provides a way to test a system with much lower risk by tracking and inspecting intended network traffic. This is also important for long-running tasks that cannot be trivially tested otherwise. If you just want to run agents in a container, you can try NanoClaw.
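For a sense of what sandboxed execution can look like in practice, here is a minimal sketch that runs an agent-generated script in a throwaway container with no network access and a read-only filesystem. It assumes Docker is available, and the image and limits are examples rather than how NemoClaw or NanoClaw actually isolate agents.

```python
# Illustrative sketch of sandboxed execution for an agent-generated script.
import subprocess

def run_sandboxed(script_path: str) -> subprocess.CompletedProcess:
    cmd = [
        "docker", "run", "--rm",
        "--network", "none",            # no outbound traffic from the agent
        "--read-only",                  # immutable container filesystem
        "--memory", "512m", "--cpus", "1",
        "-v", f"{script_path}:/task.py:ro",
        "python:3.12-slim", "python", "/task.py",
    ]
    return subprocess.run(cmd, capture_output=True, text=True, timeout=300)

# e.g. result = run_sandboxed("/tmp/agent_task.py"); print(result.stdout)
```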

But truly, “significant advancement over OpenClaw” is a low bar. I would expect more attempts to build secure products from the ground up, but until that happens, companies will bide their time and see where the very bottom of the security failure trench is, before taking the plunge.

Too many claws

By the end of 2026, many small outfits and global organisations will probably have an agentic strategy. Hence, the increasing number of “claws” out there. DefenseClaw. PicoClaw. ZeroClaw. There probably is a Sanity Claws.

As the corporate market increases its appetite for agentic computing, the next true barrier will be the ability to employ the right staff to control it. While people are warning us about how many developer jobs may be lost (and seeing share prices rise in the hope of lower overheads), what is less discussed is the difficulty of hiring the right people to babysit the new systems. As I’ve mentioned, it is no longer about employing eager young coders — it is more about grizzled vets spotting potential pitfalls throughout the workflow, and working out risk profiles.


The reason why Apple, Google, Microsoft, et al. did not deliver on the early promises of digital assistants and still haven’t is precisely that they can see the problems. In fact, ever since HAL refused to open the pod bay doors, the big companies have been very careful how they frame AI publicly, knowing full well that enough embarrassing failures would cause a hard rejection. That an open-source project like OpenClaw has opened Pandora’s Box is no reason for responsible organisations to ride on hope while underplaying the risks.

The post Nvidia’s NemoClaw has three layers of agent security. None of them solve the real problem. appeared first on The New Stack.


Random.Code() - Managing Properties From Records in C#, Part 3

From: Jason Bock
Duration: 1:19:36
Views: 18

Changing the title, because I've been inspired to broaden the scope of this feature. It's not just about exclusion...

https://github.com/JasonBock/Transpire/issues/44

#dotnet #csharp
