Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Perplexity's 'Incognito Mode' Is a 'Sham,' Lawsuit Says

1 Share
An anonymous reader quotes a report from Ars Technica: Perplexity's AI search engine encourages users to go deeper with their prompts by engaging in chat sessions that a lawsuit has alleged are often shared in their entirety with Google and Meta without users' knowledge or consent. "This happened to every user regardless of whether or not they signed up for a Perplexity account," the lawsuit alleged, while stressing that "enormous volumes of sensitive information from both subscribed and non-subscribed users" are shared.

Using developer tools, the lawsuit found that opening prompts are always shared, as are any follow-up questions the search engine asks that a user clicks on. Privacy concerns are seemingly worse for non-subscribed users, the complaint alleged. Their initial prompts are shared with "a URL through which the entire conversation may be accessed by third parties like Meta and Google."

Disturbingly, the lawsuit alleged, chats are also shared with personally identifiable information (PII), even when users who want to stay anonymous opt to use Perplexity's "Incognito Mode." That mode, the lawsuit charged, is a "sham." "'Incognito' mode does nothing to protect users from having their conversations shared with Meta and Google," the complaint said. "Even paid users who turned on the 'Incognito' feature still had their conversations shared with Meta and Google, along with their email addresses and other identifiers that allowed Meta and Google to personally identify them."

"Perplexity's failure to inform its users that their personal information has been disclosed to Meta and Google or to take any steps to halt the continued disclosure of users' information is malicious, oppressive, and in reckless disregard" of users' rights, the lawsuit alleged. "Nothing on Perplexity's website warns users that their conversations with its AI Machine will be shared with Meta and Google," Doe alleged. "Much less does Perplexity warn subscribed users that its 'Incognito Mode' does not function to protect users' private conversations from disclosure to companies like Meta and Google."

Read more of this story at Slashdot.

Read the whole story
alvinashcraft
36 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Some Engineering Teams Won’t Be Ready for AI Orchestration – and It Will Cost Them


It’s a question many engineering teams aren’t ready to answer honestly.

Partly because the answer changes depending on who you ask, and partly because the two emerging answers point in completely opposite directions.

Iain Bishop, CEO of Damala Technology and a former CTO with over two decades of experience, believes that “there are uneven gains with AI at the moment.”

Some teams are moving fast – shipping more, experimenting, shaping decisions, and owning outcomes. Others are still treating AI like a smarter autocomplete, focusing on infrastructure and reliability. The gap between these groups, Iain believes, is only going to grow.

Soon, devs won’t just use AI – they will coordinate it

Most teams today are still operating in what Iain describes as the copilot phase.

AI sits alongside developers, helping them generate code, suggest improvements, or speed up repetitive tasks. It’s useful, but it doesn’t fundamentally change how work is structured, though that could change soon, Iain believes.

What we’ll see over time is a move from a copilot model to an orchestration model.

In that world, developers don’t just use AI, they coordinate it. Instead of writing everything themselves, they manage multiple AI agents, assign tasks, validate outputs, and connect everything into a working system. The role shifts from execution to direction.
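To make the shift from execution to direction concrete, here is a minimal Python sketch of that orchestration loop. Everything in it is hypothetical: the agent names, the `run_agent` stub, and the trivial `validate` check stand in for real agent calls and human review, which the article does not specify.

```python
def run_agent(name: str, task: str) -> str:
    """Stand-in for delegating a task to an AI agent."""
    return f"[{name}] draft for: {task}"


def validate(output: str) -> bool:
    """Stand-in for the developer's review gate (here, a trivial format check)."""
    return output.startswith("[") and "draft for:" in output


def orchestrate(plan: dict[str, str]) -> list[str]:
    """Assign each task to an agent, validate each output, and collect results.

    The developer directs the work and stays accountable: anything that
    fails review is rejected rather than shipped.
    """
    results = []
    for agent, task in plan.items():
        output = run_agent(agent, task)
        if not validate(output):
            raise ValueError(f"rejected output from {agent}")
        results.append(output)
    return results


plan = {
    "codegen-agent": "implement the parser",
    "test-agent": "write unit tests for the parser",
}
print(orchestrate(plan))
```

The point of the sketch is the shape of the role, not the code: the developer writes the plan and the acceptance check, and the agents fill in the drafts.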

You’re still accountable, no matter how smart AI becomes

As tools become more powerful, there’s a growing temptation to push more responsibility onto them. Iain sees that as a dangerous path:

If AI is just like a co-worker, it isn’t truly autonomous and we remain accountable no matter how powerful the tools are.

The risk isn’t that AI will take control. It’s that teams will give it up too easily: 

If we allow AI tools to operate completely autonomously, we lose that accountability. And that’s the wrong approach.

This means developers aren’t becoming less responsible; they’re becoming more responsible. They’re accountable not just for what they write, but for what they orchestrate.

AI’s first impact won’t be mass layoffs, it will be role compression

AI’s first big impact won’t be mass layoffs but role compression. “In the coming years, teams will shrink, and people will need to wear multiple hats,” Iain says.

The lines between traditional roles are starting to blur: you’ll see more product engineers build AI-driven solutions. At the same time, deep technical expertise won’t disappear; if anything, it becomes even more critical, Iain explains.

There will always be a need for systems engineers who understand what good code looks like.

As AI generates more code, someone still needs to ensure the architecture makes sense.

Iain sees two clear paths:

  1. Toward product – understanding users, business needs, and delivering end-to-end solutions;
  2. Deeper into systems – architecture, design, and scalability.

“The risk is for engineers who stay in the middle,” he says. “With AI handling more execution, being just kind of technical and kind of product-aware may no longer be enough.”

Structuring AI lets teams move fast without losing control

Most companies aren’t struggling with what AI can do; they’re struggling with how to manage it, Iain says: “There’s a rapid pace of change, and companies need to get control of what’s happening.”

The instinct is to lock things down (limit tools, restrict access, add heavy governance) but engineers will find ways around it.

A more sustainable path is to structure how AI is used. Iain points to orchestration platforms, where standards, design systems, and governance are built into AI workflows. This lets teams move fast without losing control, and ensures organisations don’t have to choose between speed and consistency. Control comes not just from systems, but from people understanding the tools they’re using.

Knowing how to use new models won’t come automatically

With all the focus on automation, one skill is quietly becoming critical: communication.

Iain says that for teams new to AI, it’s about more than prompts – it’s understanding models, structuring context, and guiding outputs into something usable.

Prompt engineering is really about creating the right context to get the best response.

This changes how developers work. Instead of writing everything themselves, they guide systems, shape inputs, and validate outputs. Models will keep improving; that’s inevitable. But knowing how to use them well won’t be automatic.
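As a small illustration of “creating the right context”, here is a hedged Python sketch of shaping inputs before they reach a model. The function name, prompt sections, and example inputs are all invented for illustration; they come from neither the article nor any specific tool.

```python
def build_prompt(instructions: str, context_snippets: list[str], question: str) -> str:
    """Assemble role instructions, supporting context, and the question into
    one structured prompt, so the model answers against the right background
    instead of guessing."""
    context = "\n".join(f"- {snippet}" for snippet in context_snippets)
    return (
        f"{instructions}\n\n"
        f"Relevant context:\n{context}\n\n"
        f"Question: {question}"
    )


prompt = build_prompt(
    "You are reviewing a pull request. Answer concisely.",
    ["The service targets Python 3.12.", "Tests run under pytest."],
    "Should this change include a regression test?",
)
print(prompt)
```

The structure, not the wording, is the skill the article describes: deciding what the model needs to know and ordering it deliberately, then validating what comes back.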

The post Some Engineering Teams Won’t Be Ready for AI Orchestration – and It Will Cost Them appeared first on ShiftMag.


The Download: LiteLLM hacked, Pretext layout engine, OpenAI news & more

From: GitHub
Duration: 5:08
Views: 156

Welcome back to The Download! This week, we cover the serious supply chain attack on the LiteLLM Python package and OpenAI's intent to acquire Astral. We also look at Pretext, a new layout engine designed to help your browser handle complex tasks with ease. Plus, learn how to turn your GitHub contributions into a 3D pixel art city. Drop a comment and let us know which update is your favorite!

#DevNews #OpenAI #GitHub

— CHAPTERS —

00:00 Welcome to the Download
00:30 Pretext high performance layout engine
01:00 LiteLLM Python package supply chain attack
02:09 OpenAI to acquire Astral
02:49 GitHub Actions native timezone support
03:18 Agentevals for AI system reliability
04:05 Turn your GitHub profile into Git City
04:53 Outro

— RESOURCES —

Pretext
https://chenglou.me/pretext/
https://x.com/_chenglou/status/2037713766205608234

LiteLLM
https://futuresearch.ai/blog/no-prompt-injection-required/
https://snyk.io/articles/poisoned-security-scanner-backdooring-litellm/

Astral Acquisition
https://openai.com/index/openai-to-acquire-astral/

GitHub Actions scheduled workflow (cron) timezone support
https://github.blog/changelog/2026-03-19-github-actions-late-march-2026-updates/#github-actions-timezone-support-for-scheduled-workflows

https://www.linkedin.com/posts/bengotch_you-know-what-feels-good-on-a-friday-morning-activity-7440808005568258048-KXSL?utm_source=share&utm_medium=member_desktop&rcm=ACoAABuOjfwBvMcYGBdcy1lJ550ifkI_DwoPEYc

Agent Evals:
https://www.solo.io/press-releases/introducing-new-agentic-open-source-project-agentevals

Open Source Project:
https://github.com/srizzon/git-city

Stay up-to-date on all things GitHub by connecting with us:

YouTube: https://gh.io/subgithub
Blog: https://github.blog
X: https://twitter.com/github
LinkedIn: https://linkedin.com/company/github
Insider newsletter: https://resources.github.com/newsletter/
Instagram: https://www.instagram.com/github
TikTok: https://www.tiktok.com/@github

About GitHub
It’s where over 180 million developers create, share, and ship the best code possible. It’s a place for anyone, from anywhere, to build anything—it’s where the world builds software. https://github.com


Random.Code() - Managing Properties From Records in C#, Part 6

From: Jason Bock
Duration: 1:11:13
Views: 17

I hope to finish the majority of the work I've been doing to customize equality operations on a C# record in this stream.

https://github.com/JasonBock/Transpire/issues/44

#dotnet #csharp


The Future of Addictive Design + Going Deep at DeepMind + HatGPT

“The platforms should be absolutely begging Congress to regulate them, because the alternative is they get sued into oblivion by a bunch of law firms.”

Episode 566: The code is actually kinda useless


The code is actually kinda useless

This week, we discuss the Claude Code leak, locking down coding agents, and the Axios supply chain attack. Plus, Coté considers breaking up with his Gmail address of 20+ years.

Watch the YouTube Live Recording of Episode 566

Runner-up Titles

  • pawpatrol891@gmail.com
  • I get that dude’s email
  • Those turtles are dyslexic
  • I’m just a space hyperchicken judge
  • curl|bash is the debate
  • Dijkstra’s algorithm for calendaring

Rundown

Relevant to your Interests

Nonsense

Conferences

SDT News & Community

Recommendations





Download audio: https://aphid.fireside.fm/d/1437767933/9b74150b-3553-49dc-8332-f89bbbba9f92/e31ef150-de39-4ead-b879-4edb4f479f03.mp3