Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

EV Sales Plummet In October After Federal Tax Credit Ends

Longtime Slashdot reader sinij shares a report from Car and Driver: Sales of electric vehicles surged in September as shoppers rushed to take advantage of the $7500 federal EV tax credit before it disappeared at the end of the month. With the government subsidies now gone, EV sales were expected to take a hit in October. While only a few automakers still report sales on a monthly basis, the results we do have do not paint a rosy picture for EVs in a post-tax credit world. The Korean automakers were hit particularly hard by the loss of the tax credit. The Hyundai Ioniq 5, which was the fifth-best-selling EV through the third quarter of this year, experienced a 63 percent drop, moving 1642 units in October 2025, down from 4498 in 2024. Its platform-mates saw similar declines. The Kia EV6 moved just 508 units, down 71 percent versus the same month the year before, while the luxurious Genesis GV60 only found 93 buyers, a 54 percent slide year over year. Things were even worse at Honda. While the Acura ZDX was recently discontinued after just a single model year, the related Honda Prologue remains on sale but registered just 806 units, down 81 percent from 4130 sales in October 2024. [...] Obviously, this isn't the full picture, as several major players -- including General Motors, Toyota, Nissan, and Volkswagen -- only release sales reports on a quarterly basis, and others, such as Tesla and Rivian, don't break out individual sales at all. But with four of the top 10 bestselling EVs through Q3 all showing noteworthy declines in October, it spells trouble for the EV market at large. The end-of-year sales figures will provide a much clearer picture of whether October was just a blip or the start of a much more widespread problem for EV sales.



To write secure code, be less gullible than your AI

Ryan is joined by Greg Foster, CTO of Graphite, to explore how much we should trust AI-generated code to be secure, the importance of tooling in ensuring code security whether it’s AI-assisted or not, and the need for context and readability for humans in AI code.

On‑Device AI with Windows AI Foundry


From “waiting” to “instant” - without sending data away

AI is everywhere, but speed, privacy, and reliability are critical. Users expect instant answers without compromise. On-device AI makes that possible: fast, private, and available even when the network isn’t - empowering apps to deliver seamless experiences.

Imagine an intelligent assistant that responds in seconds without sending any text to the cloud. This approach brings speed and data control to the places that need them most, while still letting you tap into cloud power when it makes sense.

Windows AI Foundry: A Local Home for Models

Windows AI Foundry is a developer toolkit that makes it simple to run AI models directly on Windows devices. It uses ONNX Runtime under the hood and can leverage CPU, GPU (via DirectML), or NPU acceleration, without requiring you to manage those details.
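Windows AI Foundry abstracts those hardware details away, but it can help to see what this looks like one layer down. The snippet below is a minimal sketch using the ONNX Runtime Python API directly, requesting the DirectML execution provider with a CPU fallback; the model file name and input shape are placeholders, and DirectML availability depends on the device and the onnxruntime package you have installed.

import numpy as np
import onnxruntime as ort

# Ask for DirectML (GPU acceleration on Windows) first, fall back to CPU.
session = ort.InferenceSession(
    "model.onnx",  # placeholder: any locally stored ONNX model
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
dummy_input = np.zeros((1, 3, 224, 224), dtype=np.float32)  # shape is model-specific
outputs = session.run(None, {input_name: dummy_input})
print("Execution providers in use:", session.get_providers())

Foundry Local performs this kind of provider selection for you, so application code normally never has to touch it.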

The principle is straightforward:

  • Keep the model and the data on the same device.
  • Inference becomes faster, and data stays local by default unless you explicitly choose to use the cloud.

Foundry Local

Foundry Local is the engine that powers this experience. Think of it as a local AI runtime - fast, private, and easy to integrate into an app.
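As a rough illustration of how an app talks to that runtime: Foundry Local exposes an OpenAI-compatible HTTP endpoint on the local machine, so a request can look like an ordinary chat-completion call that never leaves the device. This is a sketch under assumptions - the endpoint URL, port, and model alias below are placeholders that depend on your local Foundry setup and the models you have downloaded.

import requests

# Hypothetical local endpoint and model alias; check your Foundry Local
# configuration for the actual port and the models available on this device.
FOUNDRY_LOCAL_ENDPOINT = "http://localhost:5273/v1/chat/completions"
MODEL_ALIAS = "phi-4-mini"

def ask_local_model(question: str, context: str) -> str:
    """Send a chat-completion request to the local runtime; nothing leaves the device."""
    response = requests.post(
        FOUNDRY_LOCAL_ENDPOINT,
        json={
            "model": MODEL_ALIAS,
            "messages": [
                {"role": "user",
                 "content": f"Context: {context}\n\nQuestion: {question}"},
            ],
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]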

Why Adopt On‑Device AI?

  • Faster, more responsive apps: Local inference often reduces perceived latency and improves user experience.
  • Privacy‑first by design: Keep sensitive data on the device; avoid cloud round trips unless the user opts in.
  • Offline capability: An app can provide AI features even without a network connection.
  • Cost control: Reduce cloud compute and data costs for common, high‑volume tasks.

This approach is especially useful in regulated industries, field‑work tools, and any app where users expect quick, on‑device responses.

Hybrid Pattern for Real Apps

On-device AI doesn’t replace the cloud; it complements it. Here’s how:

  • Standalone On‑Device: Quick, private actions like document summarization, local search, and offline assistants.
  • Cloud‑Enhanced (Optional): Large-context models, up-to-date knowledge, or heavy multimodal workloads.

Design an app to keep data local by default and surface cloud options transparently with user consent and clear disclosures.
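One simple way to encode that default is to gate any cloud call behind an explicit, user-controlled setting, as in this illustrative sketch (the helper functions and settings dictionary are hypothetical, not part of any Foundry API):

# Illustrative only: local inference is the default; cloud is strictly opt-in.
def run_local_inference(question, context):
    # Placeholder for the on-device path (Foundry Local / ONNX Runtime).
    return f"[local answer to: {question}]"

def run_cloud_inference(question, context):
    # Placeholder for the cloud path; reached only with explicit user consent.
    return f"[cloud answer to: {question}]"

def answer_question(question, context, settings):
    answer = run_local_inference(question, context)  # on-device first
    if answer is None and settings.get("allow_cloud", False):
        answer = run_cloud_inference(question, context)
    return answer

print(answer_question("What is on-device AI?", "Some local document text.", {"allow_cloud": False}))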

Windows AI Foundry supports hybrid workflows:

  • Use Foundry Local for real-time inference.
  • Sync with Azure AI services for model updates, telemetry, and advanced analytics.
  • Implement fallback strategies for resource-intensive scenarios.

Application Workflow

Application workflow: on-device AI with Windows AI Foundry, with cloud integration as the hybrid path

 

Code Examples

1. On-device only: tries Foundry Local first, falls back to ONNX

def get_answer_on_device(question, context):
    if foundry_runtime.check_foundry_available():
        # Use on-device Foundry Local models
        try:
            answer = foundry_runtime.run_inference(question, context)
            return answer, "Foundry Local (On-Device)"
        except Exception as e:
            logger.warning(f"Foundry failed: {e}, trying ONNX...")

    if onnx_model.is_loaded():
        # Fallback to local BERT ONNX model
        try:
            answer = onnx_model.get_answer(question, context)
            return answer, "BERT ONNX (On-Device)"
        except Exception as e:
            logger.warning(f"ONNX failed: {e}")

    return "Error: No local AI available", "Failed"

2. Hybrid approach: on-device first, cloud as a last resort

def get_answer(question, context):
    """
    Priority order:
    1. Foundry Local (best: advanced + private)
    2. ONNX Runtime (good: fast + private)
    3. Cloud API (fallback: requires internet, less private; used only in the hybrid approach)
    """
    if foundry_runtime.check_foundry_available():
        # Use on-device Foundry Local models
        try:
            answer = foundry_runtime.run_inference(question, context)
            return answer, "Foundry Local (On-Device)"
        except Exception as e:
            logger.warning(f"Foundry failed: {e}, trying ONNX...")

    if onnx_model.is_loaded():
        # Fallback to local BERT ONNX model
        try:
            answer = onnx_model.get_answer(question, context)
            return answer, "BERT ONNX (On-Device)"
        except Exception as e:
            logger.warning(f"ONNX failed: {e}, trying cloud...")

    # Last resort: cloud API (requires internet)
    if network_available():
        try:
            import requests
            response = requests.post(
                BASE_URL_AI_CHAT_COMPLETION,  # placeholder: your chat-completions endpoint
                headers={'Authorization': f'Bearer {API_KEY}'},
                json={
                    'model': MODEL_NAME,  # placeholder: your cloud model name
                    'messages': [{
                        'role': 'user',
                        'content': f'Context: {context}\n\nQuestion: {question}'
                    }]
                },
                timeout=10,
            )
            answer = response.json()['choices'][0]['message']['content']
            return answer, "Cloud API (Online)"
        except Exception:
            return "Error: No AI runtime available", "Failed"
    else:
        return "Error: No internet and no local AI available", "Offline"

Demo Project Output: Foundry Local answering context-based questions offline

Answer found in the context: The Foundry Local engine ran the Phi-4-mini model offline and retrieved context-based data.

 

No answer found in the context: The Foundry Local engine ran the Phi-4-mini model offline and reported that no answer was found.

Practical Use Cases

  • Privacy-First Reading Assistant: Summarize documents locally without sending text to the cloud.
  • Healthcare Apps: Analyze medical data on-device for compliance.
  • Financial Tools: Risk scoring without exposing sensitive financial data.
  • IoT & Edge Devices: Real-time anomaly detection without network dependency.

Conclusion

On-device AI isn’t just a trend - it’s a shift toward smarter, faster, and more secure applications. With Windows AI Foundry and Foundry Local, developers can deliver experiences that respect user-specific data, reduce latency, and work even when connectivity fails. By combining local inference with optional cloud enhancements, you get the best of both worlds: instant performance and scalable intelligence.

Whether you’re creating document summarizers, offline assistants, or compliance-ready solutions, this approach ensures your apps stay responsive, reliable, and user-centric.



Part One - Bryan McCann, CTO of You.com, on AI, Engineering, Art, and Everything In Between


Hey everyone and welcome to today’s episode of Developer Tea. It’s been quite a while since I’ve had a guest on the show. Today, I’m joined by Bryan McCann, CTO at you.com. We dive into a wide-ranging discussion, exploring the philosophical origins of his career, from studying meaning and language to working in very early AI research. This episode is less advice-heavy and more focused on theory and open-ended discussion. I hope it’s insightful for you and helpful as you crystallize your own philosophies on these subjects.

  • Explore the philosophical journey that led Bryan McCann from being a philosophy major interested in meaning to pioneering early AI research. Bryan views his current work as an extension of those original philosophical questions.
  • Discover how Bryan shifted from hitting a dead end in "armchair philosophy" to using computational tools to study language and try to make machines that could create meaning.
  • Understand why Bryan believes that meaning, in the sense he originally sought it, is an innately human thing, tied to purpose and the narratives we use to shape our sense of reality.
  • Discuss the profound realization that AI breakthroughs might be akin to discovering electricity, suggesting we are tapping into a fundamental framework of meaning or connection that has always existed.
  • Examine the concept of super intelligence and the "flywheel effect," where AI accelerates research and development, building better versions of itself and potentially surpassing the classic anthropomorphic vision of machine intelligence.
  • Explore Bryan’s other interests, including organizations, people, and art, which he sees as continuing the uniquely human search for meaning.
  • Consider the idea that humanity's constant need to differentiate itself from machines may simply be a mechanism for survival, enabling our continued dominance.

📮 Ask a Question

If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at developertea.com.

📮 Join the Discord

If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community by visiting https://developertea.com/discord today!

🧡 Leave a Review

If you’re enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.





Download audio: https://dts.podtrac.com/redirect.mp3/cdn.simplecast.com/audio/c44db111-b60d-436e-ab63-38c7c3402406/episodes/ef4d7183-9e22-4d01-8e8e-7630f6b13967/audio/4d8d71ac-be69-4650-acf9-a2940dfd3ed5/default_tc.mp3?aid=rss_feed&feed=dLRotFGk

What Roman togas have to do with today's elections. 'Home in' versus 'hone in.'


1130. This week, we look at words related to elections, and then I help you remember the difference between "home in" and "hone in" with a tip that includes a shocking historical tidbit about spiders.

🔗 Share your familect recording in a WhatsApp chat.

🔗 Watch my LinkedIn Learning writing courses.

🔗 Subscribe to the newsletter.

🔗 Take our advertising survey

🔗 Get the edited transcript.

🔗 Get Grammar Girl books

🔗 Join Grammarpalooza. Get ad-free and bonus episodes at Apple Podcasts or Subtext. Learn more about the difference.

| HOST: Mignon Fogarty

| VOICEMAIL: 833-214-GIRL (833-214-4475).

| Grammar Girl is part of the Quick and Dirty Tips podcast network.

  • Audio Engineer: Dan Feierabend
  • Director of Podcast: Holly Hutchings
  • Advertising Operations Specialist: Morgan Christianson
  • Marketing and Video: Nat Hoopes, Rebekah Sebastian

| Theme music by Catherine Rannus.

| Grammar Girl Social Media: YouTube, TikTok, Facebook, Threads, Instagram, LinkedIn, Mastodon, Bluesky.


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.





Download audio: https://dts.podtrac.com/redirect.mp3/media.blubrry.com/grammargirl/stitcher.simplecastaudio.com/e7b2fc84-d82d-4b4d-980c-6414facd80c3/episodes/5903fa7a-60b5-4fff-a8d2-41ec2a6f3d28/audio/128/default.mp3?aid=rss_feed&awCollectionId=e7b2fc84-d82d-4b4d-980c-6414facd80c3&awEpisodeId=5903fa7a-60b5-4fff-a8d2-41ec2a6f3d28&feed=XcH2p3Ah

TypeScript’s Takeover, AI’s Lift-Off: Inside the 2025 Octoverse Report


Andrea and Kedasha sit down with data whisperer Jeff Luszcz, one of the wizards behind GitHub’s annual Octoverse report, to unpack this year’s biggest shifts. They get into why TypeScript overtook Python on GitHub, how AI-assisted “vibe coding” and agentic workflows are reshaping everyday engineering, and what it means that more than one new developer joins GitHub every second. From 1.12B open source contributions and 518M merged PRs to COBOL’s unexpected comeback, global growth (hello India, Brazil and Indonesia), and “security by default” with CodeQL and Dependabot, this episode turns the numbers into next steps for your career and your open source projects.

Links mentioned in the episode:

https://octoverse.github.com

https://github.com/jeffrey-luszcz

https://github.com/features/copilot

https://codeql.github.com

https://docs.github.com/code-security/dependabot

https://docs.github.com/code-security/secret-scanning/introduction/about-secret-scanning

https://www.typescriptlang.org

https://www.python.org

https://nextjs.org

https://vitejs.dev

https://marketplace.visualstudio.com/items?itemName=GitHub.copilot

https://www.home-assistant.io

https://code.visualstudio.com

https://github.com/explore


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.





Download audio: https://afp-920613-injected.calisto.simplecastaudio.com/98910087-00ff-4e95-acd0-a3da5b27f57f/episodes/bc180ac2-8567-4597-bb3b-8a5bd44867ae/audio/128/default.mp3?aid=rss_feed&awCollectionId=98910087-00ff-4e95-acd0-a3da5b27f57f&awEpisodeId=bc180ac2-8567-4597-bb3b-8a5bd44867ae&feed=ioCY0vfY