Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Stardock releases Fences 6.20 with new Rollup options and desktop fixes

Stardock has rolled out an update for its popular Fences desktop organization tool. The program creates grouped areas on the Windows desktop for managing icons, folders, and automation rules, and version 6.20 introduces new interaction options along with fixes that address behavior in both single- and multi-display setups. The update's biggest new addition is a "click to open" option for Rollup Fence groups. These groups normally expand when the mouse passes over them, occasionally leading to accidental openings. The feature can now be set to require a click instead, or have the group open only if the pointer pauses… [Continue Reading]

Generative AI in the Real World: The Year in AI with Ksenia Se


As the founder, editor, and lead writer of Turing Post, Ksenia Se spends her days peering into the emerging future of artificial intelligence. She joined Ben to discuss the current state of adoption: what people are actually doing right now, the big topics that got the most traction this year, and the trends to look for in 2026. Find out why Ksenia thinks the real action next year will be in areas like robotics and embodied AI, spatial intelligence, AI for science, and education.

About the Generative AI in the Real World podcast: In 2023, ChatGPT put AI on everyone’s agenda. In 2025, the challenge will be turning those agendas into reality. In Generative AI in the Real World, Ben Lorica interviews leaders who are building with AI. Learn from their experience to help put AI to work in your enterprise.

Check out other episodes of this podcast on the O’Reilly learning platform.

Transcript

This transcript was created with the help of AI and has been lightly edited for clarity.

00.00: All right, so today we have Ksenia Se. She is the founder and editor at Turing Post, which you can find at turingpost.com. Welcome to the podcast, Ksenia. 

00.17: Thank you so much for having me, Ben. 

00.20: Your publication obviously covers a lot of the most bleeding edge things in AI, but I guess let’s start with a heat check, which is around the state of adoption. So I talked to a lot of people in the enterprise about what they’re doing in AI. But I’m curious what you’re hearing in terms of what people are actually doing. So, for example, the big topics this year, at least in the startup world, are agents and multimodal reasoning. I think a lot of those are happening in the enterprise [to] various degrees. But what’s your sense in terms of the reality on the ground? 

01.05: Yeah. I just recently came from [a] conference for software developers, and it was really interesting to see how AI is widely adopted by software developers and engineers. And it was not about vibe coding—it was people from Capital One, it was people from universities, from OpenAI, Anthropic, telling how they also implement AI in their daily work. 

So, I think what we saw this year is that 2025 did not become the year of agents. You know, this conversation about “decade of agents.” But I think 2025 became the year where we got used to AI on many, many levels, including enterprise, business people, but also people who [are] building the infrastructure in the enterprises.

02.00: So, this conference you attended, as you mentioned, there were obviously the people building the tools, but there were also people who were using tools. Right? So, give us a sense of the perspective of the people using the tools.

02.14: So it was mostly a conference about coding. And there were people who are building these coding tools using different agentic workflows. But what was interesting is that there were people from OpenAI [and] Anthropic, and they were pushing the agenda for coders to start using their platforms more because it’s all connected inside. And then, it’s better for you to just use this platform. So it was an interesting talk. 

And then there was a talk from MiniMax, which is a Chinese company. And it was super interesting that they have a completely different view on it and a different approach. They see coders and researchers and app developers together, everyone’s together, and that becomes a combination of using and building, and that’s very different. That’s very different from how Western companies presented [it] and how this Chinese company presented it. So I think that’s another thing that we see: just cross-pollination and building together inside different companies, different platforms. 

03.34: I’m curious, did you get a chance to talk to people from non-tool providers, like Capital One, which you mentioned? Companies like that are the ones we associate with the enterprise.

03.47: I haven’t talked to this person specifically, but he was talking a lot about trust. And I think that’s one of the biggest topics in enterprise. Right? How do we trust the systems? And then the topic of verification becomes one of the main ones for enterprises, specifically. 

04.07: You mentioned that this year, obviously, we all chatted and talked and wrote and built with agents. But, it seems like the actual adoption in the enterprise is a bit slower than we expected. So what’s your sense of agents in the enterprise? 

04.29: I was looking through the articles that I’ve written throughout this year because so many things happened, and it’s really hard to even remember what happened. But in the middle of the year was the “state of AI” [report] by Stanford University. And in this report they were saying that actually enterprises are adopting AI on many levels. And I think it’s a work in progress. It’s not agents, you know, [where you] take them and they work. It’s building these workflows and building the infrastructure for these agents to be able to perform work alongside humans. And the infrastructure level changes, on many different levels. 

I just want to maybe go a little deeper on enterprise from your perspective because I think you know more about it. And I’m very curious what you see from an enterprise perspective. 

05.26: I think that, actually, there’s a lot of piloting happening. A lot of people are definitely trying and building pilots, prototypes, but that large-scale automation is a bit slower than we thought it would be. So you mentioned coding—I think that’s one area where there’s a lot of actual usage, because that’s not necessarily customer-facing.

05.59: I think the distinction that people make is, you know, “Is this going to be internal or external?” It’s a big kind of fork in terms of how much are we going to push this? I think that one thing that people underestimated going into this, as you mentioned, is that there’s a certain level of foundation that you need to have in place.

A lot of that has to do with data, frankly, given that this current manifestation of AI really relies on you being able to provide it more context. So, it really is going to come down to your data foundation and all those integration points. Now when it comes to agents, obviously, there’s also the extra integration around tools. And so then that also requires some amount of preparation and foundation in the enterprise.
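To make that tool-integration point concrete, here is a minimal, hypothetical sketch (not something discussed in the conversation) of the kind of plumbing an enterprise ends up owning before an agent can act: a small registry of vetted tools and a dispatcher that rejects anything unregistered. The tool names and return values are invented for illustration.

```python
# Hypothetical sketch of the "extra integration around tools" an enterprise
# has to build before an agent can do real work. Tool names are made up.
TOOLS = {
    "lookup_customer": lambda customer_id: {"id": customer_id, "tier": "gold"},
    "open_ticket": lambda summary: {"ticket_id": "TCK-0001", "summary": summary},
}

def call_tool(name: str, **kwargs):
    """Dispatch an agent's tool request, rejecting anything not vetted."""
    if name not in TOOLS:
        raise ValueError(f"tool '{name}' is not registered")
    return TOOLS[name](**kwargs)

# An agent's structured request would be routed through this single gate.
print(call_tool("lookup_customer", customer_id="C-42"))
```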

What’s interesting is that there are actually three options for enterprises generally. The first is they take their existing machine learning platform, the one they were using for forecasting and those kinds of things on structured data, and try to extend it to generative AI.

07.22: It’s a bit challenging, as you can imagine, because the models are different, and the workloads and data pipelines are a little more challenging for generative AI. The second option is to go the endpoint route. So you rely mainly on external services: “I’m just going to use API endpoints. Hopefully these endpoints allow me to do some amount of model customization like fine-tuning, maybe some RAG.”

07.48: But the challenge there, of course, is you kind of lose the skill set. You don’t develop the skills to push this technology further because you’re completely reliant on someone else, right? So your internal tech team doesn’t really get better. And then finally, the most bleeding-edge companies, mostly in tech—a lot of them here in Silicon Valley, actually—almost all the Silicon Valley startups are building custom AI platforms.

On the compute side, it’s composed of three open source projects: PyTorch, Ray, and Kubernetes. And then some AI models at their disposal, like Kimi, DeepSeek, Gemma, open weights models. You’ve got PyTorch, Ray, and Kubernetes, the so-called PARK stack now.
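As a rough illustration of that PARK-style pattern, here is a minimal sketch using just PyTorch and Ray (in practice Kubernetes would host the Ray cluster). The model, data, and sizes are placeholders, not anything a specific company runs.

```python
# Minimal PyTorch + Ray sketch; on Kubernetes, ray.init() would attach to an
# existing cluster instead of starting a local one. Model and data are placeholders.
import ray
import torch

ray.init()

@ray.remote
def score(batch):
    model = torch.nn.Linear(4, 1)  # stand-in for a real model
    with torch.no_grad():
        return model(torch.tensor(batch, dtype=torch.float32)).tolist()

batches = [[[0.0, 1.0, 2.0, 3.0]], [[4.0, 5.0, 6.0, 7.0]]]
print(ray.get([score.remote(b) for b in batches]))  # fan out, then gather results
```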

But anyway, I kind of hijacked your interview. So let me ask you a question. Last year, as I mentioned, people were abuzz about reasoning because of the release of DeepSeek, and then multimodality and agents. So next year, what’s your sense of what the buzzwords will be, given that the current buzzwords, Ksenia, have not been actually kind of fully deployed yet. What will people be kind of excited about? 

09.13: Yeah, we will keep talking about agentic workflows, for sure, for years to come. I would drop in a word: robotics. But before that, I would like to return to what you said about enterprises because I think here’s an important distinction about infrastructure and the companies that you mentioned that are building custom platforms, and actual usage.

Because I think this year, and as you mentioned, there were a lot of pilots and [there was] a lot of intention to use AI in enterprises. So it was someone very excited about AI and trying to bring it into enterprise. An interesting thing happened recently with Microsoft, who deployed everything they built to every one of their clients.

If you imagine how many enterprises are their clients, that becomes a different level of adoption [by] people who didn’t even sign up for being interested in AI. But now through Microsoft, they will be adopting it very quickly in their business environments. I think that’s very important for next year.

10.26: And Google is doing something similar, right?

10.29: Yeah. It’s just that Microsoft is much more enterprise-related. This adoption will be much bigger next year in the enterprise as well. 

10.39: So you were saying robotics, which, by the way, Ksenia, the new marketing term is “embodied AI.” 

10.47: Embodied AI, physical AI, yeah, yeah, yeah. But you know, robotics is still struggling with the thing that you mentioned. Data. There is not enough data. And I think that next year, with all this interest in spatial intelligence and world models in creating this new data, that [will be an] exciting year to observe. I don’t think we will be able to have domestic robots picking up our laundry and doing laundry, but we will be getting there slowly—five, six years. I don’t think it will be next year. 

11.25: Yeah, it seems in robotics, they have their own kind of tricks for generating data: learning in the virtual world, learning by watching humans, and then some sort of hybrid. And then also there’s these robotics researchers who are kind of promoting this notion of the robotics foundation model, where rather than having a raw robot just learn everything from scratch, you build the foundation model, which you can just then fine-tune. Hey, instead of folding a towel, you will now fold the T-shirt. But then there’s all these skeptics, right? 

I don’t know if you follow the work of Rodney Brooks. He’s like one of the grandfathers of robotics. But he’s a bit skeptical about the whole robotics foundation models. Particularly, he says that one of the main problems of this type of physical robotics is grasping. So it’s basically the sense of touch and the fingers, something we as humans take for granted, which he doesn’t believe that deep learning can get to. Anyway, again, I derailed your [interview]. So robotics. . . 

12.53: You know, I think there are interesting things happening here in terms of creating data. Not synthetic data but actual data from the real world, because open source robotics becomes much more popular. And I think what we will see is that the interest is high, especially from children’s perspectives.

And it’s not that expensive now to 3D-print a robot arm and get on NVIDIA and get, I don’t know, a Jetson Thor computer. And then connect it together and start building these robotics projects. Open source; everything is out there now; LeRobot from Hugging Face. So that’s very exciting. And I think that [these projects] will expand the data.

13.40: By the way, Rodney Brooks makes a couple of interesting points as well. One is when we say the word “robotics” or “embodied AI,” we focus too much on this humanoid metaphor, which actually is far from reality. But the point he makes is [that] there’s a lot of robotics already in warehouses. And [they] are not humanoids. They’re just carts moving around. 

And then the second point he makes is that robots will have to exist with humans. So those robots that move things around in a warehouse, they are navigating the same space as humans do. There’s going to be a lot of implications of that in terms of safety and just the way the robot has to coexist with humans. So embodied AI. . . Anything else that you think will explode in the popular mindset next year? 

14.47: Yeah, I don’t know about “explode.” 

14.50: Let me throw a term that, actually, I’ve been thinking a lot about lately, which is this “world model.” But the reason I say I have been thinking about it lately is because I’ve literally started reading about this notion of a world model, and then it turns out I actually came up with seven different definitions of “world.” But I think “world model,” if you look at Google Trends, is a trendy term, right? What do you think is behind the interest in this term “world model”? 

15.27: Well, I think it’s all connected to robotics as well. It’s this spatial intelligence that’s also on the rise now, thanks to Fei-Fei Li, who is so very precise and stubborn [about] pushing this new term and creating a whole new field around her.

I was just reading her book The Worlds I See. And it’s fascinating how throughout her career, for the last 25, 30 years, she’s been so precise about computer vision, and now she’s so articulate about spatial intelligence and the world models that they build, that it’s all for better understanding how computers, how robotics, how self-driving can be reliable.

So I don’t know if world models will captivate a majority of the population, but it for sure will be one of the biggest research areas. Now, I’ll throw in the term “AI for science.” 

16.35: Okay. Yeah, yeah, yeah. Kevin Weil at OpenAI just moved over to doing AI for science. I mean, it’s super exciting. So what specific applications in science, do you think? 

16.50: Well, there is a bunch, right? Google DeepMind is of course ahead of everyone. And what they’re building to create new algorithms that can solve many different scientific problems is just mind-blowing. But what it started was all these new startups appearing: AI for chemistry, AI for math, and the AI Scientist from Sakana AI. So this is one of the biggest movements, I think, that we will see developing more in the next year, because the biggest minds from big labs are moving into the startup area just because they’re so passionate about creating these algorithms that can solve scientific problems for us. 

17.38: AI for math, I think, is natural because basically that’s how they test their models. And then AI for drug discovery because of the success of AlphaFold, and things like that. Are there any other specific verticals that you’re paying attention to besides those two? Is there a big movement around AI for physics? 

18.07: AI for physics? 

18.10: I think there are some people, but not to the extent of math.

18.14: I would say it’s more around quantum computing, all the research that’s happening around physics and going into this quantum physics world and—also not for the next year—but quantum computers are already here. We still do not fully know how to use them and for what, but NVIDIA is working hard to build this and the Q link to connect GPUs to QPUs.

This is also a very exciting area that just started actively developing this year. And I think next year we will see some interesting breakthroughs. 

18.59: So I have a phrase for you which is, I think, likely next year. But don’t hold my feet to the fire: “AI bubble bursts.” 

19.12: Well, let’s discuss what is the AI bubble?

19.15: There definitely seems to be an overinvestment in AI ahead of usage and revenue, right? So if you look at the preannounced commitments to data center buildout (I don’t know how hard or soft those commitments are), we’re talking trillions of dollars, but as we mentioned, usage is lagging. You look at the biggest private companies in the space, OpenAI and Anthropic—the multiples are off the charts.

They have a lot of revenue, but their burn rates far exceed the revenue. And then obviously they have this announced commitment to build even more data centers. And then obviously there’s that weird circular financing dance that’s happening in AI, where NVIDIA invests in OpenAI and OpenAI invests in CoreWeave, and then OpenAI buys NVIDIA chips.

I mean, people are paying attention. But at the root of it is leverage. And the multiples just don’t make sense for a lot of people. So that’s what the bubble is. So, then, is next year going to be the year of reckoning? Is next year the day the music stops? 

20.52: I don’t think so. I think there are a couple of bubbles that people discuss in the industry. Most [are] discussing the LLM bubble—that everyone is putting so much money into LLMs. But that’s actually not the main area, or it’s not the only one, it’s not how we get to superintelligence. There are also world models and spatial intelligence. There are also other sorts of intelligence, like causal, that we don’t even pay attention to much, though I think it’s super important. 

So I think the attention will switch to other areas of research. It’s really needed. In terms of companies, well, OpenAI definitely needs to come up with some great business strategy because otherwise they will just burn through GPUs, and that’s not enough revenue. In terms of the loop—and you said the usage is lagging—the usage from users is lagging because not that many people are using AI. 

21.58: But the revenue is lagging. 

22.02: But if we think about what’s happening in research, what’s happening in science, in self-driving, this is a huge consumption of all this compute. So it’s actually working.

22.21: By the way, self-driving is also losing money. 

22.26: But it’s something that’s happening. Now we can try Tesla to drive around, which is exciting. That was not the case two years ago. So I think it’s more of a bubble around some companies, but it’s not a bubble about AI, per se. 

And some people, you know, compare it to the dot-com bubble. But I don’t think it’s the same because, back then, the internet was such a novelty. Nobody knew what it was. There was so much infrastructure to build. Everything was just new. And with AI, as you well know, and machine learning, it’s like the last 60 years of actual usage.

Like, you know, AI [was] with our iPhones from the very beginning. So I don’t think it’s an AI bubble. I think it’s maybe some business strategist bubble, but…

23.25: Isn’t that just splitting hairs? By the way, I lived through the dot-com bubble as well. The point is the financial fundamentals are challenging and will remain challenging.

The assumption is that there’s always going to be someone else to fund your next round, at a higher valuation. Imagine raising money in a down round. What would be the implication for your workforce? The morale? So anyway, we’ll see. We’ll see what happens. Clearly there are other approaches to AI. But the point is that none of them seem to be what people are investing in at the moment. There’s a bit of a herd mentality. 

If you go back to “Why did deep learning blow up?”, well, it’s because it did well on ImageNet. Before then no one was paying attention. So for one of these techniques to draw attention, they really need to do something like that. In AI and machine learning, it’s like search in some ways: you’re looking for a model in the search space, and you’re looking at different models. But right now everyone seems to be looking in the same area. In order to convince all these people to move to a different area, you have to show them some signs of hope, right?

But even after that, you still have all this build-out and debt. By the way, one thing that’s changed now is the role of debt. Debt used to be an East Coast thing, but now West Coast companies are starting to play around with financing some of these data centers with debt. So we’ll see. Hopefully I’m wrong. 

25.51: You think it will burst, and if it will, how…? 

25.56: I think there will be some sort of reckoning next year. Because basically at some point you’re going to…you have to keep raising money, and then you’re going to run out of places to raise money from. The Middle East also has a finite amount of money. And unless they can show real—the revenues [are] so, so lagging right now. Anyway, in closing, what other things are on your radar for ’26? 

26.29: On my radar is how AI is going to change education. I think that’s super important. I think that’s lagging significantly both in schools and universities because the opportunities that AI provides—and we can talk about bad sides, we can talk about good stuff—but having kids who are growing into this new era and talking with AI with them and seeing how it can accelerate the acquiring of knowledge, I’m very inspired by that. And I think this is a topic that not that many people talk about, but it should completely change the whole educational system. 

27.16: Yeah, I’m curious actually, because, you know, I was a professor in a previous life, and I can’t imagine, now, teaching the same way I would back then. Because back then you’re this person in front of the room who has all of the knowledge and authority. Which is completely not the case anymore. In light of that, what’s your role and how do you manage a classroom? AI is the kind of thing you can try to take away from students, but no, they’re going to use it anyway. So in light of that, what is your role and what should be the tools and guardrails?

28.01: I think one of the most important roles is to teach [how to] ask questions and fact check, because I think we forgot [that] with social networks. That was one of the biggest disadvantages of social networks. You just believe everything you see. And I think with generative AI, it’s so easy to be fooled.

So the role of the teacher becomes to tell you how to talk with these models and how to ask questions. I’m a big believer in asking the right question. So I think this is what trains critical thinking the most. And I think that’s the role of the teacher, helping, going deeper and deeper and deeper, and asking the best questions.

28.47: I want to close with this question, which is on the open weights models. So obviously right now the top open weights models are from China: Kimi from Moonshot, Alibaba. So are there any Western open weights models? I guess, Gemma. I’m not sure Mistral really counts, but Gemma might. I did talk to someone on Google’s Gemma team, and they said they could release even better models if they wanted to. The key is, if they want to, right? Obviously, the first mover here was Llama, which I don’t know if they’re going to continue. So, Ksenia, what’s going to be our source of Western open weights models? 

29.37: Well, the Allen Institute for AI is pushing open source very heavily, and in November they released Olmo 3, which is fully open—not only weights—it’s all transparent. And this is just an amazing way to demonstrate to the closed labs how to do that. And one of the researchers at Ai2, Nathan Lambert, organized a sort of movement for Western open source. Hugging Face is doing this amazing job. And through their work, the companies like NVIDIA really use a lot of open source models, some of them open weights, some of them [aren’t]. But even OpenAI, I think, started to open up a little bit. Meta is moving kind of in a different direction, though. 

30.35: Yeah, it’s kind of a TBD. We don’t know. Hopefully, they do something. Like I said, the Gemma team could release even better models, but someone has to convince them to do that. I guess I’m waiting for the time when I go to the LMArena leaderboard and I start seeing more Western open weights models again. 

31.01: Well, they had the restriction of getting more revenue that they cannot solve. 

31.07: And with that, thank you, Ksenia. 

31.11: Thank you so much, Ben.




Quality 5.0—Quantifying the "Unmeasurable" With Tom Gilb and Simon Holzapfel


BONUS: Quality 5.0—Quantifying the "Unmeasurable" With Tom Gilb and Simon Holzapfel

Clarification Before Quantification

"Quantification is not the main idea. The key idea is clarification—so that the executive team understands each other."

 

Tom emphasizes that measurement is a means to an end. The real goal is shared understanding. But quantification is a powerful clarification tactic because it forces precision. When someone says they want a "very fast car," asking "can we define a scale of measure?" immediately surfaces the vagueness. Miles per hour? Acceleration time? Top speed? Each choice defines what you're actually optimizing for.

The Scale-Meter-Target Framework

"First, define a scale of measure. Second, define the meter—the device for measuring. Third, set numbers: where are we now, what's the minimum to survive, and what does success look like?"

 

Tom's framework makes the abstract concrete:

 

  • Scale of measure: What dimension are you measuring? (e.g., time to complete task)

  • Meter: How will you measure it? (e.g., user testing with stopwatch)

  • Past/Status: Where are you now? (e.g., currently takes 47 seconds)

  • Tolerable: What's the minimum acceptable? (e.g., must be under 30 seconds to survive)

  • Target/Goal: What does success look like? (e.g., 15 seconds or less)

 

Many important concepts like "usability" decompose into 10+ different scales of measure—you're not looking for one magic number but a set of relevant metrics.
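As one way to see how concrete this gets, here is a small sketch (my own encoding, not Gilb's notation) that captures the Scale/Meter/Past/Tolerable/Goal fields as a data structure, reusing the example values from the list above. A concept like "usability" would become a list of such requirements rather than a single number.

```python
# Sketch of a Gilb-style quantified requirement; field names follow the
# Scale/Meter/Past/Tolerable/Goal breakdown above, values are the list's examples.
from dataclasses import dataclass

@dataclass
class QualityRequirement:
    name: str
    scale: str        # dimension being measured
    meter: str        # how it will be measured
    past: float       # where we are now
    tolerable: float  # minimum acceptable to survive
    goal: float       # what success looks like

task_speed = QualityRequirement(
    name="Time to complete task",
    scale="seconds per task",
    meter="user testing with a stopwatch",
    past=47.0,
    tolerable=30.0,
    goal=15.0,
)
print(task_speed)
```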

Trust as the Organizational Hormone

"Change moves at the speed of trust. Once there's trust, information flows. Once information flows, the system comes to life and can learn. Until there's trust, you have the Soviet problem."

 

Simon introduces trust as the "human growth hormone" of organizational change—it's fast, doesn't require a user's manual, and enables everything else. Low-trust environments hoard information, guaranteeing poor outcomes. The practical advice? Make your work visible to your manager, alignment-check first, do something, show results. Living the learning cycle yourself builds trust incrementally. And as Tom adds: if you deliver increased critical value every week, you will build trust.

 

About Tom Gilb and Simon Holzapfel

 

Tom Gilb, born in the US, lived in London, and then moved to Norway in 1958. An independent teacher, consultant, and writer, he has worked in software engineering, corporate top management, and large-scale systems engineering. As the saying goes, Tom was writing about Agile before Agile was named. In 1976, Tom introduced the term "evolutionary" in his book Software Metrics, advocating for development in small, measurable steps. Today, we talk about Evo, the name Tom uses to describe his approach. Tom has worked with Dr. Deming and holds a certificate personally signed by him.

You can listen to Tom Gilb's previous episodes here

 

You can link with Tom Gilb on LinkedIn 

 

Simon Holzapfel is an educator, coach, and learning innovator who helps teams work with greater clarity, speed, and purpose. He specializes in separating strategy from tactics, enabling short-cycle decision-making and higher-value workflows. Simon has spent his career coaching individuals and teams to achieve performance with deeper meaning and joy. Simon is also the author of the Equonomist newsletter on Substack.

And you can listen to Simon's previous episodes on the podcast here

 

You can link with Simon Holzapfel on LinkedIn.

 





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20251211_Simon_Tom_Thu.mp3?dest-id=246429

Rider and ReSharper 2025.3.0.4: Important Updates Released


Another set of updates for the 2025.3 versions of ReSharper and Rider has just been released. This release contains important bug-fixes as well as feature updates.

Let’s take a look at what’s been improved.

Rider 2025.3.0.4

Multi-agent experience in the AI Chat window: Junie and Claude Agent

Claude Agent has become the first third-party AI agent natively integrated into JetBrains IDEs. With its addition, JetBrains introduces a multi-agent experience that brings even more flexibility and power to your development workflow. Now that Claude Agent and Junie are available in the same chat interface, you can switch between agents seamlessly and get the right kind of assistance for every task.

The easiest way to start working with any agent now is to launch it directly from the AI chat. However, the Junie plugin (and some of its features, available exclusively there) will still be available if you prefer to use Junie that way.

Learn more here.

Transparent AI quota tracking in the IDE

You can now view your remaining AI credits, renewal date, and top-up balance directly inside your IDE, and if you run out of credits, you can initiate a top-up from there as well. This update makes it easier to monitor and manage your AI resources – bringing more clarity and convenience to your AI usage. 

In Junie, if your task uses more than 1.2 AI credits, you will get a notification. This feature is currently available in the Junie plugin and will be coming to the AI chat soon.

Learn more about AI quotas in this blog post.

[Coming Soon] Bring Your Own Key

With BYOK, you will be able to connect to OpenAI, Anthropic, or any OpenAI API-compatible local model using your own API key, without logging into JetBrains AI. This gives you more control over how you use AI in your IDE, and it’s ideal if you prefer to work with a specific provider.

This setup is particularly powerful when paired with a JetBrains AI subscription (including the free tier), which provides enhanced completion, extra models, and bonus credits while still allowing you to use your own key for chat and agents.
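For readers unfamiliar with what "OpenAI API-compatible" means here, the sketch below shows the general pattern outside the IDE: the standard OpenAI client pointed at a local server's base URL with your own key. The base_url and model name are placeholders, and this is not how Rider wires up BYOK internally.

```python
# Illustrative only: calling a local OpenAI API-compatible server with your
# own key. The base_url and model name below are placeholder assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="my-local-key")
response = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": "Summarize this change in one sentence."}],
)
print(response.choices[0].message.content)
```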

Learn more here.

Game development

Rider now provides cloud-powered multi-line code completion for shaders in Unity, Unreal Engine, and Godot projects. For Godot developers, cloud completion suggestions are also available for GDScript files.

Working with databases

Rider replaces the term query console with query file – because consoles have essentially always been files, and it’s time the UI reflected that. We’ve also simplified the workflow, making it more discoverable and consistent. You can learn more about the change in this post. 

Notable fixes included in this build:

  • We’ve resolved the issue of excessive memory usage when opening .NET 6 solutions [RIDER-132952].
  • Rider now ships with the correct IJent version, restoring full WSL support. Git, Docker integration, terminals, and device discovery over WSL work reliably again. [IJPL-219668]
  • Deploying to Android devices now works reliably again. Rider correctly uploads the application APK file, even when a previous version is already installed on the device. [RIDER-132740]
  • Rendered documentation now correctly displays code containing < and > characters. [RIDER-119249]
  • Search Everywhere now returns results in a stable, predictable order. [RIDER-132113]
  • Source-generated files from System.Text.Json are now detected correctly and no longer produce false errors. [RIDER-132634]

For the complete list of resolved issues, please refer to our issue tracker.

ReSharper 2025.3.0.4 

Here are the most notable fixes included in this update:

  • SQL/NoSQL support can now be excluded from the dotUltimate installer. An installation key is available for silent setups, allowing teams to omit these features when not needed. [RSRP-501992]
  • The Unit Test Output panel in Out-of-Process mode now shows vertical scrollbars correctly, making long test output fully navigable again. [RSRP-501537]

For the full list of issues resolved in this build, please refer to our issue tracker.


You can download the latest builds from our website (Rider, ReSharper) or via the Toolbox App. You can also update Rider as a snap.


Raised analog. Living digital.


I grew up in a world where the phone was attached to the wall, the TV was attached to three channels, and the only clouds anyone talked about were the ones that might cancel school. My childhood was fully analog—tapes rewound with a pencil, encyclopedias delivered in 26 heavy volumes (literally had this in my house growing up), and cameras where you didn’t know if you blinked or had red eyes until the drugstore handed you your photos. I literally got college credit my freshman year at Purdue because I could use WordPerfect and Lotus Notes.

[Image: A spreadsheet showing employee data, including columns for employee ID, name, department, job title, years of service, salary, and bonus information.]

And yet I (accidentally) built a career deeply rooted in digital technology.

I didn’t grow up with it.
I grew up into it.
And that transition—from an analog childhood to a digital adulthood—isn’t just my story. It’s the story of an entire generation.

Somewhere along the way, someone labeled Gen X “digital immigrants.”

As if we arrived late to the tech party, confused by the buttons, clutching a paper map while everyone else understood the interface intuitively. It’s a tidy concept—and completely wrong.

The myth assumes equal access. It assumes anyone born before a certain year must have entered digital life reluctantly, blinking at a glowing monitor like it was witchcraft. But that tidy narrative ignores the messy reality:

Gen X didn’t inherit digital technology.

We ran into it unevenly, unpredictably, and most often when someone in the house could finally afford it.

For many of us, the “first computer” wasn’t something unboxed at home—it lived in a school lab, a library, or a friend’s basement. Some Xers didn’t touch a keyboard until college. Others were writing code before they were tall enough to reach every key without leaning.

So no, Gen X isn’t digital immigrants.

We’re the ones who got shoved into the digital deep end— and then somehow ended up teaching everyone else how to swim.

And let’s not forget: early technology was expensive.
Not “expensive for a kid.”
Expensive, period.

A home computer wasn’t an impulse buy; it was a negotiation.
Printers cost a small fortune. I seriously didn’t know anyone who had one.
Even blank floppy disks felt like supplies you needed to ration. And you could make them two-sided if you knew the hack.

[Image: A large, black floppy disk with a blue write-protect tab attached.]

So Gen X became the only generation where digital access wasn’t determined by birth year—it was determined by income, geography, and luck. Within the same cohort, you had:

  • kids who saved chore money for a starter machine,
  • kids who only saw a computer during a tightly timed school slot,
  • and kids who didn’t touch one until their first job.

No other generation had that kind of spread.

Digital fluency for us wasn’t automatic.

It was earned.

And we didn’t earn it through intuition—we earned it through persistence.

Early tech didn’t greet us with friendly icons. It confronted us with blinking cursors, cryptic error messages, and manuals the size of small novels. Nothing “just worked.” You had to make it work.

We weren’t digital natives.
We weren’t digital immigrants.
We were something much more Gen X:

The “Ok Fine, I’ll Do It” Generation.

We learned technology because someone had to—and somehow, that someone kept being us.

[Image: When ’80s kids see smartphones]

By the time Millennials came along, technology had been smoothed out. Computers booted without rituals. Interfaces became friendlier. Devices got cheaper. The worst of the complexity had been quietly sanded down.

Older Millennials still caught a bit of the awkward middle.
Younger Millennials and Gen Z?
For them, technology arrived fully assembled.

Gen Alpha doesn’t troubleshoot. They reset.
They don’t adapt to systems; the systems adapt to them.

Meanwhile, Gen X spans a different arc entirely:

We remember before digital.
We lived through the messy middle.
We mastered the polished era too.

If you look at the digital timeline, Gen X is the only generation without a simple storyline. We weren’t born into tech, and we didn’t exit the workplace before it took over everything. We lived in the before, the during, and the after.

And honestly? That’s become our quiet superpower.

We know how to navigate a world that changes its rules every few years—because it’s been doing exactly that our entire adult lives. We’ve learned new tools so often we barely notice anymore. We know how things work under the hood—not because we set out to, but because back then you had to.

So yes, Gen X became the bridge.
Not because we volunteered, but because everyone kept handing us the cables.

We’re the troubleshooters.
The explainers.
The translators.
The only generation fluent in every version of the world we’ve lived through.

Not bad for the “forgotten middle child” of the generational chart.

We didn’t just live through the digital divide.

We’re the ones who quietly held it together while everyone else crossed.


Octopus Deploy GitHub app improvements


In early 2024, we released the Octopus Deploy GitHub App — a seamless, secure integration between GitHub and Octopus Deploy. One of its biggest advantages is that it eliminates the need to manually manage GitHub credentials inside Octopus.

Since then, we’ve been listening closely to your feedback, looking for ways to make the experience even better. And today, we’re excited to share an improvement that will save you time and frustration when working with repositories.

What’s changed

Search and select support for GitHub repositories

Previously, when creating or modifying the repository used in a GitHub Connection within Octopus, you’d be presented with a list of all accessible repositories. This list was limited to 50 at a time — and there was no search feature.

If you had hundreds (or even thousands!) of repositories, this meant scrolling endlessly and hunting manually for the right one — especially if its name wasn’t near the top alphabetically.

With this update, you can now:

  • Search for the repository you want — no more scrolling through pages to find that repo starting with a “z.”
  • See a clear view of your selected repository — making it much easier to add or remove repositories as needed.

These improvements are automatically available to any GitHub App connections with access to all repositories. If your GitHub App is configured with access to only specific repositories, the previous functionality will remain.

Tip: If you work with many repositories and find the current experience slowing you down, consider updating your GitHub App settings to allow access to all repositories.

Screenshot of new selector modal

Use your GitHub App for any Git dependency

Previously, if a process step or database project needed to pull scripts, templates, or other files from another repository, you couldn’t use your existing GitHub App connection unless that repository was the project itself.

That meant:

  • Managing separate Git credentials or access tokens for each additional repository.
  • Expanding credential scopes and performing extra maintenance work.
  • Using inconsistent authentication — project repo via GitHub App, dependencies via basic Git.

Now, you can use your GitHub App connection for any Git-based dependency, including:

  • Automation script repositories
  • Dependencies for database projects
  • Any repository linked through the GitHub App

Benefits:

  • Simpler setup — choose your GitHub App connection in the portal, without extra secrets or tokens.
  • Consistent experience — the same secure connection works for your project repository and all related dependencies.

Screenshot of deployment process page with a GitHub Connection selected

What’s next

When we first launched the Octopus Deploy GitHub App, it was available only for Octopus Cloud. But we know many of our self-hosted customers use GitHub too — and we have great news.

We will be starting work shortly to bring the GitHub App to self‑hosted Octopus Deploy instances. If you want to follow our progress, check out our roadmap item.

Happy Deployments!
