Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Trump’s H-1B visa fee isn’t just about immigration, it’s about fealty


Donald Trump has never made his distaste for immigrants a secret. It's been a cornerstone of his political movement since he descended that escalator on June 16th, 2015 and started hurling racist vitriol in the general direction of Mexico and Mexican Americans. On the surface, his assault on the H-1B visa program seems like part of the White House's ongoing campaign to reduce the number of immigrants in the country. It might have that effect, but the biggest goal for Trump may not be forcing companies to hire more Americans or cutting down on the number of workers from India moving to the US. It's giving the government more leverage over his …

Read the full story at The Verge.


Random.Code() - Resurrecting EmitDebugging Using Interceptors, Part 3

From: Jason Bock
Duration: 1:24:25
Views: 43

I keep working on an interceptor for System.Reflection.Emit. Let's see if I can make any new progress.

https://github.com/JasonBock/EmitDebugging/issues/6


Agentic AI research-methodology - part 2


This post continues our series (previous post) on agentic AI research methodology. Building on our previous discussion on AI system design, we now shift focus to an evaluation-first perspective.

Tamara Gaidar, Data Scientist, Defender for Cloud Research
Fady Copty, Principal Researcher, Defender for Cloud Research

 

 

TL;DR:

Evaluation-First Approach to Agentic AI Systems

This blog post advocates for evaluation as the core value of any AI product. As generic models grow more capable, their alignment with specific business goals remains a challenge - making robust evaluation essential for trust, safety, and impact.

This post presents a comprehensive framework for evaluating agentic AI systems, moving from business goals and responsible AI principles to detailed performance assessments. It emphasizes using synthetic and real-world data, diverse evaluation methods, and coverage metrics to build a repeatable, risk-aware process that highlights system-specific value.

 

Why evaluate at all?

While issues like hallucinations in AI systems are widely recognized, we propose a broader and more strategic perspective:

Evaluation is not just a safeguard - it is the core value proposition of any AI product.

As foundation models grow increasingly capable, their ability to self-assess against domain-specific business objectives remains limited. This gap places the burden of evaluation on system designers. Robust evaluation ensures alignment with customer needs, mitigates operational and reputational risks, and supports informed decision-making. In high-stakes domains, the absence of rigorous output validation has already led to notable failures - underscoring the importance of treating evaluation as a first-class concern in agentic AI development.

Evaluating an AI system involves two foundational steps:

  1. Developing an evaluation plan that translates business objectives into measurable criteria for decision-making.
  2. Executing the plan using appropriate evaluation methods and metrics tailored to the system’s architecture and use cases.

The following sections detail each step, offering practical guidance for building robust, risk-aware evaluation frameworks in agentic AI systems.

Evaluation plan development

The purpose of an evaluation plan is to systematically translate business objectives into measurable criteria that guide decision-making throughout the AI system’s lifecycle. Begin by clearly defining the system’s intended business value, identifying its core functionalities, and specifying evaluation targets aligned with those goals. A well-constructed plan should enable stakeholders to make informed decisions based on empirical evidence. It must encompass end-to-end system evaluation, expected and unexpected usage patterns, quality benchmarks, and considerations for security, privacy, and responsible AI. Additionally, the plan should extend to individual sub-components, incorporating evaluation of their performance and the dependencies between them to ensure coherent and reliable system behavior.

 

Example - Evaluation of a Financial Report Summarization Agent

To illustrate the evaluation-first approach, consider the example from the previous post of an AI system designed to generate a two-page executive summary from a financial annual report. The system was composed of three agents: split report into chapters, extract information from chapters and tables, and summarize the findings. The evaluation plan for this system should operate at two levels: end-to-end system evaluation and agent-level evaluation.

End-to-End Evaluation

At the system level, the goal is to assess the agent’s ability to accurately and efficiently transform a full financial report into a concise, readable summary. The business purpose is to accelerate financial analysis and decision-making by enabling stakeholders - such as executives, analysts, and investors - to extract key insights without reading the entire document. Key objectives include improving analyst productivity, enhancing accessibility for non-experts, and reducing time-to-decision.

To fulfill this purpose, the system must support several core functionalities:

  • Natural Language Understanding: Extracting financial metrics, trends, and qualitative insights.
  • Summarization Engine: Producing a structured summary that includes an executive overview, key financial metrics (e.g., revenue, EBITDA), notable risks, and forward-looking statements.

The evaluation targets should include:

  • Accuracy: Fidelity of financial figures and strategic insights.
  • Readability: Clarity and structure of the summary for the intended audience.
  • Coverage: Inclusion of all critical report elements.
  • Efficiency: Time saved compared to manual summarization.
  • User Satisfaction: Perceived usefulness and quality by end users.
  • Robustness: Performance across diverse report formats and styles.

These targets inform a set of evaluation items that directly support business decision-making. For example, high accuracy and readability in risk sections are essential for reducing time-to-decision and must meet stringent thresholds to be considered acceptable. The plan should also account for edge cases, unexpected inputs, and responsible AI concerns such as fairness, privacy, and security.
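
To make this concrete, here is a minimal sketch (in Python) of how evaluation items like these could be captured as data so that gaps against the plan can be checked automatically. The target names, metrics, and thresholds are illustrative assumptions, not values prescribed by the methodology above.

from dataclasses import dataclass, field

@dataclass
class EvaluationItem:
    """One measurable criterion derived from a business objective."""
    target: str       # e.g. "accuracy" or "readability"
    metric: str       # how it is measured, e.g. "LLM-judge vs. reference"
    threshold: float  # minimum acceptable score in [0, 1]; values here are made up
    rationale: str    # why the threshold matters for decision-making

@dataclass
class EvaluationPlan:
    system_name: str
    items: list = field(default_factory=list)

    def gaps(self, results):
        """Return items whose measured score misses the threshold or was never measured."""
        return [i for i in self.items if results.get(i.target, 0.0) < i.threshold]

plan = EvaluationPlan(
    system_name="financial-report-summarizer",
    items=[
        EvaluationItem("accuracy", "LLM-judge vs. reference figures", 0.95,
                       "incorrect figures undermine trust in the summary"),
        EvaluationItem("readability", "expert rubric (1-5), normalized", 0.80,
                       "executives must act on the summary without reading the full report"),
    ],
)
print([i.target for i in plan.gaps({"accuracy": 0.97, "readability": 0.60})])  # ['readability']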

Agent-Level Evaluation

Suppose the system is composed of three specialized agents:

  • Chapter Analysis
  • Tables Analysis
  • Summarization

Each agent requires its own evaluation plan. For instance, the chapter analysis agent should be tested across various chapter types, unexpected input formats, and content quality scenarios. Similarly, the tables analysis agent must be evaluated for its ability to extract structured data accurately, and the summarization agent for coherence and factual consistency.

Evaluating Agent Dependencies

Finally, the evaluation must address inter-agent dependencies. In this example, the summarization agent relies on outputs from the chapter and tables analysis agents. The plan should include dependency checks such as local fact verification - ensuring that the summarization agent correctly cites and integrates information from upstream agents. This ensures that the system functions cohesively and maintains integrity across its modular components.
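
As a rough illustration of one such dependency check, the sketch below flags numeric figures that appear in a summary but in none of the upstream agent outputs. The regex and the sample strings are simplifications for illustration only; a production check would use more robust fact extraction.

import re

def local_fact_check(summary, upstream_outputs):
    """Return numeric figures cited in the summary that appear in none of the
    upstream agent outputs - a cheap proxy for 'facts that were never extracted'."""
    figures = re.findall(r"\$?\d[\d,.]*%?", summary)
    upstream_text = " ".join(upstream_outputs)
    return [f for f in figures if f not in upstream_text]

chapters_out = "FY24 revenue was $4.1B, up 12% from FY23."
tables_out = "EBITDA: $910M; Net debt: $1.2B."
summary = "Revenue reached $4.1B (up 12%); EBITDA was $950M."
print(local_fact_check(summary, [chapters_out, tables_out]))  # ['$950'] - a figure not grounded upstream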

Executing the Evaluation Plan

Once the evaluation plan is defined, the next step is to execute it using appropriate methods and metrics. While traditional techniques such as code reviews and manual testing remain valuable, we focus here on simulation-based evaluation - a practical and scalable approach that compares system outputs against expected results. For each item in the evaluation plan, this process involves:

  1. Defining representative input samples and corresponding expected outputs
  2. Selecting simulation methods tailored to each agent under evaluation
  3. Measuring and analyzing results using quantitative and qualitative metrics

This structured approach enables consistent, repeatable evaluation across both individual agents and the full system workflow.

Defining Samples and Expected Outputs

A robust evaluation begins with a representative set of input samples and corresponding expected outputs. Ideally, these should reflect real-world business scenarios to ensure relevance and reliability. While a comprehensive evaluation may require hundreds or even thousands of real-life examples, early-stage testing can begin with a smaller, curated set - such as 30 synthetic input-output pairs generated via GenAI and validated by domain experts.
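
A lightweight way to store such a curated set is one record per line, pairing each input with its expected output and the plan items it exercises. The field names below are illustrative assumptions, not a prescribed schema.

import json

samples = [
    {
        "id": "sample-001",
        "input": "reports/contoso_annual_2024.pdf",   # placeholder path
        "expected_summary": "Revenue grew 12% to $4.1B; key risks include ...",
        "covers": ["accuracy", "coverage"],
        "source": "synthetic",        # later complemented or replaced by real-world data
        "validated_by": "domain-expert",
    },
]

# Write the set as JSON Lines so it can grow from dozens to thousands of samples.
with open("eval_samples.jsonl", "w") as f:
    for s in samples:
        f.write(json.dumps(s) + "\n")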

Simulation-Based Evaluation Methods

Early-stage evaluations can leverage lightweight tools such as Python scripts, LLM frameworks (e.g., LangChain), or platform-specific playgrounds (e.g., Azure OpenAI). As the system matures, more robust infrastructure is required to support production-grade testing. It is essential to design tests with reusability in mind - avoiding hardcoded samples and outputs - to ensure continuity across development stages and deployment environments.
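
For example, a small harness along the following lines keeps the samples, the system under test, and the comparison method decoupled, so the same tests can run against an early prototype or a deployed endpoint. The `my_summarizer` and `my_judge` names in the usage comment are hypothetical stand-ins.

import json

def run_simulation(samples_path, system, compare):
    """Replay each stored sample through the system under test and score the output.
    Both `system` and `compare` are injected, so nothing is hardcoded in the harness
    and the same tests carry over across development stages and environments."""
    results = []
    with open(samples_path) as f:
        for line in f:
            sample = json.loads(line)
            actual = system(sample["input"])
            results.append({
                "id": sample["id"],
                "score": compare(actual, sample["expected_summary"]),
                "covers": sample["covers"],
            })
    return results

# Usage with hypothetical stand-ins:
# results = run_simulation("eval_samples.jsonl", system=my_summarizer, compare=my_judge)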

Measuring Evaluation Outcomes

Evaluation results should be assessed in two primary dimensions:

  1. Output Quality: Comparing actual system outputs against expected results.
  2. Coverage: Ensuring all items in the evaluation plan are adequately tested.

Comparing Outputs

Agentic AI systems often generate unstructured text, making direct comparisons challenging. To address this, we recommend a combination of:

  • LLM-as-a-Judge: Using large language models to evaluate outputs based on predefined criteria.
  • Domain Expert Review: Leveraging human expertise for nuanced assessments.
  • Similarity Metrics: Applying lexical and semantic similarity techniques to quantify alignment with reference outputs.

Using LLMs as Evaluation Judges

Large Language Models (LLMs) are emerging as a powerful tool for evaluating AI system outputs, offering a scalable alternative to manual review. Their ability to emulate domain-expert reasoning enables fast, cost-effective assessments across a wide range of criteria - including correctness, coherence, groundedness, relevance, fluency, hallucination detection, sensitivity, and even code readability. When properly prompted and anchored to reliable ground truth, LLMs can deliver high-quality classification and scoring performance.

For example, consider the following prompt used to evaluate the alignment between a security recommendation and its remediation steps:

“Below you will find a description of a security recommendation and relevant remediation steps. Evaluate whether the remediation steps adequately address the recommendation. Use a score from 1 to 5:

  • 1: Does not solve at all
  • 2: Poor solution
  • 3: Fair solution
  • 4: Good solution
  • 5: Exact solution
    Security recommendation: {recommendation}
    Remediation steps: {steps}”
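
A minimal sketch of wiring such a rubric into code might look like the following, assuming the model is exposed as a simple prompt-in, text-out callable. The "score only" instruction and the parsing logic are additions for illustration, not part of the original prompt.

import re

JUDGE_PROMPT = """Below you will find a description of a security recommendation and relevant
remediation steps. Evaluate whether the remediation steps adequately address the
recommendation. Use a score from 1 to 5:
1: Does not solve at all
2: Poor solution
3: Fair solution
4: Good solution
5: Exact solution
Security recommendation: {recommendation}
Remediation steps: {steps}
Answer with the score only."""

def judge(recommendation, steps, llm):
    """Format the rubric prompt, send it through any prompt-in/text-out callable,
    and parse the 1-5 score from the reply."""
    reply = llm(JUDGE_PROMPT.format(recommendation=recommendation, steps=steps))
    match = re.search(r"[1-5]", reply)
    if match is None:
        raise ValueError(f"Judge returned no score: {reply!r}")
    return int(match.group())

# Offline usage with a stub model, so the sketch runs without any API:
print(judge("Enable MFA for all admin accounts",
            "Turn on multi-factor authentication for administrator roles in the identity provider",
            llm=lambda prompt: "5"))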

While LLM-based evaluation offers significant advantages, it is not without limitations. Performance is highly sensitive to prompt design and the specificity of evaluation criteria. Subjective metrics - such as “usefulness” or “helpfulness” - can lead to inconsistent judgments depending on domain context or user expertise. Additionally, LLMs may exhibit biases, such as favoring their own generated responses, preferring longer answers, or being influenced by the order of presented options.

Although LLMs can be used independently to assess outputs, we strongly recommend using them in comparison mode - evaluating actual outputs against expected ones - to improve reliability and reduce ambiguity. Regardless of method, all LLM-based evaluations should be validated against real-world data and expert feedback to ensure robustness and trustworthiness.

Domain expert evaluation

Engaging domain experts to assess AI output remains one of the most reliable methods for evaluating quality, especially in high-stakes or specialized contexts. Experts can provide nuanced judgments on correctness, relevance, and usefulness that automated methods may miss. However, this approach is inherently limited in scalability, repeatability, and cost-efficiency. It is also susceptible to human biases - such as cultural context, subjective interpretation, and inconsistency across reviewers - which must be accounted for when interpreting results.

Similarity techniques

Similarity techniques offer a scalable alternative by comparing AI-generated outputs against reference data using quantitative metrics. These methods assess how closely the system’s output aligns with expected results, using measures such as exact match, lexical overlap, and semantic similarity. While less nuanced than expert review, similarity metrics are useful for benchmarking performance across large datasets and can be automated for continuous evaluation. They are particularly effective when ground truth data is well-defined and structured.
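
As a small illustration, exact match and lexical overlap can be computed with the standard library alone, while semantic similarity would typically swap in an embedding model (not shown here). The sample strings are made up.

from difflib import SequenceMatcher

def exact_match(output, reference):
    return output.strip() == reference.strip()

def lexical_overlap(output, reference):
    """Token-level Jaccard overlap: crude, fast, and easy to automate at scale."""
    a, b = set(output.lower().split()), set(reference.lower().split())
    return len(a & b) / len(a | b) if a | b else 1.0

def char_similarity(output, reference):
    """Character-level ratio from difflib; a semantic variant would compare embeddings instead."""
    return SequenceMatcher(None, output, reference).ratio()

reference = "Revenue grew 12% year over year, driven by cloud services."
output = "Cloud services drove a 12% year-over-year increase in revenue."
print(exact_match(output, reference),
      round(lexical_overlap(output, reference), 2),
      round(char_similarity(output, reference), 2))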

Coverage evaluation in Agentic AI Systems

A foundational metric in any evaluation framework is coverage - ensuring that all items defined in the evaluation plan are adequately tested. However, simple checklist-style coverage is insufficient, as each item may require nuanced assessment across different dimensions of functionality, safety, and robustness.

To formalize this, we introduce two metrics inspired by software engineering practices:

  • Prompt-Coverage: Assesses how well a single prompt invocation addresses both its functional objectives (e.g., “summarize financial risk”) and non-functional constraints (e.g., “avoid speculative language” or “ensure privacy compliance”). This metric should reflect the complexity embedded in the prompt and its alignment with business-critical expectations.
  • Agentic-Workflow Coverage: Measures the completeness of evaluation across the logical and operational dependencies within an agentic workflow. This includes interactions between agents, tools, and tasks - analogous to branch coverage in software testing. It ensures that all integration points and edge cases are exercised.

We recommend aiming for the highest possible coverage across evaluation dimensions. Coverage gaps should be prioritized based on their potential risk and business impact, and revisited regularly as prompts and workflows evolve to ensure continued alignment and robustness.
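
Both metrics can start out as simple ratios over the items and dependency edges defined in the evaluation plan, as in this illustrative sketch; the requirement names and workflow edges are hypothetical.

def coverage(evaluated, required):
    """Fraction of required items that have at least one test exercising them."""
    return len(evaluated & required) / len(required) if required else 1.0

# Hypothetical requirements for the summarizer example above
prompt_requirements = {"summarize financial risk", "avoid speculative language", "ensure privacy compliance"}
prompt_evaluated = {"summarize financial risk", "ensure privacy compliance"}

workflow_edges = {("chapter_analysis", "summarization"), ("tables_analysis", "summarization")}
edges_exercised = {("chapter_analysis", "summarization")}

print(f"prompt coverage:   {coverage(prompt_evaluated, prompt_requirements):.0%}")
print(f"workflow coverage: {coverage(edges_exercised, workflow_edges):.0%}")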

Closing Thoughts

As agentic AI systems become increasingly central to business-critical workflows, evaluation must evolve from a post-hoc activity to a foundational design principle. By adopting an evaluation-first mindset - grounded in structured planning, simulation-based testing, and comprehensive coverage - you not only mitigate risk but also unlock the full strategic value of your AI solution. Whether you're building internal tools or customer-facing products, a robust evaluation framework ensures your system is trustworthy, aligned with business goals, and differentiated from generic models.


Silksong reignites Difficulty debate, do NOT finish your tasks


Hello and Welcome, I’m your Code Monkey!

As I'm writing this it is currently 7am and the sun is just coming out; there are no clouds and it looks quite beautiful outside!

I've been making really nice progress on my Problem Solving course! I've been creating all sorts of systems and scenarios to then create lots of problems for you to solve! I really hope this strategy works nicely and I can teach you this super valuable skill much faster than the 10 years it took me.

  • Game Dev: Silksong Difficulty; Don’t Finish Tasks

  • Tech: Meta AI Glasses

  • Fun: Walk-Cycle ≠ Walk→Cycle



Game Dev

Silksong reignites Difficulty debate

Silksong is out now and most people are loving the game. However, some people are finding it a bit too difficult, which means the usual "game difficulty" debate is on fire once again, as happens every time a Souls game comes out.

What is the correct difficulty setting for a game? This is actually an extremely difficult game design question to answer, and different developers take different approaches. Most games just have difficulty settings, but some developers really insist on having one specific designed difficulty and that's it.

Silksong takes the second approach with only one difficulty setting. And for some people that setting is just right, whereas for others it is too frustratingly difficult.

I have a theory on game difficulty, though I have no idea if it has any basis in reality. My theory is that there are basically two types of people.

  • One type finds a difficult boss fight to be challenging, but they can see they are making progress, and when they finally defeat the boss they get a massive rush of endorphins that makes them feel insanely good.

  • The other type finds a difficult boss, but every time they lose they just get more and more frustrated, and when they finally manage to defeat the boss they do NOT get that rush of endorphins, instead they think "oh thank god I finally beat this stupid boss and now I can keep playing the game"

I am very much the second type, I get NO pleasure from finally defeating a difficult boss, I don't even feel a sense of relief, so for me it's just never-ending frustration with no positive feelings in sight, therefore I do NOT like punishingly difficult games.

That's my theory on why some people LOVE difficult games and others (like myself) hate them. I have no idea if that theory has any basis in reality but I do know I never feel the massive sense of accomplishment that some people talk about.

The excellent channel Game Maker's Toolkit has a video on this. If you want to be a good game designer then you NEED to study this topic. This is not just important in terms of difficulty itself but also super important in terms of accessibility. Some people might physically not have fast enough reflexes to beat some boss, regardless of how much they practice, and without difficulty settings it just means the game is unplayable for them.

There is an interesting comment on that video that makes an analogy to spices in food: everyone has different preferences for what they like. If you give me some super spicy food I won't eat it no matter how delicious you tell me it is.

My advice to you on this topic is that you should include difficulty settings in your own games. In the case of Silksong and Souls games, they can get away with a single difficulty because they have massive audiences that will love the games no matter what. But for your indie games, if you go with just one difficulty setting that is too hard then you will likely get lots of negative reviews.

I hate super difficult games, and since I'm so busy and rarely have time to play games I definitely do NOT want that limited time to be spent feeling feelings of frustration, so I will always pick Normal or Easy modes. I really loved playing Sekiro and I really wanted to finish it, but after 3 hours of dying non-stop against some boss on a tower I just decided that the game wasn't worth the stress it was causing me, so even though I wanted to keep playing and exploring this world, I just had to quit for my own mental sanity.


Affiliate

Bundle 99% OFF, FREE Terrain Tool!

Do you like Realistic assets? There is a HumbleBundle out right now with tons of awesome stuff! It includes assets on all kinds of themes from Fantasy to Modern and Sci-fi.

As usual it’s a massive discount at 99% OFF! And it’s ending in just 3 DAYS!

Get it HERE!

The Publisher of the Week this time is Jason Booth, veteran publisher with lots of Terrain tools and stamps.

Get the FREE MicroVerse, which is a tool that helps you edit your terrain in a non-destructive, real-time manner.

Get it HERE and use coupon JASONBOOTH at checkout to get it for FREE!


Game Dev

DON'T finish your tasks!

Here's an interesting tip for motivation, do NOT finish your tasks!

It might sound odd but it's a strategy that can work quite nicely. If you're coming to the end of your workday, or just of a focus session, try to leave a tiny task undone. Maybe a button is missing a click handler, maybe you just need to write an event and hook it up to the UI, maybe you just need to drag some references: something simple that would complete that task.

That way when you get back to work you have a quick, easy win waiting for you! One of the hardest parts of "motivation" is getting started, so if you have a very easy quick win to start with, it will help you get going, and after that quick win you will likely keep going.

However, this advice might or might not work for you. There is an interesting thread on Reddit talking about this: the OP sees it as a great tip, whereas someone else commented "I just can't sleep with an unfinished task".

So on this I would say two things. First of all, if you're trying to make games seriously (as opposed to just as a hobby) then I would advise you to rely more on self-discipline than on motivation. Self-discipline means you do what you need to do regardless of whether you feel like it or not, whereas motivation is something you have no control over. If you're trying to make games seriously then you should "just do it" as opposed to only working when you feel like it.

And secondly, when it comes to literally any kind of advice, my best tip is to just try everything and see what works for you. Some advice might work great for one person and terribly for another. In this particular scenario maybe leaving a task undone will work wonders for you, or perhaps it will leave you stressed until you get back to it. For some people working mornings will be great, for someone else working at night will be better. So always remember that YOU are a unique human being: try out all sorts of approaches and over time you will learn what works for YOU specifically.

I use this strategy every once in a while, and it can work nicely. However, personally I'm more in the camp of feeling stressed by something unfinished, so my in-between strategy is to finish my tasks, but before I shut down my brain I think of some quick small tasks I can do when I get back to work.



Tech

Meta's new AR (AI) Glasses!

Meta has just unveiled their latest AR glasses, the Meta Ray-Ban Display, which is an insanely impressive piece of tech. It has the format of normal glasses (slightly thicker), and within the frame it contains an entire computer capable of processing all sorts of voice commands. It can show you a screen directly on the lenses (full color!), and it's connected to yet another super impressive piece of tech, the Meta Neural Band.

This band has been shown before and always amazes me: it reads the electrical signals your brain sends to your hands and uses them to control the glasses. By doing various gestures you can "click" on buttons, do slides, zooms, rotates, and many other actions. I hope this EMG band will be purchasable separately sometime soon; you could do a lot of fun stuff with it! And of course the glasses also contain AI, in fact they're actually calling these AI Glasses and not AR Glasses.

Very importantly, this is NOT a prototype; it's an actual product you will be able to purchase on September 30 (US only) for $799.

Perhaps most impressive of all is the rate of progress. MKBHD did a video on this, and in it he talked about how just a few months ago he was playing around with their prototype, and now, not long after, it's coming out as an actual consumer product! Usually it takes years to go from prototype to final product, so this is a super impressive timeline!

The live demo went pretty terribly, with constant fails. Funnily enough, the CTO told Yahoo that the reason is that when they said "Hey Meta, start Live AI" in the demo, it started every single Meta Ray-Ban in the room, which basically caused them to DDoS themselves, heh.

I love the concept of AR, and any time I see news like this I find myself asking "how long until AR contacts?". I also love their wrist brain interface; personally I am not a fan of voice controls (they never understand my accent), so I think this can be a super awesome new input type for the future!



Fun

Dev wanted a Walk-Cycle, got a Walk->Cycle

Sometimes you have to be very explicit about exactly what you want, especially with AI tools: they will give you the exact result you ask for, even if it's not what you meant.

Here is a funny example of a dev that asked for something, and got literally what they asked for even though it wasn't what they wanted. They wanted a walk cycle, as in just a looping walking animation, but the AI interpreted it as "oh this dev wants an animation where you start walking, and then jump on a bicycle!"

It's a pretty funny result and the output is surprisingly good, it genuinely generated an animation that starts walking and gets on a bike and cycles away, very nice super niche high quality animation!

This Unity AI tool for animation generation is actually pretty excellent; if you don't know how to use it, check out my detailed tutorial on it.

I thought this was super funny. If you want to use AI tools like these then you have to be very explicit about what exactly you want; leave no room for interpretation, otherwise you will likely not get what you meant.




Get Rewards by Sending the Game Dev Report to a friend!

(please don’t try to cheat the system with temp emails, it won’t work, just makes it annoying for me to validate)

Thanks for reading!

Code Monkey


Java Language Architect Brian Goetz on How Java Could Evolve


Java language architect Brian Goetz spoke at last month’s JVM Summit, delivering a talk that looked to Java’s future.

Goetz discussed not the Java we have now, but a hypothetical “set of features that are designed not to be used by themselves as a way to write better programs — but as a mechanism for making the language more growable and more extensible.”

In short, Goetz explained how he sees the Java language evolving.

“I spent a lot of time looking at what other languages did,” Goetz said, “and we feel like we’ve come to a point now where we have a pretty good idea of which way we want to go with this.”

On Reddit, Goetz described his talk as a “statement of likely direction.” There’s no official JDK Enhancement Proposal (JEP) yet, and “This is literally the first time we’ve talked about this in any detail. You have to start somewhere.”

But it was a fascinating chance to see not only how a programming language changes, but also the thoughtful philosophy that’s motivating those decisions.

The Philosophy of a ‘Growable’ Language

Beginning his talk, Goetz had stressed that he wasn’t speaking about “features that we’re planning to deliver immediately.” Instead, he would look at “more motivational examples” for the long term. Goetz had titled his talk “Growing the Java Language” — and for a heartfelt reason. Goetz remembers a famous 1998 paper (and talk) by Sun Microsystems computer scientist Guy Steele titled “Growing a Language.”

Goetz said Steele had made “a call to action for language designers to consider growability as an axis of design in programming languages.”

While many languages let users extend the “vocabulary” through user-created libraries, Steele noted that it’s harder if this new vocabulary doesn’t look the same as the language’s own essential “primitives.” Goetz said, “In many ways, this paper was kind of the starting gun for project Valhalla” — an OpenJDK project started in 2014 to incubate new Java language features, which is led by Goetz.

So Goetz wanted to describe not just a new Java feature, but also a language evolution philosophy that prioritizes extensibility when adding new Java features, and a mechanism for making it happen. “Some will say this goes too far,” says a bullet point on Goetz’s slide. “Some will say this doesn’t go far enough.”

“And that’s how we know we’re … right in the middle.”

Introducing ‘Witnesses’: A New Concept for Java

So what’s the new idea? Java’s method-defining interfaces have been called “blueprints of behavior.” Goetz suggested that now “We want to do all the things interfaces do — take a set of named behaviors and group them into a named bundle, that you can claim that this type conforms to, or this group of types conforms to (and allows the compiler to type check that).”

So there’s a crucial difference. Java’s language design team wants it to be about types — and not instances of types.

“We want to move this behavior to a third-party witness object instead,” explains one slide.

What’s being proposed is a simple, straightforward keyword — a witness literal (along with the ability to “summon” a witness, Goetz says, “merely by uttering its type”).

So…

public static final Comparator COMPARATOR =

becomes…

public static final witness Comparator COMPARATOR =

Elaborating later, Goetz told his audience that “We can add type classes to Java by adding relatively little to the language — a mechanism for publishing witnesses, and a mechanism for finding witnesses — that we can piggyback on existing language constructs like interfaces, fields and methods.”

Why not just define interfaces with all the desired methods, and then let classes implement that interface? It turns out this isn’t always a useful place for abstraction, Goetz said, leaving language designers facing lots of tricky corner cases and “gotchas.”

Goetz’s next slide explained that this is “really using the wrong tool.”

“We need something that is similar but not exactly the same thing as interfaces.” Haskell has type classes (which “abstract over types, and not the behavior of types”), while C# and Kotlin are “both going through their own set of explorations of this.” The C# community proposed something similar called shapes and extensions.

“All of these are sort of dancing around the same puzzle. Which is: how do I abstract over the behavior of types, without it being part of the definition of a type?”

Opportunities for Growth: Potential New Java Features

Goetz says this idea went through many iterations, but “we’ve kind of distilled it down to something that fits into Java much more cleanly than some of our previous ideas.”

“It’s about growing the language,” says one slide. Goetz sees huge potential for “growability” — and presented several new potential features:

  • New numeric classes, but “with the runtime behavior of primitives” — like 16-bit floating point numbers.
  • Math operators. Using a standard plus sign for your Float16 variables “would be really nice,” rather than having separate methods, Goetz said. Other languages have attempted this so-called “operator overloading” — associating the symbol with multiple operations, depending on the type of variables involved. Goetz says that’s “somewhat of a linguistic minefield … a number of languages have hatched various flavors of disasters with operator overloading.”
  • Collection expressions “for building a sequence-like structure,” similar to what’s available in C#. “This is at the ‘why don’t you just’ level of specification. But it seems like a viable path to get there, in the way that the proposal back in the Java 7 days was not a viable path to get there.”
  • Creational expressions. When creating an array today, the default value for its elements is always “null” or zero. What if there were a witness that could indicate when there is (and isn’t) a valid “blank” value? In Project Valhalla, Goetz says, adding validity checking when initializing an array “is a feature that we’ve been kind of diffident about,” because they didn’t want to add it into Java’s virtual machine (VM). But “This is a way to keep that feature in the language, but allow a given class to participate in the feature based on whether they’ve done some extra work or not. So it means that we get to put this behavior in the right place, which is a good feeling.”

A multipurpose language addition isn’t without precedent. Goetz presented two earlier “notable examples of language features designed to be extensible by libraries” — the foreach loop and try. Developers could use the foreach feature just by implementing the Iterable interface. (Goetz says the JDK’s developers then “went and retrofitted a bunch of classes to implement iterable” — as did other Java developers.) But most importantly, it just “looked like it was built-in.”

Goetz was glad Java didn’t just restrict the feature to just a handful of obvious use cases (like list, map and set). “I’m really glad that somebody stood up and said, ‘No, no, it’s really important for classes other than these few magic classes to be able to participate in it.'”

Goetz said he wanted to continue that tradition.

The Future Roadmap for Java’s Evolution

In concluding his talk, Goetz said it demonstrated not only the idea of witnesses, but sketched out “how we would use it for four potential features that have been irking us for quite a while.”

Looking ahead, Goetz believes witnesses “enable you to design better features, richer features, features that users can do more with, and ultimately maybe we won’t have to design as many language features in the future as a result. … Hopefully in the long run, we’ll be able to use this [to] build richer generic libraries and conditional behavior and those things as well.

“But in the short term, we can use this to deliver growable language features, including features people have been asking for for quite a while.”

One Reddit commenter even joked later that Goetz’s talk reminded them of “Dungeons and Dragons” spells. “There was definitely a point where I felt like Brian was about to cast magic missile.”

[Screenshot: Reddit comment from Brian Goetz, Java language architect, about his 2025 JVM Summit talk on witnesses]

The Reddit commenter added later that “It was a good an interesting talk. I hope these features land.” But one of Goetz’s final slides explained clearly where we stand. “The examples in the previous slides are not designs, they are ideas.”

[Screenshot: Reddit comment from Brian Goetz, Java language architect, about complexity and his 2025 JVM Summit talk on witnesses]

Still, in another Reddit comment, Goetz said Java’s design team now has a story they’re “comfortable” with, “so we were ready to share it. But note, it is still a story, and there’s a lot of other Valhalla stuff that has to happen first.”

And Goetz had drawn a warm round of applause after his presentation — and then opened up his talk to questions from the audience. And the first questioner acknowledged that they already saw a lot of value in the idea, calling Goetz’s talk “a really big proposal packaged in a fairly small syntactic change.”

Goetz’s response? “Shh, don’t tell them!”

The post Java Language Architect Brian Goetz on How Java Could Evolve appeared first on The New Stack.


How to Get The GIMP Working With macOS Tahoe (and What Happened)


I know, I know… The GIMP is far better on GNOME, where it belongs, but I use macOS for my job and I still want it on my devices: I use Inkscape for SVGs as well. I'm a big fan of Adobe, not such a fan of its prices, so I don't use its products unless my company provides them to me. Spoiler: it doesn't.

Again, I’m a full-stack web developer, so my use of GIMP for a living is limited, but I still need to crop, resize, and export images whenever I work on the front-end side of a project. Sometimes I also use it for image optimizations. Being on macOS, installing GIMP is easy, but it doesn’t integrate with the UI as it does with GNOME.

Whether it's on Linux or BSD, having it installed alongside GNOME makes the most sense, since above all it's GNOME's default image editor: of course, over the years GIMP became more than that, but it still has many things in common with the desktop environment, starting with the GTK libraries and Adwaita.

That's why, a couple of months ago, some advanced GIMP users on macOS discovered that the UI turned blank with the latest stable release. It seems the same issue had already been found earlier, but I didn't know anything about it until the Tahoe upgrade. Yes, these days I only use stable software on my devices.

This is funny, if you know me, because I used to live on the edge, installing alpha operating systems and applications. But this doesn’t have much to do with the topic. Long story short, stable GIMP doesn’t work with macOS Tahoe: and I don’t think they will fix it, so you must opt for a different solution.

Luckily, a solution does exist, and it's called GIMP 3.1.4, the latest development release: by replacing a stable install with this new one, you'll be back to using GIMP without issues. It fixes the empty UI seen in version 3.0.4, but it's an unstable version, so it may have other problems I don't know about yet.

I don't know if we'll have to wait for the next stable release to get the UI fixed, or whether we'll only have it in the 3.1.x versions. I can only confirm that I got GIMP working again by installing 3.1.4: to date, it's the only way I know of to solve the problem. Did you find a different solution? Let me know in the comments.

I really love GIMP and I think I'll keep using it on every operating system. I used to have it installed on Windows as well, where it often had similar problems with the GTK toolkit. Adwaita, which I mentioned, is indeed the GNOME 3.x design system. So the best setup would be on Linux.
