The Problem with AI “Artists”


A performance reel. Instagram, TikTok, and Facebook accounts. A separate contact email for enquiries. All staples of an actor’s website.

Except these all belong to Tilly Norwood, an AI “actor.”

This creation represents one of the newer AI trends: AI “artists” that eerily resemble real humans (which, according to their creators, is the goal). Eline Van der Velden, the creator of Tilly Norwood, has said that she is focused on making the creation “a big star” in the “AI genre,” a distinction that has been used to justify AI-created artists as not taking jobs away from real actors. Van der Velden has explicitly said that Tilly Norwood was made to be photorealistic to provoke a reaction, and it’s working: talent agencies are reportedly looking to represent it.

And it’s not just Hollywood. Major producer Timbaland has created his own AI entertainment company and launched his first “artist,” TaTa, with music created by uploading his own demos to the platform Suno, reworking them with AI, and adding lyrics afterward.

But while technologically impressive, the emergence of AI “artists” risks devaluing creativity as a fundamentally human act, and in the process, dehumanizing and “slopifying” creative labor.

Heightening Industry at the Expense of Creativity

The generative AI boom is deeply tied to the creative industries, with profit-hungry machines monetizing every movie, song, and TV show as much as they possibly can. This, of course, predates AI “artists,” but AI is making the agenda even clearer. One of the motivations behind the Writers Guild of America strike of 2023 was countering the threat of studios replacing writers with AI.

For industry power players, employing AI “artists” means less reliance on human labor—cutting costs and making it possible to churn out products at a much higher rate. And in an industry already known for poor working conditions, there’s significant appeal in dealing with a creation they do not “need” to treat humanely.

Technological innovation has always carried the risk of eliminating certain jobs, but AI “artists” are a whole new monster for the industry. It isn’t just about speeding up processes or certain tasks but about excising human labor from the product. In an industry where it is already notoriously hard to make a living as a creative, opportunities will become even scarcer, and that’s not even looking at the consequences for the art itself.

The AI “Slop” Takeover

The pursuit of money over quality has always prevailed in the industry; Netflix and Hallmark aren’t making all those Christmas romantic comedies with the same plot because they’re original stories, nor are studios embracing an endless stream of reboots and remakes of successful art because it would be visionary to remake a ’90s movie with a 20-something Hollywood star. But these productions still have their audiences, and in the end, they still require creative output and labor to be made.

Now, imagine that instead of these rom-coms cluttering Netflix, we have AI-generated movies and TV shows, starring creations like Tilly Norwood, and the soundtrack comes from a voice, lyrics, and production that was generated by AI.

The whole model of generative AI depends on regurgitating and recycling existing data. Admittedly, it’s a technological feat that Suno can generate a song and Sora can convert text to video; what it is NOT is a creative renaissance. AI-generated writing is already taking over, from essays in the classroom to motivational LinkedIn posts, and in addition to ruining the em dash, it consistently puts out low-quality, robotic material. AI “artists” “singing” and “acting” are the next uncanny destroyers of quality, and they will likely alienate audiences, who turn to art to feel connection.

Art has a long tradition of being used as resistance and a way of challenging the status quo; protest music has been a staple of culture—look no further than civil rights and antiwar movements in the United States in the 1960s. It is so powerful that there are attempts by political actors to suppress it and punish artists. Iranian filmmaker Jafar Panahi, who won the Palme d’Or at the Cannes Film Festival for It Was Just an Accident, was sentenced to prison in absentia in Iran for making the film, and this is not the first punishment he has received for his films. Will studios like Sony or Warner Bros. release songs or movies like these if they can just order marketing-compliant content from a bot?

A sign during the writers’ strike famously read, “ChatGPT doesn’t have childhood trauma.” An AI “artist” may be able to carry out a creator’s agenda to a limited extent, but what value does it have coming from a generated creation with no lived experiences or emotions, especially when those are what drive people to make art in the first place?

To top it off, generative AI is not a neutral entity by any means; we’re in for a lot of stereotypical and harmful material, especially without the input of real artists. The fact that most AI “artists” are portrayed as young women with specific physical features is not a coincidence. It’s an intensification of the longstanding trend of making virtual assistants “female,” from ELIZA to Siri to Alexa to AI “artists” like Tilly Norwood and Timbaland’s TaTa, which reinforces the trope of relegating women to “helper” roles designed to cater to the needs of the user, a clear manifestation of human biases.

Privacy and Plagiarism

Ensuring that “actors” and “singers” look and sound as human as possible in films, commercials, and songs requires that they be trained on real-world data. Tilly Norwood creator Van der Velden has defended herself by claiming that she only used licensed data and went through an extensive research process, looking at thousands of images for her creation. But licensing data does not automatically make taking it ethical; look at Reddit, which signed a multimillion-dollar contract allowing Google to train its AI models on Reddit data. The vast data of Reddit users is not protected, just monetized by the organization.

AI expert Ed Newton-Rex has discussed how generative AI consistently steals from artists, and has proposed measures to ensure that models are trained only on licensed or public-domain data. There are ways for individual artists to protect their online work: including watermarks, opting out of data collection, and taking measures to block AI bots. While these strategies can make data more secure, given how vast generative AI is, they’re probably more a safeguard than a solution.

Jennifer King from Stanford’s Human-Centered Artificial Intelligence has provided some ways to protect data and personal information more generally, such as making the “opt out” the default option for data sharing, and for legislation that focuses not just on transparency of AI use but on its regulation—likely an uphill battle with the Trump administration trying to take away state AI regulations.

This is the ethical home that AI “artists” are living in. Think of all the faces of real people that went into making Tilly Norwood. A company may have licensed that data for use, but the artists whose “data” is their likeness and creativity likely didn’t (at least directly). In this light, AI “artists” are a form of plagiarism.

Undermining Creativity as Fundamentally Human

Looking at how art was transformed by technology before generative AI, it could be argued that this is simply the next step in a process of change rather than something to be concerned about. But photography, animation, typewriters, and all the other inventions used to justify the onslaught of AI “artists” did not eliminate human creativity. Photography was not a replacement for painting but a new art form, even if it did worry painters. There’s a difference between having a new, experimental way of doing something and extensively using data (particularly data taken without consent) to make creations that blur the lines of what is and isn’t human. For instance, Rebecca Xu, a professor of computer art and animation at Syracuse who teaches an “AI in Creative Practice” course, argues that artists can incorporate AI into their creative process. But as she warns, “AI offers useful tools, but you still need to produce your own original work instead of using something generated by AI.”

It’s hard to understand exactly how AI “artists” benefit human creativity, which is a fundamental part of our expression and intellectual development. Just look at the cave art from the Paleolithic era. Even humans 30,000 years ago who didn’t have secure food and shelter were making art. Unlike other industries, art did not come into existence purely for profit.

The arts are already undervalued economically, as is evident from the lack of funding in schools. Today, a kid who may want to be a writer will likely be bombarded with marketing from generative AI platforms like ChatGPT to use these tools to “write” a story. The result may resemble a narrative, but there’s not necessarily any creativity or emotional depth that comes from being human, and more importantly, the kid didn’t actually write. Still, the very fact that this AI-generated story is now possible curbs the industrial need for human artists.

How Do We Move Forward?

Though profit-hungry power players may be embracing AI “artists,” the same cannot be said for public opinion. The vast majority of artists and audiences alike are not interested in AI-generated art, much less AI “artists.” The power of public opinion shouldn’t be underestimated; the writers’ strike is probably the best example of that.

Collective mobilization will thus likely be key to challenging AI “artists” and the studios, record labels, and other members of the creative industry’s ruling class that stand behind them. There have been wins already, such as the Writers Guild of America strike of 2023, which resulted in a contract stipulating that studios can’t use AI as a credited writer. And because music, film, and television are full of stars, often with financial and cultural power, the resistance being voiced in the media could benefit from more actionable steps; for example, a prominent production company run by an A-list actor could pledge not to use any AI-generated “artists” in its work.

Beyond industry and labor, pushing back on the notion that art is unimportant unless you’re a “star” can also play a significant role in changing the conversation. This means funding art programs in schools and libraries so that young people know that art is something they can do, something that is fun and brings joy, not necessarily to make money or a living but to express themselves and engage with the world.

The fundamental risk of AI “artists” is that they will become so commonplace that it will feel pointless to pursue art, and that much of the art we consume will lose its fundamentally human qualities. But human-made art and human artists will never become obsolete—that would require fundamentally eliminating human impulses and the existence of human-made art. The challenge is making sure that artistic creation is not relegated to the margins of life.




Beyond the AI Fear—Discovering What Makes Scrum Masters Truly Irreplaceable | Mohini Kissoon


Mohini Kissoon: Beyond the AI Fear—Discovering What Makes Scrum Masters Truly Irreplaceable

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

 

"The real challenge isn't whether AI will replace Scrum Masters. It's whether we understand what parts of our work are actually irreplaceable—and whether we're spending our time on those things." - Mohini Kissoon

 

Mohini is wrestling with a challenge that's coming up repeatedly in conversations with Agile coaches and Scrum Masters: the anxiety around AI and what it means for their role. She hears questions like "Will AI replace Scrum Masters?" but believes we're asking the wrong question. The real challenge is understanding which parts of our work are truly irreplaceable and demonstrating value in those areas. 

People might think that AI can generate sprint reports and analyze team metrics—so why do we need Scrum Masters? But what's missing is the human touch: reading the room, sensing unspoken tension, building trust through presence, and asking questions that shift perspectives. Mohini and Vasco explore how the Scrum Master role may have accidentally become defined by process and structure rather than impact on teams. 

The solution lies in showing value through concrete metrics—demonstrating improvement in team happiness, flow, cycle time, and lead time. Scrum Masters need to use storytelling and create history that shows the before and after. They should leverage champions from teams they've worked with to share testimonials. We are like diplomats: we work through influence and need allies both inside and outside the team to support our work.

 

Self-reflection Question: If AI could handle all the administrative and mechanical aspects of your Scrum Master role tomorrow, what would you spend your time doing—and are you already investing enough time in those irreplaceable human elements?

 

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

 

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

 

Buy Now on Amazon

 

[The Scrum Master Toolbox Podcast Recommends]

 

About Mohini Kissoon

 

Mohini is an Agility Lead with over eight years of experience as a Scrum Master. She is passionate about building high-performing, self-managing teams that delight customers. Mohini improves flow and collaboration across systems, meets teams where they are, and co-creates environments enabling adaptability, meaningful interactions, and continuous improvement and learning.

 

You can link with Mohini Kissoon on LinkedIn.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260114_Mohini_Kissoon_W.mp3?dest-id=246429

970: Why Did Anthropic Buy Bun?


Wes and Scott answer your questions about whether Git GUIs beat the terminal, balancing accessibility with experimental web projects, blocking malicious traffic, smart home setups, why Anthropic bought Bun, navigating tricky team dynamics, and more!

Show Notes

Sick Picks

Shameless Plugs

Hit us up on Socials!

Syntax: X Instagram Tiktok LinkedIn Threads

Wes: X Instagram Tiktok LinkedIn Threads

Scott: X Instagram Tiktok LinkedIn Threads

Randy: X Instagram YouTube Threads





Download audio: https://traffic.megaphone.fm/FSI6251294774.mp3

Smashing Animations Part 8: Theming Animations Using CSS Relative Colour


I’ve recently refreshed the animated graphics on my website with a new theme and a group of pioneering characters, putting into practice plenty of the techniques I shared in this series. A few of my animations change appearance when someone interacts with them or at different times of day.

The colours in the graphic atop my blog pages change from morning until night every day. Then, there’s the snow mode, which adds chilly colours and a wintery theme, courtesy of an overlay layer and a blending mode.

While working on this, I started to wonder whether CSS relative colour values could give me more control while also simplifying the process.

Note: In this tutorial, I’ll focus on relative colour values and the OKLCH colour space for theming graphics and animations. If you want to dive deep into relative colour, Ahmad Shadeed created a superb interactive guide. As for colour spaces, gamuts, and OKLCH, our own Geoff Graham wrote about them.

For classic animation studios like Hanna-Barbera, repeated use of elements was key. Backgrounds were reused whenever possible, with zooms and overlays helping construct new scenes from the same artwork. It was born of necessity, but it also encouraged thinking in terms of series rather than individual scenes.

The Problem With Manually Updating Colour Palettes

Let’s get straight to my challenge. In Toon Titles like this one — based on the 1959 Yogi Bear Show episode “Lullabye-Bye Bear” — and my work generally, palettes are limited to a select few colours.

I create shades and tints from what I call my “foundation” colour to expand the palette without adding more hues.

In Sketch, I work in the HSL colour space, so this process involves increasing or decreasing the lightness value of my foundation colour. Honestly, it’s not an arduous task — but choosing a different foundation colour requires creating a whole new set of shades and tints. Doing that manually, again and again, quickly becomes laborious.

I mentioned the HSL — H (hue), S (saturation), and L (lightness) — colour space, but that’s just one of several ways to describe colour.

RGB — R (red), G (green), B (blue) — is probably the most familiar, at least in its Hex form.

There’s also LAB — L (lightness), A (green–red), B (blue–yellow) — and the newer, but now widely supported LCH — L (lightness), C (chroma), H (hue) — model in its OKLCH form. With LCH — specifically OKLCH in CSS — I can adjust the lightness value of my foundation colour.

Or I can alter its chroma. LCH chroma and HSL saturation both describe the intensity or richness of a colour, but they do so in different ways. LCH gives me a wider range and more predictable blending between colours.

I can also alter the hue to create a palette of colours that share the same lightness and chroma values. In both HSL and LCH, the hue spectrum starts at red, moves through green and blue, and returns to red.

Why OKLCH Changed How I Think About Colour

Browser support for the OKLCH colour space is now widespread, even if design tools — including Sketch — haven’t caught up. Fortunately, that shouldn’t stop you from using OKLCH. Browsers will happily convert Hex, HSL, LAB, and RGB values into OKLCH for you. You can define a CSS custom property with a foundation colour in any space, including Hex:

/* Foundation colour */
--foundation: #5accd6;

Any colours derived from it will be converted into OKLCH automatically:

--foundation-light: oklch(from var(--foundation) [...]);
--foundation-mid: oklch(from var(--foundation) [...]);
--foundation-dark: oklch(from var(--foundation) [...]);
Relative Colour As A Design System

Think of relative colour as saying: “Take this colour, tweak it, then give me the result.” There are two ways to adjust a colour: absolute changes and proportional changes. They look similar in code, but behave very differently once you start swapping foundation colours. Understanding that difference is what can turn using relative colour into a system.

/* Foundation colour */
--foundation: #5accd6;

For example, the lightness value of my foundation colour is 0.7837, while a darker version has a value of 0.5837. To calculate the difference, I subtract the lower value from the higher one and apply the result using a calc() function:

--foundation-dark: 
  oklch(from var(--foundation)
  calc(l - 0.20) c h);

To achieve a lighter colour, I add the difference instead:

--foundation-light:
  oklch(from var(--foundation)
  calc(l + 0.10) c h);

Chroma adjustments follow the same process. To reduce the intensity of my foundation colour from 0.1035 to 0.0035, I subtract one value from the other:

oklch(from var(--foundation)
l calc(c - 0.10) h);

To create a palette of hues, I calculate the difference between the hue value of my foundation colour (200) and my new hue (260):

oklch(from var(--foundation)
l c calc(h + 60));

Those calculations are absolute. When I subtract a fixed amount, I’m effectively saying, “Always subtract this much.” The same applies when adding fixed values:

calc(c - 0.10)
calc(c + 0.10)

I learned the limits of this approach the hard way. When I relied on subtracting fixed chroma values, colours collapsed towards grey as soon as I changed the foundation. A palette that worked for one colour fell apart for another.

Multiplication behaves differently. When I multiply chroma, I’m telling the browser: “Reduce this colour’s intensity by a proportion.” The relationship between colours remains intact, even when the foundation changes:

calc(c * 0.10)
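
To see the difference concretely, here’s a minimal sketch (the values are hypothetical) applying both approaches to a deliberately muted colour:

/* A muted foundation: chroma is only 0.06 */
--muted: oklch(0.70 0.06 200);

/* Absolute: 0.06 - 0.10 clamps to zero, and the colour collapses to grey */
--muted-soft-absolute: oklch(from var(--muted) l calc(c - 0.10) h);

/* Proportional: 0.06 * 0.5 = 0.03, still recognisably the same hue */
--muted-soft-proportional: oklch(from var(--muted) l calc(c * 0.5) h);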
My Move It, Scale It, Rotate It Rules
  • Move lightness (add or subtract),
  • Scale chroma (multiply),
  • Rotate hue (add or subtract degrees).

I scale chroma because I want intensity changes to stay proportional to the base colour. Hue relationships are rotational, so multiplying hue makes no sense. Lightness is perceptual and absolute — multiplying it often produces odd results.
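
Put together, a minimal sketch of those three rules as a derived palette might look like this (the specific offsets are illustrative):

/* Move lightness */
--foundation-shade: oklch(from var(--foundation) calc(l - 0.15) c h);
--foundation-tint: oklch(from var(--foundation) calc(l + 0.10) c h);

/* Scale chroma */
--foundation-soft: oklch(from var(--foundation) l calc(c * 0.5) h);

/* Rotate hue */
--foundation-accent: oklch(from var(--foundation) l c calc(h + 60));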

From One Colour To An Entire Theme

Relative colour allows me to define a foundation colour and generate every other colour I need — fills, strokes, gradient stops, shadows — from it. At that point, colour stops being a palette and starts being a system.

SVG illustrations tend to reuse the same few colours across fills, strokes, and gradients. Relative colour lets you define those relationships once and reuse them everywhere — much like animators reused backgrounds to create new scenes.

Change the foundation colour once, and every derived colour updates automatically, without recalculating anything by hand. Outside of animated graphics, I could use this same approach to define colours for the states of interactive elements such as buttons and links.
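
As a sketch of that idea (the selector and the exact offsets are hypothetical), a button’s states can all hang off the same foundation colour:

.button {
  background: var(--foundation);
}

.button:hover {
  /* Slightly lighter on hover */
  background: oklch(from var(--foundation) calc(l + 0.07) c h);
}

.button:active {
  /* Slightly darker when pressed */
  background: oklch(from var(--foundation) calc(l - 0.07) c h);
}

.button:disabled {
  /* Washed out, but still the same hue */
  background: oklch(from var(--foundation) l calc(c * 0.2) h);
}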

The foundation colour I used in my “Lullabye-Bye Bear” Toon Title is a cyan-looking blue. The background is a radial gradient between my foundation and a darker version.

To create alternative versions with entirely different moods, I only need to change the foundation colour:

--foundation: #5accd6;
--grad-end: var(--foundation);
--grad-start: oklch(from var(--foundation)
  calc(l - 0.2357) calc(c * 0.833) h);

To bind those custom properties to my SVG gradient without duplicating colour values, I replaced hard-coded stop-color values with inline styles:

<defs>
  <radialGradient id="bg-grad" […]>
    <stop offset="0%" style="stop-color: var(--grad-end);" />
    <stop offset="100%" style="stop-color: var(--grad-start);" />
  </radialGradient>
</defs>
<path fill="url(#bg-grad)" d="[...]"/>

Next, I needed to ensure that my Toon Text always contrasts with whatever foundation colour I choose. A 180deg hue rotation produces a complementary colour that certainly pops — but can vibrate uncomfortably:

.text-light {
  fill: oklch(from var(--foundation)
    l c calc(h + 180));
}

A 90° shift produces a vivid secondary colour without being fully complementary:

.text-light {
  fill: oklch(from var(--foundation)
    l c calc(h - 90));
}

My recreation of Quick Draw McGraw’s 1959 Toon Title “El Kabong“ uses the same techniques but with a more varied palette. For example, there’s another radial gradient between the foundation colour and a darker shade.

The building and tree in the background are simply different shades of the same foundation colour. For those paths, I needed two additional fill colours:

.bg-mid {
  fill: oklch(from var(--foundation)
    calc(l - 0.04) calc(c * 0.91) h);
}

.bg-dark {
  fill: oklch(from var(--foundation)
    calc(l - 0.12) calc(c * 0.64) h);
}
When The Foundations Start To Move

So far, everything I’ve shown has been static. Even when someone uses a colour picker to change the foundation colour, that change happens instantly. But animated graphics rarely stand still — the clue is in the name. So, if colour is part of the system, there’s no reason it can’t animate, too.

To animate the foundation colour, I first need to split it into its OKLCH channels: lightness, chroma, and hue. There’s an important extra step, though: I need to register those values as typed custom properties. What does that mean?

By default, a browser doesn’t know whether a CSS custom property value represents a colour, length, number, or something else entirely. That often means values can’t be interpolated smoothly during animation and instead jump from one value to the next.

Registering a custom property tells the browser the type of value it represents and how it should behave over time. In this case, I want the browser to treat my colour channels as numbers so they can be animated smoothly.

@property --f-l {
  syntax: "<number>";
  inherits: true;
  initial-value: 0.40;
}

@property --f-c {
  syntax: "<number>";
  inherits: true;
  initial-value: 0.11;
}

@property --f-h {
  syntax: "<number>";
  inherits: true;
  initial-value: 305;
}

Once registered, these custom properties behave like native CSS. The browser can interpolate them frame-by-frame. I then rebuild the foundation colour from those channels:

--foundation: oklch(var(--f-l) var(--f-c) var(--f-h));

This makes the foundation colour animatable, just like any other numeric value. Here’s a simple “breathing” animation that gently shifts lightness over time:

@keyframes breathe {
  0%, 100% { --f-l: 0.36; }
  50% { --f-l: 0.46; }
}

.toon-title {
  animation: breathe 10s ease-in-out infinite;
}

Because every other colour in fills, gradients, and strokes is derived from --foundation, they all animate together, and nothing needs to be updated manually.
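
The same idea extends to the other channels. As a sketch (the keyframe values are hypothetical), a very slow hue rotation could drive the kind of morning-to-night shift I described earlier, with every derived colour following along:

@keyframes day-cycle {
  0%, 100% { --f-h: 85; } /* warm morning light */
  40% { --f-h: 200; }     /* cool afternoon */
  70% { --f-h: 305; }     /* deep evening */
}

.toon-title {
  /* The two animations drive different registered channels */
  animation:
    breathe 10s ease-in-out infinite,
    day-cycle 86400s linear infinite; /* one full cycle per day */
}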

One Animated Colour, Many Effects

At the start of this process, I wondered whether CSS relative colour values could offer more possibilities while also making them simpler to implement. I recently added a new gold mine background to my website’s contact page, and the first iteration included oil lamps that glow and swing.

I wanted to explore how animating CSS relative colours could make the mine interior more realistic by tinting it with colours from the lamps. I wanted them to affect the world around them, the way real light does. So, rather than animating multiple colours, I built a tiny lighting system that animates just one colour.

My first task was to slot an overlay layer between the background and my lamps:

<path 
  id="overlay"
  fill="var(--overlay-tint)" 
  [...] 
  style="mix-blend-mode: color"
/>

I used mix-blend-mode: color because that tints what’s beneath it while preserving the underlying luminance. As I only want the overlay to be visible when animations are turned on, I made the overlay opt-in:

.svg-mine #overlay {
  display: none;
}

@media (prefers-reduced-motion: no-preference) {
  .svg-mine[data-animations=on] #overlay {
    display: block;
    opacity: 0.5;
  }
}

The overlay was in place, but not yet connected to the lamps. I needed a light source. My lamps are simple, and each one contains a circle element that I blurred with a filter. The filter produces a very soft blur over the entire circle.

<filter id="lamp-glow-1" x="-120%" y="-120%" width="340%" height="340%">
  <feGaussianBlur in="SourceGraphic" stdDeviation="56"/>
</filter>

Instead of animating the overlay and lamps separately, I animate a single “flame” colour token and derive everything else from that. First, I register three typed custom properties for OKLCH channels:

@property --fl-l {
  syntax: "<number>"; 
  inherits: true;
  initial-value: 0.86;
}
@property --fl-c {
  syntax: "<number>";
  inherits: true;
  initial-value: 0.12;
}
@property --fl-h {
  syntax: "<number>";
  inherits: true;
  initial-value: 95;
}

I animated those channels, deliberately pushing a few frames towards orange so the flicker reads clearly as firelight:

@keyframes flame {
  0%, 100% { --fl-l: 0.86; --fl-c: 0.12; --fl-h: 95; }
  6% { --fl-l: 0.91; --fl-c: 0.10; --fl-h: 92; }
  12% { --fl-l: 0.83; --fl-c: 0.14; --fl-h: 100; }
  18% { --fl-l: 0.88; --fl-c: 0.11; --fl-h: 94; }
  24% { --fl-l: 0.82; --fl-c: 0.16; --fl-h: 82; }
  30% { --fl-l: 0.90; --fl-c: 0.12; --fl-h: 90; }
  36% { --fl-l: 0.79; --fl-c: 0.17; --fl-h: 76; }
  44% { --fl-l: 0.87; --fl-c: 0.12; --fl-h: 96; }
  52% { --fl-l: 0.81; --fl-c: 0.15; --fl-h: 102; }
  60% { --fl-l: 0.89; --fl-c: 0.11; --fl-h: 93; }
  68% { --fl-l: 0.83; --fl-c: 0.16; --fl-h: 85; }
  76% { --fl-l: 0.91; --fl-c: 0.10; --fl-h: 91; }
  84% { --fl-l: 0.85; --fl-c: 0.14; --fl-h: 98; }
  92% { --fl-l: 0.80; --fl-c: 0.17; --fl-h: 74; }
}

Then I scoped that animation to the SVG, so the shared variables are available to both the lamps and my overlay:

@media (prefers-reduced-motion: no-preference) {
  .svg-mine[data-animations=on] {
    animation: flame 3.6s infinite linear;
    isolation: isolate;

    /* Build a flame colour from animated channels */
    --flame: oklch(var(--fl-l) var(--fl-c) var(--fl-h));

    /* Lamp colour derived from flame */
    --lamp-core: oklch(from var(--flame) calc(l + 0.05) calc(c * 0.70) h);

    /* Overlay tint derived from the same flame */
    --overlay-tint: oklch(from var(--flame)
      calc(l + 0.06) calc(c * 0.65) calc(h - 10));
  }
}

Finally, I applied those derived colours to the glowing lamps and the overlay they affect:

@media (prefers-reduced-motion: no-preference) {
  .svg-mine[data-animations=on] #mine-lamp-1 > circle,
  .svg-mine[data-animations=on] #mine-lamp-2 > circle {
    fill: var(--lamp-core);
  }

  .svg-mine[data-animations=on] #overlay {
    display: block;
    fill: var(--overlay-tint);
    opacity: 0.5;
  }
}

When the flame shifts toward orange, the lamps warm up, and the scene warms with them. When the flame cools, everything settles together. The best part is that none of it has to be kept in sync by hand. If I change the foundation colour or tweak the flame animation ranges, the entire lighting system updates together.

You can see the final result on my website.

Reuse, Repurpose, Revisited

Those Hanna-Barbera animators were forced to repurpose elements out of necessity, but I reuse colours because it makes my work more consistent and easier to maintain. CSS relative colour values allow me to:

  • Define a single foundation colour,
  • Describe how other colours relate to it,
  • Reuse those relationships everywhere, and
  • Animate the system by changing one value.

Relative colour doesn’t just make theming easier. It encourages a way of thinking where colour, like motion, is intentional — and where changing one value can transform an entire scene without rewriting the work beneath it.




SQL Prompt Product Updates – January 2026


SQL Prompt’s January release brings support for Microsoft Fabric and an exciting new preview feature.

Microsoft Fabric support

SQL Prompt v11.3.2 now supports Microsoft Fabric, extending autocomplete suggestions and formatting rules to Fabric-specific queries.

This includes support for key Fabric Data Warehouse statements such as COPY INTO, OPENROWSET BULK, CTAS (Create Table As Select), and MERGE, along with Fabric-specific implementations of CREATE/ALTER TABLE, statistics commands, and query hints. See our documentation for full details.

AI Code Completion (preview)

SQL Prompt v11.3.0 introduces AI code completion as a preview feature, leveraging the AI to generate multi-line code blocks based on your query context and natural language comments.

The feature provides intelligent suggestions for complex clauses, join conditions, and entire code blocks while matching your existing formatting conventions, and it can even generate SQL directly from plain English comments in your editor (we recommend this approach if you are starting out with a blank query window).

To use AI code completion, enable it in SQL Prompt Options under Prompt AI > AI, ensure code suggestions are turned on under Options > Behavior, and accept suggestions using the Tab key or manually trigger them with Ctrl+Alt+Up Arrow.

Please note that this is an experimental feature and may change or be removed in future releases. See our documentation for full details.

Try the new functionality

If you have an active subscription or a supported license for SQL Prompt, SQL Toolbelt Essentials, or SQL Toolbelt, you can download the latest version today. Please note: SQL Prompt’s AI-powered features are available exclusively with an active subscription and are not included with perpetual licenses.

Don’t have an active subscription? You can buy online to experience Microsoft Fabric support and AI Code Completion (preview).

We hope you enjoy using the latest updates. As always, we’d love to hear your feedback. Please share any insights via the feedback button within the product or email us on sqlprompt-feedback@red-gate.com.

The post SQL Prompt Product Updates – January 2026 appeared first on Redgate.


Choosing a Cross-Platform Strategy for .NET: MAUI vs Uno


Building a cross-platform app usually starts with a deceptively simple goal: one team, one codebase, multiple targets. The reality is that your framework choice shapes everything from the UI architecture, to the delivery speed, testing strategy, hiring, and how much “platform weirdness” you’ll be living with.

There are plenty of cross-platform options, including React Native, Ionic, and NativeScript, all of which are fundamentally web-based or hybrid UI stacks. However, if your goal is to stay all-in on C#/.NET for the application UI and core, the two primary options to evaluate are .NET MAUI and Uno Platform.

This post is a practical guide to that choice, and if you’re building a serious product that needs broad reach, there’s a strong case that Uno should be on your shortlist early.

Start By Defining What You’re Optimizing For

Most teams are making tradeoffs whether they say it out loud or not. Are you trying to share business logic but accept platform-specific UI? Are you trying to share as much UI as possible across devices? Do you need a native look and feel, or do you need a consistent design system everywhere?

A quick way to clarify this is to write down your “non-negotiables.” For example:

  • Which platforms must ship first, and which can wait?
  • Is web a real target or not?
  • Do you need offline support?
  • Are there deep device integrations (camera, Bluetooth, push notifications)?
  • Should the UI follow each platform’s conventions, or match your design system?

Once those answers are clear, the MAUI vs Uno decision gets a little easier. Let’s look at four questions that will help you decide even more clearly between these two platforms.

The Questions That Actually Decide MAUI Vs Uno

In practice, the Uno vs MAUI debate collapses into a handful of questions:

1. Does web matter as a first-class target?
If WebAssembly is part of the product story, that should heavily influence your decision toward Uno.

2. How broad does your platform reach need to be over the next 12–24 months?
Many teams start with mobile and Windows, then discover web or Linux demand later. Others know from day one they need wide coverage provided by Uno.

3. Do you care more about native UI conventions or consistent visuals across platforms?
Both are valid, but they lead to different architecture and design decisions. If you want a more native look and behavior, MAUI can be a good fit; if you want more consistent visuals across platforms (especially including web), Uno often comes out ahead.

4. How complex is your UI and workflow surface area?
Dense forms, dashboards, heavy data entry, and virtualization make tradeoffs show up faster. This is where consistency in layout/styling, predictable rendering, and solid performance patterns matter, and where Uno often shines for “serious app UI,” while simpler UI surfaces may not justify anything beyond MAUI’s more straightforward controls approach.

If you’re scoring high on web, broad reach, and UI consistency (especially in enterprise apps) you’ll often find yourself leaning Uno.

When .NET MAUI Is A Great Fit

MAUI is a solid option, especially for teams who want to stay close to the mainstream Microsoft path. MAUI tends to fit well when:

  • Your scope is primarily iOS/Android (plus maybe Windows desktop), and web is not central
  • You prefer a native control approach and platform conventions
  • Your team is comfortable planning for some platform-specific UI work

Where MAUI shines is its end-to-end Microsoft story and a developer experience that feels familiar to many .NET teams. The key to success is going in with eyes open: cross-platform UI almost always produces edge cases, and the teams that do best are the ones that intentionally contain those escape hatches.

Why Uno Often Deserves To Be The Starting Point In 2026

If MAUI is a strong “mainstream Microsoft” path, Uno is a strong “maximum reach .NET” path. For many product teams building long-lived apps, Uno’s strengths line up with modern requirements—especially around web and broader platform coverage.

Uno is particularly compelling when:

  • Web is a real target (WebAssembly matters to your roadmap)
  • You want broad platform reach without treating web as a second-class citizen
  • Your app has serious workflow complexity (forms, dashboards, dense UI)
  • You care about UI consistency and a predictable design system across devices

One of the most useful things Uno does is make a key tradeoff explicit: where you want the app to feel “most native,” and where you want it to behave consistently everywhere. Instead of discovering inconsistencies late in the cycle, you can choose intentionally early and align design, testing, and performance expectations around that choice.

One more reason Uno is worth a fresh look in 2026 is the platform’s push into AI-assisted developer workflows. They’ve been shipping features aimed at shortening the UI build loop—things like design-to-code and tooling that help you go from intent to working UI faster. If you’re curious, Uno has a great overview on their website.

The Decision Inside The Decision: Native UI Vs Consistent UI

A lot of teams think they’re choosing a framework when they’re really choosing a product philosophy.

If you lean toward native UI, you’re optimizing for:

  • platform conventions and “it feels right” UX
  • OS-level behaviors
  • closer alignment with native platform UI patterns

If you lean toward consistency, you’re optimizing for:

  • a predictable design system across devices
  • fewer platform-specific UI surprises
  • easier cross-platform QA and visual validation

A simple rule of thumb is that consumer apps often benefit from native conventions, while enterprise apps often benefit from consistency and predictability. It’s not a law, but it’s a useful starting point.

A Fast Way To Decide Without Debating For Months

If the choice is still unclear, don’t argue about it, prototype it. A two-week spike can settle most questions quickly. Build a thin vertical slice you’d ship in real life:

  • Authentication
  • One “real” complex screen (forms + validation + a list that needs virtualization)
  • One device integration (camera or push notifications)
  • Basic offline caching
  • Telemetry and crash reporting

Then measure what actually matters: development loop speed, UI fidelity, performance on real devices, build/release friction, and how often you needed a platform-specific workaround.

Closing Thoughts

Both MAUI and Uno can help you succeed with your cross-platform project. The best choice is the one that matches your platform targets, UX goals, team strengths, and maintenance horizon. That said, if your product roadmap includes web as a target, broad reach across platforms, and serious application UI, it’s hard to ignore the case for Uno Platform as a .NET-first foundation.

This post pairs with our Blue Blazes podcast episode featuring Sam Basu from Uno Platform, where we dig into real-world tradeoffs and where cross-platform .NET is headed. Check it out now on video or podcast.

The post Choosing a Cross-Platform Strategy for .NET: MAUI vs Uno appeared first on Trailhead Technology Partners.
