Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Wood Burning Is Reintroducing Lead Pollution Into the Air, Scientists Find

An anonymous reader quotes a report from The Guardian: Wood heating is reintroducing lead into the air of local communities and homes, a systematic investigation by academics has found. Overwhelming evidence of lead's neurotoxicity meant the metal was banned as an additive in petrol more than 25 years ago.

The research by academics from the University of Massachusetts Amherst began by analysing samples of particle pollution from five suburban and rural towns in the north-east US. They looked for tiny particles of potassium that are given off when wood is burned and also particles containing lead. Samples from seven winters revealed associations between potassium and lead. When there were more wood-burning particles in a daily sample, there was more lead in the air, with clear straight-line relationships in four of the five towns.

The project was extended to 22 other towns across the US. The relationships between lead and potassium varied from place to place, being strongest in the Rocky Mountains. By factoring in the effects of temperature, moderate to strong associations in their analysis strengthened the conclusion that the extra lead came from wood burning. The lead concentrations were less than the US legal limits, but any exposure to the metal is harmful. [...]

Although less than legal limits, lead particles are routinely measured in UK cities in winter when people are also burning wood. This is normally attributed to waste wood covered with old lead paint, but the UMass Amherst study suggests the metal is coming from the wood itself. This means that any wood burning could increase exposure in neighborhoods and at home.

Tricia Henegan, a PhD student at UMass Amherst and the first author on the research, said: "The most logical answer [to the question of how lead ends up in wood] is that it comes from uptake in the soil, probably riding along with the nutrients and water that trees need. Once in the tree, it deposits in the tree's tissues and remains until that tree is burned." Other research has found that it can then become part of the smoke. "The use of wood as an energy source is a relic of the past, one that should not be relived if given a choice. Although wood fuel use can feel nostalgic, it does have negative consequences on air quality, and therefore public health."

Read more of this story at Slashdot.

Read the whole story
alvinashcraft
7 hours ago
reply
Pennsylvania, USA
Share this story
Delete

30 Days of coreutils: sort


A program that has been superseded by modern GUI-based tools, but still has plenty of uses.
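For instance, a few everyday invocations (a minimal sketch using GNU coreutils sort; the sample data here is made up for illustration):

```shell
# Alphabetical sort
printf 'banana\napple\ncherry\n' | sort
# Numeric sort (-n): plain sort would place 10 before 2
printf '10\n2\n1\n' | sort -n
# Sort by the second whitespace-separated field, numerically (-k2,2n)
printf 'b 2\nc 3\na 1\n' | sort -k2,2n
# Sort and deduplicate in one pass (-u)
printf 'a\nb\na\n' | sort -u
```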


DeepSeek-V4-Flash means LLM steering is interesting again


Ever since Golden Gate Claude I’ve been fascinated with “steering”: the idea that you can guide LLM outputs by directly manipulating the activations of the model mid-flight.

DeepSeek V4 Flash

I was inspired to write this post by antirez’s recent project DwarfStar 4, which is a version of llama.cpp that’s been stripped down to run only DeepSeek-V4-Flash. What’s so special about this model? It might be what many engineers have been waiting for: a local model good enough to compete with at least the low end of frontier model agentic coding.

Since steering requires a local model, it’s now practical for many engineers to try it out for the first time. And indeed, antirez has baked steering into DwarfStar 4 as a first-class citizen. Right now it’s very rudimentary (basically just the toy “verbosity” example you can replicate via prompting), but the initial release was only eight days ago. I plan to follow this project closely.

How steering works

The basic idea behind steering is extracting a concept (like “respond tersely”) from the model’s internal brain state, then reaching in during inference and boosting the numerical activations that form that concept.

One way you might do this is to feed your model the same set of a hundred prompts twice, once with the normal prompts and once with the words "respond tersely" appended. Then measure the difference in the model's activations[1] for each prompt pair (by subtracting one activation matrix from the other). That's your "steering vector". In theory, you can go and add that to the same activation layer for any prompt and get the same effect (of the model responding tersely).
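The recipe above can be sketched in a few lines. This is a toy illustration, not a real implementation: the "activations" below are stand-in lists of floats rather than hidden states captured from an actual model, and the dimensions are tiny.

```python
# Toy sketch of the naive steering-vector recipe: average the
# (steered - plain) activation difference across prompt pairs,
# then add the result (scaled) to a fresh activation at inference time.

def mean_difference(acts_plain, acts_steered):
    """Average activation difference over prompt pairs."""
    dim = len(acts_plain[0])
    n = len(acts_plain)
    vec = [0.0] * dim
    for plain, steered in zip(acts_plain, acts_steered):
        for i in range(dim):
            vec[i] += (steered[i] - plain[i]) / n
    return vec

def apply_steering(activation, steering_vector, strength=1.0):
    """Reach in mid-inference and boost the activation along the vector."""
    return [a + strength * s for a, s in zip(activation, steering_vector)]

# Hypothetical activations for two prompt pairs (activation dim = 3).
plain   = [[0.1, 0.2, 0.0], [0.3, 0.0, 0.1]]
steered = [[0.5, 0.2, 0.4], [0.7, 0.0, 0.5]]

v = mean_difference(plain, steered)
new_activation = apply_steering([1.0, 1.0, 1.0], v, strength=2.0)
```

In a real setup the vectors would come from hooks on a specific layer of the model, and `strength` would be tuned empirically; too large a value tends to degrade output coherence.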

Another, more sophisticated way you might do this is to train a second model to extract "features" from your model's activations: patterns of behavior that seem to show up together. Then you can try to map those features back to individual concepts, and boost them in the same way. This is more or less what Anthropic is doing with sparse autoencoders[2]. It's the same principle as the naive approach, but it lets you capture deeper patterns (at the cost of being much more expensive in time, compute and expertise).
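To make the "boost a feature" step concrete, here is a hypothetical sketch that assumes the sparse autoencoder has already been trained: each feature is a direction in activation space, an activation is represented as sparse coefficients over those directions, and steering means turning up one coefficient before decoding back into activation space. The feature names and numbers are invented for illustration.

```python
# Hypothetical decode-and-boost step, assuming a trained sparse autoencoder.
# Each row of `decoder` is one learned feature's direction in activation
# space; `codes` are the sparse coefficients for a single activation.

def decode(codes, decoder):
    """Reconstruct an activation as a weighted sum of feature directions."""
    dim = len(decoder[0])
    act = [0.0] * dim
    for c, direction in zip(codes, decoder):
        for i in range(dim):
            act[i] += c * direction[i]
    return act

def boost_feature(codes, index, amount):
    """Turn up one feature's coefficient before decoding."""
    boosted = list(codes)
    boosted[index] += amount
    return boosted

# Two invented features in a 3-dimensional activation space.
decoder = [[1.0, 0.0, 1.0],   # feature 0: say, "terseness"
           [0.0, 1.0, 0.0]]   # feature 1: something unrelated
codes = [0.5, 1.0]

steered_activation = decode(boost_feature(codes, 0, 2.0), decoder)
```

The hard (and expensive) part, which this sketch skips entirely, is learning `decoder` and mapping its rows back to human-interpretable concepts.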

Why steering is interesting

Steering sounds like a cheat code. Instead of painstakingly assembling a training set that tries to push the model towards the “smart” end of the distribution in its training data, why not simply go uncover the “smart” dial in the model’s brain and turn it all the way to the right?

It also seems like a more elegant way to adjust the way models talk. Instead of fiddling with the prompt (adding or removing qualifiers like “you MUST”), couldn’t we just have a control panel of sliders like “succinctness/verbosity” or “conscientiousness/speed” and move them around directly?

Finally, it’s just cool. Watching Golden Gate Claude unwillingly drag every sentence back to the Golden Gate Bridge is as fascinating and unsettling as Oliver Sacks’ neurological anecdotes. What if your own mind was tweaked in a similar way? Would it still be you?

Why steering hasn’t been used

Why don’t we steer more, then? Why don’t ChatGPT and Claude Code already have a steering panel where you can adjust the model’s brain in real time? One reason is that steering is kind of an unfortunately “middle class” idea in AI research.

It’s beneath the big AI labs, who can manipulate their models directly without having to do awkward brain surgery mid-inference. Anthropic is working on this stuff, but largely from an interpretability and safety perspective (as far as I know). When they want a model to behave in a certain way, they don’t mess around with steering, they just train the model.

Steering is also out of reach for regular AI users like you and me[3], who use LLMs via an API and thus don't have access to the model weights or activations needed to steer the model. Only OpenAI can identify or expose steering vectors for GPT-5.5, for instance. We could do this for open-weights models, but until very recently (more on that later) there haven't been any open models strong enough to be worth doing this for.

On top of that, most basic applications of steering are outcompeted by just prompting the model. It sounds pretty impressive to be able to manipulate the model’s brain directly. But you know what else manipulates the model’s brain directly? Prompt tokens. You can exercise fairly fine-grained control over activations with steering, but you can already exercise extremely fine-grained control by tweaking the language of your prompt. In other words, there’s not much point going to the trouble to steer a model to be more verbose when you could simply ask.

Steering the unpromptable

One way for steering to be really useful is if we could identify a concept that can’t be prompted for. What about “intelligence”? You used to be able to prompt for intelligence - this is why 4o-era prompting always began with “you are an expert” - but current-generation models have that baked into their personalities, so prompting for it does nothing. Maybe steering for it would still work?

Ultimately this is an empirical question, but I’m skeptical that we’ll be able to find an “intelligence” steering vector. Put another way, the steering vector that makes up a concept as difficult as “intelligence” might be almost coextensive with the entire set of weights of the model, and thus identifying it reduces to the problem of “training a smart model”.

A sufficiently sophisticated steering approach ends up just replacing the actual model. If I take GPT-2, and at each layer I swap out the activations with the activations from a much stronger model with the same architecture, I will get a much better result. But at that point you’re not making GPT-2 more intelligent, you’re just talking to the stronger model instead. The intelligence is in the steering, not in the model. For much more on this, see my post AI interpretability has the same problems as philosophy of mind.

Steering as data compression

Another way for steering to be useful is if we could somehow steer for a concept that requires a ton of tokens to express. Steering would thus save us a big chunk of the model’s context window. Intuitively, we might think of this as a way to shift a concept from the model’s working memory into its implicit memory.

For instance, what if we could identify a “knowledge of my particular codebase” concept? When GPT-5.5 speed-reads my codebase, some of that knowledge it gains has to be buried in the activations, right? Maybe we could drag that out into a very large steering vector.

I would be surprised if this could work. I think we'll run into the same problem as with extracting "intelligence": the "knows my codebase" concept is probably sophisticated enough to require a full fine-tune of the model[4]. But it at least seems possible.

Conclusion

I’m fascinated with steering, but I’m not particularly optimistic about it. I think most of the gains can be more efficiently reproduced with prompts, and that the truly ambitious steering goals can be more efficiently reproduced by training or fine-tuning the model.

However, the open-source community hasn’t done a lot of work on steering yet, and that might be just starting to change now. If I’m wrong and it does have practical applications, we should find that out in the next six months.

It’ll be interesting to see if bespoke per-model tools like DwarfStar 4 end up including a “library” of boostable features. When a popular open-weights model is released, the community always rushes to release a suite of wrappers and quantized versions. Could we also see a rush to extract boostable features from the model?


  1. Models have lots of different activations you might measure (after attention, between each layer, etc.). You can basically pick any one you want, or try multiple and see what works best.

  2. I recently read a really good deep dive into doing this with an open LLaMA model (and I tried it myself a few months ago, with mixed results).

  3. Apologies to my readers from the big AI labs. Please email me if you have tried steering internally to boost capabilities and it hasn’t worked. I promise I won’t tell anyone.

  4. And even then, the results of “fine tune a model on your codebase” in the industry have largely been unsuccessful.


What Is a Character Trait? 350 Essential Character Traits for Writers


What is a character trait? Delve into 350 character traits to help you choose the right positive and negative traits for building well-rounded characters in your stories.

What Is A Character Trait?

A character trait defines a person’s personality through a distinguishing quality or characteristic. In writing, we use character traits to show behaviour, attitudes, and how someone interacts with others.

People develop character traits through their actions, habits, and mindset. Every person displays a mix of positive and negative traits. If you do this for your characters, they will feel realistic and relatable in stories.

Writers often use these synonyms for character traits: attributes, characteristics, features, qualities, quirks, mannerisms, and peculiarities.

Tips For Including Character Traits

Even if you adore your protagonist and loathe your antagonist, it is important to remember that nobody is perfectly good or perfectly evil.

Every character will have positive and negative personality traits. Make sure you have created real people rather than caricatures by giving your cast a selection of both.

Include them when you complete the character questionnaires for your fictional creations.

How To Use Character Traits In Plotting

When you know what your characters' traits are, you can use them to add to or change your plot.

Examples

  1. An unreliable character might lose a job and the course of the story will change.
  2. A helpful or scrupulous character may inadvertently find out information when they are lending a hand. This information could create conflict and might force them to act or react.
  3. A romantic might start an affair and cause complications in their relationships. This could be the inciting incident for a story.
  4. A curious character could investigate something they shouldn’t, uncovering secrets that escalate the story.
  5. A selfish person may want to be more dependable and self-disciplined, but their selfishness could prevent them from achieving this. You can use this to create internal conflict in your characters.
  6. A hostile character may want to be included in society to improve their life, but their anger will ensure this does not happen.
  7. A loyal character might protect a friend’s secret, even when it puts them in danger or creates a moral dilemma.
  8. An ambitious character may take risks or betray others to achieve their goals, driving the plot toward confrontation.
  9. A stubborn character may refuse to change course, leading to escalating conflict or even tragedy.
  10. A compassionate character might help the wrong person, unintentionally creating new problems.

350 Essential Character Traits for Writers

We hope these lists help you choose the negative and positive character traits you will need in your books.

175 Negative Character Traits - An Essential Resource For Writers
175 Positive Character Traits - An Essential Resource For Writers

Enjoy developing character traits. Have fun, and happy writing.

The Last Word

Understanding what a character trait is helps you create stronger, more believable characters. By using a wide range of traits—both positive and negative—you can shape personalities, deepen conflict, and bring your stories to life. Keep this list of 350 character traits as a go-to resource whenever you need to build well-rounded characters.


by Amanda Patterson
© Amanda Patterson

If you enjoyed this blogger’s post, read:

  1. How To Outline A Short Story – For Beginners
  2. 6 Sub-Plots Every Writer Should Know
  3. How To Write Great Dialogue In Fiction
  4. What Is An Unreliable Narrator? 9 Types Every Writer Should Know
  5. How Writers Use The 4 Main Characters As Literary Devices
  6. Mastering Point Of View In Writing
  7. 7 Essential Elements Every Writer Must Master
  8. 12 Setting Secrets Every Storyteller Needs To Know
  9. 17 Ways To Think Like A Writer Every Day
  10. Why The Silence Of The Lambs Is A Masterclass For Thriller Writers

Top Tip: Sign up for our free daily writing links.

The post What Is a Character Trait? 350 Essential Character Traits for Writers appeared first on Writers Write.


Xbox is now XBOX

Vector illustration of the Xbox logo.

Xbox just allcapsmaxxed: Meet XBOX. This isn't a joke; Microsoft appears to be actually rebranding Xbox to XBOX. Asha Sharma, Xbox CEO, ran a poll on X earlier this week, asking fans whether Microsoft should use Xbox or XBOX. The results were in favor of XBOX, and the company has now renamed its X account.

Curiously, the Threads and Bluesky accounts for Xbox haven't been renamed yet, but if Microsoft is going ahead with a rebranding then I expect those will change soon. I asked Microsoft to comment on this potential Xbox rebranding and the company simply referred me to Sharma's post.

The use of all caps for Xbox is a return to original for …

Read the full story at The Verge.


Does Trump Mobile know how many stripes are on the American flag?

A still from Trump Mobile’s promotional video showing the T1 Phone surrounded by the accessories it ships with.
The T1 Phone has the wrong number of stripes, but it does at least have 50 stars. | Screenshot: Trump Mobile

Where's the Trump phone? We're going to keep talking about it every week. We've reached out, as usual, to ask about the Trump phone's whereabouts. This week, despite our best hopes, we still don't have our phone - but we do have some fresh doubts about the company's patriotic credentials.

This has been a momentous few days for Trump Mobile, in which it defied the haters by announcing that its phones will be shipping to buyers this very week. Not that there's any sign the company has actually done that, but I digress. Because what I really want to talk about today is the American flag.

I am not an American, which probably explains why I did …

Read the full story at The Verge.
