A colleague shared a newsletter with me titled The Shape of Leadership by Mike Fisher. Here is an excerpt:
The V-formation is the one most of us are familiar with in organizations. It maps cleanly to how we think about leadership: someone sets direction, others align behind them, and progress is made through coordination and efficiency. Everyone knows where they’re going. Roles are clear. Responsibility is explicit. When it works, it’s a beautiful thing.
There’s a reason migrating birds use it. Flying long distances is expensive. Energy matters. Small inefficiencies compound over thousands of miles. The V-formation minimizes wasted effort by design. Each bird benefits from the work of the bird ahead of it, and the group as a whole goes farther than any individual could alone. In leadership terms, this is what strong alignment looks like. A clear vision reduces wasted motion. When people understand the destination and their role in getting there, they don’t have to guess. They don’t have to hedge. They can put their energy into execution instead of interpretation...
Murmurations, on the other hand, feel almost like the opposite extreme. There’s no visible plan. No clear leader. No stable shape. And yet, they are remarkably resilient. When a predator strikes, the flock doesn’t panic. It doesn’t wait for instructions. It responds instantly, each bird adjusting based on the movement of the birds nearest to it.
What’s fascinating is that murmurations aren’t chaotic at all. They operate on a small set of simple rules: maintain distance from your neighbors, match their velocity, and pay attention to sudden changes. That’s it. No bird has a global view of the flock, but the flock as a whole behaves intelligently.
This is what strong cultures look like.
Most corporate leaders do not choose the shape of their environments or their teams. They inherit them. They absorb the patterns that were in place long before they arrived, especially when those patterns have a history of success. The quiet assumption is that whatever worked in the past must be correct, so the inherited shape might go unchallenged. The problem, though, is that the conditions shift. The work shifts. Teams shift. Yet the shape of leadership often stays the same.
Systems, including the ways we lead, carry their own inertia, and they tend to preserve whatever state produced success in the past. This is not a conspiracy or a character flaw. Success creates momentum, and momentum takes deliberate effort to redirect, especially when that effort is in service of finding and securing unrealized opportunities. It requires planning, patience, execution, and a willingness to recognize when the moment has changed.
A team’s culture becomes most evident in what people do when no one is asking and no one is watching. It shows up in the choices made in unobserved moments, in the habits that persist without direction, and in the behaviors that surface when pressure is low (or indeed, high). What emerges in those moments is the real system, not the one written in documents or described in meetings. And if a team consistently falls back into a familiar V‑formation, even when unwarranted, it is usually because the culture has rewarded and reinforced that pattern over time.
Culture sets the boundaries of what feels acceptable, what feels risky, and what feels necessary. So when a team reverts to old patterns, when the intended formation collapses under pressure, or when outcomes fail to match stated values, it is the culture doing exactly what it was historically shaped to do.
Transforming culture is by no means easy. It is not a matter of slogans or revised instruction sets. It happens when teams practice different behaviors long enough for those behaviors to become the instinct. Meeting the moment requires leaders who can create the conditions where those new patterns can take root.
Are you running the risk of overwhelming your readers? In this post, we tell you why you don’t need to put everything in your book.
One of the main reasons beginner writers don’t finish their books is because they try to put everything into the story.
If you want to write a novel, you need to follow some basic rules. You need to limit the number of your characters. You need to give them story goals. You need to limit the number of settings. You need to include necessary dialogue and leave out unimportant conversations.
If you don’t do this, you run the risk of overwhelming your readers. Readers who feel lost are likely to abandon your story and find another one where they feel more comfortable.
Readers read to live vicariously through a fictional character. You cannot expect them to split their attention across 10 characters and empathise with everybody.
We follow the rule that you should concentrate on the four main characters, with special emphasis on your protagonist. You need to create story goals for them as well.
Allow readers to bond with these characters so that they can identify with them.
Suggested reading: The 4 Main Characters As Literary Devices
The same goes for settings. Readers like to feel that they know where the story takes place. They become comfortable with the world you’ve created. If you continuously add new settings, you will distract them and you will interrupt the flow of the story.
We follow the rule that you should introduce most of your settings within the first quarter of your book. You should also limit them to the worlds of the four main characters.
Suggested reading: 12 Crucial Things To Remember About Setting
Readers also don’t want to feel confused by too many story lines. Again, look at your protagonist’s story goal and use this to figure out your plot and sub-plot.
Readers are comfortable with one main plot and one or two sub-plots. Remember that this is not the only book you will write. Keep some of the plots you want to include for other novels – or maybe a sequel.
Suggested reading: 6 Sub-Plots That Add Style To Your Story
Readers also don’t want to read about greetings, comments about the weather, questions about relatives, etc. – unless they move the plot forward in some way. Use dialogue carefully. Use it to show people, create conflict, and show rather than tell.
Suggested reading: A Quick Start Guide To Writing Dialogue
This does not mean that you are dumbing down your story, but you are following the rules of fiction writing. Choose your characters. Give them clear story goals. Write the book.
If you do this, you are more likely to be published. Editors are more likely to give you a chance. More importantly, readers are more likely to enjoy your book.
Good luck with your writing!

by Amanda Patterson
I write this in the midst of a huge ice event - which thankfully isn't so bad here in south Louisiana. We're very cold and rainy, but no real ice yet, which is good. The worst is coming in later tonight and the schools have already shut down, but thankfully I work at home so there's no need to get on the roads. Today is also the 26th birthday of my eldest child, which makes the age range of my little army (8 kids total) run from 10 to 26. Wow.
Ok, most likely you've seen this across your feeds already (I swear I saw it at least ten times), but "Date is out, Temporal is in" is a great introduction to the new date hotness in JavaScript, the Temporal API.
According to MDN, the support is good, so I imagine I'll be using this soon myself.
Or at least that's what I hear. Personally, I miss seeing all the cute, weird, personal web pages from the old days, so any effort to help promote this trend is a good thing in my book. Personalsit.es is a collection of personal web sites and a quick way to hop to a random one. Anyone can submit a PR to add their own. (I did!)
This isn't new, but I love a good API, and what's better than one good API? How about nearly 500 of them! Free Public APIs is exactly what it sounds like, a collection of free APIs. Building simple API wrappers is a great way to learn a new language. Sadly, though, there are only three cat APIs - someone fix that please!
Another music discovery for me: hemlocke springs is an American singer and songwriter who became active in the music scene only in the past few years. She's got a great sound, and while this is the only track I've tried so far, I'm looking forward to listening to more from her.

One of the first steps we take when we want to optimize software is to look at profiling data. Software profilers are tools that try to identify where your software spends its time. Though the exact approach can vary, a typical profiler samples your software (stops it at regular intervals) and collects statistics. If your software is routinely stopped in a given function, this function is likely using a lot of time. In turn, it might be where you should put your optimization efforts.
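To make the sampling idea concrete, here is a minimal sketch of a statistical sampler in Python; it is illustrative only (busy_work is a hypothetical workload), with a background thread periodically inspecting the main thread's current frame and tallying function names:

import collections
import sys
import threading
import time

samples = collections.Counter()
stop = threading.Event()

def sample_main_thread(interval=0.01):
    # Record which function the main thread is executing, at regular intervals.
    # sys._current_frames() is CPython-specific.
    main_id = threading.main_thread().ident
    while not stop.is_set():
        frame = sys._current_frames().get(main_id)
        if frame is not None:
            samples[frame.f_code.co_name] += 1
        time.sleep(interval)

sampler = threading.Thread(target=sample_main_thread, daemon=True)
sampler.start()
busy_work()  # hypothetical workload under study
stop.set()
sampler.join()
print(samples.most_common(5))  # hot functions accumulate the most samples

Real profilers are far more careful (and record whole call stacks), but the principle is the same.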
Matteo Collina recently shared with me his work on feeding profiler data to an AI for software-optimization purposes in JavaScript. Essentially, Matteo takes the profiling data and prepares it in a way that an AI can comprehend. The insight is simple but intriguing: tell an AI how it can capture profiling data and then let it optimize your code, possibly by profiling the code repeatedly. The idea is not entirely original, since AI tools will, on their own, figure out that they can get profiling data.
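As a rough sketch of that idea (not Matteo's actual pipeline), one could capture cProfile output as text and paste the top entries into a prompt; here main is a stand-in for the entry point of the program being optimized:

import cProfile
import io
import pstats

profiler = cProfile.Profile()
profiler.enable()
main()  # stand-in for the code being optimized
profiler.disable()

buffer = io.StringIO()
stats = pstats.Stats(profiler, stream=buffer).sort_stats('cumulative')
stats.print_stats(15)  # keep only the top entries so they fit in a prompt
prompt = 'Optimize my script. Profiling data:\n' + buffer.getvalue()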
How well does it work? I had to try it.
For the simdutf software library, we use an amalgamation script: it collects all of the C++ files on disk, does some shallow parsing and glues them together according to some rules.
I first asked the AI to optimize the script without access to profiling data. What it did immediately was to add a file cache: the script repeatedly loads the same files from disk (the script is a bit complex). This saved about 20% of the running time.
Specifically, the AI replaced this naive code…
def read_file(file):
    with open(file, 'r') as f:
        for line in f:
            yield line.rstrip()
by this version with caching…
def read_file(file):
    if file in file_cache:
        for line in file_cache[file]:
            yield line
    else:
        lines = []
        with open(file, 'r') as f:
            for line in f:
                line = line.rstrip()
                lines.append(line)
                yield line
        file_cache[file] = lines
Could the AI do better with profiling data? I instructed it to run the Python profiler: python -m cProfile -s cumtime myprogram.py. It found two additional optimizations:
1. It precompiled the regular expressions (re.compile). It replaced
if re.match('.*generic/.*.h', file):
    # ...
by
if generic_pattern.match(file):
    # ...
where elsewhere in the code, we have…
generic_pattern = re.compile(r'.*generic/.*\.h')
2. Instead of repeatedly calling re.sub to do a regular expression substitution, it filtered the strings by checking for the presence of a keyword in the string first.
if 'SIMDUTF_IMPLEMENTATION' in line:  # this IF is the optimization
    print(uses_simdutf_implementation.sub(context.current_implementation + "\\1", line), file=fid)
else:
    print(line, file=fid)  # fast path
These two optimizations could probably have been arrived at by looking at the code directly, and I cannot be certain that they were driven by the profiling data. But I can tell that the affected calls do appear in the profiling data.
Unfortunately, the low-hanging fruit, caching the file access, represented the bulk of the gain. The AI was not able to further optimize the code. So the profiling data did not help much.
When I design online courses, I often use a lot of links. These links break over time. So I have a simple Python script that goes through all the links and verifies them.
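For context, a sequential version of such a checker might look roughly like this (a simplified sketch, not my actual script; only the check_url name also appears in the optimized code below):

import re
import urllib.request

link_pattern = re.compile(r'https?://\S+')  # crude: grab non-whitespace after the scheme

def check_url(url):
    # Consider a link valid if it answers with a non-error HTTP status.
    try:
        request = urllib.request.Request(url, method='HEAD')
        with urllib.request.urlopen(request, timeout=10) as response:
            return response.status < 400
    except Exception:
        return False

def check_links(path):
    with open(path, 'r') as f:
        for url in link_pattern.findall(f.read()):
            print(url, 'OK' if check_url(url) else 'BROKEN')

Checking links one at a time like this is slow because most of the time is spent waiting on the network, which is exactly what a thread pool helps with.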
I first asked my AI to optimize the code. It did the same regex trick, compiling the regular expression. It also created a thread pool, making the script concurrent.
import concurrent.futures

with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
    url_results = {url: executor.submit(check_url, url) for url in urls_to_check}
    for url, future in url_results.items():
        url_cache[url] = future.result()
This parallelization more than doubled the speed of the script.
It cached the URL checks in an interesting way, using functools:
from functools import lru_cache

@lru_cache(maxsize=None)
def check(link):
    # ...
I did not know about this nice trick. It proved useless in my context because I rarely encounter the same link more than once.
I then started again, and told it to use the profiler. It did much the same thing, except for the optimization of the regular expression.
As far as I can tell, all optimizations were in vain except for the multithreading. And the AI could do that part without the profiling data.
The Python scripts I tried were not heavily optimized, as their performance was not critical. They are relatively simple.
For the amalgamation, I got a 20% performance gain for ‘free’ thanks to the file caching. The link checker is going to be faster now that it is multithreaded. Both optimizations are valid and useful, and will make my life marginally better.
In neither case was I able to discern benefits from the profiler data. I was initially hoping to get the AI busy optimizing the code in a loop, continuously running the profiler, but it did not happen in these simple cases. The AI optimized code segments that contributed little to the running time, as per the profiler data.
To be fair, profiling data is often of limited use. The real problems are often architectural and not related to narrow bottlenecks. Even when there are identifiable bottlenecks, a simple profiling run can fail to make them clearly identifiable. Further, profilers become more useful as the code base grows, while my test cases are tiny.
Overall, I expect that the main reason for my relative failure is that I did not have the right use cases. I think that collecting profiling data and asking an AI to have a look might be a reasonable first step at this point.