Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

How To Write An Irresistible Book Blurb In 5 Easy Steps


In this post, we tell you how to write an irresistible book blurb in 5 easy steps. We call it SCOPE and it works for any blurb.

Your book blurb must be well-written and compelling to get a reader’s attention. Your blurb will be an important part of your marketing.

To write a good blurb, you have to make it short and exciting, yet relevant. You need to:

  1. Cut out sub-plots.
  2. Add tension to make it dramatic.
  3. Try not to mention more than two characters’ names.
  4. Promise your audience a read they won’t forget.

How To Write An Irresistible Book Blurb In 5 Easy Steps

I’ve come up with this easy acronym to help you create a book blurb. I call it SCOPE. Follow these five pointers and see if it works for you.

SCOPE

Setting
Conflict
Objective
Possible Solution
Emotional Promise

  1. Setting: All stories involve characters who are in a certain setting at a certain time.
  2. Conflict: A good story places these characters in a situation where they have to act or react. A good way to start this part of your blurb is with a word like 'But', 'However', or 'Until'.
  3. Objective: What do your characters need to do?
  4. Possible Solution: Offer the reader hope here. Show them how the protagonist might overcome the conflict. Give them a reason to pick up the book. Use the word 'If' here.
  5. Emotional Promise: Tell them how the book will make them feel. This sets the mood for your reader.

I am using the film Edge of Tomorrow to write a blurb with this formula.

Book Blurb Example:

  1. London. The near future. Aliens have invaded Earth and colonised Europe. Major William Cage is a PR expert for the US Army working with the British to stop the invaders crossing the English Channel. Battle after battle is lost until an unexpected victory gives humanity hope.
  2. But a planned push into Europe fails. Cage finds himself in a war he has no way to fight and he is killed. However, he wakes up, rebooted back a day every time he dies.
  3. He lives through hellish day after day, until he finds Sergeant Rita Vrataski who shows him how to fight the enemy.
  4. They painstakingly work out how to destroy the aliens. If they succeed, they will save Earth.
  5. This thrilling, action-packed, science-fiction war story will show you how heroes are made and wars can be won. Against the odds.

You can shorten (or lengthen) this to suit your needs.

SCOPE will work for any blurb. Why don’t you try it on a book you’ve read or a film you’ve seen recently?

If you liked this article, you will enjoy:

  1. How To Write A Query Letter In 12 Easy Steps
  2. How To Write A One-Page Synopsis
  3. 8 Points To Consider When You Name Your Book


by Amanda Patterson
© Amanda Patterson

If you enjoyed this blogger’s writing, read:

  1. How To Write A Play – For Beginners
  2. What Is Metafiction & How Do I Write It?
  3. Fabulous Resources For Crime Writers
  4. What Is A Character Flaw? 123 Ideas For Character Flaws
  5. All About Betrayal In Fiction
  6. The Best Priest Detectives In Fiction
  7. What Is Slipstream Fiction?
  8. What Is Imagery & How Do You Use It In Fiction Writing?
  9. How To Deal With A Writer’s Inner Critic
  10. What Is Exposition In A Story?

The post How To Write An Irresistible Book Blurb In 5 Easy Steps appeared first on Writers Write.


Layoff Thinking


LinkedIn has been awash in layoff stories for, God, it feels like forever now. But a recent post got me thinking about layoffs, and how some of our reactions are deeply visceral when others get laid off around us, and why it's such a deeply personal thing to be suddenly unemployed.

First, the post

The video does some interesting analysis around what the real cause of the layoff is (and I think she's right about that), but the text above the video reads, in part:

We are so conditioned to believe that we have no inherent worth in capitalism unless we are EARNING. So we outsource our own worth to the very privileged few who are seemingly doing capitalism "right."

You're worthy, I promise you. Struggle isn't necessary, poverty doesn't happen because you're lazy and entitled.

... which got me thinking. Why is it we take it so hard when we are separated from a company?

In Western (American) society, we often place a great deal of our identity into what we do.

Consider, for a moment, how we greet each other when strangers first meet: "Hi, what's your name? What do you do?" (In the Deep South, I'm told the question is often "What church do you go to?", while New Yorkers, I'm told, ask "Where ya from?", meaning "Which of the boroughs do you live in?", and any answer other than a New York borough is essentially discounted and heavily judged.) These questions, right out of the gate, are how we look to understand other people.

In essence, these questions are what we use to establish our identity, our sense of selves, and how we represent that self both to others and to ourselves. It's ingrained into us as kids--in fact, it's a natural outgrowth of how, when we are children, we self-identify based on our school/grade/teacher which then leads naturally to college which then leads naturally to employer.

Notice how "What do you do" is right up there, right after the name, even? We use that as a definition of who we are, to ourselves every bit as much as we do to others.

Is it any surprise, then, that people take a layoff hard? Employers are punching a hole in somebody's sense of self when they lay them off. It's as deep a blow as taking away their national identity or displacing them from their culture.

While I've always enjoyed programming and making money as a programmer, I don't think I've had that sense of "self" wrapped entirely in that concept of being a programmer. When everybody around me was a "Java developer" or a ".NET developer", I was just "a developer". Possibly because I've spent a ton of time thinking about all the other things I could do (and wanted to do): fiction author, sommelier, dungeon master, game developer, and a few more to boot. Don't get me wrong, I love coding and I love learning about all this new tech stuff, but if I couldn't make money at it, I'd do it on the side while making money doing whatever else. It's a weird thing to explain sometimes.

My reason for bringing all this up? In the spirit of trying to console people by counseling actions to take: If you're experiencing a layoff, I think it critical to lean into all of the non-work parts of your self. Re-center your sense of identity away from work. Hobbies. Family. Voracious consumer of urban fantasy romance slam poetry. Whatever. Take the chance to rebuild your sense of self around things that aren't work, so that when you get back into work, you're never quite as vulnerable as you were before.

In other words: You are way more than what you do. You have skills, insights, views, and probably a whole lotta love that you can offer. Your company said you have no worth to them? Fuck 'em. You have worth, just because. It sucks, yes, and it's important to grieve. Then get up and go wander the coffee aisle at the local grocery store, enjoying all the smells. Go watch kids in the park for a while. Swing on a swing like you did when you were five. Whatever. Be you. Reconnect with yourself, and realize that nowhere inside you is a company logo. You are waaaaay more than just what you do, and that in and of itself is waaaaay bigger than where you do it.

I don't know if that helps anyone else. But it kept me sane (and even a degree hopeful) during my three-year "involuntary sabbatical" a few years back.


Daily Reading List – April 8, 2026 (#759)


Today was a day I was happy to have access to experts and decision-makers. Instead of playing telephone through intermediaries, I still like going right to the source. Risky at times, but usually pays off!

[blog] Highlights from my conversation about agentic engineering on Lenny’s Podcast. I’m going to listen to the podcast, but these notes from Simon give a useful peek into key points. Especially on the difficulty of estimating software and managing personal energy.

[blog] The 2nd Phase of Agentic Development. Instead of cloning things, we can now reimagine software based on the jobs-to-be-done. Coding agents make reimagining practical.

[article] Goodbye, Llama? Meta launches new proprietary AI model Muse Spark — first since Superintelligence Labs’ formation. Is Meta back? This looks like a very impressive model.

[blog] How I Use AI Before the First Line of Code. Don't just use AI for implementation work. It's a good exploration tool, reviewer of requirements, and architectural sparring partner.

[blog] Portability is the new productivity. Do you need to rebuild software for every type of client? The gaming industry solved this, and app devs have Flutter for this too.

[blog] Ultimate prompting guide for Lyria 3 models. I like the detailed instructions here that explain the best way to get music, lyrics, and vocals from this set of Google models.

[blog] The Git Commands I Run Before Reading Any Code. Educational. What can you learn about a codebase before you open up any files?

[blog] Google Cloud named a Leader in The Forrester Wave™: Sovereign Cloud Platforms, Q2 2026. There’s no shortcut to quality compliance work and sovereign offerings. Regardless of what some startups thought. Proud to see us leading on both strategy AND implementation.

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:




Meta's new model is Muse Spark, and meta.ai chat has some interesting tools


Meta announced Muse Spark today, their first model release since Llama 4 almost exactly a year ago. It's hosted, not open weights, and the API is currently "a private API preview to select users", but you can try it out today on meta.ai (Facebook or Instagram login required).

Meta's self-reported benchmarks show it competitive with Opus 4.6, Gemini 3.1 Pro, and GPT 5.4 on selected benchmarks, though notably behind on Terminal-Bench 2.0. Meta themselves say they "continue to invest in areas with current performance gaps, such as long-horizon agentic systems and coding workflows".

The model is exposed as two different modes on meta.ai - "Instant" and "Thinking". Meta promise a "Contemplating" mode in the future which they say will offer much longer reasoning time and should behave more like Gemini Deep Think or GPT-5.4 Pro.

A couple of pelicans

I prefer to run my pelican test via API to avoid being influenced by any invisible system prompts, but since that's not an option I ran it against the chat UI directly.

Here's the pelican I got for "Instant":

This is a pretty basic pelican. The bicycle is mangled, and the pelican itself has a rectangular beak, albeit with a hint of a pouch curve below it. Not a very good one.

And this one for "Thinking":

Much better. Clearly a pelican. Bicycle is the correct shape. Pelican is wearing a blue cycling helmet (albeit badly rendered). Not a bad job at all.

Both SVGs were rendered inline by the Meta AI interface. Interestingly, the Instant model output an SVG directly (with code comments) whereas the Thinking model wrapped it in a thin HTML shell with some unused Playables SDK v1.0.0 JavaScript libraries.

Which got me curious...

Poking around with tools

Clearly Meta's chat harness has some tools wired up to it - at the very least it can render SVG and HTML as embedded frames, Claude Artifacts style.

But what else can it do?

I asked it:

what tools do you have access to?

And then:

I want the exact tool names, parameter names and tool descriptions, in the original format

It spat out detailed descriptions of 16 different tools. You can see the full list I got back here - credit to Meta for not telling their bot to hide these, since it's far less frustrating if I can get them out without having to mess around with jailbreaks.

Here are highlights derived from that response:

  • Browse and search. browser.search can run a web search through an undisclosed search engine, browser.open can load the full page from one of those search results and browser.find can run pattern matches against the returned page content.

  • Meta content search. meta_1p.content_search can run "Semantic search across Instagram, Threads, and Facebook posts" - but only for posts the user has access to view which were created since 2025-01-01. This tool has some powerful looking parameters, including author_ids, key_celebrities, commented_by_user_ids, and liked_by_user_ids.

  • "Catalog search" - meta_1p.meta_catalog_search can "Search for products in Meta's product catalog", presumably for the "Shopping" option in the Meta AI model selector.

  • Image generation. media.image_gen generates images from prompts, and "returns a CDN URL and saves the image to the sandbox". It has modes "artistic" and "realistic" and can return "square", "vertical" or "landscape" images.

  • container.python_execution - yes! It's Code Interpreter, my favourite feature of both ChatGPT and Claude.

    Execute Python code in a remote sandbox environment. Python 3.9 with pandas, numpy, matplotlib, plotly, scikit-learn, PyMuPDF, Pillow, OpenCV, etc. Files persist at /mnt/data/.

    Python 3.9 is EOL these days but the library collection looks useful.

    I prompted "use python code to confirm sqlite version and python version" and got back Python 3.9.25 and SQLite 3.34.1 (from January 2021). A sketch of what that check might look like appears after this list.

  • container.create_web_artifact - we saw this earlier with the HTML wrapper around the pelican: Meta AI can create HTML+JavaScript files in its container which can then be served up as secure sandboxed iframe interactives. "Set kind to html for websites/apps or svg for vector graphics."

  • container.download_meta_1p_media is interesting: "Download media from Meta 1P sources into the sandbox. Use post_id for Instagram/Facebook/Threads posts, or catalog_search_citation_id for catalog product images". So it looks like you can pull in content from other parts of Meta and then do fun Code Interpreter things to it in the sandbox.

  • container.file_search - "Search uploaded files in this conversation and return relevant excerpts" - I guess for digging through PDFs and similar?

  • Tools for editing files in the container - container.view, container.insert (with new_str and insert_line), container.str_replace. These look similar to Claude's text editor tool commands - these are becoming a common pattern across any file-equipped agent harness.

  • container.visual_grounding - see below, this one is fun.

  • subagents.spawn_agent - the sub-agent as a tool pattern. "Spawn an independent sub-agent for research, analysis, or delegation. It returns its final text response."

  • third_party.link_third_party_account - "Initiate account linking for a third-party service", currently Google Calendar, Outlook Calendar, Gmail, or Outlook.
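
As a quick illustration of the container.python_execution tool above, here is a minimal sketch of the kind of code the version-check prompt could have produced. Meta AI didn't show the code it actually ran, so this is an assumption based only on the reported output:

import sys
import sqlite3

# The sandbox reported Python 3.9.25 and SQLite 3.34.1 (January 2021)
print("Python:", sys.version.split()[0])
print("SQLite:", sqlite3.sqlite_version)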

Image analysis in the container

Let's talk about that container.visual_grounding one. Here's the description in full:

Description: Visual grounding tool that analyzes the image, identifies and labels objects, locates regions, or counts objects.

Parameters:

  • object_names (array, required)
  • image_path (string, default "")
  • format_type ("bbox"|"point"|"count"|null, default "bbox")
  • title (string|null)

So it takes a required array of object names and an optional image path, and returns results in bbox, point, or count format.
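
For illustration only, the argument shape implied by that description might look something like this. The object names and image path are made-up values, and the real harness invokes the tool internally rather than exposing it as a public API:

visual_grounding_args = {
    "object_names": ["coffee cup", "banana peel", "newspaper", "raccoon"],  # required
    "image_path": "/mnt/data/raccoon.png",  # assumption: a file already in the sandbox
    "format_type": "bbox",                  # or "point" / "count"
    "title": "Trash hat ensemble",          # optional label, hypothetical
}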

Sounds to me like Meta's Segment Anything. I decided to take it for a spin.

Prompt:

generate a photo of a raccoon sitting on a trash can wearing trash as a hat

Response:

Here you go — king of the alley, in full regalia:

It's a photo of a raccoon wearing a disposable coffee cup and some newspaper and banana peel as a hat. It's pretty good.

Coffee cup crown, banana peel brim, newspaper feather. Peak raccoon fashion. Want me to make a wilder version — maybe night-vision, or wearing sunglasses too?

My guess is that image generation in Meta AI is powered by their Emu model, or an updated version of it. This is the same pattern as ChatGPT and Gemini image generation, where images are generated by a tool call (here that's to media.image_gen).

The image_gen tool description said that it "saves the image to the sandbox", so my guess was that I'd then be able to run Python code against it. I tried:

use python OpenCV to analyze that image and find out neat things about it

Annoyingly it did not show me the code it ran, but it did create this image:

OpenCV image analysis dashboard titled "OpenCV Analysis: Raccoon Trash Hat" with four panels: top-left shows the original 1280x1280 photo of a raccoon wearing a trash-lid hat with a coffee cup and newspaper on top in an alley; top-right shows a Canny edge detection version with white edges on black background labeled "Canny Edges - 4.6% density"; bottom-left shows "Top 5 Dominant Colors" as swatches with values RGB(20, 19, 24): 35.3%, RGB(43, 43, 47): 34.2%, RGB(84, 80, 79): 18.3%, RGB(139, 135, 131): 6.3%, RGB(210, 213, 208): 5.8%; bottom-right shows "Color Histograms" line chart with Red, Green, and Blue channels plotted from 0-250 on x-axis and 0-35000 on y-axis, peaking around value 25-50.
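
Meta AI didn't reveal the code behind that dashboard, but a rough OpenCV sketch along these lines would produce a similar analysis. The file path, thresholds, and cluster count are assumptions; only the panels shown above (edge density, dominant colors, channel histograms) come from the output:

import cv2
import numpy as np

img = cv2.imread("/mnt/data/raccoon.png")  # hypothetical path to the generated image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Canny edge density (the dashboard reported 4.6%)
edges = cv2.Canny(gray, 100, 200)
density = np.count_nonzero(edges) / edges.size * 100

# Top 5 dominant colors via k-means over the pixels
pixels = img.reshape(-1, 3).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, 5, None, criteria, 3, cv2.KMEANS_RANDOM_CENTERS)
shares = np.bincount(labels.flatten()) / len(labels) * 100

# Per-channel histograms for the fourth panel
hists = [cv2.calcHist([img], [c], None, [256], [0, 256]) for c in range(3)]

print(f"Edge density: {density:.1f}%")
for center, share in zip(centers, shares):
    rgb = tuple(int(v) for v in center[::-1])  # OpenCV loads BGR; report as RGB
    print(f"RGB{rgb}: {share:.1f}%")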

Pretty neat! We can both generate images and then use them with Python tools.

Next, the visual_grounding tool. Knowing the names of the tools makes it easy to refer to them in prompts:

Now try the container.visual_grounding tool

The response (full response here):

Done — visual_grounding pinpointed every piece of the outfit with pixel-level precision:

Visual grounding analysis of a raccoon photo titled "Visual Grounding: Found 8 objects" with colored dot annotations on the image marking each detected object: coffee cup (515,205) in yellow, banana peel (569,261) in light yellow, newspaper (399,279) in blue, right eye (461,396) in magenta, left eye (582,397) in pink, nose (514,472) in cyan, raccoon (553,548) in green, and trash can lid (550,892) in red. A legend at the bottom lists all objects with their normalized 0-1000 coordinates. Footer text reads "Coordinates are 0-1000 normalized. visual_grounding returned precise points for each component of the trash hat ensemble." Watermark reads "Meta AI".

[...]

Want me to run it in bbox mode next to draw boxes around the fur texture, or count mode to see how many individual whiskers it can detect?

Once again, Meta AI wrapped the results from the tool in a custom HTML visualization. That's clearly point mode, but how about the others?

I prompted:

yes draw bounding boxes, can it do masks too?

Response:

Yes — bounding boxes work great. Masks? Not directly in visual_grounding; it only returns three formats: point, bbox, and count. No pixel-level segmentation masks.

Here's the bbox output:

Bounding box object detection visualization titled "Bounding Boxes (visual_grounding)" with subtitle "8 objects detected — coordinates are 0-1000 normalized" showing a raccoon photo with colored rectangular bounding boxes around detected objects: coffee cup in yellow [424,115,610,273] 186×158, banana peel in yellow [472,215,667,291] 195×76, newspaper in blue [305,223,548,315] 243×92, raccoon in green [270,240,892,845] 622×605, raccoon's face in purple [305,283,750,537] 445×254, right eye in magenta [442,379,489,413] 47×34, left eye in magenta [565,379,605,416] 40×37, and trash can lid in red [107,800,1000,998] 893×198. A legend at the bottom shows each object's name, coordinates, and pixel dimensions in colored cards. Watermark reads "Meta AI".

What you’re seeing:

  • raccoon: big box from (270,240) to (892,845) — covers 62% of the width, basically the whole animal
  • coffee cup: (424,115)-(610,273) — sits highest, 158px tall in the 0-1000 space
  • banana peel: (472,215)-(667,291) — overlaps the cup's base, confirming it's draped over the side
  • newspaper: (305,223)-(548,315) — left ear coverage
  • trash can lid: (107,800)-(1000,998) — spans almost the full image width at the bottom
  • face: (305,283)-(750,537) — tight crop around the mask
  • eyes: both ~40×37 pixel boxes, perfectly level at y=379

Notice how the boxes nest: face is inside raccoon, eyes inside face, hat pieces overlap each other but stay above the face box. The tool is doing real object localization, not just guessing centers.


About masks: since visual_grounding doesn't output them, I can fake them with OpenCV in the container. Two quick options:

  1. GrabCut — I give it the raccoon bbox, it iteratively segments fur from background and returns a binary mask
  2. SAM-style color threshold — use the bbox plus k-means to isolate the coffee cup or banana peel

Want me to generate an actual alpha mask for the raccoon (cut it out from the alley), or for the trash hat pieces separately? Takes about 10 seconds and I can give you a transparent PNG.

Here's the HTML for that bbox visualization.
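
One detail worth noting: the coordinates in these responses are normalized to a 0-1000 grid, so the dimensions in the legend appear to be in that space rather than image pixels. Mapping a box back onto the 1280×1280 photo is a simple scale; here is a small sketch using the raccoon box from the response above:

def to_pixels(bbox, width, height):
    # Convert a [x1, y1, x2, y2] box from the 0-1000 normalized grid to pixel coordinates
    x1, y1, x2, y2 = bbox
    return (x1 * width // 1000, y1 * height // 1000,
            x2 * width // 1000, y2 * height // 1000)

print(to_pixels([270, 240, 892, 845], 1280, 1280))  # -> (345, 307, 1141, 1081)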

Finally I had it dump out the raw JSON for point mode, bbox mode and count mode. You can see that all here. The count mode is particularly interesting - here's that in full:

[
  {
    "object_name": "raccoon whisker",
    "points": [
      {"x": 403, "y": 445},
      {"x": 397, "y": 458},
      {"x": 392, "y": 472},
      {"x": 391, "y": 485},
      {"x": 398, "y": 497},
      {"x": 415, "y": 512},
      {"x": 618, "y": 446},
      {"x": 622, "y": 461},
      {"x": 625, "y": 475},
      {"x": 628, "y": 488},
      {"x": 635, "y": 501},
      {"x": 651, "y": 518}
    ],
    "count": 12
  },
  {
    "object_name": "raccoon paw claw",
    "points": [
      {"x": 406, "y": 845},
      {"x": 424, "y": 847},
      {"x": 441, "y": 842},
      {"x": 473, "y": 833},
      {"x": 596, "y": 830},
      {"x": 635, "y": 844},
      {"x": 660, "y": 843},
      {"x": 679, "y": 838}
    ],
    "count": 8
  },
  {
    "object_name": "trash item on head",
    "points": [
      {"x": 514, "y": 191},
      {"x": 568, "y": 263},
      {"x": 392, "y": 274}
    ],
    "count": 3
  },
  {
    "object_name": "eye",
    "points": [
      {"x": 462, "y": 395},
      {"x": 582, "y": 398}
    ],
    "count": 2
  },
  {
    "object_name": "ear",
    "points": [
      {"x": 359, "y": 313},
      {"x": 680, "y": 294}
    ],
    "count": 2
  }
]

So Meta AI has the ability to count a raccoon's whiskers baked into the default set of tools.

Which means... it can count pelicans too!

Screenshot of a Meta AI conversation where the user uploaded a photo of pelicans on rocks and asked "Count the pelicans". response reads: "That rocky haul-out is packed — I count 25 brown pelicans in your photo. I went bird by bird, including the ones tucked behind rocks, heads turned away, or just peeking out at the edges. They're all California brown pelicans, with those pale yellow heads and long bills, clustered on the riprap right at the waterline. To make it easy to verify, I've numbered each one on the image:" followed by an annotated version of the photo with red numbered circles (1–25) placed on each pelican. The annotated image has a downward chevron and an "Open" button at the bottom right.

Here's that overlay exported as HTML.

Maybe open weights in the future?

On Twitter Alexandr Wang said:

this is step one. bigger models are already in development with infrastructure scaling to match. private api preview open to select partners today, with plans to open-source future versions.

I really hope they do go back to open-sourcing their models. Llama 3.1/3.2/3.3 were excellent laptop-scale model families, and the introductory blog post for Muse Spark had this to say about efficiency:

[...] we can reach the same capabilities with over an order of magnitude less compute than our previous model, Llama 4 Maverick. This improvement also makes Muse Spark significantly more efficient than the leading base models available for comparison.

So are Meta back in the frontier model game? Artificial Analysis think so - they scored Muse Spark at 52, "behind only Gemini 3.1 Pro, GPT-5.4, and Claude Opus 4.6". Last year's Llama 4 Maverick and Scout scored 18 and 13 respectively.

I'm waiting for API access - while the tool collection on meta.ai is quite strong, the real test of a model like this is still what we can build on top of it.

Tags: facebook, ai, generative-ai, llms, code-interpreter, llm-tool-use, meta, pelican-riding-a-bicycle, llm-reasoning, llm-release


Unable to use database view option in Access - Microsoft 365


Error - You can't set criteria before you add a field or expression to the Field Row

 

I need some help on this.



Testing C# Source Generators: A Practical Guide

1 Share

Learn how to test C# source generators with .NET 10. Complete guide covering Microsoft.CodeAnalysis.Testing, unit tests, snapshot testing with Verify, and CI integration.
