Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

How to Learn AI with AI

1 Share
From: AIDailyBrief
Duration: 16:35
Views: 705

Overview of the shift from instructor-led courses to agent-first, context-driven learning with AI as a collaborative build partner. Key mindsets: start with vision, think out loud, insist on mutual pushback, and use AI as a mirror for refining ideas. Practical tactics: create handoff documents, paste exact errors or code into prompts, use AI to craft prompts for other models, preserve session context, and prefer voice over typing for faster iteration.

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at http://patreon.com/aidailybrief
Learn more about the show https://aidailybrief.ai/

Read the whole story
alvinashcraft
30 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

F# Weekly #7, 2026 – .NET 11 Preview 1 & Rider 2026.1 EAP 3

1 Share

Welcome to F# Weekly,

A roundup of F# content from this past week:

News

Microsoft News

Next week @dsyme.bsky.social shows how agentic workflows can continuously improve #fsharp libraries. amplifyingfsharp.io/sessions/202…

(@amplifyingfsharp.io) 2026-02-10T11:59:34.636Z

Videos

Blogs

Highlighted projects

Perhaps not too impressive for now but this shows Giraffe (F#) running on the BEAM (Erlang Runtime) using Fable (WIP) #fablecompiler #fsharp

Dag Brattli (@dbrattli.bsky.social) 2026-02-11T20:24:17.335Z

New Releases

🚀 EasyBuild.ShipIt 1.0.0 is out! 🎉 Automate your release chores. ShipIt parses your Conventional Commits to calculate versions, generate changelogs, and auto-open Release PRs! 🛠 ✅ Auto Release PRs ✅ Monorepo ready. Start shipping: 🚢 https://github.com/easybuild-org/EasyBuild.ShipIt #dotnet #fsharp

Maxime (@mangelmaxime.bsky.social) 2026-02-11T18:37:08.087Z

That’s all for now. Have a great week.

If you want to help keep F# Weekly going, click here to jazz me with Coffee!

Buy Me A Coffee






Justifying text-wrap: pretty

1 Share

Something truly monumental happened in the world of software development in 2025. Safari shipped a reasonable implementation of text-wrap: pretty: https://webkit.org/blog/16547/better-typography-with-text-wrap-pretty/. We are getting closer and closer to cutting-edge XV-century technology. Beautiful paragraphs!

Gutenberg bible Old Testament Epistle of St Jerome

We are not quite there yet, hence the present bug report.


A naive way to break text into lines to form a paragraph of a given width is greediness: add the next word to the current line if it fits, otherwise start a new line. The result is unlikely to be pretty: sometimes it makes sense to squeeze one more word onto a line to make the lines more balanced overall. Johannes Gutenberg did this sort of thing manually to produce the beautiful page above. In 1981, Knuth and Plass figured out a way to teach a computer to do this, using dynamic programming, for line breaking in TeX.
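
The two approaches can be sketched in a few lines of Python (an illustrative toy, not any browser's or TeX's actual implementation; widths are counted in characters and only inter-word breaks are considered):

```python
def greedy_wrap(words, max_width):
    """Greedy: append each word to the current line if it still fits."""
    lines, current = [], []
    for word in words:
        # width if we append word: existing chars + spaces + the new word
        trial = sum(len(w) for w in current) + len(current) + len(word)
        if current and trial > max_width:
            lines.append(" ".join(current))
            current = [word]
        else:
            current.append(word)
    if current:
        lines.append(" ".join(current))
    return lines


def pretty_wrap(words, max_width):
    """Knuth-Plass-style DP: minimize the total squared slack of all lines.
    (TeX's real algorithm is more elaborate and does not penalize the last
    line; this keeps only the core dynamic-programming idea.)"""
    n = len(words)
    INF = float("inf")
    best = [0.0] + [INF] * n   # best[i] = min cost of wrapping words[:i]
    split = [0] * (n + 1)      # split[i] = start of the last line in words[:i]
    for i in range(1, n + 1):
        width = -1             # width of words[j:i] joined by single spaces
        for j in range(i - 1, -1, -1):
            width += len(words[j]) + 1
            if width > max_width:
                break
            slack = max_width - width
            cost = best[j] + slack * slack
            if cost < best[i]:
                best[i], split[i] = cost, j
    lines, i = [], n
    while i > 0:
        lines.append(" ".join(words[split[i]:i]))
        i = split[i]
    return lines[::-1]
```

On a sample sentence wrapped at width 10, greedy ends with an orphan line containing just one word, while the squared-slack version trades it for more evenly filled lines; that balancing is the effect text-wrap: pretty is after.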

Inexplicably, until 2025, browsers stuck with the naive greedy algorithm, subjecting generations of web users to ugly typography. To be fair, the problem in a browser is a harder version of the one solved by Gutenberg, Plass, and Knuth. In print, the size of the page is fixed, so you can compute optimal line breaking once, offline. In the web context, the window width is arbitrary and can even change dynamically, so the line breaking has to be “online”. On the other hand, XXI-century browsers have a bit more compute resources than we had in 1980 or even 1450!


Making lines approximately equal in terms of the number of characters is only halfway towards a beautiful paragraph. No matter how hard you try, the lengths won’t be exactly the same, so, if you want both the left and the right edges of the page to be aligned, you also need to fudge the spaces between the words a bit. In CSS, text-wrap: pretty asks the browser to select line breaks intelligently to make lines roughly equal, and text-align: justify adjusts whitespace to make them exactly equal.

Although Safari is the first browser to ship a non-joke implementation of text-wrap: pretty, the combination with text-align: justify looks ugly, as you can see in this very blog post. To pin the ugliness down: the whitespace between the words is blown out of proportion. Here’s the same justified paragraph with and without text-wrap: pretty:

The paragraph happens to look ok with greedy line-breaking. But the โ€œsmartโ€ algorithm decides to add an entire line to it, which requires inflating all the white space proportionally. By itself, either of

p {
    text-wrap: pretty;
    text-align: justify;
}

looks alright. Itโ€™s just the combination of the two that is broken.


This behavior is a natural consequence of the implementation. My understanding is that the dynamic programming scoring function aims to get each line close to a target width, and is penalized for deviations. Crucially, the actual max width of a paragraph is fixed: while a line can be arbitrarily shorter, it can’t be any longer, otherwise it’ll overflow. For this reason, the dynamic programming sets the target width a touch narrower than the paragraph. That way, it’s possible to both undershoot and overshoot, leading to better balance overall. As per the original article:

The browser aims to wrap each line sooner than the maximum limit of the text box. It wraps within the range, definitely after the magenta line, and definitely before the red line.

But if you subsequently justify all the way to the red line, the systematic overshoot will manifest itself as too wide inter-word space!
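
The arithmetic is easy to see with toy numbers (all values hypothetical, not measured from any browser): justification divides the leftover width among the inter-word gaps, so lines broken systematically short of the paragraph width get systematically wider spaces.

```python
# Justification stretches every inter-word gap by the leftover width
# divided by the number of gaps. All numbers here are made up.
paragraph_width = 600   # px, the "red line" the text is justified to
gaps = 9                # inter-word gaps on the line
natural_space = 4.0     # px per gap at the text's natural width

def justified_space(natural_line_width):
    extra = (paragraph_width - natural_line_width) / gaps
    return natural_space + extra

greedy_space = justified_space(590)  # greedy packs close to the edge
pretty_space = justified_space(570)  # "pretty" targets a touch narrower
print(greedy_space, pretty_space)
```

With these numbers the line broken to the narrower target ends up with roughly three times the extra space per gap, which is the inflation visible in the justified paragraphs above.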

WebKit devs, you are awesome for shipping this feature ahead of everyone else, please fix this small wrinkle such that I can make my blog look the way I had intended all along ;-)


Save the date!

1 Share
Save the date and tune in for Aspire Conf on March 23! A free livestream event. Discover Aspire and learn how it can transform the way you build and deploy your distributed apps and agents.

IoT Coffee Talk: Episode 300 - "IoT of Back!" (Celebrating 6 years of IoT Coffee Talk!!)

1 Share
From: Iot Coffee Talk
Duration: 58:31
Views: 3

Welcome to IoT Coffee Talk, where hype comes to die a terrible death. We have a fireside chat about all things #IoT over a cup of coffee or two with some of the industry's leading business minds, thought leaders and technologists in a totally unscripted, organic format.

This week Rob, Rick, Mark, Alistair, David, Bill, Anthony, Wienke, Oliver, Tom, Debbie, Pete, and Leonard jump on Web3 for a discussion about:

๐ŸŽถ ๐ŸŽ™๏ธ BAD KARAOKE! ๐ŸŽธ ๐Ÿฅ "Jessica", The Allman Brothers Band"
๐Ÿฃ IoT Coffee Talk celebrates 300 episodes and 6 years of uninterrupted ridiculousness!!
๐Ÿฃ What makes our show so amazing?
๐Ÿฃ Will AI be around longer than other tech fads?
๐Ÿฃ How much do we not care about our privacy? Is it a good thing or bad?
๐Ÿฃ What happens when you realize that AI doesn't forget?
๐Ÿฃ What is the risk of an AI hallucinating (lying or misrepresenting) YOU?
๐Ÿฃ Is SaaS dead? Is the developer dead thanks to AI?
๐Ÿฃ How do you leverage AI responsibly for coding and software development?
๐Ÿฃ Who will be liable for crap vibe code?
๐Ÿฃ If you don't catch the hallucination in your vibe code, should you get fired or Claude?
๐Ÿฃ The new philosopher of our time, Mediocrates, The Lazy and Mindless.
๐Ÿฃ Was Qualcomm's takeover of Arduino a good thing or bad? Why?
๐Ÿฃ Europe may force everyone to be responsible with IoT and AI. Find out how.

It's a great episode. Grab an extraordinarily expensive latte at your local coffee shop and check out the whole thing. You will get all you need to survive another week in the world of IoT and greater tech!

Tune in! Like! Share! Comment and share your thoughts on IoT Coffee Talk, the greatest weekly assembly of Onalytica and CBT tech and IoT influencers on the planet!!

If you are interested in sponsoring an episode, please contact Stephanie Atkinson at Elevate Communities. Just make a minimally required donation to www.elevatecommunities.org and you can jump on and hang with the gang and amplify your brand on one of the top IoT/Tech podcasts in the known metaverse!!!

Take IoT Coffee Talk on the road with you on your favorite podcast platform. Go to IoT Coffee Talk on Buzzsprout, like, subscribe, and share: https://lnkd.in/gyuhNZ62


HammerDB tproc-c on a small server, Postgres and MySQL

1 Share

This post has results for HammerDB tproc-c on a small server using MySQL and Postgres. I am new to HammerDB and still figuring out how to explain and present results, so I will keep this simple and just share graphs without explaining the results.

tl;dr

  • Modern Postgres is faster than old Postgres
  • Modern MySQL has large perf regressions relative to old MySQL, and they are worst at low concurrency for CPU-bound workloads. This is similar to what I see on other benchmarks.
  • Modern Postgres is about 2X faster than MySQL at low concurrency (vu=1) and when the workload isn't IO-bound (w=100). But with some concurrency (vu=6) or with more IO per transaction (w=1000, w=2000) they have similar throughput. Note that partitioning is used at w=1000 and 2000 but not at w=100.

Builds, configuration and hardware

I compiled Postgres versions from source: 12.22, 13.23, 14.20, 15.15, 16.11, 17.7 and 18.1.

I compiled MySQL versions from source: 5.6.51, 5.7.44, 8.0.44, 8.4.7, 9.4.0 and 9.5.0.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, and 32G of RAM. Storage is one NVMe device for the database using ext-4 with discard enabled. The OS is Ubuntu 24.04. More details on it are here.

For versions prior to 18, the config files are named conf.diff.cx10a_c8r32 and are as similar as possible; they are here for versions 12, 13, 14, 15, 16 and 17.

For Postgres 18 the config file is named conf.diff.cx10b_c8r32 and adds io_method='sync', which matches the behavior of earlier Postgres versions.

For MySQL the config files are named my.cnf.cz12a_c8r32 and are here: 5.6.51, 5.7.44, 8.0.4x, 8.4.x, 9.x.0.

For both Postgres and MySQL fsync on commit is disabled to avoid turning this into an fsync benchmark. The server has an SSD with high fsync latency.
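
For reference, the usual knobs for this (an assumption on my part; the linked config files above have the exact settings used) are synchronous_commit for Postgres and innodb_flush_log_at_trx_commit for InnoDB:

```ini
# Postgres: commit without waiting for the WAL flush
synchronous_commit = off

# MySQL/InnoDB: flush the redo log about once per second instead of
# fsync-ing at every commit (values 0 and 2 both skip the per-commit fsync)
innodb_flush_log_at_trx_commit = 2
```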

Benchmark

The benchmark is tproc-c from HammerDB. The tproc-c benchmark is derived from TPC-C.

The benchmark was run for several workloads:
  • vu=1, w=100 - 1 virtual user, 100 warehouses
  • vu=6, w=100 - 6 virtual users, 100 warehouses
  • vu=1, w=1000 - 1 virtual user, 1000 warehouses
  • vu=6, w=1000 - 6 virtual users, 1000 warehouses
  • vu=1, w=2000 - 1 virtual user, 2000 warehouses
  • vu=6, w=2000 - 6 virtual users, 2000 warehouses
The w=100 workloads are less heavy on IO. The w=1000 and w=2000 workloads are more heavy on IO.

The benchmark for Postgres is run by this script which depends on scripts here. The MySQL scripts are similar.
  • stored procedures are enabled
  • partitioning is used when the warehouse count is >= 1000
  • a 5 minute rampup is used
  • then performance is measured for 120 minutes
Results

My analysis at this point is simple -- I only consider average throughput. Eventually I will examine throughput over time and efficiency (CPU and IO).

On the charts that follow, the y-axis does not start at 0 to improve readability, at the risk of overstating the differences. The y-axis shows relative throughput. There might be a regression when the relative throughput is less than 1.0, and there might be an improvement when it is greater than 1.0. The relative throughput is:
(NOPM for some-version / NOPM for base-version)
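
In code, the metric is just a ratio (the NOPM values below are made up for illustration):

```python
# Relative throughput as plotted: NOPM (new orders per minute) for the
# version under test divided by NOPM for the base version.
def relative_throughput(nopm_some_version, nopm_base_version):
    return nopm_some_version / nopm_base_version

print(relative_throughput(45_000, 50_000))  # < 1.0: possible regression
print(relative_throughput(60_000, 50_000))  # > 1.0: possible improvement
```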

I provide three charts below:

  • only MySQL - base-version is MySQL 5.6.51
  • only Postgres - base-version is Postgres 12.22
  • Postgres vs MySQL - base-version is Postgres 18.1, some-version is MySQL 8.4.7

Results: MySQL 5.6 to 8.4

Legend:

  • my5651.z12a is MySQL 5.6.51 with the z12a_c8r32 config
  • my5744.z12a is MySQL 5.7.44 with the z12a_c8r32 config
  • my8044.z12a is MySQL 8.0.44 with the z12a_c8r32 config
  • my847.z12a is MySQL 8.4.7 with the z12a_c8r32 config
  • my9400.z12a is MySQL 9.4.0 with the z12a_c8r32 config
  • my9500.z12a is MySQL 9.5.0 with the z12a_c8r32 config

Summary

  • Perf regressions in MySQL 8.4 are smaller with vu=6 and wh >= 1000 -- the cases where there is more concurrency (vu=6) and the workload does more IO per transaction (wh=1000 & 2000). Note that partitioning is used at w=1000 and 2000 but not at w=100.
  • Perf regressions in MySQL 8.4 are larger with vu=1 and even more so with wh=100 (low concurrency, less IO per transaction).
  • Performance has mostly been dropping from MySQL 5.6 to 8.4. Based on other benchmarks, the problem is new CPU overhead at low concurrency.
  • While perf regressions in modern MySQL at high concurrency have been less of a problem on other benchmarks, this server is too small to support high concurrency.

Results: Postgres 12 to 18

Legend:

  • pg1222.x10a is Postgres 12.22 with the x10a_c8r32 config
  • pg1323.x10a is Postgres 13.23 with the x10a_c8r32 config
  • pg1420.x10a is Postgres 14.20 with the x10a_c8r32 config
  • pg1515.x10a is Postgres 15.15 with the x10a_c8r32 config
  • pg1611.x10a is Postgres 16.11 with the x10a_c8r32 config
  • pg177.x10a is Postgres 17.7 with the x10a_c8r32 config
  • pg181.x10b is Postgres 18.1 with the x10b_c8r32 config

Summary

  • Modern Postgres is faster than old Postgres



Results: MySQL vs Postgres

Legend:

  • pg181.x10b is Postgres 18.1 with the x10b_c8r32 config
  • my847.z12a is MySQL 8.4.7 with the z12a_c8r32 config

Summary

  • MySQL and Postgres have similar throughput for vu=6 at w=1000 and 2000. Note that partitioning is used at w=1000 and 2000 but not at w=100.
  • Otherwise Postgres is about 2X faster than MySQL








 
