
paint.net 5.1 beta (build 9056)


Just a few more changes and fixes before the stable release next month, including some tweaks to clipboard handling with respect to color management.

Change Log

Changes since 5.1 beta (build 9038):

  • Updated clipboard handling with respect to color management: instead of being converted to sRGB, PNGs copied to the clipboard now contain the image’s color profile. When pasting a PNG from the clipboard, the color profile is used if it’s available. For plugins using IClipboardService, there are now methods for including the color profile when copying, and obtaining the color profile when pasting.
  • Fixed Edit->Paste into New Layer and Layers->Import from File so they fill with transparent black instead of the secondary color when expanding the canvas size
  • Fixed some flickering in the toolbar when undoing certain commands
  • Slightly improved overall performance by switching to .NET 9’s System.Threading.Lock
  • Fixed a few small performance bugs with the new Direct2D Flip Mode code
  • Updated to .NET 9.0-rc2, which fixes a small visual glitch in window titlebars

Download and Install

This build is available via the built-in updater as long as you have opted in to pre-release updates. From within Settings -> Updates, enable “Also check for pre-release (beta) versions of paint.net” and then click on the Check Now button. You can also use the links below to download an offline installer or portable ZIP.


You can also download the installer here (for any supported CPU and OS); the same page offers offline installers, portable ZIPs, and deployable MSIs.






A Hacker’s Guide to Language Models: A Code-First Approach


In artificial intelligence, language models stand as towering achievements, symbolizing the remarkable progress we’ve made. Yet, for many, these models remain enigmatic, their inner workings shrouded in complexity. This essay aims to demystify language models, offering a hacker’s guide—a code-first approach to understanding and utilizing these powerful tools in practice.

This video is from Jeremy Howard.

Understanding Language Models

At their core, language models are algorithms capable of predicting the next word in a sentence or filling in missing words. This capability might seem simple at first glance, but it’s the foundation for a wide range of applications, from generating human-like text to improving natural language understanding systems.
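To make that concrete, here is a minimal Python sketch, assuming the Hugging Face transformers and torch packages are installed, that asks GPT-2 for its most likely next tokens (the prompt is an arbitrary example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small pre-trained causal language model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Score every token in the vocabulary as a candidate next word.
inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Take the logits at the final position and show the top 5 candidates.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode(token_id)), round(score.item(), 2))
```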

One of the most renowned examples of a language model is OpenAI’s GPT (Generative Pre-trained Transformer) series. These models have been trained on vast datasets, enabling them to generate coherent and contextually relevant text based on the input they receive. The magic of GPT lies in its ability to learn patterns and relationships between words, sentences, and even entire paragraphs from the data it was trained on.

The Code-First Approach

For hackers and developers eager to dive into the world of language models, a code-first approach is both enlightening and practical. This method involves directly interacting with language models through programming, allowing one to experiment with their capabilities firsthand.

One can start by playing with pre-trained models available through platforms like Hugging Face’s Transformers library. This library provides access to a multitude of models, including GPT-2 and many others, along with the tools needed to fine-tune these models for specific tasks.
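As a first experiment, the pipeline API makes prompt-based generation a few lines. This is a minimal sketch using GPT-2; any text-generation checkpoint on the Hub would work in its place:

```python
from transformers import pipeline

# Build a text-generation pipeline around a small pre-trained model.
generator = pipeline("text-generation", model="gpt2")

# Generate a continuation of the prompt; sampling makes output vary per run.
result = generator("A hacker's guide to language models", max_new_tokens=40)
print(result[0]["generated_text"])
```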

Fine-tuning is a critical concept in the world of language models. It involves adjusting a pre-trained model on a smaller, task-specific dataset. This process allows the model to adapt its knowledge to perform well on a particular task, whether it be sentiment analysis, question-answering, or text generation.
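To illustrate the workflow, here is a hedged fine-tuning sketch using the Trainer API; the model (distilbert-base-uncased), dataset (imdb), and hyperparameters are placeholder choices, assuming the transformers and datasets packages:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Start from a small pre-trained model with a fresh classification head.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

# Task-specific dataset: binary sentiment labels on movie reviews.
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

# Fine-tune briefly on a small subset to keep the example cheap to run.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()
```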

Practical Applications and Experiments

Language models are not just academic curiosities; they have practical applications that span various domains. For instance, they can be used to automate content creation, enhance chatbots, or even generate code. The possibilities are limited only by one’s creativity and understanding of these models.

Experimenting with language models can start with something as simple as generating text based on a prompt. However, as one becomes more familiar with these models, more complex projects become feasible. For example, one could train a model to write poetry, summarize articles, or even create a chatbot that mimics a historical figure’s speech patterns.
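For example, article summarization is already a one-liner with a pre-trained checkpoint; the model below is one common choice, not the only option:

```python
from transformers import pipeline

# A pre-trained summarization model; swap in any summarization checkpoint.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Language models are algorithms that predict the next word in a sequence. "
    "Trained on large text corpora, they can generate prose, answer questions, "
    "and condense long documents into short summaries with surprising fluency."
)
print(summarizer(article, max_length=40, min_length=10)[0]["summary_text"])
```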

Challenges and Considerations

While working with language models is exciting, it’s not without challenges. One must consider the ethical implications of generating text that could be misleading or harmful. Additionally, the computational resources required to train large models are significant, though fine-tuning smaller models or using cloud-based services can mitigate this issue.

Another consideration is the “black box” nature of deep learning models. Understanding why a model generates specific outputs can be challenging, making debugging and improvement an iterative process of hypothesis and experimentation.

Conclusion

Language models represent a fascinating intersection of linguistics and artificial intelligence. For hackers and developers willing to explore these models through a code-first approach, the opportunities for learning and innovation are boundless. By experimenting with pre-trained models, fine-tuning them for specific tasks, and considering the ethical implications of their use, one can unlock new frontiers in natural language processing and beyond.

In essence, the journey into the world of language models is not just about harnessing their power but also about understanding the nuances of human language through the lens of AI. As we continue to push the boundaries of what’s possible with these models, we also deepen our appreciation for the complexity and beauty of human communication.


Fine Tuning LLM Models – Generative AI Course


This video is from freeCodeCamp.org.

Learn how to fine-tune LLMs. This course covers fine-tuning with QLoRA and LoRA, as well as quantization, working with Llama 2, Gradient, and the Google Gemma model. This crash course includes both theoretical and practical instruction to help you understand how to perform fine-tuning; a minimal QLoRA sketch follows the chapter list below.

💻 Code: https://github.com/krishnaik06/Finetuning-LLM

✏ Course developed by @krishnaik06

⌨ (0:00:00) Introduction
⌨ (0:01:39) Quantization Intuition
⌨ (0:34:03) LoRA and QLoRA In-Depth Intuition
⌨ (0:56:26) Fine-Tuning with Llama 2
⌨ (1:20:35) 1-Bit LLM In-Depth Intuition
⌨ (1:37:33) Fine-Tuning with Google Gemma Models
⌨ (1:59:45) Building LLM Pipelines with No Code
⌨ (2:20:33) Fine-Tuning with Your Own Custom Data
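To ground the QLoRA idea: load the base model in 4-bit (the quantization step), then attach low-rank adapters so only a small fraction of weights train. The sketch below assumes the transformers, peft, and bitsandbytes packages and a CUDA GPU; the Llama 2 checkpoint name is illustrative (access to it is gated) and the hyperparameters are placeholders, not the course's exact settings:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantization: load the frozen base model in 4-bit NF4 (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # illustrative gated checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA: train small low-rank adapter matrices on the attention projections
# instead of updating the full (frozen, quantized) weights.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

From here the adapted model trains like any other transformers model, for example with Trainer on a task-specific dataset.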


Daily Reading List – October 17, 2024 (#421)


There’s a handful of career-oriented content in today’s list. You might always be thinking about your next step, or never thinking about it, but it’s not a bad idea to assess your current state every now and then!

[article] More attrition awaits overworked IT teams. Don’t just focus on employee satisfaction when the job market is hot. Continually try to build a good environment so that folks want to stay.

[blog] How Shopify improved consumer search intent with real-time ML. Why did Shopify go for near-real-time embeddings for use with ML models versus a simpler batch approach? Good details here.

[blog] Management or IC: What’s Your Next Move? There are some good myths and realities in this post. I’ve definitely learned that management is a new job, not a variation of IC work.

[blog] Safer with Google: Advancing Memory Safety. Here are some details about how we’re looking at memory-safe languages and reducing risk.

[article] Terraform Beta Supports Multicloud, Complex Environments. This looks at their Stacks product, which was announced last year. It’s now ready for use.

[blog] New in NotebookLM: Customizing your Audio Overviews and introducing NotebookLM Business. The releases keep coming for this viral hit. Offering the Business version of this product is smart.

[blog] gRPC: 5 Years Later, Is It Still Worth It? gRPC is widely used, though I wouldn’t call it mainstream. Still, support for gRPC as an available communication protocol in products is growing.

[article] Is Your Career Heading in the Right Direction? No idea, but I’m enjoying myself. It’s always good to check in with yourself (and trusted folks) to see if you’re satisfied and growing.

[blog] Get Started with Chrome Built-in AI: Access Gemini Nano Model locally. A lot of potential here! Learn how to tap into the built-in AI model for Chrome.

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email.




MassTransit v8.3.0 - RabbitMQ ReplyTo Support

From: Chris Patterson
Duration: 4:23

With MassTransit v8.3.0, it is now possible to configure the ReplyTo endpoint using RabbitMQ.

RabbitMQ provides a default ReplyTo address for every broker connection that can be used to send messages directly to the connection without creating a temporary queue. With the new request client, the bus endpoint is no longer needed, and MassTransit sends responses using the connection's ReplyTo address.
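MassTransit is a .NET library, but the broker feature it builds on here, RabbitMQ's direct reply-to pseudo-queue, is easy to see in isolation. This is a minimal Python sketch using the pika client against a local broker; the rpc_queue name is a hypothetical server queue, and a responder must be consuming from it for a reply to come back:

```python
import pika

# Connect to a local RabbitMQ broker (assumed to be running on localhost).
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()

def on_response(ch, method, props, body):
    print("reply:", body.decode())
    ch.stop_consuming()

# Consuming from the amq.rabbitmq.reply-to pseudo-queue activates direct
# reply-to on this channel; no temporary queue is ever declared.
channel.basic_consume(
    queue="amq.rabbitmq.reply-to",
    on_message_callback=on_response,
    auto_ack=True,  # direct reply-to requires auto-ack
)

# Publish a request whose reply_to routes the response straight back
# over this same connection.
channel.basic_publish(
    exchange="",
    routing_key="rpc_queue",  # hypothetical request queue
    properties=pika.BasicProperties(reply_to="amq.rabbitmq.reply-to"),
    body=b"ping",
)

channel.start_consuming()
conn.close()
```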

MassTransit
Documentation: https://masstransit.io/
Discord: https://discord.gg/rNpQgYn

Connect on Twitter (X):
https://twitter.com/phatboyg


SE Radio 638: Nick Tune and Jean-Georges Perrin on Architecture Modernization


Nick Tune and Jean-Georges Perrin join host Giovanni Asproni to talk about their proposed approach to modernizing legacy systems. The episode starts with some high-level perspective to set context for the approach described in their book, Architecture Modernization (Manning, 2024). From there, the discussion turns to important details, including criteria for deciding which aspects to revisit; some of the activities, processes, and tools; and the importance of data engineering in modernization efforts. Nick and Jean-Georges describe how to successfully implement an architecture-modernization effort, and how to fit that work with the teams' other priorities. The episode finishes with some warnings about the typical risks associated with modernizing a legacy system, and suggestions on how to mitigate them.

This episode is sponsored by QA Wolf.





Download audio: https://traffic.libsyn.com/secure/seradio/638-nick-tune-jean-georges-perrin-architecture-modernization.mp3?dest-id=23379