Content Developer II at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Twitter is officially X.com now

An image showing the former Twitter logo with the X logo on its head
The Verge

The social network formerly known as Twitter has officially adopted X.com for all its core systems. That means typing twitter.com in your browser will now redirect to Elon Musk’s favored domain, or at least it should: at the time of publication, we’re seeing a mix of results depending on browser choice and whether you’re logged in.

A message also now appears at the bottom of the X login page that reads, “We are letting you know that we are changing our URL, but your privacy and data protection settings remain the same.”

The domain transition has been one of the more awkward aspects of Elon Musk’s move to rebrand the company. Although many aspects of X migrated to the new branding long ago — including its official account, its mobile apps,...


How Technical Debt Can Impact Innovation and How to Fix It

New research from vFunction reveals architectural complexity as the leading culprit behind technical debt.


OverflowAI and the holy grail of search

Product manager Ash Zade joins the home team to talk about the journey to OverflowAI, a GenAI-powered add-on for Stack Overflow for Teams that’s available now. Ash describes how his team built Enhanced Search, the problems they set out to solve, how they ensured data quality and accuracy, the role of metadata and prompt engineering, and the feedback they’ve gotten from users so far.

5 easy tips to improve your personal website performance


If you’re a developer, you need a personal website. While billionaire-owned, algorithm-based social media platforms arbitrarily decide what people should and should not see on their timelines, there’s no better time for you to carve out your own cozy corner on the web and own your content.

Why you need a personal website

Your personal website is your personal showcase: a one-stop-shop to show off your work, skills, personality, and ✨aesthetic✨. You could use your personal website as a place to collect what you’ve learned and what you’ve built over the years, whether that’s through traditional blogging or creating interactive demo pages. Or you could use it to build community through shared interests, to land that exciting new job, as a place to improve your refactoring skills, as an archive for your weekly newsletter, to test-drive Test Driven Development, or to build a proof-of-concept of something that really excites you.

Your personal website is also a great place to use as a playground for new and experimental features on the web, to try out new UI component frameworks or CSS methodologies, or to stretch the boundaries of what’s possible in a browser.

Personal websites are low-risk, high-reward

Think of your personal website as a super low-risk test environment where you can build anything you ever dreamed of without having to worry about deadlines, stakeholder management, or client feedback.

And because a personal website is usually low-risk (i.e. there’s no P0 panic if you find a bug or it goes down) it’s also a great tool to help you learn about web performance. Through using a variety of tools such as Google Lighthouse, built-in browser dev tools, or an Application Performance Monitoring tool like Sentry, you can use your personal website to identify common performance issues and experiment with different fixes to see what gets the best results. What’s more, you can transfer all that learning to the things you build in your day job. And if you are using your personal website as an online resume, you’ll make a better first impression if it loads quickly and feels fast to use: win-win. (As a hiring manager in the past, I definitely inspected the DOM and the network tab of candidate websites.)

If you don’t yet have a personal website, go make one! It could be as simple as a single page of HTML deployed to a CDN. And then come back to check out these five easy tips for speeding up your personal website performance that you can apply to any website, project, or product.

1. Serve static HTML as much as possible

It’s too easy to render pages on the server in 2024. Most modern JavaScript-based frameworks come with pre-built adapters that let you deploy and server-side render (SSR) your websites on demand on your preferred hosting platform. We also have edge-rendering and edge functions that allow us to intercept HTTP requests at the edge (the closest server location to the request) to add even more dynamic content to our pages based on geolocation, cookies, and more. But all of this comes at a cost — most notably to performance and your Time to First Byte (TTFB): the time it takes for the browser to receive the first byte of content from the server. (And let’s not forget the ongoing, and sometimes unexpected, monetary costs of running SSR.)

A brief history of the web

The web moves in cycles. In the 1990s the web was mainly static, delivering plain HTML documents that were stored on servers somewhere in the world; this was the Read Things era. As the web became a place where you could now do things (such as sign up and log in and CRUD), SSR — building pages on the server before sending them back to the browser — was necessary. In the early 2010s when JavaScript was more widely adopted, client-side-rendering (CSR) — sending a blank or skeleton HTML document and then building the page content in the browser — became the norm. The issue was that people still needed to read things on the internet as well as do things. And with the rise in SSR and CSR, reading things started to feel slow. In the mid-2010s, the modern web offered up a modern alternative: static site generators.

Static site generators make fast sites

Static site generators (or early “front end frameworks”) provided developers with a way to build a folder full of HTML files, each populated with the required page data at the time of build. Developers could now create tens of thousands of static HTML pages, stick them on a Content Delivery Network (CDN), and not have to worry about maintaining servers, scaling for traffic, or security policies. This was the beginning of Jamstack.

Fast-forward ten years, and “Jamstack” may be considered dead as a concept, but the principles of serving cached static content from a CDN for secure and performant websites still hold true. Serving static HTML files is much more performant than generating web pages on demand on a server for every request. This is because the server doesn’t need to do anything apart from retrieve your HTML document from the cache and send it back. No business logic, no database queries, no delays. Of course, there are times when you may need some level of dynamic page content. But on a personal website, you probably don’t need SSR.

You probably don’t need SSR

What has been interesting to observe is how the rise in popularity of “full stack front end frameworks” such as Next.js has started to promote SSR methods and even edge-rendering over serving static content. Back in the mid-2010s, Next.js was originally marketed as yet another static site generator, encouraging developers to use dynamic page generation sparingly. Now, though, many blog and personal website templates built on frameworks like Next.js come with SSR as the default page behavior. But content-based websites, which most personal websites are, don’t need SSR. Personal websites usually involve reading things, not doing things.

If you’re currently using SSR on your personal website, ask yourself if you really need it. By pre-building and serving pages statically, I guarantee you’ll see performance gains. Take, for example, a case study from my own personal website. Over the years I experimented so enthusiastically with Edge Functions to add dynamic elements to my static site that I pushed my p75 TTFB up to almost 4 seconds. I was devastated to learn this. Read how I improved the TTFB by 80%; if you want a spoiler: I just built the pages statically again.
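If your site is built on Next.js, for instance, here’s a minimal sketch of the change, assuming a recent Next.js version that supports static export:

    // next.config.js: a minimal sketch of opting out of SSR entirely,
    // assuming a Next.js version (13.3+) that supports static export.
    // The build pre-renders every page to plain HTML in an `out/` folder
    // that you can push straight to a CDN.
    module.exports = {
      output: "export",
    };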

2. Optimize images

Even if it’s just a headshot somewhere on your “about” page or a meme on your homepage, you’re probably using images on your personal website. Images are everywhere on the web. And they need to be optimized. But what do we mean by optimizing images?

Serve images in next-gen formats

If you’ve analyzed your personal website performance using Google Lighthouse, you might have been given the advice to “serve images in next-gen formats”. Next-gen (next generation) image formats are file types that use modern image compression algorithms, resulting in smaller image file sizes. This, in turn, results in better web page performance given the web browser has fewer bytes to download per image. Instead of serving PNG or JPEG image file formats, it is recommended to convert those to webp or avif file formats. Additionally, if you want to serve an animated gif, convert it to a webp file, which will preserve the animation while reducing the file size by almost 90%. Here’s a throwback to when I discovered this in 2021.

WebP: 167 KB, still animated; the original GIF is 1.2 MB.

How to convert your images to next-gen formats

Depending on your preferences and how many images you need to optimize, you can use offline tools, build-time tools, third-party services, or image manipulation APIs to convert the images on your personal website to modern formats. Also bear in mind that not all browsers support avif and webp image formats just yet, so it’s important to use the HTML <picture> element to let the browser choose the most appropriate image file format to download and display.
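As one build-time approach, here’s a minimal sketch using the sharp npm package (an assumption; any of the tools above will do) to generate webp and avif variants of your images:

    // convert-images.js: a minimal sketch, assuming `npm install sharp`.
    // Emits webp and avif variants next to each source image; wire the
    // output into an HTML <picture> element so the browser can choose.
    const sharp = require("sharp");

    async function convert(file) {
      await sharp(file).webp({ quality: 80 }).toFile(file.replace(/\.\w+$/, ".webp"));
      await sharp(file).avif({ quality: 50 }).toFile(file.replace(/\.\w+$/, ".avif"));
    }

    // Animated GIFs can become animated webp files, keeping the motion
    // at a fraction of the file size.
    async function convertGif(file) {
      await sharp(file, { animated: true })
        .webp({ quality: 80 })
        .toFile(file.replace(/\.gif$/, ".webp"));
    }

    // Placeholder file names for illustration only.
    convert("headshot.png").then(() => convertGif("meme.gif"));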

Lazar guides you through all the low-effort image optimization tips you need to keep your personal website in good shape, with some bonus advice on using Sentry to monitor the performance impact of your image resources.

3. Use system fonts

It’s too easy to use Google Fonts. Choose a font and slap a link to the font file in the head of your pages. Done. The hardest part is deciding which fonts to use. And I get it. Fancy fonts look great. But do you really need custom fonts? Like really, really? You could get a huge performance boost by using system fonts. No downloading, no layout shifts, no flashes of unstyled content.

If you do want to use non-system fonts, it’s recommended to serve font files from the same server as your website, rather than asking the browser to fetch them at runtime from a third-party service such as Google. While you can download font files from the Google Fonts website and serve them yourself instead of linking to them, you’ll still have to convert the provided ttf font files to the file formats required for the web (woff and woff2). In the past, I’ve always used the Font Squirrel Webfont Generator to do this, but in 2024 this feels hacky. In my experience, Font Squirrel tends to do weird things with variable font files (which you should also use to improve your website performance).

I’m not sure why we haven’t solved the font file format problem yet. And because the best network request is the one that’s never made, why not scrap custom fonts altogether and just use system fonts? You may fret that your website will look different on macOS, Linux, and Windows, but maybe that’s the fun of it. And system fonts are no longer limited to just Times New Roman and Arial. Modern Font Stacks shows just how good system fonts can look in context.
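As a minimal sketch (the exact stack below is an assumption; Modern Font Stacks has curated alternatives), you can register a system stack once as a CSS custom property and reference it everywhere with font-family: var(--font-body):

    // A minimal sketch: expose a system font stack as a CSS custom
    // property, then use `font-family: var(--font-body)` in your styles.
    // Nothing is downloaded, so there is nothing to block rendering and
    // no flash of unstyled text.
    const systemStack =
      "system-ui, -apple-system, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif";
    document.documentElement.style.setProperty("--font-body", systemStack);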

Lazar goes deeper into the web font conversation in Web Fonts and the Dreaded Cumulative Layout Shift.

4. Remove render-blocking resources

In 2022, I refactored my personal website from Next.js to Eleventy. The decision came down to knowing exactly what files were being delivered to the browser so that I could keep tabs on performance. With Next.js, I noticed my website was making over 30 requests for separate chunked JavaScript files on a static page with no interactivity. It didn’t make sense. The same page created with Eleventy served no JavaScript files, which is correct, because the page didn’t need any JavaScript. All this is to say that as a responsible developer, you should be aware of what you’re sending down the wire to your users, especially if it ends up being a render-blocking resource. I made a YouTube short about this in 2022 to show the difference in what the two frameworks delivered to the browser.

A render-blocking resource is exactly what it sounds like: a downloadable resource that blocks the render (or “first paint”) of a web page, delaying the moment a user first sees something in the browser. The more render-blocking resources that stack up on top of each other, the longer your page takes to load, and the worse your First Contentful Paint (FCP) web vital score will be. Google stipulates that to provide a good user experience, web pages must have an FCP of 1.8 seconds or less.
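You can check your own FCP in the browser console with the standard PerformanceObserver API; a minimal sketch:

    // A minimal sketch: log First Contentful Paint for the current page.
    // `buffered: true` replays paint entries that fired before this ran.
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        if (entry.name === "first-contentful-paint") {
          console.log(`FCP: ${Math.round(entry.startTime)} ms`);
        }
      }
    }).observe({ type: "paint", buffered: true });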

There are render-blocking resources we can’t escape, such as CSS (both inline CSS and CSS contained in stylesheets), but your website may have fallen victim to other render-blocking resources loaded via <script> tags in the <head> of your website pages. Take, for example, the time the SyntaxFM team at Sentry discovered a 4kB render-blocking script that added around 500ms to each page load on average.

It may be that some of your dependencies need to be render-blocking, and that’s fine. But by considering the impact of render-blocking resources and being mindful of how and when you load JavaScript every step of the way, you’ll most likely be able to keep your Core Web Vitals scores looking good. Google offers more advice in this article on how to eliminate render-blocking resources.

5. Use less JavaScript

I am once again going to reference this evergreen piece of advice from Cassidy Williams in an article on CSS Tricks: “Your websites start fast until you add too much to make them slow.” When you reach for that next feature on your personal website, ask yourself: can you do it without JavaScript?

Perhaps you want to build a contact form. You don’t need JavaScript for a form. How about using native HTML form functionality, which includes HTML5 field validation? Or why not go wild and just include a mailto link?
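Here’s a minimal sketch, written as an Eleventy JavaScript template (the framework mentioned in the previous tip); the form’s action URL is a placeholder assumption, and no client-side JavaScript is shipped:

    // contact.11ty.js: a minimal sketch of an Eleventy JavaScript template.
    // The rendered form relies purely on native HTML5 validation
    // (`required`, `type="email"`), enforced by the browser with zero JS.
    // The action URL below is a placeholder assumption.
    module.exports = () => `
      <form action="/api/contact" method="POST">
        <label for="email">Your email</label>
        <input id="email" name="email" type="email" required>
        <label for="message">Message</label>
        <textarea id="message" name="message" required></textarea>
        <button type="submit">Send</button>
      </form>`;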

You probably don’t need JavaScript to achieve many modern UI animations. CSS is super-powerful these days, and I’m always looking for opportunities to use some of the newer features, such as scroll-snap-type.

You can even almost create a CSS-only masonry layout. The technology is still experimental and currently available in Safari Technology Preview, but let’s hope it’s coming soon.

And if you’re thinking about rendering some UI components using JavaScript on the client — can you build those components using static HTML, instead?

Challenge yourself to add less.


Why Should Businesses Adopt RAG and Migrate from LLMs?



In this blog post we are going to discuss why you should migrate your product or startup project from plain LLMs to RAG. Adopting RAG empowers businesses to leverage external knowledge, enhance accuracy, and create more robust AI applications. It’s a strategic move toward building intelligent systems that bridge the gap between generative capabilities and authoritative information. These are the topics we’ll cover:

  • A brief history of AI
  • What are large language models (LLMs)?
  • Limitations of LLMs
  • How can we incorporate domain knowledge?
  • What is Retrieval Augmented Generation (RAG)?
  • Robust retrieval for RAG apps

Once we are done with these concepts, I hope to have convinced you to adopt RAG in your project.

 

Brief History of AI

The concept of intelligent systems dates back to the 1950s, when artificial intelligence was introduced as a field of computer science. In 1959, machine learning was introduced as a subset of AI. Around 2017, deep learning took over as a way of using neural networks to process data and make decisions. From 2021 (the birth of generative AI) until now, we have been in the era of generative AI, which creates responses (images, text, audio, or video) from prompts (queries) based on the data it has been trained on. In summary, generative AI systems are large language models (LLMs) capable of generating coherent, contextual responses.

 


What are Large Language Models (LLMs)?

An LLM is a model so large that it achieves general-purpose language understanding and generation. For example, an LLM trained for sentiment analysis should, when prompted with a review, produce a positive or negative sentiment as its output.


On Azure, for example, we have various models that can be deployed via Azure OpenAI or Azure AI Studio and made readily available to consume, fine-tune, and even train using different parameters.

 


Generative Pre-trained Transformer (GPT) models, which are among the models available on Azure, are trained on the next-word prediction task.

 


Limitations of LLMs

LLMs face various limitations, including:

  • Bias and hallucination
  • Unforeseen consequences, i.e., harmful information

Some of these have been addressed. Microsoft, for example, adheres to strong ethical guidelines and policies for responsible AI practices, which helps resolve the issue of unforeseen consequences, while fine-tuning and customization help reduce the bias and hallucinations of LLMs.

The biggest limitation of all LLMs, however, is outdated public knowledge and the absence of internal, domain-specific knowledge.


To solve this, we must incorporate domain knowledge into the model using the techniques below.

 

Incorporating Domain Knowledge

Incorporating domain knowledge into LLMs is crucial for enhancing their performance and making them more contextually relevant. Two common techniques are:

  • Prompt engineering – relies on in-context learning: LLMs like GPT-3 are trained on vast amounts of text data to predict the next token based on the preceding text, so they can pick up new patterns from examples supplied directly in the prompt.
  • Fine-tuning – involves training an LLM on a smaller dataset specific to a particular domain. By fine-tuning, the model adapts its pre-learned knowledge to the nuances of the target domain.

Neither in-context learning nor fine-tuning, however, addresses the issue of outdated public knowledge. This brings us to Retrieval Augmented Generation (RAG).

 

What is Retrieval Augmented Generation (RAG)?

RAG is based on the concept of an LLM learning new facts temporarily. RAG with Azure OpenAI allows developers to use supported AI chat models that can reference specific sources of information to ground their responses. Adding this information allows the model to reference both the specific data provided and its pretrained knowledge to provide more effective responses.

 


Azure OpenAI enables RAG by connecting pretrained models to your own data sources. Azure OpenAI on your data uses the search capability of Azure AI Search to add the relevant data chunks to the prompt. Once your data is in an AI Search index, Azure OpenAI on your data goes through the following steps:

  • Receive the user prompt.
  • Determine the relevant content and intent of the prompt.
  • Query the search index with that content and intent.
  • Insert the retrieved chunks into the Azure OpenAI prompt, along with the system message and the user prompt.
  • Send the entire prompt to Azure OpenAI.
  • Return the response and data references (if any) to the user.

This approach enables LLMs to learn new information quickly and efficiently, beating fine-tuning, which is both costly and time-intensive and should only be used for use cases where it’s truly necessary.
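To make the flow concrete, here is a minimal sketch in JavaScript that calls the two underlying REST APIs directly. The endpoint URLs, index and deployment names, the "content" field, and the api-version values are all assumptions for illustration; check the current Azure documentation for the values your resources support.

    // A minimal sketch of the retrieve-then-generate flow described above.
    // Assumes Node 18+ (global fetch); all names below are placeholders.
    const SEARCH_ENDPOINT = "https://my-search.search.windows.net"; // assumed
    const OPENAI_ENDPOINT = "https://my-openai.openai.azure.com"; // assumed

    async function answerWithRag(userPrompt) {
      // Steps 1-3: query the search index with the user's prompt.
      const searchRes = await fetch(
        `${SEARCH_ENDPOINT}/indexes/my-index/docs/search?api-version=2023-11-01`,
        {
          method: "POST",
          headers: { "Content-Type": "application/json", "api-key": process.env.SEARCH_KEY },
          body: JSON.stringify({ search: userPrompt, top: 3 }),
        }
      ).then((r) => r.json());

      // Step 4: insert the retrieved chunks into the prompt as grounding context.
      const context = searchRes.value.map((doc) => doc.content).join("\n---\n");

      // Step 5: send the grounded prompt to an Azure OpenAI chat deployment.
      const chatRes = await fetch(
        `${OPENAI_ENDPOINT}/openai/deployments/my-gpt-deployment/chat/completions?api-version=2024-02-01`,
        {
          method: "POST",
          headers: { "Content-Type": "application/json", "api-key": process.env.OPENAI_KEY },
          body: JSON.stringify({
            messages: [
              { role: "system", content: `Answer using only this context:\n${context}` },
              { role: "user", content: userPrompt },
            ],
          }),
        }
      ).then((r) => r.json());

      // Step 6: return the grounded response to the user.
      return chatRes.choices[0].message.content;
    }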

 

Robust Retrieval for RAG Apps

To achieve robust retrieval for RAG apps, we must first consider the importance of the search step in the flow above. The key point to keep in mind is that responses from RAG apps are only as good as the retrieved data.


We can also achieve more robust retrieval in our RAG apps by incorporating vector-based search and vector databases.
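As a sketch of what that can look like with Azure AI Search (reusing the placeholder constants from the example above; the vector field name and api-versions are again assumptions), you first embed the user's query, then ask the index for the nearest chunks:

    // A minimal sketch of vector retrieval, reusing the constants above.
    // Assumes an index with a vector field named "embedding" and an Azure
    // OpenAI embeddings deployment; all names are placeholder assumptions.
    async function vectorSearch(userPrompt) {
      // Turn the prompt into an embedding vector.
      const embRes = await fetch(
        `${OPENAI_ENDPOINT}/openai/deployments/my-embedding-deployment/embeddings?api-version=2024-02-01`,
        {
          method: "POST",
          headers: { "Content-Type": "application/json", "api-key": process.env.OPENAI_KEY },
          body: JSON.stringify({ input: userPrompt }),
        }
      ).then((r) => r.json());

      // Ask the index for the nearest chunks by vector similarity.
      const searchRes = await fetch(
        `${SEARCH_ENDPOINT}/indexes/my-index/docs/search?api-version=2023-11-01`,
        {
          method: "POST",
          headers: { "Content-Type": "application/json", "api-key": process.env.SEARCH_KEY },
          body: JSON.stringify({
            vectorQueries: [
              { kind: "vector", vector: embRes.data[0].embedding, fields: "embedding", k: 5 },
            ],
          }),
        }
      ).then((r) => r.json());

      return searchRes.value;
    }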

 



Learn How to Migrate Windows Servers to Azure

From: Microsoft Developer
Duration: 7:27

Migrating your Windows servers to Azure provides a flexible, secure and scalable infrastructure that enables agility, cost efficiency and innovation.

In this episode of the Azure Enablement Show, Aaron and Priyanka discuss the benefits of server migration, and Priyanka shares a variety of free resources from Microsoft to help you learn the skills you’ll need to be successful.

Resources
• CLOUD SKILLS CHALLENGE: Windows Server Hybrid Administrator https://aka.ms/azenable/161/01
• Microsoft Azure Virtual Training Day: Migrate and Secure Windows Server and SQL Server Workloads https://aka.ms/azenable/161/02
• Microsoft Azure Virtual Training Days: Fundamentals https://aka.ms/azenable/161/03
• APPLIED SKILLS: Secure storage for Azure Files and Azure Blob Storage https://aka.ms/azenable/161/04
• APPLIED SKILLS: Configure secure access to your workloads using Azure networking https://aka.ms/azenable/161/05
• APPLIED SKILLS: Deploy and configure Azure Monitor https://aka.ms/azenable/161/06
• Blog: Maximize your business potential: Migrate your Windows Server to Azure with expert resources https://aka.ms/azenable/161/07
• Exam AZ-800: Administering Windows Server Hybrid Core Infrastructure https://aka.ms/azenable/161/08
• Explore more cloud enablement resources! https://www.azure.com/enablement

Related episodes
• Overview of Cloud Adoption Framework for Azure https://aka.ms/azenable/adopt-intro1

• What’s new in the Well-Architected Framework https://aka.ms/azenable/143

Chapters
0:00 Introduction
0:46 Benefits of migration
1:57 More than a technical shift
2:37 Skilling resources
3:17 Cloud Skills Challenges
4:37 AZ-800 Certification
5:02 Virtual Training Days
6:28 More Resources
