Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
152180 stories · 33 followers

Starbucks cuts tech jobs as new CTO reshapes organization

1 Share
Starbucks is cutting an unspecified number of tech jobs. (GeekWire File Photo)

Starbucks is cutting jobs in its technology organization, restructuring the team under a new chief technology officer who joined the coffee giant from Amazon four months ago.

Several affected employees posted about the cuts on LinkedIn on Tuesday afternoon, including people in program and product management and other technology-related roles. Starbucks declined to comment, and the number of people impacted was not immediately clear.

The Seattle Times reported on the cuts earlier today, citing an internal message in which the company told employees it was “making structural changes to move faster, sharpen focus, and ensure we are set up to deliver on our most important priorities.”  

Anand Varadarajan joined Starbucks as chief technology officer in January after 19 years at Amazon, where he most recently ran tech and supply chain for its global grocery business. 

The restructuring comes as Starbucks pushes ahead with a broader turnaround under CEO Brian Niccol, who joined in 2024. The turnaround includes a series of technology initiatives, from an AI-powered drink-ordering assistant to an algorithm that manages mobile order timing.

The cuts appear to be unrelated to the company’s Nashville expansion. Following up on a prior announcement, Starbucks said Tuesday that it will invest $100 million in the new corporate office in Tennessee that will eventually employ up to 2,000 people.

Read the whole story
alvinashcraft
12 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Azure SDK Release (April 2026)

Thank you for your interest in the new Azure SDKs! We release new features, improvements, and bug fixes every month. Subscribe to our Azure SDK Blog RSS Feed to get notified when a new release is available.

You can find links to packages, code, and docs on our Azure SDK Releases page.

Release highlights

Cosmos DB 4.79.0

The Java Cosmos DB library includes a critical security fix for a Remote Code Execution (RCE) vulnerability (CWE-502). Java deserialization was replaced with JSON-based serialization in CosmosClientMetadataCachesSnapshot, AsyncCache, and DocumentCollection, eliminating the entire class of Java deserialization attacks. This release also adds support for N-Region synchronous commit, a Query Advisor feature, and CosmosFullTextScoreScope for controlling BM25 statistics scope in hybrid search queries.
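The reason swapping Java deserialization for JSON closes the RCE class: a native deserializer can be coerced into executing code while decoding, whereas a JSON parser only ever produces plain data. A Python analogy of the pattern (illustrative only, not the SDK's actual Java code):

```python
import json
import pickle

class Exploit:
    # pickle invokes __reduce__ while deserializing, so a crafted
    # payload can make loads() call an arbitrary function.
    def __reduce__(self):
        return (print, ("code ran during deserialization!",))

payload = pickle.dumps(Exploit())
# pickle.loads(payload) would execute print(...) -- the CWE-502 pattern.

# JSON parsing can only yield dicts, lists, strings, numbers, booleans,
# and None, so there is no code path for an attacker to trigger.
snapshot = json.loads('{"collection": "metadataCache", "entries": 42}')
print(type(snapshot).__name__, snapshot["entries"])
```

The same trade-off motivates the change in CosmosClientMetadataCachesSnapshot and the other affected classes: JSON carries the data without carrying executable behavior.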

AI Foundry 2.0.0

The Azure.AI.Projects NuGet package ships its 2.0.0 stable release with significant architectural changes. Evaluations and memory operations moved to separate Azure.AI.Projects.Evaluation and Azure.AI.Projects.Memory namespaces. Many types were renamed for consistency, including Insights → ProjectInsights, Schedules → ProjectSchedules, Evaluators → ProjectEvaluators, and Trigger → ScheduleTrigger. Boolean properties now follow the Is* naming convention, and several types were made internal.

AI Agents 2.0.0

The Java Azure AI Agents library reaches general availability with version 2.0.0. This release includes breaking changes to improve API consistency:

  • Several enum types were converted from standard Java enums to ExpandableStringEnum-based classes.
  • *Param model classes were renamed to *Parameter.
  • MCPToolConnectorId was renamed to McpToolConnectorId for consistent casing.
  • A new convenience overload for beginUpdateMemories was added.

Initial stable releases

Initial beta releases

Release notes

The post Azure SDK Release (April 2026) appeared first on Azure SDK Blog.


RAG In Detail


In my previous post I walked through a RAG example but glossed over the details. In this post I’ll back up and provide the details.

The key steps in RAG are:

  • Load the data
  • Split the text into smaller chunks to fit within context limits
  • Create a Document object
  • Embed the document in vectors that represent semantic meaning
  • Store the document, typically in a vector store: a database designed to store embeddings and provide fast semantic retrieval
  • Invoke a retriever to query the back end to return the most relevant Document object
  • Create a prompt for the LLM

Let’s walk through the steps shown in the previous post with these in mind.

Loading the document

First, we need to identify and load the documents. In our case, this consists only of a single text file with an excerpt from Romeo and Juliet. In most real-world scenarios you’ll have multiple data sources.

from langchain_community.document_loaders import TextLoader
loader = TextLoader("RomeoAndJuliet.txt", encoding="utf-8")
docs = loader.load()

Notice that we are using the langchain_community document loader to do the text loading. LangChain will be the principal framework we’ll be working with, and it can load many types of data.

Splitting the text

We saw how to chunk that data in the previous post. We begin by using a text splitter to break large text into overlapping chunks using token-based splitting (not character-based). In our case, we will set each chunk to about 1,000 tokens with 200 tokens of overlap. The overlap ensures that meaning spanning a chunk boundary isn’t lost.

from langchain.text_splitter import RecursiveCharacterTextSplitter 
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    encoding_name='cl100k_base',
    chunk_size=1000,
    chunk_overlap=200
)
chunks = loader.load_and_split(text_splitter)

The cl100k_base encoding we pass in is the tokenizer used by OpenAI’s embedding and recent chat models, so the splitter counts tokens the same way the embedding model will. The 200-token overlap prevents losing meaning at the boundaries of the chunks and helps embeddings preserve context.

We use a recursive text splitter because it splits text intelligently, splitting by paragraphs when possible, then by sentences if the paragraphs are too big, then by words and finally by characters.
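To make the chunking behavior concrete, here is a toy sliding-window splitter. It treats whitespace-separated words as stand-ins for cl100k_base tokens (the real splitter counts actual tokens and prefers paragraph and sentence boundaries):

```python
def chunk_tokens(tokens, chunk_size, overlap):
    """Split a token list into chunks of chunk_size, with overlap tokens
    shared between consecutive chunks (a simplified view of the splitter)."""
    step = chunk_size - overlap
    return [tokens[i:i + chunk_size]
            for i in range(0, len(tokens), step)
            if tokens[i:i + chunk_size]]

words = "But soft what light through yonder window breaks".split()
for chunk in chunk_tokens(words, chunk_size=4, overlap=2):
    print(chunk)
```

Because consecutive chunks share `overlap` tokens, a phrase that straddles a boundary still appears whole in at least one chunk.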

Embedding in a vector store

from langchain_openai import OpenAIEmbeddings

embedding_model = OpenAIEmbeddings(model="text-embedding-ada-002")

The embedding_model knows how to take text, send it to OpenAI, and get back a vector embedding (a list of numbers). Each chunk you pass into Chroma (see below) will be embedded using this model.
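Conceptually, an embedding maps text to a fixed-length list of numbers. This toy hashed bag-of-words embedder (my own stand-in, nothing like the real model) shows the shape of the idea; text-embedding-ada-002 instead returns 1536 floats that encode semantic meaning:

```python
import zlib

def toy_embed(text, dim=8):
    """Map text to a fixed-length vector by hashing words into buckets.
    Unlike a real embedding model, this only captures word overlap,
    not meaning -- but the output shape is the same kind of thing."""
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[zlib.crc32(word.encode()) % dim] += 1.0
    return vec

print(toy_embed("wherefore art thou romeo"))  # a list of 8 numbers
```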

Our next task is to build the vector store, using the chunks we created above:

from langchain_community.vectorstores import Chroma

vectorstore = Chroma.from_documents(
    chunks,
    embedding_model,
    collection_name="RomeoAndJuliet"
)

Here we embed each of the chunks. For each chunk, Chroma calls embedding_model.embed_documents, which produces the vectors.

For each chunk, Chroma will store the vector embedding, the original text and the metadata such as the source file, etc. This is used for similarity search (see below).

The final value passed in is the collection_name. The embeddings are stored in a collection under that name.

Getting the retriever

As noted in the previous post, the next step is to create the retriever, which we do from the vector store, telling it that we want the search_type to be similarity and telling it how many of the most relevant chunks to return.

You get back a LangChain Document with the text chunk and the metadata.
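In code, this step usually looks like `retriever = vectorstore.as_retriever(search_type="similarity", search_kwargs={"k": 4})` (pick your own k). Under the hood, similarity search ranks stored vectors against the query vector, commonly by cosine similarity. A self-contained sketch with made-up three-dimensional vectors:

```python
import math

def cosine(a, b):
    """Cosine similarity: the angle-based closeness of two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, store, k=2):
    """Return the texts of the k stored entries most similar to the query."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

store = [
    ("Juliet appears at the window",           [0.9, 0.1, 0.0]),
    ("Mercutio duels Tybalt",                  [0.1, 0.9, 0.2]),
    ("It is the east, and Juliet is the sun",  [0.8, 0.2, 0.1]),
]
print(retrieve([1.0, 0.0, 0.0], store, k=2))
```

A real vector store does the same ranking, just with an index structure that avoids comparing the query against every stored vector.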

Instantiating the LLM

The next section in the previous post is self-explanatory until we instantiate the LLM.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-mini",
    temperature=0,
    max_tokens=10000,
    top_p=0.95,
    frequency_penalty=1.2,
    stop_sequences=['INST']
)

Here we are using OpenAI’s gpt-4o-mini, a popular and inexpensive LLM for RAG.

We set the temperature, which is a value that determines randomness in the answer. 0 is deterministic and repeatable.

max_tokens sets the upper bound on how long the model’s response can be.

top_p=0.95 is trickier. It tells the model to sample only from the smallest set of tokens whose cumulative probability reaches 95% (nucleus sampling). With temperature set to 0 this has no effect, but if you raise the temperature it becomes useful.
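To see what top_p actually does, here is a sketch of the token-selection step of nucleus sampling, with made-up probabilities:

```python
def nucleus(probs, p=0.95):
    """Return the tokens kept by top-p sampling: the smallest set of
    highest-probability tokens whose cumulative probability >= p."""
    kept, total = [], 0.0
    for token, prob in sorted(probs.items(), key=lambda kv: kv[1],
                              reverse=True):
        kept.append(token)
        total += prob
        if total >= p:
            break
    return kept

probs = {"the": 0.50, "a": 0.30, "my": 0.15, "xyzzy": 0.05}
print(nucleus(probs, p=0.95))  # the long-tail token "xyzzy" is cut off
```

The model then samples from the kept set (after renormalizing), which is why the setting only matters when temperature makes sampling non-deterministic.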

frequency_penalty penalizes tokens in proportion to how often they have already appeared in the output. We’re using 1.2, a strong penalty that produces concise, non-repetitive answers.

stop_sequences says to stop generation when the model outputs INST. This just prevents the model from “leaking” into the next instruction.
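One step from the list at the top, creating the prompt for the LLM, is plain string assembly: put the retrieved chunks in a context block ahead of the question. A minimal sketch (the template wording is my own, not from the previous post):

```python
def build_prompt(question, retrieved_chunks):
    """Assemble a RAG prompt: retrieved context first, then the question."""
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt(
    "Who appears at the window?",
    ["But soft, what light through yonder window breaks?",
     "It is the east, and Juliet is the sun."],
)
print(prompt)
```

With the pieces from this post, you would pass the retriever’s chunks into build_prompt and hand the result to llm.invoke.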

That’s it! Together with the previous post, you are now fully equipped to implement your RAG. Enjoy!


Upcycle a 3D Printer


Upcycle your gear and transform that old 3D printer into a cool new CNC tool.

The post Upcycle a 3D Printer appeared first on Make: DIY Projects and Ideas for Makers.


Exploring the Future of UI Design with Claude Design and Opus 4.7


If you’ve been following the AI design tools landscape, brace yourself because Claude Design just entered the scene with a bang. Anthropic’s latest offering, powered by the Opus 4.7 platform, aims to redefine how we think about UI/UX design. Let’s dive into what makes Claude Design a potential disruptor in an already dynamic industry.

Based on content from Fireship

What is Claude Design?

Claude Design is a new platform launched by Anthropic, leveraging the advanced capabilities of their flagship model, Opus 4.7. This tool promises to transform rudimentary Figma wireframes into fully-fledged prototypes, pitch decks, and production-ready user interfaces. The main selling point? It can achieve all this without you ever needing to touch a single design tool.

Claude Design vs. Figma and Adobe

In the wake of Claude’s launch, Figma’s stock took a notable hit. Industry giants like Adobe are also feeling the ripples as Claude Design shows the potential to streamline the UI/UX design process dramatically. As junior designers tweak their LinkedIn titles to ‘Prompt Engineer,’ Claude Design shakes up the status quo by offering features like working animations and comprehensive interactivity, putting it a step ahead of competitors.

A Deeper Dive into Claude Design

The Opus 4.7 model within Claude Design can process images at 3.75 megapixels, a substantial benefit for working with high-resolution design elements. It has also posted significantly higher scores on software engineering benchmarks than its predecessor. While some skeptics voice concerns about these metrics, the platform’s demo features, including interactive variations and sophisticated animations, serve as a testament to its capabilities.

One standout feature is the tool’s ability to integrate with existing design systems. You can upload your design specifications via a GitHub repository or directly from a Figma file. This allows designers to create a few base screens and let Claude extrapolate the entire suite, which could significantly cut down design times. However, as with any new tool, there are quirks. For example, the preliminary use of Claude Design revealed some inconsistencies in applying existing design systems, showing that there is still room for improvement.

Potential and Limitations

Despite the impressive offerings, some aspects of Claude Design appear more show than tell. Demos may not wholly capture the tool’s applicability in real-world scenarios. The promise of automatic design consistency and seamless transition from wireframe to final product is exciting, but current limitations like slower processing speeds compared to competitors like Google Stitch need addressing.

Looking Ahead

Claude Design’s potential to revolutionize the landscape of UI/UX design cannot be overstated. As the tool evolves, it will be crucial for startups and enterprises alike to keep an eye on its development. The future may very well see AI-driven platforms like Claude Design becoming the norm, shifting more creative processes toward automated solutions.

As designers and developers, tools like Claude Design invite us to rethink our workflows, pushing the boundaries of what’s possible in digital design. Try it for yourself and witness how AI continues to blur the lines between conception and creation.

Stay updated with the latest in design tools and UI/UX innovations by following our newsletter and courses. And explore more about Claude Design’s impact on the design landscape on Fireship’s YouTube channel.


.NET 10.0.7 / 10.0.107


You can build .NET 10.0 from the repository by cloning the release tag v10.0.107 and following the build instructions in the main README.md.

Alternatively, you can build from the sources attached to this release directly.
More information on this process can be found in the dotnet/dotnet repository.

Attached are PGP signatures for the GitHub generated tarball and zipball. You can find the public key at https://dot.net/release-key-2023
