Latest layoffs at Oracle impact 101 employees in Washington state

Oracle’s Cloud Experience Center in downtown Seattle. (GeekWire File Photo / Todd Bishop)

Oracle is laying off more workers in Washington state.

The cloud and database giant is laying off 101 employees in Seattle, according to a new filing with the Washington state Employment Security Department.

This follows a separate filing on Aug. 13 indicating that Oracle was laying off 161 workers as part of broader reported cuts across the company's operations.

We’ve contacted the company about the latest cuts and will update this post if we hear back.

LinkedIn posts on Tuesday show Oracle engineers, salespeople, and others impacted by the layoffs.

Oracle has grown its presence in the Seattle region over the past decade and employs more than 3,800 people in the area, according to LinkedIn.

In more recent years, Oracle has established partnerships with Seattle-area tech giants Microsoft and Amazon.

Oracle, Microsoft, and other cloud behemoths are investing heavily to expand capacity for training and running AI models.

Rising capital expenditures have created pressure to reduce operating costs through workforce reductions. Microsoft recently laid off more than 15,000 people globally.

Oracle reported revenue growth of 11% to $15.9 billion in its most recent quarter. The company’s stock is up 35% this year. It has more than 162,000 employees worldwide.


Episode 1000!


Episode 1000! Richard Campbell invites Paul Thurrott to join him to celebrate the milestone episode and answer questions from listeners. From the creation of the podcast to the role of Windows in the modern world, the impact of ARM, the cloud, and many other technologies, it's all addressed in this super-sized episode. And yes, artificial intelligence is part of the conversation, and will be part of the workflows that sysadmins use day to day. Thanks to all the folks who sent in questions for this special show, and thanks for listening!

Recorded August 31, 2025

Download audio: https://cdn.simplecast.com/audio/c2165e35-09c6-4ae8-b29e-2d26dad5aece/episodes/3dcba8c7-fd12-454e-b613-64a679a69b02/audio/9195d8c5-9294-4c01-b7a6-0e76492ff2c2/default_tc.mp3?aid=rss_feed&feed=cRTTfxcT

Noelle Russell


Noelle Russell is a multi-award-winning speaker, author, and AI executive who specializes in transforming businesses through strategic AI adoption. She is a revenue growth and cost optimization expert, a 4x Microsoft Responsible AI MVP, and was named the #1 Agentic AI Leader in 2025. She has led teams at NPR, Microsoft, IBM, AWS, and Amazon Alexa, is a consistent champion for data and AI literacy, and is the founder of the 'I ❤️ AI' community, which teaches responsible AI to everyone.

She is the founder of the AI Leadership Institute and empowers business owners to grow and scale with AI. In the last year, she has been named an awardee of the AI and Cyber Leadership Award from DCALive, the #1 Thought Leader in Agentic AI, and a Top 10 Global Thought Leader in Generative AI by Thinkers360. You can find Noelle on the following sites:

Here are some links provided by Noelle:

PLEASE SUBSCRIBE TO THE PODCAST

You can check out more episodes of Coffee and Open Source on https://www.coffeeandopensource.com

Coffee and Open Source is hosted by Isaac Levin

Download audio: https://anchor.fm/s/63982f70/podcast/play/107720027/https%3A%2F%2Fd3ctxlq1ktw2nl.cloudfront.net%2Fstaging%2F2025-8-2%2F406755557-44100-2-ff1ce6fe42b3b.mp3

What is Prompt Engineering? Techniques, Examples, and Tools


Prompt engineering is the practice of designing effective inputs to guide AI systems toward more accurate, useful, and context-aware outputs. It is increasingly applied in areas such as business automation, creative work, research, and education, offering clear benefits in efficiency and accessibility. At the same time, challenges such as scalability, bias, and the trial-and-error nature of prompting underscore the need for structured approaches and best practices. Looking ahead, advancements in automation, ethical frameworks, and industry-specific tools will shape the future of prompt engineering, making it a critical skill for AI-driven innovation.

What is prompt engineering?

Prompt engineering acts as a bridge between humans and machines. To be more precise, prompt engineering is the process of creating and refining the instructions given to an AI model to improve the accuracy and relevance of its responses. Because systems like ChatGPT and Claude generate outputs based on the way prompts are phrased, even small changes in wording, structure, or context can have a significant impact on results.

By designing high-quality prompts, users can help AI models produce outputs that align with specific goals, whether that’s generating content, automating business tasks, or solving technical problems.
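
To make the effect of wording concrete, here is a minimal sketch using the openai Python package that sends the same request phrased two ways; the model name and prompts are illustrative assumptions, not from any particular deployment:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(prompt: str) -> str:
        # Send a single user prompt and return the model's reply.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # The same request, phrased vaguely and then specifically.
    print(ask("Tell me about marketing."))
    print(ask("Explain three digital marketing strategies for "
              "B2B SaaS companies with under 50 employees."))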

What is prompt engineering used for?

Prompt engineering can be used for a variety of purposes. Some of these include:

    • Content creation: Writing articles, marketing copy, or social media posts for a specific audience.
    • Software development: Assisting with code generation, debugging, and explaining complex programming concepts in natural language.
    • Customer support: Helping chatbots and virtual assistants provide accurate, empathetic, and context-aware responses.
    • Education and training: Creating study guides, practice problems, or simplified explanations for learners at different levels.
    • Research and analysis: Summarizing documents, highlighting key insights, or comparing data from multiple sources.
    • Business operations: Drafting emails, creating reports, or automating repetitive tasks.

Benefits of prompt engineering

A good prompt can make the difference between a vague, useless AI response and a clear, actionable one. Some specific benefits of prompt engineering include:

    • Improved accuracy: Well-crafted prompts reduce misunderstandings and guide AI toward outputs that align closely with user intent.
    • Efficiency and productivity: Clear instructions reduce the need for repeated edits or regenerations, saving time and effort.
    • Versatility across domains: From business operations to creative writing, prompt engineering enables AI to be applied to a wide variety of uses.
    • Accessibility for non-experts: Users without coding or data science backgrounds can still achieve high-quality results through carefully designed prompts.
    • Consistency in outputs: Standardized prompts facilitate repeatable results, particularly in enterprise or team settings.
    • Improved creativity: By framing prompts in innovative ways, users can encourage AI to generate fresh ideas and perspectives.

Ultimately, prompt engineering helps transform AI from an unpredictable tool into a trustworthy partner for problem solving and creative endeavors.

Challenges of prompt engineering

Understanding the challenges associated with prompt engineering is key to using prompts effectively and responsibly. Here are some potential issues you may encounter:

    • Trial and error required: Crafting effective prompts often involves multiple iterations before achieving the desired outcome.
    • Model limitations: Even with well-structured prompts, AI models may still produce errors, hallucinations, or irrelevant results.
    • Scalability issues: Designing consistent, high-quality prompts for enterprise-level use cases can be time consuming and difficult to maintain.
    • Context window constraints: AI models can only process a limited amount of information at once, which restricts the depth of input.
    • Bias and fairness risks: Poorly phrased prompts may unintentionally reinforce stereotypes, misinformation, or harmful content.
    • Overreliance on prompts: Users may depend too heavily on clever prompting instead of combining it with other strategies like fine-tuning or retrieval-augmented generation (RAG).

By recognizing these challenges, prompt engineers can strike a balance between experimentation and structured best practices to make AI outputs more ethical and reliable.

Prompt engineering techniques

Prompt engineering is more than just asking questions; it's also about the way you structure prompts. Some commonly used techniques include the following (a few are sketched in code after the list):

    • Zero-shot prompting: Asking the model to perform a task without providing examples, relying only on clear instructions.
    • Few-shot prompting: Including a handful of examples in the prompt to show the model the desired style, format, or logic.
    • Role prompting: Assigning the AI a persona or perspective (e.g., “You are a CMO”) to influence tone and expertise.
    • Chain-of-thought prompting: Encouraging the AI to explain reasoning step by step to improve accuracy in complex problem-solving.
    • Instruction-based prompting: Using explicit, structured commands such as “List three advantages and disadvantages of…” or “Summarize the key takeaways using bullet points.”
    • Context-rich prompting: Supplying additional background information, constraints, or data so the AI can tailor responses more precisely.
    • Iterative refinement: Adjusting and rephrasing prompts based on initial outputs until the AI consistently produces the desired result.
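
As a rough illustration of how role prompting, few-shot prompting, and chain-of-thought prompting translate into an actual API call, here is a sketch using the openai package; the persona, examples, and model are made-up stand-ins:

    from openai import OpenAI

    client = OpenAI()

    messages = [
        # Role prompting: assign a persona via the system message.
        {"role": "system", "content": "You are a CMO reviewing ad copy."},
        # Few-shot prompting: one worked example of the desired format.
        {"role": "user", "content": "Slogan for a budgeting app:"},
        {"role": "assistant", "content": "Spend smarter, not harder."},
        # Chain-of-thought prompting: ask for step-by-step reasoning.
        {"role": "user", "content": "Slogan for a meal-kit service. "
                                    "Reason step by step about the audience, "
                                    "then give the slogan."},
    ]

    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(response.choices[0].message.content)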

Prompt engineering best practices

Getting high-quality results from AI models depends on the ability to craft clear, detailed prompts. Here are some best practices to help you do that:

Start with clear objectives

Before writing, define your goal. Are you looking for ideas, facts, or solutions? A vague prompt, such as “Tell me about marketing,” produces generic results, while a detailed prompt, like “Explain three digital marketing strategies for B2B SaaS companies with under 50 employees,” yields focused, actionable output.

Be specific about format

AI performs better with clear instructions, so it’s crucial to include details like length, tone, and format. Instead of requesting “A social media post about productivity,” you should ask for “A LinkedIn post under 150 words sharing three time-management tips for remote workers in a professional but conversational tone.”

Use examples to guide

Providing examples improves results. This technique, known as “few-shot prompting,” shows the AI the style, format, or logic you need.

Example structure:

I need product descriptions like this:
[Product Name] - [One-sentence benefit] [Two key features] [Price point]

Example: "Noise-Canceling Headphones - Block out distractions and focus on what matters. Features active noise cancellation and 30-hour battery life. Starting at $199."

Now write descriptions for: [your products]

Break complex tasks into steps

Large requests can lead to scattered results. Break tasks into steps or outline a process. Instead of: “Create a complete marketing plan for my startup,” try: “Help me create a marketing plan by identifying my target audience, suggesting three marketing channels, and outlining a content calendar.”
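
One way to operationalize this decomposition is to chain calls so each sub-task's answer feeds the next prompt. A minimal sketch, assuming the same openai client and an invented startup:

    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Each step's output becomes context for the next step.
    audience = ask("Identify the target audience for a hypothetical B2B invoicing startup.")
    channels = ask(f"Given this audience:\n{audience}\nSuggest three marketing channels.")
    calendar = ask(f"Given these channels:\n{channels}\nOutline a one-month content calendar.")
    print(calendar)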

Iterate and refine

The first response isn’t always perfect, which is why it’s helpful to request follow-ups to refine the AI model’s output. Some examples of how you might do this include:

    • “Make this more conversational.”
    • “Add examples to point #2.”
    • “Shorten to 100 words but maintain the key message.”
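
Refinement like this is just a multi-turn conversation: keep the model's draft in the transcript and append each follow-up as a new user message. A minimal sketch, with illustrative follow-ups:

    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "user", "content": "Write a short blurb about time management."}]

    for follow_up in ["Make this more conversational.",
                      "Shorten to 100 words but maintain the key message."]:
        draft = client.chat.completions.create(
            model="gpt-4o-mini", messages=history
        ).choices[0].message.content
        # Keep the draft in context, then request the refinement.
        history.append({"role": "assistant", "content": draft})
        history.append({"role": "user", "content": follow_up})

    final = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    print(final.choices[0].message.content)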

Provide context

Provide the AI model with background information to deliver better-tailored responses. You can do this by taking the basic prompt, “Write a project update email,” and adding context like: “Write a project update email for our mobile app redesign. We’re two weeks behind schedule due to technical challenges; however, core features are 80% complete. The audience is our executive team.”

Experiment with approaches

Don’t just stop at the first working prompt. It’s important to try out different phrasings or structures for better results. You can utilize role-playing prompts like “Act as a marketing expert and analyze…” or “Explain this to a beginner vs. an expert.”


Prompt engineering examples

Here are a few more examples of well-structured prompts:

    • Technical documentation: Instead of simply asking, “Write API documentation,” a refined prompt might specify the target audience, preferred format, necessary sections, and examples, resulting in professional-grade output.
    • Code review: A basic “Review this code” prompt can be enhanced by asking the AI model to “Identify security vulnerabilities, ensure compliance with coding standards, and suggest performance improvements.”
    • Data analysis: A generic “Analyze this data” prompt becomes more effective when it includes specific business goals, key metrics, and visualization preferences.
    • Customer service: A strong customer service prompt includes tone guidelines, company policies, and clear escalation paths to ensure consistent, professional interactions.

Prompt engineering tools

It’s important to select tools that align with your workflow and project objectives, and scale up as your needs evolve. Here’s a quick overview of some options available to you:

Experimentation

OpenAI Playground: Provides a user-friendly interface for testing and refining prompts. You can tweak variables such as temperature, frequency penalty, and max tokens, making it ideal for both beginners and advanced users who want to see how different settings affect AI responses.
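
The same knobs the Playground exposes are available directly in the API. A quick sketch, with values chosen purely for illustration:

    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",      # illustrative
        messages=[{"role": "user", "content": "Name a color."}],
        temperature=1.2,          # higher values produce more varied wording
        frequency_penalty=0.5,    # discourages repeating the same tokens
        max_tokens=20,            # caps the length of the response
    )
    print(response.choices[0].message.content)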

Google AI Studio: Allows you to experiment with different prompt templates and provides built-in evaluation metrics, making it easy to compare outputs and choose the most effective approach for your specific use case.

Tracking and organization

PromptLayer: Acts as a logging and analytics layer between your applications and language models. It saves all prompts and responses, allowing you to analyze which prompting strategies work best and build a searchable prompt library for your team.

Prompt Genius: This browser extension enables you to save, categorize, and access your most effective prompts instantly. It’s especially useful for users who work with prompts regularly and want an easy way to organize and retrieve them as needed.

Advanced development

LangChain: A developer framework for building applications with language models. It supports prompt templating, chaining of multiple model calls, and integrated memory, making it a powerful foundation for complex AI workflows.
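
For a taste of LangChain's prompt templating, here is a minimal sketch assuming the langchain-core package; the template text and variables are invented for illustration:

    from langchain_core.prompts import PromptTemplate

    # A reusable template with named variables.
    template = PromptTemplate.from_template(
        "Summarize the key takeaways of {topic} for a {audience} audience, "
        "using bullet points."
    )

    # Fill the variables to produce a concrete prompt string.
    print(template.format(topic="prompt engineering", audience="non-technical"))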

Prompt Perfect: Automatically improves and optimizes your prompts by suggesting edits based on best practices. This tool is highly beneficial when you want to enhance prompt clarity or effectiveness without spending excessive time on trial and error.

Prompt libraries

PromptHero: Hosts a curated gallery of top-performing prompts for different models and tasks. You can browse prompts by use case and model type, making it a good source of inspiration and potential starting points.

Awesome ChatGPT Prompts: A widely used, open-source collection on GitHub featuring a large variety of creative and practical prompt ideas contributed by the community. It’s updated regularly and covers everything from productivity tasks to language learning.

Automation

Zapier’s AI integrations: Connects AI-powered prompts to your everyday business tools and processes, allowing you to automate tasks like email generation, data summaries, or responding to customer queries.

Make: Formerly Integromat, this automation tool allows users to build complex workflows integrating prompt-based actions with branching logic. It’s well suited for advanced users looking to automate multi-step processes using AI.


The future of prompt engineering

More intuitive AI interactions

Prompt engineering is shifting from a niche discipline to a vital skill across many professions. As AI models become smarter, prompts will get shorter and more natural, with systems understanding intent and context more easily.

Automated tools and everyday integration

New tools will help refine and optimize prompts automatically, enabling anyone, regardless of experience, to achieve quality results. In the future, much of this optimization will be built directly into everyday software, simplifying user experience.

Growing collaboration and industry-specific tools

Teams will increasingly share prompt libraries and refine prompts together, while specialized tools for different industries will make prompt design faster and more targeted to particular needs.

Greater emphasis on ethical practices

With more powerful AI comes greater focus on responsible use, which means reducing bias, improving transparency, and following ethical standards in prompt development.

Expanding accessibility

Most importantly, prompt engineering will become more accessible to non-experts, empowering more people to use AI effectively. This ongoing shift will make working with AI easier, more responsible, and open to everyone.


Key takeaways and additional resources

Prompt engineering makes the most of AI systems by transforming them into reliable partners for creativity and problem solving. Understanding the techniques, tools, and best practices covered in this blog post will allow you to adapt and grow alongside advancing AI capabilities and effectively leverage them for personal or professional use.

You can review the key takeaways and resources listed below for a quick summary of what was discussed and to continue exploring concepts related to AI advancements.

Key takeaways

    • Prompt engineering refines AI instructions to improve response accuracy and align outputs with specific goals.
    • Methods like zero-shot, few-shot, and role-based prompting improve results by introducing structure to AI prompts.
    • Prompt engineering is versatile, supporting tasks like content creation, coding, customer support, and research.
    • Clear prompts save time, improve accuracy, and make AI accessible to non-experts while ensuring consistent outputs.
    • Issues such as trial and error, model limitations, and ethical concerns necessitate careful experimentation and adherence to best practices.
    • Platforms like OpenAI Playground and LangChain simplify prompt creation, while automation tools streamline workflows.
    • The field is evolving toward intuitive AI, automated optimization, and greater accessibility for all users.

Additional resources

FAQs

What is a prompt in AI? A prompt is the text, instruction, or query provided to an AI system that guides its response. Prompts can range from a simple question to thorough instructions for generating code.

What is a prompt engineer? A prompt engineer is someone who designs effective prompts for AI models to improve output quality and usability.

Does prompt engineering require coding? Not necessarily. While coding can facilitate the development of advanced applications, many prompt engineering techniques can be applied using natural language alone.

Why is prompt engineering important? It ensures that AI outputs are accurate, reliable, and aligned with human goals, thereby reducing the time spent editing or correcting results.

What does prompt engineering entail? It involves crafting, testing, and refining instructions to optimize AI model responses across different use cases.

How does prompt engineering differ across various AI models? Because different models interpret prompts in slightly different ways, prompt structures may require adjustments depending on the system used.

How is prompt engineering different from fine-tuning? Prompt engineering works by adjusting inputs, while fine-tuning changes the model itself with additional training data.

What are some ethical concerns for prompt engineering? Concerns include reinforcing bias, generating misinformation, or misusing prompts for harmful purposes. Responsible prompt design helps mitigate these risks.

The post What is Prompt Engineering? Techniques, Examples, and Tools appeared first on The Couchbase Blog.


Being A Successful Software Developer - Agilely Managing Expectations


(Image: agile work item hierarchy)

Being a successful software developer is about developing and delivering software that stakeholders find valuable and useful. Getting objective expectations of value from stakeholders can sometimes be difficult. This isn't necessarily a failing of the stakeholder; it can be hard to communicate expectations clearly because assumptions and implicit information leave your audience with less exact goals. I work with teams frequently challenged with accurately meeting expectations, and this post explores some of the techniques I've found particularly helpful.

Level-setting

The context I'll be discussing involves agile techniques for describing and managing requirements, but the ideas apply regardless of how you refer to requirements or organize them. A generally accepted method of managing requirements is to have a product backlog and a sprint backlog that contain user stories. A user story's purpose is to describe a desired capability, from the perspective of a type of stakeholder (role), that provides a tangible benefit. In addition to a product backlog, a sprint backlog, and user stories, organizations structure software development efforts with epics and features. Epics and features help break down work into manageable chunks, aligning it by common purpose. An epic may contain several features related to one another, and may also have a theme or strategic goal. A feature is a set of functionality that provides value to stakeholders.

There's a saying:

There is no right way to do the wrong thing.

One of the goals of agile software development is to ensure we're focusing on the right thing and to be able to quickly pivot when we realize we may be working on the wrong thing. Let's explore ways to improve the likelihood of focusing on the right things early in an agile context, enabling us as software developers to achieve greater success.

A user story takes the form:

(Image: user story template on a Post-it note: "As a <role>, I want <capability>, so that <benefit>.")

For example:

As a website user,
I want to recover my password,
so that I can access my account if I forget it.

Although this story sounds complete, measuring its success ultimately hinges on whether the functionality exists, and that is weak on testability. Specifically, it lacks positive testability: as it stands, we can only test that the functionality doesn't crash rather than verify the value it should provide. With no way to verify the added value, it's also challenging to quantifiably trace the story back to an epic in terms of its contribution to a strategic goal. Some teams don't drill down further than a user story at this level of detail (some may have even less, such as a functional work item like "Password Reset").

A common way to make it easier to evaluate work completed against a requirement, such as a user story, is to include a series of acceptance criteria.

As a developer looking to ensure that what they ultimately implement is fit for purpose, collaborating with users/owners on acceptance criteria is very helpful.

A popular format for acceptance criteria is Given/When/Then (or Gherkin syntax), which details preconditions, actions, and expected outcomes. The general format is Given <some context>, When <some action is carried out>, Then <outcome is observed>. For example:

Given: the user navigates to the login page,
When: the user selects the "Forgot Password" option and enters a valid email address,
Then: the system sends the password recovery link to the entered email.

User stories can easily fall prey to rote creation from a vague checklist item, such as "password reset". This one simple Given/When/Then example has surfaced important expectations, including where password reset can occur, that an email address is required to reset a password, and that the system will send an email to authenticate the correct user. But a single Given/When/Then scenario is insufficient to represent realistic expectations. For this user story, scenarios covering receipt of the email and performing the reset are necessary to round out the happy-path expectations. For example:

Given the user has entered a valid registered email address on the password recovery page,
When they submit the form,
Then they should receive a confirmation message that a password reset email has been sent.

and

Given the user has received the password reset email,
When they click the link in the email and enter a new password,
Then their password will be updated, and they should be able to log in with the new password.

(Image: user story with acceptance criteria on a Post-it note)
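
One payoff of Given/When/Then criteria is that they map directly onto executable tests. Here is a rough sketch of how the first happy-path scenario above might bind to step definitions using Python's behave library; the context.app harness and its methods are hypothetical placeholders for whatever drives your system under test:

    from behave import given, when, then

    @given("the user has entered a valid registered email address on the password recovery page")
    def step_enter_email(context):
        # context.app is a hypothetical harness wrapping the site under test.
        context.app.goto("/password-recovery")
        context.app.fill("email", "user@example.com")

    @when("they submit the form")
    def step_submit(context):
        context.app.submit()

    @then("they should receive a confirmation message that a password reset email has been sent")
    def step_confirm(context):
        assert "password reset email has been sent" in context.app.page_text()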

Eliciting Acceptance Criteria

These acceptance criteria may be obvious now that you see them, but when all you've got is a user story, a feature, and an epic, what can you do to get quality acceptance criteria? Fortunately, this example provides some hints. In Given/When/Then, the "Given" clauses are preconditions, so asking what preconditions must exist is a good start. "Preconditions" might come across as overly technical, so questions like "Where would this functionality exist?", "How would the user reach this functionality?" or "What needs to happen for this functionality to be enabled?" have served me well.

While working through these questions with the product owner and other stakeholders provides valuable insights for implementing a fit-for-purpose solution, we still lack positive verifiability. We can verify that the functionality achieves the outcomes we've specified, but what is the value of those outcomes? Pose this question to stakeholders to discover something quantifiably measurable. Since clearer outcomes are part of the more detailed scenarios, those newly elicited outcomes can hint at more criteria from the point of view of quality attributes.

Any functionality (i.e., your work) is judged on quality attributes such as correctness, performance, usability, security, and reliability, to name a few. These quality attributes are a great source of inspiration for eliciting measurable details, so ask questions about response times, the type of user or role, error conditions, negative scenarios, and so on. For example, you could expand a Given/When/Then scenario to include performance criteria:

Given the user has entered a valid registered email address on the password recovery page,
When they submit the form,
Then they should receive a confirmation message that a password reset email has been sent,
And the confirmation email is sent within 2 minutes.

Or you could update the main scenario with a clearer role:

Given: an unverified user navigates to the login page,
When: the user selects the "Forgot Password" option and enters a valid email address,
Then: the system sends the password recovery link to the entered email.

No matter where you find inspiration for measurability, it relates to a goal or objective. The more measurable business goals and objectives are known during estimation, before implementation begins, the more likely it is that you'll implement functionality stakeholders find valuable and useful. The criteria exist either way; they can surface as expectation details up front or as the need to refine an implementation later. Think shift-left.

If you find this useful
I'm a freelance software architect. If you find this post useful and think I can provide value to your team, please reach out to see how I can help. See About for information about the services I provide.


Daily Reading List – September 2, 2025 (#619)


Happy Fake Monday to my American readers who stumbled through a Tuesday after a long holiday weekend. The reading list today has some industry news, and also a handful of pieces that offer great insight into AI topics.

[blog] 📖 An Open Book: Evaluating AI Agents with ADK. Very good post that clearly explains how to create eval sets and test criteria before executing agent evaluations using the Agent Development Kit.

[blog] The core KPIs of LLM performance (and how to track them). Are there ten ideal LLM performance metrics? Three? Fifteen? I don’t know. Whatever you believe is fine, but it’s good to see what others think.

[article] MCP: the Universal Connector for Building Smarter, Modular AI Agents. I read a lot about MCP, but every article adds something to my knowledge base.

[blog] Go 1.25: The Container-Native Release. Go is already great for container workloads, but it looks like this latest release added some major improvements.

[blog] DocumentDB and the Future of Open Source. This will be an interesting project to watch, as many industry giants have backed it.

[article] Why DocumentDB can be a win for MongoDB. Matt used to work at MongoDB and offers his own brand of helpful insight into where this might go.

[blog] Introducing your newest study buddy: stackoverflow.ai. Can Stack Overflow ever rediscover its peak? Probably not, as the world has changed in a few ways. But they’re not sitting idly by.

[blog] Google Cloud’s open ecosystem for Apache Iceberg. It looks like the industry has quickly embraced Iceberg as a standard.

[article] Anthropic raises $13B Series F at $183B valuation. The spigot for AI funding is still flowing, and it’s going to be tough for new entrants who need to commit to a LOT of compute to keep up.

[blog] /whatsnew: A Custom Gemini CLI Command for Google Cloud Updates in Your Terminal. Fun idea that shows the art of the possible. Use a custom command in the CLI to retrieve release notes for your favorite cloud service.

[blog] Mass Intelligence. When you think about it, the fact that nearly everyone on the planet has access to unprecedented “intelligence” is bonkers. Ethan shares how we got here.

[article] Google avoids break up, but has to give up exclusive search deals in antitrust trial. This removes some uncertainty. Our response. Onward and upward.

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:


