Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Boost Your .NET Projects: Supercharge Your Code with FastStringBuilder in Spargine

Spargine is a collection of open-source assemblies and NuGet packages for .NET 8 and 9, aimed at optimizing performance. The FastStringBuilder enhances string manipulation by minimizing memory allocations and boosting speed, featuring methods like Combine and ToDelimitedString. Benchmarks indicate significant improvements over traditional approaches in both speed and memory efficiency.



Read the whole story
alvinashcraft
10 hours ago
reply
Pennsylvania, USA
Share this story
Delete

New in Pulumi IaC: Support for skipping a resource


Managing large-scale infrastructure can be challenging, especially when you need to perform operations on specific subsets of your resources. Pulumi’s stack operations like pulumi up and pulumi destroy are powerful for deploying and tearing down environments, but sometimes you need more fine-grained control over which resources are affected.

Today, we’re excited to announce a highly requested feature that will save you time and reduce complexity in your workflows: the ability to exclude specific resources from stack operations using the new --exclude and --exclude-dependents flags.

These new flags complement the existing --target functionality, giving you powerful options whether you want to focus on a small subset of resources or exclude just a few from larger operations. No more workarounds or custom scripts to achieve selective deployments!

The challenge: partial operations on large stacks

When managing infrastructure at scale, you often want to operate on most—but not all—resources in your stack. For example:

  • Deploying all resources except a database that requires a maintenance window
  • Refreshing most resources while skipping those with known differences
  • Updating production infrastructure while leaving critical services untouched
  • Testing changes to most components while preserving test data in development

Pulumi already has the --target flag to specify which resources to include in an operation. This works well when you want to target a small number of resources, but becomes unwieldy when you want to operate on most of your stack while excluding only a few resources.

The solution: introducing --exclude and --exclude-dependents

Our new --exclude flag solves this problem by letting you specify which resources to omit from stack operations. When paired with --exclude-dependents, you can also exclude all child resources of the specified resources, making it easy to exclude entire branches of your resource tree.

These flags are now available for all major stack operations:

pulumi up --exclude <URN>::resource-to-skip
pulumi preview --exclude <URN>::resource-to-skip
pulumi refresh --exclude <URN>::resource-to-skip
pulumi destroy --exclude <URN>::resource-to-skip

Each of these commands can also use the --exclude-dependents flag to exclude child resources.

An example: selective deployment of blog content

Let’s imagine you’re managing a static blog website with Pulumi. As part of your deployment, you have multiple HTML pages you’d like to deploy:

...

for (const file of await glob('posts/**/*.html')) {
  new aws.s3.BucketObject(`post-${file}`, {
    source: new pulumi.asset.FileAsset(file),
    ...
  })
}

...
...

for file in glob("posts/**/*.html", recursive=True):
    aws.s3.BucketObject(file,
        source=pulumi.FileAsset(file),
        ...
    )

...
...

files, err := filepath.Glob("posts/**/*.html")
if err != nil {
    return err
}

for _, file := range files {
    _, err := s3.NewBucketObject(ctx, file, &s3.BucketObjectArgs{
        Source: pulumi.NewFileAsset(file),
        ...
    })
    if err != nil {
        return err
    }
}

...
...

var files = Directory.GetFiles("posts", "*.html", SearchOption.AllDirectories);

foreach (var file in files)
{
    var bucketObject = new BucketObjectv2(file, new BucketObjectv2Args
    {
        Source = new FileAsset(file),
        ...
    });
}

...

This works well, but what if we have a list of draft articles that we don’t want to include in the deployment? We can optimistically assume we’ve finished more articles than we’ve started, so using --target to specify every article, as well as supporting resources (CSS, JavaScript, ownership controls, et cetera), would quickly become unmanageable.

pulumi up --target <URN>::style.css --target <URN>::post-hello.html ...

With the --exclude flag, this becomes much more manageable:

pulumi up --exclude <URN>::post-draft-1.html --exclude <URN>::post-draft-2.html ...

With this command, everything not specified with an --exclude flag will be included in the up operation, so we avoid the hassle of naming every resource that isn’t a draft.

Next step: a draft group

This is fine for a personal blog site, but can still become unmanageable when we’re dealing with multiple authors, each with multiple drafts. In this case, we might want to group our drafts under a common parent:

...

// A parent component for all drafts
const drafts = new pulumi.ComponentResource('ComponentResource', 'drafts')

for (const file of await glob('drafts/**/*.html')) {
  new aws.s3.BucketObject(`draft-${file}`, {
    source: new pulumi.asset.FileAsset(`drafts/${file}`),
    ...
  }, { parent: drafts })
}

...
...

# A parent component for all drafts
drafts = pulumi.ComponentResource(t="ComponentResource", name="drafts")

for file_path in glob("drafts/**/*.html", recursive=True):
    aws.s3.BucketObject(file_path,
        source=pulumi.FileAsset(file_path),
        opts=pulumi.ResourceOptions(parent=drafts),
        ...
    )

...

drafts := &DraftGroupComponent{}
err = ctx.RegisterComponentResource("ComponentResource", "drafts", drafts)
if err != nil {
    return err
}

...

files, err := filepath.Glob("drafts/*.html")
if err != nil {
    return err
}

for _, file := range files {
    _, err := s3.NewBucketObject(ctx, file, &s3.BucketObjectArgs{
        Key: pulumi.String(file),
        ...
    }, pulumi.Parent(drafts))
    if err != nil {
        return err
    }
}

...

public class MyComponentResource : ComponentResource { ... }

...

var files = Directory.GetFiles("drafts", "*.html");
var drafts = new MyComponentResource("drafts");

foreach (var file in files)
{
    var bucketObject = new BucketObjectv2(file, new BucketObjectv2Args
    {
        Source = new FileAsset(file),
        ...
    }, new CustomResourceOptions
    {
        Parent = drafts
    });
}

...

In this setup, we now have a parent resource for all drafts. Using --exclude-dependents, we can now exclude everything under this parent resource without having to enumerate all of them individually:

pulumi up --exclude <URN>::ComponentResource::drafts --exclude-dependents

This command will exclude all drafts from the up operation, regardless of how many we have or how they’re named. We now have a nice, scalable way to manage our drafts across production and development environments!

Next steps

With these flags now available in Pulumi CLI v3.158.0, expect to see them introduced in the Automation API and GitHub Actions soon. Thanks for reading, and feel free to share any feedback on GitHub, X, or our Community Slack.


Managing technical debt like financial debt


Yesterday on my way home, I was listening to the SoftwareCaptains podcast episode with Mathias Verraes (sorry, the episode is in Dutch). One of the topics discussed was technical debt, and the question was raised: why do (most) organizations manage their financial debt very carefully, yet fail to apply the same rigor to their technical debt?

This triggered a train of thought that resulted in this post. Financial debt is meticulously tracked, reported, and managed. CFOs provide regular updates to boards about debt levels, leverage ratios, and debt servicing costs. Detailed financial statements outline current liabilities, long-term obligations, and repayment schedules. Financial debt is visible, quantified, and actively managed.

Yet technical debt—which can be just as crippling to an organization's future—often exists as an invisible, unquantified burden until it's too late.

What if we managed technical debt with the same rigor as financial debt?

The hidden cost of technical debt

Technical debt accumulates when development teams make implementation choices that prioritize short-term goals over long-term system health. Like financial debt, technical debt isn't inherently bad—it can be strategically leveraged to deliver value quickly. The problem arises when it remains invisible, unmanaged, and ultimately unpaid.

 

The consequences are real and severe:

  • Decreased development velocity as teams navigate increasingly complex systems

  • Rising maintenance costs that steadily eat into innovation budgets

  • Increased system failures and outages impacting customer experience

  • Higher employee turnover as engineers burn out working with problematic codebases

  • Inability to respond quickly to market changes or competitive threats

Yet unlike financial debt, technical debt rarely appears on executive dashboards or board reports. It accumulates silently until it reaches crisis levels that can no longer be ignored.

As a venture capitalist, you would probably not invest in a company that is knee-deep in financial debt. But can you simply ignore its accumulated technical debt?

The role of the CTO

Every company has a CFO who tracks financial obligations and ensures they remain manageable. Where is the equivalent role for technical debt?

I think this should be a responsibility of the Chief Technology Officer (CTO). He or she should be accountable for:

  1. Quantifying existing technical debt across systems and applications
  2. Tracking debt accumulation through development activities
  3. Establishing repayment strategies and prioritizing debt reduction efforts
  4. Reporting technical debt metrics to executive leadership
  5. Setting sustainable "debt policies" for the organization

The CTO shouldn't prevent all technical debt—just as CFOs don't prevent all financial debt. Rather, they would ensure debt is intentional, visible, and managed within sustainable boundaries.

Making technical debt visible and manageable

Financial debt isn't managed through vague conversations or individual awareness—it's tracked through formal processes, reported regularly, and managed strategically. Technical debt deserves the same treatment.

By establishing explicit technical debt management practices and assigning clear accountability through expanded CTO responsibilities, organizations can transform technical debt from an invisible threat to a managed resource.

The companies that thrive in the coming decade won't be those that avoid technical debt entirely—that's unrealistic. The winners will be those that make technical debt visible, intentional, and strategically managed just like any other business liability.

More information

Podcast: Mathias Verraes: software design voor startups en scaleups — SoftwareCaptains — Lead your tech team through your growth

Understanding Technical Debt


ASP.NET Core Pitfalls - Action Constraint Order


Introduction

When we have more than one action method in MVC or Web API that can match a given request, the request may fail or land on an unwanted method. By default, a number of factors are used to figure out which method to call:

  • The HTTP verb
  • The action method name
  • The route template
  • The action method parameters
  • The request content type

An action constraint may be needed to select the right one; an action constraint is an implementation of IActionConstraint, and is normally added through an attribute or a convention. Examples of built-in action constraints include the HTTP method constraint (behind attributes such as [HttpGet] and [HttpPost]) and [Consumes].

If you want to build your own, you should inherit from ActionMethodSelectorAttribute and implement IsValidForRequest.
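As a hedged sketch (the attribute name and header check are invented for illustration; ActionMethodSelectorAttribute and IsValidForRequest are the actual extension points), a custom constraint might look like this:

```csharp
using Microsoft.AspNetCore.Mvc.Abstractions;
using Microsoft.AspNetCore.Mvc.ActionConstraints;
using Microsoft.AspNetCore.Routing;

// Matches the action only when the request carries the given header.
public class RequiresHeaderAttribute : ActionMethodSelectorAttribute
{
    private readonly string _header;

    public RequiresHeaderAttribute(string header) => _header = header;

    public override bool IsValidForRequest(RouteContext routeContext, ActionDescriptor action)
        => routeContext.HttpContext.Request.Headers.ContainsKey(_header);
}
```

Applied to an action method, this removes the method from consideration whenever the header is absent.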

Problem

Sometimes, however, just applying an action constraint is not enough, usually because the request matches more than one constraint. One example involves subsets of a given content type: a [Consumes] constraint for a base content type will also validate against its more specific variants.

What we then need is to define the order in which the constraints are checked, where the most "unusual" should go first. This is achieved through the Order property of IActionConstraint or the Order property of HttpMethodAttribute; constraints with a smaller Order value are evaluated first.

An example using [HttpPatch]:

[HttpPatch]
[Consumes("application/json")]
public IActionResult PatchJson([FromBody] Payload payload)
{
    //...
}

[HttpPatch(Order = -1000)]
[Consumes("application/merge-patch+json")]
public IActionResult PatchMergePatchJson([FromBody] Payload payload)
{
    //...
}

When a request is made using "application/json" content-type, the first method matches; when using "application/merge-patch+json", the second does. In this case, the problem is that "application/merge-patch+json" is treated as a subset of "application/json".

Conclusion

Because ASP.NET Core is so flexible, it's usually easy to fix these kinds of problems, once we know what they are. Stay tuned for more!


SQL Data Type Conversions: Your Key to Clean Data & Sharp Queries


If you're a data analyst juggling varied datasets, mastering SQL data type conversions isn't just handy—it's crucial. Whether you’re making different data types play nice together or boosting query speed, knowing your way around conversions saves you headaches and errors down the line.

This piece dives into the essential SQL data type conversion methods. I'll show you where they shine in the real world and give you solid advice for crafting more effective queries. Stick with me, and you'll get a solid grip on SQL conversions, making your work more accurate and your queries faster.

When you're deep in SQL databases, effectively managing data types is paramount for data integrity and snappy query performance. Whether you're blending datasets, running calculations, or crafting reports, data type conversion is your go-to for dodging errors and making things run smoother.

Why You Absolutely Need to Master SQL Data Type Conversion

You'll find yourself needing data type conversion constantly. As a SQL user, you often have to aggregate and compare values of different stripes to unearth those golden insights.

Then there's formatting data for reports and visualizations. This is a big one, making sure your data tells a clear story in a well-organized, readable way.

Accurate math? It hinges on correct data type conversion. Get this wrong, and your calculations could spit out nonsense, leading you to some pretty off-base conclusions.

And let's not forget error prevention when you're inserting or updating data. Mismatched types here can cause operations to just keel over or, worse, silently introduce bizarre outcomes.

Simply put, if you don't handle conversions correctly, your SQL queries can break or, just as bad, feed you bad information. That’s precisely why getting comfortable with conversion techniques is a non-negotiable skill for data analysts.

Implicit vs. Explicit: SQL's Two Flavors of Conversion

Implicit Conversion: SQL's Autopilot

SQL engines are smart; they often convert data types automatically when the situation calls for it. This means when you mix different data types in a query, SQL frequently tweaks them behind the scenes, no manual fuss needed from you.

For instance, add an integer to a number with a decimal (a float), and SQL will convert that integer to a float before doing the math. This keeps your calculations consistent and sidesteps those pesky type mismatch errors.

Example:

SELECT 10 + 5.5 AS result; -- SQL automatically changes 10 to 10.0 (float)

Output:

result
------
15.5

Now, while this auto-conversion is pretty neat and usually works like a charm, it can occasionally throw you a curveball. This is especially true with strings, dates, or really big numbers. Knowing when and how SQL pulls off these automatic shifts helps you write queries that are both more accurate and more efficient.
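As a small illustration of such a curveball (SQL Server flavor; the orders table and order_no column are invented for the example), comparing a VARCHAR column to a numeric literal makes SQL implicitly convert the column, not the literal:

```sql
-- order_no is a VARCHAR column. The numeric literal forces an implicit
-- conversion of every order_no value, which fails if any row contains a
-- non-numeric string and can prevent an index seek on order_no.
SELECT * FROM orders WHERE order_no = 1001;

-- Safer: keep the comparison in the column's own type.
SELECT * FROM orders WHERE order_no = '1001';
```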

Explicit Conversion: Taking the Wheel

Explicit conversion in SQL is all about you, the user, manually changing a value's data type. You do this to make sure SQL interprets it exactly as you intend. Unlike the implicit, hands-off approach, explicit conversion demands specific SQL functions.

Your main tools for this are CAST and CONVERT. CAST is the universal soldier here; it's standardized across different SQL databases, making it a really flexible pick. For example, CAST('2024-02-20' AS DATE) transforms a text string into a proper date format. This ensures the system treats '2024-02-20' as a date, not just a sequence of characters.

CONVERT, however, is SQL Server’s specialist, offering extra formatting tricks up its sleeve, especially for dates. Want the current date in British format? CONVERT(VARCHAR, GETDATE(), 103) is your friend.

Example:

SELECT CAST('2024-02-20' AS DATE) AS converted_date;

Output:

converted_date
--------------
2024-02-20

You'll lean on explicit conversion when you're wrestling with mixed data types, performing arithmetic, or ensuring your data smoothly transitions between different systems. It's a critical practice for cutting down errors and ensuring your data processing is rock-solid across all your SQL queries.

Your Go-To SQL Data Type Conversion Functions

1. The CAST Function: The Universal Translator

The CAST function? It's your reliable workhorse, part of the ANSI SQL standard. This means you can use it across a multitude of relational database management systems (RDBMS) like MySQL, PostgreSQL, SQL Server, and Oracle. This widespread support makes CAST a dependable and portable choice for handling data type conversions directly in your SQL queries.

Forget tweaking your queries for different SQL platforms; CAST gives you a consistent syntax. It lets you explicitly tell SQL, "Hey, treat this data as this specific type," ensuring it's correctly interpreted for calculations, comparisons, or even just for storage.

Need to switch an integer to text, maybe for joining it with other strings? CAST(123 AS VARCHAR) does the trick, making sure the number behaves like a string. This is incredibly useful for reports, formatting your output, or prepping data to be shipped elsewhere.

CAST is also your function for changing data into date, numeric, or other compatible types. But, be warned: CAST is strict. If you try to convert something that just won't fit—like trying CAST('abc' AS INTEGER)—your query will hit a wall and fail. Some other functions might just give you a NULL, but CAST doesn't play that game.

Syntax:

CAST(expression AS target_data_type)

Example:

SELECT CAST(123 AS VARCHAR) AS text_value;

Output:

text_value
----------
123

2. The CONVERT Function: SQL Server's Formatting Ace

If you're in the SQL Server ecosystem, you'll want to get familiar with CONVERT. This function is a gem, especially when you need to display date and time values in various styles. While CAST just changes the data type, CONVERT lets you specify a formatting style code. This makes it invaluable for tailoring reports or dealing with regional date formats.

Syntax:

CONVERT(target_data_type, expression, style)

Example:

SELECT CONVERT(VARCHAR, GETDATE(), 103) AS formatted_date; -- 103 gives dd/mm/yyyy

Output:

formatted_date
--------------
20/02/2024

3. TO_DATE, TO_CHAR, TO_NUMBER: Oracle's Conversion Trio

Oracle databases have their own set of specialized functions for these tasks:

  • TO_DATE: You'll use this to change strings into actual date formats.
  • TO_CHAR: This one flips dates or numbers into strings.
  • TO_NUMBER: Got a string that's really a number? TO_NUMBER handles that.

Example:

SELECT TO_DATE('20-02-2024', 'DD-MM-YYYY') FROM dual; -- 'dual' is a dummy table in Oracle
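For completeness, here is a quick sketch of the other two in the same style, also run against Oracle's dual dummy table:

```sql
SELECT TO_CHAR(SYSDATE, 'DD-MM-YYYY') FROM dual;  -- date to formatted string
SELECT TO_NUMBER('1234.56') FROM dual;            -- numeric string to number
```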

Common SQL Data Conversion Hurdles and How to Clear Them

Data conversion in SQL isn't always a walk in the park; you can definitely hit snags that lead to errors or results that make you scratch your head. Knowing the usual suspects and how to tackle them will make your queries much more robust.

1. Taming NULL Values

NULLs are notorious troublemakers in data conversions. If you're not careful, they can make your conversions blow up. Your best defense? Use COALESCE or ISNULL to give them a safe default value.

Example:

SELECT COALESCE(CAST(NULL AS INT), 0) AS safe_value;

Output:

safe_value
----------
0

2. Navigating String-to-Number Conversion Pitfalls

Trying to convert a string with non-numeric characters to a number? That's a common way to cause a failure.

Your solution, particularly in SQL Server, is to use TRY_CAST or TRY_CONVERT. These functions attempt the conversion, and if it doesn't work, they gracefully return NULL instead of stopping your query cold.

SELECT TRY_CAST('123abc' AS INT) AS result;

Output:

result
------
NULL

Best Practices for Smooth SQL Data Type Conversion

I always recommend CAST when you need a conversion method that works consistently across different SQL databases. It's your ANSI SQL standard buddy, ensuring that your queries are portable.

Now, if you're exclusively using SQL Server and need finer control over formatting (especially with dates and text), then CONVERT is definitely the stronger choice. Those style parameters it offers are pretty handy.

A word of caution: try not to lean too heavily on implicit conversions, especially in queries where performance is key. They might seem convenient, but SQL making automatic decisions about data types can slow things down and sometimes lead to surprising results.

Handling NULL values with care is absolutely essential during conversions. Make it a habit to use functions like COALESCE or ISNULL. This prevents your conversions from failing or, worse, giving you skewed results because of missing data.

And for the ultimate win in efficiency and accuracy: store your data in the correct format right from the get-go. When you define your column data types properly from the start, you drastically reduce the need for conversions later on, which directly boosts query performance.
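As a sketch of that last point (the table and columns are invented for illustration), choosing proper types at table-creation time is what makes later conversions unnecessary:

```sql
-- Storing each value in its natural type avoids CAST/CONVERT later.
CREATE TABLE sales (
    sale_id   INT           NOT NULL,
    sale_date DATE          NOT NULL,  -- a real DATE, not a VARCHAR
    amount    DECIMAL(10,2) NOT NULL,  -- exact numeric, not a string or FLOAT
    notes     VARCHAR(500)
);
```

With these types in place, queries can filter and aggregate on sale_date and amount directly, with no conversion functions in the way.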

Next Steps on Your SQL Journey

Feel like you want to really cement these SQL skills? I had a fantastic experience with a SQL Data Types course; it genuinely helped me get a much deeper understanding of handling and converting various data types in SQL.

A good course will walk you through all the core concepts, show you the different data types and their best uses, and drum in the best practices for working with them efficiently. You'll ideally want something with plenty of hands-on exercises. That's how you truly grasp applying data type conversions to situations you'll actually encounter. Whether you're just starting out or you're an intermediate SQL user aiming to fine-tune your queries, a structured learning path can really boost your SQL game.

You've now got the essential toolkit for SQL data type conversion. Time to put this knowledge to work!

The post SQL Data Type Conversions: Your Key to Clean Data & Sharp Queries appeared first on RealSQLGuy.



The high-performance playbook: How great product teams deliver results

