Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Evaluating Kotlin in Real Projects

1 Share

Guest post by Urs Peter, Senior Software Engineer and JetBrains-certified Kotlin Trainer. For readers who’d like a more structured way to build Kotlin skills, Urs also leads the Kotlin Upskill Program at Xebia Academy.

This is the second post in The Ultimate Guide to Successfully Adopting Kotlin in a Java-Dominated Environment, a series that follows how Kotlin adoption grows among real teams, from a single developer’s curiosity to company-wide transformation.

Read the first part: Getting Started With Kotlin for Java Developers


The Evaluation Stage: Beyond Kotlin as a Playground

Once you’re comfortable with Kotlin in tests, it’s time for a more substantial evaluation. You have two main approaches:

  1. Build a new microservice / application in Kotlin
  2. Extend / convert an existing Java application

1. Build a new microservice/application in Kotlin

Starting fresh with a new application or microservice provides the full Kotlin experience without the constraints of legacy code. This approach often provides the best learning experience and showcases Kotlin’s strengths most clearly.

Pro tip: Get expert help during this stage. Developers are naturally confident in their abilities, but avoiding early mistakes, such as writing Java-ish Kotlin or overlooking Kotlin-powered libraries, can save months of technical debt.

Here is how to avoid common pitfalls when coming to Kotlin from a Java background:

Pitfall: Choosing a different framework from the one you use in Java.

Tip: Stick to your existing framework

Most likely, you were using Spring Boot with Java, so use it with Kotlin too. Spring Boot's Kotlin support is first-class, so there is no additional benefit in using something else. Otherwise, you are forced to learn not only a new language but also a new framework, which adds complexity without providing any advantage.

Important: Spring interferes with Kotlin's "closed by default" design principle: Kotlin classes are final unless you explicitly mark them open, which Spring requires in order to extend (proxy) them.

In order to avoid adding the open keyword to all Spring-related classes (those annotated with @Configuration, etc.), use the following build plugin: https://kotlinlang.org/docs/all-open-plugin.html#spring-support. If you create a Spring project with the well-known online Spring Initializr tool, this build plugin is already configured for you.
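For Gradle users, the setup is roughly the following sketch (Kotlin DSL; the version numbers are illustrative):

```kotlin
// build.gradle.kts (sketch; versions are illustrative)
plugins {
    kotlin("jvm") version "2.0.0"
    // Applies the all-open compiler plugin with Spring presets: classes annotated
    // with @Configuration, @Component, @Service, etc. are automatically made 'open'.
    kotlin("plugin.spring") version "2.0.0"
}
```

With this in place, you never have to sprinkle open over your Spring beans by hand.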

Pitfall: Writing Kotlin in a Java-ish way, relying on common Java APIs rather than Kotlin’s standard library: 

This list can be very long, so let’s focus on the most common pitfalls:

Pitfall 1: Using Java Stream rather than Kotlin Collections

Tip: Always use Kotlin Collections.

Kotlin Collections are fully interoperable with Java Collections, yet equipped with straightforward and feature-rich higher-order functions that make Java Stream obsolete. 

Here is an example that picks the top 3 products by revenue (price * sold), grouped by product category:

Java

record Product(String name, String category, double price, int sold) {}

List<Product> products = List.of(
        new Product("Lollipop", "sweets", 1.2, 321),
        new Product("Broccoli", "vegetable", 1.8, 5));

Map<String, List<Product>> top3RevenueByCategory =
        products.stream()
                .collect(Collectors.groupingBy(
                        Product::category,
                        Collectors.collectingAndThen(
                                Collectors.toList(),
                                list -> list.stream()
                                        .sorted(Comparator.comparingDouble(
                                                (Product p) -> p.price() * p.sold())
                                                .reversed())
                                        .limit(3)
                                        .toList())));

Kotlin

val top3RevenueByCategory: Map<String, List<Product>> =
   products.groupBy { it.category }
       .mapValues { (_, list) ->
           list.sortedByDescending { it.price * it.sold }.take(3)
       }

Kotlin's Java interop lets you work with Java classes and records as if they were native Kotlin, though you could also use a Kotlin (data) class instead.

Pitfall 2: Continuing to use Java's Optional.

Tip: Embrace Nullable types

One of the key reasons Java developers switch to Kotlin is Kotlin's built-in nullability support, which waves goodbye to NullPointerExceptions. Therefore, use nullable types only: no more Optionals. Do you still have Optionals in your interfaces? Here is how to easily get rid of them by converting them to nullable types:

Kotlin

//Let’s assume this repository is hard to change, because it’s a library you depend on
class OrderRepository {
      //it returns Optional, but we want nullable types
      fun getOrderBy(id: Long): Optional<Order> = …
}

//Simply add an extension method and apply the orElse(null) trick
fun OrderRepository.getOrderByOrNull(id: Long): Order? = 
                                    getOrderBy(id).orElse(null)

//Now enjoy the safety and ease of use of nullable types:

//Past:
val g = repository.getOrderBy(12).flatMap { order ->
    order.goody.map { it.name }
}.orElse("No goody found")

//Future:
val g = repository.getOrderByOrNull(12)?.goody?.name ?: "No goody found"

Pitfall 3: Continuing to use static wrappers.

Tip: Embrace Extension methods

Extension methods give you many benefits:

  • They make your code much more fluent and readable than wrappers.
  • They can be found with code completion, which is not the case for wrappers.
  • Because Extensions need to be imported, they allow you to selectively use extended functionality in a specific section of your application.

Java

//Very common approach in Java for adding additional helper methods
public class DateUtils {
    public static final DateTimeFormatter DEFAULT_DATE_TIME_FORMATTER =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    public static String formatted(LocalDateTime dateTime,
                                   DateTimeFormatter formatter) {
        return dateTime.format(formatter);
    }

    public static String formatted(LocalDateTime dateTime) {
        return formatted(dateTime, DEFAULT_DATE_TIME_FORMATTER);
    }
}

//Usage (with a static import of DateUtils.formatted)
formatted(LocalDateTime.now());

Kotlin

val DEFAULT_DATE_TIME_FORMATTER: DateTimeFormatter =
    DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")

//Use an extension method with a default argument, which removes the need for an overloaded method.
fun LocalDateTime.asString(
    formatter: DateTimeFormatter = DEFAULT_DATE_TIME_FORMATTER): String =
    this.format(formatter)

//Usage
LocalDateTime.now().asString()

Be aware that Kotlin offers top-level methods and variables. This means we can simply declare, e.g., DEFAULT_DATE_TIME_FORMATTER at the top level, without binding it to a class as is the case in Java.

Pitfall 4: Relying on clumsy Java APIs

Tip: Use Kotlin's slick counterparts.

The Kotlin standard library uses extension methods to make Java libraries much more user-friendly, even though the underlying implementation is still Java. Almost all major third-party libraries and frameworks, like Spring, have done the same.

Example standard library:

Java

String text;
try (var reader = new BufferedReader(
        new InputStreamReader(new FileInputStream("out.txt"),
                StandardCharsets.UTF_8))) {
    text = reader
            .lines()
            .collect(Collectors.joining(System.lineSeparator()));
}
System.out.println("Downloaded text: " + text + "\n");

Kotlin

//Kotlin has enhanced the Java standard library with many powerful extension methods, like on java.io.*, which makes input stream processing a snap due to its fluent nature, fully supported by code completion

val text = FileInputStream("out.txt").use {
    it.bufferedReader().readText()
}
println("Downloaded text: $text\n")

Example Spring:
Java

final var books =  RestClient.create()
       .get()
       .uri("http://.../api/books")
       .retrieve()
       .body( new ParameterizedTypeReference<List<Book>>(){}); // ⇦ inconvenient ParameterizedTypeReference

Kotlin

import org.springframework.web.client.body
val books = RestClient.create()
   .get()
   .uri("http://.../api/books")
   .retrieve()
   .body<List<Book>>() //⇦ Kotlin offers an extension that only requires the type without the need for a ParameterizedTypeReference

Pitfall 5: Using a separate file for each public class

Tip: Combine related public classes in a single file. 

This allows you to get a good understanding of how a (sub-)domain is structured without having to navigate dozens of files.

Java

//In Java, User, Account, and Address would each live in their own file: User.java, Account.java, Address.java

Kotlin

//For domain classes, consider data classes (see why below)
data class User(val email: String,
                //Use nullable types for safety and expressiveness
                val avatarUrl: URL? = null,
                var isEmailVerified: Boolean)

data class Account(val user: User,
                   val address: Address,
                   val mfaEnabled: Boolean,
                   val createdAt: Instant)

data class Address(val street: String,
                   val city: String,
                   val postalCode: String)

Pitfall 6: Relying on the mutable programming paradigm

Tip: Embrace immutability – the default in Kotlin

The trend across many programming languages – including Java – is clear: immutability is winning over mutability. 

The reason is straightforward: immutability prevents unintended side effects, making code safer, more predictable, and easier to reason about. It also simplifies concurrency, since immutable data can be freely shared across threads without the risk of race conditions.

That’s why most modern languages – Kotlin among them – either emphasize immutability by default or strongly encourage it. In Kotlin, immutability is the default, though mutability remains an option when truly needed.

Here’s a quick guide to Kotlin’s immutability power pack:

1. Use val over var

Prefer val over var. IntelliJ IDEA will notify you if you use a var where a val would do.
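A minimal illustration of the difference:

```kotlin
// Prefer val (read-only) over var (reassignable)
fun describeCounters(): String {
    val greeting = "Hello"  // val: cannot be reassigned; the default choice
    // greeting = "Hi"      // ❌ would not compile: val cannot be reassigned

    var counter = 0         // var: only when reassignment is truly needed
    counter += 1
    return "$greeting, counter = $counter"
}

fun main() = println(describeCounters())
```

Reaching for val first keeps most of your state read-only by construction, which is exactly the immutability default this section advocates.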

2. Use (immutable) data classes with copy(...)

For domain-related classes, use data classes with val properties. Kotlin data classes are often compared with Java records. Though there is some overlap, data classes offer the killer feature copy(...), whose absence makes transforming a record (which is often needed in business logic) so tedious:

Java

//only immutable state
public record Person(String name, int age) {
    //Lack of default parameters requires an overloaded constructor
    public Person(String name) {
        this(name, 0);
    }
    //+ due to lack of string interpolation
    public String sayHi() {
        return "Hello, my name is " + name + " and I am " + age + " years old.";
    }
}

//Usage
final var jack = new Person("Jack", 42);
jack: Person[name=Jack, age=42]

//The issue is here: transforming a record requires manually copying the identical state to the new instance ☹️
final var fred = new Person("Fred", jack.age());

Kotlin

//also supports mutable state (var)
data class Person(val name: String,
                  val age: Int = 0) {
  //string interpolation
  fun sayHi() = "Hi, my name is $name and I am $age years old."
}
val jack = Person("Jack", 42)
jack: Person(name=Jack, age=42)

//Kotlin offers the copy method, which, due to the ‘named argument’ feature, allows you to only adjust the state you want to change 😃
val fred = jack.copy(name = "Fred")
fred: Person(name=Fred, age=42)

Moreover, use data classes for domain-related classes whenever possible. Their immutable nature ensures a safe, concise, and hassle-free experience when working with your application’s core.     

Tip: Prefer Immutable over Mutable Collections

Immutable collections have clear benefits: they are thread-safe, can be safely passed around, and are easier to reason about. Although Java offers some immutability features for collections, their usage is dangerous because it easily causes exceptions at runtime:

Java

List.of(1,2,3).add(4); // ❌ unsafe 😬: .add(...) compiles, but throws UnsupportedOperationException

Kotlin

//The default collections in Kotlin are immutable (read-only)
listOf(1,2,3).add(4)  // ✅ safe: does not compile

val l0 = listOf(1,2,3)
val l1 = l0 + 4 // ✅ safe: returns a new list containing the added element
l1 shouldBe listOf(1,2,3,4) // ✅

The same applies to Collections.unmodifiableList(...), which is not only unsafe but also requires an extra allocation:

Java

class PersonRepo {
   private final List<Person> cache = new ArrayList<>();
   // Java – must clone or wrap every call
   public List<Person> getItems() {
       return Collections.unmodifiableList(cache);   //⚠️extra alloc
   }
}

//Usage
personRepo.getItems().add(joe); // ❌ unsafe 😬: .add(...) compiles but throws UnsupportedOperationException

Kotlin

class PersonRepo {

    //The need to type 'mutable' for mutable collections is intentional: Kotlin wants you to use immutable ones by default. But sometimes you need them:
    private val cache: MutableList<Person> = mutableListOf()

    //✅ safe: though the underlying collection is mutable, returning it as its read-only supertype List<...> exposes only the read-only interface
    fun items(): List<Person> = cache
}

//Usage
personRepo.items().add(joe) // ✅ safe: does not compile

When it comes to concurrency, immutable data structures, including collections, should be preferred. In Java, more effort is required with special Collections that offer a different or limited API, like CopyOnWriteArrayList. In Kotlin, on the other hand, the read-only List<...> does the job for almost all use cases. 

If you need to share evolving collections across threads, Kotlin's kotlinx.collections.immutable library offers persistent collections (persistentListOf(...), persistentMapOf(...)), which return a new instance on every modification and share the same rich API as the standard Kotlin collections.

Java

ConcurrentHashMap<String, Integer> persons = new ConcurrentHashMap<>();
persons.put("Alice", 23);
persons.put("Bob",   21);

//not fluent, and data copying is going on
Map<String, Integer> incPersons = new HashMap<>(persons.size());
persons.forEach((k, v) -> incPersons.put(k, v + 1));

//wordy, and it mutates the shared map in place
persons
    .entrySet()
    .stream()
    .forEach(entry ->
        entry.setValue(entry.getValue() + 1));

Kotlin

persistentMapOf("Alice" to 23, "Bob" to 21)
    .mapValues { (key, value) -> value + 1 } // ✅ the same rich API as any other Kotlin Map type, without wholesale data copying

Pitfall 7: Continuing to use builders (or even worse: trying to use Lombok)

Tip: Use named arguments.

Builders are very common in Java. Although they are convenient, they add extra code, are unsafe, and increase complexity. In Kotlin, they are of no use, as a simple language feature renders them obsolete: named arguments. 

Java

public record Person(String name, int age) {

   // Builder for Person
   public static class Builder {
       private String name;
       private int age;

       public Builder() {}

       public Builder name(String name) {
           this.name = name;
           return this;
       }

       public Builder age(int age) {
           this.age = age;
           return this;
       }

       public Person build() {
           return new Person(name, age);
       }
   }
}

//Usage
new Person.Builder().name("Jack").age(36).build(); //compiles and succeeds at runtime

new Person.Builder().age(36).build(); // ❌ unsafe 😬: compiles but fails at runtime

Kotlin

data class Person(val name: String, val age: Int = 0)

//Usage - no builder, only named arguments.
Person(name = "Jack") //✅safe: if it compiles, it always succeeds at runtime
Person(name = "Jack", age = 36) //✅

2. Extend/convert an existing Java application

If you have no greenfield option for trying out Kotlin, adding new Kotlin features or whole Kotlin modules to an existing Java codebase is the way to go. Thanks to Kotlin’s seamless Java interoperability, you can write Kotlin code that looks like Java to Java callers. This approach allows for:

  • Gradual migration without big-bang rewrites
  • Real-world testing of Kotlin in your specific context
  • Building team confidence with production Kotlin code

Rather than starting just anywhere, consider these approaches:

Outside-in:

Start in the "leaf" sections of your application (e.g. a controller or a batch job) and work your way in toward the core domain. This gives you the following advantages:

  • Compile-time isolation: Leaf classes rarely have anything depending on them, so you can flip them to Kotlin and still build the rest of the system unchanged.
  • Fewer ripple edits. A converted UI/controller can call existing Java domain code with almost no changes thanks to seamless interop.
  • Smaller PRs, easier reviews. You can migrate file-by-file or feature-by-feature.

Inside-out:

Starting at the core and then moving to the outer layers is often a riskier approach, as it compromises the advantages of the outside-in approach mentioned above. However, it is a viable option in the following cases:

  • Very small or self-contained core. If your domain layer is only a handful of POJOs and services, flipping it early may be cheap and immediately unlock idiomatic constructs (data class, value classes, sealed hierarchies).
  • Re-architecting anyway. If you plan to refactor invariants or introduce DDD patterns (value objects, aggregates) while you migrate, it’s sometimes cleaner to redesign the domain in Kotlin first.
  • Strict null-safety contracts. Putting Kotlin at the center turns the domain into a “null-safe fortress”; outer Java layers can still send null, but boundaries become explicit and easier to police.
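As a sketch of what such a boundary can look like (the names below are illustrative, not from this series): the Kotlin core validates incoming references eagerly, so a null slipping in from a Java caller fails fast at the edge instead of deep inside the domain.

```kotlin
// Illustrative domain class and boundary check (hypothetical names)
data class Customer(val id: Long, val email: String)

class CustomerService {
    // Java callers can still pass null despite the null-safe domain,
    // so the boundary makes the contract explicit:
    fun register(id: Long, email: String?): Customer {
        val safeEmail = requireNotNull(email) { "email must not be null" }
        return Customer(id, safeEmail)
    }
}
```

requireNotNull throws an IllegalArgumentException with the given message, so contract violations surface immediately at the boundary rather than somewhere inside the core.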

Module by module

  • If your architecture is organized by functionality rather than layers, and the modules have a manageable size, converting them one by one is a good strategy.

Language features for converting Java to Kotlin

Kotlin offers a variety of features – primarily annotations – that allow your Kotlin code to behave like native Java. This is especially valuable in hybrid environments where Kotlin and Java coexist within the same codebase.
Kotlin

class Person @JvmOverloads constructor(val name: String,
                                       var age: Int = 0) {
    companion object {

        @JvmStatic
        @Throws(InvalidNameException::class)
        fun newBorn(name: String): Person =
            if (name.isEmpty()) throw InvalidNameException("name not set")
            else Person(name, 0)

        @JvmField
        val LOG = LoggerFactory.getLogger(Person::class.java)
    }
}

Java

//thanks to @JvmOverloads an additional constructor is created, propagating Kotlin’s default arguments to Java
var john = new Person("John");

//Kotlin automatically generates getters (val) and setters (var) for Java
john.setAge(23);
var name = john.getName();

//@JvmStatic and @JvmField allow accessing (companion) object fields and methods as statics in Java

//Without @JvmStatic it would be: Person.Companion.newBorn(...)
var ken = Person.newBorn("Ken");

//Without @JvmField it would be: Person.Companion.getLOG()
Person.LOG.info("Hello World, Ken ;-)");

//@Throws(...) will put the checked Exception in the method signature 
try {
  Person ken =  Person.newBorn("Ken");
} catch (InvalidNameException e) {
  //…
}

Kotlin

@file:JvmName("Persons")
package org.abc

@JvmName("prettyPrint")
fun Person.pretty() =
    Person.LOG.info("$name is $age old")

Java

//@JvmName on files and methods makes the Java call sites look natural: without it, the call would be PersonKt.pretty(ken)
Persons.prettyPrint(ken);

IntelliJ IDEA’s Java to Kotlin Converter

IntelliJ IDEA offers a Java to Kotlin Converter, so theoretically, the tool can do it for you. However, the resulting code is far from perfect, so use it only as a starting point. From there, convert it to a more Kotlin-esque representation. More on this topic will be discussed in the final section of this blog post series: Success Factors for Large-Scale Kotlin Adoption.
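To give a feel for this cleanup step (a hypothetical example, not the converter's literal output): J2K tends to preserve Java's statement-oriented style, which you would then compact into idiomatic Kotlin.

```kotlin
// Converter-style output: the explicit null check and early returns survive the translation
fun firstUppercasedConverted(names: List<String>): String {
    val first = names.firstOrNull()
    if (first != null) {
        return first.uppercase()
    }
    return "UNKNOWN"
}

// Idiomatic rewrite: safe call, elvis operator, and an expression body
fun firstUppercased(names: List<String>): String =
    names.firstOrNull()?.uppercase() ?: "UNKNOWN"
```

Both functions behave identically; the second simply says the same thing in Kotlin's native idiom.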

Taking Java as a starting point will most likely lead you to write Java-ish Kotlin, which gives you some benefits but does not unleash Kotlin's full potential. Therefore, writing a new application is the approach I prefer.

Next in the series

This installment in our Ultimate Guide to Successfully Adopting Kotlin in a Java-Dominated Environment series of blog posts demonstrated how Kotlin experiments can evolve into production code. Our next post focuses on the human side of adoption: convincing your peers. It explains how to present clear, code-driven arguments, guide new developers, and create a small but lasting Kotlin community within your team.

Urs Peter

Urs is a seasoned software engineer, solution architect, conference speaker, and trainer with over 20 years of experience in building resilient, scalable, and mission-critical systems, mostly involving Kotlin and Scala.

Besides his job as a consultant, he is also a passionate trainer and author of a great variety of courses ranging from language courses for Kotlin and Scala to architectural trainings such as Microservices and Event-Driven Architectures.

As a people person by nature, he loves to share knowledge and to inspire and be inspired by peers at meetups and conferences. Urs is a JetBrains-certified Kotlin trainer.

Read the whole story
alvinashcraft
8 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Work smarter with Copilot in the People, Files, and Calendar apps

1 Share

Hey, Insiders! I’m Yash Kamalanath, a Principal Product Manager on the Microsoft 365 companion apps team. I’m excited to share that the companion apps – People, Files, and Calendar – are now smarter than ever: Microsoft 365 Copilot is built into People and Files, with Copilot in Calendar coming soon. These smart agents make finding what you need and getting things done in Windows lightning fast.

Work smarter with Copilot in the People, Files, and Calendar apps

Looking up a colleague, finding a file, or checking what’s next on your calendar should be effortless, but it’s easy to get distracted between tasks. With People, Files, and Calendar apps, everything you need is just a click away on the taskbar. These Copilot-integrated apps surface the right content in real time so you can stay focused and keep your work moving.

With Copilot, each companion app will be grounded in your work data – people, files, meetings – making them the fastest and easiest way to prompt for relevant questions. Companion apps offer instant suggestions for every item, plus a freeform box for your own prompts. For example, you can catch up on the latest from your top collaborators, flag comments that need your input, or recap meetings you missed. Start simple with a search in a companion app and seamlessly hand off to the Microsoft 365 Copilot app with full context for more complex inquiries – no extra steps needed.

Copilot in People: Pick up on updates from your collaborators

With Copilot, the People app goes beyond names and title searches – it surfaces recent communications, highlights key responsibilities, and suggests tailored prompts to help you connect and collaborate with teammates across your organization.

Gather more insights by asking Copilot:

  • “Tell me about John.”
  • “What’s the latest from John?”
  • “Show me follow up tasks with John.”

Copilot in Files: Learn more about your content

In the Files app, you can start a Copilot conversation directly from the content to summarize documents or presentations, review changes, analyze data, and create action items – without breaking your flow.

Gather more insights by asking Copilot:

  • “What’s the context for this?”
  • “Summarize this workbook.”
  • “Highlight key figures or trends.”

Copilot in Calendar: Keep your workday on track

Coming soon, Copilot will be integrated into the Calendar app, where you will be able to get meeting summaries and prep material to catch up and prepare for your day, manage your schedule in real time from your taskbar, and get up to speed in seconds on missed conversations. 

Gather more insights by asking Copilot:

  • “Suggest talking points for this meeting.”
  • “Check for action items.”
  • “Key takeaways from this meeting.”

Availability

Copilot in the companion apps is available for Windows 11 users who have Microsoft 365 companion apps installed, are on either Enterprise or Business SKUs, and have a Microsoft 365 Copilot license. Copilot in People and Files is available immediately, and Copilot in Calendar will become available soon. As an admin, you can pin the Microsoft 365 Copilot app together with selected companion apps to the Windows 11 taskbar on Intune-managed devices. This provides users with quick access to Copilot features such as Chat, Search, and Agents.

Learn more about setting up People, Files, and Calendar: Microsoft 365 companion apps overview - Microsoft 365 Apps

Learn more about pinning Microsoft 365 Copilot and its companion apps to the Windows taskbar: Pin Microsoft 365 Copilot and its companion apps to the Windows taskbar.

Feedback

We’re excited to bring you these new capabilities, and we’d love to hear your thoughts on how these companion apps are working for you! Share feedback or suggestions anytime using the Give Feedback button in the top-right corner of any of the companion apps.

 

Learn about the Microsoft 365 Insider program and sign up for the Microsoft 365 Insider newsletter to get the latest information about Insider features in your inbox once a month!


Evolving eCommerce Shipping Solutions

1 Share

In this episode of Mailin’ It!, hosts Karla Kirby and Jeff Marino jump into the fast-paced world of e-commerce shipping with guest Heather Maday, Senior Director of Sales Enablement at the US Postal Service. Heather shares how USPS has transformed its operations to support today’s e-commerce economy, where shoppers expect speed, visibility, and value with every order. From data-driven logistics and new fulfillment technologies to transparent pricing, Heather explains how USPS helps businesses reach customers affordably and efficiently.


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.





Download audio: https://afp-920619-injected.calisto.simplecastaudio.com/f32cca5f-79ec-4392-8613-6b30c923629b/episodes/efb1a195-2647-40cd-9c21-d318c8af9c65/audio/128/default.mp3?aid=rss_feed&awCollectionId=f32cca5f-79ec-4392-8613-6b30c923629b&awEpisodeId=efb1a195-2647-40cd-9c21-d318c8af9c65&feed=bArttHdR

Theme Parks, Bakeries, and G-Code: Samantha’s Unexpected Path Into Software

1 Share

In this episode, I talk with Samantha Lopez, whose path into software development is anything but typical -- and that’s exactly what makes it so inspiring.


Whether you’re just starting out or pivoting mid-career, Samantha’s journey proves that your past experiences can be your biggest asset in tech -- you just have to connect the dots and keep going!


----

You can find Samantha at:

- LinkedIn: https://www.linkedin.com/in/samlopezdev/

- GitHub: https://github.com/samlopezdev

- Portfolio: https://samlopezdev.netlify.app/


----


Download audio: https://anchor.fm/s/f7b5ab38/podcast/play/110090479/https%3A%2F%2Fd3ctxlq1ktw2nl.cloudfront.net%2Fstaging%2F2025-9-23%2F409809056-44100-2-a1003fd8178b1.mp3

Chrome DevTools To Benefit From MCP

1 Share

Coding Azure 20: Creating a Backend Processor for Reading an Azure Storage Queue

1 Share

Here’s everything you need to securely pull messages from an Azure Storage queue as part of creating a reliable, scalable and extendable asynchronous application.

In my last few posts, I’ve walked through creating an Azure Storage Queue and showed how to add messages to that queue from either an ASP.NET Core or TypeScript/JavaScript frontend. This post is about how to create the backend processor that will read messages from that queue and process them.

One option is to use an Azure Function App or a Logic App with a trigger tied to the Storage queue, so that a function runs whenever a message is added to the queue. However, the best part of using Function Apps, how easy they are to configure and manage, comes at a price: you give up a lot of control over both how you process a message and how you manage the service. You can get some of that control back by deploying your Function App to an App Service.

Having said that, for this post, I’m going to create my backend processor as a Worker Service in an App Service. That choice gives me access to more functionality in processing messages than I get with a Function App (for example, peeking at a message rather than reading it) and all the features of an App Service in managing my backend.

Creating and Securing the App Service

Fundamentally, you create your App Service exactly as you would create an App Service for a backend Web Service. For your development App Service, you may want to enable Basic Authentication to support configuring your App Service.

After creating the service, go to the Settings | Configuration menu choice. From the settings page that displays on the right, find the Always On option and set it to On (that will increase your costs but will also make sure that your application doesn’t shut down because you went a long time without seeing a message). You should also, for your development App Service, enable logging from the Monitoring | App Service logs menu down the left side of your App Service.

You will also want to make sure only your approved backend processors can read your queue. Because I’m creating my backend processor as a server-side application, my preferred tool (and Microsoft’s recommendation) is to use an Azure Managed Identity. When you create your Managed Identity, make a note of its client id—you’ll need it later.

Still in the portal, surf to your Storage queue, click its Access Control (IAM) menu choice from the menu down the left side, and use the Add role assignment button on the resulting page to give your Managed Identity the Storage Queue Data Contributor role for your queue.

Finally, surf to the App Service that you will be deploying your backend processor to. From the App Service’s Settings | Identity choice in the menu down the left side, use the User Assigned tab to assign your Managed Identity to the service.

You’re now ready to create your backend processor.

Creating the Processor

First, of course, you need to create your project in Visual Studio or Visual Studio Code. In Visual Studio, you want to use the Worker Service template; in Visual Studio Code, you want to use this command to create your project.

I named my project WarehouseMgmtProductsProcessor. You’ll want to swap in your own project name:

dotnet new worker -n  WarehouseMgmtProductsProcessor

Once your project is created, you’ll need to add the NuGet package Azure.Storage.Queues to your project (if you’re using Visual Studio’s Manage NuGet packages tab, search for “azure storage queues”). You’ll also need to add the Microsoft.Identity.Web package so that your backend processor can get authorization to access the queue.

Once your project is created, open the Worker.cs file that’s automatically added to your project. The loop inside the ExecuteAsync method starts and stops automatically with the App Service you deploy your backend to. You just need to add the code to check for a message and process it.

Accessing the Queue

Before you can read from the queue, you need to create a QueueClient object, passing the URL for your queue (wrapped inside a Uri object) and a credential object that authorizes access to the queue.

If you’re using a Managed Identity to provide authorization (and you should), you can use the ManagedIdentityCredential object to authorize your client, as I do in the following code. You’ll need to pass your ManagedIdentityCredential the client id of the Managed Identity you created.

Typical code in the Worker class’s constructor to create the QueueClient and put it in a local field would look like this:

private readonly QueueClient qc;
private readonly ILogger<Worker> _logger;

public Worker(ILogger<Worker> logger)
{
    _logger = logger;
    qc = new QueueClient(
        new Uri("https://warehousemgmtphv.queue.core.windows.net/updateproductinventory"),
        new ManagedIdentityCredential("ee8…-…-…-…-…be7"));
}

Reading from the Queue

With the QueueClient object created, you can then use its ReceiveMessageAsync method to read the next message from the queue and return it wrapped inside a Response object—assuming a message exists on the queue. If there is no message on the queue, then ReceiveMessageAsync returns a Response object that has its Value property set to null.

If there is a message on the queue, you can retrieve the value of the message (almost certainly the JSON representation of some object) from the Value property’s Body property. The message will be invisible to any other processor for 30 seconds (assuming there is another processor).

It’s your responsibility to delete the message from the queue either when you’ve completed processing or have an error processing the message (if you have an error, you should probably write the message out to some “dead letter” space to be reviewed later). If you don’t find a message, your code should wait some reasonable period of time before trying again.

This code assumes that the queue is holding the JSON representation of a class I called QueueDTO:

while (!stoppingToken.IsCancellationRequested)
{
    Response<QueueMessage> msg = await qc.ReceiveMessageAsync();
    if (msg.Value != null)
    {
        try
        {
            QueueDTO? qDto = JsonSerializer.Deserialize<QueueDTO>(msg.Value.Body.ToString());
            //...processing
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Error processing message");
            //…write message to some location for review
        }
        finally
        {
            await qc.DeleteMessageAsync(msg.Value.MessageId, msg.Value.PopReceipt);
        }
    }
    else
    {
        await Task.Delay(5000, stoppingToken);
    }
}
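For reference, a minimal QueueDTO might look something like this (the property names here are placeholders; your real DTO will mirror whatever JSON your frontend writes to the queue):

```csharp
using System;
using System.Text.Json;

// A quick round trip to confirm the shape serializes cleanly
string json = JsonSerializer.Serialize(new QueueDTO(42, -3));
QueueDTO? dto = JsonSerializer.Deserialize<QueueDTO>(json);
Console.WriteLine(dto);

// Placeholder payload: substitute the properties your frontend actually sends
public record QueueDTO(int ProductId, int QuantityChange);
```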

You can pass a TimeSpan object as the second parameter to the ReceiveMessageAsync method to change how long the message remains invisible to other processors after being received.
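For example, this sketch (using the qc field created in the constructor) extends the invisibility window to two minutes and also honors the Worker's stopping token, so a shutdown request interrupts a pending read; it needs a live queue to actually run:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

// Sketch only: read with a two-minute visibility timeout, passing the
// Worker's stopping token so shutdown interrupts the read
static async Task<QueueMessage?> ReadWithLongerTimeout(
    QueueClient qc, CancellationToken stoppingToken)
{
    Response<QueueMessage> msg =
        await qc.ReceiveMessageAsync(TimeSpan.FromMinutes(2), stoppingToken);
    return msg.Value; // null if the queue was empty
}
```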

If your queue is going to empty for significant periods of time, you might want to implement a more sophisticated wait pattern than the “Always 5 seconds” that I’ve used in my sample code (e.g., if you go three reads without a message, extend the wait period to 30 seconds, at 20 reads, extend the wait to 60 seconds and so on). If you know when your application won’t be running, you could create a recurring Logic App to start up and shut down your service.
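One way to sketch that kind of escalating backoff (the thresholds here are just the ones from my example; tune them to your own traffic):

```csharp
using System;

// Escalating backoff: 5 seconds for the first few empty reads, then 30,
// then 60 (thresholds are illustrative; adjust for your own traffic)
static TimeSpan NextDelay(int consecutiveEmptyReads) =>
    consecutiveEmptyReads switch
    {
        < 3  => TimeSpan.FromSeconds(5),
        < 20 => TimeSpan.FromSeconds(30),
        _    => TimeSpan.FromSeconds(60)
    };

// In the polling loop, increment a counter on every empty read, reset it
// to zero whenever a message arrives, and wait for NextDelay(counter)
Console.WriteLine(NextDelay(0));  // fresh queue: short wait
Console.WriteLine(NextDelay(25)); // long-empty queue: long wait
```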

You can also retrieve more than one message at a time by using the ReceiveMessagesAsync method: pass it a maximum messages value (up to a maximum of 32 messages) and it returns an array of QueueMessage objects rather than a single message.

This code, for example, uses ReceiveMessagesAsync to retrieve up to 10 messages at a time and then processes each message before reading the next batch:

Response<QueueMessage[]> msgs = await qc.ReceiveMessagesAsync(10);
if (msgs.Value != null)
{
    foreach (QueueMessage qmsg in msgs.Value)
    {
        try
        {
            QueueDTO? qDto = JsonSerializer.Deserialize<QueueDTO>(qmsg.Body.ToString());
            //...processing
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Error processing message");
        }
        finally
        {
            await qc.DeleteMessageAsync(qmsg.MessageId, qmsg.PopReceipt);
        }
    }
}

Since each receive is actually a separate HTTP request to the queue, retrieving multiple messages is probably a good idea.

After you deploy your backend to an App Service, you’ll be able to stop it and start it from the Azure Portal. Under the hood, the portal passes a cancellation token to your Worker, causing the loop you’ve put your code inside of to terminate.

You can also pass that token on to your ReceiveMessageAsync method to terminate it and stop your Worker a little faster. You should not, however, pass the token to your DeleteMessageAsync method. Unless your messages are idempotent (i.e., processing the same message twice won’t cause a problem), you want read and processed messages to be deleted.

Deploying and Testing Your App

You can deploy your backend Worker to your App Service as either a triggered or continuous service (these two options show up as different slots in your App Service—you can deploy to both).

A continuous Worker will be started automatically after deployment while a triggered Worker must be started manually. For testing and debugging purposes, you should deploy as a triggered service—that will allow you to, for example, load test data into your queue before starting your backend. In production, you may want to switch to continuous.

Visual Studio

To deploy from Visual Studio, from the Build menu, select Publish Selection to open a Publish tab for your project. Click the Add a publish profile link to start a wizard for creating your profile.

You’ll find that, after you select Azure as your target, you’re given a choice of picking the appropriate WebJobs (either Azure WebJobs (Windows) or Azure WebJobs (Linux)) rather than the App Services choices you may be used to. Just pick the category that matches the platform you used for your App Service (e.g., Azure WebJobs (Windows) if you created a Windows-based App Service) and you’ll be taken to a list of App Services. Select the App Service you want to deploy your backend to and finish creating your profile.

When your completed publish profile is displayed, click on its “Show all settings” link to display the Profile settings dialog. On that dialog, you’ll see a WebJob Type dropdown list that will let you choose between deploying your application into the service’s Continuous or Triggered slot. For testing and development, your best bet is to select Triggered.

With your profile configured, close the Profile settings dialog by clicking the Save button and, back in your profile, click the Publish button to deploy your backend to its App Service.

If you do want to publish the production version of your backend as continuous, rather than triggered, your best choice is to create a second publish profile (call it something clever, like “Publish to Prod”) and set its WebJob Type to Continuous.

Visual Studio Code

To deploy your app from Visual Studio Code, you’ll first need to create a publish package using dotnet publish. Once that publish package is created (and assuming that you’ve added the Azure Extensions to your copy of Visual Studio Code), you can open Visual Studio Code’s Azure Extensions panel, right-click on the App Service you intend to deploy to and select Deploy Now. That will open a file browser dialog in Visual Studio Code—drill down through the project’s bin folder until you find the publish folder and select that folder.

You’ll then be asked to reselect your App Service and, after that, your backend will be deployed (you’ll also be asked if you always want to deploy your project to that App Service. Take the option—it will save you time later).

By default, your backend will be deployed to your App Service’s Triggered slot which is what you want for testing and development. To switch your deployment to Continuous, open your project’s Properties/PublishProfiles/<project name>-WebDeploy.pubxml file and, in that file, set the <WebJobType> element to Continuous.
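The relevant element in that file ends up looking like this (the file path is shown as a comment; the rest of the profile is unchanged):

```xml
<!-- Properties/PublishProfiles/<project name>-WebDeploy.pubxml -->
<Project>
  <PropertyGroup>
    <WebJobType>Continuous</WebJobType>
  </PropertyGroup>
</Project>
```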

Testing Your Backend

To try out your backend, in the Azure Portal, surf to your App Service and, in the menu down the left side, select the Settings | WebJobs menu choice to display the Triggered/Continuous slots where you’ve deployed applications. To run a backend in the Triggered slot, click the run icon in the Run column near the right end of the slot.

Assuming that you’ve enabled logging for your App Service, you can view any log messages generated by your application by clicking the clipboard icon in the Logs column of the slot.

You can stop your backend by clicking the Refresh button on the menu across the top of the list of slots and then clicking on the stop icon in the Run column.

Next Steps

In addition to receiving and deleting messages, you can also update a message, changing either its body or the length of time the message remains invisible after being received, and then leave the message on the queue. The PeekMessage method is an alternative to the ReceiveMessageAsync method: PeekMessage lets you retrieve a message without making it invisible to any other processor.

These options can be useful if you have multiple processors, each of which performs different processing for the messages on the queue. Since each of the processors needs to see every message, you could use PeekMessage to read a message and, after processing the message, update the message instead of deleting it. In this pattern, each message would stay on the queue until every processor had processed it (you’ll need some processor to regularly sweep through the queue, find all the messages that have had all their processing done, and delete those messages).

If that sounds messy/complicated, it might be easier just to have your frontend write the message to multiple queues with each queue having its own, dedicated processor. Alternatively, you could start thinking about moving to an Azure Service Bus which supports having multiple processors natively. I’ll be covering Service Buses in my next post.
