This episode explains what is arguably the best career advice you'll hear this week: the one skill that signifies seniority in software engineers is the ability to synthesise and optimise for multiple factors at once. Instead of focusing on a single factor, such as performance or maintainability, senior engineers identify and weigh the various trade-offs involved in any decision.
If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com.
If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community by visiting https://developertea.com/discord today!
If you're enjoying the show and want to support the content, head over to iTunes and leave a review! It helps other developers discover the show and keeps us focused on what matters to you.
Join Elaiza Benitez and special guest, Sudeep Ghatak, Practice Lead at Theta New Zealand, to discover how Microsoft Copilot Studio enables intelligent agent-to-agent collaboration for seamless employee onboarding.
In this episode, Sudeep showcases multiple AI agents working together to automate key onboarding tasks such as HR registration, IT setup, and facilities access. Learn how each agent is purpose-built to handle specific workflows, creating a scalable and efficient onboarding experience. Perfect for developers, IT professionals, and enterprise teams exploring advanced AI automation in Microsoft Copilot Studio.
Note: at the time this episode airs, this is a Preview feature.
Learn more about how to connect your agents to other agents at https://learn.microsoft.com/en-us/microsoft-copilot-studio/authoring-add-other-agents
Learn more about Copilot at aka.ms/copilotstudio
✅ Chapters:
00:00 Introduction
01:13 Difference between bot and agent
02:02 Key features of agents
03:13 Agent use case - staff recruitment
05:20 Agent solution design
05:58 Recruitment agent
08:06 Offer Letter agent
09:14 Office Admin agent
09:34 Service Desk agent
09:56 Canvas app to initiate the hiring process
10:11 Demo of agent to agent collaboration
13:54 Outro
15:15 Limitations
✅ Resources:
Sudeep Ghatak LinkedIn profile - https://www.linkedin.com/in/sudeepghatak/
GitHub profile - https://github.com/sudeepghatak
YouTube Channel - https://www.youtube.com/channel/UCEXVGKXko70Y4wk3DZE26OQ
In today's complex IT environments, monitoring and understanding the health and performance of your applications and infrastructure is critical. The Red Hat build of OpenTelemetry, which can be installed in Red Hat OpenShift, provides a powerful framework for collecting and exporting telemetry data, enabling comprehensive metrics and logs reporting. In this article, we will explore the benefits and capabilities of using the Red Hat build of OpenTelemetry for effective observability.
OpenTelemetry is an open source project under the Cloud Native Computing Foundation (CNCF) that provides a set of APIs, libraries, agents, and collectors to capture distributed traces, metrics, and logs. The Red Hat build of OpenTelemetry is a distribution of the upstream OpenTelemetry project, built and supported by Red Hat.
Metrics provide insights into the performance and health of your applications. The Red Hat build of OpenTelemetry can collect various types of metrics through the receivers described later in this article.
You can export these metrics to various monitoring systems, such as Prometheus, for visualization and analysis.
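For instance, if Prometheus is your back end, a minimal collector configuration that exposes the collected metrics on a scrape endpoint could look roughly like the sketch below. The port is an arbitrary choice, and you should confirm that the Prometheus exporter is included in your collector build:

receivers:
  otlp:
    protocols:
      grpc: {}                 # accept OTLP metrics over gRPC on the default port

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"   # expose a /metrics endpoint for Prometheus to scrape

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus]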
The Red Hat build of OpenTelemetry simplifies the collection and management of telemetry data, offering several key advantages.
You can seamlessly integrate the Red Hat build of OpenTelemetry with existing monitoring and logging systems. The OpenTelemetry Collector acts as a central hub for receiving, processing, and exporting telemetry data. The OpenTelemetry Collector is composed of the following components:
Receivers: Ingest telemetry data into the collector, either by scraping or pulling it from a source or by accepting it over a protocol such as OTLP.
Processors: Transform, filter, enrich, or batch the data as it moves through the collector.
Exporters: Send the processed data to one or more back ends, such as Prometheus, Mimir, or Loki.
Check out the OpenTelemetry architecture documentation for more information.
To get started with the Red Hat build of OpenTelemetry, you can follow these general steps.
The deployment process of the Red Hat build of OpenTelemetry operator is very straightforward.
In your OpenShift cluster, open the web console, navigate to Operators > OperatorHub, search for Red Hat build of OpenTelemetry, and install the operator with the default settings.
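If you prefer the CLI to the web console, a Subscription manifest along the following lines should achieve the same result. The package name, channel, and catalog source are assumptions based on common Red Hat operator naming; verify them with oc get packagemanifests -n openshift-marketplace | grep -i opentelemetry before applying:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: opentelemetry-product          # assumed package name; verify in your catalog
  namespace: openshift-operators       # global operator namespace, no OperatorGroup needed
spec:
  channel: stable                      # assumed channel
  name: opentelemetry-product
  source: redhat-operators
  sourceNamespace: openshift-marketplace

Apply the manifest with oc apply -f and wait for the operator pod to become ready before creating collectors.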
The OpenTelemetry collector is where all the magic happens. In the collector, you define all the receivers, processors, and exporters you want in your environment. Let's dig into the receivers that make the most sense in a Kubernetes environment.
Strategically select the receivers that make sense for your use case. You might not need all of them, but enabling the receivers discussed in this article will provide a comprehensive metrics and logs reporting system tailored to Kubernetes environments. By selecting only the receivers you need, you keep the footprint of the opentelemetry-collector low, allowing you to deploy the collector even in environments with limited resources.
A pipeline defines the complete lifecycle of telemetry data. This journey begins with the reception of data from various sources, continues through optional processing stages where the data can be transformed, enriched, or filtered, and culminates in the export of the data to one or more back-end destinations for storage, visualization, or analysis.
A pipeline typically looks like this:
service:
  pipelines:
    metrics:
      receivers:
        - hostmetrics
        - kubeletstats
        - k8s_cluster
      processors:
        - k8sattributes
      exporters:
        - debug
There are three types of pipelines: metrics, logs, and traces.
Each of these pipeline types has its own set of specialized receivers, processors, and exporters that are tailored to the specific characteristics of the telemetry data they handle. By configuring pipelines appropriately for metrics, logs, and traces, users can gain comprehensive observability into their applications and infrastructure using the Red Hat build of OpenTelemetry.
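As an illustration of how these pipeline types sit side by side in a single collector, a service section with separate metrics and logs pipelines might look roughly like this; the receiver and exporter names anticipate the ones configured later in this article:

service:
  pipelines:
    metrics:
      receivers: [hostmetrics, kubeletstats, k8s_cluster]
      processors: [k8sattributes]
      exporters: [otlphttp]
    logs:
      receivers: [k8sobjects, k8s_events, journald]
      processors: [k8sattributes]
      exporters: [otlphttp/logs]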
Now that we know how a collector works, let’s define a collector that will have all the receivers previously mentioned.
The full collector is available here. The most important pieces are the following.
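For orientation, each of the receiver snippets below lives inside the spec.config block of the OpenTelemetryCollector custom resource that the operator reconciles. A skeleton of that wrapper, using the resource name and namespace this article deploys later, might look roughly like this; the exact API version depends on your operator release, and daemonset mode is assumed because hostmetrics and journald need to run on every node:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: k8s-otel
spec:
  mode: daemonset                  # one collector pod per node
  serviceAccount: otel-collector   # assumed name; must match the RBAC bindings from the repo
  config:
    receivers: {}                  # filled in with the snippets below
    processors: {}
    exporters: {}
    service:
      pipelines: {}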
The hostmetrics receiver collects host-level metrics from the node. The available scrapers (cpu, memory, disk, load, filesystem, paging, processes, and process) are all enabled in the example below.
Example:
receivers:
  hostmetrics:
    collection_interval: 60s
    initial_delay: 1s
    root_path: /
    scrapers:
      cpu: {}
      memory: {}
      disk: {}
      load: {}
      filesystem: {}
      paging: {}
      processes: {}
      process: {}
The k8sobjects receiver collects objects from the Kubernetes API server, either pulling them on an interval or watching them for changes. The monitored objects in the example below are pods and events.
Example:
k8sobjects:
  auth_type: serviceAccount
  objects:
    - name: pods
      mode: pull
      interval: 60s
    - name: events
      mode: watch
The kubeletstats receiver scrapes node, pod, and container metrics from the kubelet API on each node. Its configuration parameters are shown in the example below.
Example:
kubeletstats:
  collection_interval: 60s
  auth_type: "serviceAccount"
  endpoint: "https://${env:K8S_NODE_NAME}:10250"
  insecure_skip_verify: true
The k8s_cluster receiver collects cluster-level metrics from the Kubernetes API server; setting distribution: openshift enables OpenShift-specific metrics. Its configuration parameters are shown in the example below.
Example:
k8s_cluster:
  distribution: openshift
  collection_interval: 60s
The k8s_events receiver collects Kubernetes events from the API server. To limit the scope of event collection to specific namespaces, the namespaces parameter can be defined as a list of namespace names. This allows for focused monitoring and reduces the volume of event data being processed. In the example below, the namespaces line is commented out, so events from all namespaces are collected.
Example:
k8s_events:
  # namespaces: [project1, project2]
The journald receiver reads systemd journal logs directly from the node. The configuration used in this article is shown in the example below.
Example:
journald:
  files: /var/log/journal/*/*
  priority: info
  units:
    - kubelet
    - crio
    - init.scope
    - dnsmasq
  all: true
  retry_on_failure:
    enabled: true
    initial_interval: 1s
    max_interval: 60s
    max_elapsed_time: 5m
You can find the complete OpenTelemetry collector on GitHub.
Depending on the receivers used, the collector needs different permissions. You will find the required permissions in the documentation of each receiver. For the receivers listed above, the necessary permissions are available in the repository.
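To give a sense of what those permissions look like, the cluster-scoped receivers (k8s_cluster, k8sobjects, k8s_events, kubeletstats) typically need read access along the lines of the ClusterRole below, bound to the collector's service account. Treat this as an illustrative sketch rather than the exact rules from the repo:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
  # k8s_cluster / k8sobjects / k8s_events: read core cluster objects and events
  - apiGroups: [""]
    resources: ["pods", "namespaces", "nodes", "events", "services"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "daemonsets", "statefulsets", "replicasets"]
    verbs: ["get", "list", "watch"]
  # kubeletstats: query the kubelet API through the node proxy
  - apiGroups: [""]
    resources: ["nodes/stats", "nodes/proxy"]
    verbs: ["get"]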
To install the collector with all necessary permissions, run the following:
git clone https://github.com/giofontana/rh-build-opentelemetry.git
cd rh-build-opentelemetry
oc apply -k manifests/overlays/debug
Initially, our collector only logs the metrics collected by the receivers using the debug exporter. To fully leverage the collected metrics, we will now change the configuration to use the OTLP/HTTP exporter, enabling the collector to send the gathered metrics data to a remote system accessible via HTTP using the OpenTelemetry Protocol (OTLP).
For demonstration and testing of the OpenTelemetry setup, we will deploy the observability back end in a Red Hat Enterprise Linux virtual machine. These systems are essential for illustrating the complete data flow, from collection through processing to visualization and storage. The systems deployed in the VM are Grafana Mimir for metrics storage, Grafana Loki for log storage, and Grafana for visualization.
Detailed, step-by-step instructions on how to deploy these systems in a virtual machine are beyond the scope of this article, but you can refer to their documentation for more information.
With Mimir up and running, we will now reconfigure the OpenTelemetry collector to send metrics to it. To do so, change spec.config.exporters.otlphttp.endpoint to reflect your environment:
# Change the lines highlighted below:
vi manifests/overlays/all/opentelemetry-collector.yaml

exporters:
  debug:
    verbosity: basic
  otlphttp:
    endpoint: 'http://10.1.1.100:9009/otlp' # CHANGE IP
    tls:
      insecure: true
  otlphttp/logs:
    endpoint: 'http://10.1.1.100:3100/otlp' # CHANGE IP
    tls:
      insecure: true
You might need to add other parameters depending on TLS and other configurations you have on your external system. Check this documentation for more information about configuration parameters.
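For example, if the remote endpoint serves TLS with a private CA, the exporter's tls block might look roughly like this; the hostname and certificate paths are placeholders, and the certificates must be mounted into the collector pod separately:

otlphttp:
  endpoint: 'https://mimir.example.com/otlp'
  tls:
    insecure: false
    ca_file: /etc/otel/certs/ca.crt        # CA used to verify the server certificate
    cert_file: /etc/otel/certs/client.crt  # optional client certificate for mTLS
    key_file: /etc/otel/certs/client.key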
With the opentelemetry-collector.yaml file properly configured, you can deploy the new collector.
# Delete the existing collector, if it exists
oc delete OpenTelemetryCollector otel -n k8s-otel
#Deploy the new one
oc apply -k manifests/overlays/all/
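Before moving on to Grafana, you can confirm that the new collector rolled out cleanly. Something like the following should work; the DaemonSet created by the operator is typically named after the collector resource, so otel-collector is assumed here:

# Check the collector resource and its pods
oc get opentelemetrycollector -n k8s-otel
oc get pods -n k8s-otel

# Tail the collector logs and look for exporter errors
oc logs -n k8s-otel daemonset/otel-collector --tail=50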
To verify the setup, add a new data source to Grafana for Mimir (a Prometheus data source pointing at Mimir's Prometheus-compatible query endpoint).
Then, use the Explore function. You should now see available metrics, like container_cpu_time (Figure 1).
Do the same for Loki:
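If you prefer to provision the data sources as code rather than through the Grafana UI, a provisioning file along these lines should be equivalent; the IP matches the exporter endpoints configured earlier, and the /prometheus path is Mimir's Prometheus-compatible query API:

# /etc/grafana/provisioning/datasources/otel.yaml
apiVersion: 1
datasources:
  - name: Mimir
    type: prometheus
    url: http://10.1.1.100:9009/prometheus
    access: proxy
  - name: Loki
    type: loki
    url: http://10.1.1.100:3100
    access: proxy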
Then, use the Explore function. Select Loki, any filter (e.g., k8s_namespace_name=k8s-otel), and click on Run query, as shown in Figure 2.
To test OpenTelemetry, you can import this dashboard: https://grafana.com/grafana/dashboards/20376-opentelemetry-collector-hostmetrics-node-exporter/
Navigate to Dashboards -> New -> Import.
Enter 20376 and click the Load button, as shown in Figure 3.
The Red Hat build of OpenTelemetry provides a powerful and flexible solution for comprehensive metrics and logs reporting in complex environments. By standardizing data collection, offering robust scalability, and providing enterprise-grade support from Red Hat, it simplifies observability and enables deeper insights into application and infrastructure health. With its seamless integration capabilities and a rich set of receivers, processors, and exporters, the Red Hat build of OpenTelemetry allows you to tailor your monitoring setup to meet your specific needs.
To fully leverage the capabilities of the Red Hat build of OpenTelemetry and enhance your monitoring strategy, explore the official documentation and community resources. Dive deeper into configuring collectors, setting up pipelines, and integrating with visualization tools like Grafana. You can find detailed information and getting started guides to help you implement and optimize your observability practices. Find out more about Red Hat build of OpenTelemetry and Red Hat OpenShift observability.