In my ongoing journey with OpenTelemetry, I’ve been developing a demo that highlights traceability across diverse technology stacks, including asynchronous communication through an MQTT queue. Recently, I expanded the architecture and incorporated new components. Below, I share some key insights and lessons learned along the way, noting that some may extend beyond just OpenTelemetry.
Enhanced Architecture Overview
The diagram below shows the updated architecture. New components are highlighted in violet, while those updated are in green.
Expanding the Inventory Component
To allow for more flexibility, I redesigned the inventory component: instead of querying the database directly, it now queries warehouses distributed across various regions. Each warehouse can be implemented in a different tech stack, so new implementations can be added indefinitely. Implementations in Elixir and .NET are currently missing; contributions are welcome! The contract for these warehouses is straightforward.
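As a rough sketch of its shape (the exact route and field names are defined in the repository, so treat these as illustrative placeholders), each warehouse exposes a single read-only endpoint that reports the stock it holds per product, and the inventory aggregates the answers:

// Illustrative only: route and field names are placeholders, not the demo's actual contract.
// Each warehouse answers something like GET /stocks with its local stock levels.
interface Stock {
  productId: number
  quantity: number
}

// Every implementation, whatever the language, returns the same JSON shape,
// which is what lets the inventory fan out to any number of warehouses.
type WarehouseResponse = Stock[]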
One of the existing warehouses is written in Go. Despite my dislike for the language, I have to admit it's efficient for developers who prioritize getting the job done quickly.
Developing the Ruby Warehouse
While Ruby has fallen out of the limelight, I still wanted to include it in this architecture. I opted for the lean Sinatra framework over Ruby on Rails and used Sequel for database access. The dynamic nature of Ruby posed some challenges, making this the most time-consuming service to develop.
I struggled with auto-instrumentation for OpenTelemetry in Ruby, especially since Sinatra lacks a built-in plugin system. My attempts to use Bash as a workaround were unsuccessful. If you’re a Ruby expert, I’d love to hear your thoughts on this.
GraalVM Native Image: A Kotlin Experiment
This warehouse is a Kotlin application running on Spring Boot, with a twist: I compiled it to native code using GraalVM, which allowed me to use a minimal Docker image such as BusyBox. Though the result isn't as fast as Go or Rust, it's a decent choice if you're committed to the JVM ecosystem.
Initially, OpenTelemetry worked fine on the JVM, but complications arose when I compiled the application to native code. The culprit was the OtlpTracingConfigurations.ConnectionDetails class, which relies on a property that must already be set at build time, when Spring Boot's AOT processing decides which beans make it into the native image. The solution: set the property to an empty string in application.properties and override it with the real value in the Docker Compose file.
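Concretely, the workaround looks something like the following. I'm assuming the property in question is management.otlp.tracing.endpoint, which is what the ConnectionDetails bean binds to in recent Spring Boot versions; the service and collector names in the Compose snippet are made up:

# application.properties: give the property a (dummy) empty value so the bean
# survives Spring Boot's build-time AOT processing
management.otlp.tracing.endpoint=

# docker-compose.yml: override it with the real collector endpoint at runtime
services:
  warehouse-kotlin:
    environment:
      MANAGEMENT_OTLP_TRACING_ENDPOINT: http://collector:4318/v1/traces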
Understanding Spring Boot’s auto-configuration is crucial, but it’s no small feat—especially when combined with OpenTelemetry.
Migrating from JavaScript to TypeScript
I originally wrote the MQTT subscriber in JavaScript but soon migrated it to TypeScript. While JavaScript is valid TypeScript, transitioning to "true" TypeScript revealed an issue with OpenTelemetry traces. After several iterations, I tracked the culprit down to the object holding the MQTT user properties, which needed an explicit type annotation:
const userProperties: Record<string, any> = {}
This adjustment resolved the trace issues and allowed for smoother TypeScript integration.
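To illustrate why the annotation matters, here is a hypothetical sketch of how such a carrier is typically handed to the OpenTelemetry JavaScript API; the subscriber in the repository differs in the details, but the propagator needs a plain, string-indexable object to read the incoming trace headers from:

import { context, propagation } from "@opentelemetry/api"

// The MQTT v5 user properties of the received message act as the propagation carrier.
// With strict TypeScript settings, an untyped {} literal is not string-indexable,
// hence the explicit Record<string, any> annotation.
const userProperties: Record<string, any> = {}
// ...copy the user properties from the incoming packet into the carrier...

// Extract the parent context and process the message inside it, so that any
// spans started here join the trace begun by the publisher.
const parentContext = propagation.extract(context.active(), userProperties)
context.with(parentContext, () => {
  // handle the message
})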
Adding a Redis Cache
Up until now, all services used PostgreSQL for data storage. With the addition of Redis, I leveraged OpenTelemetry to trace the interactions between services and the datastore. The instrumentation for Redis using the Lettuce client (default in Spring) was straightforward, with OpenTelemetry handling most of the heavy lifting.
Introducing Another Apache APISIX Instance
To streamline API management, I added an additional APISIX instance. In many organizations, managing APIs behind a centralized multi-node API Gateway can be cumbersome, especially when rapid changes are needed. By allowing API teams to manage their own instances, you reduce friction and increase flexibility, even if it adds complexity to the API team’s workload.
Credit and Original Source:
This article was originally written by Nicolas Fränkel and published on his blog, where you can find the original post along with more in-depth discussions of OpenTelemetry and other Java-related topics.
Conclusion
In this post, I walked through the latest updates to my OpenTelemetry tracing demo and the valuable lessons learned. I’m eager to add more warehouse components in other tech stacks—what would you like to see? Contributions are always welcome!
The full source code is available on GitHub.