OpenTelemetry has evolved over the last few years to become the de facto standard for a rapidly growing number of organizations that need to instrument applications and standardize telemetry data. It is designed so that the data can be understood by the observability platforms, visualization tools and storage systems of your choice.

OpenTelemetry users’ wants and needs served as the DNA of the recently held OTel Unplugged EU, which followed FOSDEM 2026 in Brussels earlier this month. Described as an “unconference,” the event had attendees write the subjects they wanted to discuss on Post-it notes, and those subjects formed the basis for separate working-group minisessions. The sessions covered a number of topics as they relate to OpenTelemetry, including Prometheus, Weaver (for nomenclature) and NixOS, which I attended.

Roadmap

As the Cloud Native Computing Foundation’s second-largest open source project, OpenTelemetry’s future outlook and roadmap were of particular interest. The unconference concluded with a talk on OpenTelemetry’s roadmap given by Austin Parker, director of open source at Honeycomb. The main theme was to provide “clarity on items currently in progress,” Parker said.

Efforts are underway to enhance sampling algorithms for users, including a standard field for communicating sampling rates to telemetry consumers, and the ability to set minimum sampling thresholds for all services within a trace without requiring a separate collector, Parker said.

Without a collector, each backend or user-monitoring tool must be configured separately, which can be cumbersome. A collector, by contrast, serves as a single endpoint for all microservices, streamlining access to applications and microservices through one unified point.
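As a rough illustration of the “single endpoint” idea, a minimal OpenTelemetry Collector configuration might look something like the sketch below: every service sends OTLP to one address, and the collector fans the data out to a backend. The backend endpoint here is a placeholder, not a real deployment.

```yaml
# Minimal sketch: all microservices send OTLP to this one collector,
# which batches the data and forwards it to a single backend.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}

exporters:
  otlphttp:
    endpoint: https://observability.example.com  # placeholder backend

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

Swapping or adding backends then means editing this one file rather than reconfiguring every service.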
Utilizing an observability agent as a collector, you can view and manage microservices collectively, offering a consolidated view on a platform like Grafana. While Grafana provides some alternatives without an OpenTelemetry collector, the collector significantly simplifies the process.

Friction

Ted Young, developer programs director at Grafana Labs, who was one of the main unconference organizers and speakers (as well as the resident ukulele player), said OpenTelemetry’s core promise of standardizing observability signals — traces, metrics and logs — meets real-world challenges, particularly around metrics and legacy systems. While the project is seen as essential “to survive the future,” several key areas are driving friction, he said.

For tracing, ensuring proper semantic conventions for labels and attributes poses challenges for developers when ingesting data through OpenTelemetry. Large organizations with hundreds of developers must ensure semantic conventions remain uniform. This developer-centric approach to instrumentation leads to “the weakest data quality” in tracing, as it is inherently “harder to get it right,” Young said.

Another “big thing” on the horizon is entities, a proposal to evolve the OpenTelemetry resource specification, Parker said during his roadmap talk. By defining both static and dynamic resources, the proposal allows for a new type of resource called an entity, which can be used to define metric identity or complex system topology. To support this infrastructure, there is an ongoing effort to bring core collector components — such as the OpenTelemetry Protocol (OTLP) receiver and exporter — to version 1.0 stability, Parker said.

The roadmap also introduces Arrow, a stateful version of OTLP, Parker said during his talk.
While OTLP is currently stateless, Arrow allows receivers and producers to coordinate, solving problems around egress and providing stronger performance guarantees.

Meanwhile, the browser SIG has been rebooted with a “Phase 1” goal of instrumenting the browser and its most important libraries rather than attempting to “eat the whole elephant all at once,” Parker said.

Long-term stabilization

Finally, the project is focusing on long-term stabilization through reusable deployment architectures called “blueprints” and the creation of stability requirements, Parker said. “To protect users, the team is standardizing how experimental behavior is flagged in SDKs to ensure that those moving to version 1.0 do not accidentally opt into unstable features,” Parker said.

Stability requirements are paired with a new performance-benchmarking initiative designed to provide a standard set of measurements for every component, ensuring consumers understand the actual computational cost and overhead of the SDK, Parker said.

Among the minisessions, major improvements in the integration of OpenTelemetry and Prometheus were discussed. A key catalyst for these improvements is Prometheus’ new support for UTF-8, along with other OTLP-native enhancements.

Nix and NixOS

The minisession on Nix and NixOS featured discussion of adding OpenTelemetry instrumentation to NixOS systems, especially for apps lacking an SDK. Nix and NixOS are increasingly used to create reproducible environments at a much larger scale than Docker containers can.

The host’s project is an LD_PRELOAD-based injector that activates OpenTelemetry SDKs in Java, Node and Python applications. It can be deployed via overlays (changing package builders) or via environment variables in systemd units (easier, but causes cache misses). Because NixOS typically uses a fixed store path for preloads, overlays are the primary injection method.
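For the environment-variable route, a systemd drop-in might look roughly like the following sketch. The injector path and service name are hypothetical placeholders (on NixOS the library would actually resolve to a Nix store path); OTEL_SERVICE_NAME and OTEL_EXPORTER_OTLP_ENDPOINT are standard OpenTelemetry SDK environment variables.

```ini
# Hypothetical drop-in, e.g. /etc/systemd/system/my-java-app.service.d/otel.conf
[Service]
# Injector path is a placeholder for illustration only.
Environment="LD_PRELOAD=/run/current-system/sw/lib/otel-injector.so"
Environment="OTEL_SERVICE_NAME=my-java-app"
Environment="OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317"
```

This is simpler than an overlay because no package is rebuilt, but, as noted above, changing unit environments this way can defeat binary-cache reuse.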
For benchmarking, the host plans a custom OpenTelemetry Collector receiver (for example, one measuring Cassandra throughput), documented with semantic conventions and validated via Weaver.

The post OpenTelemetry roadmap: Sampling rates and collector improvements ahead appeared first on The New Stack.