Meet the Author

Mike Terhar
Senior Customer Architect
Mike enjoys solving problems and has made a career chasing that reward trigger. By being a generalist and working with many industries, Mike brings empathy for an organization's struggles and blends its old ways with newer, better ways of working. By confronting learned helplessness and reinterpreting policies from first principles, legacy companies can take huge strides forward. Outside of work, funny things are the best things. Mike is always looking for shows, movies, books, plays, and pet and kid activities that bring amusement.

Unleash SaaS Data With the Webhookevent Receiver
Many vendors, Honeycomb included, emit a web request to another service for coordination or tracking purposes when actions happen in the application. Many have pre-built integrations, but some offer a fallback that says “Custom Webhook” or similar. If you’re looking to create a full picture of your request flow, you’ll want these other services to show up in your trace waterfall.
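On the wire, a custom webhook is just an HTTP POST with a JSON body. As a rough illustration (not taken from the post), the Python sketch below sends such a payload to a hypothetical OpenTelemetry Collector whose webhookevent receiver listens locally; the port, path, and payload fields are made up and would need to match your own collector configuration.

```python
# Hypothetical vendor side of a "Custom Webhook": an HTTP POST with a JSON body.
# The endpoint assumes a local collector with a webhookevent receiver listening
# on port 8088 at /events (made-up values; match your collector config).
import json
import urllib.request

payload = {
    "event": "deployment.finished",  # made-up event name
    "service": "checkout",           # made-up service name
    "status": "success",
}

request = urllib.request.Request(
    "http://localhost:8088/events",  # made-up receiver endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(response.status)
```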

What Is a Telemetry Pipeline?
In a simple deployment, an application emits spans, metrics, and logs that are sent to api.honeycomb.io and show up in charts. This works for small projects and for organizations that do not restrict outbound access from their servers. If your organization has more components, enforces network rules, or requires tail-based sampling, you’ll need to create a telemetry pipeline.
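As a hedged sketch of that simple case in Python (assuming the opentelemetry-sdk and OTLP gRPC exporter packages are installed, with a placeholder API key), an application can export spans straight to api.honeycomb.io:

```python
# Minimal "simple deployment" sketch: the app exports spans directly to
# api.honeycomb.io over OTLP/gRPC. Assumes opentelemetry-sdk and
# opentelemetry-exporter-otlp-proto-grpc; the API key is a placeholder.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "example-app"}))
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="api.honeycomb.io:443",
            headers={"x-honeycomb-team": "YOUR_API_KEY"},  # placeholder key
        )
    )
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example")
with tracer.start_as_current_span("hello"):
    pass  # spans created here are batched and sent to Honeycomb
```

A fuller pipeline puts an OpenTelemetry Collector, and possibly Refinery for tail-based sampling, between the application and that endpoint.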

Avoid Stubbing Your Toe on Telemetry Changes
When you have questions about your software, telemetry data is there for you. Over time, you make friends with your data, learning what queries take you right to the error you want to see, and what graphs reassure you that your software is serving users well. You build up alerts based on those errors. You set business goals as SLOs around those graphs. And you find the balance of how much telemetry to sample, retaining the shape of important metrics and traces of all the errors, while dropping the rest to minimize costs.

Defensive Instrumentation Benefits Everyone
A lot of the reasoning in content like this is predicated on the audience being in a modern, psychologically safe, agile sort of environment. It’s aspirational, so folks who aren’t in those environments may feel like the path there includes doing “the new thing” or using “the new tool.” If you write software and your employer hasn’t caught up to all the newest, best ways to work, I hope this pragmatic post helps you sleep better at night.

Simplify OpenTelemetry Pipelines with Headers Setter
In telemetry jargon, a pipeline is a directed acyclic graph (DAG) of nodes that carry emitted signals from an application to a backend. In an OpenTelemetry Collector, a pipeline collects signals through receivers, runs them through processors, and then emits them through configured exporters. This blog post hopes to simplify both types of pipelines by using an OpenTelemetry extension called the Headers Setter.
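The post itself is about collector configuration, but as an app-side sketch of the pattern: the application can attach a tenant-specific header to its OTLP export to a local collector, which a headers_setter extension on the collector side can then copy onto the outbound request to the backend. Everything below is illustrative; the local endpoint, header name, and key are hypothetical.

```python
# App-side sketch only: attach a per-tenant header to the OTLP export so that a
# collector-side headers_setter extension could copy it onto the outbound
# request. Endpoint, header name, and value are hypothetical; the collector
# configuration itself is out of scope here.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="localhost:4317",                  # hypothetical local collector
    insecure=True,
    headers={"x-tenant-team": "TENANT_A_KEY"},  # hypothetical header and value
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```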

Rescue Struggling Pods from Scratch
Containers are an amazing technology. They provide huge benefits and create useful constraints for distributing software. Golang-based software doesn’t need a container the way Ruby or Python software does, where the container bundles the runtime and dependencies. For a statically compiled Go application, the container doesn’t need much beyond the binary. Since the software is intended to run in a Kubernetes cluster, the container provides the release and distribution mechanism that the Helm chart uses to refer to these binaries. It also lets releases for multiple processor architectures reference their own images. For general troubleshooting of applications like Refinery and the OpenTelemetry Collector, some pretty good resources exist.

Infinite Retention with OpenTelemetry and Honeycomb
Honeycomb is massively powerful at delivering detailed answers from the last several weeks of system telemetry within seconds. It keeps you in the flow state needed to work through complex system failures while asking question after question to get closer to the answer. The biggest trade-off is the 60-day retention limit.
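The general idea for working past that limit, though not necessarily the post's exact approach, is to export the same telemetry to cheap long-term storage in parallel with Honeycomb. A minimal Python sketch with a placeholder API key and archive path:

```python
# Dual-export sketch: send spans to Honeycomb for fast querying and, in
# parallel, append serialized spans to a local file for long-term archiving.
# Not necessarily the post's approach; the API key and path are placeholders.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()

# Live querying: OTLP straight to Honeycomb.
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="api.honeycomb.io:443",
            headers={"x-honeycomb-team": "YOUR_API_KEY"},  # placeholder key
        )
    )
)

# Archive: the console exporter pointed at an append-only file handle.
archive_file = open("span-archive.json", "a")  # placeholder path
provider.add_span_processor(
    BatchSpanProcessor(ConsoleSpanExporter(out=archive_file))
)

trace.set_tracer_provider(provider)
```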

How Metrics Behave in Honeycomb
Honeycomb has the ability to receive events from applications. These events can take the shape of Honeycomb wide events, OpenTelemetry trace spans, and OpenTelemetry metrics. Because Honeycomb’s backend is very flexible, these OpenTelemetry signals fit in just fine—but sometimes, they have a few quirks. Let’s dive into using metrics the Honeycomb way and cover a few optimizations.
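As a hedged sketch of the metrics path in Python (assuming the opentelemetry-sdk and OTLP gRPC exporter packages; the API key and dataset name are placeholders), an application can send OpenTelemetry metrics to Honeycomb like this:

```python
# Sketch of sending OpenTelemetry metrics to Honeycomb over OTLP/gRPC. Assumes
# opentelemetry-sdk and opentelemetry-exporter-otlp-proto-grpc; the API key and
# dataset name are placeholders.
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter

reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(
        endpoint="api.honeycomb.io:443",
        headers={
            "x-honeycomb-team": "YOUR_API_KEY",        # placeholder key
            "x-honeycomb-dataset": "service-metrics",  # placeholder dataset name
        },
    )
)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("example")
requests_counter = meter.create_counter("requests_total")
requests_counter.add(1, {"route": "/home"})  # record one data point with an attribute
```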

Exotic Trace Shapes
OpenTelemetry and Beelines were designed with assumptions about the types of traffic that most users would trace. Based on these assumptions, web application and API calls fit very nicely into a trace waterfall view. Refinery, our sampling proxy, uses the same set of assumptions to manage the traces it processes. These assumptions hold for most traffic.

Honeycomb, Meet Terraform
The best mechanism to combat proliferation of uncontrolled resources is to use Infrastructure as Code (IaC) to create a common set of things that everyone can get comfortable using and referencing. This doesn’t block the ability to create ad hoc resources when desired—it’s about setting baselines that are available when people want answers to questions they’ve asked in the past.