2025-08-29 · 3 min read

From Zero Visibility to Unified Monitoring: Building a Custom Integration That Unblocked a Federal Deal

How a custom Juniper Mist integration enabled unified network visibility in Elastic and removed a key federal evaluation blocker.



I'm a Solutions Architect working in federal pre-sales. This is how one engagement went - from a stalled evaluation to a closed deal - and what it says about how I approach the role.

The situation

A federal civilian networking team was evaluating Elastic for unified network visibility. Their environment ran Juniper Mist, which generates telemetry across six distinct event families - client events, device events, NAC events, alarms, audits, and edge events - each with its own payload structure and semantics.

The problem: no Elastic integration for Juniper Mist existed. Without one, the customer couldn't get Mist telemetry into Elastic at all. They were stuck swiveling between tools, with no unified view of their network. The evaluation stalled.

How I approached it

Before writing any code, I needed to understand what "unified visibility" actually meant for this team - which event families mattered most, what their operational workflows looked like, and what they'd need to see in Elastic to move forward with confidence.

From there, I scoped and built a custom integration package from scratch. The core design decisions:

One endpoint instead of six. Rather than requiring the customer to configure a separate webhook path for each Mist event family, I built a single intake stream with automatic classification logic in Elastic's ingest layer. The pipeline inspects each incoming payload, determines which event family it belongs to, and routes it to the correct dataset. If classification fails, the event gets flagged for triage - no silent drops.
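The classification step can be sketched in Python. This is a minimal illustration of the logic, not the integration itself — the real implementation lives in Elastic ingest pipeline processors, and the topic names and heuristic fields shown here are hypothetical stand-ins; only the `mist.*` dataset names come from the appendix:

```python
# Sketch of the classification step: map an incoming Mist webhook payload
# to a target dataset, falling back to an explicit error path instead of
# silently dropping unknown events.

# Topic aliases and the dataset each routes to.
# (Alias strings here are illustrative, not an exhaustive list.)
TOPIC_TO_DATASET = {
    "client-events": "mist.client_events",
    "device-events": "mist.device_events",
    "nac-events": "mist.nac_events",
    "alarms": "mist.alarm_events",
    "audits": "mist.audit_events",
    "mxedge-events": "mist.mx_edge_events",
}

def classify(payload: dict) -> str:
    """Return the dataset a payload should route to."""
    topic = payload.get("topic")
    if topic in TOPIC_TO_DATASET:
        return TOPIC_TO_DATASET[topic]
    # Heuristic fallback: infer the family from payload shape
    # when the topic field is missing or unrecognized.
    # (These field checks are hypothetical examples of the pattern.)
    if "alarm_id" in payload:
        return "mist.alarm_events"
    if "audit_id" in payload:
        return "mist.audit_events"
    # Unmatched: keep the event and flag it for triage - no silent drops.
    return "mist.events"  # pipeline_error path
```

The key property is the last line: an event that matches neither a topic alias nor a heuristic still lands in an accountable dataset rather than disappearing.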

Schema normalization into a common standard. Raw Mist payloads aren't immediately useful for dashboards or alerting. Each event family pipeline maps vendor-specific fields into Elastic Common Schema (ECS) - normalizing timestamps from mixed formats, applying event categorization, enriching IP fields with GeoIP data, and optionally preserving raw payloads for forensic use cases. This is what turns webhook JSON into data an analyst can immediately query and act on.
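The normalization pattern looks roughly like this — a Python sketch of the kind of mapping each family's pipeline performs. The vendor-side field names (`timestamp`, `text`) are assumptions for illustration; the ECS field names (`@timestamp`, `event.*`) are real ECS conventions:

```python
from datetime import datetime, timezone

def to_ecs(raw: dict, dataset: str) -> dict:
    """Sketch: map a vendor payload into a few core ECS fields.

    Real Mist payloads vary by event family; this shows the shape of
    the transformation, not the full field mapping.
    """
    doc: dict = {"event": {"kind": "event", "dataset": dataset}}

    # Normalize mixed timestamp formats (epoch seconds vs. ISO-8601)
    # into a single ISO-8601 UTC @timestamp.
    ts = raw.get("timestamp")
    if isinstance(ts, (int, float)):
        doc["@timestamp"] = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
    elif isinstance(ts, str):
        doc["@timestamp"] = ts

    # Carry a human-readable message into ECS if present.
    if "text" in raw:
        doc["message"] = raw["text"]

    # Optionally preserve the raw payload for forensic use cases.
    doc["event"]["original"] = repr(raw)
    return doc
```

In the actual integration these steps are ingest processors (date parsing, GeoIP enrichment, ECS categorization) rather than Python, but the transformation is the same: vendor JSON in, queryable ECS document out.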

Safe, backward-compatible rollout. I designed the migration so legacy configurations stayed functional while the customer validated the new unified flow. No risky cutover, no downtime, no trust gap.

Validated against real-world payloads. I built test coverage for known event types, edge cases, batched payloads, and unmapped topics - validating that the integration behaved predictably under production conditions, not just clean samples.
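One of those edge cases, batched payloads, is worth showing. Mist can deliver several events in a single webhook body; splitting has to happen before classification so each event routes independently. A minimal sketch, assuming a body shaped like `{"topic": ..., "events": [...]}` (the key names are my illustration of the pattern):

```python
def split_batch(body: dict) -> list[dict]:
    """Sketch: expand a batched webhook body into individual events.

    Each event inherits the top-level topic unless it carries its own,
    so downstream classification sees one self-describing event at a time.
    """
    events = body.get("events")
    if isinstance(events, list):
        topic = body.get("topic")
        return [{**e, "topic": e.get("topic", topic)} for e in events]
    # Single-event payload: pass through unchanged.
    return [body]
```

Test fixtures covered exactly these shapes: single events, batches, and bodies with unmapped or missing topics, so the behavior under production traffic was known before the customer ever saw it.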

The outcome

The customer went from zero Mist visibility in Elastic to having all six event families flowing through a single endpoint, normalized and ready for dashboards, detection rules, and cross-family correlation.

The integration removed the technical blocker. The deal moved forward.

What this reflects about how I work

This project is a good example of how I approach the pre-sales SA role: identify what's actually blocking a customer from moving forward, scope a solution that addresses the real requirement, build it if it doesn't exist, and make sure it works under production conditions - not just in a demo.

The technical pattern here - unified intake, deterministic routing, schema normalization - is reusable and not specific to this vendor. But the more important pattern is the approach: treating pre-sales as a problem-solving engagement, not a feature walkthrough.


Appendix: Routing Architecture

For those who want to see what's under the hood - here's the intake and classification flow:

flowchart LR
  A["Mist Webhooks"] --> B["Elastic Agent\nhttp_endpoint input"]
  B --> C["Ingest pipeline:\nlogs-mist.events"]
  C --> D{"Classify routing"}
  D -->|Topic alias| E["mist.client_events"]
  D -->|Topic alias| F["mist.device_events"]
  D -->|Topic alias| G["mist.nac_events"]
  D -->|Topic alias| H["mist.alarm_events"]
  D -->|Topic alias| I["mist.audit_events"]
  D -->|Topic alias| J["mist.mx_edge_events"]
  D -->|Heuristic| E
  D -->|Heuristic| F
  D -->|Heuristic| G
  D -->|Heuristic| H
  D -->|Heuristic| I
  D -->|Heuristic| J
  D -->|Unmatched| K["mist.events\npipeline_error"]

Single intake on the left, deterministic classification in the middle, six normalized datasets plus an explicit error path on the right. Every document lands somewhere accountable.