We built a telemetry pipeline that handles more than 5,400 data points per second with sub-10 millisecond query responses. The techniques we discovered while processing flight simulator data at 60FPS (frames per second) apply to any high-frequency telemetry system, from Internet of Things (IoT) sensors to application monitoring. Here's how we got our queries from 30 seconds down to sub-10ms, and why these techniques transfer so broadly.

## The Problem With Current Values

Everyone writes this query to get the latest telemetry value:

```sql
SELECT * FROM flight_data
WHERE time >= now() - INTERVAL '1 minute'
ORDER BY time DESC LIMIT 1
```

This scans recent data, sorts everything, then throws away all but one row. At high frequencies, the pattern completely breaks down. We were generating more than 90 fields at 60 updates per second from Microsoft Flight Simulator 2024 through FSUIPC, a utility that serves as a middleman between the simulator and external apps or hardware controls. Our dashboards were taking 30 seconds or more to refresh.

## Stop Querying, Start Caching

InfluxDB 3 Enterprise, a time series database with built-in compaction and caching features, offers a Last Value Cache (LVC). Instead of searching through thousands of data points on every query, it keeps the most recent value for each metric ready in memory.

Setting it up:

```bash
influxdb3 create last_cache \
  --database flightsim \
  --table flight_data \
  --key-columns aircraft_tailnumber \
  --value-columns flight_altitude,speed_true_airspeed,flight_heading_magnetic,flight_latitude,flight_longitude \
  --count 1 \
  --ttl 10s \
  flightsim_flight_data_lvc
```

Reading from it is a single call:

```sql
SELECT * FROM last_cache('flight_data', 'flightsim_flight_data_lvc')
```

Query time dropped from 30+ seconds to less than 10ms. Dashboards update at 5FPS and feel practically instantaneous when you're monitoring a pilot on a separate screen.

## Batch Everything

Writing individual telemetry points at high frequency creates thousands of network round trips. The fix is simple:

```
// Batching configuration
MaxBatchSize: 100
MaxBatchAgeMs: 100 // milliseconds
```

Buffer points and flush when you hit either limit (see the sketch after the measured results below). This captures about six complete telemetry snapshots per database write.

Measured performance:

- Write latency: 1.3ms per row.
- Sustained thousands of metrics per second.
- Zero data loss during 24-hour tests.
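The batching logic boils down to a buffer that flushes on whichever limit trips first. Here's a minimal sketch of that pattern in C#, assuming line-protocol strings and a generic flush delegate in place of the actual InfluxDB client call; `WriteBatcher` and its members are illustrative stand-ins, not the project's real types:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Minimal write batcher: flushes when either the size limit or the age limit is hit.
public sealed class WriteBatcher : IDisposable
{
    private const int MaxBatchSize = 100;
    private static readonly TimeSpan MaxBatchAge = TimeSpan.FromMilliseconds(100);

    private readonly object _lock = new();
    private readonly List<string> _buffer = new(MaxBatchSize);
    private readonly Action<IReadOnlyList<string>> _flush;
    private readonly Timer _timer;

    public WriteBatcher(Action<IReadOnlyList<string>> flush)
    {
        _flush = flush;
        // Age-based flush: fires even when traffic is too light to fill a batch.
        _timer = new Timer(_ => Flush(), null, MaxBatchAge, MaxBatchAge);
    }

    public void Add(string lineProtocolPoint)
    {
        bool full;
        lock (_lock)
        {
            _buffer.Add(lineProtocolPoint);
            full = _buffer.Count >= MaxBatchSize;
        }
        if (full) Flush(); // size-based flush
    }

    private void Flush()
    {
        List<string> batch;
        lock (_lock)
        {
            if (_buffer.Count == 0) return;
            batch = new List<string>(_buffer);
            _buffer.Clear();
        }
        _flush(batch); // one network round trip for up to MaxBatchSize points
    }

    public void Dispose()
    {
        _timer.Dispose();
        Flush(); // drain anything still buffered
    }
}
```

At 60 snapshots per second, the 100ms age limit trips before the 100-point size limit, which is where the roughly-six-snapshots-per-write figure comes from.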
## Aggressive Compaction

High-frequency telemetry creates hundreds of small files. We configured these environment variables in InfluxDB 3 Enterprise to tune its compaction feature:

```bash
# COMPACTION OPTIMIZATION
INFLUXDB3_GEN1_DURATION=5m
INFLUXDB3_ENTERPRISE_COMPACTION_GEN2_DURATION=5m
INFLUXDB3_ENTERPRISE_COMPACTION_MAX_NUM_FILES_PER_PLAN=100

# REAL-TIME DATA ACCESS
INFLUXDB3_WAL_FLUSH_INTERVAL=100ms
INFLUXDB3_WAL_MAX_WRITE_BUFFER_SIZE=200000
```

Smaller time windows (five minutes vs. the default 10 minutes) and more frequent compaction prevent file accumulation, and this worked well for our scenario.

24-hour results:

- 142 automatic compaction events.
- From 127 files down to 18 optimized Parquet files.
- Storage from 500MB down to 30MB (a 94% reduction).

## Block Reading

We were making 90+ individual API calls through the FSUIPC Client DLL to collect telemetry. At 60FPS, that's over 5,000 calls per second. The overhead was crushing performance.

The solution: group related metrics into memory blocks.

```csharp
_memoryBlocks = new Dictionary<string, MemoryBlock>
{
    // Position, attitude, altitude
    { "FlightData", new MemoryBlock(0x0560, 48) },
    { "Engine1",    new MemoryBlock(0x088C, 64) },
    { "Engine2",    new MemoryBlock(0x0924, 64) },
    // Flight controls, trim
    { "Controls",   new MemoryBlock(0x0BC0, 44) },
    { "Autopilot",  new MemoryBlock(0x07BC, 96) }
};
```

Each block fetches multiple related parameters in one operation. API traffic dropped from 2,700-5,400 calls per second to 240-480, a reduction of more than 90%.

## Separating Real-Time From Historical Queries

We built two distinct modes:

- Real time (Last Value Cache): current values only, sub-10ms responses.
- Historical (traditional SQL): trends and analysis, where slower reads are acceptable.

This separation seems obvious in hindsight, but most monitoring systems try to serve both needs with the same query patterns.

## Patterns and Practices for Real-Time Dashboarding

These aren't exotic techniques. Whether you're monitoring manufacturing equipment, tracking application metrics or processing market data feeds, the patterns are the same:

- Cache current values in memory.
- Batch writes at 100-200ms intervals.
- Configure aggressive compaction.
- Read related metrics together.
- Separate real-time from historical queries.

## The Results

These techniques work anywhere you're dealing with high-frequency data:

- IoT sensors: manufacturing equipment, smart buildings, environmental monitoring.
- Application metrics: application performance monitoring (APM) data, microservice telemetry, distributed tracing.
- Financial data: market feeds, trading systems, risk monitoring.
- Gaming: player telemetry, server metrics, performance monitoring.

After implementing these patterns:

- Query time: 30 seconds down to sub-10ms.
- Storage: 94% reduction.
- API calls: 90% fewer.
- Dashboard experience: actually real time (well, as close as we could get!).

The difference isn't faster hardware; it's using the right patterns for high-frequency data. I hope you give these techniques a try!

## The Code

The complete implementation is on GitHub:

- C# data bridge: msfs2influxdb3-enterprise
- Next.js dashboard: FlightSim2024-InfluxDB3Enterprise

Whether you're building for flight simulators or production monitoring systems, these are the patterns that got us to real-time data handling, and they were fun to discover along the way.