Improve performance and usability of telemetry batching implementation #7838
Labels: notable_change, severity:critical, type:enhancement, verified
Is your feature request related to a problem? Please describe.
In real-world use of the initial telemetry batching implementation, it was found that bursty telemetry easily overran the buffers, causing regular telemetry drops.
The buffering implementation should be modified to be more flexible in the face of bursty data.
Additionally, the current implementation requires plugin developers to define a server-specific "batching strategy", which presents a fairly significant impediment to adoption. A shared buffer, rather than per-parameter buffers, creates the opportunity for a "one-size-fits-all" buffering approach that removes this burden.
Describe the solution you'd like
The initial implementation of real-time telemetry buffering used per-parameter buffers, in an attempt to optimize for the case where a noisy parameter overruns its buffer and squeezes out updates from lower-frequency parameters.
What real-world testing revealed instead was that telemetry did not arrive as regularly as expected, but often arrived in "bursts". These bursts easily overran a short per-parameter buffer, causing regular telemetry drops.
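As a rough illustration of the failure mode, here is a minimal sketch of a per-parameter scheme. `PerParameterBuffer`, its capacity, and the parameter name are hypothetical and do not reflect Open MCT's actual implementation:

```typescript
// Hypothetical sketch: each parameter gets its own small bounded buffer.
class PerParameterBuffer<T> {
    private buffers = new Map<string, T[]>();

    constructor(private capacityPerParameter: number) {}

    // Returns false when this parameter's buffer is full and the point is dropped.
    add(parameterId: string, point: T): boolean {
        let buffer = this.buffers.get(parameterId);
        if (!buffer) {
            buffer = [];
            this.buffers.set(parameterId, buffer);
        }
        if (buffer.length >= this.capacityPerParameter) {
            return false; // a burst has overrun this parameter's small buffer
        }
        buffer.push(point);
        return true;
    }
}

// A burst of 100 points for one parameter against a per-parameter capacity
// of 10 drops 90 points, even though every other buffer sits empty.
const perParam = new PerParameterBuffer<number>(10);
let dropped = 0;
for (let i = 0; i < 100; i++) {
    if (!perParam.add('noisy.parameter', i)) {
        dropped++;
    }
}
console.log(`dropped ${dropped} of 100`); // dropped 90 of 100
```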
A single large shared buffer, rather than many small per-parameter buffers, is likely to offer more headroom in the face of temporary bursts of telemetry, because buffer space is shared more efficiently between high- and low-frequency parameters.
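A sketch of the shared-buffer alternative under the same assumptions; `SharedTelemetryBuffer` and its capacity are likewise hypothetical, not the proposed API:

```typescript
// Hypothetical sketch: all parameters draw from one pool of buffer space.
interface TelemetryPoint {
    parameterId: string;
    value: number;
    timestamp: number;
}

class SharedTelemetryBuffer {
    private buffer: TelemetryPoint[] = [];

    constructor(private capacity: number) {}

    add(point: TelemetryPoint): boolean {
        if (this.buffer.length >= this.capacity) {
            return false; // points are only dropped when the whole pool is exhausted
        }
        this.buffer.push(point);
        return true;
    }

    // Drain everything accumulated since the last flush, grouped by parameter
    // so consumers can still route batches per parameter.
    flush(): Map<string, TelemetryPoint[]> {
        const batches = new Map<string, TelemetryPoint[]>();
        for (const point of this.buffer) {
            let batch = batches.get(point.parameterId);
            if (!batch) {
                batch = [];
                batches.set(point.parameterId, batch);
            }
            batch.push(point);
        }
        this.buffer.length = 0;
        return batches;
    }
}

// The same 100-point burst fits comfortably: headroom left unused by quiet
// parameters absorbs it, so nothing is dropped.
const shared = new SharedTelemetryBuffer(1000);
for (let i = 0; i < 100; i++) {
    shared.add({ parameterId: 'noisy.parameter', value: i, timestamp: Date.now() });
}
console.log(shared.flush().get('noisy.parameter')?.length); // 100
```

Grouping points by parameter at flush time preserves per-parameter batch delivery while letting all parameters share one pool of buffer space, which is what allows a single default capacity to serve as a "one-size-fits-all" setting.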