Performance Metrics
SPBench provides the following metrics for users to evaluate the benchmarks:
Latency
Throughput
CPU usage (%)
Memory usage (KB)
These metrics can be collected at different levels, such as per time unit, per operator, per source, or global average. Some of them can be combined as well.
Also, the metrics can be collected statically (selected and configured before starting the run), or dynamically (collected at any point during the benchmark execution through function calls inside the source code).
The table below shows more details.
| Metric | Granularity | Usage |
|---|---|---|
| Latency | Global average (end-to-end)<br>Global average (per-operator)<br>Per time window<br>Per item | Dynamic and static |
| Throughput | Global average<br>Average per time window | Dynamic and static |
| CPU and Mem. Usage | Global average<br>Per time interval | Static |
Statically Selecting Performance Metrics
Statically selected metrics are those selected as arguments to the exec command in the CLI. See Command-Line Interface or run ./SPBench exec -h for more information.
Here are the five optional arguments for metrics you can use when running the exec command:
-monitor <time_interval>
It monitors latency, throughput, and CPU and memory usage, among other parameters. Users can set the monitoring time interval in milliseconds (e.g. 250). It generates a log file inside the log folder (spbench/log/) containing the results obtained for each metric at each time interval.
-latency
It prints the average latency results for each application operator on the screen after the execution.
-throughput
It prints the average throughput results after the execution.
-latency-monitor
It monitors the latency per stage and per item and generates a log file inside the log folder.
-resource-usage
It prints the global memory and CPU usage for the selected application.
Note
You must run the benchmark as root or adjust the kernel's perf_event_paranoid value to use the resource usage metric.
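As an illustration, several of these flags can be combined in a single run. The benchmark-selection arguments are elided here (shown as ...); see the Command-Line Interface documentation for the full argument list:

```shell
# Hypothetical invocation sketch: sample all monitored metrics every
# 250 ms, then print latency and throughput summaries after the run.
./SPBench exec ... -monitor 250 -latency -throughput
```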
Dynamically Selecting Performance Metrics
SPBench also allows some metrics to be collected at runtime, such as throughput and latency.
These metrics are available in two modes. The first returns the global average from the beginning of the execution to the current moment. The second is the instantaneous mode, where the metrics are computed over a short period (a time window).
To get the instantaneous throughput: spb::Metrics::getInstantThroughput(<time_window_in_sec>)
To get the average throughput: spb::Metrics::getAverageThroughput()
To get the instantaneous latency: spb::Metrics::getInstantLatency(<time_window_in_sec>)
To get the average latency: spb::Metrics::getAverageLatency()
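To make the difference between the two modes concrete, here is a minimal, self-contained sketch (not the SPBench implementation; the ThroughputTracker type and its methods are illustrative assumptions): the average mode counts every item since the start of the run, while the instantaneous mode counts only items that arrived inside the most recent time window.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Illustrative sketch of the two throughput modes described above.
// Times are in seconds; each processed item records its arrival time.
struct ThroughputTracker {
    double start_time = 0.0;        // when the run began
    std::deque<double> timestamps;  // arrival time of each processed item

    void onItem(double now) { timestamps.push_back(now); }

    // Global average: items per second since the beginning of the run.
    double averageThroughput(double now) const {
        return static_cast<double>(timestamps.size()) / (now - start_time);
    }

    // Instantaneous: items per second over the last `window` seconds only.
    double instantThroughput(double now, double window) const {
        std::size_t in_window = 0;
        for (double t : timestamps)
            if (t > now - window) ++in_window;
        return static_cast<double>(in_window) / window;
    }
};
```

For example, a run that processes 10 items early on and only 2 items in its last second has a high global average but a low instantaneous throughput over a 1-second window, which is exactly the kind of slowdown the instantaneous mode exposes.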
These metrics are also used by the performance monitoring; there, the time window used for the instantaneous metrics is the same value set as the monitoring time interval.
Performance Metrics for Multi-source Benchmarks
All metrics available for single source benchmarks are also available for multi-source.
Their usage does not change for static metrics, and for the dynamic ones it remains quite similar.
The difference is that these metrics are implemented as methods of the spb::Source class instead of the spb::Metrics class.
For example:
To get the instantaneous throughput: source.getInstantThroughput(<time_window_in_sec>)
To get the average throughput: source.getAverageThroughput()
To get the instantaneous latency: source.getInstantLatency(<time_window_in_sec>)
To get the average latency: source.getAverageLatency()
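The per-source pattern can be sketched as follows. This is a minimal, hypothetical stand-in, not the real spb::Source class: the point is only that each source object owns its own counters, so each one answers metric queries independently rather than through a single global metrics class.

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Hypothetical sketch of per-source metrics (not SPBench's spb::Source).
class Source {
public:
    explicit Source(std::string name) : name_(std::move(name)) {}

    // Record one processed item at time `now` (seconds).
    void itemProcessed(double now) { ++items_; last_time_ = now; }

    // Per-source average throughput since this source started.
    // A real implementation would guard against an empty source.
    double getAverageThroughput() const {
        return static_cast<double>(items_) / (last_time_ - start_time_);
    }

private:
    std::string name_;
    std::size_t items_ = 0;
    double start_time_ = 0.0;
    double last_time_ = 0.0;
};
```

With two sources running at different rates, each source.getAverageThroughput() call reports that source's own rate, independently of the other.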