FC02 The 2.5 Copies of Data Problem - How do you know the fix is better?
Simple performance benchmarking
For the 2.5 copies of data problem in JSON serialization, we proposed a solution to reduce or avoid LOH allocations and to cut down on data copying. But your code review crew will need a more convincing story, so it's best to write a performance benchmark and compare performance data before and after the fix.
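As a reminder, a minimal sketch of the two paths might look like this. The helper names `SerializeWithString` and `SerializeToStream` are made up for this post, and Newtonsoft.Json is assumed as the serializer:

```csharp
using System.IO;
using System.Text;
using Newtonsoft.Json;

public static class Serialization
{
    private static readonly Encoding s_utf8NoBom = new UTF8Encoding(false);
    private static readonly JsonSerializer s_serializer = JsonSerializer.CreateDefault();

    // Before: serialize to a string, then encode to UTF-8. The serializer's
    // internal StringBuilder, the returned string, and the UTF-8 byte array
    // are roughly the "2.5 copies" of the data, and for large payloads the
    // string and the byte array both land in the LOH.
    public static byte[] SerializeWithString(object value)
    {
        string json = JsonConvert.SerializeObject(value);
        return Encoding.UTF8.GetBytes(json);
    }

    // After: serialize straight into a caller-provided MemoryStream whose
    // buffer survives across calls, so steady-state LOH allocations disappear.
    public static void SerializeToStream(object value, MemoryStream stream)
    {
        stream.SetLength(0);  // rewind, but keep the already-grown buffer
        using (var writer = new StreamWriter(stream, s_utf8NoBom, 4096, leaveOpen: true))
        using (var jsonWriter = new JsonTextWriter(writer))
        {
            s_serializer.Serialize(jsonWriter, value);
        }
    }
}
```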
First, you need to set up the test data properly, so that it generates similarly large object allocations:
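A sketch of what that setup could look like; the `Record` shape and the counts here are invented, and anything that serializes to well past 85,000 bytes (the LOH threshold) will do:

```csharp
using System.Collections.Generic;

public class Record
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Payload { get; set; }
}

public static class TestData
{
    public static List<Record> Create()
    {
        var data = new List<Record>(1000);
        for (int i = 0; i < 1000; i++)
        {
            data.Add(new Record
            {
                Id = i,
                Name = "record_" + i,
                Payload = new string('x', 100)  // padding so the JSON output clears 85,000 bytes
            });
        }
        return data;
    }
}
```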
Then you can write a simple test to validate that the fix is correct, and check the output size to make sure it is really big enough to land in the LOH.
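For example, a quick sanity check along these lines, reusing the hypothetical helpers above:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Linq;

public static class FixValidation
{
    public static void Run()
    {
        var data = TestData.Create();

        byte[] before = Serialization.SerializeWithString(data);

        var stream = new MemoryStream();
        Serialization.SerializeToStream(data, stream);
        byte[] after = stream.ToArray();  // extra copy, but only for this comparison

        Console.WriteLine($"Output size: {before.Length:N0} bytes");
        Debug.Assert(before.Length > 85_000, "too small to land in the LOH");
        Debug.Assert(before.SequenceEqual(after), "the fix changed the output");
    }
}
```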
Now we can write a simple performance benchmark:
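A minimal harness in that spirit, using the same assumed helper names; `GC.GetAllocatedBytesForCurrentThread` requires .NET Core / .NET 5+:

```csharp
using System;
using System.Diagnostics;
using System.IO;

public static class Benchmark
{
    public static void Run()
    {
        var data = TestData.Create();
        var stream = new MemoryStream();

        Measure("before", () => Serialization.SerializeWithString(data));
        Measure("after", () => Serialization.SerializeToStream(data, stream));
    }

    private static void Measure(string name, Action action)
    {
        action();  // warmup: trigger JIT compilation
        action();  // warmup: let caches and tiered compilation settle

        long allocatedBefore = GC.GetAllocatedBytesForCurrentThread();
        var watch = Stopwatch.StartNew();
        for (int i = 0; i < 1_000; i++)
        {
            action();
        }
        watch.Stop();

        long allocated = GC.GetAllocatedBytesForCurrentThread() - allocatedBefore;
        Console.WriteLine($"{name}: {watch.ElapsedMilliseconds:N0} ms, {allocated:N0} bytes allocated");
    }
}
```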
The pattern here is to call the code twice for warmup, then measure a big loop. Here 1,000 iterations is good enough; sometimes you may need millions.
Here are the perf test results: a 17.45% CPU reduction and a 99.57% allocation reduction. You can capture a trace or a memory dump to validate that all the LOH allocations are gone; the buffer inside the MemoryStream is reused over and over again.
If this code runs in a busy server-side process, you may need a better way to reuse the MemoryStream, for example a thread-static field or an object pool.
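A minimal sketch of both options; `ThreadLocalStream` and `StreamPool` are made-up names, and Microsoft.IO.RecyclableMemoryStream is a battle-tested alternative to rolling your own:

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;

// Option 1: one stream per thread. Simple, but each thread keeps its own
// buffer alive, and the stream must not be used across await boundaries.
public static class ThreadLocalStream
{
    [ThreadStatic] private static MemoryStream t_stream;

    public static MemoryStream Get()
    {
        var stream = t_stream ?? (t_stream = new MemoryStream());
        stream.SetLength(0);  // keep the capacity, drop the content
        return stream;
    }
}

// Option 2: a shared pool. The capacity check on return keeps one
// pathologically large request from pinning a huge buffer forever.
public static class StreamPool
{
    private const int MaxRetainedBytes = 4 * 1024 * 1024;
    private static readonly ConcurrentBag<MemoryStream> s_pool = new ConcurrentBag<MemoryStream>();

    public static MemoryStream Rent()
    {
        return s_pool.TryTake(out var stream) ? stream : new MemoryStream();
    }

    public static void Return(MemoryStream stream)
    {
        if (stream.Capacity <= MaxRetainedBytes)
        {
            stream.SetLength(0);
            s_pool.Add(stream);
        }
    }
}
```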