```ruby
require "async"

CONCURRENCY = 1000
ITERATIONS = 100

def work
  Async do |task|
    CONCURRENCY.times do
      task.async do
        sleep 1
      end
    end
  end
end

def duration
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  work
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
end

def average
  ITERATIONS.times.sum {
    duration
  }.fdiv(ITERATIONS)
end

puts average # => 1.01772911996115
```
```ruby
CONCURRENCY = 1000
ITERATIONS = 100

def work
  CONCURRENCY.times.map {
    Thread.new do
      sleep 1
    end
  }.each(&:join)
end

def duration
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  work
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
end

def average
  ITERATIONS.times.sum {
    duration
  }.fdiv(ITERATIONS)
end

puts average # => 1.045861059986055
```
@bruno- thanks for catching my math mistakes 🙊.
I'm not sure if this is the right way to look at it. I can't prove your math wrong, but in general I'm nervous when there's a variable that I can arbitrarily change to get different (comparative) results. It makes me worried that I've gamed my own microbenchmark.
Not sure if you've seen it, but I've done a bunch of work in the space of trying to ensure benchmark results are actually valid:
- Take a look at comparative histograms and statistical significance checks (a rough sketch of the histogram idea follows this list): https://github.com/zombocom/derailed_benchmarks#i-made-a-patch-to-to-rails-how-can-i-tell-if-it-made-my-rails-app-faster-and-test-for-statistical-significance
- Not specifically about performance, but about tuning a system that varies when numeric inputs are changed: https://www.schneems.com/2020/07/08/a-fast-car-needs-good-brakes-how-we-added-client-rate-throttling-to-the-platform-api-gem/
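For the comparative-histogram point, here's a rough sketch of what that could look like with these benchmarks. It assumes the `duration` helper and `ITERATIONS` constant from the snippets above, and it only buckets the samples for eyeballing; derailed_benchmarks layers a proper significance test on top of this idea:

```ruby
# Collect per-run samples instead of a single average, then bucket them so
# the two distributions (fibers vs. threads) can be compared side by side.
# Assumes a `duration` method like the ones defined above.
def samples(iterations = ITERATIONS)
  iterations.times.map { duration }
end

def histogram(values, bucket_size = 0.01)
  values.group_by { |v| (v / bucket_size).floor * bucket_size }
        .sort
        .map { |bucket, hits| format("%.2fs | %s", bucket, "*" * hits.size) }
end

puts histogram(samples)
```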
My conclusion is that the average request duration for threads is 0.2225s, or about 11% overhead.
That's the average across 200 runs. Each run does 1000 operations, which is what I was dividing by (if that makes sense).
I saw the Noah article ages ago but had forgotten about it (thanks for the reminder). The code is here: https://github.com/noahgibbs/fiber_basic_benchmarks/tree/master/benchmarks (it looks like that file got renamed).
You should test a real-world case, like one event loop creating lots of fibers, rather than creating lots of event loops.
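For reference, a minimal sketch of what that suggestion might look like, reusing the constants and timing approach from the fiber snippet above; the exact structure here is an assumption about the suggestion, not code from the gist:

```ruby
require "async"

CONCURRENCY = 1000
ITERATIONS = 100

# One reactor for the whole benchmark: every iteration spawns CONCURRENCY
# fibers inside the same event loop instead of building a fresh loop per run.
Async do |task|
  durations = ITERATIONS.times.map do
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)

    CONCURRENCY.times.map {
      task.async { sleep 1 }
    }.each(&:wait)

    Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
  end

  puts durations.sum.fdiv(ITERATIONS)
end
```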