Validating more loop optimizations
Each optimization has a cost in complexity and must have a compensating benefit in performance.
I hope this issue illustrated the quantitative approach that I want to take to optimizations.

Or it could be that our JIT does the job better, or that there is other value, e.g. that we prefer to simplify the mcode ourselves to make it easier to read (since we never get to see the uops). In any case it will be interesting to look at the benchmarks and decide.

The results are ready (permalink) and here is the first graph (click to zoom - corresponds to the one in #46). How does it look?

I will spend some time on resolving #47 before thinking about that.

So here is another of my wild braindumps based on the information from these benchmarks, with the caveat that this is absolutely not the whole story and more data is always needed...

It benefits a small number of benchmarks and the code is simple. However, I am concerned about the "astonishment factor": this optimization may find patterns in non-deterministic random noise, which could lead to surprising and unpredictable performance. Perhaps I was disabling loop optimization entirely in this test.
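To make the "quantitative approach" concrete, here is a minimal sketch (not the project's actual benchmark harness) of the kind of check that separates a real speedup from patterns in run-to-run noise: compare the change in mean timings against the combined run-to-run variation. The data here is simulated; all function names and thresholds are hypothetical illustrations.

```python
import random
import statistics

def summarize(samples):
    """Return (mean, population stddev) of benchmark timings in seconds."""
    return statistics.mean(samples), statistics.pstdev(samples)

def significant_speedup(before, after, threshold=2.0):
    """Treat a speedup as real only if the drop in mean time exceeds
    `threshold` combined standard deviations; otherwise the 'win' may
    just be non-deterministic noise (the astonishment factor)."""
    mean_b, sd_b = summarize(before)
    mean_a, sd_a = summarize(after)
    noise = (sd_b**2 + sd_a**2) ** 0.5
    return (mean_b - mean_a) > threshold * noise

random.seed(0)
# Simulated timings: a genuine ~10% improvement with small jitter...
before = [1.00 + random.gauss(0, 0.005) for _ in range(30)]
after  = [0.90 + random.gauss(0, 0.005) for _ in range(30)]
print(significant_speedup(before, after))  # expect True

# ...versus two runs of the same code with larger jitter and no real change.
noisy_a = [1.00 + random.gauss(0, 0.02) for _ in range(30)]
noisy_b = [1.00 + random.gauss(0, 0.02) for _ in range(30)]
print(significant_speedup(noisy_a, noisy_b))
```

The point of the two-sided comparison is exactly the concern above: an optimization that is only "faster" within the noise band should not pay its complexity cost.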