Quote: It's possible that this is due to the nature of FSharp. I've noticed that FSharp assemblies seem to have a huge amount of generated code.
This is correct. Any let-bound function in a module, whether nested or curried, becomes an auto-generated class that essentially captures the closure as its state. These classes have one or more
Invoke methods that execute the captured function. F# goes to great lengths to minimize the size of these functions, and programmers tend to be very aware of whether or not a closure needs to be captured.
For instance, [1;2;3] |> List.map (fun x -> x + 1) will create an anonymous function, i.e. a class with an Invoke method whose body is x + 1 (this isn't 100% true, and technically such an anonymous function is also created for |> (piping), but that gets source-inlined, so it isn't visible in the IL).
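To make that concrete, here is a minimal sketch of the shape the compiler generates. This is hand-written for illustration: the name AddOneClosure is made up, the real generated type has a compiler-chosen name, and a capturing lambda would additionally hold the captured values as fields.

```fsharp
// The idiomatic one-liner from above:
let incremented = [1; 2; 3] |> List.map (fun x -> x + 1)

// Roughly what the compiler generates for the lambda: a class deriving
// from FSharpFunc<_,_> whose Invoke method holds the body x + 1.
// (Illustration only; the real class name is compiler-chosen.)
type AddOneClosure() =
    inherit FSharpFunc<int, int>()
    override _.Invoke(x) = x + 1

// int -> int is FSharpFunc<int, int>, so the instance can be upcast
// and used wherever a function value is expected.
let addOneFn : int -> int = upcast AddOneClosure()
let incremented2 = List.map addOneFn [1; 2; 3]
```

Every such lambda in idiomatic code contributes one of these tiny classes with a tiny Invoke method.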
Such coding is idiomatic in F#. The result is a lot of very small classes with very small Invoke methods. That isn't much of a problem in itself, because the JIT will inline most of them anyway, so after JIT'ing they disappear. If I inspect the code after instrumentation, I can see that instrumentation is added to
all methods. For analyzing execution times, I doubt you can capture any reasonable timings from pure IL (without calls into other methods). In fact, if I inspect "Hot spots", even in tests that run long (1 sec or so), all but a few "outer" methods report 0ms (that's another point for debate: why ms? For line-level timings that seems rather coarse). Any other line always starts at 15 or 16ms, then 31 or 32, then around 45-48, so the timings appear to be quantized at some minimum resolution (those values suggest the ~15.6 ms Windows system timer tick).
I know that this is very complex machinery, and my guess is that, considering it is a new feature, you are looking into ways to make it even more useful. My 2p would be: stop collecting data from micro-methods that are pure. Even with line-level timings on, that won't produce meaningful numbers in any language.
Quote: Do you notice much difference with the 'Detect stack overflow' setting disabled or the 'Analyse line execution times' disabled?
I did notice in the added instrumentation that the stack-overflow stubs are indeed also added to the tiniest (pure) methods. One optimization would be to stop doing that when a method is pure (that is, when it doesn't call into any other method), or when a method is merely a wrapper, i.e. its arguments are passed unchanged straight to the next method (the JIT detects this and lifts such a method out, so it doesn't add to the stack, which makes "detect SO" redundant in that case, though this may be much trickier than detecting a pure method).
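A minimal sketch of the two cases I mean (the names addOne and addOneWrapper are made up for illustration):

```fsharp
// "Pure" in the sense above: the body is plain IL with no calls into
// other (non-inlined) methods, so per-line timings and stack-overflow
// stubs add overhead without telling you anything.
let addOne (x: int) = x + 1

// A mere wrapper: the argument is passed on unchanged to the next
// method. The JIT typically inlines this frame away, so it doesn't
// add to the stack and the SO stub is redundant here too.
let addOneWrapper (x: int) = addOne x
```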
I will experiment a bit more with the settings.
Quote: Are you able to isolate this? We do have some special handling around tail methods ... maybe we're getting something wrong..
It seems to be flaky: sometimes it happens, sometimes it doesn't, regardless of the settings. When I isolate the project where it happens, I can't repro it. The method where it goes wrong has multiple exit points, but all are in tail position (i.e., at the ends of the if, elif and else branches and of the match cases).
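For reference, the shape of the method is roughly this (a hypothetical reconstruction, not the actual failing code):

```fsharp
// Several exit points, each in tail position: at the end of the if,
// elif and else branches, and at the end of each match case.
let classify (x: int) : string =
    if x < 0 then "negative"
    elif x = 0 then "zero"
    else
        match x % 2 with
        | 0 -> "positive even"
        | _ -> "positive odd"
```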
Quote: You'll find it much easier to extract timing data using the 'Execution Steps' pane in the Tests Window
I thought these timings were off, but now I see that "32.547 ms" means 32.5 s, not 32.5 ms: the dot is apparently a European-style thousands separator. I've misinterpreted that in the past... my bad.