From: Chandler Carruth <chandlerc <at> google.com>
Subject: Re: RFC: Binary format for instrumentation based profiling data
Newsgroups: gmane.comp.compilers.llvm.devel
Date: Monday 17th March 2014 21:07:49 UTC
On Wed, Mar 12, 2014 at 6:09 PM, Justin Bogner

> Instrumentation-based profiling data is generated by instrumented
> binaries through library functions in compiler-rt, and read by the clang
> frontend to feed PGO.

Note that compiler-rt *cannot* link in LLVM libraries for many reasons:

1) compiler-rt must be built for the target, while the LLVM libraries are
built for the host.
2) The compiler-rt source is under a different license.
3) We typically want to avoid pulling LLVM's large dependencies into runtime
libraries that need to be extremely lightweight, such as the profile runtime.

So I think you will at least need to duplicate the header file with the basic
structs and the endian-aware writing code. =/ This may explain some of the
trouble you had with the build systems, at least.
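For illustration, the duplicated endian-aware writing code could be as small as the sketch below; the helper name and the choice of little-endian on-disk order are assumptions for the example, not part of any existing format:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: write a u64 in a fixed (little-endian) byte order
 * regardless of host endianness, with no LLVM library dependencies. */
static void profile_write_u64_le(FILE *f, uint64_t v) {
  unsigned char buf[8];
  for (int i = 0; i < 8; ++i)
    buf[i] = (unsigned char)(v >> (8 * i)); /* byte i = bits [8i, 8i+8) */
  fwrite(buf, 1, sizeof(buf), f);
}
```

A few lines like this are cheap enough to keep duplicated in the runtime, which is the point: no shared library is needed to get a portable byte order.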

> Use cases
> =========
> There are a few use cases that drive our requirements:
> A. Looking up the counters for a particular function needs to be
>   efficient, so as to not slow down compilation too much when using the
>   data for PGO.  This is also presumably how this data would be
>   accessed if using it for coverage.
> B. The file should be relatively quick to generate, and shouldn't
>   require a large amount of memory overhead to write out.  Instrumented
>   programs will write this file out on termination or on demand.
> C. Iterating over all of the data must be reasonably simple, to support
>   tools that summarize or merge data files.
> D. The format should be relatively compact.  Programs with a large
>   number of functions may be instrumented, and we don't want to create
>   unnecessarily large files because of it.
> E. The format should be portable between systems.  I.e., if Alice
>   generates profile data for some program P using Machine A, Bob should
>   be able to use that same profile data to instrument a copy of that
>   same program on machine B.
> Partially updating the file is not a goal.  Instrumented programs write
> out a profile for a single run, and multiple runs can be merged by a
> separate tool.

The other assumption here is that you want the same file format written by
instrumentation and read back by the compiler. While I think that is an
unsurprising goal, I think it creates quite a few limitations that I'd like
to point out. I think it would be worthwhile to consider the alternative of
having the profile library write out data files in a format which is
essentially "always" transformed by a post-processing tool before being
used during compilation.

Limitations of using the same format in both places:
- High burden on writing the file constrains the format (must be fast, must
not use libraries, etc...)
- Have to write an index even though the writer doesn't really need it.
- Have to have the function name passed through the instrumentation,
potentially duplicating it with debug info.
- Can't use an extensible file format (like bitcode) to insulate readers of
profile data from format changes.

I'm imagining it might be nicer to have something along the lines of the
following counter proposal. Define two formats: the format written by
instrumentation, and the format read by the compiler. Split the use cases
up. Specialize the formats based on the use cases. It does require the user
to post-process the results, but it isn't clear that this is really a
burden. Historically, a post-processing step has been needed to merge gcov
profiles from different TUs, and one is still required to merge profiles
from multiple runs.

I think the results could be superior for both the writer and reader:

Instrumentation written format:
- No index, just header and counters
- (optional) Omit function names, use the PC at a known point of the
function, and rely on debug info to map back to function names.
- Use a structure which can be mmap-ed directly by the instrumentation code
(at least on LE systems) so that "writing the file on close" is just
flushing the memory region to disk
- Explicitly version format, and provide no stability going forward

Profile reading format:
- Use a bitcoded format much like Clang's ASTs do (or some other tagged
format which allows extensions)
- Leverage the existing partial reading which has been heavily optimized
for modules, LLVM IR, etc.
- Use implicit-zero semantics for missing counters within a function where
we have *some* instrumentation results, and remove all zero counters
- Maybe other compression techniques
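The implicit-zero idea above could look something like the following sketch, where only nonzero counters are stored as (index, value) pairs and any missing index reads back as zero; the function name and pair encoding are hypothetical:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of implicit-zero compression: keep only (index, value) pairs for
 * nonzero counters. On the reading side, an index that is absent from the
 * pair list is treated as a counter with value zero. Returns the number of
 * pairs written into out_idx/out_val (callers size both arrays to n). */
static size_t compress_counters(const uint64_t *counters, size_t n,
                                uint32_t *out_idx, uint64_t *out_val) {
  size_t m = 0;
  for (size_t i = 0; i < n; ++i) {
    if (counters[i] != 0) {
      out_idx[m] = (uint32_t)i;
      out_val[m] = counters[i];
      ++m;
    }
  }
  return m;
}
```

For functions where most counters are zero (cold paths, error handling), this alone can shrink the per-function record substantially before any bitcode-level compression is applied.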

Thoughts? Specific reasons to avoid this? I'm very much interested in
minimizing the space and runtime overhead of instrumentation, as well as
getting more advanced features in the format read by Clang itself.