constexpr JIT?

constexpr JIT?

Tim Rakowski via cfe-dev
Hi,


TLDR: Is there any work going on to drastically improve constexpr evaluation
      speed?


After watching "constexpr ALL the things!" I decided to make my lexer/parser
generator library constexpr as well, which turned out to be just as easy as
advertised. There were a few problems, all of which I could deal with
relatively easily:

a) GCC exhausts all available memory, so I switched to Clang.
b) Clang 5.0.0 has the constexpr constructor bug
   (https://bugs.llvm.org/show_bug.cgi?id=19741), so I built Clang from source.
c) The default constexpr step limit is far too low, so pass
   -fconstexpr-steps=-1 (see the sketch below).
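
To make (c) concrete, here is a minimal made-up sketch (not code from my
library) of the kind of evaluation that blows through Clang's default step
budget unless the limit is raised:

    // Hypothetical example: a constexpr loop long enough to exceed the
    // default constexpr step limit. Compiling with the limit raised, e.g.
    //   clang++ -std=c++17 -fconstexpr-steps=2000000000 demo.cpp
    // lets the evaluation finish.
    constexpr long long busy_sum(long long n) {
        long long total = 0;
        for (long long i = 0; i < n; ++i)
            total += i;        // every iteration costs several evaluation steps
        return total;
    }

    // Forces compile-time evaluation; rejected under the default step limit.
    static_assert(busy_sum(2'000'000) == 1999999000000LL,
                  "evaluated at compile time");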

The only obstacle left (apart from the missing placement new and having to
initialize all memory up front, sketched below) is the slow, slow, slow compile
times. Before I started optimizing for constexpr evaluation, I'm talking "I
don't even know if it will ever end" territory. Since then I have been able to
decrease the compile time to 20 minutes, then 15 minutes, then 10 minutes, and
so on. There was no "Oh, I accidentally used a bad algorithm" or "Oh, the
compiler doesn't handle X very well" moment, just incremental optimization of
the code for compile-time evaluation.

By now I am down to 3 minutes for my "test case" of constructing a minimal DFA
for a JSON lexer, including

a) parsing the regexes for the JSON terminals and constructing an NFA,
b) constructing the corresponding DFA based on the NFA,
c) minimizing the DFA (Hopcroft's algorithm).

I'm basically using algorithms described in the Dragon Book.
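
To give a flavour of step (b), here is a rough sketch of its core primitive,
the epsilon-closure used by the subset construction. The names, the fixed
sizes and the adjacency-matrix representation are made up for illustration and
differ from my real code, but everything is fixed-size so it works in a
constant expression:

    #include <cstddef>

    constexpr std::size_t MaxStates = 32;

    struct NFA {
        // eps[s][t] == true  <=>  there is an epsilon transition s -> t
        bool eps[MaxStates][MaxStates] = {};
    };

    struct StateSet {
        bool member[MaxStates] = {};
    };

    // Iterate to a fixed point: keep adding states reachable via epsilon
    // transitions until nothing changes.
    constexpr StateSet epsilon_closure(const NFA& nfa, StateSet set) {
        bool changed = true;
        while (changed) {
            changed = false;
            for (std::size_t s = 0; s < MaxStates; ++s) {
                if (!set.member[s]) continue;
                for (std::size_t t = 0; t < MaxStates; ++t) {
                    if (nfa.eps[s][t] && !set.member[t]) {
                        set.member[t] = true;
                        changed = true;
                    }
                }
            }
        }
        return set;
    }

    // Tiny check: states 0 -e-> 1 -e-> 2, so the closure of {0} is {0, 1, 2}.
    constexpr bool closure_demo() {
        NFA nfa{};
        nfa.eps[0][1] = true;
        nfa.eps[1][2] = true;
        StateSet start{};
        start.member[0] = true;
        StateSet c = epsilon_closure(nfa, start);
        return c.member[0] && c.member[1] && c.member[2] && !c.member[3];
    }
    static_assert(closure_demo(), "epsilon-closure computed at compile time");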

With -O0 and constexpr disabled, the same computation runs in 0.275 seconds. I
have continuously improved the compilation time by profiling and optimizing the
-O0 build, first with the perf tools and then with callgrind. Here is what I
learned:

a) Just reduce the number of operations needed to compute the result.
b) Ignore cache locality.
c) Abstraction is too expensive; inline everything manually (see the sketch
   below).
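
As an illustration of (c) (again a made-up example, not my real code): the
same membership test written once through a small generic helper and once
inlined by hand. Both are valid constexpr; the point is that every extra call
and iterator hop costs the constant evaluator time, so the hand-inlined form
tends to evaluate noticeably faster.

    #include <cstddef>

    // Generic helper: convenient, but each use adds calls the evaluator pays for.
    template <typename It, typename T>
    constexpr bool contains(It first, It last, const T& value) {
        for (; first != last; ++first)
            if (*first == value)
                return true;
        return false;
    }

    constexpr bool has_digit_abstract(const int (&xs)[5], int d) {
        return contains(xs, xs + 5, d);        // goes through the helper
    }

    constexpr bool has_digit_inlined(const int (&xs)[5], int d) {
        for (std::size_t i = 0; i < 5; ++i)    // same logic, written in place
            if (xs[i] == d)
                return true;
        return false;
    }

    constexpr int sample[5] = {1, 3, 5, 7, 9};
    static_assert(has_digit_abstract(sample, 7), "helper version");
    static_assert(has_digit_inlined(sample, 7), "hand-inlined version");
    static_assert(!has_digit_inlined(sample, 4), "negative case");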

While this is a fascinating experience that goes against everything I learned
about optimization in the past few years, it obviously hinders the idea of being
able to use constexpr functions both at compile time AND at run time.

So of course I would prefer it if compile-time evaluation speed were comparable
to, say, an interpreted language, maybe even one using just-in-time compilation
such as LLVM (hint, hint). Of course you can't simply compile and run the
constexpr code, because undefined behavior must be "trapped". But running a
no-undefined-behavior version of C++ would be cool, I think, even if it would
certainly be a lot of work ...

Is there any work going on in that direction? Am I missing something?

Regards,
Tim Rakowski

_______________________________________________
cfe-dev mailing list
[hidden email]
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev

Re: constexpr JIT?

John McCall via cfe-dev

On Nov 4, 2017, at 12:09 PM, Tim Rakowski via cfe-dev <[hidden email]> wrote:

> Hi,
>
> TLDR: Is there any work going on to drastically improve constexpr evaluation
>       speed?

I am not aware of any, no.  If you're willing to share some of your intermediate programs
with us, I suspect there are a number of things we could do to speed up constexpr evaluation
short of actually taking on the massive complexity cost of using a JIT.  In particular, our value
representation is not particularly tuned, and the constant evaluator probably does a lot of
deep copies.

John.



_______________________________________________
cfe-dev mailing list
[hidden email]
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev