It seems that "JIT" is overloaded with at least two meanings.
The canonical definition of JIT is "compilation during execution of a program". Usually a program is interpreted first, then switches to compiled code in the middle of execution. This is not what this article does.
What this article does is sometimes called on-the-fly AOT, or just on-the-fly compilation. I'd prefer not overloading the term "JIT".
> Usually a program is interpreted first, then switches to compiled code in the middle of execution.
I agree that what they have isn't JIT compilation, but not for that reason. Tiered execution was never a central part of JIT compilation either; it is a fairly recent invention by comparison.
The reason what they describe isn't JIT compilation is IMO fairly boring: it's not compiling the input program in any meaningful way, but simply writing into executable memory hard-coded logic that it already knows the program intended to perform. Sure, there's a small degree of freedom based on the particular arithmetic operations being mentioned, but that's... very little. When your compiler already knows the high-level source code logic before it's even read the source code, it's... not a compiler. It's just a dynamic code emitter.
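As a rough illustration of what "dynamic code emitter" means here, below is a toy sketch in Python, with compile/exec standing in for the article's writes into executable memory; the name emit_step and the "+5 *3" op format are made up for the example. The shape of the emitted function is fixed inside the emitter before any input is seen; only the constants come from the input.

    # A toy "dynamic code emitter": the emitted function's structure is
    # hard-coded in the emitter; only the operator/operand constants vary.
    def emit_step(ops):
        # ops is something like "+5 *3"; each token becomes one fixed line.
        body = "".join(f"    x = x {tok[0]} {tok[1:]}\n" for tok in ops.split())
        src = "def step(x):\n" + body + "    return x\n"
        namespace = {}
        exec(compile(src, "<emitted>", "exec"), namespace)
        return namespace["step"]

    step = emit_step("+5 *3")
    print(step(2))  # (2 + 5) * 3 == 21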
As to the actual difference between JIT and AOT... it may just come down to accounting. That is, to whether you can always exclude the compilation time/cost from the overall program execution time/cost or not. If so, you're compiling ahead of (execution) time. If not, you're compiling during execution time.
*nasal voice* Actually, there's no difference between AOT compilation and interpreting either.
I agree with your point. I think the point of JIT is that a program can be best sped up when there's a large amount of context - of which there's a lot when it's running. A powerful enough JIT compiler can even convert an "interpreter" into a "JIT compiler": See PyPy and Truffle/Graal.
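For a very rough flavor of that idea (nothing like the real machinery in PyPy, which is meta-tracing, or Truffle/Graal, which partially evaluates the interpreter; just a hand-rolled Python sketch of the underlying principle): take a tiny interpreter and specialize it for one fixed guest program, so the per-instruction dispatch happens once up front instead of on every execution.

    from typing import Callable, List, Tuple

    def interpret(program: List[Tuple[str, int]], x: int) -> int:
        # Plain interpreter: dispatch on the opcode for every instruction, every run.
        for op, arg in program:
            if op == "add":
                x = x + arg
            elif op == "mul":
                x = x * arg
        return x

    def specialize(program: List[Tuple[str, int]]) -> Callable[[int], int]:
        # Do the dispatch once, ahead of the calls, and keep only the residual work.
        steps = []
        for op, arg in program:
            if op == "add":
                steps.append(lambda x, a=arg: x + a)
            elif op == "mul":
                steps.append(lambda x, a=arg: x * a)
        def specialized(x: int) -> int:
            for step in steps:
                x = step(x)
            return x
        return specialized

    prog = [("add", 5), ("mul", 3)]
    assert interpret(prog, 2) == specialize(prog)(2) == 21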
I always took the distinction to be the executable-memory bit, as opposed to writing an executable and then launching it in the usual way. Of course, high-performance runtimes that contain JITs do typically interpret first in order to get type information and reduce startup latency, but that's more a function of the language not being efficiently AOT compilable rather than something fundamental to the concept of a JIT.
Can you explain what you mean by a “language not being efficiently AOT compilable”? I am guessing you are referring to languages that emphasize REPL-based development or languages that are ‘image’ based (I think Smalltalk is the most common example).
Like the guesses above, I can understand difficulty with AOT compilation in conjunction with certain use cases; however, I cannot think of a language that, based on its definition, would be less amenable to AOT compilation.
The more the context is narrowed down, the more optimizations can be applied during compilation.
AOT situations where a lot of context is missing:
• Loosely typed languages. Code can be very general, much more general than how it is actually used in any given situation, but without knowing what the full situation is, all that generality must be compiled (see the sketch after this list).
• Incremental AOT compilation. If modules have been compiled separately, useful context wasn't available during optimization.
• Code whose structure is very sensitive to data statistics or other conditional runtime information. This is the prime advantage of JIT over AOT, unless the AOT compiler is working in conjunction with data and a profiler.
Those are all cases where JIT has advantages.
A language where JIT is optimal is, by definition, less amenable to AOT compilation.
EDIT: at least GHC seems to be a traditional AOT compiler.
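A minimal sketch of the first bullet, in Python (the names generic_add and make_specialized_add are invented for the example): an AOT compiler of a loosely typed language has to keep the fully general path, while a JIT that has observed the types actually reaching a call site can install a guarded fast path and fall back to the general one when the guess is wrong.

    def generic_add(a, b):
        # The fully general behaviour an AOT compiler must assume is possible
        # (a toy stand-in for a dynamically typed language's "+").
        if isinstance(a, str) or isinstance(b, str):
            return str(a) + str(b)
        return a + b

    def make_specialized_add(observed_type):
        # Speculative specialization behind a guard, roughly in the spirit of
        # an inline cache: fast path for the observed type, else fall back.
        def fast_add(a, b):
            if type(a) is observed_type and type(b) is observed_type:
                return a + b             # monomorphic fast path
            return generic_add(a, b)     # deoptimize to the general path
        return fast_add

    add = make_specialized_add(int)  # the runtime has only seen ints here so far
    print(add(2, 3))      # 5, via the fast path
    print(add("a", 1))    # "a1", via the fallback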
In that sense almost every compiled Lisp/Scheme implementation, GHC, etc., or any other interactive programming system, counts as JIT. But virtually nobody in those circles refers to such implementations as JIT. Instead, people say "I wish our implementation was JIT to benefit from all those optimizations it enables"!
Following is the closed form solution linked in the article (from a since-deleted Reddit comment):
    from functools import reduce

    def recurrence(ops, a0, n):
        def transform(x, op):
            return eval(repr(x) + op)
        ops = ops.split()
        # m is the product of the multiplicative factors; b is one full pass applied to 0.
        m = reduce(transform, [op for op in ops if op[0] in ('*', '/')], 1)
        b = reduce(transform, ops, 0)
        for k in range(n + 1):
            if m == 1:
                # No net multiplicative factor: the recurrence is a_k = a0 + b*k.
                print(f'Term {k}:', a0 + b * k)
            else:
                print(f'Term {k}:', a0 * m ** k + b * (m ** k - 1) / (m - 1))
> This is really only interesting if a particular (potentially really large) term of the sequence is desired, as opposed to all terms up to a point. The key observation is that any sequence of the given set of operations reduces to a linear recurrence which has the given solution.
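To spell out that observation with the names used in the code above: one full pass over the ops is an affine map f(x) = m*x + b, where m is the product of the multiplicative factors and b = f(0). Iterating it k times from a0 gives

    a_k = m^k * a0 + b * (m^(k-1) + ... + m + 1) = m^k * a0 + b * (m^k - 1) / (m - 1)

for m != 1 (by the geometric series; when m = 1 the sum is just k, so a_k = a0 + b*k), which is exactly the expression the loop prints.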
Is it just me or is the author using static wrong? Someone mentioned it in the thread on the previous post, where it felt more like an oversight. But in this article it seems much more like an actual misunderstanding. Should it actually be the function-level static variable? I feel like, in the right context, the optimizer might be able to leave that in a register if the calling function is in the same compilation unit.