The Pliant parser does not rely on the notion of a grammar.
Its guts are: token parsing, then rewriting.
For token parsing, we have an ordered set of rules (each rule is a function that
is responsible for recognising one kind of token; an example could be a rule
recognising a floating point value).
In very few words, a token parsing rule function either fails (meaning "what comes
next is not something I've been designed to recognise"), or adds an object
to the output tree of the parser and moves the parsing cursor forward.
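To make the "fail, or emit a token and advance the cursor" contract concrete, here is a minimal sketch in Python (not Pliant; all function names and the token representation are illustrative assumptions, not Pliant's actual API):

```python
import re

def rule_float(text, pos, output):
    """Illustrative token rule: try to recognise a floating point value
    at 'pos'.  Either fail (return None) or append a token to the
    output list and return the new cursor position."""
    m = re.match(r'\d+\.\d+', text[pos:])
    if m is None:
        return None                      # not something this rule recognises
    output.append(('float', float(m.group())))
    return pos + m.end()                 # move the parsing cursor forward

def rule_ident(text, pos, output):
    """Illustrative token rule for identifiers."""
    m = re.match(r'[A-Za-z_]\w*', text[pos:])
    if m is None:
        return None
    output.append(('ident', m.group()))
    return pos + m.end()

def tokenize(text, rules):
    """Apply the ordered set of rules; the first rule that succeeds wins."""
    output, pos = [], 0
    while pos < len(text):
        if text[pos].isspace():
            pos += 1
            continue
        for rule in rules:
            new_pos = rule(text, pos, output)
            if new_pos is not None:
                pos = new_pos
                break
        else:
            output.append(('char', text[pos]))  # fallback: single character
            pos += 1
    return output

tokens = tokenize("x + 3.14", [rule_float, rule_ident])
# tokens is a flat list: [('ident', 'x'), ('char', '+'), ('float', 3.14)]
```

Note how the ordering of the rule set matters: the first rule to succeed at the current cursor position decides what the token is.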
Now, rewriting rules (also an ordered list of functions) just travel the output
tree (which is just a flat list after token parsing has taken place) and fold it.
An example could be a rule saying that 'A' '+' 'B' must be folded as ('+' 'A' 'B').
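The folding step above can be sketched as follows, again in Python rather than Pliant, with hypothetical names; this shows one rewriting rule travelling the flat list produced by token parsing and folding 'A' '+' 'B' into ('+' 'A' 'B'):

```python
def fold_plus(nodes):
    """Illustrative rewriting rule: scan the list and fold any
    X '+' Y triple into a single nested node ('+', X, Y)."""
    i = 1
    while i < len(nodes) - 1:
        if nodes[i] == '+':
            # replace the three items with one folded node; staying at
            # the same index makes repeated '+' fold left-associatively
            nodes[i-1:i+2] = [('+', nodes[i-1], nodes[i+1])]
        else:
            i += 1
    return nodes

def rewrite(nodes, rules):
    """Apply the ordered list of rewriting rules to the output tree."""
    for rule in rules:
        rule(nodes)
    return nodes

tree = rewrite(['A', '+', 'B'], [fold_plus])
# tree is now [('+', 'A', 'B')]
```

Because the rules run in a fixed order over the list, precedence between operators can be expressed simply by the ordering of the rewriting rules, with no grammar in sight.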
Now, what makes the connection with grammars hard is that:
. any Pliant module can provide new rules (and so change the overall equivalent
grammar),
. module visibility applies to parser rules, so some rules apply only if the
current module has included another one; as a result you don't have a single
grammar for Pliant, but potentially 2^n where n is the number of modules
providing parsing rules.
My point of view, also probably not widely accepted, is that the main
advantage of grammars is to provide optimal parsing speed, but we don't care
because parsing time is negligible when compared to code optimisation time.