The parser has a rather unorthodox structure (a sketch in code
follows the outline):

    Until EOF:

        Read a section:

            Generator function get_expr() yields one section after
            the other, as a string.  An unindented, non-empty line
            that isn't a comment starts a new section.

        Lexing:

            Split the section into a list of tokens (strings), with
            the help of generator function tokenize().

        Parsing:

            Parse the first expression from the list of tokens with
            parse(), and throw away any remaining tokens.

            In parse_schema(): record the value of an enum, union or
            struct key (if any) in the appropriate global table, and
            append the expression to the list of expressions.

    Return the list of expressions.
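For orientation, a minimal sketch of that pipeline in Python follows.
The function names match the description above, but the bodies are a
deliberately simplified approximation, not the actual qapi.py code
(arrays, string escapes and the enum/union/struct bookkeeping are
omitted).  Comments flag where the known issues listed below come
from:

    def get_expr(fp):
        """Yield the schema one section at a time, as a string."""
        section = ''
        for line in fp:
            if line.startswith('#') or line.strip() == '':
                continue                    # comments and blank lines
            if not line[0].isspace():       # unindented: new section
                if section:
                    yield section
                section = line
            else:
                section += line
        if section:
            yield section

    def tokenize(data):
        """Split a section into a list of tokens (strings)."""
        i = 0
        while i < len(data):
            ch = data[i]
            i += 1
            if ch in "{}:,[]":
                yield ch
            elif ch == "'":
                end = data.index("'", i)    # no escapes, for brevity
                # string contents are yielded bare, so the string '{'
                # is indistinguishable from the token { -- issue (6)
                yield data[i:end]
                i = end + 1
            # anything else, including invalid characters, is
            # silently dropped -- issue (4)

    def parse(tokens):
        """Parse one expression from tokens, return (expr, rest)."""
        if tokens[0] == '{':                # IndexError at EOF -- (10)
            expr = {}
            tokens = tokens[1:]
            while tokens[0] != '}':
                key = tokens[0]
                # tokens[1] should be a colon, but isn't checked -- (8)
                val, tokens = parse(tokens[2:])
                expr[key] = val
                if tokens[0] == ',':        # comma is optional -- (9)
                    tokens = tokens[1:]
            return expr, tokens[1:]
        return tokens[0], tokens[1:]        # bare string

    def parse_schema(fp):
        """Return the list of expressions in the schema."""
        exprs = []
        for section in get_expr(fp):
            expr, _ = parse(list(tokenize(section)))
            exprs.append(expr)              # leftovers dropped -- (7)
        return exprs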
Known issues:

(1) Indentation is significant, unlike in real JSON.

(2) Neither the lexer nor the parser has any idea of source
    positions.  Error reporting is hard, let's go shopping.

(3) The one error we bother to detect, we "report" via raise.

(4) The lexer silently ignores invalid characters.

(5) If everything in a section gets ignored, the parser crashes.

(6) The lexer treats a string containing a structural character
    exactly like the structural character.

(7) Tokens trailing the first expression in a section are silently
    ignored.

(8) The parser accepts any token in place of a colon.

(9) The parser treats comma as optional.

(10) parse() crashes on unexpected EOF.

(11) parse_schema() crashes when a section's expression isn't a JSON
     object.
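To make (6) and (7) concrete with the hypothetical sketch above: the
string '{' and the structural character { produce the very same
token, and leftovers after the first expression simply vanish:

    >>> list(tokenize("{"))
    ['{']
    >>> list(tokenize("'{'"))
    ['{']
    >>> parse(list(tokenize("{ 'a': 'b' } 'junk'")))
    ({'a': 'b'}, ['junk'])

The parser thus cannot tell a key or value from a delimiter, and
parse_schema() throws the trailing 'junk' away without a peep.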
Replace this piece of original art with a thoroughly unoriginal
design.  This takes care of (1), (2), (5), (6) and (7), and lays the
groundwork for addressing the others.  Generated source files remain
unchanged.
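The unoriginal design is the textbook shape: a recursive-descent
parser over a lexer with one-token lookahead that tracks line and
column.  The sketch below shows that shape only; class, method and
message names are illustrative, not the actual code this patch adds
(arrays are again omitted):

    class SchemaError(Exception):
        """Parse error carrying a source position -- fixes (2)."""
        def __init__(self, schema, msg):
            super().__init__('%d:%d: %s'
                             % (schema.line, schema.col, msg))

    class Schema:
        def __init__(self, text):
            self.text = text
            self.pos = 0
            self.line = 1
            self.line_start = 0
            self.col = 1
            self.accept()               # prime one-token lookahead

        def accept(self):
            """Scan the next token, recording its line and column."""
            while True:
                if self.pos >= len(self.text):
                    self.tok = None     # explicit EOF token -- (10)
                    return
                ch = self.text[self.pos]
                self.col = self.pos - self.line_start + 1
                self.pos += 1
                if ch in "{}:,[]":
                    self.tok = ch
                    return
                if ch == "'":           # strings get a token type
                    end = self.text.find("'", self.pos)  # of their
                    if end < 0:                          # own -- (6)
                        raise SchemaError(self, 'unterminated string')
                    self.tok = 'string'
                    self.val = self.text[self.pos:end]
                    self.pos = end + 1
                    return
                if ch == '\n':
                    self.line += 1
                    self.line_start = self.pos
                elif not ch.isspace():  # reject stray chars -- (4)
                    raise SchemaError(self, 'stray %r' % ch)

        def expect(self, tok):
            if self.tok != tok:
                raise SchemaError(self, 'expected %r' % tok)
            self.accept()

        def get_expr(self):
            if self.tok == '{':
                return self.get_object()
            if self.tok == 'string':
                val = self.val
                self.accept()
                return val
            raise SchemaError(self, 'expected expression')

        def get_object(self):
            expr = {}
            self.expect('{')
            while self.tok != '}':
                if self.tok != 'string':
                    raise SchemaError(self, 'expected string key')
                key = self.val
                self.accept()
                self.expect(':')        # a real colon required -- (8)
                expr[key] = self.get_expr()
                if self.tok != '}':
                    self.expect(',')    # comma between members -- (9)
            self.accept()               # eat the closing brace
            return expr

Used like this (the schema text is made up):

    schema = Schema("{ 'command': 'query-foo', 'returns': 'FooInfo' }")
    expr = schema.get_expr()
    if schema.tok is not None:          # diagnose trailing junk -- (7)
        raise SchemaError(schema, 'trailing tokens')

Because the lexer scans the whole input as one character stream,
there are no sections anymore: indentation stops being significant
(1), and there is no section whose content can all be ignored, so (5)
disappears as well.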
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1374939721-7876-4-git-send-email-armbru@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>