path: root/qobject/json-streamer.c
Age    Commit message    Author
2016-07-12  json-streamer: fix double-free on exiting during a parse  (Paolo Bonzini)
Now that json-streamer tries not to leak tokens on incomplete parse, the tokens can be freed twice if QEMU destroys the json-streamer object during the parser->emit call. To fix this, create the new empty GQueue earlier, so that it is already in place when the old one is passed to parser->emit.

Reported-by: Changlong Xie <xiecl.fnst@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1467636059-12557-1-git-send-email-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
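A minimal sketch of the reordering described above, in C with glib (the struct and function names are illustrative, not the actual QEMU identifiers): installing the fresh queue before calling emit means a re-entrant teardown can only ever free the old queue, and only once.

    #include <glib.h>

    typedef struct Streamer {
        GQueue *tokens;
        void (*emit)(struct Streamer *s, GQueue *tokens);
    } Streamer;

    static void flush_tokens(Streamer *s)
    {
        GQueue *done = s->tokens;

        /* Install the new, empty queue *before* emitting: if emit tears
         * down the streamer, only 'done' remains to be freed. */
        s->tokens = g_queue_new();
        s->emit(s, done);
    }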
2016-06-30  json-streamer: Don't leak tokens on incomplete parse  (Eric Blake)
Valgrind complained about a number of leaks in tests/check-qobject-json:

  ==12657== definitely lost: 17,247 bytes in 1,234 blocks

All of which had the same root cause: on an incomplete parse, we were abandoning the token queue without cleaning up the allocated data within each queue element. Introduced in commit 95385fe, when we switched from QList (which recursively frees contents) to g_queue (which does not).

We don't yet require glib 2.32 with its g_queue_free_full(), so open-code it instead.

CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <1463608012-12760-1-git-send-email-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
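The open-coded replacement amounts to popping and freeing every queued element before freeing the queue itself; a sketch equivalent to g_queue_free_full(queue, g_free), assuming the elements were allocated with g_malloc (function name illustrative):

    #include <glib.h>

    /* glib < 2.32 has no g_queue_free_full(); do it by hand. */
    static void tokens_free(GQueue *tokens)
    {
        gpointer token;

        while ((token = g_queue_pop_head(tokens)) != NULL) {
            g_free(token);
        }
        g_queue_free(tokens);
    }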
2016-02-04  qobject: Clean up includes  (Peter Maydell)
Clean up includes so that osdep.h is included first and headers which it implies are not included manually.

This commit was created with scripts/clean-includes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1454089805-5470-12-git-send-email-peter.maydell@linaro.org
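The rule the script enforces is simply that "qemu/osdep.h" comes first and anything it already pulls in (standard C headers and similar basics) is not re-included by hand; schematically, with the non-osdep header path shown only as an example:

    /* Before: standard headers included manually.
     *   #include <stdint.h>
     *   #include <stdbool.h>
     */

    /* After: osdep.h first; it already provides those. */
    #include "qemu/osdep.h"
    #include "qapi/qmp/json-streamer.h"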
2015-11-26  qjson: Limit number of tokens in addition to total size  (Markus Armbruster)
Commit 29c75dd "json-streamer: limit the maximum recursion depth and maximum token count" attempts to guard against excessive heap usage by limiting total token size (it says "token count", but that's a lie).

Total token size is a rather imprecise predictor of heap usage: many small tokens use more space than few large tokens with the same input size, because there's a constant per-token overhead: 37 bytes on my system.

Tighten this up: limit the token count to 2Mi. Chosen to roughly match the 64MiB total token size limit.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <1448486613-17634-13-git-send-email-armbru@redhat.com>
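A sketch of the resulting pair of guards, with illustrative names (the 64 MiB total-size cap and the new 2 Mi token-count cap are the values the message gives; the function and field names are not the actual QEMU code):

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_TOKEN_SIZE  (64ULL << 20)   /* 64 MiB of token text */
    #define MAX_TOKEN_COUNT (2ULL << 20)    /* 2 Mi tokens          */

    /* Total size alone underestimates heap use for many small tokens,
     * because each token also pays a fixed per-token overhead; hence
     * the additional count limit. */
    static bool token_limits_exceeded(uint64_t token_size, uint64_t token_count)
    {
        return token_size > MAX_TOKEN_SIZE || token_count > MAX_TOKEN_COUNT;
    }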
2015-11-26  qjson: surprise, allocating 6 QObjects per token is expensive  (Paolo Bonzini)
Replace the contents of the tokens GQueue with a simple struct. This cuts the amount of memory allocated by tests/check-qjson from ~500MB to ~20MB, and the execution time from 600ms to 80ms on my laptop. Still a lot (some could be saved by using an intrusive list, such as QSIMPLEQ, instead of the GQueue), but the savings are already massive and the right thing to do would probably be to get rid of json-streamer completely.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1448300659-23559-5-git-send-email-pbonzini@redhat.com>
[Straightforwardly rebased on my patches]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
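The shape of the change is one plain, single-allocation struct per token instead of a QDict holding several QObjects; a sketch with approximate field names (not necessarily the exact layout QEMU ended up with):

    #include <glib.h>
    #include <string.h>

    /* One g_malloc per token; the text is stored inline as a flexible
     * array member. */
    typedef struct JSONToken {
        int type;        /* token type from the lexer */
        int x, y;        /* column/line, kept for error reporting */
        char str[];      /* NUL-terminated token text */
    } JSONToken;

    static JSONToken *json_token_new(int type, int x, int y, const char *text)
    {
        size_t len = strlen(text);
        JSONToken *token = g_malloc(sizeof(*token) + len + 1);

        token->type = type;
        token->x = x;
        token->y = y;
        memcpy(token->str, text, len + 1);
        return token;
    }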
2015-11-26  qjson: store tokens in a GQueue  (Paolo Bonzini)
Even though we still have the "streamer" concept, the tokens can now be deleted as they are read. While doing so, convert from QList to GQueue, since the next step will make tokens not a QObject and we will have to do the conversion anyway.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1448300659-23559-4-git-send-email-pbonzini@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
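A minimal sketch of the producer/consumer pattern this enables (function names illustrative): the streamer appends each token as the lexer emits it, and the parser pops and frees each token as soon as it has been read.

    #include <glib.h>

    /* Streamer side: queue a token as the lexer emits it. */
    static void token_append(GQueue *tokens, gpointer token)
    {
        g_queue_push_tail(tokens, token);
    }

    /* Parser side: take ownership of the next token; freeing it right
     * after use keeps consumed tokens from piling up. */
    static gpointer token_next(GQueue *tokens)
    {
        return g_queue_pop_head(tokens);
    }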
2015-11-26  qjson: replace QString in JSONLexer with GString  (Paolo Bonzini)
JSONLexer only needs a simple resizable buffer. json-streamer.c can allocate memory for each token instead of relying on reference counting of QStrings.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1448300659-23559-2-git-send-email-pbonzini@redhat.com>
[Straightforwardly rebased on my patches, checkpatch made happy]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
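A GString is exactly that: a resizable byte buffer with no reference counting. A sketch of the accumulate-then-hand-off pattern the lexer needs (struct and function names illustrative):

    #include <glib.h>

    typedef struct Lexer {
        GString *token;                        /* bytes of the token so far */
    } Lexer;

    static void lexer_append(Lexer *lexer, char ch)
    {
        g_string_append_c(lexer->token, ch);   /* grows automatically */
    }

    static char *lexer_take_token(Lexer *lexer)
    {
        /* Hand the text to the streamer as an ordinary heap string,
         * then reset the buffer for the next token. */
        char *text = g_strdup(lexer->token->str);
        g_string_truncate(lexer->token, 0);
        return text;
    }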
2015-11-26  qjson: Give each of the six structural chars its own token type  (Markus Armbruster)
Simplifies things, because we always check for a specific one.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1448486613-17634-6-git-send-email-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
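A sketch of the split: one enumerator per structural character rather than a single catch-all "operator" type plus a character payload (enumerator names approximate):

    /* '{', '}', '[', ']', ':' and ',' each get their own type, so
     * callers can test for exactly the one they expect. */
    typedef enum JSONTokenType {
        JSON_LCURLY,     /* { */
        JSON_RCURLY,     /* } */
        JSON_LSQUARE,    /* [ */
        JSON_RSQUARE,    /* ] */
        JSON_COLON,      /* : */
        JSON_COMMA,      /* , */
        JSON_INTEGER,
        JSON_FLOAT,
        JSON_KEYWORD,
        JSON_STRING,
    } JSONTokenType;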
2015-11-26  qjson: Don't crash when input exceeds nesting limit  (Markus Armbruster)
We limit nesting depth and input size to defend against input triggering excessive heap or stack memory use (commit 29c75dd "json-streamer: limit the maximum recursion depth and maximum token count"). However, when the nesting limit is exceeded, parser_context_peek_token()'s assertion fails. Broken in commit 65c0f1e "json-parser: don't replicate tokens at each level of recursion".

To reproduce, stuff 1025 open braces or brackets into QMP.

Fix by taking the error exit instead of the normal one.

Reported-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <1448486613-17634-3-git-send-email-armbru@redhat.com>
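A sketch of the control-flow change (function and parameter names illustrative, not the actual QEMU code): when the nesting guard trips, the queued tokens go out through the error path instead of being handed to the parser, whose parser_context_peek_token() would otherwise hit its assertion.

    #include <glib.h>

    #define MAX_NESTING 1024    /* consistent with the 1025-brace reproducer */

    static void maybe_emit(GQueue *tokens, int brace_count, int bracket_count,
                           void (*emit)(GQueue *), void (*emit_error)(GQueue *))
    {
        if (brace_count + bracket_count > MAX_NESTING) {
            emit_error(tokens);   /* error exit: don't feed the parser */
            return;
        }
        if (brace_count == 0 && bracket_count == 0) {
            emit(tokens);         /* normal exit: one complete expression */
        }
    }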
2015-11-26  qjson: Apply nesting limit more sanely  (Markus Armbruster)
The nesting limit from commit 29c75dd "json-streamer: limit the maximum recursion depth and maximum token count" applies separately to braces and brackets. This makes no sense. Apply it to their sum, because that's actually a measure of recursion depth.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <1448486613-17634-2-git-send-email-armbru@redhat.com>
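Schematically, the check moves from two independent comparisons to one comparison on the sum (names and the exact limit value are illustrative here):

    #include <stdbool.h>

    #define MAX_NESTING 1024

    /* Before (roughly): separate per-character limits, so mixing braces
     * and brackets could nest about twice as deep:
     *   brace_count > MAX_NESTING || bracket_count > MAX_NESTING
     */

    /* After: limit the sum, which is what actually bounds the parser's
     * recursion depth. */
    static bool nesting_exceeded(int brace_count, int bracket_count)
    {
        return brace_count + bracket_count > MAX_NESTING;
    }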
2013-01-12  build: move qobject files to qobject/ and libqemuutil.a  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>