|
For testing purposes, this allows opening any file by passing its name
as an argument.
Additionally, there is a --benchmark option that will just call decode
for 100 frames and then exit, printing the time spent in the decoder.
|
|
Debug prints are expensive, so doing them every frame seems excessive
now that the decoder is completely functional on some test videos.
|
|
Previously, saved probability tables were being inserted, causing the
Vector to increase in size when it should stay fixed at a size of 4.
This changes the Vector to an Array<T, 4>, which will default-initialize
and allow assigning to any index without first setting a size.
|
|
Integer overflow could sometimes occur when counts went above 255; the
values should instead be clamped at their maximum to avoid wrapping
to 0.
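As a minimal sketch of the idea (the helper below is illustrative; the
real counters live in the decoder's probability adaptation code):

    // u8 is AK's 8-bit unsigned type; clamp instead of letting the
    // increment wrap past 255.
    constexpr u8 saturating_increment(u8 count)
    {
        return count == 255 ? count : count + 1;
    }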
|
|
This fixes an issue that caused frame 3 of the test video to fail to
parse because a reference vector was incorrectly treated as being within
the range for a high precision delta vector read.
|
|
This allows the second shown frame of the VP9 test video to be decoded,
as the second chunk uses a superframe to encode a reference frame and a
second frame that is inter-predicted from the keyframe and the reference
frame.
|
|
This enables the second frame of the test video to be decoded.
It appears that the test video uses a superframe (a group of multiple
frames) for the first chunk of the file, but we haven't implemented
superframe parsing.
We also ignore the show_frame flag, so for now this means that the
second frame read out is shown when it should not be. To fix this,
another error type needs to be implemented that is "thrown" to the
decoder's client so they know to send another sample buffer.
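For reference, a rough sketch of what superframe index parsing would
look like, following the commonly documented VP9 superframe layout (the
names and use of standard containers are illustrative, not a final
implementation):

    #include <cstddef>
    #include <cstdint>
    #include <optional>
    #include <vector>

    struct SuperframeIndex {
        std::vector<size_t> frame_sizes;
    };

    // The last byte of a superframe chunk is a marker 0b110xxyyy, where
    // xx is bytes-per-frame-size minus one and yyy is frame count minus
    // one; the same marker byte also starts the index.
    static std::optional<SuperframeIndex> parse_superframe_index(uint8_t const* data, size_t size)
    {
        if (size == 0)
            return std::nullopt;
        uint8_t marker = data[size - 1];
        if ((marker & 0xE0) != 0xC0)
            return std::nullopt; // Not a superframe; treat the chunk as one frame.
        size_t bytes_per_size = ((marker >> 3) & 0x03) + 1;
        size_t frame_count = (marker & 0x07) + 1;
        size_t index_size = 2 + frame_count * bytes_per_size;
        if (size < index_size || data[size - index_size] != marker)
            return std::nullopt; // The marker must appear at both ends of the index.
        SuperframeIndex index;
        uint8_t const* position = data + size - index_size + 1;
        for (size_t i = 0; i < frame_count; i++) {
            size_t frame_size = 0;
            for (size_t j = 0; j < bytes_per_size; j++)
                frame_size |= static_cast<size_t>(*position++) << (8 * j); // Sizes are little-endian.
            index.frame_sizes.push_back(frame_size);
        }
        return index;
    }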
|
|
The interpolation filter mode of the above block was being taken from
the left block instead, causing some parsing errors.
This also changes the magic number 3 to SWITCHABLE_FILTERS.
Unfortunately, the spec uses the magic number, so the name was taken
from the reference codec, libvpx, instead.
|
|
These values were referencing the wrong column of a table in the spec;
the values should start from 10.
|
|
This gets the decoder closer to fully parsing the second frame without
any errors. It will still be unable to output an inter-predicted frame.
The lack of output causes VideoPlayer to crash if it attempts to read
the buffers for frame 1, so it is still limited to the first frame.
|
|
This changes MotionVector by removing the cpp file and moving all
functions to the header, where they are now declared as constexpr
so that they can be compile-time evaluated in LookupTables.h.
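The pattern looks roughly like this (the member names and the chosen
operation are illustrative; i32 is AK's 32-bit signed type):

    // Header-only, so the functions can be evaluated at compile time
    // when building the lookup tables.
    class MotionVector {
    public:
        constexpr MotionVector() = default;
        constexpr MotionVector(i32 row, i32 column)
            : m_row(row)
            , m_column(column)
        {
        }

        constexpr MotionVector operator+(MotionVector const& other) const
        {
            return MotionVector(m_row + other.m_row, m_column + other.m_column);
        }

    private:
        i32 m_row { 0 };
        i32 m_column { 0 };
    };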
|
|
|
|
For testing purposes, the output buffer is taken directly from the
decoder and displayed in an image widget.
The first keyframe can be displayed, but the second will not decode
so VideoPlayer will stop at frame 0 for now.
This implements a BT.709 YCbCr to RGB conversion in VideoPlayer, but
that should be moved to a library for handling color space conversion.
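The conversion is roughly the following (a sketch using the BT.709
matrix coefficients for full-range 8-bit samples; the real code may
handle studio swing and integer math differently):

    #include <algorithm>
    #include <cstdint>

    struct RGB {
        uint8_t r, g, b;
    };

    // Full-range BT.709 YCbCr -> RGB; 1.5748, 0.1873, 0.4681 and 1.8556
    // are derived from the BT.709 luma coefficients.
    static RGB ycbcr_to_rgb_bt709(uint8_t y, uint8_t cb, uint8_t cr)
    {
        float yf = y;
        float cbf = cb - 128.0f;
        float crf = cr - 128.0f;
        auto clamp_to_u8 = [](float value) {
            return static_cast<uint8_t>(std::clamp(value, 0.0f, 255.0f));
        };
        return {
            clamp_to_u8(yf + 1.5748f * crf),
            clamp_to_u8(yf - 0.1873f * cbf - 0.4681f * crf),
            clamp_to_u8(yf + 1.8556f * cbf),
        };
    }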
|
|
The first keyframe of the test video can be decoded with these changes.
Raw memory allocations in the Parser have been replaced with Vector or
Array to avoid memory leaks and out-of-bounds accesses.
|
|
This allows runtime strings, so we can format the errors to make them
more helpful. Errors in the VP9 decoder will now print out the
function, filename and line number where a read or bitstream
requirement has failed.
The DecoderErrorCategory enum will classify the errors so library users
can show general user-friendly error messages, while providing the
debug information separately.
Any non-DecoderErrorOr<> results can be wrapped by DECODER_TRY to
return from decoder functions. This will also add the extra information
mentioned above to the error message.
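The shape of the macro is roughly the following (ERROR_TRY and
ExampleError are made-up names used to illustrate the pattern, not the
decoder's actual definitions):

    // Convert any failing ErrorOr<>-style result into a decoder error
    // carrying the call site, then return it from the calling function.
    #define ERROR_TRY(category, expression)                              \
        ({                                                               \
            auto&& _result = (expression);                               \
            if (_result.is_error())                                      \
                return ExampleError::with_location((category),           \
                    _result.release_error(), __FILE__, __LINE__);        \
            _result.release_value();                                     \
        })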
|
|
This allows the functions implemented from the VP9 spec to parse the
test video included in /home/anon/Videos.
|
|
This will allow BitStream::read_bool() to read more than one bit from
the range-coded bitstream at a time if needed.
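The Boolean decoding step follows the spec's pseudocode; as a sketch
(the struct and field names below are illustrative, not BitStream's
actual members):

    #include <cstdint>
    #include <functional>

    struct BoolDecoder {
        uint32_t range { 255 };
        uint64_t value { 0 };
        std::function<uint64_t(uint8_t)> read_bits; // Pulls n raw bits from the stream.
    };

    // One range-coder step, followed by a renormalization that reads all
    // the bits it needs in a single read_bits() call instead of looping
    // one bit at a time.
    static bool read_bool(BoolDecoder& decoder, uint8_t probability)
    {
        uint32_t split = 1 + (((decoder.range - 1) * probability) >> 8);
        bool result;
        if (decoder.value < split) {
            decoder.range = split;
            result = false;
        } else {
            decoder.range -= split;
            decoder.value -= split;
            result = true;
        }
        uint8_t bits_needed = 0;
        while ((decoder.range << bits_needed) < 128)
            bits_needed++;
        if (bits_needed > 0) {
            decoder.range <<= bits_needed;
            decoder.value = (decoder.value << bits_needed) | decoder.read_bits(bits_needed);
        }
        return result;
    }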
|
|
init_bool will now check whether there is enough data in the bitstream
for the range coding size to be fully read.
exit_bool will now read the entire padding element regardless of its
size, since the spec does not specify a limit on it.
|
|
Reads are now done in larger chunks at a time.
The public read_byte() function was removed in favor of a private
fill_reservoir() function, which fills the 64-bit reservoir field; the
reservoir is then bit-shifted and masked as necessary for subsequent
reads of arbitrary bit sizes.
read_f(n) was renamed to read_bits to be clearer about its use.
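The reservoir approach looks roughly like this (a standalone sketch with
illustrative names; error handling for running out of data is omitted):

    #include <cstddef>
    #include <cstdint>

    struct BitReader {
        uint8_t const* data { nullptr };
        size_t size { 0 };
        size_t byte_offset { 0 };
        uint64_t reservoir { 0 };        // Bits are kept left-aligned (MSB first).
        uint8_t bits_in_reservoir { 0 };

        // Top up the reservoir from the byte stream.
        void fill_reservoir()
        {
            while (bits_in_reservoir <= 56 && byte_offset < size) {
                reservoir |= static_cast<uint64_t>(data[byte_offset++]) << (56 - bits_in_reservoir);
                bits_in_reservoir += 8;
            }
        }

        // Assumes 1 <= bit_count <= 32 for brevity; reads just shift and
        // mask the reservoir, so most of them never touch the byte stream.
        uint64_t read_bits(uint8_t bit_count)
        {
            if (bits_in_reservoir < bit_count)
                fill_reservoir();
            uint64_t result = reservoir >> (64 - bit_count);
            reservoir <<= bit_count;
            bits_in_reservoir -= bit_count;
            return result;
        }
    };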
|
|
Errors are propagated to the user of the decoder so that they can be
aware of specific places where a read failed.
|
|
This was unnecessary, as the implicit one works correctly.
|
|
The interpolation filter value is not set when reading an intra-only
frame, so printing it for the first keyframe of the file produced "220",
which is invalid.
|
|
|
|
|
|
is_url_code_point invokes StringView::contains, which never was and
cannot become constexpr.
|
|
Instead of doing anything reasonable, Utf8CodePointIterator returned
invalid code points, for example U+123456. However, many callers of this
iterator assume that a code point is always at most 0x10FFFF.
In fact, this is one of two reasons for the following OSS Fuzz issue:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=49184
This is probably a very old bug.
In the particular case of URLParser, AK::is_url_code_point got confused:
return /* ... */ || code_point >= 0xA0;
If code_point is a "code point" beyond 0x10FFFF, this violates the
condition given in the preceding comment, but satisfies the given
condition, which eventually causes URLParser to crash.
This commit fixes *only* the erroneous UTF-8 decoding, and does not
fully resolve OSS-Fuzz#49184.
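To illustrate the class of input involved (a standalone sketch, not the
iterator's actual code): the byte sequence F4 A3 91 96 has the shape of
a 4-byte UTF-8 sequence, but its payload is above the Unicode maximum
and must be rejected rather than returned as a code point.

    #include <cstdint>
    #include <cstdio>

    static bool is_unicode_scalar_value(uint32_t code_point)
    {
        return code_point <= 0x10FFFF
            && !(code_point >= 0xD800 && code_point <= 0xDFFF); // No surrogates.
    }

    int main()
    {
        // Structurally a 4-byte sequence, but the payload is out of range.
        uint8_t bytes[] = { 0xF4, 0xA3, 0x91, 0x96 };
        uint32_t code_point = ((bytes[0] & 0x07u) << 18)
            | ((bytes[1] & 0x3Fu) << 12)
            | ((bytes[2] & 0x3Fu) << 6)
            | (bytes[3] & 0x3Fu);
        printf("decoded: U+%X, valid: %s\n", static_cast<unsigned>(code_point),
            is_unicode_scalar_value(code_point) ? "yes" : "no");
        // Prints: decoded: U+123456, valid: no
        return 0;
    }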
|
|
In particular, StringView::contains(char) is often used with a u32
code point. When this is done, the compiler will for some reason allow
data corruption to occur silently.
In fact, this is one of two reasons for the following OSS Fuzz issue:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=49184
This is probably a very old bug.
In the particular case of URLParser, AK::is_url_code_point got confused:
return /* ... */ || "!$&'()*+,-./:;=?@_~"sv.contains(code_point);
If code_point is a large code point that happens to have the correct
lower bytes, AK::is_url_code_point is then convinced that the given
code point is okay, even if it is actually problematic.
This commit fixes *only* the silent data corruption due to the erroneous
conversion, and does not fully resolve OSS-Fuzz#49184.
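For illustration (the specific value below is hypothetical; the call is
the one quoted above):

    // A u32 code point silently narrows to char when passed to
    // StringView::contains(char), so only its low byte is compared.
    u32 code_point = 0x1002C; // Not in the set below...
    bool looks_fine = "!$&'()*+,-./:;=?@_~"sv.contains(code_point); // ...but it truncates to 0x2C (','), so this returns true.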
|
|
For some odd reason we used to return PhysicalPtr for the
page_table_base result, but when setting it we accepted only a 32-bit
value, so valid 64-bit addresses were truncated to 32 bits.
With this commit applied, PageDirectories can now be located beyond the
4 GiB barrier.
This was found by sin-ack, so he is credited with a Co-authored-by tag.
Co-authored-by: sin-ack <sin-ack@users.noreply.github.com>
|
|
This includes the well-known dtc utility, along with the other
utilities bundled with it.
|
|
|
|
Add the ability to use a repeat() function within the values passed to
grid-template-columns and grid-template-rows for CSS Grid layout.
E.g. grid-template-columns: repeat(2, 50px); means two columns of 50px
width each.
|
|
|
|
There is no particular reason why this section should be marked as
`NOBITS` (as it might very well include initialized values), and it
resolves 90% of the mismatches between the input and output sections,
which LLD now warns about when linking.
|
|
Hopefully no one else will forget to call set_prototype with the cached
prototype they just retrieved from a realm and spend a long time
wondering why their object has no properties...
|
|
Get rid of the bespoke NavigatorObject class and use the modern IDL
strategies for creating platform objects to re-implement Navigator and
its associated mixin interfaces. While we're here, implement it in a
way that brings WorkerNavigator up to spec :^)
|
|
We can now properly add the prototypes and constructors to the global
object of the Worker's inner realm, so we don't need this window for
anything anymore.
|
|
|
|
There are still some yaks to shave here, as Window, Location and
Navigator don't have .idl files yet.
|
|
This new code generator takes all the .idl files in LibWeb, looks for
each top-level interface in there with an [Exposed=Foo] attribute, and
adds code to add the constructor and prototype for each of those exposed
interfaces to the realm of the relevant global object we're
initializing.
It will soon replace WindowObjectHelper as the way that web interfaces
are added to the Window object, and will be used in the future for
creating proper WorkerGlobalScope objects for dedicated and shared
workers.
|
|
Instead, create a tree of Parsers all pointing to a top-level Parser.
All module imports and interfaces are stored at the top level, instead
of in a static map. This allows creating multiple IDL::Parsers in the
same process without them stepping on each other's toes.
|
|
This includes things like Exposed and LegacyFactoryFunction.
|
|
The intent is to use these to autogenerate prototype declarations for
Window and WorkerGlobalScope classes.
And the spec links are just nice to have :^)
|
|
Recent changes to layout and display broke these pseudo elements
leading to crashes on a few websites such as https://rpcs3.net/.
|
|
Both USB_IDS_PATH and PCI_IDS_PATH are now unused, so they can be
safely removed.
|
|
|
|
Also, let's stop using the signature file and instead just compare
sha256 checksums.
|
|
|
|
|
|
When the indicated column-span is greater than the implicit grid (as in
cases where the grid has the default size of 1x1 and the column is
supposed to span any number greater than that), we would previously
crash.
|
|
Fixes a bug in the maybe_add_column() implementation of the
OccupationGrid. Previously, we were checking the width of the grid based
on the first row, so when augmenting the column count row-by-row, the
latter rows would end up with differing column counts.
Also, we were doing an unnecessary + 1, which I imagine comes from
before I was quite clear on whether I was referring to columns by index
or by the CSS value of the column (column 1 in the CSS is column
index 0).
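As a rough sketch of the corrected behavior (assuming the grid is
stored row-major as a Vector of rows; the member and parameter names
are illustrative):

    // Grow every row to the requested column count, rather than sizing
    // based on the first row only, and without the stray "+ 1".
    void OccupationGrid::maybe_add_column(size_t needed_column_count)
    {
        for (auto& row : m_occupation_grid) {
            while (row.size() < needed_column_count)
                row.append(false);
        }
    }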
|