path: root/Userland/Libraries/LibVideo/VP9/LookupTables.h
2023-05-07  Everywhere: Run spellcheck on all documentation  (Ben Wiederhake)
2023-04-25  LibVideo/VP9: Use an enum to select segment features  (Zaggy1024)
This throws out some ugly `#define`s we had that were taking the role of an enum anyway. We now have some nice getters in the contexts that replace the combination of calling `seg_feature_active()` and then looking up `FrameContext::m_segmentation_features` directly.
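A minimal sketch of that pattern in standard C++ (LibVideo itself uses AK containers); the enumerator, type, and getter names here are illustrative guesses, not the exact ones in the codebase:

```cpp
#include <array>
#include <cstdint>
#include <iostream>

// Hypothetical names; the spec calls these SEG_LVL_ALT_Q, SEG_LVL_ALT_L,
// SEG_LVL_REF_FRAME and SEG_LVL_SKIP.
enum class SegmentFeature : uint8_t {
    AlternativeQuantizer,
    AlternativeLoopFilter,
    ReferenceFrame,
    Skip,
    Count,
};

struct SegmentFeatureStatus {
    bool enabled { false };
    int16_t value { 0 };
};

class FrameContext {
public:
    // One typed getter replaces seg_feature_active() plus a raw table lookup.
    SegmentFeatureStatus segmentation_feature(size_t segment_id, SegmentFeature feature) const
    {
        return m_segmentation_features[segment_id][static_cast<size_t>(feature)];
    }

private:
    // VP9 allows up to 8 segments per frame.
    std::array<std::array<SegmentFeatureStatus, static_cast<size_t>(SegmentFeature::Count)>, 8> m_segmentation_features {};
};

int main()
{
    FrameContext context;
    auto status = context.segmentation_feature(0, SegmentFeature::Skip);
    std::cout << "skip feature enabled: " << status.enabled << '\n';
}
```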
2023-04-25  LibVideo/VP9: Implement unscaled fast paths in inter prediction  (Zaggy1024)
Inter-prediction convolution filters are selected based on the subpixel position determined for the motion vector relative to the block being predicted. Subpixel position 0 uses only a single sample at the center of the convolution, without averaging any other samples. Let's call this a copy.

Reference frames can also be a different size relative to the frame being predicted, but in almost every case that scale will be 1:1 for every frame in a video.

Taking these facts into account, we can create multiple fast paths for inter prediction. These fast paths are only active when scaling is 1:1.

If we are doing a copy in both dimensions, we can do a straight memcpy from the reference frame to the output block buffer. In videos where there is no motion, this is a dramatic speedup.

If we are doing a copy in one dimension, we can do just one convolution and average directly into the output block buffer.

If we aren't doing a copy in either dimension, we can still cut a few operations out of the convolution loops, since we only need to advance our samples by whole pixels instead of subpixels.

These fast paths result in about a 34% improvement (~31.2s -> ~20.6s) in a video that relies heavily on intra-predicted blocks due to high motion. In videos with less motion, the improvement will be even greater.

Also note that the accumulators in these faster loops are only 16-bit. High bit-depth videos would overflow those, so for now the fast paths are only used for 8-bit videos.
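Roughly, the dispatch could look like the sketch below; the function signature and the decoder plumbing around it are hypothetical, and only the copy-in-both-dimensions branch is spelled out:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Subpixel position 0 reduces the filter to a single centered tap.
static bool is_copy(int subpixel_offset) { return subpixel_offset == 0; }

// Assumes the reference-to-output scale is 1:1, as described above.
void predict_inter_block(uint8_t const* reference, size_t reference_stride,
    uint8_t* output, size_t output_stride,
    int width, int height, int subpixel_x, int subpixel_y)
{
    if (is_copy(subpixel_x) && is_copy(subpixel_y)) {
        // Copy in both dimensions: a straight row-by-row memcpy.
        for (int row = 0; row < height; row++)
            memcpy(output + row * output_stride, reference + row * reference_stride, width);
        return;
    }
    // Copy in one dimension: run only the other dimension's convolution.
    // No copy in either dimension: run both passes, advancing by whole
    // pixels rather than subpixels since the scale is 1:1. With 8-bit
    // samples, 16-bit accumulators are enough for these loops.
}

int main()
{
    std::vector<uint8_t> reference(64 * 64, 128), output(64 * 64, 0);
    predict_inter_block(reference.data(), 64, output.data(), 64, 8, 8, 0, 0);
    return output[0] == 128 ? 0 : 1;
}
```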
2023-02-03  LibVideo/VP9: Use proper indices for updating inter_mode probabilities  (Zaggy1024)
I previously changed it to use the absolute inter-prediction mode values instead of the ones relative to NearestMv. That caused the probability adaption to take invalid indices from the counts and broke certain videos. Now it converts to the PredictionMode enum only when returning from parse_inter_mode, which lets callers use it the same way as before.
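A hedged sketch of that fix: the adaption counters keep using the 0-based index relative to NearestMv, and conversion to the absolute enum value happens only on return. The enumerator values and names are illustrative:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

enum class PredictionMode : uint8_t {
    // ... intra modes occupy the lower values ...
    NearestMv = 10,
    NearMv,
    ZeroMv,
    NewMv,
};

constexpr size_t inter_mode_count = 4;
static std::array<unsigned, inter_mode_count> inter_mode_counts {};

PredictionMode parse_inter_mode(size_t index_relative_to_nearest_mv)
{
    // Probability adaption indexes the counts with the relative value...
    inter_mode_counts[index_relative_to_nearest_mv]++;
    // ...while callers receive the absolute PredictionMode, as before.
    return static_cast<PredictionMode>(index_relative_to_nearest_mv
        + static_cast<size_t>(PredictionMode::NearestMv));
}

int main()
{
    return parse_inter_mode(0) == PredictionMode::NearestMv ? 0 : 1;
}
```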
2022-11-30  LibVideo/VP9: Prefix TransformSize with Transform_ instead of TX_  (Zaggy1024)
2022-11-30  LibVideo/VP9: Rename TX(Mode|Size) to Transform(Mode|Size)  (Zaggy1024)
2022-11-30  LibVideo/VP9: Replace (DCT|ADST)_(DCT|ADST) with struct TransformSet  (Zaggy1024)
Those previous constants were only set and used to select the first and second transforms done by the Decoder class. By turning it into a struct, we can make the code a bit more legible while keeping those transform modes the same size as before or smaller.
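A sketch of what such a struct could look like, assuming one field per 1D transform; the names and bit-field packing are illustrative:

```cpp
#include <cstdint>

enum class TransformType : uint8_t {
    DCT,
    ADST,
};

// Two 4-bit fields typically pack into a single byte, so this stays as
// small as the constants it replaces while naming each transform.
struct TransformSet {
    TransformType first_transform : 4;
    TransformType second_transform : 4;
};

int main()
{
    TransformSet set { TransformType::ADST, TransformType::DCT };
    return set.first_transform == TransformType::ADST ? 0 : 1;
}
```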
2022-11-30  LibVideo/VP9: Convert token scan order indices to u16  (Zaggy1024)
They are taken directly from lookup tables that only need that bit precision, so we may as well shrink them.
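For scale: a 32x32 scan's largest index is 1023, which fits easily in 16 bits, so a u16 table is half the size of an int one. The ordering below is only a stand-in for the spec's actual scan tables:

```cpp
#include <cstdint>

// Placeholder 4x4 scan order (zig-zag-like); the real tables come from
// the VP9 spec.
constexpr uint16_t default_scan_4x4[16] = {
    0, 4, 1, 5, 8, 2, 12, 9, 3, 6, 13, 10, 7, 14, 11, 15,
};

// 16 entries at two bytes each, versus 64 bytes with int.
static_assert(sizeof(default_scan_4x4) == 32);

int main() { return default_scan_4x4[1] == 4 ? 0 : 1; }
```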
2022-11-30  LibVideo/VP9: Use a bitwise enum for motion vector joint selection  (Zaggy1024)
The motion vector joints enum is set up so that the first bit indicates that a vector should have a non-zero value in the column, and the second bit indicates a non-zero value for the row. Taking advantage of this makes the code a bit more legible.
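A sketch of that encoding; the enumerator names are illustrative, but the bit assignments follow the description above:

```cpp
#include <cstdint>

enum MvJoint : uint8_t {
    MvJointZero = 0b00,          // both components zero
    MvJointNonzeroColumn = 0b01, // bit 0: non-zero column
    MvJointNonzeroRow = 0b10,    // bit 1: non-zero row
};

struct MotionVector {
    int32_t row { 0 };
    int32_t column { 0 };
};

// Two bit tests recover the joint, instead of a four-way switch.
constexpr uint8_t joint_for(MotionVector vector)
{
    return (vector.column != 0 ? MvJointNonzeroColumn : 0)
        | (vector.row != 0 ? MvJointNonzeroRow : 0);
}

static_assert(joint_for({ 0, 5 }) == MvJointNonzeroColumn);
static_assert(joint_for({ 3, 5 }) == (MvJointNonzeroColumn | MvJointNonzeroRow));

int main() { return 0; }
```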
2022-11-12  LibVideo: Combine VP9's Intra- and InterMode enums into PredictionMode  (Zaggy1024)
The two mode sets were stored in the same fields, and their underlying values didn't overlap, so there was no reason to keep them separate. The enum is now an enum class as well, to enforce that almost all uses of the enum are named. The only case where underlying values are used is in lookup tables, but it may be worth abstracting that as well to make array bounds more clear.
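A sketch of the merged enum class, with a lookup table as the one place the underlying value still appears; the enumerators, values, and table are an illustrative subset:

```cpp
#include <cstddef>
#include <cstdint>

enum class PredictionMode : uint8_t {
    DcPrediction = 0,
    VerticalPrediction = 1,
    // ... the remaining intra modes ...
    NearestMv = 10, // inter modes continue on the same scale
    NearMv = 11,
    ZeroMv = 12,
    NewMv = 13,
};

// Hypothetical table: lookup tables are the one spot where the underlying
// value is still used as an array index.
constexpr uint8_t mode_to_context[14] = {};

constexpr uint8_t context_for(PredictionMode mode)
{
    return mode_to_context[static_cast<size_t>(mode)];
}

int main() { return context_for(PredictionMode::NewMv); }
```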
2022-10-09  LibVideo: Implement inter prediction  (Zaggy1024)
This enables the second frame of the test video to be decoded. It appears that the test video uses a superframe (a group of multiple frames) for the first chunk of the file, but we haven't implemented superframe parsing. We also ignore the show_frame flag, so for now this means that the second frame read out is shown when it should not be. To fix this, another error type needs to be implemented that is "thrown" to the decoder's client so they know to send another sample buffer.
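For context on the missing piece, a hedged sketch of the superframe index layout from Annex B of the VP9 spec (the function and type names here are hypothetical): a chunk carrying several frames ends with an index whose first and last bytes repeat a 0b110xxxxx marker encoding the frame count and the width of each size field.

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

struct SuperframeIndex {
    std::vector<uint32_t> frame_sizes;
};

// Returns the per-frame sizes if the chunk ends in a superframe index.
std::optional<SuperframeIndex> parse_superframe_index(uint8_t const* chunk, size_t size)
{
    if (size < 1)
        return std::nullopt;
    uint8_t const marker = chunk[size - 1];
    if ((marker & 0xe0) != 0xc0)
        return std::nullopt; // Not a superframe; the chunk is a single frame.
    size_t const frame_count = (marker & 0x07) + 1;
    size_t const bytes_per_size = ((marker >> 3) & 0x03) + 1;
    size_t const index_size = 2 + frame_count * bytes_per_size;
    if (size < index_size || chunk[size - index_size] != marker)
        return std::nullopt; // The same marker must bracket the index.

    SuperframeIndex index;
    uint8_t const* cursor = chunk + size - index_size + 1;
    for (size_t frame = 0; frame < frame_count; frame++) {
        uint32_t frame_size = 0;
        // Size fields are stored little-endian.
        for (size_t byte = 0; byte < bytes_per_size; byte++)
            frame_size |= static_cast<uint32_t>(*cursor++) << (8 * byte);
        index.frame_sizes.push_back(frame_size);
    }
    return index;
}

int main()
{
    // Marker 0xc9 = 0b110'01'001: two bytes per size field, two frames.
    uint8_t const chunk[] = { 0xc9, 10, 0, 20, 0, 0xc9 };
    auto result = parse_superframe_index(chunk, sizeof(chunk));
    return result.has_value() && result->frame_sizes[1] == 20 ? 0 : 1;
}
```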
2022-10-09  LibVideo: Add MotionVector lookup tables as constant expressions  (Zaggy1024)
This changes MotionVector by removing the cpp file and moving all functions to the header, where they are now declared as constexpr so that they can be compile-time evaluated in LookupTables.h.
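A minimal sketch of the header-only constexpr pattern this enables; the member set and operator shown are illustrative:

```cpp
#include <cstdint>

struct MotionVector {
    int32_t row { 0 };
    int32_t column { 0 };

    constexpr MotionVector operator+(MotionVector const& other) const
    {
        return { row + other.row, column + other.column };
    }
};

// With every member function constexpr, tables of vectors become true
// compile-time constants instead of runtime-initialized globals.
constexpr MotionVector candidate_offsets[] = {
    MotionVector { -1, 0 } + MotionVector { 0, -1 },
    MotionVector { 1, 1 },
};

static_assert(candidate_offsets[0].row == -1 && candidate_offsets[0].column == -1);

int main() { return 0; }
```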
2021-07-10  LibVideo/VP9: Implement token parsing (6.4.24-6.4.26)  (FalseHonesty)
Note that this now requires a couple of new syntax types to be parsed in the TreeParser, so a follow-up commit will implement that behavior.
2021-07-10  LibVideo/VP9: Implement sections 6.1.2 and 8.4.1-8.4.4  (FalseHonesty)
These sections implement the behavior to refresh the probability tables after parsing a frame.
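A hedged sketch of the merge step those sections describe, following the shape of the spec's merge_prob computation (names adjusted, clamping simplified):

```cpp
#include <algorithm>
#include <cstdint>

// Moves a probability toward the rate observed in this frame's counts;
// the more symbols were seen, the larger the step, up to count_sat.
constexpr uint8_t merge_prob(uint8_t pre_prob, uint32_t count_0, uint32_t count_1,
    uint32_t count_sat, uint32_t max_update_factor)
{
    uint32_t const total = count_0 + count_1;
    if (total == 0)
        return pre_prob;
    // Observed probability that the bit is 0, clamped to 1..255.
    auto const observed = std::clamp<uint32_t>((count_0 * 256 + total / 2) / total, 1, 255);
    uint32_t const factor = max_update_factor * std::min(total, count_sat) / count_sat;
    return static_cast<uint8_t>((pre_prob * (256 - factor) + observed * factor + 128) >> 8);
}

int main()
{
    // Seeing mostly zeros should pull the probability of zero upward.
    return merge_prob(128, 90, 10, 20, 128) > 128 ? 0 : 1;
}
```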
2021-07-10  LibVideo/VP9: Start parsing residuals (6.4.21-6.4.23)  (FalseHonesty)
Additionally, this uncovered a couple of bugs in existing code, so those have been fixed. Currently, parsing a whole video fails because we are now using a new calculation for the frame width, but it hasn't been fully implemented yet.
2021-06-30  LibVideo/VP9: Implement intra_frame_mode_info procedure (6.4.6)  (FalseHonesty)
2021-06-30  LibVideo/VP9: Begin creating a tree parser to parse syntax elements  (FalseHonesty)
2021-06-30  LibVideo/VP9: Begin decoding tiles  (FalseHonesty)
2021-06-12  LibVideo/VP9: Add Decoder and begin parsing uncompressed header data  (FalseHonesty)
This patch brings all of the previous work together and starts to actually parse and decode frame information. Currently it only parses the uncompressed header data (section 6.2 of the spec).
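To illustrate where that parsing begins, a toy sketch of reading the first few fields of the uncompressed header (section 6.2); the bit reader is a stand-in for LibVideo's actual one:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Minimal MSB-first bit reader, for illustration only.
struct BitReader {
    uint8_t const* data;
    size_t bit_position { 0 };

    uint32_t read_bits(size_t count)
    {
        uint32_t value = 0;
        for (size_t i = 0; i < count; i++) {
            value = (value << 1) | ((data[bit_position / 8] >> (7 - bit_position % 8)) & 1);
            bit_position++;
        }
        return value;
    }
};

int main()
{
    // Hand-built first byte: frame_marker=2, profile=0,
    // show_existing_frame=0, frame_type=0 (key frame), show_frame=1.
    uint8_t const header[] = { 0b10000010, 0b00000000 };
    BitReader reader { header };

    uint32_t frame_marker = reader.read_bits(2); // must be 2
    uint32_t profile_low_bit = reader.read_bits(1);
    uint32_t profile_high_bit = reader.read_bits(1);
    uint32_t profile = (profile_high_bit << 1) + profile_low_bit;
    uint32_t show_existing_frame = reader.read_bits(1);
    uint32_t frame_type = reader.read_bits(1); // 0 = key frame
    uint32_t show_frame = reader.read_bits(1);

    printf("marker=%u profile=%u show_existing=%u frame_type=%u show_frame=%u\n",
        frame_marker, profile, show_existing_frame, frame_type, show_frame);
    return frame_marker == 2 ? 0 : 1;
}
```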