| Commit message | Author | Age | Lines |
| |
|
|
|
|
|
|
|
|
| |
CheckedInt propagates mIsValid through each add operation, so no extra overflow-checking code is needed
after every addition. It also avoids mismatched parameters between the computed result and a separate
overflow check.
This patch uses CheckedInt to take advantage of those built-in guarantees.
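As a rough illustration of the pattern (not the patched code itself; the names and values are invented for the example):

    #include <cstdint>
    #include "mozilla/CheckedInt.h"

    // Sum several sizes without an explicit overflow check after every add.
    // CheckedInt carries its validity flag through each operation, so one
    // isValid() test at the end covers the whole chain.
    static bool TotalSize(uint32_t aHeader, uint32_t aPayload, uint32_t aPadding,
                          uint32_t* aOut) {
      mozilla::CheckedInt<uint32_t> total = aHeader;
      total += aPayload;
      total += aPadding;
      if (!total.isValid()) {
        return false;  // some addition overflowed
      }
      *aOut = total.value();
      return true;
    }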
|
|
|
|
| |
Standards Compliance fix, port of Bug 1492737
|
|
|
|
|
|
| |
This commit was incomplete. AV1-in-MP4 support will be re-landed properly at a future date.
This reverts commit 29f718ef78f1a25ca904c6438b59ffc8e365a750.
|
| |
|
| |
|
|
|
|
| |
This increases the stack size of the platform decoder threads (to prevent stack overflows, particularly when decoding AV1), while leaving the others at their default values.
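For illustration only, and not the actual Gecko thread-pool change: a thread with a larger-than-default stack can be created along these lines with POSIX threads. The 512 KiB figure and the DecodeLoop entry point are made-up examples.

    #include <pthread.h>

    void* DecodeLoop(void*);  // hypothetical thread entry point

    bool SpawnDecoderThread(pthread_t* aThread) {
      pthread_attr_t attr;
      pthread_attr_init(&attr);
      // Example size only; deep AV1 decode call chains need more than the default.
      pthread_attr_setstacksize(&attr, 512 * 1024);
      int rv = pthread_create(aThread, &attr, DecodeLoop, nullptr);
      pthread_attr_destroy(&attr);
      return rv == 0;
    }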
|
|
|
|
|
|
| |
Adding partial support for 10/12-bit video images seems to have broken the native pixel-stride support we were using to pass 8-bit AV1 frame data formatted as 16-bit pixel values, resulting in vertical green lines.
Revert to the earlier behavior of always downsampling to 8-bit data. This is slower, but at least it displays correctly.
|
| |
|
|
|
|
| |
Disabled by default.
|
|
|
|
| |
This reflects the API changes to the aom_codec_decode function and the removal of I440. It also sets allow_lowbitdepth to give proper support for 8-bit video, and removes the git version from the MIME type.
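A hedged sketch of what the updated call sites look like, assuming the libaom revision in use exposes allow_lowbitdepth on aom_codec_dec_cfg_t; the error handling and configuration values are simplified examples, not the patch itself:

    #include <cstdint>
    #include <cstring>
    #include "aom/aom_decoder.h"
    #include "aom/aomdx.h"

    bool InitDecoder(aom_codec_ctx_t* aCtx) {
      aom_codec_dec_cfg_t cfg;
      memset(&cfg, 0, sizeof(cfg));
      cfg.threads = 2;            // example value
      cfg.allow_lowbitdepth = 1;  // keep 8-bit video on the 8-bit pipeline
      return aom_codec_dec_init(aCtx, aom_codec_av1_dx(), &cfg, 0) == AOM_CODEC_OK;
    }

    bool DecodeFrame(aom_codec_ctx_t* aCtx, const uint8_t* aData, size_t aSize) {
      // The updated aom_codec_decode no longer takes a deadline argument.
      return aom_codec_decode(aCtx, aData, aSize, /* user_priv */ nullptr) ==
             AOM_CODEC_OK;
    }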
|
|
|
|
| |
Update aom to commit id d14c5bb4f336ef1842046089849dee4a301fbbf0.
|
| |
|
|
|
|
|
|
| |
The libaom AV1 decoder will return 16-bit-per-channel aom_image_t structures with only 8 significant bits.
Detect this case and use the mSkip fields of PlanarYCbCrImage to handle the extra data, instead of allocating and performing an extra copy to obtain the necessary 8-bit representation.
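A rough sketch of the skip-byte idea, assuming little-endian 16-bit samples whose low byte holds the 8 significant bits. The PlanarYCbCrData field names follow Gecko's ImageContainer.h, but the helper itself is invented for the example and omits the size/picture fields a real caller must also fill in:

    #include "aom/aom_image.h"
    #include "ImageContainer.h"  // mozilla::layers::PlanarYCbCrData

    // Each sample occupies 2 bytes but only the low byte is meaningful, so
    // point at the plane as-is and skip 1 byte after every sample instead of
    // copying it into a tightly packed 8-bit buffer.
    static void DescribeLowDepthImage(const aom_image_t* aImg,
                                      mozilla::layers::PlanarYCbCrData& aData) {
      aData.mYChannel = aImg->planes[AOM_PLANE_Y];  // low byte of each sample
      aData.mYStride = aImg->stride[AOM_PLANE_Y];
      aData.mYSkip = 1;                             // skip the (zero) high byte
      aData.mCbChannel = aImg->planes[AOM_PLANE_U];
      aData.mCrChannel = aImg->planes[AOM_PLANE_V];
      aData.mCbCrStride = aImg->stride[AOM_PLANE_U];
      aData.mCbSkip = 1;
      aData.mCrSkip = 1;
    }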
|
|
|
|
| |
The libaom AV1 decoder can now return high-bit-depth frame data. Handle those frames by downsampling them to 8 bits per channel so they can be passed to our normal playback pipeline.
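A minimal sketch of the per-sample conversion; the real patch works over the decoder's plane and stride layout, and the helper name here is invented:

    #include <cstddef>
    #include <cstdint>

    // Reduce a plane of high-bit-depth samples (10/12 bits stored in 16-bit
    // words) to 8 bits per sample by dropping the extra precision.
    static void DownsamplePlaneTo8Bit(const uint16_t* aSrc, uint8_t* aDst,
                                      size_t aSamples, unsigned aBitDepth) {
      const unsigned shift = aBitDepth - 8;  // 2 for 10-bit, 4 for 12-bit
      for (size_t i = 0; i < aSamples; ++i) {
        aDst[i] = static_cast<uint8_t>(aSrc[i] >> shift);
      }
    }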
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
| |
When AV1 video playback is enabled, declare it as supported in the WebM container in MediaSource.IsTypeSupported.
Also support special MIME types of the form video/webm; codecs=vp9.experimental.<git-commit-id> so test sites can verify playback support of particular encodings while the AV1 bitstream is under development.
|
|
|
|
| |
Upstream has removed the requirement to set this when initializing the stream_info struct.
|
| |
|
| |
|
|
|
|
| |
Call AOMDecoder to handle AV1 video tracks from the WebM container. The new decoder is very similar to VPXDecoder, so the call sites closely parallel each other. This codec is still build-time conditional.
|
| |
|
|
|
|
| |
Port the VPXDecoder interface to libaom, which uses the same API with the names changed.
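Loosely, the rename-only nature of the port means call sites translate mechanically; an illustrative (not exhaustive) mapping:

    // libvpx name               ->  libaom equivalent
    // vpx_codec_ctx_t           ->  aom_codec_ctx_t
    // vpx_codec_dec_cfg_t       ->  aom_codec_dec_cfg_t
    // vpx_codec_dec_init()      ->  aom_codec_dec_init()
    // vpx_codec_decode()        ->  aom_codec_decode()
    // vpx_codec_get_frame()     ->  aom_codec_get_frame()
    // vpx_image_t / VPX_PLANE_Y ->  aom_image_t / AOM_PLANE_Y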
|
|
|
|
| |
The MediaDecoderStateMachine treats EOS during a seek as a fatal error, so instead we always resolve the seek promise and let the next GetSample call return EOS.
|
|
|
|
| |
Otherwise the WebM demuxer cannot distinguish a genuine EOS from an error.
|
|
|
|
| |
Use the new helper functions instead of calling libvpx directly. This simplifies adding other codecs in the future.
|
|
|
|
| |
Move the code that queries keyframe status and frame resolution out of WebMDemuxer and into VPXDecoder, so we have a clean wrapper for all the libvpx functions we use.
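A hedged sketch of such a wrapper; the function name is illustrative rather than the one actually added to VPXDecoder:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include "vpx/vpx_decoder.h"
    #include "vpx/vp8dx.h"

    // Peek at an encoded VP9 frame to learn whether it is a keyframe and what
    // resolution it carries, without instantiating a full decoder.
    static bool PeekVP9FrameInfo(const uint8_t* aData, size_t aSize,
                                 bool* aIsKeyframe, unsigned* aWidth,
                                 unsigned* aHeight) {
      vpx_codec_stream_info_t info;
      memset(&info, 0, sizeof(info));
      info.sz = sizeof(info);
      if (vpx_codec_peek_stream_info(vpx_codec_vp9_dx(), aData,
                                     static_cast<unsigned>(aSize),
                                     &info) != VPX_CODEC_OK) {
        return false;
      }
      *aIsKeyframe = info.is_kf != 0;
      *aWidth = info.w;
      *aHeight = info.h;
      return true;
    }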
|
| |
|
|
|
|
| |
Use the enum we already have here instead of converting to an int when we pass it around, giving us better type checking.
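A minimal illustration of the type-safety point; the names are invented for the example:

    enum class Codec { VP8, VP9, AV1 };

    // Takes the enum: ConfigureDecoder(Codec::AV1) compiles, while
    // ConfigureDecoder(2) does not, so mixed-up arguments are caught at build time.
    void ConfigureDecoder(Codec aCodec) { /* ... */ }

    // Takes a bare int: any integer, related or not, is accepted silently.
    void ConfigureDecoderRaw(int aCodec) { /* ... */ }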
|
|
|
|
| |
This simplifies the comparison and update logic.
|
|
|
|
| |
This resolves #810.
|
|
|
|
| |
Despite the wording of the documentation to the contrary, we can't provide a static pointer to an immutable object.
|
| |
|
| |
|
|\
| |
| | |
Update ffvpx code to 4.0.2
|
| | |
|
|/
|
|
| |
Tag #21.
|
|
|
|
| |
Tag #21.
|
| |
|
|
|
|
| |
Tag Issue #765
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
origins
Potential attack: session supercookie.
[Moz Notes](https://bugzilla.mozilla.org/show_bug.cgi?id=1334776#c5):
"The problem is that for unknown header names we store the first one we see and then later we case-insensitively match against that name *globally*. That means you can track if a user agent has already seen a certain header name used (by using a different casing and observing whether it gets normalized). This would allow you to see if a user has used a sensitive service that uses custom header names, or allows you to track a user across sites, by teaching the browser about a certain header case once and then observing if different casings get normalized to that.
What we should do instead is only store the casing for a header name for each header list and not globally. That way it only leaks where it's expected (and necessary) to leak."
[Moz fix note](https://bugzilla.mozilla.org/show_bug.cgi?id=1334776#c8):
"nsHttpAtom now holds the old nsHttpAtom and a string that is case sensitive (only for not standard headers).
So nsHttpAtom holds a pointer to a header name. (header names are store on a static structure). This is how it used to be. I left that part the same but added a nsCString which holds a string that was used to resoled the header name. So when we parse headers we call ResolveHeader with a char*. If it is a new header name the char* will be stored in a HttpHeapAtom, nsHttpAtom::_val will point to HttpHeapAtom::value and the same strings will be stored in mLocalCaseSensitiveHeader. For the first resolve request they will be the same but for the following maybe not. At the end this nsHttpAtom will be stored in nsHttpHeaderArray. For all operation we will used the old char* except when we are returning it to a script using VisitHeaders."
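In spirit, the fix keeps the first-seen casing per header list rather than in a global table; a generic sketch of that idea (plain C++, not Gecko's actual nsHttpAtom/nsHttpHeaderArray code):

    #include <algorithm>
    #include <cctype>
    #include <map>
    #include <string>

    // Matching stays case-insensitive, but the remembered casing is scoped to
    // one header list (one response), so it cannot leak across sites.
    class HeaderList {
     public:
      void Add(const std::string& aName, const std::string& aValue) {
        std::string key = Lower(aName);
        if (mCasing.find(key) == mCasing.end()) {
          mCasing[key] = aName;  // first casing seen in THIS list only
        }
        mValues[key] = aValue;
      }

      // Visitors see the casing that this particular list observed.
      template <typename F>
      void Visit(F&& aFunc) const {
        for (const auto& entry : mValues) {
          aFunc(mCasing.at(entry.first), entry.second);
        }
      }

     private:
      static std::string Lower(std::string s) {
        std::transform(s.begin(), s.end(), s.begin(), [](unsigned char c) {
          return static_cast<char>(std::tolower(c));
        });
        return s;
      }
      std::map<std::string, std::string> mCasing;  // lowercased -> original casing
      std::map<std::string, std::string> mValues;  // lowercased -> value
    };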
|
|
|
|
|
|
| |
A case of "one queue too many" here. Instead of worker runnables being sent to the main thread,
where they are supposed to run, they are put into a per-worker task queue. This is devastating
for performance when many workers are running.
|
| |
|
| |
|
|
|
|
| |
Includes a standalone reftest.
|
|
|
|
| |
Follow-up to 9830cd079d8306abc223461190553af64b6fd0ca
|
| |
|