Tuesday, 18 October

I decided to organise the Libav sprint again, this time in a small village near Pelhřimov. The participants:

  • Luca Barbato - came with a lot of Venchi chocolate
  • Anton Khirnov
  • Kostya Shishkov - came with a lot of Läderach chocolate
  • Mark Thompson
  • Alexandra Hájková (me) 
All the chocolate was sooo tasty, we ate all of it of course.

The sprint work topics:
  • Luca - Altivec
  • Anton and Mark - coworking on QSV and VP9
  • Alexandra - x86 SIMD HEVC IDCT
  • Kostya - consultations for the rest of us

I rented a cosy cottage for the sprint. It was surprisingly warm for the end of September and we enjoyed the nice garden seating, which had not only a table and chairs but even a couch. The seating was under a roof, so it was possible to work outside, which was really pleasant for me. There was also a grill, so we grilled some sausages for one of our dinners. It started to rain on the last day and a fire in the fireplace made for a nice atmosphere.

Because the weather was really nice we decided to explore the countryside a bit; we finally found the path to the forest and spent the mid-afternoon in its fresh air.

Both Luca and I like to cook and to try new foods and dishes. We cooked a lot during the sprint: Luca prepared some delicious Italian meals, and Kostya cooked us a traditional Ukrainian dish made from millet called куліш, which was very tasty and which I want to try at home sometime. For me the most interesting part of the cooking was making another traditional Ukrainian meal, вареники, which is a kind of filled pasta. We filled one half of them with a potato and salami filling with fried onion and the other half with cottage cheese, both very good; I can't decide which one was better. The вареники were eaten with sour cream; there are also some dried cranberries in the picture.

I almost finished my x86 SIMD optimisation for HEVC IDCT there, Luca introduced AltiVec to me and I wrote a PPC-optimised 4x4 IDCT (github).

A lot of work was done during the sprint: many patches sent (ML), many patches pushed, all of this in a friendly atmosphere, in a comfortable cottage, in fresh air and with good cuisine. Personally I enjoyed the sprint very much, I'm glad I organised it and I hope the other people liked it as well.

Thank you everyone for coming!

Tuesday, 05 July

In case you don’t know telegony is a belief that all offspring of a female will look like her first partner (there’s nothing wrong with it unless you know biology). In the metaphorical sense it means that the original circumstances of the company or project founding (i.e. who, with what goal and such) will affect its development to the end. Like there was a small company producing so-called “ever sharp” pencils that’s still afloat after a century—but who cares about pencils from that Sharp Corporation nowadays? Anyway, let’s talk about various opensource multimedia projects and see how it applies to them.

FFmpeg. The project was created by Fabrice Bellard, who’s known for his brilliant software with horrible source code (he’s won the IOCCC twice after all). FFmpeg obviously follows suit and still has a lot of horrible code that people can’t clean up to this day. And yet it’s as ubiquitous as the LZEXE packer back in DOS days, or even more so.

MPlayer. The project was originally created because its author could not find a video player that would support MPEG-2 video and AVI codecs at the same time—so he hacked something together from libmpeg2 and avifile. And what do you know, after all these years it strives to play everything by means of ugly hacks. Those include but are not limited to: catching SIGSEGV during MPEG-2 video decoding and restarting the decoder, patching loaded VfW/DShow/DMO decoders based on the .dll name, and calling private internals of the bundled FFmpeg.

MPlayer forks. mplayer2 was created by Uoti Urpala, who had his disagreements with the original MPlayer team and made his own version with a streamlined build system, throwing out many hacks of the original code. mpv seems to follow suit and I guess if the next fork appears it’ll also be done by a disgruntled developer who wants to throw more hacks out and make it more average-user-friendly.

libvpx. Originally (VP3 times) it was a codec using rather well known coding methods, definitely not state of the art, with a confusing source code that was opensourced without any format description beside source code because the company didn’t care about it but wanted it to become multimedia standard anyway. The company name was On2 then, mind you! But if you look at VP8 or VP9 nothing has changed much (I still have the impression that VP9 format specification was written by engineers from that company implementing hardware decoder and benevolently edited and released by Baidu when they don’t care about VP9 any more because there’s VP X coming).

Derek B. A brilliant reverse engineer who did his first codec (VBLE) by struggling with disassembly and then asking the codec author for the sources. I’ve mentioned before that his best decoders were REd in a very original way: by declaring the intent to do that and waiting for somebody else to do it. Obviously not a project but his story fits so I thought it’s appropriate to mention him here.

VLC. As you might remember, the project was started to justify a high-speed LAN in some French École Abnormale (because it was not the only high school in France named École Normale, in case you wonder). And what do you know—the project went fine as a business project and does so to this day. Though if they ever switch from Discworld-themed naming to Ubuntu-like naming they should go with Glorified Gstreamer. Why? Because while venerable old projects like XAnim, Xine or MPlayer demonstrated high class by doing their own codec reverse engineering work, VideoLAN went a way similar to the previous story—waiting until somebody does it elsewhere and then announcing support for that format.

Moral of the story: think before you start a project, people may make fun of it afterwards.

Saturday, 02 July

Well, since I don’t do anything useful these days and just give rants on subjects nobody cares about here’s another one.

This codec (there’s also a VX container with, IIRC, PCM audio to accompany the video) can be named a Very Mobile H.264 Rip-off. Why? Because the only available binary specification I got seems to come from some game, compiled for ARMv6 I guess, and has the stride hardcoded to 256 pixels, which would be appropriate only for some hand-held consoles. As for the second part—it uses Elias Gamma codes (like H.264 does, under a different name) and suspiciously similar spatial prediction. Obviously it also differs a lot because it’s intended for low-power devices and low resolutions too.

So, it operates on 16×16 macroblocks, each of which can be coded with one of 24 possible modes; those are really just combinations of the following techniques with optional residue coding (a rough structural sketch follows the list):

  • splitting into 16×8 or 8×16 sub-blocks and processing them in the same way;
  • copying data from one of the previous three frames, with or without a motion vector adjustment (it’s full-pixel only);
  • copying data from the previous (frame?) block with some offset added to it (actually three offsets—one per component) and optionally a motion vector;
  • applying intra prediction, which also comes in two flavours—four modes applied to any block size, or nine modes applied to 4×4 luma blocks (still only four modes for chroma).
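To make the structure easier to see, here is a rough skeleton in C. To be clear, every identifier in it is invented: VXContext, the VX_* constants and all the helpers are hypothetical declarations, this is not the real decoder and it only mirrors the combinations listed above.

    /* Structural sketch only; nothing here comes from the binary itself. */
    enum VXModeKind {
        VX_SPLIT_16x8, VX_SPLIT_8x16,   /* split into two sub-blocks          */
        VX_COPY_REF,                    /* copy from one of 3 previous frames */
        VX_COPY_OFFSET,                 /* previous block + per-plane offsets */
        VX_INTRA                        /* spatial prediction                 */
    };

    typedef struct VXContext VXContext;
    typedef struct VXMV { int x, y; } VXMV;

    int  vx_read_mode(VXContext *c);            /* one of the 24 modes        */
    enum VXModeKind vx_mode_kind(int mode);
    int  vx_mode_ref(int mode), vx_mode_has_residue(int mode);
    VXMV vx_read_mv(VXContext *c, int mode);    /* full-pixel only            */
    void vx_copy(VXContext *c, int x, int y, int w, int h, int ref, VXMV mv);
    void vx_copy_offset(VXContext *c, int x, int y, int w, int h, VXMV mv);
    void vx_intra_pred(VXContext *c, int x, int y, int w, int h);
    void vx_decode_residue(VXContext *c, int x, int y, int w, int h);

    int vx_decode_block(VXContext *c, int x, int y, int w, int h)
    {
        int mode = vx_read_mode(c);

        switch (vx_mode_kind(mode)) {
        case VX_SPLIT_16x8:                    /* two 16x8 halves             */
            vx_decode_block(c, x, y,         w, h / 2);
            vx_decode_block(c, x, y + h / 2, w, h / 2);
            return 0;
        case VX_SPLIT_8x16:                    /* two 8x16 halves             */
            vx_decode_block(c, x,         y, w / 2, h);
            vx_decode_block(c, x + w / 2, y, w / 2, h);
            return 0;
        case VX_COPY_REF:                      /* inter copy, optional MV     */
            vx_copy(c, x, y, w, h, vx_mode_ref(mode), vx_read_mv(c, mode));
            break;
        case VX_COPY_OFFSET:                   /* reads the three per-component
                                                  offsets internally          */
            vx_copy_offset(c, x, y, w, h, vx_read_mv(c, mode));
            break;
        case VX_INTRA:                         /* 4 modes for any size,
                                                  9 for 4x4 luma              */
            vx_intra_pred(c, x, y, w, h);
            break;
        }
        if (vx_mode_has_residue(mode))         /* optional residue            */
            vx_decode_residue(c, x, y, w, h);
        return 0;
    }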

The residual 16×16 block is coded as a 5-bit CBP (again, an Elias Gamma code mapped to a CBP value) for four 4×4 luma blocks and two 4×4 chroma blocks. Coefficients are coded like this:

  1. A mode from a lookup table (it’s predicted from neighbouring blocks too) that defines how many coded coefficients there are and how many of them are ones;
  2. Signs for the known ones;
  3. Elias Gamma codes for the other coefficients and their signs;
  4. Zero run values for the skips between elements (from a lookup table depending on the maximum coefficient level seen so far).

In other words—nothing like H.264 CAVLC mode at all.

And if you think it was fun to RE, I can tell you it was not, and there are still challenges to overcome. First, the specification is badly written and so heavily optimised that a decompiler is almost worthless there: for example, refilling bits is done by jumping into the end of the function that reads an unsigned Elias Gamma code; functions expect certain registers to be used for the state (block functions expect R11 and R12 to contain the motion vector, bitstream reading functions operate on the context stored in R1-R3 and return the result in R6, etc. etc.); Hex-Rays also can’t decompile anything with switch statements, and the block decoding functions are full of them; it often decompiles a function only up to the first function call and ignores the rest of the function code (that happened to me on x86 too once, where it decided to decompile only the head of the main Smacker video decompression function without the block decoding loop; that’s why I trust decompilers even less than compilers). Second, the specification seems to miss data for the lookup tables used in coefficient decoding.

So if you want to have it fully REd, find a better specification and/or a more patient and persistent man to deal with it. As for me—it’s dead, Jim®.

Monday, 27 June

Sometimes I hear that voting in FFmpeg is worse than in North Korea. Obviously people don’t know much about voting in North Korea or they wouldn’t make such statements. Here I shall try to give a short overview of voting in two similar yet different Asian countries and compare it to FFmpeg. My knowledge about North Korea comes from posts by Russian researchers; at least one of them lives in Korea and their information looks legit to me (because it lacks a political agenda and even mentions small details that correlate well with other information from people visiting the DPRK). For the other country it’s various sources not from the state media (because such information is often omitted there or you simply hear outright lies). For FFmpeg I obviously have my own experience and observations from their mailing list.

North Korea

Maybe the main reason why it makes people think of FFmpeg is the fact that they claim sovereignty over the whole Korean peninsula and even appoint special people to do merges (I mean, to govern) in the regions still under occupation, and even promote them from time to time. South Korea actually does the same except for the useless staff, but they recognize people with North Korean passports as their own citizens.

Anyway, voting. The system is rather simple: you have one candidate per region and all people vote for him. The ruling member of the Kim family should get 100% support in his region (around Paektu Mountain, if you forgot)—and 100% means that all voters should come and vote (I guess it’s obvious how). In other regions some voters may skip it in case they are very ill. I don’t remember whether they can vote against, but that’s rather unthinkable too. There’s a story about North Koreans fleeing to China and seeing a demonstration in South Korea on TV—their reaction was “Why do they allow it?! How can the President run such a country?!”. So people there are quite well conditioned and voting goes smoothly.


Russia

This is a country with a variety of approaches to voting—some elections nobody cares about and they can even be fair (until an inappropriate candidate is elected and then they have to correct it), some elections try to keep up an image of honesty in order to show outsiders that the system works fine and the elected candidates are legitimate (in that case they simply try not to rig it too blatantly), and some elections are rigged in the most blatant way and beyond; in the ideal case there are no more elections at all (under some stupid pretext they’ve got rid of governor elections, and now in some towns and cities mayors are not elected either because that saves money). And referendums, while still theoretically possible, can’t be held on any question related to the government, the status of some region, or questions that are decided by the government.

So depending on luck, in some elections a vote can matter, in other cases it doesn’t matter, and in some cases it matters so much that the votes are cast and counted without the voter’s participation (some regions constantly report that over 100% of voters were present; there’s a Russian meme “146%” that appeared because when they reported the results of the parliament elections in, IIRC, Rostov region, the percentages for the different parties added up to that number, and there were some other regions with a total sum over 100%).

There are many tricks to get the desired results—inventing new requirements to filter out unwanted candidates (their neighbour Belarus has a joke: “In order to be eligible as a presidential candidate the person must have at least five years of presidential experience”), the same voters voting again and again (there’s a special document allowing a voter to vote in another region—and groups of people can have a dozen such permits per member and thus vote repeatedly in several places), stuffing a sheaf of “properly” filled ballots into the ballot box, or simply counting the ballots as you see fit no matter what the actual votes were.

It’s a similar story with petitions—they are often masked by similar petitions, or later an expert group finds the petition to be infeasible or contradicting the law.


FFmpeg

In the old days voting was usually called mostly on naming issues, and it worked. But then disagreement with the practices of the FFmpeg leader (like committing code without any review and despite objections) escalated to the point of The Split, but before that there were ugly votings that included old MPlayer members nobody cared about in FFmpeg (because the leader said everybody with commit rights on svn.mplayerhq.hu should have a vote). So after The Split and another ugly voting there were two projects. I don’t remember any voting in Libav, but in FFmpeg this tradition still holds. Recently, for instance, a voting committee was formed and there were at least two serious votes (not just a new season logo in trac)—for a code of conduct (because every project should have one but not necessarily follow it) and for banning Carl Eugen Hoyos for some time. The first one obviously passed, the second one rather expectedly failed. But then A Person Known For Resigning As Leader took an action that should be allowed only to leaders and banned some people from the mailing list for 24 hours.

Well, I think it’s clear now that FFmpeg voting is not on North Korea level because there’s animosity there that can be expected only from current MPlayer team but not FFmpeg. But is FFmpeg on par with Russia? Maybe not yet but it tries hard IMO.

Saturday, 25 June

Yes, it’s about FFmpeg and Libav again. And yes, I consider them both to be failed projects (not that their basic goal has failed or that they provide even less multimedia support than GStreamer with no external libraries used); I mean the state of the project as a living and developing entity.

I mostly emulate Derek nowadays—i.e. I’ve unsubscribed from the FFmpeg and Libav mailing lists, I do nothing productive, and I wait for somebody else to reverse engineer the codecs I somewhat care about (that would be ClearVideo, thank you very much). Yet I still peruse development-related resources for both projects (mostly to find laughs) and sometimes I see gems like this (it was pointed at in the comments as well since it answers some questions I’ve asked before).

First, I’d like to outline how large projects are organised and what to expect in general. So, if you have a large and used project you’ll have at least these components:

  • codebase (normal projects have some code to run after all);
  • developers (to add features, fix bugs and such);
  • users (to annoy developers and once in a while to provide sensible bugreport or feature request);
  • infrastructure (hosting for code, means to communicate for developers, maybe even support for users).

Developers can be also divided into three main categories:

  1. core developers—the ones who do main work on the codebase and do it in regular manner (they might intersect with the next category too);
  2. corporate developers—the ones who do work mostly on behalf of their companies (e.g. add a feature they need internally so they don’t have to maintain it themselves);
  3. contributors—developers who add some feature or provide some bugfix because they needed it themselves, they do it irregularly or even just once (again, they might intersect with the previous category).

This division is by no means perfect but it shows the main forces behind development: those who treat it as a hobby, those who do it for their benefit (i.e. making money with/from it) and those who use it and just want it to be a bit more suited to their personal needs.

So, with that all in mind let’s look at the projects:


FFmpeg

Codebase. It’s a complete mess. And its git history is even worse. The running joke is that nobody cares what that piece of code does—it’s an FFeature so it must be kept at whatever cost (that’s how you get duplicated decoders, demuxers and encoders; an outstanding example would be the libutvideo wrappers—refer to the ffmpeg-devel mailing list for the details).

Developers. Because of the merging policy (which is likely to be codified soon—see this document again) many developers of FFmpeg code are not FFmpeg developers. And yet they are dictating the API to be used in FFmpeg. The first example also involves me—I proposed side data for packets in Libav; FFmpeg hesitated for a bit yet included it, with such a flattering message. Most of the other examples involve Anton’s work, from introducing refcounted buffers to splitting codec parameters into a separate structure—in any case FFmpeg simply takes it and converts their code to comply with the new practice (even if it has to include some horrible hacks). If that doesn’t cry out loud “a failed project” I don’t know what does.

Also (even if I’m stepping onto a minefield here) some FFmpeg developers are completely unfit for collective work because of their personal qualities. People may make jokes about providing the full console output of the ffmpeg command, but it’s not Carl who’s the main problem in FFmpeg (yes, people who didn’t work on MPlayer might think otherwise; I still believe he’d be a decent leader for FFmpeg—mostly because he doesn’t focus just on the technical side and he’s unlikely to be treated as a technical god who can’t make any mistake or write less than perfect code). Here it’s more about Michael and Clément—the former never really understood what being a leader really means or what resigning from being a leader means (anyone disagreeing, please ban yourself from a mailing list of your choice for 24 hours), the latter does not understand people at all (neither does Michael). I’m not going to paste the link to the same document for the third time, I’ll simply quote the relevant part:

Any Libav developer is of course welcome anytime to contribute directly to the
FFmpeg tree. Of course, we fully understand and are forced to accept that very
few Libav developers are interested in doing so, but we still want to recognize
their work.

Here’s an excerpt from Michael’s mail:

> Don’t you think you should remove Diego, Måns, Kostya, … as well?

They didnt ask me to remove them, they didnt remove themselfs even
though they could, they didnt post a patch to remove themselfs.
No contributor said that he contacted them and they no longer maintain
the code they are listed for. (or i missed that)

Well, if it’s hard to realize that Libav developers don’t want to contribute to FFmpeg and don’t want to have anything to do with it even though it’s been over five years, then you really have a problem. And I’ve expressed my thoughts on reuniting both projects already.

Users. You know, there’s a difference between catering to your users and selling out completely (to put it mildly). When you see some changes being done in the interests of some third party, often without that being mentioned, it looks suspicious. I’m not against making money off your work, but when the fact isn’t even mentioned it looks strange; when you have a decoder with a copyright assigned to some company it’s fine, but when you have fixes for files nobody has seen, or FFv1 features added because it was all paid for by somebody (see slide 12 here), it looks not completely honest even if there’s nothing wrong with it.

Infrastructure. From what I understood, FFmpeg services are now hosted on various boxes with no plan or idea behind it (i.e. if somebody could provide a box for something, they took it) and there’s no system administrator for these boxes. Again, as I understand it, they were kicked out of Hungary for some reason, and even though they got a free server and hosting in Bulgaria they cannot use that box properly because there’s nobody to set it up properly and maintain it afterwards. Sounds like a fail to me.


Libav

This project has failed for different reasons, but failed nevertheless.

Codebase. While it’s mostly fine, sadly new features hardly come in. Just two examples. There have been talks about replacing libswscale for ages; two years ago they started to design a replacement (and it went nowhere), then I offered my design with PoC (yes, piece of that) code to test it (that’s how NAScale was born), people worked a bit on integrating it into Libav and that’s all—nothing has happened yet. The second example is the bitstream reader replacement—since its submission in April nothing has come out of it, as all traction was lost in bikeshedding. Is that failure or what?

Developers. Here we have two problems—some FFmpeg folks and some core developers. I’ve written about the former before, so let’s talk about the latter. Surprisingly or not, there are counterparts to the Austrian FFmpeg developers in Libav. Where in FFmpeg you have Carl Eugen, in Libav there’s Diego, and I guess many have suffered from his perfectionism (in the form of proper formatting). And instead of Michael there’s Anton. While he is not that leadery in the general sense, he’s the one introducing big changes to the API that are hardly discussed beforehand. And an even worse thing—he tries to make all nontrivial code go through him; QSV support is a good example: Maxym Dmytrychenko had submitted initial support but it was not deemed good enough, so Luca Barbato had to rework it into proper form. And what do you know? It turned out to be not good enough for Anton, so he worked on it himself, with the result not being much different from Luca’s. And since nothing is being done about that, I consider it to be a failure.

Users. Sadly, there seem to be not so many of them, which is a fail. On the other hoof, they don’t need to deal with distros and Baidu, and that’s a blessing by itself. Though there is still an issue with FFmpeg users who bother (ex-)developers about features present in FFmpeg but not in Libav (or present in a different form), like Blackmagic card support or the prores_ks encoder (hint: there’s no encoder with such a name in Libav and it’s my personal pleasure to ignore mails about it).

Infrastructure. From what I heard thanks to Attila and Janne everything is working fine.

Well, maybe I should continue with Actimagine VX codec at last and forget about multimedia outside work matters afterwards (insert the obvious joke about this not hurting NihAV development at all).

Tuesday, 21 June

So, unlike those breakthrough codecs everybody talks about (I mean RMHD and ORBX.js), V-Nova Perseus was delivered (but what do you expect from a codec announced on the first of April?) and is available in some Android app. So I’ve looked at it.

The implementation seems bafflingly simple: there’s a base layer, it gets upscaled 2x, and an enhancement is applied to the upscaled image. And those enhancements are essentially quantised differences after a 2×2 Haar transform, plus runs, all coded with context-dependent Huffman codes. If that reminds you of RealVideo—don’t worry, they code the codebook descriptions too, so it’s different.
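To make the idea concrete, here is a minimal decoder-side sketch of the enhancement step as I understand it. The nearest-neighbour upscaling, the coefficient order, the quantiser handling and the clipping are my assumptions; only the “upscaled base plus dequantised 2×2 Haar differences” structure comes from the description above.

    #include <stdint.h>

    static int clip8(int v) { return v < 0 ? 0 : v > 255 ? 255 : v; }

    /* One base-layer pixel becomes a 2x2 block of the output: upscale it
     * (nearest neighbour here) and add the inverse 2x2 Haar of the four
     * dequantised difference coefficients. */
    void add_enhancement_2x2(uint8_t *dst, int stride, int base,
                             const int q_coeff[4], int quant)
    {
        int s = q_coeff[0] * quant;    /* sum of the 2x2 difference block */
        int h = q_coeff[1] * quant;    /* horizontal detail               */
        int v = q_coeff[2] * quant;    /* vertical detail                 */
        int d = q_coeff[3] * quant;    /* diagonal detail                 */

        dst[0]          = clip8(base + (s + h + v + d) / 4);
        dst[1]          = clip8(base + (s - h + v - d) / 4);
        dst[stride]     = clip8(base + (s + h - v - d) / 4);
        dst[stride + 1] = clip8(base + (s - h - v + d) / 4);
    }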

I don’t know if it really works as well as promised (I mean, marketed), but it’s an interesting approach and it introduces some variety into the world of codecs that all look alike—mostly because they all use the same principles as the standard video codec with some small enhancements or building blocks replaced with functional analogues; yes, I completely forgot about Daala, please remind me about it when they settle on a final design—it might be the codec of choice for GNU HURD NG by then too.

Friday, 03 June

Continuing the theme set by the previous post, let’s talk more about confusing names introduced by H.264. I mean CAVLC and CABAC.

CAVLC stands for Context-based Adaptive Variable Length Coding. While technically true, because it employs variable-length codes and the code set is selected based on context, it’s nothing special (and I’ve not spotted anything there that would make it “adaptive”). Again, it’s a trivial thing exercised less before because they had less ROM for codebooks. The idea of “let’s select the codebook depending on top and/or left decoded values” is too trivial to get its own name IMO.

CABAC stands for Context-based Adaptive Binary Arithmetic Coding and the name is partly stupid and partly misleading. But before I explain why I want to present some history and terminology.

Arithmetic coding was developed in the late sixties to early seventies but is mostly known from the work of Rissanen and Langdon that resulted in many IBM patents. The idea is that you can assign probabilities to various symbols, send them to the coder, and the coding result is a long fraction belonging to the range obtained by multiplying the ranges in sequence. I.e. if we have probabilities for A, B and C as ranges [0; 1/3), [1/3; 2/3) and [2/3; 1), then AB is coded in the [1/9; 2/9) range and BA is coded in the [1/3; 4/9) range. It’s the ideal coding method since it codes probabilities in the minimum possible number of bits (unless you remember it’s the real world and we don’t have infinite-precision arithmetic; still, the losses are very small and there’s no better coding method).

And in 1979 G.*.*. Martin (no, not the writer known for Tuf Voyaging) introduced range coding. Which is absolutely the same thing, except that the (de)coder maintains low and range values instead of the low and high values of a conventional arithmetic coder (hence the name). Since it was kinda not covered by patents it gained more popularity over the years.
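A toy program makes the AB example above concrete; it just narrows the interval symbol by symbol in floating point. A real coder works on integers and renormalises, and the only real difference in a range coder is that it carries (low, width) instead of (low, high), which is the same information.

    #include <stdio.h>

    int main(void)
    {
        /* cumulative ranges for A, B, C: [0,1/3), [1/3,2/3), [2/3,1) */
        double cum[4] = { 0.0, 1.0 / 3, 2.0 / 3, 1.0 };
        int    msg[2] = { 0, 1 };               /* the sequence "AB"  */
        double low = 0.0, high = 1.0;

        for (int i = 0; i < 2; i++) {
            double width  = high - low;         /* a range coder keeps this width */
            double lo_new = low + width * cum[msg[i]];
            double hi_new = low + width * cum[msg[i] + 1];
            low  = lo_new;
            high = hi_new;
        }
        printf("AB is coded into [%f, %f)\n", low, high);   /* [1/9, 2/9) */
        return 0;
    }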

And because dealing with arbitrary probabilities usually involves division by an arbitrary integer (and maybe increased coder precision), the further improvements kept sacrificing efficiency for speed until it boiled down to coding just two symbols and creating more elaborate models to code input that takes more than one bit. The arithmetic mode in JPEG seems to simply feed bits from Huffman codes and such to the Q-coder (patented by IBM and thus extremely popular in the wild), which squeezes a bit more entropy out of them. Then you create an advanced version (the MQ-coder), push it into JPEG-2000, and now binary coding is popular in image and video coding.

So, CABAC is:

  1. Context-based — yes, static coding would be a tad more effective than Huffman coding applied to bits (hint: it gives no savings). The problem is that this is just the first step of the classical scheme: modelling, which provides the probability to the entropy coder;
  2. Adaptive — see above;
  3. Binary — true (just remember it codes not bits but most and least probable symbols);
  4. Arithmetic — actually it uses range coding;
  5. Coding — nothing to argue with here.

In general, the naming is a lot like USSR, which was hardly a union, probably not soviet (whatever that word means—the literal meaning is “belonging to the councils”), the republics were just provinces (or local despotisms), but it was more or less socialistic (to the point that its ideology can be called international socialism; it was founded by the SDAPR(B) too).

And I’d like to point out that CABAC should refer to the whole process of binarisation+context selection plus coding the result, not just the exact implementation used in ITU H.264 and ITU H.EVC (even if it’s called CBAC in AVS and “we have completely different coding” in VPx). And if you want an example of context-based adaptive binary non-arithmetic coding look at ELS used in G2M2 (and if you drop binary coding then you have examples in every other advanced lossless audio codec).

Tuesday, 31 May

After seeing the recent commit in Libav this rant simply wrote itself.

People, Solomon Wolf Golomb was a genius whose work influenced various areas of science (I’ve read about his work in Martin Gardner’s books plus some of his papers), but please stop attributing to him stuff he did not invent. I’m talking about universal variable-length codes for integers (or universal codes for short).

He introduced (in the late sixties) a specific kind of universal code for optimal coding of integers with certain distributions (I’ve recommended reading the paper introducing them before; it’s awesome and I wish more papers were written like that one). Those codes have a parameter k, which is the distribution parameter and is also used to split the code into two parts—a unary prefix coding N/k and log2(k) bits coding N%k (for some values it’s rounded down, for others it’s rounded up). Later Robert Rice had a similar idea and independently introduced codes that were Golomb codes with parameter 2^k (thus they’re often called Rice codes, and they’re used more because they are easier to manipulate on a computer). And that’s all—there are no other Golomb codes.
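A small toy encoder shows that split in action; the “ones followed by a terminating zero” unary convention and the bit order are arbitrary choices here, not anything from a particular codec.

    #include <stdio.h>
    #include <string.h>

    /* Append nbits bits of val (MSB first) to the string out. */
    static void append_bits(char *out, unsigned val, int nbits)
    {
        size_t len = strlen(out);
        for (int i = nbits - 1; i >= 0; i--)
            out[len++] = '0' + ((val >> i) & 1);
        out[len] = '\0';
    }

    /* Golomb code with parameter k: unary quotient n/k, then the remainder
     * n%k in truncated binary (floor(log2 k) bits for the first few values,
     * ceil(log2 k) bits for the rest).  With k = 2^m the remainder is just
     * the m low bits, i.e. a Rice code. */
    static void golomb(char *out, unsigned n, unsigned k)
    {
        unsigned q = n / k, r = n % k, b = 0, cutoff;

        out[0] = '\0';
        while ((1u << b) < k)
            b++;                              /* b = ceil(log2(k))        */
        cutoff = (1u << b) - k;

        for (unsigned i = 0; i < q; i++)      /* unary quotient           */
            append_bits(out, 1, 1);
        append_bits(out, 0, 1);
        if (r < cutoff)
            append_bits(out, r, b - 1);       /* shorter remainder        */
        else
            append_bits(out, r + cutoff, b);  /* longer remainder         */
    }

    int main(void)
    {
        char code[64];
        for (unsigned n = 0; n < 6; n++) {
            golomb(code, n, 3);
            printf("Golomb k=3, n=%u: %-6s  ", n, code);
            golomb(code, n, 4);               /* k = 4 = 2^2, a Rice code */
            printf("Rice k=4, n=%u: %s\n", n, code);
        }
        return 0;
    }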

Yet thanks to the ITU H.264 standard (aka MPEG-4 AVC) we have exp-Golomb codes and interleaved exp-Golomb codes. I don’t know who decided on the name but it’s misleading and wrong (yet because it’s in the standard people insist on using it; that also shows how much people designing codecs know about general compression methods). Maybe if some other universal codes were rediscovered they’d go under equally ridiculous names like geo(metric)-Golomb or norm(al)-Golomb or quad-Golomb or recursive Golomb codes (because people have never heard of Levenshtein coding).

Again, back in the seventies Peter Elias proposed a coding scheme: let’s call the unary code (i.e. the one that codes an integer as a series of one value terminated by the other value, like 000001 or 1111110) the alpha code, and the fixed-width bit representation of an integer the beta code; then we can arrive at the gamma code that combines both.

So the “exp-Golomb” code is really the Elias gamma code? NOPE! Like with TV, interlacing came first: the actual Elias gamma code is what is incorrectly called “interleaved exp-Golomb” (i.e. first you have a flag bit telling whether the code is over or there are more bits left, then a data bit, then a flag bit again, rinse, repeat). And the progressive version, Elias gamma prime, has a unary prefix, i.e. an alpha code for the length of the second part concatenated with the beta code for that part (I’ve rechecked the original paper, freely available at sci-hub—because the only time I paid for IEEE paper access was when my scientific supervisor sent me with money to pay for the IEEE membership renewal of our chair). Then you can construct the delta code that codes an integer value in three parts (the actual bits, their length, and the length of the length part) and jump to omega codes that mostly code lengths of the following length part (very meta!).
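For illustration, here is a toy writer for both layouts, for n >= 1: the prefix form (what H.264 calls Exp-Golomb, unary length first) and the interleaved form where a continuation flag alternates with each payload bit. The flag polarity is my assumption; both forms produce the same number of bits.

    #include <stdio.h>

    /* Prefix form ("Exp-Golomb"): m zero bits, then n in m+1 bits. */
    static void gamma_prime(char *out, unsigned n)
    {
        int m = 0;
        while (n >> (m + 1))
            m++;                               /* m = floor(log2(n))   */
        for (int i = 0; i < m; i++)
            *out++ = '0';                      /* unary length prefix  */
        for (int i = m; i >= 0; i--)
            *out++ = '0' + ((n >> i) & 1);     /* the value, MSB first */
        *out = '\0';
    }

    /* Interleaved form: flag '0' = one more payload bit follows, '1' = done. */
    static void gamma_interleaved(char *out, unsigned n)
    {
        int m = 0;
        while (n >> (m + 1))
            m++;
        for (int i = m - 1; i >= 0; i--) {
            *out++ = '0';                      /* more bits follow     */
            *out++ = '0' + ((n >> i) & 1);     /* next payload bit     */
        }
        *out++ = '1';                          /* the code ends here   */
        *out = '\0';
    }

    int main(void)
    {
        char a[72], b[72];
        for (unsigned n = 1; n <= 8; n++) {
            gamma_prime(a, n);
            gamma_interleaved(b, n);
            printf("n=%u  prefix: %-9s interleaved: %s\n", n, a, b);
        }
        return 0;
    }

(H.264 actually codes the value plus one, so that zero gets a code too.)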

So there’s another thing to dislike in the H.264 standard besides all the interlaced modes and the scalable and multi-view coding: it forced wrong terminology on the world (feel free to correct me if there were earlier uses). In the same way there is confusion between arithmetic and range coding, and various binary coders are not free from it either. But that’s a rant for another day.

Saturday, 28 May

One cannot be called a true reverse engineer unless he tried (and failed) REing Italian literature collection. I’ve finally tried it (and, obviously, failed).

What’s so special about it? Here is Mike’s description. From what I’ve seen, on the first CD videos occupy 280MB out of a total of maybe 300MB (and over 200MB of that is a single tutorial video), while the actual library data occupies about five megabytes.

The main library application is written in Visual Basic 4 (the 16-bit version) and it’s not compiled to native code but rather to P-code, and I’ve failed to find a decompiler for that exact version (32-bit? seems to be no problem; 16-bit or even 32-bit Visual Basic 3? also no problem; 16-bit VB4? keep searching). There are some utility apps of unknown purpose there written in Borland Delphi (also 16-bit, and I’m pretty sure it was simply called Borland Delphi then, no version number needed). And while those are in sane machine code (well, 16-bit x86 machine code is hardly sane, but manageable) there’s a lot of Delphi cruft compiled in, with TThis and TThat and TOtherThing and such (plus additions in Italian).

Despite the files having the extension .LZ[1-3] I doubt they employ any kind of Lempel-Ziv compression; I’d expect some different dictionary-based scheme (you have an index with all possible words, after all). And it looks like they’ve licensed some DBT thing (obviously it stands for Text DataBase in Italian) from some Italian Institute of Computational Linguistics, and this DBT is responsible for the file formats, but I’m too lazy to RE those half-megabyte .dlls (written in Delphi too) without a decompiler.

Thursday, 26 May

So yesterday I had a quick look at the DLI image format. It turns out to be somewhat related to video codecs (and JPEG of course): there’s an 8×8 fast integer DCT approximation, quantisation and bit coding of the block. And the bit coding is the most interesting part really—this format employs a binary model with old-school arithmetic coding and context selection for the model; coefficients are coded as, first, an array of coded-coefficient flags (plus a flag for the last coded coefficient); then each non-zero coefficient has an additional flag to signal whether it’s larger than one, and in that case it’s coded as a unary code (with the bits coded by the arithmetic coder, of course).
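The flag layering could look roughly like the sketch below. To be clear, decode_bit() merely stands in for the context-modelled binary arithmetic decoder, the context names are invented, and the sign handling and the exact placement of the “last coefficient” flag are guesses; only the coded/greater-than-one/unary structure follows the description above.

    typedef struct ArithDecoder ArithDecoder;
    /* Hypothetical: returns one bit decoded with the given context's model. */
    int decode_bit(ArithDecoder *ad, int context);

    enum { CTX_CODED, CTX_GT1, CTX_MORE, CTX_SIGN, CTX_LAST };

    void decode_block(ArithDecoder *ad, int coef[64])
    {
        for (int i = 0; i < 64; i++) {
            if (!decode_bit(ad, CTX_CODED))        /* coefficient coded?      */
                continue;
            int level = 1;
            if (decode_bit(ad, CTX_GT1)) {         /* larger than one?        */
                level = 2;
                while (decode_bit(ad, CTX_MORE))   /* remainder coded in unary */
                    level++;
            }
            coef[i] = decode_bit(ad, CTX_SIGN) ? -level : level;
            if (decode_bit(ad, CTX_LAST))          /* last coded coefficient  */
                break;
        }
    }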

And I still don’t like the notion of “let’s make our video codec I-frames into an image codec” (the reverse of “motion <image format>” is not much better but at least it makes sense for intermediate formats). Images and video codecs have different use cases and required features but I think I ranted about it once.

Thursday, 19 May

Since my previous post hasn’t brought me answers I sought here’s another philosophical (i.e. no answers again) post on question that bothers me.

The concept is rather simple: some old tricks and methods become more appealing over the time when other more competitive methods lose their traction. So I often wonder when those old methods, approaches and tricks will become relevant again.

For instance, quadtree coding was not popular some time ago and yet we see it again in codecs where it handles blocks of smaller sizes inside some coding unit (ITU H.EVC, VP9, AOMv1—you name the codec). There’s similar story with vector quantisation—it still lives in some GPU-assisted form and is interesting again.

Now let’s talk about classical arithmetic coding. Over time it was mostly supplanted by some variation of binary coding. But with time binary coding becomes more and more unwieldy, since you have to code bits with different contexts and you often don’t code bits per se but rather the bits of some variable-length code for an integer. So I wonder whether classical arithmetic coding may come into use again and return saner coding while still being fast enough. Of course one could point me to One Xiphophorus, the company that made the best VP3 encoder, since they’ve found this approach works fine in Celt and should work fine in Daala (unrelated to them: is FFA1 still a thing?). But really, is the CABAC/boolean coder still the coder of the future, or will we see more interesting things from the past? And yes, I’m aware of how rANS can be used for faster coding of probabilities and that ANS is used in the VP10 experiments. But what about, say, better modelling with order-10 contexts (or ones that take parameters from both neighbouring blocks and blocks higher up in the hierarchy into account)?

And another one is not related to my usual stuff but is still quite interesting. Will raytracing return again? From what I know the current way is to have lots of triangles, lots of textures, lots of crazy additional maps and lots of even crazier shaders. I believe it went this way:

  • let’s approximate everything by triangles and draw them;
  • simple colours are not good enough—let’s add textures;
  • not good enough—let’s add shading (like Gouraud or Phong);
  • not good enough—let’s introduce bump maps for better realism;
  • not good enough—let’s introduce light maps;
  • not good enough—let’s introduce computable shaders;
  • still not good enough—let’s render scene once, calculate different parameters from it, create new light/shadow/whatever maps, add them to the scene and rerender again;
  • you know what, it’s still not good enough—let’s …

(I don’t know much about computer graphics since our university course didn’t go much farther than Bresenham’s line algorithm and simple image formats)

With all this trickery you still haven’t achieved a realistic picture, especially when it comes to dynamic light, shadows and reflections. Yet during all this time there was raytracing, which is simple as hell (and equally slow): you have a scene and for each pixel you simply trace its path until you end up at some light source or simply give up. With the massive parallelism of GPUs and complex shaders it looks to me that switching to raytracing might be easier (sure, there’s the problem of legacy, making all those developers switch from Magma and Vulcan to a new approach etc etc), but I still wonder whether it makes sense from a technical point of view, or will make sense in the near future.

And as usual—I hope for the answers but I don’t expect that I receive any.

Saturday, 14 May

Disclaimer: the word “schizophrenia” is used here as it’s perceived by majority, not to denote a certain psychological condition. Feel free to be offended.

I’ve wasted about a decade working on two multimedia projects (plus a patch or two to unrelated projects) and what I’ve seen there leads me to the conclusion that they both suffer from schizophrenia, albeit in different forms.


FFmpeg

FFmpeg features two forms of schizophrenia—developers and code.

Developer schizophrenia can be seen in how some developers believe they are also Libav developers. Mostly they brag because they’ve sent a patch or two to Libav and can now use it as a free review service. While I dislike Carl Eugen, he’s at least honest and acts according to his beliefs (here, an amended Elenril’s Law fulfilled; in case you’ve forgotten, it says “Every FFmpeg-related discussion ends up mentioning Michael. Or Carl Eugen.”).

Code schizophrenia is more celebrated. The most prominent example is ProRes support—they offer two decoders and two encoders for it. There are two ASF demuxers as well. And two audio resampling libraries. And there are talks about adding second libswscale (*shudder*). The best part is that if you ask why it will probably go like this:

— Why do you have feature X in two versions?
— Because Libav has it.
— But why do you take it if you have your own version?
— To make merging Libav codebase easier.
— But why do you need to merge it at all?
— To make merging Libav codebase easier.

Please please tell me I’m wrong and provide proper reasons why FFmpeg keeps merging Libav stuff and keeps several versions of the same feature.


Libav

Here it’s somewhat more interesting—you have developers with a physical multiple personality disorder. Unlike the case of FFmpeg, here you have people who work solely on Libav but as several different people. The most prominent example is Luca (known as lu_zero and koda on IRC), who is really several instances not always agreeing with themselves (and if you subscribe to Lu_zianism then he’s also both Michael and Carl Eugen too; if you don’t believe it—you should, because it annoys him/them). And there’s also Alexandra (aka beta elenril) and Anton (aka sasshka 2.0). And that’s the majority of core Libav developers anyway.

But at least the project seems to be more happy with itself and probably has a dream similar to the Ukrainian dream (which is “bugger off you damned Russians” in case you didn’t know).

It was somewhat fun to watch the fate of the proposed bitstream reader replacement. Alexandra (she’s also Top Libav Blogger №2, by the way—simply because she blogs) proposed a new bitstream reader to replace the old horror (which is a good idea), and that new bitstream reader turned out to be faster than the old mess too, and what was the result? If I were British I’d call it sheep-worrying.

Those developers from FFmpeg who believe they should also have a say in the Libav process started to express their opinions. While there was independent benchmarking proving the new implementation is indeed faster (which is a good thing to provide), those benchmarks were also run on a decoder not present in Libav, with badly converted code reading functions, and that turned out to have some problems because of the encoder used (also not in Libav), which produced a nonconforming stream and screwed up the multi-threaded decoding benchmarks (that one can be seen as both trollish and arrogant—kinda like judging a Beatles performance from an excerpt sung by your not very talented neighbour).

But mostly it was bikeshedding and asking why it was not using the old get_bits interface. The answer to the latter is simple—because the old reader was built from horrible macros used directly in half of the places, so you’d either have to make everything follow that macro design or convert the old UPDATE_CACHE(); LAST_SKIP_BITS(); ... CLOSE_READER(); into saner get_bits(); skip_bits(); anyway. And the Libav developers decided it’s better to have a fully new interface anyway and to make it consistent with the bytestream reading while at it.

So why did people who should have nothing to do with it bikeshed that much? Probably because they know in their hearts that as soon as it hits Libav, the work on copying it into FFmpeg starts, and sooner or later it will reside in the FFmpeg codebase, probably along with the old get_bits.h, with most decoders switched to the new bitstream reader anyway. Why? See the theoretical conversation above. I’d like to know the answer to why the merges are really done, but I guess I’ll get it no sooner than this bitstream reader is accepted into Libav master (i.e. never).

Saturday, 07 May

The number of interesting codecs is dangerously low, so I’ll probably stop writing about them at all (and that raises the question of whether this blog should be kept alive at all).

So, scraping the bottom of the barrel, I come to QuickTime codecs.

There are two codecs from the standard QuickTime package that are yet to be implemented in opensource: QDesign Music and Apple Pixlet. The former is (obviously) an audio codec with simple tones+noise coding; I hope to document it soon. The latter is an intermediate codec based on wavelets, so it should not be that hard to RE. The main problem is that I don’t know where to find a decoder (and I’m too lazy to search for one actively). It’s said that the only version of QuickTime able to decode it was on Mac OS X Panther (yes, not just when it was still called Mac OS X but also when it was purely PowerPC). I estimate this codec would be rather simple—on par with SMPTE VC-5 (and probably even without codebooks, rather with generic variable-length codes like in Pear Intermediate Codec and AmateurRes). And PowerPC assembly is not that bad once you get hold of the rlwinm instruction; I’ve REd most of AIC from a PowerPC binary after all.

And there are some third-party extensions even Compn doesn’t know about, like NewTek SpeedHQ or the Digital Anarchy Microcosm codec. The former is an ordinary DCT-based intermediate codec any koda can RE, the latter is a somewhat funny lossless codec (funny because it uses a range coder just to decode bytes and then uses them in 8- or 16-bit RLE) that is better left to Derek to RE. SheerVideo was documented a long time ago, ZyGo video was just another DiVX, VP3 and Indeo 4 have other decoders, etc. etc.

Life is boring.

Update: so there is a more modern Pixlet decoder. I’ve looked at it. There’s per-plane wavelet compression, parametrised Rice codes, everything rather trivial. The only interesting things are the coding of the zeroeth subband (it’s split into the first coefficient, the top row, the left column and all the other coefficients coded with top+left prediction) and the fact that they have a subband header with the magic 0xDEADBEEF. Nice touch!
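A sketch of how such a zeroeth-subband layout might be reconstructed is below; read_delta() merely stands in for the parametrised Rice decoding, and the predictors used for the top row, the left column and the averaging of top+left are my guesses, not something verified against the decoder.

    /* Hypothetical residual reader: one Rice-coded signed delta. */
    int read_delta(void *rc);

    void decode_band0(void *rc, int *band, int w, int h, int stride)
    {
        band[0] = read_delta(rc);                      /* first coefficient     */
        for (int x = 1; x < w; x++)                    /* top row: from the left */
            band[x] = band[x - 1] + read_delta(rc);
        for (int y = 1; y < h; y++) {
            int *row = band + y * stride;
            row[0] = row[-stride] + read_delta(rc);    /* left column: from top  */
            for (int x = 1; x < w; x++)                /* the rest: top + left   */
                row[x] = (row[x - 1] + row[x - stride] + 1) / 2 + read_delta(rc);
        }
    }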

Life is still boring though.

Monday, 02 May

Some time ago Niels Möller proposed a new method of bitreading that should be faster than the current one (here). It is an interesting idea and I decided to try it. Luca Barbato considered it to be a good idea and had his company sponsor this work. The new bitstream reader (bitstream.h) is faster in many cases and is never slower than the existing one (get_bits.h).

All the new and the equivalent old bitreading functions were benchmarked using the TIME macros in a simple test program. Because the results were good, I converted all the decoders to use the new bitstream reader. The performance of the most important decoders using the new and the old bitreader was benchmarked with perf stat (on x86_64, 64-bit (Intel Core i3-2120, 3.30GHz)); the results are pretty good, and even on arm32 I could not see any speed regressions.

The old bitstream reader is quite inconsistent, with its core API made of macros and with at least 3 or 4 higher-level functions reading a not-easy-to-guess number of bits. For example:

    static inline unsigned int get_bits(GetBitContext *s, int n)
    {
        register int tmp;
        OPEN_READER(re, s);
        UPDATE_CACHE(re, s);
        tmp = SHOW_UBITS(re, s, n);
        LAST_SKIP_BITS(re, s, n);
        CLOSE_READER(re, s);
        return tmp;
    }
The new bitstream reader is written to be easier to use, more consistent and easier to follow. It is better documented, and the functions are named according to the current naming conventions and consistently with the bytestream reader naming.
Some bitstream.h functions replace several get_bits.h ones at once:
  • bitstream_read_32() reads bits from the 0-32 range and replaces
    • get_bits()
    • get_bits_long()
    • get_bitsz()
  • bitstream_peek_32() replaces
    • show_bits()
    • show_bits_long()
    • show_bits1()
  • bitstream_skip() replaces
    • skip_bits1()
    • skip_bits()
    • skip_bits_long()
The get_bits.h bitreading macros sometimes have to be used directly to achieve better decoding performance. Reading or writing code that uses these macros requires good knowledge of how this bitreader works, and they can be surprising at times since they create local variables.
Using the new bitreader does not require such deep knowledge: all the needed operations just use a smaller set of functions that happen to be faster in many usage patterns.
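As a usage illustration, here is the same imaginary parsing function written against both APIs. Only function names mentioned in this post are used; the BitstreamContext type name, the already-initialised contexts and the fields being read are my assumptions.

    /* assumes get_bits.h / bitstream.h are included and contexts are set up */

    static int parse_old(GetBitContext *gb)
    {
        int type = get_bits(gb, 3);          /* fixed-size field        */
        int flag = get_bits(gb, 1);
        if (show_bits(gb, 8) == 0xFF)        /* peek without consuming  */
            skip_bits(gb, 8);
        return (type << 1) | flag;
    }

    static int parse_new(BitstreamContext *bc)
    {
        int type = bitstream_read_32(bc, 3);
        int flag = bitstream_read_32(bc, 1);
        if (bitstream_peek_32(bc, 8) == 0xFF)
            bitstream_skip(bc, 8);
        return (type << 1) | flag;
    }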

Many thanks to Luca Barbato for his advice and consultations during the development process.

I hope this new bitreader can become a useful piece of the Libav code. Opinions and suggestions are welcome.

Thursday, 28 April

Before I move to the point I’d like to give some historical examples based on countries.


Ukraine

Well, as you remember, in 1917-1918 there were several Ukrainian republics; the best known are the Ukrainian People’s Republic, the West Ukrainian People’s Republic and the Ukrainian Soviet Socialist Republic. There were some other small states like an anarchist republic, but they are not relevant here.

So, the Ukrainian People’s Republic and the West Ukrainian People’s Republic willingly united in 1919, and that day is a national holiday now (later the country was obviously occupied by Soviet Russia, Poland, Romania and Czechoslovakia). But why did that union happen? Because people wanted it and there had been a dream of a united Ukraine for ages.


Germany

You should’ve learned about it at school (or witnessed it if you’re old enough). Why did the unification happen? Because people on both sides wanted it and the Soviet Union could not prevent it any more.

Moldova and Romania

These countries share common history, have the same language and people like the idea of single country. While unification has not happened yet it might happen even in this century.

Chinese Republics

Here the situation is funnier. The People’s Republic of China doesn’t recognize the Republic of China, yet they somehow co-exist and probably in the distant future they will be one again. Why? Because the PRC is changing and it’s not what it was during Chairman 毛 times.


Korea

Here the situation is even funnier. There are two governments who each think they are the only True Upstream (I mean, Korean) state; it’s just that half of it is still occupied. And while there are constant talks about reunification, neither state really wants it. One country has suffered under homebrew Socialism (just look up what ‘juche’ means) for too long, so it will take an enormous amount of time and money to make both parts equal (even funnier if you consider that before the 1960s North Korea was industrially developed and South Korea was an underdeveloped agrarian region). The Germanies had it easier (as a person paying Solidaritätszuschlag I know that). So will the reunion ever happen? I wouldn’t bet on it.

And now, to our favourite projects.

From time to time somebody outside the projects, or from the FFmpeg side, asks about reuniting the projects. There are talks about it at VDDs. And yet there are no results. Have you noticed that I mentioned no such talks initiated by Libav? Why? Probably because Libav does not want to merge back. And there you have it—reunification cannot happen peacefully because you don’t have a majority on both sides wanting it.

And that raises two questions: why FFmpeg wants reunification and why Libav doesn’t want it (or as a single question—what prevents it).

It seems that for some reason not clear to me FFmpeg keeps merging all the stuff from Libav (feel free to enlighten me why, otherwise FFmpeg developers themselves might forget the reason and it’ll turn into a tradition), and having both projects together would solve two problems: the need to merge and the lack of skilled developers (that’s always the issue).

What does Libav gain from the merging? Relief from constant merges? Unlikely, since merging isn’t being done there. More developers? That’s nice, but the Libav project seems to be happy as is. A return to the known brand and to the distributions? See above and here.

Let’s assume the projects decided to play nice out of nowhere and to please the people who’d want them to reunite. What would happen then? Multiple discussions about the development process (which led to the split in the first place), including but not limited to: the reviewing process (relaxed and not applicable to some people, or mandatory for any change), code standards (especially formatting), which features to have in the united tree (flat history or merges, one native decoder for a certain format or two, use a code snippet the way it was done in FFmpeg or the way it was done in Libav). And at this stage it would all start to fall apart again.

So there you have it: clash of different development ideologies and more benefits for one side than the other. Also it’s rather hard to force people to work on the project they don’t like (and now they can choose at least).

And since this discussion cannot avoid certain names, here it is: I believe that Carl Eugen Hoyos deserves to be the next FFmpeg leader. Obviously my opinion doesn’t matter there and I could not convince anybody at VDD’15, but I firmly believe it. He’s the one with passion for the project, he cares about codec support (even for fringe formats), he likes to follow the guidelines, he respects Michael and is unlikely to go and ruin what he created. And at VDD he looked kinda like the most responsible adult too, so he can be the project’s face. Again, this is merely my opinion and it won’t change anything.

Sincerely yours, NihAV project developer (it’s still vapourware, thanks for asking).

Sunday, 24 April

So I’ve looked at them.

Micronas SC4 seems to be rather unusual as it brings elements of LPC into ADPCM. So it’s not just the old conventional “get nibble, multiply by step, output prediction, update index and step values”—it keeps a history of the last 6 decoded samples and predictions and uses them to calculate a new prediction value. Details might appear in the Wiki one day.
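Since no details are given here, the following is a purely structural sketch of the idea: the coefficients, the scaling, the step handling and the update rules are all placeholders, it only shows an ADPCM step combined with a short history-based predictor.

    typedef struct {
        int hist_sample[6];          /* last 6 decoded samples   */
        int hist_pred[6];            /* last 6 prediction values */
        int step;                    /* conventional ADPCM step  */
    } SC4State;

    int sc4_decode_nibble(SC4State *s, int nibble, const int coefs[12])
    {
        int pred = 0, sample;

        for (int i = 0; i < 6; i++)  /* LPC-like prediction from the history */
            pred += coefs[i] * s->hist_sample[i] + coefs[6 + i] * s->hist_pred[i];
        pred >>= 12;                 /* fixed-point scaling: a placeholder   */

        sample = pred + nibble * s->step;     /* the usual ADPCM part        */

        for (int i = 5; i > 0; i--) {         /* shift both histories        */
            s->hist_sample[i] = s->hist_sample[i - 1];
            s->hist_pred[i]   = s->hist_pred[i - 1];
        }
        s->hist_sample[0] = sample;
        s->hist_pred[0]   = pred;

        /* step/index update would follow here as in other ADPCM codecs */
        return sample;
    }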

VoxWare MetaVoice is three families of 2-3 codecs bundled under the same brand. I’ve not looked at the technical details but they seem to have lots and lots of tables with floating point numbers (or just a few tables if you’ve looked at MetaSound first).
Here are the codecs:

  • RT24 2400bps “Real-Time” codec (ID is VOXa)
  • RT28 2844bps “Real-Time” codec (ID is VOXh)
  • RT29 2978bps “High Quality” codec (ID is VOXg)
  • VR12 1260bps Variable Rate codec (ID is VOXb)
  • VR15 1537bps Variable Rate codec (ID is VOXc)
  • SC3 3200bps “Embedded” codec (no ID)
  • SC6 6400bps “Embedded” codec (no ID)

Ask for support by grabbing j-b and demanding that it be supported. I know there are other players besides VLC, but that’s the only project advertising that it “plays it all”, even on T-shirts. It’s time to be responsible for your own words. And ask for Bink2 too while you’re at it.

Saturday, 23 April

Spoiler: they are not nice.

Speech codecs are probably the worst from my REing point of view. Why? Not because they are particularly hard to RE but rather because they are unpleasant. Here’s my list of reasons:

  • They are math-heavy. Of course you need to know mathematics to understand most codecs, but I had no DSP courses at the university, yet I can understand how video codecs work even in fine detail, and the same goes for many audio codecs. With speech codecs I have only a general idea of how they work.
  • Even worse, there are hardly any conventions on how to do things, and as a result codecs are built in a process that puts designing ARM SoCs to shame.
  • Even worse, because the codecs are math-heavy they have to be implemented with efficiency in mind, which results in horrible fixed-point math, usually in 16-bit variables.
  • And as if all of this wasn’t enough, if a codec supports several bitrates it might, in the best case, have additional postprocessing functions for the lower bitrates. In the usual case it’s a different codec.

And all of that is the source of REing problems. The bitstream format is usually easy to find and parse; the functions that do something with it are not easy to understand at all. I often end up not understanding what a function does, let alone what concept it implements. I might recognize only some of them, like the LPC filter.

So why are speech codecs so badly designed? In my opinion it comes from the design decisions. The initial idea was to use as few bits as possible by having a synthetic model and transmitting only its parameters. It worked great (at least compared to MPEG-4 with its synthetic scene and audio description, aka the key parts of the standard that people pretend do not exist at all). So: the human throat is basically a variable tube, vowels are a modulated tone, consonants are modulated noise. Transmit the filter coefficients (original LPC, LSF, parcor form or something else), a noise flag and the pitch frequency and you’re done, right? It works fine for some sounds but the quality is not that great and it fails with some sounds completely (the sounds often used by French, for example, so there’s still a need for a French Speech Codec, or j-bc for short). How to improve it? By adding impulses to “excite” the model (i.e. to tell it when to start/stop the sounds). Not good enough? Add pitch tilting! Still not good enough? Add a postprocessing filter (and it was mandatory there long before video codecs had them). And what if we want to code not just voice but a higher frequency range of audio too? Well, add…
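The core of that source-filter model fits in a few lines; here is a floating-point toy of it (real codecs use fixed point and add all the extra stages listed next): a pulse train at the pitch period, or noise, is pushed through an all-pole LPC synthesis filter.

    #include <stdlib.h>

    #define ORDER 10

    /* out[i] = excitation[i] + sum over j of lpc[j] * out[i - 1 - j];
     * mem[] holds the last ORDER output samples across calls. */
    void synth_frame(float *out, int n, const float lpc[ORDER], float *mem,
                     int voiced, int pitch_period, float gain)
    {
        for (int i = 0; i < n; i++) {
            float exc = voiced ? (i % pitch_period == 0 ? gain : 0.0f)
                               : gain * ((float)rand() / RAND_MAX - 0.5f);
            float sum = exc;
            for (int j = 0; j < ORDER; j++)      /* all-pole synthesis filter */
                sum += lpc[j] * mem[j];
            for (int j = ORDER - 1; j > 0; j--)  /* shift the filter memory   */
                mem[j] = mem[j - 1];
            mem[0] = sum;
            out[i] = sum;
        }
    }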

And thus it became a pile of hacks upon a pile of hacks with a side dish of hacks. And each stage can be done in several different ways, which only adds to the confusion. That’s not even starting to talk about smart ways to save bits by splitting a frame into several subframes and omitting some information for selected subframes (it can be interpolated from the other subframes’ information after all). Or using codebooks and vector quantisation. Or how to generate noise for silent frames. Or using better coding than just writing a fixed number of bits for every element. Or…

I’ve finished looking at Lernout&Hauspie CELP+SBC codecs (as usual I don’t understand most of the things they do there but maybe I’ll still document them) and this plus my past experience made me write this post. Next is VoxWare MetaVoice and maybe Micronas SC4. And something saner afterwards. Or maybe it will be the usual nothing.

Friday, 22 April

Today I’d like to talk about how opensource projects are supported by the “community”, using FFmpeg/Libav as the example.

Obviously I’ve chosen this example because I know some facts about the internal politics and because libavcodec is the de facto multimedia decoding standard, used on every platform and by most multimedia processing tools out there.

What good did it bring to the project(s)? Some fame but that’s probably it—the largest users don’t even bother to acknowledge in public that they use it (cough, BaidUTube, cough). There’s an enormous amount of code (that serves as a good compiler test suite too) but it’s maintained mostly by volunteers and people who have to use it at work (as they were hired because they worked on it in the first place). That’s it: the best material gain is employment because you’ve shown off your skills, or you can take occasional consulting work with varying quality of tasks and pay. Some people are paid to improve or write a new decoder. Some are hired to work on improving protocol support and hardly paid at all (true story). Some simply sigh at yet another “please implement this for my app” mail. That’s good, but where is the money to pay for the tasks the project itself needs (e.g. refactoring old horrible code, adding tests, implementing new features or fixing some old problems)?

There was an attempt to set up a foundation to gather money and use it for the project but it didn’t work well even before the split and got completely derailed after; and of course it was not a good idea to set it up in the USA since the IRS refused to recognize it as a non-profit organisation, as other open-source projects have experienced (including X.org). The best part is how much money it could raise—I don’t remember the actual sum but it was relatively low, like less than $20,000 (please correct me if I’m wrong), and it came mostly from caught (L)GPL violators IIRC.

Let’s take a successful opensource project that uses libavcodec, that would be VLC. For example, the last VideoLAN Developer Days definitely cost tens and tens of thousands of ${proper_currency}—accommodating about a hundred people, reimbursing (at least some of) their travel costs, food etc etc; the event was sponsored by the largest Internet advertising company, the largest French advertising company and the largest French VideoLAN advertising company. I remember talks that they could even employ one developer full-time. And would VLC be any useful without libavcodec? And while they probably are the biggest opensource supporter of FFmpeg (they host its Git repository after all) and their developers write some code from time to time, they hardly do anything else—there is a bounty program but it’s a complete joke since it lists mostly already-done tasks nobody will claim a reward for (and despite me pointing them to that fact nothing has been done). And obviously it doesn’t have tasks that would be beneficial for FFmpeg/Libav but not for VLC directly.

Let’s take a look at some commercial users. All those video hosting sites are available mostly because there’s enough bandwidth to stream video and because there is free code to support most of the formats uploaded by users, so they don’t have to bother about details. And that free code is… (no points for guessing). And the biggest video hosting site was mentioned above. They don’t provide any support nor even acknowledge that they use certain opensource projects. Of course one can point to the Baidu Summer of Code program but it’s for involving students in opensource (and probably finding fresh meat for themselves) and it doesn’t work that well for projects—you mostly get students willing to earn money and/or credit for their résumé. They tend to do the task and disappear completely. I’m sure that projects would prefer to have the financial freedom instead, to sponsor specific tasks that could be undertaken by anyone (not just students) and last whatever time is needed (not just a summer). If a project gets some support from large companies it’s usually because a developer from that project working at that company convinced the management to do so.

Even worse is the situation with distributions, because they tend to demand free technical support from you: fixing your own bugs, fixing other projects that use your code and such. What do you get in return? Nothing. The recent Debian and XScreenSaver mess-up is not some special example, it’s all too typical.

It’s funny how the best sponsor for Libav might be Lu_minem.it—a small Italian company whose founder is Libav developer and thus knows what the project needs.

How do I fit into all of this? I was an FFmpeg and then a Libav developer; later multimedia-related work found me because of my work on decoders (I’d been trying to find a good job myself but failed and had to accept the best offer I got). I was around the core developers of the projects and thus could learn the facts written above. What have I got besides development experience, acquaintances and a pessimistic outlook on life? BSoC 2006-2009 participation (where I really managed to finish only one project in time, despite formally passing all of them) plus a smaller sum of money for rewriting a component of swscale to make it LGPL. And some free dinners from the VideoLAN foundation. So I’m fine, but seeing that the project cannot afford to pay a developer for some internal project (like writing a new RealMedia demuxer) is still very sad.

Sunday, 17 April

Warning: if you do not recognize names mentioned here you might be too young.

I’ve been using computers for about nineteen years and during that time I tried various players for various formats. My curiosity about internal format design made me search for information about compression methods, source code for decoders and such. So it led me to the current state (doing nothing). Yet among the many multimedia players I know there are some naming issues and that’s what I want to talk about.

My first versatile player was PLAYSND.COM by Yuri Tumarin. This 13kB DOS program could play a lot of various sound formats like WAV, VOC, MIDI variants and Adlib tracker music (RAD, HSC and lots of other variants). The best part is that it played some compressed WAV files too (various ADPCM variants and more). Excellent tool but the name is too bland and hard to search for.

Speaking of Adlib tracker formats, there’s an opensource player with an Adlib emulator supporting lots and lots of them. The problem with it (besides being outdated now)? It’s called adplug. A good name to be blocked by a generic rule!

Let’s move to video players.

My first Linux player for VideoCDs was MpegTV. It was a commercial program but again, this was in a country where nobody bothered about piracy, and it was the only player on Linux I knew that could decode VideoCDs without stuttering. The player did its task fine but its name is rather cringeworthy.

Then I found out about DVD-oriented players like Ogle and Xine. Good names. And I still use Xine sometimes when I need to play DVDs.

And the golden standard of multimedia on Unix systems—XAnim. The only bad thing is that the last time I checked it didn’t work correctly in 32-bit X11 mode. But it did its job well and I’m still grateful it exists (and its binary plugins also served as a good binary specification for missing codecs).

And there’s MPlayer. It was fast, it had many useful features and supported codecs (I still use it as a testbed for running some 32-bit VfW/DMO codecs when I cannot write a decoder without debugging them), but its codebase was horrible (in some cases legendarily horrible) and the name is both bland and reminiscent of mplayer32.exe (which crashed and hung a lot too).

And one of its forks is named after one of the horrible chunks in libavcodec and its author operates under pseudonym. So was it really worth it to name the player MPV?

And I conclude this review with a well-known multimedia player that I won’t use. When I think about VLC the first meaning coming to my mind is variable length codes. And when the only good thing about your player’s name is the number of puns you can make, I’d rather use something with a more decent name, thank you very much.

I’ve finally documented what I know about VP4 in the wiki and I should unload it from my memory. Implementing decoders and such is left as an exercise for TrueMotion-loving reader.

Probably I’ll look at ClearVideo (for the N-th time) or some speech codec suite. The funny thing is that even if they market it as a single speech codec you have a good chance of finding several codecs for different bitrates (like for Lernout & Hauspie where you have CELP for 4.8 kbps and SBC with different parameters for 8, 12 and 16 kbps), and don’t get me started on VoxWare MetaSpeech (don’t confuse it with MetaSound—that one is not a speech codec—or with MetaAudio—that one doesn’t exist); that’s a rant for another day.

Saturday, 16 April

TrueMotion 1 was licensed and has several variants outside the usual TM1. There’s allegedly Horizons PowerEZ but only j-b would know anything about it—because it’s vintage and used to code content he’s interested in, of course. The other version was used for the intro and victory cutscenes in the 3DO version of Star Control II: Ur-Quan Masters; the source code is available so any Mike Melanson out there can have a look at it. To me it looked like the same coding algorithm but with custom delta tables and codebooks provided. Oh, and the data is split between several files (global header, codebook, frame data and offsets to individual frames).

TrueMotion 2 Realtime seems to really be TrueMotion 1.2 Realtime Edition. It has a header format quite similar to TrueMotion 1’s (same obfuscation even) but with some values that would make a TM1 decoder bail out with an error, and it was released before the actual TrueMotion 2.

TrueMotion 2X seems to return to the coding method from TM1 as well, since there’s a suspicious similarity between its inverse Huffman coding method (they call it “string encoder”, which sounds somewhat even more confusing) and the codebook used in TM1, except that in TM2X they use 0x80 as the end-of-data flag instead of 0x01.

P.S. I should really move to VP4 and then away from this codec family altogether.

Sunday, 10 April

So I’ve spent an hour or so looking at IMM4.

What do you know, it’s a very simple IDCT codec with interframes. Intraframes have only DCT with the usual run-level VLC coding; in interframes each macroblock has a flag telling whether it should be skipped, coded as a difference to the previous frame, or coded as an intra block. See, no motion vectors, quantisation is a single value per block (except for the DC in intra blocks), and there seems to be no zigzagging either. You cannot get much simpler than that.
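A rough sketch of that interframe logic as described (pure pseudocode based on my reading; every helper name here is hypothetical):

// Pseudocode only - not a working IMM4 decoder.
for (int mb = 0; mb < num_macroblocks; mb++) {
    if (read_skip_flag(gb)) {
        copy_mb(cur, prev, mb);              // reuse the previous frame's macroblock
    } else if (read_intra_flag(gb)) {
        decode_block(gb, block, INTRA);      // run-level VLC + per-block quantiser,
        idct_put(cur, mb, block);            // DC handled separately
    } else {
        decode_block(gb, block, INTER);      // coded difference to the previous frame
        idct_add(cur, prev, mb, block);
    }
}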

Friday, 01 April

swscale is one of the most annoying parts of Libav; a couple of years after the initial blueprint we have something almost functional you can play with.

Colorspace conversion and Scaling

Before delving into the library architecture and the outer API it is probably good to make an extra quick summary of what this library is about.

Most multimedia concepts are more or less intuitive:
  • encoding is taking some data (e.g. video frames, audio samples) and compressing it by leaving out unimportant details
  • muxing is the act of storing such compressed data and timestamps so that audio and video can play back in sync
  • demuxing is getting back the compressed data with the timing information stored in the container format
  • decoding somehow inflates the data so that video frames can be rendered on screen and the audio played on the speakers

After the decoding step it would seem that all the hard work is done, but since there isn’t a single way to store video pixels or audio samples you need to process them so they work with your output devices.

That process is usually called resampling for audio; for video we have colorspace conversion to change the pixel information and scaling to change the number of pixels in the image.

Today I’ll introduce you to the new library for colorspace conversion and scaling we are working on.


The library aims to be as simple as possible and to hide all the gory details from the user; you won’t need to figure out the heads and tails of functions with a quite large number of arguments nor special-purpose functions.

The API itself is modelled after avresample and approaches the problem of conversion and scaling in a way quite different from swscale, following the same design of NAScale.

Everything is a Kernel

One of the key concepts of AVScale is that the conversion chain is assembled out of different components, separating the concerns.

Those components are called kernels.

The kernels can be conceptually divided into two kinds:
  • Conversion kernels, taking an input in a certain format and providing an output in another (e.g. rgb2yuv) without changing any other property.
  • Process kernels, modifying the data while keeping the format itself unchanged (e.g. scale).

This pipeline approach provides great flexibility and helps code reuse.

The most common use-cases (such as scaling without conversion or conversion without scaling) can be faster than solutions trying to merge scaling and conversion into a single step.


AVScale works with two kinds of structures:
  • AVPixelFormaton: a full description of the pixel format
  • AVFrame: the frame data, its dimensions and a reference to its format details (aka AVPixelFormaton)

The library will have an AVOption-based system to tune specific options (e.g. selecting the scaling algorithm).
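Presumably that would be the usual AVOption calls; for example (the option name here is purely illustrative):

// Hypothetical option name, just to show the intended usage
ret = av_opt_set(ctx, "scaler", "bilinear", 0);
if (ret < 0)
    return ret;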

For now only avscale_config and avscale_convert_frame are implemented.

So if the input and output are pre-determined the context can be configured like this:

AVScaleContext *ctx = avscale_alloc_context();

if (!ctx)
    return AVERROR(ENOMEM);

ret = avscale_config(ctx, out, in);
if (ret < 0)
    return ret;

But you can skip it and scale and/or convert from an input to an output like this:

AVScaleContext *ctx = avscale_alloc_context();

if (!ctx)
    return AVERROR(ENOMEM);

ret = avscale_convert_frame(ctx, out, in);
if (ret < 0)
    return ret;


The context gets lazily configured on the first call.

Notice that avscale_free() takes a pointer to a pointer, to make sure the context pointer does not stay dangling.
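So the teardown is just (a sketch, using the draft API above):

avscale_free(&ctx);
// ctx is NULL here, no dangling pointer left behind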

As said the API is really simple and essential.

Help welcome!

Kostya kindly provided an initial proof of concept and Vittorio, Anton and I prepared this preview in our spare time. There is plenty left to do; if you like the idea (since many kept telling us they would love a swscale replacement) we even have a fundraiser.

Tuesday, 29 March

Another week, another API landed in the tree, and since I spent some time drafting it, I guess I should describe how to use what is implemented now. This is part I.

What is here now

Between theory and practice there is a bit of discussion and obviously the (lack of) time to implement, so here is what is different from what I drafted originally:

  • Function Names: push got renamed to send and pull got renamed to receive.
  • No separate functions to probe the processing state; need_data and have_data are not here.
  • No codecs ported to use the new API yet, so no actual asynchronicity for now.
  • Subtitles aren’t supported yet.


There are just 4 new functions replacing both audio-specific and video-specific ones:

// Decode
int avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt);
int avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame);

// Encode
int avcodec_send_frame(AVCodecContext *avctx, const AVFrame *frame);
int avcodec_receive_packet(AVCodecContext *avctx, AVPacket *avpkt);

The workflow is sort of simple:
– You setup the decoder or the encoder as usual
– You feed data using the avcodec_send_* functions until you get an AVERROR(EAGAIN), which signals that the internal input buffer is full.
– You get the data back using the matching avcodec_receive_* function until you get an AVERROR(EAGAIN), signalling that the internal output buffer is empty.
– Once you are done feeding data you have to pass a NULL to signal the end of stream.
– You can keep calling the avcodec_receive_* function until you get AVERROR_EOF.
– You free the contexts as usual.

Decoding examples


The setup uses the usual avcodec_open2.


    c = avcodec_alloc_context3(codec);
    if (!c)
        return AVERROR(ENOMEM);

    ret = avcodec_open2(c, codec, &opts);
    if (ret < 0)
        return ret;

Simple decoding loop

People using the old API usually have some kind of simple loop like

while (get_packet(pkt)) {
    ret = avcodec_decode_video2(c, picture, &got_picture, pkt);
    if (ret < 0)
        break;          // error handling elided
    if (got_picture) {
        // use the decoded picture
    }
}

The old functions can be replaced by calling something like the following.

// The flush packet is a non-NULL packet with size 0 and data NULL
int decode(AVCodecContext *avctx, AVFrame *frame, int *got_frame, AVPacket *pkt)
{
    int ret;

    *got_frame = 0;

    if (pkt) {
        ret = avcodec_send_packet(avctx, pkt);
        // In particular, we don't expect AVERROR(EAGAIN), because we read all
        // decoded frames with avcodec_receive_frame() until done.
        if (ret < 0)
            return ret == AVERROR_EOF ? 0 : ret;
    }

    ret = avcodec_receive_frame(avctx, frame);
    if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
        return ret;
    if (ret >= 0)
        *got_frame = 1;

    return 0;
}
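A minimal usage sketch of the wrapper above (get_packet() and render() are hypothetical; the flush packet follows the comment at the top of the wrapper):

AVPacket flush_pkt;
av_init_packet(&flush_pkt);
flush_pkt.data = NULL;
flush_pkt.size = 0;

while (get_packet(pkt)) {
    ret = decode(c, frame, &got_frame, pkt);
    if (ret < 0)
        break;
    if (got_frame)
        render(frame);
}

// drain the remaining frames with the flush packet
do {
    ret = decode(c, frame, &got_frame, &flush_pkt);
    if (got_frame)
        render(frame);
} while (got_frame && ret >= 0);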

Callback approach

Since the new API will output multiple frames in certain situations, it would be better to process them as they are produced.

// return 0 on success, negative on error
typedef int (*process_frame_cb)(void *ctx, AVFrame *frame);

int decode(AVCodecContext *avctx, AVPacket *pkt,
           process_frame_cb cb, void *priv)
{
    AVFrame *frame = av_frame_alloc();
    int ret;

    if (!frame)
        return AVERROR(ENOMEM);

    ret = avcodec_send_packet(avctx, pkt);
    // Again EAGAIN is not expected
    if (ret < 0)
        goto out;

    while (!ret) {
        ret = avcodec_receive_frame(avctx, frame);
        if (!ret)
            ret = cb(priv, frame);
    }

out:
    av_frame_free(&frame);
    if (ret == AVERROR(EAGAIN))
        return 0;
    return ret;
}

Separated threads

The new API makes it sort of easy to split the workload into two separate threads.

// Assume we have a context with a mutex, a condition variable and the AVCodecContext

// Feeding loop
    AVPacket *pkt = NULL;

    while ((ret = get_packet(ctx, pkt)) >= 0) {
        pthread_mutex_lock(&ctx->mutex);

        ret = avcodec_send_packet(avctx, pkt);
        if (!ret) {
            // Packet accepted, nothing else to do here
        } else if (ret == AVERROR(EAGAIN)) {
            // Signal the draining loop
            pthread_cond_signal(&ctx->cond);
            // Wait here until there is room again
            // (a real implementation would then retry sending this packet)
            pthread_cond_wait(&ctx->cond, &ctx->mutex);
        } else if (ret < 0) {
            pthread_mutex_unlock(&ctx->mutex);
            goto out;
        }

        pthread_mutex_unlock(&ctx->mutex);
    }

    // Signal the end of stream
    ret = avcodec_send_packet(avctx, NULL);

    return ret;

// Draining loop
    AVFrame *frame = av_frame_alloc();

    while (!done) {
        pthread_mutex_lock(&ctx->mutex);

        ret = avcodec_receive_frame(avctx, frame);
        if (!ret) {
            // Signal the feeding loop that there is room again
            pthread_cond_signal(&ctx->cond);
        } else if (ret == AVERROR(EAGAIN)) {
            // Signal the feeding loop
            pthread_cond_signal(&ctx->cond);
            // Wait until more data has been fed
            pthread_cond_wait(&ctx->cond, &ctx->mutex);
        } else if (ret < 0) {
            pthread_mutex_unlock(&ctx->mutex);
            goto out;
        }

        pthread_mutex_unlock(&ctx->mutex);

        if (!ret) {
            // process the decoded frame
        }
    }

    return ret;
It isn’t as neat as having all this abstracted away, but is mostly workable.

Encoding Examples

Simple encoding loop

Some compatibility with the old API can be achieved using something along the lines of:

int encode(AVCodecContext *avctx, AVPacket *pkt, int *got_packet, AVFrame *frame)
{
    int ret;

    *got_packet = 0;

    ret = avcodec_send_frame(avctx, frame);
    if (ret < 0)
        return ret;

    ret = avcodec_receive_packet(avctx, pkt);
    if (!ret)
        *got_packet = 1;
    if (ret == AVERROR(EAGAIN))
        return 0;

    return ret;
}

Callback approach

Since for each input multiple outputs could be produced, it would be better to loop over the output as soon as possible.

// return 0 on success, negative on error
typedef int (*process_packet_cb)(void *ctx, AVPacket *pkt);

int encode(AVCodecContext *avctx, AVFrame *frame,
           process_packet_cb cb, void *priv)
{
    AVPacket *pkt = av_packet_alloc();
    int ret;

    if (!pkt)
        return AVERROR(ENOMEM);

    ret = avcodec_send_frame(avctx, frame);
    if (ret < 0)
        goto out;

    while (!ret) {
        ret = avcodec_receive_packet(avctx, pkt);
        if (!ret)
            ret = cb(priv, pkt);
    }

out:
    av_packet_free(&pkt);
    if (ret == AVERROR(EAGAIN))
        return 0;
    return ret;
}

The I/O should happen in a different thread when possible so the callback should just enqueue the packets.
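For instance, such a callback could be as small as this (PacketQueue and packet_queue_put are hypothetical, standing in for whatever queue the application uses):

// The callback only hands the packet over to a queue; a separate
// writer thread performs the actual I/O.
static int enqueue_packet_cb(void *priv, AVPacket *pkt)
{
    PacketQueue *q = priv;
    return packet_queue_put(q, pkt);
}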

Coming Next

This post is long enough so the next one might involve converting a codec to the new API.

Monday, 21 March

Last weekend, after a few months of work, the new bitstream filter API eventually landed.

Bitstream filters

In Libav it is possible to manipulate raw and encoded data in many ways, the most common being:

  • Demuxing: extracting single data packets and their timing information
  • Decoding: converting the compressed data packets in raw video or audio frames
  • Encoding: converting the raw multimedia information in a compressed form
  • Muxing: storing the compressed information along with timing information and additional metadata.

Bitstream filtering is somewhat less considered, even though bitstream filters are widely used under the hood to demux and mux many common formats.

It could be considered an optional final demuxing or initial muxing step, since it works on encoded data and its main purpose is to reformat the data so it can be accepted by decoders consuming only a specific serialization of the many supported ones (e.g. the HEVC QSV decoder), or so it can be correctly muxed into a container format that stores only a specific kind.

In Libav this kind of reformatting normally happens automatically, with the annoying exception of MPEGTS muxing.


The new API is modeled against the pull/push paradigm I described for AVCodec before, it works on AVPackets and has the following concrete implementation:

// Query
const AVBitStreamFilter *av_bsf_next(void **opaque);
const AVBitStreamFilter *av_bsf_get_by_name(const char *name);

// Setup
int av_bsf_alloc(const AVBitStreamFilter *filter, AVBSFContext **ctx);
int av_bsf_init(AVBSFContext *ctx);

// Usage
int av_bsf_send_packet(AVBSFContext *ctx, AVPacket *pkt);
int av_bsf_receive_packet(AVBSFContext *ctx, AVPacket *pkt);

// Cleanup
void av_bsf_free(AVBSFContext **ctx);

In order to use a bsf you need to:

  • Look up its definition AVBitStreamFilter using a query function.
  • Set up a specific context AVBSFContext, by allocating, configuring and then initializing it.
  • Feed the input using av_bsf_send_packet function and get the processed output once it is ready using av_bsf_receive_packet.
  • Once you are done av_bsf_free cleans up the memory used for the context and the internal buffers.


You can enumerate the available filters

void *state = NULL;

const AVBitStreamFilter *bsf;

while ((bsf = av_bsf_next(&state))) {
    av_log(NULL, AV_LOG_INFO, "%s\n", bsf->name);
}

or directly pick the one you need by name:

const AVBitStreamFilter *bsf = av_bsf_get_by_name("hevc_mp4toannexb");


A bsf may use some codec parameters and time_base and provide updated ones.

AVBSFContext *ctx;

ret = av_bsf_alloc(bsf, &ctx);
if (ret < 0)
    return ret;

ret = avcodec_parameters_copy(ctx->par_in, in->codecpar);
if (ret < 0)
    goto fail;

ctx->time_base_in = in->time_base;

ret = av_bsf_init(ctx);
if (ret < 0)
    goto fail;

ret = avcodec_parameters_copy(out->codecpar, ctx->par_out);
if (ret < 0)
    goto fail;

out->time_base = ctx->time_base_out;


Multiple AVPackets may be consumed before an AVPacket is emitted or multiple AVPackets may be produced out of a single input one.

AVPacket *pkt;

while (got_new_packet(&pkt)) {
    ret = av_bsf_send_packet(ctx, pkt);
    if (ret < 0)
        goto fail;

    while ((ret = av_bsf_receive_packet(ctx, pkt)) == 0) {
        // use the filtered packet
    }

    if (ret == AVERROR(EAGAIN))
        continue;
    if (ret == AVERROR_EOF)
        goto end;
    if (ret < 0)
        goto fail;
}

// Flush
ret = av_bsf_send_packet(ctx, NULL);
if (ret < 0)
    goto fail;

while ((ret = av_bsf_receive_packet(ctx, pkt)) == 0) {
    // use the filtered packet
}

if (ret != AVERROR_EOF)
    goto fail;

In order to signal the end of stream a NULL pkt should be fed to send_packet.


The cleanup function matches the av_freep signature so it takes the address of the AVBSFContext pointer.


All the memory is freed and the ctx pointer is set to NULL.

Coming Soon

Hopefully next I’ll document the new HWAccel layer that already landed and some other API that I discussed with Kostya before.
Sadly my blog-time (and spare time in general) shrunk a lot in the past months so he rightfully blamed me a lot.

Saturday, 05 March

Sometimes it's very useful to print out how some parameters change during program execution.

When writing a new version of some piece of code one usually needs to compare it with the old one to be sure it behaves the same in every case. Especially the corner cases can be tricky; I spent a lot of time on them while my code worked fine in general.

For example when I was working on my ASF demuxer, I was happy there was an old demuxer I could compare the behaviour with. When debugging the ASF demuxer, I wanted to know the state of the I/O context. At that time lu_zero (who was mentoring me) created a set of macros which printed logs for every I/O function (here). For example, here is the macro for the avio_seek() function (which is equivalent to fseek()).

#define avio_seek(s, o, w) ({ \
    int64_t _ret = avio_seek(s, o, w); \
    int64_t _pos = avio_tell(s); \
    av_log(NULL, AV_LOG_VERBOSE|AV_LOG_C(154), "0x%08"PRIx64" - %s:%d seek %p %"PRId64" %d -> %"PRId64"\n", \
           _pos, __FUNCTION__, __LINE__, s, o, w, _ret); \
    _ret; \
})

When such a macro was present in my demuxer, the following information was printed for every call of avio_seek:
  • _pos = avio_tell(s): the offset in the demuxed file
  • __FUNCTION__: a preprocessor define that contains the name of the function being compiled, to know which function called avio_seek
  • __LINE__: a preprocessor define that contains the line number of the original source file being compiled, to know from which line avio_seek was called
  • s, o, w: the values of the parameters avio_seek was called with
  • _ret: the avio_seek return value
  • __FILE__: a preprocessor define that contains the name of the file being compiled (this one was not used in the example but might be useful when one needs a more complex log).
Parentheses are used around the define body because such a construct may appear as an expression in GNU C (a statement expression). There's _ret; as the last statement in this macro because its value serves as the value of the entire construct. If the last _ret; were omitted in my example, the value of this macro expression would be the return value of the logging (printf-like) call instead. The underscores in the _ret and _pos variable names make sure they do not shadow other variables with the same names.
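As a minimal, unrelated illustration of the statement-expression mechanism:

// GNU C statement expression: the value of the last statement becomes
// the value of the whole ({ ... }) construct.
#define LOGGED_DOUBLE(x) ({ \
    int _tmp = (x) * 2; \
    av_log(NULL, AV_LOG_VERBOSE, "doubled to %d\n", _tmp); \
    _tmp; \
})

int v = LOGGED_DOUBLE(21); /* v == 42, and the intermediate value gets logged */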
Working with a log created by a set of macros similar to this example might be more effective than debugging with gdb in some cases.

Many thanks to lu_zero for teaching me about it. The support from the more experienced developers is the thing I really love about Libav.

Wednesday, 09 December

I split my complex dcadec bit-exact patch (https://patches.libav.org/patch/59013/) into several parts. The first part, which changes the dcadec core to work with integer coefficients instead of converting the coefficients to floats just after reading them, was sent to the mailing list (https://patches.libav.org/patch/59141/). Such a change was expected to slow down the decoding process, therefore I made some measurements to examine how much slower decoding is after my patch.
I decoded this sample, samples.libav.org/A-codecs/DTS/dts/dtswavsample16.wav, 10 times and measured the user time between invocation and termination with the "time" command:

time ./avconv -f dts -i dtswavsample16.wav -f null -c pcm_f32le null
counted the average real time of the avconv runs and repeated everything for the master branch. The duration of dtswavsample16.wav is ~4 mins and I wanted to look at the slowdown for longer files, hence I used the relatively new loop option for avconv (http://sasshkas.blogspot.cz/2015/08/the-loop-option-for-avconv.html) to create a ~24 min long file from the initial file by looping it 6x with
./avconv -loop 6 -i dtswavsample16.wav -c copy dts_long.wav
I decoded this longer DTS file 10 times again, for both the new integer and the old float coefficient core, and computed the averages.
According to my results the integer patch causes a ~20% slowdown. The question is whether this is still acceptable. I see 2 options here:
  • To consider the slowdown acceptable and to try to make some speedups like SIMDifying VQ decoding and using inline asm for 64-bit math.
  • Or alternatively both the int and float modes could be kept for different decoding modes, but this might make the code too hairy.

Opinions and suggestions are welcome.

Tuesday, 08 December

When playing a multimedia file, one usually wants to seek to reach different parts of the file. Mostly, containers allow this feature but it might be a problem for streamed files.
Within libavformat, seeking is performed with a function (inside the demuxer) called read_seek. This function tries to find the matching timestamp for the requested position (offset) in the played file.
There are 2 ways to seek through a file. One of them applies when the file contains some kind of index, which matches positions with the appropriate timestamps. In this case index entries are created by calling av_add_index_entry. If index entries are present, av_index_search_timestamp, which is called inside read_seek, looks for the closest timestamp to the requested position. When the file does not provide such entries, one can look for the requested position with ff_seek_frame_binary. For doing so, a read_timestamp function has to be created inside the demuxer.
read_timestamp takes the required position and a stream index and then tries to find the offset of the beginning of the closest packet which is a key frame with a matching stream index. While doing this, read_timestamp reads timestamps for all the packets after the given position and creates index entries. When the key frame with a matching stream index is found, read_timestamp updates the required position and returns the timestamp matching it.
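For reference, these are the relevant libavformat pieces a demuxer deals with here (prototypes only, as I understand them):

// Fill the index while demuxing so av_index_search_timestamp() can use it later.
int av_add_index_entry(AVStream *st, int64_t pos, int64_t timestamp,
                       int size, int distance, int flags);

// Callback a demuxer provides so ff_seek_frame_binary() can binary-search the
// file: return the timestamp of the first key frame of stream_index found at
// or after *pos and update *pos to that packet's offset.
int64_t (*read_timestamp)(struct AVFormatContext *s, int stream_index,
                          int64_t *pos, int64_t pos_limit);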

I was told to test my ASF demuxer with the zzuf utility. Zzuf is a fuzzer; it changes random bits in the program's input, which simulates a damaged file or unexpected data.
For testing the ASF demuxer's behaviour I want to feed avconv some corrupted wmv files and see what happens. Because I want to fuzz in several different ways I want to vary the seed (the initial value of zzuf's random number generator). I'll do this with the command:

while true; do SEED=$RANDOM; for file in *wmv; do zzuf -M -l -r 0.00001 -q -U 60 -s $SEED ./avconv -i "$file" -f null -c copy - || echo $SEED $file >> fuzz; done; done

I got the file fuzz, which is a list of seeds paired with filenames. Now I need to use zzuf to create damaged files so I can check the problem with valgrind. I'll use the list to determine the seed which caused some crash and create my damaged file:

zzuf -M -l -r 0.00001 -q -U 60 -s myseed < somefile.wmv > out.asf

Now I'll just use valgrind to find out what happened:

valgrind ./avconv -i out.asf -f null -c copy -

I tried to test the ASF demuxer with different tricky samples and with FATE
and the demuxer behaved well, but testing with zzuf detected several new crashes. Mainly they were caused by insane size values and they were easy to fix by adding some more checks. Zzuf is a great thing for testing.

Pelhřimov is a small but very nice town in the Czech Republic, approximately 120 km from the capital Prague, and I decided to organize a small but nice Libav sprint in it.
The participants and the topics were:

  • Luca Barbato -
    • AVScale, especially documenting it
    • HW accelerated encoders
    • async API
  • Anton Khirnov - The Evil Plan II and fixing H.264 decoder for it
  • Kostya Shishkov - trolling motivating the others
  • Alexandra Hájková (me) - dcadec decoder, mainly testing the new patch
  • everyone - eating chocolate

I finished and sent my dcadec "Integer core decoder" patch (which transforms the dcadec core decoder to work with integers) during the sprint. After the discussion and some hints from the others I tested my patch better and found out some interesting things:
  • It seems XLL output really is lossless when using my patch.
  • But LFE channel was broken - this was fixed during the sprint.
  • While the lossless and force_fixed outputs look fine, my patch breaks the float (lossy) output a little bit - it sounds the same to my ears, but looking at the output in Audacity and comparing it with the "before the integer patch" output shows something is wrong there.
  • I discovered for myself an avconv option called channelsplit ( https://libav.org/documentation/avconv.html#channelsplit) that splits the file into per-channel files, which was very useful for comparing the output with a reference decoder using a different channel order (for example https://github.com/foo86/dcadec).
My post sprint dcadec plans are:
  • Fix the lossy output issue.
  • Fix detection of the extensions and add the options for disabling them.
  • Rewrite dca_decode_frame to handle all the extensions and work with the new options for them more systematically.

I decided to improve the Libav DTS decoder - dcadec. Here I want to explain what its problems are now and what I would like to do about them.
A DTS encoded audio stream consists of core audio and may contain extended audio. dcadec supports the XCH and XLL extensions but the X96, XXCH and XBR extensions are waiting to be implemented - I'd like to implement them later.
For the DTS lossless extension - XLL - the decoded output audio should be a bit-for-bit accurate reproduction of the encoded input. However there are some problems:

  • The main problem is that the core decoder converts the integer coefficients read from the bitstream to floats just after reading them (along with dequantization). All the other steps of the audio reconstruction are done with floats, so the output cannot be a bit-exact reproduction of the input and it is not lossless.
When the coefficients are read from the bitstream the core decoder does the following:
  • dequantization (with int -> float conversion)
  • inverse ADPCM (when needed)
  • VQ decoding (when needed)
  • filtering: QMF, LFE, downmixing (when needed)
  • float output
I'm now working on modifying the core to work with integer coefficients and convert them to floats before QMF filtering for lossy output, but to use a bit-exact QMF for lossless output (the intermediate LFE coefficients should always be integers and I think this is not correct in the current version). I also added an option called -force_fixed to force fixed-point reconstruction for any kind of input.
  • Another problem is XLL extension presence detection. During testing I found out that the XLL extension is sometimes not detected and only the core audio is decoded in that case. I want to fix this issue as well.

Saturday, 21 November

This is a sort of short list of checklists and a few ramblings in the wake of Fosdem’s Code of Conduct discussions and the not exactly welcoming statements about how to perceive a Code of Conduct such as this one.

Code of Conduct and OpenSource projects

A Code of Conduct is generally considered a means to get rid of problematic people (and thus avoid toxic situations). I prefer to consider it a means to welcome people and provide good guidelines to newcomers.

Communities without a code of conduct tend to reject the idea of having one, thinking that it is only needed to solve the above mentioned issue and that adding more bureaucracy would just give more leeway to Machiavellian ploys.

That is usually a problem since, no matter how good things are now, it takes just a few poisonous people to get into an unbearable situation, and in a few selected cases you just need one.

If you consider the CoC a shackle or a stick to beat “bad guys” with, so you do not need it until you see a bad guy, that is naive and utterly wrong: you will end up writing something that excludes people due to a quite understandable, but wrong, knee-jerk reaction.

A Code of Conduct should do exactly the opposite: it should embrace people and make it easier to join and fit in. It should be the social equivalent of the developer handbook or the coding style guidelines.

Just as everybody can make a little effort and make sure to send code with spaces between operators, everybody can make an effort and not use colorful language. Likewise, just as people are happier to contribute if the codebase they are hacking on is readable, they are more confident in joining the community if the environment is pleasant.

Making a useful Code of Conduct

The Code of Conduct should be a guideline for people that have no idea what the expected behavior is; it should be written thinking about how to help people get along, not about how to punish those you do not like.

  • It should be short. It is pointless to enumerate ALL the possible ways to make people uncomfortable, you are bound to miss some.
  • It should be understanding and inclusive. Always assume cultural bias and not ill will.
  • It should be enforced. It gets quite depressing when you have a 100+ line code of conduct but then nobody cares about it and nobody really enforces it. And I’m not talking about having specifically designated people to enforce it. Your WHOLE community should agree on what acceptable behavior is and act accordingly on breaches.

People joining the community should consider the Code of Conduct first as a request (and not a demand) to make an effort to get along with the others.


Since I saw quite some long and convoluted walls of text being suggested as THE CODE OF CONDUCT everybody MUST ABIDE TO, here are some suggestions on what NOT to do.

  • It should not be a political statement: this is a strong cultural bias that would make potential contributors just stay away. No matter how good and great you think your ideas are, those unrelated to a project that should gather people who enjoy writing code in their spare time should stay out of it. Open Source is already an ideology; overloading it with more is just a recipe for disaster.
  • Do not try to make a long list of definitions, you just dilute the content and give even more ammo to lawyer-type arguers.
  • Do not think much about making draconian punishments: this is a community on the internet, even nowadays nobody really knows if you are actually a dog or not, and you cannot really enforce anything if the other party really wants to be a pest.

Good examples

Some CoC I consider good are obviously the ones used in the communities I belong to, Gentoo and Libav, they are really short and to the point.


As I said before, no matter how well written a code of conduct is, the only way to really make it useful is if the community as a whole helps new (and not so new) people to get along.

The rule of thumb “if somebody feels uncomfortable in a non-technical discussion, once they say so, drop it immediately” is OK as long as:
* The person who is uncomfortable speaks up. If you are shy you might ask somebody else to speak up for you, but do not be quiet when it happens and then file a complaint much later, that is NOT OK.
* The rule is not bent to derail technical discussions. See my post about reviews to at least avoid this pitfall.
* People agree to drop at least some of their cultural biases, otherwise it would end up like walking on eggshells every moment.

Letting situations go unchecked is probably the main issue: newcomers can think it is OK to behave in a certain way if people are behaving that way and nobody stops it. Again, not just specific enforcers of some kind, everybody should behave and tell clearly to those not behaving that they are being problematic.

Gentoo is a big community, so once somebody oversteps the boundaries it gets problematic to have a swift reaction; lots of people prefer not to speak up when something happens, so people unwillingly causing the problem are not made aware of it immediately.

The people in charge of dishing out bans then have to try to figure out what exactly was wrong, and there the cultural biases everybody has might or might not trigger and make the problem harder to address.

Libav is a much smaller community and in general nobody has qualms in saying “please stop” (that is also partially due to how the community evolved).

Hopefully this post would help avoid making some mistakes and help people getting along better.

Sunday, 08 November

This mini-post was spurred by this bug.

AVFrame and AVCodecContext

In Libav there are a number of patterns shared across most of the components.
It does not matter whether it models a codec, a demuxer or a resampler: you interact with it using a Context and you get data in or out of the module using some kind of Abstraction that wraps the data and useful information such as the timestamp. Today’s post is about AVFrames and AVCodecContext.


The most used abstraction in Libav by far is the AVFrame. It wraps some kind of raw data that can be produced by decoders and fed to encoders, passed through filters, scalers and resamplers.

It is quite flexible and contains the data and all the information to understand it e.g.:

  • format: Used to describe either the pixel format for video and the sample format for audio.
  • width and height: The dimension of a video frame.
  • channel_layout, nb_samples and sample_rate for audio frames.


This context contains all the information useful to describe a codec and to configure an encoder or a decoder (the generic, common features, there are private options for specific features).

Being shared between the encoder, the decoder and (until Anton’s plan to avoid it is deployed) container streams, this context is fairly large and a good deal of its fields are a little confusing, since they seem to replicate what is present in the AVFrame or because they aren’t marked as write-only since they might be read in a few situations.

In the bug mentioned above channel_layout was the confusing one, but width and height also caused problems for people thinking the value of those fields in the AVCodecContext would represent what is in the AVFrame (then you’d wonder why you should have them in two different places…).

As a rule of thumb, everything that is set in a context is just the starting configuration and is bound to change in the future.

Video decoders can reconfigure themselves and output video frames with completely different geometries, audio decoders can report a completely different number of channels or variations in their layout and so on.

Some encoders are able to reconfigure on the fly as well, but usually with more strict constraints.

Why their information is not the same

The fields in the AVCodecContext are used internally and updated as needed by the decoder. The decoder can be multithreaded so the AVFrame you are getting from one of the avcodec_decode_something() functions is not the last frame decoded.

Do not expect any of the fields with names similar to the ones provided by AVFrame to stay immutable or to match the values provided by the AVFrame.

Common pitfalls

Allocating video surfaces

A quite common mistake is to use the AVCodecContext coded_width and coded_height to allocate the surfaces to present the decoded frames.

As said, the frame geometry can change mid-stream, so if you do that, in the best case you get some lovely green surrounding your picture and in the worst case you get a bad crash.

I suggest always checking that the AVFrame dimensions fit and being ready to reconfigure your video output when they change.
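Something along these lines (out_width, out_height, out_format and reconfigure_output() are hypothetical application-side state and helper):

// Compare the decoded frame geometry with the current output surface and
// reconfigure before rendering when it changed.
if (frame->width  != out_width  ||
    frame->height != out_height ||
    frame->format != out_format) {
    reconfigure_output(frame->width, frame->height, frame->format);
    out_width  = frame->width;
    out_height = frame->height;
    out_format = frame->format;
}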

Resampling audio

If you are using a current version of Libav you have avresample_convert_frame() doing most of the work for you; if you are not, you need to check that format, channel_layout and sample_rate do not change and reconfigure manually.

Rescaling video

Similarly you can misconfigure swscale, so you should manually check format, width and height and reconfigure as well. The AVScale draft API, on the other hand, provides an avscale_process_frame().

In closing

Be extra careful, think twice and beware of the examples you might find on the internet; they might work until they won’t.

Friday, 06 November

This was spurred by some events happening in Gentoo: with the move to git we eventually have more reviews, and obviously comments on patches can be acceptable (and accepted) depending on a number of factors.

This short post is about communicating effectively.

When reviewing patches

No point in pepper coating

Do not disparage code or, even worse, people. There is no point in being insulting, you add noise to the signal:

You are a moron! This shit has no place here, do not do something this stupid again.

This is not OK: most people will focus on the insult and the technical argument will be totally lost.

Keep in mind that you want people doing stuff for the project not run away crying.

No point in sugar coating

Do not downplay stupid mistakes that would crash your application (or wipe an operating system) because you think it would hurt the feelings of the contributor.

    rm -fR /usr /local/foo

It is as silly a mistake as you like, but the impact is HUGE.

This is a tiny mistake, you should not do that again.

No, it isn’t tiny it is quite a problem.

Mistakes happen, the review is there to avoid them hitting people, but a modicum of care is needed:
wasting other people’s time is still bad.

Point the mistake directly by quoting the line

And use at most 2-3 lines to explain why it is a problem.
If you can’t, it is better to fix that part yourself or move the discussion to a more direct medium, e.g. IRC.

Be specific

This kind of change is not portable, obscures the code and does not fix the overflow issue at hand:
The expression as a whole could still overflow.

Hopefully even the most busy person juggling over 5 different tasks will get it.

Be direct

Do not suggest the use of those non-portable functions again anyway.

No room for interpretation, do not do that.

Avoid clashes

If you and another reviewer disagree, move the discussion to another medium; there is NO point in spamming the review system with countless comments.

When receiving reviews (or waiting for them)

Everybody makes mistakes

YOU included, if the reviewer (or more than one) tells you that your changes are not right, there are good odds you are wrong.

Conversely, the reviewer can make mistakes. Usually it is better to move away from the review system and discuss over email or IRC.

Be nice

There is no point in being confrontational. If you think the reviewer is making a mistake, politely point it out.

If the reviewer is not nice, do not use the same tone to fit in. Even more if you do not like that kind of tone to begin with.

Wait before answering

Do not update your patch or write a reply as soon as you get a notification of a review, more changes might be needed and maybe other reviewers have additional opinions.

Be patient

If a patch is unanswered, ping it maybe once a week, possibly rebasing it if the world changed meanwhile.

Keep in mind that most of your interaction is with other people volunteering their free time and not getting anything out of it either; sometimes real life takes priority =)

Wednesday, 21 October

You might be subtle like this or just work on your stuff like that, but then nobody will know that you are the one that did something (and people will praise somebody else completely unrelated for your work, e.g. Anton not being praised much for the HEVC threaded decoding, the huge work on the ref-counted AVFrame and many other things).

Blogging is boring

Once you have written something in code, talking about it gets sort of boring: the code is there, it works, and maybe you spent enough time on the mailing list and IRC discussing it that once it is done you wouldn’t want to think about it for at least a week.

The people at xiph got it right and they wrote awesome articles about what they are doing.

Blogging is important

JB got it right by writing posts about what happened every week. Now journalists can pick from there what’s cool and coming from VLC, and do not have to try to extract useful information from git logs, scattered mailing lists and conversations on IRC.
I’m not sure I’ll have the time to do the same, but surely I’ll prod at least Alexandra and the others to write more.

Thursday, 15 October

In Libav we try to clean up the API and make it more regular; this is one of the possibly many articles I’ll write about APIs, this time about deprecating a relic from the past and why we are doing it.


This struct used to store image data using data pointers and linesizes. It comes from the far past and it looks like this:

typedef struct AVPicture {
    uint8_t *data[AV_NUM_DATA_POINTERS];
    int linesize[AV_NUM_DATA_POINTERS];
} AVPicture;

Once the AVFrame was introduced it was made so it would alias to it, and for some time the two structures were actually defined sharing the common initial fields through a macro.

The AVFrame then evolved to store both audio and image data, to use AVBuffer to not have to do needless copies and to provide more useful information (e.g. the actual data format), now it looks like:

typedef struct AVFrame {
    uint8_t *data[AV_NUM_DATA_POINTERS];
    int linesize[AV_NUM_DATA_POINTERS];

    uint8_t **extended_data;

    int width, height;

    int nb_samples;

    int format;

    int key_frame;

    enum AVPictureType pict_type;

    AVRational sample_aspect_ratio;

    int64_t pts;

} AVFrame;

The image-data manipulation functions moved to the av_image namespace and use directly data and linesize pointers, while the equivalent avpicture became a wrapper over them.

int avpicture_fill(AVPicture *picture, uint8_t *ptr,
                   enum AVPixelFormat pix_fmt, int width, int height)
{
    return av_image_fill_arrays(picture->data, picture->linesize,
                                ptr, pix_fmt, width, height, 1);
}

int avpicture_layout(const AVPicture* src, enum AVPixelFormat pix_fmt,
                     int width, int height,
                     unsigned char *dest, int dest_size)
{
    return av_image_copy_to_buffer(dest, dest_size,
                                   src->data, src->linesize,
                                   pix_fmt, width, height, 1);
}


It is also used in the subtitle abstraction:

typedef struct AVSubtitleRect {
    int x, y, w, h;
    int nb_colors;

    AVPicture pict;
    enum AVSubtitleType type;

    char *text;
    char *ass;
    int flags;
} AVSubtitleRect;

And to crudely pass an AVFrame from the decoder level to the muxer level for certain rawvideo muxers, by doing something such as:

    pkt.data   = (uint8_t *)frame;
    pkt.size   =  sizeof(AVPicture);

AVPicture problems

In the codebase its remaining usage is dubious at best:

AVFrame as AVPicture

In some codecs the AVFrames produced or consumed are cast as AVPicture and passed to avpicture functions instead of using the av_image functions directly.


For the subtitle codecs, accessing the Rect data requires a pointless indirection, usually something like:

    wrap3 = rect->pict.linesize[0];
    p = rect->pict.data[0];
    pal = (const uint32_t *)rect->pict.data[1];  /* Now in YCrCb! */


Copying memory from one buffer to another when it can be avoided is considered a major sin (“memcpy is murder”) since it is a costly operation in itself and it usually invalidates the cache if we are talking about large buffers.

Certain muxers for rawvideo try to spare a memcpy and thus avoid a “murder” by not copying the AVFrame data to the AVPacket.

The idea in itself is simple enough: store the AVFrame pointer as if it pointed to a flat array, consider the data size to be the AVPicture size and hope that the data pointed to by the AVFrame remains valid while the AVPacket is consumed.

Simple and faulty: with the AVFrame ref-counted API codecs may use a Pool of AVFrames and reuse them.
It can lead to surprising results because the buffer gets updated before the AVPacket is actually written.
If the frame referenced changes dimensions or gets deallocated it could even lead to crashes.

Definitely not a great idea.


Vittorio, wm4 and I worked together to fix the problems. Radically.

AVFrame as AVPicture

The av_image functions are now used when needed.
Some pointless copies got replaced by av_frame_ref, leading to less memory usage and simpler code.

No AVPictures are left in the video codecs.


The AVSubtitleRect is updated to have simple data and linesize fields and each codec is updated to keep the AVPicture and the new fields in sync during the deprecation window.

The code is already a little easier to follow now.


Just dropping the “feature” would be a problem since those muxers are widely used in FATE and the time the additional copy takes adds up to quite a lot. Your regression test must be as quick as possible.

I wrote a safer wrapper pseudo-codec that leverages the fact that both AVPacket and AVFrame use a ref-counted system:

  • The AVPacket takes the AVFrame and increases its ref-count by 1.
  • The AVFrame is then stored in the data field and wrapped in a custom AVBuffer.
  • That AVBuffer destructor callback unrefs the frame.

This way the AVFrame data won’t change until the AVPacket gets destroyed.

Goodbye AVPicture

With release 14 the AVPicture struct will be removed completely from Libav; people using it outside Libav should consider moving to the full AVFrame (and leverage the additional features it provides) or to the av_image functions directly.

Friday, 02 October

During the VDD we had lots of discussions and I enjoyed reviewing the initial NihAV implementation. Kostya already wrote some more about the decoupled API that I described at high level here.

This article is about some possible implementation details, at least another will follow.

The new API requires some additional data structures, mainly something to keep the data that is being consumed/produced, additional implementation callbacks in AVCodec and possibly a means to skip the queuing up completely.

Data Structures

AVPacketQueue and AVFrameQueue

In the previous post I considered as given some kind of Queue.

Ideally the API for it could be really simple:

typedef struct AVPacketQueue AVPacketQueue;

AVPacketQueue *av_packet_queue_alloc(int size);
int av_packet_queue_put(AVPacketQueue *q, AVPacket *pkt);
int av_packet_queue_get(AVPacketQueue *q, AVPacket *pkt);
int av_packet_queue_size(AVPacketQueue *q);
void av_packet_queue_free(AVPacketQueue **q);

typedef struct AVFrameQueue AVFrameQueue;

AVFrameQueue *av_frame_queue_alloc(int size);
int av_frame_queue_put(AVFrameQueue *q, AVFrame *frame);
int av_frame_queue_get(AVFrameQueue *q, AVFrame *frame);
int av_frame_queue_size(AVFrameQueue *q);
void av_frame_queue_free(AVFrameQueue **q);

Internally it leverages the ref-counted API (av_packet_move_ref and av_frame_move_ref) and any data structure that could fit the queue-usage. It will be used in a multi-thread scenario so a form of Lock has to be fit into it.
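A rough illustration of how such a locked FIFO could look (illustrative only, not actual Libav code):

#include <pthread.h>
#include <libavcodec/avcodec.h>

typedef struct AVPacketQueue {
    AVPacket       *pkts;   // pre-allocated ring of `size` packets
    int             size, head, count;
    pthread_mutex_t lock;
} AVPacketQueue;

int av_packet_queue_put(AVPacketQueue *q, AVPacket *pkt)
{
    int ret = 0;
    pthread_mutex_lock(&q->lock);
    if (q->count == q->size) {
        ret = AVERROR(EAGAIN);                    // queue full, try again later
    } else {
        int tail = (q->head + q->count) % q->size;
        av_packet_move_ref(&q->pkts[tail], pkt);  // take ownership without copying
        q->count++;
    }
    pthread_mutex_unlock(&q->lock);
    return ret;
}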

We have already something specific for AVPlay, using a simple Linked List and a FIFO for some other components that have a near-constant maximum number of items (e.g. avconv, NVENC, QSV).

Possibly also a Tree could be used to implement something such as av_packet_queue_insert_by_pts and have some forms of reordering happen on the fly. I’m not a fan of it, but I’m sure someone will come up with the idea..

The Queues are part of AVCodecContext.

typedef struct AVCodecContext {
    // ...

    AVPacketQueue *packet_queue;
    AVFrameQueue *frame_queue;

    // ...
} AVCodecContext;

Implementation Callbacks

In Libav the AVCodec struct describes some specific codec features (such as the supported framerates) and holds the actual codec implementation through callbacks such as init, decode/encode2, flush and close.
The new model obviously requires additional callbacks.

Once the data is in a queue it is ready to be processed, the actual decoding or encoding can happen in multiple places, for example:

  • In avcodec_*_push or avcodec_*_pull, once there is enough data. In that case the remaining functions are glorified proxies for the matching queue function.
  • somewhere else such as a separate thread that is started on avcodec_open or the first avcodec_decode_push and is eventually stopped once the context related to it is freed by avcodec_close. This is what happens under the hood when you have certain hardware acceleration.


typedef struct AVCodec {
    // ... previous fields
    int (*need_data)(AVCodecContext *avctx);
    int (*has_data)(AVCodecContext *avctx);
    // ...
} AVCodec;

Those are used by both the encoder and decoder, some implementations such as QSV have functions that can be used to probe the internal state in this regard.


typedef struct AVCodec {
    // ... previous fields
    int (*decode_push)(AVCodecContext *avctx, AVPacket *packet);
    int (*decode_pull)(AVCodecContext *avctx, AVFrame *frame);
    // ...
} AVCodec;

Those two functions can take a portion of the work the current decode function does, for example:
– the initial parsing and dispatch to a worker thread can happen in the _push.
– reordering and blocking until there is data to output can happen on _pull.

Assuming the reordering does not happen outside the pull callback in some generic code.


typedef struct AVCodec {
    // ... previous fields
    int (*encode_push)(AVCodecContext *avctx, AVFrame *frame);
    int (*encode_pull)(AVCodecContext *avctx, AVPacket *packet);
} AVCodec;

As with the decoding callbacks, the encode2 workload is split: the _push function might just keep queuing up until there are enough frames to complete the initial analysis, while, for single-threaded encoding, the rest of the work happens in the _pull.

Yielding data directly

So far the API mainly keeps some queue filled and lets some magic happen under the hood, so let's see some usage examples first:

Simple Usage

Let’s expand the last example from the previous post: register callbacks to pull/push the data and have some simple loops.


typedef struct DecodeCallback {
    int (*pull_packet)(void *priv, AVPacket *pkt);
    int (*push_frame)(void *priv, AVFrame *frame);
    void *priv_data_pull, *priv_data_push;
} DecodeCallback;

Two pointers since you pull from a demuxer+parser and you push to a splitter+muxer.
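As an illustration (not part of the proposal itself), the pull side could be backed directly by today's demuxing API, with priv_data_pull simply being the AVFormatContext:

static int demux_pull_packet(void *priv, AVPacket *pkt)
{
    AVFormatContext *dem = priv;

    /* av_read_frame() returns 0 on success, a negative error (or EOF) otherwise */
    return av_read_frame(dem, pkt);
}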

int decode_loop(AVCodecContext *avctx, DecodeCallback *cb)
{
    AVPacket *pkt  = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    int ret;

    while ((ret = avcodec_decode_need_data(avctx)) > 0) {
        ret = cb->pull_packet(cb->priv_data_pull, pkt);
        if (ret < 0)
            goto end;
        ret = avcodec_decode_push(avctx, pkt);
        if (ret < 0)
            goto end;
    }

    while ((ret = avcodec_decode_have_data(avctx)) > 0) {
        ret = avcodec_decode_pull(avctx, frame);
        if (ret < 0)
            goto end;
        ret = cb->push_frame(cb->priv_data_push, frame);
        if (ret < 0)
            goto end;
    }

end:
    av_packet_free(&pkt);
    av_frame_free(&frame);

    return ret;
}


For encoding something quite similar can be done:

typedef struct EncodeCallback {
    int (*pull_frame)(void *priv, AVFrame *frame);
    int (*push_packet)(void *priv, AVPacket *packet);
    void *priv_data_push, *priv_data_pull;
} EncodeCallback;

The loop is exactly the same, just with the data types swapped.

int encode_loop(AVCodecContext *avctx, EncodeCallback *cb)
{
    AVPacket *pkt  = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    int ret;

    while ((ret = avcodec_encode_need_data(avctx)) > 0) {
        ret = cb->pull_frame(cb->priv_data_pull, frame);
        if (ret < 0)
            goto end;
        ret = avcodec_encode_push(avctx, frame);
        if (ret < 0)
            goto end;
    }

    while ((ret = avcodec_encode_have_data(avctx)) > 0) {
        ret = avcodec_encode_pull(avctx, pkt);
        if (ret < 0)
            goto end;
        ret = cb->push_packet(cb->priv_data_push, pkt);
        if (ret < 0)
            goto end;
    }

end:
    av_packet_free(&pkt);
    av_frame_free(&frame);

    return ret;
}


Transcoding, the naive way, could be something such as

int transcode(AVFormatContext *mux,
              AVFormatContext *dem,
              AVCodecContext *enc,
              AVCodecContext *dec)
{
    /* the pull/push callbacks would be the demuxing, muxing and queue
     * helpers mentioned below */
    DecodeCallback dcb = { .priv_data_pull = dem,
                           .priv_data_push = enc->frame_queue };
    EncodeCallback ecb = { .priv_data_pull = enc->frame_queue,
                           .priv_data_push = mux };
    int ret;

    do {
        if ((ret = decode_loop(dec, &dcb)) > 0)
            ret = encode_loop(enc, &ecb);
    } while (ret > 0);

    return ret;
}

One loop feeds the other through the queue. get_packet and push_packet are demuxing and muxing functions; they might end up being another two Queue functions once the AVFormat layer gets a similar overhaul.

Advanced usage

From the examples above you may notice that in some situations you could possibly do better; all the loops pull data from one queue and push it immediately to another:

  • why not feed the right queue immediately once you have the data ready?
  • why not do some processing before feeding the decoded data to the encoder, such as converting the pixel format?

Here are some additional structures and functions to enable advanced users:

typedef struct AVFrameCallback {
    int (*yield)(void *priv, AVFrame *frame);
    void *priv_data;
} AVFrameCallback;

typedef struct AVPacketCallback {
    int (*yield)(void *priv, AVPacket *pkt);
    void *priv_data;
} AVPacketCallback;

typedef struct AVCodecContext {
    // ...

    AVFrameCallback *frame_cb;
    AVPacketCallback *packet_cb;

    // ...
} AVCodecContext;

int av_frame_yield(AVFrameCallback *cb, AVFrame *frame)
{
    return cb->yield(cb->priv_data, frame);
}

int av_packet_yield(AVPacketCallback *cb, AVPacket *packet)
{
    return cb->yield(cb->priv_data, packet);
}

Instead of using the Queue API directly, it would be possible to use the yield functions and give the user a means to override them.

Some API sugar could be something along the lines of this:

int avcodec_decode_yield(AVCodecContext *avctx, AVFrame *frame)
{
    int ret;

    if (avctx->frame_cb) {
        ret = av_frame_yield(avctx->frame_cb, frame);
    } else {
        ret = av_frame_queue_put(avctx->frame_queue, frame);
    }

    return ret;
}

Whenever a frame (or a packet) is ready, it could be passed immediately to another function; depending on your threading model and CPU it might be much more efficient to skip some enqueuing+dequeuing steps, such as by feeding directly some user queue that uses different data types.

This approach might work well even internally to insert bitstream reformatters after the encoding or before the decoding.
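As a sketch of how that could be wired up (the callback and helper names here are made up for illustration), a decoder could feed an encoder queue directly, bypassing its own frame queue:

static int feed_encoder(void *priv, AVFrame *frame)
{
    AVCodecContext *enc = priv;

    /* hand the decoded frame straight to the encoder input queue */
    return av_frame_queue_put(enc->frame_queue, frame);
}

static void use_direct_path(AVCodecContext *dec, AVCodecContext *enc)
{
    static AVFrameCallback direct_cb;

    direct_cb.yield     = feed_encoder;
    direct_cb.priv_data = enc;

    /* avcodec_decode_yield() will now skip dec->frame_queue entirely */
    dec->frame_cb = &direct_cb;
}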

Open problems

The callback system is quite powerful, but you have at least a couple of issues to take care of:
– Error reporting: when something goes wrong, how to notify what broke?
– Error recovery: how much does the user have to undo to fall back properly?

Probably this part should be kept for later, since there is already a huge amount of work.

What’s next

Muxing and demuxing

Ideally the container format layer should receive the same kind of overhaul. I'm not even halfway through documenting what should change, but from this blog post you might guess the kind of changes. Spoiler: the I/O layer gets spun off into a separate library.

Proof of Concept

Soon^WNot so late I'll complete a PoC out of this and possibly hack avplay so that it uses either QSV or VideoToolbox as a test case (depending on which operating system I'm playing with when I start); probably I'll soon see what the limitations of this approach are.

If you like the ideas posted above or you want to discuss them more, you can join the Libav irc channel or mailing list to discuss and help.

Thursday, 10 September

This is a tiny introduction to Libav, the organization.


The project aims to provide useful tools, written in portable code that is readable, trustworthy and performant.

Libav is an opensource organization focused on developing libraries and tools to decode, manipulate and encode multimedia content.


The project tries to be as non-hierarchical as possible. Every contributor must abide by a well defined set of rules, no matter which role.

For decisions we strive to reach near-unanimous consensus. Discussions may happen on irc, mailing-list or in real life meetings.

Conflicts should be avoided if possible and resolved otherwise.

Join us!

We are always looking for enthusiastic new contributors and will help you get started. Below you can find a number of possible ways to contribute. Please contact us.


Even if the project is non-hierarchical, it is possible to define specific roles within it. Roles do not really give additional power but additional responsibilities.


Contributing to Libav makes you a Contributor!
Anybody who reviews patches, writes patches, helps triaging bugs, writes documentation, helps people solve their problems, or keeps our infrastructure running is considered a contributor.

It does not matter how little you contribute. Any help is welcome.

On top of the standard great feats of contributing to an opensource project, special chocolate is always available during the events.


Many eyes might not make every bug shallow, but probably a second and a third pair might prevent some silly mistakes.

A reviewer is supposed to read the new patches and prevent mistakes (silly, tiny or huge) from landing in master.

Because of our workflow, spending time reading other people's patches is quite common.

People with specific expertise might get nagged to give their opinion more often than others, but everybody might spot something that looks wrong and probably is.


Checking that bugs are fixed and asking for better reports is important.

Bug wrangling involves making sure reported issues have all the needed information to start fixing the problem and checking if old issues are still valid or had been fixed already.


Nobody can push a patch to the master until it is reviewed, but somebody has to push it once it is.

Committers are the people who push code to the main repository after it has been reviewed.

Being a committer requires you to take newly submitted patches, make sure they work as expected, either locally or by pushing them through our continuous integration system, and possibly fix minor issues like typos.

Patches from a committer go through the normal review process as well.

Infrastructure Administrator

The regression test system, the git repository, the samples collection, the website, the patch trackers, the wiki and the issue tracker are all managed on dedicated hardware.

This infrastructure needs constant maintaining and improving.

Most of it comes from people devoting their time and (with a few exceptions) their own hardware, so this role definitely requires a huge amount of dedication.


The project strives to provide a pleasant environment for everybody.

Every contributor is considered a member of the team, regardless if they are a newcomer or a founder. Nobody has special rights or prerogatives.

Well defined rules have been adopted since the founding of the project to ensure fairness.

Code of Conduct

A quite simple code of conduct is in place in our project.

It boils down to respecting the other people and being pleasant to deal with.

It is commonly enforced with a friendly warning, followed by a request to leave if the person is unable to behave, and eventual removal if everything else fails.

Contribution workflow

The project has a simple contribution workflow:

  • Every patch must be sent to the mailing-list
  • Every patch must get a review and an Ok before it lands in the master branch

Code Quality

We have plenty of documentation to make it easy for you to prepare patches.

The reviewers usually help newcomers by reformatting the first patches and pointing and fixing common pitfalls.

If some mistakes are not caught during review, there are a few additional means to prevent them from hitting a release.

Post Scriptum

This post tried to summarize the project and its structure as if the legends surrounding it do not exist and the project is just a clean slate. Shame on me for not having written this blog post 5 years ago.

Past and Present

I already wrote about the past and the current situation of Libav, if you are curious please do read the previous posts. I will probably blog again about the social issues soon.


Release 12 is in the ABI break window now and soon the release branch will be spun off! After that, some of my plans to improve the API will see initial implementations and hopefully will be available as part of release 13 (and nihav).

I will discuss avframe_yield first since Kostya already posted about a better way to handle container formats.

Friday, 14 August

I'd like to add the loop option to avconv. This option allows repeating an input file a given number of times, so the output contains the specified number of copies of the input. The command is ./avconv -loop n -i infile outfile, where n specifies how many times the input file should be looped in the output.

How does this work?
After processing the input file for the first time, avconv calls the new seek_to_start function to seek back to the beginning of the file. av_seek_frame is called to perform the seeking itself, but there are other things needed for the loop option to work.

1) flush
Flush the decoder buffers to take out delayed frames. In avconv this is done by calling process_input_file with NULL as the frame; process_input_packet had to be modified a little so as not to signal EOF on the filters when seeking.

2) timestamps (ts)
To have correct timestamps in the "after seeking" part of the output stream, they have to be corrected as ts = ts_from_demuxer + n * (duration of the input stream), where n is the number of times the input stream has been processed so far. This duration is the duration of the longest stream in the file, because all the streams have to be processed (or played) before starting the next loop. The duration of a stream is the last timestamp minus the first timestamp, plus the duration of the last frame. For audio streams one "frame" is usually a constant number of samples, so its duration is number of samples / sample rate. Video frames, on the other hand, are displayed unevenly, so the average framerate can be used for the last frame duration if it is available; if the average framerate is not known, the last frame duration is just 1 (in the current time base).
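In code, the correction could look roughly like this (a sketch; the names are illustrative and not the actual avconv variables):

/* Illustrative sketch, not the actual avconv code. */
static void offset_loop_timestamps(AVPacket *pkt, int64_t input_duration, int n)
{
    /* input_duration: duration of the longest input stream, in the packet
     * time base, i.e. last_ts - first_ts + last_frame_duration */
    if (pkt->pts != AV_NOPTS_VALUE)
        pkt->pts += n * input_duration;
    if (pkt->dts != AV_NOPTS_VALUE)
        pkt->dts += n * input_duration;
}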


Thursday, 30 July

We are getting closer to a new release and you can see it is an even release by the amount of old and crufty code we are dropping. This usually is welcomed by some people and hated by others. This post is trying to explain what we do and why we are doing it.

New API and old API

Since the start of Libav we have tried to address the painful shortcomings of the previous management; here is the short list:

  • No leaders or dictators, there are rules agreed by consensus and nobody bends them.
  • No territoriality, nobody “owns” a specific area of the codebase nor has special rights on it.
  • No unreviewed changes in the tree, all the patches must receive an Ok by somebody else before they can be pushed in the tree.
  • No “cvs is the release”, major releases at least twice per year, bugfix-only point releases as often as needed.
  • No flames and trollfests, some basic code of conduct is enforced.

One of the effects of this is that APIs are discussed, proposals are documented and, little by little, we are migrating to a hopefully more rational and less surprising API.

What’s so bad regarding the old API?

Many of the old APIs were not designed at all, but just randomly added because mplayer or ffmpeg.c happened to need some feature at the time. The result was usually un(der)documented, hard to use correctly and often not well defined. Most users of the old API that I've seen actually used it wrong, and their code would at best occasionally fail to work, at worst crash randomly.
– Anton

To expand a bit on that you can break down the issues with the old API in three groups:

  • Unnamespaced common names (e.g. CODEC_ID_NONE); those may or may not clash with other libraries.
  • Now-internal-only fields that used to be exposed and were expected to be something they really are not (e.g. AVCodecContext.width).
  • Functionality that did not really work well (e.g. the old audio resampler), for which a replacement eventually got provided (AVResample).

The worst result of API misuse can be a crash in specific situations (e.g. if you use the AVCodecContext dimensions when you should use the AVFrame dimensions to allocate your screen surface, you get quite an ugly crash, since the former represent the decoding-time dimensions while the latter are the dimensions of the frame you are about to present, and they can vary a LOT).
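To illustrate the difference (alloc_surface here is a made-up stand-in for whatever your player uses to allocate its output):

/* Wrong: AVCodecContext carries the decoding-time dimensions,
 * which may differ from the frame actually being output. */
surface = alloc_surface(avctx->width, avctx->height);

/* Right: use the dimensions of the frame you are about to present. */
surface = alloc_surface(frame->width, frame->height);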

But Compatibility ?!

In Libav we try our best to provide migration paths, and in the past years we even went the extra mile by providing patches for quite a bit of the software Debian was distributing at the time. (Since nobody even said thanks for the effort, I doubt the people involved would do that again…)

Keeping backwards compatibility forever is not really feasible:

  • You do want to remove a clashing symbol from your API
  • You do not want applications crashing because of wrong assumptions
  • You do want people to use the new API and not keep compatibility wrappers that might not work in certain
    corner cases.

The current consensus is to try to keep an API deprecated for about 2 major releases; with release 12 we are dropping code that has been deprecated for 2-3 years.


I have been busy with my dayjob deadlines, so I couldn't progress on the new API for avformat and avcodec I described before; probably the next blogpost will be longer and a bit more technical again.

Thursday, 09 July

Debian decided to move to the new FFmpeg, what does it mean to me? Why should I care? This post won’t be technical for once, if you think “Libav is evil” start reading from here.

Relationship between Libav and Debian

After the split of what was FFmpeg into two projects, Michael Niedermayer kept the name due to his ties with the legal owner of the trademark, while everything the group of 18 people was doing got "merged" under the new Libav name.

For Gentoo I, maybe naively, decided to just have both and let whoever wants to maintain the other package do so. Gentoo is about choice, and whoever wants to shoot himself in the foot has to be free to do that in the safest possible way.

For Debian, being binary packaged, whoever was maintaining the package decided to stay with Libav. It wasn't surprising, given that "lack of releases" was one of the sore points of the former FFmpeg, and he started to get involved with upstream to try to fix it.

Perceived Leverage and Real Shackles

Libav started with the idea to fix everything that went wrong with the Former FFmpeg:
– Consensus instead of idolatry for THE Leader
– Paced releases instead of cvs is always a release
– Maintained releases branches for years
– git instead of svn
– Cleaner code instead of quick hacks to solve the problem of the second
– Helping downstreams instead of giving them the finger.

Being in Debian, according to some people, was undeserved because "Libav is evil", and since we wrongly thought that people would look at actions and not at random blogposts by people with more bias than anything, we just kept writing code. It was a huge mistake; this blogpost and the previous one are my attempt to address that.

Being in Debian to me meant that I had to help fix stale versions of software, often even without an upstream.

The people at Debian, instead of helping (the number of patches coming from people @debian.org over the years amounted to 1 according to git), kept piling up work on us.

Fun requests such as "Do remove a standard test image because its origin, according to them, is unclear" or "Do maintain the ancient release branch that is 3 major releases behind" have been quite common.

For me Debian has been no help and an additional burden.

The leverage that being in a distribution theoretically gives, according to those crying because the evil Libav was in Debian, amounts to none for me: their users complain because the version provided is stale, their developers do not help even keeping the point releases up or updating the software that uses Libav because they are scared of being tainted, and downstreams such as Kubi (that are so naive as to praise FFmpeg for what happened in Libav, such as the HEVC multi-thread support Anton wrote) would keep picking the implementation they prefer and use ffmpeg-only API whenever they can (Debian will ask us to fix that for them anyway).

Is being in Debian important?

Last time they were discussing moving to FFmpeg I had the unpleasant experience of reading lots of lovely emails with passive-aggressive snide remarks such as "libav has just developers, not users", or seeing the fruits of the smear campaign, such as "is it true you stole the FFmpeg hardware", in their mailing list (btw, during the past VDD the FFmpeg people there said at least that would be addressed; well, it has not been yet, thank you).

At that time I got asked to present Libav; this time, after reading in the Debian wiki the "case" presented with skewed git statistics (maybe purge the merge commits when you count them to compare project activity?) and other number dressing, I just got sick of it.

Personally I do not care. There are better ways to spend your own free time than doing the distro maintenance work for people that do not even thank you (because you are evil).

The smear campaign pays

I'm sure that now that the new FFmpeg gets to replace Libav it will get more contributions from people @debian.org, and maybe those that were crying about the "oh so unjust" treatment will be happy to do the maintenance churn.

Anyway, that's not my problem anymore, and I guess I can spend more time writing about the "social issues" around the project, trying to defuse at least a little the so effective "Libav is evil" narrative, one post at a time.

Friday, 03 July

Last weekend some libav developers met in the South Pole offices with additional sponsorship from Inteno Broadband Technology. (And the people at Borgodoro that gave us more chocolate to share with everybody).


Since last year Libav has started to have sprints to meet up, discuss in person topics that require a more direct medium than IRC or the mailing list, and usually write some code while asking for direct opinions and help.

Who attended

Benjamin was our host for the event. Andreas joined us for the first day only, while Anton, Vittorio, Kostya, Janne, Jan and Rémi stayed both days.

What we did

The focus was split across a number of areas of interest:

  • API: with some interesting discussion between Rémi and Anton regarding how to clarify a tricky detail regarding AVCodecContext and AVFrame and who to trust when.
  • Reverse Engineering: With Vittorio and Kostya having fun unraveling codecs one after the other (I think they got 3 working)
  • Release 12 API and ABI break
    • What to remove and what to keep further
    • What to change so it is simpler to use
    • If there is enough time to add the decoupled API for avcodec
  • Release 12 wishlist:
    • HEVC speed improvements, since even the C code can be sped up.
    • HEVC extended range support, since there is YUV 422 content out now.
    • More optimizations for the newer architectures (aarch64 and power64le)
    • More hardware accelerator support (e.g. HEVC encoding and decoding support for Intel MediaSDK).
    • Some more filters, since enough people asked for them.
    • Merge some of the pending work (e.g. go2meeting3, the new asf demuxer).
    • Get more security fixes in (with ago kindly helping me on this).
    • … and more …
  • New website with markdown support to make it easier for people to update.

During the sprint we managed to write a lot of code and even push some of it.
Maybe a little too early in the case of asf, but better to have it in and get to fix it for the release.

Special mention to Jan for getting a quite exotic container almost ready, I'm looking forward to seeing it on the ml; and to Andreas for reminding me that AVScale is sorely needed, by sending me a patch that fixes a problem his PowerPC users are experiencing while uncovering some strange problem in swscale… I'll need to figure out a good way to get a big-endian PowerPC running to look at it in detail.

Thank you

I want to especially thank all the people at South Pole that welcomed me when I arrived a day early, and all the people that participated and made the event possible; it had been fun!

Post Scriptum

  • This post had been delayed 1 week since I had been horribly busy, sorry for the delay =)
  • During the sprint legends such as kropping the sourdough monster and the burning teapot had been created, some reference of them will probably appear in commits and code.
  • Anybody with experience with qemu-user for PowerPC is welcome to share his knowledge with me.

Wednesday, 25 March

I am hearing about a lot of people interested in open source and giving back to the community. I think it can be an exciting experience and it can be positive in many different ways: first of all, more contributors mean better open-source software being produced, and that is great, but it also means that the people involved can improve their skills and learn more about how successful projects get created.

So I wondered why many developers do not take the first step: what is stopping them from sending the first patch or the first pull-request? I think that often they do not know where to start, or they think that contributing to the big projects out there is intimidating, something to be left to an alien form of life, some breed of extra-good programmers totally separated from the common fellows writing code in the world we experience daily.

I think that hearing the stories of a few developers that have made major contributions to top-level projects could help to get over these misconceptions. So I asked a few questions to this dear friend of mine, Luca Barbato, who contributed, among others, to Gentoo and VLC.

Let’s start from the beginning: when did you start programming?

I started dabbling with stuff during high school, but I started doing something more consistent around the time I started university.

What was your first contribution to an open-source project?

I think either patching the ati-drivers to work with the 2.6 series or hacking cloop (an early kernel module for compressed loops) to use lzo instead of gzip.

What are the main projects you have been involved into?

Gentoo, MPlayer, Libav, VLC, cairo/pixman

How did you start being involved in Gentoo? Can you explain the roles you have covered?

Daniel Robbins invited me to join, I thought “why not?”

During the early times I took care of PowerPC and [Altivec](http://en.wikipedia.org/wiki/AltiVec), then I focused on the toolchain due to the fact that gcc and binutils tended to break software in funny ways, then multimedia, since Altivec was mainly used there. I have been part of the Council a few times, used to be a recruiter (if you want to join Gentoo feel free to contact me anyway, we love to have more people involved) and lately I'm involved with community relationships.

Note: Daniel Robbins is the creator of Gentoo, a Linux distribution. 

Are there other less famous projects you have contributed to?

I have minor contributions in quite a bit of software due to the fact that in Gentoo we try our best to upstream our changes, and I like to get fixes back into what I like to use.

What are your motivations to contribute to open-source?

Mainly because I can =)

Who helped you to start contributing? From whom have you learnt the most?

Daniel Robbins surely had been one of the first asking me directly to help.

You learn from everybody so I can’t name a single person among all the great people I met.

How did you get to know Daniel Robbins? How did he help you?

I was a Gentoo user, I happened to do stuff he deemed interesting, and he asked me to join.

He involved me in quite a number of interesting projects, some worked (e.g. Gentoo PowerPC), some (e.g. Gentoo Games) not so much.

Do your contributions to open-source help your professional life?

In some ways it does; contrary to the assumption, I'm just seldom paid to improve the projects I care about the most, but at the same time having them working helps me when I need them in my professional work.

How do you face disagreement on technical solutions?

I’m a fan of informed consensus, otherwise prototypes (as in “do, test and then tell me back”) work the best.

To contribute to OSS, what is more important: technical skills or diplomatic/relational skills?

Both are needed at different time, opensource is not just software, you MUST get along with people.

Have you found different ways to organize projects? What works best in your opinion? What works worst?

Usually the main problem is dealing with poisonous people; it doesn't matter if it is a 10-people project or a 300+-people project. You can have a dictator, you can have a council, you can have global consensus: poisonous people are what makes your community suffer a lot. Bonus points if the poisonous people get clueless fans giving them additional voices.

Did you ever send a patch for the Linux kernel?

Not really, I’m not fond of that coding style so usually other people correct the small bugs I stumble upon before I decide to polish my fix so it is acceptable =)

Do you have any suggestions for people looking to get started contributing to open-source?

Pick something you use, scratch your own itch first, do not assume other people are infallible or heroes.

ME: I certainly agree with that, it is one of the best pieces of advice. However, if you cannot find anything suitable, at the end of this post I wrote a short list of projects that could use some help.

Can you tell us about your best and your worst moments with contribution to OSS?

The best moment is recurring and it is when some user thanks you since you improved his or her life.

The worst moment for me is when some rabid fan claims I’m evil because I’m contributing to Libav and even praises FFmpeg for something originally written in Libav in the same statement, happened more than once.

What are you working on right now and what plans do you have for the future?

Libav, plaid, bmdtools, commonmark. In the future I might play a little more with [rust](http://www.rust-lang.org/).

Thanks Luca! I would be extremely happy if this short post could give someone the last push they need to contribute to an existing open-source project or start their own: I think we could all use more, better, open-source software. So let's write it.

One thing I admire in Luca is that he is always curious and ready to jump on the next challenge. I think this is the perfect attitude for becoming an OSS contributor: just start playing around with the things you like and talk to people, and you could find more possibilities to contribute than you could imagine.

…and one final thing: Luca is also the author of open-source recipes: he created the recipes of two types of chocolate bars dedicated to Libav and VLC. You can find them on the borgodoro website.


I suggest to take a look at his blog.

A few open-source projects you could consider contributing to

Well, just in case you are eager to start writing some code and you are looking for some projects to contribute to, here are a few, written with different technologies. If you want to start contributing to any of those and you need directions, just drop me a line (federico at tomassetti dot me) and I would be glad to help!

  • If you are interested in contributing to Libav, you can take a look at this post: there I explained how I submitted my first patch (approved in the meantime!). It is written in C.

  • You could be also interested in plaid: it is a Python web application to manage git patches sent by e-mail (there are a few projects using this model like libav or the linux kernel)

  • WorldEngine, it is a world generator written in Python

  • Plate-tectonics, it is a library for plate tectonics simulation. It is written in C++

  • JavaParser a Java parser, written in Java

  • Incremental Java parser, an incremental Java parser, written in Scala

The post How people get started contributing to open-source? A few questions to Luca Barbato, contributor to Gentoo, MPlayer, Libav, VLC, cairo/pixman appeared first on Federico Tomassetti - Consultant Software Engineer.

Wednesday, 18 February

I happened to have a few hours free and I was looking for some coding to do. I thought about VLC, the media player which I have enjoyed so much using over the years and I decided that I wanted to contribute in some way.

To start helping with such a complex project there are a few steps involved. Here I describe how I got my first patch accepted. In particular, I wrote a patch for libav, the library behind VLC.

The general picture

I started by reading the wiki. It is a very helpful starting point, but the process to set up the environment and send a first patch was not yet 100% clear to me, so I got in touch with some of the developers of libav to understand how they work and how I could start lending a hand with something simple. They explained to me that the easiest way to start is by solving issues reported by static analysis tools and style checkers. They use uncrustify to verify that the code adheres to their style guidelines and they run coverity to check for potential issues like memory leaks or null dereferences. So I:

  • started looking at some coverity issues
  • found something easy to address (a very simple null dereference)
  • prepared the patch
  • submitted the patch

After a few minutes the patch was approved by a committer, ready to be merged. The day after it made its way to the master branch. Yeah!

Download source code, build libav and run the tests

First of all, let’s clone the git repository:

git clone git://git.libav.org/libav.git

Alternatively you could use the GitHub mirror, if you want to.

At this point you may want to install all the dependencies. The instructions are platform specific, you can find them here. If you are on Mac OS X be sure to have yasm installed, because nasm does not work. If you have both installed, configure will pick up yasm (correctly). Just be sure to run configure after installing yasm.

If everything goes well you can now build libav by running:

./configure
make

Note that it is fine to build in-tree (no need to build in a separate directory).

Now it is time to run the tests. You will have to specify a directory where to download some samples, which are later used by the tests. Let's assume you want to put your samples under ~/libav-samples:

mkdir ~/libav-samples
# This downloads the samples
make fate-rsync SAMPLES=~/libav-samples
# This runs the tests
make fate

Did everything run fine? Good! Let’s start to patch then!

Write the patch

First of all we need to find an open issue. Visit the Coverity page for libav at https://scan.coverity.com/projects/106. You will have to ask for access and wait for someone to grant it to you. Once you are able to log in you will encounter a screen like this:

Screenshot from 2015-02-14 19:39:06

Here, this seems an easy one! The variable oggstream has been allocated by av_mallocz (basically a wrapper for malloc) but the returned value has not been checked. If the allocation fails a NULL pointer is returned, and when we try to access it at the next line things are going to end up unpleasantly. What we need to do is check the return value of av_mallocz and, if it is NULL, return an error. The appropriate error to return in this case is AVERROR(ENOMEM). To get this information… you have to start reading code, getting familiar with the way of doing business of this codebase.
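The resulting change is tiny; roughly something like this (a sketch of the idea, not the exact committed patch):

oggstream = av_mallocz(sizeof(*oggstream));
if (!oggstream)
    return AVERROR(ENOMEM);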

Libav follows strict rules about the comments in git commits: use git log to look at previous commits and try to use the same style.

Submitting the patch

I think many of you are familiar with GitHub and the whole process of submitting a patch for review. GitHub is great because it made that process so easy. However there are some projects (notably including the Linux kernel) which adopt another approach: they receive patches by e-mail.

Git has functionality that permits submitting a patch by e-mail with a simple command. The patch will be sent to the mailing list, discussed, and if approved the e-mail will be downloaded, processed through git and committed to the official repository. Does it sound cumbersome? Well, it does to me, spoiled as I am by GitHub and similar tools, but, you know, when in Rome you should do as the Romans do, so…

# This installs the git extension for sending patches through e-mail
sudo apt install git-email 
# This submits a patch built from the last commit
git send-email -1 --to libav-devel@libav.org

Sending patches using gmail with 2-factor authentication enabled

Now, many of you are using gmail and many of you have enabled 2-factor authentication (right? If not, you should). If this is your case you will get an error along these lines:

Password for 'smtp://f.tomassetti@gmail.com@smtp.gmail.com:587': 5.7.9 Application-specific password required. Learn more at 5.7.9 http://support.google.com/accounts/bin/answer.py?answer=185833 cj12sm14743233wjb.35 - gsmtp

Here you can find how to create a password for this goal: https://support.google.com/accounts/answer/185833 The name of the application that I had to create was smtp://f.tomassetti@gmail.com@smtp.gmail.com:587. Note that I used the same name specified in the previous error message.

What if I need to correct my patch?

If things go well an e-mail with your patch will be sent to the mailing-list, someone will look at it and accept it. Most of the time you will receive suggestions about possible adjustments to improve your patch. When that happens you want to submit a new version of your patch in the same thread which contains the first version of the patch and the e-mails commenting on it.

To do that you want to update your patch (typically using git commit --amend) and then run something like:

git send-email -1 --to libav-devel@libav.org --in-reply-to="54E0F459.3090707@gentoo.org"

Of course you need to find out the Message-Id of the e-mail to which you want to reply. To do that in gmail, select the "Show original" item from the contextual menu for the message and, in the screen that opens, look for the Message-Id header.

Tools to manage patches sent by e-mail

There are also web applications which are used to manage the patches sent by e-mail. Libav is currently using Patchwork for managing patches. You can see it deployed at: https://patches.libav.org/project/libav-devel/list/. Currently another tool is being developed to replace Patchwork. It is named Plaid and I tried to help a little bit with that too 🙂


Mine has been a very small contribution, and in the future I hope to be able to do more. But being a maintainer of other open-source projects I have learned that even small help is useful and appreciated, so for today I feel good.

Screenshot from 2015-02-14 22:29:48

Please, if I am missing something help me correct this post

The post How to contribute to Libav (VLC): just got my first patch approved appeared first on Federico Tomassetti - Consultant Software Engineer.

Monday, 12 January

I'm interested in history, so I like visiting castles (or their ruins) and historical towns. There are many of them here in the Czech Republic. Some Czech sights, like those in Prague, Kutná Hora or Český Krumlov, are well known and, especially during the summer, overcrowded with tourists. But there are also very nice, less popular places which are much calmer and a real pleasure to visit. One such place is the town of Jindřichův Hradec, where the third biggest castle in the Czech Republic is located. The town centre with this castle is really amazing; it is full of romantic little streets, churches, museums and ancient buildings. This small town has a really big historical centre compared to its size, so one can spend the whole day exploring the castle and its surroundings. Recently, I decided to visit the town again; it was a windy day, but it was relatively warm for the winter. There weren't any tourists around, and I really enjoyed my visit. The only disadvantage of this trip was that the castle is closed to visitors during the winter and the museums have short opening hours on weekends.

Sunday, 30 November

In one of my previous posts I have noted I'm an avid audiobook consumer. I started when I was at the hospital, because I didn't have the energy to read — and most likely, because of the blood sugar being out of control after coming back from the ICU: it turns out that blood sugar changes can make your eyesight go crazy; at some point I had to buy a pair of €20 glasses simply because my doctor prescribed me a new treatment and my eyesight ricocheted out of control for a week or so.

Nowadays, I have trouble sleeping if I'm not listening to something, and I end up with the Audible app installed on all my phones and tablets, with at least a few books preloaded whenever I travel. Of course, as I said, I keep the majority of my audiobooks on the iPod, and the reason is that while most of my library is on Audible, not all of it is. There are a few books that I bought on iTunes before finding out about Audible, and then there are a few I received in CD form, including The Hitchhiker's Guide To The Galaxy Complete Radio Series, which is among my favourite playlists.

Unfortunately, to be able to convert these from CD to a format that the iPod could digest, I ended up having to buy a software called Audiobook Builder for Mac, which allows you to rip CDs and build M4B files out of them. What's M4B? It's the usual mp4 format container, just with an extension that makes iTunes consider it an audiobook, and with chapter markings in the stream. At the time I first ripped my audiobooks, ffmpeg/libav had no support for chapter markings, so that was not an option. I've been told that said support is there now, but I have not tried getting it to work.

Indeed, what I need to find out is how to build an audiobook file out of a string of mp3 files, and I have no idea how to do that now that I no longer have access to my personal iTunes account on a Mac to re-download Audiobook Builder and process them. In particular, the list of mp3s that I'm looking forward to merging together is the years 2013 and 2014 of BBC's The News Quiz, to which I'm addicted and listen to continuously. Being able to join them all together so I can listen to them in a multi-day-running playlist is one of the very few things that still lets me sleep relatively calmly — I say relatively because I really don't remember the last time I slept soundly, in about a year by now.

Essentially, what I'd like is for Audible to let me sideload some content (the few books I did not buy from them, and the News Quiz series that I stitch together from the podcast) and create a playlist — then as far as I'm concerned I wouldn't have to use an iPod at all. Well, besides the fact that I'd have to find a way to shut up notifications while playing audiobooks. Having Dragons of Autumn Twilight interrupted by the Facebook pop notification is not something that I look forward to most of the time. And in some cases I have even had some background update disrupting my playback, so there is definitely space for improvement.

The other day I wrote about unpaper and the fact that I was working on making it use libav for file input. I have now finished converting unpaper (in a branch) so that it does not use its own image structure, but rather the same AVFrame structure that libav uses internally and externally. This meant not only supporting stripes, but using the libav allocation functions and pixel formats.

This also enabled me to use libav for file output as well as input. While for the input I decided to add support for formats that unpaper did not read before, for output at the moment I'm sticking with the same formats as before. Mostly because the one type of output file I'd like to support is not currently supported by libav properly, so it'll take me quite a bit longer to be able to use it. For the curious, the format I'm referring to is multipage TIFF. Right now libav only supports single-page TIFF and it does not support JPEG-compressed TIFF images, so there.

Originally, I planned to drop compatibility with previous unpaper version, mostly because to drop the internal structure I was going to lose the input format information for 1-bit black and white images. At the end I was actually able to reimplement the same feature in a different way, and so I restored that support. The only compatibility issue right now is that the -depth parameter is no longer present, mostly because it and -type constrained the same value (the output format).

To reintroduce the -depth parameter, I want to support 16-bit gray. Unfortunately to do so I need to make more fundamental changes to the code, as right now it expects to be able to get the full value at most at 24 bit — and I'm not sure how to scale a 16-bit grayscale to 24-bit RGB and maintain proper values.

While I had to add almost as much code to support the libav formats and their conversion as there was there to load the files, I think this is still a net win. The first point is that there is no format parsing code in unpaper, which means that as long as the pixel format is something that I can process, any file that libav supports now or will support in the future will do. Then there is the fact that I ended up making the code "less smart" by removing codepath optimizations such as "input and output sizes match, so I won't be touching it, instead I'll copy one structure on top of the other", which means that yes, I probably lost some performance, but I also gained some sanity. The code was horribly complicated before.

Unfortunately, as I said in the previous post, there are a couple of features that I would have preferred if they were implemented in libav, as that would mean they'd be kept optimized without me having to bother with assembly or intrinsics. Namely pixel format conversion (which should be part of the proposed libavscale, still not reified), and drawing primitives, including bitblitting. I think part of this is actually implemented within libavfilter but as far as I know it's not exposed for other software to use. Having optimized blitting, especially "copy this area of the image over to that other image" would be definitely useful, but it's not a necessary condition for me to release the current state of the code.

So current work in progress is to support grayscale TIFF files (PAL8 pixel format), and then I'll probably turn to libav and try to implement JPEG-encoded TIFF files, if I can find the time and motivation to do so. What I'm afraid of is having to write conversion functions between YUV and RGB, I really don't look forward to that. In the mean time, I'll keep playing Tales of Graces f because I love those kind of games.

Also, for those who're curious, the development of this version of unpaper is done fully on my ZenBook — I note this because it's the first time I use a low-power device to work on a project that actually requires some processing power to build, but the results are not bad at all. I only had to make sure I had swap enabled: 4GB of RAM are no longer enough to have Chrome open with a dozen tabs, and a compiler in the background.

I've resumed working on unpaper since I have been using it more than a couple of times lately and there has been a few things that I wanted to fix.

What I've been working on now is a way to read input files in more formats; I was really aggravated by the fact that unpaper implemented its own loading of a single set of file formats (the PPM "rawbits"); I went on to look into libraries that abstract access to image formats, but I couldn't find one that would work for me. At the end I settled for libav even though it's not exactly known for being an image processing library.

My reason for choosing libav was mostly the fact that, while it does not support all the formats I'd like to have supported in unpaper (PS and PDF come to mind), it does support the formats that unpaper supports now (PNM and company), and I know the developers well enough that I can get bugs fixed and features implemented as needed.

I now have a branch that can read files using libav. It's a very naïve implementation of it though: it reads the image into an AVFrame structure and then converts that into unpaper's own image structure. It does not even free up the AVFrame, mostly because I'd actually like to be able to use AVFrame instead of unpaper's structure. Not only to avoid copying memory when it's not required (libav has functions to do a shallow copy of frames and mark them as readable when needed), but also because the frames themselves already contain all the needed information. Furthermore, libav 12 is likely going to include libavscale (or so Luca promised!) so that the on-load conversion can also be offloaded to the library.
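For the curious, the naïve loading path looks more or less like this (a rough sketch against the libav 11 API, with error handling and teardown omitted):

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Sketch: demux the single image packet and decode it into an AVFrame. */
static AVFrame *load_image(const char *filename)
{
    AVFormatContext *fmt = NULL;
    AVCodecContext  *avctx;
    AVCodec         *codec;
    AVPacket         pkt;
    AVFrame         *frame     = av_frame_alloc();
    int              got_frame = 0;

    av_register_all();
    avformat_open_input(&fmt, filename, NULL, NULL);
    avformat_find_stream_info(fmt, NULL);

    avctx = fmt->streams[0]->codec;
    codec = avcodec_find_decoder(avctx->codec_id);
    avcodec_open2(avctx, codec, NULL);

    /* image files carry one packet holding the whole picture */
    av_read_frame(fmt, &pkt);
    avcodec_decode_video2(avctx, frame, &got_frame, &pkt);
    av_free_packet(&pkt);

    return got_frame ? frame : NULL;   /* demuxer/decoder cleanup left out */
}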

Even with the naïve implementation that I implemented in half an afternoon, unpaper not only supports the same input file as before, but also PNG (24-bit non-alpha colour files are loaded the same way as PPM, 1-bit black and white is inverted compared to PBM, while 8-bit grayscale is actually 16-bit with half of it defining the alpha channel) and very limited TIFF support (1-bit is the same as PNG; 8-bit is paletted so I have not implemented it yet, and as for colour, I found out that libav does not currently support JPEG-compressed TIFF – I'll work on that if I can – but otherwise it is supported as it's simply 24bpp RGB).

What also needs to be done is to write out the file using libav too. While I don't plan to allow writing files in any random format with unpaper, I wouldn't mind being able to output through libav. Right now the way this is implemented, the code does explicit conversion back or forth between black/white, grayscale and colour at save time, and this is nothing different than the same conversion that happens at load time, and should rather be part of libavscale when that exists.

Anyway, if you feel like helping with this project, the code is on GitHub and I'll try to keep it updated soon.

Thursday, 20 November

I participated in the last Libav sprint in Torino. I made the new ASF demuxer for Libav, but during testing, problems with the rtsp and mms protocols appeared. Therefore, my main task during the sprint was to fix these issues.
It was the second time I was at such a sprint and also my second visit to Torino, and the sprint was even better than I expected. It's really nice to see the people I'm communicating with through the irc channel in person; the thing I like a lot about Libav is its friendly community. But the most important thing for me, as the most inexperienced person among skilled developers, was naturally their help. My mentors from OPW participated in the sprint and as a result all the issues were fixed and a patch was sent to the ML (https://patches.libav.org/patch/55682/). Also, these personal consultations can be very productive for learning new things, and because I'm not a native English speaker, I realized the few days when I have to speak or even think in English are really helpful for getting better at it.
The last day of the sprint we had a trip to a really magical place called Sacra di San Michele (http://www.sacradisanmichele.com/).

I like to visit places like this; in the Czech Republic, where I live, I visit ancient castles. But I think this may be the oldest place I've ever been to, the oldest parts of it were built in the 10th century. I had a feeling the history was breathing on us from the walls. We were lucky with the weather, it was sunny during our visit and the view from the terrace on top of the building was really breathtaking. We saw the peaks of the Alps covered in snow that divide this part of Italy from France.

Friday, 15 August

RealAudio files have several possible interleavers. The simplest is “Int0”, which means that the packets are in order. Today, I was contrasting “Int4” and “genr”. They both require rearranging data, in highly similar but not identical ways. “genr” is slightly more complex than “Int4”.

A typical Int4 pattern, writing to subpacket 0, 1, 2, 3, etc, would read data from subpacket 0, 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, 66, 1, 7, 13, etc, in that order – assuming subpkt_h is 12, as it was in one sample file. It is effectively subpacket_h rows of subpacket_h / 2 columns, counting up by subpacket_h / 2 and wrapping every two rows.

A typical genr pattern is a little trickier. For subpacket_h = 14, and the same 6 columns per row as above, the pattern to read from looks like 0, 12, 24, 36, 48, 60, 72, 6, 18, 30, 42, 54, 66, 78, 1, etc.
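For reference, here is a small stand-alone sketch (my own illustration, not the Libav code) that reproduces both orderings; h is the subpacket height and cols the number of subpackets per row:

#include <stdio.h>

/* Int4: write position k reads subpacket (k % h) * cols + k / h. */
static void print_int4(int h, int cols)
{
    for (int k = 0; k < h * cols; k++)
        printf("%d ", (k % h) * cols + k / h);
    printf("\n");
}

/* genr: fill a table with the scatter positions (matching the terse formula
 * quoted below) and print, for each output position, the input subpacket. */
static void print_genr(int h, int cols)
{
    int order[14 * 6];   /* large enough for the examples below */

    for (int y = 0; y < h; y++)
        for (int x = 0; x < cols; x++)
            order[h * x + ((h + 1) / 2) * (y & 1) + (y >> 1)] = y * cols + x;
    for (int p = 0; p < h * cols; p++)
        printf("%d ", order[p]);
    printf("\n");
}

int main(void)
{
    print_int4(12, 6);   /* 0 6 12 ... 66 1 7 13 ... */
    print_genr(14, 6);   /* 0 12 24 ... 78 1 13 ...  */
    return 0;
}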

I spent most of today implementing genr, carefully working with a paper notebook, pencil, Python, and a terse formula from the old implementation:

for (x = 0; x < w/sps; x++) avio_read(pb, ast->pkt.data+sps*(h*x+((h+1)/2)*(y&1)+(y>>1)), sps);

After various debug printfs, a lot of quality time in GDB running commands like x /94x (pkt->data + 14 * 94), a few interestingly garbled bits of audio playback, and a mentor pointing out I have some improvements to make on header parsing, I can play (some) genr files.

I have also recently implemented SIPR support, and it works in both RA and RM files. RV10 video also largely works.

Sunday, 10 August

I've been participating in the Libav project with a few small contributions before, but implementing the ASF demuxer was my first complex enterprise. I was a little scared about it and I wasn't sure I was able to handle such a task, because I'm not very experienced as a programmer yet. Luckily the specifications and patient and wise mentors were there for me (and the other OPW participants). When I started to implement just the ASF parser - asfinfo - with the specifications, I realized the specifications are accurate and have a logical structure, which helped me a lot. Sometimes I had a feeling some things were not explained clearly, but maybe it's just my inexperience with understanding specs, and my mentors always patiently helped me with the things I didn't understand. Also I was unsure about proper packet handling and other Libav internal stuff I wasn't familiar with. I was searching for inspiration in other demuxers (I was forbidden to look at the old ASF demuxer for fear of copying its code without understanding it) and my mentors answered my questions of course, so I think writing a demuxer (not too complex or overcomplicated) with the specs and with patient people who help and encourage you is not that hard after all. It's not that easy either, I've made many bugs, I was even hopeless a few times and there's still much work left, but I think it is doable. Point is, people (girls), don't be afraid, you can do it.

Saturday, 09 August

I've solved the lost packets problems and finally my ASF demuxer started to work right on "ideal samples in a vacuum". So the time for fixing memory leaks had come, and valgrind helped me a lot with this issue. After the memory leaks were solved I had to start testing my demuxer on various samples of ASF format multimedia files. As expected, I found many samples my demuxer failed on. The reasons were different - mostly they were my mistakes, misunderstood or overlooked parts of the specs, but I think I found a case that needed unusual handling the specs didn't mention.
Some of the problems were caused, for example, by
* improper subpayload handling - one should be really careful while reading the specs to avoid problems with less common cases, like one subpayload inside a single payload with padding inside the payload itself (while the padding after the payload is 0), but there were other problems too
* I had to revise padding handling for all possible cases
* an ASF file has 3 places where the ASF packet size is stated - twice in the header objects and once in the packet itself, and the specs do not specify what one should do when they differ, or at least I didn't find it
* some stupid mistakes, like when I just forgot to do something after adding a new block to my code, were really annoying
A funny thing was when I fixed my demuxer for one group of samples and another group that worked before started to fail; I fixed the new group and a third group failed. I was very annoyed by this, but many of the mistakes I made were caused by my inexperience, and I think one (at least me) just has to make all of these mistakes to get better.

Sunday, 27 July

Finally, all the basic parts of the ASF demuxer seem to work somehow.

 In the last two weeks I fixed various bugs in my code and I hope packet handling is correct now. The only problem is that a few packets at the end of the Data Object are still lost. Because I wanted a small break from this problem, my mentors allowed me to implement basic seeking first. The ASF demuxer can now read index entries from the Simple Index Object and add them with av_add_index_entry to the AVStream. So when a Simple Index Object is present in an ASF file, my demuxer can seek to the requested time.

Sunday, 13 July

The skeleton of the new ASF demuxer has been written, but only audio is demuxed properly so far. The problem is the complicated handling of video frames in the ASF format. I hope I have finally found out how to process packets properly. An ASF packet can contain a single payload, a single payload with subpayloads, multiple payloads, or multiple payloads with subpayloads inside some of them. Every subpayload is always one frame, but a single payload can be a whole frame or just part of it. When an ASF packet contains multiple payloads, each of them can be one frame, but it can also be just a fragment of one. When one of the multiple payloads contains subpayloads, each subpayload is one frame and it can be processed as an AVPacket.
For the case of a fragmented frame in an ASF packet I have to store several unfinished frames in ASFPacket structures that I've created for this purpose. There should not be more than one unfinished frame per stream, so I keep one ASFPacket in each ASFStream (ASFStream is a structure for storing ASF stream properties). An ASFPacket contains a pointer to an AVBufferRef where the unfinished frame is stored. When the frame is finished I can forward the pointer to the buffer with the data to an AVPacket, set its properties like size, timestamps and others, and finally return the AVPacket.
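Roughly, the structures I'm describing look like this (a sketch; the exact field names in my code may differ):

/* Illustrative sketch, not the final code. */
typedef struct ASFPacket {
    AVBufferRef *buf;        /* storage for the (possibly unfinished) frame */
    int          size_left;  /* bytes still missing before it is complete   */
    int64_t      dts;
    int          stream_index;
} ASFPacket;

typedef struct ASFStream {
    /* ... stream properties ... */
    ASFPacket pkt;           /* at most one unfinished frame per stream     */
} ASFStream;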
I introduced many bugs into code that was working before (at least ASF packets were parsed correctly and audio worked), and now I'm working on fixing all of them.

I was accepted to OPW for the May - August 2014 round with the project "Rewrite the ASF demuxer". The first task from my mentors was to create a wiki page about ASF (Advanced Streaming Format); it was created at https://wiki.libav.org/ASF
Interesting notes about other containers: http://codecs.multimedia.cx/?p=676.

The next task from my mentors was to write a simple program which reads an ASF file and prints its structure, i.e. the list of ASF objects, metadata and codec information. An ASF file consists of so-called ASF Objects. There are 3 top-level objects - the Header Object, the Data Object and the Index Object. The Header Object in particular can contain many other objects providing different ASF features, for example the Codec List Object for codec information or the Metadata Object for metadata. An object is recognised by its GUID, a 16-byte array that identifies the object type. I was confused by the fact that the GUID you read from the file does not match the GUID from the specs: for historical reasons one has to modify the GUIDs from the specs (reorder the bytes) to match the GUID read from the file.
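
A sketch of that reordering, assuming the usual GUID layout where the first three fields are stored little-endian in the file while the specs print them big-endian (this is my illustration, not code from asfinfo):

#include <stdint.h>

/* Convert a GUID as printed in the specs into the byte order found in the file:
 * the 32-bit field and the two 16-bit fields are byte-swapped, the last 8 bytes
 * stay as they are. */
static void guid_spec_to_file(const uint8_t spec[16], uint8_t file[16])
{
    file[0] = spec[3]; file[1] = spec[2]; file[2] = spec[1]; file[3] = spec[0];
    file[4] = spec[5]; file[5] = spec[4];
    file[6] = spec[7]; file[7] = spec[6];
    for (int i = 8; i < 16; i++)
        file[i] = spec[i];
}
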
My program works now and can list objects, codecs and metadata info, but it ignores Index Objects for the time being. I hope I'll add support for them soon. I also want to print the offset of each object and to read the Data Object more deeply.

I hoped to finish my code for parsing ASF files this week, but it seems unlikely now. I spent the whole week trying to parse ASF data packets.
One packet consists of Error Correction Data -> Payload Parsing Information -> Payload Data -> Padding Data. Error Correction Data and Payload Data are optional. The payload data itself can be either "Single Payload", "Single Payload with Subpayloads", "Multiple Payloads" or "Multiple Payloads with Subpayloads". All these objects contain several flags, so I got to practise using the & operator. While trying to implement packet reading properly, I made a lot of stupid mistakes which slowed down my work.
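
For example, the length-type fields are 2-bit values packed into a flags byte, so reading them is a matter of masking and shifting (the bit positions below are made up for illustration; the real layout is in the spec):

#include <stdint.h>

/* A 2-bit length type says how wide the corresponding length field is:
 * 0 = absent, 1 = 8 bits, 2 = 16 bits, 3 = 32 bits. */
static int length_type_to_bytes(uint8_t flags, int shift)
{
    static const int bytes[4] = { 0, 1, 2, 4 };
    return bytes[(flags >> shift) & 0x3];
}

/* e.g., with an illustrative layout:
 *   padding_len_bytes = length_type_to_bytes(flags, 3);
 *   packet_len_bytes  = length_type_to_bytes(flags, 5);  */
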
At this point, it seems I can detect a packet and print out its offset as well as some information about it. However, deeper packet examination and reading subpayloads don't work correctly yet.
Implementing from the specs can be fun; I enjoyed it.

Finally, I've finished my asfinfo tool, which prints the structure of an ASF file. It takes an ASF file and tells you which objects are inside and the offsets of the particular objects; for some objects it also prints more detailed information. The most interesting information asfinfo can give you is the offset of every packet your file contains.
asfinfo was submitted to the libav-devel mailing list and now I'm addressing the review comments on it. If someone wants to compile it with Libav, it's here: http://patchwork.libav.org/patch/51607/; for compilation, a small patch moving some flags is needed: http://patchwork.libav.org/patch/51610/.
I'll keep working on asfinfo, polishing it according to the Libav developers' comments, but I've already started my main work - writing the new ASF demuxer from scratch. It seems I'll reuse some code from asfinfo, which helped me a lot to understand how to handle an ASF file, mainly how to parse packets.

Thursday, 03 July

Today, I learned how to use framecrc as a debug tool. Many Libav tests use framecrc to compare expected and actual decoding. While rewriting existing code, the output from the old and new versions of the code on the same sample can be checked; this makes a lot of mistakes clear quickly, including ones that can be quite difficult to debug otherwise.

Checking framecrcs interactively is straightforward: ./avconv -i somefile -c:a copy -f framecrc -. The -c:a copy specifies that the original, rather than decoded, packet should be used. The - at the end makes the output go to stdout, rather than a named file.

The output has several columns, for the stream index, dts, pts, duration, packet size, and crc:

0, 0, 0, 192, 2304, 0xbf0a6b45
0, 192, 192, 192, 2304, 0xdd016b78
0, 384, 384, 192, 2304, 0x18da71d6
0, 576, 576, 192, 2304, 0xcf5a6a07
0, 768, 768, 192, 2304, 0x3a84620a

It is also unusually simple to find out what the fields are, as libavformat/framecrcenc.c spells it out quite clearly:

static int framecrc_write_packet(struct AVFormatContext *s, AVPacket *pkt)
{
    uint32_t crc = av_adler32_update(0, pkt->data, pkt->size);
    char buf[256];

    snprintf(buf, sizeof(buf), "%d, %10"PRId64", %10"PRId64", %8d, %8d, 0x%08"PRIx32"\n",
             pkt->stream_index, pkt->dts, pkt->pts, pkt->duration, pkt->size, crc);
    avio_write(s->pb, buf, strlen(buf));
    return 0;
}

Keiler, one of my Libav mentors, patiently explained the above; I hope documenting it helps other people who are starting with Libav development.

Thursday, 12 June

Most recently, I have been adding documentation to Libav. Today, my work included writing a demuxer howto. In the last couple of weeks, I have also reimplemented RealAudio 1.0 support (2.0 is in progress), and learned more about Coccinelle and undefined behavior in C. Blog posts on these topics are pending.

Tuesday, 20 May

My first patch for undefined behavior eliminates left shifts of negative numbers, replacing a << b (where a can be negative) with a * (1 << b). This change fixes bug686, at least for fate-idct8x8 and libavcodec/dct-test -i (compiled with ubsan and fno-sanitize-recover). Due to Libav policy, the next step is to benchmark the change. I was also asked to write a simple benchmarking HowTo for the Libav wiki.
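
As a concrete illustration of the transformation (my own minimal example, not an excerpt from the patch):

#include <stdint.h>

/* Left-shifting a negative value is undefined behaviour in C. */
static int32_t scale_shift(int32_t a, int b)
{
    return a << b;          /* UB whenever a < 0 */
}

/* Multiplying by (1 << b) gives the same result for the values we care about,
 * but stays within defined behaviour as long as the product fits in int32_t. */
static int32_t scale_mul(int32_t a, int b)
{
    return a * (1 << b);
}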

First, I installed perf: sudo aptitude install linux-tools-generic
I made two build directories, and built the code with defined behavior in one, and the code with undefined behavior in the other (with ../configure && make -j8 && make fate). Then, in each directory, I ran:

perf stat --repeat 150 ./libavcodec/dct-test -i > /dev/null

The results were somewhat more stable than with --repeat 30, but it still looks much more like noise than a meaningful result. I ran the command with --repeat 30 for both before the recorded 150 run, so both would start on equal footing. With defined behavior, the results were “0.121670022 seconds time elapsed ( +-  0.11% )”; with undefined behavior, “0.123038640 seconds time elapsed ( +-  0.15% )”. The best of a further three runs had the opposite result, shown below:

% cat undef.150.best

perf stat --repeat 150 ./libavcodec/dct-test -i > /dev/null

Performance counter stats for ‘./libavcodec/dct-test -i’ (150 runs):

120.427535 task-clock (msec) # 0.997 CPUs utilized ( +- 0.11% )
21 context-switches # 0.178 K/sec ( +- 1.88% )
0 cpu-migrations # 0.000 K/sec ( +-100.00% )
226 page-faults # 0.002 M/sec ( +- 0.01% )
455’393’772 cycles # 3.781 GHz ( +- 0.05% )
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend
1’306’169’698 instructions # 2.87 insns per cycle ( +- 0.00% )
89’674’090 branches # 744.631 M/sec ( +- 0.00% )
1’144’351 branch-misses # 1.28% of all branches ( +- 0.18% )

0.120741498 seconds time elapsed

% cat def.150.best

Performance counter stats for ‘./libavcodec/dct-test -i’ (150 runs):

120.838976 task-clock (msec) # 0.997 CPUs utilized ( +- 0.11% )
21 context-switches # 0.172 K/sec ( +- 1.98% )
0 cpu-migrations # 0.000 K/sec
226 page-faults # 0.002 M/sec ( +- 0.01% )
457’077’626 cycles # 3.783 GHz ( +- 0.08% )
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend
1’306’321’521 instructions # 2.86 insns per cycle ( +- 0.00% )
89’673’780 branches # 742.093 M/sec ( +- 0.00% )
1’148’393 branch-misses # 1.28% of all branches ( +- 0.11% )

0.121162660 seconds time elapsed ( +- 0.11% )

I also compared the disassembled code from jrevdct.o, before and after the changes to have defined behavior (using gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2 on x86_64).

In the build directory for the code with defined behavior:
objdump -d libavcodec/jrevdct.o > def.dis
sed -e 's/^.*://' def.dis > noline.def.dis

In the build directory for the code with undefined behavior:
objdump -d libavcodec/jrevdct.o > undef.dis
sed -e 's/^.*://' undef.dis > noline.undef.dis

Leaving aside differences in jump locations (even though they can impact performance), there are two differences:

diff -u build_benchmark_undef/noline.undef.dis build_benchmark_def/noline.def.dis

-       0f bf 50 f0             movswl -0x10(%rax),%edx
+       0f b7 58 f0             movzwl -0x10(%rax),%ebx

It’s switched to using a zero-extension rather than sign-extension in one place.

-       74 1c                   je     40 <ff_j_rev_dct+0x40>
-       c1 e2 02                shl    $0x2,%edx
-       0f bf d2                movswl %dx,%edx
-       89 d1                   mov    %edx,%ecx
-       0f b7 d2                movzwl %dx,%edx
-       c1 e1 10                shl    $0x10,%ecx
-       09 d1                   or     %edx,%ecx
-       89 48 f0                mov    %ecx,-0x10(%rax)
-       89 48 f4                mov    %ecx,-0xc(%rax)
-       89 48 f8                mov    %ecx,-0x8(%rax)
-       89 48 fc                mov    %ecx,-0x4(%rax)
+       74 19                   je     3d <ff_j_rev_dct+0x3d>
+       c1 e3 02                shl    $0x2,%ebx
+       89 da                   mov    %ebx,%edx
+       0f b7 db                movzwl %bx,%ebx
+       c1 e2 10                shl    $0x10,%edx
+       09 da                   or     %ebx,%edx
+       89 50 f0                mov    %edx,-0x10(%rax)
+       89 50 f4                mov    %edx,-0xc(%rax)
+       89 50 f8                mov    %edx,-0x8(%rax)
+       89 50 fc                mov    %edx,-0x4(%rax)

Leaving aside differences in register use:

-       0f bf d2                movswl %dx,%edx
There is one extra movswl instruction in the version with undefined behavior, at least with the particular version of the particular compiler for the particular architecture checked.

This is an example of a null result while benchmarking; neither version performs better, although any given benchmark has one or the other come out ahead, generally by less than the variance within the run. If this were a suggested performance change, it would not make sense to apply it. However, the point of this change was correctness; a performance increase is not expected, and the lack of a performance penalty is a bonus.

Monday, 19 May

One of my fantastic OPW mentors prepared a “Welcome task package”, of self-contained, approachable, useful tasks that can be done while getting used to the code, and with a much smaller scope than the core objective. This is awesome. To any mentors reading this: consider making a welcome package!

Step one of it is to use ubsan with gdb. This turned out to be somewhat intricate, so I have decided to supplement the wiki’s documentation with a step-by-step guide for Ubuntu 14.04.

1) Install clang-3.5 (sudo aptitude install clang-3.5), as Ubuntu 14.04 comes with gcc 4.8, which does not support -fsanitize=undefined.

2) Under libav, mkdir build_ubsan && cd build_ubsan && ../configure --toolchain=clang-usan --extra-cflags=-fno-sanitize-recover (alternatively, --cc=clang --extra-cflags=-fsanitize=undefined --extra-ldflags=-fsanitize=undefined can be used instead of --toolchain=clang-usan).

3) make -j8 && make fate

4) Watch where the tests die (they only die if --extra-cflags=-fno-sanitize-recover is used). For me, they died on TEST idct8x8. Running make V=1 fate and asking my mentors pointed me towards libavcodec/dct-test -i, which is dying on jrevdct.c:310:47: with “runtime error: left shift of negative value -14”. If you really want to err on the side of caution, make a second build dir, and ./configure --cc=clang && make -j8 && make fate in it, making sure it does not fail… this confirms that the problem is related to configuring with --toolchain=clang-usan (and, it turns out, with -fsanitize=undefined).

5) It’s time to use the information my mentor pointed out on the wiki about ubsan at https://wiki.libav.org/Security/Tools  – specifically, the information about useful gdb breakpoints. I put a modified version of the b_u definitions into ~/.gdbinit. The wiki has been updated now, but was originally missing a few functions, including one that turns out to be relevant: __ubsan_handle_shift_out_of_bounds

6) Run gdb ./libavcodec/dct-test, then at the gdb prompt, set args -i to set the arguments dct-test was being run with, and then b_u to load the ubsan breakpoints defined above. Then start the program: type run at the gdb prompt.

7) It turns out that a problem can be found, and the program stops running. Get a backtrace with bt.

680 in __ubsan_handle_shift_out_of_bounds ()
#1  0x000000000048ac96 in __ubsan_handle_shift_out_of_bounds_abort ()
#2  0x000000000042c074 in row_fdct_8 (data=<optimized out>) at /home/me/opw/libav/libavcodec/jfdctint_template.c:219
#3  ff_jpeg_fdct_islow_8 (data=<optimized out>) at /home/me/opw/libav/libavcodec/jfdctint_template.c:273
#4  0x0000000000425c46 in dct_error (dct=<optimized out>, test=<optimized out>, is_idct=<optimized out>, speed=<optimized out>) at /home/me/opw/libav/libavcodec/dct-test.c:246
#5  main (argc=<optimized out>, argv=<optimized out>) at /home/me/opw/libav/libavcodec/dct-test.c:522

It would be nice to see a bit more detail, so I wanted to compile the project so that less would be optimized out, and eventually settled on -O1 because compiling with ubsan and without optimizations failed (which I reported as bug 683). This led to a slightly better backtrace:

#0  0x0000000000491a70 in __ubsan_handle_shift_out_of_bounds ()
#1  0x0000000000492086 in __ubsan_handle_shift_out_of_bounds_abort ()
#2  0x0000000000434dfb in ff_j_rev_dct (data=<optimized out>) at /home/me/opw/libav/libavcodec/jrevdct.c:275
#3  0x00000000004258eb in dct_error (dct=0x4962b0 <idct_tab+64>, test=1, is_idct=1, speed=0) at /home/me/opw/libav/libavcodec/dct-test.c:246
#4  0x00000000004251cc in main (argc=<optimized out>, argv=<optimized out>) at /home/me/opw/libav/libavcodec/dct-test.c:522

It is possible to work around the problem by modifying the source code rather than the compiler flags: FFmpeg did so within hours of the bug report – the commit is at http://git.videolan.org/?p=ffmpeg.git;a=commit;h=bebce653e5601ceafa004db0eb6b2c7d4d16f0c0 ! Both FFmpeg and Libav have also merged my patch to work around the problem (FFmpeg patch, Libav patch). The workaround of using -O1 was suggested by one of my mentors, lu_zero; --disable-optimizations does not actually disable all optimizations (in practice, it leaves in ones necessary for compilation), and it does not touch the -O1 that --toolchain=clang-usan now sets.

Wanting a better backtrace leads to the next post: a detailed guide to narrowing down a bug in the C compiler, Clang. Yes, I know, the problem is never a bug in the C compiler; but this time, it was.

Thursday, 15 May

What’s the fun of only running code on platforms you physically have? Portability is important, and Libav actively targets several platforms. It can be useful to be able to try out the code, even if the hardware is totally unavailable.

Here is how to run Libav’s tests under aarch64, on x86_64 hardware and Ubuntu 14.04. This guide is provided in the hope that it saves someone else 20 hours or more: there is a lot of once-excellent information which has become misleading, because a lot of progress has been made in aarch64 support. I have tried three approaches: cross-compiling with Linaro’s toolchain, building under QEMU user emulation, and building under QEMU system emulation. Building with a cross-compiler is the fastest option. Building under user emulation is about ten times slower. Building under system emulation is about a hundred times slower. There is actually a fourth option, using the ARM Foundation Model, but I have not tried it. Running under QEMU user emulation is the only approach I managed to make work entirely.

For all three approaches, you will want a rootfs; I used Ubuntu Core. You can download Ubuntu Core for aarch64 (a minimal rootfs; see https://wiki.ubuntu.com/Core to learn more),  and untar it (as root) into a new directory. Then, set an environment variable that the rest of this guide/set of notes uses frequently, changing the path to match your system:

export a64root=/path/to/your/aarch64/rootdir

Approach 1 – build under QEMU’s user emulation.

Step 1) Set up QEMU. The days when using SUSE branches were necessary are over, but it still needs to be statically linked, and not all QEMU packages are. Ubuntu has a static QEMU:

sudo aptitude install qemu-user-static

This package also sets up binfmt for you. You can delete broken or stale binfmt information by running:
echo -1 > /proc/sys/fs/binfmt_misc/archnamehere – this can be useful, especially if you have previously installed QEMU by hand.

Step 2) Copy your QEMU binary into the chroot, as root, with:

cp `which qemu-aarch64-static` $a64root/usr/bin/

Step 3) As root, set up the aarch64 image so it can do DNS resolution, so you can freely use apt-get:
echo 'nameserver 8.8.8.8' > $a64root/etc/resolv.conf
(the 8.8.8.8 here is just an example; any reachable DNS server will do)

Step 4) Chroot into your new system. Run chroot $a64root /bin/bash as root.

At this point, you should be able to run an aarch64 version of ls, and confirm with file /bin/ls that it is an aarch64 binary.

Now you have a working, emulated, minimal aarch64 system.

On x86, you would run aptitude build-dep libav, but there is no such package for aarch64 yet, so outside of the chroot, on the normal system, I installed apt-rdepends and ran:
apt-rdepends --build-depends --follow=DEPENDS libav

With version information stripped out, the following packages are considered dependencies:
debhelper frei0r-plugins-dev libasound2-dev libbz2-dev libcdio-cdda-dev libcdio-dev libcdio-paranoia-dev libdc1394-22-dev libfreetype6-dev  libgnutls-dev libgsm1-dev libjack-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libopenjpeg-dev libopus-dev libpulse-dev libraw1394-dev librtmp-dev libschroedinger-dev libsdl1.2-dev libspeex-dev libtheora-dev libtiff-dev libtiff5-dev libva-dev libvdpau-dev libvo-aacenc-dev libvo-amrwbenc-dev libvorbis-dev libvpx-dev libx11-dev libx264-dev libxext-dev libxfixes-dev libxvidcore-dev libxvmc-dev texi2html yasm zlib1g-dev doxygen

Many of the libraries do not have current aarch64 Ubuntu packages, and neither does frei0r-plugins-dev, but running aptitude install on the above list installs a lot of useful things – including build-essential. The full list is in the command below; the missing packages are non-essential.

Step 5) Set it up: apt-get install aptitude

aptitude install git debhelper frei0r-plugins-dev libasound2-dev libbz2-dev libcdio-cdda-dev libcdio-dev libcdio-paranoia-dev libdc1394-22-dev libfreetype6-dev  libgnutls-dev libgsm1-dev libjack-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libopenjpeg-dev libopus-dev libpulse-dev libraw1394-dev librtmp-dev libschroedinger-dev libsdl1.2-dev libspeex-dev libtheora-dev libtiff-dev libtiff5-dev libva-dev libvdpau-dev libvo-aacenc-dev libvo-amrwbenc-dev libvorbis-dev libvpx-dev libx11-dev libx264-dev libxext-dev libxfixes-dev libxvidcore-dev libxvmc-dev texi2html yasm zlib1g-dev doxygen

Now it is time to actually build libav.

Step 6) Create a user within your chroot: useradd -m auser, and switch to running as that user: sudo -u auser bash, and type cd to go to the home directory.

Step 7) Run git clone git://git.libav.org/libav.git, then ./configure --disable-pthreads && make -j8 (change the 8 to approximately the number of CPU cores you have).
On my hardware, this takes 10-11 minutes, and ‘make fate’ takes about 16. Disabling pthreads is essential, as qemu-user does not handle threads well, and running the tests hangs randomly without it.

Approach 2: cross-compile (warning: I do not have the tests working with this approach).

1) Start by getting an aarch64 compiler. A good place to get one is http://releases.linaro.org/latest/components/toolchain/binaries/; I am using http://releases.linaro.org/latest/components/toolchain/binaries/gcc-linaro-aarch64-linux-gnu-4.8-2014.04_linux.tar.xz . Untar it, and add it to your path:

export PATH=$PATH:/path/to/your/linaro/tools/bin

2) Make the cross-compiler work. Run aptitude install lsb lib32stdc++6. Without this, invoking the compiler will say “No such file or directory”. See http://lists.linaro.org/pipermail/linaro-toolchain/2012-January/002016.html.

3) Under the libav directory (run git clone git://git.libav.org/libav.git if you do not have one), type mkdir a64crossbuild; cd a64crossbuild. Make sure the libav directory is somewhere under $a64root (it should simplify running the tests, later).

4)./configure --arch=aarch64 --cpu=generic --cross-prefix=aarch64-linux-gnu- --cc=aarch64-linux-gnu-gcc --target-os=linux --sysroot=$a64root --target-exec="qemu-aarch64-static -L $a64root" --disable-pthreads

This is a minimal variant of the configuration used by Jannau, a developer who has recently done a lot of libav aarch64 work.

5) Run make -j8. On my hardware, it takes just under a minute.

6) Run make fate. Unfortunately, both versions of QEMU I tried hung on wait4 at this point (in fft-test, fate-fft-4), and used an extra couple of hundred megabytes of RAM per second until I stopped QEMU, even if I asked it to wait for a remote GDB. For anyone else trying this, https://lists.libav.org/pipermail/libav-devel/2014-May/059584.html has several useful tips for getting the tests to run after cross-compilation.

Approach 3: Use QEMU’s system emulation. In theory, this should allow you to use pthreads; in practice, the tests hung for me. The following May 9th post describes what to do: http://www.bennee.com/~alex/blog/2014/05/09/running-linux-in-qemus-aarch64-system-emulation-mode/. In short: git clone git://git.qemu.org/qemu.git qemu.git && cd qemu.git && ./configure --target-list=aarch64-softmmu && make, then

./aarch64-softmmu/qemu-system-aarch64 -machine virt -cpu cortex-a57 -machine type=virt -nographic -smp 1 -m 2048 -kernel aarch64-linux-3.15rc2-buildroot.img  --append "console=ttyAMA0" -fsdev local,id=r,path=$a64root,security_model=none -device virtio-9p-device,fsdev=r,mount_tag=r

Then, under the buildroot system, log in as root (no password), and type mkdir /mnt/core && mount -t 9p -o trans=virtio r /mnt/core. At this point, you can run chroot /mnt/core /bin/bash, and follow the approach 1 instructions from useradd onwards, except that ./configure without --disable-pthreads should theoretically work. On my system, ./configure takes a bit over 5 minutes with this approach. Running make is quite slow; time make took 113 minutes. Do not use -j; you are limited to a single core, so -j would slow compilation down slightly. However, make fate consistently hung on acodec-pcm-alaw, and I have not yet figured out why.


Things not to do:

  • Use a rootfs from a year ago; I have yet to try one that is not broken, and some come with fun bonuses like infinite file system loops. These cost me well over a dozen hours.
  • Compile SUSE’s QEMU; qemu-system is bleeding-edge enough that you need to compile it from upstream, but SUSE’s patches have long been merged into the normal QEMU upstream. Unless you want qemu-system, you do not need to compile QEMU at all under Ubuntu 14.04.
  • Leave the environment variables in this tutorial unset in a new shell and wonder why things do not work.


Wednesday, 23 April

Applying to OPW requires an initial contribution. The Libav IRC channel suggested porting the asettb filter from FFmpeg, so I did (version 5 of the patch was merged upstream, in two parts: a rename patch and a content patch; the FFmpeg author was credited as author for the latter, while I did a signed-off-by). I also contributed a 3000+ line documentation patch, standardizing the libavfilter documentation and removing numerous English errors, and triaged a few bugs, git bisecting the one that was reproducible.

Sunday, 13 April

And how it nearly ruined another video coding standard.

Everyone knows that interlacing was a trick in the '80s for pseudo motion compensation with analogue video. This more or less worked because it mimicked how television worked back then. This technique was preserved when flat panels for pc and tv were introduced, for a mix of backward compatibility and technical limitations, and video coding standards such as MPEG2 and H264 still feature interlacing.

However, as with black and white, TACS and Gopher, old technology has to be replaced with modern and efficient technology, as a trade-off between users' interests and technology providers' market prospects. In case you are not familiar with it, interlacing is a mess to support: it makes decoding slower and heavily degrades quality. People saying that interlacing saves bandwidth do not know much about video coding, and bad marketing claiming that higher resolution is better than higher framerate has an effect too.

So, when ITU and then MPEG set out to establish the mandates for a new video standard capable of superseding H264, it was decided that interlacing was old enough, did more harm than good and it was time for retirement: HEVC was going to be the first video codec to officially deprecate interlacing.

Things went pretty swell during its development, until a few months before the completion of the standard. A group of US companies complained that the proposed tools were not sufficient (a set of SEI messages and treating fields like progressive frames) and heavily protested with both standardisation bodies. ITU firmly rejected the idea (with the video group chair threatening to step down) while MPEG set out to understand the needs of the industry and see if there was anything that could be done.

An ad-hoc group was established to see if there was any evidence that interlaced coding tools would have improved the situation. Things looked really shady; the Requirements group even mentioned that it was the first time an AhG was established to look for evidence, instead of establishing an AhG because there was evidence. Several liaisons from EBU and other DVB members tried to point out this absurdity while the threat of adding interlacing back into HEVC became real. Luckily the first version of the specification got published in the meantime, so this decision didn't slow down the standardisation process.

Why so much love towards interlacing? Well in the "rebellious" group defence, it is true that interlaced content in HEVC is less performant than in H264; however it is also true that such deinterlaced content in HEVC outperforms H264 in any configuration. Truth is that mass marketed deinterlacers (commonly found in televisions for example) bring a lot of royalty income, so it is normal that companies with vested interests would prefer to have interlacing in a soon-popular video standard like HEVC. Also in markets like US where the network operator (which has control on the encoding but not on the video source) might differ from the content provider, it could be politically difficult to act as a carrier only if you have to deinterlace a video.

However these problems are actually not enough for forcing every encoder, decoder, analyser to support a deprecated technology like interlacing. Technical problems can be solved with good deinterlacers at the top of the distribution chain, while political ones can be solved amending contracts. Plus having progressive only video will definitely improve quality and let the industry concentrate on other delicate subjects, like bit depth, both properties going in favour of users' interests.

At the last MPEG meeting, the "rebellious" group which had been working on reintroducing interlacing for a year provided no real evidence that interlaced coding tools would improve HEVC at all. The only sensible solution was to disband the group over this wasted effort and support progressive video only, which is what happened luckily. So now both ITU and MPEG support progressive video only and this has finally nailed it.

Interlacing is dead, long live progressive.

Written by Vittorio Giovara (projectsymphony@gmail.com)
Published under a CC-BY-SA 3.0 license.

Tuesday, 25 March

I am very glad to announce that Libav 10 has been released!

This has a bunch of features that I contributed to, in particular regarding stereoscopic video and interlaced filtering, but more importantly this release has the work of an awesome group of people which has been carried out for a whole year. This is the magic of open source!

I joined the group more or less one year ago, with some patches regarding an obscure H.264 specification which I then later reimplemented in HEVC and then I wrote a few filters I needed and then designed an API and then, wow! A whole year passed without me noticing, and I am still around, sending patches to the same group of people who welcomed someone who had problems with shifting values (sad but true story)!

I met the team both at VDD and FOSDEM and they've been the most exciting conferences I ever went to (and I went to a lot of them). I couldn't believe I was with the devteam of my favourite multimedia opensource projects, which I've been following since I was a kid! Until a year ago, I saw the names from the commits and the blogposts from both the VideoLAN and Libav projects and I had been thinking "Oh wouldn't it be so cool to be like one of them".

The answer is yes, it definitely would, and it's something that can happen if one is really committed in it! The Libav Info page states "Being a committer is a duty, not a privilege", but it sure does feel like one.

Thanks for this exciting year guys, I look forward to the next ones.

Monday, 24 March

...using latest modern tools!

X264 and VLC are two of the awesomest pieces of opensource software you can find on-line, and of course they pose no problem when you compile them in a Unix environment. Too bad that sometimes you need to think of Windowze as well, so we need a way to crosscompile that software: in this blogpost, I'll describe how to achieve that, using modern tools on an Ubuntu 12.04 installation.

[0] Sources
It goes without saying that without the following guides, I'd have had a much harder time!
So a big thanks to all the original authors!

[1] Introduction
When you crosscompile you just use the same tools and toolchains that you are used to, gcc, ld and so on, but configured (and compiled) so that they produce executable code for a different platform. This platform can vary both in software and in hardware and it is usually identified by a triplet: the processor architecture, the ABI and the operating system.

What we are going to use here is i686-w64-mingw32, which identifies any x86 cpu since the Pentium III, the w64 ABI used on modern Windows NT systems (if I'm not wrong), and mingw32, the Windows gcc variant, as the operating system part.

[2] Prerequisites
Note that the name of the packages might be slightly different according to your distribution. We are going to need a quite recent mingw-runtime for VLC (>=3.00) which has not yet landed on Ubuntu, so we'll take it from our Debian cousins.

Execute this command

$ wget http://ftp.jp.debian.org/debian/pool/main/m/mingw-w64/mingw-w64-dev_3.0~svn4933-1_all.deb
$ sudo dpkg -i mingw-w64-dev_3.0~svn4933-1_all.deb

and then install stock dependencies

$ sudo apt-get install gcc-mingw-w64 g++-mingw-w64
$ sudo apt-get install pkg-config yasm subversion cvs git-core

[3] x264 and libav 
x264 has very few dependencies, just pthreads and zlib, but it reaches its full potential when all of them are satisfied (encapsulation, avisynth support and so on).

Loosely following Alex Jurkiewicz's work, we create a user-writable folder and then we prepare a script that sets some useful variables every time.

$ mkdir -p ~/win32-cross/{src,lib,include,share,bin}


Save the following as ~/win32-cross/mingw and make it executable (chmod +x mingw); it is the wrapper script invoked as ../../mingw in the commands further down:

#!/bin/sh
TRIPLET=i686-w64-mingw32

export CC=$TRIPLET-gcc
export CXX=$TRIPLET-g++
export CPP=$TRIPLET-cpp
export AR=$TRIPLET-ar
export RANLIB=$TRIPLET-ranlib
export ADD2LINE=$TRIPLET-addr2line
export AS=$TRIPLET-as
export LD=$TRIPLET-ld
export NM=$TRIPLET-nm
export STRIP=$TRIPLET-strip

export PATH="/usr/i586-mingw32msvc/bin:$PATH"
export PKG_CONFIG_PATH="$HOME/win32-cross/lib/pkgconfig/"

export CFLAGS="-static -static-libgcc -static-libstdc++ -I$HOME/win32-cross/include -L$HOME/win32-cross/lib -I/usr/$TRIPLET/include -L/usr/$TRIPLET/lib"

exec "$@"
Please note the use of the CFLAGS variable: without all the static parameters, the executable will dynamically link gcc, so you'll need to bundle the equivalent dll. I prefer to have one single exe, so everything goes static, but I'm not really sure which flag is actually needed. If you have any idea, please drop me a line.

Anyway, let's compile the latest revision of pthreads (2.9.1 as of this writing)

$ cd ~/win32-cross/src
$ wget -qO - ftp://sourceware.org/pub/pthreads-win32/pthreads-w32-2-9-1-release.tar.gz | tar xzvf -
$ cd pthreads-w32-2-9-1-release
$ make GC-static CROSS=i686-w64-mingw32-
$ cp libpthreadGC2.a ../../lib
$ cp *.h ../../include

and zlib (1.2.7) - we need to remove the references to the libc library (which is implied anyway) otherwise we will get a linkage failure

$ cd ~/win32-cross/src
$ wget -qO - http://zlib.net/zlib-1.2.7.tar.gz | tar xzvf -
$ cd zlib-1.2.7
$ ../../mingw ./configure
$ sed -i"" -e 's/-lc//' Makefile
$ make
$ DESTDIR=../.. make install prefix=

Now it's the turn of libav, so that x264 can use different input chromas and other stuff. If you need the libav executables, you might want to change the configure line so that it suits you.

$ cd ~/win32-cross/src
$ git clone git://git.libav.org/libav.git
$ cd libav
$ ./configure \
--target-os=mingw32 --cross-prefix=i686-w64-mingw32- --arch=x86 --prefix=../.. \
--enable-memalign-hack --enable-gpl --enable-avisynth --enable-runtime-cpudetect \
--disable-encoders --disable-muxers --disable-network --disable-devices
$ make
$ make install

and the nice tools that give more output options

$ cd ~/win32-cross/src
$ svn checkout http://ffmpegsource.googlecode.com/svn/trunk/ ffms
$ cd ffms
$ ../../mingw ./configure --host=mingw32 --with-zlib=../.. --prefix=$HOME/win32-cross
$ ../../mingw make
$ make install

$ cd ~/win32-cross/src
# Create a CVS auth file on your machine
$ cvs -d:pserver:anonymous@gpac.cvs.sourceforge.net:/cvsroot/gpac login
$ cvs -z3 -d:pserver:anonymous@gpac.cvs.sourceforge.net:/cvsroot/gpac co -P gpac
$ cd gpac
$ chmod +rwx configure src/Makefile
# Hardcode cross-prefix
$ sed -i'' -e 's/cross_prefix=""/cross_prefix="i686-w64-mingw32-"/' configure
$ ../../mingw ./configure --static --use-js=no --use-ft=no --use-jpeg=no \
      --use-png=no --use-faad=no --use-mad=no --use-xvid=no --use-ffmpeg=no \
      --use-ogg=no --use-vorbis=no --use-theora=no --use-openjpeg=no \
      --disable-ssl --disable-opengl --disable-wx --disable-oss-audio \
      --disable-x11-shm --disable-x11-xv --disable-fragments --use-a52=no \
      --disable-xmlrpc --disable-dvb --disable-alsa --static-mp4box \
      --extra-cflags="-I$HOME/win32-cross/include -I/usr/i686-w64-mingw32/include" \
      --extra-ldflags="-L$HOME/win32-cross/lib -L/usr/i686-w64-mingw32/lib"
# Fix pthread lib name
$ sed -i"" -e 's/pthread/pthreadGC2/' config.mak
# Add extra libs that are required but not included
$ sed -i"" -e 's/-lpthreadGC2/-lpthreadGC2 -lwinmm -lwsock32 -lopengl32 -lglu32/' config.mak
$ make
# Make will fail a few commands after building libgpac_static.a
# (i586-mingw32msvc-ar cr ../bin/gcc/libgpac_static.a ...).
# That's fine, we just need libgpac_static.a 
$ i686-w64-mingw32-ranlib bin/gcc/libgpac_static.a
$ cp bin/gcc/libgpac_static.a ../../lib/
$ cp -r include/gpac ../../include/

Finally we can compile x264 at full power! The configure script will provide a list of what features have been activated, make sure everything you need is there!

$ cd ~/win32-cross/src
$ git clone git://git.videolan.org/x264.git
$ cd x264
$ ./configure --cross-prefix=i686-w64-mingw32- --host=i686-w64-mingw32 \
      --extra-cflags="-static -static-libgcc -static-libstdc++ -I$HOME/win32-cross/include" \
      --extra-ldflags="-static -static-libgcc -static-libstdc++ -L$HOME/win32-cross/lib"
$ make

And you're done! Take that x264.exe file and use it wherever you want!
Most of the work here has been outlined by Alex Jurkiewicz in this guide, so check out his blog for more nice guides!

[4] VideoLAN
On the other hand, VLC has a LOT of dependencies, but thankfully it also has a nice way to get them working quickly. If you read the wiki guide, you'll notice that it uses i586-mingw32msvc everywhere, but you should definitely avoid that! In fact that one offers a very old toolchain, under which VLC will fail to compile! Also the latest versions produce much better code; x264 weighs 46MB against 38MB in one case!

So let's update every script to the more modern version i686-w64-mingw32! As usual, first of all get the sources

$ git clone git://git.videolan.org/vlc.git vlc
$ cd vlc 
And let's get the dependencies through the contrib scripts. qt4 needs to be compiled by hand, as the version in the Ubuntu repositories doesn't cope well with the rest of the process. I also had to remove some of the files because they were of the wrong architecture (mileage may vary here).

$ mkdir -p contrib/win32
$ cd contrib/win32
$ ../bootstrap --host=i686-w64-mingw32
$ make prebuilt
$ make .qt4
$ rm ../i686-w64-mingw32/bin/{moc,uic,rcc}
$ cd -

We now return to the main source folder and launch the bootstrap and configure process; you need some standard automake/libtool dependencies for this.

$ ./bootstrap
$ mkdir win32 && cd win32
$ ../extras/package/win32/configure.sh --host=i686-w64-mingw32
$ ./compile
$ make package-win-common

Let's grab something to drink and celebrate when the compilation ends! You'll find all the necessary files in the vlc-x.x.x folder. A big thanks goes to the wiki authors and j-b who gave me pointers on #videolan irc.

[5] Conclusions
Whelp, that was a long run! As an additional benefit, you are able to customize every single piece of software to your needs, e.g. you can modify the libav version that you are going to use for VLC as you wish! Also, crosscompiling is often treated as black magic, but in reality it is a simple process that just needs more careful configuration. Errors are often related to wrong paths or missing dependencies, and sometimes a combination of both; don't lose hope and keep going until you get what you want!

For future reference, all (or most of the) functions and structs in libav have a prefix that indicates the exposure of that function. Those are

  • av_ meaning a public function, present in the API;
  • ff_ meaning a private function, not present in the API;
  • avpriv_ meaning inter-library private function, used internally across libraries only.
Source: #libav-devel
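
For instance (the examples here are my own picks, not from the IRC discussion): av_malloc() is public API that applications may call, ff_get_buffer() is internal to libavcodec, and avpriv_set_pts_info() is shared between the libav* libraries without being part of the public API.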

Friday, 13 January

Well, I've finished the new audio decoding API, which has been merged into Libav master. The new audio encoding API is basically done, pending a (hopefully final) round of review before committing.

Next up is audio timestamp fixes/clean-up. This is a fairly undefined task. I've been collecting a list of various things that need to be fixed and ideas to try. Plus, the audio encoding API revealed quite a few bugs in some of the demuxers. Today I started a sort of TODO list for this stage of the project. I'll be editing it as the project continues to progress.

Friday, 28 October

For the past few weeks I've been working on a new project sponsored by FFMTech. The entire project involves reworking much of the existing audio framework in libavcodec.

Part 1 is changing the audio decoding API to match the video decoding API. Currently the audio decoders take packet data from an AVPacket and decode it directly to a sample buffer supplied by the user. The video decoders take packet data from an AVPacket and decode it to an AVFrame structure with a buffer allocated by AVCodecContext.get_buffer(). My project will include modifying the audio decoding API to decode audio from an AVPacket to an AVFrame, as is done with video.
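
As a sketch of where this is heading (this mirrors the video pattern; the function name below is the one that eventually landed, avcodec_decode_audio4, and avctx/pkt are assumed to be set up already, so treat it as an illustration rather than part of this post):

AVFrame *frame = avcodec_alloc_frame();
int got_frame  = 0;

/* New-style audio decoding: the decoder fills an AVFrame whose buffer comes
 * from AVCodecContext.get_buffer(), just like the video path. */
int ret = avcodec_decode_audio4(avctx, frame, &got_frame, &pkt);
if (ret >= 0 && got_frame) {
    /* frame->nb_samples samples per channel are now in frame->data[] */
}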

AVCODEC_MAX_AUDIO_FRAME_SIZE puts an arbitrary limit on the amount of audio data returned by the decoder. For example, each FLAC frame can hold up to 65536 samples for 8 channels at 32-bit sample depth, which is 2097152 bytes of raw audio, but AVCODEC_MAX_AUDIO_FRAME_SIZE is only 192000. Using get/release_buffer() for audio decoding will solve this problem. It will, however, require changes to every audio decoder. Most of those changes are trivial since the frame size is known prior to decoding the frame or is easily parsed. Some of the changes are more intrusive due to having to determine the frame size prior to allocating and writing to the output buffer.

As part of the preparation for the new API, I have been cleaning up all the audio decoders, which has been quite tedious. I've found some pretty surprising bugs along the way. I'm getting close to finishing that part, so I'll be able to move on to implementing the new API in each decoder.

Wednesday, 27 July

So, I've moved on from AHT now, and it's on to Spectral Extension (SPX).  I got the full syntax working yesterday, now I just need to figure out how to calculate all the parameters.  I have a feeling this will help quality quite a bit, especially when used in conjunction with variable bandwidth/coupling.  My vision for automatic bandwidth adjustment is starting to come together.

SPX encoding/decoding is fairly straightforward, so I expect this won't take too long to implement.  Similar to channel coupling, the encoder writes coarsely banded scale factors for frequencies above the fully-encoded bandwidth, along with noise blending factors.  The decoder copies lower frequency coefficients to the upper bands, multiplies them by the scale factors, and blends them with noise (which has been scaled according to the band energy and the blending factors in the bitstream).  For the encoder, I just need to make the reconstructed coefficients match the original coefficients as closely as possible by calculating appropriate spectral extension coordinates and blending factors.  Also, like coupling coordinates, the encoder can choose how often to resend the parameters to balance accuracy vs. bitrate.
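
In decoder terms, the reconstruction of one extension band looks roughly like this (my own sketch with made-up variable and helper names, not the actual decoder code):

/* Copy a low-frequency coefficient, scale it, and blend it with noise that has
 * been scaled to the band energy, using the blend factors from the bitstream. */
for (int i = 0; i < band_size; i++) {
    float copied = coeffs[low_band_start + i];
    float noise  = band_energy * random_noise();

    coeffs[ext_band_start + i] =
        scale_factor * (signal_blend * copied + noise_blend * noise);
}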

Once SPX encoding is working properly, I'll revisit variable bandwidth.  However, instead of adjusting the upper cutoff frequency (which is somewhat complex to avoid very audible attack/decay), it will adjust the channel coupling and/or spectral extension ranges to keep the cutoff frequency constant while still adjusting to changes in signal complexity to keep a more stable quality level at a constant bitrate.  This could also be used in a VBR mode with constrained bitrate limits.

If you want to follow the development, I have a separate branch at my Libav github repository.

I finally got the complete AHT syntax working properly.  Unfortunately, the quality seems to be lower at all bitrates than with the normal AC-3 quantization.  I'm hoping that I just need to pick better gain values, but I have a suspicion that some of the difference is related to vector quantization, which the encoder has no control over (a basic 6-dimensional VQ minimum distance search is the best it can do).

My first step is to find out for sure if choosing better gain values will help.  One problem is that the bit allocation model is saying we need X number of bits for each mantissa.  Using mode=0 (all zero gains) gives exactly X number of bits per mantissa (with no overhead for encoding the gain values), but the overall quality is lower than with normal AC-3 quantization or even GAQ with simplistic mode/gain decisions.  So I think that means there is some bias built-in to the AHT bit allocation table that assumes GAQ will appropriately fine-tune the final allocations.  Additionally, it could be that AHT should not always be turned on when the exponents are reused in blocks 1 through 5 (the condition required to use AHT).  This is probably the point where I need a more accurate bit allocation model...

edit: After analyzing the bit allocation tables for AC-3 vs. E-AC-3, it seems there is no built-in bias in the GAQ range.  They are nearly identical.  So the difference is clearly in VQ.  Next step, try a direct comparison of quantized mantissas using VQ vs. linear quantization and consider that in the AHT mode decision.

edit2: dct+VQ is nearly always worse than linear quantization...  I also tried turning AHT off for a channel if the quantization difference was over a certain threshold, but as the threshold approached zero, the quality approached that with AHT turned off.  I don't know what to do at this point... *sigh*

note: analysis of a commercial E-AC-3 sample using AHT shows that AHT is always turned on when the exponent strategy allows it.

edit3: It turns out that the majority of the quality difference was in the 6-point DCT.  If I turn it off in both the encoder and decoder (but leave the quantization the same) the quality is much better.  I hope it's a bug or too much inaccuracy (it's 25-bit fixed-point) in my implementation...  If not then I'm at another dead-end.

edit4: I'm giving up on AHT for now.  The DCT is definitely correct and is very certainly causing the quality decrease.  If I can get my hands on a source + encoded E-AC-3 file from a commercial encoder that uses AHT then I will revisit this.  Until then, I have nothing to analyze to tell me how using AHT can possibly produce better quality.

Friday, 17 June

Well, I finally got a working E-AC-3 encoder committed to Libav.  The bitstream format does save a few bits here and there, but the overall quality difference is minimal.  However, it will be the starting point for adding more E-AC-3 features that will improve quality.

The first feature I completed was support for higher bit rates.  This is done in E-AC-3 by using fewer blocks per frame.  A normal AC-3 frame has 6 blocks of 256 samples each, but E-AC-3 can reduce that to 1, 2, or 3 blocks.  This way a small range can be used for the per-frame bit rate while still allowing the per-second bit rate to increase.  For example, 5.1-channel E-AC-3 content on HD-DVDs was typically encoded at 1536 kbps using 1 block per frame.
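
To put rough numbers on that (my own back-of-the-envelope arithmetic, assuming 48 kHz audio): a full 6-block frame covers 1536 samples, i.e. 32 ms, so 1536 kbps would require 49152 bits (6144 bytes) per frame, more than the frame-size field can express; with 1 block per frame the same 1536 kbps needs only 8192 bits (1024 bytes) per frame, which fits comfortably.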

Currently I am working on implementing AHT (adaptive hybrid transform).  The AHT process uses a 6-point DCT on each coefficient across the 6 blocks in the frame.  It basically uses the normal AC-3 bit allocation process to determine quantization of each DCT-processed "pre-mantissa" but it uses a finer resolution for quantization and different quantization methods.  I have the 6-point DCT working and one of the two quantization methods.  Now I just need to finish the other quantization method and implement mantissa bit counting and bitstream output.
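
To give an idea of the shape of that transform, here is a plain floating-point 6-point DCT-II over the six blocks of one frequency bin (illustrative only; the actual AHT transform is specified in fixed point with its own scaling):

#include <math.h>

/* in[n]  = mantissa of one frequency bin in block n (n = 0..5)
 * out[k] = the six "pre-mantissas" produced by the transform */
static void dct6(const double in[6], double out[6])
{
    for (int k = 0; k < 6; k++) {
        double sum = 0.0;
        for (int n = 0; n < 6; n++)
            sum += in[n] * cos(M_PI / 6.0 * (n + 0.5) * k);
        out[k] = sum;
    }
}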

