Sunday, 14 October

Let’s start with a bit of history, since knowing how things developed often helps to understand how they ended up the way they are.

There is an organisation previously called CCITT (phone line modem owners might remember it), later renamed to ITU-T. It is known for standardisation and for accepting various standards under the same confusing name: e.g. PCM and A-law/mu-law quantisation are the G.711 recommendation from 1972, while G.711.0 is a lossless audio compression scheme from 2009 and G.711.1 is a weird extension from 2008 that splits audio into two bands, compresses the low band with A- or mu-law and uses MDCT and vector quantisation on the top band.

And there is also a “family” of G.722 speech codecs: the basic G.722 that splits audio into subbands and applies ADPCM to them; G.722.1, a completely different parametric bit allocation, VQ and MDCT codec that we’ll discuss below; and G.722.2, a traditional speech codec better known as AMR-WB.

So, what’s the deal with G.722.1? It comes from the PictureTel family of Siren codecs (which later served as a base for G.719 too). As I mentioned before, this codec employs MDCT, vector quantisation and parametric bit allocation. So you decode an envelope defined by quantisers, allocate bits to bands depending on those (no, it’s not a 1:1 mapping), unpack bands that are coded using vector quantisation dependent on the amount of bits, and perform MDCT on them. You might not be familiar with it, but this is exactly how a certain RealAudio codec works. And I don’t think you can guess its name even if I mention that it was written by Ken Cooke. But you cannot say nothing was changed: the RealAudio codec works with different frame sizes (from 32 to 1024 IIRC), it has different codebooks, it has a joint stereo mode and finally it has a multichannel coding mode based on pairs. In other words, it evolved from a niche speech codec into a general purpose audio codec rivalling AAC, and it was indeed the codec of choice for RealMedia before they finally switched to AAC and HE-AAC years later (which was the first time they used an open standard verbatim instead of licensing a proprietary technology or adding their own touches to standard drafts as before—even DNET had a special low-bitrate mode).
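To make the idea of parametric bit allocation more concrete, here is a toy sketch in Rust (not the actual Cook or CELT algorithm—every detail is invented): both the encoder and the decoder run the same allocation routine on the already-transmitted envelope and the known bit budget, so the per-band allocation itself never needs to be transmitted.

fn allocate_bits(envelope: &[i32], mut bits_left: i32) -> Vec<i32> {
    // hand out bits one at a time to the band with the highest remaining "demand"
    let mut alloc = vec![0; envelope.len()];
    let mut demand: Vec<i32> = envelope.to_vec();
    while bits_left > 0 && !demand.is_empty() {
        let idx = (0..demand.len()).max_by_key(|&i| demand[i]).unwrap();
        alloc[idx] += 1;
        demand[idx] -= 1; // diminishing returns for a band that already got bits
        bits_left -= 1;
    }
    alloc
}

fn main() {
    // same inputs on both sides -> same allocation, no side information needed
    println!("{:?}", allocate_bits(&[10, 4, 7, 2], 16));
}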

Now let’s jump to 2012 and VideoLAN Dev Days ’12. I gave a talk there about reverse engineering codecs (of course) and it was a complete failure, so that was my first and last public talk, but that’s not important. And before me Timothy Terriberry gave an overview of Opus. So I listened to how it combines a speech and a general audio codec (like USAC, which you might still not know under its commercial name xHE-AAC)—boring; how the speech codec works (it’s the Skype SILK codec they dumped to open source at some point and, like with Duck TrueMotion VP3 before, Xiph picked it up and adapted it for their own purposes)—looks like a typical speech codec that I can barely understand how it functions; and then the CELT part comes up. CELT is a general audio codec developed by Xiph that is essentially what your Opus files will end up as (SILK is used only at extremely low bitrates in files produced by the reference encoder—or so I heard from the person implementing a decoder for it). And a couple of months before VDD12 I actually bothered to enter technical details about Cook into MultimediaWiki (here’s the edit history if you want to check that)—I should probably RE some codec and write more pages there for old times’ sake. So Cook design details were still fresh in my mind when I heard about CELT details…

So CELT codes just single channels or stereo pairs—nothing unusual so far, many codecs do that. It also uses MDCT—even more codecs do that. It codes an envelope, uses parametric bit allocation and vector quantisation—wait a bit, I definitely heard about this somewhere before (yes, it sounds suspiciously like ITU G.719). Actually I pointed that out to the Xiph guys (Monty was present as well) immediately but it was dismissed as being nothing similar at all (“we transmit band energies instead of relying on quantisers”—right, and quantisers in audio are rarely chosen depending on energy).

Let’s compare the coding stages of the two codecs to see how they fail to match up:

  1. CELT transmits band energy—Cook transmits quantisers (that are still highly correlated with band energy) and variable amount of gains to shape output frame in time domain;
  2. CELT transmits innovation (essentially coefficients for MDCT minus some predicted stuff)—Cook transmits MDCT coefficients;
  3. CELT uses transmitted band energy and bits available for innovation after the rest of frame is coded to determine number of bits for each band and mode in which coefficients are coded (aka parametric bit allocation)—Cook uses transmitted quantisers and bits available after the rest of frame is coded to determine number of bits for each band and mode in which coefficients are coded;
  4. CELT uses Perceptual Vector Quantization (based on Pyramid Vector Quantizer—boy, that won’t cause any confusion at all)—Cook uses fixed vector quantisation based on the amount of bits allocated to the band and a static codebook;
  5. CELT estimates pitch gains and pitch period—that is a speech codec stuff that Cook does not have;
  6. CELT uses MDCT to restore the data—Cook does the same.

Some of you might say: “Hah! Even if it matches at some stages, actual coefficient coding is completely different!! And you forgot that CELT uses a range coder too.” Well, I didn’t say those two formats were exactly the same, just that their design is very similar. To quote the immortal words from the Bell, Cleary and Witten paper on text compression, progress in data compression is mostly defined by larger amounts of RAM available (and CPU cycles available). So back in the day hardly any audio codec could afford a range coder (invented in 1979) except for some slow lossless audio coders. Similarly PVQ was proposed by Thomas Fischer in 1986 but wasn’t employed because it was significantly costlier than some fixed codebook vector quantisation. So while CELT is undeniably more advanced than Cook, the main gains come from using methods that do the same thing more effectively (at the expense of RAM and/or CPU) instead of coming up with a significantly different scheme. An obligatory car analogy: claiming that a modern internal combustion engine car is a completely new invention compared to a Ford Model T or FIAT 124 because it has more bells and whistles (i.e. electronics) even while the principal scheme remains the same—while a radically new car would be an electric one with no transmission or gearbox and engines in each wheel (let’s forget such a scheme is very old too—electric cars of that design roamed the Moon in the 1970s).

So overall, Opus is almost synonymous with CELT, and CELT has a lot in common with Cook design-wise (but greatly improved), so Cook can be called RealOpus or the Opus of its era.


BTW when implementing the decoder for this format in Rust I’ve encountered a problem: the table for 6-bit stereo coupling was never tested because its definition is wrong (some codes repeat with the same bit lengths) and it looks like the first half of it got corrupted. Just compare for yourselves.

libavcodec version (lengths array added for the reference):

static const uint16_t ccpl_huffcodes6[63] = {
    0x0004,0x0005,0x0005,0x0006,0x0006,0x0007,0x0007,0x0007,0x0007,0x0008,0x0008,0x0008,
    0x0008,0x0009,0x0009,0x0009,0x0009,0x000a,0x000a,0x000a,0x000a,0x000a,0x000b,0x000b,
    0x000b,0x000b,0x000c,0x000d,0x000e,0x000e,0x0010,0x0000,0x000a,0x0018,0x0019,0x0036,
    0x0037,0x0074,0x0075,0x0076,0x0077,0x00f4,0x00f5,0x00f6,0x00f7,0x01f5,0x01f6,0x01f7,
    0x01f8,0x03f6,0x03f7,0x03f8,0x03f9,0x03fa,0x07fa,0x07fb,0x07fc,0x07fd,0x0ffd,0x1ffd,
    0x3ffd,0x3ffe,0xffff,
};

static const uint8_t ccpl_huffbits6[63] = {
    16,15,14,13,12,11,11,11,11,10,10,10,
    10,9,9,9,9,9,8,8,8,8,7,7,
    7,7,6,6,5,5,3,1,4,5,5,6,
    6,7,7,7,7,8,8,8,8,9,9,9,
    9,10,10,10,10,10,11,11,11,11,12,13,
    14,14,16,
};

NihAV corrected version (extracted from the reference of course):

const COOK_CPL_6BITS_CODES: &[u16; 63] = &[
    0xFFFE, 0x7FFE, 0x3FFC, 0x1FFC, 0x0FFC, 0x07F6, 0x07F7, 0x07F8,
    0x07F9, 0x03F2, 0x03F3, 0x03F4, 0x03F5, 0x01F0, 0x01F1, 0x01F2,
    0x01F3, 0x01F4, 0x00F0, 0x00F1, 0x00F2, 0x00F3, 0x0070, 0x0071,
    0x0072, 0x0073, 0x0034, 0x0035, 0x0016, 0x0017, 0x0004, 0x0000,
    0x000A, 0x0018, 0x0019, 0x0036, 0x0037, 0x0074, 0x0075, 0x0076,
    0x0077, 0x00F4, 0x00F5, 0x00F6, 0x00F7, 0x01F5, 0x01F6, 0x01F7,
    0x01F8, 0x03F6, 0x03F7, 0x03F8, 0x03F9, 0x03FA, 0x07FA, 0x07FB,
    0x07FC, 0x07FD, 0x0FFD, 0x1FFD, 0x3FFD, 0x3FFE, 0xFFFF
];

Saturday, 13 October

Looks like it’s been about two months since I last wrote anything about NihAV but that does not mean I did not have anything to write about. On the contrary, I’m glad to report about significant progress in RealAudio support.

Previously I’ve reported on RealVideo 3 and 4 support (as well as RealVideo 1/2 and ClearVideo before), so the video part was covered quite well, but the audio part was missing and I went on to rectify the situation.

Now NihAV supports RealAudio 1.0 (speech codec), RealAudio 2.0 (speech codec), RealAudio DNET (a bit about it later), RealAudio 4.0 (speech codec from Sipro), RealAudio Cook (this one deserves a separate post, so the next one should be about this codec) and RealAudio Lossless. So there are only three codecs missing now: RealAudio 8 (ATRAC3), RealAudio 9/10 (AAC) and RealVideo 6 (HD). Of course I’m going to add support for those as well.

This is actually a good time to implement those. As you might know, there is a Holy Trinity of Licensors: D.vX, D*lby and DT$. They are famous for ‘nice’ licensing terms. While I’ve never had to deal with them, I’ve heard from people who did that they like licensing the single product they’re most famous for at outrageous prices (i.e. it’ll cost you an order of magnitude more per unit to use their technology than e.g. an H.264 decoder), and it’s a viral licence too: if you sell stuff not aimed at consumers then you have to force your customers into the same deal (it’s GPL—Greedy Private License) and you have to report your sales to them for obvious reasons. Funny how two of the companies have been bought out already. Now let’s look at them in some detail:

  • D.vX This one is remarkable since it licensed the product it had nothing to do with (aka M$MPEG-4 adapted for non-ASF containers and MPEG-4 ASP). At least it seems hardly relevant now unless I dig out some old movies.
  • D*lby This one is mostly known (outside cinema equipment) for a codec with several names: ATSC A/52, RealAudio DNET, ETSI TS 102 366, D*lby Digital and even something you can make out of the letters A, C and 3 (I heard rumours that it does not like its trademarks mentioned so I’d better avoid naming it directly). At least the last patents for that format have expired and support for it can be implemented freely. And it also owns a company that manages licensing of AAC. Fun fact: the patents for MPEG2 NBC have expired so I can implement an AAC-LC decoder just fine, but that does not stop them from licensing it. How do they do it? By refusing to license the separate parts and forcing a whole package of AAC-LC, HE-AACv1, HE-AACv2 and xHE-AAC onto you. I guess if the situation doesn’t change, in twenty years all the current stuff will have expired but they’ll still license it along with Ultra-Enhanced-Hyper-Expanded-Radically-Extended High-Efficiency AAC (which will have nothing to do with all those previous formats).
  • DT$ A company similar to D*lby and its (former?) prime competition. Also known for a single format with many extensions, making it essentially a homebrew AAC. At least it seems to be exclusively a DVD/Blu-ray format and I’m satisfied with Xine for playing the former and avoiding the latter completely.

And I want to talk a bit more about my RealAudio DNET decoder. Internally it’s called ts102366 for obvious reasons and I have just a primitive implementation for it (i.e. it seems to work and should handle multichannel fine but no extended features). The extension for more than 5.1 channels also seems to be HD-DVD/Blu-ray only so I don’t care; it’s quite rare in the RealMedia format and other containers seem to contain it as a contiguous stream, so I’d need to introduce support for NAElementaryStream in the demuxing code and also a proper parser to split it into frames. Not worth the effort for me at this moment. Another fun fact is that the bitstream comes in 16-bit words that can have any endianness. In my case I just had to detect the proper endianness from the first two bytes and simply initialise the bitstream reader in BE or LE16 mode depending on it (again, it’s funnier with the DT$ format where you have three different bitstream reading modes and you might need two modes simultaneously in some cases; again, good thing I don’t have to care about that stuff). Also it’s still one of only two codecs I currently have that support multichannel audio (Cook is the second of course and AAC will be the third).
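For illustration, the detection boils down to something like this (a simplified sketch, not the actual NihAV code; the only hard fact used is that an AC-3 syncframe starts with the sync word 0x0B77):

fn dnet_is_le16(buf: &[u8]) -> Option<bool> {
    if buf.len() < 2 { return None; }
    match (buf[0], buf[1]) {
        (0x0B, 0x77) => Some(false), // normal big-endian 16-bit words
        (0x77, 0x0B) => Some(true),  // byte-swapped words, read in LE16 mode
        _            => None,        // not an AC-3 syncframe at all
    }
}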

And finally some words about Rust issues I had to deal with.

Rust as a language is more or less fine but the compiler sucks. I’ve run into several issues while writing code.

First, I had a fixed array of Codebooks to initialise in the RALF decoder (one of 15 codebooks, another one of 125 codebooks and yet another one of 10×11 codebooks). If I simply use mem::uninitialized() and fill it up, it works fine. In debug mode. In release mode it segfaults at the end. Probably I should’ve used ptr::write() instead of assigning and it would have worked fine, but I gave up and used a vector instead of an array even if it’s not as efficient. Obviously it’s all my fault and not a Rust issue but still that was weird.
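For the curious, the difference boils down to this (a reconstruction with made-up names, not the actual RALF code; nowadays one would reach for MaybeUninit instead): plain assignment into mem::uninitialized() memory first drops the garbage “previous” value, which is undefined behaviour, while ptr::write() stores the value without dropping anything.

use std::{mem, ptr};

struct Codebook { codes: Vec<u32> } // stand-in for the real codebook type

fn read_codebook(seed: u32) -> Codebook { Codebook { codes: vec![seed] } }

fn main() {
    let mut codebooks: [Codebook; 15] = unsafe { mem::uninitialized() };
    for i in 0..15 {
        // codebooks[i] = read_codebook(i as u32); // drops garbage -> UB, may only bite in release mode
        unsafe { ptr::write(&mut codebooks[i], read_codebook(i as u32)); } // writes without dropping
    }
    println!("built {} codebooks", codebooks.len());
}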

Second, when I tried to create a generic codebook reader that would accept a table of codes of any primitive type (u8, u16 or u32) I ran into the funnier issue of the Rust compiler spewing weird errors like “cannot convert u16 to u32 because it’s not a primitive type”. Obviously it’s my mistake and it’s caught by a tool (that is still not in stable) so the developers don’t care (yes, Luca even bothered to file an issue on that). Still, I’d rather have a clearer error message in that case (e.g. “… because it’s X and not a primitive type”).

And finally, an example that is definitely rustc stupidity and not mine. Again, the developers don’t consider this to be an issue but I do (and Luca seemed to agree with me since he opened an issue about it). Essentially, there is a thing called DCE (dead code elimination): when a compiler sees that a certain block won’t be executed it might print a warning and just check the inside code for syntactic validity. Current rustc might ignore the condition value and optimise the code inside even if it clearly makes no sense (to the point where it crashed because of that on some nightly version, see the issue for details). And while you may argue that one should not write such code, I had quite a plausible use case for it: a macro that took a 2- or 3-element array and did something to its values, and if the third value was present it had to do something special with it. But of course compilation failed because you tried to do if ARR.len() > 2 { a = ARR[2]; } with a two-element array. Yet when I tried to check whether I got the indexing correct by using large constants as indices, cargo check passed just fine—probably because const propagation did not go that deep inside my code (it was in a function called from a long chain in some sub-sub-sub-module, and a standalone example errors out fine). This feels quite unpolished to me.
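Here is a reconstruction of that use case (names invented, not the actual NihAV macro); with a two-element array the guarded branch can never run, yet building it still trips the constant out-of-bounds check:

macro_rules! sum_head {
    ($arr: expr) => ({
        let arr = $arr;
        let mut extra = 0;
        if arr.len() > 2 {
            extra = arr[2]; // flagged for a two-element array even though the branch is dead
        }
        arr[0] + arr[1] + extra
    });
}

fn main() {
    println!("{}", sum_head!([1, 2, 3])); // fine
    // println!("{}", sum_head!([1, 2])); // fails to build: index out of bounds in dead code
}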

Oh, and one final fun thing: calls like foo.bar(foo.baz) would still fail the borrow check, probably because they can’t (I guess) formalise the function calling convention, i.e. “if a function is called then first its arguments are evaluated and copied if needed in a certain order, then the function address is evaluated and called with the arguments”. BTW you still have situations like this:

struct Foo { foo: u8 }
impl Foo {
    fn bar(&mut self) -> u8 { self.foo += 1; self.foo }
}

fn fee(a: u8, b: u8) {
    println!("{} {}", a, b);
}

fn main() {
    let mut foo = Foo { foo: 42 };
    fee(foo.bar(), foo.bar());
}

And if you don’t know what’s wrong here I’ll tell you: in C the order of argument evaluation is unspecified, because back in the day there were very different calling conventions and a compiler might have needed to evaluate arguments from last to first in order to push them onto the stack in order, instead of the more widespread left-to-right evaluation. So depending on the ABI the function would be called either as fee(43, 44) or as fee(44, 43).

Now I see two ways out of it: either detect such situations where the same object is mutably called several times and give an error or, which is better IMO, define a formal calling convention so the code won’t be undefined. And fix the borrow checker while doing that.


Overall, Rust is a nice experience so far since it allows structuring code much better, but sometimes you hit such silly issues that spoil all the fun.

Anyway, next post should be about RealAudio Cook, the Opus of its era.

Wednesday, 03 October

Since today is a state holiday (some time ago two Germanies united into one—which looks more and more like the DDR for some reason), why not look at the Zoidberg of German lands—Saarland? Well, you might have many reasons (first, it being Saarland) but today I’ve completed my voyage on all of its accessible railways and hence this post.

First, a bit of history. As you all remember, after World War II Germany was split into four occupation zones and while I haven’t heard anything in particular about the British occupation zone, the rest of the occupation forces were not behaving nicely at all: the USA installed their military bases everywhere (and most of them are still there—at least it meant less military expenses for West Germany back in the day), the USSR tried to convert its piece of Germany into a copy of itself (partly successfully, hopefully it will recover) and France was not satisfied with mere occupation and also tried to seize its part of Germany as its own, but it bit off more than it could chew and so back in 1957 Saarland was reunited with the rest of Germany (and that day is a state holiday too, but I doubt many think of the 1st of January as Saarland reunification day).

Saarland still honours France

Second, a bit of railway network overview. Essentially you can think about it as a cross: there’s the main East-West line going from Mannheim to Trier (or Alt-Chemnitz) via Homburg and Saarbrücken, there’s a North-South line going from Bad Kreuznach to Saarbrücken, there’s a line going South from Saarbrücken to France (and another one served by tram, but more about it later), there is a branch Dillingen—Niedaltdorf, there’s a line from Rohrbach to Pirmasens, there’s a line Trier—Perl—Metz that goes partly through Saarland and there are several parallel lines connecting Homburg and Saarbrücken. Let’s count: Homburg—Rohrbach—Saarbrücken (that’s what trains from Mannheim to Saarbrücken use), Homburg—Neunkirchen—Saarbrücken (part of it is the Nahetalbahn to Bad Kreuznach), Homburg—Neunkirchen—Merchweiler—Saarbrücken (serviced by regional trains Homburg—Illingen and Saarbrücken—Lebach) and finally there’s Homburg—Neunkirchen—Lebach—Saarbrücken via the tram line that goes all the way from Lebach through Saarbrücken to Saargemünd (or Sarreguemines as some people write it).

Yes, there’s a tram line in Saarland that essentially crosses half of it. And it’s impossible to confuse it since there’s only one tram line and one tram route in Saarland.

Also I’ve found mentions of three museum lines but it looks like only one is functioning: the Ottweiler—Schwarzerden line (or Ostertalbahn for short). And I’ve tried it as well. Unlike many other museum lines, this one uses diesel locomotives from the 1960s (but hopefully they’ll manage to rebuild the steam locomotive from the parts they have one day). It was like what can be experienced on Ukrainian regional trains—going at about 30 km/h while sitting on wooden benches and enjoying the nature outside. At least they boast that they work in any weather (while other museum lines close in autumn they keep running trains in winter too).

There are many weird things there I’d like to talk about but I’ll leave them for the time when I finish travelling on all railways of Rheinland-Pfalz (should be done next year unless they decide not to open the Zellertalbahn again), but here are some of them for now.

First, the train service Saarbrücken—Lebach-Jabach. The Fischbachtalbahn (Saarbrücken—Wemmetsweiler) and the Primstalbahn from Wemmetsweiler to Illingen are electrified (in Illingen only track 41 is electrified, track 51 is not). Then only a bit of track at Lebach is electrified, but about fourteen kilometres in between are not. We had a similar situation here with the Bruhrainbahn (between Graben-Neudorf and Germersheim) not being electrified, so the Karlsruhe—Mainz train ran mostly on electrified rails but still had to be a diesel one. At least that was fixed in 2011 by electrifying the missing piece.

Second, it’s the only tram line in Germany I know that has exit directions repeated in French too.

And third, to make Saarland feel even more like Switzerland, they have the same cryptic booking system: when I bought a ticket from Saargemünd to Lebach it offered me a choice of three or four possible alternatives—just like buying a rail ticket in Switzerland! Come to think of it, the Swiss rail system is exactly like the German regional system:

  • Choosing route is the same;
  • German general rail tickets have a whole day of validity (or more for longer distances) while German regional tickets are valid for just a couple of hours after purchase—and it’s the same in Switzerland (unless it’s some snowy route that might be closed for days);
  • When I bought a ticket from Schaffhausen to Zürich (two different kantons) the ticket also listed zones—like some German regional tickets do;
  • Like with German regional trains, the train type does not really matter. It may be S-Bahn, RegioBahn, RegioExpress or InterRegioExpress—the ticket is valid regardless. Same in Switzerland: the same ticket is valid for any kind of train and trains change classes during the trip (i.e. the train Basel—Chur was labelled as InterCity up to Zürich and InterRegio after that, the difference being only how many intermediate stops it makes);
  • And finally, the famous Swiss train punctuality. Well, it’s a known effect that regional trains have much better punctuality than long-distance ones (and all trains in Switzerland are essentially slow regional trains).

So despite all the local jokes about Saarland being a very backward place (some even call it “the rear end of Germany”), it’s quite a European place in some aspects. And remember that it has a real Schengen border (i.e. it borders Luxembourg, where the town of Schengen known for a certain treaty is located).

Saturday, 29 September

Originally I wanted to write about NihAV progress but some kind soul has uploaded the final missing piece of the Dingo Pictures art collection, so I have no other choice but to talk about it.

So, Arischa the Little Witch (…on the visit to the Magic Forest).

The opening is a bit confusing since I’ve never seen such effects in any other openings from Dingo Pictures.

The young granddaughter of witch Sofia, you can guess her name, travels with raven Rudi (sorry, but I know only Rudi Rüssel) into the Magic Forest to talk to the oldest tree. There she also meets the usual animals and after that they together work on improving her witching skills (she could not even fly her broomstick, poor girl).


It’s always nice to see birds flying backwards.


See how many of them you recognize.


Back to school.

And to make things worse there is a nasty dwarf in the forest who messes with her magic textbooks and turns animals into stone (don’t think about Gimli son of Glóin, think about Ifnkovhgroghprm Rumplestiltskin).


And here are some effects not seen before in any Dingo Pictures cartoon:

That is why I was surprised by the opening animation. But they have used similar effects in the cartoon too.

I’m not going to reveal the whole plot but it’s not a Dingo Pictures story without a happy ending.

Also it should not be a spoiler to say that koda plays a rather large role here:

He’s just running by…


Must be SMPTE standard.


No comments.

Also I don’t remember seeing a bird facepalming before (even if that does not look exactly like it, it is a facepalm indeed).

The soundtrack is mostly either jazzy or the Wabuu theme (and if there’s one thing that could improve this film, it’s definitely Wabuu). The graphics style is still the gold standard Dingo one, just with some additional effects thrown in (it was 2005 after all and CGI was on the rise). The story is straightforward and reminds me of many witch stories for children written in Germany—who hasn’t heard of Otfried Preußler for instance (though Germans had this obsession with witches long before their most famous book on that subject, Malleus Maleficarum, was published about 100 km from the Dingo Pictures studio).

Overall, this feels like the best animated Dingo Pictures work and I’m a bit sad they haven’t continued producing their stuff. Even my co-worker said this looks much better than the 3D animated crap that’s produced nowadays.

Saturday, 22 September

I wanted to write this post for several months since in July I finally had a chance to travel on some of the important Swedish railways.

Well, as anybody knows, I love Sweden and railways. And Swedish railways too. And obviously I’d like to ride them all and recently I’ve moved much closer to that goal.

These are the important railways in Sweden (sorry if I forgot some but this list should cover the most important ones):

  • Ostkustbanan (Stockholm—Uppsala—Gävle—Sundsvall)
  • Ådalsbanan+Botniabanan (Sundsvall—Kramfors—Umeå)
  • Norra stambanan (Gävle—Ånge)
  • Stambanan genom övre Norrland (Ånge—Bräcke—Vännäs—Boden)
  • Malmbanan (Luleå—Boden—Kiruna—Narvik)
  • Mittbanan (Sundsvall—Ånge—Östersund—Storlien—Hell—Trondheim)
  • Inlandsbanan (Gällivare—Östersund—Orsa—Mora)
  • Dalabanan+Siljansbanan (Uppsala—Borlänge, Borlänge—Mora)
  • Bergslagsbanan (Gävle—Borlänge—Frövi)
  • Västra stambanan (Stockholm—Göteborg)
  • Södra stambanan (Stockholm—Malmö)
  • Mälarbanan (Stockholm—Västerås—Örebro)
  • Svealandsbanan (Stockholm—Eskilstuna—Arboga)
  • Värmlandsbanan (Laxå—Charlottenberg, further to Oslo)
  • Kust till kust-banan (Göteborg—Alvesta—Kalmar)
  • Västkustbanan (Lund—Göteborg)
  • Jönköpingsbanan (Nässjö—Falköping)

And I want to talk about those railways and my experience there.

Ostkustbanan (Stockholm—Uppsala—Gävle—Sundsvall)

The old railway along the Swedish East Coast from Stockholm to the North. It’s essentially the main way from Stockholm to Middle and Northern Sweden. I travelled there countless times and hope to do that again.

The main problem is that north of Gävle it’s single track, but at least since the opening of Botniabanan it has become more and more important, so there’s hope for more development. They’ve opened a tunnel in Gamla Uppsala and made the Uppsala—Gävle route double track just last year.

The most remarkable thing on the line is probably the tunnel in Hudiksvall that goes under a house. Also Hudiksvall is the confectionery capital of Sweden and there’s a place near Sundsvall where Vasa Bryggeri is located. No wonder both my heart and my stomach are willing to go there.

Ådalsbanan+Botniabanan (Sundsvall—Kramfors—Umeå)

This is a continuation of Ostkustbanan to the north that partly replaces stambanan genom övre Norrland (that is the reason Botniabanan was built; it has been in use since 2010) and is intended to replace it fully (there are plans to extend Botniabanan to Luleå).

The first time I saw the route was in 2010 when I took a bus from Luleå to Sundsvall along the picturesque E4, and several years later I was able to ride there on a train.

It’s very picturesque with the Baltic Sea coast, all the rivers flowing into it, and there’s even a fountain in the middle of one of them not far from the road and railway bridges crossing it.

Norra stambanan (Gävle—Ånge)

This is a railway going deep into Swedish territory instead of staying near the coast. It’s the route for trains to ski resorts near Åre and Duved and previously also for the trains to the very northern parts of Sweden (before stambanan genom övre Norrland was closed for passenger traffic).

Personally I like going to Ljusdal in winter because it’s about the right place to experience real winter (like -17°C and lots of snow) instead of what we have in Karlsruhe (which feels like just late autumn).

Stambanan genom övre Norrland (Ånge—Bräcke—Vännäs—Boden)

The old railway to the very north of Sweden. Since the introduction of Botniabanan passenger trains run only on the Vännäs—Boden part of it. But back in 2010 I was able to ride from Långsele to Boden in a night train (and the Vännäs—Boden part later).

Since it’s an old single-track railway the trains can’t go fast there (so it’s like in Czechia or Ukraine). But at least the local forests and lakes are very nice to look at.

Also these three lines form the route for the Arctic Train that goes from Stockholm to Narvik and obviously it takes some time too! My train trip from Arlanda airport rail station to Gällivare with a short connection in Boden lasted 13 hours. Since Gällivare is 1313 km from Stockholm, that’s still faster than the 500 km of the 12-hour night train Kharkiv—Kyyiv(—Zhytomyr) that I still try to forget.

Malmbanan (Luleå—Boden—Kiruna—Narvik)

One of the most remarkable railways in the whole world. Since its main purpose is to transport iron ore from the mines around Kiruna and Gällivare, it employs the most powerful locomotives in the world—and they’re named after Eeyore (really!). Also, because the ore trains are heavy, they use SA3 automatic couplers (short for Soviet Adapted Willison Couplers). And because the loaded trains go mostly downhill they manage to regenerate enough energy for their trip back (and I also heard that sometimes they generate a surplus that makes Malmbanan a quite unusual power station).

Another fun fact to mention is that some of the stations there have very literal names. There’s Riksgränsen station (literal meaning: state border) located next to the state border and there’s Polcirkeln (literal meaning: the Arctic circle) located next to the village of the same name (a bit to the north of the actual Arctic circle). And of course there’s Sjisjka station that has the state rail operator (SJ) mentioned twice in its name—and it’s serviced only by Norrtåg.

There’s Kiruna, which is remarkable in two aspects. It’s next to the northernmost rocket range that launches real rockets—ones weighing over 12 tons, as opposed to the 750 kg or less rockets launched from other Arctic sites. And there’s the fact that Kiruna is moving (because of the mining operations the ground is sinking, so they have to move the houses). When I got there in 2010 I stayed at the hotel next to the station, and when I got there this year it was a completely different station in a completely different location, because the old one was in the dangerous zone and had to be closed.

The route from Kiruna to Narvik is spectacular and though Swiss mountains are closer I’d rather go there again.

And another fun fact about Malmbanan: the Norwegian part (called Ofotbanen) is single-track with no passing loops (while Malmbanan has lots of those) and despite being only 42 km long it carries more cargo than the rest of the Norwegian railways (and it’s the only rail connection for Narvik, so Norway has to send all goods there, including food, via Sweden). Also, at least when I was there this year, it looks like all passenger trains there are operated by SJ (including the local train from Narvik to the state border).

Mittbanan (Sundsvall—Ånge—Östersund—Storlien—Hell—Trondheim)

This is a railway that goes across the whole Scandinavian peninsula, from Baltic coast in Sundsvall to Norwegian Sea coast in Trondheim.

The most remarkable things there are the ski resorts in Åre and Duved and of course the Norwegian town with a proper name—Hell. Yup, the real Hell is located there and it was colder there this summer than in Karlsruhe (and by much!).

Also I’d like to note that while Norway is supposedly a richer country than Sweden and has a lot of cheap hydroelectricity, it still has not electrified the Meråkerbanen (Storlien—Hell—Trondheim). Looks like it’s the same with all railways connecting to Norway: the Swedish part is much better developed than the Norwegian one.

Inlandsbanan (Gällivare—Östersund—Orsa—Mora)

This is more of a museum railway, but you can easily buy a ticket for it on the SJ site like for any other destination. It is so long (the Gällivare—Östersund track is about 750 km, Östersund—Mora about 330 km) that it’s actually serviced in two parts as well. The Gällivare—Östersund part has only one train pair per day (the train departs around 8AM and arrives at about 8PM, with the meeting point in Sorsele) and during summer only. The Östersund—Mora part is serviced by a single train that goes from Östersund to Mora and back during the same day, but at least it operates in winter time as well.

Oh, there is a part they formally call Inlandsbanan that goes from Mora to Kristinehamn, but it uses different railways for that since the original Inlandsbanan on that stretch was mostly closed and the rails were removed too. But maybe one day I’ll try it too.

Since it’s a museum railway I shan’t tell much about it—go there and see for yourself, it’s definitely worth it. I’ll just mention that the nature there is more than impressive and that it has a weird connection to Karlsruhe: Inlandsbanan goes via Vilhelmina and Dorotea, two places renamed after Friederike Dorothea Wilhelmine von Baden, the wife of Gustav IV (no points for guessing where she is from).

Dalabanan+Siljansbanan (Uppsala—Borlänge, Borlänge—Mora)

Another railway that I’ve travelled along countless times, quite often with train 42 Stockholm—Mora. It is the line that goes to Dalarna, which is the real heart of Sweden (i.e. you cannot get more Swedish than Dalarna). The famous wooden horse? From Dalarna. The famous Mora clock (Sweden is the famous neutral land of mountains, cheese and clocks after all)? Dalarna. The famous Hagström guitars? Dalarna! The liberation of Sweden from Danish rule? Started in Dalarna of course. So no wonder I went there and would like to go again and again.

Bergslagsbanan (Gävle—Borlänge—Frövi)

Another fine railway that goes through picturesque and widely varying scenery—from sea coast in Gävle to Bergslagen known for its mines (even the most famous brewery from there is named Kopparberg—Copper mountain—after the town where they mined copper) to the lake Mälaren.

Bergslagen has a quite well-developed rail network so there’s still a lot left for me to travel.

Västra stambanan (Stockholm—Göteborg)

One of the two major lines, it connects the two biggest cities in Sweden. It’s not remarkable though, except for a curve (near Flen IIRC) that really shows why tilting train technology (at least in the form of the SJ X2000) is awesome.

Södra stambanan (Stockholm—Malmö)

See above but replace “two biggest cities” with “the first and the third biggest city”.

Mälarbanan (Stockholm—Västerås—Örebro)

A nice line that goes along the northern shore of lake Mälaren. It also serves as a backup line when there’s a problem in Stockholm and a train can’t go directly to the south from Stockholm C.

It’s scenic and the rail network around Västerås and Örebro is quite diverse with lines connecting them to Ludvika, Fagersta, Sala, Avesta, Hallsberg and many other places. I need to go there more.

Svealandsbanan (Stockholm—Eskilstuna—Arboga)

A nice line that goes along the southern shore of lake Mälaren. I can’t remember anything particularly remarkable about it though.

Värmlandsbanan (Laxå—Charlottenberg, further to Oslo)

I think I took it when I went in a night train to Oslo and back to Stockholm in 2009, that’s all.

Kust till kust-banan (Göteborg—Alvesta—Kalmar)

I’m yet to visit it.

Västkustbanan (Lund—Göteborg)

I tried it first in 2009 when I took the Öresundståg from Göteborg to Copenhagen. Since it was at night I did not see much. A year later I tried the Malmö—Helsingborg part again (with a stop at Lund of course). Since then they’ve opened a new tunnel around Helsingborg to make trains go faster and a new tunnel in Malmö to make trains go directly via Malmö instead of arriving at Malmö C and reversing. And they’re working on tunnels in Göteborg for a similar reason. So I hope to re-visit this railway later.

Meanwhile the most impressive things there are Lund, the city hall in Helsingborg and of course one of the stations that gave its name to the best mineral water in the world—Ramlösa (the best mineral water, not the best marketed water).

Jönköpingsbanan (Nässjö—Falköping)

This is a rather short line along the southern shore of lake Vättern that goes through Jönköping including Huskvarna—and I need to visit them instead of just passing through.


Okay, that should be it. I’m biased and prefer Norrland to Southern Sweden but I’d still try to explore Southern Sweden too. Funny how most places I want to visit there start with K: Kil, Kristinehamn, Kalmar.

Next time I talk about railways it’ll probably be about Rheinland-Pfalz.

Friday, 24 August

Finally the large chunk is finished: NihAV has finally got support for RealVideo 3 and 4!

Since I’ve learned a great deal more about codecs since the last time I wrote a RealVideo 3/4 decoder (and the specifications for both were leaked—they have mistakes but still clarify some things), I was able to write a new decoder that also seems to reconstruct frames better.

Some words on the design: I’ve split it into several parts as usual—common RV3/4 code, RV3/4 DSP, RV3 bitstream parser, RV3 DSP and RV4 bitstream parser and DSP. That’s the approach I’ve been using before and I’ll probably use it in future decoders as well. The only more or less interesting thing is how I did weighted motion compensation: instead of a temporary buffer I allocate a 16×16 frame that I use for storing intermediate results and which is later used to average the results (since the motion compensation routines in RealVideo 3 and 4 differ while the weighted averaging is the same, it makes sense to split it into a separate operation).
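To illustrate that split (just a sketch with a made-up weight scale, not the actual NihAV routine): the format-specific motion compensation writes into a 16×16 scratch block and a shared routine then blends it with what is already in the destination.

// blend a 16x16 scratch block into the destination with 6-bit weights (w1 + w2 == 64)
fn weight_blend(dst: &mut [u8], dstride: usize, tmp: &[u8; 16 * 16], w1: u32, w2: u32) {
    for (drow, trow) in dst.chunks_mut(dstride).zip(tmp.chunks(16)).take(16) {
        for (d, &t) in drow.iter_mut().take(16).zip(trow.iter()) {
            *d = ((u32::from(*d) * w1 + u32::from(t) * w2 + 32) >> 6) as u8;
        }
    }
}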

And now for the juicy part: benchmarks and performance. I’ve tested one of the RealVideo 4 trailers (namely swordfish.rmvb) and avconv -threads 1 -cpuflags 0 decodes it in 15 seconds, nihav-tool needs almost 25.

Now a breakdown by category (the numbers are kilocycles reported by perf, avconv first, nihav-tool second):

  • Loop filter — 9.3k / 15.2k;
  • Motion compensation — 0.9k / 6.7k. Ouch!;
  • Intra prediction — 0.4k / 0.8k;
  • Transforms — 0.8k / 3.3k. Ouch!;
  • The rest (mostly bitstream decoding) — ~3k / ~7k.

So unoptimised Rust code is consistently twice as slow as semi-optimised C code and I’m more or less fine with that, but some things are especially bad. Let’s take transforms: by itself the transform code is about as fast as its C version, but I have an explicit function add_coeffs() for adding transformed coefficients to the output and it takes 2.7 kilocycles—the second-heaviest function!

Here’s the straightforward original version of that function that was even slower (closer to 4k cycles):

    pub fn add_coeffs(&self, dst: &mut [u8], mut idx: usize, stride: usize, coeffs: &[i16]) {
        for y in 0..4 {
            for x in 0..4 {
                dst[idx + x] = clip8((dst[idx + x] as i16) + coeffs[x + y * 4]);
            }
            idx += stride;
        }
    }

And current one, which is faster but not that fast unfortunately:

    pub fn add_coeffs(&self, dst: &mut [u8], idx: usize, stride: usize, coeffs: &[i16]) {
        let out = &mut dst[idx..][..stride * 3 + 4];
        let mut sidx: usize = 0;
        for el in out.chunks_mut(stride).take(4) {
            assert!(el.len() >= 4);
            el[0] = mclip8((el[0] as i32) + (coeffs[0 + sidx] as i32));
            el[1] = mclip8((el[1] as i32) + (coeffs[1 + sidx] as i32));
            el[2] = mclip8((el[2] as i32) + (coeffs[2 + sidx] as i32));
            el[3] = mclip8((el[3] as i32) + (coeffs[3 + sidx] as i32));
            sidx += 4;
        }
    }

It’s funny how all those seemingly useless things like .take(4) and assert!() or even using 32-bit math instead of 16-bit increase performance—the first two most likely because they let the compiler prove the accesses are in bounds and elide per-element bounds checks.

There’s a similar story with loop filtering: rewriting the vertical edge loop filter to use iterators shaved off about ten percent of the run time. But I can’t apply the same approach to horizontal edge filtering (or most of the motion compensation functions) because there I need to access several lines in parallel, so I fear most of the time in such a function would be spent on zipping 6-7 input iterators together (plus an output one). Maybe somebody else has a desire to test such approaches but I don’t.

Overall, I can summarise my experience in writing RealVideo 3/4 decoder in Rust in these sentences:

  1. Rust is a nice language for structuring code;
  2. Rust is still not as fast as C;
  3. It seems that it’s better to avoid using direct index access and use iterators instead;
  4. It feels like Rust code performance would greatly improve if there was a way to tell the compiler “okay, I guarantee that .chunks() will produce exactly that amount of chunks and of exactly that length” (and no, .exact_chunks() in nightly won’t help—it won’t work with add_coeffs() above because the last chunk can be smaller). And I’m not into experimenting with custom pixel line-accessing iterators.

Anyway, it’s time to move to audio codecs.

Tuesday, 10 July

I am working on PowerPC SIMD optimizations for x264. I was playing with SAD functions and was thinking it would be nice to have something similar to the x86 PSADBW for computing the sum of absolute differences. Luca suggested that I try the POWER9 vec_absd intrinsic. A single vec_absd( fenc, pix0v ) replaces vec_sub( vec_max( fencv, pix0v ), vec_min( fencv, pix0v ) ). My patch can be found here. To make it work, -mcpu=power9 must be set. The patch contains a macro that makes the code backward compatible with POWER8:

#ifndef __POWER9_VECTOR__
#define vec_absd(a, b) vec_sub(vec_max(a, b), vec_min(a, b))
#endif


I got very nice results using vec_absd (the numbers are ratios of AltiVec/C checkasm timings):


I benchmarked the overall encoding performance with perf, and using vec_absd for the SAD and SSD functions gives an 8% improvement. Which is amazing for such a small change.

Thanks to Raptor Computing Systems for the chance to experiment with POWER9 early.

Monday, 02 July

I’m still working (barely) on NihAV and I’ve managed to make my code decode both RealVideo 3 and 4. It’s not always correct, especially B-frames and some corner cases, but at least it produces a sane picture in most cases.

And this time I’d like to write about disadvantages of writing motion compensation functions in Rust instead of C.

Motion compensation is performed either by simply copying pixels from one block into another or by performing some interpolation that “shifts” the image by a fraction of a pixel (1/4th for RealVideo 4), so in our case the filter looks like:

dst[x] = clip8((src[x-2] - 5*src[x-1] + 52*src[x] + 20*src[x+1] - 5*src[x+2] + src[x+3] + 32) >> 6); // 1/4 of pixel
dst[x] = clip8((src[x-2] - 5*src[x-1] + 20*src[x] + 20*src[x+1] - 5*src[x+2] + src[x+3] + 16) >> 5); // 1/2 of pixel

And it should be applied in two directions. And we have two block sizes (8×8 and 16×16) too. So there are 2*4*4=32 different functions to implement. Why make them separate functions instead of a single one? Because then you can substitute them with optimised versions that do just one kind of operation but do it fast. And here’s a good place to mention that stable Rust still can’t generate new function/variable names in macros (or interpolate idents in Rust terminology), which adds the minor annoyance of copy-pasting and correcting function names in all macro invocations.

And of course I don’t like copy-pasting much so I used macros to generate functions like this:

macro_rules! mc_func {
    (mc01; $name: ident, $size: expr, $ver: expr) => (
        fn $name (dst: &mut [u8], mut didx: usize, dstride: usize, src: &[u8], mut sidx: usize, sstride: usize) {
            let step = if $ver { sstride } else { 1 };
            for _ in 0..$size {
                for x in 0..$size {
                    dst[didx + x] = filter!(01; src, sidx + x, step);
                }
                sidx += sstride;
                didx += dstride;
            }
        }
        );
...
    (cm01; $name: ident, $size: expr, $ofilt: ident) => (
        fn $name (dst: &mut [u8], didx: usize, dstride: usize, src: &[u8], mut sidx: usize, sstride: usize) {
            let mut buf: [u8; ($size + 5) * $size] = [0; ($size + 5) * $size];
            let mut bidx = 0;
            let bstride = $size;
            sidx -= sstride * 2;
            for _ in 0..$size+5 {
                for x in 0..$size { buf[bidx + x] = filter!(01; src, sidx + x, 1); }
                bidx += bstride;
                sidx += sstride;
            }
            $ofilt(dst, didx, dstride, &buf, 2*bstride, $size);
        }
        );
...
}

mc_func!(mc01; luma_mc_10_16, 16, false);
mc_func!(mc01; luma_mc_10_8,   8, false);
mc_func!(cm01; luma_mc_11_16, 16, luma_mc_01_16);
...

This can generate four functions for the mc01 case (interpolate an 8×8 or 16×16 block in the vertical or horizontal direction) and six functions for cm01 (because you pass the final interpolation function as an argument to the macro). So it works but it’s still bulky.

And Luca Barbato of rust-av fame suggested using traits. Rust traits can have associated constants and default implementations, so the code looks like:

trait HFilt {
    const HMODE: usize;
    fn filter_h(src: &[u8], idx: usize) -> u8 {
        match Self::HMODE {
            1 => filter!(01; src, idx, 1),
            2 => filter!(02; src, idx, 1),
            3 => filter!(03; src, idx, 1),
            _ => src[idx],
        }
    }
}
trait VFilt { ditto }
trait MC: HFilt+VFilt {
    const SIZE: usize;
    fn mc(dst: &mut [u8], mut didx: usize, dstride: usize, src: &[u8], mut sidx: usize, sstride: usize) {
        if (Self::HMODE != 0) && (Self::VMODE != 0) {
            let mut buf: [u8; (16 + 5) * 16] = [0; (16 + 5) * 16];
            let mut bidx = 0;
            let bstride = Self::SIZE;
            sidx -= sstride * 2;
            for _ in 0..Self::SIZE+5 {
                for x in 0..Self::SIZE { buf[bidx + x] = Self::filter_h(src, sidx + x); }
                bidx += bstride;
                sidx += sstride;
            }
            bidx = bstride * 2;
            for _ in 0..Self::SIZE {
                for x in 0..Self::SIZE { dst[didx + x] = Self::filter_v(&buf, bidx + x, bstride); }
                didx += dstride;
                bidx += bstride;
            }
        } else if Self::HMODE != 0 {
            for _ in 0..Self::SIZE {
                for x in 0..Self::SIZE {
                    dst[didx + x] = Self::filter_h(src, sidx + x);
                }
                didx += dstride;
                sidx += sstride;
            }
        } else if Self::VMODE != 0 {
            ...
        } else {
            // simple block copy
        }
    }
}

macro_rules! mc {
    ($name: ident, $size: expr, $vf: expr, $hf: expr) => {
        struct $name;
        impl HFilt for $name { const HMODE: usize = $hf; }
        impl VFilt for $name { const VMODE: usize = $vf; }
        impl MC for $name { const SIZE: usize = $size; }
    };
}

And then you can instantiate all functions via a simple mc!(MC13_16, 16, 1, 3); or such. The main annoyance is that you can use $size passed as a macro argument to define array sizes, but let foo: [u8; Self::SIZE] inside a trait is not allowed. Still, it’s a very minor thing that does not affect the code much.

Now let’s see if the implementations differ in performance and other metrics. I’ve decoded the first couple hundred frames of some RealVideo 4 file on a CPU locked at 1.2GHz and here are the results.

Macros: 1647 cycles, top four luma MC functions taking 200, 190, 180 and 150 cycles.
Traits: 1774 cycles, top four luma MC functions taking 250, 230, 210 and 140 cycles.

Code size: macros version — 13kB, traits — 11kB (about 6kB of which is a common code).

And compilation times are 4m34s for macros version and 4m37s for traits version. So it’s not a zero-compilation-cost abstraction either but the cost is negligible.

Well, the code with traits is slower but cleaner and smaller (and it should be used only when there’s no optimised version anyway; and I don’t care much about the speed for now either), so I’ll probably keep it.

Monday, 18 June

So it has come to this. Let’s talk about stuff one usually finds in sweets: various kinds of cream (and my experience with them).

I can divide the cream I’ve encountered or made so far into three categories:

  1. Swedish cream;
  2. Lazy cream;
  3. Custards.

Swedish cream is very easy to make: whip cream and optionally sprinkle cinnamon on top. It’s found in virtually every Swedish cake and serves as a base for some other cream variants. In Germany it’s common to use Sahnesteif—essentially a mix of starch and dextrose—that makes whipped cream stay thick and not go runny for longer.

Lazy cream is essentially a mix of some dairy product with powdered sugar and maybe something else for flavour (I use lemon juice): it can be butter, mascarpone, quark or something else. You simply mix those two ingredients together and use immediately. I believe the other term for this kind of cream is butter-cream.

And custard is the trickiest one since you have to cook it. It’s essentially a mix of egg yolks and milk with some thickening agent (starch or, less commonly, gelatine). When making it you have to keep in mind that if you simply put the yolks into the hot milk they’ll curdle and you’ll end up with a very runny omelette, so you have to be extra careful and combine them (first you mix the yolks with sugar and starch) by pouring a thin stream of one ingredient into the other while stirring (some say you should first add some hot milk to the yolks and then pour the mix back into the milk, others claim it’s enough to pour the yolks into the milk). Afterwards you have to let it cool in a sealed container and maybe mix it with whipped cream. It can be used in tarts, cakes, smaller pastry or eaten as is (preferably with something else though, like berries or biscuits).

There’s a variation of it called Bavarian cream which you make by mixing yolks and milk, adding gelatine and mixing with whipped cream after it’s half-set (and then waiting even more hours until it’s fully set). The result is good as a standalone dessert but I heard it can be used in cakes too.

Overall I find all those cream varieties good, but it’s better to eat them with something else and in moderation (or you’ll end up with my shape).

Sunday, 10 June

Well, since I had no incentive to work on NihAV and recently the weather has not been very encouraging for any kind of intellectual activity, there was almost no progress. And yet now I have something to write about: NihAV has finally managed to decode a non-trivial (i.e. not fully black) RealVideo 3 I-frame properly (i.e. without any visible distortions). The loop filter is still missing but it’s a start. And it’s not a small feat considering one has to implement both coefficient decoding and intra prediction. So essentially motion vector juggling and motion compensation are all that’s missing for P- and B-frame support. Maybe it will go faster from here (but most likely not).

And since doing that involved rewriting some C code into Rust here are some notes on how oxidising went:

  • match is a nice replacement for the cases when you have to partly remap values—in my case I had to adjust intra prediction directions when the top, left or bottom reference was missing, and that means changing three or four values into other values; match looks more compact than several } else if (itype == FOO) { and does not lose readability either (see the sketch after this list);
  • while in C foo = bar = 42; is a common thing, Rust does not allow this (I can understand why) and I’m surprised I ran into it only now (with intra prediction functions that assign the same calculated value to several output pixels at once);
  • loops in Rust are fine for basic use but when you need to deal with something more complex like for (i = 0; i < block_size; i += 4) or for (i = 99; i > 0; i--) you need either to write a simpler loop and remap indices inside or to remember it’s Rust and permute range in less intuitive ways like for i in (0..block_size).filter(|x| x&3 == 0) and for i in (1..99+1).rev(). While this works and even somewhat conveys the meaning it’s a bit unwieldy IMO;
  • and it might be a bit too esoteric but it looks like I cannot easily write fn clip_u8(val: N) -> u8 that would take any primitive numeric type as input, do comparisons inside and return the value either clipped or converted to u8. The best answer on how to do it I found was “you can’t, it’s against Rust practices”. I don’t need it much and I care even less, so I’ll just mark it as a neutral language feature and forget about it.
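For illustration, the kind of remap meant in the first point looks roughly like this (the mode numbers are invented, not the actual RealVideo 3 tables):

fn adjust_pred_mode(mode: u8, has_top: bool, has_left: bool) -> u8 {
    match (mode, has_top, has_left) {
        (0, false, _) => 2,                 // vertical prediction without a top row -> DC
        (1, _, false) => 2,                 // horizontal prediction without a left column -> DC
        (5, false, _) | (6, false, _) => 1, // diagonal modes degrade to horizontal
        _ => mode,
    }
}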

And now the small but constantly irritating thing: arrays. While slices are nice and easy to use (including extracting sub-slices), in my area I often need a slice with arbitrary start and end bounds. To clarify my use case: quite often you need a piece of memory that’s addressable with both positive and negative indices, which make sense on a certain interval.

One such common array is the clipping array, which essentially takes an input index and returns it clipped, usually to the 0-255 range. So you have the part [-255..-1] filled with zeroes, [0..255] filled with values in the same range and [256..511] filled with 255. I repeat, such a clipping table is a very common and useful thing that’s currently not easy to implement in Rust.

Another less common case is when the block of pixels we process requires information from its top, left and top-left neighbours—and those are addressed as src[-stride + i], src[-1 + stride*i] and src[-stride - 1]. Or a whole frame of a GDI-related codec (no, not from Westwood) or even a simple BMP/DIB that stores lines upside-down, so after you process line 0 you have to move to line -1.

I currently deal with it by keeping an additional variable pointing to the current position in the array that I use as a reference point and from which I can subtract other numbers if needed, but it’s a bit clunky and error-prone. Since Rust checks indices on slice access I wonder if extending it to work with e.g. negative indices is possible. IIRC FORTRAN and Pascal allowed you to define an array starting with an arbitrary index; it might be possible in Rust too.
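For the record, here is a minimal sketch of that workaround applied to the clipping table (the layout follows the description above, not NihAV’s actual code):

struct ClipTable {
    data: [u8; 255 + 256 + 256], // logical indices -255..=511
    zero: usize,                 // physical position of logical index 0
}

impl ClipTable {
    fn new() -> Self {
        let mut data = [0u8; 767];
        for i in 0..256 { data[255 + i] = i as u8; } // identity part
        for i in 511..767 { data[i] = 255; }         // saturated part
        ClipTable { data, zero: 255 }
    }
    fn clip(&self, val: i32) -> u8 {
        // the extra offset emulates the negative indices
        self.data[(self.zero as i32 + val) as usize]
    }
}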

Oh well, I’ll just keep using my approach meanwhile and waiting to see what rust-av does in this regard.

Sunday, 27 May

One of the Rust language features is explicit object lifetimes that help the compiler correctly track memory usage and free objects without using a garbage collector. A neat idea, but it leads to lifetime specifiers being used everywhere, including places where the compiler should be smart enough to deal with them without an explicit mention in every place.

Maybe I’m using Rust wrong but in most of the cases I create objects that have no need for a lifetime specifier, or objects whose members have the same lifetime as the object itself. Thus I argue that in addition to the generic lifetime specifier 'a (or whatever name you give it) and the obviously named 'static there should be 'self, which specifies the lifetime to be exactly the same as that of the object itself.

So, instead of current:

struct Foo<'a> {
  myref: &'a [u8],
  subobj: Bar<'a>,
}

impl<'a> Foo<'a> {
  pub fn new(myref: &'a [u8], subobj: Bar<'a>) -> Self { ... }
}

it should be possible to write:

struct Foo {
  myref: &'self [u8],
  subobj: Bar,
}

impl Foo {
  pub fn new(myref: &'self [u8], subobj: Bar) -> Self { ... }
}

I am not sure whether the compiler needs to perform some additional work for such objects compared to objects with no lifetime specifier, but it should be easy to assign the proper lifetime after parsing the structure definition anyway, and I’m pretty sure the compiler does something like this already.

And I see only these reasons why this has not been done yet:

  • Considerations for compiler simplicity (i.e. the parsing process should be kept as simple as possible)—I still think it should be easy for the compiler to recognise the lifetime definition by the time the structure declaration has been parsed and it is used externally (i.e. by objects using this one);
  • Considerations for language clarity and consistency (i.e. currently it’s immediately obvious when you look at an object that it deals with lifetimes, which would not be the case with the proposed change). I’d argue that explicit lifetimes should be kept for complex cases only, when you have to juggle lifetimes from several sources, and objects with references not outliving themselves should be fine without them;
  • Simple oversight (i.e. “we did not think of such a simplification”) or developers’ bias (i.e. “we got so used to writing lifetime specifiers everywhere that we didn’t think it annoys anybody”). You should be able to guess what I have to say about such an argument.

So all in all I’d be happy to either hear why it cannot be done (beside the compatibility with the existing code) or see it implemented. But most likely this will be ignored (and I’m fine with that too).

Tuesday, 22 May

So I had a chance to visit Belgium and the Netherlands, and what I saw there made me write this post.

Luxembourg

I visited it some years ago and it looked quite decent to me, nothing particularly strange.

Belgium

Previously I only went to Brussels for FOSDEM but this time I travelled around a bit and saw places outside the capital too.

So, they have nice touches like typeface used for station names, various kinds of trains (though I haven’t seen their outdated train that used to go between Liege and Aachen) and very interesting rail station in Antwerpen.

The only strange thing is that they put up timetables for workdays and weekends separately (at least in Bruxelles Nord).

The only stupid thing I saw is ticket machines having a special button for international trains: when you press it, it tells you that you can’t buy an international train ticket there. And Belgium is such a small country that it’s hard to travel in any direction for an hour and not cross some border (or end up in the sea). One would expect that buying tickets to neighbouring lands would be easier, especially for countries as close as Belgium and the Netherlands.

Netherlands

Now this country looks like everything there was designed by idiots.

First, trains. By themselves they are not that bad but they have the most counterintuitively designed door buttons. To start with, both buttons are labelled the same: half-opened (or half-closed) doors with small arrows showing opening or closing. So if your eyesight is not very good (like mine) you’ll be confused. But the buttons are colour-coded! Yes, and while on German trains it’s the intuitive green—open, red—close (or just a single button for open/close), Dutch trains have a yellow button for opening the doors and a green (or blue for random Japanese) button for closing them. Honestly, it should be intuitive to have a green button open the doors so you can go. And I’d like to hear a reason behind this besides “well, cannabis is legal in the Netherlands”.

Next, timetables. Those are confusing as well. At least in Rotterdam the timetables are hung separately for each fork (well, each should be a line, but most of them are drawn as forks, which probably means the train splits at some point and the parts head in two different directions)—maybe it’s this convoluted system that made them invent InterCity Direct too (don’t ask me how that’s different from normal InterCity). And there’s a separate timetable for international trains. Confusing.

And as if that was not enough stupidity, they decided to install turnstiles in Rotterdam Centraal, so in order to enter or leave the station you need to scan your ticket (yes, it’s like what you have in underground systems, but in this case for a rail station). And it might be the only station there with such a feature: I saw nothing like that in Amsterdam C when I visited a couple of years ago, nor in The Hague two days ago.

Speaking of The Hague, they have a stupid station name—Den Haag HS, where the last two letters stand for Hollands Spoor or Dutch Rail. I know only a few cases where such naming makes sense:

  • you have a station in the same town belonging to different railway operators e.g. in Basel you have the main station operated by CFF and so it’s called Basel SBB, French railways have their own section there called Elsässerbahnhof or Bâle SNCF and there’s a station used to belong to Baden Railways that is still called Basel Badischer Bahnhof;
  • you had a competing rail operator and the name stuck (a variation of the above really)—e.g. stations on track Bullay (DB)—Traben-Trarbach(DB) are called so because there was another rail line (on the other side of Mosel) with the same stations and when it was closed nobody wanted to rename stations just because;
  • you’re SNCF and you want to mark your stations because they’re yours and no foreign train should set wheel there!

And as far as I know none of this applies to The Hague. I suspect it happened because they built a new station much later (more than a century later), designated it as the central one and could not come up with a good name for the old station. It’s as if in Germany they renamed the station Hamburg-Altona to Hamburg Hbf and Hamburg Hbf to Hamburg DB. In other words, pointless and stupid.

Overall, it was an interesting experience travelling Belgium and Netherlands but I did not expect that much stupidity from the latter. Anyway, the next post should be about Rust.

Wednesday, 25 April

Since I don’t have enough time to visit a proper country I went to a bad substitute for Sweden that’s much more accessible—Switzerland (it should be obvious why I cannot call it a poor or cheap substitute). Since it happened on Easter (April 1–2), the environment resembled Sweden: snow, mountains, deer and log sheds. And of course I could ride trains in new locations!

Rhaetian railways is a narrow-gauge railway system in the canton of Graubünden (whose symbol uncannily resembles the one from Gävle), a fractal part of Switzerland occupying its south-east corner (fractal in the sense that the canton’s shape looks almost exactly like the shape of the whole of Switzerland). The trains run through picturesque scenery past places with dreadful names like Fhtagn (or Ftan in Swiss-Cthüelsch) or SaaS (they really have a station with such a name!), going up into the mountains (in the 1–2 km above sea level range), and I spent a couple of days travelling around.

But while the scenery is okay, the railways are some unholy mix of Berlin S-Bahn, Czech and German railways:

  • There are German ICEs running there all the way to Chur (so I could travel home without any transfers);
  • The tracks are curvy and trains are as slow as in Czechia (i.e. no matter where you go it will take you at least an hour or two to get there);
  • Prices are like in Czechia too except they use Swiss Francs instead of Czech Koruna—but the numbers are about the same (so it seems I can ride an ICE here cheaper, faster and over a much longer distance than with RhB);
  • Another thing like in Czechia: buying a ticket with a card involves a 1,5€ surcharge. No such thing in Sweden;
  • Narrow-gauge trains are a weird mix themselves: they can put the locomotive at the front of the train, at the end (maybe), in the middle (very common) or just couple a typical EMU with a number of conventional rail carriages (I’m not sure I’ve seen that anywhere else);
  • Weird station names: I can understand when you name a station after two places at once like Reichenau-Tamins (that’s common in Germany too) or even if you name it after the same place twice like Disentis/Mustér (it’s Confoederatio Helvetica, the natives can’t agree on a single name for anything) but Tavanasa-Breil/Brigels is definitely too much (it’s a station between the two mentioned earlier, BTW);
  • It’s afraid of snow: after even an insignificant amount of snow they stop running on some routes: during my stay the trains on the Pontresina–Tirano and Disentis/Mustér–Andermatt routes were cancelled for an indefinite number of days. In Germany trains are more punctual—if they are late they’re late by dozens of minutes, not days. And if something bad happens and trains can’t run some route for days then you can see information everywhere, including how to get around the problem and such. No such thing in Switzerland;
  • And another thing that’s taken from German S-Bahn is timetables and tickets. This requires a separate rant.

Overall, FFS or RhB is not very friendly to a traveller: you should have a definite idea of where you are going, when (at which time and such) and how (i.e. where to transfer) if you want to buy a ticket. For example, I was at the station Chur-West and wanted to go to Scuol-Tarasp. The ticket vending machine offered me a choice of three options: via Samedan, via Chur-Samedan (i.e. go first to Chur main station, from there to Samedan and then to Scuol) or via Vereina. The last option is actually a tunnel and not a station name!

In Germany when you travel with long-distance trains you actually choose one of the provided connection possibilities (e.g. InterCity from A to B, RegioBahn from B to C and ICE from C to D, or InterCity from A to E and then from E to D), or you can use the provided route-planning functionality even if you don’t buy a ticket. SBB ticket machines simply allow you to buy a ticket from A to B, maybe with a cryptic route midpoint, and that’s all! That’s exactly how German ticket vending machines for regional transport work. And there’s yet another point of annoyance: Swiss rail timetables fail to include the arrival time at the final destination, so if you care about it (like I sometimes do) you have to find it out via other means. It’s plain stupid.

Oh, and the snow-related problem: when you buy a ticket you can’t be sure the train will actually go there, because the only cryptic warning I got was the ticket machine saying my ticket would be valid in the April 1st–April 9th period (and a much later one in the train itself). In Germany the machine actually shows warnings when there’s some problem with a train or it’s cancelled entirely (since you can still use the ticket later). I actually had a situation where one segment of my journey was served by a train that broke down and I had to take another train later instead. So it feels like you should rather use a smartphone and buy tickets online, where you can see the actual route and warnings (and probably use bahn.de instead of cff.ch where possible).

Overall, travelling with Rhaetian railways was both a pleasant and exciting experience in some aspects (i.e. when I was inside the train) and confusing and frustrating experience in others (i.e. when I actually tried to buy a ticket). They also boast how some parts of the system are the third railway in UNESCO World Heritage Railways (the second after India, I guess) and how picturesque some parts are (they are almost as interesting as Sauschwänzlebahn indeed) but as I’ve seen it all there’s no reason to return there (and the reliable source says there are better places in Switzerland to wait over heat waves too).

Saturday, 14 April

Since I don’t have any urges to work on NihAV at this moment (big surprise, I know) I’ll talk about cooking instead.

Since I don’t know how to cook and never had any kind of culinary education, I divide dough into three main categories: puffy (the one that expands while baking), non-puffy (the one that keeps about the same volume) and runny (usually used for pancakes but we’ll talk about them later).

Non-puffy dough is the easiest to make: just mix flour and water (take either boiling water or very cold water for good results). It is ideal for simple filled dishes like вареники (varenyky) or Karelian rice pasties (I’ve made both and shall probably make them again). The next level is so-called shortcrust pastry, which is used for pies, quiches and such. Here you usually mix flour with some fat (called shortening).

And there we have a variety of what to use for shortening:

  • classical recipes use butter—I’ve cooked stuff using it and it works fine except that it takes too much butter for my liking;
  • French people obviously prefer margarine (since it’s their invention)—I see no reason to try it;
  • Brits prefer some weird animal fat called suet—I feel queasy just thinking about it so it gets a definite no from me;
  • USians use chemically processed vegetable shortening; I tried it once: I ordered a can of Crisco shortening, followed the recipe for pie crust and the result was bad. I’d stick to the other two options listed here. Fun fact: while searching for it on Amazon most offers were from sex shops where it’s apparently offered as a lubricant. I can see why—that stuff is sticky and slick and not fit for baking. Also, since one of the sellers offered it along with various sweets (and what passes for them in the USA) I ordered some of those and tried them—I was not impressed by that stuff either.
  • and finally there’s the German variant that I find very good, called Öl-Quark-Teig (dough made from oil and quark). You mix flour with several spoonfuls of oil (you can choose a different oil for different flavouring of course, which is a nice feature) and Magerquark (lean homogenous cottage cheese) and that’s all! You can add an egg and/or baking powder, but it’s fine as is.

Puffy dough is the trickiest one—the puffiness comes from bubbles in the dough and it takes extra effort to do that. The easiest way is simply to add baking powder (or baking soda reacting with vinegar) to the dough, the other conventional ways are to prepare yeast (cultured or uncultured, either way it takes time and some effort) or make bubbles from eggs which requires some skill that I lack (so I stick to baking powder). There are two recipes that work for me: mixing flour, eggs, butter and sugar (aka the usual cake mix) or öl-quark dough with sugar, egg and baking powder.

Runny dough (is it called batter?) can be made by mixing flour with a lot of liquid and some eggs and then used to make pancakes. Since it’s the only thing I’ve done with it so far let’s talk about them.

There are several kinds of pancakes that I know and tried so far:

  • French-style thin pancakes (aka crêpes) that are better eaten fresh with something rolled in;
  • Dutch laughably small pancakes (that have a name almost like an Australian word for gay—probably the words have the same origin);
  • common pancakes—thicker than crêpe, plain, good to eat with something on top or with some filling rolled in;
  • slightly thicker pancakes with something embedded in them (like bits of ham).

And of course Sweden has nice varieties of pancakes in wide range: ordinary pancakes, pancakes with bits of ham, pancakes with potatoes (I tried those and approve) and pancakes for people like me who can’t do anything right with their hands (including flipping pancakes)—ugnspannkaka, i.e. pancakes baked in oven. Obviously that one is much thicker than the rest but it’s easy to make (even I baked some) and it can embed various stuff too which makes it interesting (bits of ham, fish or even fruit). Also this way you’re more likely to end with rectangular pancakes which I find to be a nicer and more versatile shape than usual round ones.

I forgot to mention one local thing—in Baden-Württemberg they have plain pancakes shredded into thin strips, dried and then added to soups when served. They’re called Flädle and you can buy them in every local supermarket (even Aldi). It’s a nice addition to a soup IMO.

Okay, now back to doing anything but coding.

Sunday, 18 March

Today I wanted to talk about two features that are quite important for multimedia decoding but are quite inconvenient in the current state.

First, macros. I know that macros in Rust are both very powerful and quite flexible, but they are hard to use for data definition and I’ve ranted about that before. The problem is that quite often you have tables with some internal structure that would benefit from macro substitutions: if you have a codebook constructed from entries following patterns like a, b, -a, -b and a, b, a, -b, -a, b, -a, -b, it would be easier and less error-prone to represent them as e.g. FLIP2#(a, b) and FLIP4#(a, b) inside the data definition. The problem is that macro! does not allow you to do that easily, since it’s supposed to expand into valid statements (i.e. code or full data definitions). Of course you can work around it by making a set of macros that define the whole array with some bits inside it (see the sketch below), but that’s what makes it unwieldy. And that’s why I believe there should be another macro substitution mechanism, maybe named macro#, that would work just on data; it would be much easier to use in this particular case.
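For illustration, here is a hedged sketch of that workaround with today’s macro_rules! (the names build_codebook, flip2 and CODEBOOK are mine): the macro has to produce the entire array, because an invocation cannot expand into a bare a, b, -a, -b fragment inside a data definition.

macro_rules! build_codebook {
    ($( flip2($a:expr, $b:expr) ),*) => {
        // each entry expands into the a, b, -a, -b pattern
        [ $( $a, $b, -$a, -$b ),* ]
    };
}

const CODEBOOK: [i16; 8] = build_codebook!(
    flip2(1, 2),
    flip2(3, 4)
);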

The second issue is assembly integration. Despite Rust being fast and all, it’s still better to write small critical functions in assembly. And obviously it would be better if Cargo supported including assembler files in a crate. You can point out that there’s stdsimd for using the power of SIMD without much hassle. I can point out that compiler-generated code is still far from perfect even with intrinsics and hand-written assembly is still better; supporting querying SIMD capabilities via a standard package is good though. And you can point out that there’s already a special crate for building various files with various compilers/assemblers. I’d say that it’s a bit too generic, but at least it can serve as a base for what I need. Again, there’s a more or less standard way to deal with assembly files, so making a common standard should not be hard.
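As a rough sketch of what is already possible (assuming the third-party cc crate, which hands .S files to the system C compiler and its assembler; the file and function names below are hypothetical), a build.rs plus an extern declaration is enough to link hand-written assembly today:

// build.rs (needs cc in [build-dependencies])
fn main() {
    cc::Build::new()
        .file("src/arm/idct_neon.S") // hypothetical assembly source
        .compile("idct_neon");       // assembles it and links libidct_neon.a
}

// somewhere in the crate
extern "C" {
    fn idct_4x4_neon(coeffs: *mut i16);
}

It is not as neat as first-class Cargo support would be, but it covers the basic use case.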

And in the unlikely case somebody reads this and asks why I don’t form an RFC—from what I heard it involves proposing code as well and I don’t want to study the compiler nor waste days compiling it.

Saturday, 03 March

Surprisingly, there’s still some life in NihAV and some progress from time to time.

So I’ve debugged RealVideo 2 decoding and verified the B-part of PB-frame reconstruction in the Intel.263 decoder against the binary specification. Mind you, the latter is not likely to ever be supported by libavcodec. First, it’s a fringe feature for extremely old video codecs nobody cares about any more and, second, unlike in later codecs, the B-part is stored along with the P-frame data (i.e. first you have the macroblock header for the P- or I-macroblock, then the macroblock header for the B-macroblock, then macroblock coefficients for the P-part and then macroblock coefficients for the B-part). Other codecs simply pack the B-frame along with the reference frame, but here the data is interleaved. I added some support for skipping the B-part in the libavcodec H.263 decoder (exactly nine years ago!) but decoding two frames in parallel would require some serious hacking of the infamous MpegEncContext-using core, so it’s very unlikely to happen.

And directions for near future still include RealVideo 3/4 and all RealAudio codecs. Fun fact: two of those are patent-free now—ATSC A/52 aka DNET and AAC-LC (but probably not SBR extension used in racp version). So if you implement them now you can flip a middle finger to both D*lby and Ferkel-herzen-Gesellschaft since new decoders can’t be covered by patent licenses. Not that I cared about it before.

Wednesday, 07 February

I guess everybody else has reacted to his post about the MPEG crisis so I can do that as well. So, $postname—most people just don’t understand his outlook. If you interpret his words from his point of view it’s clear he’s right about most things.

His post starts with a list of their achievements in terms of standards produced. Some people claim that most of them don’t have much to do with MPEG: MPEG-1 video is based on H.261, MPEG-2 video is a joint effort (and it is H.262), MPEG audio layer III was developed at Fraunh*fer, MP4 looks suspiciously like the A**le MOV format etc. etc. For that I’d like to quote Zvirmarillion (one of the best parodies ever written, but since it’s in Russian hardly anybody will read it):

…First of all two lampposts were made, Aulë crafted two lamps, Varda filled them with kerosene, and Manwë signed the act of acceptance.

You may not see it as such, but from a bureaucratic point of view proper paperwork is much more important than the actual work done. And the words “business model” occur in the text shortly after that (and I’m sure the quotes are in the original text because MPEG is not a for-profit organisation), which explains it: MPEG participants pool their research, they pick the bits for the “best performance” standard (this is true, but it’s always a moving goal) so that the result can be accepted by the industry on its merit and give profit to the MPEG participants that “donated” the technology (profit in the form of licensing patents or exploiting their intimate expertise in the area, whatever). So it’s more about the business than the technology really.

And if you think about it, then indeed the great danger to this model is third party patent holders. From MPEG participants’ point of view they are either companies who own crucial technology used in a standard and could not agree with each other on licensing terms (it’s no secret that their relations are complicated) or parasites who try to profit off MPEG work because they own a blanket patent. From outsider’s point of view there is no difference—they all are greedy bastards without a trace of common sense. Actually they are not different from many other companies, e.g. D*lby Labs Licensing Corporation (you’d be surprised to find out how many H.264 or AAC decoders you can license for a single AC-3 decoder), except that some ITU H.EVC patent cesspools want you to pay by usage (i.e. streams encoded/distributed). Obviously that’s where any sane entity would start to search (or make) alternatives.

The solution he proposes makes perfect sense for the goals he sees before him (serving interests of MPEG members who allow him to travel around the world at somebody else’s expense). The fact that consumers want cheaper devices and less hassle with playing videos from various sources is a completely different problem outside of their scope (the standards are made for the vendors after all).

Now the bit about AOM being a threat to the whole industry is worth discussing further. First of all, remember ITU T.81 (aka JPEG). It was accepted in 1992 and it was arranged that everybody could get a royalty-free patent licence for it—except for the Q-coder owned by !BM—and guess why it became widespread (except for the arithmetic coding mode). Of course companies lost any incentive to research image compression further, funding to universities was slashed and nobody has worked on image compression ever again. But if you look at it from a standardisation point of view then this is exactly what has happened: JPEG is still alive and kicking while later standards like JPEG-2000, JPEG-XR or JPEG-WTF failed to supplant it. Same for audio: there are countless variations on the AAC theme (including xHE-AAC, which is almost completely unlike HE-AAC) despite people using Vorbis and nowadays Opus too.

One would argue that it’s harder and harder to create new codecs. But the building blocks are common so most of modern video codecs follow the same scheme as ITU H.EVC (still waiting for PERSEUS2 BTW). And people still keep creating screen capture or lossless audio codecs simply because they can.

Now one can bring up industry funding of research. I’d say they still have to finance research in other areas anyway (look at the companies participating in MPEG—most of them are known for many products that have not much to do with multimedia—so they’ll survive regardless). And research is such a tricky thing that results will happen in one area even if you were looking for something else. But committee funding may well be slashed, and obviously when you’re the chairman of one you care.

The other fun thing to remind is that many companies poured money into creating their own multimedia format ecosystems and while it seems those days are over, they are not. RealNetworks is still buffering, Baidu is trying to make its own ecosystem out of former On2 codecs etc etc (now it’s called AOM). And it will probably remain the same: there will be few accepted formats per ecosystem (like Baidu VP9/WebMKV or H.264+AAC for A**le) and there are means to recode almost any input into anything else so you can accept any user input. And if it gives you a competitive advantage (aka vendor lock-in), why not do it?

Of course the part of industry that depends on interoperability the most will be hurt the most too. But if you care about broadcasting then you know they’re completely fine with living on legacy, so it’s no big change for them.

In conclusion I want to say that there are many outlooks on the same thing and they may be quite different, the same facts making sense in one system of the world but not in another. And it’s often quite useful to try to have a look from another point of view. For example, the MP3 design is weird, with MDCT applied after a QMF, which is hard to explain from a technical point of view; but an anecdote (i.e. I have no proof of the story being true) says that a certain lamp-producing company had patents on QMF technology and forced its inclusion into the codec—see, now it makes sense! And that’s why reading Cyril Northcote Parkinson’s works may be as beneficial as reading a textbook on signal processing.

Saturday, 03 February

So I’ve finally written a decoder for ClearVideo in NihAV and it works semi-decently.

Here’s a twentieth frame of basketball.avi from the usual sample repository. Only the first frame was intra-frame, the rest are coded with just the transforms (aka “copy block from elsewhere and change its brightness level if needed too”).

As you can see there are still serious glitches in decoding, especially on bottom and right edges but it’s moving scene and most of it is still good. And the standard “talking head” sample from the same place decodes perfectly. And RealMedia sample is decoded acceptably too.

Many samples are decoded quite fine and it’s amazing how such a simple method (it does not code the residue, unlike other video codecs with inter frames!) still achieves good results at a reasonable (for that time) bitrate.

Hopefully there are not so many bugs in my implementation to fix so I can finally move to RealVideo 3 and 4. And then probably to audio codecs before RealVideo 6 (aka RealMedia HD) because it needs REing work for details (and maybe wider acceptance). So much stuff to procrastinate!

Update: I did MV clipping wrong, now it works just fine except for some rare glitches in one RealMedia file.

Sunday, 21 January

I don’t know whether it’s Sweden in general or just proper Swedish Trocadero but I’ve managed to clarify some things in ClearVideo codec.

One of the main problems is that the binary specifications are full of cruft: thunks for (almost) every function in newer versions (it’s annoying) and generic containers with all kinds of stuff included (so you have lists whose elements carrying the actual payload are different kinds of classes—it was so annoying that I managed to figure it all out only this week). Anyway, complaining about obscure and annoying binary specifications is fun but it does not give any gain, so let’s move on to the actual new information and the clarified old information. Plus the codec has several different ways of coding information depending on various flags in the extradata.

The codec has two modes: intra frames coded à la JPEG and inter frames that are coded with fractal transforms (and nothing else). A fractal frame is split into tiles of a predefined size (that information is stored in the extradata) and those tiles may be split into smaller blocks recursively. The information for one block is the plane number, flags (most likely showing whether the block should be split further), a bias value (to be added to the transformed block) and a motion vector (a byte per component). The information is coded with static codebooks and depends on the coding version and context (it’s one set for version 1, another for version 2 and a completely different single codebook for version 6). The codebooks are stored in the resources of the decoder wrapper, the same as the DCT coefficient tables.
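For clarity, the per-block information described above boils down to something like this (just a sketch, the field names are mine and not taken from the binary specification):

struct BlockInfo {
    plane: u8,       // plane number
    flags: u8,       // e.g. whether the block should be split further
    bias:  i16,      // value added to the transformed block
    mv:    (i8, i8), // motion vector, a byte per component
}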

Now, the extradata. After the copywrong string it actually has the information used in the decoding: picture size (again), flags, version, tile sizes and such. Fun thing is that this information is stored in 32-bit little-endian words for AVI but it uses big-endian words for RealMedia and probably MOV.

And the tables. There are two tables: CVLHUFF (a single codebook definition) and HUFF (many codebooks). Both have a similar format: first you have a byte array of code lengths, then a 16-bit array of the actual codewords (or you can reconstruct them from the code lengths the usual way—the shortest code is all zeroes and after that they simply increase) and finally a 16-bit array of symbols (just bytes in the case of 0x53 chunks in HUFF). The multiple-codebook definition has an 8-byte header and then codebook chunks in the form [id byte][32-bit length in symbols][actual data]; there are only 4 possible ID bytes (0xFF—empty table, 0x53—a single byte per symbol, the rest as described above). Those IDs correspond to the tables used to code the 16-bit bias value, the motion values (as a pair of bytes with a possible escape value) and the 8-bit flags value.
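Reconstructing the codewords from the code lengths “the usual way” mentioned above is just canonical code assignment; here is a sketch (the function name is mine):

fn lengths_to_codes(lens: &[u8]) -> Vec<u16> {
    let mut codes = vec![0u16; lens.len()];
    let max_len = lens.iter().cloned().max().unwrap_or(0);
    let mut code = 0u16;
    for cur_len in 1..=max_len {
        for (i, &len) in lens.iter().enumerate() {
            if len == cur_len {
                codes[i] = code; // the shortest code starts at all zeroes
                code += 1;       // codes of the same length simply increase
            }
        }
        code <<= 1; // longer codes continue from here
    }
    codes
}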

So, overall structure is more or less clear, underlying details can be verified with some debugging, and I hope to make ClearVideo decoder for NihAV this year. RMHD is still waiting 😉

Tuesday, 09 January

Sweden has a lot of local bus routes and every region (or län) has its own most popular bus route:

  • for Stockholm and Örebro län it’s “Ej i trafik” (something like “not participating in public transit service”, “trafik” in Swedish often means both [car] traffic and public transport service);
  • for Södermanland it’s “Är ej i trafik” (“Is not in service”);
  • in Östergötland it’s “Tyvärr, ej i tjänst” (“Sorry, not in service”).

The joke is that while there are many numbered bus routes (hundreds in Stockholm län), the regulations make bus drivers rest after completing a route so quite often a bus arrives to the end station, unloads all passengers, changes its route number to the one above and goes away; then, obviously, another bus (or the same one after the driver has rested) comes to pick passengers. Since I almost never travel by bus in Germany (we have trams here after all), most of my bus trips happened in Ukraine and Sweden—and those countries differ in approach to drivers indeed.

Another interesting thing is the variety of buses: in Stockholm län you have buses going on trunk lines—quite often those are articulated buses and they’re always painted blue—and ordinary buses (always red); some buses are double-deckers, like on bus route 676 (Stockholm-Norrtälje) and some coaches are double-deckers too (I still fondly remember travelling on top floor of one from Luleå to Sundsvall—no fond memories about Ukrainian bus trips though). And in Norrland they still have skvaders (aka buses with additional cargo departments). Also buses in Stockholm län quite often have USB chargers for every seat and even WiFi—everything for passenger comfort.

It’s quite interesting that some bus routes are operated by two buses: for example, if I want to get from Bromma to Portugal (a place on Adelsö island near Stockholm) I’d take bus 312, which goes to Sjöangen; there I’d step out and get into the new bus 312 waiting there while the previous one goes off to rest. Also the route involves a ferry crossing, which I also like for some reason.

So there’s something interesting about Swedish buses after all. But railways are still much better (more comfort, higher speeds, less problems from car traffic etc etc) and definitely more awesome (I’ve witnessed rail bus pushing a fallen fir from the tracks less than a week ago—try finding an ordinary bus doing that). But it’s still nice to know that Sweden has good things beside people, trains, food, drinks and nature.

P.S. This seems to have gone a bit further than just describing how popular bus routes differ in various Swedish regions. Hopefully my upcoming NowABitClearerVideo post would go the same way.

Friday, 08 December

Well, I intended to end my review but I was reminded that there are even more Dingo Pictures works that I’ve missed. So let’s look at those.

Moving pictures

For some reason Germany has a format that is not exactly cartoons but rather still pictures illustrating a story, with some narration to accompany them (in order to make them less boring some effects may be applied to the pictures, like panning or zooming in and out—hence the title I gave them). There were a couple of those on the DVD set with some Dingo Pictures cartoons. I’d never seen such things before and something tells me I didn’t miss much.

Anyway, probably before transitioning to fully animated pictures Dingo had produced a couple of those. Their website lists these:

  • Fröhliche Weihnachte (i.e. Merry Christmas) — can be about anything really;
  • Weihnachtsgeschichten (i.e. Christmas Stories) — see above;
  • Hampie, ein kleiner Wal entdeckt seine Welt (i.e. Humpie the Little Whale Explores the World) — I guess the title is the spoiler.

But it turns out there are more of them, probably lost in the VHS era; here are some details about two of them.

Es weihnachtet sehr… (i.e. It Was Christmasing Much)

Here’s just one picture from it that explains it all:

And here’s the remake:

Indeed, it seems to be the same story just set around Christmas and with some snowy landscapes here and there. It may also be one of two stories mentioned above or it may be something different.

Bunny

I suspect this story had a different name but the only source I could find was in Italian with the title card replaced.

The story is about Easter Bunnies living in a village and painting Easter Eggs:

But unfortunately a bandit kidnaps one of the younger bunnies and demands a ransom:

Obviously his plans are foiled:

… and he has to serve a term in prison:
The title is an obvious pun on Alcatraz and Hase (German for bunny or hare).

Happy end!

This story might be a prequel to the later Easter Bunnies story that has the same setting but heroes are different and the story itself differs too.

Perseus

Before Hercules, Dingo Pictures tried its hand at adapting another set of Greek myths.

The story is straightforward and seems to cover all the major points mentioned in the myths.

Medusa sisters, the most famous encounter for Perseus

This is probably the first fully animated Dingo Pictures cartoon and it has a slightly strange style compared to the usual one: many characters don’t follow the canonical look they got in later films and all the backgrounds seem to be drawn on a computer instead of the hand-drawn backgrounds we see in other cartoons. Yet it has its share of canonical Dingo Pictures characters, tropes and style, so you won’t mistake it for any other studio’s work.

Perseus—not quite the Dingo Young Man we’re accustomed to

Geese—exactly like many other of their cameos

The players—I don’t remember seeing one of them anywhere else but another one plays music (or cooks) in a lot of other stories

It’s not mentioned at the official site and I suspect it was written off as an experiment and not transferred to DVD with the rest. In any case it was an interesting experience.

Sunday, 30 April

I decided to write SIMD optimizations for HEVC decoder inverse transform (which is IDCT approximation) for ARMv7. (Here is an interesting post about DCT.) The inverse transform for HEVC operates on 4x4, 8x8, 16x16 and 32x32 blocks and I have finished them recently. For each block there are 2 functions, one for 8 bitdepth and the other for 10 bitdepth:

  • 4x4 block: ff_hevc_idct_4x4_8/10_neon, the speed up vs C code on A53 core ~3x
  • 8x8 block: ff_hevc_idct_8x8_8/10_neon, speed up (A53) ~4x (github)
  • 16x16 block: ff_hevc_idct_16x16_8/10_neon, speed up (A53) ~8x (github)
  • 32x32 block: ff_hevc_idct_32x32_8/10_neon, speed up (A53) ~13x (github)
Here are some things I learned about NEON ARMv7.
  • The values of q4 - q7 have to be preserved (with vpush/vpop {q4-q7} ) when one wants to use them. VPUSH/VPOP pushes and pops to/from stack.
  • Try to do things in parallel because many of the smaller ARM cores do not have out-of-order execution like x86 does.
  • Do not forget to preserve LR (link register) value when calling a function. When LR value is preserved it can be used as any other GPR.
  • If LR value was pushed to stack, one does not have to do pop lr and then bx lr to return but it's better to return with simply pop {pc} .
  • Use VSHL (Vector Shift Left) instead of VMUL (Vector multiply by scalar) when possible, it's much faster. (The same is valid in general, for example for x86.)
  • Align loads/stores when possible, it's faster.
  • To align the stack and allocate a temporary buffer there (rx is some GPR): mov rx, sp; and rx, sp, #15; add rx, rx, #buffer_size; sub sp, sp, rx — sp now points to the buffer. After using the buffer, the stack pointer has to be restored with add sp, sp, rx.
  • Try to keep functions small. If you need to invoke some big macro several times, make a function out of that macro. Functions that are too big may fail to build and may hurt performance.

  • Always try to play with the instruction order when it is possible and benchmark the results. But what improves the performance on one core (mine is A53) may cause (or may not) a slowdown on some other core (A7, A8, A9).
Many of the things I learned can be found in the ARM Architecture Reference Manual (ARMv7-A edition) or other ARM documentation, so it is important to read such documents. Many thanks to Kostya Shishkov, who introduced me to ARM, and many thanks to Martin Storsjö, an ARM expert who reviewed my patches and helped me a lot with optimizing them.

Friday, 28 April

I tried my skills at optimising HEVC. My SIMD IDCT (Inverse Discrete Cosine Transform) for HEVC decoder was merged lately. What I did was 4x4, 8x8, 16x16 and 32x32 IDCTs for 8 and 10 bitdepths. Both 4x4 and 8x8 are supported on 32-bit CPUs but 16x16 and 32x32 are 64-bit only.

The larger transforms call the smaller ones: 32 calls 16, 16 calls 8 and so on, so 4x4 is used by all the other transforms. Here is how the actual assembly looks:
; void ff_hevc_idct_4x4__{8,10}_(int16_t *coeffs, int col_limit)
; %1 = bitdepth
%macro IDCT_4x4 1
cglobal hevc_idct_4x4_%1, 1, 1, 5, coeffs
mova m0, [coeffsq]
mova m1, [coeffsq + 16]

TR_4x4 7, 1, 1
TR_4x4 20 - %1, 1, 1

mova [coeffsq], m0
mova [coeffsq + 16], m1
RET
%endmacro
*coeffs is a pointer to the coefficients I want to transform. They are loaded to XMM registers and then the TR_4x4 macro is called. This macro transforms the coeffs according to the following equations:

res00 = 64 * src00 + 64 * src20 + 83 * src10 + 36 * src30
res10 = 64 * src01 + 64 * src21 + 83 * src11 + 36 * src31
res20 = 64 * src02 + 64 * src22 + 83 * src12 + 36 * src32
res30 = 64 * src03 + 64 * src23 + 83 * src13 + 36 * src33

Because the transformed coefficients are written back to the same place, "res" (as in residual) is used for the results and "src" for the initial coefficients. The results of the calculations are then scaled as res = (res + add_const) >> shift and the (4x4) block of results is transposed. The macro is then called again to perform the same transform on the rows.
; %1 - shift
; %2 - 1/0 - SCALE and Transpose or not
; %3 - 1/0 add constant or not
%macro TR_4x4 3
; interleaves src0 with src2 to m0
; and src1 with scr3 to m2
; src0: 00 01 02 03 m0: 00 20 01 21 02 22 03 23
; src1: 10 11 12 13 -->
; src2: 20 21 22 23 m1: 10 30 11 31 12 32 13 33
; src3: 30 31 32 33

SBUTTERFLY wd, 0, 1, 2

pmaddwd m2, m0, [pw_64] ; e0
pmaddwd m3, m1, [pw_83_36] ; o0
pmaddwd m0, [pw_64_m64] ; e1
pmaddwd m1, [pw_36_m83] ; o1

%if %3 == 1
%assign %%add 1 << (%1 - 1)
mova m4, [pd_ %+ %%add]
paddd m2, m4
paddd m0, m4
%endif

SUMSUB_BADC d, 3, 2, 1, 0, 4

%if %2 == 1
psrad m3, %1 ; e0 + o0
psrad m1, %1 ; e1 + o1
psrad m2, %1 ; e0 - o0
psrad m0, %1 ; e1 - o1
;clip16
packssdw m3, m1
packssdw m0, m2
; Transpose
SBUTTERFLY wd, 3, 0, 1
SBUTTERFLY wd, 3, 0, 1
SWAP 3, 1, 0
%else
SWAP 3, 2, 0
%endif
%endmacro
The larger transforms are a bit more complicated but they work in a similar way.

    The results were benchmarked with the checkasm bench_new() function for bit depth 8 (they are similar for bit depth 10). Checkasm can benchmark SIMD functions with the --bench option; in my case the full command was:

    ./tests/checkasm/checkasm --bench=hevc_idct

    The overall HEVC performance was benchmarked with perf:

    perf stat -r5 ./avconv -threads 1 -i sample.mkv -an -f null -

    The sample details: duration 0:12:14, bitrate 200kb/s, yuv420p, 1920x1080, Divx encode of Tears of Steel. The result is a 10% speed-up after my SIMD optimisations.

    Many thanks to Kostya Shishkov and Henrik Gramner for their advice during the development process.

    Sunday, 22 January

    I asked Kostya Shishkov, an experienced ARM developer, to check my basic NEON knowledge. So here are his questions and my answers to them:

    • Where do you find information about instruction details?
    • ARM Architecture Reference Manual ARMv7-A and ARMv7-R edition.
    • How many SIMD registers are there and how can you address them?
    • SIMD registers can be viewed as 32 64-bit registers named D0 - D31 or as 16 128-bit registers called Q0 - Q15. The VFP view as 32 32-bit registers S0-S31 (mapped to D0-D15) can also be used.
    • In what ways can you load/store data to/from NEON registers and why use one over another?
      • use immediate constant to fill SIMD register:
      • vmov.i32 q0, #42 - move immediate constant 42 to q0 SIMD register, suffix i32 specifies the data type, 32-bit integer in this case, as q0 is 128-bit register, there will be 4 32-bit 42 constants
      • use GPR to store the offset for load/store instruction:
      • mov r1, #16 - move the number 16 to the r1 GPR; add r0, r1 - add 16 bytes to the address stored in r0; vst1.s16 {q0}, [r0] - store the content of q0 to the address stored in r0
      • update the address after loading (storing)
      • add r1, r0, \offset - add the offset to r0 and store the result in r1; mov r2, \step - move the constant step to r2; vld1.s16 {d0}, [r1], r2 - load the data at the address in r1 into d0, then update r1 = r1 + r2
      • move data between GPR and SIMD registers:
      • vmov d0[0], r1
    • Why is it better to use different registers for all the arguments of a NEON instruction (for example, vadd.s16 q1, q2, q3 instead of vadd.s16 q1, q1, q2)?
    • Because it is faster in some cases.
    • What are the differences between vld1.16 q1, [r0] and vld1.16 q1, [r0,:128]?
    • :128 means the data are loaded/stored 128-bit aligned. An aligned store means the address I'm storing at minus the address of the first array member is a multiple of 128 bits (in this case).
    • Why some instructions use e.g. vxxx.i8 form and others use vxxx.s8 form?
    • i stands for integer, s for signed integer; i is used when the signedness does not matter and only the element size is needed.
    • Where would you use VZIP, VUZP and VTRN?
    • Those are useful for matrix transposition and various data shuffling.
    • When one NEON instruction can replace several different operations in other SIMDs?
      • VMLAL - multiply corresponding elements in 2 vectors and add the products to destination vector
      • VQRSHRN - Vector Saturating Shift Right Narrow - Shifts right vector elements and clips them into narrower element.

    Monday, 31 October

    Using hwaccel

    It had been a while since I mentioned the topic, and we have made huge progress in this field.

    Currently with Libav 12 we already have nice support for several different kinds of hardware for decoding, scaling, deinterlacing and encoding.

    The whole thing works nicely but it isn’t foolproof yet, so I’ll start describing how to set it up and use it for some common tasks.

    This post will be about Intel MediaSDK, the next post will be about NVIDIA Video Codec SDK.

    Setup

    Prerequisites

    • A machine with QSV hardware, Haswell, Skylake or better.
    • The ability to compile your own kernel and modules
    • The MediaSDK mfx_dispatch

    It works nicely both on Linux and Windows. If you happen to have other platforms feel free to contact Intel and let them know, they’ll be delighted.

    Installation

    The MediaSDK comes with either the usual Windows setup binary or a Linux bash script that tries its best to install the prerequisites.

    # tar -xvf MediaServerStudioEssentials2017.tar.gz
    MediaServerStudioEssentials2017/
    MediaServerStudioEssentials2017/Intel(R)_Media_Server_Studio_EULA.pdf
    MediaServerStudioEssentials2017/MediaSamples_Linux_2017.tar.gz
    MediaServerStudioEssentials2017/intel_sdk_for_opencl_2016_6.2.0.1760_x64.tgz
    MediaServerStudioEssentials2017/site_license_materials.txt
    MediaServerStudioEssentials2017/third_party_programs.txt
    MediaServerStudioEssentials2017/redist.txt
    MediaServerStudioEssentials2017/FEI2017-16.5.tar.gz
    MediaServerStudioEssentials2017/SDK2017Production16.5.tar.gz
    MediaServerStudioEssentials2017/media_server_studio_essentials_release_notes.pdf
    

    Focus on SDK2017Production16.5.tar.gz.

    tar -xvf SDK2017Production16.5.tar.gz
    SDK2017Production16.5/
    SDK2017Production16.5/Generic/
    SDK2017Production16.5/Generic/intel-opencl-16.5-55964.x86_64.tar.xz.sig
    SDK2017Production16.5/Generic/intel-opencl-devel-16.5-55964.x86_64.tar.xz.sig
    SDK2017Production16.5/Generic/intel-opencl-devel-16.5-55964.x86_64.tar.xz
    SDK2017Production16.5/Generic/intel-linux-media_generic_16.5-55964_64bit.tar.gz
    SDK2017Production16.5/Generic/intel-opencl-16.5-55964.x86_64.tar.xz
    SDK2017Production16.5/Generic/vpg_ocl_linux_rpmdeb.public
    SDK2017Production16.5/media_server_studio_getting_started_guide.pdf
    SDK2017Production16.5/intel-opencl-16.5-release-notes.pdf
    SDK2017Production16.5/intel-opencl-16.5-installation.pdf
    SDK2017Production16.5/CentOS/
    SDK2017Production16.5/CentOS/libva-1.67.0.pre1-55964.el7.x86_64.rpm
    SDK2017Production16.5/CentOS/libdrm-devel-2.4.66-55964.el7.x86_64.rpm
    SDK2017Production16.5/CentOS/intel-linux-media-devel-16.5-55964.el7.x86_64.rpm
    SDK2017Production16.5/CentOS/intel-i915-firmware-16.5-55964.el7.x86_64.rpm
    SDK2017Production16.5/CentOS/install_scripts_centos_16.5-55964.tar.gz
    SDK2017Production16.5/CentOS/intel-opencl-devel-16.5-55964.x86_64.rpm
    SDK2017Production16.5/CentOS/ukmd-kmod-16.5-55964.el7.src.rpm
    SDK2017Production16.5/CentOS/libdrm-2.4.66-55964.el7.x86_64.rpm
    SDK2017Production16.5/CentOS/libva-utils-1.67.0.pre1-55964.el7.x86_64.rpm
    SDK2017Production16.5/CentOS/intel-linux-media-16.5-55964.el7.x86_64.rpm
    SDK2017Production16.5/CentOS/kmod-ukmd-16.5-55964.el7.x86_64.rpm
    SDK2017Production16.5/CentOS/intel-opencl-16.5-55964.x86_64.rpm
    SDK2017Production16.5/CentOS/libva-devel-1.67.0.pre1-55964.el7.x86_64.rpm
    SDK2017Production16.5/CentOS/drm-utils-2.4.66-55964.el7.x86_64.rpm
    SDK2017Production16.5/CentOS/MediaSamples_Linux_bin-16.5-55964.tar.gz
    SDK2017Production16.5/CentOS/vpg_ocl_linux_rpmdeb.public
    SDK2017Production16.5/media_server_studio_sdk_release_notes.pdf
    

    Libraries

    The MediaSDK leverages libva to access the hardware, together with a highly extended DRI kernel module.
    They support CentOS with rpms and all the other distros with a tarball.

    BEWARE: if you use the installer script the custom libva would override your system one, you might not want that.

    I’m using Gentoo so it is intel-linux-media_generic_16.5-55964_64bit.tar.gz for me.

    The one bit of this tarball you really want to install in the system no matter what is the firmware:

    ./lib/firmware/i915/skl_dmc_ver1_26.bin
    

    If you are afraid of adding custom stuff on your system I advise to offset the whole installation and then override the LD paths to use that only for Libav.

    BEWARE: you must use the custom iHD libva driver with the custom i915 kernel module.

    If you want to install using the provided script on Gentoo you should first emerge lsb-release.

    emerge lsb-release
    bash install_media.sh
    source /etc/profile.d/*.sh
    echo /opt/intel/mediasdk/lib64/ >> /etc/ld.so.conf.d/intel-msdk.conf
    ldconfig
    

    Kernel Modules

    The patchset resides in:

    opt/intel/mediasdk/opensource/patches/kmd/4.4/intel-kernel-patches.tar.bz2
    

    The current set is 143 patches against Linux 4.4; trying to apply them to a more recent kernel requires patience and care.

    The 4.4.27 works almost fine (even btrfs does not seem to have many horrible bugs).

    Libav

    In order to use the Media SDK with Libav you should use the mfx_dispatch from yours truly, since it provides a default for Linux so it behaves in a uniform way compared to Windows.

    Building the dispatcher

    It is a standard autotools package.

    git clone git://github.com/lu-zero/mfx_dispatch
    cd mfx_dispatch
    autoreconf -ifv
    ./configure --prefix=/some/where
    make -j 8
    make install
    

    Building Libav

    If you want to use the advanced hwcontext features on Linux you must enable both the vaapi and the mfx support.

    git clone git://github.com/libav/libav
    cd libav
    export PKG_CONFIG_PATH=/some/where/lib/pkgconfig
    ./configure --enable-libmfx --enable-vaapi --prefix=/that/you/like
    make -j 8
    make install
    

    Troubleshooting

    Media SDK is sort of temperamental and the setup process requires manual tweaking, so the odds of having to debug and investigate are high.

    If something misbehaves, here is a checklist:
    • Make sure you are using the right kernel and you are loading the module.

    uname -a
    lsmod
    dmesg
    
    • Make sure libva is the correct one and it is loading the right thing.
    vainfo
    strace -e open ./avconv -c:v h264_qsv -i test.h264 -f null -
    
    • Make sure you aren’t using the wrong ratecontrol or not passing all the parameters required
    ./avconv -v verbose -filter_complex testsrc -c:v h264_qsv {ratecontrol params omitted} out.mkv
    

    See below for some examples of working rate-control settings.
    • Use the MediaSDK examples provided with the distribution to confirm that everything works in case the SDK is more recent than the updates.

    Usage

    The Media SDK support in Libav covers decoding, encoding, scaling and deinterlacing.

    Decoding is straightforward; the rest still has quite a few rough edges, and this blog post has been written mainly to explain them.

    Currently the most interesting formats supported are h264 and hevc, but other formats such as vp8 and vc1 are supported as well.

    ./avconv -codecs | grep qsv
    

    Decoding

    The decoders can output directly to system memory and can be used as normal decoders and feed a software implementation just fine.

    ./avconv -c:v h264_qsv -i input.h264 -c:v av1 output.mkv
    

    Or they can decode to opaque (gpu backed) buffers so further processing can happen

    ./avconv -hwaccel qsv -c:v h264_qsv -vf deinterlace_qsv,hwdownload,format=nv12 -c:v x265 output.mov
    

    NOTICE: you have to explicitly pass the filterchain hwdownload,format=nv12 in order not to have mysterious failures.

    Encoding

    The encoders are almost as straightforward beside the fact that the MediaSDK provides multiple rate-control systems and they do require explicit parameters to work.

    ./avconv -i input.mkv -c:v h264_qsv -q 20 output.mkv
    

    Failing to set the nominal framerate or the bitrate would make the look-ahead rate control not happy at all.

    Rate controls

    The rate control is one of the roughest edges of the current MediaSDK support: most of the modes require a nominal frame rate, and that requires an explicit -r to be passed.

    There isn’t a default bitrate so also -b:v should be passed if you want to use a rate-control that has a bitrate target.

    It is possible to use a look-ahead rate control aiming at a quality metric by passing -global_quality and -la_depth.

    The full list is documented.

    Transcoding

    It is possible to have a full hardware transcoding pipeline with Media SDK.

    Deinterlacing

    ./avconv -hwaccel qsv -c:v h264_qsv -i input.mkv -vf deinterlace_qsv -c:v h264_qsv -r 25 -b:v 2M output.mov
    

    Scaling

    ./avconv -hwaccel qsv -c:v h264_qsv -i input.mkv -vf scale_qsv=640:480 -c:v h264_qsv -r 25 -b:v 2M -la_depth 10 output.mov
    
    

    Both at the same time

    ./avconv -hwaccel qsv -c:v h264_qsv -i input.mkv -vf deinterlace_qsv,scale_qsv=640:480 -c:v h264_qsv -r 25 -b:v 2M -la_depth 10 output.mov
    

    Hardware filtering caveats

    The hardware filtering system is quite new and introducing it showed a number of shortcomings in the Libavfilter architecture regarding format autonegotiation, so for hybrid pipelines (those that do not keep using hardware frames all over) it is necessary to explicitly call hwupload and hwdownload in such ways:

    ./avconv -hwaccel qsv -c:v h264_qsv -i in.mkv -vf deinterlace_qsv,hwdownload,format=nv12 -c:v vp9 out.mkv
    

    Future for MediaSDK in Libav

    The Media SDK already supports a good number of interesting codecs (h264, hevc, vp8/vp9) and Intel seems to be quite receptive regarding which codecs to support.
    The Libav support for it will improve over time as we improve the hardware acceleration support in the filtering layer and make the libmfx interface richer.

    We need more people testing and helping us figure out use-cases and corner cases that hadn’t been thought of yet; your feedback is important!

    Tuesday, 18 October

    I decided to organise the Libav sprint again, this time in a small village near Pelhřimov. The participants:


    • Luca Barbato - came with a lot of Venchi chocolate
    • Anton Khirnov
    • Kostya Shishkov - came with a lot of Läderach chocolate
    • Mark Thompson
    • Alexandra Hájková (me) 
    All the chocolate was sooo tasty, we ate all of it of course.
    Topics:
    • Luca - Altivec
    • Anton and Mark - coworking on QSV and VP9
    • Alexandra - x86 SIMD HEVC IDCT
    • Kostya - consultations for the rest of us

    I rented a cosy cottage for the sprint. It was surprisingly warm for the end of September and we enjoyed a nice garden sitting area, not only with a table and chairs but even with a couch. The sitting area was under a roof, so it was possible to work outside, which was really pleasant for me. There was also a grill, so we grilled some sausages for one of our dinners. It started to rain on the last day and making a fire in the fireplace created a nice feeling.

    Because the weather was really nice we decided to explore the countryside a bit; we finally found the path to the forest and spent the mid-afternoon in its fresh air.

    Both Luca and I like to cook and to try new foods and dishes. We cooked a lot during the sprint: Luca prepared some delicious Italian meals, and Kostya cooked us a traditional Ukrainian millet dish called куліш (kulish), which was very tasty and I want to try making it at home sometime. At least for me the most interesting part of the cooking was making another traditional Ukrainian meal, вареники (varenyky), which is a kind of filled pasta. We filled one half of them with a potato-salami-and-fried-onion filling and the other half with cottage cheese; both were very good, I can't decide which one was better. The вареники were eaten with sour cream; there are also some dried cranberries in the picture.

    I almost finished my x86 SIMD optimisation for the HEVC IDCT there, Luca introduced AltiVec to me and I wrote a PPC-optimised 4x4 IDCT (github).

    A lot of work was done during the sprint, many patches sent (ML), many patches pushed, all of this in a friendly atmosphere in a comfortable cottage, fresh air and with a good cuisine. Personally I enjoyed the sprint very much, I'm glad I organised it and I hope the other people liked it as well.

    Thank you everyone for coming!



    Monday, 02 May

    Some time ago Niels Möller proposed a new method of bitreading that should be faster than the current one (here). It is an interesting idea and I decided to try it. Luca Barbato considered it to be a good idea and had his company sponsor this work. The new bitstream reader (bitstream.h) is faster in many cases and is never slower than the existing one (get_bits.h).

    All the new bitreading functions and their old equivalents were benchmarked using TIME macros in a simple test program. Because the results were good, I converted all the decoders to use the new bitstream reader. The performance of the most important decoders using the new and old bitreaders was benchmarked with perf stat (on x86_64, 64-bit, Intel Core i3-2120, 3.30GHz); the results are pretty good and even on arm32 I could not see any speed regressions.

    The old bitstream reader is quite inconsistent, with its core API made of macros and with at least 3 or 4 higher level functions reading a not easy to guess number of bits.
    static inline unsigned int get_bits(GetBitContext *s, int n)
    {
        register int tmp;
        OPEN_READER(re, s);
        UPDATE_CACHE(re, s);
        tmp = SHOW_UBITS(re, s, n);
        LAST_SKIP_BITS(re, s, n);
        CLOSE_READER(re, s);
        return tmp;
    }
    The new bitstream reader is written to be easier to use, more consistent and easier to follow. It is better documented, and the functions are named according to the current naming conventions and consistently with the bytestream reader naming.
    Some of the bitstream.h functions replace several get_bits.h ones at once (see the migration sketch after the list):
    • bitstream_read_32() reads bits from the 0-32 range and replaces
      • get_bits()
      • get_bits_long()
      • get_bitsz()
    • bitstream_peek_32() replaces
      • show_bits()
      • show_bits_long()
      • show_bits1()
    • bitstream_skip() replaces
      • skip_bits1()
      • skip_bits()
      • skip_bits_long()
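    For illustration, a minimal migration sketch could look like this; only the read/peek/skip function names listed above come from the new API, while the BitstreamContext type name is an assumption of mine:
    // A hedged migration sketch: bitstream_read_32(), bitstream_peek_32() and
    // bitstream_skip() come from the list above; the BitstreamContext type is
    // my assumption for the context being passed around.
    #include "bitstream.h"
    
    static unsigned parse_header(BitstreamContext *bc)
    {
        unsigned version = bitstream_read_32(bc, 4); // was get_bits(gb, 4)
        unsigned marker  = bitstream_peek_32(bc, 1); // was show_bits1(gb)
    
        if (!marker)
            bitstream_skip(bc, 8);                   // was skip_bits(gb, 8)
    
        return version;
    }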
    The get_bits.h bitreading macros sometimes have to be used directly to achieve better decoding performance. Reading or writing code that uses these macros requires good knowledge of how this bitreader works, and they can be surprising at times since they create local variables.
    Using the new bitreader does not require such deep knowledge: all the needed operations are covered by a smaller set of functions that happen to be faster in many usage patterns.

    Many thanks to Luca Barbato for his advice and consultations during the development process.

    I hope this new bitreader can become a useful piece of the Libav code. Opinions and suggestions are welcome.

    Friday, 01 April

    swscale is one of the most annoying parts of Libav; a couple of years after the initial blueprint we have something almost functional you can play with.

    Colorspace conversion and Scaling

    Before delving into the library architecture and the outer API it is probably good to make an extra quick summary of what this library is about.

    Most multimedia concepts are more or less intuitive:
    • encoding is taking some data (e.g. video frames, audio samples) and compressing it by leaving out unimportant details
    • muxing is the act of storing such compressed data and timestamps so that audio and video can play back in sync
    • demuxing is getting back the compressed data with the timing information stored in the container format
    • decoding somehow inflates the data so that video frames can be rendered on screen and the audio played on the speakers

    After the decoding step it would seem that all the hard work is done, but since there isn’t a single way to store video pixels or audio samples you need to process them so they work with your output devices.

    That process is usually called resampling for audio; for video we have colorspace conversion, to change the pixel information, and scaling, to change the amount of pixels in the image.

    Today I’ll introduce you to the new library for colorspace conversion and scaling we are working on.

    AVScale

    The library aims to be as simple as possible and hide all the gory details from the user; you won’t need to figure out the heads and tails of functions with a quite large number of arguments, nor of special-purpose functions.

    The API itself is modelled after avresample and approaches the problem of conversion and scaling in a way quite different from swscale, following the same design of NAScale.

    Everything is a Kernel

    One of the key concepts of AVScale is that the conversion chain is assembled out of different components, separating the concerns.

    Those components are called kernels.

    The kernels can be conceptually divided into two kinds:
    • Conversion kernels, taking an input in a certain format and providing an output in another (e.g. rgb2yuv) without changing any other property.
    • Process kernels, modifying the data while keeping the format itself unchanged (e.g. scale).

    This pipeline approach provides great flexibility and helps code reuse.

    The most common use-cases (such as scaling without conversion or conversion without scaling) can be faster than solutions trying to merge scaling and conversion together in a single step.

    API

    AVScale works with two kinds of structures:
    • AVPixelFormaton: a full description of the pixel format
    • AVFrame: the frame data, its dimensions and a reference to its format details (aka AVPixelFormaton)

    The library will have an AVOption-based system to tune specific options (e.g. selecting the scaling algorithm).
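
    Assuming it follows the usual AVOption pattern used elsewhere in Libav, tuning might then end up looking like the familiar av_opt_set() call (the option name here is purely illustrative, not part of the draft):

    // hypothetical option name, shown only to illustrate the AVOption approach
    av_opt_set(ctx, "scaler", "bilinear", 0);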

    For now only avscale_config and avscale_convert_frame are implemented.

    So if the input and output are pre-determined the context can be configured like this:

    AVScaleContext *ctx = avscale_alloc_context();
    
    if (!ctx)
        ...
    
    ret = avscale_config(ctx, out, in);
    if (ret < 0)
        ...
    

    But you can skip it and scale and/or convert from an input to an output like this:

    AVScaleContext *ctx = avscale_alloc_context();
    
    if (!ctx)
        ...
    
    ret = avscale_convert_frame(ctx, out, in);
    if (ret < 0)
        ...
    
    avscale_free(&ctx);
    

    The context gets lazily configured on the first call.

    Notice that avscale_free() takes a pointer to a pointer, to make sure the context pointer does not stay dangling.

    As said the API is really simple and essential.

    Help welcome!

    Kostya kindly provided an initial proof of concept, and Vittorio, Anton and I prepared this preview in our spare time. There is plenty left to do; if you like the idea (since many kept telling us they would love a swscale replacement) we even have a fundraiser.

    Tuesday, 29 March

    Another week, another API landed in the tree, and since I spent some time drafting it, I guess I should describe how to use what is implemented now. This is part I.

    What is here now

    Between theory and practice there is a bit of discussion and obviously the (lack of) time to implement, so here is what is different from what I drafted originally:

    • Function Names: push got renamed to send and pull got renamed to receive.
    • No separate function to probe the process state; need_data and have_data are not here.
    • No codecs ported to use the new API yet, so no actual asynchronicity for now.
    • Subtitles aren’t supported yet.

    New API

    There are just 4 new functions replacing both audio-specific and video-specific ones:

    // Decode
    int avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt);
    int avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame);
    
    // Encode
    int avcodec_send_frame(AVCodecContext *avctx, const AVFrame *frame);
    int avcodec_receive_packet(AVCodecContext *avctx, AVPacket *avpkt);
    

    The workflow is sort of simple:
    – You setup the decoder or the encoder as usual
    – You feed data using the avcodec_send_* functions until you get an AVERROR(EAGAIN), which signals that the internal input buffer is full.
    – You get the data back using the matching avcodec_receive_* function until you get an AVERROR(EAGAIN), signalling that the internal output buffer is empty.
    – Once you are done feeding data you have to pass a NULL to signal the end of stream.
    – You can keep calling the avcodec_receive_* function until you get AVERROR_EOF.
    – You free the contexts as usual.

    Decoding examples

    Setup

    The setup uses the usual avcodec_open2.

        ...
    
        c = avcodec_alloc_context3(codec);
    
        ret = avcodec_open2(c, codec, &opts);
        if (ret < 0)
            ...
    

    Simple decoding loop

    People using the old API usually have some kind of simple loop like

    while (get_packet(pkt)) {
        ret = avcodec_decode_video2(c, picture, &got_picture, pkt);
        if (ret < 0) {
            ...
        }
        if (got_picture) {
            ...
        }
    }
    

    The old functions can be replaced by calling something like the following.

    // The flush packet is a non-NULL packet with size 0 and data NULL
    int decode(AVCodecContext *avctx, AVFrame *frame, int *got_frame, AVPacket *pkt)
    {
        int ret;
    
        *got_frame = 0;
    
        if (pkt) {
            ret = avcodec_send_packet(avctx, pkt);
            // In particular, we don't expect AVERROR(EAGAIN), because we read all
            // decoded frames with avcodec_receive_frame() until done.
            if (ret < 0)
                return ret == AVERROR_EOF ? 0 : ret;
        }
    
        ret = avcodec_receive_frame(avctx, frame);
        if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
            return ret;
        if (ret >= 0)
            *got_frame = 1;
    
        return 0;
    }
    

    Callback approach

    Since the new API will output multiple frames in certain situations, it would be better to process them as they are produced.

    // return 0 on success, negative on error
    typedef int (*process_frame_cb)(void *ctx, AVFrame *frame);
    
    int decode(AVCodecContext *avctx, AVPacket *pkt,
               process_frame_cb cb, void *priv)
    {
        AVFrame *frame = av_frame_alloc();
        int ret;
    
        ret = avcodec_send_packet(avctx, pkt);
        // Again EAGAIN is not expected
        if (ret < 0)
            goto out;
    
        while (!ret) {
            ret = avcodec_receive_frame(avctx, frame);
            if (!ret)
                ret = cb(priv, frame);
        }
    
    out:
        av_frame_free(&frame);
        if (ret == AVERROR(EAGAIN))
            return 0;
        return ret;
    }
    

    Separated threads

    The new API makes it sort of easy to split the workload into two separate threads.

    // Assume we have a shared context ctx holding a mutex (lock), a condition variable (cond) and the AVCodecContext (avctx)
    
    
    // Feeding loop
    {
        AVPacket *pkt = NULL;
    
        while ((ret = get_packet(ctx, pkt)) >= 0) {
            pthread_mutex_lock(&ctx->lock);
    
            ret = avcodec_send_packet(avctx, pkt);
            if (!ret) {
                pthread_cond_signal(&ctx->cond);
            } else if (ret == AVERROR(EAGAIN)) {
                // Signal the draining loop
                pthread_cond_signal(&ctx->cond);
                // Wait here
                pthread_cond_wait(&ctx->cond, &ctx->lock);
            } else if (ret < 0)
                goto out;
    
            pthread_mutex_unlock(&ctx->lock);
        }
    
        pthread_mutex_lock(&ctx->lock);
        ret = avcodec_send_packet(avctx, NULL);
    
        pthread_cond_signal(&ctx->cond);
    
    out:
        pthread_mutex_unlock(&ctx->lock);
        return ret;
    }
    
    // Draining loop
    {
        AVFrame *frame = av_frame_alloc();
    
        while (!done) {
            pthread_mutex_lock(&ctx->lock);
    
            ret = avcodec_receive_frame(avctx, frame);
            if (!ret) {
                pthread_cond_signal(&ctx->cond);
            } else if (ret == AVERROR(EAGAIN)) {
                // Signal the feeding loop
                pthread_cond_signal(&ctx->cond);
                // Wait
                pthread_cond_wait(&ctx->cond, &ctx->lock);
            } else if (ret < 0)
                goto out;
    
            pthread_mutex_unlock(&ctx->lock);
    
            if (!ret) {
                do_something(frame);
            }
        }
    
    out:
        pthread_mutex_unlock(&ctx->lock);
        return ret;
    }
    

    It isn’t as neat as having all this abstracted away, but it is mostly workable.

    Encoding Examples

    Simple encoding loop

    Some compatibility with the old API can be achieved using something along the lines of:

    int encode(AVCodecContext *avctx, AVPacket *pkt, int *got_packet, AVFrame *frame)
    {
        int ret;
    
        *got_packet = 0;
    
        ret = avcodec_send_frame(avctx, frame);
        if (ret < 0)
            return ret;
    
        ret = avcodec_receive_packet(avctx, pkt);
        if (!ret)
            *got_packet = 1;
        if (ret == AVERROR(EAGAIN))
            return 0;
    
        return ret;
    }
    

    Callback approach

    Since multiple outputs could be produced for each input, it would be better to loop over the output as soon as possible.

    // return 0 on success, negative on error
    typedef int (*process_packet_cb)(void *ctx, AVPacket *pkt);
    
    int encode(AVCodecContext *avctx, AVFrame *frame,
               process_packet_cb cb, void *priv)
    {
        AVPacket *pkt = av_packet_alloc();
        int ret;
    
        ret = avcodec_send_frame(avctx, frame);
        if (ret < 0)
            goto out;
    
        while (!ret) {
            ret = avcodec_receive_packet(avctx, pkt);
            if (!ret)
                ret = cb(priv, pkt);
        }
    
    out:
        av_packet_free(&pkt);
        if (ret == AVERROR(EAGAIN))
            return 0;
        return ret;
    }
    

    The I/O should happen in a different thread when possible so the callback should just enqueue the packets.

    Coming Next

    This post is long enough so the next one might involve converting a codec to the new API.

    Monday, 21 March

    Last weekend, after a few months of work, the new bitstream filter API eventually landed.

    Bitstream filters

    In Libav it is possible to manipulate raw and encoded data in many ways, the most common being:

    • Demuxing: extracting single data packets and their timing information
    • Decoding: converting the compressed data packets in raw video or audio frames
    • Encoding: converting the raw multimedia information in a compressed form
    • Muxing: storing the compressed information along with timing information and additional information.

    Bitstream filtering is somewhat less considered even though it is widely used under the hood to demux and mux many widely used formats.

    It can be considered an optional final demuxing or initial muxing step, since it works on encoded data and its main purpose is to reformat the data so that it can be accepted by decoders consuming only a specific serialization of the many supported (e.g. the HEVC QSV decoder), or so that it can be correctly muxed into a container format that stores only a specific kind.

    In Libav this kind of reformatting normally happens automatically, with the annoying exception of MPEGTS muxing.

    New API

    The new API is modeled on the pull/push paradigm I described for AVCodec before; it works on AVPackets and has the following concrete implementation:

    // Query
    const AVBitStreamFilter *av_bsf_next(void **opaque);
    const AVBitStreamFilter *av_bsf_get_by_name(const char *name);
    
    // Setup
    int av_bsf_alloc(const AVBitStreamFilter *filter, AVBSFContext **ctx);
    int av_bsf_init(AVBSFContext *ctx);
    
    // Usage
    int av_bsf_send_packet(AVBSFContext *ctx, AVPacket *pkt);
    int av_bsf_receive_packet(AVBSFContext *ctx, AVPacket *pkt);
    
    // Cleanup
    void av_bsf_free(AVBSFContext **ctx);
    

    In order to use a bsf you need to:

    • Look up its definition AVBitStreamFilter using a query function.
    • Set up a specific context AVBSFContext, by allocating, configuring and then initializing it.
    • Feed the input using the av_bsf_send_packet function and get the processed output once it is ready using av_bsf_receive_packet.
    • Once you are done av_bsf_free cleans up the memory used for the context and the internal buffers.

    Query

    You can enumerate the available filters

    void *state = NULL;
    
    const AVBitStreamFilter *bsf;
    
    while ((bsf = av_bsf_next(&state))) {
        av_log(NULL, AV_LOG_INFO, "%s\n", bsf->name);
    }
    

    or directly pick the one you need by name:

    const AVBitStreamFilter *bsf = av_bsf_get_by_name("hevc_mp4toannexb");
    

    Setup

    A bsf may use some codec parameters and time_base and provide updated ones.

    AVBSFContext *ctx;
    
    ret = av_bsf_alloc(bsf, &ctx);
    if (ret < 0)
        return ret;
    
    ret = avcodec_parameters_copy(ctx->par_in, in->codecpar);
    if (ret < 0)
        goto fail;
    
    ctx->time_base_in = in->time_base;
    
    ret = av_bsf_init(ctx);
    if (ret < 0)
        goto fail;
    
    ret = avcodec_parameters_copy(out->codecpar, ctx->par_out);
    if (ret < 0)
        goto fail;
    
    out->time_base = ctx->time_base_out;
    

    Usage

    Multiple AVPackets may be consumed before an AVPacket is emitted or multiple AVPackets may be produced out of a single input one.

    AVPacket *pkt;
    
    while (got_new_packet(&pkt)) {
        ret = av_bsf_send_packet(ctx, pkt);
        if (ret < 0)
            goto fail;
    
        while ((ret = av_bsf_receive_packet(ctx, pkt)) == 0) {
            yield_packet(pkt);
        }
    
        if (ret == AVERROR(EAGAIN))
            continue;
        if (ret == AVERROR_EOF)
            goto end;
        if (ret < 0)
            goto fail;
    }
    
    // Flush
    ret = av_bsf_send_packet(ctx, NULL);
    if (ret < 0)
        goto fail;
    
    while ((ret = av_bsf_receive_packet(ctx, pkt)) == 0) {
        yield_packet(pkt);
    }
    
    if (ret != AVERROR_EOF)
        goto fail;
    

    In order to signal the end of stream a NULL pkt should be fed to send_packet.

    Cleanup

    The cleanup function matches the av_freep signature so it takes the address of the AVBSFContext pointer.

        av_bsf_free(&ctx);
    

    All the memory is freed and the ctx pointer is set to NULL.

    Coming Soon

    Hopefully next I’ll document the new HWAccel layer that already landed and some other API that I discussed with Kostya before.
    Sadly my blog-time (and spare time in general) shrunk a lot in the past months so he rightfully blamed me a lot.

    Saturday, 05 March

    Sometimes it's very useful to print out how some parameters change during program execution.

    When writing a new version of some piece of code one usually needs to compare it with the old one to be sure it behaves the same in every case. Especially the corner cases can be tricky, and I spent a lot of time on them while my code worked fine in general.

    For example when I was working on my ASF demuxer, I was happy there was an old demuxer I could compare the behaviour with. When debugging the ASF demuxer, I wanted to know the state of the I/O context. At that time lu_zero (who was mentoring me) created a set of macros which printed logs for every I/O function (here). For example here is the macro for the avio_seek() function (which is equivalent to fseek()).

    #define avio_seek(s, o, w) ({ \
        int64_t _ret = avio_seek(s, o, w); \
        int64_t _pos = avio_tell(s); \
        av_log(NULL, AV_LOG_VERBOSE|AV_LOG_C(154), "0x%08"PRIx64" - %s:%d seek %p %"PRId64" %d -> %"PRId64"\n", \
               _pos, __FUNCTION__, __LINE__, s, o, w, _ret); \
        _ret; \
    })
    When such a macro was present in my demuxer, the following information was printed for every call of avio_seek:
    • _pos = avio_tell(s): the offset in the demuxed file
    • __FUNCTION__ : preprocessor define that contains the name of the function being compiled to know which function called avio_seek
    • __LINE__ : preprocessor define that contains the line number of the original source file that is being compiled to know from what line avio_seek was called from
    • s, o, w : the values of the parameters avio_seek was called with
    • _ret: the avio_seek return value
    • __FILE__: preprocessor define that contains the name of the file being compiled (this one was not used in the example but might be useful when one needs a more complex log). 
    Parentheses are used around the define body because such a construct may appear as an expression in GNU C (a statement expression). There's _ret; as the last statement in this macro because its value serves as the value of the entire construct. If the last _ret; were omitted in my example, the value of the macro expression would be that of the last statement before it (the av_log call) instead. The underscores in the _ret and _pos variables are there to make sure they do not shadow other variables with the same names.
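    A similar wrapper can be written for other I/O functions following the same pattern; for example, a hedged sketch for avio_read() (not part of the original macro set, just an illustration) could look like:
    #define avio_read(s, buf, size) ({ \
        int _ret = avio_read(s, buf, size); \
        av_log(NULL, AV_LOG_VERBOSE, "%s:%d read %p %d -> %d\n", \
               __FUNCTION__, __LINE__, s, size, _ret); \
        _ret; \
    })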
    Working with a log created by a set of macros similar to this example might be more effective than debugging with gdb in some cases.

    Many thanks to lu_zero for teaching me about it. The support from the more experienced developers is the thing I really love about Libav.

    Wednesday, 09 December

    I split my complex dcadec bit-exact patch (https://patches.libav.org/patch/59013/) into several parts. The first part, which changes the dcadec core to work with integer coefficients instead of converting the coefficients to floats just after reading them, was sent to the mailing list (https://patches.libav.org/patch/59141/). Such a change was expected to slow down the decoding process, therefore I made some measurements to examine how much slower decoding is with my patch.
     I decoded this sample: samples.libav.org/A-codecs/DTS/dts/dtswavsample16.wav 10 times and measured the user time between invocation and termination with the "time" command:

     time ./avconv -f dts -i dtswavsample16.wav -f null -c pcm_f32le null, 
    counted the average real time of the avconv run, and repeated everything for the master branch. The duration of dtswavsample16.wav is ~4 mins and I wanted to look at the slowdown for longer files. Hence I used the relatively new loop option for avconv (http://sasshkas.blogspot.cz/2015/08/the-loop-option-for-avconv.html) to create a ~24 min long file from the initial file by looping it 6x with
     ./avconv -loop 6 -i dtswavsample16.wav -c copy dts_long.wav. 
    I again decoded this longer DTS file 10x for both the new integer core and the old float coefficient core and counted the averages.
    According to my results the integer patch causes a ~20% slowdown. The question is whether this is still acceptable. I see 2 options here:
    • To consider the slowdown acceptable and to try to make some speedups like SIMDifying VQ decoding and using inline asm for 64-bit math.
    • Or alternatively both int and float modes can be kept for different decoding modes but this might make the code too hairy.

    Opinions and suggestions are welcome.

    Tuesday, 08 December


    When playing a multimedia file, one usually wants to seek to reach different parts of the file. Most containers allow this feature but it might be a problem for streamed files.
    Within libavformat, seeking is performed with a function (inside the demuxer) called read_seek. This function tries to find a matching timestamp for the requested position (offset) in the played file.
    There are 2 ways to seek through a file. One of them applies when the file contains some kind of index which matches positions with appropriate timestamps. In this case index entries are created by calling av_add_index_entry. If index entries are present, av_index_search_timestamp, which is called inside read_seek, looks for the closest timestamp to the requested position. When the file does not provide such entries, one can look for the requested position with ff_seek_frame_binary. For doing so, a read_timestamp function has to be created inside the demuxer.
    read_timestamp takes the required position and stream index and then tries to find the offset of the beginning of the closest packet which is a key frame with a matching stream index. While doing this, read_timestamp reads timestamps for all the packets after the given position and creates index entries. When the key frame with the matching stream index is found, read_timestamp updates the required position and returns the timestamp matching it.
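
    As an illustration of the index-based path, a minimal read_seek could look roughly like the sketch below; this assumes the demuxer already filled the index with av_add_index_entry while reading packets, and it omits the demuxer-specific state that a real implementation also has to reset.

    // A minimal, hedged sketch of an index-based read_seek; real demuxers also
    // have to reposition their internal parsing state, which is omitted here.
    #include "avformat.h"

    static int my_read_seek(AVFormatContext *s, int stream_index,
                            int64_t timestamp, int flags)
    {
        AVStream *st = s->streams[stream_index];
        int index = av_index_search_timestamp(st, timestamp, flags);

        if (index < 0)
            return -1;

        // jump to the byte offset stored for the closest index entry
        if (avio_seek(s->pb, st->index_entries[index].pos, SEEK_SET) < 0)
            return -1;

        return 0;
    }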

    I was told to test my ASF demuxer with the zzuf utility. Zzuf is a fuzzer; it changes random bits in the program's input, which simulates a damaged file or unexpected data.
    For testing the ASF demuxer's behaviour I want to feed avconv some corrupted wmv files and see what happens. Because I want to fuzz in several different ways I want to vary the seed (the initial value of zzuf’s random number generator). I'll do this with the command:

    while true; do SEED=$RANDOM; for file in *.wmv; do zzuf -M -l -r 0.00001 -q -U 60 -s $SEED ./avconv -i "$file" -f null -c copy - || echo $SEED $file >> fuzz; done; done

    I got the file fuzz, which is a list of seeds paired with filenames. Now I need to use zzuf to create damaged files to check the problem with valgrind. I'll use the list to determine the seed which caused a crash and create my damaged file with it:

    zzuf -M -l -r 0.00001 -q -U 60 -s myseed < somefile.wmv > out.asf

    Now I'll just use valgrind to find out what happened:

    valgrind ./avconv -i out.asf -f null -c copy -


    I tried to test the ASF demuxer with different tricky samples and with FATE, and the demuxer behaved well, but testing with zzuf detected several new crashes. Mainly they were caused by insane size values and it was easy to fix them by adding some more checks. Zzuf is a great thing for testing.

    Pelhřimov is a small but very nice town in the Czech Republic, approximately 120 km from the capital Prague, and I decided to organize a small but nice Libav sprint there.
    The participants and the topics were:

    • Luca Barbato -
      • AVScale, especially documenting it
      • HW accelerated encoders
      • async API
    • Anton Khirnov - The Evil Plan II and fixing H.264 decoder for it
    • Kostya Shishkov - trolling motivating the others
    • Alexandra Hájková (me) - dcadec decoder, mainly testing the new patch
    • everyone - eating chocolate

    I finished and sent my dcadec "Integer core decoder" patch (which transforms the dcadec core decoder to work with integers) during the sprint. After the discussion and some hints from the others I tested my patch better and found out some interesting things:
    • It seems XLL output really is lossless when using my patch.
    • But LFE channel was broken - this was fixed during the sprint.
    • While lossless or force_fixed output looks fine, my patch breaks the float (lossy) output a little bit - it sounds the same to my ears, but looking at the output in Audacity and comparing it with the "before the integer patch" output shows something is wrong there.
    • I discovered for myself an avconv option called channelsplit (https://libav.org/documentation/avconv.html#channelsplit) that splits the file into per-channel files, which was very useful for comparing the output with a reference decoder using another channel order (for example https://github.com/foo86/dcadec).
    My post sprint dcadec plans are:
    • Fix the lossy output issue.
    • Fix detection of the extensions and add the options for disabling them.
    • Rewrite dca_decode_frame to handle all the extensions and to work with the new options for them more systematically.

    I decided to improve the Libav DTS decoder - dcadec. Here I want to explain what its problems are now and what I would like to do about them.
    A DTS encoded audio stream consists of core audio and may contain extended audio. dcadec supports the XCH and XLL extensions but the X96, XXCH and XBR extensions are waiting to be implemented - I'd like to implement them later.
    For the DTS lossless extension - XLL, the decoded output audio should be a bit-for-bit accurate reproduction of the encoded input. However there are some problems:

    • The main problem is that the core decoder converts integer coefficients read from the bitstream to floats just after reading them (along with dequantization). All other steps of the audio reconstruction are done with floats and the output cannot be a bit-exact reproduction of the input, so it is not lossless.
    When the coefficients are read from the bitstream the core decoder does the following:
    • dequantization (with int -> float conversion)
    • inverse ADPCM (when needed)
    • VQ decoding (when needed)
    • filtering: QMF, LFE, downmixing (when needed)
    • float output.
    I'm now working on modifying the core to work with integer coefficients and then convert them to floats before QMF filtering for lossy output, but to use a bit-exact QMF for lossless output (the intermediate LFE coefficients should always be integers, which I think is not the case in the current version). I also added an option called -force_fixed to force fixed-point reconstruction for any kind of input.
    • Another problem is XLL extension presence detection. During testing I found out that the XLL extension is sometimes not detected and only the core audio is decoded in that case. I want to fix this issue as well.

    Saturday, 21 November

    This is a sort of short list of checklists and a few ramblings in the wake of Fosdem’s Code of Conduct discussions and the not exactly welcoming statements about how to perceive a Code of Conduct such as this one.

    Code of Conduct and OpenSource projects

    A Code of Conduct is generally considered a means to get rid of problematic people (and thus avoid toxic situations). I prefer to consider it a means to welcome people and provide good guidelines to newcomers.

    Communities without a code of conduct tend to reject the idea of having one, thinking that it is only needed to solve the above-mentioned issue and that adding more bureaucracy would actually just give more leeway to Machiavellian ploys.

    Sadly, no matter how good the environment is, it takes just a few poisonous people to end up in an unbearable situation, and in a few selected cases you just need one.

    If you consider the CoC a shackle or a stick to beat “bad guys”, so you do not need it until you see a bad guy, that is naive and utterly wrong: you will end up writing something that excludes people due to a quite understandable, but wrong, knee-jerk reaction.

    A Code of Conduct should do exactly the opposite: it should embrace people and make it easier to join and fit in. It should be the social equivalent of the developer handbook or the coding style guidelines.

    Just as everybody can make a little effort and make sure to send code with spaces between operators, everybody can make an effort and not use colorful language. Likewise, just as people are happier to contribute if the codebase they are hacking on is readable, so they are more confident in joining the community if the environment is pleasant.

    Making a useful Code of Conduct

    The Code of Conduct should be a guideline for people that have no idea what the expected behavior is.
    It should be written thinking about how to help people get along, not about how to punish those who do not.

    • It should be short. It is pointless to enumerate ALL the possible ways to make people uncomfortable; you are bound to miss a few.
    • It should be understanding and inclusive. Always assume cultural biases and not ill will.
    • It should be enforced. It gets quite depressing when you have a 100+ line code of conduct but then nobody cares about it and nobody really enforces it. And I’m not talking about having specifically designated people to enforce it: your WHOLE community should agree on what acceptable behavior is and act accordingly on breaches.

    People joining the community should consider the Code of Conduct first as a request (and not a demand) to make an effort to get along with the others.

    Pitfalls

    Since I saw quite a few long and convoluted walls of text being suggested as THE CODE OF CONDUCT everybody MUST ABIDE BY, here are some suggestions on what NOT to do.

    • It should not be a political statement: this is a strong cultural bias that would make potential contributors just stay away. No matter how good and great you think your ideas are, those are unrelated to a project that should gather all the people that enjoy writing code in their spare time. The Open Source movement is already an ideology in itself, overloading it with more is just a recipe for a disaster.
    • Do not try to make a long list of definitions, you just dilute the content and give even more ammo to lawyer-type arguers.
    • Do not think much about making draconian punishments: this is a community on the internet, and even nowadays nobody really knows if you are actually a dog or not; you cannot really enforce anything if the other party really wants to be a pest.

    Good examples

    Some CoCs I consider good are obviously the ones used in the communities I belong to, Gentoo and Libav; they are really short and to the point.

    Enforcing

    As I said before, no matter how well written a code of conduct is, the only way to really make it useful is if the community as a whole helps new (and not so new) people to get along.

    The rule of thumb “if anybody feels uncomfortable in a non-technical discussion, once they say so, drop it immediately” is OK as long as:

    • The uncomfortable person speaks up. If you are shy you might ask somebody else to speak up for you, but do not be quiet when it happens and then file a complaint much later; that is NOT OK.
    • The rule is not abused to derail technical discussions. See my post about reviews to at least avoid this pitfall.
    • People agree to drop at least some of their cultural biases, otherwise it would end up like walking on eggshells every moment.

    Letting situations go unchecked is probably the main issue: newcomers can think it is OK to behave in a certain way if people are behaving that way and nobody stops it. Again, it is not just about specific enforcers of some kind; everybody should behave and clearly tell those not behaving that they are being problematic.

    Gentoo is a big community, so having a swift reaction gets problematic: lots of people prefer not to speak up when something happens, so people unwittingly causing the problem are not made aware immediately.

    Libav is a much smaller community and in general nobody has qualms about saying “please stop” (that is also partially due to how the community evolved).

    Hopefully this post will help avoid some mistakes and help people get along better.

    Sunday, 08 November

    This mini-post spurred from this bug.

    AVFrame and AVCodecContext

    In Libav there are a number of patterns shared across most of the components.
    It does not matter whether it models a codec, a demuxer or a resampler: you interact with it using a Context, and you get data in or out of the module using some kind of Abstraction that wraps the data and useful information such as the timestamp. Today’s post is about AVFrame and AVCodecContext.

    AVFrame

    The most used abstraction in Libav by far is the AVFrame. It wraps some kind of raw data that can be produced by decoders and fed to encoders, passed through filters, scalers and resamplers.

    It is quite flexible and contains the data and all the information to understand it e.g.:

    • format: Used to describe either the pixel format for video or the sample format for audio.
    • width and height: The dimension of a video frame.
    • channel_layout, nb_samples and sample_rate for audio frames.

    AVCodecContext

    This context contains all the information useful to describe a codec and to configure an encoder or a decoder (the generic, common features; there are private options for specific features).

    Being shared between the encoder, the decoder and (until Anton’s plan to avoid it is deployed) container streams, this context is fairly large, and a good deal of its fields are a little confusing, either because they seem to replicate what is present in the AVFrame or because they aren’t marked as write-only since they might be read in a few situations.

    In the bug mentioned above channel_layout was the confusing one, but also width and height caused problems for people thinking the value of those fields in the AVCodecContext would represent what is in the AVFrame (then you’d wonder why you should have them in two different places…).

    As a rule of thumb, everything that is set in a context is just the starting configuration and is bound to change in the future.

    Video decoders can reconfigure themselves and output video frames with completely different geometries, audio decoders can report a completely different number of channels or variations in their layout and so on.

    Some encoders are able to reconfigure on the fly as well, but usually with more strict constraints.

    Why their information is not the same

    The fields in the AVCodecContext are used internally and updated as needed by the decoder. The decoder can be multithreaded so the AVFrame you are getting from one of the avcodec_decode_something() functions is not the last frame decoded.

    Do not expect any of the fields with names similar to the ones provided by AVFrame to stay immutable or to match the values provided by the AVFrame.

    Common pitfalls

    Allocating video surfaces

    A quite common mistake is to use the AVCodecContext coded_width and coded_height to allocate the surfaces used to present the decoded frames.

    As said, the frame geometry can change mid-stream, so if you do that then, best case, you have some lovely green surrounding your picture and, worst case, you have a bad crash.

    I suggest always checking that the AVFrame dimensions fit and being ready to reconfigure your video output when they change, as in the sketch below.
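
    A minimal sketch of that check, assuming a hypothetical application-side VideoOut structure with resize_output() and render_frame() helpers (none of these are Libav API, they only stand in for whatever your video output layer provides):

    #include "libavutil/frame.h"

    typedef struct VideoOut {
        int width, height, format;
        // ... platform-specific surface handles ...
    } VideoOut;

    // hypothetical application helpers, not Libav functions
    int resize_output(VideoOut *vo, int width, int height, int format);
    int render_frame(VideoOut *vo, const AVFrame *frame);

    static int display_frame(VideoOut *vo, const AVFrame *frame)
    {
        // reconfigure the output if the decoded geometry or format changed
        if (frame->width  != vo->width  ||
            frame->height != vo->height ||
            frame->format != vo->format) {
            int ret = resize_output(vo, frame->width, frame->height, frame->format);
            if (ret < 0)
                return ret;
            vo->width  = frame->width;
            vo->height = frame->height;
            vo->format = frame->format;
        }
        return render_frame(vo, frame);
    }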

    Resampling audio

    If you are using a current version of Libav you have avresample_convert_frame() doing most of the work for you; if you are not, you need to check that format, channel_layout and sample_rate do not change and reconfigure manually.

    Rescaling video

    Similarly you can misconfigure swscale, so you should manually check format, width and height and reconfigure as well. The AVScale draft API provides an avscale_process_frame().

    In closing

    Be extra careful, think twice and beware of the examples you might find on the internet; they might work until they won’t.

    Friday, 06 November

    This spurred from some events happening in Gentoo, since with the move to git we eventually have more reviews and obviously comments over patches can be acceptable (and accepted) depending on a number of factors.

    This short post is about communicating effectively.

    When reviewing patches

    No point in pepper coating

    Do not disparage code or, even worse, people. There is no point in being insulting, you add noise to the signal:

    You are a moron! This is shit has no place here, do not do again something this stupid.

    This is not OK: most people will focus on the insult and the technical argument will be totally lost.

    Keep in mind that you want people doing stuff for the project, not running away crying.

    No point in sugar coating

    Do not downplay stupid mistakes that would crash your application (or wipe an operating system) because you think it would hurt the feelings of the contributor.

        rm -fR /usr /local/foo
    

    It is as silly as you like, but the impact is HUGE.

    This is a tiny mistake, you should not do that again.

    No, it isn’t tiny; it is quite a problem.

    Mistakes happen, the review is there to avoid them hitting people, but a modicum of care is needed:
    wasting other people’s time is still bad.

    Point the mistake directly by quoting the line

    And use at most 2-3 lines to explain why it is a problem.
    If you can’t, it is better to fix that part yourself or move the discussion to a more direct medium, e.g. IRC.

    Be specific

    This kind of change is not portable, obscures the code and does not fix the overflow issue at hand:
    The expression as a whole could still overflow.

    Hopefully even the most busy person juggling over 5 different tasks will get it.

    Be direct

    Do not suggest the use of those non-portable functions again anyway.

    No room for interpretation, do not do that.

    Avoid clashes

    If you and another reviewer disagree, move the discussion to another medium; there is NO point in spamming
    the review system with countless comments.

    When receiving reviews (or waiting for them)

    Everybody makes mistakes

    YOU included, if the reviewer (or more than one) tells you that your changes are not right, there are good odds you are wrong.

    Conversely, the reviewer can make mistakes. Usually it is better to move away from the review system and discuss over email or IRC.

    Be nice

    There is no point in being confrontational. If you think the reviewer is making a mistake, politely point it out.

    If the reviewer is not nice, do not use the same tone to fit in. Even more if you do not like that kind of tone to begin with.

    Wait before answering

    Do not update your patch or write a reply as soon as you get a notification of a review, more changes might be needed and maybe other reviewers have additional opinions.

    Be patient

    If a patch is unanswered, ping it maybe once a week, possibly rebasing it if the world changed meanwhile.

    Keep in mind that most of your interaction is with other people volunteering their free time and not getting anything out of it either; sometimes real life takes priority =)

    Wednesday, 21 October

    You might be subtle like this or just work on your stuff like that, but then nobody will know that you are the one that did something (and they will praise somebody else completely unrelated for your stuff, e.g. Anton not being praised much for the HEVC threaded decoding, the huge work on the ref-counted AVFrame and many other things).

    Blogging is boring

    Once you have written something in code, talking about it gets sort of boring: the code is there, it works, and maybe you spent enough time on the mailing list and IRC discussing it that once it is done you don’t want to think about it for at least a week.

    The people at xiph got it right and they wrote awesome articles about what they are doing.

    Blogging is important

    JB got it right by writing posts about what happened every week. Now journalists can pick from there what’s cool and coming from VLC and do not have to try to extract useful information from the git log, scattered mailing lists and conversations on IRC.
    I’m not sure I’ll have the time to do the same, but surely I’ll prod at least Alexandra and the others to write more.

    Thursday, 15 October

    In Libav we try to clean up the API and make it more regular; this is one of the possibly many articles I will write about APIs, this time about deprecating a relic from the past and why we are doing it.

    AVPicture

    This struct used to store image data using data pointers and linesizes. It comes from the far past and it looks like this:

    typedef struct AVPicture {
        uint8_t *data[AV_NUM_DATA_POINTERS];
        int linesize[AV_NUM_DATA_POINTERS];
    } AVPicture;
    

    Once the AVFrame was introduced, AVPicture was made to alias to it, and for some time the two structures were actually defined sharing the common initial fields through a macro.

    The AVFrame then evolved to store both audio and image data, to use AVBuffer to not have to do needless copies and to provide more useful information (e.g. the actual data format); now it looks like:

    typedef struct AVFrame {
        uint8_t *data[AV_NUM_DATA_POINTERS];
        int linesize[AV_NUM_DATA_POINTERS];
    
        uint8_t **extended_data;
    
        int width, height;
    
        int nb_samples;
    
        int format;
    
        int key_frame;
    
        enum AVPictureType pict_type;
    
        AVRational sample_aspect_ratio;
    
        int64_t pts;
    
        ...
    } AVFrame;
    

    The image-data manipulation functions moved to the av_image namespace and use the data and linesize pointers directly, while the equivalent avpicture functions became wrappers over them.

    int avpicture_fill(AVPicture *picture, uint8_t *ptr,
                       enum AVPixelFormat pix_fmt, int width, int height)
    {
        return av_image_fill_arrays(picture->data, picture->linesize,
                                    ptr, pix_fmt, width, height, 1);
    }
    
    int avpicture_layout(const AVPicture* src, enum AVPixelFormat pix_fmt,
                         int width, int height,
                         unsigned char *dest, int dest_size)
    {
        return av_image_copy_to_buffer(dest, dest_size,
                                       src->data, src->linesize,
                                       pix_fmt, width, height, 1);
    }
    
    ...
    

    It is also used in the subtitle abstraction:

    typedef struct AVSubtitleRect {
        int x, y, w, h;
        int nb_colors;
    
        AVPicture pict;
        enum AVSubtitleType type;
    
        char *text;
        char *ass;
        int flags;
    } AVSubtitleRect;
    

    And it is used to crudely pass an AVFrame from the decoder level to the muxer level for certain rawvideo muxers, by doing something such as:

        pkt.data   = (uint8_t *)frame;
        pkt.size   =  sizeof(AVPicture);
    

    AVPicture problems

    In the codebase its remaining usage is dubious at best:

    AVFrame as AVPicture

    In some codecs the AVFrames produced or consumed are cast to AVPicture and passed to avpicture functions instead
    of using the av_image functions directly.

    AVSubtitleRect

    For the subtitle codecs, accessing the Rect data requires a pointless indirection, usually something like:

        wrap3 = rect->pict.linesize[0];
        p = rect->pict.data[0];
        pal = (const uint32_t *)rect->pict.data[1];  /* Now in YCrCb! */
    

    AVFMT_RAWPICTURE

    Copying memory from one buffer to another when it can be avoided is considered a major sin (“memcpy is murder”) since it is a costly operation in itself and it usually invalidates the cache if we are talking about large buffers.

    Certain rawvideo muxers try to spare a memcpy, and thus avoid a “murder”, by not copying the AVFrame data to the AVPacket.

    The idea in itself is simple enough: store the AVFrame pointer as if it pointed to a flat array, consider the data size to be the AVPicture size and hope that the data pointed to by the AVFrame remains valid while the AVPacket is consumed.

    Simple and faulty: with the AVFrame ref-counted API codecs may use a Pool of AVFrames and reuse them.
    It can lead to surprising results because the buffer gets updated before the AVPacket is actually written.
    If the frame referenced changes dimensions or gets deallocated it could even lead to crashes.

    Definitely not a great idea.

    Solutions

    Vittorio, wm4 and I worked together to fix the problems. Radically.

    AVFrame as AVPicture

    The av_image functions are now used when needed.
    Some pointless copies got replaced by av_frame_ref, leading to less memory usage and simpler code.

    No AVPictures are left in the video codecs.

    AVSubtitle

    The AVSubtitleRect is updated to have simple data and linesize fields and each codec is updated to keep the AVPicture and the new fields in sync during the deprecation window.

    The code is already a little easier to follow now.

    AVFMT_RAWPICTURE

    Just dropping the “feature” would be a problem since those muxers are widely used in FATE and the time the additional copy takes adds up to quite a lot. Your regression test must be as quick as possible.

    I wrote a safer wrapper pseudo-codec that leverages the fact that both AVPacket and AVFrame use a ref-counted system:

    • The AVPacket takes the AVFrame and increases its ref-count by 1.
    • The AVFrame is then stored in the data field and wrapped in a custom AVBuffer.
    • That AVBuffer destructor callback unrefs the frame.

    This way the AVFrame data won’t change until the AVPacket gets destroyed.
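
    A minimal sketch of that idea, using only the public ref-counting API; this is not the actual wrapper pseudo-codec, and the function names are mine:

    #include "libavcodec/avcodec.h"

    // AVBuffer destructor: drops the extra reference taken below
    static void frame_buffer_free(void *opaque, uint8_t *data)
    {
        AVFrame *frame = (AVFrame *)data;
        av_frame_free(&frame);
    }

    // hypothetical helper: wrap a frame reference into a packet
    static int wrap_frame_in_packet(AVPacket *pkt, const AVFrame *src)
    {
        AVFrame *ref = av_frame_clone(src);     // take our own reference to the data
        if (!ref)
            return AVERROR(ENOMEM);

        pkt->buf = av_buffer_create((uint8_t *)ref, sizeof(*ref),
                                    frame_buffer_free, NULL, 0);
        if (!pkt->buf) {
            av_frame_free(&ref);
            return AVERROR(ENOMEM);
        }

        pkt->data = pkt->buf->data;             // the "payload" is the AVFrame itself
        pkt->size = sizeof(*ref);
        return 0;
    }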

    Goodbye AVPicture

    With release 14 the AVPicture struct will be removed completely from Libav; people using it outside Libav should consider moving to the full AVFrame (and leverage the additional features it provides) or to the av_image functions directly.

    Friday, 02 October

    During the VDD we had lots of discussions and I enjoyed reviewing the initial NihAV implementation. Kostya already wrote some more about the decoupled API that I described at a high level here.

    This article is about some possible implementation details, at least another will follow.

    The new API requires some additional data structures, mainly something to keep the data that is being consumed/produced, additional implementation callbacks in AVCodec and possibly a means to skip the queuing up completely.

    Data Structures

    AVPacketQueue and AVFrameQueue

    In the previous post I took some kind of Queue as a given.

    Ideally the API for it could be really simple:

    typedef struct AVPacketQueue;
    
    AVPacketQueue *av_packet_queue_alloc(int size);
    int av_packet_queue_put(AVPacketQueue *q, AVPacket *pkt);
    int av_packet_queue_get(AVPacketQueue *q, AVPacket *pkt);
    int av_packet_queue_size(AVPacketQueue *q);
    void av_packet_queue_free(AVPacketQueue **q);
    
    typedef struct AVFrameQueue;
    
    AVFrameQueue *av_frame_queue_alloc(int size);
    int av_frame_queue_put(AVFrameQueue *q, AVFrame *frame);
    int av_frame_queue_get(AVFrameQueue *q, AVFrame *frame);
    int av_frame_queue_size(AVFrameQueue *q);
    void av_frame_queue_free(AVFrameQueue **q);
    

    Internally it leverages the ref-counted API (av_packet_move_ref and av_frame_move_ref) and any data structure that could fit the queue-usage. It will be used in a multi-thread scenario so a form of Lock has to be fit into it.
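
    As a rough illustration, a fixed-size, lock-protected implementation of the packet put/get pair could look like the sketch below; this is not the proposed implementation, and the ring-buffer layout and field names are mine.

    #include <pthread.h>
    #include "libavcodec/avcodec.h"

    struct AVPacketQueue {
        AVPacket *pkts;            // preallocated ring of `size` packets
        int size, count;
        int head, tail;
        pthread_mutex_t lock;
    };

    int av_packet_queue_put(AVPacketQueue *q, AVPacket *pkt)
    {
        pthread_mutex_lock(&q->lock);
        if (q->count == q->size) {
            pthread_mutex_unlock(&q->lock);
            return AVERROR(EAGAIN);                  // full, drain it first
        }
        av_packet_move_ref(&q->pkts[q->tail], pkt);  // no data copy, just ownership
        q->tail = (q->tail + 1) % q->size;
        q->count++;
        pthread_mutex_unlock(&q->lock);
        return 0;
    }

    int av_packet_queue_get(AVPacketQueue *q, AVPacket *pkt)
    {
        pthread_mutex_lock(&q->lock);
        if (!q->count) {
            pthread_mutex_unlock(&q->lock);
            return AVERROR(EAGAIN);                  // empty, nothing to hand out yet
        }
        av_packet_move_ref(pkt, &q->pkts[q->head]);
        q->head = (q->head + 1) % q->size;
        q->count--;
        pthread_mutex_unlock(&q->lock);
        return 0;
    }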

    We have already something specific for AVPlay, using a simple Linked List and a FIFO for some other components that have a near-constant maximum number of items (e.g. avconv, NVENC, QSV).

    Possibly also a Tree could be used to implement something such as av_packet_queue_insert_by_pts and have some forms of reordering happen on the fly. I’m not a fan of it, but I’m sure someone will come up with the idea..

    The Queues are part of AVCodecContext.

    typedef struct AVCodecContext {
        // ...
    
        AVPacketQueue *packet_queue;
        AVFrameQueue *frame_queue;
    
        // ...
    } AVCodecContext;
    

    Implementation Callbacks

    In Libav the AVCodec struct describes some specific codec features (such as the supported framerates) and holds the actual codec implementation through callbacks such as init, decode/encode2, flush and close.
    The new model obviously requires additional callbacks.

    Once the data is in a queue it is ready to be processed, the actual decoding or encoding can happen in multiple places, for example:

    • In avcodec_*_push or avcodec_*_pull, once there is enough data. In that case the remaining functions are glorified proxies for the matching queue function.
    • somewhere else such as a separate thread that is started on avcodec_open or the first avcodec_decode_push and is eventually stopped once the context related to it is freed by avcodec_close. This is what happens under the hood when you have certain hardware acceleration.

    Common

    typedef struct AVCodec {
        // ... previous fields
        int (*need_data)(AVCodecContext *avctx);
        int (*has_data)(AVCodecContext *avctx);
        // ...
    } AVCodec;
    

    Those are used by both the encoder and decoder, some implementations such as QSV have functions that can be used to probe the internal state in this regard.

    Decoding

    typedef struct AVCodec {
        // ... previous fields
        int (*decode_push)(AVCodecContext *avctx, AVPacket *packet);
        int (*decode_pull)(AVCodecContext *avctx, AVFrame *frame);
        // ...
    } AVCodec;
    

    Those two functions can take a portion of the work the current decode function does, for example:
    – the initial parsing and dispatch to a worker thread can happen in the _push.
    – reordering and blocking until there is data to output can happen on _pull.

    Assuming the reordering does not happen outside the pull callback in some generic code.

    Encoding

    typedef struct AVCodec {
        // ... previous fields
        int (*encode_push)(AVCodecContext *avctx, AVFrame *frame);
        int (*encode_pull)(AVCodecContext *avctx, AVPacket *packet);
    } AVCodec;
    

    As with the Decoding callbacks, the encode2 workload is split: the _push function might just keep queuing up until there are enough frames to complete the initial analysis, while, for single-threaded encoding, the rest of the work happens at the _pull.

    Yielding data directly

    So far the API mainly keeps some queue filled and lets some magic happen under the hood; let’s see some usage examples first:

    Simple Usage

    Let’s expand the last example from the previous post: register callbacks to pull/push the data and have some simple loops.

    Decoding

    typedef struct DecodeCallback {
        int (*pull_packet)(void *priv, AVPacket *pkt);
        int (*push_frame)(void *priv, AVFrame *frame);
        void *priv_data_pull, *priv_data_push;
    } DecodeCallback;
    

    Two pointers since you pull from a demuxer+parser and you push to a splitter+muxer.

    int decode_loop(AVCodecContext *avctx, DecodeCallback *cb)
    {
        AVPacket *pkt  = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();
        int ret;
        while ((ret = avcodec_decode_need_data(avctx)) > 0) {
            ret = cb->pull_packet(cb->priv_data_pull, pkt);
            if (ret < 0)
                goto end;
            ret = avcodec_decode_push(avctx, pkt);
            if (ret < 0)
                goto end;
        }
        while ((ret = avcodec_decode_have_data(avctx)) > 0) {
            ret = avcodec_decode_pull(avctx, frame);
            if (ret < 0)
                goto end;
            ret = cb->push_frame(cb->priv_data_push, frame);
            if (ret < 0)
                goto end;
        }
    
    end:
        av_frame_free(&frame);
        av_packet_free(&pkt);
        return ret;
    }
    

    Encoding

    For encoding something quite similar can be done:

    typedef struct EncodeCallback {
        int (*pull_frame)(void *priv, AVFrame *frame);
        int (*push_packet)(void *priv, AVPacket *packet);
        void *priv_data_push, *priv_data_pull;
    } EncodeCallback;
    

    The loop is exactly the same beside the data types swapped.

    int encode_loop(AVCodecContext *avctx, EncodeCallback *cb)
    {
        AVPacket *pkt  = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();
        int ret;
        while ((ret = avcodec_encode_need_data(avctx)) > 0) {
            ret = cb->pull_frame(cb->priv_data_pull, frame);
            if (ret < 0)
                goto end;
            ret = avcodec_encode_push(avctx, frame);
            if (ret < 0)
                goto end;
        }
        while ((ret = avcodec_encode_have_data(avctx)) > 0) {
            ret = avcodec_encode_pull(avctx, pkt);
            if (ret < 0)
                goto end;
            ret = cb->push_packet(cb->priv_data_push, pkt);
            if (ret < 0)
                goto end;
        }
    
    end:
        av_frame_free(&frame);
        av_packet_free(&pkt);
        return ret;
    }
    

    Transcoding

    Transcoding, the naive way, could be something such as

    int transcode(AVFormatContext *mux,
                  AVFormatContext *dem,
                  AVCodecContext *enc,
                  AVCodecContext *dec)
    {
        DecodeCallback dcb = {
            .pull_packet    = get_packet,
            .push_frame     = av_frame_queue_put,
            .priv_data_pull = dem,
            .priv_data_push = enc->frame_queue };
        EncodeCallback ecb = {
            .pull_frame     = av_frame_queue_get,
            .push_packet    = push_packet,
            .priv_data_pull = enc->frame_queue,
            .priv_data_push = mux };
        int ret;
    
        do {
            if ((ret = decode_loop(dec, &dcb)) > 0)
                ret = encode_loop(enc, &ecb);
        } while (ret > 0);
    
        return ret;
    }
    
    

    One loop feeds the other through the queue. get_packet and push_packet are demuxing and muxing functions; they might end up being two other Queue functions once the AVFormat layer gets a similar overhaul.

    Advanced usage

    From the examples above you may notice that in some situations you could possibly do better;
    all the loops pull data from a queue and push it immediately to another:

    • why not feed the right queue immediately once you have the data ready?
    • why not do some processing before feeding the decoded data to the encoder, such as converting the pixel format?

    Here are some additional structures and functions to enable advanced users:

    typedef struct AVFrameCallback {
        int (*yield)(void *priv, AVFrame *frame);
        void *priv_data;
    } AVFrameCallback;
    
    typedef struct AVPacketCallback {
        int (*yield)(void *priv, AVPacket *pkt);
        void *priv_data;
    } AVPacketCallback;
    
    typedef struct AVCodecContext {
        // ...
    
        AVFrameCallback *frame_cb;
        AVPacketCallback *packet_cb;
    
        // ...
    
    } AVCodecContext;
    
    int av_frame_yield(AVFrameCallback *cb, AVFrame *frame)
    {
        return cb->yield(cb->priv_data, frame);
    }
    
    int av_packet_yield(AVPacketCallback *cb, AVPacket *packet)
    {
        return cb->yield(cb->priv_data, packet);
    }
    

    Instead of using the Queue API directly, it would be possible to use yield functions and give the user a means to override them.

    Some API sugar could be something along the lines of this:

    int avcodec_decode_yield(AVCodecContext *avctx, AVFrame *frame)
    {
        int ret;
    
        if (avctx->frame_cb) {
            ret = av_frame_yield(avctx->frame_cb, frame);
        } else {
            ret = av_frame_queue_put(avctx->frame_queue, frame);
        }
    
        return ret;
    }
    

    Whenever a frame (or a packet) is ready, it could be passed immediately to another function. Depending on your threading model and CPU, it might be much more efficient to skip some enqueuing+dequeuing steps, for example by feeding directly some user queue that uses different data types.
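    As a sketch of what that could look like on the user side, still assuming the hypothetical frame_cb hook proposed above, a custom callback could receive each decoded frame as soon as it is ready and hand it straight to application code. Everything below is made up for illustration.

    /* Everything here is made up: it only shows how the proposed frame_cb hook
     * could be used to receive frames directly, skipping the internal queue. */
    struct my_sink {
        int frames_seen;
    };

    static int my_frame_ready(void *priv, AVFrame *frame)
    {
        struct my_sink *sink = priv;
        sink->frames_seen++;
        /* convert the pixel format, feed an encoder queue, etc. */
        return 0;
    }

    /* ... while setting up the decoder ... */
    struct my_sink sink = { 0 };
    AVFrameCallback cb = { my_frame_ready, &sink };
    avctx->frame_cb = &cb;   /* frames are now yielded straight to my_frame_ready() */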

    This approach might work well even internally to insert bitstream reformatters after the encoding or before the decoding.

    Open problems

    The callback system is quite powerful, but there are at least a couple of issues to take care of:
    – Error reporting: when something goes wrong, how to notify what broke?
    – Error recovery: how much does the user have to undo to fall back properly?

    Probably this part should be kept for later, since there is already a huge amount of work.

    What’s next

    Muxing and demuxing

    Ideally the container format layer should receive the same kind of overhaul. I'm not even halfway through documenting what should
    change, but from this blog post you might guess the kind of changes. Spoiler: the I/O layer gets spun off into a separate library.

    Proof of Concept

    Soon^WNot so late I'll complete a POC out of this and possibly hack avplay so that it uses either QSV or videotoolbox as a test case (depending on which operating system I'm playing with when I start); I'll probably see soon which limitations this approach has.

    If you like the ideas posted above or you want to discuss them more, you can join the Libav irc channel or mailing list to discuss and help.

    Thursday, 10 September

    This is a tiny introduction to Libav, the organization.

    Libav

    The project aims to provide useful tools, written in portable code that is readable, trustworthy and performant.

    Libav is an opensource organization focused on developing libraries and tools to decode, manipulate and encode multimedia content.

    Structure

    The project tries to be as non-hierarchical as possible. Every contributor must abide by a well defined set of rules, no matter their role.

    For decisions we strive to reach near-unanimous consensus. Discussions may happen on irc, mailing-list or in real life meetings.

    If possible, conflicts should be avoided and otherwise resolved.

    Join us!

    We are always looking for enthusiastic new contributors and will help you get started. Below you can find a number of possible ways to contribute. Please contact us.

    Roles

    Even if the project is non-hierarchical, it is possible to define specific roles within it. Roles do not really give additional power but additional responsibilities.

    Contributor

    Contributing to Libav makes you a Contributor!
    Anybody who reviews patches, writes patches, helps triaging bugs, writes documentation, helps people solve their problems, or keeps our infrastructure running is considered a contributor.

    It does not matter how little you contribute. Any help is welcome.

    On top of the standard great feats of contributing to an opensource project, special chocolate is always available during the events.

    Reviewer

    Many eyes might not make every bug shallow, but probably a second and a third pair might prevent some silly mistakes.

    A reviewer is supposed to read the new patches and prevent mistakes (silly, tiny or huge) from landing in master.

    Because of our workflow, spending time reading other people's patches is quite common.

    People with specific expertise might get nagged to give their opinion more often than others, but everybody might spot something that looks wrong and probably is.

    Bugwrangler

    Checking that bugs are fixed and asking for better reports is important.

    Bug wrangling involves making sure reported issues have all the needed information to start fixing the problem and checking if old issues are still valid or had been fixed already.

    Committer

    Nobody can push a patch to the master until it is reviewed, but somebody has to push it once it is.

    Committers are the people who push code to the main repository after it has been reviewed.

    Being a committer requires you to take newly submitted patches, make sure they work as expected either locally or pushing them through our continuous integration system and possibly fix minor issues like typos.

    Patches from a committer go through the normal review process as well.

    Infrastructure Administrator

    The regression test system, git repository, the samples collection, the website, the patch trackers, the wiki and the issue tracker are all managed on dedicated hardware.

    This infrastructure needs constant maintaining and improving.

    Most of it comes from people devoting their time and (besides a few exceptions) their own hardware; this role definitely requires a huge amount of dedication.

    Rules

    The project strives to provide a pleasant environment for everybody.

    Every contributor is considered a member of the team, regardless of whether they are a newcomer or a founder. Nobody has special rights or prerogatives.

    Well defined rules have been adopted since the founding of the project to ensure fairness.

    Code of Conduct

    A quite simple code of conduct is in place in our project.

    It boils down to respecting the other people and being pleasant to deal with.

    It is commonly enforced with a friendly warning, followed by a request to leave if the person is unable to behave and, then, eventual removal if everything else fails.

    Contribution workflow

    The project has a simple contribution workflow:

    • Every patch must be sent to the mailing-list
    • Every patch must get a review and an Ok before it lands in the master branch

    Code Quality

    We have plenty of documentation to make it easy for you to prepare patches.

    The reviewers usually help newcomers by reformatting their first patches and pointing out and fixing common pitfalls.

    If some mistakes are not caught during review, there are a few additional means to prevent them from hitting a release.

    Post Scriptum

    This post tried to summarize the project and its structure as if the legends surrounding it do not exist and the project is just a clean slate. Shame on me for not having written this blog post 5 years ago.

    Past and Present

    I already wrote about the past and the current situation of Libav, if you are curious please do read the previous posts. I will probably blog again about the social issues soon.

    Future

    Release 12 is in the ABI break window now and soon the release branch will be spun off! After that, some of my plans to improve the API will see initial implementations and hopefully will be available as part of release 13 (and nihav).

    I will discuss avframe_yield first since Kostya already posted about a better way to handle container formats.

    Friday, 14 August

    I'd like to add the loop option to avconv. This option allows repeating an input file a given number of times, so the output contains the specified number of copies of the input. The command is ./avconv -loop n -i infile outfile, where n specifies how many times the input file should be looped in the output.

    How does this work?
    After processing the input file for the first time, avconv calls the new seek_to_start function to seek back to the beginning of the file. av_seek_frame is called to perform the seeking itself, but there are other things needed for the loop option to work.

    1) flush
    Flush decoder buffers to take out delayed frames. In avconv this is done by calling process_input_file with NULL as the frame; process_input_packet had to be modified a little so that it does not signal EOF on the filters when seeking.

    2) timestamps (ts)
    To have correct timestamps in the "after seeking" part of the output stream, they have to be corrected with ts = ts_{from the demuxer} + n * (duration of the input stream), where n is the number of times the input stream has been processed so far. This duration is the duration of the longest stream in the file, because all the streams have to be processed (or played) before starting the next loop. The duration of a stream is the last timestamp - the first timestamp + the duration of the last frame. For audio streams one "frame" is usually a constant number of samples and its duration is number of samples / sample rate. Video frames, on the other hand, are displayed unevenly, so their average framerate can be used for the last frame duration if available; if the average framerate is not known, the last frame duration is just 1 (in the current time base).
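    As a rough sketch of that correction (not the actual avconv code; the function and parameter names below are made up), the adjustment amounts to offsetting every timestamp coming from the demuxer by the number of completed loops times the input duration:

    #include <libavcodec/avcodec.h>

    /* Hedged sketch of the correction described above; names are made up and
     * the real logic lives in avconv's input handling, not here. */
    static void loop_fix_timestamps(AVPacket *pkt, int loops_done, int64_t input_duration)
    {
        /* input_duration is expressed in the stream time base */
        if (pkt->dts != AV_NOPTS_VALUE)
            pkt->dts += loops_done * input_duration;
        if (pkt->pts != AV_NOPTS_VALUE)
            pkt->pts += loops_done * input_duration;
    }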

    https://github.com/sasshka/libav/commit/90f2071420b6fd50eea34982475819248e5f6c8f

    Thursday, 30 July

    We are getting closer to a new release and you can see it is an even release by the amount of old and crufty code we are dropping. This usually is welcomed by some people and hated by others. This post is trying to explain what we do and why we are doing it.

    New API and old API

    Since the start of Libav we tried to address the painful shortcomings of the previous management; here is the short list:

    • No leaders or dictators, there are rules agreed by consensus and nobody bends them.
    • No territoriality, nobody “owns” a specific area of the codebase nor has special rights on it.
    • No unreviewed changes in the tree, all the patches must receive an Ok by somebody else before they can be pushed in the tree.
    • No “cvs is the release”, major releases at least twice per year, bugfix-only point releases as often as needed.
    • No flames and trollfests, some basic code of conduct is enforced.

    One of the effects of this is that the APIs are discussed, proposals are documented, and little by little we are migrating to a hopefully more rational and less surprising API.

    What’s so bad regarding the old API?

    Many of the old APIs were not designed at all, but just randomly added because mplayer or ffmpeg.c happened to need some
    feature at the time. The result was usually un(der)documented, hard to use correctly and often not well defined in some cases. Most users of the old API that I’ve seen actually used it wrong and would at best occasionally fail to work, at worst crash randomly.
    – Anton

    To expand a bit on that you can break down the issues with the old API in three groups:

    • Unnamespaced common names (e.g. CODEC_ID_NONE), those may or might not clash with other libraries.
    • Fields that are now internal-only but were previously exposed and were expected to be something they really are not (e.g. AVCodecContext.width).
    • Functionality not really working well (e.g. the old audio resampler) for which a replacement got provided eventually (AVResample).

    The worst result of API misuse could be a crash in specific situations (e.g. if you use the AVCodecContext dimensions when you should use the AVFrame dimensions to allocate your screen surface, you get quite an ugly crash, since the former represent the decoding-time dimensions while the latter are the dimensions of the frame you are going to present, and they can vary a LOT).
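    To make the point concrete, here is a hedged sketch of the right pattern; MySurface and my_surface_resize() are made-up application-side names, not anything in Libav.

    #include <libavcodec/avcodec.h>

    /* Hedged sketch, not real Libav or application code: the surface used for
     * presentation should follow the AVFrame dimensions, not the AVCodecContext
     * ones, which describe decoding-time values and can differ. */
    typedef struct MySurface { int w, h; } MySurface;

    static int my_surface_resize(MySurface *s, int w, int h) { s->w = w; s->h = h; return 0; }

    static int update_surface(MySurface *s, const AVFrame *frame)
    {
        if (s->w != frame->width || s->h != frame->height)
            return my_surface_resize(s, frame->width, frame->height);
        return 0;
    }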

    But Compatibility ?!

    In Libav we try our best to provide migration paths, and in the past years we even went the extra mile by providing patches for quite a bit of the software Debian was distributing at the time. (Since nobody even said thanks for the effort, I doubt the people involved would do that again…)

    Keeping backwards compatibility forever is not really feasible:

    • You do want to remove a clashing symbol from your API
    • You do not want applications crashing because of wrong assumptions
    • You do want people to use the new API and not keep compatibility wrappers that might not work in certain
      corner cases.

    The current consensus is to try to keep an API deprecated for about 2 major releases; with release 12 we are dropping code that has been deprecated for 2-3 years.

    Next!

    I had been busy with my dayjob deadlines so I couldn't progress on the new API for avformat and avcodec I described before; the next blogpost will probably be longer and a bit more technical again.

    Thursday, 09 July

    Debian decided to move to the new FFmpeg. What does it mean to me? Why should I care? This post won't be technical for once; if you think "Libav is evil" start reading from here.

    Relationship between Libav and Debian

    After the split of what was FFmpeg into two projects, Michael Niedermayer kept the name due to his ties with the legal owner of the trademark, while the group of 18 people "merged" everything they were doing under the new Libav name.

    For Gentoo I, maybe naively, decided to just have both and let whoever wants to maintain the other package do so. Gentoo is about choice, and whoever wants to shoot themselves in the foot has to be free to do that in the safest possible way.

    For Debian, being binary packaged, whoever was maintaining the package decided to stay with Libav. It wasn't surprising, given that "lack of releases" was one of the sore points of the former FFmpeg, and he started to get involved with upstream to try to fix it.

    Perceived Leverage and Real Shackles

    Libav started with the idea to fix everything that went wrong with the Former FFmpeg:
    – Consensus instead of idolatry for THE Leader
    – Paced releases instead of cvs is always a release
    – Maintained releases branches for years
    – git instead of svn
    – Cleaner code instead of quick hacks to solve the problem of the second
    – Helping downstreams instead of giving them the finger.

    Being in Debian, according to some people, was undeserved because "Libav is evil", and since we wrongly thought that people would look at actions and not at random blogposts by people with more bias than anything, we just kept writing code. It was a huge mistake; this blogpost and the previous one are my attempt to address this.

    Being in Debian to me meant that I had to help fixing stale versions of software, often even without upstream.

    The people at Debian, instead of helping (the number of patches coming from people @debian.org over the years amounted to 1 according to git), kept piling up work on us.

    Fun requests such as “Do remove a standard test image because its origin according to them is unclear” or “Do maintain the ancient release branch that is 3 major releases behind” had been quite common.

    For me Debian had been no help and an additional burden.

    The leverage that being in a distribution theoretically gives, according to those crying because the evil Libav was in Debian, amounts to none for me: their users complain because the version provided is stale, their developers do not help even to keep the point releases up or to update the software using Libav because they are scared of being tainted, and downstreams such as Kubi (so naive as to praise FFmpeg for what happened in Libav, such as the HEVC multi-thread support Anton wrote) would keep picking the implementation they prefer and use ffmpeg-only API whenever they could (Debian will ask us to fix that for them anyway).

    Is being in Debian important?

    Last time they were discussing moving to FFmpeg I had the unpleasant experience of reading lots of lovely emails with passive-aggressive snide remarks such as "libav has just developers, not users", or seeing the fruits of the smear campaign, such as "is it true you stole the FFmpeg hardware", in their mailing list (btw, during the past VDD the FFmpeg people there said at least that would be addressed; well, it has not been yet, thank you).

    At that time I got asked to present Libav; this time, after reading in the Debian wiki the "case" presented with skewed git statistics (maybe purge the merge commits when you count them to compare project activity?) and other number dressing, I just got sick of it.

    Personally I do not care. There are better ways to spend your own free time than doing distro maintenance work for people that do not even thank you (because you are evil).

    The smear campaign pays

    I’m sure that now that now that the new FFmpeg gets to replace Libav will get more contributions from people @debian.org and maybe those that were crying for the “oh so unjust” treatment would be happy to do the maintenance churn.

    Anyway that’s not my problem anymore and I guess I can spend more time writing about the “social issues” around the project trying to defuse at least a little the so effective “Libav is evil” narrative a post a time.

    Friday, 03 July

    Last weekend some libav developers met in the South Pole offices with additional sponsorship from Inteno Broadband Technology. (And the people at Borgodoro that gave us more chocolate to share with everybody).

    Sprints

    Since last year Libav started to have sprints to meet up, discuss in person topics that require a more direct medium than IRC or the mailing list, and usually write some code while asking for direct opinions and help.

    Who attended

    Benjamin was our host for the event. Andreas joined us for the first day only, while Anton, Vittorio, Kostya, Janne, Jan and Rémi stayed both days.

    What we did

    The focus had been split across a number of areas of interest:

    • API: with some interesting discussion between Rémi and Anton regarding how to clarify a tricky detail regarding AVCodecContext and AVFrame and who to trust when.
    • Reverse Engineering: With Vittorio and Kostya having fun unraveling codecs one after the other (I think they got 3 working)
    • Release 12 API and ABI break
      • What to remove and what to keep further
      • What to change so it is simpler to use
      • If there is enough time to add the decoupled API for avcodec
    • Release 12 wishlist:
      • HEVC speed improvements, since even the C code can be sped up.
      • HEVC extended range support, since there is YUV 422 content out now.
      • More optimizations for the newer architectures (aarch64 and power64le)
      • More hardware accelerator support (e.g. HEVC encoding and decoding support for Intel MediaSDK).
      • Some more filters, since enough people asked for them.
      • Merge some of the pending work (e.g. go2meeting3, the new asf demuxer).
      • Get more security fixes in (with ago kindly helping me on this).
      • … and more …
    • New website with markdown support to make easier for people to update.

    During the sprint we managed to write a lot of code and even to push some during the sprint.
    Maybe a little too early in the case of asf, but better to have it in and get to fix it for the release.

    Special mention to Jan for getting a quite exotic container almost ready, I'm looking forward to seeing it on the ml; and to Andreas for reminding me that AVScale is sorely needed, by sending me a patch that fixes a problem his PowerPC users are experiencing while uncovering some strange problem in swscale… I'll need to figure out a good way to get a PowerPC big-endian system running to look at it in detail.

    Thank you

    I want to especially thank all the people at South Pole that welcomed me when I arrived one day in advance, and all the people that participated and made the event possible; it had been fun!

    Post Scriptum

    • This post had been delayed 1 week since I had been horribly busy, sorry for the delay =)
    • During the sprint, legends such as kropping the sourdough monster and the burning teapot had been created; some references to them will probably appear in commits and code.
    • Anybody with experience with qemu-user for PowerPC is welcome to share their knowledge with me.

    Wednesday, 25 March

    I am hearing about a lot of people interested in open source and giving back to the community. I think it can be an exciting experience and it can be positive in many different ways: first of all, more contributors mean better open-source software being produced, and that is great, but it also means that the people involved can improve their skills and learn more about how successful projects get created.

    So I wondered why many developers do not take the first step: what is stopping them from sending the first patch or the first pull request? I think that often they do not know where to start, or they think that contributing to the big projects out there is intimidating, something to be left to an alien form of life, some breed of extra-good programmers totally separated from the common fellows writing code in the world we experience daily.

    I think that hearing the stories of a few developers that have made major contributions to top-level projects could help to get past these misconceptions. So I asked a few questions to this dear friend of mine, Luca Barbato, who contributed, among others, to Gentoo and VLC.

    Let’s start from the beginning: when did you start programming?

    I started dabbling with stuff during high school, but I started doing something more consistent around the time I started university.

    What was your first contribution to an open-source project?

    I think either patching the ati-drivers to work with the 2.6 series or hacking cloop (an early kernel module for compressed loops) to use lzo instead of gzip.

    What are the main projects you have been involved into?

    Gentoo, MPlayer, Libav, VLC, cairo/pixman

    How did you start being involved in Gentoo? Can you explain the roles you have covered?

    Daniel Robbins invited me to join, and I thought "why not?"

    During the early times I took care of PowerPC and [Altivec](http://en.wikipedia.org/wiki/AltiVec), then I focused on the toolchain due to the fact that gcc and binutils tended to break software in funny ways, then multimedia, since altivec was mainly used there. I have been part of the Council a few times, used to be a recruiter (if you want to join Gentoo feel free to contact me anyway, we love to have more people involved) and I'm involved with community relationships lately.

    Note: Daniel Robbins is the creator of Gentoo, a Linux distribution. 

    Are there other less famous projects you have contributed to?

    I have minor contributions in quite a bit of software due to the fact that in Gentoo we try our best to upstream our changes, and I like to get fixes back into what I like to use.

    What are your motivations to contribute to open-source?

    Mainly because I can =)

    Who helped you to start contributing? From whom have you learnt the most?

    Daniel Robbins surely had been one of the first asking me directly to help.

    You learn from everybody so I can’t name a single person among all the great people I met.

    How did you get to know Daniel Robbins? How did he help you?

    I was a Gentoo user, I happened to do stuff he deemed interesting, and he asked me to join.

    He involved me in quite a number of interesting projects, some worked (e.g. Gentoo PowerPC), some (e.g. Gentoo Games) not so much.

    Do your contributions to open-source help your professional life?

    In some way it does; contrary to the assumption, I'm just seldom paid to improve the projects I care about the most, but at the same time having them working helps me when I need them during professional work.

    How do you face disagreement on technical solutions?

    I’m a fan of informed consensus, otherwise prototypes (as in “do, test and then tell me back”) work the best.

    To contribute to OSS, are the technical skills or the diplomatic/relational skills more important?

    Both are needed at different times; opensource is not just software, you MUST get along with people.

    Have you found different ways to organize projects? What works best in your opinion? What works worst?

    Usually the main problem is dealing with poisonous people; it doesn't matter if it is a 10-people project or a 300+-people project. You can have a dictator, you can have a council, you can have global consensus: poisonous people are what makes your community suffer a lot. Bonus points if the poisonous people get clueless fans giving them additional voices.

    Did you ever send a patch for the Linux kernel?

    Not really, I’m not fond of that coding style so usually other people correct the small bugs I stumble upon before I decide to polish my fix so it is acceptable =)

    Do you have any suggestions for people looking to get started contributing to open-source?

    Pick something you use, scratch your own itch first, do not assume other people are infallible or heroes.

    ME: I certainly agree with that, it is one of the best pieces of advice. However, if you cannot find anything suitable, at the end of this post I wrote a short list of projects that could use some help.

    Can you tell us about your best and your worst moments with contribution to OSS?

    The best moment is recurring and it is when some user thanks you since you improved his or her life.

    The worst moment for me is when some rabid fan claims I'm evil because I'm contributing to Libav and, in the same statement, even praises FFmpeg for something originally written in Libav; it has happened more than once.

    What are you working on right now and what plans do you have for the future?

    Libav, plaid, bmdtools, commonmark. In the future I might play a little more with [rust](http://www.rust-lang.org/).

    Thanks Luca! I would be extremely happy if this short post could give someone the last push they need to contribute to an existing open-source project or start their own: I think we could all use more, better, open-source software. So let's write it.

    One thing I admire in Luca is that he is always curious and ready to jump on the next challenge. I think this is the perfect attitude to become an OSS contributor: just start playing around with the things you like and talk to people, you could find more possibilities to contribute than you could imagine.

    …and one final thing: Luca is also the author of open-source recipes: he created the recipes of two types of chocolate bars dedicated to Libav and VLC. You can find them on the borgodoro website.


    I suggest to take a look at his blog.

    A few open-source projects you could consider contributing to

    Well, just in case you are eager to start writing some code and you are looking for some projects to contribute to, here are a few, written with different technologies. If you want to start contributing to any of those and you need directions, just drop me a line (federico at tomassetti dot me) and I will be glad to help!

    • If you are interested in contributing to Libav, you can take a look at this post: there I explained how I submitted my first patch (approved in the meantime!). It is written in C.

    • You could be also interested in plaid: it is a Python web application to manage git patches sent by e-mail (there are a few projects using this model like libav or the linux kernel)

    • WorldEngine, it is a world generator written in Python

    • Plate-tectonics, it is a library for plate tectonics simulation. It is written in C++

    • JavaParser a Java parser, written in Java

    • Incremental Java parser, an incremental Java parser, written in Scala

    The post How people get started contributing to open-source? A few questions to Luca Barbato, contributor to Gentoo, MPlayer, Libav, VLC, cairo/pixman appeared first on Federico Tomassetti - Consultant Software Engineer.

    Wednesday, 18 February

    I happened to have a few hours free and I was looking for some coding to do. I thought about VLC, the media player which I have enjoyed so much using over the years and I decided that I wanted to contribute in some way.

    To start helping in such a complex process there are a few steps involved. Here I describe how I got my first patch accepted. In particular, I wrote a patch for libav, the library behind VLC.

    The general picture

    I started by reading the wiki. It is a very helpful starting point, but the process to set up the environment and send a first patch was not yet 100% clear to me, so I got in touch with some of the developers of libav to understand how they work and how I could start lending a hand with something simple. They explained to me that the easiest way to start is by solving issues reported by static analysis tools and style checkers. They use uncrustify to verify that the code adheres to their style guidelines and they run coverity to check for potential issues like memory leaks or null dereferences. So I:

    • started looking at some coverity issues
    • found something easy to address (a very simple null dereference)
    • prepared the patch
    • submitted the patch

    After a few minutes the patch was approved by a committer, ready to be merged. The day after it made its way to the master branch. Yeah!

    Download source code, build libav and run the tests

    First of all, let’s clone the git repository:

    git clone git://git.libav.org/libav.git

    Alternatively you could use the GitHub mirror, if you want to.

    At this point you may want to install all the dependencies. The instructions are platform specific, you can find them here. If you are on Mac OS X be sure to have yasm installed, because nasm does not work. If you have both installed, configure will pick up yasm (correctly). Just be sure to run configure after installing yasm.

    If everything goes well you can now build libav by running:

    ./configure
    make

    Note that it is fine to build in-tree (no need to build in a separate directory).

    Now it is time to run the tests. You will have to specify a directory where some samples, which are used by the tests, will be downloaded. Let's assume you want to put your samples under ~/libav-samples:

    mkdir ~/libav-samples
    # This downloads the samples
    make fate-rsync SAMPLES=~/libav-samples
    # This runs the tests
    make fate

    Did everything run fine? Good! Let’s start to patch then!

    Write the patch

    First of all we need to find an open issue. Visit the Coverity page for libav at https://scan.coverity.com/projects/106. You will have to ask for access and wait until someone grants it to you. When you are able to log in you will encounter a screen like this:

    [Screenshot: the Coverity issue list for libav]

    Here, this seems an easy one! The variable oggstream has been allocated by av_mallocz (basically a wrapper for malloc) but the result value has not been checked. If the allocation fails, a NULL pointer is returned, and when we try to access it at the next line things are going to end up unpleasantly. What we need to do is to check the return value of av_mallocz and if it is NULL we should return an error. The appropriate error to return in this case is AVERROR(ENOMEM). To get this information… you have to start reading code, getting familiar with the way of doing business of this codebase.
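    For illustration, the pattern being described looks roughly like this; it is a sketch of the kind of fix, not the exact patch that was submitted:

    /* Sketch of the fix pattern: check the allocation and bail out with
     * AVERROR(ENOMEM) instead of dereferencing a possible NULL pointer. */
    oggstream = av_mallocz(sizeof(*oggstream));
    if (!oggstream)
        return AVERROR(ENOMEM);
    /* only now is it safe to touch oggstream->... */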

    Libav follows strict rules about git commit messages: use git log to look at previous commits and try to use the same style.

    Submitting the patch

    I think many of you are familiar with GitHub and the whole process of submitting a patch for review. GitHub is great because it made that process so easy. However, there are some projects (notably including the Linux kernel) which adopt another approach: they receive patches by e-mail.

    Git has a functionality that permits submitting a patch by e-mail with a simple command. The patch will be sent to the mailing list, discussed, and if approved the e-mail will be downloaded, processed through git and committed in the official repository. Does it sound cumbersome? Well, it does to me, spoiled as I am by GitHub and similar tools, but, you know, when in Rome do as the Romans do, so…

    # This installs the git extension for sending patches through e-mail
    sudo apt install git-email 
    # This submits a patch built using the last commit
    git send-email -1 --to libav-devel@libav.org

    Sending patches using gmail with 2-factor authentication enabled

    Now, many of you are using gmail and many of you have enabled 2-factor authentication (right? If not, you should). If this is your case you will get an error along these lines:

    Password for 'smtp://f.tomassetti@gmail.com@smtp.gmail.com:587': 5.7.9 Application-specific password required. Learn more at 5.7.9 http://support.google.com/accounts/bin/answer.py?answer=185833 cj12sm14743233wjb.35 - gsmtp

    Here you can find how to create a password for this goal: https://support.google.com/accounts/answer/185833 The name of the application that I had to create was smtp://f.tomassetti@gmail.com@smtp.gmail.com:587. Note that I used the same name specified in the previous error message.

    What if I need to correct my patch?

    If things go well an e-mail with your patch will be sent to the mailing-list, someone will look at it and accept it. Most of the time you will receive suggestions about possible adjustments to improve your patch. When that happens you want to submit a new version of your patch in the same thread which contains the first version of the patch and the e-mails commenting on it.

    To do that you want to update your patch (typically using git commit --amend) and then run something like:

    git send-email -1 --to libav-devel@libav.org --in-reply-to="54E0F459.3090707@gentoo.org"

    Of course you need to find out the message-id of the e-mail to which you want to reply. To do that in gmail, select the "Show original" item from the contextual menu for the message, and in the screen that opens look for the Message-Id header.

    Tools to manage patches sent by e-mail

    There are also web applications which are used to manage the patches sent by e-mail. Libav is currently using Patchwork for managing patches. You can see it deployed at https://patches.libav.org/project/libav-devel/list/. Currently another tool is being developed to replace Patchwork. It is named Plaid and I tried to help a little bit with that also 🙂

    Conclusions

    Mine has been a very small contribution, and in the future I hope to be able to do more. But being a maintainer of other open-source projects I have learned that even small help is useful and appreciated, so for today I feel good.


    Please, if I am missing something help me correct this post

    The post How to contribute to Libav (VLC): just got my first patch approved appeared first on Federico Tomassetti - Consultant Software Engineer.

    Monday, 12 January

    I'm interested in history, so I like visiting castles (or their ruins) and historical towns. There are many of them here in the Czech Republic. Some Czech sights, like those in Prague, Kutná Hora or Český Krumlov, are well known and, especially during the summer, overcrowded by tourists. But there are also very nice, less popular places which are much calmer, and it is really a pleasure to visit them. One of such places is the town of Jindřichův Hradec, where the third biggest castle in the Czech Republic is located. The town centre with this castle is really amazing, it is full of romantic little streets, churches, museums and ancient buildings. This small town has a really big historical centre compared to its size, so one can spend the whole day exploring the castle and its surroundings. Recently, I decided to visit the town again; it was a windy day, but it was relatively warm for the winter. There weren't any tourists around, and I really enjoyed my visit. The only disadvantage of this trip was that the castle is closed to visitors during the winter and the museums have short opening hours on weekends.





    Thursday, 20 November

    I participated in the last Libav sprint in Torino. I made a new ASF demuxer for Libav, but during testing, problems with the rtsp and mms protocols appeared. Therefore, my main task during the sprint was to fix these issues.
    It was the second time I was at such a sprint and also my second visit to Torino, and the sprint was even better than I expected. It's really nice to see the people I'm communicating with through the irc channel in person; the thing I like about Libav a lot is its friendly community. But the most important thing for me, as the least experienced person among skilled developers, was naturally their help. My mentors from OPW participated in the sprint and as a result all the issues were fixed and a patch was sent to the ML (https://patches.libav.org/patch/55682/). Also, these personal consultations can be very productive for learning new things, and because I'm not a native English speaker, I realized the few days when I have to speak or even think in English are really helpful for getting better at it.
    The last day of the sprint we had a trip to a really magical place called Sacra di San Michele (http://www.sacradisanmichele.com/).



    I like to visit places like this; in the Czech Republic, where I live, I visit ancient castles. But I think it may be the oldest place I've ever been to, the oldest parts of it were built in the 10th century. I had a feeling the history was breathing on us from the walls. We were lucky with the weather, it was sunny during our visit and the view from the terrace on top of the building was really breathtaking. We saw the peaks of the Alps, covered in snow, that divide this part of Italy from France.

    Saturday, 15 November

    After my challenge with the fused multiply-add instructions I managed to find some time to write a new test utility. It’s written ad hoc for unpaper but it can probably be used for other things too. It’s trivial and stupid but it got the job done.

    What it does is simple: it loads both a golden and a result image file, compares the size and format, and then goes through all the bytes to identify how many differences there are between them. If less than 0.1% of the image surface changed, it considers the test a pass.
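    A minimal sketch of that comparison logic (not the actual test utility, which also checks sizes and formats first) could look like this, assuming the two images have already been loaded into byte buffers of the same size:

    #include <stddef.h>

    /* Hedged sketch: count differing bytes between the golden and result
     * buffers and pass if fewer than 0.1% of them changed. */
    static int images_match(const unsigned char *golden,
                            const unsigned char *result, size_t size)
    {
        size_t diff = 0;
        size_t i;

        for (i = 0; i < size; i++)
            if (golden[i] != result[i])
                diff++;
        /* pass if fewer than 0.1% of the bytes differ */
        return diff * 1000 < size;
    }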

    It’s not a particularly nice system, especially as it requires me to bundle some 180MB of golden files (they compress to just about 10 MB so it’s not a big deal), but it’s a strict improvement compared to what I had before, which is good.

    This change actually allowed me to explore one change that I had abandoned before because it resulted in non-pixel-perfect results. In particular, unpaper now uses single-precision floating point all over, rather than doubles. This is because the slight imperfections caused by this change are not relevant enough to warrant the ever-so-slight loss in performance due to the bigger variables.

    But even up to here, there is very little gain in performance. Sure, some calculations can be faster this way, but we're still using the same set of AVX/FMA instructions. This is unfortunate: unless you start rewriting the algorithms used for searching for edges or rotations, there is no gain to be made by changing the size of the code. When I converted unpaper to use libavcodec, I decided to make the code simple and as stupid as I could make it, as that meant I could have a baseline to improve from, but I'm not sure what the best way to improve it is, now.

    I still have a branch that uses OpenMP for the processing, but since most of the filters applied are dependent on each other it does not work very well. Per-row processing gets slightly better results but they are really minimal as well. I think the most interesting parallel processing low-hanging fruit would be to execute processing in parallel on the two pages after splitting them from a single sheet of paper. Unfortunately, the loops used to do that processing right now are so complicated that I’m not looking forward to touch them for a long while.

    I tried some basic profile-guided optimization execution, just to figure out what needs to be improved, and compared with codiff a proper release and a PGO version trained after the tests. Unfortunately the results are a bit vague and it means I’ll probably have to profile it properly if I want to get data out of it. If you’re curious here is the output when using rbelf-size -D on the unpaper binary when built normally, with profile-guided optimisation, with link-time optimisation, and with both profile-guided and link-time optimisation:

    % rbelf-size -D ../release/unpaper ../release-pgo/unpaper ../release-lto/unpaper ../release-lto-pgo/unpaper
        exec         data       rodata        relro          bss     overhead    allocated   filename
       34951         1396        22284            0        11072         3196        72899   ../release/unpaper
       +5648         +312         -192           +0         +160           -6        +5922   ../release-pgo/unpaper
        -272           +0        -1364           +0         +144          -55        -1547   ../release-lto/unpaper
       +7424         +448        -1596           +0         +304          -61        +6519   ../release-lto-pgo/unpaper
    

    It’s unfortunate that GCC does not give you any diagnostic on what it’s trying to do achieve when doing LTO, it would be interesting to see if you could steer the compiler to produce better code without it as well.

    Anyway, enough with the microptimisations for now. If you want to make unpaper faster, feel free to send me pull requests for it, I’ll be glad to take a look at them!

    Friday, 15 August

    RealAudio files have several possible interleavers. The simplest is “Int0”, which means that the packets are in order. Today, I was contrasting “Int4” and “genr”. They both require rearranging data, in highly similar but not identical ways. “genr” is slightly more complex than “Int4”.

    A typical Int4 pattern, writing to subpacket 0, 1, 2, 3, etc, would read data from subpacket 0, 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, 66, 1, 7, 13, etc, in that order – assuming subpkt_h is 12, as it was in one sample file. It is effectively subpacket_h rows of subpacket_h / 2 columns, counting up by subpacket_h / 2 and wrapping every two rows.

    A typical genr pattern is a little trickier. For subpacket_h = 14, and the same 6 columns per row as above, the pattern to read from looks like 0, 12, 24, 36, 48, 60, 72, 6, 18, 30, 42, 54, 66, 78, 1, etc.

    I spent most of today implementing genr, carefully working with a paper notebook, pencil, Python, and a terse formula from the old implementation:

    case DEINT_ID_GENR:
    for (x = 0; x < w/sps; x++) avio_read(pb, ast->pkt.data+sps*(h*x+((h+1)/2)*(y&1)+(y>>1)), sps);
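    Expanding that formula, together with the Int4 description above, into a tiny stand-alone program reproduces the read orders listed earlier. This is just my own illustration with the parameters from the sample files (6 columns per row, subpacket_h of 12 for Int4 and 14 for genr), not Libav code:

    #include <stdio.h>

    int main(void)
    {
        int cols = 6;           /* w / sps in the sample files */
        int h, x, y, o;

        /* Int4: output subpacket o reads input subpacket (o % h) * cols + o / h,
         * which gives 0, 6, 12, ..., 66, 1, 7, ... for h = 12, cols = 6. */
        h = 12;
        printf("Int4 read order:");
        for (o = 0; o < h * cols; o++)
            printf(" %d", (o % h) * cols + o / h);
        printf("\n");

        /* genr: invert the mapping used by the old implementation, where the
         * input read sequentially as r = y*cols + x is written to subpacket
         * h*x + ((h+1)/2)*(y&1) + (y>>1). */
        h = 14;
        {
            int order[14 * 6];
            printf("genr read order:");
            for (y = 0; y < h; y++)
                for (x = 0; x < cols; x++)
                    order[h * x + ((h + 1) / 2) * (y & 1) + (y >> 1)] = y * cols + x;
            for (o = 0; o < h * cols; o++)
                printf(" %d", order[o]);
            printf("\n");
        }
        return 0;
    }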

    After various debug printfs, a lot of quality time in GDB running commands like x /94x (pkt->data + 14 * 94), a few interestingly garbled bits of audio playback, and a mentor pointing out I have some improvements to make on header parsing, I can play (some) genr files.

    I have also recently implemented SIPR support, and it works in both RA and RM files. RV10 video also largely works.

    Saturday, 09 August

    I've solved the lost packets problem and finally my ASF demuxer started to work right on "ideal samples in a vacuum". So the time for fixing memory leaks had come, and valgrind helped me a lot with this issue. After the memory leaks were solved I had to start testing my demuxer on various samples of ASF multimedia files. As expected, I found many samples my demuxer failed on. The reasons were various - mostly they were my mistakes, misunderstood or overlooked parts of the specs, but I think I also found a case that needed unusual handling the specs don't mention.
    Some of the problems were caused, for example, by:
    • improper subpayload handling - one should be really careful while reading the specs to avoid problems with less common cases, like one subpayload inside a single payload with padding inside the payload itself (while the padding after the payload is 0), but there were other problems too
    • padding handling, which I had to revise for all possible cases
    • the fact that an ASF file has 3 places where the ASF packet size is stated - twice in the header objects and once in the packet itself - and the specs do not say what one should do when they differ, or at least I didn't find it
    • stupid mistakes, like when I just forgot to do something after adding a new block to my code, which were really annoying
    The funny thing was when I fixed my demuxer for one group of samples, another group that worked before started to fail; I fixed the new group and a third group failed. I was very annoyed by this, but many of the mistakes I made were caused by my inexperience, and I think one (at least me) just has to make all of these mistakes to get better.

    Saturday, 02 August

    I’ve resumed working on unpaper since I have been using it more than a couple of times lately and there has been a few things that I wanted to fix.

    What I’ve been working on now is a way to read input files in more formats; I was really aggravated by the fact that unpaper implemented its own loading of a single set of file formats (the PPM “rawbits”); I went on to look into libraries that abstract access to image formats, but I couldn’t find one that would work for me. At the end I settled for libav even though it’s not exactly known for being an image processing library.

    My reasons for choosing libav were mostly found in the fact that, while it does not support all the formats I'd like to have supported in unpaper (PS and PDF come to mind), it does support the formats that unpaper supports now (PNM and company), and I know the developers well enough that I can get bugs and features fixed or implemented as needed.

    I now have a branch that can read files by using libav. It's a very naïve implementation of it though: it reads the image into an AVFrame structure and then converts that into unpaper's own image structure. It does not even free up the AVFrame, mostly because I'd actually like to be able to use AVFrame instead of unpaper's structure. Not only to avoid copying memory when it's not required (libav has functions to do shallow copies of frames and mark them as readable when needed), but also because the frames themselves already contain all the needed information. Furthermore, libav 12 is likely going to include libavscale (or so Luca promised!) so that the on-load conversion can also be offloaded to the library.
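    For reference, this is a minimal, hedged sketch of what that kind of loading looks like with the libav API of the time; it is not unpaper's actual branch, error handling is reduced to the bare minimum, and it assumes the image file exposes exactly one video stream:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* Sketch: decode the first picture of an image file into a refcounted AVFrame. */
    static AVFrame *load_image(const char *filename)
    {
        AVFormatContext *fmt = NULL;
        AVCodecContext *ctx;
        AVCodec *codec;
        AVFrame *frame = NULL;
        AVPacket pkt;
        int got_frame = 0;

        av_register_all();
        if (avformat_open_input(&fmt, filename, NULL, NULL) < 0)
            return NULL;
        if (avformat_find_stream_info(fmt, NULL) < 0 || fmt->nb_streams < 1)
            goto fail;

        ctx = fmt->streams[0]->codec;        /* assume a single image stream */
        ctx->refcounted_frames = 1;          /* keep the data after closing the codec */
        codec = avcodec_find_decoder(ctx->codec_id);
        if (!codec || avcodec_open2(ctx, codec, NULL) < 0)
            goto fail;

        frame = av_frame_alloc();
        av_init_packet(&pkt);
        if (av_read_frame(fmt, &pkt) >= 0) {
            avcodec_decode_video2(ctx, frame, &got_frame, &pkt);
            av_free_packet(&pkt);
        }
        if (!got_frame)
            av_frame_free(&frame);           /* resets frame to NULL */
        avcodec_close(ctx);

    fail:
        avformat_close_input(&fmt);
        return frame;                        /* caller owns the frame, if any */
    }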

    Even with the naïve implementation that I implemented in half an afternoon, unpaper not only supports the same input file as before, but also PNG (24-bit non-alpha colour files are loaded the same way as PPM, 1-bit black and white is inverted compared to PBM, while 8-bit grayscale is actually 16-bit with half of it defining the alpha channel) and very limited TIFF support (1-bit is the same as PNG; 8-bit is paletted so I have not implemented it yet, and as for colour, I found out that libav does not currently support JPEG-compressed TIFF – I’ll work on that if I can – but otherwise it is supported as it’s simply 24bpp RGB).

    What also needs to be done is to write out the file using libav too. While I don’t plan to allow writing files in any random format with unpaper, I wouldn’t mind being able to output through libav. Right now the way this is implemented, the code does explicit conversion back or forth between black/white, grayscale and colour at save time, and this is nothing different than the same conversion that happens at load time, and should rather be part of libavscale when that exists.

    Anyway, if you feel like helping with this project, the code is on GitHub and I’ll try to keep it updated soon.

    Sunday, 27 July

    Finally, all the basic parts of the ASF demuxer seem to work somehow.

     In the last two weeks I fixed various bugs in my code and I hope the packet handling is correct now. The only problem is that a few packets at the end of the Data Object are still lost. Because I wanted a small break from this problem, my mentors allowed me to implement basic seeking first. The ASF demuxer can now read index entries from the Simple Index Object and add them with av_add_index_entry to the AVStream. So when a Simple Index Object is present in an ASF file, my demuxer can seek to the requested time.
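    As an illustration of that last part, feeding entries from the Simple Index Object into libavformat's index could look roughly like this; the structure, field and parameter names below are made up for the sketch and are not the actual demuxer code:

    #include <libavformat/avformat.h>

    /* Hedged sketch: the Simple Index Object stores one packet number per fixed
     * time interval, so the timestamp is reconstructed from the entry number. */
    typedef struct asf_index_entry {
        uint32_t packet_number;     /* packet where the key frame starts */
    } asf_index_entry;

    static void add_index(AVStream *st, const asf_index_entry *entries, int nb_entries,
                          int64_t data_offset, int packet_size, int64_t time_interval)
    {
        for (int i = 0; i < nb_entries; i++) {
            int64_t pos = data_offset + (int64_t)entries[i].packet_number * packet_size;
            int64_t ts  = i * time_interval;   /* in the stream time base */
            av_add_index_entry(st, pos, ts, packet_size, 0, AVINDEX_KEYFRAME);
        }
    }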

    Sunday, 13 July


    The skeleton of the new ASF demuxer has been written, but only audio is demuxed properly for now. The problem is the complicated handling of video frames in the ASF format. I hope I have finally found out how to process packets properly. An ASF packet can contain a single payload, a single payload with subpayloads, multiple payloads, or multiple payloads with subpayloads inside some of them. Every subpayload is always one frame, but a single payload can be a whole frame or just part of it. When an ASF packet contains multiple payloads inside it, each of them can be one frame, but it can be just a fragment of it as well. When one of the multiple payloads contains subpayloads, each subpayload is one frame and it can be processed as an AVPacket.
    For the case of a fragmented frame in an ASF packet I have to store several unfinished frames in ASFPacket structures that I've created for this purpose. There should not be more than one unfinished frame per stream, so I have one ASFPacket in each ASFStream (ASFStream is a structure for storing ASF stream properties). ASFPacket contains a pointer to an AVBufferRef where the unfinished frame is stored. When the frame is finished, I can forward the pointer to the buffer with the data to an AVPacket, set its properties like size, timestamps and others, and finally return the AVPacket.
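    A rough sketch of that idea, with simplified names that are not the actual structures, might look like this:

    #include <libavcodec/avcodec.h>
    #include <libavutil/buffer.h>

    /* Hedged sketch: one partially assembled frame per stream. */
    typedef struct ASFPacketSketch {
        AVBufferRef *buf;   /* frame data gathered so far */
        int64_t dts;        /* timestamp remembered from the first fragment */
        int size_left;      /* bytes still missing before the frame is complete */
    } ASFPacketSketch;

    /* Once the last fragment has been appended, hand the buffer over to the
     * AVPacket without copying. */
    static void finish_frame(AVPacket *pkt, ASFPacketSketch *asf_pkt, int total_size)
    {
        av_init_packet(pkt);
        pkt->buf  = asf_pkt->buf;        /* transfer ownership of the reference */
        pkt->data = pkt->buf->data;
        pkt->size = total_size;
        pkt->dts  = asf_pkt->dts;
        asf_pkt->buf = NULL;
    }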
    I introduced many bugs into code that was working (at least ASF packets were parsed right and audio worked) and now I'm working on fixing all of them.



    I was accepted to OPW, for the May - August 2014 round, with the project "Rewrite the ASF demuxer". The first task from my mentors was to create a wiki page about ASF (Advanced Streaming Format); it was created at https://wiki.libav.org/ASF
    Interesting notes about other containers: http://codecs.multimedia.cx/?p=676.


    The next task from my mentors was to write a simple program which reads an asf file and prints its structure, i.e. the list of asf objects, metadata and codec information. An ASF file consists of so-called ASF Objects. There are 3 top-level objects - the Header Object, Data Object and Index Object. Especially the Header Object can contain many other objects to provide different asf features, for example the Codec List Object for codec information or the Metadata Object for metadata. One can recognise an object by its GUID, which is a 16-byte array that identifies the object type. I was confused by the fact that the GUID number you read from the file does not match the GUID from the specs. For some historical reasons one has to modify the GUIDs from the specs (reorder the bytes) to match the GUID read from the file.
    My program is working now and can list objects, codecs and metadata info, but it ignores Index Objects for now. I hope I'll add support for them soon. Also, I want to print offsets for each object and read the Data Object more deeply.
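    To illustrate the GUID reordering mentioned above, here is a hedged sketch; it assumes the usual explanation that the first three GUID fields are stored little-endian on disk, which is my understanding of why the bytes appear "reordered" compared to the specs:

    #include <stdint.h>
    #include <string.h>

    /* Hedged sketch: convert a GUID from the byte order printed in the specs to
     * the byte order found in the file. The first three fields are byte-swapped,
     * the remaining 8 bytes stay as they are. */
    static void guid_spec_to_file_order(const uint8_t spec[16], uint8_t file[16])
    {
        file[0] = spec[3]; file[1] = spec[2]; file[2] = spec[1]; file[3] = spec[0];
        file[4] = spec[5]; file[5] = spec[4];
        file[6] = spec[7]; file[7] = spec[6];
        memcpy(file + 8, spec + 8, 8);
    }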

    Thursday, 03 July

    Today, I learned how to use framecrc as a debug tool. Many Libav tests use framecrc to compare expected and actual decoding. While rewriting existing code, the output from the old and new versions of the code on the same sample can be checked; this makes a lot of mistakes clear quickly, including ones that can be quite difficult to debug otherwise.

    Checking framecrcs interactively is straightforward: ./avconv -i somefile -c:a copy -f framecrc -. The -c:a copy specifies that the original, rather than decoded, packet should be used. The - at the end makes the output go to stdout, rather than a named file.

    The output has several columns, for the stream index, dts, pts, duration, packet size, and crc:

    0, 0, 0, 192, 2304, 0xbf0a6b45
    0, 192, 192, 192, 2304, 0xdd016b78
    0, 384, 384, 192, 2304, 0x18da71d6
    0, 576, 576, 192, 2304, 0xcf5a6a07
    0, 768, 768, 192, 2304, 0x3a84620a

    It is also unusually simple to find out what the fields are, as libavformat/framecrcenc.c spells it out quite clearly:

    static int framecrc_write_packet(struct AVFormatContext *s, AVPacket *pkt)
    {
        uint32_t crc = av_adler32_update(0, pkt->data, pkt->size);
        char buf[256];

        snprintf(buf, sizeof(buf), "%d, %10"PRId64", %10"PRId64", %8d, %8d, 0x%08"PRIx32"\n",
                 pkt->stream_index, pkt->dts, pkt->pts, pkt->duration, pkt->size, crc);
        avio_write(s->pb, buf, strlen(buf));
        return 0;
    }

    Keiler, one of my Libav mentors, patiently explained the above; I hope documenting it helps other people who are starting with Libav development.

    Thursday, 12 June

    Most recently, I have been adding documentation to Libav. Today, my work included writing a demuxer howto. In the last couple of weeks, I have also reimplemented RealAudio 1.0 support (2.0 is in progress), and learned more about Coccinelle and undefined behavior in C. Blog posts on these topics are pending.

    Tuesday, 20 May

    My first patch for undefined behavior eliminates left shifts of negative numbers, replacing a << b (where a can be negative) with a * (1 << b). This change fixes bug686, at least for fate-idct8x8 and libavcodec/dct-test -i (compiled with ubsan and fno-sanitize-recover). Due to Libav policy, the next step is to benchmark the change. I was also asked to write a simple benchmarking HowTo for the Libav wiki.
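    The transformation itself is mechanical; as an illustration (not the exact patch), with a possibly negative and b a non-negative shift amount:

    /* Illustration only, not the exact patch. */
    r = a << b;           /* before: undefined behavior in C when a is negative */
    r = a * (1 << b);     /* after: same values in practice, but well defined   */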

    First, I installed perf: sudo aptitude install linux-tools-generic
    I made two build directories, and built the code with defined behavior in one, and the code with undefined behavior in the other (with ../configure && make -j8 && make fate). Then, in each directory, I ran:

    perf stat --repeat 150 ./libavcodec/dct-test -i > /dev/null

    The results were somewhat more stable than with --repeat 30, but it still looks much more like noise than a meaningful result. I ran the command with --repeat 30 for both before the recorded 150-run measurement, so both would start on an equal footing. With defined behavior, the results were "0.121670022 seconds time elapsed ( +- 0.11% )"; with undefined behavior, "0.123038640 seconds time elapsed ( +- 0.15% )". The best of a further three runs had the opposite result, shown below:

    % cat undef.150.best

    perf stat --repeat 150 ./libavcodec/dct-test -i > /dev/null

    Performance counter stats for './libavcodec/dct-test -i' (150 runs):

    120.427535 task-clock (msec) # 0.997 CPUs utilized ( +- 0.11% )
    21 context-switches # 0.178 K/sec ( +- 1.88% )
    0 cpu-migrations # 0.000 K/sec ( +-100.00% )
    226 page-faults # 0.002 M/sec ( +- 0.01% )
    455’393’772 cycles # 3.781 GHz ( +- 0.05% )
    <not supported> stalled-cycles-frontend
    <not supported> stalled-cycles-backend
    1’306’169’698 instructions # 2.87 insns per cycle ( +- 0.00% )
    89’674’090 branches # 744.631 M/sec ( +- 0.00% )
    1’144’351 branch-misses # 1.28% of all branches ( +- 0.18% )

    0.120741498 seconds time elapsed

    % cat def.150.best

    Performance counter stats for './libavcodec/dct-test -i' (150 runs):

    120.838976 task-clock (msec) # 0.997 CPUs utilized ( +- 0.11% )
    21 context-switches # 0.172 K/sec ( +- 1.98% )
    0 cpu-migrations # 0.000 K/sec
    226 page-faults # 0.002 M/sec ( +- 0.01% )
    457’077’626 cycles # 3.783 GHz ( +- 0.08% )
    <not supported> stalled-cycles-frontend
    <not supported> stalled-cycles-backend
    1’306’321’521 instructions # 2.86 insns per cycle ( +- 0.00% )
    89’673’780 branches # 742.093 M/sec ( +- 0.00% )
    1’148’393 branch-misses # 1.28% of all branches ( +- 0.11% )

    0.121162660 seconds time elapsed ( +- 0.11% )

    I also compared the disassembled code from jrevdct.o, before and after the changes to have defined behavior (using gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2 on x86_64).

    In the build directory for the code with defined behavior:
    objdump -d libavcodec/jrevdct.o > def.dis
    sed -e 's/^.*://' def.dis > noline.def.dis

    In the build directory for the code with undefined behavior:
    objdump -d libavcodec/jrevdct.o > undef.dis
    sed -e 's/^.*://' undef.dis > noline.undef.dis

    Leaving aside differences in jump locations (despite the fact that they can impact performance), there are two differences:

    diff -u build_benchmark_undef/noline.undef.dis build_benchmark_def/noline.def.dis

    -       0f bf 50 f0             movswl -0x10(%rax),%edx
    +       0f b7 58 f0             movzwl -0x10(%rax),%ebx

    It’s switched to using a zero-extension rather than sign-extension in one place.

    -       74 1c                   je     40 <ff_j_rev_dct+0x40>
    -       c1 e2 02                shl    $0x2,%edx
    -       0f bf d2                movswl %dx,%edx
    -       89 d1                   mov    %edx,%ecx
    -       0f b7 d2                movzwl %dx,%edx
    -       c1 e1 10                shl    $0x10,%ecx
    -       09 d1                   or     %edx,%ecx
    -       89 48 f0                mov    %ecx,-0x10(%rax)
    -       89 48 f4                mov    %ecx,-0xc(%rax)
    -       89 48 f8                mov    %ecx,-0x8(%rax)
    -       89 48 fc                mov    %ecx,-0x4(%rax)
    +       74 19                   je     3d <ff_j_rev_dct+0x3d>
    +       c1 e3 02                shl    $0x2,%ebx
    +       89 da                   mov    %ebx,%edx
    +       0f b7 db                movzwl %bx,%ebx
    +       c1 e2 10                shl    $0x10,%edx
    +       09 da                   or     %ebx,%edx
    +       89 50 f0                mov    %edx,-0x10(%rax)
    +       89 50 f4                mov    %edx,-0xc(%rax)
    +       89 50 f8                mov    %edx,-0x8(%rax)
    +       89 50 fc                mov    %edx,-0x4(%rax)

    Leaving aside differences in register use:

    -       0f bf d2                movswl %dx,%edx
    There is one extra movswl instruction in the version with undefined behavior, at least with the particular version of the particular compiler for the particular architecture checked.

    This is an example of a null result while benchmarking; neither version performs better, although any given benchmark has one or the other come out ahead, generally by less than the variance within the run. If this were a suggested performance change, it would not make sense to apply it. However, the point of this change was correctness; a performance increase is not expected, and the lack of a performance penalty is a bonus.

    Monday, 19 May

    One of my fantastic OPW mentors prepared a “Welcome task package”, of self-contained, approachable, useful tasks that can be done while getting used to the code, and with a much smaller scope than the core objective. This is awesome. To any mentors reading this: consider making a welcome package!

    Step one of it is to use ubsan with gdb. This turned out to be somewhat intricate, so I have decided to supplement the wiki’s documentation with a step-by-step guide for Ubuntu 14.04.

    1) Install clang-3.5 (sudo aptitude install clang-3.5), as Ubuntu 14.04 comes with gcc 4.8, which does not support -fsanitize=undefined.

    2) Under libav, mkdir build_ubsan && cd build_ubsan && ../configure --toolchain=clang-usan --extra-cflags=-fno-sanitize-recover (alternatively, --cc=clang --extra-cflags=-fsanitize=undefined --extra-ldflags=-fsanitize=undefined can be used instead of --toolchain=clang-usan).

    3) make -j8 && make fate

    4) Watch where the tests die (they only die if --extra-cflags=-fno-sanitize-recover is used). For me, they died on TEST idct8x8. Running make V=1 fate and asking my mentors pointed me towards libavcodec/dct-test -i, which is dying on jrevdct.c:310:47: with “runtime error: left shift of negative value -14”. If you really want to err on the side of caution, make a second build dir, and ./configure --cc=clang && make -j8 && make fate in it, making sure it does not fail… this confirms that the problem is related to configuring with --toolchain=clang-usan (and, it turns out, with -fsanitize=undefined).

    5) It’s time to use the information my mentor pointed out on the wiki about ubsan at https://wiki.libav.org/Security/Tools  – specifically, the information about useful gdb breakpoints. I put a modified version of the b_u definitions into ~/.gdbinit. The wiki has been updated now, but was originally missing a few functions, including one that turns out to be relevant: __ubsan_handle_shift_out_of_bounds
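
    Such a b_u definition can look roughly like this (a sketch: the handlers besides the shift one are the usual ubsan runtime entry points, not necessarily the wiki's exact list):

    define b_u
        break __ubsan_handle_add_overflow
        break __ubsan_handle_sub_overflow
        break __ubsan_handle_mul_overflow
        break __ubsan_handle_divrem_overflow
        break __ubsan_handle_negate_overflow
        break __ubsan_handle_load_invalid_value
        break __ubsan_handle_out_of_bounds
        break __ubsan_handle_shift_out_of_bounds
    end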

    6) Run gdb ./libavcodec/dct-test, then at the gdb prompt, set args -i to set the arguments dct-test was being run with, and then b_u to load the ubsan breakpoints defined above. Then start the program: type run at the gdb prompt.

    7) It turns out that a problem can be found, and the program stops running. Get a backtrace with bt.


    680 in __ubsan_handle_shift_out_of_bounds ()
    #1  0x000000000048ac96 in __ubsan_handle_shift_out_of_bounds_abort ()
    #2  0x000000000042c074 in row_fdct_8 (data=<optimized out>) at /home/me/opw/libav/libavcodec/jfdctint_template.c:219
    #3  ff_jpeg_fdct_islow_8 (data=<optimized out>) at /home/me/opw/libav/libavcodec/jfdctint_template.c:273
    #4  0x0000000000425c46 in dct_error (dct=<optimized out>, test=<optimized out>, is_idct=<optimized out>, speed=<optimized out>) at /home/me/opw/libav/libavcodec/dct-test.c:246
    #5  main (argc=<optimized out>, argv=<optimized out>) at /home/me/opw/libav/libavcodec/dct-test.c:522

    It would be nice to see a bit more detail, so I wanted to compile the project so that less would be optimized out, and eventually settled on -O1 because compiling with ubsan and without optimizations failed (which I reported as bug 683). This led to a slightly better backtrace:


    #0  0x0000000000491a70 in __ubsan_handle_shift_out_of_bounds ()
    #1  0x0000000000492086 in __ubsan_handle_shift_out_of_bounds_abort ()
    #2  0x0000000000434dfb in ff_j_rev_dct (data=<optimized out>) at /home/me/opw/libav/libavcodec/jrevdct.c:275
    #3  0x00000000004258eb in dct_error (dct=0x4962b0 <idct_tab+64>, test=1, is_idct=1, speed=0) at /home/me/opw/libav/libavcodec/dct-test.c:246
    #4  0x00000000004251cc in main (argc=<optimized out>, argv=<optimized out>) at /home/me/opw/libav/libavcodec/dct-test.c:522

    It is possible to work around the problem by modifying the source code rather than the compiler flags: FFmpeg did so within hours of the bug report – the commit is at http://git.videolan.org/?p=ffmpeg.git;a=commit;h=bebce653e5601ceafa004db0eb6b2c7d4d16f0c0! Both FFmpeg and Libav have also merged my patch to work around the problem (FFmpeg patch, Libav patch). The workaround of using -O1 was suggested by one of my mentors, lu_zero; --disable-optimizations does not actually disable all optimizations (in practice, it leaves in ones necessary for compilation), and it does not touch the -O1 that --toolchain=clang-usan now sets.
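
    For the curious, the general shape of such a source-level fix is simple; the following is a made-up illustration, not the actual jrevdct.c change:

    /* Left-shifting a negative value is undefined behaviour in C, so the
     * scaling can be written as a multiplication (or as a shift of an unsigned
     * copy); a decent compiler emits the same shift instruction either way. */
    #include <stdint.h>

    #define PASS1_BITS 2                 /* illustrative shift amount */

    static int16_t scale_ub(int16_t dcval)
    {
        return (int16_t)(dcval << PASS1_BITS);       /* undefined if dcval < 0 */
    }

    static int16_t scale_ok(int16_t dcval)
    {
        return (int16_t)(dcval * (1 << PASS1_BITS)); /* same value, defined */
    }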

    Wanting a better backtrace leads to the next post: a detailed guide to narrowing down a bug in the C compiler, Clang. Yes, I know, the problem is never a bug in the C compiler – but this time, it was.

    Thursday, 15 May

    What’s the fun of only running code on platforms you physically have? Portability is important, and Libav actively targets several platforms. It can be useful to be able to try out the code, even if the hardware is totally unavailable.

    Here is how to run Libav’s tests under aarch64, on x86_64 hardware and Ubuntu 14.04. This guide is provided in the hope that it saves someone else 20 hours or more: there is a lot of once-excellent information out there that has become misleading, because a lot of progress has been made in aarch64 support. I have tried three approaches: building under QEMU user emulation, building under QEMU system emulation, and cross-compiling with Linaro’s toolchain. Building with a cross-compiler is the fastest option. Building under user emulation is about ten times slower. Building under system emulation is about a hundred times slower. There is actually a fourth option, using the ARM Foundation Model, but I have not tried it. Running under QEMU user emulation is the only approach I managed to make entirely work.

    For all three approaches, you will want a rootfs; I used Ubuntu Core. You can download Ubuntu Core for aarch64 (a minimal rootfs; see https://wiki.ubuntu.com/Core to learn more),  and untar it (as root) into a new directory. Then, set an environment variable that the rest of this guide/set of notes uses frequently, changing the path to match your system:

    export a64root=/path/to/your/aarch64/rootdir

    Approach 1 – build under QEMU’s user emulation.

    Step 1) Set up QEMU. The days when using SUSE branches were necessary are over, but it still needs to be statically linked, and not all QEMU packages are. Ubuntu has a static QEMU:

    sudo aptitude install qemu-user-static

    This package also sets up binfmt for you. You can delete broken or stale binfmt information by running:
    echo -1 > /proc/sys/fs/binfmt_misc/archnamehere – this can be useful, especially if you have previously installed QEMU by hand.

    Step 2) Copy your QEMU binary into the chroot, as root, with:

    cp `which qemu-aarch64-static` $a64root/usr/bin/

    Step 3) As root, set up the aarch64 image so it can do DNS resolution, so you can freely use apt-get:
    echo 'nameserver 8.8.8.8' > $a64root/etc/resolv.conf

    Step 4) Chroot into your new system. Run chroot $a64root /bin/bash as root.

    At this point, you should be able to run an aarch64 version of ls, and confirm with file /bin/ls that it is an aarch64 binary.

    Now you have a working, emulated, minimal aarch64 system.

    On x86, you would run aptitude build-dep libav, but there is no such package for aarch64 yet, so outside of the chroot, on the normal system, I installed apt-rdepends and ran:
    apt-rdepends --build-depends --follow=DEPENDS libav

    With version information stripped out, the following packages are considered dependencies:
    debhelper frei0r-plugins-dev libasound2-dev libbz2-dev libcdio-cdda-dev libcdio-dev libcdio-paranoia-dev libdc1394-22-dev libfreetype6-dev  libgnutls-dev libgsm1-dev libjack-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libopenjpeg-dev libopus-dev libpulse-dev libraw1394-dev librtmp-dev libschroedinger-dev libsdl1.2-dev libspeex-dev libtheora-dev libtiff-dev libtiff5-dev libva-dev libvdpau-dev libvo-aacenc-dev libvo-amrwbenc-dev libvorbis-dev libvpx-dev libx11-dev libx264-dev libxext-dev libxfixes-dev libxvidcore-dev libxvmc-dev texi2html yasm zlib1g-dev doxygen

    Many of the libraries do not have current aarch64 Ubuntu packages, and neither does frei0r-plugins-dev, but running aptitude install on the above list installs a lot of useful things – including build-essential. The full list is in the command below; the missing packages are non-essential.

    Step 5) Set it up: apt-get install aptitude

    aptitude install git debhelper frei0r-plugins-dev libasound2-dev libbz2-dev libcdio-cdda-dev libcdio-dev libcdio-paranoia-dev libdc1394-22-dev libfreetype6-dev  libgnutls-dev libgsm1-dev libjack-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libopenjpeg-dev libopus-dev libpulse-dev libraw1394-dev librtmp-dev libschroedinger-dev libsdl1.2-dev libspeex-dev libtheora-dev libtiff-dev libtiff5-dev libva-dev libvdpau-dev libvo-aacenc-dev libvo-amrwbenc-dev libvorbis-dev libvpx-dev libx11-dev libx264-dev libxext-dev libxfixes-dev libxvidcore-dev libxvmc-dev texi2html yasm zlib1g-dev doxygen

    Now it is time to actually build libav.

    Step 6) Create a user within your chroot: useradd -m auser, and switch to running as that user: sudo -u auser bash, and type cd to go to the home directory.

    Step 7) Run git clone git://git.libav.org/libav.git, then ./configure --disable-pthreads && make -j8 (change the 8 to approximately the number of CPU cores you have).
    On my hardware, this takes 10-11 minutes, and make fate takes about 16 minutes. Disabling pthreads is essential, as qemu-user does not handle threads well, and the tests hang randomly without it.


    Approach 2: cross-compile (warning: I do not have the tests working with this approach).

    1) Start by getting an aarch64 compiler. A good place to get one is http://releases.linaro.org/latest/components/toolchain/binaries/; I am using http://releases.linaro.org/latest/components/toolchain/binaries/gcc-linaro-aarch64-linux-gnu-4.8-2014.04_linux.tar.xz . Untar it, and add it to your path:

    export PATH=$PATH:/path/to/your/linaro/tools/bin

    2) Make the cross-compiler work. Run aptitude install lsb lib32stdc++6. Without this, invoking the compiler will say “No such file or directory”. See http://lists.linaro.org/pipermail/linaro-toolchain/2012-January/002016.html.

    3) Under the libav directory (run git clone git://git.libav.org/libav.git if you do not have one), type mkdir a64crossbuild; cd a64crossbuild. Make sure the libav directory is somewhere under $a64root (it should simplify running the tests, later).

    4) ./configure --arch=aarch64 --cpu=generic --cross-prefix=aarch64-linux-gnu- --cc=aarch64-linux-gnu-gcc --target-os=linux --sysroot=$a64root --target-exec="qemu-aarch64-static -L $a64root" --disable-pthreads

    This is a minimal variant of the configuration used by Jannau, a developer who has recently done a lot of libav aarch64 work.

    5) Run make -j8. On my hardware, it takes just under a minute.

    6) Run make fate. Unfortunately, both versions of QEMU I tried hung on wait4 at this point (in fft-test, fate-fft-4), and used an extra couple of hundred megabytes of RAM per second until I stopped QEMU, even if I asked it to wait for a remote GDB. For anyone else trying this, https://lists.libav.org/pipermail/libav-devel/2014-May/059584.html has several useful tips for getting the tests to run after cross-compilation.


    Approach 3: Use QEMU’s system emulation. In theory, this should allow you to use pthreads; in practice, the tests hung for me. The following May 9th post describes what to do: http://www.bennee.com/~alex/blog/2014/05/09/running-linux-in-qemus-aarch64-system-emulation-mode/. In short: git clone git://git.qemu.org/qemu.git qemu.git && cd qemu.git && ./configure --target-list=aarch64-softmmu && make, then

    ./aarch64-softmmu/qemu-system-aarch64 -machine virt -cpu cortex-a57 -machine type=virt -nographic -smp 1 -m 2048 -kernel aarch64-linux-3.15rc2-buildroot.img  --append "console=ttyAMA0" -fsdev local,id=r,path=$a64root,security_model=none -device virtio-9p-device,fsdev=r,mount_tag=r

    Then, under the buildroot system, log in as root (no password), and type mkdir /mnt/core && mount -t 9p -o trans=virtio r /mnt/core. At this point, you can run chroot /mnt/core /bin/bash, and follow the approach 1 instructions from useradd onwards, except that ./configure without --disable-pthreads should theoretically work. On my system, ./configure takes a bit over 5 minutes with this approach. Running make is quite slow; time make took 113 minutes. Do not use -j – you are limited to a single core, so -j would slow compilation down slightly. However, make fate consistently hung on acodec-pcm-alaw, and I have not yet figured out why.


     

    Things not to do:

    • Use a rootfs from a year ago; I have yet to try one that is not broken, and some come with fun bonuses like infinite file system loops. These cost me well over a dozen hours.
    • Compile SUSE’s QEMU; qemu-system is bleeding-edge enough that you need to compile it from upstream, but SUSE’s patches have long been merged into the normal QEMU upstream. Unless you want qemu-system, you do not need to compile QEMU at all under Ubuntu 14.04.
    • Leave the environment variables in this tutorial unset in a new shell and wonder why things do not work.

     

    Wednesday, 23 April

    Applying to OPW requires an initial contribution. The Libav IRC channel suggested porting the asettb filter from FFmpeg, so I did (version 5 of the patch was merged upstream, in two parts: a rename patch and a content patch; the FFmpeg author was credited as author for the latter, while I did a signed-off-by). I also contributed a 3000+ line documentation patch, standardizing the libavfilter documentation and removing numerous English errors, and triaged a few bugs, git bisecting the one that was reproducible.

    Sunday, 13 April

    And how it nearly ruined another video coding standard.

    Everyone knows that interlacing was a trick in the '80s for pseudo motion compensation with analogue video. This more or less worked because it mimicked how television worked back then. The technique was preserved when flat panels for PCs and TVs were introduced, for a mix of backward compatibility and technical limitations, and interlacing support made its way into video coding standards such as MPEG-2 and H.264.

    However, as with black and white, TACS and Gopher, old technology has to be replaced with modern and efficient technology, as a trade-off between users' interests and technology providers' market prospects. In case you are not familiar with it, interlacing is a mess to support: it makes decoding slower and heavily degrades quality. People who say that interlacing saves bandwidth do not know much about video coding, and bad marketing claiming that higher resolution is better than higher framerate has an effect too.

    So, when ITU and then MPEG set out to establish the mandates for a new video standard capable of superseding H264, it was decided that interlacing was old enough, did more harm than good and it was time for retirement: HEVC was going to be the first video codec to officially deprecate interlacing.

    Things went pretty swell during its development, until a few months before the completion of the standard. A group of US companies complained that the proposed tools were not sufficient (a set of SEI messages and treating fields like progressive frames) and heavily protested with both standardisation bodies. ITU firmly rejected the idea (with the video group chair threatening to step down) while MPEG set out to understand the needs of the industry and see if there was anything that could be done.

    An ad-hoc group was established to see if there was any evidence that interlaced coding tools would have improved the situation. Things looked really shady; the Requirements group even mentioned that it was the first time an AhG was established to look for evidence, instead of an AhG being established because there was evidence. Several liaisons from EBU and other DVB members tried to point out this absurdity while the threat of adding interlacing back into HEVC became real. Luckily the first version of the specification got published in the meantime, so this decision didn't slow down the standardisation process.

    Why so much love towards interlacing? Well, in the "rebellious" group's defence, it is true that interlaced content in HEVC is less performant than in H264; however it is also true that the same content, deinterlaced and coded in HEVC, outperforms H264 in any configuration. Truth is that mass-marketed deinterlacers (commonly found in televisions, for example) bring a lot of royalty income, so it is normal that companies with vested interests would prefer to have interlacing in a soon-to-be-popular video standard like HEVC. Also, in markets like the US, where the network operator (which has control over the encoding but not over the video source) might differ from the content provider, it could be politically difficult to act as a carrier only if you have to deinterlace the video.

    However these problems are actually not enough to force every encoder, decoder and analyser to support a deprecated technology like interlacing. Technical problems can be solved with good deinterlacers at the top of the distribution chain, while political ones can be solved by amending contracts. Plus, having progressive-only video will definitely improve quality and let the industry concentrate on other delicate subjects, like bit depth, both of which go in favour of users' interests.

    At the last MPEG meeting, the "rebellious" group, which had been working on reintroducing interlacing for a year, provided no real evidence that interlaced coding tools would improve HEVC at all. The only sensible solution was to disband the group over this wasted effort and support progressive video only, which is luckily what happened. So now both ITU and MPEG support progressive video only, and that has finally settled it.

    Interlacing is dead, long live progressive.

    Written by Vittorio Giovara (projectsymphony@gmail.com)
    Published under a CC-BY-SA 3.0 license.

    Tuesday, 25 March

    I am very glad to announce that Libav 10 has been released!

    This has a bunch of features that I contributed to, in particular regarding stereoscopic video and interlaced filtering, but more importantly this release contains the work that an awesome group of people has carried out for a whole year. This is the magic of open source!

    I joined the group more or less one year ago, with some patches regarding an obscure part of the H.264 specification which I later reimplemented for HEVC; then I wrote a few filters I needed, then I designed an API, and then, wow! A whole year passed without me noticing, and I am still around, sending patches to the same group of people who welcomed someone who had problems with shifting values (sad but true story)!

    I met the team both at VDD and FOSDEM and they have been the most exciting conferences I have ever been to (and I have been to a lot of them). I couldn't believe I was with the dev team of my favourite multimedia opensource projects, which I have been following since I was a kid! Until a year ago, I just saw the names in the commits and the blog posts from both the VideoLAN and Libav projects, and I kept thinking "Oh, wouldn't it be so cool to be one of them".

    The answer is yes, it definitely would, and it's something that can happen if one is really committed to it! The Libav Info page states "Being a committer is a duty, not a privilege", but it sure does feel like one.

    Thanks for this exciting year guys, I look forward to the next ones.

    Monday, 24 March

    ...using latest modern tools!

    X264 and VLC are two of the most awesomest opensource software projects you can find on-line and of course they pose no problem when you compile them in a Unix environment. Too bad that sometimes you need to think of Windowze as well, so we need a way to crosscompile that software: in this blogpost, I'll describe how to achieve that, using modern tools on an Ubuntu 12.04 installation.

    [0] Sources
    It goes without saying that without the following guides, I'd have had a much harder time!
    http://alex.jurkiewi.cz/blog/2010/cross-compiling-x264-for-win32-on-ubuntu-linux
    https://bbs.archlinux.org/viewtopic.php?id=138128
    http://wiki.videolan.org/Win32Compile
    http://forum.videolan.org/viewtopic.php?f=32&t=101489
    So a big thanks to all the original authors!

    [1] Introduction
    When you crosscompile you just use the same tools and toolchains that you are used to, gcc, ld and so on, but configured (and compiled) so that they produce executable code for a different platform. This platform can vary both in software and in hardware and it is usually identified by a triplet: the processor architecture, the ABI and the operating system.

    What we are going to use here is i686-w64-mingw32, which identifies any x86 cpu since the Pentium III, the w64 ABI used on modern Windows NT systems (if I'm not wrong), and mingw32 as the target system, that is the Windows gcc variant.



    [2] Prerequisites
    Note that the name of the packages might be slightly different according to your distribution. We are going to need a quite recent mingw-runtime for VLC (>=3.00) which has not yet landed on Ubuntu, so we'll take it from our Debian cousins.

    Execute these commands

    $ wget http://ftp.jp.debian.org/debian/pool/main/m/mingw-w64/mingw-w64-dev_3.0~svn4933-1_all.deb
    $ sudo dpkg -i mingw-w64-dev_3.0~svn4933-1_all.deb


    and then install stock dependencies


    $ sudo apt-get install gcc-mingw-w64 g++-mingw-w64
    $ sudo apt-get install pkg-config yasm subversion cvs git-core

    [3] x264 and libav 
    x264 has very few dependencies, just pthreads and zlib, but it reaches its full potential when all of them are satisfied (encapsulation, avisynth support and so on).

    Loosely following Alex Jurkiewicz's work, we create a user-writable folder and then prepare a script that sets some useful variables every time. Save the script below as ~/win32-cross/mingw and make it executable (chmod +x ~/win32-cross/mingw); the later steps invoke it as ../../mingw.


    $ mkdir -p ~/win32-cross/{src,lib,include,share,bin}
    #!/bin/sh

    TRIPLET=i686-w64-mingw32

    export CC=$TRIPLET-gcc
    export CXX=$TRIPLET-g++
    export CPP=$TRIPLET-cpp
    export AR=$TRIPLET-ar
    export RANLIB=$TRIPLET-ranlib
    export ADD2LINE=$TRIPLET-addr2line
    export AS=$TRIPLET-as
    export LD=$TRIPLET-ld
    export NM=$TRIPLET-nm
    export STRIP=$TRIPLET-strip

    export PATH="/usr/i586-mingw32msvc/bin:$PATH"
    export PKG_CONFIG_PATH="$HOME/win32-cross/lib/pkgconfig/"

    export CFLAGS="-static -static-libgcc -static-libstdc++ -I$HOME/win32-cross/include -L$HOME/win32-cross/lib -I/usr/$TRIPLET/include -L/usr/$TRIPLET/lib"
    export CXXFLAGS="$CFLAGS"

    exec "$@"
    Please note the use of the CFLAGS variable: without all the static parameters, the executable will dynamically link the gcc runtime, so you'll need to bundle the equivalent dll. I prefer to have one single exe, so everything goes static, but I'm not really sure which flag is actually needed. If you have any idea, please drop me a line.
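
    One way to check how well the static flags worked (not part of the original guides): ask the mingw objdump which DLLs the resulting binary still imports; only Windows system DLLs such as kernel32 and msvcrt should remain.

    # anything like libgcc_s_*.dll or libstdc++-6.dll in this list means
    # a -static* flag is missing somewhere
    $ i686-w64-mingw32-objdump -p x264.exe | grep 'DLL Name'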

    Anyway, let's compile the latest release of pthreads-win32 (2.9.1 as of this writing)

    $ cd ~/win32-cross/src
    $ wget -qO - ftp://sourceware.org/pub/pthreads-win32/pthreads-w32-2-9-1-release.tar.gz | tar xzvf -
    $ cd pthreads-w32-2-9-1-release
    $ make GC-static CROSS=i686-w64-mingw32-
    $ cp libpthreadGC2.a ../../lib
    $ cp *.h ../../include

    and zlib (1.2.7) - we need to remove the references to the libc library (which is implied anyway) otherwise we will get a linkage failure

    $ cd ~/win32-cross/src
    $ wget -qO - http://zlib.net/zlib-1.2.7.tar.gz | tar xzvf -
    $ cd zlib-1.2.7
    $ ../../mingw ./configure
    $ sed -i"" -e 's/-lc//' Makefile
    $ make
    $ DESTDIR=../.. make install prefix=

    Now it's libav's turn, so that x264 can use different input chroma formats and other stuff. If you need the libav executables, you might want to change the configure line so that it suits you


    $ cd ~/win32-cross/src
    $ git clone git://git.libav.org/libav.git
    $ cd libav
    $ ./configure \
    --target-os=mingw32 --cross-prefix=i686-w64-mingw32- --arch=x86 --prefix=../.. \
    --enable-memalign-hack --enable-gpl --enable-avisynth --enable-runtime-cpudetect \
    --disable-encoders --disable-muxers --disable-network --disable-devices
    $ make
    $ make install

    and the nice tools that give x264 more input and output options


    $ cd ~/win32-cross/src
    $ svn checkout http://ffmpegsource.googlecode.com/svn/trunk/ ffms
    $ cd ffms
    $ ../../mingw ./configure --host=mingw32 --with-zlib=../.. --prefix=$HOME/win32-cross
    $ ../../mingw make
    $ make install


    Then GPAC, which x264 uses for MP4 output:

    $ cd ~/win32-cross/src
    # Create a CVS auth file on your machine
    $ cvs -d:pserver:anonymous@gpac.cvs.sourceforge.net:/cvsroot/gpac login
    $ cvs -z3 -d:pserver:anonymous@gpac.cvs.sourceforge.net:/cvsroot/gpac co -P gpac
    $ cd gpac
    $ chmod +rwx configure src/Makefile
    # Hardcode cross-prefix
    $ sed -i'' -e 's/cross_prefix=""/cross_prefix="i686-w64-mingw32-"/' configure
    $ ../../mingw ./configure --static --use-js=no --use-ft=no --use-jpeg=no \
          --use-png=no --use-faad=no --use-mad=no --use-xvid=no --use-ffmpeg=no \
          --use-ogg=no --use-vorbis=no --use-theora=no --use-openjpeg=no \
          --disable-ssl --disable-opengl --disable-wx --disable-oss-audio \
          --disable-x11-shm --disable-x11-xv --disable-fragments --use-a52=no \
          --disable-xmlrpc --disable-dvb --disable-alsa --static-mp4box \
          --extra-cflags="-I$HOME/win32-cross/include -I/usr/i686-w64-mingw32/include" \
          --extra-ldflags="-L$HOME/win32-cross/lib -L/usr/i686-w64-mingw32/lib"
    # Fix pthread lib name
    $ sed -i"" -e 's/pthread/pthreadGC2/' config.mak
    # Add extra libs that are required but not included
    $ sed -i"" -e 's/-lpthreadGC2/-lpthreadGC2 -lwinmm -lwsock32 -lopengl32 -lglu32/' config.mak
    $ make
    # Make will fail a few commands after building libgpac_static.a
    # (i586-mingw32msvc-ar cr ../bin/gcc/libgpac_static.a ...).
    # That's fine, we just need libgpac_static.a 
    $ i686-w64-mingw32-ranlib bin/gcc/libgpac_static.a
    $ cp bin/gcc/libgpac_static.a ../../lib/
    $ cp -r include/gpac ../../include/

     
    Finally we can compile x264 at full power! The configure script will provide a list of what features have been activated, make sure everything you need is there!

    $ cd ~/win32-cross/src
    $ git clone git://git.videolan.org/x264.git
    $ cd x264
    $ ./configure --cross-prefix=i686-w64-mingw32- --host=i686-w64-mingw32 \
          --extra-cflags="-static -static-libgcc -static-libstdc++ -I$HOME/win32-cross/include" \
          --extra-ldflags="-static -static-libgcc -static-libstdc++ -L$HOME/win32-cross/lib" \
          --enable-win32thread
    $ make

    And you're done! Take that x264.exe file and use it wherever you want!
    Most of the work here has been outlined by Alex Jurkiewicz in this guide, so check out his blog for more nice guides!


    [4] VideoLAN
    On the other hand, VLC has a LOT of dependencies, but thankfully it also has a nice way to get them working quickly. If you read the wiki guide, you'll notice that it uses i586-mingw32msvc everywhere, but you should definitely avoid that! That prefix points to a very old toolchain, under which VLC will fail to compile! The newer toolchain also produces much better code (in one case the difference for x264 was 46MB against 38MB)!

    So let's update every script to the more modern version i686-w64-mingw32! As usual, first of all get the sources


    $ git clone git://git.videolan.org/vlc.git vlc
    $ cd vlc 
    And let's get the dependencies through the contrib scripts; qt4 needs to be compiled by hand as the version in the Ubuntu repositories doesn't cope well with the rest of the process. I also had to remove some of the files because they were of the wrong architecture (mileage may vary here).

    $ mkdir -p contrib/win32
    $ cd contrib/win32
    $ ../bootstrap --host=i686-w64-mingw32
    $ make prebuilt
    $ make .qt4
    $ rm ../i686-w64-mingw32/bin/{moc,uic,rcc}
    $ cd -

    We now return to the main sources folder and launch the bootstrap and configure process; you need some standard automake/libtool dependencies for this.


    $ ./bootstrap
    $ mkdir win32 && cd win32
    $ ../extras/package/win32/configure.sh --host=i686-w64-mingw32
    $ ./compile
    $ make package-win-common

    Let's grab something to drink and celebrate when the compilation ends! You'll find all the necessary files in the vlc-x.x.x folder. A big thanks goes to the wiki authors and to j-b, who gave me pointers on the #videolan IRC channel.

    [5] Conclusions
    Whelp, that was a long run! As an additional benefit, you are able to customize every single piece of software to your needs, e.g. you can modify the libav version that you are going to use for VLC as you wish! Also, crosscompiling is often treated as black magic, but in reality it is a simple process that just needs more careful configuration. Errors are often related to wrong paths or missing dependencies, and sometimes a combination of both; don't lose hope and keep going until you get what you want!

    For future reference, all (or most) functions and structs in libav have a prefix that indicates the exposure of that function. Those are

    • av_ meaning a public function, present in the API;
    • ff_ meaning a private function, not present in the API;
    • avpriv_ meaning inter-library private function, used internally across libraries only.
    Source: #libav-devel

    Friday, 13 January

    Well, I've finished the new audio decoding API, which has been merged into Libav master. The new audio encoding API is basically done, pending a (hopefully final) round of review before committing.

    Next up is audio timestamp fixes/clean-up. This is a fairly undefined task. I've been collecting a list of various things that need to be fixed and ideas to try. Plus, the audio encoding API revealed quite a few bugs in some of the demuxers. Today I started a sort of TODO list for this stage of the project. I'll be editing it as the project continues to progress.

    Friday, 28 October

    For the past few weeks I've been working on a new project sponsored by FFMTech. The entire project involves reworking much of the existing audio framework in libavcodec.

    Part 1 is changing the audio decoding API to match the video decoding API. Currently the audio decoders take packet data from an AVPacket and decode it directly to a sample buffer supplied by the user. The video decoders take packet data from an AVPacket and decode it to an AVFrame structure with a buffer allocated by AVCodecContext.get_buffer(). My project will include modifying the audio decoding API to decode audio from an AVPacket to an AVFrame, as is done with video.

    AVCODEC_MAX_AUDIO_FRAME_SIZE puts an arbitrary limit on the amount of audio data returned by the decoder. For example, each FLAC frame can hold up to 65536 samples for 8 channels at 32-bit sample depth, which is 2097152 bytes of raw audio, but AVCODEC_MAX_AUDIO_FRAME_SIZE is only 192000. Using get/release_buffer() for audio decoding will solve this problem. It will, however, require changes to every audio decoder. Most of those changes are trivial since the frame size is known prior to decoding the frame or is easily parsed. Some of the changes are more intrusive due to having to determine the frame size prior to allocating and writing to the output buffer.
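
    To give an idea of the caller side, here is a rough sketch of decoding one packet with an AVFrame-based audio API, mirroring avcodec_decode_video2(); this is my sketch, not code from the patch series (the new entry point ended up being called avcodec_decode_audio4()):

    #include <libavcodec/avcodec.h>

    static int decode_audio_packet(AVCodecContext *avctx, AVPacket *pkt)
    {
        AVFrame *frame = avcodec_alloc_frame();  /* av_frame_alloc() in later versions */
        int got_frame  = 0;
        int ret;

        if (!frame)
            return AVERROR(ENOMEM);

        ret = avcodec_decode_audio4(avctx, frame, &got_frame, pkt);
        if (ret >= 0 && got_frame) {
            /* frame->nb_samples, frame->format and frame->data[] describe the
             * decoded audio; the buffer came from avctx->get_buffer(), just as
             * for video, so the AVCODEC_MAX_AUDIO_FRAME_SIZE cap disappears. */
        }
        av_free(frame);
        return ret;
    }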

    As part of the preparation for the new API, I have been cleaning up all the audio decoders, which has been quite tedious. I've found some pretty surprising bugs along the way. I'm getting close to finishing that part, so I'll be able to move on to implementing the new API in each decoder.

    Wednesday, 27 July

    So, I've moved on from AHT now, and it's on to Spectral Extension (SPX).  I got the full syntax working yesterday, now I just need to figure out how to calculate all the parameters.  I have a feeling this will help quality quite a bit, especially when used in conjunction with variable bandwidth/coupling.  My vision for automatic bandwidth adjustment is starting to come together.

    SPX encoding/decoding is fairly straightforward, so I expect this won't take too long to implement.  Similar to channel coupling, the encoder writes coarsely banded scale factors for frequencies above the fully-encoded bandwidth, along with noise blending factors.  The decoder copies lower frequency coefficients to the upper bands, multiplies them by the scale factors, and blends them with noise (which has been scaled according to the band energy and the blending factors in the bitstream).  For the encoder, I just need to make the reconstructed coefficients match the original coefficients as closely as possible by calculating appropriate spectral extension coordinates and blending factors.  Also, like coupling coordinates, the encoder can choose how often to resend the parameters to balance accuracy vs. bitrate.
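
    In very rough pseudo-C (the names, data layout and exact formula here are made up, not taken from the E-AC-3 specification), the decoder side of that reconstruction looks something like this:

    /* Rebuild high-band coefficients from copied low-band ones, scaled and
     * blended with noise; purely illustrative, not the real SPX syntax. */
    static void spx_reconstruct(float *coef, int nb_bands,
                                const int   *band_start,  /* first bin of each SPX band          */
                                const int   *band_end,    /* one past the last bin               */
                                const int   *copy_from,   /* low-frequency source bin            */
                                const float *spx_coord,   /* scale factors from the bitstream    */
                                const float *noise_blend, /* blending factors from the bitstream */
                                const float *band_noise)  /* noise scaled to the band energy     */
    {
        for (int band = 0; band < nb_bands; band++) {
            for (int bin = band_start[band]; bin < band_end[band]; bin++) {
                float copied = coef[copy_from[band] + (bin - band_start[band])];
                float blend  = noise_blend[band];

                coef[bin] = spx_coord[band] *
                            ((1.0f - blend) * copied + blend * band_noise[band]);
            }
        }
    }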

    Once SPX encoding is working properly, I'll revisit variable bandwidth.  However, instead of adjusting the upper cutoff frequency (which is somewhat complex to avoid very audible attack/decay), it will adjust the channel coupling and/or spectral extension ranges to keep the cutoff frequency constant while still adjusting to changes in signal complexity to keep a more stable quality level at a constant bitrate.  This could also be used in a VBR mode with constrained bitrate limits.

    If you want to follow the development, I have a separate branch at my Libav github repository.
    http://github.com/justinruggles/Libav/commits/eac3_spx

    I finally got the complete AHT syntax working properly.  Unfortunately, the quality seems to be lower at all bitrates than with the normal AC-3 quantization.  I'm hoping that I just need to pick better gain values, but I have a suspicion that some of the difference is related to vector quantization, which the encoder has no control over (a basic 6-dimensional VQ minimum distance search is the best it can do).

    My first step is to find out for sure if choosing better gain values will help.  One problem is that the bit allocation model is saying we need X number of bits for each mantissa.  Using mode=0 (all zero gains) gives exactly X number of bits per mantissa (with no overhead for encoding the gain values), but the overall quality is lower than with normal AC-3 quantization or even GAQ with simplistic mode/gain decisions.  So I think that means there is some bias built-in to the AHT bit allocation table that assumes GAQ will appropriately fine-tune the final allocations.  Additionally, it could be that AHT should not always be turned on when the exponents are reused in blocks 1 through 5 (the condition required to use AHT).  This is probably the point where I need a more accurate bit allocation model...

    edit: After analyzing the bit allocation tables for AC-3 vs. E-AC-3, it seems there is no built-in bias in the GAQ range.  They are nearly identical.  So the difference is clearly in VQ.  Next step, try a direct comparison of quantized mantissas using VQ vs. linear quantization and consider that in the AHT mode decision.

    edit2: dct+VQ is nearly always worse than linear quantization...  I also tried turning AHT off for a channel if the quantization difference was over a certain threshold, but as the threshold approached zero, the quality approached that with AHT turned off.  I don't know what to do at this point... *sigh*

    note: analysis of a commercial E-AC-3 sample using AHT shows that AHT is always turned on when the exponent strategy allows it.

    edit3: It turns out that the majority of the quality difference was in the 6-point DCT.  If I turn it off in both the encoder and decoder (but leave the quantization the same) the quality is much better.  I hope it's a bug or too much inaccuracy (it's 25-bit fixed-point) in my implementation...  If not then I'm at another dead-end.

    edit4: I'm giving up on AHT for now.  The DCT is definitely correct and is very certainly causing the quality decrease.  If I can get my hands on a source + encoded E-AC-3 file from a commercial encoder that uses AHT then I will revisit this.  Until then, I have nothing to analyze to tell me how using AHT can possibly produce better quality.

    Friday, 17 June

    Well, I finally got a working E-AC-3 encoder committed to Libav.  The bitstream format does save a few bits here and there, but the overall quality difference is minimal.  However, it will be the starting point for adding more E-AC-3 features that will improve quality.

    The first feature I completed was support for higher bit rates.  This is done in E-AC-3 by using fewer blocks per frame.  A normal AC-3 frame has 6 blocks of 256 samples each, but E-AC-3 can reduce that to 1, 2, or 3 blocks.  This way a small range can be used for the per-frame bit rate, but it still allows for increasing the per-second bit rate.  For example, 5.1-channel E-AC-3 content on HD-DVDs was typically encoded at 1536 kbps using 1 block per frame.
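
    To put numbers on that: at 48 kHz a 6-block frame covers 1536 samples (31.25 frames per second) while a 1-block frame covers only 256 samples (187.5 frames per second), so the same maximum per-frame size allows six times the maximum per-second bit rate.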

    Currently I am working on implementing AHT (adaptive hybrid transform).  The AHT process uses a 6-point DCT on each coefficient across the 6 blocks in the frame.  It basically uses the normal AC-3 bit allocation process to determine quantization of each DCT-processed "pre-mantissa" but it uses a finer resolution for quantization and different quantization methods.  I have the 6-point DCT working and one of the two quantization methods.  Now I just need to finish the other quantization method and implement mantissa bit counting and bitstream output.
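
    As a rough illustration of the transform step (a plain floating-point DCT of length 6, ignoring normalisation; the spec defines a specific fixed-point version):

    #include <math.h>

    /* For one frequency bin, turn the 6 per-block coefficients of a frame into
     * 6 "pre-mantissas"; the usual bit allocation and the finer AHT quantizers
     * then operate on these values. */
    static void aht_pre_mantissas(const float blk_coef[6], float pre[6])
    {
        for (int k = 0; k < 6; k++) {
            double acc = 0.0;
            for (int n = 0; n < 6; n++)
                acc += blk_coef[n] * cos(M_PI * k * (2 * n + 1) / 12.0); /* DCT-II basis */
            pre[k] = (float)acc;
        }
    }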
