MP4 Atom Parsing – where to configure time…?

I’m going to take a stab in the dark here and say that you’re not updating your stbl offsets properly. At least I didn’t (at first glance) see your Python doing that anywhere.

STSC

Let’s start with the location of data. Samples are written into the file in runs called chunks, and the header tells the decoder where each of these chunks lives. The stsc table says how many samples each chunk contains: each entry gives a first-chunk number and a samples-per-chunk count, and that count applies from that chunk until the chunk where the next entry begins. It’s a little confusing, but look at my example. It’s saying that you have 100 samples per chunk up to the 8th chunk; from the 8th chunk on there are 98 samples per chunk.

[screenshot: example stsc table]
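To make the run-length behavior concrete, here’s a minimal sketch (assuming the standard ISO base media layout: a 1-byte version, 3 bytes of flags, a 4-byte entry count, then big-endian 32-bit triples) that parses an stsc payload and answers “how many samples are in chunk N?”:

```python
import struct

def parse_stsc(payload: bytes):
    """Parse an stsc box payload (the bytes after the box size/type).

    Layout: version (1 byte), flags (3 bytes), entry_count (4 bytes),
    then (first_chunk, samples_per_chunk, sample_description_index)
    triples, all big-endian 32-bit.
    """
    entry_count = struct.unpack(">I", payload[4:8])[0]
    entries = []
    for i in range(entry_count):
        off = 8 + i * 12
        entries.append(struct.unpack(">III", payload[off:off + 12]))
    return entries

def samples_in_chunk(entries, chunk_number):
    """Each entry applies from its first_chunk up to (but not
    including) the next entry's first_chunk."""
    count = entries[0][1]
    for first_chunk, samples_per_chunk, _ in entries:
        if chunk_number >= first_chunk:
            count = samples_per_chunk
    return count
```

With the two entries from the example, `(1, 100, 1)` and `(8, 98, 1)`, `samples_in_chunk` returns 100 for chunks 1–7 and 98 from chunk 8 on.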

STCO

That said, you also have to track where these chunks live in the file. That’s the job of the stco table: it records the file offset of chunk 1, chunk 2, and so on.

[screenshot: example stco table]
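This is the table you’d have to rewrite if you remove bytes from mdat: every chunk offset at or past the cut point has to shrink by the number of bytes removed. A sketch of that fix-up, assuming 32-bit offsets (files over 4 GB use the 64-bit co64 box instead):

```python
import struct

def shift_stco(payload: bytes, cut_offset: int, bytes_removed: int) -> bytes:
    """Rewrite an stco payload after bytes were removed from mdat.

    Layout: version/flags (4 bytes), entry_count (4 bytes), then
    big-endian 32-bit file offsets. Any offset pointing at or past
    the cut must be reduced, or the decoder reads the wrong data.
    """
    entry_count = struct.unpack(">I", payload[4:8])[0]
    offsets = struct.unpack(f">{entry_count}I", payload[8:8 + 4 * entry_count])
    adjusted = [o - bytes_removed if o >= cut_offset else o for o in offsets]
    return payload[:8] + struct.pack(f">{entry_count}I", *adjusted)
```

Note this only repairs the offsets; the sample-count tables below still have to agree with what’s actually left in mdat.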

If you modify any data in mdat, you have to maintain these tables. You can’t just chop data out of mdat and expect the decoder to know what to do.

As if this weren’t enough, you also have to maintain the sample time table (stts), the sample size table (stsz), and, if this is video, the sync sample table (stss).

STTS

stts says how long each sample plays for, in units of the track’s timescale. If you’re doing audio, the timescale is probably 44100 or 48000 Hz.

[screenshot: example stts table]
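The table is run-length encoded as (sample_count, sample_delta) pairs. A minimal parser, again assuming the standard big-endian 32-bit layout:

```python
import struct

def parse_stts(payload: bytes):
    """Parse an stts payload: version/flags (4 bytes), entry_count
    (4 bytes), then (sample_count, sample_delta) pairs. Each pair
    means: the next sample_count samples each last sample_delta
    timescale units."""
    entry_count = struct.unpack(">I", payload[4:8])[0]
    return [struct.unpack(">II", payload[8 + i * 8:16 + i * 8])
            for i in range(entry_count)]
```

For AAC, frames are typically 1024 PCM samples long, so an audio track often has a single entry whose delta is 1024.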

If you’ve lopped off some data, everything could now be out of sync. If all the entries here have the exact same duration, though, you’d be OK.

STSZ

stsz says what size each sample is in bytes. The decoder needs this to start at a chunk and then step through each sample by its size.

[screenshot: example stsz table]
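One wrinkle worth knowing: stsz has a default-size field, and when it’s nonzero there is no per-sample list at all — every sample shares that one size. A sketch that handles both cases:

```python
import struct

def parse_stsz(payload: bytes):
    """Parse an stsz payload: version/flags (4 bytes), sample_size
    (4 bytes), sample_count (4 bytes), then per-sample 32-bit sizes
    only when sample_size is 0."""
    sample_size, sample_count = struct.unpack(">II", payload[4:12])
    if sample_size != 0:
        # Constant-size case: no table follows.
        return [sample_size] * sample_count
    return list(struct.unpack(f">{sample_count}I",
                              payload[12:12 + 4 * sample_count]))
```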

Again, if all the sample sizes are exactly the same, you’d be OK. Audio samples tend to be pretty uniform, but video sizes vary a lot (with keyframes and whatnot).

STSS

And last but not least we have the stss table, which says which frames are keyframes. I only have experience with AAC, where every audio frame is considered a keyframe; in that case one entry can describe all the packets.

[screenshot: example stss table]

In relation to your original question, the time display isn’t honored the same way in every player. The most accurate way is to sum the durations of all the frames in the header (the stts entries) and use that as the total time. Other players use the duration metadata in the track headers. I’ve found it best to keep all the values consistent; then players are happy.
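The “sum the frame durations” approach is just a weighted sum over the stts runs divided by the timescale. A one-liner sketch, with hypothetical example numbers (430 AAC frames of 1024 samples at 44100 Hz):

```python
def total_duration_seconds(stts_entries, timescale):
    """stts_entries: list of (sample_count, sample_delta) runs.
    Total ticks = sum(count * delta); divide by timescale for seconds."""
    ticks = sum(count * delta for count, delta in stts_entries)
    return ticks / timescale

print(total_duration_seconds([(430, 1024)], 44100))  # ≈ 9.98 seconds
```

If you trim mdat without updating stts, this sum, the mdhd/tkhd durations, and the real audio length all drift apart, which is exactly when players start disagreeing about the displayed time.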

If you’re doing all that and I missed it in the script, then post a sample mp4 and a standalone app and I can try to help you out.
