The obsolescence of IFFL
So what is IFFL?
IFFL is a technique for linking files on disk into one bigger file. Since the drive allocates a full sector as the minimum unit, the last sector of a file is statistically only half full. So if you merge several files into one bigger file, you statistically win half a block for every file you add to the merged file.
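The arithmetic behind that saving can be sketched in a few lines of Python. The 254 usable data bytes per sector is the real 1541 figure; the helper names are my own:

```python
import math

DATA_BYTES = 254  # usable data bytes per 1541 sector (2 bytes hold the link)

def sectors_needed(length):
    """Number of sectors a file of `length` bytes occupies."""
    return max(1, math.ceil(length / DATA_BYTES))

def blocks_saved_by_merging(lengths):
    """Sectors saved by concatenating the files into one container."""
    separate = sum(sectors_needed(n) for n in lengths)
    merged = sectors_needed(sum(lengths))
    return separate - merged

# Three files whose last sectors are about half full each:
print(blocks_saved_by_merging([889, 889, 889]))  # prints 1
```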
Variants
It’s also fair to say that IFFL exists in two versions:
Version 1: This one merely merges the files of a level into one bigger file, where they are always meant to be loaded at the same time. Here it is not possible to load only one of the segments without loading (and possibly discarding) the segments preceding the one you are after, but if you package segments that are loaded together anyway, this is no restriction. A merit of this type is that it doesn’t need any code in the drive, so you can use the standard Kernal load and save routines in parallel. It is simple and straightforward, and places no constraints on compatibility.
Version 2: The concept matured and people added a routine that scans the IFFL file to generate a table containing track, sector and offset for the different segments (needed because the file can be copied, so the track/sector layout can be different every time the game is run). Using this technique, it’s possible to load an arbitrary segment. This is really handy, but it comes at a price: there must be code in the drive that handles the tables, and not all drive units have drive RAM (SD2IEC for starters). And to support a plethora of units, you need drive code that works on as many of them as possible. (Having said this, Uload does keep the tables in computer RAM, but I’d say it’s not common to have enough free memory on the computer side to store the scan tables.)
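In outline, the version-2 scan walks the sector chain once and records where each segment starts. Below is a minimal Python sketch, assuming the segment lengths are known up front (for instance from a header) and representing the on-disk sector links as a simple map; all names here are my own, not any loader's actual API:

```python
# On a real 1541 the first two bytes of each sector hold the link to
# the next track and sector; here `links` maps (track, sector) to
# (next_track, next_sector) for the sectors of the IFFL file.

DATA_BYTES = 254  # usable data bytes per sector

def scan_iffl(links, start, segment_lengths):
    """Return a (track, sector, offset) entry for each segment."""
    track, sector = start
    offset = 0
    table = []
    for length in segment_lengths:
        if offset == DATA_BYTES:        # previous segment ended on a boundary
            track, sector = links[(track, sector)]
            offset = 0
        table.append((track, sector, offset))
        remaining = length
        while remaining > 0:
            take = min(remaining, DATA_BYTES - offset)
            remaining -= take
            offset += take
            if offset == DATA_BYTES and remaining > 0:
                track, sector = links[(track, sector)]
                offset = 0
    return table
```

A real implementation does the same walk in 6502 drive code, which is why the resulting table is kept to just a few bytes per segment.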
History
The term was, to the best of my knowledge, coined by Snacky when he released Another World in 1989 (or by OMG if you read the interview with Snacky). This was IFFL version 1. It’s quite interesting that Gollum had used the same technique before, just without giving it any special name – see RoadBlasters from 1988. So Snacky didn’t invent it – GP just coined the term.
Why should we use IFFL?
Size
Cracking games on the C64 has always involved two types of competition – speed (first release) and quality, where the size and number of files have been key aspects. Triad’s magazine Gamer’s Guide was the leading force, driving people to perfect a release in terms of size and number of files. Today GG doesn’t exist, and the only thing we have is the release rules maintained by Jazzcat. In these rules, the competition is all about speed, and all the rules do to address quality is to set a minimum standard for when a release is counted. So striving to shave off a few blocks is a dead craft, and this is today a really weak argument.
Disk search
A directory spanning multiple sectors will cause the drive to search for the file it wants to load. Even if you have a fastloader, this part is typically no faster. So loading a file from a later part of the directory will still be a bit slow, even if the fastloader makes the transfer itself fast.
Why shouldn’t we use IFFL?
Incompatibility
The key aspect is that it breaks compatibility with a number of devices. We want the crack to have a working fallback to standard Kernal if possible, and this basically rules out IFFL.
Save becomes really complex
If you have the IFFL routine in the drive, and then also the scan tables, you have filled drive RAM. Any feature you add eats into the available RAM, and that restricts the number of files you can have in your tables. Save is such a feature. This is why the save functions in an IFFL context are quite restrictive. Some allow you to save a fixed-size file inside the actual IFFL. Some allow you to save fixed-size file(s) outside the IFFL itself. None support proper save or save-with-replace. Being clearly in favour of plain and unrestricted save, I see this as a real disadvantage.
Conclusion
Balancing the pros and cons, my clear conclusion is that IFFL is obsolete. The priority should be compatibility. Fewer blocks is nice, but people putting this argument forward while shipping 50-block intros have lost their sense of direction. You can’t argue in favour of killing compatibility in exchange for fewer blocks, and then waste the same amount of blocks and more on something else. It doesn’t compute.
The only valid argument today is the search time for files in the latter part of the directory, but balancing this against compatibility – then compatibility wins for me.
What about nibtools, any good?
The problem with the argument is not really about IFFL at all, it is about a misunderstanding of how the 1541 disk format actually works.
A 1541 disk is built from fixed-size sectors. Each sector is 256 bytes, of which 254 are usable data. Files are stored as linked lists of sectors, and the last sector of a file always belongs entirely to that file, no matter whether it uses 1 byte or all 254 bytes. There is no sub-allocation and no sharing of unused space between files. Because of that, the idea that the “last sector is statistically half full” is simply incorrect. Nothing on a 1541 works statistically. Everything is deterministic.
You can see this immediately with concrete numbers. If you have eight files that are each 255 bytes long, every one of those files needs two sectors, because 255 bytes do not fit into a single 254-byte data area. That means eight files use sixteen sectors. If you concatenate those same eight files into one container, you end up with a single file of 2040 bytes, which fits into nine sectors. You save seven sectors, but not because of statistics or averages. You save them because each file crossed a sector boundary by exactly one byte. Change the example slightly and the result changes completely. If those eight files are exactly 254 bytes long, each file uses one sector, the container also uses eight sectors, and you save nothing at all. This is why the “statistical win” argument is misleading. IFFL does not magically save space on average, it only saves space in very specific, predictable cases.
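The numbers in that example are easy to check mechanically; here is a quick Python verification (254 usable bytes per sector, helper name my own):

```python
import math

DATA = 254  # usable data bytes per 1541 sector

def sectors(length):
    """Sectors occupied by a file of `length` bytes."""
    return max(1, math.ceil(length / DATA))

# Eight 255-byte files: two sectors each, nine for the container.
assert sum(sectors(255) for _ in range(8)) == 16
assert sectors(8 * 255) == 9          # 2040 bytes

# Eight 254-byte files: no saving at all.
assert sum(sectors(254) for _ in range(8)) == 8
assert sectors(8 * 254) == 8          # 2032 bytes
```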
The distinction between “IFFL version 1” and “version 2” also adds confusion rather than clarity. What is called version 1 is simply sequential data stored in a single file. From the drive’s perspective, this is a completely normal file, just a chain of sectors loaded in order. The only difference is that the loader knows the offsets of individual segments inside that file. This is not a special technique. It is the same principle used by tape blocks, asset packs, and countless data containers. Presenting this as a unique IFFL variant and then arguing against it makes the technique look more exotic than it really is.
The compatibility argument suffers from a similar problem. Yes, random access inside a container often involves drive-side code or at least a custom loader. But that is already true for fastloaders, trackloaders, and streaming loaders. Real 1541, 1541-II, 1571, and 1581 drives all have drive RAM and a 6502 CPU, and many loaders upload code there as a matter of course. Devices like SD2IEC do not emulate this environment, which is why a large class of advanced loaders already does not work on them. Blaming IFFL specifically for this incompatibility is misleading, because the incompatibility exists regardless of IFFL. If SD2IEC is treated as the baseline, then most serious loader technology has to be discarded anyway.
The claim that lookup tables are impractical is also hard to justify technically. A table that stores track, sector, and offset for each segment is very small. Even a table with dozens of entries typically fits into a few hundred bytes. On a C64 with 64 kilobytes of RAM, that is insignificant. Many games already keep level tables, sprite pointers, or music offsets of similar size in memory. There is nothing unusual or risky about this.
Save games are often brought up as another reason against IFFL, but here again the limitation is not specific to IFFL. CBM DOS does not support rewriting arbitrary parts of a file in place. Saving usually means creating a new file, overwriting an old one, or writing to fixed-size slots whose layout is known in advance. This is true for normal files just as much as for container files. Original games solved this by using separate save files or predefined slots, and that approach works just as well alongside IFFL. The technique does not prevent saving, it simply does not pretend to solve a DOS limitation that already exists.
Interestingly, the directory search argument actually works in favor of IFFL rather than against it. The directory on a 1541 is itself a linked list of sectors. To find a file late in the directory, the drive must read and scan all preceding directory sectors. This overhead is real, and many fastloaders do not speed it up much because it happens inside DOS logic. Reducing the number of directory entries by bundling assets into fewer files directly reduces that overhead. That is a concrete mechanical effect, not a theoretical one.
In the end, the conclusion that IFFL is “obsolete” does not follow from the technical facts. IFFL does not change the disk format, does not break the sector model, and does not introduce new fundamental limitations. It simply trades directory complexity for internal structure. Sometimes that trade-off makes sense, sometimes it does not. Treating it as inherently wrong or outdated requires ignoring how the 1541 actually works and attributing unrelated problems to the technique itself.
Thanks for the long comment, even if I disagree with most of the points made.
“Statistically” means that if you make an infinite number of IFFL files, with a random number of files in them, the average saving *will* be half a sector per file. That is of course not true in every case, but as a rule of thumb it holds very well. You point at the two extreme cases, thereby contradicting the point you try to make – real files will be distributed between those two points. On average, a file extends 127 bytes into its last sector. Take all PRG files on, say, CSDb, apply “length-(trunc(length/254)*254)”, and you will get a figure close to 127. (Trunc being a Delphi function that truncates a floating-point number to its integer part.)
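That formula is simply length mod 254, and the tendency it describes can be probed with a quick simulation. This samples uniformly random lengths rather than real PRG files, so it only illustrates the statistical argument, not the exact CSDb figure:

```python
import random

DATA = 254  # usable data bytes per 1541 sector

def last_sector_bytes(length):
    """length - trunc(length/254)*254, i.e. length mod 254.
    (An exact multiple yields 0 here, although on disk that
    last sector is in fact completely full.)"""
    return length - (length // DATA) * DATA

random.seed(1)
lengths = [random.randint(1, 50000) for _ in range(100_000)]
avg = sum(last_sector_bytes(n) for n in lengths) / len(lengths)
print(round(avg, 1))   # close to 127 for uniformly random lengths
```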
IFFL v1 is what Snacky did and called IFFL, so this is what IFFL was when the acronym was coined. I guess you should talk to him if your view of what IFFL should be deviates from this. (When Gollum did the same thing, he didn’t call the technique anything.) And an IFFL file is always a file on disk; from the drive’s perspective there is no difference – the v1 and v2 files are 100% the same. The difference between v1 and v2 is only that v2 scans the file and allows random access, and that with v1 you might have several linked files (like one per level) while in v2 you always have one (at least per disk side). So if you have a view of what IFFL is and v1 isn’t that, then talk to Snacky – not me.
When coding for the drive, you need to adhere to the least common denominator – the 1541. With 2 KB of RAM, you need to write stuff ultra compact. The zero page and the stack are basically not available to your code. You also need a buffer page for reading data from disk. So you have only just over 1 KB of RAM free. Every file needs track, sector and offset in the table, so you can do max 84 files in a page. I see you imply computer-side storage of the tables (MagerValp did that for Ultima IV). Theoretically you have more RAM on the computer side, but when retrofitting a game you need to find space for your loader, possibly a buffer, and then also the tables. That is surely not always even possible. If this is your own game, things might be different and you are free to do what you want, but then using your own loader that gives access to 40 tracks is a much easier way to increase the storage on the disk than IFFL. The latter isn’t available to crackers wanting to have their release counted as such.
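The RAM budget described above works out as a quick back-of-the-envelope computation. The constants are the figures given in the text; the names are my own:

```python
# Rough 1541 RAM budget, using the figures from the paragraph above.
TOTAL_RAM    = 2048  # the 1541 has 2 KB of RAM
ZP_AND_STACK = 512   # pages 0-1: zero page and stack, effectively off limits
DATA_BUFFER  = 256   # one page needed as a sector read buffer
ENTRY_BYTES  = 3     # track + sector + offset per IFFL segment

free_for_code_and_tables = TOTAL_RAM - ZP_AND_STACK - DATA_BUFFER
print(free_for_code_and_tables)        # just over 1 KB, as stated

def table_bytes(n_files):
    """Size of a drive-side scan table for n_files segments."""
    return n_files * ENTRY_BYTES

print(table_bytes(84))                 # 84 entries fit within one page
```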
And compatibility IS an issue. How many of the IFFLs out there work fine on a 1581? I’d say a few (if any), whereas quite a few fastloaders work perfectly fine. SD2IEC is a lost cause for anything that uses drive code, agreed. But I don’t see why it should be defined as the baseline. The CBM line of disk drives should.
And save games: if you have an IFFL with drive-side tables, you need to rescan the IFFL file after saving. There is no way in hell you are going to find space in a 1541 for the drive routine plus the tables and also fit a save routine that can handle saving an arbitrary file. That’s why N0SD0S stores the save inside the IFFL and only supports a fixed-size segment. Again, I am assuming drive-side storage of the tables.
I did say that the directory argument is a pro for IFFL, so I can’t understand why you argue as if I had called it one of the additional drawbacks. In fact it’s basically the only real advantage, except for the handful of blocks saved by the linking. But those blocks are only relevant to crackers competing over the shortest version.