Official BF Editor Forums

Fbrb Extractor/packer


Usage: Download and install Python 2.7 first (should take just a minute): http://www.python.org/ftp/python/2.7.3/python-2.7.3.msi Then copy-paste the code into a file and save it as a .py file. The script works via drag and drop directly in Windows Explorer, i.e. you drop the archive you want to extract onto the script file. Double-clicking the script does nothing as of now.

I have updated the script to extract and pack fbrb files. Drag and drop an fbrb file onto the script and a folder will be created next to the fbrb file being extracted (regardless of the script's location). Drag and drop an fbrb folder onto the script to go the opposite direction. It also supports dragging and dropping several files at once (although dragging both a file and its respective folder onto the script is rather pointless). Additionally, you can drag and drop a non-fbrb folder onto it and it asks whether to pack or unpack all files within.

Make your own backups!

You can adjust the compression level in the code. If there are issues while packing or extracting, you can also activate the tempfile in the code so the payload is written to the hard drive instead of being held in memory while it is processed.
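The tempfile switch just swaps the in-memory buffer for a file-backed one; both expose the same file-like interface, so the rest of the code doesn't care which it got. A minimal Python 3 sketch of the idea (the helper name is made up):

```python
import io
import tempfile

def make_payload_buffer(use_tempfile):
    """Return a writable buffer: file-backed if use_tempfile, else in-memory."""
    return tempfile.TemporaryFile() if use_tempfile else io.BytesIO()

# Both variants behave identically from the caller's point of view.
for flag in (False, True):
    buf = make_payload_buffer(flag)
    buf.write(b"payload bytes")
    buf.seek(0)
    assert buf.read() == b"payload bytes"
    buf.close()
```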

I should have done this earlier.

This script gets rid of .res files altogether and uses proper file extensions. The recognized types are: skinnedmeshset, swfmovie, dx11vertexshader, dx9shaderdatabase, dbx, visualwater, rigidmeshset, compositemeshset, grannymodel, irradiancevolume, watermesh, dx10vertexshader, binkmemory, dx10pixelshader, havokphysicsdata, impulseresponse, sootmesh, deltaanimation, dx9vertexshader, dx11shaderdatabase, dx11pixelshader, occludermesh, deleted, terrain, ragdollresource, wave, terrainheightfield, dx10shaderdatabase, itexture, visualterrain, grannyanimation, aimanimation, treemeshset, dx9pixelshader, terrainmaterialmap, animtreeinfo, weathersystem, meshdata.
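The renaming can be sketched like this (Python 3, a reduced extension set, and a hypothetical helper name; the real script handles more cases such as deleted files):

```python
# A few of the resource types listed above; the real set is much larger.
KNOWN_EXTENSIONS = {"itexture", "terrain", "wave", "meshdata", "swfmovie"}

def archive_name(filename):
    """Map an extracted filename back to the name stored in the archive:
    files with a known resource extension were originally plain .res files."""
    stem, dot, ext = filename.rpartition(".")
    if ext in ("dbx", "bin", "dbmanifest"):
        return filename           # these kept their real extension in the archive
    if ext in KNOWN_EXTENSIONS:
        return stem + ".res"      # resource types were all .res originally
    return filename

assert archive_name("rock01.itexture") == "rock01.res"
assert archive_name("level.dbx") == "level.dbx"
```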

Drag and drop the archive onto the script; Python 2.6/2.7 required. The output folders will be placed at the location of the script. There are two things I am not entirely sure about. Every archive entry contains one number which is usually 0x00010000, but some entries in terrain-00.fbrb have 0x00000000 for unknown reasons. I haven't investigated further yet.

Also, the length of the payload is specified twice for every file in an archive. Usually the two values are the same; only in ondemand_awards-00.fbrb in Package\async is the second value 0x40 bytes smaller than the first. However, going with the first value seems to extract fine, so I think the last 0x40 bytes of the payload really do belong to the file.
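From the description above, an archive is the magic FbRB, a big-endian length, and two gzip streams glued together. That framing can be split in a few lines (Python 3 here for brevity; the script below is Python 2, and this assumes both halves are gzipped, which the script's zipped-flag check shows is not always the case):

```python
import gzip
import struct

def split_fbrb(data):
    """Split raw archive bytes into (metadata, payload), both decompressed.
    Assumed framing: 4-byte magic, big-endian uint32 giving the size of the
    first gzip stream, then the two gzip streams back to back."""
    if data[:4] != b"FbRB":
        raise ValueError("not an FbRB archive")
    cut, = struct.unpack(">I", data[4:8])
    metadata = gzip.decompress(data[8:8 + cut])
    payload = gzip.decompress(data[8 + cut:])
    return metadata, payload

# Round-trip a toy archive to check the framing.
meta_z = gzip.compress(b"entries")
body_z = gzip.compress(b"filedata")
blob = b"FbRB" + struct.pack(">I", len(meta_z)) + meta_z + body_z
assert split_fbrb(blob) == (b"entries", b"filedata")
```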

from struct import pack,unpack
import gzip
from cStringIO import StringIO
import sys
import os
import tempfile

# General usage of the script: Drag and drop one or several files and/or folders onto the script. It will
# unpack fbrb archives and pack FbRB folders. You can also drag non-fbrb folders onto the script and specify
# whether you want to unpack or pack all fbrb folders/files within.
# There are some options in this script: activate tempfiles if the script fails at packing/unpacking,
# specify the gzip compressionlevel when packing; specify a folder to unpack/pack files to.
# Note: The script will only pack files with known extensions, e.g. xml files in a fbrb folder will be ignored (useful!)

#packing parameters:
compressionlevel=1  #takes values between 0 and 9: 0=uncompressed, 1=weak compression, 9=strong compression;
                    #the vanilla files are either at 0 or around 5-6, probably 5-6
                    #to fit everything on one disk (bf3 archives are not compressed at all).
                    #While 0 can produce huge files, 1 is a good compromise. Don't go higher than 6.


packtmpfile=1   #make a temporary file on the hard drive in case of memory issues
               #my 4gb system could handle all files without, except for streaming_sounds-00.fbrb (580mb)
               #no significant change in performance when packing mp_common level-00

#unpacking parameters:
unpacktmpfile=0 #temp file for unpacking, was not necessary on a 4gb system
               #extract mp_common level-00 while suppressing output (i.e. no files written)
               #14 seconds with tempfile, 7 seconds without
               #with output the difference is around 20%

#adjust unpack folder / pack file, use the commented out line below as an example (in particular, slashes);
#this line will move files into the folder "C:\Program Files (x86)\Electronic Arts\files FbRB"
#no path given puts all extracted files in a folder at the same place as the fbrb file/folder

#unpackfolder="C:/Program Files (x86)/Electronic Arts/files"


BUFFSIZE=1000000 # buffer size when writing the fbrb archive

#file types that belong in fbrb archives; anything else inside a FbRB folder is ignored when packing
knownextensions=set(['skinnedmeshset','swfmovie','dx11vertexshader','dx9shaderdatabase','dbx','visualwater',
    'rigidmeshset','compositemeshset','grannymodel','irradiancevolume','watermesh','dx10vertexshader',
    'binkmemory','dx10pixelshader','havokphysicsdata','impulseresponse','sootmesh','deltaanimation',
    'dx9vertexshader','dx11shaderdatabase','dx11pixelshader','occludermesh','deleted','terrain',
    'ragdollresource','wave','terrainheightfield','dx10shaderdatabase','itexture','visualterrain',
    'grannyanimation','aimanimation','treemeshset','dx9pixelshader','terrainmaterialmap','animtreeinfo',
    'weathersystem','meshdata'])

def grabstring(offset):
    # collect chars until the null terminator
    result=""
    while dump[offset]!="\x00":
        result+=dump[offset]
        offset+=1
    return result
def makeint(num): return pack(">I",num)
def readint(pos): return unpack(">I",dump[pos:pos+4])[0]

#pack/unpack deal in byte strings here, which Python 2 treats as ordinary strings

def packer(sourcefolder, targetfile="", compressionlevel=compressionlevel, tmpfile=1):
    """takes an absolute folder path with the folder name ending in " FbRB";
    the target file path is absolute, without the .fbrb extension"""
    if not os.path.isdir(sourcefolder) or sourcefolder[-5:]!=" FbRB": return

    print sourcefolder[4:] #strip the \\?\ long-path prefix for display
    toplevellength=len(sourcefolder)+1 #for the RELATIVE pathnames to put in the fbrb

   if not targetfile: targetfile=sourcefolder[:-5]+".fbrb"
   else: targetfile=lp(targetfile)+".fbrb"

   strings="" #the list of strings at the beginning of part1
   extdic=dict() #keep track of all extensions to omit string duplicates in part1
   entries="" #24 bytes each, 6 parts
   payloadoffset=0 #where the uncompressed payload starts, sum of all filelengths so far

   if tmpfile: s2=tempfile.TemporaryFile()
   else: s2=StringIO()
   if compressionlevel: zippy2=gzip.GzipFile(fileobj=s2,mode="wb",compresslevel=compressionlevel,filename="") #takes the payload when compression

   #go through all files inside the folder
   for dir0, dirs, files in os.walk(sourcefolder):
        for f in files:
            #validate the file and grab its extension
            rawfilename,extension = os.path.splitext(f)
            extension=extension[1:] #drop the leading dot
            if extension not in knownextensions and extension not in ("dbx","bin","dbmanifest","dbxdeleted","resdeleted","nonres"): continue

            #restore filename strings to res, dbx, bin, dbmanifest; null terminated
            relfolder=(dir0.replace("\\","/")+"/")[toplevellength:] #path relative to the FbRB folder
            if extension=="dbxdeleted": filepath=relfolder+f[:-7]+"\x00" #strip the "deleted" suffix
            elif extension not in ("dbx","bin","dbmanifest"): filepath=relfolder+rawfilename+".res\x00"
            else: filepath=relfolder+f+"\x00"
            stringoffset=makeint(len(strings)) #field 1/6
            filepath=str(filepath) #make string because of unicode stuff

            sourcepath=os.path.join(dir0,f)
            filelength=os.path.getsize(sourcepath)
            if filelength==0: deleteflag="\x00\x00\x00\x00"
            else: deleteflag="\x00\x01\x00\x00" #field 3/6

            #map on-disk extensions back to the archive's extension strings
            if extension in ("dbxdeleted","resdeleted"): archiveext="*deleted*"
            elif extension in ("nonres","dbx","bin","dbmanifest"): archiveext="<non-resource>"
            else: archiveext=extension

            #check if the extension has been used before, if so refer to the string already in use
            if archiveext in extdic: extensionoffset=extdic[archiveext]
            else:
                extensionoffset=makeint(len(strings)+len(filepath))
                extdic[archiveext]=extensionoffset
                filepath+=archiveext+"\x00"

            #make the entry (24 bytes, 6 big-endian fields; field order reconstructed), grab the payload
            strings+=filepath
            entries+=stringoffset+extensionoffset+deleteflag+makeint(payloadoffset)+makeint(filelength)+makeint(filelength)
            payloadoffset+=filelength

            f1=open(sourcepath,"rb")
            if compressionlevel: zippy2.write(f1.read())
            else: s2.write(f1.read())
            f1.close()

    if compressionlevel: zippy2.close() #flush the compressed payload

    #make decompressed part1 (header, strings, entries, trailer), then compress it; layout reconstructed
    part1=makeint(len(strings))+makeint(len(entries)//24)+strings+entries
    part1+=("\x01" if compressionlevel else "\x00")+makeint(payloadoffset) #compression flag sits at part1[-5]
    s1=StringIO()
    zippy1=gzip.GzipFile(fileobj=s1,mode="wb",compresslevel=6,filename="")
    zippy1.write(part1)
    zippy1.close()
    part1=s1.getvalue()

    #make the final file: magic, size of the first gzip archive, then both archives back to back
    out=open(targetfile,"wb")
    out.write("FbRB"+makeint(len(part1))+part1)
    s2.seek(0)
    while 1:
        buff = s2.read(BUFFSIZE)
        if buff: out.write(buff)
        else: break
    out.close(), s2.close()

def unpacker(sourcefilename,targetfolder="",tmpfile=0):
    """takes an absolute file path with the file ending in ".fbrb";
    the target folder path is absolute, without the " FbRB" suffix"""
    global dump

    #check validity
    if sourcefilename[-5:].lower()!=".fbrb": return
    f=open(sourcefilename,"rb")
    if f.read(4)!="FbRB":
        f.close()
        return #not an fbrb archive
    print sourcefilename[4:] #strip the \\?\ long-path prefix for display

    if not targetfolder: targetfolder=sourcefilename[:-5]+" FbRB\\"
    else: targetfolder=lp(targetfolder)+" FbRB\\"

    if not os.path.isdir(targetfolder): os.makedirs(targetfolder) #for empty fbrb files basically

    cut=unpack(">I",f.read(4))[0] # size of the first gzip archive; there are two glued together
    if tmpfile:
        part1=tempfile.TemporaryFile(); part1.write(f.read(cut)); part1.seek(0)
        part2=tempfile.TemporaryFile(); part2.write(f.read()); part2.seek(0)
    else:
        part1=StringIO(f.read(cut))
        part2=StringIO(f.read())
    f.close()

    zippy=gzip.GzipFile(mode='rb', fileobj=part1)
    dump=zippy.read() #the decompressed metadata section
    part1.close(), zippy.close()

    strlen=readint(0) #length of the string section (header layout reconstructed)
    numentries=readint(4) #number of file entries (header layout reconstructed)

    if dump[-5]=="\x00": zipped=0 #flag near the end of part1: is the payload compressed?
    else: zipped=1
    if zipped: zippy2=gzip.GzipFile(mode='rb', fileobj=part2)

    for i in range(numentries): #each entry is 24 bytes, 6 big-endian ints (field order partially reconstructed)
        filenameoffset=readint(strlen+8+i*24)
        extensionoffset=readint(strlen+12+i*24)
##        undeleteflag=readint(strlen+16+i*24) # redundant: undeleteflag <=> extension=deleted
        payloadoffset=readint(strlen+20+i*24) # payload offset in the second gzip archive
        payloadlen=readint(strlen+24+i*24)
##        payloadlen2=readint(strlen+28+i*24) # the same as payloadlen except for one file in Package

        # get folder name, get file name, grab payload and put it in the right place
        folder,filename = os.path.split(grabstring(filenameoffset+8))
        name,ending = os.path.splitext(filename) # original file ending: bin, res, dbmanifest, dbx
        extension=grabstring(extensionoffset+8).lower() #lowercase because .itexture looks better than .ITexture
        if extension=="*deleted*":
            if ending==".dbx": ending=".dbxdeleted"
            else: ending=".resdeleted"
        elif extension=="<non-resource>" and ending==".res": ending=".nonres"
        elif extension!="<non-resource>": ending="."+extension

        finalpath=targetfolder+folder.replace("/","\\")
        if folder!="": finalpath+="\\"
        if not os.path.isdir(finalpath): os.makedirs(finalpath)
        if zipped:
            payload=zippy2.read(payloadlen) #entries are assumed to be stored in payload order
        else:
            part2.seek(payloadoffset)
            payload=part2.read(payloadlen)
        f2=open(finalpath+name+ending,"wb")
        f2.write(payload)
        f2.close()

    if zipped: zippy2.close()
    part2.close()

def lp(path): #long pathnames
   if path[:4]=='\\\\?\\': return path
   elif path=="": return path
   else: return unicode('\\\\?\\' + os.path.normpath(path))

#give fbrb folder->pack
#give fbrb file->extract
#give other folder->extract/pack
#sadly os.walk is rather limited for this purpose, I cannot keep it out of "marked" fbrb folders
def main():
    mode=""
    inp=[lp(p) for p in sys.argv[1:]]
    for ff in inp:
        if os.path.isdir(ff) and ff[-5:]==" FbRB": packer(ff,tmpfile=packtmpfile)
        elif os.path.isfile(ff): unpacker(ff,tmpfile=unpacktmpfile)
        else: #handle all fbrb within this folder; but first ask for user input
            if not mode: mode=raw_input("(u)npack or (p)ack everything from selected folder(s)\r\n")
            if mode.lower()=="u":
                for dir0,dirs,files in os.walk(ff):
                    for f in files:
                        unpacker(os.path.join(dir0,f),tmpfile=unpacktmpfile)
            elif mode.lower()=="p":
                for dir0,dirs,files in os.walk(ff):
                    for d in dirs:
                        packer(os.path.join(dir0,d),tmpfile=packtmpfile)

try: main()
except Exception, e:
    print e
    raw_input("An error occurred, press Enter to exit")

Edited by Frankelstner


Hi, given recent developments vis-a-vis servers I thought I'd have a look at the potential for BC2 modding.

Would you describe your script as the most capable unpacker to date?


Hi, given recent developments vis-a-vis servers I thought I'd have a look at the potential for BC2 modding.

Would you describe your script as the most capable unpacker to date?

Absolutely. It lacks flexibility right now but is definitely less cumbersome to use than the quickbms version.

Look, I was there when BC2 went into beta. That's when the quickbms extractor was released; I made the dbx converters and other people made some other tools. After that there was basically nothing for about two years until dhwang showed the code to pack the files back into the archives (though it requires the original archive in addition to the extracted files). Then bf3 showed up and I had time to hone my skills. But just two days ago I took my first look at the fbrb files and realized that the quickbms unpacker is lacking. It leaves out certain information from the files, which made packing up the files so hard. It also produces only .dbx and .res files. While .dbx files are one homogeneous file type, .res files are not. Just imagine how hard it was all the time we wanted to decipher .res files when you couldn't even be sure that they were indeed the same file type. That's why "I should have done this earlier."

I also have a working fbrb packer which does not require the original archives, but first I'll have it iterate over the 1000 fbrb files to make sure everything works according to plan. I will also make some small changes to the unpacker in the next 24 hours, maybe merge unpacker and packer. And after that fix my dbx scripts.


Cool, please do keep up the good work (I look forward to your packer). It's still a very good game and it'll be exciting to see what can be done modding-wise now that privately hosted servers are more achievable.


Done. Handling long pathnames with Python probably took me more time than writing the packer.

Meh, don't activate unpacktmpfile right now, it doesn't work. Fixed that too.

Edited by Frankelstner


Did some more tweaking. When running the script through all fbrb files, everything matched. That is, because the files are zipped and the strings inside the fbrb files are not ordered, I cannot compare pre-fbrb and post-fbrb directly. Rather, I compared the result of extracting vs extracting-packing-extracting, and the results were exactly the same. Of course this could still mean the game just handles things differently, but it looks good so far after doing some quick modding. I might extract my entire game and repack it just to see if it works.
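The extract vs extract-pack-extract check described here boils down to a recursive directory comparison, which the standard library can do. A generic Python 3 sketch (not the code actually used; `shallow=False` forces byte-for-byte comparison):

```python
import filecmp
import os
import pathlib
import tempfile

def trees_identical(a, b):
    """Recursively compare two directory trees, file contents included."""
    cmp = filecmp.dircmp(a, b)
    if cmp.left_only or cmp.right_only:
        return False
    _, mismatch, errors = filecmp.cmpfiles(a, b, cmp.common_files, shallow=False)
    if mismatch or errors:
        return False
    return all(trees_identical(os.path.join(a, d), os.path.join(b, d))
               for d in cmp.common_dirs)

# Quick self-check on two scratch directories.
d1, d2 = tempfile.mkdtemp(), tempfile.mkdtemp()
pathlib.Path(d1, "a.res").write_bytes(b"data")
pathlib.Path(d2, "a.res").write_bytes(b"data")
assert trees_identical(d1, d2)
pathlib.Path(d2, "a.res").write_bytes(b"DATA")  # same size, different bytes
assert not trees_identical(d1, d2)
```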


Unpack/packing function works nicely (packing seems faster than previous script).

I also tried to combine multiple archives, and it didn't work. I combined mp_006 and mp_common into mp-common. The conclusion is that it's fine if files are bigger than they're supposed to be (I ran another map), but deleting files from mp_006 isn't okay (when running mp_006: memory access violation again). I also tried scrubbing all the files in mp_006 empty while preserving the file structure: still doesn't work. The server expects certain information to be there, and it won't work even if you move it elsewhere.

Edited by rukqoa

Unpack/packing function works nicely (packing seems faster than previous script).

It isn't hard to be faster when my script uses the weakest compression level by default. The game works fine regardless, and it has just come to my mind that probably the one and only reason most game files are compressed at around level 5 is that the devs wanted to get everything on one disk. While he didn't refer to the compression directly, it's clear that compression played its part in reducing file size:

This technique allowed BFBC2 to fit on a DVD, both for the console and the PC versions.

Taken from: http://forums.electronicarts.co.uk/members-helping-members/1199353-software-engineering-file-formats-build-processes-packaging.html
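The level tradeoff is easy to see directly with gzip in Python 3 (made-up repetitive data; real game data compresses less dramatically): level 1 already captures most of the win, while higher levels mostly cost time.

```python
import gzip

data = b"some repetitive game data " * 4000  # 104000 bytes of made-up content
sizes = {level: len(gzip.compress(data, compresslevel=level))
         for level in (0, 1, 6, 9)}

# Level 0 only adds gzip framing, so it is larger than the input...
assert sizes[0] > len(data)
# ...while even the weakest real compression shrinks repetitive data massively.
assert sizes[1] < len(data) // 10
# Higher levels shrink the output further, but with diminishing returns.
assert sizes[0] > sizes[1] >= sizes[6] >= sizes[9]
```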

Another good read: http://forums.electronicarts.co.uk/battlefield-bad-company-2-pc/1350772-so-how-about-modtools.html

Well, that was interesting. It was just your small remark which made me look up that one quote, but there are many useful things to be found in his posts:

Creating the content is less straight-forward. Sometimes there are tools that do the job well (Maya, Photoshop, SoundForge, Cubase etc).


You need to have Maya 8.5 (32-bit version) installed in order to convert any meshes.

Let's say that you tell the pipeline to build level MP_003.

MP_003 is represented by an XML file, which references a bunch of other files. These in turn reference other files. If you follow this graph of references, you will find the level layout, heightmap, characters, weapons, vehicles, and all the content that you can see in-game. (The in-game HUD and related stuff might also be in the graph.)

When the pipeline is about to build MP_003, it will first perform a consistency check on all content, and yell if any file that is referenced by any other is not present.

If all files are present, the pipeline will attempt to convert all files referenced by MP_003. It uses the file system journal to determine which files have changed on-disk. Also, any files that have already been converted carry info on which files depend on them (so it has info like: "if file X changes, then files Y,Z,W will also need to be rebuilt").

--So the main level file contains the asset requests.


Also, there are some complications wrt when we release patches that affect the base game's content. Whenever we release a patch, all existing levels will need to be rebuilt with a new set of original data. This is because some level-common data is stored inside of the level archives. I'm not sure at the time of writing, but that probably means that the only manageable way for us would be to invalidate any user-made levels when we release a patch of that form. Then creators of any user-generated levels would be required to run their levels again through the pipeline with the new base content supplied.

The tools which "cook" the game's datafiles take in a set of original files, which are roughly 80GB in size. Most of the files here are organized by function ("this is a mesh, this is a texture, this is an animation, ...") rather than location (on which level it is used). The tools will process the dataset once per level, extract the subset that applies for the current level, and convert it to a format suitable for the game engine.

--Assets are stored at a central point and then put together individually for each specific level. I assume .dbxdeleted and .resdeleted files exist because the pipeline doesn't work correctly. These files are apparently placed into the wrong archives.

This allows the tools to do some per-level precomputations easily; for instance, since all the pixel/vertex shader combinations that will be used throughout the entire level is known, the tools pre-generate all these combinations and store them in a "shader database". (BF2142 generated & compiled the shader combinations during first load - that's one reason why first launch of a level was very slow there.)

After this is done for all levels, there are a bunch of archives for each level. This is ideal for quick loading times and no memory fragmentation, but it wastes diskspace unnecessarily. The result is probably about 15GB in size.

In order to make a better tradeoff between diskspace and load times, an extra processing step has been added; all the level-archives are compared, and any datafiles which appear inside many of these level-archives are moved to a level-common archive. So when loading a level, everything inside the level's own archives and the level-common archive has to be processed.

--Okaaaay, so you start out with all assets existing exactly once. Instead of having the level tell the engine which files it needs and have the engine look them up (as done to some extent in bf2, and entirely and very nicely in bf3), you decided against that due to consoles and cloned the files directly into the level folders. Now mysteriously the files take up much more space than before. And then you had the great idea of checking which files are used in every level and put them into a common folder. So first you clone files, then realize that cloning sucks (duh!) and delete most of them again. Of course the loading times are nice, but everything else is incredibly inconvenient.
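The level-common step described above amounts to duplicate pooling: count in how many level archives each file appears and move the widely shared ones into a common archive. A toy Python 3 sketch (all names invented):

```python
from collections import Counter

def split_common(level_files, threshold=2):
    """level_files: {level_name: set_of_files}. Files appearing in at least
    `threshold` levels move to the shared 'common' pool."""
    counts = Counter(f for files in level_files.values() for f in files)
    common = {f for f, n in counts.items() if n >= threshold}
    per_level = {lvl: files - common for lvl, files in level_files.items()}
    return common, per_level

levels = {
    "mp_001": {"soldier.mesh", "rock.itexture", "mp_001.terrain"},
    "mp_002": {"soldier.mesh", "rock.itexture", "mp_002.terrain"},
    "mp_003": {"soldier.mesh", "mp_003.terrain"},
}
common, rest = split_common(levels)
assert common == {"soldier.mesh", "rock.itexture"}
assert rest["mp_003"] == {"mp_003.terrain"}
```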

So this is all awesome when shipping the game. Where do the problems begin?

When you begin creating patches.

First off, it turns out that the tools that "cook" the data don't produce binary-identical data all the time. The result after cooking is always functionally identical, but the bit-content may differ slightly. Items in unordered lists change order, uninitialized fields contain random data, that sort of stuff.

--Such a failure. A small change in the order of list items has a notable impact on a gzipped archive.
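It takes very little to demonstrate: reorder two items and the gzipped bytes differ throughout, even though the content is equivalent (Python 3; `mtime=0` pins the gzip header timestamp so the comparison is fair):

```python
import gzip

a = gzip.compress(b"itemA;itemB;itemC", mtime=0)
b = gzip.compress(b"itemB;itemA;itemC", mtime=0)  # same items, reordered

# Identical input compresses identically once the header timestamp is pinned...
assert gzip.compress(b"itemA;itemB;itemC", mtime=0) == a
# ...but a mere reordering changes the compressed output.
assert a != b
```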

Why didn't this get caught sooner? Because you only notice problems of this kind when re-cooking the same data several times, from scratch. And re-cooking all BC2 data takes over 48 hours for a high-end computer. And locating & correcting all places where cooking isn't deterministic down to the bit level would take a lot of time (both calendar time and effective development time). Perhaps that time is better spent elsewhere?

--Yeah, like checksums? I mean, the cooking is done by you, so the resulting archives can be checksummed, can't they? After all, bf2 has md5 and bf3 has sha-1. So what does bc2 use? NOTHING AT ALL. I can join multiplayer just fine with a modded game. Here's a 4x zoom iron sight rifle (I've played an entire round on a pb server to be sure): http://i.imgur.com/dJUtJ.jpg http://i.imgur.com/Nev7Y.jpg
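Such a check is only a few lines with hashlib, which is presumably why the omission rankles (a generic sketch, not anything DICE ships; comparing digests against a trusted manifest would catch modified archives before joining a server):

```python
import hashlib

def archive_digest(data, algorithm="sha1"):
    """Checksum an archive's raw bytes with the given hash algorithm."""
    return hashlib.new(algorithm, data).hexdigest()

original = archive_digest(b"archive bytes")
modified = archive_digest(b"archive bytez")
assert original != modified                         # any tampering changes the digest
assert archive_digest(b"archive bytes") == original # and the check is repeatable
```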

So. If different "cooking" runs produce slightly different results, it is suddenly difficult to look at version A and version B of the data and answer the question, "what are the differences between these two datasets?". It's easy when looking at the source data, but when looking at the cooked data there are a lot of changes which have no effect on the final game experience.

There are about 40.000 source files, and this results in well over 100.000 cooked files. Going through those by hand is not an option. Writing a perfect filter, which knows which differences are benign and which are for real, will take as much time and effort as making the cooking 100% deterministic. That is not an option either.

--There are 440216 res files, 266951 dbx files, 25 bin files and 2 dbmanifest files found in my bc2 folder.

So you make a filter which does something in between; be smart, create rules for the file types you know about, and when in doubt - assume that the change is for real. Err on the side of caution.

Then you realize that those shader databases were never designed to be extendable. What happens when a new object is added to a level in a patch? Its mesh & textures are included, no sweat, but what about the shader combinations? How does one add something to the shader database, when the shader database is an opaque binary block whose entire contents may change when just one object is added to the level? (One shader database is about 5MB. There are three shader databases per level - one each for DX9, DX10 and DX11.)

And finally, the patch system itself. Yes, it can replace portions of files on-disk. But due to its heritage (from BF Heroes), it is not able to open BFBC2's archive files and apply differences to individual files within the archives. The only straightforward way is to make all patched-in content arrive in archives on the side of the original archives.

--As in, your patcher doesn't even patch but instead just copy-pastes some files into Package.


Creating the content is less straight-forward. Sometimes there are tools that do the job well (Maya, Photoshop, SoundForge, Cubase etc).


You need to have Maya 8.5 (32-bit version) installed in order to convert any meshes.

I took a look at the mesh files and saw they are built on Havok 5.5. I deleted the header, renamed them to .hkx or .hke and tried to import them into different programs and Maya, but I failed.

Maybe you know how to import them (although that would probably be illegal. But then, unpacking textures and sounds is illegal too.)


sorry for my bad english


Okay, added the option to specify an output path for extracting. I think the script is done now.

So that guy basically accepted that their pipeline is total shit?

Well not completely. It's streamlined for the shortest possible loading times. But other than that, yeah.

I took a look at the mesh files and saw they are built on Havok 5.5. I deleted the header, renamed them to .hkx or .hke and tried to import them into different programs and Maya, but I failed.

Sorry, no idea. And it seems to be rather time consuming to figure it out.

If you need Havok, you can download it for free from their website.

The issue is that the few sample havok files I've looked at seem to be completely different from .meshdata for example. The bc2 files have just a few strings at the top and the payload right after. Havok has strings just about everywhere.

Edited by Frankelstner


Hey Frankelstner,

in some .dbx files the path "Terrains\MP_005\MaskTextures\..." is referenced, but I don't get that path or its content when extracting the .fbrb archives. I tried extracting every .fbrb with every compression level (0-9). Maybe there is more information your converter doesn't handle?

Anyways good work :-)


Hey Frankelstner,

in some .dbx files the path "Terrains\MP_005\MaskTextures\..." is referenced, but I don't get that path or its content when extracting the .fbrb archives. I tried extracting every .fbrb with every compression level (0-9). Maybe there is more information your converter doesn't handle?

Nope. I made a small script to just decompress the metadata section (containing the filenames, sizes and offsets) and make it all lowercase, then look for the string "masktextures" and complain as soon as it is found anywhere. Not a single hit in all fbrb archives. I can even go as far as "masktex" and there's still nothing found.

Not to mention that the path probably goes like Terrains\MP_005\MaskTextures\DecalMask-grass_p+0+0.png and there's no way in hell that DICE would ship actual ordinary png files with their game.

A quick note about the compression: it's not for extraction, but only for packing back into fbrb archives. That's the reason for the line "#packing parameters:" right above the compression level.

