Official BF Editor Forums

Everything posted by Frankelstner

  1. Frankelstner

    Cascat-Based File Tweaker

    Update: A patch released around November or December adds RSA signatures to the archives. With modified content it is no longer possible to join a game on a PB-protected server. As a side effect, it is not possible to play with modified files on unprotected or unranked servers either; even the singleplayer gives the same error message when modified files are detected. Because of that, there is no point in adjusting the tool to work with the new archives (that would be the easy part): it is impossible to fake the signatures, and thus the game cannot be modded anymore unless you modify the exe.

    The tweaker is up again. Make sure to read the instructions at the bottom of the post too. http://www.gamefront.com/files/22382214/bf3+tweaker.exe

    I have not written most of my statement in Mordor. It is being edited by mods all over the place. Those are not my words. But the words here are. Due to the heavy focus on the client in bf3 compared to bf2, modifying weapon damage (among other things) actually increases the damage dealt in multiplayer games. Compare this to bf2, where it does exactly nothing at all; make sure you understand this: modifying damage on your client is NOT meant to have any effect in online play in any game with decent netcode. bf3 (like bc2) also does not check for modified content, making it possible to join ranked games with PB enabled.

    I have decided to reupload the tool, as it is simply a mod tool after all (a very limited one, to be honest). As far as I understand, there are even people out there using injectors to play around with their graphics, which is exactly how hacks work. I wouldn't touch one when playing online. The worst thing I can see happening when using modified files is a kick the moment you join a game (that is how it was handled in bf2). On the other hand, as far as I know, injectors might get you banned instantly, though I am fairly ignorant when it comes to hacks. I'd like to add that I modified some files for bc2 three months ago and played a bit online; the account is still not banned, so I would not trust others when they speak of bans for modified content.

    The intent of this tool was to allow people to play around with settings in singleplayer and enjoy the engine (also so the folks at symthic can do controlled tests, as well as filmmakers and machinima). I am not going to deprive the community of this option. Anything else is not intended and I am not willing to take blame for it. Clientside netcode is simply not acceptable when bf2 did everything right.

    Also, the option to do these mods has been there all along, although without a GUI. Here's a cascat of mine from 2011 (I think m16 ammo count or something like that; can't be bothered to check, and it's not even worth trying, as the Update folder most likely overwrites the setting by now anyway): http://www.gamefront.com/files/21023744/cascat_rar It has been on denkirson all along, though I never even thought of testing it online; it was only after I realized that bc2 does not check for modified content either that I even dreamed of testing mods online in bf3.

    This program is based heavily on my Python script here: http://www.bfeditor.org/forums/index.php?showtopic=15531&st=0 The only difference is that the script discards the offset of the bytes for each entry instantly (unless you uncomment the line with the @), whereas this tool keeps track of the absolute offset of every field, allowing users to change things directly. I suspect the game will add a check for valid content very soon™, which is something I welcome.
    Some servers with attentive admins already check for the existence of modded cas archives and may issue a kick. That being said, of course you are not meant to play with modified content on ranked servers.

    Requires .NET 4.5.

    Instructions:
    1) Either create a new cat or open the existing cas.cat. Note that the cat in the Update folder overwrites most of the original entries, so it is highly recommended to look there first. Also note that DLC uses sbtoc archives and is affected only peripherally (it might not be affected at all).
    2) Modify any value written in bold.
    3) Save and activate the cat.

    The tool will not write into the original assets (the cas archives) but instead creates new cas archives in the range 50-99. It makes a copy of the original file to be tweaked in the new cas archive and applies the changes there. The game uses a cat file containing the file hash, offset, size and archive number for each entry; I just tell the cat that the asset to look for (via hash) is not in archives 1-10 but in archive 50, so the cat file has to be overwritten to make any changes. However, the tool will make a backup of the cat. The information the cat contains is purely redundant anyway, as the cas archives themselves have all the information needed to create a cat file from scratch (the tool is capable of that too, if you delete all cats). A sketch of the cat entry layout follows at the end of this post.

    To unmod the game, just select the appropriate "Restore Cat". You can add a new cat in case you don't want to lose all changes when reverting to the unmodded game. Also, if you directly change cas.cat, change a few things so a new cas archive is created, and then restore the cat, that cas archive has just become useless. If you keep doing that you might end up with lots of cas archives that are not used anymore, so you will probably want to delete them manually. I recommend making a new cat instead of tweaking cas.cat: you alter the new cat when making tweaks, and when you press "Activate" the new cat is copied to cas.cat.

    DLC expansions use sb and toc files instead of cas and cat, so changes are not guaranteed to work there. And yes, the game does not check for modified content. Just like in bc2 you can do stuff like 4x zoom iron sights, though it was probably more useful in bc2, with the scope taking up a slot on its own. On the other hand, it might come in handy for snipers.
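    For reference, here is a minimal sketch (mine, not part of the tool) of the cat entry layout described above, matching what my dumper scripts further down this page read: a 16-byte header, then one 20-byte sha1 plus three little-endian uint32 values (offset, size, cas archive number) per entry. It assumes the cat has already been un-XORed; the retail cas.cat is obfuscated, so reuse the unXOR helper from the scripts below for that.

#Minimal sketch: list the entries of an (un-XORed) cas.cat. Python 2.7.
from struct import unpack
from binascii import hexlify

def listCatEntries(catPath):
    f=open(catPath,"rb")
    if f.read(16)!="NyanNyanNyanNyan": #cat header magic
        raise Exception("Not a plain cat file; un-XOR it first.")
    f.seek(0,2); catSize=f.tell(); f.seek(16) #get the file size, then go back to the first entry
    while f.tell()<catSize:
        sha1=hexlify(f.read(20)) #identifies the asset
        offset,size,casNum=unpack("<III",f.read(12)) #where the payload sits and which cas_XX.cas holds it
        print sha1,offset,size,casNum
    f.close()

##listCatEntries(r"C:\Program Files (x86)\Origin Games\Battlefield 3\Data\cas.cat") #path assumes a default install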
  2. Frankelstner

    Bf4 Sbtoc Dumper

    Grab the latest version of the script here: https://www.dropbox.com/s/rhu9gjxs9087vn7/bf4dumper.zip

    Description of the LZ77 compression: http://pastebin.com/u2kntxSV
    Description of the changes to the sbtoc: http://pastebin.com/TftZEU9q
    Documentation of what I did to figure out the compression, in case you are interested: http://pastebin.com/rGpBFwAV
    Documentation of what I did to figure out the sbtoc changes: http://pastebin.com/0bZebD8S

    Update 8.1.2014: Added support for the noncas archives (both unpatched and patched). I've rewritten most of the scripts along the way.
    Update 17.1.2014: Fixed handling of noncas delta payload instructions: 1) Deltas without an implicit base-reading section at the end are now handled correctly. 2) Fixed the sanity check for instruction type 3.
    Update 1.2.2014: Noncas deltas ending with instruction type 4 are now handled correctly.
  3. Frankelstner

    File Dumper For Sbtoc Archives

    Update: I've made extensive changes. There are now three scripts instead of one, which requires you to name the scripts exactly the way they are named here and to keep them in the same folder. This version is meant to run over all unpatched or patched toc files at once, spitting out everything it can find. The dumper always relies on sbtoc archives but extracts cascat too: inside the toc there's a flag indicating whether the assets are stored in cascat or in sb; the script reads that flag and acts accordingly. I've invented some file extensions for res files depending on resType and added some metadata to the filename when I wasn't sure what to do with it.

    Usage: Right click on dumper.py -> Edit with IDLE, adjust the paths at the top, then hit F5 to start the script. The script is done when there are no asterisks in the title. The script doesn't overwrite existing files, so it's preferable to dump the patched files first, then dump the unpatched files into the same folder. By default the script already has the patched folder selected, so once you've run it with that path, just put ## in front of the first tocRoot line, remove them from the second one and run the script once more. Python 2.7.

    For those DLC sbtoc archives. The script basically does three things: 1) undo the XOR on the toc (table of contents) file, 2) extract the bundles from the superbundle (sb) file, 3) extract ebx files from the individual bundle files. Drag and drop one or several toc files or folders containing toc files onto the script file. The files will be extracted into the same folder as the script. Still requires another run with my file converter to make sense of it: http://www.bfeditor.org/forums/index.php?showtopic=15531

Bundle.py:

import sys
import os
from struct import unpack,pack
from binascii import hexlify,unhexlify
import zlib
from cStringIO import StringIO
import sbtoc

def readNullTerminatedString(f):
    result=""
    while 1:
        char=f.read(1)
        if char=="\x00": return result
        result+=char

class Bundle(): #noncas
    def __init__(self, f):
        metaSize=unpack(">I",f.read(4))[0] #size of the meta section/offset of the payload section
        metaStart=f.tell()
        metaEnd=metaStart+metaSize
        self.header=Header(unpack(">8I",f.read(32)),metaStart)
        if self.header.magic!=0x970d1c13: raise Exception("Wrong noncas bundle header magic. The script cannot handle patched sbtoc")
        self.sha1List=[f.read(20) for i in xrange(self.header.numEntry)] #one sha1 for each ebx+res+chunk
        self.ebxEntries=[BundleEntry(unpack(">3I",f.read(12))) for i in xrange(self.header.numEbx)]
        self.resEntries=[BundleEntry(unpack(">3I",f.read(12))) for i in xrange(self.header.numRes)]
        #ebx are done, but res have extra content
        for entry in self.resEntries:
            entry.resType=unpack(">I",f.read(4))[0] #e.g. IT for ITexture
        for entry in self.resEntries:
            entry.resMeta=f.read(16) #often 16 nulls (always null for IT)
        self.chunkEntries=[Chunk(f) for i in xrange(self.header.numChunks)]

        #chunkmeta section, uses sbtoc structure, defines h32 and meta. If meta != nullbyte, then the corresponding chunk should have range entries.
        #Then again, noncas is crazy so this is only true for cas. There is one chunkMeta element (consisting of h32 and meta) for every chunk.
        #h32 is the FNV-1 hash applied to a string. For some audio files for example, the files are accessed via ebx files which of course have a name.
        #The hash of this name in lowercase is the h32 found in the chunkMeta. The same hash is also found in the ebx file itself at the keyword NameHash.
        #For ITextures, the h32 is found in the corresponding res file. The res file also contains a name and once again the hash of this name is the h32.
        #meta for textures usually contains firstMip 0/1/2.
        if self.header.numChunks>0: self.chunkMeta=sbtoc.Subelement(f)
        for i in xrange(len(self.chunkEntries)):
            self.chunkEntries[i].meta=self.chunkMeta.content[i].elems["meta"].content
            self.chunkEntries[i].h32=self.chunkMeta.content[i].elems["h32"].content

        for entry in self.ebxEntries + self.resEntries: #ebx and res have a path and not just a guid
            f.seek(self.header.offsetString+entry.offsetString)
            entry.name=readNullTerminatedString(f)
        f.seek(metaEnd)

        #PAYLOAD. Just grab all the payload offsets and sizes and add them to the entries without actually reading the payload. Also attach sha1 to entry.
        sha1Counter=0
        for entry in self.ebxEntries+self.resEntries+self.chunkEntries:
            while f.tell()%16!=0: f.seek(1,1)
            entry.offset=f.tell()
            f.seek(entry.size,1)
            entry.sha1=self.sha1List[sha1Counter]
            sha1Counter+=1

class Header: #8 uint32
    def __init__(self,values,metaStart):
        self.magic           =values[0] #970d1c13 for unpatched files
        self.numEntry        =values[1] #total entries = numEbx + numRes + numChunks
        self.numEbx          =values[2]
        self.numRes          =values[3]
        self.numChunks       =values[4]
        self.offsetString    =values[5]+metaStart #offsets start at the beginning of the header, thus +metaStart
        self.offsetChunkMeta =values[6]+metaStart #redundant
        self.sizeChunkMeta   =values[7] #redundant

class BundleEntry: #3 uint32 + 1 string
    def __init__(self,values):
        self.offsetString=values[0] #in the name strings section
        self.size=values[1] #total size of the payload (for zlib including the two ints before the zlib)
        self.originalSize=values[2] #uncompressed size (for zlib after decompression and ignoring the two ints)
        #note: for zlib the uncompressed size is saved in both the file and the archive
        #      for zlib the compressed size in the file is the (size in the archive)-8

class Chunk:
    def __init__(self, f):
        self.id=f.read(16)
        self.rangeStart=unpack(">I",f.read(4))[0]
        self.rangeEnd=unpack(">I",f.read(4))[0] #total size of the payload is rangeEnd-rangeStart
        self.logicalOffset=unpack(">I",f.read(4))[0]
        self.size=self.rangeEnd-self.rangeStart
        #rangeStart, rangeEnd and logicalOffset are for textures. Non-texture chunks have rangeStart=logicalOffset=0 and rangeEnd being the size of the payload.
        #For cas bundles: rangeEnd is always exactly the size of compressed payload (which is specified too).
        #Furthermore for cas, rangeStart defines the point at which the mipmap number specified by chunkMeta::meta is reached in the compressed payload.
        #logicalOffset then is the uncompressed equivalent of rangeStart.
        #However for noncas, rangeStart and rangeEnd work in absolutely crazy ways. Their individual values easily exceed the actual size of the file.
        #Adding the same number to both of them does NOT cause the game to crash when loading, so really only the difference matters.
        #Additionally the sha1 for these texture chunks does not match the payload. The non-texture chunks that come AFTER such a chunk have the correct sha1 again.
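    The h32 name hash mentioned in the chunkMeta comments above can be reproduced in a few lines. The comments call it an FNV-1 hash; the sketch below assumes it uses the same constants as the hasher() in my ebx converter further down this page (offset basis 5381, multiply by 33, xor each byte), and the asset name in the example is made up for illustration.

#Sketch of the h32 name hash: hash of the lowercase name, kept within 32 bits.
def h32(name):
    h=5381
    for byte in name.lower(): #the names are hashed in lowercase
        h=(h*33)^ord(byte)
    return h&0xffffffff #mask because Python promotes ints instead of overflowing

print hex(h32("audio/music/some_made_up_asset")) #made-up name, for illustration only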
sbtoc.py:

import sys
import os
from struct import unpack, pack
from binascii import hexlify, unhexlify
import zlib
from cStringIO import StringIO
from collections import OrderedDict
import Bundle

def read128(File):
    """Reads the next few bytes in a file as LEB128/7bit encoding and returns an integer"""
    result,i = 0,0
    while 1:
        byte=ord(File.read(1))
        result|=(byte&127)<<i
        if byte>>7==0: return result
        i+=7

def write128(integer):
    """Writes an integer as LEB128 and returns a byte string; roughly the inverse of read, but no files involved here"""
    bytestring=""
    while integer:
        byte=integer&127
        integer>>=7
        if integer: byte|=128
        bytestring+=chr(byte)
    return bytestring

def readNullTerminatedString(f):
    result=""
    while 1:
        char=f.read(1)
        if char=="\x00": return result
        result+=char

def unXOR(f):
    magic=f.read(4)
    if magic not in ("\x00\xD1\xCE\x00","\x00\xD1\xCE\x01"):
        f.seek(0) #the file is not encrypted
        return f
    f.seek(296)
    magic=[ord(f.read(1)) for i in xrange(260)] #bytes 257 258 259 are not used
    data=f.read()
    f.close()
    data2=[None]*len(data) #initialize the buffer
    for i in xrange(len(data)):
        data2[i]=chr(magic[i%257]^ord(data[i])^0x7b)
    return StringIO("".join(data2))

class EntryEnd(Exception):
    def __init__(self, value): self.value = value
    def __str__(self): return repr(self.value)

class Entry:
    #Entries always start with a 82 byte and always end with a 00 byte.
    #They have their own size defined right after that and are just one subelement after another.
    #This size contains all bytes after the size until (and including) the 00 byte at the end.
    #Use the size as an indicator when to stop reading and raise errors when nullbytes are missing.
    def __init__(self,toc): #read the data from file
        entryStart=toc.read(1)
        if entryStart=="\x82":
            self.elems=OrderedDict()
            entrySize=read128(toc)
            endPos=toc.tell()+entrySize
            while toc.tell()<endPos-1: #-1 because of final nullbyte
                content=Subelement(toc)
                self.elems[content.name]=content
            if toc.read(1)!="\x00": raise Exception("Entry does not end with \x00 byte. Position: "+str(toc.tell()))
        elif entryStart=="\x87": #rare case: the entry is a raw length-prefixed string
            self.elems=toc.read(read128(toc)-1)
            toc.seek(1,1) #trailing null
        else:
            raise Exception("Entry does not start with \x82 or (rare) \x87 byte. Position: "+str(toc.tell()))

    def write(self, f): #write the data into file
        f.write("\x82")
        #Write everything into a buffer to get the size.
        buff=StringIO()
        #Write the subelements. Write in a particular order to compare output with original file.
        for key in self.elems:
            self.elems[key].write(buff)
        f.write(write128(len(buff.getvalue())+1)) #end byte
        f.write(buff.getvalue())
        f.write("\x00")
        buff.close()

    def showStructure(self,level=0):
        for key in self.elems:
            obj=self.elems[key]
            obj.showStructure(level+1)

class Subelement:
    #These are basically subelements of an entry.
    #It consists of type (1 byte), name (nullterminated string), data depending on type.
    #However one such subelement may be a list type, containing several entries on its own.
    #Lists end with a nullbyte on their own; they (like strings) have their size prefixed as 7bit int.
    def __init__(self,toc): #read the data from file
        self.typ=toc.read(1)
        self.name=readNullTerminatedString(toc)
        if   self.typ=="\x0f": self.content=toc.read(16)
        elif self.typ=="\x09": self.content=unpack("Q",toc.read(8))[0]
        elif self.typ=="\x08": self.content=unpack("I",toc.read(4))[0]
        elif self.typ=="\x06": self.content=True if toc.read(1)=="\x01" else False
        elif self.typ=="\x02": self.content=toc.read(read128(toc))
        elif self.typ=="\x13": self.content=toc.read(read128(toc)) #the same as above with different content?
        elif self.typ=="\x10": self.content=toc.read(20) #sha1
        elif self.typ=="\x07": #string, length prefixed as 7bit int.
            self.content=toc.read(read128(toc)-1)
            toc.seek(1,1) #trailing null
        elif self.typ=="\x01": #lists
            self.listLength=read128(toc)
            entries=[]
            endPos=toc.tell()+self.listLength
            while toc.tell()<endPos-1: #lists end on nullbyte
                entries.append(Entry(toc))
            self.content=entries
            if toc.read(1)!="\x00": raise Exception("List does not end with \x00 byte. Position: "+str(toc.tell()))
        else:
            raise Exception("Unknown type: "+hexlify(self.typ)+" "+str(toc.tell()))

    def write(self,f): #write the data into file
        f.write(self.typ)
        f.write(self.name+"\x00")
        if   self.typ=="\x0f": f.write(self.content)
        elif self.typ=="\x10": f.write(self.content) #sha1
        elif self.typ=="\x09": f.write(pack("Q",self.content))
        elif self.typ=="\x08": f.write(pack("I",self.content))
        elif self.typ=="\x06": f.write("\x01" if self.content==True else "\x00")
        elif self.typ=="\x02": f.write(write128(len(self.content))+self.content)
        elif self.typ=="\x13": f.write(write128(len(self.content))+self.content) #the same as above with different content?
        elif self.typ=="\x07": #string
            f.write(write128(len(self.content)+1)+self.content+"\x00")
        elif self.typ=="\x01": #lists
            #Write everything into a buffer to get the size.
            buff=StringIO()
            for entry in self.content:
                entry.write(buff)
            f.write(write128(len(buff.getvalue())+1)) #final nullbyte
            f.write(buff.getvalue())
            f.write("\x00")
            buff.close()

class Superbundle: #more about toc really
    def __init__(self,pathname):
        #make sure there is toc and sb
        self.fullpath,ext=os.path.splitext(pathname) #everything except extension
        self.filename=os.path.basename(self.fullpath) #the name without extension and without full path
        tocPath=pathname #toc or bundle
        tocPath,sbPath = self.fullpath+".toc",self.fullpath+".sb"
        if not (os.path.exists(tocPath) and os.path.exists(sbPath)):
            raise IOError("Could not find the sbtoc files.")
        try:
            toc=unXOR(open(tocPath,"rb"))
        except:
            raise Exception(pathname)
        self.entry=Entry(toc)
        toc.close()

dumper.py:

import sbtoc
import Bundle
import os
from binascii import hexlify,unhexlify
from struct import pack,unpack
from cStringIO import StringIO
import sys
import zlib

##Adjust paths here. The script doesn't overwrite existing files so set tocRoot to the patched files first,
##then run the script again with the unpatched ones to get all files at their most recent version.
catName=r"C:\Program Files (x86)\Origin Games\Battlefield 3\Data\cas.cat" #use "" or r"" if you have no cat; doing so will make the script ignore patchedCatName patchedCatName=r"C:\Program Files (x86)\Origin Games\Battlefield 3\Update\Patch\Data\cas.cat" #used only when tocRoot contains "Update" tocRoot=r"C:\Program Files (x86)\Origin Games\Battlefield 3\Update" ##tocRoot=r"C:\Program Files (x86)\Origin Games\Battlefield 3\Data\Win32" outputfolder="D:/hexing/bf3 dump" #mohw stuff: ##catName=r"C:\Program Files (x86)\Origin Games\Medal of Honor Warfighter\Data\cas.cat" ##patchedCatName=r"C:\Program Files (x86)\Origin Games\Medal of Honor Warfighter\Update\Patch\Data\cas.cat" ## ##tocRoot=r"C:\Program Files (x86)\Origin Games\Medal of Honor Warfighter\Data" ## ##outputfolder="D:/hexing/mohw dump123/" ##################################### ##################################### #zlib (one more try): #Files are split into pieces which are then zlibbed individually (prefixed with compressed and uncompressed size) #and finally glued together again. Non-zlib files on the other hand have no prefix about size, they are just the payload. #The archive or file does not declare zlib/nonzlib, making things really complicated. I think the engine actually uses #ebx and res to figure out if a chunk is zlib or not. However, res itself is zlibbed already; in mohw ebx is zlibbed too. #In particular mohw crashes when delivering a non-zlibbed ebx file. #Prefixing the payload with two identical ints containing the payload size makes mohw work again so the game really deduces #compressedSize==uncompressedSize => uncompressed payload. #some thoughts without evidence: #It's possible that ebx/res zlib is slightly different from chunk zlib. #Maybe for ebx/res, compressedSize==uncompressedSize always means an uncompressed piece. #Whereas for chunks (textures in particular), there are mip sizes to consider #e.g. first piece of a mip is always compressed (even with compressedSize==uncompressedSize) but subsequent pieces of a mip may be uncompressed. def zlibb(f, size): #if the entire file is < 10 bytes, it must be non zlib if size<10: return f.read(size) #interpret the first 10 bytes as fb2 zlib stuff uncompressedSize,compressedSize=unpack(">ii",f.read(8)) magic=f.read(2) f.seek(-10,1) #sanity check: compressedSize may be just random non-zlib payload. if compressedSize>size-8: return f.read(size) if compressedSize<=0 or uncompressedSize<=0: return f.read(size) #another sanity check with a very specific condition: #when uncompressedSize is different from compressedSize, then having a non-zlib piece makes no sense. #alternatively one could just let the zlib module try to handle this. #It's tempting to compare uncompressedSize<compressedSize, but there are indeed cases when #the uncompressed payload is smaller than the compressed one. 
if uncompressedSize!=compressedSize and magic!="\x78\xda": return f.read(size) outStream=StringIO() pos0=f.tell() while f.tell()<pos0+size-8: uncompressedSize,compressedSize=unpack(">ii",f.read(8)) #big endian #sanity checks: #The sizes may be just random non-zlib payload; as soon as that happens, #abandon the whole loop and just give back the full payload without decompression if compressedSize<=0 or uncompressedSize<=0: f.seek(pos0) return f.read(size) #likewise, make sure that compressed size does not exceed the size of the file if f.tell()+compressedSize>pos0+size: f.seek(pos0) return f.read(size) #try to decompress if compressedSize!=uncompressedSize: try: outStream.write(zlib.decompress(f.read(compressedSize))) except: outStream.write(f.read(compressedSize)) else: #if compressed==uncompressed, one might be tempted to think that it is always non-zlib. It's not. magic=f.read(2) f.seek(-2,1) if magic=="\x78\xda": try: outStream.write(zlib.decompress(f.read(compressedSize))) except: outStream.write(f.read(compressedSize)) else: outStream.write(f.read(compressedSize)) data=outStream.getvalue() outStream.close() return data def zlibIdata(bytestring): return zlibb(StringIO(bytestring),len(bytestring)) def hex2(num): #take int, return 8byte string a=hex(num) if a[:2]=="0x": a=a[2:] if a[-1]=="L": a=a[:-1] while len(a)<8: a="0"+a return a class Stub(): pass class Cat: def __init__(self,catname): cat2=open(catname,"rb") cat=sbtoc.unXOR(cat2) self.casfolder=os.path.dirname(catname)+"\\" cat.seek(0,2) catsize=cat.tell() cat.seek(16) self.entries=dict() while cat.tell()<catsize: entry=Stub() sha1=cat.read(20) entry.offset, entry.size, entry.casnum = unpack("<III",cat.read(12)) self.entries[sha1]=entry cat.close() cat2.close() def grabPayload(self,entry): cas=open(self.casfolder+"cas_"+("0"+str(entry.casnum) if entry.casnum<10 else str(entry.casnum))+".cas","rb") cas.seek(entry.offset) payload=cas.read(entry.size) cas.close() return payload def grabPayloadZ(self,entry): cas=open(self.casfolder+"cas_"+("0"+str(entry.casnum) if entry.casnum<10 else str(entry.casnum))+".cas","rb") cas.seek(entry.offset) payload=zlibb(cas,entry.size) cas.close() return payload def open2(path,mode): #create folders if necessary and return the file handle #first of all, create one folder level manully because makedirs might fail pathParts=path.split("\\") manualPart="\\".join(pathParts[:2]) if not os.path.isdir(manualPart): os.makedirs(manualPart) #now handle the rest, including extra long path names folderPath=lp(os.path.dirname(path)) if not os.path.isdir(folderPath): os.makedirs(folderPath) return open(lp(path),mode) ## return StringIO() def lp(path): #long pathnames if path[:4]=='\\\\?\\' or path=="" or len(path)<=247: return path return unicode('\\\\?\\' + os.path.normpath(path)) resTypes={ 0x5C4954A6:".itexture", 0x2D47A5FF:".gfx", 0x22FE8AC8:"", 0x6BB6D7D2:".streamingstub", 0x1CA38E06:"", 0x15E1F32E:"", 0x4864737B:".hkdestruction", 0x91043F65:".hknondestruction", 0x51A3C853:".ant", 0xD070EED1:".animtrackdata", 0x319D8CD0:".ragdoll", 0x49B156D4:".mesh", 0x30B4A553:".occludermesh", 0x5BDFDEFE:".lightingsystem", 0x70C5CB3E:".enlighten", 0xE156AF73:".probeset", 0x7AEFC446:".staticenlighten", 0x59CEEB57:".shaderdatabase", 0x36F3F2C0:".shaderdb", 0x10F0E5A1:".shaderprogramdb", 0xC6DBEE07:".mohwspecific" } def dump(tocName,outpath): try: toc=sbtoc.Superbundle(tocName) except IOError: return sb=open(toc.fullpath+".sb","rb") chunkPathToc=os.path.join(outpath,"chunks")+"\\" # bundlePath=os.path.join(outpath,"bundles")+"\\" 
ebxPath=bundlePath+"ebx\\" dbxPath=bundlePath+"dbx\\" resPath=bundlePath+"res\\" chunkPath=bundlePath+"chunks\\" if "cas" in toc.entry.elems and toc.entry.elems["cas"].content==True: #deal with cas bundles => ebx, dbx, res, chunks. for tocEntry in toc.entry.elems["bundles"].content: #id offset size, size is redundant sb.seek(tocEntry.elems["offset"].content) bundle=sbtoc.Entry(sb) for listType in ["ebx","dbx","res","chunks"]: #make empty lists for every type to get rid of key errors(=> less indendation) if listType not in bundle.elems: bundle.elems[listType]=Stub() bundle.elems[listType].content=[] for entry in bundle.elems["ebx"].content: #name sha1 size originalSize casHandlePayload(entry,ebxPath+entry.elems["name"].content+".ebx") for entry in bundle.elems["dbx"].content: #name sha1 size originalSize if "idata" in entry.elems: #dbx appear only idata if at all, they are probably deprecated and were not meant to be shipped at all. out=open2(dbxPath+entry.elems["name"].content+".dbx","wb") if entry.elems["size"].content==entry.elems["originalSize"].content: out.write(entry.elems["idata"].content) else: out.write(zlibIdata(entry.elems["idata"].content)) out.close() for entry in bundle.elems["res"].content: #name sha1 size originalSize resType resMeta if entry.elems["resType"].content not in resTypes: #unknown res file type casHandlePayload(entry,resPath+entry.elems["name"].content+" "+hexlify(entry.elems["resMeta"].content)+".unknownres"+hex2(entry.elems["resType"].content)) elif entry.elems["resType"].content in (0x4864737B,0x91043F65,0x49B156D4,0xE156AF73,0x319D8CD0): #these 5 require resMeta. OccluderMesh might too, but it's always 16*ff casHandlePayload(entry,resPath+entry.elems["name"].content+" "+hexlify(entry.elems["resMeta"].content)+resTypes[entry.elems["resType"].content]) else: casHandlePayload(entry,resPath+entry.elems["name"].content+resTypes[entry.elems["resType"].content]) for entryNum in xrange(len(bundle.elems["chunks"].content)): #id sha1 size, chunkMeta::meta entry=bundle.elems["chunks"].content[entryNum] entryMeta=bundle.elems["chunkMeta"].content[entryNum] if entryMeta.elems["meta"].content=="\x00": firstMip="" else: firstMip=" firstMip"+str(unpack("B",entryMeta.elems["meta"].content[10])[0]) casHandlePayload(entry,chunkPath+hexlify(entry.elems["id"].content)+firstMip+".chunk") #deal with cas chunks defined in the toc. for entry in toc.entry.elems["chunks"].content: #id sha1 casHandlePayload(entry,chunkPathToc+hexlify(entry.elems["id"].content)+".chunk") else: #deal with noncas bundles for tocEntry in toc.entry.elems["bundles"].content: #id offset size, size is redundant if "base" in tocEntry.elems: continue #Patched noncas bundle. However, use the unpatched bundle because no file was patched at all. ## So I just skip the entire process and expect the user to extract all unpatched files on his own. sb.seek(tocEntry.elems["offset"].content) if "delta" in tocEntry.elems: #Patched noncas bundle. Here goes the hilarious part. Take the patched data and glue parts from the unpatched data in between. #When that is done (in memory of course) the result is a new valid bundle file that can be read like an unpatched one. 
deltaSize,DELTAAAA,nulls=unpack(">IIQ",sb.read(16)) deltas=[] for deltaEntry in xrange(deltaSize/16): delta=Stub() delta.size,delta.fromUnpatched,delta.offset=unpack(">IIQ",sb.read(16)) deltas.append(delta) bundleStream=StringIO() #here be the new bundle data patchedOffset=sb.tell() ##unpatched: C:\Program Files (x86)\Origin Games\Battlefield 3\Update\Xpack2\Data\Win32\Levels\XP2_Palace\XP2_Palace.sb/toc ##patched: C:\Program Files (x86)\Origin Games\Battlefield 3\Update\Patch\Data\Win32\Levels\XP2_Palace\XP2_Palace.sb/toc #So at this point I am at the patched file and need to get the unpatched file path. Just how the heck... #The patched toc itself contains some paths, but they all start at win32. #Then again, the files are nicely named. I.e. XP2 translates to Xpack2 etc. xpNum=os.path.basename(toc.fullpath)[2] #XP2_Palace => 2 unpatchedPath=toc.fullpath.lower().replace("patch","xpack"+str(xpNum))+".sb" unpatchedSb=open(unpatchedPath,"rb") for delta in deltas: if not delta.fromUnpatched: bundleStream.write(sb.read(delta.size)) else: unpatchedSb.seek(delta.offset) bundleStream.write(unpatchedSb.read(delta.size)) unpatchedSb.close() bundleStream.seek(0) bundle=Bundle.Bundle(bundleStream) sb2=bundleStream else: sb.seek(tocEntry.elems["offset"].content) bundle=Bundle.Bundle(sb) sb2=sb for entry in bundle.ebxEntries: noncasHandlePayload(sb2,entry,ebxPath+entry.name+".ebx") for entry in bundle.resEntries: if entry.resType not in resTypes: #unknown res file type noncasHandlePayload(sb2,entry,resPath+entry.name+" "+hexlify(entry.resMeta)+".unknownres"+hex2(entry.resType)) elif entry.resType in (0x4864737B,0x91043F65,0x49B156D4,0xE156AF73,0x319D8CD0): noncasHandlePayload(sb2,entry,resPath+entry.name+" "+hexlify(entry.resMeta)+resTypes[entry.resType]) else: noncasHandlePayload(sb2,entry,resPath+entry.name+resTypes[entry.resType]) for entry in bundle.chunkEntries: if entry.meta=="\x00": firstMip="" else: firstMip=" firstMip"+str(unpack("B",entry.meta[10])[0]) noncasHandlePayload(sb2,entry,chunkPath+hexlify(entry.id)+firstMip+".chunk") #deal with noncas chunks defined in the toc for entry in toc.entry.elems["chunks"].content: #id offset size entry.offset,entry.size = entry.elems["offset"].content,entry.elems["size"].content #to make the function work noncasHandlePayload(sb,entry,chunkPathToc+hexlify(entry.elems["id"].content)+".chunk") sb.close() def noncasHandlePayload(sb,entry,outPath): if os.path.exists(lp(outPath)): return print outPath sb.seek(entry.offset) out=open2(outPath,"wb") if "originalSize" in vars(entry): if entry.size==entry.originalSize: out.write(sb.read(entry.size)) else: out.write(zlibb(sb,entry.size)) else: out.write(zlibb(sb,entry.size)) out.close() if catName!="": cat=Cat(catName) if "update" in tocRoot.lower(): cat2=Cat(patchedCatName) def casHandlePayload(entry,outPath): #this version searches the patched cat first if os.path.exists(lp(outPath)): return #don't overwrite existing files to speed up things print outPath if "originalSize" in entry.elems: compressed=False if entry.elems["size"].content==entry.elems["originalSize"].content else True #I cannot tell for certain if this is correct. I do not have any negative results though. 
else: compressed=True if "idata" in entry.elems: out=open2(outPath,"wb") if compressed: out.write(zlibIdata(entry.elems["idata"].content)) else: out.write(entry.elems["idata"].content) else: try: catEntry=cat2.entries[entry.elems["sha1"].content] activeCat=cat2 except: catEntry=cat.entries[entry.elems["sha1"].content] activeCat=cat out=open2(outPath,"wb") #don't want to create an empty file in case an error pops up if compressed: out.write(activeCat.grabPayloadZ(catEntry)) else: out.write(activeCat.grabPayload(catEntry)) out.close() else: def casHandlePayload(entry,outPath): #this version uses the unpatched cat only if os.path.exists(lp(outPath)): return #don't overwrite existing files to speed up things print outPath if "originalSize" in entry.elems: compressed=False if entry.elems["size"].content==entry.elems["originalSize"].content else True #I cannot tell for certain if this is correct. I do not have any negative results though. else: compressed=True if "idata" in entry.elems: out=open2(outPath,"wb") if compressed: out.write(zlibIdata(entry.elems["idata"].content)) else: out.write(entry.elems["idata"].content) else: catEntry=cat.entries[entry.elems["sha1"].content] out=open2(outPath,"wb") #don't want to create an empty file in case an error pops up if compressed: out.write(cat.grabPayloadZ(catEntry)) else: out.write(cat.grabPayload(catEntry)) out.close() def main(): for dir0, dirs, ff in os.walk(tocRoot): for fname in ff: if fname[-4:]==".toc": print fname fname=dir0+"\\"+fname dump(fname,outputfolder) outputfolder=os.path.normpath(outputfolder) main()
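    The dumper above only ever reads this zlib framing; going the other way is not part of the script, but based on how zlibb() parses the payload, a packer would look roughly like the sketch below. The piece size is my assumption, and the equal-sizes convention for uncompressed pieces follows the mohw observation in the comments above.

#Hypothetical inverse of zlibb(): split a payload into pieces, prefix each piece with
#big-endian uncompressed/compressed sizes, and glue everything together again.
import zlib
from struct import pack

def zlibbWrite(payload,pieceSize=0x10000): #0x10000 is an assumption, not taken from the game files
    out=""
    for i in xrange(0,len(payload),pieceSize):
        piece=payload[i:i+pieceSize]
        compressed=zlib.compress(piece)
        if len(compressed)<len(piece): #store the compressed piece
            out+=pack(">ii",len(piece),len(compressed))+compressed
        else: #store as-is; identical sizes mark the piece as uncompressed
            out+=pack(">ii",len(piece),len(piece))+piece
    return out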
  4. Frankelstner

    Ebx File Converter

    Note: The guid-resolving (and thus much better) version is found here: http://www.bfeditor.org/forums/index.php?showtopic=15531&view=findpost&p=106219

    Here's an improved version. Please download http://www.gamefront.com/files/22080360/floattostring_rar and place it in the main Python folder for a better float representation. This version requires the user to provide an output directory (default is C:/bf3 files). This time around the file names will always be read correctly (instead of just 95% of the time) and small numbers will be displayed too instead of just being rounded to zero. Also, the general syntax has been slightly changed (it makes more sense this time around). Drag and drop the folder onto the script file.

#requires Python 2.7
import string
import sys
from binascii import hexlify
from struct import unpack
import os
from cStringIO import StringIO

#adjust output folder here; you must specify a folder
outputFolder="C:/bf3 files"

try: #try to print a number as 0.95
    from ctypes import *
    floatlib = cdll.LoadLibrary("floattostring")
    def formatfloat(num):
        bufType = c_char * 100
        buf = bufType()
        bufpointer = pointer(buf)
        floatlib.convertNum(c_double(num), bufpointer, 100)
        rawstring=(buf.raw)[:buf.raw.find("\x00")]
        if rawstring[:2]=="-.": return "-0."+rawstring[2:]
        elif rawstring[0]==".": return "0."+rawstring[1:]
        elif "e" not in rawstring and "." not in rawstring: return rawstring+".0"
        return rawstring
except: #the number will be printed as 0.949999988079
    def formatfloat(num):
        return str(num)

def hasher(keyword): #32bit FNV-1 hash with FNV_offset_basis = 5381 and FNV_prime = 33
    hash = 5381
    for byte in keyword:
        hash = (hash*33) ^ ord(byte)
    return hash & 0xffffffff # use & because Python promotes the num instead of intended overflow

class Header:
    def __init__(self,varList): ##all 4byte unsigned integers
        self.absStringOffset     = varList[0]  ## absolute offset for string section start
        self.lenStringToEOF      = varList[1]  ## length from string section start to EOF
        self.numGUID             = varList[2]  ## number of external GUIDs
        self.null                = varList[3]  ## 00000000
        self.numInstanceRepeater = varList[4]
        self.numComplex          = varList[5]  ## number of complex entries
        self.numField            = varList[6]  ## number of field entries
        self.lenName             = varList[7]  ## length of name section including padding
        self.lenString           = varList[8]  ## length of string section including padding
        self.numArrayRepeater    = varList[9]
        self.lenPayload          = varList[10] ## length of normal payload section; the start of the array payload section is absStringOffset+lenString+lenPayload

class FieldDescriptor:
    #field has 4byte hash, 2byte type, 2byte reference/pointer, 4byte offset, 4byte secondary offset
    #e.g. 4B1F9065 3500 0000 0C000000 1C000000
    #     hash     type ref  offset   offset2
    # => 'VoiceOverType', 0x0035, 0, 12, 28
    def __init__(self,varList):
        self.name            = keywordDict[varList[0]]
        self.type            = varList[1]
        self.ref             = varList[2] #the field may contain another complex
        self.offset          = varList[3] #offset in payload section; relative to the complex containing it
        self.secondaryOffset = varList[4]

class ComplexDescriptor:
    #complex has 4byte hash, 4byte field index, 1byte number of fields, 1byte alignment size, 2byte type, 2byte payload size, 2byte size2
    #e.g. 39E97F28 52000000 04  04    3500 5000 0000
    #     hash     fieldIndex num align type size size2
    # => 'EntityVoiceOverInfo', 82, 4, 4, 0x0035, 80, 0
    def __init__(self,varList):
        self.name            = keywordDict[varList[0]]
        self.fieldStartIndex = varList[1] #the index of the first field belonging to the complex
        self.numField        = varList[2] #the total number of fields belonging to the complex
        self.alignment       = varList[3]
        self.type            = varList[4]
        self.size            = varList[5] #total length of the complex in the payload section
        self.secondarySize   = varList[6] #seems deprecated

class InstanceRepeater:
    def __init__(self,varList):
        self.null         = varList[0] #seems to be always null
        self.repetitions  = varList[1] #number of instance repetitions
        self.complexIndex = varList[2] #index of complex used as the instance

class arrayRepeater:
    def __init__(self,varList):
        self.offset       = varList[0] #offset in array payload section
        self.repetitions  = varList[1] #number of array repetitions
        self.complexIndex = varList[2] #not necessary for extraction

def read(filename):
    global f1, f2, externalGUIDList, internalGUIDList, fields, complexes, header, trueFilename, arrayRepeaters, isPrimaryInstance, keywordDict
    #check magic
    try:
        f1=open(filename,"rb")
    except:
        return
    if f1.read(4)!="\xCE\xD1\xB2\x0F":
        f1.close()
        return
    print filename
    header=Header(unpack("11I",f1.read(44)))
    trueFilename="" #the cas extractor only guesses a filename which may be incorrect; but I do know how to find out the correct one

    #grab the file GUID and its primary instance. Make the hex numbers to string, e.g. 0x4b => "4b"
    fileGUID, primaryInstanceGUID = hexlify(f1.read(16)), hexlify(f1.read(16))
    #add all GUID pairs to a list. These are external GUIDs so the first GUID is the GUID
    #of another file and the second belongs to an instance inside that file (may or may not be primary)
    externalGUIDList=[(hexlify(f1.read(16)),hexlify(f1.read(16))) for i in range(header.numGUID)]

    #make list of names and make a dictionary hash vs name
    keywords=str.split(f1.read(header.lenName),"\x00")
##    while len(keywords[-1])==0: keywords.pop() #remove the last few empty entries which appeared due to null-padding; not necessary because keywordDict does not mind
    keywordDict=dict((hasher(keyword),keyword) for keyword in keywords)

    #read all fields and complexes into lists; replace hashes with names instantly
    fields=[FieldDescriptor(unpack("IHHII",f1.read(16))) for i in xrange(header.numField)]
    complexes=[ComplexDescriptor(unpack("IIBBHHH",f1.read(16))) for i in xrange(header.numComplex)]

    #read instanceRepeater and arrayRepeater, each entry consists of 3 unsigned ints
    instanceRepeaters=[InstanceRepeater(unpack("3I",f1.read(12))) for i in range(header.numInstanceRepeater)]
    while f1.tell()%16!=0: f1.seek(1,1) #padding
    arrayRepeaters=[arrayRepeater(unpack("3I",f1.read(12))) for i in range(header.numArrayRepeater)]

    #ignore string section and read directly only when necessary. The elements are accessed directly via offset instead of index.
    f1.seek(header.absStringOffset+header.lenString) #START OF PAYLOAD SECTION

    ##make a list of all internal instance GUID, ignore the actual payload; this way I can instantly replace a payload Guid index with a string
    internalGUIDList=[]
    for instanceRepeater in instanceRepeaters:
        for repetition in xrange(instanceRepeater.repetitions):
            internalGUIDList.append(hexlify(f1.read(16)))
            f1.seek(complexes[instanceRepeater.complexIndex].size,1)
    f1.seek(header.absStringOffset+header.lenString) # go back to start of payload section

    ##do the same as above, but 1) don't make a list and 2) read the payload
    f2=StringIO() #prepare stream to write the output into memory because filename is not known yet
    for instanceRepeater in instanceRepeaters:
        instance=complexes[instanceRepeater.complexIndex]
        for repetition in xrange(instanceRepeater.repetitions):
            tabLevel=1
            instanceGUID=hexlify(f1.read(16))
            startPos=f1.tell()
            if instanceGUID==primaryInstanceGUID:
                f2.write(instance.name+" "+instanceGUID+" #primary instance\r\n")
                isPrimaryInstance=True
            else:
                f2.write(instance.name+" "+instanceGUID+"\r\n")
                isPrimaryInstance=False
            readComplex(instance,tabLevel)
            f1.seek(startPos+instance.size)
    f1.close() # the source file is read and everything is in the f2 stream

    #create folder, file, etc.
    try:
        outFilename=os.path.join(outputFolder,trueFilename)+" "+fileGUID+".txt"
        if not os.path.isdir(os.path.dirname(outFilename)): os.makedirs(os.path.dirname(outFilename))
        f3=open(outFilename,"wb")
        f3.write(f2.getvalue())
        f3.close()
    except:
        print "Could not write file "+filename
        try:    f3.close()
        except: pass
    f2.close()

##field types sorted by the value of their ref:
##ref==0 ("7d40","0dc1","3dc1","4dc1","5dc1","adc0","bdc0","ddc0","edc0","fdc0","3500"):
##  7d40: 4bytes; string, the value is the offset in the string section
##  0dc1: 4bytes; uint32
##  3dc1: 4bytes; single float
##  4dc1: 8bytes; double float
##  5dc1: 16bytes; GUID, referring to chunk files?
##  adc0: 1byte; bool, padded to 4 if no other adc0,bdc0, the same applies to the other <4 bytes values
##  bdc0: 1byte; int8
##  ddc0: 2bytes; uint16
##  edc0: 2bytes; int16
##  fdc0: 4bytes; int32
##  3500: 4bytes; GUID index, read as uint32 and the first bit is the isExternal flag. Do >>31 and afterwards use it as the index for the right GUID table
##
##ref!=0 ("4100","2900","29d0"):
##  4100: 4bytes; arrayRepeater index
##  2900: 0bytes; complex entry
##  29d0: 0bytes; complex entry
##
##ref sometimes 0, sometimes non 0 ("0000","8900"):
##  0000: 0bytes when field within an enum or 8bytes (all nulls) when element of "$" (which indicates inheritance)
##  8900: 4bytes; enum. Find the enum corresponding to the payload value

def readComplex(complex,tabLevel): #recursive function to read everything
    #get the fields for the complex
    fieldList=fields[complex.fieldStartIndex : complex.fieldStartIndex+complex.numField]
    startPos=f1.tell()
    if tabLevel!=1:
        f2.write("::"+complex.name+"\r\n")
    for field in fieldList:
        readField(field,startPos,tabLevel)
    f1.seek(startPos+complex.size)

def readField(field,startPos,tabLevel):
    f1.seek(startPos+field.offset)
##    f2.write("@"+str(f1.tell()))
    if field.type not in (0x0029,0xd029,0x0041,0x0000): #handle the simple stuff
        f2.write(tabLevel*"\t"+field.name+" "+unpackSimpleField(field)+"\r\n")
    elif field.type!=0x0041: #non arrays
        f2.write(tabLevel*"\t"+field.name)
        readComplex(complexes[field.ref],tabLevel+1) #recursion
    else: #arrays
        arrayIndex=unpack("I",f1.read(4))[0]
        if arrayIndex==0: #in contrast to the 0035 type, this time index 0 is reserved for these cases
            f2.write(tabLevel*"\t"+field.name+" *nullArray*"+"\r\n")
            return
        arrayRepeater=arrayRepeaters[arrayIndex] #no arrayIndex-1 necessary
        f1.seek(arrayRepeater.offset+header.absStringOffset+header.lenString+header.lenPayload)
        if arrayRepeater.repetitions==0:
            f2.write(tabLevel*"\t"+field.name+" *nullArray*"+"\r\n")
        else:
            arrayComplex=complexes[field.ref]
            memberField=fields[arrayComplex.fieldStartIndex]
            f2.write(tabLevel*"\t"+field.name)
            f2.write("::"+arrayComplex.name+"\r\n")
            for arrayRepetition in xrange(arrayRepeater.repetitions):
                position=f1.tell()
                readField(memberField,position,tabLevel+1) #recursion

#make a dictionary for the number/bool types. Mainly to save me from bloating the function below too much.
#Single floats not included either because I want to display them properly.
numDict={0xc10d:("I",4),0xc14d:("d",8),0xc0ad:("?",1),0xc0fd:("i",4),0xc0bd:("b",1),0xc0ed:("h",2),0xc0dd:("H",2)}

def unpackSimpleField(field): #read everything except 0x0029, 0xd029, 0x0041, 0x0000
    #i.e. all assignments that do not contain another complex (0x0089 being the exception because it is so different)
    global trueFilename
    try: #if the entry is number/bool, extract it with the dictionary; else go to except
        (typ,length)=numDict[field.type]
        num=unpack(typ,f1.read(length))[0]
        return str(num)
    except:
        if field.type==0xc13d:
            return formatfloat(unpack("f",f1.read(4))[0])
        if field.type==0xc15d:
            return hexlify(f1.read(16)) #GUID, neither external nor internal
        elif field.type==0xc0dd:
            return hexlify(f1.read(2)) #not sure about this type
        elif field.type==0x0089:
            if field.ref==0:
                return "*nullEnum*"
            else:
                #The field points at another complex. The fields in this complex then are the choices.
                #Basically I go through the fields one level deeper. These fields do not behave like actual fields,
                #lots of nulls and offset is no offset at all.
                compareValue=unpack("I",f1.read(4))[0] #this value must match fakeField.offset
                fieldList=fields[complexes[field.ref].fieldStartIndex : complexes[field.ref].fieldStartIndex+complexes[field.ref].numField]
                for fakeField in fieldList:
                    if fakeField.offset==compareValue:
                        return fakeField.name
        elif field.type==0x407d: #the string section
            #The 4bytes payload are the offset, so we need to remember where we are, then jump and read
            #a null-terminated string and jump back
            originalPos=f1.tell()
            f1.seek(header.absStringOffset+unpack("I",f1.read(4))[0])
            string=""
            while 1:
                a=f1.read(1)
                if a=="\x00": break
                else: string+=a
            f1.seek(originalPos+4)
            if len(string)==0:
                return "*nullString*" #actually the string is ""
            if isPrimaryInstance and trueFilename=="" and field.name=="Name":
                trueFilename=string
            return string
        elif field.type==0x0035:
            #Write the GUID of another instance. There are two different cases.
            #If the instance is found in another file use externalGUIDList (2*16 bytes).
            #If the instance is found in this file use internalGUIDList (1*16 bytes).
            #The first bit is the isExternal flag. => bitshift by 31 and bitmask
            GUIDIndex=unpack("I",f1.read(4))[0]
            if GUIDIndex>>31:
                return "-".join(externalGUIDList[GUIDIndex&0x7fffffff])
            elif GUIDIndex==0:
                return "*nullGuid*" #this being handled differently by the engine is a bit annoying as internalGUIDList of course has an element at index 0
            else:
                return internalGUIDList[GUIDIndex-1] #as a result, minus 1 is necessary

def main():
    if not outputFolder: return
    for ff in sys.argv[1:]:
        if os.path.isfile(ff):
            read(ff)
        else:
            for dir0,dirs,files in os.walk(ff):
                for f in files:
                    read(dir0+"\\"+f)

try:
    main()
except Exception, e:
    raw_input(e)
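    As a worked example (with a made-up value) of the 0x0035 GUID index handling in unpackSimpleField above: the top bit is the isExternal flag, the remaining 31 bits are the table index, and index 0 is reserved for the null guid on the internal side.

#Made-up value, for illustration only.
GUIDIndex=0x80000003 #pretend this uint32 was just unpacked from the payload
if GUIDIndex>>31: #external: index into externalGUIDList
    print "external, index",GUIDIndex&0x7fffffff #-> external, index 3
elif GUIDIndex==0:
    print "*nullGuid*"
else: #internal: index 0 is reserved, hence the -1
    print "internal, index",GUIDIndex-1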
  5. Frankelstner

    Bf4 Audio Decoder

    Refer to the instructions here: http://www.bfeditor.org/forums/index.php?showtopic=15780 I still haven't found the time to dump everything, so I only used a few sample files. Let me know if the script fails somewhere.

    Additionally, I've shaved off a few silent bytes for the xas compression. There are compressed blocks of 76 bytes which become 128 samples. However, most uncompressed audio is not a multiple of 128 samples; as a result, the last compressed block contains some silence at the end. The ebx contains the info to cut off the silence though, which I have now taken into account.

    Update 27.12.2013: Added EASpeex support.
    Update 12.01.2014: Fixed EASpeex multichannel. Moved all decoding into the dlls to improve performance. The script really just handles the ebx files now while the dlls handle the decoding.

    Grab it here: https://www.dropbox.com/s/ox6clmozrzzvr5e/fb3decoder.zip
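    To illustrate the trimming with some quick arithmetic (the 76-byte/128-sample figures are from the post above; the sample count in the example is made up): audio is decoded in blocks of 128 samples, so the decoder has to cut off whatever the last block adds beyond the true sample count stored in the ebx.

#Silence trimming as described above: 76-byte xas blocks decode to 128 samples each.
def paddedSamples(trueSampleCount): #trueSampleCount comes from the ebx
    blocks=(trueSampleCount+127)//128 #compressed blocks needed to cover the audio
    return blocks*128-trueSampleCount #silent samples at the end of the last block

print paddedSamples(44100) #one second at 44.1 kHz -> 60 silent samples to cut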
  6. Usage: Download and install Python 2.7 first (should take just a minute): http://www.python.org/ftp/python/2.7.3/python-2.7.3.msi Then copy paste the code into a file and make it a .py file. The script works via drag and drop directly in the windows explorer, i.e. you drop the archive you want to extract onto the script file. Double clicking the script does not do anything as of now. I have updated the script to extract and pack fbrb files. Drag and drop a fbrb file on the script and a folder will be placed in the folder of the fbrb file being extracted (regardless of location of the script). Drag and drop a fbrb folder onto the script to go the opposite direction. Also supports drag and drop of several files at once (although dragging both a file and its respective folder on the script is kind of pointless). Additionally you can drag and drop a non fbrb folder on it and it asks you whether to pack or unpack all files within. Make your own backups! You can adjust the compressionlevel in the code. If there are issues while packing or extracting you can active the tempfile in the code as well so the payload is written on the hard drive instead of your memory while being managed. from struct import pack,unpack import gzip from cStringIO import StringIO import sys import os import tempfile # General usage of the script: Drag and drop one or several files and/or folders onto the script. It will # unpack fbrb archives and pack FbRB folders. You can also drag non-fbrb folders onto the script and specify # whether you want to unpack or pack all fbrb folders/files within. # There are some options in this script: activate tempfiles if the script fails at packing/unpacking, # specify the gzip compressionlevel when packing; specify a folder to unpack/pack files to. # Note: The script will only pack files with known extensions, e.g. xml files in a fbrb folder will be ignored (useful!) #packing parameters: compressionlevel=1 #takes values between 0 and 9, 0=unzipped, 1=weak compression, 9=strong compression; #the vanilla files are either 0 or 5-6. The vanilla files are probably on 5-6 #to fit on one disk and bf3 archives are not compressed at all. #While 0 can make huge files, 1 is a good compromise. Don't go higher than 6. ##unzippedfiles=("build_overlay-00.fbrb","overlay-00.fbrb","streaming_sounds-00.fbrb","streaming_vo_de-00.fbrb", ##"streaming_vo_en-00.fbrb","streaming_vo_es-00.fbrb","streaming_vo_fr-00.fbrb","streaming_vo_it-00.fbrb" ##"streaming_vo_pl-00.fbrb","streaming_vo_ru-00.fbrb","async\\ondemand_awards-00.fbrb","async\\ondemand_sounds-00.fbrb") packtmpfile=1 #make a temporary file on the hard drive in case of memory issues #my 4gb system could handle all files without, except for streaming_sounds-00.fbrb (580mb) #no significant change in performance when packing mp_common level-00 #unpacking parameters: unpacktmpfile=0 #temp file for unpacking, was not necessary on a 4gb system #extract mp_common level-00 while suppressing output (i.e. 
no files written) #14 seconds with tempfile, 7 seconds without
#with output the difference is around 20%

#adjust unpack folder / pack file, use the commented out line below as an example (in particular, slashes);
#this line will move files into the folder "C:\Program Files (x86)\Electronic Arts\files FbRB"
#no path given puts all extracted files in a folder at the same place as the fbrb file/folder
#unpackfolder="C:/Program Files (x86)/Electronic Arts/files"
unpackfolder=""
packfolder=""
###########################
BUFFSIZE=1000000 # buffer when writing the fbrb archive

def grabstring(offset): # add all chars until null terminated
    re=""
    while dump[offset]!="\x00":
        re+=dump[offset]
        offset+=1
    return re
def makeint(num):
    return pack(">I",num)
def readint(pos):
    return unpack(">I",dump[pos:pos+4])[0]

#these are in fact strings on the left, the weird part of python
dic=dict(swfmovie='SwfMovie',dx10pixelshader='Dx10PixelShader',havokphysicsdata='HavokPhysicsData',
         treemeshset='TreeMeshSet',terrainheightfield='TerrainHeightfield',itexture='ITexture',animtreeinfo='AnimTreeInfo',
         irradiancevolume='IrradianceVolume',visualterrain='VisualTerrain',skinnedmeshset='SkinnedMeshSet',
         dx10vertexshader='Dx10VertexShader',aimanimation='AimAnimation',occludermesh='OccluderMesh',
         dx9shaderdatabase='Dx9ShaderDatabase',wave='Wave',sootmesh='SootMesh',terrainmaterialmap='TerrainMaterialMap',
         rigidmeshset='RigidMeshSet',compositemeshset='CompositeMeshSet',watermesh='WaterMesh',visualwater='VisualWater',
         dx9vertexshader='Dx9VertexShader',dx9pixelshader='Dx9PixelShader',dx11shaderdatabase='Dx11ShaderDatabase',
         dx11pixelshader='Dx11PixelShader',grannymodel='GrannyModel',ragdollresource='RagdollResource',
         grannyanimation='GrannyAnimation',weathersystem='WeatherSystem',dx11vertexshader='Dx11VertexShader',terrain='Terrain',
         impulseresponse='ImpulseResponse',binkmemory='BinkMemory',deltaanimation='DeltaAnimation',
         dx10shaderdatabase='Dx10ShaderDatabase',meshdata='MeshData',xenonpixelshader='XenonPixelShader',
         xenonvertexshader='XenonVertexShader',xenontexture='XenonTexture',pathdatadefinition='PathDataDefinition',
         nonres='<non-resource>',dbx='<non-resource>',dbxdeleted='*deleted*',resdeleted='*deleted*',bin='<non-resource>',
         dbmanifest='<non-resource>')

def packer(sourcefolder, targetfile="", compressionlevel=compressionlevel, tmpfile=1):
    """takes absolute folder path with folder ending on " FbRB"; the target file path is absolute without .fbrb extension"""
    sourcefolder=lp(sourcefolder)
    if not os.path.isdir(sourcefolder) or sourcefolder[-5:]!=" FbRB":
        return
    print sourcefolder[4:] ###################
    toplevellength=len(sourcefolder)+1 #for the RELATIVE pathnames to put in the fbrb
    if not targetfile:
        targetfile=sourcefolder[:-5]+".fbrb"
    else:
        targetfile=lp(targetfile)+".fbrb"
    strings="" #the list of strings at the beginning of part1
    extdic=dict() #keep track of all extensions to omit string duplicates in part1
    entries="" #24 bytes each, 6 parts
    numofentries=0
    payloadoffset=0 #where the uncompressed payload starts, sum of all filelengths so far
    if tmpfile:
        s2=tempfile.TemporaryFile()
    else:
        s2=StringIO()
    if compressionlevel:
        zippy2=gzip.GzipFile(fileobj=s2,mode="wb",compresslevel=compressionlevel,filename="") #takes the payload when compressing
    #go through all files inside the folder
    for dir0, dirs, files in os.walk(sourcefolder):
        dir0+="\\"
        for f in files:
            #validate file and grab its extension
            rawfilename,extension = os.path.splitext(f)
            extension=extension[1:].lower()
            try:
                ext=dic[extension]
            except:
                continue
            numofentries+=1
            #restore filename strings to res, dbx, bin, dbmanifest; null terminated
            if extension=="dbxdeleted":
                filepath=dir0.replace("\\","/")[toplevellength:]+f[:-7]+"\x00"
            elif extension not in ("dbx","bin","dbmanifest"):
                filepath=dir0.replace("\\","/")[toplevellength:]+rawfilename+".res\x00"
            else:
                filepath=dir0.replace("\\","/")[toplevellength:]+f+"\x00"
            stringoffset=makeint(len(strings)) #part1/6
            filepath=str(filepath) #make string because of unicode stuff
            strings+=filepath
            filelength=os.path.getsize(dir0+f)
            if filelength==0:
                deleteflag="\x00\x00\x00\x00"
            else:
                deleteflag="\x00\x01\x00\x00" #part2/6
            #check if the extension has been used before, if so refer to the string already in use
            try:
                extpos=extdic[ext]
            except:
                extpos=len(strings)
                extdic[ext]=extpos
                strings+=ext+"\x00"
            #make the entries, grab the payload
            entries+=stringoffset+deleteflag+makeint(payloadoffset)+2*makeint(filelength)+makeint(extpos)
            payloadoffset+=filelength
            f1=open(dir0+f,"rb")
            if compressionlevel:
                zippy2.write(f1.read())
            else:
                s2.write(f1.read())
            f1.close()
    if compressionlevel:
        zippedflag="\x01"
        zippy2.close()
    else:
        zippedflag="\x00"
    #make decompressed part1, then compress it
    part1="\x00\x00\x00\x02"+makeint(len(strings))+strings+makeint(numofentries)+entries+zippedflag+makeint(payloadoffset)
    s1=StringIO()
    zippy=gzip.GzipFile(fileobj=s1,mode="wb",compresslevel=1)
    zippy.write(part1)
    zippy.close()
    output=s1.getvalue()
    s1.close()
    #make the final file
    out=open(targetfile,"wb")
    s2.seek(0)
    out.write("\x46\x62\x52\x42"+makeint(len(output))+output)
    while 1:
        buff = s2.read(BUFFSIZE)
        if buff:
            out.write(buff)
        else:
            break
    out.close(), s2.close()

def unpacker(sourcefilename,targetfolder="",tmpfile=0):
    """takes absolute file path with file ending on ".fbrb"; the target folder path is absolute without " FbRB" extension"""
    global dump
    sourcefilename=lp(sourcefilename)
    #check validity
    if sourcefilename[-5:].lower()!=".fbrb":
        return
    f=open(sourcefilename,"rb")
    if f.read(4)!="FbRB":
        f.close()
        return
    print sourcefilename[4:] ###################
    if not targetfolder:
        targetfolder=sourcefilename[:-5]+" FbRB\\"
    else:
        targetfolder=lp(targetfolder)+" FbRB\\"
    if not os.path.isdir(targetfolder):
        os.makedirs(targetfolder) #for empty fbrb files basically
    cut=unpack(">I",f.read(4))[0]
    # there are two gzip archives glued together
    part1=StringIO(f.read(cut))
    if tmpfile:
        part2=tempfile.TemporaryFile()
        part2.write(f.read())
        part2.seek(0)
    else:
        part2=StringIO(f.read())
    f.close()
    zippy=gzip.GzipFile(mode='rb', fileobj=part1)
    zippy2=gzip.GzipFile(mode='rb', fileobj=part2)
    dump=zippy.read()
    part1.close(), zippy.close()
    if dump[-5]=="\x00":
        zipped=0
    else:
        zipped=1
    strlen=readint(4)
    numentries=readint(strlen+8)
    for i in range(numentries):
        filenameoffset=readint(strlen+12+i*24)
##        undeleteflag=readint(strlen+16+i*24) #this is okay due to undeleteflag <=> extension=deleted
        payloadoffset=readint(strlen+20+i*24) # payload in the second gzip archive
        payloadlen=readint(strlen+24+i*24)
##        payloadlen2=readint(strlen+28+i*24) # the same as payloadlen except for one file in Package
        extensionoffset=readint(strlen+32+i*24)
        # get folder name, get file name, grab payload and put it in the right place
        folder,filename = os.path.split(grabstring(filenameoffset+8))
        name,ending = os.path.splitext(filename) # original file ending: bin, res, dbmanifest, dbx
        extension=grabstring(extensionoffset+8).lower() #lowercase because .itexture looks better than .ITexture
        if extension=="*deleted*":
            if ending==".dbx":
                ending=".dbxdeleted"
            else:
                ending=".resdeleted"
        elif extension=="<non-resource>" and ending==".res":
            ending=".nonres"
        elif extension!="<non-resource>":
            ending="."+extension
        finalpath=targetfolder+folder.replace("/","\\")
        if folder!="":
            finalpath+="\\"
        if not os.path.isdir(finalpath):
            os.makedirs(finalpath)
        out=open(finalpath+name+ending,"wb")
        if zipped:
            zippy2.seek(payloadoffset)
            out.write(zippy2.read(payloadlen))
        else:
            part2.seek(payloadoffset)
            out.write(part2.read(payloadlen))
        out.close()
    zippy2.close(), part2.close()

def lp(path): #long pathnames
    if path[:4]=='\\\\?\\':
        return path
    elif path=="":
        return path
    else:
        return unicode('\\\\?\\' + os.path.normpath(path))

#give fbrb folder->pack
#give fbrb file->extract
#give other folder->extract/pack
#sadly os.walk is rather limited for this purpose, I cannot keep it out of "marked" fbrb folders
def main():
    inp=[lp(p) for p in sys.argv[1:]]
    mode=""
    for ff in inp:
        if os.path.isdir(ff) and ff[-5:]==" FbRB":
            packer(ff,packfolder,compressionlevel,packtmpfile)
        elif os.path.isfile(ff):
            unpacker(ff,unpackfolder,unpacktmpfile)
        else:
            #handle all fbrb within this folder; but first ask user input
            if not mode:
                mode=raw_input("(u)npack or (p)ack everything from selected folder(s)\r\n")
            if mode.lower()=="u":
                for dir0,dirs,files in os.walk(ff):
                    for f in files:
                        unpacker(dir0+"\\"+f,unpackfolder,unpacktmpfile)
            elif mode.lower()=="p":
                for dir0,dirs,files in os.walk(ff):
                    packer(dir0,packfolder,compressionlevel,packtmpfile)

try:
    main()
except Exception, e:
    raw_input(e)
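Usage follows from the main() function above: passing an .fbrb file extracts it, passing a folder ending in " FbRB" packs it back, and any other folder triggers the (u)npack/(p)ack prompt. A hypothetical invocation, assuming the script was saved as fbrb.py (drag and drop onto the script amounts to the same thing):

python fbrb.py "<path to .fbrb file or ' FbRB' folder>"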
  7. Frankelstner

    Cas Extractor

    Note: This script is deprecated. Use the sbtoc dumper instead: http://www.bfeditor.org/forums/index.php?showtopic=15731

    This script extracts all files based on the binary text format from the cas archives. Save the code as a .py file in Battlefield 3\Data and run it. Everything will then end up in Battlefield 3\Data\safe with my guess at folder names and file names. Additionally it adds the sha1 hash of every file at the end of its filename. The number it prints while running is the total number of processed text files so far; it should stop after about 49000. There are about 110000 files in total and extracting the other file types is equally easy. However, I haven't taken a closer look at them yet and it may not be as easy to get any folder names. And I can tell you, extracting 50000 files into a single folder (or maybe 5000 in each of ten folders, one per cas file) is quite messy. Now you can take a closer look at the files I am currently working on (a.k.a. the files with nine different sections excluding the header).

#needs Python 2.x
import string
import binascii
import sys
import os
import struct
from cStringIO import StringIO

#cas_01.cas to cas_10.cas
def unXOR(f):
    magic=f.read(4)
    if magic not in ("\x00\xD1\xCE\x00","\x00\xD1\xCE\x01"):
        f.seek(0) #the file is not encrypted
        return f
    f.seek(296)
    magic=[ord(f.read(1)) for i in xrange(260)] #bytes 257 258 259 are not used
    data=f.read()
    f.close()
    data2=[None]*len(data) #initialize the buffer
    for i in xrange(len(data)):
        data2[i]=chr(magic[i%257]^ord(data[i])^0x7b)
    return StringIO("".join(data2))

DICE="\xCE\xD1\xB2\x0F"
cat2=open("cas.cat","rb")
cat=unXOR(cat2)
cat2.close()

def readcat(cat):
    if cat.read(16)!="NyanNyanNyanNyan":
        print "error with header"
        return
    #get file length
    cat.seek(0,2)
    catlength=cat.tell()
    cat.seek(16)
    dicecount=0 #keep track of the total number of extracted files to print it later
    while cat.tell()<catlength:
        #do the cat
        sha1=binascii.hexlify(cat.read(20))
        fileoffset=struct.unpack("<l",cat.read(4))[0]
        filesize=struct.unpack("<l",cat.read(4))[0]
        casfilenum=str(struct.unpack("<l",cat.read(4))[0])
        if len(casfilenum)==1:
            casfilenum="0"+casfilenum
        #do the cas
        cas=open("cas_"+casfilenum+".cas","rb")
        cas.seek(fileoffset)
        if cas.read(4)==DICE: ##CONSIDER MY FAVOURITE FILE TYPE ONLY
            dicecount+=1
            if dicecount%1000==0:
                print dicecount
            #find the paths in the middle of the file and make a list out of them
            pathpos=fileoffset+struct.unpack("l",cas.read(4))[0]
            cas.seek(28,1)
            pathlen=struct.unpack("<l",cas.read(4))[0]
            cas.seek(pathpos)
            paths=cas.read(pathlen)
            lastbytes=paths[-16:][::-1]
            i=0
            try:
                while lastbytes[i]=="\x00":
                    i+=1
            except:
                i=16
            pathlist=string.split(paths[:-i],"\x00")
            #find a genuine path; the first string with two slashes or more
            path=""
            pathcount=0
            for i in pathlist:
                if string.count(i,"/")>=2:
                    path=i
                    break
            #just in case
            if path=="":
                for i in pathlist:
                    if string.count(i,"/")==1:
                        path=i
                        break
            if path=="":
                path=pathlist[0]
            cas.seek(fileoffset)
            #make the folders
            try:
                separator=path.rfind("/")
                if not os.path.isdir("safe/"+path[:separator]):
                    os.makedirs("safe/"+path[:separator])
            except:
                print "error with folder creation"
                debug=open("safe/debugfile "+sha1,"wb")
                debug.write(cas.read(filesize))
                debug.close()
            #write the files
            try:
                out=open("safe/"+path+" "+sha1,"wb")
                out.write(cas.read(filesize))
                out.close()
            except:
                print "error with file creation"
                debug=open("safe/debugfile "+sha1,"wb")
                debug.write(cas.read(filesize))
                debug.close()
        cas.close()
    print "files extracted: "+str(dicecount)

readcat(cat)
cat.close()
print "done"
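For reference, each cat entry is exactly 32 bytes, so the four reads per entry in the script above can be collapsed into a single unpack. A compact equivalent sketch of the same layout:

import struct
import binascii

def readCatEntry(cat):
    #20-byte sha1, then file offset, file size and cas file number as little endian ints
    sha1,fileoffset,filesize,casfilenum=struct.unpack("<20s3l",cat.read(32))
    return binascii.hexlify(sha1),fileoffset,filesize,casfilenum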
  8. Frankelstner

    Bf4 Ebx To Text Converter

    Some instances do not have a guid anymore. I've just enumerated them, so the first instance without a guid is 00000000, the next is 00000001, etc. Also note that the first instance is always the primary instance, so I do not mark it anymore. The changes to the format in detail: http://pastebin.com/1QCmKSwH A documentation of everything I've done to figure out how to deal with the changes: http://pastebin.com/xb7tR2NC

#Requires Python 2.7
#The floattostring.dll requires 32bit Python to write floating point numbers in a succinct manner,
#but the dll is not required to run this script.
import string
import sys
from binascii import hexlify
import struct
import os
from cStringIO import StringIO
import cProfile
import cPickle
import copy

#Adjust input and output folders here
inputFolder=r"D:\hexing\bf4 dump\bundles\ebx"
outputFolder=r"D:\hexing\bf4 ebx"
guidTableName="guidTable bf4" #Name of the guid table file; keeping separate names
#for separate games is highly recommended. The table is created at the location of the script.
EXTENSION=".txt" #Use a different file extension if you like.
SEP="   " #Adjust the amount of whitespace on the left of the converted file.

#Show offsets to the left
printOffsets=False #True/False

#Ignore all instances and fields with these names when converting to text:
IGNOREINSTANCES=["RawFileDataAsset"] #used in WebBrowser\Fonts, crashes the script otherwise
IGNOREFIELDS=[]
##IGNOREINSTANCES=["ShaderAdjustmentData","SocketData","WeaponSkinnedSocketObjectData","WeaponRegularSocketObjectData"]
##IGNOREFIELDS=["Mesh3pTransforms","Mesh3pRigidMeshSocketObjectTransforms"]
#I recommend ignoring a few fields/instances which are related to meshes,
#take up lots of space, and contain no useful information as the mesh format is not even known.
#As an example, Mesh3pTransforms contains nothing but xyz vectors and is found in most weapon
#files. This field takes up 715 lines in the 870 shotgun (the entire file is 3829 lines).
#If you enjoy having to scroll past these 700 lines all the time, then ignore nothing.
#Note however that the lists above applied to bf3. In bf4 I can only find Mesh3pTransforms in the files but not the other strings.
#Nevertheless, use this as a guide to ignore fields/instances on your own.

#First run through all files to create a guid table to resolve external file references.
#Then run through all files once more, but this time convert them using the guid table.
def main():
    createGuidTable()
    dumpText()

##############################################################
##############################################################

unpackLE = struct.unpack
def unpackBE(typ,data): return struct.unpack(">"+typ,data)

def createGuidTable():
    for dir0, dirs, ff in os.walk(inputFolder):
        for fname in ff:
            if fname[-4:]!=".ebx":
                continue
            f=open(lp(dir0+"\\"+fname),"rb")
            relPath=(dir0+"\\"+fname)[len(inputFolder):-4]
            if relPath[0]=="\\":
                relPath=relPath[1:]
            try:
                dbx=Dbx(f,relPath)
                f.close()
            except ValueError as msg:
                f.close()
                if str(msg).startswith("The file is not ebx: "):
                    continue
                else:
                    asdf #deliberately undefined name; raises NameError to abort on unexpected errors
            guidTable[dbx.fileGUID]=dbx.trueFilename
    f5=open(guidTableName,"wb") #write the table
    cPickle.dump(guidTable,f5)
    f5.close()

def dumpText():
    for dir0, dirs, ff in os.walk(inputFolder):
        for fname in ff:
            if fname[-4:]!=".ebx":
                continue
            print fname
            f=open(lp(dir0+"\\"+fname),"rb")
            relPath=(dir0+"\\"+fname)[len(inputFolder):-4]
            if relPath[0]=="\\":
                relPath=relPath[1:]
            try:
                dbx=Dbx(f,relPath)
                f.close()
            except ValueError as msg:
                f.close()
                if str(msg).startswith("The file is not ebx: "):
                    continue
                else:
                    asdf #deliberately undefined name; raises NameError to abort on unexpected errors
            dbx.dump(outputFolder)

def open2(path,mode="rb"):
    if mode=="wb":
        #create folders if necessary and return the file handle
        #first of all, create one folder level manually because makedirs might fail
        pathParts=path.split("\\")
        manualPart="\\".join(pathParts[:2])
        if not os.path.isdir(manualPart):
            os.makedirs(manualPart)
        #now handle the rest, including extra long path names
        folderPath=lp(os.path.dirname(path))
        if not os.path.isdir(folderPath):
            os.makedirs(folderPath)
    return open(lp(path),mode)

def lp(path): #long, normalized pathnames
    if len(path)<=247 or path=="" or path[:4]=='\\\\?\\':
        return os.path.normpath(path)
    return unicode('\\\\?\\' + os.path.normpath(path))

try:
    from ctypes import *
    floatlib = cdll.LoadLibrary("floattostring")
    def formatfloat(num):
        bufType = c_char * 100
        buf = bufType()
        bufpointer = pointer(buf)
        floatlib.convertNum(c_double(num), bufpointer, 100)
        rawstring=(buf.raw)[:buf.raw.find("\x00")]
        if rawstring[:2]=="-.":
            return "-0."+rawstring[2:]
        elif rawstring[0]==".":
            return "0."+rawstring[1:]
        elif "e" not in rawstring and "." not in rawstring:
            return rawstring+".0"
        return rawstring
except:
    def formatfloat(num):
        return str(num)

def hasher(keyword): #32bit FNV-1 hash with FNV_offset_basis = 5381 and FNV_prime = 33
    hash = 5381
    for byte in keyword:
        hash = (hash*33) ^ ord(byte)
    return hash & 0xffffffff # use & because Python promotes the num instead of intended overflow

class Header:
    def __init__(self,varList):
        self.absStringOffset     = varList[0]  ## absolute offset for string section start
        self.lenStringToEOF      = varList[1]  ## length from string section start to EOF
        self.numGUID             = varList[2]  ## number of external GUIDs
        self.numInstanceRepeater = varList[3]  ## total number of instance repeaters
        self.numGUIDRepeater     = varList[4]  ## instance repeaters with GUID
        self.unknown             = varList[5]
        self.numComplex          = varList[6]  ## number of complex entries
        self.numField            = varList[7]  ## number of field entries
        self.lenName             = varList[8]  ## length of name section including padding
        self.lenString           = varList[9]  ## length of string section including padding
        self.numArrayRepeater    = varList[10]
        self.lenPayload          = varList[11] ## length of normal payload section; the start of the array payload section is absStringOffset+lenString+lenPayload

class FieldDescriptor:
    def __init__(self,varList,keywordDict):
        self.name            = keywordDict[varList[0]]
        self.type            = varList[1]
        self.ref             = varList[2] #the field may contain another complex
        self.offset          = varList[3] #offset in payload section; relative to the complex containing it
        self.secondaryOffset = varList[4]
        if self.name=="$":
            self.offset-=8

class ComplexDescriptor:
    def __init__(self,varList,keywordDict):
        self.name            = keywordDict[varList[0]]
        self.fieldStartIndex = varList[1] #the index of the first field belonging to the complex
        self.numField        = varList[2] #the total number of fields belonging to the complex
        self.alignment       = varList[3]
        self.type            = varList[4]
        self.size            = varList[5] #total length of the complex in the payload section
        self.secondarySize   = varList[6] #seems deprecated

class InstanceRepeater:
    def __init__(self,varList):
        self.complexIndex = varList[0] #index of complex used as the instance
        self.repetitions  = varList[1] #number of instance repetitions

class arrayRepeater:
    def __init__(self,varList):
        self.offset       = varList[0] #offset in array payload section
        self.repetitions  = varList[1] #number of array repetitions
        self.complexIndex = varList[2] #not necessary for extraction

class Complex:
    def __init__(self,desc):
        self.desc=desc

class Field:
    def __init__(self,desc,offset):
        self.desc=desc
        self.offset=offset #track absolute offset of each field in the ebx

numDict={0xC12D:("Q",8),0xc0cd:("B",1),0x0035:("I",4),0xc10d:("I",4),0xc14d:("d",8),0xc0ad:("?",1),
         0xc0fd:("i",4),0xc0bd:("b",1),0xc0ed:("h",2),0xc0dd:("H",2),0xc13d:("f",4)}

class Dbx:
    def __init__(self, f, relPath):
        #metadata
        magic=f.read(4)
        if magic=="\xCE\xD1\xB2\x0F":   self.unpack=unpackLE
        elif magic=="\x0F\xB2\xD1\xCE": self.unpack=unpackBE
        else: raise ValueError("The file is not ebx: "+relPath)
        self.relPath=relPath #to give more feedback for unknown field types
        self.trueFilename=""
        self.header=Header(self.unpack("3I6H3I",f.read(36)))
        self.arraySectionstart=self.header.absStringOffset+self.header.lenString+self.header.lenPayload
        self.fileGUID=f.read(16)
        while f.tell()%16!=0: f.seek(1,1) #padding
        self.externalGUIDs=[(f.read(16),f.read(16)) for i in xrange(self.header.numGUID)]
        self.keywords=str.split(f.read(self.header.lenName),"\x00")
        self.keywordDict=dict((hasher(keyword),keyword) for keyword in self.keywords)
        self.fieldDescriptors=[FieldDescriptor(self.unpack("IHHii",f.read(16)), self.keywordDict) for i in xrange(self.header.numField)]
        self.complexDescriptors=[ComplexDescriptor(self.unpack("IIBBHHH",f.read(16)), self.keywordDict) for i in xrange(self.header.numComplex)]
        self.instanceRepeaters=[InstanceRepeater(self.unpack("2H",f.read(4))) for i in xrange(self.header.numInstanceRepeater)] #note: the class name is capitalized, unlike in the original forum paste
        while f.tell()%16!=0: f.seek(1,1) #padding
        self.arrayRepeaters=[arrayRepeater(self.unpack("3I",f.read(12))) for i in xrange(self.header.numArrayRepeater)]

        #payload
        f.seek(self.header.absStringOffset+self.header.lenString)
        self.internalGUIDs=[]
        self.instances=[] # (guid, complex)
        nonGUIDindex=0
        self.isPrimaryInstance=True #first instance is primary
        for i, instanceRepeater in enumerate(self.instanceRepeaters):
            for repetition in xrange(instanceRepeater.repetitions):
                #obey alignment of the instance; peek into the complex for that
                while f.tell()%self.complexDescriptors[instanceRepeater.complexIndex].alignment!=0: f.seek(1,1)
                #all instances after numGUIDRepeater have no guid
                if i<self.header.numGUIDRepeater:
                    instanceGUID=f.read(16)
                else:
                    #just enumerate those instances without guid and assign a big endian int to them
                    instanceGUID=struct.pack(">I",nonGUIDindex)
                    nonGUIDindex+=1
                self.internalGUIDs.append(instanceGUID)
                self.instances.append( (instanceGUID,self.readComplex(instanceRepeater.complexIndex,f,True)) )
                self.isPrimaryInstance=False #the readComplex function has used isPrimaryInstance by now
        f.close()

        #if no filename found, use the relative input path instead
        #it's just as good though without capitalization
        if self.trueFilename=="":
            self.trueFilename=relPath

    def readComplex(self, complexIndex, f, isInstance=False):
        complexDesc=self.complexDescriptors[complexIndex]
        cmplx=Complex(complexDesc)
        cmplx.offset=f.tell()
        cmplx.fields=[]
        #alignment 4 instances require subtracting 8 for all field offsets and the complex size
        obfuscationShift=8 if (isInstance and cmplx.desc.alignment==4) else 0
        for fieldIndex in xrange(complexDesc.fieldStartIndex,complexDesc.fieldStartIndex+complexDesc.numField):
            f.seek(cmplx.offset+self.fieldDescriptors[fieldIndex].offset-obfuscationShift)
            cmplx.fields.append(self.readField(fieldIndex,f))
        f.seek(cmplx.offset+complexDesc.size-obfuscationShift)
        return cmplx

    def readField(self,fieldIndex,f):
        fieldDesc = self.fieldDescriptors[fieldIndex]
        field=Field(fieldDesc,f.tell())
        if fieldDesc.type in (0x0029, 0xd029,0x0000,0x8029):
            field.value=self.readComplex(fieldDesc.ref,f)
        elif fieldDesc.type==0x0041:
            arrayRepeater=self.arrayRepeaters[self.unpack("I",f.read(4))[0]]
            arrayComplexDesc=self.complexDescriptors[fieldDesc.ref]
            f.seek(self.arraySectionstart+arrayRepeater.offset)
            arrayComplex=Complex(arrayComplexDesc)
            arrayComplex.fields=[self.readField(arrayComplexDesc.fieldStartIndex,f) for repetition in xrange(arrayRepeater.repetitions)]
            field.value=arrayComplex
        elif fieldDesc.type in (0x407d, 0x409d):
            startPos=f.tell()
            stringOffset=self.unpack("i",f.read(4))[0]
            if stringOffset==-1:
                field.value="*nullString*"
            else:
                f.seek(self.header.absStringOffset+stringOffset)
                field.value=""
                while 1:
                    a=f.read(1)
                    if a=="\x00":
                        break
                    else:
                        field.value+=a
                f.seek(startPos+4)
                if self.isPrimaryInstance and fieldDesc.name=="Name" and self.trueFilename=="":
                    self.trueFilename=field.value
        elif fieldDesc.type in (0x0089,0xc089): #incomplete implementation, only gives back the selected string
            compareValue=self.unpack("i",f.read(4))[0]
            enumComplex=self.complexDescriptors[fieldDesc.ref]
            if enumComplex.numField==0:
                field.value="*nullEnum*"
            for fieldIndex in xrange(enumComplex.fieldStartIndex,enumComplex.fieldStartIndex+enumComplex.numField):
                if self.fieldDescriptors[fieldIndex].offset==compareValue:
                    field.value=self.fieldDescriptors[fieldIndex].name
                    break
        elif fieldDesc.type==0xc15d:
            field.value=f.read(16)
        elif fieldDesc.type==0x417d:
            field.value=f.read(8)
        else:
            try:
                (typ,length)=numDict[fieldDesc.type]
                num=self.unpack(typ,f.read(length))[0]
                field.value=num
            except:
                print "Unknown field type: "+str(fieldDesc.type)+" File name: "+self.relPath
                field.value="*unknown field type*"
        return field

    def dump(self,outputFolder):
##        if not self.trueFilename: self.trueFilename=hexlify(self.fileGUID)
        outName=outputFolder+self.trueFilename+EXTENSION
##        dirName=os.path.dirname(outputFolder+self.trueFilename)
##        if not os.path.isdir(dirName): os.makedirs(dirName)
##        f2=open(outputFolder+self.trueFilename+EXTENSION,"wb")
        f2=open2(outName,"wb")
        for (guid,instance) in self.instances:
            if instance.desc.name not in IGNOREINSTANCES: #############
                writeInstance(f2,instance,hexlify(guid))
                self.recurse(instance.fields,f2,0)
        f2.close()

    def recurse(self, fields, f2, lvl): #over fields
        lvl+=1
        for field in fields:
            if field.desc.type in (0x0029,0xd029,0x0000,0x8029):
                if field.desc.name not in IGNOREFIELDS: #############
                    writeField(f2,field,lvl,"::"+field.value.desc.name)
                    self.recurse(field.value.fields,f2,lvl)
            elif field.desc.type == 0xc13d:
                writeField(f2,field,lvl," "+formatfloat(field.value))
            elif field.desc.type == 0xc15d:
                writeField(f2,field,lvl," "+hexlify(field.value).upper()) #upper case => chunk guid
            elif field.desc.type==0x417d:
                val=hexlify(field.value)
##                val=val[:16]+"/"+val[16:]
                writeField(f2,field,lvl," "+val)
            elif field.desc.type == 0x0035:
                towrite=""
                if field.value>>31:
                    extguid=self.externalGUIDs[field.value&0x7fffffff]
                    try:    towrite=guidTable[extguid[0]]+"/"+hexlify(extguid[1])
                    except: towrite=hexlify(extguid[0])+"/"+hexlify(extguid[1])
                elif field.value==0:
                    towrite="*nullGuid*"
                else:
                    intGuid=self.internalGUIDs[field.value-1]
                    towrite=hexlify(intGuid)
                writeField(f2,field,lvl," "+towrite)
            elif field.desc.type==0x0041:
                if len(field.value.fields)==0:
                    writeField(f2,field,lvl," *nullArray*")
                else:
                    writeField(f2,field,lvl,"::"+field.value.desc.name)
                    #quick hack so I can add indices to array members while using the same recurse function
                    for index in xrange(len(field.value.fields)):
                        member=field.value.fields[index]
                        if member.desc.name=="member":
                            desc=copy.deepcopy(member.desc)
                            desc.name="member("+str(index)+")"
                            member.desc=desc
                    self.recurse(field.value.fields,f2,lvl)
            else:
                writeField(f2,field,lvl," "+str(field.value))

def hex2(num): #take int, return 8byte string
    a=hex(num)
    if a[:2]=="0x": a=a[2:]
    if a[-1]=="L":  a=a[:-1]
    while len(a)<8: a="0"+a
    return a

if printOffsets:
    def writeField(f,field,lvl,text):
        f.write(hex2(field.offset)+SEP+lvl*SEP+field.desc.name+text+"\r\n")
    def writeInstance(f,cmplx,text):
        f.write(hex2(cmplx.offset)+SEP+cmplx.desc.name+" "+text+"\r\n")
else:
    def writeField(f,field,lvl,text):
        f.write(lvl*SEP+field.desc.name+text+"\r\n")
    def writeInstance(f,cmplx,text):
        f.write(cmplx.desc.name+" "+text+"\r\n")

if outputFolder[-1] not in ("/","\\"): outputFolder+="\\"
if inputFolder[-1] not in ("/","\\"):  inputFolder+="\\"

#if there's a guid table already, use it
try:
    f5=open(guidTableName,"rb")
    guidTable=cPickle.load(f5)
    f5.close()
except:
    guidTable=dict()

main()
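To convert a single file rather than a whole dump, the Dbx class can also be driven directly; a minimal sketch reusing the classes above (the file path is illustrative, and note that the constructor closes the file handle itself on success):

f=open(r"D:\hexing\bf4 dump\bundles\ebx\somefile.ebx","rb")
dbx=Dbx(f,"somefile") #relative path doubles as the fallback filename
dbx.dump(outputFolder)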
  9. Frankelstner

    Bf3/bf4 And Moh_Wf Batch Texture Converters*

    I can't believe they did it again. It's the same permutation obfuscation that was used for that one console game: http://www.bfeditor.org/forums/index.php?showtopic=15780&page=5#entry106500 This being said, I suppose the same might apply to the audio files in the bf4 DLC too. Thankfully I have left in the option to enable permutations there. Just in case though, can someone run my decoder on the DLC files with permutations activated and see whether any files come out?
  10. Frankelstner

    Bf3/bf4 Mesh Research

    The res seems to contain 4 sections after the header. 1) Structures of a0 bytes each; a chunk id is given at offset 58 relative to the structure. 2) Structures of c0 bytes each; no ids here. Each structure ends with two "lines" (when using 0x10 alignment in the hex editor) which each read FFFFFFFFFFFFFFFFFFFFFFFF00000000. 3) A string section. 4) Payload of some sort?

    Header:
    - 8 floats. Unknown purpose. The fourth and eighth float are null.
    - 6 longs. Each specifies the offset of a structure in the first section. If the section has fewer than 6 elements, the last few longs are set to null. As a consequence, there cannot be more than 6 guids given in a single res file (or so I believe). What's more, the structures are always a0 bytes and the header has a fixed size too, so one can calculate the offsets anyway without these longs.
    - 2 longs. Absolute string offsets. It works like this: 1) objects/props/puddle/puddle_02_Mesh 2) puddle_02_Mesh (a substring of the previous one).
    - 1 int. Hash of the filename or something like that.
    - 4 nullbytes.
    - 1 int. Can exceed the size of the res + chunk (even taken together). Other times it is just 1.
    - 1 short. Number of elements in the first section; may not exceed 6 (because 6 longs max).
    - 1 short. Number of elements in the second section.

    First section:
    - 4 nulls, ffff, 2 nulls
    - struct of 3 ints, 5 times: 1) small num 2) offset/size? 3) null. The five occurrences: 1) 1, 110, null (maybe size of floats in chunk) 2) 0, 230, null (offset in res, after strings) 3) 1, 230, null (same?) 4) 0, 231, null 5) 0, 231, null
    - long: 41 or 40
    - int: 30
    - long: 120 (maybe size of floats in chunk)
    - guid
    - ffffffff
    - 8 nulls
    - 3 longs, string offsets: 1) Mesh:objects/props/puddle/puddle_02_Mesh_lod0 2) objects/props/puddle/puddle_02_Mesh_lod0 3) puddle_02_Mesh_lod0
    - h32?
    - 16 nulls

    Second section:
    - 8 nulls
    - int, string offset: lambert2
    - 8 nulls
    - small int, 8
    - 8 nulls
    - small int, 9 (so maybe that was a struct with 3 ints again)
    - int, 320
    - 8 nulls
    - some very odd bytes now
    - FFFFFFFFFFFFFFFFFFFFFFFF00000000
    - FFFFFFFFFFFFFFFFFFFFFFFF00000000

    About your second tool, note that you need to use the EOF to calculate the last number of faces. I've cleaned up your program and fixed that issue: http://pastebin.com/iciNCS5N Also note that kiwidog is working on the meshes too right now, though I don't know much about his progress.
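To make the header notes above concrete, here is a minimal parsing sketch based purely on them. Everything about it is an assumption: "long" is treated as 8 bytes and "int" as 4, little endian throughout, and none of the field meanings are confirmed.

import struct

def readResHeader(f):
    floats=struct.unpack("<8f",f.read(32))            #unknown purpose; 4th and 8th are null
    structOffsets=struct.unpack("<6q",f.read(48))     #offsets of the a0-byte structures; unused entries null
    stringOffsets=struct.unpack("<2q",f.read(16))     #absolute offsets of the two name strings
    nameHash=struct.unpack("<I",f.read(4))[0]         #hash of the filename or something like that
    f.read(4)                                         #4 nullbytes
    unknown=struct.unpack("<I",f.read(4))[0]          #can exceed res+chunk size; sometimes just 1
    numFirst,numSecond=struct.unpack("<2H",f.read(4)) #element counts of the two sections
    return floats,structOffsets,stringOffsets,nameHash,unknown,numFirst,numSecond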
  11. Frankelstner

    Bf4 Audio Decoder

    I've updated the script to handle those 6 EASpeex stereo files correctly. Additionally I've cut the EASpeex volume in half to get rid of clipping. The codec gives me back floats in a range of about +-50000.0, so I just divide by 65536 (instead of 32768) to get decent samples.
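For clarity, the scaling amounts to the following; a minimal sketch with hypothetical names (the actual decoder code is not shown here). Dividing the raw floats by 65536 instead of 32768 halves the volume before converting to 16-bit samples:

import struct

def floatsToInt16(samples):
    out=[]
    for s in samples:
        v=int(s/65536.0*32768) #half volume; /32768 would clip with samples around +-50000
        v=max(-32768,min(32767,v)) #clamp any remaining overshoot
        out.append(v)
    return struct.pack("<%dh" % len(out),*out)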
  12. Frankelstner

    Bf4 Sbtoc Dumper

    The script now supports the xpack archives. Note that you must replace previous versions of the LZ77.dll, as I had to make some extensive changes to it too. Meh, I forgot to make tocRoot2 absolute, so the unpatched files were not extracted. I've fixed that though, and on the bright side, if you happened to use the old version you can just run the new one and it will continue where the other one left off.
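The fix itself is a one-liner; a sketch of the idea (tocRoot2 is the variable mentioned above, the value here is illustrative):

import os

tocRoot2=r"..\Data\Win32" #illustrative relative path
tocRoot2=os.path.abspath(tocRoot2) #resolve to an absolute path before walking it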
  13. Frankelstner

    Bf4 Audio Decoder

    Aha. The dumper doesn't support the xpacks at the moment, as they are noncas (and of course different from the noncas in bf3). I downloaded the xpack files just yesterday and managed to dump the unpatched noncas, plus I'm about 50% done with the patched noncas. Thus it shouldn't take too long until I finish this. I will post the script when I'm done. I've changed some settings when compiling now. Does this dll fix it? http://www.gamefront.com/files/23943162/easpeex.zip
  14. Frankelstner

    Bf4 Audio Decoder

    I think we should elaborate a bit to prevent misunderstanding. So you mean to say that you have used the bf4 audio decoder (the previous version without speex support) to decode the xpack audio? I see a couple of issues. The dumper script does not handle noncas sbtoc yet, so maybe the chunks are hiding there. Also, how did you define the chunk folders? Did you merge the DLC files with the other files (which is what I recommend)? In that case however, have you used some tool to determine the difference in output before and after adding the DLC files? About speex, try to replace the try/except at the top with a simple statement, i.e. substitute

try:
    speex = cdll.LoadLibrary("easpeex")
    isSpeex=True
except:
    isSpeex=False

with

speex = cdll.LoadLibrary("easpeex")
isSpeex=True

The error message should then be a bit more descriptive as to what's going on.
  15. Frankelstner

    Bf4 Audio Decoder

    It works fine when placed in the Python27 folder. Anyway, you must have 32bit Python to use the dll. Though wait a sec, that is required for xas too. I don't really have any ideas right now. I haven't tried it myself, but I suppose the decoder should work with the DLC too. Is there any error message in particular or does it just not do anything?
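In case someone is unsure which interpreter they have, a quick check for Python bitness:

import struct
print struct.calcsize("P")*8 #prints 32 or 64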
  16. Frankelstner

    Bf4 Audio Decoder

    I have added support for Speex. Use this dll: http://www.gamefront.com/files/23940328/easpeex.zip There are 6 stereo files (all others are mono) which are not handled correctly; they are located in Sound\VO\EN\MP\PA. I'm not sure where to start though, or whether this is worth fixing (the audio is a bit garbled but still comprehensible).
  17. Frankelstner

    Need Quick Help For My Server

    You could place the script in just about any Python folder. Common practice is to make a new script and place it in such a folder, then add two lines to the __init__.py in the same folder (which is empty by default):

import myscriptname
myscriptname.init()

    When a round starts, this will call the init function of your script. In your script, set up the timer in the init function. The rest of the script should work out.

import bf2
import bf2.Timer
import host

def init():
    timer = bf2.Timer(onTimer, 60, 1)
    timer.setRecurring(60)

messages=("This is an unranked server",
          "Modified by Belgian_hero and =hero= Shoota{BE}",
          "Grenadelaunchers and C4 have 0 damage")

def onTimer(data):
    global messages #not sure if this is even needed
    for message in messages: #just tidying things up a bit
        host.rcon_invoke('game.sayall "%s"' % message)

    I haven't tested it, but I hope you get the idea.
  18. Frankelstner

    Bf4 Ebx To Text Converter

    I've changed the script a bit so it only gives a message about unknown types but keeps running. I can't find fields of type 49453 right now, though I suppose they are rare and contain nothing useful anyway.
  19. Frankelstner

    Bf4 Sbtoc Dumper

    Note that the entries in question are not the ones in the sb, but in the toc. I've changed the unXOR function in sbtoc.py. The tocs are not encrypted anymore; in fact, the game does not contain any encrypted files at all now.
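For reference, the relevant idea is just to peek at the magic and skip the XOR pass for plaintext tocs; a minimal sketch along the lines of the unXOR function posted further up this page:

def isEncrypted(f):
    magic=f.read(4)
    f.seek(0)
    return magic in ("\x00\xD1\xCE\x00","\x00\xD1\xCE\x01")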
  20. Frankelstner

    Bf4 Audio Decoder

    I think the dumper with the pure Python decompression did not decompress chunks which were not part of a bundle. The dll has this all fixed, as you've found out yourself already.
  21. Frankelstner

    Bf4 Audio Decoder

    Looks fine to me: http://i.imgur.com/TROhprJ.png
  22. Frankelstner

    Bf4 Audio Decoder

    http://www.bfeditor.org/forums/index.php?showtopic=15844&st=0

    The ebx files you have are compressed. I assume you have used the bf3 dumper on the unpatched files. That doesn't cut it. DICE changed the compression from zlib (which, however, was never applied to ebx files; those were stored uncompressed) to their own LZ77 variant, which is applied to every single file.
  23. Frankelstner

    Bf4 Audio Decoder

    Well there you have it. The files do not start with CED1B20F and are thus not recognized as ebx. There's a chance that the previous pure Python dumper did not decompress all files properly, so you could dump the files again with the dll (which is much faster anyway). Other than that, upload an ebx file so I can take a look.
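If you want to check your dump wholesale instead of opening files by hand, here is a small sketch (the folder path is illustrative) that counts ebx files whose magic is wrong, i.e. files that were presumably left compressed:

import os

ebxFolder=r"D:\hexing\bf4 dump\bundles\ebx" #illustrative path
bad=0
for dir0,dirs,ff in os.walk(ebxFolder):
    for fname in ff:
        if fname[-4:]!=".ebx":
            continue
        f=open(os.path.join(dir0,fname),"rb")
        magic=f.read(4)
        f.close()
        if magic not in ("\xCE\xD1\xB2\x0F","\x0F\xB2\xD1\xCE"):
            bad+=1
            print fname
print "files with wrong magic: "+str(bad)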
  24. Frankelstner

    Bf4 Audio Decoder

    I can only guess at this point.

    1) In case you haven't done so, you need to run the dumper script twice, first with the patched files, then with the unpatched files.
    2) Open up any ebx file. It must start with CED1B20F.
    3) Use this function instead for some cheap debugging:

def decodeAudio():
    for dir0, dirs, ff in os.walk(ebxFolder):
        for fname in ff:
            print fname
            if fname[-4:]!=".ebx":
                continue
            f=open(lp(dir0+"\\"+fname),"rb")
            relPath=(dir0+"\\"+fname)[len(ebxFolder):-4]
            if relPath[0]=="\\":
                relPath=relPath[1:]
            print 1
            try:
                dbx=Dbx(f,relPath)
                f.close()
            except ValueError as msg:
                print 2
                f.close()
                if str(msg).startswith("The file is not ebx: "):
                    continue
                else:
                    asdf
            print 3
            dbx.decode()
  25. Frankelstner

    Bf4 Audio Decoder

    The script parses all ebx files and checks if the primary instance is of type SoundWaveAsset. If it's not, then it continues with the next file. When you say it does nothing, do you mean it stops immediately or is it parsing the files (check the CPU load)? In the first case, you have probably not typed in the correct ebx path; in the latter case you will just need to wait longer.
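The check itself boils down to one comparison; a sketch using the converter's data model, where instances is a list of (guid, complex) pairs and the first entry is the primary instance:

def isSoundWaveAsset(dbx):
    return dbx.instances[0][1].desc.name=="SoundWaveAsset"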