Official BF Editor Forums

File Dumper For Sbtoc Archives


Frankelstner


Update: I've made extensive changes. There are now three scripts instead of one, which require you to name them exactly the way they are named here and to keep them in the same folder. This version is meant to run over all unpatched or patched toc files at once, spitting out everything it can find. The dumper always relies on sbtoc archives but extracts cascat too: inside the toc there's a flag that says whether the assets are stored in cascat or in the sb. The script reads that flag and acts accordingly. I've invented some file extensions for res files depending on resType and added some metadata to the filename when I wasn't sure what to do with it. Usage: Right click on dumper.py -> Edit with IDLE, adjust the paths at the top, then hit F5 to start the script. The script is done when there are no asterisks in the title. The script doesn't overwrite existing files, so it's preferable to dump the patched files first, then dump the unpatched files into the same folder. By default the script already has the patched folder selected, so once you've run it with that path, just put ## in front of the first tocRoot line, remove them from the second one and run the script once more (as shown right below).
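
For the second (unpatched) run, the two tocRoot lines near the top of dumper.py would look like this, i.e. simply swapped around:

##tocRoot=r"C:\Program Files (x86)\Origin Games\Battlefield 3\Update"
tocRoot=r"C:\Program Files (x86)\Origin Games\Battlefield 3\Data\Win32"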

Python 2.7. For those DLC sbtoc archives. The script basically does three things: 1) undo the XOR on the toc (table of contents) file, 2) extract the bundles from the superbundle (sb) file, 3) extract the ebx files from the individual bundle files. Drag and drop one or several toc files, or folders containing toc files, onto the script file. The files will be extracted into the same folder as the script. The output still requires another run with my file converter to make sense of it: http://www.bfeditor.org/forums/index.php?showtopic=15531

Bundle.py:

import sys
import os
from struct import unpack,pack
from binascii import hexlify,unhexlify
import zlib
from cStringIO import StringIO
import sbtoc


def readNullTerminatedString(f):
   result=""
   while 1:
       char=f.read(1)
       if char=="\x00": return result
       result+=char


class Bundle(): #noncas
   def __init__(self, f): 
       metaSize=unpack(">I",f.read(4))[0] #size of the meta section/offset of the payload section
       metaStart=f.tell()
       metaEnd=metaStart+metaSize
       self.header=Header(unpack(">8I",f.read(32)),metaStart)
       if self.header.magic!=0x970d1c13: raise Exception("Wrong noncas bundle header magic. The script cannot handle patched sbtoc")
       self.sha1List=[f.read(20) for i in xrange(self.header.numEntry)] #one sha1 for each ebx+res+chunk
       self.ebxEntries=[BundleEntry(unpack(">3I",f.read(12))) for i in xrange(self.header.numEbx)]
       self.resEntries=[BundleEntry(unpack(">3I",f.read(12))) for i in xrange(self.header.numRes)]
       #ebx are done, but res have extra content
       for entry in self.resEntries:
           entry.resType=unpack(">I",f.read(4))[0] #e.g. IT for ITexture
       for entry in self.resEntries:
           entry.resMeta=f.read(16) #often 16 nulls (always null for IT)

       self.chunkEntries=[Chunk(f) for i in xrange(self.header.numChunks)]


       #chunkmeta section, uses sbtoc structure, defines h32 and meta. If meta != nullbyte, then the corresponding chunk should have range entries.
       #Then again, noncas is crazy so this is only true for cas. There is one chunkMeta element (consisting of h32 and meta) for every chunk.
       #h32 is the FNV-1 hash applied to a string. For some audio files for example, the files are accessed via ebx files which of course have a name.
       #The hash of this name in lowercase is the h32 found in the chunkMeta. The same hash is also found in the ebx file itself at the keyword NameHash
       #For ITextures, the h32 is found in the corresponding res file. The res file also contains a name and once again the hash of this name is the h32.
       #meta for textures usually contains firstMip 0/1/2.
       if self.header.numChunks>0: self.chunkMeta=sbtoc.Subelement(f)
       for i in xrange(len(self.chunkEntries)):
           self.chunkEntries[i].meta=self.chunkMeta.content[i].elems["meta"].content
           self.chunkEntries[i].h32=self.chunkMeta.content[i].elems["h32"].content


       for entry in self.ebxEntries + self.resEntries: #ebx and res have a path and not just a guid
           f.seek(self.header.offsetString+entry.offsetString)
           entry.name=readNullTerminatedString(f)


       f.seek(metaEnd) #PAYLOAD. Just grab all the payload offsets and sizes and add them to the entries without actually reading the payload. Also attach sha1 to entry.
       sha1Counter=0
       for entry in self.ebxEntries+self.resEntries+self.chunkEntries:
           while f.tell()%16!=0: f.seek(1,1)
           entry.offset=f.tell()
           f.seek(entry.size,1)

           entry.sha1=self.sha1List[sha1Counter]
           sha1Counter+=1




class Header: #8 uint32
   def __init__(self,values,metaStart):
       self.magic           =values[0] #970d1c13 for unpatched files
       self.numEntry        =values[1] #total entries = numEbx + numRes + numChunks
       self.numEbx          =values[2]
       self.numRes          =values[3]
       self.numChunks       =values[4]
       self.offsetString    =values[5] +metaStart #offsets start at the beginning of the header, thus +metaStart
       self.offsetChunkMeta =values[6] +metaStart #redundant
       self.sizeChunkMeta   =values[7] #redundant

class BundleEntry: #3 uint32 + 1 string
   def __init__(self,values):
       self.offsetString=values[0] #in the name strings section
       self.size=values[1] #total size of the payload (for zlib including the two ints before the zlib)
       self.originalSize=values[2] #uncompressed size (for zlib after decompression and ignoring the two ints)
       #note: for zlib the uncompressed size is saved in both the file and the archive
       #      for zlib the compressed size in the file is the (size in the archive)-8


class Chunk:
   def __init__(self, f):
       self.id=f.read(16)
       self.rangeStart=unpack(">I",f.read(4))[0]
       self.rangeEnd=unpack(">I",f.read(4))[0] #total size of the payload is rangeEnd-rangeStart
       self.logicalOffset=unpack(">I",f.read(4))[0]
       self.size=self.rangeEnd-self.rangeStart
       #rangeStart, rangeEnd and logicalOffset are for textures. Non-texture chunks have rangeStart=logicalOffset=0 and rangeEnd being the size of the payload.
       #For cas bundles: rangeEnd is always exactly the size of compressed payload (which is specified too).
       #Furthermore for cas, rangeStart defines the point at which the mipmap number specified by chunkMeta::meta is reached in the compressed payload.
       #logicalOffset then is the uncompressed equivalent of rangeStart.
       #However for noncas, rangeStart and rangeEnd work in absolutely crazy ways. Their individual values easily exceed the actual size of the file.
       #Adding the same number to both of them does NOT cause the game to crash when loading, so really only the difference matters.
       #Additionally the sha1 for these texture chunks does not match the payload. The non-texture chunks that come AFTER such a chunk have the correct sha1 again.
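
#Side note, not part of the original script logic (safe to keep or delete): the h32 above
#is described as the FNV-1 hash of the lowercased name. A sketch with the standard 32 bit
#FNV-1 constants follows; I haven't verified that the game doesn't use a custom offset
#basis, so treat the constants as an assumption.
def fnv1(string):
   hash=0x811c9dc5 #standard FNV-1 offset basis (assumption)
   for char in string.lower():
       hash=(hash*0x01000193)&0xffffffff #multiply by the FNV prime first, then xor: FNV-1 order
       hash^=ord(char)
   return hash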

sbtoc.py:

import sys
import os
from struct import unpack, pack
from binascii import hexlify, unhexlify
import zlib
from cStringIO import StringIO
from collections import OrderedDict
import Bundle

def read128(File):
   """Reads the next few bytes in a file as LEB128/7bit encoding and returns an integer"""
   result,i = 0,0
   while 1:
       byte=ord(File.read(1))
       result|=(byte&127)<<i
       if byte>>7==0: return result
       i+=7

def write128(integer):
   """Writes an integer as LEB128 and returns a byte string;
   roughly the inverse of read128, but no file involved here"""
   if integer==0: return "\x00" #the loop below would otherwise return an empty string
   bytestring=""
   while integer:
       byte=integer&127
       integer>>=7
       if integer: byte|=128
       bytestring+=chr(byte)
   return bytestring
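
#For reference (not part of the script): write128(300) gives "\xac\x02", because
#300 = 0b10_0101100; the low 7 bits 0x2c get the continuation bit -> 0xac, and the
#remaining 0b10 becomes 0x02. read128 reverses this.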

def readNullTerminatedString(f):
   result=""
   while 1:
       char=f.read(1)
       if char=="\x00": return result
       result+=char

def unXOR(f):
   magic=f.read(4)
   if magic not in ("\x00\xD1\xCE\x00","\x00\xD1\xCE\x01"):
       f.seek(0) #the file is not encrypted
       return f

   f.seek(296)
   magic=[ord(f.read(1)) for i in xrange(260)] #bytes 257 258 259 are not used
   data=f.read()
   f.close()
   data2=[None]*len(data) #initialize the buffer
   for i in xrange(len(data)):
       data2[i]=chr(magic[i%257]^ord(data[i])^0x7b)
   return StringIO("".join(data2))

class EntryEnd(Exception):
   def __init__(self, value): self.value = value
   def __str__(self): return repr(self.value)

class Entry:
   #Entries always start with a 82 byte and always end with a 00 byte.
   #They have their own size defined right after that and are just one subelement after another.
   #This size contains all bytes after the size until (and including) the 00 byte at the end.
   #Use the size as an indicator when to stop reading and raise errors when nullbytes are missing.
   def __init__(self,toc): #read the data from file
       entryStart=toc.read(1)
       if entryStart=="\x82": #the common case
           self.elems=OrderedDict()
           entrySize=read128(toc)
           endPos=toc.tell()+entrySize
           while toc.tell()<endPos-1: #-1 because of final nullbyte
               content=Subelement(toc)
               self.elems[content.name]=content
           if toc.read(1)!="\x00": raise Exception("Entry does not end with \x00 byte. Position: "+str(toc.tell()))
       elif entryStart=="\x87": #rare case: the entry is one raw blob instead of subelements
           self.elems=toc.read(read128(toc)-1)
           toc.seek(1,1) #trailing null
       else:
           raise Exception("Entry does not start with \x82 or (rare) \x87 byte. Position: "+str(toc.tell()))



   def write(self, f): #write the data into file
       f.write("\x82")
       #Write everything into a buffer to get the size.
       buff=StringIO()
       #Write the subelements. Write in a particular order to compare output with original file.
       for key in self.elems:
           self.elems[key].write(buff)

       f.write(write128(len(buff.getvalue())+1)) #end byte
       f.write(buff.getvalue())
       f.write("\x00")
       buff.close()

   def showStructure(self,level=0):
       for key in self.elems:
           obj=self.elems[key]
           obj.showStructure(level+1)

class Subelement:
   #These are basically subelements of an entry.
   #It consists of type (1 byte), name (nullterminated string), data depending on type. 
   #However one such subelement may be a list type, containing several entries on its own.
   #Lists end with a nullbyte on their own; they (like strings) have their size prefixed as 7bit int.
   def __init__(self,toc): #read the data from file
       self.typ=toc.read(1)
       self.name=readNullTerminatedString(toc)

       if   self.typ=="\x0f": self.content=toc.read(16)
       elif self.typ=="\x09": self.content=unpack("Q",toc.read(8))[0]
       elif self.typ=="\x08": self.content=unpack("I",toc.read(4))[0]
       elif self.typ=="\x06": self.content=True if toc.read(1)=="\x01" else False
       elif self.typ=="\x02": self.content=toc.read(read128(toc))
       elif self.typ=="\x13": self.content=toc.read(read128(toc)) #the same as above with different content?
       elif self.typ=="\x10": self.content=toc.read(20) #sha1
       elif self.typ=="\x07": #string, length prefixed as 7bit int.
           self.content=toc.read(read128(toc)-1)
           toc.seek(1,1) #trailing null
       elif self.typ=="\x01": #lists
           self.listLength=read128(toc) #self
           entries=[]
           endPos=toc.tell()+self.listLength 
           while toc.tell()<endPos-1: #lists end on nullbyte
               entries.append(Entry(toc))
           self.content=entries
           if toc.read(1)!="\x00": raise Exception("List does not end with \x00 byte. Position: "+str(toc.tell()))
       else: raise Exception("Unknown type: "+hexlify(self.typ)+" "+str(toc.tell()))

   def write(self,f): #write the data into file
       f.write(self.typ)
       f.write(self.name+"\x00")
       if   self.typ=="\x0f": f.write(self.content)
       elif self.typ=="\x10": f.write(self.content) #sha1
       elif self.typ=="\x09": f.write(pack("Q",self.content))
       elif self.typ=="\x08": f.write(pack("I",self.content))
       elif self.typ=="\x06": f.write("\x01" if self.content==True else "\x00")
       elif self.typ=="\x02": f.write(write128(len(self.content))+self.content)
       elif self.typ=="\x13": f.write(write128(len(self.content))+self.content) #the same as above with different content?
       elif self.typ=="\x07": #string
           f.write(write128(len(self.content)+1)+self.content+"\x00")
       elif self.typ=="\x01":
           #Write everything into a buffer to get the size.
           buff=StringIO()

           for entry in self.content:
               entry.write(buff)
           f.write(write128(len(buff.getvalue())+1)) #final nullbyte
           f.write(buff.getvalue())
           f.write("\x00")
           buff.close()


class Superbundle: #more about toc really
   def __init__(self,pathname):
       #make sure there is toc and sb
       self.fullpath,ext=os.path.splitext(pathname) #everything except extension
       self.filename=os.path.basename(self.fullpath) #the name without extension and without full path
       tocPath,sbPath = self.fullpath+".toc",self.fullpath+".sb"
       if not (os.path.exists(tocPath) and os.path.exists(sbPath)): raise IOError("Could not find the sbtoc files.")
       try:
           toc=unXOR(open(tocPath,"rb"))
       except:
           raise Exception(pathname)
       self.entry=Entry(toc)
       toc.close()

dumper.py:

import sbtoc
import Bundle
import os
from binascii import hexlify,unhexlify
from struct import pack,unpack
from cStringIO import StringIO
import sys
import zlib

##Adjust paths here. The script doesn't overwrite existing files so set tocRoot to the patched files first,
##then run the script again with the unpatched ones to get all files at their most recent version.

catName=r"C:\Program Files (x86)\Origin Games\Battlefield 3\Data\cas.cat" #use "" or r"" if you have no cat; doing so will make the script ignore patchedCatName
patchedCatName=r"C:\Program Files (x86)\Origin Games\Battlefield 3\Update\Patch\Data\cas.cat" #used only when tocRoot contains "Update"

tocRoot=r"C:\Program Files (x86)\Origin Games\Battlefield 3\Update"
##tocRoot=r"C:\Program Files (x86)\Origin Games\Battlefield 3\Data\Win32"

outputfolder="D:/hexing/bf3 dump"


#mohw stuff:

##catName=r"C:\Program Files (x86)\Origin Games\Medal of Honor Warfighter\Data\cas.cat"
##patchedCatName=r"C:\Program Files (x86)\Origin Games\Medal of Honor Warfighter\Update\Patch\Data\cas.cat"
##
##tocRoot=r"C:\Program Files (x86)\Origin Games\Medal of Honor Warfighter\Data"
##
##outputfolder="D:/hexing/mohw dump123/"



#####################################
#####################################

#zlib (one more try):
#Files are split into pieces which are then zlibbed individually (prefixed with compressed and uncompressed size)
#and finally glued together again. Non-zlib files on the other hand have no prefix about size, they are just the payload.
#The archive or file does not declare zlib/nonzlib, making things really complicated. I think the engine actually uses
#ebx and res to figure out if a chunk is zlib or not. However, res itself is zlibbed already; in mohw ebx is zlibbed too.
#In particular mohw crashes when delivering a non-zlibbed ebx file.
#Prefixing the payload with two identical ints containing the payload size makes mohw work again so the game really deduces
#compressedSize==uncompressedSize => uncompressed payload.

#some thoughts without evidence:
#It's possible that ebx/res zlib is slightly different from chunk zlib.
#Maybe for ebx/res, compressedSize==uncompressedSize always means an uncompressed piece.
#Whereas for chunks (textures in particular), there are mip sizes to consider
#e.g. first piece of a mip is always compressed (even with compressedSize==uncompressedSize) but subsequent pieces of a mip may be uncompressed.

def zlibb(f, size):
   #if the entire file is < 10 bytes, it must be non zlib
   if size<10: return f.read(size)

   #interpret the first 10 bytes as fb2 zlib stuff
   uncompressedSize,compressedSize=unpack(">ii",f.read(8))
   magic=f.read(2)
   f.seek(-10,1)

   #sanity check: compressedSize may be just random non-zlib payload.
   if compressedSize>size-8: return f.read(size)
   if compressedSize<=0 or uncompressedSize<=0: return f.read(size)

   #another sanity check with a very specific condition:
   #when uncompressedSize is different from compressedSize, then having a non-zlib piece makes no sense.
   #alternatively one could just let the zlib module try to handle this.
   #It's tempting to compare uncompressedSize<compressedSize, but there are indeed cases when
   #the uncompressed payload is smaller than the compressed one.
   if uncompressedSize!=compressedSize and magic!="\x78\xda":
       return f.read(size)

   outStream=StringIO()
   pos0=f.tell()
   while f.tell()<pos0+size-8:
       uncompressedSize,compressedSize=unpack(">ii",f.read(8)) #big endian

       #sanity checks:
       #The sizes may be just random non-zlib payload; as soon as that happens,
       #abandon the whole loop and just give back the full payload without decompression
       if compressedSize<=0 or uncompressedSize<=0:
           f.seek(pos0)
           return f.read(size)
       #likewise, make sure that compressed size does not exceed the size of the file
       if f.tell()+compressedSize>pos0+size:
           f.seek(pos0)
           return f.read(size)

       #try to decompress
       if compressedSize!=uncompressedSize:
           try:    outStream.write(zlib.decompress(f.read(compressedSize)))
           except: outStream.write(f.read(compressedSize))
       else:
           #if compressed==uncompressed, one might be tempted to think that it is always non-zlib. It's not.
           magic=f.read(2)
           f.seek(-2,1)
           if magic=="\x78\xda":
               try:    outStream.write(zlib.decompress(f.read(compressedSize)))
               except: outStream.write(f.read(compressedSize))
           else:
               outStream.write(f.read(compressedSize))

   data=outStream.getvalue()
   outStream.close()
   return data


def zlibIdata(bytestring):
   return zlibb(StringIO(bytestring),len(bytestring))

def hex2(num):
   #take int, return 8byte string
   a=hex(num)
   if a[:2]=="0x": a=a[2:]
   if a[-1]=="L": a=a[:-1]
   while len(a)<8:
       a="0"+a
   return a

class Stub(): pass


class Cat:
   def __init__(self,catname):
       cat2=open(catname,"rb")
       cat=sbtoc.unXOR(cat2)

       self.casfolder=os.path.dirname(catname)+"\\"
       cat.seek(0,2)
       catsize=cat.tell()
       cat.seek(16)
       self.entries=dict()
       while cat.tell()<catsize:
           entry=Stub()
           sha1=cat.read(20)
           entry.offset, entry.size, entry.casnum = unpack("<III",cat.read(12))
           self.entries[sha1]=entry
       cat.close()
       cat2.close()

   def grabPayload(self,entry):
       cas=open(self.casfolder+"cas_"+("0"+str(entry.casnum) if entry.casnum<10 else str(entry.casnum))+".cas","rb")
       cas.seek(entry.offset)
       payload=cas.read(entry.size)
       cas.close()
       return payload
   def grabPayloadZ(self,entry):
       cas=open(self.casfolder+"cas_"+("0"+str(entry.casnum) if entry.casnum<10 else str(entry.casnum))+".cas","rb")
       cas.seek(entry.offset)
       payload=zlibb(cas,entry.size)
       cas.close()
       return payload



def open2(path,mode):
   #create folders if necessary and return the file handle

   #first of all, create one folder level manually because makedirs might fail
   pathParts=path.split("\\")
   manualPart="\\".join(pathParts[:2])
   if not os.path.isdir(manualPart): os.makedirs(manualPart)

   #now handle the rest, including extra long path names
   folderPath=lp(os.path.dirname(path))
   if not os.path.isdir(folderPath): os.makedirs(folderPath)
   return open(lp(path),mode)

##    return StringIO()


def lp(path): #long pathnames
   if path[:4]=='\\\\?\\' or path=="" or len(path)<=247: return path
   return unicode('\\\\?\\' + os.path.normpath(path))

resTypes={
   0x5C4954A6:".itexture",
   0x2D47A5FF:".gfx",
   0x22FE8AC8:"",
   0x6BB6D7D2:".streamingstub",
   0x1CA38E06:"",
   0x15E1F32E:"",
   0x4864737B:".hkdestruction",
   0x91043F65:".hknondestruction",
   0x51A3C853:".ant",
   0xD070EED1:".animtrackdata",
   0x319D8CD0:".ragdoll",
   0x49B156D4:".mesh",
   0x30B4A553:".occludermesh",
   0x5BDFDEFE:".lightingsystem",
   0x70C5CB3E:".enlighten",
   0xE156AF73:".probeset",
   0x7AEFC446:".staticenlighten",
   0x59CEEB57:".shaderdatabase",
   0x36F3F2C0:".shaderdb",
   0x10F0E5A1:".shaderprogramdb",
   0xC6DBEE07:".mohwspecific"
}


def dump(tocName,outpath):
   try:
       toc=sbtoc.Superbundle(tocName)
   except IOError:
       return

   sb=open(toc.fullpath+".sb","rb")

   chunkPathToc=os.path.join(outpath,"chunks")+"\\"
   #
   bundlePath=os.path.join(outpath,"bundles")+"\\"
   ebxPath=bundlePath+"ebx\\"
   dbxPath=bundlePath+"dbx\\"       
   resPath=bundlePath+"res\\"
   chunkPath=bundlePath+"chunks\\"


   if "cas" in toc.entry.elems and toc.entry.elems["cas"].content==True:
       #deal with cas bundles => ebx, dbx, res, chunks. 
       for tocEntry in toc.entry.elems["bundles"].content: #id offset size, size is redundant
           sb.seek(tocEntry.elems["offset"].content)
           bundle=sbtoc.Entry(sb)

           for listType in ["ebx","dbx","res","chunks"]: #make empty lists for every type to get rid of key errors (=> less indentation)
               if listType not in bundle.elems:
                   bundle.elems[listType]=Stub()
                   bundle.elems[listType].content=[]

           for entry in bundle.elems["ebx"].content: #name sha1 size originalSize
               casHandlePayload(entry,ebxPath+entry.elems["name"].content+".ebx")

           for entry in bundle.elems["dbx"].content: #name sha1 size originalSize
               if "idata" in entry.elems: #dbx appear only idata if at all, they are probably deprecated and were not meant to be shipped at all.
                   out=open2(dbxPath+entry.elems["name"].content+".dbx","wb")
                   if entry.elems["size"].content==entry.elems["originalSize"].content:
                       out.write(entry.elems["idata"].content)
                   else:          
                       out.write(zlibIdata(entry.elems["idata"].content))

                   out.close()

           for entry in bundle.elems["res"].content: #name sha1 size originalSize resType resMeta
               if entry.elems["resType"].content not in resTypes: #unknown res file type
                   casHandlePayload(entry,resPath+entry.elems["name"].content+" "+hexlify(entry.elems["resMeta"].content)+".unknownres"+hex2(entry.elems["resType"].content))
               elif entry.elems["resType"].content in (0x4864737B,0x91043F65,0x49B156D4,0xE156AF73,0x319D8CD0): #these 5 require resMeta. OccluderMesh might too, but it's always 16*ff
                   casHandlePayload(entry,resPath+entry.elems["name"].content+" "+hexlify(entry.elems["resMeta"].content)+resTypes[entry.elems["resType"].content])
               else:
                   casHandlePayload(entry,resPath+entry.elems["name"].content+resTypes[entry.elems["resType"].content])

           for entryNum in xrange(len(bundle.elems["chunks"].content)): #id sha1 size, chunkMeta::meta
               entry=bundle.elems["chunks"].content[entryNum]
               entryMeta=bundle.elems["chunkMeta"].content[entryNum]
               if entryMeta.elems["meta"].content=="\x00":
                   firstMip=""
               else:
                   firstMip=" firstMip"+str(unpack("B",entryMeta.elems["meta"].content[10])[0])

               casHandlePayload(entry,chunkPath+hexlify(entry.elems["id"].content)+firstMip+".chunk")


       #deal with cas chunks defined in the toc. 
       for entry in toc.entry.elems["chunks"].content: #id sha1
           casHandlePayload(entry,chunkPathToc+hexlify(entry.elems["id"].content)+".chunk")



   else:
       #deal with noncas bundles
       for tocEntry in toc.entry.elems["bundles"].content: #id offset size, size is redundant

           if "base" in tocEntry.elems: continue #Patched noncas bundle. However, use the unpatched bundle because no file was patched at all.
##          So I just skip the entire process and expect the user to extract all unpatched files on his own.

           sb.seek(tocEntry.elems["offset"].content)

           if "delta" in tocEntry.elems:
               #Patched noncas bundle. Here goes the hilarious part. Take the patched data and glue parts from the unpatched data in between.
               #When that is done (in memory of course) the result is a new valid bundle file that can be read like an unpatched one.

               deltaSize,DELTAAAA,nulls=unpack(">IIQ",sb.read(16))
               deltas=[]
               for deltaEntry in xrange(deltaSize/16):
                   delta=Stub()
                   delta.size,delta.fromUnpatched,delta.offset=unpack(">IIQ",sb.read(16))
                   deltas.append(delta)

               bundleStream=StringIO() #here be the new bundle data
               patchedOffset=sb.tell()

##unpatched: C:\Program Files (x86)\Origin Games\Battlefield 3\Update\Xpack2\Data\Win32\Levels\XP2_Palace\XP2_Palace.sb/toc
##patched:   C:\Program Files (x86)\Origin Games\Battlefield 3\Update\Patch\Data\Win32\Levels\XP2_Palace\XP2_Palace.sb/toc
#So at this point I am at the patched file and need to get the unpatched file path. Just how the heck...
#The patched toc itself contains some paths, but they all start at win32.
#Then again, the files are nicely named. I.e. XP2 translates to Xpack2 etc.

               xpNum=os.path.basename(toc.fullpath)[2] #XP2_Palace => 2
               unpatchedPath=toc.fullpath.lower().replace("patch","xpack"+str(xpNum))+".sb"

               unpatchedSb=open(unpatchedPath,"rb")

               for delta in deltas:
                   if not delta.fromUnpatched:
                       bundleStream.write(sb.read(delta.size))
                   else:
                       unpatchedSb.seek(delta.offset)
                       bundleStream.write(unpatchedSb.read(delta.size))
               unpatchedSb.close()
               bundleStream.seek(0)          
               bundle=Bundle.Bundle(bundleStream)
               sb2=bundleStream

           else:
               sb.seek(tocEntry.elems["offset"].content)
               bundle=Bundle.Bundle(sb)
               sb2=sb

           for entry in bundle.ebxEntries:
               noncasHandlePayload(sb2,entry,ebxPath+entry.name+".ebx")

           for entry in bundle.resEntries:
               if entry.resType not in resTypes: #unknown res file type
                   noncasHandlePayload(sb2,entry,resPath+entry.name+" "+hexlify(entry.resMeta)+".unknownres"+hex2(entry.resType))
               elif entry.resType in (0x4864737B,0x91043F65,0x49B156D4,0xE156AF73,0x319D8CD0):
                   noncasHandlePayload(sb2,entry,resPath+entry.name+" "+hexlify(entry.resMeta)+resTypes[entry.resType])
               else:
                   noncasHandlePayload(sb2,entry,resPath+entry.name+resTypes[entry.resType])


           for entry in bundle.chunkEntries:
               if entry.meta=="\x00":
                   firstMip=""
               else:
                   firstMip=" firstMip"+str(unpack("B",entry.meta[10])[0])
               noncasHandlePayload(sb2,entry,chunkPath+hexlify(entry.id)+firstMip+".chunk")

       #deal with noncas chunks defined in the toc
       for entry in toc.entry.elems["chunks"].content: #id offset size
           entry.offset,entry.size = entry.elems["offset"].content,entry.elems["size"].content #to make the function work
           noncasHandlePayload(sb,entry,chunkPathToc+hexlify(entry.elems["id"].content)+".chunk")
   sb.close()


def noncasHandlePayload(sb,entry,outPath):
   if os.path.exists(lp(outPath)): return
   print outPath
   sb.seek(entry.offset)
   out=open2(outPath,"wb")
   if "originalSize" in vars(entry):
       if entry.size==entry.originalSize:
           out.write(sb.read(entry.size))
       else:
           out.write(zlibb(sb,entry.size))
   else:
       out.write(zlibb(sb,entry.size))
   out.close()


if catName!="":
   cat=Cat(catName)

   if "update" in tocRoot.lower():
       cat2=Cat(patchedCatName)
       def casHandlePayload(entry,outPath): #this version searches the patched cat first
           if os.path.exists(lp(outPath)): return #don't overwrite existing files to speed up things
           print outPath
           if "originalSize" in entry.elems:
               compressed=False if entry.elems["size"].content==entry.elems["originalSize"].content else True #I cannot tell for certain if this is correct. I do not have any negative results though.
           else:
               compressed=True
           if "idata" in entry.elems:
               out=open2(outPath,"wb")
               if compressed: out.write(zlibIdata(entry.elems["idata"].content))
               else:          out.write(entry.elems["idata"].content)

           else:        
               try:
                   catEntry=cat2.entries[entry.elems["sha1"].content]
                   activeCat=cat2
               except:
                   catEntry=cat.entries[entry.elems["sha1"].content]
                   activeCat=cat
               out=open2(outPath,"wb") #don't want to create an empty file in case an error pops up
               if compressed: out.write(activeCat.grabPayloadZ(catEntry))
               else:          out.write(activeCat.grabPayload(catEntry))

           out.close()


   else:
       def casHandlePayload(entry,outPath): #this version uses the unpatched cat only
           if os.path.exists(lp(outPath)): return #don't overwrite existing files to speed up things
           print outPath
           if "originalSize" in entry.elems:
               compressed=False if entry.elems["size"].content==entry.elems["originalSize"].content else True #I cannot tell for certain if this is correct. I do not have any negative results though.
           else:
               compressed=True
           if "idata" in entry.elems:
               out=open2(outPath,"wb")
               if compressed: out.write(zlibIdata(entry.elems["idata"].content))
               else:          out.write(entry.elems["idata"].content)
           else:        
               catEntry=cat.entries[entry.elems["sha1"].content]
               out=open2(outPath,"wb") #don't want to create an empty file in case an error pops up
               if compressed: out.write(cat.grabPayloadZ(catEntry))
               else:          out.write(cat.grabPayload(catEntry))
           out.close()



def main():
   for dir0, dirs, ff in os.walk(tocRoot):
       for fname in ff:
           if fname[-4:]==".toc":
               print fname
               fname=dir0+"\\"+fname
               dump(fname,outputfolder)

outputfolder=os.path.normpath(outputfolder)
main()

Edited by Frankelstner

Nope. The file system is a giant clusterfuck with two or three layers of files before reaching the actual data files. The itexture res files have a certain structure which includes all info necessary to make dds out of them. Writing a converter however is quite painful because it's a lot of trial-and-error for the itexture part, and the official dds documentation is bad too.

Just take a look at any itexture in hex. There is a guid specified in the itexture file and a file with this guid as a name can be found in the chunks section. The devs have conveniently cut off the dds header so you need to read out the info from the itexture to recreate an actual dds header to be glued to the start of the chunk.

This is not to say that it's impossible; I put together plenty of textures by hand a few months ago together with kiwidog, who seems to be a bit ahead of me when it comes to textures, actually. Just beware that different tools will complain at different points when dealing with semi-correct headers. E.g. Paint.NET and GIMP might give an error when opening a file that IrfanView handles without problems, and for the next file IrfanView and GIMP give errors, etc.

I had begun work on a script back then; here's an (obviously non-working) snippet about the itexture file structure (I can't guarantee that everything is correct):

class ITexture:
   def __init__(self,f):
       values=unpack("IIIIHHHHI",f.read(28))
       self.version=values[0]
       self.type=values[1]
       self.format=values[2]
       self.flags=values[3]
       self.width=values[4]
       self.height=values[5]
       self.depth=values[6]
       self.sliceCount=values[7]
       self.pitch=values[8]
       self.id=f.read(16)
       self.mipSizes=unpack("15I",f.read(60))
       self.mipChainSize=unpack("I",f.read(4))[0]
       self.h32=unpack("I",f.read(4))[0]
       self.textureGroup=f.read(16)

[...]

if itex.format==0: fourCC="DXT1"
elif itex.format==1: fourCC="DXT3"
elif itex.format==2: fourCC="DXT5"
##elif itex.format==3: fourCC="DXT5A"
elif itex.format==9: fourCC="MET1"
elif itex.format==10: fourCC=pack("I",50)
elif itex.format==11: fourCC=pack("I",81)
##elif itex.format==17: fourCC=NormalDXN
elif itex.format==18: fourCC="DXT1"
elif itex.format==19: fourCC="DXT5"
##elif itex.format==20: fourCC=NormalDXT5RGA
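
To illustrate the header gluing described above, here's a minimal sketch of a DDS header builder for the plain DXT formats, using the fields from the ITexture snippet. It ignores mip counts, cubemaps and uncompressed formats, and the chunk payload still has to be decompressed first; a starting point, not a finished converter:

from struct import pack

def ddsHeader(itex,fourCC):
   #"DDS " magic followed by the 124 byte header; flags = CAPS|HEIGHT|WIDTH|PIXELFORMAT
   header ="DDS "+pack("<7I",124,0x1|0x2|0x4|0x1000,itex.height,itex.width,itex.pitch,0,0)
   header+=pack("<11I",*[0]*11) #reserved
   header+=pack("<2I",32,0x4)+fourCC+pack("<5I",0,0,0,0,0) #pixelformat, DDPF_FOURCC
   header+=pack("<5I",0x1000,0,0,0,0) #caps: DDSCAPS_TEXTURE
   return header

The 128 bytes returned are glued to the front of the decompressed chunk payload to get a .dds file.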

Edited by Frankelstner

Here is my contribution. It extracts, as far as I know, 100% working DDS DXT1/3/5 images.

CTexture.cs - IceEditor's code for creating new DDS files.

/*
* CTexture.cs
* By: kiwidog
* Generation of RAW/DDS file formats from the Frostbite 2 ITexture/Chunk Data
* http://allenthinks.com
* kiwidoggie productions (c) 2012-2013
*/

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;
using EndianIO;
using zlib;
namespace IceEditor.Editor
{
   class CTexture
   {
       enum D3DFORMAT : uint
       {
           D3DFMT_UNKNOWN              =  0,

           D3DFMT_R8G8B8               = 20,
           D3DFMT_A8R8G8B8             = 21,
           D3DFMT_X8R8G8B8             = 22,
           D3DFMT_R5G6B5               = 23,
           D3DFMT_X1R5G5B5             = 24,
           D3DFMT_A1R5G5B5             = 25,
           D3DFMT_A4R4G4B4             = 26,
           D3DFMT_R3G3B2               = 27,
           D3DFMT_A8                   = 28,
           D3DFMT_A8R3G3B2             = 29,
           D3DFMT_X4R4G4B4             = 30,
           D3DFMT_A2B10G10R10          = 31,
           D3DFMT_A8B8G8R8             = 32,
           D3DFMT_X8B8G8R8             = 33,
           D3DFMT_G16R16               = 34,
           D3DFMT_A2R10G10B10          = 35,
           D3DFMT_A16B16G16R16         = 36,

           D3DFMT_A8P8                 = 40,
           D3DFMT_P8                   = 41,

           D3DFMT_L8                   = 50,
           D3DFMT_A8L8                 = 51,
           D3DFMT_A4L4                 = 52,

           D3DFMT_V8U8                 = 60,
           D3DFMT_L6V5U5               = 61,
           D3DFMT_X8L8V8U8             = 62,
           D3DFMT_Q8W8V8U8             = 63,
           D3DFMT_V16U16               = 64,
           D3DFMT_A2W10V10U10          = 67,

           D3DFMT_UYVY                 = 0x59565955,
           D3DFMT_R8G8_B8G8            = 0x47424752,
           D3DFMT_YUY2                 = 0x32595559,
           D3DFMT_G8R8_G8B8            = 0x42475247,
           D3DFMT_DXT1                 = 0x31545844,
           D3DFMT_DXT2                 = 0x32545844,
           D3DFMT_DXT3                 = 0x33545844,
           D3DFMT_DXT4                 = 0x34545844,
           D3DFMT_DXT5                 = 0x35545844,

           D3DFMT_D16_LOCKABLE         = 70,
           D3DFMT_D32                  = 71,
           D3DFMT_D15S1                = 73,
           D3DFMT_D24S8                = 75,
           D3DFMT_D24X8                = 77,
           D3DFMT_D24X4S4              = 79,
           D3DFMT_D16                  = 80,

           D3DFMT_D32F_LOCKABLE        = 82,
           D3DFMT_D24FS8               = 83,

           D3DFMT_D32_LOCKABLE         = 84,
           D3DFMT_S8_LOCKABLE          = 85,

           D3DFMT_L16                  = 81,

           D3DFMT_VERTEXDATA           =100,
           D3DFMT_INDEX16              =101,
           D3DFMT_INDEX32              =102,

           D3DFMT_Q16W16V16U16         =110,

           D3DFMT_MULTI2_ARGB8         = 0x3154454D,

           D3DFMT_R16F                 = 111,
           D3DFMT_G16R16F              = 112,
           D3DFMT_A16B16G16R16F        = 113,

           D3DFMT_R32F                 = 114,
           D3DFMT_G32R32F              = 115,
           D3DFMT_A32B32G32R32F        = 116,

           D3DFMT_CxV8U8               = 117,

           D3DFMT_A1                   = 118,
           D3DFMT_A2B10G10R10_XR_BIAS  = 119,
           D3DFMT_BINARYBUFFER         = 199,

           D3DFMT_FORCE_DWORD          =0x7fffffff
       }
       enum DDSFLAGS : uint
       {
           DDSD_CAPS = 0x1,
           DDSD_HEIGHT = 0x2,
           DDSD_WIDTH = 0x4,
           DDSD_PITCH = 0x8,
           DDSD_PIXELFORMAT = 0x1000,
           DDSD_MIPMAPCOUNT = 0x20000,
           DDSD_LINEARSIZE = 0x80000,
           DDSD_DEPTH = 0x800000
       }

       enum DDSCAPS : uint
       {
           DDSCAPS_COMPLEX = 0x8,
           DDSCAPS_MIPMAP = 0x400000,
           DDSCAPS_TEXTURE = 0x1000 // Required
       }

       enum DDSCAPS2 : uint
       {
           DDSCAPS2_CUBEMAP = 0x200,
           DDSCAPS2_CUBEMAP_POSITIVEX = 0x400,
           DDSCAPS2_CUBEMAP_NEGATIVEX = 0x800,
           DDSCAPS2_CUBEMAP_POSITIVEY = 0x1000,
           DDSCAPS2_CUBEMAP_NEGATIVEY = 0x2000,
           DDSCAPS2_CUBEMAP_POSITIVEZ = 0x4000,
           DDSCAPS2_CUBEMAP_NEGATIVEZ = 0x8000,
           DDSCAPS2_VOLUME = 0x200000
       }

       enum DDSPXLFMTFLAGS : uint
       {
           DDPF_ALPHAPIXELS = 0x1,
           DDPF_ALPHA = 0x2,
           DDPF_FOURCC = 0x4,
           DDPF_RGB = 0x40,
           DDPF_YUV = 0x200,
           DDPF_LUMINANCE = 0x20000
       }

       class DDSPixelFormat
       {
           public uint m_size;
           public DDSPXLFMTFLAGS m_flags;
           public D3DFORMAT m_fourCC; // DXT1
           public uint m_rgbBitCount;
           public uint m_rBitMask; // Red
           public uint m_gBitMask; // Green
           public uint m_bBitMask; // Blue
           public uint m_aBitMask; // Alpha

           public DDSPixelFormat()
           {
               m_size = 32;
           }
       }

       class DDSHeader
       {
           public uint m_magic; // "DDS " or 0x44445320
           public uint m_size; // Always 124
           public uint m_flags; //
           public uint m_height;
           public uint m_width;
           public uint m_pitchOrLinearSize; // Big Endian
           public uint m_depth; // 0?
           public uint m_mipMapCount; // 1?
           public byte[] m_reserved; // Size=0x2C, its unused so we can put text here
           public DDSPixelFormat m_fmt;
           public DDSCAPS m_caps;
           public uint m_caps2;
           public uint m_caps3;
           public uint m_caps4;
           public uint m_reserved2; // Unused

           public DDSHeader()
           {
               m_magic = 0x20534444; // "DDS "
               m_size = 124;
               m_reserved2 = 0;
               m_fmt = new DDSPixelFormat();
           }
       }

       class ImageChunk
       {
           public uint m_uncompressedSize; // Non Compressed Size
           public uint m_compressedSize;
           public byte[] m_data;

           public ImageChunk(Stream stream)
           {
               EndianReader br = new EndianReader(stream, EndianType.BigEndian);
               m_uncompressedSize = br.ReadUInt32();
               m_compressedSize = br.ReadUInt32();
               m_data = br.ReadBytes((int)m_compressedSize);
           }
       }

       List<ImageChunk> m_chunks = new List<ImageChunk>();

       /// <summary>
       /// Reads out of a chunk file, uses the resource data to generate a working DDS file
       /// Assumes that the only type used in Frostbite 2 is DXT1
       /// </summary>
       /// <param name="chunkFile">File path to the chunk file (resource.id chunk)</param>
       /// <param name="outputFile">File path to the final output .dds file</param>
       public void GenerateDDSFile(string chunkFile, string outputFile, ITexture info)
       {
           EndianReader br = new EndianReader(new FileStream(chunkFile, FileMode.Open, FileAccess.Read), EndianType.BigEndian);
           while (br.BaseStream.Position < br.BaseStream.Length)
           {
               m_chunks.Add(new ImageChunk(br.BaseStream));
           }
           br.Close();

           BinaryWriter bw = new BinaryWriter(new FileStream(outputFile, FileMode.Create, FileAccess.Write));
           // Generate the Header and PixelFormatData
           DDSHeader dds = new DDSHeader();
           dds.m_flags = (uint)(DDSFLAGS.DDSD_CAPS | DDSFLAGS.DDSD_HEIGHT | DDSFLAGS.DDSD_WIDTH | DDSFLAGS.DDSD_PIXELFORMAT); // Default, others can be added later
           dds.m_height = info.m_height;
           dds.m_width = info.m_width;
           dds.m_pitchOrLinearSize = info.m_pitch;
           dds.m_depth = info.m_depth;
           dds.m_mipMapCount = 15;
           dds.m_reserved = new byte[4*11];
           dds.m_fmt.m_flags = DDSPXLFMTFLAGS.DDPF_FOURCC;

           // Texture Specific Settings
           switch ((TextureFormat)info.m_format)
           {
               case TextureFormat.TextureFormat_DXT1:
                   dds.m_fmt.m_fourCC = D3DFORMAT.D3DFMT_DXT1;
                   break;
               case TextureFormat.TextureFormat_DXT3:
                   dds.m_fmt.m_fourCC = D3DFORMAT.D3DFMT_DXT3;
                   break;
               case TextureFormat.TextureFormat_DXT5:
                   dds.m_fmt.m_fourCC = D3DFORMAT.D3DFMT_DXT5;

                   break;
               default:
                   break;
           }
           // General Texture Settings
           switch ((TextureFormat)info.m_format)
           {
               case TextureFormat.TextureFormat_DXT1:
               case TextureFormat.TextureFormat_DXT3:
               case TextureFormat.TextureFormat_DXT5:
                   dds.m_fmt.m_flags |= DDSPXLFMTFLAGS.DDPF_RGB;
                   dds.m_fmt.m_rgbBitCount = 32;
                   dds.m_fmt.m_rBitMask = 0x000000FF;
                   dds.m_fmt.m_gBitMask = 0x0000FF00;
                   dds.m_fmt.m_bBitMask = 0x00FF0000;
                   dds.m_fmt.m_aBitMask = 0xFF000000;
                   break;
               default:
                   break;
           }            

           dds.m_caps = DDSCAPS.DDSCAPS_TEXTURE;
           dds.m_caps2 = 0;
           dds.m_caps3 = 0;
           dds.m_caps4 = 0;

           bw.Write(dds.m_magic);
           bw.Write(dds.m_size);
           bw.Write(dds.m_flags);
           bw.Write(dds.m_height);
           bw.Write(dds.m_width);
           bw.Write(dds.m_pitchOrLinearSize);
           bw.Write(dds.m_depth);
           bw.Write(dds.m_mipMapCount);
           bw.Write(dds.m_reserved);
           bw.Write(dds.m_fmt.m_size);
           bw.Write((uint)dds.m_fmt.m_flags);
           bw.Write((uint)dds.m_fmt.m_fourCC);
           bw.Write(dds.m_fmt.m_rgbBitCount);
           bw.Write(dds.m_fmt.m_rBitMask);
           bw.Write(dds.m_fmt.m_gBitMask);
           bw.Write(dds.m_fmt.m_bBitMask);
           bw.Write(dds.m_fmt.m_aBitMask);
           bw.Write((uint)dds.m_caps);
           bw.Write(dds.m_caps2);
           bw.Write(dds.m_caps3);
           bw.Write(dds.m_caps4);
           bw.Write(dds.m_reserved2);
           // Loop through all the chunks
           foreach (ImageChunk img in m_chunks)
           {
               if (img.m_compressedSize == img.m_uncompressedSize)
                   bw.Write(img.m_data);
               else
               {
                   Stream m_final = new MemoryStream();
                   ZOutputStream outStream = new ZOutputStream(m_final);
                   MemoryStream compressedData = new MemoryStream(img.m_data);
                   try
                   {
                       CResource.CopyStream(compressedData, outStream);
                   }
                   finally
                   {
                       compressedData.Close();
                       //outStream.Close();
                   }
                   m_final.Position = 0;
                   BinaryReader brx = new BinaryReader(m_final);
                   bw.Write(brx.ReadBytes((int)brx.BaseStream.Length));
                   brx.Close();
               }
           }
           bw.Close();
       }
   }
}

ITexture.cs - Bad Company 2/Battlefield 3 format for Texture Resources

/*
* ITexture.cs
* By: kiwidog
* Frostbite 2 ITexture data structure
* http://allenthinks.com
* kiwidoggie productions (c) 2012-2013
*/

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;

namespace IceEditor.Editor
{
   // also for the FourCC code look at http://msdn.microsoft.com/en-us/library/windows/desktop/bb172558%28v=vs.85%29.aspx
   enum TextureFormat : uint // found by searching pastebin for Bad Company 2 ITexture resources
   {
       TextureFormat_DXT1 = 0x0,
       TextureFormat_DXT3 = 0x1,
       TextureFormat_DXT5 = 0x2,
       TextureFormat_DXT5A = 0x3,
       TextureFormat_DXN = 0x4,
       TextureFormat_RGB565 = 0x5,
       TextureFormat_RGB888 = 0x6,
       TextureFormat_ARGB1555 = 0x7,
       TextureFormat_ARGB4444 = 0x8,
       TextureFormat_ARGB8888 = 0x9,
       TextureFormat_L8 = 0xA,
       TextureFormat_L16 = 0xB,
       TextureFormat_ABGR16 = 0xC,
       TextureFormat_ABGR16F = 0xD,
       TextureFormat_ABGR32F = 0xE,
       TextureFormat_R16F = 0xF,
       TextureFormat_R32F = 0x10,
       TextureFormat_NormalDXN = 0x11,
       TextureFormat_NormalDXT1 = 0x12,
       TextureFormat_NormalDXT5 = 0x13,
       TextureFormat_NormalDXT5RGA = 0x14,
       TextureFormat_RG8 = 0x15,
       TextureFormat_GR16 = 0x16,
       TextureFormat_GR16F = 0x17,
       TextureFormat_D16 = 0x18,
       TextureFormat_D24S8 = 0x19,
       TextureFormat_D24FS8 = 0x1A,
       TextureFormat_D32F = 0x1B,
       TextureFormat_ABGR32 = 0x1C,
       TextureFormat_GR32F = 0x1D,
   };

   enum TextureType : uint // pastebin
   {
       TextureType_1d = 0x5,
       TextureType_1dArray = 0x4,
       TextureType_2d = 0x0,
       TextureType_2dArray = 0x3,
       TextureType_Cube = 0x1,
       TextureType_3d = 0x2,
   };

   class ITexture
   {
       public uint m_version;
       public uint m_type; // TextureType enum
       public uint m_format; // TextureFormat enum
       public uint m_flags;
       public ushort m_width;
       public ushort m_height;
       public ushort m_depth;
       public ushort m_sliceCount;
       public uint m_pitch; // BigEndian Pitch
       public byte[] m_id; // Len=16
       public uint[] m_mipMapSizes; // DWORD[15]
       public uint m_mipMapChainSize;
       public uint m_resourceNameHash; // same as h32 in Resource Chunk Data, fnvHash of the resFilePath [2:42:07 PM] Frank: weapons/a91/a91_d = 1494063087L
       public string m_textureGroup; // char[16]

       public CTexture m_outputFile; // Create a usable file out of this bullshit we got.

       public ITexture(Stream stream)
       {
           BinaryReader br = new BinaryReader(stream);
           m_version = br.ReadUInt32();
           m_type = br.ReadUInt32();
           m_format = br.ReadUInt32();
           m_flags = br.ReadUInt32();
           m_width = br.ReadUInt16();
           m_height = br.ReadUInt16();
           m_depth = br.ReadUInt16();
           m_sliceCount = br.ReadUInt16();
           m_pitch = br.ReadUInt32();
           m_id = br.ReadBytes(16);
           List<uint> m_mipMaps = new List<uint>();
           for (int i = 0; i < 15; i++)
               m_mipMaps.Add(br.ReadUInt32());
           m_mipMapSizes = m_mipMaps.ToArray();
           m_mipMapChainSize = br.ReadUInt32();
           m_resourceNameHash = br.ReadUInt32();
           m_textureGroup = new string(br.ReadChars(16));
       }

   }
}


This is very exciting, guys. Thank you all for doing a great job on this. I got some sound and text files extracted, but I've yet a lot to learn on how to use this properly.

A few questions though:

I can see that I get a mesh file, i.e. \bundles\res\characters\headgear\us_helmet01_mesh\400f000000000000f800000070009400.mesh. Is this the actual geometry file? If so, can the file format be read?

If not, would it help having the same-scale geometry in a known file format, to convert the 400f000000000000f800000070009400.mesh file to a known format?

I have ripped models from the game using a dx11 ripper and successfully imported them into 3ds Max, and semi-successfully re-applied the textures.

[image: Back,Front,Right(MR)GammaONSmall.jpg]

However there are some issues down the line and I would like to investigate further.

I have ripped and imported GrowlerITV model into 3ds max.

[image: Growler_preview.png]

It seems that the main mesh brings some extra geometry in (all four wheels, the turret mount, and a few other props), which seems to cause problems later on.

To position the wheels (point helpers) in space I have used the information found in the ..\GrowlerITV.txt file under "WheelComponentData" trans::Vec3, which seems to correctly correspond to positions in 3ds Max world space.

[image: Growler_Points.png]

"VehicleExitPointComponentData"

[image: Growler_Points2.png]

So I guess it is safe to assume that we have a solid reference to world-space data between Max and the game files.

Now for the textures:

The dx11 ripper grabs all the textures used for certain draw calls; they are saved as .dds files. My question is: can those texture files be used to interpret the .itexture file payloads?

Edited by dainiuxxx

Interesting. @Kiwidog, can I get a confirmation that this works for BC2? If so, do I have your permission to integrate it into my mod tools with due credit? (I might need some pointers seeing as the code isn't complete.)

http://www.bfeditor.org/forums/index.php?showtopic=15783&pid=106368&st=0

Seems like once we add textures, we pretty much have the file-based tools needed for a completely functional mod (minus changes in game logic).

Edited by rukqoa

So, I can't seem to get the dump script to work. Every time I try to run it I get this error:

Traceback (most recent call last):
 File "C:\Users\Boxman\Desktop\python stuff\dumper.py", line 1, in <module>
   import sbtoc
 File "C:\Users\Boxman\Desktop\python stuff\sbtoc.py", line 1
   Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
            ^
SyntaxError: invalid syntax

Could someone tell me why this might be? I copied and pasted everything to a "T", I even tried it twice, and made sure all the scripts were named correctly as well.


Interesting. @Kiwidog, can I get a confirmation that this works for BC2? If so, do I have your permission to integrate it into my mod tools with due credit? (I might need some pointers seeing as the code isn't complete.)

http://www.bfeditor.org/forums/index.php?showtopic=15783&pid=106368&st=0

Seems like once we add textures, we pretty much have the file-based tools needed for a completely functional mod (minus changes in game logic).

There are different itextures in bc2. In fact the only lead that made me add an itexture extension to those files in bf3 was the resType (merely 4 bytes) which spelled "/IT." as well as the purpose of these files.

Traceback (most recent call last):
 File "C:\Users\Boxman\Desktop\python stuff\dumper.py", line 1, in <module>
   import sbtoc
 File "C:\Users\Boxman\Desktop\python stuff\sbtoc.py", line 1
   Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
            ^
SyntaxError: invalid syntax

Ahem. This says that the first line in sbtoc.py is "Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32". The first line should read "import sys". Just copy-paste each script into a normal text file, then rename it. Edit the dumper with IDLE and then press F5 with the script window open (NOT the console). The Python console (which usually spits out these version numbers) will then automatically open up while the script is running.


There are different itextures in bc2. In fact the only lead that made me add an itexture extension to those files in bf3 was the resType (merely 4 bytes) which spelled "/IT." as well as the purpose of these files.

Traceback (most recent call last):
 File "C:\Users\Boxman\Desktop\python stuff\dumper.py", line 1, in <module>
   import sbtoc
 File "C:\Users\Boxman\Desktop\python stuff\sbtoc.py", line 1
   Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
            ^
SyntaxError: invalid syntax

Ahem. This says that the first line in sbtoc.py is "Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32". The first line should read "import sys". Just copy-paste each script into a normal text file, then rename it. Edit the dumper with IDLE and then press F5 with the script window open (NOT the console). The Python console (which usually spits out these version numbers) will then automatically open up while the script is running.

Thanks for the insight, I got it to work with your tips!

Edited by boxman500

Have you tried the .mesh files?

I can see there are GUID references in *.mesh files, but when I search for the GUIDs among the chunk files I can't find any matches.

No idea, I don't know anything about mesh files in general and haven't taken a look at the bf3 ones, sorry.

Is it possible to re-pack files that only store packed cascat files? E.g. cascat in sbtoc.

I tried hard a long time ago and failed even harder. I do not know of a way to correctly outsource files from sbtoc into cascat, or to put them from cascat back into sbtoc.

Sbtoc handles metadata in a very different way depending on whether the files are stored in cascat or directly in the sbtoc. In particular, sbtoc contains some metadata for textures in the archive itself. I have not been able to correctly match the metadata between the two sbtoc types. Here are most of my notes: http://pastebin.com/YPNsmxUf

The challenge is to match rangeStart, rangeEnd and logicalOffset to the respective values in a noncas archive. I've given up on that problem as it seems not worth the effort. And seeing that zlib error in my notes, with a file made of plenty of zlib pieces (zlib = first two bytes are 78da) that suddenly decides to have some non-zlib pieces in between, I think it is for the better. I don't think I could suppress my rage when dealing with this a second time.


No idea, I don't know anything about mesh files in general and haven't taken a look at the bf3 ones, sorry.

I tried hard a long time ago and failed even harder. I do not know of a way to correctly outsource files from sbtoc into cascat, or to put them from cascat back into sbtoc.

Sbtoc handles metadata in a very different way depending on whether the files are stored in cascat or directly in the sbtoc. In particular, sbtoc contains some metadata for textures in the archive itself. I have not been able to correctly match the metadata between the two sbtoc types. Here are most of my notes: http://pastebin.com/YPNsmxUf

The challenge is to match rangeStart, rangeEnd and logicalOffset to the respective values in a noncas archive. I've given up on that problem as it seems not worth the effort. And seeing that zlib error in my notes, with a file made of plenty of zlib pieces (zlib = first two bytes are 78da) that suddenly decides to have some non-zlib pieces in between, I think it is for the better. I don't think I could suppress my rage when dealing with this a second time.

Thanks a lot for your reply, and wow I didn't know game files could make one suicidal! :P



Where are those chunk files in the cas files? I'm not familiar with Python and tried to find the content of some chunk files in the cas archives, but with no success. I know it's not possible to re-pack the files, but it should be possible to modify them in the cas files directly, without extraction, I guess.


Where are those chunk files in the cas files? I'm not familiar with Python and tried to find the content of some chunk files in the cas archives, but with no success. I know it's not possible to re-pack the files, but it should be possible to modify them in the cas files directly, without extraction, I guess.

CAS is content-addressed storage, so the cascat structure itself does not care about file types. Going from sbtoc, however, there are two different places where chunks are stored. In the toc files there's a list of chunks with the necessary info to extract the files (but strangely no other metadata). The individual bundles in the sb files also have a list of chunks, with some extra metadata when textures are involved.

E.g. here's the first case (a cascat-based toc file): http://i.imgur.com/bh8CsU5.jpg

And here's the second case (cascat-based sb file): http://i.imgur.com/De6EXwt.jpg

So I think my notes are somewhat incomplete with regard to chunks; it should be:

cascat-based toc:
 bundles 01
   id 07
   offset 09
   size 08
 chunks 01
   id 0f
   sha1 10
 cas 06
 name 07
 alwaysEmitSuperbundle 06

I have the feeling that this is not helpful at all, but I'm afraid there's no easy answer. If you already know of a certain chunk file you want to modify, then you can use its id (used as the filename by my dumper), make a quick search in the sbtoc files, and from there get the sha1 to find it in the cascat archives (see the sketch below).

However, even then the merit should be marginal, as the chunk files themselves are just raw data. E.g. there's no audio encoder, and modifying audio would require making changes to some ebx files as well (e.g. for file size), which is impossible at the moment. Modifying textures is tough as hell because the devs decided to cut a texture into little pieces and zlib-compress each piece (or rather, most pieces; there are some random non-zlib pieces scattered in between) before gluing all pieces together and making this the chunk file. Altering a texture will probably fuck up the file size by a few bytes because of that, so you need to adjust cascat too. Considering I haven't been able to pull this off myself, there might be even more to allow for that I'm not aware of.
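
For illustration, that lookup in code; a minimal sketch assuming the Cat class and its helpers from dumper.py are in scope (importing dumper.py directly would kick off the whole dump, so copy the class instead), and with a made-up sha1:

from binascii import unhexlify

cat=Cat(r"C:\Program Files (x86)\Origin Games\Battlefield 3\Data\cas.cat")
sha1=unhexlify("0123456789abcdef0123456789abcdef01234567") #hypothetical, taken from the bundle entry
catEntry=cat.entries[sha1]
payload=cat.grabPayloadZ(catEntry) #decompressed; grabPayload gives the raw bytes instead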


It's XORed. The key is 257 bytes long and starts at byte 296: http://i.imgur.com/v35T2HT.jpg After the key there are three nulls, and after those the payload begins. The result of this XOR must then be XORed with 123 (= 0x7b) to obtain the actual data. You may want to look at the unXOR function in sbtoc.py for reference.

Here's my brute force unXORer script that doesn't care about any signatures whatsoever and will happily "unXOR" any file that is dropped on it via drag and drop.

import os
import sys
from cStringIO import StringIO

def unXOR(f):
   f.seek(296)
   magic=[ord(f.read(1)) for i in xrange(260)] #bytes 257 258 259 are not used
   data=f.read()
   f.close()
   data2=[None]*len(data) #initialize the buffer
   for i in xrange(len(data)):
       data2[i]=chr(magic[i%257]^ord(data[i])^0x7b)
   return StringIO("".join(data2))

def writeFile(fname):
   f=open(fname,"rb")
   base,ext=os.path.splitext(fname)
   f2=open(base+" unXOR"+ext,"wb")
   f2.write(unXOR(f).getvalue())
   f2.close()


def main():
   for ff in sys.argv[1:]:
       if os.path.isfile(ff):
           writeFile(ff)
       else:
           for dir0,dirs,files in os.walk(ff):
               for f in files:
                   writeFile(dir0+"\\"+f)

try:  
   main()
except Exception, e:
   raw_input(e)

Now that I think about it, XORing the key itself with 123 should improve performance a bit and make things even simpler.
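
For what it's worth, a sketch of that variant (a drop-in replacement for unXOR above; untested, but XOR is associative, so the output should be identical):

def unXOR2(f):
   f.seek(296)
   magic=[ord(f.read(1))^0x7b for i in xrange(257)] #the 0x7b is folded into the key once
   f.seek(3,1) #skip the three unused bytes (257, 258, 259)
   data=f.read()
   f.close()
   return StringIO("".join(chr(magic[i%257]^ord(data[i])) for i in xrange(len(data))))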

Edited by Frankelstner


I don't understand how to allocate chunk files to the content of ebx files (not the ebx files that contain sound file information). I couldn't associate heightmaps with them...

I just found some information about the heightfield in default.txt in the level directories, but no chunk info.

Edited by apfelbaum


I've made some changes to dumper.py. It now supports extra long path names, handles empty or almost empty files correctly (it previously tried to read the first 10 bytes of a file to determine whether it is compressed or not) and can deal with unknown res types. Therefore, it can now be used with mohw files. If you deduce from these changes that mohw has files with ridiculously long names and other files containing less than 10 bytes, then you are correct.

