Official BF Editor Forums
kosa8237

Battlefield 3 audio files HELP

Recommended Posts

Hello! I can help you extract audio files from the current version of Battlefield 3!

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

1) To do this, you need to download the following scripts for Python 2.7.4 (x86), and Python 2.7.4 (x86) itself, from my links. (You need exactly this x86 version of Python, because the scripts will not work on another version!) https://www.mediafire.com/file/o45uvkyilhfi78r/BF3_SOUND_EXTRACT.zip/file - this archive contains all the scripts for extracting audio files from Battlefield 3.

2) https://www.python.org/ftp/python/2.7.4/python-2.7.4.msi - download the working Python 2.7.4 (x86) installer here.

3) Just in case the links stop working, I am also posting the scripts in full below. (If the links work, you do not need these copies!)

Duplicate scripts: 

dumper.py

import sbtoc
import Bundle
import os
from binascii import hexlify,unhexlify
from struct import pack,unpack
from cStringIO import StringIO
import sys
import zlib

####Adjust paths here. The script doesn't overwrite existing files so set tocRoot to the patched files first, then run the script again with the unpatched ones to get all files at their most recent version.

catName=r"D:\Program Files (x86)\Origin Games\Battlefield 3\Data\cas.cat"
patchedCatName=r"D:\Program Files (x86)\Origin Games\Battlefield 3\Update\Patch\Data\cas.cat" #used only when tocRoot contains "Update"

tocRoot=r"D:\Program Files (x86)\Origin Games\Battlefield 3\Update"
tocRoot=r"D:\Program Files (x86)\Origin Games\Battlefield 3\Data\Win32"

outputfolder="D:/bf3 dump/"


############
############


def zlibb(f, size):
    ###give back the data directly if it is not in zlib format
    v1,v2=unpack(">II",f.read(8))
    magic=f.read(2)
    f.seek(-10,1)
    if magic!="\x78\xda" and v1!=v2: return f.read(size)
    ###
    outStream=StringIO()
    pos0=f.tell()
    while f.tell()<pos0+size:
        uncompressedSize,compressedSize=unpack(">II",f.read(8)) #big endian
        if compressedSize!=uncompressedSize: outStream.write(zlib.decompress(f.read(compressedSize)))
        else:
            magic=f.read(2)
            f.seek(-2,1) #hope that no uncompressed part starts with 78da:
            if magic=="\x78\xda": outStream.write(zlib.decompress(f.read(compressedSize)))
            else:                 outStream.write(f.read(compressedSize))   
    data=outStream.getvalue()
    outStream.close()
    return data

def zlibIdata(bytestring):
    return zlibb(StringIO(bytestring),len(bytestring))


class Stub(): pass


class Cat:
    def __init__(self,catname):
        cat2=open(catname,"rb")
        cat=sbtoc.unXOR(cat2)

        self.casfolder=os.path.dirname(catname)+"\\"
        cat.seek(0,2)
        catsize=cat.tell()
        cat.seek(16)
        self.entries=dict()
        while cat.tell()<catsize:
            entry=Stub()
            sha1=cat.read(20)
            entry.offset, entry.size, entry.casnum = unpack("<III",cat.read(12))
            self.entries[sha1]=entry
        cat.close()
        cat2.close()
       
    def grabPayload(self,entry):
        cas=open(self.casfolder+"cas_"+("0"+str(entry.casnum) if entry.casnum<10 else str(entry.casnum))+".cas","rb")
        cas.seek(entry.offset)
        payload=cas.read(entry.size)
        cas.close()
        return payload
    def grabPayloadZ(self,entry):
        cas=open(self.casfolder+"cas_"+("0"+str(entry.casnum) if entry.casnum<10 else str(entry.casnum))+".cas","rb")
        cas.seek(entry.offset)
        payload=zlibb(cas,entry.size)
        cas.close()
        return payload
     
def open2(path,mode):
    #create folders if necessary and return the file handle
    folderPath=os.path.dirname(path)
    if not os.path.isdir(folderPath): os.makedirs(folderPath)
    return open(path,mode)

resTypes={
    0x5C4954A6:".itexture",
    0x2D47A5FF:".gfx",
    0x22FE8AC8:"",
    0x6BB6D7D2:".streamingstub",
    0x1CA38E06:"",
    0x15E1F32E:"",
    0x4864737B:".hkdestruction",
    0x91043F65:".hknondestruction",
    0x51A3C853:".ant",
    0xD070EED1:".animtrackdata",
    0x319D8CD0:".ragdoll",
    0x49B156D4:".mesh",
    0x30B4A553:".occludermesh",
    0x5BDFDEFE:".lightingsystem",
    0x70C5CB3E:".enlighten",
    0xE156AF73:".probeset",
    0x7AEFC446:".staticenlighten",
    0x59CEEB57:".shaderdatabase",
    0x36F3F2C0:".shaderdb",
    0x10F0E5A1:".shaderprogramdb"
}

def dump(tocName,outpath):
    try:
        toc=sbtoc.Superbundle(tocName)
    except IOError:
        return
    
    sb=open(toc.fullpath+".sb","rb")

    chunkPathToc=os.path.join(outpath,"chunks")+"/"
    #
    bundlePath=os.path.join(outpath,"bundles")+"/"
    ebxPath=bundlePath+"ebx/"
    dbxPath=bundlePath+"dbx/"       
    resPath=bundlePath+"res/"
    chunkPath=bundlePath+"chunks/"

    
    if "cas" in toc.entry.elems and toc.entry.elems["cas"].content==True:
        #deal with cas bundles => ebx, dbx, res, chunks. 
        for tocEntry in toc.entry.elems["bundles"].content: #id offset size, size is redundant
            sb.seek(tocEntry.elems["offset"].content)
            bundle=sbtoc.Entry(sb)

            for listType in ["ebx","dbx","res","chunks"]: #make empty lists for every type to get rid of key errors(=> less indendation)
                if listType not in bundle.elems:
                    bundle.elems[listType]=Stub()
                    bundle.elems[listType].content=[]
            
            for entry in bundle.elems["ebx"].content: #name sha1 size originalSize
                casHandlePayload(entry,ebxPath+entry.elems["name"].content+".ebx")
           
            for entry in bundle.elems["dbx"].content: #name sha1 size originalSize
                if "idata" in entry.elems: #dbx appear only idata if at all, they are probably deprecated and were not meant to be shipped at all.
                    out=open2(dbxPath+entry.elems["name"].content+".dbx","wb")
                    if entry.elems["size"].content==entry.elems["originalSize"].content:
                        out.write(entry.elems["idata"].content)
                    else:          
                        out.write(zlibIdata(entry.elems["idata"].content))
                    out.close()
           
            for entry in bundle.elems["res"].content: #name sha1 size originalSize resType resMeta
                if entry.elems["resType"].content in (0x4864737B,0x91043F65,0x49B156D4,0xE156AF73,0x319D8CD0): #these 5 require resMeta. OccluderMesh might too, but it's always 16*ff
                    casHandlePayload(entry,resPath+entry.elems["name"].content+" "+hexlify(entry.elems["resMeta"].content)+resTypes[entry.elems["resType"].content])
                else:
                    casHandlePayload(entry,resPath+entry.elems["name"].content+resTypes[entry.elems["resType"].content])
                
            for entryNum in xrange(len(bundle.elems["chunks"].content)): #id sha1 size, chunkMeta::meta
                entry=bundle.elems["chunks"].content[entryNum]
                entryMeta=bundle.elems["chunkMeta"].content[entryNum]
                if entryMeta.elems["meta"].content=="\x00":
                    firstMip=""
                else:
                    firstMip=" firstMip"+str(unpack("B",entryMeta.elems["meta"].content[10])[0])

                casHandlePayload(entry,chunkPath+hexlify(entry.elems["id"].content)+firstMip+".chunk")


        #deal with cas chunks defined in the toc. 
        for entry in toc.entry.elems["chunks"].content: #id sha1
            casHandlePayload(entry,chunkPathToc+hexlify(entry.elems["id"].content)+".chunk")

    else:
        #deal with noncas bundles
        for tocEntry in toc.entry.elems["bundles"].content: #id offset size, size is redundant
            sb.seek(tocEntry.elems["offset"].content)
            try:
                bundle=Bundle.Bundle(sb)
            except:
                print "Ignoring patched noncas bundle file from: "+toc.fullpath
                continue #

            for entry in bundle.ebxEntries:
                noncasHandlePayload(sb,entry,ebxPath+entry.name+".ebx")

            for entry in bundle.resEntries:
                if entry.resType in (0x4864737B,0x91043F65,0x49B156D4,0xE156AF73,0x319D8CD0):
                    noncasHandlePayload(sb,entry,resPath+entry.name+" "+hexlify(entry.resMeta)+resTypes[entry.resType])
                else:
                    noncasHandlePayload(sb,entry,resPath+entry.name+resTypes[entry.resType])


            for entry in bundle.chunkEntries:
                if entry.meta=="\x00":
                    firstMip=""
                else:
                    firstMip=" firstMip"+str(unpack("B",entry.meta[10])[0])
                noncasHandlePayload(sb,entry,chunkPath+hexlify(entry.id)+firstMip+".chunk")

        #deal with noncas chunks defined in the toc
        for entry in toc.entry.elems["chunks"].content: #id offset size
            entry.offset,entry.size = entry.elems["offset"].content,entry.elems["size"].content #to make the function work
            noncasHandlePayload(sb,entry,chunkPathToc+hexlify(entry.elems["id"].content)+".chunk")          

def noncasHandlePayload(sb,entry,outPath):
    if os.path.exists(outPath): return
    print outPath
    sb.seek(entry.offset)
    out=open2(outPath,"wb")
    if "originalSize" in vars(entry):
        if entry.size==entry.originalSize:
            out.write(sb.read(entry.size))
        else:
            out.write(zlibb(sb,entry.size))
    else:
        out.write(zlibb(sb,entry.size))
    out.close()



cat=Cat(catName)

if "Update" in tocRoot:
    cat2=Cat(patchedCatName)
    def casHandlePayload(entry,outPath): #this version searches the patched cat first
        if os.path.exists(outPath): return #don't overwrite existing files to speed up things
        print outPath
        if "originalSize" in entry.elems:
            compressed=False if entry.elems["size"].content==entry.elems["originalSize"].content else True #I cannot tell for certain if this is correct. I do not have any negative results though.
        else:
            compressed=True
        if "idata" in entry.elems:
            out=open2(outPath,"wb")
            if compressed: out.write(zlibIdata(entry.elems["idata"].content))
            else:          out.write(entry.elems["idata"].content)

        else:        
            try:
                catEntry=cat2.entries[entry.elems["sha1"].content]
                activeCat=cat2
            except:
                catEntry=cat.entries[entry.elems["sha1"].content]
                activeCat=cat
            out=open2(outPath,"wb") #don't want to create an empty file in case an error pops up
            if compressed: out.write(activeCat.grabPayloadZ(catEntry))
            else:          out.write(activeCat.grabPayload(catEntry))

        out.close()
        

else:
    def casHandlePayload(entry,outPath): #this version uses the unpatched cat only
        if os.path.exists(outPath): return #don't overwrite existing files to speed up things
        print outPath
        if "originalSize" in entry.elems:
            compressed=False if entry.elems["size"].content==entry.elems["originalSize"].content else True #I cannot tell for certain if this is correct. I do not have any negative results though.
        else:
            compressed=True
        if "idata" in entry.elems:
            out=open2(outPath,"wb")
            if compressed: out.write(zlibIdata(entry.elems["idata"].content))
            else:          out.write(entry.elems["idata"].content)
        else:        
            catEntry=cat.entries[entry.elems["sha1"].content]
            out=open2(outPath,"wb") #don't want to create an empty file in case an error pops up
            if compressed: out.write(cat.grabPayloadZ(catEntry))
            else:          out.write(cat.grabPayload(catEntry))
        out.close()

def main():
    for dir0, dirs, ff in os.walk(tocRoot):
        for fname in ff:
            if fname[-4:]==".toc":
                print fname
                fname=dir0+"\\"+fname
                dump(fname,outputfolder)        

main()
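
A note on the paths at the top of dumper.py: catName, patchedCatName, tocRoot and outputfolder are the only lines you normally need to edit. As a rough example only (the drive letters below are made up, use the ones from your own install and your own output drive):

catName=r"C:\Program Files (x86)\Origin Games\Battlefield 3\Data\cas.cat"
patchedCatName=r"C:\Program Files (x86)\Origin Games\Battlefield 3\Update\Patch\Data\cas.cat"
tocRoot=r"C:\Program Files (x86)\Origin Games\Battlefield 3\Update"        #first run: patched superbundles
#tocRoot=r"C:\Program Files (x86)\Origin Games\Battlefield 3\Data\Win32"   #second run: swap the comment to this line for the unpatched ones
outputfolder="E:/bf3 dump/"

Because the script never overwrites files it has already written, running the Update pass first and the Data\Win32 pass second leaves every file at its most recent version, exactly as the comment inside the script says.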

 

sbtoc.py

import sys
import os
from struct import unpack, pack
from binascii import hexlify, unhexlify
import zlib
from cStringIO import StringIO
from collections import OrderedDict
import Bundle

def read128(File):
   """Reads the next few bytes in a file as LEB128/7bit encoding and returns an integer"""
   result,i = 0,0
   while 1:
       byte=ord(File.read(1))
       result|=(byte&127)<<i
       if byte>>7==0: return result
       i+=7

def write128(integer):
   """Writes an integer as LEB128 and returns a byte string;
   roughly the inverse of read, but no files involved here"""
   bytestring=""
   while integer:
       byte=integer&127
       integer>>=7
       if integer: byte|=128
       bytestring+=chr(byte)
   return bytestring

def readNullTerminatedString(f):
   result=""
   while 1:
       char=f.read(1)
       if char=="\x00": return result
       result+=char

def unXOR(f):
   magic=f.read(4)
   if magic not in ("\x00\xD1\xCE\x00","\x00\xD1\xCE\x01"):
       f.seek(0) #the file is not encrypted
       return f

   f.seek(296)
   magic=[ord(f.read(1)) for i in xrange(260)] #bytes 257 258 259 are not used
   data=f.read()
   f.close()
   data2=[None]*len(data) #initialize the buffer
   for i in xrange(len(data)):
       data2[i]=chr(magic[i%257]^ord(data[i])^0x7b)
   return StringIO("".join(data2))

class EntryEnd(Exception):
   def __init__(self, value): self.value = value
   def __str__(self): return repr(self.value)

class Entry:
   #Entries always start with a 82 byte and always end with a 00 byte.
   #They have their own size defined right after that and are just one subelement after another.
   #This size contains all bytes after the size until (and including) the 00 byte at the end.
   #Use the size as an indicator when to stop reading and raise errors when nullbytes are missing.
   def __init__(self,toc): #read the data from file
       entryStart=toc.read(1)
       if entryStart=="\x82": #raise Exception("Entry does not start with \x82 byte. Position: "+str(toc.tell()))
           self.elems=OrderedDict()
           entrySize=read128(toc)
           endPos=toc.tell()+entrySize 
           while toc.tell()<endPos-1: #-1 because of final nullbyte
               content=Subelement(toc)
               self.elems[content.name]=content
           if toc.read(1)!="\x00": raise Exception("Entry does not end with \x00 byte. Position: "+str(toc.tell()))
       elif entryStart=="\x87":
           self.elems=toc.read(read128(toc)-1)
           toc.seek(1,1) #trailing null
       else:
           raise Exception("Entry does not start with \x82 or (rare) \x87 byte. Position: "+str(toc.tell()))



   def write(self, f): #write the data into file
       f.write("\x82")
       #Write everything into a buffer to get the size.
       buff=StringIO()
       #Write the subelements. Write in a particular order to compare output with original file.
       for key in self.elems:
           self.elems[key].write(buff)

       f.write(write128(len(buff.getvalue())+1)) #end byte
       f.write(buff.getvalue())
       f.write("\x00")
       buff.close()

   def showStructure(self,level=0):
       for key in self.elems:
           obj=self.elems[key]
           obj.showStructure(level+1)

class Subelement:
   #These are basically subelements of an entry.
   #Each subelement consists of a type (1 byte), a name (nullterminated string), and data depending on the type.
   #However one such subelement may be a list type, containing several entries on its own.
   #Lists end with a nullbyte on their own; they (like strings) have their size prefixed as 7bit int.
   def __init__(self,toc): #read the data from file
       self.typ=toc.read(1)
       self.name=readNullTerminatedString(toc)

       if   self.typ=="\x0f": self.content=toc.read(16)
       elif self.typ=="\x09": self.content=unpack("Q",toc.read(8))[0]
       elif self.typ=="\x08": self.content=unpack("I",toc.read(4))[0]
       elif self.typ=="\x06": self.content=True if toc.read(1)=="\x01" else False
       elif self.typ=="\x02": self.content=toc.read(read128(toc))
       elif self.typ=="\x13": self.content=toc.read(read128(toc)) #the same as above with different content?
       elif self.typ=="\x10": self.content=toc.read(20) #sha1
       elif self.typ=="\x07": #string, length prefixed as 7bit int.
           self.content=toc.read(read128(toc)-1)
           toc.seek(1,1) #trailing null
       elif self.typ=="\x01": #lists
           self.listLength=read128(toc) #self
           entries=[]
           endPos=toc.tell()+self.listLength 
           while toc.tell()<endPos-1: #lists end on nullbyte
               entries.append(Entry(toc))
           self.content=entries
           if toc.read(1)!="\x00": raise Exception("List does not end with \x00 byte. Position: "+str(toc.tell()))
       else: raise Exception("Unknown type: "+hexlify(typ)+" "+str(toc.tell()))      

   def write(self,f): #write the data into file
       f.write(self.typ)
       f.write(self.name+"\x00")
       if   self.typ=="\x0f": f.write(self.content)
       elif self.typ=="\x10": f.write(self.content) #sha1
       elif self.typ=="\x09": f.write(pack("Q",self.content))
       elif self.typ=="\x08": f.write(pack("I",self.content))
       elif self.typ=="\x06": f.write("\x01" if self.content==True else "\x00")
       elif self.typ=="\x02": f.write(write128(len(self.content))+self.content)
       elif self.typ=="\x13": f.write(write128(len(self.content))+self.content) #the same as above with different content?
       elif self.typ=="\x07": #string
           f.write(write128(len(self.content)+1)+self.content+"\x00")
       elif self.typ=="\x01":
           #Write everything into a buffer to get the size.
           buff=StringIO()

           for entry in self.content:
               entry.write(buff)
           f.write(write128(len(buff.getvalue())+1)) #final nullbyte
           f.write(buff.getvalue())
           f.write("\x00")
           buff.close()


class Superbundle: #more about toc really
   def __init__(self,pathname):
       #make sure there is toc and sb
       self.fullpath,ext=os.path.splitext(pathname) #everything except extension
       self.filename=os.path.basename(self.fullpath) #the name without extension and without full path
       tocPath=pathname #toc or bundle
       tocPath,sbPath = self.fullpath+".toc",self.fullpath+".sb"
       if not (os.path.exists(tocPath) and os.path.exists(sbPath)): raise IOError("Could not find the sbtoc files.")
       try:
           toc=unXOR(open(tocPath,"rb"))
       except:
           raise Exception(pathname)
       self.entry=Entry(toc)
       toc.close()
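
If you want a quick sanity check that sbtoc.py is importable and that its 7-bit/LEB128 helpers behave as described, save the lines below as a small .py file next to sbtoc.py and Bundle.py and press F5 in IDLE (this is just my own little test, not part of the original tools):

import sbtoc
from cStringIO import StringIO

value=300                                 #binary 100101100
encoded=sbtoc.write128(value)
print repr(encoded)                       #'\xac\x02'
print sbtoc.read128(StringIO(encoded))    #300 again, so the round trip works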

 

Bundle.py

import sys
import os
from struct import unpack,pack
from binascii import hexlify,unhexlify
import zlib
from cStringIO import StringIO
import sbtoc


def readNullTerminatedString(f):
   result=""
   while 1:
       char=f.read(1)
       if char=="\x00": return result
       result+=char


class Bundle(): #noncas
   def __init__(self, f): 
       metaSize=unpack(">I",f.read(4))[0] #size of the meta section/offset of the payload section
       metaStart=f.tell()
       metaEnd=metaStart+metaSize
       self.header=Header(unpack(">8I",f.read(32)),metaStart)
       if self.header.magic!=0x970d1c13: raise Exception("Wrong noncas bundle header magic. The script cannot handle patched sbtoc")
       self.sha1List=[f.read(20) for i in xrange(self.header.numEntry)] #one sha1 for each ebx+res+chunk
       self.ebxEntries=[BundleEntry(unpack(">3I",f.read(12))) for i in xrange(self.header.numEbx)]
       self.resEntries=[BundleEntry(unpack(">3I",f.read(12))) for i in xrange(self.header.numRes)]
       #ebx are done, but res have extra content
       for entry in self.resEntries:
           entry.resType=unpack(">I",f.read(4))[0] #e.g. IT for ITexture
       for entry in self.resEntries:
           entry.resMeta=f.read(16) #often 16 nulls (always null for IT)

       self.chunkEntries=[Chunk(f) for i in xrange(self.header.numChunks)]


       #chunkmeta section, uses sbtoc structure, defines h32 and meta. If meta != nullbyte, then the corresponding chunk should have range entries.
       #Then again, noncas is crazy so this is only true for cas. There is one chunkMeta element (consisting of h32 and meta) for every chunk.
       #h32 is the FNV-1 hash applied to a string. For some audio files for example, the files are accessed via ebx files which of course have a name.
       #The hash of this name in lowercase is the h32 found in the chunkMeta. The same hash is also found in the ebx file itself at the keyword NameHash
       #For ITextures, the h32 is found in the corresponding res file. The res file also contains a name and once again the hash of this name is the h32.
       #meta for textures usually contains firstMip 0/1/2.
       if self.header.numChunks>0: self.chunkMeta=sbtoc.Subelement(f)
       for i in xrange(len(self.chunkEntries)):
           self.chunkEntries[i].meta=self.chunkMeta.content[i].elems["meta"].content
           self.chunkEntries[i].h32=self.chunkMeta.content[i].elems["h32"].content


       for entry in self.ebxEntries + self.resEntries: #ebx and res have a path and not just a guid
           f.seek(self.header.offsetString+entry.offsetString)
           entry.name=readNullTerminatedString(f)


       f.seek(metaEnd) #PAYLOAD. Just grab all the payload offsets and sizes and add them to the entries without actually reading the payload. Also attach sha1 to entry.
       sha1Counter=0
       for entry in self.ebxEntries+self.resEntries+self.chunkEntries:
           while f.tell()%16!=0: f.seek(1,1)
           entry.offset=f.tell()
           f.seek(entry.size,1)

           entry.sha1=self.sha1List[sha1Counter]
           sha1Counter+=1




class Header: #8 uint32
   def __init__(self,values,metaStart):
       self.magic           =values[0] #970d1c13 for unpatched files
       self.numEntry        =values[1] #total entries = numEbx + numRes + numChunks
       self.numEbx          =values[2]
       self.numRes          =values[3]
       self.numChunks       =values[4]
       self.offsetString    =values[5] +metaStart #offsets start at the beginning of the header, thus +metaStart
       self.offsetChunkMeta =values[6] +metaStart #redundant
       self.sizeChunkMeta   =values[7] #redundant

class BundleEntry: #3 uint32 + 1 string
   def __init__(self,values):
       self.offsetString=values[0] #in the name strings section
       self.size=values[1] #total size of the payload (for zlib including the two ints before the zlib)
       self.originalSize=values[2] #uncompressed size (for zlib after decompression and ignoring the two ints)
       #note: for zlib the uncompressed size is saved in both the file and the archive
       #      for zlib the compressed size in the file is the (size in the archive)-8


class Chunk:
   def __init__(self, f):
       self.id=f.read(16)
       self.rangeStart=unpack(">I",f.read(4))[0]
       self.rangeEnd=unpack(">I",f.read(4))[0] #total size of the payload is rangeEnd-rangeStart
       self.logicalOffset=unpack(">I",f.read(4))[0]
       self.size=self.rangeEnd-self.rangeStart
       #rangeStart, rangeEnd and logicalOffset are for textures. Non-texture chunks have rangeStart=logicalOffset=0 and rangeEnd being the size of the payload.
       #For cas bundles: rangeEnd is always exactly the size of compressed payload (which is specified too).
       #Furthermore for cas, rangeStart defines the point at which the mipmap number specified by chunkMeta::meta is reached in the compressed payload.
       #logicalOffset then is the uncompressed equivalent of rangeStart.
       #However for noncas, rangeStart and rangeEnd work in absolutely crazy ways. Their individual values easily exceed the actual size of the file.
       #Adding the same number to both of them does NOT cause the game to crash when loading, so really only the difference matters.
       #Additionally the sha1 for these texture chunks does not match the payload. The non-texture chunks that come AFTER such a chunk have the correct sha1 again.
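
The chunkMeta comments above say that h32 is an FNV-1 hash of a lowercased name. Purely as an illustration of what that means, here is the textbook 32-bit FNV-1 in Python 2.7; I have not double-checked that the game uses exactly these constants or exactly this kind of input string, so treat it as a sketch rather than a guaranteed match for the h32 values in your own dump:

def fnv1_32(text):
    #textbook 32-bit FNV-1: multiply by the prime, then XOR in each character
    h=0x811c9dc5                     #standard offset basis (assumption: the game might use a different seed)
    for char in text:
        h=(h*0x01000193)&0xffffffff  #standard 32-bit FNV prime, kept within 32 bits
        h^=ord(char)
    return h

print hex(fnv1_32("Sound/Music/MP_Theme".lower()))   #made-up ebx-style name, for illustration only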

4) Step-by-step instructions:

 Step 1: Install Python 2.7.4 (x86) from here (this is important, otherwise the scripts will not work!) - https://www.python.org/ftp/python/2.7.4/python-2.7.4.msi

 Step 2: After installing Python 2.7.4 (x86), unzip the downloaded BF3_SOUND_EXTRACT archive.

 Step 3: Type "IDLE (Python GUI)" into the Windows search box, click on it and run it. (If you are not sure which Python the shortcut actually points at, see the quick check after this list.)

 Step 4: Once IDLE (Python GUI) has started, click "File", then "Open", and select the dumper.py script. (It is important that dumper.py sits next to the sbtoc.py and Bundle.py scripts, otherwise Python 2.7.4 will throw an ImportError: "No module named sbtoc".)

 Step 5: With dumper.py open, adjust the game paths at the top of the script (catName, patchedCatName, tocRoot, outputfolder) and press the F5 key. Then wait 1-2 hours while the files are extracted from Battlefield 3.

 Step 6: When the extraction of the Battlefield 3 files has finished, open the folder called "2 step) .chunks + .exb = wav", open the script called "bf3decoder.py", adjust the path to your dump on your hard drive, and press F5 again. Here you have to wait 3-4 hours, because all the audio files from the game in .wav format weigh about 24.7 GB.

 Step 7: That's it - you now have all the audio files from Battlefield 3. GOOD LUCK!
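
Optional check before opening dumper.py: if you are not sure which Python your IDLE shortcut actually points at, paste these lines into the IDLE shell (they are not part of the archive, just a quick self-test):

import sys, platform
print sys.version_info        #should report major=2, minor=7 (ideally micro=4)
print platform.architecture() #should report ('32bit', 'WindowsPE') for the x86 build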
