* [Patch v3 0/6] BaseTools/GenFds: cleanup GenFds @ 2018-10-23 17:29 Jaben Carsey

From: Jaben Carsey @ 2018-10-23 17:29 UTC
To: edk2-devel

Cleanup of many files for GenFds. No command-line-visible changes are included.

1) refactor imports to reduce namespace clutter
2) refactor to use existing sharable objects (and create a few new ones)
3) eliminate shadowing of names
4) remove double-underscored private methods for PEP8
5) eliminate unused code/parameters/variables
6) add standard warnings and use them for common code

(A short standalone sketch of these patterns follows this cover letter.)

Changes from v1:
1) do not shadow CapsuleFV
2) rebase on master

Changes from v2:
1) do not delete duplicate function calls
2) add ".lower()" for GUID string comparison
3) rebase on master

Jaben Carsey (6):
  BaseTools/GenFds: cleanup GenFds
  BaseTools/GenFds: change objects to sets
  Basetools/GenFds: refactor class FV
  BaseTools/GenFds: remove MacroDict parameter
  BaseTools/GenFds: refactor FdfParser warnings
  BaseTools/GenFds: create and use new variable in FdfParser

 BaseTools/Source/Python/CommonDataClass/FdfClass.py    |   73 -
 BaseTools/Source/Python/Eot/EotMain.py                 |  372 +-
 BaseTools/Source/Python/GenFds/AprioriSection.py       |   45 +-
 BaseTools/Source/Python/GenFds/Capsule.py              |   26 +-
 BaseTools/Source/Python/GenFds/CapsuleData.py          |    1 -
 BaseTools/Source/Python/GenFds/CompressSection.py      |    4 +-
 BaseTools/Source/Python/GenFds/DataSection.py          |    4 +-
 BaseTools/Source/Python/GenFds/DepexSection.py         |    5 +-
 BaseTools/Source/Python/GenFds/EfiSection.py           |   16 +-
 BaseTools/Source/Python/GenFds/FdfParser.py            | 3780 ++++++++++----------
 BaseTools/Source/Python/GenFds/Ffs.py                  |   82 +-
 BaseTools/Source/Python/GenFds/FfsFileStatement.py     |   37 +-
 BaseTools/Source/Python/GenFds/FfsInfStatement.py      |   10 +-
 BaseTools/Source/Python/GenFds/Fv.py                   |   54 +-
 BaseTools/Source/Python/GenFds/FvImageSection.py       |    6 +-
 BaseTools/Source/Python/GenFds/GenFds.py               |  160 +-
 BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py |  208 +-
 BaseTools/Source/Python/GenFds/GuidSection.py          |    4 +-
 BaseTools/Source/Python/GenFds/OptionRom.py            |    6 +-
 BaseTools/Source/Python/GenFds/Region.py               |   12 +-
 BaseTools/Source/Python/GenFds/UiSection.py            |    4 +-
 BaseTools/Source/Python/GenFds/VerSection.py           |   16 +-
 BaseTools/Source/Python/GenFds/Vtf.py                  |   48 +-
 BaseTools/Source/Python/build/BuildReport.py           |    5 +-
 24 files changed, 2576 insertions(+), 2402 deletions(-)

--
2.16.2.windows.1
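For readers skimming the series, here is a minimal standalone sketch of the style the cleanup moves toward: an explicit import instead of "from struct import *", module-level shared constants for the DXE/PEI apriori file GUIDs, and single-underscore helpers instead of PEP8-discouraged double-underscore (name-mangled) methods. The class and method names (AprioriSketch, pack_guid_prefix) are hypothetical illustrations, not code taken from the patches; only the two GUID values come from the series itself, and the little-endian field widths are an assumption of this sketch.

    # Explicit import (rather than 'from struct import *') keeps the namespace clean.
    from struct import pack

    # Shared module-level constants instead of per-function string literals;
    # these two GUID values are the DXE/PEI apriori file GUIDs used in the series.
    DXE_APRIORI_GUID = "FC510EE7-FFDC-11D4-BD41-0080C73C8881"
    PEI_APRIORI_GUID = "1B45CC0A-156A-428A-AF62-49864DA0E6E6"

    class AprioriSketch:
        """Hypothetical class: single-underscore helpers, no name mangling."""

        def __init__(self, apriori_type=""):
            self.apriori_type = apriori_type

        def _file_guid(self):
            # Callers compare GUID strings case-insensitively (.lower()),
            # so the shared constants can stay upper-case.
            return PEI_APRIORI_GUID if self.apriori_type == "PEI" else DXE_APRIORI_GUID

        def pack_guid_prefix(self, guid=None):
            """Pack the first three GUID fields (4 + 2 + 2 bytes), little-endian assumed."""
            parts = (guid or self._file_guid()).split('-')
            return (pack('<I', int(parts[0], 16)) +
                    pack('<H', int(parts[1], 16)) +
                    pack('<H', int(parts[2], 16)))

    if __name__ == '__main__':
        # Usage example: pack the PEI apriori GUID prefix and show the raw bytes.
        print(repr(AprioriSketch("PEI").pack_guid_prefix()))

The point of the sketch is only the shape of the code: one definition of each constant that every consumer imports, and helper names that other modules can reference without fighting Python's double-underscore name mangling.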
* [Patch v3 1/6] BaseTools/GenFds: cleanup GenFds 2018-10-23 17:29 [Patch v3 0/6] BaseTools/GenFds: cleanup GenFds Jaben Carsey @ 2018-10-23 17:29 ` Jaben Carsey 2018-10-25 13:45 ` Feng, Bob C 2018-10-26 4:53 ` Gary Lin 2018-10-23 17:29 ` [Patch v3 2/6] BaseTools/GenFds: change objects to sets Jaben Carsey ` (5 subsequent siblings) 6 siblings, 2 replies; 14+ messages in thread From: Jaben Carsey @ 2018-10-23 17:29 UTC (permalink / raw) To: edk2-devel; +Cc: Yonghong Zhu, Liming Gao, Bob C Feng 1) remove wildcard imports and use explicit imports 2) refactor to use shared variables from Common/DataType 3) rename to not shadow imports 4) don't assign a variable in a loop (just do final assignment) 5) remove spaces, parens, unused or commented out code, etc. 6) merge unnecessary parent classes into child 7) refactor to share DXE and PEI apriori GUIDs from one place this includes changes to Build and EOT files 8) for PEP8, dont use __ for custom methods. Cc: Yonghong Zhu <yonghong.zhu@intel.com> Cc: Liming Gao <liming.gao@intel.com> Cc: Bob C Feng <bob.c.feng@intel.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jaben Carsey <jaben.carsey@intel.com> --- BaseTools/Source/Python/CommonDataClass/FdfClass.py | 73 - BaseTools/Source/Python/Eot/EotMain.py | 372 ++- BaseTools/Source/Python/GenFds/AprioriSection.py | 45 +- BaseTools/Source/Python/GenFds/Capsule.py | 26 +- BaseTools/Source/Python/GenFds/CapsuleData.py | 1 - BaseTools/Source/Python/GenFds/CompressSection.py | 4 +- BaseTools/Source/Python/GenFds/DataSection.py | 4 +- BaseTools/Source/Python/GenFds/DepexSection.py | 5 +- BaseTools/Source/Python/GenFds/EfiSection.py | 16 +- BaseTools/Source/Python/GenFds/FdfParser.py | 3014 ++++++++++---------- BaseTools/Source/Python/GenFds/Ffs.py | 82 +- BaseTools/Source/Python/GenFds/FfsFileStatement.py | 37 +- BaseTools/Source/Python/GenFds/FfsInfStatement.py | 10 +- BaseTools/Source/Python/GenFds/Fv.py | 47 +- BaseTools/Source/Python/GenFds/FvImageSection.py | 6 +- BaseTools/Source/Python/GenFds/GenFds.py | 160 +- BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py | 208 +- BaseTools/Source/Python/GenFds/GuidSection.py | 4 +- BaseTools/Source/Python/GenFds/OptionRom.py | 6 +- BaseTools/Source/Python/GenFds/Region.py | 12 +- BaseTools/Source/Python/GenFds/UiSection.py | 4 +- BaseTools/Source/Python/GenFds/VerSection.py | 16 +- BaseTools/Source/Python/GenFds/Vtf.py | 48 +- BaseTools/Source/Python/build/BuildReport.py | 5 +- 24 files changed, 2185 insertions(+), 2020 deletions(-) diff --git a/BaseTools/Source/Python/CommonDataClass/FdfClass.py b/BaseTools/Source/Python/CommonDataClass/FdfClass.py index 64b58c2078a3..3df82082457f 100644 --- a/BaseTools/Source/Python/CommonDataClass/FdfClass.py +++ b/BaseTools/Source/Python/CommonDataClass/FdfClass.py @@ -37,50 +37,6 @@ class FDClassObject: self.RegionList = [] self.vtfRawDict = {} -## FV data in FDF -# -# -class FvClassObject: - ## The constructor - # - # @param self The object pointer - # - def __init__(self): - self.UiFvName = None - self.CreateFileName = None - # 3-tuple list (blockSize, numBlocks, pcd) - self.BlockSizeList = [] - # DefineVarDict[var] = value - self.DefineVarDict = {} - # SetVarDict[var] = value - self.SetVarDict = {} - self.FvAlignment = None - # FvAttributeDict[attribute] = TRUE/FALSE (1/0) - self.FvAttributeDict = {} - self.FvNameGuid = None - self.FvNameString = None - self.AprioriSectionList = [] - self.FfsList = [] - self.BsBaseAddress = None - self.RtBaseAddress = None - -## Region data in FDF -# -# 
-class RegionClassObject: - ## The constructor - # - # @param self The object pointer - # - def __init__(self): - self.Offset = None # The begin position of the Region - self.Size = None # The Size of the Region - self.PcdOffset = None - self.PcdSize = None - self.SetVarDict = {} - self.RegionType = None - self.RegionDataList = [] - ## FFS data in FDF # # @@ -112,7 +68,6 @@ class FileStatementClassObject (FfsClassObject) : self.FvName = None self.FdName = None self.DefineVarDict = {} - self.AprioriSection = None self.KeepReloc = None ## INF statement data in FDF @@ -134,19 +89,6 @@ class FfsInfStatementClassObject(FfsClassObject): self.KeepReloc = None self.UseArch = None -## APRIORI section data in FDF -# -# -class AprioriSectionClassObject: - ## The constructor - # - # @param self The object pointer - # - def __init__(self): - # DefineVarDict[var] = value - self.DefineVarDict = {} - self.FfsList = [] - ## section data in FDF # # @@ -363,21 +305,6 @@ class CapsuleClassObject : self.CapsuleDataList = [] self.FmpPayloadList = [] -## VTF data in FDF -# -# -class VtfClassObject : - ## The constructor - # - # @param self The object pointer - # - def __init__(self): - self.KeyArch = None - self.ArchList = None - self.UiName = None - self.ResetBin = None - self.ComponentStatementList = [] - ## VTF component data in FDF # # diff --git a/BaseTools/Source/Python/Eot/EotMain.py b/BaseTools/Source/Python/Eot/EotMain.py index 49a1494126c4..2ad93cc6b4e3 100644 --- a/BaseTools/Source/Python/Eot/EotMain.py +++ b/BaseTools/Source/Python/Eot/EotMain.py @@ -36,10 +36,9 @@ import struct import uuid import copy import codecs +from GenFds.AprioriSection import DXE_APRIORI_GUID, PEI_APRIORI_GUID gGuidStringFormat = "%08X-%04X-%04X-%02X%02X-%02X%02X%02X%02X%02X%02X" -gPeiAprioriFileNameGuid = '1b45cc0a-156a-428a-af62-49864da0e6e6' -gAprioriGuid = 'fc510ee7-ffdc-11d4-bd41-0080c73c8881' gIndention = -4 class Image(array): @@ -402,10 +401,10 @@ class FirmwareVolume(Image): if Ffs.Type == 0x05: FfsDxeCoreGuid = FfsID continue - if Ffs.Guid.lower() == gPeiAprioriFileNameGuid: + if Ffs.Guid.lower() == PEI_APRIORI_GUID.lower(): FfsPeiPrioriGuid = FfsID continue - if Ffs.Guid.lower() == gAprioriGuid: + if Ffs.Guid.lower() == DXE_APRIORI_GUID.lower(): FfsDxePrioriGuid = FfsID continue @@ -1084,6 +1083,371 @@ class Ffs(Image): State = property(_GetState, _SetState) + _FfsGuid = "8C8CE578-8A3D-4F1C-9935-896185C32DD3" + + _GUID_ = struct.Struct("16x 1I2H8B") + _LENGTH_ = struct.Struct("16x 16x 1Q") + _SIG_ = struct.Struct("16x 16x 8x 1I") + _ATTR_ = struct.Struct("16x 16x 8x 4x 1I") + _HLEN_ = struct.Struct("16x 16x 8x 4x 4x 1H") + _CHECKSUM_ = struct.Struct("16x 16x 8x 4x 4x 2x 1H") + + def __init__(self, Name=''): + Image.__init__(self) + self.Name = Name + self.FfsDict = sdict() + self.OrderedFfsDict = sdict() + self.UnDispatchedFfsDict = sdict() + self.ProtocolList = sdict() + + def CheckArchProtocol(self): + for Item in EotGlobalData.gArchProtocolGuids: + if Item.lower() not in EotGlobalData.gProtocolList: + return False + return True + + def ParseDepex(self, Depex, Type): + List = None + if Type == 'Ppi': + List = EotGlobalData.gPpiList + if Type == 'Protocol': + List = EotGlobalData.gProtocolList + DepexStack = [] + DepexList = [] + DepexString = '' + FileDepex = None + CouldBeLoaded = True + for Index in range(0, len(Depex.Expression)): + Item = Depex.Expression[Index] + if Item == 0x00: + Index = Index + 1 + Guid = gGuidStringFormat % Depex.Expression[Index] + if Guid in self.OrderedFfsDict and 
Depex.Expression[Index + 1] == 0x08: + return (True, 'BEFORE %s' % Guid, [Guid, 'BEFORE']) + elif Item == 0x01: + Index = Index + 1 + Guid = gGuidStringFormat % Depex.Expression[Index] + if Guid in self.OrderedFfsDict and Depex.Expression[Index + 1] == 0x08: + return (True, 'AFTER %s' % Guid, [Guid, 'AFTER']) + elif Item == 0x02: + Index = Index + 1 + Guid = gGuidStringFormat % Depex.Expression[Index] + if Guid.lower() in List: + DepexStack.append(True) + DepexList.append(Guid) + else: + DepexStack.append(False) + DepexList.append(Guid) + continue + elif Item == 0x03 or Item == 0x04: + DepexStack.append(eval(str(DepexStack.pop()) + ' ' + Depex._OPCODE_STRING_[Item].lower() + ' ' + str(DepexStack.pop()))) + DepexList.append(str(DepexList.pop()) + ' ' + Depex._OPCODE_STRING_[Item].upper() + ' ' + str(DepexList.pop())) + elif Item == 0x05: + DepexStack.append(eval(Depex._OPCODE_STRING_[Item].lower() + ' ' + str(DepexStack.pop()))) + DepexList.append(Depex._OPCODE_STRING_[Item].lower() + ' ' + str(DepexList.pop())) + elif Item == 0x06: + DepexStack.append(True) + DepexList.append('TRUE') + DepexString = DepexString + 'TRUE' + ' ' + elif Item == 0x07: + DepexStack.append(False) + DepexList.append('False') + DepexString = DepexString + 'FALSE' + ' ' + elif Item == 0x08: + if Index != len(Depex.Expression) - 1: + CouldBeLoaded = False + else: + CouldBeLoaded = DepexStack.pop() + else: + CouldBeLoaded = False + if DepexList != []: + DepexString = DepexList[0].strip() + return (CouldBeLoaded, DepexString, FileDepex) + + def Dispatch(self, Db = None): + if Db is None: + return False + self.UnDispatchedFfsDict = copy.copy(self.FfsDict) + # Find PeiCore, DexCore, PeiPriori, DxePriori first + FfsSecCoreGuid = None + FfsPeiCoreGuid = None + FfsDxeCoreGuid = None + FfsPeiPrioriGuid = None + FfsDxePrioriGuid = None + for FfsID in self.UnDispatchedFfsDict: + Ffs = self.UnDispatchedFfsDict[FfsID] + if Ffs.Type == 0x03: + FfsSecCoreGuid = FfsID + continue + if Ffs.Type == 0x04: + FfsPeiCoreGuid = FfsID + continue + if Ffs.Type == 0x05: + FfsDxeCoreGuid = FfsID + continue + if Ffs.Guid.lower() == PEI_APRIORI_GUID.lower(): + FfsPeiPrioriGuid = FfsID + continue + if Ffs.Guid.lower() == DXE_APRIORI_GUID.lower(): + FfsDxePrioriGuid = FfsID + continue + + # Parse SEC_CORE first + if FfsSecCoreGuid is not None: + self.OrderedFfsDict[FfsSecCoreGuid] = self.UnDispatchedFfsDict.pop(FfsSecCoreGuid) + self.LoadPpi(Db, FfsSecCoreGuid) + + # Parse PEI first + if FfsPeiCoreGuid is not None: + self.OrderedFfsDict[FfsPeiCoreGuid] = self.UnDispatchedFfsDict.pop(FfsPeiCoreGuid) + self.LoadPpi(Db, FfsPeiCoreGuid) + if FfsPeiPrioriGuid is not None: + # Load PEIM described in priori file + FfsPeiPriori = self.UnDispatchedFfsDict.pop(FfsPeiPrioriGuid) + if len(FfsPeiPriori.Sections) == 1: + Section = FfsPeiPriori.Sections.popitem()[1] + if Section.Type == 0x19: + GuidStruct = struct.Struct('1I2H8B') + Start = 4 + while len(Section) > Start: + Guid = GuidStruct.unpack_from(Section[Start : Start + 16]) + GuidString = gGuidStringFormat % Guid + Start = Start + 16 + if GuidString in self.UnDispatchedFfsDict: + self.OrderedFfsDict[GuidString] = self.UnDispatchedFfsDict.pop(GuidString) + self.LoadPpi(Db, GuidString) + + self.DisPatchPei(Db) + + # Parse DXE then + if FfsDxeCoreGuid is not None: + self.OrderedFfsDict[FfsDxeCoreGuid] = self.UnDispatchedFfsDict.pop(FfsDxeCoreGuid) + self.LoadProtocol(Db, FfsDxeCoreGuid) + if FfsDxePrioriGuid is not None: + # Load PEIM described in priori file + FfsDxePriori = 
self.UnDispatchedFfsDict.pop(FfsDxePrioriGuid) + if len(FfsDxePriori.Sections) == 1: + Section = FfsDxePriori.Sections.popitem()[1] + if Section.Type == 0x19: + GuidStruct = struct.Struct('1I2H8B') + Start = 4 + while len(Section) > Start: + Guid = GuidStruct.unpack_from(Section[Start : Start + 16]) + GuidString = gGuidStringFormat % Guid + Start = Start + 16 + if GuidString in self.UnDispatchedFfsDict: + self.OrderedFfsDict[GuidString] = self.UnDispatchedFfsDict.pop(GuidString) + self.LoadProtocol(Db, GuidString) + + self.DisPatchDxe(Db) + + def LoadProtocol(self, Db, ModuleGuid): + SqlCommand = """select GuidValue from Report + where SourceFileFullPath in + (select Value1 from Inf where BelongsToFile = + (select BelongsToFile from Inf + where Value1 = 'FILE_GUID' and Value2 like '%s' and Model = %s) + and Model = %s) + and ItemType = 'Protocol' and ItemMode = 'Produced'""" \ + % (ModuleGuid, 5001, 3007) + RecordSet = Db.TblReport.Exec(SqlCommand) + for Record in RecordSet: + SqlCommand = """select Value2 from Inf where BelongsToFile = + (select DISTINCT BelongsToFile from Inf + where Value1 = + (select SourceFileFullPath from Report + where GuidValue like '%s' and ItemMode = 'Callback')) + and Value1 = 'FILE_GUID'""" % Record[0] + CallBackSet = Db.TblReport.Exec(SqlCommand) + if CallBackSet != []: + EotGlobalData.gProtocolList[Record[0].lower()] = ModuleGuid + else: + EotGlobalData.gProtocolList[Record[0].lower()] = ModuleGuid + + def LoadPpi(self, Db, ModuleGuid): + SqlCommand = """select GuidValue from Report + where SourceFileFullPath in + (select Value1 from Inf where BelongsToFile = + (select BelongsToFile from Inf + where Value1 = 'FILE_GUID' and Value2 like '%s' and Model = %s) + and Model = %s) + and ItemType = 'Ppi' and ItemMode = 'Produced'""" \ + % (ModuleGuid, 5001, 3007) + RecordSet = Db.TblReport.Exec(SqlCommand) + for Record in RecordSet: + EotGlobalData.gPpiList[Record[0].lower()] = ModuleGuid + + def DisPatchDxe(self, Db): + IsInstalled = False + ScheduleList = sdict() + for FfsID in self.UnDispatchedFfsDict: + CouldBeLoaded = False + DepexString = '' + FileDepex = None + Ffs = self.UnDispatchedFfsDict[FfsID] + if Ffs.Type == 0x07: + # Get Depex + IsFoundDepex = False + for Section in Ffs.Sections.values(): + # Find Depex + if Section.Type == 0x13: + IsFoundDepex = True + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(Section._SubImages[4], 'Protocol') + break + if Section.Type == 0x01: + CompressSections = Section._SubImages[4] + for CompressSection in CompressSections.Sections: + if CompressSection.Type == 0x13: + IsFoundDepex = True + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(CompressSection._SubImages[4], 'Protocol') + break + if CompressSection.Type == 0x02: + NewSections = CompressSection._SubImages[4] + for NewSection in NewSections.Sections: + if NewSection.Type == 0x13: + IsFoundDepex = True + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(NewSection._SubImages[4], 'Protocol') + break + + # Not find Depex + if not IsFoundDepex: + CouldBeLoaded = self.CheckArchProtocol() + DepexString = '' + FileDepex = None + + # Append New Ffs + if CouldBeLoaded: + IsInstalled = True + NewFfs = self.UnDispatchedFfsDict.pop(FfsID) + NewFfs.Depex = DepexString + if FileDepex is not None: + ScheduleList.insert(FileDepex[1], FfsID, NewFfs, FileDepex[0]) + else: + ScheduleList[FfsID] = NewFfs + else: + self.UnDispatchedFfsDict[FfsID].Depex = DepexString + + for FfsID in ScheduleList: + NewFfs = ScheduleList.pop(FfsID) + FfsName = 'UnKnown' + 
self.OrderedFfsDict[FfsID] = NewFfs + self.LoadProtocol(Db, FfsID) + + SqlCommand = """select Value2 from Inf + where BelongsToFile = (select BelongsToFile from Inf where Value1 = 'FILE_GUID' and lower(Value2) = lower('%s') and Model = %s) + and Model = %s and Value1='BASE_NAME'""" % (FfsID, 5001, 5001) + RecordSet = Db.TblReport.Exec(SqlCommand) + if RecordSet != []: + FfsName = RecordSet[0][0] + + if IsInstalled: + self.DisPatchDxe(Db) + + def DisPatchPei(self, Db): + IsInstalled = False + for FfsID in self.UnDispatchedFfsDict: + CouldBeLoaded = True + DepexString = '' + FileDepex = None + Ffs = self.UnDispatchedFfsDict[FfsID] + if Ffs.Type == 0x06 or Ffs.Type == 0x08: + # Get Depex + for Section in Ffs.Sections.values(): + if Section.Type == 0x1B: + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(Section._SubImages[4], 'Ppi') + break + if Section.Type == 0x01: + CompressSections = Section._SubImages[4] + for CompressSection in CompressSections.Sections: + if CompressSection.Type == 0x1B: + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(CompressSection._SubImages[4], 'Ppi') + break + if CompressSection.Type == 0x02: + NewSections = CompressSection._SubImages[4] + for NewSection in NewSections.Sections: + if NewSection.Type == 0x1B: + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(NewSection._SubImages[4], 'Ppi') + break + + # Append New Ffs + if CouldBeLoaded: + IsInstalled = True + NewFfs = self.UnDispatchedFfsDict.pop(FfsID) + NewFfs.Depex = DepexString + self.OrderedFfsDict[FfsID] = NewFfs + self.LoadPpi(Db, FfsID) + else: + self.UnDispatchedFfsDict[FfsID].Depex = DepexString + + if IsInstalled: + self.DisPatchPei(Db) + + + def __str__(self): + global gIndention + gIndention += 4 + FvInfo = '\n' + ' ' * gIndention + FvInfo += "[FV:%s] file_system=%s size=%x checksum=%s\n" % (self.Name, self.FileSystemGuid, self.Size, self.Checksum) + FfsInfo = "\n".join([str(self.FfsDict[FfsId]) for FfsId in self.FfsDict]) + gIndention -= 4 + return FvInfo + FfsInfo + + def _Unpack(self): + Size = self._LENGTH_.unpack_from(self._BUF_, self._OFF_)[0] + self.empty() + self.extend(self._BUF_[self._OFF_:self._OFF_+Size]) + + # traverse the FFS + EndOfFv = Size + FfsStartAddress = self.HeaderSize + LastFfsObj = None + while FfsStartAddress < EndOfFv: + FfsObj = Ffs() + FfsObj.frombuffer(self, FfsStartAddress) + FfsId = repr(FfsObj) + if ((self.Attributes & 0x00000800) != 0 and len(FfsObj) == 0xFFFFFF) \ + or ((self.Attributes & 0x00000800) == 0 and len(FfsObj) == 0): + if LastFfsObj is not None: + LastFfsObj.FreeSpace = EndOfFv - LastFfsObj._OFF_ - len(LastFfsObj) + else: + if FfsId in self.FfsDict: + EdkLogger.error("FV", 0, "Duplicate GUID in FFS", + ExtraData="\t%s @ %s\n\t%s @ %s" \ + % (FfsObj.Guid, FfsObj.Offset, + self.FfsDict[FfsId].Guid, self.FfsDict[FfsId].Offset)) + self.FfsDict[FfsId] = FfsObj + if LastFfsObj is not None: + LastFfsObj.FreeSpace = FfsStartAddress - LastFfsObj._OFF_ - len(LastFfsObj) + + FfsStartAddress += len(FfsObj) + # + # align to next 8-byte aligned address: A = (A + 8 - 1) & (~(8 - 1)) + # The next FFS must be at the latest next 8-byte aligned address + # + FfsStartAddress = (FfsStartAddress + 7) & (~7) + LastFfsObj = FfsObj + + def _GetAttributes(self): + return self.GetField(self._ATTR_, 0)[0] + + def _GetSize(self): + return self.GetField(self._LENGTH_, 0)[0] + + def _GetChecksum(self): + return self.GetField(self._CHECKSUM_, 0)[0] + + def _GetHeaderLength(self): + return self.GetField(self._HLEN_, 0)[0] + + def _GetFileSystemGuid(self): + 
return gGuidStringFormat % self.GetField(self._GUID_, 0) + + Attributes = property(_GetAttributes) + Size = property(_GetSize) + Checksum = property(_GetChecksum) + HeaderSize = property(_GetHeaderLength) + FileSystemGuid = property(_GetFileSystemGuid) + ## MultipleFv() class # # A class for Multiple FV diff --git a/BaseTools/Source/Python/GenFds/AprioriSection.py b/BaseTools/Source/Python/GenFds/AprioriSection.py index 7196f7f2c753..e7b856a1150a 100644 --- a/BaseTools/Source/Python/GenFds/AprioriSection.py +++ b/BaseTools/Source/Python/GenFds/AprioriSection.py @@ -16,28 +16,31 @@ # Import Modules # from __future__ import absolute_import -from struct import * +from struct import pack import Common.LongFilePathOs as os from io import BytesIO -from . import FfsFileStatement +from .FfsFileStatement import FileStatement from .GenFdsGlobalVariable import GenFdsGlobalVariable -from CommonDataClass.FdfClass import AprioriSectionClassObject -from Common.StringUtils import * +from Common.StringUtils import NormPath from Common.Misc import SaveFileOnChange, PathClass -from Common import EdkLogger -from Common.BuildToolError import * +from Common.EdkLogger import error as EdkLoggerError +from Common.BuildToolError import RESOURCE_NOT_AVAILABLE from Common.DataType import TAB_COMMON +DXE_APRIORI_GUID = "FC510EE7-FFDC-11D4-BD41-0080C73C8881" +PEI_APRIORI_GUID = "1B45CC0A-156A-428A-AF62-49864DA0E6E6" + ## process APRIORI file data and generate PEI/DXE APRIORI file # # -class AprioriSection (AprioriSectionClassObject): +class AprioriSection (object): ## The constructor # # @param self The object pointer # def __init__(self): - AprioriSectionClassObject.__init__(self) + self.DefineVarDict = {} + self.FfsList = [] self.AprioriType = "" ## GenFfs() method @@ -50,16 +53,16 @@ class AprioriSection (AprioriSectionClassObject): # @retval string Generated file name # def GenFfs (self, FvName, Dict = {}, IsMakefile = False): - DXE_GUID = "FC510EE7-FFDC-11D4-BD41-0080C73C8881" - PEI_GUID = "1B45CC0A-156A-428A-AF62-49864DA0E6E6" Buffer = BytesIO('') - AprioriFileGuid = DXE_GUID if self.AprioriType == "PEI": - AprioriFileGuid = PEI_GUID + AprioriFileGuid = PEI_APRIORI_GUID + else: + AprioriFileGuid = DXE_APRIORI_GUID + OutputAprFilePath = os.path.join (GenFdsGlobalVariable.WorkSpaceDir, \ GenFdsGlobalVariable.FfsDir,\ AprioriFileGuid + FvName) - if not os.path.exists(OutputAprFilePath) : + if not os.path.exists(OutputAprFilePath): os.makedirs(OutputAprFilePath) OutputAprFileName = os.path.join( OutputAprFilePath, \ @@ -69,33 +72,30 @@ class AprioriSection (AprioriSectionClassObject): Dict.update(self.DefineVarDict) InfFileName = None - for FfsObj in self.FfsList : + for FfsObj in self.FfsList: Guid = "" - if isinstance(FfsObj, FfsFileStatement.FileStatement): + if isinstance(FfsObj, FileStatement): Guid = FfsObj.NameGuid else: InfFileName = NormPath(FfsObj.InfFileName) Arch = FfsObj.GetCurrentArch() - if Arch is not None: + if Arch: Dict['$(ARCH)'] = Arch InfFileName = GenFdsGlobalVariable.MacroExtend(InfFileName, Dict, Arch) - if Arch is not None: + if Arch: Inf = GenFdsGlobalVariable.WorkSpace.BuildObject[PathClass(InfFileName, GenFdsGlobalVariable.WorkSpaceDir), Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] Guid = Inf.Guid - else: Inf = GenFdsGlobalVariable.WorkSpace.BuildObject[PathClass(InfFileName, GenFdsGlobalVariable.WorkSpaceDir), TAB_COMMON, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] Guid = Inf.Guid - self.BinFileList = Inf.Module.Binaries - if 
self.BinFileList == []: - EdkLogger.error("GenFds", RESOURCE_NOT_AVAILABLE, + if not Inf.Module.Binaries: + EdkLoggerError("GenFds", RESOURCE_NOT_AVAILABLE, "INF %s not found in build ARCH %s!" \ % (InfFileName, GenFdsGlobalVariable.ArchList)) - GuidPart = Guid.split('-') Buffer.write(pack('I', long(GuidPart[0], 16))) Buffer.write(pack('H', int(GuidPart[1], 16))) @@ -123,4 +123,3 @@ class AprioriSection (AprioriSectionClassObject): 'EFI_FV_FILETYPE_FREEFORM', AprioriFileGuid, MakefilePath=MakefilePath) return AprFfsFileName - diff --git a/BaseTools/Source/Python/GenFds/Capsule.py b/BaseTools/Source/Python/GenFds/Capsule.py index baa6e1dfa2ab..df29c40dbd20 100644 --- a/BaseTools/Source/Python/GenFds/Capsule.py +++ b/BaseTools/Source/Python/GenFds/Capsule.py @@ -16,22 +16,18 @@ # Import Modules # from __future__ import absolute_import -from .GenFdsGlobalVariable import GenFdsGlobalVariable -from .GenFdsGlobalVariable import FindExtendTool +from .GenFdsGlobalVariable import GenFdsGlobalVariable, FindExtendTool from CommonDataClass.FdfClass import CapsuleClassObject import Common.LongFilePathOs as os -import subprocess from io import BytesIO -from Common.Misc import SaveFileOnChange -from Common.Misc import PackRegistryFormatGuid +from Common.Misc import SaveFileOnChange, PackRegistryFormatGuid import uuid from struct import pack from Common import EdkLogger -from Common.BuildToolError import * +from Common.BuildToolError import GENFDS_ERROR +from Common.DataType import TAB_LINE_BREAK - -T_CHAR_LF = '\n' -WIN_CERT_REVISION = 0x0200 +WIN_CERT_REVISION = 0x0200 WIN_CERT_TYPE_EFI_GUID = 0x0EF1 EFI_CERT_TYPE_PKCS7_GUID = uuid.UUID('{4aafd29d-68df-49ee-8aa9-347d375665a7}') EFI_CERT_TYPE_RSA2048_SHA256_GUID = uuid.UUID('{a7717414-c616-4977-9420-844712a735bf}') @@ -39,7 +35,7 @@ EFI_CERT_TYPE_RSA2048_SHA256_GUID = uuid.UUID('{a7717414-c616-4977-9420-844712a7 ## create inf file describes what goes into capsule and call GenFv to generate capsule # # -class Capsule (CapsuleClassObject) : +class Capsule (CapsuleClassObject): ## The constructor # # @param self The object pointer @@ -210,16 +206,16 @@ class Capsule (CapsuleClassObject) : return self.GenFmpCapsule() CapInfFile = self.GenCapInf() - CapInfFile.writelines("[files]" + T_CHAR_LF) + CapInfFile.writelines("[files]" + TAB_LINE_BREAK) CapFileList = [] - for CapsuleDataObj in self.CapsuleDataList : + for CapsuleDataObj in self.CapsuleDataList: CapsuleDataObj.CapsuleName = self.CapsuleName FileName = CapsuleDataObj.GenCapsuleSubItem() CapsuleDataObj.CapsuleName = None CapFileList.append(FileName) CapInfFile.writelines("EFI_FILE_NAME = " + \ FileName + \ - T_CHAR_LF) + TAB_LINE_BREAK) SaveFileOnChange(self.CapInfFileName, CapInfFile.getvalue(), False) CapInfFile.close() # @@ -249,13 +245,13 @@ class Capsule (CapsuleClassObject) : self.UiCapsuleName + "_Cap" + '.inf') CapInfFile = BytesIO() #open (self.CapInfFileName , 'w+') - CapInfFile.writelines("[options]" + T_CHAR_LF) + CapInfFile.writelines("[options]" + TAB_LINE_BREAK) for Item in self.TokensDict: CapInfFile.writelines("EFI_" + \ Item + \ ' = ' + \ self.TokensDict[Item] + \ - T_CHAR_LF) + TAB_LINE_BREAK) return CapInfFile diff --git a/BaseTools/Source/Python/GenFds/CapsuleData.py b/BaseTools/Source/Python/GenFds/CapsuleData.py index 0caba8983d2e..6071e9f4a545 100644 --- a/BaseTools/Source/Python/GenFds/CapsuleData.py +++ b/BaseTools/Source/Python/GenFds/CapsuleData.py @@ -16,7 +16,6 @@ # Import Modules # from __future__ import absolute_import -from . 
import Ffs from .GenFdsGlobalVariable import GenFdsGlobalVariable from io import BytesIO from struct import pack diff --git a/BaseTools/Source/Python/GenFds/CompressSection.py b/BaseTools/Source/Python/GenFds/CompressSection.py index aaaabf84dca8..51f6791fab19 100644 --- a/BaseTools/Source/Python/GenFds/CompressSection.py +++ b/BaseTools/Source/Python/GenFds/CompressSection.py @@ -16,7 +16,7 @@ # Import Modules # from __future__ import absolute_import -from .Ffs import Ffs +from .Ffs import SectionSuffix from . import Section import subprocess import Common.LongFilePathOs as os @@ -86,7 +86,7 @@ class CompressSection (CompressSectionClassObject) : ModuleName + \ SUP_MODULE_SEC + \ SecNum + \ - Ffs.SectionSuffix['COMPRESS'] + SectionSuffix['COMPRESS'] OutputFile = os.path.normpath(OutputFile) DummyFile = OutputFile + '.dummy' GenFdsGlobalVariable.GenerateSection(DummyFile, SectFiles, InputAlign=SectAlign, IsMakefile=IsMakefile) diff --git a/BaseTools/Source/Python/GenFds/DataSection.py b/BaseTools/Source/Python/GenFds/DataSection.py index a6387b07c582..28f9b931ca4b 100644 --- a/BaseTools/Source/Python/GenFds/DataSection.py +++ b/BaseTools/Source/Python/GenFds/DataSection.py @@ -19,7 +19,7 @@ from __future__ import absolute_import from . import Section from .GenFdsGlobalVariable import GenFdsGlobalVariable import subprocess -from .Ffs import Ffs +from .Ffs import SectionSuffix import Common.LongFilePathOs as os from CommonDataClass.FdfClass import DataSectionClassObject from Common.Misc import PeImageClass @@ -121,7 +121,7 @@ class DataSection (DataSectionClassObject): ) self.SectFileName = TeFile - OutputFile = os.path.join (OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get(self.SecType)) + OutputFile = os.path.join (OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get(self.SecType)) OutputFile = os.path.normpath(OutputFile) GenFdsGlobalVariable.GenerateSection(OutputFile, [self.SectFileName], Section.Section.SectionType.get(self.SecType), IsMakefile = IsMakefile) FileList = [OutputFile] diff --git a/BaseTools/Source/Python/GenFds/DepexSection.py b/BaseTools/Source/Python/GenFds/DepexSection.py index b2d123bfc045..78efc18816e8 100644 --- a/BaseTools/Source/Python/GenFds/DepexSection.py +++ b/BaseTools/Source/Python/GenFds/DepexSection.py @@ -18,8 +18,6 @@ from __future__ import absolute_import from . import Section from .GenFdsGlobalVariable import GenFdsGlobalVariable -import subprocess -from .Ffs import Ffs import Common.LongFilePathOs as os from CommonDataClass.FdfClass import DepexSectionClassObject from AutoGen.GenDepex import DependencyExpression @@ -46,9 +44,8 @@ class DepexSection (DepexSectionClassObject): GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag) for Inf in GenFdsGlobalVariable.FdfParser.Profile.InfList: - ModuleFile = PathClass(Inf, GenFdsGlobalVariable.WorkSpaceDir) ModuleData = GenFdsGlobalVariable.WorkSpace.BuildObject[ - ModuleFile, + PathClass(Inf, GenFdsGlobalVariable.WorkSpaceDir), Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag diff --git a/BaseTools/Source/Python/GenFds/EfiSection.py b/BaseTools/Source/Python/GenFds/EfiSection.py index 623b77d27427..f8573b5c7d1b 100644 --- a/BaseTools/Source/Python/GenFds/EfiSection.py +++ b/BaseTools/Source/Python/GenFds/EfiSection.py @@ -20,7 +20,7 @@ from struct import * from . 
import Section from .GenFdsGlobalVariable import GenFdsGlobalVariable import subprocess -from .Ffs import Ffs +from .Ffs import SectionSuffix import Common.LongFilePathOs as os from CommonDataClass.FdfClass import EfiSectionClassObject from Common import EdkLogger @@ -124,7 +124,7 @@ class EfiSection (EfiSectionClassObject): BuildNumTuple = tuple() Num = SecNum - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + Ffs.SectionSuffix.get(SectionType)) + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + SectionSuffix.get(SectionType)) GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', #Ui=StringData, Ver=BuildNum, @@ -135,7 +135,7 @@ class EfiSection (EfiSectionClassObject): for File in FileList: Index = Index + 1 Num = '%s.%d' %(SecNum, Index) - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + Ffs.SectionSuffix.get(SectionType)) + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + SectionSuffix.get(SectionType)) f = open(File, 'r') VerString = f.read() f.close() @@ -164,7 +164,7 @@ class EfiSection (EfiSectionClassObject): else: EdkLogger.error("GenFds", GENFDS_ERROR, "File: %s miss Version Section value" %InfFileName) Num = SecNum - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + Ffs.SectionSuffix.get(SectionType)) + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + SectionSuffix.get(SectionType)) GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', #Ui=VerString, Ver=BuildNum, @@ -185,7 +185,7 @@ class EfiSection (EfiSectionClassObject): Num = SecNum if IsMakefile and StringData == ModuleNameStr: StringData = "$(MODULE_NAME)" - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + Ffs.SectionSuffix.get(SectionType)) + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + SectionSuffix.get(SectionType)) GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_USER_INTERFACE', Ui=StringData, IsMakefile=IsMakefile) OutputFileList.append(OutputFile) @@ -194,7 +194,7 @@ class EfiSection (EfiSectionClassObject): for File in FileList: Index = Index + 1 Num = '%s.%d' %(SecNum, Index) - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + Ffs.SectionSuffix.get(SectionType)) + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + SectionSuffix.get(SectionType)) f = open(File, 'r') UiString = f.read() f.close() @@ -218,7 +218,7 @@ class EfiSection (EfiSectionClassObject): Num = SecNum if IsMakefile and StringData == ModuleNameStr: StringData = "$(MODULE_NAME)" - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + Ffs.SectionSuffix.get(SectionType)) + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + SectionSuffix.get(SectionType)) GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_USER_INTERFACE', Ui=StringData, IsMakefile=IsMakefile) OutputFileList.append(OutputFile) @@ -239,7 +239,7 @@ class EfiSection (EfiSectionClassObject): """ Copy Map file to FFS output path """ Index = Index + 1 Num = '%s.%d' %(SecNum, Index) - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + Num + Ffs.SectionSuffix.get(SectionType)) + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + Num + SectionSuffix.get(SectionType)) File = GenFdsGlobalVariable.MacroExtend(File, Dict) #Get PE Section alignment when 
align is set to AUTO diff --git a/BaseTools/Source/Python/GenFds/FdfParser.py b/BaseTools/Source/Python/GenFds/FdfParser.py index 63687e98bb31..62bf9f14012a 100644 --- a/BaseTools/Source/Python/GenFds/FdfParser.py +++ b/BaseTools/Source/Python/GenFds/FdfParser.py @@ -18,79 +18,66 @@ # from __future__ import print_function from __future__ import absolute_import -import re +from re import compile, DOTALL +from string import hexdigits +from uuid import UUID -from . import Fd -from . import Region -from . import Fv -from . import AprioriSection -from . import FfsInfStatement -from . import FfsFileStatement -from . import VerSection -from . import UiSection -from . import FvImageSection -from . import DataSection -from . import DepexSection -from . import CompressSection -from . import GuidSection -from . import Capsule -from . import CapsuleData -from . import Rule -from . import RuleComplexFile -from . import RuleSimpleFile -from . import EfiSection -from . import Vtf -from . import ComponentStatement -from . import OptionRom -from . import OptRomInfStatement -from . import OptRomFileStatement -import string - -from .GenFdsGlobalVariable import GenFdsGlobalVariable from Common.BuildToolError import * from Common import EdkLogger -from Common.Misc import PathClass -from Common.StringUtils import NormPath -import Common.GlobalData as GlobalData -from Common.Expression import * +from Common.Misc import PathClass, tdict +from Common.StringUtils import NormPath, ReplaceMacro from Common import GlobalData +from Common.Expression import * from Common.DataType import * -from Common.StringUtils import ReplaceMacro -import uuid -from Common.Misc import tdict from Common.MultipleWorkspace import MultipleWorkspace as mws import Common.LongFilePathOs as os from Common.LongFilePathSupport import OpenLongFilePath as open -from .Capsule import EFI_CERT_TYPE_PKCS7_GUID -from .Capsule import EFI_CERT_TYPE_RSA2048_SHA256_GUID from Common.RangeExpression import RangeExpression from collections import OrderedDict -##define T_CHAR_SPACE ' ' -##define T_CHAR_NULL '\0' -##define T_CHAR_CR '\r' -##define T_CHAR_TAB '\t' -##define T_CHAR_LF '\n' -##define T_CHAR_SLASH '/' -##define T_CHAR_BACKSLASH '\\' -##define T_CHAR_DOUBLE_QUOTE '\"' -##define T_CHAR_SINGLE_QUOTE '\'' -##define T_CHAR_STAR '*' -##define T_CHAR_HASH '#' +from .Fd import FD +from .Region import Region +from .Fv import FV +from .AprioriSection import AprioriSection +from .FfsInfStatement import FfsInfStatement +from .FfsFileStatement import FileStatement +from .VerSection import VerSection +from .UiSection import UiSection +from .FvImageSection import FvImageSection +from .DataSection import DataSection +from .DepexSection import DepexSection +from .CompressSection import CompressSection +from .GuidSection import GuidSection +from .Capsule import EFI_CERT_TYPE_PKCS7_GUID, EFI_CERT_TYPE_RSA2048_SHA256_GUID, Capsule +from .CapsuleData import CapsuleFfs, CapsulePayload, CapsuleFv, CapsuleFd, CapsuleAnyFile, CapsuleAfile +from .RuleComplexFile import RuleComplexFile +from .RuleSimpleFile import RuleSimpleFile +from .EfiSection import EfiSection +from .Vtf import Vtf +from .ComponentStatement import ComponentStatement +from .OptionRom import OPTIONROM +from .OptRomInfStatement import OptRomInfStatement, OverrideAttribs +from .OptRomFileStatement import OptRomFileStatement +from .GenFdsGlobalVariable import GenFdsGlobalVariable -(T_CHAR_SPACE, T_CHAR_NULL, T_CHAR_CR, T_CHAR_TAB, T_CHAR_LF, T_CHAR_SLASH, \ -T_CHAR_BACKSLASH, T_CHAR_DOUBLE_QUOTE, 
T_CHAR_SINGLE_QUOTE, T_CHAR_STAR, T_CHAR_HASH) = \ -(' ', '\0', '\r', '\t', '\n', '/', '\\', '\"', '\'', '*', '#') +T_CHAR_CR = '\r' +T_CHAR_TAB = '\t' +T_CHAR_DOUBLE_QUOTE = '\"' +T_CHAR_SINGLE_QUOTE = '\'' +T_CHAR_STAR = '*' -SEPERATOR_TUPLE = ('=', '|', ',', '{', '}') +SEPARATORS = {TAB_EQUAL_SPLIT, TAB_VALUE_SPLIT, TAB_COMMA_SPLIT, '{', '}'} +ALIGNMENTS = {"Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", + "256K", "512K", "1M", "2M", "4M", "8M", "16M"} +ALIGNMENT_NOAUTO = ALIGNMENTS - {"Auto"} -RegionSizePattern = re.compile("\s*(?P<base>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<size>(?:0x|0X)?[a-fA-F0-9]+)\s*") -RegionSizeGuidPattern = re.compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*\|\s*(?P<size>\w+\.\w+[\.\w\[\]]*)\s*") -RegionOffsetPcdPattern = re.compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*$") -ShortcutPcdPattern = re.compile("\s*\w+\s*=\s*(?P<value>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<name>\w+\.\w+)\s*") -BaseAddrValuePattern = re.compile('^0[xX][0-9a-fA-F]+') -FileExtensionPattern = re.compile(r'([a-zA-Z][a-zA-Z0-9]*)') -TokenFindPattern = re.compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\)|\*)') +RegionSizePattern = compile("\s*(?P<base>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<size>(?:0x|0X)?[a-fA-F0-9]+)\s*") +RegionSizeGuidPattern = compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*\|\s*(?P<size>\w+\.\w+[\.\w\[\]]*)\s*") +RegionOffsetPcdPattern = compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*$") +ShortcutPcdPattern = compile("\s*\w+\s*=\s*(?P<value>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<name>\w+\.\w+)\s*") +BaseAddrValuePattern = compile('^0[xX][0-9a-fA-F]+') +FileExtensionPattern = compile(r'([a-zA-Z][a-zA-Z0-9]*)') +TokenFindPattern = compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\)|\*)') AllIncludeFileList = [] # Get the closest parent @@ -109,7 +96,6 @@ def IsValidInclude (File, Line): return True def GetRealFileLine (File, Line): - InsertedLines = 0 for Profile in AllIncludeFileList: if Profile.IsLineInFile(Line): @@ -121,7 +107,7 @@ def GetRealFileLine (File, Line): ## The exception class that used to report error messages when parsing FDF # -# Currently the "ToolName" is set to be "FDF Parser". +# Currently the "ToolName" is set to be "FdfParser". # class Warning (Exception): ## The constructor @@ -132,7 +118,6 @@ class Warning (Exception): # @param Line The Line number that error occurs # def __init__(self, Str, File = None, Line = None): - FileLineTuple = GetRealFileLine(File, Line) self.FileName = FileLineTuple[0] self.LineNumber = FileLineTuple[1] @@ -147,7 +132,7 @@ class Warning (Exception): # # May raise Exception when opening file. # -class IncludeFileProfile : +class IncludeFileProfile: ## The constructor # # @param self The object pointer @@ -157,16 +142,11 @@ class IncludeFileProfile : self.FileName = FileName self.FileLinesList = [] try: - fsock = open(FileName, "rb", 0) - try: + with open(FileName, "rb", 0) as fsock: self.FileLinesList = fsock.readlines() for index, line in enumerate(self.FileLinesList): - if not line.endswith('\n'): - self.FileLinesList[index] += '\n' - - finally: - fsock.close() - + if not line.endswith(TAB_LINE_BREAK): + self.FileLinesList[index] += TAB_LINE_BREAK except: EdkLogger.error("FdfParser", FILE_OPEN_FAILURE, ExtraData=FileName) @@ -209,7 +189,7 @@ class IncludeFileProfile : # # May raise Exception when opening file. 
# -class FileProfile : +class FileProfile: ## The constructor # # @param self The object pointer @@ -218,11 +198,8 @@ class FileProfile : def __init__(self, FileName): self.FileLinesList = [] try: - fsock = open(FileName, "rb", 0) - try: + with open(FileName, "rb", 0) as fsock: self.FileLinesList = fsock.readlines() - finally: - fsock.close() except: EdkLogger.error("FdfParser", FILE_OPEN_FAILURE, ExtraData=FileName) @@ -266,41 +243,37 @@ class FdfParser: self.CurrentOffsetWithinLine = 0 self.CurrentFdName = None self.CurrentFvName = None - self.__Token = "" - self.__SkippedChars = "" + self._Token = "" + self._SkippedChars = "" GlobalData.gFdfParser = self # Used to section info - self.__CurSection = [] + self._CurSection = [] # Key: [section name, UI name, arch] - # Value: {MACRO_NAME : MACRO_VALUE} - self.__MacroDict = tdict(True, 3) - self.__PcdDict = OrderedDict() + # Value: {MACRO_NAME: MACRO_VALUE} + self._MacroDict = tdict(True, 3) + self._PcdDict = OrderedDict() - self.__WipeOffArea = [] + self._WipeOffArea = [] if GenFdsGlobalVariable.WorkSpaceDir == '': GenFdsGlobalVariable.WorkSpaceDir = os.getenv("WORKSPACE") - ## __SkipWhiteSpace() method + ## _SkipWhiteSpace() method # - # Skip white spaces from current char, return number of chars skipped + # Skip white spaces from current char. # # @param self The object pointer - # @retval Count The number of chars skipped # - def __SkipWhiteSpace(self): - Count = 0 - while not self.__EndOfFile(): - Count += 1 - if self.__CurrentChar() in (T_CHAR_NULL, T_CHAR_CR, T_CHAR_LF, T_CHAR_SPACE, T_CHAR_TAB): - self.__SkippedChars += str(self.__CurrentChar()) - self.__GetOneChar() - + def _SkipWhiteSpace(self): + while not self._EndOfFile(): + if self._CurrentChar() in (TAB_PRINTCHAR_NUL, T_CHAR_CR, TAB_LINE_BREAK, TAB_SPACE_SPLIT, T_CHAR_TAB): + self._SkippedChars += str(self._CurrentChar()) + self._GetOneChar() else: - Count = Count - 1 - return Count + return + return - ## __EndOfFile() method + ## _EndOfFile() method # # Judge current buffer pos is at file end # @@ -308,17 +281,16 @@ class FdfParser: # @retval True Current File buffer position is at file end # @retval False Current File buffer position is NOT at file end # - def __EndOfFile(self): + def _EndOfFile(self): NumberOfLines = len(self.Profile.FileLinesList) SizeOfLastLine = len(self.Profile.FileLinesList[-1]) if self.CurrentLineNumber == NumberOfLines and self.CurrentOffsetWithinLine >= SizeOfLastLine - 1: return True - elif self.CurrentLineNumber > NumberOfLines: + if self.CurrentLineNumber > NumberOfLines: return True - else: - return False + return False - ## __EndOfLine() method + ## _EndOfLine() method # # Judge current buffer pos is at line end # @@ -326,14 +298,13 @@ class FdfParser: # @retval True Current File buffer position is at line end # @retval False Current File buffer position is NOT at line end # - def __EndOfLine(self): + def _EndOfLine(self): if self.CurrentLineNumber > len(self.Profile.FileLinesList): return True SizeOfCurrentLine = len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) if self.CurrentOffsetWithinLine >= SizeOfCurrentLine: return True - else: - return False + return False ## Rewind() method # @@ -347,7 +318,7 @@ class FdfParser: self.CurrentLineNumber = DestLine self.CurrentOffsetWithinLine = DestOffset - ## __UndoOneChar() method + ## _UndoOneChar() method # # Go back one char in the file buffer # @@ -355,80 +326,78 @@ class FdfParser: # @retval True Successfully go back one char # @retval False Not able to go back one char as file 
beginning reached # - def __UndoOneChar(self): - + def _UndoOneChar(self): if self.CurrentLineNumber == 1 and self.CurrentOffsetWithinLine == 0: return False elif self.CurrentOffsetWithinLine == 0: self.CurrentLineNumber -= 1 - self.CurrentOffsetWithinLine = len(self.__CurrentLine()) - 1 + self.CurrentOffsetWithinLine = len(self._CurrentLine()) - 1 else: self.CurrentOffsetWithinLine -= 1 return True - ## __GetOneChar() method + ## _GetOneChar() method # # Move forward one char in the file buffer # # @param self The object pointer # - def __GetOneChar(self): + def _GetOneChar(self): if self.CurrentOffsetWithinLine == len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) - 1: self.CurrentLineNumber += 1 self.CurrentOffsetWithinLine = 0 else: self.CurrentOffsetWithinLine += 1 - ## __CurrentChar() method + ## _CurrentChar() method # # Get the char pointed to by the file buffer pointer # # @param self The object pointer # @retval Char Current char # - def __CurrentChar(self): + def _CurrentChar(self): return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine] - ## __NextChar() method + ## _NextChar() method # # Get the one char pass the char pointed to by the file buffer pointer # # @param self The object pointer # @retval Char Next char # - def __NextChar(self): + def _NextChar(self): if self.CurrentOffsetWithinLine == len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) - 1: return self.Profile.FileLinesList[self.CurrentLineNumber][0] - else: - return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine + 1] + return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine + 1] - ## __SetCurrentCharValue() method + ## _SetCurrentCharValue() method # # Modify the value of current char # # @param self The object pointer # @param Value The new value of current char # - def __SetCurrentCharValue(self, Value): + def _SetCurrentCharValue(self, Value): self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine] = Value - ## __CurrentLine() method + ## _CurrentLine() method # # Get the list that contains current line contents # # @param self The object pointer # @retval List current line contents # - def __CurrentLine(self): + def _CurrentLine(self): return self.Profile.FileLinesList[self.CurrentLineNumber - 1] - def __StringToList(self): + def _StringToList(self): self.Profile.FileLinesList = [list(s) for s in self.Profile.FileLinesList] if not self.Profile.FileLinesList: EdkLogger.error('FdfParser', FILE_READ_FAILURE, 'The file is empty!', File=self.FileName) self.Profile.FileLinesList[-1].append(' ') - def __ReplaceFragment(self, StartPos, EndPos, Value = ' '): + def _ReplaceFragment(self, StartPos, EndPos, Value = ' '): if StartPos[0] == EndPos[0]: Offset = StartPos[1] while Offset <= EndPos[1]: @@ -437,14 +406,14 @@ class FdfParser: return Offset = StartPos[1] - while self.Profile.FileLinesList[StartPos[0]][Offset] not in ('\r', '\n'): + while self.Profile.FileLinesList[StartPos[0]][Offset] not in (T_CHAR_CR, TAB_LINE_BREAK): self.Profile.FileLinesList[StartPos[0]][Offset] = Value Offset += 1 Line = StartPos[0] while Line < EndPos[0]: Offset = 0 - while self.Profile.FileLinesList[Line][Offset] not in ('\r', '\n'): + while self.Profile.FileLinesList[Line][Offset] not in (T_CHAR_CR, TAB_LINE_BREAK): self.Profile.FileLinesList[Line][Offset] = Value Offset += 1 Line += 1 @@ -454,45 +423,29 @@ class FdfParser: self.Profile.FileLinesList[EndPos[0]][Offset] = Value Offset += 
1 - - def __GetMacroName(self): - if not self.__GetNextToken(): - raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) - MacroName = self.__Token - NotFlag = False - if MacroName.startswith('!'): - NotFlag = True - MacroName = MacroName[1:].strip() - - if not MacroName.startswith('$(') or not MacroName.endswith(')'): - raise Warning("Macro name expected(Please use '$(%(Token)s)' if '%(Token)s' is a macro.)" % {"Token" : MacroName}, - self.FileName, self.CurrentLineNumber) - MacroName = MacroName[2:-1] - return MacroName, NotFlag - - def __SetMacroValue(self, Macro, Value): - if not self.__CurSection: + def _SetMacroValue(self, Macro, Value): + if not self._CurSection: return MacroDict = {} - if not self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]]: - self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]] = MacroDict + if not self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]]: + self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]] = MacroDict else: - MacroDict = self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]] + MacroDict = self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]] MacroDict[Macro] = Value - def __GetMacroValue(self, Macro): + def _GetMacroValue(self, Macro): # Highest priority if Macro in GlobalData.gCommandLineDefines: return GlobalData.gCommandLineDefines[Macro] if Macro in GlobalData.gGlobalDefines: return GlobalData.gGlobalDefines[Macro] - if self.__CurSection: - MacroDict = self.__MacroDict[ - self.__CurSection[0], - self.__CurSection[1], - self.__CurSection[2] + if self._CurSection: + MacroDict = self._MacroDict[ + self._CurSection[0], + self._CurSection[1], + self._CurSection[2] ] if MacroDict and Macro in MacroDict: return MacroDict[Macro] @@ -502,7 +455,7 @@ class FdfParser: return GlobalData.gPlatformDefines[Macro] return None - def __SectionHeaderParser(self, Section): + def _SectionHeaderParser(self, Section): # [Defines] # [FD.UiName]: use dummy instead if UI name is optional # [FV.UiName] @@ -510,25 +463,25 @@ class FdfParser: # [Rule]: don't take rule section into account, macro is not allowed in this section # [VTF.arch.UiName, arch] # [OptionRom.DriverName] - self.__CurSection = [] - Section = Section.strip()[1:-1].upper().replace(' ', '').strip('.') - ItemList = Section.split('.') + self._CurSection = [] + Section = Section.strip()[1:-1].upper().replace(' ', '').strip(TAB_SPLIT) + ItemList = Section.split(TAB_SPLIT) Item = ItemList[0] if Item == '' or Item == 'RULE': return if Item == TAB_COMMON_DEFINES.upper(): - self.__CurSection = [TAB_COMMON, TAB_COMMON, TAB_COMMON] + self._CurSection = [TAB_COMMON, TAB_COMMON, TAB_COMMON] elif Item == 'VTF' and len(ItemList) == 3: UiName = ItemList[2] - Pos = UiName.find(',') + Pos = UiName.find(TAB_COMMA_SPLIT) if Pos != -1: UiName = UiName[:Pos] - self.__CurSection = ['VTF', UiName, ItemList[1]] + self._CurSection = ['VTF', UiName, ItemList[1]] elif len(ItemList) > 1: - self.__CurSection = [ItemList[0], ItemList[1], TAB_COMMON] + self._CurSection = [ItemList[0], ItemList[1], TAB_COMMON] elif len(ItemList) > 0: - self.__CurSection = [ItemList[0], 'DUMMY', TAB_COMMON] + self._CurSection = [ItemList[0], 'DUMMY', TAB_COMMON] ## PreprocessFile() method # @@ -540,7 +493,6 @@ class FdfParser: # @param self The object pointer # def PreprocessFile(self): - self.Rewind() InComment = False DoubleSlashComment = False @@ -548,12 +500,12 @@ 
class FdfParser: # HashComment in quoted string " " is ignored. InString = False - while not self.__EndOfFile(): + while not self._EndOfFile(): - if self.__CurrentChar() == T_CHAR_DOUBLE_QUOTE and not InComment: + if self._CurrentChar() == T_CHAR_DOUBLE_QUOTE and not InComment: InString = not InString # meet new line, then no longer in a comment for // and '#' - if self.__CurrentChar() == T_CHAR_LF: + if self._CurrentChar() == TAB_LINE_BREAK: self.CurrentLineNumber += 1 self.CurrentOffsetWithinLine = 0 if InComment and DoubleSlashComment: @@ -563,33 +515,33 @@ class FdfParser: InComment = False HashComment = False # check for */ comment end - elif InComment and not DoubleSlashComment and not HashComment and self.__CurrentChar() == T_CHAR_STAR and self.__NextChar() == T_CHAR_SLASH: - self.__SetCurrentCharValue(T_CHAR_SPACE) - self.__GetOneChar() - self.__SetCurrentCharValue(T_CHAR_SPACE) - self.__GetOneChar() + elif InComment and not DoubleSlashComment and not HashComment and self._CurrentChar() == T_CHAR_STAR and self._NextChar() == TAB_BACK_SLASH: + self._SetCurrentCharValue(TAB_SPACE_SPLIT) + self._GetOneChar() + self._SetCurrentCharValue(TAB_SPACE_SPLIT) + self._GetOneChar() InComment = False # set comments to spaces elif InComment: - self.__SetCurrentCharValue(T_CHAR_SPACE) - self.__GetOneChar() + self._SetCurrentCharValue(TAB_SPACE_SPLIT) + self._GetOneChar() # check for // comment - elif self.__CurrentChar() == T_CHAR_SLASH and self.__NextChar() == T_CHAR_SLASH and not self.__EndOfLine(): + elif self._CurrentChar() == TAB_BACK_SLASH and self._NextChar() == TAB_BACK_SLASH and not self._EndOfLine(): InComment = True DoubleSlashComment = True # check for '#' comment - elif self.__CurrentChar() == T_CHAR_HASH and not self.__EndOfLine() and not InString: + elif self._CurrentChar() == TAB_COMMENT_SPLIT and not self._EndOfLine() and not InString: InComment = True HashComment = True # check for /* comment start - elif self.__CurrentChar() == T_CHAR_SLASH and self.__NextChar() == T_CHAR_STAR: - self.__SetCurrentCharValue( T_CHAR_SPACE) - self.__GetOneChar() - self.__SetCurrentCharValue( T_CHAR_SPACE) - self.__GetOneChar() + elif self._CurrentChar() == TAB_BACK_SLASH and self._NextChar() == T_CHAR_STAR: + self._SetCurrentCharValue(TAB_SPACE_SPLIT) + self._GetOneChar() + self._SetCurrentCharValue(TAB_SPACE_SPLIT) + self._GetOneChar() InComment = True else: - self.__GetOneChar() + self._GetOneChar() # restore from ListOfList to ListOfString self.Profile.FileLinesList = ["".join(list) for list in self.Profile.FileLinesList] @@ -606,30 +558,30 @@ class FdfParser: # nested include support Processed = False MacroDict = {} - while self.__GetNextToken(): + while self._GetNextToken(): - if self.__Token == 'DEFINE': - if not self.__GetNextToken(): + if self._Token == TAB_DEFINE: + if not self._GetNextToken(): raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) - Macro = self.__Token - if not self.__IsToken( "="): + Macro = self._Token + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - Value = self.__GetExpression() + Value = self._GetExpression() MacroDict[Macro] = Value - elif self.__Token == '!include': + elif self._Token == TAB_INCLUDE: Processed = True IncludeLine = self.CurrentLineNumber - IncludeOffset = self.CurrentOffsetWithinLine - len('!include') - if not self.__GetNextToken(): + IncludeOffset = self.CurrentOffsetWithinLine - len(TAB_INCLUDE) + if not self._GetNextToken(): raise Warning("expected include file 
name", self.FileName, self.CurrentLineNumber) - IncFileName = self.__Token + IncFileName = self._Token PreIndex = 0 StartPos = IncFileName.find('$(', PreIndex) EndPos = IncFileName.find(')', StartPos+2) while StartPos != -1 and EndPos != -1: - Macro = IncFileName[StartPos+2 : EndPos] - MacroVal = self.__GetMacroValue(Macro) + Macro = IncFileName[StartPos+2: EndPos] + MacroVal = self._GetMacroValue(Macro) if not MacroVal: if Macro in MacroDict: MacroVal = MacroDict[Macro] @@ -686,9 +638,9 @@ class FdfParser: IncFileProfile.Level = ParentProfile.Level + 1 IncFileProfile.InsertStartLineNumber = InsertAtLine + 1 # deal with remaining portions after "!include filename", if exists. - if self.__GetNextToken(): + if self._GetNextToken(): if self.CurrentLineNumber == CurrentLine: - RemainingLine = self.__CurrentLine()[CurrentOffset:] + RemainingLine = self._CurrentLine()[CurrentOffset:] self.Profile.FileLinesList.insert(self.CurrentLineNumber, RemainingLine) IncFileProfile.InsertAdjust += 1 self.CurrentLineNumber += 1 @@ -704,7 +656,7 @@ class FdfParser: # comment out the processed include file statement TempList = list(self.Profile.FileLinesList[IncludeLine - 1]) - TempList.insert(IncludeOffset, '#') + TempList.insert(IncludeOffset, TAB_COMMENT_SPLIT) self.Profile.FileLinesList[IncludeLine - 1] = ''.join(TempList) if Processed: # Nested and back-to-back support self.Rewind(DestLine = IncFileProfile.InsertStartLineNumber - 1) @@ -713,7 +665,7 @@ class FdfParser: self.Rewind() @staticmethod - def __GetIfListCurrentItemStat(IfList): + def _GetIfListCurrentItemStat(IfList): if len(IfList) == 0: return True @@ -735,29 +687,29 @@ class FdfParser: IfList = [] RegionLayoutLine = 0 ReplacedLine = -1 - while self.__GetNextToken(): + while self._GetNextToken(): # Determine section name and the location dependent macro - if self.__GetIfListCurrentItemStat(IfList): - if self.__Token.startswith('['): - Header = self.__Token - if not self.__Token.endswith(']'): - self.__SkipToToken(']') - Header += self.__SkippedChars + if self._GetIfListCurrentItemStat(IfList): + if self._Token.startswith(TAB_SECTION_START): + Header = self._Token + if not self._Token.endswith(TAB_SECTION_END): + self._SkipToToken(TAB_SECTION_END) + Header += self._SkippedChars if Header.find('$(') != -1: raise Warning("macro cannot be used in section header", self.FileName, self.CurrentLineNumber) - self.__SectionHeaderParser(Header) + self._SectionHeaderParser(Header) continue # Replace macros except in RULE section or out of section - elif self.__CurSection and ReplacedLine != self.CurrentLineNumber: + elif self._CurSection and ReplacedLine != self.CurrentLineNumber: ReplacedLine = self.CurrentLineNumber - self.__UndoToken() + self._UndoToken() CurLine = self.Profile.FileLinesList[ReplacedLine - 1] PreIndex = 0 StartPos = CurLine.find('$(', PreIndex) EndPos = CurLine.find(')', StartPos+2) - while StartPos != -1 and EndPos != -1 and self.__Token not in ['!ifdef', '!ifndef', '!if', '!elseif']: - MacroName = CurLine[StartPos+2 : EndPos] - MacorValue = self.__GetMacroValue(MacroName) + while StartPos != -1 and EndPos != -1 and self._Token not in [TAB_IF_DEF, TAB_IF_N_DEF, TAB_IF, TAB_ELSE_IF]: + MacroName = CurLine[StartPos+2: EndPos] + MacorValue = self._GetMacroValue(MacroName) if MacorValue is not None: CurLine = CurLine.replace('$(' + MacroName + ')', MacorValue, 1) if MacorValue.find('$(') != -1: @@ -771,74 +723,74 @@ class FdfParser: self.Profile.FileLinesList[ReplacedLine - 1] = CurLine continue - if self.__Token == 'DEFINE': - if 
self.__GetIfListCurrentItemStat(IfList): - if not self.__CurSection: + if self._Token == TAB_DEFINE: + if self._GetIfListCurrentItemStat(IfList): + if not self._CurSection: raise Warning("macro cannot be defined in Rule section or out of section", self.FileName, self.CurrentLineNumber) DefineLine = self.CurrentLineNumber - 1 - DefineOffset = self.CurrentOffsetWithinLine - len('DEFINE') - if not self.__GetNextToken(): + DefineOffset = self.CurrentOffsetWithinLine - len(TAB_DEFINE) + if not self._GetNextToken(): raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) - Macro = self.__Token - if not self.__IsToken( "="): + Macro = self._Token + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - Value = self.__GetExpression() - self.__SetMacroValue(Macro, Value) - self.__WipeOffArea.append(((DefineLine, DefineOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) - elif self.__Token == 'SET': - if not self.__GetIfListCurrentItemStat(IfList): + Value = self._GetExpression() + self._SetMacroValue(Macro, Value) + self._WipeOffArea.append(((DefineLine, DefineOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) + elif self._Token == 'SET': + if not self._GetIfListCurrentItemStat(IfList): continue SetLine = self.CurrentLineNumber - 1 SetOffset = self.CurrentOffsetWithinLine - len('SET') - PcdPair = self.__GetNextPcdSettings() + PcdPair = self._GetNextPcdSettings() PcdName = "%s.%s" % (PcdPair[1], PcdPair[0]) - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - Value = self.__GetExpression() - Value = self.__EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) + Value = self._GetExpression() + Value = self._EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) - self.__PcdDict[PcdName] = Value + self._PcdDict[PcdName] = Value self.Profile.PcdDict[PcdPair] = Value self.SetPcdLocalation(PcdPair) FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) self.Profile.PcdFileLineDict[PcdPair] = FileLineTuple - self.__WipeOffArea.append(((SetLine, SetOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) - elif self.__Token in ('!ifdef', '!ifndef', '!if'): - IfStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self.__Token)) + self._WipeOffArea.append(((SetLine, SetOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) + elif self._Token in (TAB_IF_DEF, TAB_IF_N_DEF, TAB_IF): + IfStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self._Token)) IfList.append([IfStartPos, None, None]) - CondLabel = self.__Token - Expression = self.__GetExpression() + CondLabel = self._Token + Expression = self._GetExpression() - if CondLabel == '!if': - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') + if CondLabel == TAB_IF: + ConditionSatisfied = self._EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') else: - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'in') - if CondLabel == '!ifndef': + ConditionSatisfied = self._EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'in') + if CondLabel == TAB_IF_N_DEF: ConditionSatisfied = not ConditionSatisfied BranchDetermined = ConditionSatisfied IfList[-1] = [IfList[-1][0], ConditionSatisfied, BranchDetermined] if ConditionSatisfied: - 
self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) - elif self.__Token in ('!elseif', '!else'): - ElseStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self.__Token)) + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) + elif self._Token in (TAB_ELSE_IF, '!else'): + ElseStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self._Token)) if len(IfList) <= 0: raise Warning("Missing !if statement", self.FileName, self.CurrentLineNumber) if IfList[-1][1]: IfList[-1] = [ElseStartPos, False, True] - self.__WipeOffArea.append((ElseStartPos, (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) + self._WipeOffArea.append((ElseStartPos, (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) else: - self.__WipeOffArea.append((IfList[-1][0], ElseStartPos)) + self._WipeOffArea.append((IfList[-1][0], ElseStartPos)) IfList[-1] = [ElseStartPos, True, IfList[-1][2]] - if self.__Token == '!elseif': - Expression = self.__GetExpression() - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') + if self._Token == TAB_ELSE_IF: + Expression = self._GetExpression() + ConditionSatisfied = self._EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') IfList[-1] = [IfList[-1][0], ConditionSatisfied, IfList[-1][2]] if IfList[-1][1]: @@ -846,14 +798,14 @@ class FdfParser: IfList[-1][1] = False else: IfList[-1][2] = True - self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) - elif self.__Token == '!endif': + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) + elif self._Token == '!endif': if len(IfList) <= 0: raise Warning("Missing !if statement", self.FileName, self.CurrentLineNumber) if IfList[-1][1]: - self.__WipeOffArea.append(((self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len('!endif')), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) + self._WipeOffArea.append(((self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len('!endif')), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) else: - self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) IfList.pop() elif not IfList: # Don't use PCDs inside conditional directive @@ -862,7 +814,7 @@ class FdfParser: continue SetPcd = ShortcutPcdPattern.match(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) if SetPcd: - self.__PcdDict[SetPcd.group('name')] = SetPcd.group('value') + self._PcdDict[SetPcd.group('name')] = SetPcd.group('value') RegionLayoutLine = self.CurrentLineNumber continue RegionSize = RegionSizePattern.match(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) @@ -873,35 +825,35 @@ class FdfParser: if not RegionSizeGuid: RegionLayoutLine = self.CurrentLineNumber + 1 continue - self.__PcdDict[RegionSizeGuid.group('base')] = RegionSize.group('base') - self.__PcdDict[RegionSizeGuid.group('size')] = RegionSize.group('size') + self._PcdDict[RegionSizeGuid.group('base')] = RegionSize.group('base') + self._PcdDict[RegionSizeGuid.group('size')] = RegionSize.group('size') RegionLayoutLine = self.CurrentLineNumber + 1 if IfList: raise Warning("Missing !endif", self.FileName, self.CurrentLineNumber) self.Rewind() - def 
__CollectMacroPcd(self): + def _CollectMacroPcd(self): MacroDict = {} # PCD macro MacroDict.update(GlobalData.gPlatformPcds) - MacroDict.update(self.__PcdDict) + MacroDict.update(self._PcdDict) # Lowest priority MacroDict.update(GlobalData.gPlatformDefines) - if self.__CurSection: + if self._CurSection: # Defines macro - ScopeMacro = self.__MacroDict[TAB_COMMON, TAB_COMMON, TAB_COMMON] + ScopeMacro = self._MacroDict[TAB_COMMON, TAB_COMMON, TAB_COMMON] if ScopeMacro: MacroDict.update(ScopeMacro) # Section macro - ScopeMacro = self.__MacroDict[ - self.__CurSection[0], - self.__CurSection[1], - self.__CurSection[2] + ScopeMacro = self._MacroDict[ + self._CurSection[0], + self._CurSection[1], + self._CurSection[2] ] if ScopeMacro: MacroDict.update(ScopeMacro) @@ -912,15 +864,15 @@ class FdfParser: for Item in GlobalData.BuildOptionPcd: if isinstance(Item, tuple): continue - PcdName, TmpValue = Item.split("=") + PcdName, TmpValue = Item.split(TAB_EQUAL_SPLIT) TmpValue = BuildOptionValue(TmpValue, {}) MacroDict[PcdName.strip()] = TmpValue # Highest priority return MacroDict - def __EvaluateConditional(self, Expression, Line, Op = None, Value = None): - MacroPcdDict = self.__CollectMacroPcd() + def _EvaluateConditional(self, Expression, Line, Op = None, Value = None): + MacroPcdDict = self._CollectMacroPcd() if Op == 'eval': try: if Value: @@ -933,7 +885,7 @@ class FdfParser: # the precise number of line and return the evaluation result # EdkLogger.warn('Parser', "Suspicious expression: %s" % str(Excpt), - File=self.FileName, ExtraData=self.__CurrentLine(), + File=self.FileName, ExtraData=self._CurrentLine(), Line=Line) return Excpt.result except Exception as Excpt: @@ -955,10 +907,10 @@ class FdfParser: Expression = Expression[2:-1] return Expression in MacroPcdDict - ## __IsToken() method + ## _IsToken() method # # Check whether input string is found from current char position along - # If found, the string value is put into self.__Token + # If found, the string value is put into self._Token # # @param self The object pointer # @param String The string to search @@ -966,26 +918,26 @@ class FdfParser: # @retval True Successfully find string, file buffer pointer moved forward # @retval False Not able to find string, file buffer pointer not changed # - def __IsToken(self, String, IgnoreCase = False): - self.__SkipWhiteSpace() + def _IsToken(self, String, IgnoreCase = False): + self._SkipWhiteSpace() # Only consider the same line, no multi-line token allowed StartPos = self.CurrentOffsetWithinLine index = -1 if IgnoreCase: - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(String.upper()) + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(String.upper()) else: - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(String) + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(String) if index == 0: self.CurrentOffsetWithinLine += len(String) - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] return True return False - ## __IsKeyword() method + ## _IsKeyword() method # # Check whether input keyword is found from current char position along, whole word only! 
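# A minimal standalone sketch (illustrative, not taken from the patch) of the
# whole-word rule that the renamed _IsKeyword() applies: the keyword must sit
# at the current offset and must be followed by whitespace or a separator.
# SEPARATORS below is a stand-in; the real tuple is imported from Common/DataType.
SEPARATORS = ('=', '|', ',', '{', '}')

def is_keyword_at(line, pos, keyword):
    if not line.startswith(keyword, pos):
        return False
    end = pos + len(keyword)
    # accept only if the keyword is not the prefix of a longer word
    return end >= len(line) or line[end].isspace() or line[end] in SEPARATORS

# is_keyword_at("BlockSize = 0x10000", 0, "BlockSize")  -> True
# is_keyword_at("BlockSizeList = []", 0, "BlockSize")   -> False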
- # If found, the string value is put into self.__Token + # If found, the string value is put into self._Token # # @param self The object pointer # @param Keyword The string to search @@ -993,117 +945,117 @@ class FdfParser: # @retval True Successfully find string, file buffer pointer moved forward # @retval False Not able to find string, file buffer pointer not changed # - def __IsKeyword(self, KeyWord, IgnoreCase = False): - self.__SkipWhiteSpace() + def _IsKeyword(self, KeyWord, IgnoreCase = False): + self._SkipWhiteSpace() # Only consider the same line, no multi-line token allowed StartPos = self.CurrentOffsetWithinLine index = -1 if IgnoreCase: - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(KeyWord.upper()) + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(KeyWord.upper()) else: - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(KeyWord) + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(KeyWord) if index == 0: - followingChar = self.__CurrentLine()[self.CurrentOffsetWithinLine + len(KeyWord)] - if not str(followingChar).isspace() and followingChar not in SEPERATOR_TUPLE: + followingChar = self._CurrentLine()[self.CurrentOffsetWithinLine + len(KeyWord)] + if not str(followingChar).isspace() and followingChar not in SEPARATORS: return False self.CurrentOffsetWithinLine += len(KeyWord) - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] return True return False - def __GetExpression(self): + def _GetExpression(self): Line = self.Profile.FileLinesList[self.CurrentLineNumber - 1] Index = len(Line) - 1 - while Line[Index] in ['\r', '\n']: + while Line[Index] in [T_CHAR_CR, TAB_LINE_BREAK]: Index -= 1 ExpressionString = self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:Index+1] self.CurrentOffsetWithinLine += len(ExpressionString) ExpressionString = ExpressionString.strip() return ExpressionString - ## __GetNextWord() method + ## _GetNextWord() method # # Get next C name from file lines - # If found, the string value is put into self.__Token + # If found, the string value is put into self._Token # # @param self The object pointer # @retval True Successfully find a C name string, file buffer pointer moved forward # @retval False Not able to find a C name string, file buffer pointer not changed # - def __GetNextWord(self): - self.__SkipWhiteSpace() - if self.__EndOfFile(): + def _GetNextWord(self): + self._SkipWhiteSpace() + if self._EndOfFile(): return False - TempChar = self.__CurrentChar() + TempChar = self._CurrentChar() StartPos = self.CurrentOffsetWithinLine if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_': - self.__GetOneChar() - while not self.__EndOfLine(): - TempChar = self.__CurrentChar() + self._GetOneChar() + while not self._EndOfLine(): + TempChar = self._CurrentChar() if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') \ or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-': - self.__GetOneChar() + self._GetOneChar() else: break - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] return True return False - def __GetNextPcdWord(self): - self.__SkipWhiteSpace() - if self.__EndOfFile(): + def _GetNextPcdWord(self): + self._SkipWhiteSpace() + if 
self._EndOfFile(): return False - TempChar = self.__CurrentChar() + TempChar = self._CurrentChar() StartPos = self.CurrentOffsetWithinLine - if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_' or TempChar == '[' or TempChar == ']': - self.__GetOneChar() - while not self.__EndOfLine(): - TempChar = self.__CurrentChar() + if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_' or TempChar == TAB_SECTION_START or TempChar == TAB_SECTION_END: + self._GetOneChar() + while not self._EndOfLine(): + TempChar = self._CurrentChar() if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') \ - or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-' or TempChar == '[' or TempChar == ']': - self.__GetOneChar() + or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-' or TempChar == TAB_SECTION_START or TempChar == TAB_SECTION_END: + self._GetOneChar() else: break - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] return True return False - ## __GetNextToken() method + ## _GetNextToken() method # # Get next token unit before a seperator - # If found, the string value is put into self.__Token + # If found, the string value is put into self._Token # # @param self The object pointer # @retval True Successfully find a token unit, file buffer pointer moved forward # @retval False Not able to find a token unit, file buffer pointer not changed # - def __GetNextToken(self): + def _GetNextToken(self): # Skip leading spaces, if exist. - self.__SkipWhiteSpace() - if self.__EndOfFile(): + self._SkipWhiteSpace() + if self._EndOfFile(): return False # Record the token start position, the position of the first non-space char. StartPos = self.CurrentOffsetWithinLine StartLine = self.CurrentLineNumber while StartLine == self.CurrentLineNumber: - TempChar = self.__CurrentChar() + TempChar = self._CurrentChar() # Try to find the end char that is not a space and not in seperator tuple. # That is, when we got a space or any char in the tuple, we got the end of token. - if not str(TempChar).isspace() and TempChar not in SEPERATOR_TUPLE: - self.__GetOneChar() + if not str(TempChar).isspace() and TempChar not in SEPARATORS: + self._GetOneChar() # if we happen to meet a seperator as the first char, we must proceed to get it. # That is, we get a token that is a seperator char. nomally it is the boundary of other tokens. 
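# Rough standalone approximation (illustrative only) of what the renamed
# _GetNextToken() does: skip leading whitespace, then either consume a run of
# non-separator characters or, when the first character itself is a separator,
# return that single character as its own token. SEPARATORS is assumed here;
# the parser imports the real tuple from Common/DataType.
SEPARATORS = ('=', '|', ',', '{', '}')

def next_token(line, pos):
    while pos < len(line) and line[pos].isspace():
        pos += 1
    if pos >= len(line):
        return None, pos
    if line[pos] in SEPARATORS:
        return line[pos], pos + 1            # a separator is a one-char token
    start = pos
    while pos < len(line) and not line[pos].isspace() and line[pos] not in SEPARATORS:
        pos += 1
    return line[start:pos], pos

# next_token("BaseAddress = 0x0", 0)  -> ('BaseAddress', 11)
# next_token("BaseAddress = 0x0", 11) -> ('=', 13)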
- elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPERATOR_TUPLE: - self.__GetOneChar() + elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPARATORS: + self._GetOneChar() break else: break @@ -1113,35 +1065,34 @@ class FdfParser: EndPos = self.CurrentOffsetWithinLine if self.CurrentLineNumber != StartLine: EndPos = len(self.Profile.FileLinesList[StartLine-1]) - self.__Token = self.Profile.FileLinesList[StartLine-1][StartPos : EndPos] - if self.__Token.lower() in [TAB_IF, TAB_END_IF, TAB_ELSE_IF, TAB_ELSE, TAB_IF_DEF, TAB_IF_N_DEF, TAB_ERROR, TAB_INCLUDE]: - self.__Token = self.__Token.lower() + self._Token = self.Profile.FileLinesList[StartLine-1][StartPos: EndPos] + if self._Token.lower() in [TAB_IF, TAB_END_IF, TAB_ELSE_IF, TAB_ELSE, TAB_IF_DEF, TAB_IF_N_DEF, TAB_ERROR, TAB_INCLUDE]: + self._Token = self._Token.lower() if StartPos != self.CurrentOffsetWithinLine: return True else: return False - ## __GetNextGuid() method + ## _GetNextGuid() method # # Get next token unit before a seperator - # If found, the GUID string is put into self.__Token + # If found, the GUID string is put into self._Token # # @param self The object pointer # @retval True Successfully find a registry format GUID, file buffer pointer moved forward # @retval False Not able to find a registry format GUID, file buffer pointer not changed # - def __GetNextGuid(self): - - if not self.__GetNextToken(): + def _GetNextGuid(self): + if not self._GetNextToken(): return False - if gGuidPattern.match(self.__Token) is not None: + if gGuidPattern.match(self._Token) is not None: return True else: - self.__UndoToken() + self._UndoToken() return False @staticmethod - def __Verify(Name, Value, Scope): + def _Verify(Name, Value, Scope): # value verification only applies to numeric values. if Scope not in TAB_PCD_NUMERIC_TYPES: return @@ -1157,17 +1108,17 @@ class FdfParser: EdkLogger.error("FdfParser", FORMAT_INVALID, "Too large value for %s." % Name) return True - ## __UndoToken() method + ## _UndoToken() method # # Go back one token unit in file buffer # # @param self The object pointer # - def __UndoToken(self): - self.__UndoOneChar() - while self.__CurrentChar().isspace(): - if not self.__UndoOneChar(): - self.__GetOneChar() + def _UndoToken(self): + self._UndoOneChar() + while self._CurrentChar().isspace(): + if not self._UndoOneChar(): + self._GetOneChar() return @@ -1175,118 +1126,118 @@ class FdfParser: CurrentLine = self.CurrentLineNumber while CurrentLine == self.CurrentLineNumber: - TempChar = self.__CurrentChar() + TempChar = self._CurrentChar() # Try to find the end char that is not a space and not in seperator tuple. # That is, when we got a space or any char in the tuple, we got the end of token. - if not str(TempChar).isspace() and not TempChar in SEPERATOR_TUPLE: - if not self.__UndoOneChar(): + if not str(TempChar).isspace() and not TempChar in SEPARATORS: + if not self._UndoOneChar(): return # if we happen to meet a seperator as the first char, we must proceed to get it. # That is, we get a token that is a seperator char. nomally it is the boundary of other tokens. 
- elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPERATOR_TUPLE: + elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPARATORS: return else: break - self.__GetOneChar() + self._GetOneChar() - ## __GetNextHexNumber() method + ## _GetNextHexNumber() method # # Get next HEX data before a seperator - # If found, the HEX data is put into self.__Token + # If found, the HEX data is put into self._Token # # @param self The object pointer # @retval True Successfully find a HEX data, file buffer pointer moved forward # @retval False Not able to find a HEX data, file buffer pointer not changed # - def __GetNextHexNumber(self): - if not self.__GetNextToken(): + def _GetNextHexNumber(self): + if not self._GetNextToken(): return False - if gHexPatternAll.match(self.__Token): + if gHexPatternAll.match(self._Token): return True else: - self.__UndoToken() + self._UndoToken() return False - ## __GetNextDecimalNumber() method + ## _GetNextDecimalNumber() method # # Get next decimal data before a seperator - # If found, the decimal data is put into self.__Token + # If found, the decimal data is put into self._Token # # @param self The object pointer # @retval True Successfully find a decimal data, file buffer pointer moved forward # @retval False Not able to find a decimal data, file buffer pointer not changed # - def __GetNextDecimalNumber(self): - if not self.__GetNextToken(): + def _GetNextDecimalNumber(self): + if not self._GetNextToken(): return False - if self.__Token.isdigit(): + if self._Token.isdigit(): return True else: - self.__UndoToken() + self._UndoToken() return False - def __GetNextPcdSettings(self): - if not self.__GetNextWord(): + def _GetNextPcdSettings(self): + if not self._GetNextWord(): raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) - pcdTokenSpaceCName = self.__Token + pcdTokenSpaceCName = self._Token - if not self.__IsToken( "."): + if not self._IsToken(TAB_SPLIT): raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) - if not self.__GetNextWord(): + if not self._GetNextWord(): raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) - pcdCName = self.__Token + pcdCName = self._Token Fields = [] - while self.__IsToken("."): - if not self.__GetNextPcdWord(): + while self._IsToken(TAB_SPLIT): + if not self._GetNextPcdWord(): raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) - Fields.append(self.__Token) + Fields.append(self._Token) - return (pcdCName, pcdTokenSpaceCName,".".join(Fields)) + return (pcdCName, pcdTokenSpaceCName,TAB_SPLIT.join(Fields)) - ## __GetStringData() method + ## _GetStringData() method # # Get string contents quoted in "" - # If found, the decimal data is put into self.__Token + # If found, the decimal data is put into self._Token # # @param self The object pointer # @retval True Successfully find a string data, file buffer pointer moved forward # @retval False Not able to find a string data, file buffer pointer not changed # - def __GetStringData(self): - if self.__Token.startswith("\"") or self.__Token.startswith("L\""): - self.__UndoToken() - self.__SkipToToken("\"") + def _GetStringData(self): + if self._Token.startswith(T_CHAR_DOUBLE_QUOTE) or self._Token.startswith("L\""): + self._UndoToken() + self._SkipToToken(T_CHAR_DOUBLE_QUOTE) currentLineNumber = self.CurrentLineNumber - if not self.__SkipToToken("\""): + if 
not self._SkipToToken(T_CHAR_DOUBLE_QUOTE): raise Warning("Missing Quote \" for String", self.FileName, self.CurrentLineNumber) if currentLineNumber != self.CurrentLineNumber: raise Warning("Missing Quote \" for String", self.FileName, self.CurrentLineNumber) - self.__Token = self.__SkippedChars.rstrip('\"') + self._Token = self._SkippedChars.rstrip(T_CHAR_DOUBLE_QUOTE) return True - elif self.__Token.startswith("\'") or self.__Token.startswith("L\'"): - self.__UndoToken() - self.__SkipToToken("\'") + elif self._Token.startswith(T_CHAR_SINGLE_QUOTE) or self._Token.startswith("L\'"): + self._UndoToken() + self._SkipToToken(T_CHAR_SINGLE_QUOTE) currentLineNumber = self.CurrentLineNumber - if not self.__SkipToToken("\'"): + if not self._SkipToToken(T_CHAR_SINGLE_QUOTE): raise Warning("Missing Quote \' for String", self.FileName, self.CurrentLineNumber) if currentLineNumber != self.CurrentLineNumber: raise Warning("Missing Quote \' for String", self.FileName, self.CurrentLineNumber) - self.__Token = self.__SkippedChars.rstrip('\'') + self._Token = self._SkippedChars.rstrip(T_CHAR_SINGLE_QUOTE) return True else: return False - ## __SkipToToken() method + ## _SkipToToken() method # # Search forward in file buffer for the string - # The skipped chars are put into self.__SkippedChars + # The skipped chars are put into self._SkippedChars # # @param self The object pointer # @param String The string to search @@ -1294,25 +1245,25 @@ class FdfParser: # @retval True Successfully find the string, file buffer pointer moved forward # @retval False Not able to find the string, file buffer pointer not changed # - def __SkipToToken(self, String, IgnoreCase = False): + def _SkipToToken(self, String, IgnoreCase = False): StartPos = self.GetFileBufferPos() - self.__SkippedChars = "" - while not self.__EndOfFile(): + self._SkippedChars = "" + while not self._EndOfFile(): index = -1 if IgnoreCase: - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(String.upper()) + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(String.upper()) else: - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(String) + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(String) if index == 0: self.CurrentOffsetWithinLine += len(String) - self.__SkippedChars += String + self._SkippedChars += String return True - self.__SkippedChars += str(self.__CurrentChar()) - self.__GetOneChar() + self._SkippedChars += str(self._CurrentChar()) + self._GetOneChar() - self.SetFileBufferPos( StartPos) - self.__SkippedChars = "" + self.SetFileBufferPos(StartPos) + self._SkippedChars = "" return False ## GetFileBufferPos() method @@ -1343,18 +1294,18 @@ class FdfParser: # @param self The object pointer # def Preprocess(self): - self.__StringToList() + self._StringToList() self.PreprocessFile() self.PreprocessIncludeFile() - self.__StringToList() + self._StringToList() self.PreprocessFile() self.PreprocessConditionalStatement() - self.__StringToList() - for Pos in self.__WipeOffArea: - self.__ReplaceFragment(Pos[0], Pos[1]) + self._StringToList() + for Pos in self._WipeOffArea: + self._ReplaceFragment(Pos[0], Pos[1]) self.Profile.FileLinesList = ["".join(list) for list in self.Profile.FileLinesList] - while self.__GetDefines(): + while self._GetDefines(): pass ## ParseFile() method @@ -1365,19 +1316,18 @@ class FdfParser: # @param self The object pointer # def ParseFile(self): - try: self.Preprocess() - self.__GetError() + self._GetError() # # Keep processing sections of the 
FDF until no new sections or a syntax error is found # - while self.__GetFd() or self.__GetFv() or self.__GetFmp() or self.__GetCapsule() or self.__GetVtf() or self.__GetRule() or self.__GetOptionRom(): + while self._GetFd() or self._GetFv() or self._GetFmp() or self._GetCapsule() or self._GetVtf() or self._GetRule() or self._GetOptionRom(): pass except Warning as X: - self.__UndoToken() - #'\n\tGot Token: \"%s\" from File %s\n' % (self.__Token, FileLineTuple[0]) + \ + self._UndoToken() + #'\n\tGot Token: \"%s\" from File %s\n' % (self._Token, FileLineTuple[0]) + \ # At this point, the closest parent would be the included file itself Profile = GetParentAtLine(X.OriginalLineNumber) if Profile is not None: @@ -1386,7 +1336,7 @@ class FdfParser: else: FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) X.Message += ' near line %d, column %d: %s' \ - % (FileLineTuple[1], self.CurrentOffsetWithinLine + 1, self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :].rstrip('\n').rstrip('\r')) + % (FileLineTuple[1], self.CurrentOffsetWithinLine + 1, self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:].rstrip(TAB_LINE_BREAK).rstrip(T_CHAR_CR)) raise ## SectionParser() method @@ -1403,7 +1353,7 @@ class FdfParser: and not S.startswith("[VTF.") and not S.startswith("[RULE.") and not S.startswith("[OPTIONROM.") and not S.startswith('[FMPPAYLOAD.'): raise Warning("Unknown section or section appear sequence error (The correct sequence should be [DEFINES], [FD.], [FV.], [Capsule.], [VTF.], [Rule.], [OptionRom.], [FMPPAYLOAD.])", self.FileName, self.CurrentLineNumber) - ## __GetDefines() method + ## _GetDefines() method # # Get Defines section contents and store its data into AllMacrosList # @@ -1411,56 +1361,55 @@ class FdfParser: # @retval True Successfully find a Defines # @retval False Not able to find a Defines # - def __GetDefines(self): - - if not self.__GetNextToken(): + def _GetDefines(self): + if not self._GetNextToken(): return False - S = self.__Token.upper() - if S.startswith("[") and not S.startswith("[DEFINES"): + S = self._Token.upper() + if S.startswith(TAB_SECTION_START) and not S.startswith("[DEFINES"): self.SectionParser(S) - self.__UndoToken() + self._UndoToken() return False - self.__UndoToken() - if not self.__IsToken("[DEFINES", True): + self._UndoToken() + if not self._IsToken("[DEFINES", True): FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) raise Warning("expected [DEFINES", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "]"): + if not self._IsToken(TAB_SECTION_END): raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) - while self.__GetNextWord(): + while self._GetNextWord(): # handle the SET statement - if self.__Token == 'SET': - self.__UndoToken() - self.__GetSetStatement(None) + if self._Token == 'SET': + self._UndoToken() + self._GetSetStatement(None) continue - Macro = self.__Token + Macro = self._Token - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if 
not self.__GetNextToken() or self.__Token.startswith('['): + if not self._GetNextToken() or self._Token.startswith(TAB_SECTION_START): raise Warning("expected MACRO value", self.FileName, self.CurrentLineNumber) - Value = self.__Token + Value = self._Token return False - ##__GetError() method - def __GetError(self): + ##_GetError() method + def _GetError(self): #save the Current information CurrentLine = self.CurrentLineNumber CurrentOffset = self.CurrentOffsetWithinLine - while self.__GetNextToken(): - if self.__Token == TAB_ERROR: - EdkLogger.error('FdfParser', ERROR_STATEMENT, self.__CurrentLine().replace(TAB_ERROR, '', 1), File=self.FileName, Line=self.CurrentLineNumber) + while self._GetNextToken(): + if self._Token == TAB_ERROR: + EdkLogger.error('FdfParser', ERROR_STATEMENT, self._CurrentLine().replace(TAB_ERROR, '', 1), File=self.FileName, Line=self.CurrentLineNumber) self.CurrentLineNumber = CurrentLine self.CurrentOffsetWithinLine = CurrentOffset - ## __GetFd() method + ## _GetFd() method # # Get FD section contents and store its data into FD dictionary of self.Profile # @@ -1468,27 +1417,26 @@ class FdfParser: # @retval True Successfully find a FD # @retval False Not able to find a FD # - def __GetFd(self): - - if not self.__GetNextToken(): + def _GetFd(self): + if not self._GetNextToken(): return False - S = self.__Token.upper() - if S.startswith("[") and not S.startswith("[FD."): + S = self._Token.upper() + if S.startswith(TAB_SECTION_START) and not S.startswith("[FD."): if not S.startswith("[FV.") and not S.startswith('[FMPPAYLOAD.') and not S.startswith("[CAPSULE.") \ and not S.startswith("[VTF.") and not S.startswith("[RULE.") and not S.startswith("[OPTIONROM."): raise Warning("Unknown section", self.FileName, self.CurrentLineNumber) - self.__UndoToken() + self._UndoToken() return False - self.__UndoToken() - if not self.__IsToken("[FD.", True): + self._UndoToken() + if not self._IsToken("[FD.", True): FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) raise Warning("expected [FD.]", self.FileName, self.CurrentLineNumber) - FdName = self.__GetUiName() + FdName = self._GetUiName() if FdName == "": if len (self.Profile.FdDict) == 0: FdName = GenFdsGlobalVariable.PlatformName @@ -1502,56 +1450,56 @@ class FdfParser: if self.CurrentFdName in self.Profile.FdDict: raise Warning("Unexpected the same FD name", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "]"): + if not self._IsToken(TAB_SECTION_END): raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) - FdObj = Fd.FD() + FdObj = FD() FdObj.FdUiName = self.CurrentFdName self.Profile.FdDict[self.CurrentFdName] = FdObj if len (self.Profile.FdDict) > 1 and self.Profile.FdNameNotSet: raise Warning("expected all FDs have their name", self.FileName, self.CurrentLineNumber) - Status = self.__GetCreateFile(FdObj) + Status = self._GetCreateFile(FdObj) if not Status: raise Warning("FD name error", self.FileName, self.CurrentLineNumber) - while self.__GetTokenStatements(FdObj): + while self._GetTokenStatements(FdObj): pass for Attr in ("BaseAddress", "Size", "ErasePolarity"): if getattr(FdObj, 
Attr) is None: - self.__GetNextToken() + self._GetNextToken() raise Warning("Keyword %s missing" % Attr, self.FileName, self.CurrentLineNumber) if not FdObj.BlockSizeList: FdObj.BlockSizeList.append((1, FdObj.Size, None)) - self.__GetDefineStatements(FdObj) + self._GetDefineStatements(FdObj) - self.__GetSetStatements(FdObj) + self._GetSetStatements(FdObj) - if not self.__GetRegionLayout(FdObj): + if not self._GetRegionLayout(FdObj): raise Warning("expected region layout", self.FileName, self.CurrentLineNumber) - while self.__GetRegionLayout(FdObj): + while self._GetRegionLayout(FdObj): pass return True - ## __GetUiName() method + ## _GetUiName() method # # Return the UI name of a section # # @param self The object pointer # @retval FdName UI name # - def __GetUiName(self): + def _GetUiName(self): Name = "" - if self.__GetNextWord(): - Name = self.__Token + if self._GetNextWord(): + Name = self._Token return Name - ## __GetCreateFile() method + ## _GetCreateFile() method # # Return the output file name of object # @@ -1559,16 +1507,15 @@ class FdfParser: # @param Obj object whose data will be stored in file # @retval FdName UI name # - def __GetCreateFile(self, Obj): - - if self.__IsKeyword( "CREATE_FILE"): - if not self.__IsToken( "="): + def _GetCreateFile(self, Obj): + if self._IsKeyword("CREATE_FILE"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected file name", self.FileName, self.CurrentLineNumber) - FileName = self.__Token + FileName = self._Token Obj.CreateFileName = FileName return True @@ -1576,25 +1523,25 @@ class FdfParser: def SetPcdLocalation(self,pcdpair): self.Profile.PcdLocalDict[pcdpair] = (self.Profile.FileName,self.CurrentLineNumber) - ## __GetTokenStatements() method + ## _GetTokenStatements() method # # Get token statements # # @param self The object pointer # @param Obj for whom token statement is got # - def __GetTokenStatements(self, Obj): - if self.__IsKeyword( "BaseAddress"): - if not self.__IsToken( "="): + def _GetTokenStatements(self, Obj): + if self._IsKeyword("BaseAddress"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex base address", self.FileName, self.CurrentLineNumber) - Obj.BaseAddress = self.__Token + Obj.BaseAddress = self._Token - if self.__IsToken( "|"): - pcdPair = self.__GetNextPcdSettings() + if self._IsToken(TAB_VALUE_SPLIT): + pcdPair = self._GetNextPcdSettings() Obj.BaseAddressPcd = pcdPair self.Profile.PcdDict[pcdPair] = Obj.BaseAddress self.SetPcdLocalation(pcdPair) @@ -1602,16 +1549,16 @@ class FdfParser: self.Profile.PcdFileLineDict[pcdPair] = FileLineTuple return True - if self.__IsKeyword( "Size"): - if not self.__IsToken( "="): + if self._IsKeyword("Size"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex size", self.FileName, self.CurrentLineNumber) - Size = self.__Token - if self.__IsToken( "|"): - pcdPair = self.__GetNextPcdSettings() + Size = self._Token + if self._IsToken(TAB_VALUE_SPLIT): + pcdPair = self._GetNextPcdSettings() Obj.SizePcd = pcdPair self.Profile.PcdDict[pcdPair] = Size self.SetPcdLocalation(pcdPair) @@ -1620,22 +1567,22 @@ class FdfParser: Obj.Size = 
long(Size, 0) return True - if self.__IsKeyword( "ErasePolarity"): - if not self.__IsToken( "="): + if self._IsKeyword("ErasePolarity"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Erase Polarity", self.FileName, self.CurrentLineNumber) - if self.__Token != "1" and self.__Token != "0": + if self._Token != "1" and self._Token != "0": raise Warning("expected 1 or 0 Erase Polarity", self.FileName, self.CurrentLineNumber) - Obj.ErasePolarity = self.__Token + Obj.ErasePolarity = self._Token return True - return self.__GetBlockStatements(Obj) + return self._GetBlockStatements(Obj) - ## __GetAddressStatements() method + ## _GetAddressStatements() method # # Get address statements # @@ -1644,38 +1591,37 @@ class FdfParser: # @retval True Successfully find # @retval False Not able to find # - def __GetAddressStatements(self, Obj): - - if self.__IsKeyword("BsBaseAddress"): - if not self.__IsToken( "="): + def _GetAddressStatements(self, Obj): + if self._IsKeyword("BsBaseAddress"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): + if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): raise Warning("expected address", self.FileName, self.CurrentLineNumber) - BsAddress = long(self.__Token, 0) + BsAddress = long(self._Token, 0) Obj.BsBaseAddress = BsAddress - if self.__IsKeyword("RtBaseAddress"): - if not self.__IsToken( "="): + if self._IsKeyword("RtBaseAddress"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): + if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): raise Warning("expected address", self.FileName, self.CurrentLineNumber) - RtAddress = long(self.__Token, 0) + RtAddress = long(self._Token, 0) Obj.RtBaseAddress = RtAddress - ## __GetBlockStatements() method + ## _GetBlockStatements() method # # Get block statements # # @param self The object pointer # @param Obj for whom block statement is got # - def __GetBlockStatements(self, Obj): + def _GetBlockStatements(self, Obj): IsBlock = False - while self.__GetBlockStatement(Obj): + while self._GetBlockStatement(Obj): IsBlock = True Item = Obj.BlockSizeList[-1] @@ -1683,7 +1629,7 @@ class FdfParser: raise Warning("expected block statement", self.FileName, self.CurrentLineNumber) return IsBlock - ## __GetBlockStatement() method + ## _GetBlockStatement() method # # Get block statement # @@ -1692,20 +1638,20 @@ class FdfParser: # @retval True Successfully find # @retval False Not able to find # - def __GetBlockStatement(self, Obj): - if not self.__IsKeyword( "BlockSize"): + def _GetBlockStatement(self, Obj): + if not self._IsKeyword("BlockSize"): return False - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber() and not self.__GetNextDecimalNumber(): + if not self._GetNextHexNumber() and not self._GetNextDecimalNumber(): raise Warning("expected Hex or Integer block size", self.FileName, self.CurrentLineNumber) - BlockSize = self.__Token + BlockSize = self._Token BlockSizePcd = None - if self.__IsToken( "|"): - PcdPair = self.__GetNextPcdSettings() + if 
self._IsToken(TAB_VALUE_SPLIT): + PcdPair = self._GetNextPcdSettings() BlockSizePcd = PcdPair self.Profile.PcdDict[PcdPair] = BlockSize self.SetPcdLocalation(PcdPair) @@ -1714,19 +1660,19 @@ class FdfParser: BlockSize = long(BlockSize, 0) BlockNumber = None - if self.__IsKeyword( "NumBlocks"): - if not self.__IsToken( "="): + if self._IsKeyword("NumBlocks"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): + if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): raise Warning("expected block numbers", self.FileName, self.CurrentLineNumber) - BlockNumber = long(self.__Token, 0) + BlockNumber = long(self._Token, 0) Obj.BlockSizeList.append((BlockSize, BlockNumber, BlockSizePcd)) return True - ## __GetDefineStatements() method + ## _GetDefineStatements() method # # Get define statements # @@ -1735,11 +1681,11 @@ class FdfParser: # @retval True Successfully find # @retval False Not able to find # - def __GetDefineStatements(self, Obj): - while self.__GetDefineStatement( Obj): + def _GetDefineStatements(self, Obj): + while self._GetDefineStatement(Obj): pass - ## __GetDefineStatement() method + ## _GetDefineStatement() method # # Get define statement # @@ -1748,24 +1694,24 @@ class FdfParser: # @retval True Successfully find # @retval False Not able to find # - def __GetDefineStatement(self, Obj): - if self.__IsKeyword("DEFINE"): - self.__GetNextToken() - Macro = self.__Token - if not self.__IsToken( "="): + def _GetDefineStatement(self, Obj): + if self._IsKeyword(TAB_DEFINE): + self._GetNextToken() + Macro = self._Token + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected value", self.FileName, self.CurrentLineNumber) - Value = self.__Token + Value = self._Token Macro = '$(' + Macro + ')' Obj.DefineVarDict[Macro] = Value return True return False - ## __GetSetStatements() method + ## _GetSetStatements() method # # Get set statements # @@ -1774,11 +1720,11 @@ class FdfParser: # @retval True Successfully find # @retval False Not able to find # - def __GetSetStatements(self, Obj): - while self.__GetSetStatement(Obj): + def _GetSetStatements(self, Obj): + while self._GetSetStatement(Obj): pass - ## __GetSetStatement() method + ## _GetSetStatement() method # # Get set statement # @@ -1787,15 +1733,15 @@ class FdfParser: # @retval True Successfully find # @retval False Not able to find # - def __GetSetStatement(self, Obj): - if self.__IsKeyword("SET"): - PcdPair = self.__GetNextPcdSettings() + def _GetSetStatement(self, Obj): + if self._IsKeyword("SET"): + PcdPair = self._GetNextPcdSettings() - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - Value = self.__GetExpression() - Value = self.__EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) + Value = self._GetExpression() + Value = self._EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) if Obj: Obj.SetVarDict[PcdPair] = Value @@ -1807,19 +1753,19 @@ class FdfParser: return False - ## __CalcRegionExpr(self) + ## _CalcRegionExpr(self) # # Calculate expression for offset or size of a region # # @return: None if invalid expression # Calculated number if successfully # - def __CalcRegionExpr(self): + def _CalcRegionExpr(self): StartPos = 
self.GetFileBufferPos() Expr = '' PairCount = 0 - while not self.__EndOfFile(): - CurCh = self.__CurrentChar() + while not self._EndOfFile(): + CurCh = self._CurrentChar() if CurCh == '(': PairCount += 1 elif CurCh == ')': @@ -1828,106 +1774,106 @@ class FdfParser: if CurCh in '|\r\n' and PairCount == 0: break Expr += CurCh - self.__GetOneChar() + self._GetOneChar() try: return long( ValueExpression(Expr, - self.__CollectMacroPcd() + self._CollectMacroPcd() )(True), 0) except Exception: self.SetFileBufferPos(StartPos) return None - ## __GetRegionLayout() method + ## _GetRegionLayout() method # # Get region layout for FD # # @param self The object pointer - # @param Fd for whom region is got + # @param theFd for whom region is got # @retval True Successfully find # @retval False Not able to find # - def __GetRegionLayout(self, Fd): - Offset = self.__CalcRegionExpr() + def _GetRegionLayout(self, theFd): + Offset = self._CalcRegionExpr() if Offset is None: return False - RegionObj = Region.Region() + RegionObj = Region() RegionObj.Offset = Offset - Fd.RegionList.append(RegionObj) + theFd.RegionList.append(RegionObj) - if not self.__IsToken( "|"): + if not self._IsToken(TAB_VALUE_SPLIT): raise Warning("expected '|'", self.FileName, self.CurrentLineNumber) - Size = self.__CalcRegionExpr() + Size = self._CalcRegionExpr() if Size is None: raise Warning("expected Region Size", self.FileName, self.CurrentLineNumber) RegionObj.Size = Size - if not self.__GetNextWord(): + if not self._GetNextWord(): return True - if not self.__Token in ("SET", BINARY_FILE_TYPE_FV, "FILE", "DATA", "CAPSULE", "INF"): + if not self._Token in ("SET", BINARY_FILE_TYPE_FV, "FILE", "DATA", "CAPSULE", "INF"): # # If next token is a word which is not a valid FV type, it might be part of [PcdOffset[|PcdSize]] # Or it might be next region's offset described by an expression which starts with a PCD. 
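# Illustrative only -- not the patch's RegionSizeGuidPattern/RegionOffsetPcdPattern.
# After "<offset> | <size>", a region entry may, instead of a region type, name the
# PCDs that receive the computed base and size, e.g. (PCD names are made up):
#
#   0x00000000|0x00040000
#   gHypotheticalTokenSpaceGuid.PcdFlashAreaBase|gHypotheticalTokenSpaceGuid.PcdFlashAreaSize
#   FV = FVMAIN
#
# A rough stand-in for spotting such a "<TokenSpace>.<PcdName>|<TokenSpace>.<PcdName>" line:
import re

REGION_PCD_LINE = re.compile(r'^\s*\w+\.\w+\s*(\|\s*\w+\.\w+\s*)?$')

def looks_like_region_pcd(line):
    return REGION_PCD_LINE.match(line) is not None

# looks_like_region_pcd("gGuid.PcdBase|gGuid.PcdSize") -> True
# looks_like_region_pcd("FV = FVMAIN")                 -> False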
# PcdOffset[|PcdSize] or OffsetPcdExpression|Size # - self.__UndoToken() - IsRegionPcd = (RegionSizeGuidPattern.match(self.__CurrentLine()[self.CurrentOffsetWithinLine:]) or - RegionOffsetPcdPattern.match(self.__CurrentLine()[self.CurrentOffsetWithinLine:])) + self._UndoToken() + IsRegionPcd = (RegionSizeGuidPattern.match(self._CurrentLine()[self.CurrentOffsetWithinLine:]) or + RegionOffsetPcdPattern.match(self._CurrentLine()[self.CurrentOffsetWithinLine:])) if IsRegionPcd: - RegionObj.PcdOffset = self.__GetNextPcdSettings() - self.Profile.PcdDict[RegionObj.PcdOffset] = "0x%08X" % (RegionObj.Offset + long(Fd.BaseAddress, 0)) + RegionObj.PcdOffset = self._GetNextPcdSettings() + self.Profile.PcdDict[RegionObj.PcdOffset] = "0x%08X" % (RegionObj.Offset + long(theFd.BaseAddress, 0)) self.SetPcdLocalation(RegionObj.PcdOffset) - self.__PcdDict['%s.%s' % (RegionObj.PcdOffset[1], RegionObj.PcdOffset[0])] = "0x%x" % RegionObj.Offset + self._PcdDict['%s.%s' % (RegionObj.PcdOffset[1], RegionObj.PcdOffset[0])] = "0x%x" % RegionObj.Offset FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) self.Profile.PcdFileLineDict[RegionObj.PcdOffset] = FileLineTuple - if self.__IsToken( "|"): - RegionObj.PcdSize = self.__GetNextPcdSettings() + if self._IsToken(TAB_VALUE_SPLIT): + RegionObj.PcdSize = self._GetNextPcdSettings() self.Profile.PcdDict[RegionObj.PcdSize] = "0x%08X" % RegionObj.Size self.SetPcdLocalation(RegionObj.PcdSize) - self.__PcdDict['%s.%s' % (RegionObj.PcdSize[1], RegionObj.PcdSize[0])] = "0x%x" % RegionObj.Size + self._PcdDict['%s.%s' % (RegionObj.PcdSize[1], RegionObj.PcdSize[0])] = "0x%x" % RegionObj.Size FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) self.Profile.PcdFileLineDict[RegionObj.PcdSize] = FileLineTuple - if not self.__GetNextWord(): + if not self._GetNextWord(): return True - if self.__Token == "SET": - self.__UndoToken() - self.__GetSetStatements( RegionObj) - if not self.__GetNextWord(): + if self._Token == "SET": + self._UndoToken() + self._GetSetStatements(RegionObj) + if not self._GetNextWord(): return True - elif self.__Token == BINARY_FILE_TYPE_FV: - self.__UndoToken() - self.__GetRegionFvType( RegionObj) + elif self._Token == BINARY_FILE_TYPE_FV: + self._UndoToken() + self._GetRegionFvType(RegionObj) - elif self.__Token == "CAPSULE": - self.__UndoToken() - self.__GetRegionCapType( RegionObj) + elif self._Token == "CAPSULE": + self._UndoToken() + self._GetRegionCapType(RegionObj) - elif self.__Token == "FILE": - self.__UndoToken() - self.__GetRegionFileType(RegionObj) + elif self._Token == "FILE": + self._UndoToken() + self._GetRegionFileType(RegionObj) - elif self.__Token == "INF": - self.__UndoToken() + elif self._Token == "INF": + self._UndoToken() RegionObj.RegionType = "INF" - while self.__IsKeyword("INF"): - self.__UndoToken() - ffsInf = self.__ParseInfStatement() + while self._IsKeyword("INF"): + self._UndoToken() + ffsInf = self._ParseInfStatement() if not ffsInf: break RegionObj.RegionDataList.append(ffsInf) - elif self.__Token == "DATA": - self.__UndoToken() - self.__GetRegionDataType(RegionObj) + elif self._Token == "DATA": + self._UndoToken() + self._GetRegionDataType(RegionObj) else: - self.__UndoToken() - if self.__GetRegionLayout(Fd): + self._UndoToken() + if self._GetRegionLayout(theFd): return True raise Warning("A valid region type was not found. " "Valid types are [SET, FV, CAPSULE, FILE, DATA, INF]. 
This error occurred", @@ -1935,189 +1881,185 @@ class FdfParser: return True - ## __GetRegionFvType() method + ## _GetRegionFvType() method # # Get region fv data for region # # @param self The object pointer # @param RegionObj for whom region data is got # - def __GetRegionFvType(self, RegionObj): - - if not self.__IsKeyword( BINARY_FILE_TYPE_FV): + def _GetRegionFvType(self, RegionObj): + if not self._IsKeyword(BINARY_FILE_TYPE_FV): raise Warning("expected Keyword BINARY_FILE_TYPE_FV", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) RegionObj.RegionType = BINARY_FILE_TYPE_FV - RegionObj.RegionDataList.append((self.__Token).upper()) + RegionObj.RegionDataList.append((self._Token).upper()) - while self.__IsKeyword( BINARY_FILE_TYPE_FV): + while self._IsKeyword(BINARY_FILE_TYPE_FV): - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) - RegionObj.RegionDataList.append((self.__Token).upper()) + RegionObj.RegionDataList.append((self._Token).upper()) - ## __GetRegionCapType() method + ## _GetRegionCapType() method # # Get region capsule data for region # # @param self The object pointer # @param RegionObj for whom region data is got # - def __GetRegionCapType(self, RegionObj): - - if not self.__IsKeyword("CAPSULE"): + def _GetRegionCapType(self, RegionObj): + if not self._IsKeyword("CAPSULE"): raise Warning("expected Keyword 'CAPSULE'", self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected CAPSULE name", self.FileName, self.CurrentLineNumber) RegionObj.RegionType = "CAPSULE" - RegionObj.RegionDataList.append(self.__Token) + RegionObj.RegionDataList.append(self._Token) - while self.__IsKeyword("CAPSULE"): + while self._IsKeyword("CAPSULE"): - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected CAPSULE name", self.FileName, self.CurrentLineNumber) - RegionObj.RegionDataList.append(self.__Token) + RegionObj.RegionDataList.append(self._Token) - ## __GetRegionFileType() method + ## _GetRegionFileType() method # # Get region file data for region # # @param self The object pointer # @param RegionObj for whom region data is got # - def __GetRegionFileType(self, RegionObj): - - if not self.__IsKeyword( "FILE"): + def _GetRegionFileType(self, RegionObj): + if not self._IsKeyword("FILE"): raise Warning("expected Keyword 'FILE'", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected File name", self.FileName, self.CurrentLineNumber) RegionObj.RegionType = "FILE" - RegionObj.RegionDataList.append( self.__Token) + 
RegionObj.RegionDataList.append(self._Token) - while self.__IsKeyword( "FILE"): + while self._IsKeyword("FILE"): - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FILE name", self.FileName, self.CurrentLineNumber) - RegionObj.RegionDataList.append(self.__Token) + RegionObj.RegionDataList.append(self._Token) - ## __GetRegionDataType() method + ## _GetRegionDataType() method # # Get region array data for region # # @param self The object pointer # @param RegionObj for whom region data is got # - def __GetRegionDataType(self, RegionObj): - - if not self.__IsKeyword( "DATA"): + def _GetRegionDataType(self, RegionObj): + if not self._IsKeyword("DATA"): raise Warning("expected Region Data type", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) - if len(self.__Token) > 18: + if len(self._Token) > 18: raise Warning("Hex string can't be converted to a valid UINT64 value", self.FileName, self.CurrentLineNumber) # convert hex string value to byte hex string array - AllString = self.__Token + AllString = self._Token AllStrLen = len (AllString) DataString = "" while AllStrLen > 4: - DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + "," + DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + TAB_COMMA_SPLIT AllStrLen = AllStrLen - 2 - DataString = DataString + AllString[:AllStrLen] + "," + DataString = DataString + AllString[:AllStrLen] + TAB_COMMA_SPLIT # byte value array - if len (self.__Token) <= 4: - while self.__IsToken(","): - if not self.__GetNextHexNumber(): + if len (self._Token) <= 4: + while self._IsToken(TAB_COMMA_SPLIT): + if not self._GetNextHexNumber(): raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) - if len(self.__Token) > 4: + if len(self._Token) > 4: raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) - DataString += self.__Token - DataString += "," + DataString += self._Token + DataString += TAB_COMMA_SPLIT - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - DataString = DataString.rstrip(",") + DataString = DataString.rstrip(TAB_COMMA_SPLIT) RegionObj.RegionType = "DATA" - RegionObj.RegionDataList.append( DataString) + RegionObj.RegionDataList.append(DataString) - while self.__IsKeyword( "DATA"): + while self._IsKeyword("DATA"): - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) - if len(self.__Token) > 18: + if len(self._Token) > 18: raise Warning("Hex string can't be converted to a valid UINT64 value", self.FileName, self.CurrentLineNumber) # convert hex string value to 
byte hex string array - AllString = self.__Token + AllString = self._Token AllStrLen = len (AllString) DataString = "" while AllStrLen > 4: - DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + "," + DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + TAB_COMMA_SPLIT AllStrLen = AllStrLen - 2 - DataString = DataString + AllString[:AllStrLen] + "," + DataString = DataString + AllString[:AllStrLen] + TAB_COMMA_SPLIT # byte value array - if len (self.__Token) <= 4: - while self.__IsToken(","): - if not self.__GetNextHexNumber(): + if len (self._Token) <= 4: + while self._IsToken(TAB_COMMA_SPLIT): + if not self._GetNextHexNumber(): raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) - if len(self.__Token) > 4: + if len(self._Token) > 4: raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) - DataString += self.__Token - DataString += "," + DataString += self._Token + DataString += TAB_COMMA_SPLIT - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - DataString = DataString.rstrip(",") - RegionObj.RegionDataList.append( DataString) + DataString = DataString.rstrip(TAB_COMMA_SPLIT) + RegionObj.RegionDataList.append(DataString) - ## __GetFv() method + ## _GetFv() method # # Get FV section contents and store its data into FV dictionary of self.Profile # @@ -2125,68 +2067,68 @@ class FdfParser: # @retval True Successfully find a FV # @retval False Not able to find a FV # - def __GetFv(self): - if not self.__GetNextToken(): + def _GetFv(self): + if not self._GetNextToken(): return False - S = self.__Token.upper() - if S.startswith("[") and not S.startswith("[FV."): + S = self._Token.upper() + if S.startswith(TAB_SECTION_START) and not S.startswith("[FV."): self.SectionParser(S) - self.__UndoToken() + self._UndoToken() return False - self.__UndoToken() - if not self.__IsToken("[FV.", True): + self._UndoToken() + if not self._IsToken("[FV.", True): FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) - raise Warning("Unknown Keyword '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) + raise Warning("Unknown Keyword '%s'" % self._Token, self.FileName, self.CurrentLineNumber) - FvName = self.__GetUiName() + FvName = self._GetUiName() self.CurrentFvName = FvName.upper() - if not self.__IsToken( "]"): + if not self._IsToken(TAB_SECTION_END): raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) - FvObj = Fv.FV() + FvObj = FV() FvObj.UiFvName = self.CurrentFvName self.Profile.FvDict[self.CurrentFvName] = FvObj - Status = self.__GetCreateFile(FvObj) + Status = self._GetCreateFile(FvObj) if not Status: raise Warning("FV name error", self.FileName, self.CurrentLineNumber) - self.__GetDefineStatements(FvObj) + self._GetDefineStatements(FvObj) - self.__GetAddressStatements(FvObj) + self._GetAddressStatements(FvObj) FvObj.FvExtEntryTypeValue = [] FvObj.FvExtEntryType = [] FvObj.FvExtEntryData = [] while True: - self.__GetSetStatements(FvObj) + self._GetSetStatements(FvObj) - if not 
(self.__GetBlockStatement(FvObj) or self.__GetFvBaseAddress(FvObj) or - self.__GetFvForceRebase(FvObj) or self.__GetFvAlignment(FvObj) or - self.__GetFvAttributes(FvObj) or self.__GetFvNameGuid(FvObj) or - self.__GetFvExtEntryStatement(FvObj) or self.__GetFvNameString(FvObj)): + if not (self._GetBlockStatement(FvObj) or self._GetFvBaseAddress(FvObj) or + self._GetFvForceRebase(FvObj) or self._GetFvAlignment(FvObj) or + self._GetFvAttributes(FvObj) or self._GetFvNameGuid(FvObj) or + self._GetFvExtEntryStatement(FvObj) or self._GetFvNameString(FvObj)): break if FvObj.FvNameString == 'TRUE' and not FvObj.FvNameGuid: raise Warning("FvNameString found but FvNameGuid was not found", self.FileName, self.CurrentLineNumber) - self.__GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) - self.__GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) + self._GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) + self._GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) while True: - isInf = self.__GetInfStatement(FvObj) - isFile = self.__GetFileStatement(FvObj, MacroDict = FvObj.DefineVarDict.copy()) + isInf = self._GetInfStatement(FvObj) + isFile = self._GetFileStatement(FvObj, MacroDict = FvObj.DefineVarDict.copy()) if not isInf and not isFile: break return True - ## __GetFvAlignment() method + ## _GetFvAlignment() method # # Get alignment for FV # @@ -2195,26 +2137,25 @@ class FdfParser: # @retval True Successfully find a alignment statement # @retval False Not able to find a alignment statement # - def __GetFvAlignment(self, Obj): - - if not self.__IsKeyword( "FvAlignment"): + def _GetFvAlignment(self, Obj): + if not self._IsKeyword("FvAlignment"): return False - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected alignment value", self.FileName, self.CurrentLineNumber) - if self.__Token.upper() not in ("1", "2", "4", "8", "16", "32", "64", "128", "256", "512", \ + if self._Token.upper() not in ("1", "2", "4", "8", "16", "32", "64", "128", "256", "512", \ "1K", "2K", "4K", "8K", "16K", "32K", "64K", "128K", "256K", "512K", \ "1M", "2M", "4M", "8M", "16M", "32M", "64M", "128M", "256M", "512M", \ "1G", "2G"): - raise Warning("Unknown alignment value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - Obj.FvAlignment = self.__Token + raise Warning("Unknown alignment value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + Obj.FvAlignment = self._Token return True - ## __GetFvBaseAddress() method + ## _GetFvBaseAddress() method # # Get BaseAddress for FV # @@ -2223,23 +2164,22 @@ class FdfParser: # @retval True Successfully find a FvBaseAddress statement # @retval False Not able to find a FvBaseAddress statement # - def __GetFvBaseAddress(self, Obj): - - if not self.__IsKeyword("FvBaseAddress"): + def _GetFvBaseAddress(self, Obj): + if not self._IsKeyword("FvBaseAddress"): return False - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FV base address value", self.FileName, self.CurrentLineNumber) - if not BaseAddrValuePattern.match(self.__Token.upper()): - raise Warning("Unknown FV base address value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - Obj.FvBaseAddress = self.__Token + if not 
BaseAddrValuePattern.match(self._Token.upper()): + raise Warning("Unknown FV base address value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + Obj.FvBaseAddress = self._Token return True - ## __GetFvForceRebase() method + ## _GetFvForceRebase() method # # Get FvForceRebase for FV # @@ -2248,23 +2188,22 @@ class FdfParser: # @retval True Successfully find a FvForceRebase statement # @retval False Not able to find a FvForceRebase statement # - def __GetFvForceRebase(self, Obj): - - if not self.__IsKeyword("FvForceRebase"): + def _GetFvForceRebase(self, Obj): + if not self._IsKeyword("FvForceRebase"): return False - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FvForceRebase value", self.FileName, self.CurrentLineNumber) - if self.__Token.upper() not in ["TRUE", "FALSE", "0", "0X0", "0X00", "1", "0X1", "0X01"]: - raise Warning("Unknown FvForceRebase value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + if self._Token.upper() not in ["TRUE", "FALSE", "0", "0X0", "0X00", "1", "0X1", "0X01"]: + raise Warning("Unknown FvForceRebase value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) - if self.__Token.upper() in ["TRUE", "1", "0X1", "0X01"]: + if self._Token.upper() in ["TRUE", "1", "0X1", "0X01"]: Obj.FvForceRebase = True - elif self.__Token.upper() in ["FALSE", "0", "0X0", "0X00"]: + elif self._Token.upper() in ["FALSE", "0", "0X0", "0X00"]: Obj.FvForceRebase = False else: Obj.FvForceRebase = None @@ -2272,7 +2211,7 @@ class FdfParser: return True - ## __GetFvAttributes() method + ## _GetFvAttributes() method # # Get attributes for FV # @@ -2280,31 +2219,31 @@ class FdfParser: # @param Obj for whom attribute is got # @retval None # - def __GetFvAttributes(self, FvObj): + def _GetFvAttributes(self, FvObj): IsWordToken = False - while self.__GetNextWord(): + while self._GetNextWord(): IsWordToken = True - name = self.__Token + name = self._Token if name not in ("ERASE_POLARITY", "MEMORY_MAPPED", \ "STICKY_WRITE", "LOCK_CAP", "LOCK_STATUS", "WRITE_ENABLED_CAP", \ "WRITE_DISABLED_CAP", "WRITE_STATUS", "READ_ENABLED_CAP", \ "READ_DISABLED_CAP", "READ_STATUS", "READ_LOCK_CAP", \ "READ_LOCK_STATUS", "WRITE_LOCK_CAP", "WRITE_LOCK_STATUS", \ "WRITE_POLICY_RELIABLE", "WEAK_ALIGNMENT", "FvUsedSizeEnable"): - self.__UndoToken() + self._UndoToken() return False - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken() or self.__Token.upper() not in ("TRUE", "FALSE", "1", "0"): + if not self._GetNextToken() or self._Token.upper() not in ("TRUE", "FALSE", "1", "0"): raise Warning("expected TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) - FvObj.FvAttributeDict[name] = self.__Token + FvObj.FvAttributeDict[name] = self._Token return IsWordToken - ## __GetFvNameGuid() method + ## _GetFvNameGuid() method # # Get FV GUID for FV # @@ -2312,110 +2251,107 @@ class FdfParser: # @param Obj for whom GUID is got # @retval None # - def __GetFvNameGuid(self, FvObj): - - if not self.__IsKeyword( "FvNameGuid"): + def _GetFvNameGuid(self, FvObj): + if not self._IsKeyword("FvNameGuid"): return False - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextGuid(): + if not 
self._GetNextGuid(): raise Warning("expected FV GUID value", self.FileName, self.CurrentLineNumber) - FvObj.FvNameGuid = self.__Token + FvObj.FvNameGuid = self._Token return True - def __GetFvNameString(self, FvObj): - - if not self.__IsKeyword( "FvNameString"): + def _GetFvNameString(self, FvObj): + if not self._IsKeyword("FvNameString"): return False - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken() or self.__Token not in ('TRUE', 'FALSE'): + if not self._GetNextToken() or self._Token not in ('TRUE', 'FALSE'): raise Warning("expected TRUE or FALSE for FvNameString", self.FileName, self.CurrentLineNumber) - FvObj.FvNameString = self.__Token + FvObj.FvNameString = self._Token return True - def __GetFvExtEntryStatement(self, FvObj): - - if not (self.__IsKeyword( "FV_EXT_ENTRY") or self.__IsKeyword( "FV_EXT_ENTRY_TYPE")): + def _GetFvExtEntryStatement(self, FvObj): + if not (self._IsKeyword("FV_EXT_ENTRY") or self._IsKeyword("FV_EXT_ENTRY_TYPE")): return False - if not self.__IsKeyword ("TYPE"): + if not self._IsKeyword ("TYPE"): raise Warning("expected 'TYPE'", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber() and not self.__GetNextDecimalNumber(): + if not self._GetNextHexNumber() and not self._GetNextDecimalNumber(): raise Warning("expected Hex FV extension entry type value At Line ", self.FileName, self.CurrentLineNumber) - FvObj.FvExtEntryTypeValue.append(self.__Token) + FvObj.FvExtEntryTypeValue.append(self._Token) - if not self.__IsToken( "{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - if not self.__IsKeyword ("FILE") and not self.__IsKeyword ("DATA"): + if not self._IsKeyword ("FILE") and not self._IsKeyword ("DATA"): raise Warning("expected 'FILE' or 'DATA'", self.FileName, self.CurrentLineNumber) - FvObj.FvExtEntryType.append(self.__Token) + FvObj.FvExtEntryType.append(self._Token) - if self.__Token == 'DATA': + if self._Token == 'DATA': - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) - if len(self.__Token) > 4: + if len(self._Token) > 4: raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) - DataString = self.__Token - DataString += "," + DataString = self._Token + DataString += TAB_COMMA_SPLIT - while self.__IsToken(","): - if not self.__GetNextHexNumber(): + while self._IsToken(TAB_COMMA_SPLIT): + if not self._GetNextHexNumber(): raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) - if len(self.__Token) > 4: + if len(self._Token) > 4: raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) - DataString += self.__Token - DataString += "," + DataString += self._Token + DataString += TAB_COMMA_SPLIT - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "}"): + if not self._IsToken("}"): 
raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - DataString = DataString.rstrip(",") + DataString = DataString.rstrip(TAB_COMMA_SPLIT) FvObj.FvExtEntryData.append(DataString) - if self.__Token == 'FILE': + if self._Token == 'FILE': - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FV Extension Entry file path At Line ", self.FileName, self.CurrentLineNumber) - FvObj.FvExtEntryData.append(self.__Token) + FvObj.FvExtEntryData.append(self._Token) - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) return True - ## __GetAprioriSection() method + ## _GetAprioriSection() method # # Get token statements # @@ -2425,56 +2361,55 @@ class FdfParser: # @retval True Successfully find apriori statement # @retval False Not able to find apriori statement # - def __GetAprioriSection(self, FvObj, MacroDict = {}): - - if not self.__IsKeyword( "APRIORI"): + def _GetAprioriSection(self, FvObj, MacroDict = {}): + if not self._IsKeyword("APRIORI"): return False - if not self.__IsKeyword("PEI") and not self.__IsKeyword("DXE"): + if not self._IsKeyword("PEI") and not self._IsKeyword("DXE"): raise Warning("expected Apriori file type", self.FileName, self.CurrentLineNumber) - AprType = self.__Token + AprType = self._Token - if not self.__IsToken( "{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - AprSectionObj = AprioriSection.AprioriSection() + AprSectionObj = AprioriSection() AprSectionObj.AprioriType = AprType - self.__GetDefineStatements(AprSectionObj) + self._GetDefineStatements(AprSectionObj) MacroDict.update(AprSectionObj.DefineVarDict) while True: - IsInf = self.__GetInfStatement(AprSectionObj) - IsFile = self.__GetFileStatement( AprSectionObj) + IsInf = self._GetInfStatement(AprSectionObj) + IsFile = self._GetFileStatement(AprSectionObj) if not IsInf and not IsFile: break - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) FvObj.AprioriSectionList.append(AprSectionObj) return True - def __ParseInfStatement(self): - if not self.__IsKeyword("INF"): + def _ParseInfStatement(self): + if not self._IsKeyword("INF"): return None - ffsInf = FfsInfStatement.FfsInfStatement() - self.__GetInfOptions(ffsInf) + ffsInf = FfsInfStatement() + self._GetInfOptions(ffsInf) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected INF file path", self.FileName, self.CurrentLineNumber) - ffsInf.InfFileName = self.__Token + ffsInf.InfFileName = self._Token if not ffsInf.InfFileName.endswith('.inf'): raise Warning("expected .inf file path", self.FileName, self.CurrentLineNumber) ffsInf.CurrentLineNum = self.CurrentLineNumber - ffsInf.CurrentLineContent = self.__CurrentLine() + ffsInf.CurrentLineContent = self._CurrentLine() #Replace $(SAPCE) with real space ffsInf.InfFileName = ffsInf.InfFileName.replace('$(SPACE)', ' ') - if ffsInf.InfFileName.replace('$(WORKSPACE)', '').find('$') == -1: + if ffsInf.InfFileName.replace(TAB_WORKSPACE, '').find('$') == -1: #do case sensitive check for file path ErrorCode, ErrorInfo = PathClass(NormPath(ffsInf.InfFileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() if ErrorCode != 0: @@ -2492,16 +2427,16 @@ class FdfParser: else: 
self.Profile.InfDict['ArchTBD'].append(ffsInf.InfFileName) - if self.__IsToken('|'): - if self.__IsKeyword('RELOCS_STRIPPED'): + if self._IsToken(TAB_VALUE_SPLIT): + if self._IsKeyword('RELOCS_STRIPPED'): ffsInf.KeepReloc = False - elif self.__IsKeyword('RELOCS_RETAINED'): + elif self._IsKeyword('RELOCS_RETAINED'): ffsInf.KeepReloc = True else: - raise Warning("Unknown reloc strip flag '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + raise Warning("Unknown reloc strip flag '%s'" % self._Token, self.FileName, self.CurrentLineNumber) return ffsInf - ## __GetInfStatement() method + ## _GetInfStatement() method # # Get INF statements # @@ -2510,86 +2445,86 @@ class FdfParser: # @retval True Successfully find inf statement # @retval False Not able to find inf statement # - def __GetInfStatement(self, Obj, ForCapsule=False): - ffsInf = self.__ParseInfStatement() + def _GetInfStatement(self, Obj, ForCapsule=False): + ffsInf = self._ParseInfStatement() if not ffsInf: return False if ForCapsule: - capsuleFfs = CapsuleData.CapsuleFfs() - capsuleFfs.Ffs = ffsInf - Obj.CapsuleDataList.append(capsuleFfs) + myCapsuleFfs = CapsuleFfs() + myCapsuleFfs.Ffs = ffsInf + Obj.CapsuleDataList.append(myCapsuleFfs) else: Obj.FfsList.append(ffsInf) return True - ## __GetInfOptions() method + ## _GetInfOptions() method # # Get options for INF # # @param self The object pointer # @param FfsInfObj for whom option is got # - def __GetInfOptions(self, FfsInfObj): - if self.__IsKeyword("FILE_GUID"): - if not self.__IsToken("="): + def _GetInfOptions(self, FfsInfObj): + if self._IsKeyword("FILE_GUID"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextGuid(): + if not self._GetNextGuid(): raise Warning("expected GUID value", self.FileName, self.CurrentLineNumber) - FfsInfObj.OverrideGuid = self.__Token + FfsInfObj.OverrideGuid = self._Token - if self.__IsKeyword( "RuleOverride"): - if not self.__IsToken( "="): + if self._IsKeyword("RuleOverride"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Rule name", self.FileName, self.CurrentLineNumber) - FfsInfObj.Rule = self.__Token + FfsInfObj.Rule = self._Token - if self.__IsKeyword( "VERSION"): - if not self.__IsToken( "="): + if self._IsKeyword("VERSION"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Version", self.FileName, self.CurrentLineNumber) - if self.__GetStringData(): - FfsInfObj.Version = self.__Token + if self._GetStringData(): + FfsInfObj.Version = self._Token - if self.__IsKeyword( BINARY_FILE_TYPE_UI): - if not self.__IsToken( "="): + if self._IsKeyword(BINARY_FILE_TYPE_UI): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected UI name", self.FileName, self.CurrentLineNumber) - if self.__GetStringData(): - FfsInfObj.Ui = self.__Token + if self._GetStringData(): + FfsInfObj.Ui = self._Token - if self.__IsKeyword( "USE"): - if not self.__IsToken( "="): + if self._IsKeyword("USE"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not 
self._GetNextToken(): raise Warning("expected ARCH name", self.FileName, self.CurrentLineNumber) - FfsInfObj.UseArch = self.__Token + FfsInfObj.UseArch = self._Token - if self.__GetNextToken(): - p = re.compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\))') - if p.match(self.__Token) and p.match(self.__Token).span()[1] == len(self.__Token): - FfsInfObj.KeyStringList.append(self.__Token) - if not self.__IsToken(","): + if self._GetNextToken(): + p = compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\))') + if p.match(self._Token) and p.match(self._Token).span()[1] == len(self._Token): + FfsInfObj.KeyStringList.append(self._Token) + if not self._IsToken(TAB_COMMA_SPLIT): return else: - self.__UndoToken() + self._UndoToken() return - while self.__GetNextToken(): - if not p.match(self.__Token): + while self._GetNextToken(): + if not p.match(self._Token): raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) - FfsInfObj.KeyStringList.append(self.__Token) + FfsInfObj.KeyStringList.append(self._Token) - if not self.__IsToken(","): + if not self._IsToken(TAB_COMMA_SPLIT): break - ## __GetFileStatement() method + ## _GetFileStatement() method # # Get FILE statements # @@ -2599,42 +2534,41 @@ class FdfParser: # @retval True Successfully find FILE statement # @retval False Not able to find FILE statement # - def __GetFileStatement(self, Obj, ForCapsule = False, MacroDict = {}): - - if not self.__IsKeyword( "FILE"): + def _GetFileStatement(self, Obj, ForCapsule = False, MacroDict = {}): + if not self._IsKeyword("FILE"): return False - if not self.__GetNextWord(): + if not self._GetNextWord(): raise Warning("expected FFS type", self.FileName, self.CurrentLineNumber) - if ForCapsule and self.__Token == 'DATA': - self.__UndoToken() - self.__UndoToken() + if ForCapsule and self._Token == 'DATA': + self._UndoToken() + self._UndoToken() return False - FfsFileObj = FfsFileStatement.FileStatement() - FfsFileObj.FvFileType = self.__Token + FfsFileObj = FileStatement() + FfsFileObj.FvFileType = self._Token - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextGuid(): - if not self.__GetNextWord(): + if not self._GetNextGuid(): + if not self._GetNextWord(): raise Warning("expected File GUID", self.FileName, self.CurrentLineNumber) - if self.__Token == 'PCD': - if not self.__IsToken( "("): + if self._Token == 'PCD': + if not self._IsToken("("): raise Warning("expected '('", self.FileName, self.CurrentLineNumber) - PcdPair = self.__GetNextPcdSettings() - if not self.__IsToken( ")"): + PcdPair = self._GetNextPcdSettings() + if not self._IsToken(")"): raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' - FfsFileObj.NameGuid = self.__Token + FfsFileObj.NameGuid = self._Token - self.__GetFilePart( FfsFileObj, MacroDict.copy()) + self._GetFilePart(FfsFileObj, MacroDict.copy()) if ForCapsule: - capsuleFfs = CapsuleData.CapsuleFfs() + capsuleFfs = CapsuleFfs() capsuleFfs.Ffs = FfsFileObj Obj.CapsuleDataList.append(capsuleFfs) else: @@ -2642,7 +2576,7 @@ class FdfParser: return True - ## __FileCouldHaveRelocFlag() method + ## _FileCouldHaveRelocFlag() method # # Check whether reloc strip flag can be set for a file type. 
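As a reference for the KeyString handling above: the Target_Tag_Arch pattern only accepts complete three-part tokens, because the parser pairs match() with a span-length check (the patch also switches re.compile to the bare compile(), presumably imported from re elsewhere in the file). A minimal standalone sketch using the same regular expression; the KEY_STRING and IsKeyString names are illustrative, not from the patch:

    import re

    # Same expression as in _GetInfOptions(); only the name KEY_STRING is new.
    KEY_STRING = re.compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_'
                            r'([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_'
                            r'([a-zA-Z0-9\-]+|\$\(ARCH\))')

    def IsKeyString(Token):
        # The parser pairs match() with a span check so that tokens with
        # trailing characters are rejected, not just prefix-matched.
        Match = KEY_STRING.match(Token)
        return bool(Match) and Match.span()[1] == len(Token)

    assert IsKeyString("DEBUG_GCC5_X64")
    assert IsKeyString("*_*_$(ARCH)")
    assert not IsKeyString("DEBUG_GCC5")     # only two parts, rejected
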
# @@ -2651,13 +2585,13 @@ class FdfParser: # @retval False No way to have it # @staticmethod - def __FileCouldHaveRelocFlag (FileType): + def _FileCouldHaveRelocFlag (FileType): if FileType in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, 'PEI_DXE_COMBO'): return True else: return False - ## __SectionCouldHaveRelocFlag() method + ## _SectionCouldHaveRelocFlag() method # # Check whether reloc strip flag can be set for a section type. # @@ -2666,13 +2600,13 @@ class FdfParser: # @retval False No way to have it # @staticmethod - def __SectionCouldHaveRelocFlag (SectionType): + def _SectionCouldHaveRelocFlag (SectionType): if SectionType in (BINARY_FILE_TYPE_TE, BINARY_FILE_TYPE_PE32): return True else: return False - ## __GetFilePart() method + ## _GetFilePart() method # # Get components for FILE statement # @@ -2680,58 +2614,57 @@ class FdfParser: # @param FfsFileObj for whom component is got # @param MacroDict dictionary used to replace macro # - def __GetFilePart(self, FfsFileObj, MacroDict = {}): + def _GetFilePart(self, FfsFileObj, MacroDict = {}): + self._GetFileOpts(FfsFileObj) - self.__GetFileOpts( FfsFileObj) - - if not self.__IsToken("{"): - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): - if self.__FileCouldHaveRelocFlag(FfsFileObj.FvFileType): - if self.__Token == 'RELOCS_STRIPPED': + if not self._IsToken("{"): + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): + if self._FileCouldHaveRelocFlag(FfsFileObj.FvFileType): + if self._Token == 'RELOCS_STRIPPED': FfsFileObj.KeepReloc = False else: FfsFileObj.KeepReloc = True else: raise Warning("File type %s could not have reloc strip flag%d" % (FfsFileObj.FvFileType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) - if not self.__IsToken("{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected File name or section data", self.FileName, self.CurrentLineNumber) - if self.__Token == BINARY_FILE_TYPE_FV: - if not self.__IsToken( "="): + if self._Token == BINARY_FILE_TYPE_FV: + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) - FfsFileObj.FvName = self.__Token + FfsFileObj.FvName = self._Token - elif self.__Token == "FD": - if not self.__IsToken( "="): + elif self._Token == "FD": + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FD name", self.FileName, self.CurrentLineNumber) - FfsFileObj.FdName = self.__Token + FfsFileObj.FdName = self._Token - elif self.__Token in ("DEFINE", "APRIORI", "SECTION"): - self.__UndoToken() - self.__GetSectionData( FfsFileObj, MacroDict) + elif self._Token in (TAB_DEFINE, "APRIORI", "SECTION"): + self._UndoToken() + self._GetSectionData(FfsFileObj, MacroDict) elif hasattr(FfsFileObj, 'FvFileType') and FfsFileObj.FvFileType == 'RAW': - self.__UndoToken() - self.__GetRAWData(FfsFileObj, MacroDict) + self._UndoToken() + self._GetRAWData(FfsFileObj, MacroDict) else: FfsFileObj.CurrentLineNum = self.CurrentLineNumber - FfsFileObj.CurrentLineContent = self.__CurrentLine() - FfsFileObj.FileName = self.__Token.replace('$(SPACE)', ' ') - 
self.__VerifyFile(FfsFileObj.FileName) + FfsFileObj.CurrentLineContent = self._CurrentLine() + FfsFileObj.FileName = self._Token.replace('$(SPACE)', ' ') + self._VerifyFile(FfsFileObj.FileName) - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - ## __GetRAWData() method + ## _GetRAWData() method # # Get RAW data for FILE statement # @@ -2739,33 +2672,32 @@ class FdfParser: # @param FfsFileObj for whom section is got # @param MacroDict dictionary used to replace macro # - def __GetRAWData(self, FfsFileObj, MacroDict = {}): + def _GetRAWData(self, FfsFileObj, MacroDict = {}): FfsFileObj.FileName = [] FfsFileObj.SubAlignment = [] while True: AlignValue = None - if self.__GetAlignment(): - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + if self._GetAlignment(): + if self._Token not in ALIGNMENTS: + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) #For FFS, Auto is default option same to "" - if not self.__Token == "Auto": - AlignValue = self.__Token - if not self.__GetNextToken(): + if not self._Token == "Auto": + AlignValue = self._Token + if not self._GetNextToken(): raise Warning("expected Filename value", self.FileName, self.CurrentLineNumber) - FileName = self.__Token.replace('$(SPACE)', ' ') + FileName = self._Token.replace('$(SPACE)', ' ') if FileName == '}': - self.__UndoToken() + self._UndoToken() raise Warning("expected Filename value", self.FileName, self.CurrentLineNumber) - self.__VerifyFile(FileName) + self._VerifyFile(FileName) File = PathClass(NormPath(FileName), GenFdsGlobalVariable.WorkSpaceDir) FfsFileObj.FileName.append(File.Path) FfsFileObj.SubAlignment.append(AlignValue) - if self.__IsToken( "}"): - self.__UndoToken() + if self._IsToken("}"): + self._UndoToken() break if len(FfsFileObj.SubAlignment) == 1: @@ -2773,45 +2705,43 @@ class FdfParser: if len(FfsFileObj.FileName) == 1: FfsFileObj.FileName = FfsFileObj.FileName[0] - ## __GetFileOpts() method + ## _GetFileOpts() method # # Get options for FILE statement # # @param self The object pointer # @param FfsFileObj for whom options is got # - def __GetFileOpts(self, FfsFileObj): - - if self.__GetNextToken(): - if TokenFindPattern.match(self.__Token): - FfsFileObj.KeyStringList.append(self.__Token) - if self.__IsToken(","): - while self.__GetNextToken(): - if not TokenFindPattern.match(self.__Token): + def _GetFileOpts(self, FfsFileObj): + if self._GetNextToken(): + if TokenFindPattern.match(self._Token): + FfsFileObj.KeyStringList.append(self._Token) + if self._IsToken(TAB_COMMA_SPLIT): + while self._GetNextToken(): + if not TokenFindPattern.match(self._Token): raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) - FfsFileObj.KeyStringList.append(self.__Token) + FfsFileObj.KeyStringList.append(self._Token) - if not self.__IsToken(","): + if not self._IsToken(TAB_COMMA_SPLIT): break else: - self.__UndoToken() + self._UndoToken() - if self.__IsKeyword( "FIXED", True): + if self._IsKeyword("FIXED", True): FfsFileObj.Fixed = True - if self.__IsKeyword( "CHECKSUM", True): + if self._IsKeyword("CHECKSUM", True): FfsFileObj.CheckSum = True - if self.__GetAlignment(): - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", 
"128K", - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + if self._GetAlignment(): + if self._Token not in ALIGNMENTS: + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) #For FFS, Auto is default option same to "" - if not self.__Token == "Auto": - FfsFileObj.Alignment = self.__Token + if not self._Token == "Auto": + FfsFileObj.Alignment = self._Token - ## __GetAlignment() method + ## _GetAlignment() method # # Return the alignment value # @@ -2819,18 +2749,18 @@ class FdfParser: # @retval True Successfully find alignment # @retval False Not able to find alignment # - def __GetAlignment(self): - if self.__IsKeyword( "Align", True): - if not self.__IsToken( "="): + def _GetAlignment(self): + if self._IsKeyword("Align", True): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected alignment value", self.FileName, self.CurrentLineNumber) return True return False - ## __GetFilePart() method + ## _GetFilePart() method # # Get section data for FILE statement # @@ -2838,23 +2768,23 @@ class FdfParser: # @param FfsFileObj for whom section is got # @param MacroDict dictionary used to replace macro # - def __GetSectionData(self, FfsFileObj, MacroDict = {}): + def _GetSectionData(self, FfsFileObj, MacroDict = {}): Dict = {} Dict.update(MacroDict) - self.__GetDefineStatements(FfsFileObj) + self._GetDefineStatements(FfsFileObj) Dict.update(FfsFileObj.DefineVarDict) - self.__GetAprioriSection(FfsFileObj, Dict.copy()) - self.__GetAprioriSection(FfsFileObj, Dict.copy()) + self._GetAprioriSection(FfsFileObj, Dict.copy()) + self._GetAprioriSection(FfsFileObj, Dict.copy()) while True: - IsLeafSection = self.__GetLeafSection(FfsFileObj, Dict) - IsEncapSection = self.__GetEncapsulationSec(FfsFileObj) + IsLeafSection = self._GetLeafSection(FfsFileObj, Dict) + IsEncapSection = self._GetEncapsulationSec(FfsFileObj) if not IsLeafSection and not IsEncapSection: break - ## __GetLeafSection() method + ## _GetLeafSection() method # # Get leaf section for Obj # @@ -2864,97 +2794,95 @@ class FdfParser: # @retval True Successfully find section statement # @retval False Not able to find section statement # - def __GetLeafSection(self, Obj, MacroDict = {}): - + def _GetLeafSection(self, Obj, MacroDict = {}): OldPos = self.GetFileBufferPos() - if not self.__IsKeyword( "SECTION"): + if not self._IsKeyword("SECTION"): if len(Obj.SectionList) == 0: raise Warning("expected SECTION", self.FileName, self.CurrentLineNumber) else: return False AlignValue = None - if self.__GetAlignment(): - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - AlignValue = self.__Token + if self._GetAlignment(): + if self._Token not in ALIGNMENTS: + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + AlignValue = self._Token BuildNum = None - if self.__IsKeyword( "BUILD_NUM"): - if not self.__IsToken( "="): + if self._IsKeyword("BUILD_NUM"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise 
Warning("expected Build number value", self.FileName, self.CurrentLineNumber) - BuildNum = self.__Token + BuildNum = self._Token - if self.__IsKeyword( "VERSION"): + if self._IsKeyword("VERSION"): if AlignValue == 'Auto': raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected version", self.FileName, self.CurrentLineNumber) - VerSectionObj = VerSection.VerSection() + VerSectionObj = VerSection() VerSectionObj.Alignment = AlignValue VerSectionObj.BuildNum = BuildNum - if self.__GetStringData(): - VerSectionObj.StringData = self.__Token + if self._GetStringData(): + VerSectionObj.StringData = self._Token else: - VerSectionObj.FileName = self.__Token + VerSectionObj.FileName = self._Token Obj.SectionList.append(VerSectionObj) - elif self.__IsKeyword( BINARY_FILE_TYPE_UI): + elif self._IsKeyword(BINARY_FILE_TYPE_UI): if AlignValue == 'Auto': raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected UI", self.FileName, self.CurrentLineNumber) - UiSectionObj = UiSection.UiSection() + UiSectionObj = UiSection() UiSectionObj.Alignment = AlignValue - if self.__GetStringData(): - UiSectionObj.StringData = self.__Token + if self._GetStringData(): + UiSectionObj.StringData = self._Token else: - UiSectionObj.FileName = self.__Token + UiSectionObj.FileName = self._Token Obj.SectionList.append(UiSectionObj) - elif self.__IsKeyword( "FV_IMAGE"): + elif self._IsKeyword("FV_IMAGE"): if AlignValue == 'Auto': raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FV name or FV file path", self.FileName, self.CurrentLineNumber) - FvName = self.__Token + FvName = self._Token FvObj = None - if self.__IsToken( "{"): - FvObj = Fv.FV() + if self._IsToken("{"): + FvObj = FV() FvObj.UiFvName = FvName.upper() - self.__GetDefineStatements(FvObj) + self._GetDefineStatements(FvObj) MacroDict.update(FvObj.DefineVarDict) - self.__GetBlockStatement(FvObj) - self.__GetSetStatements(FvObj) - self.__GetFvAlignment(FvObj) - self.__GetFvAttributes(FvObj) - self.__GetAprioriSection(FvObj, MacroDict.copy()) - self.__GetAprioriSection(FvObj, MacroDict.copy()) + self._GetBlockStatement(FvObj) + self._GetSetStatements(FvObj) + self._GetFvAlignment(FvObj) + self._GetFvAttributes(FvObj) + self._GetAprioriSection(FvObj, MacroDict.copy()) + self._GetAprioriSection(FvObj, MacroDict.copy()) while True: - IsInf = self.__GetInfStatement(FvObj) - IsFile = self.__GetFileStatement(FvObj, MacroDict.copy()) + IsInf = self._GetInfStatement(FvObj) + IsFile = self._GetFileStatement(FvObj, MacroDict.copy()) if not IsInf and not IsFile: break - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - FvImageSectionObj = 
FvImageSection.FvImageSection() + FvImageSectionObj = FvImageSection() FvImageSectionObj.Alignment = AlignValue if FvObj is not None: FvImageSectionObj.Fv = FvObj @@ -2965,81 +2893,81 @@ class FdfParser: Obj.SectionList.append(FvImageSectionObj) - elif self.__IsKeyword("PEI_DEPEX_EXP") or self.__IsKeyword("DXE_DEPEX_EXP") or self.__IsKeyword("SMM_DEPEX_EXP"): + elif self._IsKeyword("PEI_DEPEX_EXP") or self._IsKeyword("DXE_DEPEX_EXP") or self._IsKeyword("SMM_DEPEX_EXP"): if AlignValue == 'Auto': raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) - DepexSectionObj = DepexSection.DepexSection() + DepexSectionObj = DepexSection() DepexSectionObj.Alignment = AlignValue - DepexSectionObj.DepexType = self.__Token + DepexSectionObj.DepexType = self._Token - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - if not self.__SkipToToken( "}"): + if not self._SkipToToken("}"): raise Warning("expected Depex expression ending '}'", self.FileName, self.CurrentLineNumber) - DepexSectionObj.Expression = self.__SkippedChars.rstrip('}') + DepexSectionObj.Expression = self._SkippedChars.rstrip('}') Obj.SectionList.append(DepexSectionObj) else: - if not self.__GetNextWord(): + if not self._GetNextWord(): raise Warning("expected section type", self.FileName, self.CurrentLineNumber) # Encapsulation section appear, UndoToken and return - if self.__Token == "COMPRESS" or self.__Token == "GUIDED": + if self._Token == "COMPRESS" or self._Token == "GUIDED": self.SetFileBufferPos(OldPos) return False - if self.__Token not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ + if self._Token not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, "SUBTYPE_GUID", BINARY_FILE_TYPE_SMM_DEPEX): - raise Warning("Unknown section type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - if AlignValue == 'Auto'and (not self.__Token == BINARY_FILE_TYPE_PE32) and (not self.__Token == BINARY_FILE_TYPE_TE): + raise Warning("Unknown section type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + if AlignValue == 'Auto'and (not self._Token == BINARY_FILE_TYPE_PE32) and (not self._Token == BINARY_FILE_TYPE_TE): raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) # DataSection - DataSectionObj = DataSection.DataSection() + DataSectionObj = DataSection() DataSectionObj.Alignment = AlignValue - DataSectionObj.SecType = self.__Token + DataSectionObj.SecType = self._Token - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): - if self.__FileCouldHaveRelocFlag(Obj.FvFileType) and self.__SectionCouldHaveRelocFlag(DataSectionObj.SecType): - if self.__Token == 'RELOCS_STRIPPED': + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): + if self._FileCouldHaveRelocFlag(Obj.FvFileType) and self._SectionCouldHaveRelocFlag(DataSectionObj.SecType): + if self._Token == 'RELOCS_STRIPPED': DataSectionObj.KeepReloc = False else: DataSectionObj.KeepReloc = True else: raise Warning("File type %s, section type %s, could not have reloc strip 
flag%d" % (Obj.FvFileType, DataSectionObj.SecType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) - if self.__IsToken("="): - if not self.__GetNextToken(): + if self._IsToken(TAB_EQUAL_SPLIT): + if not self._GetNextToken(): raise Warning("expected section file path", self.FileName, self.CurrentLineNumber) - DataSectionObj.SectFileName = self.__Token - self.__VerifyFile(DataSectionObj.SectFileName) + DataSectionObj.SectFileName = self._Token + self._VerifyFile(DataSectionObj.SectFileName) else: - if not self.__GetCglSection(DataSectionObj): + if not self._GetCglSection(DataSectionObj): return False Obj.SectionList.append(DataSectionObj) return True - ## __VerifyFile + ## _VerifyFile # # Check if file exists or not: # If current phase if GenFds, the file must exist; # If current phase is AutoGen and the file is not in $(OUTPUT_DIRECTORY), the file must exist # @param FileName: File path to be verified. # - def __VerifyFile(self, FileName): - if FileName.replace('$(WORKSPACE)', '').find('$') != -1: + def _VerifyFile(self, FileName): + if FileName.replace(TAB_WORKSPACE, '').find('$') != -1: return - if not GlobalData.gAutoGenPhase or not self.__GetMacroValue("OUTPUT_DIRECTORY") in FileName: + if not GlobalData.gAutoGenPhase or not self._GetMacroValue(TAB_DSC_DEFINES_OUTPUT_DIRECTORY) in FileName: ErrorCode, ErrorInfo = PathClass(NormPath(FileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() if ErrorCode != 0: EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) - ## __GetCglSection() method + ## _GetCglSection() method # # Get compressed or GUIDed section for Obj # @@ -3049,28 +2977,28 @@ class FdfParser: # @retval True Successfully find section statement # @retval False Not able to find section statement # - def __GetCglSection(self, Obj, AlignValue = None): + def _GetCglSection(self, Obj, AlignValue = None): - if self.__IsKeyword( "COMPRESS"): + if self._IsKeyword("COMPRESS"): type = "PI_STD" - if self.__IsKeyword("PI_STD") or self.__IsKeyword("PI_NONE"): - type = self.__Token + if self._IsKeyword("PI_STD") or self._IsKeyword("PI_NONE"): + type = self._Token - if not self.__IsToken("{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - CompressSectionObj = CompressSection.CompressSection() + CompressSectionObj = CompressSection() CompressSectionObj.Alignment = AlignValue CompressSectionObj.CompType = type # Recursive sections... 
while True: - IsLeafSection = self.__GetLeafSection(CompressSectionObj) - IsEncapSection = self.__GetEncapsulationSec(CompressSectionObj) + IsLeafSection = self._GetLeafSection(CompressSectionObj) + IsEncapSection = self._GetEncapsulationSec(CompressSectionObj) if not IsLeafSection and not IsEncapSection: break - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) Obj.SectionList.append(CompressSectionObj) @@ -3079,15 +3007,15 @@ class FdfParser: return True - elif self.__IsKeyword( "GUIDED"): + elif self._IsKeyword("GUIDED"): GuidValue = None - if self.__GetNextGuid(): - GuidValue = self.__Token + if self._GetNextGuid(): + GuidValue = self._Token - AttribDict = self.__GetGuidAttrib() - if not self.__IsToken("{"): + AttribDict = self._GetGuidAttrib() + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - GuidSectionObj = GuidSection.GuidSection() + GuidSectionObj = GuidSection() GuidSectionObj.Alignment = AlignValue GuidSectionObj.NameGuid = GuidValue GuidSectionObj.SectionType = "GUIDED" @@ -3096,12 +3024,12 @@ class FdfParser: GuidSectionObj.ExtraHeaderSize = AttribDict["EXTRA_HEADER_SIZE"] # Recursive sections... while True: - IsLeafSection = self.__GetLeafSection(GuidSectionObj) - IsEncapSection = self.__GetEncapsulationSec(GuidSectionObj) + IsLeafSection = self._GetLeafSection(GuidSectionObj) + IsEncapSection = self._GetEncapsulationSec(GuidSectionObj) if not IsLeafSection and not IsEncapSection: break - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) Obj.SectionList.append(GuidSectionObj) @@ -3109,44 +3037,43 @@ class FdfParser: return False - ## __GetGuidAttri() method + ## _GetGuidAttri() method # # Get attributes for GUID section # # @param self The object pointer # @retval AttribDict Dictionary of key-value pair of section attributes # - def __GetGuidAttrib(self): - + def _GetGuidAttrib(self): AttribDict = {} AttribDict["PROCESSING_REQUIRED"] = "NONE" AttribDict["AUTH_STATUS_VALID"] = "NONE" AttribDict["EXTRA_HEADER_SIZE"] = -1 - while self.__IsKeyword("PROCESSING_REQUIRED") or self.__IsKeyword("AUTH_STATUS_VALID") \ - or self.__IsKeyword("EXTRA_HEADER_SIZE"): - AttribKey = self.__Token + while self._IsKeyword("PROCESSING_REQUIRED") or self._IsKeyword("AUTH_STATUS_VALID") \ + or self._IsKeyword("EXTRA_HEADER_SIZE"): + AttribKey = self._Token - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected TRUE(1)/FALSE(0)/Number", self.FileName, self.CurrentLineNumber) elif AttribKey == "EXTRA_HEADER_SIZE": Base = 10 - if self.__Token[0:2].upper() == "0X": + if self._Token[0:2].upper() == "0X": Base = 16 try: - AttribDict[AttribKey] = int(self.__Token, Base) + AttribDict[AttribKey] = int(self._Token, Base) continue except ValueError: raise Warning("expected Number", self.FileName, self.CurrentLineNumber) - elif self.__Token.upper() not in ("TRUE", "FALSE", "1", "0"): + elif self._Token.upper() not in ("TRUE", "FALSE", "1", "0"): raise Warning("expected TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) - AttribDict[AttribKey] = self.__Token + AttribDict[AttribKey] = self._Token return AttribDict - ## __GetEncapsulationSec() method + ## _GetEncapsulationSec() method # # Get encapsulation section for FILE # @@ -3155,91 
+3082,89 @@ class FdfParser: # @retval True Successfully find section statement # @retval False Not able to find section statement # - def __GetEncapsulationSec(self, FfsFileObj): - + def _GetEncapsulationSec(self, FfsFileObj): OldPos = self.GetFileBufferPos() - if not self.__IsKeyword( "SECTION"): + if not self._IsKeyword("SECTION"): if len(FfsFileObj.SectionList) == 0: raise Warning("expected SECTION", self.FileName, self.CurrentLineNumber) else: return False AlignValue = None - if self.__GetAlignment(): - if self.__Token not in ("8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - AlignValue = self.__Token + if self._GetAlignment(): + if self._Token not in ALIGNMENT_NOAUTO: + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + AlignValue = self._Token - if not self.__GetCglSection(FfsFileObj, AlignValue): + if not self._GetCglSection(FfsFileObj, AlignValue): self.SetFileBufferPos(OldPos) return False else: return True - def __GetFmp(self): - if not self.__GetNextToken(): + def _GetFmp(self): + if not self._GetNextToken(): return False - S = self.__Token.upper() - if S.startswith("[") and not S.startswith("[FMPPAYLOAD."): + S = self._Token.upper() + if S.startswith(TAB_SECTION_START) and not S.startswith("[FMPPAYLOAD."): self.SectionParser(S) - self.__UndoToken() + self._UndoToken() return False - self.__UndoToken() - self.__SkipToToken("[FMPPAYLOAD.", True) - FmpUiName = self.__GetUiName().upper() + self._UndoToken() + self._SkipToToken("[FMPPAYLOAD.", True) + FmpUiName = self._GetUiName().upper() if FmpUiName in self.Profile.FmpPayloadDict: raise Warning("Duplicated FMP UI name found: %s" % FmpUiName, self.FileName, self.CurrentLineNumber) - FmpData = CapsuleData.CapsulePayload() + FmpData = CapsulePayload() FmpData.UiName = FmpUiName - if not self.__IsToken( "]"): + if not self._IsToken(TAB_SECTION_END): raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("The FMP payload section is empty!", self.FileName, self.CurrentLineNumber) FmpKeyList = ['IMAGE_HEADER_INIT_VERSION', 'IMAGE_TYPE_ID', 'IMAGE_INDEX', 'HARDWARE_INSTANCE', 'CERTIFICATE_GUID', 'MONOTONIC_COUNT'] - while self.__Token in FmpKeyList: - Name = self.__Token + while self._Token in FmpKeyList: + Name = self._Token FmpKeyList.remove(Name) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) if Name == 'IMAGE_TYPE_ID': - if not self.__GetNextGuid(): + if not self._GetNextGuid(): raise Warning("expected GUID value for IMAGE_TYPE_ID.", self.FileName, self.CurrentLineNumber) - FmpData.ImageTypeId = self.__Token + FmpData.ImageTypeId = self._Token elif Name == 'CERTIFICATE_GUID': - if not self.__GetNextGuid(): + if not self._GetNextGuid(): raise Warning("expected GUID value for CERTIFICATE_GUID.", self.FileName, self.CurrentLineNumber) - FmpData.Certificate_Guid = self.__Token - if uuid.UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_RSA2048_SHA256_GUID and uuid.UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_PKCS7_GUID: + FmpData.Certificate_Guid = self._Token + if UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_RSA2048_SHA256_GUID and UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_PKCS7_GUID: raise Warning("Only support 
EFI_CERT_TYPE_RSA2048_SHA256_GUID or EFI_CERT_TYPE_PKCS7_GUID for CERTIFICATE_GUID.", self.FileName, self.CurrentLineNumber) else: - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected value of %s" % Name, self.FileName, self.CurrentLineNumber) - Value = self.__Token + Value = self._Token if Name == 'IMAGE_HEADER_INIT_VERSION': - if FdfParser.__Verify(Name, Value, 'UINT8'): + if FdfParser._Verify(Name, Value, 'UINT8'): FmpData.Version = Value elif Name == 'IMAGE_INDEX': - if FdfParser.__Verify(Name, Value, 'UINT8'): + if FdfParser._Verify(Name, Value, 'UINT8'): FmpData.ImageIndex = Value elif Name == 'HARDWARE_INSTANCE': - if FdfParser.__Verify(Name, Value, 'UINT8'): + if FdfParser._Verify(Name, Value, 'UINT8'): FmpData.HardwareInstance = Value elif Name == 'MONOTONIC_COUNT': - if FdfParser.__Verify(Name, Value, 'UINT64'): + if FdfParser._Verify(Name, Value, 'UINT64'): FmpData.MonotonicCount = Value if FmpData.MonotonicCount.upper().startswith('0X'): FmpData.MonotonicCount = (long)(FmpData.MonotonicCount, 16) else: FmpData.MonotonicCount = (long)(FmpData.MonotonicCount) - if not self.__GetNextToken(): + if not self._GetNextToken(): break else: - self.__UndoToken() + self._UndoToken() if (FmpData.MonotonicCount and not FmpData.Certificate_Guid) or (not FmpData.MonotonicCount and FmpData.Certificate_Guid): EdkLogger.error("FdfParser", FORMAT_INVALID, "CERTIFICATE_GUID and MONOTONIC_COUNT must be work as a pair.") @@ -3248,7 +3173,7 @@ class FdfParser: if FmpKeyList and 'IMAGE_TYPE_ID' in FmpKeyList: raise Warning("Missing keywords IMAGE_TYPE_ID in FMP payload section.", self.FileName, self.CurrentLineNumber) # get the Image file and Vendor code file - self.__GetFMPCapsuleData(FmpData) + self._GetFMPCapsuleData(FmpData) if not FmpData.ImageFile: raise Warning("Missing image file in FMP payload section.", self.FileName, self.CurrentLineNumber) # check whether more than one Vendor code file @@ -3257,7 +3182,7 @@ class FdfParser: self.Profile.FmpPayloadDict[FmpUiName] = FmpData return True - ## __GetCapsule() method + ## _GetCapsule() method # # Get capsule section contents and store its data into capsule list of self.Profile # @@ -3265,90 +3190,89 @@ class FdfParser: # @retval True Successfully find a capsule # @retval False Not able to find a capsule # - def __GetCapsule(self): - - if not self.__GetNextToken(): + def _GetCapsule(self): + if not self._GetNextToken(): return False - S = self.__Token.upper() - if S.startswith("[") and not S.startswith("[CAPSULE."): + S = self._Token.upper() + if S.startswith(TAB_SECTION_START) and not S.startswith("[CAPSULE."): self.SectionParser(S) - self.__UndoToken() + self._UndoToken() return False - self.__UndoToken() - if not self.__IsToken("[CAPSULE.", True): + self._UndoToken() + if not self._IsToken("[CAPSULE.", True): FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) raise Warning("expected [Capsule.]", self.FileName, self.CurrentLineNumber) - CapsuleObj = Capsule.Capsule() + CapsuleObj = Capsule() - CapsuleName = self.__GetUiName() + CapsuleName = self._GetUiName() if not CapsuleName: raise Warning("expected 
capsule name", self.FileName, self.CurrentLineNumber) CapsuleObj.UiCapsuleName = CapsuleName.upper() - if not self.__IsToken( "]"): + if not self._IsToken(TAB_SECTION_END): raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) - if self.__IsKeyword("CREATE_FILE"): - if not self.__IsToken( "="): + if self._IsKeyword("CREATE_FILE"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected file name", self.FileName, self.CurrentLineNumber) - CapsuleObj.CreateFile = self.__Token + CapsuleObj.CreateFile = self._Token - self.__GetCapsuleStatements(CapsuleObj) + self._GetCapsuleStatements(CapsuleObj) self.Profile.CapsuleDict[CapsuleObj.UiCapsuleName] = CapsuleObj return True - ## __GetCapsuleStatements() method + ## _GetCapsuleStatements() method # # Get statements for capsule # # @param self The object pointer # @param Obj for whom statements are got # - def __GetCapsuleStatements(self, Obj): - self.__GetCapsuleTokens(Obj) - self.__GetDefineStatements(Obj) - self.__GetSetStatements(Obj) - self.__GetCapsuleData(Obj) + def _GetCapsuleStatements(self, Obj): + self._GetCapsuleTokens(Obj) + self._GetDefineStatements(Obj) + self._GetSetStatements(Obj) + self._GetCapsuleData(Obj) - ## __GetCapsuleTokens() method + ## _GetCapsuleTokens() method # # Get token statements for capsule # # @param self The object pointer # @param Obj for whom token statements are got # - def __GetCapsuleTokens(self, Obj): - if not self.__GetNextToken(): + def _GetCapsuleTokens(self, Obj): + if not self._GetNextToken(): return False - while self.__Token in ("CAPSULE_GUID", "CAPSULE_HEADER_SIZE", "CAPSULE_FLAGS", "OEM_CAPSULE_FLAGS", "CAPSULE_HEADER_INIT_VERSION"): - Name = self.__Token.strip() - if not self.__IsToken("="): + while self._Token in ("CAPSULE_GUID", "CAPSULE_HEADER_SIZE", "CAPSULE_FLAGS", "OEM_CAPSULE_FLAGS", "CAPSULE_HEADER_INIT_VERSION"): + Name = self._Token.strip() + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected value", self.FileName, self.CurrentLineNumber) if Name == 'CAPSULE_FLAGS': - if not self.__Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): + if not self._Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): raise Warning("expected PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, self.CurrentLineNumber) - Value = self.__Token.strip() - while self.__IsToken(","): - Value += ',' - if not self.__GetNextToken(): + Value = self._Token.strip() + while self._IsToken(TAB_COMMA_SPLIT): + Value += TAB_COMMA_SPLIT + if not self._GetNextToken(): raise Warning("expected value", self.FileName, self.CurrentLineNumber) - if not self.__Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): + if not self._Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): raise Warning("expected PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, self.CurrentLineNumber) - Value += self.__Token.strip() + Value += self._Token.strip() elif Name == 'OEM_CAPSULE_FLAGS': - Value = self.__Token.strip() + Value = self._Token.strip() if not Value.upper().startswith('0X'): raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) try: @@ -3357,51 +3281,49 @@ class FdfParser: 
raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) if not 0x0000 <= Value <= 0xFFFF: raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) - Value = self.__Token.strip() + Value = self._Token.strip() else: - Value = self.__Token.strip() + Value = self._Token.strip() Obj.TokensDict[Name] = Value - if not self.__GetNextToken(): + if not self._GetNextToken(): return False - self.__UndoToken() + self._UndoToken() - ## __GetCapsuleData() method + ## _GetCapsuleData() method # # Get capsule data for capsule # # @param self The object pointer # @param Obj for whom capsule data are got # - def __GetCapsuleData(self, Obj): - + def _GetCapsuleData(self, Obj): while True: - IsInf = self.__GetInfStatement(Obj, True) - IsFile = self.__GetFileStatement(Obj, True) - IsFv = self.__GetFvStatement(Obj) - IsFd = self.__GetFdStatement(Obj) - IsAnyFile = self.__GetAnyFileStatement(Obj) - IsAfile = self.__GetAfileStatement(Obj) - IsFmp = self.__GetFmpStatement(Obj) + IsInf = self._GetInfStatement(Obj, True) + IsFile = self._GetFileStatement(Obj, True) + IsFv = self._GetFvStatement(Obj) + IsFd = self._GetFdStatement(Obj) + IsAnyFile = self._GetAnyFileStatement(Obj) + IsAfile = self._GetAfileStatement(Obj) + IsFmp = self._GetFmpStatement(Obj) if not (IsInf or IsFile or IsFv or IsFd or IsAnyFile or IsAfile or IsFmp): break - ## __GetFMPCapsuleData() method + ## _GetFMPCapsuleData() method # # Get capsule data for FMP capsule # # @param self The object pointer # @param Obj for whom capsule data are got # - def __GetFMPCapsuleData(self, Obj): - + def _GetFMPCapsuleData(self, Obj): while True: - IsFv = self.__GetFvStatement(Obj, True) - IsFd = self.__GetFdStatement(Obj, True) - IsAnyFile = self.__GetAnyFileStatement(Obj, True) + IsFv = self._GetFvStatement(Obj, True) + IsFd = self._GetFdStatement(Obj, True) + IsAnyFile = self._GetAnyFileStatement(Obj, True) if not (IsFv or IsFd or IsAnyFile): break - ## __GetFvStatement() method + ## _GetFvStatement() method # # Get FV for capsule # @@ -3410,32 +3332,31 @@ class FdfParser: # @retval True Successfully find a FV statement # @retval False Not able to find a FV statement # - def __GetFvStatement(self, CapsuleObj, FMPCapsule = False): - - if not self.__IsKeyword(BINARY_FILE_TYPE_FV): + def _GetFvStatement(self, CapsuleObj, FMPCapsule = False): + if not self._IsKeyword(BINARY_FILE_TYPE_FV): return False - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) - if self.__Token.upper() not in self.Profile.FvDict: + if self._Token.upper() not in self.Profile.FvDict: raise Warning("FV name does not exist", self.FileName, self.CurrentLineNumber) - CapsuleFv = CapsuleData.CapsuleFv() - CapsuleFv.FvName = self.__Token + myCapsuleFv = CapsuleFv() + myCapsuleFv.FvName = self._Token if FMPCapsule: if not CapsuleObj.ImageFile: - CapsuleObj.ImageFile.append(CapsuleFv) + CapsuleObj.ImageFile.append(myCapsuleFv) else: - CapsuleObj.VendorCodeFile.append(CapsuleFv) + CapsuleObj.VendorCodeFile.append(myCapsuleFv) else: - CapsuleObj.CapsuleDataList.append(CapsuleFv) + CapsuleObj.CapsuleDataList.append(myCapsuleFv) return True - ## __GetFdStatement() method + ## _GetFdStatement() method # # Get FD for capsule # @@ -3444,74 +3365,73 @@ class FdfParser: # @retval True Successfully 
find a FD statement # @retval False Not able to find a FD statement # - def __GetFdStatement(self, CapsuleObj, FMPCapsule = False): - - if not self.__IsKeyword("FD"): + def _GetFdStatement(self, CapsuleObj, FMPCapsule = False): + if not self._IsKeyword("FD"): return False - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FD name", self.FileName, self.CurrentLineNumber) - if self.__Token.upper() not in self.Profile.FdDict: + if self._Token.upper() not in self.Profile.FdDict: raise Warning("FD name does not exist", self.FileName, self.CurrentLineNumber) - CapsuleFd = CapsuleData.CapsuleFd() - CapsuleFd.FdName = self.__Token + myCapsuleFd = CapsuleFd() + myCapsuleFd.FdName = self._Token if FMPCapsule: if not CapsuleObj.ImageFile: - CapsuleObj.ImageFile.append(CapsuleFd) + CapsuleObj.ImageFile.append(myCapsuleFd) else: - CapsuleObj.VendorCodeFile.append(CapsuleFd) + CapsuleObj.VendorCodeFile.append(myCapsuleFd) else: - CapsuleObj.CapsuleDataList.append(CapsuleFd) + CapsuleObj.CapsuleDataList.append(myCapsuleFd) return True - def __GetFmpStatement(self, CapsuleObj): - if not self.__IsKeyword("FMP_PAYLOAD"): - if not self.__IsKeyword("FMP"): + def _GetFmpStatement(self, CapsuleObj): + if not self._IsKeyword("FMP_PAYLOAD"): + if not self._IsKeyword("FMP"): return False - if not self.__IsKeyword("PAYLOAD"): - self.__UndoToken() + if not self._IsKeyword("PAYLOAD"): + self._UndoToken() return False - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected payload name after FMP_PAYLOAD =", self.FileName, self.CurrentLineNumber) - Payload = self.__Token.upper() + Payload = self._Token.upper() if Payload not in self.Profile.FmpPayloadDict: - raise Warning("This FMP Payload does not exist: %s" % self.__Token, self.FileName, self.CurrentLineNumber) + raise Warning("This FMP Payload does not exist: %s" % self._Token, self.FileName, self.CurrentLineNumber) CapsuleObj.FmpPayloadList.append(self.Profile.FmpPayloadDict[Payload]) return True - def __ParseRawFileStatement(self): - if not self.__IsKeyword("FILE"): + def _ParseRawFileStatement(self): + if not self._IsKeyword("FILE"): return None - if not self.__IsKeyword("DATA"): - self.__UndoToken() + if not self._IsKeyword("DATA"): + self._UndoToken() return None - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected File name", self.FileName, self.CurrentLineNumber) - AnyFileName = self.__Token - self.__VerifyFile(AnyFileName) + AnyFileName = self._Token + self._VerifyFile(AnyFileName) if not os.path.isabs(AnyFileName): AnyFileName = mws.join(GenFdsGlobalVariable.WorkSpaceDir, AnyFileName) return AnyFileName - ## __GetAnyFileStatement() method + ## _GetAnyFileStatement() method # # Get AnyFile for capsule # @@ -3520,23 +3440,23 @@ class FdfParser: # @retval True Successfully find a Anyfile statement # @retval False Not able to find a AnyFile statement # - def __GetAnyFileStatement(self, CapsuleObj, FMPCapsule = False): - AnyFileName = self.__ParseRawFileStatement() + def _GetAnyFileStatement(self, CapsuleObj, FMPCapsule = False): + AnyFileName = 
self._ParseRawFileStatement() if not AnyFileName: return False - CapsuleAnyFile = CapsuleData.CapsuleAnyFile() - CapsuleAnyFile.FileName = AnyFileName + myCapsuleAnyFile = CapsuleAnyFile() + myCapsuleAnyFile.FileName = AnyFileName if FMPCapsule: if not CapsuleObj.ImageFile: - CapsuleObj.ImageFile.append(CapsuleAnyFile) + CapsuleObj.ImageFile.append(myCapsuleAnyFile) else: - CapsuleObj.VendorCodeFile.append(CapsuleAnyFile) + CapsuleObj.VendorCodeFile.append(myCapsuleAnyFile) else: - CapsuleObj.CapsuleDataList.append(CapsuleAnyFile) + CapsuleObj.CapsuleDataList.append(myCapsuleAnyFile) return True - ## __GetAfileStatement() method + ## _GetAfileStatement() method # # Get Afile for capsule # @@ -3545,18 +3465,17 @@ class FdfParser: # @retval True Successfully find a Afile statement # @retval False Not able to find a Afile statement # - def __GetAfileStatement(self, CapsuleObj): - - if not self.__IsKeyword("APPEND"): + def _GetAfileStatement(self, CapsuleObj): + if not self._IsKeyword("APPEND"): return False - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Afile name", self.FileName, self.CurrentLineNumber) - AfileName = self.__Token + AfileName = self._Token AfileBaseName = os.path.basename(AfileName) if os.path.splitext(AfileBaseName)[1] not in [".bin", ".BIN", ".Bin", ".dat", ".DAT", ".Dat", ".data", ".DATA", ".Data"]: @@ -3565,19 +3484,19 @@ class FdfParser: if not os.path.isabs(AfileName): AfileName = GenFdsGlobalVariable.ReplaceWorkspaceMacro(AfileName) - self.__VerifyFile(AfileName) + self._VerifyFile(AfileName) else: if not os.path.exists(AfileName): raise Warning('%s does not exist' % AfileName, self.FileName, self.CurrentLineNumber) else: pass - CapsuleAfile = CapsuleData.CapsuleAfile() - CapsuleAfile.FileName = AfileName - CapsuleObj.CapsuleDataList.append(CapsuleAfile) + myCapsuleAfile = CapsuleAfile() + myCapsuleAfile.FileName = AfileName + CapsuleObj.CapsuleDataList.append(myCapsuleAfile) return True - ## __GetRule() method + ## _GetRule() method # # Get Rule section contents and store its data into rule list of self.Profile # @@ -3585,143 +3504,140 @@ class FdfParser: # @retval True Successfully find a Rule # @retval False Not able to find a Rule # - def __GetRule(self): - - if not self.__GetNextToken(): + def _GetRule(self): + if not self._GetNextToken(): return False - S = self.__Token.upper() - if S.startswith("[") and not S.startswith("[RULE."): + S = self._Token.upper() + if S.startswith(TAB_SECTION_START) and not S.startswith("[RULE."): self.SectionParser(S) - self.__UndoToken() + self._UndoToken() return False - self.__UndoToken() - if not self.__IsToken("[Rule.", True): + self._UndoToken() + if not self._IsToken("[Rule.", True): FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) raise Warning("expected [Rule.]", self.FileName, self.CurrentLineNumber) - if not self.__SkipToToken("."): + if not self._SkipToToken(TAB_SPLIT): raise Warning("expected '.'", self.FileName, 
self.CurrentLineNumber) - Arch = self.__SkippedChars.rstrip(".") + Arch = self._SkippedChars.rstrip(TAB_SPLIT) if Arch.upper() not in ARCH_SET_FULL: raise Warning("Unknown Arch '%s'" % Arch, self.FileName, self.CurrentLineNumber) - ModuleType = self.__GetModuleType() + ModuleType = self._GetModuleType() TemplateName = "" - if self.__IsToken("."): - if not self.__GetNextWord(): + if self._IsToken(TAB_SPLIT): + if not self._GetNextWord(): raise Warning("expected template name", self.FileName, self.CurrentLineNumber) - TemplateName = self.__Token + TemplateName = self._Token - if not self.__IsToken( "]"): + if not self._IsToken(TAB_SECTION_END): raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) - RuleObj = self.__GetRuleFileStatements() + RuleObj = self._GetRuleFileStatements() RuleObj.Arch = Arch.upper() RuleObj.ModuleType = ModuleType RuleObj.TemplateName = TemplateName - if TemplateName == '' : + if TemplateName == '': self.Profile.RuleDict['RULE' + \ - '.' + \ + TAB_SPLIT + \ Arch.upper() + \ - '.' + \ + TAB_SPLIT + \ ModuleType.upper() ] = RuleObj - else : + else: self.Profile.RuleDict['RULE' + \ - '.' + \ + TAB_SPLIT + \ Arch.upper() + \ - '.' + \ + TAB_SPLIT + \ ModuleType.upper() + \ - '.' + \ + TAB_SPLIT + \ TemplateName.upper() ] = RuleObj return True - ## __GetModuleType() method + ## _GetModuleType() method # # Return the module type # # @param self The object pointer # @retval string module type # - def __GetModuleType(self): - - if not self.__GetNextWord(): + def _GetModuleType(self): + if not self._GetNextWord(): raise Warning("expected Module type", self.FileName, self.CurrentLineNumber) - if self.__Token.upper() not in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, SUP_MODULE_DXE_CORE, \ + if self._Token.upper() not in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, SUP_MODULE_DXE_CORE, \ SUP_MODULE_DXE_DRIVER, SUP_MODULE_DXE_SAL_DRIVER, \ SUP_MODULE_DXE_SMM_DRIVER, SUP_MODULE_DXE_RUNTIME_DRIVER, \ - SUP_MODULE_UEFI_DRIVER, SUP_MODULE_UEFI_APPLICATION, SUP_MODULE_USER_DEFINED, "DEFAULT", SUP_MODULE_BASE, \ + SUP_MODULE_UEFI_DRIVER, SUP_MODULE_UEFI_APPLICATION, SUP_MODULE_USER_DEFINED, TAB_DEFAULT, SUP_MODULE_BASE, \ EDK_COMPONENT_TYPE_SECURITY_CORE, EDK_COMPONENT_TYPE_COMBINED_PEIM_DRIVER, EDK_COMPONENT_TYPE_PIC_PEIM, EDK_COMPONENT_TYPE_RELOCATABLE_PEIM, \ "PE32_PEIM", EDK_COMPONENT_TYPE_BS_DRIVER, EDK_COMPONENT_TYPE_RT_DRIVER, EDK_COMPONENT_TYPE_SAL_RT_DRIVER, EDK_COMPONENT_TYPE_APPLICATION, "ACPITABLE", SUP_MODULE_SMM_CORE, SUP_MODULE_MM_STANDALONE, SUP_MODULE_MM_CORE_STANDALONE): - raise Warning("Unknown Module type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - return self.__Token + raise Warning("Unknown Module type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + return self._Token - ## __GetFileExtension() method + ## _GetFileExtension() method # # Return the file extension # # @param self The object pointer # @retval string file name extension # - def __GetFileExtension(self): - if not self.__IsToken("."): + def _GetFileExtension(self): + if not self._IsToken(TAB_SPLIT): raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) Ext = "" - if self.__GetNextToken(): - if FileExtensionPattern.match(self.__Token): - Ext = self.__Token - return '.' 
+ Ext + if self._GetNextToken(): + if FileExtensionPattern.match(self._Token): + Ext = self._Token + return TAB_SPLIT + Ext else: - raise Warning("Unknown file extension '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + raise Warning("Unknown file extension '%s'" % self._Token, self.FileName, self.CurrentLineNumber) else: raise Warning("expected file extension", self.FileName, self.CurrentLineNumber) - ## __GetRuleFileStatement() method + ## _GetRuleFileStatement() method # # Get rule contents # # @param self The object pointer # @retval Rule Rule object # - def __GetRuleFileStatements(self): - - if not self.__IsKeyword("FILE"): + def _GetRuleFileStatements(self): + if not self._IsKeyword("FILE"): raise Warning("expected FILE", self.FileName, self.CurrentLineNumber) - if not self.__GetNextWord(): + if not self._GetNextWord(): raise Warning("expected FFS type", self.FileName, self.CurrentLineNumber) - Type = self.__Token.strip().upper() + Type = self._Token.strip().upper() if Type not in ("RAW", "FREEFORM", SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM,\ "PEI_DXE_COMBO", "DRIVER", SUP_MODULE_DXE_CORE, EDK_COMPONENT_TYPE_APPLICATION, "FV_IMAGE", "SMM", SUP_MODULE_SMM_CORE, SUP_MODULE_MM_STANDALONE, SUP_MODULE_MM_CORE_STANDALONE): - raise Warning("Unknown FV type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + raise Warning("Unknown FV type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__IsKeyword("$(NAMED_GUID)"): - if not self.__GetNextWord(): + if not self._IsKeyword("$(NAMED_GUID)"): + if not self._GetNextWord(): raise Warning("expected $(NAMED_GUID)", self.FileName, self.CurrentLineNumber) - if self.__Token == 'PCD': - if not self.__IsToken( "("): + if self._Token == 'PCD': + if not self._IsToken("("): raise Warning("expected '('", self.FileName, self.CurrentLineNumber) - PcdPair = self.__GetNextPcdSettings() - if not self.__IsToken( ")"): + PcdPair = self._GetNextPcdSettings() + if not self._IsToken(")"): raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' - NameGuid = self.__Token + NameGuid = self._Token KeepReloc = None - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): - if self.__FileCouldHaveRelocFlag(Type): - if self.__Token == 'RELOCS_STRIPPED': + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): + if self._FileCouldHaveRelocFlag(Type): + if self._Token == 'RELOCS_STRIPPED': KeepReloc = False else: KeepReloc = True @@ -3729,111 +3645,109 @@ class FdfParser: raise Warning("File type %s could not have reloc strip flag%d" % (Type, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) KeyStringList = [] - if self.__GetNextToken(): - if TokenFindPattern.match(self.__Token): - KeyStringList.append(self.__Token) - if self.__IsToken(","): - while self.__GetNextToken(): - if not TokenFindPattern.match(self.__Token): + if self._GetNextToken(): + if TokenFindPattern.match(self._Token): + KeyStringList.append(self._Token) + if self._IsToken(TAB_COMMA_SPLIT): + while self._GetNextToken(): + if not TokenFindPattern.match(self._Token): raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) - KeyStringList.append(self.__Token) + KeyStringList.append(self._Token) 
- if not self.__IsToken(","): + if not self._IsToken(TAB_COMMA_SPLIT): break else: - self.__UndoToken() + self._UndoToken() Fixed = False - if self.__IsKeyword("Fixed", True): + if self._IsKeyword("Fixed", True): Fixed = True CheckSum = False - if self.__IsKeyword("CheckSum", True): + if self._IsKeyword("CheckSum", True): CheckSum = True AlignValue = "" - if self.__GetAlignment(): - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + if self._GetAlignment(): + if self._Token not in ALIGNMENTS: + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) #For FFS, Auto is default option same to "" - if not self.__Token == "Auto": - AlignValue = self.__Token + if not self._Token == "Auto": + AlignValue = self._Token - if self.__IsToken("{"): + if self._IsToken("{"): # Complex file rule expected - Rule = RuleComplexFile.RuleComplexFile() - Rule.FvFileType = Type - Rule.NameGuid = NameGuid - Rule.Alignment = AlignValue - Rule.CheckSum = CheckSum - Rule.Fixed = Fixed - Rule.KeyStringList = KeyStringList + NewRule = RuleComplexFile() + NewRule.FvFileType = Type + NewRule.NameGuid = NameGuid + NewRule.Alignment = AlignValue + NewRule.CheckSum = CheckSum + NewRule.Fixed = Fixed + NewRule.KeyStringList = KeyStringList if KeepReloc is not None: - Rule.KeepReloc = KeepReloc + NewRule.KeepReloc = KeepReloc while True: - IsEncapsulate = self.__GetRuleEncapsulationSection(Rule) - IsLeaf = self.__GetEfiSection(Rule) + IsEncapsulate = self._GetRuleEncapsulationSection(NewRule) + IsLeaf = self._GetEfiSection(NewRule) if not IsEncapsulate and not IsLeaf: break - if not self.__IsToken("}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - return Rule + return NewRule else: # Simple file rule expected - if not self.__GetNextWord(): + if not self._GetNextWord(): raise Warning("expected leaf section type", self.FileName, self.CurrentLineNumber) - SectionName = self.__Token + SectionName = self._Token if SectionName not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ BINARY_FILE_TYPE_UI, BINARY_FILE_TYPE_PEI_DEPEX, "VERSION", "SUBTYPE_GUID", BINARY_FILE_TYPE_SMM_DEPEX): raise Warning("Unknown leaf section name '%s'" % SectionName, self.FileName, self.CurrentLineNumber) - if self.__IsKeyword("Fixed", True): + if self._IsKeyword("Fixed", True): Fixed = True - if self.__IsKeyword("CheckSum", True): + if self._IsKeyword("CheckSum", True): CheckSum = True SectAlignment = "" - if self.__GetAlignment(): - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - if self.__Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == BINARY_FILE_TYPE_TE): + if self._GetAlignment(): + if self._Token not in ALIGNMENTS: + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + if self._Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == BINARY_FILE_TYPE_TE): raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) - 
SectAlignment = self.__Token + SectAlignment = self._Token Ext = None - if self.__IsToken('|'): - Ext = self.__GetFileExtension() - elif not self.__GetNextToken(): + if self._IsToken(TAB_VALUE_SPLIT): + Ext = self._GetFileExtension() + elif not self._GetNextToken(): raise Warning("expected File name", self.FileName, self.CurrentLineNumber) - Rule = RuleSimpleFile.RuleSimpleFile() - Rule.SectionType = SectionName - Rule.FvFileType = Type - Rule.NameGuid = NameGuid - Rule.Alignment = AlignValue - Rule.SectAlignment = SectAlignment - Rule.CheckSum = CheckSum - Rule.Fixed = Fixed - Rule.KeyStringList = KeyStringList + NewRule = RuleSimpleFile() + NewRule.SectionType = SectionName + NewRule.FvFileType = Type + NewRule.NameGuid = NameGuid + NewRule.Alignment = AlignValue + NewRule.SectAlignment = SectAlignment + NewRule.CheckSum = CheckSum + NewRule.Fixed = Fixed + NewRule.KeyStringList = KeyStringList if KeepReloc is not None: - Rule.KeepReloc = KeepReloc - Rule.FileExtension = Ext - Rule.FileName = self.__Token - return Rule + NewRule.KeepReloc = KeepReloc + NewRule.FileExtension = Ext + NewRule.FileName = self._Token + return NewRule - ## __GetEfiSection() method + ## _GetEfiSection() method # # Get section list for Rule # @@ -3842,127 +3756,124 @@ class FdfParser: # @retval True Successfully find section statement # @retval False Not able to find section statement # - def __GetEfiSection(self, Obj): - + def _GetEfiSection(self, Obj): OldPos = self.GetFileBufferPos() - if not self.__GetNextWord(): + if not self._GetNextWord(): return False - SectionName = self.__Token + SectionName = self._Token if SectionName not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, BINARY_FILE_TYPE_SMM_DEPEX): - self.__UndoToken() + self._UndoToken() return False if SectionName == "FV_IMAGE": - FvImageSectionObj = FvImageSection.FvImageSection() - if self.__IsKeyword("FV_IMAGE"): + FvImageSectionObj = FvImageSection() + if self._IsKeyword("FV_IMAGE"): pass - if self.__IsToken( "{"): - FvObj = Fv.FV() - self.__GetDefineStatements(FvObj) - self.__GetBlockStatement(FvObj) - self.__GetSetStatements(FvObj) - self.__GetFvAlignment(FvObj) - self.__GetFvAttributes(FvObj) - self.__GetAprioriSection(FvObj) - self.__GetAprioriSection(FvObj) + if self._IsToken("{"): + FvObj = FV() + self._GetDefineStatements(FvObj) + self._GetBlockStatement(FvObj) + self._GetSetStatements(FvObj) + self._GetFvAlignment(FvObj) + self._GetFvAttributes(FvObj) + self._GetAprioriSection(FvObj) + self._GetAprioriSection(FvObj) while True: - IsInf = self.__GetInfStatement(FvObj) - IsFile = self.__GetFileStatement(FvObj) + IsInf = self._GetInfStatement(FvObj) + IsFile = self._GetFileStatement(FvObj) if not IsInf and not IsFile: break - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) FvImageSectionObj.Fv = FvObj FvImageSectionObj.FvName = None else: - if not self.__IsKeyword(BINARY_FILE_TYPE_FV): - raise Warning("expected BINARY_FILE_TYPE_FV", self.FileName, self.CurrentLineNumber) - FvImageSectionObj.FvFileType = self.__Token + if not self._IsKeyword(BINARY_FILE_TYPE_FV): + raise Warning("expected 'FV'", self.FileName, self.CurrentLineNumber) + FvImageSectionObj.FvFileType = self._Token - if self.__GetAlignment(): - if self.__Token not in ("8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", - 
"256K", "512K", "1M", "2M", "4M", "8M", "16M"): - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - FvImageSectionObj.Alignment = self.__Token + if self._GetAlignment(): + if self._Token not in ALIGNMENT_NOAUTO: + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + FvImageSectionObj.Alignment = self._Token - if self.__IsToken('|'): - FvImageSectionObj.FvFileExtension = self.__GetFileExtension() - elif self.__GetNextToken(): - if self.__Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ + if self._IsToken(TAB_VALUE_SPLIT): + FvImageSectionObj.FvFileExtension = self._GetFileExtension() + elif self._GetNextToken(): + if self._Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, BINARY_FILE_TYPE_SMM_DEPEX): - FvImageSectionObj.FvFileName = self.__Token + FvImageSectionObj.FvFileName = self._Token else: - self.__UndoToken() + self._UndoToken() else: raise Warning("expected FV file name", self.FileName, self.CurrentLineNumber) Obj.SectionList.append(FvImageSectionObj) return True - EfiSectionObj = EfiSection.EfiSection() + EfiSectionObj = EfiSection() EfiSectionObj.SectionType = SectionName - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected file type", self.FileName, self.CurrentLineNumber) - if self.__Token == "STRING": - if not self.__RuleSectionCouldHaveString(EfiSectionObj.SectionType): + if self._Token == "STRING": + if not self._RuleSectionCouldHaveString(EfiSectionObj.SectionType): raise Warning("%s section could NOT have string data%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) - if not self.__IsToken('='): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Quoted String", self.FileName, self.CurrentLineNumber) - if self.__GetStringData(): - EfiSectionObj.StringData = self.__Token + if self._GetStringData(): + EfiSectionObj.StringData = self._Token - if self.__IsKeyword("BUILD_NUM"): - if not self.__RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): + if self._IsKeyword("BUILD_NUM"): + if not self._RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): raise Warning("%s section could NOT have BUILD_NUM%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Build number", self.FileName, self.CurrentLineNumber) - EfiSectionObj.BuildNum = self.__Token + EfiSectionObj.BuildNum = self._Token else: - EfiSectionObj.FileType = self.__Token - self.__CheckRuleSectionFileType(EfiSectionObj.SectionType, EfiSectionObj.FileType) + EfiSectionObj.FileType = self._Token + self._CheckRuleSectionFileType(EfiSectionObj.SectionType, EfiSectionObj.FileType) - if self.__IsKeyword("Optional"): - if not self.__RuleSectionCouldBeOptional(EfiSectionObj.SectionType): + if self._IsKeyword("Optional"): + if not 
self._RuleSectionCouldBeOptional(EfiSectionObj.SectionType): raise Warning("%s section could NOT be optional%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) EfiSectionObj.Optional = True - if self.__IsKeyword("BUILD_NUM"): - if not self.__RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): + if self._IsKeyword("BUILD_NUM"): + if not self._RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): raise Warning("%s section could NOT have BUILD_NUM%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Build number", self.FileName, self.CurrentLineNumber) - EfiSectionObj.BuildNum = self.__Token + EfiSectionObj.BuildNum = self._Token - if self.__GetAlignment(): - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - if self.__Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == BINARY_FILE_TYPE_TE): + if self._GetAlignment(): + if self._Token not in ALIGNMENTS: + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + if self._Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == BINARY_FILE_TYPE_TE): raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) - EfiSectionObj.Alignment = self.__Token + EfiSectionObj.Alignment = self._Token - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): - if self.__SectionCouldHaveRelocFlag(EfiSectionObj.SectionType): - if self.__Token == 'RELOCS_STRIPPED': + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): + if self._SectionCouldHaveRelocFlag(EfiSectionObj.SectionType): + if self._Token == 'RELOCS_STRIPPED': EfiSectionObj.KeepReloc = False else: EfiSectionObj.KeepReloc = True @@ -3972,35 +3883,35 @@ class FdfParser: raise Warning("Section type %s could not have reloc strip flag" % EfiSectionObj.SectionType, self.FileName, self.CurrentLineNumber) - if self.__IsToken('|'): - EfiSectionObj.FileExtension = self.__GetFileExtension() - elif self.__GetNextToken(): - if self.__Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ + if self._IsToken(TAB_VALUE_SPLIT): + EfiSectionObj.FileExtension = self._GetFileExtension() + elif self._GetNextToken(): + if self._Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, BINARY_FILE_TYPE_SMM_DEPEX): - if self.__Token.startswith('PCD'): - self.__UndoToken() - self.__GetNextWord() + if self._Token.startswith('PCD'): + self._UndoToken() + self._GetNextWord() - if self.__Token == 'PCD': - if not self.__IsToken( "("): + if self._Token == 'PCD': + if not self._IsToken("("): raise Warning("expected '('", self.FileName, self.CurrentLineNumber) - PcdPair = self.__GetNextPcdSettings() - if not self.__IsToken( ")"): + PcdPair = 
self._GetNextPcdSettings() + if not self._IsToken(")"): raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' - EfiSectionObj.FileName = self.__Token + EfiSectionObj.FileName = self._Token else: - self.__UndoToken() + self._UndoToken() else: raise Warning("expected section file name", self.FileName, self.CurrentLineNumber) Obj.SectionList.append(EfiSectionObj) return True - ## __RuleSectionCouldBeOptional() method + ## _RuleSectionCouldBeOptional() method # # Get whether a section could be optional # @@ -4009,13 +3920,13 @@ class FdfParser: # @retval False section never optional # @staticmethod - def __RuleSectionCouldBeOptional(SectionType): + def _RuleSectionCouldBeOptional(SectionType): if SectionType in (BINARY_FILE_TYPE_DXE_DEPEX, BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, "RAW", BINARY_FILE_TYPE_SMM_DEPEX): return True else: return False - ## __RuleSectionCouldHaveBuildNum() method + ## _RuleSectionCouldHaveBuildNum() method # # Get whether a section could have build number information # @@ -4024,13 +3935,13 @@ class FdfParser: # @retval False section never have build number information # @staticmethod - def __RuleSectionCouldHaveBuildNum(SectionType): + def _RuleSectionCouldHaveBuildNum(SectionType): if SectionType in ("VERSION"): return True else: return False - ## __RuleSectionCouldHaveString() method + ## _RuleSectionCouldHaveString() method # # Get whether a section could have string # @@ -4039,13 +3950,13 @@ class FdfParser: # @retval False section never have string # @staticmethod - def __RuleSectionCouldHaveString(SectionType): + def _RuleSectionCouldHaveString(SectionType): if SectionType in (BINARY_FILE_TYPE_UI, "VERSION"): return True else: return False - ## __CheckRuleSectionFileType() method + ## _CheckRuleSectionFileType() method # # Get whether a section matches a file type # @@ -4053,7 +3964,7 @@ class FdfParser: # @param SectionType The section type to check # @param FileType The file type to check # - def __CheckRuleSectionFileType(self, SectionType, FileType): + def _CheckRuleSectionFileType(self, SectionType, FileType): if SectionType == "COMPAT16": if FileType not in ("COMPAT16", "SEC_COMPAT16"): raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) @@ -4085,53 +3996,53 @@ class FdfParser: if FileType not in (BINARY_FILE_TYPE_PE32, "SEC_GUID"): raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) - ## __GetRuleEncapsulationSection() method + ## _GetRuleEncapsulationSection() method # # Get encapsulation section for Rule # # @param self The object pointer - # @param Rule for whom section is got + # @param theRule for whom section is got # @retval True Successfully find section statement # @retval False Not able to find section statement # - def __GetRuleEncapsulationSection(self, Rule): - if self.__IsKeyword( "COMPRESS"): + def _GetRuleEncapsulationSection(self, theRule): + if self._IsKeyword("COMPRESS"): Type = "PI_STD" - if self.__IsKeyword("PI_STD") or self.__IsKeyword("PI_NONE"): - Type = self.__Token + if self._IsKeyword("PI_STD") or self._IsKeyword("PI_NONE"): + Type = self._Token - if not self.__IsToken("{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - CompressSectionObj = CompressSection.CompressSection() + CompressSectionObj = CompressSection() 
CompressSectionObj.CompType = Type # Recursive sections... while True: - IsEncapsulate = self.__GetRuleEncapsulationSection(CompressSectionObj) - IsLeaf = self.__GetEfiSection(CompressSectionObj) + IsEncapsulate = self._GetRuleEncapsulationSection(CompressSectionObj) + IsLeaf = self._GetEfiSection(CompressSectionObj) if not IsEncapsulate and not IsLeaf: break - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - Rule.SectionList.append(CompressSectionObj) + theRule.SectionList.append(CompressSectionObj) return True - elif self.__IsKeyword( "GUIDED"): + elif self._IsKeyword("GUIDED"): GuidValue = None - if self.__GetNextGuid(): - GuidValue = self.__Token + if self._GetNextGuid(): + GuidValue = self._Token - if self.__IsKeyword( "$(NAMED_GUID)"): - GuidValue = self.__Token + if self._IsKeyword("$(NAMED_GUID)"): + GuidValue = self._Token - AttribDict = self.__GetGuidAttrib() + AttribDict = self._GetGuidAttrib() - if not self.__IsToken("{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - GuidSectionObj = GuidSection.GuidSection() + GuidSectionObj = GuidSection() GuidSectionObj.NameGuid = GuidValue GuidSectionObj.SectionType = "GUIDED" GuidSectionObj.ProcessRequired = AttribDict["PROCESSING_REQUIRED"] @@ -4140,20 +4051,20 @@ class FdfParser: # Efi sections... while True: - IsEncapsulate = self.__GetRuleEncapsulationSection(GuidSectionObj) - IsLeaf = self.__GetEfiSection(GuidSectionObj) + IsEncapsulate = self._GetRuleEncapsulationSection(GuidSectionObj) + IsLeaf = self._GetEfiSection(GuidSectionObj) if not IsEncapsulate and not IsLeaf: break - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - Rule.SectionList.append(GuidSectionObj) + theRule.SectionList.append(GuidSectionObj) return True return False - ## __GetVtf() method + ## _GetVtf() method # # Get VTF section contents and store its data into VTF list of self.Profile # @@ -4161,69 +4072,69 @@ class FdfParser: # @retval True Successfully find a VTF # @retval False Not able to find a VTF # - def __GetVtf(self): - if not self.__GetNextToken(): + def _GetVtf(self): + if not self._GetNextToken(): return False - S = self.__Token.upper() - if S.startswith("[") and not S.startswith("[VTF."): + S = self._Token.upper() + if S.startswith(TAB_SECTION_START) and not S.startswith("[VTF."): self.SectionParser(S) - self.__UndoToken() + self._UndoToken() return False - self.__UndoToken() - if not self.__IsToken("[VTF.", True): + self._UndoToken() + if not self._IsToken("[VTF.", True): FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) raise Warning("expected [VTF.]", self.FileName, self.CurrentLineNumber) - if not self.__SkipToToken("."): + if not self._SkipToToken(TAB_SPLIT): raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) - Arch = self.__SkippedChars.rstrip(".").upper() - if Arch not in ("IA32", "X64", "IPF", "ARM", "AARCH64"): + Arch = self._SkippedChars.rstrip(TAB_SPLIT).upper() + if Arch not in {TAB_ARCH_IA32, 
TAB_ARCH_X64, TAB_ARCH_IPF, TAB_ARCH_ARM, TAB_ARCH_AARCH64}: raise Warning("Unknown Arch '%s'" % Arch, self.FileName, self.CurrentLineNumber) - if not self.__GetNextWord(): + if not self._GetNextWord(): raise Warning("expected VTF name", self.FileName, self.CurrentLineNumber) - Name = self.__Token.upper() + Name = self._Token.upper() - VtfObj = Vtf.Vtf() + VtfObj = Vtf() VtfObj.UiName = Name VtfObj.KeyArch = Arch - if self.__IsToken(","): - if not self.__GetNextWord(): + if self._IsToken(TAB_COMMA_SPLIT): + if not self._GetNextWord(): raise Warning("expected Arch list", self.FileName, self.CurrentLineNumber) - if self.__Token.upper() not in ("IA32", "X64", "IPF", "ARM", "AARCH64"): - raise Warning("Unknown Arch '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - VtfObj.ArchList = self.__Token.upper() + if self._Token.upper() not in (TAB_ARCH_IA32, TAB_ARCH_X64, TAB_ARCH_IPF, TAB_ARCH_ARM, TAB_ARCH_AARCH64): + raise Warning("Unknown Arch '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + VtfObj.ArchList = self._Token.upper() - if not self.__IsToken( "]"): + if not self._IsToken(TAB_SECTION_END): raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) - if self.__IsKeyword("IA32_RST_BIN"): - if not self.__IsToken("="): + if self._IsKeyword("IA32_RST_BIN"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Reset file", self.FileName, self.CurrentLineNumber) - VtfObj.ResetBin = self.__Token - if VtfObj.ResetBin.replace('$(WORKSPACE)', '').find('$') == -1: + VtfObj.ResetBin = self._Token + if VtfObj.ResetBin.replace(TAB_WORKSPACE, '').find('$') == -1: #check for file path ErrorCode, ErrorInfo = PathClass(NormPath(VtfObj.ResetBin), GenFdsGlobalVariable.WorkSpaceDir).Validate() if ErrorCode != 0: EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) - while self.__GetComponentStatement(VtfObj): + while self._GetComponentStatement(VtfObj): pass self.Profile.VtfList.append(VtfObj) return True - ## __GetComponentStatement() method + ## _GetComponentStatement() method # # Get components in VTF # @@ -4232,132 +4143,132 @@ class FdfParser: # @retval True Successfully find a component # @retval False Not able to find a component # - def __GetComponentStatement(self, VtfObj): - if not self.__IsKeyword("COMP_NAME"): + def _GetComponentStatement(self, VtfObj): + if not self._IsKeyword("COMP_NAME"): return False - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextWord(): + if not self._GetNextWord(): raise Warning("expected Component Name", self.FileName, self.CurrentLineNumber) - CompStatementObj = ComponentStatement.ComponentStatement() - CompStatementObj.CompName = self.__Token + CompStatementObj = ComponentStatement() + CompStatementObj.CompName = self._Token - if not self.__IsKeyword("COMP_LOC"): + if not self._IsKeyword("COMP_LOC"): raise Warning("expected COMP_LOC", self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) CompStatementObj.CompLoc = "" - if self.__GetNextWord(): - CompStatementObj.CompLoc = self.__Token - if self.__IsToken('|'): - if not self.__GetNextWord(): + if self._GetNextWord(): + CompStatementObj.CompLoc = self._Token + if self._IsToken(TAB_VALUE_SPLIT): + if not 
self._GetNextWord(): raise Warning("Expected Region Name", self.FileName, self.CurrentLineNumber) - if self.__Token not in ("F", "N", "S"): #, "H", "L", "PH", "PL"): not support - raise Warning("Unknown location type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + if self._Token not in ("F", "N", "S"): #, "H", "L", "PH", "PL"): not support + raise Warning("Unknown location type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) - CompStatementObj.FilePos = self.__Token + CompStatementObj.FilePos = self._Token else: self.CurrentLineNumber += 1 self.CurrentOffsetWithinLine = 0 - if not self.__IsKeyword("COMP_TYPE"): + if not self._IsKeyword("COMP_TYPE"): raise Warning("expected COMP_TYPE", self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Component type", self.FileName, self.CurrentLineNumber) - if self.__Token not in ("FIT", "PAL_B", "PAL_A", "OEM"): - if not self.__Token.startswith("0x") or len(self.__Token) < 3 or len(self.__Token) > 4 or \ - not self.__Token[2] in string.hexdigits or not self.__Token[-1] in string.hexdigits: - raise Warning("Unknown location type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - CompStatementObj.CompType = self.__Token + if self._Token not in ("FIT", "PAL_B", "PAL_A", "OEM"): + if not self._Token.startswith("0x") or len(self._Token) < 3 or len(self._Token) > 4 or \ + not self._Token[2] in hexdigits or not self._Token[-1] in hexdigits: + raise Warning("Unknown location type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + CompStatementObj.CompType = self._Token - if not self.__IsKeyword("COMP_VER"): + if not self._IsKeyword("COMP_VER"): raise Warning("expected COMP_VER", self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Component version", self.FileName, self.CurrentLineNumber) - Pattern = re.compile('-$|[0-9a-fA-F]{1,2}\.[0-9a-fA-F]{1,2}$', re.DOTALL) - if Pattern.match(self.__Token) is None: - raise Warning("Unknown version format '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - CompStatementObj.CompVer = self.__Token + Pattern = compile('-$|[0-9a-fA-F]{1,2}\.[0-9a-fA-F]{1,2}$', DOTALL) + if Pattern.match(self._Token) is None: + raise Warning("Unknown version format '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + CompStatementObj.CompVer = self._Token - if not self.__IsKeyword("COMP_CS"): + if not self._IsKeyword("COMP_CS"): raise Warning("expected COMP_CS", self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Component CS", self.FileName, self.CurrentLineNumber) - if self.__Token not in ("1", "0"): - raise Warning("Unknown Component CS '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - CompStatementObj.CompCs = self.__Token + if self._Token not in ("1", "0"): + raise Warning("Unknown Component CS '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + CompStatementObj.CompCs = self._Token - if not 
self.__IsKeyword("COMP_BIN"): + if not self._IsKeyword("COMP_BIN"): raise Warning("expected COMP_BIN", self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Component file", self.FileName, self.CurrentLineNumber) - CompStatementObj.CompBin = self.__Token - if CompStatementObj.CompBin != '-' and CompStatementObj.CompBin.replace('$(WORKSPACE)', '').find('$') == -1: + CompStatementObj.CompBin = self._Token + if CompStatementObj.CompBin != '-' and CompStatementObj.CompBin.replace(TAB_WORKSPACE, '').find('$') == -1: #check for file path ErrorCode, ErrorInfo = PathClass(NormPath(CompStatementObj.CompBin), GenFdsGlobalVariable.WorkSpaceDir).Validate() if ErrorCode != 0: EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) - if not self.__IsKeyword("COMP_SYM"): + if not self._IsKeyword("COMP_SYM"): raise Warning("expected COMP_SYM", self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Component symbol file", self.FileName, self.CurrentLineNumber) - CompStatementObj.CompSym = self.__Token - if CompStatementObj.CompSym != '-' and CompStatementObj.CompSym.replace('$(WORKSPACE)', '').find('$') == -1: + CompStatementObj.CompSym = self._Token + if CompStatementObj.CompSym != '-' and CompStatementObj.CompSym.replace(TAB_WORKSPACE, '').find('$') == -1: #check for file path ErrorCode, ErrorInfo = PathClass(NormPath(CompStatementObj.CompSym), GenFdsGlobalVariable.WorkSpaceDir).Validate() if ErrorCode != 0: EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) - if not self.__IsKeyword("COMP_SIZE"): + if not self._IsKeyword("COMP_SIZE"): raise Warning("expected COMP_SIZE", self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if self.__IsToken("-"): - CompStatementObj.CompSize = self.__Token - elif self.__GetNextDecimalNumber(): - CompStatementObj.CompSize = self.__Token - elif self.__GetNextHexNumber(): - CompStatementObj.CompSize = self.__Token + if self._IsToken("-"): + CompStatementObj.CompSize = self._Token + elif self._GetNextDecimalNumber(): + CompStatementObj.CompSize = self._Token + elif self._GetNextHexNumber(): + CompStatementObj.CompSize = self._Token else: - raise Warning("Unknown size '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + raise Warning("Unknown size '%s'" % self._Token, self.FileName, self.CurrentLineNumber) VtfObj.ComponentStatementList.append(CompStatementObj) return True - ## __GetOptionRom() method + ## _GetOptionRom() method # # Get OptionROM section contents and store its data into OptionROM list of self.Profile # @@ -4365,38 +4276,37 @@ class FdfParser: # @retval True Successfully find a OptionROM # @retval False Not able to find a OptionROM # - def __GetOptionRom(self): - if not self.__GetNextToken(): + def _GetOptionRom(self): + if not self._GetNextToken(): return False - S = self.__Token.upper() - if S.startswith("[") and not S.startswith("[OPTIONROM."): + S = self._Token.upper() + if S.startswith(TAB_SECTION_START) and not S.startswith("[OPTIONROM."): self.SectionParser(S) - self.__UndoToken() + 
self._UndoToken() return False - self.__UndoToken() - if not self.__IsToken("[OptionRom.", True): - raise Warning("Unknown Keyword '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + self._UndoToken() + if not self._IsToken("[OptionRom.", True): + raise Warning("Unknown Keyword '%s'" % self._Token, self.FileName, self.CurrentLineNumber) - OptRomName = self.__GetUiName() + OptRomName = self._GetUiName() - if not self.__IsToken( "]"): + if not self._IsToken(TAB_SECTION_END): raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) - OptRomObj = OptionRom.OPTIONROM() - OptRomObj.DriverName = OptRomName + OptRomObj = OPTIONROM(OptRomName) self.Profile.OptRomDict[OptRomName] = OptRomObj while True: - isInf = self.__GetOptRomInfStatement(OptRomObj) - isFile = self.__GetOptRomFileStatement(OptRomObj) + isInf = self._GetOptRomInfStatement(OptRomObj) + isFile = self._GetOptRomFileStatement(OptRomObj) if not isInf and not isFile: break return True - ## __GetOptRomInfStatement() method + ## _GetOptRomInfStatement() method # # Get INF statements # @@ -4405,17 +4315,17 @@ class FdfParser: # @retval True Successfully find inf statement # @retval False Not able to find inf statement # - def __GetOptRomInfStatement(self, Obj): - if not self.__IsKeyword( "INF"): + def _GetOptRomInfStatement(self, Obj): + if not self._IsKeyword("INF"): return False - ffsInf = OptRomInfStatement.OptRomInfStatement() - self.__GetInfOptions( ffsInf) + ffsInf = OptRomInfStatement() + self._GetInfOptions(ffsInf) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected INF file path", self.FileName, self.CurrentLineNumber) - ffsInf.InfFileName = self.__Token - if ffsInf.InfFileName.replace('$(WORKSPACE)', '').find('$') == -1: + ffsInf.InfFileName = self._Token + if ffsInf.InfFileName.replace(TAB_WORKSPACE, '').find('$') == -1: #check for file path ErrorCode, ErrorInfo = PathClass(NormPath(ffsInf.InfFileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() if ErrorCode != 0: @@ -4434,71 +4344,71 @@ class FdfParser: self.Profile.InfDict['ArchTBD'].append(ffsInf.InfFileName) - self.__GetOptRomOverrides (ffsInf) + self._GetOptRomOverrides (ffsInf) Obj.FfsList.append(ffsInf) return True - ## __GetOptRomOverrides() method + ## _GetOptRomOverrides() method # # Get overrides for OptROM INF & FILE # # @param self The object pointer # @param FfsInfObj for whom overrides is got # - def __GetOptRomOverrides(self, Obj): - if self.__IsToken('{'): - Overrides = OptRomInfStatement.OverrideAttribs() + def _GetOptRomOverrides(self, Obj): + if self._IsToken('{'): + Overrides = OverrideAttribs() while True: - if self.__IsKeyword( "PCI_VENDOR_ID"): - if not self.__IsToken( "="): + if self._IsKeyword("PCI_VENDOR_ID"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex vendor id", self.FileName, self.CurrentLineNumber) - Overrides.PciVendorId = self.__Token + Overrides.PciVendorId = self._Token continue - if self.__IsKeyword( "PCI_CLASS_CODE"): - if not self.__IsToken( "="): + if self._IsKeyword("PCI_CLASS_CODE"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex class code", self.FileName, self.CurrentLineNumber) - Overrides.PciClassCode = self.__Token + Overrides.PciClassCode = 
self._Token continue - if self.__IsKeyword( "PCI_DEVICE_ID"): - if not self.__IsToken( "="): + if self._IsKeyword("PCI_DEVICE_ID"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex device id", self.FileName, self.CurrentLineNumber) - Overrides.PciDeviceId = self.__Token + Overrides.PciDeviceId = self._Token continue - if self.__IsKeyword( "PCI_REVISION"): - if not self.__IsToken( "="): + if self._IsKeyword("PCI_REVISION"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex revision", self.FileName, self.CurrentLineNumber) - Overrides.PciRevision = self.__Token + Overrides.PciRevision = self._Token continue - if self.__IsKeyword( "PCI_COMPRESS"): - if not self.__IsToken( "="): + if self._IsKeyword("PCI_COMPRESS"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected TRUE/FALSE for compress", self.FileName, self.CurrentLineNumber) - Overrides.NeedCompress = self.__Token.upper() == 'TRUE' + Overrides.NeedCompress = self._Token.upper() == 'TRUE' continue - if self.__IsToken( "}"): + if self._IsToken("}"): break else: EdkLogger.error("FdfParser", FORMAT_INVALID, File=self.FileName, Line=self.CurrentLineNumber) Obj.OverrideAttribs = Overrides - ## __GetOptRomFileStatement() method + ## _GetOptRomFileStatement() method # # Get FILE statements # @@ -4507,33 +4417,33 @@ class FdfParser: # @retval True Successfully find FILE statement # @retval False Not able to find FILE statement # - def __GetOptRomFileStatement(self, Obj): - if not self.__IsKeyword( "FILE"): + def _GetOptRomFileStatement(self, Obj): + if not self._IsKeyword("FILE"): return False - FfsFileObj = OptRomFileStatement.OptRomFileStatement() + FfsFileObj = OptRomFileStatement() - if not self.__IsKeyword("EFI") and not self.__IsKeyword(BINARY_FILE_TYPE_BIN): + if not self._IsKeyword("EFI") and not self._IsKeyword(BINARY_FILE_TYPE_BIN): raise Warning("expected Binary type (EFI/BIN)", self.FileName, self.CurrentLineNumber) - FfsFileObj.FileType = self.__Token + FfsFileObj.FileType = self._Token - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected File path", self.FileName, self.CurrentLineNumber) - FfsFileObj.FileName = self.__Token - if FfsFileObj.FileName.replace('$(WORKSPACE)', '').find('$') == -1: + FfsFileObj.FileName = self._Token + if FfsFileObj.FileName.replace(TAB_WORKSPACE, '').find('$') == -1: #check for file path ErrorCode, ErrorInfo = PathClass(NormPath(FfsFileObj.FileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() if ErrorCode != 0: EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) if FfsFileObj.FileType == 'EFI': - self.__GetOptRomOverrides(FfsFileObj) + self._GetOptRomOverrides(FfsFileObj) Obj.FfsList.append(FfsFileObj) return True - ## __GetCapInFd() method + ## _GetCapInFd() method # # Get Cap list contained in FD # @@ -4541,7 +4451,7 @@ class FdfParser: # @param FdName FD name # @retval CapList List of Capsule in FD # - def __GetCapInFd (self, FdName): + def _GetCapInFd (self, FdName): CapList = [] if FdName.upper() in self.Profile.FdDict: FdObj = self.Profile.FdDict[FdName.upper()] @@ -4554,7 +4464,7 @@ class FdfParser: 
CapList.append(elementRegionData.upper()) return CapList - ## __GetReferencedFdCapTuple() method + ## _GetReferencedFdCapTuple() method # # Get FV and FD list referenced by a capsule image # @@ -4563,22 +4473,22 @@ class FdfParser: # @param RefFdList referenced FD by section # @param RefFvList referenced FV by section # - def __GetReferencedFdCapTuple(self, CapObj, RefFdList = [], RefFvList = []): - for CapsuleDataObj in CapObj.CapsuleDataList : + def _GetReferencedFdCapTuple(self, CapObj, RefFdList = [], RefFvList = []): + for CapsuleDataObj in CapObj.CapsuleDataList: if hasattr(CapsuleDataObj, 'FvName') and CapsuleDataObj.FvName is not None and CapsuleDataObj.FvName.upper() not in RefFvList: RefFvList.append (CapsuleDataObj.FvName.upper()) elif hasattr(CapsuleDataObj, 'FdName') and CapsuleDataObj.FdName is not None and CapsuleDataObj.FdName.upper() not in RefFdList: RefFdList.append (CapsuleDataObj.FdName.upper()) elif CapsuleDataObj.Ffs is not None: - if isinstance(CapsuleDataObj.Ffs, FfsFileStatement.FileStatement): + if isinstance(CapsuleDataObj.Ffs, FileStatement): if CapsuleDataObj.Ffs.FvName is not None and CapsuleDataObj.Ffs.FvName.upper() not in RefFvList: RefFvList.append(CapsuleDataObj.Ffs.FvName.upper()) elif CapsuleDataObj.Ffs.FdName is not None and CapsuleDataObj.Ffs.FdName.upper() not in RefFdList: RefFdList.append(CapsuleDataObj.Ffs.FdName.upper()) else: - self.__GetReferencedFdFvTupleFromSection(CapsuleDataObj.Ffs, RefFdList, RefFvList) + self._GetReferencedFdFvTupleFromSection(CapsuleDataObj.Ffs, RefFdList, RefFvList) - ## __GetFvInFd() method + ## _GetFvInFd() method # # Get FV list contained in FD # @@ -4586,7 +4496,7 @@ class FdfParser: # @param FdName FD name # @retval FvList list of FV in FD # - def __GetFvInFd (self, FdName): + def _GetFvInFd (self, FdName): FvList = [] if FdName.upper() in self.Profile.FdDict: FdObj = self.Profile.FdDict[FdName.upper()] @@ -4599,7 +4509,7 @@ class FdfParser: FvList.append(elementRegionData.upper()) return FvList - ## __GetReferencedFdFvTuple() method + ## _GetReferencedFdFvTuple() method # # Get FD and FV list referenced by a FFS file # @@ -4608,17 +4518,17 @@ class FdfParser: # @param RefFdList referenced FD by section # @param RefFvList referenced FV by section # - def __GetReferencedFdFvTuple(self, FvObj, RefFdList = [], RefFvList = []): + def _GetReferencedFdFvTuple(self, FvObj, RefFdList = [], RefFvList = []): for FfsObj in FvObj.FfsList: - if isinstance(FfsObj, FfsFileStatement.FileStatement): + if isinstance(FfsObj, FileStatement): if FfsObj.FvName is not None and FfsObj.FvName.upper() not in RefFvList: RefFvList.append(FfsObj.FvName.upper()) elif FfsObj.FdName is not None and FfsObj.FdName.upper() not in RefFdList: RefFdList.append(FfsObj.FdName.upper()) else: - self.__GetReferencedFdFvTupleFromSection(FfsObj, RefFdList, RefFvList) + self._GetReferencedFdFvTupleFromSection(FfsObj, RefFdList, RefFvList) - ## __GetReferencedFdFvTupleFromSection() method + ## _GetReferencedFdFvTupleFromSection() method # # Get FD and FV list referenced by a FFS section # @@ -4627,19 +4537,19 @@ class FdfParser: # @param FdList referenced FD by section # @param FvList referenced FV by section # - def __GetReferencedFdFvTupleFromSection(self, FfsFile, FdList = [], FvList = []): + def _GetReferencedFdFvTupleFromSection(self, FfsFile, FdList = [], FvList = []): SectionStack = [] SectionStack.extend(FfsFile.SectionList) while SectionStack != []: SectionObj = SectionStack.pop() - if isinstance(SectionObj, FvImageSection.FvImageSection): + if 
isinstance(SectionObj, FvImageSection): if SectionObj.FvName is not None and SectionObj.FvName.upper() not in FvList: FvList.append(SectionObj.FvName.upper()) if SectionObj.Fv is not None and SectionObj.Fv.UiFvName is not None and SectionObj.Fv.UiFvName.upper() not in FvList: FvList.append(SectionObj.Fv.UiFvName.upper()) - self.__GetReferencedFdFvTuple(SectionObj.Fv, FdList, FvList) + self._GetReferencedFdFvTuple(SectionObj.Fv, FdList, FvList) - if isinstance(SectionObj, CompressSection.CompressSection) or isinstance(SectionObj, GuidSection.GuidSection): + if isinstance(SectionObj, CompressSection) or isinstance(SectionObj, GuidSection): SectionStack.extend(SectionObj.SectionList) ## CycleReferenceCheck() method @@ -4662,7 +4572,7 @@ class FdfParser: FdAnalyzedList = [] Index = 0 - while RefFvStack != [] and Index < MaxLength: + while RefFvStack and Index < MaxLength: Index = Index + 1 FvNameFromStack = RefFvStack.pop() if FvNameFromStack.upper() in self.Profile.FvDict: @@ -4672,14 +4582,14 @@ class FdfParser: RefFdList = [] RefFvList = [] - self.__GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) + self._GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) for RefFdName in RefFdList: if RefFdName in FdAnalyzedList: continue LogStr += "FV %s contains FD %s\n" % (FvNameFromStack, RefFdName) - FvInFdList = self.__GetFvInFd(RefFdName) + FvInFdList = self._GetFvInFd(RefFdName) if FvInFdList != []: for FvNameInFd in FvInFdList: LogStr += "FD %s contains FV %s\n" % (RefFdName, FvNameInFd) @@ -4725,7 +4635,7 @@ class FdfParser: RefFvList = [] RefFdList = [] - self.__GetReferencedFdCapTuple(CapObj, RefFdList, RefFvList) + self._GetReferencedFdCapTuple(CapObj, RefFdList, RefFvList) FvListLength = 0 FdListLength = 0 @@ -4735,7 +4645,7 @@ class FdfParser: continue LogStr += "Capsule %s contains FD %s\n" % (CapNameFromStack, RefFdName) - CapInFdList = self.__GetCapInFd(RefFdName) + CapInFdList = self._GetCapInFd(RefFdName) if CapInFdList != []: for CapNameInFd in CapInFdList: LogStr += "FD %s contains Capsule %s\n" % (RefFdName, CapNameInFd) @@ -4746,7 +4656,7 @@ class FdfParser: EdkLogger.info(LogStr) return True - FvInFdList = self.__GetFvInFd(RefFdName) + FvInFdList = self._GetFvInFd(RefFdName) if FvInFdList != []: for FvNameInFd in FvInFdList: LogStr += "FD %s contains FV %s\n" % (RefFdName, FvNameInFd) @@ -4767,7 +4677,7 @@ class FdfParser: FvObj = self.Profile.FvDict[RefFvName.upper()] else: continue - self.__GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) + self._GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) FvAnalyzedList.append(RefFvName) return False diff --git a/BaseTools/Source/Python/GenFds/Ffs.py b/BaseTools/Source/Python/GenFds/Ffs.py index 598b2d023124..4c1b43572964 100644 --- a/BaseTools/Source/Python/GenFds/Ffs.py +++ b/BaseTools/Source/Python/GenFds/Ffs.py @@ -15,53 +15,41 @@ ## # Import Modules # -from CommonDataClass.FdfClass import FDClassObject from Common.DataType import * -## generate FFS -# -# -class Ffs(FDClassObject): - # mapping between FILE type in FDF and file type for GenFfs - FdfFvFileTypeToFileType = { - SUP_MODULE_SEC : 'EFI_FV_FILETYPE_SECURITY_CORE', - SUP_MODULE_PEI_CORE : 'EFI_FV_FILETYPE_PEI_CORE', - SUP_MODULE_PEIM : 'EFI_FV_FILETYPE_PEIM', - SUP_MODULE_DXE_CORE : 'EFI_FV_FILETYPE_DXE_CORE', - 'FREEFORM' : 'EFI_FV_FILETYPE_FREEFORM', - 'DRIVER' : 'EFI_FV_FILETYPE_DRIVER', - 'APPLICATION' : 'EFI_FV_FILETYPE_APPLICATION', - 'FV_IMAGE' : 'EFI_FV_FILETYPE_FIRMWARE_VOLUME_IMAGE', - 'RAW' : 'EFI_FV_FILETYPE_RAW', - 'PEI_DXE_COMBO' : 
'EFI_FV_FILETYPE_COMBINED_PEIM_DRIVER', - 'SMM' : 'EFI_FV_FILETYPE_SMM', - SUP_MODULE_SMM_CORE : 'EFI_FV_FILETYPE_SMM_CORE', - SUP_MODULE_MM_STANDALONE : 'EFI_FV_FILETYPE_MM_STANDALONE', - SUP_MODULE_MM_CORE_STANDALONE : 'EFI_FV_FILETYPE_MM_CORE_STANDALONE' - } +# mapping between FILE type in FDF and file type for GenFfs +FdfFvFileTypeToFileType = { + SUP_MODULE_SEC : 'EFI_FV_FILETYPE_SECURITY_CORE', + SUP_MODULE_PEI_CORE : 'EFI_FV_FILETYPE_PEI_CORE', + SUP_MODULE_PEIM : 'EFI_FV_FILETYPE_PEIM', + SUP_MODULE_DXE_CORE : 'EFI_FV_FILETYPE_DXE_CORE', + 'FREEFORM' : 'EFI_FV_FILETYPE_FREEFORM', + 'DRIVER' : 'EFI_FV_FILETYPE_DRIVER', + 'APPLICATION' : 'EFI_FV_FILETYPE_APPLICATION', + 'FV_IMAGE' : 'EFI_FV_FILETYPE_FIRMWARE_VOLUME_IMAGE', + 'RAW' : 'EFI_FV_FILETYPE_RAW', + 'PEI_DXE_COMBO' : 'EFI_FV_FILETYPE_COMBINED_PEIM_DRIVER', + 'SMM' : 'EFI_FV_FILETYPE_SMM', + SUP_MODULE_SMM_CORE : 'EFI_FV_FILETYPE_SMM_CORE', + SUP_MODULE_MM_STANDALONE : 'EFI_FV_FILETYPE_MM_STANDALONE', + SUP_MODULE_MM_CORE_STANDALONE : 'EFI_FV_FILETYPE_MM_CORE_STANDALONE' +} - # mapping between section type in FDF and file suffix - SectionSuffix = { - BINARY_FILE_TYPE_PE32 : '.pe32', - BINARY_FILE_TYPE_PIC : '.pic', - BINARY_FILE_TYPE_TE : '.te', - BINARY_FILE_TYPE_DXE_DEPEX : '.dpx', - 'VERSION' : '.ver', - BINARY_FILE_TYPE_UI : '.ui', - 'COMPAT16' : '.com16', - 'RAW' : '.raw', - 'FREEFORM_SUBTYPE_GUID': '.guid', - 'SUBTYPE_GUID' : '.guid', - 'FV_IMAGE' : 'fv.sec', - 'COMPRESS' : '.com', - 'GUIDED' : '.guided', - BINARY_FILE_TYPE_PEI_DEPEX : '.dpx', - BINARY_FILE_TYPE_SMM_DEPEX : '.dpx' - } - - ## The constructor - # - # @param self The object pointer - # - def __init__(self): - FfsClassObject.__init__(self) +# mapping between section type in FDF and file suffix +SectionSuffix = { + BINARY_FILE_TYPE_PE32 : '.pe32', + BINARY_FILE_TYPE_PIC : '.pic', + BINARY_FILE_TYPE_TE : '.te', + BINARY_FILE_TYPE_DXE_DEPEX : '.dpx', + 'VERSION' : '.ver', + BINARY_FILE_TYPE_UI : '.ui', + 'COMPAT16' : '.com16', + 'RAW' : '.raw', + 'FREEFORM_SUBTYPE_GUID': '.guid', + 'SUBTYPE_GUID' : '.guid', + 'FV_IMAGE' : 'fv.sec', + 'COMPRESS' : '.com', + 'GUIDED' : '.guided', + BINARY_FILE_TYPE_PEI_DEPEX : '.dpx', + BINARY_FILE_TYPE_SMM_DEPEX : '.dpx' +} diff --git a/BaseTools/Source/Python/GenFds/FfsFileStatement.py b/BaseTools/Source/Python/GenFds/FfsFileStatement.py index 4c35aac76a00..7479efff04c7 100644 --- a/BaseTools/Source/Python/GenFds/FfsFileStatement.py +++ b/BaseTools/Source/Python/GenFds/FfsFileStatement.py @@ -16,26 +16,22 @@ # Import Modules # from __future__ import absolute_import -from . import Ffs -from . 
import Rule -import Common.LongFilePathOs as os from io import BytesIO -import subprocess - -from .GenFdsGlobalVariable import GenFdsGlobalVariable +from struct import pack from CommonDataClass.FdfClass import FileStatementClassObject from Common import EdkLogger -from Common.BuildToolError import * -from Common.Misc import GuidStructureByteArrayToGuidString +from Common.BuildToolError import GENFDS_ERROR +from Common.Misc import GuidStructureByteArrayToGuidString, SaveFileOnChange +import Common.LongFilePathOs as os from .GuidSection import GuidSection from .FvImageSection import FvImageSection -from Common.Misc import SaveFileOnChange -from struct import * +from .Ffs import FdfFvFileTypeToFileType +from .GenFdsGlobalVariable import GenFdsGlobalVariable ## generate FFS from FILE # # -class FileStatement (FileStatementClassObject) : +class FileStatement (FileStatementClassObject): ## The constructor # # @param self The object pointer @@ -60,7 +56,7 @@ class FileStatement (FileStatementClassObject) : # def GenFfs(self, Dict = {}, FvChildAddr=[], FvParentAddr=None, IsMakefile=False, FvName=None): - if self.NameGuid is not None and self.NameGuid.startswith('PCD('): + if self.NameGuid and self.NameGuid.startswith('PCD('): PcdValue = GenFdsGlobalVariable.GetPcdValue(self.NameGuid) if len(PcdValue) == 0: EdkLogger.error("GenFds", GENFDS_ERROR, '%s NOT defined.' \ @@ -82,7 +78,7 @@ class FileStatement (FileStatementClassObject) : Dict.update(self.DefineVarDict) SectionAlignments = None - if self.FvName is not None : + if self.FvName: Buffer = BytesIO('') if self.FvName.upper() not in GenFdsGlobalVariable.FdfParser.Profile.FvDict: EdkLogger.error("GenFds", GENFDS_ERROR, "FV (%s) is NOT described in FDF file!" % (self.FvName)) @@ -90,14 +86,14 @@ class FileStatement (FileStatementClassObject) : FileName = Fv.AddToBuffer(Buffer) SectionFiles = [FileName] - elif self.FdName is not None: + elif self.FdName: if self.FdName.upper() not in GenFdsGlobalVariable.FdfParser.Profile.FdDict: EdkLogger.error("GenFds", GENFDS_ERROR, "FD (%s) is NOT described in FDF file!" 
% (self.FdName)) Fd = GenFdsGlobalVariable.FdfParser.Profile.FdDict.get(self.FdName.upper()) FileName = Fd.GenFd() SectionFiles = [FileName] - elif self.FileName is not None: + elif self.FileName: if hasattr(self, 'FvFileType') and self.FvFileType == 'RAW': if isinstance(self.FileName, list) and isinstance(self.SubAlignment, list) and len(self.FileName) == len(self.SubAlignment): FileContent = '' @@ -111,7 +107,7 @@ class FileStatement (FileStatementClassObject) : Content = f.read() f.close() AlignValue = 1 - if self.SubAlignment[Index] is not None: + if self.SubAlignment[Index]: AlignValue = GenFdsGlobalVariable.GetAlignment(self.SubAlignment[Index]) if AlignValue > MaxAlignValue: MaxAlignIndex = Index @@ -143,7 +139,7 @@ class FileStatement (FileStatementClassObject) : SectionFiles = [] Index = 0 SectionAlignments = [] - for section in self.SectionList : + for section in self.SectionList: Index = Index + 1 SecIndex = '%d' %Index # process the inside FvImage from FvSection or GuidSection @@ -152,7 +148,7 @@ class FileStatement (FileStatementClassObject) : section.FvAddr = FvChildAddr.pop(0) elif isinstance(section, GuidSection): section.FvAddr = FvChildAddr - if FvParentAddr is not None and isinstance(section, GuidSection): + if FvParentAddr and isinstance(section, GuidSection): section.FvParentAddr = FvParentAddr if self.KeepReloc == False: @@ -168,7 +164,7 @@ class FileStatement (FileStatementClassObject) : # FfsFileOutput = os.path.join(OutputDir, self.NameGuid + '.ffs') GenFdsGlobalVariable.GenerateFfs(FfsFileOutput, SectionFiles, - Ffs.Ffs.FdfFvFileTypeToFileType.get(self.FvFileType), + FdfFvFileTypeToFileType.get(self.FvFileType), self.NameGuid, Fixed=self.Fixed, CheckSum=self.CheckSum, @@ -177,6 +173,3 @@ class FileStatement (FileStatementClassObject) : ) return FfsFileOutput - - - diff --git a/BaseTools/Source/Python/GenFds/FfsInfStatement.py b/BaseTools/Source/Python/GenFds/FfsInfStatement.py index 6149bc81b4ef..c1073c96e9aa 100644 --- a/BaseTools/Source/Python/GenFds/FfsInfStatement.py +++ b/BaseTools/Source/Python/GenFds/FfsInfStatement.py @@ -22,7 +22,7 @@ import Common.LongFilePathOs as os from io import BytesIO from struct import * from .GenFdsGlobalVariable import GenFdsGlobalVariable -from . import Ffs +from .Ffs import SectionSuffix,FdfFvFileTypeToFileType import subprocess import sys from . 
import Section @@ -760,7 +760,7 @@ class FfsInfStatement(FfsInfStatementClassObject): SecNum = '%d' %Index GenSecOutputFile= self.__ExtendMacro__(Rule.NameGuid) + \ - Ffs.Ffs.SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum + SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum Index = Index + 1 OutputFile = os.path.join(self.OutputPath, GenSecOutputFile) File = GenFdsGlobalVariable.MacroExtend(File, Dict, self.CurrentArch) @@ -803,7 +803,7 @@ class FfsInfStatement(FfsInfStatementClassObject): else: SecNum = '%d' %Index GenSecOutputFile= self.__ExtendMacro__(Rule.NameGuid) + \ - Ffs.Ffs.SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum + SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum OutputFile = os.path.join(self.OutputPath, GenSecOutputFile) GenSecInputFile = GenFdsGlobalVariable.MacroExtend(GenSecInputFile, Dict, self.CurrentArch) @@ -882,7 +882,7 @@ class FfsInfStatement(FfsInfStatementClassObject): self.ModuleGuid = RegistryGuidStr GenFdsGlobalVariable.GenerateFfs(FfsOutput, InputSection, - Ffs.Ffs.FdfFvFileTypeToFileType[Rule.FvFileType], + FdfFvFileTypeToFileType[Rule.FvFileType], self.ModuleGuid, Fixed=Rule.Fixed, CheckSum=Rule.CheckSum, Align=Rule.Alignment, SectionAlign=SectionAlignments, @@ -1055,7 +1055,7 @@ class FfsInfStatement(FfsInfStatementClassObject): FfsOutput = os.path.join( self.OutputPath, self.ModuleGuid + '.ffs') GenFdsGlobalVariable.GenerateFfs(FfsOutput, InputFile, - Ffs.Ffs.FdfFvFileTypeToFileType[Rule.FvFileType], + FdfFvFileTypeToFileType[Rule.FvFileType], self.ModuleGuid, Fixed=Rule.Fixed, CheckSum=Rule.CheckSum, Align=Rule.Alignment, SectionAlign=Alignments, diff --git a/BaseTools/Source/Python/GenFds/Fv.py b/BaseTools/Source/Python/GenFds/Fv.py index c66fc383278d..30525bd11fcc 100644 --- a/BaseTools/Source/Python/GenFds/Fv.py +++ b/BaseTools/Source/Python/GenFds/Fv.py @@ -1,4 +1,3 @@ -from __future__ import absolute_import ## @file # process FV generation # @@ -16,16 +15,13 @@ from __future__ import absolute_import ## # Import Modules # +from __future__ import absolute_import import Common.LongFilePathOs as os import subprocess from io import BytesIO from struct import * - -from . import Ffs -from . import AprioriSection from . 
import FfsFileStatement from .GenFdsGlobalVariable import GenFdsGlobalVariable -from CommonDataClass.FdfClass import FvClassObject from Common.Misc import SaveFileOnChange, PackGUID from Common.LongFilePathSupport import CopyLongFilePath from Common.LongFilePathSupport import OpenLongFilePath as open @@ -36,13 +32,25 @@ FV_UI_EXT_ENTY_GUID = 'A67DF1FA-8DE8-4E98-AF09-4BDF2EFFBC7C' ## generate FV # # -class FV (FvClassObject): +class FV (object): ## The constructor # # @param self The object pointer # def __init__(self): - FvClassObject.__init__(self) + self.UiFvName = None + self.CreateFileName = None + self.BlockSizeList = [] + self.DefineVarDict = {} + self.SetVarDict = {} + self.FvAlignment = None + self.FvAttributeDict = {} + self.FvNameGuid = None + self.FvNameString = None + self.AprioriSectionList = [] + self.FfsList = [] + self.BsBaseAddress = None + self.RtBaseAddress = None self.FvInfFile = None self.FvAddressFile = None self.BaseAddress = None @@ -68,7 +76,7 @@ class FV (FvClassObject): # @param MacroDict macro value pair # @retval string Generated FV file path # - def AddToBuffer (self, Buffer, BaseAddress=None, BlockSize= None, BlockNum=None, ErasePloarity='1', VtfDict=None, MacroDict = {}, Flag=False) : + def AddToBuffer (self, Buffer, BaseAddress=None, BlockSize= None, BlockNum=None, ErasePloarity='1', VtfDict=None, MacroDict = {}, Flag=False): if BaseAddress is None and self.UiFvName.upper() + 'fv' in GenFdsGlobalVariable.ImageBinDict: return GenFdsGlobalVariable.ImageBinDict[self.UiFvName.upper() + 'fv'] @@ -96,7 +104,7 @@ class FV (FvClassObject): if self.FvBaseAddress is not None: BaseAddress = self.FvBaseAddress if not Flag: - self.__InitializeInf__(BaseAddress, BlockSize, BlockNum, ErasePloarity, VtfDict) + self._InitializeInf(BaseAddress, BlockSize, BlockNum, ErasePloarity, VtfDict) # # First Process the Apriori section # @@ -114,7 +122,7 @@ class FV (FvClassObject): TAB_LINE_BREAK) # Process Modules in FfsList - for FfsFile in self.FfsList : + for FfsFile in self.FfsList: if Flag: if isinstance(FfsFile, FfsFileStatement.FileStatement): continue @@ -177,7 +185,7 @@ class FV (FvClassObject): if FvChildAddr != []: # Update Ffs again - for FfsFile in self.FfsList : + for FfsFile in self.FfsList: FileName = FfsFile.GenFfs(MacroDict, FvChildAddr, BaseAddress, IsMakefile=Flag, FvName=self.UiFvName) if GenFdsGlobalVariable.LargeFileInFvFlags[-1]: @@ -252,7 +260,7 @@ class FV (FvClassObject): return True return False - ## __InitializeInf__() + ## _InitializeInf() # # Initilize the inf file to create FV # @@ -263,7 +271,7 @@ class FV (FvClassObject): # @param ErasePolarity Flash erase polarity # @param VtfDict VTF objects # - def __InitializeInf__ (self, BaseAddress = None, BlockSize= None, BlockNum = None, ErasePloarity='1', VtfDict=None) : + def _InitializeInf (self, BaseAddress = None, BlockSize= None, BlockNum = None, ErasePloarity='1', VtfDict=None): # # Create FV inf file # @@ -275,7 +283,7 @@ class FV (FvClassObject): # Add [Options] # self.FvInfFile.writelines("[options]" + TAB_LINE_BREAK) - if BaseAddress is not None : + if BaseAddress is not None: self.FvInfFile.writelines("EFI_BASE_ADDRESS = " + \ BaseAddress + \ TAB_LINE_BREAK) @@ -294,7 +302,7 @@ class FV (FvClassObject): #set default block size is 1 self.FvInfFile.writelines("EFI_BLOCK_SIZE = 0x1" + TAB_LINE_BREAK) - for BlockSize in self.BlockSizeList : + for BlockSize in self.BlockSizeList: if BlockSize[0] is not None: self.FvInfFile.writelines("EFI_BLOCK_SIZE = " + \ '0x%X' %BlockSize[0] + \ @@ -320,9 +328,9 @@ 
class FV (FvClassObject): ' %s' %ErasePloarity + \ TAB_LINE_BREAK) if not (self.FvAttributeDict is None): - for FvAttribute in self.FvAttributeDict.keys() : + for FvAttribute in self.FvAttributeDict.keys(): if FvAttribute == "FvUsedSizeEnable": - if self.FvAttributeDict[FvAttribute].upper() in ('TRUE', '1') : + if self.FvAttributeDict[FvAttribute].upper() in ('TRUE', '1'): self.UsedSizeEnable = True continue self.FvInfFile.writelines("EFI_" + \ @@ -365,8 +373,8 @@ class FV (FvClassObject): Guid = FV_UI_EXT_ENTY_GUID.split('-') # # Layout: - # EFI_FIRMWARE_VOLUME_EXT_ENTRY : size 4 - # GUID : size 16 + # EFI_FIRMWARE_VOLUME_EXT_ENTRY: size 4 + # GUID: size 16 # FV UI name # Buffer += (pack('HH', (FvUiLen + 16 + 4), 0x0002) @@ -422,7 +430,6 @@ class FV (FvClassObject): FvExtHeaderFileName + \ TAB_LINE_BREAK) - # # Add [Files] # diff --git a/BaseTools/Source/Python/GenFds/FvImageSection.py b/BaseTools/Source/Python/GenFds/FvImageSection.py index 04556fc87099..d6e1f3315b7e 100644 --- a/BaseTools/Source/Python/GenFds/FvImageSection.py +++ b/BaseTools/Source/Python/GenFds/FvImageSection.py @@ -18,7 +18,7 @@ from __future__ import absolute_import from . import Section from io import BytesIO -from .Ffs import Ffs +from .Ffs import SectionSuffix import subprocess from .GenFdsGlobalVariable import GenFdsGlobalVariable import Common.LongFilePathOs as os @@ -76,7 +76,7 @@ class FvImageSection(FvImageSectionClassObject): if FvAlignmentValue > MaxFvAlignment: MaxFvAlignment = FvAlignmentValue - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + Ffs.SectionSuffix.get("FV_IMAGE")) + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + SectionSuffix.get("FV_IMAGE")) GenFdsGlobalVariable.GenerateSection(OutputFile, [FvFileName], 'EFI_SECTION_FIRMWARE_VOLUME_IMAGE', IsMakefile=IsMakefile) OutputFileList.append(OutputFile) @@ -146,7 +146,7 @@ class FvImageSection(FvImageSectionClassObject): # # Prepare the parameter of GenSection # - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get("FV_IMAGE")) + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get("FV_IMAGE")) GenFdsGlobalVariable.GenerateSection(OutputFile, [FvFileName], 'EFI_SECTION_FIRMWARE_VOLUME_IMAGE', IsMakefile=IsMakefile) OutputFileList.append(OutputFile) diff --git a/BaseTools/Source/Python/GenFds/GenFds.py b/BaseTools/Source/Python/GenFds/GenFds.py index 9dec9c5eb57d..0c8091b79841 100644 --- a/BaseTools/Source/Python/GenFds/GenFds.py +++ b/BaseTools/Source/Python/GenFds/GenFds.py @@ -17,34 +17,30 @@ # from __future__ import print_function from __future__ import absolute_import +from re import compile from optparse import OptionParser -import sys -import Common.LongFilePathOs as os -import linecache -from . import FdfParser -import Common.BuildToolError as BuildToolError -from .GenFdsGlobalVariable import GenFdsGlobalVariable -from Workspace.WorkspaceDatabase import WorkspaceDatabase -from Workspace.BuildClassObject import PcdClassObject -from . 
import RuleComplexFile -from .EfiSection import EfiSection +from sys import exit +from glob import glob +from struct import unpack +from linecache import getlines from io import BytesIO -import Common.TargetTxtClassObject as TargetTxtClassObject -import Common.ToolDefClassObject as ToolDefClassObject + +import Common.LongFilePathOs as os +from Common.TargetTxtClassObject import TargetTxtClassObject from Common.DataType import * import Common.GlobalData as GlobalData from Common import EdkLogger -from Common.StringUtils import * -from Common.Misc import DirCache, PathClass -from Common.Misc import SaveFileOnChange -from Common.Misc import ClearDuplicatedInf -from Common.Misc import GuidStructureStringToGuidString +from Common.StringUtils import NormPath +from Common.Misc import DirCache, PathClass, GuidStructureStringToGuidString +from Common.Misc import SaveFileOnChange, ClearDuplicatedInf from Common.BuildVersion import gBUILD_VERSION from Common.MultipleWorkspace import MultipleWorkspace as mws -from . import FfsFileStatement -import glob -from struct import unpack -from Common.GlobalData import gGuidPattern +from Common.BuildToolError import FatalError, GENFDS_ERROR, CODE_ERROR, FORMAT_INVALID, RESOURCE_NOT_AVAILABLE, FILE_NOT_FOUND, OPTION_MISSING, FORMAT_NOT_SUPPORTED,OPTION_VALUE_INVALID +from Workspace.WorkspaceDatabase import WorkspaceDatabase + +from .FdfParser import FdfParser, Warning +from .GenFdsGlobalVariable import GenFdsGlobalVariable +from .FfsFileStatement import FileStatement ## Version and Copyright versionNumber = "1.0" + ' ' + gBUILD_VERSION @@ -71,22 +67,22 @@ def main(): EdkLogger.Initialize() try: - if Options.verbose is not None: + if Options.verbose: EdkLogger.SetLevel(EdkLogger.VERBOSE) GenFdsGlobalVariable.VerboseMode = True - if Options.FixedAddress is not None: + if Options.FixedAddress: GenFdsGlobalVariable.FixedLoadAddress = True - if Options.quiet is not None: + if Options.quiet: EdkLogger.SetLevel(EdkLogger.QUIET) - if Options.debug is not None: + if Options.debug: EdkLogger.SetLevel(Options.debug + 1) GenFdsGlobalVariable.DebugLevel = Options.debug else: EdkLogger.SetLevel(EdkLogger.INFO) - if (Options.Workspace is None): + if not Options.Workspace: EdkLogger.error("GenFds", OPTION_MISSING, "WORKSPACE not defined", ExtraData="Please use '-w' switch to pass it or set the WORKSPACE environment variable.") elif not os.path.exists(Options.Workspace): @@ -97,7 +93,7 @@ def main(): GenFdsGlobalVariable.WorkSpaceDir = Workspace if 'EDK_SOURCE' in os.environ: GenFdsGlobalVariable.EdkSourceDir = os.path.normcase(os.environ['EDK_SOURCE']) - if (Options.debug): + if Options.debug: GenFdsGlobalVariable.VerboseLogger("Using Workspace:" + Workspace) if Options.GenfdsMultiThread: GenFdsGlobalVariable.EnableGenfdsMultiThread = True @@ -107,7 +103,7 @@ def main(): PackagesPath = os.getenv("PACKAGES_PATH") mws.setWs(GenFdsGlobalVariable.WorkSpaceDir, PackagesPath) - if (Options.filename): + if Options.filename: FdfFilename = Options.filename FdfFilename = GenFdsGlobalVariable.ReplaceWorkspaceMacro(FdfFilename) @@ -123,13 +119,13 @@ def main(): else: EdkLogger.error("GenFds", OPTION_MISSING, "Missing FDF filename") - if (Options.BuildTarget): + if Options.BuildTarget: GenFdsGlobalVariable.TargetName = Options.BuildTarget - if (Options.ToolChain): + if Options.ToolChain: GenFdsGlobalVariable.ToolChainTag = Options.ToolChain - if (Options.activePlatform): + if Options.activePlatform: ActivePlatform = Options.activePlatform ActivePlatform = 
GenFdsGlobalVariable.ReplaceWorkspaceMacro(ActivePlatform) @@ -139,15 +135,15 @@ def main(): if not os.path.isabs (ActivePlatform): ActivePlatform = mws.join(GenFdsGlobalVariable.WorkSpaceDir, ActivePlatform) - if not os.path.exists(ActivePlatform) : + if not os.path.exists(ActivePlatform): EdkLogger.error("GenFds", FILE_NOT_FOUND, "ActivePlatform doesn't exist!") else: EdkLogger.error("GenFds", OPTION_MISSING, "Missing active platform") - GlobalData.BuildOptionPcd = Options.OptionPcd if Options.OptionPcd else {} + GlobalData.BuildOptionPcd = Options.OptionPcd if Options.OptionPcd else {} GenFdsGlobalVariable.ActivePlatform = PathClass(NormPath(ActivePlatform)) - if (Options.ConfDirectory): + if Options.ConfDirectory: # Get alternate Conf location, if it is absolute, then just use the absolute directory name ConfDirectoryPath = os.path.normpath(Options.ConfDirectory) if ConfDirectoryPath.startswith('"'): @@ -169,7 +165,7 @@ def main(): GlobalData.gConfDirectory = GenFdsGlobalVariable.ConfDir BuildConfigurationFile = os.path.normpath(os.path.join(ConfDirectoryPath, "target.txt")) if os.path.isfile(BuildConfigurationFile) == True: - TargetTxt = TargetTxtClassObject.TargetTxtClassObject() + TargetTxt = TargetTxtClassObject() TargetTxt.LoadTargetTxtFile(BuildConfigurationFile) # if no build target given in command line, get it from target.txt if not GenFdsGlobalVariable.TargetName: @@ -237,10 +233,9 @@ def main(): GlobalData.gAllFiles = DirCache(Workspace) GlobalData.gWorkspace = Workspace - if (Options.archList) : + if Options.archList: ArchList = Options.archList.split(',') else: -# EdkLogger.error("GenFds", OPTION_MISSING, "Missing build ARCH") ArchList = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, TAB_COMMON, Options.BuildTarget, Options.ToolChain].SupArchList TargetArchList = set(BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, TAB_COMMON, Options.BuildTarget, Options.ToolChain].SupArchList) & set(ArchList) @@ -249,9 +244,11 @@ def main(): for Arch in ArchList: GenFdsGlobalVariable.OutputDirFromDscDict[Arch] = NormPath(BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, Options.BuildTarget, Options.ToolChain].OutputDirectory) - GenFdsGlobalVariable.PlatformName = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, Options.BuildTarget, Options.ToolChain].PlatformName - if (Options.outputDir): + # assign platform name based on last entry in ArchList + GenFdsGlobalVariable.PlatformName = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, ArchList[-1], Options.BuildTarget, Options.ToolChain].PlatformName + + if Options.outputDir: OutputDirFromCommandLine = GenFdsGlobalVariable.ReplaceWorkspaceMacro(Options.outputDir) if not os.path.isabs (OutputDirFromCommandLine): OutputDirFromCommandLine = os.path.join(GenFdsGlobalVariable.WorkSpaceDir, OutputDirFromCommandLine) @@ -274,27 +271,27 @@ def main(): GenFdsGlobalVariable.OutputDirDict[Key] = OutputDir """ Parse Fdf file, has to place after build Workspace as FDF may contain macros from DSC file """ - FdfParserObj = FdfParser.FdfParser(FdfFilename) + FdfParserObj = FdfParser(FdfFilename) FdfParserObj.ParseFile() if FdfParserObj.CycleReferenceCheck(): EdkLogger.error("GenFds", FORMAT_NOT_SUPPORTED, "Cycle Reference Detected in FDF file") - if (Options.uiFdName) : + if Options.uiFdName: if Options.uiFdName.upper() in FdfParserObj.Profile.FdDict: GenFds.OnlyGenerateThisFd = Options.uiFdName else: EdkLogger.error("GenFds", OPTION_VALUE_INVALID, "No such an FD in FDF 
file: %s" % Options.uiFdName) - if (Options.uiFvName) : + if Options.uiFvName: if Options.uiFvName.upper() in FdfParserObj.Profile.FvDict: GenFds.OnlyGenerateThisFv = Options.uiFvName else: EdkLogger.error("GenFds", OPTION_VALUE_INVALID, "No such an FV in FDF file: %s" % Options.uiFvName) - if (Options.uiCapName) : + if Options.uiCapName: if Options.uiCapName.upper() in FdfParserObj.Profile.CapsuleDict: GenFds.OnlyGenerateThisCap = Options.uiCapName else: @@ -302,7 +299,7 @@ def main(): "No such a Capsule in FDF file: %s" % Options.uiCapName) GenFdsGlobalVariable.WorkSpace = BuildWorkSpace - if ArchList is not None: + if ArchList: GenFdsGlobalVariable.ArchList = ArchList # Dsc Build Data will handle Pcd Settings from CommandLine. @@ -313,10 +310,8 @@ def main(): # Record the FV Region info that may specific in the FD if FdfParserObj.Profile.FvDict and FdfParserObj.Profile.FdDict: - for Fv in FdfParserObj.Profile.FvDict: - FvObj = FdfParserObj.Profile.FvDict[Fv] - for Fd in FdfParserObj.Profile.FdDict: - FdObj = FdfParserObj.Profile.FdDict[Fd] + for FvObj in FdfParserObj.Profile.FvDict.values(): + for FdObj in FdfParserObj.Profile.FdDict.values(): for RegionObj in FdObj.RegionList: if RegionObj.RegionType != BINARY_FILE_TYPE_FV: continue @@ -338,7 +333,7 @@ def main(): """Display FV space info.""" GenFds.DisplayFvSpaceInfo(FdfParserObj) - except FdfParser.Warning as X: + except Warning as X: EdkLogger.error(X.ToolName, FORMAT_INVALID, File=X.FileName, Line=X.LineNumber, ExtraData=X.Message, RaiseError=False) ReturnCode = FORMAT_INVALID except FatalError as X: @@ -374,7 +369,6 @@ def SingleCheckCallback(option, opt_str, value, parser): # Using standard Python module optparse to parse command line option of this tool. # # @retval Opt A optparse.Values object containing the parsed options -# @retval Args Target of build command # def myOptionParser(): usage = "%prog [options] -f input_file -a arch_list -b build_target -p active_platform -t tool_chain_tag -D \"MacroName [= MacroValue]\"" @@ -404,7 +398,7 @@ def myOptionParser(): Parser.add_option("--pcd", action="append", dest="OptionPcd", help="Set PCD value by command line. Format: \"PcdName=Value\" ") Parser.add_option("--genfds-multi-thread", action="store_true", dest="GenfdsMultiThread", default=False, help="Enable GenFds multi thread to generate ffs file.") - (Options, args) = Parser.parse_args() + Options, _ = Parser.parse_args() return Options ## The class implementing the EDK2 flash image generation process @@ -414,7 +408,7 @@ def myOptionParser(): # 2. Call methods of Fd class to generate FD # 3. 
Call methods of Fv class to generate FV that not belong to FD # -class GenFds : +class GenFds(object): FdfParsef = None OnlyGenerateThisFd = None OnlyGenerateThisFv = None @@ -423,12 +417,13 @@ class GenFds : ## GenFd() # # @param OutputDir Output directory - # @param FdfParser FDF contents parser + # @param FdfParserObject FDF contents parser # @param Workspace The directory of workspace # @param ArchList The Arch list of platform # - def GenFd (OutputDir, FdfParser, WorkSpace, ArchList): - GenFdsGlobalVariable.SetDir ('', FdfParser, WorkSpace, ArchList) + @staticmethod + def GenFd (OutputDir, FdfParserObject, WorkSpace, ArchList): + GenFdsGlobalVariable.SetDir ('', FdfParserObject, WorkSpace, ArchList) GenFdsGlobalVariable.VerboseLogger(" Generate all Fd images and their required FV and Capsule images!") if GenFds.OnlyGenerateThisCap is not None and GenFds.OnlyGenerateThisCap.upper() in GenFdsGlobalVariable.FdfParser.Profile.CapsuleDict: @@ -470,9 +465,10 @@ class GenFds : GenFdsGlobalVariable.VerboseLogger("\n Generate all Option ROM!") for OptRomObj in GenFdsGlobalVariable.FdfParser.Profile.OptRomDict.values(): OptRomObj.AddToBuffer(None) + @staticmethod - def GenFfsMakefile(OutputDir, FdfParser, WorkSpace, ArchList, GlobalData): - GenFdsGlobalVariable.SetEnv(FdfParser, WorkSpace, ArchList, GlobalData) + def GenFfsMakefile(OutputDir, FdfParserObject, WorkSpace, ArchList, GlobalData): + GenFdsGlobalVariable.SetEnv(FdfParserObject, WorkSpace, ArchList, GlobalData) for FdObj in GenFdsGlobalVariable.FdfParser.Profile.FdDict.values(): FdObj.GenFd(Flag=True) @@ -490,6 +486,7 @@ class GenFds : # @param FvObj Whose block size to get # @retval int Block size value # + @staticmethod def GetFvBlockSize(FvObj): DefaultBlockSize = 0x1 FdObj = None @@ -524,16 +521,17 @@ class GenFds : # @param FvObj Whose block size to get # @retval None # - def DisplayFvSpaceInfo(FdfParser): + @staticmethod + def DisplayFvSpaceInfo(FdfParserObject): FvSpaceInfoList = [] MaxFvNameLength = 0 - for FvName in FdfParser.Profile.FvDict: + for FvName in FdfParserObject.Profile.FvDict: if len(FvName) > MaxFvNameLength: MaxFvNameLength = len(FvName) FvSpaceInfoFileName = os.path.join(GenFdsGlobalVariable.FvDir, FvName.upper() + '.Fv.map') if os.path.exists(FvSpaceInfoFileName): - FileLinesList = linecache.getlines(FvSpaceInfoFileName) + FileLinesList = getlines(FvSpaceInfoFileName) TotalFound = False Total = '' UsedFound = False @@ -575,6 +573,7 @@ class GenFds : # @param DscFile modules from dsc file will be preprocessed # @retval None # + @staticmethod def PreprocessImage(BuildDb, DscFile): PcdDict = BuildDb.BuildObject[DscFile, TAB_COMMON, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].Pcds PcdValue = '' @@ -600,6 +599,7 @@ class GenFds : ModuleObj = BuildDb.BuildObject[Key, TAB_COMMON, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] print(ModuleObj.BaseName + ' ' + ModuleObj.ModuleType) + @staticmethod def GenerateGuidXRefFile(BuildDb, ArchList, FdfParserObj): GuidXRefFileName = os.path.join(GenFdsGlobalVariable.FvDir, "Guid.xref") GuidXRefFile = BytesIO('') @@ -607,7 +607,6 @@ class GenFds : GuidDict = {} ModuleList = [] FileGuidList = [] - GuidPattern = gGuidPattern for Arch in ArchList: PlatformDataBase = BuildDb.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] PkgList = GenFdsGlobalVariable.WorkSpace.GetPackageList(GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, 
GenFdsGlobalVariable.ToolChainTag) @@ -626,20 +625,16 @@ class GenFds : continue else: ModuleList.append(Module) - GuidMatch = GuidPattern.match(ModuleFile.BaseName) - if GuidMatch is not None: + if GlobalData.gGuidPattern.match(ModuleFile.BaseName): GuidXRefFile.write("%s %s\n" % (ModuleFile.BaseName, Module.BaseName)) else: GuidXRefFile.write("%s %s\n" % (Module.Guid, Module.BaseName)) - for key, item in Module.Protocols.items(): - GuidDict[key] = item - for key, item in Module.Guids.items(): - GuidDict[key] = item - for key, item in Module.Ppis.items(): - GuidDict[key] = item + GuidDict.update(Module.Protocols) + GuidDict.update(Module.Guids) + GuidDict.update(Module.Ppis) for FvName in FdfParserObj.Profile.FvDict: for FfsObj in FdfParserObj.Profile.FvDict[FvName].FfsList: - if not isinstance(FfsObj, FfsFileStatement.FileStatement): + if not isinstance(FfsObj, FileStatement): InfPath = PathClass(NormPath(mws.join(GenFdsGlobalVariable.WorkSpaceDir, FfsObj.InfFileName))) FdfModule = BuildDb.BuildObject[InfPath, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] if FdfModule in ModuleList: @@ -647,12 +642,9 @@ class GenFds : else: ModuleList.append(FdfModule) GuidXRefFile.write("%s %s\n" % (FdfModule.Guid, FdfModule.BaseName)) - for key, item in FdfModule.Protocols.items(): - GuidDict[key] = item - for key, item in FdfModule.Guids.items(): - GuidDict[key] = item - for key, item in FdfModule.Ppis.items(): - GuidDict[key] = item + GuidDict.update(FdfModule.Protocols) + GuidDict.update(FdfModule.Guids) + GuidDict.update(FdfModule.Ppis) else: FileStatementGuid = FfsObj.NameGuid if FileStatementGuid in FileGuidList: @@ -661,13 +653,13 @@ class GenFds : FileGuidList.append(FileStatementGuid) Name = [] FfsPath = os.path.join(GenFdsGlobalVariable.FvDir, 'Ffs') - FfsPath = glob.glob(os.path.join(FfsPath, FileStatementGuid) + '*') + FfsPath = glob(os.path.join(FfsPath, FileStatementGuid) + '*') if not FfsPath: continue if not os.path.exists(FfsPath[0]): continue MatchDict = {} - ReFileEnds = re.compile('\S+(.ui)$|\S+(fv.sec.txt)$|\S+(.pe32.txt)$|\S+(.te.txt)$|\S+(.pic.txt)$|\S+(.raw.txt)$|\S+(.ffs.txt)$') + ReFileEnds = compile('\S+(.ui)$|\S+(fv.sec.txt)$|\S+(.pe32.txt)$|\S+(.te.txt)$|\S+(.pic.txt)$|\S+(.raw.txt)$|\S+(.ffs.txt)$') FileList = os.listdir(FfsPath[0]) for File in FileList: Match = ReFileEnds.search(File) @@ -724,16 +716,10 @@ class GenFds : os.remove(GuidXRefFileName) GuidXRefFile.close() - ##Define GenFd as static function - GenFd = staticmethod(GenFd) - GetFvBlockSize = staticmethod(GetFvBlockSize) - DisplayFvSpaceInfo = staticmethod(DisplayFvSpaceInfo) - PreprocessImage = staticmethod(PreprocessImage) - GenerateGuidXRefFile = staticmethod(GenerateGuidXRefFile) - if __name__ == '__main__': r = main() ## 0-127 is a safe return range, and 1 is a standard default error - if r < 0 or r > 127: r = 1 - sys.exit(r) + if r < 0 or r > 127: + r = 1 + exit(r) diff --git a/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py b/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py index 14578a92a9b1..e867693d7c88 100644 --- a/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py +++ b/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py @@ -16,13 +16,15 @@ # Import Modules # from __future__ import print_function +from __future__ import absolute_import + import Common.LongFilePathOs as os -import sys -import subprocess -import struct -import array +from sys import stdout +from subprocess import PIPE,Popen +from struct import Struct +from array import array -from 
Common.BuildToolError import * +from Common.BuildToolError import COMMAND_FAILURE,GENFDS_ERROR from Common import EdkLogger from Common.Misc import SaveFileOnChange @@ -89,7 +91,7 @@ class GenFdsGlobalVariable: EFI_FIRMWARE_FILE_SYSTEM3_GUID = '5473C07A-3DCB-4dca-BD6F-1E9689E7349A' LARGE_FILE_SIZE = 0x1000000 - SectionHeader = struct.Struct("3B 1B") + SectionHeader = Struct("3B 1B") # FvName, FdName, CapName in FDF, Image file name ImageBinDict = {} @@ -97,7 +99,7 @@ class GenFdsGlobalVariable: ## LoadBuildRule # @staticmethod - def __LoadBuildRule(): + def _LoadBuildRule(): if GenFdsGlobalVariable.__BuildRuleDatabase: return GenFdsGlobalVariable.__BuildRuleDatabase BuildConfigurationFile = os.path.normpath(os.path.join(GenFdsGlobalVariable.ConfDir, "target.txt")) @@ -139,44 +141,43 @@ class GenFdsGlobalVariable: if not Arch in GenFdsGlobalVariable.OutputDirDict: return {} - BuildRuleDatabase = GenFdsGlobalVariable.__LoadBuildRule() + BuildRuleDatabase = GenFdsGlobalVariable._LoadBuildRule() if not BuildRuleDatabase: return {} PathClassObj = PathClass(Inf.MetaFile.File, GenFdsGlobalVariable.WorkSpaceDir) - Macro = {} - Macro["WORKSPACE" ] = GenFdsGlobalVariable.WorkSpaceDir - Macro["MODULE_NAME" ] = Inf.BaseName - Macro["MODULE_GUID" ] = Inf.Guid - Macro["MODULE_VERSION" ] = Inf.Version - Macro["MODULE_TYPE" ] = Inf.ModuleType - Macro["MODULE_FILE" ] = str(PathClassObj) - Macro["MODULE_FILE_BASE_NAME" ] = PathClassObj.BaseName - Macro["MODULE_RELATIVE_DIR" ] = PathClassObj.SubDir - Macro["MODULE_DIR" ] = PathClassObj.SubDir - - Macro["BASE_NAME" ] = Inf.BaseName - - Macro["ARCH" ] = Arch - Macro["TOOLCHAIN" ] = GenFdsGlobalVariable.ToolChainTag - Macro["TOOLCHAIN_TAG" ] = GenFdsGlobalVariable.ToolChainTag - Macro["TOOL_CHAIN_TAG" ] = GenFdsGlobalVariable.ToolChainTag - Macro["TARGET" ] = GenFdsGlobalVariable.TargetName - - Macro["BUILD_DIR" ] = GenFdsGlobalVariable.OutputDirDict[Arch] - Macro["BIN_DIR" ] = os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) - Macro["LIB_DIR" ] = os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) BuildDir = os.path.join( GenFdsGlobalVariable.OutputDirDict[Arch], Arch, PathClassObj.SubDir, PathClassObj.BaseName ) - Macro["MODULE_BUILD_DIR" ] = BuildDir - Macro["OUTPUT_DIR" ] = os.path.join(BuildDir, "OUTPUT") - Macro["DEBUG_DIR" ] = os.path.join(BuildDir, "DEBUG") - + BinDir = os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) + Macro = { + "WORKSPACE":GenFdsGlobalVariable.WorkSpaceDir, + "MODULE_NAME":Inf.BaseName, + "MODULE_GUID":Inf.Guid, + "MODULE_VERSION":Inf.Version, + "MODULE_TYPE":Inf.ModuleType, + "MODULE_FILE":str(PathClassObj), + "MODULE_FILE_BASE_NAME":PathClassObj.BaseName, + "MODULE_RELATIVE_DIR":PathClassObj.SubDir, + "MODULE_DIR":PathClassObj.SubDir, + "BASE_NAME":Inf.BaseName, + "ARCH":Arch, + "TOOLCHAIN":GenFdsGlobalVariable.ToolChainTag, + "TOOLCHAIN_TAG":GenFdsGlobalVariable.ToolChainTag, + "TOOL_CHAIN_TAG":GenFdsGlobalVariable.ToolChainTag, + "TARGET":GenFdsGlobalVariable.TargetName, + "BUILD_DIR":GenFdsGlobalVariable.OutputDirDict[Arch], + "BIN_DIR":BinDir, + "LIB_DIR":BinDir, + "MODULE_BUILD_DIR":BuildDir, + "OUTPUT_DIR":os.path.join(BuildDir, "OUTPUT"), + "DEBUG_DIR":os.path.join(BuildDir, "DEBUG") + } + BuildRules = {} for Type in BuildRuleDatabase.FileTypeList: #first try getting build rule by BuildRuleFamily @@ -216,12 +217,12 @@ class GenFdsGlobalVariable: if not Inf.IsBinaryModule: for File in Inf.Sources: - if File.TagName in ("", "*", GenFdsGlobalVariable.ToolChainTag) and \ - File.ToolChainFamily 
in ("", "*", GenFdsGlobalVariable.ToolChainFamily): + if File.TagName in {"", "*", GenFdsGlobalVariable.ToolChainTag} and \ + File.ToolChainFamily in {"", "*", GenFdsGlobalVariable.ToolChainFamily}: FileList.append((File, DataType.TAB_UNKNOWN_FILE)) for File in Inf.Binaries: - if File.Target in [DataType.TAB_COMMON, '*', GenFdsGlobalVariable.TargetName]: + if File.Target in {DataType.TAB_COMMON, '*', GenFdsGlobalVariable.TargetName}: FileList.append((File, File.Type)) for File, FileType in FileList: @@ -233,7 +234,7 @@ class GenFdsGlobalVariable: Source = SourceList[Index] Index = Index + 1 - if File.IsBinary and File == Source and Inf.Binaries is not None and File in Inf.Binaries: + if File.IsBinary and File == Source and Inf.Binaries and File in Inf.Binaries: # Skip all files that are not binary libraries if not Inf.LibraryClass: continue @@ -287,19 +288,18 @@ class GenFdsGlobalVariable: # @param Workspace The directory of workspace # @param ArchList The Arch list of platform # + @staticmethod def SetDir (OutputDir, FdfParser, WorkSpace, ArchList): - GenFdsGlobalVariable.VerboseLogger("GenFdsGlobalVariable.OutputDir :%s" % OutputDir) -# GenFdsGlobalVariable.OutputDirDict = OutputDir + GenFdsGlobalVariable.VerboseLogger("GenFdsGlobalVariable.OutputDir:%s" % OutputDir) GenFdsGlobalVariable.FdfParser = FdfParser GenFdsGlobalVariable.WorkSpace = WorkSpace GenFdsGlobalVariable.FvDir = os.path.join(GenFdsGlobalVariable.OutputDirDict[ArchList[0]], DataType.TAB_FV_DIRECTORY) - if not os.path.exists(GenFdsGlobalVariable.FvDir) : + if not os.path.exists(GenFdsGlobalVariable.FvDir): os.makedirs(GenFdsGlobalVariable.FvDir) GenFdsGlobalVariable.FfsDir = os.path.join(GenFdsGlobalVariable.FvDir, 'Ffs') - if not os.path.exists(GenFdsGlobalVariable.FfsDir) : + if not os.path.exists(GenFdsGlobalVariable.FfsDir): os.makedirs(GenFdsGlobalVariable.FfsDir) - T_CHAR_LF = '\n' # # Create FV Address inf file # @@ -308,7 +308,7 @@ class GenFdsGlobalVariable: # # Add [Options] # - FvAddressFile.writelines("[options]" + T_CHAR_LF) + FvAddressFile.writelines("[options]" + DataType.TAB_LINE_BREAK) BsAddress = '0' for Arch in ArchList: if GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].BsBaseAddress: @@ -317,19 +317,22 @@ class GenFdsGlobalVariable: FvAddressFile.writelines("EFI_BOOT_DRIVER_BASE_ADDRESS = " + \ BsAddress + \ - T_CHAR_LF) + DataType.TAB_LINE_BREAK) RtAddress = '0' - for Arch in ArchList: - if GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].RtBaseAddress: - RtAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].RtBaseAddress + for Arch in reversed(ArchList): + temp = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].RtBaseAddress + if temp: + RtAddress = temp + break FvAddressFile.writelines("EFI_RUNTIME_DRIVER_BASE_ADDRESS = " + \ RtAddress + \ - T_CHAR_LF) + DataType.TAB_LINE_BREAK) FvAddressFile.close() + @staticmethod def SetEnv(FdfParser, WorkSpace, ArchList, GlobalData): GenFdsGlobalVariable.ModuleFile = WorkSpace.ModuleFile GenFdsGlobalVariable.FdfParser = FdfParser @@ -360,7 +363,6 @@ class GenFdsGlobalVariable: if not os.path.exists(GenFdsGlobalVariable.FfsDir): 
os.makedirs(GenFdsGlobalVariable.FfsDir) - T_CHAR_LF = '\n' # # Create FV Address inf file # @@ -369,7 +371,7 @@ class GenFdsGlobalVariable: # # Add [Options] # - FvAddressFile.writelines("[options]" + T_CHAR_LF) + FvAddressFile.writelines("[options]" + DataType.TAB_LINE_BREAK) BsAddress = '0' for Arch in ArchList: BsAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, @@ -380,20 +382,20 @@ class GenFdsGlobalVariable: FvAddressFile.writelines("EFI_BOOT_DRIVER_BASE_ADDRESS = " + \ BsAddress + \ - T_CHAR_LF) + DataType.TAB_LINE_BREAK) RtAddress = '0' - for Arch in ArchList: - if GenFdsGlobalVariable.WorkSpace.BuildObject[ + for Arch in reversed(ArchList): + temp = GenFdsGlobalVariable.WorkSpace.BuildObject[ GenFdsGlobalVariable.ActivePlatform, Arch, GlobalData.gGlobalDefines['TARGET'], - GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress: - RtAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[ - GenFdsGlobalVariable.ActivePlatform, Arch, GlobalData.gGlobalDefines['TARGET'], - GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress + GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress + if temp: + RtAddress = temp + break FvAddressFile.writelines("EFI_RUNTIME_DRIVER_BASE_ADDRESS = " + \ RtAddress + \ - T_CHAR_LF) + DataType.TAB_LINE_BREAK) FvAddressFile.close() @@ -401,6 +403,7 @@ class GenFdsGlobalVariable: # # @param String String that may contain macro # + @staticmethod def ReplaceWorkspaceMacro(String): String = mws.handleWsMacro(String) Str = String.replace('$(WORKSPACE)', GenFdsGlobalVariable.WorkSpaceDir) @@ -424,7 +427,7 @@ class GenFdsGlobalVariable: if not os.path.exists(Output): return True # always update "Output" if no "Input" given - if Input is None or len(Input) == 0: + if not Input: return True # if fdf file is changed after the 'Output" is generated, update the 'Output' @@ -449,9 +452,9 @@ class GenFdsGlobalVariable: Cmd += ("-s", Type) if CompressionType: Cmd += ("-c", CompressionType) - if Guid is not None: + if Guid: Cmd += ("-g", Guid) - if DummyFile is not None: + if DummyFile: Cmd += ("--dummy", DummyFile) if GuidHdrLen: Cmd += ("-l", GuidHdrLen) @@ -473,7 +476,7 @@ class GenFdsGlobalVariable: if ' '.join(Cmd).strip() not in GenFdsGlobalVariable.SecCmdList: GenFdsGlobalVariable.SecCmdList.append(' '.join(Cmd).strip()) else: - SectionData = array.array('B', [0, 0, 0, 0]) + SectionData = array('B', [0, 0, 0, 0]) SectionData.fromstring(Ui.encode("utf_16_le")) SectionData.append(0) SectionData.append(0) @@ -571,9 +574,9 @@ class GenFdsGlobalVariable: if BaseAddress: Cmd += ("-r", BaseAddress) - if ForceRebase == False: + if not ForceRebase: Cmd += ("-F", "FALSE") - elif ForceRebase == True: + else: Cmd += ("-F", "TRUE") if Capsule: @@ -634,7 +637,7 @@ class GenFdsGlobalVariable: Revision=None, DeviceId=None, VendorId=None, IsMakefile=False): InputList = [] Cmd = ["EfiRom"] - if len(EfiInput) > 0: + if EfiInput: if Compress: Cmd.append("-ec") @@ -645,7 +648,7 @@ class GenFdsGlobalVariable: Cmd.append(EfiFile) InputList.append (EfiFile) - if len(BinaryInput) > 0: + if BinaryInput: Cmd.append("-b") for BinFile in BinaryInput: Cmd.append(BinFile) @@ -656,13 +659,13 @@ class GenFdsGlobalVariable: return GenFdsGlobalVariable.DebugLogger(EdkLogger.DEBUG_5, "%s needs update because of newer %s" % (Output, InputList)) - if ClassCode is not None: + if ClassCode: Cmd += ("-l", ClassCode) - if Revision is not None: + if Revision: Cmd += ("-r", Revision) - if DeviceId is not None: + if DeviceId: Cmd += ("-i", DeviceId) - if 
VendorId is not None: + if VendorId: Cmd += ("-f", VendorId) Cmd += ("-o", Output) @@ -688,6 +691,7 @@ class GenFdsGlobalVariable: else: GenFdsGlobalVariable.CallExternalTool(Cmd, "Failed to call " + ToolPath, returnValue) + @staticmethod def CallExternalTool (cmd, errorMess, returnValue=[]): if type(cmd) not in (tuple, list): @@ -701,19 +705,19 @@ class GenFdsGlobalVariable: cmd += ('-v',) GenFdsGlobalVariable.InfLogger (cmd) else: - sys.stdout.write ('#') - sys.stdout.flush() + stdout.write ('#') + stdout.flush() GenFdsGlobalVariable.SharpCounter = GenFdsGlobalVariable.SharpCounter + 1 if GenFdsGlobalVariable.SharpCounter % GenFdsGlobalVariable.SharpNumberPerLine == 0: - sys.stdout.write('\n') + stdout.write('\n') try: - PopenObject = subprocess.Popen(' '.join(cmd), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) + PopenObject = Popen(' '.join(cmd), stdout=PIPE, stderr=PIPE, shell=True) except Exception as X: EdkLogger.error("GenFds", COMMAND_FAILURE, ExtraData="%s: %s" % (str(X), cmd[0])) (out, error) = PopenObject.communicate() - while PopenObject.returncode is None : + while PopenObject.returncode is None: PopenObject.wait() if returnValue != [] and returnValue[0] != 0: #get command return value @@ -727,45 +731,52 @@ class GenFdsGlobalVariable: print("###", cmd) EdkLogger.error("GenFds", COMMAND_FAILURE, errorMess) + @staticmethod def VerboseLogger (msg): EdkLogger.verbose(msg) + @staticmethod def InfLogger (msg): EdkLogger.info(msg) + @staticmethod def ErrorLogger (msg, File=None, Line=None, ExtraData=None): EdkLogger.error('GenFds', GENFDS_ERROR, msg, File, Line, ExtraData) + @staticmethod def DebugLogger (Level, msg): EdkLogger.debug(Level, msg) - ## ReplaceWorkspaceMacro() + ## MacroExtend() # # @param Str String that may contain macro # @param MacroDict Dictionary that contains macro value pair # + @staticmethod def MacroExtend (Str, MacroDict={}, Arch=DataType.TAB_COMMON): - if Str is None : + if Str is None: return None - Dict = {'$(WORKSPACE)' : GenFdsGlobalVariable.WorkSpaceDir, - '$(EDK_SOURCE)' : GenFdsGlobalVariable.EdkSourceDir, + Dict = {'$(WORKSPACE)': GenFdsGlobalVariable.WorkSpaceDir, + '$(EDK_SOURCE)': GenFdsGlobalVariable.EdkSourceDir, # '$(OUTPUT_DIRECTORY)': GenFdsGlobalVariable.OutputDirFromDsc, - '$(TARGET)' : GenFdsGlobalVariable.TargetName, - '$(TOOL_CHAIN_TAG)' : GenFdsGlobalVariable.ToolChainTag, - '$(SPACE)' : ' ' + '$(TARGET)': GenFdsGlobalVariable.TargetName, + '$(TOOL_CHAIN_TAG)': GenFdsGlobalVariable.ToolChainTag, + '$(SPACE)': ' ' } - OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[GenFdsGlobalVariable.ArchList[0]] + if Arch != DataType.TAB_COMMON and Arch in GenFdsGlobalVariable.ArchList: OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[Arch] + else: + OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[GenFdsGlobalVariable.ArchList[0]] Dict['$(OUTPUT_DIRECTORY)'] = OutputDir - if MacroDict is not None and len (MacroDict) != 0: + if MacroDict: Dict.update(MacroDict) for key in Dict: - if Str.find(key) >= 0 : + if Str.find(key) >= 0: Str = Str.replace (key, Dict[key]) if Str.find('$(ARCH)') >= 0: @@ -780,14 +791,14 @@ class GenFdsGlobalVariable: # # @param PcdPattern pattern that labels a PCD. 
# + @staticmethod def GetPcdValue (PcdPattern): - if PcdPattern is None : + if PcdPattern is None: return None PcdPair = PcdPattern.lstrip('PCD(').rstrip(')').strip().split('.') TokenSpace = PcdPair[0] TokenCName = PcdPair[1] - PcdValue = '' for Arch in GenFdsGlobalVariable.ArchList: Platform = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] PcdDict = Platform.Pcds @@ -799,8 +810,7 @@ class GenFdsGlobalVariable: if PcdObj.DatumType != DataType.TAB_VOID: EdkLogger.error("GenFds", GENFDS_ERROR, "%s is not VOID* datum type." % PcdPattern) - PcdValue = PcdObj.DefaultValue - return PcdValue + return PcdObj.DefaultValue for Package in GenFdsGlobalVariable.WorkSpace.GetPackageList(GenFdsGlobalVariable.ActivePlatform, Arch, @@ -815,21 +825,9 @@ class GenFdsGlobalVariable: if PcdObj.DatumType != DataType.TAB_VOID: EdkLogger.error("GenFds", GENFDS_ERROR, "%s is not VOID* datum type." % PcdPattern) - PcdValue = PcdObj.DefaultValue - return PcdValue + return PcdObj.DefaultValue - return PcdValue - - SetDir = staticmethod(SetDir) - SetEnv = staticmethod(SetEnv) - ReplaceWorkspaceMacro = staticmethod(ReplaceWorkspaceMacro) - CallExternalTool = staticmethod(CallExternalTool) - VerboseLogger = staticmethod(VerboseLogger) - InfLogger = staticmethod(InfLogger) - ErrorLogger = staticmethod(ErrorLogger) - DebugLogger = staticmethod(DebugLogger) - MacroExtend = staticmethod (MacroExtend) - GetPcdValue = staticmethod(GetPcdValue) + return '' ## FindExtendTool() # @@ -863,7 +861,7 @@ def FindExtendTool(KeyStringList, CurrentArchList, NameGuid): ToolOptionKey = None KeyList = None for ToolDef in ToolDefinition.items(): - if NameGuid.lower() == ToolDef[1].lower() : + if NameGuid.lower() == ToolDef[1].lower(): KeyList = ToolDef[0].split('_') Key = KeyList[0] + \ '_' + \ diff --git a/BaseTools/Source/Python/GenFds/GuidSection.py b/BaseTools/Source/Python/GenFds/GuidSection.py index fd660737f421..25609e6f2ae1 100644 --- a/BaseTools/Source/Python/GenFds/GuidSection.py +++ b/BaseTools/Source/Python/GenFds/GuidSection.py @@ -19,7 +19,7 @@ from __future__ import absolute_import from . 
import Section import subprocess -from .Ffs import Ffs +from .Ffs import SectionSuffix import Common.LongFilePathOs as os from .GenFdsGlobalVariable import GenFdsGlobalVariable from .GenFdsGlobalVariable import FindExtendTool @@ -126,7 +126,7 @@ class GuidSection(GuidSectionClassObject) : ModuleName + \ SUP_MODULE_SEC + \ SecNum + \ - Ffs.SectionSuffix['GUIDED'] + SectionSuffix['GUIDED'] OutputFile = os.path.normpath(OutputFile) ExternalTool = None diff --git a/BaseTools/Source/Python/GenFds/OptionRom.py b/BaseTools/Source/Python/GenFds/OptionRom.py index bb7071fa7fca..9d7ad25ecc43 100644 --- a/BaseTools/Source/Python/GenFds/OptionRom.py +++ b/BaseTools/Source/Python/GenFds/OptionRom.py @@ -26,8 +26,6 @@ from Common.Misc import SaveFileOnChange from Common import EdkLogger from Common.BuildToolError import * -T_CHAR_LF = '\n' - ## # # @@ -36,9 +34,9 @@ class OPTIONROM (OptionRomClassObject): # # @param self The object pointer # - def __init__(self): + def __init__(self, Name = ""): OptionRomClassObject.__init__(self) - + self.DriverName = Name ## AddToBuffer() # diff --git a/BaseTools/Source/Python/GenFds/Region.py b/BaseTools/Source/Python/GenFds/Region.py index 5242b74c9e70..9aeaa45d4566 100644 --- a/BaseTools/Source/Python/GenFds/Region.py +++ b/BaseTools/Source/Python/GenFds/Region.py @@ -20,7 +20,6 @@ from struct import * from .GenFdsGlobalVariable import GenFdsGlobalVariable from io import BytesIO import string -from CommonDataClass.FdfClass import RegionClassObject import Common.LongFilePathOs as os from stat import * from Common import EdkLogger @@ -32,15 +31,20 @@ from Common.DataType import BINARY_FILE_TYPE_FV ## generate Region # # -class Region(RegionClassObject): +class Region(object): ## The constructor # # @param self The object pointer # def __init__(self): - RegionClassObject.__init__(self) - + self.Offset = None # The begin position of the Region + self.Size = None # The Size of the Region + self.PcdOffset = None + self.PcdSize = None + self.SetVarDict = {} + self.RegionType = None + self.RegionDataList = [] ## PadBuffer() # diff --git a/BaseTools/Source/Python/GenFds/UiSection.py b/BaseTools/Source/Python/GenFds/UiSection.py index 24f2f3ca938d..c1dff188be96 100644 --- a/BaseTools/Source/Python/GenFds/UiSection.py +++ b/BaseTools/Source/Python/GenFds/UiSection.py @@ -17,7 +17,7 @@ # from __future__ import absolute_import from . import Section -from .Ffs import Ffs +from .Ffs import SectionSuffix import subprocess import Common.LongFilePathOs as os from .GenFdsGlobalVariable import GenFdsGlobalVariable @@ -59,7 +59,7 @@ class UiSection (UiSectionClassObject): self.StringData = FfsInf.__ExtendMacro__(self.StringData) self.FileName = FfsInf.__ExtendMacro__(self.FileName) - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get(BINARY_FILE_TYPE_UI)) + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get(BINARY_FILE_TYPE_UI)) if self.StringData is not None : NameString = self.StringData diff --git a/BaseTools/Source/Python/GenFds/VerSection.py b/BaseTools/Source/Python/GenFds/VerSection.py index db71fe9653d4..051743f10c21 100644 --- a/BaseTools/Source/Python/GenFds/VerSection.py +++ b/BaseTools/Source/Python/GenFds/VerSection.py @@ -1,7 +1,7 @@ ## @file # process Version section generation # -# Copyright (c) 2007 - 2017, Intel Corporation. All rights reserved.<BR> +# Copyright (c) 2007 - 2018, Intel Corporation. 
All rights reserved.<BR> # # This program and the accompanying materials # are licensed and made available under the terms and conditions of the BSD License @@ -16,10 +16,8 @@ # Import Modules # from __future__ import absolute_import -from .Ffs import Ffs -from . import Section +from .Ffs import SectionSuffix import Common.LongFilePathOs as os -import subprocess from .GenFdsGlobalVariable import GenFdsGlobalVariable from CommonDataClass.FdfClass import VerSectionClassObject from Common.LongFilePathSupport import OpenLongFilePath as open @@ -54,29 +52,27 @@ class VerSection (VerSectionClassObject): # # Prepare the parameter of GenSection # - if FfsInf is not None: + if FfsInf: self.Alignment = FfsInf.__ExtendMacro__(self.Alignment) self.BuildNum = FfsInf.__ExtendMacro__(self.BuildNum) self.StringData = FfsInf.__ExtendMacro__(self.StringData) self.FileName = FfsInf.__ExtendMacro__(self.FileName) OutputFile = os.path.join(OutputPath, - ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get('VERSION')) + ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get('VERSION')) OutputFile = os.path.normpath(OutputFile) # Get String Data StringData = '' - if self.StringData is not None: + if self.StringData: StringData = self.StringData - elif self.FileName is not None: + elif self.FileName: FileNameStr = GenFdsGlobalVariable.ReplaceWorkspaceMacro(self.FileName) FileNameStr = GenFdsGlobalVariable.MacroExtend(FileNameStr, Dict) FileObj = open(FileNameStr, 'r') StringData = FileObj.read() StringData = '"' + StringData + '"' FileObj.close() - else: - StringData = '' GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', Ver=StringData, BuildNumber=self.BuildNum, IsMakefile=IsMakefile) OutputFileList = [] diff --git a/BaseTools/Source/Python/GenFds/Vtf.py b/BaseTools/Source/Python/GenFds/Vtf.py index 5cb2d4acfb1c..9dcd48b2d812 100644 --- a/BaseTools/Source/Python/GenFds/Vtf.py +++ b/BaseTools/Source/Python/GenFds/Vtf.py @@ -18,21 +18,24 @@ from __future__ import absolute_import from .GenFdsGlobalVariable import GenFdsGlobalVariable import Common.LongFilePathOs as os -from CommonDataClass.FdfClass import VtfClassObject from Common.LongFilePathSupport import OpenLongFilePath as open -T_CHAR_LF = '\n' +from Common.DataType import TAB_LINE_BREAK ## generate VTF # # -class Vtf (VtfClassObject): +class Vtf (object): ## The constructor # # @param self The object pointer # def __init__(self): - VtfClassObject.__init__(self) + self.KeyArch = None + self.ArchList = None + self.UiName = None + self.ResetBin = None + self.ComponentStatementList = [] ## GenVtf() method # @@ -44,7 +47,6 @@ class Vtf (VtfClassObject): # def GenVtf(self, FdAddressDict) : self.GenBsfInf() - OutputFile = os.path.join(GenFdsGlobalVariable.FvDir, self.UiName + '.Vtf') BaseAddArg = self.GetBaseAddressArg(FdAddressDict) OutputArg, VtfRawDict = self.GenOutputArg() @@ -69,57 +71,57 @@ class Vtf (VtfClassObject): FvList = self.GetFvList() self.BsfInfName = os.path.join(GenFdsGlobalVariable.FvDir, self.UiName + '.inf') BsfInf = open(self.BsfInfName, 'w+') - if self.ResetBin is not None: - BsfInf.writelines ("[OPTIONS]" + T_CHAR_LF) + if self.ResetBin: + BsfInf.writelines ("[OPTIONS]" + TAB_LINE_BREAK) BsfInf.writelines ("IA32_RST_BIN" + \ " = " + \ GenFdsGlobalVariable.MacroExtend(GenFdsGlobalVariable.ReplaceWorkspaceMacro(self.ResetBin)) + \ - T_CHAR_LF) - BsfInf.writelines (T_CHAR_LF) + TAB_LINE_BREAK) + BsfInf.writelines (TAB_LINE_BREAK) - BsfInf.writelines ("[COMPONENTS]" + T_CHAR_LF) + BsfInf.writelines 
("[COMPONENTS]" + TAB_LINE_BREAK) for ComponentObj in self.ComponentStatementList : BsfInf.writelines ("COMP_NAME" + \ " = " + \ ComponentObj.CompName + \ - T_CHAR_LF) + TAB_LINE_BREAK) if ComponentObj.CompLoc.upper() == 'NONE': BsfInf.writelines ("COMP_LOC" + \ " = " + \ 'N' + \ - T_CHAR_LF) + TAB_LINE_BREAK) - elif ComponentObj.FilePos is not None: + elif ComponentObj.FilePos: BsfInf.writelines ("COMP_LOC" + \ " = " + \ ComponentObj.FilePos + \ - T_CHAR_LF) + TAB_LINE_BREAK) else: Index = FvList.index(ComponentObj.CompLoc.upper()) if Index == 0: BsfInf.writelines ("COMP_LOC" + \ " = " + \ 'F' + \ - T_CHAR_LF) + TAB_LINE_BREAK) elif Index == 1: BsfInf.writelines ("COMP_LOC" + \ " = " + \ 'S' + \ - T_CHAR_LF) + TAB_LINE_BREAK) BsfInf.writelines ("COMP_TYPE" + \ " = " + \ ComponentObj.CompType + \ - T_CHAR_LF) + TAB_LINE_BREAK) BsfInf.writelines ("COMP_VER" + \ " = " + \ ComponentObj.CompVer + \ - T_CHAR_LF) + TAB_LINE_BREAK) BsfInf.writelines ("COMP_CS" + \ " = " + \ ComponentObj.CompCs + \ - T_CHAR_LF) + TAB_LINE_BREAK) BinPath = ComponentObj.CompBin if BinPath != '-': @@ -127,7 +129,7 @@ class Vtf (VtfClassObject): BsfInf.writelines ("COMP_BIN" + \ " = " + \ BinPath + \ - T_CHAR_LF) + TAB_LINE_BREAK) SymPath = ComponentObj.CompSym if SymPath != '-': @@ -135,12 +137,12 @@ class Vtf (VtfClassObject): BsfInf.writelines ("COMP_SYM" + \ " = " + \ SymPath + \ - T_CHAR_LF) + TAB_LINE_BREAK) BsfInf.writelines ("COMP_SIZE" + \ " = " + \ ComponentObj.CompSize + \ - T_CHAR_LF) - BsfInf.writelines (T_CHAR_LF) + TAB_LINE_BREAK) + BsfInf.writelines (TAB_LINE_BREAK) BsfInf.close() diff --git a/BaseTools/Source/Python/build/BuildReport.py b/BaseTools/Source/Python/build/BuildReport.py index a66adfb16746..95863fefe151 100644 --- a/BaseTools/Source/Python/build/BuildReport.py +++ b/BaseTools/Source/Python/build/BuildReport.py @@ -46,6 +46,7 @@ from Common.StringUtils import NormPath from Common.DataType import * import collections from Common.Expression import * +from GenFds.AprioriSection import DXE_APRIORI_GUID, PEI_APRIORI_GUID ## Pattern to extract contents in EDK DXS files gDxsDependencyPattern = re.compile(r"DEPENDENCY_START(.+)DEPENDENCY_END", re.DOTALL) @@ -1853,8 +1854,8 @@ class FdRegionReport(object): # # Add PEI and DXE a priori files GUIDs defined in PI specification. # - self._GuidsDb["1B45CC0A-156A-428A-AF62-49864DA0E6E6"] = "PEI Apriori" - self._GuidsDb["FC510EE7-FFDC-11D4-BD41-0080C73C8881"] = "DXE Apriori" + self._GuidsDb[PEI_APRIORI_GUID] = "PEI Apriori" + self._GuidsDb[DXE_APRIORI_GUID] = "DXE Apriori" # # Add ACPI table storage file # -- 2.16.2.windows.1 ^ permalink raw reply related [flat|nested] 14+ messages in thread
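Two mechanical changes recur throughout the hunks above: private helpers lose their double leading underscore (for example __GetFvInFd becomes _GetFvInFd), and the trailing "Name = staticmethod(Name)" assignments at the bottom of the GenFds and GenFdsGlobalVariable classes are replaced by @staticmethod decorators on the methods themselves. The short standalone sketch below is illustrative only (the Parser class and its methods are invented for this note and are not BaseTools code); it shows the Python behavior behind both choices: a double underscore triggers name mangling, so the attribute is stored as _ClassName__Name, while a single underscore is purely a PEP 8 naming convention, and the decorator form is equivalent to the old trailing staticmethod() assignment.

    class Parser(object):
        """Illustrative only; not a BaseTools class."""

        def _GetItems(self):
            # Single leading underscore: a PEP 8 hint that the helper is
            # internal; the attribute name is stored unchanged.
            return []

        def __GetItemsMangled(self):
            # Double leading underscore: Python rewrites the attribute to
            # _Parser__GetItemsMangled, which complicates subclassing and
            # debugging without adding real protection.
            return []

        @staticmethod
        def Describe():
            # Decorator form; equivalent to defining Describe() and then
            # writing "Describe = staticmethod(Describe)" at the end of
            # the class body, as the pre-patch code did.
            return "parser helper"

    p = Parser()
    print(sorted(name for name in dir(p) if "GetItems" in name))
    # prints: ['_GetItems', '_Parser__GetItemsMangled']
    print(Parser.Describe())

Keeping the declaration and the staticmethod wrapping in one place is also why the patch can simply delete the "##Define GenFd as static function" block near the end of GenFds.py instead of maintaining the list of static methods in two locations.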
* Re: [Patch v3 1/6] BaseTools/GenFds: cleanup GenFds 2018-10-23 17:29 ` [Patch v3 1/6] " Jaben Carsey @ 2018-10-25 13:45 ` Feng, Bob C 2018-10-26 4:53 ` Gary Lin 1 sibling, 0 replies; 14+ messages in thread From: Feng, Bob C @ 2018-10-25 13:45 UTC (permalink / raw) To: Carsey, Jaben, edk2-devel@lists.01.org; +Cc: Zhu, Yonghong, Gao, Liming Reviewed-by: Bob Feng <bob.c.feng@intel.com> -----Original Message----- From: Carsey, Jaben Sent: Wednesday, October 24, 2018 1:29 AM To: edk2-devel@lists.01.org Cc: Zhu, Yonghong <yonghong.zhu@intel.com>; Gao, Liming <liming.gao@intel.com>; Feng, Bob C <bob.c.feng@intel.com> Subject: [Patch v3 1/6] BaseTools/GenFds: cleanup GenFds 1) remove wildcard imports and use explicit imports 2) refactor to use shared variables from Common/DataType 3) rename to not shadow imports 4) don't assign a variable in a loop (just do final assignment) 5) remove spaces, parens, unused or commented out code, etc. 6) merge unnecessary parent classes into child 7) refactor to share DXE and PEI apriori GUIDs from one place this includes changes to Build and EOT files 8) for PEP8, dont use __ for custom methods. Cc: Yonghong Zhu <yonghong.zhu@intel.com> Cc: Liming Gao <liming.gao@intel.com> Cc: Bob C Feng <bob.c.feng@intel.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jaben Carsey <jaben.carsey@intel.com> --- BaseTools/Source/Python/CommonDataClass/FdfClass.py | 73 - BaseTools/Source/Python/Eot/EotMain.py | 372 ++- BaseTools/Source/Python/GenFds/AprioriSection.py | 45 +- BaseTools/Source/Python/GenFds/Capsule.py | 26 +- BaseTools/Source/Python/GenFds/CapsuleData.py | 1 - BaseTools/Source/Python/GenFds/CompressSection.py | 4 +- BaseTools/Source/Python/GenFds/DataSection.py | 4 +- BaseTools/Source/Python/GenFds/DepexSection.py | 5 +- BaseTools/Source/Python/GenFds/EfiSection.py | 16 +- BaseTools/Source/Python/GenFds/FdfParser.py | 3014 ++++++++++---------- BaseTools/Source/Python/GenFds/Ffs.py | 82 +- BaseTools/Source/Python/GenFds/FfsFileStatement.py | 37 +- BaseTools/Source/Python/GenFds/FfsInfStatement.py | 10 +- BaseTools/Source/Python/GenFds/Fv.py | 47 +- BaseTools/Source/Python/GenFds/FvImageSection.py | 6 +- BaseTools/Source/Python/GenFds/GenFds.py | 160 +- BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py | 208 +- BaseTools/Source/Python/GenFds/GuidSection.py | 4 +- BaseTools/Source/Python/GenFds/OptionRom.py | 6 +- BaseTools/Source/Python/GenFds/Region.py | 12 +- BaseTools/Source/Python/GenFds/UiSection.py | 4 +- BaseTools/Source/Python/GenFds/VerSection.py | 16 +- BaseTools/Source/Python/GenFds/Vtf.py | 48 +- BaseTools/Source/Python/build/BuildReport.py | 5 +- 24 files changed, 2185 insertions(+), 2020 deletions(-) diff --git a/BaseTools/Source/Python/CommonDataClass/FdfClass.py b/BaseTools/Source/Python/CommonDataClass/FdfClass.py index 64b58c2078a3..3df82082457f 100644 --- a/BaseTools/Source/Python/CommonDataClass/FdfClass.py +++ b/BaseTools/Source/Python/CommonDataClass/FdfClass.py @@ -37,50 +37,6 @@ class FDClassObject: self.RegionList = [] self.vtfRawDict = {} -## FV data in FDF -# -# -class FvClassObject: - ## The constructor - # - # @param self The object pointer - # - def __init__(self): - self.UiFvName = None - self.CreateFileName = None - # 3-tuple list (blockSize, numBlocks, pcd) - self.BlockSizeList = [] - # DefineVarDict[var] = value - self.DefineVarDict = {} - # SetVarDict[var] = value - self.SetVarDict = {} - self.FvAlignment = None - # FvAttributeDict[attribute] = TRUE/FALSE (1/0) - self.FvAttributeDict = {} - 
self.FvNameGuid = None - self.FvNameString = None - self.AprioriSectionList = [] - self.FfsList = [] - self.BsBaseAddress = None - self.RtBaseAddress = None - -## Region data in FDF -# -# -class RegionClassObject: - ## The constructor - # - # @param self The object pointer - # - def __init__(self): - self.Offset = None # The begin position of the Region - self.Size = None # The Size of the Region - self.PcdOffset = None - self.PcdSize = None - self.SetVarDict = {} - self.RegionType = None - self.RegionDataList = [] - ## FFS data in FDF # # @@ -112,7 +68,6 @@ class FileStatementClassObject (FfsClassObject) : self.FvName = None self.FdName = None self.DefineVarDict = {} - self.AprioriSection = None self.KeepReloc = None ## INF statement data in FDF @@ -134,19 +89,6 @@ class FfsInfStatementClassObject(FfsClassObject): self.KeepReloc = None self.UseArch = None -## APRIORI section data in FDF -# -# -class AprioriSectionClassObject: - ## The constructor - # - # @param self The object pointer - # - def __init__(self): - # DefineVarDict[var] = value - self.DefineVarDict = {} - self.FfsList = [] - ## section data in FDF # # @@ -363,21 +305,6 @@ class CapsuleClassObject : self.CapsuleDataList = [] self.FmpPayloadList = [] -## VTF data in FDF -# -# -class VtfClassObject : - ## The constructor - # - # @param self The object pointer - # - def __init__(self): - self.KeyArch = None - self.ArchList = None - self.UiName = None - self.ResetBin = None - self.ComponentStatementList = [] - ## VTF component data in FDF # # diff --git a/BaseTools/Source/Python/Eot/EotMain.py b/BaseTools/Source/Python/Eot/EotMain.py index 49a1494126c4..2ad93cc6b4e3 100644 --- a/BaseTools/Source/Python/Eot/EotMain.py +++ b/BaseTools/Source/Python/Eot/EotMain.py @@ -36,10 +36,9 @@ import struct import uuid import copy import codecs +from GenFds.AprioriSection import DXE_APRIORI_GUID, PEI_APRIORI_GUID gGuidStringFormat = "%08X-%04X-%04X-%02X%02X-%02X%02X%02X%02X%02X%02X" -gPeiAprioriFileNameGuid = '1b45cc0a-156a-428a-af62-49864da0e6e6' -gAprioriGuid = 'fc510ee7-ffdc-11d4-bd41-0080c73c8881' gIndention = -4 class Image(array): @@ -402,10 +401,10 @@ class FirmwareVolume(Image): if Ffs.Type == 0x05: FfsDxeCoreGuid = FfsID continue - if Ffs.Guid.lower() == gPeiAprioriFileNameGuid: + if Ffs.Guid.lower() == PEI_APRIORI_GUID.lower(): FfsPeiPrioriGuid = FfsID continue - if Ffs.Guid.lower() == gAprioriGuid: + if Ffs.Guid.lower() == DXE_APRIORI_GUID.lower(): FfsDxePrioriGuid = FfsID continue @@ -1084,6 +1083,371 @@ class Ffs(Image): State = property(_GetState, _SetState) + _FfsGuid = "8C8CE578-8A3D-4F1C-9935-896185C32DD3" + + _GUID_ = struct.Struct("16x 1I2H8B") + _LENGTH_ = struct.Struct("16x 16x 1Q") + _SIG_ = struct.Struct("16x 16x 8x 1I") + _ATTR_ = struct.Struct("16x 16x 8x 4x 1I") + _HLEN_ = struct.Struct("16x 16x 8x 4x 4x 1H") + _CHECKSUM_ = struct.Struct("16x 16x 8x 4x 4x 2x 1H") + + def __init__(self, Name=''): + Image.__init__(self) + self.Name = Name + self.FfsDict = sdict() + self.OrderedFfsDict = sdict() + self.UnDispatchedFfsDict = sdict() + self.ProtocolList = sdict() + + def CheckArchProtocol(self): + for Item in EotGlobalData.gArchProtocolGuids: + if Item.lower() not in EotGlobalData.gProtocolList: + return False + return True + + def ParseDepex(self, Depex, Type): + List = None + if Type == 'Ppi': + List = EotGlobalData.gPpiList + if Type == 'Protocol': + List = EotGlobalData.gProtocolList + DepexStack = [] + DepexList = [] + DepexString = '' + FileDepex = None + CouldBeLoaded = True + for Index in range(0, 
len(Depex.Expression)): + Item = Depex.Expression[Index] + if Item == 0x00: + Index = Index + 1 + Guid = gGuidStringFormat % Depex.Expression[Index] + if Guid in self.OrderedFfsDict and Depex.Expression[Index + 1] == 0x08: + return (True, 'BEFORE %s' % Guid, [Guid, 'BEFORE']) + elif Item == 0x01: + Index = Index + 1 + Guid = gGuidStringFormat % Depex.Expression[Index] + if Guid in self.OrderedFfsDict and Depex.Expression[Index + 1] == 0x08: + return (True, 'AFTER %s' % Guid, [Guid, 'AFTER']) + elif Item == 0x02: + Index = Index + 1 + Guid = gGuidStringFormat % Depex.Expression[Index] + if Guid.lower() in List: + DepexStack.append(True) + DepexList.append(Guid) + else: + DepexStack.append(False) + DepexList.append(Guid) + continue + elif Item == 0x03 or Item == 0x04: + DepexStack.append(eval(str(DepexStack.pop()) + ' ' + Depex._OPCODE_STRING_[Item].lower() + ' ' + str(DepexStack.pop()))) + DepexList.append(str(DepexList.pop()) + ' ' + Depex._OPCODE_STRING_[Item].upper() + ' ' + str(DepexList.pop())) + elif Item == 0x05: + DepexStack.append(eval(Depex._OPCODE_STRING_[Item].lower() + ' ' + str(DepexStack.pop()))) + DepexList.append(Depex._OPCODE_STRING_[Item].lower() + ' ' + str(DepexList.pop())) + elif Item == 0x06: + DepexStack.append(True) + DepexList.append('TRUE') + DepexString = DepexString + 'TRUE' + ' ' + elif Item == 0x07: + DepexStack.append(False) + DepexList.append('False') + DepexString = DepexString + 'FALSE' + ' ' + elif Item == 0x08: + if Index != len(Depex.Expression) - 1: + CouldBeLoaded = False + else: + CouldBeLoaded = DepexStack.pop() + else: + CouldBeLoaded = False + if DepexList != []: + DepexString = DepexList[0].strip() + return (CouldBeLoaded, DepexString, FileDepex) + + def Dispatch(self, Db = None): + if Db is None: + return False + self.UnDispatchedFfsDict = copy.copy(self.FfsDict) + # Find PeiCore, DexCore, PeiPriori, DxePriori first + FfsSecCoreGuid = None + FfsPeiCoreGuid = None + FfsDxeCoreGuid = None + FfsPeiPrioriGuid = None + FfsDxePrioriGuid = None + for FfsID in self.UnDispatchedFfsDict: + Ffs = self.UnDispatchedFfsDict[FfsID] + if Ffs.Type == 0x03: + FfsSecCoreGuid = FfsID + continue + if Ffs.Type == 0x04: + FfsPeiCoreGuid = FfsID + continue + if Ffs.Type == 0x05: + FfsDxeCoreGuid = FfsID + continue + if Ffs.Guid.lower() == PEI_APRIORI_GUID.lower(): + FfsPeiPrioriGuid = FfsID + continue + if Ffs.Guid.lower() == DXE_APRIORI_GUID.lower(): + FfsDxePrioriGuid = FfsID + continue + + # Parse SEC_CORE first + if FfsSecCoreGuid is not None: + self.OrderedFfsDict[FfsSecCoreGuid] = self.UnDispatchedFfsDict.pop(FfsSecCoreGuid) + self.LoadPpi(Db, FfsSecCoreGuid) + + # Parse PEI first + if FfsPeiCoreGuid is not None: + self.OrderedFfsDict[FfsPeiCoreGuid] = self.UnDispatchedFfsDict.pop(FfsPeiCoreGuid) + self.LoadPpi(Db, FfsPeiCoreGuid) + if FfsPeiPrioriGuid is not None: + # Load PEIM described in priori file + FfsPeiPriori = self.UnDispatchedFfsDict.pop(FfsPeiPrioriGuid) + if len(FfsPeiPriori.Sections) == 1: + Section = FfsPeiPriori.Sections.popitem()[1] + if Section.Type == 0x19: + GuidStruct = struct.Struct('1I2H8B') + Start = 4 + while len(Section) > Start: + Guid = GuidStruct.unpack_from(Section[Start : Start + 16]) + GuidString = gGuidStringFormat % Guid + Start = Start + 16 + if GuidString in self.UnDispatchedFfsDict: + self.OrderedFfsDict[GuidString] = self.UnDispatchedFfsDict.pop(GuidString) + self.LoadPpi(Db, GuidString) + + self.DisPatchPei(Db) + + # Parse DXE then + if FfsDxeCoreGuid is not None: + self.OrderedFfsDict[FfsDxeCoreGuid] = 
self.UnDispatchedFfsDict.pop(FfsDxeCoreGuid) + self.LoadProtocol(Db, FfsDxeCoreGuid) + if FfsDxePrioriGuid is not None: + # Load PEIM described in priori file + FfsDxePriori = self.UnDispatchedFfsDict.pop(FfsDxePrioriGuid) + if len(FfsDxePriori.Sections) == 1: + Section = FfsDxePriori.Sections.popitem()[1] + if Section.Type == 0x19: + GuidStruct = struct.Struct('1I2H8B') + Start = 4 + while len(Section) > Start: + Guid = GuidStruct.unpack_from(Section[Start : Start + 16]) + GuidString = gGuidStringFormat % Guid + Start = Start + 16 + if GuidString in self.UnDispatchedFfsDict: + self.OrderedFfsDict[GuidString] = self.UnDispatchedFfsDict.pop(GuidString) + self.LoadProtocol(Db, GuidString) + + self.DisPatchDxe(Db) + + def LoadProtocol(self, Db, ModuleGuid): + SqlCommand = """select GuidValue from Report + where SourceFileFullPath in + (select Value1 from Inf where BelongsToFile = + (select BelongsToFile from Inf + where Value1 = 'FILE_GUID' and Value2 like '%s' and Model = %s) + and Model = %s) + and ItemType = 'Protocol' and ItemMode = 'Produced'""" \ + % (ModuleGuid, 5001, 3007) + RecordSet = Db.TblReport.Exec(SqlCommand) + for Record in RecordSet: + SqlCommand = """select Value2 from Inf where BelongsToFile = + (select DISTINCT BelongsToFile from Inf + where Value1 = + (select SourceFileFullPath from Report + where GuidValue like '%s' and ItemMode = 'Callback')) + and Value1 = 'FILE_GUID'""" % Record[0] + CallBackSet = Db.TblReport.Exec(SqlCommand) + if CallBackSet != []: + EotGlobalData.gProtocolList[Record[0].lower()] = ModuleGuid + else: + EotGlobalData.gProtocolList[Record[0].lower()] = ModuleGuid + + def LoadPpi(self, Db, ModuleGuid): + SqlCommand = """select GuidValue from Report + where SourceFileFullPath in + (select Value1 from Inf where BelongsToFile = + (select BelongsToFile from Inf + where Value1 = 'FILE_GUID' and Value2 like '%s' and Model = %s) + and Model = %s) + and ItemType = 'Ppi' and ItemMode = 'Produced'""" \ + % (ModuleGuid, 5001, 3007) + RecordSet = Db.TblReport.Exec(SqlCommand) + for Record in RecordSet: + EotGlobalData.gPpiList[Record[0].lower()] = ModuleGuid + + def DisPatchDxe(self, Db): + IsInstalled = False + ScheduleList = sdict() + for FfsID in self.UnDispatchedFfsDict: + CouldBeLoaded = False + DepexString = '' + FileDepex = None + Ffs = self.UnDispatchedFfsDict[FfsID] + if Ffs.Type == 0x07: + # Get Depex + IsFoundDepex = False + for Section in Ffs.Sections.values(): + # Find Depex + if Section.Type == 0x13: + IsFoundDepex = True + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(Section._SubImages[4], 'Protocol') + break + if Section.Type == 0x01: + CompressSections = Section._SubImages[4] + for CompressSection in CompressSections.Sections: + if CompressSection.Type == 0x13: + IsFoundDepex = True + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(CompressSection._SubImages[4], 'Protocol') + break + if CompressSection.Type == 0x02: + NewSections = CompressSection._SubImages[4] + for NewSection in NewSections.Sections: + if NewSection.Type == 0x13: + IsFoundDepex = True + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(NewSection._SubImages[4], 'Protocol') + break + + # Not find Depex + if not IsFoundDepex: + CouldBeLoaded = self.CheckArchProtocol() + DepexString = '' + FileDepex = None + + # Append New Ffs + if CouldBeLoaded: + IsInstalled = True + NewFfs = self.UnDispatchedFfsDict.pop(FfsID) + NewFfs.Depex = DepexString + if FileDepex is not None: + ScheduleList.insert(FileDepex[1], FfsID, NewFfs, FileDepex[0]) + else: + 
ScheduleList[FfsID] = NewFfs + else: + self.UnDispatchedFfsDict[FfsID].Depex = DepexString + + for FfsID in ScheduleList: + NewFfs = ScheduleList.pop(FfsID) + FfsName = 'UnKnown' + self.OrderedFfsDict[FfsID] = NewFfs + self.LoadProtocol(Db, FfsID) + + SqlCommand = """select Value2 from Inf + where BelongsToFile = (select BelongsToFile from Inf where Value1 = 'FILE_GUID' and lower(Value2) = lower('%s') and Model = %s) + and Model = %s and Value1='BASE_NAME'""" % (FfsID, 5001, 5001) + RecordSet = Db.TblReport.Exec(SqlCommand) + if RecordSet != []: + FfsName = RecordSet[0][0] + + if IsInstalled: + self.DisPatchDxe(Db) + + def DisPatchPei(self, Db): + IsInstalled = False + for FfsID in self.UnDispatchedFfsDict: + CouldBeLoaded = True + DepexString = '' + FileDepex = None + Ffs = self.UnDispatchedFfsDict[FfsID] + if Ffs.Type == 0x06 or Ffs.Type == 0x08: + # Get Depex + for Section in Ffs.Sections.values(): + if Section.Type == 0x1B: + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(Section._SubImages[4], 'Ppi') + break + if Section.Type == 0x01: + CompressSections = Section._SubImages[4] + for CompressSection in CompressSections.Sections: + if CompressSection.Type == 0x1B: + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(CompressSection._SubImages[4], 'Ppi') + break + if CompressSection.Type == 0x02: + NewSections = CompressSection._SubImages[4] + for NewSection in NewSections.Sections: + if NewSection.Type == 0x1B: + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(NewSection._SubImages[4], 'Ppi') + break + + # Append New Ffs + if CouldBeLoaded: + IsInstalled = True + NewFfs = self.UnDispatchedFfsDict.pop(FfsID) + NewFfs.Depex = DepexString + self.OrderedFfsDict[FfsID] = NewFfs + self.LoadPpi(Db, FfsID) + else: + self.UnDispatchedFfsDict[FfsID].Depex = DepexString + + if IsInstalled: + self.DisPatchPei(Db) + + + def __str__(self): + global gIndention + gIndention += 4 + FvInfo = '\n' + ' ' * gIndention + FvInfo += "[FV:%s] file_system=%s size=%x checksum=%s\n" % (self.Name, self.FileSystemGuid, self.Size, self.Checksum) + FfsInfo = "\n".join([str(self.FfsDict[FfsId]) for FfsId in self.FfsDict]) + gIndention -= 4 + return FvInfo + FfsInfo + + def _Unpack(self): + Size = self._LENGTH_.unpack_from(self._BUF_, self._OFF_)[0] + self.empty() + self.extend(self._BUF_[self._OFF_:self._OFF_+Size]) + + # traverse the FFS + EndOfFv = Size + FfsStartAddress = self.HeaderSize + LastFfsObj = None + while FfsStartAddress < EndOfFv: + FfsObj = Ffs() + FfsObj.frombuffer(self, FfsStartAddress) + FfsId = repr(FfsObj) + if ((self.Attributes & 0x00000800) != 0 and len(FfsObj) == 0xFFFFFF) \ + or ((self.Attributes & 0x00000800) == 0 and len(FfsObj) == 0): + if LastFfsObj is not None: + LastFfsObj.FreeSpace = EndOfFv - LastFfsObj._OFF_ - len(LastFfsObj) + else: + if FfsId in self.FfsDict: + EdkLogger.error("FV", 0, "Duplicate GUID in FFS", + ExtraData="\t%s @ %s\n\t%s @ %s" \ + % (FfsObj.Guid, FfsObj.Offset, + self.FfsDict[FfsId].Guid, self.FfsDict[FfsId].Offset)) + self.FfsDict[FfsId] = FfsObj + if LastFfsObj is not None: + LastFfsObj.FreeSpace = FfsStartAddress - LastFfsObj._OFF_ - len(LastFfsObj) + + FfsStartAddress += len(FfsObj) + # + # align to next 8-byte aligned address: A = (A + 8 - 1) & (~(8 - 1)) + # The next FFS must be at the latest next 8-byte aligned address + # + FfsStartAddress = (FfsStartAddress + 7) & (~7) + LastFfsObj = FfsObj + + def _GetAttributes(self): + return self.GetField(self._ATTR_, 0)[0] + + def _GetSize(self): + return self.GetField(self._LENGTH_, 0)[0] + + def 
_GetChecksum(self): + return self.GetField(self._CHECKSUM_, 0)[0] + + def _GetHeaderLength(self): + return self.GetField(self._HLEN_, 0)[0] + + def _GetFileSystemGuid(self): + return gGuidStringFormat % self.GetField(self._GUID_, 0) + + Attributes = property(_GetAttributes) + Size = property(_GetSize) + Checksum = property(_GetChecksum) + HeaderSize = property(_GetHeaderLength) + FileSystemGuid = property(_GetFileSystemGuid) + ## MultipleFv() class # # A class for Multiple FV diff --git a/BaseTools/Source/Python/GenFds/AprioriSection.py b/BaseTools/Source/Python/GenFds/AprioriSection.py index 7196f7f2c753..e7b856a1150a 100644 --- a/BaseTools/Source/Python/GenFds/AprioriSection.py +++ b/BaseTools/Source/Python/GenFds/AprioriSection.py @@ -16,28 +16,31 @@ # Import Modules # from __future__ import absolute_import -from struct import * +from struct import pack import Common.LongFilePathOs as os from io import BytesIO -from . import FfsFileStatement +from .FfsFileStatement import FileStatement from .GenFdsGlobalVariable import GenFdsGlobalVariable -from CommonDataClass.FdfClass import AprioriSectionClassObject -from Common.StringUtils import * +from Common.StringUtils import NormPath from Common.Misc import SaveFileOnChange, PathClass -from Common import EdkLogger -from Common.BuildToolError import * +from Common.EdkLogger import error as EdkLoggerError +from Common.BuildToolError import RESOURCE_NOT_AVAILABLE from Common.DataType import TAB_COMMON +DXE_APRIORI_GUID = "FC510EE7-FFDC-11D4-BD41-0080C73C8881" +PEI_APRIORI_GUID = "1B45CC0A-156A-428A-AF62-49864DA0E6E6" + ## process APRIORI file data and generate PEI/DXE APRIORI file # # -class AprioriSection (AprioriSectionClassObject): +class AprioriSection (object): ## The constructor # # @param self The object pointer # def __init__(self): - AprioriSectionClassObject.__init__(self) + self.DefineVarDict = {} + self.FfsList = [] self.AprioriType = "" ## GenFfs() method @@ -50,16 +53,16 @@ class AprioriSection (AprioriSectionClassObject): # @retval string Generated file name # def GenFfs (self, FvName, Dict = {}, IsMakefile = False): - DXE_GUID = "FC510EE7-FFDC-11D4-BD41-0080C73C8881" - PEI_GUID = "1B45CC0A-156A-428A-AF62-49864DA0E6E6" Buffer = BytesIO('') - AprioriFileGuid = DXE_GUID if self.AprioriType == "PEI": - AprioriFileGuid = PEI_GUID + AprioriFileGuid = PEI_APRIORI_GUID + else: + AprioriFileGuid = DXE_APRIORI_GUID + OutputAprFilePath = os.path.join (GenFdsGlobalVariable.WorkSpaceDir, \ GenFdsGlobalVariable.FfsDir,\ AprioriFileGuid + FvName) - if not os.path.exists(OutputAprFilePath) : + if not os.path.exists(OutputAprFilePath): os.makedirs(OutputAprFilePath) OutputAprFileName = os.path.join( OutputAprFilePath, \ @@ -69,33 +72,30 @@ class AprioriSection (AprioriSectionClassObject): Dict.update(self.DefineVarDict) InfFileName = None - for FfsObj in self.FfsList : + for FfsObj in self.FfsList: Guid = "" - if isinstance(FfsObj, FfsFileStatement.FileStatement): + if isinstance(FfsObj, FileStatement): Guid = FfsObj.NameGuid else: InfFileName = NormPath(FfsObj.InfFileName) Arch = FfsObj.GetCurrentArch() - if Arch is not None: + if Arch: Dict['$(ARCH)'] = Arch InfFileName = GenFdsGlobalVariable.MacroExtend(InfFileName, Dict, Arch) - if Arch is not None: + if Arch: Inf = GenFdsGlobalVariable.WorkSpace.BuildObject[PathClass(InfFileName, GenFdsGlobalVariable.WorkSpaceDir), Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] Guid = Inf.Guid - else: Inf = GenFdsGlobalVariable.WorkSpace.BuildObject[PathClass(InfFileName, 
GenFdsGlobalVariable.WorkSpaceDir), TAB_COMMON, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] Guid = Inf.Guid - self.BinFileList = Inf.Module.Binaries - if self.BinFileList == []: - EdkLogger.error("GenFds", RESOURCE_NOT_AVAILABLE, + if not Inf.Module.Binaries: + EdkLoggerError("GenFds", RESOURCE_NOT_AVAILABLE, "INF %s not found in build ARCH %s!" \ % (InfFileName, GenFdsGlobalVariable.ArchList)) - GuidPart = Guid.split('-') Buffer.write(pack('I', long(GuidPart[0], 16))) Buffer.write(pack('H', int(GuidPart[1], 16))) @@ -123,4 +123,3 @@ class AprioriSection (AprioriSectionClassObject): 'EFI_FV_FILETYPE_FREEFORM', AprioriFileGuid, MakefilePath=MakefilePath) return AprFfsFileName - diff --git a/BaseTools/Source/Python/GenFds/Capsule.py b/BaseTools/Source/Python/GenFds/Capsule.py index baa6e1dfa2ab..df29c40dbd20 100644 --- a/BaseTools/Source/Python/GenFds/Capsule.py +++ b/BaseTools/Source/Python/GenFds/Capsule.py @@ -16,22 +16,18 @@ # Import Modules # from __future__ import absolute_import -from .GenFdsGlobalVariable import GenFdsGlobalVariable -from .GenFdsGlobalVariable import FindExtendTool +from .GenFdsGlobalVariable import GenFdsGlobalVariable, FindExtendTool from CommonDataClass.FdfClass import CapsuleClassObject import Common.LongFilePathOs as os -import subprocess from io import BytesIO -from Common.Misc import SaveFileOnChange -from Common.Misc import PackRegistryFormatGuid +from Common.Misc import SaveFileOnChange, PackRegistryFormatGuid import uuid from struct import pack from Common import EdkLogger -from Common.BuildToolError import * +from Common.BuildToolError import GENFDS_ERROR +from Common.DataType import TAB_LINE_BREAK - -T_CHAR_LF = '\n' -WIN_CERT_REVISION = 0x0200 +WIN_CERT_REVISION = 0x0200 WIN_CERT_TYPE_EFI_GUID = 0x0EF1 EFI_CERT_TYPE_PKCS7_GUID = uuid.UUID('{4aafd29d-68df-49ee-8aa9-347d375665a7}') EFI_CERT_TYPE_RSA2048_SHA256_GUID = uuid.UUID('{a7717414-c616-4977-9420-844712a735bf}') @@ -39,7 +35,7 @@ EFI_CERT_TYPE_RSA2048_SHA256_GUID = uuid.UUID('{a7717414-c616-4977-9420-844712a7 ## create inf file describes what goes into capsule and call GenFv to generate capsule # # -class Capsule (CapsuleClassObject) : +class Capsule (CapsuleClassObject): ## The constructor # # @param self The object pointer @@ -210,16 +206,16 @@ class Capsule (CapsuleClassObject) : return self.GenFmpCapsule() CapInfFile = self.GenCapInf() - CapInfFile.writelines("[files]" + T_CHAR_LF) + CapInfFile.writelines("[files]" + TAB_LINE_BREAK) CapFileList = [] - for CapsuleDataObj in self.CapsuleDataList : + for CapsuleDataObj in self.CapsuleDataList: CapsuleDataObj.CapsuleName = self.CapsuleName FileName = CapsuleDataObj.GenCapsuleSubItem() CapsuleDataObj.CapsuleName = None CapFileList.append(FileName) CapInfFile.writelines("EFI_FILE_NAME = " + \ FileName + \ - T_CHAR_LF) + TAB_LINE_BREAK) SaveFileOnChange(self.CapInfFileName, CapInfFile.getvalue(), False) CapInfFile.close() # @@ -249,13 +245,13 @@ class Capsule (CapsuleClassObject) : self.UiCapsuleName + "_Cap" + '.inf') CapInfFile = BytesIO() #open (self.CapInfFileName , 'w+') - CapInfFile.writelines("[options]" + T_CHAR_LF) + CapInfFile.writelines("[options]" + TAB_LINE_BREAK) for Item in self.TokensDict: CapInfFile.writelines("EFI_" + \ Item + \ ' = ' + \ self.TokensDict[Item] + \ - T_CHAR_LF) + TAB_LINE_BREAK) return CapInfFile diff --git a/BaseTools/Source/Python/GenFds/CapsuleData.py b/BaseTools/Source/Python/GenFds/CapsuleData.py index 0caba8983d2e..6071e9f4a545 100644 --- a/BaseTools/Source/Python/GenFds/CapsuleData.py +++ 
b/BaseTools/Source/Python/GenFds/CapsuleData.py @@ -16,7 +16,6 @@ # Import Modules # from __future__ import absolute_import -from . import Ffs from .GenFdsGlobalVariable import GenFdsGlobalVariable from io import BytesIO from struct import pack diff --git a/BaseTools/Source/Python/GenFds/CompressSection.py b/BaseTools/Source/Python/GenFds/CompressSection.py index aaaabf84dca8..51f6791fab19 100644 --- a/BaseTools/Source/Python/GenFds/CompressSection.py +++ b/BaseTools/Source/Python/GenFds/CompressSection.py @@ -16,7 +16,7 @@ # Import Modules # from __future__ import absolute_import -from .Ffs import Ffs +from .Ffs import SectionSuffix from . import Section import subprocess import Common.LongFilePathOs as os @@ -86,7 +86,7 @@ class CompressSection (CompressSectionClassObject) : ModuleName + \ SUP_MODULE_SEC + \ SecNum + \ - Ffs.SectionSuffix['COMPRESS'] + SectionSuffix['COMPRESS'] OutputFile = os.path.normpath(OutputFile) DummyFile = OutputFile + '.dummy' GenFdsGlobalVariable.GenerateSection(DummyFile, SectFiles, InputAlign=SectAlign, IsMakefile=IsMakefile) diff --git a/BaseTools/Source/Python/GenFds/DataSection.py b/BaseTools/Source/Python/GenFds/DataSection.py index a6387b07c582..28f9b931ca4b 100644 --- a/BaseTools/Source/Python/GenFds/DataSection.py +++ b/BaseTools/Source/Python/GenFds/DataSection.py @@ -19,7 +19,7 @@ from __future__ import absolute_import from . import Section from .GenFdsGlobalVariable import GenFdsGlobalVariable import subprocess -from .Ffs import Ffs +from .Ffs import SectionSuffix import Common.LongFilePathOs as os from CommonDataClass.FdfClass import DataSectionClassObject from Common.Misc import PeImageClass @@ -121,7 +121,7 @@ class DataSection (DataSectionClassObject): ) self.SectFileName = TeFile - OutputFile = os.path.join (OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get(self.SecType)) + OutputFile = os.path.join (OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get(self.SecType)) OutputFile = os.path.normpath(OutputFile) GenFdsGlobalVariable.GenerateSection(OutputFile, [self.SectFileName], Section.Section.SectionType.get(self.SecType), IsMakefile = IsMakefile) FileList = [OutputFile] diff --git a/BaseTools/Source/Python/GenFds/DepexSection.py b/BaseTools/Source/Python/GenFds/DepexSection.py index b2d123bfc045..78efc18816e8 100644 --- a/BaseTools/Source/Python/GenFds/DepexSection.py +++ b/BaseTools/Source/Python/GenFds/DepexSection.py @@ -18,8 +18,6 @@ from __future__ import absolute_import from . import Section from .GenFdsGlobalVariable import GenFdsGlobalVariable -import subprocess -from .Ffs import Ffs import Common.LongFilePathOs as os from CommonDataClass.FdfClass import DepexSectionClassObject from AutoGen.GenDepex import DependencyExpression @@ -46,9 +44,8 @@ class DepexSection (DepexSectionClassObject): GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag) for Inf in GenFdsGlobalVariable.FdfParser.Profile.InfList: - ModuleFile = PathClass(Inf, GenFdsGlobalVariable.WorkSpaceDir) ModuleData = GenFdsGlobalVariable.WorkSpace.BuildObject[ - ModuleFile, + PathClass(Inf, GenFdsGlobalVariable.WorkSpaceDir), Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag diff --git a/BaseTools/Source/Python/GenFds/EfiSection.py b/BaseTools/Source/Python/GenFds/EfiSection.py index 623b77d27427..f8573b5c7d1b 100644 --- a/BaseTools/Source/Python/GenFds/EfiSection.py +++ b/BaseTools/Source/Python/GenFds/EfiSection.py @@ -20,7 +20,7 @@ from struct import * from . 
import Section from .GenFdsGlobalVariable import GenFdsGlobalVariable import subprocess -from .Ffs import Ffs +from .Ffs import SectionSuffix import Common.LongFilePathOs as os from CommonDataClass.FdfClass import EfiSectionClassObject from Common import EdkLogger @@ -124,7 +124,7 @@ class EfiSection (EfiSectionClassObject): BuildNumTuple = tuple() Num = SecNum - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + Ffs.SectionSuffix.get(SectionType)) + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + SectionSuffix.get(SectionType)) GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', #Ui=StringData, Ver=BuildNum, @@ -135,7 +135,7 @@ class EfiSection (EfiSectionClassObject): for File in FileList: Index = Index + 1 Num = '%s.%d' %(SecNum, Index) - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + Ffs.SectionSuffix.get(SectionType)) + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + SectionSuffix.get(SectionType)) f = open(File, 'r') VerString = f.read() f.close() @@ -164,7 +164,7 @@ class EfiSection (EfiSectionClassObject): else: EdkLogger.error("GenFds", GENFDS_ERROR, "File: %s miss Version Section value" %InfFileName) Num = SecNum - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + Ffs.SectionSuffix.get(SectionType)) + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + SectionSuffix.get(SectionType)) GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', #Ui=VerString, Ver=BuildNum, @@ -185,7 +185,7 @@ class EfiSection (EfiSectionClassObject): Num = SecNum if IsMakefile and StringData == ModuleNameStr: StringData = "$(MODULE_NAME)" - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + Ffs.SectionSuffix.get(SectionType)) + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + SectionSuffix.get(SectionType)) GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_USER_INTERFACE', Ui=StringData, IsMakefile=IsMakefile) OutputFileList.append(OutputFile) @@ -194,7 +194,7 @@ class EfiSection (EfiSectionClassObject): for File in FileList: Index = Index + 1 Num = '%s.%d' %(SecNum, Index) - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + Ffs.SectionSuffix.get(SectionType)) + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + SectionSuffix.get(SectionType)) f = open(File, 'r') UiString = f.read() f.close() @@ -218,7 +218,7 @@ class EfiSection (EfiSectionClassObject): Num = SecNum if IsMakefile and StringData == ModuleNameStr: StringData = "$(MODULE_NAME)" - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + Ffs.SectionSuffix.get(SectionType)) + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + SectionSuffix.get(SectionType)) GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_USER_INTERFACE', Ui=StringData, IsMakefile=IsMakefile) OutputFileList.append(OutputFile) @@ -239,7 +239,7 @@ class EfiSection (EfiSectionClassObject): """ Copy Map file to FFS output path """ Index = Index + 1 Num = '%s.%d' %(SecNum, Index) - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + Num + Ffs.SectionSuffix.get(SectionType)) + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + Num + SectionSuffix.get(SectionType)) File = GenFdsGlobalVariable.MacroExtend(File, Dict) #Get PE Section alignment when 
align is set to AUTO diff --git a/BaseTools/Source/Python/GenFds/FdfParser.py b/BaseTools/Source/Python/GenFds/FdfParser.py index 63687e98bb31..62bf9f14012a 100644 --- a/BaseTools/Source/Python/GenFds/FdfParser.py +++ b/BaseTools/Source/Python/GenFds/FdfParser.py @@ -18,79 +18,66 @@ # from __future__ import print_function from __future__ import absolute_import -import re +from re import compile, DOTALL +from string import hexdigits +from uuid import UUID -from . import Fd -from . import Region -from . import Fv -from . import AprioriSection -from . import FfsInfStatement -from . import FfsFileStatement -from . import VerSection -from . import UiSection -from . import FvImageSection -from . import DataSection -from . import DepexSection -from . import CompressSection -from . import GuidSection -from . import Capsule -from . import CapsuleData -from . import Rule -from . import RuleComplexFile -from . import RuleSimpleFile -from . import EfiSection -from . import Vtf -from . import ComponentStatement -from . import OptionRom -from . import OptRomInfStatement -from . import OptRomFileStatement -import string - -from .GenFdsGlobalVariable import GenFdsGlobalVariable from Common.BuildToolError import * from Common import EdkLogger -from Common.Misc import PathClass -from Common.StringUtils import NormPath -import Common.GlobalData as GlobalData -from Common.Expression import * +from Common.Misc import PathClass, tdict +from Common.StringUtils import NormPath, ReplaceMacro from Common import GlobalData +from Common.Expression import * from Common.DataType import * -from Common.StringUtils import ReplaceMacro -import uuid -from Common.Misc import tdict from Common.MultipleWorkspace import MultipleWorkspace as mws import Common.LongFilePathOs as os from Common.LongFilePathSupport import OpenLongFilePath as open -from .Capsule import EFI_CERT_TYPE_PKCS7_GUID -from .Capsule import EFI_CERT_TYPE_RSA2048_SHA256_GUID from Common.RangeExpression import RangeExpression from collections import OrderedDict -##define T_CHAR_SPACE ' ' -##define T_CHAR_NULL '\0' -##define T_CHAR_CR '\r' -##define T_CHAR_TAB '\t' -##define T_CHAR_LF '\n' -##define T_CHAR_SLASH '/' -##define T_CHAR_BACKSLASH '\\' -##define T_CHAR_DOUBLE_QUOTE '\"' -##define T_CHAR_SINGLE_QUOTE '\'' -##define T_CHAR_STAR '*' -##define T_CHAR_HASH '#' +from .Fd import FD +from .Region import Region +from .Fv import FV +from .AprioriSection import AprioriSection +from .FfsInfStatement import FfsInfStatement +from .FfsFileStatement import FileStatement +from .VerSection import VerSection +from .UiSection import UiSection +from .FvImageSection import FvImageSection +from .DataSection import DataSection +from .DepexSection import DepexSection +from .CompressSection import CompressSection +from .GuidSection import GuidSection +from .Capsule import EFI_CERT_TYPE_PKCS7_GUID, EFI_CERT_TYPE_RSA2048_SHA256_GUID, Capsule +from .CapsuleData import CapsuleFfs, CapsulePayload, CapsuleFv, CapsuleFd, CapsuleAnyFile, CapsuleAfile +from .RuleComplexFile import RuleComplexFile +from .RuleSimpleFile import RuleSimpleFile +from .EfiSection import EfiSection +from .Vtf import Vtf +from .ComponentStatement import ComponentStatement +from .OptionRom import OPTIONROM +from .OptRomInfStatement import OptRomInfStatement, OverrideAttribs +from .OptRomFileStatement import OptRomFileStatement +from .GenFdsGlobalVariable import GenFdsGlobalVariable -(T_CHAR_SPACE, T_CHAR_NULL, T_CHAR_CR, T_CHAR_TAB, T_CHAR_LF, T_CHAR_SLASH, \ -T_CHAR_BACKSLASH, T_CHAR_DOUBLE_QUOTE, 
T_CHAR_SINGLE_QUOTE, T_CHAR_STAR, T_CHAR_HASH) = \ -(' ', '\0', '\r', '\t', '\n', '/', '\\', '\"', '\'', '*', '#') +T_CHAR_CR = '\r' +T_CHAR_TAB = '\t' +T_CHAR_DOUBLE_QUOTE = '\"' +T_CHAR_SINGLE_QUOTE = '\'' +T_CHAR_STAR = '*' -SEPERATOR_TUPLE = ('=', '|', ',', '{', '}') +SEPARATORS = {TAB_EQUAL_SPLIT, TAB_VALUE_SPLIT, TAB_COMMA_SPLIT, '{', '}'} +ALIGNMENTS = {"Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", + "256K", "512K", "1M", "2M", "4M", "8M", "16M"} +ALIGNMENT_NOAUTO = ALIGNMENTS - {"Auto"} -RegionSizePattern = re.compile("\s*(?P<base>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<size>(?:0x|0X)?[a-fA-F0-9]+)\s*") -RegionSizeGuidPattern = re.compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*\|\s*(?P<size>\w+\.\w+[\.\w\[\]]*)\s*") -RegionOffsetPcdPattern = re.compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*$") -ShortcutPcdPattern = re.compile("\s*\w+\s*=\s*(?P<value>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<name>\w+\.\w+)\s*") -BaseAddrValuePattern = re.compile('^0[xX][0-9a-fA-F]+') -FileExtensionPattern = re.compile(r'([a-zA-Z][a-zA-Z0-9]*)') -TokenFindPattern = re.compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\)|\*)') +RegionSizePattern = compile("\s*(?P<base>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<size>(?:0x|0X)?[a-fA-F0-9]+)\s*") +RegionSizeGuidPattern = compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*\|\s*(?P<size>\w+\.\w+[\.\w\[\]]*)\s*") +RegionOffsetPcdPattern = compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*$") +ShortcutPcdPattern = compile("\s*\w+\s*=\s*(?P<value>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<name>\w+\.\w+)\s*") +BaseAddrValuePattern = compile('^0[xX][0-9a-fA-F]+') +FileExtensionPattern = compile(r'([a-zA-Z][a-zA-Z0-9]*)') +TokenFindPattern = compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\)|\*)') AllIncludeFileList = [] # Get the closest parent @@ -109,7 +96,6 @@ def IsValidInclude (File, Line): return True def GetRealFileLine (File, Line): - InsertedLines = 0 for Profile in AllIncludeFileList: if Profile.IsLineInFile(Line): @@ -121,7 +107,7 @@ def GetRealFileLine (File, Line): ## The exception class that used to report error messages when parsing FDF # -# Currently the "ToolName" is set to be "FDF Parser". +# Currently the "ToolName" is set to be "FdfParser". # class Warning (Exception): ## The constructor @@ -132,7 +118,6 @@ class Warning (Exception): # @param Line The Line number that error occurs # def __init__(self, Str, File = None, Line = None): - FileLineTuple = GetRealFileLine(File, Line) self.FileName = FileLineTuple[0] self.LineNumber = FileLineTuple[1] @@ -147,7 +132,7 @@ class Warning (Exception): # # May raise Exception when opening file. # -class IncludeFileProfile : +class IncludeFileProfile: ## The constructor # # @param self The object pointer @@ -157,16 +142,11 @@ class IncludeFileProfile : self.FileName = FileName self.FileLinesList = [] try: - fsock = open(FileName, "rb", 0) - try: + with open(FileName, "rb", 0) as fsock: self.FileLinesList = fsock.readlines() for index, line in enumerate(self.FileLinesList): - if not line.endswith('\n'): - self.FileLinesList[index] += '\n' - - finally: - fsock.close() - + if not line.endswith(TAB_LINE_BREAK): + self.FileLinesList[index] += TAB_LINE_BREAK except: EdkLogger.error("FdfParser", FILE_OPEN_FAILURE, ExtraData=FileName) @@ -209,7 +189,7 @@ class IncludeFileProfile : # # May raise Exception when opening file. 
# -class FileProfile : +class FileProfile: ## The constructor # # @param self The object pointer @@ -218,11 +198,8 @@ class FileProfile : def __init__(self, FileName): self.FileLinesList = [] try: - fsock = open(FileName, "rb", 0) - try: + with open(FileName, "rb", 0) as fsock: self.FileLinesList = fsock.readlines() - finally: - fsock.close() except: EdkLogger.error("FdfParser", FILE_OPEN_FAILURE, ExtraData=FileName) @@ -266,41 +243,37 @@ class FdfParser: self.CurrentOffsetWithinLine = 0 self.CurrentFdName = None self.CurrentFvName = None - self.__Token = "" - self.__SkippedChars = "" + self._Token = "" + self._SkippedChars = "" GlobalData.gFdfParser = self # Used to section info - self.__CurSection = [] + self._CurSection = [] # Key: [section name, UI name, arch] - # Value: {MACRO_NAME : MACRO_VALUE} - self.__MacroDict = tdict(True, 3) - self.__PcdDict = OrderedDict() + # Value: {MACRO_NAME: MACRO_VALUE} + self._MacroDict = tdict(True, 3) + self._PcdDict = OrderedDict() - self.__WipeOffArea = [] + self._WipeOffArea = [] if GenFdsGlobalVariable.WorkSpaceDir == '': GenFdsGlobalVariable.WorkSpaceDir = os.getenv("WORKSPACE") - ## __SkipWhiteSpace() method + ## _SkipWhiteSpace() method # - # Skip white spaces from current char, return number of chars skipped + # Skip white spaces from current char. # # @param self The object pointer - # @retval Count The number of chars skipped # - def __SkipWhiteSpace(self): - Count = 0 - while not self.__EndOfFile(): - Count += 1 - if self.__CurrentChar() in (T_CHAR_NULL, T_CHAR_CR, T_CHAR_LF, T_CHAR_SPACE, T_CHAR_TAB): - self.__SkippedChars += str(self.__CurrentChar()) - self.__GetOneChar() - + def _SkipWhiteSpace(self): + while not self._EndOfFile(): + if self._CurrentChar() in (TAB_PRINTCHAR_NUL, T_CHAR_CR, TAB_LINE_BREAK, TAB_SPACE_SPLIT, T_CHAR_TAB): + self._SkippedChars += str(self._CurrentChar()) + self._GetOneChar() else: - Count = Count - 1 - return Count + return + return - ## __EndOfFile() method + ## _EndOfFile() method # # Judge current buffer pos is at file end # @@ -308,17 +281,16 @@ class FdfParser: # @retval True Current File buffer position is at file end # @retval False Current File buffer position is NOT at file end # - def __EndOfFile(self): + def _EndOfFile(self): NumberOfLines = len(self.Profile.FileLinesList) SizeOfLastLine = len(self.Profile.FileLinesList[-1]) if self.CurrentLineNumber == NumberOfLines and self.CurrentOffsetWithinLine >= SizeOfLastLine - 1: return True - elif self.CurrentLineNumber > NumberOfLines: + if self.CurrentLineNumber > NumberOfLines: return True - else: - return False + return False - ## __EndOfLine() method + ## _EndOfLine() method # # Judge current buffer pos is at line end # @@ -326,14 +298,13 @@ class FdfParser: # @retval True Current File buffer position is at line end # @retval False Current File buffer position is NOT at line end # - def __EndOfLine(self): + def _EndOfLine(self): if self.CurrentLineNumber > len(self.Profile.FileLinesList): return True SizeOfCurrentLine = len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) if self.CurrentOffsetWithinLine >= SizeOfCurrentLine: return True - else: - return False + return False ## Rewind() method # @@ -347,7 +318,7 @@ class FdfParser: self.CurrentLineNumber = DestLine self.CurrentOffsetWithinLine = DestOffset - ## __UndoOneChar() method + ## _UndoOneChar() method # # Go back one char in the file buffer # @@ -355,80 +326,78 @@ class FdfParser: # @retval True Successfully go back one char # @retval False Not able to go back one char as file 
beginning reached # - def __UndoOneChar(self): - + def _UndoOneChar(self): if self.CurrentLineNumber == 1 and self.CurrentOffsetWithinLine == 0: return False elif self.CurrentOffsetWithinLine == 0: self.CurrentLineNumber -= 1 - self.CurrentOffsetWithinLine = len(self.__CurrentLine()) - 1 + self.CurrentOffsetWithinLine = len(self._CurrentLine()) - 1 else: self.CurrentOffsetWithinLine -= 1 return True - ## __GetOneChar() method + ## _GetOneChar() method # # Move forward one char in the file buffer # # @param self The object pointer # - def __GetOneChar(self): + def _GetOneChar(self): if self.CurrentOffsetWithinLine == len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) - 1: self.CurrentLineNumber += 1 self.CurrentOffsetWithinLine = 0 else: self.CurrentOffsetWithinLine += 1 - ## __CurrentChar() method + ## _CurrentChar() method # # Get the char pointed to by the file buffer pointer # # @param self The object pointer # @retval Char Current char # - def __CurrentChar(self): + def _CurrentChar(self): return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine] - ## __NextChar() method + ## _NextChar() method # # Get the one char pass the char pointed to by the file buffer pointer # # @param self The object pointer # @retval Char Next char # - def __NextChar(self): + def _NextChar(self): if self.CurrentOffsetWithinLine == len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) - 1: return self.Profile.FileLinesList[self.CurrentLineNumber][0] - else: - return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine + 1] + return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine + 1] - ## __SetCurrentCharValue() method + ## _SetCurrentCharValue() method # # Modify the value of current char # # @param self The object pointer # @param Value The new value of current char # - def __SetCurrentCharValue(self, Value): + def _SetCurrentCharValue(self, Value): self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine] = Value - ## __CurrentLine() method + ## _CurrentLine() method # # Get the list that contains current line contents # # @param self The object pointer # @retval List current line contents # - def __CurrentLine(self): + def _CurrentLine(self): return self.Profile.FileLinesList[self.CurrentLineNumber - 1] - def __StringToList(self): + def _StringToList(self): self.Profile.FileLinesList = [list(s) for s in self.Profile.FileLinesList] if not self.Profile.FileLinesList: EdkLogger.error('FdfParser', FILE_READ_FAILURE, 'The file is empty!', File=self.FileName) self.Profile.FileLinesList[-1].append(' ') - def __ReplaceFragment(self, StartPos, EndPos, Value = ' '): + def _ReplaceFragment(self, StartPos, EndPos, Value = ' '): if StartPos[0] == EndPos[0]: Offset = StartPos[1] while Offset <= EndPos[1]: @@ -437,14 +406,14 @@ class FdfParser: return Offset = StartPos[1] - while self.Profile.FileLinesList[StartPos[0]][Offset] not in ('\r', '\n'): + while self.Profile.FileLinesList[StartPos[0]][Offset] not in (T_CHAR_CR, TAB_LINE_BREAK): self.Profile.FileLinesList[StartPos[0]][Offset] = Value Offset += 1 Line = StartPos[0] while Line < EndPos[0]: Offset = 0 - while self.Profile.FileLinesList[Line][Offset] not in ('\r', '\n'): + while self.Profile.FileLinesList[Line][Offset] not in (T_CHAR_CR, TAB_LINE_BREAK): self.Profile.FileLinesList[Line][Offset] = Value Offset += 1 Line += 1 @@ -454,45 +423,29 @@ class FdfParser: self.Profile.FileLinesList[EndPos[0]][Offset] = Value Offset += 
1 - - def __GetMacroName(self): - if not self.__GetNextToken(): - raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) - MacroName = self.__Token - NotFlag = False - if MacroName.startswith('!'): - NotFlag = True - MacroName = MacroName[1:].strip() - - if not MacroName.startswith('$(') or not MacroName.endswith(')'): - raise Warning("Macro name expected(Please use '$(%(Token)s)' if '%(Token)s' is a macro.)" % {"Token" : MacroName}, - self.FileName, self.CurrentLineNumber) - MacroName = MacroName[2:-1] - return MacroName, NotFlag - - def __SetMacroValue(self, Macro, Value): - if not self.__CurSection: + def _SetMacroValue(self, Macro, Value): + if not self._CurSection: return MacroDict = {} - if not self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]]: - self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]] = MacroDict + if not self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]]: + self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]] = MacroDict else: - MacroDict = self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]] + MacroDict = self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]] MacroDict[Macro] = Value - def __GetMacroValue(self, Macro): + def _GetMacroValue(self, Macro): # Highest priority if Macro in GlobalData.gCommandLineDefines: return GlobalData.gCommandLineDefines[Macro] if Macro in GlobalData.gGlobalDefines: return GlobalData.gGlobalDefines[Macro] - if self.__CurSection: - MacroDict = self.__MacroDict[ - self.__CurSection[0], - self.__CurSection[1], - self.__CurSection[2] + if self._CurSection: + MacroDict = self._MacroDict[ + self._CurSection[0], + self._CurSection[1], + self._CurSection[2] ] if MacroDict and Macro in MacroDict: return MacroDict[Macro] @@ -502,7 +455,7 @@ class FdfParser: return GlobalData.gPlatformDefines[Macro] return None - def __SectionHeaderParser(self, Section): + def _SectionHeaderParser(self, Section): # [Defines] # [FD.UiName]: use dummy instead if UI name is optional # [FV.UiName] @@ -510,25 +463,25 @@ class FdfParser: # [Rule]: don't take rule section into account, macro is not allowed in this section # [VTF.arch.UiName, arch] # [OptionRom.DriverName] - self.__CurSection = [] - Section = Section.strip()[1:-1].upper().replace(' ', '').strip('.') - ItemList = Section.split('.') + self._CurSection = [] + Section = Section.strip()[1:-1].upper().replace(' ', '').strip(TAB_SPLIT) + ItemList = Section.split(TAB_SPLIT) Item = ItemList[0] if Item == '' or Item == 'RULE': return if Item == TAB_COMMON_DEFINES.upper(): - self.__CurSection = [TAB_COMMON, TAB_COMMON, TAB_COMMON] + self._CurSection = [TAB_COMMON, TAB_COMMON, TAB_COMMON] elif Item == 'VTF' and len(ItemList) == 3: UiName = ItemList[2] - Pos = UiName.find(',') + Pos = UiName.find(TAB_COMMA_SPLIT) if Pos != -1: UiName = UiName[:Pos] - self.__CurSection = ['VTF', UiName, ItemList[1]] + self._CurSection = ['VTF', UiName, ItemList[1]] elif len(ItemList) > 1: - self.__CurSection = [ItemList[0], ItemList[1], TAB_COMMON] + self._CurSection = [ItemList[0], ItemList[1], TAB_COMMON] elif len(ItemList) > 0: - self.__CurSection = [ItemList[0], 'DUMMY', TAB_COMMON] + self._CurSection = [ItemList[0], 'DUMMY', TAB_COMMON] ## PreprocessFile() method # @@ -540,7 +493,6 @@ class FdfParser: # @param self The object pointer # def PreprocessFile(self): - self.Rewind() InComment = False DoubleSlashComment = False @@ -548,12 +500,12 @@ 
class FdfParser: # HashComment in quoted string " " is ignored. InString = False - while not self.__EndOfFile(): + while not self._EndOfFile(): - if self.__CurrentChar() == T_CHAR_DOUBLE_QUOTE and not InComment: + if self._CurrentChar() == T_CHAR_DOUBLE_QUOTE and not InComment: InString = not InString # meet new line, then no longer in a comment for // and '#' - if self.__CurrentChar() == T_CHAR_LF: + if self._CurrentChar() == TAB_LINE_BREAK: self.CurrentLineNumber += 1 self.CurrentOffsetWithinLine = 0 if InComment and DoubleSlashComment: @@ -563,33 +515,33 @@ class FdfParser: InComment = False HashComment = False # check for */ comment end - elif InComment and not DoubleSlashComment and not HashComment and self.__CurrentChar() == T_CHAR_STAR and self.__NextChar() == T_CHAR_SLASH: - self.__SetCurrentCharValue(T_CHAR_SPACE) - self.__GetOneChar() - self.__SetCurrentCharValue(T_CHAR_SPACE) - self.__GetOneChar() + elif InComment and not DoubleSlashComment and not HashComment and self._CurrentChar() == T_CHAR_STAR and self._NextChar() == TAB_BACK_SLASH: + self._SetCurrentCharValue(TAB_SPACE_SPLIT) + self._GetOneChar() + self._SetCurrentCharValue(TAB_SPACE_SPLIT) + self._GetOneChar() InComment = False # set comments to spaces elif InComment: - self.__SetCurrentCharValue(T_CHAR_SPACE) - self.__GetOneChar() + self._SetCurrentCharValue(TAB_SPACE_SPLIT) + self._GetOneChar() # check for // comment - elif self.__CurrentChar() == T_CHAR_SLASH and self.__NextChar() == T_CHAR_SLASH and not self.__EndOfLine(): + elif self._CurrentChar() == TAB_BACK_SLASH and self._NextChar() == TAB_BACK_SLASH and not self._EndOfLine(): InComment = True DoubleSlashComment = True # check for '#' comment - elif self.__CurrentChar() == T_CHAR_HASH and not self.__EndOfLine() and not InString: + elif self._CurrentChar() == TAB_COMMENT_SPLIT and not self._EndOfLine() and not InString: InComment = True HashComment = True # check for /* comment start - elif self.__CurrentChar() == T_CHAR_SLASH and self.__NextChar() == T_CHAR_STAR: - self.__SetCurrentCharValue( T_CHAR_SPACE) - self.__GetOneChar() - self.__SetCurrentCharValue( T_CHAR_SPACE) - self.__GetOneChar() + elif self._CurrentChar() == TAB_BACK_SLASH and self._NextChar() == T_CHAR_STAR: + self._SetCurrentCharValue(TAB_SPACE_SPLIT) + self._GetOneChar() + self._SetCurrentCharValue(TAB_SPACE_SPLIT) + self._GetOneChar() InComment = True else: - self.__GetOneChar() + self._GetOneChar() # restore from ListOfList to ListOfString self.Profile.FileLinesList = ["".join(list) for list in self.Profile.FileLinesList] @@ -606,30 +558,30 @@ class FdfParser: # nested include support Processed = False MacroDict = {} - while self.__GetNextToken(): + while self._GetNextToken(): - if self.__Token == 'DEFINE': - if not self.__GetNextToken(): + if self._Token == TAB_DEFINE: + if not self._GetNextToken(): raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) - Macro = self.__Token - if not self.__IsToken( "="): + Macro = self._Token + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - Value = self.__GetExpression() + Value = self._GetExpression() MacroDict[Macro] = Value - elif self.__Token == '!include': + elif self._Token == TAB_INCLUDE: Processed = True IncludeLine = self.CurrentLineNumber - IncludeOffset = self.CurrentOffsetWithinLine - len('!include') - if not self.__GetNextToken(): + IncludeOffset = self.CurrentOffsetWithinLine - len(TAB_INCLUDE) + if not self._GetNextToken(): raise Warning("expected include file 
name", self.FileName, self.CurrentLineNumber) - IncFileName = self.__Token + IncFileName = self._Token PreIndex = 0 StartPos = IncFileName.find('$(', PreIndex) EndPos = IncFileName.find(')', StartPos+2) while StartPos != -1 and EndPos != -1: - Macro = IncFileName[StartPos+2 : EndPos] - MacroVal = self.__GetMacroValue(Macro) + Macro = IncFileName[StartPos+2: EndPos] + MacroVal = self._GetMacroValue(Macro) if not MacroVal: if Macro in MacroDict: MacroVal = MacroDict[Macro] @@ -686,9 +638,9 @@ class FdfParser: IncFileProfile.Level = ParentProfile.Level + 1 IncFileProfile.InsertStartLineNumber = InsertAtLine + 1 # deal with remaining portions after "!include filename", if exists. - if self.__GetNextToken(): + if self._GetNextToken(): if self.CurrentLineNumber == CurrentLine: - RemainingLine = self.__CurrentLine()[CurrentOffset:] + RemainingLine = self._CurrentLine()[CurrentOffset:] self.Profile.FileLinesList.insert(self.CurrentLineNumber, RemainingLine) IncFileProfile.InsertAdjust += 1 self.CurrentLineNumber += 1 @@ -704,7 +656,7 @@ class FdfParser: # comment out the processed include file statement TempList = list(self.Profile.FileLinesList[IncludeLine - 1]) - TempList.insert(IncludeOffset, '#') + TempList.insert(IncludeOffset, TAB_COMMENT_SPLIT) self.Profile.FileLinesList[IncludeLine - 1] = ''.join(TempList) if Processed: # Nested and back-to-back support self.Rewind(DestLine = IncFileProfile.InsertStartLineNumber - 1) @@ -713,7 +665,7 @@ class FdfParser: self.Rewind() @staticmethod - def __GetIfListCurrentItemStat(IfList): + def _GetIfListCurrentItemStat(IfList): if len(IfList) == 0: return True @@ -735,29 +687,29 @@ class FdfParser: IfList = [] RegionLayoutLine = 0 ReplacedLine = -1 - while self.__GetNextToken(): + while self._GetNextToken(): # Determine section name and the location dependent macro - if self.__GetIfListCurrentItemStat(IfList): - if self.__Token.startswith('['): - Header = self.__Token - if not self.__Token.endswith(']'): - self.__SkipToToken(']') - Header += self.__SkippedChars + if self._GetIfListCurrentItemStat(IfList): + if self._Token.startswith(TAB_SECTION_START): + Header = self._Token + if not self._Token.endswith(TAB_SECTION_END): + self._SkipToToken(TAB_SECTION_END) + Header += self._SkippedChars if Header.find('$(') != -1: raise Warning("macro cannot be used in section header", self.FileName, self.CurrentLineNumber) - self.__SectionHeaderParser(Header) + self._SectionHeaderParser(Header) continue # Replace macros except in RULE section or out of section - elif self.__CurSection and ReplacedLine != self.CurrentLineNumber: + elif self._CurSection and ReplacedLine != self.CurrentLineNumber: ReplacedLine = self.CurrentLineNumber - self.__UndoToken() + self._UndoToken() CurLine = self.Profile.FileLinesList[ReplacedLine - 1] PreIndex = 0 StartPos = CurLine.find('$(', PreIndex) EndPos = CurLine.find(')', StartPos+2) - while StartPos != -1 and EndPos != -1 and self.__Token not in ['!ifdef', '!ifndef', '!if', '!elseif']: - MacroName = CurLine[StartPos+2 : EndPos] - MacorValue = self.__GetMacroValue(MacroName) + while StartPos != -1 and EndPos != -1 and self._Token not in [TAB_IF_DEF, TAB_IF_N_DEF, TAB_IF, TAB_ELSE_IF]: + MacroName = CurLine[StartPos+2: EndPos] + MacorValue = self._GetMacroValue(MacroName) if MacorValue is not None: CurLine = CurLine.replace('$(' + MacroName + ')', MacorValue, 1) if MacorValue.find('$(') != -1: @@ -771,74 +723,74 @@ class FdfParser: self.Profile.FileLinesList[ReplacedLine - 1] = CurLine continue - if self.__Token == 'DEFINE': - if 
self.__GetIfListCurrentItemStat(IfList): - if not self.__CurSection: + if self._Token == TAB_DEFINE: + if self._GetIfListCurrentItemStat(IfList): + if not self._CurSection: raise Warning("macro cannot be defined in Rule section or out of section", self.FileName, self.CurrentLineNumber) DefineLine = self.CurrentLineNumber - 1 - DefineOffset = self.CurrentOffsetWithinLine - len('DEFINE') - if not self.__GetNextToken(): + DefineOffset = self.CurrentOffsetWithinLine - len(TAB_DEFINE) + if not self._GetNextToken(): raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) - Macro = self.__Token - if not self.__IsToken( "="): + Macro = self._Token + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - Value = self.__GetExpression() - self.__SetMacroValue(Macro, Value) - self.__WipeOffArea.append(((DefineLine, DefineOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) - elif self.__Token == 'SET': - if not self.__GetIfListCurrentItemStat(IfList): + Value = self._GetExpression() + self._SetMacroValue(Macro, Value) + self._WipeOffArea.append(((DefineLine, DefineOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) + elif self._Token == 'SET': + if not self._GetIfListCurrentItemStat(IfList): continue SetLine = self.CurrentLineNumber - 1 SetOffset = self.CurrentOffsetWithinLine - len('SET') - PcdPair = self.__GetNextPcdSettings() + PcdPair = self._GetNextPcdSettings() PcdName = "%s.%s" % (PcdPair[1], PcdPair[0]) - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - Value = self.__GetExpression() - Value = self.__EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) + Value = self._GetExpression() + Value = self._EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) - self.__PcdDict[PcdName] = Value + self._PcdDict[PcdName] = Value self.Profile.PcdDict[PcdPair] = Value self.SetPcdLocalation(PcdPair) FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) self.Profile.PcdFileLineDict[PcdPair] = FileLineTuple - self.__WipeOffArea.append(((SetLine, SetOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) - elif self.__Token in ('!ifdef', '!ifndef', '!if'): - IfStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self.__Token)) + self._WipeOffArea.append(((SetLine, SetOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) + elif self._Token in (TAB_IF_DEF, TAB_IF_N_DEF, TAB_IF): + IfStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self._Token)) IfList.append([IfStartPos, None, None]) - CondLabel = self.__Token - Expression = self.__GetExpression() + CondLabel = self._Token + Expression = self._GetExpression() - if CondLabel == '!if': - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') + if CondLabel == TAB_IF: + ConditionSatisfied = self._EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') else: - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'in') - if CondLabel == '!ifndef': + ConditionSatisfied = self._EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'in') + if CondLabel == TAB_IF_N_DEF: ConditionSatisfied = not ConditionSatisfied BranchDetermined = ConditionSatisfied IfList[-1] = [IfList[-1][0], ConditionSatisfied, BranchDetermined] if ConditionSatisfied: - 
self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) - elif self.__Token in ('!elseif', '!else'): - ElseStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self.__Token)) + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) + elif self._Token in (TAB_ELSE_IF, '!else'): + ElseStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self._Token)) if len(IfList) <= 0: raise Warning("Missing !if statement", self.FileName, self.CurrentLineNumber) if IfList[-1][1]: IfList[-1] = [ElseStartPos, False, True] - self.__WipeOffArea.append((ElseStartPos, (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) + self._WipeOffArea.append((ElseStartPos, (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) else: - self.__WipeOffArea.append((IfList[-1][0], ElseStartPos)) + self._WipeOffArea.append((IfList[-1][0], ElseStartPos)) IfList[-1] = [ElseStartPos, True, IfList[-1][2]] - if self.__Token == '!elseif': - Expression = self.__GetExpression() - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') + if self._Token == TAB_ELSE_IF: + Expression = self._GetExpression() + ConditionSatisfied = self._EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') IfList[-1] = [IfList[-1][0], ConditionSatisfied, IfList[-1][2]] if IfList[-1][1]: @@ -846,14 +798,14 @@ class FdfParser: IfList[-1][1] = False else: IfList[-1][2] = True - self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) - elif self.__Token == '!endif': + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) + elif self._Token == '!endif': if len(IfList) <= 0: raise Warning("Missing !if statement", self.FileName, self.CurrentLineNumber) if IfList[-1][1]: - self.__WipeOffArea.append(((self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len('!endif')), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) + self._WipeOffArea.append(((self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len('!endif')), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) else: - self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) IfList.pop() elif not IfList: # Don't use PCDs inside conditional directive @@ -862,7 +814,7 @@ class FdfParser: continue SetPcd = ShortcutPcdPattern.match(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) if SetPcd: - self.__PcdDict[SetPcd.group('name')] = SetPcd.group('value') + self._PcdDict[SetPcd.group('name')] = SetPcd.group('value') RegionLayoutLine = self.CurrentLineNumber continue RegionSize = RegionSizePattern.match(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) @@ -873,35 +825,35 @@ class FdfParser: if not RegionSizeGuid: RegionLayoutLine = self.CurrentLineNumber + 1 continue - self.__PcdDict[RegionSizeGuid.group('base')] = RegionSize.group('base') - self.__PcdDict[RegionSizeGuid.group('size')] = RegionSize.group('size') + self._PcdDict[RegionSizeGuid.group('base')] = RegionSize.group('base') + self._PcdDict[RegionSizeGuid.group('size')] = RegionSize.group('size') RegionLayoutLine = self.CurrentLineNumber + 1 if IfList: raise Warning("Missing !endif", self.FileName, self.CurrentLineNumber) self.Rewind() - def 
__CollectMacroPcd(self): + def _CollectMacroPcd(self): MacroDict = {} # PCD macro MacroDict.update(GlobalData.gPlatformPcds) - MacroDict.update(self.__PcdDict) + MacroDict.update(self._PcdDict) # Lowest priority MacroDict.update(GlobalData.gPlatformDefines) - if self.__CurSection: + if self._CurSection: # Defines macro - ScopeMacro = self.__MacroDict[TAB_COMMON, TAB_COMMON, TAB_COMMON] + ScopeMacro = self._MacroDict[TAB_COMMON, TAB_COMMON, TAB_COMMON] if ScopeMacro: MacroDict.update(ScopeMacro) # Section macro - ScopeMacro = self.__MacroDict[ - self.__CurSection[0], - self.__CurSection[1], - self.__CurSection[2] + ScopeMacro = self._MacroDict[ + self._CurSection[0], + self._CurSection[1], + self._CurSection[2] ] if ScopeMacro: MacroDict.update(ScopeMacro) @@ -912,15 +864,15 @@ class FdfParser: for Item in GlobalData.BuildOptionPcd: if isinstance(Item, tuple): continue - PcdName, TmpValue = Item.split("=") + PcdName, TmpValue = Item.split(TAB_EQUAL_SPLIT) TmpValue = BuildOptionValue(TmpValue, {}) MacroDict[PcdName.strip()] = TmpValue # Highest priority return MacroDict - def __EvaluateConditional(self, Expression, Line, Op = None, Value = None): - MacroPcdDict = self.__CollectMacroPcd() + def _EvaluateConditional(self, Expression, Line, Op = None, Value = None): + MacroPcdDict = self._CollectMacroPcd() if Op == 'eval': try: if Value: @@ -933,7 +885,7 @@ class FdfParser: # the precise number of line and return the evaluation result # EdkLogger.warn('Parser', "Suspicious expression: %s" % str(Excpt), - File=self.FileName, ExtraData=self.__CurrentLine(), + File=self.FileName, ExtraData=self._CurrentLine(), Line=Line) return Excpt.result except Exception as Excpt: @@ -955,10 +907,10 @@ class FdfParser: Expression = Expression[2:-1] return Expression in MacroPcdDict - ## __IsToken() method + ## _IsToken() method # # Check whether input string is found from current char position along - # If found, the string value is put into self.__Token + # If found, the string value is put into self._Token # # @param self The object pointer # @param String The string to search @@ -966,26 +918,26 @@ class FdfParser: # @retval True Successfully find string, file buffer pointer moved forward # @retval False Not able to find string, file buffer pointer not changed # - def __IsToken(self, String, IgnoreCase = False): - self.__SkipWhiteSpace() + def _IsToken(self, String, IgnoreCase = False): + self._SkipWhiteSpace() # Only consider the same line, no multi-line token allowed StartPos = self.CurrentOffsetWithinLine index = -1 if IgnoreCase: - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(String.upper()) + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(String.upper()) else: - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(String) + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(String) if index == 0: self.CurrentOffsetWithinLine += len(String) - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] return True return False - ## __IsKeyword() method + ## _IsKeyword() method # # Check whether input keyword is found from current char position along, whole word only! 
- # If found, the string value is put into self.__Token + # If found, the string value is put into self._Token # # @param self The object pointer # @param Keyword The string to search @@ -993,117 +945,117 @@ class FdfParser: # @retval True Successfully find string, file buffer pointer moved forward # @retval False Not able to find string, file buffer pointer not changed # - def __IsKeyword(self, KeyWord, IgnoreCase = False): - self.__SkipWhiteSpace() + def _IsKeyword(self, KeyWord, IgnoreCase = False): + self._SkipWhiteSpace() # Only consider the same line, no multi-line token allowed StartPos = self.CurrentOffsetWithinLine index = -1 if IgnoreCase: - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(KeyWord.upper()) + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(KeyWord.upper()) else: - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(KeyWord) + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(KeyWord) if index == 0: - followingChar = self.__CurrentLine()[self.CurrentOffsetWithinLine + len(KeyWord)] - if not str(followingChar).isspace() and followingChar not in SEPERATOR_TUPLE: + followingChar = self._CurrentLine()[self.CurrentOffsetWithinLine + len(KeyWord)] + if not str(followingChar).isspace() and followingChar not in SEPARATORS: return False self.CurrentOffsetWithinLine += len(KeyWord) - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] return True return False - def __GetExpression(self): + def _GetExpression(self): Line = self.Profile.FileLinesList[self.CurrentLineNumber - 1] Index = len(Line) - 1 - while Line[Index] in ['\r', '\n']: + while Line[Index] in [T_CHAR_CR, TAB_LINE_BREAK]: Index -= 1 ExpressionString = self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:Index+1] self.CurrentOffsetWithinLine += len(ExpressionString) ExpressionString = ExpressionString.strip() return ExpressionString - ## __GetNextWord() method + ## _GetNextWord() method # # Get next C name from file lines - # If found, the string value is put into self.__Token + # If found, the string value is put into self._Token # # @param self The object pointer # @retval True Successfully find a C name string, file buffer pointer moved forward # @retval False Not able to find a C name string, file buffer pointer not changed # - def __GetNextWord(self): - self.__SkipWhiteSpace() - if self.__EndOfFile(): + def _GetNextWord(self): + self._SkipWhiteSpace() + if self._EndOfFile(): return False - TempChar = self.__CurrentChar() + TempChar = self._CurrentChar() StartPos = self.CurrentOffsetWithinLine if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_': - self.__GetOneChar() - while not self.__EndOfLine(): - TempChar = self.__CurrentChar() + self._GetOneChar() + while not self._EndOfLine(): + TempChar = self._CurrentChar() if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') \ or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-': - self.__GetOneChar() + self._GetOneChar() else: break - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] return True return False - def __GetNextPcdWord(self): - self.__SkipWhiteSpace() - if self.__EndOfFile(): + def _GetNextPcdWord(self): + self._SkipWhiteSpace() + if 
self._EndOfFile(): return False - TempChar = self.__CurrentChar() + TempChar = self._CurrentChar() StartPos = self.CurrentOffsetWithinLine - if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_' or TempChar == '[' or TempChar == ']': - self.__GetOneChar() - while not self.__EndOfLine(): - TempChar = self.__CurrentChar() + if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_' or TempChar == TAB_SECTION_START or TempChar == TAB_SECTION_END: + self._GetOneChar() + while not self._EndOfLine(): + TempChar = self._CurrentChar() if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') \ - or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-' or TempChar == '[' or TempChar == ']': - self.__GetOneChar() + or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-' or TempChar == TAB_SECTION_START or TempChar == TAB_SECTION_END: + self._GetOneChar() else: break - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] return True return False - ## __GetNextToken() method + ## _GetNextToken() method # # Get next token unit before a seperator - # If found, the string value is put into self.__Token + # If found, the string value is put into self._Token # # @param self The object pointer # @retval True Successfully find a token unit, file buffer pointer moved forward # @retval False Not able to find a token unit, file buffer pointer not changed # - def __GetNextToken(self): + def _GetNextToken(self): # Skip leading spaces, if exist. - self.__SkipWhiteSpace() - if self.__EndOfFile(): + self._SkipWhiteSpace() + if self._EndOfFile(): return False # Record the token start position, the position of the first non-space char. StartPos = self.CurrentOffsetWithinLine StartLine = self.CurrentLineNumber while StartLine == self.CurrentLineNumber: - TempChar = self.__CurrentChar() + TempChar = self._CurrentChar() # Try to find the end char that is not a space and not in seperator tuple. # That is, when we got a space or any char in the tuple, we got the end of token. - if not str(TempChar).isspace() and TempChar not in SEPERATOR_TUPLE: - self.__GetOneChar() + if not str(TempChar).isspace() and TempChar not in SEPARATORS: + self._GetOneChar() # if we happen to meet a seperator as the first char, we must proceed to get it. # That is, we get a token that is a seperator char. nomally it is the boundary of other tokens. 
- elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPERATOR_TUPLE: - self.__GetOneChar() + elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPARATORS: + self._GetOneChar() break else: break @@ -1113,35 +1065,34 @@ class FdfParser: EndPos = self.CurrentOffsetWithinLine if self.CurrentLineNumber != StartLine: EndPos = len(self.Profile.FileLinesList[StartLine-1]) - self.__Token = self.Profile.FileLinesList[StartLine-1][StartPos : EndPos] - if self.__Token.lower() in [TAB_IF, TAB_END_IF, TAB_ELSE_IF, TAB_ELSE, TAB_IF_DEF, TAB_IF_N_DEF, TAB_ERROR, TAB_INCLUDE]: - self.__Token = self.__Token.lower() + self._Token = self.Profile.FileLinesList[StartLine-1][StartPos: EndPos] + if self._Token.lower() in [TAB_IF, TAB_END_IF, TAB_ELSE_IF, TAB_ELSE, TAB_IF_DEF, TAB_IF_N_DEF, TAB_ERROR, TAB_INCLUDE]: + self._Token = self._Token.lower() if StartPos != self.CurrentOffsetWithinLine: return True else: return False - ## __GetNextGuid() method + ## _GetNextGuid() method # # Get next token unit before a seperator - # If found, the GUID string is put into self.__Token + # If found, the GUID string is put into self._Token # # @param self The object pointer # @retval True Successfully find a registry format GUID, file buffer pointer moved forward # @retval False Not able to find a registry format GUID, file buffer pointer not changed # - def __GetNextGuid(self): - - if not self.__GetNextToken(): + def _GetNextGuid(self): + if not self._GetNextToken(): return False - if gGuidPattern.match(self.__Token) is not None: + if gGuidPattern.match(self._Token) is not None: return True else: - self.__UndoToken() + self._UndoToken() return False @staticmethod - def __Verify(Name, Value, Scope): + def _Verify(Name, Value, Scope): # value verification only applies to numeric values. if Scope not in TAB_PCD_NUMERIC_TYPES: return @@ -1157,17 +1108,17 @@ class FdfParser: EdkLogger.error("FdfParser", FORMAT_INVALID, "Too large value for %s." % Name) return True - ## __UndoToken() method + ## _UndoToken() method # # Go back one token unit in file buffer # # @param self The object pointer # - def __UndoToken(self): - self.__UndoOneChar() - while self.__CurrentChar().isspace(): - if not self.__UndoOneChar(): - self.__GetOneChar() + def _UndoToken(self): + self._UndoOneChar() + while self._CurrentChar().isspace(): + if not self._UndoOneChar(): + self._GetOneChar() return @@ -1175,118 +1126,118 @@ class FdfParser: CurrentLine = self.CurrentLineNumber while CurrentLine == self.CurrentLineNumber: - TempChar = self.__CurrentChar() + TempChar = self._CurrentChar() # Try to find the end char that is not a space and not in seperator tuple. # That is, when we got a space or any char in the tuple, we got the end of token. - if not str(TempChar).isspace() and not TempChar in SEPERATOR_TUPLE: - if not self.__UndoOneChar(): + if not str(TempChar).isspace() and not TempChar in SEPARATORS: + if not self._UndoOneChar(): return # if we happen to meet a seperator as the first char, we must proceed to get it. # That is, we get a token that is a seperator char. nomally it is the boundary of other tokens. 
- elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPERATOR_TUPLE: + elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPARATORS: return else: break - self.__GetOneChar() + self._GetOneChar() - ## __GetNextHexNumber() method + ## _GetNextHexNumber() method # # Get next HEX data before a seperator - # If found, the HEX data is put into self.__Token + # If found, the HEX data is put into self._Token # # @param self The object pointer # @retval True Successfully find a HEX data, file buffer pointer moved forward # @retval False Not able to find a HEX data, file buffer pointer not changed # - def __GetNextHexNumber(self): - if not self.__GetNextToken(): + def _GetNextHexNumber(self): + if not self._GetNextToken(): return False - if gHexPatternAll.match(self.__Token): + if gHexPatternAll.match(self._Token): return True else: - self.__UndoToken() + self._UndoToken() return False - ## __GetNextDecimalNumber() method + ## _GetNextDecimalNumber() method # # Get next decimal data before a seperator - # If found, the decimal data is put into self.__Token + # If found, the decimal data is put into self._Token # # @param self The object pointer # @retval True Successfully find a decimal data, file buffer pointer moved forward # @retval False Not able to find a decimal data, file buffer pointer not changed # - def __GetNextDecimalNumber(self): - if not self.__GetNextToken(): + def _GetNextDecimalNumber(self): + if not self._GetNextToken(): return False - if self.__Token.isdigit(): + if self._Token.isdigit(): return True else: - self.__UndoToken() + self._UndoToken() return False - def __GetNextPcdSettings(self): - if not self.__GetNextWord(): + def _GetNextPcdSettings(self): + if not self._GetNextWord(): raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) - pcdTokenSpaceCName = self.__Token + pcdTokenSpaceCName = self._Token - if not self.__IsToken( "."): + if not self._IsToken(TAB_SPLIT): raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) - if not self.__GetNextWord(): + if not self._GetNextWord(): raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) - pcdCName = self.__Token + pcdCName = self._Token Fields = [] - while self.__IsToken("."): - if not self.__GetNextPcdWord(): + while self._IsToken(TAB_SPLIT): + if not self._GetNextPcdWord(): raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) - Fields.append(self.__Token) + Fields.append(self._Token) - return (pcdCName, pcdTokenSpaceCName,".".join(Fields)) + return (pcdCName, pcdTokenSpaceCName,TAB_SPLIT.join(Fields)) - ## __GetStringData() method + ## _GetStringData() method # # Get string contents quoted in "" - # If found, the decimal data is put into self.__Token + # If found, the decimal data is put into self._Token # # @param self The object pointer # @retval True Successfully find a string data, file buffer pointer moved forward # @retval False Not able to find a string data, file buffer pointer not changed # - def __GetStringData(self): - if self.__Token.startswith("\"") or self.__Token.startswith("L\""): - self.__UndoToken() - self.__SkipToToken("\"") + def _GetStringData(self): + if self._Token.startswith(T_CHAR_DOUBLE_QUOTE) or self._Token.startswith("L\""): + self._UndoToken() + self._SkipToToken(T_CHAR_DOUBLE_QUOTE) currentLineNumber = self.CurrentLineNumber - if not self.__SkipToToken("\""): + if 
not self._SkipToToken(T_CHAR_DOUBLE_QUOTE): raise Warning("Missing Quote \" for String", self.FileName, self.CurrentLineNumber) if currentLineNumber != self.CurrentLineNumber: raise Warning("Missing Quote \" for String", self.FileName, self.CurrentLineNumber) - self.__Token = self.__SkippedChars.rstrip('\"') + self._Token = self._SkippedChars.rstrip(T_CHAR_DOUBLE_QUOTE) return True - elif self.__Token.startswith("\'") or self.__Token.startswith("L\'"): - self.__UndoToken() - self.__SkipToToken("\'") + elif self._Token.startswith(T_CHAR_SINGLE_QUOTE) or self._Token.startswith("L\'"): + self._UndoToken() + self._SkipToToken(T_CHAR_SINGLE_QUOTE) currentLineNumber = self.CurrentLineNumber - if not self.__SkipToToken("\'"): + if not self._SkipToToken(T_CHAR_SINGLE_QUOTE): raise Warning("Missing Quote \' for String", self.FileName, self.CurrentLineNumber) if currentLineNumber != self.CurrentLineNumber: raise Warning("Missing Quote \' for String", self.FileName, self.CurrentLineNumber) - self.__Token = self.__SkippedChars.rstrip('\'') + self._Token = self._SkippedChars.rstrip(T_CHAR_SINGLE_QUOTE) return True else: return False - ## __SkipToToken() method + ## _SkipToToken() method # # Search forward in file buffer for the string - # The skipped chars are put into self.__SkippedChars + # The skipped chars are put into self._SkippedChars # # @param self The object pointer # @param String The string to search @@ -1294,25 +1245,25 @@ class FdfParser: # @retval True Successfully find the string, file buffer pointer moved forward # @retval False Not able to find the string, file buffer pointer not changed # - def __SkipToToken(self, String, IgnoreCase = False): + def _SkipToToken(self, String, IgnoreCase = False): StartPos = self.GetFileBufferPos() - self.__SkippedChars = "" - while not self.__EndOfFile(): + self._SkippedChars = "" + while not self._EndOfFile(): index = -1 if IgnoreCase: - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(String.upper()) + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(String.upper()) else: - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(String) + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(String) if index == 0: self.CurrentOffsetWithinLine += len(String) - self.__SkippedChars += String + self._SkippedChars += String return True - self.__SkippedChars += str(self.__CurrentChar()) - self.__GetOneChar() + self._SkippedChars += str(self._CurrentChar()) + self._GetOneChar() - self.SetFileBufferPos( StartPos) - self.__SkippedChars = "" + self.SetFileBufferPos(StartPos) + self._SkippedChars = "" return False ## GetFileBufferPos() method @@ -1343,18 +1294,18 @@ class FdfParser: # @param self The object pointer # def Preprocess(self): - self.__StringToList() + self._StringToList() self.PreprocessFile() self.PreprocessIncludeFile() - self.__StringToList() + self._StringToList() self.PreprocessFile() self.PreprocessConditionalStatement() - self.__StringToList() - for Pos in self.__WipeOffArea: - self.__ReplaceFragment(Pos[0], Pos[1]) + self._StringToList() + for Pos in self._WipeOffArea: + self._ReplaceFragment(Pos[0], Pos[1]) self.Profile.FileLinesList = ["".join(list) for list in self.Profile.FileLinesList] - while self.__GetDefines(): + while self._GetDefines(): pass ## ParseFile() method @@ -1365,19 +1316,18 @@ class FdfParser: # @param self The object pointer # def ParseFile(self): - try: self.Preprocess() - self.__GetError() + self._GetError() # # Keep processing sections of the 
FDF until no new sections or a syntax error is found # - while self.__GetFd() or self.__GetFv() or self.__GetFmp() or self.__GetCapsule() or self.__GetVtf() or self.__GetRule() or self.__GetOptionRom(): + while self._GetFd() or self._GetFv() or self._GetFmp() or self._GetCapsule() or self._GetVtf() or self._GetRule() or self._GetOptionRom(): pass except Warning as X: - self.__UndoToken() - #'\n\tGot Token: \"%s\" from File %s\n' % (self.__Token, FileLineTuple[0]) + \ + self._UndoToken() + #'\n\tGot Token: \"%s\" from File %s\n' % (self._Token, FileLineTuple[0]) + \ # At this point, the closest parent would be the included file itself Profile = GetParentAtLine(X.OriginalLineNumber) if Profile is not None: @@ -1386,7 +1336,7 @@ class FdfParser: else: FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) X.Message += ' near line %d, column %d: %s' \ - % (FileLineTuple[1], self.CurrentOffsetWithinLine + 1, self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :].rstrip('\n').rstrip('\r')) + % (FileLineTuple[1], self.CurrentOffsetWithinLine + 1, self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:].rstrip(TAB_LINE_BREAK).rstrip(T_CHAR_CR)) raise ## SectionParser() method @@ -1403,7 +1353,7 @@ class FdfParser: and not S.startswith("[VTF.") and not S.startswith("[RULE.") and not S.startswith("[OPTIONROM.") and not S.startswith('[FMPPAYLOAD.'): raise Warning("Unknown section or section appear sequence error (The correct sequence should be [DEFINES], [FD.], [FV.], [Capsule.], [VTF.], [Rule.], [OptionRom.], [FMPPAYLOAD.])", self.FileName, self.CurrentLineNumber) - ## __GetDefines() method + ## _GetDefines() method # # Get Defines section contents and store its data into AllMacrosList # @@ -1411,56 +1361,55 @@ class FdfParser: # @retval True Successfully find a Defines # @retval False Not able to find a Defines # - def __GetDefines(self): - - if not self.__GetNextToken(): + def _GetDefines(self): + if not self._GetNextToken(): return False - S = self.__Token.upper() - if S.startswith("[") and not S.startswith("[DEFINES"): + S = self._Token.upper() + if S.startswith(TAB_SECTION_START) and not S.startswith("[DEFINES"): self.SectionParser(S) - self.__UndoToken() + self._UndoToken() return False - self.__UndoToken() - if not self.__IsToken("[DEFINES", True): + self._UndoToken() + if not self._IsToken("[DEFINES", True): FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) raise Warning("expected [DEFINES", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "]"): + if not self._IsToken(TAB_SECTION_END): raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) - while self.__GetNextWord(): + while self._GetNextWord(): # handle the SET statement - if self.__Token == 'SET': - self.__UndoToken() - self.__GetSetStatement(None) + if self._Token == 'SET': + self._UndoToken() + self._GetSetStatement(None) continue - Macro = self.__Token + Macro = self._Token - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if 
not self.__GetNextToken() or self.__Token.startswith('['): + if not self._GetNextToken() or self._Token.startswith(TAB_SECTION_START): raise Warning("expected MACRO value", self.FileName, self.CurrentLineNumber) - Value = self.__Token + Value = self._Token return False - ##__GetError() method - def __GetError(self): + ##_GetError() method + def _GetError(self): #save the Current information CurrentLine = self.CurrentLineNumber CurrentOffset = self.CurrentOffsetWithinLine - while self.__GetNextToken(): - if self.__Token == TAB_ERROR: - EdkLogger.error('FdfParser', ERROR_STATEMENT, self.__CurrentLine().replace(TAB_ERROR, '', 1), File=self.FileName, Line=self.CurrentLineNumber) + while self._GetNextToken(): + if self._Token == TAB_ERROR: + EdkLogger.error('FdfParser', ERROR_STATEMENT, self._CurrentLine().replace(TAB_ERROR, '', 1), File=self.FileName, Line=self.CurrentLineNumber) self.CurrentLineNumber = CurrentLine self.CurrentOffsetWithinLine = CurrentOffset - ## __GetFd() method + ## _GetFd() method # # Get FD section contents and store its data into FD dictionary of self.Profile # @@ -1468,27 +1417,26 @@ class FdfParser: # @retval True Successfully find a FD # @retval False Not able to find a FD # - def __GetFd(self): - - if not self.__GetNextToken(): + def _GetFd(self): + if not self._GetNextToken(): return False - S = self.__Token.upper() - if S.startswith("[") and not S.startswith("[FD."): + S = self._Token.upper() + if S.startswith(TAB_SECTION_START) and not S.startswith("[FD."): if not S.startswith("[FV.") and not S.startswith('[FMPPAYLOAD.') and not S.startswith("[CAPSULE.") \ and not S.startswith("[VTF.") and not S.startswith("[RULE.") and not S.startswith("[OPTIONROM."): raise Warning("Unknown section", self.FileName, self.CurrentLineNumber) - self.__UndoToken() + self._UndoToken() return False - self.__UndoToken() - if not self.__IsToken("[FD.", True): + self._UndoToken() + if not self._IsToken("[FD.", True): FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) raise Warning("expected [FD.]", self.FileName, self.CurrentLineNumber) - FdName = self.__GetUiName() + FdName = self._GetUiName() if FdName == "": if len (self.Profile.FdDict) == 0: FdName = GenFdsGlobalVariable.PlatformName @@ -1502,56 +1450,56 @@ class FdfParser: if self.CurrentFdName in self.Profile.FdDict: raise Warning("Unexpected the same FD name", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "]"): + if not self._IsToken(TAB_SECTION_END): raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) - FdObj = Fd.FD() + FdObj = FD() FdObj.FdUiName = self.CurrentFdName self.Profile.FdDict[self.CurrentFdName] = FdObj if len (self.Profile.FdDict) > 1 and self.Profile.FdNameNotSet: raise Warning("expected all FDs have their name", self.FileName, self.CurrentLineNumber) - Status = self.__GetCreateFile(FdObj) + Status = self._GetCreateFile(FdObj) if not Status: raise Warning("FD name error", self.FileName, self.CurrentLineNumber) - while self.__GetTokenStatements(FdObj): + while self._GetTokenStatements(FdObj): pass for Attr in ("BaseAddress", "Size", "ErasePolarity"): if getattr(FdObj, 
Attr) is None: - self.__GetNextToken() + self._GetNextToken() raise Warning("Keyword %s missing" % Attr, self.FileName, self.CurrentLineNumber) if not FdObj.BlockSizeList: FdObj.BlockSizeList.append((1, FdObj.Size, None)) - self.__GetDefineStatements(FdObj) + self._GetDefineStatements(FdObj) - self.__GetSetStatements(FdObj) + self._GetSetStatements(FdObj) - if not self.__GetRegionLayout(FdObj): + if not self._GetRegionLayout(FdObj): raise Warning("expected region layout", self.FileName, self.CurrentLineNumber) - while self.__GetRegionLayout(FdObj): + while self._GetRegionLayout(FdObj): pass return True - ## __GetUiName() method + ## _GetUiName() method # # Return the UI name of a section # # @param self The object pointer # @retval FdName UI name # - def __GetUiName(self): + def _GetUiName(self): Name = "" - if self.__GetNextWord(): - Name = self.__Token + if self._GetNextWord(): + Name = self._Token return Name - ## __GetCreateFile() method + ## _GetCreateFile() method # # Return the output file name of object # @@ -1559,16 +1507,15 @@ class FdfParser: # @param Obj object whose data will be stored in file # @retval FdName UI name # - def __GetCreateFile(self, Obj): - - if self.__IsKeyword( "CREATE_FILE"): - if not self.__IsToken( "="): + def _GetCreateFile(self, Obj): + if self._IsKeyword("CREATE_FILE"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected file name", self.FileName, self.CurrentLineNumber) - FileName = self.__Token + FileName = self._Token Obj.CreateFileName = FileName return True @@ -1576,25 +1523,25 @@ class FdfParser: def SetPcdLocalation(self,pcdpair): self.Profile.PcdLocalDict[pcdpair] = (self.Profile.FileName,self.CurrentLineNumber) - ## __GetTokenStatements() method + ## _GetTokenStatements() method # # Get token statements # # @param self The object pointer # @param Obj for whom token statement is got # - def __GetTokenStatements(self, Obj): - if self.__IsKeyword( "BaseAddress"): - if not self.__IsToken( "="): + def _GetTokenStatements(self, Obj): + if self._IsKeyword("BaseAddress"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex base address", self.FileName, self.CurrentLineNumber) - Obj.BaseAddress = self.__Token + Obj.BaseAddress = self._Token - if self.__IsToken( "|"): - pcdPair = self.__GetNextPcdSettings() + if self._IsToken(TAB_VALUE_SPLIT): + pcdPair = self._GetNextPcdSettings() Obj.BaseAddressPcd = pcdPair self.Profile.PcdDict[pcdPair] = Obj.BaseAddress self.SetPcdLocalation(pcdPair) @@ -1602,16 +1549,16 @@ class FdfParser: self.Profile.PcdFileLineDict[pcdPair] = FileLineTuple return True - if self.__IsKeyword( "Size"): - if not self.__IsToken( "="): + if self._IsKeyword("Size"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex size", self.FileName, self.CurrentLineNumber) - Size = self.__Token - if self.__IsToken( "|"): - pcdPair = self.__GetNextPcdSettings() + Size = self._Token + if self._IsToken(TAB_VALUE_SPLIT): + pcdPair = self._GetNextPcdSettings() Obj.SizePcd = pcdPair self.Profile.PcdDict[pcdPair] = Size self.SetPcdLocalation(pcdPair) @@ -1620,22 +1567,22 @@ class FdfParser: Obj.Size = 
long(Size, 0) return True - if self.__IsKeyword( "ErasePolarity"): - if not self.__IsToken( "="): + if self._IsKeyword("ErasePolarity"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Erase Polarity", self.FileName, self.CurrentLineNumber) - if self.__Token != "1" and self.__Token != "0": + if self._Token != "1" and self._Token != "0": raise Warning("expected 1 or 0 Erase Polarity", self.FileName, self.CurrentLineNumber) - Obj.ErasePolarity = self.__Token + Obj.ErasePolarity = self._Token return True - return self.__GetBlockStatements(Obj) + return self._GetBlockStatements(Obj) - ## __GetAddressStatements() method + ## _GetAddressStatements() method # # Get address statements # @@ -1644,38 +1591,37 @@ class FdfParser: # @retval True Successfully find # @retval False Not able to find # - def __GetAddressStatements(self, Obj): - - if self.__IsKeyword("BsBaseAddress"): - if not self.__IsToken( "="): + def _GetAddressStatements(self, Obj): + if self._IsKeyword("BsBaseAddress"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): + if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): raise Warning("expected address", self.FileName, self.CurrentLineNumber) - BsAddress = long(self.__Token, 0) + BsAddress = long(self._Token, 0) Obj.BsBaseAddress = BsAddress - if self.__IsKeyword("RtBaseAddress"): - if not self.__IsToken( "="): + if self._IsKeyword("RtBaseAddress"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): + if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): raise Warning("expected address", self.FileName, self.CurrentLineNumber) - RtAddress = long(self.__Token, 0) + RtAddress = long(self._Token, 0) Obj.RtBaseAddress = RtAddress - ## __GetBlockStatements() method + ## _GetBlockStatements() method # # Get block statements # # @param self The object pointer # @param Obj for whom block statement is got # - def __GetBlockStatements(self, Obj): + def _GetBlockStatements(self, Obj): IsBlock = False - while self.__GetBlockStatement(Obj): + while self._GetBlockStatement(Obj): IsBlock = True Item = Obj.BlockSizeList[-1] @@ -1683,7 +1629,7 @@ class FdfParser: raise Warning("expected block statement", self.FileName, self.CurrentLineNumber) return IsBlock - ## __GetBlockStatement() method + ## _GetBlockStatement() method # # Get block statement # @@ -1692,20 +1638,20 @@ class FdfParser: # @retval True Successfully find # @retval False Not able to find # - def __GetBlockStatement(self, Obj): - if not self.__IsKeyword( "BlockSize"): + def _GetBlockStatement(self, Obj): + if not self._IsKeyword("BlockSize"): return False - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber() and not self.__GetNextDecimalNumber(): + if not self._GetNextHexNumber() and not self._GetNextDecimalNumber(): raise Warning("expected Hex or Integer block size", self.FileName, self.CurrentLineNumber) - BlockSize = self.__Token + BlockSize = self._Token BlockSizePcd = None - if self.__IsToken( "|"): - PcdPair = self.__GetNextPcdSettings() + if 
self._IsToken(TAB_VALUE_SPLIT): + PcdPair = self._GetNextPcdSettings() BlockSizePcd = PcdPair self.Profile.PcdDict[PcdPair] = BlockSize self.SetPcdLocalation(PcdPair) @@ -1714,19 +1660,19 @@ class FdfParser: BlockSize = long(BlockSize, 0) BlockNumber = None - if self.__IsKeyword( "NumBlocks"): - if not self.__IsToken( "="): + if self._IsKeyword("NumBlocks"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): + if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): raise Warning("expected block numbers", self.FileName, self.CurrentLineNumber) - BlockNumber = long(self.__Token, 0) + BlockNumber = long(self._Token, 0) Obj.BlockSizeList.append((BlockSize, BlockNumber, BlockSizePcd)) return True - ## __GetDefineStatements() method + ## _GetDefineStatements() method # # Get define statements # @@ -1735,11 +1681,11 @@ class FdfParser: # @retval True Successfully find # @retval False Not able to find # - def __GetDefineStatements(self, Obj): - while self.__GetDefineStatement( Obj): + def _GetDefineStatements(self, Obj): + while self._GetDefineStatement(Obj): pass - ## __GetDefineStatement() method + ## _GetDefineStatement() method # # Get define statement # @@ -1748,24 +1694,24 @@ class FdfParser: # @retval True Successfully find # @retval False Not able to find # - def __GetDefineStatement(self, Obj): - if self.__IsKeyword("DEFINE"): - self.__GetNextToken() - Macro = self.__Token - if not self.__IsToken( "="): + def _GetDefineStatement(self, Obj): + if self._IsKeyword(TAB_DEFINE): + self._GetNextToken() + Macro = self._Token + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected value", self.FileName, self.CurrentLineNumber) - Value = self.__Token + Value = self._Token Macro = '$(' + Macro + ')' Obj.DefineVarDict[Macro] = Value return True return False - ## __GetSetStatements() method + ## _GetSetStatements() method # # Get set statements # @@ -1774,11 +1720,11 @@ class FdfParser: # @retval True Successfully find # @retval False Not able to find # - def __GetSetStatements(self, Obj): - while self.__GetSetStatement(Obj): + def _GetSetStatements(self, Obj): + while self._GetSetStatement(Obj): pass - ## __GetSetStatement() method + ## _GetSetStatement() method # # Get set statement # @@ -1787,15 +1733,15 @@ class FdfParser: # @retval True Successfully find # @retval False Not able to find # - def __GetSetStatement(self, Obj): - if self.__IsKeyword("SET"): - PcdPair = self.__GetNextPcdSettings() + def _GetSetStatement(self, Obj): + if self._IsKeyword("SET"): + PcdPair = self._GetNextPcdSettings() - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - Value = self.__GetExpression() - Value = self.__EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) + Value = self._GetExpression() + Value = self._EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) if Obj: Obj.SetVarDict[PcdPair] = Value @@ -1807,19 +1753,19 @@ class FdfParser: return False - ## __CalcRegionExpr(self) + ## _CalcRegionExpr(self) # # Calculate expression for offset or size of a region # # @return: None if invalid expression # Calculated number if successfully # - def __CalcRegionExpr(self): + def _CalcRegionExpr(self): StartPos = 
self.GetFileBufferPos() Expr = '' PairCount = 0 - while not self.__EndOfFile(): - CurCh = self.__CurrentChar() + while not self._EndOfFile(): + CurCh = self._CurrentChar() if CurCh == '(': PairCount += 1 elif CurCh == ')': @@ -1828,106 +1774,106 @@ class FdfParser: if CurCh in '|\r\n' and PairCount == 0: break Expr += CurCh - self.__GetOneChar() + self._GetOneChar() try: return long( ValueExpression(Expr, - self.__CollectMacroPcd() + self._CollectMacroPcd() )(True), 0) except Exception: self.SetFileBufferPos(StartPos) return None - ## __GetRegionLayout() method + ## _GetRegionLayout() method # # Get region layout for FD # # @param self The object pointer - # @param Fd for whom region is got + # @param theFd for whom region is got # @retval True Successfully find # @retval False Not able to find # - def __GetRegionLayout(self, Fd): - Offset = self.__CalcRegionExpr() + def _GetRegionLayout(self, theFd): + Offset = self._CalcRegionExpr() if Offset is None: return False - RegionObj = Region.Region() + RegionObj = Region() RegionObj.Offset = Offset - Fd.RegionList.append(RegionObj) + theFd.RegionList.append(RegionObj) - if not self.__IsToken( "|"): + if not self._IsToken(TAB_VALUE_SPLIT): raise Warning("expected '|'", self.FileName, self.CurrentLineNumber) - Size = self.__CalcRegionExpr() + Size = self._CalcRegionExpr() if Size is None: raise Warning("expected Region Size", self.FileName, self.CurrentLineNumber) RegionObj.Size = Size - if not self.__GetNextWord(): + if not self._GetNextWord(): return True - if not self.__Token in ("SET", BINARY_FILE_TYPE_FV, "FILE", "DATA", "CAPSULE", "INF"): + if not self._Token in ("SET", BINARY_FILE_TYPE_FV, "FILE", "DATA", "CAPSULE", "INF"): # # If next token is a word which is not a valid FV type, it might be part of [PcdOffset[|PcdSize]] # Or it might be next region's offset described by an expression which starts with a PCD. 
# PcdOffset[|PcdSize] or OffsetPcdExpression|Size # - self.__UndoToken() - IsRegionPcd = (RegionSizeGuidPattern.match(self.__CurrentLine()[self.CurrentOffsetWithinLine:]) or - RegionOffsetPcdPattern.match(self.__CurrentLine()[self.CurrentOffsetWithinLine:])) + self._UndoToken() + IsRegionPcd = (RegionSizeGuidPattern.match(self._CurrentLine()[self.CurrentOffsetWithinLine:]) or + RegionOffsetPcdPattern.match(self._CurrentLine()[self.CurrentOffsetWithinLine:])) if IsRegionPcd: - RegionObj.PcdOffset = self.__GetNextPcdSettings() - self.Profile.PcdDict[RegionObj.PcdOffset] = "0x%08X" % (RegionObj.Offset + long(Fd.BaseAddress, 0)) + RegionObj.PcdOffset = self._GetNextPcdSettings() + self.Profile.PcdDict[RegionObj.PcdOffset] = "0x%08X" % (RegionObj.Offset + long(theFd.BaseAddress, 0)) self.SetPcdLocalation(RegionObj.PcdOffset) - self.__PcdDict['%s.%s' % (RegionObj.PcdOffset[1], RegionObj.PcdOffset[0])] = "0x%x" % RegionObj.Offset + self._PcdDict['%s.%s' % (RegionObj.PcdOffset[1], RegionObj.PcdOffset[0])] = "0x%x" % RegionObj.Offset FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) self.Profile.PcdFileLineDict[RegionObj.PcdOffset] = FileLineTuple - if self.__IsToken( "|"): - RegionObj.PcdSize = self.__GetNextPcdSettings() + if self._IsToken(TAB_VALUE_SPLIT): + RegionObj.PcdSize = self._GetNextPcdSettings() self.Profile.PcdDict[RegionObj.PcdSize] = "0x%08X" % RegionObj.Size self.SetPcdLocalation(RegionObj.PcdSize) - self.__PcdDict['%s.%s' % (RegionObj.PcdSize[1], RegionObj.PcdSize[0])] = "0x%x" % RegionObj.Size + self._PcdDict['%s.%s' % (RegionObj.PcdSize[1], RegionObj.PcdSize[0])] = "0x%x" % RegionObj.Size FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) self.Profile.PcdFileLineDict[RegionObj.PcdSize] = FileLineTuple - if not self.__GetNextWord(): + if not self._GetNextWord(): return True - if self.__Token == "SET": - self.__UndoToken() - self.__GetSetStatements( RegionObj) - if not self.__GetNextWord(): + if self._Token == "SET": + self._UndoToken() + self._GetSetStatements(RegionObj) + if not self._GetNextWord(): return True - elif self.__Token == BINARY_FILE_TYPE_FV: - self.__UndoToken() - self.__GetRegionFvType( RegionObj) + elif self._Token == BINARY_FILE_TYPE_FV: + self._UndoToken() + self._GetRegionFvType(RegionObj) - elif self.__Token == "CAPSULE": - self.__UndoToken() - self.__GetRegionCapType( RegionObj) + elif self._Token == "CAPSULE": + self._UndoToken() + self._GetRegionCapType(RegionObj) - elif self.__Token == "FILE": - self.__UndoToken() - self.__GetRegionFileType(RegionObj) + elif self._Token == "FILE": + self._UndoToken() + self._GetRegionFileType(RegionObj) - elif self.__Token == "INF": - self.__UndoToken() + elif self._Token == "INF": + self._UndoToken() RegionObj.RegionType = "INF" - while self.__IsKeyword("INF"): - self.__UndoToken() - ffsInf = self.__ParseInfStatement() + while self._IsKeyword("INF"): + self._UndoToken() + ffsInf = self._ParseInfStatement() if not ffsInf: break RegionObj.RegionDataList.append(ffsInf) - elif self.__Token == "DATA": - self.__UndoToken() - self.__GetRegionDataType(RegionObj) + elif self._Token == "DATA": + self._UndoToken() + self._GetRegionDataType(RegionObj) else: - self.__UndoToken() - if self.__GetRegionLayout(Fd): + self._UndoToken() + if self._GetRegionLayout(theFd): return True raise Warning("A valid region type was not found. " "Valid types are [SET, FV, CAPSULE, FILE, DATA, INF]. 
This error occurred", @@ -1935,189 +1881,185 @@ class FdfParser: return True - ## __GetRegionFvType() method + ## _GetRegionFvType() method # # Get region fv data for region # # @param self The object pointer # @param RegionObj for whom region data is got # - def __GetRegionFvType(self, RegionObj): - - if not self.__IsKeyword( BINARY_FILE_TYPE_FV): + def _GetRegionFvType(self, RegionObj): + if not self._IsKeyword(BINARY_FILE_TYPE_FV): raise Warning("expected Keyword BINARY_FILE_TYPE_FV", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) RegionObj.RegionType = BINARY_FILE_TYPE_FV - RegionObj.RegionDataList.append((self.__Token).upper()) + RegionObj.RegionDataList.append((self._Token).upper()) - while self.__IsKeyword( BINARY_FILE_TYPE_FV): + while self._IsKeyword(BINARY_FILE_TYPE_FV): - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) - RegionObj.RegionDataList.append((self.__Token).upper()) + RegionObj.RegionDataList.append((self._Token).upper()) - ## __GetRegionCapType() method + ## _GetRegionCapType() method # # Get region capsule data for region # # @param self The object pointer # @param RegionObj for whom region data is got # - def __GetRegionCapType(self, RegionObj): - - if not self.__IsKeyword("CAPSULE"): + def _GetRegionCapType(self, RegionObj): + if not self._IsKeyword("CAPSULE"): raise Warning("expected Keyword 'CAPSULE'", self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected CAPSULE name", self.FileName, self.CurrentLineNumber) RegionObj.RegionType = "CAPSULE" - RegionObj.RegionDataList.append(self.__Token) + RegionObj.RegionDataList.append(self._Token) - while self.__IsKeyword("CAPSULE"): + while self._IsKeyword("CAPSULE"): - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected CAPSULE name", self.FileName, self.CurrentLineNumber) - RegionObj.RegionDataList.append(self.__Token) + RegionObj.RegionDataList.append(self._Token) - ## __GetRegionFileType() method + ## _GetRegionFileType() method # # Get region file data for region # # @param self The object pointer # @param RegionObj for whom region data is got # - def __GetRegionFileType(self, RegionObj): - - if not self.__IsKeyword( "FILE"): + def _GetRegionFileType(self, RegionObj): + if not self._IsKeyword("FILE"): raise Warning("expected Keyword 'FILE'", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected File name", self.FileName, self.CurrentLineNumber) RegionObj.RegionType = "FILE" - RegionObj.RegionDataList.append( self.__Token) + 
RegionObj.RegionDataList.append(self._Token) - while self.__IsKeyword( "FILE"): + while self._IsKeyword("FILE"): - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FILE name", self.FileName, self.CurrentLineNumber) - RegionObj.RegionDataList.append(self.__Token) + RegionObj.RegionDataList.append(self._Token) - ## __GetRegionDataType() method + ## _GetRegionDataType() method # # Get region array data for region # # @param self The object pointer # @param RegionObj for whom region data is got # - def __GetRegionDataType(self, RegionObj): - - if not self.__IsKeyword( "DATA"): + def _GetRegionDataType(self, RegionObj): + if not self._IsKeyword("DATA"): raise Warning("expected Region Data type", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) - if len(self.__Token) > 18: + if len(self._Token) > 18: raise Warning("Hex string can't be converted to a valid UINT64 value", self.FileName, self.CurrentLineNumber) # convert hex string value to byte hex string array - AllString = self.__Token + AllString = self._Token AllStrLen = len (AllString) DataString = "" while AllStrLen > 4: - DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + "," + DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + TAB_COMMA_SPLIT AllStrLen = AllStrLen - 2 - DataString = DataString + AllString[:AllStrLen] + "," + DataString = DataString + AllString[:AllStrLen] + TAB_COMMA_SPLIT # byte value array - if len (self.__Token) <= 4: - while self.__IsToken(","): - if not self.__GetNextHexNumber(): + if len (self._Token) <= 4: + while self._IsToken(TAB_COMMA_SPLIT): + if not self._GetNextHexNumber(): raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) - if len(self.__Token) > 4: + if len(self._Token) > 4: raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) - DataString += self.__Token - DataString += "," + DataString += self._Token + DataString += TAB_COMMA_SPLIT - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - DataString = DataString.rstrip(",") + DataString = DataString.rstrip(TAB_COMMA_SPLIT) RegionObj.RegionType = "DATA" - RegionObj.RegionDataList.append( DataString) + RegionObj.RegionDataList.append(DataString) - while self.__IsKeyword( "DATA"): + while self._IsKeyword("DATA"): - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) - if len(self.__Token) > 18: + if len(self._Token) > 18: raise Warning("Hex string can't be converted to a valid UINT64 value", self.FileName, self.CurrentLineNumber) # convert hex string value to 
byte hex string array - AllString = self.__Token + AllString = self._Token AllStrLen = len (AllString) DataString = "" while AllStrLen > 4: - DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + "," + DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + TAB_COMMA_SPLIT AllStrLen = AllStrLen - 2 - DataString = DataString + AllString[:AllStrLen] + "," + DataString = DataString + AllString[:AllStrLen] + TAB_COMMA_SPLIT # byte value array - if len (self.__Token) <= 4: - while self.__IsToken(","): - if not self.__GetNextHexNumber(): + if len (self._Token) <= 4: + while self._IsToken(TAB_COMMA_SPLIT): + if not self._GetNextHexNumber(): raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) - if len(self.__Token) > 4: + if len(self._Token) > 4: raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) - DataString += self.__Token - DataString += "," + DataString += self._Token + DataString += TAB_COMMA_SPLIT - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - DataString = DataString.rstrip(",") - RegionObj.RegionDataList.append( DataString) + DataString = DataString.rstrip(TAB_COMMA_SPLIT) + RegionObj.RegionDataList.append(DataString) - ## __GetFv() method + ## _GetFv() method # # Get FV section contents and store its data into FV dictionary of self.Profile # @@ -2125,68 +2067,68 @@ class FdfParser: # @retval True Successfully find a FV # @retval False Not able to find a FV # - def __GetFv(self): - if not self.__GetNextToken(): + def _GetFv(self): + if not self._GetNextToken(): return False - S = self.__Token.upper() - if S.startswith("[") and not S.startswith("[FV."): + S = self._Token.upper() + if S.startswith(TAB_SECTION_START) and not S.startswith("[FV."): self.SectionParser(S) - self.__UndoToken() + self._UndoToken() return False - self.__UndoToken() - if not self.__IsToken("[FV.", True): + self._UndoToken() + if not self._IsToken("[FV.", True): FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) - raise Warning("Unknown Keyword '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) + raise Warning("Unknown Keyword '%s'" % self._Token, self.FileName, self.CurrentLineNumber) - FvName = self.__GetUiName() + FvName = self._GetUiName() self.CurrentFvName = FvName.upper() - if not self.__IsToken( "]"): + if not self._IsToken(TAB_SECTION_END): raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) - FvObj = Fv.FV() + FvObj = FV() FvObj.UiFvName = self.CurrentFvName self.Profile.FvDict[self.CurrentFvName] = FvObj - Status = self.__GetCreateFile(FvObj) + Status = self._GetCreateFile(FvObj) if not Status: raise Warning("FV name error", self.FileName, self.CurrentLineNumber) - self.__GetDefineStatements(FvObj) + self._GetDefineStatements(FvObj) - self.__GetAddressStatements(FvObj) + self._GetAddressStatements(FvObj) FvObj.FvExtEntryTypeValue = [] FvObj.FvExtEntryType = [] FvObj.FvExtEntryData = [] while True: - self.__GetSetStatements(FvObj) + self._GetSetStatements(FvObj) - if not 
(self.__GetBlockStatement(FvObj) or self.__GetFvBaseAddress(FvObj) or - self.__GetFvForceRebase(FvObj) or self.__GetFvAlignment(FvObj) or - self.__GetFvAttributes(FvObj) or self.__GetFvNameGuid(FvObj) or - self.__GetFvExtEntryStatement(FvObj) or self.__GetFvNameString(FvObj)): + if not (self._GetBlockStatement(FvObj) or self._GetFvBaseAddress(FvObj) or + self._GetFvForceRebase(FvObj) or self._GetFvAlignment(FvObj) or + self._GetFvAttributes(FvObj) or self._GetFvNameGuid(FvObj) or + self._GetFvExtEntryStatement(FvObj) or self._GetFvNameString(FvObj)): break if FvObj.FvNameString == 'TRUE' and not FvObj.FvNameGuid: raise Warning("FvNameString found but FvNameGuid was not found", self.FileName, self.CurrentLineNumber) - self.__GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) - self.__GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) + self._GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) + self._GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) while True: - isInf = self.__GetInfStatement(FvObj) - isFile = self.__GetFileStatement(FvObj, MacroDict = FvObj.DefineVarDict.copy()) + isInf = self._GetInfStatement(FvObj) + isFile = self._GetFileStatement(FvObj, MacroDict = FvObj.DefineVarDict.copy()) if not isInf and not isFile: break return True - ## __GetFvAlignment() method + ## _GetFvAlignment() method # # Get alignment for FV # @@ -2195,26 +2137,25 @@ class FdfParser: # @retval True Successfully find a alignment statement # @retval False Not able to find a alignment statement # - def __GetFvAlignment(self, Obj): - - if not self.__IsKeyword( "FvAlignment"): + def _GetFvAlignment(self, Obj): + if not self._IsKeyword("FvAlignment"): return False - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected alignment value", self.FileName, self.CurrentLineNumber) - if self.__Token.upper() not in ("1", "2", "4", "8", "16", "32", "64", "128", "256", "512", \ + if self._Token.upper() not in ("1", "2", "4", "8", "16", "32", "64", "128", "256", "512", \ "1K", "2K", "4K", "8K", "16K", "32K", "64K", "128K", "256K", "512K", \ "1M", "2M", "4M", "8M", "16M", "32M", "64M", "128M", "256M", "512M", \ "1G", "2G"): - raise Warning("Unknown alignment value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - Obj.FvAlignment = self.__Token + raise Warning("Unknown alignment value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + Obj.FvAlignment = self._Token return True - ## __GetFvBaseAddress() method + ## _GetFvBaseAddress() method # # Get BaseAddress for FV # @@ -2223,23 +2164,22 @@ class FdfParser: # @retval True Successfully find a FvBaseAddress statement # @retval False Not able to find a FvBaseAddress statement # - def __GetFvBaseAddress(self, Obj): - - if not self.__IsKeyword("FvBaseAddress"): + def _GetFvBaseAddress(self, Obj): + if not self._IsKeyword("FvBaseAddress"): return False - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FV base address value", self.FileName, self.CurrentLineNumber) - if not BaseAddrValuePattern.match(self.__Token.upper()): - raise Warning("Unknown FV base address value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - Obj.FvBaseAddress = self.__Token + if not 
BaseAddrValuePattern.match(self._Token.upper()): + raise Warning("Unknown FV base address value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + Obj.FvBaseAddress = self._Token return True - ## __GetFvForceRebase() method + ## _GetFvForceRebase() method # # Get FvForceRebase for FV # @@ -2248,23 +2188,22 @@ class FdfParser: # @retval True Successfully find a FvForceRebase statement # @retval False Not able to find a FvForceRebase statement # - def __GetFvForceRebase(self, Obj): - - if not self.__IsKeyword("FvForceRebase"): + def _GetFvForceRebase(self, Obj): + if not self._IsKeyword("FvForceRebase"): return False - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FvForceRebase value", self.FileName, self.CurrentLineNumber) - if self.__Token.upper() not in ["TRUE", "FALSE", "0", "0X0", "0X00", "1", "0X1", "0X01"]: - raise Warning("Unknown FvForceRebase value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + if self._Token.upper() not in ["TRUE", "FALSE", "0", "0X0", "0X00", "1", "0X1", "0X01"]: + raise Warning("Unknown FvForceRebase value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) - if self.__Token.upper() in ["TRUE", "1", "0X1", "0X01"]: + if self._Token.upper() in ["TRUE", "1", "0X1", "0X01"]: Obj.FvForceRebase = True - elif self.__Token.upper() in ["FALSE", "0", "0X0", "0X00"]: + elif self._Token.upper() in ["FALSE", "0", "0X0", "0X00"]: Obj.FvForceRebase = False else: Obj.FvForceRebase = None @@ -2272,7 +2211,7 @@ class FdfParser: return True - ## __GetFvAttributes() method + ## _GetFvAttributes() method # # Get attributes for FV # @@ -2280,31 +2219,31 @@ class FdfParser: # @param Obj for whom attribute is got # @retval None # - def __GetFvAttributes(self, FvObj): + def _GetFvAttributes(self, FvObj): IsWordToken = False - while self.__GetNextWord(): + while self._GetNextWord(): IsWordToken = True - name = self.__Token + name = self._Token if name not in ("ERASE_POLARITY", "MEMORY_MAPPED", \ "STICKY_WRITE", "LOCK_CAP", "LOCK_STATUS", "WRITE_ENABLED_CAP", \ "WRITE_DISABLED_CAP", "WRITE_STATUS", "READ_ENABLED_CAP", \ "READ_DISABLED_CAP", "READ_STATUS", "READ_LOCK_CAP", \ "READ_LOCK_STATUS", "WRITE_LOCK_CAP", "WRITE_LOCK_STATUS", \ "WRITE_POLICY_RELIABLE", "WEAK_ALIGNMENT", "FvUsedSizeEnable"): - self.__UndoToken() + self._UndoToken() return False - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken() or self.__Token.upper() not in ("TRUE", "FALSE", "1", "0"): + if not self._GetNextToken() or self._Token.upper() not in ("TRUE", "FALSE", "1", "0"): raise Warning("expected TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) - FvObj.FvAttributeDict[name] = self.__Token + FvObj.FvAttributeDict[name] = self._Token return IsWordToken - ## __GetFvNameGuid() method + ## _GetFvNameGuid() method # # Get FV GUID for FV # @@ -2312,110 +2251,107 @@ class FdfParser: # @param Obj for whom GUID is got # @retval None # - def __GetFvNameGuid(self, FvObj): - - if not self.__IsKeyword( "FvNameGuid"): + def _GetFvNameGuid(self, FvObj): + if not self._IsKeyword("FvNameGuid"): return False - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextGuid(): + if not 
self._GetNextGuid(): raise Warning("expected FV GUID value", self.FileName, self.CurrentLineNumber) - FvObj.FvNameGuid = self.__Token + FvObj.FvNameGuid = self._Token return True - def __GetFvNameString(self, FvObj): - - if not self.__IsKeyword( "FvNameString"): + def _GetFvNameString(self, FvObj): + if not self._IsKeyword("FvNameString"): return False - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken() or self.__Token not in ('TRUE', 'FALSE'): + if not self._GetNextToken() or self._Token not in ('TRUE', 'FALSE'): raise Warning("expected TRUE or FALSE for FvNameString", self.FileName, self.CurrentLineNumber) - FvObj.FvNameString = self.__Token + FvObj.FvNameString = self._Token return True - def __GetFvExtEntryStatement(self, FvObj): - - if not (self.__IsKeyword( "FV_EXT_ENTRY") or self.__IsKeyword( "FV_EXT_ENTRY_TYPE")): + def _GetFvExtEntryStatement(self, FvObj): + if not (self._IsKeyword("FV_EXT_ENTRY") or self._IsKeyword("FV_EXT_ENTRY_TYPE")): return False - if not self.__IsKeyword ("TYPE"): + if not self._IsKeyword ("TYPE"): raise Warning("expected 'TYPE'", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber() and not self.__GetNextDecimalNumber(): + if not self._GetNextHexNumber() and not self._GetNextDecimalNumber(): raise Warning("expected Hex FV extension entry type value At Line ", self.FileName, self.CurrentLineNumber) - FvObj.FvExtEntryTypeValue.append(self.__Token) + FvObj.FvExtEntryTypeValue.append(self._Token) - if not self.__IsToken( "{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - if not self.__IsKeyword ("FILE") and not self.__IsKeyword ("DATA"): + if not self._IsKeyword ("FILE") and not self._IsKeyword ("DATA"): raise Warning("expected 'FILE' or 'DATA'", self.FileName, self.CurrentLineNumber) - FvObj.FvExtEntryType.append(self.__Token) + FvObj.FvExtEntryType.append(self._Token) - if self.__Token == 'DATA': + if self._Token == 'DATA': - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) - if len(self.__Token) > 4: + if len(self._Token) > 4: raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) - DataString = self.__Token - DataString += "," + DataString = self._Token + DataString += TAB_COMMA_SPLIT - while self.__IsToken(","): - if not self.__GetNextHexNumber(): + while self._IsToken(TAB_COMMA_SPLIT): + if not self._GetNextHexNumber(): raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) - if len(self.__Token) > 4: + if len(self._Token) > 4: raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) - DataString += self.__Token - DataString += "," + DataString += self._Token + DataString += TAB_COMMA_SPLIT - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "}"): + if not self._IsToken("}"): 
raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - DataString = DataString.rstrip(",") + DataString = DataString.rstrip(TAB_COMMA_SPLIT) FvObj.FvExtEntryData.append(DataString) - if self.__Token == 'FILE': + if self._Token == 'FILE': - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FV Extension Entry file path At Line ", self.FileName, self.CurrentLineNumber) - FvObj.FvExtEntryData.append(self.__Token) + FvObj.FvExtEntryData.append(self._Token) - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) return True - ## __GetAprioriSection() method + ## _GetAprioriSection() method # # Get token statements # @@ -2425,56 +2361,55 @@ class FdfParser: # @retval True Successfully find apriori statement # @retval False Not able to find apriori statement # - def __GetAprioriSection(self, FvObj, MacroDict = {}): - - if not self.__IsKeyword( "APRIORI"): + def _GetAprioriSection(self, FvObj, MacroDict = {}): + if not self._IsKeyword("APRIORI"): return False - if not self.__IsKeyword("PEI") and not self.__IsKeyword("DXE"): + if not self._IsKeyword("PEI") and not self._IsKeyword("DXE"): raise Warning("expected Apriori file type", self.FileName, self.CurrentLineNumber) - AprType = self.__Token + AprType = self._Token - if not self.__IsToken( "{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - AprSectionObj = AprioriSection.AprioriSection() + AprSectionObj = AprioriSection() AprSectionObj.AprioriType = AprType - self.__GetDefineStatements(AprSectionObj) + self._GetDefineStatements(AprSectionObj) MacroDict.update(AprSectionObj.DefineVarDict) while True: - IsInf = self.__GetInfStatement(AprSectionObj) - IsFile = self.__GetFileStatement( AprSectionObj) + IsInf = self._GetInfStatement(AprSectionObj) + IsFile = self._GetFileStatement(AprSectionObj) if not IsInf and not IsFile: break - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) FvObj.AprioriSectionList.append(AprSectionObj) return True - def __ParseInfStatement(self): - if not self.__IsKeyword("INF"): + def _ParseInfStatement(self): + if not self._IsKeyword("INF"): return None - ffsInf = FfsInfStatement.FfsInfStatement() - self.__GetInfOptions(ffsInf) + ffsInf = FfsInfStatement() + self._GetInfOptions(ffsInf) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected INF file path", self.FileName, self.CurrentLineNumber) - ffsInf.InfFileName = self.__Token + ffsInf.InfFileName = self._Token if not ffsInf.InfFileName.endswith('.inf'): raise Warning("expected .inf file path", self.FileName, self.CurrentLineNumber) ffsInf.CurrentLineNum = self.CurrentLineNumber - ffsInf.CurrentLineContent = self.__CurrentLine() + ffsInf.CurrentLineContent = self._CurrentLine() #Replace $(SAPCE) with real space ffsInf.InfFileName = ffsInf.InfFileName.replace('$(SPACE)', ' ') - if ffsInf.InfFileName.replace('$(WORKSPACE)', '').find('$') == -1: + if ffsInf.InfFileName.replace(TAB_WORKSPACE, '').find('$') == -1: #do case sensitive check for file path ErrorCode, ErrorInfo = PathClass(NormPath(ffsInf.InfFileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() if ErrorCode != 0: @@ -2492,16 +2427,16 @@ class FdfParser: else: 
self.Profile.InfDict['ArchTBD'].append(ffsInf.InfFileName) - if self.__IsToken('|'): - if self.__IsKeyword('RELOCS_STRIPPED'): + if self._IsToken(TAB_VALUE_SPLIT): + if self._IsKeyword('RELOCS_STRIPPED'): ffsInf.KeepReloc = False - elif self.__IsKeyword('RELOCS_RETAINED'): + elif self._IsKeyword('RELOCS_RETAINED'): ffsInf.KeepReloc = True else: - raise Warning("Unknown reloc strip flag '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + raise Warning("Unknown reloc strip flag '%s'" % self._Token, self.FileName, self.CurrentLineNumber) return ffsInf - ## __GetInfStatement() method + ## _GetInfStatement() method # # Get INF statements # @@ -2510,86 +2445,86 @@ class FdfParser: # @retval True Successfully find inf statement # @retval False Not able to find inf statement # - def __GetInfStatement(self, Obj, ForCapsule=False): - ffsInf = self.__ParseInfStatement() + def _GetInfStatement(self, Obj, ForCapsule=False): + ffsInf = self._ParseInfStatement() if not ffsInf: return False if ForCapsule: - capsuleFfs = CapsuleData.CapsuleFfs() - capsuleFfs.Ffs = ffsInf - Obj.CapsuleDataList.append(capsuleFfs) + myCapsuleFfs = CapsuleFfs() + myCapsuleFfs.Ffs = ffsInf + Obj.CapsuleDataList.append(myCapsuleFfs) else: Obj.FfsList.append(ffsInf) return True - ## __GetInfOptions() method + ## _GetInfOptions() method # # Get options for INF # # @param self The object pointer # @param FfsInfObj for whom option is got # - def __GetInfOptions(self, FfsInfObj): - if self.__IsKeyword("FILE_GUID"): - if not self.__IsToken("="): + def _GetInfOptions(self, FfsInfObj): + if self._IsKeyword("FILE_GUID"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextGuid(): + if not self._GetNextGuid(): raise Warning("expected GUID value", self.FileName, self.CurrentLineNumber) - FfsInfObj.OverrideGuid = self.__Token + FfsInfObj.OverrideGuid = self._Token - if self.__IsKeyword( "RuleOverride"): - if not self.__IsToken( "="): + if self._IsKeyword("RuleOverride"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Rule name", self.FileName, self.CurrentLineNumber) - FfsInfObj.Rule = self.__Token + FfsInfObj.Rule = self._Token - if self.__IsKeyword( "VERSION"): - if not self.__IsToken( "="): + if self._IsKeyword("VERSION"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Version", self.FileName, self.CurrentLineNumber) - if self.__GetStringData(): - FfsInfObj.Version = self.__Token + if self._GetStringData(): + FfsInfObj.Version = self._Token - if self.__IsKeyword( BINARY_FILE_TYPE_UI): - if not self.__IsToken( "="): + if self._IsKeyword(BINARY_FILE_TYPE_UI): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected UI name", self.FileName, self.CurrentLineNumber) - if self.__GetStringData(): - FfsInfObj.Ui = self.__Token + if self._GetStringData(): + FfsInfObj.Ui = self._Token - if self.__IsKeyword( "USE"): - if not self.__IsToken( "="): + if self._IsKeyword("USE"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not 
self._GetNextToken(): raise Warning("expected ARCH name", self.FileName, self.CurrentLineNumber) - FfsInfObj.UseArch = self.__Token + FfsInfObj.UseArch = self._Token - if self.__GetNextToken(): - p = re.compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\))') - if p.match(self.__Token) and p.match(self.__Token).span()[1] == len(self.__Token): - FfsInfObj.KeyStringList.append(self.__Token) - if not self.__IsToken(","): + if self._GetNextToken(): + p = compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\))') + if p.match(self._Token) and p.match(self._Token).span()[1] == len(self._Token): + FfsInfObj.KeyStringList.append(self._Token) + if not self._IsToken(TAB_COMMA_SPLIT): return else: - self.__UndoToken() + self._UndoToken() return - while self.__GetNextToken(): - if not p.match(self.__Token): + while self._GetNextToken(): + if not p.match(self._Token): raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) - FfsInfObj.KeyStringList.append(self.__Token) + FfsInfObj.KeyStringList.append(self._Token) - if not self.__IsToken(","): + if not self._IsToken(TAB_COMMA_SPLIT): break - ## __GetFileStatement() method + ## _GetFileStatement() method # # Get FILE statements # @@ -2599,42 +2534,41 @@ class FdfParser: # @retval True Successfully find FILE statement # @retval False Not able to find FILE statement # - def __GetFileStatement(self, Obj, ForCapsule = False, MacroDict = {}): - - if not self.__IsKeyword( "FILE"): + def _GetFileStatement(self, Obj, ForCapsule = False, MacroDict = {}): + if not self._IsKeyword("FILE"): return False - if not self.__GetNextWord(): + if not self._GetNextWord(): raise Warning("expected FFS type", self.FileName, self.CurrentLineNumber) - if ForCapsule and self.__Token == 'DATA': - self.__UndoToken() - self.__UndoToken() + if ForCapsule and self._Token == 'DATA': + self._UndoToken() + self._UndoToken() return False - FfsFileObj = FfsFileStatement.FileStatement() - FfsFileObj.FvFileType = self.__Token + FfsFileObj = FileStatement() + FfsFileObj.FvFileType = self._Token - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextGuid(): - if not self.__GetNextWord(): + if not self._GetNextGuid(): + if not self._GetNextWord(): raise Warning("expected File GUID", self.FileName, self.CurrentLineNumber) - if self.__Token == 'PCD': - if not self.__IsToken( "("): + if self._Token == 'PCD': + if not self._IsToken("("): raise Warning("expected '('", self.FileName, self.CurrentLineNumber) - PcdPair = self.__GetNextPcdSettings() - if not self.__IsToken( ")"): + PcdPair = self._GetNextPcdSettings() + if not self._IsToken(")"): raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' - FfsFileObj.NameGuid = self.__Token + FfsFileObj.NameGuid = self._Token - self.__GetFilePart( FfsFileObj, MacroDict.copy()) + self._GetFilePart(FfsFileObj, MacroDict.copy()) if ForCapsule: - capsuleFfs = CapsuleData.CapsuleFfs() + capsuleFfs = CapsuleFfs() capsuleFfs.Ffs = FfsFileObj Obj.CapsuleDataList.append(capsuleFfs) else: @@ -2642,7 +2576,7 @@ class FdfParser: return True - ## __FileCouldHaveRelocFlag() method + ## _FileCouldHaveRelocFlag() method # # Check whether reloc strip flag can be set for a file type. 
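A note on the __Name -> _Name renames that dominate these hunks: a double leading underscore makes Python mangle the attribute to _ClassName__Name, which hides it from subclasses and unit tests for no real gain here; a single underscore keeps the internal-by-convention marker without the mangling. A tiny standalone illustration (not FdfParser code):

class Base:
    def __Check(self):      # stored on the class as _Base__Check
        return True
    def _Check(self):       # internal by convention, no mangling
        return True

class Child(Base):
    pass

print(hasattr(Child(), '_Base__Check'))   # True  - only the mangled spelling exists
print(hasattr(Child(), '__Check'))        # False - the double-underscore name is hidden
print(hasattr(Child(), '_Check'))         # True  - single underscore is looked up as written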
# @@ -2651,13 +2585,13 @@ class FdfParser: # @retval False No way to have it # @staticmethod - def __FileCouldHaveRelocFlag (FileType): + def _FileCouldHaveRelocFlag (FileType): if FileType in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, 'PEI_DXE_COMBO'): return True else: return False - ## __SectionCouldHaveRelocFlag() method + ## _SectionCouldHaveRelocFlag() method # # Check whether reloc strip flag can be set for a section type. # @@ -2666,13 +2600,13 @@ class FdfParser: # @retval False No way to have it # @staticmethod - def __SectionCouldHaveRelocFlag (SectionType): + def _SectionCouldHaveRelocFlag (SectionType): if SectionType in (BINARY_FILE_TYPE_TE, BINARY_FILE_TYPE_PE32): return True else: return False - ## __GetFilePart() method + ## _GetFilePart() method # # Get components for FILE statement # @@ -2680,58 +2614,57 @@ class FdfParser: # @param FfsFileObj for whom component is got # @param MacroDict dictionary used to replace macro # - def __GetFilePart(self, FfsFileObj, MacroDict = {}): + def _GetFilePart(self, FfsFileObj, MacroDict = {}): + self._GetFileOpts(FfsFileObj) - self.__GetFileOpts( FfsFileObj) - - if not self.__IsToken("{"): - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): - if self.__FileCouldHaveRelocFlag(FfsFileObj.FvFileType): - if self.__Token == 'RELOCS_STRIPPED': + if not self._IsToken("{"): + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): + if self._FileCouldHaveRelocFlag(FfsFileObj.FvFileType): + if self._Token == 'RELOCS_STRIPPED': FfsFileObj.KeepReloc = False else: FfsFileObj.KeepReloc = True else: raise Warning("File type %s could not have reloc strip flag%d" % (FfsFileObj.FvFileType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) - if not self.__IsToken("{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected File name or section data", self.FileName, self.CurrentLineNumber) - if self.__Token == BINARY_FILE_TYPE_FV: - if not self.__IsToken( "="): + if self._Token == BINARY_FILE_TYPE_FV: + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) - FfsFileObj.FvName = self.__Token + FfsFileObj.FvName = self._Token - elif self.__Token == "FD": - if not self.__IsToken( "="): + elif self._Token == "FD": + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FD name", self.FileName, self.CurrentLineNumber) - FfsFileObj.FdName = self.__Token + FfsFileObj.FdName = self._Token - elif self.__Token in ("DEFINE", "APRIORI", "SECTION"): - self.__UndoToken() - self.__GetSectionData( FfsFileObj, MacroDict) + elif self._Token in (TAB_DEFINE, "APRIORI", "SECTION"): + self._UndoToken() + self._GetSectionData(FfsFileObj, MacroDict) elif hasattr(FfsFileObj, 'FvFileType') and FfsFileObj.FvFileType == 'RAW': - self.__UndoToken() - self.__GetRAWData(FfsFileObj, MacroDict) + self._UndoToken() + self._GetRAWData(FfsFileObj, MacroDict) else: FfsFileObj.CurrentLineNum = self.CurrentLineNumber - FfsFileObj.CurrentLineContent = self.__CurrentLine() - FfsFileObj.FileName = self.__Token.replace('$(SPACE)', ' ') - 
self.__VerifyFile(FfsFileObj.FileName) + FfsFileObj.CurrentLineContent = self._CurrentLine() + FfsFileObj.FileName = self._Token.replace('$(SPACE)', ' ') + self._VerifyFile(FfsFileObj.FileName) - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - ## __GetRAWData() method + ## _GetRAWData() method # # Get RAW data for FILE statement # @@ -2739,33 +2672,32 @@ class FdfParser: # @param FfsFileObj for whom section is got # @param MacroDict dictionary used to replace macro # - def __GetRAWData(self, FfsFileObj, MacroDict = {}): + def _GetRAWData(self, FfsFileObj, MacroDict = {}): FfsFileObj.FileName = [] FfsFileObj.SubAlignment = [] while True: AlignValue = None - if self.__GetAlignment(): - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + if self._GetAlignment(): + if self._Token not in ALIGNMENTS: + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) #For FFS, Auto is default option same to "" - if not self.__Token == "Auto": - AlignValue = self.__Token - if not self.__GetNextToken(): + if not self._Token == "Auto": + AlignValue = self._Token + if not self._GetNextToken(): raise Warning("expected Filename value", self.FileName, self.CurrentLineNumber) - FileName = self.__Token.replace('$(SPACE)', ' ') + FileName = self._Token.replace('$(SPACE)', ' ') if FileName == '}': - self.__UndoToken() + self._UndoToken() raise Warning("expected Filename value", self.FileName, self.CurrentLineNumber) - self.__VerifyFile(FileName) + self._VerifyFile(FileName) File = PathClass(NormPath(FileName), GenFdsGlobalVariable.WorkSpaceDir) FfsFileObj.FileName.append(File.Path) FfsFileObj.SubAlignment.append(AlignValue) - if self.__IsToken( "}"): - self.__UndoToken() + if self._IsToken("}"): + self._UndoToken() break if len(FfsFileObj.SubAlignment) == 1: @@ -2773,45 +2705,43 @@ class FdfParser: if len(FfsFileObj.FileName) == 1: FfsFileObj.FileName = FfsFileObj.FileName[0] - ## __GetFileOpts() method + ## _GetFileOpts() method # # Get options for FILE statement # # @param self The object pointer # @param FfsFileObj for whom options is got # - def __GetFileOpts(self, FfsFileObj): - - if self.__GetNextToken(): - if TokenFindPattern.match(self.__Token): - FfsFileObj.KeyStringList.append(self.__Token) - if self.__IsToken(","): - while self.__GetNextToken(): - if not TokenFindPattern.match(self.__Token): + def _GetFileOpts(self, FfsFileObj): + if self._GetNextToken(): + if TokenFindPattern.match(self._Token): + FfsFileObj.KeyStringList.append(self._Token) + if self._IsToken(TAB_COMMA_SPLIT): + while self._GetNextToken(): + if not TokenFindPattern.match(self._Token): raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) - FfsFileObj.KeyStringList.append(self.__Token) + FfsFileObj.KeyStringList.append(self._Token) - if not self.__IsToken(","): + if not self._IsToken(TAB_COMMA_SPLIT): break else: - self.__UndoToken() + self._UndoToken() - if self.__IsKeyword( "FIXED", True): + if self._IsKeyword("FIXED", True): FfsFileObj.Fixed = True - if self.__IsKeyword( "CHECKSUM", True): + if self._IsKeyword("CHECKSUM", True): FfsFileObj.CheckSum = True - if self.__GetAlignment(): - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", 
"128K", - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + if self._GetAlignment(): + if self._Token not in ALIGNMENTS: + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) #For FFS, Auto is default option same to "" - if not self.__Token == "Auto": - FfsFileObj.Alignment = self.__Token + if not self._Token == "Auto": + FfsFileObj.Alignment = self._Token - ## __GetAlignment() method + ## _GetAlignment() method # # Return the alignment value # @@ -2819,18 +2749,18 @@ class FdfParser: # @retval True Successfully find alignment # @retval False Not able to find alignment # - def __GetAlignment(self): - if self.__IsKeyword( "Align", True): - if not self.__IsToken( "="): + def _GetAlignment(self): + if self._IsKeyword("Align", True): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected alignment value", self.FileName, self.CurrentLineNumber) return True return False - ## __GetFilePart() method + ## _GetFilePart() method # # Get section data for FILE statement # @@ -2838,23 +2768,23 @@ class FdfParser: # @param FfsFileObj for whom section is got # @param MacroDict dictionary used to replace macro # - def __GetSectionData(self, FfsFileObj, MacroDict = {}): + def _GetSectionData(self, FfsFileObj, MacroDict = {}): Dict = {} Dict.update(MacroDict) - self.__GetDefineStatements(FfsFileObj) + self._GetDefineStatements(FfsFileObj) Dict.update(FfsFileObj.DefineVarDict) - self.__GetAprioriSection(FfsFileObj, Dict.copy()) - self.__GetAprioriSection(FfsFileObj, Dict.copy()) + self._GetAprioriSection(FfsFileObj, Dict.copy()) + self._GetAprioriSection(FfsFileObj, Dict.copy()) while True: - IsLeafSection = self.__GetLeafSection(FfsFileObj, Dict) - IsEncapSection = self.__GetEncapsulationSec(FfsFileObj) + IsLeafSection = self._GetLeafSection(FfsFileObj, Dict) + IsEncapSection = self._GetEncapsulationSec(FfsFileObj) if not IsLeafSection and not IsEncapSection: break - ## __GetLeafSection() method + ## _GetLeafSection() method # # Get leaf section for Obj # @@ -2864,97 +2794,95 @@ class FdfParser: # @retval True Successfully find section statement # @retval False Not able to find section statement # - def __GetLeafSection(self, Obj, MacroDict = {}): - + def _GetLeafSection(self, Obj, MacroDict = {}): OldPos = self.GetFileBufferPos() - if not self.__IsKeyword( "SECTION"): + if not self._IsKeyword("SECTION"): if len(Obj.SectionList) == 0: raise Warning("expected SECTION", self.FileName, self.CurrentLineNumber) else: return False AlignValue = None - if self.__GetAlignment(): - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - AlignValue = self.__Token + if self._GetAlignment(): + if self._Token not in ALIGNMENTS: + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + AlignValue = self._Token BuildNum = None - if self.__IsKeyword( "BUILD_NUM"): - if not self.__IsToken( "="): + if self._IsKeyword("BUILD_NUM"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise 
Warning("expected Build number value", self.FileName, self.CurrentLineNumber) - BuildNum = self.__Token + BuildNum = self._Token - if self.__IsKeyword( "VERSION"): + if self._IsKeyword("VERSION"): if AlignValue == 'Auto': raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected version", self.FileName, self.CurrentLineNumber) - VerSectionObj = VerSection.VerSection() + VerSectionObj = VerSection() VerSectionObj.Alignment = AlignValue VerSectionObj.BuildNum = BuildNum - if self.__GetStringData(): - VerSectionObj.StringData = self.__Token + if self._GetStringData(): + VerSectionObj.StringData = self._Token else: - VerSectionObj.FileName = self.__Token + VerSectionObj.FileName = self._Token Obj.SectionList.append(VerSectionObj) - elif self.__IsKeyword( BINARY_FILE_TYPE_UI): + elif self._IsKeyword(BINARY_FILE_TYPE_UI): if AlignValue == 'Auto': raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected UI", self.FileName, self.CurrentLineNumber) - UiSectionObj = UiSection.UiSection() + UiSectionObj = UiSection() UiSectionObj.Alignment = AlignValue - if self.__GetStringData(): - UiSectionObj.StringData = self.__Token + if self._GetStringData(): + UiSectionObj.StringData = self._Token else: - UiSectionObj.FileName = self.__Token + UiSectionObj.FileName = self._Token Obj.SectionList.append(UiSectionObj) - elif self.__IsKeyword( "FV_IMAGE"): + elif self._IsKeyword("FV_IMAGE"): if AlignValue == 'Auto': raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FV name or FV file path", self.FileName, self.CurrentLineNumber) - FvName = self.__Token + FvName = self._Token FvObj = None - if self.__IsToken( "{"): - FvObj = Fv.FV() + if self._IsToken("{"): + FvObj = FV() FvObj.UiFvName = FvName.upper() - self.__GetDefineStatements(FvObj) + self._GetDefineStatements(FvObj) MacroDict.update(FvObj.DefineVarDict) - self.__GetBlockStatement(FvObj) - self.__GetSetStatements(FvObj) - self.__GetFvAlignment(FvObj) - self.__GetFvAttributes(FvObj) - self.__GetAprioriSection(FvObj, MacroDict.copy()) - self.__GetAprioriSection(FvObj, MacroDict.copy()) + self._GetBlockStatement(FvObj) + self._GetSetStatements(FvObj) + self._GetFvAlignment(FvObj) + self._GetFvAttributes(FvObj) + self._GetAprioriSection(FvObj, MacroDict.copy()) + self._GetAprioriSection(FvObj, MacroDict.copy()) while True: - IsInf = self.__GetInfStatement(FvObj) - IsFile = self.__GetFileStatement(FvObj, MacroDict.copy()) + IsInf = self._GetInfStatement(FvObj) + IsFile = self._GetFileStatement(FvObj, MacroDict.copy()) if not IsInf and not IsFile: break - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - FvImageSectionObj = 
FvImageSection.FvImageSection() + FvImageSectionObj = FvImageSection() FvImageSectionObj.Alignment = AlignValue if FvObj is not None: FvImageSectionObj.Fv = FvObj @@ -2965,81 +2893,81 @@ class FdfParser: Obj.SectionList.append(FvImageSectionObj) - elif self.__IsKeyword("PEI_DEPEX_EXP") or self.__IsKeyword("DXE_DEPEX_EXP") or self.__IsKeyword("SMM_DEPEX_EXP"): + elif self._IsKeyword("PEI_DEPEX_EXP") or self._IsKeyword("DXE_DEPEX_EXP") or self._IsKeyword("SMM_DEPEX_EXP"): if AlignValue == 'Auto': raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) - DepexSectionObj = DepexSection.DepexSection() + DepexSectionObj = DepexSection() DepexSectionObj.Alignment = AlignValue - DepexSectionObj.DepexType = self.__Token + DepexSectionObj.DepexType = self._Token - if not self.__IsToken( "="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__IsToken( "{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - if not self.__SkipToToken( "}"): + if not self._SkipToToken("}"): raise Warning("expected Depex expression ending '}'", self.FileName, self.CurrentLineNumber) - DepexSectionObj.Expression = self.__SkippedChars.rstrip('}') + DepexSectionObj.Expression = self._SkippedChars.rstrip('}') Obj.SectionList.append(DepexSectionObj) else: - if not self.__GetNextWord(): + if not self._GetNextWord(): raise Warning("expected section type", self.FileName, self.CurrentLineNumber) # Encapsulation section appear, UndoToken and return - if self.__Token == "COMPRESS" or self.__Token == "GUIDED": + if self._Token == "COMPRESS" or self._Token == "GUIDED": self.SetFileBufferPos(OldPos) return False - if self.__Token not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ + if self._Token not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, "SUBTYPE_GUID", BINARY_FILE_TYPE_SMM_DEPEX): - raise Warning("Unknown section type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - if AlignValue == 'Auto'and (not self.__Token == BINARY_FILE_TYPE_PE32) and (not self.__Token == BINARY_FILE_TYPE_TE): + raise Warning("Unknown section type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + if AlignValue == 'Auto'and (not self._Token == BINARY_FILE_TYPE_PE32) and (not self._Token == BINARY_FILE_TYPE_TE): raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) # DataSection - DataSectionObj = DataSection.DataSection() + DataSectionObj = DataSection() DataSectionObj.Alignment = AlignValue - DataSectionObj.SecType = self.__Token + DataSectionObj.SecType = self._Token - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): - if self.__FileCouldHaveRelocFlag(Obj.FvFileType) and self.__SectionCouldHaveRelocFlag(DataSectionObj.SecType): - if self.__Token == 'RELOCS_STRIPPED': + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): + if self._FileCouldHaveRelocFlag(Obj.FvFileType) and self._SectionCouldHaveRelocFlag(DataSectionObj.SecType): + if self._Token == 'RELOCS_STRIPPED': DataSectionObj.KeepReloc = False else: DataSectionObj.KeepReloc = True else: raise Warning("File type %s, section type %s, could not have reloc strip 
flag%d" % (Obj.FvFileType, DataSectionObj.SecType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) - if self.__IsToken("="): - if not self.__GetNextToken(): + if self._IsToken(TAB_EQUAL_SPLIT): + if not self._GetNextToken(): raise Warning("expected section file path", self.FileName, self.CurrentLineNumber) - DataSectionObj.SectFileName = self.__Token - self.__VerifyFile(DataSectionObj.SectFileName) + DataSectionObj.SectFileName = self._Token + self._VerifyFile(DataSectionObj.SectFileName) else: - if not self.__GetCglSection(DataSectionObj): + if not self._GetCglSection(DataSectionObj): return False Obj.SectionList.append(DataSectionObj) return True - ## __VerifyFile + ## _VerifyFile # # Check if file exists or not: # If current phase if GenFds, the file must exist; # If current phase is AutoGen and the file is not in $(OUTPUT_DIRECTORY), the file must exist # @param FileName: File path to be verified. # - def __VerifyFile(self, FileName): - if FileName.replace('$(WORKSPACE)', '').find('$') != -1: + def _VerifyFile(self, FileName): + if FileName.replace(TAB_WORKSPACE, '').find('$') != -1: return - if not GlobalData.gAutoGenPhase or not self.__GetMacroValue("OUTPUT_DIRECTORY") in FileName: + if not GlobalData.gAutoGenPhase or not self._GetMacroValue(TAB_DSC_DEFINES_OUTPUT_DIRECTORY) in FileName: ErrorCode, ErrorInfo = PathClass(NormPath(FileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() if ErrorCode != 0: EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) - ## __GetCglSection() method + ## _GetCglSection() method # # Get compressed or GUIDed section for Obj # @@ -3049,28 +2977,28 @@ class FdfParser: # @retval True Successfully find section statement # @retval False Not able to find section statement # - def __GetCglSection(self, Obj, AlignValue = None): + def _GetCglSection(self, Obj, AlignValue = None): - if self.__IsKeyword( "COMPRESS"): + if self._IsKeyword("COMPRESS"): type = "PI_STD" - if self.__IsKeyword("PI_STD") or self.__IsKeyword("PI_NONE"): - type = self.__Token + if self._IsKeyword("PI_STD") or self._IsKeyword("PI_NONE"): + type = self._Token - if not self.__IsToken("{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - CompressSectionObj = CompressSection.CompressSection() + CompressSectionObj = CompressSection() CompressSectionObj.Alignment = AlignValue CompressSectionObj.CompType = type # Recursive sections... 
while True: - IsLeafSection = self.__GetLeafSection(CompressSectionObj) - IsEncapSection = self.__GetEncapsulationSec(CompressSectionObj) + IsLeafSection = self._GetLeafSection(CompressSectionObj) + IsEncapSection = self._GetEncapsulationSec(CompressSectionObj) if not IsLeafSection and not IsEncapSection: break - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) Obj.SectionList.append(CompressSectionObj) @@ -3079,15 +3007,15 @@ class FdfParser: return True - elif self.__IsKeyword( "GUIDED"): + elif self._IsKeyword("GUIDED"): GuidValue = None - if self.__GetNextGuid(): - GuidValue = self.__Token + if self._GetNextGuid(): + GuidValue = self._Token - AttribDict = self.__GetGuidAttrib() - if not self.__IsToken("{"): + AttribDict = self._GetGuidAttrib() + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - GuidSectionObj = GuidSection.GuidSection() + GuidSectionObj = GuidSection() GuidSectionObj.Alignment = AlignValue GuidSectionObj.NameGuid = GuidValue GuidSectionObj.SectionType = "GUIDED" @@ -3096,12 +3024,12 @@ class FdfParser: GuidSectionObj.ExtraHeaderSize = AttribDict["EXTRA_HEADER_SIZE"] # Recursive sections... while True: - IsLeafSection = self.__GetLeafSection(GuidSectionObj) - IsEncapSection = self.__GetEncapsulationSec(GuidSectionObj) + IsLeafSection = self._GetLeafSection(GuidSectionObj) + IsEncapSection = self._GetEncapsulationSec(GuidSectionObj) if not IsLeafSection and not IsEncapSection: break - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) Obj.SectionList.append(GuidSectionObj) @@ -3109,44 +3037,43 @@ class FdfParser: return False - ## __GetGuidAttri() method + ## _GetGuidAttri() method # # Get attributes for GUID section # # @param self The object pointer # @retval AttribDict Dictionary of key-value pair of section attributes # - def __GetGuidAttrib(self): - + def _GetGuidAttrib(self): AttribDict = {} AttribDict["PROCESSING_REQUIRED"] = "NONE" AttribDict["AUTH_STATUS_VALID"] = "NONE" AttribDict["EXTRA_HEADER_SIZE"] = -1 - while self.__IsKeyword("PROCESSING_REQUIRED") or self.__IsKeyword("AUTH_STATUS_VALID") \ - or self.__IsKeyword("EXTRA_HEADER_SIZE"): - AttribKey = self.__Token + while self._IsKeyword("PROCESSING_REQUIRED") or self._IsKeyword("AUTH_STATUS_VALID") \ + or self._IsKeyword("EXTRA_HEADER_SIZE"): + AttribKey = self._Token - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected TRUE(1)/FALSE(0)/Number", self.FileName, self.CurrentLineNumber) elif AttribKey == "EXTRA_HEADER_SIZE": Base = 10 - if self.__Token[0:2].upper() == "0X": + if self._Token[0:2].upper() == "0X": Base = 16 try: - AttribDict[AttribKey] = int(self.__Token, Base) + AttribDict[AttribKey] = int(self._Token, Base) continue except ValueError: raise Warning("expected Number", self.FileName, self.CurrentLineNumber) - elif self.__Token.upper() not in ("TRUE", "FALSE", "1", "0"): + elif self._Token.upper() not in ("TRUE", "FALSE", "1", "0"): raise Warning("expected TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) - AttribDict[AttribKey] = self.__Token + AttribDict[AttribKey] = self._Token return AttribDict - ## __GetEncapsulationSec() method + ## _GetEncapsulationSec() method # # Get encapsulation section for FILE # @@ -3155,91 
+3082,89 @@ class FdfParser: # @retval True Successfully find section statement # @retval False Not able to find section statement # - def __GetEncapsulationSec(self, FfsFileObj): - + def _GetEncapsulationSec(self, FfsFileObj): OldPos = self.GetFileBufferPos() - if not self.__IsKeyword( "SECTION"): + if not self._IsKeyword("SECTION"): if len(FfsFileObj.SectionList) == 0: raise Warning("expected SECTION", self.FileName, self.CurrentLineNumber) else: return False AlignValue = None - if self.__GetAlignment(): - if self.__Token not in ("8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - AlignValue = self.__Token + if self._GetAlignment(): + if self._Token not in ALIGNMENT_NOAUTO: + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + AlignValue = self._Token - if not self.__GetCglSection(FfsFileObj, AlignValue): + if not self._GetCglSection(FfsFileObj, AlignValue): self.SetFileBufferPos(OldPos) return False else: return True - def __GetFmp(self): - if not self.__GetNextToken(): + def _GetFmp(self): + if not self._GetNextToken(): return False - S = self.__Token.upper() - if S.startswith("[") and not S.startswith("[FMPPAYLOAD."): + S = self._Token.upper() + if S.startswith(TAB_SECTION_START) and not S.startswith("[FMPPAYLOAD."): self.SectionParser(S) - self.__UndoToken() + self._UndoToken() return False - self.__UndoToken() - self.__SkipToToken("[FMPPAYLOAD.", True) - FmpUiName = self.__GetUiName().upper() + self._UndoToken() + self._SkipToToken("[FMPPAYLOAD.", True) + FmpUiName = self._GetUiName().upper() if FmpUiName in self.Profile.FmpPayloadDict: raise Warning("Duplicated FMP UI name found: %s" % FmpUiName, self.FileName, self.CurrentLineNumber) - FmpData = CapsuleData.CapsulePayload() + FmpData = CapsulePayload() FmpData.UiName = FmpUiName - if not self.__IsToken( "]"): + if not self._IsToken(TAB_SECTION_END): raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("The FMP payload section is empty!", self.FileName, self.CurrentLineNumber) FmpKeyList = ['IMAGE_HEADER_INIT_VERSION', 'IMAGE_TYPE_ID', 'IMAGE_INDEX', 'HARDWARE_INSTANCE', 'CERTIFICATE_GUID', 'MONOTONIC_COUNT'] - while self.__Token in FmpKeyList: - Name = self.__Token + while self._Token in FmpKeyList: + Name = self._Token FmpKeyList.remove(Name) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) if Name == 'IMAGE_TYPE_ID': - if not self.__GetNextGuid(): + if not self._GetNextGuid(): raise Warning("expected GUID value for IMAGE_TYPE_ID.", self.FileName, self.CurrentLineNumber) - FmpData.ImageTypeId = self.__Token + FmpData.ImageTypeId = self._Token elif Name == 'CERTIFICATE_GUID': - if not self.__GetNextGuid(): + if not self._GetNextGuid(): raise Warning("expected GUID value for CERTIFICATE_GUID.", self.FileName, self.CurrentLineNumber) - FmpData.Certificate_Guid = self.__Token - if uuid.UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_RSA2048_SHA256_GUID and uuid.UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_PKCS7_GUID: + FmpData.Certificate_Guid = self._Token + if UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_RSA2048_SHA256_GUID and UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_PKCS7_GUID: raise Warning("Only support 
EFI_CERT_TYPE_RSA2048_SHA256_GUID or EFI_CERT_TYPE_PKCS7_GUID for CERTIFICATE_GUID.", self.FileName, self.CurrentLineNumber) else: - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected value of %s" % Name, self.FileName, self.CurrentLineNumber) - Value = self.__Token + Value = self._Token if Name == 'IMAGE_HEADER_INIT_VERSION': - if FdfParser.__Verify(Name, Value, 'UINT8'): + if FdfParser._Verify(Name, Value, 'UINT8'): FmpData.Version = Value elif Name == 'IMAGE_INDEX': - if FdfParser.__Verify(Name, Value, 'UINT8'): + if FdfParser._Verify(Name, Value, 'UINT8'): FmpData.ImageIndex = Value elif Name == 'HARDWARE_INSTANCE': - if FdfParser.__Verify(Name, Value, 'UINT8'): + if FdfParser._Verify(Name, Value, 'UINT8'): FmpData.HardwareInstance = Value elif Name == 'MONOTONIC_COUNT': - if FdfParser.__Verify(Name, Value, 'UINT64'): + if FdfParser._Verify(Name, Value, 'UINT64'): FmpData.MonotonicCount = Value if FmpData.MonotonicCount.upper().startswith('0X'): FmpData.MonotonicCount = (long)(FmpData.MonotonicCount, 16) else: FmpData.MonotonicCount = (long)(FmpData.MonotonicCount) - if not self.__GetNextToken(): + if not self._GetNextToken(): break else: - self.__UndoToken() + self._UndoToken() if (FmpData.MonotonicCount and not FmpData.Certificate_Guid) or (not FmpData.MonotonicCount and FmpData.Certificate_Guid): EdkLogger.error("FdfParser", FORMAT_INVALID, "CERTIFICATE_GUID and MONOTONIC_COUNT must be work as a pair.") @@ -3248,7 +3173,7 @@ class FdfParser: if FmpKeyList and 'IMAGE_TYPE_ID' in FmpKeyList: raise Warning("Missing keywords IMAGE_TYPE_ID in FMP payload section.", self.FileName, self.CurrentLineNumber) # get the Image file and Vendor code file - self.__GetFMPCapsuleData(FmpData) + self._GetFMPCapsuleData(FmpData) if not FmpData.ImageFile: raise Warning("Missing image file in FMP payload section.", self.FileName, self.CurrentLineNumber) # check whether more than one Vendor code file @@ -3257,7 +3182,7 @@ class FdfParser: self.Profile.FmpPayloadDict[FmpUiName] = FmpData return True - ## __GetCapsule() method + ## _GetCapsule() method # # Get capsule section contents and store its data into capsule list of self.Profile # @@ -3265,90 +3190,89 @@ class FdfParser: # @retval True Successfully find a capsule # @retval False Not able to find a capsule # - def __GetCapsule(self): - - if not self.__GetNextToken(): + def _GetCapsule(self): + if not self._GetNextToken(): return False - S = self.__Token.upper() - if S.startswith("[") and not S.startswith("[CAPSULE."): + S = self._Token.upper() + if S.startswith(TAB_SECTION_START) and not S.startswith("[CAPSULE."): self.SectionParser(S) - self.__UndoToken() + self._UndoToken() return False - self.__UndoToken() - if not self.__IsToken("[CAPSULE.", True): + self._UndoToken() + if not self._IsToken("[CAPSULE.", True): FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) raise Warning("expected [Capsule.]", self.FileName, self.CurrentLineNumber) - CapsuleObj = Capsule.Capsule() + CapsuleObj = Capsule() - CapsuleName = self.__GetUiName() + CapsuleName = self._GetUiName() if not CapsuleName: raise Warning("expected 
capsule name", self.FileName, self.CurrentLineNumber) CapsuleObj.UiCapsuleName = CapsuleName.upper() - if not self.__IsToken( "]"): + if not self._IsToken(TAB_SECTION_END): raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) - if self.__IsKeyword("CREATE_FILE"): - if not self.__IsToken( "="): + if self._IsKeyword("CREATE_FILE"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected file name", self.FileName, self.CurrentLineNumber) - CapsuleObj.CreateFile = self.__Token + CapsuleObj.CreateFile = self._Token - self.__GetCapsuleStatements(CapsuleObj) + self._GetCapsuleStatements(CapsuleObj) self.Profile.CapsuleDict[CapsuleObj.UiCapsuleName] = CapsuleObj return True - ## __GetCapsuleStatements() method + ## _GetCapsuleStatements() method # # Get statements for capsule # # @param self The object pointer # @param Obj for whom statements are got # - def __GetCapsuleStatements(self, Obj): - self.__GetCapsuleTokens(Obj) - self.__GetDefineStatements(Obj) - self.__GetSetStatements(Obj) - self.__GetCapsuleData(Obj) + def _GetCapsuleStatements(self, Obj): + self._GetCapsuleTokens(Obj) + self._GetDefineStatements(Obj) + self._GetSetStatements(Obj) + self._GetCapsuleData(Obj) - ## __GetCapsuleTokens() method + ## _GetCapsuleTokens() method # # Get token statements for capsule # # @param self The object pointer # @param Obj for whom token statements are got # - def __GetCapsuleTokens(self, Obj): - if not self.__GetNextToken(): + def _GetCapsuleTokens(self, Obj): + if not self._GetNextToken(): return False - while self.__Token in ("CAPSULE_GUID", "CAPSULE_HEADER_SIZE", "CAPSULE_FLAGS", "OEM_CAPSULE_FLAGS", "CAPSULE_HEADER_INIT_VERSION"): - Name = self.__Token.strip() - if not self.__IsToken("="): + while self._Token in ("CAPSULE_GUID", "CAPSULE_HEADER_SIZE", "CAPSULE_FLAGS", "OEM_CAPSULE_FLAGS", "CAPSULE_HEADER_INIT_VERSION"): + Name = self._Token.strip() + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected value", self.FileName, self.CurrentLineNumber) if Name == 'CAPSULE_FLAGS': - if not self.__Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): + if not self._Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): raise Warning("expected PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, self.CurrentLineNumber) - Value = self.__Token.strip() - while self.__IsToken(","): - Value += ',' - if not self.__GetNextToken(): + Value = self._Token.strip() + while self._IsToken(TAB_COMMA_SPLIT): + Value += TAB_COMMA_SPLIT + if not self._GetNextToken(): raise Warning("expected value", self.FileName, self.CurrentLineNumber) - if not self.__Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): + if not self._Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): raise Warning("expected PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, self.CurrentLineNumber) - Value += self.__Token.strip() + Value += self._Token.strip() elif Name == 'OEM_CAPSULE_FLAGS': - Value = self.__Token.strip() + Value = self._Token.strip() if not Value.upper().startswith('0X'): raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) try: @@ -3357,51 +3281,49 @@ class FdfParser: 
raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) if not 0x0000 <= Value <= 0xFFFF: raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) - Value = self.__Token.strip() + Value = self._Token.strip() else: - Value = self.__Token.strip() + Value = self._Token.strip() Obj.TokensDict[Name] = Value - if not self.__GetNextToken(): + if not self._GetNextToken(): return False - self.__UndoToken() + self._UndoToken() - ## __GetCapsuleData() method + ## _GetCapsuleData() method # # Get capsule data for capsule # # @param self The object pointer # @param Obj for whom capsule data are got # - def __GetCapsuleData(self, Obj): - + def _GetCapsuleData(self, Obj): while True: - IsInf = self.__GetInfStatement(Obj, True) - IsFile = self.__GetFileStatement(Obj, True) - IsFv = self.__GetFvStatement(Obj) - IsFd = self.__GetFdStatement(Obj) - IsAnyFile = self.__GetAnyFileStatement(Obj) - IsAfile = self.__GetAfileStatement(Obj) - IsFmp = self.__GetFmpStatement(Obj) + IsInf = self._GetInfStatement(Obj, True) + IsFile = self._GetFileStatement(Obj, True) + IsFv = self._GetFvStatement(Obj) + IsFd = self._GetFdStatement(Obj) + IsAnyFile = self._GetAnyFileStatement(Obj) + IsAfile = self._GetAfileStatement(Obj) + IsFmp = self._GetFmpStatement(Obj) if not (IsInf or IsFile or IsFv or IsFd or IsAnyFile or IsAfile or IsFmp): break - ## __GetFMPCapsuleData() method + ## _GetFMPCapsuleData() method # # Get capsule data for FMP capsule # # @param self The object pointer # @param Obj for whom capsule data are got # - def __GetFMPCapsuleData(self, Obj): - + def _GetFMPCapsuleData(self, Obj): while True: - IsFv = self.__GetFvStatement(Obj, True) - IsFd = self.__GetFdStatement(Obj, True) - IsAnyFile = self.__GetAnyFileStatement(Obj, True) + IsFv = self._GetFvStatement(Obj, True) + IsFd = self._GetFdStatement(Obj, True) + IsAnyFile = self._GetAnyFileStatement(Obj, True) if not (IsFv or IsFd or IsAnyFile): break - ## __GetFvStatement() method + ## _GetFvStatement() method # # Get FV for capsule # @@ -3410,32 +3332,31 @@ class FdfParser: # @retval True Successfully find a FV statement # @retval False Not able to find a FV statement # - def __GetFvStatement(self, CapsuleObj, FMPCapsule = False): - - if not self.__IsKeyword(BINARY_FILE_TYPE_FV): + def _GetFvStatement(self, CapsuleObj, FMPCapsule = False): + if not self._IsKeyword(BINARY_FILE_TYPE_FV): return False - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) - if self.__Token.upper() not in self.Profile.FvDict: + if self._Token.upper() not in self.Profile.FvDict: raise Warning("FV name does not exist", self.FileName, self.CurrentLineNumber) - CapsuleFv = CapsuleData.CapsuleFv() - CapsuleFv.FvName = self.__Token + myCapsuleFv = CapsuleFv() + myCapsuleFv.FvName = self._Token if FMPCapsule: if not CapsuleObj.ImageFile: - CapsuleObj.ImageFile.append(CapsuleFv) + CapsuleObj.ImageFile.append(myCapsuleFv) else: - CapsuleObj.VendorCodeFile.append(CapsuleFv) + CapsuleObj.VendorCodeFile.append(myCapsuleFv) else: - CapsuleObj.CapsuleDataList.append(CapsuleFv) + CapsuleObj.CapsuleDataList.append(myCapsuleFv) return True - ## __GetFdStatement() method + ## _GetFdStatement() method # # Get FD for capsule # @@ -3444,74 +3365,73 @@ class FdfParser: # @retval True Successfully 
find a FD statement # @retval False Not able to find a FD statement # - def __GetFdStatement(self, CapsuleObj, FMPCapsule = False): - - if not self.__IsKeyword("FD"): + def _GetFdStatement(self, CapsuleObj, FMPCapsule = False): + if not self._IsKeyword("FD"): return False - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected FD name", self.FileName, self.CurrentLineNumber) - if self.__Token.upper() not in self.Profile.FdDict: + if self._Token.upper() not in self.Profile.FdDict: raise Warning("FD name does not exist", self.FileName, self.CurrentLineNumber) - CapsuleFd = CapsuleData.CapsuleFd() - CapsuleFd.FdName = self.__Token + myCapsuleFd = CapsuleFd() + myCapsuleFd.FdName = self._Token if FMPCapsule: if not CapsuleObj.ImageFile: - CapsuleObj.ImageFile.append(CapsuleFd) + CapsuleObj.ImageFile.append(myCapsuleFd) else: - CapsuleObj.VendorCodeFile.append(CapsuleFd) + CapsuleObj.VendorCodeFile.append(myCapsuleFd) else: - CapsuleObj.CapsuleDataList.append(CapsuleFd) + CapsuleObj.CapsuleDataList.append(myCapsuleFd) return True - def __GetFmpStatement(self, CapsuleObj): - if not self.__IsKeyword("FMP_PAYLOAD"): - if not self.__IsKeyword("FMP"): + def _GetFmpStatement(self, CapsuleObj): + if not self._IsKeyword("FMP_PAYLOAD"): + if not self._IsKeyword("FMP"): return False - if not self.__IsKeyword("PAYLOAD"): - self.__UndoToken() + if not self._IsKeyword("PAYLOAD"): + self._UndoToken() return False - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected payload name after FMP_PAYLOAD =", self.FileName, self.CurrentLineNumber) - Payload = self.__Token.upper() + Payload = self._Token.upper() if Payload not in self.Profile.FmpPayloadDict: - raise Warning("This FMP Payload does not exist: %s" % self.__Token, self.FileName, self.CurrentLineNumber) + raise Warning("This FMP Payload does not exist: %s" % self._Token, self.FileName, self.CurrentLineNumber) CapsuleObj.FmpPayloadList.append(self.Profile.FmpPayloadDict[Payload]) return True - def __ParseRawFileStatement(self): - if not self.__IsKeyword("FILE"): + def _ParseRawFileStatement(self): + if not self._IsKeyword("FILE"): return None - if not self.__IsKeyword("DATA"): - self.__UndoToken() + if not self._IsKeyword("DATA"): + self._UndoToken() return None - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected File name", self.FileName, self.CurrentLineNumber) - AnyFileName = self.__Token - self.__VerifyFile(AnyFileName) + AnyFileName = self._Token + self._VerifyFile(AnyFileName) if not os.path.isabs(AnyFileName): AnyFileName = mws.join(GenFdsGlobalVariable.WorkSpaceDir, AnyFileName) return AnyFileName - ## __GetAnyFileStatement() method + ## _GetAnyFileStatement() method # # Get AnyFile for capsule # @@ -3520,23 +3440,23 @@ class FdfParser: # @retval True Successfully find a Anyfile statement # @retval False Not able to find a AnyFile statement # - def __GetAnyFileStatement(self, CapsuleObj, FMPCapsule = False): - AnyFileName = self.__ParseRawFileStatement() + def _GetAnyFileStatement(self, CapsuleObj, FMPCapsule = False): + AnyFileName = 
self._ParseRawFileStatement() if not AnyFileName: return False - CapsuleAnyFile = CapsuleData.CapsuleAnyFile() - CapsuleAnyFile.FileName = AnyFileName + myCapsuleAnyFile = CapsuleAnyFile() + myCapsuleAnyFile.FileName = AnyFileName if FMPCapsule: if not CapsuleObj.ImageFile: - CapsuleObj.ImageFile.append(CapsuleAnyFile) + CapsuleObj.ImageFile.append(myCapsuleAnyFile) else: - CapsuleObj.VendorCodeFile.append(CapsuleAnyFile) + CapsuleObj.VendorCodeFile.append(myCapsuleAnyFile) else: - CapsuleObj.CapsuleDataList.append(CapsuleAnyFile) + CapsuleObj.CapsuleDataList.append(myCapsuleAnyFile) return True - ## __GetAfileStatement() method + ## _GetAfileStatement() method # # Get Afile for capsule # @@ -3545,18 +3465,17 @@ class FdfParser: # @retval True Successfully find a Afile statement # @retval False Not able to find a Afile statement # - def __GetAfileStatement(self, CapsuleObj): - - if not self.__IsKeyword("APPEND"): + def _GetAfileStatement(self, CapsuleObj): + if not self._IsKeyword("APPEND"): return False - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Afile name", self.FileName, self.CurrentLineNumber) - AfileName = self.__Token + AfileName = self._Token AfileBaseName = os.path.basename(AfileName) if os.path.splitext(AfileBaseName)[1] not in [".bin", ".BIN", ".Bin", ".dat", ".DAT", ".Dat", ".data", ".DATA", ".Data"]: @@ -3565,19 +3484,19 @@ class FdfParser: if not os.path.isabs(AfileName): AfileName = GenFdsGlobalVariable.ReplaceWorkspaceMacro(AfileName) - self.__VerifyFile(AfileName) + self._VerifyFile(AfileName) else: if not os.path.exists(AfileName): raise Warning('%s does not exist' % AfileName, self.FileName, self.CurrentLineNumber) else: pass - CapsuleAfile = CapsuleData.CapsuleAfile() - CapsuleAfile.FileName = AfileName - CapsuleObj.CapsuleDataList.append(CapsuleAfile) + myCapsuleAfile = CapsuleAfile() + myCapsuleAfile.FileName = AfileName + CapsuleObj.CapsuleDataList.append(myCapsuleAfile) return True - ## __GetRule() method + ## _GetRule() method # # Get Rule section contents and store its data into rule list of self.Profile # @@ -3585,143 +3504,140 @@ class FdfParser: # @retval True Successfully find a Rule # @retval False Not able to find a Rule # - def __GetRule(self): - - if not self.__GetNextToken(): + def _GetRule(self): + if not self._GetNextToken(): return False - S = self.__Token.upper() - if S.startswith("[") and not S.startswith("[RULE."): + S = self._Token.upper() + if S.startswith(TAB_SECTION_START) and not S.startswith("[RULE."): self.SectionParser(S) - self.__UndoToken() + self._UndoToken() return False - self.__UndoToken() - if not self.__IsToken("[Rule.", True): + self._UndoToken() + if not self._IsToken("[Rule.", True): FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) raise Warning("expected [Rule.]", self.FileName, self.CurrentLineNumber) - if not self.__SkipToToken("."): + if not self._SkipToToken(TAB_SPLIT): raise Warning("expected '.'", self.FileName, 
self.CurrentLineNumber) - Arch = self.__SkippedChars.rstrip(".") + Arch = self._SkippedChars.rstrip(TAB_SPLIT) if Arch.upper() not in ARCH_SET_FULL: raise Warning("Unknown Arch '%s'" % Arch, self.FileName, self.CurrentLineNumber) - ModuleType = self.__GetModuleType() + ModuleType = self._GetModuleType() TemplateName = "" - if self.__IsToken("."): - if not self.__GetNextWord(): + if self._IsToken(TAB_SPLIT): + if not self._GetNextWord(): raise Warning("expected template name", self.FileName, self.CurrentLineNumber) - TemplateName = self.__Token + TemplateName = self._Token - if not self.__IsToken( "]"): + if not self._IsToken(TAB_SECTION_END): raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) - RuleObj = self.__GetRuleFileStatements() + RuleObj = self._GetRuleFileStatements() RuleObj.Arch = Arch.upper() RuleObj.ModuleType = ModuleType RuleObj.TemplateName = TemplateName - if TemplateName == '' : + if TemplateName == '': self.Profile.RuleDict['RULE' + \ - '.' + \ + TAB_SPLIT + \ Arch.upper() + \ - '.' + \ + TAB_SPLIT + \ ModuleType.upper() ] = RuleObj - else : + else: self.Profile.RuleDict['RULE' + \ - '.' + \ + TAB_SPLIT + \ Arch.upper() + \ - '.' + \ + TAB_SPLIT + \ ModuleType.upper() + \ - '.' + \ + TAB_SPLIT + \ TemplateName.upper() ] = RuleObj return True - ## __GetModuleType() method + ## _GetModuleType() method # # Return the module type # # @param self The object pointer # @retval string module type # - def __GetModuleType(self): - - if not self.__GetNextWord(): + def _GetModuleType(self): + if not self._GetNextWord(): raise Warning("expected Module type", self.FileName, self.CurrentLineNumber) - if self.__Token.upper() not in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, SUP_MODULE_DXE_CORE, \ + if self._Token.upper() not in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, SUP_MODULE_DXE_CORE, \ SUP_MODULE_DXE_DRIVER, SUP_MODULE_DXE_SAL_DRIVER, \ SUP_MODULE_DXE_SMM_DRIVER, SUP_MODULE_DXE_RUNTIME_DRIVER, \ - SUP_MODULE_UEFI_DRIVER, SUP_MODULE_UEFI_APPLICATION, SUP_MODULE_USER_DEFINED, "DEFAULT", SUP_MODULE_BASE, \ + SUP_MODULE_UEFI_DRIVER, SUP_MODULE_UEFI_APPLICATION, SUP_MODULE_USER_DEFINED, TAB_DEFAULT, SUP_MODULE_BASE, \ EDK_COMPONENT_TYPE_SECURITY_CORE, EDK_COMPONENT_TYPE_COMBINED_PEIM_DRIVER, EDK_COMPONENT_TYPE_PIC_PEIM, EDK_COMPONENT_TYPE_RELOCATABLE_PEIM, \ "PE32_PEIM", EDK_COMPONENT_TYPE_BS_DRIVER, EDK_COMPONENT_TYPE_RT_DRIVER, EDK_COMPONENT_TYPE_SAL_RT_DRIVER, EDK_COMPONENT_TYPE_APPLICATION, "ACPITABLE", SUP_MODULE_SMM_CORE, SUP_MODULE_MM_STANDALONE, SUP_MODULE_MM_CORE_STANDALONE): - raise Warning("Unknown Module type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - return self.__Token + raise Warning("Unknown Module type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + return self._Token - ## __GetFileExtension() method + ## _GetFileExtension() method # # Return the file extension # # @param self The object pointer # @retval string file name extension # - def __GetFileExtension(self): - if not self.__IsToken("."): + def _GetFileExtension(self): + if not self._IsToken(TAB_SPLIT): raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) Ext = "" - if self.__GetNextToken(): - if FileExtensionPattern.match(self.__Token): - Ext = self.__Token - return '.' 
+ Ext + if self._GetNextToken(): + if FileExtensionPattern.match(self._Token): + Ext = self._Token + return TAB_SPLIT + Ext else: - raise Warning("Unknown file extension '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + raise Warning("Unknown file extension '%s'" % self._Token, self.FileName, self.CurrentLineNumber) else: raise Warning("expected file extension", self.FileName, self.CurrentLineNumber) - ## __GetRuleFileStatement() method + ## _GetRuleFileStatement() method # # Get rule contents # # @param self The object pointer # @retval Rule Rule object # - def __GetRuleFileStatements(self): - - if not self.__IsKeyword("FILE"): + def _GetRuleFileStatements(self): + if not self._IsKeyword("FILE"): raise Warning("expected FILE", self.FileName, self.CurrentLineNumber) - if not self.__GetNextWord(): + if not self._GetNextWord(): raise Warning("expected FFS type", self.FileName, self.CurrentLineNumber) - Type = self.__Token.strip().upper() + Type = self._Token.strip().upper() if Type not in ("RAW", "FREEFORM", SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM,\ "PEI_DXE_COMBO", "DRIVER", SUP_MODULE_DXE_CORE, EDK_COMPONENT_TYPE_APPLICATION, "FV_IMAGE", "SMM", SUP_MODULE_SMM_CORE, SUP_MODULE_MM_STANDALONE, SUP_MODULE_MM_CORE_STANDALONE): - raise Warning("Unknown FV type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + raise Warning("Unknown FV type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__IsKeyword("$(NAMED_GUID)"): - if not self.__GetNextWord(): + if not self._IsKeyword("$(NAMED_GUID)"): + if not self._GetNextWord(): raise Warning("expected $(NAMED_GUID)", self.FileName, self.CurrentLineNumber) - if self.__Token == 'PCD': - if not self.__IsToken( "("): + if self._Token == 'PCD': + if not self._IsToken("("): raise Warning("expected '('", self.FileName, self.CurrentLineNumber) - PcdPair = self.__GetNextPcdSettings() - if not self.__IsToken( ")"): + PcdPair = self._GetNextPcdSettings() + if not self._IsToken(")"): raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' - NameGuid = self.__Token + NameGuid = self._Token KeepReloc = None - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): - if self.__FileCouldHaveRelocFlag(Type): - if self.__Token == 'RELOCS_STRIPPED': + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): + if self._FileCouldHaveRelocFlag(Type): + if self._Token == 'RELOCS_STRIPPED': KeepReloc = False else: KeepReloc = True @@ -3729,111 +3645,109 @@ class FdfParser: raise Warning("File type %s could not have reloc strip flag%d" % (Type, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) KeyStringList = [] - if self.__GetNextToken(): - if TokenFindPattern.match(self.__Token): - KeyStringList.append(self.__Token) - if self.__IsToken(","): - while self.__GetNextToken(): - if not TokenFindPattern.match(self.__Token): + if self._GetNextToken(): + if TokenFindPattern.match(self._Token): + KeyStringList.append(self._Token) + if self._IsToken(TAB_COMMA_SPLIT): + while self._GetNextToken(): + if not TokenFindPattern.match(self._Token): raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) - KeyStringList.append(self.__Token) + KeyStringList.append(self._Token) 
- if not self.__IsToken(","): + if not self._IsToken(TAB_COMMA_SPLIT): break else: - self.__UndoToken() + self._UndoToken() Fixed = False - if self.__IsKeyword("Fixed", True): + if self._IsKeyword("Fixed", True): Fixed = True CheckSum = False - if self.__IsKeyword("CheckSum", True): + if self._IsKeyword("CheckSum", True): CheckSum = True AlignValue = "" - if self.__GetAlignment(): - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + if self._GetAlignment(): + if self._Token not in ALIGNMENTS: + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) #For FFS, Auto is default option same to "" - if not self.__Token == "Auto": - AlignValue = self.__Token + if not self._Token == "Auto": + AlignValue = self._Token - if self.__IsToken("{"): + if self._IsToken("{"): # Complex file rule expected - Rule = RuleComplexFile.RuleComplexFile() - Rule.FvFileType = Type - Rule.NameGuid = NameGuid - Rule.Alignment = AlignValue - Rule.CheckSum = CheckSum - Rule.Fixed = Fixed - Rule.KeyStringList = KeyStringList + NewRule = RuleComplexFile() + NewRule.FvFileType = Type + NewRule.NameGuid = NameGuid + NewRule.Alignment = AlignValue + NewRule.CheckSum = CheckSum + NewRule.Fixed = Fixed + NewRule.KeyStringList = KeyStringList if KeepReloc is not None: - Rule.KeepReloc = KeepReloc + NewRule.KeepReloc = KeepReloc while True: - IsEncapsulate = self.__GetRuleEncapsulationSection(Rule) - IsLeaf = self.__GetEfiSection(Rule) + IsEncapsulate = self._GetRuleEncapsulationSection(NewRule) + IsLeaf = self._GetEfiSection(NewRule) if not IsEncapsulate and not IsLeaf: break - if not self.__IsToken("}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - return Rule + return NewRule else: # Simple file rule expected - if not self.__GetNextWord(): + if not self._GetNextWord(): raise Warning("expected leaf section type", self.FileName, self.CurrentLineNumber) - SectionName = self.__Token + SectionName = self._Token if SectionName not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ BINARY_FILE_TYPE_UI, BINARY_FILE_TYPE_PEI_DEPEX, "VERSION", "SUBTYPE_GUID", BINARY_FILE_TYPE_SMM_DEPEX): raise Warning("Unknown leaf section name '%s'" % SectionName, self.FileName, self.CurrentLineNumber) - if self.__IsKeyword("Fixed", True): + if self._IsKeyword("Fixed", True): Fixed = True - if self.__IsKeyword("CheckSum", True): + if self._IsKeyword("CheckSum", True): CheckSum = True SectAlignment = "" - if self.__GetAlignment(): - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - if self.__Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == BINARY_FILE_TYPE_TE): + if self._GetAlignment(): + if self._Token not in ALIGNMENTS: + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + if self._Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == BINARY_FILE_TYPE_TE): raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) - 
SectAlignment = self.__Token + SectAlignment = self._Token Ext = None - if self.__IsToken('|'): - Ext = self.__GetFileExtension() - elif not self.__GetNextToken(): + if self._IsToken(TAB_VALUE_SPLIT): + Ext = self._GetFileExtension() + elif not self._GetNextToken(): raise Warning("expected File name", self.FileName, self.CurrentLineNumber) - Rule = RuleSimpleFile.RuleSimpleFile() - Rule.SectionType = SectionName - Rule.FvFileType = Type - Rule.NameGuid = NameGuid - Rule.Alignment = AlignValue - Rule.SectAlignment = SectAlignment - Rule.CheckSum = CheckSum - Rule.Fixed = Fixed - Rule.KeyStringList = KeyStringList + NewRule = RuleSimpleFile() + NewRule.SectionType = SectionName + NewRule.FvFileType = Type + NewRule.NameGuid = NameGuid + NewRule.Alignment = AlignValue + NewRule.SectAlignment = SectAlignment + NewRule.CheckSum = CheckSum + NewRule.Fixed = Fixed + NewRule.KeyStringList = KeyStringList if KeepReloc is not None: - Rule.KeepReloc = KeepReloc - Rule.FileExtension = Ext - Rule.FileName = self.__Token - return Rule + NewRule.KeepReloc = KeepReloc + NewRule.FileExtension = Ext + NewRule.FileName = self._Token + return NewRule - ## __GetEfiSection() method + ## _GetEfiSection() method # # Get section list for Rule # @@ -3842,127 +3756,124 @@ class FdfParser: # @retval True Successfully find section statement # @retval False Not able to find section statement # - def __GetEfiSection(self, Obj): - + def _GetEfiSection(self, Obj): OldPos = self.GetFileBufferPos() - if not self.__GetNextWord(): + if not self._GetNextWord(): return False - SectionName = self.__Token + SectionName = self._Token if SectionName not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, BINARY_FILE_TYPE_SMM_DEPEX): - self.__UndoToken() + self._UndoToken() return False if SectionName == "FV_IMAGE": - FvImageSectionObj = FvImageSection.FvImageSection() - if self.__IsKeyword("FV_IMAGE"): + FvImageSectionObj = FvImageSection() + if self._IsKeyword("FV_IMAGE"): pass - if self.__IsToken( "{"): - FvObj = Fv.FV() - self.__GetDefineStatements(FvObj) - self.__GetBlockStatement(FvObj) - self.__GetSetStatements(FvObj) - self.__GetFvAlignment(FvObj) - self.__GetFvAttributes(FvObj) - self.__GetAprioriSection(FvObj) - self.__GetAprioriSection(FvObj) + if self._IsToken("{"): + FvObj = FV() + self._GetDefineStatements(FvObj) + self._GetBlockStatement(FvObj) + self._GetSetStatements(FvObj) + self._GetFvAlignment(FvObj) + self._GetFvAttributes(FvObj) + self._GetAprioriSection(FvObj) + self._GetAprioriSection(FvObj) while True: - IsInf = self.__GetInfStatement(FvObj) - IsFile = self.__GetFileStatement(FvObj) + IsInf = self._GetInfStatement(FvObj) + IsFile = self._GetFileStatement(FvObj) if not IsInf and not IsFile: break - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) FvImageSectionObj.Fv = FvObj FvImageSectionObj.FvName = None else: - if not self.__IsKeyword(BINARY_FILE_TYPE_FV): - raise Warning("expected BINARY_FILE_TYPE_FV", self.FileName, self.CurrentLineNumber) - FvImageSectionObj.FvFileType = self.__Token + if not self._IsKeyword(BINARY_FILE_TYPE_FV): + raise Warning("expected 'FV'", self.FileName, self.CurrentLineNumber) + FvImageSectionObj.FvFileType = self._Token - if self.__GetAlignment(): - if self.__Token not in ("8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", - 
"256K", "512K", "1M", "2M", "4M", "8M", "16M"): - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - FvImageSectionObj.Alignment = self.__Token + if self._GetAlignment(): + if self._Token not in ALIGNMENT_NOAUTO: + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + FvImageSectionObj.Alignment = self._Token - if self.__IsToken('|'): - FvImageSectionObj.FvFileExtension = self.__GetFileExtension() - elif self.__GetNextToken(): - if self.__Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ + if self._IsToken(TAB_VALUE_SPLIT): + FvImageSectionObj.FvFileExtension = self._GetFileExtension() + elif self._GetNextToken(): + if self._Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, BINARY_FILE_TYPE_SMM_DEPEX): - FvImageSectionObj.FvFileName = self.__Token + FvImageSectionObj.FvFileName = self._Token else: - self.__UndoToken() + self._UndoToken() else: raise Warning("expected FV file name", self.FileName, self.CurrentLineNumber) Obj.SectionList.append(FvImageSectionObj) return True - EfiSectionObj = EfiSection.EfiSection() + EfiSectionObj = EfiSection() EfiSectionObj.SectionType = SectionName - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected file type", self.FileName, self.CurrentLineNumber) - if self.__Token == "STRING": - if not self.__RuleSectionCouldHaveString(EfiSectionObj.SectionType): + if self._Token == "STRING": + if not self._RuleSectionCouldHaveString(EfiSectionObj.SectionType): raise Warning("%s section could NOT have string data%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) - if not self.__IsToken('='): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Quoted String", self.FileName, self.CurrentLineNumber) - if self.__GetStringData(): - EfiSectionObj.StringData = self.__Token + if self._GetStringData(): + EfiSectionObj.StringData = self._Token - if self.__IsKeyword("BUILD_NUM"): - if not self.__RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): + if self._IsKeyword("BUILD_NUM"): + if not self._RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): raise Warning("%s section could NOT have BUILD_NUM%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Build number", self.FileName, self.CurrentLineNumber) - EfiSectionObj.BuildNum = self.__Token + EfiSectionObj.BuildNum = self._Token else: - EfiSectionObj.FileType = self.__Token - self.__CheckRuleSectionFileType(EfiSectionObj.SectionType, EfiSectionObj.FileType) + EfiSectionObj.FileType = self._Token + self._CheckRuleSectionFileType(EfiSectionObj.SectionType, EfiSectionObj.FileType) - if self.__IsKeyword("Optional"): - if not self.__RuleSectionCouldBeOptional(EfiSectionObj.SectionType): + if self._IsKeyword("Optional"): + if not 
self._RuleSectionCouldBeOptional(EfiSectionObj.SectionType): raise Warning("%s section could NOT be optional%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) EfiSectionObj.Optional = True - if self.__IsKeyword("BUILD_NUM"): - if not self.__RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): + if self._IsKeyword("BUILD_NUM"): + if not self._RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): raise Warning("%s section could NOT have BUILD_NUM%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Build number", self.FileName, self.CurrentLineNumber) - EfiSectionObj.BuildNum = self.__Token + EfiSectionObj.BuildNum = self._Token - if self.__GetAlignment(): - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - if self.__Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == BINARY_FILE_TYPE_TE): + if self._GetAlignment(): + if self._Token not in ALIGNMENTS: + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + if self._Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == BINARY_FILE_TYPE_TE): raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) - EfiSectionObj.Alignment = self.__Token + EfiSectionObj.Alignment = self._Token - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): - if self.__SectionCouldHaveRelocFlag(EfiSectionObj.SectionType): - if self.__Token == 'RELOCS_STRIPPED': + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): + if self._SectionCouldHaveRelocFlag(EfiSectionObj.SectionType): + if self._Token == 'RELOCS_STRIPPED': EfiSectionObj.KeepReloc = False else: EfiSectionObj.KeepReloc = True @@ -3972,35 +3883,35 @@ class FdfParser: raise Warning("Section type %s could not have reloc strip flag" % EfiSectionObj.SectionType, self.FileName, self.CurrentLineNumber) - if self.__IsToken('|'): - EfiSectionObj.FileExtension = self.__GetFileExtension() - elif self.__GetNextToken(): - if self.__Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ + if self._IsToken(TAB_VALUE_SPLIT): + EfiSectionObj.FileExtension = self._GetFileExtension() + elif self._GetNextToken(): + if self._Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, BINARY_FILE_TYPE_SMM_DEPEX): - if self.__Token.startswith('PCD'): - self.__UndoToken() - self.__GetNextWord() + if self._Token.startswith('PCD'): + self._UndoToken() + self._GetNextWord() - if self.__Token == 'PCD': - if not self.__IsToken( "("): + if self._Token == 'PCD': + if not self._IsToken("("): raise Warning("expected '('", self.FileName, self.CurrentLineNumber) - PcdPair = self.__GetNextPcdSettings() - if not self.__IsToken( ")"): + PcdPair = 
self._GetNextPcdSettings() + if not self._IsToken(")"): raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' - EfiSectionObj.FileName = self.__Token + EfiSectionObj.FileName = self._Token else: - self.__UndoToken() + self._UndoToken() else: raise Warning("expected section file name", self.FileName, self.CurrentLineNumber) Obj.SectionList.append(EfiSectionObj) return True - ## __RuleSectionCouldBeOptional() method + ## _RuleSectionCouldBeOptional() method # # Get whether a section could be optional # @@ -4009,13 +3920,13 @@ class FdfParser: # @retval False section never optional # @staticmethod - def __RuleSectionCouldBeOptional(SectionType): + def _RuleSectionCouldBeOptional(SectionType): if SectionType in (BINARY_FILE_TYPE_DXE_DEPEX, BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, "RAW", BINARY_FILE_TYPE_SMM_DEPEX): return True else: return False - ## __RuleSectionCouldHaveBuildNum() method + ## _RuleSectionCouldHaveBuildNum() method # # Get whether a section could have build number information # @@ -4024,13 +3935,13 @@ class FdfParser: # @retval False section never have build number information # @staticmethod - def __RuleSectionCouldHaveBuildNum(SectionType): + def _RuleSectionCouldHaveBuildNum(SectionType): if SectionType in ("VERSION"): return True else: return False - ## __RuleSectionCouldHaveString() method + ## _RuleSectionCouldHaveString() method # # Get whether a section could have string # @@ -4039,13 +3950,13 @@ class FdfParser: # @retval False section never have string # @staticmethod - def __RuleSectionCouldHaveString(SectionType): + def _RuleSectionCouldHaveString(SectionType): if SectionType in (BINARY_FILE_TYPE_UI, "VERSION"): return True else: return False - ## __CheckRuleSectionFileType() method + ## _CheckRuleSectionFileType() method # # Get whether a section matches a file type # @@ -4053,7 +3964,7 @@ class FdfParser: # @param SectionType The section type to check # @param FileType The file type to check # - def __CheckRuleSectionFileType(self, SectionType, FileType): + def _CheckRuleSectionFileType(self, SectionType, FileType): if SectionType == "COMPAT16": if FileType not in ("COMPAT16", "SEC_COMPAT16"): raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) @@ -4085,53 +3996,53 @@ class FdfParser: if FileType not in (BINARY_FILE_TYPE_PE32, "SEC_GUID"): raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) - ## __GetRuleEncapsulationSection() method + ## _GetRuleEncapsulationSection() method # # Get encapsulation section for Rule # # @param self The object pointer - # @param Rule for whom section is got + # @param theRule for whom section is got # @retval True Successfully find section statement # @retval False Not able to find section statement # - def __GetRuleEncapsulationSection(self, Rule): - if self.__IsKeyword( "COMPRESS"): + def _GetRuleEncapsulationSection(self, theRule): + if self._IsKeyword("COMPRESS"): Type = "PI_STD" - if self.__IsKeyword("PI_STD") or self.__IsKeyword("PI_NONE"): - Type = self.__Token + if self._IsKeyword("PI_STD") or self._IsKeyword("PI_NONE"): + Type = self._Token - if not self.__IsToken("{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - CompressSectionObj = CompressSection.CompressSection() + CompressSectionObj = CompressSection() 
CompressSectionObj.CompType = Type # Recursive sections... while True: - IsEncapsulate = self.__GetRuleEncapsulationSection(CompressSectionObj) - IsLeaf = self.__GetEfiSection(CompressSectionObj) + IsEncapsulate = self._GetRuleEncapsulationSection(CompressSectionObj) + IsLeaf = self._GetEfiSection(CompressSectionObj) if not IsEncapsulate and not IsLeaf: break - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - Rule.SectionList.append(CompressSectionObj) + theRule.SectionList.append(CompressSectionObj) return True - elif self.__IsKeyword( "GUIDED"): + elif self._IsKeyword("GUIDED"): GuidValue = None - if self.__GetNextGuid(): - GuidValue = self.__Token + if self._GetNextGuid(): + GuidValue = self._Token - if self.__IsKeyword( "$(NAMED_GUID)"): - GuidValue = self.__Token + if self._IsKeyword("$(NAMED_GUID)"): + GuidValue = self._Token - AttribDict = self.__GetGuidAttrib() + AttribDict = self._GetGuidAttrib() - if not self.__IsToken("{"): + if not self._IsToken("{"): raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) - GuidSectionObj = GuidSection.GuidSection() + GuidSectionObj = GuidSection() GuidSectionObj.NameGuid = GuidValue GuidSectionObj.SectionType = "GUIDED" GuidSectionObj.ProcessRequired = AttribDict["PROCESSING_REQUIRED"] @@ -4140,20 +4051,20 @@ class FdfParser: # Efi sections... while True: - IsEncapsulate = self.__GetRuleEncapsulationSection(GuidSectionObj) - IsLeaf = self.__GetEfiSection(GuidSectionObj) + IsEncapsulate = self._GetRuleEncapsulationSection(GuidSectionObj) + IsLeaf = self._GetEfiSection(GuidSectionObj) if not IsEncapsulate and not IsLeaf: break - if not self.__IsToken( "}"): + if not self._IsToken("}"): raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) - Rule.SectionList.append(GuidSectionObj) + theRule.SectionList.append(GuidSectionObj) return True return False - ## __GetVtf() method + ## _GetVtf() method # # Get VTF section contents and store its data into VTF list of self.Profile # @@ -4161,69 +4072,69 @@ class FdfParser: # @retval True Successfully find a VTF # @retval False Not able to find a VTF # - def __GetVtf(self): - if not self.__GetNextToken(): + def _GetVtf(self): + if not self._GetNextToken(): return False - S = self.__Token.upper() - if S.startswith("[") and not S.startswith("[VTF."): + S = self._Token.upper() + if S.startswith(TAB_SECTION_START) and not S.startswith("[VTF."): self.SectionParser(S) - self.__UndoToken() + self._UndoToken() return False - self.__UndoToken() - if not self.__IsToken("[VTF.", True): + self._UndoToken() + if not self._IsToken("[VTF.", True): FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) raise Warning("expected [VTF.]", self.FileName, self.CurrentLineNumber) - if not self.__SkipToToken("."): + if not self._SkipToToken(TAB_SPLIT): raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) - Arch = self.__SkippedChars.rstrip(".").upper() - if Arch not in ("IA32", "X64", "IPF", "ARM", "AARCH64"): + Arch = self._SkippedChars.rstrip(TAB_SPLIT).upper() + if Arch not in {TAB_ARCH_IA32, 
TAB_ARCH_X64, TAB_ARCH_IPF, TAB_ARCH_ARM, TAB_ARCH_AARCH64}: raise Warning("Unknown Arch '%s'" % Arch, self.FileName, self.CurrentLineNumber) - if not self.__GetNextWord(): + if not self._GetNextWord(): raise Warning("expected VTF name", self.FileName, self.CurrentLineNumber) - Name = self.__Token.upper() + Name = self._Token.upper() - VtfObj = Vtf.Vtf() + VtfObj = Vtf() VtfObj.UiName = Name VtfObj.KeyArch = Arch - if self.__IsToken(","): - if not self.__GetNextWord(): + if self._IsToken(TAB_COMMA_SPLIT): + if not self._GetNextWord(): raise Warning("expected Arch list", self.FileName, self.CurrentLineNumber) - if self.__Token.upper() not in ("IA32", "X64", "IPF", "ARM", "AARCH64"): - raise Warning("Unknown Arch '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - VtfObj.ArchList = self.__Token.upper() + if self._Token.upper() not in (TAB_ARCH_IA32, TAB_ARCH_X64, TAB_ARCH_IPF, TAB_ARCH_ARM, TAB_ARCH_AARCH64): + raise Warning("Unknown Arch '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + VtfObj.ArchList = self._Token.upper() - if not self.__IsToken( "]"): + if not self._IsToken(TAB_SECTION_END): raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) - if self.__IsKeyword("IA32_RST_BIN"): - if not self.__IsToken("="): + if self._IsKeyword("IA32_RST_BIN"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Reset file", self.FileName, self.CurrentLineNumber) - VtfObj.ResetBin = self.__Token - if VtfObj.ResetBin.replace('$(WORKSPACE)', '').find('$') == -1: + VtfObj.ResetBin = self._Token + if VtfObj.ResetBin.replace(TAB_WORKSPACE, '').find('$') == -1: #check for file path ErrorCode, ErrorInfo = PathClass(NormPath(VtfObj.ResetBin), GenFdsGlobalVariable.WorkSpaceDir).Validate() if ErrorCode != 0: EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) - while self.__GetComponentStatement(VtfObj): + while self._GetComponentStatement(VtfObj): pass self.Profile.VtfList.append(VtfObj) return True - ## __GetComponentStatement() method + ## _GetComponentStatement() method # # Get components in VTF # @@ -4232,132 +4143,132 @@ class FdfParser: # @retval True Successfully find a component # @retval False Not able to find a component # - def __GetComponentStatement(self, VtfObj): - if not self.__IsKeyword("COMP_NAME"): + def _GetComponentStatement(self, VtfObj): + if not self._IsKeyword("COMP_NAME"): return False - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextWord(): + if not self._GetNextWord(): raise Warning("expected Component Name", self.FileName, self.CurrentLineNumber) - CompStatementObj = ComponentStatement.ComponentStatement() - CompStatementObj.CompName = self.__Token + CompStatementObj = ComponentStatement() + CompStatementObj.CompName = self._Token - if not self.__IsKeyword("COMP_LOC"): + if not self._IsKeyword("COMP_LOC"): raise Warning("expected COMP_LOC", self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) CompStatementObj.CompLoc = "" - if self.__GetNextWord(): - CompStatementObj.CompLoc = self.__Token - if self.__IsToken('|'): - if not self.__GetNextWord(): + if self._GetNextWord(): + CompStatementObj.CompLoc = self._Token + if self._IsToken(TAB_VALUE_SPLIT): + if not 
self._GetNextWord(): raise Warning("Expected Region Name", self.FileName, self.CurrentLineNumber) - if self.__Token not in ("F", "N", "S"): #, "H", "L", "PH", "PL"): not support - raise Warning("Unknown location type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + if self._Token not in ("F", "N", "S"): #, "H", "L", "PH", "PL"): not support + raise Warning("Unknown location type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) - CompStatementObj.FilePos = self.__Token + CompStatementObj.FilePos = self._Token else: self.CurrentLineNumber += 1 self.CurrentOffsetWithinLine = 0 - if not self.__IsKeyword("COMP_TYPE"): + if not self._IsKeyword("COMP_TYPE"): raise Warning("expected COMP_TYPE", self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Component type", self.FileName, self.CurrentLineNumber) - if self.__Token not in ("FIT", "PAL_B", "PAL_A", "OEM"): - if not self.__Token.startswith("0x") or len(self.__Token) < 3 or len(self.__Token) > 4 or \ - not self.__Token[2] in string.hexdigits or not self.__Token[-1] in string.hexdigits: - raise Warning("Unknown location type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - CompStatementObj.CompType = self.__Token + if self._Token not in ("FIT", "PAL_B", "PAL_A", "OEM"): + if not self._Token.startswith("0x") or len(self._Token) < 3 or len(self._Token) > 4 or \ + not self._Token[2] in hexdigits or not self._Token[-1] in hexdigits: + raise Warning("Unknown location type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + CompStatementObj.CompType = self._Token - if not self.__IsKeyword("COMP_VER"): + if not self._IsKeyword("COMP_VER"): raise Warning("expected COMP_VER", self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Component version", self.FileName, self.CurrentLineNumber) - Pattern = re.compile('-$|[0-9a-fA-F]{1,2}\.[0-9a-fA-F]{1,2}$', re.DOTALL) - if Pattern.match(self.__Token) is None: - raise Warning("Unknown version format '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - CompStatementObj.CompVer = self.__Token + Pattern = compile('-$|[0-9a-fA-F]{1,2}\.[0-9a-fA-F]{1,2}$', DOTALL) + if Pattern.match(self._Token) is None: + raise Warning("Unknown version format '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + CompStatementObj.CompVer = self._Token - if not self.__IsKeyword("COMP_CS"): + if not self._IsKeyword("COMP_CS"): raise Warning("expected COMP_CS", self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Component CS", self.FileName, self.CurrentLineNumber) - if self.__Token not in ("1", "0"): - raise Warning("Unknown Component CS '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) - CompStatementObj.CompCs = self.__Token + if self._Token not in ("1", "0"): + raise Warning("Unknown Component CS '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + CompStatementObj.CompCs = self._Token - if not 
self.__IsKeyword("COMP_BIN"): + if not self._IsKeyword("COMP_BIN"): raise Warning("expected COMP_BIN", self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Component file", self.FileName, self.CurrentLineNumber) - CompStatementObj.CompBin = self.__Token - if CompStatementObj.CompBin != '-' and CompStatementObj.CompBin.replace('$(WORKSPACE)', '').find('$') == -1: + CompStatementObj.CompBin = self._Token + if CompStatementObj.CompBin != '-' and CompStatementObj.CompBin.replace(TAB_WORKSPACE, '').find('$') == -1: #check for file path ErrorCode, ErrorInfo = PathClass(NormPath(CompStatementObj.CompBin), GenFdsGlobalVariable.WorkSpaceDir).Validate() if ErrorCode != 0: EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) - if not self.__IsKeyword("COMP_SYM"): + if not self._IsKeyword("COMP_SYM"): raise Warning("expected COMP_SYM", self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected Component symbol file", self.FileName, self.CurrentLineNumber) - CompStatementObj.CompSym = self.__Token - if CompStatementObj.CompSym != '-' and CompStatementObj.CompSym.replace('$(WORKSPACE)', '').find('$') == -1: + CompStatementObj.CompSym = self._Token + if CompStatementObj.CompSym != '-' and CompStatementObj.CompSym.replace(TAB_WORKSPACE, '').find('$') == -1: #check for file path ErrorCode, ErrorInfo = PathClass(NormPath(CompStatementObj.CompSym), GenFdsGlobalVariable.WorkSpaceDir).Validate() if ErrorCode != 0: EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) - if not self.__IsKeyword("COMP_SIZE"): + if not self._IsKeyword("COMP_SIZE"): raise Warning("expected COMP_SIZE", self.FileName, self.CurrentLineNumber) - if not self.__IsToken("="): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if self.__IsToken("-"): - CompStatementObj.CompSize = self.__Token - elif self.__GetNextDecimalNumber(): - CompStatementObj.CompSize = self.__Token - elif self.__GetNextHexNumber(): - CompStatementObj.CompSize = self.__Token + if self._IsToken("-"): + CompStatementObj.CompSize = self._Token + elif self._GetNextDecimalNumber(): + CompStatementObj.CompSize = self._Token + elif self._GetNextHexNumber(): + CompStatementObj.CompSize = self._Token else: - raise Warning("Unknown size '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + raise Warning("Unknown size '%s'" % self._Token, self.FileName, self.CurrentLineNumber) VtfObj.ComponentStatementList.append(CompStatementObj) return True - ## __GetOptionRom() method + ## _GetOptionRom() method # # Get OptionROM section contents and store its data into OptionROM list of self.Profile # @@ -4365,38 +4276,37 @@ class FdfParser: # @retval True Successfully find a OptionROM # @retval False Not able to find a OptionROM # - def __GetOptionRom(self): - if not self.__GetNextToken(): + def _GetOptionRom(self): + if not self._GetNextToken(): return False - S = self.__Token.upper() - if S.startswith("[") and not S.startswith("[OPTIONROM."): + S = self._Token.upper() + if S.startswith(TAB_SECTION_START) and not S.startswith("[OPTIONROM."): self.SectionParser(S) - self.__UndoToken() + 
self._UndoToken() return False - self.__UndoToken() - if not self.__IsToken("[OptionRom.", True): - raise Warning("Unknown Keyword '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) + self._UndoToken() + if not self._IsToken("[OptionRom.", True): + raise Warning("Unknown Keyword '%s'" % self._Token, self.FileName, self.CurrentLineNumber) - OptRomName = self.__GetUiName() + OptRomName = self._GetUiName() - if not self.__IsToken( "]"): + if not self._IsToken(TAB_SECTION_END): raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) - OptRomObj = OptionRom.OPTIONROM() - OptRomObj.DriverName = OptRomName + OptRomObj = OPTIONROM(OptRomName) self.Profile.OptRomDict[OptRomName] = OptRomObj while True: - isInf = self.__GetOptRomInfStatement(OptRomObj) - isFile = self.__GetOptRomFileStatement(OptRomObj) + isInf = self._GetOptRomInfStatement(OptRomObj) + isFile = self._GetOptRomFileStatement(OptRomObj) if not isInf and not isFile: break return True - ## __GetOptRomInfStatement() method + ## _GetOptRomInfStatement() method # # Get INF statements # @@ -4405,17 +4315,17 @@ class FdfParser: # @retval True Successfully find inf statement # @retval False Not able to find inf statement # - def __GetOptRomInfStatement(self, Obj): - if not self.__IsKeyword( "INF"): + def _GetOptRomInfStatement(self, Obj): + if not self._IsKeyword("INF"): return False - ffsInf = OptRomInfStatement.OptRomInfStatement() - self.__GetInfOptions( ffsInf) + ffsInf = OptRomInfStatement() + self._GetInfOptions(ffsInf) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected INF file path", self.FileName, self.CurrentLineNumber) - ffsInf.InfFileName = self.__Token - if ffsInf.InfFileName.replace('$(WORKSPACE)', '').find('$') == -1: + ffsInf.InfFileName = self._Token + if ffsInf.InfFileName.replace(TAB_WORKSPACE, '').find('$') == -1: #check for file path ErrorCode, ErrorInfo = PathClass(NormPath(ffsInf.InfFileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() if ErrorCode != 0: @@ -4434,71 +4344,71 @@ class FdfParser: self.Profile.InfDict['ArchTBD'].append(ffsInf.InfFileName) - self.__GetOptRomOverrides (ffsInf) + self._GetOptRomOverrides (ffsInf) Obj.FfsList.append(ffsInf) return True - ## __GetOptRomOverrides() method + ## _GetOptRomOverrides() method # # Get overrides for OptROM INF & FILE # # @param self The object pointer # @param FfsInfObj for whom overrides is got # - def __GetOptRomOverrides(self, Obj): - if self.__IsToken('{'): - Overrides = OptRomInfStatement.OverrideAttribs() + def _GetOptRomOverrides(self, Obj): + if self._IsToken('{'): + Overrides = OverrideAttribs() while True: - if self.__IsKeyword( "PCI_VENDOR_ID"): - if not self.__IsToken( "="): + if self._IsKeyword("PCI_VENDOR_ID"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex vendor id", self.FileName, self.CurrentLineNumber) - Overrides.PciVendorId = self.__Token + Overrides.PciVendorId = self._Token continue - if self.__IsKeyword( "PCI_CLASS_CODE"): - if not self.__IsToken( "="): + if self._IsKeyword("PCI_CLASS_CODE"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex class code", self.FileName, self.CurrentLineNumber) - Overrides.PciClassCode = self.__Token + Overrides.PciClassCode = 
self._Token continue - if self.__IsKeyword( "PCI_DEVICE_ID"): - if not self.__IsToken( "="): + if self._IsKeyword("PCI_DEVICE_ID"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex device id", self.FileName, self.CurrentLineNumber) - Overrides.PciDeviceId = self.__Token + Overrides.PciDeviceId = self._Token continue - if self.__IsKeyword( "PCI_REVISION"): - if not self.__IsToken( "="): + if self._IsKeyword("PCI_REVISION"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextHexNumber(): + if not self._GetNextHexNumber(): raise Warning("expected Hex revision", self.FileName, self.CurrentLineNumber) - Overrides.PciRevision = self.__Token + Overrides.PciRevision = self._Token continue - if self.__IsKeyword( "PCI_COMPRESS"): - if not self.__IsToken( "="): + if self._IsKeyword("PCI_COMPRESS"): + if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected TRUE/FALSE for compress", self.FileName, self.CurrentLineNumber) - Overrides.NeedCompress = self.__Token.upper() == 'TRUE' + Overrides.NeedCompress = self._Token.upper() == 'TRUE' continue - if self.__IsToken( "}"): + if self._IsToken("}"): break else: EdkLogger.error("FdfParser", FORMAT_INVALID, File=self.FileName, Line=self.CurrentLineNumber) Obj.OverrideAttribs = Overrides - ## __GetOptRomFileStatement() method + ## _GetOptRomFileStatement() method # # Get FILE statements # @@ -4507,33 +4417,33 @@ class FdfParser: # @retval True Successfully find FILE statement # @retval False Not able to find FILE statement # - def __GetOptRomFileStatement(self, Obj): - if not self.__IsKeyword( "FILE"): + def _GetOptRomFileStatement(self, Obj): + if not self._IsKeyword("FILE"): return False - FfsFileObj = OptRomFileStatement.OptRomFileStatement() + FfsFileObj = OptRomFileStatement() - if not self.__IsKeyword("EFI") and not self.__IsKeyword(BINARY_FILE_TYPE_BIN): + if not self._IsKeyword("EFI") and not self._IsKeyword(BINARY_FILE_TYPE_BIN): raise Warning("expected Binary type (EFI/BIN)", self.FileName, self.CurrentLineNumber) - FfsFileObj.FileType = self.__Token + FfsFileObj.FileType = self._Token - if not self.__GetNextToken(): + if not self._GetNextToken(): raise Warning("expected File path", self.FileName, self.CurrentLineNumber) - FfsFileObj.FileName = self.__Token - if FfsFileObj.FileName.replace('$(WORKSPACE)', '').find('$') == -1: + FfsFileObj.FileName = self._Token + if FfsFileObj.FileName.replace(TAB_WORKSPACE, '').find('$') == -1: #check for file path ErrorCode, ErrorInfo = PathClass(NormPath(FfsFileObj.FileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() if ErrorCode != 0: EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) if FfsFileObj.FileType == 'EFI': - self.__GetOptRomOverrides(FfsFileObj) + self._GetOptRomOverrides(FfsFileObj) Obj.FfsList.append(FfsFileObj) return True - ## __GetCapInFd() method + ## _GetCapInFd() method # # Get Cap list contained in FD # @@ -4541,7 +4451,7 @@ class FdfParser: # @param FdName FD name # @retval CapList List of Capsule in FD # - def __GetCapInFd (self, FdName): + def _GetCapInFd (self, FdName): CapList = [] if FdName.upper() in self.Profile.FdDict: FdObj = self.Profile.FdDict[FdName.upper()] @@ -4554,7 +4464,7 @@ class FdfParser: 
CapList.append(elementRegionData.upper()) return CapList - ## __GetReferencedFdCapTuple() method + ## _GetReferencedFdCapTuple() method # # Get FV and FD list referenced by a capsule image # @@ -4563,22 +4473,22 @@ class FdfParser: # @param RefFdList referenced FD by section # @param RefFvList referenced FV by section # - def __GetReferencedFdCapTuple(self, CapObj, RefFdList = [], RefFvList = []): - for CapsuleDataObj in CapObj.CapsuleDataList : + def _GetReferencedFdCapTuple(self, CapObj, RefFdList = [], RefFvList = []): + for CapsuleDataObj in CapObj.CapsuleDataList: if hasattr(CapsuleDataObj, 'FvName') and CapsuleDataObj.FvName is not None and CapsuleDataObj.FvName.upper() not in RefFvList: RefFvList.append (CapsuleDataObj.FvName.upper()) elif hasattr(CapsuleDataObj, 'FdName') and CapsuleDataObj.FdName is not None and CapsuleDataObj.FdName.upper() not in RefFdList: RefFdList.append (CapsuleDataObj.FdName.upper()) elif CapsuleDataObj.Ffs is not None: - if isinstance(CapsuleDataObj.Ffs, FfsFileStatement.FileStatement): + if isinstance(CapsuleDataObj.Ffs, FileStatement): if CapsuleDataObj.Ffs.FvName is not None and CapsuleDataObj.Ffs.FvName.upper() not in RefFvList: RefFvList.append(CapsuleDataObj.Ffs.FvName.upper()) elif CapsuleDataObj.Ffs.FdName is not None and CapsuleDataObj.Ffs.FdName.upper() not in RefFdList: RefFdList.append(CapsuleDataObj.Ffs.FdName.upper()) else: - self.__GetReferencedFdFvTupleFromSection(CapsuleDataObj.Ffs, RefFdList, RefFvList) + self._GetReferencedFdFvTupleFromSection(CapsuleDataObj.Ffs, RefFdList, RefFvList) - ## __GetFvInFd() method + ## _GetFvInFd() method # # Get FV list contained in FD # @@ -4586,7 +4496,7 @@ class FdfParser: # @param FdName FD name # @retval FvList list of FV in FD # - def __GetFvInFd (self, FdName): + def _GetFvInFd (self, FdName): FvList = [] if FdName.upper() in self.Profile.FdDict: FdObj = self.Profile.FdDict[FdName.upper()] @@ -4599,7 +4509,7 @@ class FdfParser: FvList.append(elementRegionData.upper()) return FvList - ## __GetReferencedFdFvTuple() method + ## _GetReferencedFdFvTuple() method # # Get FD and FV list referenced by a FFS file # @@ -4608,17 +4518,17 @@ class FdfParser: # @param RefFdList referenced FD by section # @param RefFvList referenced FV by section # - def __GetReferencedFdFvTuple(self, FvObj, RefFdList = [], RefFvList = []): + def _GetReferencedFdFvTuple(self, FvObj, RefFdList = [], RefFvList = []): for FfsObj in FvObj.FfsList: - if isinstance(FfsObj, FfsFileStatement.FileStatement): + if isinstance(FfsObj, FileStatement): if FfsObj.FvName is not None and FfsObj.FvName.upper() not in RefFvList: RefFvList.append(FfsObj.FvName.upper()) elif FfsObj.FdName is not None and FfsObj.FdName.upper() not in RefFdList: RefFdList.append(FfsObj.FdName.upper()) else: - self.__GetReferencedFdFvTupleFromSection(FfsObj, RefFdList, RefFvList) + self._GetReferencedFdFvTupleFromSection(FfsObj, RefFdList, RefFvList) - ## __GetReferencedFdFvTupleFromSection() method + ## _GetReferencedFdFvTupleFromSection() method # # Get FD and FV list referenced by a FFS section # @@ -4627,19 +4537,19 @@ class FdfParser: # @param FdList referenced FD by section # @param FvList referenced FV by section # - def __GetReferencedFdFvTupleFromSection(self, FfsFile, FdList = [], FvList = []): + def _GetReferencedFdFvTupleFromSection(self, FfsFile, FdList = [], FvList = []): SectionStack = [] SectionStack.extend(FfsFile.SectionList) while SectionStack != []: SectionObj = SectionStack.pop() - if isinstance(SectionObj, FvImageSection.FvImageSection): + if 
isinstance(SectionObj, FvImageSection): if SectionObj.FvName is not None and SectionObj.FvName.upper() not in FvList: FvList.append(SectionObj.FvName.upper()) if SectionObj.Fv is not None and SectionObj.Fv.UiFvName is not None and SectionObj.Fv.UiFvName.upper() not in FvList: FvList.append(SectionObj.Fv.UiFvName.upper()) - self.__GetReferencedFdFvTuple(SectionObj.Fv, FdList, FvList) + self._GetReferencedFdFvTuple(SectionObj.Fv, FdList, FvList) - if isinstance(SectionObj, CompressSection.CompressSection) or isinstance(SectionObj, GuidSection.GuidSection): + if isinstance(SectionObj, CompressSection) or isinstance(SectionObj, GuidSection): SectionStack.extend(SectionObj.SectionList) ## CycleReferenceCheck() method @@ -4662,7 +4572,7 @@ class FdfParser: FdAnalyzedList = [] Index = 0 - while RefFvStack != [] and Index < MaxLength: + while RefFvStack and Index < MaxLength: Index = Index + 1 FvNameFromStack = RefFvStack.pop() if FvNameFromStack.upper() in self.Profile.FvDict: @@ -4672,14 +4582,14 @@ class FdfParser: RefFdList = [] RefFvList = [] - self.__GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) + self._GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) for RefFdName in RefFdList: if RefFdName in FdAnalyzedList: continue LogStr += "FV %s contains FD %s\n" % (FvNameFromStack, RefFdName) - FvInFdList = self.__GetFvInFd(RefFdName) + FvInFdList = self._GetFvInFd(RefFdName) if FvInFdList != []: for FvNameInFd in FvInFdList: LogStr += "FD %s contains FV %s\n" % (RefFdName, FvNameInFd) @@ -4725,7 +4635,7 @@ class FdfParser: RefFvList = [] RefFdList = [] - self.__GetReferencedFdCapTuple(CapObj, RefFdList, RefFvList) + self._GetReferencedFdCapTuple(CapObj, RefFdList, RefFvList) FvListLength = 0 FdListLength = 0 @@ -4735,7 +4645,7 @@ class FdfParser: continue LogStr += "Capsule %s contains FD %s\n" % (CapNameFromStack, RefFdName) - CapInFdList = self.__GetCapInFd(RefFdName) + CapInFdList = self._GetCapInFd(RefFdName) if CapInFdList != []: for CapNameInFd in CapInFdList: LogStr += "FD %s contains Capsule %s\n" % (RefFdName, CapNameInFd) @@ -4746,7 +4656,7 @@ class FdfParser: EdkLogger.info(LogStr) return True - FvInFdList = self.__GetFvInFd(RefFdName) + FvInFdList = self._GetFvInFd(RefFdName) if FvInFdList != []: for FvNameInFd in FvInFdList: LogStr += "FD %s contains FV %s\n" % (RefFdName, FvNameInFd) @@ -4767,7 +4677,7 @@ class FdfParser: FvObj = self.Profile.FvDict[RefFvName.upper()] else: continue - self.__GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) + self._GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) FvAnalyzedList.append(RefFvName) return False diff --git a/BaseTools/Source/Python/GenFds/Ffs.py b/BaseTools/Source/Python/GenFds/Ffs.py index 598b2d023124..4c1b43572964 100644 --- a/BaseTools/Source/Python/GenFds/Ffs.py +++ b/BaseTools/Source/Python/GenFds/Ffs.py @@ -15,53 +15,41 @@ ## # Import Modules # -from CommonDataClass.FdfClass import FDClassObject from Common.DataType import * -## generate FFS -# -# -class Ffs(FDClassObject): - # mapping between FILE type in FDF and file type for GenFfs - FdfFvFileTypeToFileType = { - SUP_MODULE_SEC : 'EFI_FV_FILETYPE_SECURITY_CORE', - SUP_MODULE_PEI_CORE : 'EFI_FV_FILETYPE_PEI_CORE', - SUP_MODULE_PEIM : 'EFI_FV_FILETYPE_PEIM', - SUP_MODULE_DXE_CORE : 'EFI_FV_FILETYPE_DXE_CORE', - 'FREEFORM' : 'EFI_FV_FILETYPE_FREEFORM', - 'DRIVER' : 'EFI_FV_FILETYPE_DRIVER', - 'APPLICATION' : 'EFI_FV_FILETYPE_APPLICATION', - 'FV_IMAGE' : 'EFI_FV_FILETYPE_FIRMWARE_VOLUME_IMAGE', - 'RAW' : 'EFI_FV_FILETYPE_RAW', - 'PEI_DXE_COMBO' : 
'EFI_FV_FILETYPE_COMBINED_PEIM_DRIVER', - 'SMM' : 'EFI_FV_FILETYPE_SMM', - SUP_MODULE_SMM_CORE : 'EFI_FV_FILETYPE_SMM_CORE', - SUP_MODULE_MM_STANDALONE : 'EFI_FV_FILETYPE_MM_STANDALONE', - SUP_MODULE_MM_CORE_STANDALONE : 'EFI_FV_FILETYPE_MM_CORE_STANDALONE' - } +# mapping between FILE type in FDF and file type for GenFfs +FdfFvFileTypeToFileType = { + SUP_MODULE_SEC : 'EFI_FV_FILETYPE_SECURITY_CORE', + SUP_MODULE_PEI_CORE : 'EFI_FV_FILETYPE_PEI_CORE', + SUP_MODULE_PEIM : 'EFI_FV_FILETYPE_PEIM', + SUP_MODULE_DXE_CORE : 'EFI_FV_FILETYPE_DXE_CORE', + 'FREEFORM' : 'EFI_FV_FILETYPE_FREEFORM', + 'DRIVER' : 'EFI_FV_FILETYPE_DRIVER', + 'APPLICATION' : 'EFI_FV_FILETYPE_APPLICATION', + 'FV_IMAGE' : 'EFI_FV_FILETYPE_FIRMWARE_VOLUME_IMAGE', + 'RAW' : 'EFI_FV_FILETYPE_RAW', + 'PEI_DXE_COMBO' : 'EFI_FV_FILETYPE_COMBINED_PEIM_DRIVER', + 'SMM' : 'EFI_FV_FILETYPE_SMM', + SUP_MODULE_SMM_CORE : 'EFI_FV_FILETYPE_SMM_CORE', + SUP_MODULE_MM_STANDALONE : 'EFI_FV_FILETYPE_MM_STANDALONE', + SUP_MODULE_MM_CORE_STANDALONE : 'EFI_FV_FILETYPE_MM_CORE_STANDALONE' +} - # mapping between section type in FDF and file suffix - SectionSuffix = { - BINARY_FILE_TYPE_PE32 : '.pe32', - BINARY_FILE_TYPE_PIC : '.pic', - BINARY_FILE_TYPE_TE : '.te', - BINARY_FILE_TYPE_DXE_DEPEX : '.dpx', - 'VERSION' : '.ver', - BINARY_FILE_TYPE_UI : '.ui', - 'COMPAT16' : '.com16', - 'RAW' : '.raw', - 'FREEFORM_SUBTYPE_GUID': '.guid', - 'SUBTYPE_GUID' : '.guid', - 'FV_IMAGE' : 'fv.sec', - 'COMPRESS' : '.com', - 'GUIDED' : '.guided', - BINARY_FILE_TYPE_PEI_DEPEX : '.dpx', - BINARY_FILE_TYPE_SMM_DEPEX : '.dpx' - } - - ## The constructor - # - # @param self The object pointer - # - def __init__(self): - FfsClassObject.__init__(self) +# mapping between section type in FDF and file suffix +SectionSuffix = { + BINARY_FILE_TYPE_PE32 : '.pe32', + BINARY_FILE_TYPE_PIC : '.pic', + BINARY_FILE_TYPE_TE : '.te', + BINARY_FILE_TYPE_DXE_DEPEX : '.dpx', + 'VERSION' : '.ver', + BINARY_FILE_TYPE_UI : '.ui', + 'COMPAT16' : '.com16', + 'RAW' : '.raw', + 'FREEFORM_SUBTYPE_GUID': '.guid', + 'SUBTYPE_GUID' : '.guid', + 'FV_IMAGE' : 'fv.sec', + 'COMPRESS' : '.com', + 'GUIDED' : '.guided', + BINARY_FILE_TYPE_PEI_DEPEX : '.dpx', + BINARY_FILE_TYPE_SMM_DEPEX : '.dpx' +} diff --git a/BaseTools/Source/Python/GenFds/FfsFileStatement.py b/BaseTools/Source/Python/GenFds/FfsFileStatement.py index 4c35aac76a00..7479efff04c7 100644 --- a/BaseTools/Source/Python/GenFds/FfsFileStatement.py +++ b/BaseTools/Source/Python/GenFds/FfsFileStatement.py @@ -16,26 +16,22 @@ # Import Modules # from __future__ import absolute_import -from . import Ffs -from . 
import Rule -import Common.LongFilePathOs as os from io import BytesIO -import subprocess - -from .GenFdsGlobalVariable import GenFdsGlobalVariable +from struct import pack from CommonDataClass.FdfClass import FileStatementClassObject from Common import EdkLogger -from Common.BuildToolError import * -from Common.Misc import GuidStructureByteArrayToGuidString +from Common.BuildToolError import GENFDS_ERROR +from Common.Misc import GuidStructureByteArrayToGuidString, SaveFileOnChange +import Common.LongFilePathOs as os from .GuidSection import GuidSection from .FvImageSection import FvImageSection -from Common.Misc import SaveFileOnChange -from struct import * +from .Ffs import FdfFvFileTypeToFileType +from .GenFdsGlobalVariable import GenFdsGlobalVariable ## generate FFS from FILE # # -class FileStatement (FileStatementClassObject) : +class FileStatement (FileStatementClassObject): ## The constructor # # @param self The object pointer @@ -60,7 +56,7 @@ class FileStatement (FileStatementClassObject) : # def GenFfs(self, Dict = {}, FvChildAddr=[], FvParentAddr=None, IsMakefile=False, FvName=None): - if self.NameGuid is not None and self.NameGuid.startswith('PCD('): + if self.NameGuid and self.NameGuid.startswith('PCD('): PcdValue = GenFdsGlobalVariable.GetPcdValue(self.NameGuid) if len(PcdValue) == 0: EdkLogger.error("GenFds", GENFDS_ERROR, '%s NOT defined.' \ @@ -82,7 +78,7 @@ class FileStatement (FileStatementClassObject) : Dict.update(self.DefineVarDict) SectionAlignments = None - if self.FvName is not None : + if self.FvName: Buffer = BytesIO('') if self.FvName.upper() not in GenFdsGlobalVariable.FdfParser.Profile.FvDict: EdkLogger.error("GenFds", GENFDS_ERROR, "FV (%s) is NOT described in FDF file!" % (self.FvName)) @@ -90,14 +86,14 @@ class FileStatement (FileStatementClassObject) : FileName = Fv.AddToBuffer(Buffer) SectionFiles = [FileName] - elif self.FdName is not None: + elif self.FdName: if self.FdName.upper() not in GenFdsGlobalVariable.FdfParser.Profile.FdDict: EdkLogger.error("GenFds", GENFDS_ERROR, "FD (%s) is NOT described in FDF file!" 
% (self.FdName)) Fd = GenFdsGlobalVariable.FdfParser.Profile.FdDict.get(self.FdName.upper()) FileName = Fd.GenFd() SectionFiles = [FileName] - elif self.FileName is not None: + elif self.FileName: if hasattr(self, 'FvFileType') and self.FvFileType == 'RAW': if isinstance(self.FileName, list) and isinstance(self.SubAlignment, list) and len(self.FileName) == len(self.SubAlignment): FileContent = '' @@ -111,7 +107,7 @@ class FileStatement (FileStatementClassObject) : Content = f.read() f.close() AlignValue = 1 - if self.SubAlignment[Index] is not None: + if self.SubAlignment[Index]: AlignValue = GenFdsGlobalVariable.GetAlignment(self.SubAlignment[Index]) if AlignValue > MaxAlignValue: MaxAlignIndex = Index @@ -143,7 +139,7 @@ class FileStatement (FileStatementClassObject) : SectionFiles = [] Index = 0 SectionAlignments = [] - for section in self.SectionList : + for section in self.SectionList: Index = Index + 1 SecIndex = '%d' %Index # process the inside FvImage from FvSection or GuidSection @@ -152,7 +148,7 @@ class FileStatement (FileStatementClassObject) : section.FvAddr = FvChildAddr.pop(0) elif isinstance(section, GuidSection): section.FvAddr = FvChildAddr - if FvParentAddr is not None and isinstance(section, GuidSection): + if FvParentAddr and isinstance(section, GuidSection): section.FvParentAddr = FvParentAddr if self.KeepReloc == False: @@ -168,7 +164,7 @@ class FileStatement (FileStatementClassObject) : # FfsFileOutput = os.path.join(OutputDir, self.NameGuid + '.ffs') GenFdsGlobalVariable.GenerateFfs(FfsFileOutput, SectionFiles, - Ffs.Ffs.FdfFvFileTypeToFileType.get(self.FvFileType), + FdfFvFileTypeToFileType.get(self.FvFileType), self.NameGuid, Fixed=self.Fixed, CheckSum=self.CheckSum, @@ -177,6 +173,3 @@ class FileStatement (FileStatementClassObject) : ) return FfsFileOutput - - - diff --git a/BaseTools/Source/Python/GenFds/FfsInfStatement.py b/BaseTools/Source/Python/GenFds/FfsInfStatement.py index 6149bc81b4ef..c1073c96e9aa 100644 --- a/BaseTools/Source/Python/GenFds/FfsInfStatement.py +++ b/BaseTools/Source/Python/GenFds/FfsInfStatement.py @@ -22,7 +22,7 @@ import Common.LongFilePathOs as os from io import BytesIO from struct import * from .GenFdsGlobalVariable import GenFdsGlobalVariable -from . import Ffs +from .Ffs import SectionSuffix,FdfFvFileTypeToFileType import subprocess import sys from . 
import Section @@ -760,7 +760,7 @@ class FfsInfStatement(FfsInfStatementClassObject): SecNum = '%d' %Index GenSecOutputFile= self.__ExtendMacro__(Rule.NameGuid) + \ - Ffs.Ffs.SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum + SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum Index = Index + 1 OutputFile = os.path.join(self.OutputPath, GenSecOutputFile) File = GenFdsGlobalVariable.MacroExtend(File, Dict, self.CurrentArch) @@ -803,7 +803,7 @@ class FfsInfStatement(FfsInfStatementClassObject): else: SecNum = '%d' %Index GenSecOutputFile= self.__ExtendMacro__(Rule.NameGuid) + \ - Ffs.Ffs.SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum + SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum OutputFile = os.path.join(self.OutputPath, GenSecOutputFile) GenSecInputFile = GenFdsGlobalVariable.MacroExtend(GenSecInputFile, Dict, self.CurrentArch) @@ -882,7 +882,7 @@ class FfsInfStatement(FfsInfStatementClassObject): self.ModuleGuid = RegistryGuidStr GenFdsGlobalVariable.GenerateFfs(FfsOutput, InputSection, - Ffs.Ffs.FdfFvFileTypeToFileType[Rule.FvFileType], + FdfFvFileTypeToFileType[Rule.FvFileType], self.ModuleGuid, Fixed=Rule.Fixed, CheckSum=Rule.CheckSum, Align=Rule.Alignment, SectionAlign=SectionAlignments, @@ -1055,7 +1055,7 @@ class FfsInfStatement(FfsInfStatementClassObject): FfsOutput = os.path.join( self.OutputPath, self.ModuleGuid + '.ffs') GenFdsGlobalVariable.GenerateFfs(FfsOutput, InputFile, - Ffs.Ffs.FdfFvFileTypeToFileType[Rule.FvFileType], + FdfFvFileTypeToFileType[Rule.FvFileType], self.ModuleGuid, Fixed=Rule.Fixed, CheckSum=Rule.CheckSum, Align=Rule.Alignment, SectionAlign=Alignments, diff --git a/BaseTools/Source/Python/GenFds/Fv.py b/BaseTools/Source/Python/GenFds/Fv.py index c66fc383278d..30525bd11fcc 100644 --- a/BaseTools/Source/Python/GenFds/Fv.py +++ b/BaseTools/Source/Python/GenFds/Fv.py @@ -1,4 +1,3 @@ -from __future__ import absolute_import ## @file # process FV generation # @@ -16,16 +15,13 @@ from __future__ import absolute_import ## # Import Modules # +from __future__ import absolute_import import Common.LongFilePathOs as os import subprocess from io import BytesIO from struct import * - -from . import Ffs -from . import AprioriSection from . 
import FfsFileStatement from .GenFdsGlobalVariable import GenFdsGlobalVariable -from CommonDataClass.FdfClass import FvClassObject from Common.Misc import SaveFileOnChange, PackGUID from Common.LongFilePathSupport import CopyLongFilePath from Common.LongFilePathSupport import OpenLongFilePath as open @@ -36,13 +32,25 @@ FV_UI_EXT_ENTY_GUID = 'A67DF1FA-8DE8-4E98-AF09-4BDF2EFFBC7C' ## generate FV # # -class FV (FvClassObject): +class FV (object): ## The constructor # # @param self The object pointer # def __init__(self): - FvClassObject.__init__(self) + self.UiFvName = None + self.CreateFileName = None + self.BlockSizeList = [] + self.DefineVarDict = {} + self.SetVarDict = {} + self.FvAlignment = None + self.FvAttributeDict = {} + self.FvNameGuid = None + self.FvNameString = None + self.AprioriSectionList = [] + self.FfsList = [] + self.BsBaseAddress = None + self.RtBaseAddress = None self.FvInfFile = None self.FvAddressFile = None self.BaseAddress = None @@ -68,7 +76,7 @@ class FV (FvClassObject): # @param MacroDict macro value pair # @retval string Generated FV file path # - def AddToBuffer (self, Buffer, BaseAddress=None, BlockSize= None, BlockNum=None, ErasePloarity='1', VtfDict=None, MacroDict = {}, Flag=False) : + def AddToBuffer (self, Buffer, BaseAddress=None, BlockSize= None, BlockNum=None, ErasePloarity='1', VtfDict=None, MacroDict = {}, Flag=False): if BaseAddress is None and self.UiFvName.upper() + 'fv' in GenFdsGlobalVariable.ImageBinDict: return GenFdsGlobalVariable.ImageBinDict[self.UiFvName.upper() + 'fv'] @@ -96,7 +104,7 @@ class FV (FvClassObject): if self.FvBaseAddress is not None: BaseAddress = self.FvBaseAddress if not Flag: - self.__InitializeInf__(BaseAddress, BlockSize, BlockNum, ErasePloarity, VtfDict) + self._InitializeInf(BaseAddress, BlockSize, BlockNum, ErasePloarity, VtfDict) # # First Process the Apriori section # @@ -114,7 +122,7 @@ class FV (FvClassObject): TAB_LINE_BREAK) # Process Modules in FfsList - for FfsFile in self.FfsList : + for FfsFile in self.FfsList: if Flag: if isinstance(FfsFile, FfsFileStatement.FileStatement): continue @@ -177,7 +185,7 @@ class FV (FvClassObject): if FvChildAddr != []: # Update Ffs again - for FfsFile in self.FfsList : + for FfsFile in self.FfsList: FileName = FfsFile.GenFfs(MacroDict, FvChildAddr, BaseAddress, IsMakefile=Flag, FvName=self.UiFvName) if GenFdsGlobalVariable.LargeFileInFvFlags[-1]: @@ -252,7 +260,7 @@ class FV (FvClassObject): return True return False - ## __InitializeInf__() + ## _InitializeInf() # # Initilize the inf file to create FV # @@ -263,7 +271,7 @@ class FV (FvClassObject): # @param ErasePolarity Flash erase polarity # @param VtfDict VTF objects # - def __InitializeInf__ (self, BaseAddress = None, BlockSize= None, BlockNum = None, ErasePloarity='1', VtfDict=None) : + def _InitializeInf (self, BaseAddress = None, BlockSize= None, BlockNum = None, ErasePloarity='1', VtfDict=None): # # Create FV inf file # @@ -275,7 +283,7 @@ class FV (FvClassObject): # Add [Options] # self.FvInfFile.writelines("[options]" + TAB_LINE_BREAK) - if BaseAddress is not None : + if BaseAddress is not None: self.FvInfFile.writelines("EFI_BASE_ADDRESS = " + \ BaseAddress + \ TAB_LINE_BREAK) @@ -294,7 +302,7 @@ class FV (FvClassObject): #set default block size is 1 self.FvInfFile.writelines("EFI_BLOCK_SIZE = 0x1" + TAB_LINE_BREAK) - for BlockSize in self.BlockSizeList : + for BlockSize in self.BlockSizeList: if BlockSize[0] is not None: self.FvInfFile.writelines("EFI_BLOCK_SIZE = " + \ '0x%X' %BlockSize[0] + \ @@ -320,9 +328,9 @@ 
class FV (FvClassObject): ' %s' %ErasePloarity + \ TAB_LINE_BREAK) if not (self.FvAttributeDict is None): - for FvAttribute in self.FvAttributeDict.keys() : + for FvAttribute in self.FvAttributeDict.keys(): if FvAttribute == "FvUsedSizeEnable": - if self.FvAttributeDict[FvAttribute].upper() in ('TRUE', '1') : + if self.FvAttributeDict[FvAttribute].upper() in ('TRUE', '1'): self.UsedSizeEnable = True continue self.FvInfFile.writelines("EFI_" + \ @@ -365,8 +373,8 @@ class FV (FvClassObject): Guid = FV_UI_EXT_ENTY_GUID.split('-') # # Layout: - # EFI_FIRMWARE_VOLUME_EXT_ENTRY : size 4 - # GUID : size 16 + # EFI_FIRMWARE_VOLUME_EXT_ENTRY: size 4 + # GUID: size 16 # FV UI name # Buffer += (pack('HH', (FvUiLen + 16 + 4), 0x0002) @@ -422,7 +430,6 @@ class FV (FvClassObject): FvExtHeaderFileName + \ TAB_LINE_BREAK) - # # Add [Files] # diff --git a/BaseTools/Source/Python/GenFds/FvImageSection.py b/BaseTools/Source/Python/GenFds/FvImageSection.py index 04556fc87099..d6e1f3315b7e 100644 --- a/BaseTools/Source/Python/GenFds/FvImageSection.py +++ b/BaseTools/Source/Python/GenFds/FvImageSection.py @@ -18,7 +18,7 @@ from __future__ import absolute_import from . import Section from io import BytesIO -from .Ffs import Ffs +from .Ffs import SectionSuffix import subprocess from .GenFdsGlobalVariable import GenFdsGlobalVariable import Common.LongFilePathOs as os @@ -76,7 +76,7 @@ class FvImageSection(FvImageSectionClassObject): if FvAlignmentValue > MaxFvAlignment: MaxFvAlignment = FvAlignmentValue - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + Ffs.SectionSuffix.get("FV_IMAGE")) + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + SectionSuffix.get("FV_IMAGE")) GenFdsGlobalVariable.GenerateSection(OutputFile, [FvFileName], 'EFI_SECTION_FIRMWARE_VOLUME_IMAGE', IsMakefile=IsMakefile) OutputFileList.append(OutputFile) @@ -146,7 +146,7 @@ class FvImageSection(FvImageSectionClassObject): # # Prepare the parameter of GenSection # - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get("FV_IMAGE")) + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get("FV_IMAGE")) GenFdsGlobalVariable.GenerateSection(OutputFile, [FvFileName], 'EFI_SECTION_FIRMWARE_VOLUME_IMAGE', IsMakefile=IsMakefile) OutputFileList.append(OutputFile) diff --git a/BaseTools/Source/Python/GenFds/GenFds.py b/BaseTools/Source/Python/GenFds/GenFds.py index 9dec9c5eb57d..0c8091b79841 100644 --- a/BaseTools/Source/Python/GenFds/GenFds.py +++ b/BaseTools/Source/Python/GenFds/GenFds.py @@ -17,34 +17,30 @@ # from __future__ import print_function from __future__ import absolute_import +from re import compile from optparse import OptionParser -import sys -import Common.LongFilePathOs as os -import linecache -from . import FdfParser -import Common.BuildToolError as BuildToolError -from .GenFdsGlobalVariable import GenFdsGlobalVariable -from Workspace.WorkspaceDatabase import WorkspaceDatabase -from Workspace.BuildClassObject import PcdClassObject -from . 
import RuleComplexFile -from .EfiSection import EfiSection +from sys import exit +from glob import glob +from struct import unpack +from linecache import getlines from io import BytesIO -import Common.TargetTxtClassObject as TargetTxtClassObject -import Common.ToolDefClassObject as ToolDefClassObject + +import Common.LongFilePathOs as os +from Common.TargetTxtClassObject import TargetTxtClassObject from Common.DataType import * import Common.GlobalData as GlobalData from Common import EdkLogger -from Common.StringUtils import * -from Common.Misc import DirCache, PathClass -from Common.Misc import SaveFileOnChange -from Common.Misc import ClearDuplicatedInf -from Common.Misc import GuidStructureStringToGuidString +from Common.StringUtils import NormPath +from Common.Misc import DirCache, PathClass, GuidStructureStringToGuidString +from Common.Misc import SaveFileOnChange, ClearDuplicatedInf from Common.BuildVersion import gBUILD_VERSION from Common.MultipleWorkspace import MultipleWorkspace as mws -from . import FfsFileStatement -import glob -from struct import unpack -from Common.GlobalData import gGuidPattern +from Common.BuildToolError import FatalError, GENFDS_ERROR, CODE_ERROR, FORMAT_INVALID, RESOURCE_NOT_AVAILABLE, FILE_NOT_FOUND, OPTION_MISSING, FORMAT_NOT_SUPPORTED,OPTION_VALUE_INVALID +from Workspace.WorkspaceDatabase import WorkspaceDatabase + +from .FdfParser import FdfParser, Warning +from .GenFdsGlobalVariable import GenFdsGlobalVariable +from .FfsFileStatement import FileStatement ## Version and Copyright versionNumber = "1.0" + ' ' + gBUILD_VERSION @@ -71,22 +67,22 @@ def main(): EdkLogger.Initialize() try: - if Options.verbose is not None: + if Options.verbose: EdkLogger.SetLevel(EdkLogger.VERBOSE) GenFdsGlobalVariable.VerboseMode = True - if Options.FixedAddress is not None: + if Options.FixedAddress: GenFdsGlobalVariable.FixedLoadAddress = True - if Options.quiet is not None: + if Options.quiet: EdkLogger.SetLevel(EdkLogger.QUIET) - if Options.debug is not None: + if Options.debug: EdkLogger.SetLevel(Options.debug + 1) GenFdsGlobalVariable.DebugLevel = Options.debug else: EdkLogger.SetLevel(EdkLogger.INFO) - if (Options.Workspace is None): + if not Options.Workspace: EdkLogger.error("GenFds", OPTION_MISSING, "WORKSPACE not defined", ExtraData="Please use '-w' switch to pass it or set the WORKSPACE environment variable.") elif not os.path.exists(Options.Workspace): @@ -97,7 +93,7 @@ def main(): GenFdsGlobalVariable.WorkSpaceDir = Workspace if 'EDK_SOURCE' in os.environ: GenFdsGlobalVariable.EdkSourceDir = os.path.normcase(os.environ['EDK_SOURCE']) - if (Options.debug): + if Options.debug: GenFdsGlobalVariable.VerboseLogger("Using Workspace:" + Workspace) if Options.GenfdsMultiThread: GenFdsGlobalVariable.EnableGenfdsMultiThread = True @@ -107,7 +103,7 @@ def main(): PackagesPath = os.getenv("PACKAGES_PATH") mws.setWs(GenFdsGlobalVariable.WorkSpaceDir, PackagesPath) - if (Options.filename): + if Options.filename: FdfFilename = Options.filename FdfFilename = GenFdsGlobalVariable.ReplaceWorkspaceMacro(FdfFilename) @@ -123,13 +119,13 @@ def main(): else: EdkLogger.error("GenFds", OPTION_MISSING, "Missing FDF filename") - if (Options.BuildTarget): + if Options.BuildTarget: GenFdsGlobalVariable.TargetName = Options.BuildTarget - if (Options.ToolChain): + if Options.ToolChain: GenFdsGlobalVariable.ToolChainTag = Options.ToolChain - if (Options.activePlatform): + if Options.activePlatform: ActivePlatform = Options.activePlatform ActivePlatform = 
GenFdsGlobalVariable.ReplaceWorkspaceMacro(ActivePlatform) @@ -139,15 +135,15 @@ def main(): if not os.path.isabs (ActivePlatform): ActivePlatform = mws.join(GenFdsGlobalVariable.WorkSpaceDir, ActivePlatform) - if not os.path.exists(ActivePlatform) : + if not os.path.exists(ActivePlatform): EdkLogger.error("GenFds", FILE_NOT_FOUND, "ActivePlatform doesn't exist!") else: EdkLogger.error("GenFds", OPTION_MISSING, "Missing active platform") - GlobalData.BuildOptionPcd = Options.OptionPcd if Options.OptionPcd else {} + GlobalData.BuildOptionPcd = Options.OptionPcd if Options.OptionPcd else {} GenFdsGlobalVariable.ActivePlatform = PathClass(NormPath(ActivePlatform)) - if (Options.ConfDirectory): + if Options.ConfDirectory: # Get alternate Conf location, if it is absolute, then just use the absolute directory name ConfDirectoryPath = os.path.normpath(Options.ConfDirectory) if ConfDirectoryPath.startswith('"'): @@ -169,7 +165,7 @@ def main(): GlobalData.gConfDirectory = GenFdsGlobalVariable.ConfDir BuildConfigurationFile = os.path.normpath(os.path.join(ConfDirectoryPath, "target.txt")) if os.path.isfile(BuildConfigurationFile) == True: - TargetTxt = TargetTxtClassObject.TargetTxtClassObject() + TargetTxt = TargetTxtClassObject() TargetTxt.LoadTargetTxtFile(BuildConfigurationFile) # if no build target given in command line, get it from target.txt if not GenFdsGlobalVariable.TargetName: @@ -237,10 +233,9 @@ def main(): GlobalData.gAllFiles = DirCache(Workspace) GlobalData.gWorkspace = Workspace - if (Options.archList) : + if Options.archList: ArchList = Options.archList.split(',') else: -# EdkLogger.error("GenFds", OPTION_MISSING, "Missing build ARCH") ArchList = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, TAB_COMMON, Options.BuildTarget, Options.ToolChain].SupArchList TargetArchList = set(BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, TAB_COMMON, Options.BuildTarget, Options.ToolChain].SupArchList) & set(ArchList) @@ -249,9 +244,11 @@ def main(): for Arch in ArchList: GenFdsGlobalVariable.OutputDirFromDscDict[Arch] = NormPath(BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, Options.BuildTarget, Options.ToolChain].OutputDirectory) - GenFdsGlobalVariable.PlatformName = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, Options.BuildTarget, Options.ToolChain].PlatformName - if (Options.outputDir): + # assign platform name based on last entry in ArchList + GenFdsGlobalVariable.PlatformName = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, ArchList[-1], Options.BuildTarget, Options.ToolChain].PlatformName + + if Options.outputDir: OutputDirFromCommandLine = GenFdsGlobalVariable.ReplaceWorkspaceMacro(Options.outputDir) if not os.path.isabs (OutputDirFromCommandLine): OutputDirFromCommandLine = os.path.join(GenFdsGlobalVariable.WorkSpaceDir, OutputDirFromCommandLine) @@ -274,27 +271,27 @@ def main(): GenFdsGlobalVariable.OutputDirDict[Key] = OutputDir """ Parse Fdf file, has to place after build Workspace as FDF may contain macros from DSC file """ - FdfParserObj = FdfParser.FdfParser(FdfFilename) + FdfParserObj = FdfParser(FdfFilename) FdfParserObj.ParseFile() if FdfParserObj.CycleReferenceCheck(): EdkLogger.error("GenFds", FORMAT_NOT_SUPPORTED, "Cycle Reference Detected in FDF file") - if (Options.uiFdName) : + if Options.uiFdName: if Options.uiFdName.upper() in FdfParserObj.Profile.FdDict: GenFds.OnlyGenerateThisFd = Options.uiFdName else: EdkLogger.error("GenFds", OPTION_VALUE_INVALID, "No such an FD in FDF 
file: %s" % Options.uiFdName) - if (Options.uiFvName) : + if Options.uiFvName: if Options.uiFvName.upper() in FdfParserObj.Profile.FvDict: GenFds.OnlyGenerateThisFv = Options.uiFvName else: EdkLogger.error("GenFds", OPTION_VALUE_INVALID, "No such an FV in FDF file: %s" % Options.uiFvName) - if (Options.uiCapName) : + if Options.uiCapName: if Options.uiCapName.upper() in FdfParserObj.Profile.CapsuleDict: GenFds.OnlyGenerateThisCap = Options.uiCapName else: @@ -302,7 +299,7 @@ def main(): "No such a Capsule in FDF file: %s" % Options.uiCapName) GenFdsGlobalVariable.WorkSpace = BuildWorkSpace - if ArchList is not None: + if ArchList: GenFdsGlobalVariable.ArchList = ArchList # Dsc Build Data will handle Pcd Settings from CommandLine. @@ -313,10 +310,8 @@ def main(): # Record the FV Region info that may specific in the FD if FdfParserObj.Profile.FvDict and FdfParserObj.Profile.FdDict: - for Fv in FdfParserObj.Profile.FvDict: - FvObj = FdfParserObj.Profile.FvDict[Fv] - for Fd in FdfParserObj.Profile.FdDict: - FdObj = FdfParserObj.Profile.FdDict[Fd] + for FvObj in FdfParserObj.Profile.FvDict.values(): + for FdObj in FdfParserObj.Profile.FdDict.values(): for RegionObj in FdObj.RegionList: if RegionObj.RegionType != BINARY_FILE_TYPE_FV: continue @@ -338,7 +333,7 @@ def main(): """Display FV space info.""" GenFds.DisplayFvSpaceInfo(FdfParserObj) - except FdfParser.Warning as X: + except Warning as X: EdkLogger.error(X.ToolName, FORMAT_INVALID, File=X.FileName, Line=X.LineNumber, ExtraData=X.Message, RaiseError=False) ReturnCode = FORMAT_INVALID except FatalError as X: @@ -374,7 +369,6 @@ def SingleCheckCallback(option, opt_str, value, parser): # Using standard Python module optparse to parse command line option of this tool. # # @retval Opt A optparse.Values object containing the parsed options -# @retval Args Target of build command # def myOptionParser(): usage = "%prog [options] -f input_file -a arch_list -b build_target -p active_platform -t tool_chain_tag -D \"MacroName [= MacroValue]\"" @@ -404,7 +398,7 @@ def myOptionParser(): Parser.add_option("--pcd", action="append", dest="OptionPcd", help="Set PCD value by command line. Format: \"PcdName=Value\" ") Parser.add_option("--genfds-multi-thread", action="store_true", dest="GenfdsMultiThread", default=False, help="Enable GenFds multi thread to generate ffs file.") - (Options, args) = Parser.parse_args() + Options, _ = Parser.parse_args() return Options ## The class implementing the EDK2 flash image generation process @@ -414,7 +408,7 @@ def myOptionParser(): # 2. Call methods of Fd class to generate FD # 3. 
Call methods of Fv class to generate FV that not belong to FD # -class GenFds : +class GenFds(object): FdfParsef = None OnlyGenerateThisFd = None OnlyGenerateThisFv = None @@ -423,12 +417,13 @@ class GenFds : ## GenFd() # # @param OutputDir Output directory - # @param FdfParser FDF contents parser + # @param FdfParserObject FDF contents parser # @param Workspace The directory of workspace # @param ArchList The Arch list of platform # - def GenFd (OutputDir, FdfParser, WorkSpace, ArchList): - GenFdsGlobalVariable.SetDir ('', FdfParser, WorkSpace, ArchList) + @staticmethod + def GenFd (OutputDir, FdfParserObject, WorkSpace, ArchList): + GenFdsGlobalVariable.SetDir ('', FdfParserObject, WorkSpace, ArchList) GenFdsGlobalVariable.VerboseLogger(" Generate all Fd images and their required FV and Capsule images!") if GenFds.OnlyGenerateThisCap is not None and GenFds.OnlyGenerateThisCap.upper() in GenFdsGlobalVariable.FdfParser.Profile.CapsuleDict: @@ -470,9 +465,10 @@ class GenFds : GenFdsGlobalVariable.VerboseLogger("\n Generate all Option ROM!") for OptRomObj in GenFdsGlobalVariable.FdfParser.Profile.OptRomDict.values(): OptRomObj.AddToBuffer(None) + @staticmethod - def GenFfsMakefile(OutputDir, FdfParser, WorkSpace, ArchList, GlobalData): - GenFdsGlobalVariable.SetEnv(FdfParser, WorkSpace, ArchList, GlobalData) + def GenFfsMakefile(OutputDir, FdfParserObject, WorkSpace, ArchList, GlobalData): + GenFdsGlobalVariable.SetEnv(FdfParserObject, WorkSpace, ArchList, GlobalData) for FdObj in GenFdsGlobalVariable.FdfParser.Profile.FdDict.values(): FdObj.GenFd(Flag=True) @@ -490,6 +486,7 @@ class GenFds : # @param FvObj Whose block size to get # @retval int Block size value # + @staticmethod def GetFvBlockSize(FvObj): DefaultBlockSize = 0x1 FdObj = None @@ -524,16 +521,17 @@ class GenFds : # @param FvObj Whose block size to get # @retval None # - def DisplayFvSpaceInfo(FdfParser): + @staticmethod + def DisplayFvSpaceInfo(FdfParserObject): FvSpaceInfoList = [] MaxFvNameLength = 0 - for FvName in FdfParser.Profile.FvDict: + for FvName in FdfParserObject.Profile.FvDict: if len(FvName) > MaxFvNameLength: MaxFvNameLength = len(FvName) FvSpaceInfoFileName = os.path.join(GenFdsGlobalVariable.FvDir, FvName.upper() + '.Fv.map') if os.path.exists(FvSpaceInfoFileName): - FileLinesList = linecache.getlines(FvSpaceInfoFileName) + FileLinesList = getlines(FvSpaceInfoFileName) TotalFound = False Total = '' UsedFound = False @@ -575,6 +573,7 @@ class GenFds : # @param DscFile modules from dsc file will be preprocessed # @retval None # + @staticmethod def PreprocessImage(BuildDb, DscFile): PcdDict = BuildDb.BuildObject[DscFile, TAB_COMMON, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].Pcds PcdValue = '' @@ -600,6 +599,7 @@ class GenFds : ModuleObj = BuildDb.BuildObject[Key, TAB_COMMON, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] print(ModuleObj.BaseName + ' ' + ModuleObj.ModuleType) + @staticmethod def GenerateGuidXRefFile(BuildDb, ArchList, FdfParserObj): GuidXRefFileName = os.path.join(GenFdsGlobalVariable.FvDir, "Guid.xref") GuidXRefFile = BytesIO('') @@ -607,7 +607,6 @@ class GenFds : GuidDict = {} ModuleList = [] FileGuidList = [] - GuidPattern = gGuidPattern for Arch in ArchList: PlatformDataBase = BuildDb.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] PkgList = GenFdsGlobalVariable.WorkSpace.GetPackageList(GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, 
GenFdsGlobalVariable.ToolChainTag) @@ -626,20 +625,16 @@ class GenFds : continue else: ModuleList.append(Module) - GuidMatch = GuidPattern.match(ModuleFile.BaseName) - if GuidMatch is not None: + if GlobalData.gGuidPattern.match(ModuleFile.BaseName): GuidXRefFile.write("%s %s\n" % (ModuleFile.BaseName, Module.BaseName)) else: GuidXRefFile.write("%s %s\n" % (Module.Guid, Module.BaseName)) - for key, item in Module.Protocols.items(): - GuidDict[key] = item - for key, item in Module.Guids.items(): - GuidDict[key] = item - for key, item in Module.Ppis.items(): - GuidDict[key] = item + GuidDict.update(Module.Protocols) + GuidDict.update(Module.Guids) + GuidDict.update(Module.Ppis) for FvName in FdfParserObj.Profile.FvDict: for FfsObj in FdfParserObj.Profile.FvDict[FvName].FfsList: - if not isinstance(FfsObj, FfsFileStatement.FileStatement): + if not isinstance(FfsObj, FileStatement): InfPath = PathClass(NormPath(mws.join(GenFdsGlobalVariable.WorkSpaceDir, FfsObj.InfFileName))) FdfModule = BuildDb.BuildObject[InfPath, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] if FdfModule in ModuleList: @@ -647,12 +642,9 @@ class GenFds : else: ModuleList.append(FdfModule) GuidXRefFile.write("%s %s\n" % (FdfModule.Guid, FdfModule.BaseName)) - for key, item in FdfModule.Protocols.items(): - GuidDict[key] = item - for key, item in FdfModule.Guids.items(): - GuidDict[key] = item - for key, item in FdfModule.Ppis.items(): - GuidDict[key] = item + GuidDict.update(FdfModule.Protocols) + GuidDict.update(FdfModule.Guids) + GuidDict.update(FdfModule.Ppis) else: FileStatementGuid = FfsObj.NameGuid if FileStatementGuid in FileGuidList: @@ -661,13 +653,13 @@ class GenFds : FileGuidList.append(FileStatementGuid) Name = [] FfsPath = os.path.join(GenFdsGlobalVariable.FvDir, 'Ffs') - FfsPath = glob.glob(os.path.join(FfsPath, FileStatementGuid) + '*') + FfsPath = glob(os.path.join(FfsPath, FileStatementGuid) + '*') if not FfsPath: continue if not os.path.exists(FfsPath[0]): continue MatchDict = {} - ReFileEnds = re.compile('\S+(.ui)$|\S+(fv.sec.txt)$|\S+(.pe32.txt)$|\S+(.te.txt)$|\S+(.pic.txt)$|\S+(.raw.txt)$|\S+(.ffs.txt)$') + ReFileEnds = compile('\S+(.ui)$|\S+(fv.sec.txt)$|\S+(.pe32.txt)$|\S+(.te.txt)$|\S+(.pic.txt)$|\S+(.raw.txt)$|\S+(.ffs.txt)$') FileList = os.listdir(FfsPath[0]) for File in FileList: Match = ReFileEnds.search(File) @@ -724,16 +716,10 @@ class GenFds : os.remove(GuidXRefFileName) GuidXRefFile.close() - ##Define GenFd as static function - GenFd = staticmethod(GenFd) - GetFvBlockSize = staticmethod(GetFvBlockSize) - DisplayFvSpaceInfo = staticmethod(DisplayFvSpaceInfo) - PreprocessImage = staticmethod(PreprocessImage) - GenerateGuidXRefFile = staticmethod(GenerateGuidXRefFile) - if __name__ == '__main__': r = main() ## 0-127 is a safe return range, and 1 is a standard default error - if r < 0 or r > 127: r = 1 - sys.exit(r) + if r < 0 or r > 127: + r = 1 + exit(r) diff --git a/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py b/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py index 14578a92a9b1..e867693d7c88 100644 --- a/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py +++ b/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py @@ -16,13 +16,15 @@ # Import Modules # from __future__ import print_function +from __future__ import absolute_import + import Common.LongFilePathOs as os -import sys -import subprocess -import struct -import array +from sys import stdout +from subprocess import PIPE,Popen +from struct import Struct +from array import array -from 
Common.BuildToolError import * +from Common.BuildToolError import COMMAND_FAILURE,GENFDS_ERROR from Common import EdkLogger from Common.Misc import SaveFileOnChange @@ -89,7 +91,7 @@ class GenFdsGlobalVariable: EFI_FIRMWARE_FILE_SYSTEM3_GUID = '5473C07A-3DCB-4dca-BD6F-1E9689E7349A' LARGE_FILE_SIZE = 0x1000000 - SectionHeader = struct.Struct("3B 1B") + SectionHeader = Struct("3B 1B") # FvName, FdName, CapName in FDF, Image file name ImageBinDict = {} @@ -97,7 +99,7 @@ class GenFdsGlobalVariable: ## LoadBuildRule # @staticmethod - def __LoadBuildRule(): + def _LoadBuildRule(): if GenFdsGlobalVariable.__BuildRuleDatabase: return GenFdsGlobalVariable.__BuildRuleDatabase BuildConfigurationFile = os.path.normpath(os.path.join(GenFdsGlobalVariable.ConfDir, "target.txt")) @@ -139,44 +141,43 @@ class GenFdsGlobalVariable: if not Arch in GenFdsGlobalVariable.OutputDirDict: return {} - BuildRuleDatabase = GenFdsGlobalVariable.__LoadBuildRule() + BuildRuleDatabase = GenFdsGlobalVariable._LoadBuildRule() if not BuildRuleDatabase: return {} PathClassObj = PathClass(Inf.MetaFile.File, GenFdsGlobalVariable.WorkSpaceDir) - Macro = {} - Macro["WORKSPACE" ] = GenFdsGlobalVariable.WorkSpaceDir - Macro["MODULE_NAME" ] = Inf.BaseName - Macro["MODULE_GUID" ] = Inf.Guid - Macro["MODULE_VERSION" ] = Inf.Version - Macro["MODULE_TYPE" ] = Inf.ModuleType - Macro["MODULE_FILE" ] = str(PathClassObj) - Macro["MODULE_FILE_BASE_NAME" ] = PathClassObj.BaseName - Macro["MODULE_RELATIVE_DIR" ] = PathClassObj.SubDir - Macro["MODULE_DIR" ] = PathClassObj.SubDir - - Macro["BASE_NAME" ] = Inf.BaseName - - Macro["ARCH" ] = Arch - Macro["TOOLCHAIN" ] = GenFdsGlobalVariable.ToolChainTag - Macro["TOOLCHAIN_TAG" ] = GenFdsGlobalVariable.ToolChainTag - Macro["TOOL_CHAIN_TAG" ] = GenFdsGlobalVariable.ToolChainTag - Macro["TARGET" ] = GenFdsGlobalVariable.TargetName - - Macro["BUILD_DIR" ] = GenFdsGlobalVariable.OutputDirDict[Arch] - Macro["BIN_DIR" ] = os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) - Macro["LIB_DIR" ] = os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) BuildDir = os.path.join( GenFdsGlobalVariable.OutputDirDict[Arch], Arch, PathClassObj.SubDir, PathClassObj.BaseName ) - Macro["MODULE_BUILD_DIR" ] = BuildDir - Macro["OUTPUT_DIR" ] = os.path.join(BuildDir, "OUTPUT") - Macro["DEBUG_DIR" ] = os.path.join(BuildDir, "DEBUG") - + BinDir = os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) + Macro = { + "WORKSPACE":GenFdsGlobalVariable.WorkSpaceDir, + "MODULE_NAME":Inf.BaseName, + "MODULE_GUID":Inf.Guid, + "MODULE_VERSION":Inf.Version, + "MODULE_TYPE":Inf.ModuleType, + "MODULE_FILE":str(PathClassObj), + "MODULE_FILE_BASE_NAME":PathClassObj.BaseName, + "MODULE_RELATIVE_DIR":PathClassObj.SubDir, + "MODULE_DIR":PathClassObj.SubDir, + "BASE_NAME":Inf.BaseName, + "ARCH":Arch, + "TOOLCHAIN":GenFdsGlobalVariable.ToolChainTag, + "TOOLCHAIN_TAG":GenFdsGlobalVariable.ToolChainTag, + "TOOL_CHAIN_TAG":GenFdsGlobalVariable.ToolChainTag, + "TARGET":GenFdsGlobalVariable.TargetName, + "BUILD_DIR":GenFdsGlobalVariable.OutputDirDict[Arch], + "BIN_DIR":BinDir, + "LIB_DIR":BinDir, + "MODULE_BUILD_DIR":BuildDir, + "OUTPUT_DIR":os.path.join(BuildDir, "OUTPUT"), + "DEBUG_DIR":os.path.join(BuildDir, "DEBUG") + } + BuildRules = {} for Type in BuildRuleDatabase.FileTypeList: #first try getting build rule by BuildRuleFamily @@ -216,12 +217,12 @@ class GenFdsGlobalVariable: if not Inf.IsBinaryModule: for File in Inf.Sources: - if File.TagName in ("", "*", GenFdsGlobalVariable.ToolChainTag) and \ - File.ToolChainFamily 
in ("", "*", GenFdsGlobalVariable.ToolChainFamily): + if File.TagName in {"", "*", GenFdsGlobalVariable.ToolChainTag} and \ + File.ToolChainFamily in {"", "*", GenFdsGlobalVariable.ToolChainFamily}: FileList.append((File, DataType.TAB_UNKNOWN_FILE)) for File in Inf.Binaries: - if File.Target in [DataType.TAB_COMMON, '*', GenFdsGlobalVariable.TargetName]: + if File.Target in {DataType.TAB_COMMON, '*', GenFdsGlobalVariable.TargetName}: FileList.append((File, File.Type)) for File, FileType in FileList: @@ -233,7 +234,7 @@ class GenFdsGlobalVariable: Source = SourceList[Index] Index = Index + 1 - if File.IsBinary and File == Source and Inf.Binaries is not None and File in Inf.Binaries: + if File.IsBinary and File == Source and Inf.Binaries and File in Inf.Binaries: # Skip all files that are not binary libraries if not Inf.LibraryClass: continue @@ -287,19 +288,18 @@ class GenFdsGlobalVariable: # @param Workspace The directory of workspace # @param ArchList The Arch list of platform # + @staticmethod def SetDir (OutputDir, FdfParser, WorkSpace, ArchList): - GenFdsGlobalVariable.VerboseLogger("GenFdsGlobalVariable.OutputDir :%s" % OutputDir) -# GenFdsGlobalVariable.OutputDirDict = OutputDir + GenFdsGlobalVariable.VerboseLogger("GenFdsGlobalVariable.OutputDir:%s" % OutputDir) GenFdsGlobalVariable.FdfParser = FdfParser GenFdsGlobalVariable.WorkSpace = WorkSpace GenFdsGlobalVariable.FvDir = os.path.join(GenFdsGlobalVariable.OutputDirDict[ArchList[0]], DataType.TAB_FV_DIRECTORY) - if not os.path.exists(GenFdsGlobalVariable.FvDir) : + if not os.path.exists(GenFdsGlobalVariable.FvDir): os.makedirs(GenFdsGlobalVariable.FvDir) GenFdsGlobalVariable.FfsDir = os.path.join(GenFdsGlobalVariable.FvDir, 'Ffs') - if not os.path.exists(GenFdsGlobalVariable.FfsDir) : + if not os.path.exists(GenFdsGlobalVariable.FfsDir): os.makedirs(GenFdsGlobalVariable.FfsDir) - T_CHAR_LF = '\n' # # Create FV Address inf file # @@ -308,7 +308,7 @@ class GenFdsGlobalVariable: # # Add [Options] # - FvAddressFile.writelines("[options]" + T_CHAR_LF) + FvAddressFile.writelines("[options]" + DataType.TAB_LINE_BREAK) BsAddress = '0' for Arch in ArchList: if GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].BsBaseAddress: @@ -317,19 +317,22 @@ class GenFdsGlobalVariable: FvAddressFile.writelines("EFI_BOOT_DRIVER_BASE_ADDRESS = " + \ BsAddress + \ - T_CHAR_LF) + DataType.TAB_LINE_BREAK) RtAddress = '0' - for Arch in ArchList: - if GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].RtBaseAddress: - RtAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].RtBaseAddress + for Arch in reversed(ArchList): + temp = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].RtBaseAddress + if temp: + RtAddress = temp + break FvAddressFile.writelines("EFI_RUNTIME_DRIVER_BASE_ADDRESS = " + \ RtAddress + \ - T_CHAR_LF) + DataType.TAB_LINE_BREAK) FvAddressFile.close() + @staticmethod def SetEnv(FdfParser, WorkSpace, ArchList, GlobalData): GenFdsGlobalVariable.ModuleFile = WorkSpace.ModuleFile GenFdsGlobalVariable.FdfParser = FdfParser @@ -360,7 +363,6 @@ class GenFdsGlobalVariable: if not os.path.exists(GenFdsGlobalVariable.FfsDir): 
os.makedirs(GenFdsGlobalVariable.FfsDir) - T_CHAR_LF = '\n' # # Create FV Address inf file # @@ -369,7 +371,7 @@ class GenFdsGlobalVariable: # # Add [Options] # - FvAddressFile.writelines("[options]" + T_CHAR_LF) + FvAddressFile.writelines("[options]" + DataType.TAB_LINE_BREAK) BsAddress = '0' for Arch in ArchList: BsAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, @@ -380,20 +382,20 @@ class GenFdsGlobalVariable: FvAddressFile.writelines("EFI_BOOT_DRIVER_BASE_ADDRESS = " + \ BsAddress + \ - T_CHAR_LF) + DataType.TAB_LINE_BREAK) RtAddress = '0' - for Arch in ArchList: - if GenFdsGlobalVariable.WorkSpace.BuildObject[ + for Arch in reversed(ArchList): + temp = GenFdsGlobalVariable.WorkSpace.BuildObject[ GenFdsGlobalVariable.ActivePlatform, Arch, GlobalData.gGlobalDefines['TARGET'], - GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress: - RtAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[ - GenFdsGlobalVariable.ActivePlatform, Arch, GlobalData.gGlobalDefines['TARGET'], - GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress + GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress + if temp: + RtAddress = temp + break FvAddressFile.writelines("EFI_RUNTIME_DRIVER_BASE_ADDRESS = " + \ RtAddress + \ - T_CHAR_LF) + DataType.TAB_LINE_BREAK) FvAddressFile.close() @@ -401,6 +403,7 @@ class GenFdsGlobalVariable: # # @param String String that may contain macro # + @staticmethod def ReplaceWorkspaceMacro(String): String = mws.handleWsMacro(String) Str = String.replace('$(WORKSPACE)', GenFdsGlobalVariable.WorkSpaceDir) @@ -424,7 +427,7 @@ class GenFdsGlobalVariable: if not os.path.exists(Output): return True # always update "Output" if no "Input" given - if Input is None or len(Input) == 0: + if not Input: return True # if fdf file is changed after the 'Output" is generated, update the 'Output' @@ -449,9 +452,9 @@ class GenFdsGlobalVariable: Cmd += ("-s", Type) if CompressionType: Cmd += ("-c", CompressionType) - if Guid is not None: + if Guid: Cmd += ("-g", Guid) - if DummyFile is not None: + if DummyFile: Cmd += ("--dummy", DummyFile) if GuidHdrLen: Cmd += ("-l", GuidHdrLen) @@ -473,7 +476,7 @@ class GenFdsGlobalVariable: if ' '.join(Cmd).strip() not in GenFdsGlobalVariable.SecCmdList: GenFdsGlobalVariable.SecCmdList.append(' '.join(Cmd).strip()) else: - SectionData = array.array('B', [0, 0, 0, 0]) + SectionData = array('B', [0, 0, 0, 0]) SectionData.fromstring(Ui.encode("utf_16_le")) SectionData.append(0) SectionData.append(0) @@ -571,9 +574,9 @@ class GenFdsGlobalVariable: if BaseAddress: Cmd += ("-r", BaseAddress) - if ForceRebase == False: + if not ForceRebase: Cmd += ("-F", "FALSE") - elif ForceRebase == True: + else: Cmd += ("-F", "TRUE") if Capsule: @@ -634,7 +637,7 @@ class GenFdsGlobalVariable: Revision=None, DeviceId=None, VendorId=None, IsMakefile=False): InputList = [] Cmd = ["EfiRom"] - if len(EfiInput) > 0: + if EfiInput: if Compress: Cmd.append("-ec") @@ -645,7 +648,7 @@ class GenFdsGlobalVariable: Cmd.append(EfiFile) InputList.append (EfiFile) - if len(BinaryInput) > 0: + if BinaryInput: Cmd.append("-b") for BinFile in BinaryInput: Cmd.append(BinFile) @@ -656,13 +659,13 @@ class GenFdsGlobalVariable: return GenFdsGlobalVariable.DebugLogger(EdkLogger.DEBUG_5, "%s needs update because of newer %s" % (Output, InputList)) - if ClassCode is not None: + if ClassCode: Cmd += ("-l", ClassCode) - if Revision is not None: + if Revision: Cmd += ("-r", Revision) - if DeviceId is not None: + if DeviceId: Cmd += ("-i", DeviceId) - if 
VendorId is not None: + if VendorId: Cmd += ("-f", VendorId) Cmd += ("-o", Output) @@ -688,6 +691,7 @@ class GenFdsGlobalVariable: else: GenFdsGlobalVariable.CallExternalTool(Cmd, "Failed to call " + ToolPath, returnValue) + @staticmethod def CallExternalTool (cmd, errorMess, returnValue=[]): if type(cmd) not in (tuple, list): @@ -701,19 +705,19 @@ class GenFdsGlobalVariable: cmd += ('-v',) GenFdsGlobalVariable.InfLogger (cmd) else: - sys.stdout.write ('#') - sys.stdout.flush() + stdout.write ('#') + stdout.flush() GenFdsGlobalVariable.SharpCounter = GenFdsGlobalVariable.SharpCounter + 1 if GenFdsGlobalVariable.SharpCounter % GenFdsGlobalVariable.SharpNumberPerLine == 0: - sys.stdout.write('\n') + stdout.write('\n') try: - PopenObject = subprocess.Popen(' '.join(cmd), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) + PopenObject = Popen(' '.join(cmd), stdout=PIPE, stderr=PIPE, shell=True) except Exception as X: EdkLogger.error("GenFds", COMMAND_FAILURE, ExtraData="%s: %s" % (str(X), cmd[0])) (out, error) = PopenObject.communicate() - while PopenObject.returncode is None : + while PopenObject.returncode is None: PopenObject.wait() if returnValue != [] and returnValue[0] != 0: #get command return value @@ -727,45 +731,52 @@ class GenFdsGlobalVariable: print("###", cmd) EdkLogger.error("GenFds", COMMAND_FAILURE, errorMess) + @staticmethod def VerboseLogger (msg): EdkLogger.verbose(msg) + @staticmethod def InfLogger (msg): EdkLogger.info(msg) + @staticmethod def ErrorLogger (msg, File=None, Line=None, ExtraData=None): EdkLogger.error('GenFds', GENFDS_ERROR, msg, File, Line, ExtraData) + @staticmethod def DebugLogger (Level, msg): EdkLogger.debug(Level, msg) - ## ReplaceWorkspaceMacro() + ## MacroExtend() # # @param Str String that may contain macro # @param MacroDict Dictionary that contains macro value pair # + @staticmethod def MacroExtend (Str, MacroDict={}, Arch=DataType.TAB_COMMON): - if Str is None : + if Str is None: return None - Dict = {'$(WORKSPACE)' : GenFdsGlobalVariable.WorkSpaceDir, - '$(EDK_SOURCE)' : GenFdsGlobalVariable.EdkSourceDir, + Dict = {'$(WORKSPACE)': GenFdsGlobalVariable.WorkSpaceDir, + '$(EDK_SOURCE)': GenFdsGlobalVariable.EdkSourceDir, # '$(OUTPUT_DIRECTORY)': GenFdsGlobalVariable.OutputDirFromDsc, - '$(TARGET)' : GenFdsGlobalVariable.TargetName, - '$(TOOL_CHAIN_TAG)' : GenFdsGlobalVariable.ToolChainTag, - '$(SPACE)' : ' ' + '$(TARGET)': GenFdsGlobalVariable.TargetName, + '$(TOOL_CHAIN_TAG)': GenFdsGlobalVariable.ToolChainTag, + '$(SPACE)': ' ' } - OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[GenFdsGlobalVariable.ArchList[0]] + if Arch != DataType.TAB_COMMON and Arch in GenFdsGlobalVariable.ArchList: OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[Arch] + else: + OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[GenFdsGlobalVariable.ArchList[0]] Dict['$(OUTPUT_DIRECTORY)'] = OutputDir - if MacroDict is not None and len (MacroDict) != 0: + if MacroDict: Dict.update(MacroDict) for key in Dict: - if Str.find(key) >= 0 : + if Str.find(key) >= 0: Str = Str.replace (key, Dict[key]) if Str.find('$(ARCH)') >= 0: @@ -780,14 +791,14 @@ class GenFdsGlobalVariable: # # @param PcdPattern pattern that labels a PCD. 
# + @staticmethod def GetPcdValue (PcdPattern): - if PcdPattern is None : + if PcdPattern is None: return None PcdPair = PcdPattern.lstrip('PCD(').rstrip(')').strip().split('.') TokenSpace = PcdPair[0] TokenCName = PcdPair[1] - PcdValue = '' for Arch in GenFdsGlobalVariable.ArchList: Platform = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] PcdDict = Platform.Pcds @@ -799,8 +810,7 @@ class GenFdsGlobalVariable: if PcdObj.DatumType != DataType.TAB_VOID: EdkLogger.error("GenFds", GENFDS_ERROR, "%s is not VOID* datum type." % PcdPattern) - PcdValue = PcdObj.DefaultValue - return PcdValue + return PcdObj.DefaultValue for Package in GenFdsGlobalVariable.WorkSpace.GetPackageList(GenFdsGlobalVariable.ActivePlatform, Arch, @@ -815,21 +825,9 @@ class GenFdsGlobalVariable: if PcdObj.DatumType != DataType.TAB_VOID: EdkLogger.error("GenFds", GENFDS_ERROR, "%s is not VOID* datum type." % PcdPattern) - PcdValue = PcdObj.DefaultValue - return PcdValue + return PcdObj.DefaultValue - return PcdValue - - SetDir = staticmethod(SetDir) - SetEnv = staticmethod(SetEnv) - ReplaceWorkspaceMacro = staticmethod(ReplaceWorkspaceMacro) - CallExternalTool = staticmethod(CallExternalTool) - VerboseLogger = staticmethod(VerboseLogger) - InfLogger = staticmethod(InfLogger) - ErrorLogger = staticmethod(ErrorLogger) - DebugLogger = staticmethod(DebugLogger) - MacroExtend = staticmethod (MacroExtend) - GetPcdValue = staticmethod(GetPcdValue) + return '' ## FindExtendTool() # @@ -863,7 +861,7 @@ def FindExtendTool(KeyStringList, CurrentArchList, NameGuid): ToolOptionKey = None KeyList = None for ToolDef in ToolDefinition.items(): - if NameGuid.lower() == ToolDef[1].lower() : + if NameGuid.lower() == ToolDef[1].lower(): KeyList = ToolDef[0].split('_') Key = KeyList[0] + \ '_' + \ diff --git a/BaseTools/Source/Python/GenFds/GuidSection.py b/BaseTools/Source/Python/GenFds/GuidSection.py index fd660737f421..25609e6f2ae1 100644 --- a/BaseTools/Source/Python/GenFds/GuidSection.py +++ b/BaseTools/Source/Python/GenFds/GuidSection.py @@ -19,7 +19,7 @@ from __future__ import absolute_import from . 
import Section import subprocess -from .Ffs import Ffs +from .Ffs import SectionSuffix import Common.LongFilePathOs as os from .GenFdsGlobalVariable import GenFdsGlobalVariable from .GenFdsGlobalVariable import FindExtendTool @@ -126,7 +126,7 @@ class GuidSection(GuidSectionClassObject) : ModuleName + \ SUP_MODULE_SEC + \ SecNum + \ - Ffs.SectionSuffix['GUIDED'] + SectionSuffix['GUIDED'] OutputFile = os.path.normpath(OutputFile) ExternalTool = None diff --git a/BaseTools/Source/Python/GenFds/OptionRom.py b/BaseTools/Source/Python/GenFds/OptionRom.py index bb7071fa7fca..9d7ad25ecc43 100644 --- a/BaseTools/Source/Python/GenFds/OptionRom.py +++ b/BaseTools/Source/Python/GenFds/OptionRom.py @@ -26,8 +26,6 @@ from Common.Misc import SaveFileOnChange from Common import EdkLogger from Common.BuildToolError import * -T_CHAR_LF = '\n' - ## # # @@ -36,9 +34,9 @@ class OPTIONROM (OptionRomClassObject): # # @param self The object pointer # - def __init__(self): + def __init__(self, Name = ""): OptionRomClassObject.__init__(self) - + self.DriverName = Name ## AddToBuffer() # diff --git a/BaseTools/Source/Python/GenFds/Region.py b/BaseTools/Source/Python/GenFds/Region.py index 5242b74c9e70..9aeaa45d4566 100644 --- a/BaseTools/Source/Python/GenFds/Region.py +++ b/BaseTools/Source/Python/GenFds/Region.py @@ -20,7 +20,6 @@ from struct import * from .GenFdsGlobalVariable import GenFdsGlobalVariable from io import BytesIO import string -from CommonDataClass.FdfClass import RegionClassObject import Common.LongFilePathOs as os from stat import * from Common import EdkLogger @@ -32,15 +31,20 @@ from Common.DataType import BINARY_FILE_TYPE_FV ## generate Region # # -class Region(RegionClassObject): +class Region(object): ## The constructor # # @param self The object pointer # def __init__(self): - RegionClassObject.__init__(self) - + self.Offset = None # The begin position of the Region + self.Size = None # The Size of the Region + self.PcdOffset = None + self.PcdSize = None + self.SetVarDict = {} + self.RegionType = None + self.RegionDataList = [] ## PadBuffer() # diff --git a/BaseTools/Source/Python/GenFds/UiSection.py b/BaseTools/Source/Python/GenFds/UiSection.py index 24f2f3ca938d..c1dff188be96 100644 --- a/BaseTools/Source/Python/GenFds/UiSection.py +++ b/BaseTools/Source/Python/GenFds/UiSection.py @@ -17,7 +17,7 @@ # from __future__ import absolute_import from . import Section -from .Ffs import Ffs +from .Ffs import SectionSuffix import subprocess import Common.LongFilePathOs as os from .GenFdsGlobalVariable import GenFdsGlobalVariable @@ -59,7 +59,7 @@ class UiSection (UiSectionClassObject): self.StringData = FfsInf.__ExtendMacro__(self.StringData) self.FileName = FfsInf.__ExtendMacro__(self.FileName) - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get(BINARY_FILE_TYPE_UI)) + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get(BINARY_FILE_TYPE_UI)) if self.StringData is not None : NameString = self.StringData diff --git a/BaseTools/Source/Python/GenFds/VerSection.py b/BaseTools/Source/Python/GenFds/VerSection.py index db71fe9653d4..051743f10c21 100644 --- a/BaseTools/Source/Python/GenFds/VerSection.py +++ b/BaseTools/Source/Python/GenFds/VerSection.py @@ -1,7 +1,7 @@ ## @file # process Version section generation # -# Copyright (c) 2007 - 2017, Intel Corporation. All rights reserved.<BR> +# Copyright (c) 2007 - 2018, Intel Corporation. 
All rights reserved.<BR> # # This program and the accompanying materials # are licensed and made available under the terms and conditions of the BSD License @@ -16,10 +16,8 @@ # Import Modules # from __future__ import absolute_import -from .Ffs import Ffs -from . import Section +from .Ffs import SectionSuffix import Common.LongFilePathOs as os -import subprocess from .GenFdsGlobalVariable import GenFdsGlobalVariable from CommonDataClass.FdfClass import VerSectionClassObject from Common.LongFilePathSupport import OpenLongFilePath as open @@ -54,29 +52,27 @@ class VerSection (VerSectionClassObject): # # Prepare the parameter of GenSection # - if FfsInf is not None: + if FfsInf: self.Alignment = FfsInf.__ExtendMacro__(self.Alignment) self.BuildNum = FfsInf.__ExtendMacro__(self.BuildNum) self.StringData = FfsInf.__ExtendMacro__(self.StringData) self.FileName = FfsInf.__ExtendMacro__(self.FileName) OutputFile = os.path.join(OutputPath, - ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get('VERSION')) + ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get('VERSION')) OutputFile = os.path.normpath(OutputFile) # Get String Data StringData = '' - if self.StringData is not None: + if self.StringData: StringData = self.StringData - elif self.FileName is not None: + elif self.FileName: FileNameStr = GenFdsGlobalVariable.ReplaceWorkspaceMacro(self.FileName) FileNameStr = GenFdsGlobalVariable.MacroExtend(FileNameStr, Dict) FileObj = open(FileNameStr, 'r') StringData = FileObj.read() StringData = '"' + StringData + '"' FileObj.close() - else: - StringData = '' GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', Ver=StringData, BuildNumber=self.BuildNum, IsMakefile=IsMakefile) OutputFileList = [] diff --git a/BaseTools/Source/Python/GenFds/Vtf.py b/BaseTools/Source/Python/GenFds/Vtf.py index 5cb2d4acfb1c..9dcd48b2d812 100644 --- a/BaseTools/Source/Python/GenFds/Vtf.py +++ b/BaseTools/Source/Python/GenFds/Vtf.py @@ -18,21 +18,24 @@ from __future__ import absolute_import from .GenFdsGlobalVariable import GenFdsGlobalVariable import Common.LongFilePathOs as os -from CommonDataClass.FdfClass import VtfClassObject from Common.LongFilePathSupport import OpenLongFilePath as open -T_CHAR_LF = '\n' +from Common.DataType import TAB_LINE_BREAK ## generate VTF # # -class Vtf (VtfClassObject): +class Vtf (object): ## The constructor # # @param self The object pointer # def __init__(self): - VtfClassObject.__init__(self) + self.KeyArch = None + self.ArchList = None + self.UiName = None + self.ResetBin = None + self.ComponentStatementList = [] ## GenVtf() method # @@ -44,7 +47,6 @@ class Vtf (VtfClassObject): # def GenVtf(self, FdAddressDict) : self.GenBsfInf() - OutputFile = os.path.join(GenFdsGlobalVariable.FvDir, self.UiName + '.Vtf') BaseAddArg = self.GetBaseAddressArg(FdAddressDict) OutputArg, VtfRawDict = self.GenOutputArg() @@ -69,57 +71,57 @@ class Vtf (VtfClassObject): FvList = self.GetFvList() self.BsfInfName = os.path.join(GenFdsGlobalVariable.FvDir, self.UiName + '.inf') BsfInf = open(self.BsfInfName, 'w+') - if self.ResetBin is not None: - BsfInf.writelines ("[OPTIONS]" + T_CHAR_LF) + if self.ResetBin: + BsfInf.writelines ("[OPTIONS]" + TAB_LINE_BREAK) BsfInf.writelines ("IA32_RST_BIN" + \ " = " + \ GenFdsGlobalVariable.MacroExtend(GenFdsGlobalVariable.ReplaceWorkspaceMacro(self.ResetBin)) + \ - T_CHAR_LF) - BsfInf.writelines (T_CHAR_LF) + TAB_LINE_BREAK) + BsfInf.writelines (TAB_LINE_BREAK) - BsfInf.writelines ("[COMPONENTS]" + T_CHAR_LF) + BsfInf.writelines 
("[COMPONENTS]" + TAB_LINE_BREAK) for ComponentObj in self.ComponentStatementList : BsfInf.writelines ("COMP_NAME" + \ " = " + \ ComponentObj.CompName + \ - T_CHAR_LF) + TAB_LINE_BREAK) if ComponentObj.CompLoc.upper() == 'NONE': BsfInf.writelines ("COMP_LOC" + \ " = " + \ 'N' + \ - T_CHAR_LF) + TAB_LINE_BREAK) - elif ComponentObj.FilePos is not None: + elif ComponentObj.FilePos: BsfInf.writelines ("COMP_LOC" + \ " = " + \ ComponentObj.FilePos + \ - T_CHAR_LF) + TAB_LINE_BREAK) else: Index = FvList.index(ComponentObj.CompLoc.upper()) if Index == 0: BsfInf.writelines ("COMP_LOC" + \ " = " + \ 'F' + \ - T_CHAR_LF) + TAB_LINE_BREAK) elif Index == 1: BsfInf.writelines ("COMP_LOC" + \ " = " + \ 'S' + \ - T_CHAR_LF) + TAB_LINE_BREAK) BsfInf.writelines ("COMP_TYPE" + \ " = " + \ ComponentObj.CompType + \ - T_CHAR_LF) + TAB_LINE_BREAK) BsfInf.writelines ("COMP_VER" + \ " = " + \ ComponentObj.CompVer + \ - T_CHAR_LF) + TAB_LINE_BREAK) BsfInf.writelines ("COMP_CS" + \ " = " + \ ComponentObj.CompCs + \ - T_CHAR_LF) + TAB_LINE_BREAK) BinPath = ComponentObj.CompBin if BinPath != '-': @@ -127,7 +129,7 @@ class Vtf (VtfClassObject): BsfInf.writelines ("COMP_BIN" + \ " = " + \ BinPath + \ - T_CHAR_LF) + TAB_LINE_BREAK) SymPath = ComponentObj.CompSym if SymPath != '-': @@ -135,12 +137,12 @@ class Vtf (VtfClassObject): BsfInf.writelines ("COMP_SYM" + \ " = " + \ SymPath + \ - T_CHAR_LF) + TAB_LINE_BREAK) BsfInf.writelines ("COMP_SIZE" + \ " = " + \ ComponentObj.CompSize + \ - T_CHAR_LF) - BsfInf.writelines (T_CHAR_LF) + TAB_LINE_BREAK) + BsfInf.writelines (TAB_LINE_BREAK) BsfInf.close() diff --git a/BaseTools/Source/Python/build/BuildReport.py b/BaseTools/Source/Python/build/BuildReport.py index a66adfb16746..95863fefe151 100644 --- a/BaseTools/Source/Python/build/BuildReport.py +++ b/BaseTools/Source/Python/build/BuildReport.py @@ -46,6 +46,7 @@ from Common.StringUtils import NormPath from Common.DataType import * import collections from Common.Expression import * +from GenFds.AprioriSection import DXE_APRIORI_GUID, PEI_APRIORI_GUID ## Pattern to extract contents in EDK DXS files gDxsDependencyPattern = re.compile(r"DEPENDENCY_START(.+)DEPENDENCY_END", re.DOTALL) @@ -1853,8 +1854,8 @@ class FdRegionReport(object): # # Add PEI and DXE a priori files GUIDs defined in PI specification. # - self._GuidsDb["1B45CC0A-156A-428A-AF62-49864DA0E6E6"] = "PEI Apriori" - self._GuidsDb["FC510EE7-FFDC-11D4-BD41-0080C73C8881"] = "DXE Apriori" + self._GuidsDb[PEI_APRIORI_GUID] = "PEI Apriori" + self._GuidsDb[DXE_APRIORI_GUID] = "DXE Apriori" # # Add ACPI table storage file # -- 2.16.2.windows.1 ^ permalink raw reply related [flat|nested] 14+ messages in thread
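The BuildReport.py hunk above and the EotMain.py hunk elsewhere in this patch apply the same refactor (item 7 of the commit message): the PEI/DXE a priori file GUIDs are defined once in GenFds/AprioriSection.py and imported wherever they are compared, instead of being hard-coded at each call site. A minimal, self-contained sketch of that pattern follows; it is illustrative only, the helper name is hypothetical, and the constant values are assumed from the literals that BuildReport.py previously hard-coded (the AprioriSection.py hunk itself is not reproduced here):

    # In the tree these two constants live in GenFds/AprioriSection.py and are
    # imported by BuildReport.py and EotMain.py; values assumed from the
    # strings BuildReport.py used to hard-code.
    PEI_APRIORI_GUID = '1B45CC0A-156A-428A-AF62-49864DA0E6E6'
    DXE_APRIORI_GUID = 'FC510EE7-FFDC-11D4-BD41-0080C73C8881'

    def LabelAprioriFiles(GuidsDb):
        # Hypothetical helper: tag the PI-defined a priori files using the
        # shared constants rather than repeating the GUID strings inline.
        GuidsDb[PEI_APRIORI_GUID] = "PEI Apriori"
        GuidsDb[DXE_APRIORI_GUID] = "DXE Apriori"
        return GuidsDb

    # Consumers that parse GUIDs out of firmware images compare them
    # case-insensitively, e.g. Ffs.Guid.lower() == PEI_APRIORI_GUID.lower(),
    # which is how EotMain.py uses the constants after this change.

Keeping a single definition means the build report, the EOT tool, and the FV generator stay in agreement if these PI-defined values ever need to be corrected in one place.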
* Re: [Patch v3 1/6] BaseTools/GenFds: cleanup GenFds 2018-10-23 17:29 ` [Patch v3 1/6] " Jaben Carsey 2018-10-25 13:45 ` Feng, Bob C @ 2018-10-26 4:53 ` Gary Lin 2018-10-26 5:25 ` Gao, Liming 1 sibling, 1 reply; 14+ messages in thread From: Gary Lin @ 2018-10-26 4:53 UTC (permalink / raw) To: Jaben Carsey; +Cc: edk2-devel, Liming Gao On Tue, Oct 23, 2018 at 10:29:19AM -0700, Jaben Carsey wrote: > 1) remove wildcard imports and use explicit imports > 2) refactor to use shared variables from Common/DataType > 3) rename to not shadow imports > 4) don't assign a variable in a loop (just do final assignment) > 5) remove spaces, parens, unused or commented out code, etc. > 6) merge unnecessary parent classes into child > 7) refactor to share DXE and PEI apriori GUIDs from one place > this includes changes to Build and EOT files > 8) for PEP8, dont use __ for custom methods. > Hi Jaben, I found OVMF crashed immediately at starting up after applying this patch :-\ == $ qemu-system-x86_64 -s -enable-kvm -smp 1 -m 1024 \ -drive if=pflash,format=raw,readonly,file=/home/gary/git/edk2/Build/OvmfX64/DEBUG_GCC5/FV/OVMF_CODE.fd \ -drive if=pflash,format=raw,file=/home/gary/git/edk2/Build/OvmfX64/DEBUG_GCC5/FV/OVMF_VARS.fd \ -machine type=q35 \ -drive file=fat:hda-contents/,format=raw,read-only,if=virtio \ -monitor stdio -debugcon file:debug.log -global isa-debugcon.iobase=0x402 \ -serial file:serial.log QEMU 3.0.0 monitor - type 'help' for more information (qemu) KVM internal error. Suberror: 1 emulation failure RAX=000000000081f6a0 RBX=0000000000000200 RCX=000000000081f6a0 RDX=0000000000000200 RSI=000000000081f698 RDI=000000000081fd20 RBP=000000000081f698 RSP=000000000081f4a8 R8 =000000000081f4d8 R9 =000000000081f748 R10=0000000000000010 R11=0000000000900000 R12=0000000000000000 R13=0000000000820000 R14=000000000081fd01 R15=000000000081fd01 RIP=000000000009ffff RFL=00010807 [-O---PC] CPL=0 II=0 A20=1 SMM=0 HLT=0 ES =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] CS =0018 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA] SS =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] DS =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] FS =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] GS =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] LDT=0000 0000000000000000 0000ffff 00008200 DPL=0 LDT TR =0000 0000000000000000 0000ffff 00008b00 DPL=0 TSS64-busy GDT= 00000000ffffff80 0000001f IDT= 000000000081fd70 0000021f CR0=c0000033 CR2=0000000000000000 CR3=0000000000800000 CR4=00000660 DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 DR6=00000000ffff0ff0 DR7=0000000000000400 EFER=0000000000000500 Code=00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 == And, there is only one line in debug.log: SecCoreStartupWithStack(0xFFFCC000, 0x820000) Without "-enable-kvm", the error became: == (qemu) qemu-system-x86_64: Trying to execute code outside RAM or ROM at 0x00000000000a0000 This usually means one of the following happened: (1) You told QEMU to execute a kernel for the wrong machine type, and it crashed on startup (eg trying to run a raspberry pi kernel on a versatilepb QEMU machine) (2) You didn't give QEMU a kernel or BIOS filename at all, and QEMU executed a ROM full of no-op instructions until it fell off the end (3) Your guest kernel has a bug and crashed by jumping off into nowhere This is almost always one of the first two, so check your command 
line and that you are using the right type of kernel for this machine. If you think option (3) is likely then you can try debugging your guest with the -d debug options; in particular -d guest_errors will cause the log to include a dump of the guest register state at this point. Execution cannot continue; stopping here. == Gary Lin > Cc: Yonghong Zhu <yonghong.zhu@intel.com> > Cc: Liming Gao <liming.gao@intel.com> > Cc: Bob C Feng <bob.c.feng@intel.com> > Contributed-under: TianoCore Contribution Agreement 1.1 > Signed-off-by: Jaben Carsey <jaben.carsey@intel.com> > --- > BaseTools/Source/Python/CommonDataClass/FdfClass.py | 73 - > BaseTools/Source/Python/Eot/EotMain.py | 372 ++- > BaseTools/Source/Python/GenFds/AprioriSection.py | 45 +- > BaseTools/Source/Python/GenFds/Capsule.py | 26 +- > BaseTools/Source/Python/GenFds/CapsuleData.py | 1 - > BaseTools/Source/Python/GenFds/CompressSection.py | 4 +- > BaseTools/Source/Python/GenFds/DataSection.py | 4 +- > BaseTools/Source/Python/GenFds/DepexSection.py | 5 +- > BaseTools/Source/Python/GenFds/EfiSection.py | 16 +- > BaseTools/Source/Python/GenFds/FdfParser.py | 3014 ++++++++++---------- > BaseTools/Source/Python/GenFds/Ffs.py | 82 +- > BaseTools/Source/Python/GenFds/FfsFileStatement.py | 37 +- > BaseTools/Source/Python/GenFds/FfsInfStatement.py | 10 +- > BaseTools/Source/Python/GenFds/Fv.py | 47 +- > BaseTools/Source/Python/GenFds/FvImageSection.py | 6 +- > BaseTools/Source/Python/GenFds/GenFds.py | 160 +- > BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py | 208 +- > BaseTools/Source/Python/GenFds/GuidSection.py | 4 +- > BaseTools/Source/Python/GenFds/OptionRom.py | 6 +- > BaseTools/Source/Python/GenFds/Region.py | 12 +- > BaseTools/Source/Python/GenFds/UiSection.py | 4 +- > BaseTools/Source/Python/GenFds/VerSection.py | 16 +- > BaseTools/Source/Python/GenFds/Vtf.py | 48 +- > BaseTools/Source/Python/build/BuildReport.py | 5 +- > 24 files changed, 2185 insertions(+), 2020 deletions(-) > > diff --git a/BaseTools/Source/Python/CommonDataClass/FdfClass.py b/BaseTools/Source/Python/CommonDataClass/FdfClass.py > index 64b58c2078a3..3df82082457f 100644 > --- a/BaseTools/Source/Python/CommonDataClass/FdfClass.py > +++ b/BaseTools/Source/Python/CommonDataClass/FdfClass.py > @@ -37,50 +37,6 @@ class FDClassObject: > self.RegionList = [] > self.vtfRawDict = {} > > -## FV data in FDF > -# > -# > -class FvClassObject: > - ## The constructor > - # > - # @param self The object pointer > - # > - def __init__(self): > - self.UiFvName = None > - self.CreateFileName = None > - # 3-tuple list (blockSize, numBlocks, pcd) > - self.BlockSizeList = [] > - # DefineVarDict[var] = value > - self.DefineVarDict = {} > - # SetVarDict[var] = value > - self.SetVarDict = {} > - self.FvAlignment = None > - # FvAttributeDict[attribute] = TRUE/FALSE (1/0) > - self.FvAttributeDict = {} > - self.FvNameGuid = None > - self.FvNameString = None > - self.AprioriSectionList = [] > - self.FfsList = [] > - self.BsBaseAddress = None > - self.RtBaseAddress = None > - > -## Region data in FDF > -# > -# > -class RegionClassObject: > - ## The constructor > - # > - # @param self The object pointer > - # > - def __init__(self): > - self.Offset = None # The begin position of the Region > - self.Size = None # The Size of the Region > - self.PcdOffset = None > - self.PcdSize = None > - self.SetVarDict = {} > - self.RegionType = None > - self.RegionDataList = [] > - > ## FFS data in FDF > # > # > @@ -112,7 +68,6 @@ class FileStatementClassObject (FfsClassObject) : > self.FvName = None > 
self.FdName = None > self.DefineVarDict = {} > - self.AprioriSection = None > self.KeepReloc = None > > ## INF statement data in FDF > @@ -134,19 +89,6 @@ class FfsInfStatementClassObject(FfsClassObject): > self.KeepReloc = None > self.UseArch = None > > -## APRIORI section data in FDF > -# > -# > -class AprioriSectionClassObject: > - ## The constructor > - # > - # @param self The object pointer > - # > - def __init__(self): > - # DefineVarDict[var] = value > - self.DefineVarDict = {} > - self.FfsList = [] > - > ## section data in FDF > # > # > @@ -363,21 +305,6 @@ class CapsuleClassObject : > self.CapsuleDataList = [] > self.FmpPayloadList = [] > > -## VTF data in FDF > -# > -# > -class VtfClassObject : > - ## The constructor > - # > - # @param self The object pointer > - # > - def __init__(self): > - self.KeyArch = None > - self.ArchList = None > - self.UiName = None > - self.ResetBin = None > - self.ComponentStatementList = [] > - > ## VTF component data in FDF > # > # > diff --git a/BaseTools/Source/Python/Eot/EotMain.py b/BaseTools/Source/Python/Eot/EotMain.py > index 49a1494126c4..2ad93cc6b4e3 100644 > --- a/BaseTools/Source/Python/Eot/EotMain.py > +++ b/BaseTools/Source/Python/Eot/EotMain.py > @@ -36,10 +36,9 @@ import struct > import uuid > import copy > import codecs > +from GenFds.AprioriSection import DXE_APRIORI_GUID, PEI_APRIORI_GUID > > gGuidStringFormat = "%08X-%04X-%04X-%02X%02X-%02X%02X%02X%02X%02X%02X" > -gPeiAprioriFileNameGuid = '1b45cc0a-156a-428a-af62-49864da0e6e6' > -gAprioriGuid = 'fc510ee7-ffdc-11d4-bd41-0080c73c8881' > gIndention = -4 > > class Image(array): > @@ -402,10 +401,10 @@ class FirmwareVolume(Image): > if Ffs.Type == 0x05: > FfsDxeCoreGuid = FfsID > continue > - if Ffs.Guid.lower() == gPeiAprioriFileNameGuid: > + if Ffs.Guid.lower() == PEI_APRIORI_GUID.lower(): > FfsPeiPrioriGuid = FfsID > continue > - if Ffs.Guid.lower() == gAprioriGuid: > + if Ffs.Guid.lower() == DXE_APRIORI_GUID.lower(): > FfsDxePrioriGuid = FfsID > continue > > @@ -1084,6 +1083,371 @@ class Ffs(Image): > State = property(_GetState, _SetState) > > > + _FfsGuid = "8C8CE578-8A3D-4F1C-9935-896185C32DD3" > + > + _GUID_ = struct.Struct("16x 1I2H8B") > + _LENGTH_ = struct.Struct("16x 16x 1Q") > + _SIG_ = struct.Struct("16x 16x 8x 1I") > + _ATTR_ = struct.Struct("16x 16x 8x 4x 1I") > + _HLEN_ = struct.Struct("16x 16x 8x 4x 4x 1H") > + _CHECKSUM_ = struct.Struct("16x 16x 8x 4x 4x 2x 1H") > + > + def __init__(self, Name=''): > + Image.__init__(self) > + self.Name = Name > + self.FfsDict = sdict() > + self.OrderedFfsDict = sdict() > + self.UnDispatchedFfsDict = sdict() > + self.ProtocolList = sdict() > + > + def CheckArchProtocol(self): > + for Item in EotGlobalData.gArchProtocolGuids: > + if Item.lower() not in EotGlobalData.gProtocolList: > + return False > + return True > + > + def ParseDepex(self, Depex, Type): > + List = None > + if Type == 'Ppi': > + List = EotGlobalData.gPpiList > + if Type == 'Protocol': > + List = EotGlobalData.gProtocolList > + DepexStack = [] > + DepexList = [] > + DepexString = '' > + FileDepex = None > + CouldBeLoaded = True > + for Index in range(0, len(Depex.Expression)): > + Item = Depex.Expression[Index] > + if Item == 0x00: > + Index = Index + 1 > + Guid = gGuidStringFormat % Depex.Expression[Index] > + if Guid in self.OrderedFfsDict and Depex.Expression[Index + 1] == 0x08: > + return (True, 'BEFORE %s' % Guid, [Guid, 'BEFORE']) > + elif Item == 0x01: > + Index = Index + 1 > + Guid = gGuidStringFormat % Depex.Expression[Index] > + if Guid in 
self.OrderedFfsDict and Depex.Expression[Index + 1] == 0x08: > + return (True, 'AFTER %s' % Guid, [Guid, 'AFTER']) > + elif Item == 0x02: > + Index = Index + 1 > + Guid = gGuidStringFormat % Depex.Expression[Index] > + if Guid.lower() in List: > + DepexStack.append(True) > + DepexList.append(Guid) > + else: > + DepexStack.append(False) > + DepexList.append(Guid) > + continue > + elif Item == 0x03 or Item == 0x04: > + DepexStack.append(eval(str(DepexStack.pop()) + ' ' + Depex._OPCODE_STRING_[Item].lower() + ' ' + str(DepexStack.pop()))) > + DepexList.append(str(DepexList.pop()) + ' ' + Depex._OPCODE_STRING_[Item].upper() + ' ' + str(DepexList.pop())) > + elif Item == 0x05: > + DepexStack.append(eval(Depex._OPCODE_STRING_[Item].lower() + ' ' + str(DepexStack.pop()))) > + DepexList.append(Depex._OPCODE_STRING_[Item].lower() + ' ' + str(DepexList.pop())) > + elif Item == 0x06: > + DepexStack.append(True) > + DepexList.append('TRUE') > + DepexString = DepexString + 'TRUE' + ' ' > + elif Item == 0x07: > + DepexStack.append(False) > + DepexList.append('False') > + DepexString = DepexString + 'FALSE' + ' ' > + elif Item == 0x08: > + if Index != len(Depex.Expression) - 1: > + CouldBeLoaded = False > + else: > + CouldBeLoaded = DepexStack.pop() > + else: > + CouldBeLoaded = False > + if DepexList != []: > + DepexString = DepexList[0].strip() > + return (CouldBeLoaded, DepexString, FileDepex) > + > + def Dispatch(self, Db = None): > + if Db is None: > + return False > + self.UnDispatchedFfsDict = copy.copy(self.FfsDict) > + # Find PeiCore, DexCore, PeiPriori, DxePriori first > + FfsSecCoreGuid = None > + FfsPeiCoreGuid = None > + FfsDxeCoreGuid = None > + FfsPeiPrioriGuid = None > + FfsDxePrioriGuid = None > + for FfsID in self.UnDispatchedFfsDict: > + Ffs = self.UnDispatchedFfsDict[FfsID] > + if Ffs.Type == 0x03: > + FfsSecCoreGuid = FfsID > + continue > + if Ffs.Type == 0x04: > + FfsPeiCoreGuid = FfsID > + continue > + if Ffs.Type == 0x05: > + FfsDxeCoreGuid = FfsID > + continue > + if Ffs.Guid.lower() == PEI_APRIORI_GUID.lower(): > + FfsPeiPrioriGuid = FfsID > + continue > + if Ffs.Guid.lower() == DXE_APRIORI_GUID.lower(): > + FfsDxePrioriGuid = FfsID > + continue > + > + # Parse SEC_CORE first > + if FfsSecCoreGuid is not None: > + self.OrderedFfsDict[FfsSecCoreGuid] = self.UnDispatchedFfsDict.pop(FfsSecCoreGuid) > + self.LoadPpi(Db, FfsSecCoreGuid) > + > + # Parse PEI first > + if FfsPeiCoreGuid is not None: > + self.OrderedFfsDict[FfsPeiCoreGuid] = self.UnDispatchedFfsDict.pop(FfsPeiCoreGuid) > + self.LoadPpi(Db, FfsPeiCoreGuid) > + if FfsPeiPrioriGuid is not None: > + # Load PEIM described in priori file > + FfsPeiPriori = self.UnDispatchedFfsDict.pop(FfsPeiPrioriGuid) > + if len(FfsPeiPriori.Sections) == 1: > + Section = FfsPeiPriori.Sections.popitem()[1] > + if Section.Type == 0x19: > + GuidStruct = struct.Struct('1I2H8B') > + Start = 4 > + while len(Section) > Start: > + Guid = GuidStruct.unpack_from(Section[Start : Start + 16]) > + GuidString = gGuidStringFormat % Guid > + Start = Start + 16 > + if GuidString in self.UnDispatchedFfsDict: > + self.OrderedFfsDict[GuidString] = self.UnDispatchedFfsDict.pop(GuidString) > + self.LoadPpi(Db, GuidString) > + > + self.DisPatchPei(Db) > + > + # Parse DXE then > + if FfsDxeCoreGuid is not None: > + self.OrderedFfsDict[FfsDxeCoreGuid] = self.UnDispatchedFfsDict.pop(FfsDxeCoreGuid) > + self.LoadProtocol(Db, FfsDxeCoreGuid) > + if FfsDxePrioriGuid is not None: > + # Load PEIM described in priori file > + FfsDxePriori = 
self.UnDispatchedFfsDict.pop(FfsDxePrioriGuid) > + if len(FfsDxePriori.Sections) == 1: > + Section = FfsDxePriori.Sections.popitem()[1] > + if Section.Type == 0x19: > + GuidStruct = struct.Struct('1I2H8B') > + Start = 4 > + while len(Section) > Start: > + Guid = GuidStruct.unpack_from(Section[Start : Start + 16]) > + GuidString = gGuidStringFormat % Guid > + Start = Start + 16 > + if GuidString in self.UnDispatchedFfsDict: > + self.OrderedFfsDict[GuidString] = self.UnDispatchedFfsDict.pop(GuidString) > + self.LoadProtocol(Db, GuidString) > + > + self.DisPatchDxe(Db) > + > + def LoadProtocol(self, Db, ModuleGuid): > + SqlCommand = """select GuidValue from Report > + where SourceFileFullPath in > + (select Value1 from Inf where BelongsToFile = > + (select BelongsToFile from Inf > + where Value1 = 'FILE_GUID' and Value2 like '%s' and Model = %s) > + and Model = %s) > + and ItemType = 'Protocol' and ItemMode = 'Produced'""" \ > + % (ModuleGuid, 5001, 3007) > + RecordSet = Db.TblReport.Exec(SqlCommand) > + for Record in RecordSet: > + SqlCommand = """select Value2 from Inf where BelongsToFile = > + (select DISTINCT BelongsToFile from Inf > + where Value1 = > + (select SourceFileFullPath from Report > + where GuidValue like '%s' and ItemMode = 'Callback')) > + and Value1 = 'FILE_GUID'""" % Record[0] > + CallBackSet = Db.TblReport.Exec(SqlCommand) > + if CallBackSet != []: > + EotGlobalData.gProtocolList[Record[0].lower()] = ModuleGuid > + else: > + EotGlobalData.gProtocolList[Record[0].lower()] = ModuleGuid > + > + def LoadPpi(self, Db, ModuleGuid): > + SqlCommand = """select GuidValue from Report > + where SourceFileFullPath in > + (select Value1 from Inf where BelongsToFile = > + (select BelongsToFile from Inf > + where Value1 = 'FILE_GUID' and Value2 like '%s' and Model = %s) > + and Model = %s) > + and ItemType = 'Ppi' and ItemMode = 'Produced'""" \ > + % (ModuleGuid, 5001, 3007) > + RecordSet = Db.TblReport.Exec(SqlCommand) > + for Record in RecordSet: > + EotGlobalData.gPpiList[Record[0].lower()] = ModuleGuid > + > + def DisPatchDxe(self, Db): > + IsInstalled = False > + ScheduleList = sdict() > + for FfsID in self.UnDispatchedFfsDict: > + CouldBeLoaded = False > + DepexString = '' > + FileDepex = None > + Ffs = self.UnDispatchedFfsDict[FfsID] > + if Ffs.Type == 0x07: > + # Get Depex > + IsFoundDepex = False > + for Section in Ffs.Sections.values(): > + # Find Depex > + if Section.Type == 0x13: > + IsFoundDepex = True > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(Section._SubImages[4], 'Protocol') > + break > + if Section.Type == 0x01: > + CompressSections = Section._SubImages[4] > + for CompressSection in CompressSections.Sections: > + if CompressSection.Type == 0x13: > + IsFoundDepex = True > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(CompressSection._SubImages[4], 'Protocol') > + break > + if CompressSection.Type == 0x02: > + NewSections = CompressSection._SubImages[4] > + for NewSection in NewSections.Sections: > + if NewSection.Type == 0x13: > + IsFoundDepex = True > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(NewSection._SubImages[4], 'Protocol') > + break > + > + # Not find Depex > + if not IsFoundDepex: > + CouldBeLoaded = self.CheckArchProtocol() > + DepexString = '' > + FileDepex = None > + > + # Append New Ffs > + if CouldBeLoaded: > + IsInstalled = True > + NewFfs = self.UnDispatchedFfsDict.pop(FfsID) > + NewFfs.Depex = DepexString > + if FileDepex is not None: > + ScheduleList.insert(FileDepex[1], FfsID, NewFfs, FileDepex[0]) > + 
else: > + ScheduleList[FfsID] = NewFfs > + else: > + self.UnDispatchedFfsDict[FfsID].Depex = DepexString > + > + for FfsID in ScheduleList: > + NewFfs = ScheduleList.pop(FfsID) > + FfsName = 'UnKnown' > + self.OrderedFfsDict[FfsID] = NewFfs > + self.LoadProtocol(Db, FfsID) > + > + SqlCommand = """select Value2 from Inf > + where BelongsToFile = (select BelongsToFile from Inf where Value1 = 'FILE_GUID' and lower(Value2) = lower('%s') and Model = %s) > + and Model = %s and Value1='BASE_NAME'""" % (FfsID, 5001, 5001) > + RecordSet = Db.TblReport.Exec(SqlCommand) > + if RecordSet != []: > + FfsName = RecordSet[0][0] > + > + if IsInstalled: > + self.DisPatchDxe(Db) > + > + def DisPatchPei(self, Db): > + IsInstalled = False > + for FfsID in self.UnDispatchedFfsDict: > + CouldBeLoaded = True > + DepexString = '' > + FileDepex = None > + Ffs = self.UnDispatchedFfsDict[FfsID] > + if Ffs.Type == 0x06 or Ffs.Type == 0x08: > + # Get Depex > + for Section in Ffs.Sections.values(): > + if Section.Type == 0x1B: > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(Section._SubImages[4], 'Ppi') > + break > + if Section.Type == 0x01: > + CompressSections = Section._SubImages[4] > + for CompressSection in CompressSections.Sections: > + if CompressSection.Type == 0x1B: > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(CompressSection._SubImages[4], 'Ppi') > + break > + if CompressSection.Type == 0x02: > + NewSections = CompressSection._SubImages[4] > + for NewSection in NewSections.Sections: > + if NewSection.Type == 0x1B: > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(NewSection._SubImages[4], 'Ppi') > + break > + > + # Append New Ffs > + if CouldBeLoaded: > + IsInstalled = True > + NewFfs = self.UnDispatchedFfsDict.pop(FfsID) > + NewFfs.Depex = DepexString > + self.OrderedFfsDict[FfsID] = NewFfs > + self.LoadPpi(Db, FfsID) > + else: > + self.UnDispatchedFfsDict[FfsID].Depex = DepexString > + > + if IsInstalled: > + self.DisPatchPei(Db) > + > + > + def __str__(self): > + global gIndention > + gIndention += 4 > + FvInfo = '\n' + ' ' * gIndention > + FvInfo += "[FV:%s] file_system=%s size=%x checksum=%s\n" % (self.Name, self.FileSystemGuid, self.Size, self.Checksum) > + FfsInfo = "\n".join([str(self.FfsDict[FfsId]) for FfsId in self.FfsDict]) > + gIndention -= 4 > + return FvInfo + FfsInfo > + > + def _Unpack(self): > + Size = self._LENGTH_.unpack_from(self._BUF_, self._OFF_)[0] > + self.empty() > + self.extend(self._BUF_[self._OFF_:self._OFF_+Size]) > + > + # traverse the FFS > + EndOfFv = Size > + FfsStartAddress = self.HeaderSize > + LastFfsObj = None > + while FfsStartAddress < EndOfFv: > + FfsObj = Ffs() > + FfsObj.frombuffer(self, FfsStartAddress) > + FfsId = repr(FfsObj) > + if ((self.Attributes & 0x00000800) != 0 and len(FfsObj) == 0xFFFFFF) \ > + or ((self.Attributes & 0x00000800) == 0 and len(FfsObj) == 0): > + if LastFfsObj is not None: > + LastFfsObj.FreeSpace = EndOfFv - LastFfsObj._OFF_ - len(LastFfsObj) > + else: > + if FfsId in self.FfsDict: > + EdkLogger.error("FV", 0, "Duplicate GUID in FFS", > + ExtraData="\t%s @ %s\n\t%s @ %s" \ > + % (FfsObj.Guid, FfsObj.Offset, > + self.FfsDict[FfsId].Guid, self.FfsDict[FfsId].Offset)) > + self.FfsDict[FfsId] = FfsObj > + if LastFfsObj is not None: > + LastFfsObj.FreeSpace = FfsStartAddress - LastFfsObj._OFF_ - len(LastFfsObj) > + > + FfsStartAddress += len(FfsObj) > + # > + # align to next 8-byte aligned address: A = (A + 8 - 1) & (~(8 - 1)) > + # The next FFS must be at the latest next 8-byte aligned address > + # > + 
FfsStartAddress = (FfsStartAddress + 7) & (~7) > + LastFfsObj = FfsObj > + > + def _GetAttributes(self): > + return self.GetField(self._ATTR_, 0)[0] > + > + def _GetSize(self): > + return self.GetField(self._LENGTH_, 0)[0] > + > + def _GetChecksum(self): > + return self.GetField(self._CHECKSUM_, 0)[0] > + > + def _GetHeaderLength(self): > + return self.GetField(self._HLEN_, 0)[0] > + > + def _GetFileSystemGuid(self): > + return gGuidStringFormat % self.GetField(self._GUID_, 0) > + > + Attributes = property(_GetAttributes) > + Size = property(_GetSize) > + Checksum = property(_GetChecksum) > + HeaderSize = property(_GetHeaderLength) > + FileSystemGuid = property(_GetFileSystemGuid) > + > ## MultipleFv() class > # > # A class for Multiple FV > diff --git a/BaseTools/Source/Python/GenFds/AprioriSection.py b/BaseTools/Source/Python/GenFds/AprioriSection.py > index 7196f7f2c753..e7b856a1150a 100644 > --- a/BaseTools/Source/Python/GenFds/AprioriSection.py > +++ b/BaseTools/Source/Python/GenFds/AprioriSection.py > @@ -16,28 +16,31 @@ > # Import Modules > # > from __future__ import absolute_import > -from struct import * > +from struct import pack > import Common.LongFilePathOs as os > from io import BytesIO > -from . import FfsFileStatement > +from .FfsFileStatement import FileStatement > from .GenFdsGlobalVariable import GenFdsGlobalVariable > -from CommonDataClass.FdfClass import AprioriSectionClassObject > -from Common.StringUtils import * > +from Common.StringUtils import NormPath > from Common.Misc import SaveFileOnChange, PathClass > -from Common import EdkLogger > -from Common.BuildToolError import * > +from Common.EdkLogger import error as EdkLoggerError > +from Common.BuildToolError import RESOURCE_NOT_AVAILABLE > from Common.DataType import TAB_COMMON > > +DXE_APRIORI_GUID = "FC510EE7-FFDC-11D4-BD41-0080C73C8881" > +PEI_APRIORI_GUID = "1B45CC0A-156A-428A-AF62-49864DA0E6E6" > + > ## process APRIORI file data and generate PEI/DXE APRIORI file > # > # > -class AprioriSection (AprioriSectionClassObject): > +class AprioriSection (object): > ## The constructor > # > # @param self The object pointer > # > def __init__(self): > - AprioriSectionClassObject.__init__(self) > + self.DefineVarDict = {} > + self.FfsList = [] > self.AprioriType = "" > > ## GenFfs() method > @@ -50,16 +53,16 @@ class AprioriSection (AprioriSectionClassObject): > # @retval string Generated file name > # > def GenFfs (self, FvName, Dict = {}, IsMakefile = False): > - DXE_GUID = "FC510EE7-FFDC-11D4-BD41-0080C73C8881" > - PEI_GUID = "1B45CC0A-156A-428A-AF62-49864DA0E6E6" > Buffer = BytesIO('') > - AprioriFileGuid = DXE_GUID > if self.AprioriType == "PEI": > - AprioriFileGuid = PEI_GUID > + AprioriFileGuid = PEI_APRIORI_GUID > + else: > + AprioriFileGuid = DXE_APRIORI_GUID > + > OutputAprFilePath = os.path.join (GenFdsGlobalVariable.WorkSpaceDir, \ > GenFdsGlobalVariable.FfsDir,\ > AprioriFileGuid + FvName) > - if not os.path.exists(OutputAprFilePath) : > + if not os.path.exists(OutputAprFilePath): > os.makedirs(OutputAprFilePath) > > OutputAprFileName = os.path.join( OutputAprFilePath, \ > @@ -69,33 +72,30 @@ class AprioriSection (AprioriSectionClassObject): > > Dict.update(self.DefineVarDict) > InfFileName = None > - for FfsObj in self.FfsList : > + for FfsObj in self.FfsList: > Guid = "" > - if isinstance(FfsObj, FfsFileStatement.FileStatement): > + if isinstance(FfsObj, FileStatement): > Guid = FfsObj.NameGuid > else: > InfFileName = NormPath(FfsObj.InfFileName) > Arch = FfsObj.GetCurrentArch() > > - if Arch is not 
None: > + if Arch: > Dict['$(ARCH)'] = Arch > InfFileName = GenFdsGlobalVariable.MacroExtend(InfFileName, Dict, Arch) > > - if Arch is not None: > + if Arch: > Inf = GenFdsGlobalVariable.WorkSpace.BuildObject[PathClass(InfFileName, GenFdsGlobalVariable.WorkSpaceDir), Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] > Guid = Inf.Guid > - > else: > Inf = GenFdsGlobalVariable.WorkSpace.BuildObject[PathClass(InfFileName, GenFdsGlobalVariable.WorkSpaceDir), TAB_COMMON, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] > Guid = Inf.Guid > > - self.BinFileList = Inf.Module.Binaries > - if self.BinFileList == []: > - EdkLogger.error("GenFds", RESOURCE_NOT_AVAILABLE, > + if not Inf.Module.Binaries: > + EdkLoggerError("GenFds", RESOURCE_NOT_AVAILABLE, > "INF %s not found in build ARCH %s!" \ > % (InfFileName, GenFdsGlobalVariable.ArchList)) > > - > GuidPart = Guid.split('-') > Buffer.write(pack('I', long(GuidPart[0], 16))) > Buffer.write(pack('H', int(GuidPart[1], 16))) > @@ -123,4 +123,3 @@ class AprioriSection (AprioriSectionClassObject): > 'EFI_FV_FILETYPE_FREEFORM', AprioriFileGuid, MakefilePath=MakefilePath) > > return AprFfsFileName > - > diff --git a/BaseTools/Source/Python/GenFds/Capsule.py b/BaseTools/Source/Python/GenFds/Capsule.py > index baa6e1dfa2ab..df29c40dbd20 100644 > --- a/BaseTools/Source/Python/GenFds/Capsule.py > +++ b/BaseTools/Source/Python/GenFds/Capsule.py > @@ -16,22 +16,18 @@ > # Import Modules > # > from __future__ import absolute_import > -from .GenFdsGlobalVariable import GenFdsGlobalVariable > -from .GenFdsGlobalVariable import FindExtendTool > +from .GenFdsGlobalVariable import GenFdsGlobalVariable, FindExtendTool > from CommonDataClass.FdfClass import CapsuleClassObject > import Common.LongFilePathOs as os > -import subprocess > from io import BytesIO > -from Common.Misc import SaveFileOnChange > -from Common.Misc import PackRegistryFormatGuid > +from Common.Misc import SaveFileOnChange, PackRegistryFormatGuid > import uuid > from struct import pack > from Common import EdkLogger > -from Common.BuildToolError import * > +from Common.BuildToolError import GENFDS_ERROR > +from Common.DataType import TAB_LINE_BREAK > > - > -T_CHAR_LF = '\n' > -WIN_CERT_REVISION = 0x0200 > +WIN_CERT_REVISION = 0x0200 > WIN_CERT_TYPE_EFI_GUID = 0x0EF1 > EFI_CERT_TYPE_PKCS7_GUID = uuid.UUID('{4aafd29d-68df-49ee-8aa9-347d375665a7}') > EFI_CERT_TYPE_RSA2048_SHA256_GUID = uuid.UUID('{a7717414-c616-4977-9420-844712a735bf}') > @@ -39,7 +35,7 @@ EFI_CERT_TYPE_RSA2048_SHA256_GUID = uuid.UUID('{a7717414-c616-4977-9420-844712a7 > ## create inf file describes what goes into capsule and call GenFv to generate capsule > # > # > -class Capsule (CapsuleClassObject) : > +class Capsule (CapsuleClassObject): > ## The constructor > # > # @param self The object pointer > @@ -210,16 +206,16 @@ class Capsule (CapsuleClassObject) : > return self.GenFmpCapsule() > > CapInfFile = self.GenCapInf() > - CapInfFile.writelines("[files]" + T_CHAR_LF) > + CapInfFile.writelines("[files]" + TAB_LINE_BREAK) > CapFileList = [] > - for CapsuleDataObj in self.CapsuleDataList : > + for CapsuleDataObj in self.CapsuleDataList: > CapsuleDataObj.CapsuleName = self.CapsuleName > FileName = CapsuleDataObj.GenCapsuleSubItem() > CapsuleDataObj.CapsuleName = None > CapFileList.append(FileName) > CapInfFile.writelines("EFI_FILE_NAME = " + \ > FileName + \ > - T_CHAR_LF) > + TAB_LINE_BREAK) > SaveFileOnChange(self.CapInfFileName, CapInfFile.getvalue(), False) > CapInfFile.close() > # > @@ 
-249,13 +245,13 @@ class Capsule (CapsuleClassObject) : > self.UiCapsuleName + "_Cap" + '.inf') > CapInfFile = BytesIO() #open (self.CapInfFileName , 'w+') > > - CapInfFile.writelines("[options]" + T_CHAR_LF) > + CapInfFile.writelines("[options]" + TAB_LINE_BREAK) > > for Item in self.TokensDict: > CapInfFile.writelines("EFI_" + \ > Item + \ > ' = ' + \ > self.TokensDict[Item] + \ > - T_CHAR_LF) > + TAB_LINE_BREAK) > > return CapInfFile > diff --git a/BaseTools/Source/Python/GenFds/CapsuleData.py b/BaseTools/Source/Python/GenFds/CapsuleData.py > index 0caba8983d2e..6071e9f4a545 100644 > --- a/BaseTools/Source/Python/GenFds/CapsuleData.py > +++ b/BaseTools/Source/Python/GenFds/CapsuleData.py > @@ -16,7 +16,6 @@ > # Import Modules > # > from __future__ import absolute_import > -from . import Ffs > from .GenFdsGlobalVariable import GenFdsGlobalVariable > from io import BytesIO > from struct import pack > diff --git a/BaseTools/Source/Python/GenFds/CompressSection.py b/BaseTools/Source/Python/GenFds/CompressSection.py > index aaaabf84dca8..51f6791fab19 100644 > --- a/BaseTools/Source/Python/GenFds/CompressSection.py > +++ b/BaseTools/Source/Python/GenFds/CompressSection.py > @@ -16,7 +16,7 @@ > # Import Modules > # > from __future__ import absolute_import > -from .Ffs import Ffs > +from .Ffs import SectionSuffix > from . import Section > import subprocess > import Common.LongFilePathOs as os > @@ -86,7 +86,7 @@ class CompressSection (CompressSectionClassObject) : > ModuleName + \ > SUP_MODULE_SEC + \ > SecNum + \ > - Ffs.SectionSuffix['COMPRESS'] > + SectionSuffix['COMPRESS'] > OutputFile = os.path.normpath(OutputFile) > DummyFile = OutputFile + '.dummy' > GenFdsGlobalVariable.GenerateSection(DummyFile, SectFiles, InputAlign=SectAlign, IsMakefile=IsMakefile) > diff --git a/BaseTools/Source/Python/GenFds/DataSection.py b/BaseTools/Source/Python/GenFds/DataSection.py > index a6387b07c582..28f9b931ca4b 100644 > --- a/BaseTools/Source/Python/GenFds/DataSection.py > +++ b/BaseTools/Source/Python/GenFds/DataSection.py > @@ -19,7 +19,7 @@ from __future__ import absolute_import > from . import Section > from .GenFdsGlobalVariable import GenFdsGlobalVariable > import subprocess > -from .Ffs import Ffs > +from .Ffs import SectionSuffix > import Common.LongFilePathOs as os > from CommonDataClass.FdfClass import DataSectionClassObject > from Common.Misc import PeImageClass > @@ -121,7 +121,7 @@ class DataSection (DataSectionClassObject): > ) > self.SectFileName = TeFile > > - OutputFile = os.path.join (OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get(self.SecType)) > + OutputFile = os.path.join (OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get(self.SecType)) > OutputFile = os.path.normpath(OutputFile) > GenFdsGlobalVariable.GenerateSection(OutputFile, [self.SectFileName], Section.Section.SectionType.get(self.SecType), IsMakefile = IsMakefile) > FileList = [OutputFile] > diff --git a/BaseTools/Source/Python/GenFds/DepexSection.py b/BaseTools/Source/Python/GenFds/DepexSection.py > index b2d123bfc045..78efc18816e8 100644 > --- a/BaseTools/Source/Python/GenFds/DepexSection.py > +++ b/BaseTools/Source/Python/GenFds/DepexSection.py > @@ -18,8 +18,6 @@ > from __future__ import absolute_import > from . 
import Section > from .GenFdsGlobalVariable import GenFdsGlobalVariable > -import subprocess > -from .Ffs import Ffs > import Common.LongFilePathOs as os > from CommonDataClass.FdfClass import DepexSectionClassObject > from AutoGen.GenDepex import DependencyExpression > @@ -46,9 +44,8 @@ class DepexSection (DepexSectionClassObject): > GenFdsGlobalVariable.TargetName, > GenFdsGlobalVariable.ToolChainTag) > for Inf in GenFdsGlobalVariable.FdfParser.Profile.InfList: > - ModuleFile = PathClass(Inf, GenFdsGlobalVariable.WorkSpaceDir) > ModuleData = GenFdsGlobalVariable.WorkSpace.BuildObject[ > - ModuleFile, > + PathClass(Inf, GenFdsGlobalVariable.WorkSpaceDir), > Arch, > GenFdsGlobalVariable.TargetName, > GenFdsGlobalVariable.ToolChainTag > diff --git a/BaseTools/Source/Python/GenFds/EfiSection.py b/BaseTools/Source/Python/GenFds/EfiSection.py > index 623b77d27427..f8573b5c7d1b 100644 > --- a/BaseTools/Source/Python/GenFds/EfiSection.py > +++ b/BaseTools/Source/Python/GenFds/EfiSection.py > @@ -20,7 +20,7 @@ from struct import * > from . import Section > from .GenFdsGlobalVariable import GenFdsGlobalVariable > import subprocess > -from .Ffs import Ffs > +from .Ffs import SectionSuffix > import Common.LongFilePathOs as os > from CommonDataClass.FdfClass import EfiSectionClassObject > from Common import EdkLogger > @@ -124,7 +124,7 @@ class EfiSection (EfiSectionClassObject): > BuildNumTuple = tuple() > > Num = SecNum > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + Ffs.SectionSuffix.get(SectionType)) > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + SectionSuffix.get(SectionType)) > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', > #Ui=StringData, > Ver=BuildNum, > @@ -135,7 +135,7 @@ class EfiSection (EfiSectionClassObject): > for File in FileList: > Index = Index + 1 > Num = '%s.%d' %(SecNum, Index) > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + Ffs.SectionSuffix.get(SectionType)) > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + SectionSuffix.get(SectionType)) > f = open(File, 'r') > VerString = f.read() > f.close() > @@ -164,7 +164,7 @@ class EfiSection (EfiSectionClassObject): > else: > EdkLogger.error("GenFds", GENFDS_ERROR, "File: %s miss Version Section value" %InfFileName) > Num = SecNum > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + Ffs.SectionSuffix.get(SectionType)) > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + SectionSuffix.get(SectionType)) > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', > #Ui=VerString, > Ver=BuildNum, > @@ -185,7 +185,7 @@ class EfiSection (EfiSectionClassObject): > Num = SecNum > if IsMakefile and StringData == ModuleNameStr: > StringData = "$(MODULE_NAME)" > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + Ffs.SectionSuffix.get(SectionType)) > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + SectionSuffix.get(SectionType)) > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_USER_INTERFACE', > Ui=StringData, IsMakefile=IsMakefile) > OutputFileList.append(OutputFile) > @@ -194,7 +194,7 @@ class EfiSection (EfiSectionClassObject): > for File in FileList: > Index = Index + 1 > Num = '%s.%d' %(SecNum, Index) > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + Ffs.SectionSuffix.get(SectionType)) > + 
OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + SectionSuffix.get(SectionType)) > f = open(File, 'r') > UiString = f.read() > f.close() > @@ -218,7 +218,7 @@ class EfiSection (EfiSectionClassObject): > Num = SecNum > if IsMakefile and StringData == ModuleNameStr: > StringData = "$(MODULE_NAME)" > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + Ffs.SectionSuffix.get(SectionType)) > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + SectionSuffix.get(SectionType)) > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_USER_INTERFACE', > Ui=StringData, IsMakefile=IsMakefile) > OutputFileList.append(OutputFile) > @@ -239,7 +239,7 @@ class EfiSection (EfiSectionClassObject): > """ Copy Map file to FFS output path """ > Index = Index + 1 > Num = '%s.%d' %(SecNum, Index) > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + Num + Ffs.SectionSuffix.get(SectionType)) > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + Num + SectionSuffix.get(SectionType)) > File = GenFdsGlobalVariable.MacroExtend(File, Dict) > > #Get PE Section alignment when align is set to AUTO > diff --git a/BaseTools/Source/Python/GenFds/FdfParser.py b/BaseTools/Source/Python/GenFds/FdfParser.py > index 63687e98bb31..62bf9f14012a 100644 > --- a/BaseTools/Source/Python/GenFds/FdfParser.py > +++ b/BaseTools/Source/Python/GenFds/FdfParser.py > @@ -18,79 +18,66 @@ > # > from __future__ import print_function > from __future__ import absolute_import > -import re > +from re import compile, DOTALL > +from string import hexdigits > +from uuid import UUID > > -from . import Fd > -from . import Region > -from . import Fv > -from . import AprioriSection > -from . import FfsInfStatement > -from . import FfsFileStatement > -from . import VerSection > -from . import UiSection > -from . import FvImageSection > -from . import DataSection > -from . import DepexSection > -from . import CompressSection > -from . import GuidSection > -from . import Capsule > -from . import CapsuleData > -from . import Rule > -from . import RuleComplexFile > -from . import RuleSimpleFile > -from . import EfiSection > -from . import Vtf > -from . import ComponentStatement > -from . import OptionRom > -from . import OptRomInfStatement > -from . 
import OptRomFileStatement > -import string > - > -from .GenFdsGlobalVariable import GenFdsGlobalVariable > from Common.BuildToolError import * > from Common import EdkLogger > -from Common.Misc import PathClass > -from Common.StringUtils import NormPath > -import Common.GlobalData as GlobalData > -from Common.Expression import * > +from Common.Misc import PathClass, tdict > +from Common.StringUtils import NormPath, ReplaceMacro > from Common import GlobalData > +from Common.Expression import * > from Common.DataType import * > -from Common.StringUtils import ReplaceMacro > -import uuid > -from Common.Misc import tdict > from Common.MultipleWorkspace import MultipleWorkspace as mws > import Common.LongFilePathOs as os > from Common.LongFilePathSupport import OpenLongFilePath as open > -from .Capsule import EFI_CERT_TYPE_PKCS7_GUID > -from .Capsule import EFI_CERT_TYPE_RSA2048_SHA256_GUID > from Common.RangeExpression import RangeExpression > from collections import OrderedDict > > -##define T_CHAR_SPACE ' ' > -##define T_CHAR_NULL '\0' > -##define T_CHAR_CR '\r' > -##define T_CHAR_TAB '\t' > -##define T_CHAR_LF '\n' > -##define T_CHAR_SLASH '/' > -##define T_CHAR_BACKSLASH '\\' > -##define T_CHAR_DOUBLE_QUOTE '\"' > -##define T_CHAR_SINGLE_QUOTE '\'' > -##define T_CHAR_STAR '*' > -##define T_CHAR_HASH '#' > +from .Fd import FD > +from .Region import Region > +from .Fv import FV > +from .AprioriSection import AprioriSection > +from .FfsInfStatement import FfsInfStatement > +from .FfsFileStatement import FileStatement > +from .VerSection import VerSection > +from .UiSection import UiSection > +from .FvImageSection import FvImageSection > +from .DataSection import DataSection > +from .DepexSection import DepexSection > +from .CompressSection import CompressSection > +from .GuidSection import GuidSection > +from .Capsule import EFI_CERT_TYPE_PKCS7_GUID, EFI_CERT_TYPE_RSA2048_SHA256_GUID, Capsule > +from .CapsuleData import CapsuleFfs, CapsulePayload, CapsuleFv, CapsuleFd, CapsuleAnyFile, CapsuleAfile > +from .RuleComplexFile import RuleComplexFile > +from .RuleSimpleFile import RuleSimpleFile > +from .EfiSection import EfiSection > +from .Vtf import Vtf > +from .ComponentStatement import ComponentStatement > +from .OptionRom import OPTIONROM > +from .OptRomInfStatement import OptRomInfStatement, OverrideAttribs > +from .OptRomFileStatement import OptRomFileStatement > +from .GenFdsGlobalVariable import GenFdsGlobalVariable > > -(T_CHAR_SPACE, T_CHAR_NULL, T_CHAR_CR, T_CHAR_TAB, T_CHAR_LF, T_CHAR_SLASH, \ > -T_CHAR_BACKSLASH, T_CHAR_DOUBLE_QUOTE, T_CHAR_SINGLE_QUOTE, T_CHAR_STAR, T_CHAR_HASH) = \ > -(' ', '\0', '\r', '\t', '\n', '/', '\\', '\"', '\'', '*', '#') > +T_CHAR_CR = '\r' > +T_CHAR_TAB = '\t' > +T_CHAR_DOUBLE_QUOTE = '\"' > +T_CHAR_SINGLE_QUOTE = '\'' > +T_CHAR_STAR = '*' > > -SEPERATOR_TUPLE = ('=', '|', ',', '{', '}') > +SEPARATORS = {TAB_EQUAL_SPLIT, TAB_VALUE_SPLIT, TAB_COMMA_SPLIT, '{', '}'} > +ALIGNMENTS = {"Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > + "256K", "512K", "1M", "2M", "4M", "8M", "16M"} > +ALIGNMENT_NOAUTO = ALIGNMENTS - {"Auto"} > > -RegionSizePattern = re.compile("\s*(?P<base>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<size>(?:0x|0X)?[a-fA-F0-9]+)\s*") > -RegionSizeGuidPattern = re.compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*\|\s*(?P<size>\w+\.\w+[\.\w\[\]]*)\s*") > -RegionOffsetPcdPattern = re.compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*$") > -ShortcutPcdPattern = 
re.compile("\s*\w+\s*=\s*(?P<value>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<name>\w+\.\w+)\s*") > -BaseAddrValuePattern = re.compile('^0[xX][0-9a-fA-F]+') > -FileExtensionPattern = re.compile(r'([a-zA-Z][a-zA-Z0-9]*)') > -TokenFindPattern = re.compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\)|\*)') > +RegionSizePattern = compile("\s*(?P<base>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<size>(?:0x|0X)?[a-fA-F0-9]+)\s*") > +RegionSizeGuidPattern = compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*\|\s*(?P<size>\w+\.\w+[\.\w\[\]]*)\s*") > +RegionOffsetPcdPattern = compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*$") > +ShortcutPcdPattern = compile("\s*\w+\s*=\s*(?P<value>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<name>\w+\.\w+)\s*") > +BaseAddrValuePattern = compile('^0[xX][0-9a-fA-F]+') > +FileExtensionPattern = compile(r'([a-zA-Z][a-zA-Z0-9]*)') > +TokenFindPattern = compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\)|\*)') > AllIncludeFileList = [] > > # Get the closest parent > @@ -109,7 +96,6 @@ def IsValidInclude (File, Line): > return True > > def GetRealFileLine (File, Line): > - > InsertedLines = 0 > for Profile in AllIncludeFileList: > if Profile.IsLineInFile(Line): > @@ -121,7 +107,7 @@ def GetRealFileLine (File, Line): > > ## The exception class that used to report error messages when parsing FDF > # > -# Currently the "ToolName" is set to be "FDF Parser". > +# Currently the "ToolName" is set to be "FdfParser". > # > class Warning (Exception): > ## The constructor > @@ -132,7 +118,6 @@ class Warning (Exception): > # @param Line The Line number that error occurs > # > def __init__(self, Str, File = None, Line = None): > - > FileLineTuple = GetRealFileLine(File, Line) > self.FileName = FileLineTuple[0] > self.LineNumber = FileLineTuple[1] > @@ -147,7 +132,7 @@ class Warning (Exception): > # > # May raise Exception when opening file. > # > -class IncludeFileProfile : > +class IncludeFileProfile: > ## The constructor > # > # @param self The object pointer > @@ -157,16 +142,11 @@ class IncludeFileProfile : > self.FileName = FileName > self.FileLinesList = [] > try: > - fsock = open(FileName, "rb", 0) > - try: > + with open(FileName, "rb", 0) as fsock: > self.FileLinesList = fsock.readlines() > for index, line in enumerate(self.FileLinesList): > - if not line.endswith('\n'): > - self.FileLinesList[index] += '\n' > - > - finally: > - fsock.close() > - > + if not line.endswith(TAB_LINE_BREAK): > + self.FileLinesList[index] += TAB_LINE_BREAK > except: > EdkLogger.error("FdfParser", FILE_OPEN_FAILURE, ExtraData=FileName) > > @@ -209,7 +189,7 @@ class IncludeFileProfile : > # > # May raise Exception when opening file. 
> # > -class FileProfile : > +class FileProfile: > ## The constructor > # > # @param self The object pointer > @@ -218,11 +198,8 @@ class FileProfile : > def __init__(self, FileName): > self.FileLinesList = [] > try: > - fsock = open(FileName, "rb", 0) > - try: > + with open(FileName, "rb", 0) as fsock: > self.FileLinesList = fsock.readlines() > - finally: > - fsock.close() > > except: > EdkLogger.error("FdfParser", FILE_OPEN_FAILURE, ExtraData=FileName) > @@ -266,41 +243,37 @@ class FdfParser: > self.CurrentOffsetWithinLine = 0 > self.CurrentFdName = None > self.CurrentFvName = None > - self.__Token = "" > - self.__SkippedChars = "" > + self._Token = "" > + self._SkippedChars = "" > GlobalData.gFdfParser = self > > # Used to section info > - self.__CurSection = [] > + self._CurSection = [] > # Key: [section name, UI name, arch] > - # Value: {MACRO_NAME : MACRO_VALUE} > - self.__MacroDict = tdict(True, 3) > - self.__PcdDict = OrderedDict() > + # Value: {MACRO_NAME: MACRO_VALUE} > + self._MacroDict = tdict(True, 3) > + self._PcdDict = OrderedDict() > > - self.__WipeOffArea = [] > + self._WipeOffArea = [] > if GenFdsGlobalVariable.WorkSpaceDir == '': > GenFdsGlobalVariable.WorkSpaceDir = os.getenv("WORKSPACE") > > - ## __SkipWhiteSpace() method > + ## _SkipWhiteSpace() method > # > - # Skip white spaces from current char, return number of chars skipped > + # Skip white spaces from current char. > # > # @param self The object pointer > - # @retval Count The number of chars skipped > # > - def __SkipWhiteSpace(self): > - Count = 0 > - while not self.__EndOfFile(): > - Count += 1 > - if self.__CurrentChar() in (T_CHAR_NULL, T_CHAR_CR, T_CHAR_LF, T_CHAR_SPACE, T_CHAR_TAB): > - self.__SkippedChars += str(self.__CurrentChar()) > - self.__GetOneChar() > - > + def _SkipWhiteSpace(self): > + while not self._EndOfFile(): > + if self._CurrentChar() in (TAB_PRINTCHAR_NUL, T_CHAR_CR, TAB_LINE_BREAK, TAB_SPACE_SPLIT, T_CHAR_TAB): > + self._SkippedChars += str(self._CurrentChar()) > + self._GetOneChar() > else: > - Count = Count - 1 > - return Count > + return > + return > > - ## __EndOfFile() method > + ## _EndOfFile() method > # > # Judge current buffer pos is at file end > # > @@ -308,17 +281,16 @@ class FdfParser: > # @retval True Current File buffer position is at file end > # @retval False Current File buffer position is NOT at file end > # > - def __EndOfFile(self): > + def _EndOfFile(self): > NumberOfLines = len(self.Profile.FileLinesList) > SizeOfLastLine = len(self.Profile.FileLinesList[-1]) > if self.CurrentLineNumber == NumberOfLines and self.CurrentOffsetWithinLine >= SizeOfLastLine - 1: > return True > - elif self.CurrentLineNumber > NumberOfLines: > + if self.CurrentLineNumber > NumberOfLines: > return True > - else: > - return False > + return False > > - ## __EndOfLine() method > + ## _EndOfLine() method > # > # Judge current buffer pos is at line end > # > @@ -326,14 +298,13 @@ class FdfParser: > # @retval True Current File buffer position is at line end > # @retval False Current File buffer position is NOT at line end > # > - def __EndOfLine(self): > + def _EndOfLine(self): > if self.CurrentLineNumber > len(self.Profile.FileLinesList): > return True > SizeOfCurrentLine = len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) > if self.CurrentOffsetWithinLine >= SizeOfCurrentLine: > return True > - else: > - return False > + return False > > ## Rewind() method > # > @@ -347,7 +318,7 @@ class FdfParser: > self.CurrentLineNumber = DestLine > self.CurrentOffsetWithinLine = DestOffset > 
> - ## __UndoOneChar() method > + ## _UndoOneChar() method > # > # Go back one char in the file buffer > # > @@ -355,80 +326,78 @@ class FdfParser: > # @retval True Successfully go back one char > # @retval False Not able to go back one char as file beginning reached > # > - def __UndoOneChar(self): > - > + def _UndoOneChar(self): > if self.CurrentLineNumber == 1 and self.CurrentOffsetWithinLine == 0: > return False > elif self.CurrentOffsetWithinLine == 0: > self.CurrentLineNumber -= 1 > - self.CurrentOffsetWithinLine = len(self.__CurrentLine()) - 1 > + self.CurrentOffsetWithinLine = len(self._CurrentLine()) - 1 > else: > self.CurrentOffsetWithinLine -= 1 > return True > > - ## __GetOneChar() method > + ## _GetOneChar() method > # > # Move forward one char in the file buffer > # > # @param self The object pointer > # > - def __GetOneChar(self): > + def _GetOneChar(self): > if self.CurrentOffsetWithinLine == len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) - 1: > self.CurrentLineNumber += 1 > self.CurrentOffsetWithinLine = 0 > else: > self.CurrentOffsetWithinLine += 1 > > - ## __CurrentChar() method > + ## _CurrentChar() method > # > # Get the char pointed to by the file buffer pointer > # > # @param self The object pointer > # @retval Char Current char > # > - def __CurrentChar(self): > + def _CurrentChar(self): > return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine] > > - ## __NextChar() method > + ## _NextChar() method > # > # Get the one char pass the char pointed to by the file buffer pointer > # > # @param self The object pointer > # @retval Char Next char > # > - def __NextChar(self): > + def _NextChar(self): > if self.CurrentOffsetWithinLine == len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) - 1: > return self.Profile.FileLinesList[self.CurrentLineNumber][0] > - else: > - return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine + 1] > + return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine + 1] > > - ## __SetCurrentCharValue() method > + ## _SetCurrentCharValue() method > # > # Modify the value of current char > # > # @param self The object pointer > # @param Value The new value of current char > # > - def __SetCurrentCharValue(self, Value): > + def _SetCurrentCharValue(self, Value): > self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine] = Value > > - ## __CurrentLine() method > + ## _CurrentLine() method > # > # Get the list that contains current line contents > # > # @param self The object pointer > # @retval List current line contents > # > - def __CurrentLine(self): > + def _CurrentLine(self): > return self.Profile.FileLinesList[self.CurrentLineNumber - 1] > > - def __StringToList(self): > + def _StringToList(self): > self.Profile.FileLinesList = [list(s) for s in self.Profile.FileLinesList] > if not self.Profile.FileLinesList: > EdkLogger.error('FdfParser', FILE_READ_FAILURE, 'The file is empty!', File=self.FileName) > self.Profile.FileLinesList[-1].append(' ') > > - def __ReplaceFragment(self, StartPos, EndPos, Value = ' '): > + def _ReplaceFragment(self, StartPos, EndPos, Value = ' '): > if StartPos[0] == EndPos[0]: > Offset = StartPos[1] > while Offset <= EndPos[1]: > @@ -437,14 +406,14 @@ class FdfParser: > return > > Offset = StartPos[1] > - while self.Profile.FileLinesList[StartPos[0]][Offset] not in ('\r', '\n'): > + while self.Profile.FileLinesList[StartPos[0]][Offset] not in (T_CHAR_CR, TAB_LINE_BREAK): > 
self.Profile.FileLinesList[StartPos[0]][Offset] = Value > Offset += 1 > > Line = StartPos[0] > while Line < EndPos[0]: > Offset = 0 > - while self.Profile.FileLinesList[Line][Offset] not in ('\r', '\n'): > + while self.Profile.FileLinesList[Line][Offset] not in (T_CHAR_CR, TAB_LINE_BREAK): > self.Profile.FileLinesList[Line][Offset] = Value > Offset += 1 > Line += 1 > @@ -454,45 +423,29 @@ class FdfParser: > self.Profile.FileLinesList[EndPos[0]][Offset] = Value > Offset += 1 > > - > - def __GetMacroName(self): > - if not self.__GetNextToken(): > - raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) > - MacroName = self.__Token > - NotFlag = False > - if MacroName.startswith('!'): > - NotFlag = True > - MacroName = MacroName[1:].strip() > - > - if not MacroName.startswith('$(') or not MacroName.endswith(')'): > - raise Warning("Macro name expected(Please use '$(%(Token)s)' if '%(Token)s' is a macro.)" % {"Token" : MacroName}, > - self.FileName, self.CurrentLineNumber) > - MacroName = MacroName[2:-1] > - return MacroName, NotFlag > - > - def __SetMacroValue(self, Macro, Value): > - if not self.__CurSection: > + def _SetMacroValue(self, Macro, Value): > + if not self._CurSection: > return > > MacroDict = {} > - if not self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]]: > - self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]] = MacroDict > + if not self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]]: > + self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]] = MacroDict > else: > - MacroDict = self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]] > + MacroDict = self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]] > MacroDict[Macro] = Value > > - def __GetMacroValue(self, Macro): > + def _GetMacroValue(self, Macro): > # Highest priority > if Macro in GlobalData.gCommandLineDefines: > return GlobalData.gCommandLineDefines[Macro] > if Macro in GlobalData.gGlobalDefines: > return GlobalData.gGlobalDefines[Macro] > > - if self.__CurSection: > - MacroDict = self.__MacroDict[ > - self.__CurSection[0], > - self.__CurSection[1], > - self.__CurSection[2] > + if self._CurSection: > + MacroDict = self._MacroDict[ > + self._CurSection[0], > + self._CurSection[1], > + self._CurSection[2] > ] > if MacroDict and Macro in MacroDict: > return MacroDict[Macro] > @@ -502,7 +455,7 @@ class FdfParser: > return GlobalData.gPlatformDefines[Macro] > return None > > - def __SectionHeaderParser(self, Section): > + def _SectionHeaderParser(self, Section): > # [Defines] > # [FD.UiName]: use dummy instead if UI name is optional > # [FV.UiName] > @@ -510,25 +463,25 @@ class FdfParser: > # [Rule]: don't take rule section into account, macro is not allowed in this section > # [VTF.arch.UiName, arch] > # [OptionRom.DriverName] > - self.__CurSection = [] > - Section = Section.strip()[1:-1].upper().replace(' ', '').strip('.') > - ItemList = Section.split('.') > + self._CurSection = [] > + Section = Section.strip()[1:-1].upper().replace(' ', '').strip(TAB_SPLIT) > + ItemList = Section.split(TAB_SPLIT) > Item = ItemList[0] > if Item == '' or Item == 'RULE': > return > > if Item == TAB_COMMON_DEFINES.upper(): > - self.__CurSection = [TAB_COMMON, TAB_COMMON, TAB_COMMON] > + self._CurSection = [TAB_COMMON, TAB_COMMON, TAB_COMMON] > elif Item == 'VTF' and len(ItemList) == 3: > UiName = ItemList[2] > - Pos = UiName.find(',') > + Pos = 
UiName.find(TAB_COMMA_SPLIT) > if Pos != -1: > UiName = UiName[:Pos] > - self.__CurSection = ['VTF', UiName, ItemList[1]] > + self._CurSection = ['VTF', UiName, ItemList[1]] > elif len(ItemList) > 1: > - self.__CurSection = [ItemList[0], ItemList[1], TAB_COMMON] > + self._CurSection = [ItemList[0], ItemList[1], TAB_COMMON] > elif len(ItemList) > 0: > - self.__CurSection = [ItemList[0], 'DUMMY', TAB_COMMON] > + self._CurSection = [ItemList[0], 'DUMMY', TAB_COMMON] > > ## PreprocessFile() method > # > @@ -540,7 +493,6 @@ class FdfParser: > # @param self The object pointer > # > def PreprocessFile(self): > - > self.Rewind() > InComment = False > DoubleSlashComment = False > @@ -548,12 +500,12 @@ class FdfParser: > # HashComment in quoted string " " is ignored. > InString = False > > - while not self.__EndOfFile(): > + while not self._EndOfFile(): > > - if self.__CurrentChar() == T_CHAR_DOUBLE_QUOTE and not InComment: > + if self._CurrentChar() == T_CHAR_DOUBLE_QUOTE and not InComment: > InString = not InString > # meet new line, then no longer in a comment for // and '#' > - if self.__CurrentChar() == T_CHAR_LF: > + if self._CurrentChar() == TAB_LINE_BREAK: > self.CurrentLineNumber += 1 > self.CurrentOffsetWithinLine = 0 > if InComment and DoubleSlashComment: > @@ -563,33 +515,33 @@ class FdfParser: > InComment = False > HashComment = False > # check for */ comment end > - elif InComment and not DoubleSlashComment and not HashComment and self.__CurrentChar() == T_CHAR_STAR and self.__NextChar() == T_CHAR_SLASH: > - self.__SetCurrentCharValue(T_CHAR_SPACE) > - self.__GetOneChar() > - self.__SetCurrentCharValue(T_CHAR_SPACE) > - self.__GetOneChar() > + elif InComment and not DoubleSlashComment and not HashComment and self._CurrentChar() == T_CHAR_STAR and self._NextChar() == TAB_BACK_SLASH: > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > + self._GetOneChar() > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > + self._GetOneChar() > InComment = False > # set comments to spaces > elif InComment: > - self.__SetCurrentCharValue(T_CHAR_SPACE) > - self.__GetOneChar() > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > + self._GetOneChar() > # check for // comment > - elif self.__CurrentChar() == T_CHAR_SLASH and self.__NextChar() == T_CHAR_SLASH and not self.__EndOfLine(): > + elif self._CurrentChar() == TAB_BACK_SLASH and self._NextChar() == TAB_BACK_SLASH and not self._EndOfLine(): > InComment = True > DoubleSlashComment = True > # check for '#' comment > - elif self.__CurrentChar() == T_CHAR_HASH and not self.__EndOfLine() and not InString: > + elif self._CurrentChar() == TAB_COMMENT_SPLIT and not self._EndOfLine() and not InString: > InComment = True > HashComment = True > # check for /* comment start > - elif self.__CurrentChar() == T_CHAR_SLASH and self.__NextChar() == T_CHAR_STAR: > - self.__SetCurrentCharValue( T_CHAR_SPACE) > - self.__GetOneChar() > - self.__SetCurrentCharValue( T_CHAR_SPACE) > - self.__GetOneChar() > + elif self._CurrentChar() == TAB_BACK_SLASH and self._NextChar() == T_CHAR_STAR: > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > + self._GetOneChar() > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > + self._GetOneChar() > InComment = True > else: > - self.__GetOneChar() > + self._GetOneChar() > > # restore from ListOfList to ListOfString > self.Profile.FileLinesList = ["".join(list) for list in self.Profile.FileLinesList] > @@ -606,30 +558,30 @@ class FdfParser: > # nested include support > Processed = False > MacroDict = {} > - while self.__GetNextToken(): > + while 
self._GetNextToken(): > > - if self.__Token == 'DEFINE': > - if not self.__GetNextToken(): > + if self._Token == TAB_DEFINE: > + if not self._GetNextToken(): > raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) > - Macro = self.__Token > - if not self.__IsToken( "="): > + Macro = self._Token > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - Value = self.__GetExpression() > + Value = self._GetExpression() > MacroDict[Macro] = Value > > - elif self.__Token == '!include': > + elif self._Token == TAB_INCLUDE: > Processed = True > IncludeLine = self.CurrentLineNumber > - IncludeOffset = self.CurrentOffsetWithinLine - len('!include') > - if not self.__GetNextToken(): > + IncludeOffset = self.CurrentOffsetWithinLine - len(TAB_INCLUDE) > + if not self._GetNextToken(): > raise Warning("expected include file name", self.FileName, self.CurrentLineNumber) > - IncFileName = self.__Token > + IncFileName = self._Token > PreIndex = 0 > StartPos = IncFileName.find('$(', PreIndex) > EndPos = IncFileName.find(')', StartPos+2) > while StartPos != -1 and EndPos != -1: > - Macro = IncFileName[StartPos+2 : EndPos] > - MacroVal = self.__GetMacroValue(Macro) > + Macro = IncFileName[StartPos+2: EndPos] > + MacroVal = self._GetMacroValue(Macro) > if not MacroVal: > if Macro in MacroDict: > MacroVal = MacroDict[Macro] > @@ -686,9 +638,9 @@ class FdfParser: > IncFileProfile.Level = ParentProfile.Level + 1 > IncFileProfile.InsertStartLineNumber = InsertAtLine + 1 > # deal with remaining portions after "!include filename", if exists. > - if self.__GetNextToken(): > + if self._GetNextToken(): > if self.CurrentLineNumber == CurrentLine: > - RemainingLine = self.__CurrentLine()[CurrentOffset:] > + RemainingLine = self._CurrentLine()[CurrentOffset:] > self.Profile.FileLinesList.insert(self.CurrentLineNumber, RemainingLine) > IncFileProfile.InsertAdjust += 1 > self.CurrentLineNumber += 1 > @@ -704,7 +656,7 @@ class FdfParser: > > # comment out the processed include file statement > TempList = list(self.Profile.FileLinesList[IncludeLine - 1]) > - TempList.insert(IncludeOffset, '#') > + TempList.insert(IncludeOffset, TAB_COMMENT_SPLIT) > self.Profile.FileLinesList[IncludeLine - 1] = ''.join(TempList) > if Processed: # Nested and back-to-back support > self.Rewind(DestLine = IncFileProfile.InsertStartLineNumber - 1) > @@ -713,7 +665,7 @@ class FdfParser: > self.Rewind() > > @staticmethod > - def __GetIfListCurrentItemStat(IfList): > + def _GetIfListCurrentItemStat(IfList): > if len(IfList) == 0: > return True > > @@ -735,29 +687,29 @@ class FdfParser: > IfList = [] > RegionLayoutLine = 0 > ReplacedLine = -1 > - while self.__GetNextToken(): > + while self._GetNextToken(): > # Determine section name and the location dependent macro > - if self.__GetIfListCurrentItemStat(IfList): > - if self.__Token.startswith('['): > - Header = self.__Token > - if not self.__Token.endswith(']'): > - self.__SkipToToken(']') > - Header += self.__SkippedChars > + if self._GetIfListCurrentItemStat(IfList): > + if self._Token.startswith(TAB_SECTION_START): > + Header = self._Token > + if not self._Token.endswith(TAB_SECTION_END): > + self._SkipToToken(TAB_SECTION_END) > + Header += self._SkippedChars > if Header.find('$(') != -1: > raise Warning("macro cannot be used in section header", self.FileName, self.CurrentLineNumber) > - self.__SectionHeaderParser(Header) > + self._SectionHeaderParser(Header) > continue > # Replace macros except in RULE section or out 
of section > - elif self.__CurSection and ReplacedLine != self.CurrentLineNumber: > + elif self._CurSection and ReplacedLine != self.CurrentLineNumber: > ReplacedLine = self.CurrentLineNumber > - self.__UndoToken() > + self._UndoToken() > CurLine = self.Profile.FileLinesList[ReplacedLine - 1] > PreIndex = 0 > StartPos = CurLine.find('$(', PreIndex) > EndPos = CurLine.find(')', StartPos+2) > - while StartPos != -1 and EndPos != -1 and self.__Token not in ['!ifdef', '!ifndef', '!if', '!elseif']: > - MacroName = CurLine[StartPos+2 : EndPos] > - MacorValue = self.__GetMacroValue(MacroName) > + while StartPos != -1 and EndPos != -1 and self._Token not in [TAB_IF_DEF, TAB_IF_N_DEF, TAB_IF, TAB_ELSE_IF]: > + MacroName = CurLine[StartPos+2: EndPos] > + MacorValue = self._GetMacroValue(MacroName) > if MacorValue is not None: > CurLine = CurLine.replace('$(' + MacroName + ')', MacorValue, 1) > if MacorValue.find('$(') != -1: > @@ -771,74 +723,74 @@ class FdfParser: > self.Profile.FileLinesList[ReplacedLine - 1] = CurLine > continue > > - if self.__Token == 'DEFINE': > - if self.__GetIfListCurrentItemStat(IfList): > - if not self.__CurSection: > + if self._Token == TAB_DEFINE: > + if self._GetIfListCurrentItemStat(IfList): > + if not self._CurSection: > raise Warning("macro cannot be defined in Rule section or out of section", self.FileName, self.CurrentLineNumber) > DefineLine = self.CurrentLineNumber - 1 > - DefineOffset = self.CurrentOffsetWithinLine - len('DEFINE') > - if not self.__GetNextToken(): > + DefineOffset = self.CurrentOffsetWithinLine - len(TAB_DEFINE) > + if not self._GetNextToken(): > raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) > - Macro = self.__Token > - if not self.__IsToken( "="): > + Macro = self._Token > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - Value = self.__GetExpression() > - self.__SetMacroValue(Macro, Value) > - self.__WipeOffArea.append(((DefineLine, DefineOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > - elif self.__Token == 'SET': > - if not self.__GetIfListCurrentItemStat(IfList): > + Value = self._GetExpression() > + self._SetMacroValue(Macro, Value) > + self._WipeOffArea.append(((DefineLine, DefineOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > + elif self._Token == 'SET': > + if not self._GetIfListCurrentItemStat(IfList): > continue > SetLine = self.CurrentLineNumber - 1 > SetOffset = self.CurrentOffsetWithinLine - len('SET') > - PcdPair = self.__GetNextPcdSettings() > + PcdPair = self._GetNextPcdSettings() > PcdName = "%s.%s" % (PcdPair[1], PcdPair[0]) > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - Value = self.__GetExpression() > - Value = self.__EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) > + Value = self._GetExpression() > + Value = self._EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) > > - self.__PcdDict[PcdName] = Value > + self._PcdDict[PcdName] = Value > > self.Profile.PcdDict[PcdPair] = Value > self.SetPcdLocalation(PcdPair) > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > self.Profile.PcdFileLineDict[PcdPair] = FileLineTuple > > - self.__WipeOffArea.append(((SetLine, SetOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > - elif self.__Token in ('!ifdef', '!ifndef', '!if'): > - IfStartPos = 
(self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self.__Token)) > + self._WipeOffArea.append(((SetLine, SetOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > + elif self._Token in (TAB_IF_DEF, TAB_IF_N_DEF, TAB_IF): > + IfStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self._Token)) > IfList.append([IfStartPos, None, None]) > > - CondLabel = self.__Token > - Expression = self.__GetExpression() > + CondLabel = self._Token > + Expression = self._GetExpression() > > - if CondLabel == '!if': > - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') > + if CondLabel == TAB_IF: > + ConditionSatisfied = self._EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') > else: > - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'in') > - if CondLabel == '!ifndef': > + ConditionSatisfied = self._EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'in') > + if CondLabel == TAB_IF_N_DEF: > ConditionSatisfied = not ConditionSatisfied > > BranchDetermined = ConditionSatisfied > IfList[-1] = [IfList[-1][0], ConditionSatisfied, BranchDetermined] > if ConditionSatisfied: > - self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > - elif self.__Token in ('!elseif', '!else'): > - ElseStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self.__Token)) > + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > + elif self._Token in (TAB_ELSE_IF, '!else'): > + ElseStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self._Token)) > if len(IfList) <= 0: > raise Warning("Missing !if statement", self.FileName, self.CurrentLineNumber) > > if IfList[-1][1]: > IfList[-1] = [ElseStartPos, False, True] > - self.__WipeOffArea.append((ElseStartPos, (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > + self._WipeOffArea.append((ElseStartPos, (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > else: > - self.__WipeOffArea.append((IfList[-1][0], ElseStartPos)) > + self._WipeOffArea.append((IfList[-1][0], ElseStartPos)) > IfList[-1] = [ElseStartPos, True, IfList[-1][2]] > - if self.__Token == '!elseif': > - Expression = self.__GetExpression() > - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') > + if self._Token == TAB_ELSE_IF: > + Expression = self._GetExpression() > + ConditionSatisfied = self._EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') > IfList[-1] = [IfList[-1][0], ConditionSatisfied, IfList[-1][2]] > > if IfList[-1][1]: > @@ -846,14 +798,14 @@ class FdfParser: > IfList[-1][1] = False > else: > IfList[-1][2] = True > - self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > - elif self.__Token == '!endif': > + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > + elif self._Token == '!endif': > if len(IfList) <= 0: > raise Warning("Missing !if statement", self.FileName, self.CurrentLineNumber) > if IfList[-1][1]: > - self.__WipeOffArea.append(((self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len('!endif')), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > + self._WipeOffArea.append(((self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len('!endif')), (self.CurrentLineNumber - 1, 
self.CurrentOffsetWithinLine - 1))) > else: > - self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > IfList.pop() > elif not IfList: # Don't use PCDs inside conditional directive > @@ -862,7 +814,7 @@ class FdfParser: > continue > SetPcd = ShortcutPcdPattern.match(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) > if SetPcd: > - self.__PcdDict[SetPcd.group('name')] = SetPcd.group('value') > + self._PcdDict[SetPcd.group('name')] = SetPcd.group('value') > RegionLayoutLine = self.CurrentLineNumber > continue > RegionSize = RegionSizePattern.match(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) > @@ -873,35 +825,35 @@ class FdfParser: > if not RegionSizeGuid: > RegionLayoutLine = self.CurrentLineNumber + 1 > continue > - self.__PcdDict[RegionSizeGuid.group('base')] = RegionSize.group('base') > - self.__PcdDict[RegionSizeGuid.group('size')] = RegionSize.group('size') > + self._PcdDict[RegionSizeGuid.group('base')] = RegionSize.group('base') > + self._PcdDict[RegionSizeGuid.group('size')] = RegionSize.group('size') > RegionLayoutLine = self.CurrentLineNumber + 1 > > if IfList: > raise Warning("Missing !endif", self.FileName, self.CurrentLineNumber) > self.Rewind() > > - def __CollectMacroPcd(self): > + def _CollectMacroPcd(self): > MacroDict = {} > > # PCD macro > MacroDict.update(GlobalData.gPlatformPcds) > - MacroDict.update(self.__PcdDict) > + MacroDict.update(self._PcdDict) > > # Lowest priority > MacroDict.update(GlobalData.gPlatformDefines) > > - if self.__CurSection: > + if self._CurSection: > # Defines macro > - ScopeMacro = self.__MacroDict[TAB_COMMON, TAB_COMMON, TAB_COMMON] > + ScopeMacro = self._MacroDict[TAB_COMMON, TAB_COMMON, TAB_COMMON] > if ScopeMacro: > MacroDict.update(ScopeMacro) > > # Section macro > - ScopeMacro = self.__MacroDict[ > - self.__CurSection[0], > - self.__CurSection[1], > - self.__CurSection[2] > + ScopeMacro = self._MacroDict[ > + self._CurSection[0], > + self._CurSection[1], > + self._CurSection[2] > ] > if ScopeMacro: > MacroDict.update(ScopeMacro) > @@ -912,15 +864,15 @@ class FdfParser: > for Item in GlobalData.BuildOptionPcd: > if isinstance(Item, tuple): > continue > - PcdName, TmpValue = Item.split("=") > + PcdName, TmpValue = Item.split(TAB_EQUAL_SPLIT) > TmpValue = BuildOptionValue(TmpValue, {}) > MacroDict[PcdName.strip()] = TmpValue > # Highest priority > > return MacroDict > > - def __EvaluateConditional(self, Expression, Line, Op = None, Value = None): > - MacroPcdDict = self.__CollectMacroPcd() > + def _EvaluateConditional(self, Expression, Line, Op = None, Value = None): > + MacroPcdDict = self._CollectMacroPcd() > if Op == 'eval': > try: > if Value: > @@ -933,7 +885,7 @@ class FdfParser: > # the precise number of line and return the evaluation result > # > EdkLogger.warn('Parser', "Suspicious expression: %s" % str(Excpt), > - File=self.FileName, ExtraData=self.__CurrentLine(), > + File=self.FileName, ExtraData=self._CurrentLine(), > Line=Line) > return Excpt.result > except Exception as Excpt: > @@ -955,10 +907,10 @@ class FdfParser: > Expression = Expression[2:-1] > return Expression in MacroPcdDict > > - ## __IsToken() method > + ## _IsToken() method > # > # Check whether input string is found from current char position along > - # If found, the string value is put into self.__Token > + # If found, the string value is put into self._Token > # > # @param self The 
object pointer > # @param String The string to search > @@ -966,26 +918,26 @@ class FdfParser: > # @retval True Successfully find string, file buffer pointer moved forward > # @retval False Not able to find string, file buffer pointer not changed > # > - def __IsToken(self, String, IgnoreCase = False): > - self.__SkipWhiteSpace() > + def _IsToken(self, String, IgnoreCase = False): > + self._SkipWhiteSpace() > > # Only consider the same line, no multi-line token allowed > StartPos = self.CurrentOffsetWithinLine > index = -1 > if IgnoreCase: > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(String.upper()) > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(String.upper()) > else: > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(String) > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(String) > if index == 0: > self.CurrentOffsetWithinLine += len(String) > - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] > + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] > return True > return False > > - ## __IsKeyword() method > + ## _IsKeyword() method > # > # Check whether input keyword is found from current char position along, whole word only! > - # If found, the string value is put into self.__Token > + # If found, the string value is put into self._Token > # > # @param self The object pointer > # @param Keyword The string to search > @@ -993,117 +945,117 @@ class FdfParser: > # @retval True Successfully find string, file buffer pointer moved forward > # @retval False Not able to find string, file buffer pointer not changed > # > - def __IsKeyword(self, KeyWord, IgnoreCase = False): > - self.__SkipWhiteSpace() > + def _IsKeyword(self, KeyWord, IgnoreCase = False): > + self._SkipWhiteSpace() > > # Only consider the same line, no multi-line token allowed > StartPos = self.CurrentOffsetWithinLine > index = -1 > if IgnoreCase: > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(KeyWord.upper()) > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(KeyWord.upper()) > else: > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(KeyWord) > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(KeyWord) > if index == 0: > - followingChar = self.__CurrentLine()[self.CurrentOffsetWithinLine + len(KeyWord)] > - if not str(followingChar).isspace() and followingChar not in SEPERATOR_TUPLE: > + followingChar = self._CurrentLine()[self.CurrentOffsetWithinLine + len(KeyWord)] > + if not str(followingChar).isspace() and followingChar not in SEPARATORS: > return False > self.CurrentOffsetWithinLine += len(KeyWord) > - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] > + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] > return True > return False > > - def __GetExpression(self): > + def _GetExpression(self): > Line = self.Profile.FileLinesList[self.CurrentLineNumber - 1] > Index = len(Line) - 1 > - while Line[Index] in ['\r', '\n']: > + while Line[Index] in [T_CHAR_CR, TAB_LINE_BREAK]: > Index -= 1 > ExpressionString = self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:Index+1] > self.CurrentOffsetWithinLine += len(ExpressionString) > ExpressionString = ExpressionString.strip() > return ExpressionString > > - ## __GetNextWord() method > + ## _GetNextWord() method > # > # Get next C name from file lines > - # 
If found, the string value is put into self.__Token > + # If found, the string value is put into self._Token > # > # @param self The object pointer > # @retval True Successfully find a C name string, file buffer pointer moved forward > # @retval False Not able to find a C name string, file buffer pointer not changed > # > - def __GetNextWord(self): > - self.__SkipWhiteSpace() > - if self.__EndOfFile(): > + def _GetNextWord(self): > + self._SkipWhiteSpace() > + if self._EndOfFile(): > return False > > - TempChar = self.__CurrentChar() > + TempChar = self._CurrentChar() > StartPos = self.CurrentOffsetWithinLine > if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_': > - self.__GetOneChar() > - while not self.__EndOfLine(): > - TempChar = self.__CurrentChar() > + self._GetOneChar() > + while not self._EndOfLine(): > + TempChar = self._CurrentChar() > if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') \ > or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-': > - self.__GetOneChar() > + self._GetOneChar() > > else: > break > > - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] > + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] > return True > > return False > > - def __GetNextPcdWord(self): > - self.__SkipWhiteSpace() > - if self.__EndOfFile(): > + def _GetNextPcdWord(self): > + self._SkipWhiteSpace() > + if self._EndOfFile(): > return False > > - TempChar = self.__CurrentChar() > + TempChar = self._CurrentChar() > StartPos = self.CurrentOffsetWithinLine > - if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_' or TempChar == '[' or TempChar == ']': > - self.__GetOneChar() > - while not self.__EndOfLine(): > - TempChar = self.__CurrentChar() > + if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_' or TempChar == TAB_SECTION_START or TempChar == TAB_SECTION_END: > + self._GetOneChar() > + while not self._EndOfLine(): > + TempChar = self._CurrentChar() > if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') \ > - or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-' or TempChar == '[' or TempChar == ']': > - self.__GetOneChar() > + or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-' or TempChar == TAB_SECTION_START or TempChar == TAB_SECTION_END: > + self._GetOneChar() > > else: > break > > - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] > + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] > return True > > return False > > - ## __GetNextToken() method > + ## _GetNextToken() method > # > # Get next token unit before a seperator > - # If found, the string value is put into self.__Token > + # If found, the string value is put into self._Token > # > # @param self The object pointer > # @retval True Successfully find a token unit, file buffer pointer moved forward > # @retval False Not able to find a token unit, file buffer pointer not changed > # > - def __GetNextToken(self): > + def _GetNextToken(self): > # Skip leading spaces, if exist. > - self.__SkipWhiteSpace() > - if self.__EndOfFile(): > + self._SkipWhiteSpace() > + if self._EndOfFile(): > return False > # Record the token start position, the position of the first non-space char. 
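[Editor's note, not part of the quoted patch; the hunk resumes immediately below.] The renames in these hunks (for example __GetNextToken -> _GetNextToken) change more than spelling: a double leading underscore triggers Python's class-private name mangling, while a single leading underscore is only an "internal use" convention. A minimal sketch of that difference follows; the class Demo and its methods are hypothetical names used purely for illustration, not code from the patch.

    class Demo:
        def __Hidden(self):      # double underscore: name-mangled by Python to _Demo__Hidden
            return "mangled"
        def _Internal(self):     # single underscore: convention only, no mangling
            return "plain"

    d = Demo()
    print(d._Internal())         # accessible under the name as written
    print(d._Demo__Hidden())     # callers/subclasses must spell out the mangled name
    # d.__Hidden()               # AttributeError: Demo has no attribute '__Hidden'

Dropping the second underscore, as this series does, leaves behaviour inside the class unchanged while making the helpers reachable under their written names, which is friendlier to subclassing and testing.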
> StartPos = self.CurrentOffsetWithinLine > StartLine = self.CurrentLineNumber > while StartLine == self.CurrentLineNumber: > - TempChar = self.__CurrentChar() > + TempChar = self._CurrentChar() > # Try to find the end char that is not a space and not in seperator tuple. > # That is, when we got a space or any char in the tuple, we got the end of token. > - if not str(TempChar).isspace() and TempChar not in SEPERATOR_TUPLE: > - self.__GetOneChar() > + if not str(TempChar).isspace() and TempChar not in SEPARATORS: > + self._GetOneChar() > # if we happen to meet a seperator as the first char, we must proceed to get it. > # That is, we get a token that is a seperator char. nomally it is the boundary of other tokens. > - elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPERATOR_TUPLE: > - self.__GetOneChar() > + elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPARATORS: > + self._GetOneChar() > break > else: > break > @@ -1113,35 +1065,34 @@ class FdfParser: > EndPos = self.CurrentOffsetWithinLine > if self.CurrentLineNumber != StartLine: > EndPos = len(self.Profile.FileLinesList[StartLine-1]) > - self.__Token = self.Profile.FileLinesList[StartLine-1][StartPos : EndPos] > - if self.__Token.lower() in [TAB_IF, TAB_END_IF, TAB_ELSE_IF, TAB_ELSE, TAB_IF_DEF, TAB_IF_N_DEF, TAB_ERROR, TAB_INCLUDE]: > - self.__Token = self.__Token.lower() > + self._Token = self.Profile.FileLinesList[StartLine-1][StartPos: EndPos] > + if self._Token.lower() in [TAB_IF, TAB_END_IF, TAB_ELSE_IF, TAB_ELSE, TAB_IF_DEF, TAB_IF_N_DEF, TAB_ERROR, TAB_INCLUDE]: > + self._Token = self._Token.lower() > if StartPos != self.CurrentOffsetWithinLine: > return True > else: > return False > > - ## __GetNextGuid() method > + ## _GetNextGuid() method > # > # Get next token unit before a seperator > - # If found, the GUID string is put into self.__Token > + # If found, the GUID string is put into self._Token > # > # @param self The object pointer > # @retval True Successfully find a registry format GUID, file buffer pointer moved forward > # @retval False Not able to find a registry format GUID, file buffer pointer not changed > # > - def __GetNextGuid(self): > - > - if not self.__GetNextToken(): > + def _GetNextGuid(self): > + if not self._GetNextToken(): > return False > - if gGuidPattern.match(self.__Token) is not None: > + if gGuidPattern.match(self._Token) is not None: > return True > else: > - self.__UndoToken() > + self._UndoToken() > return False > > @staticmethod > - def __Verify(Name, Value, Scope): > + def _Verify(Name, Value, Scope): > # value verification only applies to numeric values. > if Scope not in TAB_PCD_NUMERIC_TYPES: > return > @@ -1157,17 +1108,17 @@ class FdfParser: > EdkLogger.error("FdfParser", FORMAT_INVALID, "Too large value for %s." % Name) > return True > > - ## __UndoToken() method > + ## _UndoToken() method > # > # Go back one token unit in file buffer > # > # @param self The object pointer > # > - def __UndoToken(self): > - self.__UndoOneChar() > - while self.__CurrentChar().isspace(): > - if not self.__UndoOneChar(): > - self.__GetOneChar() > + def _UndoToken(self): > + self._UndoOneChar() > + while self._CurrentChar().isspace(): > + if not self._UndoOneChar(): > + self._GetOneChar() > return > > > @@ -1175,118 +1126,118 @@ class FdfParser: > CurrentLine = self.CurrentLineNumber > while CurrentLine == self.CurrentLineNumber: > > - TempChar = self.__CurrentChar() > + TempChar = self._CurrentChar() > # Try to find the end char that is not a space and not in seperator tuple. 
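[Editor's note, not part of the quoted patch; the hunk resumes immediately below.] _GetNextGuid() in the hunk above accepts a token only when gGuidPattern matches it; the pattern itself is defined elsewhere in the BaseTools sources and is not shown in this diff. The sketch below is only an illustration of registry-format GUID matching under that assumption; the regex and the name GUID_ILLUSTRATION are the editor's, not the patch's.

    import re

    # Hypothetical stand-in for gGuidPattern: registry format, 8-4-4-4-12 hex digits.
    GUID_ILLUSTRATION = re.compile(
        r'^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-'
        r'[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$')

    print(bool(GUID_ILLUSTRATION.match('8C8CE578-8A3D-4F1C-9935-896185C32DD3')))  # True
    print(bool(GUID_ILLUSTRATION.match('not-a-guid')))                            # False

Note that on a failed match the parser calls _UndoToken() and returns False rather than raising, so the rejected token can be re-read by whichever grammar rule is tried next.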
> # That is, when we got a space or any char in the tuple, we got the end of token. > - if not str(TempChar).isspace() and not TempChar in SEPERATOR_TUPLE: > - if not self.__UndoOneChar(): > + if not str(TempChar).isspace() and not TempChar in SEPARATORS: > + if not self._UndoOneChar(): > return > # if we happen to meet a seperator as the first char, we must proceed to get it. > # That is, we get a token that is a seperator char. nomally it is the boundary of other tokens. > - elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPERATOR_TUPLE: > + elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPARATORS: > return > else: > break > > - self.__GetOneChar() > + self._GetOneChar() > > - ## __GetNextHexNumber() method > + ## _GetNextHexNumber() method > # > # Get next HEX data before a seperator > - # If found, the HEX data is put into self.__Token > + # If found, the HEX data is put into self._Token > # > # @param self The object pointer > # @retval True Successfully find a HEX data, file buffer pointer moved forward > # @retval False Not able to find a HEX data, file buffer pointer not changed > # > - def __GetNextHexNumber(self): > - if not self.__GetNextToken(): > + def _GetNextHexNumber(self): > + if not self._GetNextToken(): > return False > - if gHexPatternAll.match(self.__Token): > + if gHexPatternAll.match(self._Token): > return True > else: > - self.__UndoToken() > + self._UndoToken() > return False > > - ## __GetNextDecimalNumber() method > + ## _GetNextDecimalNumber() method > # > # Get next decimal data before a seperator > - # If found, the decimal data is put into self.__Token > + # If found, the decimal data is put into self._Token > # > # @param self The object pointer > # @retval True Successfully find a decimal data, file buffer pointer moved forward > # @retval False Not able to find a decimal data, file buffer pointer not changed > # > - def __GetNextDecimalNumber(self): > - if not self.__GetNextToken(): > + def _GetNextDecimalNumber(self): > + if not self._GetNextToken(): > return False > - if self.__Token.isdigit(): > + if self._Token.isdigit(): > return True > else: > - self.__UndoToken() > + self._UndoToken() > return False > > - def __GetNextPcdSettings(self): > - if not self.__GetNextWord(): > + def _GetNextPcdSettings(self): > + if not self._GetNextWord(): > raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) > - pcdTokenSpaceCName = self.__Token > + pcdTokenSpaceCName = self._Token > > - if not self.__IsToken( "."): > + if not self._IsToken(TAB_SPLIT): > raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextWord(): > + if not self._GetNextWord(): > raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) > - pcdCName = self.__Token > + pcdCName = self._Token > > Fields = [] > - while self.__IsToken("."): > - if not self.__GetNextPcdWord(): > + while self._IsToken(TAB_SPLIT): > + if not self._GetNextPcdWord(): > raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) > - Fields.append(self.__Token) > + Fields.append(self._Token) > > - return (pcdCName, pcdTokenSpaceCName,".".join(Fields)) > + return (pcdCName, pcdTokenSpaceCName,TAB_SPLIT.join(Fields)) > > - ## __GetStringData() method > + ## _GetStringData() method > # > # Get string contents quoted in "" > - # If found, the decimal data is put into 
self.__Token > + # If found, the decimal data is put into self._Token > # > # @param self The object pointer > # @retval True Successfully find a string data, file buffer pointer moved forward > # @retval False Not able to find a string data, file buffer pointer not changed > # > - def __GetStringData(self): > - if self.__Token.startswith("\"") or self.__Token.startswith("L\""): > - self.__UndoToken() > - self.__SkipToToken("\"") > + def _GetStringData(self): > + if self._Token.startswith(T_CHAR_DOUBLE_QUOTE) or self._Token.startswith("L\""): > + self._UndoToken() > + self._SkipToToken(T_CHAR_DOUBLE_QUOTE) > currentLineNumber = self.CurrentLineNumber > > - if not self.__SkipToToken("\""): > + if not self._SkipToToken(T_CHAR_DOUBLE_QUOTE): > raise Warning("Missing Quote \" for String", self.FileName, self.CurrentLineNumber) > if currentLineNumber != self.CurrentLineNumber: > raise Warning("Missing Quote \" for String", self.FileName, self.CurrentLineNumber) > - self.__Token = self.__SkippedChars.rstrip('\"') > + self._Token = self._SkippedChars.rstrip(T_CHAR_DOUBLE_QUOTE) > return True > > - elif self.__Token.startswith("\'") or self.__Token.startswith("L\'"): > - self.__UndoToken() > - self.__SkipToToken("\'") > + elif self._Token.startswith(T_CHAR_SINGLE_QUOTE) or self._Token.startswith("L\'"): > + self._UndoToken() > + self._SkipToToken(T_CHAR_SINGLE_QUOTE) > currentLineNumber = self.CurrentLineNumber > > - if not self.__SkipToToken("\'"): > + if not self._SkipToToken(T_CHAR_SINGLE_QUOTE): > raise Warning("Missing Quote \' for String", self.FileName, self.CurrentLineNumber) > if currentLineNumber != self.CurrentLineNumber: > raise Warning("Missing Quote \' for String", self.FileName, self.CurrentLineNumber) > - self.__Token = self.__SkippedChars.rstrip('\'') > + self._Token = self._SkippedChars.rstrip(T_CHAR_SINGLE_QUOTE) > return True > > else: > return False > > - ## __SkipToToken() method > + ## _SkipToToken() method > # > # Search forward in file buffer for the string > - # The skipped chars are put into self.__SkippedChars > + # The skipped chars are put into self._SkippedChars > # > # @param self The object pointer > # @param String The string to search > @@ -1294,25 +1245,25 @@ class FdfParser: > # @retval True Successfully find the string, file buffer pointer moved forward > # @retval False Not able to find the string, file buffer pointer not changed > # > - def __SkipToToken(self, String, IgnoreCase = False): > + def _SkipToToken(self, String, IgnoreCase = False): > StartPos = self.GetFileBufferPos() > > - self.__SkippedChars = "" > - while not self.__EndOfFile(): > + self._SkippedChars = "" > + while not self._EndOfFile(): > index = -1 > if IgnoreCase: > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(String.upper()) > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(String.upper()) > else: > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(String) > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(String) > if index == 0: > self.CurrentOffsetWithinLine += len(String) > - self.__SkippedChars += String > + self._SkippedChars += String > return True > - self.__SkippedChars += str(self.__CurrentChar()) > - self.__GetOneChar() > + self._SkippedChars += str(self._CurrentChar()) > + self._GetOneChar() > > - self.SetFileBufferPos( StartPos) > - self.__SkippedChars = "" > + self.SetFileBufferPos(StartPos) > + self._SkippedChars = "" > return False > > ## GetFileBufferPos() method > @@ -1343,18 
+1294,18 @@ class FdfParser: > # @param self The object pointer > # > def Preprocess(self): > - self.__StringToList() > + self._StringToList() > self.PreprocessFile() > self.PreprocessIncludeFile() > - self.__StringToList() > + self._StringToList() > self.PreprocessFile() > self.PreprocessConditionalStatement() > - self.__StringToList() > - for Pos in self.__WipeOffArea: > - self.__ReplaceFragment(Pos[0], Pos[1]) > + self._StringToList() > + for Pos in self._WipeOffArea: > + self._ReplaceFragment(Pos[0], Pos[1]) > self.Profile.FileLinesList = ["".join(list) for list in self.Profile.FileLinesList] > > - while self.__GetDefines(): > + while self._GetDefines(): > pass > > ## ParseFile() method > @@ -1365,19 +1316,18 @@ class FdfParser: > # @param self The object pointer > # > def ParseFile(self): > - > try: > self.Preprocess() > - self.__GetError() > + self._GetError() > # > # Keep processing sections of the FDF until no new sections or a syntax error is found > # > - while self.__GetFd() or self.__GetFv() or self.__GetFmp() or self.__GetCapsule() or self.__GetVtf() or self.__GetRule() or self.__GetOptionRom(): > + while self._GetFd() or self._GetFv() or self._GetFmp() or self._GetCapsule() or self._GetVtf() or self._GetRule() or self._GetOptionRom(): > pass > > except Warning as X: > - self.__UndoToken() > - #'\n\tGot Token: \"%s\" from File %s\n' % (self.__Token, FileLineTuple[0]) + \ > + self._UndoToken() > + #'\n\tGot Token: \"%s\" from File %s\n' % (self._Token, FileLineTuple[0]) + \ > # At this point, the closest parent would be the included file itself > Profile = GetParentAtLine(X.OriginalLineNumber) > if Profile is not None: > @@ -1386,7 +1336,7 @@ class FdfParser: > else: > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > X.Message += ' near line %d, column %d: %s' \ > - % (FileLineTuple[1], self.CurrentOffsetWithinLine + 1, self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :].rstrip('\n').rstrip('\r')) > + % (FileLineTuple[1], self.CurrentOffsetWithinLine + 1, self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:].rstrip(TAB_LINE_BREAK).rstrip(T_CHAR_CR)) > raise > > ## SectionParser() method > @@ -1403,7 +1353,7 @@ class FdfParser: > and not S.startswith("[VTF.") and not S.startswith("[RULE.") and not S.startswith("[OPTIONROM.") and not S.startswith('[FMPPAYLOAD.'): > raise Warning("Unknown section or section appear sequence error (The correct sequence should be [DEFINES], [FD.], [FV.], [Capsule.], [VTF.], [Rule.], [OptionRom.], [FMPPAYLOAD.])", self.FileName, self.CurrentLineNumber) > > - ## __GetDefines() method > + ## _GetDefines() method > # > # Get Defines section contents and store its data into AllMacrosList > # > @@ -1411,56 +1361,55 @@ class FdfParser: > # @retval True Successfully find a Defines > # @retval False Not able to find a Defines > # > - def __GetDefines(self): > - > - if not self.__GetNextToken(): > + def _GetDefines(self): > + if not self._GetNextToken(): > return False > > - S = self.__Token.upper() > - if S.startswith("[") and not S.startswith("[DEFINES"): > + S = self._Token.upper() > + if S.startswith(TAB_SECTION_START) and not S.startswith("[DEFINES"): > self.SectionParser(S) > - self.__UndoToken() > + self._UndoToken() > return False > > - self.__UndoToken() > - if not self.__IsToken("[DEFINES", True): > + self._UndoToken() > + if not self._IsToken("[DEFINES", True): > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > #print 'Parsing String: 
%s in File %s, At line: %d, Offset Within Line: %d' \ > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) > raise Warning("expected [DEFINES", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken( "]"): > + if not self._IsToken(TAB_SECTION_END): > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > - while self.__GetNextWord(): > + while self._GetNextWord(): > # handle the SET statement > - if self.__Token == 'SET': > - self.__UndoToken() > - self.__GetSetStatement(None) > + if self._Token == 'SET': > + self._UndoToken() > + self._GetSetStatement(None) > continue > > - Macro = self.__Token > + Macro = self._Token > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextToken() or self.__Token.startswith('['): > + if not self._GetNextToken() or self._Token.startswith(TAB_SECTION_START): > raise Warning("expected MACRO value", self.FileName, self.CurrentLineNumber) > - Value = self.__Token > + Value = self._Token > > return False > > - ##__GetError() method > - def __GetError(self): > + ##_GetError() method > + def _GetError(self): > #save the Current information > CurrentLine = self.CurrentLineNumber > CurrentOffset = self.CurrentOffsetWithinLine > - while self.__GetNextToken(): > - if self.__Token == TAB_ERROR: > - EdkLogger.error('FdfParser', ERROR_STATEMENT, self.__CurrentLine().replace(TAB_ERROR, '', 1), File=self.FileName, Line=self.CurrentLineNumber) > + while self._GetNextToken(): > + if self._Token == TAB_ERROR: > + EdkLogger.error('FdfParser', ERROR_STATEMENT, self._CurrentLine().replace(TAB_ERROR, '', 1), File=self.FileName, Line=self.CurrentLineNumber) > self.CurrentLineNumber = CurrentLine > self.CurrentOffsetWithinLine = CurrentOffset > > - ## __GetFd() method > + ## _GetFd() method > # > # Get FD section contents and store its data into FD dictionary of self.Profile > # > @@ -1468,27 +1417,26 @@ class FdfParser: > # @retval True Successfully find a FD > # @retval False Not able to find a FD > # > - def __GetFd(self): > - > - if not self.__GetNextToken(): > + def _GetFd(self): > + if not self._GetNextToken(): > return False > > - S = self.__Token.upper() > - if S.startswith("[") and not S.startswith("[FD."): > + S = self._Token.upper() > + if S.startswith(TAB_SECTION_START) and not S.startswith("[FD."): > if not S.startswith("[FV.") and not S.startswith('[FMPPAYLOAD.') and not S.startswith("[CAPSULE.") \ > and not S.startswith("[VTF.") and not S.startswith("[RULE.") and not S.startswith("[OPTIONROM."): > raise Warning("Unknown section", self.FileName, self.CurrentLineNumber) > - self.__UndoToken() > + self._UndoToken() > return False > > - self.__UndoToken() > - if not self.__IsToken("[FD.", True): > + self._UndoToken() > + if not self._IsToken("[FD.", True): > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 
1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) > raise Warning("expected [FD.]", self.FileName, self.CurrentLineNumber) > > - FdName = self.__GetUiName() > + FdName = self._GetUiName() > if FdName == "": > if len (self.Profile.FdDict) == 0: > FdName = GenFdsGlobalVariable.PlatformName > @@ -1502,56 +1450,56 @@ class FdfParser: > if self.CurrentFdName in self.Profile.FdDict: > raise Warning("Unexpected the same FD name", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken( "]"): > + if not self._IsToken(TAB_SECTION_END): > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > - FdObj = Fd.FD() > + FdObj = FD() > FdObj.FdUiName = self.CurrentFdName > self.Profile.FdDict[self.CurrentFdName] = FdObj > > if len (self.Profile.FdDict) > 1 and self.Profile.FdNameNotSet: > raise Warning("expected all FDs have their name", self.FileName, self.CurrentLineNumber) > > - Status = self.__GetCreateFile(FdObj) > + Status = self._GetCreateFile(FdObj) > if not Status: > raise Warning("FD name error", self.FileName, self.CurrentLineNumber) > > - while self.__GetTokenStatements(FdObj): > + while self._GetTokenStatements(FdObj): > pass > for Attr in ("BaseAddress", "Size", "ErasePolarity"): > if getattr(FdObj, Attr) is None: > - self.__GetNextToken() > + self._GetNextToken() > raise Warning("Keyword %s missing" % Attr, self.FileName, self.CurrentLineNumber) > > if not FdObj.BlockSizeList: > FdObj.BlockSizeList.append((1, FdObj.Size, None)) > > - self.__GetDefineStatements(FdObj) > + self._GetDefineStatements(FdObj) > > - self.__GetSetStatements(FdObj) > + self._GetSetStatements(FdObj) > > - if not self.__GetRegionLayout(FdObj): > + if not self._GetRegionLayout(FdObj): > raise Warning("expected region layout", self.FileName, self.CurrentLineNumber) > > - while self.__GetRegionLayout(FdObj): > + while self._GetRegionLayout(FdObj): > pass > return True > > - ## __GetUiName() method > + ## _GetUiName() method > # > # Return the UI name of a section > # > # @param self The object pointer > # @retval FdName UI name > # > - def __GetUiName(self): > + def _GetUiName(self): > Name = "" > - if self.__GetNextWord(): > - Name = self.__Token > + if self._GetNextWord(): > + Name = self._Token > > return Name > > - ## __GetCreateFile() method > + ## _GetCreateFile() method > # > # Return the output file name of object > # > @@ -1559,16 +1507,15 @@ class FdfParser: > # @param Obj object whose data will be stored in file > # @retval FdName UI name > # > - def __GetCreateFile(self, Obj): > - > - if self.__IsKeyword( "CREATE_FILE"): > - if not self.__IsToken( "="): > + def _GetCreateFile(self, Obj): > + if self._IsKeyword("CREATE_FILE"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected file name", self.FileName, self.CurrentLineNumber) > > - FileName = self.__Token > + FileName = self._Token > Obj.CreateFileName = FileName > > return True > @@ -1576,25 +1523,25 @@ class FdfParser: > def SetPcdLocalation(self,pcdpair): > self.Profile.PcdLocalDict[pcdpair] = (self.Profile.FileName,self.CurrentLineNumber) > > - ## __GetTokenStatements() method > + ## _GetTokenStatements() method > # > # Get token statements > # > # @param self The object pointer > # @param Obj for whom token statement is got > # > - def __GetTokenStatements(self, Obj): > - if self.__IsKeyword( "BaseAddress"): > - if not 
self.__IsToken( "="): > + def _GetTokenStatements(self, Obj): > + if self._IsKeyword("BaseAddress"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextHexNumber(): > + if not self._GetNextHexNumber(): > raise Warning("expected Hex base address", self.FileName, self.CurrentLineNumber) > > - Obj.BaseAddress = self.__Token > + Obj.BaseAddress = self._Token > > - if self.__IsToken( "|"): > - pcdPair = self.__GetNextPcdSettings() > + if self._IsToken(TAB_VALUE_SPLIT): > + pcdPair = self._GetNextPcdSettings() > Obj.BaseAddressPcd = pcdPair > self.Profile.PcdDict[pcdPair] = Obj.BaseAddress > self.SetPcdLocalation(pcdPair) > @@ -1602,16 +1549,16 @@ class FdfParser: > self.Profile.PcdFileLineDict[pcdPair] = FileLineTuple > return True > > - if self.__IsKeyword( "Size"): > - if not self.__IsToken( "="): > + if self._IsKeyword("Size"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextHexNumber(): > + if not self._GetNextHexNumber(): > raise Warning("expected Hex size", self.FileName, self.CurrentLineNumber) > > - Size = self.__Token > - if self.__IsToken( "|"): > - pcdPair = self.__GetNextPcdSettings() > + Size = self._Token > + if self._IsToken(TAB_VALUE_SPLIT): > + pcdPair = self._GetNextPcdSettings() > Obj.SizePcd = pcdPair > self.Profile.PcdDict[pcdPair] = Size > self.SetPcdLocalation(pcdPair) > @@ -1620,22 +1567,22 @@ class FdfParser: > Obj.Size = long(Size, 0) > return True > > - if self.__IsKeyword( "ErasePolarity"): > - if not self.__IsToken( "="): > + if self._IsKeyword("ErasePolarity"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected Erase Polarity", self.FileName, self.CurrentLineNumber) > > - if self.__Token != "1" and self.__Token != "0": > + if self._Token != "1" and self._Token != "0": > raise Warning("expected 1 or 0 Erase Polarity", self.FileName, self.CurrentLineNumber) > > - Obj.ErasePolarity = self.__Token > + Obj.ErasePolarity = self._Token > return True > > - return self.__GetBlockStatements(Obj) > + return self._GetBlockStatements(Obj) > > - ## __GetAddressStatements() method > + ## _GetAddressStatements() method > # > # Get address statements > # > @@ -1644,38 +1591,37 @@ class FdfParser: > # @retval True Successfully find > # @retval False Not able to find > # > - def __GetAddressStatements(self, Obj): > - > - if self.__IsKeyword("BsBaseAddress"): > - if not self.__IsToken( "="): > + def _GetAddressStatements(self, Obj): > + if self._IsKeyword("BsBaseAddress"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): > + if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): > raise Warning("expected address", self.FileName, self.CurrentLineNumber) > > - BsAddress = long(self.__Token, 0) > + BsAddress = long(self._Token, 0) > Obj.BsBaseAddress = BsAddress > > - if self.__IsKeyword("RtBaseAddress"): > - if not self.__IsToken( "="): > + if self._IsKeyword("RtBaseAddress"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): > + if not 
self._GetNextDecimalNumber() and not self._GetNextHexNumber(): > raise Warning("expected address", self.FileName, self.CurrentLineNumber) > > - RtAddress = long(self.__Token, 0) > + RtAddress = long(self._Token, 0) > Obj.RtBaseAddress = RtAddress > > - ## __GetBlockStatements() method > + ## _GetBlockStatements() method > # > # Get block statements > # > # @param self The object pointer > # @param Obj for whom block statement is got > # > - def __GetBlockStatements(self, Obj): > + def _GetBlockStatements(self, Obj): > IsBlock = False > - while self.__GetBlockStatement(Obj): > + while self._GetBlockStatement(Obj): > IsBlock = True > > Item = Obj.BlockSizeList[-1] > @@ -1683,7 +1629,7 @@ class FdfParser: > raise Warning("expected block statement", self.FileName, self.CurrentLineNumber) > return IsBlock > > - ## __GetBlockStatement() method > + ## _GetBlockStatement() method > # > # Get block statement > # > @@ -1692,20 +1638,20 @@ class FdfParser: > # @retval True Successfully find > # @retval False Not able to find > # > - def __GetBlockStatement(self, Obj): > - if not self.__IsKeyword( "BlockSize"): > + def _GetBlockStatement(self, Obj): > + if not self._IsKeyword("BlockSize"): > return False > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextHexNumber() and not self.__GetNextDecimalNumber(): > + if not self._GetNextHexNumber() and not self._GetNextDecimalNumber(): > raise Warning("expected Hex or Integer block size", self.FileName, self.CurrentLineNumber) > > - BlockSize = self.__Token > + BlockSize = self._Token > BlockSizePcd = None > - if self.__IsToken( "|"): > - PcdPair = self.__GetNextPcdSettings() > + if self._IsToken(TAB_VALUE_SPLIT): > + PcdPair = self._GetNextPcdSettings() > BlockSizePcd = PcdPair > self.Profile.PcdDict[PcdPair] = BlockSize > self.SetPcdLocalation(PcdPair) > @@ -1714,19 +1660,19 @@ class FdfParser: > BlockSize = long(BlockSize, 0) > > BlockNumber = None > - if self.__IsKeyword( "NumBlocks"): > - if not self.__IsToken( "="): > + if self._IsKeyword("NumBlocks"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): > + if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): > raise Warning("expected block numbers", self.FileName, self.CurrentLineNumber) > > - BlockNumber = long(self.__Token, 0) > + BlockNumber = long(self._Token, 0) > > Obj.BlockSizeList.append((BlockSize, BlockNumber, BlockSizePcd)) > return True > > - ## __GetDefineStatements() method > + ## _GetDefineStatements() method > # > # Get define statements > # > @@ -1735,11 +1681,11 @@ class FdfParser: > # @retval True Successfully find > # @retval False Not able to find > # > - def __GetDefineStatements(self, Obj): > - while self.__GetDefineStatement( Obj): > + def _GetDefineStatements(self, Obj): > + while self._GetDefineStatement(Obj): > pass > > - ## __GetDefineStatement() method > + ## _GetDefineStatement() method > # > # Get define statement > # > @@ -1748,24 +1694,24 @@ class FdfParser: > # @retval True Successfully find > # @retval False Not able to find > # > - def __GetDefineStatement(self, Obj): > - if self.__IsKeyword("DEFINE"): > - self.__GetNextToken() > - Macro = self.__Token > - if not self.__IsToken( "="): > + def _GetDefineStatement(self, Obj): > + if self._IsKeyword(TAB_DEFINE): > + self._GetNextToken() > + 
Macro = self._Token > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected value", self.FileName, self.CurrentLineNumber) > > - Value = self.__Token > + Value = self._Token > Macro = '$(' + Macro + ')' > Obj.DefineVarDict[Macro] = Value > return True > > return False > > - ## __GetSetStatements() method > + ## _GetSetStatements() method > # > # Get set statements > # > @@ -1774,11 +1720,11 @@ class FdfParser: > # @retval True Successfully find > # @retval False Not able to find > # > - def __GetSetStatements(self, Obj): > - while self.__GetSetStatement(Obj): > + def _GetSetStatements(self, Obj): > + while self._GetSetStatement(Obj): > pass > > - ## __GetSetStatement() method > + ## _GetSetStatement() method > # > # Get set statement > # > @@ -1787,15 +1733,15 @@ class FdfParser: > # @retval True Successfully find > # @retval False Not able to find > # > - def __GetSetStatement(self, Obj): > - if self.__IsKeyword("SET"): > - PcdPair = self.__GetNextPcdSettings() > + def _GetSetStatement(self, Obj): > + if self._IsKeyword("SET"): > + PcdPair = self._GetNextPcdSettings() > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - Value = self.__GetExpression() > - Value = self.__EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) > + Value = self._GetExpression() > + Value = self._EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) > > if Obj: > Obj.SetVarDict[PcdPair] = Value > @@ -1807,19 +1753,19 @@ class FdfParser: > > return False > > - ## __CalcRegionExpr(self) > + ## _CalcRegionExpr(self) > # > # Calculate expression for offset or size of a region > # > # @return: None if invalid expression > # Calculated number if successfully > # > - def __CalcRegionExpr(self): > + def _CalcRegionExpr(self): > StartPos = self.GetFileBufferPos() > Expr = '' > PairCount = 0 > - while not self.__EndOfFile(): > - CurCh = self.__CurrentChar() > + while not self._EndOfFile(): > + CurCh = self._CurrentChar() > if CurCh == '(': > PairCount += 1 > elif CurCh == ')': > @@ -1828,106 +1774,106 @@ class FdfParser: > if CurCh in '|\r\n' and PairCount == 0: > break > Expr += CurCh > - self.__GetOneChar() > + self._GetOneChar() > try: > return long( > ValueExpression(Expr, > - self.__CollectMacroPcd() > + self._CollectMacroPcd() > )(True), 0) > except Exception: > self.SetFileBufferPos(StartPos) > return None > > - ## __GetRegionLayout() method > + ## _GetRegionLayout() method > # > # Get region layout for FD > # > # @param self The object pointer > - # @param Fd for whom region is got > + # @param theFd for whom region is got > # @retval True Successfully find > # @retval False Not able to find > # > - def __GetRegionLayout(self, Fd): > - Offset = self.__CalcRegionExpr() > + def _GetRegionLayout(self, theFd): > + Offset = self._CalcRegionExpr() > if Offset is None: > return False > > - RegionObj = Region.Region() > + RegionObj = Region() > RegionObj.Offset = Offset > - Fd.RegionList.append(RegionObj) > + theFd.RegionList.append(RegionObj) > > - if not self.__IsToken( "|"): > + if not self._IsToken(TAB_VALUE_SPLIT): > raise Warning("expected '|'", self.FileName, self.CurrentLineNumber) > > - Size = self.__CalcRegionExpr() > + Size = self._CalcRegionExpr() > if Size is None: > raise Warning("expected Region Size", self.FileName, 
self.CurrentLineNumber) > RegionObj.Size = Size > > - if not self.__GetNextWord(): > + if not self._GetNextWord(): > return True > > - if not self.__Token in ("SET", BINARY_FILE_TYPE_FV, "FILE", "DATA", "CAPSULE", "INF"): > + if not self._Token in ("SET", BINARY_FILE_TYPE_FV, "FILE", "DATA", "CAPSULE", "INF"): > # > # If next token is a word which is not a valid FV type, it might be part of [PcdOffset[|PcdSize]] > # Or it might be next region's offset described by an expression which starts with a PCD. > # PcdOffset[|PcdSize] or OffsetPcdExpression|Size > # > - self.__UndoToken() > - IsRegionPcd = (RegionSizeGuidPattern.match(self.__CurrentLine()[self.CurrentOffsetWithinLine:]) or > - RegionOffsetPcdPattern.match(self.__CurrentLine()[self.CurrentOffsetWithinLine:])) > + self._UndoToken() > + IsRegionPcd = (RegionSizeGuidPattern.match(self._CurrentLine()[self.CurrentOffsetWithinLine:]) or > + RegionOffsetPcdPattern.match(self._CurrentLine()[self.CurrentOffsetWithinLine:])) > if IsRegionPcd: > - RegionObj.PcdOffset = self.__GetNextPcdSettings() > - self.Profile.PcdDict[RegionObj.PcdOffset] = "0x%08X" % (RegionObj.Offset + long(Fd.BaseAddress, 0)) > + RegionObj.PcdOffset = self._GetNextPcdSettings() > + self.Profile.PcdDict[RegionObj.PcdOffset] = "0x%08X" % (RegionObj.Offset + long(theFd.BaseAddress, 0)) > self.SetPcdLocalation(RegionObj.PcdOffset) > - self.__PcdDict['%s.%s' % (RegionObj.PcdOffset[1], RegionObj.PcdOffset[0])] = "0x%x" % RegionObj.Offset > + self._PcdDict['%s.%s' % (RegionObj.PcdOffset[1], RegionObj.PcdOffset[0])] = "0x%x" % RegionObj.Offset > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > self.Profile.PcdFileLineDict[RegionObj.PcdOffset] = FileLineTuple > - if self.__IsToken( "|"): > - RegionObj.PcdSize = self.__GetNextPcdSettings() > + if self._IsToken(TAB_VALUE_SPLIT): > + RegionObj.PcdSize = self._GetNextPcdSettings() > self.Profile.PcdDict[RegionObj.PcdSize] = "0x%08X" % RegionObj.Size > self.SetPcdLocalation(RegionObj.PcdSize) > - self.__PcdDict['%s.%s' % (RegionObj.PcdSize[1], RegionObj.PcdSize[0])] = "0x%x" % RegionObj.Size > + self._PcdDict['%s.%s' % (RegionObj.PcdSize[1], RegionObj.PcdSize[0])] = "0x%x" % RegionObj.Size > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > self.Profile.PcdFileLineDict[RegionObj.PcdSize] = FileLineTuple > > - if not self.__GetNextWord(): > + if not self._GetNextWord(): > return True > > - if self.__Token == "SET": > - self.__UndoToken() > - self.__GetSetStatements( RegionObj) > - if not self.__GetNextWord(): > + if self._Token == "SET": > + self._UndoToken() > + self._GetSetStatements(RegionObj) > + if not self._GetNextWord(): > return True > > - elif self.__Token == BINARY_FILE_TYPE_FV: > - self.__UndoToken() > - self.__GetRegionFvType( RegionObj) > + elif self._Token == BINARY_FILE_TYPE_FV: > + self._UndoToken() > + self._GetRegionFvType(RegionObj) > > - elif self.__Token == "CAPSULE": > - self.__UndoToken() > - self.__GetRegionCapType( RegionObj) > + elif self._Token == "CAPSULE": > + self._UndoToken() > + self._GetRegionCapType(RegionObj) > > - elif self.__Token == "FILE": > - self.__UndoToken() > - self.__GetRegionFileType(RegionObj) > + elif self._Token == "FILE": > + self._UndoToken() > + self._GetRegionFileType(RegionObj) > > - elif self.__Token == "INF": > - self.__UndoToken() > + elif self._Token == "INF": > + self._UndoToken() > RegionObj.RegionType = "INF" > - while self.__IsKeyword("INF"): > - self.__UndoToken() > - ffsInf = self.__ParseInfStatement() > + while 
self._IsKeyword("INF"): > + self._UndoToken() > + ffsInf = self._ParseInfStatement() > if not ffsInf: > break > RegionObj.RegionDataList.append(ffsInf) > > - elif self.__Token == "DATA": > - self.__UndoToken() > - self.__GetRegionDataType(RegionObj) > + elif self._Token == "DATA": > + self._UndoToken() > + self._GetRegionDataType(RegionObj) > else: > - self.__UndoToken() > - if self.__GetRegionLayout(Fd): > + self._UndoToken() > + if self._GetRegionLayout(theFd): > return True > raise Warning("A valid region type was not found. " > "Valid types are [SET, FV, CAPSULE, FILE, DATA, INF]. This error occurred", > @@ -1935,189 +1881,185 @@ class FdfParser: > > return True > > - ## __GetRegionFvType() method > + ## _GetRegionFvType() method > # > # Get region fv data for region > # > # @param self The object pointer > # @param RegionObj for whom region data is got > # > - def __GetRegionFvType(self, RegionObj): > - > - if not self.__IsKeyword( BINARY_FILE_TYPE_FV): > + def _GetRegionFvType(self, RegionObj): > + if not self._IsKeyword(BINARY_FILE_TYPE_FV): > raise Warning("expected Keyword BINARY_FILE_TYPE_FV", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) > > RegionObj.RegionType = BINARY_FILE_TYPE_FV > - RegionObj.RegionDataList.append((self.__Token).upper()) > + RegionObj.RegionDataList.append((self._Token).upper()) > > - while self.__IsKeyword( BINARY_FILE_TYPE_FV): > + while self._IsKeyword(BINARY_FILE_TYPE_FV): > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) > > - RegionObj.RegionDataList.append((self.__Token).upper()) > + RegionObj.RegionDataList.append((self._Token).upper()) > > - ## __GetRegionCapType() method > + ## _GetRegionCapType() method > # > # Get region capsule data for region > # > # @param self The object pointer > # @param RegionObj for whom region data is got > # > - def __GetRegionCapType(self, RegionObj): > - > - if not self.__IsKeyword("CAPSULE"): > + def _GetRegionCapType(self, RegionObj): > + if not self._IsKeyword("CAPSULE"): > raise Warning("expected Keyword 'CAPSULE'", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected CAPSULE name", self.FileName, self.CurrentLineNumber) > > RegionObj.RegionType = "CAPSULE" > - RegionObj.RegionDataList.append(self.__Token) > + RegionObj.RegionDataList.append(self._Token) > > - while self.__IsKeyword("CAPSULE"): > + while self._IsKeyword("CAPSULE"): > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected CAPSULE name", self.FileName, self.CurrentLineNumber) > > - RegionObj.RegionDataList.append(self.__Token) > + RegionObj.RegionDataList.append(self._Token) > > - ## __GetRegionFileType() method > + ## 
_GetRegionFileType() method > # > # Get region file data for region > # > # @param self The object pointer > # @param RegionObj for whom region data is got > # > - def __GetRegionFileType(self, RegionObj): > - > - if not self.__IsKeyword( "FILE"): > + def _GetRegionFileType(self, RegionObj): > + if not self._IsKeyword("FILE"): > raise Warning("expected Keyword 'FILE'", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected File name", self.FileName, self.CurrentLineNumber) > > RegionObj.RegionType = "FILE" > - RegionObj.RegionDataList.append( self.__Token) > + RegionObj.RegionDataList.append(self._Token) > > - while self.__IsKeyword( "FILE"): > + while self._IsKeyword("FILE"): > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected FILE name", self.FileName, self.CurrentLineNumber) > > - RegionObj.RegionDataList.append(self.__Token) > + RegionObj.RegionDataList.append(self._Token) > > - ## __GetRegionDataType() method > + ## _GetRegionDataType() method > # > # Get region array data for region > # > # @param self The object pointer > # @param RegionObj for whom region data is got > # > - def __GetRegionDataType(self, RegionObj): > - > - if not self.__IsKeyword( "DATA"): > + def _GetRegionDataType(self, RegionObj): > + if not self._IsKeyword("DATA"): > raise Warning("expected Region Data type", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken( "{"): > + if not self._IsToken("{"): > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextHexNumber(): > + if not self._GetNextHexNumber(): > raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) > > - if len(self.__Token) > 18: > + if len(self._Token) > 18: > raise Warning("Hex string can't be converted to a valid UINT64 value", self.FileName, self.CurrentLineNumber) > > # convert hex string value to byte hex string array > - AllString = self.__Token > + AllString = self._Token > AllStrLen = len (AllString) > DataString = "" > while AllStrLen > 4: > - DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + "," > + DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + TAB_COMMA_SPLIT > AllStrLen = AllStrLen - 2 > - DataString = DataString + AllString[:AllStrLen] + "," > + DataString = DataString + AllString[:AllStrLen] + TAB_COMMA_SPLIT > > # byte value array > - if len (self.__Token) <= 4: > - while self.__IsToken(","): > - if not self.__GetNextHexNumber(): > + if len (self._Token) <= 4: > + while self._IsToken(TAB_COMMA_SPLIT): > + if not self._GetNextHexNumber(): > raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) > - if len(self.__Token) > 4: > + if len(self._Token) > 4: > raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) > - DataString += self.__Token > - DataString += "," > + DataString += self._Token > + DataString += TAB_COMMA_SPLIT > > - if not self.__IsToken( "}"): > + if not self._IsToken("}"): > raise 
Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > - DataString = DataString.rstrip(",") > + DataString = DataString.rstrip(TAB_COMMA_SPLIT) > RegionObj.RegionType = "DATA" > - RegionObj.RegionDataList.append( DataString) > + RegionObj.RegionDataList.append(DataString) > > - while self.__IsKeyword( "DATA"): > + while self._IsKeyword("DATA"): > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken( "{"): > + if not self._IsToken("{"): > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextHexNumber(): > + if not self._GetNextHexNumber(): > raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) > > - if len(self.__Token) > 18: > + if len(self._Token) > 18: > raise Warning("Hex string can't be converted to a valid UINT64 value", self.FileName, self.CurrentLineNumber) > > # convert hex string value to byte hex string array > - AllString = self.__Token > + AllString = self._Token > AllStrLen = len (AllString) > DataString = "" > while AllStrLen > 4: > - DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + "," > + DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + TAB_COMMA_SPLIT > AllStrLen = AllStrLen - 2 > - DataString = DataString + AllString[:AllStrLen] + "," > + DataString = DataString + AllString[:AllStrLen] + TAB_COMMA_SPLIT > > # byte value array > - if len (self.__Token) <= 4: > - while self.__IsToken(","): > - if not self.__GetNextHexNumber(): > + if len (self._Token) <= 4: > + while self._IsToken(TAB_COMMA_SPLIT): > + if not self._GetNextHexNumber(): > raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) > - if len(self.__Token) > 4: > + if len(self._Token) > 4: > raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) > - DataString += self.__Token > - DataString += "," > + DataString += self._Token > + DataString += TAB_COMMA_SPLIT > > - if not self.__IsToken( "}"): > + if not self._IsToken("}"): > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > - DataString = DataString.rstrip(",") > - RegionObj.RegionDataList.append( DataString) > + DataString = DataString.rstrip(TAB_COMMA_SPLIT) > + RegionObj.RegionDataList.append(DataString) > > - ## __GetFv() method > + ## _GetFv() method > # > # Get FV section contents and store its data into FV dictionary of self.Profile > # > @@ -2125,68 +2067,68 @@ class FdfParser: > # @retval True Successfully find a FV > # @retval False Not able to find a FV > # > - def __GetFv(self): > - if not self.__GetNextToken(): > + def _GetFv(self): > + if not self._GetNextToken(): > return False > > - S = self.__Token.upper() > - if S.startswith("[") and not S.startswith("[FV."): > + S = self._Token.upper() > + if S.startswith(TAB_SECTION_START) and not S.startswith("[FV."): > self.SectionParser(S) > - self.__UndoToken() > + self._UndoToken() > return False > > - self.__UndoToken() > - if not self.__IsToken("[FV.", True): > + self._UndoToken() > + if not self._IsToken("[FV.", True): > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) > - raise Warning("Unknown Keyword '%s'" % self.__Token, 
self.FileName, self.CurrentLineNumber) > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) > + raise Warning("Unknown Keyword '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > - FvName = self.__GetUiName() > + FvName = self._GetUiName() > self.CurrentFvName = FvName.upper() > > - if not self.__IsToken( "]"): > + if not self._IsToken(TAB_SECTION_END): > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > - FvObj = Fv.FV() > + FvObj = FV() > FvObj.UiFvName = self.CurrentFvName > self.Profile.FvDict[self.CurrentFvName] = FvObj > > - Status = self.__GetCreateFile(FvObj) > + Status = self._GetCreateFile(FvObj) > if not Status: > raise Warning("FV name error", self.FileName, self.CurrentLineNumber) > > - self.__GetDefineStatements(FvObj) > + self._GetDefineStatements(FvObj) > > - self.__GetAddressStatements(FvObj) > + self._GetAddressStatements(FvObj) > > FvObj.FvExtEntryTypeValue = [] > FvObj.FvExtEntryType = [] > FvObj.FvExtEntryData = [] > while True: > - self.__GetSetStatements(FvObj) > + self._GetSetStatements(FvObj) > > - if not (self.__GetBlockStatement(FvObj) or self.__GetFvBaseAddress(FvObj) or > - self.__GetFvForceRebase(FvObj) or self.__GetFvAlignment(FvObj) or > - self.__GetFvAttributes(FvObj) or self.__GetFvNameGuid(FvObj) or > - self.__GetFvExtEntryStatement(FvObj) or self.__GetFvNameString(FvObj)): > + if not (self._GetBlockStatement(FvObj) or self._GetFvBaseAddress(FvObj) or > + self._GetFvForceRebase(FvObj) or self._GetFvAlignment(FvObj) or > + self._GetFvAttributes(FvObj) or self._GetFvNameGuid(FvObj) or > + self._GetFvExtEntryStatement(FvObj) or self._GetFvNameString(FvObj)): > break > > if FvObj.FvNameString == 'TRUE' and not FvObj.FvNameGuid: > raise Warning("FvNameString found but FvNameGuid was not found", self.FileName, self.CurrentLineNumber) > > - self.__GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) > - self.__GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) > + self._GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) > + self._GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) > > while True: > - isInf = self.__GetInfStatement(FvObj) > - isFile = self.__GetFileStatement(FvObj, MacroDict = FvObj.DefineVarDict.copy()) > + isInf = self._GetInfStatement(FvObj) > + isFile = self._GetFileStatement(FvObj, MacroDict = FvObj.DefineVarDict.copy()) > if not isInf and not isFile: > break > > return True > > - ## __GetFvAlignment() method > + ## _GetFvAlignment() method > # > # Get alignment for FV > # > @@ -2195,26 +2137,25 @@ class FdfParser: > # @retval True Successfully find a alignment statement > # @retval False Not able to find a alignment statement > # > - def __GetFvAlignment(self, Obj): > - > - if not self.__IsKeyword( "FvAlignment"): > + def _GetFvAlignment(self, Obj): > + if not self._IsKeyword("FvAlignment"): > return False > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected alignment value", self.FileName, self.CurrentLineNumber) > > - if self.__Token.upper() not in ("1", "2", "4", "8", "16", "32", "64", "128", "256", "512", \ > + if self._Token.upper() not in ("1", "2", "4", "8", "16", "32", "64", "128", "256", "512", \ > "1K", "2K", "4K", "8K", "16K", "32K", "64K", "128K", "256K", "512K", \ > "1M", "2M", "4M", "8M", 
"16M", "32M", "64M", "128M", "256M", "512M", \ > "1G", "2G"): > - raise Warning("Unknown alignment value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > - Obj.FvAlignment = self.__Token > + raise Warning("Unknown alignment value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > + Obj.FvAlignment = self._Token > return True > > - ## __GetFvBaseAddress() method > + ## _GetFvBaseAddress() method > # > # Get BaseAddress for FV > # > @@ -2223,23 +2164,22 @@ class FdfParser: > # @retval True Successfully find a FvBaseAddress statement > # @retval False Not able to find a FvBaseAddress statement > # > - def __GetFvBaseAddress(self, Obj): > - > - if not self.__IsKeyword("FvBaseAddress"): > + def _GetFvBaseAddress(self, Obj): > + if not self._IsKeyword("FvBaseAddress"): > return False > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected FV base address value", self.FileName, self.CurrentLineNumber) > > - if not BaseAddrValuePattern.match(self.__Token.upper()): > - raise Warning("Unknown FV base address value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > - Obj.FvBaseAddress = self.__Token > + if not BaseAddrValuePattern.match(self._Token.upper()): > + raise Warning("Unknown FV base address value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > + Obj.FvBaseAddress = self._Token > return True > > - ## __GetFvForceRebase() method > + ## _GetFvForceRebase() method > # > # Get FvForceRebase for FV > # > @@ -2248,23 +2188,22 @@ class FdfParser: > # @retval True Successfully find a FvForceRebase statement > # @retval False Not able to find a FvForceRebase statement > # > - def __GetFvForceRebase(self, Obj): > - > - if not self.__IsKeyword("FvForceRebase"): > + def _GetFvForceRebase(self, Obj): > + if not self._IsKeyword("FvForceRebase"): > return False > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected FvForceRebase value", self.FileName, self.CurrentLineNumber) > > - if self.__Token.upper() not in ["TRUE", "FALSE", "0", "0X0", "0X00", "1", "0X1", "0X01"]: > - raise Warning("Unknown FvForceRebase value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > + if self._Token.upper() not in ["TRUE", "FALSE", "0", "0X0", "0X00", "1", "0X1", "0X01"]: > + raise Warning("Unknown FvForceRebase value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > - if self.__Token.upper() in ["TRUE", "1", "0X1", "0X01"]: > + if self._Token.upper() in ["TRUE", "1", "0X1", "0X01"]: > Obj.FvForceRebase = True > - elif self.__Token.upper() in ["FALSE", "0", "0X0", "0X00"]: > + elif self._Token.upper() in ["FALSE", "0", "0X0", "0X00"]: > Obj.FvForceRebase = False > else: > Obj.FvForceRebase = None > @@ -2272,7 +2211,7 @@ class FdfParser: > return True > > > - ## __GetFvAttributes() method > + ## _GetFvAttributes() method > # > # Get attributes for FV > # > @@ -2280,31 +2219,31 @@ class FdfParser: > # @param Obj for whom attribute is got > # @retval None > # > - def __GetFvAttributes(self, FvObj): > + def _GetFvAttributes(self, FvObj): > IsWordToken = False > - while self.__GetNextWord(): > + while self._GetNextWord(): > IsWordToken = True > - name = self.__Token > + name = 
self._Token > if name not in ("ERASE_POLARITY", "MEMORY_MAPPED", \ > "STICKY_WRITE", "LOCK_CAP", "LOCK_STATUS", "WRITE_ENABLED_CAP", \ > "WRITE_DISABLED_CAP", "WRITE_STATUS", "READ_ENABLED_CAP", \ > "READ_DISABLED_CAP", "READ_STATUS", "READ_LOCK_CAP", \ > "READ_LOCK_STATUS", "WRITE_LOCK_CAP", "WRITE_LOCK_STATUS", \ > "WRITE_POLICY_RELIABLE", "WEAK_ALIGNMENT", "FvUsedSizeEnable"): > - self.__UndoToken() > + self._UndoToken() > return False > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken() or self.__Token.upper() not in ("TRUE", "FALSE", "1", "0"): > + if not self._GetNextToken() or self._Token.upper() not in ("TRUE", "FALSE", "1", "0"): > raise Warning("expected TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) > > - FvObj.FvAttributeDict[name] = self.__Token > + FvObj.FvAttributeDict[name] = self._Token > > return IsWordToken > > - ## __GetFvNameGuid() method > + ## _GetFvNameGuid() method > # > # Get FV GUID for FV > # > @@ -2312,110 +2251,107 @@ class FdfParser: > # @param Obj for whom GUID is got > # @retval None > # > - def __GetFvNameGuid(self, FvObj): > - > - if not self.__IsKeyword( "FvNameGuid"): > + def _GetFvNameGuid(self, FvObj): > + if not self._IsKeyword("FvNameGuid"): > return False > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextGuid(): > + if not self._GetNextGuid(): > raise Warning("expected FV GUID value", self.FileName, self.CurrentLineNumber) > > - FvObj.FvNameGuid = self.__Token > + FvObj.FvNameGuid = self._Token > > return True > > - def __GetFvNameString(self, FvObj): > - > - if not self.__IsKeyword( "FvNameString"): > + def _GetFvNameString(self, FvObj): > + if not self._IsKeyword("FvNameString"): > return False > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken() or self.__Token not in ('TRUE', 'FALSE'): > + if not self._GetNextToken() or self._Token not in ('TRUE', 'FALSE'): > raise Warning("expected TRUE or FALSE for FvNameString", self.FileName, self.CurrentLineNumber) > > - FvObj.FvNameString = self.__Token > + FvObj.FvNameString = self._Token > > return True > > - def __GetFvExtEntryStatement(self, FvObj): > - > - if not (self.__IsKeyword( "FV_EXT_ENTRY") or self.__IsKeyword( "FV_EXT_ENTRY_TYPE")): > + def _GetFvExtEntryStatement(self, FvObj): > + if not (self._IsKeyword("FV_EXT_ENTRY") or self._IsKeyword("FV_EXT_ENTRY_TYPE")): > return False > > - if not self.__IsKeyword ("TYPE"): > + if not self._IsKeyword ("TYPE"): > raise Warning("expected 'TYPE'", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextHexNumber() and not self.__GetNextDecimalNumber(): > + if not self._GetNextHexNumber() and not self._GetNextDecimalNumber(): > raise Warning("expected Hex FV extension entry type value At Line ", self.FileName, self.CurrentLineNumber) > > - FvObj.FvExtEntryTypeValue.append(self.__Token) > + FvObj.FvExtEntryTypeValue.append(self._Token) > > - if not self.__IsToken( "{"): > + if not self._IsToken("{"): > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > - if not self.__IsKeyword ("FILE") 
and not self.__IsKeyword ("DATA"): > + if not self._IsKeyword ("FILE") and not self._IsKeyword ("DATA"): > raise Warning("expected 'FILE' or 'DATA'", self.FileName, self.CurrentLineNumber) > > - FvObj.FvExtEntryType.append(self.__Token) > + FvObj.FvExtEntryType.append(self._Token) > > - if self.__Token == 'DATA': > + if self._Token == 'DATA': > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken( "{"): > + if not self._IsToken("{"): > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextHexNumber(): > + if not self._GetNextHexNumber(): > raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) > > - if len(self.__Token) > 4: > + if len(self._Token) > 4: > raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) > > - DataString = self.__Token > - DataString += "," > + DataString = self._Token > + DataString += TAB_COMMA_SPLIT > > - while self.__IsToken(","): > - if not self.__GetNextHexNumber(): > + while self._IsToken(TAB_COMMA_SPLIT): > + if not self._GetNextHexNumber(): > raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) > - if len(self.__Token) > 4: > + if len(self._Token) > 4: > raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) > - DataString += self.__Token > - DataString += "," > + DataString += self._Token > + DataString += TAB_COMMA_SPLIT > > - if not self.__IsToken( "}"): > + if not self._IsToken("}"): > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken( "}"): > + if not self._IsToken("}"): > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > - DataString = DataString.rstrip(",") > + DataString = DataString.rstrip(TAB_COMMA_SPLIT) > FvObj.FvExtEntryData.append(DataString) > > - if self.__Token == 'FILE': > + if self._Token == 'FILE': > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected FV Extension Entry file path At Line ", self.FileName, self.CurrentLineNumber) > > - FvObj.FvExtEntryData.append(self.__Token) > + FvObj.FvExtEntryData.append(self._Token) > > - if not self.__IsToken( "}"): > + if not self._IsToken("}"): > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > return True > > - ## __GetAprioriSection() method > + ## _GetAprioriSection() method > # > # Get token statements > # > @@ -2425,56 +2361,55 @@ class FdfParser: > # @retval True Successfully find apriori statement > # @retval False Not able to find apriori statement > # > - def __GetAprioriSection(self, FvObj, MacroDict = {}): > - > - if not self.__IsKeyword( "APRIORI"): > + def _GetAprioriSection(self, FvObj, MacroDict = {}): > + if not self._IsKeyword("APRIORI"): > return False > > - if not self.__IsKeyword("PEI") and not self.__IsKeyword("DXE"): > + if not self._IsKeyword("PEI") and not self._IsKeyword("DXE"): > raise Warning("expected Apriori file type", self.FileName, self.CurrentLineNumber) > - AprType = self.__Token > + AprType = self._Token > > - if not self.__IsToken( "{"): > + if not self._IsToken("{"): > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > - AprSectionObj = AprioriSection.AprioriSection() > + AprSectionObj = 
AprioriSection() > AprSectionObj.AprioriType = AprType > > - self.__GetDefineStatements(AprSectionObj) > + self._GetDefineStatements(AprSectionObj) > MacroDict.update(AprSectionObj.DefineVarDict) > > while True: > - IsInf = self.__GetInfStatement(AprSectionObj) > - IsFile = self.__GetFileStatement( AprSectionObj) > + IsInf = self._GetInfStatement(AprSectionObj) > + IsFile = self._GetFileStatement(AprSectionObj) > if not IsInf and not IsFile: > break > > - if not self.__IsToken( "}"): > + if not self._IsToken("}"): > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > FvObj.AprioriSectionList.append(AprSectionObj) > return True > > - def __ParseInfStatement(self): > - if not self.__IsKeyword("INF"): > + def _ParseInfStatement(self): > + if not self._IsKeyword("INF"): > return None > > - ffsInf = FfsInfStatement.FfsInfStatement() > - self.__GetInfOptions(ffsInf) > + ffsInf = FfsInfStatement() > + self._GetInfOptions(ffsInf) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected INF file path", self.FileName, self.CurrentLineNumber) > - ffsInf.InfFileName = self.__Token > + ffsInf.InfFileName = self._Token > if not ffsInf.InfFileName.endswith('.inf'): > raise Warning("expected .inf file path", self.FileName, self.CurrentLineNumber) > > ffsInf.CurrentLineNum = self.CurrentLineNumber > - ffsInf.CurrentLineContent = self.__CurrentLine() > + ffsInf.CurrentLineContent = self._CurrentLine() > > #Replace $(SAPCE) with real space > ffsInf.InfFileName = ffsInf.InfFileName.replace('$(SPACE)', ' ') > > - if ffsInf.InfFileName.replace('$(WORKSPACE)', '').find('$') == -1: > + if ffsInf.InfFileName.replace(TAB_WORKSPACE, '').find('$') == -1: > #do case sensitive check for file path > ErrorCode, ErrorInfo = PathClass(NormPath(ffsInf.InfFileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() > if ErrorCode != 0: > @@ -2492,16 +2427,16 @@ class FdfParser: > else: > self.Profile.InfDict['ArchTBD'].append(ffsInf.InfFileName) > > - if self.__IsToken('|'): > - if self.__IsKeyword('RELOCS_STRIPPED'): > + if self._IsToken(TAB_VALUE_SPLIT): > + if self._IsKeyword('RELOCS_STRIPPED'): > ffsInf.KeepReloc = False > - elif self.__IsKeyword('RELOCS_RETAINED'): > + elif self._IsKeyword('RELOCS_RETAINED'): > ffsInf.KeepReloc = True > else: > - raise Warning("Unknown reloc strip flag '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > + raise Warning("Unknown reloc strip flag '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > return ffsInf > > - ## __GetInfStatement() method > + ## _GetInfStatement() method > # > # Get INF statements > # > @@ -2510,86 +2445,86 @@ class FdfParser: > # @retval True Successfully find inf statement > # @retval False Not able to find inf statement > # > - def __GetInfStatement(self, Obj, ForCapsule=False): > - ffsInf = self.__ParseInfStatement() > + def _GetInfStatement(self, Obj, ForCapsule=False): > + ffsInf = self._ParseInfStatement() > if not ffsInf: > return False > > if ForCapsule: > - capsuleFfs = CapsuleData.CapsuleFfs() > - capsuleFfs.Ffs = ffsInf > - Obj.CapsuleDataList.append(capsuleFfs) > + myCapsuleFfs = CapsuleFfs() > + myCapsuleFfs.Ffs = ffsInf > + Obj.CapsuleDataList.append(myCapsuleFfs) > else: > Obj.FfsList.append(ffsInf) > return True > > - ## __GetInfOptions() method > + ## _GetInfOptions() method > # > # Get options for INF > # > # @param self The object pointer > # @param FfsInfObj for whom option is got > # > - def __GetInfOptions(self, FfsInfObj): > - if self.__IsKeyword("FILE_GUID"): > - 
if not self.__IsToken("="): > + def _GetInfOptions(self, FfsInfObj): > + if self._IsKeyword("FILE_GUID"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextGuid(): > + if not self._GetNextGuid(): > raise Warning("expected GUID value", self.FileName, self.CurrentLineNumber) > - FfsInfObj.OverrideGuid = self.__Token > + FfsInfObj.OverrideGuid = self._Token > > - if self.__IsKeyword( "RuleOverride"): > - if not self.__IsToken( "="): > + if self._IsKeyword("RuleOverride"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected Rule name", self.FileName, self.CurrentLineNumber) > - FfsInfObj.Rule = self.__Token > + FfsInfObj.Rule = self._Token > > - if self.__IsKeyword( "VERSION"): > - if not self.__IsToken( "="): > + if self._IsKeyword("VERSION"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected Version", self.FileName, self.CurrentLineNumber) > > - if self.__GetStringData(): > - FfsInfObj.Version = self.__Token > + if self._GetStringData(): > + FfsInfObj.Version = self._Token > > - if self.__IsKeyword( BINARY_FILE_TYPE_UI): > - if not self.__IsToken( "="): > + if self._IsKeyword(BINARY_FILE_TYPE_UI): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected UI name", self.FileName, self.CurrentLineNumber) > > - if self.__GetStringData(): > - FfsInfObj.Ui = self.__Token > + if self._GetStringData(): > + FfsInfObj.Ui = self._Token > > - if self.__IsKeyword( "USE"): > - if not self.__IsToken( "="): > + if self._IsKeyword("USE"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected ARCH name", self.FileName, self.CurrentLineNumber) > - FfsInfObj.UseArch = self.__Token > + FfsInfObj.UseArch = self._Token > > > - if self.__GetNextToken(): > - p = re.compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\))') > - if p.match(self.__Token) and p.match(self.__Token).span()[1] == len(self.__Token): > - FfsInfObj.KeyStringList.append(self.__Token) > - if not self.__IsToken(","): > + if self._GetNextToken(): > + p = compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\))') > + if p.match(self._Token) and p.match(self._Token).span()[1] == len(self._Token): > + FfsInfObj.KeyStringList.append(self._Token) > + if not self._IsToken(TAB_COMMA_SPLIT): > return > else: > - self.__UndoToken() > + self._UndoToken() > return > > - while self.__GetNextToken(): > - if not p.match(self.__Token): > + while self._GetNextToken(): > + if not p.match(self._Token): > raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) > - FfsInfObj.KeyStringList.append(self.__Token) > + FfsInfObj.KeyStringList.append(self._Token) > > - if not self.__IsToken(","): > + if not self._IsToken(TAB_COMMA_SPLIT): > break > > - ## __GetFileStatement() method > + ## _GetFileStatement() method > # > # Get FILE 
statements > # > @@ -2599,42 +2534,41 @@ class FdfParser: > # @retval True Successfully find FILE statement > # @retval False Not able to find FILE statement > # > - def __GetFileStatement(self, Obj, ForCapsule = False, MacroDict = {}): > - > - if not self.__IsKeyword( "FILE"): > + def _GetFileStatement(self, Obj, ForCapsule = False, MacroDict = {}): > + if not self._IsKeyword("FILE"): > return False > > - if not self.__GetNextWord(): > + if not self._GetNextWord(): > raise Warning("expected FFS type", self.FileName, self.CurrentLineNumber) > > - if ForCapsule and self.__Token == 'DATA': > - self.__UndoToken() > - self.__UndoToken() > + if ForCapsule and self._Token == 'DATA': > + self._UndoToken() > + self._UndoToken() > return False > > - FfsFileObj = FfsFileStatement.FileStatement() > - FfsFileObj.FvFileType = self.__Token > + FfsFileObj = FileStatement() > + FfsFileObj.FvFileType = self._Token > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextGuid(): > - if not self.__GetNextWord(): > + if not self._GetNextGuid(): > + if not self._GetNextWord(): > raise Warning("expected File GUID", self.FileName, self.CurrentLineNumber) > - if self.__Token == 'PCD': > - if not self.__IsToken( "("): > + if self._Token == 'PCD': > + if not self._IsToken("("): > raise Warning("expected '('", self.FileName, self.CurrentLineNumber) > - PcdPair = self.__GetNextPcdSettings() > - if not self.__IsToken( ")"): > + PcdPair = self._GetNextPcdSettings() > + if not self._IsToken(")"): > raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) > - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' > + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' > > - FfsFileObj.NameGuid = self.__Token > + FfsFileObj.NameGuid = self._Token > > - self.__GetFilePart( FfsFileObj, MacroDict.copy()) > + self._GetFilePart(FfsFileObj, MacroDict.copy()) > > if ForCapsule: > - capsuleFfs = CapsuleData.CapsuleFfs() > + capsuleFfs = CapsuleFfs() > capsuleFfs.Ffs = FfsFileObj > Obj.CapsuleDataList.append(capsuleFfs) > else: > @@ -2642,7 +2576,7 @@ class FdfParser: > > return True > > - ## __FileCouldHaveRelocFlag() method > + ## _FileCouldHaveRelocFlag() method > # > # Check whether reloc strip flag can be set for a file type. > # > @@ -2651,13 +2585,13 @@ class FdfParser: > # @retval False No way to have it > # > @staticmethod > - def __FileCouldHaveRelocFlag (FileType): > + def _FileCouldHaveRelocFlag (FileType): > if FileType in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, 'PEI_DXE_COMBO'): > return True > else: > return False > > - ## __SectionCouldHaveRelocFlag() method > + ## _SectionCouldHaveRelocFlag() method > # > # Check whether reloc strip flag can be set for a section type. 
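Side note for anyone skimming the _GetInfOptions hunk above: the KeyString handling there only accepts the first token as a KeyString when the Target_Tag_Arch pattern matches the entire token, which is what the span()[1] == len(Token) comparison enforces; later comma-separated entries only need p.match() to succeed. A minimal standalone sketch of that check, assuming the bare compile() resolves to re.compile via the explicit module-level imports this series switches to (the helper name and sample strings are illustrative, not from the patch):

    from re import compile

    # Same pattern as in _GetInfOptions: <Target>_<ToolChainTag>_<Arch>,
    # where each field is a name, a '*' wildcard, or the matching macro.
    KeyStringPattern = compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_'
                               r'([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_'
                               r'([a-zA-Z0-9\-]+|\$\(ARCH\))')

    def IsFullKeyString(Token):
        # A prefix match is not enough; the whole token must be consumed.
        Match = KeyStringPattern.match(Token)
        return bool(Match) and Match.span()[1] == len(Token)

    # IsFullKeyString("DEBUG_GCC5_X64")  -> True
    # IsFullKeyString("*_*_$(ARCH)")     -> True
    # IsFullKeyString("RELEASE_VS2017")  -> False (third field missing)
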
> # > @@ -2666,13 +2600,13 @@ class FdfParser: > # @retval False No way to have it > # > @staticmethod > - def __SectionCouldHaveRelocFlag (SectionType): > + def _SectionCouldHaveRelocFlag (SectionType): > if SectionType in (BINARY_FILE_TYPE_TE, BINARY_FILE_TYPE_PE32): > return True > else: > return False > > - ## __GetFilePart() method > + ## _GetFilePart() method > # > # Get components for FILE statement > # > @@ -2680,58 +2614,57 @@ class FdfParser: > # @param FfsFileObj for whom component is got > # @param MacroDict dictionary used to replace macro > # > - def __GetFilePart(self, FfsFileObj, MacroDict = {}): > + def _GetFilePart(self, FfsFileObj, MacroDict = {}): > + self._GetFileOpts(FfsFileObj) > > - self.__GetFileOpts( FfsFileObj) > - > - if not self.__IsToken("{"): > - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): > - if self.__FileCouldHaveRelocFlag(FfsFileObj.FvFileType): > - if self.__Token == 'RELOCS_STRIPPED': > + if not self._IsToken("{"): > + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): > + if self._FileCouldHaveRelocFlag(FfsFileObj.FvFileType): > + if self._Token == 'RELOCS_STRIPPED': > FfsFileObj.KeepReloc = False > else: > FfsFileObj.KeepReloc = True > else: > raise Warning("File type %s could not have reloc strip flag%d" % (FfsFileObj.FvFileType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken("{"): > + if not self._IsToken("{"): > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected File name or section data", self.FileName, self.CurrentLineNumber) > > - if self.__Token == BINARY_FILE_TYPE_FV: > - if not self.__IsToken( "="): > + if self._Token == BINARY_FILE_TYPE_FV: > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) > - FfsFileObj.FvName = self.__Token > + FfsFileObj.FvName = self._Token > > - elif self.__Token == "FD": > - if not self.__IsToken( "="): > + elif self._Token == "FD": > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected FD name", self.FileName, self.CurrentLineNumber) > - FfsFileObj.FdName = self.__Token > + FfsFileObj.FdName = self._Token > > - elif self.__Token in ("DEFINE", "APRIORI", "SECTION"): > - self.__UndoToken() > - self.__GetSectionData( FfsFileObj, MacroDict) > + elif self._Token in (TAB_DEFINE, "APRIORI", "SECTION"): > + self._UndoToken() > + self._GetSectionData(FfsFileObj, MacroDict) > > elif hasattr(FfsFileObj, 'FvFileType') and FfsFileObj.FvFileType == 'RAW': > - self.__UndoToken() > - self.__GetRAWData(FfsFileObj, MacroDict) > + self._UndoToken() > + self._GetRAWData(FfsFileObj, MacroDict) > > else: > FfsFileObj.CurrentLineNum = self.CurrentLineNumber > - FfsFileObj.CurrentLineContent = self.__CurrentLine() > - FfsFileObj.FileName = self.__Token.replace('$(SPACE)', ' ') > - self.__VerifyFile(FfsFileObj.FileName) > + FfsFileObj.CurrentLineContent = self._CurrentLine() > + FfsFileObj.FileName = self._Token.replace('$(SPACE)', ' ') > + self._VerifyFile(FfsFileObj.FileName) > > - if not self.__IsToken( "}"): > + if not self._IsToken("}"): > raise Warning("expected '}'", 
self.FileName, self.CurrentLineNumber) > > - ## __GetRAWData() method > + ## _GetRAWData() method > # > # Get RAW data for FILE statement > # > @@ -2739,33 +2672,32 @@ class FdfParser: > # @param FfsFileObj for whom section is got > # @param MacroDict dictionary used to replace macro > # > - def __GetRAWData(self, FfsFileObj, MacroDict = {}): > + def _GetRAWData(self, FfsFileObj, MacroDict = {}): > FfsFileObj.FileName = [] > FfsFileObj.SubAlignment = [] > while True: > AlignValue = None > - if self.__GetAlignment(): > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > + if self._GetAlignment(): > + if self._Token not in ALIGNMENTS: > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > #For FFS, Auto is default option same to "" > - if not self.__Token == "Auto": > - AlignValue = self.__Token > - if not self.__GetNextToken(): > + if not self._Token == "Auto": > + AlignValue = self._Token > + if not self._GetNextToken(): > raise Warning("expected Filename value", self.FileName, self.CurrentLineNumber) > > - FileName = self.__Token.replace('$(SPACE)', ' ') > + FileName = self._Token.replace('$(SPACE)', ' ') > if FileName == '}': > - self.__UndoToken() > + self._UndoToken() > raise Warning("expected Filename value", self.FileName, self.CurrentLineNumber) > > - self.__VerifyFile(FileName) > + self._VerifyFile(FileName) > File = PathClass(NormPath(FileName), GenFdsGlobalVariable.WorkSpaceDir) > FfsFileObj.FileName.append(File.Path) > FfsFileObj.SubAlignment.append(AlignValue) > > - if self.__IsToken( "}"): > - self.__UndoToken() > + if self._IsToken("}"): > + self._UndoToken() > break > > if len(FfsFileObj.SubAlignment) == 1: > @@ -2773,45 +2705,43 @@ class FdfParser: > if len(FfsFileObj.FileName) == 1: > FfsFileObj.FileName = FfsFileObj.FileName[0] > > - ## __GetFileOpts() method > + ## _GetFileOpts() method > # > # Get options for FILE statement > # > # @param self The object pointer > # @param FfsFileObj for whom options is got > # > - def __GetFileOpts(self, FfsFileObj): > - > - if self.__GetNextToken(): > - if TokenFindPattern.match(self.__Token): > - FfsFileObj.KeyStringList.append(self.__Token) > - if self.__IsToken(","): > - while self.__GetNextToken(): > - if not TokenFindPattern.match(self.__Token): > + def _GetFileOpts(self, FfsFileObj): > + if self._GetNextToken(): > + if TokenFindPattern.match(self._Token): > + FfsFileObj.KeyStringList.append(self._Token) > + if self._IsToken(TAB_COMMA_SPLIT): > + while self._GetNextToken(): > + if not TokenFindPattern.match(self._Token): > raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) > - FfsFileObj.KeyStringList.append(self.__Token) > + FfsFileObj.KeyStringList.append(self._Token) > > - if not self.__IsToken(","): > + if not self._IsToken(TAB_COMMA_SPLIT): > break > > else: > - self.__UndoToken() > + self._UndoToken() > > - if self.__IsKeyword( "FIXED", True): > + if self._IsKeyword("FIXED", True): > FfsFileObj.Fixed = True > > - if self.__IsKeyword( "CHECKSUM", True): > + if self._IsKeyword("CHECKSUM", True): > FfsFileObj.CheckSum = True > > - if self.__GetAlignment(): > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > - raise 
Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > + if self._GetAlignment(): > + if self._Token not in ALIGNMENTS: > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > #For FFS, Auto is default option same to "" > - if not self.__Token == "Auto": > - FfsFileObj.Alignment = self.__Token > + if not self._Token == "Auto": > + FfsFileObj.Alignment = self._Token > > - ## __GetAlignment() method > + ## _GetAlignment() method > # > # Return the alignment value > # > @@ -2819,18 +2749,18 @@ class FdfParser: > # @retval True Successfully find alignment > # @retval False Not able to find alignment > # > - def __GetAlignment(self): > - if self.__IsKeyword( "Align", True): > - if not self.__IsToken( "="): > + def _GetAlignment(self): > + if self._IsKeyword("Align", True): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected alignment value", self.FileName, self.CurrentLineNumber) > return True > > return False > > - ## __GetFilePart() method > + ## _GetFilePart() method > # > # Get section data for FILE statement > # > @@ -2838,23 +2768,23 @@ class FdfParser: > # @param FfsFileObj for whom section is got > # @param MacroDict dictionary used to replace macro > # > - def __GetSectionData(self, FfsFileObj, MacroDict = {}): > + def _GetSectionData(self, FfsFileObj, MacroDict = {}): > Dict = {} > Dict.update(MacroDict) > > - self.__GetDefineStatements(FfsFileObj) > + self._GetDefineStatements(FfsFileObj) > > Dict.update(FfsFileObj.DefineVarDict) > - self.__GetAprioriSection(FfsFileObj, Dict.copy()) > - self.__GetAprioriSection(FfsFileObj, Dict.copy()) > + self._GetAprioriSection(FfsFileObj, Dict.copy()) > + self._GetAprioriSection(FfsFileObj, Dict.copy()) > > while True: > - IsLeafSection = self.__GetLeafSection(FfsFileObj, Dict) > - IsEncapSection = self.__GetEncapsulationSec(FfsFileObj) > + IsLeafSection = self._GetLeafSection(FfsFileObj, Dict) > + IsEncapSection = self._GetEncapsulationSec(FfsFileObj) > if not IsLeafSection and not IsEncapSection: > break > > - ## __GetLeafSection() method > + ## _GetLeafSection() method > # > # Get leaf section for Obj > # > @@ -2864,97 +2794,95 @@ class FdfParser: > # @retval True Successfully find section statement > # @retval False Not able to find section statement > # > - def __GetLeafSection(self, Obj, MacroDict = {}): > - > + def _GetLeafSection(self, Obj, MacroDict = {}): > OldPos = self.GetFileBufferPos() > > - if not self.__IsKeyword( "SECTION"): > + if not self._IsKeyword("SECTION"): > if len(Obj.SectionList) == 0: > raise Warning("expected SECTION", self.FileName, self.CurrentLineNumber) > else: > return False > > AlignValue = None > - if self.__GetAlignment(): > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > - AlignValue = self.__Token > + if self._GetAlignment(): > + if self._Token not in ALIGNMENTS: > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > + AlignValue = self._Token > > BuildNum = None > - if self.__IsKeyword( "BUILD_NUM"): > - if not self.__IsToken( "="): > + if self._IsKeyword("BUILD_NUM"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise 
Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected Build number value", self.FileName, self.CurrentLineNumber) > > - BuildNum = self.__Token > + BuildNum = self._Token > > - if self.__IsKeyword( "VERSION"): > + if self._IsKeyword("VERSION"): > if AlignValue == 'Auto': > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected version", self.FileName, self.CurrentLineNumber) > - VerSectionObj = VerSection.VerSection() > + VerSectionObj = VerSection() > VerSectionObj.Alignment = AlignValue > VerSectionObj.BuildNum = BuildNum > - if self.__GetStringData(): > - VerSectionObj.StringData = self.__Token > + if self._GetStringData(): > + VerSectionObj.StringData = self._Token > else: > - VerSectionObj.FileName = self.__Token > + VerSectionObj.FileName = self._Token > Obj.SectionList.append(VerSectionObj) > > - elif self.__IsKeyword( BINARY_FILE_TYPE_UI): > + elif self._IsKeyword(BINARY_FILE_TYPE_UI): > if AlignValue == 'Auto': > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected UI", self.FileName, self.CurrentLineNumber) > - UiSectionObj = UiSection.UiSection() > + UiSectionObj = UiSection() > UiSectionObj.Alignment = AlignValue > - if self.__GetStringData(): > - UiSectionObj.StringData = self.__Token > + if self._GetStringData(): > + UiSectionObj.StringData = self._Token > else: > - UiSectionObj.FileName = self.__Token > + UiSectionObj.FileName = self._Token > Obj.SectionList.append(UiSectionObj) > > - elif self.__IsKeyword( "FV_IMAGE"): > + elif self._IsKeyword("FV_IMAGE"): > if AlignValue == 'Auto': > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected FV name or FV file path", self.FileName, self.CurrentLineNumber) > > - FvName = self.__Token > + FvName = self._Token > FvObj = None > > - if self.__IsToken( "{"): > - FvObj = Fv.FV() > + if self._IsToken("{"): > + FvObj = FV() > FvObj.UiFvName = FvName.upper() > - self.__GetDefineStatements(FvObj) > + self._GetDefineStatements(FvObj) > MacroDict.update(FvObj.DefineVarDict) > - self.__GetBlockStatement(FvObj) > - self.__GetSetStatements(FvObj) > - self.__GetFvAlignment(FvObj) > - self.__GetFvAttributes(FvObj) > - self.__GetAprioriSection(FvObj, MacroDict.copy()) > - self.__GetAprioriSection(FvObj, MacroDict.copy()) > + self._GetBlockStatement(FvObj) > + self._GetSetStatements(FvObj) > + self._GetFvAlignment(FvObj) > + self._GetFvAttributes(FvObj) > + self._GetAprioriSection(FvObj, MacroDict.copy()) > + self._GetAprioriSection(FvObj, MacroDict.copy()) > > while True: > - IsInf = self.__GetInfStatement(FvObj) > - IsFile = self.__GetFileStatement(FvObj, 
MacroDict.copy()) > + IsInf = self._GetInfStatement(FvObj) > + IsFile = self._GetFileStatement(FvObj, MacroDict.copy()) > if not IsInf and not IsFile: > break > > - if not self.__IsToken( "}"): > + if not self._IsToken("}"): > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > - FvImageSectionObj = FvImageSection.FvImageSection() > + FvImageSectionObj = FvImageSection() > FvImageSectionObj.Alignment = AlignValue > if FvObj is not None: > FvImageSectionObj.Fv = FvObj > @@ -2965,81 +2893,81 @@ class FdfParser: > > Obj.SectionList.append(FvImageSectionObj) > > - elif self.__IsKeyword("PEI_DEPEX_EXP") or self.__IsKeyword("DXE_DEPEX_EXP") or self.__IsKeyword("SMM_DEPEX_EXP"): > + elif self._IsKeyword("PEI_DEPEX_EXP") or self._IsKeyword("DXE_DEPEX_EXP") or self._IsKeyword("SMM_DEPEX_EXP"): > if AlignValue == 'Auto': > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) > - DepexSectionObj = DepexSection.DepexSection() > + DepexSectionObj = DepexSection() > DepexSectionObj.Alignment = AlignValue > - DepexSectionObj.DepexType = self.__Token > + DepexSectionObj.DepexType = self._Token > > - if not self.__IsToken( "="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__IsToken( "{"): > + if not self._IsToken("{"): > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > - if not self.__SkipToToken( "}"): > + if not self._SkipToToken("}"): > raise Warning("expected Depex expression ending '}'", self.FileName, self.CurrentLineNumber) > > - DepexSectionObj.Expression = self.__SkippedChars.rstrip('}') > + DepexSectionObj.Expression = self._SkippedChars.rstrip('}') > Obj.SectionList.append(DepexSectionObj) > > else: > - if not self.__GetNextWord(): > + if not self._GetNextWord(): > raise Warning("expected section type", self.FileName, self.CurrentLineNumber) > > # Encapsulation section appear, UndoToken and return > - if self.__Token == "COMPRESS" or self.__Token == "GUIDED": > + if self._Token == "COMPRESS" or self._Token == "GUIDED": > self.SetFileBufferPos(OldPos) > return False > > - if self.__Token not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > + if self._Token not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, "SUBTYPE_GUID", BINARY_FILE_TYPE_SMM_DEPEX): > - raise Warning("Unknown section type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > - if AlignValue == 'Auto'and (not self.__Token == BINARY_FILE_TYPE_PE32) and (not self.__Token == BINARY_FILE_TYPE_TE): > + raise Warning("Unknown section type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > + if AlignValue == 'Auto'and (not self._Token == BINARY_FILE_TYPE_PE32) and (not self._Token == BINARY_FILE_TYPE_TE): > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) > > # DataSection > - DataSectionObj = DataSection.DataSection() > + DataSectionObj = DataSection() > DataSectionObj.Alignment = AlignValue > - DataSectionObj.SecType = self.__Token > + DataSectionObj.SecType = self._Token > > - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): > - if self.__FileCouldHaveRelocFlag(Obj.FvFileType) and 
self.__SectionCouldHaveRelocFlag(DataSectionObj.SecType): > - if self.__Token == 'RELOCS_STRIPPED': > + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): > + if self._FileCouldHaveRelocFlag(Obj.FvFileType) and self._SectionCouldHaveRelocFlag(DataSectionObj.SecType): > + if self._Token == 'RELOCS_STRIPPED': > DataSectionObj.KeepReloc = False > else: > DataSectionObj.KeepReloc = True > else: > raise Warning("File type %s, section type %s, could not have reloc strip flag%d" % (Obj.FvFileType, DataSectionObj.SecType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > - if self.__IsToken("="): > - if not self.__GetNextToken(): > + if self._IsToken(TAB_EQUAL_SPLIT): > + if not self._GetNextToken(): > raise Warning("expected section file path", self.FileName, self.CurrentLineNumber) > - DataSectionObj.SectFileName = self.__Token > - self.__VerifyFile(DataSectionObj.SectFileName) > + DataSectionObj.SectFileName = self._Token > + self._VerifyFile(DataSectionObj.SectFileName) > else: > - if not self.__GetCglSection(DataSectionObj): > + if not self._GetCglSection(DataSectionObj): > return False > > Obj.SectionList.append(DataSectionObj) > > return True > > - ## __VerifyFile > + ## _VerifyFile > # > # Check if file exists or not: > # If current phase if GenFds, the file must exist; > # If current phase is AutoGen and the file is not in $(OUTPUT_DIRECTORY), the file must exist > # @param FileName: File path to be verified. > # > - def __VerifyFile(self, FileName): > - if FileName.replace('$(WORKSPACE)', '').find('$') != -1: > + def _VerifyFile(self, FileName): > + if FileName.replace(TAB_WORKSPACE, '').find('$') != -1: > return > - if not GlobalData.gAutoGenPhase or not self.__GetMacroValue("OUTPUT_DIRECTORY") in FileName: > + if not GlobalData.gAutoGenPhase or not self._GetMacroValue(TAB_DSC_DEFINES_OUTPUT_DIRECTORY) in FileName: > ErrorCode, ErrorInfo = PathClass(NormPath(FileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() > if ErrorCode != 0: > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > - ## __GetCglSection() method > + ## _GetCglSection() method > # > # Get compressed or GUIDed section for Obj > # > @@ -3049,28 +2977,28 @@ class FdfParser: > # @retval True Successfully find section statement > # @retval False Not able to find section statement > # > - def __GetCglSection(self, Obj, AlignValue = None): > + def _GetCglSection(self, Obj, AlignValue = None): > > - if self.__IsKeyword( "COMPRESS"): > + if self._IsKeyword("COMPRESS"): > type = "PI_STD" > - if self.__IsKeyword("PI_STD") or self.__IsKeyword("PI_NONE"): > - type = self.__Token > + if self._IsKeyword("PI_STD") or self._IsKeyword("PI_NONE"): > + type = self._Token > > - if not self.__IsToken("{"): > + if not self._IsToken("{"): > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > - CompressSectionObj = CompressSection.CompressSection() > + CompressSectionObj = CompressSection() > CompressSectionObj.Alignment = AlignValue > CompressSectionObj.CompType = type > # Recursive sections... 
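One more aside, on the _VerifyFile hunk a little further up: its single condition folds together three cases, so a spelled-out sketch may help. The helper below is illustrative only and does not exist in the tree; in the real code the phase flag and output directory come from GlobalData and the DSC defines:

    def ShouldValidatePath(FileName, InAutoGenPhase, OutputDirectory):
        # A path that still contains an unresolved macro (anything other
        # than $(WORKSPACE)) cannot be checked yet, so it is skipped.
        if FileName.replace('$(WORKSPACE)', '').find('$') != -1:
            return False
        # During AutoGen, build outputs under OUTPUT_DIRECTORY may not
        # have been generated yet, so only paths outside it must exist.
        if InAutoGenPhase and OutputDirectory in FileName:
            return False
        return True

When this sketch would return True, the real method goes on to call PathClass(NormPath(FileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() and reports any failure through EdkLogger.error(), as the hunk shows.
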
> while True: > - IsLeafSection = self.__GetLeafSection(CompressSectionObj) > - IsEncapSection = self.__GetEncapsulationSec(CompressSectionObj) > + IsLeafSection = self._GetLeafSection(CompressSectionObj) > + IsEncapSection = self._GetEncapsulationSec(CompressSectionObj) > if not IsLeafSection and not IsEncapSection: > break > > > - if not self.__IsToken( "}"): > + if not self._IsToken("}"): > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > Obj.SectionList.append(CompressSectionObj) > > @@ -3079,15 +3007,15 @@ class FdfParser: > > return True > > - elif self.__IsKeyword( "GUIDED"): > + elif self._IsKeyword("GUIDED"): > GuidValue = None > - if self.__GetNextGuid(): > - GuidValue = self.__Token > + if self._GetNextGuid(): > + GuidValue = self._Token > > - AttribDict = self.__GetGuidAttrib() > - if not self.__IsToken("{"): > + AttribDict = self._GetGuidAttrib() > + if not self._IsToken("{"): > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > - GuidSectionObj = GuidSection.GuidSection() > + GuidSectionObj = GuidSection() > GuidSectionObj.Alignment = AlignValue > GuidSectionObj.NameGuid = GuidValue > GuidSectionObj.SectionType = "GUIDED" > @@ -3096,12 +3024,12 @@ class FdfParser: > GuidSectionObj.ExtraHeaderSize = AttribDict["EXTRA_HEADER_SIZE"] > # Recursive sections... > while True: > - IsLeafSection = self.__GetLeafSection(GuidSectionObj) > - IsEncapSection = self.__GetEncapsulationSec(GuidSectionObj) > + IsLeafSection = self._GetLeafSection(GuidSectionObj) > + IsEncapSection = self._GetEncapsulationSec(GuidSectionObj) > if not IsLeafSection and not IsEncapSection: > break > > - if not self.__IsToken( "}"): > + if not self._IsToken("}"): > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > Obj.SectionList.append(GuidSectionObj) > > @@ -3109,44 +3037,43 @@ class FdfParser: > > return False > > - ## __GetGuidAttri() method > + ## _GetGuidAttri() method > # > # Get attributes for GUID section > # > # @param self The object pointer > # @retval AttribDict Dictionary of key-value pair of section attributes > # > - def __GetGuidAttrib(self): > - > + def _GetGuidAttrib(self): > AttribDict = {} > AttribDict["PROCESSING_REQUIRED"] = "NONE" > AttribDict["AUTH_STATUS_VALID"] = "NONE" > AttribDict["EXTRA_HEADER_SIZE"] = -1 > - while self.__IsKeyword("PROCESSING_REQUIRED") or self.__IsKeyword("AUTH_STATUS_VALID") \ > - or self.__IsKeyword("EXTRA_HEADER_SIZE"): > - AttribKey = self.__Token > + while self._IsKeyword("PROCESSING_REQUIRED") or self._IsKeyword("AUTH_STATUS_VALID") \ > + or self._IsKeyword("EXTRA_HEADER_SIZE"): > + AttribKey = self._Token > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected TRUE(1)/FALSE(0)/Number", self.FileName, self.CurrentLineNumber) > elif AttribKey == "EXTRA_HEADER_SIZE": > Base = 10 > - if self.__Token[0:2].upper() == "0X": > + if self._Token[0:2].upper() == "0X": > Base = 16 > try: > - AttribDict[AttribKey] = int(self.__Token, Base) > + AttribDict[AttribKey] = int(self._Token, Base) > continue > except ValueError: > raise Warning("expected Number", self.FileName, self.CurrentLineNumber) > - elif self.__Token.upper() not in ("TRUE", "FALSE", "1", "0"): > + elif self._Token.upper() not in ("TRUE", "FALSE", "1", "0"): > raise Warning("expected TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) > - 
AttribDict[AttribKey] = self.__Token > + AttribDict[AttribKey] = self._Token > > return AttribDict > > - ## __GetEncapsulationSec() method > + ## _GetEncapsulationSec() method > # > # Get encapsulation section for FILE > # > @@ -3155,91 +3082,89 @@ class FdfParser: > # @retval True Successfully find section statement > # @retval False Not able to find section statement > # > - def __GetEncapsulationSec(self, FfsFileObj): > - > + def _GetEncapsulationSec(self, FfsFileObj): > OldPos = self.GetFileBufferPos() > - if not self.__IsKeyword( "SECTION"): > + if not self._IsKeyword("SECTION"): > if len(FfsFileObj.SectionList) == 0: > raise Warning("expected SECTION", self.FileName, self.CurrentLineNumber) > else: > return False > > AlignValue = None > - if self.__GetAlignment(): > - if self.__Token not in ("8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > - AlignValue = self.__Token > + if self._GetAlignment(): > + if self._Token not in ALIGNMENT_NOAUTO: > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > + AlignValue = self._Token > > - if not self.__GetCglSection(FfsFileObj, AlignValue): > + if not self._GetCglSection(FfsFileObj, AlignValue): > self.SetFileBufferPos(OldPos) > return False > else: > return True > > - def __GetFmp(self): > - if not self.__GetNextToken(): > + def _GetFmp(self): > + if not self._GetNextToken(): > return False > - S = self.__Token.upper() > - if S.startswith("[") and not S.startswith("[FMPPAYLOAD."): > + S = self._Token.upper() > + if S.startswith(TAB_SECTION_START) and not S.startswith("[FMPPAYLOAD."): > self.SectionParser(S) > - self.__UndoToken() > + self._UndoToken() > return False > > - self.__UndoToken() > - self.__SkipToToken("[FMPPAYLOAD.", True) > - FmpUiName = self.__GetUiName().upper() > + self._UndoToken() > + self._SkipToToken("[FMPPAYLOAD.", True) > + FmpUiName = self._GetUiName().upper() > if FmpUiName in self.Profile.FmpPayloadDict: > raise Warning("Duplicated FMP UI name found: %s" % FmpUiName, self.FileName, self.CurrentLineNumber) > > - FmpData = CapsuleData.CapsulePayload() > + FmpData = CapsulePayload() > FmpData.UiName = FmpUiName > > - if not self.__IsToken( "]"): > + if not self._IsToken(TAB_SECTION_END): > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("The FMP payload section is empty!", self.FileName, self.CurrentLineNumber) > FmpKeyList = ['IMAGE_HEADER_INIT_VERSION', 'IMAGE_TYPE_ID', 'IMAGE_INDEX', 'HARDWARE_INSTANCE', 'CERTIFICATE_GUID', 'MONOTONIC_COUNT'] > - while self.__Token in FmpKeyList: > - Name = self.__Token > + while self._Token in FmpKeyList: > + Name = self._Token > FmpKeyList.remove(Name) > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > if Name == 'IMAGE_TYPE_ID': > - if not self.__GetNextGuid(): > + if not self._GetNextGuid(): > raise Warning("expected GUID value for IMAGE_TYPE_ID.", self.FileName, self.CurrentLineNumber) > - FmpData.ImageTypeId = self.__Token > + FmpData.ImageTypeId = self._Token > elif Name == 'CERTIFICATE_GUID': > - if not self.__GetNextGuid(): > + if not self._GetNextGuid(): > raise Warning("expected GUID value for CERTIFICATE_GUID.", self.FileName, self.CurrentLineNumber) > - 
FmpData.Certificate_Guid = self.__Token > - if uuid.UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_RSA2048_SHA256_GUID and uuid.UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_PKCS7_GUID: > + FmpData.Certificate_Guid = self._Token > + if UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_RSA2048_SHA256_GUID and UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_PKCS7_GUID: > raise Warning("Only support EFI_CERT_TYPE_RSA2048_SHA256_GUID or EFI_CERT_TYPE_PKCS7_GUID for CERTIFICATE_GUID.", self.FileName, self.CurrentLineNumber) > else: > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected value of %s" % Name, self.FileName, self.CurrentLineNumber) > - Value = self.__Token > + Value = self._Token > if Name == 'IMAGE_HEADER_INIT_VERSION': > - if FdfParser.__Verify(Name, Value, 'UINT8'): > + if FdfParser._Verify(Name, Value, 'UINT8'): > FmpData.Version = Value > elif Name == 'IMAGE_INDEX': > - if FdfParser.__Verify(Name, Value, 'UINT8'): > + if FdfParser._Verify(Name, Value, 'UINT8'): > FmpData.ImageIndex = Value > elif Name == 'HARDWARE_INSTANCE': > - if FdfParser.__Verify(Name, Value, 'UINT8'): > + if FdfParser._Verify(Name, Value, 'UINT8'): > FmpData.HardwareInstance = Value > elif Name == 'MONOTONIC_COUNT': > - if FdfParser.__Verify(Name, Value, 'UINT64'): > + if FdfParser._Verify(Name, Value, 'UINT64'): > FmpData.MonotonicCount = Value > if FmpData.MonotonicCount.upper().startswith('0X'): > FmpData.MonotonicCount = (long)(FmpData.MonotonicCount, 16) > else: > FmpData.MonotonicCount = (long)(FmpData.MonotonicCount) > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > break > else: > - self.__UndoToken() > + self._UndoToken() > > if (FmpData.MonotonicCount and not FmpData.Certificate_Guid) or (not FmpData.MonotonicCount and FmpData.Certificate_Guid): > EdkLogger.error("FdfParser", FORMAT_INVALID, "CERTIFICATE_GUID and MONOTONIC_COUNT must be work as a pair.") > @@ -3248,7 +3173,7 @@ class FdfParser: > if FmpKeyList and 'IMAGE_TYPE_ID' in FmpKeyList: > raise Warning("Missing keywords IMAGE_TYPE_ID in FMP payload section.", self.FileName, self.CurrentLineNumber) > # get the Image file and Vendor code file > - self.__GetFMPCapsuleData(FmpData) > + self._GetFMPCapsuleData(FmpData) > if not FmpData.ImageFile: > raise Warning("Missing image file in FMP payload section.", self.FileName, self.CurrentLineNumber) > # check whether more than one Vendor code file > @@ -3257,7 +3182,7 @@ class FdfParser: > self.Profile.FmpPayloadDict[FmpUiName] = FmpData > return True > > - ## __GetCapsule() method > + ## _GetCapsule() method > # > # Get capsule section contents and store its data into capsule list of self.Profile > # > @@ -3265,90 +3190,89 @@ class FdfParser: > # @retval True Successfully find a capsule > # @retval False Not able to find a capsule > # > - def __GetCapsule(self): > - > - if not self.__GetNextToken(): > + def _GetCapsule(self): > + if not self._GetNextToken(): > return False > > - S = self.__Token.upper() > - if S.startswith("[") and not S.startswith("[CAPSULE."): > + S = self._Token.upper() > + if S.startswith(TAB_SECTION_START) and not S.startswith("[CAPSULE."): > self.SectionParser(S) > - self.__UndoToken() > + self._UndoToken() > return False > > - self.__UndoToken() > - if not self.__IsToken("[CAPSULE.", True): > + self._UndoToken() > + if not self._IsToken("[CAPSULE.", True): > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > - # % 
(self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) > raise Warning("expected [Capsule.]", self.FileName, self.CurrentLineNumber) > > - CapsuleObj = Capsule.Capsule() > + CapsuleObj = Capsule() > > - CapsuleName = self.__GetUiName() > + CapsuleName = self._GetUiName() > if not CapsuleName: > raise Warning("expected capsule name", self.FileName, self.CurrentLineNumber) > > CapsuleObj.UiCapsuleName = CapsuleName.upper() > > - if not self.__IsToken( "]"): > + if not self._IsToken(TAB_SECTION_END): > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > - if self.__IsKeyword("CREATE_FILE"): > - if not self.__IsToken( "="): > + if self._IsKeyword("CREATE_FILE"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected file name", self.FileName, self.CurrentLineNumber) > > - CapsuleObj.CreateFile = self.__Token > + CapsuleObj.CreateFile = self._Token > > - self.__GetCapsuleStatements(CapsuleObj) > + self._GetCapsuleStatements(CapsuleObj) > self.Profile.CapsuleDict[CapsuleObj.UiCapsuleName] = CapsuleObj > return True > > - ## __GetCapsuleStatements() method > + ## _GetCapsuleStatements() method > # > # Get statements for capsule > # > # @param self The object pointer > # @param Obj for whom statements are got > # > - def __GetCapsuleStatements(self, Obj): > - self.__GetCapsuleTokens(Obj) > - self.__GetDefineStatements(Obj) > - self.__GetSetStatements(Obj) > - self.__GetCapsuleData(Obj) > + def _GetCapsuleStatements(self, Obj): > + self._GetCapsuleTokens(Obj) > + self._GetDefineStatements(Obj) > + self._GetSetStatements(Obj) > + self._GetCapsuleData(Obj) > > - ## __GetCapsuleTokens() method > + ## _GetCapsuleTokens() method > # > # Get token statements for capsule > # > # @param self The object pointer > # @param Obj for whom token statements are got > # > - def __GetCapsuleTokens(self, Obj): > - if not self.__GetNextToken(): > + def _GetCapsuleTokens(self, Obj): > + if not self._GetNextToken(): > return False > - while self.__Token in ("CAPSULE_GUID", "CAPSULE_HEADER_SIZE", "CAPSULE_FLAGS", "OEM_CAPSULE_FLAGS", "CAPSULE_HEADER_INIT_VERSION"): > - Name = self.__Token.strip() > - if not self.__IsToken("="): > + while self._Token in ("CAPSULE_GUID", "CAPSULE_HEADER_SIZE", "CAPSULE_FLAGS", "OEM_CAPSULE_FLAGS", "CAPSULE_HEADER_INIT_VERSION"): > + Name = self._Token.strip() > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected value", self.FileName, self.CurrentLineNumber) > if Name == 'CAPSULE_FLAGS': > - if not self.__Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): > + if not self._Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): > raise Warning("expected PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, self.CurrentLineNumber) > - Value = self.__Token.strip() > - while self.__IsToken(","): > - Value += ',' > - if not self.__GetNextToken(): > + Value = self._Token.strip() > + while self._IsToken(TAB_COMMA_SPLIT): > + Value += TAB_COMMA_SPLIT > + if 
not self._GetNextToken(): > raise Warning("expected value", self.FileName, self.CurrentLineNumber) > - if not self.__Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): > + if not self._Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): > raise Warning("expected PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, self.CurrentLineNumber) > - Value += self.__Token.strip() > + Value += self._Token.strip() > elif Name == 'OEM_CAPSULE_FLAGS': > - Value = self.__Token.strip() > + Value = self._Token.strip() > if not Value.upper().startswith('0X'): > raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) > try: > @@ -3357,51 +3281,49 @@ class FdfParser: > raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) > if not 0x0000 <= Value <= 0xFFFF: > raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) > - Value = self.__Token.strip() > + Value = self._Token.strip() > else: > - Value = self.__Token.strip() > + Value = self._Token.strip() > Obj.TokensDict[Name] = Value > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > return False > - self.__UndoToken() > + self._UndoToken() > > - ## __GetCapsuleData() method > + ## _GetCapsuleData() method > # > # Get capsule data for capsule > # > # @param self The object pointer > # @param Obj for whom capsule data are got > # > - def __GetCapsuleData(self, Obj): > - > + def _GetCapsuleData(self, Obj): > while True: > - IsInf = self.__GetInfStatement(Obj, True) > - IsFile = self.__GetFileStatement(Obj, True) > - IsFv = self.__GetFvStatement(Obj) > - IsFd = self.__GetFdStatement(Obj) > - IsAnyFile = self.__GetAnyFileStatement(Obj) > - IsAfile = self.__GetAfileStatement(Obj) > - IsFmp = self.__GetFmpStatement(Obj) > + IsInf = self._GetInfStatement(Obj, True) > + IsFile = self._GetFileStatement(Obj, True) > + IsFv = self._GetFvStatement(Obj) > + IsFd = self._GetFdStatement(Obj) > + IsAnyFile = self._GetAnyFileStatement(Obj) > + IsAfile = self._GetAfileStatement(Obj) > + IsFmp = self._GetFmpStatement(Obj) > if not (IsInf or IsFile or IsFv or IsFd or IsAnyFile or IsAfile or IsFmp): > break > > - ## __GetFMPCapsuleData() method > + ## _GetFMPCapsuleData() method > # > # Get capsule data for FMP capsule > # > # @param self The object pointer > # @param Obj for whom capsule data are got > # > - def __GetFMPCapsuleData(self, Obj): > - > + def _GetFMPCapsuleData(self, Obj): > while True: > - IsFv = self.__GetFvStatement(Obj, True) > - IsFd = self.__GetFdStatement(Obj, True) > - IsAnyFile = self.__GetAnyFileStatement(Obj, True) > + IsFv = self._GetFvStatement(Obj, True) > + IsFd = self._GetFdStatement(Obj, True) > + IsAnyFile = self._GetAnyFileStatement(Obj, True) > if not (IsFv or IsFd or IsAnyFile): > break > > - ## __GetFvStatement() method > + ## _GetFvStatement() method > # > # Get FV for capsule > # > @@ -3410,32 +3332,31 @@ class FdfParser: > # @retval True Successfully find a FV statement > # @retval False Not able to find a FV statement > # > - def __GetFvStatement(self, CapsuleObj, FMPCapsule = False): > - > - if not self.__IsKeyword(BINARY_FILE_TYPE_FV): > + def _GetFvStatement(self, CapsuleObj, FMPCapsule = False): > + if not self._IsKeyword(BINARY_FILE_TYPE_FV): > return False > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not 
self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) > > - if self.__Token.upper() not in self.Profile.FvDict: > + if self._Token.upper() not in self.Profile.FvDict: > raise Warning("FV name does not exist", self.FileName, self.CurrentLineNumber) > > - CapsuleFv = CapsuleData.CapsuleFv() > - CapsuleFv.FvName = self.__Token > + myCapsuleFv = CapsuleFv() > + myCapsuleFv.FvName = self._Token > if FMPCapsule: > if not CapsuleObj.ImageFile: > - CapsuleObj.ImageFile.append(CapsuleFv) > + CapsuleObj.ImageFile.append(myCapsuleFv) > else: > - CapsuleObj.VendorCodeFile.append(CapsuleFv) > + CapsuleObj.VendorCodeFile.append(myCapsuleFv) > else: > - CapsuleObj.CapsuleDataList.append(CapsuleFv) > + CapsuleObj.CapsuleDataList.append(myCapsuleFv) > return True > > - ## __GetFdStatement() method > + ## _GetFdStatement() method > # > # Get FD for capsule > # > @@ -3444,74 +3365,73 @@ class FdfParser: > # @retval True Successfully find a FD statement > # @retval False Not able to find a FD statement > # > - def __GetFdStatement(self, CapsuleObj, FMPCapsule = False): > - > - if not self.__IsKeyword("FD"): > + def _GetFdStatement(self, CapsuleObj, FMPCapsule = False): > + if not self._IsKeyword("FD"): > return False > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected FD name", self.FileName, self.CurrentLineNumber) > > - if self.__Token.upper() not in self.Profile.FdDict: > + if self._Token.upper() not in self.Profile.FdDict: > raise Warning("FD name does not exist", self.FileName, self.CurrentLineNumber) > > - CapsuleFd = CapsuleData.CapsuleFd() > - CapsuleFd.FdName = self.__Token > + myCapsuleFd = CapsuleFd() > + myCapsuleFd.FdName = self._Token > if FMPCapsule: > if not CapsuleObj.ImageFile: > - CapsuleObj.ImageFile.append(CapsuleFd) > + CapsuleObj.ImageFile.append(myCapsuleFd) > else: > - CapsuleObj.VendorCodeFile.append(CapsuleFd) > + CapsuleObj.VendorCodeFile.append(myCapsuleFd) > else: > - CapsuleObj.CapsuleDataList.append(CapsuleFd) > + CapsuleObj.CapsuleDataList.append(myCapsuleFd) > return True > > - def __GetFmpStatement(self, CapsuleObj): > - if not self.__IsKeyword("FMP_PAYLOAD"): > - if not self.__IsKeyword("FMP"): > + def _GetFmpStatement(self, CapsuleObj): > + if not self._IsKeyword("FMP_PAYLOAD"): > + if not self._IsKeyword("FMP"): > return False > > - if not self.__IsKeyword("PAYLOAD"): > - self.__UndoToken() > + if not self._IsKeyword("PAYLOAD"): > + self._UndoToken() > return False > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected payload name after FMP_PAYLOAD =", self.FileName, self.CurrentLineNumber) > - Payload = self.__Token.upper() > + Payload = self._Token.upper() > if Payload not in self.Profile.FmpPayloadDict: > - raise Warning("This FMP Payload does not exist: %s" % self.__Token, self.FileName, self.CurrentLineNumber) > + raise Warning("This FMP Payload does not exist: %s" % self._Token, self.FileName, self.CurrentLineNumber) > CapsuleObj.FmpPayloadList.append(self.Profile.FmpPayloadDict[Payload]) > return True > > - def __ParseRawFileStatement(self): > - if not self.__IsKeyword("FILE"): > + def _ParseRawFileStatement(self): > 
+ if not self._IsKeyword("FILE"): > return None > > - if not self.__IsKeyword("DATA"): > - self.__UndoToken() > + if not self._IsKeyword("DATA"): > + self._UndoToken() > return None > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected File name", self.FileName, self.CurrentLineNumber) > > - AnyFileName = self.__Token > - self.__VerifyFile(AnyFileName) > + AnyFileName = self._Token > + self._VerifyFile(AnyFileName) > > if not os.path.isabs(AnyFileName): > AnyFileName = mws.join(GenFdsGlobalVariable.WorkSpaceDir, AnyFileName) > > return AnyFileName > > - ## __GetAnyFileStatement() method > + ## _GetAnyFileStatement() method > # > # Get AnyFile for capsule > # > @@ -3520,23 +3440,23 @@ class FdfParser: > # @retval True Successfully find a Anyfile statement > # @retval False Not able to find a AnyFile statement > # > - def __GetAnyFileStatement(self, CapsuleObj, FMPCapsule = False): > - AnyFileName = self.__ParseRawFileStatement() > + def _GetAnyFileStatement(self, CapsuleObj, FMPCapsule = False): > + AnyFileName = self._ParseRawFileStatement() > if not AnyFileName: > return False > > - CapsuleAnyFile = CapsuleData.CapsuleAnyFile() > - CapsuleAnyFile.FileName = AnyFileName > + myCapsuleAnyFile = CapsuleAnyFile() > + myCapsuleAnyFile.FileName = AnyFileName > if FMPCapsule: > if not CapsuleObj.ImageFile: > - CapsuleObj.ImageFile.append(CapsuleAnyFile) > + CapsuleObj.ImageFile.append(myCapsuleAnyFile) > else: > - CapsuleObj.VendorCodeFile.append(CapsuleAnyFile) > + CapsuleObj.VendorCodeFile.append(myCapsuleAnyFile) > else: > - CapsuleObj.CapsuleDataList.append(CapsuleAnyFile) > + CapsuleObj.CapsuleDataList.append(myCapsuleAnyFile) > return True > > - ## __GetAfileStatement() method > + ## _GetAfileStatement() method > # > # Get Afile for capsule > # > @@ -3545,18 +3465,17 @@ class FdfParser: > # @retval True Successfully find a Afile statement > # @retval False Not able to find a Afile statement > # > - def __GetAfileStatement(self, CapsuleObj): > - > - if not self.__IsKeyword("APPEND"): > + def _GetAfileStatement(self, CapsuleObj): > + if not self._IsKeyword("APPEND"): > return False > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected Afile name", self.FileName, self.CurrentLineNumber) > > - AfileName = self.__Token > + AfileName = self._Token > AfileBaseName = os.path.basename(AfileName) > > if os.path.splitext(AfileBaseName)[1] not in [".bin", ".BIN", ".Bin", ".dat", ".DAT", ".Dat", ".data", ".DATA", ".Data"]: > @@ -3565,19 +3484,19 @@ class FdfParser: > > if not os.path.isabs(AfileName): > AfileName = GenFdsGlobalVariable.ReplaceWorkspaceMacro(AfileName) > - self.__VerifyFile(AfileName) > + self._VerifyFile(AfileName) > else: > if not os.path.exists(AfileName): > raise Warning('%s does not exist' % AfileName, self.FileName, self.CurrentLineNumber) > else: > pass > > - CapsuleAfile = CapsuleData.CapsuleAfile() > - CapsuleAfile.FileName = AfileName > - CapsuleObj.CapsuleDataList.append(CapsuleAfile) > + myCapsuleAfile = CapsuleAfile() > + myCapsuleAfile.FileName = AfileName > + CapsuleObj.CapsuleDataList.append(myCapsuleAfile) > return True > > - ## __GetRule() method > + ## _GetRule() method > # > # Get Rule section 
contents and store its data into rule list of self.Profile > # > @@ -3585,143 +3504,140 @@ class FdfParser: > # @retval True Successfully find a Rule > # @retval False Not able to find a Rule > # > - def __GetRule(self): > - > - if not self.__GetNextToken(): > + def _GetRule(self): > + if not self._GetNextToken(): > return False > > - S = self.__Token.upper() > - if S.startswith("[") and not S.startswith("[RULE."): > + S = self._Token.upper() > + if S.startswith(TAB_SECTION_START) and not S.startswith("[RULE."): > self.SectionParser(S) > - self.__UndoToken() > + self._UndoToken() > return False > - self.__UndoToken() > - if not self.__IsToken("[Rule.", True): > + self._UndoToken() > + if not self._IsToken("[Rule.", True): > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) > raise Warning("expected [Rule.]", self.FileName, self.CurrentLineNumber) > > - if not self.__SkipToToken("."): > + if not self._SkipToToken(TAB_SPLIT): > raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) > > - Arch = self.__SkippedChars.rstrip(".") > + Arch = self._SkippedChars.rstrip(TAB_SPLIT) > if Arch.upper() not in ARCH_SET_FULL: > raise Warning("Unknown Arch '%s'" % Arch, self.FileName, self.CurrentLineNumber) > > - ModuleType = self.__GetModuleType() > + ModuleType = self._GetModuleType() > > TemplateName = "" > - if self.__IsToken("."): > - if not self.__GetNextWord(): > + if self._IsToken(TAB_SPLIT): > + if not self._GetNextWord(): > raise Warning("expected template name", self.FileName, self.CurrentLineNumber) > - TemplateName = self.__Token > + TemplateName = self._Token > > - if not self.__IsToken( "]"): > + if not self._IsToken(TAB_SECTION_END): > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > - RuleObj = self.__GetRuleFileStatements() > + RuleObj = self._GetRuleFileStatements() > RuleObj.Arch = Arch.upper() > RuleObj.ModuleType = ModuleType > RuleObj.TemplateName = TemplateName > - if TemplateName == '' : > + if TemplateName == '': > self.Profile.RuleDict['RULE' + \ > - '.' + \ > + TAB_SPLIT + \ > Arch.upper() + \ > - '.' + \ > + TAB_SPLIT + \ > ModuleType.upper() ] = RuleObj > - else : > + else: > self.Profile.RuleDict['RULE' + \ > - '.' + \ > + TAB_SPLIT + \ > Arch.upper() + \ > - '.' + \ > + TAB_SPLIT + \ > ModuleType.upper() + \ > - '.' 
+ \ > + TAB_SPLIT + \ > TemplateName.upper() ] = RuleObj > return True > > - ## __GetModuleType() method > + ## _GetModuleType() method > # > # Return the module type > # > # @param self The object pointer > # @retval string module type > # > - def __GetModuleType(self): > - > - if not self.__GetNextWord(): > + def _GetModuleType(self): > + if not self._GetNextWord(): > raise Warning("expected Module type", self.FileName, self.CurrentLineNumber) > - if self.__Token.upper() not in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, SUP_MODULE_DXE_CORE, \ > + if self._Token.upper() not in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, SUP_MODULE_DXE_CORE, \ > SUP_MODULE_DXE_DRIVER, SUP_MODULE_DXE_SAL_DRIVER, \ > SUP_MODULE_DXE_SMM_DRIVER, SUP_MODULE_DXE_RUNTIME_DRIVER, \ > - SUP_MODULE_UEFI_DRIVER, SUP_MODULE_UEFI_APPLICATION, SUP_MODULE_USER_DEFINED, "DEFAULT", SUP_MODULE_BASE, \ > + SUP_MODULE_UEFI_DRIVER, SUP_MODULE_UEFI_APPLICATION, SUP_MODULE_USER_DEFINED, TAB_DEFAULT, SUP_MODULE_BASE, \ > EDK_COMPONENT_TYPE_SECURITY_CORE, EDK_COMPONENT_TYPE_COMBINED_PEIM_DRIVER, EDK_COMPONENT_TYPE_PIC_PEIM, EDK_COMPONENT_TYPE_RELOCATABLE_PEIM, \ > "PE32_PEIM", EDK_COMPONENT_TYPE_BS_DRIVER, EDK_COMPONENT_TYPE_RT_DRIVER, EDK_COMPONENT_TYPE_SAL_RT_DRIVER, EDK_COMPONENT_TYPE_APPLICATION, "ACPITABLE", SUP_MODULE_SMM_CORE, SUP_MODULE_MM_STANDALONE, SUP_MODULE_MM_CORE_STANDALONE): > - raise Warning("Unknown Module type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > - return self.__Token > + raise Warning("Unknown Module type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > + return self._Token > > - ## __GetFileExtension() method > + ## _GetFileExtension() method > # > # Return the file extension > # > # @param self The object pointer > # @retval string file name extension > # > - def __GetFileExtension(self): > - if not self.__IsToken("."): > + def _GetFileExtension(self): > + if not self._IsToken(TAB_SPLIT): > raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) > > Ext = "" > - if self.__GetNextToken(): > - if FileExtensionPattern.match(self.__Token): > - Ext = self.__Token > - return '.' 
+ Ext > + if self._GetNextToken(): > + if FileExtensionPattern.match(self._Token): > + Ext = self._Token > + return TAB_SPLIT + Ext > else: > - raise Warning("Unknown file extension '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > + raise Warning("Unknown file extension '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > else: > raise Warning("expected file extension", self.FileName, self.CurrentLineNumber) > > - ## __GetRuleFileStatement() method > + ## _GetRuleFileStatement() method > # > # Get rule contents > # > # @param self The object pointer > # @retval Rule Rule object > # > - def __GetRuleFileStatements(self): > - > - if not self.__IsKeyword("FILE"): > + def _GetRuleFileStatements(self): > + if not self._IsKeyword("FILE"): > raise Warning("expected FILE", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextWord(): > + if not self._GetNextWord(): > raise Warning("expected FFS type", self.FileName, self.CurrentLineNumber) > > - Type = self.__Token.strip().upper() > + Type = self._Token.strip().upper() > if Type not in ("RAW", "FREEFORM", SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM,\ > "PEI_DXE_COMBO", "DRIVER", SUP_MODULE_DXE_CORE, EDK_COMPONENT_TYPE_APPLICATION, "FV_IMAGE", "SMM", SUP_MODULE_SMM_CORE, SUP_MODULE_MM_STANDALONE, SUP_MODULE_MM_CORE_STANDALONE): > - raise Warning("Unknown FV type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > + raise Warning("Unknown FV type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__IsKeyword("$(NAMED_GUID)"): > - if not self.__GetNextWord(): > + if not self._IsKeyword("$(NAMED_GUID)"): > + if not self._GetNextWord(): > raise Warning("expected $(NAMED_GUID)", self.FileName, self.CurrentLineNumber) > - if self.__Token == 'PCD': > - if not self.__IsToken( "("): > + if self._Token == 'PCD': > + if not self._IsToken("("): > raise Warning("expected '('", self.FileName, self.CurrentLineNumber) > - PcdPair = self.__GetNextPcdSettings() > - if not self.__IsToken( ")"): > + PcdPair = self._GetNextPcdSettings() > + if not self._IsToken(")"): > raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) > - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' > + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' > > - NameGuid = self.__Token > + NameGuid = self._Token > > KeepReloc = None > - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): > - if self.__FileCouldHaveRelocFlag(Type): > - if self.__Token == 'RELOCS_STRIPPED': > + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): > + if self._FileCouldHaveRelocFlag(Type): > + if self._Token == 'RELOCS_STRIPPED': > KeepReloc = False > else: > KeepReloc = True > @@ -3729,111 +3645,109 @@ class FdfParser: > raise Warning("File type %s could not have reloc strip flag%d" % (Type, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > KeyStringList = [] > - if self.__GetNextToken(): > - if TokenFindPattern.match(self.__Token): > - KeyStringList.append(self.__Token) > - if self.__IsToken(","): > - while self.__GetNextToken(): > - if not TokenFindPattern.match(self.__Token): > + if self._GetNextToken(): > + if TokenFindPattern.match(self._Token): > + KeyStringList.append(self._Token) > + if self._IsToken(TAB_COMMA_SPLIT): > + while self._GetNextToken(): > + if not 
TokenFindPattern.match(self._Token): > raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) > - KeyStringList.append(self.__Token) > + KeyStringList.append(self._Token) > > - if not self.__IsToken(","): > + if not self._IsToken(TAB_COMMA_SPLIT): > break > > else: > - self.__UndoToken() > + self._UndoToken() > > > Fixed = False > - if self.__IsKeyword("Fixed", True): > + if self._IsKeyword("Fixed", True): > Fixed = True > > CheckSum = False > - if self.__IsKeyword("CheckSum", True): > + if self._IsKeyword("CheckSum", True): > CheckSum = True > > AlignValue = "" > - if self.__GetAlignment(): > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > + if self._GetAlignment(): > + if self._Token not in ALIGNMENTS: > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > #For FFS, Auto is default option same to "" > - if not self.__Token == "Auto": > - AlignValue = self.__Token > + if not self._Token == "Auto": > + AlignValue = self._Token > > - if self.__IsToken("{"): > + if self._IsToken("{"): > # Complex file rule expected > - Rule = RuleComplexFile.RuleComplexFile() > - Rule.FvFileType = Type > - Rule.NameGuid = NameGuid > - Rule.Alignment = AlignValue > - Rule.CheckSum = CheckSum > - Rule.Fixed = Fixed > - Rule.KeyStringList = KeyStringList > + NewRule = RuleComplexFile() > + NewRule.FvFileType = Type > + NewRule.NameGuid = NameGuid > + NewRule.Alignment = AlignValue > + NewRule.CheckSum = CheckSum > + NewRule.Fixed = Fixed > + NewRule.KeyStringList = KeyStringList > if KeepReloc is not None: > - Rule.KeepReloc = KeepReloc > + NewRule.KeepReloc = KeepReloc > > while True: > - IsEncapsulate = self.__GetRuleEncapsulationSection(Rule) > - IsLeaf = self.__GetEfiSection(Rule) > + IsEncapsulate = self._GetRuleEncapsulationSection(NewRule) > + IsLeaf = self._GetEfiSection(NewRule) > if not IsEncapsulate and not IsLeaf: > break > > - if not self.__IsToken("}"): > + if not self._IsToken("}"): > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > - return Rule > + return NewRule > > else: > # Simple file rule expected > - if not self.__GetNextWord(): > + if not self._GetNextWord(): > raise Warning("expected leaf section type", self.FileName, self.CurrentLineNumber) > > - SectionName = self.__Token > + SectionName = self._Token > > if SectionName not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > BINARY_FILE_TYPE_UI, BINARY_FILE_TYPE_PEI_DEPEX, "VERSION", "SUBTYPE_GUID", BINARY_FILE_TYPE_SMM_DEPEX): > raise Warning("Unknown leaf section name '%s'" % SectionName, self.FileName, self.CurrentLineNumber) > > > - if self.__IsKeyword("Fixed", True): > + if self._IsKeyword("Fixed", True): > Fixed = True > > - if self.__IsKeyword("CheckSum", True): > + if self._IsKeyword("CheckSum", True): > CheckSum = True > > SectAlignment = "" > - if self.__GetAlignment(): > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > - if self.__Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == 
BINARY_FILE_TYPE_TE): > + if self._GetAlignment(): > + if self._Token not in ALIGNMENTS: > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > + if self._Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == BINARY_FILE_TYPE_TE): > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) > - SectAlignment = self.__Token > + SectAlignment = self._Token > > Ext = None > - if self.__IsToken('|'): > - Ext = self.__GetFileExtension() > - elif not self.__GetNextToken(): > + if self._IsToken(TAB_VALUE_SPLIT): > + Ext = self._GetFileExtension() > + elif not self._GetNextToken(): > raise Warning("expected File name", self.FileName, self.CurrentLineNumber) > > - Rule = RuleSimpleFile.RuleSimpleFile() > - Rule.SectionType = SectionName > - Rule.FvFileType = Type > - Rule.NameGuid = NameGuid > - Rule.Alignment = AlignValue > - Rule.SectAlignment = SectAlignment > - Rule.CheckSum = CheckSum > - Rule.Fixed = Fixed > - Rule.KeyStringList = KeyStringList > + NewRule = RuleSimpleFile() > + NewRule.SectionType = SectionName > + NewRule.FvFileType = Type > + NewRule.NameGuid = NameGuid > + NewRule.Alignment = AlignValue > + NewRule.SectAlignment = SectAlignment > + NewRule.CheckSum = CheckSum > + NewRule.Fixed = Fixed > + NewRule.KeyStringList = KeyStringList > if KeepReloc is not None: > - Rule.KeepReloc = KeepReloc > - Rule.FileExtension = Ext > - Rule.FileName = self.__Token > - return Rule > + NewRule.KeepReloc = KeepReloc > + NewRule.FileExtension = Ext > + NewRule.FileName = self._Token > + return NewRule > > - ## __GetEfiSection() method > + ## _GetEfiSection() method > # > # Get section list for Rule > # > @@ -3842,127 +3756,124 @@ class FdfParser: > # @retval True Successfully find section statement > # @retval False Not able to find section statement > # > - def __GetEfiSection(self, Obj): > - > + def _GetEfiSection(self, Obj): > OldPos = self.GetFileBufferPos() > - if not self.__GetNextWord(): > + if not self._GetNextWord(): > return False > - SectionName = self.__Token > + SectionName = self._Token > > if SectionName not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, BINARY_FILE_TYPE_SMM_DEPEX): > - self.__UndoToken() > + self._UndoToken() > return False > > if SectionName == "FV_IMAGE": > - FvImageSectionObj = FvImageSection.FvImageSection() > - if self.__IsKeyword("FV_IMAGE"): > + FvImageSectionObj = FvImageSection() > + if self._IsKeyword("FV_IMAGE"): > pass > - if self.__IsToken( "{"): > - FvObj = Fv.FV() > - self.__GetDefineStatements(FvObj) > - self.__GetBlockStatement(FvObj) > - self.__GetSetStatements(FvObj) > - self.__GetFvAlignment(FvObj) > - self.__GetFvAttributes(FvObj) > - self.__GetAprioriSection(FvObj) > - self.__GetAprioriSection(FvObj) > + if self._IsToken("{"): > + FvObj = FV() > + self._GetDefineStatements(FvObj) > + self._GetBlockStatement(FvObj) > + self._GetSetStatements(FvObj) > + self._GetFvAlignment(FvObj) > + self._GetFvAttributes(FvObj) > + self._GetAprioriSection(FvObj) > + self._GetAprioriSection(FvObj) > > while True: > - IsInf = self.__GetInfStatement(FvObj) > - IsFile = self.__GetFileStatement(FvObj) > + IsInf = self._GetInfStatement(FvObj) > + IsFile = self._GetFileStatement(FvObj) > if not IsInf and not IsFile: > break > > - if not self.__IsToken( "}"): > + if not 
self._IsToken("}"): > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > FvImageSectionObj.Fv = FvObj > FvImageSectionObj.FvName = None > > else: > - if not self.__IsKeyword(BINARY_FILE_TYPE_FV): > - raise Warning("expected BINARY_FILE_TYPE_FV", self.FileName, self.CurrentLineNumber) > - FvImageSectionObj.FvFileType = self.__Token > + if not self._IsKeyword(BINARY_FILE_TYPE_FV): > + raise Warning("expected 'FV'", self.FileName, self.CurrentLineNumber) > + FvImageSectionObj.FvFileType = self._Token > > - if self.__GetAlignment(): > - if self.__Token not in ("8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > - FvImageSectionObj.Alignment = self.__Token > + if self._GetAlignment(): > + if self._Token not in ALIGNMENT_NOAUTO: > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > + FvImageSectionObj.Alignment = self._Token > > - if self.__IsToken('|'): > - FvImageSectionObj.FvFileExtension = self.__GetFileExtension() > - elif self.__GetNextToken(): > - if self.__Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > + if self._IsToken(TAB_VALUE_SPLIT): > + FvImageSectionObj.FvFileExtension = self._GetFileExtension() > + elif self._GetNextToken(): > + if self._Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, BINARY_FILE_TYPE_SMM_DEPEX): > - FvImageSectionObj.FvFileName = self.__Token > + FvImageSectionObj.FvFileName = self._Token > else: > - self.__UndoToken() > + self._UndoToken() > else: > raise Warning("expected FV file name", self.FileName, self.CurrentLineNumber) > > Obj.SectionList.append(FvImageSectionObj) > return True > > - EfiSectionObj = EfiSection.EfiSection() > + EfiSectionObj = EfiSection() > EfiSectionObj.SectionType = SectionName > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected file type", self.FileName, self.CurrentLineNumber) > > - if self.__Token == "STRING": > - if not self.__RuleSectionCouldHaveString(EfiSectionObj.SectionType): > + if self._Token == "STRING": > + if not self._RuleSectionCouldHaveString(EfiSectionObj.SectionType): > raise Warning("%s section could NOT have string data%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken('='): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected Quoted String", self.FileName, self.CurrentLineNumber) > > - if self.__GetStringData(): > - EfiSectionObj.StringData = self.__Token > + if self._GetStringData(): > + EfiSectionObj.StringData = self._Token > > - if self.__IsKeyword("BUILD_NUM"): > - if not self.__RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): > + if self._IsKeyword("BUILD_NUM"): > + if not self._RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): > raise Warning("%s section could NOT have BUILD_NUM%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken("="): > + 
if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected Build number", self.FileName, self.CurrentLineNumber) > - EfiSectionObj.BuildNum = self.__Token > + EfiSectionObj.BuildNum = self._Token > > else: > - EfiSectionObj.FileType = self.__Token > - self.__CheckRuleSectionFileType(EfiSectionObj.SectionType, EfiSectionObj.FileType) > + EfiSectionObj.FileType = self._Token > + self._CheckRuleSectionFileType(EfiSectionObj.SectionType, EfiSectionObj.FileType) > > - if self.__IsKeyword("Optional"): > - if not self.__RuleSectionCouldBeOptional(EfiSectionObj.SectionType): > + if self._IsKeyword("Optional"): > + if not self._RuleSectionCouldBeOptional(EfiSectionObj.SectionType): > raise Warning("%s section could NOT be optional%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > EfiSectionObj.Optional = True > > - if self.__IsKeyword("BUILD_NUM"): > - if not self.__RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): > + if self._IsKeyword("BUILD_NUM"): > + if not self._RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): > raise Warning("%s section could NOT have BUILD_NUM%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected Build number", self.FileName, self.CurrentLineNumber) > - EfiSectionObj.BuildNum = self.__Token > + EfiSectionObj.BuildNum = self._Token > > - if self.__GetAlignment(): > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > - if self.__Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == BINARY_FILE_TYPE_TE): > + if self._GetAlignment(): > + if self._Token not in ALIGNMENTS: > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > + if self._Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == BINARY_FILE_TYPE_TE): > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) > - EfiSectionObj.Alignment = self.__Token > + EfiSectionObj.Alignment = self._Token > > - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): > - if self.__SectionCouldHaveRelocFlag(EfiSectionObj.SectionType): > - if self.__Token == 'RELOCS_STRIPPED': > + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): > + if self._SectionCouldHaveRelocFlag(EfiSectionObj.SectionType): > + if self._Token == 'RELOCS_STRIPPED': > EfiSectionObj.KeepReloc = False > else: > EfiSectionObj.KeepReloc = True > @@ -3972,35 +3883,35 @@ class FdfParser: > raise Warning("Section type %s could not have reloc strip flag" % EfiSectionObj.SectionType, self.FileName, self.CurrentLineNumber) > > > - if self.__IsToken('|'): > - EfiSectionObj.FileExtension = self.__GetFileExtension() > - elif self.__GetNextToken(): > - if self.__Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", 
BINARY_FILE_TYPE_DXE_DEPEX,\ > + if self._IsToken(TAB_VALUE_SPLIT): > + EfiSectionObj.FileExtension = self._GetFileExtension() > + elif self._GetNextToken(): > + if self._Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, BINARY_FILE_TYPE_SMM_DEPEX): > > - if self.__Token.startswith('PCD'): > - self.__UndoToken() > - self.__GetNextWord() > + if self._Token.startswith('PCD'): > + self._UndoToken() > + self._GetNextWord() > > - if self.__Token == 'PCD': > - if not self.__IsToken( "("): > + if self._Token == 'PCD': > + if not self._IsToken("("): > raise Warning("expected '('", self.FileName, self.CurrentLineNumber) > - PcdPair = self.__GetNextPcdSettings() > - if not self.__IsToken( ")"): > + PcdPair = self._GetNextPcdSettings() > + if not self._IsToken(")"): > raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) > - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' > + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' > > - EfiSectionObj.FileName = self.__Token > + EfiSectionObj.FileName = self._Token > > else: > - self.__UndoToken() > + self._UndoToken() > else: > raise Warning("expected section file name", self.FileName, self.CurrentLineNumber) > > Obj.SectionList.append(EfiSectionObj) > return True > > - ## __RuleSectionCouldBeOptional() method > + ## _RuleSectionCouldBeOptional() method > # > # Get whether a section could be optional > # > @@ -4009,13 +3920,13 @@ class FdfParser: > # @retval False section never optional > # > @staticmethod > - def __RuleSectionCouldBeOptional(SectionType): > + def _RuleSectionCouldBeOptional(SectionType): > if SectionType in (BINARY_FILE_TYPE_DXE_DEPEX, BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, "RAW", BINARY_FILE_TYPE_SMM_DEPEX): > return True > else: > return False > > - ## __RuleSectionCouldHaveBuildNum() method > + ## _RuleSectionCouldHaveBuildNum() method > # > # Get whether a section could have build number information > # > @@ -4024,13 +3935,13 @@ class FdfParser: > # @retval False section never have build number information > # > @staticmethod > - def __RuleSectionCouldHaveBuildNum(SectionType): > + def _RuleSectionCouldHaveBuildNum(SectionType): > if SectionType in ("VERSION"): > return True > else: > return False > > - ## __RuleSectionCouldHaveString() method > + ## _RuleSectionCouldHaveString() method > # > # Get whether a section could have string > # > @@ -4039,13 +3950,13 @@ class FdfParser: > # @retval False section never have string > # > @staticmethod > - def __RuleSectionCouldHaveString(SectionType): > + def _RuleSectionCouldHaveString(SectionType): > if SectionType in (BINARY_FILE_TYPE_UI, "VERSION"): > return True > else: > return False > > - ## __CheckRuleSectionFileType() method > + ## _CheckRuleSectionFileType() method > # > # Get whether a section matches a file type > # > @@ -4053,7 +3964,7 @@ class FdfParser: > # @param SectionType The section type to check > # @param FileType The file type to check > # > - def __CheckRuleSectionFileType(self, SectionType, FileType): > + def _CheckRuleSectionFileType(self, SectionType, FileType): > if SectionType == "COMPAT16": > if FileType not in ("COMPAT16", "SEC_COMPAT16"): > raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) > @@ -4085,53 +3996,53 @@ class FdfParser: > if FileType not in (BINARY_FILE_TYPE_PE32, "SEC_GUID"): > 
raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) > > - ## __GetRuleEncapsulationSection() method > + ## _GetRuleEncapsulationSection() method > # > # Get encapsulation section for Rule > # > # @param self The object pointer > - # @param Rule for whom section is got > + # @param theRule for whom section is got > # @retval True Successfully find section statement > # @retval False Not able to find section statement > # > - def __GetRuleEncapsulationSection(self, Rule): > - if self.__IsKeyword( "COMPRESS"): > + def _GetRuleEncapsulationSection(self, theRule): > + if self._IsKeyword("COMPRESS"): > Type = "PI_STD" > - if self.__IsKeyword("PI_STD") or self.__IsKeyword("PI_NONE"): > - Type = self.__Token > + if self._IsKeyword("PI_STD") or self._IsKeyword("PI_NONE"): > + Type = self._Token > > - if not self.__IsToken("{"): > + if not self._IsToken("{"): > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > - CompressSectionObj = CompressSection.CompressSection() > + CompressSectionObj = CompressSection() > > CompressSectionObj.CompType = Type > # Recursive sections... > while True: > - IsEncapsulate = self.__GetRuleEncapsulationSection(CompressSectionObj) > - IsLeaf = self.__GetEfiSection(CompressSectionObj) > + IsEncapsulate = self._GetRuleEncapsulationSection(CompressSectionObj) > + IsLeaf = self._GetEfiSection(CompressSectionObj) > if not IsEncapsulate and not IsLeaf: > break > > - if not self.__IsToken( "}"): > + if not self._IsToken("}"): > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > - Rule.SectionList.append(CompressSectionObj) > + theRule.SectionList.append(CompressSectionObj) > > return True > > - elif self.__IsKeyword( "GUIDED"): > + elif self._IsKeyword("GUIDED"): > GuidValue = None > - if self.__GetNextGuid(): > - GuidValue = self.__Token > + if self._GetNextGuid(): > + GuidValue = self._Token > > - if self.__IsKeyword( "$(NAMED_GUID)"): > - GuidValue = self.__Token > + if self._IsKeyword("$(NAMED_GUID)"): > + GuidValue = self._Token > > - AttribDict = self.__GetGuidAttrib() > + AttribDict = self._GetGuidAttrib() > > - if not self.__IsToken("{"): > + if not self._IsToken("{"): > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > - GuidSectionObj = GuidSection.GuidSection() > + GuidSectionObj = GuidSection() > GuidSectionObj.NameGuid = GuidValue > GuidSectionObj.SectionType = "GUIDED" > GuidSectionObj.ProcessRequired = AttribDict["PROCESSING_REQUIRED"] > @@ -4140,20 +4051,20 @@ class FdfParser: > > # Efi sections... 
> while True: > - IsEncapsulate = self.__GetRuleEncapsulationSection(GuidSectionObj) > - IsLeaf = self.__GetEfiSection(GuidSectionObj) > + IsEncapsulate = self._GetRuleEncapsulationSection(GuidSectionObj) > + IsLeaf = self._GetEfiSection(GuidSectionObj) > if not IsEncapsulate and not IsLeaf: > break > > - if not self.__IsToken( "}"): > + if not self._IsToken("}"): > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > - Rule.SectionList.append(GuidSectionObj) > + theRule.SectionList.append(GuidSectionObj) > > return True > > return False > > - ## __GetVtf() method > + ## _GetVtf() method > # > # Get VTF section contents and store its data into VTF list of self.Profile > # > @@ -4161,69 +4072,69 @@ class FdfParser: > # @retval True Successfully find a VTF > # @retval False Not able to find a VTF > # > - def __GetVtf(self): > - if not self.__GetNextToken(): > + def _GetVtf(self): > + if not self._GetNextToken(): > return False > > - S = self.__Token.upper() > - if S.startswith("[") and not S.startswith("[VTF."): > + S = self._Token.upper() > + if S.startswith(TAB_SECTION_START) and not S.startswith("[VTF."): > self.SectionParser(S) > - self.__UndoToken() > + self._UndoToken() > return False > > - self.__UndoToken() > - if not self.__IsToken("[VTF.", True): > + self._UndoToken() > + if not self._IsToken("[VTF.", True): > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) > raise Warning("expected [VTF.]", self.FileName, self.CurrentLineNumber) > > - if not self.__SkipToToken("."): > + if not self._SkipToToken(TAB_SPLIT): > raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) > > - Arch = self.__SkippedChars.rstrip(".").upper() > - if Arch not in ("IA32", "X64", "IPF", "ARM", "AARCH64"): > + Arch = self._SkippedChars.rstrip(TAB_SPLIT).upper() > + if Arch not in {TAB_ARCH_IA32, TAB_ARCH_X64, TAB_ARCH_IPF, TAB_ARCH_ARM, TAB_ARCH_AARCH64}: > raise Warning("Unknown Arch '%s'" % Arch, self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextWord(): > + if not self._GetNextWord(): > raise Warning("expected VTF name", self.FileName, self.CurrentLineNumber) > - Name = self.__Token.upper() > + Name = self._Token.upper() > > - VtfObj = Vtf.Vtf() > + VtfObj = Vtf() > VtfObj.UiName = Name > VtfObj.KeyArch = Arch > > - if self.__IsToken(","): > - if not self.__GetNextWord(): > + if self._IsToken(TAB_COMMA_SPLIT): > + if not self._GetNextWord(): > raise Warning("expected Arch list", self.FileName, self.CurrentLineNumber) > - if self.__Token.upper() not in ("IA32", "X64", "IPF", "ARM", "AARCH64"): > - raise Warning("Unknown Arch '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > - VtfObj.ArchList = self.__Token.upper() > + if self._Token.upper() not in (TAB_ARCH_IA32, TAB_ARCH_X64, TAB_ARCH_IPF, TAB_ARCH_ARM, TAB_ARCH_AARCH64): > + raise Warning("Unknown Arch '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > + VtfObj.ArchList = self._Token.upper() > > - if not self.__IsToken( "]"): > + if not self._IsToken(TAB_SECTION_END): > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > - if 
self.__IsKeyword("IA32_RST_BIN"): > - if not self.__IsToken("="): > + if self._IsKeyword("IA32_RST_BIN"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected Reset file", self.FileName, self.CurrentLineNumber) > > - VtfObj.ResetBin = self.__Token > - if VtfObj.ResetBin.replace('$(WORKSPACE)', '').find('$') == -1: > + VtfObj.ResetBin = self._Token > + if VtfObj.ResetBin.replace(TAB_WORKSPACE, '').find('$') == -1: > #check for file path > ErrorCode, ErrorInfo = PathClass(NormPath(VtfObj.ResetBin), GenFdsGlobalVariable.WorkSpaceDir).Validate() > if ErrorCode != 0: > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > - while self.__GetComponentStatement(VtfObj): > + while self._GetComponentStatement(VtfObj): > pass > > self.Profile.VtfList.append(VtfObj) > return True > > - ## __GetComponentStatement() method > + ## _GetComponentStatement() method > # > # Get components in VTF > # > @@ -4232,132 +4143,132 @@ class FdfParser: > # @retval True Successfully find a component > # @retval False Not able to find a component > # > - def __GetComponentStatement(self, VtfObj): > - if not self.__IsKeyword("COMP_NAME"): > + def _GetComponentStatement(self, VtfObj): > + if not self._IsKeyword("COMP_NAME"): > return False > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextWord(): > + if not self._GetNextWord(): > raise Warning("expected Component Name", self.FileName, self.CurrentLineNumber) > > - CompStatementObj = ComponentStatement.ComponentStatement() > - CompStatementObj.CompName = self.__Token > + CompStatementObj = ComponentStatement() > + CompStatementObj.CompName = self._Token > > - if not self.__IsKeyword("COMP_LOC"): > + if not self._IsKeyword("COMP_LOC"): > raise Warning("expected COMP_LOC", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > CompStatementObj.CompLoc = "" > - if self.__GetNextWord(): > - CompStatementObj.CompLoc = self.__Token > - if self.__IsToken('|'): > - if not self.__GetNextWord(): > + if self._GetNextWord(): > + CompStatementObj.CompLoc = self._Token > + if self._IsToken(TAB_VALUE_SPLIT): > + if not self._GetNextWord(): > raise Warning("Expected Region Name", self.FileName, self.CurrentLineNumber) > > - if self.__Token not in ("F", "N", "S"): #, "H", "L", "PH", "PL"): not support > - raise Warning("Unknown location type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > + if self._Token not in ("F", "N", "S"): #, "H", "L", "PH", "PL"): not support > + raise Warning("Unknown location type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > - CompStatementObj.FilePos = self.__Token > + CompStatementObj.FilePos = self._Token > else: > self.CurrentLineNumber += 1 > self.CurrentOffsetWithinLine = 0 > > - if not self.__IsKeyword("COMP_TYPE"): > + if not self._IsKeyword("COMP_TYPE"): > raise Warning("expected COMP_TYPE", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected Component type", self.FileName, 
self.CurrentLineNumber) > - if self.__Token not in ("FIT", "PAL_B", "PAL_A", "OEM"): > - if not self.__Token.startswith("0x") or len(self.__Token) < 3 or len(self.__Token) > 4 or \ > - not self.__Token[2] in string.hexdigits or not self.__Token[-1] in string.hexdigits: > - raise Warning("Unknown location type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > - CompStatementObj.CompType = self.__Token > + if self._Token not in ("FIT", "PAL_B", "PAL_A", "OEM"): > + if not self._Token.startswith("0x") or len(self._Token) < 3 or len(self._Token) > 4 or \ > + not self._Token[2] in hexdigits or not self._Token[-1] in hexdigits: > + raise Warning("Unknown location type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > + CompStatementObj.CompType = self._Token > > - if not self.__IsKeyword("COMP_VER"): > + if not self._IsKeyword("COMP_VER"): > raise Warning("expected COMP_VER", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected Component version", self.FileName, self.CurrentLineNumber) > > - Pattern = re.compile('-$|[0-9a-fA-F]{1,2}\.[0-9a-fA-F]{1,2}$', re.DOTALL) > - if Pattern.match(self.__Token) is None: > - raise Warning("Unknown version format '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > - CompStatementObj.CompVer = self.__Token > + Pattern = compile('-$|[0-9a-fA-F]{1,2}\.[0-9a-fA-F]{1,2}$', DOTALL) > + if Pattern.match(self._Token) is None: > + raise Warning("Unknown version format '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > + CompStatementObj.CompVer = self._Token > > - if not self.__IsKeyword("COMP_CS"): > + if not self._IsKeyword("COMP_CS"): > raise Warning("expected COMP_CS", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected Component CS", self.FileName, self.CurrentLineNumber) > - if self.__Token not in ("1", "0"): > - raise Warning("Unknown Component CS '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > - CompStatementObj.CompCs = self.__Token > + if self._Token not in ("1", "0"): > + raise Warning("Unknown Component CS '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > + CompStatementObj.CompCs = self._Token > > > - if not self.__IsKeyword("COMP_BIN"): > + if not self._IsKeyword("COMP_BIN"): > raise Warning("expected COMP_BIN", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected Component file", self.FileName, self.CurrentLineNumber) > > - CompStatementObj.CompBin = self.__Token > - if CompStatementObj.CompBin != '-' and CompStatementObj.CompBin.replace('$(WORKSPACE)', '').find('$') == -1: > + CompStatementObj.CompBin = self._Token > + if CompStatementObj.CompBin != '-' and CompStatementObj.CompBin.replace(TAB_WORKSPACE, '').find('$') == -1: > #check for file path > ErrorCode, ErrorInfo = PathClass(NormPath(CompStatementObj.CompBin), GenFdsGlobalVariable.WorkSpaceDir).Validate() > if ErrorCode != 0: > 
EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > - if not self.__IsKeyword("COMP_SYM"): > + if not self._IsKeyword("COMP_SYM"): > raise Warning("expected COMP_SYM", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected Component symbol file", self.FileName, self.CurrentLineNumber) > > - CompStatementObj.CompSym = self.__Token > - if CompStatementObj.CompSym != '-' and CompStatementObj.CompSym.replace('$(WORKSPACE)', '').find('$') == -1: > + CompStatementObj.CompSym = self._Token > + if CompStatementObj.CompSym != '-' and CompStatementObj.CompSym.replace(TAB_WORKSPACE, '').find('$') == -1: > #check for file path > ErrorCode, ErrorInfo = PathClass(NormPath(CompStatementObj.CompSym), GenFdsGlobalVariable.WorkSpaceDir).Validate() > if ErrorCode != 0: > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > - if not self.__IsKeyword("COMP_SIZE"): > + if not self._IsKeyword("COMP_SIZE"): > raise Warning("expected COMP_SIZE", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken("="): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if self.__IsToken("-"): > - CompStatementObj.CompSize = self.__Token > - elif self.__GetNextDecimalNumber(): > - CompStatementObj.CompSize = self.__Token > - elif self.__GetNextHexNumber(): > - CompStatementObj.CompSize = self.__Token > + if self._IsToken("-"): > + CompStatementObj.CompSize = self._Token > + elif self._GetNextDecimalNumber(): > + CompStatementObj.CompSize = self._Token > + elif self._GetNextHexNumber(): > + CompStatementObj.CompSize = self._Token > else: > - raise Warning("Unknown size '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > + raise Warning("Unknown size '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > VtfObj.ComponentStatementList.append(CompStatementObj) > return True > > - ## __GetOptionRom() method > + ## _GetOptionRom() method > # > # Get OptionROM section contents and store its data into OptionROM list of self.Profile > # > @@ -4365,38 +4276,37 @@ class FdfParser: > # @retval True Successfully find a OptionROM > # @retval False Not able to find a OptionROM > # > - def __GetOptionRom(self): > - if not self.__GetNextToken(): > + def _GetOptionRom(self): > + if not self._GetNextToken(): > return False > > - S = self.__Token.upper() > - if S.startswith("[") and not S.startswith("[OPTIONROM."): > + S = self._Token.upper() > + if S.startswith(TAB_SECTION_START) and not S.startswith("[OPTIONROM."): > self.SectionParser(S) > - self.__UndoToken() > + self._UndoToken() > return False > > - self.__UndoToken() > - if not self.__IsToken("[OptionRom.", True): > - raise Warning("Unknown Keyword '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > + self._UndoToken() > + if not self._IsToken("[OptionRom.", True): > + raise Warning("Unknown Keyword '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > - OptRomName = self.__GetUiName() > + OptRomName = self._GetUiName() > > - if not self.__IsToken( "]"): > + if not self._IsToken(TAB_SECTION_END): > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > - OptRomObj = OptionRom.OPTIONROM() > - OptRomObj.DriverName = OptRomName > + OptRomObj = OPTIONROM(OptRomName) > self.Profile.OptRomDict[OptRomName] = OptRomObj > 
> while True: > - isInf = self.__GetOptRomInfStatement(OptRomObj) > - isFile = self.__GetOptRomFileStatement(OptRomObj) > + isInf = self._GetOptRomInfStatement(OptRomObj) > + isFile = self._GetOptRomFileStatement(OptRomObj) > if not isInf and not isFile: > break > > return True > > - ## __GetOptRomInfStatement() method > + ## _GetOptRomInfStatement() method > # > # Get INF statements > # > @@ -4405,17 +4315,17 @@ class FdfParser: > # @retval True Successfully find inf statement > # @retval False Not able to find inf statement > # > - def __GetOptRomInfStatement(self, Obj): > - if not self.__IsKeyword( "INF"): > + def _GetOptRomInfStatement(self, Obj): > + if not self._IsKeyword("INF"): > return False > > - ffsInf = OptRomInfStatement.OptRomInfStatement() > - self.__GetInfOptions( ffsInf) > + ffsInf = OptRomInfStatement() > + self._GetInfOptions(ffsInf) > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected INF file path", self.FileName, self.CurrentLineNumber) > - ffsInf.InfFileName = self.__Token > - if ffsInf.InfFileName.replace('$(WORKSPACE)', '').find('$') == -1: > + ffsInf.InfFileName = self._Token > + if ffsInf.InfFileName.replace(TAB_WORKSPACE, '').find('$') == -1: > #check for file path > ErrorCode, ErrorInfo = PathClass(NormPath(ffsInf.InfFileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() > if ErrorCode != 0: > @@ -4434,71 +4344,71 @@ class FdfParser: > self.Profile.InfDict['ArchTBD'].append(ffsInf.InfFileName) > > > - self.__GetOptRomOverrides (ffsInf) > + self._GetOptRomOverrides (ffsInf) > > Obj.FfsList.append(ffsInf) > return True > > - ## __GetOptRomOverrides() method > + ## _GetOptRomOverrides() method > # > # Get overrides for OptROM INF & FILE > # > # @param self The object pointer > # @param FfsInfObj for whom overrides is got > # > - def __GetOptRomOverrides(self, Obj): > - if self.__IsToken('{'): > - Overrides = OptRomInfStatement.OverrideAttribs() > + def _GetOptRomOverrides(self, Obj): > + if self._IsToken('{'): > + Overrides = OverrideAttribs() > while True: > - if self.__IsKeyword( "PCI_VENDOR_ID"): > - if not self.__IsToken( "="): > + if self._IsKeyword("PCI_VENDOR_ID"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextHexNumber(): > + if not self._GetNextHexNumber(): > raise Warning("expected Hex vendor id", self.FileName, self.CurrentLineNumber) > - Overrides.PciVendorId = self.__Token > + Overrides.PciVendorId = self._Token > continue > > - if self.__IsKeyword( "PCI_CLASS_CODE"): > - if not self.__IsToken( "="): > + if self._IsKeyword("PCI_CLASS_CODE"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextHexNumber(): > + if not self._GetNextHexNumber(): > raise Warning("expected Hex class code", self.FileName, self.CurrentLineNumber) > - Overrides.PciClassCode = self.__Token > + Overrides.PciClassCode = self._Token > continue > > - if self.__IsKeyword( "PCI_DEVICE_ID"): > - if not self.__IsToken( "="): > + if self._IsKeyword("PCI_DEVICE_ID"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextHexNumber(): > + if not self._GetNextHexNumber(): > raise Warning("expected Hex device id", self.FileName, self.CurrentLineNumber) > > - Overrides.PciDeviceId = self.__Token > + Overrides.PciDeviceId = self._Token > continue > > - if self.__IsKeyword( "PCI_REVISION"): 
> - if not self.__IsToken( "="): > + if self._IsKeyword("PCI_REVISION"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextHexNumber(): > + if not self._GetNextHexNumber(): > raise Warning("expected Hex revision", self.FileName, self.CurrentLineNumber) > - Overrides.PciRevision = self.__Token > + Overrides.PciRevision = self._Token > continue > > - if self.__IsKeyword( "PCI_COMPRESS"): > - if not self.__IsToken( "="): > + if self._IsKeyword("PCI_COMPRESS"): > + if not self._IsToken(TAB_EQUAL_SPLIT): > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected TRUE/FALSE for compress", self.FileName, self.CurrentLineNumber) > - Overrides.NeedCompress = self.__Token.upper() == 'TRUE' > + Overrides.NeedCompress = self._Token.upper() == 'TRUE' > continue > > - if self.__IsToken( "}"): > + if self._IsToken("}"): > break > else: > EdkLogger.error("FdfParser", FORMAT_INVALID, File=self.FileName, Line=self.CurrentLineNumber) > > Obj.OverrideAttribs = Overrides > > - ## __GetOptRomFileStatement() method > + ## _GetOptRomFileStatement() method > # > # Get FILE statements > # > @@ -4507,33 +4417,33 @@ class FdfParser: > # @retval True Successfully find FILE statement > # @retval False Not able to find FILE statement > # > - def __GetOptRomFileStatement(self, Obj): > - if not self.__IsKeyword( "FILE"): > + def _GetOptRomFileStatement(self, Obj): > + if not self._IsKeyword("FILE"): > return False > > - FfsFileObj = OptRomFileStatement.OptRomFileStatement() > + FfsFileObj = OptRomFileStatement() > > - if not self.__IsKeyword("EFI") and not self.__IsKeyword(BINARY_FILE_TYPE_BIN): > + if not self._IsKeyword("EFI") and not self._IsKeyword(BINARY_FILE_TYPE_BIN): > raise Warning("expected Binary type (EFI/BIN)", self.FileName, self.CurrentLineNumber) > - FfsFileObj.FileType = self.__Token > + FfsFileObj.FileType = self._Token > > - if not self.__GetNextToken(): > + if not self._GetNextToken(): > raise Warning("expected File path", self.FileName, self.CurrentLineNumber) > - FfsFileObj.FileName = self.__Token > - if FfsFileObj.FileName.replace('$(WORKSPACE)', '').find('$') == -1: > + FfsFileObj.FileName = self._Token > + if FfsFileObj.FileName.replace(TAB_WORKSPACE, '').find('$') == -1: > #check for file path > ErrorCode, ErrorInfo = PathClass(NormPath(FfsFileObj.FileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() > if ErrorCode != 0: > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > if FfsFileObj.FileType == 'EFI': > - self.__GetOptRomOverrides(FfsFileObj) > + self._GetOptRomOverrides(FfsFileObj) > > Obj.FfsList.append(FfsFileObj) > > return True > > - ## __GetCapInFd() method > + ## _GetCapInFd() method > # > # Get Cap list contained in FD > # > @@ -4541,7 +4451,7 @@ class FdfParser: > # @param FdName FD name > # @retval CapList List of Capsule in FD > # > - def __GetCapInFd (self, FdName): > + def _GetCapInFd (self, FdName): > CapList = [] > if FdName.upper() in self.Profile.FdDict: > FdObj = self.Profile.FdDict[FdName.upper()] > @@ -4554,7 +4464,7 @@ class FdfParser: > CapList.append(elementRegionData.upper()) > return CapList > > - ## __GetReferencedFdCapTuple() method > + ## _GetReferencedFdCapTuple() method > # > # Get FV and FD list referenced by a capsule image > # > @@ -4563,22 +4473,22 @@ class FdfParser: > # @param RefFdList referenced FD by section > # @param RefFvList referenced FV by 
section > # > - def __GetReferencedFdCapTuple(self, CapObj, RefFdList = [], RefFvList = []): > - for CapsuleDataObj in CapObj.CapsuleDataList : > + def _GetReferencedFdCapTuple(self, CapObj, RefFdList = [], RefFvList = []): > + for CapsuleDataObj in CapObj.CapsuleDataList: > if hasattr(CapsuleDataObj, 'FvName') and CapsuleDataObj.FvName is not None and CapsuleDataObj.FvName.upper() not in RefFvList: > RefFvList.append (CapsuleDataObj.FvName.upper()) > elif hasattr(CapsuleDataObj, 'FdName') and CapsuleDataObj.FdName is not None and CapsuleDataObj.FdName.upper() not in RefFdList: > RefFdList.append (CapsuleDataObj.FdName.upper()) > elif CapsuleDataObj.Ffs is not None: > - if isinstance(CapsuleDataObj.Ffs, FfsFileStatement.FileStatement): > + if isinstance(CapsuleDataObj.Ffs, FileStatement): > if CapsuleDataObj.Ffs.FvName is not None and CapsuleDataObj.Ffs.FvName.upper() not in RefFvList: > RefFvList.append(CapsuleDataObj.Ffs.FvName.upper()) > elif CapsuleDataObj.Ffs.FdName is not None and CapsuleDataObj.Ffs.FdName.upper() not in RefFdList: > RefFdList.append(CapsuleDataObj.Ffs.FdName.upper()) > else: > - self.__GetReferencedFdFvTupleFromSection(CapsuleDataObj.Ffs, RefFdList, RefFvList) > + self._GetReferencedFdFvTupleFromSection(CapsuleDataObj.Ffs, RefFdList, RefFvList) > > - ## __GetFvInFd() method > + ## _GetFvInFd() method > # > # Get FV list contained in FD > # > @@ -4586,7 +4496,7 @@ class FdfParser: > # @param FdName FD name > # @retval FvList list of FV in FD > # > - def __GetFvInFd (self, FdName): > + def _GetFvInFd (self, FdName): > FvList = [] > if FdName.upper() in self.Profile.FdDict: > FdObj = self.Profile.FdDict[FdName.upper()] > @@ -4599,7 +4509,7 @@ class FdfParser: > FvList.append(elementRegionData.upper()) > return FvList > > - ## __GetReferencedFdFvTuple() method > + ## _GetReferencedFdFvTuple() method > # > # Get FD and FV list referenced by a FFS file > # > @@ -4608,17 +4518,17 @@ class FdfParser: > # @param RefFdList referenced FD by section > # @param RefFvList referenced FV by section > # > - def __GetReferencedFdFvTuple(self, FvObj, RefFdList = [], RefFvList = []): > + def _GetReferencedFdFvTuple(self, FvObj, RefFdList = [], RefFvList = []): > for FfsObj in FvObj.FfsList: > - if isinstance(FfsObj, FfsFileStatement.FileStatement): > + if isinstance(FfsObj, FileStatement): > if FfsObj.FvName is not None and FfsObj.FvName.upper() not in RefFvList: > RefFvList.append(FfsObj.FvName.upper()) > elif FfsObj.FdName is not None and FfsObj.FdName.upper() not in RefFdList: > RefFdList.append(FfsObj.FdName.upper()) > else: > - self.__GetReferencedFdFvTupleFromSection(FfsObj, RefFdList, RefFvList) > + self._GetReferencedFdFvTupleFromSection(FfsObj, RefFdList, RefFvList) > > - ## __GetReferencedFdFvTupleFromSection() method > + ## _GetReferencedFdFvTupleFromSection() method > # > # Get FD and FV list referenced by a FFS section > # > @@ -4627,19 +4537,19 @@ class FdfParser: > # @param FdList referenced FD by section > # @param FvList referenced FV by section > # > - def __GetReferencedFdFvTupleFromSection(self, FfsFile, FdList = [], FvList = []): > + def _GetReferencedFdFvTupleFromSection(self, FfsFile, FdList = [], FvList = []): > SectionStack = [] > SectionStack.extend(FfsFile.SectionList) > while SectionStack != []: > SectionObj = SectionStack.pop() > - if isinstance(SectionObj, FvImageSection.FvImageSection): > + if isinstance(SectionObj, FvImageSection): > if SectionObj.FvName is not None and SectionObj.FvName.upper() not in FvList: > FvList.append(SectionObj.FvName.upper()) > 
if SectionObj.Fv is not None and SectionObj.Fv.UiFvName is not None and SectionObj.Fv.UiFvName.upper() not in FvList: > FvList.append(SectionObj.Fv.UiFvName.upper()) > - self.__GetReferencedFdFvTuple(SectionObj.Fv, FdList, FvList) > + self._GetReferencedFdFvTuple(SectionObj.Fv, FdList, FvList) > > - if isinstance(SectionObj, CompressSection.CompressSection) or isinstance(SectionObj, GuidSection.GuidSection): > + if isinstance(SectionObj, CompressSection) or isinstance(SectionObj, GuidSection): > SectionStack.extend(SectionObj.SectionList) > > ## CycleReferenceCheck() method > @@ -4662,7 +4572,7 @@ class FdfParser: > FdAnalyzedList = [] > > Index = 0 > - while RefFvStack != [] and Index < MaxLength: > + while RefFvStack and Index < MaxLength: > Index = Index + 1 > FvNameFromStack = RefFvStack.pop() > if FvNameFromStack.upper() in self.Profile.FvDict: > @@ -4672,14 +4582,14 @@ class FdfParser: > > RefFdList = [] > RefFvList = [] > - self.__GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) > + self._GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) > > for RefFdName in RefFdList: > if RefFdName in FdAnalyzedList: > continue > > LogStr += "FV %s contains FD %s\n" % (FvNameFromStack, RefFdName) > - FvInFdList = self.__GetFvInFd(RefFdName) > + FvInFdList = self._GetFvInFd(RefFdName) > if FvInFdList != []: > for FvNameInFd in FvInFdList: > LogStr += "FD %s contains FV %s\n" % (RefFdName, FvNameInFd) > @@ -4725,7 +4635,7 @@ class FdfParser: > > RefFvList = [] > RefFdList = [] > - self.__GetReferencedFdCapTuple(CapObj, RefFdList, RefFvList) > + self._GetReferencedFdCapTuple(CapObj, RefFdList, RefFvList) > > FvListLength = 0 > FdListLength = 0 > @@ -4735,7 +4645,7 @@ class FdfParser: > continue > > LogStr += "Capsule %s contains FD %s\n" % (CapNameFromStack, RefFdName) > - CapInFdList = self.__GetCapInFd(RefFdName) > + CapInFdList = self._GetCapInFd(RefFdName) > if CapInFdList != []: > for CapNameInFd in CapInFdList: > LogStr += "FD %s contains Capsule %s\n" % (RefFdName, CapNameInFd) > @@ -4746,7 +4656,7 @@ class FdfParser: > EdkLogger.info(LogStr) > return True > > - FvInFdList = self.__GetFvInFd(RefFdName) > + FvInFdList = self._GetFvInFd(RefFdName) > if FvInFdList != []: > for FvNameInFd in FvInFdList: > LogStr += "FD %s contains FV %s\n" % (RefFdName, FvNameInFd) > @@ -4767,7 +4677,7 @@ class FdfParser: > FvObj = self.Profile.FvDict[RefFvName.upper()] > else: > continue > - self.__GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) > + self._GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) > FvAnalyzedList.append(RefFvName) > > return False > diff --git a/BaseTools/Source/Python/GenFds/Ffs.py b/BaseTools/Source/Python/GenFds/Ffs.py > index 598b2d023124..4c1b43572964 100644 > --- a/BaseTools/Source/Python/GenFds/Ffs.py > +++ b/BaseTools/Source/Python/GenFds/Ffs.py > @@ -15,53 +15,41 @@ > ## > # Import Modules > # > -from CommonDataClass.FdfClass import FDClassObject > from Common.DataType import * > > -## generate FFS > -# > -# > -class Ffs(FDClassObject): > - # mapping between FILE type in FDF and file type for GenFfs > - FdfFvFileTypeToFileType = { > - SUP_MODULE_SEC : 'EFI_FV_FILETYPE_SECURITY_CORE', > - SUP_MODULE_PEI_CORE : 'EFI_FV_FILETYPE_PEI_CORE', > - SUP_MODULE_PEIM : 'EFI_FV_FILETYPE_PEIM', > - SUP_MODULE_DXE_CORE : 'EFI_FV_FILETYPE_DXE_CORE', > - 'FREEFORM' : 'EFI_FV_FILETYPE_FREEFORM', > - 'DRIVER' : 'EFI_FV_FILETYPE_DRIVER', > - 'APPLICATION' : 'EFI_FV_FILETYPE_APPLICATION', > - 'FV_IMAGE' : 'EFI_FV_FILETYPE_FIRMWARE_VOLUME_IMAGE', > - 'RAW' : 'EFI_FV_FILETYPE_RAW', > - 
'PEI_DXE_COMBO' : 'EFI_FV_FILETYPE_COMBINED_PEIM_DRIVER', > - 'SMM' : 'EFI_FV_FILETYPE_SMM', > - SUP_MODULE_SMM_CORE : 'EFI_FV_FILETYPE_SMM_CORE', > - SUP_MODULE_MM_STANDALONE : 'EFI_FV_FILETYPE_MM_STANDALONE', > - SUP_MODULE_MM_CORE_STANDALONE : 'EFI_FV_FILETYPE_MM_CORE_STANDALONE' > - } > +# mapping between FILE type in FDF and file type for GenFfs > +FdfFvFileTypeToFileType = { > + SUP_MODULE_SEC : 'EFI_FV_FILETYPE_SECURITY_CORE', > + SUP_MODULE_PEI_CORE : 'EFI_FV_FILETYPE_PEI_CORE', > + SUP_MODULE_PEIM : 'EFI_FV_FILETYPE_PEIM', > + SUP_MODULE_DXE_CORE : 'EFI_FV_FILETYPE_DXE_CORE', > + 'FREEFORM' : 'EFI_FV_FILETYPE_FREEFORM', > + 'DRIVER' : 'EFI_FV_FILETYPE_DRIVER', > + 'APPLICATION' : 'EFI_FV_FILETYPE_APPLICATION', > + 'FV_IMAGE' : 'EFI_FV_FILETYPE_FIRMWARE_VOLUME_IMAGE', > + 'RAW' : 'EFI_FV_FILETYPE_RAW', > + 'PEI_DXE_COMBO' : 'EFI_FV_FILETYPE_COMBINED_PEIM_DRIVER', > + 'SMM' : 'EFI_FV_FILETYPE_SMM', > + SUP_MODULE_SMM_CORE : 'EFI_FV_FILETYPE_SMM_CORE', > + SUP_MODULE_MM_STANDALONE : 'EFI_FV_FILETYPE_MM_STANDALONE', > + SUP_MODULE_MM_CORE_STANDALONE : 'EFI_FV_FILETYPE_MM_CORE_STANDALONE' > +} > > - # mapping between section type in FDF and file suffix > - SectionSuffix = { > - BINARY_FILE_TYPE_PE32 : '.pe32', > - BINARY_FILE_TYPE_PIC : '.pic', > - BINARY_FILE_TYPE_TE : '.te', > - BINARY_FILE_TYPE_DXE_DEPEX : '.dpx', > - 'VERSION' : '.ver', > - BINARY_FILE_TYPE_UI : '.ui', > - 'COMPAT16' : '.com16', > - 'RAW' : '.raw', > - 'FREEFORM_SUBTYPE_GUID': '.guid', > - 'SUBTYPE_GUID' : '.guid', > - 'FV_IMAGE' : 'fv.sec', > - 'COMPRESS' : '.com', > - 'GUIDED' : '.guided', > - BINARY_FILE_TYPE_PEI_DEPEX : '.dpx', > - BINARY_FILE_TYPE_SMM_DEPEX : '.dpx' > - } > - > - ## The constructor > - # > - # @param self The object pointer > - # > - def __init__(self): > - FfsClassObject.__init__(self) > +# mapping between section type in FDF and file suffix > +SectionSuffix = { > + BINARY_FILE_TYPE_PE32 : '.pe32', > + BINARY_FILE_TYPE_PIC : '.pic', > + BINARY_FILE_TYPE_TE : '.te', > + BINARY_FILE_TYPE_DXE_DEPEX : '.dpx', > + 'VERSION' : '.ver', > + BINARY_FILE_TYPE_UI : '.ui', > + 'COMPAT16' : '.com16', > + 'RAW' : '.raw', > + 'FREEFORM_SUBTYPE_GUID': '.guid', > + 'SUBTYPE_GUID' : '.guid', > + 'FV_IMAGE' : 'fv.sec', > + 'COMPRESS' : '.com', > + 'GUIDED' : '.guided', > + BINARY_FILE_TYPE_PEI_DEPEX : '.dpx', > + BINARY_FILE_TYPE_SMM_DEPEX : '.dpx' > +} > diff --git a/BaseTools/Source/Python/GenFds/FfsFileStatement.py b/BaseTools/Source/Python/GenFds/FfsFileStatement.py > index 4c35aac76a00..7479efff04c7 100644 > --- a/BaseTools/Source/Python/GenFds/FfsFileStatement.py > +++ b/BaseTools/Source/Python/GenFds/FfsFileStatement.py > @@ -16,26 +16,22 @@ > # Import Modules > # > from __future__ import absolute_import > -from . import Ffs > -from . 
import Rule > -import Common.LongFilePathOs as os > from io import BytesIO > -import subprocess > - > -from .GenFdsGlobalVariable import GenFdsGlobalVariable > +from struct import pack > from CommonDataClass.FdfClass import FileStatementClassObject > from Common import EdkLogger > -from Common.BuildToolError import * > -from Common.Misc import GuidStructureByteArrayToGuidString > +from Common.BuildToolError import GENFDS_ERROR > +from Common.Misc import GuidStructureByteArrayToGuidString, SaveFileOnChange > +import Common.LongFilePathOs as os > from .GuidSection import GuidSection > from .FvImageSection import FvImageSection > -from Common.Misc import SaveFileOnChange > -from struct import * > +from .Ffs import FdfFvFileTypeToFileType > +from .GenFdsGlobalVariable import GenFdsGlobalVariable > > ## generate FFS from FILE > # > # > -class FileStatement (FileStatementClassObject) : > +class FileStatement (FileStatementClassObject): > ## The constructor > # > # @param self The object pointer > @@ -60,7 +56,7 @@ class FileStatement (FileStatementClassObject) : > # > def GenFfs(self, Dict = {}, FvChildAddr=[], FvParentAddr=None, IsMakefile=False, FvName=None): > > - if self.NameGuid is not None and self.NameGuid.startswith('PCD('): > + if self.NameGuid and self.NameGuid.startswith('PCD('): > PcdValue = GenFdsGlobalVariable.GetPcdValue(self.NameGuid) > if len(PcdValue) == 0: > EdkLogger.error("GenFds", GENFDS_ERROR, '%s NOT defined.' \ > @@ -82,7 +78,7 @@ class FileStatement (FileStatementClassObject) : > > Dict.update(self.DefineVarDict) > SectionAlignments = None > - if self.FvName is not None : > + if self.FvName: > Buffer = BytesIO('') > if self.FvName.upper() not in GenFdsGlobalVariable.FdfParser.Profile.FvDict: > EdkLogger.error("GenFds", GENFDS_ERROR, "FV (%s) is NOT described in FDF file!" % (self.FvName)) > @@ -90,14 +86,14 @@ class FileStatement (FileStatementClassObject) : > FileName = Fv.AddToBuffer(Buffer) > SectionFiles = [FileName] > > - elif self.FdName is not None: > + elif self.FdName: > if self.FdName.upper() not in GenFdsGlobalVariable.FdfParser.Profile.FdDict: > EdkLogger.error("GenFds", GENFDS_ERROR, "FD (%s) is NOT described in FDF file!" 
% (self.FdName)) > Fd = GenFdsGlobalVariable.FdfParser.Profile.FdDict.get(self.FdName.upper()) > FileName = Fd.GenFd() > SectionFiles = [FileName] > > - elif self.FileName is not None: > + elif self.FileName: > if hasattr(self, 'FvFileType') and self.FvFileType == 'RAW': > if isinstance(self.FileName, list) and isinstance(self.SubAlignment, list) and len(self.FileName) == len(self.SubAlignment): > FileContent = '' > @@ -111,7 +107,7 @@ class FileStatement (FileStatementClassObject) : > Content = f.read() > f.close() > AlignValue = 1 > - if self.SubAlignment[Index] is not None: > + if self.SubAlignment[Index]: > AlignValue = GenFdsGlobalVariable.GetAlignment(self.SubAlignment[Index]) > if AlignValue > MaxAlignValue: > MaxAlignIndex = Index > @@ -143,7 +139,7 @@ class FileStatement (FileStatementClassObject) : > SectionFiles = [] > Index = 0 > SectionAlignments = [] > - for section in self.SectionList : > + for section in self.SectionList: > Index = Index + 1 > SecIndex = '%d' %Index > # process the inside FvImage from FvSection or GuidSection > @@ -152,7 +148,7 @@ class FileStatement (FileStatementClassObject) : > section.FvAddr = FvChildAddr.pop(0) > elif isinstance(section, GuidSection): > section.FvAddr = FvChildAddr > - if FvParentAddr is not None and isinstance(section, GuidSection): > + if FvParentAddr and isinstance(section, GuidSection): > section.FvParentAddr = FvParentAddr > > if self.KeepReloc == False: > @@ -168,7 +164,7 @@ class FileStatement (FileStatementClassObject) : > # > FfsFileOutput = os.path.join(OutputDir, self.NameGuid + '.ffs') > GenFdsGlobalVariable.GenerateFfs(FfsFileOutput, SectionFiles, > - Ffs.Ffs.FdfFvFileTypeToFileType.get(self.FvFileType), > + FdfFvFileTypeToFileType.get(self.FvFileType), > self.NameGuid, > Fixed=self.Fixed, > CheckSum=self.CheckSum, > @@ -177,6 +173,3 @@ class FileStatement (FileStatementClassObject) : > ) > > return FfsFileOutput > - > - > - > diff --git a/BaseTools/Source/Python/GenFds/FfsInfStatement.py b/BaseTools/Source/Python/GenFds/FfsInfStatement.py > index 6149bc81b4ef..c1073c96e9aa 100644 > --- a/BaseTools/Source/Python/GenFds/FfsInfStatement.py > +++ b/BaseTools/Source/Python/GenFds/FfsInfStatement.py > @@ -22,7 +22,7 @@ import Common.LongFilePathOs as os > from io import BytesIO > from struct import * > from .GenFdsGlobalVariable import GenFdsGlobalVariable > -from . import Ffs > +from .Ffs import SectionSuffix,FdfFvFileTypeToFileType > import subprocess > import sys > from . 
import Section > @@ -760,7 +760,7 @@ class FfsInfStatement(FfsInfStatementClassObject): > > SecNum = '%d' %Index > GenSecOutputFile= self.__ExtendMacro__(Rule.NameGuid) + \ > - Ffs.Ffs.SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum > + SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum > Index = Index + 1 > OutputFile = os.path.join(self.OutputPath, GenSecOutputFile) > File = GenFdsGlobalVariable.MacroExtend(File, Dict, self.CurrentArch) > @@ -803,7 +803,7 @@ class FfsInfStatement(FfsInfStatementClassObject): > else: > SecNum = '%d' %Index > GenSecOutputFile= self.__ExtendMacro__(Rule.NameGuid) + \ > - Ffs.Ffs.SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum > + SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum > OutputFile = os.path.join(self.OutputPath, GenSecOutputFile) > GenSecInputFile = GenFdsGlobalVariable.MacroExtend(GenSecInputFile, Dict, self.CurrentArch) > > @@ -882,7 +882,7 @@ class FfsInfStatement(FfsInfStatementClassObject): > self.ModuleGuid = RegistryGuidStr > > GenFdsGlobalVariable.GenerateFfs(FfsOutput, InputSection, > - Ffs.Ffs.FdfFvFileTypeToFileType[Rule.FvFileType], > + FdfFvFileTypeToFileType[Rule.FvFileType], > self.ModuleGuid, Fixed=Rule.Fixed, > CheckSum=Rule.CheckSum, Align=Rule.Alignment, > SectionAlign=SectionAlignments, > @@ -1055,7 +1055,7 @@ class FfsInfStatement(FfsInfStatementClassObject): > > FfsOutput = os.path.join( self.OutputPath, self.ModuleGuid + '.ffs') > GenFdsGlobalVariable.GenerateFfs(FfsOutput, InputFile, > - Ffs.Ffs.FdfFvFileTypeToFileType[Rule.FvFileType], > + FdfFvFileTypeToFileType[Rule.FvFileType], > self.ModuleGuid, Fixed=Rule.Fixed, > CheckSum=Rule.CheckSum, Align=Rule.Alignment, > SectionAlign=Alignments, > diff --git a/BaseTools/Source/Python/GenFds/Fv.py b/BaseTools/Source/Python/GenFds/Fv.py > index c66fc383278d..30525bd11fcc 100644 > --- a/BaseTools/Source/Python/GenFds/Fv.py > +++ b/BaseTools/Source/Python/GenFds/Fv.py > @@ -1,4 +1,3 @@ > -from __future__ import absolute_import > ## @file > # process FV generation > # > @@ -16,16 +15,13 @@ from __future__ import absolute_import > ## > # Import Modules > # > +from __future__ import absolute_import > import Common.LongFilePathOs as os > import subprocess > from io import BytesIO > from struct import * > - > -from . import Ffs > -from . import AprioriSection > from . 
import FfsFileStatement > from .GenFdsGlobalVariable import GenFdsGlobalVariable > -from CommonDataClass.FdfClass import FvClassObject > from Common.Misc import SaveFileOnChange, PackGUID > from Common.LongFilePathSupport import CopyLongFilePath > from Common.LongFilePathSupport import OpenLongFilePath as open > @@ -36,13 +32,25 @@ FV_UI_EXT_ENTY_GUID = 'A67DF1FA-8DE8-4E98-AF09-4BDF2EFFBC7C' > ## generate FV > # > # > -class FV (FvClassObject): > +class FV (object): > ## The constructor > # > # @param self The object pointer > # > def __init__(self): > - FvClassObject.__init__(self) > + self.UiFvName = None > + self.CreateFileName = None > + self.BlockSizeList = [] > + self.DefineVarDict = {} > + self.SetVarDict = {} > + self.FvAlignment = None > + self.FvAttributeDict = {} > + self.FvNameGuid = None > + self.FvNameString = None > + self.AprioriSectionList = [] > + self.FfsList = [] > + self.BsBaseAddress = None > + self.RtBaseAddress = None > self.FvInfFile = None > self.FvAddressFile = None > self.BaseAddress = None > @@ -68,7 +76,7 @@ class FV (FvClassObject): > # @param MacroDict macro value pair > # @retval string Generated FV file path > # > - def AddToBuffer (self, Buffer, BaseAddress=None, BlockSize= None, BlockNum=None, ErasePloarity='1', VtfDict=None, MacroDict = {}, Flag=False) : > + def AddToBuffer (self, Buffer, BaseAddress=None, BlockSize= None, BlockNum=None, ErasePloarity='1', VtfDict=None, MacroDict = {}, Flag=False): > > if BaseAddress is None and self.UiFvName.upper() + 'fv' in GenFdsGlobalVariable.ImageBinDict: > return GenFdsGlobalVariable.ImageBinDict[self.UiFvName.upper() + 'fv'] > @@ -96,7 +104,7 @@ class FV (FvClassObject): > if self.FvBaseAddress is not None: > BaseAddress = self.FvBaseAddress > if not Flag: > - self.__InitializeInf__(BaseAddress, BlockSize, BlockNum, ErasePloarity, VtfDict) > + self._InitializeInf(BaseAddress, BlockSize, BlockNum, ErasePloarity, VtfDict) > # > # First Process the Apriori section > # > @@ -114,7 +122,7 @@ class FV (FvClassObject): > TAB_LINE_BREAK) > > # Process Modules in FfsList > - for FfsFile in self.FfsList : > + for FfsFile in self.FfsList: > if Flag: > if isinstance(FfsFile, FfsFileStatement.FileStatement): > continue > @@ -177,7 +185,7 @@ class FV (FvClassObject): > > if FvChildAddr != []: > # Update Ffs again > - for FfsFile in self.FfsList : > + for FfsFile in self.FfsList: > FileName = FfsFile.GenFfs(MacroDict, FvChildAddr, BaseAddress, IsMakefile=Flag, FvName=self.UiFvName) > > if GenFdsGlobalVariable.LargeFileInFvFlags[-1]: > @@ -252,7 +260,7 @@ class FV (FvClassObject): > return True > return False > > - ## __InitializeInf__() > + ## _InitializeInf() > # > # Initilize the inf file to create FV > # > @@ -263,7 +271,7 @@ class FV (FvClassObject): > # @param ErasePolarity Flash erase polarity > # @param VtfDict VTF objects > # > - def __InitializeInf__ (self, BaseAddress = None, BlockSize= None, BlockNum = None, ErasePloarity='1', VtfDict=None) : > + def _InitializeInf (self, BaseAddress = None, BlockSize= None, BlockNum = None, ErasePloarity='1', VtfDict=None): > # > # Create FV inf file > # > @@ -275,7 +283,7 @@ class FV (FvClassObject): > # Add [Options] > # > self.FvInfFile.writelines("[options]" + TAB_LINE_BREAK) > - if BaseAddress is not None : > + if BaseAddress is not None: > self.FvInfFile.writelines("EFI_BASE_ADDRESS = " + \ > BaseAddress + \ > TAB_LINE_BREAK) > @@ -294,7 +302,7 @@ class FV (FvClassObject): > #set default block size is 1 > self.FvInfFile.writelines("EFI_BLOCK_SIZE = 0x1" + TAB_LINE_BREAK) > > 
- for BlockSize in self.BlockSizeList : > + for BlockSize in self.BlockSizeList: > if BlockSize[0] is not None: > self.FvInfFile.writelines("EFI_BLOCK_SIZE = " + \ > '0x%X' %BlockSize[0] + \ > @@ -320,9 +328,9 @@ class FV (FvClassObject): > ' %s' %ErasePloarity + \ > TAB_LINE_BREAK) > if not (self.FvAttributeDict is None): > - for FvAttribute in self.FvAttributeDict.keys() : > + for FvAttribute in self.FvAttributeDict.keys(): > if FvAttribute == "FvUsedSizeEnable": > - if self.FvAttributeDict[FvAttribute].upper() in ('TRUE', '1') : > + if self.FvAttributeDict[FvAttribute].upper() in ('TRUE', '1'): > self.UsedSizeEnable = True > continue > self.FvInfFile.writelines("EFI_" + \ > @@ -365,8 +373,8 @@ class FV (FvClassObject): > Guid = FV_UI_EXT_ENTY_GUID.split('-') > # > # Layout: > - # EFI_FIRMWARE_VOLUME_EXT_ENTRY : size 4 > - # GUID : size 16 > + # EFI_FIRMWARE_VOLUME_EXT_ENTRY: size 4 > + # GUID: size 16 > # FV UI name > # > Buffer += (pack('HH', (FvUiLen + 16 + 4), 0x0002) > @@ -422,7 +430,6 @@ class FV (FvClassObject): > FvExtHeaderFileName + \ > TAB_LINE_BREAK) > > - > # > # Add [Files] > # > diff --git a/BaseTools/Source/Python/GenFds/FvImageSection.py b/BaseTools/Source/Python/GenFds/FvImageSection.py > index 04556fc87099..d6e1f3315b7e 100644 > --- a/BaseTools/Source/Python/GenFds/FvImageSection.py > +++ b/BaseTools/Source/Python/GenFds/FvImageSection.py > @@ -18,7 +18,7 @@ > from __future__ import absolute_import > from . import Section > from io import BytesIO > -from .Ffs import Ffs > +from .Ffs import SectionSuffix > import subprocess > from .GenFdsGlobalVariable import GenFdsGlobalVariable > import Common.LongFilePathOs as os > @@ -76,7 +76,7 @@ class FvImageSection(FvImageSectionClassObject): > if FvAlignmentValue > MaxFvAlignment: > MaxFvAlignment = FvAlignmentValue > > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + Ffs.SectionSuffix.get("FV_IMAGE")) > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + SectionSuffix.get("FV_IMAGE")) > GenFdsGlobalVariable.GenerateSection(OutputFile, [FvFileName], 'EFI_SECTION_FIRMWARE_VOLUME_IMAGE', IsMakefile=IsMakefile) > OutputFileList.append(OutputFile) > > @@ -146,7 +146,7 @@ class FvImageSection(FvImageSectionClassObject): > # > # Prepare the parameter of GenSection > # > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get("FV_IMAGE")) > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get("FV_IMAGE")) > GenFdsGlobalVariable.GenerateSection(OutputFile, [FvFileName], 'EFI_SECTION_FIRMWARE_VOLUME_IMAGE', IsMakefile=IsMakefile) > OutputFileList.append(OutputFile) > > diff --git a/BaseTools/Source/Python/GenFds/GenFds.py b/BaseTools/Source/Python/GenFds/GenFds.py > index 9dec9c5eb57d..0c8091b79841 100644 > --- a/BaseTools/Source/Python/GenFds/GenFds.py > +++ b/BaseTools/Source/Python/GenFds/GenFds.py > @@ -17,34 +17,30 @@ > # > from __future__ import print_function > from __future__ import absolute_import > +from re import compile > from optparse import OptionParser > -import sys > -import Common.LongFilePathOs as os > -import linecache > -from . import FdfParser > -import Common.BuildToolError as BuildToolError > -from .GenFdsGlobalVariable import GenFdsGlobalVariable > -from Workspace.WorkspaceDatabase import WorkspaceDatabase > -from Workspace.BuildClassObject import PcdClassObject > -from . 
import RuleComplexFile > -from .EfiSection import EfiSection > +from sys import exit > +from glob import glob > +from struct import unpack > +from linecache import getlines > from io import BytesIO > -import Common.TargetTxtClassObject as TargetTxtClassObject > -import Common.ToolDefClassObject as ToolDefClassObject > + > +import Common.LongFilePathOs as os > +from Common.TargetTxtClassObject import TargetTxtClassObject > from Common.DataType import * > import Common.GlobalData as GlobalData > from Common import EdkLogger > -from Common.StringUtils import * > -from Common.Misc import DirCache, PathClass > -from Common.Misc import SaveFileOnChange > -from Common.Misc import ClearDuplicatedInf > -from Common.Misc import GuidStructureStringToGuidString > +from Common.StringUtils import NormPath > +from Common.Misc import DirCache, PathClass, GuidStructureStringToGuidString > +from Common.Misc import SaveFileOnChange, ClearDuplicatedInf > from Common.BuildVersion import gBUILD_VERSION > from Common.MultipleWorkspace import MultipleWorkspace as mws > -from . import FfsFileStatement > -import glob > -from struct import unpack > -from Common.GlobalData import gGuidPattern > +from Common.BuildToolError import FatalError, GENFDS_ERROR, CODE_ERROR, FORMAT_INVALID, RESOURCE_NOT_AVAILABLE, FILE_NOT_FOUND, OPTION_MISSING, FORMAT_NOT_SUPPORTED,OPTION_VALUE_INVALID > +from Workspace.WorkspaceDatabase import WorkspaceDatabase > + > +from .FdfParser import FdfParser, Warning > +from .GenFdsGlobalVariable import GenFdsGlobalVariable > +from .FfsFileStatement import FileStatement > > ## Version and Copyright > versionNumber = "1.0" + ' ' + gBUILD_VERSION > @@ -71,22 +67,22 @@ def main(): > > EdkLogger.Initialize() > try: > - if Options.verbose is not None: > + if Options.verbose: > EdkLogger.SetLevel(EdkLogger.VERBOSE) > GenFdsGlobalVariable.VerboseMode = True > > - if Options.FixedAddress is not None: > + if Options.FixedAddress: > GenFdsGlobalVariable.FixedLoadAddress = True > > - if Options.quiet is not None: > + if Options.quiet: > EdkLogger.SetLevel(EdkLogger.QUIET) > - if Options.debug is not None: > + if Options.debug: > EdkLogger.SetLevel(Options.debug + 1) > GenFdsGlobalVariable.DebugLevel = Options.debug > else: > EdkLogger.SetLevel(EdkLogger.INFO) > > - if (Options.Workspace is None): > + if not Options.Workspace: > EdkLogger.error("GenFds", OPTION_MISSING, "WORKSPACE not defined", > ExtraData="Please use '-w' switch to pass it or set the WORKSPACE environment variable.") > elif not os.path.exists(Options.Workspace): > @@ -97,7 +93,7 @@ def main(): > GenFdsGlobalVariable.WorkSpaceDir = Workspace > if 'EDK_SOURCE' in os.environ: > GenFdsGlobalVariable.EdkSourceDir = os.path.normcase(os.environ['EDK_SOURCE']) > - if (Options.debug): > + if Options.debug: > GenFdsGlobalVariable.VerboseLogger("Using Workspace:" + Workspace) > if Options.GenfdsMultiThread: > GenFdsGlobalVariable.EnableGenfdsMultiThread = True > @@ -107,7 +103,7 @@ def main(): > PackagesPath = os.getenv("PACKAGES_PATH") > mws.setWs(GenFdsGlobalVariable.WorkSpaceDir, PackagesPath) > > - if (Options.filename): > + if Options.filename: > FdfFilename = Options.filename > FdfFilename = GenFdsGlobalVariable.ReplaceWorkspaceMacro(FdfFilename) > > @@ -123,13 +119,13 @@ def main(): > else: > EdkLogger.error("GenFds", OPTION_MISSING, "Missing FDF filename") > > - if (Options.BuildTarget): > + if Options.BuildTarget: > GenFdsGlobalVariable.TargetName = Options.BuildTarget > > - if (Options.ToolChain): > + if Options.ToolChain: > 
GenFdsGlobalVariable.ToolChainTag = Options.ToolChain > > - if (Options.activePlatform): > + if Options.activePlatform: > ActivePlatform = Options.activePlatform > ActivePlatform = GenFdsGlobalVariable.ReplaceWorkspaceMacro(ActivePlatform) > > @@ -139,15 +135,15 @@ def main(): > if not os.path.isabs (ActivePlatform): > ActivePlatform = mws.join(GenFdsGlobalVariable.WorkSpaceDir, ActivePlatform) > > - if not os.path.exists(ActivePlatform) : > + if not os.path.exists(ActivePlatform): > EdkLogger.error("GenFds", FILE_NOT_FOUND, "ActivePlatform doesn't exist!") > else: > EdkLogger.error("GenFds", OPTION_MISSING, "Missing active platform") > > - GlobalData.BuildOptionPcd = Options.OptionPcd if Options.OptionPcd else {} > + GlobalData.BuildOptionPcd = Options.OptionPcd if Options.OptionPcd else {} > GenFdsGlobalVariable.ActivePlatform = PathClass(NormPath(ActivePlatform)) > > - if (Options.ConfDirectory): > + if Options.ConfDirectory: > # Get alternate Conf location, if it is absolute, then just use the absolute directory name > ConfDirectoryPath = os.path.normpath(Options.ConfDirectory) > if ConfDirectoryPath.startswith('"'): > @@ -169,7 +165,7 @@ def main(): > GlobalData.gConfDirectory = GenFdsGlobalVariable.ConfDir > BuildConfigurationFile = os.path.normpath(os.path.join(ConfDirectoryPath, "target.txt")) > if os.path.isfile(BuildConfigurationFile) == True: > - TargetTxt = TargetTxtClassObject.TargetTxtClassObject() > + TargetTxt = TargetTxtClassObject() > TargetTxt.LoadTargetTxtFile(BuildConfigurationFile) > # if no build target given in command line, get it from target.txt > if not GenFdsGlobalVariable.TargetName: > @@ -237,10 +233,9 @@ def main(): > GlobalData.gAllFiles = DirCache(Workspace) > GlobalData.gWorkspace = Workspace > > - if (Options.archList) : > + if Options.archList: > ArchList = Options.archList.split(',') > else: > -# EdkLogger.error("GenFds", OPTION_MISSING, "Missing build ARCH") > ArchList = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, TAB_COMMON, Options.BuildTarget, Options.ToolChain].SupArchList > > TargetArchList = set(BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, TAB_COMMON, Options.BuildTarget, Options.ToolChain].SupArchList) & set(ArchList) > @@ -249,9 +244,11 @@ def main(): > > for Arch in ArchList: > GenFdsGlobalVariable.OutputDirFromDscDict[Arch] = NormPath(BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, Options.BuildTarget, Options.ToolChain].OutputDirectory) > - GenFdsGlobalVariable.PlatformName = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, Options.BuildTarget, Options.ToolChain].PlatformName > > - if (Options.outputDir): > + # assign platform name based on last entry in ArchList > + GenFdsGlobalVariable.PlatformName = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, ArchList[-1], Options.BuildTarget, Options.ToolChain].PlatformName > + > + if Options.outputDir: > OutputDirFromCommandLine = GenFdsGlobalVariable.ReplaceWorkspaceMacro(Options.outputDir) > if not os.path.isabs (OutputDirFromCommandLine): > OutputDirFromCommandLine = os.path.join(GenFdsGlobalVariable.WorkSpaceDir, OutputDirFromCommandLine) > @@ -274,27 +271,27 @@ def main(): > GenFdsGlobalVariable.OutputDirDict[Key] = OutputDir > > """ Parse Fdf file, has to place after build Workspace as FDF may contain macros from DSC file """ > - FdfParserObj = FdfParser.FdfParser(FdfFilename) > + FdfParserObj = FdfParser(FdfFilename) > FdfParserObj.ParseFile() > > if FdfParserObj.CycleReferenceCheck(): > 
EdkLogger.error("GenFds", FORMAT_NOT_SUPPORTED, "Cycle Reference Detected in FDF file") > > - if (Options.uiFdName) : > + if Options.uiFdName: > if Options.uiFdName.upper() in FdfParserObj.Profile.FdDict: > GenFds.OnlyGenerateThisFd = Options.uiFdName > else: > EdkLogger.error("GenFds", OPTION_VALUE_INVALID, > "No such an FD in FDF file: %s" % Options.uiFdName) > > - if (Options.uiFvName) : > + if Options.uiFvName: > if Options.uiFvName.upper() in FdfParserObj.Profile.FvDict: > GenFds.OnlyGenerateThisFv = Options.uiFvName > else: > EdkLogger.error("GenFds", OPTION_VALUE_INVALID, > "No such an FV in FDF file: %s" % Options.uiFvName) > > - if (Options.uiCapName) : > + if Options.uiCapName: > if Options.uiCapName.upper() in FdfParserObj.Profile.CapsuleDict: > GenFds.OnlyGenerateThisCap = Options.uiCapName > else: > @@ -302,7 +299,7 @@ def main(): > "No such a Capsule in FDF file: %s" % Options.uiCapName) > > GenFdsGlobalVariable.WorkSpace = BuildWorkSpace > - if ArchList is not None: > + if ArchList: > GenFdsGlobalVariable.ArchList = ArchList > > # Dsc Build Data will handle Pcd Settings from CommandLine. > @@ -313,10 +310,8 @@ def main(): > > # Record the FV Region info that may specific in the FD > if FdfParserObj.Profile.FvDict and FdfParserObj.Profile.FdDict: > - for Fv in FdfParserObj.Profile.FvDict: > - FvObj = FdfParserObj.Profile.FvDict[Fv] > - for Fd in FdfParserObj.Profile.FdDict: > - FdObj = FdfParserObj.Profile.FdDict[Fd] > + for FvObj in FdfParserObj.Profile.FvDict.values(): > + for FdObj in FdfParserObj.Profile.FdDict.values(): > for RegionObj in FdObj.RegionList: > if RegionObj.RegionType != BINARY_FILE_TYPE_FV: > continue > @@ -338,7 +333,7 @@ def main(): > """Display FV space info.""" > GenFds.DisplayFvSpaceInfo(FdfParserObj) > > - except FdfParser.Warning as X: > + except Warning as X: > EdkLogger.error(X.ToolName, FORMAT_INVALID, File=X.FileName, Line=X.LineNumber, ExtraData=X.Message, RaiseError=False) > ReturnCode = FORMAT_INVALID > except FatalError as X: > @@ -374,7 +369,6 @@ def SingleCheckCallback(option, opt_str, value, parser): > # Using standard Python module optparse to parse command line option of this tool. > # > # @retval Opt A optparse.Values object containing the parsed options > -# @retval Args Target of build command > # > def myOptionParser(): > usage = "%prog [options] -f input_file -a arch_list -b build_target -p active_platform -t tool_chain_tag -D \"MacroName [= MacroValue]\"" > @@ -404,7 +398,7 @@ def myOptionParser(): > Parser.add_option("--pcd", action="append", dest="OptionPcd", help="Set PCD value by command line. Format: \"PcdName=Value\" ") > Parser.add_option("--genfds-multi-thread", action="store_true", dest="GenfdsMultiThread", default=False, help="Enable GenFds multi thread to generate ffs file.") > > - (Options, args) = Parser.parse_args() > + Options, _ = Parser.parse_args() > return Options > > ## The class implementing the EDK2 flash image generation process > @@ -414,7 +408,7 @@ def myOptionParser(): > # 2. Call methods of Fd class to generate FD > # 3. 
Call methods of Fv class to generate FV that not belong to FD > # > -class GenFds : > +class GenFds(object): > FdfParsef = None > OnlyGenerateThisFd = None > OnlyGenerateThisFv = None > @@ -423,12 +417,13 @@ class GenFds : > ## GenFd() > # > # @param OutputDir Output directory > - # @param FdfParser FDF contents parser > + # @param FdfParserObject FDF contents parser > # @param Workspace The directory of workspace > # @param ArchList The Arch list of platform > # > - def GenFd (OutputDir, FdfParser, WorkSpace, ArchList): > - GenFdsGlobalVariable.SetDir ('', FdfParser, WorkSpace, ArchList) > + @staticmethod > + def GenFd (OutputDir, FdfParserObject, WorkSpace, ArchList): > + GenFdsGlobalVariable.SetDir ('', FdfParserObject, WorkSpace, ArchList) > > GenFdsGlobalVariable.VerboseLogger(" Generate all Fd images and their required FV and Capsule images!") > if GenFds.OnlyGenerateThisCap is not None and GenFds.OnlyGenerateThisCap.upper() in GenFdsGlobalVariable.FdfParser.Profile.CapsuleDict: > @@ -470,9 +465,10 @@ class GenFds : > GenFdsGlobalVariable.VerboseLogger("\n Generate all Option ROM!") > for OptRomObj in GenFdsGlobalVariable.FdfParser.Profile.OptRomDict.values(): > OptRomObj.AddToBuffer(None) > + > @staticmethod > - def GenFfsMakefile(OutputDir, FdfParser, WorkSpace, ArchList, GlobalData): > - GenFdsGlobalVariable.SetEnv(FdfParser, WorkSpace, ArchList, GlobalData) > + def GenFfsMakefile(OutputDir, FdfParserObject, WorkSpace, ArchList, GlobalData): > + GenFdsGlobalVariable.SetEnv(FdfParserObject, WorkSpace, ArchList, GlobalData) > for FdObj in GenFdsGlobalVariable.FdfParser.Profile.FdDict.values(): > FdObj.GenFd(Flag=True) > > @@ -490,6 +486,7 @@ class GenFds : > # @param FvObj Whose block size to get > # @retval int Block size value > # > + @staticmethod > def GetFvBlockSize(FvObj): > DefaultBlockSize = 0x1 > FdObj = None > @@ -524,16 +521,17 @@ class GenFds : > # @param FvObj Whose block size to get > # @retval None > # > - def DisplayFvSpaceInfo(FdfParser): > + @staticmethod > + def DisplayFvSpaceInfo(FdfParserObject): > > FvSpaceInfoList = [] > MaxFvNameLength = 0 > - for FvName in FdfParser.Profile.FvDict: > + for FvName in FdfParserObject.Profile.FvDict: > if len(FvName) > MaxFvNameLength: > MaxFvNameLength = len(FvName) > FvSpaceInfoFileName = os.path.join(GenFdsGlobalVariable.FvDir, FvName.upper() + '.Fv.map') > if os.path.exists(FvSpaceInfoFileName): > - FileLinesList = linecache.getlines(FvSpaceInfoFileName) > + FileLinesList = getlines(FvSpaceInfoFileName) > TotalFound = False > Total = '' > UsedFound = False > @@ -575,6 +573,7 @@ class GenFds : > # @param DscFile modules from dsc file will be preprocessed > # @retval None > # > + @staticmethod > def PreprocessImage(BuildDb, DscFile): > PcdDict = BuildDb.BuildObject[DscFile, TAB_COMMON, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].Pcds > PcdValue = '' > @@ -600,6 +599,7 @@ class GenFds : > ModuleObj = BuildDb.BuildObject[Key, TAB_COMMON, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] > print(ModuleObj.BaseName + ' ' + ModuleObj.ModuleType) > > + @staticmethod > def GenerateGuidXRefFile(BuildDb, ArchList, FdfParserObj): > GuidXRefFileName = os.path.join(GenFdsGlobalVariable.FvDir, "Guid.xref") > GuidXRefFile = BytesIO('') > @@ -607,7 +607,6 @@ class GenFds : > GuidDict = {} > ModuleList = [] > FileGuidList = [] > - GuidPattern = gGuidPattern > for Arch in ArchList: > PlatformDataBase = BuildDb.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, 
GenFdsGlobalVariable.ToolChainTag] > PkgList = GenFdsGlobalVariable.WorkSpace.GetPackageList(GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag) > @@ -626,20 +625,16 @@ class GenFds : > continue > else: > ModuleList.append(Module) > - GuidMatch = GuidPattern.match(ModuleFile.BaseName) > - if GuidMatch is not None: > + if GlobalData.gGuidPattern.match(ModuleFile.BaseName): > GuidXRefFile.write("%s %s\n" % (ModuleFile.BaseName, Module.BaseName)) > else: > GuidXRefFile.write("%s %s\n" % (Module.Guid, Module.BaseName)) > - for key, item in Module.Protocols.items(): > - GuidDict[key] = item > - for key, item in Module.Guids.items(): > - GuidDict[key] = item > - for key, item in Module.Ppis.items(): > - GuidDict[key] = item > + GuidDict.update(Module.Protocols) > + GuidDict.update(Module.Guids) > + GuidDict.update(Module.Ppis) > for FvName in FdfParserObj.Profile.FvDict: > for FfsObj in FdfParserObj.Profile.FvDict[FvName].FfsList: > - if not isinstance(FfsObj, FfsFileStatement.FileStatement): > + if not isinstance(FfsObj, FileStatement): > InfPath = PathClass(NormPath(mws.join(GenFdsGlobalVariable.WorkSpaceDir, FfsObj.InfFileName))) > FdfModule = BuildDb.BuildObject[InfPath, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] > if FdfModule in ModuleList: > @@ -647,12 +642,9 @@ class GenFds : > else: > ModuleList.append(FdfModule) > GuidXRefFile.write("%s %s\n" % (FdfModule.Guid, FdfModule.BaseName)) > - for key, item in FdfModule.Protocols.items(): > - GuidDict[key] = item > - for key, item in FdfModule.Guids.items(): > - GuidDict[key] = item > - for key, item in FdfModule.Ppis.items(): > - GuidDict[key] = item > + GuidDict.update(FdfModule.Protocols) > + GuidDict.update(FdfModule.Guids) > + GuidDict.update(FdfModule.Ppis) > else: > FileStatementGuid = FfsObj.NameGuid > if FileStatementGuid in FileGuidList: > @@ -661,13 +653,13 @@ class GenFds : > FileGuidList.append(FileStatementGuid) > Name = [] > FfsPath = os.path.join(GenFdsGlobalVariable.FvDir, 'Ffs') > - FfsPath = glob.glob(os.path.join(FfsPath, FileStatementGuid) + '*') > + FfsPath = glob(os.path.join(FfsPath, FileStatementGuid) + '*') > if not FfsPath: > continue > if not os.path.exists(FfsPath[0]): > continue > MatchDict = {} > - ReFileEnds = re.compile('\S+(.ui)$|\S+(fv.sec.txt)$|\S+(.pe32.txt)$|\S+(.te.txt)$|\S+(.pic.txt)$|\S+(.raw.txt)$|\S+(.ffs.txt)$') > + ReFileEnds = compile('\S+(.ui)$|\S+(fv.sec.txt)$|\S+(.pe32.txt)$|\S+(.te.txt)$|\S+(.pic.txt)$|\S+(.raw.txt)$|\S+(.ffs.txt)$') > FileList = os.listdir(FfsPath[0]) > for File in FileList: > Match = ReFileEnds.search(File) > @@ -724,16 +716,10 @@ class GenFds : > os.remove(GuidXRefFileName) > GuidXRefFile.close() > > - ##Define GenFd as static function > - GenFd = staticmethod(GenFd) > - GetFvBlockSize = staticmethod(GetFvBlockSize) > - DisplayFvSpaceInfo = staticmethod(DisplayFvSpaceInfo) > - PreprocessImage = staticmethod(PreprocessImage) > - GenerateGuidXRefFile = staticmethod(GenerateGuidXRefFile) > - > if __name__ == '__main__': > r = main() > ## 0-127 is a safe return range, and 1 is a standard default error > - if r < 0 or r > 127: r = 1 > - sys.exit(r) > + if r < 0 or r > 127: > + r = 1 > + exit(r) > > diff --git a/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py b/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py > index 14578a92a9b1..e867693d7c88 100644 > --- a/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py > +++ b/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py > @@ 
-16,13 +16,15 @@ > # Import Modules > # > from __future__ import print_function > +from __future__ import absolute_import > + > import Common.LongFilePathOs as os > -import sys > -import subprocess > -import struct > -import array > +from sys import stdout > +from subprocess import PIPE,Popen > +from struct import Struct > +from array import array > > -from Common.BuildToolError import * > +from Common.BuildToolError import COMMAND_FAILURE,GENFDS_ERROR > from Common import EdkLogger > from Common.Misc import SaveFileOnChange > > @@ -89,7 +91,7 @@ class GenFdsGlobalVariable: > EFI_FIRMWARE_FILE_SYSTEM3_GUID = '5473C07A-3DCB-4dca-BD6F-1E9689E7349A' > LARGE_FILE_SIZE = 0x1000000 > > - SectionHeader = struct.Struct("3B 1B") > + SectionHeader = Struct("3B 1B") > > # FvName, FdName, CapName in FDF, Image file name > ImageBinDict = {} > @@ -97,7 +99,7 @@ class GenFdsGlobalVariable: > ## LoadBuildRule > # > @staticmethod > - def __LoadBuildRule(): > + def _LoadBuildRule(): > if GenFdsGlobalVariable.__BuildRuleDatabase: > return GenFdsGlobalVariable.__BuildRuleDatabase > BuildConfigurationFile = os.path.normpath(os.path.join(GenFdsGlobalVariable.ConfDir, "target.txt")) > @@ -139,44 +141,43 @@ class GenFdsGlobalVariable: > if not Arch in GenFdsGlobalVariable.OutputDirDict: > return {} > > - BuildRuleDatabase = GenFdsGlobalVariable.__LoadBuildRule() > + BuildRuleDatabase = GenFdsGlobalVariable._LoadBuildRule() > if not BuildRuleDatabase: > return {} > > PathClassObj = PathClass(Inf.MetaFile.File, > GenFdsGlobalVariable.WorkSpaceDir) > - Macro = {} > - Macro["WORKSPACE" ] = GenFdsGlobalVariable.WorkSpaceDir > - Macro["MODULE_NAME" ] = Inf.BaseName > - Macro["MODULE_GUID" ] = Inf.Guid > - Macro["MODULE_VERSION" ] = Inf.Version > - Macro["MODULE_TYPE" ] = Inf.ModuleType > - Macro["MODULE_FILE" ] = str(PathClassObj) > - Macro["MODULE_FILE_BASE_NAME" ] = PathClassObj.BaseName > - Macro["MODULE_RELATIVE_DIR" ] = PathClassObj.SubDir > - Macro["MODULE_DIR" ] = PathClassObj.SubDir > - > - Macro["BASE_NAME" ] = Inf.BaseName > - > - Macro["ARCH" ] = Arch > - Macro["TOOLCHAIN" ] = GenFdsGlobalVariable.ToolChainTag > - Macro["TOOLCHAIN_TAG" ] = GenFdsGlobalVariable.ToolChainTag > - Macro["TOOL_CHAIN_TAG" ] = GenFdsGlobalVariable.ToolChainTag > - Macro["TARGET" ] = GenFdsGlobalVariable.TargetName > - > - Macro["BUILD_DIR" ] = GenFdsGlobalVariable.OutputDirDict[Arch] > - Macro["BIN_DIR" ] = os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) > - Macro["LIB_DIR" ] = os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) > BuildDir = os.path.join( > GenFdsGlobalVariable.OutputDirDict[Arch], > Arch, > PathClassObj.SubDir, > PathClassObj.BaseName > ) > - Macro["MODULE_BUILD_DIR" ] = BuildDir > - Macro["OUTPUT_DIR" ] = os.path.join(BuildDir, "OUTPUT") > - Macro["DEBUG_DIR" ] = os.path.join(BuildDir, "DEBUG") > - > + BinDir = os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) > + Macro = { > + "WORKSPACE":GenFdsGlobalVariable.WorkSpaceDir, > + "MODULE_NAME":Inf.BaseName, > + "MODULE_GUID":Inf.Guid, > + "MODULE_VERSION":Inf.Version, > + "MODULE_TYPE":Inf.ModuleType, > + "MODULE_FILE":str(PathClassObj), > + "MODULE_FILE_BASE_NAME":PathClassObj.BaseName, > + "MODULE_RELATIVE_DIR":PathClassObj.SubDir, > + "MODULE_DIR":PathClassObj.SubDir, > + "BASE_NAME":Inf.BaseName, > + "ARCH":Arch, > + "TOOLCHAIN":GenFdsGlobalVariable.ToolChainTag, > + "TOOLCHAIN_TAG":GenFdsGlobalVariable.ToolChainTag, > + "TOOL_CHAIN_TAG":GenFdsGlobalVariable.ToolChainTag, > + "TARGET":GenFdsGlobalVariable.TargetName, > + 
"BUILD_DIR":GenFdsGlobalVariable.OutputDirDict[Arch], > + "BIN_DIR":BinDir, > + "LIB_DIR":BinDir, > + "MODULE_BUILD_DIR":BuildDir, > + "OUTPUT_DIR":os.path.join(BuildDir, "OUTPUT"), > + "DEBUG_DIR":os.path.join(BuildDir, "DEBUG") > + } > + > BuildRules = {} > for Type in BuildRuleDatabase.FileTypeList: > #first try getting build rule by BuildRuleFamily > @@ -216,12 +217,12 @@ class GenFdsGlobalVariable: > > if not Inf.IsBinaryModule: > for File in Inf.Sources: > - if File.TagName in ("", "*", GenFdsGlobalVariable.ToolChainTag) and \ > - File.ToolChainFamily in ("", "*", GenFdsGlobalVariable.ToolChainFamily): > + if File.TagName in {"", "*", GenFdsGlobalVariable.ToolChainTag} and \ > + File.ToolChainFamily in {"", "*", GenFdsGlobalVariable.ToolChainFamily}: > FileList.append((File, DataType.TAB_UNKNOWN_FILE)) > > for File in Inf.Binaries: > - if File.Target in [DataType.TAB_COMMON, '*', GenFdsGlobalVariable.TargetName]: > + if File.Target in {DataType.TAB_COMMON, '*', GenFdsGlobalVariable.TargetName}: > FileList.append((File, File.Type)) > > for File, FileType in FileList: > @@ -233,7 +234,7 @@ class GenFdsGlobalVariable: > Source = SourceList[Index] > Index = Index + 1 > > - if File.IsBinary and File == Source and Inf.Binaries is not None and File in Inf.Binaries: > + if File.IsBinary and File == Source and Inf.Binaries and File in Inf.Binaries: > # Skip all files that are not binary libraries > if not Inf.LibraryClass: > continue > @@ -287,19 +288,18 @@ class GenFdsGlobalVariable: > # @param Workspace The directory of workspace > # @param ArchList The Arch list of platform > # > + @staticmethod > def SetDir (OutputDir, FdfParser, WorkSpace, ArchList): > - GenFdsGlobalVariable.VerboseLogger("GenFdsGlobalVariable.OutputDir :%s" % OutputDir) > -# GenFdsGlobalVariable.OutputDirDict = OutputDir > + GenFdsGlobalVariable.VerboseLogger("GenFdsGlobalVariable.OutputDir:%s" % OutputDir) > GenFdsGlobalVariable.FdfParser = FdfParser > GenFdsGlobalVariable.WorkSpace = WorkSpace > GenFdsGlobalVariable.FvDir = os.path.join(GenFdsGlobalVariable.OutputDirDict[ArchList[0]], DataType.TAB_FV_DIRECTORY) > - if not os.path.exists(GenFdsGlobalVariable.FvDir) : > + if not os.path.exists(GenFdsGlobalVariable.FvDir): > os.makedirs(GenFdsGlobalVariable.FvDir) > GenFdsGlobalVariable.FfsDir = os.path.join(GenFdsGlobalVariable.FvDir, 'Ffs') > - if not os.path.exists(GenFdsGlobalVariable.FfsDir) : > + if not os.path.exists(GenFdsGlobalVariable.FfsDir): > os.makedirs(GenFdsGlobalVariable.FfsDir) > > - T_CHAR_LF = '\n' > # > # Create FV Address inf file > # > @@ -308,7 +308,7 @@ class GenFdsGlobalVariable: > # > # Add [Options] > # > - FvAddressFile.writelines("[options]" + T_CHAR_LF) > + FvAddressFile.writelines("[options]" + DataType.TAB_LINE_BREAK) > BsAddress = '0' > for Arch in ArchList: > if GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].BsBaseAddress: > @@ -317,19 +317,22 @@ class GenFdsGlobalVariable: > > FvAddressFile.writelines("EFI_BOOT_DRIVER_BASE_ADDRESS = " + \ > BsAddress + \ > - T_CHAR_LF) > + DataType.TAB_LINE_BREAK) > > RtAddress = '0' > - for Arch in ArchList: > - if GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].RtBaseAddress: > - RtAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, 
GenFdsGlobalVariable.ToolChainTag].RtBaseAddress > + for Arch in reversed(ArchList): > + temp = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].RtBaseAddress > + if temp: > + RtAddress = temp > + break > > FvAddressFile.writelines("EFI_RUNTIME_DRIVER_BASE_ADDRESS = " + \ > RtAddress + \ > - T_CHAR_LF) > + DataType.TAB_LINE_BREAK) > > FvAddressFile.close() > > + @staticmethod > def SetEnv(FdfParser, WorkSpace, ArchList, GlobalData): > GenFdsGlobalVariable.ModuleFile = WorkSpace.ModuleFile > GenFdsGlobalVariable.FdfParser = FdfParser > @@ -360,7 +363,6 @@ class GenFdsGlobalVariable: > if not os.path.exists(GenFdsGlobalVariable.FfsDir): > os.makedirs(GenFdsGlobalVariable.FfsDir) > > - T_CHAR_LF = '\n' > # > # Create FV Address inf file > # > @@ -369,7 +371,7 @@ class GenFdsGlobalVariable: > # > # Add [Options] > # > - FvAddressFile.writelines("[options]" + T_CHAR_LF) > + FvAddressFile.writelines("[options]" + DataType.TAB_LINE_BREAK) > BsAddress = '0' > for Arch in ArchList: > BsAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > @@ -380,20 +382,20 @@ class GenFdsGlobalVariable: > > FvAddressFile.writelines("EFI_BOOT_DRIVER_BASE_ADDRESS = " + \ > BsAddress + \ > - T_CHAR_LF) > + DataType.TAB_LINE_BREAK) > > RtAddress = '0' > - for Arch in ArchList: > - if GenFdsGlobalVariable.WorkSpace.BuildObject[ > + for Arch in reversed(ArchList): > + temp = GenFdsGlobalVariable.WorkSpace.BuildObject[ > GenFdsGlobalVariable.ActivePlatform, Arch, GlobalData.gGlobalDefines['TARGET'], > - GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress: > - RtAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[ > - GenFdsGlobalVariable.ActivePlatform, Arch, GlobalData.gGlobalDefines['TARGET'], > - GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress > + GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress > + if temp: > + RtAddress = temp > + break > > FvAddressFile.writelines("EFI_RUNTIME_DRIVER_BASE_ADDRESS = " + \ > RtAddress + \ > - T_CHAR_LF) > + DataType.TAB_LINE_BREAK) > > FvAddressFile.close() > > @@ -401,6 +403,7 @@ class GenFdsGlobalVariable: > # > # @param String String that may contain macro > # > + @staticmethod > def ReplaceWorkspaceMacro(String): > String = mws.handleWsMacro(String) > Str = String.replace('$(WORKSPACE)', GenFdsGlobalVariable.WorkSpaceDir) > @@ -424,7 +427,7 @@ class GenFdsGlobalVariable: > if not os.path.exists(Output): > return True > # always update "Output" if no "Input" given > - if Input is None or len(Input) == 0: > + if not Input: > return True > > # if fdf file is changed after the 'Output" is generated, update the 'Output' > @@ -449,9 +452,9 @@ class GenFdsGlobalVariable: > Cmd += ("-s", Type) > if CompressionType: > Cmd += ("-c", CompressionType) > - if Guid is not None: > + if Guid: > Cmd += ("-g", Guid) > - if DummyFile is not None: > + if DummyFile: > Cmd += ("--dummy", DummyFile) > if GuidHdrLen: > Cmd += ("-l", GuidHdrLen) > @@ -473,7 +476,7 @@ class GenFdsGlobalVariable: > if ' '.join(Cmd).strip() not in GenFdsGlobalVariable.SecCmdList: > GenFdsGlobalVariable.SecCmdList.append(' '.join(Cmd).strip()) > else: > - SectionData = array.array('B', [0, 0, 0, 0]) > + SectionData = array('B', [0, 0, 0, 0]) > SectionData.fromstring(Ui.encode("utf_16_le")) > SectionData.append(0) > SectionData.append(0) > @@ -571,9 +574,9 @@ class GenFdsGlobalVariable: > if BaseAddress: > Cmd += ("-r", BaseAddress) > > - if 
ForceRebase == False: > + if not ForceRebase: > Cmd += ("-F", "FALSE") > - elif ForceRebase == True: > + else: > Cmd += ("-F", "TRUE") > > if Capsule: > @@ -634,7 +637,7 @@ class GenFdsGlobalVariable: > Revision=None, DeviceId=None, VendorId=None, IsMakefile=False): > InputList = [] > Cmd = ["EfiRom"] > - if len(EfiInput) > 0: > + if EfiInput: > > if Compress: > Cmd.append("-ec") > @@ -645,7 +648,7 @@ class GenFdsGlobalVariable: > Cmd.append(EfiFile) > InputList.append (EfiFile) > > - if len(BinaryInput) > 0: > + if BinaryInput: > Cmd.append("-b") > for BinFile in BinaryInput: > Cmd.append(BinFile) > @@ -656,13 +659,13 @@ class GenFdsGlobalVariable: > return > GenFdsGlobalVariable.DebugLogger(EdkLogger.DEBUG_5, "%s needs update because of newer %s" % (Output, InputList)) > > - if ClassCode is not None: > + if ClassCode: > Cmd += ("-l", ClassCode) > - if Revision is not None: > + if Revision: > Cmd += ("-r", Revision) > - if DeviceId is not None: > + if DeviceId: > Cmd += ("-i", DeviceId) > - if VendorId is not None: > + if VendorId: > Cmd += ("-f", VendorId) > > Cmd += ("-o", Output) > @@ -688,6 +691,7 @@ class GenFdsGlobalVariable: > else: > GenFdsGlobalVariable.CallExternalTool(Cmd, "Failed to call " + ToolPath, returnValue) > > + @staticmethod > def CallExternalTool (cmd, errorMess, returnValue=[]): > > if type(cmd) not in (tuple, list): > @@ -701,19 +705,19 @@ class GenFdsGlobalVariable: > cmd += ('-v',) > GenFdsGlobalVariable.InfLogger (cmd) > else: > - sys.stdout.write ('#') > - sys.stdout.flush() > + stdout.write ('#') > + stdout.flush() > GenFdsGlobalVariable.SharpCounter = GenFdsGlobalVariable.SharpCounter + 1 > if GenFdsGlobalVariable.SharpCounter % GenFdsGlobalVariable.SharpNumberPerLine == 0: > - sys.stdout.write('\n') > + stdout.write('\n') > > try: > - PopenObject = subprocess.Popen(' '.join(cmd), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) > + PopenObject = Popen(' '.join(cmd), stdout=PIPE, stderr=PIPE, shell=True) > except Exception as X: > EdkLogger.error("GenFds", COMMAND_FAILURE, ExtraData="%s: %s" % (str(X), cmd[0])) > (out, error) = PopenObject.communicate() > > - while PopenObject.returncode is None : > + while PopenObject.returncode is None: > PopenObject.wait() > if returnValue != [] and returnValue[0] != 0: > #get command return value > @@ -727,45 +731,52 @@ class GenFdsGlobalVariable: > print("###", cmd) > EdkLogger.error("GenFds", COMMAND_FAILURE, errorMess) > > + @staticmethod > def VerboseLogger (msg): > EdkLogger.verbose(msg) > > + @staticmethod > def InfLogger (msg): > EdkLogger.info(msg) > > + @staticmethod > def ErrorLogger (msg, File=None, Line=None, ExtraData=None): > EdkLogger.error('GenFds', GENFDS_ERROR, msg, File, Line, ExtraData) > > + @staticmethod > def DebugLogger (Level, msg): > EdkLogger.debug(Level, msg) > > - ## ReplaceWorkspaceMacro() > + ## MacroExtend() > # > # @param Str String that may contain macro > # @param MacroDict Dictionary that contains macro value pair > # > + @staticmethod > def MacroExtend (Str, MacroDict={}, Arch=DataType.TAB_COMMON): > - if Str is None : > + if Str is None: > return None > > - Dict = {'$(WORKSPACE)' : GenFdsGlobalVariable.WorkSpaceDir, > - '$(EDK_SOURCE)' : GenFdsGlobalVariable.EdkSourceDir, > + Dict = {'$(WORKSPACE)': GenFdsGlobalVariable.WorkSpaceDir, > + '$(EDK_SOURCE)': GenFdsGlobalVariable.EdkSourceDir, > # '$(OUTPUT_DIRECTORY)': GenFdsGlobalVariable.OutputDirFromDsc, > - '$(TARGET)' : GenFdsGlobalVariable.TargetName, > - '$(TOOL_CHAIN_TAG)' : GenFdsGlobalVariable.ToolChainTag, > - 
'$(SPACE)' : ' ' > + '$(TARGET)': GenFdsGlobalVariable.TargetName, > + '$(TOOL_CHAIN_TAG)': GenFdsGlobalVariable.ToolChainTag, > + '$(SPACE)': ' ' > } > - OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[GenFdsGlobalVariable.ArchList[0]] > + > if Arch != DataType.TAB_COMMON and Arch in GenFdsGlobalVariable.ArchList: > OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[Arch] > + else: > + OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[GenFdsGlobalVariable.ArchList[0]] > > Dict['$(OUTPUT_DIRECTORY)'] = OutputDir > > - if MacroDict is not None and len (MacroDict) != 0: > + if MacroDict: > Dict.update(MacroDict) > > for key in Dict: > - if Str.find(key) >= 0 : > + if Str.find(key) >= 0: > Str = Str.replace (key, Dict[key]) > > if Str.find('$(ARCH)') >= 0: > @@ -780,14 +791,14 @@ class GenFdsGlobalVariable: > # > # @param PcdPattern pattern that labels a PCD. > # > + @staticmethod > def GetPcdValue (PcdPattern): > - if PcdPattern is None : > + if PcdPattern is None: > return None > PcdPair = PcdPattern.lstrip('PCD(').rstrip(')').strip().split('.') > TokenSpace = PcdPair[0] > TokenCName = PcdPair[1] > > - PcdValue = '' > for Arch in GenFdsGlobalVariable.ArchList: > Platform = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] > PcdDict = Platform.Pcds > @@ -799,8 +810,7 @@ class GenFdsGlobalVariable: > if PcdObj.DatumType != DataType.TAB_VOID: > EdkLogger.error("GenFds", GENFDS_ERROR, "%s is not VOID* datum type." % PcdPattern) > > - PcdValue = PcdObj.DefaultValue > - return PcdValue > + return PcdObj.DefaultValue > > for Package in GenFdsGlobalVariable.WorkSpace.GetPackageList(GenFdsGlobalVariable.ActivePlatform, > Arch, > @@ -815,21 +825,9 @@ class GenFdsGlobalVariable: > if PcdObj.DatumType != DataType.TAB_VOID: > EdkLogger.error("GenFds", GENFDS_ERROR, "%s is not VOID* datum type." % PcdPattern) > > - PcdValue = PcdObj.DefaultValue > - return PcdValue > + return PcdObj.DefaultValue > > - return PcdValue > - > - SetDir = staticmethod(SetDir) > - SetEnv = staticmethod(SetEnv) > - ReplaceWorkspaceMacro = staticmethod(ReplaceWorkspaceMacro) > - CallExternalTool = staticmethod(CallExternalTool) > - VerboseLogger = staticmethod(VerboseLogger) > - InfLogger = staticmethod(InfLogger) > - ErrorLogger = staticmethod(ErrorLogger) > - DebugLogger = staticmethod(DebugLogger) > - MacroExtend = staticmethod (MacroExtend) > - GetPcdValue = staticmethod(GetPcdValue) > + return '' > > ## FindExtendTool() > # > @@ -863,7 +861,7 @@ def FindExtendTool(KeyStringList, CurrentArchList, NameGuid): > ToolOptionKey = None > KeyList = None > for ToolDef in ToolDefinition.items(): > - if NameGuid.lower() == ToolDef[1].lower() : > + if NameGuid.lower() == ToolDef[1].lower(): > KeyList = ToolDef[0].split('_') > Key = KeyList[0] + \ > '_' + \ > diff --git a/BaseTools/Source/Python/GenFds/GuidSection.py b/BaseTools/Source/Python/GenFds/GuidSection.py > index fd660737f421..25609e6f2ae1 100644 > --- a/BaseTools/Source/Python/GenFds/GuidSection.py > +++ b/BaseTools/Source/Python/GenFds/GuidSection.py > @@ -19,7 +19,7 @@ > from __future__ import absolute_import > from . 
import Section > import subprocess > -from .Ffs import Ffs > +from .Ffs import SectionSuffix > import Common.LongFilePathOs as os > from .GenFdsGlobalVariable import GenFdsGlobalVariable > from .GenFdsGlobalVariable import FindExtendTool > @@ -126,7 +126,7 @@ class GuidSection(GuidSectionClassObject) : > ModuleName + \ > SUP_MODULE_SEC + \ > SecNum + \ > - Ffs.SectionSuffix['GUIDED'] > + SectionSuffix['GUIDED'] > OutputFile = os.path.normpath(OutputFile) > > ExternalTool = None > diff --git a/BaseTools/Source/Python/GenFds/OptionRom.py b/BaseTools/Source/Python/GenFds/OptionRom.py > index bb7071fa7fca..9d7ad25ecc43 100644 > --- a/BaseTools/Source/Python/GenFds/OptionRom.py > +++ b/BaseTools/Source/Python/GenFds/OptionRom.py > @@ -26,8 +26,6 @@ from Common.Misc import SaveFileOnChange > from Common import EdkLogger > from Common.BuildToolError import * > > -T_CHAR_LF = '\n' > - > ## > # > # > @@ -36,9 +34,9 @@ class OPTIONROM (OptionRomClassObject): > # > # @param self The object pointer > # > - def __init__(self): > + def __init__(self, Name = ""): > OptionRomClassObject.__init__(self) > - > + self.DriverName = Name > > ## AddToBuffer() > # > diff --git a/BaseTools/Source/Python/GenFds/Region.py b/BaseTools/Source/Python/GenFds/Region.py > index 5242b74c9e70..9aeaa45d4566 100644 > --- a/BaseTools/Source/Python/GenFds/Region.py > +++ b/BaseTools/Source/Python/GenFds/Region.py > @@ -20,7 +20,6 @@ from struct import * > from .GenFdsGlobalVariable import GenFdsGlobalVariable > from io import BytesIO > import string > -from CommonDataClass.FdfClass import RegionClassObject > import Common.LongFilePathOs as os > from stat import * > from Common import EdkLogger > @@ -32,15 +31,20 @@ from Common.DataType import BINARY_FILE_TYPE_FV > ## generate Region > # > # > -class Region(RegionClassObject): > +class Region(object): > > ## The constructor > # > # @param self The object pointer > # > def __init__(self): > - RegionClassObject.__init__(self) > - > + self.Offset = None # The begin position of the Region > + self.Size = None # The Size of the Region > + self.PcdOffset = None > + self.PcdSize = None > + self.SetVarDict = {} > + self.RegionType = None > + self.RegionDataList = [] > > ## PadBuffer() > # > diff --git a/BaseTools/Source/Python/GenFds/UiSection.py b/BaseTools/Source/Python/GenFds/UiSection.py > index 24f2f3ca938d..c1dff188be96 100644 > --- a/BaseTools/Source/Python/GenFds/UiSection.py > +++ b/BaseTools/Source/Python/GenFds/UiSection.py > @@ -17,7 +17,7 @@ > # > from __future__ import absolute_import > from . 
import Section > -from .Ffs import Ffs > +from .Ffs import SectionSuffix > import subprocess > import Common.LongFilePathOs as os > from .GenFdsGlobalVariable import GenFdsGlobalVariable > @@ -59,7 +59,7 @@ class UiSection (UiSectionClassObject): > self.StringData = FfsInf.__ExtendMacro__(self.StringData) > self.FileName = FfsInf.__ExtendMacro__(self.FileName) > > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get(BINARY_FILE_TYPE_UI)) > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get(BINARY_FILE_TYPE_UI)) > > if self.StringData is not None : > NameString = self.StringData > diff --git a/BaseTools/Source/Python/GenFds/VerSection.py b/BaseTools/Source/Python/GenFds/VerSection.py > index db71fe9653d4..051743f10c21 100644 > --- a/BaseTools/Source/Python/GenFds/VerSection.py > +++ b/BaseTools/Source/Python/GenFds/VerSection.py > @@ -1,7 +1,7 @@ > ## @file > # process Version section generation > # > -# Copyright (c) 2007 - 2017, Intel Corporation. All rights reserved.<BR> > +# Copyright (c) 2007 - 2018, Intel Corporation. All rights reserved.<BR> > # > # This program and the accompanying materials > # are licensed and made available under the terms and conditions of the BSD License > @@ -16,10 +16,8 @@ > # Import Modules > # > from __future__ import absolute_import > -from .Ffs import Ffs > -from . import Section > +from .Ffs import SectionSuffix > import Common.LongFilePathOs as os > -import subprocess > from .GenFdsGlobalVariable import GenFdsGlobalVariable > from CommonDataClass.FdfClass import VerSectionClassObject > from Common.LongFilePathSupport import OpenLongFilePath as open > @@ -54,29 +52,27 @@ class VerSection (VerSectionClassObject): > # > # Prepare the parameter of GenSection > # > - if FfsInf is not None: > + if FfsInf: > self.Alignment = FfsInf.__ExtendMacro__(self.Alignment) > self.BuildNum = FfsInf.__ExtendMacro__(self.BuildNum) > self.StringData = FfsInf.__ExtendMacro__(self.StringData) > self.FileName = FfsInf.__ExtendMacro__(self.FileName) > > OutputFile = os.path.join(OutputPath, > - ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get('VERSION')) > + ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get('VERSION')) > OutputFile = os.path.normpath(OutputFile) > > # Get String Data > StringData = '' > - if self.StringData is not None: > + if self.StringData: > StringData = self.StringData > - elif self.FileName is not None: > + elif self.FileName: > FileNameStr = GenFdsGlobalVariable.ReplaceWorkspaceMacro(self.FileName) > FileNameStr = GenFdsGlobalVariable.MacroExtend(FileNameStr, Dict) > FileObj = open(FileNameStr, 'r') > StringData = FileObj.read() > StringData = '"' + StringData + '"' > FileObj.close() > - else: > - StringData = '' > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', > Ver=StringData, BuildNumber=self.BuildNum, IsMakefile=IsMakefile) > OutputFileList = [] > diff --git a/BaseTools/Source/Python/GenFds/Vtf.py b/BaseTools/Source/Python/GenFds/Vtf.py > index 5cb2d4acfb1c..9dcd48b2d812 100644 > --- a/BaseTools/Source/Python/GenFds/Vtf.py > +++ b/BaseTools/Source/Python/GenFds/Vtf.py > @@ -18,21 +18,24 @@ > from __future__ import absolute_import > from .GenFdsGlobalVariable import GenFdsGlobalVariable > import Common.LongFilePathOs as os > -from CommonDataClass.FdfClass import VtfClassObject > from Common.LongFilePathSupport import OpenLongFilePath as open > -T_CHAR_LF = '\n' > +from Common.DataType import TAB_LINE_BREAK > > 
## generate VTF > # > # > -class Vtf (VtfClassObject): > +class Vtf (object): > > ## The constructor > # > # @param self The object pointer > # > def __init__(self): > - VtfClassObject.__init__(self) > + self.KeyArch = None > + self.ArchList = None > + self.UiName = None > + self.ResetBin = None > + self.ComponentStatementList = [] > > ## GenVtf() method > # > @@ -44,7 +47,6 @@ class Vtf (VtfClassObject): > # > def GenVtf(self, FdAddressDict) : > self.GenBsfInf() > - OutputFile = os.path.join(GenFdsGlobalVariable.FvDir, self.UiName + '.Vtf') > BaseAddArg = self.GetBaseAddressArg(FdAddressDict) > OutputArg, VtfRawDict = self.GenOutputArg() > > @@ -69,57 +71,57 @@ class Vtf (VtfClassObject): > FvList = self.GetFvList() > self.BsfInfName = os.path.join(GenFdsGlobalVariable.FvDir, self.UiName + '.inf') > BsfInf = open(self.BsfInfName, 'w+') > - if self.ResetBin is not None: > - BsfInf.writelines ("[OPTIONS]" + T_CHAR_LF) > + if self.ResetBin: > + BsfInf.writelines ("[OPTIONS]" + TAB_LINE_BREAK) > BsfInf.writelines ("IA32_RST_BIN" + \ > " = " + \ > GenFdsGlobalVariable.MacroExtend(GenFdsGlobalVariable.ReplaceWorkspaceMacro(self.ResetBin)) + \ > - T_CHAR_LF) > - BsfInf.writelines (T_CHAR_LF) > + TAB_LINE_BREAK) > + BsfInf.writelines (TAB_LINE_BREAK) > > - BsfInf.writelines ("[COMPONENTS]" + T_CHAR_LF) > + BsfInf.writelines ("[COMPONENTS]" + TAB_LINE_BREAK) > > for ComponentObj in self.ComponentStatementList : > BsfInf.writelines ("COMP_NAME" + \ > " = " + \ > ComponentObj.CompName + \ > - T_CHAR_LF) > + TAB_LINE_BREAK) > if ComponentObj.CompLoc.upper() == 'NONE': > BsfInf.writelines ("COMP_LOC" + \ > " = " + \ > 'N' + \ > - T_CHAR_LF) > + TAB_LINE_BREAK) > > - elif ComponentObj.FilePos is not None: > + elif ComponentObj.FilePos: > BsfInf.writelines ("COMP_LOC" + \ > " = " + \ > ComponentObj.FilePos + \ > - T_CHAR_LF) > + TAB_LINE_BREAK) > else: > Index = FvList.index(ComponentObj.CompLoc.upper()) > if Index == 0: > BsfInf.writelines ("COMP_LOC" + \ > " = " + \ > 'F' + \ > - T_CHAR_LF) > + TAB_LINE_BREAK) > elif Index == 1: > BsfInf.writelines ("COMP_LOC" + \ > " = " + \ > 'S' + \ > - T_CHAR_LF) > + TAB_LINE_BREAK) > > BsfInf.writelines ("COMP_TYPE" + \ > " = " + \ > ComponentObj.CompType + \ > - T_CHAR_LF) > + TAB_LINE_BREAK) > BsfInf.writelines ("COMP_VER" + \ > " = " + \ > ComponentObj.CompVer + \ > - T_CHAR_LF) > + TAB_LINE_BREAK) > BsfInf.writelines ("COMP_CS" + \ > " = " + \ > ComponentObj.CompCs + \ > - T_CHAR_LF) > + TAB_LINE_BREAK) > > BinPath = ComponentObj.CompBin > if BinPath != '-': > @@ -127,7 +129,7 @@ class Vtf (VtfClassObject): > BsfInf.writelines ("COMP_BIN" + \ > " = " + \ > BinPath + \ > - T_CHAR_LF) > + TAB_LINE_BREAK) > > SymPath = ComponentObj.CompSym > if SymPath != '-': > @@ -135,12 +137,12 @@ class Vtf (VtfClassObject): > BsfInf.writelines ("COMP_SYM" + \ > " = " + \ > SymPath + \ > - T_CHAR_LF) > + TAB_LINE_BREAK) > BsfInf.writelines ("COMP_SIZE" + \ > " = " + \ > ComponentObj.CompSize + \ > - T_CHAR_LF) > - BsfInf.writelines (T_CHAR_LF) > + TAB_LINE_BREAK) > + BsfInf.writelines (TAB_LINE_BREAK) > > BsfInf.close() > > diff --git a/BaseTools/Source/Python/build/BuildReport.py b/BaseTools/Source/Python/build/BuildReport.py > index a66adfb16746..95863fefe151 100644 > --- a/BaseTools/Source/Python/build/BuildReport.py > +++ b/BaseTools/Source/Python/build/BuildReport.py > @@ -46,6 +46,7 @@ from Common.StringUtils import NormPath > from Common.DataType import * > import collections > from Common.Expression import * > +from GenFds.AprioriSection import DXE_APRIORI_GUID, 
PEI_APRIORI_GUID > > ## Pattern to extract contents in EDK DXS files > gDxsDependencyPattern = re.compile(r"DEPENDENCY_START(.+)DEPENDENCY_END", re.DOTALL) > @@ -1853,8 +1854,8 @@ class FdRegionReport(object): > # > # Add PEI and DXE a priori files GUIDs defined in PI specification. > # > - self._GuidsDb["1B45CC0A-156A-428A-AF62-49864DA0E6E6"] = "PEI Apriori" > - self._GuidsDb["FC510EE7-FFDC-11D4-BD41-0080C73C8881"] = "DXE Apriori" > + self._GuidsDb[PEI_APRIORI_GUID] = "PEI Apriori" > + self._GuidsDb[DXE_APRIORI_GUID] = "DXE Apriori" > # > # Add ACPI table storage file > # > -- > 2.16.2.windows.1 > > _______________________________________________ > edk2-devel mailing list > edk2-devel@lists.01.org > https://lists.01.org/mailman/listinfo/edk2-devel > ^ permalink raw reply [flat|nested] 14+ messages in thread
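
Two small mechanical patterns recur throughout the GenFdsGlobalVariable hunks in the patch above: membership tests rewritten against set literals instead of tuples, and the "keep reassigning inside a loop" idiom replaced by a reversed scan that breaks on the first non-empty value (as in the RtBaseAddress changes). The sketch below shows both patterns in isolation; the names TagMatches, LastNonEmptyAddress and AddressOf are hypothetical and are not taken from the tree.

    # A minimal sketch, assuming hypothetical names -- not code from the tree.

    TOOL_CHAIN_TAG = "GCC5"   # stand-in for GenFdsGlobalVariable.ToolChainTag

    def TagMatches(Tag):
        # Set literal instead of a tuple: same semantics, O(1) membership test.
        return Tag in {"", "*", TOOL_CHAIN_TAG}

    def LastNonEmptyAddress(ArchList, AddressOf):
        # Old shape: iterate forward and keep reassigning, so the last
        # non-empty value wins.  New shape: walk the list backwards and
        # break on the first hit -- same result, a single assignment.
        Address = '0'
        for Arch in reversed(ArchList):
            Candidate = AddressOf(Arch)
            if Candidate:
                Address = Candidate
                break
        return Address

    print(TagMatches("*"))        # True
    print(LastNonEmptyAddress(["IA32", "X64"],
                              {"IA32": "", "X64": "0xFFE00000"}.get))  # 0xFFE00000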
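
The same patch also converts the trailing "Name = staticmethod(Name)" assignments into @staticmethod decorators on SetDir, SetEnv, CallExternalTool and the logger helpers. Both spellings produce the same descriptor; the decorator simply keeps the conversion next to the definition. A minimal before/after sketch follows, using an illustrative class and method name that do not appear in the tree.

    # Hedged sketch: the Logger* classes and Verbose method are illustrative only.

    # Before: plain def, converted at the end of the class body.
    class LoggerOld:
        def Verbose(msg):
            print("[verbose]", msg)
        Verbose = staticmethod(Verbose)

    # After: the decorator performs the same conversion at the definition site.
    class LoggerNew:
        @staticmethod
        def Verbose(msg):
            print("[verbose]", msg)

    LoggerOld.Verbose("old style")
    LoggerNew.Verbose("decorator form, same behavior")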
* Re: [Patch v3 1/6] BaseTools/GenFds: cleanup GenFds 2018-10-26 4:53 ` Gary Lin @ 2018-10-26 5:25 ` Gao, Liming 2018-10-26 6:27 ` Gao, Liming 0 siblings, 1 reply; 14+ messages in thread From: Gao, Liming @ 2018-10-26 5:25 UTC (permalink / raw) To: Gary Lin, Carsey, Jaben; +Cc: edk2-devel@lists.01.org Lin: I am checking this issue. Sorry for the break. Thanks Liming > -----Original Message----- > From: Gary Lin [mailto:glin@suse.com] > Sent: Friday, October 26, 2018 12:54 PM > To: Carsey, Jaben <jaben.carsey@intel.com> > Cc: edk2-devel@lists.01.org; Gao, Liming <liming.gao@intel.com> > Subject: Re: [edk2] [Patch v3 1/6] BaseTools/GenFds: cleanup GenFds > > On Tue, Oct 23, 2018 at 10:29:19AM -0700, Jaben Carsey wrote: > > 1) remove wildcard imports and use explicit imports > > 2) refactor to use shared variables from Common/DataType > > 3) rename to not shadow imports > > 4) don't assign a variable in a loop (just do final assignment) > > 5) remove spaces, parens, unused or commented out code, etc. > > 6) merge unnecessary parent classes into child > > 7) refactor to share DXE and PEI apriori GUIDs from one place > > this includes changes to Build and EOT files > > 8) for PEP8, dont use __ for custom methods. > > > Hi Jaben, > > I found OVMF crashed immediately at starting up after applying this patch :-\ > > == > > $ qemu-system-x86_64 -s -enable-kvm -smp 1 -m 1024 \ > -drive if=pflash,format=raw,readonly,file=/home/gary/git/edk2/Build/OvmfX64/DEBUG_GCC5/FV/OVMF_CODE.fd \ > -drive if=pflash,format=raw,file=/home/gary/git/edk2/Build/OvmfX64/DEBUG_GCC5/FV/OVMF_VARS.fd \ > -machine type=q35 \ > -drive file=fat:hda-contents/,format=raw,read-only,if=virtio \ > -monitor stdio -debugcon file:debug.log -global isa-debugcon.iobase=0x402 \ > -serial file:serial.log > QEMU 3.0.0 monitor - type 'help' for more information > (qemu) KVM internal error. 
Suberror: 1 > emulation failure > RAX=000000000081f6a0 RBX=0000000000000200 RCX=000000000081f6a0 RDX=0000000000000200 > RSI=000000000081f698 RDI=000000000081fd20 RBP=000000000081f698 RSP=000000000081f4a8 > R8 =000000000081f4d8 R9 =000000000081f748 R10=0000000000000010 R11=0000000000900000 > R12=0000000000000000 R13=0000000000820000 R14=000000000081fd01 R15=000000000081fd01 > RIP=000000000009ffff RFL=00010807 [-O---PC] CPL=0 II=0 A20=1 SMM=0 HLT=0 > ES =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] > CS =0018 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA] > SS =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] > DS =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] > FS =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] > GS =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] > LDT=0000 0000000000000000 0000ffff 00008200 DPL=0 LDT > TR =0000 0000000000000000 0000ffff 00008b00 DPL=0 TSS64-busy > GDT= 00000000ffffff80 0000001f > IDT= 000000000081fd70 0000021f > CR0=c0000033 CR2=0000000000000000 CR3=0000000000800000 CR4=00000660 > DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 > DR6=00000000ffff0ff0 DR7=0000000000000400 > EFER=0000000000000500 > Code=00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 > 00 00 00 00 00 00 00 00 00 > > == > > And, there is only one line in debug.log: > > SecCoreStartupWithStack(0xFFFCC000, 0x820000) > > Without "-enable-kvm", the error became: > > == > > (qemu) qemu-system-x86_64: Trying to execute code outside RAM or ROM at 0x00000000000a0000 > This usually means one of the following happened: > > (1) You told QEMU to execute a kernel for the wrong machine type, and it crashed on startup (eg trying to run a raspberry pi kernel on a > versatilepb QEMU machine) > (2) You didn't give QEMU a kernel or BIOS filename at all, and QEMU executed a ROM full of no-op instructions until it fell off the end > (3) Your guest kernel has a bug and crashed by jumping off into nowhere > > This is almost always one of the first two, so check your command line and that you are using the right type of kernel for this machine. > If you think option (3) is likely then you can try debugging your guest with the -d debug options; in particular -d guest_errors will cause > the log to include a dump of the guest register state at this point. > > Execution cannot continue; stopping here. 
> > == > > Gary Lin > > > Cc: Yonghong Zhu <yonghong.zhu@intel.com> > > Cc: Liming Gao <liming.gao@intel.com> > > Cc: Bob C Feng <bob.c.feng@intel.com> > > Contributed-under: TianoCore Contribution Agreement 1.1 > > Signed-off-by: Jaben Carsey <jaben.carsey@intel.com> > > --- > > BaseTools/Source/Python/CommonDataClass/FdfClass.py | 73 - > > BaseTools/Source/Python/Eot/EotMain.py | 372 ++- > > BaseTools/Source/Python/GenFds/AprioriSection.py | 45 +- > > BaseTools/Source/Python/GenFds/Capsule.py | 26 +- > > BaseTools/Source/Python/GenFds/CapsuleData.py | 1 - > > BaseTools/Source/Python/GenFds/CompressSection.py | 4 +- > > BaseTools/Source/Python/GenFds/DataSection.py | 4 +- > > BaseTools/Source/Python/GenFds/DepexSection.py | 5 +- > > BaseTools/Source/Python/GenFds/EfiSection.py | 16 +- > > BaseTools/Source/Python/GenFds/FdfParser.py | 3014 ++++++++++---------- > > BaseTools/Source/Python/GenFds/Ffs.py | 82 +- > > BaseTools/Source/Python/GenFds/FfsFileStatement.py | 37 +- > > BaseTools/Source/Python/GenFds/FfsInfStatement.py | 10 +- > > BaseTools/Source/Python/GenFds/Fv.py | 47 +- > > BaseTools/Source/Python/GenFds/FvImageSection.py | 6 +- > > BaseTools/Source/Python/GenFds/GenFds.py | 160 +- > > BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py | 208 +- > > BaseTools/Source/Python/GenFds/GuidSection.py | 4 +- > > BaseTools/Source/Python/GenFds/OptionRom.py | 6 +- > > BaseTools/Source/Python/GenFds/Region.py | 12 +- > > BaseTools/Source/Python/GenFds/UiSection.py | 4 +- > > BaseTools/Source/Python/GenFds/VerSection.py | 16 +- > > BaseTools/Source/Python/GenFds/Vtf.py | 48 +- > > BaseTools/Source/Python/build/BuildReport.py | 5 +- > > 24 files changed, 2185 insertions(+), 2020 deletions(-) > > > > diff --git a/BaseTools/Source/Python/CommonDataClass/FdfClass.py b/BaseTools/Source/Python/CommonDataClass/FdfClass.py > > index 64b58c2078a3..3df82082457f 100644 > > --- a/BaseTools/Source/Python/CommonDataClass/FdfClass.py > > +++ b/BaseTools/Source/Python/CommonDataClass/FdfClass.py > > @@ -37,50 +37,6 @@ class FDClassObject: > > self.RegionList = [] > > self.vtfRawDict = {} > > > > -## FV data in FDF > > -# > > -# > > -class FvClassObject: > > - ## The constructor > > - # > > - # @param self The object pointer > > - # > > - def __init__(self): > > - self.UiFvName = None > > - self.CreateFileName = None > > - # 3-tuple list (blockSize, numBlocks, pcd) > > - self.BlockSizeList = [] > > - # DefineVarDict[var] = value > > - self.DefineVarDict = {} > > - # SetVarDict[var] = value > > - self.SetVarDict = {} > > - self.FvAlignment = None > > - # FvAttributeDict[attribute] = TRUE/FALSE (1/0) > > - self.FvAttributeDict = {} > > - self.FvNameGuid = None > > - self.FvNameString = None > > - self.AprioriSectionList = [] > > - self.FfsList = [] > > - self.BsBaseAddress = None > > - self.RtBaseAddress = None > > - > > -## Region data in FDF > > -# > > -# > > -class RegionClassObject: > > - ## The constructor > > - # > > - # @param self The object pointer > > - # > > - def __init__(self): > > - self.Offset = None # The begin position of the Region > > - self.Size = None # The Size of the Region > > - self.PcdOffset = None > > - self.PcdSize = None > > - self.SetVarDict = {} > > - self.RegionType = None > > - self.RegionDataList = [] > > - > > ## FFS data in FDF > > # > > # > > @@ -112,7 +68,6 @@ class FileStatementClassObject (FfsClassObject) : > > self.FvName = None > > self.FdName = None > > self.DefineVarDict = {} > > - self.AprioriSection = None > > self.KeepReloc = None > > > > ## INF statement data 
in FDF > > @@ -134,19 +89,6 @@ class FfsInfStatementClassObject(FfsClassObject): > > self.KeepReloc = None > > self.UseArch = None > > > > -## APRIORI section data in FDF > > -# > > -# > > -class AprioriSectionClassObject: > > - ## The constructor > > - # > > - # @param self The object pointer > > - # > > - def __init__(self): > > - # DefineVarDict[var] = value > > - self.DefineVarDict = {} > > - self.FfsList = [] > > - > > ## section data in FDF > > # > > # > > @@ -363,21 +305,6 @@ class CapsuleClassObject : > > self.CapsuleDataList = [] > > self.FmpPayloadList = [] > > > > -## VTF data in FDF > > -# > > -# > > -class VtfClassObject : > > - ## The constructor > > - # > > - # @param self The object pointer > > - # > > - def __init__(self): > > - self.KeyArch = None > > - self.ArchList = None > > - self.UiName = None > > - self.ResetBin = None > > - self.ComponentStatementList = [] > > - > > ## VTF component data in FDF > > # > > # > > diff --git a/BaseTools/Source/Python/Eot/EotMain.py b/BaseTools/Source/Python/Eot/EotMain.py > > index 49a1494126c4..2ad93cc6b4e3 100644 > > --- a/BaseTools/Source/Python/Eot/EotMain.py > > +++ b/BaseTools/Source/Python/Eot/EotMain.py > > @@ -36,10 +36,9 @@ import struct > > import uuid > > import copy > > import codecs > > +from GenFds.AprioriSection import DXE_APRIORI_GUID, PEI_APRIORI_GUID > > > > gGuidStringFormat = "%08X-%04X-%04X-%02X%02X-%02X%02X%02X%02X%02X%02X" > > -gPeiAprioriFileNameGuid = '1b45cc0a-156a-428a-af62-49864da0e6e6' > > -gAprioriGuid = 'fc510ee7-ffdc-11d4-bd41-0080c73c8881' > > gIndention = -4 > > > > class Image(array): > > @@ -402,10 +401,10 @@ class FirmwareVolume(Image): > > if Ffs.Type == 0x05: > > FfsDxeCoreGuid = FfsID > > continue > > - if Ffs.Guid.lower() == gPeiAprioriFileNameGuid: > > + if Ffs.Guid.lower() == PEI_APRIORI_GUID.lower(): > > FfsPeiPrioriGuid = FfsID > > continue > > - if Ffs.Guid.lower() == gAprioriGuid: > > + if Ffs.Guid.lower() == DXE_APRIORI_GUID.lower(): > > FfsDxePrioriGuid = FfsID > > continue > > > > @@ -1084,6 +1083,371 @@ class Ffs(Image): > > State = property(_GetState, _SetState) > > > > > > + _FfsGuid = "8C8CE578-8A3D-4F1C-9935-896185C32DD3" > > + > > + _GUID_ = struct.Struct("16x 1I2H8B") > > + _LENGTH_ = struct.Struct("16x 16x 1Q") > > + _SIG_ = struct.Struct("16x 16x 8x 1I") > > + _ATTR_ = struct.Struct("16x 16x 8x 4x 1I") > > + _HLEN_ = struct.Struct("16x 16x 8x 4x 4x 1H") > > + _CHECKSUM_ = struct.Struct("16x 16x 8x 4x 4x 2x 1H") > > + > > + def __init__(self, Name=''): > > + Image.__init__(self) > > + self.Name = Name > > + self.FfsDict = sdict() > > + self.OrderedFfsDict = sdict() > > + self.UnDispatchedFfsDict = sdict() > > + self.ProtocolList = sdict() > > + > > + def CheckArchProtocol(self): > > + for Item in EotGlobalData.gArchProtocolGuids: > > + if Item.lower() not in EotGlobalData.gProtocolList: > > + return False > > + return True > > + > > + def ParseDepex(self, Depex, Type): > > + List = None > > + if Type == 'Ppi': > > + List = EotGlobalData.gPpiList > > + if Type == 'Protocol': > > + List = EotGlobalData.gProtocolList > > + DepexStack = [] > > + DepexList = [] > > + DepexString = '' > > + FileDepex = None > > + CouldBeLoaded = True > > + for Index in range(0, len(Depex.Expression)): > > + Item = Depex.Expression[Index] > > + if Item == 0x00: > > + Index = Index + 1 > > + Guid = gGuidStringFormat % Depex.Expression[Index] > > + if Guid in self.OrderedFfsDict and Depex.Expression[Index + 1] == 0x08: > > + return (True, 'BEFORE %s' % Guid, [Guid, 'BEFORE']) > > + elif Item == 0x01: > 
> + Index = Index + 1 > > + Guid = gGuidStringFormat % Depex.Expression[Index] > > + if Guid in self.OrderedFfsDict and Depex.Expression[Index + 1] == 0x08: > > + return (True, 'AFTER %s' % Guid, [Guid, 'AFTER']) > > + elif Item == 0x02: > > + Index = Index + 1 > > + Guid = gGuidStringFormat % Depex.Expression[Index] > > + if Guid.lower() in List: > > + DepexStack.append(True) > > + DepexList.append(Guid) > > + else: > > + DepexStack.append(False) > > + DepexList.append(Guid) > > + continue > > + elif Item == 0x03 or Item == 0x04: > > + DepexStack.append(eval(str(DepexStack.pop()) + ' ' + Depex._OPCODE_STRING_[Item].lower() + ' ' + > str(DepexStack.pop()))) > > + DepexList.append(str(DepexList.pop()) + ' ' + Depex._OPCODE_STRING_[Item].upper() + ' ' + str(DepexList.pop())) > > + elif Item == 0x05: > > + DepexStack.append(eval(Depex._OPCODE_STRING_[Item].lower() + ' ' + str(DepexStack.pop()))) > > + DepexList.append(Depex._OPCODE_STRING_[Item].lower() + ' ' + str(DepexList.pop())) > > + elif Item == 0x06: > > + DepexStack.append(True) > > + DepexList.append('TRUE') > > + DepexString = DepexString + 'TRUE' + ' ' > > + elif Item == 0x07: > > + DepexStack.append(False) > > + DepexList.append('False') > > + DepexString = DepexString + 'FALSE' + ' ' > > + elif Item == 0x08: > > + if Index != len(Depex.Expression) - 1: > > + CouldBeLoaded = False > > + else: > > + CouldBeLoaded = DepexStack.pop() > > + else: > > + CouldBeLoaded = False > > + if DepexList != []: > > + DepexString = DepexList[0].strip() > > + return (CouldBeLoaded, DepexString, FileDepex) > > + > > + def Dispatch(self, Db = None): > > + if Db is None: > > + return False > > + self.UnDispatchedFfsDict = copy.copy(self.FfsDict) > > + # Find PeiCore, DexCore, PeiPriori, DxePriori first > > + FfsSecCoreGuid = None > > + FfsPeiCoreGuid = None > > + FfsDxeCoreGuid = None > > + FfsPeiPrioriGuid = None > > + FfsDxePrioriGuid = None > > + for FfsID in self.UnDispatchedFfsDict: > > + Ffs = self.UnDispatchedFfsDict[FfsID] > > + if Ffs.Type == 0x03: > > + FfsSecCoreGuid = FfsID > > + continue > > + if Ffs.Type == 0x04: > > + FfsPeiCoreGuid = FfsID > > + continue > > + if Ffs.Type == 0x05: > > + FfsDxeCoreGuid = FfsID > > + continue > > + if Ffs.Guid.lower() == PEI_APRIORI_GUID.lower(): > > + FfsPeiPrioriGuid = FfsID > > + continue > > + if Ffs.Guid.lower() == DXE_APRIORI_GUID.lower(): > > + FfsDxePrioriGuid = FfsID > > + continue > > + > > + # Parse SEC_CORE first > > + if FfsSecCoreGuid is not None: > > + self.OrderedFfsDict[FfsSecCoreGuid] = self.UnDispatchedFfsDict.pop(FfsSecCoreGuid) > > + self.LoadPpi(Db, FfsSecCoreGuid) > > + > > + # Parse PEI first > > + if FfsPeiCoreGuid is not None: > > + self.OrderedFfsDict[FfsPeiCoreGuid] = self.UnDispatchedFfsDict.pop(FfsPeiCoreGuid) > > + self.LoadPpi(Db, FfsPeiCoreGuid) > > + if FfsPeiPrioriGuid is not None: > > + # Load PEIM described in priori file > > + FfsPeiPriori = self.UnDispatchedFfsDict.pop(FfsPeiPrioriGuid) > > + if len(FfsPeiPriori.Sections) == 1: > > + Section = FfsPeiPriori.Sections.popitem()[1] > > + if Section.Type == 0x19: > > + GuidStruct = struct.Struct('1I2H8B') > > + Start = 4 > > + while len(Section) > Start: > > + Guid = GuidStruct.unpack_from(Section[Start : Start + 16]) > > + GuidString = gGuidStringFormat % Guid > > + Start = Start + 16 > > + if GuidString in self.UnDispatchedFfsDict: > > + self.OrderedFfsDict[GuidString] = self.UnDispatchedFfsDict.pop(GuidString) > > + self.LoadPpi(Db, GuidString) > > + > > + self.DisPatchPei(Db) > > + > > + # Parse DXE then > > + if 
FfsDxeCoreGuid is not None: > > + self.OrderedFfsDict[FfsDxeCoreGuid] = self.UnDispatchedFfsDict.pop(FfsDxeCoreGuid) > > + self.LoadProtocol(Db, FfsDxeCoreGuid) > > + if FfsDxePrioriGuid is not None: > > + # Load PEIM described in priori file > > + FfsDxePriori = self.UnDispatchedFfsDict.pop(FfsDxePrioriGuid) > > + if len(FfsDxePriori.Sections) == 1: > > + Section = FfsDxePriori.Sections.popitem()[1] > > + if Section.Type == 0x19: > > + GuidStruct = struct.Struct('1I2H8B') > > + Start = 4 > > + while len(Section) > Start: > > + Guid = GuidStruct.unpack_from(Section[Start : Start + 16]) > > + GuidString = gGuidStringFormat % Guid > > + Start = Start + 16 > > + if GuidString in self.UnDispatchedFfsDict: > > + self.OrderedFfsDict[GuidString] = self.UnDispatchedFfsDict.pop(GuidString) > > + self.LoadProtocol(Db, GuidString) > > + > > + self.DisPatchDxe(Db) > > + > > + def LoadProtocol(self, Db, ModuleGuid): > > + SqlCommand = """select GuidValue from Report > > + where SourceFileFullPath in > > + (select Value1 from Inf where BelongsToFile = > > + (select BelongsToFile from Inf > > + where Value1 = 'FILE_GUID' and Value2 like '%s' and Model = %s) > > + and Model = %s) > > + and ItemType = 'Protocol' and ItemMode = 'Produced'""" \ > > + % (ModuleGuid, 5001, 3007) > > + RecordSet = Db.TblReport.Exec(SqlCommand) > > + for Record in RecordSet: > > + SqlCommand = """select Value2 from Inf where BelongsToFile = > > + (select DISTINCT BelongsToFile from Inf > > + where Value1 = > > + (select SourceFileFullPath from Report > > + where GuidValue like '%s' and ItemMode = 'Callback')) > > + and Value1 = 'FILE_GUID'""" % Record[0] > > + CallBackSet = Db.TblReport.Exec(SqlCommand) > > + if CallBackSet != []: > > + EotGlobalData.gProtocolList[Record[0].lower()] = ModuleGuid > > + else: > > + EotGlobalData.gProtocolList[Record[0].lower()] = ModuleGuid > > + > > + def LoadPpi(self, Db, ModuleGuid): > > + SqlCommand = """select GuidValue from Report > > + where SourceFileFullPath in > > + (select Value1 from Inf where BelongsToFile = > > + (select BelongsToFile from Inf > > + where Value1 = 'FILE_GUID' and Value2 like '%s' and Model = %s) > > + and Model = %s) > > + and ItemType = 'Ppi' and ItemMode = 'Produced'""" \ > > + % (ModuleGuid, 5001, 3007) > > + RecordSet = Db.TblReport.Exec(SqlCommand) > > + for Record in RecordSet: > > + EotGlobalData.gPpiList[Record[0].lower()] = ModuleGuid > > + > > + def DisPatchDxe(self, Db): > > + IsInstalled = False > > + ScheduleList = sdict() > > + for FfsID in self.UnDispatchedFfsDict: > > + CouldBeLoaded = False > > + DepexString = '' > > + FileDepex = None > > + Ffs = self.UnDispatchedFfsDict[FfsID] > > + if Ffs.Type == 0x07: > > + # Get Depex > > + IsFoundDepex = False > > + for Section in Ffs.Sections.values(): > > + # Find Depex > > + if Section.Type == 0x13: > > + IsFoundDepex = True > > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(Section._SubImages[4], 'Protocol') > > + break > > + if Section.Type == 0x01: > > + CompressSections = Section._SubImages[4] > > + for CompressSection in CompressSections.Sections: > > + if CompressSection.Type == 0x13: > > + IsFoundDepex = True > > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(CompressSection._SubImages[4], > 'Protocol') > > + break > > + if CompressSection.Type == 0x02: > > + NewSections = CompressSection._SubImages[4] > > + for NewSection in NewSections.Sections: > > + if NewSection.Type == 0x13: > > + IsFoundDepex = True > > + CouldBeLoaded, DepexString, FileDepex = > 
self.ParseDepex(NewSection._SubImages[4], 'Protocol') > > + break > > + > > + # Not find Depex > > + if not IsFoundDepex: > > + CouldBeLoaded = self.CheckArchProtocol() > > + DepexString = '' > > + FileDepex = None > > + > > + # Append New Ffs > > + if CouldBeLoaded: > > + IsInstalled = True > > + NewFfs = self.UnDispatchedFfsDict.pop(FfsID) > > + NewFfs.Depex = DepexString > > + if FileDepex is not None: > > + ScheduleList.insert(FileDepex[1], FfsID, NewFfs, FileDepex[0]) > > + else: > > + ScheduleList[FfsID] = NewFfs > > + else: > > + self.UnDispatchedFfsDict[FfsID].Depex = DepexString > > + > > + for FfsID in ScheduleList: > > + NewFfs = ScheduleList.pop(FfsID) > > + FfsName = 'UnKnown' > > + self.OrderedFfsDict[FfsID] = NewFfs > > + self.LoadProtocol(Db, FfsID) > > + > > + SqlCommand = """select Value2 from Inf > > + where BelongsToFile = (select BelongsToFile from Inf where Value1 = 'FILE_GUID' and > lower(Value2) = lower('%s') and Model = %s) > > + and Model = %s and Value1='BASE_NAME'""" % (FfsID, 5001, 5001) > > + RecordSet = Db.TblReport.Exec(SqlCommand) > > + if RecordSet != []: > > + FfsName = RecordSet[0][0] > > + > > + if IsInstalled: > > + self.DisPatchDxe(Db) > > + > > + def DisPatchPei(self, Db): > > + IsInstalled = False > > + for FfsID in self.UnDispatchedFfsDict: > > + CouldBeLoaded = True > > + DepexString = '' > > + FileDepex = None > > + Ffs = self.UnDispatchedFfsDict[FfsID] > > + if Ffs.Type == 0x06 or Ffs.Type == 0x08: > > + # Get Depex > > + for Section in Ffs.Sections.values(): > > + if Section.Type == 0x1B: > > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(Section._SubImages[4], 'Ppi') > > + break > > + if Section.Type == 0x01: > > + CompressSections = Section._SubImages[4] > > + for CompressSection in CompressSections.Sections: > > + if CompressSection.Type == 0x1B: > > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(CompressSection._SubImages[4], > 'Ppi') > > + break > > + if CompressSection.Type == 0x02: > > + NewSections = CompressSection._SubImages[4] > > + for NewSection in NewSections.Sections: > > + if NewSection.Type == 0x1B: > > + CouldBeLoaded, DepexString, FileDepex = > self.ParseDepex(NewSection._SubImages[4], 'Ppi') > > + break > > + > > + # Append New Ffs > > + if CouldBeLoaded: > > + IsInstalled = True > > + NewFfs = self.UnDispatchedFfsDict.pop(FfsID) > > + NewFfs.Depex = DepexString > > + self.OrderedFfsDict[FfsID] = NewFfs > > + self.LoadPpi(Db, FfsID) > > + else: > > + self.UnDispatchedFfsDict[FfsID].Depex = DepexString > > + > > + if IsInstalled: > > + self.DisPatchPei(Db) > > + > > + > > + def __str__(self): > > + global gIndention > > + gIndention += 4 > > + FvInfo = '\n' + ' ' * gIndention > > + FvInfo += "[FV:%s] file_system=%s size=%x checksum=%s\n" % (self.Name, self.FileSystemGuid, self.Size, self.Checksum) > > + FfsInfo = "\n".join([str(self.FfsDict[FfsId]) for FfsId in self.FfsDict]) > > + gIndention -= 4 > > + return FvInfo + FfsInfo > > + > > + def _Unpack(self): > > + Size = self._LENGTH_.unpack_from(self._BUF_, self._OFF_)[0] > > + self.empty() > > + self.extend(self._BUF_[self._OFF_:self._OFF_+Size]) > > + > > + # traverse the FFS > > + EndOfFv = Size > > + FfsStartAddress = self.HeaderSize > > + LastFfsObj = None > > + while FfsStartAddress < EndOfFv: > > + FfsObj = Ffs() > > + FfsObj.frombuffer(self, FfsStartAddress) > > + FfsId = repr(FfsObj) > > + if ((self.Attributes & 0x00000800) != 0 and len(FfsObj) == 0xFFFFFF) \ > > + or ((self.Attributes & 0x00000800) == 0 and len(FfsObj) == 0): > > + if 
LastFfsObj is not None: > > + LastFfsObj.FreeSpace = EndOfFv - LastFfsObj._OFF_ - len(LastFfsObj) > > + else: > > + if FfsId in self.FfsDict: > > + EdkLogger.error("FV", 0, "Duplicate GUID in FFS", > > + ExtraData="\t%s @ %s\n\t%s @ %s" \ > > + % (FfsObj.Guid, FfsObj.Offset, > > + self.FfsDict[FfsId].Guid, self.FfsDict[FfsId].Offset)) > > + self.FfsDict[FfsId] = FfsObj > > + if LastFfsObj is not None: > > + LastFfsObj.FreeSpace = FfsStartAddress - LastFfsObj._OFF_ - len(LastFfsObj) > > + > > + FfsStartAddress += len(FfsObj) > > + # > > + # align to next 8-byte aligned address: A = (A + 8 - 1) & (~(8 - 1)) > > + # The next FFS must be at the latest next 8-byte aligned address > > + # > > + FfsStartAddress = (FfsStartAddress + 7) & (~7) > > + LastFfsObj = FfsObj > > + > > + def _GetAttributes(self): > > + return self.GetField(self._ATTR_, 0)[0] > > + > > + def _GetSize(self): > > + return self.GetField(self._LENGTH_, 0)[0] > > + > > + def _GetChecksum(self): > > + return self.GetField(self._CHECKSUM_, 0)[0] > > + > > + def _GetHeaderLength(self): > > + return self.GetField(self._HLEN_, 0)[0] > > + > > + def _GetFileSystemGuid(self): > > + return gGuidStringFormat % self.GetField(self._GUID_, 0) > > + > > + Attributes = property(_GetAttributes) > > + Size = property(_GetSize) > > + Checksum = property(_GetChecksum) > > + HeaderSize = property(_GetHeaderLength) > > + FileSystemGuid = property(_GetFileSystemGuid) > > + > > ## MultipleFv() class > > # > > # A class for Multiple FV > > diff --git a/BaseTools/Source/Python/GenFds/AprioriSection.py b/BaseTools/Source/Python/GenFds/AprioriSection.py > > index 7196f7f2c753..e7b856a1150a 100644 > > --- a/BaseTools/Source/Python/GenFds/AprioriSection.py > > +++ b/BaseTools/Source/Python/GenFds/AprioriSection.py > > @@ -16,28 +16,31 @@ > > # Import Modules > > # > > from __future__ import absolute_import > > -from struct import * > > +from struct import pack > > import Common.LongFilePathOs as os > > from io import BytesIO > > -from . 
import FfsFileStatement > > +from .FfsFileStatement import FileStatement > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > -from CommonDataClass.FdfClass import AprioriSectionClassObject > > -from Common.StringUtils import * > > +from Common.StringUtils import NormPath > > from Common.Misc import SaveFileOnChange, PathClass > > -from Common import EdkLogger > > -from Common.BuildToolError import * > > +from Common.EdkLogger import error as EdkLoggerError > > +from Common.BuildToolError import RESOURCE_NOT_AVAILABLE > > from Common.DataType import TAB_COMMON > > > > +DXE_APRIORI_GUID = "FC510EE7-FFDC-11D4-BD41-0080C73C8881" > > +PEI_APRIORI_GUID = "1B45CC0A-156A-428A-AF62-49864DA0E6E6" > > + > > ## process APRIORI file data and generate PEI/DXE APRIORI file > > # > > # > > -class AprioriSection (AprioriSectionClassObject): > > +class AprioriSection (object): > > ## The constructor > > # > > # @param self The object pointer > > # > > def __init__(self): > > - AprioriSectionClassObject.__init__(self) > > + self.DefineVarDict = {} > > + self.FfsList = [] > > self.AprioriType = "" > > > > ## GenFfs() method > > @@ -50,16 +53,16 @@ class AprioriSection (AprioriSectionClassObject): > > # @retval string Generated file name > > # > > def GenFfs (self, FvName, Dict = {}, IsMakefile = False): > > - DXE_GUID = "FC510EE7-FFDC-11D4-BD41-0080C73C8881" > > - PEI_GUID = "1B45CC0A-156A-428A-AF62-49864DA0E6E6" > > Buffer = BytesIO('') > > - AprioriFileGuid = DXE_GUID > > if self.AprioriType == "PEI": > > - AprioriFileGuid = PEI_GUID > > + AprioriFileGuid = PEI_APRIORI_GUID > > + else: > > + AprioriFileGuid = DXE_APRIORI_GUID > > + > > OutputAprFilePath = os.path.join (GenFdsGlobalVariable.WorkSpaceDir, \ > > GenFdsGlobalVariable.FfsDir,\ > > AprioriFileGuid + FvName) > > - if not os.path.exists(OutputAprFilePath) : > > + if not os.path.exists(OutputAprFilePath): > > os.makedirs(OutputAprFilePath) > > > > OutputAprFileName = os.path.join( OutputAprFilePath, \ > > @@ -69,33 +72,30 @@ class AprioriSection (AprioriSectionClassObject): > > > > Dict.update(self.DefineVarDict) > > InfFileName = None > > - for FfsObj in self.FfsList : > > + for FfsObj in self.FfsList: > > Guid = "" > > - if isinstance(FfsObj, FfsFileStatement.FileStatement): > > + if isinstance(FfsObj, FileStatement): > > Guid = FfsObj.NameGuid > > else: > > InfFileName = NormPath(FfsObj.InfFileName) > > Arch = FfsObj.GetCurrentArch() > > > > - if Arch is not None: > > + if Arch: > > Dict['$(ARCH)'] = Arch > > InfFileName = GenFdsGlobalVariable.MacroExtend(InfFileName, Dict, Arch) > > > > - if Arch is not None: > > + if Arch: > > Inf = GenFdsGlobalVariable.WorkSpace.BuildObject[PathClass(InfFileName, > GenFdsGlobalVariable.WorkSpaceDir), Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] > > Guid = Inf.Guid > > - > > else: > > Inf = GenFdsGlobalVariable.WorkSpace.BuildObject[PathClass(InfFileName, > GenFdsGlobalVariable.WorkSpaceDir), TAB_COMMON, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] > > Guid = Inf.Guid > > > > - self.BinFileList = Inf.Module.Binaries > > - if self.BinFileList == []: > > - EdkLogger.error("GenFds", RESOURCE_NOT_AVAILABLE, > > + if not Inf.Module.Binaries: > > + EdkLoggerError("GenFds", RESOURCE_NOT_AVAILABLE, > > "INF %s not found in build ARCH %s!" 
\ > > % (InfFileName, GenFdsGlobalVariable.ArchList)) > > > > - > > GuidPart = Guid.split('-') > > Buffer.write(pack('I', long(GuidPart[0], 16))) > > Buffer.write(pack('H', int(GuidPart[1], 16))) > > @@ -123,4 +123,3 @@ class AprioriSection (AprioriSectionClassObject): > > 'EFI_FV_FILETYPE_FREEFORM', AprioriFileGuid, MakefilePath=MakefilePath) > > > > return AprFfsFileName > > - > > diff --git a/BaseTools/Source/Python/GenFds/Capsule.py b/BaseTools/Source/Python/GenFds/Capsule.py > > index baa6e1dfa2ab..df29c40dbd20 100644 > > --- a/BaseTools/Source/Python/GenFds/Capsule.py > > +++ b/BaseTools/Source/Python/GenFds/Capsule.py > > @@ -16,22 +16,18 @@ > > # Import Modules > > # > > from __future__ import absolute_import > > -from .GenFdsGlobalVariable import GenFdsGlobalVariable > > -from .GenFdsGlobalVariable import FindExtendTool > > +from .GenFdsGlobalVariable import GenFdsGlobalVariable, FindExtendTool > > from CommonDataClass.FdfClass import CapsuleClassObject > > import Common.LongFilePathOs as os > > -import subprocess > > from io import BytesIO > > -from Common.Misc import SaveFileOnChange > > -from Common.Misc import PackRegistryFormatGuid > > +from Common.Misc import SaveFileOnChange, PackRegistryFormatGuid > > import uuid > > from struct import pack > > from Common import EdkLogger > > -from Common.BuildToolError import * > > +from Common.BuildToolError import GENFDS_ERROR > > +from Common.DataType import TAB_LINE_BREAK > > > > - > > -T_CHAR_LF = '\n' > > -WIN_CERT_REVISION = 0x0200 > > +WIN_CERT_REVISION = 0x0200 > > WIN_CERT_TYPE_EFI_GUID = 0x0EF1 > > EFI_CERT_TYPE_PKCS7_GUID = uuid.UUID('{4aafd29d-68df-49ee-8aa9-347d375665a7}') > > EFI_CERT_TYPE_RSA2048_SHA256_GUID = uuid.UUID('{a7717414-c616-4977-9420-844712a735bf}') > > @@ -39,7 +35,7 @@ EFI_CERT_TYPE_RSA2048_SHA256_GUID = uuid.UUID('{a7717414-c616-4977-9420-844712a7 > > ## create inf file describes what goes into capsule and call GenFv to generate capsule > > # > > # > > -class Capsule (CapsuleClassObject) : > > +class Capsule (CapsuleClassObject): > > ## The constructor > > # > > # @param self The object pointer > > @@ -210,16 +206,16 @@ class Capsule (CapsuleClassObject) : > > return self.GenFmpCapsule() > > > > CapInfFile = self.GenCapInf() > > - CapInfFile.writelines("[files]" + T_CHAR_LF) > > + CapInfFile.writelines("[files]" + TAB_LINE_BREAK) > > CapFileList = [] > > - for CapsuleDataObj in self.CapsuleDataList : > > + for CapsuleDataObj in self.CapsuleDataList: > > CapsuleDataObj.CapsuleName = self.CapsuleName > > FileName = CapsuleDataObj.GenCapsuleSubItem() > > CapsuleDataObj.CapsuleName = None > > CapFileList.append(FileName) > > CapInfFile.writelines("EFI_FILE_NAME = " + \ > > FileName + \ > > - T_CHAR_LF) > > + TAB_LINE_BREAK) > > SaveFileOnChange(self.CapInfFileName, CapInfFile.getvalue(), False) > > CapInfFile.close() > > # > > @@ -249,13 +245,13 @@ class Capsule (CapsuleClassObject) : > > self.UiCapsuleName + "_Cap" + '.inf') > > CapInfFile = BytesIO() #open (self.CapInfFileName , 'w+') > > > > - CapInfFile.writelines("[options]" + T_CHAR_LF) > > + CapInfFile.writelines("[options]" + TAB_LINE_BREAK) > > > > for Item in self.TokensDict: > > CapInfFile.writelines("EFI_" + \ > > Item + \ > > ' = ' + \ > > self.TokensDict[Item] + \ > > - T_CHAR_LF) > > + TAB_LINE_BREAK) > > > > return CapInfFile > > diff --git a/BaseTools/Source/Python/GenFds/CapsuleData.py b/BaseTools/Source/Python/GenFds/CapsuleData.py > > index 0caba8983d2e..6071e9f4a545 100644 > > --- a/BaseTools/Source/Python/GenFds/CapsuleData.py > > +++ 
b/BaseTools/Source/Python/GenFds/CapsuleData.py > > @@ -16,7 +16,6 @@ > > # Import Modules > > # > > from __future__ import absolute_import > > -from . import Ffs > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > from io import BytesIO > > from struct import pack > > diff --git a/BaseTools/Source/Python/GenFds/CompressSection.py b/BaseTools/Source/Python/GenFds/CompressSection.py > > index aaaabf84dca8..51f6791fab19 100644 > > --- a/BaseTools/Source/Python/GenFds/CompressSection.py > > +++ b/BaseTools/Source/Python/GenFds/CompressSection.py > > @@ -16,7 +16,7 @@ > > # Import Modules > > # > > from __future__ import absolute_import > > -from .Ffs import Ffs > > +from .Ffs import SectionSuffix > > from . import Section > > import subprocess > > import Common.LongFilePathOs as os > > @@ -86,7 +86,7 @@ class CompressSection (CompressSectionClassObject) : > > ModuleName + \ > > SUP_MODULE_SEC + \ > > SecNum + \ > > - Ffs.SectionSuffix['COMPRESS'] > > + SectionSuffix['COMPRESS'] > > OutputFile = os.path.normpath(OutputFile) > > DummyFile = OutputFile + '.dummy' > > GenFdsGlobalVariable.GenerateSection(DummyFile, SectFiles, InputAlign=SectAlign, IsMakefile=IsMakefile) > > diff --git a/BaseTools/Source/Python/GenFds/DataSection.py b/BaseTools/Source/Python/GenFds/DataSection.py > > index a6387b07c582..28f9b931ca4b 100644 > > --- a/BaseTools/Source/Python/GenFds/DataSection.py > > +++ b/BaseTools/Source/Python/GenFds/DataSection.py > > @@ -19,7 +19,7 @@ from __future__ import absolute_import > > from . import Section > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > import subprocess > > -from .Ffs import Ffs > > +from .Ffs import SectionSuffix > > import Common.LongFilePathOs as os > > from CommonDataClass.FdfClass import DataSectionClassObject > > from Common.Misc import PeImageClass > > @@ -121,7 +121,7 @@ class DataSection (DataSectionClassObject): > > ) > > self.SectFileName = TeFile > > > > - OutputFile = os.path.join (OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get(self.SecType)) > > + OutputFile = os.path.join (OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get(self.SecType)) > > OutputFile = os.path.normpath(OutputFile) > > GenFdsGlobalVariable.GenerateSection(OutputFile, [self.SectFileName], Section.Section.SectionType.get(self.SecType), > IsMakefile = IsMakefile) > > FileList = [OutputFile] > > diff --git a/BaseTools/Source/Python/GenFds/DepexSection.py b/BaseTools/Source/Python/GenFds/DepexSection.py > > index b2d123bfc045..78efc18816e8 100644 > > --- a/BaseTools/Source/Python/GenFds/DepexSection.py > > +++ b/BaseTools/Source/Python/GenFds/DepexSection.py > > @@ -18,8 +18,6 @@ > > from __future__ import absolute_import > > from . 
import Section > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > -import subprocess > > -from .Ffs import Ffs > > import Common.LongFilePathOs as os > > from CommonDataClass.FdfClass import DepexSectionClassObject > > from AutoGen.GenDepex import DependencyExpression > > @@ -46,9 +44,8 @@ class DepexSection (DepexSectionClassObject): > > GenFdsGlobalVariable.TargetName, > > GenFdsGlobalVariable.ToolChainTag) > > for Inf in GenFdsGlobalVariable.FdfParser.Profile.InfList: > > - ModuleFile = PathClass(Inf, GenFdsGlobalVariable.WorkSpaceDir) > > ModuleData = GenFdsGlobalVariable.WorkSpace.BuildObject[ > > - ModuleFile, > > + PathClass(Inf, GenFdsGlobalVariable.WorkSpaceDir), > > Arch, > > GenFdsGlobalVariable.TargetName, > > GenFdsGlobalVariable.ToolChainTag > > diff --git a/BaseTools/Source/Python/GenFds/EfiSection.py b/BaseTools/Source/Python/GenFds/EfiSection.py > > index 623b77d27427..f8573b5c7d1b 100644 > > --- a/BaseTools/Source/Python/GenFds/EfiSection.py > > +++ b/BaseTools/Source/Python/GenFds/EfiSection.py > > @@ -20,7 +20,7 @@ from struct import * > > from . import Section > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > import subprocess > > -from .Ffs import Ffs > > +from .Ffs import SectionSuffix > > import Common.LongFilePathOs as os > > from CommonDataClass.FdfClass import EfiSectionClassObject > > from Common import EdkLogger > > @@ -124,7 +124,7 @@ class EfiSection (EfiSectionClassObject): > > BuildNumTuple = tuple() > > > > Num = SecNum > > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > Ffs.SectionSuffix.get(SectionType)) > > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > SectionSuffix.get(SectionType)) > > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', > > #Ui=StringData, > > Ver=BuildNum, > > @@ -135,7 +135,7 @@ class EfiSection (EfiSectionClassObject): > > for File in FileList: > > Index = Index + 1 > > Num = '%s.%d' %(SecNum, Index) > > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > Ffs.SectionSuffix.get(SectionType)) > > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > SectionSuffix.get(SectionType)) > > f = open(File, 'r') > > VerString = f.read() > > f.close() > > @@ -164,7 +164,7 @@ class EfiSection (EfiSectionClassObject): > > else: > > EdkLogger.error("GenFds", GENFDS_ERROR, "File: %s miss Version Section value" %InfFileName) > > Num = SecNum > > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > Ffs.SectionSuffix.get(SectionType)) > > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > SectionSuffix.get(SectionType)) > > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', > > #Ui=VerString, > > Ver=BuildNum, > > @@ -185,7 +185,7 @@ class EfiSection (EfiSectionClassObject): > > Num = SecNum > > if IsMakefile and StringData == ModuleNameStr: > > StringData = "$(MODULE_NAME)" > > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > Ffs.SectionSuffix.get(SectionType)) > > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > SectionSuffix.get(SectionType)) > > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_USER_INTERFACE', > > Ui=StringData, IsMakefile=IsMakefile) > > OutputFileList.append(OutputFile) > > @@ -194,7 +194,7 @@ class EfiSection (EfiSectionClassObject): > > for File in FileList: > > Index = Index + 1 > 
> Num = '%s.%d' %(SecNum, Index) > > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > Ffs.SectionSuffix.get(SectionType)) > > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > SectionSuffix.get(SectionType)) > > f = open(File, 'r') > > UiString = f.read() > > f.close() > > @@ -218,7 +218,7 @@ class EfiSection (EfiSectionClassObject): > > Num = SecNum > > if IsMakefile and StringData == ModuleNameStr: > > StringData = "$(MODULE_NAME)" > > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > Ffs.SectionSuffix.get(SectionType)) > > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > SectionSuffix.get(SectionType)) > > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_USER_INTERFACE', > > Ui=StringData, IsMakefile=IsMakefile) > > OutputFileList.append(OutputFile) > > @@ -239,7 +239,7 @@ class EfiSection (EfiSectionClassObject): > > """ Copy Map file to FFS output path """ > > Index = Index + 1 > > Num = '%s.%d' %(SecNum, Index) > > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + Num + > Ffs.SectionSuffix.get(SectionType)) > > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + Num + > SectionSuffix.get(SectionType)) > > File = GenFdsGlobalVariable.MacroExtend(File, Dict) > > > > #Get PE Section alignment when align is set to AUTO > > diff --git a/BaseTools/Source/Python/GenFds/FdfParser.py b/BaseTools/Source/Python/GenFds/FdfParser.py > > index 63687e98bb31..62bf9f14012a 100644 > > --- a/BaseTools/Source/Python/GenFds/FdfParser.py > > +++ b/BaseTools/Source/Python/GenFds/FdfParser.py > > @@ -18,79 +18,66 @@ > > # > > from __future__ import print_function > > from __future__ import absolute_import > > -import re > > +from re import compile, DOTALL > > +from string import hexdigits > > +from uuid import UUID > > > > -from . import Fd > > -from . import Region > > -from . import Fv > > -from . import AprioriSection > > -from . import FfsInfStatement > > -from . import FfsFileStatement > > -from . import VerSection > > -from . import UiSection > > -from . import FvImageSection > > -from . import DataSection > > -from . import DepexSection > > -from . import CompressSection > > -from . import GuidSection > > -from . import Capsule > > -from . import CapsuleData > > -from . import Rule > > -from . import RuleComplexFile > > -from . import RuleSimpleFile > > -from . import EfiSection > > -from . import Vtf > > -from . import ComponentStatement > > -from . import OptionRom > > -from . import OptRomInfStatement > > -from . 
import OptRomFileStatement > > -import string > > - > > -from .GenFdsGlobalVariable import GenFdsGlobalVariable > > from Common.BuildToolError import * > > from Common import EdkLogger > > -from Common.Misc import PathClass > > -from Common.StringUtils import NormPath > > -import Common.GlobalData as GlobalData > > -from Common.Expression import * > > +from Common.Misc import PathClass, tdict > > +from Common.StringUtils import NormPath, ReplaceMacro > > from Common import GlobalData > > +from Common.Expression import * > > from Common.DataType import * > > -from Common.StringUtils import ReplaceMacro > > -import uuid > > -from Common.Misc import tdict > > from Common.MultipleWorkspace import MultipleWorkspace as mws > > import Common.LongFilePathOs as os > > from Common.LongFilePathSupport import OpenLongFilePath as open > > -from .Capsule import EFI_CERT_TYPE_PKCS7_GUID > > -from .Capsule import EFI_CERT_TYPE_RSA2048_SHA256_GUID > > from Common.RangeExpression import RangeExpression > > from collections import OrderedDict > > > > -##define T_CHAR_SPACE ' ' > > -##define T_CHAR_NULL '\0' > > -##define T_CHAR_CR '\r' > > -##define T_CHAR_TAB '\t' > > -##define T_CHAR_LF '\n' > > -##define T_CHAR_SLASH '/' > > -##define T_CHAR_BACKSLASH '\\' > > -##define T_CHAR_DOUBLE_QUOTE '\"' > > -##define T_CHAR_SINGLE_QUOTE '\'' > > -##define T_CHAR_STAR '*' > > -##define T_CHAR_HASH '#' > > +from .Fd import FD > > +from .Region import Region > > +from .Fv import FV > > +from .AprioriSection import AprioriSection > > +from .FfsInfStatement import FfsInfStatement > > +from .FfsFileStatement import FileStatement > > +from .VerSection import VerSection > > +from .UiSection import UiSection > > +from .FvImageSection import FvImageSection > > +from .DataSection import DataSection > > +from .DepexSection import DepexSection > > +from .CompressSection import CompressSection > > +from .GuidSection import GuidSection > > +from .Capsule import EFI_CERT_TYPE_PKCS7_GUID, EFI_CERT_TYPE_RSA2048_SHA256_GUID, Capsule > > +from .CapsuleData import CapsuleFfs, CapsulePayload, CapsuleFv, CapsuleFd, CapsuleAnyFile, CapsuleAfile > > +from .RuleComplexFile import RuleComplexFile > > +from .RuleSimpleFile import RuleSimpleFile > > +from .EfiSection import EfiSection > > +from .Vtf import Vtf > > +from .ComponentStatement import ComponentStatement > > +from .OptionRom import OPTIONROM > > +from .OptRomInfStatement import OptRomInfStatement, OverrideAttribs > > +from .OptRomFileStatement import OptRomFileStatement > > +from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > -(T_CHAR_SPACE, T_CHAR_NULL, T_CHAR_CR, T_CHAR_TAB, T_CHAR_LF, T_CHAR_SLASH, \ > > -T_CHAR_BACKSLASH, T_CHAR_DOUBLE_QUOTE, T_CHAR_SINGLE_QUOTE, T_CHAR_STAR, T_CHAR_HASH) = \ > > -(' ', '\0', '\r', '\t', '\n', '/', '\\', '\"', '\'', '*', '#') > > +T_CHAR_CR = '\r' > > +T_CHAR_TAB = '\t' > > +T_CHAR_DOUBLE_QUOTE = '\"' > > +T_CHAR_SINGLE_QUOTE = '\'' > > +T_CHAR_STAR = '*' > > > > -SEPERATOR_TUPLE = ('=', '|', ',', '{', '}') > > +SEPARATORS = {TAB_EQUAL_SPLIT, TAB_VALUE_SPLIT, TAB_COMMA_SPLIT, '{', '}'} > > +ALIGNMENTS = {"Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > + "256K", "512K", "1M", "2M", "4M", "8M", "16M"} > > +ALIGNMENT_NOAUTO = ALIGNMENTS - {"Auto"} > > > > -RegionSizePattern = re.compile("\s*(?P<base>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<size>(?:0x|0X)?[a-fA-F0-9]+)\s*") > > -RegionSizeGuidPattern = re.compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*\|\s*(?P<size>\w+\.\w+[\.\w\[\]]*)\s*") > > 
-RegionOffsetPcdPattern = re.compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*$") > > -ShortcutPcdPattern = re.compile("\s*\w+\s*=\s*(?P<value>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<name>\w+\.\w+)\s*") > > -BaseAddrValuePattern = re.compile('^0[xX][0-9a-fA-F]+') > > -FileExtensionPattern = re.compile(r'([a-zA-Z][a-zA-Z0-9]*)') > > -TokenFindPattern = > re.compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\)|\*)') > > +RegionSizePattern = compile("\s*(?P<base>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<size>(?:0x|0X)?[a-fA-F0-9]+)\s*") > > +RegionSizeGuidPattern = compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*\|\s*(?P<size>\w+\.\w+[\.\w\[\]]*)\s*") > > +RegionOffsetPcdPattern = compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*$") > > +ShortcutPcdPattern = compile("\s*\w+\s*=\s*(?P<value>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<name>\w+\.\w+)\s*") > > +BaseAddrValuePattern = compile('^0[xX][0-9a-fA-F]+') > > +FileExtensionPattern = compile(r'([a-zA-Z][a-zA-Z0-9]*)') > > +TokenFindPattern = > compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\)|\*)') > > AllIncludeFileList = [] > > > > # Get the closest parent > > @@ -109,7 +96,6 @@ def IsValidInclude (File, Line): > > return True > > > > def GetRealFileLine (File, Line): > > - > > InsertedLines = 0 > > for Profile in AllIncludeFileList: > > if Profile.IsLineInFile(Line): > > @@ -121,7 +107,7 @@ def GetRealFileLine (File, Line): > > > > ## The exception class that used to report error messages when parsing FDF > > # > > -# Currently the "ToolName" is set to be "FDF Parser". > > +# Currently the "ToolName" is set to be "FdfParser". > > # > > class Warning (Exception): > > ## The constructor > > @@ -132,7 +118,6 @@ class Warning (Exception): > > # @param Line The Line number that error occurs > > # > > def __init__(self, Str, File = None, Line = None): > > - > > FileLineTuple = GetRealFileLine(File, Line) > > self.FileName = FileLineTuple[0] > > self.LineNumber = FileLineTuple[1] > > @@ -147,7 +132,7 @@ class Warning (Exception): > > # > > # May raise Exception when opening file. > > # > > -class IncludeFileProfile : > > +class IncludeFileProfile: > > ## The constructor > > # > > # @param self The object pointer > > @@ -157,16 +142,11 @@ class IncludeFileProfile : > > self.FileName = FileName > > self.FileLinesList = [] > > try: > > - fsock = open(FileName, "rb", 0) > > - try: > > + with open(FileName, "rb", 0) as fsock: > > self.FileLinesList = fsock.readlines() > > for index, line in enumerate(self.FileLinesList): > > - if not line.endswith('\n'): > > - self.FileLinesList[index] += '\n' > > - > > - finally: > > - fsock.close() > > - > > + if not line.endswith(TAB_LINE_BREAK): > > + self.FileLinesList[index] += TAB_LINE_BREAK > > except: > > EdkLogger.error("FdfParser", FILE_OPEN_FAILURE, ExtraData=FileName) > > > > @@ -209,7 +189,7 @@ class IncludeFileProfile : > > # > > # May raise Exception when opening file. 
> > # > > -class FileProfile : > > +class FileProfile: > > ## The constructor > > # > > # @param self The object pointer > > @@ -218,11 +198,8 @@ class FileProfile : > > def __init__(self, FileName): > > self.FileLinesList = [] > > try: > > - fsock = open(FileName, "rb", 0) > > - try: > > + with open(FileName, "rb", 0) as fsock: > > self.FileLinesList = fsock.readlines() > > - finally: > > - fsock.close() > > > > except: > > EdkLogger.error("FdfParser", FILE_OPEN_FAILURE, ExtraData=FileName) > > @@ -266,41 +243,37 @@ class FdfParser: > > self.CurrentOffsetWithinLine = 0 > > self.CurrentFdName = None > > self.CurrentFvName = None > > - self.__Token = "" > > - self.__SkippedChars = "" > > + self._Token = "" > > + self._SkippedChars = "" > > GlobalData.gFdfParser = self > > > > # Used to section info > > - self.__CurSection = [] > > + self._CurSection = [] > > # Key: [section name, UI name, arch] > > - # Value: {MACRO_NAME : MACRO_VALUE} > > - self.__MacroDict = tdict(True, 3) > > - self.__PcdDict = OrderedDict() > > + # Value: {MACRO_NAME: MACRO_VALUE} > > + self._MacroDict = tdict(True, 3) > > + self._PcdDict = OrderedDict() > > > > - self.__WipeOffArea = [] > > + self._WipeOffArea = [] > > if GenFdsGlobalVariable.WorkSpaceDir == '': > > GenFdsGlobalVariable.WorkSpaceDir = os.getenv("WORKSPACE") > > > > - ## __SkipWhiteSpace() method > > + ## _SkipWhiteSpace() method > > # > > - # Skip white spaces from current char, return number of chars skipped > > + # Skip white spaces from current char. > > # > > # @param self The object pointer > > - # @retval Count The number of chars skipped > > # > > - def __SkipWhiteSpace(self): > > - Count = 0 > > - while not self.__EndOfFile(): > > - Count += 1 > > - if self.__CurrentChar() in (T_CHAR_NULL, T_CHAR_CR, T_CHAR_LF, T_CHAR_SPACE, T_CHAR_TAB): > > - self.__SkippedChars += str(self.__CurrentChar()) > > - self.__GetOneChar() > > - > > + def _SkipWhiteSpace(self): > > + while not self._EndOfFile(): > > + if self._CurrentChar() in (TAB_PRINTCHAR_NUL, T_CHAR_CR, TAB_LINE_BREAK, TAB_SPACE_SPLIT, T_CHAR_TAB): > > + self._SkippedChars += str(self._CurrentChar()) > > + self._GetOneChar() > > else: > > - Count = Count - 1 > > - return Count > > + return > > + return > > > > - ## __EndOfFile() method > > + ## _EndOfFile() method > > # > > # Judge current buffer pos is at file end > > # > > @@ -308,17 +281,16 @@ class FdfParser: > > # @retval True Current File buffer position is at file end > > # @retval False Current File buffer position is NOT at file end > > # > > - def __EndOfFile(self): > > + def _EndOfFile(self): > > NumberOfLines = len(self.Profile.FileLinesList) > > SizeOfLastLine = len(self.Profile.FileLinesList[-1]) > > if self.CurrentLineNumber == NumberOfLines and self.CurrentOffsetWithinLine >= SizeOfLastLine - 1: > > return True > > - elif self.CurrentLineNumber > NumberOfLines: > > + if self.CurrentLineNumber > NumberOfLines: > > return True > > - else: > > - return False > > + return False > > > > - ## __EndOfLine() method > > + ## _EndOfLine() method > > # > > # Judge current buffer pos is at line end > > # > > @@ -326,14 +298,13 @@ class FdfParser: > > # @retval True Current File buffer position is at line end > > # @retval False Current File buffer position is NOT at line end > > # > > - def __EndOfLine(self): > > + def _EndOfLine(self): > > if self.CurrentLineNumber > len(self.Profile.FileLinesList): > > return True > > SizeOfCurrentLine = len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) > > if self.CurrentOffsetWithinLine >= 
SizeOfCurrentLine: > > return True > > - else: > > - return False > > + return False > > > > ## Rewind() method > > # > > @@ -347,7 +318,7 @@ class FdfParser: > > self.CurrentLineNumber = DestLine > > self.CurrentOffsetWithinLine = DestOffset > > > > - ## __UndoOneChar() method > > + ## _UndoOneChar() method > > # > > # Go back one char in the file buffer > > # > > @@ -355,80 +326,78 @@ class FdfParser: > > # @retval True Successfully go back one char > > # @retval False Not able to go back one char as file beginning reached > > # > > - def __UndoOneChar(self): > > - > > + def _UndoOneChar(self): > > if self.CurrentLineNumber == 1 and self.CurrentOffsetWithinLine == 0: > > return False > > elif self.CurrentOffsetWithinLine == 0: > > self.CurrentLineNumber -= 1 > > - self.CurrentOffsetWithinLine = len(self.__CurrentLine()) - 1 > > + self.CurrentOffsetWithinLine = len(self._CurrentLine()) - 1 > > else: > > self.CurrentOffsetWithinLine -= 1 > > return True > > > > - ## __GetOneChar() method > > + ## _GetOneChar() method > > # > > # Move forward one char in the file buffer > > # > > # @param self The object pointer > > # > > - def __GetOneChar(self): > > + def _GetOneChar(self): > > if self.CurrentOffsetWithinLine == len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) - 1: > > self.CurrentLineNumber += 1 > > self.CurrentOffsetWithinLine = 0 > > else: > > self.CurrentOffsetWithinLine += 1 > > > > - ## __CurrentChar() method > > + ## _CurrentChar() method > > # > > # Get the char pointed to by the file buffer pointer > > # > > # @param self The object pointer > > # @retval Char Current char > > # > > - def __CurrentChar(self): > > + def _CurrentChar(self): > > return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine] > > > > - ## __NextChar() method > > + ## _NextChar() method > > # > > # Get the one char pass the char pointed to by the file buffer pointer > > # > > # @param self The object pointer > > # @retval Char Next char > > # > > - def __NextChar(self): > > + def _NextChar(self): > > if self.CurrentOffsetWithinLine == len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) - 1: > > return self.Profile.FileLinesList[self.CurrentLineNumber][0] > > - else: > > - return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine + 1] > > + return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine + 1] > > > > - ## __SetCurrentCharValue() method > > + ## _SetCurrentCharValue() method > > # > > # Modify the value of current char > > # > > # @param self The object pointer > > # @param Value The new value of current char > > # > > - def __SetCurrentCharValue(self, Value): > > + def _SetCurrentCharValue(self, Value): > > self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine] = Value > > > > - ## __CurrentLine() method > > + ## _CurrentLine() method > > # > > # Get the list that contains current line contents > > # > > # @param self The object pointer > > # @retval List current line contents > > # > > - def __CurrentLine(self): > > + def _CurrentLine(self): > > return self.Profile.FileLinesList[self.CurrentLineNumber - 1] > > > > - def __StringToList(self): > > + def _StringToList(self): > > self.Profile.FileLinesList = [list(s) for s in self.Profile.FileLinesList] > > if not self.Profile.FileLinesList: > > EdkLogger.error('FdfParser', FILE_READ_FAILURE, 'The file is empty!', File=self.FileName) > > self.Profile.FileLinesList[-1].append(' ') > > > > - def 
__ReplaceFragment(self, StartPos, EndPos, Value = ' '): > > + def _ReplaceFragment(self, StartPos, EndPos, Value = ' '): > > if StartPos[0] == EndPos[0]: > > Offset = StartPos[1] > > while Offset <= EndPos[1]: > > @@ -437,14 +406,14 @@ class FdfParser: > > return > > > > Offset = StartPos[1] > > - while self.Profile.FileLinesList[StartPos[0]][Offset] not in ('\r', '\n'): > > + while self.Profile.FileLinesList[StartPos[0]][Offset] not in (T_CHAR_CR, TAB_LINE_BREAK): > > self.Profile.FileLinesList[StartPos[0]][Offset] = Value > > Offset += 1 > > > > Line = StartPos[0] > > while Line < EndPos[0]: > > Offset = 0 > > - while self.Profile.FileLinesList[Line][Offset] not in ('\r', '\n'): > > + while self.Profile.FileLinesList[Line][Offset] not in (T_CHAR_CR, TAB_LINE_BREAK): > > self.Profile.FileLinesList[Line][Offset] = Value > > Offset += 1 > > Line += 1 > > @@ -454,45 +423,29 @@ class FdfParser: > > self.Profile.FileLinesList[EndPos[0]][Offset] = Value > > Offset += 1 > > > > - > > - def __GetMacroName(self): > > - if not self.__GetNextToken(): > > - raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) > > - MacroName = self.__Token > > - NotFlag = False > > - if MacroName.startswith('!'): > > - NotFlag = True > > - MacroName = MacroName[1:].strip() > > - > > - if not MacroName.startswith('$(') or not MacroName.endswith(')'): > > - raise Warning("Macro name expected(Please use '$(%(Token)s)' if '%(Token)s' is a macro.)" % {"Token" : MacroName}, > > - self.FileName, self.CurrentLineNumber) > > - MacroName = MacroName[2:-1] > > - return MacroName, NotFlag > > - > > - def __SetMacroValue(self, Macro, Value): > > - if not self.__CurSection: > > + def _SetMacroValue(self, Macro, Value): > > + if not self._CurSection: > > return > > > > MacroDict = {} > > - if not self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]]: > > - self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]] = MacroDict > > + if not self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]]: > > + self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]] = MacroDict > > else: > > - MacroDict = self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]] > > + MacroDict = self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]] > > MacroDict[Macro] = Value > > > > - def __GetMacroValue(self, Macro): > > + def _GetMacroValue(self, Macro): > > # Highest priority > > if Macro in GlobalData.gCommandLineDefines: > > return GlobalData.gCommandLineDefines[Macro] > > if Macro in GlobalData.gGlobalDefines: > > return GlobalData.gGlobalDefines[Macro] > > > > - if self.__CurSection: > > - MacroDict = self.__MacroDict[ > > - self.__CurSection[0], > > - self.__CurSection[1], > > - self.__CurSection[2] > > + if self._CurSection: > > + MacroDict = self._MacroDict[ > > + self._CurSection[0], > > + self._CurSection[1], > > + self._CurSection[2] > > ] > > if MacroDict and Macro in MacroDict: > > return MacroDict[Macro] > > @@ -502,7 +455,7 @@ class FdfParser: > > return GlobalData.gPlatformDefines[Macro] > > return None > > > > - def __SectionHeaderParser(self, Section): > > + def _SectionHeaderParser(self, Section): > > # [Defines] > > # [FD.UiName]: use dummy instead if UI name is optional > > # [FV.UiName] > > @@ -510,25 +463,25 @@ class FdfParser: > > # [Rule]: don't take rule section into account, macro is not allowed in this section > > # [VTF.arch.UiName, arch] > > # 
[OptionRom.DriverName] > > - self.__CurSection = [] > > - Section = Section.strip()[1:-1].upper().replace(' ', '').strip('.') > > - ItemList = Section.split('.') > > + self._CurSection = [] > > + Section = Section.strip()[1:-1].upper().replace(' ', '').strip(TAB_SPLIT) > > + ItemList = Section.split(TAB_SPLIT) > > Item = ItemList[0] > > if Item == '' or Item == 'RULE': > > return > > > > if Item == TAB_COMMON_DEFINES.upper(): > > - self.__CurSection = [TAB_COMMON, TAB_COMMON, TAB_COMMON] > > + self._CurSection = [TAB_COMMON, TAB_COMMON, TAB_COMMON] > > elif Item == 'VTF' and len(ItemList) == 3: > > UiName = ItemList[2] > > - Pos = UiName.find(',') > > + Pos = UiName.find(TAB_COMMA_SPLIT) > > if Pos != -1: > > UiName = UiName[:Pos] > > - self.__CurSection = ['VTF', UiName, ItemList[1]] > > + self._CurSection = ['VTF', UiName, ItemList[1]] > > elif len(ItemList) > 1: > > - self.__CurSection = [ItemList[0], ItemList[1], TAB_COMMON] > > + self._CurSection = [ItemList[0], ItemList[1], TAB_COMMON] > > elif len(ItemList) > 0: > > - self.__CurSection = [ItemList[0], 'DUMMY', TAB_COMMON] > > + self._CurSection = [ItemList[0], 'DUMMY', TAB_COMMON] > > > > ## PreprocessFile() method > > # > > @@ -540,7 +493,6 @@ class FdfParser: > > # @param self The object pointer > > # > > def PreprocessFile(self): > > - > > self.Rewind() > > InComment = False > > DoubleSlashComment = False > > @@ -548,12 +500,12 @@ class FdfParser: > > # HashComment in quoted string " " is ignored. > > InString = False > > > > - while not self.__EndOfFile(): > > + while not self._EndOfFile(): > > > > - if self.__CurrentChar() == T_CHAR_DOUBLE_QUOTE and not InComment: > > + if self._CurrentChar() == T_CHAR_DOUBLE_QUOTE and not InComment: > > InString = not InString > > # meet new line, then no longer in a comment for // and '#' > > - if self.__CurrentChar() == T_CHAR_LF: > > + if self._CurrentChar() == TAB_LINE_BREAK: > > self.CurrentLineNumber += 1 > > self.CurrentOffsetWithinLine = 0 > > if InComment and DoubleSlashComment: > > @@ -563,33 +515,33 @@ class FdfParser: > > InComment = False > > HashComment = False > > # check for */ comment end > > - elif InComment and not DoubleSlashComment and not HashComment and self.__CurrentChar() == T_CHAR_STAR and > self.__NextChar() == T_CHAR_SLASH: > > - self.__SetCurrentCharValue(T_CHAR_SPACE) > > - self.__GetOneChar() > > - self.__SetCurrentCharValue(T_CHAR_SPACE) > > - self.__GetOneChar() > > + elif InComment and not DoubleSlashComment and not HashComment and self._CurrentChar() == T_CHAR_STAR and > self._NextChar() == TAB_BACK_SLASH: > > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > > + self._GetOneChar() > > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > > + self._GetOneChar() > > InComment = False > > # set comments to spaces > > elif InComment: > > - self.__SetCurrentCharValue(T_CHAR_SPACE) > > - self.__GetOneChar() > > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > > + self._GetOneChar() > > # check for // comment > > - elif self.__CurrentChar() == T_CHAR_SLASH and self.__NextChar() == T_CHAR_SLASH and not self.__EndOfLine(): > > + elif self._CurrentChar() == TAB_BACK_SLASH and self._NextChar() == TAB_BACK_SLASH and not self._EndOfLine(): > > InComment = True > > DoubleSlashComment = True > > # check for '#' comment > > - elif self.__CurrentChar() == T_CHAR_HASH and not self.__EndOfLine() and not InString: > > + elif self._CurrentChar() == TAB_COMMENT_SPLIT and not self._EndOfLine() and not InString: > > InComment = True > > HashComment = True > > # check for /* comment start > > - 
elif self.__CurrentChar() == T_CHAR_SLASH and self.__NextChar() == T_CHAR_STAR: > > - self.__SetCurrentCharValue( T_CHAR_SPACE) > > - self.__GetOneChar() > > - self.__SetCurrentCharValue( T_CHAR_SPACE) > > - self.__GetOneChar() > > + elif self._CurrentChar() == TAB_BACK_SLASH and self._NextChar() == T_CHAR_STAR: > > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > > + self._GetOneChar() > > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > > + self._GetOneChar() > > InComment = True > > else: > > - self.__GetOneChar() > > + self._GetOneChar() > > > > # restore from ListOfList to ListOfString > > self.Profile.FileLinesList = ["".join(list) for list in self.Profile.FileLinesList] > > @@ -606,30 +558,30 @@ class FdfParser: > > # nested include support > > Processed = False > > MacroDict = {} > > - while self.__GetNextToken(): > > + while self._GetNextToken(): > > > > - if self.__Token == 'DEFINE': > > - if not self.__GetNextToken(): > > + if self._Token == TAB_DEFINE: > > + if not self._GetNextToken(): > > raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) > > - Macro = self.__Token > > - if not self.__IsToken( "="): > > + Macro = self._Token > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - Value = self.__GetExpression() > > + Value = self._GetExpression() > > MacroDict[Macro] = Value > > > > - elif self.__Token == '!include': > > + elif self._Token == TAB_INCLUDE: > > Processed = True > > IncludeLine = self.CurrentLineNumber > > - IncludeOffset = self.CurrentOffsetWithinLine - len('!include') > > - if not self.__GetNextToken(): > > + IncludeOffset = self.CurrentOffsetWithinLine - len(TAB_INCLUDE) > > + if not self._GetNextToken(): > > raise Warning("expected include file name", self.FileName, self.CurrentLineNumber) > > - IncFileName = self.__Token > > + IncFileName = self._Token > > PreIndex = 0 > > StartPos = IncFileName.find('$(', PreIndex) > > EndPos = IncFileName.find(')', StartPos+2) > > while StartPos != -1 and EndPos != -1: > > - Macro = IncFileName[StartPos+2 : EndPos] > > - MacroVal = self.__GetMacroValue(Macro) > > + Macro = IncFileName[StartPos+2: EndPos] > > + MacroVal = self._GetMacroValue(Macro) > > if not MacroVal: > > if Macro in MacroDict: > > MacroVal = MacroDict[Macro] > > @@ -686,9 +638,9 @@ class FdfParser: > > IncFileProfile.Level = ParentProfile.Level + 1 > > IncFileProfile.InsertStartLineNumber = InsertAtLine + 1 > > # deal with remaining portions after "!include filename", if exists. 
> > - if self.__GetNextToken(): > > + if self._GetNextToken(): > > if self.CurrentLineNumber == CurrentLine: > > - RemainingLine = self.__CurrentLine()[CurrentOffset:] > > + RemainingLine = self._CurrentLine()[CurrentOffset:] > > self.Profile.FileLinesList.insert(self.CurrentLineNumber, RemainingLine) > > IncFileProfile.InsertAdjust += 1 > > self.CurrentLineNumber += 1 > > @@ -704,7 +656,7 @@ class FdfParser: > > > > # comment out the processed include file statement > > TempList = list(self.Profile.FileLinesList[IncludeLine - 1]) > > - TempList.insert(IncludeOffset, '#') > > + TempList.insert(IncludeOffset, TAB_COMMENT_SPLIT) > > self.Profile.FileLinesList[IncludeLine - 1] = ''.join(TempList) > > if Processed: # Nested and back-to-back support > > self.Rewind(DestLine = IncFileProfile.InsertStartLineNumber - 1) > > @@ -713,7 +665,7 @@ class FdfParser: > > self.Rewind() > > > > @staticmethod > > - def __GetIfListCurrentItemStat(IfList): > > + def _GetIfListCurrentItemStat(IfList): > > if len(IfList) == 0: > > return True > > > > @@ -735,29 +687,29 @@ class FdfParser: > > IfList = [] > > RegionLayoutLine = 0 > > ReplacedLine = -1 > > - while self.__GetNextToken(): > > + while self._GetNextToken(): > > # Determine section name and the location dependent macro > > - if self.__GetIfListCurrentItemStat(IfList): > > - if self.__Token.startswith('['): > > - Header = self.__Token > > - if not self.__Token.endswith(']'): > > - self.__SkipToToken(']') > > - Header += self.__SkippedChars > > + if self._GetIfListCurrentItemStat(IfList): > > + if self._Token.startswith(TAB_SECTION_START): > > + Header = self._Token > > + if not self._Token.endswith(TAB_SECTION_END): > > + self._SkipToToken(TAB_SECTION_END) > > + Header += self._SkippedChars > > if Header.find('$(') != -1: > > raise Warning("macro cannot be used in section header", self.FileName, self.CurrentLineNumber) > > - self.__SectionHeaderParser(Header) > > + self._SectionHeaderParser(Header) > > continue > > # Replace macros except in RULE section or out of section > > - elif self.__CurSection and ReplacedLine != self.CurrentLineNumber: > > + elif self._CurSection and ReplacedLine != self.CurrentLineNumber: > > ReplacedLine = self.CurrentLineNumber > > - self.__UndoToken() > > + self._UndoToken() > > CurLine = self.Profile.FileLinesList[ReplacedLine - 1] > > PreIndex = 0 > > StartPos = CurLine.find('$(', PreIndex) > > EndPos = CurLine.find(')', StartPos+2) > > - while StartPos != -1 and EndPos != -1 and self.__Token not in ['!ifdef', '!ifndef', '!if', '!elseif']: > > - MacroName = CurLine[StartPos+2 : EndPos] > > - MacorValue = self.__GetMacroValue(MacroName) > > + while StartPos != -1 and EndPos != -1 and self._Token not in [TAB_IF_DEF, TAB_IF_N_DEF, TAB_IF, > TAB_ELSE_IF]: > > + MacroName = CurLine[StartPos+2: EndPos] > > + MacorValue = self._GetMacroValue(MacroName) > > if MacorValue is not None: > > CurLine = CurLine.replace('$(' + MacroName + ')', MacorValue, 1) > > if MacorValue.find('$(') != -1: > > @@ -771,74 +723,74 @@ class FdfParser: > > self.Profile.FileLinesList[ReplacedLine - 1] = CurLine > > continue > > > > - if self.__Token == 'DEFINE': > > - if self.__GetIfListCurrentItemStat(IfList): > > - if not self.__CurSection: > > + if self._Token == TAB_DEFINE: > > + if self._GetIfListCurrentItemStat(IfList): > > + if not self._CurSection: > > raise Warning("macro cannot be defined in Rule section or out of section", self.FileName, > self.CurrentLineNumber) > > DefineLine = self.CurrentLineNumber - 1 > > - DefineOffset = 
self.CurrentOffsetWithinLine - len('DEFINE') > > - if not self.__GetNextToken(): > > + DefineOffset = self.CurrentOffsetWithinLine - len(TAB_DEFINE) > > + if not self._GetNextToken(): > > raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) > > - Macro = self.__Token > > - if not self.__IsToken( "="): > > + Macro = self._Token > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - Value = self.__GetExpression() > > - self.__SetMacroValue(Macro, Value) > > - self.__WipeOffArea.append(((DefineLine, DefineOffset), (self.CurrentLineNumber - 1, > self.CurrentOffsetWithinLine - 1))) > > - elif self.__Token == 'SET': > > - if not self.__GetIfListCurrentItemStat(IfList): > > + Value = self._GetExpression() > > + self._SetMacroValue(Macro, Value) > > + self._WipeOffArea.append(((DefineLine, DefineOffset), (self.CurrentLineNumber - 1, > self.CurrentOffsetWithinLine - 1))) > > + elif self._Token == 'SET': > > + if not self._GetIfListCurrentItemStat(IfList): > > continue > > SetLine = self.CurrentLineNumber - 1 > > SetOffset = self.CurrentOffsetWithinLine - len('SET') > > - PcdPair = self.__GetNextPcdSettings() > > + PcdPair = self._GetNextPcdSettings() > > PcdName = "%s.%s" % (PcdPair[1], PcdPair[0]) > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - Value = self.__GetExpression() > > - Value = self.__EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) > > + Value = self._GetExpression() > > + Value = self._EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) > > > > - self.__PcdDict[PcdName] = Value > > + self._PcdDict[PcdName] = Value > > > > self.Profile.PcdDict[PcdPair] = Value > > self.SetPcdLocalation(PcdPair) > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > self.Profile.PcdFileLineDict[PcdPair] = FileLineTuple > > > > - self.__WipeOffArea.append(((SetLine, SetOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > - elif self.__Token in ('!ifdef', '!ifndef', '!if'): > > - IfStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self.__Token)) > > + self._WipeOffArea.append(((SetLine, SetOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > + elif self._Token in (TAB_IF_DEF, TAB_IF_N_DEF, TAB_IF): > > + IfStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self._Token)) > > IfList.append([IfStartPos, None, None]) > > > > - CondLabel = self.__Token > > - Expression = self.__GetExpression() > > + CondLabel = self._Token > > + Expression = self._GetExpression() > > > > - if CondLabel == '!if': > > - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') > > + if CondLabel == TAB_IF: > > + ConditionSatisfied = self._EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') > > else: > > - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'in') > > - if CondLabel == '!ifndef': > > + ConditionSatisfied = self._EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'in') > > + if CondLabel == TAB_IF_N_DEF: > > ConditionSatisfied = not ConditionSatisfied > > > > BranchDetermined = ConditionSatisfied > > IfList[-1] = [IfList[-1][0], ConditionSatisfied, BranchDetermined] > > if ConditionSatisfied: > > - self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, 
self.CurrentOffsetWithinLine - 1))) > > - elif self.__Token in ('!elseif', '!else'): > > - ElseStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self.__Token)) > > + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > + elif self._Token in (TAB_ELSE_IF, '!else'): > > + ElseStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self._Token)) > > if len(IfList) <= 0: > > raise Warning("Missing !if statement", self.FileName, self.CurrentLineNumber) > > > > if IfList[-1][1]: > > IfList[-1] = [ElseStartPos, False, True] > > - self.__WipeOffArea.append((ElseStartPos, (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > + self._WipeOffArea.append((ElseStartPos, (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > else: > > - self.__WipeOffArea.append((IfList[-1][0], ElseStartPos)) > > + self._WipeOffArea.append((IfList[-1][0], ElseStartPos)) > > IfList[-1] = [ElseStartPos, True, IfList[-1][2]] > > - if self.__Token == '!elseif': > > - Expression = self.__GetExpression() > > - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') > > + if self._Token == TAB_ELSE_IF: > > + Expression = self._GetExpression() > > + ConditionSatisfied = self._EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') > > IfList[-1] = [IfList[-1][0], ConditionSatisfied, IfList[-1][2]] > > > > if IfList[-1][1]: > > @@ -846,14 +798,14 @@ class FdfParser: > > IfList[-1][1] = False > > else: > > IfList[-1][2] = True > > - self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine > - 1))) > > - elif self.__Token == '!endif': > > + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine > - 1))) > > + elif self._Token == '!endif': > > if len(IfList) <= 0: > > raise Warning("Missing !if statement", self.FileName, self.CurrentLineNumber) > > if IfList[-1][1]: > > - self.__WipeOffArea.append(((self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len('!endif')), > (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > + self._WipeOffArea.append(((self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len('!endif')), > (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > else: > > - self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > > IfList.pop() > > elif not IfList: # Don't use PCDs inside conditional directive > > @@ -862,7 +814,7 @@ class FdfParser: > > continue > > SetPcd = ShortcutPcdPattern.match(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) > > if SetPcd: > > - self.__PcdDict[SetPcd.group('name')] = SetPcd.group('value') > > + self._PcdDict[SetPcd.group('name')] = SetPcd.group('value') > > RegionLayoutLine = self.CurrentLineNumber > > continue > > RegionSize = RegionSizePattern.match(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) > > @@ -873,35 +825,35 @@ class FdfParser: > > if not RegionSizeGuid: > > RegionLayoutLine = self.CurrentLineNumber + 1 > > continue > > - self.__PcdDict[RegionSizeGuid.group('base')] = RegionSize.group('base') > > - self.__PcdDict[RegionSizeGuid.group('size')] = RegionSize.group('size') > > + self._PcdDict[RegionSizeGuid.group('base')] = RegionSize.group('base') > > + self._PcdDict[RegionSizeGuid.group('size')] = 
RegionSize.group('size') > > RegionLayoutLine = self.CurrentLineNumber + 1 > > > > if IfList: > > raise Warning("Missing !endif", self.FileName, self.CurrentLineNumber) > > self.Rewind() > > > > - def __CollectMacroPcd(self): > > + def _CollectMacroPcd(self): > > MacroDict = {} > > > > # PCD macro > > MacroDict.update(GlobalData.gPlatformPcds) > > - MacroDict.update(self.__PcdDict) > > + MacroDict.update(self._PcdDict) > > > > # Lowest priority > > MacroDict.update(GlobalData.gPlatformDefines) > > > > - if self.__CurSection: > > + if self._CurSection: > > # Defines macro > > - ScopeMacro = self.__MacroDict[TAB_COMMON, TAB_COMMON, TAB_COMMON] > > + ScopeMacro = self._MacroDict[TAB_COMMON, TAB_COMMON, TAB_COMMON] > > if ScopeMacro: > > MacroDict.update(ScopeMacro) > > > > # Section macro > > - ScopeMacro = self.__MacroDict[ > > - self.__CurSection[0], > > - self.__CurSection[1], > > - self.__CurSection[2] > > + ScopeMacro = self._MacroDict[ > > + self._CurSection[0], > > + self._CurSection[1], > > + self._CurSection[2] > > ] > > if ScopeMacro: > > MacroDict.update(ScopeMacro) > > @@ -912,15 +864,15 @@ class FdfParser: > > for Item in GlobalData.BuildOptionPcd: > > if isinstance(Item, tuple): > > continue > > - PcdName, TmpValue = Item.split("=") > > + PcdName, TmpValue = Item.split(TAB_EQUAL_SPLIT) > > TmpValue = BuildOptionValue(TmpValue, {}) > > MacroDict[PcdName.strip()] = TmpValue > > # Highest priority > > > > return MacroDict > > > > - def __EvaluateConditional(self, Expression, Line, Op = None, Value = None): > > - MacroPcdDict = self.__CollectMacroPcd() > > + def _EvaluateConditional(self, Expression, Line, Op = None, Value = None): > > + MacroPcdDict = self._CollectMacroPcd() > > if Op == 'eval': > > try: > > if Value: > > @@ -933,7 +885,7 @@ class FdfParser: > > # the precise number of line and return the evaluation result > > # > > EdkLogger.warn('Parser', "Suspicious expression: %s" % str(Excpt), > > - File=self.FileName, ExtraData=self.__CurrentLine(), > > + File=self.FileName, ExtraData=self._CurrentLine(), > > Line=Line) > > return Excpt.result > > except Exception as Excpt: > > @@ -955,10 +907,10 @@ class FdfParser: > > Expression = Expression[2:-1] > > return Expression in MacroPcdDict > > > > - ## __IsToken() method > > + ## _IsToken() method > > # > > # Check whether input string is found from current char position along > > - # If found, the string value is put into self.__Token > > + # If found, the string value is put into self._Token > > # > > # @param self The object pointer > > # @param String The string to search > > @@ -966,26 +918,26 @@ class FdfParser: > > # @retval True Successfully find string, file buffer pointer moved forward > > # @retval False Not able to find string, file buffer pointer not changed > > # > > - def __IsToken(self, String, IgnoreCase = False): > > - self.__SkipWhiteSpace() > > + def _IsToken(self, String, IgnoreCase = False): > > + self._SkipWhiteSpace() > > > > # Only consider the same line, no multi-line token allowed > > StartPos = self.CurrentOffsetWithinLine > > index = -1 > > if IgnoreCase: > > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(String.upper()) > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(String.upper()) > > else: > > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(String) > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(String) > > if index == 0: > > self.CurrentOffsetWithinLine += len(String) > > - self.__Token = 
self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] > > + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] > > return True > > return False > > > > - ## __IsKeyword() method > > + ## _IsKeyword() method > > # > > # Check whether input keyword is found from current char position along, whole word only! > > - # If found, the string value is put into self.__Token > > + # If found, the string value is put into self._Token > > # > > # @param self The object pointer > > # @param Keyword The string to search > > @@ -993,117 +945,117 @@ class FdfParser: > > # @retval True Successfully find string, file buffer pointer moved forward > > # @retval False Not able to find string, file buffer pointer not changed > > # > > - def __IsKeyword(self, KeyWord, IgnoreCase = False): > > - self.__SkipWhiteSpace() > > + def _IsKeyword(self, KeyWord, IgnoreCase = False): > > + self._SkipWhiteSpace() > > > > # Only consider the same line, no multi-line token allowed > > StartPos = self.CurrentOffsetWithinLine > > index = -1 > > if IgnoreCase: > > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(KeyWord.upper()) > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(KeyWord.upper()) > > else: > > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(KeyWord) > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(KeyWord) > > if index == 0: > > - followingChar = self.__CurrentLine()[self.CurrentOffsetWithinLine + len(KeyWord)] > > - if not str(followingChar).isspace() and followingChar not in SEPERATOR_TUPLE: > > + followingChar = self._CurrentLine()[self.CurrentOffsetWithinLine + len(KeyWord)] > > + if not str(followingChar).isspace() and followingChar not in SEPARATORS: > > return False > > self.CurrentOffsetWithinLine += len(KeyWord) > > - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] > > + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] > > return True > > return False > > > > - def __GetExpression(self): > > + def _GetExpression(self): > > Line = self.Profile.FileLinesList[self.CurrentLineNumber - 1] > > Index = len(Line) - 1 > > - while Line[Index] in ['\r', '\n']: > > + while Line[Index] in [T_CHAR_CR, TAB_LINE_BREAK]: > > Index -= 1 > > ExpressionString = self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:Index+1] > > self.CurrentOffsetWithinLine += len(ExpressionString) > > ExpressionString = ExpressionString.strip() > > return ExpressionString > > > > - ## __GetNextWord() method > > + ## _GetNextWord() method > > # > > # Get next C name from file lines > > - # If found, the string value is put into self.__Token > > + # If found, the string value is put into self._Token > > # > > # @param self The object pointer > > # @retval True Successfully find a C name string, file buffer pointer moved forward > > # @retval False Not able to find a C name string, file buffer pointer not changed > > # > > - def __GetNextWord(self): > > - self.__SkipWhiteSpace() > > - if self.__EndOfFile(): > > + def _GetNextWord(self): > > + self._SkipWhiteSpace() > > + if self._EndOfFile(): > > return False > > > > - TempChar = self.__CurrentChar() > > + TempChar = self._CurrentChar() > > StartPos = self.CurrentOffsetWithinLine > > if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_': > > - self.__GetOneChar() > > - while not self.__EndOfLine(): > > - TempChar = self.__CurrentChar() > 
> + self._GetOneChar() > > + while not self._EndOfLine(): > > + TempChar = self._CurrentChar() > > if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') \ > > or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-': > > - self.__GetOneChar() > > + self._GetOneChar() > > > > else: > > break > > > > - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] > > + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] > > return True > > > > return False > > > > - def __GetNextPcdWord(self): > > - self.__SkipWhiteSpace() > > - if self.__EndOfFile(): > > + def _GetNextPcdWord(self): > > + self._SkipWhiteSpace() > > + if self._EndOfFile(): > > return False > > > > - TempChar = self.__CurrentChar() > > + TempChar = self._CurrentChar() > > StartPos = self.CurrentOffsetWithinLine > > - if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_' or TempChar == '[' > or TempChar == ']': > > - self.__GetOneChar() > > - while not self.__EndOfLine(): > > - TempChar = self.__CurrentChar() > > + if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_' or TempChar == > TAB_SECTION_START or TempChar == TAB_SECTION_END: > > + self._GetOneChar() > > + while not self._EndOfLine(): > > + TempChar = self._CurrentChar() > > if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') \ > > - or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-' or TempChar == '[' or > TempChar == ']': > > - self.__GetOneChar() > > + or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-' or TempChar == > TAB_SECTION_START or TempChar == TAB_SECTION_END: > > + self._GetOneChar() > > > > else: > > break > > > > - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] > > + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] > > return True > > > > return False > > > > - ## __GetNextToken() method > > + ## _GetNextToken() method > > # > > # Get next token unit before a seperator > > - # If found, the string value is put into self.__Token > > + # If found, the string value is put into self._Token > > # > > # @param self The object pointer > > # @retval True Successfully find a token unit, file buffer pointer moved forward > > # @retval False Not able to find a token unit, file buffer pointer not changed > > # > > - def __GetNextToken(self): > > + def _GetNextToken(self): > > # Skip leading spaces, if exist. > > - self.__SkipWhiteSpace() > > - if self.__EndOfFile(): > > + self._SkipWhiteSpace() > > + if self._EndOfFile(): > > return False > > # Record the token start position, the position of the first non-space char. > > StartPos = self.CurrentOffsetWithinLine > > StartLine = self.CurrentLineNumber > > while StartLine == self.CurrentLineNumber: > > - TempChar = self.__CurrentChar() > > + TempChar = self._CurrentChar() > > # Try to find the end char that is not a space and not in seperator tuple. > > # That is, when we got a space or any char in the tuple, we got the end of token. > > - if not str(TempChar).isspace() and TempChar not in SEPERATOR_TUPLE: > > - self.__GetOneChar() > > + if not str(TempChar).isspace() and TempChar not in SEPARATORS: > > + self._GetOneChar() > > # if we happen to meet a seperator as the first char, we must proceed to get it. > > # That is, we get a token that is a seperator char. 
nomally it is the boundary of other tokens. > > - elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPERATOR_TUPLE: > > - self.__GetOneChar() > > + elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPARATORS: > > + self._GetOneChar() > > break > > else: > > break > > @@ -1113,35 +1065,34 @@ class FdfParser: > > EndPos = self.CurrentOffsetWithinLine > > if self.CurrentLineNumber != StartLine: > > EndPos = len(self.Profile.FileLinesList[StartLine-1]) > > - self.__Token = self.Profile.FileLinesList[StartLine-1][StartPos : EndPos] > > - if self.__Token.lower() in [TAB_IF, TAB_END_IF, TAB_ELSE_IF, TAB_ELSE, TAB_IF_DEF, TAB_IF_N_DEF, TAB_ERROR, > TAB_INCLUDE]: > > - self.__Token = self.__Token.lower() > > + self._Token = self.Profile.FileLinesList[StartLine-1][StartPos: EndPos] > > + if self._Token.lower() in [TAB_IF, TAB_END_IF, TAB_ELSE_IF, TAB_ELSE, TAB_IF_DEF, TAB_IF_N_DEF, TAB_ERROR, > TAB_INCLUDE]: > > + self._Token = self._Token.lower() > > if StartPos != self.CurrentOffsetWithinLine: > > return True > > else: > > return False > > > > - ## __GetNextGuid() method > > + ## _GetNextGuid() method > > # > > # Get next token unit before a seperator > > - # If found, the GUID string is put into self.__Token > > + # If found, the GUID string is put into self._Token > > # > > # @param self The object pointer > > # @retval True Successfully find a registry format GUID, file buffer pointer moved forward > > # @retval False Not able to find a registry format GUID, file buffer pointer not changed > > # > > - def __GetNextGuid(self): > > - > > - if not self.__GetNextToken(): > > + def _GetNextGuid(self): > > + if not self._GetNextToken(): > > return False > > - if gGuidPattern.match(self.__Token) is not None: > > + if gGuidPattern.match(self._Token) is not None: > > return True > > else: > > - self.__UndoToken() > > + self._UndoToken() > > return False > > > > @staticmethod > > - def __Verify(Name, Value, Scope): > > + def _Verify(Name, Value, Scope): > > # value verification only applies to numeric values. > > if Scope not in TAB_PCD_NUMERIC_TYPES: > > return > > @@ -1157,17 +1108,17 @@ class FdfParser: > > EdkLogger.error("FdfParser", FORMAT_INVALID, "Too large value for %s." % Name) > > return True > > > > - ## __UndoToken() method > > + ## _UndoToken() method > > # > > # Go back one token unit in file buffer > > # > > # @param self The object pointer > > # > > - def __UndoToken(self): > > - self.__UndoOneChar() > > - while self.__CurrentChar().isspace(): > > - if not self.__UndoOneChar(): > > - self.__GetOneChar() > > + def _UndoToken(self): > > + self._UndoOneChar() > > + while self._CurrentChar().isspace(): > > + if not self._UndoOneChar(): > > + self._GetOneChar() > > return > > > > > > @@ -1175,118 +1126,118 @@ class FdfParser: > > CurrentLine = self.CurrentLineNumber > > while CurrentLine == self.CurrentLineNumber: > > > > - TempChar = self.__CurrentChar() > > + TempChar = self._CurrentChar() > > # Try to find the end char that is not a space and not in seperator tuple. > > # That is, when we got a space or any char in the tuple, we got the end of token. > > - if not str(TempChar).isspace() and not TempChar in SEPERATOR_TUPLE: > > - if not self.__UndoOneChar(): > > + if not str(TempChar).isspace() and not TempChar in SEPARATORS: > > + if not self._UndoOneChar(): > > return > > # if we happen to meet a seperator as the first char, we must proceed to get it. > > # That is, we get a token that is a seperator char. nomally it is the boundary of other tokens. 
> > - elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPERATOR_TUPLE: > > + elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPARATORS: > > return > > else: > > break > > > > - self.__GetOneChar() > > + self._GetOneChar() > > > > - ## __GetNextHexNumber() method > > + ## _GetNextHexNumber() method > > # > > # Get next HEX data before a seperator > > - # If found, the HEX data is put into self.__Token > > + # If found, the HEX data is put into self._Token > > # > > # @param self The object pointer > > # @retval True Successfully find a HEX data, file buffer pointer moved forward > > # @retval False Not able to find a HEX data, file buffer pointer not changed > > # > > - def __GetNextHexNumber(self): > > - if not self.__GetNextToken(): > > + def _GetNextHexNumber(self): > > + if not self._GetNextToken(): > > return False > > - if gHexPatternAll.match(self.__Token): > > + if gHexPatternAll.match(self._Token): > > return True > > else: > > - self.__UndoToken() > > + self._UndoToken() > > return False > > > > - ## __GetNextDecimalNumber() method > > + ## _GetNextDecimalNumber() method > > # > > # Get next decimal data before a seperator > > - # If found, the decimal data is put into self.__Token > > + # If found, the decimal data is put into self._Token > > # > > # @param self The object pointer > > # @retval True Successfully find a decimal data, file buffer pointer moved forward > > # @retval False Not able to find a decimal data, file buffer pointer not changed > > # > > - def __GetNextDecimalNumber(self): > > - if not self.__GetNextToken(): > > + def _GetNextDecimalNumber(self): > > + if not self._GetNextToken(): > > return False > > - if self.__Token.isdigit(): > > + if self._Token.isdigit(): > > return True > > else: > > - self.__UndoToken() > > + self._UndoToken() > > return False > > > > - def __GetNextPcdSettings(self): > > - if not self.__GetNextWord(): > > + def _GetNextPcdSettings(self): > > + if not self._GetNextWord(): > > raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) > > - pcdTokenSpaceCName = self.__Token > > + pcdTokenSpaceCName = self._Token > > > > - if not self.__IsToken( "."): > > + if not self._IsToken(TAB_SPLIT): > > raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextWord(): > > + if not self._GetNextWord(): > > raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) > > - pcdCName = self.__Token > > + pcdCName = self._Token > > > > Fields = [] > > - while self.__IsToken("."): > > - if not self.__GetNextPcdWord(): > > + while self._IsToken(TAB_SPLIT): > > + if not self._GetNextPcdWord(): > > raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, > self.CurrentLineNumber) > > - Fields.append(self.__Token) > > + Fields.append(self._Token) > > > > - return (pcdCName, pcdTokenSpaceCName,".".join(Fields)) > > + return (pcdCName, pcdTokenSpaceCName,TAB_SPLIT.join(Fields)) > > > > - ## __GetStringData() method > > + ## _GetStringData() method > > # > > # Get string contents quoted in "" > > - # If found, the decimal data is put into self.__Token > > + # If found, the decimal data is put into self._Token > > # > > # @param self The object pointer > > # @retval True Successfully find a string data, file buffer pointer moved forward > > # @retval False Not able to find a string data, file buffer pointer not changed > > # > > - def 
__GetStringData(self): > > - if self.__Token.startswith("\"") or self.__Token.startswith("L\""): > > - self.__UndoToken() > > - self.__SkipToToken("\"") > > + def _GetStringData(self): > > + if self._Token.startswith(T_CHAR_DOUBLE_QUOTE) or self._Token.startswith("L\""): > > + self._UndoToken() > > + self._SkipToToken(T_CHAR_DOUBLE_QUOTE) > > currentLineNumber = self.CurrentLineNumber > > > > - if not self.__SkipToToken("\""): > > + if not self._SkipToToken(T_CHAR_DOUBLE_QUOTE): > > raise Warning("Missing Quote \" for String", self.FileName, self.CurrentLineNumber) > > if currentLineNumber != self.CurrentLineNumber: > > raise Warning("Missing Quote \" for String", self.FileName, self.CurrentLineNumber) > > - self.__Token = self.__SkippedChars.rstrip('\"') > > + self._Token = self._SkippedChars.rstrip(T_CHAR_DOUBLE_QUOTE) > > return True > > > > - elif self.__Token.startswith("\'") or self.__Token.startswith("L\'"): > > - self.__UndoToken() > > - self.__SkipToToken("\'") > > + elif self._Token.startswith(T_CHAR_SINGLE_QUOTE) or self._Token.startswith("L\'"): > > + self._UndoToken() > > + self._SkipToToken(T_CHAR_SINGLE_QUOTE) > > currentLineNumber = self.CurrentLineNumber > > > > - if not self.__SkipToToken("\'"): > > + if not self._SkipToToken(T_CHAR_SINGLE_QUOTE): > > raise Warning("Missing Quote \' for String", self.FileName, self.CurrentLineNumber) > > if currentLineNumber != self.CurrentLineNumber: > > raise Warning("Missing Quote \' for String", self.FileName, self.CurrentLineNumber) > > - self.__Token = self.__SkippedChars.rstrip('\'') > > + self._Token = self._SkippedChars.rstrip(T_CHAR_SINGLE_QUOTE) > > return True > > > > else: > > return False > > > > - ## __SkipToToken() method > > + ## _SkipToToken() method > > # > > # Search forward in file buffer for the string > > - # The skipped chars are put into self.__SkippedChars > > + # The skipped chars are put into self._SkippedChars > > # > > # @param self The object pointer > > # @param String The string to search > > @@ -1294,25 +1245,25 @@ class FdfParser: > > # @retval True Successfully find the string, file buffer pointer moved forward > > # @retval False Not able to find the string, file buffer pointer not changed > > # > > - def __SkipToToken(self, String, IgnoreCase = False): > > + def _SkipToToken(self, String, IgnoreCase = False): > > StartPos = self.GetFileBufferPos() > > > > - self.__SkippedChars = "" > > - while not self.__EndOfFile(): > > + self._SkippedChars = "" > > + while not self._EndOfFile(): > > index = -1 > > if IgnoreCase: > > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(String.upper()) > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(String.upper()) > > else: > > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(String) > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(String) > > if index == 0: > > self.CurrentOffsetWithinLine += len(String) > > - self.__SkippedChars += String > > + self._SkippedChars += String > > return True > > - self.__SkippedChars += str(self.__CurrentChar()) > > - self.__GetOneChar() > > + self._SkippedChars += str(self._CurrentChar()) > > + self._GetOneChar() > > > > - self.SetFileBufferPos( StartPos) > > - self.__SkippedChars = "" > > + self.SetFileBufferPos(StartPos) > > + self._SkippedChars = "" > > return False > > > > ## GetFileBufferPos() method > > @@ -1343,18 +1294,18 @@ class FdfParser: > > # @param self The object pointer > > # > > def Preprocess(self): > > - 
self.__StringToList() > > + self._StringToList() > > self.PreprocessFile() > > self.PreprocessIncludeFile() > > - self.__StringToList() > > + self._StringToList() > > self.PreprocessFile() > > self.PreprocessConditionalStatement() > > - self.__StringToList() > > - for Pos in self.__WipeOffArea: > > - self.__ReplaceFragment(Pos[0], Pos[1]) > > + self._StringToList() > > + for Pos in self._WipeOffArea: > > + self._ReplaceFragment(Pos[0], Pos[1]) > > self.Profile.FileLinesList = ["".join(list) for list in self.Profile.FileLinesList] > > > > - while self.__GetDefines(): > > + while self._GetDefines(): > > pass > > > > ## ParseFile() method > > @@ -1365,19 +1316,18 @@ class FdfParser: > > # @param self The object pointer > > # > > def ParseFile(self): > > - > > try: > > self.Preprocess() > > - self.__GetError() > > + self._GetError() > > # > > # Keep processing sections of the FDF until no new sections or a syntax error is found > > # > > - while self.__GetFd() or self.__GetFv() or self.__GetFmp() or self.__GetCapsule() or self.__GetVtf() or self.__GetRule() or > self.__GetOptionRom(): > > + while self._GetFd() or self._GetFv() or self._GetFmp() or self._GetCapsule() or self._GetVtf() or self._GetRule() or > self._GetOptionRom(): > > pass > > > > except Warning as X: > > - self.__UndoToken() > > - #'\n\tGot Token: \"%s\" from File %s\n' % (self.__Token, FileLineTuple[0]) + \ > > + self._UndoToken() > > + #'\n\tGot Token: \"%s\" from File %s\n' % (self._Token, FileLineTuple[0]) + \ > > # At this point, the closest parent would be the included file itself > > Profile = GetParentAtLine(X.OriginalLineNumber) > > if Profile is not None: > > @@ -1386,7 +1336,7 @@ class FdfParser: > > else: > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > X.Message += ' near line %d, column %d: %s' \ > > - % (FileLineTuple[1], self.CurrentOffsetWithinLine + 1, self.Profile.FileLinesList[self.CurrentLineNumber - > 1][self.CurrentOffsetWithinLine :].rstrip('\n').rstrip('\r')) > > + % (FileLineTuple[1], self.CurrentOffsetWithinLine + 1, self.Profile.FileLinesList[self.CurrentLineNumber - > 1][self.CurrentOffsetWithinLine:].rstrip(TAB_LINE_BREAK).rstrip(T_CHAR_CR)) > > raise > > > > ## SectionParser() method > > @@ -1403,7 +1353,7 @@ class FdfParser: > > and not S.startswith("[VTF.") and not S.startswith("[RULE.") and not S.startswith("[OPTIONROM.") and not > S.startswith('[FMPPAYLOAD.'): > > raise Warning("Unknown section or section appear sequence error (The correct sequence should be [DEFINES], [FD.], > [FV.], [Capsule.], [VTF.], [Rule.], [OptionRom.], [FMPPAYLOAD.])", self.FileName, self.CurrentLineNumber) > > > > - ## __GetDefines() method > > + ## _GetDefines() method > > # > > # Get Defines section contents and store its data into AllMacrosList > > # > > @@ -1411,56 +1361,55 @@ class FdfParser: > > # @retval True Successfully find a Defines > > # @retval False Not able to find a Defines > > # > > - def __GetDefines(self): > > - > > - if not self.__GetNextToken(): > > + def _GetDefines(self): > > + if not self._GetNextToken(): > > return False > > > > - S = self.__Token.upper() > > - if S.startswith("[") and not S.startswith("[DEFINES"): > > + S = self._Token.upper() > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[DEFINES"): > > self.SectionParser(S) > > - self.__UndoToken() > > + self._UndoToken() > > return False > > > > - self.__UndoToken() > > - if not self.__IsToken("[DEFINES", True): > > + self._UndoToken() > > + if not self._IsToken("[DEFINES", True): > > 
FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > FileLineTuple[1], self.CurrentOffsetWithinLine) > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > FileLineTuple[1], self.CurrentOffsetWithinLine) > > raise Warning("expected [DEFINES", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken( "]"): > > + if not self._IsToken(TAB_SECTION_END): > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > - while self.__GetNextWord(): > > + while self._GetNextWord(): > > # handle the SET statement > > - if self.__Token == 'SET': > > - self.__UndoToken() > > - self.__GetSetStatement(None) > > + if self._Token == 'SET': > > + self._UndoToken() > > + self._GetSetStatement(None) > > continue > > > > - Macro = self.__Token > > + Macro = self._Token > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken() or self.__Token.startswith('['): > > + if not self._GetNextToken() or self._Token.startswith(TAB_SECTION_START): > > raise Warning("expected MACRO value", self.FileName, self.CurrentLineNumber) > > - Value = self.__Token > > + Value = self._Token > > > > return False > > > > - ##__GetError() method > > - def __GetError(self): > > + ##_GetError() method > > + def _GetError(self): > > #save the Current information > > CurrentLine = self.CurrentLineNumber > > CurrentOffset = self.CurrentOffsetWithinLine > > - while self.__GetNextToken(): > > - if self.__Token == TAB_ERROR: > > - EdkLogger.error('FdfParser', ERROR_STATEMENT, self.__CurrentLine().replace(TAB_ERROR, '', 1), > File=self.FileName, Line=self.CurrentLineNumber) > > + while self._GetNextToken(): > > + if self._Token == TAB_ERROR: > > + EdkLogger.error('FdfParser', ERROR_STATEMENT, self._CurrentLine().replace(TAB_ERROR, '', 1), > File=self.FileName, Line=self.CurrentLineNumber) > > self.CurrentLineNumber = CurrentLine > > self.CurrentOffsetWithinLine = CurrentOffset > > > > - ## __GetFd() method > > + ## _GetFd() method > > # > > # Get FD section contents and store its data into FD dictionary of self.Profile > > # > > @@ -1468,27 +1417,26 @@ class FdfParser: > > # @retval True Successfully find a FD > > # @retval False Not able to find a FD > > # > > - def __GetFd(self): > > - > > - if not self.__GetNextToken(): > > + def _GetFd(self): > > + if not self._GetNextToken(): > > return False > > > > - S = self.__Token.upper() > > - if S.startswith("[") and not S.startswith("[FD."): > > + S = self._Token.upper() > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[FD."): > > if not S.startswith("[FV.") and not S.startswith('[FMPPAYLOAD.') and not S.startswith("[CAPSULE.") \ > > and not S.startswith("[VTF.") and not S.startswith("[RULE.") and not S.startswith("[OPTIONROM."): > > raise Warning("Unknown section", self.FileName, self.CurrentLineNumber) > > - self.__UndoToken() > > + self._UndoToken() > > return False > > > > - self.__UndoToken() > > - if not self.__IsToken("[FD.", True): > > + self._UndoToken() > > + if not self._IsToken("[FD.", True): > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > #print 'Parsing String: %s in File %s, At line: %d, Offset 
Within Line: %d' \ > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > FileLineTuple[1], self.CurrentOffsetWithinLine) > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > FileLineTuple[1], self.CurrentOffsetWithinLine) > > raise Warning("expected [FD.]", self.FileName, self.CurrentLineNumber) > > > > - FdName = self.__GetUiName() > > + FdName = self._GetUiName() > > if FdName == "": > > if len (self.Profile.FdDict) == 0: > > FdName = GenFdsGlobalVariable.PlatformName > > @@ -1502,56 +1450,56 @@ class FdfParser: > > if self.CurrentFdName in self.Profile.FdDict: > > raise Warning("Unexpected the same FD name", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken( "]"): > > + if not self._IsToken(TAB_SECTION_END): > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > - FdObj = Fd.FD() > > + FdObj = FD() > > FdObj.FdUiName = self.CurrentFdName > > self.Profile.FdDict[self.CurrentFdName] = FdObj > > > > if len (self.Profile.FdDict) > 1 and self.Profile.FdNameNotSet: > > raise Warning("expected all FDs have their name", self.FileName, self.CurrentLineNumber) > > > > - Status = self.__GetCreateFile(FdObj) > > + Status = self._GetCreateFile(FdObj) > > if not Status: > > raise Warning("FD name error", self.FileName, self.CurrentLineNumber) > > > > - while self.__GetTokenStatements(FdObj): > > + while self._GetTokenStatements(FdObj): > > pass > > for Attr in ("BaseAddress", "Size", "ErasePolarity"): > > if getattr(FdObj, Attr) is None: > > - self.__GetNextToken() > > + self._GetNextToken() > > raise Warning("Keyword %s missing" % Attr, self.FileName, self.CurrentLineNumber) > > > > if not FdObj.BlockSizeList: > > FdObj.BlockSizeList.append((1, FdObj.Size, None)) > > > > - self.__GetDefineStatements(FdObj) > > + self._GetDefineStatements(FdObj) > > > > - self.__GetSetStatements(FdObj) > > + self._GetSetStatements(FdObj) > > > > - if not self.__GetRegionLayout(FdObj): > > + if not self._GetRegionLayout(FdObj): > > raise Warning("expected region layout", self.FileName, self.CurrentLineNumber) > > > > - while self.__GetRegionLayout(FdObj): > > + while self._GetRegionLayout(FdObj): > > pass > > return True > > > > - ## __GetUiName() method > > + ## _GetUiName() method > > # > > # Return the UI name of a section > > # > > # @param self The object pointer > > # @retval FdName UI name > > # > > - def __GetUiName(self): > > + def _GetUiName(self): > > Name = "" > > - if self.__GetNextWord(): > > - Name = self.__Token > > + if self._GetNextWord(): > > + Name = self._Token > > > > return Name > > > > - ## __GetCreateFile() method > > + ## _GetCreateFile() method > > # > > # Return the output file name of object > > # > > @@ -1559,16 +1507,15 @@ class FdfParser: > > # @param Obj object whose data will be stored in file > > # @retval FdName UI name > > # > > - def __GetCreateFile(self, Obj): > > - > > - if self.__IsKeyword( "CREATE_FILE"): > > - if not self.__IsToken( "="): > > + def _GetCreateFile(self, Obj): > > + if self._IsKeyword("CREATE_FILE"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected file name", self.FileName, self.CurrentLineNumber) > > > > - FileName = self.__Token > > + FileName = self._Token > > Obj.CreateFileName = FileName > > > > return True > > 
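
(Editorial aside, not part of the quoted patch: the hunks above and below rename double-underscore helpers and attributes such as self.__Token to single-underscore self._Token. Beyond the naming convention, one practical effect is that double-underscore attributes are name-mangled by Python to _ClassName__name, which blocks access from subclasses and debug hooks; a single leading underscore is only a convention and leaves the attribute reachable. A minimal sketch with made-up class names Parser/SubParser illustrating the difference:

    class Parser:
        def __init__(self):
            self.__Token = "old"   # stored as _Parser__Token (name-mangled)
            self._Token = "new"    # plain attribute, private by convention only

    class SubParser(Parser):
        def peek(self):
            # "self.__Token" here would compile to self._SubParser__Token
            # and raise AttributeError, since only _Parser__Token exists.
            return self._Token     # works as expected after the rename

    print(SubParser().peek())        # -> "new"
    print(Parser()._Parser__Token)   # mangled name is still reachable, but awkward

So the _GetToken-style names keep the helpers non-public in the PEP 8 sense without the mangling surprises shown above.)
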
@@ -1576,25 +1523,25 @@ class FdfParser: > > def SetPcdLocalation(self,pcdpair): > > self.Profile.PcdLocalDict[pcdpair] = (self.Profile.FileName,self.CurrentLineNumber) > > > > - ## __GetTokenStatements() method > > + ## _GetTokenStatements() method > > # > > # Get token statements > > # > > # @param self The object pointer > > # @param Obj for whom token statement is got > > # > > - def __GetTokenStatements(self, Obj): > > - if self.__IsKeyword( "BaseAddress"): > > - if not self.__IsToken( "="): > > + def _GetTokenStatements(self, Obj): > > + if self._IsKeyword("BaseAddress"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextHexNumber(): > > + if not self._GetNextHexNumber(): > > raise Warning("expected Hex base address", self.FileName, self.CurrentLineNumber) > > > > - Obj.BaseAddress = self.__Token > > + Obj.BaseAddress = self._Token > > > > - if self.__IsToken( "|"): > > - pcdPair = self.__GetNextPcdSettings() > > + if self._IsToken(TAB_VALUE_SPLIT): > > + pcdPair = self._GetNextPcdSettings() > > Obj.BaseAddressPcd = pcdPair > > self.Profile.PcdDict[pcdPair] = Obj.BaseAddress > > self.SetPcdLocalation(pcdPair) > > @@ -1602,16 +1549,16 @@ class FdfParser: > > self.Profile.PcdFileLineDict[pcdPair] = FileLineTuple > > return True > > > > - if self.__IsKeyword( "Size"): > > - if not self.__IsToken( "="): > > + if self._IsKeyword("Size"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextHexNumber(): > > + if not self._GetNextHexNumber(): > > raise Warning("expected Hex size", self.FileName, self.CurrentLineNumber) > > > > - Size = self.__Token > > - if self.__IsToken( "|"): > > - pcdPair = self.__GetNextPcdSettings() > > + Size = self._Token > > + if self._IsToken(TAB_VALUE_SPLIT): > > + pcdPair = self._GetNextPcdSettings() > > Obj.SizePcd = pcdPair > > self.Profile.PcdDict[pcdPair] = Size > > self.SetPcdLocalation(pcdPair) > > @@ -1620,22 +1567,22 @@ class FdfParser: > > Obj.Size = long(Size, 0) > > return True > > > > - if self.__IsKeyword( "ErasePolarity"): > > - if not self.__IsToken( "="): > > + if self._IsKeyword("ErasePolarity"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected Erase Polarity", self.FileName, self.CurrentLineNumber) > > > > - if self.__Token != "1" and self.__Token != "0": > > + if self._Token != "1" and self._Token != "0": > > raise Warning("expected 1 or 0 Erase Polarity", self.FileName, self.CurrentLineNumber) > > > > - Obj.ErasePolarity = self.__Token > > + Obj.ErasePolarity = self._Token > > return True > > > > - return self.__GetBlockStatements(Obj) > > + return self._GetBlockStatements(Obj) > > > > - ## __GetAddressStatements() method > > + ## _GetAddressStatements() method > > # > > # Get address statements > > # > > @@ -1644,38 +1591,37 @@ class FdfParser: > > # @retval True Successfully find > > # @retval False Not able to find > > # > > - def __GetAddressStatements(self, Obj): > > - > > - if self.__IsKeyword("BsBaseAddress"): > > - if not self.__IsToken( "="): > > + def _GetAddressStatements(self, Obj): > > + if self._IsKeyword("BsBaseAddress"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not 
self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): > > + if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): > > raise Warning("expected address", self.FileName, self.CurrentLineNumber) > > > > - BsAddress = long(self.__Token, 0) > > + BsAddress = long(self._Token, 0) > > Obj.BsBaseAddress = BsAddress > > > > - if self.__IsKeyword("RtBaseAddress"): > > - if not self.__IsToken( "="): > > + if self._IsKeyword("RtBaseAddress"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): > > + if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): > > raise Warning("expected address", self.FileName, self.CurrentLineNumber) > > > > - RtAddress = long(self.__Token, 0) > > + RtAddress = long(self._Token, 0) > > Obj.RtBaseAddress = RtAddress > > > > - ## __GetBlockStatements() method > > + ## _GetBlockStatements() method > > # > > # Get block statements > > # > > # @param self The object pointer > > # @param Obj for whom block statement is got > > # > > - def __GetBlockStatements(self, Obj): > > + def _GetBlockStatements(self, Obj): > > IsBlock = False > > - while self.__GetBlockStatement(Obj): > > + while self._GetBlockStatement(Obj): > > IsBlock = True > > > > Item = Obj.BlockSizeList[-1] > > @@ -1683,7 +1629,7 @@ class FdfParser: > > raise Warning("expected block statement", self.FileName, self.CurrentLineNumber) > > return IsBlock > > > > - ## __GetBlockStatement() method > > + ## _GetBlockStatement() method > > # > > # Get block statement > > # > > @@ -1692,20 +1638,20 @@ class FdfParser: > > # @retval True Successfully find > > # @retval False Not able to find > > # > > - def __GetBlockStatement(self, Obj): > > - if not self.__IsKeyword( "BlockSize"): > > + def _GetBlockStatement(self, Obj): > > + if not self._IsKeyword("BlockSize"): > > return False > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextHexNumber() and not self.__GetNextDecimalNumber(): > > + if not self._GetNextHexNumber() and not self._GetNextDecimalNumber(): > > raise Warning("expected Hex or Integer block size", self.FileName, self.CurrentLineNumber) > > > > - BlockSize = self.__Token > > + BlockSize = self._Token > > BlockSizePcd = None > > - if self.__IsToken( "|"): > > - PcdPair = self.__GetNextPcdSettings() > > + if self._IsToken(TAB_VALUE_SPLIT): > > + PcdPair = self._GetNextPcdSettings() > > BlockSizePcd = PcdPair > > self.Profile.PcdDict[PcdPair] = BlockSize > > self.SetPcdLocalation(PcdPair) > > @@ -1714,19 +1660,19 @@ class FdfParser: > > BlockSize = long(BlockSize, 0) > > > > BlockNumber = None > > - if self.__IsKeyword( "NumBlocks"): > > - if not self.__IsToken( "="): > > + if self._IsKeyword("NumBlocks"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): > > + if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): > > raise Warning("expected block numbers", self.FileName, self.CurrentLineNumber) > > > > - BlockNumber = long(self.__Token, 0) > > + BlockNumber = long(self._Token, 0) > > > > Obj.BlockSizeList.append((BlockSize, BlockNumber, BlockSizePcd)) > > return True > > > > - ## __GetDefineStatements() method > > + ## 
_GetDefineStatements() method > > # > > # Get define statements > > # > > @@ -1735,11 +1681,11 @@ class FdfParser: > > # @retval True Successfully find > > # @retval False Not able to find > > # > > - def __GetDefineStatements(self, Obj): > > - while self.__GetDefineStatement( Obj): > > + def _GetDefineStatements(self, Obj): > > + while self._GetDefineStatement(Obj): > > pass > > > > - ## __GetDefineStatement() method > > + ## _GetDefineStatement() method > > # > > # Get define statement > > # > > @@ -1748,24 +1694,24 @@ class FdfParser: > > # @retval True Successfully find > > # @retval False Not able to find > > # > > - def __GetDefineStatement(self, Obj): > > - if self.__IsKeyword("DEFINE"): > > - self.__GetNextToken() > > - Macro = self.__Token > > - if not self.__IsToken( "="): > > + def _GetDefineStatement(self, Obj): > > + if self._IsKeyword(TAB_DEFINE): > > + self._GetNextToken() > > + Macro = self._Token > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected value", self.FileName, self.CurrentLineNumber) > > > > - Value = self.__Token > > + Value = self._Token > > Macro = '$(' + Macro + ')' > > Obj.DefineVarDict[Macro] = Value > > return True > > > > return False > > > > - ## __GetSetStatements() method > > + ## _GetSetStatements() method > > # > > # Get set statements > > # > > @@ -1774,11 +1720,11 @@ class FdfParser: > > # @retval True Successfully find > > # @retval False Not able to find > > # > > - def __GetSetStatements(self, Obj): > > - while self.__GetSetStatement(Obj): > > + def _GetSetStatements(self, Obj): > > + while self._GetSetStatement(Obj): > > pass > > > > - ## __GetSetStatement() method > > + ## _GetSetStatement() method > > # > > # Get set statement > > # > > @@ -1787,15 +1733,15 @@ class FdfParser: > > # @retval True Successfully find > > # @retval False Not able to find > > # > > - def __GetSetStatement(self, Obj): > > - if self.__IsKeyword("SET"): > > - PcdPair = self.__GetNextPcdSettings() > > + def _GetSetStatement(self, Obj): > > + if self._IsKeyword("SET"): > > + PcdPair = self._GetNextPcdSettings() > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - Value = self.__GetExpression() > > - Value = self.__EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) > > + Value = self._GetExpression() > > + Value = self._EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) > > > > if Obj: > > Obj.SetVarDict[PcdPair] = Value > > @@ -1807,19 +1753,19 @@ class FdfParser: > > > > return False > > > > - ## __CalcRegionExpr(self) > > + ## _CalcRegionExpr(self) > > # > > # Calculate expression for offset or size of a region > > # > > # @return: None if invalid expression > > # Calculated number if successfully > > # > > - def __CalcRegionExpr(self): > > + def _CalcRegionExpr(self): > > StartPos = self.GetFileBufferPos() > > Expr = '' > > PairCount = 0 > > - while not self.__EndOfFile(): > > - CurCh = self.__CurrentChar() > > + while not self._EndOfFile(): > > + CurCh = self._CurrentChar() > > if CurCh == '(': > > PairCount += 1 > > elif CurCh == ')': > > @@ -1828,106 +1774,106 @@ class FdfParser: > > if CurCh in '|\r\n' and PairCount == 0: > > break > > Expr += CurCh > > - self.__GetOneChar() > > + self._GetOneChar() > > try: > > return long( > > ValueExpression(Expr, > > - 
self.__CollectMacroPcd() > > + self._CollectMacroPcd() > > )(True), 0) > > except Exception: > > self.SetFileBufferPos(StartPos) > > return None > > > > - ## __GetRegionLayout() method > > + ## _GetRegionLayout() method > > # > > # Get region layout for FD > > # > > # @param self The object pointer > > - # @param Fd for whom region is got > > + # @param theFd for whom region is got > > # @retval True Successfully find > > # @retval False Not able to find > > # > > - def __GetRegionLayout(self, Fd): > > - Offset = self.__CalcRegionExpr() > > + def _GetRegionLayout(self, theFd): > > + Offset = self._CalcRegionExpr() > > if Offset is None: > > return False > > > > - RegionObj = Region.Region() > > + RegionObj = Region() > > RegionObj.Offset = Offset > > - Fd.RegionList.append(RegionObj) > > + theFd.RegionList.append(RegionObj) > > > > - if not self.__IsToken( "|"): > > + if not self._IsToken(TAB_VALUE_SPLIT): > > raise Warning("expected '|'", self.FileName, self.CurrentLineNumber) > > > > - Size = self.__CalcRegionExpr() > > + Size = self._CalcRegionExpr() > > if Size is None: > > raise Warning("expected Region Size", self.FileName, self.CurrentLineNumber) > > RegionObj.Size = Size > > > > - if not self.__GetNextWord(): > > + if not self._GetNextWord(): > > return True > > > > - if not self.__Token in ("SET", BINARY_FILE_TYPE_FV, "FILE", "DATA", "CAPSULE", "INF"): > > + if not self._Token in ("SET", BINARY_FILE_TYPE_FV, "FILE", "DATA", "CAPSULE", "INF"): > > # > > # If next token is a word which is not a valid FV type, it might be part of [PcdOffset[|PcdSize]] > > # Or it might be next region's offset described by an expression which starts with a PCD. > > # PcdOffset[|PcdSize] or OffsetPcdExpression|Size > > # > > - self.__UndoToken() > > - IsRegionPcd = (RegionSizeGuidPattern.match(self.__CurrentLine()[self.CurrentOffsetWithinLine:]) or > > - RegionOffsetPcdPattern.match(self.__CurrentLine()[self.CurrentOffsetWithinLine:])) > > + self._UndoToken() > > + IsRegionPcd = (RegionSizeGuidPattern.match(self._CurrentLine()[self.CurrentOffsetWithinLine:]) or > > + RegionOffsetPcdPattern.match(self._CurrentLine()[self.CurrentOffsetWithinLine:])) > > if IsRegionPcd: > > - RegionObj.PcdOffset = self.__GetNextPcdSettings() > > - self.Profile.PcdDict[RegionObj.PcdOffset] = "0x%08X" % (RegionObj.Offset + long(Fd.BaseAddress, 0)) > > + RegionObj.PcdOffset = self._GetNextPcdSettings() > > + self.Profile.PcdDict[RegionObj.PcdOffset] = "0x%08X" % (RegionObj.Offset + long(theFd.BaseAddress, 0)) > > self.SetPcdLocalation(RegionObj.PcdOffset) > > - self.__PcdDict['%s.%s' % (RegionObj.PcdOffset[1], RegionObj.PcdOffset[0])] = "0x%x" % RegionObj.Offset > > + self._PcdDict['%s.%s' % (RegionObj.PcdOffset[1], RegionObj.PcdOffset[0])] = "0x%x" % RegionObj.Offset > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > self.Profile.PcdFileLineDict[RegionObj.PcdOffset] = FileLineTuple > > - if self.__IsToken( "|"): > > - RegionObj.PcdSize = self.__GetNextPcdSettings() > > + if self._IsToken(TAB_VALUE_SPLIT): > > + RegionObj.PcdSize = self._GetNextPcdSettings() > > self.Profile.PcdDict[RegionObj.PcdSize] = "0x%08X" % RegionObj.Size > > self.SetPcdLocalation(RegionObj.PcdSize) > > - self.__PcdDict['%s.%s' % (RegionObj.PcdSize[1], RegionObj.PcdSize[0])] = "0x%x" % RegionObj.Size > > + self._PcdDict['%s.%s' % (RegionObj.PcdSize[1], RegionObj.PcdSize[0])] = "0x%x" % RegionObj.Size > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > 
self.Profile.PcdFileLineDict[RegionObj.PcdSize] = FileLineTuple > > > > - if not self.__GetNextWord(): > > + if not self._GetNextWord(): > > return True > > > > - if self.__Token == "SET": > > - self.__UndoToken() > > - self.__GetSetStatements( RegionObj) > > - if not self.__GetNextWord(): > > + if self._Token == "SET": > > + self._UndoToken() > > + self._GetSetStatements(RegionObj) > > + if not self._GetNextWord(): > > return True > > > > - elif self.__Token == BINARY_FILE_TYPE_FV: > > - self.__UndoToken() > > - self.__GetRegionFvType( RegionObj) > > + elif self._Token == BINARY_FILE_TYPE_FV: > > + self._UndoToken() > > + self._GetRegionFvType(RegionObj) > > > > - elif self.__Token == "CAPSULE": > > - self.__UndoToken() > > - self.__GetRegionCapType( RegionObj) > > + elif self._Token == "CAPSULE": > > + self._UndoToken() > > + self._GetRegionCapType(RegionObj) > > > > - elif self.__Token == "FILE": > > - self.__UndoToken() > > - self.__GetRegionFileType(RegionObj) > > + elif self._Token == "FILE": > > + self._UndoToken() > > + self._GetRegionFileType(RegionObj) > > > > - elif self.__Token == "INF": > > - self.__UndoToken() > > + elif self._Token == "INF": > > + self._UndoToken() > > RegionObj.RegionType = "INF" > > - while self.__IsKeyword("INF"): > > - self.__UndoToken() > > - ffsInf = self.__ParseInfStatement() > > + while self._IsKeyword("INF"): > > + self._UndoToken() > > + ffsInf = self._ParseInfStatement() > > if not ffsInf: > > break > > RegionObj.RegionDataList.append(ffsInf) > > > > - elif self.__Token == "DATA": > > - self.__UndoToken() > > - self.__GetRegionDataType(RegionObj) > > + elif self._Token == "DATA": > > + self._UndoToken() > > + self._GetRegionDataType(RegionObj) > > else: > > - self.__UndoToken() > > - if self.__GetRegionLayout(Fd): > > + self._UndoToken() > > + if self._GetRegionLayout(theFd): > > return True > > raise Warning("A valid region type was not found. " > > "Valid types are [SET, FV, CAPSULE, FILE, DATA, INF]. 
This error occurred", > > @@ -1935,189 +1881,185 @@ class FdfParser: > > > > return True > > > > - ## __GetRegionFvType() method > > + ## _GetRegionFvType() method > > # > > # Get region fv data for region > > # > > # @param self The object pointer > > # @param RegionObj for whom region data is got > > # > > - def __GetRegionFvType(self, RegionObj): > > - > > - if not self.__IsKeyword( BINARY_FILE_TYPE_FV): > > + def _GetRegionFvType(self, RegionObj): > > + if not self._IsKeyword(BINARY_FILE_TYPE_FV): > > raise Warning("expected Keyword BINARY_FILE_TYPE_FV", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) > > > > RegionObj.RegionType = BINARY_FILE_TYPE_FV > > - RegionObj.RegionDataList.append((self.__Token).upper()) > > + RegionObj.RegionDataList.append((self._Token).upper()) > > > > - while self.__IsKeyword( BINARY_FILE_TYPE_FV): > > + while self._IsKeyword(BINARY_FILE_TYPE_FV): > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) > > > > - RegionObj.RegionDataList.append((self.__Token).upper()) > > + RegionObj.RegionDataList.append((self._Token).upper()) > > > > - ## __GetRegionCapType() method > > + ## _GetRegionCapType() method > > # > > # Get region capsule data for region > > # > > # @param self The object pointer > > # @param RegionObj for whom region data is got > > # > > - def __GetRegionCapType(self, RegionObj): > > - > > - if not self.__IsKeyword("CAPSULE"): > > + def _GetRegionCapType(self, RegionObj): > > + if not self._IsKeyword("CAPSULE"): > > raise Warning("expected Keyword 'CAPSULE'", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected CAPSULE name", self.FileName, self.CurrentLineNumber) > > > > RegionObj.RegionType = "CAPSULE" > > - RegionObj.RegionDataList.append(self.__Token) > > + RegionObj.RegionDataList.append(self._Token) > > > > - while self.__IsKeyword("CAPSULE"): > > + while self._IsKeyword("CAPSULE"): > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected CAPSULE name", self.FileName, self.CurrentLineNumber) > > > > - RegionObj.RegionDataList.append(self.__Token) > > + RegionObj.RegionDataList.append(self._Token) > > > > - ## __GetRegionFileType() method > > + ## _GetRegionFileType() method > > # > > # Get region file data for region > > # > > # @param self The object pointer > > # @param RegionObj for whom region data is got > > # > > - def __GetRegionFileType(self, RegionObj): > > - > > - if not self.__IsKeyword( "FILE"): > > + def _GetRegionFileType(self, RegionObj): > > + if not self._IsKeyword("FILE"): > > raise Warning("expected Keyword 'FILE'", self.FileName, 
self.CurrentLineNumber) > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected File name", self.FileName, self.CurrentLineNumber) > > > > RegionObj.RegionType = "FILE" > > - RegionObj.RegionDataList.append( self.__Token) > > + RegionObj.RegionDataList.append(self._Token) > > > > - while self.__IsKeyword( "FILE"): > > + while self._IsKeyword("FILE"): > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected FILE name", self.FileName, self.CurrentLineNumber) > > > > - RegionObj.RegionDataList.append(self.__Token) > > + RegionObj.RegionDataList.append(self._Token) > > > > - ## __GetRegionDataType() method > > + ## _GetRegionDataType() method > > # > > # Get region array data for region > > # > > # @param self The object pointer > > # @param RegionObj for whom region data is got > > # > > - def __GetRegionDataType(self, RegionObj): > > - > > - if not self.__IsKeyword( "DATA"): > > + def _GetRegionDataType(self, RegionObj): > > + if not self._IsKeyword("DATA"): > > raise Warning("expected Region Data type", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken( "{"): > > + if not self._IsToken("{"): > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextHexNumber(): > > + if not self._GetNextHexNumber(): > > raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) > > > > - if len(self.__Token) > 18: > > + if len(self._Token) > 18: > > raise Warning("Hex string can't be converted to a valid UINT64 value", self.FileName, self.CurrentLineNumber) > > > > # convert hex string value to byte hex string array > > - AllString = self.__Token > > + AllString = self._Token > > AllStrLen = len (AllString) > > DataString = "" > > while AllStrLen > 4: > > - DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + "," > > + DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + TAB_COMMA_SPLIT > > AllStrLen = AllStrLen - 2 > > - DataString = DataString + AllString[:AllStrLen] + "," > > + DataString = DataString + AllString[:AllStrLen] + TAB_COMMA_SPLIT > > > > # byte value array > > - if len (self.__Token) <= 4: > > - while self.__IsToken(","): > > - if not self.__GetNextHexNumber(): > > + if len (self._Token) <= 4: > > + while self._IsToken(TAB_COMMA_SPLIT): > > + if not self._GetNextHexNumber(): > > raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) > > - if len(self.__Token) > 4: > > + if len(self._Token) > 4: > > raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) > > - DataString += self.__Token > > - DataString += "," > > + DataString += self._Token > > + DataString += TAB_COMMA_SPLIT > > > > - if not self.__IsToken( "}"): > > + if not self._IsToken("}"): > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > - DataString = DataString.rstrip(",") > > + DataString = DataString.rstrip(TAB_COMMA_SPLIT) > > RegionObj.RegionType = "DATA" > > - 
RegionObj.RegionDataList.append( DataString) > > + RegionObj.RegionDataList.append(DataString) > > > > - while self.__IsKeyword( "DATA"): > > + while self._IsKeyword("DATA"): > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken( "{"): > > + if not self._IsToken("{"): > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextHexNumber(): > > + if not self._GetNextHexNumber(): > > raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) > > > > - if len(self.__Token) > 18: > > + if len(self._Token) > 18: > > raise Warning("Hex string can't be converted to a valid UINT64 value", self.FileName, self.CurrentLineNumber) > > > > # convert hex string value to byte hex string array > > - AllString = self.__Token > > + AllString = self._Token > > AllStrLen = len (AllString) > > DataString = "" > > while AllStrLen > 4: > > - DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + "," > > + DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + TAB_COMMA_SPLIT > > AllStrLen = AllStrLen - 2 > > - DataString = DataString + AllString[:AllStrLen] + "," > > + DataString = DataString + AllString[:AllStrLen] + TAB_COMMA_SPLIT > > > > # byte value array > > - if len (self.__Token) <= 4: > > - while self.__IsToken(","): > > - if not self.__GetNextHexNumber(): > > + if len (self._Token) <= 4: > > + while self._IsToken(TAB_COMMA_SPLIT): > > + if not self._GetNextHexNumber(): > > raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) > > - if len(self.__Token) > 4: > > + if len(self._Token) > 4: > > raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) > > - DataString += self.__Token > > - DataString += "," > > + DataString += self._Token > > + DataString += TAB_COMMA_SPLIT > > > > - if not self.__IsToken( "}"): > > + if not self._IsToken("}"): > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > - DataString = DataString.rstrip(",") > > - RegionObj.RegionDataList.append( DataString) > > + DataString = DataString.rstrip(TAB_COMMA_SPLIT) > > + RegionObj.RegionDataList.append(DataString) > > > > - ## __GetFv() method > > + ## _GetFv() method > > # > > # Get FV section contents and store its data into FV dictionary of self.Profile > > # > > @@ -2125,68 +2067,68 @@ class FdfParser: > > # @retval True Successfully find a FV > > # @retval False Not able to find a FV > > # > > - def __GetFv(self): > > - if not self.__GetNextToken(): > > + def _GetFv(self): > > + if not self._GetNextToken(): > > return False > > > > - S = self.__Token.upper() > > - if S.startswith("[") and not S.startswith("[FV."): > > + S = self._Token.upper() > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[FV."): > > self.SectionParser(S) > > - self.__UndoToken() > > + self._UndoToken() > > return False > > > > - self.__UndoToken() > > - if not self.__IsToken("[FV.", True): > > + self._UndoToken() > > + if not self._IsToken("[FV.", True): > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > FileLineTuple[1], self.CurrentOffsetWithinLine) > > - raise Warning("Unknown Keyword '%s'" % self.__Token, 
self.FileName, self.CurrentLineNumber) > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > FileLineTuple[1], self.CurrentOffsetWithinLine) > > + raise Warning("Unknown Keyword '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > - FvName = self.__GetUiName() > > + FvName = self._GetUiName() > > self.CurrentFvName = FvName.upper() > > > > - if not self.__IsToken( "]"): > > + if not self._IsToken(TAB_SECTION_END): > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > - FvObj = Fv.FV() > > + FvObj = FV() > > FvObj.UiFvName = self.CurrentFvName > > self.Profile.FvDict[self.CurrentFvName] = FvObj > > > > - Status = self.__GetCreateFile(FvObj) > > + Status = self._GetCreateFile(FvObj) > > if not Status: > > raise Warning("FV name error", self.FileName, self.CurrentLineNumber) > > > > - self.__GetDefineStatements(FvObj) > > + self._GetDefineStatements(FvObj) > > > > - self.__GetAddressStatements(FvObj) > > + self._GetAddressStatements(FvObj) > > > > FvObj.FvExtEntryTypeValue = [] > > FvObj.FvExtEntryType = [] > > FvObj.FvExtEntryData = [] > > while True: > > - self.__GetSetStatements(FvObj) > > + self._GetSetStatements(FvObj) > > > > - if not (self.__GetBlockStatement(FvObj) or self.__GetFvBaseAddress(FvObj) or > > - self.__GetFvForceRebase(FvObj) or self.__GetFvAlignment(FvObj) or > > - self.__GetFvAttributes(FvObj) or self.__GetFvNameGuid(FvObj) or > > - self.__GetFvExtEntryStatement(FvObj) or self.__GetFvNameString(FvObj)): > > + if not (self._GetBlockStatement(FvObj) or self._GetFvBaseAddress(FvObj) or > > + self._GetFvForceRebase(FvObj) or self._GetFvAlignment(FvObj) or > > + self._GetFvAttributes(FvObj) or self._GetFvNameGuid(FvObj) or > > + self._GetFvExtEntryStatement(FvObj) or self._GetFvNameString(FvObj)): > > break > > > > if FvObj.FvNameString == 'TRUE' and not FvObj.FvNameGuid: > > raise Warning("FvNameString found but FvNameGuid was not found", self.FileName, self.CurrentLineNumber) > > > > - self.__GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) > > - self.__GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) > > + self._GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) > > + self._GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) > > > > while True: > > - isInf = self.__GetInfStatement(FvObj) > > - isFile = self.__GetFileStatement(FvObj, MacroDict = FvObj.DefineVarDict.copy()) > > + isInf = self._GetInfStatement(FvObj) > > + isFile = self._GetFileStatement(FvObj, MacroDict = FvObj.DefineVarDict.copy()) > > if not isInf and not isFile: > > break > > > > return True > > > > - ## __GetFvAlignment() method > > + ## _GetFvAlignment() method > > # > > # Get alignment for FV > > # > > @@ -2195,26 +2137,25 @@ class FdfParser: > > # @retval True Successfully find a alignment statement > > # @retval False Not able to find a alignment statement > > # > > - def __GetFvAlignment(self, Obj): > > - > > - if not self.__IsKeyword( "FvAlignment"): > > + def _GetFvAlignment(self, Obj): > > + if not self._IsKeyword("FvAlignment"): > > return False > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected alignment value", self.FileName, self.CurrentLineNumber) > > > > - if self.__Token.upper() not in ("1", "2", "4", "8", "16", "32", "64", "128", "256", "512", \ > > + if 
self._Token.upper() not in ("1", "2", "4", "8", "16", "32", "64", "128", "256", "512", \ > > "1K", "2K", "4K", "8K", "16K", "32K", "64K", "128K", "256K", "512K", \ > > "1M", "2M", "4M", "8M", "16M", "32M", "64M", "128M", "256M", "512M", \ > > "1G", "2G"): > > - raise Warning("Unknown alignment value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > - Obj.FvAlignment = self.__Token > > + raise Warning("Unknown alignment value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > + Obj.FvAlignment = self._Token > > return True > > > > - ## __GetFvBaseAddress() method > > + ## _GetFvBaseAddress() method > > # > > # Get BaseAddress for FV > > # > > @@ -2223,23 +2164,22 @@ class FdfParser: > > # @retval True Successfully find a FvBaseAddress statement > > # @retval False Not able to find a FvBaseAddress statement > > # > > - def __GetFvBaseAddress(self, Obj): > > - > > - if not self.__IsKeyword("FvBaseAddress"): > > + def _GetFvBaseAddress(self, Obj): > > + if not self._IsKeyword("FvBaseAddress"): > > return False > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected FV base address value", self.FileName, self.CurrentLineNumber) > > > > - if not BaseAddrValuePattern.match(self.__Token.upper()): > > - raise Warning("Unknown FV base address value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > - Obj.FvBaseAddress = self.__Token > > + if not BaseAddrValuePattern.match(self._Token.upper()): > > + raise Warning("Unknown FV base address value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > + Obj.FvBaseAddress = self._Token > > return True > > > > - ## __GetFvForceRebase() method > > + ## _GetFvForceRebase() method > > # > > # Get FvForceRebase for FV > > # > > @@ -2248,23 +2188,22 @@ class FdfParser: > > # @retval True Successfully find a FvForceRebase statement > > # @retval False Not able to find a FvForceRebase statement > > # > > - def __GetFvForceRebase(self, Obj): > > - > > - if not self.__IsKeyword("FvForceRebase"): > > + def _GetFvForceRebase(self, Obj): > > + if not self._IsKeyword("FvForceRebase"): > > return False > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected FvForceRebase value", self.FileName, self.CurrentLineNumber) > > > > - if self.__Token.upper() not in ["TRUE", "FALSE", "0", "0X0", "0X00", "1", "0X1", "0X01"]: > > - raise Warning("Unknown FvForceRebase value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > + if self._Token.upper() not in ["TRUE", "FALSE", "0", "0X0", "0X00", "1", "0X1", "0X01"]: > > + raise Warning("Unknown FvForceRebase value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > - if self.__Token.upper() in ["TRUE", "1", "0X1", "0X01"]: > > + if self._Token.upper() in ["TRUE", "1", "0X1", "0X01"]: > > Obj.FvForceRebase = True > > - elif self.__Token.upper() in ["FALSE", "0", "0X0", "0X00"]: > > + elif self._Token.upper() in ["FALSE", "0", "0X0", "0X00"]: > > Obj.FvForceRebase = False > > else: > > Obj.FvForceRebase = None > > @@ -2272,7 +2211,7 @@ class FdfParser: > > return True > > > > > > - ## __GetFvAttributes() method > > + ## _GetFvAttributes() method > > # > > 
# Get attributes for FV > > # > > @@ -2280,31 +2219,31 @@ class FdfParser: > > # @param Obj for whom attribute is got > > # @retval None > > # > > - def __GetFvAttributes(self, FvObj): > > + def _GetFvAttributes(self, FvObj): > > IsWordToken = False > > - while self.__GetNextWord(): > > + while self._GetNextWord(): > > IsWordToken = True > > - name = self.__Token > > + name = self._Token > > if name not in ("ERASE_POLARITY", "MEMORY_MAPPED", \ > > "STICKY_WRITE", "LOCK_CAP", "LOCK_STATUS", "WRITE_ENABLED_CAP", \ > > "WRITE_DISABLED_CAP", "WRITE_STATUS", "READ_ENABLED_CAP", \ > > "READ_DISABLED_CAP", "READ_STATUS", "READ_LOCK_CAP", \ > > "READ_LOCK_STATUS", "WRITE_LOCK_CAP", "WRITE_LOCK_STATUS", \ > > "WRITE_POLICY_RELIABLE", "WEAK_ALIGNMENT", "FvUsedSizeEnable"): > > - self.__UndoToken() > > + self._UndoToken() > > return False > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken() or self.__Token.upper() not in ("TRUE", "FALSE", "1", "0"): > > + if not self._GetNextToken() or self._Token.upper() not in ("TRUE", "FALSE", "1", "0"): > > raise Warning("expected TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) > > > > - FvObj.FvAttributeDict[name] = self.__Token > > + FvObj.FvAttributeDict[name] = self._Token > > > > return IsWordToken > > > > - ## __GetFvNameGuid() method > > + ## _GetFvNameGuid() method > > # > > # Get FV GUID for FV > > # > > @@ -2312,110 +2251,107 @@ class FdfParser: > > # @param Obj for whom GUID is got > > # @retval None > > # > > - def __GetFvNameGuid(self, FvObj): > > - > > - if not self.__IsKeyword( "FvNameGuid"): > > + def _GetFvNameGuid(self, FvObj): > > + if not self._IsKeyword("FvNameGuid"): > > return False > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextGuid(): > > + if not self._GetNextGuid(): > > raise Warning("expected FV GUID value", self.FileName, self.CurrentLineNumber) > > > > - FvObj.FvNameGuid = self.__Token > > + FvObj.FvNameGuid = self._Token > > > > return True > > > > - def __GetFvNameString(self, FvObj): > > - > > - if not self.__IsKeyword( "FvNameString"): > > + def _GetFvNameString(self, FvObj): > > + if not self._IsKeyword("FvNameString"): > > return False > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken() or self.__Token not in ('TRUE', 'FALSE'): > > + if not self._GetNextToken() or self._Token not in ('TRUE', 'FALSE'): > > raise Warning("expected TRUE or FALSE for FvNameString", self.FileName, self.CurrentLineNumber) > > > > - FvObj.FvNameString = self.__Token > > + FvObj.FvNameString = self._Token > > > > return True > > > > - def __GetFvExtEntryStatement(self, FvObj): > > - > > - if not (self.__IsKeyword( "FV_EXT_ENTRY") or self.__IsKeyword( "FV_EXT_ENTRY_TYPE")): > > + def _GetFvExtEntryStatement(self, FvObj): > > + if not (self._IsKeyword("FV_EXT_ENTRY") or self._IsKeyword("FV_EXT_ENTRY_TYPE")): > > return False > > > > - if not self.__IsKeyword ("TYPE"): > > + if not self._IsKeyword ("TYPE"): > > raise Warning("expected 'TYPE'", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, 
self.CurrentLineNumber) > > > > - if not self.__GetNextHexNumber() and not self.__GetNextDecimalNumber(): > > + if not self._GetNextHexNumber() and not self._GetNextDecimalNumber(): > > raise Warning("expected Hex FV extension entry type value At Line ", self.FileName, self.CurrentLineNumber) > > > > - FvObj.FvExtEntryTypeValue.append(self.__Token) > > + FvObj.FvExtEntryTypeValue.append(self._Token) > > > > - if not self.__IsToken( "{"): > > + if not self._IsToken("{"): > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsKeyword ("FILE") and not self.__IsKeyword ("DATA"): > > + if not self._IsKeyword ("FILE") and not self._IsKeyword ("DATA"): > > raise Warning("expected 'FILE' or 'DATA'", self.FileName, self.CurrentLineNumber) > > > > - FvObj.FvExtEntryType.append(self.__Token) > > + FvObj.FvExtEntryType.append(self._Token) > > > > - if self.__Token == 'DATA': > > + if self._Token == 'DATA': > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken( "{"): > > + if not self._IsToken("{"): > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextHexNumber(): > > + if not self._GetNextHexNumber(): > > raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) > > > > - if len(self.__Token) > 4: > > + if len(self._Token) > 4: > > raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) > > > > - DataString = self.__Token > > - DataString += "," > > + DataString = self._Token > > + DataString += TAB_COMMA_SPLIT > > > > - while self.__IsToken(","): > > - if not self.__GetNextHexNumber(): > > + while self._IsToken(TAB_COMMA_SPLIT): > > + if not self._GetNextHexNumber(): > > raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) > > - if len(self.__Token) > 4: > > + if len(self._Token) > 4: > > raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) > > - DataString += self.__Token > > - DataString += "," > > + DataString += self._Token > > + DataString += TAB_COMMA_SPLIT > > > > - if not self.__IsToken( "}"): > > + if not self._IsToken("}"): > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken( "}"): > > + if not self._IsToken("}"): > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > - DataString = DataString.rstrip(",") > > + DataString = DataString.rstrip(TAB_COMMA_SPLIT) > > FvObj.FvExtEntryData.append(DataString) > > > > - if self.__Token == 'FILE': > > + if self._Token == 'FILE': > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected FV Extension Entry file path At Line ", self.FileName, self.CurrentLineNumber) > > > > - FvObj.FvExtEntryData.append(self.__Token) > > + FvObj.FvExtEntryData.append(self._Token) > > > > - if not self.__IsToken( "}"): > > + if not self._IsToken("}"): > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > return True > > > > - ## __GetAprioriSection() method > > + ## _GetAprioriSection() method > > # > > # Get token statements > > # > > @@ -2425,56 +2361,55 @@ class FdfParser: > > # @retval True Successfully find apriori statement > 
> # @retval False Not able to find apriori statement > > # > > - def __GetAprioriSection(self, FvObj, MacroDict = {}): > > - > > - if not self.__IsKeyword( "APRIORI"): > > + def _GetAprioriSection(self, FvObj, MacroDict = {}): > > + if not self._IsKeyword("APRIORI"): > > return False > > > > - if not self.__IsKeyword("PEI") and not self.__IsKeyword("DXE"): > > + if not self._IsKeyword("PEI") and not self._IsKeyword("DXE"): > > raise Warning("expected Apriori file type", self.FileName, self.CurrentLineNumber) > > - AprType = self.__Token > > + AprType = self._Token > > > > - if not self.__IsToken( "{"): > > + if not self._IsToken("{"): > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > - AprSectionObj = AprioriSection.AprioriSection() > > + AprSectionObj = AprioriSection() > > AprSectionObj.AprioriType = AprType > > > > - self.__GetDefineStatements(AprSectionObj) > > + self._GetDefineStatements(AprSectionObj) > > MacroDict.update(AprSectionObj.DefineVarDict) > > > > while True: > > - IsInf = self.__GetInfStatement(AprSectionObj) > > - IsFile = self.__GetFileStatement( AprSectionObj) > > + IsInf = self._GetInfStatement(AprSectionObj) > > + IsFile = self._GetFileStatement(AprSectionObj) > > if not IsInf and not IsFile: > > break > > > > - if not self.__IsToken( "}"): > > + if not self._IsToken("}"): > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > FvObj.AprioriSectionList.append(AprSectionObj) > > return True > > > > - def __ParseInfStatement(self): > > - if not self.__IsKeyword("INF"): > > + def _ParseInfStatement(self): > > + if not self._IsKeyword("INF"): > > return None > > > > - ffsInf = FfsInfStatement.FfsInfStatement() > > - self.__GetInfOptions(ffsInf) > > + ffsInf = FfsInfStatement() > > + self._GetInfOptions(ffsInf) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected INF file path", self.FileName, self.CurrentLineNumber) > > - ffsInf.InfFileName = self.__Token > > + ffsInf.InfFileName = self._Token > > if not ffsInf.InfFileName.endswith('.inf'): > > raise Warning("expected .inf file path", self.FileName, self.CurrentLineNumber) > > > > ffsInf.CurrentLineNum = self.CurrentLineNumber > > - ffsInf.CurrentLineContent = self.__CurrentLine() > > + ffsInf.CurrentLineContent = self._CurrentLine() > > > > #Replace $(SAPCE) with real space > > ffsInf.InfFileName = ffsInf.InfFileName.replace('$(SPACE)', ' ') > > > > - if ffsInf.InfFileName.replace('$(WORKSPACE)', '').find('$') == -1: > > + if ffsInf.InfFileName.replace(TAB_WORKSPACE, '').find('$') == -1: > > #do case sensitive check for file path > > ErrorCode, ErrorInfo = PathClass(NormPath(ffsInf.InfFileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() > > if ErrorCode != 0: > > @@ -2492,16 +2427,16 @@ class FdfParser: > > else: > > self.Profile.InfDict['ArchTBD'].append(ffsInf.InfFileName) > > > > - if self.__IsToken('|'): > > - if self.__IsKeyword('RELOCS_STRIPPED'): > > + if self._IsToken(TAB_VALUE_SPLIT): > > + if self._IsKeyword('RELOCS_STRIPPED'): > > ffsInf.KeepReloc = False > > - elif self.__IsKeyword('RELOCS_RETAINED'): > > + elif self._IsKeyword('RELOCS_RETAINED'): > > ffsInf.KeepReloc = True > > else: > > - raise Warning("Unknown reloc strip flag '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > + raise Warning("Unknown reloc strip flag '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > return ffsInf > > > > - ## __GetInfStatement() method > > + ## _GetInfStatement() method > > # > > # Get 
INF statements > > # > > @@ -2510,86 +2445,86 @@ class FdfParser: > > # @retval True Successfully find inf statement > > # @retval False Not able to find inf statement > > # > > - def __GetInfStatement(self, Obj, ForCapsule=False): > > - ffsInf = self.__ParseInfStatement() > > + def _GetInfStatement(self, Obj, ForCapsule=False): > > + ffsInf = self._ParseInfStatement() > > if not ffsInf: > > return False > > > > if ForCapsule: > > - capsuleFfs = CapsuleData.CapsuleFfs() > > - capsuleFfs.Ffs = ffsInf > > - Obj.CapsuleDataList.append(capsuleFfs) > > + myCapsuleFfs = CapsuleFfs() > > + myCapsuleFfs.Ffs = ffsInf > > + Obj.CapsuleDataList.append(myCapsuleFfs) > > else: > > Obj.FfsList.append(ffsInf) > > return True > > > > - ## __GetInfOptions() method > > + ## _GetInfOptions() method > > # > > # Get options for INF > > # > > # @param self The object pointer > > # @param FfsInfObj for whom option is got > > # > > - def __GetInfOptions(self, FfsInfObj): > > - if self.__IsKeyword("FILE_GUID"): > > - if not self.__IsToken("="): > > + def _GetInfOptions(self, FfsInfObj): > > + if self._IsKeyword("FILE_GUID"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextGuid(): > > + if not self._GetNextGuid(): > > raise Warning("expected GUID value", self.FileName, self.CurrentLineNumber) > > - FfsInfObj.OverrideGuid = self.__Token > > + FfsInfObj.OverrideGuid = self._Token > > > > - if self.__IsKeyword( "RuleOverride"): > > - if not self.__IsToken( "="): > > + if self._IsKeyword("RuleOverride"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected Rule name", self.FileName, self.CurrentLineNumber) > > - FfsInfObj.Rule = self.__Token > > + FfsInfObj.Rule = self._Token > > > > - if self.__IsKeyword( "VERSION"): > > - if not self.__IsToken( "="): > > + if self._IsKeyword("VERSION"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected Version", self.FileName, self.CurrentLineNumber) > > > > - if self.__GetStringData(): > > - FfsInfObj.Version = self.__Token > > + if self._GetStringData(): > > + FfsInfObj.Version = self._Token > > > > - if self.__IsKeyword( BINARY_FILE_TYPE_UI): > > - if not self.__IsToken( "="): > > + if self._IsKeyword(BINARY_FILE_TYPE_UI): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected UI name", self.FileName, self.CurrentLineNumber) > > > > - if self.__GetStringData(): > > - FfsInfObj.Ui = self.__Token > > + if self._GetStringData(): > > + FfsInfObj.Ui = self._Token > > > > - if self.__IsKeyword( "USE"): > > - if not self.__IsToken( "="): > > + if self._IsKeyword("USE"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected ARCH name", self.FileName, self.CurrentLineNumber) > > - FfsInfObj.UseArch = self.__Token > > + FfsInfObj.UseArch = self._Token > > > > > > - if self.__GetNextToken(): > > - p = > 
re.compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\))') > > - if p.match(self.__Token) and p.match(self.__Token).span()[1] == len(self.__Token): > > - FfsInfObj.KeyStringList.append(self.__Token) > > - if not self.__IsToken(","): > > + if self._GetNextToken(): > > + p = > compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\))') > > + if p.match(self._Token) and p.match(self._Token).span()[1] == len(self._Token): > > + FfsInfObj.KeyStringList.append(self._Token) > > + if not self._IsToken(TAB_COMMA_SPLIT): > > return > > else: > > - self.__UndoToken() > > + self._UndoToken() > > return > > > > - while self.__GetNextToken(): > > - if not p.match(self.__Token): > > + while self._GetNextToken(): > > + if not p.match(self._Token): > > raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) > > - FfsInfObj.KeyStringList.append(self.__Token) > > + FfsInfObj.KeyStringList.append(self._Token) > > > > - if not self.__IsToken(","): > > + if not self._IsToken(TAB_COMMA_SPLIT): > > break > > > > - ## __GetFileStatement() method > > + ## _GetFileStatement() method > > # > > # Get FILE statements > > # > > @@ -2599,42 +2534,41 @@ class FdfParser: > > # @retval True Successfully find FILE statement > > # @retval False Not able to find FILE statement > > # > > - def __GetFileStatement(self, Obj, ForCapsule = False, MacroDict = {}): > > - > > - if not self.__IsKeyword( "FILE"): > > + def _GetFileStatement(self, Obj, ForCapsule = False, MacroDict = {}): > > + if not self._IsKeyword("FILE"): > > return False > > > > - if not self.__GetNextWord(): > > + if not self._GetNextWord(): > > raise Warning("expected FFS type", self.FileName, self.CurrentLineNumber) > > > > - if ForCapsule and self.__Token == 'DATA': > > - self.__UndoToken() > > - self.__UndoToken() > > + if ForCapsule and self._Token == 'DATA': > > + self._UndoToken() > > + self._UndoToken() > > return False > > > > - FfsFileObj = FfsFileStatement.FileStatement() > > - FfsFileObj.FvFileType = self.__Token > > + FfsFileObj = FileStatement() > > + FfsFileObj.FvFileType = self._Token > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextGuid(): > > - if not self.__GetNextWord(): > > + if not self._GetNextGuid(): > > + if not self._GetNextWord(): > > raise Warning("expected File GUID", self.FileName, self.CurrentLineNumber) > > - if self.__Token == 'PCD': > > - if not self.__IsToken( "("): > > + if self._Token == 'PCD': > > + if not self._IsToken("("): > > raise Warning("expected '('", self.FileName, self.CurrentLineNumber) > > - PcdPair = self.__GetNextPcdSettings() > > - if not self.__IsToken( ")"): > > + PcdPair = self._GetNextPcdSettings() > > + if not self._IsToken(")"): > > raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) > > - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' > > + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' > > > > - FfsFileObj.NameGuid = self.__Token > > + FfsFileObj.NameGuid = self._Token > > > > - self.__GetFilePart( FfsFileObj, MacroDict.copy()) > > + self._GetFilePart(FfsFileObj, MacroDict.copy()) > > > > if ForCapsule: > > - capsuleFfs = CapsuleData.CapsuleFfs() > > + capsuleFfs = CapsuleFfs() > > capsuleFfs.Ffs = FfsFileObj > > Obj.CapsuleDataList.append(capsuleFfs) > > else: > > @@ -2642,7 +2576,7 @@ 
class FdfParser: > > > > return True > > > > - ## __FileCouldHaveRelocFlag() method > > + ## _FileCouldHaveRelocFlag() method > > # > > # Check whether reloc strip flag can be set for a file type. > > # > > @@ -2651,13 +2585,13 @@ class FdfParser: > > # @retval False No way to have it > > # > > @staticmethod > > - def __FileCouldHaveRelocFlag (FileType): > > + def _FileCouldHaveRelocFlag (FileType): > > if FileType in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, 'PEI_DXE_COMBO'): > > return True > > else: > > return False > > > > - ## __SectionCouldHaveRelocFlag() method > > + ## _SectionCouldHaveRelocFlag() method > > # > > # Check whether reloc strip flag can be set for a section type. > > # > > @@ -2666,13 +2600,13 @@ class FdfParser: > > # @retval False No way to have it > > # > > @staticmethod > > - def __SectionCouldHaveRelocFlag (SectionType): > > + def _SectionCouldHaveRelocFlag (SectionType): > > if SectionType in (BINARY_FILE_TYPE_TE, BINARY_FILE_TYPE_PE32): > > return True > > else: > > return False > > > > - ## __GetFilePart() method > > + ## _GetFilePart() method > > # > > # Get components for FILE statement > > # > > @@ -2680,58 +2614,57 @@ class FdfParser: > > # @param FfsFileObj for whom component is got > > # @param MacroDict dictionary used to replace macro > > # > > - def __GetFilePart(self, FfsFileObj, MacroDict = {}): > > + def _GetFilePart(self, FfsFileObj, MacroDict = {}): > > + self._GetFileOpts(FfsFileObj) > > > > - self.__GetFileOpts( FfsFileObj) > > - > > - if not self.__IsToken("{"): > > - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): > > - if self.__FileCouldHaveRelocFlag(FfsFileObj.FvFileType): > > - if self.__Token == 'RELOCS_STRIPPED': > > + if not self._IsToken("{"): > > + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): > > + if self._FileCouldHaveRelocFlag(FfsFileObj.FvFileType): > > + if self._Token == 'RELOCS_STRIPPED': > > FfsFileObj.KeepReloc = False > > else: > > FfsFileObj.KeepReloc = True > > else: > > raise Warning("File type %s could not have reloc strip flag%d" % (FfsFileObj.FvFileType, > self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken("{"): > > + if not self._IsToken("{"): > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected File name or section data", self.FileName, self.CurrentLineNumber) > > > > - if self.__Token == BINARY_FILE_TYPE_FV: > > - if not self.__IsToken( "="): > > + if self._Token == BINARY_FILE_TYPE_FV: > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) > > - FfsFileObj.FvName = self.__Token > > + FfsFileObj.FvName = self._Token > > > > - elif self.__Token == "FD": > > - if not self.__IsToken( "="): > > + elif self._Token == "FD": > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected FD name", self.FileName, self.CurrentLineNumber) > > - FfsFileObj.FdName = self.__Token > > + FfsFileObj.FdName = self._Token > > > > - elif self.__Token in ("DEFINE", "APRIORI", "SECTION"): > > - self.__UndoToken() > > - 
self.__GetSectionData( FfsFileObj, MacroDict) > > + elif self._Token in (TAB_DEFINE, "APRIORI", "SECTION"): > > + self._UndoToken() > > + self._GetSectionData(FfsFileObj, MacroDict) > > > > elif hasattr(FfsFileObj, 'FvFileType') and FfsFileObj.FvFileType == 'RAW': > > - self.__UndoToken() > > - self.__GetRAWData(FfsFileObj, MacroDict) > > + self._UndoToken() > > + self._GetRAWData(FfsFileObj, MacroDict) > > > > else: > > FfsFileObj.CurrentLineNum = self.CurrentLineNumber > > - FfsFileObj.CurrentLineContent = self.__CurrentLine() > > - FfsFileObj.FileName = self.__Token.replace('$(SPACE)', ' ') > > - self.__VerifyFile(FfsFileObj.FileName) > > + FfsFileObj.CurrentLineContent = self._CurrentLine() > > + FfsFileObj.FileName = self._Token.replace('$(SPACE)', ' ') > > + self._VerifyFile(FfsFileObj.FileName) > > > > - if not self.__IsToken( "}"): > > + if not self._IsToken("}"): > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > - ## __GetRAWData() method > > + ## _GetRAWData() method > > # > > # Get RAW data for FILE statement > > # > > @@ -2739,33 +2672,32 @@ class FdfParser: > > # @param FfsFileObj for whom section is got > > # @param MacroDict dictionary used to replace macro > > # > > - def __GetRAWData(self, FfsFileObj, MacroDict = {}): > > + def _GetRAWData(self, FfsFileObj, MacroDict = {}): > > FfsFileObj.FileName = [] > > FfsFileObj.SubAlignment = [] > > while True: > > AlignValue = None > > - if self.__GetAlignment(): > > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > + if self._GetAlignment(): > > + if self._Token not in ALIGNMENTS: > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > #For FFS, Auto is default option same to "" > > - if not self.__Token == "Auto": > > - AlignValue = self.__Token > > - if not self.__GetNextToken(): > > + if not self._Token == "Auto": > > + AlignValue = self._Token > > + if not self._GetNextToken(): > > raise Warning("expected Filename value", self.FileName, self.CurrentLineNumber) > > > > - FileName = self.__Token.replace('$(SPACE)', ' ') > > + FileName = self._Token.replace('$(SPACE)', ' ') > > if FileName == '}': > > - self.__UndoToken() > > + self._UndoToken() > > raise Warning("expected Filename value", self.FileName, self.CurrentLineNumber) > > > > - self.__VerifyFile(FileName) > > + self._VerifyFile(FileName) > > File = PathClass(NormPath(FileName), GenFdsGlobalVariable.WorkSpaceDir) > > FfsFileObj.FileName.append(File.Path) > > FfsFileObj.SubAlignment.append(AlignValue) > > > > - if self.__IsToken( "}"): > > - self.__UndoToken() > > + if self._IsToken("}"): > > + self._UndoToken() > > break > > > > if len(FfsFileObj.SubAlignment) == 1: > > @@ -2773,45 +2705,43 @@ class FdfParser: > > if len(FfsFileObj.FileName) == 1: > > FfsFileObj.FileName = FfsFileObj.FileName[0] > > > > - ## __GetFileOpts() method > > + ## _GetFileOpts() method > > # > > # Get options for FILE statement > > # > > # @param self The object pointer > > # @param FfsFileObj for whom options is got > > # > > - def __GetFileOpts(self, FfsFileObj): > > - > > - if self.__GetNextToken(): > > - if TokenFindPattern.match(self.__Token): > > - FfsFileObj.KeyStringList.append(self.__Token) > > - if self.__IsToken(","): > > - while self.__GetNextToken(): > > - if not 
TokenFindPattern.match(self.__Token): > > + def _GetFileOpts(self, FfsFileObj): > > + if self._GetNextToken(): > > + if TokenFindPattern.match(self._Token): > > + FfsFileObj.KeyStringList.append(self._Token) > > + if self._IsToken(TAB_COMMA_SPLIT): > > + while self._GetNextToken(): > > + if not TokenFindPattern.match(self._Token): > > raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) > > - FfsFileObj.KeyStringList.append(self.__Token) > > + FfsFileObj.KeyStringList.append(self._Token) > > > > - if not self.__IsToken(","): > > + if not self._IsToken(TAB_COMMA_SPLIT): > > break > > > > else: > > - self.__UndoToken() > > + self._UndoToken() > > > > - if self.__IsKeyword( "FIXED", True): > > + if self._IsKeyword("FIXED", True): > > FfsFileObj.Fixed = True > > > > - if self.__IsKeyword( "CHECKSUM", True): > > + if self._IsKeyword("CHECKSUM", True): > > FfsFileObj.CheckSum = True > > > > - if self.__GetAlignment(): > > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > + if self._GetAlignment(): > > + if self._Token not in ALIGNMENTS: > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > #For FFS, Auto is default option same to "" > > - if not self.__Token == "Auto": > > - FfsFileObj.Alignment = self.__Token > > + if not self._Token == "Auto": > > + FfsFileObj.Alignment = self._Token > > > > - ## __GetAlignment() method > > + ## _GetAlignment() method > > # > > # Return the alignment value > > # > > @@ -2819,18 +2749,18 @@ class FdfParser: > > # @retval True Successfully find alignment > > # @retval False Not able to find alignment > > # > > - def __GetAlignment(self): > > - if self.__IsKeyword( "Align", True): > > - if not self.__IsToken( "="): > > + def _GetAlignment(self): > > + if self._IsKeyword("Align", True): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected alignment value", self.FileName, self.CurrentLineNumber) > > return True > > > > return False > > > > - ## __GetFilePart() method > > + ## _GetFilePart() method > > # > > # Get section data for FILE statement > > # > > @@ -2838,23 +2768,23 @@ class FdfParser: > > # @param FfsFileObj for whom section is got > > # @param MacroDict dictionary used to replace macro > > # > > - def __GetSectionData(self, FfsFileObj, MacroDict = {}): > > + def _GetSectionData(self, FfsFileObj, MacroDict = {}): > > Dict = {} > > Dict.update(MacroDict) > > > > - self.__GetDefineStatements(FfsFileObj) > > + self._GetDefineStatements(FfsFileObj) > > > > Dict.update(FfsFileObj.DefineVarDict) > > - self.__GetAprioriSection(FfsFileObj, Dict.copy()) > > - self.__GetAprioriSection(FfsFileObj, Dict.copy()) > > + self._GetAprioriSection(FfsFileObj, Dict.copy()) > > + self._GetAprioriSection(FfsFileObj, Dict.copy()) > > > > while True: > > - IsLeafSection = self.__GetLeafSection(FfsFileObj, Dict) > > - IsEncapSection = self.__GetEncapsulationSec(FfsFileObj) > > + IsLeafSection = self._GetLeafSection(FfsFileObj, Dict) > > + IsEncapSection = self._GetEncapsulationSec(FfsFileObj) > > if not IsLeafSection and not IsEncapSection: > > break > > > > - ## __GetLeafSection() method > > + ## 
_GetLeafSection() method > > # > > # Get leaf section for Obj > > # > > @@ -2864,97 +2794,95 @@ class FdfParser: > > # @retval True Successfully find section statement > > # @retval False Not able to find section statement > > # > > - def __GetLeafSection(self, Obj, MacroDict = {}): > > - > > + def _GetLeafSection(self, Obj, MacroDict = {}): > > OldPos = self.GetFileBufferPos() > > > > - if not self.__IsKeyword( "SECTION"): > > + if not self._IsKeyword("SECTION"): > > if len(Obj.SectionList) == 0: > > raise Warning("expected SECTION", self.FileName, self.CurrentLineNumber) > > else: > > return False > > > > AlignValue = None > > - if self.__GetAlignment(): > > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > - AlignValue = self.__Token > > + if self._GetAlignment(): > > + if self._Token not in ALIGNMENTS: > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > + AlignValue = self._Token > > > > BuildNum = None > > - if self.__IsKeyword( "BUILD_NUM"): > > - if not self.__IsToken( "="): > > + if self._IsKeyword("BUILD_NUM"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected Build number value", self.FileName, self.CurrentLineNumber) > > > > - BuildNum = self.__Token > > + BuildNum = self._Token > > > > - if self.__IsKeyword( "VERSION"): > > + if self._IsKeyword("VERSION"): > > if AlignValue == 'Auto': > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected version", self.FileName, self.CurrentLineNumber) > > - VerSectionObj = VerSection.VerSection() > > + VerSectionObj = VerSection() > > VerSectionObj.Alignment = AlignValue > > VerSectionObj.BuildNum = BuildNum > > - if self.__GetStringData(): > > - VerSectionObj.StringData = self.__Token > > + if self._GetStringData(): > > + VerSectionObj.StringData = self._Token > > else: > > - VerSectionObj.FileName = self.__Token > > + VerSectionObj.FileName = self._Token > > Obj.SectionList.append(VerSectionObj) > > > > - elif self.__IsKeyword( BINARY_FILE_TYPE_UI): > > + elif self._IsKeyword(BINARY_FILE_TYPE_UI): > > if AlignValue == 'Auto': > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected UI", self.FileName, self.CurrentLineNumber) > > - UiSectionObj = UiSection.UiSection() > > + UiSectionObj = UiSection() > > UiSectionObj.Alignment = AlignValue > > - if self.__GetStringData(): > > - UiSectionObj.StringData = self.__Token > > + if self._GetStringData(): > > + UiSectionObj.StringData = self._Token > > else: > > - UiSectionObj.FileName = self.__Token > > + UiSectionObj.FileName = self._Token > > 
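
The alignment checks in this hunk replace the long literal tuples with the shared ALIGNMENTS constant. A minimal sketch of how such shared tokens could be defined and used at the call sites; the names come from the patch, but the exact definitions and the module they live in are assumptions here:

    # Sketch only: shared alignment tokens standing in for the repeated
    # literal tuples. Names taken from the patch; definitions assumed.
    ALIGNMENT_NOAUTO = ("8", "16", "32", "64", "128", "512",
                        "1K", "4K", "32K", "64K", "128K",
                        "256K", "512K", "1M", "2M", "4M", "8M", "16M")
    ALIGNMENTS = ("Auto",) + ALIGNMENT_NOAUTO

    def CheckAlignment(Token, AllowAuto=True):
        # Mirrors the parser call sites: reject unknown values, and treat
        # "Auto" the same as the default (empty) alignment for FFS files.
        Valid = ALIGNMENTS if AllowAuto else ALIGNMENT_NOAUTO
        if Token not in Valid:
            raise ValueError("Incorrect alignment '%s'" % Token)
        return None if Token == "Auto" else Token

Keeping one definition means a new alignment value only has to be added in one place instead of at every "not in (...)" check.
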
Obj.SectionList.append(UiSectionObj) > > > > - elif self.__IsKeyword( "FV_IMAGE"): > > + elif self._IsKeyword("FV_IMAGE"): > > if AlignValue == 'Auto': > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected FV name or FV file path", self.FileName, self.CurrentLineNumber) > > > > - FvName = self.__Token > > + FvName = self._Token > > FvObj = None > > > > - if self.__IsToken( "{"): > > - FvObj = Fv.FV() > > + if self._IsToken("{"): > > + FvObj = FV() > > FvObj.UiFvName = FvName.upper() > > - self.__GetDefineStatements(FvObj) > > + self._GetDefineStatements(FvObj) > > MacroDict.update(FvObj.DefineVarDict) > > - self.__GetBlockStatement(FvObj) > > - self.__GetSetStatements(FvObj) > > - self.__GetFvAlignment(FvObj) > > - self.__GetFvAttributes(FvObj) > > - self.__GetAprioriSection(FvObj, MacroDict.copy()) > > - self.__GetAprioriSection(FvObj, MacroDict.copy()) > > + self._GetBlockStatement(FvObj) > > + self._GetSetStatements(FvObj) > > + self._GetFvAlignment(FvObj) > > + self._GetFvAttributes(FvObj) > > + self._GetAprioriSection(FvObj, MacroDict.copy()) > > + self._GetAprioriSection(FvObj, MacroDict.copy()) > > > > while True: > > - IsInf = self.__GetInfStatement(FvObj) > > - IsFile = self.__GetFileStatement(FvObj, MacroDict.copy()) > > + IsInf = self._GetInfStatement(FvObj) > > + IsFile = self._GetFileStatement(FvObj, MacroDict.copy()) > > if not IsInf and not IsFile: > > break > > > > - if not self.__IsToken( "}"): > > + if not self._IsToken("}"): > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > - FvImageSectionObj = FvImageSection.FvImageSection() > > + FvImageSectionObj = FvImageSection() > > FvImageSectionObj.Alignment = AlignValue > > if FvObj is not None: > > FvImageSectionObj.Fv = FvObj > > @@ -2965,81 +2893,81 @@ class FdfParser: > > > > Obj.SectionList.append(FvImageSectionObj) > > > > - elif self.__IsKeyword("PEI_DEPEX_EXP") or self.__IsKeyword("DXE_DEPEX_EXP") or self.__IsKeyword("SMM_DEPEX_EXP"): > > + elif self._IsKeyword("PEI_DEPEX_EXP") or self._IsKeyword("DXE_DEPEX_EXP") or self._IsKeyword("SMM_DEPEX_EXP"): > > if AlignValue == 'Auto': > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) > > - DepexSectionObj = DepexSection.DepexSection() > > + DepexSectionObj = DepexSection() > > DepexSectionObj.Alignment = AlignValue > > - DepexSectionObj.DepexType = self.__Token > > + DepexSectionObj.DepexType = self._Token > > > > - if not self.__IsToken( "="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__IsToken( "{"): > > + if not self._IsToken("{"): > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > - if not self.__SkipToToken( "}"): > > + if not self._SkipToToken("}"): > > raise Warning("expected Depex expression ending '}'", self.FileName, self.CurrentLineNumber) > > > > - DepexSectionObj.Expression = self.__SkippedChars.rstrip('}') > > + DepexSectionObj.Expression = self._SkippedChars.rstrip('}') > > Obj.SectionList.append(DepexSectionObj) > > > > else: > > - if not self.__GetNextWord(): > > + if not self._GetNextWord(): > > raise Warning("expected section type", 
self.FileName, self.CurrentLineNumber) > > > > # Encapsulation section appear, UndoToken and return > > - if self.__Token == "COMPRESS" or self.__Token == "GUIDED": > > + if self._Token == "COMPRESS" or self._Token == "GUIDED": > > self.SetFileBufferPos(OldPos) > > return False > > > > - if self.__Token not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > + if self._Token not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, "SUBTYPE_GUID", > BINARY_FILE_TYPE_SMM_DEPEX): > > - raise Warning("Unknown section type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > - if AlignValue == 'Auto'and (not self.__Token == BINARY_FILE_TYPE_PE32) and (not self.__Token == > BINARY_FILE_TYPE_TE): > > + raise Warning("Unknown section type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > + if AlignValue == 'Auto'and (not self._Token == BINARY_FILE_TYPE_PE32) and (not self._Token == > BINARY_FILE_TYPE_TE): > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) > > > > # DataSection > > - DataSectionObj = DataSection.DataSection() > > + DataSectionObj = DataSection() > > DataSectionObj.Alignment = AlignValue > > - DataSectionObj.SecType = self.__Token > > + DataSectionObj.SecType = self._Token > > > > - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): > > - if self.__FileCouldHaveRelocFlag(Obj.FvFileType) and self.__SectionCouldHaveRelocFlag(DataSectionObj.SecType): > > - if self.__Token == 'RELOCS_STRIPPED': > > + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): > > + if self._FileCouldHaveRelocFlag(Obj.FvFileType) and self._SectionCouldHaveRelocFlag(DataSectionObj.SecType): > > + if self._Token == 'RELOCS_STRIPPED': > > DataSectionObj.KeepReloc = False > > else: > > DataSectionObj.KeepReloc = True > > else: > > raise Warning("File type %s, section type %s, could not have reloc strip flag%d" % (Obj.FvFileType, > DataSectionObj.SecType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > > > - if self.__IsToken("="): > > - if not self.__GetNextToken(): > > + if self._IsToken(TAB_EQUAL_SPLIT): > > + if not self._GetNextToken(): > > raise Warning("expected section file path", self.FileName, self.CurrentLineNumber) > > - DataSectionObj.SectFileName = self.__Token > > - self.__VerifyFile(DataSectionObj.SectFileName) > > + DataSectionObj.SectFileName = self._Token > > + self._VerifyFile(DataSectionObj.SectFileName) > > else: > > - if not self.__GetCglSection(DataSectionObj): > > + if not self._GetCglSection(DataSectionObj): > > return False > > > > Obj.SectionList.append(DataSectionObj) > > > > return True > > > > - ## __VerifyFile > > + ## _VerifyFile > > # > > # Check if file exists or not: > > # If current phase if GenFds, the file must exist; > > # If current phase is AutoGen and the file is not in $(OUTPUT_DIRECTORY), the file must exist > > # @param FileName: File path to be verified. 
> > # > > - def __VerifyFile(self, FileName): > > - if FileName.replace('$(WORKSPACE)', '').find('$') != -1: > > + def _VerifyFile(self, FileName): > > + if FileName.replace(TAB_WORKSPACE, '').find('$') != -1: > > return > > - if not GlobalData.gAutoGenPhase or not self.__GetMacroValue("OUTPUT_DIRECTORY") in FileName: > > + if not GlobalData.gAutoGenPhase or not self._GetMacroValue(TAB_DSC_DEFINES_OUTPUT_DIRECTORY) in FileName: > > ErrorCode, ErrorInfo = PathClass(NormPath(FileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() > > if ErrorCode != 0: > > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > > > - ## __GetCglSection() method > > + ## _GetCglSection() method > > # > > # Get compressed or GUIDed section for Obj > > # > > @@ -3049,28 +2977,28 @@ class FdfParser: > > # @retval True Successfully find section statement > > # @retval False Not able to find section statement > > # > > - def __GetCglSection(self, Obj, AlignValue = None): > > + def _GetCglSection(self, Obj, AlignValue = None): > > > > - if self.__IsKeyword( "COMPRESS"): > > + if self._IsKeyword("COMPRESS"): > > type = "PI_STD" > > - if self.__IsKeyword("PI_STD") or self.__IsKeyword("PI_NONE"): > > - type = self.__Token > > + if self._IsKeyword("PI_STD") or self._IsKeyword("PI_NONE"): > > + type = self._Token > > > > - if not self.__IsToken("{"): > > + if not self._IsToken("{"): > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > - CompressSectionObj = CompressSection.CompressSection() > > + CompressSectionObj = CompressSection() > > CompressSectionObj.Alignment = AlignValue > > CompressSectionObj.CompType = type > > # Recursive sections... > > while True: > > - IsLeafSection = self.__GetLeafSection(CompressSectionObj) > > - IsEncapSection = self.__GetEncapsulationSec(CompressSectionObj) > > + IsLeafSection = self._GetLeafSection(CompressSectionObj) > > + IsEncapSection = self._GetEncapsulationSec(CompressSectionObj) > > if not IsLeafSection and not IsEncapSection: > > break > > > > > > - if not self.__IsToken( "}"): > > + if not self._IsToken("}"): > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > Obj.SectionList.append(CompressSectionObj) > > > > @@ -3079,15 +3007,15 @@ class FdfParser: > > > > return True > > > > - elif self.__IsKeyword( "GUIDED"): > > + elif self._IsKeyword("GUIDED"): > > GuidValue = None > > - if self.__GetNextGuid(): > > - GuidValue = self.__Token > > + if self._GetNextGuid(): > > + GuidValue = self._Token > > > > - AttribDict = self.__GetGuidAttrib() > > - if not self.__IsToken("{"): > > + AttribDict = self._GetGuidAttrib() > > + if not self._IsToken("{"): > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > - GuidSectionObj = GuidSection.GuidSection() > > + GuidSectionObj = GuidSection() > > GuidSectionObj.Alignment = AlignValue > > GuidSectionObj.NameGuid = GuidValue > > GuidSectionObj.SectionType = "GUIDED" > > @@ -3096,12 +3024,12 @@ class FdfParser: > > GuidSectionObj.ExtraHeaderSize = AttribDict["EXTRA_HEADER_SIZE"] > > # Recursive sections... 
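
The __Name to _Name renames running through this hunk follow PEP 8: one leading underscore already marks a method as internal, while two trigger Python's name mangling, which makes the methods awkward to reach from subclasses or tests. A tiny standalone illustration of the difference; the class below is hypothetical and not from the patch:

    class Example:
        def __Mangled(self):       # stored on the class as _Example__Mangled
            return "mangled"
        def _Internal(self):       # plain "internal" convention, no mangling
            return "internal"

    Obj = Example()
    print(Obj._Internal())             # accessible as written
    print(Obj._Example__Mangled())     # only reachable via the mangled name
    # Obj.__Mangled() would raise AttributeError outside the class body
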
> > while True: > > - IsLeafSection = self.__GetLeafSection(GuidSectionObj) > > - IsEncapSection = self.__GetEncapsulationSec(GuidSectionObj) > > + IsLeafSection = self._GetLeafSection(GuidSectionObj) > > + IsEncapSection = self._GetEncapsulationSec(GuidSectionObj) > > if not IsLeafSection and not IsEncapSection: > > break > > > > - if not self.__IsToken( "}"): > > + if not self._IsToken("}"): > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > Obj.SectionList.append(GuidSectionObj) > > > > @@ -3109,44 +3037,43 @@ class FdfParser: > > > > return False > > > > - ## __GetGuidAttri() method > > + ## _GetGuidAttri() method > > # > > # Get attributes for GUID section > > # > > # @param self The object pointer > > # @retval AttribDict Dictionary of key-value pair of section attributes > > # > > - def __GetGuidAttrib(self): > > - > > + def _GetGuidAttrib(self): > > AttribDict = {} > > AttribDict["PROCESSING_REQUIRED"] = "NONE" > > AttribDict["AUTH_STATUS_VALID"] = "NONE" > > AttribDict["EXTRA_HEADER_SIZE"] = -1 > > - while self.__IsKeyword("PROCESSING_REQUIRED") or self.__IsKeyword("AUTH_STATUS_VALID") \ > > - or self.__IsKeyword("EXTRA_HEADER_SIZE"): > > - AttribKey = self.__Token > > + while self._IsKeyword("PROCESSING_REQUIRED") or self._IsKeyword("AUTH_STATUS_VALID") \ > > + or self._IsKeyword("EXTRA_HEADER_SIZE"): > > + AttribKey = self._Token > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected TRUE(1)/FALSE(0)/Number", self.FileName, self.CurrentLineNumber) > > elif AttribKey == "EXTRA_HEADER_SIZE": > > Base = 10 > > - if self.__Token[0:2].upper() == "0X": > > + if self._Token[0:2].upper() == "0X": > > Base = 16 > > try: > > - AttribDict[AttribKey] = int(self.__Token, Base) > > + AttribDict[AttribKey] = int(self._Token, Base) > > continue > > except ValueError: > > raise Warning("expected Number", self.FileName, self.CurrentLineNumber) > > - elif self.__Token.upper() not in ("TRUE", "FALSE", "1", "0"): > > + elif self._Token.upper() not in ("TRUE", "FALSE", "1", "0"): > > raise Warning("expected TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) > > - AttribDict[AttribKey] = self.__Token > > + AttribDict[AttribKey] = self._Token > > > > return AttribDict > > > > - ## __GetEncapsulationSec() method > > + ## _GetEncapsulationSec() method > > # > > # Get encapsulation section for FILE > > # > > @@ -3155,91 +3082,89 @@ class FdfParser: > > # @retval True Successfully find section statement > > # @retval False Not able to find section statement > > # > > - def __GetEncapsulationSec(self, FfsFileObj): > > - > > + def _GetEncapsulationSec(self, FfsFileObj): > > OldPos = self.GetFileBufferPos() > > - if not self.__IsKeyword( "SECTION"): > > + if not self._IsKeyword("SECTION"): > > if len(FfsFileObj.SectionList) == 0: > > raise Warning("expected SECTION", self.FileName, self.CurrentLineNumber) > > else: > > return False > > > > AlignValue = None > > - if self.__GetAlignment(): > > - if self.__Token not in ("8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > - AlignValue = self.__Token > > + if self._GetAlignment(): > > + if self._Token not in ALIGNMENT_NOAUTO: > > + raise 
Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > + AlignValue = self._Token > > > > - if not self.__GetCglSection(FfsFileObj, AlignValue): > > + if not self._GetCglSection(FfsFileObj, AlignValue): > > self.SetFileBufferPos(OldPos) > > return False > > else: > > return True > > > > - def __GetFmp(self): > > - if not self.__GetNextToken(): > > + def _GetFmp(self): > > + if not self._GetNextToken(): > > return False > > - S = self.__Token.upper() > > - if S.startswith("[") and not S.startswith("[FMPPAYLOAD."): > > + S = self._Token.upper() > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[FMPPAYLOAD."): > > self.SectionParser(S) > > - self.__UndoToken() > > + self._UndoToken() > > return False > > > > - self.__UndoToken() > > - self.__SkipToToken("[FMPPAYLOAD.", True) > > - FmpUiName = self.__GetUiName().upper() > > + self._UndoToken() > > + self._SkipToToken("[FMPPAYLOAD.", True) > > + FmpUiName = self._GetUiName().upper() > > if FmpUiName in self.Profile.FmpPayloadDict: > > raise Warning("Duplicated FMP UI name found: %s" % FmpUiName, self.FileName, self.CurrentLineNumber) > > > > - FmpData = CapsuleData.CapsulePayload() > > + FmpData = CapsulePayload() > > FmpData.UiName = FmpUiName > > > > - if not self.__IsToken( "]"): > > + if not self._IsToken(TAB_SECTION_END): > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("The FMP payload section is empty!", self.FileName, self.CurrentLineNumber) > > FmpKeyList = ['IMAGE_HEADER_INIT_VERSION', 'IMAGE_TYPE_ID', 'IMAGE_INDEX', 'HARDWARE_INSTANCE', > 'CERTIFICATE_GUID', 'MONOTONIC_COUNT'] > > - while self.__Token in FmpKeyList: > > - Name = self.__Token > > + while self._Token in FmpKeyList: > > + Name = self._Token > > FmpKeyList.remove(Name) > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > if Name == 'IMAGE_TYPE_ID': > > - if not self.__GetNextGuid(): > > + if not self._GetNextGuid(): > > raise Warning("expected GUID value for IMAGE_TYPE_ID.", self.FileName, self.CurrentLineNumber) > > - FmpData.ImageTypeId = self.__Token > > + FmpData.ImageTypeId = self._Token > > elif Name == 'CERTIFICATE_GUID': > > - if not self.__GetNextGuid(): > > + if not self._GetNextGuid(): > > raise Warning("expected GUID value for CERTIFICATE_GUID.", self.FileName, self.CurrentLineNumber) > > - FmpData.Certificate_Guid = self.__Token > > - if uuid.UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_RSA2048_SHA256_GUID and > uuid.UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_PKCS7_GUID: > > + FmpData.Certificate_Guid = self._Token > > + if UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_RSA2048_SHA256_GUID and > UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_PKCS7_GUID: > > raise Warning("Only support EFI_CERT_TYPE_RSA2048_SHA256_GUID or EFI_CERT_TYPE_PKCS7_GUID for > CERTIFICATE_GUID.", self.FileName, self.CurrentLineNumber) > > else: > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected value of %s" % Name, self.FileName, self.CurrentLineNumber) > > - Value = self.__Token > > + Value = self._Token > > if Name == 'IMAGE_HEADER_INIT_VERSION': > > - if FdfParser.__Verify(Name, Value, 'UINT8'): > > + if FdfParser._Verify(Name, Value, 'UINT8'): > > FmpData.Version = Value > > elif Name == 'IMAGE_INDEX': > > - if FdfParser.__Verify(Name, Value, 'UINT8'): > > 
+ if FdfParser._Verify(Name, Value, 'UINT8'): > > FmpData.ImageIndex = Value > > elif Name == 'HARDWARE_INSTANCE': > > - if FdfParser.__Verify(Name, Value, 'UINT8'): > > + if FdfParser._Verify(Name, Value, 'UINT8'): > > FmpData.HardwareInstance = Value > > elif Name == 'MONOTONIC_COUNT': > > - if FdfParser.__Verify(Name, Value, 'UINT64'): > > + if FdfParser._Verify(Name, Value, 'UINT64'): > > FmpData.MonotonicCount = Value > > if FmpData.MonotonicCount.upper().startswith('0X'): > > FmpData.MonotonicCount = (long)(FmpData.MonotonicCount, 16) > > else: > > FmpData.MonotonicCount = (long)(FmpData.MonotonicCount) > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > break > > else: > > - self.__UndoToken() > > + self._UndoToken() > > > > if (FmpData.MonotonicCount and not FmpData.Certificate_Guid) or (not FmpData.MonotonicCount and > FmpData.Certificate_Guid): > > EdkLogger.error("FdfParser", FORMAT_INVALID, "CERTIFICATE_GUID and MONOTONIC_COUNT must be work as a > pair.") > > @@ -3248,7 +3173,7 @@ class FdfParser: > > if FmpKeyList and 'IMAGE_TYPE_ID' in FmpKeyList: > > raise Warning("Missing keywords IMAGE_TYPE_ID in FMP payload section.", self.FileName, self.CurrentLineNumber) > > # get the Image file and Vendor code file > > - self.__GetFMPCapsuleData(FmpData) > > + self._GetFMPCapsuleData(FmpData) > > if not FmpData.ImageFile: > > raise Warning("Missing image file in FMP payload section.", self.FileName, self.CurrentLineNumber) > > # check whether more than one Vendor code file > > @@ -3257,7 +3182,7 @@ class FdfParser: > > self.Profile.FmpPayloadDict[FmpUiName] = FmpData > > return True > > > > - ## __GetCapsule() method > > + ## _GetCapsule() method > > # > > # Get capsule section contents and store its data into capsule list of self.Profile > > # > > @@ -3265,90 +3190,89 @@ class FdfParser: > > # @retval True Successfully find a capsule > > # @retval False Not able to find a capsule > > # > > - def __GetCapsule(self): > > - > > - if not self.__GetNextToken(): > > + def _GetCapsule(self): > > + if not self._GetNextToken(): > > return False > > > > - S = self.__Token.upper() > > - if S.startswith("[") and not S.startswith("[CAPSULE."): > > + S = self._Token.upper() > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[CAPSULE."): > > self.SectionParser(S) > > - self.__UndoToken() > > + self._UndoToken() > > return False > > > > - self.__UndoToken() > > - if not self.__IsToken("[CAPSULE.", True): > > + self._UndoToken() > > + if not self._IsToken("[CAPSULE.", True): > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > FileLineTuple[1], self.CurrentOffsetWithinLine) > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > FileLineTuple[1], self.CurrentOffsetWithinLine) > > raise Warning("expected [Capsule.]", self.FileName, self.CurrentLineNumber) > > > > - CapsuleObj = Capsule.Capsule() > > + CapsuleObj = Capsule() > > > > - CapsuleName = self.__GetUiName() > > + CapsuleName = self._GetUiName() > > if not CapsuleName: > > raise Warning("expected capsule name", self.FileName, self.CurrentLineNumber) > > > > CapsuleObj.UiCapsuleName = CapsuleName.upper() > > > > - if not self.__IsToken( "]"): > > + if not self._IsToken(TAB_SECTION_END): > > raise Warning("expected ']'", 
self.FileName, self.CurrentLineNumber) > > > > - if self.__IsKeyword("CREATE_FILE"): > > - if not self.__IsToken( "="): > > + if self._IsKeyword("CREATE_FILE"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected file name", self.FileName, self.CurrentLineNumber) > > > > - CapsuleObj.CreateFile = self.__Token > > + CapsuleObj.CreateFile = self._Token > > > > - self.__GetCapsuleStatements(CapsuleObj) > > + self._GetCapsuleStatements(CapsuleObj) > > self.Profile.CapsuleDict[CapsuleObj.UiCapsuleName] = CapsuleObj > > return True > > > > - ## __GetCapsuleStatements() method > > + ## _GetCapsuleStatements() method > > # > > # Get statements for capsule > > # > > # @param self The object pointer > > # @param Obj for whom statements are got > > # > > - def __GetCapsuleStatements(self, Obj): > > - self.__GetCapsuleTokens(Obj) > > - self.__GetDefineStatements(Obj) > > - self.__GetSetStatements(Obj) > > - self.__GetCapsuleData(Obj) > > + def _GetCapsuleStatements(self, Obj): > > + self._GetCapsuleTokens(Obj) > > + self._GetDefineStatements(Obj) > > + self._GetSetStatements(Obj) > > + self._GetCapsuleData(Obj) > > > > - ## __GetCapsuleTokens() method > > + ## _GetCapsuleTokens() method > > # > > # Get token statements for capsule > > # > > # @param self The object pointer > > # @param Obj for whom token statements are got > > # > > - def __GetCapsuleTokens(self, Obj): > > - if not self.__GetNextToken(): > > + def _GetCapsuleTokens(self, Obj): > > + if not self._GetNextToken(): > > return False > > - while self.__Token in ("CAPSULE_GUID", "CAPSULE_HEADER_SIZE", "CAPSULE_FLAGS", "OEM_CAPSULE_FLAGS", > "CAPSULE_HEADER_INIT_VERSION"): > > - Name = self.__Token.strip() > > - if not self.__IsToken("="): > > + while self._Token in ("CAPSULE_GUID", "CAPSULE_HEADER_SIZE", "CAPSULE_FLAGS", "OEM_CAPSULE_FLAGS", > "CAPSULE_HEADER_INIT_VERSION"): > > + Name = self._Token.strip() > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected value", self.FileName, self.CurrentLineNumber) > > if Name == 'CAPSULE_FLAGS': > > - if not self.__Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): > > + if not self._Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): > > raise Warning("expected PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, > self.CurrentLineNumber) > > - Value = self.__Token.strip() > > - while self.__IsToken(","): > > - Value += ',' > > - if not self.__GetNextToken(): > > + Value = self._Token.strip() > > + while self._IsToken(TAB_COMMA_SPLIT): > > + Value += TAB_COMMA_SPLIT > > + if not self._GetNextToken(): > > raise Warning("expected value", self.FileName, self.CurrentLineNumber) > > - if not self.__Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): > > + if not self._Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): > > raise Warning("expected PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, > self.CurrentLineNumber) > > - Value += self.__Token.strip() > > + Value += self._Token.strip() > > elif Name == 'OEM_CAPSULE_FLAGS': > > - Value = self.__Token.strip() > > + Value = self._Token.strip() > > if not Value.upper().startswith('0X'): > > raise 
Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) > > try: > > @@ -3357,51 +3281,49 @@ class FdfParser: > > raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) > > if not 0x0000 <= Value <= 0xFFFF: > > raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) > > - Value = self.__Token.strip() > > + Value = self._Token.strip() > > else: > > - Value = self.__Token.strip() > > + Value = self._Token.strip() > > Obj.TokensDict[Name] = Value > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > return False > > - self.__UndoToken() > > + self._UndoToken() > > > > - ## __GetCapsuleData() method > > + ## _GetCapsuleData() method > > # > > # Get capsule data for capsule > > # > > # @param self The object pointer > > # @param Obj for whom capsule data are got > > # > > - def __GetCapsuleData(self, Obj): > > - > > + def _GetCapsuleData(self, Obj): > > while True: > > - IsInf = self.__GetInfStatement(Obj, True) > > - IsFile = self.__GetFileStatement(Obj, True) > > - IsFv = self.__GetFvStatement(Obj) > > - IsFd = self.__GetFdStatement(Obj) > > - IsAnyFile = self.__GetAnyFileStatement(Obj) > > - IsAfile = self.__GetAfileStatement(Obj) > > - IsFmp = self.__GetFmpStatement(Obj) > > + IsInf = self._GetInfStatement(Obj, True) > > + IsFile = self._GetFileStatement(Obj, True) > > + IsFv = self._GetFvStatement(Obj) > > + IsFd = self._GetFdStatement(Obj) > > + IsAnyFile = self._GetAnyFileStatement(Obj) > > + IsAfile = self._GetAfileStatement(Obj) > > + IsFmp = self._GetFmpStatement(Obj) > > if not (IsInf or IsFile or IsFv or IsFd or IsAnyFile or IsAfile or IsFmp): > > break > > > > - ## __GetFMPCapsuleData() method > > + ## _GetFMPCapsuleData() method > > # > > # Get capsule data for FMP capsule > > # > > # @param self The object pointer > > # @param Obj for whom capsule data are got > > # > > - def __GetFMPCapsuleData(self, Obj): > > - > > + def _GetFMPCapsuleData(self, Obj): > > while True: > > - IsFv = self.__GetFvStatement(Obj, True) > > - IsFd = self.__GetFdStatement(Obj, True) > > - IsAnyFile = self.__GetAnyFileStatement(Obj, True) > > + IsFv = self._GetFvStatement(Obj, True) > > + IsFd = self._GetFdStatement(Obj, True) > > + IsAnyFile = self._GetAnyFileStatement(Obj, True) > > if not (IsFv or IsFd or IsAnyFile): > > break > > > > - ## __GetFvStatement() method > > + ## _GetFvStatement() method > > # > > # Get FV for capsule > > # > > @@ -3410,32 +3332,31 @@ class FdfParser: > > # @retval True Successfully find a FV statement > > # @retval False Not able to find a FV statement > > # > > - def __GetFvStatement(self, CapsuleObj, FMPCapsule = False): > > - > > - if not self.__IsKeyword(BINARY_FILE_TYPE_FV): > > + def _GetFvStatement(self, CapsuleObj, FMPCapsule = False): > > + if not self._IsKeyword(BINARY_FILE_TYPE_FV): > > return False > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) > > > > - if self.__Token.upper() not in self.Profile.FvDict: > > + if self._Token.upper() not in self.Profile.FvDict: > > raise Warning("FV name does not exist", self.FileName, self.CurrentLineNumber) > > > > - CapsuleFv = CapsuleData.CapsuleFv() > > - CapsuleFv.FvName = self.__Token > > + myCapsuleFv = CapsuleFv() 
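
The local here is renamed to myCapsuleFv (and likewise for the other capsule statements below) so it no longer shadows the CapsuleFv class, which this series imports directly instead of reaching through the CapsuleData module. A small sketch of the pattern with a stand-in class; only the CapsuleFv name itself comes from the patch:

    class CapsuleFv:                   # stand-in for the imported class
        def __init__(self):
            self.FvName = None

    def BuildFvEntry(FvName):
        # Binding the instance to the class's own name would hide the class
        # for the rest of the scope; a distinct local keeps it available,
        # e.g. for isinstance checks or a second instantiation.
        myCapsuleFv = CapsuleFv()
        myCapsuleFv.FvName = FvName
        assert isinstance(myCapsuleFv, CapsuleFv)
        return myCapsuleFv

    print(BuildFvEntry("FVMAIN").FvName)
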
> > + myCapsuleFv.FvName = self._Token > > if FMPCapsule: > > if not CapsuleObj.ImageFile: > > - CapsuleObj.ImageFile.append(CapsuleFv) > > + CapsuleObj.ImageFile.append(myCapsuleFv) > > else: > > - CapsuleObj.VendorCodeFile.append(CapsuleFv) > > + CapsuleObj.VendorCodeFile.append(myCapsuleFv) > > else: > > - CapsuleObj.CapsuleDataList.append(CapsuleFv) > > + CapsuleObj.CapsuleDataList.append(myCapsuleFv) > > return True > > > > - ## __GetFdStatement() method > > + ## _GetFdStatement() method > > # > > # Get FD for capsule > > # > > @@ -3444,74 +3365,73 @@ class FdfParser: > > # @retval True Successfully find a FD statement > > # @retval False Not able to find a FD statement > > # > > - def __GetFdStatement(self, CapsuleObj, FMPCapsule = False): > > - > > - if not self.__IsKeyword("FD"): > > + def _GetFdStatement(self, CapsuleObj, FMPCapsule = False): > > + if not self._IsKeyword("FD"): > > return False > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected FD name", self.FileName, self.CurrentLineNumber) > > > > - if self.__Token.upper() not in self.Profile.FdDict: > > + if self._Token.upper() not in self.Profile.FdDict: > > raise Warning("FD name does not exist", self.FileName, self.CurrentLineNumber) > > > > - CapsuleFd = CapsuleData.CapsuleFd() > > - CapsuleFd.FdName = self.__Token > > + myCapsuleFd = CapsuleFd() > > + myCapsuleFd.FdName = self._Token > > if FMPCapsule: > > if not CapsuleObj.ImageFile: > > - CapsuleObj.ImageFile.append(CapsuleFd) > > + CapsuleObj.ImageFile.append(myCapsuleFd) > > else: > > - CapsuleObj.VendorCodeFile.append(CapsuleFd) > > + CapsuleObj.VendorCodeFile.append(myCapsuleFd) > > else: > > - CapsuleObj.CapsuleDataList.append(CapsuleFd) > > + CapsuleObj.CapsuleDataList.append(myCapsuleFd) > > return True > > > > - def __GetFmpStatement(self, CapsuleObj): > > - if not self.__IsKeyword("FMP_PAYLOAD"): > > - if not self.__IsKeyword("FMP"): > > + def _GetFmpStatement(self, CapsuleObj): > > + if not self._IsKeyword("FMP_PAYLOAD"): > > + if not self._IsKeyword("FMP"): > > return False > > > > - if not self.__IsKeyword("PAYLOAD"): > > - self.__UndoToken() > > + if not self._IsKeyword("PAYLOAD"): > > + self._UndoToken() > > return False > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected payload name after FMP_PAYLOAD =", self.FileName, self.CurrentLineNumber) > > - Payload = self.__Token.upper() > > + Payload = self._Token.upper() > > if Payload not in self.Profile.FmpPayloadDict: > > - raise Warning("This FMP Payload does not exist: %s" % self.__Token, self.FileName, self.CurrentLineNumber) > > + raise Warning("This FMP Payload does not exist: %s" % self._Token, self.FileName, self.CurrentLineNumber) > > CapsuleObj.FmpPayloadList.append(self.Profile.FmpPayloadDict[Payload]) > > return True > > > > - def __ParseRawFileStatement(self): > > - if not self.__IsKeyword("FILE"): > > + def _ParseRawFileStatement(self): > > + if not self._IsKeyword("FILE"): > > return None > > > > - if not self.__IsKeyword("DATA"): > > - self.__UndoToken() > > + if not self._IsKeyword("DATA"): > > + self._UndoToken() > > return None > > > > - if not self.__IsToken("="): > > + if not 
self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected File name", self.FileName, self.CurrentLineNumber) > > > > - AnyFileName = self.__Token > > - self.__VerifyFile(AnyFileName) > > + AnyFileName = self._Token > > + self._VerifyFile(AnyFileName) > > > > if not os.path.isabs(AnyFileName): > > AnyFileName = mws.join(GenFdsGlobalVariable.WorkSpaceDir, AnyFileName) > > > > return AnyFileName > > > > - ## __GetAnyFileStatement() method > > + ## _GetAnyFileStatement() method > > # > > # Get AnyFile for capsule > > # > > @@ -3520,23 +3440,23 @@ class FdfParser: > > # @retval True Successfully find a Anyfile statement > > # @retval False Not able to find a AnyFile statement > > # > > - def __GetAnyFileStatement(self, CapsuleObj, FMPCapsule = False): > > - AnyFileName = self.__ParseRawFileStatement() > > + def _GetAnyFileStatement(self, CapsuleObj, FMPCapsule = False): > > + AnyFileName = self._ParseRawFileStatement() > > if not AnyFileName: > > return False > > > > - CapsuleAnyFile = CapsuleData.CapsuleAnyFile() > > - CapsuleAnyFile.FileName = AnyFileName > > + myCapsuleAnyFile = CapsuleAnyFile() > > + myCapsuleAnyFile.FileName = AnyFileName > > if FMPCapsule: > > if not CapsuleObj.ImageFile: > > - CapsuleObj.ImageFile.append(CapsuleAnyFile) > > + CapsuleObj.ImageFile.append(myCapsuleAnyFile) > > else: > > - CapsuleObj.VendorCodeFile.append(CapsuleAnyFile) > > + CapsuleObj.VendorCodeFile.append(myCapsuleAnyFile) > > else: > > - CapsuleObj.CapsuleDataList.append(CapsuleAnyFile) > > + CapsuleObj.CapsuleDataList.append(myCapsuleAnyFile) > > return True > > > > - ## __GetAfileStatement() method > > + ## _GetAfileStatement() method > > # > > # Get Afile for capsule > > # > > @@ -3545,18 +3465,17 @@ class FdfParser: > > # @retval True Successfully find a Afile statement > > # @retval False Not able to find a Afile statement > > # > > - def __GetAfileStatement(self, CapsuleObj): > > - > > - if not self.__IsKeyword("APPEND"): > > + def _GetAfileStatement(self, CapsuleObj): > > + if not self._IsKeyword("APPEND"): > > return False > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected Afile name", self.FileName, self.CurrentLineNumber) > > > > - AfileName = self.__Token > > + AfileName = self._Token > > AfileBaseName = os.path.basename(AfileName) > > > > if os.path.splitext(AfileBaseName)[1] not in [".bin", ".BIN", ".Bin", ".dat", ".DAT", ".Dat", ".data", ".DATA", ".Data"]: > > @@ -3565,19 +3484,19 @@ class FdfParser: > > > > if not os.path.isabs(AfileName): > > AfileName = GenFdsGlobalVariable.ReplaceWorkspaceMacro(AfileName) > > - self.__VerifyFile(AfileName) > > + self._VerifyFile(AfileName) > > else: > > if not os.path.exists(AfileName): > > raise Warning('%s does not exist' % AfileName, self.FileName, self.CurrentLineNumber) > > else: > > pass > > > > - CapsuleAfile = CapsuleData.CapsuleAfile() > > - CapsuleAfile.FileName = AfileName > > - CapsuleObj.CapsuleDataList.append(CapsuleAfile) > > + myCapsuleAfile = CapsuleAfile() > > + myCapsuleAfile.FileName = AfileName > > + CapsuleObj.CapsuleDataList.append(myCapsuleAfile) > > return True > > > > - ## __GetRule() method > > + ## _GetRule() method > > # > > # Get Rule section contents and store its data 
into rule list of self.Profile > > # > > @@ -3585,143 +3504,140 @@ class FdfParser: > > # @retval True Successfully find a Rule > > # @retval False Not able to find a Rule > > # > > - def __GetRule(self): > > - > > - if not self.__GetNextToken(): > > + def _GetRule(self): > > + if not self._GetNextToken(): > > return False > > > > - S = self.__Token.upper() > > - if S.startswith("[") and not S.startswith("[RULE."): > > + S = self._Token.upper() > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[RULE."): > > self.SectionParser(S) > > - self.__UndoToken() > > + self._UndoToken() > > return False > > - self.__UndoToken() > > - if not self.__IsToken("[Rule.", True): > > + self._UndoToken() > > + if not self._IsToken("[Rule.", True): > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > FileLineTuple[1], self.CurrentOffsetWithinLine) > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > FileLineTuple[1], self.CurrentOffsetWithinLine) > > raise Warning("expected [Rule.]", self.FileName, self.CurrentLineNumber) > > > > - if not self.__SkipToToken("."): > > + if not self._SkipToToken(TAB_SPLIT): > > raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) > > > > - Arch = self.__SkippedChars.rstrip(".") > > + Arch = self._SkippedChars.rstrip(TAB_SPLIT) > > if Arch.upper() not in ARCH_SET_FULL: > > raise Warning("Unknown Arch '%s'" % Arch, self.FileName, self.CurrentLineNumber) > > > > - ModuleType = self.__GetModuleType() > > + ModuleType = self._GetModuleType() > > > > TemplateName = "" > > - if self.__IsToken("."): > > - if not self.__GetNextWord(): > > + if self._IsToken(TAB_SPLIT): > > + if not self._GetNextWord(): > > raise Warning("expected template name", self.FileName, self.CurrentLineNumber) > > - TemplateName = self.__Token > > + TemplateName = self._Token > > > > - if not self.__IsToken( "]"): > > + if not self._IsToken(TAB_SECTION_END): > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > - RuleObj = self.__GetRuleFileStatements() > > + RuleObj = self._GetRuleFileStatements() > > RuleObj.Arch = Arch.upper() > > RuleObj.ModuleType = ModuleType > > RuleObj.TemplateName = TemplateName > > - if TemplateName == '' : > > + if TemplateName == '': > > self.Profile.RuleDict['RULE' + \ > > - '.' + \ > > + TAB_SPLIT + \ > > Arch.upper() + \ > > - '.' + \ > > + TAB_SPLIT + \ > > ModuleType.upper() ] = RuleObj > > - else : > > + else: > > self.Profile.RuleDict['RULE' + \ > > - '.' + \ > > + TAB_SPLIT + \ > > Arch.upper() + \ > > - '.' + \ > > + TAB_SPLIT + \ > > ModuleType.upper() + \ > > - '.' 
+ \ > > + TAB_SPLIT + \ > > TemplateName.upper() ] = RuleObj > > return True > > > > - ## __GetModuleType() method > > + ## _GetModuleType() method > > # > > # Return the module type > > # > > # @param self The object pointer > > # @retval string module type > > # > > - def __GetModuleType(self): > > - > > - if not self.__GetNextWord(): > > + def _GetModuleType(self): > > + if not self._GetNextWord(): > > raise Warning("expected Module type", self.FileName, self.CurrentLineNumber) > > - if self.__Token.upper() not in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, > SUP_MODULE_DXE_CORE, \ > > + if self._Token.upper() not in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, > SUP_MODULE_DXE_CORE, \ > > SUP_MODULE_DXE_DRIVER, SUP_MODULE_DXE_SAL_DRIVER, \ > > SUP_MODULE_DXE_SMM_DRIVER, SUP_MODULE_DXE_RUNTIME_DRIVER, \ > > - SUP_MODULE_UEFI_DRIVER, SUP_MODULE_UEFI_APPLICATION, SUP_MODULE_USER_DEFINED, > "DEFAULT", SUP_MODULE_BASE, \ > > + SUP_MODULE_UEFI_DRIVER, SUP_MODULE_UEFI_APPLICATION, SUP_MODULE_USER_DEFINED, > TAB_DEFAULT, SUP_MODULE_BASE, \ > > EDK_COMPONENT_TYPE_SECURITY_CORE, > EDK_COMPONENT_TYPE_COMBINED_PEIM_DRIVER, EDK_COMPONENT_TYPE_PIC_PEIM, > EDK_COMPONENT_TYPE_RELOCATABLE_PEIM, \ > > "PE32_PEIM", EDK_COMPONENT_TYPE_BS_DRIVER, > EDK_COMPONENT_TYPE_RT_DRIVER, EDK_COMPONENT_TYPE_SAL_RT_DRIVER, EDK_COMPONENT_TYPE_APPLICATION, "ACPITABLE", > SUP_MODULE_SMM_CORE, SUP_MODULE_MM_STANDALONE, SUP_MODULE_MM_CORE_STANDALONE): > > - raise Warning("Unknown Module type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > - return self.__Token > > + raise Warning("Unknown Module type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > + return self._Token > > > > - ## __GetFileExtension() method > > + ## _GetFileExtension() method > > # > > # Return the file extension > > # > > # @param self The object pointer > > # @retval string file name extension > > # > > - def __GetFileExtension(self): > > - if not self.__IsToken("."): > > + def _GetFileExtension(self): > > + if not self._IsToken(TAB_SPLIT): > > raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) > > > > Ext = "" > > - if self.__GetNextToken(): > > - if FileExtensionPattern.match(self.__Token): > > - Ext = self.__Token > > - return '.' 
+ Ext > > + if self._GetNextToken(): > > + if FileExtensionPattern.match(self._Token): > > + Ext = self._Token > > + return TAB_SPLIT + Ext > > else: > > - raise Warning("Unknown file extension '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > + raise Warning("Unknown file extension '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > else: > > raise Warning("expected file extension", self.FileName, self.CurrentLineNumber) > > > > - ## __GetRuleFileStatement() method > > + ## _GetRuleFileStatement() method > > # > > # Get rule contents > > # > > # @param self The object pointer > > # @retval Rule Rule object > > # > > - def __GetRuleFileStatements(self): > > - > > - if not self.__IsKeyword("FILE"): > > + def _GetRuleFileStatements(self): > > + if not self._IsKeyword("FILE"): > > raise Warning("expected FILE", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextWord(): > > + if not self._GetNextWord(): > > raise Warning("expected FFS type", self.FileName, self.CurrentLineNumber) > > > > - Type = self.__Token.strip().upper() > > + Type = self._Token.strip().upper() > > if Type not in ("RAW", "FREEFORM", SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM,\ > > "PEI_DXE_COMBO", "DRIVER", SUP_MODULE_DXE_CORE, > EDK_COMPONENT_TYPE_APPLICATION, "FV_IMAGE", "SMM", SUP_MODULE_SMM_CORE, SUP_MODULE_MM_STANDALONE, > SUP_MODULE_MM_CORE_STANDALONE): > > - raise Warning("Unknown FV type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > + raise Warning("Unknown FV type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsKeyword("$(NAMED_GUID)"): > > - if not self.__GetNextWord(): > > + if not self._IsKeyword("$(NAMED_GUID)"): > > + if not self._GetNextWord(): > > raise Warning("expected $(NAMED_GUID)", self.FileName, self.CurrentLineNumber) > > - if self.__Token == 'PCD': > > - if not self.__IsToken( "("): > > + if self._Token == 'PCD': > > + if not self._IsToken("("): > > raise Warning("expected '('", self.FileName, self.CurrentLineNumber) > > - PcdPair = self.__GetNextPcdSettings() > > - if not self.__IsToken( ")"): > > + PcdPair = self._GetNextPcdSettings() > > + if not self._IsToken(")"): > > raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) > > - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' > > + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' > > > > - NameGuid = self.__Token > > + NameGuid = self._Token > > > > KeepReloc = None > > - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): > > - if self.__FileCouldHaveRelocFlag(Type): > > - if self.__Token == 'RELOCS_STRIPPED': > > + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): > > + if self._FileCouldHaveRelocFlag(Type): > > + if self._Token == 'RELOCS_STRIPPED': > > KeepReloc = False > > else: > > KeepReloc = True > > @@ -3729,111 +3645,109 @@ class FdfParser: > > raise Warning("File type %s could not have reloc strip flag%d" % (Type, self.CurrentLineNumber), self.FileName, > self.CurrentLineNumber) > > > > KeyStringList = [] > > - if self.__GetNextToken(): > > - if TokenFindPattern.match(self.__Token): > > - KeyStringList.append(self.__Token) > > - if self.__IsToken(","): > > - while self.__GetNextToken(): > > - if not TokenFindPattern.match(self.__Token): > > + if self._GetNextToken(): > > + if 
TokenFindPattern.match(self._Token): > > + KeyStringList.append(self._Token) > > + if self._IsToken(TAB_COMMA_SPLIT): > > + while self._GetNextToken(): > > + if not TokenFindPattern.match(self._Token): > > raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) > > - KeyStringList.append(self.__Token) > > + KeyStringList.append(self._Token) > > > > - if not self.__IsToken(","): > > + if not self._IsToken(TAB_COMMA_SPLIT): > > break > > > > else: > > - self.__UndoToken() > > + self._UndoToken() > > > > > > Fixed = False > > - if self.__IsKeyword("Fixed", True): > > + if self._IsKeyword("Fixed", True): > > Fixed = True > > > > CheckSum = False > > - if self.__IsKeyword("CheckSum", True): > > + if self._IsKeyword("CheckSum", True): > > CheckSum = True > > > > AlignValue = "" > > - if self.__GetAlignment(): > > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > + if self._GetAlignment(): > > + if self._Token not in ALIGNMENTS: > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > #For FFS, Auto is default option same to "" > > - if not self.__Token == "Auto": > > - AlignValue = self.__Token > > + if not self._Token == "Auto": > > + AlignValue = self._Token > > > > - if self.__IsToken("{"): > > + if self._IsToken("{"): > > # Complex file rule expected > > - Rule = RuleComplexFile.RuleComplexFile() > > - Rule.FvFileType = Type > > - Rule.NameGuid = NameGuid > > - Rule.Alignment = AlignValue > > - Rule.CheckSum = CheckSum > > - Rule.Fixed = Fixed > > - Rule.KeyStringList = KeyStringList > > + NewRule = RuleComplexFile() > > + NewRule.FvFileType = Type > > + NewRule.NameGuid = NameGuid > > + NewRule.Alignment = AlignValue > > + NewRule.CheckSum = CheckSum > > + NewRule.Fixed = Fixed > > + NewRule.KeyStringList = KeyStringList > > if KeepReloc is not None: > > - Rule.KeepReloc = KeepReloc > > + NewRule.KeepReloc = KeepReloc > > > > while True: > > - IsEncapsulate = self.__GetRuleEncapsulationSection(Rule) > > - IsLeaf = self.__GetEfiSection(Rule) > > + IsEncapsulate = self._GetRuleEncapsulationSection(NewRule) > > + IsLeaf = self._GetEfiSection(NewRule) > > if not IsEncapsulate and not IsLeaf: > > break > > > > - if not self.__IsToken("}"): > > + if not self._IsToken("}"): > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > - return Rule > > + return NewRule > > > > else: > > # Simple file rule expected > > - if not self.__GetNextWord(): > > + if not self._GetNextWord(): > > raise Warning("expected leaf section type", self.FileName, self.CurrentLineNumber) > > > > - SectionName = self.__Token > > + SectionName = self._Token > > > > if SectionName not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > BINARY_FILE_TYPE_UI, BINARY_FILE_TYPE_PEI_DEPEX, "VERSION", "SUBTYPE_GUID", > BINARY_FILE_TYPE_SMM_DEPEX): > > raise Warning("Unknown leaf section name '%s'" % SectionName, self.FileName, self.CurrentLineNumber) > > > > > > - if self.__IsKeyword("Fixed", True): > > + if self._IsKeyword("Fixed", True): > > Fixed = True > > > > - if self.__IsKeyword("CheckSum", True): > > + if self._IsKeyword("CheckSum", True): > > CheckSum = True > > > > SectAlignment = "" > > - if self.__GetAlignment(): > > - 
if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > - if self.__Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == > BINARY_FILE_TYPE_TE): > > + if self._GetAlignment(): > > + if self._Token not in ALIGNMENTS: > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > + if self._Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == > BINARY_FILE_TYPE_TE): > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, > self.CurrentLineNumber) > > - SectAlignment = self.__Token > > + SectAlignment = self._Token > > > > Ext = None > > - if self.__IsToken('|'): > > - Ext = self.__GetFileExtension() > > - elif not self.__GetNextToken(): > > + if self._IsToken(TAB_VALUE_SPLIT): > > + Ext = self._GetFileExtension() > > + elif not self._GetNextToken(): > > raise Warning("expected File name", self.FileName, self.CurrentLineNumber) > > > > - Rule = RuleSimpleFile.RuleSimpleFile() > > - Rule.SectionType = SectionName > > - Rule.FvFileType = Type > > - Rule.NameGuid = NameGuid > > - Rule.Alignment = AlignValue > > - Rule.SectAlignment = SectAlignment > > - Rule.CheckSum = CheckSum > > - Rule.Fixed = Fixed > > - Rule.KeyStringList = KeyStringList > > + NewRule = RuleSimpleFile() > > + NewRule.SectionType = SectionName > > + NewRule.FvFileType = Type > > + NewRule.NameGuid = NameGuid > > + NewRule.Alignment = AlignValue > > + NewRule.SectAlignment = SectAlignment > > + NewRule.CheckSum = CheckSum > > + NewRule.Fixed = Fixed > > + NewRule.KeyStringList = KeyStringList > > if KeepReloc is not None: > > - Rule.KeepReloc = KeepReloc > > - Rule.FileExtension = Ext > > - Rule.FileName = self.__Token > > - return Rule > > + NewRule.KeepReloc = KeepReloc > > + NewRule.FileExtension = Ext > > + NewRule.FileName = self._Token > > + return NewRule > > > > - ## __GetEfiSection() method > > + ## _GetEfiSection() method > > # > > # Get section list for Rule > > # > > @@ -3842,127 +3756,124 @@ class FdfParser: > > # @retval True Successfully find section statement > > # @retval False Not able to find section statement > > # > > - def __GetEfiSection(self, Obj): > > - > > + def _GetEfiSection(self, Obj): > > OldPos = self.GetFileBufferPos() > > - if not self.__GetNextWord(): > > + if not self._GetNextWord(): > > return False > > - SectionName = self.__Token > > + SectionName = self._Token > > > > if SectionName not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, > BINARY_FILE_TYPE_SMM_DEPEX): > > - self.__UndoToken() > > + self._UndoToken() > > return False > > > > if SectionName == "FV_IMAGE": > > - FvImageSectionObj = FvImageSection.FvImageSection() > > - if self.__IsKeyword("FV_IMAGE"): > > + FvImageSectionObj = FvImageSection() > > + if self._IsKeyword("FV_IMAGE"): > > pass > > - if self.__IsToken( "{"): > > - FvObj = Fv.FV() > > - self.__GetDefineStatements(FvObj) > > - self.__GetBlockStatement(FvObj) > > - self.__GetSetStatements(FvObj) > > - self.__GetFvAlignment(FvObj) > > - self.__GetFvAttributes(FvObj) > > - self.__GetAprioriSection(FvObj) > > - self.__GetAprioriSection(FvObj) > > + if 
self._IsToken("{"): > > + FvObj = FV() > > + self._GetDefineStatements(FvObj) > > + self._GetBlockStatement(FvObj) > > + self._GetSetStatements(FvObj) > > + self._GetFvAlignment(FvObj) > > + self._GetFvAttributes(FvObj) > > + self._GetAprioriSection(FvObj) > > + self._GetAprioriSection(FvObj) > > > > while True: > > - IsInf = self.__GetInfStatement(FvObj) > > - IsFile = self.__GetFileStatement(FvObj) > > + IsInf = self._GetInfStatement(FvObj) > > + IsFile = self._GetFileStatement(FvObj) > > if not IsInf and not IsFile: > > break > > > > - if not self.__IsToken( "}"): > > + if not self._IsToken("}"): > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > FvImageSectionObj.Fv = FvObj > > FvImageSectionObj.FvName = None > > > > else: > > - if not self.__IsKeyword(BINARY_FILE_TYPE_FV): > > - raise Warning("expected BINARY_FILE_TYPE_FV", self.FileName, self.CurrentLineNumber) > > - FvImageSectionObj.FvFileType = self.__Token > > + if not self._IsKeyword(BINARY_FILE_TYPE_FV): > > + raise Warning("expected 'FV'", self.FileName, self.CurrentLineNumber) > > + FvImageSectionObj.FvFileType = self._Token > > > > - if self.__GetAlignment(): > > - if self.__Token not in ("8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > - FvImageSectionObj.Alignment = self.__Token > > + if self._GetAlignment(): > > + if self._Token not in ALIGNMENT_NOAUTO: > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > + FvImageSectionObj.Alignment = self._Token > > > > - if self.__IsToken('|'): > > - FvImageSectionObj.FvFileExtension = self.__GetFileExtension() > > - elif self.__GetNextToken(): > > - if self.__Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, > BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > + if self._IsToken(TAB_VALUE_SPLIT): > > + FvImageSectionObj.FvFileExtension = self._GetFileExtension() > > + elif self._GetNextToken(): > > + if self._Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, > BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, > BINARY_FILE_TYPE_SMM_DEPEX): > > - FvImageSectionObj.FvFileName = self.__Token > > + FvImageSectionObj.FvFileName = self._Token > > else: > > - self.__UndoToken() > > + self._UndoToken() > > else: > > raise Warning("expected FV file name", self.FileName, self.CurrentLineNumber) > > > > Obj.SectionList.append(FvImageSectionObj) > > return True > > > > - EfiSectionObj = EfiSection.EfiSection() > > + EfiSectionObj = EfiSection() > > EfiSectionObj.SectionType = SectionName > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected file type", self.FileName, self.CurrentLineNumber) > > > > - if self.__Token == "STRING": > > - if not self.__RuleSectionCouldHaveString(EfiSectionObj.SectionType): > > + if self._Token == "STRING": > > + if not self._RuleSectionCouldHaveString(EfiSectionObj.SectionType): > > raise Warning("%s section could NOT have string data%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), > self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken('='): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > 
> > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected Quoted String", self.FileName, self.CurrentLineNumber) > > > > - if self.__GetStringData(): > > - EfiSectionObj.StringData = self.__Token > > + if self._GetStringData(): > > + EfiSectionObj.StringData = self._Token > > > > - if self.__IsKeyword("BUILD_NUM"): > > - if not self.__RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): > > + if self._IsKeyword("BUILD_NUM"): > > + if not self._RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): > > raise Warning("%s section could NOT have BUILD_NUM%d" % (EfiSectionObj.SectionType, > self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected Build number", self.FileName, self.CurrentLineNumber) > > - EfiSectionObj.BuildNum = self.__Token > > + EfiSectionObj.BuildNum = self._Token > > > > else: > > - EfiSectionObj.FileType = self.__Token > > - self.__CheckRuleSectionFileType(EfiSectionObj.SectionType, EfiSectionObj.FileType) > > + EfiSectionObj.FileType = self._Token > > + self._CheckRuleSectionFileType(EfiSectionObj.SectionType, EfiSectionObj.FileType) > > > > - if self.__IsKeyword("Optional"): > > - if not self.__RuleSectionCouldBeOptional(EfiSectionObj.SectionType): > > + if self._IsKeyword("Optional"): > > + if not self._RuleSectionCouldBeOptional(EfiSectionObj.SectionType): > > raise Warning("%s section could NOT be optional%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), > self.FileName, self.CurrentLineNumber) > > EfiSectionObj.Optional = True > > > > - if self.__IsKeyword("BUILD_NUM"): > > - if not self.__RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): > > + if self._IsKeyword("BUILD_NUM"): > > + if not self._RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): > > raise Warning("%s section could NOT have BUILD_NUM%d" % (EfiSectionObj.SectionType, > self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected Build number", self.FileName, self.CurrentLineNumber) > > - EfiSectionObj.BuildNum = self.__Token > > + EfiSectionObj.BuildNum = self._Token > > > > - if self.__GetAlignment(): > > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > - if self.__Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == > BINARY_FILE_TYPE_TE): > > + if self._GetAlignment(): > > + if self._Token not in ALIGNMENTS: > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > + if self._Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == > BINARY_FILE_TYPE_TE): > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) > > - EfiSectionObj.Alignment = self.__Token > > + EfiSectionObj.Alignment = self._Token > > > > - if 
self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): > > - if self.__SectionCouldHaveRelocFlag(EfiSectionObj.SectionType): > > - if self.__Token == 'RELOCS_STRIPPED': > > + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): > > + if self._SectionCouldHaveRelocFlag(EfiSectionObj.SectionType): > > + if self._Token == 'RELOCS_STRIPPED': > > EfiSectionObj.KeepReloc = False > > else: > > EfiSectionObj.KeepReloc = True > > @@ -3972,35 +3883,35 @@ class FdfParser: > > raise Warning("Section type %s could not have reloc strip flag" % EfiSectionObj.SectionType, self.FileName, > self.CurrentLineNumber) > > > > > > - if self.__IsToken('|'): > > - EfiSectionObj.FileExtension = self.__GetFileExtension() > > - elif self.__GetNextToken(): > > - if self.__Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > + if self._IsToken(TAB_VALUE_SPLIT): > > + EfiSectionObj.FileExtension = self._GetFileExtension() > > + elif self._GetNextToken(): > > + if self._Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, > BINARY_FILE_TYPE_SMM_DEPEX): > > > > - if self.__Token.startswith('PCD'): > > - self.__UndoToken() > > - self.__GetNextWord() > > + if self._Token.startswith('PCD'): > > + self._UndoToken() > > + self._GetNextWord() > > > > - if self.__Token == 'PCD': > > - if not self.__IsToken( "("): > > + if self._Token == 'PCD': > > + if not self._IsToken("("): > > raise Warning("expected '('", self.FileName, self.CurrentLineNumber) > > - PcdPair = self.__GetNextPcdSettings() > > - if not self.__IsToken( ")"): > > + PcdPair = self._GetNextPcdSettings() > > + if not self._IsToken(")"): > > raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) > > - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' > > + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' > > > > - EfiSectionObj.FileName = self.__Token > > + EfiSectionObj.FileName = self._Token > > > > else: > > - self.__UndoToken() > > + self._UndoToken() > > else: > > raise Warning("expected section file name", self.FileName, self.CurrentLineNumber) > > > > Obj.SectionList.append(EfiSectionObj) > > return True > > > > - ## __RuleSectionCouldBeOptional() method > > + ## _RuleSectionCouldBeOptional() method > > # > > # Get whether a section could be optional > > # > > @@ -4009,13 +3920,13 @@ class FdfParser: > > # @retval False section never optional > > # > > @staticmethod > > - def __RuleSectionCouldBeOptional(SectionType): > > + def _RuleSectionCouldBeOptional(SectionType): > > if SectionType in (BINARY_FILE_TYPE_DXE_DEPEX, BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, > "RAW", BINARY_FILE_TYPE_SMM_DEPEX): > > return True > > else: > > return False > > > > - ## __RuleSectionCouldHaveBuildNum() method > > + ## _RuleSectionCouldHaveBuildNum() method > > # > > # Get whether a section could have build number information > > # > > @@ -4024,13 +3935,13 @@ class FdfParser: > > # @retval False section never have build number information > > # > > @staticmethod > > - def __RuleSectionCouldHaveBuildNum(SectionType): > > + def _RuleSectionCouldHaveBuildNum(SectionType): > > if SectionType in ("VERSION"): > > return True > > else: > > return False > > > > - ## __RuleSectionCouldHaveString() method > > + ## 
_RuleSectionCouldHaveString() method > > # > > # Get whether a section could have string > > # > > @@ -4039,13 +3950,13 @@ class FdfParser: > > # @retval False section never have string > > # > > @staticmethod > > - def __RuleSectionCouldHaveString(SectionType): > > + def _RuleSectionCouldHaveString(SectionType): > > if SectionType in (BINARY_FILE_TYPE_UI, "VERSION"): > > return True > > else: > > return False > > > > - ## __CheckRuleSectionFileType() method > > + ## _CheckRuleSectionFileType() method > > # > > # Get whether a section matches a file type > > # > > @@ -4053,7 +3964,7 @@ class FdfParser: > > # @param SectionType The section type to check > > # @param FileType The file type to check > > # > > - def __CheckRuleSectionFileType(self, SectionType, FileType): > > + def _CheckRuleSectionFileType(self, SectionType, FileType): > > if SectionType == "COMPAT16": > > if FileType not in ("COMPAT16", "SEC_COMPAT16"): > > raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) > > @@ -4085,53 +3996,53 @@ class FdfParser: > > if FileType not in (BINARY_FILE_TYPE_PE32, "SEC_GUID"): > > raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) > > > > - ## __GetRuleEncapsulationSection() method > > + ## _GetRuleEncapsulationSection() method > > # > > # Get encapsulation section for Rule > > # > > # @param self The object pointer > > - # @param Rule for whom section is got > > + # @param theRule for whom section is got > > # @retval True Successfully find section statement > > # @retval False Not able to find section statement > > # > > - def __GetRuleEncapsulationSection(self, Rule): > > - if self.__IsKeyword( "COMPRESS"): > > + def _GetRuleEncapsulationSection(self, theRule): > > + if self._IsKeyword("COMPRESS"): > > Type = "PI_STD" > > - if self.__IsKeyword("PI_STD") or self.__IsKeyword("PI_NONE"): > > - Type = self.__Token > > + if self._IsKeyword("PI_STD") or self._IsKeyword("PI_NONE"): > > + Type = self._Token > > > > - if not self.__IsToken("{"): > > + if not self._IsToken("{"): > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > - CompressSectionObj = CompressSection.CompressSection() > > + CompressSectionObj = CompressSection() > > > > CompressSectionObj.CompType = Type > > # Recursive sections... 
> > while True: > > - IsEncapsulate = self.__GetRuleEncapsulationSection(CompressSectionObj) > > - IsLeaf = self.__GetEfiSection(CompressSectionObj) > > + IsEncapsulate = self._GetRuleEncapsulationSection(CompressSectionObj) > > + IsLeaf = self._GetEfiSection(CompressSectionObj) > > if not IsEncapsulate and not IsLeaf: > > break > > > > - if not self.__IsToken( "}"): > > + if not self._IsToken("}"): > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > - Rule.SectionList.append(CompressSectionObj) > > + theRule.SectionList.append(CompressSectionObj) > > > > return True > > > > - elif self.__IsKeyword( "GUIDED"): > > + elif self._IsKeyword("GUIDED"): > > GuidValue = None > > - if self.__GetNextGuid(): > > - GuidValue = self.__Token > > + if self._GetNextGuid(): > > + GuidValue = self._Token > > > > - if self.__IsKeyword( "$(NAMED_GUID)"): > > - GuidValue = self.__Token > > + if self._IsKeyword("$(NAMED_GUID)"): > > + GuidValue = self._Token > > > > - AttribDict = self.__GetGuidAttrib() > > + AttribDict = self._GetGuidAttrib() > > > > - if not self.__IsToken("{"): > > + if not self._IsToken("{"): > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > - GuidSectionObj = GuidSection.GuidSection() > > + GuidSectionObj = GuidSection() > > GuidSectionObj.NameGuid = GuidValue > > GuidSectionObj.SectionType = "GUIDED" > > GuidSectionObj.ProcessRequired = AttribDict["PROCESSING_REQUIRED"] > > @@ -4140,20 +4051,20 @@ class FdfParser: > > > > # Efi sections... > > while True: > > - IsEncapsulate = self.__GetRuleEncapsulationSection(GuidSectionObj) > > - IsLeaf = self.__GetEfiSection(GuidSectionObj) > > + IsEncapsulate = self._GetRuleEncapsulationSection(GuidSectionObj) > > + IsLeaf = self._GetEfiSection(GuidSectionObj) > > if not IsEncapsulate and not IsLeaf: > > break > > > > - if not self.__IsToken( "}"): > > + if not self._IsToken("}"): > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > - Rule.SectionList.append(GuidSectionObj) > > + theRule.SectionList.append(GuidSectionObj) > > > > return True > > > > return False > > > > - ## __GetVtf() method > > + ## _GetVtf() method > > # > > # Get VTF section contents and store its data into VTF list of self.Profile > > # > > @@ -4161,69 +4072,69 @@ class FdfParser: > > # @retval True Successfully find a VTF > > # @retval False Not able to find a VTF > > # > > - def __GetVtf(self): > > - if not self.__GetNextToken(): > > + def _GetVtf(self): > > + if not self._GetNextToken(): > > return False > > > > - S = self.__Token.upper() > > - if S.startswith("[") and not S.startswith("[VTF."): > > + S = self._Token.upper() > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[VTF."): > > self.SectionParser(S) > > - self.__UndoToken() > > + self._UndoToken() > > return False > > > > - self.__UndoToken() > > - if not self.__IsToken("[VTF.", True): > > + self._UndoToken() > > + if not self._IsToken("[VTF.", True): > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > FileLineTuple[1], self.CurrentOffsetWithinLine) > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > FileLineTuple[1], self.CurrentOffsetWithinLine) > > raise Warning("expected [VTF.]", self.FileName, self.CurrentLineNumber) > > > > - if not 
self.__SkipToToken("."): > > + if not self._SkipToToken(TAB_SPLIT): > > raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) > > > > - Arch = self.__SkippedChars.rstrip(".").upper() > > - if Arch not in ("IA32", "X64", "IPF", "ARM", "AARCH64"): > > + Arch = self._SkippedChars.rstrip(TAB_SPLIT).upper() > > + if Arch not in {TAB_ARCH_IA32, TAB_ARCH_X64, TAB_ARCH_IPF, TAB_ARCH_ARM, TAB_ARCH_AARCH64}: > > raise Warning("Unknown Arch '%s'" % Arch, self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextWord(): > > + if not self._GetNextWord(): > > raise Warning("expected VTF name", self.FileName, self.CurrentLineNumber) > > - Name = self.__Token.upper() > > + Name = self._Token.upper() > > > > - VtfObj = Vtf.Vtf() > > + VtfObj = Vtf() > > VtfObj.UiName = Name > > VtfObj.KeyArch = Arch > > > > - if self.__IsToken(","): > > - if not self.__GetNextWord(): > > + if self._IsToken(TAB_COMMA_SPLIT): > > + if not self._GetNextWord(): > > raise Warning("expected Arch list", self.FileName, self.CurrentLineNumber) > > - if self.__Token.upper() not in ("IA32", "X64", "IPF", "ARM", "AARCH64"): > > - raise Warning("Unknown Arch '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > - VtfObj.ArchList = self.__Token.upper() > > + if self._Token.upper() not in (TAB_ARCH_IA32, TAB_ARCH_X64, TAB_ARCH_IPF, TAB_ARCH_ARM, > TAB_ARCH_AARCH64): > > + raise Warning("Unknown Arch '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > + VtfObj.ArchList = self._Token.upper() > > > > - if not self.__IsToken( "]"): > > + if not self._IsToken(TAB_SECTION_END): > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > - if self.__IsKeyword("IA32_RST_BIN"): > > - if not self.__IsToken("="): > > + if self._IsKeyword("IA32_RST_BIN"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected Reset file", self.FileName, self.CurrentLineNumber) > > > > - VtfObj.ResetBin = self.__Token > > - if VtfObj.ResetBin.replace('$(WORKSPACE)', '').find('$') == -1: > > + VtfObj.ResetBin = self._Token > > + if VtfObj.ResetBin.replace(TAB_WORKSPACE, '').find('$') == -1: > > #check for file path > > ErrorCode, ErrorInfo = PathClass(NormPath(VtfObj.ResetBin), GenFdsGlobalVariable.WorkSpaceDir).Validate() > > if ErrorCode != 0: > > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > > > - while self.__GetComponentStatement(VtfObj): > > + while self._GetComponentStatement(VtfObj): > > pass > > > > self.Profile.VtfList.append(VtfObj) > > return True > > > > - ## __GetComponentStatement() method > > + ## _GetComponentStatement() method > > # > > # Get components in VTF > > # > > @@ -4232,132 +4143,132 @@ class FdfParser: > > # @retval True Successfully find a component > > # @retval False Not able to find a component > > # > > - def __GetComponentStatement(self, VtfObj): > > - if not self.__IsKeyword("COMP_NAME"): > > + def _GetComponentStatement(self, VtfObj): > > + if not self._IsKeyword("COMP_NAME"): > > return False > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextWord(): > > + if not self._GetNextWord(): > > raise Warning("expected Component Name", self.FileName, self.CurrentLineNumber) > > > > - CompStatementObj = ComponentStatement.ComponentStatement() > > - 
CompStatementObj.CompName = self.__Token > > + CompStatementObj = ComponentStatement() > > + CompStatementObj.CompName = self._Token > > > > - if not self.__IsKeyword("COMP_LOC"): > > + if not self._IsKeyword("COMP_LOC"): > > raise Warning("expected COMP_LOC", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > CompStatementObj.CompLoc = "" > > - if self.__GetNextWord(): > > - CompStatementObj.CompLoc = self.__Token > > - if self.__IsToken('|'): > > - if not self.__GetNextWord(): > > + if self._GetNextWord(): > > + CompStatementObj.CompLoc = self._Token > > + if self._IsToken(TAB_VALUE_SPLIT): > > + if not self._GetNextWord(): > > raise Warning("Expected Region Name", self.FileName, self.CurrentLineNumber) > > > > - if self.__Token not in ("F", "N", "S"): #, "H", "L", "PH", "PL"): not support > > - raise Warning("Unknown location type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > + if self._Token not in ("F", "N", "S"): #, "H", "L", "PH", "PL"): not support > > + raise Warning("Unknown location type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > - CompStatementObj.FilePos = self.__Token > > + CompStatementObj.FilePos = self._Token > > else: > > self.CurrentLineNumber += 1 > > self.CurrentOffsetWithinLine = 0 > > > > - if not self.__IsKeyword("COMP_TYPE"): > > + if not self._IsKeyword("COMP_TYPE"): > > raise Warning("expected COMP_TYPE", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected Component type", self.FileName, self.CurrentLineNumber) > > - if self.__Token not in ("FIT", "PAL_B", "PAL_A", "OEM"): > > - if not self.__Token.startswith("0x") or len(self.__Token) < 3 or len(self.__Token) > 4 or \ > > - not self.__Token[2] in string.hexdigits or not self.__Token[-1] in string.hexdigits: > > - raise Warning("Unknown location type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > - CompStatementObj.CompType = self.__Token > > + if self._Token not in ("FIT", "PAL_B", "PAL_A", "OEM"): > > + if not self._Token.startswith("0x") or len(self._Token) < 3 or len(self._Token) > 4 or \ > > + not self._Token[2] in hexdigits or not self._Token[-1] in hexdigits: > > + raise Warning("Unknown location type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > + CompStatementObj.CompType = self._Token > > > > - if not self.__IsKeyword("COMP_VER"): > > + if not self._IsKeyword("COMP_VER"): > > raise Warning("expected COMP_VER", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected Component version", self.FileName, self.CurrentLineNumber) > > > > - Pattern = re.compile('-$|[0-9a-fA-F]{1,2}\.[0-9a-fA-F]{1,2}$', re.DOTALL) > > - if Pattern.match(self.__Token) is None: > > - raise Warning("Unknown version format '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > - CompStatementObj.CompVer = self.__Token > > + Pattern = compile('-$|[0-9a-fA-F]{1,2}\.[0-9a-fA-F]{1,2}$', DOTALL) > > + if 
Pattern.match(self._Token) is None: > > + raise Warning("Unknown version format '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > + CompStatementObj.CompVer = self._Token > > > > - if not self.__IsKeyword("COMP_CS"): > > + if not self._IsKeyword("COMP_CS"): > > raise Warning("expected COMP_CS", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected Component CS", self.FileName, self.CurrentLineNumber) > > - if self.__Token not in ("1", "0"): > > - raise Warning("Unknown Component CS '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > - CompStatementObj.CompCs = self.__Token > > + if self._Token not in ("1", "0"): > > + raise Warning("Unknown Component CS '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > + CompStatementObj.CompCs = self._Token > > > > > > - if not self.__IsKeyword("COMP_BIN"): > > + if not self._IsKeyword("COMP_BIN"): > > raise Warning("expected COMP_BIN", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected Component file", self.FileName, self.CurrentLineNumber) > > > > - CompStatementObj.CompBin = self.__Token > > - if CompStatementObj.CompBin != '-' and CompStatementObj.CompBin.replace('$(WORKSPACE)', '').find('$') == -1: > > + CompStatementObj.CompBin = self._Token > > + if CompStatementObj.CompBin != '-' and CompStatementObj.CompBin.replace(TAB_WORKSPACE, '').find('$') == -1: > > #check for file path > > ErrorCode, ErrorInfo = PathClass(NormPath(CompStatementObj.CompBin), > GenFdsGlobalVariable.WorkSpaceDir).Validate() > > if ErrorCode != 0: > > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > > > - if not self.__IsKeyword("COMP_SYM"): > > + if not self._IsKeyword("COMP_SYM"): > > raise Warning("expected COMP_SYM", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected Component symbol file", self.FileName, self.CurrentLineNumber) > > > > - CompStatementObj.CompSym = self.__Token > > - if CompStatementObj.CompSym != '-' and CompStatementObj.CompSym.replace('$(WORKSPACE)', '').find('$') == -1: > > + CompStatementObj.CompSym = self._Token > > + if CompStatementObj.CompSym != '-' and CompStatementObj.CompSym.replace(TAB_WORKSPACE, '').find('$') == -1: > > #check for file path > > ErrorCode, ErrorInfo = PathClass(NormPath(CompStatementObj.CompSym), > GenFdsGlobalVariable.WorkSpaceDir).Validate() > > if ErrorCode != 0: > > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > > > - if not self.__IsKeyword("COMP_SIZE"): > > + if not self._IsKeyword("COMP_SIZE"): > > raise Warning("expected COMP_SIZE", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken("="): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if self.__IsToken("-"): > > - CompStatementObj.CompSize = self.__Token > > - elif 
self.__GetNextDecimalNumber(): > > - CompStatementObj.CompSize = self.__Token > > - elif self.__GetNextHexNumber(): > > - CompStatementObj.CompSize = self.__Token > > + if self._IsToken("-"): > > + CompStatementObj.CompSize = self._Token > > + elif self._GetNextDecimalNumber(): > > + CompStatementObj.CompSize = self._Token > > + elif self._GetNextHexNumber(): > > + CompStatementObj.CompSize = self._Token > > else: > > - raise Warning("Unknown size '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > + raise Warning("Unknown size '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > VtfObj.ComponentStatementList.append(CompStatementObj) > > return True > > > > - ## __GetOptionRom() method > > + ## _GetOptionRom() method > > # > > # Get OptionROM section contents and store its data into OptionROM list of self.Profile > > # > > @@ -4365,38 +4276,37 @@ class FdfParser: > > # @retval True Successfully find a OptionROM > > # @retval False Not able to find a OptionROM > > # > > - def __GetOptionRom(self): > > - if not self.__GetNextToken(): > > + def _GetOptionRom(self): > > + if not self._GetNextToken(): > > return False > > > > - S = self.__Token.upper() > > - if S.startswith("[") and not S.startswith("[OPTIONROM."): > > + S = self._Token.upper() > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[OPTIONROM."): > > self.SectionParser(S) > > - self.__UndoToken() > > + self._UndoToken() > > return False > > > > - self.__UndoToken() > > - if not self.__IsToken("[OptionRom.", True): > > - raise Warning("Unknown Keyword '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > + self._UndoToken() > > + if not self._IsToken("[OptionRom.", True): > > + raise Warning("Unknown Keyword '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > - OptRomName = self.__GetUiName() > > + OptRomName = self._GetUiName() > > > > - if not self.__IsToken( "]"): > > + if not self._IsToken(TAB_SECTION_END): > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > - OptRomObj = OptionRom.OPTIONROM() > > - OptRomObj.DriverName = OptRomName > > + OptRomObj = OPTIONROM(OptRomName) > > self.Profile.OptRomDict[OptRomName] = OptRomObj > > > > while True: > > - isInf = self.__GetOptRomInfStatement(OptRomObj) > > - isFile = self.__GetOptRomFileStatement(OptRomObj) > > + isInf = self._GetOptRomInfStatement(OptRomObj) > > + isFile = self._GetOptRomFileStatement(OptRomObj) > > if not isInf and not isFile: > > break > > > > return True > > > > - ## __GetOptRomInfStatement() method > > + ## _GetOptRomInfStatement() method > > # > > # Get INF statements > > # > > @@ -4405,17 +4315,17 @@ class FdfParser: > > # @retval True Successfully find inf statement > > # @retval False Not able to find inf statement > > # > > - def __GetOptRomInfStatement(self, Obj): > > - if not self.__IsKeyword( "INF"): > > + def _GetOptRomInfStatement(self, Obj): > > + if not self._IsKeyword("INF"): > > return False > > > > - ffsInf = OptRomInfStatement.OptRomInfStatement() > > - self.__GetInfOptions( ffsInf) > > + ffsInf = OptRomInfStatement() > > + self._GetInfOptions(ffsInf) > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected INF file path", self.FileName, self.CurrentLineNumber) > > - ffsInf.InfFileName = self.__Token > > - if ffsInf.InfFileName.replace('$(WORKSPACE)', '').find('$') == -1: > > + ffsInf.InfFileName = self._Token > > + if ffsInf.InfFileName.replace(TAB_WORKSPACE, '').find('$') == -1: > > #check for file path > > 
ErrorCode, ErrorInfo = PathClass(NormPath(ffsInf.InfFileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() > > if ErrorCode != 0: > > @@ -4434,71 +4344,71 @@ class FdfParser: > > self.Profile.InfDict['ArchTBD'].append(ffsInf.InfFileName) > > > > > > - self.__GetOptRomOverrides (ffsInf) > > + self._GetOptRomOverrides (ffsInf) > > > > Obj.FfsList.append(ffsInf) > > return True > > > > - ## __GetOptRomOverrides() method > > + ## _GetOptRomOverrides() method > > # > > # Get overrides for OptROM INF & FILE > > # > > # @param self The object pointer > > # @param FfsInfObj for whom overrides is got > > # > > - def __GetOptRomOverrides(self, Obj): > > - if self.__IsToken('{'): > > - Overrides = OptRomInfStatement.OverrideAttribs() > > + def _GetOptRomOverrides(self, Obj): > > + if self._IsToken('{'): > > + Overrides = OverrideAttribs() > > while True: > > - if self.__IsKeyword( "PCI_VENDOR_ID"): > > - if not self.__IsToken( "="): > > + if self._IsKeyword("PCI_VENDOR_ID"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextHexNumber(): > > + if not self._GetNextHexNumber(): > > raise Warning("expected Hex vendor id", self.FileName, self.CurrentLineNumber) > > - Overrides.PciVendorId = self.__Token > > + Overrides.PciVendorId = self._Token > > continue > > > > - if self.__IsKeyword( "PCI_CLASS_CODE"): > > - if not self.__IsToken( "="): > > + if self._IsKeyword("PCI_CLASS_CODE"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextHexNumber(): > > + if not self._GetNextHexNumber(): > > raise Warning("expected Hex class code", self.FileName, self.CurrentLineNumber) > > - Overrides.PciClassCode = self.__Token > > + Overrides.PciClassCode = self._Token > > continue > > > > - if self.__IsKeyword( "PCI_DEVICE_ID"): > > - if not self.__IsToken( "="): > > + if self._IsKeyword("PCI_DEVICE_ID"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextHexNumber(): > > + if not self._GetNextHexNumber(): > > raise Warning("expected Hex device id", self.FileName, self.CurrentLineNumber) > > > > - Overrides.PciDeviceId = self.__Token > > + Overrides.PciDeviceId = self._Token > > continue > > > > - if self.__IsKeyword( "PCI_REVISION"): > > - if not self.__IsToken( "="): > > + if self._IsKeyword("PCI_REVISION"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextHexNumber(): > > + if not self._GetNextHexNumber(): > > raise Warning("expected Hex revision", self.FileName, self.CurrentLineNumber) > > - Overrides.PciRevision = self.__Token > > + Overrides.PciRevision = self._Token > > continue > > > > - if self.__IsKeyword( "PCI_COMPRESS"): > > - if not self.__IsToken( "="): > > + if self._IsKeyword("PCI_COMPRESS"): > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected TRUE/FALSE for compress", self.FileName, self.CurrentLineNumber) > > - Overrides.NeedCompress = self.__Token.upper() == 'TRUE' > > + Overrides.NeedCompress = self._Token.upper() == 'TRUE' > > continue > > > > - if self.__IsToken( "}"): > > + if self._IsToken("}"): > > break > > else: > > EdkLogger.error("FdfParser", 
FORMAT_INVALID, File=self.FileName, Line=self.CurrentLineNumber) > > > > Obj.OverrideAttribs = Overrides > > > > - ## __GetOptRomFileStatement() method > > + ## _GetOptRomFileStatement() method > > # > > # Get FILE statements > > # > > @@ -4507,33 +4417,33 @@ class FdfParser: > > # @retval True Successfully find FILE statement > > # @retval False Not able to find FILE statement > > # > > - def __GetOptRomFileStatement(self, Obj): > > - if not self.__IsKeyword( "FILE"): > > + def _GetOptRomFileStatement(self, Obj): > > + if not self._IsKeyword("FILE"): > > return False > > > > - FfsFileObj = OptRomFileStatement.OptRomFileStatement() > > + FfsFileObj = OptRomFileStatement() > > > > - if not self.__IsKeyword("EFI") and not self.__IsKeyword(BINARY_FILE_TYPE_BIN): > > + if not self._IsKeyword("EFI") and not self._IsKeyword(BINARY_FILE_TYPE_BIN): > > raise Warning("expected Binary type (EFI/BIN)", self.FileName, self.CurrentLineNumber) > > - FfsFileObj.FileType = self.__Token > > + FfsFileObj.FileType = self._Token > > > > - if not self.__GetNextToken(): > > + if not self._GetNextToken(): > > raise Warning("expected File path", self.FileName, self.CurrentLineNumber) > > - FfsFileObj.FileName = self.__Token > > - if FfsFileObj.FileName.replace('$(WORKSPACE)', '').find('$') == -1: > > + FfsFileObj.FileName = self._Token > > + if FfsFileObj.FileName.replace(TAB_WORKSPACE, '').find('$') == -1: > > #check for file path > > ErrorCode, ErrorInfo = PathClass(NormPath(FfsFileObj.FileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() > > if ErrorCode != 0: > > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > > > if FfsFileObj.FileType == 'EFI': > > - self.__GetOptRomOverrides(FfsFileObj) > > + self._GetOptRomOverrides(FfsFileObj) > > > > Obj.FfsList.append(FfsFileObj) > > > > return True > > > > - ## __GetCapInFd() method > > + ## _GetCapInFd() method > > # > > # Get Cap list contained in FD > > # > > @@ -4541,7 +4451,7 @@ class FdfParser: > > # @param FdName FD name > > # @retval CapList List of Capsule in FD > > # > > - def __GetCapInFd (self, FdName): > > + def _GetCapInFd (self, FdName): > > CapList = [] > > if FdName.upper() in self.Profile.FdDict: > > FdObj = self.Profile.FdDict[FdName.upper()] > > @@ -4554,7 +4464,7 @@ class FdfParser: > > CapList.append(elementRegionData.upper()) > > return CapList > > > > - ## __GetReferencedFdCapTuple() method > > + ## _GetReferencedFdCapTuple() method > > # > > # Get FV and FD list referenced by a capsule image > > # > > @@ -4563,22 +4473,22 @@ class FdfParser: > > # @param RefFdList referenced FD by section > > # @param RefFvList referenced FV by section > > # > > - def __GetReferencedFdCapTuple(self, CapObj, RefFdList = [], RefFvList = []): > > - for CapsuleDataObj in CapObj.CapsuleDataList : > > + def _GetReferencedFdCapTuple(self, CapObj, RefFdList = [], RefFvList = []): > > + for CapsuleDataObj in CapObj.CapsuleDataList: > > if hasattr(CapsuleDataObj, 'FvName') and CapsuleDataObj.FvName is not None and CapsuleDataObj.FvName.upper() > not in RefFvList: > > RefFvList.append (CapsuleDataObj.FvName.upper()) > > elif hasattr(CapsuleDataObj, 'FdName') and CapsuleDataObj.FdName is not None and > CapsuleDataObj.FdName.upper() not in RefFdList: > > RefFdList.append (CapsuleDataObj.FdName.upper()) > > elif CapsuleDataObj.Ffs is not None: > > - if isinstance(CapsuleDataObj.Ffs, FfsFileStatement.FileStatement): > > + if isinstance(CapsuleDataObj.Ffs, FileStatement): > > if CapsuleDataObj.Ffs.FvName is not None and CapsuleDataObj.Ffs.FvName.upper() not 
in RefFvList: > > RefFvList.append(CapsuleDataObj.Ffs.FvName.upper()) > > elif CapsuleDataObj.Ffs.FdName is not None and CapsuleDataObj.Ffs.FdName.upper() not in RefFdList: > > RefFdList.append(CapsuleDataObj.Ffs.FdName.upper()) > > else: > > - self.__GetReferencedFdFvTupleFromSection(CapsuleDataObj.Ffs, RefFdList, RefFvList) > > + self._GetReferencedFdFvTupleFromSection(CapsuleDataObj.Ffs, RefFdList, RefFvList) > > > > - ## __GetFvInFd() method > > + ## _GetFvInFd() method > > # > > # Get FV list contained in FD > > # > > @@ -4586,7 +4496,7 @@ class FdfParser: > > # @param FdName FD name > > # @retval FvList list of FV in FD > > # > > - def __GetFvInFd (self, FdName): > > + def _GetFvInFd (self, FdName): > > FvList = [] > > if FdName.upper() in self.Profile.FdDict: > > FdObj = self.Profile.FdDict[FdName.upper()] > > @@ -4599,7 +4509,7 @@ class FdfParser: > > FvList.append(elementRegionData.upper()) > > return FvList > > > > - ## __GetReferencedFdFvTuple() method > > + ## _GetReferencedFdFvTuple() method > > # > > # Get FD and FV list referenced by a FFS file > > # > > @@ -4608,17 +4518,17 @@ class FdfParser: > > # @param RefFdList referenced FD by section > > # @param RefFvList referenced FV by section > > # > > - def __GetReferencedFdFvTuple(self, FvObj, RefFdList = [], RefFvList = []): > > + def _GetReferencedFdFvTuple(self, FvObj, RefFdList = [], RefFvList = []): > > for FfsObj in FvObj.FfsList: > > - if isinstance(FfsObj, FfsFileStatement.FileStatement): > > + if isinstance(FfsObj, FileStatement): > > if FfsObj.FvName is not None and FfsObj.FvName.upper() not in RefFvList: > > RefFvList.append(FfsObj.FvName.upper()) > > elif FfsObj.FdName is not None and FfsObj.FdName.upper() not in RefFdList: > > RefFdList.append(FfsObj.FdName.upper()) > > else: > > - self.__GetReferencedFdFvTupleFromSection(FfsObj, RefFdList, RefFvList) > > + self._GetReferencedFdFvTupleFromSection(FfsObj, RefFdList, RefFvList) > > > > - ## __GetReferencedFdFvTupleFromSection() method > > + ## _GetReferencedFdFvTupleFromSection() method > > # > > # Get FD and FV list referenced by a FFS section > > # > > @@ -4627,19 +4537,19 @@ class FdfParser: > > # @param FdList referenced FD by section > > # @param FvList referenced FV by section > > # > > - def __GetReferencedFdFvTupleFromSection(self, FfsFile, FdList = [], FvList = []): > > + def _GetReferencedFdFvTupleFromSection(self, FfsFile, FdList = [], FvList = []): > > SectionStack = [] > > SectionStack.extend(FfsFile.SectionList) > > while SectionStack != []: > > SectionObj = SectionStack.pop() > > - if isinstance(SectionObj, FvImageSection.FvImageSection): > > + if isinstance(SectionObj, FvImageSection): > > if SectionObj.FvName is not None and SectionObj.FvName.upper() not in FvList: > > FvList.append(SectionObj.FvName.upper()) > > if SectionObj.Fv is not None and SectionObj.Fv.UiFvName is not None and SectionObj.Fv.UiFvName.upper() not in > FvList: > > FvList.append(SectionObj.Fv.UiFvName.upper()) > > - self.__GetReferencedFdFvTuple(SectionObj.Fv, FdList, FvList) > > + self._GetReferencedFdFvTuple(SectionObj.Fv, FdList, FvList) > > > > - if isinstance(SectionObj, CompressSection.CompressSection) or isinstance(SectionObj, GuidSection.GuidSection): > > + if isinstance(SectionObj, CompressSection) or isinstance(SectionObj, GuidSection): > > SectionStack.extend(SectionObj.SectionList) > > > > ## CycleReferenceCheck() method > > @@ -4662,7 +4572,7 @@ class FdfParser: > > FdAnalyzedList = [] > > > > Index = 0 > > - while RefFvStack != [] and Index < MaxLength: > > + while 
RefFvStack and Index < MaxLength: > > Index = Index + 1 > > FvNameFromStack = RefFvStack.pop() > > if FvNameFromStack.upper() in self.Profile.FvDict: > > @@ -4672,14 +4582,14 @@ class FdfParser: > > > > RefFdList = [] > > RefFvList = [] > > - self.__GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) > > + self._GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) > > > > for RefFdName in RefFdList: > > if RefFdName in FdAnalyzedList: > > continue > > > > LogStr += "FV %s contains FD %s\n" % (FvNameFromStack, RefFdName) > > - FvInFdList = self.__GetFvInFd(RefFdName) > > + FvInFdList = self._GetFvInFd(RefFdName) > > if FvInFdList != []: > > for FvNameInFd in FvInFdList: > > LogStr += "FD %s contains FV %s\n" % (RefFdName, FvNameInFd) > > @@ -4725,7 +4635,7 @@ class FdfParser: > > > > RefFvList = [] > > RefFdList = [] > > - self.__GetReferencedFdCapTuple(CapObj, RefFdList, RefFvList) > > + self._GetReferencedFdCapTuple(CapObj, RefFdList, RefFvList) > > > > FvListLength = 0 > > FdListLength = 0 > > @@ -4735,7 +4645,7 @@ class FdfParser: > > continue > > > > LogStr += "Capsule %s contains FD %s\n" % (CapNameFromStack, RefFdName) > > - CapInFdList = self.__GetCapInFd(RefFdName) > > + CapInFdList = self._GetCapInFd(RefFdName) > > if CapInFdList != []: > > for CapNameInFd in CapInFdList: > > LogStr += "FD %s contains Capsule %s\n" % (RefFdName, CapNameInFd) > > @@ -4746,7 +4656,7 @@ class FdfParser: > > EdkLogger.info(LogStr) > > return True > > > > - FvInFdList = self.__GetFvInFd(RefFdName) > > + FvInFdList = self._GetFvInFd(RefFdName) > > if FvInFdList != []: > > for FvNameInFd in FvInFdList: > > LogStr += "FD %s contains FV %s\n" % (RefFdName, FvNameInFd) > > @@ -4767,7 +4677,7 @@ class FdfParser: > > FvObj = self.Profile.FvDict[RefFvName.upper()] > > else: > > continue > > - self.__GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) > > + self._GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) > > FvAnalyzedList.append(RefFvName) > > > > return False > > diff --git a/BaseTools/Source/Python/GenFds/Ffs.py b/BaseTools/Source/Python/GenFds/Ffs.py > > index 598b2d023124..4c1b43572964 100644 > > --- a/BaseTools/Source/Python/GenFds/Ffs.py > > +++ b/BaseTools/Source/Python/GenFds/Ffs.py > > @@ -15,53 +15,41 @@ > > ## > > # Import Modules > > # > > -from CommonDataClass.FdfClass import FDClassObject > > from Common.DataType import * > > > > -## generate FFS > > -# > > -# > > -class Ffs(FDClassObject): > > - # mapping between FILE type in FDF and file type for GenFfs > > - FdfFvFileTypeToFileType = { > > - SUP_MODULE_SEC : 'EFI_FV_FILETYPE_SECURITY_CORE', > > - SUP_MODULE_PEI_CORE : 'EFI_FV_FILETYPE_PEI_CORE', > > - SUP_MODULE_PEIM : 'EFI_FV_FILETYPE_PEIM', > > - SUP_MODULE_DXE_CORE : 'EFI_FV_FILETYPE_DXE_CORE', > > - 'FREEFORM' : 'EFI_FV_FILETYPE_FREEFORM', > > - 'DRIVER' : 'EFI_FV_FILETYPE_DRIVER', > > - 'APPLICATION' : 'EFI_FV_FILETYPE_APPLICATION', > > - 'FV_IMAGE' : 'EFI_FV_FILETYPE_FIRMWARE_VOLUME_IMAGE', > > - 'RAW' : 'EFI_FV_FILETYPE_RAW', > > - 'PEI_DXE_COMBO' : 'EFI_FV_FILETYPE_COMBINED_PEIM_DRIVER', > > - 'SMM' : 'EFI_FV_FILETYPE_SMM', > > - SUP_MODULE_SMM_CORE : 'EFI_FV_FILETYPE_SMM_CORE', > > - SUP_MODULE_MM_STANDALONE : 'EFI_FV_FILETYPE_MM_STANDALONE', > > - SUP_MODULE_MM_CORE_STANDALONE : 'EFI_FV_FILETYPE_MM_CORE_STANDALONE' > > - } > > +# mapping between FILE type in FDF and file type for GenFfs > > +FdfFvFileTypeToFileType = { > > + SUP_MODULE_SEC : 'EFI_FV_FILETYPE_SECURITY_CORE', > > + SUP_MODULE_PEI_CORE : 'EFI_FV_FILETYPE_PEI_CORE', > > + SUP_MODULE_PEIM : 
'EFI_FV_FILETYPE_PEIM', > > + SUP_MODULE_DXE_CORE : 'EFI_FV_FILETYPE_DXE_CORE', > > + 'FREEFORM' : 'EFI_FV_FILETYPE_FREEFORM', > > + 'DRIVER' : 'EFI_FV_FILETYPE_DRIVER', > > + 'APPLICATION' : 'EFI_FV_FILETYPE_APPLICATION', > > + 'FV_IMAGE' : 'EFI_FV_FILETYPE_FIRMWARE_VOLUME_IMAGE', > > + 'RAW' : 'EFI_FV_FILETYPE_RAW', > > + 'PEI_DXE_COMBO' : 'EFI_FV_FILETYPE_COMBINED_PEIM_DRIVER', > > + 'SMM' : 'EFI_FV_FILETYPE_SMM', > > + SUP_MODULE_SMM_CORE : 'EFI_FV_FILETYPE_SMM_CORE', > > + SUP_MODULE_MM_STANDALONE : 'EFI_FV_FILETYPE_MM_STANDALONE', > > + SUP_MODULE_MM_CORE_STANDALONE : 'EFI_FV_FILETYPE_MM_CORE_STANDALONE' > > +} > > > > - # mapping between section type in FDF and file suffix > > - SectionSuffix = { > > - BINARY_FILE_TYPE_PE32 : '.pe32', > > - BINARY_FILE_TYPE_PIC : '.pic', > > - BINARY_FILE_TYPE_TE : '.te', > > - BINARY_FILE_TYPE_DXE_DEPEX : '.dpx', > > - 'VERSION' : '.ver', > > - BINARY_FILE_TYPE_UI : '.ui', > > - 'COMPAT16' : '.com16', > > - 'RAW' : '.raw', > > - 'FREEFORM_SUBTYPE_GUID': '.guid', > > - 'SUBTYPE_GUID' : '.guid', > > - 'FV_IMAGE' : 'fv.sec', > > - 'COMPRESS' : '.com', > > - 'GUIDED' : '.guided', > > - BINARY_FILE_TYPE_PEI_DEPEX : '.dpx', > > - BINARY_FILE_TYPE_SMM_DEPEX : '.dpx' > > - } > > - > > - ## The constructor > > - # > > - # @param self The object pointer > > - # > > - def __init__(self): > > - FfsClassObject.__init__(self) > > +# mapping between section type in FDF and file suffix > > +SectionSuffix = { > > + BINARY_FILE_TYPE_PE32 : '.pe32', > > + BINARY_FILE_TYPE_PIC : '.pic', > > + BINARY_FILE_TYPE_TE : '.te', > > + BINARY_FILE_TYPE_DXE_DEPEX : '.dpx', > > + 'VERSION' : '.ver', > > + BINARY_FILE_TYPE_UI : '.ui', > > + 'COMPAT16' : '.com16', > > + 'RAW' : '.raw', > > + 'FREEFORM_SUBTYPE_GUID': '.guid', > > + 'SUBTYPE_GUID' : '.guid', > > + 'FV_IMAGE' : 'fv.sec', > > + 'COMPRESS' : '.com', > > + 'GUIDED' : '.guided', > > + BINARY_FILE_TYPE_PEI_DEPEX : '.dpx', > > + BINARY_FILE_TYPE_SMM_DEPEX : '.dpx' > > +} > > diff --git a/BaseTools/Source/Python/GenFds/FfsFileStatement.py b/BaseTools/Source/Python/GenFds/FfsFileStatement.py > > index 4c35aac76a00..7479efff04c7 100644 > > --- a/BaseTools/Source/Python/GenFds/FfsFileStatement.py > > +++ b/BaseTools/Source/Python/GenFds/FfsFileStatement.py > > @@ -16,26 +16,22 @@ > > # Import Modules > > # > > from __future__ import absolute_import > > -from . import Ffs > > -from . 
import Rule > > -import Common.LongFilePathOs as os > > from io import BytesIO > > -import subprocess > > - > > -from .GenFdsGlobalVariable import GenFdsGlobalVariable > > +from struct import pack > > from CommonDataClass.FdfClass import FileStatementClassObject > > from Common import EdkLogger > > -from Common.BuildToolError import * > > -from Common.Misc import GuidStructureByteArrayToGuidString > > +from Common.BuildToolError import GENFDS_ERROR > > +from Common.Misc import GuidStructureByteArrayToGuidString, SaveFileOnChange > > +import Common.LongFilePathOs as os > > from .GuidSection import GuidSection > > from .FvImageSection import FvImageSection > > -from Common.Misc import SaveFileOnChange > > -from struct import * > > +from .Ffs import FdfFvFileTypeToFileType > > +from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > ## generate FFS from FILE > > # > > # > > -class FileStatement (FileStatementClassObject) : > > +class FileStatement (FileStatementClassObject): > > ## The constructor > > # > > # @param self The object pointer > > @@ -60,7 +56,7 @@ class FileStatement (FileStatementClassObject) : > > # > > def GenFfs(self, Dict = {}, FvChildAddr=[], FvParentAddr=None, IsMakefile=False, FvName=None): > > > > - if self.NameGuid is not None and self.NameGuid.startswith('PCD('): > > + if self.NameGuid and self.NameGuid.startswith('PCD('): > > PcdValue = GenFdsGlobalVariable.GetPcdValue(self.NameGuid) > > if len(PcdValue) == 0: > > EdkLogger.error("GenFds", GENFDS_ERROR, '%s NOT defined.' \ > > @@ -82,7 +78,7 @@ class FileStatement (FileStatementClassObject) : > > > > Dict.update(self.DefineVarDict) > > SectionAlignments = None > > - if self.FvName is not None : > > + if self.FvName: > > Buffer = BytesIO('') > > if self.FvName.upper() not in GenFdsGlobalVariable.FdfParser.Profile.FvDict: > > EdkLogger.error("GenFds", GENFDS_ERROR, "FV (%s) is NOT described in FDF file!" % (self.FvName)) > > @@ -90,14 +86,14 @@ class FileStatement (FileStatementClassObject) : > > FileName = Fv.AddToBuffer(Buffer) > > SectionFiles = [FileName] > > > > - elif self.FdName is not None: > > + elif self.FdName: > > if self.FdName.upper() not in GenFdsGlobalVariable.FdfParser.Profile.FdDict: > > EdkLogger.error("GenFds", GENFDS_ERROR, "FD (%s) is NOT described in FDF file!" 
% (self.FdName)) > > Fd = GenFdsGlobalVariable.FdfParser.Profile.FdDict.get(self.FdName.upper()) > > FileName = Fd.GenFd() > > SectionFiles = [FileName] > > > > - elif self.FileName is not None: > > + elif self.FileName: > > if hasattr(self, 'FvFileType') and self.FvFileType == 'RAW': > > if isinstance(self.FileName, list) and isinstance(self.SubAlignment, list) and len(self.FileName) == > len(self.SubAlignment): > > FileContent = '' > > @@ -111,7 +107,7 @@ class FileStatement (FileStatementClassObject) : > > Content = f.read() > > f.close() > > AlignValue = 1 > > - if self.SubAlignment[Index] is not None: > > + if self.SubAlignment[Index]: > > AlignValue = GenFdsGlobalVariable.GetAlignment(self.SubAlignment[Index]) > > if AlignValue > MaxAlignValue: > > MaxAlignIndex = Index > > @@ -143,7 +139,7 @@ class FileStatement (FileStatementClassObject) : > > SectionFiles = [] > > Index = 0 > > SectionAlignments = [] > > - for section in self.SectionList : > > + for section in self.SectionList: > > Index = Index + 1 > > SecIndex = '%d' %Index > > # process the inside FvImage from FvSection or GuidSection > > @@ -152,7 +148,7 @@ class FileStatement (FileStatementClassObject) : > > section.FvAddr = FvChildAddr.pop(0) > > elif isinstance(section, GuidSection): > > section.FvAddr = FvChildAddr > > - if FvParentAddr is not None and isinstance(section, GuidSection): > > + if FvParentAddr and isinstance(section, GuidSection): > > section.FvParentAddr = FvParentAddr > > > > if self.KeepReloc == False: > > @@ -168,7 +164,7 @@ class FileStatement (FileStatementClassObject) : > > # > > FfsFileOutput = os.path.join(OutputDir, self.NameGuid + '.ffs') > > GenFdsGlobalVariable.GenerateFfs(FfsFileOutput, SectionFiles, > > - Ffs.Ffs.FdfFvFileTypeToFileType.get(self.FvFileType), > > + FdfFvFileTypeToFileType.get(self.FvFileType), > > self.NameGuid, > > Fixed=self.Fixed, > > CheckSum=self.CheckSum, > > @@ -177,6 +173,3 @@ class FileStatement (FileStatementClassObject) : > > ) > > > > return FfsFileOutput > > - > > - > > - > > diff --git a/BaseTools/Source/Python/GenFds/FfsInfStatement.py b/BaseTools/Source/Python/GenFds/FfsInfStatement.py > > index 6149bc81b4ef..c1073c96e9aa 100644 > > --- a/BaseTools/Source/Python/GenFds/FfsInfStatement.py > > +++ b/BaseTools/Source/Python/GenFds/FfsInfStatement.py > > @@ -22,7 +22,7 @@ import Common.LongFilePathOs as os > > from io import BytesIO > > from struct import * > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > -from . import Ffs > > +from .Ffs import SectionSuffix,FdfFvFileTypeToFileType > > import subprocess > > import sys > > from . 
import Section > > @@ -760,7 +760,7 @@ class FfsInfStatement(FfsInfStatementClassObject): > > > > SecNum = '%d' %Index > > GenSecOutputFile= self.__ExtendMacro__(Rule.NameGuid) + \ > > - Ffs.Ffs.SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum > > + SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum > > Index = Index + 1 > > OutputFile = os.path.join(self.OutputPath, GenSecOutputFile) > > File = GenFdsGlobalVariable.MacroExtend(File, Dict, self.CurrentArch) > > @@ -803,7 +803,7 @@ class FfsInfStatement(FfsInfStatementClassObject): > > else: > > SecNum = '%d' %Index > > GenSecOutputFile= self.__ExtendMacro__(Rule.NameGuid) + \ > > - Ffs.Ffs.SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum > > + SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum > > OutputFile = os.path.join(self.OutputPath, GenSecOutputFile) > > GenSecInputFile = GenFdsGlobalVariable.MacroExtend(GenSecInputFile, Dict, self.CurrentArch) > > > > @@ -882,7 +882,7 @@ class FfsInfStatement(FfsInfStatementClassObject): > > self.ModuleGuid = RegistryGuidStr > > > > GenFdsGlobalVariable.GenerateFfs(FfsOutput, InputSection, > > - Ffs.Ffs.FdfFvFileTypeToFileType[Rule.FvFileType], > > + FdfFvFileTypeToFileType[Rule.FvFileType], > > self.ModuleGuid, Fixed=Rule.Fixed, > > CheckSum=Rule.CheckSum, Align=Rule.Alignment, > > SectionAlign=SectionAlignments, > > @@ -1055,7 +1055,7 @@ class FfsInfStatement(FfsInfStatementClassObject): > > > > FfsOutput = os.path.join( self.OutputPath, self.ModuleGuid + '.ffs') > > GenFdsGlobalVariable.GenerateFfs(FfsOutput, InputFile, > > - Ffs.Ffs.FdfFvFileTypeToFileType[Rule.FvFileType], > > + FdfFvFileTypeToFileType[Rule.FvFileType], > > self.ModuleGuid, Fixed=Rule.Fixed, > > CheckSum=Rule.CheckSum, Align=Rule.Alignment, > > SectionAlign=Alignments, > > diff --git a/BaseTools/Source/Python/GenFds/Fv.py b/BaseTools/Source/Python/GenFds/Fv.py > > index c66fc383278d..30525bd11fcc 100644 > > --- a/BaseTools/Source/Python/GenFds/Fv.py > > +++ b/BaseTools/Source/Python/GenFds/Fv.py > > @@ -1,4 +1,3 @@ > > -from __future__ import absolute_import > > ## @file > > # process FV generation > > # > > @@ -16,16 +15,13 @@ from __future__ import absolute_import > > ## > > # Import Modules > > # > > +from __future__ import absolute_import > > import Common.LongFilePathOs as os > > import subprocess > > from io import BytesIO > > from struct import * > > - > > -from . import Ffs > > -from . import AprioriSection > > from . 
import FfsFileStatement > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > -from CommonDataClass.FdfClass import FvClassObject > > from Common.Misc import SaveFileOnChange, PackGUID > > from Common.LongFilePathSupport import CopyLongFilePath > > from Common.LongFilePathSupport import OpenLongFilePath as open > > @@ -36,13 +32,25 @@ FV_UI_EXT_ENTY_GUID = 'A67DF1FA-8DE8-4E98-AF09-4BDF2EFFBC7C' > > ## generate FV > > # > > # > > -class FV (FvClassObject): > > +class FV (object): > > ## The constructor > > # > > # @param self The object pointer > > # > > def __init__(self): > > - FvClassObject.__init__(self) > > + self.UiFvName = None > > + self.CreateFileName = None > > + self.BlockSizeList = [] > > + self.DefineVarDict = {} > > + self.SetVarDict = {} > > + self.FvAlignment = None > > + self.FvAttributeDict = {} > > + self.FvNameGuid = None > > + self.FvNameString = None > > + self.AprioriSectionList = [] > > + self.FfsList = [] > > + self.BsBaseAddress = None > > + self.RtBaseAddress = None > > self.FvInfFile = None > > self.FvAddressFile = None > > self.BaseAddress = None > > @@ -68,7 +76,7 @@ class FV (FvClassObject): > > # @param MacroDict macro value pair > > # @retval string Generated FV file path > > # > > - def AddToBuffer (self, Buffer, BaseAddress=None, BlockSize= None, BlockNum=None, ErasePloarity='1', VtfDict=None, MacroDict > = {}, Flag=False) : > > + def AddToBuffer (self, Buffer, BaseAddress=None, BlockSize= None, BlockNum=None, ErasePloarity='1', VtfDict=None, > MacroDict = {}, Flag=False): > > > > if BaseAddress is None and self.UiFvName.upper() + 'fv' in GenFdsGlobalVariable.ImageBinDict: > > return GenFdsGlobalVariable.ImageBinDict[self.UiFvName.upper() + 'fv'] > > @@ -96,7 +104,7 @@ class FV (FvClassObject): > > if self.FvBaseAddress is not None: > > BaseAddress = self.FvBaseAddress > > if not Flag: > > - self.__InitializeInf__(BaseAddress, BlockSize, BlockNum, ErasePloarity, VtfDict) > > + self._InitializeInf(BaseAddress, BlockSize, BlockNum, ErasePloarity, VtfDict) > > # > > # First Process the Apriori section > > # > > @@ -114,7 +122,7 @@ class FV (FvClassObject): > > TAB_LINE_BREAK) > > > > # Process Modules in FfsList > > - for FfsFile in self.FfsList : > > + for FfsFile in self.FfsList: > > if Flag: > > if isinstance(FfsFile, FfsFileStatement.FileStatement): > > continue > > @@ -177,7 +185,7 @@ class FV (FvClassObject): > > > > if FvChildAddr != []: > > # Update Ffs again > > - for FfsFile in self.FfsList : > > + for FfsFile in self.FfsList: > > FileName = FfsFile.GenFfs(MacroDict, FvChildAddr, BaseAddress, IsMakefile=Flag, > FvName=self.UiFvName) > > > > if GenFdsGlobalVariable.LargeFileInFvFlags[-1]: > > @@ -252,7 +260,7 @@ class FV (FvClassObject): > > return True > > return False > > > > - ## __InitializeInf__() > > + ## _InitializeInf() > > # > > # Initilize the inf file to create FV > > # > > @@ -263,7 +271,7 @@ class FV (FvClassObject): > > # @param ErasePolarity Flash erase polarity > > # @param VtfDict VTF objects > > # > > - def __InitializeInf__ (self, BaseAddress = None, BlockSize= None, BlockNum = None, ErasePloarity='1', VtfDict=None) : > > + def _InitializeInf (self, BaseAddress = None, BlockSize= None, BlockNum = None, ErasePloarity='1', VtfDict=None): > > # > > # Create FV inf file > > # > > @@ -275,7 +283,7 @@ class FV (FvClassObject): > > # Add [Options] > > # > > self.FvInfFile.writelines("[options]" + TAB_LINE_BREAK) > > - if BaseAddress is not None : > > + if BaseAddress is not None: > > self.FvInfFile.writelines("EFI_BASE_ADDRESS = 
" + \ > > BaseAddress + \ > > TAB_LINE_BREAK) > > @@ -294,7 +302,7 @@ class FV (FvClassObject): > > #set default block size is 1 > > self.FvInfFile.writelines("EFI_BLOCK_SIZE = 0x1" + TAB_LINE_BREAK) > > > > - for BlockSize in self.BlockSizeList : > > + for BlockSize in self.BlockSizeList: > > if BlockSize[0] is not None: > > self.FvInfFile.writelines("EFI_BLOCK_SIZE = " + \ > > '0x%X' %BlockSize[0] + \ > > @@ -320,9 +328,9 @@ class FV (FvClassObject): > > ' %s' %ErasePloarity + \ > > TAB_LINE_BREAK) > > if not (self.FvAttributeDict is None): > > - for FvAttribute in self.FvAttributeDict.keys() : > > + for FvAttribute in self.FvAttributeDict.keys(): > > if FvAttribute == "FvUsedSizeEnable": > > - if self.FvAttributeDict[FvAttribute].upper() in ('TRUE', '1') : > > + if self.FvAttributeDict[FvAttribute].upper() in ('TRUE', '1'): > > self.UsedSizeEnable = True > > continue > > self.FvInfFile.writelines("EFI_" + \ > > @@ -365,8 +373,8 @@ class FV (FvClassObject): > > Guid = FV_UI_EXT_ENTY_GUID.split('-') > > # > > # Layout: > > - # EFI_FIRMWARE_VOLUME_EXT_ENTRY : size 4 > > - # GUID : size 16 > > + # EFI_FIRMWARE_VOLUME_EXT_ENTRY: size 4 > > + # GUID: size 16 > > # FV UI name > > # > > Buffer += (pack('HH', (FvUiLen + 16 + 4), 0x0002) > > @@ -422,7 +430,6 @@ class FV (FvClassObject): > > FvExtHeaderFileName + \ > > TAB_LINE_BREAK) > > > > - > > # > > # Add [Files] > > # > > diff --git a/BaseTools/Source/Python/GenFds/FvImageSection.py b/BaseTools/Source/Python/GenFds/FvImageSection.py > > index 04556fc87099..d6e1f3315b7e 100644 > > --- a/BaseTools/Source/Python/GenFds/FvImageSection.py > > +++ b/BaseTools/Source/Python/GenFds/FvImageSection.py > > @@ -18,7 +18,7 @@ > > from __future__ import absolute_import > > from . import Section > > from io import BytesIO > > -from .Ffs import Ffs > > +from .Ffs import SectionSuffix > > import subprocess > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > import Common.LongFilePathOs as os > > @@ -76,7 +76,7 @@ class FvImageSection(FvImageSectionClassObject): > > if FvAlignmentValue > MaxFvAlignment: > > MaxFvAlignment = FvAlignmentValue > > > > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > Ffs.SectionSuffix.get("FV_IMAGE")) > > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > SectionSuffix.get("FV_IMAGE")) > > GenFdsGlobalVariable.GenerateSection(OutputFile, [FvFileName], 'EFI_SECTION_FIRMWARE_VOLUME_IMAGE', > IsMakefile=IsMakefile) > > OutputFileList.append(OutputFile) > > > > @@ -146,7 +146,7 @@ class FvImageSection(FvImageSectionClassObject): > > # > > # Prepare the parameter of GenSection > > # > > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + > Ffs.SectionSuffix.get("FV_IMAGE")) > > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get("FV_IMAGE")) > > GenFdsGlobalVariable.GenerateSection(OutputFile, [FvFileName], 'EFI_SECTION_FIRMWARE_VOLUME_IMAGE', > IsMakefile=IsMakefile) > > OutputFileList.append(OutputFile) > > > > diff --git a/BaseTools/Source/Python/GenFds/GenFds.py b/BaseTools/Source/Python/GenFds/GenFds.py > > index 9dec9c5eb57d..0c8091b79841 100644 > > --- a/BaseTools/Source/Python/GenFds/GenFds.py > > +++ b/BaseTools/Source/Python/GenFds/GenFds.py > > @@ -17,34 +17,30 @@ > > # > > from __future__ import print_function > > from __future__ import absolute_import > > +from re import compile > > from optparse import OptionParser > > -import sys > > -import Common.LongFilePathOs as os > > -import 
linecache > > -from . import FdfParser > > -import Common.BuildToolError as BuildToolError > > -from .GenFdsGlobalVariable import GenFdsGlobalVariable > > -from Workspace.WorkspaceDatabase import WorkspaceDatabase > > -from Workspace.BuildClassObject import PcdClassObject > > -from . import RuleComplexFile > > -from .EfiSection import EfiSection > > +from sys import exit > > +from glob import glob > > +from struct import unpack > > +from linecache import getlines > > from io import BytesIO > > -import Common.TargetTxtClassObject as TargetTxtClassObject > > -import Common.ToolDefClassObject as ToolDefClassObject > > + > > +import Common.LongFilePathOs as os > > +from Common.TargetTxtClassObject import TargetTxtClassObject > > from Common.DataType import * > > import Common.GlobalData as GlobalData > > from Common import EdkLogger > > -from Common.StringUtils import * > > -from Common.Misc import DirCache, PathClass > > -from Common.Misc import SaveFileOnChange > > -from Common.Misc import ClearDuplicatedInf > > -from Common.Misc import GuidStructureStringToGuidString > > +from Common.StringUtils import NormPath > > +from Common.Misc import DirCache, PathClass, GuidStructureStringToGuidString > > +from Common.Misc import SaveFileOnChange, ClearDuplicatedInf > > from Common.BuildVersion import gBUILD_VERSION > > from Common.MultipleWorkspace import MultipleWorkspace as mws > > -from . import FfsFileStatement > > -import glob > > -from struct import unpack > > -from Common.GlobalData import gGuidPattern > > +from Common.BuildToolError import FatalError, GENFDS_ERROR, CODE_ERROR, FORMAT_INVALID, RESOURCE_NOT_AVAILABLE, > FILE_NOT_FOUND, OPTION_MISSING, FORMAT_NOT_SUPPORTED,OPTION_VALUE_INVALID > > +from Workspace.WorkspaceDatabase import WorkspaceDatabase > > + > > +from .FdfParser import FdfParser, Warning > > +from .GenFdsGlobalVariable import GenFdsGlobalVariable > > +from .FfsFileStatement import FileStatement > > > > ## Version and Copyright > > versionNumber = "1.0" + ' ' + gBUILD_VERSION > > @@ -71,22 +67,22 @@ def main(): > > > > EdkLogger.Initialize() > > try: > > - if Options.verbose is not None: > > + if Options.verbose: > > EdkLogger.SetLevel(EdkLogger.VERBOSE) > > GenFdsGlobalVariable.VerboseMode = True > > > > - if Options.FixedAddress is not None: > > + if Options.FixedAddress: > > GenFdsGlobalVariable.FixedLoadAddress = True > > > > - if Options.quiet is not None: > > + if Options.quiet: > > EdkLogger.SetLevel(EdkLogger.QUIET) > > - if Options.debug is not None: > > + if Options.debug: > > EdkLogger.SetLevel(Options.debug + 1) > > GenFdsGlobalVariable.DebugLevel = Options.debug > > else: > > EdkLogger.SetLevel(EdkLogger.INFO) > > > > - if (Options.Workspace is None): > > + if not Options.Workspace: > > EdkLogger.error("GenFds", OPTION_MISSING, "WORKSPACE not defined", > > ExtraData="Please use '-w' switch to pass it or set the WORKSPACE environment variable.") > > elif not os.path.exists(Options.Workspace): > > @@ -97,7 +93,7 @@ def main(): > > GenFdsGlobalVariable.WorkSpaceDir = Workspace > > if 'EDK_SOURCE' in os.environ: > > GenFdsGlobalVariable.EdkSourceDir = os.path.normcase(os.environ['EDK_SOURCE']) > > - if (Options.debug): > > + if Options.debug: > > GenFdsGlobalVariable.VerboseLogger("Using Workspace:" + Workspace) > > if Options.GenfdsMultiThread: > > GenFdsGlobalVariable.EnableGenfdsMultiThread = True > > @@ -107,7 +103,7 @@ def main(): > > PackagesPath = os.getenv("PACKAGES_PATH") > > mws.setWs(GenFdsGlobalVariable.WorkSpaceDir, PackagesPath) > > > > - if 
(Options.filename): > > + if Options.filename: > > FdfFilename = Options.filename > > FdfFilename = GenFdsGlobalVariable.ReplaceWorkspaceMacro(FdfFilename) > > > > @@ -123,13 +119,13 @@ def main(): > > else: > > EdkLogger.error("GenFds", OPTION_MISSING, "Missing FDF filename") > > > > - if (Options.BuildTarget): > > + if Options.BuildTarget: > > GenFdsGlobalVariable.TargetName = Options.BuildTarget > > > > - if (Options.ToolChain): > > + if Options.ToolChain: > > GenFdsGlobalVariable.ToolChainTag = Options.ToolChain > > > > - if (Options.activePlatform): > > + if Options.activePlatform: > > ActivePlatform = Options.activePlatform > > ActivePlatform = GenFdsGlobalVariable.ReplaceWorkspaceMacro(ActivePlatform) > > > > @@ -139,15 +135,15 @@ def main(): > > if not os.path.isabs (ActivePlatform): > > ActivePlatform = mws.join(GenFdsGlobalVariable.WorkSpaceDir, ActivePlatform) > > > > - if not os.path.exists(ActivePlatform) : > > + if not os.path.exists(ActivePlatform): > > EdkLogger.error("GenFds", FILE_NOT_FOUND, "ActivePlatform doesn't exist!") > > else: > > EdkLogger.error("GenFds", OPTION_MISSING, "Missing active platform") > > > > - GlobalData.BuildOptionPcd = Options.OptionPcd if Options.OptionPcd else {} > > + GlobalData.BuildOptionPcd = Options.OptionPcd if Options.OptionPcd else {} > > GenFdsGlobalVariable.ActivePlatform = PathClass(NormPath(ActivePlatform)) > > > > - if (Options.ConfDirectory): > > + if Options.ConfDirectory: > > # Get alternate Conf location, if it is absolute, then just use the absolute directory name > > ConfDirectoryPath = os.path.normpath(Options.ConfDirectory) > > if ConfDirectoryPath.startswith('"'): > > @@ -169,7 +165,7 @@ def main(): > > GlobalData.gConfDirectory = GenFdsGlobalVariable.ConfDir > > BuildConfigurationFile = os.path.normpath(os.path.join(ConfDirectoryPath, "target.txt")) > > if os.path.isfile(BuildConfigurationFile) == True: > > - TargetTxt = TargetTxtClassObject.TargetTxtClassObject() > > + TargetTxt = TargetTxtClassObject() > > TargetTxt.LoadTargetTxtFile(BuildConfigurationFile) > > # if no build target given in command line, get it from target.txt > > if not GenFdsGlobalVariable.TargetName: > > @@ -237,10 +233,9 @@ def main(): > > GlobalData.gAllFiles = DirCache(Workspace) > > GlobalData.gWorkspace = Workspace > > > > - if (Options.archList) : > > + if Options.archList: > > ArchList = Options.archList.split(',') > > else: > > -# EdkLogger.error("GenFds", OPTION_MISSING, "Missing build ARCH") > > ArchList = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, TAB_COMMON, Options.BuildTarget, > Options.ToolChain].SupArchList > > > > TargetArchList = set(BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, TAB_COMMON, > Options.BuildTarget, Options.ToolChain].SupArchList) & set(ArchList) > > @@ -249,9 +244,11 @@ def main(): > > > > for Arch in ArchList: > > GenFdsGlobalVariable.OutputDirFromDscDict[Arch] = > NormPath(BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, Options.BuildTarget, > Options.ToolChain].OutputDirectory) > > - GenFdsGlobalVariable.PlatformName = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > Options.BuildTarget, Options.ToolChain].PlatformName > > > > - if (Options.outputDir): > > + # assign platform name based on last entry in ArchList > > + GenFdsGlobalVariable.PlatformName = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, ArchList[-1], > Options.BuildTarget, Options.ToolChain].PlatformName > > + > > + if Options.outputDir: > > 
OutputDirFromCommandLine = GenFdsGlobalVariable.ReplaceWorkspaceMacro(Options.outputDir) > > if not os.path.isabs (OutputDirFromCommandLine): > > OutputDirFromCommandLine = os.path.join(GenFdsGlobalVariable.WorkSpaceDir, OutputDirFromCommandLine) > > @@ -274,27 +271,27 @@ def main(): > > GenFdsGlobalVariable.OutputDirDict[Key] = OutputDir > > > > """ Parse Fdf file, has to place after build Workspace as FDF may contain macros from DSC file """ > > - FdfParserObj = FdfParser.FdfParser(FdfFilename) > > + FdfParserObj = FdfParser(FdfFilename) > > FdfParserObj.ParseFile() > > > > if FdfParserObj.CycleReferenceCheck(): > > EdkLogger.error("GenFds", FORMAT_NOT_SUPPORTED, "Cycle Reference Detected in FDF file") > > > > - if (Options.uiFdName) : > > + if Options.uiFdName: > > if Options.uiFdName.upper() in FdfParserObj.Profile.FdDict: > > GenFds.OnlyGenerateThisFd = Options.uiFdName > > else: > > EdkLogger.error("GenFds", OPTION_VALUE_INVALID, > > "No such an FD in FDF file: %s" % Options.uiFdName) > > > > - if (Options.uiFvName) : > > + if Options.uiFvName: > > if Options.uiFvName.upper() in FdfParserObj.Profile.FvDict: > > GenFds.OnlyGenerateThisFv = Options.uiFvName > > else: > > EdkLogger.error("GenFds", OPTION_VALUE_INVALID, > > "No such an FV in FDF file: %s" % Options.uiFvName) > > > > - if (Options.uiCapName) : > > + if Options.uiCapName: > > if Options.uiCapName.upper() in FdfParserObj.Profile.CapsuleDict: > > GenFds.OnlyGenerateThisCap = Options.uiCapName > > else: > > @@ -302,7 +299,7 @@ def main(): > > "No such a Capsule in FDF file: %s" % Options.uiCapName) > > > > GenFdsGlobalVariable.WorkSpace = BuildWorkSpace > > - if ArchList is not None: > > + if ArchList: > > GenFdsGlobalVariable.ArchList = ArchList > > > > # Dsc Build Data will handle Pcd Settings from CommandLine. > > @@ -313,10 +310,8 @@ def main(): > > > > # Record the FV Region info that may specific in the FD > > if FdfParserObj.Profile.FvDict and FdfParserObj.Profile.FdDict: > > - for Fv in FdfParserObj.Profile.FvDict: > > - FvObj = FdfParserObj.Profile.FvDict[Fv] > > - for Fd in FdfParserObj.Profile.FdDict: > > - FdObj = FdfParserObj.Profile.FdDict[Fd] > > + for FvObj in FdfParserObj.Profile.FvDict.values(): > > + for FdObj in FdfParserObj.Profile.FdDict.values(): > > for RegionObj in FdObj.RegionList: > > if RegionObj.RegionType != BINARY_FILE_TYPE_FV: > > continue > > @@ -338,7 +333,7 @@ def main(): > > """Display FV space info.""" > > GenFds.DisplayFvSpaceInfo(FdfParserObj) > > > > - except FdfParser.Warning as X: > > + except Warning as X: > > EdkLogger.error(X.ToolName, FORMAT_INVALID, File=X.FileName, Line=X.LineNumber, ExtraData=X.Message, > RaiseError=False) > > ReturnCode = FORMAT_INVALID > > except FatalError as X: > > @@ -374,7 +369,6 @@ def SingleCheckCallback(option, opt_str, value, parser): > > # Using standard Python module optparse to parse command line option of this tool. > > # > > # @retval Opt A optparse.Values object containing the parsed options > > -# @retval Args Target of build command > > # > > def myOptionParser(): > > usage = "%prog [options] -f input_file -a arch_list -b build_target -p active_platform -t tool_chain_tag -D \"MacroName [= > MacroValue]\"" > > @@ -404,7 +398,7 @@ def myOptionParser(): > > Parser.add_option("--pcd", action="append", dest="OptionPcd", help="Set PCD value by command line. 
Format: > \"PcdName=Value\" ") > > Parser.add_option("--genfds-multi-thread", action="store_true", dest="GenfdsMultiThread", default=False, help="Enable GenFds > multi thread to generate ffs file.") > > > > - (Options, args) = Parser.parse_args() > > + Options, _ = Parser.parse_args() > > return Options > > > > ## The class implementing the EDK2 flash image generation process > > @@ -414,7 +408,7 @@ def myOptionParser(): > > # 2. Call methods of Fd class to generate FD > > # 3. Call methods of Fv class to generate FV that not belong to FD > > # > > -class GenFds : > > +class GenFds(object): > > FdfParsef = None > > OnlyGenerateThisFd = None > > OnlyGenerateThisFv = None > > @@ -423,12 +417,13 @@ class GenFds : > > ## GenFd() > > # > > # @param OutputDir Output directory > > - # @param FdfParser FDF contents parser > > + # @param FdfParserObject FDF contents parser > > # @param Workspace The directory of workspace > > # @param ArchList The Arch list of platform > > # > > - def GenFd (OutputDir, FdfParser, WorkSpace, ArchList): > > - GenFdsGlobalVariable.SetDir ('', FdfParser, WorkSpace, ArchList) > > + @staticmethod > > + def GenFd (OutputDir, FdfParserObject, WorkSpace, ArchList): > > + GenFdsGlobalVariable.SetDir ('', FdfParserObject, WorkSpace, ArchList) > > > > GenFdsGlobalVariable.VerboseLogger(" Generate all Fd images and their required FV and Capsule images!") > > if GenFds.OnlyGenerateThisCap is not None and GenFds.OnlyGenerateThisCap.upper() in > GenFdsGlobalVariable.FdfParser.Profile.CapsuleDict: > > @@ -470,9 +465,10 @@ class GenFds : > > GenFdsGlobalVariable.VerboseLogger("\n Generate all Option ROM!") > > for OptRomObj in GenFdsGlobalVariable.FdfParser.Profile.OptRomDict.values(): > > OptRomObj.AddToBuffer(None) > > + > > @staticmethod > > - def GenFfsMakefile(OutputDir, FdfParser, WorkSpace, ArchList, GlobalData): > > - GenFdsGlobalVariable.SetEnv(FdfParser, WorkSpace, ArchList, GlobalData) > > + def GenFfsMakefile(OutputDir, FdfParserObject, WorkSpace, ArchList, GlobalData): > > + GenFdsGlobalVariable.SetEnv(FdfParserObject, WorkSpace, ArchList, GlobalData) > > for FdObj in GenFdsGlobalVariable.FdfParser.Profile.FdDict.values(): > > FdObj.GenFd(Flag=True) > > > > @@ -490,6 +486,7 @@ class GenFds : > > # @param FvObj Whose block size to get > > # @retval int Block size value > > # > > + @staticmethod > > def GetFvBlockSize(FvObj): > > DefaultBlockSize = 0x1 > > FdObj = None > > @@ -524,16 +521,17 @@ class GenFds : > > # @param FvObj Whose block size to get > > # @retval None > > # > > - def DisplayFvSpaceInfo(FdfParser): > > + @staticmethod > > + def DisplayFvSpaceInfo(FdfParserObject): > > > > FvSpaceInfoList = [] > > MaxFvNameLength = 0 > > - for FvName in FdfParser.Profile.FvDict: > > + for FvName in FdfParserObject.Profile.FvDict: > > if len(FvName) > MaxFvNameLength: > > MaxFvNameLength = len(FvName) > > FvSpaceInfoFileName = os.path.join(GenFdsGlobalVariable.FvDir, FvName.upper() + '.Fv.map') > > if os.path.exists(FvSpaceInfoFileName): > > - FileLinesList = linecache.getlines(FvSpaceInfoFileName) > > + FileLinesList = getlines(FvSpaceInfoFileName) > > TotalFound = False > > Total = '' > > UsedFound = False > > @@ -575,6 +573,7 @@ class GenFds : > > # @param DscFile modules from dsc file will be preprocessed > > # @retval None > > # > > + @staticmethod > > def PreprocessImage(BuildDb, DscFile): > > PcdDict = BuildDb.BuildObject[DscFile, TAB_COMMON, GenFdsGlobalVariable.TargetName, > GenFdsGlobalVariable.ToolChainTag].Pcds > > PcdValue = '' > > @@ -600,6 +599,7 @@ class 
GenFds : > > ModuleObj = BuildDb.BuildObject[Key, TAB_COMMON, GenFdsGlobalVariable.TargetName, > GenFdsGlobalVariable.ToolChainTag] > > print(ModuleObj.BaseName + ' ' + ModuleObj.ModuleType) > > > > + @staticmethod > > def GenerateGuidXRefFile(BuildDb, ArchList, FdfParserObj): > > GuidXRefFileName = os.path.join(GenFdsGlobalVariable.FvDir, "Guid.xref") > > GuidXRefFile = BytesIO('') > > @@ -607,7 +607,6 @@ class GenFds : > > GuidDict = {} > > ModuleList = [] > > FileGuidList = [] > > - GuidPattern = gGuidPattern > > for Arch in ArchList: > > PlatformDataBase = BuildDb.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] > > PkgList = GenFdsGlobalVariable.WorkSpace.GetPackageList(GenFdsGlobalVariable.ActivePlatform, Arch, > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag) > > @@ -626,20 +625,16 @@ class GenFds : > > continue > > else: > > ModuleList.append(Module) > > - GuidMatch = GuidPattern.match(ModuleFile.BaseName) > > - if GuidMatch is not None: > > + if GlobalData.gGuidPattern.match(ModuleFile.BaseName): > > GuidXRefFile.write("%s %s\n" % (ModuleFile.BaseName, Module.BaseName)) > > else: > > GuidXRefFile.write("%s %s\n" % (Module.Guid, Module.BaseName)) > > - for key, item in Module.Protocols.items(): > > - GuidDict[key] = item > > - for key, item in Module.Guids.items(): > > - GuidDict[key] = item > > - for key, item in Module.Ppis.items(): > > - GuidDict[key] = item > > + GuidDict.update(Module.Protocols) > > + GuidDict.update(Module.Guids) > > + GuidDict.update(Module.Ppis) > > for FvName in FdfParserObj.Profile.FvDict: > > for FfsObj in FdfParserObj.Profile.FvDict[FvName].FfsList: > > - if not isinstance(FfsObj, FfsFileStatement.FileStatement): > > + if not isinstance(FfsObj, FileStatement): > > InfPath = PathClass(NormPath(mws.join(GenFdsGlobalVariable.WorkSpaceDir, FfsObj.InfFileName))) > > FdfModule = BuildDb.BuildObject[InfPath, Arch, GenFdsGlobalVariable.TargetName, > GenFdsGlobalVariable.ToolChainTag] > > if FdfModule in ModuleList: > > @@ -647,12 +642,9 @@ class GenFds : > > else: > > ModuleList.append(FdfModule) > > GuidXRefFile.write("%s %s\n" % (FdfModule.Guid, FdfModule.BaseName)) > > - for key, item in FdfModule.Protocols.items(): > > - GuidDict[key] = item > > - for key, item in FdfModule.Guids.items(): > > - GuidDict[key] = item > > - for key, item in FdfModule.Ppis.items(): > > - GuidDict[key] = item > > + GuidDict.update(FdfModule.Protocols) > > + GuidDict.update(FdfModule.Guids) > > + GuidDict.update(FdfModule.Ppis) > > else: > > FileStatementGuid = FfsObj.NameGuid > > if FileStatementGuid in FileGuidList: > > @@ -661,13 +653,13 @@ class GenFds : > > FileGuidList.append(FileStatementGuid) > > Name = [] > > FfsPath = os.path.join(GenFdsGlobalVariable.FvDir, 'Ffs') > > - FfsPath = glob.glob(os.path.join(FfsPath, FileStatementGuid) + '*') > > + FfsPath = glob(os.path.join(FfsPath, FileStatementGuid) + '*') > > if not FfsPath: > > continue > > if not os.path.exists(FfsPath[0]): > > continue > > MatchDict = {} > > - ReFileEnds = > re.compile('\S+(.ui)$|\S+(fv.sec.txt)$|\S+(.pe32.txt)$|\S+(.te.txt)$|\S+(.pic.txt)$|\S+(.raw.txt)$|\S+(.ffs.txt)$') > > + ReFileEnds = > compile('\S+(.ui)$|\S+(fv.sec.txt)$|\S+(.pe32.txt)$|\S+(.te.txt)$|\S+(.pic.txt)$|\S+(.raw.txt)$|\S+(.ffs.txt)$') > > FileList = os.listdir(FfsPath[0]) > > for File in FileList: > > Match = ReFileEnds.search(File) > > @@ -724,16 +716,10 @@ class GenFds : > > os.remove(GuidXRefFileName) > > GuidXRefFile.close() > > > > 
- ##Define GenFd as static function > > - GenFd = staticmethod(GenFd) > > - GetFvBlockSize = staticmethod(GetFvBlockSize) > > - DisplayFvSpaceInfo = staticmethod(DisplayFvSpaceInfo) > > - PreprocessImage = staticmethod(PreprocessImage) > > - GenerateGuidXRefFile = staticmethod(GenerateGuidXRefFile) > > - > > if __name__ == '__main__': > > r = main() > > ## 0-127 is a safe return range, and 1 is a standard default error > > - if r < 0 or r > 127: r = 1 > > - sys.exit(r) > > + if r < 0 or r > 127: > > + r = 1 > > + exit(r) > > > > diff --git a/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py b/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py > > index 14578a92a9b1..e867693d7c88 100644 > > --- a/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py > > +++ b/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py > > @@ -16,13 +16,15 @@ > > # Import Modules > > # > > from __future__ import print_function > > +from __future__ import absolute_import > > + > > import Common.LongFilePathOs as os > > -import sys > > -import subprocess > > -import struct > > -import array > > +from sys import stdout > > +from subprocess import PIPE,Popen > > +from struct import Struct > > +from array import array > > > > -from Common.BuildToolError import * > > +from Common.BuildToolError import COMMAND_FAILURE,GENFDS_ERROR > > from Common import EdkLogger > > from Common.Misc import SaveFileOnChange > > > > @@ -89,7 +91,7 @@ class GenFdsGlobalVariable: > > EFI_FIRMWARE_FILE_SYSTEM3_GUID = '5473C07A-3DCB-4dca-BD6F-1E9689E7349A' > > LARGE_FILE_SIZE = 0x1000000 > > > > - SectionHeader = struct.Struct("3B 1B") > > + SectionHeader = Struct("3B 1B") > > > > # FvName, FdName, CapName in FDF, Image file name > > ImageBinDict = {} > > @@ -97,7 +99,7 @@ class GenFdsGlobalVariable: > > ## LoadBuildRule > > # > > @staticmethod > > - def __LoadBuildRule(): > > + def _LoadBuildRule(): > > if GenFdsGlobalVariable.__BuildRuleDatabase: > > return GenFdsGlobalVariable.__BuildRuleDatabase > > BuildConfigurationFile = os.path.normpath(os.path.join(GenFdsGlobalVariable.ConfDir, "target.txt")) > > @@ -139,44 +141,43 @@ class GenFdsGlobalVariable: > > if not Arch in GenFdsGlobalVariable.OutputDirDict: > > return {} > > > > - BuildRuleDatabase = GenFdsGlobalVariable.__LoadBuildRule() > > + BuildRuleDatabase = GenFdsGlobalVariable._LoadBuildRule() > > if not BuildRuleDatabase: > > return {} > > > > PathClassObj = PathClass(Inf.MetaFile.File, > > GenFdsGlobalVariable.WorkSpaceDir) > > - Macro = {} > > - Macro["WORKSPACE" ] = GenFdsGlobalVariable.WorkSpaceDir > > - Macro["MODULE_NAME" ] = Inf.BaseName > > - Macro["MODULE_GUID" ] = Inf.Guid > > - Macro["MODULE_VERSION" ] = Inf.Version > > - Macro["MODULE_TYPE" ] = Inf.ModuleType > > - Macro["MODULE_FILE" ] = str(PathClassObj) > > - Macro["MODULE_FILE_BASE_NAME" ] = PathClassObj.BaseName > > - Macro["MODULE_RELATIVE_DIR" ] = PathClassObj.SubDir > > - Macro["MODULE_DIR" ] = PathClassObj.SubDir > > - > > - Macro["BASE_NAME" ] = Inf.BaseName > > - > > - Macro["ARCH" ] = Arch > > - Macro["TOOLCHAIN" ] = GenFdsGlobalVariable.ToolChainTag > > - Macro["TOOLCHAIN_TAG" ] = GenFdsGlobalVariable.ToolChainTag > > - Macro["TOOL_CHAIN_TAG" ] = GenFdsGlobalVariable.ToolChainTag > > - Macro["TARGET" ] = GenFdsGlobalVariable.TargetName > > - > > - Macro["BUILD_DIR" ] = GenFdsGlobalVariable.OutputDirDict[Arch] > > - Macro["BIN_DIR" ] = os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) > > - Macro["LIB_DIR" ] = os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) > > BuildDir = 
os.path.join( > > GenFdsGlobalVariable.OutputDirDict[Arch], > > Arch, > > PathClassObj.SubDir, > > PathClassObj.BaseName > > ) > > - Macro["MODULE_BUILD_DIR" ] = BuildDir > > - Macro["OUTPUT_DIR" ] = os.path.join(BuildDir, "OUTPUT") > > - Macro["DEBUG_DIR" ] = os.path.join(BuildDir, "DEBUG") > > - > > + BinDir = os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) > > + Macro = { > > + "WORKSPACE":GenFdsGlobalVariable.WorkSpaceDir, > > + "MODULE_NAME":Inf.BaseName, > > + "MODULE_GUID":Inf.Guid, > > + "MODULE_VERSION":Inf.Version, > > + "MODULE_TYPE":Inf.ModuleType, > > + "MODULE_FILE":str(PathClassObj), > > + "MODULE_FILE_BASE_NAME":PathClassObj.BaseName, > > + "MODULE_RELATIVE_DIR":PathClassObj.SubDir, > > + "MODULE_DIR":PathClassObj.SubDir, > > + "BASE_NAME":Inf.BaseName, > > + "ARCH":Arch, > > + "TOOLCHAIN":GenFdsGlobalVariable.ToolChainTag, > > + "TOOLCHAIN_TAG":GenFdsGlobalVariable.ToolChainTag, > > + "TOOL_CHAIN_TAG":GenFdsGlobalVariable.ToolChainTag, > > + "TARGET":GenFdsGlobalVariable.TargetName, > > + "BUILD_DIR":GenFdsGlobalVariable.OutputDirDict[Arch], > > + "BIN_DIR":BinDir, > > + "LIB_DIR":BinDir, > > + "MODULE_BUILD_DIR":BuildDir, > > + "OUTPUT_DIR":os.path.join(BuildDir, "OUTPUT"), > > + "DEBUG_DIR":os.path.join(BuildDir, "DEBUG") > > + } > > + > > BuildRules = {} > > for Type in BuildRuleDatabase.FileTypeList: > > #first try getting build rule by BuildRuleFamily > > @@ -216,12 +217,12 @@ class GenFdsGlobalVariable: > > > > if not Inf.IsBinaryModule: > > for File in Inf.Sources: > > - if File.TagName in ("", "*", GenFdsGlobalVariable.ToolChainTag) and \ > > - File.ToolChainFamily in ("", "*", GenFdsGlobalVariable.ToolChainFamily): > > + if File.TagName in {"", "*", GenFdsGlobalVariable.ToolChainTag} and \ > > + File.ToolChainFamily in {"", "*", GenFdsGlobalVariable.ToolChainFamily}: > > FileList.append((File, DataType.TAB_UNKNOWN_FILE)) > > > > for File in Inf.Binaries: > > - if File.Target in [DataType.TAB_COMMON, '*', GenFdsGlobalVariable.TargetName]: > > + if File.Target in {DataType.TAB_COMMON, '*', GenFdsGlobalVariable.TargetName}: > > FileList.append((File, File.Type)) > > > > for File, FileType in FileList: > > @@ -233,7 +234,7 @@ class GenFdsGlobalVariable: > > Source = SourceList[Index] > > Index = Index + 1 > > > > - if File.IsBinary and File == Source and Inf.Binaries is not None and File in Inf.Binaries: > > + if File.IsBinary and File == Source and Inf.Binaries and File in Inf.Binaries: > > # Skip all files that are not binary libraries > > if not Inf.LibraryClass: > > continue > > @@ -287,19 +288,18 @@ class GenFdsGlobalVariable: > > # @param Workspace The directory of workspace > > # @param ArchList The Arch list of platform > > # > > + @staticmethod > > def SetDir (OutputDir, FdfParser, WorkSpace, ArchList): > > - GenFdsGlobalVariable.VerboseLogger("GenFdsGlobalVariable.OutputDir :%s" % OutputDir) > > -# GenFdsGlobalVariable.OutputDirDict = OutputDir > > + GenFdsGlobalVariable.VerboseLogger("GenFdsGlobalVariable.OutputDir:%s" % OutputDir) > > GenFdsGlobalVariable.FdfParser = FdfParser > > GenFdsGlobalVariable.WorkSpace = WorkSpace > > GenFdsGlobalVariable.FvDir = os.path.join(GenFdsGlobalVariable.OutputDirDict[ArchList[0]], > DataType.TAB_FV_DIRECTORY) > > - if not os.path.exists(GenFdsGlobalVariable.FvDir) : > > + if not os.path.exists(GenFdsGlobalVariable.FvDir): > > os.makedirs(GenFdsGlobalVariable.FvDir) > > GenFdsGlobalVariable.FfsDir = os.path.join(GenFdsGlobalVariable.FvDir, 'Ffs') > > - if not os.path.exists(GenFdsGlobalVariable.FfsDir) : > > 
+ if not os.path.exists(GenFdsGlobalVariable.FfsDir): > > os.makedirs(GenFdsGlobalVariable.FfsDir) > > > > - T_CHAR_LF = '\n' > > # > > # Create FV Address inf file > > # > > @@ -308,7 +308,7 @@ class GenFdsGlobalVariable: > > # > > # Add [Options] > > # > > - FvAddressFile.writelines("[options]" + T_CHAR_LF) > > + FvAddressFile.writelines("[options]" + DataType.TAB_LINE_BREAK) > > BsAddress = '0' > > for Arch in ArchList: > > if GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].BsBaseAddress: > > @@ -317,19 +317,22 @@ class GenFdsGlobalVariable: > > > > FvAddressFile.writelines("EFI_BOOT_DRIVER_BASE_ADDRESS = " + \ > > BsAddress + \ > > - T_CHAR_LF) > > + DataType.TAB_LINE_BREAK) > > > > RtAddress = '0' > > - for Arch in ArchList: > > - if GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].RtBaseAddress: > > - RtAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].RtBaseAddress > > + for Arch in reversed(ArchList): > > + temp = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].RtBaseAddress > > + if temp: > > + RtAddress = temp > > + break > > > > FvAddressFile.writelines("EFI_RUNTIME_DRIVER_BASE_ADDRESS = " + \ > > RtAddress + \ > > - T_CHAR_LF) > > + DataType.TAB_LINE_BREAK) > > > > FvAddressFile.close() > > > > + @staticmethod > > def SetEnv(FdfParser, WorkSpace, ArchList, GlobalData): > > GenFdsGlobalVariable.ModuleFile = WorkSpace.ModuleFile > > GenFdsGlobalVariable.FdfParser = FdfParser > > @@ -360,7 +363,6 @@ class GenFdsGlobalVariable: > > if not os.path.exists(GenFdsGlobalVariable.FfsDir): > > os.makedirs(GenFdsGlobalVariable.FfsDir) > > > > - T_CHAR_LF = '\n' > > # > > # Create FV Address inf file > > # > > @@ -369,7 +371,7 @@ class GenFdsGlobalVariable: > > # > > # Add [Options] > > # > > - FvAddressFile.writelines("[options]" + T_CHAR_LF) > > + FvAddressFile.writelines("[options]" + DataType.TAB_LINE_BREAK) > > BsAddress = '0' > > for Arch in ArchList: > > BsAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > > @@ -380,20 +382,20 @@ class GenFdsGlobalVariable: > > > > FvAddressFile.writelines("EFI_BOOT_DRIVER_BASE_ADDRESS = " + \ > > BsAddress + \ > > - T_CHAR_LF) > > + DataType.TAB_LINE_BREAK) > > > > RtAddress = '0' > > - for Arch in ArchList: > > - if GenFdsGlobalVariable.WorkSpace.BuildObject[ > > + for Arch in reversed(ArchList): > > + temp = GenFdsGlobalVariable.WorkSpace.BuildObject[ > > GenFdsGlobalVariable.ActivePlatform, Arch, GlobalData.gGlobalDefines['TARGET'], > > - GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress: > > - RtAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[ > > - GenFdsGlobalVariable.ActivePlatform, Arch, GlobalData.gGlobalDefines['TARGET'], > > - GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress > > + GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress > > + if temp: > > + RtAddress = temp > > + break > > > > FvAddressFile.writelines("EFI_RUNTIME_DRIVER_BASE_ADDRESS = " + \ > > RtAddress + \ > > - T_CHAR_LF) > > + DataType.TAB_LINE_BREAK) > > > > FvAddressFile.close() > > > > @@ -401,6 +403,7 @@ class GenFdsGlobalVariable: > > # > > # @param String 
String that may contain macro > > # > > + @staticmethod > > def ReplaceWorkspaceMacro(String): > > String = mws.handleWsMacro(String) > > Str = String.replace('$(WORKSPACE)', GenFdsGlobalVariable.WorkSpaceDir) > > @@ -424,7 +427,7 @@ class GenFdsGlobalVariable: > > if not os.path.exists(Output): > > return True > > # always update "Output" if no "Input" given > > - if Input is None or len(Input) == 0: > > + if not Input: > > return True > > > > # if fdf file is changed after the 'Output" is generated, update the 'Output' > > @@ -449,9 +452,9 @@ class GenFdsGlobalVariable: > > Cmd += ("-s", Type) > > if CompressionType: > > Cmd += ("-c", CompressionType) > > - if Guid is not None: > > + if Guid: > > Cmd += ("-g", Guid) > > - if DummyFile is not None: > > + if DummyFile: > > Cmd += ("--dummy", DummyFile) > > if GuidHdrLen: > > Cmd += ("-l", GuidHdrLen) > > @@ -473,7 +476,7 @@ class GenFdsGlobalVariable: > > if ' '.join(Cmd).strip() not in GenFdsGlobalVariable.SecCmdList: > > GenFdsGlobalVariable.SecCmdList.append(' '.join(Cmd).strip()) > > else: > > - SectionData = array.array('B', [0, 0, 0, 0]) > > + SectionData = array('B', [0, 0, 0, 0]) > > SectionData.fromstring(Ui.encode("utf_16_le")) > > SectionData.append(0) > > SectionData.append(0) > > @@ -571,9 +574,9 @@ class GenFdsGlobalVariable: > > if BaseAddress: > > Cmd += ("-r", BaseAddress) > > > > - if ForceRebase == False: > > + if not ForceRebase: > > Cmd += ("-F", "FALSE") > > - elif ForceRebase == True: > > + else: > > Cmd += ("-F", "TRUE") > > > > if Capsule: > > @@ -634,7 +637,7 @@ class GenFdsGlobalVariable: > > Revision=None, DeviceId=None, VendorId=None, IsMakefile=False): > > InputList = [] > > Cmd = ["EfiRom"] > > - if len(EfiInput) > 0: > > + if EfiInput: > > > > if Compress: > > Cmd.append("-ec") > > @@ -645,7 +648,7 @@ class GenFdsGlobalVariable: > > Cmd.append(EfiFile) > > InputList.append (EfiFile) > > > > - if len(BinaryInput) > 0: > > + if BinaryInput: > > Cmd.append("-b") > > for BinFile in BinaryInput: > > Cmd.append(BinFile) > > @@ -656,13 +659,13 @@ class GenFdsGlobalVariable: > > return > > GenFdsGlobalVariable.DebugLogger(EdkLogger.DEBUG_5, "%s needs update because of newer %s" % (Output, InputList)) > > > > - if ClassCode is not None: > > + if ClassCode: > > Cmd += ("-l", ClassCode) > > - if Revision is not None: > > + if Revision: > > Cmd += ("-r", Revision) > > - if DeviceId is not None: > > + if DeviceId: > > Cmd += ("-i", DeviceId) > > - if VendorId is not None: > > + if VendorId: > > Cmd += ("-f", VendorId) > > > > Cmd += ("-o", Output) > > @@ -688,6 +691,7 @@ class GenFdsGlobalVariable: > > else: > > GenFdsGlobalVariable.CallExternalTool(Cmd, "Failed to call " + ToolPath, returnValue) > > > > + @staticmethod > > def CallExternalTool (cmd, errorMess, returnValue=[]): > > > > if type(cmd) not in (tuple, list): > > @@ -701,19 +705,19 @@ class GenFdsGlobalVariable: > > cmd += ('-v',) > > GenFdsGlobalVariable.InfLogger (cmd) > > else: > > - sys.stdout.write ('#') > > - sys.stdout.flush() > > + stdout.write ('#') > > + stdout.flush() > > GenFdsGlobalVariable.SharpCounter = GenFdsGlobalVariable.SharpCounter + 1 > > if GenFdsGlobalVariable.SharpCounter % GenFdsGlobalVariable.SharpNumberPerLine == 0: > > - sys.stdout.write('\n') > > + stdout.write('\n') > > > > try: > > - PopenObject = subprocess.Popen(' '.join(cmd), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) > > + PopenObject = Popen(' '.join(cmd), stdout=PIPE, stderr=PIPE, shell=True) > > except Exception as X: > > EdkLogger.error("GenFds", 
COMMAND_FAILURE, ExtraData="%s: %s" % (str(X), cmd[0])) > > (out, error) = PopenObject.communicate() > > > > - while PopenObject.returncode is None : > > + while PopenObject.returncode is None: > > PopenObject.wait() > > if returnValue != [] and returnValue[0] != 0: > > #get command return value > > @@ -727,45 +731,52 @@ class GenFdsGlobalVariable: > > print("###", cmd) > > EdkLogger.error("GenFds", COMMAND_FAILURE, errorMess) > > > > + @staticmethod > > def VerboseLogger (msg): > > EdkLogger.verbose(msg) > > > > + @staticmethod > > def InfLogger (msg): > > EdkLogger.info(msg) > > > > + @staticmethod > > def ErrorLogger (msg, File=None, Line=None, ExtraData=None): > > EdkLogger.error('GenFds', GENFDS_ERROR, msg, File, Line, ExtraData) > > > > + @staticmethod > > def DebugLogger (Level, msg): > > EdkLogger.debug(Level, msg) > > > > - ## ReplaceWorkspaceMacro() > > + ## MacroExtend() > > # > > # @param Str String that may contain macro > > # @param MacroDict Dictionary that contains macro value pair > > # > > + @staticmethod > > def MacroExtend (Str, MacroDict={}, Arch=DataType.TAB_COMMON): > > - if Str is None : > > + if Str is None: > > return None > > > > - Dict = {'$(WORKSPACE)' : GenFdsGlobalVariable.WorkSpaceDir, > > - '$(EDK_SOURCE)' : GenFdsGlobalVariable.EdkSourceDir, > > + Dict = {'$(WORKSPACE)': GenFdsGlobalVariable.WorkSpaceDir, > > + '$(EDK_SOURCE)': GenFdsGlobalVariable.EdkSourceDir, > > # '$(OUTPUT_DIRECTORY)': GenFdsGlobalVariable.OutputDirFromDsc, > > - '$(TARGET)' : GenFdsGlobalVariable.TargetName, > > - '$(TOOL_CHAIN_TAG)' : GenFdsGlobalVariable.ToolChainTag, > > - '$(SPACE)' : ' ' > > + '$(TARGET)': GenFdsGlobalVariable.TargetName, > > + '$(TOOL_CHAIN_TAG)': GenFdsGlobalVariable.ToolChainTag, > > + '$(SPACE)': ' ' > > } > > - OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[GenFdsGlobalVariable.ArchList[0]] > > + > > if Arch != DataType.TAB_COMMON and Arch in GenFdsGlobalVariable.ArchList: > > OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[Arch] > > + else: > > + OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[GenFdsGlobalVariable.ArchList[0]] > > > > Dict['$(OUTPUT_DIRECTORY)'] = OutputDir > > > > - if MacroDict is not None and len (MacroDict) != 0: > > + if MacroDict: > > Dict.update(MacroDict) > > > > for key in Dict: > > - if Str.find(key) >= 0 : > > + if Str.find(key) >= 0: > > Str = Str.replace (key, Dict[key]) > > > > if Str.find('$(ARCH)') >= 0: > > @@ -780,14 +791,14 @@ class GenFdsGlobalVariable: > > # > > # @param PcdPattern pattern that labels a PCD. > > # > > + @staticmethod > > def GetPcdValue (PcdPattern): > > - if PcdPattern is None : > > + if PcdPattern is None: > > return None > > PcdPair = PcdPattern.lstrip('PCD(').rstrip(')').strip().split('.') > > TokenSpace = PcdPair[0] > > TokenCName = PcdPair[1] > > > > - PcdValue = '' > > for Arch in GenFdsGlobalVariable.ArchList: > > Platform = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] > > PcdDict = Platform.Pcds > > @@ -799,8 +810,7 @@ class GenFdsGlobalVariable: > > if PcdObj.DatumType != DataType.TAB_VOID: > > EdkLogger.error("GenFds", GENFDS_ERROR, "%s is not VOID* datum type." 
% PcdPattern) > > > > - PcdValue = PcdObj.DefaultValue > > - return PcdValue > > + return PcdObj.DefaultValue > > > > for Package in GenFdsGlobalVariable.WorkSpace.GetPackageList(GenFdsGlobalVariable.ActivePlatform, > > Arch, > > @@ -815,21 +825,9 @@ class GenFdsGlobalVariable: > > if PcdObj.DatumType != DataType.TAB_VOID: > > EdkLogger.error("GenFds", GENFDS_ERROR, "%s is not VOID* datum type." % PcdPattern) > > > > - PcdValue = PcdObj.DefaultValue > > - return PcdValue > > + return PcdObj.DefaultValue > > > > - return PcdValue > > - > > - SetDir = staticmethod(SetDir) > > - SetEnv = staticmethod(SetEnv) > > - ReplaceWorkspaceMacro = staticmethod(ReplaceWorkspaceMacro) > > - CallExternalTool = staticmethod(CallExternalTool) > > - VerboseLogger = staticmethod(VerboseLogger) > > - InfLogger = staticmethod(InfLogger) > > - ErrorLogger = staticmethod(ErrorLogger) > > - DebugLogger = staticmethod(DebugLogger) > > - MacroExtend = staticmethod (MacroExtend) > > - GetPcdValue = staticmethod(GetPcdValue) > > + return '' > > > > ## FindExtendTool() > > # > > @@ -863,7 +861,7 @@ def FindExtendTool(KeyStringList, CurrentArchList, NameGuid): > > ToolOptionKey = None > > KeyList = None > > for ToolDef in ToolDefinition.items(): > > - if NameGuid.lower() == ToolDef[1].lower() : > > + if NameGuid.lower() == ToolDef[1].lower(): > > KeyList = ToolDef[0].split('_') > > Key = KeyList[0] + \ > > '_' + \ > > diff --git a/BaseTools/Source/Python/GenFds/GuidSection.py b/BaseTools/Source/Python/GenFds/GuidSection.py > > index fd660737f421..25609e6f2ae1 100644 > > --- a/BaseTools/Source/Python/GenFds/GuidSection.py > > +++ b/BaseTools/Source/Python/GenFds/GuidSection.py > > @@ -19,7 +19,7 @@ > > from __future__ import absolute_import > > from . import Section > > import subprocess > > -from .Ffs import Ffs > > +from .Ffs import SectionSuffix > > import Common.LongFilePathOs as os > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > from .GenFdsGlobalVariable import FindExtendTool > > @@ -126,7 +126,7 @@ class GuidSection(GuidSectionClassObject) : > > ModuleName + \ > > SUP_MODULE_SEC + \ > > SecNum + \ > > - Ffs.SectionSuffix['GUIDED'] > > + SectionSuffix['GUIDED'] > > OutputFile = os.path.normpath(OutputFile) > > > > ExternalTool = None > > diff --git a/BaseTools/Source/Python/GenFds/OptionRom.py b/BaseTools/Source/Python/GenFds/OptionRom.py > > index bb7071fa7fca..9d7ad25ecc43 100644 > > --- a/BaseTools/Source/Python/GenFds/OptionRom.py > > +++ b/BaseTools/Source/Python/GenFds/OptionRom.py > > @@ -26,8 +26,6 @@ from Common.Misc import SaveFileOnChange > > from Common import EdkLogger > > from Common.BuildToolError import * > > > > -T_CHAR_LF = '\n' > > - > > ## > > # > > # > > @@ -36,9 +34,9 @@ class OPTIONROM (OptionRomClassObject): > > # > > # @param self The object pointer > > # > > - def __init__(self): > > + def __init__(self, Name = ""): > > OptionRomClassObject.__init__(self) > > - > > + self.DriverName = Name > > > > ## AddToBuffer() > > # > > diff --git a/BaseTools/Source/Python/GenFds/Region.py b/BaseTools/Source/Python/GenFds/Region.py > > index 5242b74c9e70..9aeaa45d4566 100644 > > --- a/BaseTools/Source/Python/GenFds/Region.py > > +++ b/BaseTools/Source/Python/GenFds/Region.py > > @@ -20,7 +20,6 @@ from struct import * > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > from io import BytesIO > > import string > > -from CommonDataClass.FdfClass import RegionClassObject > > import Common.LongFilePathOs as os > > from stat import * > > from Common import EdkLogger > > @@ -32,15 
+31,20 @@ from Common.DataType import BINARY_FILE_TYPE_FV > > ## generate Region > > # > > # > > -class Region(RegionClassObject): > > +class Region(object): > > > > ## The constructor > > # > > # @param self The object pointer > > # > > def __init__(self): > > - RegionClassObject.__init__(self) > > - > > + self.Offset = None # The begin position of the Region > > + self.Size = None # The Size of the Region > > + self.PcdOffset = None > > + self.PcdSize = None > > + self.SetVarDict = {} > > + self.RegionType = None > > + self.RegionDataList = [] > > > > ## PadBuffer() > > # > > diff --git a/BaseTools/Source/Python/GenFds/UiSection.py b/BaseTools/Source/Python/GenFds/UiSection.py > > index 24f2f3ca938d..c1dff188be96 100644 > > --- a/BaseTools/Source/Python/GenFds/UiSection.py > > +++ b/BaseTools/Source/Python/GenFds/UiSection.py > > @@ -17,7 +17,7 @@ > > # > > from __future__ import absolute_import > > from . import Section > > -from .Ffs import Ffs > > +from .Ffs import SectionSuffix > > import subprocess > > import Common.LongFilePathOs as os > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > @@ -59,7 +59,7 @@ class UiSection (UiSectionClassObject): > > self.StringData = FfsInf.__ExtendMacro__(self.StringData) > > self.FileName = FfsInf.__ExtendMacro__(self.FileName) > > > > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + > Ffs.SectionSuffix.get(BINARY_FILE_TYPE_UI)) > > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + > SectionSuffix.get(BINARY_FILE_TYPE_UI)) > > > > if self.StringData is not None : > > NameString = self.StringData > > diff --git a/BaseTools/Source/Python/GenFds/VerSection.py b/BaseTools/Source/Python/GenFds/VerSection.py > > index db71fe9653d4..051743f10c21 100644 > > --- a/BaseTools/Source/Python/GenFds/VerSection.py > > +++ b/BaseTools/Source/Python/GenFds/VerSection.py > > @@ -1,7 +1,7 @@ > > ## @file > > # process Version section generation > > # > > -# Copyright (c) 2007 - 2017, Intel Corporation. All rights reserved.<BR> > > +# Copyright (c) 2007 - 2018, Intel Corporation. All rights reserved.<BR> > > # > > # This program and the accompanying materials > > # are licensed and made available under the terms and conditions of the BSD License > > @@ -16,10 +16,8 @@ > > # Import Modules > > # > > from __future__ import absolute_import > > -from .Ffs import Ffs > > -from . 
import Section > > +from .Ffs import SectionSuffix > > import Common.LongFilePathOs as os > > -import subprocess > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > from CommonDataClass.FdfClass import VerSectionClassObject > > from Common.LongFilePathSupport import OpenLongFilePath as open > > @@ -54,29 +52,27 @@ class VerSection (VerSectionClassObject): > > # > > # Prepare the parameter of GenSection > > # > > - if FfsInf is not None: > > + if FfsInf: > > self.Alignment = FfsInf.__ExtendMacro__(self.Alignment) > > self.BuildNum = FfsInf.__ExtendMacro__(self.BuildNum) > > self.StringData = FfsInf.__ExtendMacro__(self.StringData) > > self.FileName = FfsInf.__ExtendMacro__(self.FileName) > > > > OutputFile = os.path.join(OutputPath, > > - ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get('VERSION')) > > + ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get('VERSION')) > > OutputFile = os.path.normpath(OutputFile) > > > > # Get String Data > > StringData = '' > > - if self.StringData is not None: > > + if self.StringData: > > StringData = self.StringData > > - elif self.FileName is not None: > > + elif self.FileName: > > FileNameStr = GenFdsGlobalVariable.ReplaceWorkspaceMacro(self.FileName) > > FileNameStr = GenFdsGlobalVariable.MacroExtend(FileNameStr, Dict) > > FileObj = open(FileNameStr, 'r') > > StringData = FileObj.read() > > StringData = '"' + StringData + '"' > > FileObj.close() > > - else: > > - StringData = '' > > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', > > Ver=StringData, BuildNumber=self.BuildNum, IsMakefile=IsMakefile) > > OutputFileList = [] > > diff --git a/BaseTools/Source/Python/GenFds/Vtf.py b/BaseTools/Source/Python/GenFds/Vtf.py > > index 5cb2d4acfb1c..9dcd48b2d812 100644 > > --- a/BaseTools/Source/Python/GenFds/Vtf.py > > +++ b/BaseTools/Source/Python/GenFds/Vtf.py > > @@ -18,21 +18,24 @@ > > from __future__ import absolute_import > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > import Common.LongFilePathOs as os > > -from CommonDataClass.FdfClass import VtfClassObject > > from Common.LongFilePathSupport import OpenLongFilePath as open > > -T_CHAR_LF = '\n' > > +from Common.DataType import TAB_LINE_BREAK > > > > ## generate VTF > > # > > # > > -class Vtf (VtfClassObject): > > +class Vtf (object): > > > > ## The constructor > > # > > # @param self The object pointer > > # > > def __init__(self): > > - VtfClassObject.__init__(self) > > + self.KeyArch = None > > + self.ArchList = None > > + self.UiName = None > > + self.ResetBin = None > > + self.ComponentStatementList = [] > > > > ## GenVtf() method > > # > > @@ -44,7 +47,6 @@ class Vtf (VtfClassObject): > > # > > def GenVtf(self, FdAddressDict) : > > self.GenBsfInf() > > - OutputFile = os.path.join(GenFdsGlobalVariable.FvDir, self.UiName + '.Vtf') > > BaseAddArg = self.GetBaseAddressArg(FdAddressDict) > > OutputArg, VtfRawDict = self.GenOutputArg() > > > > @@ -69,57 +71,57 @@ class Vtf (VtfClassObject): > > FvList = self.GetFvList() > > self.BsfInfName = os.path.join(GenFdsGlobalVariable.FvDir, self.UiName + '.inf') > > BsfInf = open(self.BsfInfName, 'w+') > > - if self.ResetBin is not None: > > - BsfInf.writelines ("[OPTIONS]" + T_CHAR_LF) > > + if self.ResetBin: > > + BsfInf.writelines ("[OPTIONS]" + TAB_LINE_BREAK) > > BsfInf.writelines ("IA32_RST_BIN" + \ > > " = " + \ > > > GenFdsGlobalVariable.MacroExtend(GenFdsGlobalVariable.ReplaceWorkspaceMacro(self.ResetBin)) + \ > > - T_CHAR_LF) > > - BsfInf.writelines (T_CHAR_LF) > > + TAB_LINE_BREAK) 
> > + BsfInf.writelines (TAB_LINE_BREAK) > > > > - BsfInf.writelines ("[COMPONENTS]" + T_CHAR_LF) > > + BsfInf.writelines ("[COMPONENTS]" + TAB_LINE_BREAK) > > > > for ComponentObj in self.ComponentStatementList : > > BsfInf.writelines ("COMP_NAME" + \ > > " = " + \ > > ComponentObj.CompName + \ > > - T_CHAR_LF) > > + TAB_LINE_BREAK) > > if ComponentObj.CompLoc.upper() == 'NONE': > > BsfInf.writelines ("COMP_LOC" + \ > > " = " + \ > > 'N' + \ > > - T_CHAR_LF) > > + TAB_LINE_BREAK) > > > > - elif ComponentObj.FilePos is not None: > > + elif ComponentObj.FilePos: > > BsfInf.writelines ("COMP_LOC" + \ > > " = " + \ > > ComponentObj.FilePos + \ > > - T_CHAR_LF) > > + TAB_LINE_BREAK) > > else: > > Index = FvList.index(ComponentObj.CompLoc.upper()) > > if Index == 0: > > BsfInf.writelines ("COMP_LOC" + \ > > " = " + \ > > 'F' + \ > > - T_CHAR_LF) > > + TAB_LINE_BREAK) > > elif Index == 1: > > BsfInf.writelines ("COMP_LOC" + \ > > " = " + \ > > 'S' + \ > > - T_CHAR_LF) > > + TAB_LINE_BREAK) > > > > BsfInf.writelines ("COMP_TYPE" + \ > > " = " + \ > > ComponentObj.CompType + \ > > - T_CHAR_LF) > > + TAB_LINE_BREAK) > > BsfInf.writelines ("COMP_VER" + \ > > " = " + \ > > ComponentObj.CompVer + \ > > - T_CHAR_LF) > > + TAB_LINE_BREAK) > > BsfInf.writelines ("COMP_CS" + \ > > " = " + \ > > ComponentObj.CompCs + \ > > - T_CHAR_LF) > > + TAB_LINE_BREAK) > > > > BinPath = ComponentObj.CompBin > > if BinPath != '-': > > @@ -127,7 +129,7 @@ class Vtf (VtfClassObject): > > BsfInf.writelines ("COMP_BIN" + \ > > " = " + \ > > BinPath + \ > > - T_CHAR_LF) > > + TAB_LINE_BREAK) > > > > SymPath = ComponentObj.CompSym > > if SymPath != '-': > > @@ -135,12 +137,12 @@ class Vtf (VtfClassObject): > > BsfInf.writelines ("COMP_SYM" + \ > > " = " + \ > > SymPath + \ > > - T_CHAR_LF) > > + TAB_LINE_BREAK) > > BsfInf.writelines ("COMP_SIZE" + \ > > " = " + \ > > ComponentObj.CompSize + \ > > - T_CHAR_LF) > > - BsfInf.writelines (T_CHAR_LF) > > + TAB_LINE_BREAK) > > + BsfInf.writelines (TAB_LINE_BREAK) > > > > BsfInf.close() > > > > diff --git a/BaseTools/Source/Python/build/BuildReport.py b/BaseTools/Source/Python/build/BuildReport.py > > index a66adfb16746..95863fefe151 100644 > > --- a/BaseTools/Source/Python/build/BuildReport.py > > +++ b/BaseTools/Source/Python/build/BuildReport.py > > @@ -46,6 +46,7 @@ from Common.StringUtils import NormPath > > from Common.DataType import * > > import collections > > from Common.Expression import * > > +from GenFds.AprioriSection import DXE_APRIORI_GUID, PEI_APRIORI_GUID > > > > ## Pattern to extract contents in EDK DXS files > > gDxsDependencyPattern = re.compile(r"DEPENDENCY_START(.+)DEPENDENCY_END", re.DOTALL) > > @@ -1853,8 +1854,8 @@ class FdRegionReport(object): > > # > > # Add PEI and DXE a priori files GUIDs defined in PI specification. > > # > > - self._GuidsDb["1B45CC0A-156A-428A-AF62-49864DA0E6E6"] = "PEI Apriori" > > - self._GuidsDb["FC510EE7-FFDC-11D4-BD41-0080C73C8881"] = "DXE Apriori" > > + self._GuidsDb[PEI_APRIORI_GUID] = "PEI Apriori" > > + self._GuidsDb[DXE_APRIORI_GUID] = "DXE Apriori" > > # > > # Add ACPI table storage file > > # > > -- > > 2.16.2.windows.1 > > > > _______________________________________________ > > edk2-devel mailing list > > edk2-devel@lists.01.org > > https://lists.01.org/mailman/listinfo/edk2-devel > > ^ permalink raw reply [flat|nested] 14+ messages in thread
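The hunks quoted above repeat two small patterns throughout GenFds.py and GenFdsGlobalVariable.py: the trailing "Name = staticmethod(Name)" assignments are dropped in favour of @staticmethod decorators placed directly above each def, and tuple/list membership tests are rewritten as set literals. Below is a minimal, standalone sketch of both patterns; the class, method, and tag names are illustrative only and do not appear in BaseTools.

    # Hypothetical sketch of the two refactoring patterns shown in the diff;
    # not actual BaseTools code.

    class BuildHelperOld:
        def Log(msg):
            print(msg)
        # pre-decorator style: the method is marked static after the fact
        Log = staticmethod(Log)

    class BuildHelperNew:
        @staticmethod
        def Log(msg):
            # decorator style reads top-down and avoids the trailing assignment
            print(msg)

        @staticmethod
        def IsSupportedTag(Tag):
            # a set literal gives O(1) membership and marks the collection as unordered
            return Tag in {"", "*", "GCC5", "VS2017"}

    if __name__ == '__main__':
        BuildHelperOld.Log("old-style staticmethod")
        BuildHelperNew.Log("decorator-style staticmethod")
        print(BuildHelperNew.IsSupportedTag("GCC5"))

Both spellings behave identically at call time; the decorator form simply keeps the declaration and its static nature in one place, which is what the quoted hunks do for SetDir, SetEnv, MacroExtend, and the other GenFdsGlobalVariable helpers.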
* Re: [Patch v3 1/6] BaseTools/GenFds: cleanup GenFds 2018-10-26 5:25 ` Gao, Liming @ 2018-10-26 6:27 ` Gao, Liming 2018-10-26 6:54 ` Gary Lin 0 siblings, 1 reply; 14+ messages in thread From: Gao, Liming @ 2018-10-26 6:27 UTC (permalink / raw) To: Gao, Liming, Gary Lin, Carsey, Jaben; +Cc: edk2-devel@lists.01.org Edk2 599bb2be1f2b119dcde9e5a7515dc6f3d19eb29d has fixed this issue. Please try. > -----Original Message----- > From: edk2-devel [mailto:edk2-devel-bounces@lists.01.org] On Behalf Of Gao, Liming > Sent: Friday, October 26, 2018 1:25 PM > To: Gary Lin <glin@suse.com>; Carsey, Jaben <jaben.carsey@intel.com> > Cc: edk2-devel@lists.01.org > Subject: Re: [edk2] [Patch v3 1/6] BaseTools/GenFds: cleanup GenFds > > Lin: > I am checking this issue. Sorry for the break. > > Thanks > Liming > > -----Original Message----- > > From: Gary Lin [mailto:glin@suse.com] > > Sent: Friday, October 26, 2018 12:54 PM > > To: Carsey, Jaben <jaben.carsey@intel.com> > > Cc: edk2-devel@lists.01.org; Gao, Liming <liming.gao@intel.com> > > Subject: Re: [edk2] [Patch v3 1/6] BaseTools/GenFds: cleanup GenFds > > > > On Tue, Oct 23, 2018 at 10:29:19AM -0700, Jaben Carsey wrote: > > > 1) remove wildcard imports and use explicit imports > > > 2) refactor to use shared variables from Common/DataType > > > 3) rename to not shadow imports > > > 4) don't assign a variable in a loop (just do final assignment) > > > 5) remove spaces, parens, unused or commented out code, etc. > > > 6) merge unnecessary parent classes into child > > > 7) refactor to share DXE and PEI apriori GUIDs from one place > > > this includes changes to Build and EOT files > > > 8) for PEP8, dont use __ for custom methods. > > > > > Hi Jaben, > > > > I found OVMF crashed immediately at starting up after applying this patch :-\ > > > > == > > > > $ qemu-system-x86_64 -s -enable-kvm -smp 1 -m 1024 \ > > -drive if=pflash,format=raw,readonly,file=/home/gary/git/edk2/Build/OvmfX64/DEBUG_GCC5/FV/OVMF_CODE.fd \ > > -drive if=pflash,format=raw,file=/home/gary/git/edk2/Build/OvmfX64/DEBUG_GCC5/FV/OVMF_VARS.fd \ > > -machine type=q35 \ > > -drive file=fat:hda-contents/,format=raw,read-only,if=virtio \ > > -monitor stdio -debugcon file:debug.log -global isa-debugcon.iobase=0x402 \ > > -serial file:serial.log > > QEMU 3.0.0 monitor - type 'help' for more information > > (qemu) KVM internal error. 
Suberror: 1 > > emulation failure > > RAX=000000000081f6a0 RBX=0000000000000200 RCX=000000000081f6a0 RDX=0000000000000200 > > RSI=000000000081f698 RDI=000000000081fd20 RBP=000000000081f698 RSP=000000000081f4a8 > > R8 =000000000081f4d8 R9 =000000000081f748 R10=0000000000000010 R11=0000000000900000 > > R12=0000000000000000 R13=0000000000820000 R14=000000000081fd01 R15=000000000081fd01 > > RIP=000000000009ffff RFL=00010807 [-O---PC] CPL=0 II=0 A20=1 SMM=0 HLT=0 > > ES =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] > > CS =0018 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA] > > SS =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] > > DS =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] > > FS =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] > > GS =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] > > LDT=0000 0000000000000000 0000ffff 00008200 DPL=0 LDT > > TR =0000 0000000000000000 0000ffff 00008b00 DPL=0 TSS64-busy > > GDT= 00000000ffffff80 0000001f > > IDT= 000000000081fd70 0000021f > > CR0=c0000033 CR2=0000000000000000 CR3=0000000000800000 CR4=00000660 > > DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 > > DR6=00000000ffff0ff0 DR7=0000000000000400 > > EFER=0000000000000500 > > Code=00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 > > 00 00 00 00 00 00 00 00 00 > > > > == > > > > And, there is only one line in debug.log: > > > > SecCoreStartupWithStack(0xFFFCC000, 0x820000) > > > > Without "-enable-kvm", the error became: > > > > == > > > > (qemu) qemu-system-x86_64: Trying to execute code outside RAM or ROM at 0x00000000000a0000 > > This usually means one of the following happened: > > > > (1) You told QEMU to execute a kernel for the wrong machine type, and it crashed on startup (eg trying to run a raspberry pi kernel on > a > > versatilepb QEMU machine) > > (2) You didn't give QEMU a kernel or BIOS filename at all, and QEMU executed a ROM full of no-op instructions until it fell off the end > > (3) Your guest kernel has a bug and crashed by jumping off into nowhere > > > > This is almost always one of the first two, so check your command line and that you are using the right type of kernel for this machine. > > If you think option (3) is likely then you can try debugging your guest with the -d debug options; in particular -d guest_errors will cause > > the log to include a dump of the guest register state at this point. > > > > Execution cannot continue; stopping here. 
> > > > == > > > > Gary Lin > > > > > Cc: Yonghong Zhu <yonghong.zhu@intel.com> > > > Cc: Liming Gao <liming.gao@intel.com> > > > Cc: Bob C Feng <bob.c.feng@intel.com> > > > Contributed-under: TianoCore Contribution Agreement 1.1 > > > Signed-off-by: Jaben Carsey <jaben.carsey@intel.com> > > > --- > > > BaseTools/Source/Python/CommonDataClass/FdfClass.py | 73 - > > > BaseTools/Source/Python/Eot/EotMain.py | 372 ++- > > > BaseTools/Source/Python/GenFds/AprioriSection.py | 45 +- > > > BaseTools/Source/Python/GenFds/Capsule.py | 26 +- > > > BaseTools/Source/Python/GenFds/CapsuleData.py | 1 - > > > BaseTools/Source/Python/GenFds/CompressSection.py | 4 +- > > > BaseTools/Source/Python/GenFds/DataSection.py | 4 +- > > > BaseTools/Source/Python/GenFds/DepexSection.py | 5 +- > > > BaseTools/Source/Python/GenFds/EfiSection.py | 16 +- > > > BaseTools/Source/Python/GenFds/FdfParser.py | 3014 ++++++++++---------- > > > BaseTools/Source/Python/GenFds/Ffs.py | 82 +- > > > BaseTools/Source/Python/GenFds/FfsFileStatement.py | 37 +- > > > BaseTools/Source/Python/GenFds/FfsInfStatement.py | 10 +- > > > BaseTools/Source/Python/GenFds/Fv.py | 47 +- > > > BaseTools/Source/Python/GenFds/FvImageSection.py | 6 +- > > > BaseTools/Source/Python/GenFds/GenFds.py | 160 +- > > > BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py | 208 +- > > > BaseTools/Source/Python/GenFds/GuidSection.py | 4 +- > > > BaseTools/Source/Python/GenFds/OptionRom.py | 6 +- > > > BaseTools/Source/Python/GenFds/Region.py | 12 +- > > > BaseTools/Source/Python/GenFds/UiSection.py | 4 +- > > > BaseTools/Source/Python/GenFds/VerSection.py | 16 +- > > > BaseTools/Source/Python/GenFds/Vtf.py | 48 +- > > > BaseTools/Source/Python/build/BuildReport.py | 5 +- > > > 24 files changed, 2185 insertions(+), 2020 deletions(-) > > > > > > diff --git a/BaseTools/Source/Python/CommonDataClass/FdfClass.py b/BaseTools/Source/Python/CommonDataClass/FdfClass.py > > > index 64b58c2078a3..3df82082457f 100644 > > > --- a/BaseTools/Source/Python/CommonDataClass/FdfClass.py > > > +++ b/BaseTools/Source/Python/CommonDataClass/FdfClass.py > > > @@ -37,50 +37,6 @@ class FDClassObject: > > > self.RegionList = [] > > > self.vtfRawDict = {} > > > > > > -## FV data in FDF > > > -# > > > -# > > > -class FvClassObject: > > > - ## The constructor > > > - # > > > - # @param self The object pointer > > > - # > > > - def __init__(self): > > > - self.UiFvName = None > > > - self.CreateFileName = None > > > - # 3-tuple list (blockSize, numBlocks, pcd) > > > - self.BlockSizeList = [] > > > - # DefineVarDict[var] = value > > > - self.DefineVarDict = {} > > > - # SetVarDict[var] = value > > > - self.SetVarDict = {} > > > - self.FvAlignment = None > > > - # FvAttributeDict[attribute] = TRUE/FALSE (1/0) > > > - self.FvAttributeDict = {} > > > - self.FvNameGuid = None > > > - self.FvNameString = None > > > - self.AprioriSectionList = [] > > > - self.FfsList = [] > > > - self.BsBaseAddress = None > > > - self.RtBaseAddress = None > > > - > > > -## Region data in FDF > > > -# > > > -# > > > -class RegionClassObject: > > > - ## The constructor > > > - # > > > - # @param self The object pointer > > > - # > > > - def __init__(self): > > > - self.Offset = None # The begin position of the Region > > > - self.Size = None # The Size of the Region > > > - self.PcdOffset = None > > > - self.PcdSize = None > > > - self.SetVarDict = {} > > > - self.RegionType = None > > > - self.RegionDataList = [] > > > - > > > ## FFS data in FDF > > > # > > > # > > > @@ -112,7 +68,6 @@ class 
FileStatementClassObject (FfsClassObject) : > > > self.FvName = None > > > self.FdName = None > > > self.DefineVarDict = {} > > > - self.AprioriSection = None > > > self.KeepReloc = None > > > > > > ## INF statement data in FDF > > > @@ -134,19 +89,6 @@ class FfsInfStatementClassObject(FfsClassObject): > > > self.KeepReloc = None > > > self.UseArch = None > > > > > > -## APRIORI section data in FDF > > > -# > > > -# > > > -class AprioriSectionClassObject: > > > - ## The constructor > > > - # > > > - # @param self The object pointer > > > - # > > > - def __init__(self): > > > - # DefineVarDict[var] = value > > > - self.DefineVarDict = {} > > > - self.FfsList = [] > > > - > > > ## section data in FDF > > > # > > > # > > > @@ -363,21 +305,6 @@ class CapsuleClassObject : > > > self.CapsuleDataList = [] > > > self.FmpPayloadList = [] > > > > > > -## VTF data in FDF > > > -# > > > -# > > > -class VtfClassObject : > > > - ## The constructor > > > - # > > > - # @param self The object pointer > > > - # > > > - def __init__(self): > > > - self.KeyArch = None > > > - self.ArchList = None > > > - self.UiName = None > > > - self.ResetBin = None > > > - self.ComponentStatementList = [] > > > - > > > ## VTF component data in FDF > > > # > > > # > > > diff --git a/BaseTools/Source/Python/Eot/EotMain.py b/BaseTools/Source/Python/Eot/EotMain.py > > > index 49a1494126c4..2ad93cc6b4e3 100644 > > > --- a/BaseTools/Source/Python/Eot/EotMain.py > > > +++ b/BaseTools/Source/Python/Eot/EotMain.py > > > @@ -36,10 +36,9 @@ import struct > > > import uuid > > > import copy > > > import codecs > > > +from GenFds.AprioriSection import DXE_APRIORI_GUID, PEI_APRIORI_GUID > > > > > > gGuidStringFormat = "%08X-%04X-%04X-%02X%02X-%02X%02X%02X%02X%02X%02X" > > > -gPeiAprioriFileNameGuid = '1b45cc0a-156a-428a-af62-49864da0e6e6' > > > -gAprioriGuid = 'fc510ee7-ffdc-11d4-bd41-0080c73c8881' > > > gIndention = -4 > > > > > > class Image(array): > > > @@ -402,10 +401,10 @@ class FirmwareVolume(Image): > > > if Ffs.Type == 0x05: > > > FfsDxeCoreGuid = FfsID > > > continue > > > - if Ffs.Guid.lower() == gPeiAprioriFileNameGuid: > > > + if Ffs.Guid.lower() == PEI_APRIORI_GUID.lower(): > > > FfsPeiPrioriGuid = FfsID > > > continue > > > - if Ffs.Guid.lower() == gAprioriGuid: > > > + if Ffs.Guid.lower() == DXE_APRIORI_GUID.lower(): > > > FfsDxePrioriGuid = FfsID > > > continue > > > > > > @@ -1084,6 +1083,371 @@ class Ffs(Image): > > > State = property(_GetState, _SetState) > > > > > > > > > + _FfsGuid = "8C8CE578-8A3D-4F1C-9935-896185C32DD3" > > > + > > > + _GUID_ = struct.Struct("16x 1I2H8B") > > > + _LENGTH_ = struct.Struct("16x 16x 1Q") > > > + _SIG_ = struct.Struct("16x 16x 8x 1I") > > > + _ATTR_ = struct.Struct("16x 16x 8x 4x 1I") > > > + _HLEN_ = struct.Struct("16x 16x 8x 4x 4x 1H") > > > + _CHECKSUM_ = struct.Struct("16x 16x 8x 4x 4x 2x 1H") > > > + > > > + def __init__(self, Name=''): > > > + Image.__init__(self) > > > + self.Name = Name > > > + self.FfsDict = sdict() > > > + self.OrderedFfsDict = sdict() > > > + self.UnDispatchedFfsDict = sdict() > > > + self.ProtocolList = sdict() > > > + > > > + def CheckArchProtocol(self): > > > + for Item in EotGlobalData.gArchProtocolGuids: > > > + if Item.lower() not in EotGlobalData.gProtocolList: > > > + return False > > > + return True > > > + > > > + def ParseDepex(self, Depex, Type): > > > + List = None > > > + if Type == 'Ppi': > > > + List = EotGlobalData.gPpiList > > > + if Type == 'Protocol': > > > + List = EotGlobalData.gProtocolList > > > + DepexStack = [] > > > + DepexList = 
[] > > > + DepexString = '' > > > + FileDepex = None > > > + CouldBeLoaded = True > > > + for Index in range(0, len(Depex.Expression)): > > > + Item = Depex.Expression[Index] > > > + if Item == 0x00: > > > + Index = Index + 1 > > > + Guid = gGuidStringFormat % Depex.Expression[Index] > > > + if Guid in self.OrderedFfsDict and Depex.Expression[Index + 1] == 0x08: > > > + return (True, 'BEFORE %s' % Guid, [Guid, 'BEFORE']) > > > + elif Item == 0x01: > > > + Index = Index + 1 > > > + Guid = gGuidStringFormat % Depex.Expression[Index] > > > + if Guid in self.OrderedFfsDict and Depex.Expression[Index + 1] == 0x08: > > > + return (True, 'AFTER %s' % Guid, [Guid, 'AFTER']) > > > + elif Item == 0x02: > > > + Index = Index + 1 > > > + Guid = gGuidStringFormat % Depex.Expression[Index] > > > + if Guid.lower() in List: > > > + DepexStack.append(True) > > > + DepexList.append(Guid) > > > + else: > > > + DepexStack.append(False) > > > + DepexList.append(Guid) > > > + continue > > > + elif Item == 0x03 or Item == 0x04: > > > + DepexStack.append(eval(str(DepexStack.pop()) + ' ' + Depex._OPCODE_STRING_[Item].lower() + ' ' + > > str(DepexStack.pop()))) > > > + DepexList.append(str(DepexList.pop()) + ' ' + Depex._OPCODE_STRING_[Item].upper() + ' ' + > str(DepexList.pop())) > > > + elif Item == 0x05: > > > + DepexStack.append(eval(Depex._OPCODE_STRING_[Item].lower() + ' ' + str(DepexStack.pop()))) > > > + DepexList.append(Depex._OPCODE_STRING_[Item].lower() + ' ' + str(DepexList.pop())) > > > + elif Item == 0x06: > > > + DepexStack.append(True) > > > + DepexList.append('TRUE') > > > + DepexString = DepexString + 'TRUE' + ' ' > > > + elif Item == 0x07: > > > + DepexStack.append(False) > > > + DepexList.append('False') > > > + DepexString = DepexString + 'FALSE' + ' ' > > > + elif Item == 0x08: > > > + if Index != len(Depex.Expression) - 1: > > > + CouldBeLoaded = False > > > + else: > > > + CouldBeLoaded = DepexStack.pop() > > > + else: > > > + CouldBeLoaded = False > > > + if DepexList != []: > > > + DepexString = DepexList[0].strip() > > > + return (CouldBeLoaded, DepexString, FileDepex) > > > + > > > + def Dispatch(self, Db = None): > > > + if Db is None: > > > + return False > > > + self.UnDispatchedFfsDict = copy.copy(self.FfsDict) > > > + # Find PeiCore, DexCore, PeiPriori, DxePriori first > > > + FfsSecCoreGuid = None > > > + FfsPeiCoreGuid = None > > > + FfsDxeCoreGuid = None > > > + FfsPeiPrioriGuid = None > > > + FfsDxePrioriGuid = None > > > + for FfsID in self.UnDispatchedFfsDict: > > > + Ffs = self.UnDispatchedFfsDict[FfsID] > > > + if Ffs.Type == 0x03: > > > + FfsSecCoreGuid = FfsID > > > + continue > > > + if Ffs.Type == 0x04: > > > + FfsPeiCoreGuid = FfsID > > > + continue > > > + if Ffs.Type == 0x05: > > > + FfsDxeCoreGuid = FfsID > > > + continue > > > + if Ffs.Guid.lower() == PEI_APRIORI_GUID.lower(): > > > + FfsPeiPrioriGuid = FfsID > > > + continue > > > + if Ffs.Guid.lower() == DXE_APRIORI_GUID.lower(): > > > + FfsDxePrioriGuid = FfsID > > > + continue > > > + > > > + # Parse SEC_CORE first > > > + if FfsSecCoreGuid is not None: > > > + self.OrderedFfsDict[FfsSecCoreGuid] = self.UnDispatchedFfsDict.pop(FfsSecCoreGuid) > > > + self.LoadPpi(Db, FfsSecCoreGuid) > > > + > > > + # Parse PEI first > > > + if FfsPeiCoreGuid is not None: > > > + self.OrderedFfsDict[FfsPeiCoreGuid] = self.UnDispatchedFfsDict.pop(FfsPeiCoreGuid) > > > + self.LoadPpi(Db, FfsPeiCoreGuid) > > > + if FfsPeiPrioriGuid is not None: > > > + # Load PEIM described in priori file > > > + FfsPeiPriori = 
self.UnDispatchedFfsDict.pop(FfsPeiPrioriGuid) > > > + if len(FfsPeiPriori.Sections) == 1: > > > + Section = FfsPeiPriori.Sections.popitem()[1] > > > + if Section.Type == 0x19: > > > + GuidStruct = struct.Struct('1I2H8B') > > > + Start = 4 > > > + while len(Section) > Start: > > > + Guid = GuidStruct.unpack_from(Section[Start : Start + 16]) > > > + GuidString = gGuidStringFormat % Guid > > > + Start = Start + 16 > > > + if GuidString in self.UnDispatchedFfsDict: > > > + self.OrderedFfsDict[GuidString] = self.UnDispatchedFfsDict.pop(GuidString) > > > + self.LoadPpi(Db, GuidString) > > > + > > > + self.DisPatchPei(Db) > > > + > > > + # Parse DXE then > > > + if FfsDxeCoreGuid is not None: > > > + self.OrderedFfsDict[FfsDxeCoreGuid] = self.UnDispatchedFfsDict.pop(FfsDxeCoreGuid) > > > + self.LoadProtocol(Db, FfsDxeCoreGuid) > > > + if FfsDxePrioriGuid is not None: > > > + # Load PEIM described in priori file > > > + FfsDxePriori = self.UnDispatchedFfsDict.pop(FfsDxePrioriGuid) > > > + if len(FfsDxePriori.Sections) == 1: > > > + Section = FfsDxePriori.Sections.popitem()[1] > > > + if Section.Type == 0x19: > > > + GuidStruct = struct.Struct('1I2H8B') > > > + Start = 4 > > > + while len(Section) > Start: > > > + Guid = GuidStruct.unpack_from(Section[Start : Start + 16]) > > > + GuidString = gGuidStringFormat % Guid > > > + Start = Start + 16 > > > + if GuidString in self.UnDispatchedFfsDict: > > > + self.OrderedFfsDict[GuidString] = self.UnDispatchedFfsDict.pop(GuidString) > > > + self.LoadProtocol(Db, GuidString) > > > + > > > + self.DisPatchDxe(Db) > > > + > > > + def LoadProtocol(self, Db, ModuleGuid): > > > + SqlCommand = """select GuidValue from Report > > > + where SourceFileFullPath in > > > + (select Value1 from Inf where BelongsToFile = > > > + (select BelongsToFile from Inf > > > + where Value1 = 'FILE_GUID' and Value2 like '%s' and Model = %s) > > > + and Model = %s) > > > + and ItemType = 'Protocol' and ItemMode = 'Produced'""" \ > > > + % (ModuleGuid, 5001, 3007) > > > + RecordSet = Db.TblReport.Exec(SqlCommand) > > > + for Record in RecordSet: > > > + SqlCommand = """select Value2 from Inf where BelongsToFile = > > > + (select DISTINCT BelongsToFile from Inf > > > + where Value1 = > > > + (select SourceFileFullPath from Report > > > + where GuidValue like '%s' and ItemMode = 'Callback')) > > > + and Value1 = 'FILE_GUID'""" % Record[0] > > > + CallBackSet = Db.TblReport.Exec(SqlCommand) > > > + if CallBackSet != []: > > > + EotGlobalData.gProtocolList[Record[0].lower()] = ModuleGuid > > > + else: > > > + EotGlobalData.gProtocolList[Record[0].lower()] = ModuleGuid > > > + > > > + def LoadPpi(self, Db, ModuleGuid): > > > + SqlCommand = """select GuidValue from Report > > > + where SourceFileFullPath in > > > + (select Value1 from Inf where BelongsToFile = > > > + (select BelongsToFile from Inf > > > + where Value1 = 'FILE_GUID' and Value2 like '%s' and Model = %s) > > > + and Model = %s) > > > + and ItemType = 'Ppi' and ItemMode = 'Produced'""" \ > > > + % (ModuleGuid, 5001, 3007) > > > + RecordSet = Db.TblReport.Exec(SqlCommand) > > > + for Record in RecordSet: > > > + EotGlobalData.gPpiList[Record[0].lower()] = ModuleGuid > > > + > > > + def DisPatchDxe(self, Db): > > > + IsInstalled = False > > > + ScheduleList = sdict() > > > + for FfsID in self.UnDispatchedFfsDict: > > > + CouldBeLoaded = False > > > + DepexString = '' > > > + FileDepex = None > > > + Ffs = self.UnDispatchedFfsDict[FfsID] > > > + if Ffs.Type == 0x07: > > > + # Get Depex > > > + IsFoundDepex = False > > > + for 
Section in Ffs.Sections.values(): > > > + # Find Depex > > > + if Section.Type == 0x13: > > > + IsFoundDepex = True > > > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(Section._SubImages[4], 'Protocol') > > > + break > > > + if Section.Type == 0x01: > > > + CompressSections = Section._SubImages[4] > > > + for CompressSection in CompressSections.Sections: > > > + if CompressSection.Type == 0x13: > > > + IsFoundDepex = True > > > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(CompressSection._SubImages[4], > > 'Protocol') > > > + break > > > + if CompressSection.Type == 0x02: > > > + NewSections = CompressSection._SubImages[4] > > > + for NewSection in NewSections.Sections: > > > + if NewSection.Type == 0x13: > > > + IsFoundDepex = True > > > + CouldBeLoaded, DepexString, FileDepex = > > self.ParseDepex(NewSection._SubImages[4], 'Protocol') > > > + break > > > + > > > + # Not find Depex > > > + if not IsFoundDepex: > > > + CouldBeLoaded = self.CheckArchProtocol() > > > + DepexString = '' > > > + FileDepex = None > > > + > > > + # Append New Ffs > > > + if CouldBeLoaded: > > > + IsInstalled = True > > > + NewFfs = self.UnDispatchedFfsDict.pop(FfsID) > > > + NewFfs.Depex = DepexString > > > + if FileDepex is not None: > > > + ScheduleList.insert(FileDepex[1], FfsID, NewFfs, FileDepex[0]) > > > + else: > > > + ScheduleList[FfsID] = NewFfs > > > + else: > > > + self.UnDispatchedFfsDict[FfsID].Depex = DepexString > > > + > > > + for FfsID in ScheduleList: > > > + NewFfs = ScheduleList.pop(FfsID) > > > + FfsName = 'UnKnown' > > > + self.OrderedFfsDict[FfsID] = NewFfs > > > + self.LoadProtocol(Db, FfsID) > > > + > > > + SqlCommand = """select Value2 from Inf > > > + where BelongsToFile = (select BelongsToFile from Inf where Value1 = 'FILE_GUID' and > > lower(Value2) = lower('%s') and Model = %s) > > > + and Model = %s and Value1='BASE_NAME'""" % (FfsID, 5001, 5001) > > > + RecordSet = Db.TblReport.Exec(SqlCommand) > > > + if RecordSet != []: > > > + FfsName = RecordSet[0][0] > > > + > > > + if IsInstalled: > > > + self.DisPatchDxe(Db) > > > + > > > + def DisPatchPei(self, Db): > > > + IsInstalled = False > > > + for FfsID in self.UnDispatchedFfsDict: > > > + CouldBeLoaded = True > > > + DepexString = '' > > > + FileDepex = None > > > + Ffs = self.UnDispatchedFfsDict[FfsID] > > > + if Ffs.Type == 0x06 or Ffs.Type == 0x08: > > > + # Get Depex > > > + for Section in Ffs.Sections.values(): > > > + if Section.Type == 0x1B: > > > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(Section._SubImages[4], 'Ppi') > > > + break > > > + if Section.Type == 0x01: > > > + CompressSections = Section._SubImages[4] > > > + for CompressSection in CompressSections.Sections: > > > + if CompressSection.Type == 0x1B: > > > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(CompressSection._SubImages[4], > > 'Ppi') > > > + break > > > + if CompressSection.Type == 0x02: > > > + NewSections = CompressSection._SubImages[4] > > > + for NewSection in NewSections.Sections: > > > + if NewSection.Type == 0x1B: > > > + CouldBeLoaded, DepexString, FileDepex = > > self.ParseDepex(NewSection._SubImages[4], 'Ppi') > > > + break > > > + > > > + # Append New Ffs > > > + if CouldBeLoaded: > > > + IsInstalled = True > > > + NewFfs = self.UnDispatchedFfsDict.pop(FfsID) > > > + NewFfs.Depex = DepexString > > > + self.OrderedFfsDict[FfsID] = NewFfs > > > + self.LoadPpi(Db, FfsID) > > > + else: > > > + self.UnDispatchedFfsDict[FfsID].Depex = DepexString > > > + > > > + if IsInstalled: > > > + 
self.DisPatchPei(Db) > > > + > > > + > > > + def __str__(self): > > > + global gIndention > > > + gIndention += 4 > > > + FvInfo = '\n' + ' ' * gIndention > > > + FvInfo += "[FV:%s] file_system=%s size=%x checksum=%s\n" % (self.Name, self.FileSystemGuid, self.Size, self.Checksum) > > > + FfsInfo = "\n".join([str(self.FfsDict[FfsId]) for FfsId in self.FfsDict]) > > > + gIndention -= 4 > > > + return FvInfo + FfsInfo > > > + > > > + def _Unpack(self): > > > + Size = self._LENGTH_.unpack_from(self._BUF_, self._OFF_)[0] > > > + self.empty() > > > + self.extend(self._BUF_[self._OFF_:self._OFF_+Size]) > > > + > > > + # traverse the FFS > > > + EndOfFv = Size > > > + FfsStartAddress = self.HeaderSize > > > + LastFfsObj = None > > > + while FfsStartAddress < EndOfFv: > > > + FfsObj = Ffs() > > > + FfsObj.frombuffer(self, FfsStartAddress) > > > + FfsId = repr(FfsObj) > > > + if ((self.Attributes & 0x00000800) != 0 and len(FfsObj) == 0xFFFFFF) \ > > > + or ((self.Attributes & 0x00000800) == 0 and len(FfsObj) == 0): > > > + if LastFfsObj is not None: > > > + LastFfsObj.FreeSpace = EndOfFv - LastFfsObj._OFF_ - len(LastFfsObj) > > > + else: > > > + if FfsId in self.FfsDict: > > > + EdkLogger.error("FV", 0, "Duplicate GUID in FFS", > > > + ExtraData="\t%s @ %s\n\t%s @ %s" \ > > > + % (FfsObj.Guid, FfsObj.Offset, > > > + self.FfsDict[FfsId].Guid, self.FfsDict[FfsId].Offset)) > > > + self.FfsDict[FfsId] = FfsObj > > > + if LastFfsObj is not None: > > > + LastFfsObj.FreeSpace = FfsStartAddress - LastFfsObj._OFF_ - len(LastFfsObj) > > > + > > > + FfsStartAddress += len(FfsObj) > > > + # > > > + # align to next 8-byte aligned address: A = (A + 8 - 1) & (~(8 - 1)) > > > + # The next FFS must be at the latest next 8-byte aligned address > > > + # > > > + FfsStartAddress = (FfsStartAddress + 7) & (~7) > > > + LastFfsObj = FfsObj > > > + > > > + def _GetAttributes(self): > > > + return self.GetField(self._ATTR_, 0)[0] > > > + > > > + def _GetSize(self): > > > + return self.GetField(self._LENGTH_, 0)[0] > > > + > > > + def _GetChecksum(self): > > > + return self.GetField(self._CHECKSUM_, 0)[0] > > > + > > > + def _GetHeaderLength(self): > > > + return self.GetField(self._HLEN_, 0)[0] > > > + > > > + def _GetFileSystemGuid(self): > > > + return gGuidStringFormat % self.GetField(self._GUID_, 0) > > > + > > > + Attributes = property(_GetAttributes) > > > + Size = property(_GetSize) > > > + Checksum = property(_GetChecksum) > > > + HeaderSize = property(_GetHeaderLength) > > > + FileSystemGuid = property(_GetFileSystemGuid) > > > + > > > ## MultipleFv() class > > > # > > > # A class for Multiple FV > > > diff --git a/BaseTools/Source/Python/GenFds/AprioriSection.py b/BaseTools/Source/Python/GenFds/AprioriSection.py > > > index 7196f7f2c753..e7b856a1150a 100644 > > > --- a/BaseTools/Source/Python/GenFds/AprioriSection.py > > > +++ b/BaseTools/Source/Python/GenFds/AprioriSection.py > > > @@ -16,28 +16,31 @@ > > > # Import Modules > > > # > > > from __future__ import absolute_import > > > -from struct import * > > > +from struct import pack > > > import Common.LongFilePathOs as os > > > from io import BytesIO > > > -from . 
import FfsFileStatement > > > +from .FfsFileStatement import FileStatement > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > -from CommonDataClass.FdfClass import AprioriSectionClassObject > > > -from Common.StringUtils import * > > > +from Common.StringUtils import NormPath > > > from Common.Misc import SaveFileOnChange, PathClass > > > -from Common import EdkLogger > > > -from Common.BuildToolError import * > > > +from Common.EdkLogger import error as EdkLoggerError > > > +from Common.BuildToolError import RESOURCE_NOT_AVAILABLE > > > from Common.DataType import TAB_COMMON > > > > > > +DXE_APRIORI_GUID = "FC510EE7-FFDC-11D4-BD41-0080C73C8881" > > > +PEI_APRIORI_GUID = "1B45CC0A-156A-428A-AF62-49864DA0E6E6" > > > + > > > ## process APRIORI file data and generate PEI/DXE APRIORI file > > > # > > > # > > > -class AprioriSection (AprioriSectionClassObject): > > > +class AprioriSection (object): > > > ## The constructor > > > # > > > # @param self The object pointer > > > # > > > def __init__(self): > > > - AprioriSectionClassObject.__init__(self) > > > + self.DefineVarDict = {} > > > + self.FfsList = [] > > > self.AprioriType = "" > > > > > > ## GenFfs() method > > > @@ -50,16 +53,16 @@ class AprioriSection (AprioriSectionClassObject): > > > # @retval string Generated file name > > > # > > > def GenFfs (self, FvName, Dict = {}, IsMakefile = False): > > > - DXE_GUID = "FC510EE7-FFDC-11D4-BD41-0080C73C8881" > > > - PEI_GUID = "1B45CC0A-156A-428A-AF62-49864DA0E6E6" > > > Buffer = BytesIO('') > > > - AprioriFileGuid = DXE_GUID > > > if self.AprioriType == "PEI": > > > - AprioriFileGuid = PEI_GUID > > > + AprioriFileGuid = PEI_APRIORI_GUID > > > + else: > > > + AprioriFileGuid = DXE_APRIORI_GUID > > > + > > > OutputAprFilePath = os.path.join (GenFdsGlobalVariable.WorkSpaceDir, \ > > > GenFdsGlobalVariable.FfsDir,\ > > > AprioriFileGuid + FvName) > > > - if not os.path.exists(OutputAprFilePath) : > > > + if not os.path.exists(OutputAprFilePath): > > > os.makedirs(OutputAprFilePath) > > > > > > OutputAprFileName = os.path.join( OutputAprFilePath, \ > > > @@ -69,33 +72,30 @@ class AprioriSection (AprioriSectionClassObject): > > > > > > Dict.update(self.DefineVarDict) > > > InfFileName = None > > > - for FfsObj in self.FfsList : > > > + for FfsObj in self.FfsList: > > > Guid = "" > > > - if isinstance(FfsObj, FfsFileStatement.FileStatement): > > > + if isinstance(FfsObj, FileStatement): > > > Guid = FfsObj.NameGuid > > > else: > > > InfFileName = NormPath(FfsObj.InfFileName) > > > Arch = FfsObj.GetCurrentArch() > > > > > > - if Arch is not None: > > > + if Arch: > > > Dict['$(ARCH)'] = Arch > > > InfFileName = GenFdsGlobalVariable.MacroExtend(InfFileName, Dict, Arch) > > > > > > - if Arch is not None: > > > + if Arch: > > > Inf = GenFdsGlobalVariable.WorkSpace.BuildObject[PathClass(InfFileName, > > GenFdsGlobalVariable.WorkSpaceDir), Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] > > > Guid = Inf.Guid > > > - > > > else: > > > Inf = GenFdsGlobalVariable.WorkSpace.BuildObject[PathClass(InfFileName, > > GenFdsGlobalVariable.WorkSpaceDir), TAB_COMMON, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] > > > Guid = Inf.Guid > > > > > > - self.BinFileList = Inf.Module.Binaries > > > - if self.BinFileList == []: > > > - EdkLogger.error("GenFds", RESOURCE_NOT_AVAILABLE, > > > + if not Inf.Module.Binaries: > > > + EdkLoggerError("GenFds", RESOURCE_NOT_AVAILABLE, > > > "INF %s not found in build ARCH %s!" 
\ > > > % (InfFileName, GenFdsGlobalVariable.ArchList)) > > > > > > - > > > GuidPart = Guid.split('-') > > > Buffer.write(pack('I', long(GuidPart[0], 16))) > > > Buffer.write(pack('H', int(GuidPart[1], 16))) > > > @@ -123,4 +123,3 @@ class AprioriSection (AprioriSectionClassObject): > > > 'EFI_FV_FILETYPE_FREEFORM', AprioriFileGuid, MakefilePath=MakefilePath) > > > > > > return AprFfsFileName > > > - > > > diff --git a/BaseTools/Source/Python/GenFds/Capsule.py b/BaseTools/Source/Python/GenFds/Capsule.py > > > index baa6e1dfa2ab..df29c40dbd20 100644 > > > --- a/BaseTools/Source/Python/GenFds/Capsule.py > > > +++ b/BaseTools/Source/Python/GenFds/Capsule.py > > > @@ -16,22 +16,18 @@ > > > # Import Modules > > > # > > > from __future__ import absolute_import > > > -from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > -from .GenFdsGlobalVariable import FindExtendTool > > > +from .GenFdsGlobalVariable import GenFdsGlobalVariable, FindExtendTool > > > from CommonDataClass.FdfClass import CapsuleClassObject > > > import Common.LongFilePathOs as os > > > -import subprocess > > > from io import BytesIO > > > -from Common.Misc import SaveFileOnChange > > > -from Common.Misc import PackRegistryFormatGuid > > > +from Common.Misc import SaveFileOnChange, PackRegistryFormatGuid > > > import uuid > > > from struct import pack > > > from Common import EdkLogger > > > -from Common.BuildToolError import * > > > +from Common.BuildToolError import GENFDS_ERROR > > > +from Common.DataType import TAB_LINE_BREAK > > > > > > - > > > -T_CHAR_LF = '\n' > > > -WIN_CERT_REVISION = 0x0200 > > > +WIN_CERT_REVISION = 0x0200 > > > WIN_CERT_TYPE_EFI_GUID = 0x0EF1 > > > EFI_CERT_TYPE_PKCS7_GUID = uuid.UUID('{4aafd29d-68df-49ee-8aa9-347d375665a7}') > > > EFI_CERT_TYPE_RSA2048_SHA256_GUID = uuid.UUID('{a7717414-c616-4977-9420-844712a735bf}') > > > @@ -39,7 +35,7 @@ EFI_CERT_TYPE_RSA2048_SHA256_GUID = uuid.UUID('{a7717414-c616-4977-9420-844712a7 > > > ## create inf file describes what goes into capsule and call GenFv to generate capsule > > > # > > > # > > > -class Capsule (CapsuleClassObject) : > > > +class Capsule (CapsuleClassObject): > > > ## The constructor > > > # > > > # @param self The object pointer > > > @@ -210,16 +206,16 @@ class Capsule (CapsuleClassObject) : > > > return self.GenFmpCapsule() > > > > > > CapInfFile = self.GenCapInf() > > > - CapInfFile.writelines("[files]" + T_CHAR_LF) > > > + CapInfFile.writelines("[files]" + TAB_LINE_BREAK) > > > CapFileList = [] > > > - for CapsuleDataObj in self.CapsuleDataList : > > > + for CapsuleDataObj in self.CapsuleDataList: > > > CapsuleDataObj.CapsuleName = self.CapsuleName > > > FileName = CapsuleDataObj.GenCapsuleSubItem() > > > CapsuleDataObj.CapsuleName = None > > > CapFileList.append(FileName) > > > CapInfFile.writelines("EFI_FILE_NAME = " + \ > > > FileName + \ > > > - T_CHAR_LF) > > > + TAB_LINE_BREAK) > > > SaveFileOnChange(self.CapInfFileName, CapInfFile.getvalue(), False) > > > CapInfFile.close() > > > # > > > @@ -249,13 +245,13 @@ class Capsule (CapsuleClassObject) : > > > self.UiCapsuleName + "_Cap" + '.inf') > > > CapInfFile = BytesIO() #open (self.CapInfFileName , 'w+') > > > > > > - CapInfFile.writelines("[options]" + T_CHAR_LF) > > > + CapInfFile.writelines("[options]" + TAB_LINE_BREAK) > > > > > > for Item in self.TokensDict: > > > CapInfFile.writelines("EFI_" + \ > > > Item + \ > > > ' = ' + \ > > > self.TokensDict[Item] + \ > > > - T_CHAR_LF) > > > + TAB_LINE_BREAK) > > > > > > return CapInfFile > > > diff --git 
a/BaseTools/Source/Python/GenFds/CapsuleData.py b/BaseTools/Source/Python/GenFds/CapsuleData.py > > > index 0caba8983d2e..6071e9f4a545 100644 > > > --- a/BaseTools/Source/Python/GenFds/CapsuleData.py > > > +++ b/BaseTools/Source/Python/GenFds/CapsuleData.py > > > @@ -16,7 +16,6 @@ > > > # Import Modules > > > # > > > from __future__ import absolute_import > > > -from . import Ffs > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > from io import BytesIO > > > from struct import pack > > > diff --git a/BaseTools/Source/Python/GenFds/CompressSection.py b/BaseTools/Source/Python/GenFds/CompressSection.py > > > index aaaabf84dca8..51f6791fab19 100644 > > > --- a/BaseTools/Source/Python/GenFds/CompressSection.py > > > +++ b/BaseTools/Source/Python/GenFds/CompressSection.py > > > @@ -16,7 +16,7 @@ > > > # Import Modules > > > # > > > from __future__ import absolute_import > > > -from .Ffs import Ffs > > > +from .Ffs import SectionSuffix > > > from . import Section > > > import subprocess > > > import Common.LongFilePathOs as os > > > @@ -86,7 +86,7 @@ class CompressSection (CompressSectionClassObject) : > > > ModuleName + \ > > > SUP_MODULE_SEC + \ > > > SecNum + \ > > > - Ffs.SectionSuffix['COMPRESS'] > > > + SectionSuffix['COMPRESS'] > > > OutputFile = os.path.normpath(OutputFile) > > > DummyFile = OutputFile + '.dummy' > > > GenFdsGlobalVariable.GenerateSection(DummyFile, SectFiles, InputAlign=SectAlign, IsMakefile=IsMakefile) > > > diff --git a/BaseTools/Source/Python/GenFds/DataSection.py b/BaseTools/Source/Python/GenFds/DataSection.py > > > index a6387b07c582..28f9b931ca4b 100644 > > > --- a/BaseTools/Source/Python/GenFds/DataSection.py > > > +++ b/BaseTools/Source/Python/GenFds/DataSection.py > > > @@ -19,7 +19,7 @@ from __future__ import absolute_import > > > from . import Section > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > import subprocess > > > -from .Ffs import Ffs > > > +from .Ffs import SectionSuffix > > > import Common.LongFilePathOs as os > > > from CommonDataClass.FdfClass import DataSectionClassObject > > > from Common.Misc import PeImageClass > > > @@ -121,7 +121,7 @@ class DataSection (DataSectionClassObject): > > > ) > > > self.SectFileName = TeFile > > > > > > - OutputFile = os.path.join (OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get(self.SecType)) > > > + OutputFile = os.path.join (OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get(self.SecType)) > > > OutputFile = os.path.normpath(OutputFile) > > > GenFdsGlobalVariable.GenerateSection(OutputFile, [self.SectFileName], Section.Section.SectionType.get(self.SecType), > > IsMakefile = IsMakefile) > > > FileList = [OutputFile] > > > diff --git a/BaseTools/Source/Python/GenFds/DepexSection.py b/BaseTools/Source/Python/GenFds/DepexSection.py > > > index b2d123bfc045..78efc18816e8 100644 > > > --- a/BaseTools/Source/Python/GenFds/DepexSection.py > > > +++ b/BaseTools/Source/Python/GenFds/DepexSection.py > > > @@ -18,8 +18,6 @@ > > > from __future__ import absolute_import > > > from . 
import Section > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > -import subprocess > > > -from .Ffs import Ffs > > > import Common.LongFilePathOs as os > > > from CommonDataClass.FdfClass import DepexSectionClassObject > > > from AutoGen.GenDepex import DependencyExpression > > > @@ -46,9 +44,8 @@ class DepexSection (DepexSectionClassObject): > > > GenFdsGlobalVariable.TargetName, > > > GenFdsGlobalVariable.ToolChainTag) > > > for Inf in GenFdsGlobalVariable.FdfParser.Profile.InfList: > > > - ModuleFile = PathClass(Inf, GenFdsGlobalVariable.WorkSpaceDir) > > > ModuleData = GenFdsGlobalVariable.WorkSpace.BuildObject[ > > > - ModuleFile, > > > + PathClass(Inf, GenFdsGlobalVariable.WorkSpaceDir), > > > Arch, > > > GenFdsGlobalVariable.TargetName, > > > GenFdsGlobalVariable.ToolChainTag > > > diff --git a/BaseTools/Source/Python/GenFds/EfiSection.py b/BaseTools/Source/Python/GenFds/EfiSection.py > > > index 623b77d27427..f8573b5c7d1b 100644 > > > --- a/BaseTools/Source/Python/GenFds/EfiSection.py > > > +++ b/BaseTools/Source/Python/GenFds/EfiSection.py > > > @@ -20,7 +20,7 @@ from struct import * > > > from . import Section > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > import subprocess > > > -from .Ffs import Ffs > > > +from .Ffs import SectionSuffix > > > import Common.LongFilePathOs as os > > > from CommonDataClass.FdfClass import EfiSectionClassObject > > > from Common import EdkLogger > > > @@ -124,7 +124,7 @@ class EfiSection (EfiSectionClassObject): > > > BuildNumTuple = tuple() > > > > > > Num = SecNum > > > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > > Ffs.SectionSuffix.get(SectionType)) > > > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > > SectionSuffix.get(SectionType)) > > > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', > > > #Ui=StringData, > > > Ver=BuildNum, > > > @@ -135,7 +135,7 @@ class EfiSection (EfiSectionClassObject): > > > for File in FileList: > > > Index = Index + 1 > > > Num = '%s.%d' %(SecNum, Index) > > > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > > Ffs.SectionSuffix.get(SectionType)) > > > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > > SectionSuffix.get(SectionType)) > > > f = open(File, 'r') > > > VerString = f.read() > > > f.close() > > > @@ -164,7 +164,7 @@ class EfiSection (EfiSectionClassObject): > > > else: > > > EdkLogger.error("GenFds", GENFDS_ERROR, "File: %s miss Version Section value" %InfFileName) > > > Num = SecNum > > > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > > Ffs.SectionSuffix.get(SectionType)) > > > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > > SectionSuffix.get(SectionType)) > > > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', > > > #Ui=VerString, > > > Ver=BuildNum, > > > @@ -185,7 +185,7 @@ class EfiSection (EfiSectionClassObject): > > > Num = SecNum > > > if IsMakefile and StringData == ModuleNameStr: > > > StringData = "$(MODULE_NAME)" > > > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > > Ffs.SectionSuffix.get(SectionType)) > > > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > > SectionSuffix.get(SectionType)) > > > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_USER_INTERFACE', > > > Ui=StringData, IsMakefile=IsMakefile) > > > 
OutputFileList.append(OutputFile) > > > @@ -194,7 +194,7 @@ class EfiSection (EfiSectionClassObject): > > > for File in FileList: > > > Index = Index + 1 > > > Num = '%s.%d' %(SecNum, Index) > > > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > > Ffs.SectionSuffix.get(SectionType)) > > > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > > SectionSuffix.get(SectionType)) > > > f = open(File, 'r') > > > UiString = f.read() > > > f.close() > > > @@ -218,7 +218,7 @@ class EfiSection (EfiSectionClassObject): > > > Num = SecNum > > > if IsMakefile and StringData == ModuleNameStr: > > > StringData = "$(MODULE_NAME)" > > > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > > Ffs.SectionSuffix.get(SectionType)) > > > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > > SectionSuffix.get(SectionType)) > > > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_USER_INTERFACE', > > > Ui=StringData, IsMakefile=IsMakefile) > > > OutputFileList.append(OutputFile) > > > @@ -239,7 +239,7 @@ class EfiSection (EfiSectionClassObject): > > > """ Copy Map file to FFS output path """ > > > Index = Index + 1 > > > Num = '%s.%d' %(SecNum, Index) > > > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + Num + > > Ffs.SectionSuffix.get(SectionType)) > > > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + Num + > > SectionSuffix.get(SectionType)) > > > File = GenFdsGlobalVariable.MacroExtend(File, Dict) > > > > > > #Get PE Section alignment when align is set to AUTO > > > diff --git a/BaseTools/Source/Python/GenFds/FdfParser.py b/BaseTools/Source/Python/GenFds/FdfParser.py > > > index 63687e98bb31..62bf9f14012a 100644 > > > --- a/BaseTools/Source/Python/GenFds/FdfParser.py > > > +++ b/BaseTools/Source/Python/GenFds/FdfParser.py > > > @@ -18,79 +18,66 @@ > > > # > > > from __future__ import print_function > > > from __future__ import absolute_import > > > -import re > > > +from re import compile, DOTALL > > > +from string import hexdigits > > > +from uuid import UUID > > > > > > -from . import Fd > > > -from . import Region > > > -from . import Fv > > > -from . import AprioriSection > > > -from . import FfsInfStatement > > > -from . import FfsFileStatement > > > -from . import VerSection > > > -from . import UiSection > > > -from . import FvImageSection > > > -from . import DataSection > > > -from . import DepexSection > > > -from . import CompressSection > > > -from . import GuidSection > > > -from . import Capsule > > > -from . import CapsuleData > > > -from . import Rule > > > -from . import RuleComplexFile > > > -from . import RuleSimpleFile > > > -from . import EfiSection > > > -from . import Vtf > > > -from . import ComponentStatement > > > -from . import OptionRom > > > -from . import OptRomInfStatement > > > -from . 
import OptRomFileStatement > > > -import string > > > - > > > -from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > from Common.BuildToolError import * > > > from Common import EdkLogger > > > -from Common.Misc import PathClass > > > -from Common.StringUtils import NormPath > > > -import Common.GlobalData as GlobalData > > > -from Common.Expression import * > > > +from Common.Misc import PathClass, tdict > > > +from Common.StringUtils import NormPath, ReplaceMacro > > > from Common import GlobalData > > > +from Common.Expression import * > > > from Common.DataType import * > > > -from Common.StringUtils import ReplaceMacro > > > -import uuid > > > -from Common.Misc import tdict > > > from Common.MultipleWorkspace import MultipleWorkspace as mws > > > import Common.LongFilePathOs as os > > > from Common.LongFilePathSupport import OpenLongFilePath as open > > > -from .Capsule import EFI_CERT_TYPE_PKCS7_GUID > > > -from .Capsule import EFI_CERT_TYPE_RSA2048_SHA256_GUID > > > from Common.RangeExpression import RangeExpression > > > from collections import OrderedDict > > > > > > -##define T_CHAR_SPACE ' ' > > > -##define T_CHAR_NULL '\0' > > > -##define T_CHAR_CR '\r' > > > -##define T_CHAR_TAB '\t' > > > -##define T_CHAR_LF '\n' > > > -##define T_CHAR_SLASH '/' > > > -##define T_CHAR_BACKSLASH '\\' > > > -##define T_CHAR_DOUBLE_QUOTE '\"' > > > -##define T_CHAR_SINGLE_QUOTE '\'' > > > -##define T_CHAR_STAR '*' > > > -##define T_CHAR_HASH '#' > > > +from .Fd import FD > > > +from .Region import Region > > > +from .Fv import FV > > > +from .AprioriSection import AprioriSection > > > +from .FfsInfStatement import FfsInfStatement > > > +from .FfsFileStatement import FileStatement > > > +from .VerSection import VerSection > > > +from .UiSection import UiSection > > > +from .FvImageSection import FvImageSection > > > +from .DataSection import DataSection > > > +from .DepexSection import DepexSection > > > +from .CompressSection import CompressSection > > > +from .GuidSection import GuidSection > > > +from .Capsule import EFI_CERT_TYPE_PKCS7_GUID, EFI_CERT_TYPE_RSA2048_SHA256_GUID, Capsule > > > +from .CapsuleData import CapsuleFfs, CapsulePayload, CapsuleFv, CapsuleFd, CapsuleAnyFile, CapsuleAfile > > > +from .RuleComplexFile import RuleComplexFile > > > +from .RuleSimpleFile import RuleSimpleFile > > > +from .EfiSection import EfiSection > > > +from .Vtf import Vtf > > > +from .ComponentStatement import ComponentStatement > > > +from .OptionRom import OPTIONROM > > > +from .OptRomInfStatement import OptRomInfStatement, OverrideAttribs > > > +from .OptRomFileStatement import OptRomFileStatement > > > +from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > > > -(T_CHAR_SPACE, T_CHAR_NULL, T_CHAR_CR, T_CHAR_TAB, T_CHAR_LF, T_CHAR_SLASH, \ > > > -T_CHAR_BACKSLASH, T_CHAR_DOUBLE_QUOTE, T_CHAR_SINGLE_QUOTE, T_CHAR_STAR, T_CHAR_HASH) = \ > > > -(' ', '\0', '\r', '\t', '\n', '/', '\\', '\"', '\'', '*', '#') > > > +T_CHAR_CR = '\r' > > > +T_CHAR_TAB = '\t' > > > +T_CHAR_DOUBLE_QUOTE = '\"' > > > +T_CHAR_SINGLE_QUOTE = '\'' > > > +T_CHAR_STAR = '*' > > > > > > -SEPERATOR_TUPLE = ('=', '|', ',', '{', '}') > > > +SEPARATORS = {TAB_EQUAL_SPLIT, TAB_VALUE_SPLIT, TAB_COMMA_SPLIT, '{', '}'} > > > +ALIGNMENTS = {"Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > > + "256K", "512K", "1M", "2M", "4M", "8M", "16M"} > > > +ALIGNMENT_NOAUTO = ALIGNMENTS - {"Auto"} > > > > > > -RegionSizePattern = 
re.compile("\s*(?P<base>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<size>(?:0x|0X)?[a-fA-F0-9]+)\s*") > > > -RegionSizeGuidPattern = re.compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*\|\s*(?P<size>\w+\.\w+[\.\w\[\]]*)\s*") > > > -RegionOffsetPcdPattern = re.compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*$") > > > -ShortcutPcdPattern = re.compile("\s*\w+\s*=\s*(?P<value>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<name>\w+\.\w+)\s*") > > > -BaseAddrValuePattern = re.compile('^0[xX][0-9a-fA-F]+') > > > -FileExtensionPattern = re.compile(r'([a-zA-Z][a-zA-Z0-9]*)') > > > -TokenFindPattern = > > re.compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\)|\*)') > > > +RegionSizePattern = compile("\s*(?P<base>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<size>(?:0x|0X)?[a-fA-F0-9]+)\s*") > > > +RegionSizeGuidPattern = compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*\|\s*(?P<size>\w+\.\w+[\.\w\[\]]*)\s*") > > > +RegionOffsetPcdPattern = compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*$") > > > +ShortcutPcdPattern = compile("\s*\w+\s*=\s*(?P<value>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<name>\w+\.\w+)\s*") > > > +BaseAddrValuePattern = compile('^0[xX][0-9a-fA-F]+') > > > +FileExtensionPattern = compile(r'([a-zA-Z][a-zA-Z0-9]*)') > > > +TokenFindPattern = > > compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\)|\*)') > > > AllIncludeFileList = [] > > > > > > # Get the closest parent > > > @@ -109,7 +96,6 @@ def IsValidInclude (File, Line): > > > return True > > > > > > def GetRealFileLine (File, Line): > > > - > > > InsertedLines = 0 > > > for Profile in AllIncludeFileList: > > > if Profile.IsLineInFile(Line): > > > @@ -121,7 +107,7 @@ def GetRealFileLine (File, Line): > > > > > > ## The exception class that used to report error messages when parsing FDF > > > # > > > -# Currently the "ToolName" is set to be "FDF Parser". > > > +# Currently the "ToolName" is set to be "FdfParser". > > > # > > > class Warning (Exception): > > > ## The constructor > > > @@ -132,7 +118,6 @@ class Warning (Exception): > > > # @param Line The Line number that error occurs > > > # > > > def __init__(self, Str, File = None, Line = None): > > > - > > > FileLineTuple = GetRealFileLine(File, Line) > > > self.FileName = FileLineTuple[0] > > > self.LineNumber = FileLineTuple[1] > > > @@ -147,7 +132,7 @@ class Warning (Exception): > > > # > > > # May raise Exception when opening file. > > > # > > > -class IncludeFileProfile : > > > +class IncludeFileProfile: > > > ## The constructor > > > # > > > # @param self The object pointer > > > @@ -157,16 +142,11 @@ class IncludeFileProfile : > > > self.FileName = FileName > > > self.FileLinesList = [] > > > try: > > > - fsock = open(FileName, "rb", 0) > > > - try: > > > + with open(FileName, "rb", 0) as fsock: > > > self.FileLinesList = fsock.readlines() > > > for index, line in enumerate(self.FileLinesList): > > > - if not line.endswith('\n'): > > > - self.FileLinesList[index] += '\n' > > > - > > > - finally: > > > - fsock.close() > > > - > > > + if not line.endswith(TAB_LINE_BREAK): > > > + self.FileLinesList[index] += TAB_LINE_BREAK > > > except: > > > EdkLogger.error("FdfParser", FILE_OPEN_FAILURE, ExtraData=FileName) > > > > > > @@ -209,7 +189,7 @@ class IncludeFileProfile : > > > # > > > # May raise Exception when opening file. 
> > > # > > > -class FileProfile : > > > +class FileProfile: > > > ## The constructor > > > # > > > # @param self The object pointer > > > @@ -218,11 +198,8 @@ class FileProfile : > > > def __init__(self, FileName): > > > self.FileLinesList = [] > > > try: > > > - fsock = open(FileName, "rb", 0) > > > - try: > > > + with open(FileName, "rb", 0) as fsock: > > > self.FileLinesList = fsock.readlines() > > > - finally: > > > - fsock.close() > > > > > > except: > > > EdkLogger.error("FdfParser", FILE_OPEN_FAILURE, ExtraData=FileName) > > > @@ -266,41 +243,37 @@ class FdfParser: > > > self.CurrentOffsetWithinLine = 0 > > > self.CurrentFdName = None > > > self.CurrentFvName = None > > > - self.__Token = "" > > > - self.__SkippedChars = "" > > > + self._Token = "" > > > + self._SkippedChars = "" > > > GlobalData.gFdfParser = self > > > > > > # Used to section info > > > - self.__CurSection = [] > > > + self._CurSection = [] > > > # Key: [section name, UI name, arch] > > > - # Value: {MACRO_NAME : MACRO_VALUE} > > > - self.__MacroDict = tdict(True, 3) > > > - self.__PcdDict = OrderedDict() > > > + # Value: {MACRO_NAME: MACRO_VALUE} > > > + self._MacroDict = tdict(True, 3) > > > + self._PcdDict = OrderedDict() > > > > > > - self.__WipeOffArea = [] > > > + self._WipeOffArea = [] > > > if GenFdsGlobalVariable.WorkSpaceDir == '': > > > GenFdsGlobalVariable.WorkSpaceDir = os.getenv("WORKSPACE") > > > > > > - ## __SkipWhiteSpace() method > > > + ## _SkipWhiteSpace() method > > > # > > > - # Skip white spaces from current char, return number of chars skipped > > > + # Skip white spaces from current char. > > > # > > > # @param self The object pointer > > > - # @retval Count The number of chars skipped > > > # > > > - def __SkipWhiteSpace(self): > > > - Count = 0 > > > - while not self.__EndOfFile(): > > > - Count += 1 > > > - if self.__CurrentChar() in (T_CHAR_NULL, T_CHAR_CR, T_CHAR_LF, T_CHAR_SPACE, T_CHAR_TAB): > > > - self.__SkippedChars += str(self.__CurrentChar()) > > > - self.__GetOneChar() > > > - > > > + def _SkipWhiteSpace(self): > > > + while not self._EndOfFile(): > > > + if self._CurrentChar() in (TAB_PRINTCHAR_NUL, T_CHAR_CR, TAB_LINE_BREAK, TAB_SPACE_SPLIT, T_CHAR_TAB): > > > + self._SkippedChars += str(self._CurrentChar()) > > > + self._GetOneChar() > > > else: > > > - Count = Count - 1 > > > - return Count > > > + return > > > + return > > > > > > - ## __EndOfFile() method > > > + ## _EndOfFile() method > > > # > > > # Judge current buffer pos is at file end > > > # > > > @@ -308,17 +281,16 @@ class FdfParser: > > > # @retval True Current File buffer position is at file end > > > # @retval False Current File buffer position is NOT at file end > > > # > > > - def __EndOfFile(self): > > > + def _EndOfFile(self): > > > NumberOfLines = len(self.Profile.FileLinesList) > > > SizeOfLastLine = len(self.Profile.FileLinesList[-1]) > > > if self.CurrentLineNumber == NumberOfLines and self.CurrentOffsetWithinLine >= SizeOfLastLine - 1: > > > return True > > > - elif self.CurrentLineNumber > NumberOfLines: > > > + if self.CurrentLineNumber > NumberOfLines: > > > return True > > > - else: > > > - return False > > > + return False > > > > > > - ## __EndOfLine() method > > > + ## _EndOfLine() method > > > # > > > # Judge current buffer pos is at line end > > > # > > > @@ -326,14 +298,13 @@ class FdfParser: > > > # @retval True Current File buffer position is at line end > > > # @retval False Current File buffer position is NOT at line end > > > # > > > - def __EndOfLine(self): > > > + def 
_EndOfLine(self): > > > if self.CurrentLineNumber > len(self.Profile.FileLinesList): > > > return True > > > SizeOfCurrentLine = len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) > > > if self.CurrentOffsetWithinLine >= SizeOfCurrentLine: > > > return True > > > - else: > > > - return False > > > + return False > > > > > > ## Rewind() method > > > # > > > @@ -347,7 +318,7 @@ class FdfParser: > > > self.CurrentLineNumber = DestLine > > > self.CurrentOffsetWithinLine = DestOffset > > > > > > - ## __UndoOneChar() method > > > + ## _UndoOneChar() method > > > # > > > # Go back one char in the file buffer > > > # > > > @@ -355,80 +326,78 @@ class FdfParser: > > > # @retval True Successfully go back one char > > > # @retval False Not able to go back one char as file beginning reached > > > # > > > - def __UndoOneChar(self): > > > - > > > + def _UndoOneChar(self): > > > if self.CurrentLineNumber == 1 and self.CurrentOffsetWithinLine == 0: > > > return False > > > elif self.CurrentOffsetWithinLine == 0: > > > self.CurrentLineNumber -= 1 > > > - self.CurrentOffsetWithinLine = len(self.__CurrentLine()) - 1 > > > + self.CurrentOffsetWithinLine = len(self._CurrentLine()) - 1 > > > else: > > > self.CurrentOffsetWithinLine -= 1 > > > return True > > > > > > - ## __GetOneChar() method > > > + ## _GetOneChar() method > > > # > > > # Move forward one char in the file buffer > > > # > > > # @param self The object pointer > > > # > > > - def __GetOneChar(self): > > > + def _GetOneChar(self): > > > if self.CurrentOffsetWithinLine == len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) - 1: > > > self.CurrentLineNumber += 1 > > > self.CurrentOffsetWithinLine = 0 > > > else: > > > self.CurrentOffsetWithinLine += 1 > > > > > > - ## __CurrentChar() method > > > + ## _CurrentChar() method > > > # > > > # Get the char pointed to by the file buffer pointer > > > # > > > # @param self The object pointer > > > # @retval Char Current char > > > # > > > - def __CurrentChar(self): > > > + def _CurrentChar(self): > > > return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine] > > > > > > - ## __NextChar() method > > > + ## _NextChar() method > > > # > > > # Get the one char pass the char pointed to by the file buffer pointer > > > # > > > # @param self The object pointer > > > # @retval Char Next char > > > # > > > - def __NextChar(self): > > > + def _NextChar(self): > > > if self.CurrentOffsetWithinLine == len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) - 1: > > > return self.Profile.FileLinesList[self.CurrentLineNumber][0] > > > - else: > > > - return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine + 1] > > > + return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine + 1] > > > > > > - ## __SetCurrentCharValue() method > > > + ## _SetCurrentCharValue() method > > > # > > > # Modify the value of current char > > > # > > > # @param self The object pointer > > > # @param Value The new value of current char > > > # > > > - def __SetCurrentCharValue(self, Value): > > > + def _SetCurrentCharValue(self, Value): > > > self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine] = Value > > > > > > - ## __CurrentLine() method > > > + ## _CurrentLine() method > > > # > > > # Get the list that contains current line contents > > > # > > > # @param self The object pointer > > > # @retval List current line contents > > > # > > > - def __CurrentLine(self): > > > + def 
_CurrentLine(self): > > > return self.Profile.FileLinesList[self.CurrentLineNumber - 1] > > > > > > - def __StringToList(self): > > > + def _StringToList(self): > > > self.Profile.FileLinesList = [list(s) for s in self.Profile.FileLinesList] > > > if not self.Profile.FileLinesList: > > > EdkLogger.error('FdfParser', FILE_READ_FAILURE, 'The file is empty!', File=self.FileName) > > > self.Profile.FileLinesList[-1].append(' ') > > > > > > - def __ReplaceFragment(self, StartPos, EndPos, Value = ' '): > > > + def _ReplaceFragment(self, StartPos, EndPos, Value = ' '): > > > if StartPos[0] == EndPos[0]: > > > Offset = StartPos[1] > > > while Offset <= EndPos[1]: > > > @@ -437,14 +406,14 @@ class FdfParser: > > > return > > > > > > Offset = StartPos[1] > > > - while self.Profile.FileLinesList[StartPos[0]][Offset] not in ('\r', '\n'): > > > + while self.Profile.FileLinesList[StartPos[0]][Offset] not in (T_CHAR_CR, TAB_LINE_BREAK): > > > self.Profile.FileLinesList[StartPos[0]][Offset] = Value > > > Offset += 1 > > > > > > Line = StartPos[0] > > > while Line < EndPos[0]: > > > Offset = 0 > > > - while self.Profile.FileLinesList[Line][Offset] not in ('\r', '\n'): > > > + while self.Profile.FileLinesList[Line][Offset] not in (T_CHAR_CR, TAB_LINE_BREAK): > > > self.Profile.FileLinesList[Line][Offset] = Value > > > Offset += 1 > > > Line += 1 > > > @@ -454,45 +423,29 @@ class FdfParser: > > > self.Profile.FileLinesList[EndPos[0]][Offset] = Value > > > Offset += 1 > > > > > > - > > > - def __GetMacroName(self): > > > - if not self.__GetNextToken(): > > > - raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) > > > - MacroName = self.__Token > > > - NotFlag = False > > > - if MacroName.startswith('!'): > > > - NotFlag = True > > > - MacroName = MacroName[1:].strip() > > > - > > > - if not MacroName.startswith('$(') or not MacroName.endswith(')'): > > > - raise Warning("Macro name expected(Please use '$(%(Token)s)' if '%(Token)s' is a macro.)" % {"Token" : > MacroName}, > > > - self.FileName, self.CurrentLineNumber) > > > - MacroName = MacroName[2:-1] > > > - return MacroName, NotFlag > > > - > > > - def __SetMacroValue(self, Macro, Value): > > > - if not self.__CurSection: > > > + def _SetMacroValue(self, Macro, Value): > > > + if not self._CurSection: > > > return > > > > > > MacroDict = {} > > > - if not self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]]: > > > - self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]] = MacroDict > > > + if not self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]]: > > > + self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]] = MacroDict > > > else: > > > - MacroDict = self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]] > > > + MacroDict = self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]] > > > MacroDict[Macro] = Value > > > > > > - def __GetMacroValue(self, Macro): > > > + def _GetMacroValue(self, Macro): > > > # Highest priority > > > if Macro in GlobalData.gCommandLineDefines: > > > return GlobalData.gCommandLineDefines[Macro] > > > if Macro in GlobalData.gGlobalDefines: > > > return GlobalData.gGlobalDefines[Macro] > > > > > > - if self.__CurSection: > > > - MacroDict = self.__MacroDict[ > > > - self.__CurSection[0], > > > - self.__CurSection[1], > > > - self.__CurSection[2] > > > + if self._CurSection: > > > + MacroDict = self._MacroDict[ > > > + self._CurSection[0], > > > + 
self._CurSection[1], > > > + self._CurSection[2] > > > ] > > > if MacroDict and Macro in MacroDict: > > > return MacroDict[Macro] > > > @@ -502,7 +455,7 @@ class FdfParser: > > > return GlobalData.gPlatformDefines[Macro] > > > return None > > > > > > - def __SectionHeaderParser(self, Section): > > > + def _SectionHeaderParser(self, Section): > > > # [Defines] > > > # [FD.UiName]: use dummy instead if UI name is optional > > > # [FV.UiName] > > > @@ -510,25 +463,25 @@ class FdfParser: > > > # [Rule]: don't take rule section into account, macro is not allowed in this section > > > # [VTF.arch.UiName, arch] > > > # [OptionRom.DriverName] > > > - self.__CurSection = [] > > > - Section = Section.strip()[1:-1].upper().replace(' ', '').strip('.') > > > - ItemList = Section.split('.') > > > + self._CurSection = [] > > > + Section = Section.strip()[1:-1].upper().replace(' ', '').strip(TAB_SPLIT) > > > + ItemList = Section.split(TAB_SPLIT) > > > Item = ItemList[0] > > > if Item == '' or Item == 'RULE': > > > return > > > > > > if Item == TAB_COMMON_DEFINES.upper(): > > > - self.__CurSection = [TAB_COMMON, TAB_COMMON, TAB_COMMON] > > > + self._CurSection = [TAB_COMMON, TAB_COMMON, TAB_COMMON] > > > elif Item == 'VTF' and len(ItemList) == 3: > > > UiName = ItemList[2] > > > - Pos = UiName.find(',') > > > + Pos = UiName.find(TAB_COMMA_SPLIT) > > > if Pos != -1: > > > UiName = UiName[:Pos] > > > - self.__CurSection = ['VTF', UiName, ItemList[1]] > > > + self._CurSection = ['VTF', UiName, ItemList[1]] > > > elif len(ItemList) > 1: > > > - self.__CurSection = [ItemList[0], ItemList[1], TAB_COMMON] > > > + self._CurSection = [ItemList[0], ItemList[1], TAB_COMMON] > > > elif len(ItemList) > 0: > > > - self.__CurSection = [ItemList[0], 'DUMMY', TAB_COMMON] > > > + self._CurSection = [ItemList[0], 'DUMMY', TAB_COMMON] > > > > > > ## PreprocessFile() method > > > # > > > @@ -540,7 +493,6 @@ class FdfParser: > > > # @param self The object pointer > > > # > > > def PreprocessFile(self): > > > - > > > self.Rewind() > > > InComment = False > > > DoubleSlashComment = False > > > @@ -548,12 +500,12 @@ class FdfParser: > > > # HashComment in quoted string " " is ignored. 
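The _SetMacroValue/_GetMacroValue pair above keys section-local macros by the (type, name, arch) tuple of the current section header and falls back through several global tables in a fixed order. A standalone sketch of that priority chain, using placeholder dictionaries in place of the real GlobalData module:

    # highest priority first; contents are illustrative
    command_line_defines = {'TARGET': 'RELEASE'}
    global_defines = {'ARCH': 'X64'}
    section_macros = {('FV', 'FVMAIN', 'COMMON'): {'BLOCK_SIZE': '0x1000'}}
    platform_defines = {'BLOCK_SIZE': '0x800'}        # lowest priority

    def get_macro_value(macro, cur_section):
        if macro in command_line_defines:
            return command_line_defines[macro]
        if macro in global_defines:
            return global_defines[macro]
        scoped = section_macros.get(tuple(cur_section), {})
        if macro in scoped:
            return scoped[macro]
        return platform_defines.get(macro)

    print(get_macro_value('BLOCK_SIZE', ['FV', 'FVMAIN', 'COMMON']))   # '0x1000', not '0x800'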
> > > InString = False > > > > > > - while not self.__EndOfFile(): > > > + while not self._EndOfFile(): > > > > > > - if self.__CurrentChar() == T_CHAR_DOUBLE_QUOTE and not InComment: > > > + if self._CurrentChar() == T_CHAR_DOUBLE_QUOTE and not InComment: > > > InString = not InString > > > # meet new line, then no longer in a comment for // and '#' > > > - if self.__CurrentChar() == T_CHAR_LF: > > > + if self._CurrentChar() == TAB_LINE_BREAK: > > > self.CurrentLineNumber += 1 > > > self.CurrentOffsetWithinLine = 0 > > > if InComment and DoubleSlashComment: > > > @@ -563,33 +515,33 @@ class FdfParser: > > > InComment = False > > > HashComment = False > > > # check for */ comment end > > > - elif InComment and not DoubleSlashComment and not HashComment and self.__CurrentChar() == T_CHAR_STAR > and > > self.__NextChar() == T_CHAR_SLASH: > > > - self.__SetCurrentCharValue(T_CHAR_SPACE) > > > - self.__GetOneChar() > > > - self.__SetCurrentCharValue(T_CHAR_SPACE) > > > - self.__GetOneChar() > > > + elif InComment and not DoubleSlashComment and not HashComment and self._CurrentChar() == T_CHAR_STAR and > > self._NextChar() == TAB_BACK_SLASH: > > > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > > > + self._GetOneChar() > > > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > > > + self._GetOneChar() > > > InComment = False > > > # set comments to spaces > > > elif InComment: > > > - self.__SetCurrentCharValue(T_CHAR_SPACE) > > > - self.__GetOneChar() > > > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > > > + self._GetOneChar() > > > # check for // comment > > > - elif self.__CurrentChar() == T_CHAR_SLASH and self.__NextChar() == T_CHAR_SLASH and not self.__EndOfLine(): > > > + elif self._CurrentChar() == TAB_BACK_SLASH and self._NextChar() == TAB_BACK_SLASH and not self._EndOfLine(): > > > InComment = True > > > DoubleSlashComment = True > > > # check for '#' comment > > > - elif self.__CurrentChar() == T_CHAR_HASH and not self.__EndOfLine() and not InString: > > > + elif self._CurrentChar() == TAB_COMMENT_SPLIT and not self._EndOfLine() and not InString: > > > InComment = True > > > HashComment = True > > > # check for /* comment start > > > - elif self.__CurrentChar() == T_CHAR_SLASH and self.__NextChar() == T_CHAR_STAR: > > > - self.__SetCurrentCharValue( T_CHAR_SPACE) > > > - self.__GetOneChar() > > > - self.__SetCurrentCharValue( T_CHAR_SPACE) > > > - self.__GetOneChar() > > > + elif self._CurrentChar() == TAB_BACK_SLASH and self._NextChar() == T_CHAR_STAR: > > > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > > > + self._GetOneChar() > > > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > > > + self._GetOneChar() > > > InComment = True > > > else: > > > - self.__GetOneChar() > > > + self._GetOneChar() > > > > > > # restore from ListOfList to ListOfString > > > self.Profile.FileLinesList = ["".join(list) for list in self.Profile.FileLinesList] > > > @@ -606,30 +558,30 @@ class FdfParser: > > > # nested include support > > > Processed = False > > > MacroDict = {} > > > - while self.__GetNextToken(): > > > + while self._GetNextToken(): > > > > > > - if self.__Token == 'DEFINE': > > > - if not self.__GetNextToken(): > > > + if self._Token == TAB_DEFINE: > > > + if not self._GetNextToken(): > > > raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) > > > - Macro = self.__Token > > > - if not self.__IsToken( "="): > > > + Macro = self._Token > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - Value = 
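PreprocessFile above overwrites //, /* */ and # comments with spaces rather than deleting them, so CurrentLineNumber/CurrentOffsetWithinLine (and every error message built from them) still point into the file the user actually wrote. A simplified, single-line version of the same idea, handling only # comments:

    def blank_hash_comment(line):
        """Replace a trailing '#' comment with spaces, preserving length and offsets."""
        in_string = False
        chars = list(line)
        for i, ch in enumerate(chars):
            if ch == '"':
                in_string = not in_string
            elif ch == '#' and not in_string:
                for j in range(i, len(chars)):
                    if chars[j] not in ('\r', '\n'):
                        chars[j] = ' '
                break
        return ''.join(chars)

    print(repr(blank_hash_comment('Size = 0x10000  # flash size\n')))
    # the comment text becomes spaces; line length and the newline are unchanged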
self.__GetExpression() > > > + Value = self._GetExpression() > > > MacroDict[Macro] = Value > > > > > > - elif self.__Token == '!include': > > > + elif self._Token == TAB_INCLUDE: > > > Processed = True > > > IncludeLine = self.CurrentLineNumber > > > - IncludeOffset = self.CurrentOffsetWithinLine - len('!include') > > > - if not self.__GetNextToken(): > > > + IncludeOffset = self.CurrentOffsetWithinLine - len(TAB_INCLUDE) > > > + if not self._GetNextToken(): > > > raise Warning("expected include file name", self.FileName, self.CurrentLineNumber) > > > - IncFileName = self.__Token > > > + IncFileName = self._Token > > > PreIndex = 0 > > > StartPos = IncFileName.find('$(', PreIndex) > > > EndPos = IncFileName.find(')', StartPos+2) > > > while StartPos != -1 and EndPos != -1: > > > - Macro = IncFileName[StartPos+2 : EndPos] > > > - MacroVal = self.__GetMacroValue(Macro) > > > + Macro = IncFileName[StartPos+2: EndPos] > > > + MacroVal = self._GetMacroValue(Macro) > > > if not MacroVal: > > > if Macro in MacroDict: > > > MacroVal = MacroDict[Macro] > > > @@ -686,9 +638,9 @@ class FdfParser: > > > IncFileProfile.Level = ParentProfile.Level + 1 > > > IncFileProfile.InsertStartLineNumber = InsertAtLine + 1 > > > # deal with remaining portions after "!include filename", if exists. > > > - if self.__GetNextToken(): > > > + if self._GetNextToken(): > > > if self.CurrentLineNumber == CurrentLine: > > > - RemainingLine = self.__CurrentLine()[CurrentOffset:] > > > + RemainingLine = self._CurrentLine()[CurrentOffset:] > > > self.Profile.FileLinesList.insert(self.CurrentLineNumber, RemainingLine) > > > IncFileProfile.InsertAdjust += 1 > > > self.CurrentLineNumber += 1 > > > @@ -704,7 +656,7 @@ class FdfParser: > > > > > > # comment out the processed include file statement > > > TempList = list(self.Profile.FileLinesList[IncludeLine - 1]) > > > - TempList.insert(IncludeOffset, '#') > > > + TempList.insert(IncludeOffset, TAB_COMMENT_SPLIT) > > > self.Profile.FileLinesList[IncludeLine - 1] = ''.join(TempList) > > > if Processed: # Nested and back-to-back support > > > self.Rewind(DestLine = IncFileProfile.InsertStartLineNumber - 1) > > > @@ -713,7 +665,7 @@ class FdfParser: > > > self.Rewind() > > > > > > @staticmethod > > > - def __GetIfListCurrentItemStat(IfList): > > > + def _GetIfListCurrentItemStat(IfList): > > > if len(IfList) == 0: > > > return True > > > > > > @@ -735,29 +687,29 @@ class FdfParser: > > > IfList = [] > > > RegionLayoutLine = 0 > > > ReplacedLine = -1 > > > - while self.__GetNextToken(): > > > + while self._GetNextToken(): > > > # Determine section name and the location dependent macro > > > - if self.__GetIfListCurrentItemStat(IfList): > > > - if self.__Token.startswith('['): > > > - Header = self.__Token > > > - if not self.__Token.endswith(']'): > > > - self.__SkipToToken(']') > > > - Header += self.__SkippedChars > > > + if self._GetIfListCurrentItemStat(IfList): > > > + if self._Token.startswith(TAB_SECTION_START): > > > + Header = self._Token > > > + if not self._Token.endswith(TAB_SECTION_END): > > > + self._SkipToToken(TAB_SECTION_END) > > > + Header += self._SkippedChars > > > if Header.find('$(') != -1: > > > raise Warning("macro cannot be used in section header", self.FileName, self.CurrentLineNumber) > > > - self.__SectionHeaderParser(Header) > > > + self._SectionHeaderParser(Header) > > > continue > > > # Replace macros except in RULE section or out of section > > > - elif self.__CurSection and ReplacedLine != self.CurrentLineNumber: > > > + elif self._CurSection and 
ReplacedLine != self.CurrentLineNumber: > > > ReplacedLine = self.CurrentLineNumber > > > - self.__UndoToken() > > > + self._UndoToken() > > > CurLine = self.Profile.FileLinesList[ReplacedLine - 1] > > > PreIndex = 0 > > > StartPos = CurLine.find('$(', PreIndex) > > > EndPos = CurLine.find(')', StartPos+2) > > > - while StartPos != -1 and EndPos != -1 and self.__Token not in ['!ifdef', '!ifndef', '!if', '!elseif']: > > > - MacroName = CurLine[StartPos+2 : EndPos] > > > - MacorValue = self.__GetMacroValue(MacroName) > > > + while StartPos != -1 and EndPos != -1 and self._Token not in [TAB_IF_DEF, TAB_IF_N_DEF, TAB_IF, > > TAB_ELSE_IF]: > > > + MacroName = CurLine[StartPos+2: EndPos] > > > + MacorValue = self._GetMacroValue(MacroName) > > > if MacorValue is not None: > > > CurLine = CurLine.replace('$(' + MacroName + ')', MacorValue, 1) > > > if MacorValue.find('$(') != -1: > > > @@ -771,74 +723,74 @@ class FdfParser: > > > self.Profile.FileLinesList[ReplacedLine - 1] = CurLine > > > continue > > > > > > - if self.__Token == 'DEFINE': > > > - if self.__GetIfListCurrentItemStat(IfList): > > > - if not self.__CurSection: > > > + if self._Token == TAB_DEFINE: > > > + if self._GetIfListCurrentItemStat(IfList): > > > + if not self._CurSection: > > > raise Warning("macro cannot be defined in Rule section or out of section", self.FileName, > > self.CurrentLineNumber) > > > DefineLine = self.CurrentLineNumber - 1 > > > - DefineOffset = self.CurrentOffsetWithinLine - len('DEFINE') > > > - if not self.__GetNextToken(): > > > + DefineOffset = self.CurrentOffsetWithinLine - len(TAB_DEFINE) > > > + if not self._GetNextToken(): > > > raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) > > > - Macro = self.__Token > > > - if not self.__IsToken( "="): > > > + Macro = self._Token > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - Value = self.__GetExpression() > > > - self.__SetMacroValue(Macro, Value) > > > - self.__WipeOffArea.append(((DefineLine, DefineOffset), (self.CurrentLineNumber - 1, > > self.CurrentOffsetWithinLine - 1))) > > > - elif self.__Token == 'SET': > > > - if not self.__GetIfListCurrentItemStat(IfList): > > > + Value = self._GetExpression() > > > + self._SetMacroValue(Macro, Value) > > > + self._WipeOffArea.append(((DefineLine, DefineOffset), (self.CurrentLineNumber - 1, > > self.CurrentOffsetWithinLine - 1))) > > > + elif self._Token == 'SET': > > > + if not self._GetIfListCurrentItemStat(IfList): > > > continue > > > SetLine = self.CurrentLineNumber - 1 > > > SetOffset = self.CurrentOffsetWithinLine - len('SET') > > > - PcdPair = self.__GetNextPcdSettings() > > > + PcdPair = self._GetNextPcdSettings() > > > PcdName = "%s.%s" % (PcdPair[1], PcdPair[0]) > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - Value = self.__GetExpression() > > > - Value = self.__EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) > > > + Value = self._GetExpression() > > > + Value = self._EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) > > > > > > - self.__PcdDict[PcdName] = Value > > > + self._PcdDict[PcdName] = Value > > > > > > self.Profile.PcdDict[PcdPair] = Value > > > self.SetPcdLocalation(PcdPair) > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > self.Profile.PcdFileLineDict[PcdPair] = FileLineTuple > > > > > > - 
self.__WipeOffArea.append(((SetLine, SetOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - > 1))) > > > - elif self.__Token in ('!ifdef', '!ifndef', '!if'): > > > - IfStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self.__Token)) > > > + self._WipeOffArea.append(((SetLine, SetOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > + elif self._Token in (TAB_IF_DEF, TAB_IF_N_DEF, TAB_IF): > > > + IfStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self._Token)) > > > IfList.append([IfStartPos, None, None]) > > > > > > - CondLabel = self.__Token > > > - Expression = self.__GetExpression() > > > + CondLabel = self._Token > > > + Expression = self._GetExpression() > > > > > > - if CondLabel == '!if': > > > - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') > > > + if CondLabel == TAB_IF: > > > + ConditionSatisfied = self._EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') > > > else: > > > - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'in') > > > - if CondLabel == '!ifndef': > > > + ConditionSatisfied = self._EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'in') > > > + if CondLabel == TAB_IF_N_DEF: > > > ConditionSatisfied = not ConditionSatisfied > > > > > > BranchDetermined = ConditionSatisfied > > > IfList[-1] = [IfList[-1][0], ConditionSatisfied, BranchDetermined] > > > if ConditionSatisfied: > > > - self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > - elif self.__Token in ('!elseif', '!else'): > > > - ElseStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self.__Token)) > > > + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > + elif self._Token in (TAB_ELSE_IF, '!else'): > > > + ElseStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self._Token)) > > > if len(IfList) <= 0: > > > raise Warning("Missing !if statement", self.FileName, self.CurrentLineNumber) > > > > > > if IfList[-1][1]: > > > IfList[-1] = [ElseStartPos, False, True] > > > - self.__WipeOffArea.append((ElseStartPos, (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > + self._WipeOffArea.append((ElseStartPos, (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > else: > > > - self.__WipeOffArea.append((IfList[-1][0], ElseStartPos)) > > > + self._WipeOffArea.append((IfList[-1][0], ElseStartPos)) > > > IfList[-1] = [ElseStartPos, True, IfList[-1][2]] > > > - if self.__Token == '!elseif': > > > - Expression = self.__GetExpression() > > > - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') > > > + if self._Token == TAB_ELSE_IF: > > > + Expression = self._GetExpression() > > > + ConditionSatisfied = self._EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') > > > IfList[-1] = [IfList[-1][0], ConditionSatisfied, IfList[-1][2]] > > > > > > if IfList[-1][1]: > > > @@ -846,14 +798,14 @@ class FdfParser: > > > IfList[-1][1] = False > > > else: > > > IfList[-1][2] = True > > > - self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, > self.CurrentOffsetWithinLine > > - 1))) > > > - elif self.__Token == '!endif': > > > + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, > self.CurrentOffsetWithinLine > > - 1))) > > > + elif self._Token == '!endif': > > > if 
len(IfList) <= 0: > > > raise Warning("Missing !if statement", self.FileName, self.CurrentLineNumber) > > > if IfList[-1][1]: > > > - self.__WipeOffArea.append(((self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len('!endif')), > > (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > + self._WipeOffArea.append(((self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len('!endif')), > > (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > else: > > > - self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > > > > IfList.pop() > > > elif not IfList: # Don't use PCDs inside conditional directive > > > @@ -862,7 +814,7 @@ class FdfParser: > > > continue > > > SetPcd = ShortcutPcdPattern.match(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) > > > if SetPcd: > > > - self.__PcdDict[SetPcd.group('name')] = SetPcd.group('value') > > > + self._PcdDict[SetPcd.group('name')] = SetPcd.group('value') > > > RegionLayoutLine = self.CurrentLineNumber > > > continue > > > RegionSize = RegionSizePattern.match(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) > > > @@ -873,35 +825,35 @@ class FdfParser: > > > if not RegionSizeGuid: > > > RegionLayoutLine = self.CurrentLineNumber + 1 > > > continue > > > - self.__PcdDict[RegionSizeGuid.group('base')] = RegionSize.group('base') > > > - self.__PcdDict[RegionSizeGuid.group('size')] = RegionSize.group('size') > > > + self._PcdDict[RegionSizeGuid.group('base')] = RegionSize.group('base') > > > + self._PcdDict[RegionSizeGuid.group('size')] = RegionSize.group('size') > > > RegionLayoutLine = self.CurrentLineNumber + 1 > > > > > > if IfList: > > > raise Warning("Missing !endif", self.FileName, self.CurrentLineNumber) > > > self.Rewind() > > > > > > - def __CollectMacroPcd(self): > > > + def _CollectMacroPcd(self): > > > MacroDict = {} > > > > > > # PCD macro > > > MacroDict.update(GlobalData.gPlatformPcds) > > > - MacroDict.update(self.__PcdDict) > > > + MacroDict.update(self._PcdDict) > > > > > > # Lowest priority > > > MacroDict.update(GlobalData.gPlatformDefines) > > > > > > - if self.__CurSection: > > > + if self._CurSection: > > > # Defines macro > > > - ScopeMacro = self.__MacroDict[TAB_COMMON, TAB_COMMON, TAB_COMMON] > > > + ScopeMacro = self._MacroDict[TAB_COMMON, TAB_COMMON, TAB_COMMON] > > > if ScopeMacro: > > > MacroDict.update(ScopeMacro) > > > > > > # Section macro > > > - ScopeMacro = self.__MacroDict[ > > > - self.__CurSection[0], > > > - self.__CurSection[1], > > > - self.__CurSection[2] > > > + ScopeMacro = self._MacroDict[ > > > + self._CurSection[0], > > > + self._CurSection[1], > > > + self._CurSection[2] > > > ] > > > if ScopeMacro: > > > MacroDict.update(ScopeMacro) > > > @@ -912,15 +864,15 @@ class FdfParser: > > > for Item in GlobalData.BuildOptionPcd: > > > if isinstance(Item, tuple): > > > continue > > > - PcdName, TmpValue = Item.split("=") > > > + PcdName, TmpValue = Item.split(TAB_EQUAL_SPLIT) > > > TmpValue = BuildOptionValue(TmpValue, {}) > > > MacroDict[PcdName.strip()] = TmpValue > > > # Highest priority > > > > > > return MacroDict > > > > > > - def __EvaluateConditional(self, Expression, Line, Op = None, Value = None): > > > - MacroPcdDict = self.__CollectMacroPcd() > > > + def _EvaluateConditional(self, Expression, Line, Op = None, Value = None): > > > + MacroPcdDict = self._CollectMacroPcd() > > > if 
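PreprocessConditionalStatement above keeps one stack entry per open !if/!ifdef/!ifndef, recording the directive's start position, whether the current branch is live, and whether any branch has been satisfied yet; !elseif and !else flip those flags and !endif pops. A reduced sketch of that bookkeeping, with expression evaluation stubbed down to a simple membership test:

    def filter_conditionals(lines, defines):
        """Keep only lines whose enclosing !if/!elseif/!else directives are satisfied."""
        stack = []     # per open conditional: [branch_live, some_branch_taken]
        kept = []
        for line in lines:
            tok = line.strip()
            if tok.startswith('!if'):                      # covers !if/!ifdef/!ifndef in this toy
                live = tok.split(None, 1)[1] in defines
                stack.append([live, live])
            elif tok.startswith('!elseif'):
                cond = tok.split(None, 1)[1] in defines
                live = cond and not stack[-1][1]
                stack[-1] = [live, stack[-1][1] or live]
            elif tok == '!else':
                stack[-1] = [not stack[-1][1], True]
            elif tok == '!endif':
                stack.pop()
            elif all(entry[0] for entry in stack):
                kept.append(line)
        return kept

    print(filter_conditionals(
        ['!if FOO', 'a = 1', '!else', 'b = 2', '!endif', 'c = 3'],
        defines={'BAR'}))
    # ['b = 2', 'c = 3']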
Op == 'eval': > > > try: > > > if Value: > > > @@ -933,7 +885,7 @@ class FdfParser: > > > # the precise number of line and return the evaluation result > > > # > > > EdkLogger.warn('Parser', "Suspicious expression: %s" % str(Excpt), > > > - File=self.FileName, ExtraData=self.__CurrentLine(), > > > + File=self.FileName, ExtraData=self._CurrentLine(), > > > Line=Line) > > > return Excpt.result > > > except Exception as Excpt: > > > @@ -955,10 +907,10 @@ class FdfParser: > > > Expression = Expression[2:-1] > > > return Expression in MacroPcdDict > > > > > > - ## __IsToken() method > > > + ## _IsToken() method > > > # > > > # Check whether input string is found from current char position along > > > - # If found, the string value is put into self.__Token > > > + # If found, the string value is put into self._Token > > > # > > > # @param self The object pointer > > > # @param String The string to search > > > @@ -966,26 +918,26 @@ class FdfParser: > > > # @retval True Successfully find string, file buffer pointer moved forward > > > # @retval False Not able to find string, file buffer pointer not changed > > > # > > > - def __IsToken(self, String, IgnoreCase = False): > > > - self.__SkipWhiteSpace() > > > + def _IsToken(self, String, IgnoreCase = False): > > > + self._SkipWhiteSpace() > > > > > > # Only consider the same line, no multi-line token allowed > > > StartPos = self.CurrentOffsetWithinLine > > > index = -1 > > > if IgnoreCase: > > > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(String.upper()) > > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(String.upper()) > > > else: > > > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(String) > > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(String) > > > if index == 0: > > > self.CurrentOffsetWithinLine += len(String) > > > - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] > > > + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] > > > return True > > > return False > > > > > > - ## __IsKeyword() method > > > + ## _IsKeyword() method > > > # > > > # Check whether input keyword is found from current char position along, whole word only! 
> > > - # If found, the string value is put into self.__Token > > > + # If found, the string value is put into self._Token > > > # > > > # @param self The object pointer > > > # @param Keyword The string to search > > > @@ -993,117 +945,117 @@ class FdfParser: > > > # @retval True Successfully find string, file buffer pointer moved forward > > > # @retval False Not able to find string, file buffer pointer not changed > > > # > > > - def __IsKeyword(self, KeyWord, IgnoreCase = False): > > > - self.__SkipWhiteSpace() > > > + def _IsKeyword(self, KeyWord, IgnoreCase = False): > > > + self._SkipWhiteSpace() > > > > > > # Only consider the same line, no multi-line token allowed > > > StartPos = self.CurrentOffsetWithinLine > > > index = -1 > > > if IgnoreCase: > > > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(KeyWord.upper()) > > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(KeyWord.upper()) > > > else: > > > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(KeyWord) > > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(KeyWord) > > > if index == 0: > > > - followingChar = self.__CurrentLine()[self.CurrentOffsetWithinLine + len(KeyWord)] > > > - if not str(followingChar).isspace() and followingChar not in SEPERATOR_TUPLE: > > > + followingChar = self._CurrentLine()[self.CurrentOffsetWithinLine + len(KeyWord)] > > > + if not str(followingChar).isspace() and followingChar not in SEPARATORS: > > > return False > > > self.CurrentOffsetWithinLine += len(KeyWord) > > > - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] > > > + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] > > > return True > > > return False > > > > > > - def __GetExpression(self): > > > + def _GetExpression(self): > > > Line = self.Profile.FileLinesList[self.CurrentLineNumber - 1] > > > Index = len(Line) - 1 > > > - while Line[Index] in ['\r', '\n']: > > > + while Line[Index] in [T_CHAR_CR, TAB_LINE_BREAK]: > > > Index -= 1 > > > ExpressionString = self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:Index+1] > > > self.CurrentOffsetWithinLine += len(ExpressionString) > > > ExpressionString = ExpressionString.strip() > > > return ExpressionString > > > > > > - ## __GetNextWord() method > > > + ## _GetNextWord() method > > > # > > > # Get next C name from file lines > > > - # If found, the string value is put into self.__Token > > > + # If found, the string value is put into self._Token > > > # > > > # @param self The object pointer > > > # @retval True Successfully find a C name string, file buffer pointer moved forward > > > # @retval False Not able to find a C name string, file buffer pointer not changed > > > # > > > - def __GetNextWord(self): > > > - self.__SkipWhiteSpace() > > > - if self.__EndOfFile(): > > > + def _GetNextWord(self): > > > + self._SkipWhiteSpace() > > > + if self._EndOfFile(): > > > return False > > > > > > - TempChar = self.__CurrentChar() > > > + TempChar = self._CurrentChar() > > > StartPos = self.CurrentOffsetWithinLine > > > if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_': > > > - self.__GetOneChar() > > > - while not self.__EndOfLine(): > > > - TempChar = self.__CurrentChar() > > > + self._GetOneChar() > > > + while not self._EndOfLine(): > > > + TempChar = self._CurrentChar() > > > if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') \ > 
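_IsToken and _IsKeyword above differ in one detail: _IsKeyword additionally requires the character after the match to be whitespace or a separator, so a keyword never matches the front of a longer word (e.g. 'SET' inside 'SETTINGS'). A self-contained sketch of that whole-word check; SEPARATORS here is only a stand-in subset of the tuple the parser imports:

    SEPARATORS = ('=', '|', ',', '{', '}')      # illustrative subset only

    def is_keyword(line, pos, keyword):
        """Match keyword at pos only when it is not the prefix of a longer word."""
        if not line.startswith(keyword, pos):
            return False
        end = pos + len(keyword)
        if end < len(line):
            following = line[end]
            if not following.isspace() and following not in SEPARATORS:
                return False
        return True

    print(is_keyword('SET gToken.Pcd = 1', 0, 'SET'))   # True
    print(is_keyword('SETTINGS = 1', 0, 'SET'))         # False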
> > or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-': > > > - self.__GetOneChar() > > > + self._GetOneChar() > > > > > > else: > > > break > > > > > > - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] > > > + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] > > > return True > > > > > > return False > > > > > > - def __GetNextPcdWord(self): > > > - self.__SkipWhiteSpace() > > > - if self.__EndOfFile(): > > > + def _GetNextPcdWord(self): > > > + self._SkipWhiteSpace() > > > + if self._EndOfFile(): > > > return False > > > > > > - TempChar = self.__CurrentChar() > > > + TempChar = self._CurrentChar() > > > StartPos = self.CurrentOffsetWithinLine > > > - if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_' or TempChar == > '[' > > or TempChar == ']': > > > - self.__GetOneChar() > > > - while not self.__EndOfLine(): > > > - TempChar = self.__CurrentChar() > > > + if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_' or TempChar == > > TAB_SECTION_START or TempChar == TAB_SECTION_END: > > > + self._GetOneChar() > > > + while not self._EndOfLine(): > > > + TempChar = self._CurrentChar() > > > if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') \ > > > - or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-' or TempChar == '[' or > > TempChar == ']': > > > - self.__GetOneChar() > > > + or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-' or TempChar == > > TAB_SECTION_START or TempChar == TAB_SECTION_END: > > > + self._GetOneChar() > > > > > > else: > > > break > > > > > > - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] > > > + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] > > > return True > > > > > > return False > > > > > > - ## __GetNextToken() method > > > + ## _GetNextToken() method > > > # > > > # Get next token unit before a seperator > > > - # If found, the string value is put into self.__Token > > > + # If found, the string value is put into self._Token > > > # > > > # @param self The object pointer > > > # @retval True Successfully find a token unit, file buffer pointer moved forward > > > # @retval False Not able to find a token unit, file buffer pointer not changed > > > # > > > - def __GetNextToken(self): > > > + def _GetNextToken(self): > > > # Skip leading spaces, if exist. > > > - self.__SkipWhiteSpace() > > > - if self.__EndOfFile(): > > > + self._SkipWhiteSpace() > > > + if self._EndOfFile(): > > > return False > > > # Record the token start position, the position of the first non-space char. > > > StartPos = self.CurrentOffsetWithinLine > > > StartLine = self.CurrentLineNumber > > > while StartLine == self.CurrentLineNumber: > > > - TempChar = self.__CurrentChar() > > > + TempChar = self._CurrentChar() > > > # Try to find the end char that is not a space and not in seperator tuple. > > > # That is, when we got a space or any char in the tuple, we got the end of token. > > > - if not str(TempChar).isspace() and TempChar not in SEPERATOR_TUPLE: > > > - self.__GetOneChar() > > > + if not str(TempChar).isspace() and TempChar not in SEPARATORS: > > > + self._GetOneChar() > > > # if we happen to meet a seperator as the first char, we must proceed to get it. > > > # That is, we get a token that is a seperator char. nomally it is the boundary of other tokens. 
> > > - elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPERATOR_TUPLE: > > > - self.__GetOneChar() > > > + elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPARATORS: > > > + self._GetOneChar() > > > break > > > else: > > > break > > > @@ -1113,35 +1065,34 @@ class FdfParser: > > > EndPos = self.CurrentOffsetWithinLine > > > if self.CurrentLineNumber != StartLine: > > > EndPos = len(self.Profile.FileLinesList[StartLine-1]) > > > - self.__Token = self.Profile.FileLinesList[StartLine-1][StartPos : EndPos] > > > - if self.__Token.lower() in [TAB_IF, TAB_END_IF, TAB_ELSE_IF, TAB_ELSE, TAB_IF_DEF, TAB_IF_N_DEF, TAB_ERROR, > > TAB_INCLUDE]: > > > - self.__Token = self.__Token.lower() > > > + self._Token = self.Profile.FileLinesList[StartLine-1][StartPos: EndPos] > > > + if self._Token.lower() in [TAB_IF, TAB_END_IF, TAB_ELSE_IF, TAB_ELSE, TAB_IF_DEF, TAB_IF_N_DEF, TAB_ERROR, > > TAB_INCLUDE]: > > > + self._Token = self._Token.lower() > > > if StartPos != self.CurrentOffsetWithinLine: > > > return True > > > else: > > > return False > > > > > > - ## __GetNextGuid() method > > > + ## _GetNextGuid() method > > > # > > > # Get next token unit before a seperator > > > - # If found, the GUID string is put into self.__Token > > > + # If found, the GUID string is put into self._Token > > > # > > > # @param self The object pointer > > > # @retval True Successfully find a registry format GUID, file buffer pointer moved forward > > > # @retval False Not able to find a registry format GUID, file buffer pointer not changed > > > # > > > - def __GetNextGuid(self): > > > - > > > - if not self.__GetNextToken(): > > > + def _GetNextGuid(self): > > > + if not self._GetNextToken(): > > > return False > > > - if gGuidPattern.match(self.__Token) is not None: > > > + if gGuidPattern.match(self._Token) is not None: > > > return True > > > else: > > > - self.__UndoToken() > > > + self._UndoToken() > > > return False > > > > > > @staticmethod > > > - def __Verify(Name, Value, Scope): > > > + def _Verify(Name, Value, Scope): > > > # value verification only applies to numeric values. > > > if Scope not in TAB_PCD_NUMERIC_TYPES: > > > return > > > @@ -1157,17 +1108,17 @@ class FdfParser: > > > EdkLogger.error("FdfParser", FORMAT_INVALID, "Too large value for %s." % Name) > > > return True > > > > > > - ## __UndoToken() method > > > + ## _UndoToken() method > > > # > > > # Go back one token unit in file buffer > > > # > > > # @param self The object pointer > > > # > > > - def __UndoToken(self): > > > - self.__UndoOneChar() > > > - while self.__CurrentChar().isspace(): > > > - if not self.__UndoOneChar(): > > > - self.__GetOneChar() > > > + def _UndoToken(self): > > > + self._UndoOneChar() > > > + while self._CurrentChar().isspace(): > > > + if not self._UndoOneChar(): > > > + self._GetOneChar() > > > return > > > > > > > > > @@ -1175,118 +1126,118 @@ class FdfParser: > > > CurrentLine = self.CurrentLineNumber > > > while CurrentLine == self.CurrentLineNumber: > > > > > > - TempChar = self.__CurrentChar() > > > + TempChar = self._CurrentChar() > > > # Try to find the end char that is not a space and not in seperator tuple. > > > # That is, when we got a space or any char in the tuple, we got the end of token. 
> > > - if not str(TempChar).isspace() and not TempChar in SEPERATOR_TUPLE: > > > - if not self.__UndoOneChar(): > > > + if not str(TempChar).isspace() and not TempChar in SEPARATORS: > > > + if not self._UndoOneChar(): > > > return > > > # if we happen to meet a seperator as the first char, we must proceed to get it. > > > # That is, we get a token that is a seperator char. nomally it is the boundary of other tokens. > > > - elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPERATOR_TUPLE: > > > + elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPARATORS: > > > return > > > else: > > > break > > > > > > - self.__GetOneChar() > > > + self._GetOneChar() > > > > > > - ## __GetNextHexNumber() method > > > + ## _GetNextHexNumber() method > > > # > > > # Get next HEX data before a seperator > > > - # If found, the HEX data is put into self.__Token > > > + # If found, the HEX data is put into self._Token > > > # > > > # @param self The object pointer > > > # @retval True Successfully find a HEX data, file buffer pointer moved forward > > > # @retval False Not able to find a HEX data, file buffer pointer not changed > > > # > > > - def __GetNextHexNumber(self): > > > - if not self.__GetNextToken(): > > > + def _GetNextHexNumber(self): > > > + if not self._GetNextToken(): > > > return False > > > - if gHexPatternAll.match(self.__Token): > > > + if gHexPatternAll.match(self._Token): > > > return True > > > else: > > > - self.__UndoToken() > > > + self._UndoToken() > > > return False > > > > > > - ## __GetNextDecimalNumber() method > > > + ## _GetNextDecimalNumber() method > > > # > > > # Get next decimal data before a seperator > > > - # If found, the decimal data is put into self.__Token > > > + # If found, the decimal data is put into self._Token > > > # > > > # @param self The object pointer > > > # @retval True Successfully find a decimal data, file buffer pointer moved forward > > > # @retval False Not able to find a decimal data, file buffer pointer not changed > > > # > > > - def __GetNextDecimalNumber(self): > > > - if not self.__GetNextToken(): > > > + def _GetNextDecimalNumber(self): > > > + if not self._GetNextToken(): > > > return False > > > - if self.__Token.isdigit(): > > > + if self._Token.isdigit(): > > > return True > > > else: > > > - self.__UndoToken() > > > + self._UndoToken() > > > return False > > > > > > - def __GetNextPcdSettings(self): > > > - if not self.__GetNextWord(): > > > + def _GetNextPcdSettings(self): > > > + if not self._GetNextWord(): > > > raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) > > > - pcdTokenSpaceCName = self.__Token > > > + pcdTokenSpaceCName = self._Token > > > > > > - if not self.__IsToken( "."): > > > + if not self._IsToken(TAB_SPLIT): > > > raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextWord(): > > > + if not self._GetNextWord(): > > > raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) > > > - pcdCName = self.__Token > > > + pcdCName = self._Token > > > > > > Fields = [] > > > - while self.__IsToken("."): > > > - if not self.__GetNextPcdWord(): > > > + while self._IsToken(TAB_SPLIT): > > > + if not self._GetNextPcdWord(): > > > raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, > > self.CurrentLineNumber) > > > - Fields.append(self.__Token) > > > + Fields.append(self._Token) > > > 
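_GetNextGuid, _GetNextHexNumber and _GetNextDecimalNumber above share a try/validate/undo shape: fetch the next token, test it, and rewind the buffer position when it does not match so the caller can try another production. A minimal sketch of that pattern over a flat token list instead of the parser's line/offset buffer:

    import re

    HEX = re.compile(r'0[xX][0-9a-fA-F]+$')

    class TinyReader:
        def __init__(self, tokens):
            self.tokens = tokens
            self.pos = 0

        def next_token(self):
            if self.pos >= len(self.tokens):
                return None
            tok = self.tokens[self.pos]
            self.pos += 1
            return tok

        def next_hex(self):
            """Return the next token if it is hex, else rewind (like _UndoToken) and return None."""
            mark = self.pos
            tok = self.next_token()
            if tok is not None and HEX.match(tok):
                return tok
            self.pos = mark
            return None

    r = TinyReader(['Size', '=', '0x10000'])
    print(r.next_hex())     # None - 'Size' is not hex, position restored
    print(r.next_token())   # 'Size'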
> > > - return (pcdCName, pcdTokenSpaceCName,".".join(Fields)) > > > + return (pcdCName, pcdTokenSpaceCName,TAB_SPLIT.join(Fields)) > > > > > > - ## __GetStringData() method > > > + ## _GetStringData() method > > > # > > > # Get string contents quoted in "" > > > - # If found, the decimal data is put into self.__Token > > > + # If found, the decimal data is put into self._Token > > > # > > > # @param self The object pointer > > > # @retval True Successfully find a string data, file buffer pointer moved forward > > > # @retval False Not able to find a string data, file buffer pointer not changed > > > # > > > - def __GetStringData(self): > > > - if self.__Token.startswith("\"") or self.__Token.startswith("L\""): > > > - self.__UndoToken() > > > - self.__SkipToToken("\"") > > > + def _GetStringData(self): > > > + if self._Token.startswith(T_CHAR_DOUBLE_QUOTE) or self._Token.startswith("L\""): > > > + self._UndoToken() > > > + self._SkipToToken(T_CHAR_DOUBLE_QUOTE) > > > currentLineNumber = self.CurrentLineNumber > > > > > > - if not self.__SkipToToken("\""): > > > + if not self._SkipToToken(T_CHAR_DOUBLE_QUOTE): > > > raise Warning("Missing Quote \" for String", self.FileName, self.CurrentLineNumber) > > > if currentLineNumber != self.CurrentLineNumber: > > > raise Warning("Missing Quote \" for String", self.FileName, self.CurrentLineNumber) > > > - self.__Token = self.__SkippedChars.rstrip('\"') > > > + self._Token = self._SkippedChars.rstrip(T_CHAR_DOUBLE_QUOTE) > > > return True > > > > > > - elif self.__Token.startswith("\'") or self.__Token.startswith("L\'"): > > > - self.__UndoToken() > > > - self.__SkipToToken("\'") > > > + elif self._Token.startswith(T_CHAR_SINGLE_QUOTE) or self._Token.startswith("L\'"): > > > + self._UndoToken() > > > + self._SkipToToken(T_CHAR_SINGLE_QUOTE) > > > currentLineNumber = self.CurrentLineNumber > > > > > > - if not self.__SkipToToken("\'"): > > > + if not self._SkipToToken(T_CHAR_SINGLE_QUOTE): > > > raise Warning("Missing Quote \' for String", self.FileName, self.CurrentLineNumber) > > > if currentLineNumber != self.CurrentLineNumber: > > > raise Warning("Missing Quote \' for String", self.FileName, self.CurrentLineNumber) > > > - self.__Token = self.__SkippedChars.rstrip('\'') > > > + self._Token = self._SkippedChars.rstrip(T_CHAR_SINGLE_QUOTE) > > > return True > > > > > > else: > > > return False > > > > > > - ## __SkipToToken() method > > > + ## _SkipToToken() method > > > # > > > # Search forward in file buffer for the string > > > - # The skipped chars are put into self.__SkippedChars > > > + # The skipped chars are put into self._SkippedChars > > > # > > > # @param self The object pointer > > > # @param String The string to search > > > @@ -1294,25 +1245,25 @@ class FdfParser: > > > # @retval True Successfully find the string, file buffer pointer moved forward > > > # @retval False Not able to find the string, file buffer pointer not changed > > > # > > > - def __SkipToToken(self, String, IgnoreCase = False): > > > + def _SkipToToken(self, String, IgnoreCase = False): > > > StartPos = self.GetFileBufferPos() > > > > > > - self.__SkippedChars = "" > > > - while not self.__EndOfFile(): > > > + self._SkippedChars = "" > > > + while not self._EndOfFile(): > > > index = -1 > > > if IgnoreCase: > > > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(String.upper()) > > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(String.upper()) > > > else: > > > - index = 
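_GetStringData above recognises both "..." and L"..." (plus the single-quoted forms), insists the closing quote sits on the same line, and keeps only the quoted contents. A small sketch of the same check using a regular expression instead of the parser's skip-to-token scan:

    import re

    # optional L prefix, then a same-line quoted body; groups 2/3 hold the contents
    QUOTED = re.compile(r'L?("([^"\n]*)"|\'([^\'\n]*)\')')

    def get_string_data(token):
        m = QUOTED.match(token)
        if not m:
            return None
        return m.group(2) if m.group(2) is not None else m.group(3)

    print(get_string_data('L"FvMainCompact"'))   # FvMainCompact
    print(get_string_data('"abc'))               # None - no closing quote on the line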
self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(String) > > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(String) > > > if index == 0: > > > self.CurrentOffsetWithinLine += len(String) > > > - self.__SkippedChars += String > > > + self._SkippedChars += String > > > return True > > > - self.__SkippedChars += str(self.__CurrentChar()) > > > - self.__GetOneChar() > > > + self._SkippedChars += str(self._CurrentChar()) > > > + self._GetOneChar() > > > > > > - self.SetFileBufferPos( StartPos) > > > - self.__SkippedChars = "" > > > + self.SetFileBufferPos(StartPos) > > > + self._SkippedChars = "" > > > return False > > > > > > ## GetFileBufferPos() method > > > @@ -1343,18 +1294,18 @@ class FdfParser: > > > # @param self The object pointer > > > # > > > def Preprocess(self): > > > - self.__StringToList() > > > + self._StringToList() > > > self.PreprocessFile() > > > self.PreprocessIncludeFile() > > > - self.__StringToList() > > > + self._StringToList() > > > self.PreprocessFile() > > > self.PreprocessConditionalStatement() > > > - self.__StringToList() > > > - for Pos in self.__WipeOffArea: > > > - self.__ReplaceFragment(Pos[0], Pos[1]) > > > + self._StringToList() > > > + for Pos in self._WipeOffArea: > > > + self._ReplaceFragment(Pos[0], Pos[1]) > > > self.Profile.FileLinesList = ["".join(list) for list in self.Profile.FileLinesList] > > > > > > - while self.__GetDefines(): > > > + while self._GetDefines(): > > > pass > > > > > > ## ParseFile() method > > > @@ -1365,19 +1316,18 @@ class FdfParser: > > > # @param self The object pointer > > > # > > > def ParseFile(self): > > > - > > > try: > > > self.Preprocess() > > > - self.__GetError() > > > + self._GetError() > > > # > > > # Keep processing sections of the FDF until no new sections or a syntax error is found > > > # > > > - while self.__GetFd() or self.__GetFv() or self.__GetFmp() or self.__GetCapsule() or self.__GetVtf() or self.__GetRule() > or > > self.__GetOptionRom(): > > > + while self._GetFd() or self._GetFv() or self._GetFmp() or self._GetCapsule() or self._GetVtf() or self._GetRule() or > > self._GetOptionRom(): > > > pass > > > > > > except Warning as X: > > > - self.__UndoToken() > > > - #'\n\tGot Token: \"%s\" from File %s\n' % (self.__Token, FileLineTuple[0]) + \ > > > + self._UndoToken() > > > + #'\n\tGot Token: \"%s\" from File %s\n' % (self._Token, FileLineTuple[0]) + \ > > > # At this point, the closest parent would be the included file itself > > > Profile = GetParentAtLine(X.OriginalLineNumber) > > > if Profile is not None: > > > @@ -1386,7 +1336,7 @@ class FdfParser: > > > else: > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > X.Message += ' near line %d, column %d: %s' \ > > > - % (FileLineTuple[1], self.CurrentOffsetWithinLine + 1, self.Profile.FileLinesList[self.CurrentLineNumber - > > 1][self.CurrentOffsetWithinLine :].rstrip('\n').rstrip('\r')) > > > + % (FileLineTuple[1], self.CurrentOffsetWithinLine + 1, self.Profile.FileLinesList[self.CurrentLineNumber - > > 1][self.CurrentOffsetWithinLine:].rstrip(TAB_LINE_BREAK).rstrip(T_CHAR_CR)) > > > raise > > > > > > ## SectionParser() method > > > @@ -1403,7 +1353,7 @@ class FdfParser: > > > and not S.startswith("[VTF.") and not S.startswith("[RULE.") and not S.startswith("[OPTIONROM.") and not > > S.startswith('[FMPPAYLOAD.'): > > > raise Warning("Unknown section or section appear sequence error (The correct sequence should be [DEFINES], > [FD.], > > [FV.], [Capsule.], [VTF.], [Rule.], [OptionRom.], 
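ParseFile above loops over _GetFd/_GetFv/_GetFmp/_GetCapsule/... until none of them make progress; each section parser peeks at the next token and backs off (undoing it) when the token opens somebody else's section. A toy version of that cooperative dispatch over a pre-split list of section headers, just to show the control flow, not the real FdfParser API:

    class ToyParser:
        def __init__(self, sections):
            self.sections = sections
            self.pos = 0
            self.parsed = []

        def _get_section(self, prefix):
            """Consume the next section only if it belongs to this parser."""
            if self.pos >= len(self.sections):
                return False
            if not self.sections[self.pos].startswith(prefix):
                return False         # leave the position alone, like _UndoToken()
            self.parsed.append(self.sections[self.pos])
            self.pos += 1
            return True

        def parse(self):
            # keep going while any section parser consumed something
            while self._get_section('[FD.') or self._get_section('[FV.') \
                    or self._get_section('[Capsule.'):
                pass

    p = ToyParser(['[FD.MAIN]', '[FV.FVMAIN]', '[FV.FVRECOVERY]', '[Capsule.Sample]'])
    p.parse()
    print(p.parsed)    # all four headers, in file order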
[FMPPAYLOAD.])", self.FileName, self.CurrentLineNumber) > > > > > > - ## __GetDefines() method > > > + ## _GetDefines() method > > > # > > > # Get Defines section contents and store its data into AllMacrosList > > > # > > > @@ -1411,56 +1361,55 @@ class FdfParser: > > > # @retval True Successfully find a Defines > > > # @retval False Not able to find a Defines > > > # > > > - def __GetDefines(self): > > > - > > > - if not self.__GetNextToken(): > > > + def _GetDefines(self): > > > + if not self._GetNextToken(): > > > return False > > > > > > - S = self.__Token.upper() > > > - if S.startswith("[") and not S.startswith("[DEFINES"): > > > + S = self._Token.upper() > > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[DEFINES"): > > > self.SectionParser(S) > > > - self.__UndoToken() > > > + self._UndoToken() > > > return False > > > > > > - self.__UndoToken() > > > - if not self.__IsToken("[DEFINES", True): > > > + self._UndoToken() > > > + if not self._IsToken("[DEFINES", True): > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > raise Warning("expected [DEFINES", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken( "]"): > > > + if not self._IsToken(TAB_SECTION_END): > > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > > > - while self.__GetNextWord(): > > > + while self._GetNextWord(): > > > # handle the SET statement > > > - if self.__Token == 'SET': > > > - self.__UndoToken() > > > - self.__GetSetStatement(None) > > > + if self._Token == 'SET': > > > + self._UndoToken() > > > + self._GetSetStatement(None) > > > continue > > > > > > - Macro = self.__Token > > > + Macro = self._Token > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextToken() or self.__Token.startswith('['): > > > + if not self._GetNextToken() or self._Token.startswith(TAB_SECTION_START): > > > raise Warning("expected MACRO value", self.FileName, self.CurrentLineNumber) > > > - Value = self.__Token > > > + Value = self._Token > > > > > > return False > > > > > > - ##__GetError() method > > > - def __GetError(self): > > > + ##_GetError() method > > > + def _GetError(self): > > > #save the Current information > > > CurrentLine = self.CurrentLineNumber > > > CurrentOffset = self.CurrentOffsetWithinLine > > > - while self.__GetNextToken(): > > > - if self.__Token == TAB_ERROR: > > > - EdkLogger.error('FdfParser', ERROR_STATEMENT, self.__CurrentLine().replace(TAB_ERROR, '', 1), > > File=self.FileName, Line=self.CurrentLineNumber) > > > + while self._GetNextToken(): > > > + if self._Token == TAB_ERROR: > > > + EdkLogger.error('FdfParser', ERROR_STATEMENT, self._CurrentLine().replace(TAB_ERROR, '', 1), > > File=self.FileName, Line=self.CurrentLineNumber) > > > self.CurrentLineNumber = CurrentLine > > > self.CurrentOffsetWithinLine = CurrentOffset > > > > > > - ## __GetFd() method > > > + ## _GetFd() method > > > # > > > # Get FD section contents and store its data into FD 
dictionary of self.Profile > > > # > > > @@ -1468,27 +1417,26 @@ class FdfParser: > > > # @retval True Successfully find a FD > > > # @retval False Not able to find a FD > > > # > > > - def __GetFd(self): > > > - > > > - if not self.__GetNextToken(): > > > + def _GetFd(self): > > > + if not self._GetNextToken(): > > > return False > > > > > > - S = self.__Token.upper() > > > - if S.startswith("[") and not S.startswith("[FD."): > > > + S = self._Token.upper() > > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[FD."): > > > if not S.startswith("[FV.") and not S.startswith('[FMPPAYLOAD.') and not S.startswith("[CAPSULE.") \ > > > and not S.startswith("[VTF.") and not S.startswith("[RULE.") and not S.startswith("[OPTIONROM."): > > > raise Warning("Unknown section", self.FileName, self.CurrentLineNumber) > > > - self.__UndoToken() > > > + self._UndoToken() > > > return False > > > > > > - self.__UndoToken() > > > - if not self.__IsToken("[FD.", True): > > > + self._UndoToken() > > > + if not self._IsToken("[FD.", True): > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > raise Warning("expected [FD.]", self.FileName, self.CurrentLineNumber) > > > > > > - FdName = self.__GetUiName() > > > + FdName = self._GetUiName() > > > if FdName == "": > > > if len (self.Profile.FdDict) == 0: > > > FdName = GenFdsGlobalVariable.PlatformName > > > @@ -1502,56 +1450,56 @@ class FdfParser: > > > if self.CurrentFdName in self.Profile.FdDict: > > > raise Warning("Unexpected the same FD name", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken( "]"): > > > + if not self._IsToken(TAB_SECTION_END): > > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > > > - FdObj = Fd.FD() > > > + FdObj = FD() > > > FdObj.FdUiName = self.CurrentFdName > > > self.Profile.FdDict[self.CurrentFdName] = FdObj > > > > > > if len (self.Profile.FdDict) > 1 and self.Profile.FdNameNotSet: > > > raise Warning("expected all FDs have their name", self.FileName, self.CurrentLineNumber) > > > > > > - Status = self.__GetCreateFile(FdObj) > > > + Status = self._GetCreateFile(FdObj) > > > if not Status: > > > raise Warning("FD name error", self.FileName, self.CurrentLineNumber) > > > > > > - while self.__GetTokenStatements(FdObj): > > > + while self._GetTokenStatements(FdObj): > > > pass > > > for Attr in ("BaseAddress", "Size", "ErasePolarity"): > > > if getattr(FdObj, Attr) is None: > > > - self.__GetNextToken() > > > + self._GetNextToken() > > > raise Warning("Keyword %s missing" % Attr, self.FileName, self.CurrentLineNumber) > > > > > > if not FdObj.BlockSizeList: > > > FdObj.BlockSizeList.append((1, FdObj.Size, None)) > > > > > > - self.__GetDefineStatements(FdObj) > > > + self._GetDefineStatements(FdObj) > > > > > > - self.__GetSetStatements(FdObj) > > > + self._GetSetStatements(FdObj) > > > > > > - if not self.__GetRegionLayout(FdObj): > > > + if not self._GetRegionLayout(FdObj): > > > raise Warning("expected region layout", self.FileName, self.CurrentLineNumber) > > > > > > - while self.__GetRegionLayout(FdObj): > > > + while 
self._GetRegionLayout(FdObj): > > > pass > > > return True > > > > > > - ## __GetUiName() method > > > + ## _GetUiName() method > > > # > > > # Return the UI name of a section > > > # > > > # @param self The object pointer > > > # @retval FdName UI name > > > # > > > - def __GetUiName(self): > > > + def _GetUiName(self): > > > Name = "" > > > - if self.__GetNextWord(): > > > - Name = self.__Token > > > + if self._GetNextWord(): > > > + Name = self._Token > > > > > > return Name > > > > > > - ## __GetCreateFile() method > > > + ## _GetCreateFile() method > > > # > > > # Return the output file name of object > > > # > > > @@ -1559,16 +1507,15 @@ class FdfParser: > > > # @param Obj object whose data will be stored in file > > > # @retval FdName UI name > > > # > > > - def __GetCreateFile(self, Obj): > > > - > > > - if self.__IsKeyword( "CREATE_FILE"): > > > - if not self.__IsToken( "="): > > > + def _GetCreateFile(self, Obj): > > > + if self._IsKeyword("CREATE_FILE"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected file name", self.FileName, self.CurrentLineNumber) > > > > > > - FileName = self.__Token > > > + FileName = self._Token > > > Obj.CreateFileName = FileName > > > > > > return True > > > @@ -1576,25 +1523,25 @@ class FdfParser: > > > def SetPcdLocalation(self,pcdpair): > > > self.Profile.PcdLocalDict[pcdpair] = (self.Profile.FileName,self.CurrentLineNumber) > > > > > > - ## __GetTokenStatements() method > > > + ## _GetTokenStatements() method > > > # > > > # Get token statements > > > # > > > # @param self The object pointer > > > # @param Obj for whom token statement is got > > > # > > > - def __GetTokenStatements(self, Obj): > > > - if self.__IsKeyword( "BaseAddress"): > > > - if not self.__IsToken( "="): > > > + def _GetTokenStatements(self, Obj): > > > + if self._IsKeyword("BaseAddress"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextHexNumber(): > > > + if not self._GetNextHexNumber(): > > > raise Warning("expected Hex base address", self.FileName, self.CurrentLineNumber) > > > > > > - Obj.BaseAddress = self.__Token > > > + Obj.BaseAddress = self._Token > > > > > > - if self.__IsToken( "|"): > > > - pcdPair = self.__GetNextPcdSettings() > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > + pcdPair = self._GetNextPcdSettings() > > > Obj.BaseAddressPcd = pcdPair > > > self.Profile.PcdDict[pcdPair] = Obj.BaseAddress > > > self.SetPcdLocalation(pcdPair) > > > @@ -1602,16 +1549,16 @@ class FdfParser: > > > self.Profile.PcdFileLineDict[pcdPair] = FileLineTuple > > > return True > > > > > > - if self.__IsKeyword( "Size"): > > > - if not self.__IsToken( "="): > > > + if self._IsKeyword("Size"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextHexNumber(): > > > + if not self._GetNextHexNumber(): > > > raise Warning("expected Hex size", self.FileName, self.CurrentLineNumber) > > > > > > - Size = self.__Token > > > - if self.__IsToken( "|"): > > > - pcdPair = self.__GetNextPcdSettings() > > > + Size = self._Token > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > + pcdPair = self._GetNextPcdSettings() > > > Obj.SizePcd = pcdPair > > > self.Profile.PcdDict[pcdPair] = Size > > > 
self.SetPcdLocalation(pcdPair) > > > @@ -1620,22 +1567,22 @@ class FdfParser: > > > Obj.Size = long(Size, 0) > > > return True > > > > > > - if self.__IsKeyword( "ErasePolarity"): > > > - if not self.__IsToken( "="): > > > + if self._IsKeyword("ErasePolarity"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected Erase Polarity", self.FileName, self.CurrentLineNumber) > > > > > > - if self.__Token != "1" and self.__Token != "0": > > > + if self._Token != "1" and self._Token != "0": > > > raise Warning("expected 1 or 0 Erase Polarity", self.FileName, self.CurrentLineNumber) > > > > > > - Obj.ErasePolarity = self.__Token > > > + Obj.ErasePolarity = self._Token > > > return True > > > > > > - return self.__GetBlockStatements(Obj) > > > + return self._GetBlockStatements(Obj) > > > > > > - ## __GetAddressStatements() method > > > + ## _GetAddressStatements() method > > > # > > > # Get address statements > > > # > > > @@ -1644,38 +1591,37 @@ class FdfParser: > > > # @retval True Successfully find > > > # @retval False Not able to find > > > # > > > - def __GetAddressStatements(self, Obj): > > > - > > > - if self.__IsKeyword("BsBaseAddress"): > > > - if not self.__IsToken( "="): > > > + def _GetAddressStatements(self, Obj): > > > + if self._IsKeyword("BsBaseAddress"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): > > > + if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): > > > raise Warning("expected address", self.FileName, self.CurrentLineNumber) > > > > > > - BsAddress = long(self.__Token, 0) > > > + BsAddress = long(self._Token, 0) > > > Obj.BsBaseAddress = BsAddress > > > > > > - if self.__IsKeyword("RtBaseAddress"): > > > - if not self.__IsToken( "="): > > > + if self._IsKeyword("RtBaseAddress"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): > > > + if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): > > > raise Warning("expected address", self.FileName, self.CurrentLineNumber) > > > > > > - RtAddress = long(self.__Token, 0) > > > + RtAddress = long(self._Token, 0) > > > Obj.RtBaseAddress = RtAddress > > > > > > - ## __GetBlockStatements() method > > > + ## _GetBlockStatements() method > > > # > > > # Get block statements > > > # > > > # @param self The object pointer > > > # @param Obj for whom block statement is got > > > # > > > - def __GetBlockStatements(self, Obj): > > > + def _GetBlockStatements(self, Obj): > > > IsBlock = False > > > - while self.__GetBlockStatement(Obj): > > > + while self._GetBlockStatement(Obj): > > > IsBlock = True > > > > > > Item = Obj.BlockSizeList[-1] > > > @@ -1683,7 +1629,7 @@ class FdfParser: > > > raise Warning("expected block statement", self.FileName, self.CurrentLineNumber) > > > return IsBlock > > > > > > - ## __GetBlockStatement() method > > > + ## _GetBlockStatement() method > > > # > > > # Get block statement > > > # > > > @@ -1692,20 +1638,20 @@ class FdfParser: > > > # @retval True Successfully find > > > # @retval False Not able to find > > > # > > > - def __GetBlockStatement(self, Obj): > > > - if not 
self.__IsKeyword( "BlockSize"): > > > + def _GetBlockStatement(self, Obj): > > > + if not self._IsKeyword("BlockSize"): > > > return False > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextHexNumber() and not self.__GetNextDecimalNumber(): > > > + if not self._GetNextHexNumber() and not self._GetNextDecimalNumber(): > > > raise Warning("expected Hex or Integer block size", self.FileName, self.CurrentLineNumber) > > > > > > - BlockSize = self.__Token > > > + BlockSize = self._Token > > > BlockSizePcd = None > > > - if self.__IsToken( "|"): > > > - PcdPair = self.__GetNextPcdSettings() > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > + PcdPair = self._GetNextPcdSettings() > > > BlockSizePcd = PcdPair > > > self.Profile.PcdDict[PcdPair] = BlockSize > > > self.SetPcdLocalation(PcdPair) > > > @@ -1714,19 +1660,19 @@ class FdfParser: > > > BlockSize = long(BlockSize, 0) > > > > > > BlockNumber = None > > > - if self.__IsKeyword( "NumBlocks"): > > > - if not self.__IsToken( "="): > > > + if self._IsKeyword("NumBlocks"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): > > > + if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): > > > raise Warning("expected block numbers", self.FileName, self.CurrentLineNumber) > > > > > > - BlockNumber = long(self.__Token, 0) > > > + BlockNumber = long(self._Token, 0) > > > > > > Obj.BlockSizeList.append((BlockSize, BlockNumber, BlockSizePcd)) > > > return True > > > > > > - ## __GetDefineStatements() method > > > + ## _GetDefineStatements() method > > > # > > > # Get define statements > > > # > > > @@ -1735,11 +1681,11 @@ class FdfParser: > > > # @retval True Successfully find > > > # @retval False Not able to find > > > # > > > - def __GetDefineStatements(self, Obj): > > > - while self.__GetDefineStatement( Obj): > > > + def _GetDefineStatements(self, Obj): > > > + while self._GetDefineStatement(Obj): > > > pass > > > > > > - ## __GetDefineStatement() method > > > + ## _GetDefineStatement() method > > > # > > > # Get define statement > > > # > > > @@ -1748,24 +1694,24 @@ class FdfParser: > > > # @retval True Successfully find > > > # @retval False Not able to find > > > # > > > - def __GetDefineStatement(self, Obj): > > > - if self.__IsKeyword("DEFINE"): > > > - self.__GetNextToken() > > > - Macro = self.__Token > > > - if not self.__IsToken( "="): > > > + def _GetDefineStatement(self, Obj): > > > + if self._IsKeyword(TAB_DEFINE): > > > + self._GetNextToken() > > > + Macro = self._Token > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected value", self.FileName, self.CurrentLineNumber) > > > > > > - Value = self.__Token > > > + Value = self._Token > > > Macro = '$(' + Macro + ')' > > > Obj.DefineVarDict[Macro] = Value > > > return True > > > > > > return False > > > > > > - ## __GetSetStatements() method > > > + ## _GetSetStatements() method > > > # > > > # Get set statements > > > # > > > @@ -1774,11 +1720,11 @@ class FdfParser: > > > # @retval True Successfully find > > > # @retval False Not able to find > > > # > > > - def __GetSetStatements(self, Obj): > > > - 
while self.__GetSetStatement(Obj): > > > + def _GetSetStatements(self, Obj): > > > + while self._GetSetStatement(Obj): > > > pass > > > > > > - ## __GetSetStatement() method > > > + ## _GetSetStatement() method > > > # > > > # Get set statement > > > # > > > @@ -1787,15 +1733,15 @@ class FdfParser: > > > # @retval True Successfully find > > > # @retval False Not able to find > > > # > > > - def __GetSetStatement(self, Obj): > > > - if self.__IsKeyword("SET"): > > > - PcdPair = self.__GetNextPcdSettings() > > > + def _GetSetStatement(self, Obj): > > > + if self._IsKeyword("SET"): > > > + PcdPair = self._GetNextPcdSettings() > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - Value = self.__GetExpression() > > > - Value = self.__EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) > > > + Value = self._GetExpression() > > > + Value = self._EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) > > > > > > if Obj: > > > Obj.SetVarDict[PcdPair] = Value > > > @@ -1807,19 +1753,19 @@ class FdfParser: > > > > > > return False > > > > > > - ## __CalcRegionExpr(self) > > > + ## _CalcRegionExpr(self) > > > # > > > # Calculate expression for offset or size of a region > > > # > > > # @return: None if invalid expression > > > # Calculated number if successfully > > > # > > > - def __CalcRegionExpr(self): > > > + def _CalcRegionExpr(self): > > > StartPos = self.GetFileBufferPos() > > > Expr = '' > > > PairCount = 0 > > > - while not self.__EndOfFile(): > > > - CurCh = self.__CurrentChar() > > > + while not self._EndOfFile(): > > > + CurCh = self._CurrentChar() > > > if CurCh == '(': > > > PairCount += 1 > > > elif CurCh == ')': > > > @@ -1828,106 +1774,106 @@ class FdfParser: > > > if CurCh in '|\r\n' and PairCount == 0: > > > break > > > Expr += CurCh > > > - self.__GetOneChar() > > > + self._GetOneChar() > > > try: > > > return long( > > > ValueExpression(Expr, > > > - self.__CollectMacroPcd() > > > + self._CollectMacroPcd() > > > )(True), 0) > > > except Exception: > > > self.SetFileBufferPos(StartPos) > > > return None > > > > > > - ## __GetRegionLayout() method > > > + ## _GetRegionLayout() method > > > # > > > # Get region layout for FD > > > # > > > # @param self The object pointer > > > - # @param Fd for whom region is got > > > + # @param theFd for whom region is got > > > # @retval True Successfully find > > > # @retval False Not able to find > > > # > > > - def __GetRegionLayout(self, Fd): > > > - Offset = self.__CalcRegionExpr() > > > + def _GetRegionLayout(self, theFd): > > > + Offset = self._CalcRegionExpr() > > > if Offset is None: > > > return False > > > > > > - RegionObj = Region.Region() > > > + RegionObj = Region() > > > RegionObj.Offset = Offset > > > - Fd.RegionList.append(RegionObj) > > > + theFd.RegionList.append(RegionObj) > > > > > > - if not self.__IsToken( "|"): > > > + if not self._IsToken(TAB_VALUE_SPLIT): > > > raise Warning("expected '|'", self.FileName, self.CurrentLineNumber) > > > > > > - Size = self.__CalcRegionExpr() > > > + Size = self._CalcRegionExpr() > > > if Size is None: > > > raise Warning("expected Region Size", self.FileName, self.CurrentLineNumber) > > > RegionObj.Size = Size > > > > > > - if not self.__GetNextWord(): > > > + if not self._GetNextWord(): > > > return True > > > > > > - if not self.__Token in ("SET", BINARY_FILE_TYPE_FV, "FILE", "DATA", "CAPSULE", "INF"): > > > + if not self._Token in ("SET", 
BINARY_FILE_TYPE_FV, "FILE", "DATA", "CAPSULE", "INF"): > > > # > > > # If next token is a word which is not a valid FV type, it might be part of [PcdOffset[|PcdSize]] > > > # Or it might be next region's offset described by an expression which starts with a PCD. > > > # PcdOffset[|PcdSize] or OffsetPcdExpression|Size > > > # > > > - self.__UndoToken() > > > - IsRegionPcd = (RegionSizeGuidPattern.match(self.__CurrentLine()[self.CurrentOffsetWithinLine:]) or > > > - RegionOffsetPcdPattern.match(self.__CurrentLine()[self.CurrentOffsetWithinLine:])) > > > + self._UndoToken() > > > + IsRegionPcd = (RegionSizeGuidPattern.match(self._CurrentLine()[self.CurrentOffsetWithinLine:]) or > > > + RegionOffsetPcdPattern.match(self._CurrentLine()[self.CurrentOffsetWithinLine:])) > > > if IsRegionPcd: > > > - RegionObj.PcdOffset = self.__GetNextPcdSettings() > > > - self.Profile.PcdDict[RegionObj.PcdOffset] = "0x%08X" % (RegionObj.Offset + long(Fd.BaseAddress, 0)) > > > + RegionObj.PcdOffset = self._GetNextPcdSettings() > > > + self.Profile.PcdDict[RegionObj.PcdOffset] = "0x%08X" % (RegionObj.Offset + long(theFd.BaseAddress, 0)) > > > self.SetPcdLocalation(RegionObj.PcdOffset) > > > - self.__PcdDict['%s.%s' % (RegionObj.PcdOffset[1], RegionObj.PcdOffset[0])] = "0x%x" % RegionObj.Offset > > > + self._PcdDict['%s.%s' % (RegionObj.PcdOffset[1], RegionObj.PcdOffset[0])] = "0x%x" % RegionObj.Offset > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > self.Profile.PcdFileLineDict[RegionObj.PcdOffset] = FileLineTuple > > > - if self.__IsToken( "|"): > > > - RegionObj.PcdSize = self.__GetNextPcdSettings() > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > + RegionObj.PcdSize = self._GetNextPcdSettings() > > > self.Profile.PcdDict[RegionObj.PcdSize] = "0x%08X" % RegionObj.Size > > > self.SetPcdLocalation(RegionObj.PcdSize) > > > - self.__PcdDict['%s.%s' % (RegionObj.PcdSize[1], RegionObj.PcdSize[0])] = "0x%x" % RegionObj.Size > > > + self._PcdDict['%s.%s' % (RegionObj.PcdSize[1], RegionObj.PcdSize[0])] = "0x%x" % RegionObj.Size > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > self.Profile.PcdFileLineDict[RegionObj.PcdSize] = FileLineTuple > > > > > > - if not self.__GetNextWord(): > > > + if not self._GetNextWord(): > > > return True > > > > > > - if self.__Token == "SET": > > > - self.__UndoToken() > > > - self.__GetSetStatements( RegionObj) > > > - if not self.__GetNextWord(): > > > + if self._Token == "SET": > > > + self._UndoToken() > > > + self._GetSetStatements(RegionObj) > > > + if not self._GetNextWord(): > > > return True > > > > > > - elif self.__Token == BINARY_FILE_TYPE_FV: > > > - self.__UndoToken() > > > - self.__GetRegionFvType( RegionObj) > > > + elif self._Token == BINARY_FILE_TYPE_FV: > > > + self._UndoToken() > > > + self._GetRegionFvType(RegionObj) > > > > > > - elif self.__Token == "CAPSULE": > > > - self.__UndoToken() > > > - self.__GetRegionCapType( RegionObj) > > > + elif self._Token == "CAPSULE": > > > + self._UndoToken() > > > + self._GetRegionCapType(RegionObj) > > > > > > - elif self.__Token == "FILE": > > > - self.__UndoToken() > > > - self.__GetRegionFileType(RegionObj) > > > + elif self._Token == "FILE": > > > + self._UndoToken() > > > + self._GetRegionFileType(RegionObj) > > > > > > - elif self.__Token == "INF": > > > - self.__UndoToken() > > > + elif self._Token == "INF": > > > + self._UndoToken() > > > RegionObj.RegionType = "INF" > > > - while self.__IsKeyword("INF"): > > > - self.__UndoToken() > > > - ffsInf = 
self.__ParseInfStatement() > > > + while self._IsKeyword("INF"): > > > + self._UndoToken() > > > + ffsInf = self._ParseInfStatement() > > > if not ffsInf: > > > break > > > RegionObj.RegionDataList.append(ffsInf) > > > > > > - elif self.__Token == "DATA": > > > - self.__UndoToken() > > > - self.__GetRegionDataType(RegionObj) > > > + elif self._Token == "DATA": > > > + self._UndoToken() > > > + self._GetRegionDataType(RegionObj) > > > else: > > > - self.__UndoToken() > > > - if self.__GetRegionLayout(Fd): > > > + self._UndoToken() > > > + if self._GetRegionLayout(theFd): > > > return True > > > raise Warning("A valid region type was not found. " > > > "Valid types are [SET, FV, CAPSULE, FILE, DATA, INF]. This error occurred", > > > @@ -1935,189 +1881,185 @@ class FdfParser: > > > > > > return True > > > > > > - ## __GetRegionFvType() method > > > + ## _GetRegionFvType() method > > > # > > > # Get region fv data for region > > > # > > > # @param self The object pointer > > > # @param RegionObj for whom region data is got > > > # > > > - def __GetRegionFvType(self, RegionObj): > > > - > > > - if not self.__IsKeyword( BINARY_FILE_TYPE_FV): > > > + def _GetRegionFvType(self, RegionObj): > > > + if not self._IsKeyword(BINARY_FILE_TYPE_FV): > > > raise Warning("expected Keyword BINARY_FILE_TYPE_FV", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) > > > > > > RegionObj.RegionType = BINARY_FILE_TYPE_FV > > > - RegionObj.RegionDataList.append((self.__Token).upper()) > > > + RegionObj.RegionDataList.append((self._Token).upper()) > > > > > > - while self.__IsKeyword( BINARY_FILE_TYPE_FV): > > > + while self._IsKeyword(BINARY_FILE_TYPE_FV): > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) > > > > > > - RegionObj.RegionDataList.append((self.__Token).upper()) > > > + RegionObj.RegionDataList.append((self._Token).upper()) > > > > > > - ## __GetRegionCapType() method > > > + ## _GetRegionCapType() method > > > # > > > # Get region capsule data for region > > > # > > > # @param self The object pointer > > > # @param RegionObj for whom region data is got > > > # > > > - def __GetRegionCapType(self, RegionObj): > > > - > > > - if not self.__IsKeyword("CAPSULE"): > > > + def _GetRegionCapType(self, RegionObj): > > > + if not self._IsKeyword("CAPSULE"): > > > raise Warning("expected Keyword 'CAPSULE'", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected CAPSULE name", self.FileName, self.CurrentLineNumber) > > > > > > RegionObj.RegionType = "CAPSULE" > > > - RegionObj.RegionDataList.append(self.__Token) > > > + RegionObj.RegionDataList.append(self._Token) > > > > > > - while self.__IsKeyword("CAPSULE"): > > > + while self._IsKeyword("CAPSULE"): > > > > > > - if not 
self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected CAPSULE name", self.FileName, self.CurrentLineNumber) > > > > > > - RegionObj.RegionDataList.append(self.__Token) > > > + RegionObj.RegionDataList.append(self._Token) > > > > > > - ## __GetRegionFileType() method > > > + ## _GetRegionFileType() method > > > # > > > # Get region file data for region > > > # > > > # @param self The object pointer > > > # @param RegionObj for whom region data is got > > > # > > > - def __GetRegionFileType(self, RegionObj): > > > - > > > - if not self.__IsKeyword( "FILE"): > > > + def _GetRegionFileType(self, RegionObj): > > > + if not self._IsKeyword("FILE"): > > > raise Warning("expected Keyword 'FILE'", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected File name", self.FileName, self.CurrentLineNumber) > > > > > > RegionObj.RegionType = "FILE" > > > - RegionObj.RegionDataList.append( self.__Token) > > > + RegionObj.RegionDataList.append(self._Token) > > > > > > - while self.__IsKeyword( "FILE"): > > > + while self._IsKeyword("FILE"): > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected FILE name", self.FileName, self.CurrentLineNumber) > > > > > > - RegionObj.RegionDataList.append(self.__Token) > > > + RegionObj.RegionDataList.append(self._Token) > > > > > > - ## __GetRegionDataType() method > > > + ## _GetRegionDataType() method > > > # > > > # Get region array data for region > > > # > > > # @param self The object pointer > > > # @param RegionObj for whom region data is got > > > # > > > - def __GetRegionDataType(self, RegionObj): > > > - > > > - if not self.__IsKeyword( "DATA"): > > > + def _GetRegionDataType(self, RegionObj): > > > + if not self._IsKeyword("DATA"): > > > raise Warning("expected Region Data type", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken( "{"): > > > + if not self._IsToken("{"): > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextHexNumber(): > > > + if not self._GetNextHexNumber(): > > > raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) > > > > > > - if len(self.__Token) > 18: > > > + if len(self._Token) > 18: > > > raise Warning("Hex string can't be converted to a valid UINT64 value", self.FileName, self.CurrentLineNumber) > > > > > > # convert hex string value to byte hex string array > > > - AllString = self.__Token > > > + AllString = self._Token > > > AllStrLen = len (AllString) > > > DataString = "" > > > while AllStrLen > 4: > > > - DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + "," > > > + DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + TAB_COMMA_SPLIT > > > AllStrLen = AllStrLen 
- 2 > > > - DataString = DataString + AllString[:AllStrLen] + "," > > > + DataString = DataString + AllString[:AllStrLen] + TAB_COMMA_SPLIT > > > > > > # byte value array > > > - if len (self.__Token) <= 4: > > > - while self.__IsToken(","): > > > - if not self.__GetNextHexNumber(): > > > + if len (self._Token) <= 4: > > > + while self._IsToken(TAB_COMMA_SPLIT): > > > + if not self._GetNextHexNumber(): > > > raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) > > > - if len(self.__Token) > 4: > > > + if len(self._Token) > 4: > > > raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) > > > - DataString += self.__Token > > > - DataString += "," > > > + DataString += self._Token > > > + DataString += TAB_COMMA_SPLIT > > > > > > - if not self.__IsToken( "}"): > > > + if not self._IsToken("}"): > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > - DataString = DataString.rstrip(",") > > > + DataString = DataString.rstrip(TAB_COMMA_SPLIT) > > > RegionObj.RegionType = "DATA" > > > - RegionObj.RegionDataList.append( DataString) > > > + RegionObj.RegionDataList.append(DataString) > > > > > > - while self.__IsKeyword( "DATA"): > > > + while self._IsKeyword("DATA"): > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken( "{"): > > > + if not self._IsToken("{"): > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextHexNumber(): > > > + if not self._GetNextHexNumber(): > > > raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) > > > > > > - if len(self.__Token) > 18: > > > + if len(self._Token) > 18: > > > raise Warning("Hex string can't be converted to a valid UINT64 value", self.FileName, self.CurrentLineNumber) > > > > > > # convert hex string value to byte hex string array > > > - AllString = self.__Token > > > + AllString = self._Token > > > AllStrLen = len (AllString) > > > DataString = "" > > > while AllStrLen > 4: > > > - DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + "," > > > + DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + TAB_COMMA_SPLIT > > > AllStrLen = AllStrLen - 2 > > > - DataString = DataString + AllString[:AllStrLen] + "," > > > + DataString = DataString + AllString[:AllStrLen] + TAB_COMMA_SPLIT > > > > > > # byte value array > > > - if len (self.__Token) <= 4: > > > - while self.__IsToken(","): > > > - if not self.__GetNextHexNumber(): > > > + if len (self._Token) <= 4: > > > + while self._IsToken(TAB_COMMA_SPLIT): > > > + if not self._GetNextHexNumber(): > > > raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) > > > - if len(self.__Token) > 4: > > > + if len(self._Token) > 4: > > > raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) > > > - DataString += self.__Token > > > - DataString += "," > > > + DataString += self._Token > > > + DataString += TAB_COMMA_SPLIT > > > > > > - if not self.__IsToken( "}"): > > > + if not self._IsToken("}"): > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > - DataString = DataString.rstrip(",") > > > - RegionObj.RegionDataList.append( DataString) > > > + DataString = DataString.rstrip(TAB_COMMA_SPLIT) > > > + RegionObj.RegionDataList.append(DataString) > > > > > > - ## __GetFv() method > > 
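For readers skimming the DATA handling just above: the while loop peels two hex digits at a time off the end of the token, so a multi-byte literal is emitted as a comma-separated byte list in little-endian order, and the 18-character limit is simply "0x" plus sixteen digits, the widest UINT64 the statement accepts. A standalone sketch of the same transformation (the helper name is mine; only the loop mirrors the parser):

    TAB_COMMA_SPLIT = ','   # assumed to match the Common/DataType value

    def hex_token_to_byte_list(token):
        # "0x11223344" -> "0x44,0x33,0x22,0x11"
        data = ""
        length = len(token)
        while length > 4:
            data += "0x" + token[length - 2:length] + TAB_COMMA_SPLIT
            length -= 2
        data += token[:length] + TAB_COMMA_SPLIT
        return data.rstrip(TAB_COMMA_SPLIT)

    print(hex_token_to_byte_list("0x11223344"))   # 0x44,0x33,0x22,0x11
    print(hex_token_to_byte_list("0x12"))         # 0x12

The single-byte values that follow a comma are appended verbatim, which is why the parser insists they stay at most 4 characters long ("0x" plus two digits).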
> + ## _GetFv() method > > > # > > > # Get FV section contents and store its data into FV dictionary of self.Profile > > > # > > > @@ -2125,68 +2067,68 @@ class FdfParser: > > > # @retval True Successfully find a FV > > > # @retval False Not able to find a FV > > > # > > > - def __GetFv(self): > > > - if not self.__GetNextToken(): > > > + def _GetFv(self): > > > + if not self._GetNextToken(): > > > return False > > > > > > - S = self.__Token.upper() > > > - if S.startswith("[") and not S.startswith("[FV."): > > > + S = self._Token.upper() > > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[FV."): > > > self.SectionParser(S) > > > - self.__UndoToken() > > > + self._UndoToken() > > > return False > > > > > > - self.__UndoToken() > > > - if not self.__IsToken("[FV.", True): > > > + self._UndoToken() > > > + if not self._IsToken("[FV.", True): > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > - raise Warning("Unknown Keyword '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > + raise Warning("Unknown Keyword '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > > > - FvName = self.__GetUiName() > > > + FvName = self._GetUiName() > > > self.CurrentFvName = FvName.upper() > > > > > > - if not self.__IsToken( "]"): > > > + if not self._IsToken(TAB_SECTION_END): > > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > > > - FvObj = Fv.FV() > > > + FvObj = FV() > > > FvObj.UiFvName = self.CurrentFvName > > > self.Profile.FvDict[self.CurrentFvName] = FvObj > > > > > > - Status = self.__GetCreateFile(FvObj) > > > + Status = self._GetCreateFile(FvObj) > > > if not Status: > > > raise Warning("FV name error", self.FileName, self.CurrentLineNumber) > > > > > > - self.__GetDefineStatements(FvObj) > > > + self._GetDefineStatements(FvObj) > > > > > > - self.__GetAddressStatements(FvObj) > > > + self._GetAddressStatements(FvObj) > > > > > > FvObj.FvExtEntryTypeValue = [] > > > FvObj.FvExtEntryType = [] > > > FvObj.FvExtEntryData = [] > > > while True: > > > - self.__GetSetStatements(FvObj) > > > + self._GetSetStatements(FvObj) > > > > > > - if not (self.__GetBlockStatement(FvObj) or self.__GetFvBaseAddress(FvObj) or > > > - self.__GetFvForceRebase(FvObj) or self.__GetFvAlignment(FvObj) or > > > - self.__GetFvAttributes(FvObj) or self.__GetFvNameGuid(FvObj) or > > > - self.__GetFvExtEntryStatement(FvObj) or self.__GetFvNameString(FvObj)): > > > + if not (self._GetBlockStatement(FvObj) or self._GetFvBaseAddress(FvObj) or > > > + self._GetFvForceRebase(FvObj) or self._GetFvAlignment(FvObj) or > > > + self._GetFvAttributes(FvObj) or self._GetFvNameGuid(FvObj) or > > > + self._GetFvExtEntryStatement(FvObj) or self._GetFvNameString(FvObj)): > > > break > > > > > > if FvObj.FvNameString == 'TRUE' and not FvObj.FvNameGuid: > > > raise Warning("FvNameString found but FvNameGuid was not found", self.FileName, self.CurrentLineNumber) > > > > > > - self.__GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) > > > - self.__GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) > > > + 
self._GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) > > > + self._GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) > > > > > > while True: > > > - isInf = self.__GetInfStatement(FvObj) > > > - isFile = self.__GetFileStatement(FvObj, MacroDict = FvObj.DefineVarDict.copy()) > > > + isInf = self._GetInfStatement(FvObj) > > > + isFile = self._GetFileStatement(FvObj, MacroDict = FvObj.DefineVarDict.copy()) > > > if not isInf and not isFile: > > > break > > > > > > return True > > > > > > - ## __GetFvAlignment() method > > > + ## _GetFvAlignment() method > > > # > > > # Get alignment for FV > > > # > > > @@ -2195,26 +2137,25 @@ class FdfParser: > > > # @retval True Successfully find a alignment statement > > > # @retval False Not able to find a alignment statement > > > # > > > - def __GetFvAlignment(self, Obj): > > > - > > > - if not self.__IsKeyword( "FvAlignment"): > > > + def _GetFvAlignment(self, Obj): > > > + if not self._IsKeyword("FvAlignment"): > > > return False > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected alignment value", self.FileName, self.CurrentLineNumber) > > > > > > - if self.__Token.upper() not in ("1", "2", "4", "8", "16", "32", "64", "128", "256", "512", \ > > > + if self._Token.upper() not in ("1", "2", "4", "8", "16", "32", "64", "128", "256", "512", \ > > > "1K", "2K", "4K", "8K", "16K", "32K", "64K", "128K", "256K", "512K", \ > > > "1M", "2M", "4M", "8M", "16M", "32M", "64M", "128M", "256M", "512M", \ > > > "1G", "2G"): > > > - raise Warning("Unknown alignment value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > - Obj.FvAlignment = self.__Token > > > + raise Warning("Unknown alignment value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > + Obj.FvAlignment = self._Token > > > return True > > > > > > - ## __GetFvBaseAddress() method > > > + ## _GetFvBaseAddress() method > > > # > > > # Get BaseAddress for FV > > > # > > > @@ -2223,23 +2164,22 @@ class FdfParser: > > > # @retval True Successfully find a FvBaseAddress statement > > > # @retval False Not able to find a FvBaseAddress statement > > > # > > > - def __GetFvBaseAddress(self, Obj): > > > - > > > - if not self.__IsKeyword("FvBaseAddress"): > > > + def _GetFvBaseAddress(self, Obj): > > > + if not self._IsKeyword("FvBaseAddress"): > > > return False > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected FV base address value", self.FileName, self.CurrentLineNumber) > > > > > > - if not BaseAddrValuePattern.match(self.__Token.upper()): > > > - raise Warning("Unknown FV base address value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > - Obj.FvBaseAddress = self.__Token > > > + if not BaseAddrValuePattern.match(self._Token.upper()): > > > + raise Warning("Unknown FV base address value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > + Obj.FvBaseAddress = self._Token > > > return True > > > > > > - ## __GetFvForceRebase() method > > > + ## _GetFvForceRebase() method > > > # > > > # Get FvForceRebase for FV > > > # > > > @@ -2248,23 +2188,22 @@ class FdfParser: > > > # @retval True Successfully 
find a FvForceRebase statement > > > # @retval False Not able to find a FvForceRebase statement > > > # > > > - def __GetFvForceRebase(self, Obj): > > > - > > > - if not self.__IsKeyword("FvForceRebase"): > > > + def _GetFvForceRebase(self, Obj): > > > + if not self._IsKeyword("FvForceRebase"): > > > return False > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected FvForceRebase value", self.FileName, self.CurrentLineNumber) > > > > > > - if self.__Token.upper() not in ["TRUE", "FALSE", "0", "0X0", "0X00", "1", "0X1", "0X01"]: > > > - raise Warning("Unknown FvForceRebase value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > + if self._Token.upper() not in ["TRUE", "FALSE", "0", "0X0", "0X00", "1", "0X1", "0X01"]: > > > + raise Warning("Unknown FvForceRebase value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > > > - if self.__Token.upper() in ["TRUE", "1", "0X1", "0X01"]: > > > + if self._Token.upper() in ["TRUE", "1", "0X1", "0X01"]: > > > Obj.FvForceRebase = True > > > - elif self.__Token.upper() in ["FALSE", "0", "0X0", "0X00"]: > > > + elif self._Token.upper() in ["FALSE", "0", "0X0", "0X00"]: > > > Obj.FvForceRebase = False > > > else: > > > Obj.FvForceRebase = None > > > @@ -2272,7 +2211,7 @@ class FdfParser: > > > return True > > > > > > > > > - ## __GetFvAttributes() method > > > + ## _GetFvAttributes() method > > > # > > > # Get attributes for FV > > > # > > > @@ -2280,31 +2219,31 @@ class FdfParser: > > > # @param Obj for whom attribute is got > > > # @retval None > > > # > > > - def __GetFvAttributes(self, FvObj): > > > + def _GetFvAttributes(self, FvObj): > > > IsWordToken = False > > > - while self.__GetNextWord(): > > > + while self._GetNextWord(): > > > IsWordToken = True > > > - name = self.__Token > > > + name = self._Token > > > if name not in ("ERASE_POLARITY", "MEMORY_MAPPED", \ > > > "STICKY_WRITE", "LOCK_CAP", "LOCK_STATUS", "WRITE_ENABLED_CAP", \ > > > "WRITE_DISABLED_CAP", "WRITE_STATUS", "READ_ENABLED_CAP", \ > > > "READ_DISABLED_CAP", "READ_STATUS", "READ_LOCK_CAP", \ > > > "READ_LOCK_STATUS", "WRITE_LOCK_CAP", "WRITE_LOCK_STATUS", \ > > > "WRITE_POLICY_RELIABLE", "WEAK_ALIGNMENT", "FvUsedSizeEnable"): > > > - self.__UndoToken() > > > + self._UndoToken() > > > return False > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken() or self.__Token.upper() not in ("TRUE", "FALSE", "1", "0"): > > > + if not self._GetNextToken() or self._Token.upper() not in ("TRUE", "FALSE", "1", "0"): > > > raise Warning("expected TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) > > > > > > - FvObj.FvAttributeDict[name] = self.__Token > > > + FvObj.FvAttributeDict[name] = self._Token > > > > > > return IsWordToken > > > > > > - ## __GetFvNameGuid() method > > > + ## _GetFvNameGuid() method > > > # > > > # Get FV GUID for FV > > > # > > > @@ -2312,110 +2251,107 @@ class FdfParser: > > > # @param Obj for whom GUID is got > > > # @retval None > > > # > > > - def __GetFvNameGuid(self, FvObj): > > > - > > > - if not self.__IsKeyword( "FvNameGuid"): > > > + def _GetFvNameGuid(self, FvObj): > > > + if not self._IsKeyword("FvNameGuid"): > > > return False > > > > > > - if not 
self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextGuid(): > > > + if not self._GetNextGuid(): > > > raise Warning("expected FV GUID value", self.FileName, self.CurrentLineNumber) > > > > > > - FvObj.FvNameGuid = self.__Token > > > + FvObj.FvNameGuid = self._Token > > > > > > return True > > > > > > - def __GetFvNameString(self, FvObj): > > > - > > > - if not self.__IsKeyword( "FvNameString"): > > > + def _GetFvNameString(self, FvObj): > > > + if not self._IsKeyword("FvNameString"): > > > return False > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken() or self.__Token not in ('TRUE', 'FALSE'): > > > + if not self._GetNextToken() or self._Token not in ('TRUE', 'FALSE'): > > > raise Warning("expected TRUE or FALSE for FvNameString", self.FileName, self.CurrentLineNumber) > > > > > > - FvObj.FvNameString = self.__Token > > > + FvObj.FvNameString = self._Token > > > > > > return True > > > > > > - def __GetFvExtEntryStatement(self, FvObj): > > > - > > > - if not (self.__IsKeyword( "FV_EXT_ENTRY") or self.__IsKeyword( "FV_EXT_ENTRY_TYPE")): > > > + def _GetFvExtEntryStatement(self, FvObj): > > > + if not (self._IsKeyword("FV_EXT_ENTRY") or self._IsKeyword("FV_EXT_ENTRY_TYPE")): > > > return False > > > > > > - if not self.__IsKeyword ("TYPE"): > > > + if not self._IsKeyword ("TYPE"): > > > raise Warning("expected 'TYPE'", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextHexNumber() and not self.__GetNextDecimalNumber(): > > > + if not self._GetNextHexNumber() and not self._GetNextDecimalNumber(): > > > raise Warning("expected Hex FV extension entry type value At Line ", self.FileName, self.CurrentLineNumber) > > > > > > - FvObj.FvExtEntryTypeValue.append(self.__Token) > > > + FvObj.FvExtEntryTypeValue.append(self._Token) > > > > > > - if not self.__IsToken( "{"): > > > + if not self._IsToken("{"): > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsKeyword ("FILE") and not self.__IsKeyword ("DATA"): > > > + if not self._IsKeyword ("FILE") and not self._IsKeyword ("DATA"): > > > raise Warning("expected 'FILE' or 'DATA'", self.FileName, self.CurrentLineNumber) > > > > > > - FvObj.FvExtEntryType.append(self.__Token) > > > + FvObj.FvExtEntryType.append(self._Token) > > > > > > - if self.__Token == 'DATA': > > > + if self._Token == 'DATA': > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken( "{"): > > > + if not self._IsToken("{"): > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextHexNumber(): > > > + if not self._GetNextHexNumber(): > > > raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) > > > > > > - if len(self.__Token) > 4: > > > + if len(self._Token) > 4: > > > raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) > > > > > > - DataString = self.__Token > > > - DataString += "," > > > + DataString = self._Token 
> > > + DataString += TAB_COMMA_SPLIT > > > > > > - while self.__IsToken(","): > > > - if not self.__GetNextHexNumber(): > > > + while self._IsToken(TAB_COMMA_SPLIT): > > > + if not self._GetNextHexNumber(): > > > raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) > > > - if len(self.__Token) > 4: > > > + if len(self._Token) > 4: > > > raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) > > > - DataString += self.__Token > > > - DataString += "," > > > + DataString += self._Token > > > + DataString += TAB_COMMA_SPLIT > > > > > > - if not self.__IsToken( "}"): > > > + if not self._IsToken("}"): > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken( "}"): > > > + if not self._IsToken("}"): > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > - DataString = DataString.rstrip(",") > > > + DataString = DataString.rstrip(TAB_COMMA_SPLIT) > > > FvObj.FvExtEntryData.append(DataString) > > > > > > - if self.__Token == 'FILE': > > > + if self._Token == 'FILE': > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected FV Extension Entry file path At Line ", self.FileName, self.CurrentLineNumber) > > > > > > - FvObj.FvExtEntryData.append(self.__Token) > > > + FvObj.FvExtEntryData.append(self._Token) > > > > > > - if not self.__IsToken( "}"): > > > + if not self._IsToken("}"): > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > return True > > > > > > - ## __GetAprioriSection() method > > > + ## _GetAprioriSection() method > > > # > > > # Get token statements > > > # > > > @@ -2425,56 +2361,55 @@ class FdfParser: > > > # @retval True Successfully find apriori statement > > > # @retval False Not able to find apriori statement > > > # > > > - def __GetAprioriSection(self, FvObj, MacroDict = {}): > > > - > > > - if not self.__IsKeyword( "APRIORI"): > > > + def _GetAprioriSection(self, FvObj, MacroDict = {}): > > > + if not self._IsKeyword("APRIORI"): > > > return False > > > > > > - if not self.__IsKeyword("PEI") and not self.__IsKeyword("DXE"): > > > + if not self._IsKeyword("PEI") and not self._IsKeyword("DXE"): > > > raise Warning("expected Apriori file type", self.FileName, self.CurrentLineNumber) > > > - AprType = self.__Token > > > + AprType = self._Token > > > > > > - if not self.__IsToken( "{"): > > > + if not self._IsToken("{"): > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > > > - AprSectionObj = AprioriSection.AprioriSection() > > > + AprSectionObj = AprioriSection() > > > AprSectionObj.AprioriType = AprType > > > > > > - self.__GetDefineStatements(AprSectionObj) > > > + self._GetDefineStatements(AprSectionObj) > > > MacroDict.update(AprSectionObj.DefineVarDict) > > > > > > while True: > > > - IsInf = self.__GetInfStatement(AprSectionObj) > > > - IsFile = self.__GetFileStatement( AprSectionObj) > > > + IsInf = self._GetInfStatement(AprSectionObj) > > > + IsFile = self._GetFileStatement(AprSectionObj) > > > if not IsInf and not IsFile: > > > break > > > > > > - if not self.__IsToken( "}"): > > > + if not self._IsToken("}"): > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > FvObj.AprioriSectionList.append(AprSectionObj) > > > 
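Most of the churn in this stretch is the literal-to-constant swap. Assuming the Common/DataType names carry their usual one-character values ('=' for TAB_EQUAL_SPLIT, '|' for TAB_VALUE_SPLIT, ',' for TAB_COMMA_SPLIT), the replacement is behavior-preserving, including the rstrip() calls, since str.rstrip() treats its argument as a set of characters:

    # Illustrative values only; the real definitions live in Common/DataType.
    TAB_EQUAL_SPLIT = '='
    TAB_VALUE_SPLIT = '|'
    TAB_COMMA_SPLIT = ','

    data = "0x78,0x56,0x34,0x12,"
    assert data.rstrip(",") == data.rstrip(TAB_COMMA_SPLIT) == "0x78,0x56,0x34,0x12"
    assert ("=" == TAB_EQUAL_SPLIT) and ("|" == TAB_VALUE_SPLIT)

Only a future multi-character value for one of these names would change the rstrip() semantics; for equality and _IsToken() style comparisons they are a straight drop-in.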
return True > > > > > > - def __ParseInfStatement(self): > > > - if not self.__IsKeyword("INF"): > > > + def _ParseInfStatement(self): > > > + if not self._IsKeyword("INF"): > > > return None > > > > > > - ffsInf = FfsInfStatement.FfsInfStatement() > > > - self.__GetInfOptions(ffsInf) > > > + ffsInf = FfsInfStatement() > > > + self._GetInfOptions(ffsInf) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected INF file path", self.FileName, self.CurrentLineNumber) > > > - ffsInf.InfFileName = self.__Token > > > + ffsInf.InfFileName = self._Token > > > if not ffsInf.InfFileName.endswith('.inf'): > > > raise Warning("expected .inf file path", self.FileName, self.CurrentLineNumber) > > > > > > ffsInf.CurrentLineNum = self.CurrentLineNumber > > > - ffsInf.CurrentLineContent = self.__CurrentLine() > > > + ffsInf.CurrentLineContent = self._CurrentLine() > > > > > > #Replace $(SAPCE) with real space > > > ffsInf.InfFileName = ffsInf.InfFileName.replace('$(SPACE)', ' ') > > > > > > - if ffsInf.InfFileName.replace('$(WORKSPACE)', '').find('$') == -1: > > > + if ffsInf.InfFileName.replace(TAB_WORKSPACE, '').find('$') == -1: > > > #do case sensitive check for file path > > > ErrorCode, ErrorInfo = PathClass(NormPath(ffsInf.InfFileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() > > > if ErrorCode != 0: > > > @@ -2492,16 +2427,16 @@ class FdfParser: > > > else: > > > self.Profile.InfDict['ArchTBD'].append(ffsInf.InfFileName) > > > > > > - if self.__IsToken('|'): > > > - if self.__IsKeyword('RELOCS_STRIPPED'): > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > + if self._IsKeyword('RELOCS_STRIPPED'): > > > ffsInf.KeepReloc = False > > > - elif self.__IsKeyword('RELOCS_RETAINED'): > > > + elif self._IsKeyword('RELOCS_RETAINED'): > > > ffsInf.KeepReloc = True > > > else: > > > - raise Warning("Unknown reloc strip flag '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > + raise Warning("Unknown reloc strip flag '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > return ffsInf > > > > > > - ## __GetInfStatement() method > > > + ## _GetInfStatement() method > > > # > > > # Get INF statements > > > # > > > @@ -2510,86 +2445,86 @@ class FdfParser: > > > # @retval True Successfully find inf statement > > > # @retval False Not able to find inf statement > > > # > > > - def __GetInfStatement(self, Obj, ForCapsule=False): > > > - ffsInf = self.__ParseInfStatement() > > > + def _GetInfStatement(self, Obj, ForCapsule=False): > > > + ffsInf = self._ParseInfStatement() > > > if not ffsInf: > > > return False > > > > > > if ForCapsule: > > > - capsuleFfs = CapsuleData.CapsuleFfs() > > > - capsuleFfs.Ffs = ffsInf > > > - Obj.CapsuleDataList.append(capsuleFfs) > > > + myCapsuleFfs = CapsuleFfs() > > > + myCapsuleFfs.Ffs = ffsInf > > > + Obj.CapsuleDataList.append(myCapsuleFfs) > > > else: > > > Obj.FfsList.append(ffsInf) > > > return True > > > > > > - ## __GetInfOptions() method > > > + ## _GetInfOptions() method > > > # > > > # Get options for INF > > > # > > > # @param self The object pointer > > > # @param FfsInfObj for whom option is got > > > # > > > - def __GetInfOptions(self, FfsInfObj): > > > - if self.__IsKeyword("FILE_GUID"): > > > - if not self.__IsToken("="): > > > + def _GetInfOptions(self, FfsInfObj): > > > + if self._IsKeyword("FILE_GUID"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextGuid(): > > > + if not 
self._GetNextGuid(): > > > raise Warning("expected GUID value", self.FileName, self.CurrentLineNumber) > > > - FfsInfObj.OverrideGuid = self.__Token > > > + FfsInfObj.OverrideGuid = self._Token > > > > > > - if self.__IsKeyword( "RuleOverride"): > > > - if not self.__IsToken( "="): > > > + if self._IsKeyword("RuleOverride"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected Rule name", self.FileName, self.CurrentLineNumber) > > > - FfsInfObj.Rule = self.__Token > > > + FfsInfObj.Rule = self._Token > > > > > > - if self.__IsKeyword( "VERSION"): > > > - if not self.__IsToken( "="): > > > + if self._IsKeyword("VERSION"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected Version", self.FileName, self.CurrentLineNumber) > > > > > > - if self.__GetStringData(): > > > - FfsInfObj.Version = self.__Token > > > + if self._GetStringData(): > > > + FfsInfObj.Version = self._Token > > > > > > - if self.__IsKeyword( BINARY_FILE_TYPE_UI): > > > - if not self.__IsToken( "="): > > > + if self._IsKeyword(BINARY_FILE_TYPE_UI): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected UI name", self.FileName, self.CurrentLineNumber) > > > > > > - if self.__GetStringData(): > > > - FfsInfObj.Ui = self.__Token > > > + if self._GetStringData(): > > > + FfsInfObj.Ui = self._Token > > > > > > - if self.__IsKeyword( "USE"): > > > - if not self.__IsToken( "="): > > > + if self._IsKeyword("USE"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected ARCH name", self.FileName, self.CurrentLineNumber) > > > - FfsInfObj.UseArch = self.__Token > > > + FfsInfObj.UseArch = self._Token > > > > > > > > > - if self.__GetNextToken(): > > > - p = > > re.compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\))') > > > - if p.match(self.__Token) and p.match(self.__Token).span()[1] == len(self.__Token): > > > - FfsInfObj.KeyStringList.append(self.__Token) > > > - if not self.__IsToken(","): > > > + if self._GetNextToken(): > > > + p = > > compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\))') > > > + if p.match(self._Token) and p.match(self._Token).span()[1] == len(self._Token): > > > + FfsInfObj.KeyStringList.append(self._Token) > > > + if not self._IsToken(TAB_COMMA_SPLIT): > > > return > > > else: > > > - self.__UndoToken() > > > + self._UndoToken() > > > return > > > > > > - while self.__GetNextToken(): > > > - if not p.match(self.__Token): > > > + while self._GetNextToken(): > > > + if not p.match(self._Token): > > > raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) > > > - FfsInfObj.KeyStringList.append(self.__Token) > > > + FfsInfObj.KeyStringList.append(self._Token) > > > > > > - if not self.__IsToken(","): > > > + if not self._IsToken(TAB_COMMA_SPLIT): > > > break > > > > > > - ## 
__GetFileStatement() method > > > + ## _GetFileStatement() method > > > # > > > # Get FILE statements > > > # > > > @@ -2599,42 +2534,41 @@ class FdfParser: > > > # @retval True Successfully find FILE statement > > > # @retval False Not able to find FILE statement > > > # > > > - def __GetFileStatement(self, Obj, ForCapsule = False, MacroDict = {}): > > > - > > > - if not self.__IsKeyword( "FILE"): > > > + def _GetFileStatement(self, Obj, ForCapsule = False, MacroDict = {}): > > > + if not self._IsKeyword("FILE"): > > > return False > > > > > > - if not self.__GetNextWord(): > > > + if not self._GetNextWord(): > > > raise Warning("expected FFS type", self.FileName, self.CurrentLineNumber) > > > > > > - if ForCapsule and self.__Token == 'DATA': > > > - self.__UndoToken() > > > - self.__UndoToken() > > > + if ForCapsule and self._Token == 'DATA': > > > + self._UndoToken() > > > + self._UndoToken() > > > return False > > > > > > - FfsFileObj = FfsFileStatement.FileStatement() > > > - FfsFileObj.FvFileType = self.__Token > > > + FfsFileObj = FileStatement() > > > + FfsFileObj.FvFileType = self._Token > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextGuid(): > > > - if not self.__GetNextWord(): > > > + if not self._GetNextGuid(): > > > + if not self._GetNextWord(): > > > raise Warning("expected File GUID", self.FileName, self.CurrentLineNumber) > > > - if self.__Token == 'PCD': > > > - if not self.__IsToken( "("): > > > + if self._Token == 'PCD': > > > + if not self._IsToken("("): > > > raise Warning("expected '('", self.FileName, self.CurrentLineNumber) > > > - PcdPair = self.__GetNextPcdSettings() > > > - if not self.__IsToken( ")"): > > > + PcdPair = self._GetNextPcdSettings() > > > + if not self._IsToken(")"): > > > raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) > > > - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' > > > + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' > > > > > > - FfsFileObj.NameGuid = self.__Token > > > + FfsFileObj.NameGuid = self._Token > > > > > > - self.__GetFilePart( FfsFileObj, MacroDict.copy()) > > > + self._GetFilePart(FfsFileObj, MacroDict.copy()) > > > > > > if ForCapsule: > > > - capsuleFfs = CapsuleData.CapsuleFfs() > > > + capsuleFfs = CapsuleFfs() > > > capsuleFfs.Ffs = FfsFileObj > > > Obj.CapsuleDataList.append(capsuleFfs) > > > else: > > > @@ -2642,7 +2576,7 @@ class FdfParser: > > > > > > return True > > > > > > - ## __FileCouldHaveRelocFlag() method > > > + ## _FileCouldHaveRelocFlag() method > > > # > > > # Check whether reloc strip flag can be set for a file type. > > > # > > > @@ -2651,13 +2585,13 @@ class FdfParser: > > > # @retval False No way to have it > > > # > > > @staticmethod > > > - def __FileCouldHaveRelocFlag (FileType): > > > + def _FileCouldHaveRelocFlag (FileType): > > > if FileType in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, 'PEI_DXE_COMBO'): > > > return True > > > else: > > > return False > > > > > > - ## __SectionCouldHaveRelocFlag() method > > > + ## _SectionCouldHaveRelocFlag() method > > > # > > > # Check whether reloc strip flag can be set for a section type. 
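One more aside on the KeyString handling a little further up: the p.match(token) plus span()[1] == len(token) pair is the pre-fullmatch idiom for requiring the whole token to look like TARGET_TOOLCHAIN_ARCH rather than just a prefix of it (the hunk calls a bare compile(), presumably via an explicit import from re added at the top of the module). A small self-contained sketch with the same pattern:

    import re

    KEY_STRING = re.compile(
        r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_'
        r'([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_'
        r'([a-zA-Z0-9\-]+|\$\(ARCH\))')

    def is_key_string(token):
        m = KEY_STRING.match(token)
        return bool(m) and m.span()[1] == len(token)

    print(is_key_string("DEBUG_GCC5_X64"))        # True
    print(is_key_string("*_*_IA32"))              # True
    print(is_key_string("DEBUG_GCC5_X64_JUNK"))   # False, trailing text rejected

On Python 3 the same check could be spelled with re.fullmatch(), but the span comparison keeps it working on the Python 2 interpreter these tools still use (the surrounding code still relies on long(), for instance).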
> > > # > > > @@ -2666,13 +2600,13 @@ class FdfParser: > > > # @retval False No way to have it > > > # > > > @staticmethod > > > - def __SectionCouldHaveRelocFlag (SectionType): > > > + def _SectionCouldHaveRelocFlag (SectionType): > > > if SectionType in (BINARY_FILE_TYPE_TE, BINARY_FILE_TYPE_PE32): > > > return True > > > else: > > > return False > > > > > > - ## __GetFilePart() method > > > + ## _GetFilePart() method > > > # > > > # Get components for FILE statement > > > # > > > @@ -2680,58 +2614,57 @@ class FdfParser: > > > # @param FfsFileObj for whom component is got > > > # @param MacroDict dictionary used to replace macro > > > # > > > - def __GetFilePart(self, FfsFileObj, MacroDict = {}): > > > + def _GetFilePart(self, FfsFileObj, MacroDict = {}): > > > + self._GetFileOpts(FfsFileObj) > > > > > > - self.__GetFileOpts( FfsFileObj) > > > - > > > - if not self.__IsToken("{"): > > > - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): > > > - if self.__FileCouldHaveRelocFlag(FfsFileObj.FvFileType): > > > - if self.__Token == 'RELOCS_STRIPPED': > > > + if not self._IsToken("{"): > > > + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): > > > + if self._FileCouldHaveRelocFlag(FfsFileObj.FvFileType): > > > + if self._Token == 'RELOCS_STRIPPED': > > > FfsFileObj.KeepReloc = False > > > else: > > > FfsFileObj.KeepReloc = True > > > else: > > > raise Warning("File type %s could not have reloc strip flag%d" % (FfsFileObj.FvFileType, > > self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken("{"): > > > + if not self._IsToken("{"): > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected File name or section data", self.FileName, self.CurrentLineNumber) > > > > > > - if self.__Token == BINARY_FILE_TYPE_FV: > > > - if not self.__IsToken( "="): > > > + if self._Token == BINARY_FILE_TYPE_FV: > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) > > > - FfsFileObj.FvName = self.__Token > > > + FfsFileObj.FvName = self._Token > > > > > > - elif self.__Token == "FD": > > > - if not self.__IsToken( "="): > > > + elif self._Token == "FD": > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected FD name", self.FileName, self.CurrentLineNumber) > > > - FfsFileObj.FdName = self.__Token > > > + FfsFileObj.FdName = self._Token > > > > > > - elif self.__Token in ("DEFINE", "APRIORI", "SECTION"): > > > - self.__UndoToken() > > > - self.__GetSectionData( FfsFileObj, MacroDict) > > > + elif self._Token in (TAB_DEFINE, "APRIORI", "SECTION"): > > > + self._UndoToken() > > > + self._GetSectionData(FfsFileObj, MacroDict) > > > > > > elif hasattr(FfsFileObj, 'FvFileType') and FfsFileObj.FvFileType == 'RAW': > > > - self.__UndoToken() > > > - self.__GetRAWData(FfsFileObj, MacroDict) > > > + self._UndoToken() > > > + self._GetRAWData(FfsFileObj, MacroDict) > > > > > > else: > > > FfsFileObj.CurrentLineNum = self.CurrentLineNumber > > > - FfsFileObj.CurrentLineContent = self.__CurrentLine() > > > - 
FfsFileObj.FileName = self.__Token.replace('$(SPACE)', ' ') > > > - self.__VerifyFile(FfsFileObj.FileName) > > > + FfsFileObj.CurrentLineContent = self._CurrentLine() > > > + FfsFileObj.FileName = self._Token.replace('$(SPACE)', ' ') > > > + self._VerifyFile(FfsFileObj.FileName) > > > > > > - if not self.__IsToken( "}"): > > > + if not self._IsToken("}"): > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > - ## __GetRAWData() method > > > + ## _GetRAWData() method > > > # > > > # Get RAW data for FILE statement > > > # > > > @@ -2739,33 +2672,32 @@ class FdfParser: > > > # @param FfsFileObj for whom section is got > > > # @param MacroDict dictionary used to replace macro > > > # > > > - def __GetRAWData(self, FfsFileObj, MacroDict = {}): > > > + def _GetRAWData(self, FfsFileObj, MacroDict = {}): > > > FfsFileObj.FileName = [] > > > FfsFileObj.SubAlignment = [] > > > while True: > > > AlignValue = None > > > - if self.__GetAlignment(): > > > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > + if self._GetAlignment(): > > > + if self._Token not in ALIGNMENTS: > > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > #For FFS, Auto is default option same to "" > > > - if not self.__Token == "Auto": > > > - AlignValue = self.__Token > > > - if not self.__GetNextToken(): > > > + if not self._Token == "Auto": > > > + AlignValue = self._Token > > > + if not self._GetNextToken(): > > > raise Warning("expected Filename value", self.FileName, self.CurrentLineNumber) > > > > > > - FileName = self.__Token.replace('$(SPACE)', ' ') > > > + FileName = self._Token.replace('$(SPACE)', ' ') > > > if FileName == '}': > > > - self.__UndoToken() > > > + self._UndoToken() > > > raise Warning("expected Filename value", self.FileName, self.CurrentLineNumber) > > > > > > - self.__VerifyFile(FileName) > > > + self._VerifyFile(FileName) > > > File = PathClass(NormPath(FileName), GenFdsGlobalVariable.WorkSpaceDir) > > > FfsFileObj.FileName.append(File.Path) > > > FfsFileObj.SubAlignment.append(AlignValue) > > > > > > - if self.__IsToken( "}"): > > > - self.__UndoToken() > > > + if self._IsToken("}"): > > > + self._UndoToken() > > > break > > > > > > if len(FfsFileObj.SubAlignment) == 1: > > > @@ -2773,45 +2705,43 @@ class FdfParser: > > > if len(FfsFileObj.FileName) == 1: > > > FfsFileObj.FileName = FfsFileObj.FileName[0] > > > > > > - ## __GetFileOpts() method > > > + ## _GetFileOpts() method > > > # > > > # Get options for FILE statement > > > # > > > # @param self The object pointer > > > # @param FfsFileObj for whom options is got > > > # > > > - def __GetFileOpts(self, FfsFileObj): > > > - > > > - if self.__GetNextToken(): > > > - if TokenFindPattern.match(self.__Token): > > > - FfsFileObj.KeyStringList.append(self.__Token) > > > - if self.__IsToken(","): > > > - while self.__GetNextToken(): > > > - if not TokenFindPattern.match(self.__Token): > > > + def _GetFileOpts(self, FfsFileObj): > > > + if self._GetNextToken(): > > > + if TokenFindPattern.match(self._Token): > > > + FfsFileObj.KeyStringList.append(self._Token) > > > + if self._IsToken(TAB_COMMA_SPLIT): > > > + while self._GetNextToken(): > > > + if not TokenFindPattern.match(self._Token): > > > raise Warning("expected KeyString \"Target_Tag_Arch\"", 
self.FileName, > self.CurrentLineNumber) > > > - FfsFileObj.KeyStringList.append(self.__Token) > > > + FfsFileObj.KeyStringList.append(self._Token) > > > > > > - if not self.__IsToken(","): > > > + if not self._IsToken(TAB_COMMA_SPLIT): > > > break > > > > > > else: > > > - self.__UndoToken() > > > + self._UndoToken() > > > > > > - if self.__IsKeyword( "FIXED", True): > > > + if self._IsKeyword("FIXED", True): > > > FfsFileObj.Fixed = True > > > > > > - if self.__IsKeyword( "CHECKSUM", True): > > > + if self._IsKeyword("CHECKSUM", True): > > > FfsFileObj.CheckSum = True > > > > > > - if self.__GetAlignment(): > > > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > + if self._GetAlignment(): > > > + if self._Token not in ALIGNMENTS: > > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > #For FFS, Auto is default option same to "" > > > - if not self.__Token == "Auto": > > > - FfsFileObj.Alignment = self.__Token > > > + if not self._Token == "Auto": > > > + FfsFileObj.Alignment = self._Token > > > > > > - ## __GetAlignment() method > > > + ## _GetAlignment() method > > > # > > > # Return the alignment value > > > # > > > @@ -2819,18 +2749,18 @@ class FdfParser: > > > # @retval True Successfully find alignment > > > # @retval False Not able to find alignment > > > # > > > - def __GetAlignment(self): > > > - if self.__IsKeyword( "Align", True): > > > - if not self.__IsToken( "="): > > > + def _GetAlignment(self): > > > + if self._IsKeyword("Align", True): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected alignment value", self.FileName, self.CurrentLineNumber) > > > return True > > > > > > return False > > > > > > - ## __GetFilePart() method > > > + ## _GetFilePart() method > > > # > > > # Get section data for FILE statement > > > # > > > @@ -2838,23 +2768,23 @@ class FdfParser: > > > # @param FfsFileObj for whom section is got > > > # @param MacroDict dictionary used to replace macro > > > # > > > - def __GetSectionData(self, FfsFileObj, MacroDict = {}): > > > + def _GetSectionData(self, FfsFileObj, MacroDict = {}): > > > Dict = {} > > > Dict.update(MacroDict) > > > > > > - self.__GetDefineStatements(FfsFileObj) > > > + self._GetDefineStatements(FfsFileObj) > > > > > > Dict.update(FfsFileObj.DefineVarDict) > > > - self.__GetAprioriSection(FfsFileObj, Dict.copy()) > > > - self.__GetAprioriSection(FfsFileObj, Dict.copy()) > > > + self._GetAprioriSection(FfsFileObj, Dict.copy()) > > > + self._GetAprioriSection(FfsFileObj, Dict.copy()) > > > > > > while True: > > > - IsLeafSection = self.__GetLeafSection(FfsFileObj, Dict) > > > - IsEncapSection = self.__GetEncapsulationSec(FfsFileObj) > > > + IsLeafSection = self._GetLeafSection(FfsFileObj, Dict) > > > + IsEncapSection = self._GetEncapsulationSec(FfsFileObj) > > > if not IsLeafSection and not IsEncapSection: > > > break > > > > > > - ## __GetLeafSection() method > > > + ## _GetLeafSection() method > > > # > > > # Get leaf section for Obj > > > # > > > @@ -2864,97 +2794,95 @@ class FdfParser: > > > # @retval True Successfully find section statement > > > # @retval False Not able to find section 
statement > > > # > > > - def __GetLeafSection(self, Obj, MacroDict = {}): > > > - > > > + def _GetLeafSection(self, Obj, MacroDict = {}): > > > OldPos = self.GetFileBufferPos() > > > > > > - if not self.__IsKeyword( "SECTION"): > > > + if not self._IsKeyword("SECTION"): > > > if len(Obj.SectionList) == 0: > > > raise Warning("expected SECTION", self.FileName, self.CurrentLineNumber) > > > else: > > > return False > > > > > > AlignValue = None > > > - if self.__GetAlignment(): > > > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > - AlignValue = self.__Token > > > + if self._GetAlignment(): > > > + if self._Token not in ALIGNMENTS: > > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > + AlignValue = self._Token > > > > > > BuildNum = None > > > - if self.__IsKeyword( "BUILD_NUM"): > > > - if not self.__IsToken( "="): > > > + if self._IsKeyword("BUILD_NUM"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected Build number value", self.FileName, self.CurrentLineNumber) > > > > > > - BuildNum = self.__Token > > > + BuildNum = self._Token > > > > > > - if self.__IsKeyword( "VERSION"): > > > + if self._IsKeyword("VERSION"): > > > if AlignValue == 'Auto': > > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, > self.CurrentLineNumber) > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected version", self.FileName, self.CurrentLineNumber) > > > - VerSectionObj = VerSection.VerSection() > > > + VerSectionObj = VerSection() > > > VerSectionObj.Alignment = AlignValue > > > VerSectionObj.BuildNum = BuildNum > > > - if self.__GetStringData(): > > > - VerSectionObj.StringData = self.__Token > > > + if self._GetStringData(): > > > + VerSectionObj.StringData = self._Token > > > else: > > > - VerSectionObj.FileName = self.__Token > > > + VerSectionObj.FileName = self._Token > > > Obj.SectionList.append(VerSectionObj) > > > > > > - elif self.__IsKeyword( BINARY_FILE_TYPE_UI): > > > + elif self._IsKeyword(BINARY_FILE_TYPE_UI): > > > if AlignValue == 'Auto': > > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, > self.CurrentLineNumber) > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected UI", self.FileName, self.CurrentLineNumber) > > > - UiSectionObj = UiSection.UiSection() > > > + UiSectionObj = UiSection() > > > UiSectionObj.Alignment = AlignValue > > > - if self.__GetStringData(): > > > - UiSectionObj.StringData = self.__Token > > > + if self._GetStringData(): > > > + UiSectionObj.StringData = self._Token > > > else: > > > - UiSectionObj.FileName = self.__Token > > > + UiSectionObj.FileName = self._Token > > > Obj.SectionList.append(UiSectionObj) > > > > > > - elif 
self.__IsKeyword( "FV_IMAGE"): > > > + elif self._IsKeyword("FV_IMAGE"): > > > if AlignValue == 'Auto': > > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, > self.CurrentLineNumber) > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected FV name or FV file path", self.FileName, self.CurrentLineNumber) > > > > > > - FvName = self.__Token > > > + FvName = self._Token > > > FvObj = None > > > > > > - if self.__IsToken( "{"): > > > - FvObj = Fv.FV() > > > + if self._IsToken("{"): > > > + FvObj = FV() > > > FvObj.UiFvName = FvName.upper() > > > - self.__GetDefineStatements(FvObj) > > > + self._GetDefineStatements(FvObj) > > > MacroDict.update(FvObj.DefineVarDict) > > > - self.__GetBlockStatement(FvObj) > > > - self.__GetSetStatements(FvObj) > > > - self.__GetFvAlignment(FvObj) > > > - self.__GetFvAttributes(FvObj) > > > - self.__GetAprioriSection(FvObj, MacroDict.copy()) > > > - self.__GetAprioriSection(FvObj, MacroDict.copy()) > > > + self._GetBlockStatement(FvObj) > > > + self._GetSetStatements(FvObj) > > > + self._GetFvAlignment(FvObj) > > > + self._GetFvAttributes(FvObj) > > > + self._GetAprioriSection(FvObj, MacroDict.copy()) > > > + self._GetAprioriSection(FvObj, MacroDict.copy()) > > > > > > while True: > > > - IsInf = self.__GetInfStatement(FvObj) > > > - IsFile = self.__GetFileStatement(FvObj, MacroDict.copy()) > > > + IsInf = self._GetInfStatement(FvObj) > > > + IsFile = self._GetFileStatement(FvObj, MacroDict.copy()) > > > if not IsInf and not IsFile: > > > break > > > > > > - if not self.__IsToken( "}"): > > > + if not self._IsToken("}"): > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > - FvImageSectionObj = FvImageSection.FvImageSection() > > > + FvImageSectionObj = FvImageSection() > > > FvImageSectionObj.Alignment = AlignValue > > > if FvObj is not None: > > > FvImageSectionObj.Fv = FvObj > > > @@ -2965,81 +2893,81 @@ class FdfParser: > > > > > > Obj.SectionList.append(FvImageSectionObj) > > > > > > - elif self.__IsKeyword("PEI_DEPEX_EXP") or self.__IsKeyword("DXE_DEPEX_EXP") or self.__IsKeyword("SMM_DEPEX_EXP"): > > > + elif self._IsKeyword("PEI_DEPEX_EXP") or self._IsKeyword("DXE_DEPEX_EXP") or self._IsKeyword("SMM_DEPEX_EXP"): > > > if AlignValue == 'Auto': > > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, > self.CurrentLineNumber) > > > - DepexSectionObj = DepexSection.DepexSection() > > > + DepexSectionObj = DepexSection() > > > DepexSectionObj.Alignment = AlignValue > > > - DepexSectionObj.DepexType = self.__Token > > > + DepexSectionObj.DepexType = self._Token > > > > > > - if not self.__IsToken( "="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__IsToken( "{"): > > > + if not self._IsToken("{"): > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > - if not self.__SkipToToken( "}"): > > > + if not self._SkipToToken("}"): > > > raise Warning("expected Depex expression ending '}'", self.FileName, self.CurrentLineNumber) > > > > > > - DepexSectionObj.Expression = self.__SkippedChars.rstrip('}') > > > + DepexSectionObj.Expression = self._SkippedChars.rstrip('}') > > > Obj.SectionList.append(DepexSectionObj) > > > > > > else: > > 
> - if not self.__GetNextWord(): > > > + if not self._GetNextWord(): > > > raise Warning("expected section type", self.FileName, self.CurrentLineNumber) > > > > > > # Encapsulation section appear, UndoToken and return > > > - if self.__Token == "COMPRESS" or self.__Token == "GUIDED": > > > + if self._Token == "COMPRESS" or self._Token == "GUIDED": > > > self.SetFileBufferPos(OldPos) > > > return False > > > > > > - if self.__Token not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > > + if self._Token not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > > BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, "SUBTYPE_GUID", > > BINARY_FILE_TYPE_SMM_DEPEX): > > > - raise Warning("Unknown section type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > - if AlignValue == 'Auto'and (not self.__Token == BINARY_FILE_TYPE_PE32) and (not self.__Token == > > BINARY_FILE_TYPE_TE): > > > + raise Warning("Unknown section type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > + if AlignValue == 'Auto'and (not self._Token == BINARY_FILE_TYPE_PE32) and (not self._Token == > > BINARY_FILE_TYPE_TE): > > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, > self.CurrentLineNumber) > > > > > > # DataSection > > > - DataSectionObj = DataSection.DataSection() > > > + DataSectionObj = DataSection() > > > DataSectionObj.Alignment = AlignValue > > > - DataSectionObj.SecType = self.__Token > > > + DataSectionObj.SecType = self._Token > > > > > > - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): > > > - if self.__FileCouldHaveRelocFlag(Obj.FvFileType) and > self.__SectionCouldHaveRelocFlag(DataSectionObj.SecType): > > > - if self.__Token == 'RELOCS_STRIPPED': > > > + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): > > > + if self._FileCouldHaveRelocFlag(Obj.FvFileType) and self._SectionCouldHaveRelocFlag(DataSectionObj.SecType): > > > + if self._Token == 'RELOCS_STRIPPED': > > > DataSectionObj.KeepReloc = False > > > else: > > > DataSectionObj.KeepReloc = True > > > else: > > > raise Warning("File type %s, section type %s, could not have reloc strip flag%d" % (Obj.FvFileType, > > DataSectionObj.SecType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > > > > > - if self.__IsToken("="): > > > - if not self.__GetNextToken(): > > > + if self._IsToken(TAB_EQUAL_SPLIT): > > > + if not self._GetNextToken(): > > > raise Warning("expected section file path", self.FileName, self.CurrentLineNumber) > > > - DataSectionObj.SectFileName = self.__Token > > > - self.__VerifyFile(DataSectionObj.SectFileName) > > > + DataSectionObj.SectFileName = self._Token > > > + self._VerifyFile(DataSectionObj.SectFileName) > > > else: > > > - if not self.__GetCglSection(DataSectionObj): > > > + if not self._GetCglSection(DataSectionObj): > > > return False > > > > > > Obj.SectionList.append(DataSectionObj) > > > > > > return True > > > > > > - ## __VerifyFile > > > + ## _VerifyFile > > > # > > > # Check if file exists or not: > > > # If current phase if GenFds, the file must exist; > > > # If current phase is AutoGen and the file is not in $(OUTPUT_DIRECTORY), the file must exist > > > # @param FileName: File path to be verified. 
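(A note on the _VerifyFile body just below: its early return when the path still contains an unresolved $(...) macro reduces to the standalone sketch here. The value of TAB_WORKSPACE and the plain os.path existence check are assumptions for illustration only; the real code defers to PathClass/NormPath and only validates in the phases described in the comment.)

import os

TAB_WORKSPACE = '$(WORKSPACE)'   # assumed value of the shared constant

def verify_file_sketch(file_name, workspace_dir):
    # A '$' left over after stripping $(WORKSPACE) means some macro is still
    # unresolved, so existence cannot be checked yet; skip validation.
    if file_name.replace(TAB_WORKSPACE, '').find('$') != -1:
        return True
    # Otherwise normalize against the workspace and check the file exists.
    full_path = os.path.normpath(os.path.join(workspace_dir, file_name))
    return os.path.exists(full_path)
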
> > > # > > > - def __VerifyFile(self, FileName): > > > - if FileName.replace('$(WORKSPACE)', '').find('$') != -1: > > > + def _VerifyFile(self, FileName): > > > + if FileName.replace(TAB_WORKSPACE, '').find('$') != -1: > > > return > > > - if not GlobalData.gAutoGenPhase or not self.__GetMacroValue("OUTPUT_DIRECTORY") in FileName: > > > + if not GlobalData.gAutoGenPhase or not self._GetMacroValue(TAB_DSC_DEFINES_OUTPUT_DIRECTORY) in FileName: > > > ErrorCode, ErrorInfo = PathClass(NormPath(FileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() > > > if ErrorCode != 0: > > > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > > > > > - ## __GetCglSection() method > > > + ## _GetCglSection() method > > > # > > > # Get compressed or GUIDed section for Obj > > > # > > > @@ -3049,28 +2977,28 @@ class FdfParser: > > > # @retval True Successfully find section statement > > > # @retval False Not able to find section statement > > > # > > > - def __GetCglSection(self, Obj, AlignValue = None): > > > + def _GetCglSection(self, Obj, AlignValue = None): > > > > > > - if self.__IsKeyword( "COMPRESS"): > > > + if self._IsKeyword("COMPRESS"): > > > type = "PI_STD" > > > - if self.__IsKeyword("PI_STD") or self.__IsKeyword("PI_NONE"): > > > - type = self.__Token > > > + if self._IsKeyword("PI_STD") or self._IsKeyword("PI_NONE"): > > > + type = self._Token > > > > > > - if not self.__IsToken("{"): > > > + if not self._IsToken("{"): > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > > > - CompressSectionObj = CompressSection.CompressSection() > > > + CompressSectionObj = CompressSection() > > > CompressSectionObj.Alignment = AlignValue > > > CompressSectionObj.CompType = type > > > # Recursive sections... > > > while True: > > > - IsLeafSection = self.__GetLeafSection(CompressSectionObj) > > > - IsEncapSection = self.__GetEncapsulationSec(CompressSectionObj) > > > + IsLeafSection = self._GetLeafSection(CompressSectionObj) > > > + IsEncapSection = self._GetEncapsulationSec(CompressSectionObj) > > > if not IsLeafSection and not IsEncapSection: > > > break > > > > > > > > > - if not self.__IsToken( "}"): > > > + if not self._IsToken("}"): > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > Obj.SectionList.append(CompressSectionObj) > > > > > > @@ -3079,15 +3007,15 @@ class FdfParser: > > > > > > return True > > > > > > - elif self.__IsKeyword( "GUIDED"): > > > + elif self._IsKeyword("GUIDED"): > > > GuidValue = None > > > - if self.__GetNextGuid(): > > > - GuidValue = self.__Token > > > + if self._GetNextGuid(): > > > + GuidValue = self._Token > > > > > > - AttribDict = self.__GetGuidAttrib() > > > - if not self.__IsToken("{"): > > > + AttribDict = self._GetGuidAttrib() > > > + if not self._IsToken("{"): > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > - GuidSectionObj = GuidSection.GuidSection() > > > + GuidSectionObj = GuidSection() > > > GuidSectionObj.Alignment = AlignValue > > > GuidSectionObj.NameGuid = GuidValue > > > GuidSectionObj.SectionType = "GUIDED" > > > @@ -3096,12 +3024,12 @@ class FdfParser: > > > GuidSectionObj.ExtraHeaderSize = AttribDict["EXTRA_HEADER_SIZE"] > > > # Recursive sections... 
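(The "Recursive sections..." loops in these hunks all share one shape: try each alternative parser once per pass and stop only when neither consumed anything. A minimal standalone sketch of that pattern, with hypothetical parse_leaf/parse_encap callables standing in for _GetLeafSection/_GetEncapsulationSec, is:)

def parse_nested_sections(parse_leaf, parse_encap):
    # Each callable returns True only if it consumed a section, mirroring the
    # IsLeafSection/IsEncapSection flags in the quoted loops.
    while True:
        matched_leaf = parse_leaf()
        matched_encap = parse_encap()
        if not matched_leaf and not matched_encap:
            break

Both alternatives are retried on every pass, so leaf and encapsulation sections can interleave in any order inside COMPRESS/GUIDED blocks.
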
> > > while True: > > > - IsLeafSection = self.__GetLeafSection(GuidSectionObj) > > > - IsEncapSection = self.__GetEncapsulationSec(GuidSectionObj) > > > + IsLeafSection = self._GetLeafSection(GuidSectionObj) > > > + IsEncapSection = self._GetEncapsulationSec(GuidSectionObj) > > > if not IsLeafSection and not IsEncapSection: > > > break > > > > > > - if not self.__IsToken( "}"): > > > + if not self._IsToken("}"): > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > Obj.SectionList.append(GuidSectionObj) > > > > > > @@ -3109,44 +3037,43 @@ class FdfParser: > > > > > > return False > > > > > > - ## __GetGuidAttri() method > > > + ## _GetGuidAttri() method > > > # > > > # Get attributes for GUID section > > > # > > > # @param self The object pointer > > > # @retval AttribDict Dictionary of key-value pair of section attributes > > > # > > > - def __GetGuidAttrib(self): > > > - > > > + def _GetGuidAttrib(self): > > > AttribDict = {} > > > AttribDict["PROCESSING_REQUIRED"] = "NONE" > > > AttribDict["AUTH_STATUS_VALID"] = "NONE" > > > AttribDict["EXTRA_HEADER_SIZE"] = -1 > > > - while self.__IsKeyword("PROCESSING_REQUIRED") or self.__IsKeyword("AUTH_STATUS_VALID") \ > > > - or self.__IsKeyword("EXTRA_HEADER_SIZE"): > > > - AttribKey = self.__Token > > > + while self._IsKeyword("PROCESSING_REQUIRED") or self._IsKeyword("AUTH_STATUS_VALID") \ > > > + or self._IsKeyword("EXTRA_HEADER_SIZE"): > > > + AttribKey = self._Token > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected TRUE(1)/FALSE(0)/Number", self.FileName, self.CurrentLineNumber) > > > elif AttribKey == "EXTRA_HEADER_SIZE": > > > Base = 10 > > > - if self.__Token[0:2].upper() == "0X": > > > + if self._Token[0:2].upper() == "0X": > > > Base = 16 > > > try: > > > - AttribDict[AttribKey] = int(self.__Token, Base) > > > + AttribDict[AttribKey] = int(self._Token, Base) > > > continue > > > except ValueError: > > > raise Warning("expected Number", self.FileName, self.CurrentLineNumber) > > > - elif self.__Token.upper() not in ("TRUE", "FALSE", "1", "0"): > > > + elif self._Token.upper() not in ("TRUE", "FALSE", "1", "0"): > > > raise Warning("expected TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) > > > - AttribDict[AttribKey] = self.__Token > > > + AttribDict[AttribKey] = self._Token > > > > > > return AttribDict > > > > > > - ## __GetEncapsulationSec() method > > > + ## _GetEncapsulationSec() method > > > # > > > # Get encapsulation section for FILE > > > # > > > @@ -3155,91 +3082,89 @@ class FdfParser: > > > # @retval True Successfully find section statement > > > # @retval False Not able to find section statement > > > # > > > - def __GetEncapsulationSec(self, FfsFileObj): > > > - > > > + def _GetEncapsulationSec(self, FfsFileObj): > > > OldPos = self.GetFileBufferPos() > > > - if not self.__IsKeyword( "SECTION"): > > > + if not self._IsKeyword("SECTION"): > > > if len(FfsFileObj.SectionList) == 0: > > > raise Warning("expected SECTION", self.FileName, self.CurrentLineNumber) > > > else: > > > return False > > > > > > AlignValue = None > > > - if self.__GetAlignment(): > > > - if self.__Token not in ("8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > > - raise Warning("Incorrect alignment '%s'" % 
self.__Token, self.FileName, self.CurrentLineNumber) > > > - AlignValue = self.__Token > > > + if self._GetAlignment(): > > > + if self._Token not in ALIGNMENT_NOAUTO: > > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > + AlignValue = self._Token > > > > > > - if not self.__GetCglSection(FfsFileObj, AlignValue): > > > + if not self._GetCglSection(FfsFileObj, AlignValue): > > > self.SetFileBufferPos(OldPos) > > > return False > > > else: > > > return True > > > > > > - def __GetFmp(self): > > > - if not self.__GetNextToken(): > > > + def _GetFmp(self): > > > + if not self._GetNextToken(): > > > return False > > > - S = self.__Token.upper() > > > - if S.startswith("[") and not S.startswith("[FMPPAYLOAD."): > > > + S = self._Token.upper() > > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[FMPPAYLOAD."): > > > self.SectionParser(S) > > > - self.__UndoToken() > > > + self._UndoToken() > > > return False > > > > > > - self.__UndoToken() > > > - self.__SkipToToken("[FMPPAYLOAD.", True) > > > - FmpUiName = self.__GetUiName().upper() > > > + self._UndoToken() > > > + self._SkipToToken("[FMPPAYLOAD.", True) > > > + FmpUiName = self._GetUiName().upper() > > > if FmpUiName in self.Profile.FmpPayloadDict: > > > raise Warning("Duplicated FMP UI name found: %s" % FmpUiName, self.FileName, self.CurrentLineNumber) > > > > > > - FmpData = CapsuleData.CapsulePayload() > > > + FmpData = CapsulePayload() > > > FmpData.UiName = FmpUiName > > > > > > - if not self.__IsToken( "]"): > > > + if not self._IsToken(TAB_SECTION_END): > > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("The FMP payload section is empty!", self.FileName, self.CurrentLineNumber) > > > FmpKeyList = ['IMAGE_HEADER_INIT_VERSION', 'IMAGE_TYPE_ID', 'IMAGE_INDEX', 'HARDWARE_INSTANCE', > > 'CERTIFICATE_GUID', 'MONOTONIC_COUNT'] > > > - while self.__Token in FmpKeyList: > > > - Name = self.__Token > > > + while self._Token in FmpKeyList: > > > + Name = self._Token > > > FmpKeyList.remove(Name) > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > if Name == 'IMAGE_TYPE_ID': > > > - if not self.__GetNextGuid(): > > > + if not self._GetNextGuid(): > > > raise Warning("expected GUID value for IMAGE_TYPE_ID.", self.FileName, self.CurrentLineNumber) > > > - FmpData.ImageTypeId = self.__Token > > > + FmpData.ImageTypeId = self._Token > > > elif Name == 'CERTIFICATE_GUID': > > > - if not self.__GetNextGuid(): > > > + if not self._GetNextGuid(): > > > raise Warning("expected GUID value for CERTIFICATE_GUID.", self.FileName, self.CurrentLineNumber) > > > - FmpData.Certificate_Guid = self.__Token > > > - if uuid.UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_RSA2048_SHA256_GUID and > > uuid.UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_PKCS7_GUID: > > > + FmpData.Certificate_Guid = self._Token > > > + if UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_RSA2048_SHA256_GUID and > > UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_PKCS7_GUID: > > > raise Warning("Only support EFI_CERT_TYPE_RSA2048_SHA256_GUID or EFI_CERT_TYPE_PKCS7_GUID for > > CERTIFICATE_GUID.", self.FileName, self.CurrentLineNumber) > > > else: > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected value of %s" % Name, self.FileName, 
self.CurrentLineNumber) > > > - Value = self.__Token > > > + Value = self._Token > > > if Name == 'IMAGE_HEADER_INIT_VERSION': > > > - if FdfParser.__Verify(Name, Value, 'UINT8'): > > > + if FdfParser._Verify(Name, Value, 'UINT8'): > > > FmpData.Version = Value > > > elif Name == 'IMAGE_INDEX': > > > - if FdfParser.__Verify(Name, Value, 'UINT8'): > > > + if FdfParser._Verify(Name, Value, 'UINT8'): > > > FmpData.ImageIndex = Value > > > elif Name == 'HARDWARE_INSTANCE': > > > - if FdfParser.__Verify(Name, Value, 'UINT8'): > > > + if FdfParser._Verify(Name, Value, 'UINT8'): > > > FmpData.HardwareInstance = Value > > > elif Name == 'MONOTONIC_COUNT': > > > - if FdfParser.__Verify(Name, Value, 'UINT64'): > > > + if FdfParser._Verify(Name, Value, 'UINT64'): > > > FmpData.MonotonicCount = Value > > > if FmpData.MonotonicCount.upper().startswith('0X'): > > > FmpData.MonotonicCount = (long)(FmpData.MonotonicCount, 16) > > > else: > > > FmpData.MonotonicCount = (long)(FmpData.MonotonicCount) > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > break > > > else: > > > - self.__UndoToken() > > > + self._UndoToken() > > > > > > if (FmpData.MonotonicCount and not FmpData.Certificate_Guid) or (not FmpData.MonotonicCount and > > FmpData.Certificate_Guid): > > > EdkLogger.error("FdfParser", FORMAT_INVALID, "CERTIFICATE_GUID and MONOTONIC_COUNT must be work as a > > pair.") > > > @@ -3248,7 +3173,7 @@ class FdfParser: > > > if FmpKeyList and 'IMAGE_TYPE_ID' in FmpKeyList: > > > raise Warning("Missing keywords IMAGE_TYPE_ID in FMP payload section.", self.FileName, > self.CurrentLineNumber) > > > # get the Image file and Vendor code file > > > - self.__GetFMPCapsuleData(FmpData) > > > + self._GetFMPCapsuleData(FmpData) > > > if not FmpData.ImageFile: > > > raise Warning("Missing image file in FMP payload section.", self.FileName, self.CurrentLineNumber) > > > # check whether more than one Vendor code file > > > @@ -3257,7 +3182,7 @@ class FdfParser: > > > self.Profile.FmpPayloadDict[FmpUiName] = FmpData > > > return True > > > > > > - ## __GetCapsule() method > > > + ## _GetCapsule() method > > > # > > > # Get capsule section contents and store its data into capsule list of self.Profile > > > # > > > @@ -3265,90 +3190,89 @@ class FdfParser: > > > # @retval True Successfully find a capsule > > > # @retval False Not able to find a capsule > > > # > > > - def __GetCapsule(self): > > > - > > > - if not self.__GetNextToken(): > > > + def _GetCapsule(self): > > > + if not self._GetNextToken(): > > > return False > > > > > > - S = self.__Token.upper() > > > - if S.startswith("[") and not S.startswith("[CAPSULE."): > > > + S = self._Token.upper() > > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[CAPSULE."): > > > self.SectionParser(S) > > > - self.__UndoToken() > > > + self._UndoToken() > > > return False > > > > > > - self.__UndoToken() > > > - if not self.__IsToken("[CAPSULE.", True): > > > + self._UndoToken() > > > + if not self._IsToken("[CAPSULE.", True): > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > raise 
Warning("expected [Capsule.]", self.FileName, self.CurrentLineNumber) > > > > > > - CapsuleObj = Capsule.Capsule() > > > + CapsuleObj = Capsule() > > > > > > - CapsuleName = self.__GetUiName() > > > + CapsuleName = self._GetUiName() > > > if not CapsuleName: > > > raise Warning("expected capsule name", self.FileName, self.CurrentLineNumber) > > > > > > CapsuleObj.UiCapsuleName = CapsuleName.upper() > > > > > > - if not self.__IsToken( "]"): > > > + if not self._IsToken(TAB_SECTION_END): > > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > > > - if self.__IsKeyword("CREATE_FILE"): > > > - if not self.__IsToken( "="): > > > + if self._IsKeyword("CREATE_FILE"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected file name", self.FileName, self.CurrentLineNumber) > > > > > > - CapsuleObj.CreateFile = self.__Token > > > + CapsuleObj.CreateFile = self._Token > > > > > > - self.__GetCapsuleStatements(CapsuleObj) > > > + self._GetCapsuleStatements(CapsuleObj) > > > self.Profile.CapsuleDict[CapsuleObj.UiCapsuleName] = CapsuleObj > > > return True > > > > > > - ## __GetCapsuleStatements() method > > > + ## _GetCapsuleStatements() method > > > # > > > # Get statements for capsule > > > # > > > # @param self The object pointer > > > # @param Obj for whom statements are got > > > # > > > - def __GetCapsuleStatements(self, Obj): > > > - self.__GetCapsuleTokens(Obj) > > > - self.__GetDefineStatements(Obj) > > > - self.__GetSetStatements(Obj) > > > - self.__GetCapsuleData(Obj) > > > + def _GetCapsuleStatements(self, Obj): > > > + self._GetCapsuleTokens(Obj) > > > + self._GetDefineStatements(Obj) > > > + self._GetSetStatements(Obj) > > > + self._GetCapsuleData(Obj) > > > > > > - ## __GetCapsuleTokens() method > > > + ## _GetCapsuleTokens() method > > > # > > > # Get token statements for capsule > > > # > > > # @param self The object pointer > > > # @param Obj for whom token statements are got > > > # > > > - def __GetCapsuleTokens(self, Obj): > > > - if not self.__GetNextToken(): > > > + def _GetCapsuleTokens(self, Obj): > > > + if not self._GetNextToken(): > > > return False > > > - while self.__Token in ("CAPSULE_GUID", "CAPSULE_HEADER_SIZE", "CAPSULE_FLAGS", "OEM_CAPSULE_FLAGS", > > "CAPSULE_HEADER_INIT_VERSION"): > > > - Name = self.__Token.strip() > > > - if not self.__IsToken("="): > > > + while self._Token in ("CAPSULE_GUID", "CAPSULE_HEADER_SIZE", "CAPSULE_FLAGS", "OEM_CAPSULE_FLAGS", > > "CAPSULE_HEADER_INIT_VERSION"): > > > + Name = self._Token.strip() > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected value", self.FileName, self.CurrentLineNumber) > > > if Name == 'CAPSULE_FLAGS': > > > - if not self.__Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): > > > + if not self._Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): > > > raise Warning("expected PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, > > self.CurrentLineNumber) > > > - Value = self.__Token.strip() > > > - while self.__IsToken(","): > > > - Value += ',' > > > - if not self.__GetNextToken(): > > > + Value = self._Token.strip() > > > + while self._IsToken(TAB_COMMA_SPLIT): > > > + 
Value += TAB_COMMA_SPLIT > > > + if not self._GetNextToken(): > > > raise Warning("expected value", self.FileName, self.CurrentLineNumber) > > > - if not self.__Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): > > > + if not self._Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): > > > raise Warning("expected PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, > > self.CurrentLineNumber) > > > - Value += self.__Token.strip() > > > + Value += self._Token.strip() > > > elif Name == 'OEM_CAPSULE_FLAGS': > > > - Value = self.__Token.strip() > > > + Value = self._Token.strip() > > > if not Value.upper().startswith('0X'): > > > raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) > > > try: > > > @@ -3357,51 +3281,49 @@ class FdfParser: > > > raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) > > > if not 0x0000 <= Value <= 0xFFFF: > > > raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) > > > - Value = self.__Token.strip() > > > + Value = self._Token.strip() > > > else: > > > - Value = self.__Token.strip() > > > + Value = self._Token.strip() > > > Obj.TokensDict[Name] = Value > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > return False > > > - self.__UndoToken() > > > + self._UndoToken() > > > > > > - ## __GetCapsuleData() method > > > + ## _GetCapsuleData() method > > > # > > > # Get capsule data for capsule > > > # > > > # @param self The object pointer > > > # @param Obj for whom capsule data are got > > > # > > > - def __GetCapsuleData(self, Obj): > > > - > > > + def _GetCapsuleData(self, Obj): > > > while True: > > > - IsInf = self.__GetInfStatement(Obj, True) > > > - IsFile = self.__GetFileStatement(Obj, True) > > > - IsFv = self.__GetFvStatement(Obj) > > > - IsFd = self.__GetFdStatement(Obj) > > > - IsAnyFile = self.__GetAnyFileStatement(Obj) > > > - IsAfile = self.__GetAfileStatement(Obj) > > > - IsFmp = self.__GetFmpStatement(Obj) > > > + IsInf = self._GetInfStatement(Obj, True) > > > + IsFile = self._GetFileStatement(Obj, True) > > > + IsFv = self._GetFvStatement(Obj) > > > + IsFd = self._GetFdStatement(Obj) > > > + IsAnyFile = self._GetAnyFileStatement(Obj) > > > + IsAfile = self._GetAfileStatement(Obj) > > > + IsFmp = self._GetFmpStatement(Obj) > > > if not (IsInf or IsFile or IsFv or IsFd or IsAnyFile or IsAfile or IsFmp): > > > break > > > > > > - ## __GetFMPCapsuleData() method > > > + ## _GetFMPCapsuleData() method > > > # > > > # Get capsule data for FMP capsule > > > # > > > # @param self The object pointer > > > # @param Obj for whom capsule data are got > > > # > > > - def __GetFMPCapsuleData(self, Obj): > > > - > > > + def _GetFMPCapsuleData(self, Obj): > > > while True: > > > - IsFv = self.__GetFvStatement(Obj, True) > > > - IsFd = self.__GetFdStatement(Obj, True) > > > - IsAnyFile = self.__GetAnyFileStatement(Obj, True) > > > + IsFv = self._GetFvStatement(Obj, True) > > > + IsFd = self._GetFdStatement(Obj, True) > > > + IsAnyFile = self._GetAnyFileStatement(Obj, True) > > > if not (IsFv or IsFd or IsAnyFile): > > > break > > > > > > - ## __GetFvStatement() method > > > + ## _GetFvStatement() method > > > # > > > # Get FV for capsule > > > # > > > @@ -3410,32 +3332,31 @@ class FdfParser: > > > # @retval True Successfully find a FV statement > > > # @retval False Not able to find a FV statement > > > # > > > - def 
__GetFvStatement(self, CapsuleObj, FMPCapsule = False): > > > - > > > - if not self.__IsKeyword(BINARY_FILE_TYPE_FV): > > > + def _GetFvStatement(self, CapsuleObj, FMPCapsule = False): > > > + if not self._IsKeyword(BINARY_FILE_TYPE_FV): > > > return False > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) > > > > > > - if self.__Token.upper() not in self.Profile.FvDict: > > > + if self._Token.upper() not in self.Profile.FvDict: > > > raise Warning("FV name does not exist", self.FileName, self.CurrentLineNumber) > > > > > > - CapsuleFv = CapsuleData.CapsuleFv() > > > - CapsuleFv.FvName = self.__Token > > > + myCapsuleFv = CapsuleFv() > > > + myCapsuleFv.FvName = self._Token > > > if FMPCapsule: > > > if not CapsuleObj.ImageFile: > > > - CapsuleObj.ImageFile.append(CapsuleFv) > > > + CapsuleObj.ImageFile.append(myCapsuleFv) > > > else: > > > - CapsuleObj.VendorCodeFile.append(CapsuleFv) > > > + CapsuleObj.VendorCodeFile.append(myCapsuleFv) > > > else: > > > - CapsuleObj.CapsuleDataList.append(CapsuleFv) > > > + CapsuleObj.CapsuleDataList.append(myCapsuleFv) > > > return True > > > > > > - ## __GetFdStatement() method > > > + ## _GetFdStatement() method > > > # > > > # Get FD for capsule > > > # > > > @@ -3444,74 +3365,73 @@ class FdfParser: > > > # @retval True Successfully find a FD statement > > > # @retval False Not able to find a FD statement > > > # > > > - def __GetFdStatement(self, CapsuleObj, FMPCapsule = False): > > > - > > > - if not self.__IsKeyword("FD"): > > > + def _GetFdStatement(self, CapsuleObj, FMPCapsule = False): > > > + if not self._IsKeyword("FD"): > > > return False > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected FD name", self.FileName, self.CurrentLineNumber) > > > > > > - if self.__Token.upper() not in self.Profile.FdDict: > > > + if self._Token.upper() not in self.Profile.FdDict: > > > raise Warning("FD name does not exist", self.FileName, self.CurrentLineNumber) > > > > > > - CapsuleFd = CapsuleData.CapsuleFd() > > > - CapsuleFd.FdName = self.__Token > > > + myCapsuleFd = CapsuleFd() > > > + myCapsuleFd.FdName = self._Token > > > if FMPCapsule: > > > if not CapsuleObj.ImageFile: > > > - CapsuleObj.ImageFile.append(CapsuleFd) > > > + CapsuleObj.ImageFile.append(myCapsuleFd) > > > else: > > > - CapsuleObj.VendorCodeFile.append(CapsuleFd) > > > + CapsuleObj.VendorCodeFile.append(myCapsuleFd) > > > else: > > > - CapsuleObj.CapsuleDataList.append(CapsuleFd) > > > + CapsuleObj.CapsuleDataList.append(myCapsuleFd) > > > return True > > > > > > - def __GetFmpStatement(self, CapsuleObj): > > > - if not self.__IsKeyword("FMP_PAYLOAD"): > > > - if not self.__IsKeyword("FMP"): > > > + def _GetFmpStatement(self, CapsuleObj): > > > + if not self._IsKeyword("FMP_PAYLOAD"): > > > + if not self._IsKeyword("FMP"): > > > return False > > > > > > - if not self.__IsKeyword("PAYLOAD"): > > > - self.__UndoToken() > > > + if not self._IsKeyword("PAYLOAD"): > > > + self._UndoToken() > > > return False > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > 
raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected payload name after FMP_PAYLOAD =", self.FileName, self.CurrentLineNumber) > > > - Payload = self.__Token.upper() > > > + Payload = self._Token.upper() > > > if Payload not in self.Profile.FmpPayloadDict: > > > - raise Warning("This FMP Payload does not exist: %s" % self.__Token, self.FileName, self.CurrentLineNumber) > > > + raise Warning("This FMP Payload does not exist: %s" % self._Token, self.FileName, self.CurrentLineNumber) > > > CapsuleObj.FmpPayloadList.append(self.Profile.FmpPayloadDict[Payload]) > > > return True > > > > > > - def __ParseRawFileStatement(self): > > > - if not self.__IsKeyword("FILE"): > > > + def _ParseRawFileStatement(self): > > > + if not self._IsKeyword("FILE"): > > > return None > > > > > > - if not self.__IsKeyword("DATA"): > > > - self.__UndoToken() > > > + if not self._IsKeyword("DATA"): > > > + self._UndoToken() > > > return None > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected File name", self.FileName, self.CurrentLineNumber) > > > > > > - AnyFileName = self.__Token > > > - self.__VerifyFile(AnyFileName) > > > + AnyFileName = self._Token > > > + self._VerifyFile(AnyFileName) > > > > > > if not os.path.isabs(AnyFileName): > > > AnyFileName = mws.join(GenFdsGlobalVariable.WorkSpaceDir, AnyFileName) > > > > > > return AnyFileName > > > > > > - ## __GetAnyFileStatement() method > > > + ## _GetAnyFileStatement() method > > > # > > > # Get AnyFile for capsule > > > # > > > @@ -3520,23 +3440,23 @@ class FdfParser: > > > # @retval True Successfully find a Anyfile statement > > > # @retval False Not able to find a AnyFile statement > > > # > > > - def __GetAnyFileStatement(self, CapsuleObj, FMPCapsule = False): > > > - AnyFileName = self.__ParseRawFileStatement() > > > + def _GetAnyFileStatement(self, CapsuleObj, FMPCapsule = False): > > > + AnyFileName = self._ParseRawFileStatement() > > > if not AnyFileName: > > > return False > > > > > > - CapsuleAnyFile = CapsuleData.CapsuleAnyFile() > > > - CapsuleAnyFile.FileName = AnyFileName > > > + myCapsuleAnyFile = CapsuleAnyFile() > > > + myCapsuleAnyFile.FileName = AnyFileName > > > if FMPCapsule: > > > if not CapsuleObj.ImageFile: > > > - CapsuleObj.ImageFile.append(CapsuleAnyFile) > > > + CapsuleObj.ImageFile.append(myCapsuleAnyFile) > > > else: > > > - CapsuleObj.VendorCodeFile.append(CapsuleAnyFile) > > > + CapsuleObj.VendorCodeFile.append(myCapsuleAnyFile) > > > else: > > > - CapsuleObj.CapsuleDataList.append(CapsuleAnyFile) > > > + CapsuleObj.CapsuleDataList.append(myCapsuleAnyFile) > > > return True > > > > > > - ## __GetAfileStatement() method > > > + ## _GetAfileStatement() method > > > # > > > # Get Afile for capsule > > > # > > > @@ -3545,18 +3465,17 @@ class FdfParser: > > > # @retval True Successfully find a Afile statement > > > # @retval False Not able to find a Afile statement > > > # > > > - def __GetAfileStatement(self, CapsuleObj): > > > - > > > - if not self.__IsKeyword("APPEND"): > > > + def _GetAfileStatement(self, CapsuleObj): > > > + if not self._IsKeyword("APPEND"): > > > return False > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise 
Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected Afile name", self.FileName, self.CurrentLineNumber) > > > > > > - AfileName = self.__Token > > > + AfileName = self._Token > > > AfileBaseName = os.path.basename(AfileName) > > > > > > if os.path.splitext(AfileBaseName)[1] not in [".bin", ".BIN", ".Bin", ".dat", ".DAT", ".Dat", ".data", ".DATA", ".Data"]: > > > @@ -3565,19 +3484,19 @@ class FdfParser: > > > > > > if not os.path.isabs(AfileName): > > > AfileName = GenFdsGlobalVariable.ReplaceWorkspaceMacro(AfileName) > > > - self.__VerifyFile(AfileName) > > > + self._VerifyFile(AfileName) > > > else: > > > if not os.path.exists(AfileName): > > > raise Warning('%s does not exist' % AfileName, self.FileName, self.CurrentLineNumber) > > > else: > > > pass > > > > > > - CapsuleAfile = CapsuleData.CapsuleAfile() > > > - CapsuleAfile.FileName = AfileName > > > - CapsuleObj.CapsuleDataList.append(CapsuleAfile) > > > + myCapsuleAfile = CapsuleAfile() > > > + myCapsuleAfile.FileName = AfileName > > > + CapsuleObj.CapsuleDataList.append(myCapsuleAfile) > > > return True > > > > > > - ## __GetRule() method > > > + ## _GetRule() method > > > # > > > # Get Rule section contents and store its data into rule list of self.Profile > > > # > > > @@ -3585,143 +3504,140 @@ class FdfParser: > > > # @retval True Successfully find a Rule > > > # @retval False Not able to find a Rule > > > # > > > - def __GetRule(self): > > > - > > > - if not self.__GetNextToken(): > > > + def _GetRule(self): > > > + if not self._GetNextToken(): > > > return False > > > > > > - S = self.__Token.upper() > > > - if S.startswith("[") and not S.startswith("[RULE."): > > > + S = self._Token.upper() > > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[RULE."): > > > self.SectionParser(S) > > > - self.__UndoToken() > > > + self._UndoToken() > > > return False > > > - self.__UndoToken() > > > - if not self.__IsToken("[Rule.", True): > > > + self._UndoToken() > > > + if not self._IsToken("[Rule.", True): > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > raise Warning("expected [Rule.]", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__SkipToToken("."): > > > + if not self._SkipToToken(TAB_SPLIT): > > > raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) > > > > > > - Arch = self.__SkippedChars.rstrip(".") > > > + Arch = self._SkippedChars.rstrip(TAB_SPLIT) > > > if Arch.upper() not in ARCH_SET_FULL: > > > raise Warning("Unknown Arch '%s'" % Arch, self.FileName, self.CurrentLineNumber) > > > > > > - ModuleType = self.__GetModuleType() > > > + ModuleType = self._GetModuleType() > > > > > > TemplateName = "" > > > - if self.__IsToken("."): > > > - if not self.__GetNextWord(): > > > + if self._IsToken(TAB_SPLIT): > > > + if not self._GetNextWord(): > > > raise Warning("expected template name", self.FileName, self.CurrentLineNumber) > > > - TemplateName = self.__Token > > > + TemplateName = self._Token > > > > > > - if not 
self.__IsToken( "]"): > > > + if not self._IsToken(TAB_SECTION_END): > > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > > > - RuleObj = self.__GetRuleFileStatements() > > > + RuleObj = self._GetRuleFileStatements() > > > RuleObj.Arch = Arch.upper() > > > RuleObj.ModuleType = ModuleType > > > RuleObj.TemplateName = TemplateName > > > - if TemplateName == '' : > > > + if TemplateName == '': > > > self.Profile.RuleDict['RULE' + \ > > > - '.' + \ > > > + TAB_SPLIT + \ > > > Arch.upper() + \ > > > - '.' + \ > > > + TAB_SPLIT + \ > > > ModuleType.upper() ] = RuleObj > > > - else : > > > + else: > > > self.Profile.RuleDict['RULE' + \ > > > - '.' + \ > > > + TAB_SPLIT + \ > > > Arch.upper() + \ > > > - '.' + \ > > > + TAB_SPLIT + \ > > > ModuleType.upper() + \ > > > - '.' + \ > > > + TAB_SPLIT + \ > > > TemplateName.upper() ] = RuleObj > > > return True > > > > > > - ## __GetModuleType() method > > > + ## _GetModuleType() method > > > # > > > # Return the module type > > > # > > > # @param self The object pointer > > > # @retval string module type > > > # > > > - def __GetModuleType(self): > > > - > > > - if not self.__GetNextWord(): > > > + def _GetModuleType(self): > > > + if not self._GetNextWord(): > > > raise Warning("expected Module type", self.FileName, self.CurrentLineNumber) > > > - if self.__Token.upper() not in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, > > SUP_MODULE_DXE_CORE, \ > > > + if self._Token.upper() not in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, > > SUP_MODULE_DXE_CORE, \ > > > SUP_MODULE_DXE_DRIVER, SUP_MODULE_DXE_SAL_DRIVER, \ > > > SUP_MODULE_DXE_SMM_DRIVER, SUP_MODULE_DXE_RUNTIME_DRIVER, \ > > > - SUP_MODULE_UEFI_DRIVER, SUP_MODULE_UEFI_APPLICATION, > SUP_MODULE_USER_DEFINED, > > "DEFAULT", SUP_MODULE_BASE, \ > > > + SUP_MODULE_UEFI_DRIVER, SUP_MODULE_UEFI_APPLICATION, > SUP_MODULE_USER_DEFINED, > > TAB_DEFAULT, SUP_MODULE_BASE, \ > > > EDK_COMPONENT_TYPE_SECURITY_CORE, > > EDK_COMPONENT_TYPE_COMBINED_PEIM_DRIVER, EDK_COMPONENT_TYPE_PIC_PEIM, > > EDK_COMPONENT_TYPE_RELOCATABLE_PEIM, \ > > > "PE32_PEIM", EDK_COMPONENT_TYPE_BS_DRIVER, > > EDK_COMPONENT_TYPE_RT_DRIVER, EDK_COMPONENT_TYPE_SAL_RT_DRIVER, EDK_COMPONENT_TYPE_APPLICATION, > "ACPITABLE", > > SUP_MODULE_SMM_CORE, SUP_MODULE_MM_STANDALONE, SUP_MODULE_MM_CORE_STANDALONE): > > > - raise Warning("Unknown Module type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > - return self.__Token > > > + raise Warning("Unknown Module type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > + return self._Token > > > > > > - ## __GetFileExtension() method > > > + ## _GetFileExtension() method > > > # > > > # Return the file extension > > > # > > > # @param self The object pointer > > > # @retval string file name extension > > > # > > > - def __GetFileExtension(self): > > > - if not self.__IsToken("."): > > > + def _GetFileExtension(self): > > > + if not self._IsToken(TAB_SPLIT): > > > raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) > > > > > > Ext = "" > > > - if self.__GetNextToken(): > > > - if FileExtensionPattern.match(self.__Token): > > > - Ext = self.__Token > > > - return '.' 
+ Ext > > > + if self._GetNextToken(): > > > + if FileExtensionPattern.match(self._Token): > > > + Ext = self._Token > > > + return TAB_SPLIT + Ext > > > else: > > > - raise Warning("Unknown file extension '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > + raise Warning("Unknown file extension '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > > > else: > > > raise Warning("expected file extension", self.FileName, self.CurrentLineNumber) > > > > > > - ## __GetRuleFileStatement() method > > > + ## _GetRuleFileStatement() method > > > # > > > # Get rule contents > > > # > > > # @param self The object pointer > > > # @retval Rule Rule object > > > # > > > - def __GetRuleFileStatements(self): > > > - > > > - if not self.__IsKeyword("FILE"): > > > + def _GetRuleFileStatements(self): > > > + if not self._IsKeyword("FILE"): > > > raise Warning("expected FILE", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextWord(): > > > + if not self._GetNextWord(): > > > raise Warning("expected FFS type", self.FileName, self.CurrentLineNumber) > > > > > > - Type = self.__Token.strip().upper() > > > + Type = self._Token.strip().upper() > > > if Type not in ("RAW", "FREEFORM", SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM,\ > > > "PEI_DXE_COMBO", "DRIVER", SUP_MODULE_DXE_CORE, > > EDK_COMPONENT_TYPE_APPLICATION, "FV_IMAGE", "SMM", SUP_MODULE_SMM_CORE, SUP_MODULE_MM_STANDALONE, > > SUP_MODULE_MM_CORE_STANDALONE): > > > - raise Warning("Unknown FV type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > + raise Warning("Unknown FV type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsKeyword("$(NAMED_GUID)"): > > > - if not self.__GetNextWord(): > > > + if not self._IsKeyword("$(NAMED_GUID)"): > > > + if not self._GetNextWord(): > > > raise Warning("expected $(NAMED_GUID)", self.FileName, self.CurrentLineNumber) > > > - if self.__Token == 'PCD': > > > - if not self.__IsToken( "("): > > > + if self._Token == 'PCD': > > > + if not self._IsToken("("): > > > raise Warning("expected '('", self.FileName, self.CurrentLineNumber) > > > - PcdPair = self.__GetNextPcdSettings() > > > - if not self.__IsToken( ")"): > > > + PcdPair = self._GetNextPcdSettings() > > > + if not self._IsToken(")"): > > > raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) > > > - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' > > > + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' > > > > > > - NameGuid = self.__Token > > > + NameGuid = self._Token > > > > > > KeepReloc = None > > > - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): > > > - if self.__FileCouldHaveRelocFlag(Type): > > > - if self.__Token == 'RELOCS_STRIPPED': > > > + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): > > > + if self._FileCouldHaveRelocFlag(Type): > > > + if self._Token == 'RELOCS_STRIPPED': > > > KeepReloc = False > > > else: > > > KeepReloc = True > > > @@ -3729,111 +3645,109 @@ class FdfParser: > > > raise Warning("File type %s could not have reloc strip flag%d" % (Type, self.CurrentLineNumber), > self.FileName, > > self.CurrentLineNumber) > > > > > > KeyStringList = [] > > > - if self.__GetNextToken(): > > > - if TokenFindPattern.match(self.__Token): > > > - KeyStringList.append(self.__Token) > 
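(The KeyString handling here accepts a comma-separated list of Target_Tag_Arch strings such as DEBUG_GCC5_X64 or *_*_IA32. A minimal sketch of that check, with an assumed regular expression standing in for the shared TokenFindPattern, is:)

import re

# Assumed stand-in only; the real TokenFindPattern in FdfParser.py may
# differ in detail.
TOKEN_FIND_PATTERN = re.compile(r'^[A-Za-z0-9$()*-]+_[A-Za-z0-9$()*-]+_[A-Za-z0-9$()*-]+$')

def collect_key_strings(tokens):
    key_strings = []
    for token in tokens:
        if not TOKEN_FIND_PATTERN.match(token):
            raise ValueError('expected KeyString "Target_Tag_Arch", got %r' % token)
        key_strings.append(token)
    return key_strings
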
> > - if self.__IsToken(","): > > > - while self.__GetNextToken(): > > > - if not TokenFindPattern.match(self.__Token): > > > + if self._GetNextToken(): > > > + if TokenFindPattern.match(self._Token): > > > + KeyStringList.append(self._Token) > > > + if self._IsToken(TAB_COMMA_SPLIT): > > > + while self._GetNextToken(): > > > + if not TokenFindPattern.match(self._Token): > > > raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, > self.CurrentLineNumber) > > > - KeyStringList.append(self.__Token) > > > + KeyStringList.append(self._Token) > > > > > > - if not self.__IsToken(","): > > > + if not self._IsToken(TAB_COMMA_SPLIT): > > > break > > > > > > else: > > > - self.__UndoToken() > > > + self._UndoToken() > > > > > > > > > Fixed = False > > > - if self.__IsKeyword("Fixed", True): > > > + if self._IsKeyword("Fixed", True): > > > Fixed = True > > > > > > CheckSum = False > > > - if self.__IsKeyword("CheckSum", True): > > > + if self._IsKeyword("CheckSum", True): > > > CheckSum = True > > > > > > AlignValue = "" > > > - if self.__GetAlignment(): > > > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > + if self._GetAlignment(): > > > + if self._Token not in ALIGNMENTS: > > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > #For FFS, Auto is default option same to "" > > > - if not self.__Token == "Auto": > > > - AlignValue = self.__Token > > > + if not self._Token == "Auto": > > > + AlignValue = self._Token > > > > > > - if self.__IsToken("{"): > > > + if self._IsToken("{"): > > > # Complex file rule expected > > > - Rule = RuleComplexFile.RuleComplexFile() > > > - Rule.FvFileType = Type > > > - Rule.NameGuid = NameGuid > > > - Rule.Alignment = AlignValue > > > - Rule.CheckSum = CheckSum > > > - Rule.Fixed = Fixed > > > - Rule.KeyStringList = KeyStringList > > > + NewRule = RuleComplexFile() > > > + NewRule.FvFileType = Type > > > + NewRule.NameGuid = NameGuid > > > + NewRule.Alignment = AlignValue > > > + NewRule.CheckSum = CheckSum > > > + NewRule.Fixed = Fixed > > > + NewRule.KeyStringList = KeyStringList > > > if KeepReloc is not None: > > > - Rule.KeepReloc = KeepReloc > > > + NewRule.KeepReloc = KeepReloc > > > > > > while True: > > > - IsEncapsulate = self.__GetRuleEncapsulationSection(Rule) > > > - IsLeaf = self.__GetEfiSection(Rule) > > > + IsEncapsulate = self._GetRuleEncapsulationSection(NewRule) > > > + IsLeaf = self._GetEfiSection(NewRule) > > > if not IsEncapsulate and not IsLeaf: > > > break > > > > > > - if not self.__IsToken("}"): > > > + if not self._IsToken("}"): > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > - return Rule > > > + return NewRule > > > > > > else: > > > # Simple file rule expected > > > - if not self.__GetNextWord(): > > > + if not self._GetNextWord(): > > > raise Warning("expected leaf section type", self.FileName, self.CurrentLineNumber) > > > > > > - SectionName = self.__Token > > > + SectionName = self._Token > > > > > > if SectionName not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > > BINARY_FILE_TYPE_UI, BINARY_FILE_TYPE_PEI_DEPEX, "VERSION", "SUBTYPE_GUID", > > BINARY_FILE_TYPE_SMM_DEPEX): > > > raise Warning("Unknown leaf section 
name '%s'" % SectionName, self.FileName, self.CurrentLineNumber) > > > > > > > > > - if self.__IsKeyword("Fixed", True): > > > + if self._IsKeyword("Fixed", True): > > > Fixed = True > > > > > > - if self.__IsKeyword("CheckSum", True): > > > + if self._IsKeyword("CheckSum", True): > > > CheckSum = True > > > > > > SectAlignment = "" > > > - if self.__GetAlignment(): > > > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > - if self.__Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == > > BINARY_FILE_TYPE_TE): > > > + if self._GetAlignment(): > > > + if self._Token not in ALIGNMENTS: > > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > + if self._Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == > > BINARY_FILE_TYPE_TE): > > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, > > self.CurrentLineNumber) > > > - SectAlignment = self.__Token > > > + SectAlignment = self._Token > > > > > > Ext = None > > > - if self.__IsToken('|'): > > > - Ext = self.__GetFileExtension() > > > - elif not self.__GetNextToken(): > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > + Ext = self._GetFileExtension() > > > + elif not self._GetNextToken(): > > > raise Warning("expected File name", self.FileName, self.CurrentLineNumber) > > > > > > - Rule = RuleSimpleFile.RuleSimpleFile() > > > - Rule.SectionType = SectionName > > > - Rule.FvFileType = Type > > > - Rule.NameGuid = NameGuid > > > - Rule.Alignment = AlignValue > > > - Rule.SectAlignment = SectAlignment > > > - Rule.CheckSum = CheckSum > > > - Rule.Fixed = Fixed > > > - Rule.KeyStringList = KeyStringList > > > + NewRule = RuleSimpleFile() > > > + NewRule.SectionType = SectionName > > > + NewRule.FvFileType = Type > > > + NewRule.NameGuid = NameGuid > > > + NewRule.Alignment = AlignValue > > > + NewRule.SectAlignment = SectAlignment > > > + NewRule.CheckSum = CheckSum > > > + NewRule.Fixed = Fixed > > > + NewRule.KeyStringList = KeyStringList > > > if KeepReloc is not None: > > > - Rule.KeepReloc = KeepReloc > > > - Rule.FileExtension = Ext > > > - Rule.FileName = self.__Token > > > - return Rule > > > + NewRule.KeepReloc = KeepReloc > > > + NewRule.FileExtension = Ext > > > + NewRule.FileName = self._Token > > > + return NewRule > > > > > > - ## __GetEfiSection() method > > > + ## _GetEfiSection() method > > > # > > > # Get section list for Rule > > > # > > > @@ -3842,127 +3756,124 @@ class FdfParser: > > > # @retval True Successfully find section statement > > > # @retval False Not able to find section statement > > > # > > > - def __GetEfiSection(self, Obj): > > > - > > > + def _GetEfiSection(self, Obj): > > > OldPos = self.GetFileBufferPos() > > > - if not self.__GetNextWord(): > > > + if not self._GetNextWord(): > > > return False > > > - SectionName = self.__Token > > > + SectionName = self._Token > > > > > > if SectionName not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > > BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, > BINARY_FILE_TYPE_GUID, > > BINARY_FILE_TYPE_SMM_DEPEX): > > > - self.__UndoToken() > > > + self._UndoToken() > > > return False > > > > > > if 
SectionName == "FV_IMAGE": > > > - FvImageSectionObj = FvImageSection.FvImageSection() > > > - if self.__IsKeyword("FV_IMAGE"): > > > + FvImageSectionObj = FvImageSection() > > > + if self._IsKeyword("FV_IMAGE"): > > > pass > > > - if self.__IsToken( "{"): > > > - FvObj = Fv.FV() > > > - self.__GetDefineStatements(FvObj) > > > - self.__GetBlockStatement(FvObj) > > > - self.__GetSetStatements(FvObj) > > > - self.__GetFvAlignment(FvObj) > > > - self.__GetFvAttributes(FvObj) > > > - self.__GetAprioriSection(FvObj) > > > - self.__GetAprioriSection(FvObj) > > > + if self._IsToken("{"): > > > + FvObj = FV() > > > + self._GetDefineStatements(FvObj) > > > + self._GetBlockStatement(FvObj) > > > + self._GetSetStatements(FvObj) > > > + self._GetFvAlignment(FvObj) > > > + self._GetFvAttributes(FvObj) > > > + self._GetAprioriSection(FvObj) > > > + self._GetAprioriSection(FvObj) > > > > > > while True: > > > - IsInf = self.__GetInfStatement(FvObj) > > > - IsFile = self.__GetFileStatement(FvObj) > > > + IsInf = self._GetInfStatement(FvObj) > > > + IsFile = self._GetFileStatement(FvObj) > > > if not IsInf and not IsFile: > > > break > > > > > > - if not self.__IsToken( "}"): > > > + if not self._IsToken("}"): > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > FvImageSectionObj.Fv = FvObj > > > FvImageSectionObj.FvName = None > > > > > > else: > > > - if not self.__IsKeyword(BINARY_FILE_TYPE_FV): > > > - raise Warning("expected BINARY_FILE_TYPE_FV", self.FileName, self.CurrentLineNumber) > > > - FvImageSectionObj.FvFileType = self.__Token > > > + if not self._IsKeyword(BINARY_FILE_TYPE_FV): > > > + raise Warning("expected 'FV'", self.FileName, self.CurrentLineNumber) > > > + FvImageSectionObj.FvFileType = self._Token > > > > > > - if self.__GetAlignment(): > > > - if self.__Token not in ("8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > - FvImageSectionObj.Alignment = self.__Token > > > + if self._GetAlignment(): > > > + if self._Token not in ALIGNMENT_NOAUTO: > > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > + FvImageSectionObj.Alignment = self._Token > > > > > > - if self.__IsToken('|'): > > > - FvImageSectionObj.FvFileExtension = self.__GetFileExtension() > > > - elif self.__GetNextToken(): > > > - if self.__Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, > > BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > + FvImageSectionObj.FvFileExtension = self._GetFileExtension() > > > + elif self._GetNextToken(): > > > + if self._Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, > > BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > > BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, > BINARY_FILE_TYPE_GUID, > > BINARY_FILE_TYPE_SMM_DEPEX): > > > - FvImageSectionObj.FvFileName = self.__Token > > > + FvImageSectionObj.FvFileName = self._Token > > > else: > > > - self.__UndoToken() > > > + self._UndoToken() > > > else: > > > raise Warning("expected FV file name", self.FileName, self.CurrentLineNumber) > > > > > > Obj.SectionList.append(FvImageSectionObj) > > > return True > > > > > > - EfiSectionObj = EfiSection.EfiSection() > > > + EfiSectionObj = EfiSection() > > > EfiSectionObj.SectionType 
= SectionName > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected file type", self.FileName, self.CurrentLineNumber) > > > > > > - if self.__Token == "STRING": > > > - if not self.__RuleSectionCouldHaveString(EfiSectionObj.SectionType): > > > + if self._Token == "STRING": > > > + if not self._RuleSectionCouldHaveString(EfiSectionObj.SectionType): > > > raise Warning("%s section could NOT have string data%d" % (EfiSectionObj.SectionType, > self.CurrentLineNumber), > > self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken('='): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected Quoted String", self.FileName, self.CurrentLineNumber) > > > > > > - if self.__GetStringData(): > > > - EfiSectionObj.StringData = self.__Token > > > + if self._GetStringData(): > > > + EfiSectionObj.StringData = self._Token > > > > > > - if self.__IsKeyword("BUILD_NUM"): > > > - if not self.__RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): > > > + if self._IsKeyword("BUILD_NUM"): > > > + if not self._RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): > > > raise Warning("%s section could NOT have BUILD_NUM%d" % (EfiSectionObj.SectionType, > > self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected Build number", self.FileName, self.CurrentLineNumber) > > > - EfiSectionObj.BuildNum = self.__Token > > > + EfiSectionObj.BuildNum = self._Token > > > > > > else: > > > - EfiSectionObj.FileType = self.__Token > > > - self.__CheckRuleSectionFileType(EfiSectionObj.SectionType, EfiSectionObj.FileType) > > > + EfiSectionObj.FileType = self._Token > > > + self._CheckRuleSectionFileType(EfiSectionObj.SectionType, EfiSectionObj.FileType) > > > > > > - if self.__IsKeyword("Optional"): > > > - if not self.__RuleSectionCouldBeOptional(EfiSectionObj.SectionType): > > > + if self._IsKeyword("Optional"): > > > + if not self._RuleSectionCouldBeOptional(EfiSectionObj.SectionType): > > > raise Warning("%s section could NOT be optional%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), > > self.FileName, self.CurrentLineNumber) > > > EfiSectionObj.Optional = True > > > > > > - if self.__IsKeyword("BUILD_NUM"): > > > - if not self.__RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): > > > + if self._IsKeyword("BUILD_NUM"): > > > + if not self._RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): > > > raise Warning("%s section could NOT have BUILD_NUM%d" % (EfiSectionObj.SectionType, > > self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected Build number", self.FileName, self.CurrentLineNumber) > > > - EfiSectionObj.BuildNum = self.__Token > > > + EfiSectionObj.BuildNum = self._Token > > > > > > - if self.__GetAlignment(): > > > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", 
"4K", "32K", "64K", "128K", > > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > - if self.__Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == > > BINARY_FILE_TYPE_TE): > > > + if self._GetAlignment(): > > > + if self._Token not in ALIGNMENTS: > > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > + if self._Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == > > BINARY_FILE_TYPE_TE): > > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, > self.CurrentLineNumber) > > > - EfiSectionObj.Alignment = self.__Token > > > + EfiSectionObj.Alignment = self._Token > > > > > > - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): > > > - if self.__SectionCouldHaveRelocFlag(EfiSectionObj.SectionType): > > > - if self.__Token == 'RELOCS_STRIPPED': > > > + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): > > > + if self._SectionCouldHaveRelocFlag(EfiSectionObj.SectionType): > > > + if self._Token == 'RELOCS_STRIPPED': > > > EfiSectionObj.KeepReloc = False > > > else: > > > EfiSectionObj.KeepReloc = True > > > @@ -3972,35 +3883,35 @@ class FdfParser: > > > raise Warning("Section type %s could not have reloc strip flag" % EfiSectionObj.SectionType, self.FileName, > > self.CurrentLineNumber) > > > > > > > > > - if self.__IsToken('|'): > > > - EfiSectionObj.FileExtension = self.__GetFileExtension() > > > - elif self.__GetNextToken(): > > > - if self.__Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > + EfiSectionObj.FileExtension = self._GetFileExtension() > > > + elif self._GetNextToken(): > > > + if self._Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > > BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, > > BINARY_FILE_TYPE_SMM_DEPEX): > > > > > > - if self.__Token.startswith('PCD'): > > > - self.__UndoToken() > > > - self.__GetNextWord() > > > + if self._Token.startswith('PCD'): > > > + self._UndoToken() > > > + self._GetNextWord() > > > > > > - if self.__Token == 'PCD': > > > - if not self.__IsToken( "("): > > > + if self._Token == 'PCD': > > > + if not self._IsToken("("): > > > raise Warning("expected '('", self.FileName, self.CurrentLineNumber) > > > - PcdPair = self.__GetNextPcdSettings() > > > - if not self.__IsToken( ")"): > > > + PcdPair = self._GetNextPcdSettings() > > > + if not self._IsToken(")"): > > > raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) > > > - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' > > > + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' > > > > > > - EfiSectionObj.FileName = self.__Token > > > + EfiSectionObj.FileName = self._Token > > > > > > else: > > > - self.__UndoToken() > > > + self._UndoToken() > > > else: > > > raise Warning("expected section file name", self.FileName, self.CurrentLineNumber) > > > > > > Obj.SectionList.append(EfiSectionObj) > > > return True > > > > > > - ## __RuleSectionCouldBeOptional() method > > > + ## _RuleSectionCouldBeOptional() method > > > # > > > # Get whether a section could be optional > > > # 
> > > @@ -4009,13 +3920,13 @@ class FdfParser: > > > # @retval False section never optional > > > # > > > @staticmethod > > > - def __RuleSectionCouldBeOptional(SectionType): > > > + def _RuleSectionCouldBeOptional(SectionType): > > > if SectionType in (BINARY_FILE_TYPE_DXE_DEPEX, BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, > > "RAW", BINARY_FILE_TYPE_SMM_DEPEX): > > > return True > > > else: > > > return False > > > > > > - ## __RuleSectionCouldHaveBuildNum() method > > > + ## _RuleSectionCouldHaveBuildNum() method > > > # > > > # Get whether a section could have build number information > > > # > > > @@ -4024,13 +3935,13 @@ class FdfParser: > > > # @retval False section never have build number information > > > # > > > @staticmethod > > > - def __RuleSectionCouldHaveBuildNum(SectionType): > > > + def _RuleSectionCouldHaveBuildNum(SectionType): > > > if SectionType in ("VERSION"): > > > return True > > > else: > > > return False > > > > > > - ## __RuleSectionCouldHaveString() method > > > + ## _RuleSectionCouldHaveString() method > > > # > > > # Get whether a section could have string > > > # > > > @@ -4039,13 +3950,13 @@ class FdfParser: > > > # @retval False section never have string > > > # > > > @staticmethod > > > - def __RuleSectionCouldHaveString(SectionType): > > > + def _RuleSectionCouldHaveString(SectionType): > > > if SectionType in (BINARY_FILE_TYPE_UI, "VERSION"): > > > return True > > > else: > > > return False > > > > > > - ## __CheckRuleSectionFileType() method > > > + ## _CheckRuleSectionFileType() method > > > # > > > # Get whether a section matches a file type > > > # > > > @@ -4053,7 +3964,7 @@ class FdfParser: > > > # @param SectionType The section type to check > > > # @param FileType The file type to check > > > # > > > - def __CheckRuleSectionFileType(self, SectionType, FileType): > > > + def _CheckRuleSectionFileType(self, SectionType, FileType): > > > if SectionType == "COMPAT16": > > > if FileType not in ("COMPAT16", "SEC_COMPAT16"): > > > raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) > > > @@ -4085,53 +3996,53 @@ class FdfParser: > > > if FileType not in (BINARY_FILE_TYPE_PE32, "SEC_GUID"): > > > raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) > > > > > > - ## __GetRuleEncapsulationSection() method > > > + ## _GetRuleEncapsulationSection() method > > > # > > > # Get encapsulation section for Rule > > > # > > > # @param self The object pointer > > > - # @param Rule for whom section is got > > > + # @param theRule for whom section is got > > > # @retval True Successfully find section statement > > > # @retval False Not able to find section statement > > > # > > > - def __GetRuleEncapsulationSection(self, Rule): > > > - if self.__IsKeyword( "COMPRESS"): > > > + def _GetRuleEncapsulationSection(self, theRule): > > > + if self._IsKeyword("COMPRESS"): > > > Type = "PI_STD" > > > - if self.__IsKeyword("PI_STD") or self.__IsKeyword("PI_NONE"): > > > - Type = self.__Token > > > + if self._IsKeyword("PI_STD") or self._IsKeyword("PI_NONE"): > > > + Type = self._Token > > > > > > - if not self.__IsToken("{"): > > > + if not self._IsToken("{"): > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > > > - CompressSectionObj = CompressSection.CompressSection() > > > + CompressSectionObj = CompressSection() > > > > > > CompressSectionObj.CompType = Type > > > # Recursive sections... 
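[Illustrative aside, hypothetical names, not part of the patch: the "Recursive sections" loop that follows is the parser's usual pattern of alternating an encapsulation-section parser with a leaf-section parser until neither consumes anything more, after which the closing '}' is required. A stripped-down sketch:

  def parse_body(parse_encapsulated, parse_leaf):
      # Alternate between the two sub-parsers; stop once both report no match.
      while True:
          got_encap = parse_encapsulated()
          got_leaf = parse_leaf()
          if not got_encap and not got_leaf:
              break

  # Stub parsers over a shared token list, just to show termination:
  pending = ["COMPRESS", "PE32", "UI"]
  def encap():
      if pending and pending[0] == "COMPRESS":
          pending.pop(0)
          return True
      return False
  def leaf():
      if pending and pending[0] != "COMPRESS":
          pending.pop(0)
          return True
      return False

  parse_body(encap, leaf)
  print(pending)   # [] -- every pending item was consumed
]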
> > > while True: > > > - IsEncapsulate = self.__GetRuleEncapsulationSection(CompressSectionObj) > > > - IsLeaf = self.__GetEfiSection(CompressSectionObj) > > > + IsEncapsulate = self._GetRuleEncapsulationSection(CompressSectionObj) > > > + IsLeaf = self._GetEfiSection(CompressSectionObj) > > > if not IsEncapsulate and not IsLeaf: > > > break > > > > > > - if not self.__IsToken( "}"): > > > + if not self._IsToken("}"): > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > - Rule.SectionList.append(CompressSectionObj) > > > + theRule.SectionList.append(CompressSectionObj) > > > > > > return True > > > > > > - elif self.__IsKeyword( "GUIDED"): > > > + elif self._IsKeyword("GUIDED"): > > > GuidValue = None > > > - if self.__GetNextGuid(): > > > - GuidValue = self.__Token > > > + if self._GetNextGuid(): > > > + GuidValue = self._Token > > > > > > - if self.__IsKeyword( "$(NAMED_GUID)"): > > > - GuidValue = self.__Token > > > + if self._IsKeyword("$(NAMED_GUID)"): > > > + GuidValue = self._Token > > > > > > - AttribDict = self.__GetGuidAttrib() > > > + AttribDict = self._GetGuidAttrib() > > > > > > - if not self.__IsToken("{"): > > > + if not self._IsToken("{"): > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > - GuidSectionObj = GuidSection.GuidSection() > > > + GuidSectionObj = GuidSection() > > > GuidSectionObj.NameGuid = GuidValue > > > GuidSectionObj.SectionType = "GUIDED" > > > GuidSectionObj.ProcessRequired = AttribDict["PROCESSING_REQUIRED"] > > > @@ -4140,20 +4051,20 @@ class FdfParser: > > > > > > # Efi sections... > > > while True: > > > - IsEncapsulate = self.__GetRuleEncapsulationSection(GuidSectionObj) > > > - IsLeaf = self.__GetEfiSection(GuidSectionObj) > > > + IsEncapsulate = self._GetRuleEncapsulationSection(GuidSectionObj) > > > + IsLeaf = self._GetEfiSection(GuidSectionObj) > > > if not IsEncapsulate and not IsLeaf: > > > break > > > > > > - if not self.__IsToken( "}"): > > > + if not self._IsToken("}"): > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > - Rule.SectionList.append(GuidSectionObj) > > > + theRule.SectionList.append(GuidSectionObj) > > > > > > return True > > > > > > return False > > > > > > - ## __GetVtf() method > > > + ## _GetVtf() method > > > # > > > # Get VTF section contents and store its data into VTF list of self.Profile > > > # > > > @@ -4161,69 +4072,69 @@ class FdfParser: > > > # @retval True Successfully find a VTF > > > # @retval False Not able to find a VTF > > > # > > > - def __GetVtf(self): > > > - if not self.__GetNextToken(): > > > + def _GetVtf(self): > > > + if not self._GetNextToken(): > > > return False > > > > > > - S = self.__Token.upper() > > > - if S.startswith("[") and not S.startswith("[VTF."): > > > + S = self._Token.upper() > > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[VTF."): > > > self.SectionParser(S) > > > - self.__UndoToken() > > > + self._UndoToken() > > > return False > > > > > > - self.__UndoToken() > > > - if not self.__IsToken("[VTF.", True): > > > + self._UndoToken() > > > + if not self._IsToken("[VTF.", True): > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 
1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > raise Warning("expected [VTF.]", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__SkipToToken("."): > > > + if not self._SkipToToken(TAB_SPLIT): > > > raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) > > > > > > - Arch = self.__SkippedChars.rstrip(".").upper() > > > - if Arch not in ("IA32", "X64", "IPF", "ARM", "AARCH64"): > > > + Arch = self._SkippedChars.rstrip(TAB_SPLIT).upper() > > > + if Arch not in {TAB_ARCH_IA32, TAB_ARCH_X64, TAB_ARCH_IPF, TAB_ARCH_ARM, TAB_ARCH_AARCH64}: > > > raise Warning("Unknown Arch '%s'" % Arch, self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextWord(): > > > + if not self._GetNextWord(): > > > raise Warning("expected VTF name", self.FileName, self.CurrentLineNumber) > > > - Name = self.__Token.upper() > > > + Name = self._Token.upper() > > > > > > - VtfObj = Vtf.Vtf() > > > + VtfObj = Vtf() > > > VtfObj.UiName = Name > > > VtfObj.KeyArch = Arch > > > > > > - if self.__IsToken(","): > > > - if not self.__GetNextWord(): > > > + if self._IsToken(TAB_COMMA_SPLIT): > > > + if not self._GetNextWord(): > > > raise Warning("expected Arch list", self.FileName, self.CurrentLineNumber) > > > - if self.__Token.upper() not in ("IA32", "X64", "IPF", "ARM", "AARCH64"): > > > - raise Warning("Unknown Arch '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > - VtfObj.ArchList = self.__Token.upper() > > > + if self._Token.upper() not in (TAB_ARCH_IA32, TAB_ARCH_X64, TAB_ARCH_IPF, TAB_ARCH_ARM, > > TAB_ARCH_AARCH64): > > > + raise Warning("Unknown Arch '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > + VtfObj.ArchList = self._Token.upper() > > > > > > - if not self.__IsToken( "]"): > > > + if not self._IsToken(TAB_SECTION_END): > > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > > > - if self.__IsKeyword("IA32_RST_BIN"): > > > - if not self.__IsToken("="): > > > + if self._IsKeyword("IA32_RST_BIN"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected Reset file", self.FileName, self.CurrentLineNumber) > > > > > > - VtfObj.ResetBin = self.__Token > > > - if VtfObj.ResetBin.replace('$(WORKSPACE)', '').find('$') == -1: > > > + VtfObj.ResetBin = self._Token > > > + if VtfObj.ResetBin.replace(TAB_WORKSPACE, '').find('$') == -1: > > > #check for file path > > > ErrorCode, ErrorInfo = PathClass(NormPath(VtfObj.ResetBin), GenFdsGlobalVariable.WorkSpaceDir).Validate() > > > if ErrorCode != 0: > > > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > > > > > - while self.__GetComponentStatement(VtfObj): > > > + while self._GetComponentStatement(VtfObj): > > > pass > > > > > > self.Profile.VtfList.append(VtfObj) > > > return True > > > > > > - ## __GetComponentStatement() method > > > + ## _GetComponentStatement() method > > > # > > > # Get components in VTF > > > # > > > @@ -4232,132 +4143,132 @@ class FdfParser: > > > # @retval True Successfully find a component > > > # @retval False Not able to find a component > > > # > > > - def __GetComponentStatement(self, VtfObj): > > > - if not self.__IsKeyword("COMP_NAME"): > > > + def _GetComponentStatement(self, VtfObj): > > > + if not self._IsKeyword("COMP_NAME"): > > > return False > > > > > > - if not 
self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextWord(): > > > + if not self._GetNextWord(): > > > raise Warning("expected Component Name", self.FileName, self.CurrentLineNumber) > > > > > > - CompStatementObj = ComponentStatement.ComponentStatement() > > > - CompStatementObj.CompName = self.__Token > > > + CompStatementObj = ComponentStatement() > > > + CompStatementObj.CompName = self._Token > > > > > > - if not self.__IsKeyword("COMP_LOC"): > > > + if not self._IsKeyword("COMP_LOC"): > > > raise Warning("expected COMP_LOC", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > CompStatementObj.CompLoc = "" > > > - if self.__GetNextWord(): > > > - CompStatementObj.CompLoc = self.__Token > > > - if self.__IsToken('|'): > > > - if not self.__GetNextWord(): > > > + if self._GetNextWord(): > > > + CompStatementObj.CompLoc = self._Token > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > + if not self._GetNextWord(): > > > raise Warning("Expected Region Name", self.FileName, self.CurrentLineNumber) > > > > > > - if self.__Token not in ("F", "N", "S"): #, "H", "L", "PH", "PL"): not support > > > - raise Warning("Unknown location type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > + if self._Token not in ("F", "N", "S"): #, "H", "L", "PH", "PL"): not support > > > + raise Warning("Unknown location type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > > > - CompStatementObj.FilePos = self.__Token > > > + CompStatementObj.FilePos = self._Token > > > else: > > > self.CurrentLineNumber += 1 > > > self.CurrentOffsetWithinLine = 0 > > > > > > - if not self.__IsKeyword("COMP_TYPE"): > > > + if not self._IsKeyword("COMP_TYPE"): > > > raise Warning("expected COMP_TYPE", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected Component type", self.FileName, self.CurrentLineNumber) > > > - if self.__Token not in ("FIT", "PAL_B", "PAL_A", "OEM"): > > > - if not self.__Token.startswith("0x") or len(self.__Token) < 3 or len(self.__Token) > 4 or \ > > > - not self.__Token[2] in string.hexdigits or not self.__Token[-1] in string.hexdigits: > > > - raise Warning("Unknown location type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > - CompStatementObj.CompType = self.__Token > > > + if self._Token not in ("FIT", "PAL_B", "PAL_A", "OEM"): > > > + if not self._Token.startswith("0x") or len(self._Token) < 3 or len(self._Token) > 4 or \ > > > + not self._Token[2] in hexdigits or not self._Token[-1] in hexdigits: > > > + raise Warning("Unknown location type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > + CompStatementObj.CompType = self._Token > > > > > > - if not self.__IsKeyword("COMP_VER"): > > > + if not self._IsKeyword("COMP_VER"): > > > raise Warning("expected COMP_VER", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not 
self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected Component version", self.FileName, self.CurrentLineNumber) > > > > > > - Pattern = re.compile('-$|[0-9a-fA-F]{1,2}\.[0-9a-fA-F]{1,2}$', re.DOTALL) > > > - if Pattern.match(self.__Token) is None: > > > - raise Warning("Unknown version format '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > - CompStatementObj.CompVer = self.__Token > > > + Pattern = compile('-$|[0-9a-fA-F]{1,2}\.[0-9a-fA-F]{1,2}$', DOTALL) > > > + if Pattern.match(self._Token) is None: > > > + raise Warning("Unknown version format '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > + CompStatementObj.CompVer = self._Token > > > > > > - if not self.__IsKeyword("COMP_CS"): > > > + if not self._IsKeyword("COMP_CS"): > > > raise Warning("expected COMP_CS", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected Component CS", self.FileName, self.CurrentLineNumber) > > > - if self.__Token not in ("1", "0"): > > > - raise Warning("Unknown Component CS '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > - CompStatementObj.CompCs = self.__Token > > > + if self._Token not in ("1", "0"): > > > + raise Warning("Unknown Component CS '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > + CompStatementObj.CompCs = self._Token > > > > > > > > > - if not self.__IsKeyword("COMP_BIN"): > > > + if not self._IsKeyword("COMP_BIN"): > > > raise Warning("expected COMP_BIN", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected Component file", self.FileName, self.CurrentLineNumber) > > > > > > - CompStatementObj.CompBin = self.__Token > > > - if CompStatementObj.CompBin != '-' and CompStatementObj.CompBin.replace('$(WORKSPACE)', '').find('$') == -1: > > > + CompStatementObj.CompBin = self._Token > > > + if CompStatementObj.CompBin != '-' and CompStatementObj.CompBin.replace(TAB_WORKSPACE, '').find('$') == -1: > > > #check for file path > > > ErrorCode, ErrorInfo = PathClass(NormPath(CompStatementObj.CompBin), > > GenFdsGlobalVariable.WorkSpaceDir).Validate() > > > if ErrorCode != 0: > > > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > > > > > - if not self.__IsKeyword("COMP_SYM"): > > > + if not self._IsKeyword("COMP_SYM"): > > > raise Warning("expected COMP_SYM", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected Component symbol file", self.FileName, self.CurrentLineNumber) > > > > > > - CompStatementObj.CompSym = self.__Token > > > - if CompStatementObj.CompSym != '-' and CompStatementObj.CompSym.replace('$(WORKSPACE)', '').find('$') == -1: > > > + CompStatementObj.CompSym = self._Token > > > + if CompStatementObj.CompSym != '-' and CompStatementObj.CompSym.replace(TAB_WORKSPACE, '').find('$') == -1: > > > #check for 
file path > > > ErrorCode, ErrorInfo = PathClass(NormPath(CompStatementObj.CompSym), > > GenFdsGlobalVariable.WorkSpaceDir).Validate() > > > if ErrorCode != 0: > > > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > > > > > - if not self.__IsKeyword("COMP_SIZE"): > > > + if not self._IsKeyword("COMP_SIZE"): > > > raise Warning("expected COMP_SIZE", self.FileName, self.CurrentLineNumber) > > > > > > - if not self.__IsToken("="): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > - if self.__IsToken("-"): > > > - CompStatementObj.CompSize = self.__Token > > > - elif self.__GetNextDecimalNumber(): > > > - CompStatementObj.CompSize = self.__Token > > > - elif self.__GetNextHexNumber(): > > > - CompStatementObj.CompSize = self.__Token > > > + if self._IsToken("-"): > > > + CompStatementObj.CompSize = self._Token > > > + elif self._GetNextDecimalNumber(): > > > + CompStatementObj.CompSize = self._Token > > > + elif self._GetNextHexNumber(): > > > + CompStatementObj.CompSize = self._Token > > > else: > > > - raise Warning("Unknown size '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > + raise Warning("Unknown size '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > > > VtfObj.ComponentStatementList.append(CompStatementObj) > > > return True > > > > > > - ## __GetOptionRom() method > > > + ## _GetOptionRom() method > > > # > > > # Get OptionROM section contents and store its data into OptionROM list of self.Profile > > > # > > > @@ -4365,38 +4276,37 @@ class FdfParser: > > > # @retval True Successfully find a OptionROM > > > # @retval False Not able to find a OptionROM > > > # > > > - def __GetOptionRom(self): > > > - if not self.__GetNextToken(): > > > + def _GetOptionRom(self): > > > + if not self._GetNextToken(): > > > return False > > > > > > - S = self.__Token.upper() > > > - if S.startswith("[") and not S.startswith("[OPTIONROM."): > > > + S = self._Token.upper() > > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[OPTIONROM."): > > > self.SectionParser(S) > > > - self.__UndoToken() > > > + self._UndoToken() > > > return False > > > > > > - self.__UndoToken() > > > - if not self.__IsToken("[OptionRom.", True): > > > - raise Warning("Unknown Keyword '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > + self._UndoToken() > > > + if not self._IsToken("[OptionRom.", True): > > > + raise Warning("Unknown Keyword '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > > > - OptRomName = self.__GetUiName() > > > + OptRomName = self._GetUiName() > > > > > > - if not self.__IsToken( "]"): > > > + if not self._IsToken(TAB_SECTION_END): > > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > > > - OptRomObj = OptionRom.OPTIONROM() > > > - OptRomObj.DriverName = OptRomName > > > + OptRomObj = OPTIONROM(OptRomName) > > > self.Profile.OptRomDict[OptRomName] = OptRomObj > > > > > > while True: > > > - isInf = self.__GetOptRomInfStatement(OptRomObj) > > > - isFile = self.__GetOptRomFileStatement(OptRomObj) > > > + isInf = self._GetOptRomInfStatement(OptRomObj) > > > + isFile = self._GetOptRomFileStatement(OptRomObj) > > > if not isInf and not isFile: > > > break > > > > > > return True > > > > > > - ## __GetOptRomInfStatement() method > > > + ## _GetOptRomInfStatement() method > > > # > > > # Get INF statements > > > # > > > @@ -4405,17 +4315,17 @@ class FdfParser: > > > # @retval True Successfully find inf 
statement > > > # @retval False Not able to find inf statement > > > # > > > - def __GetOptRomInfStatement(self, Obj): > > > - if not self.__IsKeyword( "INF"): > > > + def _GetOptRomInfStatement(self, Obj): > > > + if not self._IsKeyword("INF"): > > > return False > > > > > > - ffsInf = OptRomInfStatement.OptRomInfStatement() > > > - self.__GetInfOptions( ffsInf) > > > + ffsInf = OptRomInfStatement() > > > + self._GetInfOptions(ffsInf) > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected INF file path", self.FileName, self.CurrentLineNumber) > > > - ffsInf.InfFileName = self.__Token > > > - if ffsInf.InfFileName.replace('$(WORKSPACE)', '').find('$') == -1: > > > + ffsInf.InfFileName = self._Token > > > + if ffsInf.InfFileName.replace(TAB_WORKSPACE, '').find('$') == -1: > > > #check for file path > > > ErrorCode, ErrorInfo = PathClass(NormPath(ffsInf.InfFileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() > > > if ErrorCode != 0: > > > @@ -4434,71 +4344,71 @@ class FdfParser: > > > self.Profile.InfDict['ArchTBD'].append(ffsInf.InfFileName) > > > > > > > > > - self.__GetOptRomOverrides (ffsInf) > > > + self._GetOptRomOverrides (ffsInf) > > > > > > Obj.FfsList.append(ffsInf) > > > return True > > > > > > - ## __GetOptRomOverrides() method > > > + ## _GetOptRomOverrides() method > > > # > > > # Get overrides for OptROM INF & FILE > > > # > > > # @param self The object pointer > > > # @param FfsInfObj for whom overrides is got > > > # > > > - def __GetOptRomOverrides(self, Obj): > > > - if self.__IsToken('{'): > > > - Overrides = OptRomInfStatement.OverrideAttribs() > > > + def _GetOptRomOverrides(self, Obj): > > > + if self._IsToken('{'): > > > + Overrides = OverrideAttribs() > > > while True: > > > - if self.__IsKeyword( "PCI_VENDOR_ID"): > > > - if not self.__IsToken( "="): > > > + if self._IsKeyword("PCI_VENDOR_ID"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextHexNumber(): > > > + if not self._GetNextHexNumber(): > > > raise Warning("expected Hex vendor id", self.FileName, self.CurrentLineNumber) > > > - Overrides.PciVendorId = self.__Token > > > + Overrides.PciVendorId = self._Token > > > continue > > > > > > - if self.__IsKeyword( "PCI_CLASS_CODE"): > > > - if not self.__IsToken( "="): > > > + if self._IsKeyword("PCI_CLASS_CODE"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextHexNumber(): > > > + if not self._GetNextHexNumber(): > > > raise Warning("expected Hex class code", self.FileName, self.CurrentLineNumber) > > > - Overrides.PciClassCode = self.__Token > > > + Overrides.PciClassCode = self._Token > > > continue > > > > > > - if self.__IsKeyword( "PCI_DEVICE_ID"): > > > - if not self.__IsToken( "="): > > > + if self._IsKeyword("PCI_DEVICE_ID"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextHexNumber(): > > > + if not self._GetNextHexNumber(): > > > raise Warning("expected Hex device id", self.FileName, self.CurrentLineNumber) > > > > > > - Overrides.PciDeviceId = self.__Token > > > + Overrides.PciDeviceId = self._Token > > > continue > > > > > > - if self.__IsKeyword( "PCI_REVISION"): > > > - if not self.__IsToken( "="): > > > + if self._IsKeyword("PCI_REVISION"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > 
> > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextHexNumber(): > > > + if not self._GetNextHexNumber(): > > > raise Warning("expected Hex revision", self.FileName, self.CurrentLineNumber) > > > - Overrides.PciRevision = self.__Token > > > + Overrides.PciRevision = self._Token > > > continue > > > > > > - if self.__IsKeyword( "PCI_COMPRESS"): > > > - if not self.__IsToken( "="): > > > + if self._IsKeyword("PCI_COMPRESS"): > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected TRUE/FALSE for compress", self.FileName, self.CurrentLineNumber) > > > - Overrides.NeedCompress = self.__Token.upper() == 'TRUE' > > > + Overrides.NeedCompress = self._Token.upper() == 'TRUE' > > > continue > > > > > > - if self.__IsToken( "}"): > > > + if self._IsToken("}"): > > > break > > > else: > > > EdkLogger.error("FdfParser", FORMAT_INVALID, File=self.FileName, Line=self.CurrentLineNumber) > > > > > > Obj.OverrideAttribs = Overrides > > > > > > - ## __GetOptRomFileStatement() method > > > + ## _GetOptRomFileStatement() method > > > # > > > # Get FILE statements > > > # > > > @@ -4507,33 +4417,33 @@ class FdfParser: > > > # @retval True Successfully find FILE statement > > > # @retval False Not able to find FILE statement > > > # > > > - def __GetOptRomFileStatement(self, Obj): > > > - if not self.__IsKeyword( "FILE"): > > > + def _GetOptRomFileStatement(self, Obj): > > > + if not self._IsKeyword("FILE"): > > > return False > > > > > > - FfsFileObj = OptRomFileStatement.OptRomFileStatement() > > > + FfsFileObj = OptRomFileStatement() > > > > > > - if not self.__IsKeyword("EFI") and not self.__IsKeyword(BINARY_FILE_TYPE_BIN): > > > + if not self._IsKeyword("EFI") and not self._IsKeyword(BINARY_FILE_TYPE_BIN): > > > raise Warning("expected Binary type (EFI/BIN)", self.FileName, self.CurrentLineNumber) > > > - FfsFileObj.FileType = self.__Token > > > + FfsFileObj.FileType = self._Token > > > > > > - if not self.__GetNextToken(): > > > + if not self._GetNextToken(): > > > raise Warning("expected File path", self.FileName, self.CurrentLineNumber) > > > - FfsFileObj.FileName = self.__Token > > > - if FfsFileObj.FileName.replace('$(WORKSPACE)', '').find('$') == -1: > > > + FfsFileObj.FileName = self._Token > > > + if FfsFileObj.FileName.replace(TAB_WORKSPACE, '').find('$') == -1: > > > #check for file path > > > ErrorCode, ErrorInfo = PathClass(NormPath(FfsFileObj.FileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() > > > if ErrorCode != 0: > > > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > > > > > if FfsFileObj.FileType == 'EFI': > > > - self.__GetOptRomOverrides(FfsFileObj) > > > + self._GetOptRomOverrides(FfsFileObj) > > > > > > Obj.FfsList.append(FfsFileObj) > > > > > > return True > > > > > > - ## __GetCapInFd() method > > > + ## _GetCapInFd() method > > > # > > > # Get Cap list contained in FD > > > # > > > @@ -4541,7 +4451,7 @@ class FdfParser: > > > # @param FdName FD name > > > # @retval CapList List of Capsule in FD > > > # > > > - def __GetCapInFd (self, FdName): > > > + def _GetCapInFd (self, FdName): > > > CapList = [] > > > if FdName.upper() in self.Profile.FdDict: > > > FdObj = self.Profile.FdDict[FdName.upper()] > > > @@ -4554,7 +4464,7 @@ class FdfParser: > > > CapList.append(elementRegionData.upper()) > > > return CapList > > > > > > - ## 
__GetReferencedFdCapTuple() method > > > + ## _GetReferencedFdCapTuple() method > > > # > > > # Get FV and FD list referenced by a capsule image > > > # > > > @@ -4563,22 +4473,22 @@ class FdfParser: > > > # @param RefFdList referenced FD by section > > > # @param RefFvList referenced FV by section > > > # > > > - def __GetReferencedFdCapTuple(self, CapObj, RefFdList = [], RefFvList = []): > > > - for CapsuleDataObj in CapObj.CapsuleDataList : > > > + def _GetReferencedFdCapTuple(self, CapObj, RefFdList = [], RefFvList = []): > > > + for CapsuleDataObj in CapObj.CapsuleDataList: > > > if hasattr(CapsuleDataObj, 'FvName') and CapsuleDataObj.FvName is not None and > CapsuleDataObj.FvName.upper() > > not in RefFvList: > > > RefFvList.append (CapsuleDataObj.FvName.upper()) > > > elif hasattr(CapsuleDataObj, 'FdName') and CapsuleDataObj.FdName is not None and > > CapsuleDataObj.FdName.upper() not in RefFdList: > > > RefFdList.append (CapsuleDataObj.FdName.upper()) > > > elif CapsuleDataObj.Ffs is not None: > > > - if isinstance(CapsuleDataObj.Ffs, FfsFileStatement.FileStatement): > > > + if isinstance(CapsuleDataObj.Ffs, FileStatement): > > > if CapsuleDataObj.Ffs.FvName is not None and CapsuleDataObj.Ffs.FvName.upper() not in RefFvList: > > > RefFvList.append(CapsuleDataObj.Ffs.FvName.upper()) > > > elif CapsuleDataObj.Ffs.FdName is not None and CapsuleDataObj.Ffs.FdName.upper() not in RefFdList: > > > RefFdList.append(CapsuleDataObj.Ffs.FdName.upper()) > > > else: > > > - self.__GetReferencedFdFvTupleFromSection(CapsuleDataObj.Ffs, RefFdList, RefFvList) > > > + self._GetReferencedFdFvTupleFromSection(CapsuleDataObj.Ffs, RefFdList, RefFvList) > > > > > > - ## __GetFvInFd() method > > > + ## _GetFvInFd() method > > > # > > > # Get FV list contained in FD > > > # > > > @@ -4586,7 +4496,7 @@ class FdfParser: > > > # @param FdName FD name > > > # @retval FvList list of FV in FD > > > # > > > - def __GetFvInFd (self, FdName): > > > + def _GetFvInFd (self, FdName): > > > FvList = [] > > > if FdName.upper() in self.Profile.FdDict: > > > FdObj = self.Profile.FdDict[FdName.upper()] > > > @@ -4599,7 +4509,7 @@ class FdfParser: > > > FvList.append(elementRegionData.upper()) > > > return FvList > > > > > > - ## __GetReferencedFdFvTuple() method > > > + ## _GetReferencedFdFvTuple() method > > > # > > > # Get FD and FV list referenced by a FFS file > > > # > > > @@ -4608,17 +4518,17 @@ class FdfParser: > > > # @param RefFdList referenced FD by section > > > # @param RefFvList referenced FV by section > > > # > > > - def __GetReferencedFdFvTuple(self, FvObj, RefFdList = [], RefFvList = []): > > > + def _GetReferencedFdFvTuple(self, FvObj, RefFdList = [], RefFvList = []): > > > for FfsObj in FvObj.FfsList: > > > - if isinstance(FfsObj, FfsFileStatement.FileStatement): > > > + if isinstance(FfsObj, FileStatement): > > > if FfsObj.FvName is not None and FfsObj.FvName.upper() not in RefFvList: > > > RefFvList.append(FfsObj.FvName.upper()) > > > elif FfsObj.FdName is not None and FfsObj.FdName.upper() not in RefFdList: > > > RefFdList.append(FfsObj.FdName.upper()) > > > else: > > > - self.__GetReferencedFdFvTupleFromSection(FfsObj, RefFdList, RefFvList) > > > + self._GetReferencedFdFvTupleFromSection(FfsObj, RefFdList, RefFvList) > > > > > > - ## __GetReferencedFdFvTupleFromSection() method > > > + ## _GetReferencedFdFvTupleFromSection() method > > > # > > > # Get FD and FV list referenced by a FFS section > > > # > > > @@ -4627,19 +4537,19 @@ class FdfParser: > > > # @param FdList referenced FD by section > > > 
# @param FvList referenced FV by section > > > # > > > - def __GetReferencedFdFvTupleFromSection(self, FfsFile, FdList = [], FvList = []): > > > + def _GetReferencedFdFvTupleFromSection(self, FfsFile, FdList = [], FvList = []): > > > SectionStack = [] > > > SectionStack.extend(FfsFile.SectionList) > > > while SectionStack != []: > > > SectionObj = SectionStack.pop() > > > - if isinstance(SectionObj, FvImageSection.FvImageSection): > > > + if isinstance(SectionObj, FvImageSection): > > > if SectionObj.FvName is not None and SectionObj.FvName.upper() not in FvList: > > > FvList.append(SectionObj.FvName.upper()) > > > if SectionObj.Fv is not None and SectionObj.Fv.UiFvName is not None and SectionObj.Fv.UiFvName.upper() not > in > > FvList: > > > FvList.append(SectionObj.Fv.UiFvName.upper()) > > > - self.__GetReferencedFdFvTuple(SectionObj.Fv, FdList, FvList) > > > + self._GetReferencedFdFvTuple(SectionObj.Fv, FdList, FvList) > > > > > > - if isinstance(SectionObj, CompressSection.CompressSection) or isinstance(SectionObj, GuidSection.GuidSection): > > > + if isinstance(SectionObj, CompressSection) or isinstance(SectionObj, GuidSection): > > > SectionStack.extend(SectionObj.SectionList) > > > > > > ## CycleReferenceCheck() method > > > @@ -4662,7 +4572,7 @@ class FdfParser: > > > FdAnalyzedList = [] > > > > > > Index = 0 > > > - while RefFvStack != [] and Index < MaxLength: > > > + while RefFvStack and Index < MaxLength: > > > Index = Index + 1 > > > FvNameFromStack = RefFvStack.pop() > > > if FvNameFromStack.upper() in self.Profile.FvDict: > > > @@ -4672,14 +4582,14 @@ class FdfParser: > > > > > > RefFdList = [] > > > RefFvList = [] > > > - self.__GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) > > > + self._GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) > > > > > > for RefFdName in RefFdList: > > > if RefFdName in FdAnalyzedList: > > > continue > > > > > > LogStr += "FV %s contains FD %s\n" % (FvNameFromStack, RefFdName) > > > - FvInFdList = self.__GetFvInFd(RefFdName) > > > + FvInFdList = self._GetFvInFd(RefFdName) > > > if FvInFdList != []: > > > for FvNameInFd in FvInFdList: > > > LogStr += "FD %s contains FV %s\n" % (RefFdName, FvNameInFd) > > > @@ -4725,7 +4635,7 @@ class FdfParser: > > > > > > RefFvList = [] > > > RefFdList = [] > > > - self.__GetReferencedFdCapTuple(CapObj, RefFdList, RefFvList) > > > + self._GetReferencedFdCapTuple(CapObj, RefFdList, RefFvList) > > > > > > FvListLength = 0 > > > FdListLength = 0 > > > @@ -4735,7 +4645,7 @@ class FdfParser: > > > continue > > > > > > LogStr += "Capsule %s contains FD %s\n" % (CapNameFromStack, RefFdName) > > > - CapInFdList = self.__GetCapInFd(RefFdName) > > > + CapInFdList = self._GetCapInFd(RefFdName) > > > if CapInFdList != []: > > > for CapNameInFd in CapInFdList: > > > LogStr += "FD %s contains Capsule %s\n" % (RefFdName, CapNameInFd) > > > @@ -4746,7 +4656,7 @@ class FdfParser: > > > EdkLogger.info(LogStr) > > > return True > > > > > > - FvInFdList = self.__GetFvInFd(RefFdName) > > > + FvInFdList = self._GetFvInFd(RefFdName) > > > if FvInFdList != []: > > > for FvNameInFd in FvInFdList: > > > LogStr += "FD %s contains FV %s\n" % (RefFdName, FvNameInFd) > > > @@ -4767,7 +4677,7 @@ class FdfParser: > > > FvObj = self.Profile.FvDict[RefFvName.upper()] > > > else: > > > continue > > > - self.__GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) > > > + self._GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) > > > FvAnalyzedList.append(RefFvName) > > > > > > return False > > > diff --git 
a/BaseTools/Source/Python/GenFds/Ffs.py b/BaseTools/Source/Python/GenFds/Ffs.py > > > index 598b2d023124..4c1b43572964 100644 > > > --- a/BaseTools/Source/Python/GenFds/Ffs.py > > > +++ b/BaseTools/Source/Python/GenFds/Ffs.py > > > @@ -15,53 +15,41 @@ > > > ## > > > # Import Modules > > > # > > > -from CommonDataClass.FdfClass import FDClassObject > > > from Common.DataType import * > > > > > > -## generate FFS > > > -# > > > -# > > > -class Ffs(FDClassObject): > > > - # mapping between FILE type in FDF and file type for GenFfs > > > - FdfFvFileTypeToFileType = { > > > - SUP_MODULE_SEC : 'EFI_FV_FILETYPE_SECURITY_CORE', > > > - SUP_MODULE_PEI_CORE : 'EFI_FV_FILETYPE_PEI_CORE', > > > - SUP_MODULE_PEIM : 'EFI_FV_FILETYPE_PEIM', > > > - SUP_MODULE_DXE_CORE : 'EFI_FV_FILETYPE_DXE_CORE', > > > - 'FREEFORM' : 'EFI_FV_FILETYPE_FREEFORM', > > > - 'DRIVER' : 'EFI_FV_FILETYPE_DRIVER', > > > - 'APPLICATION' : 'EFI_FV_FILETYPE_APPLICATION', > > > - 'FV_IMAGE' : 'EFI_FV_FILETYPE_FIRMWARE_VOLUME_IMAGE', > > > - 'RAW' : 'EFI_FV_FILETYPE_RAW', > > > - 'PEI_DXE_COMBO' : 'EFI_FV_FILETYPE_COMBINED_PEIM_DRIVER', > > > - 'SMM' : 'EFI_FV_FILETYPE_SMM', > > > - SUP_MODULE_SMM_CORE : 'EFI_FV_FILETYPE_SMM_CORE', > > > - SUP_MODULE_MM_STANDALONE : 'EFI_FV_FILETYPE_MM_STANDALONE', > > > - SUP_MODULE_MM_CORE_STANDALONE : 'EFI_FV_FILETYPE_MM_CORE_STANDALONE' > > > - } > > > +# mapping between FILE type in FDF and file type for GenFfs > > > +FdfFvFileTypeToFileType = { > > > + SUP_MODULE_SEC : 'EFI_FV_FILETYPE_SECURITY_CORE', > > > + SUP_MODULE_PEI_CORE : 'EFI_FV_FILETYPE_PEI_CORE', > > > + SUP_MODULE_PEIM : 'EFI_FV_FILETYPE_PEIM', > > > + SUP_MODULE_DXE_CORE : 'EFI_FV_FILETYPE_DXE_CORE', > > > + 'FREEFORM' : 'EFI_FV_FILETYPE_FREEFORM', > > > + 'DRIVER' : 'EFI_FV_FILETYPE_DRIVER', > > > + 'APPLICATION' : 'EFI_FV_FILETYPE_APPLICATION', > > > + 'FV_IMAGE' : 'EFI_FV_FILETYPE_FIRMWARE_VOLUME_IMAGE', > > > + 'RAW' : 'EFI_FV_FILETYPE_RAW', > > > + 'PEI_DXE_COMBO' : 'EFI_FV_FILETYPE_COMBINED_PEIM_DRIVER', > > > + 'SMM' : 'EFI_FV_FILETYPE_SMM', > > > + SUP_MODULE_SMM_CORE : 'EFI_FV_FILETYPE_SMM_CORE', > > > + SUP_MODULE_MM_STANDALONE : 'EFI_FV_FILETYPE_MM_STANDALONE', > > > + SUP_MODULE_MM_CORE_STANDALONE : 'EFI_FV_FILETYPE_MM_CORE_STANDALONE' > > > +} > > > > > > - # mapping between section type in FDF and file suffix > > > - SectionSuffix = { > > > - BINARY_FILE_TYPE_PE32 : '.pe32', > > > - BINARY_FILE_TYPE_PIC : '.pic', > > > - BINARY_FILE_TYPE_TE : '.te', > > > - BINARY_FILE_TYPE_DXE_DEPEX : '.dpx', > > > - 'VERSION' : '.ver', > > > - BINARY_FILE_TYPE_UI : '.ui', > > > - 'COMPAT16' : '.com16', > > > - 'RAW' : '.raw', > > > - 'FREEFORM_SUBTYPE_GUID': '.guid', > > > - 'SUBTYPE_GUID' : '.guid', > > > - 'FV_IMAGE' : 'fv.sec', > > > - 'COMPRESS' : '.com', > > > - 'GUIDED' : '.guided', > > > - BINARY_FILE_TYPE_PEI_DEPEX : '.dpx', > > > - BINARY_FILE_TYPE_SMM_DEPEX : '.dpx' > > > - } > > > - > > > - ## The constructor > > > - # > > > - # @param self The object pointer > > > - # > > > - def __init__(self): > > > - FfsClassObject.__init__(self) > > > +# mapping between section type in FDF and file suffix > > > +SectionSuffix = { > > > + BINARY_FILE_TYPE_PE32 : '.pe32', > > > + BINARY_FILE_TYPE_PIC : '.pic', > > > + BINARY_FILE_TYPE_TE : '.te', > > > + BINARY_FILE_TYPE_DXE_DEPEX : '.dpx', > > > + 'VERSION' : '.ver', > > > + BINARY_FILE_TYPE_UI : '.ui', > > > + 'COMPAT16' : '.com16', > > > + 'RAW' : '.raw', > > > + 'FREEFORM_SUBTYPE_GUID': '.guid', > > > + 'SUBTYPE_GUID' : '.guid', > > > + 'FV_IMAGE' : 'fv.sec', > > > + 'COMPRESS' : 
'.com', > > > + 'GUIDED' : '.guided', > > > + BINARY_FILE_TYPE_PEI_DEPEX : '.dpx', > > > + BINARY_FILE_TYPE_SMM_DEPEX : '.dpx' > > > +} > > > diff --git a/BaseTools/Source/Python/GenFds/FfsFileStatement.py b/BaseTools/Source/Python/GenFds/FfsFileStatement.py > > > index 4c35aac76a00..7479efff04c7 100644 > > > --- a/BaseTools/Source/Python/GenFds/FfsFileStatement.py > > > +++ b/BaseTools/Source/Python/GenFds/FfsFileStatement.py > > > @@ -16,26 +16,22 @@ > > > # Import Modules > > > # > > > from __future__ import absolute_import > > > -from . import Ffs > > > -from . import Rule > > > -import Common.LongFilePathOs as os > > > from io import BytesIO > > > -import subprocess > > > - > > > -from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > +from struct import pack > > > from CommonDataClass.FdfClass import FileStatementClassObject > > > from Common import EdkLogger > > > -from Common.BuildToolError import * > > > -from Common.Misc import GuidStructureByteArrayToGuidString > > > +from Common.BuildToolError import GENFDS_ERROR > > > +from Common.Misc import GuidStructureByteArrayToGuidString, SaveFileOnChange > > > +import Common.LongFilePathOs as os > > > from .GuidSection import GuidSection > > > from .FvImageSection import FvImageSection > > > -from Common.Misc import SaveFileOnChange > > > -from struct import * > > > +from .Ffs import FdfFvFileTypeToFileType > > > +from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > > > ## generate FFS from FILE > > > # > > > # > > > -class FileStatement (FileStatementClassObject) : > > > +class FileStatement (FileStatementClassObject): > > > ## The constructor > > > # > > > # @param self The object pointer > > > @@ -60,7 +56,7 @@ class FileStatement (FileStatementClassObject) : > > > # > > > def GenFfs(self, Dict = {}, FvChildAddr=[], FvParentAddr=None, IsMakefile=False, FvName=None): > > > > > > - if self.NameGuid is not None and self.NameGuid.startswith('PCD('): > > > + if self.NameGuid and self.NameGuid.startswith('PCD('): > > > PcdValue = GenFdsGlobalVariable.GetPcdValue(self.NameGuid) > > > if len(PcdValue) == 0: > > > EdkLogger.error("GenFds", GENFDS_ERROR, '%s NOT defined.' \ > > > @@ -82,7 +78,7 @@ class FileStatement (FileStatementClassObject) : > > > > > > Dict.update(self.DefineVarDict) > > > SectionAlignments = None > > > - if self.FvName is not None : > > > + if self.FvName: > > > Buffer = BytesIO('') > > > if self.FvName.upper() not in GenFdsGlobalVariable.FdfParser.Profile.FvDict: > > > EdkLogger.error("GenFds", GENFDS_ERROR, "FV (%s) is NOT described in FDF file!" % (self.FvName)) > > > @@ -90,14 +86,14 @@ class FileStatement (FileStatementClassObject) : > > > FileName = Fv.AddToBuffer(Buffer) > > > SectionFiles = [FileName] > > > > > > - elif self.FdName is not None: > > > + elif self.FdName: > > > if self.FdName.upper() not in GenFdsGlobalVariable.FdfParser.Profile.FdDict: > > > EdkLogger.error("GenFds", GENFDS_ERROR, "FD (%s) is NOT described in FDF file!" 
% (self.FdName)) > > > Fd = GenFdsGlobalVariable.FdfParser.Profile.FdDict.get(self.FdName.upper()) > > > FileName = Fd.GenFd() > > > SectionFiles = [FileName] > > > > > > - elif self.FileName is not None: > > > + elif self.FileName: > > > if hasattr(self, 'FvFileType') and self.FvFileType == 'RAW': > > > if isinstance(self.FileName, list) and isinstance(self.SubAlignment, list) and len(self.FileName) == > > len(self.SubAlignment): > > > FileContent = '' > > > @@ -111,7 +107,7 @@ class FileStatement (FileStatementClassObject) : > > > Content = f.read() > > > f.close() > > > AlignValue = 1 > > > - if self.SubAlignment[Index] is not None: > > > + if self.SubAlignment[Index]: > > > AlignValue = GenFdsGlobalVariable.GetAlignment(self.SubAlignment[Index]) > > > if AlignValue > MaxAlignValue: > > > MaxAlignIndex = Index > > > @@ -143,7 +139,7 @@ class FileStatement (FileStatementClassObject) : > > > SectionFiles = [] > > > Index = 0 > > > SectionAlignments = [] > > > - for section in self.SectionList : > > > + for section in self.SectionList: > > > Index = Index + 1 > > > SecIndex = '%d' %Index > > > # process the inside FvImage from FvSection or GuidSection > > > @@ -152,7 +148,7 @@ class FileStatement (FileStatementClassObject) : > > > section.FvAddr = FvChildAddr.pop(0) > > > elif isinstance(section, GuidSection): > > > section.FvAddr = FvChildAddr > > > - if FvParentAddr is not None and isinstance(section, GuidSection): > > > + if FvParentAddr and isinstance(section, GuidSection): > > > section.FvParentAddr = FvParentAddr > > > > > > if self.KeepReloc == False: > > > @@ -168,7 +164,7 @@ class FileStatement (FileStatementClassObject) : > > > # > > > FfsFileOutput = os.path.join(OutputDir, self.NameGuid + '.ffs') > > > GenFdsGlobalVariable.GenerateFfs(FfsFileOutput, SectionFiles, > > > - Ffs.Ffs.FdfFvFileTypeToFileType.get(self.FvFileType), > > > + FdfFvFileTypeToFileType.get(self.FvFileType), > > > self.NameGuid, > > > Fixed=self.Fixed, > > > CheckSum=self.CheckSum, > > > @@ -177,6 +173,3 @@ class FileStatement (FileStatementClassObject) : > > > ) > > > > > > return FfsFileOutput > > > - > > > - > > > - > > > diff --git a/BaseTools/Source/Python/GenFds/FfsInfStatement.py b/BaseTools/Source/Python/GenFds/FfsInfStatement.py > > > index 6149bc81b4ef..c1073c96e9aa 100644 > > > --- a/BaseTools/Source/Python/GenFds/FfsInfStatement.py > > > +++ b/BaseTools/Source/Python/GenFds/FfsInfStatement.py > > > @@ -22,7 +22,7 @@ import Common.LongFilePathOs as os > > > from io import BytesIO > > > from struct import * > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > -from . import Ffs > > > +from .Ffs import SectionSuffix,FdfFvFileTypeToFileType > > > import subprocess > > > import sys > > > from . 
import Section > > > @@ -760,7 +760,7 @@ class FfsInfStatement(FfsInfStatementClassObject): > > > > > > SecNum = '%d' %Index > > > GenSecOutputFile= self.__ExtendMacro__(Rule.NameGuid) + \ > > > - Ffs.Ffs.SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum > > > + SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum > > > Index = Index + 1 > > > OutputFile = os.path.join(self.OutputPath, GenSecOutputFile) > > > File = GenFdsGlobalVariable.MacroExtend(File, Dict, self.CurrentArch) > > > @@ -803,7 +803,7 @@ class FfsInfStatement(FfsInfStatementClassObject): > > > else: > > > SecNum = '%d' %Index > > > GenSecOutputFile= self.__ExtendMacro__(Rule.NameGuid) + \ > > > - Ffs.Ffs.SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum > > > + SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum > > > OutputFile = os.path.join(self.OutputPath, GenSecOutputFile) > > > GenSecInputFile = GenFdsGlobalVariable.MacroExtend(GenSecInputFile, Dict, self.CurrentArch) > > > > > > @@ -882,7 +882,7 @@ class FfsInfStatement(FfsInfStatementClassObject): > > > self.ModuleGuid = RegistryGuidStr > > > > > > GenFdsGlobalVariable.GenerateFfs(FfsOutput, InputSection, > > > - Ffs.Ffs.FdfFvFileTypeToFileType[Rule.FvFileType], > > > + FdfFvFileTypeToFileType[Rule.FvFileType], > > > self.ModuleGuid, Fixed=Rule.Fixed, > > > CheckSum=Rule.CheckSum, Align=Rule.Alignment, > > > SectionAlign=SectionAlignments, > > > @@ -1055,7 +1055,7 @@ class FfsInfStatement(FfsInfStatementClassObject): > > > > > > FfsOutput = os.path.join( self.OutputPath, self.ModuleGuid + '.ffs') > > > GenFdsGlobalVariable.GenerateFfs(FfsOutput, InputFile, > > > - Ffs.Ffs.FdfFvFileTypeToFileType[Rule.FvFileType], > > > + FdfFvFileTypeToFileType[Rule.FvFileType], > > > self.ModuleGuid, Fixed=Rule.Fixed, > > > CheckSum=Rule.CheckSum, Align=Rule.Alignment, > > > SectionAlign=Alignments, > > > diff --git a/BaseTools/Source/Python/GenFds/Fv.py b/BaseTools/Source/Python/GenFds/Fv.py > > > index c66fc383278d..30525bd11fcc 100644 > > > --- a/BaseTools/Source/Python/GenFds/Fv.py > > > +++ b/BaseTools/Source/Python/GenFds/Fv.py > > > @@ -1,4 +1,3 @@ > > > -from __future__ import absolute_import > > > ## @file > > > # process FV generation > > > # > > > @@ -16,16 +15,13 @@ from __future__ import absolute_import > > > ## > > > # Import Modules > > > # > > > +from __future__ import absolute_import > > > import Common.LongFilePathOs as os > > > import subprocess > > > from io import BytesIO > > > from struct import * > > > - > > > -from . import Ffs > > > -from . import AprioriSection > > > from . 
import FfsFileStatement > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > -from CommonDataClass.FdfClass import FvClassObject > > > from Common.Misc import SaveFileOnChange, PackGUID > > > from Common.LongFilePathSupport import CopyLongFilePath > > > from Common.LongFilePathSupport import OpenLongFilePath as open > > > @@ -36,13 +32,25 @@ FV_UI_EXT_ENTY_GUID = 'A67DF1FA-8DE8-4E98-AF09-4BDF2EFFBC7C' > > > ## generate FV > > > # > > > # > > > -class FV (FvClassObject): > > > +class FV (object): > > > ## The constructor > > > # > > > # @param self The object pointer > > > # > > > def __init__(self): > > > - FvClassObject.__init__(self) > > > + self.UiFvName = None > > > + self.CreateFileName = None > > > + self.BlockSizeList = [] > > > + self.DefineVarDict = {} > > > + self.SetVarDict = {} > > > + self.FvAlignment = None > > > + self.FvAttributeDict = {} > > > + self.FvNameGuid = None > > > + self.FvNameString = None > > > + self.AprioriSectionList = [] > > > + self.FfsList = [] > > > + self.BsBaseAddress = None > > > + self.RtBaseAddress = None > > > self.FvInfFile = None > > > self.FvAddressFile = None > > > self.BaseAddress = None > > > @@ -68,7 +76,7 @@ class FV (FvClassObject): > > > # @param MacroDict macro value pair > > > # @retval string Generated FV file path > > > # > > > - def AddToBuffer (self, Buffer, BaseAddress=None, BlockSize= None, BlockNum=None, ErasePloarity='1', VtfDict=None, > MacroDict > > = {}, Flag=False) : > > > + def AddToBuffer (self, Buffer, BaseAddress=None, BlockSize= None, BlockNum=None, ErasePloarity='1', VtfDict=None, > > MacroDict = {}, Flag=False): > > > > > > if BaseAddress is None and self.UiFvName.upper() + 'fv' in GenFdsGlobalVariable.ImageBinDict: > > > return GenFdsGlobalVariable.ImageBinDict[self.UiFvName.upper() + 'fv'] > > > @@ -96,7 +104,7 @@ class FV (FvClassObject): > > > if self.FvBaseAddress is not None: > > > BaseAddress = self.FvBaseAddress > > > if not Flag: > > > - self.__InitializeInf__(BaseAddress, BlockSize, BlockNum, ErasePloarity, VtfDict) > > > + self._InitializeInf(BaseAddress, BlockSize, BlockNum, ErasePloarity, VtfDict) > > > # > > > # First Process the Apriori section > > > # > > > @@ -114,7 +122,7 @@ class FV (FvClassObject): > > > TAB_LINE_BREAK) > > > > > > # Process Modules in FfsList > > > - for FfsFile in self.FfsList : > > > + for FfsFile in self.FfsList: > > > if Flag: > > > if isinstance(FfsFile, FfsFileStatement.FileStatement): > > > continue > > > @@ -177,7 +185,7 @@ class FV (FvClassObject): > > > > > > if FvChildAddr != []: > > > # Update Ffs again > > > - for FfsFile in self.FfsList : > > > + for FfsFile in self.FfsList: > > > FileName = FfsFile.GenFfs(MacroDict, FvChildAddr, BaseAddress, IsMakefile=Flag, > > FvName=self.UiFvName) > > > > > > if GenFdsGlobalVariable.LargeFileInFvFlags[-1]: > > > @@ -252,7 +260,7 @@ class FV (FvClassObject): > > > return True > > > return False > > > > > > - ## __InitializeInf__() > > > + ## _InitializeInf() > > > # > > > # Initilize the inf file to create FV > > > # > > > @@ -263,7 +271,7 @@ class FV (FvClassObject): > > > # @param ErasePolarity Flash erase polarity > > > # @param VtfDict VTF objects > > > # > > > - def __InitializeInf__ (self, BaseAddress = None, BlockSize= None, BlockNum = None, ErasePloarity='1', VtfDict=None) : > > > + def _InitializeInf (self, BaseAddress = None, BlockSize= None, BlockNum = None, ErasePloarity='1', VtfDict=None): > > > # > > > # Create FV inf file > > > # > > > @@ -275,7 +283,7 @@ class FV (FvClassObject): > > > # Add [Options] 
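[Illustrative aside, hypothetical names, not from this tree: the class FV hunk above folds a parent class whose only job was attribute setup into its single child, so one class now owns the state it uses. The shape of that refactor:

  # Before: a base class that exists only to initialize attributes.
  class WidgetClassObject:
      def __init__(self):
          self.Name = None
          self.Children = []

  class Widget(WidgetClassObject):
      def __init__(self):
          WidgetClassObject.__init__(self)
          self.Extra = None

  # After (redefined here for the sketch): the child declares the same
  # attributes directly; behavior is unchanged.
  class Widget:
      def __init__(self):
          self.Name = None
          self.Children = []
          self.Extra = None
]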
> > > # > > > self.FvInfFile.writelines("[options]" + TAB_LINE_BREAK) > > > - if BaseAddress is not None : > > > + if BaseAddress is not None: > > > self.FvInfFile.writelines("EFI_BASE_ADDRESS = " + \ > > > BaseAddress + \ > > > TAB_LINE_BREAK) > > > @@ -294,7 +302,7 @@ class FV (FvClassObject): > > > #set default block size is 1 > > > self.FvInfFile.writelines("EFI_BLOCK_SIZE = 0x1" + TAB_LINE_BREAK) > > > > > > - for BlockSize in self.BlockSizeList : > > > + for BlockSize in self.BlockSizeList: > > > if BlockSize[0] is not None: > > > self.FvInfFile.writelines("EFI_BLOCK_SIZE = " + \ > > > '0x%X' %BlockSize[0] + \ > > > @@ -320,9 +328,9 @@ class FV (FvClassObject): > > > ' %s' %ErasePloarity + \ > > > TAB_LINE_BREAK) > > > if not (self.FvAttributeDict is None): > > > - for FvAttribute in self.FvAttributeDict.keys() : > > > + for FvAttribute in self.FvAttributeDict.keys(): > > > if FvAttribute == "FvUsedSizeEnable": > > > - if self.FvAttributeDict[FvAttribute].upper() in ('TRUE', '1') : > > > + if self.FvAttributeDict[FvAttribute].upper() in ('TRUE', '1'): > > > self.UsedSizeEnable = True > > > continue > > > self.FvInfFile.writelines("EFI_" + \ > > > @@ -365,8 +373,8 @@ class FV (FvClassObject): > > > Guid = FV_UI_EXT_ENTY_GUID.split('-') > > > # > > > # Layout: > > > - # EFI_FIRMWARE_VOLUME_EXT_ENTRY : size 4 > > > - # GUID : size 16 > > > + # EFI_FIRMWARE_VOLUME_EXT_ENTRY: size 4 > > > + # GUID: size 16 > > > # FV UI name > > > # > > > Buffer += (pack('HH', (FvUiLen + 16 + 4), 0x0002) > > > @@ -422,7 +430,6 @@ class FV (FvClassObject): > > > FvExtHeaderFileName + \ > > > TAB_LINE_BREAK) > > > > > > - > > > # > > > # Add [Files] > > > # > > > diff --git a/BaseTools/Source/Python/GenFds/FvImageSection.py b/BaseTools/Source/Python/GenFds/FvImageSection.py > > > index 04556fc87099..d6e1f3315b7e 100644 > > > --- a/BaseTools/Source/Python/GenFds/FvImageSection.py > > > +++ b/BaseTools/Source/Python/GenFds/FvImageSection.py > > > @@ -18,7 +18,7 @@ > > > from __future__ import absolute_import > > > from . 
import Section > > > from io import BytesIO > > > -from .Ffs import Ffs > > > +from .Ffs import SectionSuffix > > > import subprocess > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > import Common.LongFilePathOs as os > > > @@ -76,7 +76,7 @@ class FvImageSection(FvImageSectionClassObject): > > > if FvAlignmentValue > MaxFvAlignment: > > > MaxFvAlignment = FvAlignmentValue > > > > > > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > > Ffs.SectionSuffix.get("FV_IMAGE")) > > > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > > SectionSuffix.get("FV_IMAGE")) > > > GenFdsGlobalVariable.GenerateSection(OutputFile, [FvFileName], > 'EFI_SECTION_FIRMWARE_VOLUME_IMAGE', > > IsMakefile=IsMakefile) > > > OutputFileList.append(OutputFile) > > > > > > @@ -146,7 +146,7 @@ class FvImageSection(FvImageSectionClassObject): > > > # > > > # Prepare the parameter of GenSection > > > # > > > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + > > Ffs.SectionSuffix.get("FV_IMAGE")) > > > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + > SectionSuffix.get("FV_IMAGE")) > > > GenFdsGlobalVariable.GenerateSection(OutputFile, [FvFileName], 'EFI_SECTION_FIRMWARE_VOLUME_IMAGE', > > IsMakefile=IsMakefile) > > > OutputFileList.append(OutputFile) > > > > > > diff --git a/BaseTools/Source/Python/GenFds/GenFds.py b/BaseTools/Source/Python/GenFds/GenFds.py > > > index 9dec9c5eb57d..0c8091b79841 100644 > > > --- a/BaseTools/Source/Python/GenFds/GenFds.py > > > +++ b/BaseTools/Source/Python/GenFds/GenFds.py > > > @@ -17,34 +17,30 @@ > > > # > > > from __future__ import print_function > > > from __future__ import absolute_import > > > +from re import compile > > > from optparse import OptionParser > > > -import sys > > > -import Common.LongFilePathOs as os > > > -import linecache > > > -from . import FdfParser > > > -import Common.BuildToolError as BuildToolError > > > -from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > -from Workspace.WorkspaceDatabase import WorkspaceDatabase > > > -from Workspace.BuildClassObject import PcdClassObject > > > -from . import RuleComplexFile > > > -from .EfiSection import EfiSection > > > +from sys import exit > > > +from glob import glob > > > +from struct import unpack > > > +from linecache import getlines > > > from io import BytesIO > > > -import Common.TargetTxtClassObject as TargetTxtClassObject > > > -import Common.ToolDefClassObject as ToolDefClassObject > > > + > > > +import Common.LongFilePathOs as os > > > +from Common.TargetTxtClassObject import TargetTxtClassObject > > > from Common.DataType import * > > > import Common.GlobalData as GlobalData > > > from Common import EdkLogger > > > -from Common.StringUtils import * > > > -from Common.Misc import DirCache, PathClass > > > -from Common.Misc import SaveFileOnChange > > > -from Common.Misc import ClearDuplicatedInf > > > -from Common.Misc import GuidStructureStringToGuidString > > > +from Common.StringUtils import NormPath > > > +from Common.Misc import DirCache, PathClass, GuidStructureStringToGuidString > > > +from Common.Misc import SaveFileOnChange, ClearDuplicatedInf > > > from Common.BuildVersion import gBUILD_VERSION > > > from Common.MultipleWorkspace import MultipleWorkspace as mws > > > -from . 
import FfsFileStatement > > > -import glob > > > -from struct import unpack > > > -from Common.GlobalData import gGuidPattern > > > +from Common.BuildToolError import FatalError, GENFDS_ERROR, CODE_ERROR, FORMAT_INVALID, RESOURCE_NOT_AVAILABLE, > > FILE_NOT_FOUND, OPTION_MISSING, FORMAT_NOT_SUPPORTED,OPTION_VALUE_INVALID > > > +from Workspace.WorkspaceDatabase import WorkspaceDatabase > > > + > > > +from .FdfParser import FdfParser, Warning > > > +from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > +from .FfsFileStatement import FileStatement > > > > > > ## Version and Copyright > > > versionNumber = "1.0" + ' ' + gBUILD_VERSION > > > @@ -71,22 +67,22 @@ def main(): > > > > > > EdkLogger.Initialize() > > > try: > > > - if Options.verbose is not None: > > > + if Options.verbose: > > > EdkLogger.SetLevel(EdkLogger.VERBOSE) > > > GenFdsGlobalVariable.VerboseMode = True > > > > > > - if Options.FixedAddress is not None: > > > + if Options.FixedAddress: > > > GenFdsGlobalVariable.FixedLoadAddress = True > > > > > > - if Options.quiet is not None: > > > + if Options.quiet: > > > EdkLogger.SetLevel(EdkLogger.QUIET) > > > - if Options.debug is not None: > > > + if Options.debug: > > > EdkLogger.SetLevel(Options.debug + 1) > > > GenFdsGlobalVariable.DebugLevel = Options.debug > > > else: > > > EdkLogger.SetLevel(EdkLogger.INFO) > > > > > > - if (Options.Workspace is None): > > > + if not Options.Workspace: > > > EdkLogger.error("GenFds", OPTION_MISSING, "WORKSPACE not defined", > > > ExtraData="Please use '-w' switch to pass it or set the WORKSPACE environment variable.") > > > elif not os.path.exists(Options.Workspace): > > > @@ -97,7 +93,7 @@ def main(): > > > GenFdsGlobalVariable.WorkSpaceDir = Workspace > > > if 'EDK_SOURCE' in os.environ: > > > GenFdsGlobalVariable.EdkSourceDir = os.path.normcase(os.environ['EDK_SOURCE']) > > > - if (Options.debug): > > > + if Options.debug: > > > GenFdsGlobalVariable.VerboseLogger("Using Workspace:" + Workspace) > > > if Options.GenfdsMultiThread: > > > GenFdsGlobalVariable.EnableGenfdsMultiThread = True > > > @@ -107,7 +103,7 @@ def main(): > > > PackagesPath = os.getenv("PACKAGES_PATH") > > > mws.setWs(GenFdsGlobalVariable.WorkSpaceDir, PackagesPath) > > > > > > - if (Options.filename): > > > + if Options.filename: > > > FdfFilename = Options.filename > > > FdfFilename = GenFdsGlobalVariable.ReplaceWorkspaceMacro(FdfFilename) > > > > > > @@ -123,13 +119,13 @@ def main(): > > > else: > > > EdkLogger.error("GenFds", OPTION_MISSING, "Missing FDF filename") > > > > > > - if (Options.BuildTarget): > > > + if Options.BuildTarget: > > > GenFdsGlobalVariable.TargetName = Options.BuildTarget > > > > > > - if (Options.ToolChain): > > > + if Options.ToolChain: > > > GenFdsGlobalVariable.ToolChainTag = Options.ToolChain > > > > > > - if (Options.activePlatform): > > > + if Options.activePlatform: > > > ActivePlatform = Options.activePlatform > > > ActivePlatform = GenFdsGlobalVariable.ReplaceWorkspaceMacro(ActivePlatform) > > > > > > @@ -139,15 +135,15 @@ def main(): > > > if not os.path.isabs (ActivePlatform): > > > ActivePlatform = mws.join(GenFdsGlobalVariable.WorkSpaceDir, ActivePlatform) > > > > > > - if not os.path.exists(ActivePlatform) : > > > + if not os.path.exists(ActivePlatform): > > > EdkLogger.error("GenFds", FILE_NOT_FOUND, "ActivePlatform doesn't exist!") > > > else: > > > EdkLogger.error("GenFds", OPTION_MISSING, "Missing active platform") > > > > > > - GlobalData.BuildOptionPcd = Options.OptionPcd if Options.OptionPcd else {} > > > + 
GlobalData.BuildOptionPcd = Options.OptionPcd if Options.OptionPcd else {} > > > GenFdsGlobalVariable.ActivePlatform = PathClass(NormPath(ActivePlatform)) > > > > > > - if (Options.ConfDirectory): > > > + if Options.ConfDirectory: > > > # Get alternate Conf location, if it is absolute, then just use the absolute directory name > > > ConfDirectoryPath = os.path.normpath(Options.ConfDirectory) > > > if ConfDirectoryPath.startswith('"'): > > > @@ -169,7 +165,7 @@ def main(): > > > GlobalData.gConfDirectory = GenFdsGlobalVariable.ConfDir > > > BuildConfigurationFile = os.path.normpath(os.path.join(ConfDirectoryPath, "target.txt")) > > > if os.path.isfile(BuildConfigurationFile) == True: > > > - TargetTxt = TargetTxtClassObject.TargetTxtClassObject() > > > + TargetTxt = TargetTxtClassObject() > > > TargetTxt.LoadTargetTxtFile(BuildConfigurationFile) > > > # if no build target given in command line, get it from target.txt > > > if not GenFdsGlobalVariable.TargetName: > > > @@ -237,10 +233,9 @@ def main(): > > > GlobalData.gAllFiles = DirCache(Workspace) > > > GlobalData.gWorkspace = Workspace > > > > > > - if (Options.archList) : > > > + if Options.archList: > > > ArchList = Options.archList.split(',') > > > else: > > > -# EdkLogger.error("GenFds", OPTION_MISSING, "Missing build ARCH") > > > ArchList = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, TAB_COMMON, Options.BuildTarget, > > Options.ToolChain].SupArchList > > > > > > TargetArchList = set(BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, TAB_COMMON, > > Options.BuildTarget, Options.ToolChain].SupArchList) & set(ArchList) > > > @@ -249,9 +244,11 @@ def main(): > > > > > > for Arch in ArchList: > > > GenFdsGlobalVariable.OutputDirFromDscDict[Arch] = > > NormPath(BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, Options.BuildTarget, > > Options.ToolChain].OutputDirectory) > > > - GenFdsGlobalVariable.PlatformName = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > > Options.BuildTarget, Options.ToolChain].PlatformName > > > > > > - if (Options.outputDir): > > > + # assign platform name based on last entry in ArchList > > > + GenFdsGlobalVariable.PlatformName = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, ArchList[-1], > > Options.BuildTarget, Options.ToolChain].PlatformName > > > + > > > + if Options.outputDir: > > > OutputDirFromCommandLine = GenFdsGlobalVariable.ReplaceWorkspaceMacro(Options.outputDir) > > > if not os.path.isabs (OutputDirFromCommandLine): > > > OutputDirFromCommandLine = os.path.join(GenFdsGlobalVariable.WorkSpaceDir, > OutputDirFromCommandLine) > > > @@ -274,27 +271,27 @@ def main(): > > > GenFdsGlobalVariable.OutputDirDict[Key] = OutputDir > > > > > > """ Parse Fdf file, has to place after build Workspace as FDF may contain macros from DSC file """ > > > - FdfParserObj = FdfParser.FdfParser(FdfFilename) > > > + FdfParserObj = FdfParser(FdfFilename) > > > FdfParserObj.ParseFile() > > > > > > if FdfParserObj.CycleReferenceCheck(): > > > EdkLogger.error("GenFds", FORMAT_NOT_SUPPORTED, "Cycle Reference Detected in FDF file") > > > > > > - if (Options.uiFdName) : > > > + if Options.uiFdName: > > > if Options.uiFdName.upper() in FdfParserObj.Profile.FdDict: > > > GenFds.OnlyGenerateThisFd = Options.uiFdName > > > else: > > > EdkLogger.error("GenFds", OPTION_VALUE_INVALID, > > > "No such an FD in FDF file: %s" % Options.uiFdName) > > > > > > - if (Options.uiFvName) : > > > + if Options.uiFvName: > > > if Options.uiFvName.upper() in 
FdfParserObj.Profile.FvDict: > > > GenFds.OnlyGenerateThisFv = Options.uiFvName > > > else: > > > EdkLogger.error("GenFds", OPTION_VALUE_INVALID, > > > "No such an FV in FDF file: %s" % Options.uiFvName) > > > > > > - if (Options.uiCapName) : > > > + if Options.uiCapName: > > > if Options.uiCapName.upper() in FdfParserObj.Profile.CapsuleDict: > > > GenFds.OnlyGenerateThisCap = Options.uiCapName > > > else: > > > @@ -302,7 +299,7 @@ def main(): > > > "No such a Capsule in FDF file: %s" % Options.uiCapName) > > > > > > GenFdsGlobalVariable.WorkSpace = BuildWorkSpace > > > - if ArchList is not None: > > > + if ArchList: > > > GenFdsGlobalVariable.ArchList = ArchList > > > > > > # Dsc Build Data will handle Pcd Settings from CommandLine. > > > @@ -313,10 +310,8 @@ def main(): > > > > > > # Record the FV Region info that may specific in the FD > > > if FdfParserObj.Profile.FvDict and FdfParserObj.Profile.FdDict: > > > - for Fv in FdfParserObj.Profile.FvDict: > > > - FvObj = FdfParserObj.Profile.FvDict[Fv] > > > - for Fd in FdfParserObj.Profile.FdDict: > > > - FdObj = FdfParserObj.Profile.FdDict[Fd] > > > + for FvObj in FdfParserObj.Profile.FvDict.values(): > > > + for FdObj in FdfParserObj.Profile.FdDict.values(): > > > for RegionObj in FdObj.RegionList: > > > if RegionObj.RegionType != BINARY_FILE_TYPE_FV: > > > continue > > > @@ -338,7 +333,7 @@ def main(): > > > """Display FV space info.""" > > > GenFds.DisplayFvSpaceInfo(FdfParserObj) > > > > > > - except FdfParser.Warning as X: > > > + except Warning as X: > > > EdkLogger.error(X.ToolName, FORMAT_INVALID, File=X.FileName, Line=X.LineNumber, ExtraData=X.Message, > > RaiseError=False) > > > ReturnCode = FORMAT_INVALID > > > except FatalError as X: > > > @@ -374,7 +369,6 @@ def SingleCheckCallback(option, opt_str, value, parser): > > > # Using standard Python module optparse to parse command line option of this tool. > > > # > > > # @retval Opt A optparse.Values object containing the parsed options > > > -# @retval Args Target of build command > > > # > > > def myOptionParser(): > > > usage = "%prog [options] -f input_file -a arch_list -b build_target -p active_platform -t tool_chain_tag -D \"MacroName [= > > MacroValue]\"" > > > @@ -404,7 +398,7 @@ def myOptionParser(): > > > Parser.add_option("--pcd", action="append", dest="OptionPcd", help="Set PCD value by command line. Format: > > \"PcdName=Value\" ") > > > Parser.add_option("--genfds-multi-thread", action="store_true", dest="GenfdsMultiThread", default=False, help="Enable > GenFds > > multi thread to generate ffs file.") > > > > > > - (Options, args) = Parser.parse_args() > > > + Options, _ = Parser.parse_args() > > > return Options > > > > > > ## The class implementing the EDK2 flash image generation process > > > @@ -414,7 +408,7 @@ def myOptionParser(): > > > # 2. Call methods of Fd class to generate FD > > > # 3. 
Call methods of Fv class to generate FV that not belong to FD > > > # > > > -class GenFds : > > > +class GenFds(object): > > > FdfParsef = None > > > OnlyGenerateThisFd = None > > > OnlyGenerateThisFv = None > > > @@ -423,12 +417,13 @@ class GenFds : > > > ## GenFd() > > > # > > > # @param OutputDir Output directory > > > - # @param FdfParser FDF contents parser > > > + # @param FdfParserObject FDF contents parser > > > # @param Workspace The directory of workspace > > > # @param ArchList The Arch list of platform > > > # > > > - def GenFd (OutputDir, FdfParser, WorkSpace, ArchList): > > > - GenFdsGlobalVariable.SetDir ('', FdfParser, WorkSpace, ArchList) > > > + @staticmethod > > > + def GenFd (OutputDir, FdfParserObject, WorkSpace, ArchList): > > > + GenFdsGlobalVariable.SetDir ('', FdfParserObject, WorkSpace, ArchList) > > > > > > GenFdsGlobalVariable.VerboseLogger(" Generate all Fd images and their required FV and Capsule images!") > > > if GenFds.OnlyGenerateThisCap is not None and GenFds.OnlyGenerateThisCap.upper() in > > GenFdsGlobalVariable.FdfParser.Profile.CapsuleDict: > > > @@ -470,9 +465,10 @@ class GenFds : > > > GenFdsGlobalVariable.VerboseLogger("\n Generate all Option ROM!") > > > for OptRomObj in GenFdsGlobalVariable.FdfParser.Profile.OptRomDict.values(): > > > OptRomObj.AddToBuffer(None) > > > + > > > @staticmethod > > > - def GenFfsMakefile(OutputDir, FdfParser, WorkSpace, ArchList, GlobalData): > > > - GenFdsGlobalVariable.SetEnv(FdfParser, WorkSpace, ArchList, GlobalData) > > > + def GenFfsMakefile(OutputDir, FdfParserObject, WorkSpace, ArchList, GlobalData): > > > + GenFdsGlobalVariable.SetEnv(FdfParserObject, WorkSpace, ArchList, GlobalData) > > > for FdObj in GenFdsGlobalVariable.FdfParser.Profile.FdDict.values(): > > > FdObj.GenFd(Flag=True) > > > > > > @@ -490,6 +486,7 @@ class GenFds : > > > # @param FvObj Whose block size to get > > > # @retval int Block size value > > > # > > > + @staticmethod > > > def GetFvBlockSize(FvObj): > > > DefaultBlockSize = 0x1 > > > FdObj = None > > > @@ -524,16 +521,17 @@ class GenFds : > > > # @param FvObj Whose block size to get > > > # @retval None > > > # > > > - def DisplayFvSpaceInfo(FdfParser): > > > + @staticmethod > > > + def DisplayFvSpaceInfo(FdfParserObject): > > > > > > FvSpaceInfoList = [] > > > MaxFvNameLength = 0 > > > - for FvName in FdfParser.Profile.FvDict: > > > + for FvName in FdfParserObject.Profile.FvDict: > > > if len(FvName) > MaxFvNameLength: > > > MaxFvNameLength = len(FvName) > > > FvSpaceInfoFileName = os.path.join(GenFdsGlobalVariable.FvDir, FvName.upper() + '.Fv.map') > > > if os.path.exists(FvSpaceInfoFileName): > > > - FileLinesList = linecache.getlines(FvSpaceInfoFileName) > > > + FileLinesList = getlines(FvSpaceInfoFileName) > > > TotalFound = False > > > Total = '' > > > UsedFound = False > > > @@ -575,6 +573,7 @@ class GenFds : > > > # @param DscFile modules from dsc file will be preprocessed > > > # @retval None > > > # > > > + @staticmethod > > > def PreprocessImage(BuildDb, DscFile): > > > PcdDict = BuildDb.BuildObject[DscFile, TAB_COMMON, GenFdsGlobalVariable.TargetName, > > GenFdsGlobalVariable.ToolChainTag].Pcds > > > PcdValue = '' > > > @@ -600,6 +599,7 @@ class GenFds : > > > ModuleObj = BuildDb.BuildObject[Key, TAB_COMMON, GenFdsGlobalVariable.TargetName, > > GenFdsGlobalVariable.ToolChainTag] > > > print(ModuleObj.BaseName + ' ' + ModuleObj.ModuleType) > > > > > > + @staticmethod > > > def GenerateGuidXRefFile(BuildDb, ArchList, FdfParserObj): > > > GuidXRefFileName = 
os.path.join(GenFdsGlobalVariable.FvDir, "Guid.xref") > > > GuidXRefFile = BytesIO('') > > > @@ -607,7 +607,6 @@ class GenFds : > > > GuidDict = {} > > > ModuleList = [] > > > FileGuidList = [] > > > - GuidPattern = gGuidPattern > > > for Arch in ArchList: > > > PlatformDataBase = BuildDb.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] > > > PkgList = GenFdsGlobalVariable.WorkSpace.GetPackageList(GenFdsGlobalVariable.ActivePlatform, Arch, > > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag) > > > @@ -626,20 +625,16 @@ class GenFds : > > > continue > > > else: > > > ModuleList.append(Module) > > > - GuidMatch = GuidPattern.match(ModuleFile.BaseName) > > > - if GuidMatch is not None: > > > + if GlobalData.gGuidPattern.match(ModuleFile.BaseName): > > > GuidXRefFile.write("%s %s\n" % (ModuleFile.BaseName, Module.BaseName)) > > > else: > > > GuidXRefFile.write("%s %s\n" % (Module.Guid, Module.BaseName)) > > > - for key, item in Module.Protocols.items(): > > > - GuidDict[key] = item > > > - for key, item in Module.Guids.items(): > > > - GuidDict[key] = item > > > - for key, item in Module.Ppis.items(): > > > - GuidDict[key] = item > > > + GuidDict.update(Module.Protocols) > > > + GuidDict.update(Module.Guids) > > > + GuidDict.update(Module.Ppis) > > > for FvName in FdfParserObj.Profile.FvDict: > > > for FfsObj in FdfParserObj.Profile.FvDict[FvName].FfsList: > > > - if not isinstance(FfsObj, FfsFileStatement.FileStatement): > > > + if not isinstance(FfsObj, FileStatement): > > > InfPath = PathClass(NormPath(mws.join(GenFdsGlobalVariable.WorkSpaceDir, FfsObj.InfFileName))) > > > FdfModule = BuildDb.BuildObject[InfPath, Arch, GenFdsGlobalVariable.TargetName, > > GenFdsGlobalVariable.ToolChainTag] > > > if FdfModule in ModuleList: > > > @@ -647,12 +642,9 @@ class GenFds : > > > else: > > > ModuleList.append(FdfModule) > > > GuidXRefFile.write("%s %s\n" % (FdfModule.Guid, FdfModule.BaseName)) > > > - for key, item in FdfModule.Protocols.items(): > > > - GuidDict[key] = item > > > - for key, item in FdfModule.Guids.items(): > > > - GuidDict[key] = item > > > - for key, item in FdfModule.Ppis.items(): > > > - GuidDict[key] = item > > > + GuidDict.update(FdfModule.Protocols) > > > + GuidDict.update(FdfModule.Guids) > > > + GuidDict.update(FdfModule.Ppis) > > > else: > > > FileStatementGuid = FfsObj.NameGuid > > > if FileStatementGuid in FileGuidList: > > > @@ -661,13 +653,13 @@ class GenFds : > > > FileGuidList.append(FileStatementGuid) > > > Name = [] > > > FfsPath = os.path.join(GenFdsGlobalVariable.FvDir, 'Ffs') > > > - FfsPath = glob.glob(os.path.join(FfsPath, FileStatementGuid) + '*') > > > + FfsPath = glob(os.path.join(FfsPath, FileStatementGuid) + '*') > > > if not FfsPath: > > > continue > > > if not os.path.exists(FfsPath[0]): > > > continue > > > MatchDict = {} > > > - ReFileEnds = > > re.compile('\S+(.ui)$|\S+(fv.sec.txt)$|\S+(.pe32.txt)$|\S+(.te.txt)$|\S+(.pic.txt)$|\S+(.raw.txt)$|\S+(.ffs.txt)$') > > > + ReFileEnds = > > compile('\S+(.ui)$|\S+(fv.sec.txt)$|\S+(.pe32.txt)$|\S+(.te.txt)$|\S+(.pic.txt)$|\S+(.raw.txt)$|\S+(.ffs.txt)$') > > > FileList = os.listdir(FfsPath[0]) > > > for File in FileList: > > > Match = ReFileEnds.search(File) > > > @@ -724,16 +716,10 @@ class GenFds : > > > os.remove(GuidXRefFileName) > > > GuidXRefFile.close() > > > > > > - ##Define GenFd as static function > > > - GenFd = staticmethod(GenFd) > > > - GetFvBlockSize = staticmethod(GetFvBlockSize) > > > - 
DisplayFvSpaceInfo = staticmethod(DisplayFvSpaceInfo) > > > - PreprocessImage = staticmethod(PreprocessImage) > > > - GenerateGuidXRefFile = staticmethod(GenerateGuidXRefFile) > > > - > > > if __name__ == '__main__': > > > r = main() > > > ## 0-127 is a safe return range, and 1 is a standard default error > > > - if r < 0 or r > 127: r = 1 > > > - sys.exit(r) > > > + if r < 0 or r > 127: > > > + r = 1 > > > + exit(r) > > > > > > diff --git a/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py b/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py > > > index 14578a92a9b1..e867693d7c88 100644 > > > --- a/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py > > > +++ b/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py > > > @@ -16,13 +16,15 @@ > > > # Import Modules > > > # > > > from __future__ import print_function > > > +from __future__ import absolute_import > > > + > > > import Common.LongFilePathOs as os > > > -import sys > > > -import subprocess > > > -import struct > > > -import array > > > +from sys import stdout > > > +from subprocess import PIPE,Popen > > > +from struct import Struct > > > +from array import array > > > > > > -from Common.BuildToolError import * > > > +from Common.BuildToolError import COMMAND_FAILURE,GENFDS_ERROR > > > from Common import EdkLogger > > > from Common.Misc import SaveFileOnChange > > > > > > @@ -89,7 +91,7 @@ class GenFdsGlobalVariable: > > > EFI_FIRMWARE_FILE_SYSTEM3_GUID = '5473C07A-3DCB-4dca-BD6F-1E9689E7349A' > > > LARGE_FILE_SIZE = 0x1000000 > > > > > > - SectionHeader = struct.Struct("3B 1B") > > > + SectionHeader = Struct("3B 1B") > > > > > > # FvName, FdName, CapName in FDF, Image file name > > > ImageBinDict = {} > > > @@ -97,7 +99,7 @@ class GenFdsGlobalVariable: > > > ## LoadBuildRule > > > # > > > @staticmethod > > > - def __LoadBuildRule(): > > > + def _LoadBuildRule(): > > > if GenFdsGlobalVariable.__BuildRuleDatabase: > > > return GenFdsGlobalVariable.__BuildRuleDatabase > > > BuildConfigurationFile = os.path.normpath(os.path.join(GenFdsGlobalVariable.ConfDir, "target.txt")) > > > @@ -139,44 +141,43 @@ class GenFdsGlobalVariable: > > > if not Arch in GenFdsGlobalVariable.OutputDirDict: > > > return {} > > > > > > - BuildRuleDatabase = GenFdsGlobalVariable.__LoadBuildRule() > > > + BuildRuleDatabase = GenFdsGlobalVariable._LoadBuildRule() > > > if not BuildRuleDatabase: > > > return {} > > > > > > PathClassObj = PathClass(Inf.MetaFile.File, > > > GenFdsGlobalVariable.WorkSpaceDir) > > > - Macro = {} > > > - Macro["WORKSPACE" ] = GenFdsGlobalVariable.WorkSpaceDir > > > - Macro["MODULE_NAME" ] = Inf.BaseName > > > - Macro["MODULE_GUID" ] = Inf.Guid > > > - Macro["MODULE_VERSION" ] = Inf.Version > > > - Macro["MODULE_TYPE" ] = Inf.ModuleType > > > - Macro["MODULE_FILE" ] = str(PathClassObj) > > > - Macro["MODULE_FILE_BASE_NAME" ] = PathClassObj.BaseName > > > - Macro["MODULE_RELATIVE_DIR" ] = PathClassObj.SubDir > > > - Macro["MODULE_DIR" ] = PathClassObj.SubDir > > > - > > > - Macro["BASE_NAME" ] = Inf.BaseName > > > - > > > - Macro["ARCH" ] = Arch > > > - Macro["TOOLCHAIN" ] = GenFdsGlobalVariable.ToolChainTag > > > - Macro["TOOLCHAIN_TAG" ] = GenFdsGlobalVariable.ToolChainTag > > > - Macro["TOOL_CHAIN_TAG" ] = GenFdsGlobalVariable.ToolChainTag > > > - Macro["TARGET" ] = GenFdsGlobalVariable.TargetName > > > - > > > - Macro["BUILD_DIR" ] = GenFdsGlobalVariable.OutputDirDict[Arch] > > > - Macro["BIN_DIR" ] = os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) > > > - Macro["LIB_DIR" ] = 
os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) > > > BuildDir = os.path.join( > > > GenFdsGlobalVariable.OutputDirDict[Arch], > > > Arch, > > > PathClassObj.SubDir, > > > PathClassObj.BaseName > > > ) > > > - Macro["MODULE_BUILD_DIR" ] = BuildDir > > > - Macro["OUTPUT_DIR" ] = os.path.join(BuildDir, "OUTPUT") > > > - Macro["DEBUG_DIR" ] = os.path.join(BuildDir, "DEBUG") > > > - > > > + BinDir = os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) > > > + Macro = { > > > + "WORKSPACE":GenFdsGlobalVariable.WorkSpaceDir, > > > + "MODULE_NAME":Inf.BaseName, > > > + "MODULE_GUID":Inf.Guid, > > > + "MODULE_VERSION":Inf.Version, > > > + "MODULE_TYPE":Inf.ModuleType, > > > + "MODULE_FILE":str(PathClassObj), > > > + "MODULE_FILE_BASE_NAME":PathClassObj.BaseName, > > > + "MODULE_RELATIVE_DIR":PathClassObj.SubDir, > > > + "MODULE_DIR":PathClassObj.SubDir, > > > + "BASE_NAME":Inf.BaseName, > > > + "ARCH":Arch, > > > + "TOOLCHAIN":GenFdsGlobalVariable.ToolChainTag, > > > + "TOOLCHAIN_TAG":GenFdsGlobalVariable.ToolChainTag, > > > + "TOOL_CHAIN_TAG":GenFdsGlobalVariable.ToolChainTag, > > > + "TARGET":GenFdsGlobalVariable.TargetName, > > > + "BUILD_DIR":GenFdsGlobalVariable.OutputDirDict[Arch], > > > + "BIN_DIR":BinDir, > > > + "LIB_DIR":BinDir, > > > + "MODULE_BUILD_DIR":BuildDir, > > > + "OUTPUT_DIR":os.path.join(BuildDir, "OUTPUT"), > > > + "DEBUG_DIR":os.path.join(BuildDir, "DEBUG") > > > + } > > > + > > > BuildRules = {} > > > for Type in BuildRuleDatabase.FileTypeList: > > > #first try getting build rule by BuildRuleFamily > > > @@ -216,12 +217,12 @@ class GenFdsGlobalVariable: > > > > > > if not Inf.IsBinaryModule: > > > for File in Inf.Sources: > > > - if File.TagName in ("", "*", GenFdsGlobalVariable.ToolChainTag) and \ > > > - File.ToolChainFamily in ("", "*", GenFdsGlobalVariable.ToolChainFamily): > > > + if File.TagName in {"", "*", GenFdsGlobalVariable.ToolChainTag} and \ > > > + File.ToolChainFamily in {"", "*", GenFdsGlobalVariable.ToolChainFamily}: > > > FileList.append((File, DataType.TAB_UNKNOWN_FILE)) > > > > > > for File in Inf.Binaries: > > > - if File.Target in [DataType.TAB_COMMON, '*', GenFdsGlobalVariable.TargetName]: > > > + if File.Target in {DataType.TAB_COMMON, '*', GenFdsGlobalVariable.TargetName}: > > > FileList.append((File, File.Type)) > > > > > > for File, FileType in FileList: > > > @@ -233,7 +234,7 @@ class GenFdsGlobalVariable: > > > Source = SourceList[Index] > > > Index = Index + 1 > > > > > > - if File.IsBinary and File == Source and Inf.Binaries is not None and File in Inf.Binaries: > > > + if File.IsBinary and File == Source and Inf.Binaries and File in Inf.Binaries: > > > # Skip all files that are not binary libraries > > > if not Inf.LibraryClass: > > > continue > > > @@ -287,19 +288,18 @@ class GenFdsGlobalVariable: > > > # @param Workspace The directory of workspace > > > # @param ArchList The Arch list of platform > > > # > > > + @staticmethod > > > def SetDir (OutputDir, FdfParser, WorkSpace, ArchList): > > > - GenFdsGlobalVariable.VerboseLogger("GenFdsGlobalVariable.OutputDir :%s" % OutputDir) > > > -# GenFdsGlobalVariable.OutputDirDict = OutputDir > > > + GenFdsGlobalVariable.VerboseLogger("GenFdsGlobalVariable.OutputDir:%s" % OutputDir) > > > GenFdsGlobalVariable.FdfParser = FdfParser > > > GenFdsGlobalVariable.WorkSpace = WorkSpace > > > GenFdsGlobalVariable.FvDir = os.path.join(GenFdsGlobalVariable.OutputDirDict[ArchList[0]], > > DataType.TAB_FV_DIRECTORY) > > > - if not os.path.exists(GenFdsGlobalVariable.FvDir) : > > > + if not 
os.path.exists(GenFdsGlobalVariable.FvDir): > > > os.makedirs(GenFdsGlobalVariable.FvDir) > > > GenFdsGlobalVariable.FfsDir = os.path.join(GenFdsGlobalVariable.FvDir, 'Ffs') > > > - if not os.path.exists(GenFdsGlobalVariable.FfsDir) : > > > + if not os.path.exists(GenFdsGlobalVariable.FfsDir): > > > os.makedirs(GenFdsGlobalVariable.FfsDir) > > > > > > - T_CHAR_LF = '\n' > > > # > > > # Create FV Address inf file > > > # > > > @@ -308,7 +308,7 @@ class GenFdsGlobalVariable: > > > # > > > # Add [Options] > > > # > > > - FvAddressFile.writelines("[options]" + T_CHAR_LF) > > > + FvAddressFile.writelines("[options]" + DataType.TAB_LINE_BREAK) > > > BsAddress = '0' > > > for Arch in ArchList: > > > if GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].BsBaseAddress: > > > @@ -317,19 +317,22 @@ class GenFdsGlobalVariable: > > > > > > FvAddressFile.writelines("EFI_BOOT_DRIVER_BASE_ADDRESS = " + \ > > > BsAddress + \ > > > - T_CHAR_LF) > > > + DataType.TAB_LINE_BREAK) > > > > > > RtAddress = '0' > > > - for Arch in ArchList: > > > - if GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].RtBaseAddress: > > > - RtAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].RtBaseAddress > > > + for Arch in reversed(ArchList): > > > + temp = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].RtBaseAddress > > > + if temp: > > > + RtAddress = temp > > > + break > > > > > > FvAddressFile.writelines("EFI_RUNTIME_DRIVER_BASE_ADDRESS = " + \ > > > RtAddress + \ > > > - T_CHAR_LF) > > > + DataType.TAB_LINE_BREAK) > > > > > > FvAddressFile.close() > > > > > > + @staticmethod > > > def SetEnv(FdfParser, WorkSpace, ArchList, GlobalData): > > > GenFdsGlobalVariable.ModuleFile = WorkSpace.ModuleFile > > > GenFdsGlobalVariable.FdfParser = FdfParser > > > @@ -360,7 +363,6 @@ class GenFdsGlobalVariable: > > > if not os.path.exists(GenFdsGlobalVariable.FfsDir): > > > os.makedirs(GenFdsGlobalVariable.FfsDir) > > > > > > - T_CHAR_LF = '\n' > > > # > > > # Create FV Address inf file > > > # > > > @@ -369,7 +371,7 @@ class GenFdsGlobalVariable: > > > # > > > # Add [Options] > > > # > > > - FvAddressFile.writelines("[options]" + T_CHAR_LF) > > > + FvAddressFile.writelines("[options]" + DataType.TAB_LINE_BREAK) > > > BsAddress = '0' > > > for Arch in ArchList: > > > BsAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > > > @@ -380,20 +382,20 @@ class GenFdsGlobalVariable: > > > > > > FvAddressFile.writelines("EFI_BOOT_DRIVER_BASE_ADDRESS = " + \ > > > BsAddress + \ > > > - T_CHAR_LF) > > > + DataType.TAB_LINE_BREAK) > > > > > > RtAddress = '0' > > > - for Arch in ArchList: > > > - if GenFdsGlobalVariable.WorkSpace.BuildObject[ > > > + for Arch in reversed(ArchList): > > > + temp = GenFdsGlobalVariable.WorkSpace.BuildObject[ > > > GenFdsGlobalVariable.ActivePlatform, Arch, GlobalData.gGlobalDefines['TARGET'], > > > - GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress: > > > - RtAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[ > > > - GenFdsGlobalVariable.ActivePlatform, Arch, GlobalData.gGlobalDefines['TARGET'], > > > - 
GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress > > > + GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress > > > + if temp: > > > + RtAddress = temp > > > + break > > > > > > FvAddressFile.writelines("EFI_RUNTIME_DRIVER_BASE_ADDRESS = " + \ > > > RtAddress + \ > > > - T_CHAR_LF) > > > + DataType.TAB_LINE_BREAK) > > > > > > FvAddressFile.close() > > > > > > @@ -401,6 +403,7 @@ class GenFdsGlobalVariable: > > > # > > > # @param String String that may contain macro > > > # > > > + @staticmethod > > > def ReplaceWorkspaceMacro(String): > > > String = mws.handleWsMacro(String) > > > Str = String.replace('$(WORKSPACE)', GenFdsGlobalVariable.WorkSpaceDir) > > > @@ -424,7 +427,7 @@ class GenFdsGlobalVariable: > > > if not os.path.exists(Output): > > > return True > > > # always update "Output" if no "Input" given > > > - if Input is None or len(Input) == 0: > > > + if not Input: > > > return True > > > > > > # if fdf file is changed after the 'Output" is generated, update the 'Output' > > > @@ -449,9 +452,9 @@ class GenFdsGlobalVariable: > > > Cmd += ("-s", Type) > > > if CompressionType: > > > Cmd += ("-c", CompressionType) > > > - if Guid is not None: > > > + if Guid: > > > Cmd += ("-g", Guid) > > > - if DummyFile is not None: > > > + if DummyFile: > > > Cmd += ("--dummy", DummyFile) > > > if GuidHdrLen: > > > Cmd += ("-l", GuidHdrLen) > > > @@ -473,7 +476,7 @@ class GenFdsGlobalVariable: > > > if ' '.join(Cmd).strip() not in GenFdsGlobalVariable.SecCmdList: > > > GenFdsGlobalVariable.SecCmdList.append(' '.join(Cmd).strip()) > > > else: > > > - SectionData = array.array('B', [0, 0, 0, 0]) > > > + SectionData = array('B', [0, 0, 0, 0]) > > > SectionData.fromstring(Ui.encode("utf_16_le")) > > > SectionData.append(0) > > > SectionData.append(0) > > > @@ -571,9 +574,9 @@ class GenFdsGlobalVariable: > > > if BaseAddress: > > > Cmd += ("-r", BaseAddress) > > > > > > - if ForceRebase == False: > > > + if not ForceRebase: > > > Cmd += ("-F", "FALSE") > > > - elif ForceRebase == True: > > > + else: > > > Cmd += ("-F", "TRUE") > > > > > > if Capsule: > > > @@ -634,7 +637,7 @@ class GenFdsGlobalVariable: > > > Revision=None, DeviceId=None, VendorId=None, IsMakefile=False): > > > InputList = [] > > > Cmd = ["EfiRom"] > > > - if len(EfiInput) > 0: > > > + if EfiInput: > > > > > > if Compress: > > > Cmd.append("-ec") > > > @@ -645,7 +648,7 @@ class GenFdsGlobalVariable: > > > Cmd.append(EfiFile) > > > InputList.append (EfiFile) > > > > > > - if len(BinaryInput) > 0: > > > + if BinaryInput: > > > Cmd.append("-b") > > > for BinFile in BinaryInput: > > > Cmd.append(BinFile) > > > @@ -656,13 +659,13 @@ class GenFdsGlobalVariable: > > > return > > > GenFdsGlobalVariable.DebugLogger(EdkLogger.DEBUG_5, "%s needs update because of newer %s" % (Output, InputList)) > > > > > > - if ClassCode is not None: > > > + if ClassCode: > > > Cmd += ("-l", ClassCode) > > > - if Revision is not None: > > > + if Revision: > > > Cmd += ("-r", Revision) > > > - if DeviceId is not None: > > > + if DeviceId: > > > Cmd += ("-i", DeviceId) > > > - if VendorId is not None: > > > + if VendorId: > > > Cmd += ("-f", VendorId) > > > > > > Cmd += ("-o", Output) > > > @@ -688,6 +691,7 @@ class GenFdsGlobalVariable: > > > else: > > > GenFdsGlobalVariable.CallExternalTool(Cmd, "Failed to call " + ToolPath, returnValue) > > > > > > + @staticmethod > > > def CallExternalTool (cmd, errorMess, returnValue=[]): > > > > > > if type(cmd) not in (tuple, list): > > > @@ -701,19 +705,19 @@ class GenFdsGlobalVariable: > > > cmd += ('-v',) 
> > > GenFdsGlobalVariable.InfLogger (cmd) > > > else: > > > - sys.stdout.write ('#') > > > - sys.stdout.flush() > > > + stdout.write ('#') > > > + stdout.flush() > > > GenFdsGlobalVariable.SharpCounter = GenFdsGlobalVariable.SharpCounter + 1 > > > if GenFdsGlobalVariable.SharpCounter % GenFdsGlobalVariable.SharpNumberPerLine == 0: > > > - sys.stdout.write('\n') > > > + stdout.write('\n') > > > > > > try: > > > - PopenObject = subprocess.Popen(' '.join(cmd), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) > > > + PopenObject = Popen(' '.join(cmd), stdout=PIPE, stderr=PIPE, shell=True) > > > except Exception as X: > > > EdkLogger.error("GenFds", COMMAND_FAILURE, ExtraData="%s: %s" % (str(X), cmd[0])) > > > (out, error) = PopenObject.communicate() > > > > > > - while PopenObject.returncode is None : > > > + while PopenObject.returncode is None: > > > PopenObject.wait() > > > if returnValue != [] and returnValue[0] != 0: > > > #get command return value > > > @@ -727,45 +731,52 @@ class GenFdsGlobalVariable: > > > print("###", cmd) > > > EdkLogger.error("GenFds", COMMAND_FAILURE, errorMess) > > > > > > + @staticmethod > > > def VerboseLogger (msg): > > > EdkLogger.verbose(msg) > > > > > > + @staticmethod > > > def InfLogger (msg): > > > EdkLogger.info(msg) > > > > > > + @staticmethod > > > def ErrorLogger (msg, File=None, Line=None, ExtraData=None): > > > EdkLogger.error('GenFds', GENFDS_ERROR, msg, File, Line, ExtraData) > > > > > > + @staticmethod > > > def DebugLogger (Level, msg): > > > EdkLogger.debug(Level, msg) > > > > > > - ## ReplaceWorkspaceMacro() > > > + ## MacroExtend() > > > # > > > # @param Str String that may contain macro > > > # @param MacroDict Dictionary that contains macro value pair > > > # > > > + @staticmethod > > > def MacroExtend (Str, MacroDict={}, Arch=DataType.TAB_COMMON): > > > - if Str is None : > > > + if Str is None: > > > return None > > > > > > - Dict = {'$(WORKSPACE)' : GenFdsGlobalVariable.WorkSpaceDir, > > > - '$(EDK_SOURCE)' : GenFdsGlobalVariable.EdkSourceDir, > > > + Dict = {'$(WORKSPACE)': GenFdsGlobalVariable.WorkSpaceDir, > > > + '$(EDK_SOURCE)': GenFdsGlobalVariable.EdkSourceDir, > > > # '$(OUTPUT_DIRECTORY)': GenFdsGlobalVariable.OutputDirFromDsc, > > > - '$(TARGET)' : GenFdsGlobalVariable.TargetName, > > > - '$(TOOL_CHAIN_TAG)' : GenFdsGlobalVariable.ToolChainTag, > > > - '$(SPACE)' : ' ' > > > + '$(TARGET)': GenFdsGlobalVariable.TargetName, > > > + '$(TOOL_CHAIN_TAG)': GenFdsGlobalVariable.ToolChainTag, > > > + '$(SPACE)': ' ' > > > } > > > - OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[GenFdsGlobalVariable.ArchList[0]] > > > + > > > if Arch != DataType.TAB_COMMON and Arch in GenFdsGlobalVariable.ArchList: > > > OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[Arch] > > > + else: > > > + OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[GenFdsGlobalVariable.ArchList[0]] > > > > > > Dict['$(OUTPUT_DIRECTORY)'] = OutputDir > > > > > > - if MacroDict is not None and len (MacroDict) != 0: > > > + if MacroDict: > > > Dict.update(MacroDict) > > > > > > for key in Dict: > > > - if Str.find(key) >= 0 : > > > + if Str.find(key) >= 0: > > > Str = Str.replace (key, Dict[key]) > > > > > > if Str.find('$(ARCH)') >= 0: > > > @@ -780,14 +791,14 @@ class GenFdsGlobalVariable: > > > # > > > # @param PcdPattern pattern that labels a PCD. 
> > > # > > > + @staticmethod > > > def GetPcdValue (PcdPattern): > > > - if PcdPattern is None : > > > + if PcdPattern is None: > > > return None > > > PcdPair = PcdPattern.lstrip('PCD(').rstrip(')').strip().split('.') > > > TokenSpace = PcdPair[0] > > > TokenCName = PcdPair[1] > > > > > > - PcdValue = '' > > > for Arch in GenFdsGlobalVariable.ArchList: > > > Platform = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] > > > PcdDict = Platform.Pcds > > > @@ -799,8 +810,7 @@ class GenFdsGlobalVariable: > > > if PcdObj.DatumType != DataType.TAB_VOID: > > > EdkLogger.error("GenFds", GENFDS_ERROR, "%s is not VOID* datum type." % PcdPattern) > > > > > > - PcdValue = PcdObj.DefaultValue > > > - return PcdValue > > > + return PcdObj.DefaultValue > > > > > > for Package in GenFdsGlobalVariable.WorkSpace.GetPackageList(GenFdsGlobalVariable.ActivePlatform, > > > Arch, > > > @@ -815,21 +825,9 @@ class GenFdsGlobalVariable: > > > if PcdObj.DatumType != DataType.TAB_VOID: > > > EdkLogger.error("GenFds", GENFDS_ERROR, "%s is not VOID* datum type." % PcdPattern) > > > > > > - PcdValue = PcdObj.DefaultValue > > > - return PcdValue > > > + return PcdObj.DefaultValue > > > > > > - return PcdValue > > > - > > > - SetDir = staticmethod(SetDir) > > > - SetEnv = staticmethod(SetEnv) > > > - ReplaceWorkspaceMacro = staticmethod(ReplaceWorkspaceMacro) > > > - CallExternalTool = staticmethod(CallExternalTool) > > > - VerboseLogger = staticmethod(VerboseLogger) > > > - InfLogger = staticmethod(InfLogger) > > > - ErrorLogger = staticmethod(ErrorLogger) > > > - DebugLogger = staticmethod(DebugLogger) > > > - MacroExtend = staticmethod (MacroExtend) > > > - GetPcdValue = staticmethod(GetPcdValue) > > > + return '' > > > > > > ## FindExtendTool() > > > # > > > @@ -863,7 +861,7 @@ def FindExtendTool(KeyStringList, CurrentArchList, NameGuid): > > > ToolOptionKey = None > > > KeyList = None > > > for ToolDef in ToolDefinition.items(): > > > - if NameGuid.lower() == ToolDef[1].lower() : > > > + if NameGuid.lower() == ToolDef[1].lower(): > > > KeyList = ToolDef[0].split('_') > > > Key = KeyList[0] + \ > > > '_' + \ > > > diff --git a/BaseTools/Source/Python/GenFds/GuidSection.py b/BaseTools/Source/Python/GenFds/GuidSection.py > > > index fd660737f421..25609e6f2ae1 100644 > > > --- a/BaseTools/Source/Python/GenFds/GuidSection.py > > > +++ b/BaseTools/Source/Python/GenFds/GuidSection.py > > > @@ -19,7 +19,7 @@ > > > from __future__ import absolute_import > > > from . 
import Section > > > import subprocess > > > -from .Ffs import Ffs > > > +from .Ffs import SectionSuffix > > > import Common.LongFilePathOs as os > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > from .GenFdsGlobalVariable import FindExtendTool > > > @@ -126,7 +126,7 @@ class GuidSection(GuidSectionClassObject) : > > > ModuleName + \ > > > SUP_MODULE_SEC + \ > > > SecNum + \ > > > - Ffs.SectionSuffix['GUIDED'] > > > + SectionSuffix['GUIDED'] > > > OutputFile = os.path.normpath(OutputFile) > > > > > > ExternalTool = None > > > diff --git a/BaseTools/Source/Python/GenFds/OptionRom.py b/BaseTools/Source/Python/GenFds/OptionRom.py > > > index bb7071fa7fca..9d7ad25ecc43 100644 > > > --- a/BaseTools/Source/Python/GenFds/OptionRom.py > > > +++ b/BaseTools/Source/Python/GenFds/OptionRom.py > > > @@ -26,8 +26,6 @@ from Common.Misc import SaveFileOnChange > > > from Common import EdkLogger > > > from Common.BuildToolError import * > > > > > > -T_CHAR_LF = '\n' > > > - > > > ## > > > # > > > # > > > @@ -36,9 +34,9 @@ class OPTIONROM (OptionRomClassObject): > > > # > > > # @param self The object pointer > > > # > > > - def __init__(self): > > > + def __init__(self, Name = ""): > > > OptionRomClassObject.__init__(self) > > > - > > > + self.DriverName = Name > > > > > > ## AddToBuffer() > > > # > > > diff --git a/BaseTools/Source/Python/GenFds/Region.py b/BaseTools/Source/Python/GenFds/Region.py > > > index 5242b74c9e70..9aeaa45d4566 100644 > > > --- a/BaseTools/Source/Python/GenFds/Region.py > > > +++ b/BaseTools/Source/Python/GenFds/Region.py > > > @@ -20,7 +20,6 @@ from struct import * > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > from io import BytesIO > > > import string > > > -from CommonDataClass.FdfClass import RegionClassObject > > > import Common.LongFilePathOs as os > > > from stat import * > > > from Common import EdkLogger > > > @@ -32,15 +31,20 @@ from Common.DataType import BINARY_FILE_TYPE_FV > > > ## generate Region > > > # > > > # > > > -class Region(RegionClassObject): > > > +class Region(object): > > > > > > ## The constructor > > > # > > > # @param self The object pointer > > > # > > > def __init__(self): > > > - RegionClassObject.__init__(self) > > > - > > > + self.Offset = None # The begin position of the Region > > > + self.Size = None # The Size of the Region > > > + self.PcdOffset = None > > > + self.PcdSize = None > > > + self.SetVarDict = {} > > > + self.RegionType = None > > > + self.RegionDataList = [] > > > > > > ## PadBuffer() > > > # > > > diff --git a/BaseTools/Source/Python/GenFds/UiSection.py b/BaseTools/Source/Python/GenFds/UiSection.py > > > index 24f2f3ca938d..c1dff188be96 100644 > > > --- a/BaseTools/Source/Python/GenFds/UiSection.py > > > +++ b/BaseTools/Source/Python/GenFds/UiSection.py > > > @@ -17,7 +17,7 @@ > > > # > > > from __future__ import absolute_import > > > from . 
import Section > > > -from .Ffs import Ffs > > > +from .Ffs import SectionSuffix > > > import subprocess > > > import Common.LongFilePathOs as os > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > @@ -59,7 +59,7 @@ class UiSection (UiSectionClassObject): > > > self.StringData = FfsInf.__ExtendMacro__(self.StringData) > > > self.FileName = FfsInf.__ExtendMacro__(self.FileName) > > > > > > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + > > Ffs.SectionSuffix.get(BINARY_FILE_TYPE_UI)) > > > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + > > SectionSuffix.get(BINARY_FILE_TYPE_UI)) > > > > > > if self.StringData is not None : > > > NameString = self.StringData > > > diff --git a/BaseTools/Source/Python/GenFds/VerSection.py b/BaseTools/Source/Python/GenFds/VerSection.py > > > index db71fe9653d4..051743f10c21 100644 > > > --- a/BaseTools/Source/Python/GenFds/VerSection.py > > > +++ b/BaseTools/Source/Python/GenFds/VerSection.py > > > @@ -1,7 +1,7 @@ > > > ## @file > > > # process Version section generation > > > # > > > -# Copyright (c) 2007 - 2017, Intel Corporation. All rights reserved.<BR> > > > +# Copyright (c) 2007 - 2018, Intel Corporation. All rights reserved.<BR> > > > # > > > # This program and the accompanying materials > > > # are licensed and made available under the terms and conditions of the BSD License > > > @@ -16,10 +16,8 @@ > > > # Import Modules > > > # > > > from __future__ import absolute_import > > > -from .Ffs import Ffs > > > -from . import Section > > > +from .Ffs import SectionSuffix > > > import Common.LongFilePathOs as os > > > -import subprocess > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > from CommonDataClass.FdfClass import VerSectionClassObject > > > from Common.LongFilePathSupport import OpenLongFilePath as open > > > @@ -54,29 +52,27 @@ class VerSection (VerSectionClassObject): > > > # > > > # Prepare the parameter of GenSection > > > # > > > - if FfsInf is not None: > > > + if FfsInf: > > > self.Alignment = FfsInf.__ExtendMacro__(self.Alignment) > > > self.BuildNum = FfsInf.__ExtendMacro__(self.BuildNum) > > > self.StringData = FfsInf.__ExtendMacro__(self.StringData) > > > self.FileName = FfsInf.__ExtendMacro__(self.FileName) > > > > > > OutputFile = os.path.join(OutputPath, > > > - ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get('VERSION')) > > > + ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get('VERSION')) > > > OutputFile = os.path.normpath(OutputFile) > > > > > > # Get String Data > > > StringData = '' > > > - if self.StringData is not None: > > > + if self.StringData: > > > StringData = self.StringData > > > - elif self.FileName is not None: > > > + elif self.FileName: > > > FileNameStr = GenFdsGlobalVariable.ReplaceWorkspaceMacro(self.FileName) > > > FileNameStr = GenFdsGlobalVariable.MacroExtend(FileNameStr, Dict) > > > FileObj = open(FileNameStr, 'r') > > > StringData = FileObj.read() > > > StringData = '"' + StringData + '"' > > > FileObj.close() > > > - else: > > > - StringData = '' > > > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', > > > Ver=StringData, BuildNumber=self.BuildNum, IsMakefile=IsMakefile) > > > OutputFileList = [] > > > diff --git a/BaseTools/Source/Python/GenFds/Vtf.py b/BaseTools/Source/Python/GenFds/Vtf.py > > > index 5cb2d4acfb1c..9dcd48b2d812 100644 > > > --- a/BaseTools/Source/Python/GenFds/Vtf.py > > > +++ b/BaseTools/Source/Python/GenFds/Vtf.py > > > @@ -18,21 +18,24 @@ > > > 
from __future__ import absolute_import > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > import Common.LongFilePathOs as os > > > -from CommonDataClass.FdfClass import VtfClassObject > > > from Common.LongFilePathSupport import OpenLongFilePath as open > > > -T_CHAR_LF = '\n' > > > +from Common.DataType import TAB_LINE_BREAK > > > > > > ## generate VTF > > > # > > > # > > > -class Vtf (VtfClassObject): > > > +class Vtf (object): > > > > > > ## The constructor > > > # > > > # @param self The object pointer > > > # > > > def __init__(self): > > > - VtfClassObject.__init__(self) > > > + self.KeyArch = None > > > + self.ArchList = None > > > + self.UiName = None > > > + self.ResetBin = None > > > + self.ComponentStatementList = [] > > > > > > ## GenVtf() method > > > # > > > @@ -44,7 +47,6 @@ class Vtf (VtfClassObject): > > > # > > > def GenVtf(self, FdAddressDict) : > > > self.GenBsfInf() > > > - OutputFile = os.path.join(GenFdsGlobalVariable.FvDir, self.UiName + '.Vtf') > > > BaseAddArg = self.GetBaseAddressArg(FdAddressDict) > > > OutputArg, VtfRawDict = self.GenOutputArg() > > > > > > @@ -69,57 +71,57 @@ class Vtf (VtfClassObject): > > > FvList = self.GetFvList() > > > self.BsfInfName = os.path.join(GenFdsGlobalVariable.FvDir, self.UiName + '.inf') > > > BsfInf = open(self.BsfInfName, 'w+') > > > - if self.ResetBin is not None: > > > - BsfInf.writelines ("[OPTIONS]" + T_CHAR_LF) > > > + if self.ResetBin: > > > + BsfInf.writelines ("[OPTIONS]" + TAB_LINE_BREAK) > > > BsfInf.writelines ("IA32_RST_BIN" + \ > > > " = " + \ > > > > > GenFdsGlobalVariable.MacroExtend(GenFdsGlobalVariable.ReplaceWorkspaceMacro(self.ResetBin)) + \ > > > - T_CHAR_LF) > > > - BsfInf.writelines (T_CHAR_LF) > > > + TAB_LINE_BREAK) > > > + BsfInf.writelines (TAB_LINE_BREAK) > > > > > > - BsfInf.writelines ("[COMPONENTS]" + T_CHAR_LF) > > > + BsfInf.writelines ("[COMPONENTS]" + TAB_LINE_BREAK) > > > > > > for ComponentObj in self.ComponentStatementList : > > > BsfInf.writelines ("COMP_NAME" + \ > > > " = " + \ > > > ComponentObj.CompName + \ > > > - T_CHAR_LF) > > > + TAB_LINE_BREAK) > > > if ComponentObj.CompLoc.upper() == 'NONE': > > > BsfInf.writelines ("COMP_LOC" + \ > > > " = " + \ > > > 'N' + \ > > > - T_CHAR_LF) > > > + TAB_LINE_BREAK) > > > > > > - elif ComponentObj.FilePos is not None: > > > + elif ComponentObj.FilePos: > > > BsfInf.writelines ("COMP_LOC" + \ > > > " = " + \ > > > ComponentObj.FilePos + \ > > > - T_CHAR_LF) > > > + TAB_LINE_BREAK) > > > else: > > > Index = FvList.index(ComponentObj.CompLoc.upper()) > > > if Index == 0: > > > BsfInf.writelines ("COMP_LOC" + \ > > > " = " + \ > > > 'F' + \ > > > - T_CHAR_LF) > > > + TAB_LINE_BREAK) > > > elif Index == 1: > > > BsfInf.writelines ("COMP_LOC" + \ > > > " = " + \ > > > 'S' + \ > > > - T_CHAR_LF) > > > + TAB_LINE_BREAK) > > > > > > BsfInf.writelines ("COMP_TYPE" + \ > > > " = " + \ > > > ComponentObj.CompType + \ > > > - T_CHAR_LF) > > > + TAB_LINE_BREAK) > > > BsfInf.writelines ("COMP_VER" + \ > > > " = " + \ > > > ComponentObj.CompVer + \ > > > - T_CHAR_LF) > > > + TAB_LINE_BREAK) > > > BsfInf.writelines ("COMP_CS" + \ > > > " = " + \ > > > ComponentObj.CompCs + \ > > > - T_CHAR_LF) > > > + TAB_LINE_BREAK) > > > > > > BinPath = ComponentObj.CompBin > > > if BinPath != '-': > > > @@ -127,7 +129,7 @@ class Vtf (VtfClassObject): > > > BsfInf.writelines ("COMP_BIN" + \ > > > " = " + \ > > > BinPath + \ > > > - T_CHAR_LF) > > > + TAB_LINE_BREAK) > > > > > > SymPath = ComponentObj.CompSym > > > if SymPath != '-': > > > @@ -135,12 +137,12 
@@ class Vtf (VtfClassObject): > > > BsfInf.writelines ("COMP_SYM" + \ > > > " = " + \ > > > SymPath + \ > > > - T_CHAR_LF) > > > + TAB_LINE_BREAK) > > > BsfInf.writelines ("COMP_SIZE" + \ > > > " = " + \ > > > ComponentObj.CompSize + \ > > > - T_CHAR_LF) > > > - BsfInf.writelines (T_CHAR_LF) > > > + TAB_LINE_BREAK) > > > + BsfInf.writelines (TAB_LINE_BREAK) > > > > > > BsfInf.close() > > > > > > diff --git a/BaseTools/Source/Python/build/BuildReport.py b/BaseTools/Source/Python/build/BuildReport.py > > > index a66adfb16746..95863fefe151 100644 > > > --- a/BaseTools/Source/Python/build/BuildReport.py > > > +++ b/BaseTools/Source/Python/build/BuildReport.py > > > @@ -46,6 +46,7 @@ from Common.StringUtils import NormPath > > > from Common.DataType import * > > > import collections > > > from Common.Expression import * > > > +from GenFds.AprioriSection import DXE_APRIORI_GUID, PEI_APRIORI_GUID > > > > > > ## Pattern to extract contents in EDK DXS files > > > gDxsDependencyPattern = re.compile(r"DEPENDENCY_START(.+)DEPENDENCY_END", re.DOTALL) > > > @@ -1853,8 +1854,8 @@ class FdRegionReport(object): > > > # > > > # Add PEI and DXE a priori files GUIDs defined in PI specification. > > > # > > > - self._GuidsDb["1B45CC0A-156A-428A-AF62-49864DA0E6E6"] = "PEI Apriori" > > > - self._GuidsDb["FC510EE7-FFDC-11D4-BD41-0080C73C8881"] = "DXE Apriori" > > > + self._GuidsDb[PEI_APRIORI_GUID] = "PEI Apriori" > > > + self._GuidsDb[DXE_APRIORI_GUID] = "DXE Apriori" > > > # > > > # Add ACPI table storage file > > > # > > > -- > > > 2.16.2.windows.1 > > > > > > _______________________________________________ > > > edk2-devel mailing list > > > edk2-devel@lists.01.org > > > https://lists.01.org/mailman/listinfo/edk2-devel > > > > _______________________________________________ > edk2-devel mailing list > edk2-devel@lists.01.org > https://lists.01.org/mailman/listinfo/edk2-devel ^ permalink raw reply [flat|nested] 14+ messages in thread
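Before the regression discussion that follows, a few of the cleanup patterns in the patch above are worth illustrating in isolation. Several hunks convert GenFds class methods from the older Name = staticmethod(Name) assignments at the bottom of a class to @staticmethod decorators on each definition. A minimal standalone sketch of the two equivalent forms, using an illustrative class and method name rather than anything taken from the patch:

    # Older idiom removed by the patch: define a plain function in the class
    # body, then convert it afterwards with an explicit staticmethod() call.
    class LoggerOld:
        def Info(msg):
            print(msg)
        Info = staticmethod(Info)

    # Decorator form used throughout the refactored GenFds code: the intent is
    # visible at the definition and no trailing assignment block is needed.
    class LoggerNew:
        @staticmethod
        def Info(msg):
            print(msg)

    LoggerOld.Info("hello")   # both calls behave identically
    LoggerNew.Info("hello")

Both classes expose the same callable; the decorator simply removes the separate bookkeeping block that the patch deletes from GenFds and GenFdsGlobalVariable.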
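The RtBaseAddress hunks in GenFdsGlobalVariable.py likewise replace a loop that relied on later iterations overwriting earlier assignments with a reversed scan that stops at the last non-empty value, in line with item 4 of the patch description. A rough sketch of why the two forms pick the same value, using a plain dictionary as a made-up stand-in for the per-arch workspace build objects:

    # Hypothetical per-arch RtBaseAddress values; the real code queries
    # WorkSpace.BuildObject for each Arch instead.
    arch_values = {'IA32': '', 'X64': '0x800000', 'AARCH64': ''}
    arch_list = ['IA32', 'X64', 'AARCH64']

    # Old style: assign on every match, so the last truthy entry wins only
    # because it happens to be processed last.
    rt_address_old = '0'
    for arch in arch_list:
        if arch_values[arch]:
            rt_address_old = arch_values[arch]

    # New style: walk the list backwards and stop at the first hit, which is
    # the same "last" entry, without the repeated assignments.
    rt_address_new = '0'
    for arch in reversed(arch_list):
        if arch_values[arch]:
            rt_address_new = arch_values[arch]
            break

    assert rt_address_old == rt_address_new == '0x800000'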
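Similarly, the GenerateGuidXRefFile hunk collapses three key-by-key copy loops into dict.update() calls. A small self-contained example of the equivalence, with made-up names and GUID strings standing in for Module.Protocols, Module.Guids and Module.Ppis:

    # Made-up name-to-GUID maps; the real code reads these from module objects.
    protocols = {'ProtocolA': 'guid-aaaa'}
    guids = {'GuidB': 'guid-bbbb'}
    ppis = {'PpiC': 'guid-cccc'}

    # Old style: copy every pair by hand.
    guid_dict_old = {}
    for key, item in protocols.items():
        guid_dict_old[key] = item
    for key, item in guids.items():
        guid_dict_old[key] = item
    for key, item in ppis.items():
        guid_dict_old[key] = item

    # New style from the patch: let dict.update() merge each mapping.
    guid_dict_new = {}
    guid_dict_new.update(protocols)
    guid_dict_new.update(guids)
    guid_dict_new.update(ppis)

    assert guid_dict_old == guid_dict_new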
* Re: [Patch v3 1/6] BaseTools/GenFds: cleanup GenFds 2018-10-26 6:27 ` Gao, Liming @ 2018-10-26 6:54 ` Gary Lin 0 siblings, 0 replies; 14+ messages in thread From: Gary Lin @ 2018-10-26 6:54 UTC (permalink / raw) To: Gao, Liming; +Cc: Carsey, Jaben, edk2-devel@lists.01.org On Fri, Oct 26, 2018 at 06:27:53AM +0000, Gao, Liming wrote: > Edk2 599bb2be1f2b119dcde9e5a7515dc6f3d19eb29d has fixed this issue. Please try. I rebuilt OVMF with git HEAD, and the crash was gone. Thanks! Gary Lin > > > -----Original Message----- > > From: edk2-devel [mailto:edk2-devel-bounces@lists.01.org] On Behalf Of Gao, Liming > > Sent: Friday, October 26, 2018 1:25 PM > > To: Gary Lin <glin@suse.com>; Carsey, Jaben <jaben.carsey@intel.com> > > Cc: edk2-devel@lists.01.org > > Subject: Re: [edk2] [Patch v3 1/6] BaseTools/GenFds: cleanup GenFds > > > > Lin: > > I am checking this issue. Sorry for the break. > > > > Thanks > > Liming > > > -----Original Message----- > > > From: Gary Lin [mailto:glin@suse.com] > > > Sent: Friday, October 26, 2018 12:54 PM > > > To: Carsey, Jaben <jaben.carsey@intel.com> > > > Cc: edk2-devel@lists.01.org; Gao, Liming <liming.gao@intel.com> > > > Subject: Re: [edk2] [Patch v3 1/6] BaseTools/GenFds: cleanup GenFds > > > > > > On Tue, Oct 23, 2018 at 10:29:19AM -0700, Jaben Carsey wrote: > > > > 1) remove wildcard imports and use explicit imports > > > > 2) refactor to use shared variables from Common/DataType > > > > 3) rename to not shadow imports > > > > 4) don't assign a variable in a loop (just do final assignment) > > > > 5) remove spaces, parens, unused or commented out code, etc. > > > > 6) merge unnecessary parent classes into child > > > > 7) refactor to share DXE and PEI apriori GUIDs from one place > > > > this includes changes to Build and EOT files > > > > 8) for PEP8, dont use __ for custom methods. > > > > > > > Hi Jaben, > > > > > > I found OVMF crashed immediately at starting up after applying this patch :-\ > > > > > > == > > > > > > $ qemu-system-x86_64 -s -enable-kvm -smp 1 -m 1024 \ > > > -drive if=pflash,format=raw,readonly,file=/home/gary/git/edk2/Build/OvmfX64/DEBUG_GCC5/FV/OVMF_CODE.fd \ > > > -drive if=pflash,format=raw,file=/home/gary/git/edk2/Build/OvmfX64/DEBUG_GCC5/FV/OVMF_VARS.fd \ > > > -machine type=q35 \ > > > -drive file=fat:hda-contents/,format=raw,read-only,if=virtio \ > > > -monitor stdio -debugcon file:debug.log -global isa-debugcon.iobase=0x402 \ > > > -serial file:serial.log > > > QEMU 3.0.0 monitor - type 'help' for more information > > > (qemu) KVM internal error. 
Suberror: 1 > > > emulation failure > > > RAX=000000000081f6a0 RBX=0000000000000200 RCX=000000000081f6a0 RDX=0000000000000200 > > > RSI=000000000081f698 RDI=000000000081fd20 RBP=000000000081f698 RSP=000000000081f4a8 > > > R8 =000000000081f4d8 R9 =000000000081f748 R10=0000000000000010 R11=0000000000900000 > > > R12=0000000000000000 R13=0000000000820000 R14=000000000081fd01 R15=000000000081fd01 > > > RIP=000000000009ffff RFL=00010807 [-O---PC] CPL=0 II=0 A20=1 SMM=0 HLT=0 > > > ES =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] > > > CS =0018 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA] > > > SS =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] > > > DS =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] > > > FS =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] > > > GS =0008 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] > > > LDT=0000 0000000000000000 0000ffff 00008200 DPL=0 LDT > > > TR =0000 0000000000000000 0000ffff 00008b00 DPL=0 TSS64-busy > > > GDT= 00000000ffffff80 0000001f > > > IDT= 000000000081fd70 0000021f > > > CR0=c0000033 CR2=0000000000000000 CR3=0000000000800000 CR4=00000660 > > > DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 > > > DR6=00000000ffff0ff0 DR7=0000000000000400 > > > EFER=0000000000000500 > > > Code=00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 > > > 00 00 00 00 00 00 00 00 00 > > > > > > == > > > > > > And, there is only one line in debug.log: > > > > > > SecCoreStartupWithStack(0xFFFCC000, 0x820000) > > > > > > Without "-enable-kvm", the error became: > > > > > > == > > > > > > (qemu) qemu-system-x86_64: Trying to execute code outside RAM or ROM at 0x00000000000a0000 > > > This usually means one of the following happened: > > > > > > (1) You told QEMU to execute a kernel for the wrong machine type, and it crashed on startup (eg trying to run a raspberry pi kernel on > > a > > > versatilepb QEMU machine) > > > (2) You didn't give QEMU a kernel or BIOS filename at all, and QEMU executed a ROM full of no-op instructions until it fell off the end > > > (3) Your guest kernel has a bug and crashed by jumping off into nowhere > > > > > > This is almost always one of the first two, so check your command line and that you are using the right type of kernel for this machine. > > > If you think option (3) is likely then you can try debugging your guest with the -d debug options; in particular -d guest_errors will cause > > > the log to include a dump of the guest register state at this point. > > > > > > Execution cannot continue; stopping here. 
> > > > > > == > > > > > > Gary Lin > > > > > > > Cc: Yonghong Zhu <yonghong.zhu@intel.com> > > > > Cc: Liming Gao <liming.gao@intel.com> > > > > Cc: Bob C Feng <bob.c.feng@intel.com> > > > > Contributed-under: TianoCore Contribution Agreement 1.1 > > > > Signed-off-by: Jaben Carsey <jaben.carsey@intel.com> > > > > --- > > > > BaseTools/Source/Python/CommonDataClass/FdfClass.py | 73 - > > > > BaseTools/Source/Python/Eot/EotMain.py | 372 ++- > > > > BaseTools/Source/Python/GenFds/AprioriSection.py | 45 +- > > > > BaseTools/Source/Python/GenFds/Capsule.py | 26 +- > > > > BaseTools/Source/Python/GenFds/CapsuleData.py | 1 - > > > > BaseTools/Source/Python/GenFds/CompressSection.py | 4 +- > > > > BaseTools/Source/Python/GenFds/DataSection.py | 4 +- > > > > BaseTools/Source/Python/GenFds/DepexSection.py | 5 +- > > > > BaseTools/Source/Python/GenFds/EfiSection.py | 16 +- > > > > BaseTools/Source/Python/GenFds/FdfParser.py | 3014 ++++++++++---------- > > > > BaseTools/Source/Python/GenFds/Ffs.py | 82 +- > > > > BaseTools/Source/Python/GenFds/FfsFileStatement.py | 37 +- > > > > BaseTools/Source/Python/GenFds/FfsInfStatement.py | 10 +- > > > > BaseTools/Source/Python/GenFds/Fv.py | 47 +- > > > > BaseTools/Source/Python/GenFds/FvImageSection.py | 6 +- > > > > BaseTools/Source/Python/GenFds/GenFds.py | 160 +- > > > > BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py | 208 +- > > > > BaseTools/Source/Python/GenFds/GuidSection.py | 4 +- > > > > BaseTools/Source/Python/GenFds/OptionRom.py | 6 +- > > > > BaseTools/Source/Python/GenFds/Region.py | 12 +- > > > > BaseTools/Source/Python/GenFds/UiSection.py | 4 +- > > > > BaseTools/Source/Python/GenFds/VerSection.py | 16 +- > > > > BaseTools/Source/Python/GenFds/Vtf.py | 48 +- > > > > BaseTools/Source/Python/build/BuildReport.py | 5 +- > > > > 24 files changed, 2185 insertions(+), 2020 deletions(-) > > > > > > > > diff --git a/BaseTools/Source/Python/CommonDataClass/FdfClass.py b/BaseTools/Source/Python/CommonDataClass/FdfClass.py > > > > index 64b58c2078a3..3df82082457f 100644 > > > > --- a/BaseTools/Source/Python/CommonDataClass/FdfClass.py > > > > +++ b/BaseTools/Source/Python/CommonDataClass/FdfClass.py > > > > @@ -37,50 +37,6 @@ class FDClassObject: > > > > self.RegionList = [] > > > > self.vtfRawDict = {} > > > > > > > > -## FV data in FDF > > > > -# > > > > -# > > > > -class FvClassObject: > > > > - ## The constructor > > > > - # > > > > - # @param self The object pointer > > > > - # > > > > - def __init__(self): > > > > - self.UiFvName = None > > > > - self.CreateFileName = None > > > > - # 3-tuple list (blockSize, numBlocks, pcd) > > > > - self.BlockSizeList = [] > > > > - # DefineVarDict[var] = value > > > > - self.DefineVarDict = {} > > > > - # SetVarDict[var] = value > > > > - self.SetVarDict = {} > > > > - self.FvAlignment = None > > > > - # FvAttributeDict[attribute] = TRUE/FALSE (1/0) > > > > - self.FvAttributeDict = {} > > > > - self.FvNameGuid = None > > > > - self.FvNameString = None > > > > - self.AprioriSectionList = [] > > > > - self.FfsList = [] > > > > - self.BsBaseAddress = None > > > > - self.RtBaseAddress = None > > > > - > > > > -## Region data in FDF > > > > -# > > > > -# > > > > -class RegionClassObject: > > > > - ## The constructor > > > > - # > > > > - # @param self The object pointer > > > > - # > > > > - def __init__(self): > > > > - self.Offset = None # The begin position of the Region > > > > - self.Size = None # The Size of the Region > > > > - self.PcdOffset = None > > > > - self.PcdSize = None > > > > - 
self.SetVarDict = {} > > > > - self.RegionType = None > > > > - self.RegionDataList = [] > > > > - > > > > ## FFS data in FDF > > > > # > > > > # > > > > @@ -112,7 +68,6 @@ class FileStatementClassObject (FfsClassObject) : > > > > self.FvName = None > > > > self.FdName = None > > > > self.DefineVarDict = {} > > > > - self.AprioriSection = None > > > > self.KeepReloc = None > > > > > > > > ## INF statement data in FDF > > > > @@ -134,19 +89,6 @@ class FfsInfStatementClassObject(FfsClassObject): > > > > self.KeepReloc = None > > > > self.UseArch = None > > > > > > > > -## APRIORI section data in FDF > > > > -# > > > > -# > > > > -class AprioriSectionClassObject: > > > > - ## The constructor > > > > - # > > > > - # @param self The object pointer > > > > - # > > > > - def __init__(self): > > > > - # DefineVarDict[var] = value > > > > - self.DefineVarDict = {} > > > > - self.FfsList = [] > > > > - > > > > ## section data in FDF > > > > # > > > > # > > > > @@ -363,21 +305,6 @@ class CapsuleClassObject : > > > > self.CapsuleDataList = [] > > > > self.FmpPayloadList = [] > > > > > > > > -## VTF data in FDF > > > > -# > > > > -# > > > > -class VtfClassObject : > > > > - ## The constructor > > > > - # > > > > - # @param self The object pointer > > > > - # > > > > - def __init__(self): > > > > - self.KeyArch = None > > > > - self.ArchList = None > > > > - self.UiName = None > > > > - self.ResetBin = None > > > > - self.ComponentStatementList = [] > > > > - > > > > ## VTF component data in FDF > > > > # > > > > # > > > > diff --git a/BaseTools/Source/Python/Eot/EotMain.py b/BaseTools/Source/Python/Eot/EotMain.py > > > > index 49a1494126c4..2ad93cc6b4e3 100644 > > > > --- a/BaseTools/Source/Python/Eot/EotMain.py > > > > +++ b/BaseTools/Source/Python/Eot/EotMain.py > > > > @@ -36,10 +36,9 @@ import struct > > > > import uuid > > > > import copy > > > > import codecs > > > > +from GenFds.AprioriSection import DXE_APRIORI_GUID, PEI_APRIORI_GUID > > > > > > > > gGuidStringFormat = "%08X-%04X-%04X-%02X%02X-%02X%02X%02X%02X%02X%02X" > > > > -gPeiAprioriFileNameGuid = '1b45cc0a-156a-428a-af62-49864da0e6e6' > > > > -gAprioriGuid = 'fc510ee7-ffdc-11d4-bd41-0080c73c8881' > > > > gIndention = -4 > > > > > > > > class Image(array): > > > > @@ -402,10 +401,10 @@ class FirmwareVolume(Image): > > > > if Ffs.Type == 0x05: > > > > FfsDxeCoreGuid = FfsID > > > > continue > > > > - if Ffs.Guid.lower() == gPeiAprioriFileNameGuid: > > > > + if Ffs.Guid.lower() == PEI_APRIORI_GUID.lower(): > > > > FfsPeiPrioriGuid = FfsID > > > > continue > > > > - if Ffs.Guid.lower() == gAprioriGuid: > > > > + if Ffs.Guid.lower() == DXE_APRIORI_GUID.lower(): > > > > FfsDxePrioriGuid = FfsID > > > > continue > > > > > > > > @@ -1084,6 +1083,371 @@ class Ffs(Image): > > > > State = property(_GetState, _SetState) > > > > > > > > > > > > + _FfsGuid = "8C8CE578-8A3D-4F1C-9935-896185C32DD3" > > > > + > > > > + _GUID_ = struct.Struct("16x 1I2H8B") > > > > + _LENGTH_ = struct.Struct("16x 16x 1Q") > > > > + _SIG_ = struct.Struct("16x 16x 8x 1I") > > > > + _ATTR_ = struct.Struct("16x 16x 8x 4x 1I") > > > > + _HLEN_ = struct.Struct("16x 16x 8x 4x 4x 1H") > > > > + _CHECKSUM_ = struct.Struct("16x 16x 8x 4x 4x 2x 1H") > > > > + > > > > + def __init__(self, Name=''): > > > > + Image.__init__(self) > > > > + self.Name = Name > > > > + self.FfsDict = sdict() > > > > + self.OrderedFfsDict = sdict() > > > > + self.UnDispatchedFfsDict = sdict() > > > > + self.ProtocolList = sdict() > > > > + > > > > + def CheckArchProtocol(self): > > > > + for Item in 
EotGlobalData.gArchProtocolGuids: > > > > + if Item.lower() not in EotGlobalData.gProtocolList: > > > > + return False > > > > + return True > > > > + > > > > + def ParseDepex(self, Depex, Type): > > > > + List = None > > > > + if Type == 'Ppi': > > > > + List = EotGlobalData.gPpiList > > > > + if Type == 'Protocol': > > > > + List = EotGlobalData.gProtocolList > > > > + DepexStack = [] > > > > + DepexList = [] > > > > + DepexString = '' > > > > + FileDepex = None > > > > + CouldBeLoaded = True > > > > + for Index in range(0, len(Depex.Expression)): > > > > + Item = Depex.Expression[Index] > > > > + if Item == 0x00: > > > > + Index = Index + 1 > > > > + Guid = gGuidStringFormat % Depex.Expression[Index] > > > > + if Guid in self.OrderedFfsDict and Depex.Expression[Index + 1] == 0x08: > > > > + return (True, 'BEFORE %s' % Guid, [Guid, 'BEFORE']) > > > > + elif Item == 0x01: > > > > + Index = Index + 1 > > > > + Guid = gGuidStringFormat % Depex.Expression[Index] > > > > + if Guid in self.OrderedFfsDict and Depex.Expression[Index + 1] == 0x08: > > > > + return (True, 'AFTER %s' % Guid, [Guid, 'AFTER']) > > > > + elif Item == 0x02: > > > > + Index = Index + 1 > > > > + Guid = gGuidStringFormat % Depex.Expression[Index] > > > > + if Guid.lower() in List: > > > > + DepexStack.append(True) > > > > + DepexList.append(Guid) > > > > + else: > > > > + DepexStack.append(False) > > > > + DepexList.append(Guid) > > > > + continue > > > > + elif Item == 0x03 or Item == 0x04: > > > > + DepexStack.append(eval(str(DepexStack.pop()) + ' ' + Depex._OPCODE_STRING_[Item].lower() + ' ' + > > > str(DepexStack.pop()))) > > > > + DepexList.append(str(DepexList.pop()) + ' ' + Depex._OPCODE_STRING_[Item].upper() + ' ' + > > str(DepexList.pop())) > > > > + elif Item == 0x05: > > > > + DepexStack.append(eval(Depex._OPCODE_STRING_[Item].lower() + ' ' + str(DepexStack.pop()))) > > > > + DepexList.append(Depex._OPCODE_STRING_[Item].lower() + ' ' + str(DepexList.pop())) > > > > + elif Item == 0x06: > > > > + DepexStack.append(True) > > > > + DepexList.append('TRUE') > > > > + DepexString = DepexString + 'TRUE' + ' ' > > > > + elif Item == 0x07: > > > > + DepexStack.append(False) > > > > + DepexList.append('False') > > > > + DepexString = DepexString + 'FALSE' + ' ' > > > > + elif Item == 0x08: > > > > + if Index != len(Depex.Expression) - 1: > > > > + CouldBeLoaded = False > > > > + else: > > > > + CouldBeLoaded = DepexStack.pop() > > > > + else: > > > > + CouldBeLoaded = False > > > > + if DepexList != []: > > > > + DepexString = DepexList[0].strip() > > > > + return (CouldBeLoaded, DepexString, FileDepex) > > > > + > > > > + def Dispatch(self, Db = None): > > > > + if Db is None: > > > > + return False > > > > + self.UnDispatchedFfsDict = copy.copy(self.FfsDict) > > > > + # Find PeiCore, DexCore, PeiPriori, DxePriori first > > > > + FfsSecCoreGuid = None > > > > + FfsPeiCoreGuid = None > > > > + FfsDxeCoreGuid = None > > > > + FfsPeiPrioriGuid = None > > > > + FfsDxePrioriGuid = None > > > > + for FfsID in self.UnDispatchedFfsDict: > > > > + Ffs = self.UnDispatchedFfsDict[FfsID] > > > > + if Ffs.Type == 0x03: > > > > + FfsSecCoreGuid = FfsID > > > > + continue > > > > + if Ffs.Type == 0x04: > > > > + FfsPeiCoreGuid = FfsID > > > > + continue > > > > + if Ffs.Type == 0x05: > > > > + FfsDxeCoreGuid = FfsID > > > > + continue > > > > + if Ffs.Guid.lower() == PEI_APRIORI_GUID.lower(): > > > > + FfsPeiPrioriGuid = FfsID > > > > + continue > > > > + if Ffs.Guid.lower() == DXE_APRIORI_GUID.lower(): > > > > + FfsDxePrioriGuid = 
FfsID > > > > + continue > > > > + > > > > + # Parse SEC_CORE first > > > > + if FfsSecCoreGuid is not None: > > > > + self.OrderedFfsDict[FfsSecCoreGuid] = self.UnDispatchedFfsDict.pop(FfsSecCoreGuid) > > > > + self.LoadPpi(Db, FfsSecCoreGuid) > > > > + > > > > + # Parse PEI first > > > > + if FfsPeiCoreGuid is not None: > > > > + self.OrderedFfsDict[FfsPeiCoreGuid] = self.UnDispatchedFfsDict.pop(FfsPeiCoreGuid) > > > > + self.LoadPpi(Db, FfsPeiCoreGuid) > > > > + if FfsPeiPrioriGuid is not None: > > > > + # Load PEIM described in priori file > > > > + FfsPeiPriori = self.UnDispatchedFfsDict.pop(FfsPeiPrioriGuid) > > > > + if len(FfsPeiPriori.Sections) == 1: > > > > + Section = FfsPeiPriori.Sections.popitem()[1] > > > > + if Section.Type == 0x19: > > > > + GuidStruct = struct.Struct('1I2H8B') > > > > + Start = 4 > > > > + while len(Section) > Start: > > > > + Guid = GuidStruct.unpack_from(Section[Start : Start + 16]) > > > > + GuidString = gGuidStringFormat % Guid > > > > + Start = Start + 16 > > > > + if GuidString in self.UnDispatchedFfsDict: > > > > + self.OrderedFfsDict[GuidString] = self.UnDispatchedFfsDict.pop(GuidString) > > > > + self.LoadPpi(Db, GuidString) > > > > + > > > > + self.DisPatchPei(Db) > > > > + > > > > + # Parse DXE then > > > > + if FfsDxeCoreGuid is not None: > > > > + self.OrderedFfsDict[FfsDxeCoreGuid] = self.UnDispatchedFfsDict.pop(FfsDxeCoreGuid) > > > > + self.LoadProtocol(Db, FfsDxeCoreGuid) > > > > + if FfsDxePrioriGuid is not None: > > > > + # Load PEIM described in priori file > > > > + FfsDxePriori = self.UnDispatchedFfsDict.pop(FfsDxePrioriGuid) > > > > + if len(FfsDxePriori.Sections) == 1: > > > > + Section = FfsDxePriori.Sections.popitem()[1] > > > > + if Section.Type == 0x19: > > > > + GuidStruct = struct.Struct('1I2H8B') > > > > + Start = 4 > > > > + while len(Section) > Start: > > > > + Guid = GuidStruct.unpack_from(Section[Start : Start + 16]) > > > > + GuidString = gGuidStringFormat % Guid > > > > + Start = Start + 16 > > > > + if GuidString in self.UnDispatchedFfsDict: > > > > + self.OrderedFfsDict[GuidString] = self.UnDispatchedFfsDict.pop(GuidString) > > > > + self.LoadProtocol(Db, GuidString) > > > > + > > > > + self.DisPatchDxe(Db) > > > > + > > > > + def LoadProtocol(self, Db, ModuleGuid): > > > > + SqlCommand = """select GuidValue from Report > > > > + where SourceFileFullPath in > > > > + (select Value1 from Inf where BelongsToFile = > > > > + (select BelongsToFile from Inf > > > > + where Value1 = 'FILE_GUID' and Value2 like '%s' and Model = %s) > > > > + and Model = %s) > > > > + and ItemType = 'Protocol' and ItemMode = 'Produced'""" \ > > > > + % (ModuleGuid, 5001, 3007) > > > > + RecordSet = Db.TblReport.Exec(SqlCommand) > > > > + for Record in RecordSet: > > > > + SqlCommand = """select Value2 from Inf where BelongsToFile = > > > > + (select DISTINCT BelongsToFile from Inf > > > > + where Value1 = > > > > + (select SourceFileFullPath from Report > > > > + where GuidValue like '%s' and ItemMode = 'Callback')) > > > > + and Value1 = 'FILE_GUID'""" % Record[0] > > > > + CallBackSet = Db.TblReport.Exec(SqlCommand) > > > > + if CallBackSet != []: > > > > + EotGlobalData.gProtocolList[Record[0].lower()] = ModuleGuid > > > > + else: > > > > + EotGlobalData.gProtocolList[Record[0].lower()] = ModuleGuid > > > > + > > > > + def LoadPpi(self, Db, ModuleGuid): > > > > + SqlCommand = """select GuidValue from Report > > > > + where SourceFileFullPath in > > > > + (select Value1 from Inf where BelongsToFile = > > > > + (select BelongsToFile from Inf 
> > > > + where Value1 = 'FILE_GUID' and Value2 like '%s' and Model = %s) > > > > + and Model = %s) > > > > + and ItemType = 'Ppi' and ItemMode = 'Produced'""" \ > > > > + % (ModuleGuid, 5001, 3007) > > > > + RecordSet = Db.TblReport.Exec(SqlCommand) > > > > + for Record in RecordSet: > > > > + EotGlobalData.gPpiList[Record[0].lower()] = ModuleGuid > > > > + > > > > + def DisPatchDxe(self, Db): > > > > + IsInstalled = False > > > > + ScheduleList = sdict() > > > > + for FfsID in self.UnDispatchedFfsDict: > > > > + CouldBeLoaded = False > > > > + DepexString = '' > > > > + FileDepex = None > > > > + Ffs = self.UnDispatchedFfsDict[FfsID] > > > > + if Ffs.Type == 0x07: > > > > + # Get Depex > > > > + IsFoundDepex = False > > > > + for Section in Ffs.Sections.values(): > > > > + # Find Depex > > > > + if Section.Type == 0x13: > > > > + IsFoundDepex = True > > > > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(Section._SubImages[4], 'Protocol') > > > > + break > > > > + if Section.Type == 0x01: > > > > + CompressSections = Section._SubImages[4] > > > > + for CompressSection in CompressSections.Sections: > > > > + if CompressSection.Type == 0x13: > > > > + IsFoundDepex = True > > > > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(CompressSection._SubImages[4], > > > 'Protocol') > > > > + break > > > > + if CompressSection.Type == 0x02: > > > > + NewSections = CompressSection._SubImages[4] > > > > + for NewSection in NewSections.Sections: > > > > + if NewSection.Type == 0x13: > > > > + IsFoundDepex = True > > > > + CouldBeLoaded, DepexString, FileDepex = > > > self.ParseDepex(NewSection._SubImages[4], 'Protocol') > > > > + break > > > > + > > > > + # Not find Depex > > > > + if not IsFoundDepex: > > > > + CouldBeLoaded = self.CheckArchProtocol() > > > > + DepexString = '' > > > > + FileDepex = None > > > > + > > > > + # Append New Ffs > > > > + if CouldBeLoaded: > > > > + IsInstalled = True > > > > + NewFfs = self.UnDispatchedFfsDict.pop(FfsID) > > > > + NewFfs.Depex = DepexString > > > > + if FileDepex is not None: > > > > + ScheduleList.insert(FileDepex[1], FfsID, NewFfs, FileDepex[0]) > > > > + else: > > > > + ScheduleList[FfsID] = NewFfs > > > > + else: > > > > + self.UnDispatchedFfsDict[FfsID].Depex = DepexString > > > > + > > > > + for FfsID in ScheduleList: > > > > + NewFfs = ScheduleList.pop(FfsID) > > > > + FfsName = 'UnKnown' > > > > + self.OrderedFfsDict[FfsID] = NewFfs > > > > + self.LoadProtocol(Db, FfsID) > > > > + > > > > + SqlCommand = """select Value2 from Inf > > > > + where BelongsToFile = (select BelongsToFile from Inf where Value1 = 'FILE_GUID' and > > > lower(Value2) = lower('%s') and Model = %s) > > > > + and Model = %s and Value1='BASE_NAME'""" % (FfsID, 5001, 5001) > > > > + RecordSet = Db.TblReport.Exec(SqlCommand) > > > > + if RecordSet != []: > > > > + FfsName = RecordSet[0][0] > > > > + > > > > + if IsInstalled: > > > > + self.DisPatchDxe(Db) > > > > + > > > > + def DisPatchPei(self, Db): > > > > + IsInstalled = False > > > > + for FfsID in self.UnDispatchedFfsDict: > > > > + CouldBeLoaded = True > > > > + DepexString = '' > > > > + FileDepex = None > > > > + Ffs = self.UnDispatchedFfsDict[FfsID] > > > > + if Ffs.Type == 0x06 or Ffs.Type == 0x08: > > > > + # Get Depex > > > > + for Section in Ffs.Sections.values(): > > > > + if Section.Type == 0x1B: > > > > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(Section._SubImages[4], 'Ppi') > > > > + break > > > > + if Section.Type == 0x01: > > > > + CompressSections = Section._SubImages[4] > 
> > > + for CompressSection in CompressSections.Sections: > > > > + if CompressSection.Type == 0x1B: > > > > + CouldBeLoaded, DepexString, FileDepex = self.ParseDepex(CompressSection._SubImages[4], > > > 'Ppi') > > > > + break > > > > + if CompressSection.Type == 0x02: > > > > + NewSections = CompressSection._SubImages[4] > > > > + for NewSection in NewSections.Sections: > > > > + if NewSection.Type == 0x1B: > > > > + CouldBeLoaded, DepexString, FileDepex = > > > self.ParseDepex(NewSection._SubImages[4], 'Ppi') > > > > + break > > > > + > > > > + # Append New Ffs > > > > + if CouldBeLoaded: > > > > + IsInstalled = True > > > > + NewFfs = self.UnDispatchedFfsDict.pop(FfsID) > > > > + NewFfs.Depex = DepexString > > > > + self.OrderedFfsDict[FfsID] = NewFfs > > > > + self.LoadPpi(Db, FfsID) > > > > + else: > > > > + self.UnDispatchedFfsDict[FfsID].Depex = DepexString > > > > + > > > > + if IsInstalled: > > > > + self.DisPatchPei(Db) > > > > + > > > > + > > > > + def __str__(self): > > > > + global gIndention > > > > + gIndention += 4 > > > > + FvInfo = '\n' + ' ' * gIndention > > > > + FvInfo += "[FV:%s] file_system=%s size=%x checksum=%s\n" % (self.Name, self.FileSystemGuid, self.Size, self.Checksum) > > > > + FfsInfo = "\n".join([str(self.FfsDict[FfsId]) for FfsId in self.FfsDict]) > > > > + gIndention -= 4 > > > > + return FvInfo + FfsInfo > > > > + > > > > + def _Unpack(self): > > > > + Size = self._LENGTH_.unpack_from(self._BUF_, self._OFF_)[0] > > > > + self.empty() > > > > + self.extend(self._BUF_[self._OFF_:self._OFF_+Size]) > > > > + > > > > + # traverse the FFS > > > > + EndOfFv = Size > > > > + FfsStartAddress = self.HeaderSize > > > > + LastFfsObj = None > > > > + while FfsStartAddress < EndOfFv: > > > > + FfsObj = Ffs() > > > > + FfsObj.frombuffer(self, FfsStartAddress) > > > > + FfsId = repr(FfsObj) > > > > + if ((self.Attributes & 0x00000800) != 0 and len(FfsObj) == 0xFFFFFF) \ > > > > + or ((self.Attributes & 0x00000800) == 0 and len(FfsObj) == 0): > > > > + if LastFfsObj is not None: > > > > + LastFfsObj.FreeSpace = EndOfFv - LastFfsObj._OFF_ - len(LastFfsObj) > > > > + else: > > > > + if FfsId in self.FfsDict: > > > > + EdkLogger.error("FV", 0, "Duplicate GUID in FFS", > > > > + ExtraData="\t%s @ %s\n\t%s @ %s" \ > > > > + % (FfsObj.Guid, FfsObj.Offset, > > > > + self.FfsDict[FfsId].Guid, self.FfsDict[FfsId].Offset)) > > > > + self.FfsDict[FfsId] = FfsObj > > > > + if LastFfsObj is not None: > > > > + LastFfsObj.FreeSpace = FfsStartAddress - LastFfsObj._OFF_ - len(LastFfsObj) > > > > + > > > > + FfsStartAddress += len(FfsObj) > > > > + # > > > > + # align to next 8-byte aligned address: A = (A + 8 - 1) & (~(8 - 1)) > > > > + # The next FFS must be at the latest next 8-byte aligned address > > > > + # > > > > + FfsStartAddress = (FfsStartAddress + 7) & (~7) > > > > + LastFfsObj = FfsObj > > > > + > > > > + def _GetAttributes(self): > > > > + return self.GetField(self._ATTR_, 0)[0] > > > > + > > > > + def _GetSize(self): > > > > + return self.GetField(self._LENGTH_, 0)[0] > > > > + > > > > + def _GetChecksum(self): > > > > + return self.GetField(self._CHECKSUM_, 0)[0] > > > > + > > > > + def _GetHeaderLength(self): > > > > + return self.GetField(self._HLEN_, 0)[0] > > > > + > > > > + def _GetFileSystemGuid(self): > > > > + return gGuidStringFormat % self.GetField(self._GUID_, 0) > > > > + > > > > + Attributes = property(_GetAttributes) > > > > + Size = property(_GetSize) > > > > + Checksum = property(_GetChecksum) > > > > + HeaderSize = property(_GetHeaderLength) > > > > + 
FileSystemGuid = property(_GetFileSystemGuid) > > > > + > > > > ## MultipleFv() class > > > > # > > > > # A class for Multiple FV > > > > diff --git a/BaseTools/Source/Python/GenFds/AprioriSection.py b/BaseTools/Source/Python/GenFds/AprioriSection.py > > > > index 7196f7f2c753..e7b856a1150a 100644 > > > > --- a/BaseTools/Source/Python/GenFds/AprioriSection.py > > > > +++ b/BaseTools/Source/Python/GenFds/AprioriSection.py > > > > @@ -16,28 +16,31 @@ > > > > # Import Modules > > > > # > > > > from __future__ import absolute_import > > > > -from struct import * > > > > +from struct import pack > > > > import Common.LongFilePathOs as os > > > > from io import BytesIO > > > > -from . import FfsFileStatement > > > > +from .FfsFileStatement import FileStatement > > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > -from CommonDataClass.FdfClass import AprioriSectionClassObject > > > > -from Common.StringUtils import * > > > > +from Common.StringUtils import NormPath > > > > from Common.Misc import SaveFileOnChange, PathClass > > > > -from Common import EdkLogger > > > > -from Common.BuildToolError import * > > > > +from Common.EdkLogger import error as EdkLoggerError > > > > +from Common.BuildToolError import RESOURCE_NOT_AVAILABLE > > > > from Common.DataType import TAB_COMMON > > > > > > > > +DXE_APRIORI_GUID = "FC510EE7-FFDC-11D4-BD41-0080C73C8881" > > > > +PEI_APRIORI_GUID = "1B45CC0A-156A-428A-AF62-49864DA0E6E6" > > > > + > > > > ## process APRIORI file data and generate PEI/DXE APRIORI file > > > > # > > > > # > > > > -class AprioriSection (AprioriSectionClassObject): > > > > +class AprioriSection (object): > > > > ## The constructor > > > > # > > > > # @param self The object pointer > > > > # > > > > def __init__(self): > > > > - AprioriSectionClassObject.__init__(self) > > > > + self.DefineVarDict = {} > > > > + self.FfsList = [] > > > > self.AprioriType = "" > > > > > > > > ## GenFfs() method > > > > @@ -50,16 +53,16 @@ class AprioriSection (AprioriSectionClassObject): > > > > # @retval string Generated file name > > > > # > > > > def GenFfs (self, FvName, Dict = {}, IsMakefile = False): > > > > - DXE_GUID = "FC510EE7-FFDC-11D4-BD41-0080C73C8881" > > > > - PEI_GUID = "1B45CC0A-156A-428A-AF62-49864DA0E6E6" > > > > Buffer = BytesIO('') > > > > - AprioriFileGuid = DXE_GUID > > > > if self.AprioriType == "PEI": > > > > - AprioriFileGuid = PEI_GUID > > > > + AprioriFileGuid = PEI_APRIORI_GUID > > > > + else: > > > > + AprioriFileGuid = DXE_APRIORI_GUID > > > > + > > > > OutputAprFilePath = os.path.join (GenFdsGlobalVariable.WorkSpaceDir, \ > > > > GenFdsGlobalVariable.FfsDir,\ > > > > AprioriFileGuid + FvName) > > > > - if not os.path.exists(OutputAprFilePath) : > > > > + if not os.path.exists(OutputAprFilePath): > > > > os.makedirs(OutputAprFilePath) > > > > > > > > OutputAprFileName = os.path.join( OutputAprFilePath, \ > > > > @@ -69,33 +72,30 @@ class AprioriSection (AprioriSectionClassObject): > > > > > > > > Dict.update(self.DefineVarDict) > > > > InfFileName = None > > > > - for FfsObj in self.FfsList : > > > > + for FfsObj in self.FfsList: > > > > Guid = "" > > > > - if isinstance(FfsObj, FfsFileStatement.FileStatement): > > > > + if isinstance(FfsObj, FileStatement): > > > > Guid = FfsObj.NameGuid > > > > else: > > > > InfFileName = NormPath(FfsObj.InfFileName) > > > > Arch = FfsObj.GetCurrentArch() > > > > > > > > - if Arch is not None: > > > > + if Arch: > > > > Dict['$(ARCH)'] = Arch > > > > InfFileName = GenFdsGlobalVariable.MacroExtend(InfFileName, Dict, Arch) > 
> > > > > > > - if Arch is not None: > > > > + if Arch: > > > > Inf = GenFdsGlobalVariable.WorkSpace.BuildObject[PathClass(InfFileName, > > > GenFdsGlobalVariable.WorkSpaceDir), Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] > > > > Guid = Inf.Guid > > > > - > > > > else: > > > > Inf = GenFdsGlobalVariable.WorkSpace.BuildObject[PathClass(InfFileName, > > > GenFdsGlobalVariable.WorkSpaceDir), TAB_COMMON, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] > > > > Guid = Inf.Guid > > > > > > > > - self.BinFileList = Inf.Module.Binaries > > > > - if self.BinFileList == []: > > > > - EdkLogger.error("GenFds", RESOURCE_NOT_AVAILABLE, > > > > + if not Inf.Module.Binaries: > > > > + EdkLoggerError("GenFds", RESOURCE_NOT_AVAILABLE, > > > > "INF %s not found in build ARCH %s!" \ > > > > % (InfFileName, GenFdsGlobalVariable.ArchList)) > > > > > > > > - > > > > GuidPart = Guid.split('-') > > > > Buffer.write(pack('I', long(GuidPart[0], 16))) > > > > Buffer.write(pack('H', int(GuidPart[1], 16))) > > > > @@ -123,4 +123,3 @@ class AprioriSection (AprioriSectionClassObject): > > > > 'EFI_FV_FILETYPE_FREEFORM', AprioriFileGuid, MakefilePath=MakefilePath) > > > > > > > > return AprFfsFileName > > > > - > > > > diff --git a/BaseTools/Source/Python/GenFds/Capsule.py b/BaseTools/Source/Python/GenFds/Capsule.py > > > > index baa6e1dfa2ab..df29c40dbd20 100644 > > > > --- a/BaseTools/Source/Python/GenFds/Capsule.py > > > > +++ b/BaseTools/Source/Python/GenFds/Capsule.py > > > > @@ -16,22 +16,18 @@ > > > > # Import Modules > > > > # > > > > from __future__ import absolute_import > > > > -from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > -from .GenFdsGlobalVariable import FindExtendTool > > > > +from .GenFdsGlobalVariable import GenFdsGlobalVariable, FindExtendTool > > > > from CommonDataClass.FdfClass import CapsuleClassObject > > > > import Common.LongFilePathOs as os > > > > -import subprocess > > > > from io import BytesIO > > > > -from Common.Misc import SaveFileOnChange > > > > -from Common.Misc import PackRegistryFormatGuid > > > > +from Common.Misc import SaveFileOnChange, PackRegistryFormatGuid > > > > import uuid > > > > from struct import pack > > > > from Common import EdkLogger > > > > -from Common.BuildToolError import * > > > > +from Common.BuildToolError import GENFDS_ERROR > > > > +from Common.DataType import TAB_LINE_BREAK > > > > > > > > - > > > > -T_CHAR_LF = '\n' > > > > -WIN_CERT_REVISION = 0x0200 > > > > +WIN_CERT_REVISION = 0x0200 > > > > WIN_CERT_TYPE_EFI_GUID = 0x0EF1 > > > > EFI_CERT_TYPE_PKCS7_GUID = uuid.UUID('{4aafd29d-68df-49ee-8aa9-347d375665a7}') > > > > EFI_CERT_TYPE_RSA2048_SHA256_GUID = uuid.UUID('{a7717414-c616-4977-9420-844712a735bf}') > > > > @@ -39,7 +35,7 @@ EFI_CERT_TYPE_RSA2048_SHA256_GUID = uuid.UUID('{a7717414-c616-4977-9420-844712a7 > > > > ## create inf file describes what goes into capsule and call GenFv to generate capsule > > > > # > > > > # > > > > -class Capsule (CapsuleClassObject) : > > > > +class Capsule (CapsuleClassObject): > > > > ## The constructor > > > > # > > > > # @param self The object pointer > > > > @@ -210,16 +206,16 @@ class Capsule (CapsuleClassObject) : > > > > return self.GenFmpCapsule() > > > > > > > > CapInfFile = self.GenCapInf() > > > > - CapInfFile.writelines("[files]" + T_CHAR_LF) > > > > + CapInfFile.writelines("[files]" + TAB_LINE_BREAK) > > > > CapFileList = [] > > > > - for CapsuleDataObj in self.CapsuleDataList : > > > > + for CapsuleDataObj in self.CapsuleDataList: > > > > 
CapsuleDataObj.CapsuleName = self.CapsuleName > > > > FileName = CapsuleDataObj.GenCapsuleSubItem() > > > > CapsuleDataObj.CapsuleName = None > > > > CapFileList.append(FileName) > > > > CapInfFile.writelines("EFI_FILE_NAME = " + \ > > > > FileName + \ > > > > - T_CHAR_LF) > > > > + TAB_LINE_BREAK) > > > > SaveFileOnChange(self.CapInfFileName, CapInfFile.getvalue(), False) > > > > CapInfFile.close() > > > > # > > > > @@ -249,13 +245,13 @@ class Capsule (CapsuleClassObject) : > > > > self.UiCapsuleName + "_Cap" + '.inf') > > > > CapInfFile = BytesIO() #open (self.CapInfFileName , 'w+') > > > > > > > > - CapInfFile.writelines("[options]" + T_CHAR_LF) > > > > + CapInfFile.writelines("[options]" + TAB_LINE_BREAK) > > > > > > > > for Item in self.TokensDict: > > > > CapInfFile.writelines("EFI_" + \ > > > > Item + \ > > > > ' = ' + \ > > > > self.TokensDict[Item] + \ > > > > - T_CHAR_LF) > > > > + TAB_LINE_BREAK) > > > > > > > > return CapInfFile > > > > diff --git a/BaseTools/Source/Python/GenFds/CapsuleData.py b/BaseTools/Source/Python/GenFds/CapsuleData.py > > > > index 0caba8983d2e..6071e9f4a545 100644 > > > > --- a/BaseTools/Source/Python/GenFds/CapsuleData.py > > > > +++ b/BaseTools/Source/Python/GenFds/CapsuleData.py > > > > @@ -16,7 +16,6 @@ > > > > # Import Modules > > > > # > > > > from __future__ import absolute_import > > > > -from . import Ffs > > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > from io import BytesIO > > > > from struct import pack > > > > diff --git a/BaseTools/Source/Python/GenFds/CompressSection.py b/BaseTools/Source/Python/GenFds/CompressSection.py > > > > index aaaabf84dca8..51f6791fab19 100644 > > > > --- a/BaseTools/Source/Python/GenFds/CompressSection.py > > > > +++ b/BaseTools/Source/Python/GenFds/CompressSection.py > > > > @@ -16,7 +16,7 @@ > > > > # Import Modules > > > > # > > > > from __future__ import absolute_import > > > > -from .Ffs import Ffs > > > > +from .Ffs import SectionSuffix > > > > from . import Section > > > > import subprocess > > > > import Common.LongFilePathOs as os > > > > @@ -86,7 +86,7 @@ class CompressSection (CompressSectionClassObject) : > > > > ModuleName + \ > > > > SUP_MODULE_SEC + \ > > > > SecNum + \ > > > > - Ffs.SectionSuffix['COMPRESS'] > > > > + SectionSuffix['COMPRESS'] > > > > OutputFile = os.path.normpath(OutputFile) > > > > DummyFile = OutputFile + '.dummy' > > > > GenFdsGlobalVariable.GenerateSection(DummyFile, SectFiles, InputAlign=SectAlign, IsMakefile=IsMakefile) > > > > diff --git a/BaseTools/Source/Python/GenFds/DataSection.py b/BaseTools/Source/Python/GenFds/DataSection.py > > > > index a6387b07c582..28f9b931ca4b 100644 > > > > --- a/BaseTools/Source/Python/GenFds/DataSection.py > > > > +++ b/BaseTools/Source/Python/GenFds/DataSection.py > > > > @@ -19,7 +19,7 @@ from __future__ import absolute_import > > > > from . 
import Section > > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > import subprocess > > > > -from .Ffs import Ffs > > > > +from .Ffs import SectionSuffix > > > > import Common.LongFilePathOs as os > > > > from CommonDataClass.FdfClass import DataSectionClassObject > > > > from Common.Misc import PeImageClass > > > > @@ -121,7 +121,7 @@ class DataSection (DataSectionClassObject): > > > > ) > > > > self.SectFileName = TeFile > > > > > > > > - OutputFile = os.path.join (OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get(self.SecType)) > > > > + OutputFile = os.path.join (OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get(self.SecType)) > > > > OutputFile = os.path.normpath(OutputFile) > > > > GenFdsGlobalVariable.GenerateSection(OutputFile, [self.SectFileName], Section.Section.SectionType.get(self.SecType), > > > IsMakefile = IsMakefile) > > > > FileList = [OutputFile] > > > > diff --git a/BaseTools/Source/Python/GenFds/DepexSection.py b/BaseTools/Source/Python/GenFds/DepexSection.py > > > > index b2d123bfc045..78efc18816e8 100644 > > > > --- a/BaseTools/Source/Python/GenFds/DepexSection.py > > > > +++ b/BaseTools/Source/Python/GenFds/DepexSection.py > > > > @@ -18,8 +18,6 @@ > > > > from __future__ import absolute_import > > > > from . import Section > > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > -import subprocess > > > > -from .Ffs import Ffs > > > > import Common.LongFilePathOs as os > > > > from CommonDataClass.FdfClass import DepexSectionClassObject > > > > from AutoGen.GenDepex import DependencyExpression > > > > @@ -46,9 +44,8 @@ class DepexSection (DepexSectionClassObject): > > > > GenFdsGlobalVariable.TargetName, > > > > GenFdsGlobalVariable.ToolChainTag) > > > > for Inf in GenFdsGlobalVariable.FdfParser.Profile.InfList: > > > > - ModuleFile = PathClass(Inf, GenFdsGlobalVariable.WorkSpaceDir) > > > > ModuleData = GenFdsGlobalVariable.WorkSpace.BuildObject[ > > > > - ModuleFile, > > > > + PathClass(Inf, GenFdsGlobalVariable.WorkSpaceDir), > > > > Arch, > > > > GenFdsGlobalVariable.TargetName, > > > > GenFdsGlobalVariable.ToolChainTag > > > > diff --git a/BaseTools/Source/Python/GenFds/EfiSection.py b/BaseTools/Source/Python/GenFds/EfiSection.py > > > > index 623b77d27427..f8573b5c7d1b 100644 > > > > --- a/BaseTools/Source/Python/GenFds/EfiSection.py > > > > +++ b/BaseTools/Source/Python/GenFds/EfiSection.py > > > > @@ -20,7 +20,7 @@ from struct import * > > > > from . 
import Section > > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > import subprocess > > > > -from .Ffs import Ffs > > > > +from .Ffs import SectionSuffix > > > > import Common.LongFilePathOs as os > > > > from CommonDataClass.FdfClass import EfiSectionClassObject > > > > from Common import EdkLogger > > > > @@ -124,7 +124,7 @@ class EfiSection (EfiSectionClassObject): > > > > BuildNumTuple = tuple() > > > > > > > > Num = SecNum > > > > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > > > Ffs.SectionSuffix.get(SectionType)) > > > > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > > > SectionSuffix.get(SectionType)) > > > > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', > > > > #Ui=StringData, > > > > Ver=BuildNum, > > > > @@ -135,7 +135,7 @@ class EfiSection (EfiSectionClassObject): > > > > for File in FileList: > > > > Index = Index + 1 > > > > Num = '%s.%d' %(SecNum, Index) > > > > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > > > Ffs.SectionSuffix.get(SectionType)) > > > > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > > > SectionSuffix.get(SectionType)) > > > > f = open(File, 'r') > > > > VerString = f.read() > > > > f.close() > > > > @@ -164,7 +164,7 @@ class EfiSection (EfiSectionClassObject): > > > > else: > > > > EdkLogger.error("GenFds", GENFDS_ERROR, "File: %s miss Version Section value" %InfFileName) > > > > Num = SecNum > > > > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > > > Ffs.SectionSuffix.get(SectionType)) > > > > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > > > SectionSuffix.get(SectionType)) > > > > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', > > > > #Ui=VerString, > > > > Ver=BuildNum, > > > > @@ -185,7 +185,7 @@ class EfiSection (EfiSectionClassObject): > > > > Num = SecNum > > > > if IsMakefile and StringData == ModuleNameStr: > > > > StringData = "$(MODULE_NAME)" > > > > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > > > Ffs.SectionSuffix.get(SectionType)) > > > > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > > > SectionSuffix.get(SectionType)) > > > > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_USER_INTERFACE', > > > > Ui=StringData, IsMakefile=IsMakefile) > > > > OutputFileList.append(OutputFile) > > > > @@ -194,7 +194,7 @@ class EfiSection (EfiSectionClassObject): > > > > for File in FileList: > > > > Index = Index + 1 > > > > Num = '%s.%d' %(SecNum, Index) > > > > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > > > Ffs.SectionSuffix.get(SectionType)) > > > > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > > > SectionSuffix.get(SectionType)) > > > > f = open(File, 'r') > > > > UiString = f.read() > > > > f.close() > > > > @@ -218,7 +218,7 @@ class EfiSection (EfiSectionClassObject): > > > > Num = SecNum > > > > if IsMakefile and StringData == ModuleNameStr: > > > > StringData = "$(MODULE_NAME)" > > > > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > > > Ffs.SectionSuffix.get(SectionType)) > > > > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + str(Num) + > > > SectionSuffix.get(SectionType)) > > > > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 
'EFI_SECTION_USER_INTERFACE', > > > > Ui=StringData, IsMakefile=IsMakefile) > > > > OutputFileList.append(OutputFile) > > > > @@ -239,7 +239,7 @@ class EfiSection (EfiSectionClassObject): > > > > """ Copy Map file to FFS output path """ > > > > Index = Index + 1 > > > > Num = '%s.%d' %(SecNum, Index) > > > > - OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + Num + > > > Ffs.SectionSuffix.get(SectionType)) > > > > + OutputFile = os.path.join( OutputPath, ModuleName + SUP_MODULE_SEC + Num + > > > SectionSuffix.get(SectionType)) > > > > File = GenFdsGlobalVariable.MacroExtend(File, Dict) > > > > > > > > #Get PE Section alignment when align is set to AUTO > > > > diff --git a/BaseTools/Source/Python/GenFds/FdfParser.py b/BaseTools/Source/Python/GenFds/FdfParser.py > > > > index 63687e98bb31..62bf9f14012a 100644 > > > > --- a/BaseTools/Source/Python/GenFds/FdfParser.py > > > > +++ b/BaseTools/Source/Python/GenFds/FdfParser.py > > > > @@ -18,79 +18,66 @@ > > > > # > > > > from __future__ import print_function > > > > from __future__ import absolute_import > > > > -import re > > > > +from re import compile, DOTALL > > > > +from string import hexdigits > > > > +from uuid import UUID > > > > > > > > -from . import Fd > > > > -from . import Region > > > > -from . import Fv > > > > -from . import AprioriSection > > > > -from . import FfsInfStatement > > > > -from . import FfsFileStatement > > > > -from . import VerSection > > > > -from . import UiSection > > > > -from . import FvImageSection > > > > -from . import DataSection > > > > -from . import DepexSection > > > > -from . import CompressSection > > > > -from . import GuidSection > > > > -from . import Capsule > > > > -from . import CapsuleData > > > > -from . import Rule > > > > -from . import RuleComplexFile > > > > -from . import RuleSimpleFile > > > > -from . import EfiSection > > > > -from . import Vtf > > > > -from . import ComponentStatement > > > > -from . import OptionRom > > > > -from . import OptRomInfStatement > > > > -from . 
import OptRomFileStatement > > > > -import string > > > > - > > > > -from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > from Common.BuildToolError import * > > > > from Common import EdkLogger > > > > -from Common.Misc import PathClass > > > > -from Common.StringUtils import NormPath > > > > -import Common.GlobalData as GlobalData > > > > -from Common.Expression import * > > > > +from Common.Misc import PathClass, tdict > > > > +from Common.StringUtils import NormPath, ReplaceMacro > > > > from Common import GlobalData > > > > +from Common.Expression import * > > > > from Common.DataType import * > > > > -from Common.StringUtils import ReplaceMacro > > > > -import uuid > > > > -from Common.Misc import tdict > > > > from Common.MultipleWorkspace import MultipleWorkspace as mws > > > > import Common.LongFilePathOs as os > > > > from Common.LongFilePathSupport import OpenLongFilePath as open > > > > -from .Capsule import EFI_CERT_TYPE_PKCS7_GUID > > > > -from .Capsule import EFI_CERT_TYPE_RSA2048_SHA256_GUID > > > > from Common.RangeExpression import RangeExpression > > > > from collections import OrderedDict > > > > > > > > -##define T_CHAR_SPACE ' ' > > > > -##define T_CHAR_NULL '\0' > > > > -##define T_CHAR_CR '\r' > > > > -##define T_CHAR_TAB '\t' > > > > -##define T_CHAR_LF '\n' > > > > -##define T_CHAR_SLASH '/' > > > > -##define T_CHAR_BACKSLASH '\\' > > > > -##define T_CHAR_DOUBLE_QUOTE '\"' > > > > -##define T_CHAR_SINGLE_QUOTE '\'' > > > > -##define T_CHAR_STAR '*' > > > > -##define T_CHAR_HASH '#' > > > > +from .Fd import FD > > > > +from .Region import Region > > > > +from .Fv import FV > > > > +from .AprioriSection import AprioriSection > > > > +from .FfsInfStatement import FfsInfStatement > > > > +from .FfsFileStatement import FileStatement > > > > +from .VerSection import VerSection > > > > +from .UiSection import UiSection > > > > +from .FvImageSection import FvImageSection > > > > +from .DataSection import DataSection > > > > +from .DepexSection import DepexSection > > > > +from .CompressSection import CompressSection > > > > +from .GuidSection import GuidSection > > > > +from .Capsule import EFI_CERT_TYPE_PKCS7_GUID, EFI_CERT_TYPE_RSA2048_SHA256_GUID, Capsule > > > > +from .CapsuleData import CapsuleFfs, CapsulePayload, CapsuleFv, CapsuleFd, CapsuleAnyFile, CapsuleAfile > > > > +from .RuleComplexFile import RuleComplexFile > > > > +from .RuleSimpleFile import RuleSimpleFile > > > > +from .EfiSection import EfiSection > > > > +from .Vtf import Vtf > > > > +from .ComponentStatement import ComponentStatement > > > > +from .OptionRom import OPTIONROM > > > > +from .OptRomInfStatement import OptRomInfStatement, OverrideAttribs > > > > +from .OptRomFileStatement import OptRomFileStatement > > > > +from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > > > > > -(T_CHAR_SPACE, T_CHAR_NULL, T_CHAR_CR, T_CHAR_TAB, T_CHAR_LF, T_CHAR_SLASH, \ > > > > -T_CHAR_BACKSLASH, T_CHAR_DOUBLE_QUOTE, T_CHAR_SINGLE_QUOTE, T_CHAR_STAR, T_CHAR_HASH) = \ > > > > -(' ', '\0', '\r', '\t', '\n', '/', '\\', '\"', '\'', '*', '#') > > > > +T_CHAR_CR = '\r' > > > > +T_CHAR_TAB = '\t' > > > > +T_CHAR_DOUBLE_QUOTE = '\"' > > > > +T_CHAR_SINGLE_QUOTE = '\'' > > > > +T_CHAR_STAR = '*' > > > > > > > > -SEPERATOR_TUPLE = ('=', '|', ',', '{', '}') > > > > +SEPARATORS = {TAB_EQUAL_SPLIT, TAB_VALUE_SPLIT, TAB_COMMA_SPLIT, '{', '}'} > > > > +ALIGNMENTS = {"Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > > > + "256K", "512K", "1M", "2M", "4M", "8M", "16M"} > > > > 
+ALIGNMENT_NOAUTO = ALIGNMENTS - {"Auto"} > > > > > > > > -RegionSizePattern = re.compile("\s*(?P<base>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<size>(?:0x|0X)?[a-fA-F0-9]+)\s*") > > > > -RegionSizeGuidPattern = re.compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*\|\s*(?P<size>\w+\.\w+[\.\w\[\]]*)\s*") > > > > -RegionOffsetPcdPattern = re.compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*$") > > > > -ShortcutPcdPattern = re.compile("\s*\w+\s*=\s*(?P<value>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<name>\w+\.\w+)\s*") > > > > -BaseAddrValuePattern = re.compile('^0[xX][0-9a-fA-F]+') > > > > -FileExtensionPattern = re.compile(r'([a-zA-Z][a-zA-Z0-9]*)') > > > > -TokenFindPattern = > > > re.compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\)|\*)') > > > > +RegionSizePattern = compile("\s*(?P<base>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<size>(?:0x|0X)?[a-fA-F0-9]+)\s*") > > > > +RegionSizeGuidPattern = compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*\|\s*(?P<size>\w+\.\w+[\.\w\[\]]*)\s*") > > > > +RegionOffsetPcdPattern = compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*$") > > > > +ShortcutPcdPattern = compile("\s*\w+\s*=\s*(?P<value>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<name>\w+\.\w+)\s*") > > > > +BaseAddrValuePattern = compile('^0[xX][0-9a-fA-F]+') > > > > +FileExtensionPattern = compile(r'([a-zA-Z][a-zA-Z0-9]*)') > > > > +TokenFindPattern = > > > compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\)|\*)') > > > > AllIncludeFileList = [] > > > > > > > > # Get the closest parent > > > > @@ -109,7 +96,6 @@ def IsValidInclude (File, Line): > > > > return True > > > > > > > > def GetRealFileLine (File, Line): > > > > - > > > > InsertedLines = 0 > > > > for Profile in AllIncludeFileList: > > > > if Profile.IsLineInFile(Line): > > > > @@ -121,7 +107,7 @@ def GetRealFileLine (File, Line): > > > > > > > > ## The exception class that used to report error messages when parsing FDF > > > > # > > > > -# Currently the "ToolName" is set to be "FDF Parser". > > > > +# Currently the "ToolName" is set to be "FdfParser". > > > > # > > > > class Warning (Exception): > > > > ## The constructor > > > > @@ -132,7 +118,6 @@ class Warning (Exception): > > > > # @param Line The Line number that error occurs > > > > # > > > > def __init__(self, Str, File = None, Line = None): > > > > - > > > > FileLineTuple = GetRealFileLine(File, Line) > > > > self.FileName = FileLineTuple[0] > > > > self.LineNumber = FileLineTuple[1] > > > > @@ -147,7 +132,7 @@ class Warning (Exception): > > > > # > > > > # May raise Exception when opening file. 
> > > > # > > > > -class IncludeFileProfile : > > > > +class IncludeFileProfile: > > > > ## The constructor > > > > # > > > > # @param self The object pointer > > > > @@ -157,16 +142,11 @@ class IncludeFileProfile : > > > > self.FileName = FileName > > > > self.FileLinesList = [] > > > > try: > > > > - fsock = open(FileName, "rb", 0) > > > > - try: > > > > + with open(FileName, "rb", 0) as fsock: > > > > self.FileLinesList = fsock.readlines() > > > > for index, line in enumerate(self.FileLinesList): > > > > - if not line.endswith('\n'): > > > > - self.FileLinesList[index] += '\n' > > > > - > > > > - finally: > > > > - fsock.close() > > > > - > > > > + if not line.endswith(TAB_LINE_BREAK): > > > > + self.FileLinesList[index] += TAB_LINE_BREAK > > > > except: > > > > EdkLogger.error("FdfParser", FILE_OPEN_FAILURE, ExtraData=FileName) > > > > > > > > @@ -209,7 +189,7 @@ class IncludeFileProfile : > > > > # > > > > # May raise Exception when opening file. > > > > # > > > > -class FileProfile : > > > > +class FileProfile: > > > > ## The constructor > > > > # > > > > # @param self The object pointer > > > > @@ -218,11 +198,8 @@ class FileProfile : > > > > def __init__(self, FileName): > > > > self.FileLinesList = [] > > > > try: > > > > - fsock = open(FileName, "rb", 0) > > > > - try: > > > > + with open(FileName, "rb", 0) as fsock: > > > > self.FileLinesList = fsock.readlines() > > > > - finally: > > > > - fsock.close() > > > > > > > > except: > > > > EdkLogger.error("FdfParser", FILE_OPEN_FAILURE, ExtraData=FileName) > > > > @@ -266,41 +243,37 @@ class FdfParser: > > > > self.CurrentOffsetWithinLine = 0 > > > > self.CurrentFdName = None > > > > self.CurrentFvName = None > > > > - self.__Token = "" > > > > - self.__SkippedChars = "" > > > > + self._Token = "" > > > > + self._SkippedChars = "" > > > > GlobalData.gFdfParser = self > > > > > > > > # Used to section info > > > > - self.__CurSection = [] > > > > + self._CurSection = [] > > > > # Key: [section name, UI name, arch] > > > > - # Value: {MACRO_NAME : MACRO_VALUE} > > > > - self.__MacroDict = tdict(True, 3) > > > > - self.__PcdDict = OrderedDict() > > > > + # Value: {MACRO_NAME: MACRO_VALUE} > > > > + self._MacroDict = tdict(True, 3) > > > > + self._PcdDict = OrderedDict() > > > > > > > > - self.__WipeOffArea = [] > > > > + self._WipeOffArea = [] > > > > if GenFdsGlobalVariable.WorkSpaceDir == '': > > > > GenFdsGlobalVariable.WorkSpaceDir = os.getenv("WORKSPACE") > > > > > > > > - ## __SkipWhiteSpace() method > > > > + ## _SkipWhiteSpace() method > > > > # > > > > - # Skip white spaces from current char, return number of chars skipped > > > > + # Skip white spaces from current char. 
> > > > # > > > > # @param self The object pointer > > > > - # @retval Count The number of chars skipped > > > > # > > > > - def __SkipWhiteSpace(self): > > > > - Count = 0 > > > > - while not self.__EndOfFile(): > > > > - Count += 1 > > > > - if self.__CurrentChar() in (T_CHAR_NULL, T_CHAR_CR, T_CHAR_LF, T_CHAR_SPACE, T_CHAR_TAB): > > > > - self.__SkippedChars += str(self.__CurrentChar()) > > > > - self.__GetOneChar() > > > > - > > > > + def _SkipWhiteSpace(self): > > > > + while not self._EndOfFile(): > > > > + if self._CurrentChar() in (TAB_PRINTCHAR_NUL, T_CHAR_CR, TAB_LINE_BREAK, TAB_SPACE_SPLIT, T_CHAR_TAB): > > > > + self._SkippedChars += str(self._CurrentChar()) > > > > + self._GetOneChar() > > > > else: > > > > - Count = Count - 1 > > > > - return Count > > > > + return > > > > + return > > > > > > > > - ## __EndOfFile() method > > > > + ## _EndOfFile() method > > > > # > > > > # Judge current buffer pos is at file end > > > > # > > > > @@ -308,17 +281,16 @@ class FdfParser: > > > > # @retval True Current File buffer position is at file end > > > > # @retval False Current File buffer position is NOT at file end > > > > # > > > > - def __EndOfFile(self): > > > > + def _EndOfFile(self): > > > > NumberOfLines = len(self.Profile.FileLinesList) > > > > SizeOfLastLine = len(self.Profile.FileLinesList[-1]) > > > > if self.CurrentLineNumber == NumberOfLines and self.CurrentOffsetWithinLine >= SizeOfLastLine - 1: > > > > return True > > > > - elif self.CurrentLineNumber > NumberOfLines: > > > > + if self.CurrentLineNumber > NumberOfLines: > > > > return True > > > > - else: > > > > - return False > > > > + return False > > > > > > > > - ## __EndOfLine() method > > > > + ## _EndOfLine() method > > > > # > > > > # Judge current buffer pos is at line end > > > > # > > > > @@ -326,14 +298,13 @@ class FdfParser: > > > > # @retval True Current File buffer position is at line end > > > > # @retval False Current File buffer position is NOT at line end > > > > # > > > > - def __EndOfLine(self): > > > > + def _EndOfLine(self): > > > > if self.CurrentLineNumber > len(self.Profile.FileLinesList): > > > > return True > > > > SizeOfCurrentLine = len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) > > > > if self.CurrentOffsetWithinLine >= SizeOfCurrentLine: > > > > return True > > > > - else: > > > > - return False > > > > + return False > > > > > > > > ## Rewind() method > > > > # > > > > @@ -347,7 +318,7 @@ class FdfParser: > > > > self.CurrentLineNumber = DestLine > > > > self.CurrentOffsetWithinLine = DestOffset > > > > > > > > - ## __UndoOneChar() method > > > > + ## _UndoOneChar() method > > > > # > > > > # Go back one char in the file buffer > > > > # > > > > @@ -355,80 +326,78 @@ class FdfParser: > > > > # @retval True Successfully go back one char > > > > # @retval False Not able to go back one char as file beginning reached > > > > # > > > > - def __UndoOneChar(self): > > > > - > > > > + def _UndoOneChar(self): > > > > if self.CurrentLineNumber == 1 and self.CurrentOffsetWithinLine == 0: > > > > return False > > > > elif self.CurrentOffsetWithinLine == 0: > > > > self.CurrentLineNumber -= 1 > > > > - self.CurrentOffsetWithinLine = len(self.__CurrentLine()) - 1 > > > > + self.CurrentOffsetWithinLine = len(self._CurrentLine()) - 1 > > > > else: > > > > self.CurrentOffsetWithinLine -= 1 > > > > return True > > > > > > > > - ## __GetOneChar() method > > > > + ## _GetOneChar() method > > > > # > > > > # Move forward one char in the file buffer > > > > # > > > > # @param self The object 
pointer > > > > # > > > > - def __GetOneChar(self): > > > > + def _GetOneChar(self): > > > > if self.CurrentOffsetWithinLine == len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) - 1: > > > > self.CurrentLineNumber += 1 > > > > self.CurrentOffsetWithinLine = 0 > > > > else: > > > > self.CurrentOffsetWithinLine += 1 > > > > > > > > - ## __CurrentChar() method > > > > + ## _CurrentChar() method > > > > # > > > > # Get the char pointed to by the file buffer pointer > > > > # > > > > # @param self The object pointer > > > > # @retval Char Current char > > > > # > > > > - def __CurrentChar(self): > > > > + def _CurrentChar(self): > > > > return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine] > > > > > > > > - ## __NextChar() method > > > > + ## _NextChar() method > > > > # > > > > # Get the one char pass the char pointed to by the file buffer pointer > > > > # > > > > # @param self The object pointer > > > > # @retval Char Next char > > > > # > > > > - def __NextChar(self): > > > > + def _NextChar(self): > > > > if self.CurrentOffsetWithinLine == len(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) - 1: > > > > return self.Profile.FileLinesList[self.CurrentLineNumber][0] > > > > - else: > > > > - return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine + 1] > > > > + return self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine + 1] > > > > > > > > - ## __SetCurrentCharValue() method > > > > + ## _SetCurrentCharValue() method > > > > # > > > > # Modify the value of current char > > > > # > > > > # @param self The object pointer > > > > # @param Value The new value of current char > > > > # > > > > - def __SetCurrentCharValue(self, Value): > > > > + def _SetCurrentCharValue(self, Value): > > > > self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine] = Value > > > > > > > > - ## __CurrentLine() method > > > > + ## _CurrentLine() method > > > > # > > > > # Get the list that contains current line contents > > > > # > > > > # @param self The object pointer > > > > # @retval List current line contents > > > > # > > > > - def __CurrentLine(self): > > > > + def _CurrentLine(self): > > > > return self.Profile.FileLinesList[self.CurrentLineNumber - 1] > > > > > > > > - def __StringToList(self): > > > > + def _StringToList(self): > > > > self.Profile.FileLinesList = [list(s) for s in self.Profile.FileLinesList] > > > > if not self.Profile.FileLinesList: > > > > EdkLogger.error('FdfParser', FILE_READ_FAILURE, 'The file is empty!', File=self.FileName) > > > > self.Profile.FileLinesList[-1].append(' ') > > > > > > > > - def __ReplaceFragment(self, StartPos, EndPos, Value = ' '): > > > > + def _ReplaceFragment(self, StartPos, EndPos, Value = ' '): > > > > if StartPos[0] == EndPos[0]: > > > > Offset = StartPos[1] > > > > while Offset <= EndPos[1]: > > > > @@ -437,14 +406,14 @@ class FdfParser: > > > > return > > > > > > > > Offset = StartPos[1] > > > > - while self.Profile.FileLinesList[StartPos[0]][Offset] not in ('\r', '\n'): > > > > + while self.Profile.FileLinesList[StartPos[0]][Offset] not in (T_CHAR_CR, TAB_LINE_BREAK): > > > > self.Profile.FileLinesList[StartPos[0]][Offset] = Value > > > > Offset += 1 > > > > > > > > Line = StartPos[0] > > > > while Line < EndPos[0]: > > > > Offset = 0 > > > > - while self.Profile.FileLinesList[Line][Offset] not in ('\r', '\n'): > > > > + while self.Profile.FileLinesList[Line][Offset] not in (T_CHAR_CR, TAB_LINE_BREAK): > > > 
> self.Profile.FileLinesList[Line][Offset] = Value > > > > Offset += 1 > > > > Line += 1 > > > > @@ -454,45 +423,29 @@ class FdfParser: > > > > self.Profile.FileLinesList[EndPos[0]][Offset] = Value > > > > Offset += 1 > > > > > > > > - > > > > - def __GetMacroName(self): > > > > - if not self.__GetNextToken(): > > > > - raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) > > > > - MacroName = self.__Token > > > > - NotFlag = False > > > > - if MacroName.startswith('!'): > > > > - NotFlag = True > > > > - MacroName = MacroName[1:].strip() > > > > - > > > > - if not MacroName.startswith('$(') or not MacroName.endswith(')'): > > > > - raise Warning("Macro name expected(Please use '$(%(Token)s)' if '%(Token)s' is a macro.)" % {"Token" : > > MacroName}, > > > > - self.FileName, self.CurrentLineNumber) > > > > - MacroName = MacroName[2:-1] > > > > - return MacroName, NotFlag > > > > - > > > > - def __SetMacroValue(self, Macro, Value): > > > > - if not self.__CurSection: > > > > + def _SetMacroValue(self, Macro, Value): > > > > + if not self._CurSection: > > > > return > > > > > > > > MacroDict = {} > > > > - if not self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]]: > > > > - self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]] = MacroDict > > > > + if not self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]]: > > > > + self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]] = MacroDict > > > > else: > > > > - MacroDict = self.__MacroDict[self.__CurSection[0], self.__CurSection[1], self.__CurSection[2]] > > > > + MacroDict = self._MacroDict[self._CurSection[0], self._CurSection[1], self._CurSection[2]] > > > > MacroDict[Macro] = Value > > > > > > > > - def __GetMacroValue(self, Macro): > > > > + def _GetMacroValue(self, Macro): > > > > # Highest priority > > > > if Macro in GlobalData.gCommandLineDefines: > > > > return GlobalData.gCommandLineDefines[Macro] > > > > if Macro in GlobalData.gGlobalDefines: > > > > return GlobalData.gGlobalDefines[Macro] > > > > > > > > - if self.__CurSection: > > > > - MacroDict = self.__MacroDict[ > > > > - self.__CurSection[0], > > > > - self.__CurSection[1], > > > > - self.__CurSection[2] > > > > + if self._CurSection: > > > > + MacroDict = self._MacroDict[ > > > > + self._CurSection[0], > > > > + self._CurSection[1], > > > > + self._CurSection[2] > > > > ] > > > > if MacroDict and Macro in MacroDict: > > > > return MacroDict[Macro] > > > > @@ -502,7 +455,7 @@ class FdfParser: > > > > return GlobalData.gPlatformDefines[Macro] > > > > return None > > > > > > > > - def __SectionHeaderParser(self, Section): > > > > + def _SectionHeaderParser(self, Section): > > > > # [Defines] > > > > # [FD.UiName]: use dummy instead if UI name is optional > > > > # [FV.UiName] > > > > @@ -510,25 +463,25 @@ class FdfParser: > > > > # [Rule]: don't take rule section into account, macro is not allowed in this section > > > > # [VTF.arch.UiName, arch] > > > > # [OptionRom.DriverName] > > > > - self.__CurSection = [] > > > > - Section = Section.strip()[1:-1].upper().replace(' ', '').strip('.') > > > > - ItemList = Section.split('.') > > > > + self._CurSection = [] > > > > + Section = Section.strip()[1:-1].upper().replace(' ', '').strip(TAB_SPLIT) > > > > + ItemList = Section.split(TAB_SPLIT) > > > > Item = ItemList[0] > > > > if Item == '' or Item == 'RULE': > > > > return > > > > > > > > if Item == TAB_COMMON_DEFINES.upper(): > > > > - self.__CurSection 
= [TAB_COMMON, TAB_COMMON, TAB_COMMON] > > > > + self._CurSection = [TAB_COMMON, TAB_COMMON, TAB_COMMON] > > > > elif Item == 'VTF' and len(ItemList) == 3: > > > > UiName = ItemList[2] > > > > - Pos = UiName.find(',') > > > > + Pos = UiName.find(TAB_COMMA_SPLIT) > > > > if Pos != -1: > > > > UiName = UiName[:Pos] > > > > - self.__CurSection = ['VTF', UiName, ItemList[1]] > > > > + self._CurSection = ['VTF', UiName, ItemList[1]] > > > > elif len(ItemList) > 1: > > > > - self.__CurSection = [ItemList[0], ItemList[1], TAB_COMMON] > > > > + self._CurSection = [ItemList[0], ItemList[1], TAB_COMMON] > > > > elif len(ItemList) > 0: > > > > - self.__CurSection = [ItemList[0], 'DUMMY', TAB_COMMON] > > > > + self._CurSection = [ItemList[0], 'DUMMY', TAB_COMMON] > > > > > > > > ## PreprocessFile() method > > > > # > > > > @@ -540,7 +493,6 @@ class FdfParser: > > > > # @param self The object pointer > > > > # > > > > def PreprocessFile(self): > > > > - > > > > self.Rewind() > > > > InComment = False > > > > DoubleSlashComment = False > > > > @@ -548,12 +500,12 @@ class FdfParser: > > > > # HashComment in quoted string " " is ignored. > > > > InString = False > > > > > > > > - while not self.__EndOfFile(): > > > > + while not self._EndOfFile(): > > > > > > > > - if self.__CurrentChar() == T_CHAR_DOUBLE_QUOTE and not InComment: > > > > + if self._CurrentChar() == T_CHAR_DOUBLE_QUOTE and not InComment: > > > > InString = not InString > > > > # meet new line, then no longer in a comment for // and '#' > > > > - if self.__CurrentChar() == T_CHAR_LF: > > > > + if self._CurrentChar() == TAB_LINE_BREAK: > > > > self.CurrentLineNumber += 1 > > > > self.CurrentOffsetWithinLine = 0 > > > > if InComment and DoubleSlashComment: > > > > @@ -563,33 +515,33 @@ class FdfParser: > > > > InComment = False > > > > HashComment = False > > > > # check for */ comment end > > > > - elif InComment and not DoubleSlashComment and not HashComment and self.__CurrentChar() == T_CHAR_STAR > > and > > > self.__NextChar() == T_CHAR_SLASH: > > > > - self.__SetCurrentCharValue(T_CHAR_SPACE) > > > > - self.__GetOneChar() > > > > - self.__SetCurrentCharValue(T_CHAR_SPACE) > > > > - self.__GetOneChar() > > > > + elif InComment and not DoubleSlashComment and not HashComment and self._CurrentChar() == T_CHAR_STAR and > > > self._NextChar() == TAB_BACK_SLASH: > > > > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > > > > + self._GetOneChar() > > > > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > > > > + self._GetOneChar() > > > > InComment = False > > > > # set comments to spaces > > > > elif InComment: > > > > - self.__SetCurrentCharValue(T_CHAR_SPACE) > > > > - self.__GetOneChar() > > > > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > > > > + self._GetOneChar() > > > > # check for // comment > > > > - elif self.__CurrentChar() == T_CHAR_SLASH and self.__NextChar() == T_CHAR_SLASH and not self.__EndOfLine(): > > > > + elif self._CurrentChar() == TAB_BACK_SLASH and self._NextChar() == TAB_BACK_SLASH and not self._EndOfLine(): > > > > InComment = True > > > > DoubleSlashComment = True > > > > # check for '#' comment > > > > - elif self.__CurrentChar() == T_CHAR_HASH and not self.__EndOfLine() and not InString: > > > > + elif self._CurrentChar() == TAB_COMMENT_SPLIT and not self._EndOfLine() and not InString: > > > > InComment = True > > > > HashComment = True > > > > # check for /* comment start > > > > - elif self.__CurrentChar() == T_CHAR_SLASH and self.__NextChar() == T_CHAR_STAR: > > > > - self.__SetCurrentCharValue( T_CHAR_SPACE) > > > > - 
self.__GetOneChar() > > > > - self.__SetCurrentCharValue( T_CHAR_SPACE) > > > > - self.__GetOneChar() > > > > + elif self._CurrentChar() == TAB_BACK_SLASH and self._NextChar() == T_CHAR_STAR: > > > > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > > > > + self._GetOneChar() > > > > + self._SetCurrentCharValue(TAB_SPACE_SPLIT) > > > > + self._GetOneChar() > > > > InComment = True > > > > else: > > > > - self.__GetOneChar() > > > > + self._GetOneChar() > > > > > > > > # restore from ListOfList to ListOfString > > > > self.Profile.FileLinesList = ["".join(list) for list in self.Profile.FileLinesList] > > > > @@ -606,30 +558,30 @@ class FdfParser: > > > > # nested include support > > > > Processed = False > > > > MacroDict = {} > > > > - while self.__GetNextToken(): > > > > + while self._GetNextToken(): > > > > > > > > - if self.__Token == 'DEFINE': > > > > - if not self.__GetNextToken(): > > > > + if self._Token == TAB_DEFINE: > > > > + if not self._GetNextToken(): > > > > raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) > > > > - Macro = self.__Token > > > > - if not self.__IsToken( "="): > > > > + Macro = self._Token > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - Value = self.__GetExpression() > > > > + Value = self._GetExpression() > > > > MacroDict[Macro] = Value > > > > > > > > - elif self.__Token == '!include': > > > > + elif self._Token == TAB_INCLUDE: > > > > Processed = True > > > > IncludeLine = self.CurrentLineNumber > > > > - IncludeOffset = self.CurrentOffsetWithinLine - len('!include') > > > > - if not self.__GetNextToken(): > > > > + IncludeOffset = self.CurrentOffsetWithinLine - len(TAB_INCLUDE) > > > > + if not self._GetNextToken(): > > > > raise Warning("expected include file name", self.FileName, self.CurrentLineNumber) > > > > - IncFileName = self.__Token > > > > + IncFileName = self._Token > > > > PreIndex = 0 > > > > StartPos = IncFileName.find('$(', PreIndex) > > > > EndPos = IncFileName.find(')', StartPos+2) > > > > while StartPos != -1 and EndPos != -1: > > > > - Macro = IncFileName[StartPos+2 : EndPos] > > > > - MacroVal = self.__GetMacroValue(Macro) > > > > + Macro = IncFileName[StartPos+2: EndPos] > > > > + MacroVal = self._GetMacroValue(Macro) > > > > if not MacroVal: > > > > if Macro in MacroDict: > > > > MacroVal = MacroDict[Macro] > > > > @@ -686,9 +638,9 @@ class FdfParser: > > > > IncFileProfile.Level = ParentProfile.Level + 1 > > > > IncFileProfile.InsertStartLineNumber = InsertAtLine + 1 > > > > # deal with remaining portions after "!include filename", if exists. 
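
For readers following the PreprocessIncludeFile() hunks above: the $(MACRO) references in an "!include" path are expanded by scanning for "$(" / ")" pairs and substituting each macro's value before the included file is opened. Below is a minimal, self-contained sketch of that expansion only; ExpandIncludePath and ResolveMacro are stand-ins for the parser's own flow and its _GetMacroValue() lookup chain, not code from this patch:

    # Sketch: expand $(NAME) references in an !include path.
    # ResolveMacro(name) returns the macro value or None if undefined.
    def ExpandIncludePath(IncFileName, ResolveMacro):
        PreIndex = 0
        StartPos = IncFileName.find('$(', PreIndex)
        EndPos = IncFileName.find(')', StartPos + 2)
        while StartPos != -1 and EndPos != -1:
            MacroName = IncFileName[StartPos + 2:EndPos]
            MacroVal = ResolveMacro(MacroName)
            if MacroVal is None:
                # Unknown macro: leave it in place and keep scanning.
                PreIndex = EndPos + 1
            else:
                IncFileName = IncFileName.replace('$(' + MacroName + ')', MacroVal, 1)
                PreIndex = StartPos
            StartPos = IncFileName.find('$(', PreIndex)
            EndPos = IncFileName.find(')', StartPos + 2)
        return IncFileName

    # Prints "/src/edk2/Platform.fdf" (illustrative values only).
    print(ExpandIncludePath('$(WORKSPACE)/Platform.fdf', {'WORKSPACE': '/src/edk2'}.get))
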
> > > > - if self.__GetNextToken(): > > > > + if self._GetNextToken(): > > > > if self.CurrentLineNumber == CurrentLine: > > > > - RemainingLine = self.__CurrentLine()[CurrentOffset:] > > > > + RemainingLine = self._CurrentLine()[CurrentOffset:] > > > > self.Profile.FileLinesList.insert(self.CurrentLineNumber, RemainingLine) > > > > IncFileProfile.InsertAdjust += 1 > > > > self.CurrentLineNumber += 1 > > > > @@ -704,7 +656,7 @@ class FdfParser: > > > > > > > > # comment out the processed include file statement > > > > TempList = list(self.Profile.FileLinesList[IncludeLine - 1]) > > > > - TempList.insert(IncludeOffset, '#') > > > > + TempList.insert(IncludeOffset, TAB_COMMENT_SPLIT) > > > > self.Profile.FileLinesList[IncludeLine - 1] = ''.join(TempList) > > > > if Processed: # Nested and back-to-back support > > > > self.Rewind(DestLine = IncFileProfile.InsertStartLineNumber - 1) > > > > @@ -713,7 +665,7 @@ class FdfParser: > > > > self.Rewind() > > > > > > > > @staticmethod > > > > - def __GetIfListCurrentItemStat(IfList): > > > > + def _GetIfListCurrentItemStat(IfList): > > > > if len(IfList) == 0: > > > > return True > > > > > > > > @@ -735,29 +687,29 @@ class FdfParser: > > > > IfList = [] > > > > RegionLayoutLine = 0 > > > > ReplacedLine = -1 > > > > - while self.__GetNextToken(): > > > > + while self._GetNextToken(): > > > > # Determine section name and the location dependent macro > > > > - if self.__GetIfListCurrentItemStat(IfList): > > > > - if self.__Token.startswith('['): > > > > - Header = self.__Token > > > > - if not self.__Token.endswith(']'): > > > > - self.__SkipToToken(']') > > > > - Header += self.__SkippedChars > > > > + if self._GetIfListCurrentItemStat(IfList): > > > > + if self._Token.startswith(TAB_SECTION_START): > > > > + Header = self._Token > > > > + if not self._Token.endswith(TAB_SECTION_END): > > > > + self._SkipToToken(TAB_SECTION_END) > > > > + Header += self._SkippedChars > > > > if Header.find('$(') != -1: > > > > raise Warning("macro cannot be used in section header", self.FileName, self.CurrentLineNumber) > > > > - self.__SectionHeaderParser(Header) > > > > + self._SectionHeaderParser(Header) > > > > continue > > > > # Replace macros except in RULE section or out of section > > > > - elif self.__CurSection and ReplacedLine != self.CurrentLineNumber: > > > > + elif self._CurSection and ReplacedLine != self.CurrentLineNumber: > > > > ReplacedLine = self.CurrentLineNumber > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > CurLine = self.Profile.FileLinesList[ReplacedLine - 1] > > > > PreIndex = 0 > > > > StartPos = CurLine.find('$(', PreIndex) > > > > EndPos = CurLine.find(')', StartPos+2) > > > > - while StartPos != -1 and EndPos != -1 and self.__Token not in ['!ifdef', '!ifndef', '!if', '!elseif']: > > > > - MacroName = CurLine[StartPos+2 : EndPos] > > > > - MacorValue = self.__GetMacroValue(MacroName) > > > > + while StartPos != -1 and EndPos != -1 and self._Token not in [TAB_IF_DEF, TAB_IF_N_DEF, TAB_IF, > > > TAB_ELSE_IF]: > > > > + MacroName = CurLine[StartPos+2: EndPos] > > > > + MacorValue = self._GetMacroValue(MacroName) > > > > if MacorValue is not None: > > > > CurLine = CurLine.replace('$(' + MacroName + ')', MacorValue, 1) > > > > if MacorValue.find('$(') != -1: > > > > @@ -771,74 +723,74 @@ class FdfParser: > > > > self.Profile.FileLinesList[ReplacedLine - 1] = CurLine > > > > continue > > > > > > > > - if self.__Token == 'DEFINE': > > > > - if self.__GetIfListCurrentItemStat(IfList): > > > > - if not self.__CurSection: > > > > + 
if self._Token == TAB_DEFINE: > > > > + if self._GetIfListCurrentItemStat(IfList): > > > > + if not self._CurSection: > > > > raise Warning("macro cannot be defined in Rule section or out of section", self.FileName, > > > self.CurrentLineNumber) > > > > DefineLine = self.CurrentLineNumber - 1 > > > > - DefineOffset = self.CurrentOffsetWithinLine - len('DEFINE') > > > > - if not self.__GetNextToken(): > > > > + DefineOffset = self.CurrentOffsetWithinLine - len(TAB_DEFINE) > > > > + if not self._GetNextToken(): > > > > raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) > > > > - Macro = self.__Token > > > > - if not self.__IsToken( "="): > > > > + Macro = self._Token > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - Value = self.__GetExpression() > > > > - self.__SetMacroValue(Macro, Value) > > > > - self.__WipeOffArea.append(((DefineLine, DefineOffset), (self.CurrentLineNumber - 1, > > > self.CurrentOffsetWithinLine - 1))) > > > > - elif self.__Token == 'SET': > > > > - if not self.__GetIfListCurrentItemStat(IfList): > > > > + Value = self._GetExpression() > > > > + self._SetMacroValue(Macro, Value) > > > > + self._WipeOffArea.append(((DefineLine, DefineOffset), (self.CurrentLineNumber - 1, > > > self.CurrentOffsetWithinLine - 1))) > > > > + elif self._Token == 'SET': > > > > + if not self._GetIfListCurrentItemStat(IfList): > > > > continue > > > > SetLine = self.CurrentLineNumber - 1 > > > > SetOffset = self.CurrentOffsetWithinLine - len('SET') > > > > - PcdPair = self.__GetNextPcdSettings() > > > > + PcdPair = self._GetNextPcdSettings() > > > > PcdName = "%s.%s" % (PcdPair[1], PcdPair[0]) > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - Value = self.__GetExpression() > > > > - Value = self.__EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) > > > > + Value = self._GetExpression() > > > > + Value = self._EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) > > > > > > > > - self.__PcdDict[PcdName] = Value > > > > + self._PcdDict[PcdName] = Value > > > > > > > > self.Profile.PcdDict[PcdPair] = Value > > > > self.SetPcdLocalation(PcdPair) > > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > > self.Profile.PcdFileLineDict[PcdPair] = FileLineTuple > > > > > > > > - self.__WipeOffArea.append(((SetLine, SetOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - > > 1))) > > > > - elif self.__Token in ('!ifdef', '!ifndef', '!if'): > > > > - IfStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self.__Token)) > > > > + self._WipeOffArea.append(((SetLine, SetOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > > + elif self._Token in (TAB_IF_DEF, TAB_IF_N_DEF, TAB_IF): > > > > + IfStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self._Token)) > > > > IfList.append([IfStartPos, None, None]) > > > > > > > > - CondLabel = self.__Token > > > > - Expression = self.__GetExpression() > > > > + CondLabel = self._Token > > > > + Expression = self._GetExpression() > > > > > > > > - if CondLabel == '!if': > > > > - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') > > > > + if CondLabel == TAB_IF: > > > > + ConditionSatisfied = self._EvaluateConditional(Expression, 
IfList[-1][0][0] + 1, 'eval') > > > > else: > > > > - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'in') > > > > - if CondLabel == '!ifndef': > > > > + ConditionSatisfied = self._EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'in') > > > > + if CondLabel == TAB_IF_N_DEF: > > > > ConditionSatisfied = not ConditionSatisfied > > > > > > > > BranchDetermined = ConditionSatisfied > > > > IfList[-1] = [IfList[-1][0], ConditionSatisfied, BranchDetermined] > > > > if ConditionSatisfied: > > > > - self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > > - elif self.__Token in ('!elseif', '!else'): > > > > - ElseStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self.__Token)) > > > > + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > > + elif self._Token in (TAB_ELSE_IF, '!else'): > > > > + ElseStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self._Token)) > > > > if len(IfList) <= 0: > > > > raise Warning("Missing !if statement", self.FileName, self.CurrentLineNumber) > > > > > > > > if IfList[-1][1]: > > > > IfList[-1] = [ElseStartPos, False, True] > > > > - self.__WipeOffArea.append((ElseStartPos, (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > > + self._WipeOffArea.append((ElseStartPos, (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > > else: > > > > - self.__WipeOffArea.append((IfList[-1][0], ElseStartPos)) > > > > + self._WipeOffArea.append((IfList[-1][0], ElseStartPos)) > > > > IfList[-1] = [ElseStartPos, True, IfList[-1][2]] > > > > - if self.__Token == '!elseif': > > > > - Expression = self.__GetExpression() > > > > - ConditionSatisfied = self.__EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') > > > > + if self._Token == TAB_ELSE_IF: > > > > + Expression = self._GetExpression() > > > > + ConditionSatisfied = self._EvaluateConditional(Expression, IfList[-1][0][0] + 1, 'eval') > > > > IfList[-1] = [IfList[-1][0], ConditionSatisfied, IfList[-1][2]] > > > > > > > > if IfList[-1][1]: > > > > @@ -846,14 +798,14 @@ class FdfParser: > > > > IfList[-1][1] = False > > > > else: > > > > IfList[-1][2] = True > > > > - self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, > > self.CurrentOffsetWithinLine > > > - 1))) > > > > - elif self.__Token == '!endif': > > > > + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, > > self.CurrentOffsetWithinLine > > > - 1))) > > > > + elif self._Token == '!endif': > > > > if len(IfList) <= 0: > > > > raise Warning("Missing !if statement", self.FileName, self.CurrentLineNumber) > > > > if IfList[-1][1]: > > > > - self.__WipeOffArea.append(((self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len('!endif')), > > > (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > > + self._WipeOffArea.append(((self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len('!endif')), > > > (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > > else: > > > > - self.__WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > > + self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) > > > > > > > > IfList.pop() > > > > elif not IfList: # Don't use PCDs inside conditional directive > > > > @@ -862,7 +814,7 @@ class FdfParser: > > > > 
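
The !if/!ifdef/!ifndef/!elseif/!else/!endif handling in these hunks keeps a stack (IfList) of three-field records: the directive's start position, whether the current branch's condition is satisfied, and whether any branch of that !if has already been taken. A simplified illustration of the !elseif rule alone, with hypothetical names and none of the _WipeOffArea bookkeeping:

    # Sketch of the [start-pos, condition-satisfied, branch-determined]
    # record: an !elseif can only take effect while no earlier branch of
    # the same !if has been taken.
    def HandleElseIf(IfList, NewCondition):
        StartPos, _, BranchDetermined = IfList[-1]
        if BranchDetermined:
            # An earlier branch already won; this one is dead text.
            IfList[-1] = [StartPos, False, True]
        else:
            IfList[-1] = [StartPos, NewCondition, NewCondition]
        return IfList[-1][1]

    IfList = [[(1, 0), False, False]]      # an "!if" that evaluated to FALSE
    print(HandleElseIf(IfList, True))      # True  -> this !elseif branch is kept
    print(HandleElseIf(IfList, True))      # False -> a branch was already taken
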
continue > > > > SetPcd = ShortcutPcdPattern.match(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) > > > > if SetPcd: > > > > - self.__PcdDict[SetPcd.group('name')] = SetPcd.group('value') > > > > + self._PcdDict[SetPcd.group('name')] = SetPcd.group('value') > > > > RegionLayoutLine = self.CurrentLineNumber > > > > continue > > > > RegionSize = RegionSizePattern.match(self.Profile.FileLinesList[self.CurrentLineNumber - 1]) > > > > @@ -873,35 +825,35 @@ class FdfParser: > > > > if not RegionSizeGuid: > > > > RegionLayoutLine = self.CurrentLineNumber + 1 > > > > continue > > > > - self.__PcdDict[RegionSizeGuid.group('base')] = RegionSize.group('base') > > > > - self.__PcdDict[RegionSizeGuid.group('size')] = RegionSize.group('size') > > > > + self._PcdDict[RegionSizeGuid.group('base')] = RegionSize.group('base') > > > > + self._PcdDict[RegionSizeGuid.group('size')] = RegionSize.group('size') > > > > RegionLayoutLine = self.CurrentLineNumber + 1 > > > > > > > > if IfList: > > > > raise Warning("Missing !endif", self.FileName, self.CurrentLineNumber) > > > > self.Rewind() > > > > > > > > - def __CollectMacroPcd(self): > > > > + def _CollectMacroPcd(self): > > > > MacroDict = {} > > > > > > > > # PCD macro > > > > MacroDict.update(GlobalData.gPlatformPcds) > > > > - MacroDict.update(self.__PcdDict) > > > > + MacroDict.update(self._PcdDict) > > > > > > > > # Lowest priority > > > > MacroDict.update(GlobalData.gPlatformDefines) > > > > > > > > - if self.__CurSection: > > > > + if self._CurSection: > > > > # Defines macro > > > > - ScopeMacro = self.__MacroDict[TAB_COMMON, TAB_COMMON, TAB_COMMON] > > > > + ScopeMacro = self._MacroDict[TAB_COMMON, TAB_COMMON, TAB_COMMON] > > > > if ScopeMacro: > > > > MacroDict.update(ScopeMacro) > > > > > > > > # Section macro > > > > - ScopeMacro = self.__MacroDict[ > > > > - self.__CurSection[0], > > > > - self.__CurSection[1], > > > > - self.__CurSection[2] > > > > + ScopeMacro = self._MacroDict[ > > > > + self._CurSection[0], > > > > + self._CurSection[1], > > > > + self._CurSection[2] > > > > ] > > > > if ScopeMacro: > > > > MacroDict.update(ScopeMacro) > > > > @@ -912,15 +864,15 @@ class FdfParser: > > > > for Item in GlobalData.BuildOptionPcd: > > > > if isinstance(Item, tuple): > > > > continue > > > > - PcdName, TmpValue = Item.split("=") > > > > + PcdName, TmpValue = Item.split(TAB_EQUAL_SPLIT) > > > > TmpValue = BuildOptionValue(TmpValue, {}) > > > > MacroDict[PcdName.strip()] = TmpValue > > > > # Highest priority > > > > > > > > return MacroDict > > > > > > > > - def __EvaluateConditional(self, Expression, Line, Op = None, Value = None): > > > > - MacroPcdDict = self.__CollectMacroPcd() > > > > + def _EvaluateConditional(self, Expression, Line, Op = None, Value = None): > > > > + MacroPcdDict = self._CollectMacroPcd() > > > > if Op == 'eval': > > > > try: > > > > if Value: > > > > @@ -933,7 +885,7 @@ class FdfParser: > > > > # the precise number of line and return the evaluation result > > > > # > > > > EdkLogger.warn('Parser', "Suspicious expression: %s" % str(Excpt), > > > > - File=self.FileName, ExtraData=self.__CurrentLine(), > > > > + File=self.FileName, ExtraData=self._CurrentLine(), > > > > Line=Line) > > > > return Excpt.result > > > > except Exception as Excpt: > > > > @@ -955,10 +907,10 @@ class FdfParser: > > > > Expression = Expression[2:-1] > > > > return Expression in MacroPcdDict > > > > > > > > - ## __IsToken() method > > > > + ## _IsToken() method > > > > # > > > > # Check whether input string is found from current char position 
along > > > > - # If found, the string value is put into self.__Token > > > > + # If found, the string value is put into self._Token > > > > # > > > > # @param self The object pointer > > > > # @param String The string to search > > > > @@ -966,26 +918,26 @@ class FdfParser: > > > > # @retval True Successfully find string, file buffer pointer moved forward > > > > # @retval False Not able to find string, file buffer pointer not changed > > > > # > > > > - def __IsToken(self, String, IgnoreCase = False): > > > > - self.__SkipWhiteSpace() > > > > + def _IsToken(self, String, IgnoreCase = False): > > > > + self._SkipWhiteSpace() > > > > > > > > # Only consider the same line, no multi-line token allowed > > > > StartPos = self.CurrentOffsetWithinLine > > > > index = -1 > > > > if IgnoreCase: > > > > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(String.upper()) > > > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(String.upper()) > > > > else: > > > > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(String) > > > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(String) > > > > if index == 0: > > > > self.CurrentOffsetWithinLine += len(String) > > > > - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] > > > > + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] > > > > return True > > > > return False > > > > > > > > - ## __IsKeyword() method > > > > + ## _IsKeyword() method > > > > # > > > > # Check whether input keyword is found from current char position along, whole word only! > > > > - # If found, the string value is put into self.__Token > > > > + # If found, the string value is put into self._Token > > > > # > > > > # @param self The object pointer > > > > # @param Keyword The string to search > > > > @@ -993,117 +945,117 @@ class FdfParser: > > > > # @retval True Successfully find string, file buffer pointer moved forward > > > > # @retval False Not able to find string, file buffer pointer not changed > > > > # > > > > - def __IsKeyword(self, KeyWord, IgnoreCase = False): > > > > - self.__SkipWhiteSpace() > > > > + def _IsKeyword(self, KeyWord, IgnoreCase = False): > > > > + self._SkipWhiteSpace() > > > > > > > > # Only consider the same line, no multi-line token allowed > > > > StartPos = self.CurrentOffsetWithinLine > > > > index = -1 > > > > if IgnoreCase: > > > > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(KeyWord.upper()) > > > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(KeyWord.upper()) > > > > else: > > > > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(KeyWord) > > > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(KeyWord) > > > > if index == 0: > > > > - followingChar = self.__CurrentLine()[self.CurrentOffsetWithinLine + len(KeyWord)] > > > > - if not str(followingChar).isspace() and followingChar not in SEPERATOR_TUPLE: > > > > + followingChar = self._CurrentLine()[self.CurrentOffsetWithinLine + len(KeyWord)] > > > > + if not str(followingChar).isspace() and followingChar not in SEPARATORS: > > > > return False > > > > self.CurrentOffsetWithinLine += len(KeyWord) > > > > - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] > > > > + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] > > > > return True > > > > return False > > > > > > > > - def __GetExpression(self): > > > > 
+ def _GetExpression(self): > > > > Line = self.Profile.FileLinesList[self.CurrentLineNumber - 1] > > > > Index = len(Line) - 1 > > > > - while Line[Index] in ['\r', '\n']: > > > > + while Line[Index] in [T_CHAR_CR, TAB_LINE_BREAK]: > > > > Index -= 1 > > > > ExpressionString = self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:Index+1] > > > > self.CurrentOffsetWithinLine += len(ExpressionString) > > > > ExpressionString = ExpressionString.strip() > > > > return ExpressionString > > > > > > > > - ## __GetNextWord() method > > > > + ## _GetNextWord() method > > > > # > > > > # Get next C name from file lines > > > > - # If found, the string value is put into self.__Token > > > > + # If found, the string value is put into self._Token > > > > # > > > > # @param self The object pointer > > > > # @retval True Successfully find a C name string, file buffer pointer moved forward > > > > # @retval False Not able to find a C name string, file buffer pointer not changed > > > > # > > > > - def __GetNextWord(self): > > > > - self.__SkipWhiteSpace() > > > > - if self.__EndOfFile(): > > > > + def _GetNextWord(self): > > > > + self._SkipWhiteSpace() > > > > + if self._EndOfFile(): > > > > return False > > > > > > > > - TempChar = self.__CurrentChar() > > > > + TempChar = self._CurrentChar() > > > > StartPos = self.CurrentOffsetWithinLine > > > > if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_': > > > > - self.__GetOneChar() > > > > - while not self.__EndOfLine(): > > > > - TempChar = self.__CurrentChar() > > > > + self._GetOneChar() > > > > + while not self._EndOfLine(): > > > > + TempChar = self._CurrentChar() > > > > if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') \ > > > > or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-': > > > > - self.__GetOneChar() > > > > + self._GetOneChar() > > > > > > > > else: > > > > break > > > > > > > > - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] > > > > + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] > > > > return True > > > > > > > > return False > > > > > > > > - def __GetNextPcdWord(self): > > > > - self.__SkipWhiteSpace() > > > > - if self.__EndOfFile(): > > > > + def _GetNextPcdWord(self): > > > > + self._SkipWhiteSpace() > > > > + if self._EndOfFile(): > > > > return False > > > > > > > > - TempChar = self.__CurrentChar() > > > > + TempChar = self._CurrentChar() > > > > StartPos = self.CurrentOffsetWithinLine > > > > - if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_' or TempChar == > > '[' > > > or TempChar == ']': > > > > - self.__GetOneChar() > > > > - while not self.__EndOfLine(): > > > > - TempChar = self.__CurrentChar() > > > > + if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') or TempChar == '_' or TempChar == > > > TAB_SECTION_START or TempChar == TAB_SECTION_END: > > > > + self._GetOneChar() > > > > + while not self._EndOfLine(): > > > > + TempChar = self._CurrentChar() > > > > if (TempChar >= 'a' and TempChar <= 'z') or (TempChar >= 'A' and TempChar <= 'Z') \ > > > > - or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-' or TempChar == '[' or > > > TempChar == ']': > > > > - self.__GetOneChar() > > > > + or (TempChar >= '0' and TempChar <= '9') or TempChar == '_' or TempChar == '-' or TempChar == > > > TAB_SECTION_START or TempChar == 
TAB_SECTION_END: > > > > + self._GetOneChar() > > > > > > > > else: > > > > break > > > > > > > > - self.__Token = self.__CurrentLine()[StartPos : self.CurrentOffsetWithinLine] > > > > + self._Token = self._CurrentLine()[StartPos: self.CurrentOffsetWithinLine] > > > > return True > > > > > > > > return False > > > > > > > > - ## __GetNextToken() method > > > > + ## _GetNextToken() method > > > > # > > > > # Get next token unit before a seperator > > > > - # If found, the string value is put into self.__Token > > > > + # If found, the string value is put into self._Token > > > > # > > > > # @param self The object pointer > > > > # @retval True Successfully find a token unit, file buffer pointer moved forward > > > > # @retval False Not able to find a token unit, file buffer pointer not changed > > > > # > > > > - def __GetNextToken(self): > > > > + def _GetNextToken(self): > > > > # Skip leading spaces, if exist. > > > > - self.__SkipWhiteSpace() > > > > - if self.__EndOfFile(): > > > > + self._SkipWhiteSpace() > > > > + if self._EndOfFile(): > > > > return False > > > > # Record the token start position, the position of the first non-space char. > > > > StartPos = self.CurrentOffsetWithinLine > > > > StartLine = self.CurrentLineNumber > > > > while StartLine == self.CurrentLineNumber: > > > > - TempChar = self.__CurrentChar() > > > > + TempChar = self._CurrentChar() > > > > # Try to find the end char that is not a space and not in seperator tuple. > > > > # That is, when we got a space or any char in the tuple, we got the end of token. > > > > - if not str(TempChar).isspace() and TempChar not in SEPERATOR_TUPLE: > > > > - self.__GetOneChar() > > > > + if not str(TempChar).isspace() and TempChar not in SEPARATORS: > > > > + self._GetOneChar() > > > > # if we happen to meet a seperator as the first char, we must proceed to get it. > > > > # That is, we get a token that is a seperator char. nomally it is the boundary of other tokens. 
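
The tokenizer hunks above stop a token at whitespace or at any character in the shared SEPARATORS constant (previously the local SEPERATOR_TUPLE), with one twist: a separator met as the very first character is itself returned as a one-character token. A rough stand-alone sketch of that scanning rule; the SEPARATORS value below is only a stand-in for the real constant, not necessarily its exact contents:

    SEPARATORS = {'=', '|', ',', '{', '}'}   # illustrative stand-in

    def NextToken(Line, Offset):
        # Skip leading whitespace, then scan to whitespace or a separator.
        while Offset < len(Line) and Line[Offset].isspace():
            Offset += 1
        Start = Offset
        while Offset < len(Line):
            Char = Line[Offset]
            if Char.isspace() or Char in SEPARATORS:
                if Offset == Start and Char in SEPARATORS:
                    Offset += 1          # a separator is itself a token
                break
            Offset += 1
        return Line[Start:Offset], Offset

    print(NextToken('BaseAddress = 0x0', 0))   # ('BaseAddress', 11)
    print(NextToken('BaseAddress = 0x0', 11))  # ('=', 13)
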
> > > > - elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPERATOR_TUPLE: > > > > - self.__GetOneChar() > > > > + elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPARATORS: > > > > + self._GetOneChar() > > > > break > > > > else: > > > > break > > > > @@ -1113,35 +1065,34 @@ class FdfParser: > > > > EndPos = self.CurrentOffsetWithinLine > > > > if self.CurrentLineNumber != StartLine: > > > > EndPos = len(self.Profile.FileLinesList[StartLine-1]) > > > > - self.__Token = self.Profile.FileLinesList[StartLine-1][StartPos : EndPos] > > > > - if self.__Token.lower() in [TAB_IF, TAB_END_IF, TAB_ELSE_IF, TAB_ELSE, TAB_IF_DEF, TAB_IF_N_DEF, TAB_ERROR, > > > TAB_INCLUDE]: > > > > - self.__Token = self.__Token.lower() > > > > + self._Token = self.Profile.FileLinesList[StartLine-1][StartPos: EndPos] > > > > + if self._Token.lower() in [TAB_IF, TAB_END_IF, TAB_ELSE_IF, TAB_ELSE, TAB_IF_DEF, TAB_IF_N_DEF, TAB_ERROR, > > > TAB_INCLUDE]: > > > > + self._Token = self._Token.lower() > > > > if StartPos != self.CurrentOffsetWithinLine: > > > > return True > > > > else: > > > > return False > > > > > > > > - ## __GetNextGuid() method > > > > + ## _GetNextGuid() method > > > > # > > > > # Get next token unit before a seperator > > > > - # If found, the GUID string is put into self.__Token > > > > + # If found, the GUID string is put into self._Token > > > > # > > > > # @param self The object pointer > > > > # @retval True Successfully find a registry format GUID, file buffer pointer moved forward > > > > # @retval False Not able to find a registry format GUID, file buffer pointer not changed > > > > # > > > > - def __GetNextGuid(self): > > > > - > > > > - if not self.__GetNextToken(): > > > > + def _GetNextGuid(self): > > > > + if not self._GetNextToken(): > > > > return False > > > > - if gGuidPattern.match(self.__Token) is not None: > > > > + if gGuidPattern.match(self._Token) is not None: > > > > return True > > > > else: > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > return False > > > > > > > > @staticmethod > > > > - def __Verify(Name, Value, Scope): > > > > + def _Verify(Name, Value, Scope): > > > > # value verification only applies to numeric values. > > > > if Scope not in TAB_PCD_NUMERIC_TYPES: > > > > return > > > > @@ -1157,17 +1108,17 @@ class FdfParser: > > > > EdkLogger.error("FdfParser", FORMAT_INVALID, "Too large value for %s." % Name) > > > > return True > > > > > > > > - ## __UndoToken() method > > > > + ## _UndoToken() method > > > > # > > > > # Go back one token unit in file buffer > > > > # > > > > # @param self The object pointer > > > > # > > > > - def __UndoToken(self): > > > > - self.__UndoOneChar() > > > > - while self.__CurrentChar().isspace(): > > > > - if not self.__UndoOneChar(): > > > > - self.__GetOneChar() > > > > + def _UndoToken(self): > > > > + self._UndoOneChar() > > > > + while self._CurrentChar().isspace(): > > > > + if not self._UndoOneChar(): > > > > + self._GetOneChar() > > > > return > > > > > > > > > > > > @@ -1175,118 +1126,118 @@ class FdfParser: > > > > CurrentLine = self.CurrentLineNumber > > > > while CurrentLine == self.CurrentLineNumber: > > > > > > > > - TempChar = self.__CurrentChar() > > > > + TempChar = self._CurrentChar() > > > > # Try to find the end char that is not a space and not in seperator tuple. > > > > # That is, when we got a space or any char in the tuple, we got the end of token. 
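
One note on the pervasive __Foo -> _Foo rename running through these hunks (item 8 of the commit message): a leading double underscore triggers Python name mangling, which is stricter than intended for helpers that are private only by convention and gets in the way of subclassing or external tooling. A quick illustration, not patch code:

    # Leading "__" is mangled to "_ClassName__name"; a single "_" is a
    # plain attribute that merely signals "internal" by convention.
    class Parser:
        def __Hidden(self):   return 'stored as _Parser__Hidden'
        def _Internal(self):  return 'stored as _Internal'

    p = Parser()
    print(hasattr(p, '_Parser__Hidden'))  # True  -- the mangled name
    print(hasattr(p, '_Internal'))        # True
    print(hasattr(p, '__Hidden'))         # False -- original name is gone
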
> > > > - if not str(TempChar).isspace() and not TempChar in SEPERATOR_TUPLE: > > > > - if not self.__UndoOneChar(): > > > > + if not str(TempChar).isspace() and not TempChar in SEPARATORS: > > > > + if not self._UndoOneChar(): > > > > return > > > > # if we happen to meet a seperator as the first char, we must proceed to get it. > > > > # That is, we get a token that is a seperator char. nomally it is the boundary of other tokens. > > > > - elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPERATOR_TUPLE: > > > > + elif StartPos == self.CurrentOffsetWithinLine and TempChar in SEPARATORS: > > > > return > > > > else: > > > > break > > > > > > > > - self.__GetOneChar() > > > > + self._GetOneChar() > > > > > > > > - ## __GetNextHexNumber() method > > > > + ## _GetNextHexNumber() method > > > > # > > > > # Get next HEX data before a seperator > > > > - # If found, the HEX data is put into self.__Token > > > > + # If found, the HEX data is put into self._Token > > > > # > > > > # @param self The object pointer > > > > # @retval True Successfully find a HEX data, file buffer pointer moved forward > > > > # @retval False Not able to find a HEX data, file buffer pointer not changed > > > > # > > > > - def __GetNextHexNumber(self): > > > > - if not self.__GetNextToken(): > > > > + def _GetNextHexNumber(self): > > > > + if not self._GetNextToken(): > > > > return False > > > > - if gHexPatternAll.match(self.__Token): > > > > + if gHexPatternAll.match(self._Token): > > > > return True > > > > else: > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > return False > > > > > > > > - ## __GetNextDecimalNumber() method > > > > + ## _GetNextDecimalNumber() method > > > > # > > > > # Get next decimal data before a seperator > > > > - # If found, the decimal data is put into self.__Token > > > > + # If found, the decimal data is put into self._Token > > > > # > > > > # @param self The object pointer > > > > # @retval True Successfully find a decimal data, file buffer pointer moved forward > > > > # @retval False Not able to find a decimal data, file buffer pointer not changed > > > > # > > > > - def __GetNextDecimalNumber(self): > > > > - if not self.__GetNextToken(): > > > > + def _GetNextDecimalNumber(self): > > > > + if not self._GetNextToken(): > > > > return False > > > > - if self.__Token.isdigit(): > > > > + if self._Token.isdigit(): > > > > return True > > > > else: > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > return False > > > > > > > > - def __GetNextPcdSettings(self): > > > > - if not self.__GetNextWord(): > > > > + def _GetNextPcdSettings(self): > > > > + if not self._GetNextWord(): > > > > raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) > > > > - pcdTokenSpaceCName = self.__Token > > > > + pcdTokenSpaceCName = self._Token > > > > > > > > - if not self.__IsToken( "."): > > > > + if not self._IsToken(TAB_SPLIT): > > > > raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextWord(): > > > > + if not self._GetNextWord(): > > > > raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) > > > > - pcdCName = self.__Token > > > > + pcdCName = self._Token > > > > > > > > Fields = [] > > > > - while self.__IsToken("."): > > > > - if not self.__GetNextPcdWord(): > > > > + while self._IsToken(TAB_SPLIT): > > > > + if not self._GetNextPcdWord(): > > > > raise 
Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, > > > self.CurrentLineNumber) > > > > - Fields.append(self.__Token) > > > > + Fields.append(self._Token) > > > > > > > > - return (pcdCName, pcdTokenSpaceCName,".".join(Fields)) > > > > + return (pcdCName, pcdTokenSpaceCName,TAB_SPLIT.join(Fields)) > > > > > > > > - ## __GetStringData() method > > > > + ## _GetStringData() method > > > > # > > > > # Get string contents quoted in "" > > > > - # If found, the decimal data is put into self.__Token > > > > + # If found, the decimal data is put into self._Token > > > > # > > > > # @param self The object pointer > > > > # @retval True Successfully find a string data, file buffer pointer moved forward > > > > # @retval False Not able to find a string data, file buffer pointer not changed > > > > # > > > > - def __GetStringData(self): > > > > - if self.__Token.startswith("\"") or self.__Token.startswith("L\""): > > > > - self.__UndoToken() > > > > - self.__SkipToToken("\"") > > > > + def _GetStringData(self): > > > > + if self._Token.startswith(T_CHAR_DOUBLE_QUOTE) or self._Token.startswith("L\""): > > > > + self._UndoToken() > > > > + self._SkipToToken(T_CHAR_DOUBLE_QUOTE) > > > > currentLineNumber = self.CurrentLineNumber > > > > > > > > - if not self.__SkipToToken("\""): > > > > + if not self._SkipToToken(T_CHAR_DOUBLE_QUOTE): > > > > raise Warning("Missing Quote \" for String", self.FileName, self.CurrentLineNumber) > > > > if currentLineNumber != self.CurrentLineNumber: > > > > raise Warning("Missing Quote \" for String", self.FileName, self.CurrentLineNumber) > > > > - self.__Token = self.__SkippedChars.rstrip('\"') > > > > + self._Token = self._SkippedChars.rstrip(T_CHAR_DOUBLE_QUOTE) > > > > return True > > > > > > > > - elif self.__Token.startswith("\'") or self.__Token.startswith("L\'"): > > > > - self.__UndoToken() > > > > - self.__SkipToToken("\'") > > > > + elif self._Token.startswith(T_CHAR_SINGLE_QUOTE) or self._Token.startswith("L\'"): > > > > + self._UndoToken() > > > > + self._SkipToToken(T_CHAR_SINGLE_QUOTE) > > > > currentLineNumber = self.CurrentLineNumber > > > > > > > > - if not self.__SkipToToken("\'"): > > > > + if not self._SkipToToken(T_CHAR_SINGLE_QUOTE): > > > > raise Warning("Missing Quote \' for String", self.FileName, self.CurrentLineNumber) > > > > if currentLineNumber != self.CurrentLineNumber: > > > > raise Warning("Missing Quote \' for String", self.FileName, self.CurrentLineNumber) > > > > - self.__Token = self.__SkippedChars.rstrip('\'') > > > > + self._Token = self._SkippedChars.rstrip(T_CHAR_SINGLE_QUOTE) > > > > return True > > > > > > > > else: > > > > return False > > > > > > > > - ## __SkipToToken() method > > > > + ## _SkipToToken() method > > > > # > > > > # Search forward in file buffer for the string > > > > - # The skipped chars are put into self.__SkippedChars > > > > + # The skipped chars are put into self._SkippedChars > > > > # > > > > # @param self The object pointer > > > > # @param String The string to search > > > > @@ -1294,25 +1245,25 @@ class FdfParser: > > > > # @retval True Successfully find the string, file buffer pointer moved forward > > > > # @retval False Not able to find the string, file buffer pointer not changed > > > > # > > > > - def __SkipToToken(self, String, IgnoreCase = False): > > > > + def _SkipToToken(self, String, IgnoreCase = False): > > > > StartPos = self.GetFileBufferPos() > > > > > > > > - self.__SkippedChars = "" > > > > - while not self.__EndOfFile(): > > > > + self._SkippedChars = "" > 
> > > + while not self._EndOfFile(): > > > > index = -1 > > > > if IgnoreCase: > > > > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].upper().find(String.upper()) > > > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].upper().find(String.upper()) > > > > else: > > > > - index = self.__CurrentLine()[self.CurrentOffsetWithinLine : ].find(String) > > > > + index = self._CurrentLine()[self.CurrentOffsetWithinLine: ].find(String) > > > > if index == 0: > > > > self.CurrentOffsetWithinLine += len(String) > > > > - self.__SkippedChars += String > > > > + self._SkippedChars += String > > > > return True > > > > - self.__SkippedChars += str(self.__CurrentChar()) > > > > - self.__GetOneChar() > > > > + self._SkippedChars += str(self._CurrentChar()) > > > > + self._GetOneChar() > > > > > > > > - self.SetFileBufferPos( StartPos) > > > > - self.__SkippedChars = "" > > > > + self.SetFileBufferPos(StartPos) > > > > + self._SkippedChars = "" > > > > return False > > > > > > > > ## GetFileBufferPos() method > > > > @@ -1343,18 +1294,18 @@ class FdfParser: > > > > # @param self The object pointer > > > > # > > > > def Preprocess(self): > > > > - self.__StringToList() > > > > + self._StringToList() > > > > self.PreprocessFile() > > > > self.PreprocessIncludeFile() > > > > - self.__StringToList() > > > > + self._StringToList() > > > > self.PreprocessFile() > > > > self.PreprocessConditionalStatement() > > > > - self.__StringToList() > > > > - for Pos in self.__WipeOffArea: > > > > - self.__ReplaceFragment(Pos[0], Pos[1]) > > > > + self._StringToList() > > > > + for Pos in self._WipeOffArea: > > > > + self._ReplaceFragment(Pos[0], Pos[1]) > > > > self.Profile.FileLinesList = ["".join(list) for list in self.Profile.FileLinesList] > > > > > > > > - while self.__GetDefines(): > > > > + while self._GetDefines(): > > > > pass > > > > > > > > ## ParseFile() method > > > > @@ -1365,19 +1316,18 @@ class FdfParser: > > > > # @param self The object pointer > > > > # > > > > def ParseFile(self): > > > > - > > > > try: > > > > self.Preprocess() > > > > - self.__GetError() > > > > + self._GetError() > > > > # > > > > # Keep processing sections of the FDF until no new sections or a syntax error is found > > > > # > > > > - while self.__GetFd() or self.__GetFv() or self.__GetFmp() or self.__GetCapsule() or self.__GetVtf() or self.__GetRule() > > or > > > self.__GetOptionRom(): > > > > + while self._GetFd() or self._GetFv() or self._GetFmp() or self._GetCapsule() or self._GetVtf() or self._GetRule() or > > > self._GetOptionRom(): > > > > pass > > > > > > > > except Warning as X: > > > > - self.__UndoToken() > > > > - #'\n\tGot Token: \"%s\" from File %s\n' % (self.__Token, FileLineTuple[0]) + \ > > > > + self._UndoToken() > > > > + #'\n\tGot Token: \"%s\" from File %s\n' % (self._Token, FileLineTuple[0]) + \ > > > > # At this point, the closest parent would be the included file itself > > > > Profile = GetParentAtLine(X.OriginalLineNumber) > > > > if Profile is not None: > > > > @@ -1386,7 +1336,7 @@ class FdfParser: > > > > else: > > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > > X.Message += ' near line %d, column %d: %s' \ > > > > - % (FileLineTuple[1], self.CurrentOffsetWithinLine + 1, self.Profile.FileLinesList[self.CurrentLineNumber - > > > 1][self.CurrentOffsetWithinLine :].rstrip('\n').rstrip('\r')) > > > > + % (FileLineTuple[1], self.CurrentOffsetWithinLine + 1, self.Profile.FileLinesList[self.CurrentLineNumber - > > > 
1][self.CurrentOffsetWithinLine:].rstrip(TAB_LINE_BREAK).rstrip(T_CHAR_CR)) > > > > raise > > > > > > > > ## SectionParser() method > > > > @@ -1403,7 +1353,7 @@ class FdfParser: > > > > and not S.startswith("[VTF.") and not S.startswith("[RULE.") and not S.startswith("[OPTIONROM.") and not > > > S.startswith('[FMPPAYLOAD.'): > > > > raise Warning("Unknown section or section appear sequence error (The correct sequence should be [DEFINES], > > [FD.], > > > [FV.], [Capsule.], [VTF.], [Rule.], [OptionRom.], [FMPPAYLOAD.])", self.FileName, self.CurrentLineNumber) > > > > > > > > - ## __GetDefines() method > > > > + ## _GetDefines() method > > > > # > > > > # Get Defines section contents and store its data into AllMacrosList > > > > # > > > > @@ -1411,56 +1361,55 @@ class FdfParser: > > > > # @retval True Successfully find a Defines > > > > # @retval False Not able to find a Defines > > > > # > > > > - def __GetDefines(self): > > > > - > > > > - if not self.__GetNextToken(): > > > > + def _GetDefines(self): > > > > + if not self._GetNextToken(): > > > > return False > > > > > > > > - S = self.__Token.upper() > > > > - if S.startswith("[") and not S.startswith("[DEFINES"): > > > > + S = self._Token.upper() > > > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[DEFINES"): > > > > self.SectionParser(S) > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > return False > > > > > > > > - self.__UndoToken() > > > > - if not self.__IsToken("[DEFINES", True): > > > > + self._UndoToken() > > > > + if not self._IsToken("[DEFINES", True): > > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > > > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > > raise Warning("expected [DEFINES", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken( "]"): > > > > + if not self._IsToken(TAB_SECTION_END): > > > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > > > > > - while self.__GetNextWord(): > > > > + while self._GetNextWord(): > > > > # handle the SET statement > > > > - if self.__Token == 'SET': > > > > - self.__UndoToken() > > > > - self.__GetSetStatement(None) > > > > + if self._Token == 'SET': > > > > + self._UndoToken() > > > > + self._GetSetStatement(None) > > > > continue > > > > > > > > - Macro = self.__Token > > > > + Macro = self._Token > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken() or self.__Token.startswith('['): > > > > + if not self._GetNextToken() or self._Token.startswith(TAB_SECTION_START): > > > > raise Warning("expected MACRO value", self.FileName, self.CurrentLineNumber) > > > > - Value = self.__Token > > > > + Value = self._Token > > > > > > > > return False > > > > > > > > - ##__GetError() method > > > > - def __GetError(self): > > > > + ##_GetError() method > > > > + def _GetError(self): > > > > #save the Current information > > > > CurrentLine = self.CurrentLineNumber > > > > CurrentOffset = self.CurrentOffsetWithinLine > > > > - while 
self.__GetNextToken(): > > > > - if self.__Token == TAB_ERROR: > > > > - EdkLogger.error('FdfParser', ERROR_STATEMENT, self.__CurrentLine().replace(TAB_ERROR, '', 1), > > > File=self.FileName, Line=self.CurrentLineNumber) > > > > + while self._GetNextToken(): > > > > + if self._Token == TAB_ERROR: > > > > + EdkLogger.error('FdfParser', ERROR_STATEMENT, self._CurrentLine().replace(TAB_ERROR, '', 1), > > > File=self.FileName, Line=self.CurrentLineNumber) > > > > self.CurrentLineNumber = CurrentLine > > > > self.CurrentOffsetWithinLine = CurrentOffset > > > > > > > > - ## __GetFd() method > > > > + ## _GetFd() method > > > > # > > > > # Get FD section contents and store its data into FD dictionary of self.Profile > > > > # > > > > @@ -1468,27 +1417,26 @@ class FdfParser: > > > > # @retval True Successfully find a FD > > > > # @retval False Not able to find a FD > > > > # > > > > - def __GetFd(self): > > > > - > > > > - if not self.__GetNextToken(): > > > > + def _GetFd(self): > > > > + if not self._GetNextToken(): > > > > return False > > > > > > > > - S = self.__Token.upper() > > > > - if S.startswith("[") and not S.startswith("[FD."): > > > > + S = self._Token.upper() > > > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[FD."): > > > > if not S.startswith("[FV.") and not S.startswith('[FMPPAYLOAD.') and not S.startswith("[CAPSULE.") \ > > > > and not S.startswith("[VTF.") and not S.startswith("[RULE.") and not S.startswith("[OPTIONROM."): > > > > raise Warning("Unknown section", self.FileName, self.CurrentLineNumber) > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > return False > > > > > > > > - self.__UndoToken() > > > > - if not self.__IsToken("[FD.", True): > > > > + self._UndoToken() > > > > + if not self._IsToken("[FD.", True): > > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > > > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > > raise Warning("expected [FD.]", self.FileName, self.CurrentLineNumber) > > > > > > > > - FdName = self.__GetUiName() > > > > + FdName = self._GetUiName() > > > > if FdName == "": > > > > if len (self.Profile.FdDict) == 0: > > > > FdName = GenFdsGlobalVariable.PlatformName > > > > @@ -1502,56 +1450,56 @@ class FdfParser: > > > > if self.CurrentFdName in self.Profile.FdDict: > > > > raise Warning("Unexpected the same FD name", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken( "]"): > > > > + if not self._IsToken(TAB_SECTION_END): > > > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > > > > > - FdObj = Fd.FD() > > > > + FdObj = FD() > > > > FdObj.FdUiName = self.CurrentFdName > > > > self.Profile.FdDict[self.CurrentFdName] = FdObj > > > > > > > > if len (self.Profile.FdDict) > 1 and self.Profile.FdNameNotSet: > > > > raise Warning("expected all FDs have their name", self.FileName, self.CurrentLineNumber) > > > > > > > > - Status = self.__GetCreateFile(FdObj) > > > > + Status = self._GetCreateFile(FdObj) > > > > if not Status: > > > > raise Warning("FD name error", self.FileName, self.CurrentLineNumber) > > > > > > > > - while self.__GetTokenStatements(FdObj): > > > 
> + while self._GetTokenStatements(FdObj): > > > > pass > > > > for Attr in ("BaseAddress", "Size", "ErasePolarity"): > > > > if getattr(FdObj, Attr) is None: > > > > - self.__GetNextToken() > > > > + self._GetNextToken() > > > > raise Warning("Keyword %s missing" % Attr, self.FileName, self.CurrentLineNumber) > > > > > > > > if not FdObj.BlockSizeList: > > > > FdObj.BlockSizeList.append((1, FdObj.Size, None)) > > > > > > > > - self.__GetDefineStatements(FdObj) > > > > + self._GetDefineStatements(FdObj) > > > > > > > > - self.__GetSetStatements(FdObj) > > > > + self._GetSetStatements(FdObj) > > > > > > > > - if not self.__GetRegionLayout(FdObj): > > > > + if not self._GetRegionLayout(FdObj): > > > > raise Warning("expected region layout", self.FileName, self.CurrentLineNumber) > > > > > > > > - while self.__GetRegionLayout(FdObj): > > > > + while self._GetRegionLayout(FdObj): > > > > pass > > > > return True > > > > > > > > - ## __GetUiName() method > > > > + ## _GetUiName() method > > > > # > > > > # Return the UI name of a section > > > > # > > > > # @param self The object pointer > > > > # @retval FdName UI name > > > > # > > > > - def __GetUiName(self): > > > > + def _GetUiName(self): > > > > Name = "" > > > > - if self.__GetNextWord(): > > > > - Name = self.__Token > > > > + if self._GetNextWord(): > > > > + Name = self._Token > > > > > > > > return Name > > > > > > > > - ## __GetCreateFile() method > > > > + ## _GetCreateFile() method > > > > # > > > > # Return the output file name of object > > > > # > > > > @@ -1559,16 +1507,15 @@ class FdfParser: > > > > # @param Obj object whose data will be stored in file > > > > # @retval FdName UI name > > > > # > > > > - def __GetCreateFile(self, Obj): > > > > - > > > > - if self.__IsKeyword( "CREATE_FILE"): > > > > - if not self.__IsToken( "="): > > > > + def _GetCreateFile(self, Obj): > > > > + if self._IsKeyword("CREATE_FILE"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected file name", self.FileName, self.CurrentLineNumber) > > > > > > > > - FileName = self.__Token > > > > + FileName = self._Token > > > > Obj.CreateFileName = FileName > > > > > > > > return True > > > > @@ -1576,25 +1523,25 @@ class FdfParser: > > > > def SetPcdLocalation(self,pcdpair): > > > > self.Profile.PcdLocalDict[pcdpair] = (self.Profile.FileName,self.CurrentLineNumber) > > > > > > > > - ## __GetTokenStatements() method > > > > + ## _GetTokenStatements() method > > > > # > > > > # Get token statements > > > > # > > > > # @param self The object pointer > > > > # @param Obj for whom token statement is got > > > > # > > > > - def __GetTokenStatements(self, Obj): > > > > - if self.__IsKeyword( "BaseAddress"): > > > > - if not self.__IsToken( "="): > > > > + def _GetTokenStatements(self, Obj): > > > > + if self._IsKeyword("BaseAddress"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextHexNumber(): > > > > + if not self._GetNextHexNumber(): > > > > raise Warning("expected Hex base address", self.FileName, self.CurrentLineNumber) > > > > > > > > - Obj.BaseAddress = self.__Token > > > > + Obj.BaseAddress = self._Token > > > > > > > > - if self.__IsToken( "|"): > > > > - pcdPair = self.__GetNextPcdSettings() > > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > > + 
pcdPair = self._GetNextPcdSettings() > > > > Obj.BaseAddressPcd = pcdPair > > > > self.Profile.PcdDict[pcdPair] = Obj.BaseAddress > > > > self.SetPcdLocalation(pcdPair) > > > > @@ -1602,16 +1549,16 @@ class FdfParser: > > > > self.Profile.PcdFileLineDict[pcdPair] = FileLineTuple > > > > return True > > > > > > > > - if self.__IsKeyword( "Size"): > > > > - if not self.__IsToken( "="): > > > > + if self._IsKeyword("Size"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextHexNumber(): > > > > + if not self._GetNextHexNumber(): > > > > raise Warning("expected Hex size", self.FileName, self.CurrentLineNumber) > > > > > > > > - Size = self.__Token > > > > - if self.__IsToken( "|"): > > > > - pcdPair = self.__GetNextPcdSettings() > > > > + Size = self._Token > > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > > + pcdPair = self._GetNextPcdSettings() > > > > Obj.SizePcd = pcdPair > > > > self.Profile.PcdDict[pcdPair] = Size > > > > self.SetPcdLocalation(pcdPair) > > > > @@ -1620,22 +1567,22 @@ class FdfParser: > > > > Obj.Size = long(Size, 0) > > > > return True > > > > > > > > - if self.__IsKeyword( "ErasePolarity"): > > > > - if not self.__IsToken( "="): > > > > + if self._IsKeyword("ErasePolarity"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected Erase Polarity", self.FileName, self.CurrentLineNumber) > > > > > > > > - if self.__Token != "1" and self.__Token != "0": > > > > + if self._Token != "1" and self._Token != "0": > > > > raise Warning("expected 1 or 0 Erase Polarity", self.FileName, self.CurrentLineNumber) > > > > > > > > - Obj.ErasePolarity = self.__Token > > > > + Obj.ErasePolarity = self._Token > > > > return True > > > > > > > > - return self.__GetBlockStatements(Obj) > > > > + return self._GetBlockStatements(Obj) > > > > > > > > - ## __GetAddressStatements() method > > > > + ## _GetAddressStatements() method > > > > # > > > > # Get address statements > > > > # > > > > @@ -1644,38 +1591,37 @@ class FdfParser: > > > > # @retval True Successfully find > > > > # @retval False Not able to find > > > > # > > > > - def __GetAddressStatements(self, Obj): > > > > - > > > > - if self.__IsKeyword("BsBaseAddress"): > > > > - if not self.__IsToken( "="): > > > > + def _GetAddressStatements(self, Obj): > > > > + if self._IsKeyword("BsBaseAddress"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): > > > > + if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): > > > > raise Warning("expected address", self.FileName, self.CurrentLineNumber) > > > > > > > > - BsAddress = long(self.__Token, 0) > > > > + BsAddress = long(self._Token, 0) > > > > Obj.BsBaseAddress = BsAddress > > > > > > > > - if self.__IsKeyword("RtBaseAddress"): > > > > - if not self.__IsToken( "="): > > > > + if self._IsKeyword("RtBaseAddress"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): > > > > + if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): > > > > 
raise Warning("expected address", self.FileName, self.CurrentLineNumber) > > > > > > > > - RtAddress = long(self.__Token, 0) > > > > + RtAddress = long(self._Token, 0) > > > > Obj.RtBaseAddress = RtAddress > > > > > > > > - ## __GetBlockStatements() method > > > > + ## _GetBlockStatements() method > > > > # > > > > # Get block statements > > > > # > > > > # @param self The object pointer > > > > # @param Obj for whom block statement is got > > > > # > > > > - def __GetBlockStatements(self, Obj): > > > > + def _GetBlockStatements(self, Obj): > > > > IsBlock = False > > > > - while self.__GetBlockStatement(Obj): > > > > + while self._GetBlockStatement(Obj): > > > > IsBlock = True > > > > > > > > Item = Obj.BlockSizeList[-1] > > > > @@ -1683,7 +1629,7 @@ class FdfParser: > > > > raise Warning("expected block statement", self.FileName, self.CurrentLineNumber) > > > > return IsBlock > > > > > > > > - ## __GetBlockStatement() method > > > > + ## _GetBlockStatement() method > > > > # > > > > # Get block statement > > > > # > > > > @@ -1692,20 +1638,20 @@ class FdfParser: > > > > # @retval True Successfully find > > > > # @retval False Not able to find > > > > # > > > > - def __GetBlockStatement(self, Obj): > > > > - if not self.__IsKeyword( "BlockSize"): > > > > + def _GetBlockStatement(self, Obj): > > > > + if not self._IsKeyword("BlockSize"): > > > > return False > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextHexNumber() and not self.__GetNextDecimalNumber(): > > > > + if not self._GetNextHexNumber() and not self._GetNextDecimalNumber(): > > > > raise Warning("expected Hex or Integer block size", self.FileName, self.CurrentLineNumber) > > > > > > > > - BlockSize = self.__Token > > > > + BlockSize = self._Token > > > > BlockSizePcd = None > > > > - if self.__IsToken( "|"): > > > > - PcdPair = self.__GetNextPcdSettings() > > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > > + PcdPair = self._GetNextPcdSettings() > > > > BlockSizePcd = PcdPair > > > > self.Profile.PcdDict[PcdPair] = BlockSize > > > > self.SetPcdLocalation(PcdPair) > > > > @@ -1714,19 +1660,19 @@ class FdfParser: > > > > BlockSize = long(BlockSize, 0) > > > > > > > > BlockNumber = None > > > > - if self.__IsKeyword( "NumBlocks"): > > > > - if not self.__IsToken( "="): > > > > + if self._IsKeyword("NumBlocks"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextDecimalNumber() and not self.__GetNextHexNumber(): > > > > + if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): > > > > raise Warning("expected block numbers", self.FileName, self.CurrentLineNumber) > > > > > > > > - BlockNumber = long(self.__Token, 0) > > > > + BlockNumber = long(self._Token, 0) > > > > > > > > Obj.BlockSizeList.append((BlockSize, BlockNumber, BlockSizePcd)) > > > > return True > > > > > > > > - ## __GetDefineStatements() method > > > > + ## _GetDefineStatements() method > > > > # > > > > # Get define statements > > > > # > > > > @@ -1735,11 +1681,11 @@ class FdfParser: > > > > # @retval True Successfully find > > > > # @retval False Not able to find > > > > # > > > > - def __GetDefineStatements(self, Obj): > > > > - while self.__GetDefineStatement( Obj): > > > > + def _GetDefineStatements(self, Obj): > > > > + while self._GetDefineStatement(Obj): > > > > pass > > > 
> > > > > - ## __GetDefineStatement() method > > > > + ## _GetDefineStatement() method > > > > # > > > > # Get define statement > > > > # > > > > @@ -1748,24 +1694,24 @@ class FdfParser: > > > > # @retval True Successfully find > > > > # @retval False Not able to find > > > > # > > > > - def __GetDefineStatement(self, Obj): > > > > - if self.__IsKeyword("DEFINE"): > > > > - self.__GetNextToken() > > > > - Macro = self.__Token > > > > - if not self.__IsToken( "="): > > > > + def _GetDefineStatement(self, Obj): > > > > + if self._IsKeyword(TAB_DEFINE): > > > > + self._GetNextToken() > > > > + Macro = self._Token > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected value", self.FileName, self.CurrentLineNumber) > > > > > > > > - Value = self.__Token > > > > + Value = self._Token > > > > Macro = '$(' + Macro + ')' > > > > Obj.DefineVarDict[Macro] = Value > > > > return True > > > > > > > > return False > > > > > > > > - ## __GetSetStatements() method > > > > + ## _GetSetStatements() method > > > > # > > > > # Get set statements > > > > # > > > > @@ -1774,11 +1720,11 @@ class FdfParser: > > > > # @retval True Successfully find > > > > # @retval False Not able to find > > > > # > > > > - def __GetSetStatements(self, Obj): > > > > - while self.__GetSetStatement(Obj): > > > > + def _GetSetStatements(self, Obj): > > > > + while self._GetSetStatement(Obj): > > > > pass > > > > > > > > - ## __GetSetStatement() method > > > > + ## _GetSetStatement() method > > > > # > > > > # Get set statement > > > > # > > > > @@ -1787,15 +1733,15 @@ class FdfParser: > > > > # @retval True Successfully find > > > > # @retval False Not able to find > > > > # > > > > - def __GetSetStatement(self, Obj): > > > > - if self.__IsKeyword("SET"): > > > > - PcdPair = self.__GetNextPcdSettings() > > > > + def _GetSetStatement(self, Obj): > > > > + if self._IsKeyword("SET"): > > > > + PcdPair = self._GetNextPcdSettings() > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - Value = self.__GetExpression() > > > > - Value = self.__EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) > > > > + Value = self._GetExpression() > > > > + Value = self._EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) > > > > > > > > if Obj: > > > > Obj.SetVarDict[PcdPair] = Value > > > > @@ -1807,19 +1753,19 @@ class FdfParser: > > > > > > > > return False > > > > > > > > - ## __CalcRegionExpr(self) > > > > + ## _CalcRegionExpr(self) > > > > # > > > > # Calculate expression for offset or size of a region > > > > # > > > > # @return: None if invalid expression > > > > # Calculated number if successfully > > > > # > > > > - def __CalcRegionExpr(self): > > > > + def _CalcRegionExpr(self): > > > > StartPos = self.GetFileBufferPos() > > > > Expr = '' > > > > PairCount = 0 > > > > - while not self.__EndOfFile(): > > > > - CurCh = self.__CurrentChar() > > > > + while not self._EndOfFile(): > > > > + CurCh = self._CurrentChar() > > > > if CurCh == '(': > > > > PairCount += 1 > > > > elif CurCh == ')': > > > > @@ -1828,106 +1774,106 @@ class FdfParser: > > > > if CurCh in '|\r\n' and PairCount == 0: > > > > break > > > > Expr += CurCh > > > > - self.__GetOneChar() > > > > + self._GetOneChar() > > > > try: > > > > 
return long( > > > > ValueExpression(Expr, > > > > - self.__CollectMacroPcd() > > > > + self._CollectMacroPcd() > > > > )(True), 0) > > > > except Exception: > > > > self.SetFileBufferPos(StartPos) > > > > return None > > > > > > > > - ## __GetRegionLayout() method > > > > + ## _GetRegionLayout() method > > > > # > > > > # Get region layout for FD > > > > # > > > > # @param self The object pointer > > > > - # @param Fd for whom region is got > > > > + # @param theFd for whom region is got > > > > # @retval True Successfully find > > > > # @retval False Not able to find > > > > # > > > > - def __GetRegionLayout(self, Fd): > > > > - Offset = self.__CalcRegionExpr() > > > > + def _GetRegionLayout(self, theFd): > > > > + Offset = self._CalcRegionExpr() > > > > if Offset is None: > > > > return False > > > > > > > > - RegionObj = Region.Region() > > > > + RegionObj = Region() > > > > RegionObj.Offset = Offset > > > > - Fd.RegionList.append(RegionObj) > > > > + theFd.RegionList.append(RegionObj) > > > > > > > > - if not self.__IsToken( "|"): > > > > + if not self._IsToken(TAB_VALUE_SPLIT): > > > > raise Warning("expected '|'", self.FileName, self.CurrentLineNumber) > > > > > > > > - Size = self.__CalcRegionExpr() > > > > + Size = self._CalcRegionExpr() > > > > if Size is None: > > > > raise Warning("expected Region Size", self.FileName, self.CurrentLineNumber) > > > > RegionObj.Size = Size > > > > > > > > - if not self.__GetNextWord(): > > > > + if not self._GetNextWord(): > > > > return True > > > > > > > > - if not self.__Token in ("SET", BINARY_FILE_TYPE_FV, "FILE", "DATA", "CAPSULE", "INF"): > > > > + if not self._Token in ("SET", BINARY_FILE_TYPE_FV, "FILE", "DATA", "CAPSULE", "INF"): > > > > # > > > > # If next token is a word which is not a valid FV type, it might be part of [PcdOffset[|PcdSize]] > > > > # Or it might be next region's offset described by an expression which starts with a PCD. 
> > > > # PcdOffset[|PcdSize] or OffsetPcdExpression|Size > > > > # > > > > - self.__UndoToken() > > > > - IsRegionPcd = (RegionSizeGuidPattern.match(self.__CurrentLine()[self.CurrentOffsetWithinLine:]) or > > > > - RegionOffsetPcdPattern.match(self.__CurrentLine()[self.CurrentOffsetWithinLine:])) > > > > + self._UndoToken() > > > > + IsRegionPcd = (RegionSizeGuidPattern.match(self._CurrentLine()[self.CurrentOffsetWithinLine:]) or > > > > + RegionOffsetPcdPattern.match(self._CurrentLine()[self.CurrentOffsetWithinLine:])) > > > > if IsRegionPcd: > > > > - RegionObj.PcdOffset = self.__GetNextPcdSettings() > > > > - self.Profile.PcdDict[RegionObj.PcdOffset] = "0x%08X" % (RegionObj.Offset + long(Fd.BaseAddress, 0)) > > > > + RegionObj.PcdOffset = self._GetNextPcdSettings() > > > > + self.Profile.PcdDict[RegionObj.PcdOffset] = "0x%08X" % (RegionObj.Offset + long(theFd.BaseAddress, 0)) > > > > self.SetPcdLocalation(RegionObj.PcdOffset) > > > > - self.__PcdDict['%s.%s' % (RegionObj.PcdOffset[1], RegionObj.PcdOffset[0])] = "0x%x" % RegionObj.Offset > > > > + self._PcdDict['%s.%s' % (RegionObj.PcdOffset[1], RegionObj.PcdOffset[0])] = "0x%x" % RegionObj.Offset > > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > > self.Profile.PcdFileLineDict[RegionObj.PcdOffset] = FileLineTuple > > > > - if self.__IsToken( "|"): > > > > - RegionObj.PcdSize = self.__GetNextPcdSettings() > > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > > + RegionObj.PcdSize = self._GetNextPcdSettings() > > > > self.Profile.PcdDict[RegionObj.PcdSize] = "0x%08X" % RegionObj.Size > > > > self.SetPcdLocalation(RegionObj.PcdSize) > > > > - self.__PcdDict['%s.%s' % (RegionObj.PcdSize[1], RegionObj.PcdSize[0])] = "0x%x" % RegionObj.Size > > > > + self._PcdDict['%s.%s' % (RegionObj.PcdSize[1], RegionObj.PcdSize[0])] = "0x%x" % RegionObj.Size > > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > > self.Profile.PcdFileLineDict[RegionObj.PcdSize] = FileLineTuple > > > > > > > > - if not self.__GetNextWord(): > > > > + if not self._GetNextWord(): > > > > return True > > > > > > > > - if self.__Token == "SET": > > > > - self.__UndoToken() > > > > - self.__GetSetStatements( RegionObj) > > > > - if not self.__GetNextWord(): > > > > + if self._Token == "SET": > > > > + self._UndoToken() > > > > + self._GetSetStatements(RegionObj) > > > > + if not self._GetNextWord(): > > > > return True > > > > > > > > - elif self.__Token == BINARY_FILE_TYPE_FV: > > > > - self.__UndoToken() > > > > - self.__GetRegionFvType( RegionObj) > > > > + elif self._Token == BINARY_FILE_TYPE_FV: > > > > + self._UndoToken() > > > > + self._GetRegionFvType(RegionObj) > > > > > > > > - elif self.__Token == "CAPSULE": > > > > - self.__UndoToken() > > > > - self.__GetRegionCapType( RegionObj) > > > > + elif self._Token == "CAPSULE": > > > > + self._UndoToken() > > > > + self._GetRegionCapType(RegionObj) > > > > > > > > - elif self.__Token == "FILE": > > > > - self.__UndoToken() > > > > - self.__GetRegionFileType(RegionObj) > > > > + elif self._Token == "FILE": > > > > + self._UndoToken() > > > > + self._GetRegionFileType(RegionObj) > > > > > > > > - elif self.__Token == "INF": > > > > - self.__UndoToken() > > > > + elif self._Token == "INF": > > > > + self._UndoToken() > > > > RegionObj.RegionType = "INF" > > > > - while self.__IsKeyword("INF"): > > > > - self.__UndoToken() > > > > - ffsInf = self.__ParseInfStatement() > > > > + while self._IsKeyword("INF"): > > > > + self._UndoToken() > > > > + ffsInf = 
self._ParseInfStatement() > > > > if not ffsInf: > > > > break > > > > RegionObj.RegionDataList.append(ffsInf) > > > > > > > > - elif self.__Token == "DATA": > > > > - self.__UndoToken() > > > > - self.__GetRegionDataType(RegionObj) > > > > + elif self._Token == "DATA": > > > > + self._UndoToken() > > > > + self._GetRegionDataType(RegionObj) > > > > else: > > > > - self.__UndoToken() > > > > - if self.__GetRegionLayout(Fd): > > > > + self._UndoToken() > > > > + if self._GetRegionLayout(theFd): > > > > return True > > > > raise Warning("A valid region type was not found. " > > > > "Valid types are [SET, FV, CAPSULE, FILE, DATA, INF]. This error occurred", > > > > @@ -1935,189 +1881,185 @@ class FdfParser: > > > > > > > > return True > > > > > > > > - ## __GetRegionFvType() method > > > > + ## _GetRegionFvType() method > > > > # > > > > # Get region fv data for region > > > > # > > > > # @param self The object pointer > > > > # @param RegionObj for whom region data is got > > > > # > > > > - def __GetRegionFvType(self, RegionObj): > > > > - > > > > - if not self.__IsKeyword( BINARY_FILE_TYPE_FV): > > > > + def _GetRegionFvType(self, RegionObj): > > > > + if not self._IsKeyword(BINARY_FILE_TYPE_FV): > > > > raise Warning("expected Keyword BINARY_FILE_TYPE_FV", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) > > > > > > > > RegionObj.RegionType = BINARY_FILE_TYPE_FV > > > > - RegionObj.RegionDataList.append((self.__Token).upper()) > > > > + RegionObj.RegionDataList.append((self._Token).upper()) > > > > > > > > - while self.__IsKeyword( BINARY_FILE_TYPE_FV): > > > > + while self._IsKeyword(BINARY_FILE_TYPE_FV): > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) > > > > > > > > - RegionObj.RegionDataList.append((self.__Token).upper()) > > > > + RegionObj.RegionDataList.append((self._Token).upper()) > > > > > > > > - ## __GetRegionCapType() method > > > > + ## _GetRegionCapType() method > > > > # > > > > # Get region capsule data for region > > > > # > > > > # @param self The object pointer > > > > # @param RegionObj for whom region data is got > > > > # > > > > - def __GetRegionCapType(self, RegionObj): > > > > - > > > > - if not self.__IsKeyword("CAPSULE"): > > > > + def _GetRegionCapType(self, RegionObj): > > > > + if not self._IsKeyword("CAPSULE"): > > > > raise Warning("expected Keyword 'CAPSULE'", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected CAPSULE name", self.FileName, self.CurrentLineNumber) > > > > > > > > RegionObj.RegionType = "CAPSULE" > > > > - RegionObj.RegionDataList.append(self.__Token) > > > > + RegionObj.RegionDataList.append(self._Token) > > > > > > > > - while 
self.__IsKeyword("CAPSULE"): > > > > + while self._IsKeyword("CAPSULE"): > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected CAPSULE name", self.FileName, self.CurrentLineNumber) > > > > > > > > - RegionObj.RegionDataList.append(self.__Token) > > > > + RegionObj.RegionDataList.append(self._Token) > > > > > > > > - ## __GetRegionFileType() method > > > > + ## _GetRegionFileType() method > > > > # > > > > # Get region file data for region > > > > # > > > > # @param self The object pointer > > > > # @param RegionObj for whom region data is got > > > > # > > > > - def __GetRegionFileType(self, RegionObj): > > > > - > > > > - if not self.__IsKeyword( "FILE"): > > > > + def _GetRegionFileType(self, RegionObj): > > > > + if not self._IsKeyword("FILE"): > > > > raise Warning("expected Keyword 'FILE'", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected File name", self.FileName, self.CurrentLineNumber) > > > > > > > > RegionObj.RegionType = "FILE" > > > > - RegionObj.RegionDataList.append( self.__Token) > > > > + RegionObj.RegionDataList.append(self._Token) > > > > > > > > - while self.__IsKeyword( "FILE"): > > > > + while self._IsKeyword("FILE"): > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected FILE name", self.FileName, self.CurrentLineNumber) > > > > > > > > - RegionObj.RegionDataList.append(self.__Token) > > > > + RegionObj.RegionDataList.append(self._Token) > > > > > > > > - ## __GetRegionDataType() method > > > > + ## _GetRegionDataType() method > > > > # > > > > # Get region array data for region > > > > # > > > > # @param self The object pointer > > > > # @param RegionObj for whom region data is got > > > > # > > > > - def __GetRegionDataType(self, RegionObj): > > > > - > > > > - if not self.__IsKeyword( "DATA"): > > > > + def _GetRegionDataType(self, RegionObj): > > > > + if not self._IsKeyword("DATA"): > > > > raise Warning("expected Region Data type", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken( "{"): > > > > + if not self._IsToken("{"): > > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextHexNumber(): > > > > + if not self._GetNextHexNumber(): > > > > raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) > > > > > > > > - if len(self.__Token) > 18: > > > > + if len(self._Token) > 18: > > > > raise Warning("Hex string can't be converted to a valid UINT64 value", self.FileName, self.CurrentLineNumber) > > > > > > > > # convert hex string value to byte hex string array > > > > - AllString = self.__Token > > > > + AllString = self._Token > > > > AllStrLen = 
len (AllString) > > > > DataString = "" > > > > while AllStrLen > 4: > > > > - DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + "," > > > > + DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + TAB_COMMA_SPLIT > > > > AllStrLen = AllStrLen - 2 > > > > - DataString = DataString + AllString[:AllStrLen] + "," > > > > + DataString = DataString + AllString[:AllStrLen] + TAB_COMMA_SPLIT > > > > > > > > # byte value array > > > > - if len (self.__Token) <= 4: > > > > - while self.__IsToken(","): > > > > - if not self.__GetNextHexNumber(): > > > > + if len (self._Token) <= 4: > > > > + while self._IsToken(TAB_COMMA_SPLIT): > > > > + if not self._GetNextHexNumber(): > > > > raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) > > > > - if len(self.__Token) > 4: > > > > + if len(self._Token) > 4: > > > > raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) > > > > - DataString += self.__Token > > > > - DataString += "," > > > > + DataString += self._Token > > > > + DataString += TAB_COMMA_SPLIT > > > > > > > > - if not self.__IsToken( "}"): > > > > + if not self._IsToken("}"): > > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > > > - DataString = DataString.rstrip(",") > > > > + DataString = DataString.rstrip(TAB_COMMA_SPLIT) > > > > RegionObj.RegionType = "DATA" > > > > - RegionObj.RegionDataList.append( DataString) > > > > + RegionObj.RegionDataList.append(DataString) > > > > > > > > - while self.__IsKeyword( "DATA"): > > > > + while self._IsKeyword("DATA"): > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken( "{"): > > > > + if not self._IsToken("{"): > > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextHexNumber(): > > > > + if not self._GetNextHexNumber(): > > > > raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) > > > > > > > > - if len(self.__Token) > 18: > > > > + if len(self._Token) > 18: > > > > raise Warning("Hex string can't be converted to a valid UINT64 value", self.FileName, self.CurrentLineNumber) > > > > > > > > # convert hex string value to byte hex string array > > > > - AllString = self.__Token > > > > + AllString = self._Token > > > > AllStrLen = len (AllString) > > > > DataString = "" > > > > while AllStrLen > 4: > > > > - DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + "," > > > > + DataString = DataString + "0x" + AllString[AllStrLen - 2: AllStrLen] + TAB_COMMA_SPLIT > > > > AllStrLen = AllStrLen - 2 > > > > - DataString = DataString + AllString[:AllStrLen] + "," > > > > + DataString = DataString + AllString[:AllStrLen] + TAB_COMMA_SPLIT > > > > > > > > # byte value array > > > > - if len (self.__Token) <= 4: > > > > - while self.__IsToken(","): > > > > - if not self.__GetNextHexNumber(): > > > > + if len (self._Token) <= 4: > > > > + while self._IsToken(TAB_COMMA_SPLIT): > > > > + if not self._GetNextHexNumber(): > > > > raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) > > > > - if len(self.__Token) > 4: > > > > + if len(self._Token) > 4: > > > > raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) > > > > - DataString += self.__Token > > > > - DataString += "," > > > > + DataString += self._Token > > > > + 
DataString += TAB_COMMA_SPLIT > > > > > > > > - if not self.__IsToken( "}"): > > > > + if not self._IsToken("}"): > > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > > > - DataString = DataString.rstrip(",") > > > > - RegionObj.RegionDataList.append( DataString) > > > > + DataString = DataString.rstrip(TAB_COMMA_SPLIT) > > > > + RegionObj.RegionDataList.append(DataString) > > > > > > > > - ## __GetFv() method > > > > + ## _GetFv() method > > > > # > > > > # Get FV section contents and store its data into FV dictionary of self.Profile > > > > # > > > > @@ -2125,68 +2067,68 @@ class FdfParser: > > > > # @retval True Successfully find a FV > > > > # @retval False Not able to find a FV > > > > # > > > > - def __GetFv(self): > > > > - if not self.__GetNextToken(): > > > > + def _GetFv(self): > > > > + if not self._GetNextToken(): > > > > return False > > > > > > > > - S = self.__Token.upper() > > > > - if S.startswith("[") and not S.startswith("[FV."): > > > > + S = self._Token.upper() > > > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[FV."): > > > > self.SectionParser(S) > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > return False > > > > > > > > - self.__UndoToken() > > > > - if not self.__IsToken("[FV.", True): > > > > + self._UndoToken() > > > > + if not self._IsToken("[FV.", True): > > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > > > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > > - raise Warning("Unknown Keyword '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > > + raise Warning("Unknown Keyword '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > > > > > - FvName = self.__GetUiName() > > > > + FvName = self._GetUiName() > > > > self.CurrentFvName = FvName.upper() > > > > > > > > - if not self.__IsToken( "]"): > > > > + if not self._IsToken(TAB_SECTION_END): > > > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > > > > > - FvObj = Fv.FV() > > > > + FvObj = FV() > > > > FvObj.UiFvName = self.CurrentFvName > > > > self.Profile.FvDict[self.CurrentFvName] = FvObj > > > > > > > > - Status = self.__GetCreateFile(FvObj) > > > > + Status = self._GetCreateFile(FvObj) > > > > if not Status: > > > > raise Warning("FV name error", self.FileName, self.CurrentLineNumber) > > > > > > > > - self.__GetDefineStatements(FvObj) > > > > + self._GetDefineStatements(FvObj) > > > > > > > > - self.__GetAddressStatements(FvObj) > > > > + self._GetAddressStatements(FvObj) > > > > > > > > FvObj.FvExtEntryTypeValue = [] > > > > FvObj.FvExtEntryType = [] > > > > FvObj.FvExtEntryData = [] > > > > while True: > > > > - self.__GetSetStatements(FvObj) > > > > + self._GetSetStatements(FvObj) > > > > > > > > - if not (self.__GetBlockStatement(FvObj) or self.__GetFvBaseAddress(FvObj) or > > > > - self.__GetFvForceRebase(FvObj) or self.__GetFvAlignment(FvObj) or > > > > - self.__GetFvAttributes(FvObj) or self.__GetFvNameGuid(FvObj) or > > > > - self.__GetFvExtEntryStatement(FvObj) or self.__GetFvNameString(FvObj)): > > > > + if not (self._GetBlockStatement(FvObj) or 
self._GetFvBaseAddress(FvObj) or > > > > + self._GetFvForceRebase(FvObj) or self._GetFvAlignment(FvObj) or > > > > + self._GetFvAttributes(FvObj) or self._GetFvNameGuid(FvObj) or > > > > + self._GetFvExtEntryStatement(FvObj) or self._GetFvNameString(FvObj)): > > > > break > > > > > > > > if FvObj.FvNameString == 'TRUE' and not FvObj.FvNameGuid: > > > > raise Warning("FvNameString found but FvNameGuid was not found", self.FileName, self.CurrentLineNumber) > > > > > > > > - self.__GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) > > > > - self.__GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) > > > > + self._GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) > > > > + self._GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) > > > > > > > > while True: > > > > - isInf = self.__GetInfStatement(FvObj) > > > > - isFile = self.__GetFileStatement(FvObj, MacroDict = FvObj.DefineVarDict.copy()) > > > > + isInf = self._GetInfStatement(FvObj) > > > > + isFile = self._GetFileStatement(FvObj, MacroDict = FvObj.DefineVarDict.copy()) > > > > if not isInf and not isFile: > > > > break > > > > > > > > return True > > > > > > > > - ## __GetFvAlignment() method > > > > + ## _GetFvAlignment() method > > > > # > > > > # Get alignment for FV > > > > # > > > > @@ -2195,26 +2137,25 @@ class FdfParser: > > > > # @retval True Successfully find a alignment statement > > > > # @retval False Not able to find a alignment statement > > > > # > > > > - def __GetFvAlignment(self, Obj): > > > > - > > > > - if not self.__IsKeyword( "FvAlignment"): > > > > + def _GetFvAlignment(self, Obj): > > > > + if not self._IsKeyword("FvAlignment"): > > > > return False > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected alignment value", self.FileName, self.CurrentLineNumber) > > > > > > > > - if self.__Token.upper() not in ("1", "2", "4", "8", "16", "32", "64", "128", "256", "512", \ > > > > + if self._Token.upper() not in ("1", "2", "4", "8", "16", "32", "64", "128", "256", "512", \ > > > > "1K", "2K", "4K", "8K", "16K", "32K", "64K", "128K", "256K", "512K", \ > > > > "1M", "2M", "4M", "8M", "16M", "32M", "64M", "128M", "256M", "512M", \ > > > > "1G", "2G"): > > > > - raise Warning("Unknown alignment value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > - Obj.FvAlignment = self.__Token > > > > + raise Warning("Unknown alignment value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > + Obj.FvAlignment = self._Token > > > > return True > > > > > > > > - ## __GetFvBaseAddress() method > > > > + ## _GetFvBaseAddress() method > > > > # > > > > # Get BaseAddress for FV > > > > # > > > > @@ -2223,23 +2164,22 @@ class FdfParser: > > > > # @retval True Successfully find a FvBaseAddress statement > > > > # @retval False Not able to find a FvBaseAddress statement > > > > # > > > > - def __GetFvBaseAddress(self, Obj): > > > > - > > > > - if not self.__IsKeyword("FvBaseAddress"): > > > > + def _GetFvBaseAddress(self, Obj): > > > > + if not self._IsKeyword("FvBaseAddress"): > > > > return False > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > 
raise Warning("expected FV base address value", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not BaseAddrValuePattern.match(self.__Token.upper()): > > > > - raise Warning("Unknown FV base address value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > - Obj.FvBaseAddress = self.__Token > > > > + if not BaseAddrValuePattern.match(self._Token.upper()): > > > > + raise Warning("Unknown FV base address value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > + Obj.FvBaseAddress = self._Token > > > > return True > > > > > > > > - ## __GetFvForceRebase() method > > > > + ## _GetFvForceRebase() method > > > > # > > > > # Get FvForceRebase for FV > > > > # > > > > @@ -2248,23 +2188,22 @@ class FdfParser: > > > > # @retval True Successfully find a FvForceRebase statement > > > > # @retval False Not able to find a FvForceRebase statement > > > > # > > > > - def __GetFvForceRebase(self, Obj): > > > > - > > > > - if not self.__IsKeyword("FvForceRebase"): > > > > + def _GetFvForceRebase(self, Obj): > > > > + if not self._IsKeyword("FvForceRebase"): > > > > return False > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected FvForceRebase value", self.FileName, self.CurrentLineNumber) > > > > > > > > - if self.__Token.upper() not in ["TRUE", "FALSE", "0", "0X0", "0X00", "1", "0X1", "0X01"]: > > > > - raise Warning("Unknown FvForceRebase value '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > + if self._Token.upper() not in ["TRUE", "FALSE", "0", "0X0", "0X00", "1", "0X1", "0X01"]: > > > > + raise Warning("Unknown FvForceRebase value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > > > > > - if self.__Token.upper() in ["TRUE", "1", "0X1", "0X01"]: > > > > + if self._Token.upper() in ["TRUE", "1", "0X1", "0X01"]: > > > > Obj.FvForceRebase = True > > > > - elif self.__Token.upper() in ["FALSE", "0", "0X0", "0X00"]: > > > > + elif self._Token.upper() in ["FALSE", "0", "0X0", "0X00"]: > > > > Obj.FvForceRebase = False > > > > else: > > > > Obj.FvForceRebase = None > > > > @@ -2272,7 +2211,7 @@ class FdfParser: > > > > return True > > > > > > > > > > > > - ## __GetFvAttributes() method > > > > + ## _GetFvAttributes() method > > > > # > > > > # Get attributes for FV > > > > # > > > > @@ -2280,31 +2219,31 @@ class FdfParser: > > > > # @param Obj for whom attribute is got > > > > # @retval None > > > > # > > > > - def __GetFvAttributes(self, FvObj): > > > > + def _GetFvAttributes(self, FvObj): > > > > IsWordToken = False > > > > - while self.__GetNextWord(): > > > > + while self._GetNextWord(): > > > > IsWordToken = True > > > > - name = self.__Token > > > > + name = self._Token > > > > if name not in ("ERASE_POLARITY", "MEMORY_MAPPED", \ > > > > "STICKY_WRITE", "LOCK_CAP", "LOCK_STATUS", "WRITE_ENABLED_CAP", \ > > > > "WRITE_DISABLED_CAP", "WRITE_STATUS", "READ_ENABLED_CAP", \ > > > > "READ_DISABLED_CAP", "READ_STATUS", "READ_LOCK_CAP", \ > > > > "READ_LOCK_STATUS", "WRITE_LOCK_CAP", "WRITE_LOCK_STATUS", \ > > > > "WRITE_POLICY_RELIABLE", "WEAK_ALIGNMENT", "FvUsedSizeEnable"): > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > return False > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", 
self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken() or self.__Token.upper() not in ("TRUE", "FALSE", "1", "0"): > > > > + if not self._GetNextToken() or self._Token.upper() not in ("TRUE", "FALSE", "1", "0"): > > > > raise Warning("expected TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) > > > > > > > > - FvObj.FvAttributeDict[name] = self.__Token > > > > + FvObj.FvAttributeDict[name] = self._Token > > > > > > > > return IsWordToken > > > > > > > > - ## __GetFvNameGuid() method > > > > + ## _GetFvNameGuid() method > > > > # > > > > # Get FV GUID for FV > > > > # > > > > @@ -2312,110 +2251,107 @@ class FdfParser: > > > > # @param Obj for whom GUID is got > > > > # @retval None > > > > # > > > > - def __GetFvNameGuid(self, FvObj): > > > > - > > > > - if not self.__IsKeyword( "FvNameGuid"): > > > > + def _GetFvNameGuid(self, FvObj): > > > > + if not self._IsKeyword("FvNameGuid"): > > > > return False > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextGuid(): > > > > + if not self._GetNextGuid(): > > > > raise Warning("expected FV GUID value", self.FileName, self.CurrentLineNumber) > > > > > > > > - FvObj.FvNameGuid = self.__Token > > > > + FvObj.FvNameGuid = self._Token > > > > > > > > return True > > > > > > > > - def __GetFvNameString(self, FvObj): > > > > - > > > > - if not self.__IsKeyword( "FvNameString"): > > > > + def _GetFvNameString(self, FvObj): > > > > + if not self._IsKeyword("FvNameString"): > > > > return False > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken() or self.__Token not in ('TRUE', 'FALSE'): > > > > + if not self._GetNextToken() or self._Token not in ('TRUE', 'FALSE'): > > > > raise Warning("expected TRUE or FALSE for FvNameString", self.FileName, self.CurrentLineNumber) > > > > > > > > - FvObj.FvNameString = self.__Token > > > > + FvObj.FvNameString = self._Token > > > > > > > > return True > > > > > > > > - def __GetFvExtEntryStatement(self, FvObj): > > > > - > > > > - if not (self.__IsKeyword( "FV_EXT_ENTRY") or self.__IsKeyword( "FV_EXT_ENTRY_TYPE")): > > > > + def _GetFvExtEntryStatement(self, FvObj): > > > > + if not (self._IsKeyword("FV_EXT_ENTRY") or self._IsKeyword("FV_EXT_ENTRY_TYPE")): > > > > return False > > > > > > > > - if not self.__IsKeyword ("TYPE"): > > > > + if not self._IsKeyword ("TYPE"): > > > > raise Warning("expected 'TYPE'", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextHexNumber() and not self.__GetNextDecimalNumber(): > > > > + if not self._GetNextHexNumber() and not self._GetNextDecimalNumber(): > > > > raise Warning("expected Hex FV extension entry type value At Line ", self.FileName, self.CurrentLineNumber) > > > > > > > > - FvObj.FvExtEntryTypeValue.append(self.__Token) > > > > + FvObj.FvExtEntryTypeValue.append(self._Token) > > > > > > > > - if not self.__IsToken( "{"): > > > > + if not self._IsToken("{"): > > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsKeyword ("FILE") and not self.__IsKeyword ("DATA"): > 
> > > + if not self._IsKeyword ("FILE") and not self._IsKeyword ("DATA"): > > > > raise Warning("expected 'FILE' or 'DATA'", self.FileName, self.CurrentLineNumber) > > > > > > > > - FvObj.FvExtEntryType.append(self.__Token) > > > > + FvObj.FvExtEntryType.append(self._Token) > > > > > > > > - if self.__Token == 'DATA': > > > > + if self._Token == 'DATA': > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken( "{"): > > > > + if not self._IsToken("{"): > > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextHexNumber(): > > > > + if not self._GetNextHexNumber(): > > > > raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) > > > > > > > > - if len(self.__Token) > 4: > > > > + if len(self._Token) > 4: > > > > raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) > > > > > > > > - DataString = self.__Token > > > > - DataString += "," > > > > + DataString = self._Token > > > > + DataString += TAB_COMMA_SPLIT > > > > > > > > - while self.__IsToken(","): > > > > - if not self.__GetNextHexNumber(): > > > > + while self._IsToken(TAB_COMMA_SPLIT): > > > > + if not self._GetNextHexNumber(): > > > > raise Warning("Invalid Hex number", self.FileName, self.CurrentLineNumber) > > > > - if len(self.__Token) > 4: > > > > + if len(self._Token) > 4: > > > > raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) > > > > - DataString += self.__Token > > > > - DataString += "," > > > > + DataString += self._Token > > > > + DataString += TAB_COMMA_SPLIT > > > > > > > > - if not self.__IsToken( "}"): > > > > + if not self._IsToken("}"): > > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken( "}"): > > > > + if not self._IsToken("}"): > > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > > > - DataString = DataString.rstrip(",") > > > > + DataString = DataString.rstrip(TAB_COMMA_SPLIT) > > > > FvObj.FvExtEntryData.append(DataString) > > > > > > > > - if self.__Token == 'FILE': > > > > + if self._Token == 'FILE': > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected FV Extension Entry file path At Line ", self.FileName, self.CurrentLineNumber) > > > > > > > > - FvObj.FvExtEntryData.append(self.__Token) > > > > + FvObj.FvExtEntryData.append(self._Token) > > > > > > > > - if not self.__IsToken( "}"): > > > > + if not self._IsToken("}"): > > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > > > return True > > > > > > > > - ## __GetAprioriSection() method > > > > + ## _GetAprioriSection() method > > > > # > > > > # Get token statements > > > > # > > > > @@ -2425,56 +2361,55 @@ class FdfParser: > > > > # @retval True Successfully find apriori statement > > > > # @retval False Not able to find apriori statement > > > > # > > > > - def __GetAprioriSection(self, FvObj, MacroDict = {}): > > > > - > > > > - if not self.__IsKeyword( "APRIORI"): > > > > + def _GetAprioriSection(self, FvObj, MacroDict = {}): > > > > + if not self._IsKeyword("APRIORI"): > > 
> > return False > > > > > > > > - if not self.__IsKeyword("PEI") and not self.__IsKeyword("DXE"): > > > > + if not self._IsKeyword("PEI") and not self._IsKeyword("DXE"): > > > > raise Warning("expected Apriori file type", self.FileName, self.CurrentLineNumber) > > > > - AprType = self.__Token > > > > + AprType = self._Token > > > > > > > > - if not self.__IsToken( "{"): > > > > + if not self._IsToken("{"): > > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > > > > > - AprSectionObj = AprioriSection.AprioriSection() > > > > + AprSectionObj = AprioriSection() > > > > AprSectionObj.AprioriType = AprType > > > > > > > > - self.__GetDefineStatements(AprSectionObj) > > > > + self._GetDefineStatements(AprSectionObj) > > > > MacroDict.update(AprSectionObj.DefineVarDict) > > > > > > > > while True: > > > > - IsInf = self.__GetInfStatement(AprSectionObj) > > > > - IsFile = self.__GetFileStatement( AprSectionObj) > > > > + IsInf = self._GetInfStatement(AprSectionObj) > > > > + IsFile = self._GetFileStatement(AprSectionObj) > > > > if not IsInf and not IsFile: > > > > break > > > > > > > > - if not self.__IsToken( "}"): > > > > + if not self._IsToken("}"): > > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > > > FvObj.AprioriSectionList.append(AprSectionObj) > > > > return True > > > > > > > > - def __ParseInfStatement(self): > > > > - if not self.__IsKeyword("INF"): > > > > + def _ParseInfStatement(self): > > > > + if not self._IsKeyword("INF"): > > > > return None > > > > > > > > - ffsInf = FfsInfStatement.FfsInfStatement() > > > > - self.__GetInfOptions(ffsInf) > > > > + ffsInf = FfsInfStatement() > > > > + self._GetInfOptions(ffsInf) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected INF file path", self.FileName, self.CurrentLineNumber) > > > > - ffsInf.InfFileName = self.__Token > > > > + ffsInf.InfFileName = self._Token > > > > if not ffsInf.InfFileName.endswith('.inf'): > > > > raise Warning("expected .inf file path", self.FileName, self.CurrentLineNumber) > > > > > > > > ffsInf.CurrentLineNum = self.CurrentLineNumber > > > > - ffsInf.CurrentLineContent = self.__CurrentLine() > > > > + ffsInf.CurrentLineContent = self._CurrentLine() > > > > > > > > #Replace $(SAPCE) with real space > > > > ffsInf.InfFileName = ffsInf.InfFileName.replace('$(SPACE)', ' ') > > > > > > > > - if ffsInf.InfFileName.replace('$(WORKSPACE)', '').find('$') == -1: > > > > + if ffsInf.InfFileName.replace(TAB_WORKSPACE, '').find('$') == -1: > > > > #do case sensitive check for file path > > > > ErrorCode, ErrorInfo = PathClass(NormPath(ffsInf.InfFileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() > > > > if ErrorCode != 0: > > > > @@ -2492,16 +2427,16 @@ class FdfParser: > > > > else: > > > > self.Profile.InfDict['ArchTBD'].append(ffsInf.InfFileName) > > > > > > > > - if self.__IsToken('|'): > > > > - if self.__IsKeyword('RELOCS_STRIPPED'): > > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > > + if self._IsKeyword('RELOCS_STRIPPED'): > > > > ffsInf.KeepReloc = False > > > > - elif self.__IsKeyword('RELOCS_RETAINED'): > > > > + elif self._IsKeyword('RELOCS_RETAINED'): > > > > ffsInf.KeepReloc = True > > > > else: > > > > - raise Warning("Unknown reloc strip flag '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > + raise Warning("Unknown reloc strip flag '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > return ffsInf > > > > > > > > - ## 
__GetInfStatement() method > > > > + ## _GetInfStatement() method > > > > # > > > > # Get INF statements > > > > # > > > > @@ -2510,86 +2445,86 @@ class FdfParser: > > > > # @retval True Successfully find inf statement > > > > # @retval False Not able to find inf statement > > > > # > > > > - def __GetInfStatement(self, Obj, ForCapsule=False): > > > > - ffsInf = self.__ParseInfStatement() > > > > + def _GetInfStatement(self, Obj, ForCapsule=False): > > > > + ffsInf = self._ParseInfStatement() > > > > if not ffsInf: > > > > return False > > > > > > > > if ForCapsule: > > > > - capsuleFfs = CapsuleData.CapsuleFfs() > > > > - capsuleFfs.Ffs = ffsInf > > > > - Obj.CapsuleDataList.append(capsuleFfs) > > > > + myCapsuleFfs = CapsuleFfs() > > > > + myCapsuleFfs.Ffs = ffsInf > > > > + Obj.CapsuleDataList.append(myCapsuleFfs) > > > > else: > > > > Obj.FfsList.append(ffsInf) > > > > return True > > > > > > > > - ## __GetInfOptions() method > > > > + ## _GetInfOptions() method > > > > # > > > > # Get options for INF > > > > # > > > > # @param self The object pointer > > > > # @param FfsInfObj for whom option is got > > > > # > > > > - def __GetInfOptions(self, FfsInfObj): > > > > - if self.__IsKeyword("FILE_GUID"): > > > > - if not self.__IsToken("="): > > > > + def _GetInfOptions(self, FfsInfObj): > > > > + if self._IsKeyword("FILE_GUID"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextGuid(): > > > > + if not self._GetNextGuid(): > > > > raise Warning("expected GUID value", self.FileName, self.CurrentLineNumber) > > > > - FfsInfObj.OverrideGuid = self.__Token > > > > + FfsInfObj.OverrideGuid = self._Token > > > > > > > > - if self.__IsKeyword( "RuleOverride"): > > > > - if not self.__IsToken( "="): > > > > + if self._IsKeyword("RuleOverride"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected Rule name", self.FileName, self.CurrentLineNumber) > > > > - FfsInfObj.Rule = self.__Token > > > > + FfsInfObj.Rule = self._Token > > > > > > > > - if self.__IsKeyword( "VERSION"): > > > > - if not self.__IsToken( "="): > > > > + if self._IsKeyword("VERSION"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected Version", self.FileName, self.CurrentLineNumber) > > > > > > > > - if self.__GetStringData(): > > > > - FfsInfObj.Version = self.__Token > > > > + if self._GetStringData(): > > > > + FfsInfObj.Version = self._Token > > > > > > > > - if self.__IsKeyword( BINARY_FILE_TYPE_UI): > > > > - if not self.__IsToken( "="): > > > > + if self._IsKeyword(BINARY_FILE_TYPE_UI): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected UI name", self.FileName, self.CurrentLineNumber) > > > > > > > > - if self.__GetStringData(): > > > > - FfsInfObj.Ui = self.__Token > > > > + if self._GetStringData(): > > > > + FfsInfObj.Ui = self._Token > > > > > > > > - if self.__IsKeyword( "USE"): > > > > - if not self.__IsToken( "="): > > > > + if self._IsKeyword("USE"): > > > > + if 
not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected ARCH name", self.FileName, self.CurrentLineNumber) > > > > - FfsInfObj.UseArch = self.__Token > > > > + FfsInfObj.UseArch = self._Token > > > > > > > > > > > > - if self.__GetNextToken(): > > > > - p = > > > re.compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\))') > > > > - if p.match(self.__Token) and p.match(self.__Token).span()[1] == len(self.__Token): > > > > - FfsInfObj.KeyStringList.append(self.__Token) > > > > - if not self.__IsToken(","): > > > > + if self._GetNextToken(): > > > > + p = > > > compile(r'([a-zA-Z0-9\-]+|\$\(TARGET\)|\*)_([a-zA-Z0-9\-]+|\$\(TOOL_CHAIN_TAG\)|\*)_([a-zA-Z0-9\-]+|\$\(ARCH\))') > > > > + if p.match(self._Token) and p.match(self._Token).span()[1] == len(self._Token): > > > > + FfsInfObj.KeyStringList.append(self._Token) > > > > + if not self._IsToken(TAB_COMMA_SPLIT): > > > > return > > > > else: > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > return > > > > > > > > - while self.__GetNextToken(): > > > > - if not p.match(self.__Token): > > > > + while self._GetNextToken(): > > > > + if not p.match(self._Token): > > > > raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) > > > > - FfsInfObj.KeyStringList.append(self.__Token) > > > > + FfsInfObj.KeyStringList.append(self._Token) > > > > > > > > - if not self.__IsToken(","): > > > > + if not self._IsToken(TAB_COMMA_SPLIT): > > > > break > > > > > > > > - ## __GetFileStatement() method > > > > + ## _GetFileStatement() method > > > > # > > > > # Get FILE statements > > > > # > > > > @@ -2599,42 +2534,41 @@ class FdfParser: > > > > # @retval True Successfully find FILE statement > > > > # @retval False Not able to find FILE statement > > > > # > > > > - def __GetFileStatement(self, Obj, ForCapsule = False, MacroDict = {}): > > > > - > > > > - if not self.__IsKeyword( "FILE"): > > > > + def _GetFileStatement(self, Obj, ForCapsule = False, MacroDict = {}): > > > > + if not self._IsKeyword("FILE"): > > > > return False > > > > > > > > - if not self.__GetNextWord(): > > > > + if not self._GetNextWord(): > > > > raise Warning("expected FFS type", self.FileName, self.CurrentLineNumber) > > > > > > > > - if ForCapsule and self.__Token == 'DATA': > > > > - self.__UndoToken() > > > > - self.__UndoToken() > > > > + if ForCapsule and self._Token == 'DATA': > > > > + self._UndoToken() > > > > + self._UndoToken() > > > > return False > > > > > > > > - FfsFileObj = FfsFileStatement.FileStatement() > > > > - FfsFileObj.FvFileType = self.__Token > > > > + FfsFileObj = FileStatement() > > > > + FfsFileObj.FvFileType = self._Token > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextGuid(): > > > > - if not self.__GetNextWord(): > > > > + if not self._GetNextGuid(): > > > > + if not self._GetNextWord(): > > > > raise Warning("expected File GUID", self.FileName, self.CurrentLineNumber) > > > > - if self.__Token == 'PCD': > > > > - if not self.__IsToken( "("): > > > > + if self._Token == 'PCD': > > > > + if not self._IsToken("("): > > > > raise Warning("expected '('", self.FileName, self.CurrentLineNumber) > > > > - PcdPair = 
self.__GetNextPcdSettings() > > > > - if not self.__IsToken( ")"): > > > > + PcdPair = self._GetNextPcdSettings() > > > > + if not self._IsToken(")"): > > > > raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) > > > > - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' > > > > + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' > > > > > > > > - FfsFileObj.NameGuid = self.__Token > > > > + FfsFileObj.NameGuid = self._Token > > > > > > > > - self.__GetFilePart( FfsFileObj, MacroDict.copy()) > > > > + self._GetFilePart(FfsFileObj, MacroDict.copy()) > > > > > > > > if ForCapsule: > > > > - capsuleFfs = CapsuleData.CapsuleFfs() > > > > + capsuleFfs = CapsuleFfs() > > > > capsuleFfs.Ffs = FfsFileObj > > > > Obj.CapsuleDataList.append(capsuleFfs) > > > > else: > > > > @@ -2642,7 +2576,7 @@ class FdfParser: > > > > > > > > return True > > > > > > > > - ## __FileCouldHaveRelocFlag() method > > > > + ## _FileCouldHaveRelocFlag() method > > > > # > > > > # Check whether reloc strip flag can be set for a file type. > > > > # > > > > @@ -2651,13 +2585,13 @@ class FdfParser: > > > > # @retval False No way to have it > > > > # > > > > @staticmethod > > > > - def __FileCouldHaveRelocFlag (FileType): > > > > + def _FileCouldHaveRelocFlag (FileType): > > > > if FileType in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, 'PEI_DXE_COMBO'): > > > > return True > > > > else: > > > > return False > > > > > > > > - ## __SectionCouldHaveRelocFlag() method > > > > + ## _SectionCouldHaveRelocFlag() method > > > > # > > > > # Check whether reloc strip flag can be set for a section type. > > > > # > > > > @@ -2666,13 +2600,13 @@ class FdfParser: > > > > # @retval False No way to have it > > > > # > > > > @staticmethod > > > > - def __SectionCouldHaveRelocFlag (SectionType): > > > > + def _SectionCouldHaveRelocFlag (SectionType): > > > > if SectionType in (BINARY_FILE_TYPE_TE, BINARY_FILE_TYPE_PE32): > > > > return True > > > > else: > > > > return False > > > > > > > > - ## __GetFilePart() method > > > > + ## _GetFilePart() method > > > > # > > > > # Get components for FILE statement > > > > # > > > > @@ -2680,58 +2614,57 @@ class FdfParser: > > > > # @param FfsFileObj for whom component is got > > > > # @param MacroDict dictionary used to replace macro > > > > # > > > > - def __GetFilePart(self, FfsFileObj, MacroDict = {}): > > > > + def _GetFilePart(self, FfsFileObj, MacroDict = {}): > > > > + self._GetFileOpts(FfsFileObj) > > > > > > > > - self.__GetFileOpts( FfsFileObj) > > > > - > > > > - if not self.__IsToken("{"): > > > > - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): > > > > - if self.__FileCouldHaveRelocFlag(FfsFileObj.FvFileType): > > > > - if self.__Token == 'RELOCS_STRIPPED': > > > > + if not self._IsToken("{"): > > > > + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): > > > > + if self._FileCouldHaveRelocFlag(FfsFileObj.FvFileType): > > > > + if self._Token == 'RELOCS_STRIPPED': > > > > FfsFileObj.KeepReloc = False > > > > else: > > > > FfsFileObj.KeepReloc = True > > > > else: > > > > raise Warning("File type %s could not have reloc strip flag%d" % (FfsFileObj.FvFileType, > > > self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken("{"): > > > > + if not self._IsToken("{"): > > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise 
Warning("expected File name or section data", self.FileName, self.CurrentLineNumber) > > > > > > > > - if self.__Token == BINARY_FILE_TYPE_FV: > > > > - if not self.__IsToken( "="): > > > > + if self._Token == BINARY_FILE_TYPE_FV: > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) > > > > - FfsFileObj.FvName = self.__Token > > > > + FfsFileObj.FvName = self._Token > > > > > > > > - elif self.__Token == "FD": > > > > - if not self.__IsToken( "="): > > > > + elif self._Token == "FD": > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected FD name", self.FileName, self.CurrentLineNumber) > > > > - FfsFileObj.FdName = self.__Token > > > > + FfsFileObj.FdName = self._Token > > > > > > > > - elif self.__Token in ("DEFINE", "APRIORI", "SECTION"): > > > > - self.__UndoToken() > > > > - self.__GetSectionData( FfsFileObj, MacroDict) > > > > + elif self._Token in (TAB_DEFINE, "APRIORI", "SECTION"): > > > > + self._UndoToken() > > > > + self._GetSectionData(FfsFileObj, MacroDict) > > > > > > > > elif hasattr(FfsFileObj, 'FvFileType') and FfsFileObj.FvFileType == 'RAW': > > > > - self.__UndoToken() > > > > - self.__GetRAWData(FfsFileObj, MacroDict) > > > > + self._UndoToken() > > > > + self._GetRAWData(FfsFileObj, MacroDict) > > > > > > > > else: > > > > FfsFileObj.CurrentLineNum = self.CurrentLineNumber > > > > - FfsFileObj.CurrentLineContent = self.__CurrentLine() > > > > - FfsFileObj.FileName = self.__Token.replace('$(SPACE)', ' ') > > > > - self.__VerifyFile(FfsFileObj.FileName) > > > > + FfsFileObj.CurrentLineContent = self._CurrentLine() > > > > + FfsFileObj.FileName = self._Token.replace('$(SPACE)', ' ') > > > > + self._VerifyFile(FfsFileObj.FileName) > > > > > > > > - if not self.__IsToken( "}"): > > > > + if not self._IsToken("}"): > > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > > > - ## __GetRAWData() method > > > > + ## _GetRAWData() method > > > > # > > > > # Get RAW data for FILE statement > > > > # > > > > @@ -2739,33 +2672,32 @@ class FdfParser: > > > > # @param FfsFileObj for whom section is got > > > > # @param MacroDict dictionary used to replace macro > > > > # > > > > - def __GetRAWData(self, FfsFileObj, MacroDict = {}): > > > > + def _GetRAWData(self, FfsFileObj, MacroDict = {}): > > > > FfsFileObj.FileName = [] > > > > FfsFileObj.SubAlignment = [] > > > > while True: > > > > AlignValue = None > > > > - if self.__GetAlignment(): > > > > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > + if self._GetAlignment(): > > > > + if self._Token not in ALIGNMENTS: > > > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > #For FFS, Auto is default option same to "" > > > > - if not self.__Token == "Auto": > > > > - AlignValue = self.__Token > > > > - if not self.__GetNextToken(): > > > > + if not self._Token == "Auto": > > > > + AlignValue = self._Token > > > > + if 
not self._GetNextToken(): > > > > raise Warning("expected Filename value", self.FileName, self.CurrentLineNumber) > > > > > > > > - FileName = self.__Token.replace('$(SPACE)', ' ') > > > > + FileName = self._Token.replace('$(SPACE)', ' ') > > > > if FileName == '}': > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > raise Warning("expected Filename value", self.FileName, self.CurrentLineNumber) > > > > > > > > - self.__VerifyFile(FileName) > > > > + self._VerifyFile(FileName) > > > > File = PathClass(NormPath(FileName), GenFdsGlobalVariable.WorkSpaceDir) > > > > FfsFileObj.FileName.append(File.Path) > > > > FfsFileObj.SubAlignment.append(AlignValue) > > > > > > > > - if self.__IsToken( "}"): > > > > - self.__UndoToken() > > > > + if self._IsToken("}"): > > > > + self._UndoToken() > > > > break > > > > > > > > if len(FfsFileObj.SubAlignment) == 1: > > > > @@ -2773,45 +2705,43 @@ class FdfParser: > > > > if len(FfsFileObj.FileName) == 1: > > > > FfsFileObj.FileName = FfsFileObj.FileName[0] > > > > > > > > - ## __GetFileOpts() method > > > > + ## _GetFileOpts() method > > > > # > > > > # Get options for FILE statement > > > > # > > > > # @param self The object pointer > > > > # @param FfsFileObj for whom options is got > > > > # > > > > - def __GetFileOpts(self, FfsFileObj): > > > > - > > > > - if self.__GetNextToken(): > > > > - if TokenFindPattern.match(self.__Token): > > > > - FfsFileObj.KeyStringList.append(self.__Token) > > > > - if self.__IsToken(","): > > > > - while self.__GetNextToken(): > > > > - if not TokenFindPattern.match(self.__Token): > > > > + def _GetFileOpts(self, FfsFileObj): > > > > + if self._GetNextToken(): > > > > + if TokenFindPattern.match(self._Token): > > > > + FfsFileObj.KeyStringList.append(self._Token) > > > > + if self._IsToken(TAB_COMMA_SPLIT): > > > > + while self._GetNextToken(): > > > > + if not TokenFindPattern.match(self._Token): > > > > raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, > > self.CurrentLineNumber) > > > > - FfsFileObj.KeyStringList.append(self.__Token) > > > > + FfsFileObj.KeyStringList.append(self._Token) > > > > > > > > - if not self.__IsToken(","): > > > > + if not self._IsToken(TAB_COMMA_SPLIT): > > > > break > > > > > > > > else: > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > > > > > - if self.__IsKeyword( "FIXED", True): > > > > + if self._IsKeyword("FIXED", True): > > > > FfsFileObj.Fixed = True > > > > > > > > - if self.__IsKeyword( "CHECKSUM", True): > > > > + if self._IsKeyword("CHECKSUM", True): > > > > FfsFileObj.CheckSum = True > > > > > > > > - if self.__GetAlignment(): > > > > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > + if self._GetAlignment(): > > > > + if self._Token not in ALIGNMENTS: > > > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > #For FFS, Auto is default option same to "" > > > > - if not self.__Token == "Auto": > > > > - FfsFileObj.Alignment = self.__Token > > > > + if not self._Token == "Auto": > > > > + FfsFileObj.Alignment = self._Token > > > > > > > > - ## __GetAlignment() method > > > > + ## _GetAlignment() method > > > > # > > > > # Return the alignment value > > > > # > > > > @@ -2819,18 +2749,18 @@ class FdfParser: > > > > # @retval True Successfully find 
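
Two small idioms in the RAW-data hunk above are easy to miss in the diff noise: the literal alignment tuple is replaced by a shared ALIGNMENTS container, and a one-entry FileName/SubAlignment list is collapsed back to a scalar so single-file RAW statements keep the old behaviour. A standalone sketch; ALIGNMENTS is written out literally here, and the derived no-Auto set is an assumption about how the later ALIGNMENT_NOAUTO name is defined:

    # Illustrative copy of the shared alignment values; "Auto" is the FFS default.
    ALIGNMENTS = {"Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K",
                  "64K", "128K", "256K", "512K", "1M", "2M", "4M", "8M", "16M"}

    # Assumption: the no-Auto variant used for encapsulation sections is just
    # the shared set minus "Auto".
    ALIGNMENT_NOAUTO = ALIGNMENTS - {"Auto"}

    def CollapseSingle(Values):
        # RAW statements with exactly one file keep the old scalar behaviour.
        return Values[0] if len(Values) == 1 else Values

    assert "64K" in ALIGNMENTS and "3K" not in ALIGNMENTS
    assert "Auto" not in ALIGNMENT_NOAUTO
    assert CollapseSingle(["a.bin"]) == "a.bin"
    assert CollapseSingle(["a.bin", "b.bin"]) == ["a.bin", "b.bin"]
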
alignment > > > > # @retval False Not able to find alignment > > > > # > > > > - def __GetAlignment(self): > > > > - if self.__IsKeyword( "Align", True): > > > > - if not self.__IsToken( "="): > > > > + def _GetAlignment(self): > > > > + if self._IsKeyword("Align", True): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected alignment value", self.FileName, self.CurrentLineNumber) > > > > return True > > > > > > > > return False > > > > > > > > - ## __GetFilePart() method > > > > + ## _GetFilePart() method > > > > # > > > > # Get section data for FILE statement > > > > # > > > > @@ -2838,23 +2768,23 @@ class FdfParser: > > > > # @param FfsFileObj for whom section is got > > > > # @param MacroDict dictionary used to replace macro > > > > # > > > > - def __GetSectionData(self, FfsFileObj, MacroDict = {}): > > > > + def _GetSectionData(self, FfsFileObj, MacroDict = {}): > > > > Dict = {} > > > > Dict.update(MacroDict) > > > > > > > > - self.__GetDefineStatements(FfsFileObj) > > > > + self._GetDefineStatements(FfsFileObj) > > > > > > > > Dict.update(FfsFileObj.DefineVarDict) > > > > - self.__GetAprioriSection(FfsFileObj, Dict.copy()) > > > > - self.__GetAprioriSection(FfsFileObj, Dict.copy()) > > > > + self._GetAprioriSection(FfsFileObj, Dict.copy()) > > > > + self._GetAprioriSection(FfsFileObj, Dict.copy()) > > > > > > > > while True: > > > > - IsLeafSection = self.__GetLeafSection(FfsFileObj, Dict) > > > > - IsEncapSection = self.__GetEncapsulationSec(FfsFileObj) > > > > + IsLeafSection = self._GetLeafSection(FfsFileObj, Dict) > > > > + IsEncapSection = self._GetEncapsulationSec(FfsFileObj) > > > > if not IsLeafSection and not IsEncapSection: > > > > break > > > > > > > > - ## __GetLeafSection() method > > > > + ## _GetLeafSection() method > > > > # > > > > # Get leaf section for Obj > > > > # > > > > @@ -2864,97 +2794,95 @@ class FdfParser: > > > > # @retval True Successfully find section statement > > > > # @retval False Not able to find section statement > > > > # > > > > - def __GetLeafSection(self, Obj, MacroDict = {}): > > > > - > > > > + def _GetLeafSection(self, Obj, MacroDict = {}): > > > > OldPos = self.GetFileBufferPos() > > > > > > > > - if not self.__IsKeyword( "SECTION"): > > > > + if not self._IsKeyword("SECTION"): > > > > if len(Obj.SectionList) == 0: > > > > raise Warning("expected SECTION", self.FileName, self.CurrentLineNumber) > > > > else: > > > > return False > > > > > > > > AlignValue = None > > > > - if self.__GetAlignment(): > > > > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > - AlignValue = self.__Token > > > > + if self._GetAlignment(): > > > > + if self._Token not in ALIGNMENTS: > > > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > + AlignValue = self._Token > > > > > > > > BuildNum = None > > > > - if self.__IsKeyword( "BUILD_NUM"): > > > > - if not self.__IsToken( "="): > > > > + if self._IsKeyword("BUILD_NUM"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > 
> + if not self._GetNextToken(): > > > > raise Warning("expected Build number value", self.FileName, self.CurrentLineNumber) > > > > > > > > - BuildNum = self.__Token > > > > + BuildNum = self._Token > > > > > > > > - if self.__IsKeyword( "VERSION"): > > > > + if self._IsKeyword("VERSION"): > > > > if AlignValue == 'Auto': > > > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, > > self.CurrentLineNumber) > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected version", self.FileName, self.CurrentLineNumber) > > > > - VerSectionObj = VerSection.VerSection() > > > > + VerSectionObj = VerSection() > > > > VerSectionObj.Alignment = AlignValue > > > > VerSectionObj.BuildNum = BuildNum > > > > - if self.__GetStringData(): > > > > - VerSectionObj.StringData = self.__Token > > > > + if self._GetStringData(): > > > > + VerSectionObj.StringData = self._Token > > > > else: > > > > - VerSectionObj.FileName = self.__Token > > > > + VerSectionObj.FileName = self._Token > > > > Obj.SectionList.append(VerSectionObj) > > > > > > > > - elif self.__IsKeyword( BINARY_FILE_TYPE_UI): > > > > + elif self._IsKeyword(BINARY_FILE_TYPE_UI): > > > > if AlignValue == 'Auto': > > > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, > > self.CurrentLineNumber) > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected UI", self.FileName, self.CurrentLineNumber) > > > > - UiSectionObj = UiSection.UiSection() > > > > + UiSectionObj = UiSection() > > > > UiSectionObj.Alignment = AlignValue > > > > - if self.__GetStringData(): > > > > - UiSectionObj.StringData = self.__Token > > > > + if self._GetStringData(): > > > > + UiSectionObj.StringData = self._Token > > > > else: > > > > - UiSectionObj.FileName = self.__Token > > > > + UiSectionObj.FileName = self._Token > > > > Obj.SectionList.append(UiSectionObj) > > > > > > > > - elif self.__IsKeyword( "FV_IMAGE"): > > > > + elif self._IsKeyword("FV_IMAGE"): > > > > if AlignValue == 'Auto': > > > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, > > self.CurrentLineNumber) > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected FV name or FV file path", self.FileName, self.CurrentLineNumber) > > > > > > > > - FvName = self.__Token > > > > + FvName = self._Token > > > > FvObj = None > > > > > > > > - if self.__IsToken( "{"): > > > > - FvObj = Fv.FV() > > > > + if self._IsToken("{"): > > > > + FvObj = FV() > > > > FvObj.UiFvName = FvName.upper() > > > > - self.__GetDefineStatements(FvObj) > > > > + self._GetDefineStatements(FvObj) > > > > MacroDict.update(FvObj.DefineVarDict) > > > > - self.__GetBlockStatement(FvObj) > > > > - self.__GetSetStatements(FvObj) > > > > - self.__GetFvAlignment(FvObj) > > > > - self.__GetFvAttributes(FvObj) > > > > - self.__GetAprioriSection(FvObj, MacroDict.copy()) > > > > - 
self.__GetAprioriSection(FvObj, MacroDict.copy()) > > > > + self._GetBlockStatement(FvObj) > > > > + self._GetSetStatements(FvObj) > > > > + self._GetFvAlignment(FvObj) > > > > + self._GetFvAttributes(FvObj) > > > > + self._GetAprioriSection(FvObj, MacroDict.copy()) > > > > + self._GetAprioriSection(FvObj, MacroDict.copy()) > > > > > > > > while True: > > > > - IsInf = self.__GetInfStatement(FvObj) > > > > - IsFile = self.__GetFileStatement(FvObj, MacroDict.copy()) > > > > + IsInf = self._GetInfStatement(FvObj) > > > > + IsFile = self._GetFileStatement(FvObj, MacroDict.copy()) > > > > if not IsInf and not IsFile: > > > > break > > > > > > > > - if not self.__IsToken( "}"): > > > > + if not self._IsToken("}"): > > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > > > - FvImageSectionObj = FvImageSection.FvImageSection() > > > > + FvImageSectionObj = FvImageSection() > > > > FvImageSectionObj.Alignment = AlignValue > > > > if FvObj is not None: > > > > FvImageSectionObj.Fv = FvObj > > > > @@ -2965,81 +2893,81 @@ class FdfParser: > > > > > > > > Obj.SectionList.append(FvImageSectionObj) > > > > > > > > - elif self.__IsKeyword("PEI_DEPEX_EXP") or self.__IsKeyword("DXE_DEPEX_EXP") or self.__IsKeyword("SMM_DEPEX_EXP"): > > > > + elif self._IsKeyword("PEI_DEPEX_EXP") or self._IsKeyword("DXE_DEPEX_EXP") or self._IsKeyword("SMM_DEPEX_EXP"): > > > > if AlignValue == 'Auto': > > > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, > > self.CurrentLineNumber) > > > > - DepexSectionObj = DepexSection.DepexSection() > > > > + DepexSectionObj = DepexSection() > > > > DepexSectionObj.Alignment = AlignValue > > > > - DepexSectionObj.DepexType = self.__Token > > > > + DepexSectionObj.DepexType = self._Token > > > > > > > > - if not self.__IsToken( "="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__IsToken( "{"): > > > > + if not self._IsToken("{"): > > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > - if not self.__SkipToToken( "}"): > > > > + if not self._SkipToToken("}"): > > > > raise Warning("expected Depex expression ending '}'", self.FileName, self.CurrentLineNumber) > > > > > > > > - DepexSectionObj.Expression = self.__SkippedChars.rstrip('}') > > > > + DepexSectionObj.Expression = self._SkippedChars.rstrip('}') > > > > Obj.SectionList.append(DepexSectionObj) > > > > > > > > else: > > > > - if not self.__GetNextWord(): > > > > + if not self._GetNextWord(): > > > > raise Warning("expected section type", self.FileName, self.CurrentLineNumber) > > > > > > > > # Encapsulation section appear, UndoToken and return > > > > - if self.__Token == "COMPRESS" or self.__Token == "GUIDED": > > > > + if self._Token == "COMPRESS" or self._Token == "GUIDED": > > > > self.SetFileBufferPos(OldPos) > > > > return False > > > > > > > > - if self.__Token not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > > > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > > > + if self._Token not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > > > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > > > BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, "SUBTYPE_GUID", > > > BINARY_FILE_TYPE_SMM_DEPEX): > > > > - raise Warning("Unknown section type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > - if AlignValue == 'Auto'and 
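
The FV_IMAGE body above uses the parser's usual "keep consuming until nothing matches" loop: each _Get*Statement returns True when it consumed a statement and False when the next token belongs to something else. A minimal sketch of that control flow with stand-in handlers and a toy token stream; all names here are illustrative:

    Tokens = ['INF', 'INF', 'FILE', '}']
    Position = [0]                       # shared cursor, as FdfParser keeps one

    def MakeHandler(Keyword):
        def Handler():
            if Position[0] < len(Tokens) and Tokens[Position[0]] == Keyword:
                Position[0] += 1         # consume the statement
                return True
            return False
        return Handler

    Handlers = [MakeHandler('INF'), MakeHandler('FILE')]
    while True:
        # Mirrors: IsInf = _GetInfStatement(...); IsFile = _GetFileStatement(...)
        Results = [Handler() for Handler in Handlers]
        if not any(Results):
            break
    assert Tokens[Position[0]] == '}'    # loop stops at the closing brace
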
(not self.__Token == BINARY_FILE_TYPE_PE32) and (not self.__Token == > > > BINARY_FILE_TYPE_TE): > > > > + raise Warning("Unknown section type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > + if AlignValue == 'Auto'and (not self._Token == BINARY_FILE_TYPE_PE32) and (not self._Token == > > > BINARY_FILE_TYPE_TE): > > > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, > > self.CurrentLineNumber) > > > > > > > > # DataSection > > > > - DataSectionObj = DataSection.DataSection() > > > > + DataSectionObj = DataSection() > > > > DataSectionObj.Alignment = AlignValue > > > > - DataSectionObj.SecType = self.__Token > > > > + DataSectionObj.SecType = self._Token > > > > > > > > - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): > > > > - if self.__FileCouldHaveRelocFlag(Obj.FvFileType) and > > self.__SectionCouldHaveRelocFlag(DataSectionObj.SecType): > > > > - if self.__Token == 'RELOCS_STRIPPED': > > > > + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): > > > > + if self._FileCouldHaveRelocFlag(Obj.FvFileType) and self._SectionCouldHaveRelocFlag(DataSectionObj.SecType): > > > > + if self._Token == 'RELOCS_STRIPPED': > > > > DataSectionObj.KeepReloc = False > > > > else: > > > > DataSectionObj.KeepReloc = True > > > > else: > > > > raise Warning("File type %s, section type %s, could not have reloc strip flag%d" % (Obj.FvFileType, > > > DataSectionObj.SecType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > > > > > > > - if self.__IsToken("="): > > > > - if not self.__GetNextToken(): > > > > + if self._IsToken(TAB_EQUAL_SPLIT): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected section file path", self.FileName, self.CurrentLineNumber) > > > > - DataSectionObj.SectFileName = self.__Token > > > > - self.__VerifyFile(DataSectionObj.SectFileName) > > > > + DataSectionObj.SectFileName = self._Token > > > > + self._VerifyFile(DataSectionObj.SectFileName) > > > > else: > > > > - if not self.__GetCglSection(DataSectionObj): > > > > + if not self._GetCglSection(DataSectionObj): > > > > return False > > > > > > > > Obj.SectionList.append(DataSectionObj) > > > > > > > > return True > > > > > > > > - ## __VerifyFile > > > > + ## _VerifyFile > > > > # > > > > # Check if file exists or not: > > > > # If current phase if GenFds, the file must exist; > > > > # If current phase is AutoGen and the file is not in $(OUTPUT_DIRECTORY), the file must exist > > > > # @param FileName: File path to be verified. 
> > > > # > > > > - def __VerifyFile(self, FileName): > > > > - if FileName.replace('$(WORKSPACE)', '').find('$') != -1: > > > > + def _VerifyFile(self, FileName): > > > > + if FileName.replace(TAB_WORKSPACE, '').find('$') != -1: > > > > return > > > > - if not GlobalData.gAutoGenPhase or not self.__GetMacroValue("OUTPUT_DIRECTORY") in FileName: > > > > + if not GlobalData.gAutoGenPhase or not self._GetMacroValue(TAB_DSC_DEFINES_OUTPUT_DIRECTORY) in FileName: > > > > ErrorCode, ErrorInfo = PathClass(NormPath(FileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() > > > > if ErrorCode != 0: > > > > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > > > > > > > - ## __GetCglSection() method > > > > + ## _GetCglSection() method > > > > # > > > > # Get compressed or GUIDed section for Obj > > > > # > > > > @@ -3049,28 +2977,28 @@ class FdfParser: > > > > # @retval True Successfully find section statement > > > > # @retval False Not able to find section statement > > > > # > > > > - def __GetCglSection(self, Obj, AlignValue = None): > > > > + def _GetCglSection(self, Obj, AlignValue = None): > > > > > > > > - if self.__IsKeyword( "COMPRESS"): > > > > + if self._IsKeyword("COMPRESS"): > > > > type = "PI_STD" > > > > - if self.__IsKeyword("PI_STD") or self.__IsKeyword("PI_NONE"): > > > > - type = self.__Token > > > > + if self._IsKeyword("PI_STD") or self._IsKeyword("PI_NONE"): > > > > + type = self._Token > > > > > > > > - if not self.__IsToken("{"): > > > > + if not self._IsToken("{"): > > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > > > > > - CompressSectionObj = CompressSection.CompressSection() > > > > + CompressSectionObj = CompressSection() > > > > CompressSectionObj.Alignment = AlignValue > > > > CompressSectionObj.CompType = type > > > > # Recursive sections... > > > > while True: > > > > - IsLeafSection = self.__GetLeafSection(CompressSectionObj) > > > > - IsEncapSection = self.__GetEncapsulationSec(CompressSectionObj) > > > > + IsLeafSection = self._GetLeafSection(CompressSectionObj) > > > > + IsEncapSection = self._GetEncapsulationSec(CompressSectionObj) > > > > if not IsLeafSection and not IsEncapSection: > > > > break > > > > > > > > > > > > - if not self.__IsToken( "}"): > > > > + if not self._IsToken("}"): > > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > Obj.SectionList.append(CompressSectionObj) > > > > > > > > @@ -3079,15 +3007,15 @@ class FdfParser: > > > > > > > > return True > > > > > > > > - elif self.__IsKeyword( "GUIDED"): > > > > + elif self._IsKeyword("GUIDED"): > > > > GuidValue = None > > > > - if self.__GetNextGuid(): > > > > - GuidValue = self.__Token > > > > + if self._GetNextGuid(): > > > > + GuidValue = self._Token > > > > > > > > - AttribDict = self.__GetGuidAttrib() > > > > - if not self.__IsToken("{"): > > > > + AttribDict = self._GetGuidAttrib() > > > > + if not self._IsToken("{"): > > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > - GuidSectionObj = GuidSection.GuidSection() > > > > + GuidSectionObj = GuidSection() > > > > GuidSectionObj.Alignment = AlignValue > > > > GuidSectionObj.NameGuid = GuidValue > > > > GuidSectionObj.SectionType = "GUIDED" > > > > @@ -3096,12 +3024,12 @@ class FdfParser: > > > > GuidSectionObj.ExtraHeaderSize = AttribDict["EXTRA_HEADER_SIZE"] > > > > # Recursive sections... 
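
The _VerifyFile logic above skips validation while unresolved $(...) macros remain and only insists the file exist in the appropriate build phase. A hedged, standalone approximation using plain os.path; GlobalData, PathClass and the workspace handling are simplified away, and the error path here is just a ValueError rather than EdkLogger:

    import os

    def VerifyFileSketch(FileName, WorkspaceDir, AutoGenPhase=False, OutputDir=None):
        # Still-unresolved macros (other than the workspace one) cannot be checked yet.
        if '$' in FileName.replace('$(WORKSPACE)', ''):
            return
        # During AutoGen, files under OUTPUT_DIRECTORY may not exist yet.
        if AutoGenPhase and OutputDir and OutputDir in FileName:
            return
        Resolved = FileName.replace('$(WORKSPACE)', WorkspaceDir)
        if not os.path.exists(os.path.normpath(Resolved)):
            raise ValueError('File %s does not exist' % Resolved)

    VerifyFileSketch('$(MY_MACRO)/foo.bin', '/ws')   # skipped: macro unresolved
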
> > > > while True: > > > > - IsLeafSection = self.__GetLeafSection(GuidSectionObj) > > > > - IsEncapSection = self.__GetEncapsulationSec(GuidSectionObj) > > > > + IsLeafSection = self._GetLeafSection(GuidSectionObj) > > > > + IsEncapSection = self._GetEncapsulationSec(GuidSectionObj) > > > > if not IsLeafSection and not IsEncapSection: > > > > break > > > > > > > > - if not self.__IsToken( "}"): > > > > + if not self._IsToken("}"): > > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > Obj.SectionList.append(GuidSectionObj) > > > > > > > > @@ -3109,44 +3037,43 @@ class FdfParser: > > > > > > > > return False > > > > > > > > - ## __GetGuidAttri() method > > > > + ## _GetGuidAttri() method > > > > # > > > > # Get attributes for GUID section > > > > # > > > > # @param self The object pointer > > > > # @retval AttribDict Dictionary of key-value pair of section attributes > > > > # > > > > - def __GetGuidAttrib(self): > > > > - > > > > + def _GetGuidAttrib(self): > > > > AttribDict = {} > > > > AttribDict["PROCESSING_REQUIRED"] = "NONE" > > > > AttribDict["AUTH_STATUS_VALID"] = "NONE" > > > > AttribDict["EXTRA_HEADER_SIZE"] = -1 > > > > - while self.__IsKeyword("PROCESSING_REQUIRED") or self.__IsKeyword("AUTH_STATUS_VALID") \ > > > > - or self.__IsKeyword("EXTRA_HEADER_SIZE"): > > > > - AttribKey = self.__Token > > > > + while self._IsKeyword("PROCESSING_REQUIRED") or self._IsKeyword("AUTH_STATUS_VALID") \ > > > > + or self._IsKeyword("EXTRA_HEADER_SIZE"): > > > > + AttribKey = self._Token > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected TRUE(1)/FALSE(0)/Number", self.FileName, self.CurrentLineNumber) > > > > elif AttribKey == "EXTRA_HEADER_SIZE": > > > > Base = 10 > > > > - if self.__Token[0:2].upper() == "0X": > > > > + if self._Token[0:2].upper() == "0X": > > > > Base = 16 > > > > try: > > > > - AttribDict[AttribKey] = int(self.__Token, Base) > > > > + AttribDict[AttribKey] = int(self._Token, Base) > > > > continue > > > > except ValueError: > > > > raise Warning("expected Number", self.FileName, self.CurrentLineNumber) > > > > - elif self.__Token.upper() not in ("TRUE", "FALSE", "1", "0"): > > > > + elif self._Token.upper() not in ("TRUE", "FALSE", "1", "0"): > > > > raise Warning("expected TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) > > > > - AttribDict[AttribKey] = self.__Token > > > > + AttribDict[AttribKey] = self._Token > > > > > > > > return AttribDict > > > > > > > > - ## __GetEncapsulationSec() method > > > > + ## _GetEncapsulationSec() method > > > > # > > > > # Get encapsulation section for FILE > > > > # > > > > @@ -3155,91 +3082,89 @@ class FdfParser: > > > > # @retval True Successfully find section statement > > > > # @retval False Not able to find section statement > > > > # > > > > - def __GetEncapsulationSec(self, FfsFileObj): > > > > - > > > > + def _GetEncapsulationSec(self, FfsFileObj): > > > > OldPos = self.GetFileBufferPos() > > > > - if not self.__IsKeyword( "SECTION"): > > > > + if not self._IsKeyword("SECTION"): > > > > if len(FfsFileObj.SectionList) == 0: > > > > raise Warning("expected SECTION", self.FileName, self.CurrentLineNumber) > > > > else: > > > > return False > > > > > > > > AlignValue = None > > > > - if self.__GetAlignment(): > > > > - if self.__Token not in ("8", "16", 
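
The EXTRA_HEADER_SIZE branch above picks the integer base by inspecting the "0x" prefix before calling int(). A small sketch of that parse, feeding the default attribute dictionary; the key names are copied from the hunk, everything else is illustrative:

    def ParseGuidAttribValue(Token):
        # Accept decimal or 0x-prefixed hex, as the EXTRA_HEADER_SIZE branch does.
        Base = 16 if Token[:2].upper() == '0X' else 10
        return int(Token, Base)

    AttribDict = {
        'PROCESSING_REQUIRED': 'NONE',
        'AUTH_STATUS_VALID': 'NONE',
        'EXTRA_HEADER_SIZE': -1,         # -1 means "not specified"
    }
    AttribDict['EXTRA_HEADER_SIZE'] = ParseGuidAttribValue('0x18')
    assert AttribDict['EXTRA_HEADER_SIZE'] == 24
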
"32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > - AlignValue = self.__Token > > > > + if self._GetAlignment(): > > > > + if self._Token not in ALIGNMENT_NOAUTO: > > > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > + AlignValue = self._Token > > > > > > > > - if not self.__GetCglSection(FfsFileObj, AlignValue): > > > > + if not self._GetCglSection(FfsFileObj, AlignValue): > > > > self.SetFileBufferPos(OldPos) > > > > return False > > > > else: > > > > return True > > > > > > > > - def __GetFmp(self): > > > > - if not self.__GetNextToken(): > > > > + def _GetFmp(self): > > > > + if not self._GetNextToken(): > > > > return False > > > > - S = self.__Token.upper() > > > > - if S.startswith("[") and not S.startswith("[FMPPAYLOAD."): > > > > + S = self._Token.upper() > > > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[FMPPAYLOAD."): > > > > self.SectionParser(S) > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > return False > > > > > > > > - self.__UndoToken() > > > > - self.__SkipToToken("[FMPPAYLOAD.", True) > > > > - FmpUiName = self.__GetUiName().upper() > > > > + self._UndoToken() > > > > + self._SkipToToken("[FMPPAYLOAD.", True) > > > > + FmpUiName = self._GetUiName().upper() > > > > if FmpUiName in self.Profile.FmpPayloadDict: > > > > raise Warning("Duplicated FMP UI name found: %s" % FmpUiName, self.FileName, self.CurrentLineNumber) > > > > > > > > - FmpData = CapsuleData.CapsulePayload() > > > > + FmpData = CapsulePayload() > > > > FmpData.UiName = FmpUiName > > > > > > > > - if not self.__IsToken( "]"): > > > > + if not self._IsToken(TAB_SECTION_END): > > > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("The FMP payload section is empty!", self.FileName, self.CurrentLineNumber) > > > > FmpKeyList = ['IMAGE_HEADER_INIT_VERSION', 'IMAGE_TYPE_ID', 'IMAGE_INDEX', 'HARDWARE_INSTANCE', > > > 'CERTIFICATE_GUID', 'MONOTONIC_COUNT'] > > > > - while self.__Token in FmpKeyList: > > > > - Name = self.__Token > > > > + while self._Token in FmpKeyList: > > > > + Name = self._Token > > > > FmpKeyList.remove(Name) > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > if Name == 'IMAGE_TYPE_ID': > > > > - if not self.__GetNextGuid(): > > > > + if not self._GetNextGuid(): > > > > raise Warning("expected GUID value for IMAGE_TYPE_ID.", self.FileName, self.CurrentLineNumber) > > > > - FmpData.ImageTypeId = self.__Token > > > > + FmpData.ImageTypeId = self._Token > > > > elif Name == 'CERTIFICATE_GUID': > > > > - if not self.__GetNextGuid(): > > > > + if not self._GetNextGuid(): > > > > raise Warning("expected GUID value for CERTIFICATE_GUID.", self.FileName, self.CurrentLineNumber) > > > > - FmpData.Certificate_Guid = self.__Token > > > > - if uuid.UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_RSA2048_SHA256_GUID and > > > uuid.UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_PKCS7_GUID: > > > > + FmpData.Certificate_Guid = self._Token > > > > + if UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_RSA2048_SHA256_GUID and > > > UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_PKCS7_GUID: > > 
> > raise Warning("Only support EFI_CERT_TYPE_RSA2048_SHA256_GUID or EFI_CERT_TYPE_PKCS7_GUID for > > > CERTIFICATE_GUID.", self.FileName, self.CurrentLineNumber) > > > > else: > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected value of %s" % Name, self.FileName, self.CurrentLineNumber) > > > > - Value = self.__Token > > > > + Value = self._Token > > > > if Name == 'IMAGE_HEADER_INIT_VERSION': > > > > - if FdfParser.__Verify(Name, Value, 'UINT8'): > > > > + if FdfParser._Verify(Name, Value, 'UINT8'): > > > > FmpData.Version = Value > > > > elif Name == 'IMAGE_INDEX': > > > > - if FdfParser.__Verify(Name, Value, 'UINT8'): > > > > + if FdfParser._Verify(Name, Value, 'UINT8'): > > > > FmpData.ImageIndex = Value > > > > elif Name == 'HARDWARE_INSTANCE': > > > > - if FdfParser.__Verify(Name, Value, 'UINT8'): > > > > + if FdfParser._Verify(Name, Value, 'UINT8'): > > > > FmpData.HardwareInstance = Value > > > > elif Name == 'MONOTONIC_COUNT': > > > > - if FdfParser.__Verify(Name, Value, 'UINT64'): > > > > + if FdfParser._Verify(Name, Value, 'UINT64'): > > > > FmpData.MonotonicCount = Value > > > > if FmpData.MonotonicCount.upper().startswith('0X'): > > > > FmpData.MonotonicCount = (long)(FmpData.MonotonicCount, 16) > > > > else: > > > > FmpData.MonotonicCount = (long)(FmpData.MonotonicCount) > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > break > > > > else: > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > > > > > if (FmpData.MonotonicCount and not FmpData.Certificate_Guid) or (not FmpData.MonotonicCount and > > > FmpData.Certificate_Guid): > > > > EdkLogger.error("FdfParser", FORMAT_INVALID, "CERTIFICATE_GUID and MONOTONIC_COUNT must be work as a > > > pair.") > > > > @@ -3248,7 +3173,7 @@ class FdfParser: > > > > if FmpKeyList and 'IMAGE_TYPE_ID' in FmpKeyList: > > > > raise Warning("Missing keywords IMAGE_TYPE_ID in FMP payload section.", self.FileName, > > self.CurrentLineNumber) > > > > # get the Image file and Vendor code file > > > > - self.__GetFMPCapsuleData(FmpData) > > > > + self._GetFMPCapsuleData(FmpData) > > > > if not FmpData.ImageFile: > > > > raise Warning("Missing image file in FMP payload section.", self.FileName, self.CurrentLineNumber) > > > > # check whether more than one Vendor code file > > > > @@ -3257,7 +3182,7 @@ class FdfParser: > > > > self.Profile.FmpPayloadDict[FmpUiName] = FmpData > > > > return True > > > > > > > > - ## __GetCapsule() method > > > > + ## _GetCapsule() method > > > > # > > > > # Get capsule section contents and store its data into capsule list of self.Profile > > > > # > > > > @@ -3265,90 +3190,89 @@ class FdfParser: > > > > # @retval True Successfully find a capsule > > > > # @retval False Not able to find a capsule > > > > # > > > > - def __GetCapsule(self): > > > > - > > > > - if not self.__GetNextToken(): > > > > + def _GetCapsule(self): > > > > + if not self._GetNextToken(): > > > > return False > > > > > > > > - S = self.__Token.upper() > > > > - if S.startswith("[") and not S.startswith("[CAPSULE."): > > > > + S = self._Token.upper() > > > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[CAPSULE."): > > > > self.SectionParser(S) > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > return False > > > > > > > > - self.__UndoToken() > > > > - if not self.__IsToken("[CAPSULE.", True): > > > > + self._UndoToken() > > > > + if not self._IsToken("[CAPSULE.", True): > > > > FileLineTuple = 
GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > > > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > > raise Warning("expected [Capsule.]", self.FileName, self.CurrentLineNumber) > > > > > > > > - CapsuleObj = Capsule.Capsule() > > > > + CapsuleObj = Capsule() > > > > > > > > - CapsuleName = self.__GetUiName() > > > > + CapsuleName = self._GetUiName() > > > > if not CapsuleName: > > > > raise Warning("expected capsule name", self.FileName, self.CurrentLineNumber) > > > > > > > > CapsuleObj.UiCapsuleName = CapsuleName.upper() > > > > > > > > - if not self.__IsToken( "]"): > > > > + if not self._IsToken(TAB_SECTION_END): > > > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > > > > > - if self.__IsKeyword("CREATE_FILE"): > > > > - if not self.__IsToken( "="): > > > > + if self._IsKeyword("CREATE_FILE"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected file name", self.FileName, self.CurrentLineNumber) > > > > > > > > - CapsuleObj.CreateFile = self.__Token > > > > + CapsuleObj.CreateFile = self._Token > > > > > > > > - self.__GetCapsuleStatements(CapsuleObj) > > > > + self._GetCapsuleStatements(CapsuleObj) > > > > self.Profile.CapsuleDict[CapsuleObj.UiCapsuleName] = CapsuleObj > > > > return True > > > > > > > > - ## __GetCapsuleStatements() method > > > > + ## _GetCapsuleStatements() method > > > > # > > > > # Get statements for capsule > > > > # > > > > # @param self The object pointer > > > > # @param Obj for whom statements are got > > > > # > > > > - def __GetCapsuleStatements(self, Obj): > > > > - self.__GetCapsuleTokens(Obj) > > > > - self.__GetDefineStatements(Obj) > > > > - self.__GetSetStatements(Obj) > > > > - self.__GetCapsuleData(Obj) > > > > + def _GetCapsuleStatements(self, Obj): > > > > + self._GetCapsuleTokens(Obj) > > > > + self._GetDefineStatements(Obj) > > > > + self._GetSetStatements(Obj) > > > > + self._GetCapsuleData(Obj) > > > > > > > > - ## __GetCapsuleTokens() method > > > > + ## _GetCapsuleTokens() method > > > > # > > > > # Get token statements for capsule > > > > # > > > > # @param self The object pointer > > > > # @param Obj for whom token statements are got > > > > # > > > > - def __GetCapsuleTokens(self, Obj): > > > > - if not self.__GetNextToken(): > > > > + def _GetCapsuleTokens(self, Obj): > > > > + if not self._GetNextToken(): > > > > return False > > > > - while self.__Token in ("CAPSULE_GUID", "CAPSULE_HEADER_SIZE", "CAPSULE_FLAGS", "OEM_CAPSULE_FLAGS", > > > "CAPSULE_HEADER_INIT_VERSION"): > > > > - Name = self.__Token.strip() > > > > - if not self.__IsToken("="): > > > > + while self._Token in ("CAPSULE_GUID", "CAPSULE_HEADER_SIZE", "CAPSULE_FLAGS", "OEM_CAPSULE_FLAGS", > > > "CAPSULE_HEADER_INIT_VERSION"): > > > > + Name = self._Token.strip() > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): 
> > > > raise Warning("expected value", self.FileName, self.CurrentLineNumber) > > > > if Name == 'CAPSULE_FLAGS': > > > > - if not self.__Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): > > > > + if not self._Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): > > > > raise Warning("expected PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, > > > self.CurrentLineNumber) > > > > - Value = self.__Token.strip() > > > > - while self.__IsToken(","): > > > > - Value += ',' > > > > - if not self.__GetNextToken(): > > > > + Value = self._Token.strip() > > > > + while self._IsToken(TAB_COMMA_SPLIT): > > > > + Value += TAB_COMMA_SPLIT > > > > + if not self._GetNextToken(): > > > > raise Warning("expected value", self.FileName, self.CurrentLineNumber) > > > > - if not self.__Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): > > > > + if not self._Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): > > > > raise Warning("expected PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, > > > self.CurrentLineNumber) > > > > - Value += self.__Token.strip() > > > > + Value += self._Token.strip() > > > > elif Name == 'OEM_CAPSULE_FLAGS': > > > > - Value = self.__Token.strip() > > > > + Value = self._Token.strip() > > > > if not Value.upper().startswith('0X'): > > > > raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) > > > > try: > > > > @@ -3357,51 +3281,49 @@ class FdfParser: > > > > raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) > > > > if not 0x0000 <= Value <= 0xFFFF: > > > > raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) > > > > - Value = self.__Token.strip() > > > > + Value = self._Token.strip() > > > > else: > > > > - Value = self.__Token.strip() > > > > + Value = self._Token.strip() > > > > Obj.TokensDict[Name] = Value > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > return False > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > > > > > - ## __GetCapsuleData() method > > > > + ## _GetCapsuleData() method > > > > # > > > > # Get capsule data for capsule > > > > # > > > > # @param self The object pointer > > > > # @param Obj for whom capsule data are got > > > > # > > > > - def __GetCapsuleData(self, Obj): > > > > - > > > > + def _GetCapsuleData(self, Obj): > > > > while True: > > > > - IsInf = self.__GetInfStatement(Obj, True) > > > > - IsFile = self.__GetFileStatement(Obj, True) > > > > - IsFv = self.__GetFvStatement(Obj) > > > > - IsFd = self.__GetFdStatement(Obj) > > > > - IsAnyFile = self.__GetAnyFileStatement(Obj) > > > > - IsAfile = self.__GetAfileStatement(Obj) > > > > - IsFmp = self.__GetFmpStatement(Obj) > > > > + IsInf = self._GetInfStatement(Obj, True) > > > > + IsFile = self._GetFileStatement(Obj, True) > > > > + IsFv = self._GetFvStatement(Obj) > > > > + IsFd = self._GetFdStatement(Obj) > > > > + IsAnyFile = self._GetAnyFileStatement(Obj) > > > > + IsAfile = self._GetAfileStatement(Obj) > > > > + IsFmp = self._GetFmpStatement(Obj) > > > > if not (IsInf or IsFile or IsFv or IsFd or IsAnyFile or IsAfile or IsFmp): > > > > break > > > > > > > > - ## __GetFMPCapsuleData() method > > > > + ## _GetFMPCapsuleData() method > > > > # > > > > # Get capsule data for FMP capsule > > > > # > > > > # @param self The object pointer > > > > # @param Obj for whom 
capsule data are got > > > > # > > > > - def __GetFMPCapsuleData(self, Obj): > > > > - > > > > + def _GetFMPCapsuleData(self, Obj): > > > > while True: > > > > - IsFv = self.__GetFvStatement(Obj, True) > > > > - IsFd = self.__GetFdStatement(Obj, True) > > > > - IsAnyFile = self.__GetAnyFileStatement(Obj, True) > > > > + IsFv = self._GetFvStatement(Obj, True) > > > > + IsFd = self._GetFdStatement(Obj, True) > > > > + IsAnyFile = self._GetAnyFileStatement(Obj, True) > > > > if not (IsFv or IsFd or IsAnyFile): > > > > break > > > > > > > > - ## __GetFvStatement() method > > > > + ## _GetFvStatement() method > > > > # > > > > # Get FV for capsule > > > > # > > > > @@ -3410,32 +3332,31 @@ class FdfParser: > > > > # @retval True Successfully find a FV statement > > > > # @retval False Not able to find a FV statement > > > > # > > > > - def __GetFvStatement(self, CapsuleObj, FMPCapsule = False): > > > > - > > > > - if not self.__IsKeyword(BINARY_FILE_TYPE_FV): > > > > + def _GetFvStatement(self, CapsuleObj, FMPCapsule = False): > > > > + if not self._IsKeyword(BINARY_FILE_TYPE_FV): > > > > return False > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) > > > > > > > > - if self.__Token.upper() not in self.Profile.FvDict: > > > > + if self._Token.upper() not in self.Profile.FvDict: > > > > raise Warning("FV name does not exist", self.FileName, self.CurrentLineNumber) > > > > > > > > - CapsuleFv = CapsuleData.CapsuleFv() > > > > - CapsuleFv.FvName = self.__Token > > > > + myCapsuleFv = CapsuleFv() > > > > + myCapsuleFv.FvName = self._Token > > > > if FMPCapsule: > > > > if not CapsuleObj.ImageFile: > > > > - CapsuleObj.ImageFile.append(CapsuleFv) > > > > + CapsuleObj.ImageFile.append(myCapsuleFv) > > > > else: > > > > - CapsuleObj.VendorCodeFile.append(CapsuleFv) > > > > + CapsuleObj.VendorCodeFile.append(myCapsuleFv) > > > > else: > > > > - CapsuleObj.CapsuleDataList.append(CapsuleFv) > > > > + CapsuleObj.CapsuleDataList.append(myCapsuleFv) > > > > return True > > > > > > > > - ## __GetFdStatement() method > > > > + ## _GetFdStatement() method > > > > # > > > > # Get FD for capsule > > > > # > > > > @@ -3444,74 +3365,73 @@ class FdfParser: > > > > # @retval True Successfully find a FD statement > > > > # @retval False Not able to find a FD statement > > > > # > > > > - def __GetFdStatement(self, CapsuleObj, FMPCapsule = False): > > > > - > > > > - if not self.__IsKeyword("FD"): > > > > + def _GetFdStatement(self, CapsuleObj, FMPCapsule = False): > > > > + if not self._IsKeyword("FD"): > > > > return False > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected FD name", self.FileName, self.CurrentLineNumber) > > > > > > > > - if self.__Token.upper() not in self.Profile.FdDict: > > > > + if self._Token.upper() not in self.Profile.FdDict: > > > > raise Warning("FD name does not exist", self.FileName, self.CurrentLineNumber) > > > > > > > > - CapsuleFd = CapsuleData.CapsuleFd() > > > > - CapsuleFd.FdName = self.__Token > > > > + myCapsuleFd = CapsuleFd() > > > > + myCapsuleFd.FdName = 
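
The rename from CapsuleFv/CapsuleFd locals to myCapsuleFv/myCapsuleFd above matters once the classes are imported directly: a local variable with the same name as the imported class shadows it for the whole function body. A small demonstration of the hazard, with Thing standing in for a directly imported class such as CapsuleFv:

    class Thing:
        pass

    def Shadowing():
        Thing = Thing()     # UnboundLocalError: the assignment makes Thing local
        return Thing

    def NoShadowing():
        myThing = Thing()   # distinct local name, the class stays reachable
        anotherThing = Thing()
        return [myThing, anotherThing]

    try:
        Shadowing()
    except UnboundLocalError:
        pass
    assert len(NoShadowing()) == 2
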
self._Token > > > > if FMPCapsule: > > > > if not CapsuleObj.ImageFile: > > > > - CapsuleObj.ImageFile.append(CapsuleFd) > > > > + CapsuleObj.ImageFile.append(myCapsuleFd) > > > > else: > > > > - CapsuleObj.VendorCodeFile.append(CapsuleFd) > > > > + CapsuleObj.VendorCodeFile.append(myCapsuleFd) > > > > else: > > > > - CapsuleObj.CapsuleDataList.append(CapsuleFd) > > > > + CapsuleObj.CapsuleDataList.append(myCapsuleFd) > > > > return True > > > > > > > > - def __GetFmpStatement(self, CapsuleObj): > > > > - if not self.__IsKeyword("FMP_PAYLOAD"): > > > > - if not self.__IsKeyword("FMP"): > > > > + def _GetFmpStatement(self, CapsuleObj): > > > > + if not self._IsKeyword("FMP_PAYLOAD"): > > > > + if not self._IsKeyword("FMP"): > > > > return False > > > > > > > > - if not self.__IsKeyword("PAYLOAD"): > > > > - self.__UndoToken() > > > > + if not self._IsKeyword("PAYLOAD"): > > > > + self._UndoToken() > > > > return False > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected payload name after FMP_PAYLOAD =", self.FileName, self.CurrentLineNumber) > > > > - Payload = self.__Token.upper() > > > > + Payload = self._Token.upper() > > > > if Payload not in self.Profile.FmpPayloadDict: > > > > - raise Warning("This FMP Payload does not exist: %s" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > + raise Warning("This FMP Payload does not exist: %s" % self._Token, self.FileName, self.CurrentLineNumber) > > > > CapsuleObj.FmpPayloadList.append(self.Profile.FmpPayloadDict[Payload]) > > > > return True > > > > > > > > - def __ParseRawFileStatement(self): > > > > - if not self.__IsKeyword("FILE"): > > > > + def _ParseRawFileStatement(self): > > > > + if not self._IsKeyword("FILE"): > > > > return None > > > > > > > > - if not self.__IsKeyword("DATA"): > > > > - self.__UndoToken() > > > > + if not self._IsKeyword("DATA"): > > > > + self._UndoToken() > > > > return None > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected File name", self.FileName, self.CurrentLineNumber) > > > > > > > > - AnyFileName = self.__Token > > > > - self.__VerifyFile(AnyFileName) > > > > + AnyFileName = self._Token > > > > + self._VerifyFile(AnyFileName) > > > > > > > > if not os.path.isabs(AnyFileName): > > > > AnyFileName = mws.join(GenFdsGlobalVariable.WorkSpaceDir, AnyFileName) > > > > > > > > return AnyFileName > > > > > > > > - ## __GetAnyFileStatement() method > > > > + ## _GetAnyFileStatement() method > > > > # > > > > # Get AnyFile for capsule > > > > # > > > > @@ -3520,23 +3440,23 @@ class FdfParser: > > > > # @retval True Successfully find a Anyfile statement > > > > # @retval False Not able to find a AnyFile statement > > > > # > > > > - def __GetAnyFileStatement(self, CapsuleObj, FMPCapsule = False): > > > > - AnyFileName = self.__ParseRawFileStatement() > > > > + def _GetAnyFileStatement(self, CapsuleObj, FMPCapsule = False): > > > > + AnyFileName = self._ParseRawFileStatement() > > > > if not AnyFileName: > > > > return False > > > > > > > > - CapsuleAnyFile = CapsuleData.CapsuleAnyFile() > > > > - CapsuleAnyFile.FileName = AnyFileName 
> > > > + myCapsuleAnyFile = CapsuleAnyFile() > > > > + myCapsuleAnyFile.FileName = AnyFileName > > > > if FMPCapsule: > > > > if not CapsuleObj.ImageFile: > > > > - CapsuleObj.ImageFile.append(CapsuleAnyFile) > > > > + CapsuleObj.ImageFile.append(myCapsuleAnyFile) > > > > else: > > > > - CapsuleObj.VendorCodeFile.append(CapsuleAnyFile) > > > > + CapsuleObj.VendorCodeFile.append(myCapsuleAnyFile) > > > > else: > > > > - CapsuleObj.CapsuleDataList.append(CapsuleAnyFile) > > > > + CapsuleObj.CapsuleDataList.append(myCapsuleAnyFile) > > > > return True > > > > > > > > - ## __GetAfileStatement() method > > > > + ## _GetAfileStatement() method > > > > # > > > > # Get Afile for capsule > > > > # > > > > @@ -3545,18 +3465,17 @@ class FdfParser: > > > > # @retval True Successfully find a Afile statement > > > > # @retval False Not able to find a Afile statement > > > > # > > > > - def __GetAfileStatement(self, CapsuleObj): > > > > - > > > > - if not self.__IsKeyword("APPEND"): > > > > + def _GetAfileStatement(self, CapsuleObj): > > > > + if not self._IsKeyword("APPEND"): > > > > return False > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected Afile name", self.FileName, self.CurrentLineNumber) > > > > > > > > - AfileName = self.__Token > > > > + AfileName = self._Token > > > > AfileBaseName = os.path.basename(AfileName) > > > > > > > > if os.path.splitext(AfileBaseName)[1] not in [".bin", ".BIN", ".Bin", ".dat", ".DAT", ".Dat", ".data", ".DATA", ".Data"]: > > > > @@ -3565,19 +3484,19 @@ class FdfParser: > > > > > > > > if not os.path.isabs(AfileName): > > > > AfileName = GenFdsGlobalVariable.ReplaceWorkspaceMacro(AfileName) > > > > - self.__VerifyFile(AfileName) > > > > + self._VerifyFile(AfileName) > > > > else: > > > > if not os.path.exists(AfileName): > > > > raise Warning('%s does not exist' % AfileName, self.FileName, self.CurrentLineNumber) > > > > else: > > > > pass > > > > > > > > - CapsuleAfile = CapsuleData.CapsuleAfile() > > > > - CapsuleAfile.FileName = AfileName > > > > - CapsuleObj.CapsuleDataList.append(CapsuleAfile) > > > > + myCapsuleAfile = CapsuleAfile() > > > > + myCapsuleAfile.FileName = AfileName > > > > + CapsuleObj.CapsuleDataList.append(myCapsuleAfile) > > > > return True > > > > > > > > - ## __GetRule() method > > > > + ## _GetRule() method > > > > # > > > > # Get Rule section contents and store its data into rule list of self.Profile > > > > # > > > > @@ -3585,143 +3504,140 @@ class FdfParser: > > > > # @retval True Successfully find a Rule > > > > # @retval False Not able to find a Rule > > > > # > > > > - def __GetRule(self): > > > > - > > > > - if not self.__GetNextToken(): > > > > + def _GetRule(self): > > > > + if not self._GetNextToken(): > > > > return False > > > > > > > > - S = self.__Token.upper() > > > > - if S.startswith("[") and not S.startswith("[RULE."): > > > > + S = self._Token.upper() > > > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[RULE."): > > > > self.SectionParser(S) > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > return False > > > > - self.__UndoToken() > > > > - if not self.__IsToken("[Rule.", True): > > > > + self._UndoToken() > > > > + if not self._IsToken("[Rule.", True): > > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > > #print 'Parsing 
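
The APPEND-file check above spells out nine case variants of three extensions; lowercasing the extension before a set membership test expresses the same rule more compactly. A hedged sketch of that alternative, with the allowed set copied from the extensions visible above:

    import os

    ALLOWED_AFILE_EXTS = {'.bin', '.dat', '.data'}

    def IsValidAfile(FileName):
        # Compare case-insensitively instead of listing .bin/.BIN/.Bin and friends.
        return os.path.splitext(FileName)[1].lower() in ALLOWED_AFILE_EXTS

    assert IsValidAfile('payload.BIN') and IsValidAfile('extra.Data')
    assert not IsValidAfile('readme.txt')
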
String: %s in File %s, At line: %d, Offset Within Line: %d' \ > > > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > > raise Warning("expected [Rule.]", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__SkipToToken("."): > > > > + if not self._SkipToToken(TAB_SPLIT): > > > > raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) > > > > > > > > - Arch = self.__SkippedChars.rstrip(".") > > > > + Arch = self._SkippedChars.rstrip(TAB_SPLIT) > > > > if Arch.upper() not in ARCH_SET_FULL: > > > > raise Warning("Unknown Arch '%s'" % Arch, self.FileName, self.CurrentLineNumber) > > > > > > > > - ModuleType = self.__GetModuleType() > > > > + ModuleType = self._GetModuleType() > > > > > > > > TemplateName = "" > > > > - if self.__IsToken("."): > > > > - if not self.__GetNextWord(): > > > > + if self._IsToken(TAB_SPLIT): > > > > + if not self._GetNextWord(): > > > > raise Warning("expected template name", self.FileName, self.CurrentLineNumber) > > > > - TemplateName = self.__Token > > > > + TemplateName = self._Token > > > > > > > > - if not self.__IsToken( "]"): > > > > + if not self._IsToken(TAB_SECTION_END): > > > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > > > > > - RuleObj = self.__GetRuleFileStatements() > > > > + RuleObj = self._GetRuleFileStatements() > > > > RuleObj.Arch = Arch.upper() > > > > RuleObj.ModuleType = ModuleType > > > > RuleObj.TemplateName = TemplateName > > > > - if TemplateName == '' : > > > > + if TemplateName == '': > > > > self.Profile.RuleDict['RULE' + \ > > > > - '.' + \ > > > > + TAB_SPLIT + \ > > > > Arch.upper() + \ > > > > - '.' + \ > > > > + TAB_SPLIT + \ > > > > ModuleType.upper() ] = RuleObj > > > > - else : > > > > + else: > > > > self.Profile.RuleDict['RULE' + \ > > > > - '.' + \ > > > > + TAB_SPLIT + \ > > > > Arch.upper() + \ > > > > - '.' + \ > > > > + TAB_SPLIT + \ > > > > ModuleType.upper() + \ > > > > - '.' 
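
The RULE dictionary keys above are built by concatenating 'RULE', the arch, the module type and optionally the template name with TAB_SPLIT. The same keys can be produced with a join over the non-empty parts, which keeps the templated and untemplated branches identical; a sketch, where MakeRuleKey is an illustrative helper rather than FdfParser API:

    TAB_SPLIT = '.'     # mirrors the shared constant used above

    def MakeRuleKey(Arch, ModuleType, TemplateName=''):
        Parts = ['RULE', Arch.upper(), ModuleType.upper()]
        if TemplateName:
            Parts.append(TemplateName.upper())
        return TAB_SPLIT.join(Parts)

    assert MakeRuleKey('X64', 'DXE_DRIVER') == 'RULE.X64.DXE_DRIVER'
    assert MakeRuleKey('common', 'PEIM', 'Tmpl') == 'RULE.COMMON.PEIM.TMPL'
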
+ \ > > > > + TAB_SPLIT + \ > > > > TemplateName.upper() ] = RuleObj > > > > return True > > > > > > > > - ## __GetModuleType() method > > > > + ## _GetModuleType() method > > > > # > > > > # Return the module type > > > > # > > > > # @param self The object pointer > > > > # @retval string module type > > > > # > > > > - def __GetModuleType(self): > > > > - > > > > - if not self.__GetNextWord(): > > > > + def _GetModuleType(self): > > > > + if not self._GetNextWord(): > > > > raise Warning("expected Module type", self.FileName, self.CurrentLineNumber) > > > > - if self.__Token.upper() not in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, > > > SUP_MODULE_DXE_CORE, \ > > > > + if self._Token.upper() not in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, > > > SUP_MODULE_DXE_CORE, \ > > > > SUP_MODULE_DXE_DRIVER, SUP_MODULE_DXE_SAL_DRIVER, \ > > > > SUP_MODULE_DXE_SMM_DRIVER, SUP_MODULE_DXE_RUNTIME_DRIVER, \ > > > > - SUP_MODULE_UEFI_DRIVER, SUP_MODULE_UEFI_APPLICATION, > > SUP_MODULE_USER_DEFINED, > > > "DEFAULT", SUP_MODULE_BASE, \ > > > > + SUP_MODULE_UEFI_DRIVER, SUP_MODULE_UEFI_APPLICATION, > > SUP_MODULE_USER_DEFINED, > > > TAB_DEFAULT, SUP_MODULE_BASE, \ > > > > EDK_COMPONENT_TYPE_SECURITY_CORE, > > > EDK_COMPONENT_TYPE_COMBINED_PEIM_DRIVER, EDK_COMPONENT_TYPE_PIC_PEIM, > > > EDK_COMPONENT_TYPE_RELOCATABLE_PEIM, \ > > > > "PE32_PEIM", EDK_COMPONENT_TYPE_BS_DRIVER, > > > EDK_COMPONENT_TYPE_RT_DRIVER, EDK_COMPONENT_TYPE_SAL_RT_DRIVER, EDK_COMPONENT_TYPE_APPLICATION, > > "ACPITABLE", > > > SUP_MODULE_SMM_CORE, SUP_MODULE_MM_STANDALONE, SUP_MODULE_MM_CORE_STANDALONE): > > > > - raise Warning("Unknown Module type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > - return self.__Token > > > > + raise Warning("Unknown Module type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > + return self._Token > > > > > > > > - ## __GetFileExtension() method > > > > + ## _GetFileExtension() method > > > > # > > > > # Return the file extension > > > > # > > > > # @param self The object pointer > > > > # @retval string file name extension > > > > # > > > > - def __GetFileExtension(self): > > > > - if not self.__IsToken("."): > > > > + def _GetFileExtension(self): > > > > + if not self._IsToken(TAB_SPLIT): > > > > raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) > > > > > > > > Ext = "" > > > > - if self.__GetNextToken(): > > > > - if FileExtensionPattern.match(self.__Token): > > > > - Ext = self.__Token > > > > - return '.' 
+ Ext > > > > + if self._GetNextToken(): > > > > + if FileExtensionPattern.match(self._Token): > > > > + Ext = self._Token > > > > + return TAB_SPLIT + Ext > > > > else: > > > > - raise Warning("Unknown file extension '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > + raise Warning("Unknown file extension '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > > > > > else: > > > > raise Warning("expected file extension", self.FileName, self.CurrentLineNumber) > > > > > > > > - ## __GetRuleFileStatement() method > > > > + ## _GetRuleFileStatement() method > > > > # > > > > # Get rule contents > > > > # > > > > # @param self The object pointer > > > > # @retval Rule Rule object > > > > # > > > > - def __GetRuleFileStatements(self): > > > > - > > > > - if not self.__IsKeyword("FILE"): > > > > + def _GetRuleFileStatements(self): > > > > + if not self._IsKeyword("FILE"): > > > > raise Warning("expected FILE", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextWord(): > > > > + if not self._GetNextWord(): > > > > raise Warning("expected FFS type", self.FileName, self.CurrentLineNumber) > > > > > > > > - Type = self.__Token.strip().upper() > > > > + Type = self._Token.strip().upper() > > > > if Type not in ("RAW", "FREEFORM", SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM,\ > > > > "PEI_DXE_COMBO", "DRIVER", SUP_MODULE_DXE_CORE, > > > EDK_COMPONENT_TYPE_APPLICATION, "FV_IMAGE", "SMM", SUP_MODULE_SMM_CORE, SUP_MODULE_MM_STANDALONE, > > > SUP_MODULE_MM_CORE_STANDALONE): > > > > - raise Warning("Unknown FV type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > + raise Warning("Unknown FV type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsKeyword("$(NAMED_GUID)"): > > > > - if not self.__GetNextWord(): > > > > + if not self._IsKeyword("$(NAMED_GUID)"): > > > > + if not self._GetNextWord(): > > > > raise Warning("expected $(NAMED_GUID)", self.FileName, self.CurrentLineNumber) > > > > - if self.__Token == 'PCD': > > > > - if not self.__IsToken( "("): > > > > + if self._Token == 'PCD': > > > > + if not self._IsToken("("): > > > > raise Warning("expected '('", self.FileName, self.CurrentLineNumber) > > > > - PcdPair = self.__GetNextPcdSettings() > > > > - if not self.__IsToken( ")"): > > > > + PcdPair = self._GetNextPcdSettings() > > > > + if not self._IsToken(")"): > > > > raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) > > > > - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' > > > > + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' > > > > > > > > - NameGuid = self.__Token > > > > + NameGuid = self._Token > > > > > > > > KeepReloc = None > > > > - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): > > > > - if self.__FileCouldHaveRelocFlag(Type): > > > > - if self.__Token == 'RELOCS_STRIPPED': > > > > + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): > > > > + if self._FileCouldHaveRelocFlag(Type): > > > > + if self._Token == 'RELOCS_STRIPPED': > > > > KeepReloc = False > > > > else: > > > > KeepReloc = True > > > > @@ -3729,111 +3645,109 @@ class FdfParser: > > > > raise Warning("File type %s could not have reloc strip flag%d" % (Type, self.CurrentLineNumber), > > self.FileName, > > > self.CurrentLineNumber) > > 
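
A side note on the __GetModuleType -> _GetModuleType style of rename running through this hunk: double-underscore method names are name-mangled by Python to _ClassName__Name, which makes them awkward to reach from subclasses or a debugger, while a single leading underscore still marks the method as internal. A minimal sketch, illustrative only and not code from this patch:

    class Parser:
        def __OldStyle(self):        # mangled to _Parser__OldStyle
            return "double underscore"
        def _NewStyle(self):         # plain internal name, no mangling
            return "single underscore"

    p = Parser()
    print(p._NewStyle())             # straightforward to call or override
    print(p._Parser__OldStyle())     # the mangled name callers are forced to use
    # p.__OldStyle()                 # AttributeError: hidden by name mangling
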
> > > > > > KeyStringList = [] > > > > - if self.__GetNextToken(): > > > > - if TokenFindPattern.match(self.__Token): > > > > - KeyStringList.append(self.__Token) > > > > - if self.__IsToken(","): > > > > - while self.__GetNextToken(): > > > > - if not TokenFindPattern.match(self.__Token): > > > > + if self._GetNextToken(): > > > > + if TokenFindPattern.match(self._Token): > > > > + KeyStringList.append(self._Token) > > > > + if self._IsToken(TAB_COMMA_SPLIT): > > > > + while self._GetNextToken(): > > > > + if not TokenFindPattern.match(self._Token): > > > > raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, > > self.CurrentLineNumber) > > > > - KeyStringList.append(self.__Token) > > > > + KeyStringList.append(self._Token) > > > > > > > > - if not self.__IsToken(","): > > > > + if not self._IsToken(TAB_COMMA_SPLIT): > > > > break > > > > > > > > else: > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > > > > > > > > > Fixed = False > > > > - if self.__IsKeyword("Fixed", True): > > > > + if self._IsKeyword("Fixed", True): > > > > Fixed = True > > > > > > > > CheckSum = False > > > > - if self.__IsKeyword("CheckSum", True): > > > > + if self._IsKeyword("CheckSum", True): > > > > CheckSum = True > > > > > > > > AlignValue = "" > > > > - if self.__GetAlignment(): > > > > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > + if self._GetAlignment(): > > > > + if self._Token not in ALIGNMENTS: > > > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > #For FFS, Auto is default option same to "" > > > > - if not self.__Token == "Auto": > > > > - AlignValue = self.__Token > > > > + if not self._Token == "Auto": > > > > + AlignValue = self._Token > > > > > > > > - if self.__IsToken("{"): > > > > + if self._IsToken("{"): > > > > # Complex file rule expected > > > > - Rule = RuleComplexFile.RuleComplexFile() > > > > - Rule.FvFileType = Type > > > > - Rule.NameGuid = NameGuid > > > > - Rule.Alignment = AlignValue > > > > - Rule.CheckSum = CheckSum > > > > - Rule.Fixed = Fixed > > > > - Rule.KeyStringList = KeyStringList > > > > + NewRule = RuleComplexFile() > > > > + NewRule.FvFileType = Type > > > > + NewRule.NameGuid = NameGuid > > > > + NewRule.Alignment = AlignValue > > > > + NewRule.CheckSum = CheckSum > > > > + NewRule.Fixed = Fixed > > > > + NewRule.KeyStringList = KeyStringList > > > > if KeepReloc is not None: > > > > - Rule.KeepReloc = KeepReloc > > > > + NewRule.KeepReloc = KeepReloc > > > > > > > > while True: > > > > - IsEncapsulate = self.__GetRuleEncapsulationSection(Rule) > > > > - IsLeaf = self.__GetEfiSection(Rule) > > > > + IsEncapsulate = self._GetRuleEncapsulationSection(NewRule) > > > > + IsLeaf = self._GetEfiSection(NewRule) > > > > if not IsEncapsulate and not IsLeaf: > > > > break > > > > > > > > - if not self.__IsToken("}"): > > > > + if not self._IsToken("}"): > > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > > > > > - return Rule > > > > + return NewRule > > > > > > > > else: > > > > # Simple file rule expected > > > > - if not self.__GetNextWord(): > > > > + if not self._GetNextWord(): > > > > raise Warning("expected leaf section type", self.FileName, self.CurrentLineNumber) > > > > > > > > - SectionName = self.__Token > > > > + 
SectionName = self._Token > > > > > > > > if SectionName not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > > > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > > > BINARY_FILE_TYPE_UI, BINARY_FILE_TYPE_PEI_DEPEX, "VERSION", "SUBTYPE_GUID", > > > BINARY_FILE_TYPE_SMM_DEPEX): > > > > raise Warning("Unknown leaf section name '%s'" % SectionName, self.FileName, self.CurrentLineNumber) > > > > > > > > > > > > - if self.__IsKeyword("Fixed", True): > > > > + if self._IsKeyword("Fixed", True): > > > > Fixed = True > > > > > > > > - if self.__IsKeyword("CheckSum", True): > > > > + if self._IsKeyword("CheckSum", True): > > > > CheckSum = True > > > > > > > > SectAlignment = "" > > > > - if self.__GetAlignment(): > > > > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > - if self.__Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == > > > BINARY_FILE_TYPE_TE): > > > > + if self._GetAlignment(): > > > > + if self._Token not in ALIGNMENTS: > > > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > + if self._Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == > > > BINARY_FILE_TYPE_TE): > > > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, > > > self.CurrentLineNumber) > > > > - SectAlignment = self.__Token > > > > + SectAlignment = self._Token > > > > > > > > Ext = None > > > > - if self.__IsToken('|'): > > > > - Ext = self.__GetFileExtension() > > > > - elif not self.__GetNextToken(): > > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > > + Ext = self._GetFileExtension() > > > > + elif not self._GetNextToken(): > > > > raise Warning("expected File name", self.FileName, self.CurrentLineNumber) > > > > > > > > - Rule = RuleSimpleFile.RuleSimpleFile() > > > > - Rule.SectionType = SectionName > > > > - Rule.FvFileType = Type > > > > - Rule.NameGuid = NameGuid > > > > - Rule.Alignment = AlignValue > > > > - Rule.SectAlignment = SectAlignment > > > > - Rule.CheckSum = CheckSum > > > > - Rule.Fixed = Fixed > > > > - Rule.KeyStringList = KeyStringList > > > > + NewRule = RuleSimpleFile() > > > > + NewRule.SectionType = SectionName > > > > + NewRule.FvFileType = Type > > > > + NewRule.NameGuid = NameGuid > > > > + NewRule.Alignment = AlignValue > > > > + NewRule.SectAlignment = SectAlignment > > > > + NewRule.CheckSum = CheckSum > > > > + NewRule.Fixed = Fixed > > > > + NewRule.KeyStringList = KeyStringList > > > > if KeepReloc is not None: > > > > - Rule.KeepReloc = KeepReloc > > > > - Rule.FileExtension = Ext > > > > - Rule.FileName = self.__Token > > > > - return Rule > > > > + NewRule.KeepReloc = KeepReloc > > > > + NewRule.FileExtension = Ext > > > > + NewRule.FileName = self._Token > > > > + return NewRule > > > > > > > > - ## __GetEfiSection() method > > > > + ## _GetEfiSection() method > > > > # > > > > # Get section list for Rule > > > > # > > > > @@ -3842,127 +3756,124 @@ class FdfParser: > > > > # @retval True Successfully find section statement > > > > # @retval False Not able to find section statement > > > > # > > > > - def __GetEfiSection(self, Obj): > > > > - > > > > + def _GetEfiSection(self, Obj): > > > > OldPos = self.GetFileBufferPos() > > > > - if not 
self.__GetNextWord(): > > > > + if not self._GetNextWord(): > > > > return False > > > > - SectionName = self.__Token > > > > + SectionName = self._Token > > > > > > > > if SectionName not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > > > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > > > BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, > > BINARY_FILE_TYPE_GUID, > > > BINARY_FILE_TYPE_SMM_DEPEX): > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > return False > > > > > > > > if SectionName == "FV_IMAGE": > > > > - FvImageSectionObj = FvImageSection.FvImageSection() > > > > - if self.__IsKeyword("FV_IMAGE"): > > > > + FvImageSectionObj = FvImageSection() > > > > + if self._IsKeyword("FV_IMAGE"): > > > > pass > > > > - if self.__IsToken( "{"): > > > > - FvObj = Fv.FV() > > > > - self.__GetDefineStatements(FvObj) > > > > - self.__GetBlockStatement(FvObj) > > > > - self.__GetSetStatements(FvObj) > > > > - self.__GetFvAlignment(FvObj) > > > > - self.__GetFvAttributes(FvObj) > > > > - self.__GetAprioriSection(FvObj) > > > > - self.__GetAprioriSection(FvObj) > > > > + if self._IsToken("{"): > > > > + FvObj = FV() > > > > + self._GetDefineStatements(FvObj) > > > > + self._GetBlockStatement(FvObj) > > > > + self._GetSetStatements(FvObj) > > > > + self._GetFvAlignment(FvObj) > > > > + self._GetFvAttributes(FvObj) > > > > + self._GetAprioriSection(FvObj) > > > > + self._GetAprioriSection(FvObj) > > > > > > > > while True: > > > > - IsInf = self.__GetInfStatement(FvObj) > > > > - IsFile = self.__GetFileStatement(FvObj) > > > > + IsInf = self._GetInfStatement(FvObj) > > > > + IsFile = self._GetFileStatement(FvObj) > > > > if not IsInf and not IsFile: > > > > break > > > > > > > > - if not self.__IsToken( "}"): > > > > + if not self._IsToken("}"): > > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > FvImageSectionObj.Fv = FvObj > > > > FvImageSectionObj.FvName = None > > > > > > > > else: > > > > - if not self.__IsKeyword(BINARY_FILE_TYPE_FV): > > > > - raise Warning("expected BINARY_FILE_TYPE_FV", self.FileName, self.CurrentLineNumber) > > > > - FvImageSectionObj.FvFileType = self.__Token > > > > + if not self._IsKeyword(BINARY_FILE_TYPE_FV): > > > > + raise Warning("expected 'FV'", self.FileName, self.CurrentLineNumber) > > > > + FvImageSectionObj.FvFileType = self._Token > > > > > > > > - if self.__GetAlignment(): > > > > - if self.__Token not in ("8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > - FvImageSectionObj.Alignment = self.__Token > > > > + if self._GetAlignment(): > > > > + if self._Token not in ALIGNMENT_NOAUTO: > > > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > + FvImageSectionObj.Alignment = self._Token > > > > > > > > - if self.__IsToken('|'): > > > > - FvImageSectionObj.FvFileExtension = self.__GetFileExtension() > > > > - elif self.__GetNextToken(): > > > > - if self.__Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, > > > BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > > + FvImageSectionObj.FvFileExtension = self._GetFileExtension() > > > > + elif self._GetNextToken(): > > > > + if self._Token not in ("}", "COMPAT16", 
BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, > > > BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > > > BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, > > BINARY_FILE_TYPE_GUID, > > > BINARY_FILE_TYPE_SMM_DEPEX): > > > > - FvImageSectionObj.FvFileName = self.__Token > > > > + FvImageSectionObj.FvFileName = self._Token > > > > else: > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > else: > > > > raise Warning("expected FV file name", self.FileName, self.CurrentLineNumber) > > > > > > > > Obj.SectionList.append(FvImageSectionObj) > > > > return True > > > > > > > > - EfiSectionObj = EfiSection.EfiSection() > > > > + EfiSectionObj = EfiSection() > > > > EfiSectionObj.SectionType = SectionName > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected file type", self.FileName, self.CurrentLineNumber) > > > > > > > > - if self.__Token == "STRING": > > > > - if not self.__RuleSectionCouldHaveString(EfiSectionObj.SectionType): > > > > + if self._Token == "STRING": > > > > + if not self._RuleSectionCouldHaveString(EfiSectionObj.SectionType): > > > > raise Warning("%s section could NOT have string data%d" % (EfiSectionObj.SectionType, > > self.CurrentLineNumber), > > > self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken('='): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected Quoted String", self.FileName, self.CurrentLineNumber) > > > > > > > > - if self.__GetStringData(): > > > > - EfiSectionObj.StringData = self.__Token > > > > + if self._GetStringData(): > > > > + EfiSectionObj.StringData = self._Token > > > > > > > > - if self.__IsKeyword("BUILD_NUM"): > > > > - if not self.__RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): > > > > + if self._IsKeyword("BUILD_NUM"): > > > > + if not self._RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): > > > > raise Warning("%s section could NOT have BUILD_NUM%d" % (EfiSectionObj.SectionType, > > > self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected Build number", self.FileName, self.CurrentLineNumber) > > > > - EfiSectionObj.BuildNum = self.__Token > > > > + EfiSectionObj.BuildNum = self._Token > > > > > > > > else: > > > > - EfiSectionObj.FileType = self.__Token > > > > - self.__CheckRuleSectionFileType(EfiSectionObj.SectionType, EfiSectionObj.FileType) > > > > + EfiSectionObj.FileType = self._Token > > > > + self._CheckRuleSectionFileType(EfiSectionObj.SectionType, EfiSectionObj.FileType) > > > > > > > > - if self.__IsKeyword("Optional"): > > > > - if not self.__RuleSectionCouldBeOptional(EfiSectionObj.SectionType): > > > > + if self._IsKeyword("Optional"): > > > > + if not self._RuleSectionCouldBeOptional(EfiSectionObj.SectionType): > > > > raise Warning("%s section could NOT be optional%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), > > > self.FileName, self.CurrentLineNumber) > > > > EfiSectionObj.Optional = True > > > > > > > > - if self.__IsKeyword("BUILD_NUM"): > > > > - if not 
self.__RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): > > > > + if self._IsKeyword("BUILD_NUM"): > > > > + if not self._RuleSectionCouldHaveBuildNum(EfiSectionObj.SectionType): > > > > raise Warning("%s section could NOT have BUILD_NUM%d" % (EfiSectionObj.SectionType, > > > self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected Build number", self.FileName, self.CurrentLineNumber) > > > > - EfiSectionObj.BuildNum = self.__Token > > > > + EfiSectionObj.BuildNum = self._Token > > > > > > > > - if self.__GetAlignment(): > > > > - if self.__Token not in ("Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", > > > > - "256K", "512K", "1M", "2M", "4M", "8M", "16M"): > > > > - raise Warning("Incorrect alignment '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > - if self.__Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == > > > BINARY_FILE_TYPE_TE): > > > > + if self._GetAlignment(): > > > > + if self._Token not in ALIGNMENTS: > > > > + raise Warning("Incorrect alignment '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > + if self._Token == 'Auto' and (not SectionName == BINARY_FILE_TYPE_PE32) and (not SectionName == > > > BINARY_FILE_TYPE_TE): > > > > raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, > > self.CurrentLineNumber) > > > > - EfiSectionObj.Alignment = self.__Token > > > > + EfiSectionObj.Alignment = self._Token > > > > > > > > - if self.__IsKeyword('RELOCS_STRIPPED') or self.__IsKeyword('RELOCS_RETAINED'): > > > > - if self.__SectionCouldHaveRelocFlag(EfiSectionObj.SectionType): > > > > - if self.__Token == 'RELOCS_STRIPPED': > > > > + if self._IsKeyword('RELOCS_STRIPPED') or self._IsKeyword('RELOCS_RETAINED'): > > > > + if self._SectionCouldHaveRelocFlag(EfiSectionObj.SectionType): > > > > + if self._Token == 'RELOCS_STRIPPED': > > > > EfiSectionObj.KeepReloc = False > > > > else: > > > > EfiSectionObj.KeepReloc = True > > > > @@ -3972,35 +3883,35 @@ class FdfParser: > > > > raise Warning("Section type %s could not have reloc strip flag" % EfiSectionObj.SectionType, self.FileName, > > > self.CurrentLineNumber) > > > > > > > > > > > > - if self.__IsToken('|'): > > > > - EfiSectionObj.FileExtension = self.__GetFileExtension() > > > > - elif self.__GetNextToken(): > > > > - if self.__Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > > > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > > + EfiSectionObj.FileExtension = self._GetFileExtension() > > > > + elif self._GetNextToken(): > > > > + if self._Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, > > > "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ > > > > BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, > > > BINARY_FILE_TYPE_SMM_DEPEX): > > > > > > > > - if self.__Token.startswith('PCD'): > > > > - self.__UndoToken() > > > > - self.__GetNextWord() > > > > + if self._Token.startswith('PCD'): > > > > + self._UndoToken() > > > > + self._GetNextWord() > > > > > > > > - if self.__Token == 'PCD': > > > > - if not self.__IsToken( "("): 
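
The alignment checks in this hunk now test membership in shared ALIGNMENTS / ALIGNMENT_NOAUTO constants instead of repeating the literal tuple of sizes at every call site. A rough sketch of the pattern; the definitions below are written out here only as stand-ins for the shared constants the patch relies on:

    # stand-in definitions; the real constants live with the other shared values
    ALIGNMENT_NOAUTO = {"8", "16", "32", "64", "128", "512",
                        "1K", "4K", "32K", "64K", "128K",
                        "256K", "512K", "1M", "2M", "4M", "8M", "16M"}
    ALIGNMENTS = ALIGNMENT_NOAUTO | {"Auto"}

    def CheckAlignment(Token, AllowAuto=True):
        # one membership test against a set replaces the per-site tuples
        Valid = ALIGNMENTS if AllowAuto else ALIGNMENT_NOAUTO
        if Token not in Valid:
            raise ValueError("Incorrect alignment '%s'" % Token)

    CheckAlignment("4K")       # accepted
    CheckAlignment("Auto")     # accepted where Auto is allowed
    # CheckAlignment("3K")     # raises ValueError
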
> > > > + if self._Token == 'PCD': > > > > + if not self._IsToken("("): > > > > raise Warning("expected '('", self.FileName, self.CurrentLineNumber) > > > > - PcdPair = self.__GetNextPcdSettings() > > > > - if not self.__IsToken( ")"): > > > > + PcdPair = self._GetNextPcdSettings() > > > > + if not self._IsToken(")"): > > > > raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) > > > > - self.__Token = 'PCD('+PcdPair[1]+'.'+PcdPair[0]+')' > > > > + self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' > > > > > > > > - EfiSectionObj.FileName = self.__Token > > > > + EfiSectionObj.FileName = self._Token > > > > > > > > else: > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > else: > > > > raise Warning("expected section file name", self.FileName, self.CurrentLineNumber) > > > > > > > > Obj.SectionList.append(EfiSectionObj) > > > > return True > > > > > > > > - ## __RuleSectionCouldBeOptional() method > > > > + ## _RuleSectionCouldBeOptional() method > > > > # > > > > # Get whether a section could be optional > > > > # > > > > @@ -4009,13 +3920,13 @@ class FdfParser: > > > > # @retval False section never optional > > > > # > > > > @staticmethod > > > > - def __RuleSectionCouldBeOptional(SectionType): > > > > + def _RuleSectionCouldBeOptional(SectionType): > > > > if SectionType in (BINARY_FILE_TYPE_DXE_DEPEX, BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, > > > "RAW", BINARY_FILE_TYPE_SMM_DEPEX): > > > > return True > > > > else: > > > > return False > > > > > > > > - ## __RuleSectionCouldHaveBuildNum() method > > > > + ## _RuleSectionCouldHaveBuildNum() method > > > > # > > > > # Get whether a section could have build number information > > > > # > > > > @@ -4024,13 +3935,13 @@ class FdfParser: > > > > # @retval False section never have build number information > > > > # > > > > @staticmethod > > > > - def __RuleSectionCouldHaveBuildNum(SectionType): > > > > + def _RuleSectionCouldHaveBuildNum(SectionType): > > > > if SectionType in ("VERSION"): > > > > return True > > > > else: > > > > return False > > > > > > > > - ## __RuleSectionCouldHaveString() method > > > > + ## _RuleSectionCouldHaveString() method > > > > # > > > > # Get whether a section could have string > > > > # > > > > @@ -4039,13 +3950,13 @@ class FdfParser: > > > > # @retval False section never have string > > > > # > > > > @staticmethod > > > > - def __RuleSectionCouldHaveString(SectionType): > > > > + def _RuleSectionCouldHaveString(SectionType): > > > > if SectionType in (BINARY_FILE_TYPE_UI, "VERSION"): > > > > return True > > > > else: > > > > return False > > > > > > > > - ## __CheckRuleSectionFileType() method > > > > + ## _CheckRuleSectionFileType() method > > > > # > > > > # Get whether a section matches a file type > > > > # > > > > @@ -4053,7 +3964,7 @@ class FdfParser: > > > > # @param SectionType The section type to check > > > > # @param FileType The file type to check > > > > # > > > > - def __CheckRuleSectionFileType(self, SectionType, FileType): > > > > + def _CheckRuleSectionFileType(self, SectionType, FileType): > > > > if SectionType == "COMPAT16": > > > > if FileType not in ("COMPAT16", "SEC_COMPAT16"): > > > > raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) > > > > @@ -4085,53 +3996,53 @@ class FdfParser: > > > > if FileType not in (BINARY_FILE_TYPE_PE32, "SEC_GUID"): > > > > raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) > > > > > > > > - ## 
__GetRuleEncapsulationSection() method > > > > + ## _GetRuleEncapsulationSection() method > > > > # > > > > # Get encapsulation section for Rule > > > > # > > > > # @param self The object pointer > > > > - # @param Rule for whom section is got > > > > + # @param theRule for whom section is got > > > > # @retval True Successfully find section statement > > > > # @retval False Not able to find section statement > > > > # > > > > - def __GetRuleEncapsulationSection(self, Rule): > > > > - if self.__IsKeyword( "COMPRESS"): > > > > + def _GetRuleEncapsulationSection(self, theRule): > > > > + if self._IsKeyword("COMPRESS"): > > > > Type = "PI_STD" > > > > - if self.__IsKeyword("PI_STD") or self.__IsKeyword("PI_NONE"): > > > > - Type = self.__Token > > > > + if self._IsKeyword("PI_STD") or self._IsKeyword("PI_NONE"): > > > > + Type = self._Token > > > > > > > > - if not self.__IsToken("{"): > > > > + if not self._IsToken("{"): > > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > > > > > - CompressSectionObj = CompressSection.CompressSection() > > > > + CompressSectionObj = CompressSection() > > > > > > > > CompressSectionObj.CompType = Type > > > > # Recursive sections... > > > > while True: > > > > - IsEncapsulate = self.__GetRuleEncapsulationSection(CompressSectionObj) > > > > - IsLeaf = self.__GetEfiSection(CompressSectionObj) > > > > + IsEncapsulate = self._GetRuleEncapsulationSection(CompressSectionObj) > > > > + IsLeaf = self._GetEfiSection(CompressSectionObj) > > > > if not IsEncapsulate and not IsLeaf: > > > > break > > > > > > > > - if not self.__IsToken( "}"): > > > > + if not self._IsToken("}"): > > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > - Rule.SectionList.append(CompressSectionObj) > > > > + theRule.SectionList.append(CompressSectionObj) > > > > > > > > return True > > > > > > > > - elif self.__IsKeyword( "GUIDED"): > > > > + elif self._IsKeyword("GUIDED"): > > > > GuidValue = None > > > > - if self.__GetNextGuid(): > > > > - GuidValue = self.__Token > > > > + if self._GetNextGuid(): > > > > + GuidValue = self._Token > > > > > > > > - if self.__IsKeyword( "$(NAMED_GUID)"): > > > > - GuidValue = self.__Token > > > > + if self._IsKeyword("$(NAMED_GUID)"): > > > > + GuidValue = self._Token > > > > > > > > - AttribDict = self.__GetGuidAttrib() > > > > + AttribDict = self._GetGuidAttrib() > > > > > > > > - if not self.__IsToken("{"): > > > > + if not self._IsToken("{"): > > > > raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) > > > > - GuidSectionObj = GuidSection.GuidSection() > > > > + GuidSectionObj = GuidSection() > > > > GuidSectionObj.NameGuid = GuidValue > > > > GuidSectionObj.SectionType = "GUIDED" > > > > GuidSectionObj.ProcessRequired = AttribDict["PROCESSING_REQUIRED"] > > > > @@ -4140,20 +4051,20 @@ class FdfParser: > > > > > > > > # Efi sections... 
> > > > while True: > > > > - IsEncapsulate = self.__GetRuleEncapsulationSection(GuidSectionObj) > > > > - IsLeaf = self.__GetEfiSection(GuidSectionObj) > > > > + IsEncapsulate = self._GetRuleEncapsulationSection(GuidSectionObj) > > > > + IsLeaf = self._GetEfiSection(GuidSectionObj) > > > > if not IsEncapsulate and not IsLeaf: > > > > break > > > > > > > > - if not self.__IsToken( "}"): > > > > + if not self._IsToken("}"): > > > > raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) > > > > - Rule.SectionList.append(GuidSectionObj) > > > > + theRule.SectionList.append(GuidSectionObj) > > > > > > > > return True > > > > > > > > return False > > > > > > > > - ## __GetVtf() method > > > > + ## _GetVtf() method > > > > # > > > > # Get VTF section contents and store its data into VTF list of self.Profile > > > > # > > > > @@ -4161,69 +4072,69 @@ class FdfParser: > > > > # @retval True Successfully find a VTF > > > > # @retval False Not able to find a VTF > > > > # > > > > - def __GetVtf(self): > > > > - if not self.__GetNextToken(): > > > > + def _GetVtf(self): > > > > + if not self._GetNextToken(): > > > > return False > > > > > > > > - S = self.__Token.upper() > > > > - if S.startswith("[") and not S.startswith("[VTF."): > > > > + S = self._Token.upper() > > > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[VTF."): > > > > self.SectionParser(S) > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > return False > > > > > > > > - self.__UndoToken() > > > > - if not self.__IsToken("[VTF.", True): > > > > + self._UndoToken() > > > > + if not self._IsToken("[VTF.", True): > > > > FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) > > > > #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ > > > > - # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine :], FileLineTuple[0], > > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > > + # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], > > > FileLineTuple[1], self.CurrentOffsetWithinLine) > > > > raise Warning("expected [VTF.]", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__SkipToToken("."): > > > > + if not self._SkipToToken(TAB_SPLIT): > > > > raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) > > > > > > > > - Arch = self.__SkippedChars.rstrip(".").upper() > > > > - if Arch not in ("IA32", "X64", "IPF", "ARM", "AARCH64"): > > > > + Arch = self._SkippedChars.rstrip(TAB_SPLIT).upper() > > > > + if Arch not in {TAB_ARCH_IA32, TAB_ARCH_X64, TAB_ARCH_IPF, TAB_ARCH_ARM, TAB_ARCH_AARCH64}: > > > > raise Warning("Unknown Arch '%s'" % Arch, self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextWord(): > > > > + if not self._GetNextWord(): > > > > raise Warning("expected VTF name", self.FileName, self.CurrentLineNumber) > > > > - Name = self.__Token.upper() > > > > + Name = self._Token.upper() > > > > > > > > - VtfObj = Vtf.Vtf() > > > > + VtfObj = Vtf() > > > > VtfObj.UiName = Name > > > > VtfObj.KeyArch = Arch > > > > > > > > - if self.__IsToken(","): > > > > - if not self.__GetNextWord(): > > > > + if self._IsToken(TAB_COMMA_SPLIT): > > > > + if not self._GetNextWord(): > > > > raise Warning("expected Arch list", self.FileName, self.CurrentLineNumber) > > > > - if self.__Token.upper() not in ("IA32", "X64", "IPF", "ARM", "AARCH64"): > > > > - raise Warning("Unknown Arch '%s'" % self.__Token, self.FileName, 
self.CurrentLineNumber) > > > > - VtfObj.ArchList = self.__Token.upper() > > > > + if self._Token.upper() not in (TAB_ARCH_IA32, TAB_ARCH_X64, TAB_ARCH_IPF, TAB_ARCH_ARM, > > > TAB_ARCH_AARCH64): > > > > + raise Warning("Unknown Arch '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > + VtfObj.ArchList = self._Token.upper() > > > > > > > > - if not self.__IsToken( "]"): > > > > + if not self._IsToken(TAB_SECTION_END): > > > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > > > > > - if self.__IsKeyword("IA32_RST_BIN"): > > > > - if not self.__IsToken("="): > > > > + if self._IsKeyword("IA32_RST_BIN"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected Reset file", self.FileName, self.CurrentLineNumber) > > > > > > > > - VtfObj.ResetBin = self.__Token > > > > - if VtfObj.ResetBin.replace('$(WORKSPACE)', '').find('$') == -1: > > > > + VtfObj.ResetBin = self._Token > > > > + if VtfObj.ResetBin.replace(TAB_WORKSPACE, '').find('$') == -1: > > > > #check for file path > > > > ErrorCode, ErrorInfo = PathClass(NormPath(VtfObj.ResetBin), GenFdsGlobalVariable.WorkSpaceDir).Validate() > > > > if ErrorCode != 0: > > > > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > > > > > > > - while self.__GetComponentStatement(VtfObj): > > > > + while self._GetComponentStatement(VtfObj): > > > > pass > > > > > > > > self.Profile.VtfList.append(VtfObj) > > > > return True > > > > > > > > - ## __GetComponentStatement() method > > > > + ## _GetComponentStatement() method > > > > # > > > > # Get components in VTF > > > > # > > > > @@ -4232,132 +4143,132 @@ class FdfParser: > > > > # @retval True Successfully find a component > > > > # @retval False Not able to find a component > > > > # > > > > - def __GetComponentStatement(self, VtfObj): > > > > - if not self.__IsKeyword("COMP_NAME"): > > > > + def _GetComponentStatement(self, VtfObj): > > > > + if not self._IsKeyword("COMP_NAME"): > > > > return False > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextWord(): > > > > + if not self._GetNextWord(): > > > > raise Warning("expected Component Name", self.FileName, self.CurrentLineNumber) > > > > > > > > - CompStatementObj = ComponentStatement.ComponentStatement() > > > > - CompStatementObj.CompName = self.__Token > > > > + CompStatementObj = ComponentStatement() > > > > + CompStatementObj.CompName = self._Token > > > > > > > > - if not self.__IsKeyword("COMP_LOC"): > > > > + if not self._IsKeyword("COMP_LOC"): > > > > raise Warning("expected COMP_LOC", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > CompStatementObj.CompLoc = "" > > > > - if self.__GetNextWord(): > > > > - CompStatementObj.CompLoc = self.__Token > > > > - if self.__IsToken('|'): > > > > - if not self.__GetNextWord(): > > > > + if self._GetNextWord(): > > > > + CompStatementObj.CompLoc = self._Token > > > > + if self._IsToken(TAB_VALUE_SPLIT): > > > > + if not self._GetNextWord(): > > > > raise Warning("Expected Region Name", self.FileName, self.CurrentLineNumber) > > > > > 
> > > - if self.__Token not in ("F", "N", "S"): #, "H", "L", "PH", "PL"): not support > > > > - raise Warning("Unknown location type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > + if self._Token not in ("F", "N", "S"): #, "H", "L", "PH", "PL"): not support > > > > + raise Warning("Unknown location type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > > > > > - CompStatementObj.FilePos = self.__Token > > > > + CompStatementObj.FilePos = self._Token > > > > else: > > > > self.CurrentLineNumber += 1 > > > > self.CurrentOffsetWithinLine = 0 > > > > > > > > - if not self.__IsKeyword("COMP_TYPE"): > > > > + if not self._IsKeyword("COMP_TYPE"): > > > > raise Warning("expected COMP_TYPE", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected Component type", self.FileName, self.CurrentLineNumber) > > > > - if self.__Token not in ("FIT", "PAL_B", "PAL_A", "OEM"): > > > > - if not self.__Token.startswith("0x") or len(self.__Token) < 3 or len(self.__Token) > 4 or \ > > > > - not self.__Token[2] in string.hexdigits or not self.__Token[-1] in string.hexdigits: > > > > - raise Warning("Unknown location type '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > - CompStatementObj.CompType = self.__Token > > > > + if self._Token not in ("FIT", "PAL_B", "PAL_A", "OEM"): > > > > + if not self._Token.startswith("0x") or len(self._Token) < 3 or len(self._Token) > 4 or \ > > > > + not self._Token[2] in hexdigits or not self._Token[-1] in hexdigits: > > > > + raise Warning("Unknown location type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > + CompStatementObj.CompType = self._Token > > > > > > > > - if not self.__IsKeyword("COMP_VER"): > > > > + if not self._IsKeyword("COMP_VER"): > > > > raise Warning("expected COMP_VER", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected Component version", self.FileName, self.CurrentLineNumber) > > > > > > > > - Pattern = re.compile('-$|[0-9a-fA-F]{1,2}\.[0-9a-fA-F]{1,2}$', re.DOTALL) > > > > - if Pattern.match(self.__Token) is None: > > > > - raise Warning("Unknown version format '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > - CompStatementObj.CompVer = self.__Token > > > > + Pattern = compile('-$|[0-9a-fA-F]{1,2}\.[0-9a-fA-F]{1,2}$', DOTALL) > > > > + if Pattern.match(self._Token) is None: > > > > + raise Warning("Unknown version format '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > + CompStatementObj.CompVer = self._Token > > > > > > > > - if not self.__IsKeyword("COMP_CS"): > > > > + if not self._IsKeyword("COMP_CS"): > > > > raise Warning("expected COMP_CS", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected Component CS", 
self.FileName, self.CurrentLineNumber) > > > > - if self.__Token not in ("1", "0"): > > > > - raise Warning("Unknown Component CS '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > - CompStatementObj.CompCs = self.__Token > > > > + if self._Token not in ("1", "0"): > > > > + raise Warning("Unknown Component CS '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > + CompStatementObj.CompCs = self._Token > > > > > > > > > > > > - if not self.__IsKeyword("COMP_BIN"): > > > > + if not self._IsKeyword("COMP_BIN"): > > > > raise Warning("expected COMP_BIN", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected Component file", self.FileName, self.CurrentLineNumber) > > > > > > > > - CompStatementObj.CompBin = self.__Token > > > > - if CompStatementObj.CompBin != '-' and CompStatementObj.CompBin.replace('$(WORKSPACE)', '').find('$') == -1: > > > > + CompStatementObj.CompBin = self._Token > > > > + if CompStatementObj.CompBin != '-' and CompStatementObj.CompBin.replace(TAB_WORKSPACE, '').find('$') == -1: > > > > #check for file path > > > > ErrorCode, ErrorInfo = PathClass(NormPath(CompStatementObj.CompBin), > > > GenFdsGlobalVariable.WorkSpaceDir).Validate() > > > > if ErrorCode != 0: > > > > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > > > > > > > - if not self.__IsKeyword("COMP_SYM"): > > > > + if not self._IsKeyword("COMP_SYM"): > > > > raise Warning("expected COMP_SYM", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected Component symbol file", self.FileName, self.CurrentLineNumber) > > > > > > > > - CompStatementObj.CompSym = self.__Token > > > > - if CompStatementObj.CompSym != '-' and CompStatementObj.CompSym.replace('$(WORKSPACE)', '').find('$') == -1: > > > > + CompStatementObj.CompSym = self._Token > > > > + if CompStatementObj.CompSym != '-' and CompStatementObj.CompSym.replace(TAB_WORKSPACE, '').find('$') == -1: > > > > #check for file path > > > > ErrorCode, ErrorInfo = PathClass(NormPath(CompStatementObj.CompSym), > > > GenFdsGlobalVariable.WorkSpaceDir).Validate() > > > > if ErrorCode != 0: > > > > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > > > > > > > - if not self.__IsKeyword("COMP_SIZE"): > > > > + if not self._IsKeyword("COMP_SIZE"): > > > > raise Warning("expected COMP_SIZE", self.FileName, self.CurrentLineNumber) > > > > > > > > - if not self.__IsToken("="): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > > > > > - if self.__IsToken("-"): > > > > - CompStatementObj.CompSize = self.__Token > > > > - elif self.__GetNextDecimalNumber(): > > > > - CompStatementObj.CompSize = self.__Token > > > > - elif self.__GetNextHexNumber(): > > > > - CompStatementObj.CompSize = self.__Token > > > > + if self._IsToken("-"): > > > > + CompStatementObj.CompSize = self._Token > > > > + elif self._GetNextDecimalNumber(): > > > > + CompStatementObj.CompSize = self._Token > > > > + elif 
self._GetNextHexNumber(): > > > > + CompStatementObj.CompSize = self._Token > > > > else: > > > > - raise Warning("Unknown size '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > + raise Warning("Unknown size '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > > > > > VtfObj.ComponentStatementList.append(CompStatementObj) > > > > return True > > > > > > > > - ## __GetOptionRom() method > > > > + ## _GetOptionRom() method > > > > # > > > > # Get OptionROM section contents and store its data into OptionROM list of self.Profile > > > > # > > > > @@ -4365,38 +4276,37 @@ class FdfParser: > > > > # @retval True Successfully find a OptionROM > > > > # @retval False Not able to find a OptionROM > > > > # > > > > - def __GetOptionRom(self): > > > > - if not self.__GetNextToken(): > > > > + def _GetOptionRom(self): > > > > + if not self._GetNextToken(): > > > > return False > > > > > > > > - S = self.__Token.upper() > > > > - if S.startswith("[") and not S.startswith("[OPTIONROM."): > > > > + S = self._Token.upper() > > > > + if S.startswith(TAB_SECTION_START) and not S.startswith("[OPTIONROM."): > > > > self.SectionParser(S) > > > > - self.__UndoToken() > > > > + self._UndoToken() > > > > return False > > > > > > > > - self.__UndoToken() > > > > - if not self.__IsToken("[OptionRom.", True): > > > > - raise Warning("Unknown Keyword '%s'" % self.__Token, self.FileName, self.CurrentLineNumber) > > > > + self._UndoToken() > > > > + if not self._IsToken("[OptionRom.", True): > > > > + raise Warning("Unknown Keyword '%s'" % self._Token, self.FileName, self.CurrentLineNumber) > > > > > > > > - OptRomName = self.__GetUiName() > > > > + OptRomName = self._GetUiName() > > > > > > > > - if not self.__IsToken( "]"): > > > > + if not self._IsToken(TAB_SECTION_END): > > > > raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) > > > > > > > > - OptRomObj = OptionRom.OPTIONROM() > > > > - OptRomObj.DriverName = OptRomName > > > > + OptRomObj = OPTIONROM(OptRomName) > > > > self.Profile.OptRomDict[OptRomName] = OptRomObj > > > > > > > > while True: > > > > - isInf = self.__GetOptRomInfStatement(OptRomObj) > > > > - isFile = self.__GetOptRomFileStatement(OptRomObj) > > > > + isInf = self._GetOptRomInfStatement(OptRomObj) > > > > + isFile = self._GetOptRomFileStatement(OptRomObj) > > > > if not isInf and not isFile: > > > > break > > > > > > > > return True > > > > > > > > - ## __GetOptRomInfStatement() method > > > > + ## _GetOptRomInfStatement() method > > > > # > > > > # Get INF statements > > > > # > > > > @@ -4405,17 +4315,17 @@ class FdfParser: > > > > # @retval True Successfully find inf statement > > > > # @retval False Not able to find inf statement > > > > # > > > > - def __GetOptRomInfStatement(self, Obj): > > > > - if not self.__IsKeyword( "INF"): > > > > + def _GetOptRomInfStatement(self, Obj): > > > > + if not self._IsKeyword("INF"): > > > > return False > > > > > > > > - ffsInf = OptRomInfStatement.OptRomInfStatement() > > > > - self.__GetInfOptions( ffsInf) > > > > + ffsInf = OptRomInfStatement() > > > > + self._GetInfOptions(ffsInf) > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected INF file path", self.FileName, self.CurrentLineNumber) > > > > - ffsInf.InfFileName = self.__Token > > > > - if ffsInf.InfFileName.replace('$(WORKSPACE)', '').find('$') == -1: > > > > + ffsInf.InfFileName = self._Token > > > > + if ffsInf.InfFileName.replace(TAB_WORKSPACE, '').find('$') == -1: > > > > 
#check for file path > > > > ErrorCode, ErrorInfo = PathClass(NormPath(ffsInf.InfFileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() > > > > if ErrorCode != 0: > > > > @@ -4434,71 +4344,71 @@ class FdfParser: > > > > self.Profile.InfDict['ArchTBD'].append(ffsInf.InfFileName) > > > > > > > > > > > > - self.__GetOptRomOverrides (ffsInf) > > > > + self._GetOptRomOverrides (ffsInf) > > > > > > > > Obj.FfsList.append(ffsInf) > > > > return True > > > > > > > > - ## __GetOptRomOverrides() method > > > > + ## _GetOptRomOverrides() method > > > > # > > > > # Get overrides for OptROM INF & FILE > > > > # > > > > # @param self The object pointer > > > > # @param FfsInfObj for whom overrides is got > > > > # > > > > - def __GetOptRomOverrides(self, Obj): > > > > - if self.__IsToken('{'): > > > > - Overrides = OptRomInfStatement.OverrideAttribs() > > > > + def _GetOptRomOverrides(self, Obj): > > > > + if self._IsToken('{'): > > > > + Overrides = OverrideAttribs() > > > > while True: > > > > - if self.__IsKeyword( "PCI_VENDOR_ID"): > > > > - if not self.__IsToken( "="): > > > > + if self._IsKeyword("PCI_VENDOR_ID"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextHexNumber(): > > > > + if not self._GetNextHexNumber(): > > > > raise Warning("expected Hex vendor id", self.FileName, self.CurrentLineNumber) > > > > - Overrides.PciVendorId = self.__Token > > > > + Overrides.PciVendorId = self._Token > > > > continue > > > > > > > > - if self.__IsKeyword( "PCI_CLASS_CODE"): > > > > - if not self.__IsToken( "="): > > > > + if self._IsKeyword("PCI_CLASS_CODE"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextHexNumber(): > > > > + if not self._GetNextHexNumber(): > > > > raise Warning("expected Hex class code", self.FileName, self.CurrentLineNumber) > > > > - Overrides.PciClassCode = self.__Token > > > > + Overrides.PciClassCode = self._Token > > > > continue > > > > > > > > - if self.__IsKeyword( "PCI_DEVICE_ID"): > > > > - if not self.__IsToken( "="): > > > > + if self._IsKeyword("PCI_DEVICE_ID"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextHexNumber(): > > > > + if not self._GetNextHexNumber(): > > > > raise Warning("expected Hex device id", self.FileName, self.CurrentLineNumber) > > > > > > > > - Overrides.PciDeviceId = self.__Token > > > > + Overrides.PciDeviceId = self._Token > > > > continue > > > > > > > > - if self.__IsKeyword( "PCI_REVISION"): > > > > - if not self.__IsToken( "="): > > > > + if self._IsKeyword("PCI_REVISION"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextHexNumber(): > > > > + if not self._GetNextHexNumber(): > > > > raise Warning("expected Hex revision", self.FileName, self.CurrentLineNumber) > > > > - Overrides.PciRevision = self.__Token > > > > + Overrides.PciRevision = self._Token > > > > continue > > > > > > > > - if self.__IsKeyword( "PCI_COMPRESS"): > > > > - if not self.__IsToken( "="): > > > > + if self._IsKeyword("PCI_COMPRESS"): > > > > + if not self._IsToken(TAB_EQUAL_SPLIT): > > > > raise Warning("expected '='", self.FileName, self.CurrentLineNumber) > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > 
> > raise Warning("expected TRUE/FALSE for compress", self.FileName, self.CurrentLineNumber) > > > > - Overrides.NeedCompress = self.__Token.upper() == 'TRUE' > > > > + Overrides.NeedCompress = self._Token.upper() == 'TRUE' > > > > continue > > > > > > > > - if self.__IsToken( "}"): > > > > + if self._IsToken("}"): > > > > break > > > > else: > > > > EdkLogger.error("FdfParser", FORMAT_INVALID, File=self.FileName, Line=self.CurrentLineNumber) > > > > > > > > Obj.OverrideAttribs = Overrides > > > > > > > > - ## __GetOptRomFileStatement() method > > > > + ## _GetOptRomFileStatement() method > > > > # > > > > # Get FILE statements > > > > # > > > > @@ -4507,33 +4417,33 @@ class FdfParser: > > > > # @retval True Successfully find FILE statement > > > > # @retval False Not able to find FILE statement > > > > # > > > > - def __GetOptRomFileStatement(self, Obj): > > > > - if not self.__IsKeyword( "FILE"): > > > > + def _GetOptRomFileStatement(self, Obj): > > > > + if not self._IsKeyword("FILE"): > > > > return False > > > > > > > > - FfsFileObj = OptRomFileStatement.OptRomFileStatement() > > > > + FfsFileObj = OptRomFileStatement() > > > > > > > > - if not self.__IsKeyword("EFI") and not self.__IsKeyword(BINARY_FILE_TYPE_BIN): > > > > + if not self._IsKeyword("EFI") and not self._IsKeyword(BINARY_FILE_TYPE_BIN): > > > > raise Warning("expected Binary type (EFI/BIN)", self.FileName, self.CurrentLineNumber) > > > > - FfsFileObj.FileType = self.__Token > > > > + FfsFileObj.FileType = self._Token > > > > > > > > - if not self.__GetNextToken(): > > > > + if not self._GetNextToken(): > > > > raise Warning("expected File path", self.FileName, self.CurrentLineNumber) > > > > - FfsFileObj.FileName = self.__Token > > > > - if FfsFileObj.FileName.replace('$(WORKSPACE)', '').find('$') == -1: > > > > + FfsFileObj.FileName = self._Token > > > > + if FfsFileObj.FileName.replace(TAB_WORKSPACE, '').find('$') == -1: > > > > #check for file path > > > > ErrorCode, ErrorInfo = PathClass(NormPath(FfsFileObj.FileName), GenFdsGlobalVariable.WorkSpaceDir).Validate() > > > > if ErrorCode != 0: > > > > EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) > > > > > > > > if FfsFileObj.FileType == 'EFI': > > > > - self.__GetOptRomOverrides(FfsFileObj) > > > > + self._GetOptRomOverrides(FfsFileObj) > > > > > > > > Obj.FfsList.append(FfsFileObj) > > > > > > > > return True > > > > > > > > - ## __GetCapInFd() method > > > > + ## _GetCapInFd() method > > > > # > > > > # Get Cap list contained in FD > > > > # > > > > @@ -4541,7 +4451,7 @@ class FdfParser: > > > > # @param FdName FD name > > > > # @retval CapList List of Capsule in FD > > > > # > > > > - def __GetCapInFd (self, FdName): > > > > + def _GetCapInFd (self, FdName): > > > > CapList = [] > > > > if FdName.upper() in self.Profile.FdDict: > > > > FdObj = self.Profile.FdDict[FdName.upper()] > > > > @@ -4554,7 +4464,7 @@ class FdfParser: > > > > CapList.append(elementRegionData.upper()) > > > > return CapList > > > > > > > > - ## __GetReferencedFdCapTuple() method > > > > + ## _GetReferencedFdCapTuple() method > > > > # > > > > # Get FV and FD list referenced by a capsule image > > > > # > > > > @@ -4563,22 +4473,22 @@ class FdfParser: > > > > # @param RefFdList referenced FD by section > > > > # @param RefFvList referenced FV by section > > > > # > > > > - def __GetReferencedFdCapTuple(self, CapObj, RefFdList = [], RefFvList = []): > > > > - for CapsuleDataObj in CapObj.CapsuleDataList : > > > > + def _GetReferencedFdCapTuple(self, CapObj, RefFdList = [], RefFvList = 
[]): > > > > + for CapsuleDataObj in CapObj.CapsuleDataList: > > > > if hasattr(CapsuleDataObj, 'FvName') and CapsuleDataObj.FvName is not None and > > CapsuleDataObj.FvName.upper() > > > not in RefFvList: > > > > RefFvList.append (CapsuleDataObj.FvName.upper()) > > > > elif hasattr(CapsuleDataObj, 'FdName') and CapsuleDataObj.FdName is not None and > > > CapsuleDataObj.FdName.upper() not in RefFdList: > > > > RefFdList.append (CapsuleDataObj.FdName.upper()) > > > > elif CapsuleDataObj.Ffs is not None: > > > > - if isinstance(CapsuleDataObj.Ffs, FfsFileStatement.FileStatement): > > > > + if isinstance(CapsuleDataObj.Ffs, FileStatement): > > > > if CapsuleDataObj.Ffs.FvName is not None and CapsuleDataObj.Ffs.FvName.upper() not in RefFvList: > > > > RefFvList.append(CapsuleDataObj.Ffs.FvName.upper()) > > > > elif CapsuleDataObj.Ffs.FdName is not None and CapsuleDataObj.Ffs.FdName.upper() not in RefFdList: > > > > RefFdList.append(CapsuleDataObj.Ffs.FdName.upper()) > > > > else: > > > > - self.__GetReferencedFdFvTupleFromSection(CapsuleDataObj.Ffs, RefFdList, RefFvList) > > > > + self._GetReferencedFdFvTupleFromSection(CapsuleDataObj.Ffs, RefFdList, RefFvList) > > > > > > > > - ## __GetFvInFd() method > > > > + ## _GetFvInFd() method > > > > # > > > > # Get FV list contained in FD > > > > # > > > > @@ -4586,7 +4496,7 @@ class FdfParser: > > > > # @param FdName FD name > > > > # @retval FvList list of FV in FD > > > > # > > > > - def __GetFvInFd (self, FdName): > > > > + def _GetFvInFd (self, FdName): > > > > FvList = [] > > > > if FdName.upper() in self.Profile.FdDict: > > > > FdObj = self.Profile.FdDict[FdName.upper()] > > > > @@ -4599,7 +4509,7 @@ class FdfParser: > > > > FvList.append(elementRegionData.upper()) > > > > return FvList > > > > > > > > - ## __GetReferencedFdFvTuple() method > > > > + ## _GetReferencedFdFvTuple() method > > > > # > > > > # Get FD and FV list referenced by a FFS file > > > > # > > > > @@ -4608,17 +4518,17 @@ class FdfParser: > > > > # @param RefFdList referenced FD by section > > > > # @param RefFvList referenced FV by section > > > > # > > > > - def __GetReferencedFdFvTuple(self, FvObj, RefFdList = [], RefFvList = []): > > > > + def _GetReferencedFdFvTuple(self, FvObj, RefFdList = [], RefFvList = []): > > > > for FfsObj in FvObj.FfsList: > > > > - if isinstance(FfsObj, FfsFileStatement.FileStatement): > > > > + if isinstance(FfsObj, FileStatement): > > > > if FfsObj.FvName is not None and FfsObj.FvName.upper() not in RefFvList: > > > > RefFvList.append(FfsObj.FvName.upper()) > > > > elif FfsObj.FdName is not None and FfsObj.FdName.upper() not in RefFdList: > > > > RefFdList.append(FfsObj.FdName.upper()) > > > > else: > > > > - self.__GetReferencedFdFvTupleFromSection(FfsObj, RefFdList, RefFvList) > > > > + self._GetReferencedFdFvTupleFromSection(FfsObj, RefFdList, RefFvList) > > > > > > > > - ## __GetReferencedFdFvTupleFromSection() method > > > > + ## _GetReferencedFdFvTupleFromSection() method > > > > # > > > > # Get FD and FV list referenced by a FFS section > > > > # > > > > @@ -4627,19 +4537,19 @@ class FdfParser: > > > > # @param FdList referenced FD by section > > > > # @param FvList referenced FV by section > > > > # > > > > - def __GetReferencedFdFvTupleFromSection(self, FfsFile, FdList = [], FvList = []): > > > > + def _GetReferencedFdFvTupleFromSection(self, FfsFile, FdList = [], FvList = []): > > > > SectionStack = [] > > > > SectionStack.extend(FfsFile.SectionList) > > > > while SectionStack != []: > > > > SectionObj = SectionStack.pop() > > > > - 
if isinstance(SectionObj, FvImageSection.FvImageSection): > > > > + if isinstance(SectionObj, FvImageSection): > > > > if SectionObj.FvName is not None and SectionObj.FvName.upper() not in FvList: > > > > FvList.append(SectionObj.FvName.upper()) > > > > if SectionObj.Fv is not None and SectionObj.Fv.UiFvName is not None and SectionObj.Fv.UiFvName.upper() not > > in > > > FvList: > > > > FvList.append(SectionObj.Fv.UiFvName.upper()) > > > > - self.__GetReferencedFdFvTuple(SectionObj.Fv, FdList, FvList) > > > > + self._GetReferencedFdFvTuple(SectionObj.Fv, FdList, FvList) > > > > > > > > - if isinstance(SectionObj, CompressSection.CompressSection) or isinstance(SectionObj, GuidSection.GuidSection): > > > > + if isinstance(SectionObj, CompressSection) or isinstance(SectionObj, GuidSection): > > > > SectionStack.extend(SectionObj.SectionList) > > > > > > > > ## CycleReferenceCheck() method > > > > @@ -4662,7 +4572,7 @@ class FdfParser: > > > > FdAnalyzedList = [] > > > > > > > > Index = 0 > > > > - while RefFvStack != [] and Index < MaxLength: > > > > + while RefFvStack and Index < MaxLength: > > > > Index = Index + 1 > > > > FvNameFromStack = RefFvStack.pop() > > > > if FvNameFromStack.upper() in self.Profile.FvDict: > > > > @@ -4672,14 +4582,14 @@ class FdfParser: > > > > > > > > RefFdList = [] > > > > RefFvList = [] > > > > - self.__GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) > > > > + self._GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) > > > > > > > > for RefFdName in RefFdList: > > > > if RefFdName in FdAnalyzedList: > > > > continue > > > > > > > > LogStr += "FV %s contains FD %s\n" % (FvNameFromStack, RefFdName) > > > > - FvInFdList = self.__GetFvInFd(RefFdName) > > > > + FvInFdList = self._GetFvInFd(RefFdName) > > > > if FvInFdList != []: > > > > for FvNameInFd in FvInFdList: > > > > LogStr += "FD %s contains FV %s\n" % (RefFdName, FvNameInFd) > > > > @@ -4725,7 +4635,7 @@ class FdfParser: > > > > > > > > RefFvList = [] > > > > RefFdList = [] > > > > - self.__GetReferencedFdCapTuple(CapObj, RefFdList, RefFvList) > > > > + self._GetReferencedFdCapTuple(CapObj, RefFdList, RefFvList) > > > > > > > > FvListLength = 0 > > > > FdListLength = 0 > > > > @@ -4735,7 +4645,7 @@ class FdfParser: > > > > continue > > > > > > > > LogStr += "Capsule %s contains FD %s\n" % (CapNameFromStack, RefFdName) > > > > - CapInFdList = self.__GetCapInFd(RefFdName) > > > > + CapInFdList = self._GetCapInFd(RefFdName) > > > > if CapInFdList != []: > > > > for CapNameInFd in CapInFdList: > > > > LogStr += "FD %s contains Capsule %s\n" % (RefFdName, CapNameInFd) > > > > @@ -4746,7 +4656,7 @@ class FdfParser: > > > > EdkLogger.info(LogStr) > > > > return True > > > > > > > > - FvInFdList = self.__GetFvInFd(RefFdName) > > > > + FvInFdList = self._GetFvInFd(RefFdName) > > > > if FvInFdList != []: > > > > for FvNameInFd in FvInFdList: > > > > LogStr += "FD %s contains FV %s\n" % (RefFdName, FvNameInFd) > > > > @@ -4767,7 +4677,7 @@ class FdfParser: > > > > FvObj = self.Profile.FvDict[RefFvName.upper()] > > > > else: > > > > continue > > > > - self.__GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) > > > > + self._GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) > > > > FvAnalyzedList.append(RefFvName) > > > > > > > > return False > > > > diff --git a/BaseTools/Source/Python/GenFds/Ffs.py b/BaseTools/Source/Python/GenFds/Ffs.py > > > > index 598b2d023124..4c1b43572964 100644 > > > > --- a/BaseTools/Source/Python/GenFds/Ffs.py > > > > +++ b/BaseTools/Source/Python/GenFds/Ffs.py > > > > @@ -15,53 +15,41 
@@ > > > > ## > > > > # Import Modules > > > > # > > > > -from CommonDataClass.FdfClass import FDClassObject > > > > from Common.DataType import * > > > > > > > > -## generate FFS > > > > -# > > > > -# > > > > -class Ffs(FDClassObject): > > > > - # mapping between FILE type in FDF and file type for GenFfs > > > > - FdfFvFileTypeToFileType = { > > > > - SUP_MODULE_SEC : 'EFI_FV_FILETYPE_SECURITY_CORE', > > > > - SUP_MODULE_PEI_CORE : 'EFI_FV_FILETYPE_PEI_CORE', > > > > - SUP_MODULE_PEIM : 'EFI_FV_FILETYPE_PEIM', > > > > - SUP_MODULE_DXE_CORE : 'EFI_FV_FILETYPE_DXE_CORE', > > > > - 'FREEFORM' : 'EFI_FV_FILETYPE_FREEFORM', > > > > - 'DRIVER' : 'EFI_FV_FILETYPE_DRIVER', > > > > - 'APPLICATION' : 'EFI_FV_FILETYPE_APPLICATION', > > > > - 'FV_IMAGE' : 'EFI_FV_FILETYPE_FIRMWARE_VOLUME_IMAGE', > > > > - 'RAW' : 'EFI_FV_FILETYPE_RAW', > > > > - 'PEI_DXE_COMBO' : 'EFI_FV_FILETYPE_COMBINED_PEIM_DRIVER', > > > > - 'SMM' : 'EFI_FV_FILETYPE_SMM', > > > > - SUP_MODULE_SMM_CORE : 'EFI_FV_FILETYPE_SMM_CORE', > > > > - SUP_MODULE_MM_STANDALONE : 'EFI_FV_FILETYPE_MM_STANDALONE', > > > > - SUP_MODULE_MM_CORE_STANDALONE : 'EFI_FV_FILETYPE_MM_CORE_STANDALONE' > > > > - } > > > > +# mapping between FILE type in FDF and file type for GenFfs > > > > +FdfFvFileTypeToFileType = { > > > > + SUP_MODULE_SEC : 'EFI_FV_FILETYPE_SECURITY_CORE', > > > > + SUP_MODULE_PEI_CORE : 'EFI_FV_FILETYPE_PEI_CORE', > > > > + SUP_MODULE_PEIM : 'EFI_FV_FILETYPE_PEIM', > > > > + SUP_MODULE_DXE_CORE : 'EFI_FV_FILETYPE_DXE_CORE', > > > > + 'FREEFORM' : 'EFI_FV_FILETYPE_FREEFORM', > > > > + 'DRIVER' : 'EFI_FV_FILETYPE_DRIVER', > > > > + 'APPLICATION' : 'EFI_FV_FILETYPE_APPLICATION', > > > > + 'FV_IMAGE' : 'EFI_FV_FILETYPE_FIRMWARE_VOLUME_IMAGE', > > > > + 'RAW' : 'EFI_FV_FILETYPE_RAW', > > > > + 'PEI_DXE_COMBO' : 'EFI_FV_FILETYPE_COMBINED_PEIM_DRIVER', > > > > + 'SMM' : 'EFI_FV_FILETYPE_SMM', > > > > + SUP_MODULE_SMM_CORE : 'EFI_FV_FILETYPE_SMM_CORE', > > > > + SUP_MODULE_MM_STANDALONE : 'EFI_FV_FILETYPE_MM_STANDALONE', > > > > + SUP_MODULE_MM_CORE_STANDALONE : 'EFI_FV_FILETYPE_MM_CORE_STANDALONE' > > > > +} > > > > > > > > - # mapping between section type in FDF and file suffix > > > > - SectionSuffix = { > > > > - BINARY_FILE_TYPE_PE32 : '.pe32', > > > > - BINARY_FILE_TYPE_PIC : '.pic', > > > > - BINARY_FILE_TYPE_TE : '.te', > > > > - BINARY_FILE_TYPE_DXE_DEPEX : '.dpx', > > > > - 'VERSION' : '.ver', > > > > - BINARY_FILE_TYPE_UI : '.ui', > > > > - 'COMPAT16' : '.com16', > > > > - 'RAW' : '.raw', > > > > - 'FREEFORM_SUBTYPE_GUID': '.guid', > > > > - 'SUBTYPE_GUID' : '.guid', > > > > - 'FV_IMAGE' : 'fv.sec', > > > > - 'COMPRESS' : '.com', > > > > - 'GUIDED' : '.guided', > > > > - BINARY_FILE_TYPE_PEI_DEPEX : '.dpx', > > > > - BINARY_FILE_TYPE_SMM_DEPEX : '.dpx' > > > > - } > > > > - > > > > - ## The constructor > > > > - # > > > > - # @param self The object pointer > > > > - # > > > > - def __init__(self): > > > > - FfsClassObject.__init__(self) > > > > +# mapping between section type in FDF and file suffix > > > > +SectionSuffix = { > > > > + BINARY_FILE_TYPE_PE32 : '.pe32', > > > > + BINARY_FILE_TYPE_PIC : '.pic', > > > > + BINARY_FILE_TYPE_TE : '.te', > > > > + BINARY_FILE_TYPE_DXE_DEPEX : '.dpx', > > > > + 'VERSION' : '.ver', > > > > + BINARY_FILE_TYPE_UI : '.ui', > > > > + 'COMPAT16' : '.com16', > > > > + 'RAW' : '.raw', > > > > + 'FREEFORM_SUBTYPE_GUID': '.guid', > > > > + 'SUBTYPE_GUID' : '.guid', > > > > + 'FV_IMAGE' : 'fv.sec', > > > > + 'COMPRESS' : '.com', > > > > + 'GUIDED' : '.guided', > > > > + BINARY_FILE_TYPE_PEI_DEPEX : 
'.dpx', > > > > + BINARY_FILE_TYPE_SMM_DEPEX : '.dpx' > > > > +} > > > > diff --git a/BaseTools/Source/Python/GenFds/FfsFileStatement.py b/BaseTools/Source/Python/GenFds/FfsFileStatement.py > > > > index 4c35aac76a00..7479efff04c7 100644 > > > > --- a/BaseTools/Source/Python/GenFds/FfsFileStatement.py > > > > +++ b/BaseTools/Source/Python/GenFds/FfsFileStatement.py > > > > @@ -16,26 +16,22 @@ > > > > # Import Modules > > > > # > > > > from __future__ import absolute_import > > > > -from . import Ffs > > > > -from . import Rule > > > > -import Common.LongFilePathOs as os > > > > from io import BytesIO > > > > -import subprocess > > > > - > > > > -from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > +from struct import pack > > > > from CommonDataClass.FdfClass import FileStatementClassObject > > > > from Common import EdkLogger > > > > -from Common.BuildToolError import * > > > > -from Common.Misc import GuidStructureByteArrayToGuidString > > > > +from Common.BuildToolError import GENFDS_ERROR > > > > +from Common.Misc import GuidStructureByteArrayToGuidString, SaveFileOnChange > > > > +import Common.LongFilePathOs as os > > > > from .GuidSection import GuidSection > > > > from .FvImageSection import FvImageSection > > > > -from Common.Misc import SaveFileOnChange > > > > -from struct import * > > > > +from .Ffs import FdfFvFileTypeToFileType > > > > +from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > > > > > ## generate FFS from FILE > > > > # > > > > # > > > > -class FileStatement (FileStatementClassObject) : > > > > +class FileStatement (FileStatementClassObject): > > > > ## The constructor > > > > # > > > > # @param self The object pointer > > > > @@ -60,7 +56,7 @@ class FileStatement (FileStatementClassObject) : > > > > # > > > > def GenFfs(self, Dict = {}, FvChildAddr=[], FvParentAddr=None, IsMakefile=False, FvName=None): > > > > > > > > - if self.NameGuid is not None and self.NameGuid.startswith('PCD('): > > > > + if self.NameGuid and self.NameGuid.startswith('PCD('): > > > > PcdValue = GenFdsGlobalVariable.GetPcdValue(self.NameGuid) > > > > if len(PcdValue) == 0: > > > > EdkLogger.error("GenFds", GENFDS_ERROR, '%s NOT defined.' \ > > > > @@ -82,7 +78,7 @@ class FileStatement (FileStatementClassObject) : > > > > > > > > Dict.update(self.DefineVarDict) > > > > SectionAlignments = None > > > > - if self.FvName is not None : > > > > + if self.FvName: > > > > Buffer = BytesIO('') > > > > if self.FvName.upper() not in GenFdsGlobalVariable.FdfParser.Profile.FvDict: > > > > EdkLogger.error("GenFds", GENFDS_ERROR, "FV (%s) is NOT described in FDF file!" % (self.FvName)) > > > > @@ -90,14 +86,14 @@ class FileStatement (FileStatementClassObject) : > > > > FileName = Fv.AddToBuffer(Buffer) > > > > SectionFiles = [FileName] > > > > > > > > - elif self.FdName is not None: > > > > + elif self.FdName: > > > > if self.FdName.upper() not in GenFdsGlobalVariable.FdfParser.Profile.FdDict: > > > > EdkLogger.error("GenFds", GENFDS_ERROR, "FD (%s) is NOT described in FDF file!" 
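(For the Ffs.py hunks above, the pattern is just moving constants off a never-instantiated class onto the module itself. A simplified sketch with a shortened mapping; the real tables are the ones quoted above:

    # old shape: callers had to spell Ffs.Ffs.SectionSuffix[...]
    class Ffs(object):
        SectionSuffix = {'GUIDED': '.guided', 'COMPRESS': '.com'}

    # new shape: a plain module-level dict, imported directly,
    # e.g. "from .Ffs import SectionSuffix"
    SectionSuffix = {'GUIDED': '.guided', 'COMPRESS': '.com'}

    assert Ffs.SectionSuffix['GUIDED'] == SectionSuffix['GUIDED']

Same data, one less level of indirection at every call site.)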
% (self.FdName)) > > > > Fd = GenFdsGlobalVariable.FdfParser.Profile.FdDict.get(self.FdName.upper()) > > > > FileName = Fd.GenFd() > > > > SectionFiles = [FileName] > > > > > > > > - elif self.FileName is not None: > > > > + elif self.FileName: > > > > if hasattr(self, 'FvFileType') and self.FvFileType == 'RAW': > > > > if isinstance(self.FileName, list) and isinstance(self.SubAlignment, list) and len(self.FileName) == > > > len(self.SubAlignment): > > > > FileContent = '' > > > > @@ -111,7 +107,7 @@ class FileStatement (FileStatementClassObject) : > > > > Content = f.read() > > > > f.close() > > > > AlignValue = 1 > > > > - if self.SubAlignment[Index] is not None: > > > > + if self.SubAlignment[Index]: > > > > AlignValue = GenFdsGlobalVariable.GetAlignment(self.SubAlignment[Index]) > > > > if AlignValue > MaxAlignValue: > > > > MaxAlignIndex = Index > > > > @@ -143,7 +139,7 @@ class FileStatement (FileStatementClassObject) : > > > > SectionFiles = [] > > > > Index = 0 > > > > SectionAlignments = [] > > > > - for section in self.SectionList : > > > > + for section in self.SectionList: > > > > Index = Index + 1 > > > > SecIndex = '%d' %Index > > > > # process the inside FvImage from FvSection or GuidSection > > > > @@ -152,7 +148,7 @@ class FileStatement (FileStatementClassObject) : > > > > section.FvAddr = FvChildAddr.pop(0) > > > > elif isinstance(section, GuidSection): > > > > section.FvAddr = FvChildAddr > > > > - if FvParentAddr is not None and isinstance(section, GuidSection): > > > > + if FvParentAddr and isinstance(section, GuidSection): > > > > section.FvParentAddr = FvParentAddr > > > > > > > > if self.KeepReloc == False: > > > > @@ -168,7 +164,7 @@ class FileStatement (FileStatementClassObject) : > > > > # > > > > FfsFileOutput = os.path.join(OutputDir, self.NameGuid + '.ffs') > > > > GenFdsGlobalVariable.GenerateFfs(FfsFileOutput, SectionFiles, > > > > - Ffs.Ffs.FdfFvFileTypeToFileType.get(self.FvFileType), > > > > + FdfFvFileTypeToFileType.get(self.FvFileType), > > > > self.NameGuid, > > > > Fixed=self.Fixed, > > > > CheckSum=self.CheckSum, > > > > @@ -177,6 +173,3 @@ class FileStatement (FileStatementClassObject) : > > > > ) > > > > > > > > return FfsFileOutput > > > > - > > > > - > > > > - > > > > diff --git a/BaseTools/Source/Python/GenFds/FfsInfStatement.py b/BaseTools/Source/Python/GenFds/FfsInfStatement.py > > > > index 6149bc81b4ef..c1073c96e9aa 100644 > > > > --- a/BaseTools/Source/Python/GenFds/FfsInfStatement.py > > > > +++ b/BaseTools/Source/Python/GenFds/FfsInfStatement.py > > > > @@ -22,7 +22,7 @@ import Common.LongFilePathOs as os > > > > from io import BytesIO > > > > from struct import * > > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > -from . import Ffs > > > > +from .Ffs import SectionSuffix,FdfFvFileTypeToFileType > > > > import subprocess > > > > import sys > > > > from . 
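(Note on the "is not None" -> plain truth-value changes in this hunk: the two checks are not strictly identical, since an empty string or empty list is also false. A small sketch with made-up values:

    for NameGuid in (None, '', 'PCD(Token.Name)'):
        old_style = NameGuid is not None    # True for '' as well
        new_style = bool(NameGuid)          # False for both None and ''
        print(repr(NameGuid), old_style, new_style)

The shorter form is expected to behave the same here because an empty NameGuid/FdName/FileName is never meaningful, but it is worth keeping in mind when reviewing these conversions.)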
import Section > > > > @@ -760,7 +760,7 @@ class FfsInfStatement(FfsInfStatementClassObject): > > > > > > > > SecNum = '%d' %Index > > > > GenSecOutputFile= self.__ExtendMacro__(Rule.NameGuid) + \ > > > > - Ffs.Ffs.SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum > > > > + SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum > > > > Index = Index + 1 > > > > OutputFile = os.path.join(self.OutputPath, GenSecOutputFile) > > > > File = GenFdsGlobalVariable.MacroExtend(File, Dict, self.CurrentArch) > > > > @@ -803,7 +803,7 @@ class FfsInfStatement(FfsInfStatementClassObject): > > > > else: > > > > SecNum = '%d' %Index > > > > GenSecOutputFile= self.__ExtendMacro__(Rule.NameGuid) + \ > > > > - Ffs.Ffs.SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum > > > > + SectionSuffix[SectionType] + SUP_MODULE_SEC + SecNum > > > > OutputFile = os.path.join(self.OutputPath, GenSecOutputFile) > > > > GenSecInputFile = GenFdsGlobalVariable.MacroExtend(GenSecInputFile, Dict, self.CurrentArch) > > > > > > > > @@ -882,7 +882,7 @@ class FfsInfStatement(FfsInfStatementClassObject): > > > > self.ModuleGuid = RegistryGuidStr > > > > > > > > GenFdsGlobalVariable.GenerateFfs(FfsOutput, InputSection, > > > > - Ffs.Ffs.FdfFvFileTypeToFileType[Rule.FvFileType], > > > > + FdfFvFileTypeToFileType[Rule.FvFileType], > > > > self.ModuleGuid, Fixed=Rule.Fixed, > > > > CheckSum=Rule.CheckSum, Align=Rule.Alignment, > > > > SectionAlign=SectionAlignments, > > > > @@ -1055,7 +1055,7 @@ class FfsInfStatement(FfsInfStatementClassObject): > > > > > > > > FfsOutput = os.path.join( self.OutputPath, self.ModuleGuid + '.ffs') > > > > GenFdsGlobalVariable.GenerateFfs(FfsOutput, InputFile, > > > > - Ffs.Ffs.FdfFvFileTypeToFileType[Rule.FvFileType], > > > > + FdfFvFileTypeToFileType[Rule.FvFileType], > > > > self.ModuleGuid, Fixed=Rule.Fixed, > > > > CheckSum=Rule.CheckSum, Align=Rule.Alignment, > > > > SectionAlign=Alignments, > > > > diff --git a/BaseTools/Source/Python/GenFds/Fv.py b/BaseTools/Source/Python/GenFds/Fv.py > > > > index c66fc383278d..30525bd11fcc 100644 > > > > --- a/BaseTools/Source/Python/GenFds/Fv.py > > > > +++ b/BaseTools/Source/Python/GenFds/Fv.py > > > > @@ -1,4 +1,3 @@ > > > > -from __future__ import absolute_import > > > > ## @file > > > > # process FV generation > > > > # > > > > @@ -16,16 +15,13 @@ from __future__ import absolute_import > > > > ## > > > > # Import Modules > > > > # > > > > +from __future__ import absolute_import > > > > import Common.LongFilePathOs as os > > > > import subprocess > > > > from io import BytesIO > > > > from struct import * > > > > - > > > > -from . import Ffs > > > > -from . import AprioriSection > > > > from . 
import FfsFileStatement > > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > -from CommonDataClass.FdfClass import FvClassObject > > > > from Common.Misc import SaveFileOnChange, PackGUID > > > > from Common.LongFilePathSupport import CopyLongFilePath > > > > from Common.LongFilePathSupport import OpenLongFilePath as open > > > > @@ -36,13 +32,25 @@ FV_UI_EXT_ENTY_GUID = 'A67DF1FA-8DE8-4E98-AF09-4BDF2EFFBC7C' > > > > ## generate FV > > > > # > > > > # > > > > -class FV (FvClassObject): > > > > +class FV (object): > > > > ## The constructor > > > > # > > > > # @param self The object pointer > > > > # > > > > def __init__(self): > > > > - FvClassObject.__init__(self) > > > > + self.UiFvName = None > > > > + self.CreateFileName = None > > > > + self.BlockSizeList = [] > > > > + self.DefineVarDict = {} > > > > + self.SetVarDict = {} > > > > + self.FvAlignment = None > > > > + self.FvAttributeDict = {} > > > > + self.FvNameGuid = None > > > > + self.FvNameString = None > > > > + self.AprioriSectionList = [] > > > > + self.FfsList = [] > > > > + self.BsBaseAddress = None > > > > + self.RtBaseAddress = None > > > > self.FvInfFile = None > > > > self.FvAddressFile = None > > > > self.BaseAddress = None > > > > @@ -68,7 +76,7 @@ class FV (FvClassObject): > > > > # @param MacroDict macro value pair > > > > # @retval string Generated FV file path > > > > # > > > > - def AddToBuffer (self, Buffer, BaseAddress=None, BlockSize= None, BlockNum=None, ErasePloarity='1', VtfDict=None, > > MacroDict > > > = {}, Flag=False) : > > > > + def AddToBuffer (self, Buffer, BaseAddress=None, BlockSize= None, BlockNum=None, ErasePloarity='1', VtfDict=None, > > > MacroDict = {}, Flag=False): > > > > > > > > if BaseAddress is None and self.UiFvName.upper() + 'fv' in GenFdsGlobalVariable.ImageBinDict: > > > > return GenFdsGlobalVariable.ImageBinDict[self.UiFvName.upper() + 'fv'] > > > > @@ -96,7 +104,7 @@ class FV (FvClassObject): > > > > if self.FvBaseAddress is not None: > > > > BaseAddress = self.FvBaseAddress > > > > if not Flag: > > > > - self.__InitializeInf__(BaseAddress, BlockSize, BlockNum, ErasePloarity, VtfDict) > > > > + self._InitializeInf(BaseAddress, BlockSize, BlockNum, ErasePloarity, VtfDict) > > > > # > > > > # First Process the Apriori section > > > > # > > > > @@ -114,7 +122,7 @@ class FV (FvClassObject): > > > > TAB_LINE_BREAK) > > > > > > > > # Process Modules in FfsList > > > > - for FfsFile in self.FfsList : > > > > + for FfsFile in self.FfsList: > > > > if Flag: > > > > if isinstance(FfsFile, FfsFileStatement.FileStatement): > > > > continue > > > > @@ -177,7 +185,7 @@ class FV (FvClassObject): > > > > > > > > if FvChildAddr != []: > > > > # Update Ffs again > > > > - for FfsFile in self.FfsList : > > > > + for FfsFile in self.FfsList: > > > > FileName = FfsFile.GenFfs(MacroDict, FvChildAddr, BaseAddress, IsMakefile=Flag, > > > FvName=self.UiFvName) > > > > > > > > if GenFdsGlobalVariable.LargeFileInFvFlags[-1]: > > > > @@ -252,7 +260,7 @@ class FV (FvClassObject): > > > > return True > > > > return False > > > > > > > > - ## __InitializeInf__() > > > > + ## _InitializeInf() > > > > # > > > > # Initilize the inf file to create FV > > > > # > > > > @@ -263,7 +271,7 @@ class FV (FvClassObject): > > > > # @param ErasePolarity Flash erase polarity > > > > # @param VtfDict VTF objects > > > > # > > > > - def __InitializeInf__ (self, BaseAddress = None, BlockSize= None, BlockNum = None, ErasePloarity='1', VtfDict=None) : > > > > + def _InitializeInf (self, BaseAddress = None, 
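(For the FV(FvClassObject) -> FV(object) change above, the refactor is simply folding a data-only base class with a single user into that child. A toy sketch, not the real field list:

    # before: one subclass, base class only holds defaults
    class FvClassObject(object):
        def __init__(self):
            self.UiFvName = None
            self.FfsList = []

    class FVOld(FvClassObject):
        def __init__(self):
            FvClassObject.__init__(self)
            self.FvInfFile = None

    # after: one class owns all of its attributes directly
    class FV(object):
        def __init__(self):
            self.UiFvName = None
            self.FfsList = []
            self.FvInfFile = None

    assert vars(FV()) == vars(FVOld())

Behavior is unchanged; the indirection through CommonDataClass just goes away.)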
BlockSize= None, BlockNum = None, ErasePloarity='1', VtfDict=None): > > > > # > > > > # Create FV inf file > > > > # > > > > @@ -275,7 +283,7 @@ class FV (FvClassObject): > > > > # Add [Options] > > > > # > > > > self.FvInfFile.writelines("[options]" + TAB_LINE_BREAK) > > > > - if BaseAddress is not None : > > > > + if BaseAddress is not None: > > > > self.FvInfFile.writelines("EFI_BASE_ADDRESS = " + \ > > > > BaseAddress + \ > > > > TAB_LINE_BREAK) > > > > @@ -294,7 +302,7 @@ class FV (FvClassObject): > > > > #set default block size is 1 > > > > self.FvInfFile.writelines("EFI_BLOCK_SIZE = 0x1" + TAB_LINE_BREAK) > > > > > > > > - for BlockSize in self.BlockSizeList : > > > > + for BlockSize in self.BlockSizeList: > > > > if BlockSize[0] is not None: > > > > self.FvInfFile.writelines("EFI_BLOCK_SIZE = " + \ > > > > '0x%X' %BlockSize[0] + \ > > > > @@ -320,9 +328,9 @@ class FV (FvClassObject): > > > > ' %s' %ErasePloarity + \ > > > > TAB_LINE_BREAK) > > > > if not (self.FvAttributeDict is None): > > > > - for FvAttribute in self.FvAttributeDict.keys() : > > > > + for FvAttribute in self.FvAttributeDict.keys(): > > > > if FvAttribute == "FvUsedSizeEnable": > > > > - if self.FvAttributeDict[FvAttribute].upper() in ('TRUE', '1') : > > > > + if self.FvAttributeDict[FvAttribute].upper() in ('TRUE', '1'): > > > > self.UsedSizeEnable = True > > > > continue > > > > self.FvInfFile.writelines("EFI_" + \ > > > > @@ -365,8 +373,8 @@ class FV (FvClassObject): > > > > Guid = FV_UI_EXT_ENTY_GUID.split('-') > > > > # > > > > # Layout: > > > > - # EFI_FIRMWARE_VOLUME_EXT_ENTRY : size 4 > > > > - # GUID : size 16 > > > > + # EFI_FIRMWARE_VOLUME_EXT_ENTRY: size 4 > > > > + # GUID: size 16 > > > > # FV UI name > > > > # > > > > Buffer += (pack('HH', (FvUiLen + 16 + 4), 0x0002) > > > > @@ -422,7 +430,6 @@ class FV (FvClassObject): > > > > FvExtHeaderFileName + \ > > > > TAB_LINE_BREAK) > > > > > > > > - > > > > # > > > > # Add [Files] > > > > # > > > > diff --git a/BaseTools/Source/Python/GenFds/FvImageSection.py b/BaseTools/Source/Python/GenFds/FvImageSection.py > > > > index 04556fc87099..d6e1f3315b7e 100644 > > > > --- a/BaseTools/Source/Python/GenFds/FvImageSection.py > > > > +++ b/BaseTools/Source/Python/GenFds/FvImageSection.py > > > > @@ -18,7 +18,7 @@ > > > > from __future__ import absolute_import > > > > from . 
import Section > > > > from io import BytesIO > > > > -from .Ffs import Ffs > > > > +from .Ffs import SectionSuffix > > > > import subprocess > > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > import Common.LongFilePathOs as os > > > > @@ -76,7 +76,7 @@ class FvImageSection(FvImageSectionClassObject): > > > > if FvAlignmentValue > MaxFvAlignment: > > > > MaxFvAlignment = FvAlignmentValue > > > > > > > > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > > > Ffs.SectionSuffix.get("FV_IMAGE")) > > > > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + > > > SectionSuffix.get("FV_IMAGE")) > > > > GenFdsGlobalVariable.GenerateSection(OutputFile, [FvFileName], > > 'EFI_SECTION_FIRMWARE_VOLUME_IMAGE', > > > IsMakefile=IsMakefile) > > > > OutputFileList.append(OutputFile) > > > > > > > > @@ -146,7 +146,7 @@ class FvImageSection(FvImageSectionClassObject): > > > > # > > > > # Prepare the parameter of GenSection > > > > # > > > > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + > > > Ffs.SectionSuffix.get("FV_IMAGE")) > > > > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + > > SectionSuffix.get("FV_IMAGE")) > > > > GenFdsGlobalVariable.GenerateSection(OutputFile, [FvFileName], 'EFI_SECTION_FIRMWARE_VOLUME_IMAGE', > > > IsMakefile=IsMakefile) > > > > OutputFileList.append(OutputFile) > > > > > > > > diff --git a/BaseTools/Source/Python/GenFds/GenFds.py b/BaseTools/Source/Python/GenFds/GenFds.py > > > > index 9dec9c5eb57d..0c8091b79841 100644 > > > > --- a/BaseTools/Source/Python/GenFds/GenFds.py > > > > +++ b/BaseTools/Source/Python/GenFds/GenFds.py > > > > @@ -17,34 +17,30 @@ > > > > # > > > > from __future__ import print_function > > > > from __future__ import absolute_import > > > > +from re import compile > > > > from optparse import OptionParser > > > > -import sys > > > > -import Common.LongFilePathOs as os > > > > -import linecache > > > > -from . import FdfParser > > > > -import Common.BuildToolError as BuildToolError > > > > -from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > -from Workspace.WorkspaceDatabase import WorkspaceDatabase > > > > -from Workspace.BuildClassObject import PcdClassObject > > > > -from . import RuleComplexFile > > > > -from .EfiSection import EfiSection > > > > +from sys import exit > > > > +from glob import glob > > > > +from struct import unpack > > > > +from linecache import getlines > > > > from io import BytesIO > > > > -import Common.TargetTxtClassObject as TargetTxtClassObject > > > > -import Common.ToolDefClassObject as ToolDefClassObject > > > > + > > > > +import Common.LongFilePathOs as os > > > > +from Common.TargetTxtClassObject import TargetTxtClassObject > > > > from Common.DataType import * > > > > import Common.GlobalData as GlobalData > > > > from Common import EdkLogger > > > > -from Common.StringUtils import * > > > > -from Common.Misc import DirCache, PathClass > > > > -from Common.Misc import SaveFileOnChange > > > > -from Common.Misc import ClearDuplicatedInf > > > > -from Common.Misc import GuidStructureStringToGuidString > > > > +from Common.StringUtils import NormPath > > > > +from Common.Misc import DirCache, PathClass, GuidStructureStringToGuidString > > > > +from Common.Misc import SaveFileOnChange, ClearDuplicatedInf > > > > from Common.BuildVersion import gBUILD_VERSION > > > > from Common.MultipleWorkspace import MultipleWorkspace as mws > > > > -from . 
import FfsFileStatement > > > > -import glob > > > > -from struct import unpack > > > > -from Common.GlobalData import gGuidPattern > > > > +from Common.BuildToolError import FatalError, GENFDS_ERROR, CODE_ERROR, FORMAT_INVALID, RESOURCE_NOT_AVAILABLE, > > > FILE_NOT_FOUND, OPTION_MISSING, FORMAT_NOT_SUPPORTED,OPTION_VALUE_INVALID > > > > +from Workspace.WorkspaceDatabase import WorkspaceDatabase > > > > + > > > > +from .FdfParser import FdfParser, Warning > > > > +from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > +from .FfsFileStatement import FileStatement > > > > > > > > ## Version and Copyright > > > > versionNumber = "1.0" + ' ' + gBUILD_VERSION > > > > @@ -71,22 +67,22 @@ def main(): > > > > > > > > EdkLogger.Initialize() > > > > try: > > > > - if Options.verbose is not None: > > > > + if Options.verbose: > > > > EdkLogger.SetLevel(EdkLogger.VERBOSE) > > > > GenFdsGlobalVariable.VerboseMode = True > > > > > > > > - if Options.FixedAddress is not None: > > > > + if Options.FixedAddress: > > > > GenFdsGlobalVariable.FixedLoadAddress = True > > > > > > > > - if Options.quiet is not None: > > > > + if Options.quiet: > > > > EdkLogger.SetLevel(EdkLogger.QUIET) > > > > - if Options.debug is not None: > > > > + if Options.debug: > > > > EdkLogger.SetLevel(Options.debug + 1) > > > > GenFdsGlobalVariable.DebugLevel = Options.debug > > > > else: > > > > EdkLogger.SetLevel(EdkLogger.INFO) > > > > > > > > - if (Options.Workspace is None): > > > > + if not Options.Workspace: > > > > EdkLogger.error("GenFds", OPTION_MISSING, "WORKSPACE not defined", > > > > ExtraData="Please use '-w' switch to pass it or set the WORKSPACE environment variable.") > > > > elif not os.path.exists(Options.Workspace): > > > > @@ -97,7 +93,7 @@ def main(): > > > > GenFdsGlobalVariable.WorkSpaceDir = Workspace > > > > if 'EDK_SOURCE' in os.environ: > > > > GenFdsGlobalVariable.EdkSourceDir = os.path.normcase(os.environ['EDK_SOURCE']) > > > > - if (Options.debug): > > > > + if Options.debug: > > > > GenFdsGlobalVariable.VerboseLogger("Using Workspace:" + Workspace) > > > > if Options.GenfdsMultiThread: > > > > GenFdsGlobalVariable.EnableGenfdsMultiThread = True > > > > @@ -107,7 +103,7 @@ def main(): > > > > PackagesPath = os.getenv("PACKAGES_PATH") > > > > mws.setWs(GenFdsGlobalVariable.WorkSpaceDir, PackagesPath) > > > > > > > > - if (Options.filename): > > > > + if Options.filename: > > > > FdfFilename = Options.filename > > > > FdfFilename = GenFdsGlobalVariable.ReplaceWorkspaceMacro(FdfFilename) > > > > > > > > @@ -123,13 +119,13 @@ def main(): > > > > else: > > > > EdkLogger.error("GenFds", OPTION_MISSING, "Missing FDF filename") > > > > > > > > - if (Options.BuildTarget): > > > > + if Options.BuildTarget: > > > > GenFdsGlobalVariable.TargetName = Options.BuildTarget > > > > > > > > - if (Options.ToolChain): > > > > + if Options.ToolChain: > > > > GenFdsGlobalVariable.ToolChainTag = Options.ToolChain > > > > > > > > - if (Options.activePlatform): > > > > + if Options.activePlatform: > > > > ActivePlatform = Options.activePlatform > > > > ActivePlatform = GenFdsGlobalVariable.ReplaceWorkspaceMacro(ActivePlatform) > > > > > > > > @@ -139,15 +135,15 @@ def main(): > > > > if not os.path.isabs (ActivePlatform): > > > > ActivePlatform = mws.join(GenFdsGlobalVariable.WorkSpaceDir, ActivePlatform) > > > > > > > > - if not os.path.exists(ActivePlatform) : > > > > + if not os.path.exists(ActivePlatform): > > > > EdkLogger.error("GenFds", FILE_NOT_FOUND, "ActivePlatform doesn't exist!") > > > > else: > > > > 
EdkLogger.error("GenFds", OPTION_MISSING, "Missing active platform") > > > > > > > > - GlobalData.BuildOptionPcd = Options.OptionPcd if Options.OptionPcd else {} > > > > + GlobalData.BuildOptionPcd = Options.OptionPcd if Options.OptionPcd else {} > > > > GenFdsGlobalVariable.ActivePlatform = PathClass(NormPath(ActivePlatform)) > > > > > > > > - if (Options.ConfDirectory): > > > > + if Options.ConfDirectory: > > > > # Get alternate Conf location, if it is absolute, then just use the absolute directory name > > > > ConfDirectoryPath = os.path.normpath(Options.ConfDirectory) > > > > if ConfDirectoryPath.startswith('"'): > > > > @@ -169,7 +165,7 @@ def main(): > > > > GlobalData.gConfDirectory = GenFdsGlobalVariable.ConfDir > > > > BuildConfigurationFile = os.path.normpath(os.path.join(ConfDirectoryPath, "target.txt")) > > > > if os.path.isfile(BuildConfigurationFile) == True: > > > > - TargetTxt = TargetTxtClassObject.TargetTxtClassObject() > > > > + TargetTxt = TargetTxtClassObject() > > > > TargetTxt.LoadTargetTxtFile(BuildConfigurationFile) > > > > # if no build target given in command line, get it from target.txt > > > > if not GenFdsGlobalVariable.TargetName: > > > > @@ -237,10 +233,9 @@ def main(): > > > > GlobalData.gAllFiles = DirCache(Workspace) > > > > GlobalData.gWorkspace = Workspace > > > > > > > > - if (Options.archList) : > > > > + if Options.archList: > > > > ArchList = Options.archList.split(',') > > > > else: > > > > -# EdkLogger.error("GenFds", OPTION_MISSING, "Missing build ARCH") > > > > ArchList = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, TAB_COMMON, Options.BuildTarget, > > > Options.ToolChain].SupArchList > > > > > > > > TargetArchList = set(BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, TAB_COMMON, > > > Options.BuildTarget, Options.ToolChain].SupArchList) & set(ArchList) > > > > @@ -249,9 +244,11 @@ def main(): > > > > > > > > for Arch in ArchList: > > > > GenFdsGlobalVariable.OutputDirFromDscDict[Arch] = > > > NormPath(BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, Options.BuildTarget, > > > Options.ToolChain].OutputDirectory) > > > > - GenFdsGlobalVariable.PlatformName = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > > > Options.BuildTarget, Options.ToolChain].PlatformName > > > > > > > > - if (Options.outputDir): > > > > + # assign platform name based on last entry in ArchList > > > > + GenFdsGlobalVariable.PlatformName = BuildWorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, ArchList[-1], > > > Options.BuildTarget, Options.ToolChain].PlatformName > > > > + > > > > + if Options.outputDir: > > > > OutputDirFromCommandLine = GenFdsGlobalVariable.ReplaceWorkspaceMacro(Options.outputDir) > > > > if not os.path.isabs (OutputDirFromCommandLine): > > > > OutputDirFromCommandLine = os.path.join(GenFdsGlobalVariable.WorkSpaceDir, > > OutputDirFromCommandLine) > > > > @@ -274,27 +271,27 @@ def main(): > > > > GenFdsGlobalVariable.OutputDirDict[Key] = OutputDir > > > > > > > > """ Parse Fdf file, has to place after build Workspace as FDF may contain macros from DSC file """ > > > > - FdfParserObj = FdfParser.FdfParser(FdfFilename) > > > > + FdfParserObj = FdfParser(FdfFilename) > > > > FdfParserObj.ParseFile() > > > > > > > > if FdfParserObj.CycleReferenceCheck(): > > > > EdkLogger.error("GenFds", FORMAT_NOT_SUPPORTED, "Cycle Reference Detected in FDF file") > > > > > > > > - if (Options.uiFdName) : > > > > + if Options.uiFdName: > > > > if Options.uiFdName.upper() in 
FdfParserObj.Profile.FdDict: > > > > GenFds.OnlyGenerateThisFd = Options.uiFdName > > > > else: > > > > EdkLogger.error("GenFds", OPTION_VALUE_INVALID, > > > > "No such an FD in FDF file: %s" % Options.uiFdName) > > > > > > > > - if (Options.uiFvName) : > > > > + if Options.uiFvName: > > > > if Options.uiFvName.upper() in FdfParserObj.Profile.FvDict: > > > > GenFds.OnlyGenerateThisFv = Options.uiFvName > > > > else: > > > > EdkLogger.error("GenFds", OPTION_VALUE_INVALID, > > > > "No such an FV in FDF file: %s" % Options.uiFvName) > > > > > > > > - if (Options.uiCapName) : > > > > + if Options.uiCapName: > > > > if Options.uiCapName.upper() in FdfParserObj.Profile.CapsuleDict: > > > > GenFds.OnlyGenerateThisCap = Options.uiCapName > > > > else: > > > > @@ -302,7 +299,7 @@ def main(): > > > > "No such a Capsule in FDF file: %s" % Options.uiCapName) > > > > > > > > GenFdsGlobalVariable.WorkSpace = BuildWorkSpace > > > > - if ArchList is not None: > > > > + if ArchList: > > > > GenFdsGlobalVariable.ArchList = ArchList > > > > > > > > # Dsc Build Data will handle Pcd Settings from CommandLine. > > > > @@ -313,10 +310,8 @@ def main(): > > > > > > > > # Record the FV Region info that may specific in the FD > > > > if FdfParserObj.Profile.FvDict and FdfParserObj.Profile.FdDict: > > > > - for Fv in FdfParserObj.Profile.FvDict: > > > > - FvObj = FdfParserObj.Profile.FvDict[Fv] > > > > - for Fd in FdfParserObj.Profile.FdDict: > > > > - FdObj = FdfParserObj.Profile.FdDict[Fd] > > > > + for FvObj in FdfParserObj.Profile.FvDict.values(): > > > > + for FdObj in FdfParserObj.Profile.FdDict.values(): > > > > for RegionObj in FdObj.RegionList: > > > > if RegionObj.RegionType != BINARY_FILE_TYPE_FV: > > > > continue > > > > @@ -338,7 +333,7 @@ def main(): > > > > """Display FV space info.""" > > > > GenFds.DisplayFvSpaceInfo(FdfParserObj) > > > > > > > > - except FdfParser.Warning as X: > > > > + except Warning as X: > > > > EdkLogger.error(X.ToolName, FORMAT_INVALID, File=X.FileName, Line=X.LineNumber, ExtraData=X.Message, > > > RaiseError=False) > > > > ReturnCode = FORMAT_INVALID > > > > except FatalError as X: > > > > @@ -374,7 +369,6 @@ def SingleCheckCallback(option, opt_str, value, parser): > > > > # Using standard Python module optparse to parse command line option of this tool. > > > > # > > > > # @retval Opt A optparse.Values object containing the parsed options > > > > -# @retval Args Target of build command > > > > # > > > > def myOptionParser(): > > > > usage = "%prog [options] -f input_file -a arch_list -b build_target -p active_platform -t tool_chain_tag -D \"MacroName [= > > > MacroValue]\"" > > > > @@ -404,7 +398,7 @@ def myOptionParser(): > > > > Parser.add_option("--pcd", action="append", dest="OptionPcd", help="Set PCD value by command line. Format: > > > \"PcdName=Value\" ") > > > > Parser.add_option("--genfds-multi-thread", action="store_true", dest="GenfdsMultiThread", default=False, help="Enable > > GenFds > > > multi thread to generate ffs file.") > > > > > > > > - (Options, args) = Parser.parse_args() > > > > + Options, _ = Parser.parse_args() > > > > return Options > > > > > > > > ## The class implementing the EDK2 flash image generation process > > > > @@ -414,7 +408,7 @@ def myOptionParser(): > > > > # 2. Call methods of Fd class to generate FD > > > > # 3. 
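(The FvDict/FdDict rewrite above is the usual "iterate the values directly" cleanup. Quick sketch with dummy objects:

    FvDict = {'FVMAIN': object(), 'FVRECOVERY': object()}

    # old: iterate keys, then index back into the dict
    old = [FvDict[Fv] for Fv in FvDict]

    # new: ask for the values directly
    new = list(FvDict.values())

    assert old == new   # same objects, one lookup fewer per iteration
)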
Call methods of Fv class to generate FV that not belong to FD > > > > # > > > > -class GenFds : > > > > +class GenFds(object): > > > > FdfParsef = None > > > > OnlyGenerateThisFd = None > > > > OnlyGenerateThisFv = None > > > > @@ -423,12 +417,13 @@ class GenFds : > > > > ## GenFd() > > > > # > > > > # @param OutputDir Output directory > > > > - # @param FdfParser FDF contents parser > > > > + # @param FdfParserObject FDF contents parser > > > > # @param Workspace The directory of workspace > > > > # @param ArchList The Arch list of platform > > > > # > > > > - def GenFd (OutputDir, FdfParser, WorkSpace, ArchList): > > > > - GenFdsGlobalVariable.SetDir ('', FdfParser, WorkSpace, ArchList) > > > > + @staticmethod > > > > + def GenFd (OutputDir, FdfParserObject, WorkSpace, ArchList): > > > > + GenFdsGlobalVariable.SetDir ('', FdfParserObject, WorkSpace, ArchList) > > > > > > > > GenFdsGlobalVariable.VerboseLogger(" Generate all Fd images and their required FV and Capsule images!") > > > > if GenFds.OnlyGenerateThisCap is not None and GenFds.OnlyGenerateThisCap.upper() in > > > GenFdsGlobalVariable.FdfParser.Profile.CapsuleDict: > > > > @@ -470,9 +465,10 @@ class GenFds : > > > > GenFdsGlobalVariable.VerboseLogger("\n Generate all Option ROM!") > > > > for OptRomObj in GenFdsGlobalVariable.FdfParser.Profile.OptRomDict.values(): > > > > OptRomObj.AddToBuffer(None) > > > > + > > > > @staticmethod > > > > - def GenFfsMakefile(OutputDir, FdfParser, WorkSpace, ArchList, GlobalData): > > > > - GenFdsGlobalVariable.SetEnv(FdfParser, WorkSpace, ArchList, GlobalData) > > > > + def GenFfsMakefile(OutputDir, FdfParserObject, WorkSpace, ArchList, GlobalData): > > > > + GenFdsGlobalVariable.SetEnv(FdfParserObject, WorkSpace, ArchList, GlobalData) > > > > for FdObj in GenFdsGlobalVariable.FdfParser.Profile.FdDict.values(): > > > > FdObj.GenFd(Flag=True) > > > > > > > > @@ -490,6 +486,7 @@ class GenFds : > > > > # @param FvObj Whose block size to get > > > > # @retval int Block size value > > > > # > > > > + @staticmethod > > > > def GetFvBlockSize(FvObj): > > > > DefaultBlockSize = 0x1 > > > > FdObj = None > > > > @@ -524,16 +521,17 @@ class GenFds : > > > > # @param FvObj Whose block size to get > > > > # @retval None > > > > # > > > > - def DisplayFvSpaceInfo(FdfParser): > > > > + @staticmethod > > > > + def DisplayFvSpaceInfo(FdfParserObject): > > > > > > > > FvSpaceInfoList = [] > > > > MaxFvNameLength = 0 > > > > - for FvName in FdfParser.Profile.FvDict: > > > > + for FvName in FdfParserObject.Profile.FvDict: > > > > if len(FvName) > MaxFvNameLength: > > > > MaxFvNameLength = len(FvName) > > > > FvSpaceInfoFileName = os.path.join(GenFdsGlobalVariable.FvDir, FvName.upper() + '.Fv.map') > > > > if os.path.exists(FvSpaceInfoFileName): > > > > - FileLinesList = linecache.getlines(FvSpaceInfoFileName) > > > > + FileLinesList = getlines(FvSpaceInfoFileName) > > > > TotalFound = False > > > > Total = '' > > > > UsedFound = False > > > > @@ -575,6 +573,7 @@ class GenFds : > > > > # @param DscFile modules from dsc file will be preprocessed > > > > # @retval None > > > > # > > > > + @staticmethod > > > > def PreprocessImage(BuildDb, DscFile): > > > > PcdDict = BuildDb.BuildObject[DscFile, TAB_COMMON, GenFdsGlobalVariable.TargetName, > > > GenFdsGlobalVariable.ToolChainTag].Pcds > > > > PcdValue = '' > > > > @@ -600,6 +599,7 @@ class GenFds : > > > > ModuleObj = BuildDb.BuildObject[Key, TAB_COMMON, GenFdsGlobalVariable.TargetName, > > > GenFdsGlobalVariable.ToolChainTag] > > > > print(ModuleObj.BaseName + ' ' + 
ModuleObj.ModuleType) > > > > > > > > + @staticmethod > > > > def GenerateGuidXRefFile(BuildDb, ArchList, FdfParserObj): > > > > GuidXRefFileName = os.path.join(GenFdsGlobalVariable.FvDir, "Guid.xref") > > > > GuidXRefFile = BytesIO('') > > > > @@ -607,7 +607,6 @@ class GenFds : > > > > GuidDict = {} > > > > ModuleList = [] > > > > FileGuidList = [] > > > > - GuidPattern = gGuidPattern > > > > for Arch in ArchList: > > > > PlatformDataBase = BuildDb.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > > > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] > > > > PkgList = GenFdsGlobalVariable.WorkSpace.GetPackageList(GenFdsGlobalVariable.ActivePlatform, Arch, > > > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag) > > > > @@ -626,20 +625,16 @@ class GenFds : > > > > continue > > > > else: > > > > ModuleList.append(Module) > > > > - GuidMatch = GuidPattern.match(ModuleFile.BaseName) > > > > - if GuidMatch is not None: > > > > + if GlobalData.gGuidPattern.match(ModuleFile.BaseName): > > > > GuidXRefFile.write("%s %s\n" % (ModuleFile.BaseName, Module.BaseName)) > > > > else: > > > > GuidXRefFile.write("%s %s\n" % (Module.Guid, Module.BaseName)) > > > > - for key, item in Module.Protocols.items(): > > > > - GuidDict[key] = item > > > > - for key, item in Module.Guids.items(): > > > > - GuidDict[key] = item > > > > - for key, item in Module.Ppis.items(): > > > > - GuidDict[key] = item > > > > + GuidDict.update(Module.Protocols) > > > > + GuidDict.update(Module.Guids) > > > > + GuidDict.update(Module.Ppis) > > > > for FvName in FdfParserObj.Profile.FvDict: > > > > for FfsObj in FdfParserObj.Profile.FvDict[FvName].FfsList: > > > > - if not isinstance(FfsObj, FfsFileStatement.FileStatement): > > > > + if not isinstance(FfsObj, FileStatement): > > > > InfPath = PathClass(NormPath(mws.join(GenFdsGlobalVariable.WorkSpaceDir, FfsObj.InfFileName))) > > > > FdfModule = BuildDb.BuildObject[InfPath, Arch, GenFdsGlobalVariable.TargetName, > > > GenFdsGlobalVariable.ToolChainTag] > > > > if FdfModule in ModuleList: > > > > @@ -647,12 +642,9 @@ class GenFds : > > > > else: > > > > ModuleList.append(FdfModule) > > > > GuidXRefFile.write("%s %s\n" % (FdfModule.Guid, FdfModule.BaseName)) > > > > - for key, item in FdfModule.Protocols.items(): > > > > - GuidDict[key] = item > > > > - for key, item in FdfModule.Guids.items(): > > > > - GuidDict[key] = item > > > > - for key, item in FdfModule.Ppis.items(): > > > > - GuidDict[key] = item > > > > + GuidDict.update(FdfModule.Protocols) > > > > + GuidDict.update(FdfModule.Guids) > > > > + GuidDict.update(FdfModule.Ppis) > > > > else: > > > > FileStatementGuid = FfsObj.NameGuid > > > > if FileStatementGuid in FileGuidList: > > > > @@ -661,13 +653,13 @@ class GenFds : > > > > FileGuidList.append(FileStatementGuid) > > > > Name = [] > > > > FfsPath = os.path.join(GenFdsGlobalVariable.FvDir, 'Ffs') > > > > - FfsPath = glob.glob(os.path.join(FfsPath, FileStatementGuid) + '*') > > > > + FfsPath = glob(os.path.join(FfsPath, FileStatementGuid) + '*') > > > > if not FfsPath: > > > > continue > > > > if not os.path.exists(FfsPath[0]): > > > > continue > > > > MatchDict = {} > > > > - ReFileEnds = > > > re.compile('\S+(.ui)$|\S+(fv.sec.txt)$|\S+(.pe32.txt)$|\S+(.te.txt)$|\S+(.pic.txt)$|\S+(.raw.txt)$|\S+(.ffs.txt)$') > > > > + ReFileEnds = > > > compile('\S+(.ui)$|\S+(fv.sec.txt)$|\S+(.pe32.txt)$|\S+(.te.txt)$|\S+(.pic.txt)$|\S+(.raw.txt)$|\S+(.ffs.txt)$') > > > > FileList = os.listdir(FfsPath[0]) > > > > for File in FileList: > > > > 
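(The GuidDict changes above replace three hand-written copy loops with dict.update(); for reference, using two well-known GUIDs as sample data:

    GuidDict = {}
    Protocols = {'gEfiLoadedImageProtocolGuid': '5B1B31A1-9562-11D2-8E3F-00A0C969723B'}
    Guids     = {'gEfiGlobalVariableGuid':      '8BE4DF61-93CA-11D2-AA0D-00E098032B8C'}

    # old style:
    #   for key, item in Protocols.items():
    #       GuidDict[key] = item
    # new style, same effect:
    GuidDict.update(Protocols)
    GuidDict.update(Guids)

    assert len(GuidDict) == 2
)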
Match = ReFileEnds.search(File) > > > > @@ -724,16 +716,10 @@ class GenFds : > > > > os.remove(GuidXRefFileName) > > > > GuidXRefFile.close() > > > > > > > > - ##Define GenFd as static function > > > > - GenFd = staticmethod(GenFd) > > > > - GetFvBlockSize = staticmethod(GetFvBlockSize) > > > > - DisplayFvSpaceInfo = staticmethod(DisplayFvSpaceInfo) > > > > - PreprocessImage = staticmethod(PreprocessImage) > > > > - GenerateGuidXRefFile = staticmethod(GenerateGuidXRefFile) > > > > - > > > > if __name__ == '__main__': > > > > r = main() > > > > ## 0-127 is a safe return range, and 1 is a standard default error > > > > - if r < 0 or r > 127: r = 1 > > > > - sys.exit(r) > > > > + if r < 0 or r > 127: > > > > + r = 1 > > > > + exit(r) > > > > > > > > diff --git a/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py b/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py > > > > index 14578a92a9b1..e867693d7c88 100644 > > > > --- a/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py > > > > +++ b/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py > > > > @@ -16,13 +16,15 @@ > > > > # Import Modules > > > > # > > > > from __future__ import print_function > > > > +from __future__ import absolute_import > > > > + > > > > import Common.LongFilePathOs as os > > > > -import sys > > > > -import subprocess > > > > -import struct > > > > -import array > > > > +from sys import stdout > > > > +from subprocess import PIPE,Popen > > > > +from struct import Struct > > > > +from array import array > > > > > > > > -from Common.BuildToolError import * > > > > +from Common.BuildToolError import COMMAND_FAILURE,GENFDS_ERROR > > > > from Common import EdkLogger > > > > from Common.Misc import SaveFileOnChange > > > > > > > > @@ -89,7 +91,7 @@ class GenFdsGlobalVariable: > > > > EFI_FIRMWARE_FILE_SYSTEM3_GUID = '5473C07A-3DCB-4dca-BD6F-1E9689E7349A' > > > > LARGE_FILE_SIZE = 0x1000000 > > > > > > > > - SectionHeader = struct.Struct("3B 1B") > > > > + SectionHeader = Struct("3B 1B") > > > > > > > > # FvName, FdName, CapName in FDF, Image file name > > > > ImageBinDict = {} > > > > @@ -97,7 +99,7 @@ class GenFdsGlobalVariable: > > > > ## LoadBuildRule > > > > # > > > > @staticmethod > > > > - def __LoadBuildRule(): > > > > + def _LoadBuildRule(): > > > > if GenFdsGlobalVariable.__BuildRuleDatabase: > > > > return GenFdsGlobalVariable.__BuildRuleDatabase > > > > BuildConfigurationFile = os.path.normpath(os.path.join(GenFdsGlobalVariable.ConfDir, "target.txt")) > > > > @@ -139,44 +141,43 @@ class GenFdsGlobalVariable: > > > > if not Arch in GenFdsGlobalVariable.OutputDirDict: > > > > return {} > > > > > > > > - BuildRuleDatabase = GenFdsGlobalVariable.__LoadBuildRule() > > > > + BuildRuleDatabase = GenFdsGlobalVariable._LoadBuildRule() > > > > if not BuildRuleDatabase: > > > > return {} > > > > > > > > PathClassObj = PathClass(Inf.MetaFile.File, > > > > GenFdsGlobalVariable.WorkSpaceDir) > > > > - Macro = {} > > > > - Macro["WORKSPACE" ] = GenFdsGlobalVariable.WorkSpaceDir > > > > - Macro["MODULE_NAME" ] = Inf.BaseName > > > > - Macro["MODULE_GUID" ] = Inf.Guid > > > > - Macro["MODULE_VERSION" ] = Inf.Version > > > > - Macro["MODULE_TYPE" ] = Inf.ModuleType > > > > - Macro["MODULE_FILE" ] = str(PathClassObj) > > > > - Macro["MODULE_FILE_BASE_NAME" ] = PathClassObj.BaseName > > > > - Macro["MODULE_RELATIVE_DIR" ] = PathClassObj.SubDir > > > > - Macro["MODULE_DIR" ] = PathClassObj.SubDir > > > > - > > > > - Macro["BASE_NAME" ] = Inf.BaseName > > > > - > > > > - Macro["ARCH" ] = Arch > > > > - Macro["TOOLCHAIN" 
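(On the block of "GenFd = staticmethod(GenFd)" lines being dropped above: with the @staticmethod decorator added at each definition, the trailing assignments are redundant. The two spellings are equivalent:

    class GenFdsStyleOld(object):
        def GenFd(OutputDir):            # note: no self
            return OutputDir
        GenFd = staticmethod(GenFd)      # wrapped after the fact

    class GenFdsStyleNew(object):
        @staticmethod                    # same wrapping, declared up front
        def GenFd(OutputDir):
            return OutputDir

    assert GenFdsStyleOld.GenFd('Build/FV') == GenFdsStyleNew.GenFd('Build/FV')
)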
] = GenFdsGlobalVariable.ToolChainTag > > > > - Macro["TOOLCHAIN_TAG" ] = GenFdsGlobalVariable.ToolChainTag > > > > - Macro["TOOL_CHAIN_TAG" ] = GenFdsGlobalVariable.ToolChainTag > > > > - Macro["TARGET" ] = GenFdsGlobalVariable.TargetName > > > > - > > > > - Macro["BUILD_DIR" ] = GenFdsGlobalVariable.OutputDirDict[Arch] > > > > - Macro["BIN_DIR" ] = os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) > > > > - Macro["LIB_DIR" ] = os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) > > > > BuildDir = os.path.join( > > > > GenFdsGlobalVariable.OutputDirDict[Arch], > > > > Arch, > > > > PathClassObj.SubDir, > > > > PathClassObj.BaseName > > > > ) > > > > - Macro["MODULE_BUILD_DIR" ] = BuildDir > > > > - Macro["OUTPUT_DIR" ] = os.path.join(BuildDir, "OUTPUT") > > > > - Macro["DEBUG_DIR" ] = os.path.join(BuildDir, "DEBUG") > > > > - > > > > + BinDir = os.path.join(GenFdsGlobalVariable.OutputDirDict[Arch], Arch) > > > > + Macro = { > > > > + "WORKSPACE":GenFdsGlobalVariable.WorkSpaceDir, > > > > + "MODULE_NAME":Inf.BaseName, > > > > + "MODULE_GUID":Inf.Guid, > > > > + "MODULE_VERSION":Inf.Version, > > > > + "MODULE_TYPE":Inf.ModuleType, > > > > + "MODULE_FILE":str(PathClassObj), > > > > + "MODULE_FILE_BASE_NAME":PathClassObj.BaseName, > > > > + "MODULE_RELATIVE_DIR":PathClassObj.SubDir, > > > > + "MODULE_DIR":PathClassObj.SubDir, > > > > + "BASE_NAME":Inf.BaseName, > > > > + "ARCH":Arch, > > > > + "TOOLCHAIN":GenFdsGlobalVariable.ToolChainTag, > > > > + "TOOLCHAIN_TAG":GenFdsGlobalVariable.ToolChainTag, > > > > + "TOOL_CHAIN_TAG":GenFdsGlobalVariable.ToolChainTag, > > > > + "TARGET":GenFdsGlobalVariable.TargetName, > > > > + "BUILD_DIR":GenFdsGlobalVariable.OutputDirDict[Arch], > > > > + "BIN_DIR":BinDir, > > > > + "LIB_DIR":BinDir, > > > > + "MODULE_BUILD_DIR":BuildDir, > > > > + "OUTPUT_DIR":os.path.join(BuildDir, "OUTPUT"), > > > > + "DEBUG_DIR":os.path.join(BuildDir, "DEBUG") > > > > + } > > > > + > > > > BuildRules = {} > > > > for Type in BuildRuleDatabase.FileTypeList: > > > > #first try getting build rule by BuildRuleFamily > > > > @@ -216,12 +217,12 @@ class GenFdsGlobalVariable: > > > > > > > > if not Inf.IsBinaryModule: > > > > for File in Inf.Sources: > > > > - if File.TagName in ("", "*", GenFdsGlobalVariable.ToolChainTag) and \ > > > > - File.ToolChainFamily in ("", "*", GenFdsGlobalVariable.ToolChainFamily): > > > > + if File.TagName in {"", "*", GenFdsGlobalVariable.ToolChainTag} and \ > > > > + File.ToolChainFamily in {"", "*", GenFdsGlobalVariable.ToolChainFamily}: > > > > FileList.append((File, DataType.TAB_UNKNOWN_FILE)) > > > > > > > > for File in Inf.Binaries: > > > > - if File.Target in [DataType.TAB_COMMON, '*', GenFdsGlobalVariable.TargetName]: > > > > + if File.Target in {DataType.TAB_COMMON, '*', GenFdsGlobalVariable.TargetName}: > > > > FileList.append((File, File.Type)) > > > > > > > > for File, FileType in FileList: > > > > @@ -233,7 +234,7 @@ class GenFdsGlobalVariable: > > > > Source = SourceList[Index] > > > > Index = Index + 1 > > > > > > > > - if File.IsBinary and File == Source and Inf.Binaries is not None and File in Inf.Binaries: > > > > + if File.IsBinary and File == Source and Inf.Binaries and File in Inf.Binaries: > > > > # Skip all files that are not binary libraries > > > > if not Inf.LibraryClass: > > > > continue > > > > @@ -287,19 +288,18 @@ class GenFdsGlobalVariable: > > > > # @param Workspace The directory of workspace > > > > # @param ArchList The Arch list of platform > > > > # > > > > + @staticmethod > > > > def SetDir 
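(Two small patterns in the hunk above: the Macro table becomes a single dict literal instead of a run of item assignments, and the membership tests switch from tuples to set literals. The set change keeps the same truth table while giving hashed lookups; sketch with stand-in values:

    ToolChainTag = 'GCC5'
    TagName = '*'

    in_tuple = TagName in ("", "*", ToolChainTag)   # linear scan
    in_set   = TagName in {"", "*", ToolChainTag}   # hashed lookup

    assert in_tuple == in_set == True
)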
(OutputDir, FdfParser, WorkSpace, ArchList): > > > > - GenFdsGlobalVariable.VerboseLogger("GenFdsGlobalVariable.OutputDir :%s" % OutputDir) > > > > -# GenFdsGlobalVariable.OutputDirDict = OutputDir > > > > + GenFdsGlobalVariable.VerboseLogger("GenFdsGlobalVariable.OutputDir:%s" % OutputDir) > > > > GenFdsGlobalVariable.FdfParser = FdfParser > > > > GenFdsGlobalVariable.WorkSpace = WorkSpace > > > > GenFdsGlobalVariable.FvDir = os.path.join(GenFdsGlobalVariable.OutputDirDict[ArchList[0]], > > > DataType.TAB_FV_DIRECTORY) > > > > - if not os.path.exists(GenFdsGlobalVariable.FvDir) : > > > > + if not os.path.exists(GenFdsGlobalVariable.FvDir): > > > > os.makedirs(GenFdsGlobalVariable.FvDir) > > > > GenFdsGlobalVariable.FfsDir = os.path.join(GenFdsGlobalVariable.FvDir, 'Ffs') > > > > - if not os.path.exists(GenFdsGlobalVariable.FfsDir) : > > > > + if not os.path.exists(GenFdsGlobalVariable.FfsDir): > > > > os.makedirs(GenFdsGlobalVariable.FfsDir) > > > > > > > > - T_CHAR_LF = '\n' > > > > # > > > > # Create FV Address inf file > > > > # > > > > @@ -308,7 +308,7 @@ class GenFdsGlobalVariable: > > > > # > > > > # Add [Options] > > > > # > > > > - FvAddressFile.writelines("[options]" + T_CHAR_LF) > > > > + FvAddressFile.writelines("[options]" + DataType.TAB_LINE_BREAK) > > > > BsAddress = '0' > > > > for Arch in ArchList: > > > > if GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > > > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].BsBaseAddress: > > > > @@ -317,19 +317,22 @@ class GenFdsGlobalVariable: > > > > > > > > FvAddressFile.writelines("EFI_BOOT_DRIVER_BASE_ADDRESS = " + \ > > > > BsAddress + \ > > > > - T_CHAR_LF) > > > > + DataType.TAB_LINE_BREAK) > > > > > > > > RtAddress = '0' > > > > - for Arch in ArchList: > > > > - if GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > > > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].RtBaseAddress: > > > > - RtAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > > > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].RtBaseAddress > > > > + for Arch in reversed(ArchList): > > > > + temp = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > > > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag].RtBaseAddress > > > > + if temp: > > > > + RtAddress = temp > > > > + break > > > > > > > > FvAddressFile.writelines("EFI_RUNTIME_DRIVER_BASE_ADDRESS = " + \ > > > > RtAddress + \ > > > > - T_CHAR_LF) > > > > + DataType.TAB_LINE_BREAK) > > > > > > > > FvAddressFile.close() > > > > > > > > + @staticmethod > > > > def SetEnv(FdfParser, WorkSpace, ArchList, GlobalData): > > > > GenFdsGlobalVariable.ModuleFile = WorkSpace.ModuleFile > > > > GenFdsGlobalVariable.FdfParser = FdfParser > > > > @@ -360,7 +363,6 @@ class GenFdsGlobalVariable: > > > > if not os.path.exists(GenFdsGlobalVariable.FfsDir): > > > > os.makedirs(GenFdsGlobalVariable.FfsDir) > > > > > > > > - T_CHAR_LF = '\n' > > > > # > > > > # Create FV Address inf file > > > > # > > > > @@ -369,7 +371,7 @@ class GenFdsGlobalVariable: > > > > # > > > > # Add [Options] > > > > # > > > > - FvAddressFile.writelines("[options]" + T_CHAR_LF) > > > > + FvAddressFile.writelines("[options]" + DataType.TAB_LINE_BREAK) > > > > BsAddress = '0' > > > > for Arch in ArchList: > > > > BsAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > > > > @@ 
-380,20 +382,20 @@ class GenFdsGlobalVariable: > > > > > > > > FvAddressFile.writelines("EFI_BOOT_DRIVER_BASE_ADDRESS = " + \ > > > > BsAddress + \ > > > > - T_CHAR_LF) > > > > + DataType.TAB_LINE_BREAK) > > > > > > > > RtAddress = '0' > > > > - for Arch in ArchList: > > > > - if GenFdsGlobalVariable.WorkSpace.BuildObject[ > > > > + for Arch in reversed(ArchList): > > > > + temp = GenFdsGlobalVariable.WorkSpace.BuildObject[ > > > > GenFdsGlobalVariable.ActivePlatform, Arch, GlobalData.gGlobalDefines['TARGET'], > > > > - GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress: > > > > - RtAddress = GenFdsGlobalVariable.WorkSpace.BuildObject[ > > > > - GenFdsGlobalVariable.ActivePlatform, Arch, GlobalData.gGlobalDefines['TARGET'], > > > > - GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress > > > > + GlobalData.gGlobalDefines["TOOL_CHAIN_TAG"]].RtBaseAddress > > > > + if temp: > > > > + RtAddress = temp > > > > + break > > > > > > > > FvAddressFile.writelines("EFI_RUNTIME_DRIVER_BASE_ADDRESS = " + \ > > > > RtAddress + \ > > > > - T_CHAR_LF) > > > > + DataType.TAB_LINE_BREAK) > > > > > > > > FvAddressFile.close() > > > > > > > > @@ -401,6 +403,7 @@ class GenFdsGlobalVariable: > > > > # > > > > # @param String String that may contain macro > > > > # > > > > + @staticmethod > > > > def ReplaceWorkspaceMacro(String): > > > > String = mws.handleWsMacro(String) > > > > Str = String.replace('$(WORKSPACE)', GenFdsGlobalVariable.WorkSpaceDir) > > > > @@ -424,7 +427,7 @@ class GenFdsGlobalVariable: > > > > if not os.path.exists(Output): > > > > return True > > > > # always update "Output" if no "Input" given > > > > - if Input is None or len(Input) == 0: > > > > + if not Input: > > > > return True > > > > > > > > # if fdf file is changed after the 'Output" is generated, update the 'Output' > > > > @@ -449,9 +452,9 @@ class GenFdsGlobalVariable: > > > > Cmd += ("-s", Type) > > > > if CompressionType: > > > > Cmd += ("-c", CompressionType) > > > > - if Guid is not None: > > > > + if Guid: > > > > Cmd += ("-g", Guid) > > > > - if DummyFile is not None: > > > > + if DummyFile: > > > > Cmd += ("--dummy", DummyFile) > > > > if GuidHdrLen: > > > > Cmd += ("-l", GuidHdrLen) > > > > @@ -473,7 +476,7 @@ class GenFdsGlobalVariable: > > > > if ' '.join(Cmd).strip() not in GenFdsGlobalVariable.SecCmdList: > > > > GenFdsGlobalVariable.SecCmdList.append(' '.join(Cmd).strip()) > > > > else: > > > > - SectionData = array.array('B', [0, 0, 0, 0]) > > > > + SectionData = array('B', [0, 0, 0, 0]) > > > > SectionData.fromstring(Ui.encode("utf_16_le")) > > > > SectionData.append(0) > > > > SectionData.append(0) > > > > @@ -571,9 +574,9 @@ class GenFdsGlobalVariable: > > > > if BaseAddress: > > > > Cmd += ("-r", BaseAddress) > > > > > > > > - if ForceRebase == False: > > > > + if not ForceRebase: > > > > Cmd += ("-F", "FALSE") > > > > - elif ForceRebase == True: > > > > + else: > > > > Cmd += ("-F", "TRUE") > > > > > > > > if Capsule: > > > > @@ -634,7 +637,7 @@ class GenFdsGlobalVariable: > > > > Revision=None, DeviceId=None, VendorId=None, IsMakefile=False): > > > > InputList = [] > > > > Cmd = ["EfiRom"] > > > > - if len(EfiInput) > 0: > > > > + if EfiInput: > > > > > > > > if Compress: > > > > Cmd.append("-ec") > > > > @@ -645,7 +648,7 @@ class GenFdsGlobalVariable: > > > > Cmd.append(EfiFile) > > > > InputList.append (EfiFile) > > > > > > > > - if len(BinaryInput) > 0: > > > > + if BinaryInput: > > > > Cmd.append("-b") > > > > for BinFile in BinaryInput: > > > > Cmd.append(BinFile) > > > > @@ -656,13 
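(On the RtAddress loops in SetDir/SetEnv above: the old forward loop kept overwriting RtAddress, so the last Arch with a non-empty RtBaseAddress won anyway; walking reversed(ArchList) and breaking states that intent directly and skips the extra BuildObject lookups. A stand-alone sketch with stand-in data:

    ArchList = ['IA32', 'X64', 'AARCH64']
    RtBase   = {'IA32': '0x800000', 'X64': '0xA00000', 'AARCH64': ''}  # stand-in

    # old: walk forward, overwrite on every hit, last non-empty value wins
    RtAddressOld = '0'
    for Arch in ArchList:
        if RtBase[Arch]:
            RtAddressOld = RtBase[Arch]

    # new: walk backward, stop at the first non-empty value
    RtAddressNew = '0'
    for Arch in reversed(ArchList):
        if RtBase[Arch]:
            RtAddressNew = RtBase[Arch]
            break

    assert RtAddressOld == RtAddressNew == '0xA00000'
)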
+659,13 @@ class GenFdsGlobalVariable: > > > > return > > > > GenFdsGlobalVariable.DebugLogger(EdkLogger.DEBUG_5, "%s needs update because of newer %s" % (Output, InputList)) > > > > > > > > - if ClassCode is not None: > > > > + if ClassCode: > > > > Cmd += ("-l", ClassCode) > > > > - if Revision is not None: > > > > + if Revision: > > > > Cmd += ("-r", Revision) > > > > - if DeviceId is not None: > > > > + if DeviceId: > > > > Cmd += ("-i", DeviceId) > > > > - if VendorId is not None: > > > > + if VendorId: > > > > Cmd += ("-f", VendorId) > > > > > > > > Cmd += ("-o", Output) > > > > @@ -688,6 +691,7 @@ class GenFdsGlobalVariable: > > > > else: > > > > GenFdsGlobalVariable.CallExternalTool(Cmd, "Failed to call " + ToolPath, returnValue) > > > > > > > > + @staticmethod > > > > def CallExternalTool (cmd, errorMess, returnValue=[]): > > > > > > > > if type(cmd) not in (tuple, list): > > > > @@ -701,19 +705,19 @@ class GenFdsGlobalVariable: > > > > cmd += ('-v',) > > > > GenFdsGlobalVariable.InfLogger (cmd) > > > > else: > > > > - sys.stdout.write ('#') > > > > - sys.stdout.flush() > > > > + stdout.write ('#') > > > > + stdout.flush() > > > > GenFdsGlobalVariable.SharpCounter = GenFdsGlobalVariable.SharpCounter + 1 > > > > if GenFdsGlobalVariable.SharpCounter % GenFdsGlobalVariable.SharpNumberPerLine == 0: > > > > - sys.stdout.write('\n') > > > > + stdout.write('\n') > > > > > > > > try: > > > > - PopenObject = subprocess.Popen(' '.join(cmd), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) > > > > + PopenObject = Popen(' '.join(cmd), stdout=PIPE, stderr=PIPE, shell=True) > > > > except Exception as X: > > > > EdkLogger.error("GenFds", COMMAND_FAILURE, ExtraData="%s: %s" % (str(X), cmd[0])) > > > > (out, error) = PopenObject.communicate() > > > > > > > > - while PopenObject.returncode is None : > > > > + while PopenObject.returncode is None: > > > > PopenObject.wait() > > > > if returnValue != [] and returnValue[0] != 0: > > > > #get command return value > > > > @@ -727,45 +731,52 @@ class GenFdsGlobalVariable: > > > > print("###", cmd) > > > > EdkLogger.error("GenFds", COMMAND_FAILURE, errorMess) > > > > > > > > + @staticmethod > > > > def VerboseLogger (msg): > > > > EdkLogger.verbose(msg) > > > > > > > > + @staticmethod > > > > def InfLogger (msg): > > > > EdkLogger.info(msg) > > > > > > > > + @staticmethod > > > > def ErrorLogger (msg, File=None, Line=None, ExtraData=None): > > > > EdkLogger.error('GenFds', GENFDS_ERROR, msg, File, Line, ExtraData) > > > > > > > > + @staticmethod > > > > def DebugLogger (Level, msg): > > > > EdkLogger.debug(Level, msg) > > > > > > > > - ## ReplaceWorkspaceMacro() > > > > + ## MacroExtend() > > > > # > > > > # @param Str String that may contain macro > > > > # @param MacroDict Dictionary that contains macro value pair > > > > # > > > > + @staticmethod > > > > def MacroExtend (Str, MacroDict={}, Arch=DataType.TAB_COMMON): > > > > - if Str is None : > > > > + if Str is None: > > > > return None > > > > > > > > - Dict = {'$(WORKSPACE)' : GenFdsGlobalVariable.WorkSpaceDir, > > > > - '$(EDK_SOURCE)' : GenFdsGlobalVariable.EdkSourceDir, > > > > + Dict = {'$(WORKSPACE)': GenFdsGlobalVariable.WorkSpaceDir, > > > > + '$(EDK_SOURCE)': GenFdsGlobalVariable.EdkSourceDir, > > > > # '$(OUTPUT_DIRECTORY)': GenFdsGlobalVariable.OutputDirFromDsc, > > > > - '$(TARGET)' : GenFdsGlobalVariable.TargetName, > > > > - '$(TOOL_CHAIN_TAG)' : GenFdsGlobalVariable.ToolChainTag, > > > > - '$(SPACE)' : ' ' > > > > + '$(TARGET)': GenFdsGlobalVariable.TargetName, > > > > + 
'$(TOOL_CHAIN_TAG)': GenFdsGlobalVariable.ToolChainTag, > > > > + '$(SPACE)': ' ' > > > > } > > > > - OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[GenFdsGlobalVariable.ArchList[0]] > > > > + > > > > if Arch != DataType.TAB_COMMON and Arch in GenFdsGlobalVariable.ArchList: > > > > OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[Arch] > > > > + else: > > > > + OutputDir = GenFdsGlobalVariable.OutputDirFromDscDict[GenFdsGlobalVariable.ArchList[0]] > > > > > > > > Dict['$(OUTPUT_DIRECTORY)'] = OutputDir > > > > > > > > - if MacroDict is not None and len (MacroDict) != 0: > > > > + if MacroDict: > > > > Dict.update(MacroDict) > > > > > > > > for key in Dict: > > > > - if Str.find(key) >= 0 : > > > > + if Str.find(key) >= 0: > > > > Str = Str.replace (key, Dict[key]) > > > > > > > > if Str.find('$(ARCH)') >= 0: > > > > @@ -780,14 +791,14 @@ class GenFdsGlobalVariable: > > > > # > > > > # @param PcdPattern pattern that labels a PCD. > > > > # > > > > + @staticmethod > > > > def GetPcdValue (PcdPattern): > > > > - if PcdPattern is None : > > > > + if PcdPattern is None: > > > > return None > > > > PcdPair = PcdPattern.lstrip('PCD(').rstrip(')').strip().split('.') > > > > TokenSpace = PcdPair[0] > > > > TokenCName = PcdPair[1] > > > > > > > > - PcdValue = '' > > > > for Arch in GenFdsGlobalVariable.ArchList: > > > > Platform = GenFdsGlobalVariable.WorkSpace.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, > > > GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag] > > > > PcdDict = Platform.Pcds > > > > @@ -799,8 +810,7 @@ class GenFdsGlobalVariable: > > > > if PcdObj.DatumType != DataType.TAB_VOID: > > > > EdkLogger.error("GenFds", GENFDS_ERROR, "%s is not VOID* datum type." % PcdPattern) > > > > > > > > - PcdValue = PcdObj.DefaultValue > > > > - return PcdValue > > > > + return PcdObj.DefaultValue > > > > > > > > for Package in GenFdsGlobalVariable.WorkSpace.GetPackageList(GenFdsGlobalVariable.ActivePlatform, > > > > Arch, > > > > @@ -815,21 +825,9 @@ class GenFdsGlobalVariable: > > > > if PcdObj.DatumType != DataType.TAB_VOID: > > > > EdkLogger.error("GenFds", GENFDS_ERROR, "%s is not VOID* datum type." 
% PcdPattern) > > > > > > > > - PcdValue = PcdObj.DefaultValue > > > > - return PcdValue > > > > + return PcdObj.DefaultValue > > > > > > > > - return PcdValue > > > > - > > > > - SetDir = staticmethod(SetDir) > > > > - SetEnv = staticmethod(SetEnv) > > > > - ReplaceWorkspaceMacro = staticmethod(ReplaceWorkspaceMacro) > > > > - CallExternalTool = staticmethod(CallExternalTool) > > > > - VerboseLogger = staticmethod(VerboseLogger) > > > > - InfLogger = staticmethod(InfLogger) > > > > - ErrorLogger = staticmethod(ErrorLogger) > > > > - DebugLogger = staticmethod(DebugLogger) > > > > - MacroExtend = staticmethod (MacroExtend) > > > > - GetPcdValue = staticmethod(GetPcdValue) > > > > + return '' > > > > > > > > ## FindExtendTool() > > > > # > > > > @@ -863,7 +861,7 @@ def FindExtendTool(KeyStringList, CurrentArchList, NameGuid): > > > > ToolOptionKey = None > > > > KeyList = None > > > > for ToolDef in ToolDefinition.items(): > > > > - if NameGuid.lower() == ToolDef[1].lower() : > > > > + if NameGuid.lower() == ToolDef[1].lower(): > > > > KeyList = ToolDef[0].split('_') > > > > Key = KeyList[0] + \ > > > > '_' + \ > > > > diff --git a/BaseTools/Source/Python/GenFds/GuidSection.py b/BaseTools/Source/Python/GenFds/GuidSection.py > > > > index fd660737f421..25609e6f2ae1 100644 > > > > --- a/BaseTools/Source/Python/GenFds/GuidSection.py > > > > +++ b/BaseTools/Source/Python/GenFds/GuidSection.py > > > > @@ -19,7 +19,7 @@ > > > > from __future__ import absolute_import > > > > from . import Section > > > > import subprocess > > > > -from .Ffs import Ffs > > > > +from .Ffs import SectionSuffix > > > > import Common.LongFilePathOs as os > > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > from .GenFdsGlobalVariable import FindExtendTool > > > > @@ -126,7 +126,7 @@ class GuidSection(GuidSectionClassObject) : > > > > ModuleName + \ > > > > SUP_MODULE_SEC + \ > > > > SecNum + \ > > > > - Ffs.SectionSuffix['GUIDED'] > > > > + SectionSuffix['GUIDED'] > > > > OutputFile = os.path.normpath(OutputFile) > > > > > > > > ExternalTool = None > > > > diff --git a/BaseTools/Source/Python/GenFds/OptionRom.py b/BaseTools/Source/Python/GenFds/OptionRom.py > > > > index bb7071fa7fca..9d7ad25ecc43 100644 > > > > --- a/BaseTools/Source/Python/GenFds/OptionRom.py > > > > +++ b/BaseTools/Source/Python/GenFds/OptionRom.py > > > > @@ -26,8 +26,6 @@ from Common.Misc import SaveFileOnChange > > > > from Common import EdkLogger > > > > from Common.BuildToolError import * > > > > > > > > -T_CHAR_LF = '\n' > > > > - > > > > ## > > > > # > > > > # > > > > @@ -36,9 +34,9 @@ class OPTIONROM (OptionRomClassObject): > > > > # > > > > # @param self The object pointer > > > > # > > > > - def __init__(self): > > > > + def __init__(self, Name = ""): > > > > OptionRomClassObject.__init__(self) > > > > - > > > > + self.DriverName = Name > > > > > > > > ## AddToBuffer() > > > > # > > > > diff --git a/BaseTools/Source/Python/GenFds/Region.py b/BaseTools/Source/Python/GenFds/Region.py > > > > index 5242b74c9e70..9aeaa45d4566 100644 > > > > --- a/BaseTools/Source/Python/GenFds/Region.py > > > > +++ b/BaseTools/Source/Python/GenFds/Region.py > > > > @@ -20,7 +20,6 @@ from struct import * > > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > from io import BytesIO > > > > import string > > > > -from CommonDataClass.FdfClass import RegionClassObject > > > > import Common.LongFilePathOs as os > > > > from stat import * > > > > from Common import EdkLogger > > > > @@ -32,15 +31,20 @@ from Common.DataType import 
BINARY_FILE_TYPE_FV > > > > ## generate Region > > > > # > > > > # > > > > -class Region(RegionClassObject): > > > > +class Region(object): > > > > > > > > ## The constructor > > > > # > > > > # @param self The object pointer > > > > # > > > > def __init__(self): > > > > - RegionClassObject.__init__(self) > > > > - > > > > + self.Offset = None # The begin position of the Region > > > > + self.Size = None # The Size of the Region > > > > + self.PcdOffset = None > > > > + self.PcdSize = None > > > > + self.SetVarDict = {} > > > > + self.RegionType = None > > > > + self.RegionDataList = [] > > > > > > > > ## PadBuffer() > > > > # > > > > diff --git a/BaseTools/Source/Python/GenFds/UiSection.py b/BaseTools/Source/Python/GenFds/UiSection.py > > > > index 24f2f3ca938d..c1dff188be96 100644 > > > > --- a/BaseTools/Source/Python/GenFds/UiSection.py > > > > +++ b/BaseTools/Source/Python/GenFds/UiSection.py > > > > @@ -17,7 +17,7 @@ > > > > # > > > > from __future__ import absolute_import > > > > from . import Section > > > > -from .Ffs import Ffs > > > > +from .Ffs import SectionSuffix > > > > import subprocess > > > > import Common.LongFilePathOs as os > > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > @@ -59,7 +59,7 @@ class UiSection (UiSectionClassObject): > > > > self.StringData = FfsInf.__ExtendMacro__(self.StringData) > > > > self.FileName = FfsInf.__ExtendMacro__(self.FileName) > > > > > > > > - OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + > > > Ffs.SectionSuffix.get(BINARY_FILE_TYPE_UI)) > > > > + OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + SecNum + > > > SectionSuffix.get(BINARY_FILE_TYPE_UI)) > > > > > > > > if self.StringData is not None : > > > > NameString = self.StringData > > > > diff --git a/BaseTools/Source/Python/GenFds/VerSection.py b/BaseTools/Source/Python/GenFds/VerSection.py > > > > index db71fe9653d4..051743f10c21 100644 > > > > --- a/BaseTools/Source/Python/GenFds/VerSection.py > > > > +++ b/BaseTools/Source/Python/GenFds/VerSection.py > > > > @@ -1,7 +1,7 @@ > > > > ## @file > > > > # process Version section generation > > > > # > > > > -# Copyright (c) 2007 - 2017, Intel Corporation. All rights reserved.<BR> > > > > +# Copyright (c) 2007 - 2018, Intel Corporation. All rights reserved.<BR> > > > > # > > > > # This program and the accompanying materials > > > > # are licensed and made available under the terms and conditions of the BSD License > > > > @@ -16,10 +16,8 @@ > > > > # Import Modules > > > > # > > > > from __future__ import absolute_import > > > > -from .Ffs import Ffs > > > > -from . 
import Section > > > > +from .Ffs import SectionSuffix > > > > import Common.LongFilePathOs as os > > > > -import subprocess > > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > from CommonDataClass.FdfClass import VerSectionClassObject > > > > from Common.LongFilePathSupport import OpenLongFilePath as open > > > > @@ -54,29 +52,27 @@ class VerSection (VerSectionClassObject): > > > > # > > > > # Prepare the parameter of GenSection > > > > # > > > > - if FfsInf is not None: > > > > + if FfsInf: > > > > self.Alignment = FfsInf.__ExtendMacro__(self.Alignment) > > > > self.BuildNum = FfsInf.__ExtendMacro__(self.BuildNum) > > > > self.StringData = FfsInf.__ExtendMacro__(self.StringData) > > > > self.FileName = FfsInf.__ExtendMacro__(self.FileName) > > > > > > > > OutputFile = os.path.join(OutputPath, > > > > - ModuleName + SUP_MODULE_SEC + SecNum + Ffs.SectionSuffix.get('VERSION')) > > > > + ModuleName + SUP_MODULE_SEC + SecNum + SectionSuffix.get('VERSION')) > > > > OutputFile = os.path.normpath(OutputFile) > > > > > > > > # Get String Data > > > > StringData = '' > > > > - if self.StringData is not None: > > > > + if self.StringData: > > > > StringData = self.StringData > > > > - elif self.FileName is not None: > > > > + elif self.FileName: > > > > FileNameStr = GenFdsGlobalVariable.ReplaceWorkspaceMacro(self.FileName) > > > > FileNameStr = GenFdsGlobalVariable.MacroExtend(FileNameStr, Dict) > > > > FileObj = open(FileNameStr, 'r') > > > > StringData = FileObj.read() > > > > StringData = '"' + StringData + '"' > > > > FileObj.close() > > > > - else: > > > > - StringData = '' > > > > GenFdsGlobalVariable.GenerateSection(OutputFile, [], 'EFI_SECTION_VERSION', > > > > Ver=StringData, BuildNumber=self.BuildNum, IsMakefile=IsMakefile) > > > > OutputFileList = [] > > > > diff --git a/BaseTools/Source/Python/GenFds/Vtf.py b/BaseTools/Source/Python/GenFds/Vtf.py > > > > index 5cb2d4acfb1c..9dcd48b2d812 100644 > > > > --- a/BaseTools/Source/Python/GenFds/Vtf.py > > > > +++ b/BaseTools/Source/Python/GenFds/Vtf.py > > > > @@ -18,21 +18,24 @@ > > > > from __future__ import absolute_import > > > > from .GenFdsGlobalVariable import GenFdsGlobalVariable > > > > import Common.LongFilePathOs as os > > > > -from CommonDataClass.FdfClass import VtfClassObject > > > > from Common.LongFilePathSupport import OpenLongFilePath as open > > > > -T_CHAR_LF = '\n' > > > > +from Common.DataType import TAB_LINE_BREAK > > > > > > > > ## generate VTF > > > > # > > > > # > > > > -class Vtf (VtfClassObject): > > > > +class Vtf (object): > > > > > > > > ## The constructor > > > > # > > > > # @param self The object pointer > > > > # > > > > def __init__(self): > > > > - VtfClassObject.__init__(self) > > > > + self.KeyArch = None > > > > + self.ArchList = None > > > > + self.UiName = None > > > > + self.ResetBin = None > > > > + self.ComponentStatementList = [] > > > > > > > > ## GenVtf() method > > > > # > > > > @@ -44,7 +47,6 @@ class Vtf (VtfClassObject): > > > > # > > > > def GenVtf(self, FdAddressDict) : > > > > self.GenBsfInf() > > > > - OutputFile = os.path.join(GenFdsGlobalVariable.FvDir, self.UiName + '.Vtf') > > > > BaseAddArg = self.GetBaseAddressArg(FdAddressDict) > > > > OutputArg, VtfRawDict = self.GenOutputArg() > > > > > > > > @@ -69,57 +71,57 @@ class Vtf (VtfClassObject): > > > > FvList = self.GetFvList() > > > > self.BsfInfName = os.path.join(GenFdsGlobalVariable.FvDir, self.UiName + '.inf') > > > > BsfInf = open(self.BsfInfName, 'w+') > > > > - if self.ResetBin is not None: > > > > - 
BsfInf.writelines ("[OPTIONS]" + T_CHAR_LF) > > > > + if self.ResetBin: > > > > + BsfInf.writelines ("[OPTIONS]" + TAB_LINE_BREAK) > > > > BsfInf.writelines ("IA32_RST_BIN" + \ > > > > " = " + \ > > > > > > > GenFdsGlobalVariable.MacroExtend(GenFdsGlobalVariable.ReplaceWorkspaceMacro(self.ResetBin)) + \ > > > > - T_CHAR_LF) > > > > - BsfInf.writelines (T_CHAR_LF) > > > > + TAB_LINE_BREAK) > > > > + BsfInf.writelines (TAB_LINE_BREAK) > > > > > > > > - BsfInf.writelines ("[COMPONENTS]" + T_CHAR_LF) > > > > + BsfInf.writelines ("[COMPONENTS]" + TAB_LINE_BREAK) > > > > > > > > for ComponentObj in self.ComponentStatementList : > > > > BsfInf.writelines ("COMP_NAME" + \ > > > > " = " + \ > > > > ComponentObj.CompName + \ > > > > - T_CHAR_LF) > > > > + TAB_LINE_BREAK) > > > > if ComponentObj.CompLoc.upper() == 'NONE': > > > > BsfInf.writelines ("COMP_LOC" + \ > > > > " = " + \ > > > > 'N' + \ > > > > - T_CHAR_LF) > > > > + TAB_LINE_BREAK) > > > > > > > > - elif ComponentObj.FilePos is not None: > > > > + elif ComponentObj.FilePos: > > > > BsfInf.writelines ("COMP_LOC" + \ > > > > " = " + \ > > > > ComponentObj.FilePos + \ > > > > - T_CHAR_LF) > > > > + TAB_LINE_BREAK) > > > > else: > > > > Index = FvList.index(ComponentObj.CompLoc.upper()) > > > > if Index == 0: > > > > BsfInf.writelines ("COMP_LOC" + \ > > > > " = " + \ > > > > 'F' + \ > > > > - T_CHAR_LF) > > > > + TAB_LINE_BREAK) > > > > elif Index == 1: > > > > BsfInf.writelines ("COMP_LOC" + \ > > > > " = " + \ > > > > 'S' + \ > > > > - T_CHAR_LF) > > > > + TAB_LINE_BREAK) > > > > > > > > BsfInf.writelines ("COMP_TYPE" + \ > > > > " = " + \ > > > > ComponentObj.CompType + \ > > > > - T_CHAR_LF) > > > > + TAB_LINE_BREAK) > > > > BsfInf.writelines ("COMP_VER" + \ > > > > " = " + \ > > > > ComponentObj.CompVer + \ > > > > - T_CHAR_LF) > > > > + TAB_LINE_BREAK) > > > > BsfInf.writelines ("COMP_CS" + \ > > > > " = " + \ > > > > ComponentObj.CompCs + \ > > > > - T_CHAR_LF) > > > > + TAB_LINE_BREAK) > > > > > > > > BinPath = ComponentObj.CompBin > > > > if BinPath != '-': > > > > @@ -127,7 +129,7 @@ class Vtf (VtfClassObject): > > > > BsfInf.writelines ("COMP_BIN" + \ > > > > " = " + \ > > > > BinPath + \ > > > > - T_CHAR_LF) > > > > + TAB_LINE_BREAK) > > > > > > > > SymPath = ComponentObj.CompSym > > > > if SymPath != '-': > > > > @@ -135,12 +137,12 @@ class Vtf (VtfClassObject): > > > > BsfInf.writelines ("COMP_SYM" + \ > > > > " = " + \ > > > > SymPath + \ > > > > - T_CHAR_LF) > > > > + TAB_LINE_BREAK) > > > > BsfInf.writelines ("COMP_SIZE" + \ > > > > " = " + \ > > > > ComponentObj.CompSize + \ > > > > - T_CHAR_LF) > > > > - BsfInf.writelines (T_CHAR_LF) > > > > + TAB_LINE_BREAK) > > > > + BsfInf.writelines (TAB_LINE_BREAK) > > > > > > > > BsfInf.close() > > > > > > > > diff --git a/BaseTools/Source/Python/build/BuildReport.py b/BaseTools/Source/Python/build/BuildReport.py > > > > index a66adfb16746..95863fefe151 100644 > > > > --- a/BaseTools/Source/Python/build/BuildReport.py > > > > +++ b/BaseTools/Source/Python/build/BuildReport.py > > > > @@ -46,6 +46,7 @@ from Common.StringUtils import NormPath > > > > from Common.DataType import * > > > > import collections > > > > from Common.Expression import * > > > > +from GenFds.AprioriSection import DXE_APRIORI_GUID, PEI_APRIORI_GUID > > > > > > > > ## Pattern to extract contents in EDK DXS files > > > > gDxsDependencyPattern = re.compile(r"DEPENDENCY_START(.+)DEPENDENCY_END", re.DOTALL) > > > > @@ -1853,8 +1854,8 @@ class FdRegionReport(object): > > > > # > > > > # Add PEI and DXE a priori files 
GUIDs defined in PI specification. > > > > # > > > > - self._GuidsDb["1B45CC0A-156A-428A-AF62-49864DA0E6E6"] = "PEI Apriori" > > > > - self._GuidsDb["FC510EE7-FFDC-11D4-BD41-0080C73C8881"] = "DXE Apriori" > > > > + self._GuidsDb[PEI_APRIORI_GUID] = "PEI Apriori" > > > > + self._GuidsDb[DXE_APRIORI_GUID] = "DXE Apriori" > > > > # > > > > # Add ACPI table storage file > > > > # > > > > -- > > > > 2.16.2.windows.1 > > > > > > > > _______________________________________________ > > > > edk2-devel mailing list > > > > edk2-devel@lists.01.org > > > > https://lists.01.org/mailman/listinfo/edk2-devel > > > > > > _______________________________________________ > > edk2-devel mailing list > > edk2-devel@lists.01.org > > https://lists.01.org/mailman/listinfo/edk2-devel > _______________________________________________ > edk2-devel mailing list > edk2-devel@lists.01.org > https://lists.01.org/mailman/listinfo/edk2-devel > ^ permalink raw reply [flat|nested] 14+ messages in thread
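The staticmethod conversion in the hunks above replaces the older idiom, where each helper was rebound with staticmethod() at the bottom of the class, with the decorator form. A minimal sketch of the two spellings, using an invented Logger class and print() standing in for the real EdkLogger calls:

class LoggerOld:
    def VerboseLogger(msg):                       # no self; works because of the rebind below
        print(msg)
    VerboseLogger = staticmethod(VerboseLogger)   # easy to miss at the end of a large class

class LoggerNew:
    @staticmethod
    def VerboseLogger(msg):                       # decorator form used by the patch
        print(msg)

LoggerOld.VerboseLogger("building FV")            # both are callable without an instance
LoggerNew.VerboseLogger("building FV")

Behavior is identical either way; the decorator simply keeps the declaration and the staticmethod wrapping together instead of pages apart.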
* [Patch v3 2/6] BaseTools/GenFds: change objects to sets 2018-10-23 17:29 [Patch v3 0/6] BaseTools/GenFds: cleanup GenFds Jaben Carsey 2018-10-23 17:29 ` [Patch v3 1/6] " Jaben Carsey @ 2018-10-23 17:29 ` Jaben Carsey 2018-10-23 17:29 ` [Patch v3 3/6] Basetools/GenFds: refactor class FV Jaben Carsey ` (4 subsequent siblings) 6 siblings, 0 replies; 14+ messages in thread From: Jaben Carsey @ 2018-10-23 17:29 UTC (permalink / raw) To: edk2-devel; +Cc: Liming Gao, Yonghong Zhu Change lists and tuples used solely for "in" testing to sets. These operations are not order dependent. fixed some line length for PEP8 compliance on some. Cc: Liming Gao <liming.gao@intel.com> Cc: Yonghong Zhu <yonghong.zhu@intel.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jaben Carsey <jaben.carsey@intel.com> --- BaseTools/Source/Python/GenFds/FdfParser.py | 222 +++++++++++--------- 1 file changed, 123 insertions(+), 99 deletions(-) diff --git a/BaseTools/Source/Python/GenFds/FdfParser.py b/BaseTools/Source/Python/GenFds/FdfParser.py index 62bf9f14012a..8b4d5e7fd863 100644 --- a/BaseTools/Source/Python/GenFds/FdfParser.py +++ b/BaseTools/Source/Python/GenFds/FdfParser.py @@ -70,6 +70,7 @@ SEPARATORS = {TAB_EQUAL_SPLIT, TAB_VALUE_SPLIT, TAB_COMMA_SPLIT, '{', '}'} ALIGNMENTS = {"Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", "256K", "512K", "1M", "2M", "4M", "8M", "16M"} ALIGNMENT_NOAUTO = ALIGNMENTS - {"Auto"} +CR_LB_SET = {T_CHAR_CR, TAB_LINE_BREAK} RegionSizePattern = compile("\s*(?P<base>(?:0x|0X)?[a-fA-F0-9]+)\s*\|\s*(?P<size>(?:0x|0X)?[a-fA-F0-9]+)\s*") RegionSizeGuidPattern = compile("\s*(?P<base>\w+\.\w+[\.\w\[\]]*)\s*\|\s*(?P<size>\w+\.\w+[\.\w\[\]]*)\s*") @@ -266,7 +267,7 @@ class FdfParser: # def _SkipWhiteSpace(self): while not self._EndOfFile(): - if self._CurrentChar() in (TAB_PRINTCHAR_NUL, T_CHAR_CR, TAB_LINE_BREAK, TAB_SPACE_SPLIT, T_CHAR_TAB): + if self._CurrentChar() in {TAB_PRINTCHAR_NUL, T_CHAR_CR, TAB_LINE_BREAK, TAB_SPACE_SPLIT, T_CHAR_TAB}: self._SkippedChars += str(self._CurrentChar()) self._GetOneChar() else: @@ -406,14 +407,14 @@ class FdfParser: return Offset = StartPos[1] - while self.Profile.FileLinesList[StartPos[0]][Offset] not in (T_CHAR_CR, TAB_LINE_BREAK): + while self.Profile.FileLinesList[StartPos[0]][Offset] not in CR_LB_SET: self.Profile.FileLinesList[StartPos[0]][Offset] = Value Offset += 1 Line = StartPos[0] while Line < EndPos[0]: Offset = 0 - while self.Profile.FileLinesList[Line][Offset] not in (T_CHAR_CR, TAB_LINE_BREAK): + while self.Profile.FileLinesList[Line][Offset] not in CR_LB_SET: self.Profile.FileLinesList[Line][Offset] = Value Offset += 1 Line += 1 @@ -707,7 +708,7 @@ class FdfParser: PreIndex = 0 StartPos = CurLine.find('$(', PreIndex) EndPos = CurLine.find(')', StartPos+2) - while StartPos != -1 and EndPos != -1 and self._Token not in [TAB_IF_DEF, TAB_IF_N_DEF, TAB_IF, TAB_ELSE_IF]: + while StartPos != -1 and EndPos != -1 and self._Token not in {TAB_IF_DEF, TAB_IF_N_DEF, TAB_IF, TAB_ELSE_IF}: MacroName = CurLine[StartPos+2: EndPos] MacorValue = self._GetMacroValue(MacroName) if MacorValue is not None: @@ -759,7 +760,7 @@ class FdfParser: self.Profile.PcdFileLineDict[PcdPair] = FileLineTuple self._WipeOffArea.append(((SetLine, SetOffset), (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) - elif self._Token in (TAB_IF_DEF, TAB_IF_N_DEF, TAB_IF): + elif self._Token in {TAB_IF_DEF, TAB_IF_N_DEF, TAB_IF}: IfStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 
len(self._Token)) IfList.append([IfStartPos, None, None]) @@ -777,7 +778,7 @@ class FdfParser: IfList[-1] = [IfList[-1][0], ConditionSatisfied, BranchDetermined] if ConditionSatisfied: self._WipeOffArea.append((IfList[-1][0], (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - 1))) - elif self._Token in (TAB_ELSE_IF, '!else'): + elif self._Token in {TAB_ELSE_IF, TAB_ELSE}: ElseStartPos = (self.CurrentLineNumber - 1, self.CurrentOffsetWithinLine - len(self._Token)) if len(IfList) <= 0: raise Warning("Missing !if statement", self.FileName, self.CurrentLineNumber) @@ -860,13 +861,12 @@ class FdfParser: MacroDict.update(GlobalData.gGlobalDefines) MacroDict.update(GlobalData.gCommandLineDefines) - if GlobalData.BuildOptionPcd: - for Item in GlobalData.BuildOptionPcd: - if isinstance(Item, tuple): - continue - PcdName, TmpValue = Item.split(TAB_EQUAL_SPLIT) - TmpValue = BuildOptionValue(TmpValue, {}) - MacroDict[PcdName.strip()] = TmpValue + for Item in GlobalData.BuildOptionPcd: + if isinstance(Item, tuple): + continue + PcdName, TmpValue = Item.split(TAB_EQUAL_SPLIT) + TmpValue = BuildOptionValue(TmpValue, {}) + MacroDict[PcdName.strip()] = TmpValue # Highest priority return MacroDict @@ -967,7 +967,7 @@ class FdfParser: def _GetExpression(self): Line = self.Profile.FileLinesList[self.CurrentLineNumber - 1] Index = len(Line) - 1 - while Line[Index] in [T_CHAR_CR, TAB_LINE_BREAK]: + while Line[Index] in CR_LB_SET: Index -= 1 ExpressionString = self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:Index+1] self.CurrentOffsetWithinLine += len(ExpressionString) @@ -1066,7 +1066,7 @@ class FdfParser: if self.CurrentLineNumber != StartLine: EndPos = len(self.Profile.FileLinesList[StartLine-1]) self._Token = self.Profile.FileLinesList[StartLine-1][StartPos: EndPos] - if self._Token.lower() in [TAB_IF, TAB_END_IF, TAB_ELSE_IF, TAB_ELSE, TAB_IF_DEF, TAB_IF_N_DEF, TAB_ERROR, TAB_INCLUDE]: + if self._Token.lower() in {TAB_IF, TAB_END_IF, TAB_ELSE_IF, TAB_ELSE, TAB_IF_DEF, TAB_IF_N_DEF, TAB_ERROR, TAB_INCLUDE}: self._Token = self._Token.lower() if StartPos != self.CurrentOffsetWithinLine: return True @@ -1813,7 +1813,7 @@ class FdfParser: if not self._GetNextWord(): return True - if not self._Token in ("SET", BINARY_FILE_TYPE_FV, "FILE", "DATA", "CAPSULE", "INF"): + if not self._Token in {"SET", BINARY_FILE_TYPE_FV, "FILE", "DATA", "CAPSULE", "INF"}: # # If next token is a word which is not a valid FV type, it might be part of [PcdOffset[|PcdSize]] # Or it might be next region's offset described by an expression which starts with a PCD. 
@@ -2147,10 +2147,10 @@ class FdfParser: if not self._GetNextToken(): raise Warning("expected alignment value", self.FileName, self.CurrentLineNumber) - if self._Token.upper() not in ("1", "2", "4", "8", "16", "32", "64", "128", "256", "512", \ + if self._Token.upper() not in {"1", "2", "4", "8", "16", "32", "64", "128", "256", "512", \ "1K", "2K", "4K", "8K", "16K", "32K", "64K", "128K", "256K", "512K", \ "1M", "2M", "4M", "8M", "16M", "32M", "64M", "128M", "256M", "512M", \ - "1G", "2G"): + "1G", "2G"}: raise Warning("Unknown alignment value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) Obj.FvAlignment = self._Token return True @@ -2198,12 +2198,12 @@ class FdfParser: if not self._GetNextToken(): raise Warning("expected FvForceRebase value", self.FileName, self.CurrentLineNumber) - if self._Token.upper() not in ["TRUE", "FALSE", "0", "0X0", "0X00", "1", "0X1", "0X01"]: + if self._Token.upper() not in {"TRUE", "FALSE", "0", "0X0", "0X00", "1", "0X1", "0X01"}: raise Warning("Unknown FvForceRebase value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) - if self._Token.upper() in ["TRUE", "1", "0X1", "0X01"]: + if self._Token.upper() in {"TRUE", "1", "0X1", "0X01"}: Obj.FvForceRebase = True - elif self._Token.upper() in ["FALSE", "0", "0X0", "0X00"]: + elif self._Token.upper() in {"FALSE", "0", "0X0", "0X00"}: Obj.FvForceRebase = False else: Obj.FvForceRebase = None @@ -2224,19 +2224,19 @@ class FdfParser: while self._GetNextWord(): IsWordToken = True name = self._Token - if name not in ("ERASE_POLARITY", "MEMORY_MAPPED", \ + if name not in {"ERASE_POLARITY", "MEMORY_MAPPED", \ "STICKY_WRITE", "LOCK_CAP", "LOCK_STATUS", "WRITE_ENABLED_CAP", \ "WRITE_DISABLED_CAP", "WRITE_STATUS", "READ_ENABLED_CAP", \ "READ_DISABLED_CAP", "READ_STATUS", "READ_LOCK_CAP", \ "READ_LOCK_STATUS", "WRITE_LOCK_CAP", "WRITE_LOCK_STATUS", \ - "WRITE_POLICY_RELIABLE", "WEAK_ALIGNMENT", "FvUsedSizeEnable"): + "WRITE_POLICY_RELIABLE", "WEAK_ALIGNMENT", "FvUsedSizeEnable"}: self._UndoToken() return False if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self._GetNextToken() or self._Token.upper() not in ("TRUE", "FALSE", "1", "0"): + if not self._GetNextToken() or self._Token.upper() not in {"TRUE", "FALSE", "1", "0"}: raise Warning("expected TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) FvObj.FvAttributeDict[name] = self._Token @@ -2272,7 +2272,7 @@ class FdfParser: if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) - if not self._GetNextToken() or self._Token not in ('TRUE', 'FALSE'): + if not self._GetNextToken() or self._Token not in {'TRUE', 'FALSE'}: raise Warning("expected TRUE or FALSE for FvNameString", self.FileName, self.CurrentLineNumber) FvObj.FvNameString = self._Token @@ -2586,7 +2586,7 @@ class FdfParser: # @staticmethod def _FileCouldHaveRelocFlag (FileType): - if FileType in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, 'PEI_DXE_COMBO'): + if FileType in {SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, 'PEI_DXE_COMBO'}: return True else: return False @@ -2601,7 +2601,7 @@ class FdfParser: # @staticmethod def _SectionCouldHaveRelocFlag (SectionType): - if SectionType in (BINARY_FILE_TYPE_TE, BINARY_FILE_TYPE_PE32): + if SectionType in {BINARY_FILE_TYPE_TE, BINARY_FILE_TYPE_PE32}: return True else: return False @@ -2647,7 +2647,7 @@ class FdfParser: raise Warning("expected FD name", self.FileName, self.CurrentLineNumber) 
FfsFileObj.FdName = self._Token - elif self._Token in (TAB_DEFINE, "APRIORI", "SECTION"): + elif self._Token in {TAB_DEFINE, "APRIORI", "SECTION"}: self._UndoToken() self._GetSectionData(FfsFileObj, MacroDict) @@ -2919,8 +2919,8 @@ class FdfParser: self.SetFileBufferPos(OldPos) return False - if self._Token not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ - BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, "SUBTYPE_GUID", BINARY_FILE_TYPE_SMM_DEPEX): + if self._Token not in {"COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ + BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, "SUBTYPE_GUID", BINARY_FILE_TYPE_SMM_DEPEX}: raise Warning("Unknown section type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) if AlignValue == 'Auto'and (not self._Token == BINARY_FILE_TYPE_PE32) and (not self._Token == BINARY_FILE_TYPE_TE): raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) @@ -3067,7 +3067,7 @@ class FdfParser: continue except ValueError: raise Warning("expected Number", self.FileName, self.CurrentLineNumber) - elif self._Token.upper() not in ("TRUE", "FALSE", "1", "0"): + elif self._Token.upper() not in {"TRUE", "FALSE", "1", "0"}: raise Warning("expected TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) AttribDict[AttribKey] = self._Token @@ -3254,21 +3254,21 @@ class FdfParser: def _GetCapsuleTokens(self, Obj): if not self._GetNextToken(): return False - while self._Token in ("CAPSULE_GUID", "CAPSULE_HEADER_SIZE", "CAPSULE_FLAGS", "OEM_CAPSULE_FLAGS", "CAPSULE_HEADER_INIT_VERSION"): + while self._Token in {"CAPSULE_GUID", "CAPSULE_HEADER_SIZE", "CAPSULE_FLAGS", "OEM_CAPSULE_FLAGS", "CAPSULE_HEADER_INIT_VERSION"}: Name = self._Token.strip() if not self._IsToken(TAB_EQUAL_SPLIT): raise Warning("expected '='", self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): raise Warning("expected value", self.FileName, self.CurrentLineNumber) if Name == 'CAPSULE_FLAGS': - if not self._Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): + if not self._Token in {"PersistAcrossReset", "PopulateSystemTable", "InitiateReset"}: raise Warning("expected PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, self.CurrentLineNumber) Value = self._Token.strip() while self._IsToken(TAB_COMMA_SPLIT): Value += TAB_COMMA_SPLIT if not self._GetNextToken(): raise Warning("expected value", self.FileName, self.CurrentLineNumber) - if not self._Token in ("PersistAcrossReset", "PopulateSystemTable", "InitiateReset"): + if not self._Token in {"PersistAcrossReset", "PopulateSystemTable", "InitiateReset"}: raise Warning("expected PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, self.CurrentLineNumber) Value += self._Token.strip() elif Name == 'OEM_CAPSULE_FLAGS': @@ -3478,7 +3478,7 @@ class FdfParser: AfileName = self._Token AfileBaseName = os.path.basename(AfileName) - if os.path.splitext(AfileBaseName)[1] not in [".bin", ".BIN", ".Bin", ".dat", ".DAT", ".Dat", ".data", ".DATA", ".Data"]: + if os.path.splitext(AfileBaseName)[1] not in {".bin", ".BIN", ".Bin", ".dat", ".DAT", ".Dat", ".data", ".DATA", ".Data"}: raise Warning('invalid binary file type, should be one of "bin",BINARY_FILE_TYPE_BIN,"Bin","dat","DAT","Dat","data","DATA","Data"', \ self.FileName, self.CurrentLineNumber) @@ -3568,12 
+3568,22 @@ class FdfParser: def _GetModuleType(self): if not self._GetNextWord(): raise Warning("expected Module type", self.FileName, self.CurrentLineNumber) - if self._Token.upper() not in (SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, SUP_MODULE_DXE_CORE, \ - SUP_MODULE_DXE_DRIVER, SUP_MODULE_DXE_SAL_DRIVER, \ - SUP_MODULE_DXE_SMM_DRIVER, SUP_MODULE_DXE_RUNTIME_DRIVER, \ - SUP_MODULE_UEFI_DRIVER, SUP_MODULE_UEFI_APPLICATION, SUP_MODULE_USER_DEFINED, TAB_DEFAULT, SUP_MODULE_BASE, \ - EDK_COMPONENT_TYPE_SECURITY_CORE, EDK_COMPONENT_TYPE_COMBINED_PEIM_DRIVER, EDK_COMPONENT_TYPE_PIC_PEIM, EDK_COMPONENT_TYPE_RELOCATABLE_PEIM, \ - "PE32_PEIM", EDK_COMPONENT_TYPE_BS_DRIVER, EDK_COMPONENT_TYPE_RT_DRIVER, EDK_COMPONENT_TYPE_SAL_RT_DRIVER, EDK_COMPONENT_TYPE_APPLICATION, "ACPITABLE", SUP_MODULE_SMM_CORE, SUP_MODULE_MM_STANDALONE, SUP_MODULE_MM_CORE_STANDALONE): + if self._Token.upper() not in { + SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, + SUP_MODULE_DXE_CORE, SUP_MODULE_DXE_DRIVER, + SUP_MODULE_DXE_SAL_DRIVER, SUP_MODULE_DXE_SMM_DRIVER, + SUP_MODULE_DXE_RUNTIME_DRIVER, SUP_MODULE_UEFI_DRIVER, + SUP_MODULE_UEFI_APPLICATION, SUP_MODULE_USER_DEFINED, + TAB_DEFAULT, SUP_MODULE_BASE, + EDK_COMPONENT_TYPE_SECURITY_CORE, + EDK_COMPONENT_TYPE_COMBINED_PEIM_DRIVER, + EDK_COMPONENT_TYPE_PIC_PEIM, + EDK_COMPONENT_TYPE_RELOCATABLE_PEIM, "PE32_PEIM", + EDK_COMPONENT_TYPE_BS_DRIVER, EDK_COMPONENT_TYPE_RT_DRIVER, + EDK_COMPONENT_TYPE_SAL_RT_DRIVER, + EDK_COMPONENT_TYPE_APPLICATION, "ACPITABLE", + SUP_MODULE_SMM_CORE, SUP_MODULE_MM_STANDALONE, + SUP_MODULE_MM_CORE_STANDALONE}: raise Warning("Unknown Module type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) return self._Token @@ -3614,8 +3624,10 @@ class FdfParser: raise Warning("expected FFS type", self.FileName, self.CurrentLineNumber) Type = self._Token.strip().upper() - if Type not in ("RAW", "FREEFORM", SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM,\ - "PEI_DXE_COMBO", "DRIVER", SUP_MODULE_DXE_CORE, EDK_COMPONENT_TYPE_APPLICATION, "FV_IMAGE", "SMM", SUP_MODULE_SMM_CORE, SUP_MODULE_MM_STANDALONE, SUP_MODULE_MM_CORE_STANDALONE): + if Type not in {"RAW", "FREEFORM", SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, + "PEI_DXE_COMBO", "DRIVER", SUP_MODULE_DXE_CORE, EDK_COMPONENT_TYPE_APPLICATION, + "FV_IMAGE", "SMM", SUP_MODULE_SMM_CORE, SUP_MODULE_MM_STANDALONE, + SUP_MODULE_MM_CORE_STANDALONE}: raise Warning("Unknown FV type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): @@ -3707,8 +3719,12 @@ class FdfParser: SectionName = self._Token - if SectionName not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ - BINARY_FILE_TYPE_UI, BINARY_FILE_TYPE_PEI_DEPEX, "VERSION", "SUBTYPE_GUID", BINARY_FILE_TYPE_SMM_DEPEX): + if SectionName not in { + "COMPAT16", BINARY_FILE_TYPE_PE32, + BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", + "RAW",BINARY_FILE_TYPE_DXE_DEPEX, BINARY_FILE_TYPE_UI, + BINARY_FILE_TYPE_PEI_DEPEX, "VERSION", "SUBTYPE_GUID", + BINARY_FILE_TYPE_SMM_DEPEX}: raise Warning("Unknown leaf section name '%s'" % SectionName, self.FileName, self.CurrentLineNumber) @@ -3762,8 +3778,12 @@ class FdfParser: return False SectionName = self._Token - if SectionName not in ("COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ - BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, 
BINARY_FILE_TYPE_SMM_DEPEX): + if SectionName not in { + "COMPAT16", BINARY_FILE_TYPE_PE32, + BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", + "RAW",BINARY_FILE_TYPE_DXE_DEPEX, BINARY_FILE_TYPE_UI, + BINARY_FILE_TYPE_PEI_DEPEX, "VERSION", "SUBTYPE_GUID", + BINARY_FILE_TYPE_SMM_DEPEX, BINARY_FILE_TYPE_GUID}: self._UndoToken() return False @@ -3805,8 +3825,13 @@ class FdfParser: if self._IsToken(TAB_VALUE_SPLIT): FvImageSectionObj.FvFileExtension = self._GetFileExtension() elif self._GetNextToken(): - if self._Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ - BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, BINARY_FILE_TYPE_SMM_DEPEX): + if self._Token not in { + "}", "COMPAT16", BINARY_FILE_TYPE_PE32, + BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, + "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX, + BINARY_FILE_TYPE_UI, "VERSION", + BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, + BINARY_FILE_TYPE_SMM_DEPEX}: FvImageSectionObj.FvFileName = self._Token else: self._UndoToken() @@ -3886,8 +3911,13 @@ class FdfParser: if self._IsToken(TAB_VALUE_SPLIT): EfiSectionObj.FileExtension = self._GetFileExtension() elif self._GetNextToken(): - if self._Token not in ("}", "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX,\ - BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, BINARY_FILE_TYPE_SMM_DEPEX): + if self._Token not in { + "}", "COMPAT16", BINARY_FILE_TYPE_PE32, + BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, + "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX, + BINARY_FILE_TYPE_UI, "VERSION", + BINARY_FILE_TYPE_PEI_DEPEX, BINARY_FILE_TYPE_GUID, + BINARY_FILE_TYPE_SMM_DEPEX}: if self._Token.startswith('PCD'): self._UndoToken() @@ -3921,7 +3951,7 @@ class FdfParser: # @staticmethod def _RuleSectionCouldBeOptional(SectionType): - if SectionType in (BINARY_FILE_TYPE_DXE_DEPEX, BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, "RAW", BINARY_FILE_TYPE_SMM_DEPEX): + if SectionType in {BINARY_FILE_TYPE_DXE_DEPEX, BINARY_FILE_TYPE_UI, "VERSION", BINARY_FILE_TYPE_PEI_DEPEX, "RAW", BINARY_FILE_TYPE_SMM_DEPEX}: return True else: return False @@ -3936,7 +3966,7 @@ class FdfParser: # @staticmethod def _RuleSectionCouldHaveBuildNum(SectionType): - if SectionType in ("VERSION"): + if SectionType == "VERSION": return True else: return False @@ -3951,7 +3981,7 @@ class FdfParser: # @staticmethod def _RuleSectionCouldHaveString(SectionType): - if SectionType in (BINARY_FILE_TYPE_UI, "VERSION"): + if SectionType in {BINARY_FILE_TYPE_UI, "VERSION"}: return True else: return False @@ -3966,34 +3996,34 @@ class FdfParser: # def _CheckRuleSectionFileType(self, SectionType, FileType): if SectionType == "COMPAT16": - if FileType not in ("COMPAT16", "SEC_COMPAT16"): + if FileType not in {"COMPAT16", "SEC_COMPAT16"}: raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) elif SectionType == BINARY_FILE_TYPE_PE32: - if FileType not in (BINARY_FILE_TYPE_PE32, "SEC_PE32"): + if FileType not in {BINARY_FILE_TYPE_PE32, "SEC_PE32"}: raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) elif SectionType == BINARY_FILE_TYPE_PIC: - if FileType not in (BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_PIC): + if FileType not in {BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_PIC}: raise Warning("Incorrect section file 
type '%s'" % FileType, self.FileName, self.CurrentLineNumber) elif SectionType == BINARY_FILE_TYPE_TE: - if FileType not in (BINARY_FILE_TYPE_TE, "SEC_TE"): + if FileType not in {BINARY_FILE_TYPE_TE, "SEC_TE"}: raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) elif SectionType == "RAW": - if FileType not in (BINARY_FILE_TYPE_BIN, "SEC_BIN", "RAW", "ASL", "ACPI"): + if FileType not in {BINARY_FILE_TYPE_BIN, "SEC_BIN", "RAW", "ASL", "ACPI"}: raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) elif SectionType == BINARY_FILE_TYPE_DXE_DEPEX or SectionType == BINARY_FILE_TYPE_SMM_DEPEX: - if FileType not in (BINARY_FILE_TYPE_DXE_DEPEX, "SEC_DXE_DEPEX", BINARY_FILE_TYPE_SMM_DEPEX): + if FileType not in {BINARY_FILE_TYPE_DXE_DEPEX, "SEC_DXE_DEPEX", BINARY_FILE_TYPE_SMM_DEPEX}: raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) elif SectionType == BINARY_FILE_TYPE_UI: - if FileType not in (BINARY_FILE_TYPE_UI, "SEC_UI"): + if FileType not in {BINARY_FILE_TYPE_UI, "SEC_UI"}: raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) elif SectionType == "VERSION": - if FileType not in ("VERSION", "SEC_VERSION"): + if FileType not in {"VERSION", "SEC_VERSION"}: raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) elif SectionType == BINARY_FILE_TYPE_PEI_DEPEX: - if FileType not in (BINARY_FILE_TYPE_PEI_DEPEX, "SEC_PEI_DEPEX"): + if FileType not in {BINARY_FILE_TYPE_PEI_DEPEX, "SEC_PEI_DEPEX"}: raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) elif SectionType == BINARY_FILE_TYPE_GUID: - if FileType not in (BINARY_FILE_TYPE_PE32, "SEC_GUID"): + if FileType not in {BINARY_FILE_TYPE_PE32, "SEC_GUID"}: raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) ## _GetRuleEncapsulationSection() method @@ -4073,6 +4103,7 @@ class FdfParser: # @retval False Not able to find a VTF # def _GetVtf(self): + HW_ARCH_SET = {TAB_ARCH_IA32, TAB_ARCH_X64, TAB_ARCH_IPF, TAB_ARCH_ARM, TAB_ARCH_AARCH64} if not self._GetNextToken(): return False @@ -4093,7 +4124,7 @@ class FdfParser: raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) Arch = self._SkippedChars.rstrip(TAB_SPLIT).upper() - if Arch not in {TAB_ARCH_IA32, TAB_ARCH_X64, TAB_ARCH_IPF, TAB_ARCH_ARM, TAB_ARCH_AARCH64}: + if Arch not in HW_ARCH_SET: raise Warning("Unknown Arch '%s'" % Arch, self.FileName, self.CurrentLineNumber) if not self._GetNextWord(): @@ -4107,7 +4138,7 @@ class FdfParser: if self._IsToken(TAB_COMMA_SPLIT): if not self._GetNextWord(): raise Warning("expected Arch list", self.FileName, self.CurrentLineNumber) - if self._Token.upper() not in (TAB_ARCH_IA32, TAB_ARCH_X64, TAB_ARCH_IPF, TAB_ARCH_ARM, TAB_ARCH_AARCH64): + if self._Token.upper() not in HW_ARCH_SET: raise Warning("Unknown Arch '%s'" % self._Token, self.FileName, self.CurrentLineNumber) VtfObj.ArchList = self._Token.upper() @@ -4169,7 +4200,7 @@ class FdfParser: if not self._GetNextWord(): raise Warning("Expected Region Name", self.FileName, self.CurrentLineNumber) - if self._Token not in ("F", "N", "S"): #, "H", "L", "PH", "PL"): not support + if self._Token not in {"F", "N", "S"}: #, "H", "L", "PH", "PL"): not support raise Warning("Unknown location type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) CompStatementObj.FilePos = 
self._Token @@ -4185,7 +4216,7 @@ class FdfParser: if not self._GetNextToken(): raise Warning("expected Component type", self.FileName, self.CurrentLineNumber) - if self._Token not in ("FIT", "PAL_B", "PAL_A", "OEM"): + if self._Token not in {"FIT", "PAL_B", "PAL_A", "OEM"}: if not self._Token.startswith("0x") or len(self._Token) < 3 or len(self._Token) > 4 or \ not self._Token[2] in hexdigits or not self._Token[-1] in hexdigits: raise Warning("Unknown location type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) @@ -4213,7 +4244,7 @@ class FdfParser: if not self._GetNextToken(): raise Warning("expected Component CS", self.FileName, self.CurrentLineNumber) - if self._Token not in ("1", "0"): + if self._Token not in {"1", "0"}: raise Warning("Unknown Component CS '%s'" % self._Token, self.FileName, self.CurrentLineNumber) CompStatementObj.CompCs = self._Token @@ -4538,8 +4569,7 @@ class FdfParser: # @param FvList referenced FV by section # def _GetReferencedFdFvTupleFromSection(self, FfsFile, FdList = [], FvList = []): - SectionStack = [] - SectionStack.extend(FfsFile.SectionList) + SectionStack = list(FfsFile.SectionList) while SectionStack != []: SectionObj = SectionStack.pop() if isinstance(SectionObj, FvImageSection): @@ -4567,9 +4597,8 @@ class FdfParser: MaxLength = len (self.Profile.FvDict) for FvName in self.Profile.FvDict: LogStr = "\nCycle Reference Checking for FV: %s\n" % FvName - RefFvStack = [] - RefFvStack.append(FvName) - FdAnalyzedList = [] + RefFvStack = set(FvName) + FdAnalyzedList = set() Index = 0 while RefFvStack and Index < MaxLength: @@ -4594,17 +4623,17 @@ class FdfParser: for FvNameInFd in FvInFdList: LogStr += "FD %s contains FV %s\n" % (RefFdName, FvNameInFd) if FvNameInFd not in RefFvStack: - RefFvStack.append(FvNameInFd) + RefFvStack.add(FvNameInFd) if FvName in RefFvStack or FvNameFromStack in RefFvStack: EdkLogger.info(LogStr) return True - FdAnalyzedList.append(RefFdName) + FdAnalyzedList.add(RefFdName) for RefFvName in RefFvList: LogStr += "FV %s contains FV %s\n" % (FvNameFromStack, RefFvName) if RefFvName not in RefFvStack: - RefFvStack.append(RefFvName) + RefFvStack.add(RefFvName) if FvName in RefFvStack or FvNameFromStack in RefFvStack: EdkLogger.info(LogStr) @@ -4619,13 +4648,12 @@ class FdfParser: # Capsule image to be checked. 
# LogStr = "\n\n\nCycle Reference Checking for Capsule: %s\n" % CapName - RefCapStack = [] - RefCapStack.append(CapName) - FdAnalyzedList = [] - FvAnalyzedList = [] + RefCapStack = {CapName} + FdAnalyzedList = set() + FvAnalyzedList = set() Index = 0 - while RefCapStack != [] and Index < MaxLength: + while RefCapStack and Index < MaxLength: Index = Index + 1 CapNameFromStack = RefCapStack.pop() if CapNameFromStack.upper() in self.Profile.CapsuleDict: @@ -4645,25 +4673,21 @@ class FdfParser: continue LogStr += "Capsule %s contains FD %s\n" % (CapNameFromStack, RefFdName) - CapInFdList = self._GetCapInFd(RefFdName) - if CapInFdList != []: - for CapNameInFd in CapInFdList: - LogStr += "FD %s contains Capsule %s\n" % (RefFdName, CapNameInFd) - if CapNameInFd not in RefCapStack: - RefCapStack.append(CapNameInFd) + for CapNameInFd in self._GetCapInFd(RefFdName): + LogStr += "FD %s contains Capsule %s\n" % (RefFdName, CapNameInFd) + if CapNameInFd not in RefCapStack: + RefCapStack.append(CapNameInFd) - if CapName in RefCapStack or CapNameFromStack in RefCapStack: - EdkLogger.info(LogStr) - return True + if CapName in RefCapStack or CapNameFromStack in RefCapStack: + EdkLogger.info(LogStr) + return True - FvInFdList = self._GetFvInFd(RefFdName) - if FvInFdList != []: - for FvNameInFd in FvInFdList: - LogStr += "FD %s contains FV %s\n" % (RefFdName, FvNameInFd) - if FvNameInFd not in RefFvList: - RefFvList.append(FvNameInFd) + for FvNameInFd in self._GetFvInFd(RefFdName): + LogStr += "FD %s contains FV %s\n" % (RefFdName, FvNameInFd) + if FvNameInFd not in RefFvList: + RefFvList.append(FvNameInFd) - FdAnalyzedList.append(RefFdName) + FdAnalyzedList.add(RefFdName) # # the number of the parsed FV and FD image # @@ -4678,7 +4702,7 @@ class FdfParser: else: continue self._GetReferencedFdFvTuple(FvObj, RefFdList, RefFvList) - FvAnalyzedList.append(RefFvName) + FvAnalyzedList.add(RefFvName) return False -- 2.16.2.windows.1 ^ permalink raw reply related [flat|nested] 14+ messages in thread
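The point of the set conversion in 2/6 is easiest to see on a small membership test; a rough sketch with illustrative token values rather than the parser's real tables:

VALID_BOOL_TOKENS = {"TRUE", "FALSE", "1", "0"}   # set literal: unordered, average O(1) lookup

def IsBoolToken(Token):
    # reads the same as the old tuple form ("TRUE", "FALSE", "1", "0"),
    # but makes clear that only membership matters, never order
    return Token.upper() in VALID_BOOL_TOKENS

print(IsBoolToken("true"))     # True
print(IsBoolToken("maybe"))    # False

# When a mutable list becomes a set, the update calls change as well:
RefStack = {"FVMAIN"}          # braces give a one-element set; set("FVMAIN") would split characters
RefStack.add("FVRECOVERY")     # sets grow with add(), not append()

For the small constant tables in FdfParser the speed difference is negligible; the readability win is that a set literal states the intent of an order-independent lookup.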
* [Patch v3 3/6] Basetools/GenFds: refactor class FV 2018-10-23 17:29 [Patch v3 0/6] BaseTools/GenFds: cleanup GenFds Jaben Carsey 2018-10-23 17:29 ` [Patch v3 1/6] " Jaben Carsey 2018-10-23 17:29 ` [Patch v3 2/6] BaseTools/GenFds: change objects to sets Jaben Carsey @ 2018-10-23 17:29 ` Jaben Carsey 2018-10-23 17:29 ` [Patch v3 4/6] BaseTools/GenFds: remove MacroDict parameter Jaben Carsey ` (3 subsequent siblings) 6 siblings, 0 replies; 14+ messages in thread From: Jaben Carsey @ 2018-10-23 17:29 UTC (permalink / raw) To: edk2-devel; +Cc: Yonghong Zhu, Liming Gao 1) initialize UiFvName via __init__ parameter. No change to default behavior. 2) initialize 3 empty lists in __init__. Curently not guarenteed initialized. Cc: Yonghong Zhu <yonghong.zhu@intel.com> Cc: Liming Gao <liming.gao@intel.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jaben Carsey <jaben.carsey@intel.com> --- BaseTools/Source/Python/GenFds/FdfParser.py | 6 +----- BaseTools/Source/Python/GenFds/Fv.py | 9 +++++---- 2 files changed, 6 insertions(+), 9 deletions(-) diff --git a/BaseTools/Source/Python/GenFds/FdfParser.py b/BaseTools/Source/Python/GenFds/FdfParser.py index 8b4d5e7fd863..9e806de0d294 100644 --- a/BaseTools/Source/Python/GenFds/FdfParser.py +++ b/BaseTools/Source/Python/GenFds/FdfParser.py @@ -2090,8 +2090,7 @@ class FdfParser: if not self._IsToken(TAB_SECTION_END): raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) - FvObj = FV() - FvObj.UiFvName = self.CurrentFvName + FvObj = FV(Name=self.CurrentFvName) self.Profile.FvDict[self.CurrentFvName] = FvObj Status = self._GetCreateFile(FvObj) @@ -2102,9 +2101,6 @@ class FdfParser: self._GetAddressStatements(FvObj) - FvObj.FvExtEntryTypeValue = [] - FvObj.FvExtEntryType = [] - FvObj.FvExtEntryData = [] while True: self._GetSetStatements(FvObj) diff --git a/BaseTools/Source/Python/GenFds/Fv.py b/BaseTools/Source/Python/GenFds/Fv.py index 30525bd11fcc..d89e7bfbdd6e 100644 --- a/BaseTools/Source/Python/GenFds/Fv.py +++ b/BaseTools/Source/Python/GenFds/Fv.py @@ -37,8 +37,8 @@ class FV (object): # # @param self The object pointer # - def __init__(self): - self.UiFvName = None + def __init__(self, Name=None): + self.UiFvName = Name self.CreateFileName = None self.BlockSizeList = [] self.DefineVarDict = {} @@ -61,7 +61,9 @@ class FV (object): self.FvForceRebase = None self.FvRegionInFD = None self.UsedSizeEnable = False - + self.FvExtEntryTypeValue = [] + self.FvExtEntryType = [] + self.FvExtEntryData = [] ## AddToBuffer() # # Generate Fv and add it to the Buffer @@ -77,7 +79,6 @@ class FV (object): # @retval string Generated FV file path # def AddToBuffer (self, Buffer, BaseAddress=None, BlockSize= None, BlockNum=None, ErasePloarity='1', VtfDict=None, MacroDict = {}, Flag=False): - if BaseAddress is None and self.UiFvName.upper() + 'fv' in GenFdsGlobalVariable.ImageBinDict: return GenFdsGlobalVariable.ImageBinDict[self.UiFvName.upper() + 'fv'] -- 2.16.2.windows.1 ^ permalink raw reply related [flat|nested] 14+ messages in thread
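The construction change in 3/6 amounts to letting __init__ own every attribute instead of relying on callers to bolt the extra fields on afterwards; a simplified sketch with an invented class, not the real FV object:

class Image:
    def __init__(self, Name=None):
        self.UiName = Name            # optional, defaults to None as before
        self.ExtEntryTypeValue = []   # created per instance, so they always exist
        self.ExtEntryType = []
        self.ExtEntryData = []

Obj = Image(Name="FVMAIN_COMPACT")
print(Obj.UiName)                     # FVMAIN_COMPACT
print(Obj.ExtEntryData)               # [] even if nothing ever appends to it

Because the lists are created inside __init__ for each instance, not shared at class level or passed as mutable defaults, readers of those attributes no longer need to guard against them being missing.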
* [Patch v3 4/6] BaseTools/GenFds: remove MacroDict parameter 2018-10-23 17:29 [Patch v3 0/6] BaseTools/GenFds: cleanup GenFds Jaben Carsey ` (2 preceding siblings ...) 2018-10-23 17:29 ` [Patch v3 3/6] Basetools/GenFds: refactor class FV Jaben Carsey @ 2018-10-23 17:29 ` Jaben Carsey 2018-10-23 17:29 ` [Patch v3 5/6] BaseTools/GenFds: refactor FdfParser warnings Jaben Carsey ` (2 subsequent siblings) 6 siblings, 0 replies; 14+ messages in thread From: Jaben Carsey @ 2018-10-23 17:29 UTC (permalink / raw) To: edk2-devel The MacroDict parameter goes around in circles through 4 functions without use. 1. GetSectionData calls into GetLeafSection, otherwise doesn’t use MacroDict 2. GetLeafSection calls into GetFileStatement, otherwise doesn’t use MacroDict 3. GetFileStatement calls into GetFilePart, otherwise doesn’t use MacroDict 4. GetFilePart calls into GetSectionData, otherwise doesn’t use MacroDict Go to 1 and repeat forever. Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jaben Carsey <jaben.carsey@intel.com> --- BaseTools/Source/Python/GenFds/FdfParser.py | 48 +++++++------------- 1 file changed, 16 insertions(+), 32 deletions(-) diff --git a/BaseTools/Source/Python/GenFds/FdfParser.py b/BaseTools/Source/Python/GenFds/FdfParser.py index 9e806de0d294..f8f559aecffd 100644 --- a/BaseTools/Source/Python/GenFds/FdfParser.py +++ b/BaseTools/Source/Python/GenFds/FdfParser.py @@ -2113,12 +2113,12 @@ class FdfParser: if FvObj.FvNameString == 'TRUE' and not FvObj.FvNameGuid: raise Warning("FvNameString found but FvNameGuid was not found", self.FileName, self.CurrentLineNumber) - self._GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) - self._GetAprioriSection(FvObj, FvObj.DefineVarDict.copy()) + self._GetAprioriSection(FvObj) + self._GetAprioriSection(FvObj) while True: isInf = self._GetInfStatement(FvObj) - isFile = self._GetFileStatement(FvObj, MacroDict = FvObj.DefineVarDict.copy()) + isFile = self._GetFileStatement(FvObj) if not isInf and not isFile: break @@ -2353,11 +2353,10 @@ class FdfParser: # # @param self The object pointer # @param FvObj for whom apriori is got - # @param MacroDict dictionary used to replace macro # @retval True Successfully find apriori statement # @retval False Not able to find apriori statement # - def _GetAprioriSection(self, FvObj, MacroDict = {}): + def _GetAprioriSection(self, FvObj): if not self._IsKeyword("APRIORI"): return False @@ -2372,7 +2371,6 @@ class FdfParser: AprSectionObj.AprioriType = AprType self._GetDefineStatements(AprSectionObj) - MacroDict.update(AprSectionObj.DefineVarDict) while True: IsInf = self._GetInfStatement(AprSectionObj) @@ -2526,11 +2524,10 @@ class FdfParser: # # @param self The object pointer # @param Obj for whom FILE statement is got - # @param MacroDict dictionary used to replace macro # @retval True Successfully find FILE statement # @retval False Not able to find FILE statement # - def _GetFileStatement(self, Obj, ForCapsule = False, MacroDict = {}): + def _GetFileStatement(self, Obj, ForCapsule = False): if not self._IsKeyword("FILE"): return False @@ -2561,7 +2558,7 @@ class FdfParser: FfsFileObj.NameGuid = self._Token - self._GetFilePart(FfsFileObj, MacroDict.copy()) + self._GetFilePart(FfsFileObj) if ForCapsule: capsuleFfs = CapsuleFfs() @@ -2608,9 +2605,8 @@ class FdfParser: # # @param self The object pointer # @param FfsFileObj for whom component is got - # @param MacroDict dictionary used to replace macro # - def _GetFilePart(self, FfsFileObj, MacroDict = {}): + def _GetFilePart(self, 
FfsFileObj): self._GetFileOpts(FfsFileObj) if not self._IsToken("{"): @@ -2645,11 +2641,11 @@ class FdfParser: elif self._Token in {TAB_DEFINE, "APRIORI", "SECTION"}: self._UndoToken() - self._GetSectionData(FfsFileObj, MacroDict) + self._GetSectionData(FfsFileObj) elif hasattr(FfsFileObj, 'FvFileType') and FfsFileObj.FvFileType == 'RAW': self._UndoToken() - self._GetRAWData(FfsFileObj, MacroDict) + self._GetRAWData(FfsFileObj) else: FfsFileObj.CurrentLineNum = self.CurrentLineNumber @@ -2666,9 +2662,8 @@ class FdfParser: # # @param self The object pointer # @param FfsFileObj for whom section is got - # @param MacroDict dictionary used to replace macro # - def _GetRAWData(self, FfsFileObj, MacroDict = {}): + def _GetRAWData(self, FfsFileObj): FfsFileObj.FileName = [] FfsFileObj.SubAlignment = [] while True: @@ -2756,26 +2751,18 @@ class FdfParser: return False - ## _GetFilePart() method + ## _GetSectionData() method # # Get section data for FILE statement # # @param self The object pointer # @param FfsFileObj for whom section is got - # @param MacroDict dictionary used to replace macro # - def _GetSectionData(self, FfsFileObj, MacroDict = {}): - Dict = {} - Dict.update(MacroDict) - + def _GetSectionData(self, FfsFileObj): self._GetDefineStatements(FfsFileObj) - Dict.update(FfsFileObj.DefineVarDict) - self._GetAprioriSection(FfsFileObj, Dict.copy()) - self._GetAprioriSection(FfsFileObj, Dict.copy()) - while True: - IsLeafSection = self._GetLeafSection(FfsFileObj, Dict) + IsLeafSection = self._GetLeafSection(FfsFileObj) IsEncapSection = self._GetEncapsulationSec(FfsFileObj) if not IsLeafSection and not IsEncapSection: break @@ -2786,11 +2773,10 @@ class FdfParser: # # @param self The object pointer # @param Obj for whom leaf section is got - # @param MacroDict dictionary used to replace macro # @retval True Successfully find section statement # @retval False Not able to find section statement # - def _GetLeafSection(self, Obj, MacroDict = {}): + def _GetLeafSection(self, Obj): OldPos = self.GetFileBufferPos() if not self._IsKeyword("SECTION"): @@ -2861,17 +2847,15 @@ class FdfParser: FvObj = FV() FvObj.UiFvName = FvName.upper() self._GetDefineStatements(FvObj) - MacroDict.update(FvObj.DefineVarDict) + self._GetBlockStatement(FvObj) self._GetSetStatements(FvObj) self._GetFvAlignment(FvObj) self._GetFvAttributes(FvObj) - self._GetAprioriSection(FvObj, MacroDict.copy()) - self._GetAprioriSection(FvObj, MacroDict.copy()) while True: IsInf = self._GetInfStatement(FvObj) - IsFile = self._GetFileStatement(FvObj, MacroDict.copy()) + IsFile = self._GetFileStatement(FvObj) if not IsInf and not IsFile: break -- 2.16.2.windows.1 ^ permalink raw reply related [flat|nested] 14+ messages in thread
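The shape of the problem in 4/6 is a parameter that each function in the cycle forwards and none ever reads; a stripped-down sketch in which the bodies are invented and only the call shape mirrors the commit message:

def GetFileStatement(Obj, MacroDict={}):      # before: MacroDict rides along on every hop ...
    return GetFilePart(Obj, MacroDict.copy())

def GetFilePart(Obj, MacroDict={}):           # ... but no branch ever looks inside it
    return "parsed " + Obj

def GetFileStatementTrimmed(Obj):             # after: the chain keeps only what it uses
    return GetFilePartTrimmed(Obj)

def GetFilePartTrimmed(Obj):
    return "parsed " + Obj

print(GetFileStatementTrimmed("FfsFileStatement"))   # parsed FfsFileStatement

Since nothing ever consumes the dictionary, removing it cannot change the parse result; it only shortens the signatures and drops the copy() made on each call.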
* [Patch v3 5/6] BaseTools/GenFds: refactor FdfParser warnings 2018-10-23 17:29 [Patch v3 0/6] BaseTools/GenFds: cleanup GenFds Jaben Carsey ` (3 preceding siblings ...) 2018-10-23 17:29 ` [Patch v3 4/6] BaseTools/GenFds: remove MacroDict parameter Jaben Carsey @ 2018-10-23 17:29 ` Jaben Carsey 2018-10-23 17:29 ` [Patch v3 6/6] BaseTools/GenFds: create and use new variable in FdfParser Jaben Carsey 2018-10-25 11:55 ` [Patch v3 0/6] BaseTools/GenFds: cleanup GenFds Zhu, Yonghong 6 siblings, 0 replies; 14+ messages in thread From: Jaben Carsey @ 2018-10-23 17:29 UTC (permalink / raw) To: edk2-devel; +Cc: Yonghong Zhu, Liming Gao make functions for common error messages refactor to use these functions Cc: Yonghong Zhu <yonghong.zhu@intel.com> Cc: Liming Gao <liming.gao@intel.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jaben Carsey <jaben.carsey@intel.com> --- BaseTools/Source/Python/GenFds/FdfParser.py | 603 ++++++++++---------- 1 file changed, 303 insertions(+), 300 deletions(-) diff --git a/BaseTools/Source/Python/GenFds/FdfParser.py b/BaseTools/Source/Python/GenFds/FdfParser.py index f8f559aecffd..d954c0b40b3b 100644 --- a/BaseTools/Source/Python/GenFds/FdfParser.py +++ b/BaseTools/Source/Python/GenFds/FdfParser.py @@ -129,6 +129,24 @@ class Warning (Exception): def __str__(self): return self.Message + # helper functions to facilitate consistency in warnings + # each function is for a different common warning + @staticmethod + def Expected(Str, File, Line): + return Warning("expected {}".format(Str), File, Line) + @staticmethod + def ExpectedEquals(File, Line): + return Warning.Expected("'='", File, Line) + @staticmethod + def ExpectedCurlyOpen(File, Line): + return Warning.Expected("'{'", File, Line) + @staticmethod + def ExpectedCurlyClose(File, Line): + return Warning.Expected("'}'", File, Line) + @staticmethod + def ExpectedBracketClose(File, Line): + return Warning.Expected("']'", File, Line) + ## The Include file content class that used to record file data when parsing include file # # May raise Exception when opening file. @@ -184,8 +202,6 @@ class IncludeFileProfile: return (self.FileName, Line - InsertedLines + 1) - - ## The FDF content class that used to record file data when parsing FDF # # May raise Exception when opening file. 
@@ -563,10 +579,10 @@ class FdfParser: if self._Token == TAB_DEFINE: if not self._GetNextToken(): - raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Macro name", self.FileName, self.CurrentLineNumber) Macro = self._Token if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) Value = self._GetExpression() MacroDict[Macro] = Value @@ -575,7 +591,7 @@ class FdfParser: IncludeLine = self.CurrentLineNumber IncludeOffset = self.CurrentOffsetWithinLine - len(TAB_INCLUDE) if not self._GetNextToken(): - raise Warning("expected include file name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("include file name", self.FileName, self.CurrentLineNumber) IncFileName = self._Token PreIndex = 0 StartPos = IncFileName.find('$(', PreIndex) @@ -731,10 +747,10 @@ class FdfParser: DefineLine = self.CurrentLineNumber - 1 DefineOffset = self.CurrentOffsetWithinLine - len(TAB_DEFINE) if not self._GetNextToken(): - raise Warning("expected Macro name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Macro name", self.FileName, self.CurrentLineNumber) Macro = self._Token if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) Value = self._GetExpression() self._SetMacroValue(Macro, Value) @@ -747,7 +763,7 @@ class FdfParser: PcdPair = self._GetNextPcdSettings() PcdName = "%s.%s" % (PcdPair[1], PcdPair[0]) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) Value = self._GetExpression() Value = self._EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) @@ -1179,20 +1195,20 @@ class FdfParser: def _GetNextPcdSettings(self): if not self._GetNextWord(): - raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("<PcdTokenSpaceCName>", self.FileName, self.CurrentLineNumber) pcdTokenSpaceCName = self._Token if not self._IsToken(TAB_SPLIT): - raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) + raise Warning.Expected(".", self.FileName, self.CurrentLineNumber) if not self._GetNextWord(): - raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("<PcdCName>", self.FileName, self.CurrentLineNumber) pcdCName = self._Token Fields = [] while self._IsToken(TAB_SPLIT): if not self._GetNextPcdWord(): - raise Warning("expected format of <PcdTokenSpaceCName>.<PcdCName>", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Pcd Fields", self.FileName, self.CurrentLineNumber) Fields.append(self._Token) return (pcdCName, pcdTokenSpaceCName,TAB_SPLIT.join(Fields)) @@ -1207,33 +1223,25 @@ class FdfParser: # @retval False Not able to find a string data, file buffer pointer not changed # def _GetStringData(self): + QuoteToUse = None if self._Token.startswith(T_CHAR_DOUBLE_QUOTE) or self._Token.startswith("L\""): - self._UndoToken() - self._SkipToToken(T_CHAR_DOUBLE_QUOTE) - currentLineNumber = self.CurrentLineNumber - - if not self._SkipToToken(T_CHAR_DOUBLE_QUOTE): - raise Warning("Missing Quote \" for String", self.FileName, self.CurrentLineNumber) - if 
currentLineNumber != self.CurrentLineNumber: - raise Warning("Missing Quote \" for String", self.FileName, self.CurrentLineNumber) - self._Token = self._SkippedChars.rstrip(T_CHAR_DOUBLE_QUOTE) - return True - + QuoteToUse = T_CHAR_DOUBLE_QUOTE elif self._Token.startswith(T_CHAR_SINGLE_QUOTE) or self._Token.startswith("L\'"): - self._UndoToken() - self._SkipToToken(T_CHAR_SINGLE_QUOTE) - currentLineNumber = self.CurrentLineNumber - - if not self._SkipToToken(T_CHAR_SINGLE_QUOTE): - raise Warning("Missing Quote \' for String", self.FileName, self.CurrentLineNumber) - if currentLineNumber != self.CurrentLineNumber: - raise Warning("Missing Quote \' for String", self.FileName, self.CurrentLineNumber) - self._Token = self._SkippedChars.rstrip(T_CHAR_SINGLE_QUOTE) - return True - + QuoteToUse = T_CHAR_SINGLE_QUOTE else: return False + self._UndoToken() + self._SkipToToken(QuoteToUse) + currentLineNumber = self.CurrentLineNumber + + if not self._SkipToToken(QuoteToUse): + raise Warning(QuoteToUse, self.FileName, self.CurrentLineNumber) + if currentLineNumber != self.CurrentLineNumber: + raise Warning(QuoteToUse, self.FileName, self.CurrentLineNumber) + self._Token = self._SkippedChars.rstrip(QuoteToUse) + return True + ## _SkipToToken() method # # Search forward in file buffer for the string @@ -1376,10 +1384,10 @@ class FdfParser: FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) - raise Warning("expected [DEFINES", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("[DEFINES", self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_SECTION_END): - raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedBracketClose(self.FileName, self.CurrentLineNumber) while self._GetNextWord(): # handle the SET statement @@ -1391,9 +1399,9 @@ class FdfParser: Macro = self._Token if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken() or self._Token.startswith(TAB_SECTION_START): - raise Warning("expected MACRO value", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("MACRO value", self.FileName, self.CurrentLineNumber) Value = self._Token return False @@ -1434,7 +1442,7 @@ class FdfParser: FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) - raise Warning("expected [FD.]", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("[FD.]", self.FileName, self.CurrentLineNumber) FdName = self._GetUiName() if FdName == "": @@ -1444,21 +1452,21 @@ class FdfParser: FdName = GlobalData.gActivePlatform.PlatformName self.Profile.FdNameNotSet = True else: - raise Warning("expected FdName in [FD.] section", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("FdName in [FD.] 
section", self.FileName, self.CurrentLineNumber) self.CurrentFdName = FdName.upper() if self.CurrentFdName in self.Profile.FdDict: raise Warning("Unexpected the same FD name", self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_SECTION_END): - raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedBracketClose(self.FileName, self.CurrentLineNumber) FdObj = FD() FdObj.FdUiName = self.CurrentFdName self.Profile.FdDict[self.CurrentFdName] = FdObj if len (self.Profile.FdDict) > 1 and self.Profile.FdNameNotSet: - raise Warning("expected all FDs have their name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("all FDs have their name", self.FileName, self.CurrentLineNumber) Status = self._GetCreateFile(FdObj) if not Status: @@ -1479,7 +1487,7 @@ class FdfParser: self._GetSetStatements(FdObj) if not self._GetRegionLayout(FdObj): - raise Warning("expected region layout", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("region layout", self.FileName, self.CurrentLineNumber) while self._GetRegionLayout(FdObj): pass @@ -1510,10 +1518,10 @@ class FdfParser: def _GetCreateFile(self, Obj): if self._IsKeyword("CREATE_FILE"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected file name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("file name", self.FileName, self.CurrentLineNumber) FileName = self._Token Obj.CreateFileName = FileName @@ -1533,10 +1541,10 @@ class FdfParser: def _GetTokenStatements(self, Obj): if self._IsKeyword("BaseAddress"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextHexNumber(): - raise Warning("expected Hex base address", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Hex base address", self.FileName, self.CurrentLineNumber) Obj.BaseAddress = self._Token @@ -1551,10 +1559,10 @@ class FdfParser: if self._IsKeyword("Size"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextHexNumber(): - raise Warning("expected Hex size", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Hex size", self.FileName, self.CurrentLineNumber) Size = self._Token if self._IsToken(TAB_VALUE_SPLIT): @@ -1569,13 +1577,13 @@ class FdfParser: if self._IsKeyword("ErasePolarity"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected Erase Polarity", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Erase Polarity", self.FileName, self.CurrentLineNumber) - if self._Token != "1" and self._Token != "0": - raise Warning("expected 1 or 0 Erase Polarity", self.FileName, self.CurrentLineNumber) + if not self._Token in {"1", "0"}: + raise Warning.Expected("1 or 0 Erase Polarity", self.FileName, self.CurrentLineNumber) Obj.ErasePolarity = self._Token return True @@ -1594,20 +1602,20 @@ class FdfParser: def _GetAddressStatements(self, Obj): if self._IsKeyword("BsBaseAddress"): if not self._IsToken(TAB_EQUAL_SPLIT): - 
raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): - raise Warning("expected address", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("address", self.FileName, self.CurrentLineNumber) BsAddress = long(self._Token, 0) Obj.BsBaseAddress = BsAddress if self._IsKeyword("RtBaseAddress"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): - raise Warning("expected address", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("address", self.FileName, self.CurrentLineNumber) RtAddress = long(self._Token, 0) Obj.RtBaseAddress = RtAddress @@ -1626,7 +1634,7 @@ class FdfParser: Item = Obj.BlockSizeList[-1] if Item[0] is None or Item[1] is None: - raise Warning("expected block statement", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("block statement", self.FileName, self.CurrentLineNumber) return IsBlock ## _GetBlockStatement() method @@ -1643,10 +1651,10 @@ class FdfParser: return False if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextHexNumber() and not self._GetNextDecimalNumber(): - raise Warning("expected Hex or Integer block size", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Hex or Integer block size", self.FileName, self.CurrentLineNumber) BlockSize = self._Token BlockSizePcd = None @@ -1662,10 +1670,10 @@ class FdfParser: BlockNumber = None if self._IsKeyword("NumBlocks"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextDecimalNumber() and not self._GetNextHexNumber(): - raise Warning("expected block numbers", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("block numbers", self.FileName, self.CurrentLineNumber) BlockNumber = long(self._Token, 0) @@ -1699,10 +1707,10 @@ class FdfParser: self._GetNextToken() Macro = self._Token if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected value", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("value", self.FileName, self.CurrentLineNumber) Value = self._Token Macro = '$(' + Macro + ')' @@ -1738,7 +1746,7 @@ class FdfParser: PcdPair = self._GetNextPcdSettings() if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) Value = self._GetExpression() Value = self._EvaluateConditional(Value, self.CurrentLineNumber, 'eval', True) @@ -1803,11 +1811,11 @@ class FdfParser: theFd.RegionList.append(RegionObj) if not self._IsToken(TAB_VALUE_SPLIT): - raise Warning("expected '|'", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("'|'", self.FileName, self.CurrentLineNumber) Size = self._CalcRegionExpr() if Size is None: - raise Warning("expected Region Size", self.FileName, self.CurrentLineNumber) + raise 
Warning.Expected("Region Size", self.FileName, self.CurrentLineNumber) RegionObj.Size = Size if not self._GetNextWord(): @@ -1890,13 +1898,13 @@ class FdfParser: # def _GetRegionFvType(self, RegionObj): if not self._IsKeyword(BINARY_FILE_TYPE_FV): - raise Warning("expected Keyword BINARY_FILE_TYPE_FV", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("'FV'", self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("FV name", self.FileName, self.CurrentLineNumber) RegionObj.RegionType = BINARY_FILE_TYPE_FV RegionObj.RegionDataList.append((self._Token).upper()) @@ -1904,10 +1912,10 @@ class FdfParser: while self._IsKeyword(BINARY_FILE_TYPE_FV): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("FV name", self.FileName, self.CurrentLineNumber) RegionObj.RegionDataList.append((self._Token).upper()) @@ -1920,13 +1928,13 @@ class FdfParser: # def _GetRegionCapType(self, RegionObj): if not self._IsKeyword("CAPSULE"): - raise Warning("expected Keyword 'CAPSULE'", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("'CAPSULE'", self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected CAPSULE name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("CAPSULE name", self.FileName, self.CurrentLineNumber) RegionObj.RegionType = "CAPSULE" RegionObj.RegionDataList.append(self._Token) @@ -1934,10 +1942,10 @@ class FdfParser: while self._IsKeyword("CAPSULE"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected CAPSULE name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("CAPSULE name", self.FileName, self.CurrentLineNumber) RegionObj.RegionDataList.append(self._Token) @@ -1950,13 +1958,13 @@ class FdfParser: # def _GetRegionFileType(self, RegionObj): if not self._IsKeyword("FILE"): - raise Warning("expected Keyword 'FILE'", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("'FILE'", self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected File name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("File name", self.FileName, self.CurrentLineNumber) RegionObj.RegionType = "FILE" RegionObj.RegionDataList.append(self._Token) @@ -1964,10 +1972,10 @@ class FdfParser: while self._IsKeyword("FILE"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if 
not self._GetNextToken(): - raise Warning("expected FILE name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("FILE name", self.FileName, self.CurrentLineNumber) RegionObj.RegionDataList.append(self._Token) @@ -1980,16 +1988,16 @@ class FdfParser: # def _GetRegionDataType(self, RegionObj): if not self._IsKeyword("DATA"): - raise Warning("expected Region Data type", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Region Data type", self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._IsToken("{"): - raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyOpen(self.FileName, self.CurrentLineNumber) if not self._GetNextHexNumber(): - raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Hex byte", self.FileName, self.CurrentLineNumber) if len(self._Token) > 18: raise Warning("Hex string can't be converted to a valid UINT64 value", self.FileName, self.CurrentLineNumber) @@ -2014,7 +2022,7 @@ class FdfParser: DataString += TAB_COMMA_SPLIT if not self._IsToken("}"): - raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) DataString = DataString.rstrip(TAB_COMMA_SPLIT) RegionObj.RegionType = "DATA" @@ -2023,13 +2031,13 @@ class FdfParser: while self._IsKeyword("DATA"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._IsToken("{"): - raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyOpen(self.FileName, self.CurrentLineNumber) if not self._GetNextHexNumber(): - raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Hex byte", self.FileName, self.CurrentLineNumber) if len(self._Token) > 18: raise Warning("Hex string can't be converted to a valid UINT64 value", self.FileName, self.CurrentLineNumber) @@ -2054,7 +2062,7 @@ class FdfParser: DataString += TAB_COMMA_SPLIT if not self._IsToken("}"): - raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) DataString = DataString.rstrip(TAB_COMMA_SPLIT) RegionObj.RegionDataList.append(DataString) @@ -2088,7 +2096,7 @@ class FdfParser: self.CurrentFvName = FvName.upper() if not self._IsToken(TAB_SECTION_END): - raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedBracketClose(self.FileName, self.CurrentLineNumber) FvObj = FV(Name=self.CurrentFvName) self.Profile.FvDict[self.CurrentFvName] = FvObj @@ -2138,10 +2146,10 @@ class FdfParser: return False if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected alignment value", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("alignment value", self.FileName, self.CurrentLineNumber) if self._Token.upper() not in {"1", "2", "4", "8", "16", "32", "64", "128", "256", "512", \ "1K", "2K", "4K", "8K", "16K", "32K", "64K", "128K", "256K", "512K", \ @@ -2165,10 +2173,10 @@ class FdfParser: 
return False if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected FV base address value", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("FV base address value", self.FileName, self.CurrentLineNumber) if not BaseAddrValuePattern.match(self._Token.upper()): raise Warning("Unknown FV base address value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) @@ -2189,10 +2197,10 @@ class FdfParser: return False if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected FvForceRebase value", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("FvForceRebase value", self.FileName, self.CurrentLineNumber) if self._Token.upper() not in {"TRUE", "FALSE", "0", "0X0", "0X00", "1", "0X1", "0X01"}: raise Warning("Unknown FvForceRebase value '%s'" % self._Token, self.FileName, self.CurrentLineNumber) @@ -2230,10 +2238,10 @@ class FdfParser: return False if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken() or self._Token.upper() not in {"TRUE", "FALSE", "1", "0"}: - raise Warning("expected TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) FvObj.FvAttributeDict[name] = self._Token @@ -2252,10 +2260,10 @@ class FdfParser: return False if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextGuid(): - raise Warning("expected FV GUID value", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("GUID value", self.FileName, self.CurrentLineNumber) FvObj.FvNameGuid = self._Token @@ -2266,10 +2274,10 @@ class FdfParser: return False if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) - if not self._GetNextToken() or self._Token not in {'TRUE', 'FALSE'}: - raise Warning("expected TRUE or FALSE for FvNameString", self.FileName, self.CurrentLineNumber) + if not self._GetNextToken() or self._Token.upper() not in {'TRUE', 'FALSE'}: + raise Warning.Expected("TRUE or FALSE for FvNameString", self.FileName, self.CurrentLineNumber) FvObj.FvNameString = self._Token @@ -2280,34 +2288,33 @@ class FdfParser: return False if not self._IsKeyword ("TYPE"): - raise Warning("expected 'TYPE'", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("'TYPE'", self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextHexNumber() and not self._GetNextDecimalNumber(): - raise Warning("expected Hex FV extension entry type value At Line ", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Hex FV extension entry type value At Line ", self.FileName, self.CurrentLineNumber) FvObj.FvExtEntryTypeValue.append(self._Token) if not 
self._IsToken("{"): - raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyOpen(self.FileName, self.CurrentLineNumber) - if not self._IsKeyword ("FILE") and not self._IsKeyword ("DATA"): - raise Warning("expected 'FILE' or 'DATA'", self.FileName, self.CurrentLineNumber) + if not self._IsKeyword("FILE") and not self._IsKeyword("DATA"): + raise Warning.Expected("'FILE' or 'DATA'", self.FileName, self.CurrentLineNumber) FvObj.FvExtEntryType.append(self._Token) if self._Token == 'DATA': - if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._IsToken("{"): - raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyOpen(self.FileName, self.CurrentLineNumber) if not self._GetNextHexNumber(): - raise Warning("expected Hex byte", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Hex byte", self.FileName, self.CurrentLineNumber) if len(self._Token) > 4: raise Warning("Hex byte(must be 2 digits) too long", self.FileName, self.CurrentLineNumber) @@ -2324,26 +2331,25 @@ class FdfParser: DataString += TAB_COMMA_SPLIT if not self._IsToken("}"): - raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) if not self._IsToken("}"): - raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) DataString = DataString.rstrip(TAB_COMMA_SPLIT) FvObj.FvExtEntryData.append(DataString) if self._Token == 'FILE': - if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected FV Extension Entry file path At Line ", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("FV Extension Entry file path At Line ", self.FileName, self.CurrentLineNumber) FvObj.FvExtEntryData.append(self._Token) if not self._IsToken("}"): - raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) return True @@ -2361,11 +2367,11 @@ class FdfParser: return False if not self._IsKeyword("PEI") and not self._IsKeyword("DXE"): - raise Warning("expected Apriori file type", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Apriori file type", self.FileName, self.CurrentLineNumber) AprType = self._Token if not self._IsToken("{"): - raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyOpen(self.FileName, self.CurrentLineNumber) AprSectionObj = AprioriSection() AprSectionObj.AprioriType = AprType @@ -2379,7 +2385,7 @@ class FdfParser: break if not self._IsToken("}"): - raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) FvObj.AprioriSectionList.append(AprSectionObj) return True @@ -2392,10 +2398,10 @@ class FdfParser: self._GetInfOptions(ffsInf) if not self._GetNextToken(): - raise Warning("expected INF file path", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("INF file path", self.FileName, self.CurrentLineNumber) ffsInf.InfFileName = self._Token if not ffsInf.InfFileName.endswith('.inf'): - raise Warning("expected .inf 
file path", self.FileName, self.CurrentLineNumber) + raise Warning.Expected(".inf file path", self.FileName, self.CurrentLineNumber) ffsInf.CurrentLineNum = self.CurrentLineNumber ffsInf.CurrentLineContent = self._CurrentLine() @@ -2462,41 +2468,41 @@ class FdfParser: def _GetInfOptions(self, FfsInfObj): if self._IsKeyword("FILE_GUID"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextGuid(): - raise Warning("expected GUID value", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("GUID value", self.FileName, self.CurrentLineNumber) FfsInfObj.OverrideGuid = self._Token if self._IsKeyword("RuleOverride"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected Rule name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Rule name", self.FileName, self.CurrentLineNumber) FfsInfObj.Rule = self._Token if self._IsKeyword("VERSION"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected Version", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Version", self.FileName, self.CurrentLineNumber) if self._GetStringData(): FfsInfObj.Version = self._Token if self._IsKeyword(BINARY_FILE_TYPE_UI): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected UI name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("UI name", self.FileName, self.CurrentLineNumber) if self._GetStringData(): FfsInfObj.Ui = self._Token if self._IsKeyword("USE"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected ARCH name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("ARCH name", self.FileName, self.CurrentLineNumber) FfsInfObj.UseArch = self._Token @@ -2512,7 +2518,7 @@ class FdfParser: while self._GetNextToken(): if not p.match(self._Token): - raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) FfsInfObj.KeyStringList.append(self._Token) if not self._IsToken(TAB_COMMA_SPLIT): @@ -2532,7 +2538,7 @@ class FdfParser: return False if not self._GetNextWord(): - raise Warning("expected FFS type", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("FFS type", self.FileName, self.CurrentLineNumber) if ForCapsule and self._Token == 'DATA': self._UndoToken() @@ -2543,17 +2549,17 @@ class FdfParser: FfsFileObj.FvFileType = self._Token if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextGuid(): if not self._GetNextWord(): - raise Warning("expected File GUID", self.FileName, self.CurrentLineNumber) + raise 
Warning.Expected("File GUID", self.FileName, self.CurrentLineNumber) if self._Token == 'PCD': if not self._IsToken("("): - raise Warning("expected '('", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("'('", self.FileName, self.CurrentLineNumber) PcdPair = self._GetNextPcdSettings() if not self._IsToken(")"): - raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("')'", self.FileName, self.CurrentLineNumber) self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' FfsFileObj.NameGuid = self._Token @@ -2620,23 +2626,23 @@ class FdfParser: raise Warning("File type %s could not have reloc strip flag%d" % (FfsFileObj.FvFileType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) if not self._IsToken("{"): - raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyOpen(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected File name or section data", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("File name or section data", self.FileName, self.CurrentLineNumber) if self._Token == BINARY_FILE_TYPE_FV: if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected FV name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("FV name", self.FileName, self.CurrentLineNumber) FfsFileObj.FvName = self._Token elif self._Token == "FD": if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected FD name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("FD name", self.FileName, self.CurrentLineNumber) FfsFileObj.FdName = self._Token elif self._Token in {TAB_DEFINE, "APRIORI", "SECTION"}: @@ -2654,7 +2660,7 @@ class FdfParser: self._VerifyFile(FfsFileObj.FileName) if not self._IsToken("}"): - raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) ## _GetRAWData() method # @@ -2675,12 +2681,12 @@ class FdfParser: if not self._Token == "Auto": AlignValue = self._Token if not self._GetNextToken(): - raise Warning("expected Filename value", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Filename value", self.FileName, self.CurrentLineNumber) FileName = self._Token.replace('$(SPACE)', ' ') if FileName == '}': self._UndoToken() - raise Warning("expected Filename value", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Filename value", self.FileName, self.CurrentLineNumber) self._VerifyFile(FileName) File = PathClass(NormPath(FileName), GenFdsGlobalVariable.WorkSpaceDir) @@ -2710,7 +2716,7 @@ class FdfParser: if self._IsToken(TAB_COMMA_SPLIT): while self._GetNextToken(): if not TokenFindPattern.match(self._Token): - raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) FfsFileObj.KeyStringList.append(self._Token) if not self._IsToken(TAB_COMMA_SPLIT): @@ -2743,10 +2749,10 @@ class FdfParser: def _GetAlignment(self): if self._IsKeyword("Align", True): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", 
self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected alignment value", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("alignment value", self.FileName, self.CurrentLineNumber) return True return False @@ -2781,7 +2787,7 @@ class FdfParser: if not self._IsKeyword("SECTION"): if len(Obj.SectionList) == 0: - raise Warning("expected SECTION", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("SECTION", self.FileName, self.CurrentLineNumber) else: return False @@ -2794,10 +2800,10 @@ class FdfParser: BuildNum = None if self._IsKeyword("BUILD_NUM"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected Build number value", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Build number value", self.FileName, self.CurrentLineNumber) BuildNum = self._Token @@ -2805,9 +2811,9 @@ class FdfParser: if AlignValue == 'Auto': raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected version", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("version", self.FileName, self.CurrentLineNumber) VerSectionObj = VerSection() VerSectionObj.Alignment = AlignValue VerSectionObj.BuildNum = BuildNum @@ -2821,9 +2827,9 @@ class FdfParser: if AlignValue == 'Auto': raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected UI", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("UI", self.FileName, self.CurrentLineNumber) UiSectionObj = UiSection() UiSectionObj.Alignment = AlignValue if self._GetStringData(): @@ -2836,9 +2842,9 @@ class FdfParser: if AlignValue == 'Auto': raise Warning("Auto alignment can only be used in PE32 or TE section ", self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected FV name or FV file path", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("FV name or FV file path", self.FileName, self.CurrentLineNumber) FvName = self._Token FvObj = None @@ -2860,7 +2866,7 @@ class FdfParser: break if not self._IsToken("}"): - raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) FvImageSectionObj = FvImageSection() FvImageSectionObj.Alignment = AlignValue @@ -2881,18 +2887,18 @@ class FdfParser: DepexSectionObj.DepexType = self._Token if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._IsToken("{"): - raise Warning("expected 
'{'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyOpen(self.FileName, self.CurrentLineNumber) if not self._SkipToToken("}"): - raise Warning("expected Depex expression ending '}'", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Depex expression ending '}'", self.FileName, self.CurrentLineNumber) DepexSectionObj.Expression = self._SkippedChars.rstrip('}') Obj.SectionList.append(DepexSectionObj) else: if not self._GetNextWord(): - raise Warning("expected section type", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("section type", self.FileName, self.CurrentLineNumber) # Encapsulation section appear, UndoToken and return if self._Token == "COMPRESS" or self._Token == "GUIDED": @@ -2921,7 +2927,7 @@ class FdfParser: if self._IsToken(TAB_EQUAL_SPLIT): if not self._GetNextToken(): - raise Warning("expected section file path", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("section file path", self.FileName, self.CurrentLineNumber) DataSectionObj.SectFileName = self._Token self._VerifyFile(DataSectionObj.SectFileName) else: @@ -2965,7 +2971,7 @@ class FdfParser: type = self._Token if not self._IsToken("{"): - raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyOpen(self.FileName, self.CurrentLineNumber) CompressSectionObj = CompressSection() CompressSectionObj.Alignment = AlignValue @@ -2979,12 +2985,8 @@ class FdfParser: if not self._IsToken("}"): - raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) Obj.SectionList.append(CompressSectionObj) - -# else: -# raise Warning("Compress type not known") - return True elif self._IsKeyword("GUIDED"): @@ -2994,7 +2996,7 @@ class FdfParser: AttribDict = self._GetGuidAttrib() if not self._IsToken("{"): - raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyOpen(self.FileName, self.CurrentLineNumber) GuidSectionObj = GuidSection() GuidSectionObj.Alignment = AlignValue GuidSectionObj.NameGuid = GuidValue @@ -3010,7 +3012,7 @@ class FdfParser: break if not self._IsToken("}"): - raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) Obj.SectionList.append(GuidSectionObj) return True @@ -3034,10 +3036,10 @@ class FdfParser: AttribKey = self._Token if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected TRUE(1)/FALSE(0)/Number", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("TRUE(1)/FALSE(0)/Number", self.FileName, self.CurrentLineNumber) elif AttribKey == "EXTRA_HEADER_SIZE": Base = 10 if self._Token[0:2].upper() == "0X": @@ -3046,9 +3048,9 @@ class FdfParser: AttribDict[AttribKey] = int(self._Token, Base) continue except ValueError: - raise Warning("expected Number", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Number", self.FileName, self.CurrentLineNumber) elif self._Token.upper() not in {"TRUE", "FALSE", "1", "0"}: - raise Warning("expected TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("TRUE/FALSE (1/0)", self.FileName, self.CurrentLineNumber) AttribDict[AttribKey] = self._Token return AttribDict @@ -3066,7 +3068,7 @@ class FdfParser: OldPos = self.GetFileBufferPos() 
if not self._IsKeyword("SECTION"): if len(FfsFileObj.SectionList) == 0: - raise Warning("expected SECTION", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("SECTION", self.FileName, self.CurrentLineNumber) else: return False @@ -3101,7 +3103,7 @@ class FdfParser: FmpData.UiName = FmpUiName if not self._IsToken(TAB_SECTION_END): - raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedBracketClose(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): raise Warning("The FMP payload section is empty!", self.FileName, self.CurrentLineNumber) @@ -3110,20 +3112,20 @@ class FdfParser: Name = self._Token FmpKeyList.remove(Name) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if Name == 'IMAGE_TYPE_ID': if not self._GetNextGuid(): - raise Warning("expected GUID value for IMAGE_TYPE_ID.", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("GUID value for IMAGE_TYPE_ID.", self.FileName, self.CurrentLineNumber) FmpData.ImageTypeId = self._Token elif Name == 'CERTIFICATE_GUID': if not self._GetNextGuid(): - raise Warning("expected GUID value for CERTIFICATE_GUID.", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("GUID value for CERTIFICATE_GUID.", self.FileName, self.CurrentLineNumber) FmpData.Certificate_Guid = self._Token if UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_RSA2048_SHA256_GUID and UUID(FmpData.Certificate_Guid) != EFI_CERT_TYPE_PKCS7_GUID: raise Warning("Only support EFI_CERT_TYPE_RSA2048_SHA256_GUID or EFI_CERT_TYPE_PKCS7_GUID for CERTIFICATE_GUID.", self.FileName, self.CurrentLineNumber) else: if not self._GetNextToken(): - raise Warning("expected value of %s" % Name, self.FileName, self.CurrentLineNumber) + raise Warning.Expected("value of %s" % Name, self.FileName, self.CurrentLineNumber) Value = self._Token if Name == 'IMAGE_HEADER_INIT_VERSION': if FdfParser._Verify(Name, Value, 'UINT8'): @@ -3151,14 +3153,14 @@ class FdfParser: # Only the IMAGE_TYPE_ID is required item if FmpKeyList and 'IMAGE_TYPE_ID' in FmpKeyList: - raise Warning("Missing keywords IMAGE_TYPE_ID in FMP payload section.", self.FileName, self.CurrentLineNumber) + raise Warning("'IMAGE_TYPE_ID' in FMP payload section.", self.FileName, self.CurrentLineNumber) # get the Image file and Vendor code file self._GetFMPCapsuleData(FmpData) if not FmpData.ImageFile: raise Warning("Missing image file in FMP payload section.", self.FileName, self.CurrentLineNumber) # check whether more than one Vendor code file if len(FmpData.VendorCodeFile) > 1: - raise Warning("At most one Image file and one Vendor code file are allowed in FMP payload section.", self.FileName, self.CurrentLineNumber) + raise Warning("Vendor code file max of 1 per FMP payload section.", self.FileName, self.CurrentLineNumber) self.Profile.FmpPayloadDict[FmpUiName] = FmpData return True @@ -3185,25 +3187,25 @@ class FdfParser: FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) - raise Warning("expected [Capsule.]", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("[Capsule.]", self.FileName, self.CurrentLineNumber) CapsuleObj = Capsule() CapsuleName = 
self._GetUiName() if not CapsuleName: - raise Warning("expected capsule name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("capsule name", self.FileName, self.CurrentLineNumber) CapsuleObj.UiCapsuleName = CapsuleName.upper() if not self._IsToken(TAB_SECTION_END): - raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedBracketClose(self.FileName, self.CurrentLineNumber) if self._IsKeyword("CREATE_FILE"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected file name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("file name", self.FileName, self.CurrentLineNumber) CapsuleObj.CreateFile = self._Token @@ -3237,30 +3239,30 @@ class FdfParser: while self._Token in {"CAPSULE_GUID", "CAPSULE_HEADER_SIZE", "CAPSULE_FLAGS", "OEM_CAPSULE_FLAGS", "CAPSULE_HEADER_INIT_VERSION"}: Name = self._Token.strip() if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected value", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("value", self.FileName, self.CurrentLineNumber) if Name == 'CAPSULE_FLAGS': if not self._Token in {"PersistAcrossReset", "PopulateSystemTable", "InitiateReset"}: - raise Warning("expected PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, self.CurrentLineNumber) Value = self._Token.strip() while self._IsToken(TAB_COMMA_SPLIT): Value += TAB_COMMA_SPLIT if not self._GetNextToken(): - raise Warning("expected value", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("value", self.FileName, self.CurrentLineNumber) if not self._Token in {"PersistAcrossReset", "PopulateSystemTable", "InitiateReset"}: - raise Warning("expected PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("PersistAcrossReset, PopulateSystemTable, or InitiateReset", self.FileName, self.CurrentLineNumber) Value += self._Token.strip() elif Name == 'OEM_CAPSULE_FLAGS': Value = self._Token.strip() if not Value.upper().startswith('0X'): - raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("hex value starting with 0x", self.FileName, self.CurrentLineNumber) try: Value = int(Value, 0) except ValueError: - raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("hex string failed to convert to value", self.FileName, self.CurrentLineNumber) if not 0x0000 <= Value <= 0xFFFF: - raise Warning("expected hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("hex value between 0x0000 and 0xFFFF", self.FileName, self.CurrentLineNumber) Value = self._Token.strip() else: Value = self._Token.strip() @@ -3317,10 +3319,10 @@ class FdfParser: return False if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise 
Warning("expected FV name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("FV name", self.FileName, self.CurrentLineNumber) if self._Token.upper() not in self.Profile.FvDict: raise Warning("FV name does not exist", self.FileName, self.CurrentLineNumber) @@ -3350,10 +3352,10 @@ class FdfParser: return False if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected FD name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("FD name", self.FileName, self.CurrentLineNumber) if self._Token.upper() not in self.Profile.FdDict: raise Warning("FD name does not exist", self.FileName, self.CurrentLineNumber) @@ -3379,10 +3381,10 @@ class FdfParser: return False if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected payload name after FMP_PAYLOAD =", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("payload name after FMP_PAYLOAD =", self.FileName, self.CurrentLineNumber) Payload = self._Token.upper() if Payload not in self.Profile.FmpPayloadDict: raise Warning("This FMP Payload does not exist: %s" % self._Token, self.FileName, self.CurrentLineNumber) @@ -3398,10 +3400,10 @@ class FdfParser: return None if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected File name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("File name", self.FileName, self.CurrentLineNumber) AnyFileName = self._Token self._VerifyFile(AnyFileName) @@ -3450,10 +3452,10 @@ class FdfParser: return False if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected Afile name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Afile name", self.FileName, self.CurrentLineNumber) AfileName = self._Token AfileBaseName = os.path.basename(AfileName) @@ -3498,10 +3500,10 @@ class FdfParser: FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) - raise Warning("expected [Rule.]", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("[Rule.]", self.FileName, self.CurrentLineNumber) if not self._SkipToToken(TAB_SPLIT): - raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("'.'", self.FileName, self.CurrentLineNumber) Arch = self._SkippedChars.rstrip(TAB_SPLIT) if Arch.upper() not in ARCH_SET_FULL: @@ -3512,11 +3514,11 @@ class FdfParser: TemplateName = "" if self._IsToken(TAB_SPLIT): if not self._GetNextWord(): - raise Warning("expected template name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("template name", self.FileName, self.CurrentLineNumber) TemplateName = self._Token if not self._IsToken(TAB_SECTION_END): - raise Warning("expected ']'", 
self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedBracketClose(self.FileName, self.CurrentLineNumber) RuleObj = self._GetRuleFileStatements() RuleObj.Arch = Arch.upper() @@ -3547,7 +3549,7 @@ class FdfParser: # def _GetModuleType(self): if not self._GetNextWord(): - raise Warning("expected Module type", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Module type", self.FileName, self.CurrentLineNumber) if self._Token.upper() not in { SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, SUP_MODULE_DXE_CORE, SUP_MODULE_DXE_DRIVER, @@ -3576,7 +3578,7 @@ class FdfParser: # def _GetFileExtension(self): if not self._IsToken(TAB_SPLIT): - raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("'.'", self.FileName, self.CurrentLineNumber) Ext = "" if self._GetNextToken(): @@ -3587,7 +3589,7 @@ class FdfParser: raise Warning("Unknown file extension '%s'" % self._Token, self.FileName, self.CurrentLineNumber) else: - raise Warning("expected file extension", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("file extension", self.FileName, self.CurrentLineNumber) ## _GetRuleFileStatement() method # @@ -3598,10 +3600,10 @@ class FdfParser: # def _GetRuleFileStatements(self): if not self._IsKeyword("FILE"): - raise Warning("expected FILE", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("FILE", self.FileName, self.CurrentLineNumber) if not self._GetNextWord(): - raise Warning("expected FFS type", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("FFS type", self.FileName, self.CurrentLineNumber) Type = self._Token.strip().upper() if Type not in {"RAW", "FREEFORM", SUP_MODULE_SEC, SUP_MODULE_PEI_CORE, SUP_MODULE_PEIM, @@ -3611,17 +3613,17 @@ class FdfParser: raise Warning("Unknown FV type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._IsKeyword("$(NAMED_GUID)"): if not self._GetNextWord(): - raise Warning("expected $(NAMED_GUID)", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("$(NAMED_GUID)", self.FileName, self.CurrentLineNumber) if self._Token == 'PCD': if not self._IsToken("("): - raise Warning("expected '('", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("'('", self.FileName, self.CurrentLineNumber) PcdPair = self._GetNextPcdSettings() if not self._IsToken(")"): - raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("')'", self.FileName, self.CurrentLineNumber) self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' NameGuid = self._Token @@ -3643,7 +3645,7 @@ class FdfParser: if self._IsToken(TAB_COMMA_SPLIT): while self._GetNextToken(): if not TokenFindPattern.match(self._Token): - raise Warning("expected KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("KeyString \"Target_Tag_Arch\"", self.FileName, self.CurrentLineNumber) KeyStringList.append(self._Token) if not self._IsToken(TAB_COMMA_SPLIT): @@ -3688,14 +3690,14 @@ class FdfParser: break if not self._IsToken("}"): - raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) return NewRule else: # Simple file rule expected if not self._GetNextWord(): - raise Warning("expected leaf section type", self.FileName, 
self.CurrentLineNumber) + raise Warning.Expected("leaf section type", self.FileName, self.CurrentLineNumber) SectionName = self._Token @@ -3726,7 +3728,7 @@ class FdfParser: if self._IsToken(TAB_VALUE_SPLIT): Ext = self._GetFileExtension() elif not self._GetNextToken(): - raise Warning("expected File name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("File name", self.FileName, self.CurrentLineNumber) NewRule = RuleSimpleFile() NewRule.SectionType = SectionName @@ -3788,13 +3790,13 @@ class FdfParser: break if not self._IsToken("}"): - raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) FvImageSectionObj.Fv = FvObj FvImageSectionObj.FvName = None else: if not self._IsKeyword(BINARY_FILE_TYPE_FV): - raise Warning("expected 'FV'", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("'FV'", self.FileName, self.CurrentLineNumber) FvImageSectionObj.FvFileType = self._Token if self._GetAlignment(): @@ -3816,7 +3818,7 @@ class FdfParser: else: self._UndoToken() else: - raise Warning("expected FV file name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("FV file name", self.FileName, self.CurrentLineNumber) Obj.SectionList.append(FvImageSectionObj) return True @@ -3825,17 +3827,17 @@ class FdfParser: EfiSectionObj.SectionType = SectionName if not self._GetNextToken(): - raise Warning("expected file type", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("file type", self.FileName, self.CurrentLineNumber) if self._Token == "STRING": if not self._RuleSectionCouldHaveString(EfiSectionObj.SectionType): raise Warning("%s section could NOT have string data%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected Quoted String", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Quoted String", self.FileName, self.CurrentLineNumber) if self._GetStringData(): EfiSectionObj.StringData = self._Token @@ -3845,9 +3847,9 @@ class FdfParser: raise Warning("%s section could NOT have BUILD_NUM%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected Build number", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Build number", self.FileName, self.CurrentLineNumber) EfiSectionObj.BuildNum = self._Token else: @@ -3864,9 +3866,9 @@ class FdfParser: raise Warning("%s section could NOT have BUILD_NUM%d" % (EfiSectionObj.SectionType, self.CurrentLineNumber), self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected Build number", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Build number", self.FileName, self.CurrentLineNumber) EfiSectionObj.BuildNum = self._Token if self._GetAlignment(): @@ -3905,10 +3907,10 @@ class FdfParser: if self._Token == 'PCD': if not 
self._IsToken("("): - raise Warning("expected '('", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("'('", self.FileName, self.CurrentLineNumber) PcdPair = self._GetNextPcdSettings() if not self._IsToken(")"): - raise Warning("expected ')'", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("')'", self.FileName, self.CurrentLineNumber) self._Token = 'PCD('+PcdPair[1]+TAB_SPLIT+PcdPair[0]+')' EfiSectionObj.FileName = self._Token @@ -3916,7 +3918,7 @@ class FdfParser: else: self._UndoToken() else: - raise Warning("expected section file name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("section file name", self.FileName, self.CurrentLineNumber) Obj.SectionList.append(EfiSectionObj) return True @@ -3975,36 +3977,37 @@ class FdfParser: # @param FileType The file type to check # def _CheckRuleSectionFileType(self, SectionType, FileType): + WarningString = "Incorrect section file type '%s'" if SectionType == "COMPAT16": if FileType not in {"COMPAT16", "SEC_COMPAT16"}: - raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) + raise Warning(WarningString % FileType, self.FileName, self.CurrentLineNumber) elif SectionType == BINARY_FILE_TYPE_PE32: if FileType not in {BINARY_FILE_TYPE_PE32, "SEC_PE32"}: - raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) + raise Warning(WarningString % FileType, self.FileName, self.CurrentLineNumber) elif SectionType == BINARY_FILE_TYPE_PIC: if FileType not in {BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_PIC}: - raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) + raise Warning(WarningString % FileType, self.FileName, self.CurrentLineNumber) elif SectionType == BINARY_FILE_TYPE_TE: if FileType not in {BINARY_FILE_TYPE_TE, "SEC_TE"}: - raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) + raise Warning(WarningString % FileType, self.FileName, self.CurrentLineNumber) elif SectionType == "RAW": if FileType not in {BINARY_FILE_TYPE_BIN, "SEC_BIN", "RAW", "ASL", "ACPI"}: - raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) + raise Warning(WarningString % FileType, self.FileName, self.CurrentLineNumber) elif SectionType == BINARY_FILE_TYPE_DXE_DEPEX or SectionType == BINARY_FILE_TYPE_SMM_DEPEX: if FileType not in {BINARY_FILE_TYPE_DXE_DEPEX, "SEC_DXE_DEPEX", BINARY_FILE_TYPE_SMM_DEPEX}: - raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) + raise Warning(WarningString % FileType, self.FileName, self.CurrentLineNumber) elif SectionType == BINARY_FILE_TYPE_UI: if FileType not in {BINARY_FILE_TYPE_UI, "SEC_UI"}: - raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) + raise Warning(WarningString % FileType, self.FileName, self.CurrentLineNumber) elif SectionType == "VERSION": if FileType not in {"VERSION", "SEC_VERSION"}: - raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) + raise Warning(WarningString % FileType, self.FileName, self.CurrentLineNumber) elif SectionType == BINARY_FILE_TYPE_PEI_DEPEX: if FileType not in {BINARY_FILE_TYPE_PEI_DEPEX, "SEC_PEI_DEPEX"}: - raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) + raise Warning(WarningString % FileType, self.FileName, self.CurrentLineNumber) elif 
SectionType == BINARY_FILE_TYPE_GUID: if FileType not in {BINARY_FILE_TYPE_PE32, "SEC_GUID"}: - raise Warning("Incorrect section file type '%s'" % FileType, self.FileName, self.CurrentLineNumber) + raise Warning(WarningString % FileType, self.FileName, self.CurrentLineNumber) ## _GetRuleEncapsulationSection() method # @@ -4022,7 +4025,7 @@ class FdfParser: Type = self._Token if not self._IsToken("{"): - raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyOpen(self.FileName, self.CurrentLineNumber) CompressSectionObj = CompressSection() @@ -4035,7 +4038,7 @@ class FdfParser: break if not self._IsToken("}"): - raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) theRule.SectionList.append(CompressSectionObj) return True @@ -4051,7 +4054,7 @@ class FdfParser: AttribDict = self._GetGuidAttrib() if not self._IsToken("{"): - raise Warning("expected '{'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyOpen(self.FileName, self.CurrentLineNumber) GuidSectionObj = GuidSection() GuidSectionObj.NameGuid = GuidValue GuidSectionObj.SectionType = "GUIDED" @@ -4067,7 +4070,7 @@ class FdfParser: break if not self._IsToken("}"): - raise Warning("expected '}'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) theRule.SectionList.append(GuidSectionObj) return True @@ -4098,17 +4101,17 @@ class FdfParser: FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber) #print 'Parsing String: %s in File %s, At line: %d, Offset Within Line: %d' \ # % (self.Profile.FileLinesList[self.CurrentLineNumber - 1][self.CurrentOffsetWithinLine:], FileLineTuple[0], FileLineTuple[1], self.CurrentOffsetWithinLine) - raise Warning("expected [VTF.]", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("[VTF.]", self.FileName, self.CurrentLineNumber) if not self._SkipToToken(TAB_SPLIT): - raise Warning("expected '.'", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("'.'", self.FileName, self.CurrentLineNumber) Arch = self._SkippedChars.rstrip(TAB_SPLIT).upper() if Arch not in HW_ARCH_SET: raise Warning("Unknown Arch '%s'" % Arch, self.FileName, self.CurrentLineNumber) if not self._GetNextWord(): - raise Warning("expected VTF name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("VTF name", self.FileName, self.CurrentLineNumber) Name = self._Token.upper() VtfObj = Vtf() @@ -4117,20 +4120,20 @@ class FdfParser: if self._IsToken(TAB_COMMA_SPLIT): if not self._GetNextWord(): - raise Warning("expected Arch list", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Arch list", self.FileName, self.CurrentLineNumber) if self._Token.upper() not in HW_ARCH_SET: raise Warning("Unknown Arch '%s'" % self._Token, self.FileName, self.CurrentLineNumber) VtfObj.ArchList = self._Token.upper() if not self._IsToken(TAB_SECTION_END): - raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedBracketClose(self.FileName, self.CurrentLineNumber) if self._IsKeyword("IA32_RST_BIN"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected Reset file", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Reset file", self.FileName, 
self.CurrentLineNumber) VtfObj.ResetBin = self._Token if VtfObj.ResetBin.replace(TAB_WORKSPACE, '').find('$') == -1: @@ -4159,26 +4162,26 @@ class FdfParser: return False if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextWord(): - raise Warning("expected Component Name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Component Name", self.FileName, self.CurrentLineNumber) CompStatementObj = ComponentStatement() CompStatementObj.CompName = self._Token if not self._IsKeyword("COMP_LOC"): - raise Warning("expected COMP_LOC", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("COMP_LOC", self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) CompStatementObj.CompLoc = "" if self._GetNextWord(): CompStatementObj.CompLoc = self._Token if self._IsToken(TAB_VALUE_SPLIT): if not self._GetNextWord(): - raise Warning("Expected Region Name", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Region Name", self.FileName, self.CurrentLineNumber) if self._Token not in {"F", "N", "S"}: #, "H", "L", "PH", "PL"): not support raise Warning("Unknown location type '%s'" % self._Token, self.FileName, self.CurrentLineNumber) @@ -4189,13 +4192,13 @@ class FdfParser: self.CurrentOffsetWithinLine = 0 if not self._IsKeyword("COMP_TYPE"): - raise Warning("expected COMP_TYPE", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("COMP_TYPE", self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected Component type", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Component type", self.FileName, self.CurrentLineNumber) if self._Token not in {"FIT", "PAL_B", "PAL_A", "OEM"}: if not self._Token.startswith("0x") or len(self._Token) < 3 or len(self._Token) > 4 or \ not self._Token[2] in hexdigits or not self._Token[-1] in hexdigits: @@ -4203,13 +4206,13 @@ class FdfParser: CompStatementObj.CompType = self._Token if not self._IsKeyword("COMP_VER"): - raise Warning("expected COMP_VER", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("COMP_VER", self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected Component version", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Component version", self.FileName, self.CurrentLineNumber) Pattern = compile('-$|[0-9a-fA-F]{1,2}\.[0-9a-fA-F]{1,2}$', DOTALL) if Pattern.match(self._Token) is None: @@ -4217,26 +4220,26 @@ class FdfParser: CompStatementObj.CompVer = self._Token if not self._IsKeyword("COMP_CS"): - raise Warning("expected COMP_CS", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("COMP_CS", self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not 
self._GetNextToken(): - raise Warning("expected Component CS", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Component CS", self.FileName, self.CurrentLineNumber) if self._Token not in {"1", "0"}: - raise Warning("Unknown Component CS '%s'" % self._Token, self.FileName, self.CurrentLineNumber) + raise Warning("Unknown Component CS '%s'" % self._Token, self.FileName, self.CurrentLineNumber) CompStatementObj.CompCs = self._Token if not self._IsKeyword("COMP_BIN"): - raise Warning("expected COMP_BIN", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("COMP_BIN", self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected Component file", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Component file", self.FileName, self.CurrentLineNumber) CompStatementObj.CompBin = self._Token if CompStatementObj.CompBin != '-' and CompStatementObj.CompBin.replace(TAB_WORKSPACE, '').find('$') == -1: @@ -4246,13 +4249,13 @@ class FdfParser: EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) if not self._IsKeyword("COMP_SYM"): - raise Warning("expected COMP_SYM", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("COMP_SYM", self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected Component symbol file", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Component symbol file", self.FileName, self.CurrentLineNumber) CompStatementObj.CompSym = self._Token if CompStatementObj.CompSym != '-' and CompStatementObj.CompSym.replace(TAB_WORKSPACE, '').find('$') == -1: @@ -4262,10 +4265,10 @@ class FdfParser: EdkLogger.error("GenFds", ErrorCode, ExtraData=ErrorInfo) if not self._IsKeyword("COMP_SIZE"): - raise Warning("expected COMP_SIZE", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("COMP_SIZE", self.FileName, self.CurrentLineNumber) if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if self._IsToken("-"): CompStatementObj.CompSize = self._Token @@ -4304,7 +4307,7 @@ class FdfParser: OptRomName = self._GetUiName() if not self._IsToken(TAB_SECTION_END): - raise Warning("expected ']'", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedBracketClose(self.FileName, self.CurrentLineNumber) OptRomObj = OPTIONROM(OptRomName) self.Profile.OptRomDict[OptRomName] = OptRomObj @@ -4334,7 +4337,7 @@ class FdfParser: self._GetInfOptions(ffsInf) if not self._GetNextToken(): - raise Warning("expected INF file path", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("INF file path", self.FileName, self.CurrentLineNumber) ffsInf.InfFileName = self._Token if ffsInf.InfFileName.replace(TAB_WORKSPACE, '').find('$') == -1: #check for file path @@ -4373,42 +4376,42 @@ class FdfParser: while True: if self._IsKeyword("PCI_VENDOR_ID"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextHexNumber(): - raise 
Warning("expected Hex vendor id", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Hex vendor id", self.FileName, self.CurrentLineNumber) Overrides.PciVendorId = self._Token continue if self._IsKeyword("PCI_CLASS_CODE"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextHexNumber(): - raise Warning("expected Hex class code", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Hex class code", self.FileName, self.CurrentLineNumber) Overrides.PciClassCode = self._Token continue if self._IsKeyword("PCI_DEVICE_ID"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextHexNumber(): - raise Warning("expected Hex device id", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Hex device id", self.FileName, self.CurrentLineNumber) Overrides.PciDeviceId = self._Token continue if self._IsKeyword("PCI_REVISION"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextHexNumber(): - raise Warning("expected Hex revision", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Hex revision", self.FileName, self.CurrentLineNumber) Overrides.PciRevision = self._Token continue if self._IsKeyword("PCI_COMPRESS"): if not self._IsToken(TAB_EQUAL_SPLIT): - raise Warning("expected '='", self.FileName, self.CurrentLineNumber) + raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._GetNextToken(): - raise Warning("expected TRUE/FALSE for compress", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("TRUE/FALSE for compress", self.FileName, self.CurrentLineNumber) Overrides.NeedCompress = self._Token.upper() == 'TRUE' continue @@ -4435,11 +4438,11 @@ class FdfParser: FfsFileObj = OptRomFileStatement() if not self._IsKeyword("EFI") and not self._IsKeyword(BINARY_FILE_TYPE_BIN): - raise Warning("expected Binary type (EFI/BIN)", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("Binary type (EFI/BIN)", self.FileName, self.CurrentLineNumber) FfsFileObj.FileType = self._Token if not self._GetNextToken(): - raise Warning("expected File path", self.FileName, self.CurrentLineNumber) + raise Warning.Expected("File path", self.FileName, self.CurrentLineNumber) FfsFileObj.FileName = self._Token if FfsFileObj.FileName.replace(TAB_WORKSPACE, '').find('$') == -1: #check for file path -- 2.16.2.windows.1 ^ permalink raw reply related [flat|nested] 14+ messages in thread
* [Patch v3 6/6] BaseTools/GenFds: create and use new variable in FdfParser 2018-10-23 17:29 [Patch v3 0/6] BaseTools/GenFds: cleanup GenFds Jaben Carsey ` (4 preceding siblings ...) 2018-10-23 17:29 ` [Patch v3 5/6] BaseTools/GenFds: refactor FdfParser warnings Jaben Carsey @ 2018-10-23 17:29 ` Jaben Carsey 2018-10-25 13:57 ` Feng, Bob C 2018-10-25 11:55 ` [Patch v3 0/6] BaseTools/GenFds: cleanup GenFds Zhu, Yonghong 6 siblings, 1 reply; 14+ messages in thread From: Jaben Carsey @ 2018-10-23 17:29 UTC (permalink / raw) To: edk2-devel; +Cc: Bob Feng, Yonghong Zhu, Liming Gao replace lots of '}' and "}" with a shared new consistent variable. Cc: Bob Feng <bob.c.feng@intel.com> Cc: Yonghong Zhu <yonghong.zhu@intel.com> Cc: Liming Gao <liming.gao@intel.com> Contributed-under: TianoCore Contribution Agreement 1.1 Signed-off-by: Jaben Carsey <jaben.carsey@intel.com> --- BaseTools/Source/Python/GenFds/FdfParser.py | 45 ++++++++++---------- 1 file changed, 23 insertions(+), 22 deletions(-) diff --git a/BaseTools/Source/Python/GenFds/FdfParser.py b/BaseTools/Source/Python/GenFds/FdfParser.py index d954c0b40b3b..bf6e0bd2286c 100644 --- a/BaseTools/Source/Python/GenFds/FdfParser.py +++ b/BaseTools/Source/Python/GenFds/FdfParser.py @@ -65,8 +65,9 @@ T_CHAR_TAB = '\t' T_CHAR_DOUBLE_QUOTE = '\"' T_CHAR_SINGLE_QUOTE = '\'' T_CHAR_STAR = '*' +T_CHAR_BRACE_R = '}' -SEPARATORS = {TAB_EQUAL_SPLIT, TAB_VALUE_SPLIT, TAB_COMMA_SPLIT, '{', '}'} +SEPARATORS = {TAB_EQUAL_SPLIT, TAB_VALUE_SPLIT, TAB_COMMA_SPLIT, '{', T_CHAR_BRACE_R} ALIGNMENTS = {"Auto", "8", "16", "32", "64", "128", "512", "1K", "4K", "32K", "64K", "128K", "256K", "512K", "1M", "2M", "4M", "8M", "16M"} ALIGNMENT_NOAUTO = ALIGNMENTS - {"Auto"} @@ -2021,7 +2022,7 @@ class FdfParser: DataString += self._Token DataString += TAB_COMMA_SPLIT - if not self._IsToken("}"): + if not self._IsToken(T_CHAR_BRACE_R): raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) DataString = DataString.rstrip(TAB_COMMA_SPLIT) @@ -2061,7 +2062,7 @@ class FdfParser: DataString += self._Token DataString += TAB_COMMA_SPLIT - if not self._IsToken("}"): + if not self._IsToken(T_CHAR_BRACE_R): raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) DataString = DataString.rstrip(TAB_COMMA_SPLIT) @@ -2330,10 +2331,10 @@ class FdfParser: DataString += self._Token DataString += TAB_COMMA_SPLIT - if not self._IsToken("}"): + if not self._IsToken(T_CHAR_BRACE_R): raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) - if not self._IsToken("}"): + if not self._IsToken(T_CHAR_BRACE_R): raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) DataString = DataString.rstrip(TAB_COMMA_SPLIT) @@ -2348,7 +2349,7 @@ class FdfParser: FvObj.FvExtEntryData.append(self._Token) - if not self._IsToken("}"): + if not self._IsToken(T_CHAR_BRACE_R): raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) return True @@ -2384,7 +2385,7 @@ class FdfParser: if not IsInf and not IsFile: break - if not self._IsToken("}"): + if not self._IsToken(T_CHAR_BRACE_R): raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) FvObj.AprioriSectionList.append(AprSectionObj) @@ -2659,7 +2660,7 @@ class FdfParser: FfsFileObj.FileName = self._Token.replace('$(SPACE)', ' ') self._VerifyFile(FfsFileObj.FileName) - if not self._IsToken("}"): + if not self._IsToken(T_CHAR_BRACE_R): raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) ## _GetRAWData() method @@ -2684,7 +2685,7 @@ class FdfParser: raise 
Warning.Expected("Filename value", self.FileName, self.CurrentLineNumber) FileName = self._Token.replace('$(SPACE)', ' ') - if FileName == '}': + if FileName == T_CHAR_BRACE_R: self._UndoToken() raise Warning.Expected("Filename value", self.FileName, self.CurrentLineNumber) @@ -2693,7 +2694,7 @@ class FdfParser: FfsFileObj.FileName.append(File.Path) FfsFileObj.SubAlignment.append(AlignValue) - if self._IsToken("}"): + if self._IsToken(T_CHAR_BRACE_R): self._UndoToken() break @@ -2865,7 +2866,7 @@ class FdfParser: if not IsInf and not IsFile: break - if not self._IsToken("}"): + if not self._IsToken(T_CHAR_BRACE_R): raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) FvImageSectionObj = FvImageSection() @@ -2890,10 +2891,10 @@ class FdfParser: raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber) if not self._IsToken("{"): raise Warning.ExpectedCurlyOpen(self.FileName, self.CurrentLineNumber) - if not self._SkipToToken("}"): + if not self._SkipToToken(T_CHAR_BRACE_R): raise Warning.Expected("Depex expression ending '}'", self.FileName, self.CurrentLineNumber) - DepexSectionObj.Expression = self._SkippedChars.rstrip('}') + DepexSectionObj.Expression = self._SkippedChars.rstrip(T_CHAR_BRACE_R) Obj.SectionList.append(DepexSectionObj) else: @@ -2984,7 +2985,7 @@ class FdfParser: break - if not self._IsToken("}"): + if not self._IsToken(T_CHAR_BRACE_R): raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) Obj.SectionList.append(CompressSectionObj) return True @@ -3011,7 +3012,7 @@ class FdfParser: if not IsLeafSection and not IsEncapSection: break - if not self._IsToken("}"): + if not self._IsToken(T_CHAR_BRACE_R): raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) Obj.SectionList.append(GuidSectionObj) @@ -3689,7 +3690,7 @@ class FdfParser: if not IsEncapsulate and not IsLeaf: break - if not self._IsToken("}"): + if not self._IsToken(T_CHAR_BRACE_R): raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) return NewRule @@ -3789,7 +3790,7 @@ class FdfParser: if not IsInf and not IsFile: break - if not self._IsToken("}"): + if not self._IsToken(T_CHAR_BRACE_R): raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) FvImageSectionObj.Fv = FvObj FvImageSectionObj.FvName = None @@ -3808,7 +3809,7 @@ class FdfParser: FvImageSectionObj.FvFileExtension = self._GetFileExtension() elif self._GetNextToken(): if self._Token not in { - "}", "COMPAT16", BINARY_FILE_TYPE_PE32, + T_CHAR_BRACE_R, "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX, BINARY_FILE_TYPE_UI, "VERSION", @@ -3894,7 +3895,7 @@ class FdfParser: EfiSectionObj.FileExtension = self._GetFileExtension() elif self._GetNextToken(): if self._Token not in { - "}", "COMPAT16", BINARY_FILE_TYPE_PE32, + T_CHAR_BRACE_R, "COMPAT16", BINARY_FILE_TYPE_PE32, BINARY_FILE_TYPE_PIC, BINARY_FILE_TYPE_TE, "FV_IMAGE", "RAW", BINARY_FILE_TYPE_DXE_DEPEX, BINARY_FILE_TYPE_UI, "VERSION", @@ -4037,7 +4038,7 @@ class FdfParser: if not IsEncapsulate and not IsLeaf: break - if not self._IsToken("}"): + if not self._IsToken(T_CHAR_BRACE_R): raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) theRule.SectionList.append(CompressSectionObj) @@ -4069,7 +4070,7 @@ class FdfParser: if not IsEncapsulate and not IsLeaf: break - if not self._IsToken("}"): + if not self._IsToken(T_CHAR_BRACE_R): raise Warning.ExpectedCurlyClose(self.FileName, self.CurrentLineNumber) 
theRule.SectionList.append(GuidSectionObj) @@ -4415,7 +4416,7 @@ class FdfParser: Overrides.NeedCompress = self._Token.upper() == 'TRUE' continue - if self._IsToken("}"): + if self._IsToken(T_CHAR_BRACE_R): break else: EdkLogger.error("FdfParser", FORMAT_INVALID, File=self.FileName, Line=self.CurrentLineNumber) -- 2.16.2.windows.1 ^ permalink raw reply related [flat|nested] 14+ messages in thread
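The change above is mechanical, but the motivation is easier to see in isolation: once the right brace is a named constant that also appears in SEPARATORS, the tokenizer and every _IsToken call site share one definition instead of mixing '}' and "}" literals. The following toy tokenizer is only an illustration of that idea, with simplified names and logic rather than the actual FdfParser implementation:

    T_CHAR_BRACE_L = '{'
    T_CHAR_BRACE_R = '}'
    # Single-character tokens this toy tokenizer treats as separators.
    SEPARATORS = {'=', '|', ',', T_CHAR_BRACE_L, T_CHAR_BRACE_R}

    def Tokenize(Text):
        # Split Text into words and single-character separator tokens.
        Tokens, Current = [], ''
        for Char in Text:
            if Char.isspace() or Char in SEPARATORS:
                if Current:
                    Tokens.append(Current)
                    Current = ''
                if Char in SEPARATORS:
                    Tokens.append(Char)
            else:
                Current += Char
        if Current:
            Tokens.append(Current)
        return Tokens

    # Call sites compare against the same constant the tokenizer uses, so the
    # closing-brace check cannot drift out of sync with the separator set.
    assert Tokenize("FILE RAW = data.bin { 0x00, 0x01 }")[-1] == T_CHAR_BRACE_R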
* Re: [Patch v3 6/6] BaseTools/GenFds: create and use new variable in FdfParser 2018-10-23 17:29 ` [Patch v3 6/6] BaseTools/GenFds: create and use new variable in FdfParser Jaben Carsey @ 2018-10-25 13:57 ` Feng, Bob C 0 siblings, 0 replies; 14+ messages in thread From: Feng, Bob C @ 2018-10-25 13:57 UTC (permalink / raw) To: Carsey, Jaben, edk2-devel@lists.01.org; +Cc: Zhu, Yonghong, Gao, Liming

Reviewed-by: Bob Feng <bob.c.feng@intel.com>

^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [Patch v3 0/6] BaseTools/GenFds: cleanup GenFds 2018-10-23 17:29 [Patch v3 0/6] BaseTools/GenFds: cleanup GenFds Jaben Carsey ` (5 preceding siblings ...) 2018-10-23 17:29 ` [Patch v3 6/6] BaseTools/GenFds: create and use new variable in FdfParser Jaben Carsey @ 2018-10-25 11:55 ` Zhu, Yonghong 6 siblings, 0 replies; 14+ messages in thread From: Zhu, Yonghong @ 2018-10-25 11:55 UTC (permalink / raw) To: Carsey, Jaben, edk2-devel@lists.01.org

Reviewed-by: Yonghong Zhu <yonghong.zhu@intel.com>

Best Regards,
Zhu Yonghong

^ permalink raw reply [flat|nested] 14+ messages in thread
Thread overview: 14+ messages
2018-10-23 17:29 [Patch v3 0/6] BaseTools/GenFds: cleanup GenFds Jaben Carsey
2018-10-23 17:29 ` [Patch v3 1/6] " Jaben Carsey
2018-10-25 13:45 ` Feng, Bob C
2018-10-26 4:53 ` Gary Lin
2018-10-26 5:25 ` Gao, Liming
2018-10-26 6:27 ` Gao, Liming
2018-10-26 6:54 ` Gary Lin
2018-10-23 17:29 ` [Patch v3 2/6] BaseTools/GenFds: change objects to sets Jaben Carsey
2018-10-23 17:29 ` [Patch v3 3/6] Basetools/GenFds: refactor class FV Jaben Carsey
2018-10-23 17:29 ` [Patch v3 4/6] BaseTools/GenFds: remove MacroDict parameter Jaben Carsey
2018-10-23 17:29 ` [Patch v3 5/6] BaseTools/GenFds: refactor FdfParser warnings Jaben Carsey
2018-10-23 17:29 ` [Patch v3 6/6] BaseTools/GenFds: create and use new variable in FdfParser Jaben Carsey
2018-10-25 13:57 ` Feng, Bob C
2018-10-25 11:55 ` [Patch v3 0/6] BaseTools/GenFds: cleanup GenFds Zhu, Yonghong