public inbox for devel@edk2.groups.io
* [Patch 03/33] BaseTools: Rename iteritems to items
  2019-01-25  4:55 [Patch " Feng, Bob C
@ 2019-01-25  4:55 ` Feng, Bob C
  0 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-25  4:55 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao, Yonghong Zhu

From: Zhijux Fan <zhijux.fan@intel.com>

Replace the dict iteritems() calls with items(), since
iteritems() was removed in Python3 while items() works on
both Python2 and Python3.
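The difference can be shown with a minimal sketch (hypothetical dictionary, not taken from the patch):

```python
# In Python 2, dict.iteritems() returned a lazy iterator while
# dict.items() built a list. Python 3 removed iteritems(); items()
# now returns a lazy view, so plain items() is the portable spelling.
defines = {'PCI_COMPRESS': 'TRUE', 'MODULE_ENTRY_POINT': '_ModuleEntryPoint'}

macros = {}
for k, v in defines.items():   # was: defines.iteritems()
    if k not in macros:
        macros[k] = v

print(sorted(macros))  # ['MODULE_ENTRY_POINT', 'PCI_COMPRESS']
```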

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/AutoGen/GenMake.py        | 6 +++---
 BaseTools/Source/Python/Workspace/DscBuildData.py | 2 +-
 BaseTools/Source/Python/build/build.py            | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/GenMake.py b/BaseTools/Source/Python/AutoGen/GenMake.py
index 4da10e3950..0e886967cc 100644
--- a/BaseTools/Source/Python/AutoGen/GenMake.py
+++ b/BaseTools/Source/Python/AutoGen/GenMake.py
@@ -483,11 +483,11 @@ cleanlib:
             ImageEntryPoint = "EfiStart"
         else:
             # EdkII modules always use "_ModuleEntryPoint" as entry point
             ImageEntryPoint = "_ModuleEntryPoint"
 
-        for k, v in MyAgo.Module.Defines.iteritems():
+        for k, v in MyAgo.Module.Defines.items():
             if k not in MyAgo.Macros:
                 MyAgo.Macros[k] = v
 
         if 'MODULE_ENTRY_POINT' not in MyAgo.Macros:
             MyAgo.Macros['MODULE_ENTRY_POINT'] = ModuleEntryPoint
@@ -495,11 +495,11 @@ cleanlib:
             MyAgo.Macros['ARCH_ENTRY_POINT'] = ArchEntryPoint
         if 'IMAGE_ENTRY_POINT' not in MyAgo.Macros:
             MyAgo.Macros['IMAGE_ENTRY_POINT'] = ImageEntryPoint
 
         PCI_COMPRESS_Flag = False
-        for k, v in MyAgo.Module.Defines.iteritems():
+        for k, v in MyAgo.Module.Defines.items():
             if 'PCI_COMPRESS' == k and 'TRUE' == v:
                 PCI_COMPRESS_Flag = True
 
         # tools definitions
         ToolsDef = []
@@ -659,11 +659,11 @@ cleanlib:
             "module_file"               : MyAgo.MetaFile.Name,
             "module_file_base_name"     : MyAgo.MetaFile.BaseName,
             "module_relative_directory" : MyAgo.SourceDir,
             "module_dir"                : mws.join (self.Macros["WORKSPACE"], MyAgo.SourceDir),
             "package_relative_directory": package_rel_dir,
-            "module_extra_defines"      : ["%s = %s" % (k, v) for k, v in MyAgo.Module.Defines.iteritems()],
+            "module_extra_defines"      : ["%s = %s" % (k, v) for k, v in MyAgo.Module.Defines.items()],
 
             "architecture"              : MyAgo.Arch,
             "toolchain_tag"             : MyAgo.ToolChain,
             "build_target"              : MyAgo.BuildTarget,
 
diff --git a/BaseTools/Source/Python/Workspace/DscBuildData.py b/BaseTools/Source/Python/Workspace/DscBuildData.py
index 0dad04212e..9c5596927f 100644
--- a/BaseTools/Source/Python/Workspace/DscBuildData.py
+++ b/BaseTools/Source/Python/Workspace/DscBuildData.py
@@ -1674,11 +1674,11 @@ class DscBuildData(PlatformBuildClassObject):
             if (PcdCName, TokenSpaceGuid) in PcdValueDict:
                 PcdValueDict[PcdCName, TokenSpaceGuid][SkuName] = (PcdValue, DatumType, MaxDatumSize)
             else:
                 PcdValueDict[PcdCName, TokenSpaceGuid] = {SkuName:(PcdValue, DatumType, MaxDatumSize)}
 
-        for ((PcdCName, TokenSpaceGuid), PcdSetting) in PcdValueDict.iteritems():
+        for ((PcdCName, TokenSpaceGuid), PcdSetting) in PcdValueDict.items():
             if self.SkuIdMgr.SystemSkuId in PcdSetting:
                 PcdValue, DatumType, MaxDatumSize = PcdSetting[self.SkuIdMgr.SystemSkuId]
             elif TAB_DEFAULT in PcdSetting:
                 PcdValue, DatumType, MaxDatumSize = PcdSetting[TAB_DEFAULT]
             elif TAB_COMMON in PcdSetting:
diff --git a/BaseTools/Source/Python/build/build.py b/BaseTools/Source/Python/build/build.py
index 3266333cc4..b7aefc9ae7 100644
--- a/BaseTools/Source/Python/build/build.py
+++ b/BaseTools/Source/Python/build/build.py
@@ -2123,11 +2123,11 @@ class Build():
                     # Build up the list of supported architectures for this build
                     prefix = '%s_%s_%s_' % (BuildTarget, ToolChain, Arch)
 
                     # Look through the tool definitions for GUIDed tools
                     guidAttribs = []
-                    for (attrib, value) in self.ToolDef.ToolsDefTxtDictionary.iteritems():
+                    for (attrib, value) in self.ToolDef.ToolsDefTxtDictionary.items():
                         if attrib.upper().endswith('_GUID'):
                             split = attrib.split('_')
                             thisPrefix = '_'.join(split[0:3]) + '_'
                             if thisPrefix == prefix:
                                 guid = self.ToolDef.ToolsDefTxtDictionary[attrib]
-- 
2.20.1.windows.1




* [Patch v2 00/33] BaseTools python3 migration patch set
@ 2019-01-29  2:05 Feng, Bob C
  2019-01-29  2:05 ` [Patch 01/33] BaseTool:Rename xrange() to range() Feng, Bob C
                   ` (33 more replies)
  0 siblings, 34 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel

BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=55

V2:
The Python files under the CParser4 folder of the ECC/Eot tools
are generated by antlr4 for Python3 usage.
They contain Python3-specific syntax, for example
the data type declarations for the arguments of a function, which
is not compatible with Python2. This version removes that syntax.

The version 2 patch set is committed to https://github.com/BobCF/edk2.git branch py3basetools_v2
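A hedged illustration of the kind of Python3-only syntax involved (the listener method name below is hypothetical, not taken from the generated parsers):

```python
# antlr4's Python3 target emits function annotations like these.
# Under Python 2 the ": int" and "-> None" annotations are a
# SyntaxError, so the annotated modules cannot even be imported
# by a Python 2 interpreter; stripping the annotations makes the
# file parse on both versions.
def enterFunctionDefinition(ctx, depth: int = 0) -> None:
    pass
```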

V1:
This patch set enables Python3 in BaseTools. The BaseTools code will be
compatible with both Python3 and Python2.

There are two environment variables, PYTHON3_ENABLE and PYTHON_COMMAND. Their behavior
combines as follows:
If the user wants a specific Python interpreter, he only needs to set the PYTHON_COMMAND env.
If PYTHON3_ENABLE is set, PYTHON_COMMAND will be set to the interpreter found by the edk2 scripts, based on the PYTHON3_ENABLE value.
If PYTHON3_ENABLE is not set but PYTHON_COMMAND is set, PYTHON_COMMAND will be used to run the Python scripts, with no version check.
If neither PYTHON3_ENABLE nor PYTHON_COMMAND is set, PYTHON_COMMAND will be set to the highest-version Python installed in the OS.
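The selection rules above can be sketched as follows (a simplified illustration only; the real logic lives in edksetup.sh and toolsetup.bat, and the find_python3/find_highest_python helpers are hypothetical stand-ins for the scripts' interpreter discovery):

```python
def select_python_command(env, find_python3, find_highest_python):
    """Mimic the PYTHON3_ENABLE / PYTHON_COMMAND precedence described above."""
    if env.get('PYTHON3_ENABLE') == 'TRUE':
        # PYTHON3_ENABLE wins: the edk2 scripts locate a python3
        # interpreter and overwrite PYTHON_COMMAND with it.
        return find_python3()
    if 'PYTHON_COMMAND' in env:
        # Explicit user choice; no version check is performed.
        return env['PYTHON_COMMAND']
    # Neither variable set: fall back to the highest-version
    # python installed on the system.
    return find_highest_python()

print(select_python_command({'PYTHON_COMMAND': '/usr/bin/python2.7'},
                            lambda: '/usr/bin/python3',
                            lambda: '/usr/bin/python3.7'))
# /usr/bin/python2.7
```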

This patch set was verified by basic testing on the Ovmf, MinKabylake and MinPurley platforms with Python 3.7.1, and
by minimal testing on Ovmf, MinKabylake and MinPurley with Python 2.7.15.

After this change, we will focus on Python3 validation.

You can also review and try the patch set at https://github.com/BobCF/edk2.git branch py3basetools


Feng, Bob C (9):
  BaseTools: use OrderedDict instead of sdict
  BaseTools: Make sure AllPcdList valid.
  BaseTools:File open failed for VPD MapFile
  BaseTools:Fixed Rsa issue and a set define issue.
  BaseTools:ord() don't match in py2 and py3
  BaseTools: the list and iterator translation
  BaseTools: Handle the bytes and str difference
  BaseTools: ECC tool Python3 adaption
  BaseTools: Eot tool Python3 adaption

Liming Gao (1):
  BaseTools: Update PYTHON env to PYTHON_COMMAND

Yunhua Feng (3):
  BaseTools: nametuple not have verbose parameter in python3
  BaseTools: Remove unnecessary super function
  BaseTools: replace long by int

Zhiju Fan (3):
  BaseTools:TestTools character encoding issue
  BaseTools:Double carriage return inserted from Trim.py on Python3
  BaseTools:There is extra blank line in datalog

Zhijux Fan (17):
  BaseTool:Rename xrange() to range()
  BaseTools:use iterate list to replace the itertools
  BaseTools: Rename iteritems to items
  BaseTools: replace get_bytes_le() to bytes_le
  BaseTools:Solve the data sorting problem use python3
  BaseTools: Update argparse arguments since it not have version now
  BaseTools:Similar to octal data rectification
  BaseTools/UPT:merge UPT Tool use Python2 and Python3
  BaseTools: update Test scripts support python3
  BaseTools/Scripts: Porting PackageDocumentTools code to use Python3
  Basetools: It went wrong when use os.linesep
  BaseTools:Fv BaseAddress must set If it not set
  BaseTools: change the Division Operator
  BaseTools: Similar to octal data rectification
  BaseTools: Update windows and linux run scripts file to use Python3
  BaseTools:Update build tool to print python version information
  BaseTools:Linux Python highest version check.

 BaseTools/Bin/CYGWIN_NT-5.1-i686/Ecc                                                    |    6 +-
 BaseTools/Bin/CYGWIN_NT-5.1-i686/GenDepex                                               |    6 +-
 BaseTools/Bin/CYGWIN_NT-5.1-i686/GenFds                                                 |    6 +-
 BaseTools/Bin/CYGWIN_NT-5.1-i686/TargetTool                                             |    6 +-
 BaseTools/Bin/CYGWIN_NT-5.1-i686/Trim                                                   |    6 +-
 BaseTools/Bin/CYGWIN_NT-5.1-i686/build                                                  |    6 +-
 BaseTools/BinWrappers/PosixLike/BPDG                                                    |    6 +-
 BaseTools/BinWrappers/PosixLike/Ecc                                                     |    6 +-
 BaseTools/BinWrappers/PosixLike/GenDepex                                                |    6 +-
 BaseTools/BinWrappers/PosixLike/GenFds                                                  |    6 +-
 BaseTools/BinWrappers/PosixLike/GenPatchPcdTable                                        |    6 +-
 BaseTools/BinWrappers/PosixLike/GenerateCapsule                                         |    6 +-
 BaseTools/BinWrappers/PosixLike/PatchPcdValue                                           |    6 +-
 BaseTools/BinWrappers/PosixLike/Pkcs7Sign                                               |    6 +-
 BaseTools/BinWrappers/PosixLike/Rsa2048Sha256GenerateKeys                               |    6 +-
 BaseTools/BinWrappers/PosixLike/Rsa2048Sha256Sign                                       |    6 +-
 BaseTools/BinWrappers/PosixLike/TargetTool                                              |    6 +-
 BaseTools/BinWrappers/PosixLike/Trim                                                    |    6 +-
 BaseTools/BinWrappers/PosixLike/UPT                                                     |    6 +-
 BaseTools/BinWrappers/PosixLike/build                                                   |    6 +-
 BaseTools/BinWrappers/WindowsLike/BPDG.bat                                              |    2 +-
 BaseTools/BinWrappers/WindowsLike/Ecc.bat                                               |    2 +-
 BaseTools/BinWrappers/WindowsLike/GenDepex.bat                                          |    2 +-
 BaseTools/BinWrappers/WindowsLike/GenFds.bat                                            |    2 +-
 BaseTools/BinWrappers/WindowsLike/GenPatchPcdTable.bat                                  |    2 +-
 BaseTools/BinWrappers/WindowsLike/GenerateCapsule.bat                                   |    2 +-
 BaseTools/BinWrappers/WindowsLike/PatchPcdValue.bat                                     |    2 +-
 BaseTools/BinWrappers/WindowsLike/Pkcs7Sign.bat                                         |    2 +-
 BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256GenerateKeys.bat                         |    2 +-
 BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256Sign.bat                                 |    2 +-
 BaseTools/BinWrappers/WindowsLike/TargetTool.bat                                        |    2 +-
 BaseTools/BinWrappers/WindowsLike/Trim.bat                                              |    2 +-
 BaseTools/BinWrappers/WindowsLike/UPT.bat                                               |    2 +-
 BaseTools/BinWrappers/WindowsLike/build.bat                                             |    2 +-
 BaseTools/Makefile                                                                      |   12 +-
 BaseTools/Scripts/ConvertFceToStructurePcd.py                                           |    2 +-
 BaseTools/Scripts/PackageDocumentTools/packagedoc_cli.py                                |    6 +-
 BaseTools/Scripts/PackageDocumentTools/packagedocapp.pyw                                |   14 +--
 BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/basemodel/doxygen.py          |    4 +-
 BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/baseobject.py      |   16 +--
 BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/dec.py             |    4 +-
 BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/doxygengen.py      |   12 +-
 BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/doxygengen_spec.py |   12 +-
 BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/dsc.py             |    4 +-
 BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/inf.py             |    4 +-
 BaseTools/Source/C/Makefile                                                             |   12 +-
 BaseTools/Source/Python/AutoGen/AutoGen.py                                              |   90 +++++++------
 BaseTools/Source/Python/AutoGen/GenC.py                                                 |   18 +--
 BaseTools/Source/Python/AutoGen/GenMake.py                                              |   40 +++---
 BaseTools/Source/Python/AutoGen/GenPcdDb.py                                             |   55 ++++----
 BaseTools/Source/Python/AutoGen/GenVar.py                                               |   34 ++---
 BaseTools/Source/Python/AutoGen/InfSectionParser.py                                     |    2 +-
 BaseTools/Source/Python/AutoGen/StrGather.py                                            |    9 +-
 BaseTools/Source/Python/AutoGen/UniClassObject.py                                       |    4 +-
 BaseTools/Source/Python/AutoGen/ValidCheckingInfoObject.py                              |    6 +-
 BaseTools/Source/Python/BPDG/GenVpd.py                                                  |   32 ++---
 BaseTools/Source/Python/Common/Expression.py                                            |    9 +-
 BaseTools/Source/Python/Common/LongFilePathOs.py                                        |    3 +-
 BaseTools/Source/Python/Common/LongFilePathSupport.py                                   |   12 --
 BaseTools/Source/Python/Common/Misc.py                                                  |  192 +++++++---------------------
 BaseTools/Source/Python/Common/StringUtils.py                                           |   16 +--
 BaseTools/Source/Python/Common/VpdInfoFile.py                                           |   10 +-
 BaseTools/Source/Python/Ecc/{ => CParser3}/CLexer.py                                    |    0
 BaseTools/Source/Python/Ecc/{ => CParser3}/CParser.py                                   |    0
 BaseTools/Source/Python/Ecc/CParser3/__init__.py                                        |    0
 BaseTools/Source/Python/Ecc/CParser4/C.g4                                               |  637 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 BaseTools/Source/Python/Ecc/CParser4/CLexer.py                                          |  632 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 BaseTools/Source/Python/Ecc/CParser4/CListener.py                                       |  815 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 BaseTools/Source/Python/Ecc/CParser4/CParser.py                                         | 6279 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 BaseTools/Source/Python/Ecc/CParser4/__init__.py                                        |    0
 BaseTools/Source/Python/Ecc/Check.py                                                    |    4 +-
 BaseTools/Source/Python/Ecc/CodeFragmentCollector.py                                    |   20 +--
 BaseTools/Source/Python/Ecc/Configuration.py                                            |    3 -
 BaseTools/Source/Python/Ecc/EccMain.py                                                  |    2 +-
 BaseTools/Source/Python/Ecc/EccToolError.py                                             |    4 +-
 BaseTools/Source/Python/Ecc/FileProfile.py                                              |    2 +-
 BaseTools/Source/Python/Ecc/MetaDataParser.py                                           |    2 +-
 BaseTools/Source/Python/Ecc/c.py                                                        |    6 +-
 BaseTools/Source/Python/Ecc/config.ini                                                  |    2 -
 BaseTools/Source/Python/Eot/{ => CParser3}/CLexer.py                                    |    0
 BaseTools/Source/Python/Eot/{ => CParser3}/CParser.py                                   |    0
 BaseTools/Source/Python/Eot/CParser3/__init__.py                                        |    0
 BaseTools/Source/Python/Eot/CParser4/CLexer.py                                          |  633 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 BaseTools/Source/Python/Eot/CParser4/CListener.py                                       |  814 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 BaseTools/Source/Python/Eot/CParser4/CParser.py                                         | 6279 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 BaseTools/Source/Python/Eot/CParser4/__init__.py                                        |    0
 BaseTools/Source/Python/Eot/CodeFragmentCollector.py                                    |   22 ++--
 BaseTools/Source/Python/GenFds/AprioriSection.py                                        |    4 +-
 BaseTools/Source/Python/GenFds/Capsule.py                                               |   15 ++-
 BaseTools/Source/Python/GenFds/CapsuleData.py                                           |    4 +-
 BaseTools/Source/Python/GenFds/DataSection.py                                           |    4 +-
 BaseTools/Source/Python/GenFds/EfiSection.py                                            |    4 +-
 BaseTools/Source/Python/GenFds/Fd.py                                                    |    4 +-
 BaseTools/Source/Python/GenFds/FdfParser.py                                             |   22 ++--
 BaseTools/Source/Python/GenFds/FfsFileStatement.py                                      |   16 +--
 BaseTools/Source/Python/GenFds/FfsInfStatement.py                                       |   22 ++--
 BaseTools/Source/Python/GenFds/Fv.py                                                    |   52 ++++----
 BaseTools/Source/Python/GenFds/FvImageSection.py                                        |   22 ++--
 BaseTools/Source/Python/GenFds/GenFds.py                                                |   37 +++---
 BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py                                  |    4 +-
 BaseTools/Source/Python/GenFds/Region.py                                                |   10 +-
 BaseTools/Source/Python/PatchPcdValue/PatchPcdValue.py                                  |    4 +-
 BaseTools/Source/Python/Pkcs7Sign/Pkcs7Sign.py                                          |   15 +--
 BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256GenerateKeys.py                  |   17 +--
 BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py                          |   20 +--
 BaseTools/Source/Python/Trim/Trim.py                                                    |   20 ++-
 BaseTools/Source/Python/UPT/Core/IpiDb.py                                               |    4 +-
 BaseTools/Source/Python/UPT/GenMetaFile/GenInfFile.py                                   |    6 +-
 BaseTools/Source/Python/UPT/Library/CommentGenerating.py                                |    6 +-
 BaseTools/Source/Python/UPT/Library/CommentParsing.py                                   |   10 +-
 BaseTools/Source/Python/UPT/Library/Misc.py                                             |  190 +++++-----------------------
 BaseTools/Source/Python/UPT/Library/ParserValidate.py                                   |    2 +-
 BaseTools/Source/Python/UPT/Library/Parsing.py                                          |    2 +-
 BaseTools/Source/Python/UPT/Library/StringUtils.py                                      |   40 +++---
 BaseTools/Source/Python/UPT/Library/UniClassObject.py                                   |    6 +-
 BaseTools/Source/Python/UPT/Library/Xml/XmlRoutines.py                                  |    2 +-
 BaseTools/Source/Python/UPT/Logger/StringTable.py                                       |    2 +-
 BaseTools/Source/Python/UPT/Parser/DecParser.py                                         |    8 +-
 BaseTools/Source/Python/UPT/Parser/DecParserMisc.py                                     |   30 +----
 BaseTools/Source/Python/UPT/Parser/InfAsBuiltProcess.py                                 |    4 +-
 BaseTools/Source/Python/UPT/Parser/InfParser.py                                         |    4 +-
 BaseTools/Source/Python/UPT/Parser/InfSectionParser.py                                  |    4 +-
 BaseTools/Source/Python/UPT/PomAdapter/InfPomAlignment.py                               |    2 +-
 BaseTools/Source/Python/UPT/UPT.py                                                      |    1 +
 BaseTools/Source/Python/UPT/Xml/IniToXml.py                                             |    2 +-
 BaseTools/Source/Python/UPT/Xml/XmlParserMisc.py                                        |    2 +-
 BaseTools/Source/Python/Workspace/BuildClassObject.py                                   |    6 +-
 BaseTools/Source/Python/Workspace/DscBuildData.py                                       |   34 +++--
 BaseTools/Source/Python/Workspace/InfBuildData.py                                       |    4 +-
 BaseTools/Source/Python/Workspace/MetaDataTable.py                                      |    2 +-
 BaseTools/Source/Python/Workspace/MetaFileParser.py                                     |    4 +-
 BaseTools/Source/Python/build/BuildReport.py                                            |   38 ++++--
 BaseTools/Source/Python/build/build.py                                                  |   71 ++++++-----
 BaseTools/Tests/CToolsTests.py                                                          |    2 +-
 BaseTools/Tests/CheckUnicodeSourceFiles.py                                              |    6 +-
 BaseTools/Tests/GNUmakefile                                                             |    2 +-
 BaseTools/Tests/PythonTest.py                                                           |   15 +++
 BaseTools/Tests/TestTools.py                                                            |   14 ++-
 BaseTools/toolsetup.bat                                                                 |   84 +++++++++++--
 edksetup.sh                                                                             |   77 +++++++++++-
 140 files changed, 16982 insertions(+), 938 deletions(-)
 rename BaseTools/Source/Python/Ecc/{ => CParser3}/CLexer.py (100%)
 rename BaseTools/Source/Python/Ecc/{ => CParser3}/CParser.py (100%)
 create mode 100644 BaseTools/Source/Python/Ecc/CParser3/__init__.py
 create mode 100644 BaseTools/Source/Python/Ecc/CParser4/C.g4
 create mode 100644 BaseTools/Source/Python/Ecc/CParser4/CLexer.py
 create mode 100644 BaseTools/Source/Python/Ecc/CParser4/CListener.py
 create mode 100644 BaseTools/Source/Python/Ecc/CParser4/CParser.py
 create mode 100644 BaseTools/Source/Python/Ecc/CParser4/__init__.py
 rename BaseTools/Source/Python/Eot/{ => CParser3}/CLexer.py (100%)
 rename BaseTools/Source/Python/Eot/{ => CParser3}/CParser.py (100%)
 create mode 100644 BaseTools/Source/Python/Eot/CParser3/__init__.py
 create mode 100644 BaseTools/Source/Python/Eot/CParser4/CLexer.py
 create mode 100644 BaseTools/Source/Python/Eot/CParser4/CListener.py
 create mode 100644 BaseTools/Source/Python/Eot/CParser4/CParser.py
 create mode 100644 BaseTools/Source/Python/Eot/CParser4/__init__.py
 create mode 100644 BaseTools/Tests/PythonTest.py

-- 
2.20.1.windows.1




* [Patch 01/33] BaseTool:Rename xrange() to range()
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 02/33] BaseTools:use iterate list to replace the itertools Feng, Bob C
                   ` (32 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao, Yonghong Zhu

From: Zhijux Fan <zhijux.fan@intel.com>

Rename xrange() to range(), because xrange() does not exist in Python3.
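A minimal sketch of the rename (the data below is illustrative, not from the patch):

```python
# Python 2's xrange() was a lazy sequence; Python 3 dropped it and
# made range() lazy instead. Plain range() therefore runs on both,
# at the cost of materializing a list on Python 2 for large ranges.
raw_data = [b'ab', b'cde', b'f']

offsets = []
offset = 0
for item_index in range(len(raw_data)):   # was: xrange(len(raw_data))
    offsets.append(offset)
    offset += len(raw_data[item_index])

print(offsets)  # [0, 2, 5]
```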

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/AutoGen/GenPcdDb.py | 22 +++++++++++-----------
 BaseTools/Source/Python/BPDG/GenVpd.py      |  6 +++---
 BaseTools/Source/Python/Common/Misc.py      |  2 +-
 BaseTools/Source/Python/GenFds/Region.py    |  2 +-
 4 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/GenPcdDb.py b/BaseTools/Source/Python/AutoGen/GenPcdDb.py
index a9068d2d7a..d3e85293d2 100644
--- a/BaseTools/Source/Python/AutoGen/GenPcdDb.py
+++ b/BaseTools/Source/Python/AutoGen/GenPcdDb.py
@@ -259,11 +259,11 @@ class DbItemList:
         if self.ItemSize == 0:
             #
             # Variable length, need to calculate one by one
             #
             assert(Index < len(self.RawDataList))
-            for ItemIndex in xrange(Index):
+            for ItemIndex in range(Index):
                 Offset += len(self.RawDataList[ItemIndex])
         else:
             Offset = self.ItemSize * Index
 
         return Offset
@@ -346,11 +346,11 @@ class DbComItemList (DbItemList):
             # The only variable table is stringtable, it is not Composite item, should not reach here
             #
             assert(False)
         else:
             assert(Index < len(self.RawDataList))
-            for ItemIndex in xrange(Index):
+            for ItemIndex in range(Index):
                 Offset += len(self.RawDataList[ItemIndex]) * self.ItemSize
 
         return Offset
 
     def GetListSize(self):
@@ -412,11 +412,11 @@ class DbStringHeadTableItemList(DbItemList):
         if self.ItemSize == 0:
             #
             # Variable length, need to calculate one by one
             #
             assert(Index < len(self.RawDataList))
-            for ItemIndex in xrange(Index):
+            for ItemIndex in range(Index):
                 Offset += len(self.RawDataList[ItemIndex])
         else:
             for innerIndex in range(Index):
                 if type(self.RawDataList[innerIndex]) in (list, tuple):
                     Offset += len(self.RawDataList[innerIndex]) * self.ItemSize
@@ -496,27 +496,27 @@ class DbStringItemList (DbComItemList):
             LenList = []
 
         assert(len(RawDataList) == len(LenList))
         DataList = []
         # adjust DataList according to the LenList
-        for Index in xrange(len(RawDataList)):
+        for Index in range(len(RawDataList)):
             Len = LenList[Index]
             RawDatas = RawDataList[Index]
             assert(Len >= len(RawDatas))
             ActualDatas = []
-            for i in xrange(len(RawDatas)):
+            for i in range(len(RawDatas)):
                 ActualDatas.append(RawDatas[i])
-            for i in xrange(len(RawDatas), Len):
+            for i in range(len(RawDatas), Len):
                 ActualDatas.append(0)
             DataList.append(ActualDatas)
         self.LenList = LenList
         DbComItemList.__init__(self, ItemSize, DataList, RawDataList)
     def GetInterOffset(self, Index):
         Offset = 0
 
         assert(Index < len(self.LenList))
-        for ItemIndex in xrange(Index):
+        for ItemIndex in range(Index):
             Offset += self.LenList[ItemIndex]
 
         return Offset
 
     def GetListSize(self):
@@ -700,11 +700,11 @@ def BuildExDataBase(Dict):
     # The FixedHeader length of the PCD_DATABASE_INIT, from Signature to Pad
     FixedHeaderLen = 80
 
     # Get offset of SkuId table in the database
     SkuIdTableOffset = FixedHeaderLen
-    for DbIndex in xrange(len(DbTotal)):
+    for DbIndex in range(len(DbTotal)):
         if DbTotal[DbIndex] is SkuidValue:
             break
         SkuIdTableOffset += DbItemTotal[DbIndex].GetListSize()
 
 
@@ -712,11 +712,11 @@ def BuildExDataBase(Dict):
 
     # Fix up the LocalTokenNumberTable, SkuHeader table
     for (LocalTokenNumberTableIndex, (Offset, Table)) in enumerate(LocalTokenNumberTable):
         DbIndex = 0
         DbOffset = FixedHeaderLen
-        for DbIndex in xrange(len(DbTotal)):
+        for DbIndex in range(len(DbTotal)):
             if DbTotal[DbIndex] is Table:
                 DbOffset += DbItemTotal[DbIndex].GetInterOffset(Offset)
                 break
             DbOffset += DbItemTotal[DbIndex].GetListSize()
             if DbIndex + 1 == InitTableNum:
@@ -738,11 +738,11 @@ def BuildExDataBase(Dict):
         skuindex = 0
         for VariableEntryPerSku in VariableEntries:
             (VariableHeadGuidIndex, VariableHeadStringIndex, SKUVariableOffset, VariableOffset, VariableRefTable, VariableAttribute) = VariableEntryPerSku[:]
             DbIndex = 0
             DbOffset = FixedHeaderLen
-            for DbIndex in xrange(len(DbTotal)):
+            for DbIndex in range(len(DbTotal)):
                 if DbTotal[DbIndex] is VariableRefTable:
                     DbOffset += DbItemTotal[DbIndex].GetInterOffset(VariableOffset)
                     break
                 DbOffset += DbItemTotal[DbIndex].GetListSize()
                 if DbIndex + 1 == InitTableNum:
@@ -758,11 +758,11 @@ def BuildExDataBase(Dict):
             VarAttr, VarProp = VariableAttributes.GetVarAttributes(VariableAttribute)
             VariableEntryPerSku[:] = (VariableHeadStringIndex, DbOffset, VariableHeadGuidIndex, SKUVariableOffset, VarAttr, VarProp)
 
     # calculate various table offset now
     DbTotalLength = FixedHeaderLen
-    for DbIndex in xrange(len(DbItemTotal)):
+    for DbIndex in range(len(DbItemTotal)):
         if DbItemTotal[DbIndex] is DbLocalTokenNumberTable:
             LocalTokenNumberTableOffset = DbTotalLength
         elif DbItemTotal[DbIndex] is DbExMapTable:
             ExMapTableOffset = DbTotalLength
         elif DbItemTotal[DbIndex] is DbGuidTable:
diff --git a/BaseTools/Source/Python/BPDG/GenVpd.py b/BaseTools/Source/Python/BPDG/GenVpd.py
index e5da47f95e..b91837d3d6 100644
--- a/BaseTools/Source/Python/BPDG/GenVpd.py
+++ b/BaseTools/Source/Python/BPDG/GenVpd.py
@@ -210,11 +210,11 @@ class PcdEntry:
             EdkLogger.error("BPDG", BuildToolError.RESOURCE_OVERFLOW,
                             "The byte array %s is too large for size %d(File: %s Line: %s)" % (ValueString, Size, self.FileName, self.Lineno))
 
         ReturnArray = array.array('B')
 
-        for Index in xrange(len(ValueList)):
+        for Index in range(len(ValueList)):
             Value = None
             if ValueList[Index].lower().startswith('0x'):
                 # translate hex value
                 try:
                     Value = int(ValueList[Index], 16)
@@ -236,11 +236,11 @@ class PcdEntry:
                                 "The value item %s in byte array %s do not in range 0 ~ 0xFF(File: %s Line: %s)" % \
                                 (ValueList[Index], ValueString, self.FileName, self.Lineno))
 
             ReturnArray.append(Value)
 
-        for Index in xrange(len(ValueList), Size):
+        for Index in range(len(ValueList), Size):
             ReturnArray.append(0)
 
         self.PcdValue = ReturnArray.tolist()
 
     ## Pack a unicode PCD value into byte array.
@@ -271,11 +271,11 @@ class PcdEntry:
             except:
                 EdkLogger.error("BPDG", BuildToolError.FORMAT_INVALID,
                                 "Invalid unicode character %s in unicode string %s(File: %s Line: %s)" % \
                                 (Value, UnicodeString, self.FileName, self.Lineno))
 
-        for Index in xrange(len(UnicodeString) * 2, Size):
+        for Index in range(len(UnicodeString) * 2, Size):
             ReturnArray.append(0)
 
         self.PcdValue = ReturnArray.tolist()
 
 
diff --git a/BaseTools/Source/Python/Common/Misc.py b/BaseTools/Source/Python/Common/Misc.py
index 0b4dd7a7c1..01bd62a9e2 100644
--- a/BaseTools/Source/Python/Common/Misc.py
+++ b/BaseTools/Source/Python/Common/Misc.py
@@ -1446,11 +1446,11 @@ def CheckPcdDatum(Type, Value):
     return True, ""
 
 def CommonPath(PathList):
     P1 = min(PathList).split(os.path.sep)
     P2 = max(PathList).split(os.path.sep)
-    for Index in xrange(min(len(P1), len(P2))):
+    for Index in range(min(len(P1), len(P2))):
         if P1[Index] != P2[Index]:
             return os.path.sep.join(P1[:Index])
     return os.path.sep.join(P1)
 
 class PathClass(object):
diff --git a/BaseTools/Source/Python/GenFds/Region.py b/BaseTools/Source/Python/GenFds/Region.py
index 8ca61254b0..acc9dea413 100644
--- a/BaseTools/Source/Python/GenFds/Region.py
+++ b/BaseTools/Source/Python/GenFds/Region.py
@@ -60,11 +60,11 @@ class Region(object):
         if Size > 0:
             if (ErasePolarity == '1') :
                 PadByte = pack('B', 0xFF)
             else:
                 PadByte = pack('B', 0)
-            PadData = ''.join(PadByte for i in xrange(0, Size))
+            PadData = ''.join(PadByte for i in range(0, Size))
             Buffer.write(PadData)
 
     ## AddToBuffer()
     #
     #   Add region data to the Buffer
-- 
2.20.1.windows.1
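The xrange()-to-range() rename in the hunks above is safe because Python3's range() is a lazy sequence object, just like Python2's xrange(). As a minimal stand-alone sketch (not part of the patch; `get_inter_offset` and the sample offsets list are illustrative stand-ins for the `GetInterOffset` pattern in the diff):

```python
offsets = [3, 5, 2, 7]  # sample per-item lengths, standing in for self.LenList

def get_inter_offset(len_list, index):
    """Sum the lengths of all items before `index` (mirrors GetInterOffset)."""
    assert index < len(len_list)
    total = 0
    for item_index in range(index):  # was: xrange(index) in Python 2
        total += len_list[item_index]
    return total

print(get_inter_offset(offsets, 3))  # 3 + 5 + 2 = 10
```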



^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [Patch 02/33] BaseTools:use iterate list to replace the itertools
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
  2019-01-29  2:05 ` [Patch 01/33] BaseTool:Rename xrange() to range() Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 03/33] BaseTools: Rename iteritems to items Feng, Bob C
                   ` (31 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao, Yonghong Zhu

From: Zhijux Fan <zhijux.fan@intel.com>

Replace itertools.imap() and itertools.ifilter() with list comprehensions,
since both functions were removed in Python3 (the built-in map() and
filter() are lazy iterators there by default).

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/build/build.py | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/BaseTools/Source/Python/build/build.py b/BaseTools/Source/Python/build/build.py
index e79949fa28..d07c8f84d6 100644
--- a/BaseTools/Source/Python/build/build.py
+++ b/BaseTools/Source/Python/build/build.py
@@ -25,11 +25,10 @@ import sys
 import glob
 import time
 import platform
 import traceback
 import encodings.ascii
-import itertools
 import multiprocessing
 
 from struct import *
 from threading import *
 import threading
@@ -1100,13 +1099,12 @@ class Build():
 
             if os.path.exists(PrebuildEnvFile):
                 f = open(PrebuildEnvFile)
                 envs = f.readlines()
                 f.close()
-                envs = itertools.imap(lambda l: l.split('=', 1), envs)
-                envs = itertools.ifilter(lambda l: len(l) == 2, envs)
-                envs = itertools.imap(lambda l: [i.strip() for i in l], envs)
+                envs = [l.split("=", 1) for l in envs ]
+                envs = [[I.strip() for I in item] for item in envs if len(item) == 2]
                 os.environ.update(dict(envs))
             EdkLogger.info("\n- Prebuild Done -\n")
 
     def LaunchPostbuild(self):
         if self.Postbuild:
-- 
2.20.1.windows.1
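The prebuild-environment parsing above drops itertools.imap/ifilter in favor of list comprehensions that work on both Python versions. A small stand-alone sketch of the same transformation (the sample input lines are made up for illustration):

```python
# Python 2 original (itertools.imap/ifilter were removed in Python 3):
#   envs = itertools.imap(lambda l: l.split('=', 1), envs)
#   envs = itertools.ifilter(lambda l: len(l) == 2, envs)
#   envs = itertools.imap(lambda l: [i.strip() for i in l], envs)
lines = ["WORKSPACE = /src/edk2\n", "# a comment, no equals sign\n", "ARCH=X64\n"]

# Python 3 replacement, mirroring the patch: split on the first '=',
# keep only well-formed pairs, and strip whitespace from both halves.
envs = [l.split("=", 1) for l in lines]
envs = [[i.strip() for i in item] for item in envs if len(item) == 2]
env_dict = dict(envs)
```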




* [Patch 03/33] BaseTools: Rename iteritems to items
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
  2019-01-29  2:05 ` [Patch 01/33] BaseTool:Rename xrange() to range() Feng, Bob C
  2019-01-29  2:05 ` [Patch 02/33] BaseTools:use iterate list to replace the itertools Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 04/33] BaseTools: replace get_bytes_le() to bytes_le Feng, Bob C
                   ` (30 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao, Yonghong Zhu

From: Zhijux Fan <zhijux.fan@intel.com>

Replace dict.iteritems() with dict.items(), because iteritems() was
removed in Python3.

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/AutoGen/GenMake.py        | 6 +++---
 BaseTools/Source/Python/Workspace/DscBuildData.py | 2 +-
 BaseTools/Source/Python/build/build.py            | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/GenMake.py b/BaseTools/Source/Python/AutoGen/GenMake.py
index 4da10e3950..0e886967cc 100644
--- a/BaseTools/Source/Python/AutoGen/GenMake.py
+++ b/BaseTools/Source/Python/AutoGen/GenMake.py
@@ -483,11 +483,11 @@ cleanlib:
             ImageEntryPoint = "EfiStart"
         else:
             # EdkII modules always use "_ModuleEntryPoint" as entry point
             ImageEntryPoint = "_ModuleEntryPoint"
 
-        for k, v in MyAgo.Module.Defines.iteritems():
+        for k, v in MyAgo.Module.Defines.items():
             if k not in MyAgo.Macros:
                 MyAgo.Macros[k] = v
 
         if 'MODULE_ENTRY_POINT' not in MyAgo.Macros:
             MyAgo.Macros['MODULE_ENTRY_POINT'] = ModuleEntryPoint
@@ -495,11 +495,11 @@ cleanlib:
             MyAgo.Macros['ARCH_ENTRY_POINT'] = ArchEntryPoint
         if 'IMAGE_ENTRY_POINT' not in MyAgo.Macros:
             MyAgo.Macros['IMAGE_ENTRY_POINT'] = ImageEntryPoint
 
         PCI_COMPRESS_Flag = False
-        for k, v in MyAgo.Module.Defines.iteritems():
+        for k, v in MyAgo.Module.Defines.items():
             if 'PCI_COMPRESS' == k and 'TRUE' == v:
                 PCI_COMPRESS_Flag = True
 
         # tools definitions
         ToolsDef = []
@@ -659,11 +659,11 @@ cleanlib:
             "module_file"               : MyAgo.MetaFile.Name,
             "module_file_base_name"     : MyAgo.MetaFile.BaseName,
             "module_relative_directory" : MyAgo.SourceDir,
             "module_dir"                : mws.join (self.Macros["WORKSPACE"], MyAgo.SourceDir),
             "package_relative_directory": package_rel_dir,
-            "module_extra_defines"      : ["%s = %s" % (k, v) for k, v in MyAgo.Module.Defines.iteritems()],
+            "module_extra_defines"      : ["%s = %s" % (k, v) for k, v in MyAgo.Module.Defines.items()],
 
             "architecture"              : MyAgo.Arch,
             "toolchain_tag"             : MyAgo.ToolChain,
             "build_target"              : MyAgo.BuildTarget,
 
diff --git a/BaseTools/Source/Python/Workspace/DscBuildData.py b/BaseTools/Source/Python/Workspace/DscBuildData.py
index 0dad04212e..9c5596927f 100644
--- a/BaseTools/Source/Python/Workspace/DscBuildData.py
+++ b/BaseTools/Source/Python/Workspace/DscBuildData.py
@@ -1674,11 +1674,11 @@ class DscBuildData(PlatformBuildClassObject):
             if (PcdCName, TokenSpaceGuid) in PcdValueDict:
                 PcdValueDict[PcdCName, TokenSpaceGuid][SkuName] = (PcdValue, DatumType, MaxDatumSize)
             else:
                 PcdValueDict[PcdCName, TokenSpaceGuid] = {SkuName:(PcdValue, DatumType, MaxDatumSize)}
 
-        for ((PcdCName, TokenSpaceGuid), PcdSetting) in PcdValueDict.iteritems():
+        for ((PcdCName, TokenSpaceGuid), PcdSetting) in PcdValueDict.items():
             if self.SkuIdMgr.SystemSkuId in PcdSetting:
                 PcdValue, DatumType, MaxDatumSize = PcdSetting[self.SkuIdMgr.SystemSkuId]
             elif TAB_DEFAULT in PcdSetting:
                 PcdValue, DatumType, MaxDatumSize = PcdSetting[TAB_DEFAULT]
             elif TAB_COMMON in PcdSetting:
diff --git a/BaseTools/Source/Python/build/build.py b/BaseTools/Source/Python/build/build.py
index d07c8f84d6..94074b89b4 100644
--- a/BaseTools/Source/Python/build/build.py
+++ b/BaseTools/Source/Python/build/build.py
@@ -2086,11 +2086,11 @@ class Build():
                     # Build up the list of supported architectures for this build
                     prefix = '%s_%s_%s_' % (BuildTarget, ToolChain, Arch)
 
                     # Look through the tool definitions for GUIDed tools
                     guidAttribs = []
-                    for (attrib, value) in self.ToolDef.ToolsDefTxtDictionary.iteritems():
+                    for (attrib, value) in self.ToolDef.ToolsDefTxtDictionary.items():
                         if attrib.upper().endswith('_GUID'):
                             split = attrib.split('_')
                             thisPrefix = '_'.join(split[0:3]) + '_'
                             if thisPrefix == prefix:
                                 guid = self.ToolDef.ToolsDefTxtDictionary[attrib]
-- 
2.20.1.windows.1
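In Python3, dict.iteritems() is gone and dict.items() returns a lightweight view, so the rename above is a near-drop-in change. A sketch of the two GenMake.py loops, with sample data standing in for MyAgo.Module.Defines and MyAgo.Macros (the values are illustrative, not from BaseTools):

```python
defines = {"MODULE_ENTRY_POINT": "_ModuleEntryPoint", "PCI_COMPRESS": "TRUE"}
macros = {"ARCH": "X64"}

# First loop: copy module defines into the macro table without
# clobbering entries that are already set.
for k, v in defines.items():  # was: defines.iteritems()
    if k not in macros:
        macros[k] = v

# Second loop: detect the PCI_COMPRESS flag.
pci_compress_flag = False
for k, v in defines.items():
    if 'PCI_COMPRESS' == k and 'TRUE' == v:
        pci_compress_flag = True
```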




* [Patch 04/33] BaseTools: replace get_bytes_le() to bytes_le
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (2 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 03/33] BaseTools: Rename iteritems to items Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 05/33] BaseTools: use OrderedDict instead of sdict Feng, Bob C
                   ` (29 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao, Yonghong Zhu

From: Zhijux Fan <zhijux.fan@intel.com>

uuid.UUID objects do not have a get_bytes_le() method in Python3;
use the bytes_le attribute instead.

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/Common/Misc.py                         | 2 +-
 BaseTools/Source/Python/GenFds/CapsuleData.py                  | 2 +-
 BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/BaseTools/Source/Python/Common/Misc.py b/BaseTools/Source/Python/Common/Misc.py
index 01bd62a9e2..2ef8e07839 100644
--- a/BaseTools/Source/Python/Common/Misc.py
+++ b/BaseTools/Source/Python/Common/Misc.py
@@ -1178,11 +1178,11 @@ def ParseFieldValue (Value):
                 raise BadExpression("Invalid GUID value string %s" % Value)
             Value = TmpValue
         if Value[0] == '"' and Value[-1] == '"':
             Value = Value[1:-1]
         try:
-            Value = "'" + uuid.UUID(Value).get_bytes_le() + "'"
+            Value = "'" + uuid.UUID(Value).bytes_le + "'"
         except ValueError as Message:
             raise BadExpression(Message)
         Value, Size = ParseFieldValue(Value)
         return Value, 16
     if Value.startswith('L"') and Value.endswith('"'):
diff --git a/BaseTools/Source/Python/GenFds/CapsuleData.py b/BaseTools/Source/Python/GenFds/CapsuleData.py
index 6071e9f4a5..db201c074b 100644
--- a/BaseTools/Source/Python/GenFds/CapsuleData.py
+++ b/BaseTools/Source/Python/GenFds/CapsuleData.py
@@ -227,11 +227,11 @@ class CapsulePayload(CapsuleData):
                        VendorFileSize,
                        int(self.HardwareInstance, 16)
                        )
         if AuthData:
             Buffer += pack('QIHH', AuthData[0], AuthData[1], AuthData[2], AuthData[3])
-            Buffer += uuid.UUID(AuthData[4]).get_bytes_le()
+            Buffer += uuid.UUID(AuthData[4]).bytes_le
 
         #
         # Append file content to the structure
         #
         ImageFile = open(self.ImageFile, 'rb')
diff --git a/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py b/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py
index 3fd7eefd6a..370ae2e3fa 100644
--- a/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py
+++ b/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py
@@ -184,11 +184,11 @@ if __name__ == '__main__':
 
     #
     # Write output file that contains hash GUID, Public Key, Signature, and Input data
     #
     args.OutputFile = open(args.OutputFileName, 'wb')
-    args.OutputFile.write(EFI_HASH_ALGORITHM_SHA256_GUID.get_bytes_le())
+    args.OutputFile.write(EFI_HASH_ALGORITHM_SHA256_GUID.bytes_le)
     args.OutputFile.write(PublicKey)
     args.OutputFile.write(Signature)
     args.OutputFile.write(args.InputFileBuffer)
     args.OutputFile.close()
 
-- 
2.20.1.windows.1
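In Python3's uuid module, the little-endian byte form is exposed as the bytes_le attribute rather than a get_bytes_le() method, and it is a bytes object rather than a str. A quick sketch (the GUID value below is just an example, not necessarily one used by BaseTools):

```python
import uuid

guid = uuid.UUID("378d7b65-8da9-4773-b6e4-a47826a833e1")
little_endian = guid.bytes_le  # attribute in Python 3; was get_bytes_le() in Python 2

# bytes_le stores the first three GUID fields little-endian, so time_low
# 0x378D7B65 appears byte-reversed at the front of the 16-byte buffer.
assert little_endian[:4] == b"\x65\x7b\x8d\x37"
```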




* [Patch 05/33] BaseTools: use OrderedDict instead of sdict
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (3 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 04/33] BaseTools: replace get_bytes_le() to bytes_le Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 06/33] BaseTools: nametuple not have verbose parameter in python3 Feng, Bob C
                   ` (28 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Bob Feng, Liming Gao, Yonghong Zhu, Yunhua Feng

Use collections.OrderedDict instead of the custom sdict class, and
delete the sdict implementation.

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Yunhua Feng <yunhuax.feng@intel.com>
---
 BaseTools/Source/Python/Common/Misc.py | 123 +--------------------------------------------------------------------------------------------------------------------------
 1 file changed, 1 insertion(+), 122 deletions(-)

diff --git a/BaseTools/Source/Python/Common/Misc.py b/BaseTools/Source/Python/Common/Misc.py
index 2ef8e07839..9967ac470c 100644
--- a/BaseTools/Source/Python/Common/Misc.py
+++ b/BaseTools/Source/Python/Common/Misc.py
@@ -820,131 +820,10 @@ class Progressor:
             Progressor._StopFlag.set()
         if Progressor._ProgressThread is not None:
             Progressor._ProgressThread.join()
             Progressor._ProgressThread = None
 
-## A dict which can access its keys and/or values orderly
-#
-#  The class implements a new kind of dict which its keys or values can be
-#  accessed in the order they are added into the dict. It guarantees the order
-#  by making use of an internal list to keep a copy of keys.
-#
-class sdict(dict):
-    ## Constructor
-    def __init__(self):
-        dict.__init__(self)
-        self._key_list = []
-
-    ## [] operator
-    def __setitem__(self, key, value):
-        if key not in self._key_list:
-            self._key_list.append(key)
-        dict.__setitem__(self, key, value)
-
-    ## del operator
-    def __delitem__(self, key):
-        self._key_list.remove(key)
-        dict.__delitem__(self, key)
-
-    ## used in "for k in dict" loop to ensure the correct order
-    def __iter__(self):
-        return self.iterkeys()
-
-    ## len() support
-    def __len__(self):
-        return len(self._key_list)
-
-    ## "in" test support
-    def __contains__(self, key):
-        return key in self._key_list
-
-    ## indexof support
-    def index(self, key):
-        return self._key_list.index(key)
-
-    ## insert support
-    def insert(self, key, newkey, newvalue, order):
-        index = self._key_list.index(key)
-        if order == 'BEFORE':
-            self._key_list.insert(index, newkey)
-            dict.__setitem__(self, newkey, newvalue)
-        elif order == 'AFTER':
-            self._key_list.insert(index + 1, newkey)
-            dict.__setitem__(self, newkey, newvalue)
-
-    ## append support
-    def append(self, sdict):
-        for key in sdict:
-            if key not in self._key_list:
-                self._key_list.append(key)
-            dict.__setitem__(self, key, sdict[key])
-
-    def has_key(self, key):
-        return key in self._key_list
-
-    ## Empty the dict
-    def clear(self):
-        self._key_list = []
-        dict.clear(self)
-
-    ## Return a copy of keys
-    def keys(self):
-        keys = []
-        for key in self._key_list:
-            keys.append(key)
-        return keys
-
-    ## Return a copy of values
-    def values(self):
-        values = []
-        for key in self._key_list:
-            values.append(self[key])
-        return values
-
-    ## Return a copy of (key, value) list
-    def items(self):
-        items = []
-        for key in self._key_list:
-            items.append((key, self[key]))
-        return items
-
-    ## Iteration support
-    def iteritems(self):
-        return iter(self.items())
-
-    ## Keys interation support
-    def iterkeys(self):
-        return iter(self.keys())
-
-    ## Values interation support
-    def itervalues(self):
-        return iter(self.values())
-
-    ## Return value related to a key, and remove the (key, value) from the dict
-    def pop(self, key, *dv):
-        value = None
-        if key in self._key_list:
-            value = self[key]
-            self.__delitem__(key)
-        elif len(dv) != 0 :
-            value = kv[0]
-        return value
-
-    ## Return (key, value) pair, and remove the (key, value) from the dict
-    def popitem(self):
-        key = self._key_list[-1]
-        value = self[key]
-        self.__delitem__(key)
-        return key, value
-
-    def update(self, dict=None, **kwargs):
-        if dict is not None:
-            for k, v in dict.items():
-                self[k] = v
-        if len(kwargs):
-            for k, v in kwargs.items():
-                self[k] = v
 
 ## Dictionary using prioritized list as key
 #
 class tdict:
     _ListType = type([])
@@ -1744,11 +1623,11 @@ class SkuClass():
             if skuid_num > 0xFFFFFFFFFFFFFFFF:
                 EdkLogger.error("build", PARAMETER_INVALID,
                             ExtraData = "SKU-ID [%s] value %s exceeds the max value of UINT64"
                                       % (SkuName, SkuId))
 
-        self.AvailableSkuIds = sdict()
+        self.AvailableSkuIds = OrderedDict()
         self.SkuIdSet = []
         self.SkuIdNumberSet = []
         self.SkuData = SkuIds
         self._SkuInherit = {}
         self._SkuIdentifier = SkuIdentifier
-- 
2.20.1.windows.1
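collections.OrderedDict provides the insertion-order guarantees that the deleted sdict class implemented by hand with its internal _key_list, which is why the class can be removed outright. A minimal sketch of the AvailableSkuIds usage (the SKU names and IDs are made up):

```python
from collections import OrderedDict

# Keys come back in insertion order, exactly what sdict guaranteed.
available_sku_ids = OrderedDict()
available_sku_ids["DEFAULT"] = 0
available_sku_ids["SKU1"] = 1
available_sku_ids["SKU2"] = 2
```

Plain dicts also preserve insertion order since Python 3.7, but OrderedDict states the intent explicitly and behaves the same on 3.6 and earlier.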




* [Patch 06/33] BaseTools: namedtuple not have verbose parameter in python3
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (4 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 05/33] BaseTools: use OrderedDict instead of sdict Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 07/33] BaseTools: Remove unnecessary super function Feng, Bob C
                   ` (27 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Yunhua Feng, Bob Feng, Liming Gao, Yonghong Zhu

From: Yunhua Feng <yunhuax.feng@intel.com>

collections.namedtuple() does not have the verbose parameter in Python3.

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Yunhua Feng <yunhuax.feng@intel.com>
---
 BaseTools/Source/Python/Workspace/BuildClassObject.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/BaseTools/Source/Python/Workspace/BuildClassObject.py b/BaseTools/Source/Python/Workspace/BuildClassObject.py
index 73920c5153..b67414b930 100644
--- a/BaseTools/Source/Python/Workspace/BuildClassObject.py
+++ b/BaseTools/Source/Python/Workspace/BuildClassObject.py
@@ -353,11 +353,11 @@ class StructurePcd(PcdClassObject):
         new_pcd.PcdFieldValueFromComm = CopyDict(self.PcdFieldValueFromComm)
         new_pcd.PcdFieldValueFromFdf = CopyDict(self.PcdFieldValueFromFdf)
         new_pcd.ValueChain = {item for item in self.ValueChain}
         return new_pcd
 
-LibraryClassObject = namedtuple('LibraryClassObject', ['LibraryClass','SupModList'], verbose=False)
+LibraryClassObject = namedtuple('LibraryClassObject', ['LibraryClass','SupModList'])
 
 ## ModuleBuildClassObject
 #
 # This Class defines ModuleBuildClass
 #
-- 
2.20.1.windows.1
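Dropping verbose=False changes nothing at runtime: verbose only controlled debug printing of the generated class source in Python2, and the parameter was removed from namedtuple() in Python 3.7. A sketch with illustrative field values (the library-class names are made up):

```python
from collections import namedtuple

# Same definition as the patch, minus the removed verbose parameter.
LibraryClassObject = namedtuple('LibraryClassObject', ['LibraryClass', 'SupModList'])

obj = LibraryClassObject('BaseLib', ['PEIM', 'DXE_DRIVER'])
```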




* [Patch 07/33] BaseTools: Remove unnecessary super function
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (5 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 06/33] BaseTools: namedtuple not have verbose parameter in python3 Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 08/33] BaseTools: replace long by int Feng, Bob C
                   ` (26 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Yunhua Feng, Bob Feng, Liming Gao, Yonghong Zhu

From: Yunhua Feng <yunhuax.feng@intel.com>

Remove unnecessary super().__init__() calls from the AutoGen classes.

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Yunhua Feng <yunhuax.feng@intel.com>
---
 BaseTools/Source/Python/AutoGen/AutoGen.py | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/AutoGen.py b/BaseTools/Source/Python/AutoGen/AutoGen.py
index f4cfb0830d..0bed416c52 100644
--- a/BaseTools/Source/Python/AutoGen/AutoGen.py
+++ b/BaseTools/Source/Python/AutoGen/AutoGen.py
@@ -267,12 +267,10 @@ class AutoGen(object):
             return cls.__ObjectCache[Key]
             # it didnt exist. create it, cache it, then return it
         RetVal = cls.__ObjectCache[Key] = super(AutoGen, cls).__new__(cls)
         return RetVal
 
-    def __init__ (self, Workspace, MetaFile, Target, Toolchain, Arch, *args, **kwargs):
-        super(AutoGen, self).__init__(self, Workspace, MetaFile, Target, Toolchain, Arch, *args, **kwargs)
 
     ## hash() operator
     #
     #  The file path of platform file will be used to represent hash value of this object
     #
@@ -301,11 +299,10 @@ class AutoGen(object):
 #
 class WorkspaceAutoGen(AutoGen):
     # call super().__init__ then call the worker function with different parameter count
     def __init__(self, Workspace, MetaFile, Target, Toolchain, Arch, *args, **kwargs):
         if not hasattr(self, "_Init"):
-            super(WorkspaceAutoGen, self).__init__(Workspace, MetaFile, Target, Toolchain, Arch, *args, **kwargs)
             self._InitWorker(Workspace, MetaFile, Target, Toolchain, Arch, *args, **kwargs)
             self._Init = True
 
     ## Initialize WorkspaceAutoGen
     #
@@ -1092,11 +1089,10 @@ class WorkspaceAutoGen(AutoGen):
 #
 class PlatformAutoGen(AutoGen):
     # call super().__init__ then call the worker function with different parameter count
     def __init__(self, Workspace, MetaFile, Target, Toolchain, Arch, *args, **kwargs):
         if not hasattr(self, "_Init"):
-            super(PlatformAutoGen, self).__init__(self, Workspace, MetaFile, Target, Toolchain, Arch, *args, **kwargs)
             self._InitWorker(Workspace, MetaFile, Target, Toolchain, Arch)
             self._Init = True
     #
     # Used to store all PCDs for both PEI and DXE phase, in order to generate
     # correct PCD database
@@ -2524,11 +2520,10 @@ def _MakeDir(PathList):
 #
 class ModuleAutoGen(AutoGen):
     # call super().__init__ then call the worker function with different parameter count
     def __init__(self, Workspace, MetaFile, Target, Toolchain, Arch, *args, **kwargs):
         if not hasattr(self, "_Init"):
-            super(ModuleAutoGen, self).__init__(Workspace, MetaFile, Target, Toolchain, Arch, *args, **kwargs)
             self._InitWorker(Workspace, MetaFile, Target, Toolchain, Arch, *args)
             self._Init = True
 
     ## Cache the timestamps of metafiles of every module in a class attribute
     #
-- 
2.20.1.windows.1
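The removed __init__ overrides were redundant because AutoGen.__new__ already returns a cached instance per key, and each subclass guards its worker call with a hasattr check, so initialization runs only once. A simplified sketch of that caching pattern (class and attribute names abbreviated from AutoGen.py; not the full signatures):

```python
class AutoGen(object):
    _object_cache = {}

    def __new__(cls, key, *args, **kwargs):
        # One instance per (class, key); later constructions return the
        # cached object instead of allocating a new one.
        cache_key = (cls, key)
        if cache_key in cls._object_cache:
            return cls._object_cache[cache_key]
        obj = cls._object_cache[cache_key] = super(AutoGen, cls).__new__(cls)
        return obj

class WorkspaceAutoGen(AutoGen):
    def __init__(self, key):
        # __init__ still runs on every construction, so guard the real
        # initialization exactly like the BaseTools classes do.
        if not hasattr(self, "_Init"):
            self._init_worker(key)
            self._Init = True

    def _init_worker(self, key):
        self.key = key

a = WorkspaceAutoGen("X64")
b = WorkspaceAutoGen("X64")
assert a is b  # same cached instance, initialized only once
```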




* [Patch 08/33] BaseTools: replace long by int
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (6 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 07/33] BaseTools: Remove unnecessary super function Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 09/33] BaseTools:Solve the data sorting problem use python3 Feng, Bob C
                   ` (25 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Yunhua Feng, Bob Feng, Liming Gao, Yonghong Zhu

From: Yunhua Feng <yunhuax.feng@intel.com>

Replace long with int, because the long type does not exist in Python3
(int has unlimited precision there).

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Yunhua Feng <yunhuax.feng@intel.com>
---
 BaseTools/Source/Python/AutoGen/ValidCheckingInfoObject.py     |  4 ++--
 BaseTools/Source/Python/Common/Misc.py                         |  2 +-
 BaseTools/Source/Python/GenFds/AprioriSection.py               |  2 +-
 BaseTools/Source/Python/GenFds/FdfParser.py                    | 18 +++++++++---------
 BaseTools/Source/Python/GenFds/GenFds.py                       |  8 ++++----
 BaseTools/Source/Python/Pkcs7Sign/Pkcs7Sign.py                 | 10 +++++-----
 BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py |  4 ++--
 BaseTools/Source/Python/UPT/Parser/DecParser.py                |  6 +++---
 8 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/ValidCheckingInfoObject.py b/BaseTools/Source/Python/AutoGen/ValidCheckingInfoObject.py
index edd40a1498..6ddf38fd0d 100644
--- a/BaseTools/Source/Python/AutoGen/ValidCheckingInfoObject.py
+++ b/BaseTools/Source/Python/AutoGen/ValidCheckingInfoObject.py
@@ -55,11 +55,11 @@ class VAR_CHECK_PCD_VARIABLE_TAB_CONTAINER(object):
             itemIndex = 0
             for item in var_check_tab.validtab:
                 itemIndex += 1
                 realLength += 5
                 for v_data in item.data:
-                    if type(v_data) in (int, long):
+                    if isinstance(v_data, int):
                         realLength += item.StorageWidth
                     else:
                         realLength += item.StorageWidth
                         realLength += item.StorageWidth
                 if (index == len(self.var_check_info)) :
@@ -135,11 +135,11 @@ class VAR_CHECK_PCD_VARIABLE_TAB_CONTAINER(object):
 
                 b = pack("=B", item.StorageWidth)
                 Buffer += b
                 realLength += 1
                 for v_data in item.data:
-                    if type(v_data) in (int, long):
+                    if isinstance(v_data, int):
                         b = pack(PACK_CODE_BY_SIZE[item.StorageWidth], v_data)
                         Buffer += b
                         realLength += item.StorageWidth
                     else:
                         b = pack(PACK_CODE_BY_SIZE[item.StorageWidth], v_data[0])
diff --git a/BaseTools/Source/Python/Common/Misc.py b/BaseTools/Source/Python/Common/Misc.py
index 9967ac470c..1b8a4bef2e 100644
--- a/BaseTools/Source/Python/Common/Misc.py
+++ b/BaseTools/Source/Python/Common/Misc.py
@@ -1311,11 +1311,11 @@ def CheckPcdDatum(Type, Value):
                           ", FALSE, False, false, 0x0, 0x00, 0" % (Value, Type)
     elif Type in [TAB_UINT8, TAB_UINT16, TAB_UINT32, TAB_UINT64]:
         if Value and int(Value, 0) < 0:
             return False, "PCD can't be set to negative value[%s] for datum type [%s]" % (Value, Type)
         try:
-            Value = long(Value, 0)
+            Value = int(Value, 0)
             if Value > MAX_VAL_TYPE[Type]:
                 return False, "Too large PCD value[%s] for datum type [%s]" % (Value, Type)
         except:
             return False, "Invalid value [%s] of type [%s];"\
                           " must be a hexadecimal, decimal or octal in C language format." % (Value, Type)
diff --git a/BaseTools/Source/Python/GenFds/AprioriSection.py b/BaseTools/Source/Python/GenFds/AprioriSection.py
index e7b856a115..55d99320c7 100644
--- a/BaseTools/Source/Python/GenFds/AprioriSection.py
+++ b/BaseTools/Source/Python/GenFds/AprioriSection.py
@@ -95,11 +95,11 @@ class AprioriSection (object):
                         EdkLoggerError("GenFds", RESOURCE_NOT_AVAILABLE,
                                         "INF %s not found in build ARCH %s!" \
                                         % (InfFileName, GenFdsGlobalVariable.ArchList))
 
             GuidPart = Guid.split('-')
-            Buffer.write(pack('I', long(GuidPart[0], 16)))
+            Buffer.write(pack('I', int(GuidPart[0], 16)))
             Buffer.write(pack('H', int(GuidPart[1], 16)))
             Buffer.write(pack('H', int(GuidPart[2], 16)))
 
             for Num in range(2):
                 Char = GuidPart[3][Num*2:Num*2+2]
diff --git a/BaseTools/Source/Python/GenFds/FdfParser.py b/BaseTools/Source/Python/GenFds/FdfParser.py
index 3bcfd2d42a..69cb7de8e5 100644
--- a/BaseTools/Source/Python/GenFds/FdfParser.py
+++ b/BaseTools/Source/Python/GenFds/FdfParser.py
@@ -1560,11 +1560,11 @@ class FdfParser:
                 Obj.SizePcd = pcdPair
                 self.Profile.PcdDict[pcdPair] = Size
                 self.SetPcdLocalation(pcdPair)
                 FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber)
                 self.Profile.PcdFileLineDict[pcdPair] = FileLineTuple
-            Obj.Size = long(Size, 0)
+            Obj.Size = int(Size, 0)
             return True
 
         if self._IsKeyword("ErasePolarity"):
             if not self._IsToken(TAB_EQUAL_SPLIT):
                 raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber)
@@ -1595,21 +1595,21 @@ class FdfParser:
                 raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber)
 
             if not self._GetNextDecimalNumber() and not self._GetNextHexNumber():
                 raise Warning.Expected("address", self.FileName, self.CurrentLineNumber)
 
-            BsAddress = long(self._Token, 0)
+            BsAddress = int(self._Token, 0)
             Obj.BsBaseAddress = BsAddress
 
         if self._IsKeyword("RtBaseAddress"):
             if not self._IsToken(TAB_EQUAL_SPLIT):
                 raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber)
 
             if not self._GetNextDecimalNumber() and not self._GetNextHexNumber():
                 raise Warning.Expected("address", self.FileName, self.CurrentLineNumber)
 
-            RtAddress = long(self._Token, 0)
+            RtAddress = int(self._Token, 0)
             Obj.RtBaseAddress = RtAddress
 
     ## _GetBlockStatements() method
     #
     #   Get block statements
@@ -1653,21 +1653,21 @@ class FdfParser:
             BlockSizePcd = PcdPair
             self.Profile.PcdDict[PcdPair] = BlockSize
             self.SetPcdLocalation(PcdPair)
             FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber)
             self.Profile.PcdFileLineDict[PcdPair] = FileLineTuple
-        BlockSize = long(BlockSize, 0)
+        BlockSize = int(BlockSize, 0)
 
         BlockNumber = None
         if self._IsKeyword("NumBlocks"):
             if not self._IsToken(TAB_EQUAL_SPLIT):
                 raise Warning.ExpectedEquals(self.FileName, self.CurrentLineNumber)
 
             if not self._GetNextDecimalNumber() and not self._GetNextHexNumber():
                 raise Warning.Expected("block numbers", self.FileName, self.CurrentLineNumber)
 
-            BlockNumber = long(self._Token, 0)
+            BlockNumber = int(self._Token, 0)
 
         Obj.BlockSizeList.append((BlockSize, BlockNumber, BlockSizePcd))
         return True
 
     ## _GetDefineStatements() method
@@ -1772,11 +1772,11 @@ class FdfParser:
             if CurCh in '|\r\n' and PairCount == 0:
                 break
             Expr += CurCh
             self._GetOneChar()
         try:
-            return long(
+            return int(
                 ValueExpression(Expr,
                                 self._CollectMacroPcd()
                                 )(True), 0)
         except Exception:
             self.SetFileBufferPos(StartPos)
@@ -1820,11 +1820,11 @@ class FdfParser:
             self._UndoToken()
             IsRegionPcd = (RegionSizeGuidPattern.match(self._CurrentLine()[self.CurrentOffsetWithinLine:]) or
                            RegionOffsetPcdPattern.match(self._CurrentLine()[self.CurrentOffsetWithinLine:]))
             if IsRegionPcd:
                 RegionObj.PcdOffset = self._GetNextPcdSettings()
-                self.Profile.PcdDict[RegionObj.PcdOffset] = "0x%08X" % (RegionObj.Offset + long(theFd.BaseAddress, 0))
+                self.Profile.PcdDict[RegionObj.PcdOffset] = "0x%08X" % (RegionObj.Offset + int(theFd.BaseAddress, 0))
                 self.SetPcdLocalation(RegionObj.PcdOffset)
                 self._PcdDict['%s.%s' % (RegionObj.PcdOffset[1], RegionObj.PcdOffset[0])] = "0x%x" % RegionObj.Offset
                 FileLineTuple = GetRealFileLine(self.FileName, self.CurrentLineNumber)
                 self.Profile.PcdFileLineDict[RegionObj.PcdOffset] = FileLineTuple
                 if self._IsToken(TAB_VALUE_SPLIT):
@@ -3132,13 +3132,13 @@ class FdfParser:
                         FmpData.HardwareInstance = Value
                 elif Name == 'MONOTONIC_COUNT':
                     if FdfParser._Verify(Name, Value, 'UINT64'):
                         FmpData.MonotonicCount = Value
                         if FmpData.MonotonicCount.upper().startswith('0X'):
-                            FmpData.MonotonicCount = (long)(FmpData.MonotonicCount, 16)
+                            FmpData.MonotonicCount = int(FmpData.MonotonicCount, 16)
                         else:
-                            FmpData.MonotonicCount = (long)(FmpData.MonotonicCount)
+                            FmpData.MonotonicCount = int(FmpData.MonotonicCount)
             if not self._GetNextToken():
                 break
         else:
             self._UndoToken()
 
diff --git a/BaseTools/Source/Python/GenFds/GenFds.py b/BaseTools/Source/Python/GenFds/GenFds.py
index 2a5d473e3f..ae5d7fd26d 100644
--- a/BaseTools/Source/Python/GenFds/GenFds.py
+++ b/BaseTools/Source/Python/GenFds/GenFds.py
@@ -624,13 +624,13 @@ class GenFds(object):
                     FvSpaceInfoList.append((FvName, Total, Used, Free))
 
         GenFdsGlobalVariable.InfLogger('\nFV Space Information')
         for FvSpaceInfo in FvSpaceInfoList:
             Name = FvSpaceInfo[0]
-            TotalSizeValue = long(FvSpaceInfo[1], 0)
-            UsedSizeValue = long(FvSpaceInfo[2], 0)
-            FreeSizeValue = long(FvSpaceInfo[3], 0)
+            TotalSizeValue = int(FvSpaceInfo[1], 0)
+            UsedSizeValue = int(FvSpaceInfo[2], 0)
+            FreeSizeValue = int(FvSpaceInfo[3], 0)
             if UsedSizeValue == TotalSizeValue:
                 Percentage = '100'
             else:
                 Percentage = str((UsedSizeValue + 0.0) / TotalSizeValue)[0:4].lstrip('0.')
 
@@ -653,11 +653,11 @@ class GenFds(object):
                 break
 
         if PcdValue == '':
             return
 
-        Int64PcdValue = long(PcdValue, 0)
+        Int64PcdValue = int(PcdValue, 0)
         if Int64PcdValue == 0 or Int64PcdValue < -1:
             return
 
         TopAddress = 0
         if Int64PcdValue > 0:
diff --git a/BaseTools/Source/Python/Pkcs7Sign/Pkcs7Sign.py b/BaseTools/Source/Python/Pkcs7Sign/Pkcs7Sign.py
index a44781f2e8..c5c99d94ef 100644
--- a/BaseTools/Source/Python/Pkcs7Sign/Pkcs7Sign.py
+++ b/BaseTools/Source/Python/Pkcs7Sign/Pkcs7Sign.py
@@ -139,15 +139,15 @@ if __name__ == '__main__':
     sys.exit(1)
   args.OutputFileName = args.OutputFile
 
   try:
     if args.MonotonicCountStr.upper().startswith('0X'):
-      args.MonotonicCountValue = (long)(args.MonotonicCountStr, 16)
+      args.MonotonicCountValue = int(args.MonotonicCountStr, 16)
     else:
-      args.MonotonicCountValue = (long)(args.MonotonicCountStr)
+      args.MonotonicCountValue = int(args.MonotonicCountStr)
   except:
-    args.MonotonicCountValue = (long)(0)
+    args.MonotonicCountValue = int(0)
 
   if args.Encode:
     #
     # Save signer private cert filename and close private cert file
     #
@@ -249,13 +249,13 @@ if __name__ == '__main__':
     if not args.SignatureSizeStr:
       print("ERROR: please use the option --signature-size to specify the size of the signature data!")
       sys.exit(1)
     else:
       if args.SignatureSizeStr.upper().startswith('0X'):
-        SignatureSize = (long)(args.SignatureSizeStr, 16)
+        SignatureSize = int(args.SignatureSizeStr, 16)
       else:
-        SignatureSize = (long)(args.SignatureSizeStr)
+        SignatureSize = int(args.SignatureSizeStr)
     if SignatureSize < 0:
         print("ERROR: The value of option --signature-size can't be set to negative value!")
         sys.exit(1)
     elif SignatureSize > len(args.InputFileBuffer):
         print("ERROR: The value of option --signature-size is exceed the size of the input file !")
diff --git a/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py b/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py
index 370ae2e3fa..3a1a2dff00 100644
--- a/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py
+++ b/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py
@@ -161,13 +161,13 @@ if __name__ == '__main__':
     sys.exit(Process.returncode)
 
   if args.MonotonicCountStr:
     try:
       if args.MonotonicCountStr.upper().startswith('0X'):
-        args.MonotonicCountValue = (long)(args.MonotonicCountStr, 16)
+        args.MonotonicCountValue = int(args.MonotonicCountStr, 16)
       else:
-        args.MonotonicCountValue = (long)(args.MonotonicCountStr)
+        args.MonotonicCountValue = int(args.MonotonicCountStr)
     except:
         pass
 
   if args.Encode:
     FullInputFileBuffer = args.InputFileBuffer
diff --git a/BaseTools/Source/Python/UPT/Parser/DecParser.py b/BaseTools/Source/Python/UPT/Parser/DecParser.py
index a88b51d055..8f3d60df57 100644
--- a/BaseTools/Source/Python/UPT/Parser/DecParser.py
+++ b/BaseTools/Source/Python/UPT/Parser/DecParser.py
@@ -618,15 +618,15 @@ class _DecPcd(_DecBase):
         #
         Token = TokenList[-1].strip()
         if not IsValidToken(PCD_TOKEN_PATTERN, Token):
             self._LoggerError(ST.ERR_DECPARSE_PCD_TOKEN % Token)
         elif not Token.startswith('0x') and not Token.startswith('0X'):
-            if long(Token) > 4294967295:
+            if int(Token) > 4294967295:
                 self._LoggerError(ST.ERR_DECPARSE_PCD_TOKEN_INT % Token)
-            Token = hex(long(Token))[:-1]
+            Token = hex(int(Token))
 
-        IntToken = long(Token, 0)
+        IntToken = int(Token, 0)
         if (Guid, IntToken) in self.TokenMap:
             if self.TokenMap[Guid, IntToken] != CName:
                 self._LoggerError(ST.ERR_DECPARSE_PCD_TOKEN_UNIQUE%(Token))
         else:
             self.TokenMap[Guid, IntToken] = CName
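
The whole patch follows one mechanical rule, which a few lines of plain Python 3 (independent of the BaseTools sources) illustrate:

```python
# Python 3 removed the separate long type: int has arbitrary precision,
# so every long(x, base) call above becomes int(x, base) unchanged.
value = int("0xFFFFFFFFFFFFFFFF", 16)   # base 16 accepts the 0x prefix
assert value == 2**64 - 1
assert isinstance(value, int)           # no long type for it to overflow into

# Base 0 keeps the C-style prefix handling the FDF/DEC parsers rely on.
assert int("0x100", 0) == 256
assert int("64", 0) == 64
```

The same int() call also replaces the `(long)(x)` cast spelling used in Pkcs7Sign.py and Rsa2048Sha256Sign.py.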
-- 
2.20.1.windows.1




* [Patch 09/33] BaseTools: Solve the data sorting problem under python3
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (7 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 08/33] BaseTools: replace long by int Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 10/33] BaseTools: Update argparse arguments since it no longer has a version parameter Feng, Bob C
                   ` (24 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao, Yonghong Zhu

From: Zhijux Fan <zhijux.fan@intel.com>

Set PYTHONHASHSEED in the build setup scripts.
Specifying the value 0 disables Python 3's hash randomization, so
dictionary and set iteration order is stable across runs and the
sorting of build output is reproducible.
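
The effect can be observed by launching child interpreters with the variable set. This standalone sketch (not part of the patch) shows that a fixed seed makes set iteration order repeatable across interpreter runs:

```python
import os
import subprocess
import sys

def set_order(seed):
    """Report the set iteration order seen by a fresh interpreter."""
    env = dict(os.environ, PYTHONHASHSEED=seed)
    out = subprocess.check_output(
        [sys.executable, "-c", "print(list({'a', 'b', 'c', 'd'}))"],
        env=env)
    return out.decode().strip()

# With hash randomization disabled, every run yields the same order,
# which is what makes BaseTools output deterministic across builds.
assert set_order("0") == set_order("0")
```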

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/toolsetup.bat | 1 +
 edksetup.sh             | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/BaseTools/toolsetup.bat b/BaseTools/toolsetup.bat
index b58560d4d7..1cac3105c2 100755
--- a/BaseTools/toolsetup.bat
+++ b/BaseTools/toolsetup.bat
@@ -272,10 +272,11 @@ goto check_build_environment
   echo.
   echo !!! ERROR !!! Binary C tools are missing. They are requried to be built from BaseTools Source.
   echo.
 
 :check_build_environment
+  set PYTHONHASHSEED=0
   if defined BASETOOLS_PYTHON_SOURCE goto VisualStudioAvailable
 
   if not defined BASE_TOOLS_PATH (
      if not exist "Source\C\Makefile" (
        if not exist "%EDK_TOOLS_PATH%\Source\C\Makefile" goto no_source_files
diff --git a/edksetup.sh b/edksetup.sh
index 93d6525758..3dee8c5d61 100755
--- a/edksetup.sh
+++ b/edksetup.sh
@@ -75,11 +75,11 @@ function SetWorkspace()
 
   #
   # Set $WORKSPACE
   #
   export WORKSPACE=`pwd`
-
+  export PYTHONHASHSEED=0
   return 0
 }
 
 function SetupEnv()
 {
-- 
2.20.1.windows.1




* [Patch 10/33] BaseTools: Update argparse arguments since it no longer has a version parameter
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (8 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 09/33] BaseTools: Solve the data sorting problem under python3 Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 11/33] BaseTools: Handle octal-like data values Feng, Bob C
                   ` (23 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao, Yonghong Zhu

From: Zhijux Fan <zhijux.fan@intel.com>

argparse.ArgumentParser no longer accepts a version parameter in
Python 3, so register an explicit --version argument instead.
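
The replacement pattern used across all three scripts can be sketched standalone (the option names here are illustrative, not the tools' real full option sets):

```python
import argparse

__version__ = "0.1"   # stand-in for the scripts' real version strings

# Python 2-era argparse accepted version= in the ArgumentParser
# constructor; it was later removed, so a --version option is
# registered explicitly with action="version" instead.
parser = argparse.ArgumentParser(prog="demo", conflict_handler="resolve")
parser.add_argument("--version", action="version", version=__version__)
parser.add_argument("-o", "--output", dest="OutputFile", type=str)

args = parser.parse_args(["-o", "out.bin"])
assert args.OutputFile == "out.bin"   # normal options are unaffected
```

Passing `--version` still prints the version string and exits, matching the old constructor behavior.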

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/Pkcs7Sign/Pkcs7Sign.py                         | 3 ++-
 BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256GenerateKeys.py | 3 ++-
 BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py         | 3 ++-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/BaseTools/Source/Python/Pkcs7Sign/Pkcs7Sign.py b/BaseTools/Source/Python/Pkcs7Sign/Pkcs7Sign.py
index c5c99d94ef..2a7c308895 100644
--- a/BaseTools/Source/Python/Pkcs7Sign/Pkcs7Sign.py
+++ b/BaseTools/Source/Python/Pkcs7Sign/Pkcs7Sign.py
@@ -74,14 +74,15 @@ TEST_TRUSTED_PUBLIC_CERT_FILENAME = 'TestRoot.pub.pem'
 
 if __name__ == '__main__':
   #
   # Create command line argument parser object
   #
-  parser = argparse.ArgumentParser(prog=__prog__, version=__version__, usage=__usage__, description=__copyright__, conflict_handler='resolve')
+  parser = argparse.ArgumentParser(prog=__prog__, usage=__usage__, description=__copyright__, conflict_handler='resolve')
   group = parser.add_mutually_exclusive_group(required=True)
   group.add_argument("-e", action="store_true", dest='Encode', help='encode file')
   group.add_argument("-d", action="store_true", dest='Decode', help='decode file')
+  group.add_argument("--version", action='version', version=__version__)
   parser.add_argument("-o", "--output", dest='OutputFile', type=str, metavar='filename', help="specify the output filename", required=True)
   parser.add_argument("--signer-private-cert", dest='SignerPrivateCertFile', type=argparse.FileType('rb'), help="specify the signer private cert filename.  If not specified, a test signer private cert is used.")
   parser.add_argument("--other-public-cert", dest='OtherPublicCertFile', type=argparse.FileType('rb'), help="specify the other public cert filename.  If not specified, a test other public cert is used.")
   parser.add_argument("--trusted-public-cert", dest='TrustedPublicCertFile', type=argparse.FileType('rb'), help="specify the trusted public cert filename.  If not specified, a test trusted public cert is used.")
   parser.add_argument("--monotonic-count", dest='MonotonicCountStr', type=str, help="specify the MonotonicCount in FMP capsule.  If not specified, 0 is used.")
diff --git a/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256GenerateKeys.py b/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256GenerateKeys.py
index a34dac423b..f96ceb2637 100644
--- a/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256GenerateKeys.py
+++ b/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256GenerateKeys.py
@@ -41,12 +41,13 @@ __usage__     = '%s [options]' % (__prog__)
 
 if __name__ == '__main__':
   #
   # Create command line argument parser object
   #
-  parser = argparse.ArgumentParser(prog=__prog__, version=__version__, usage=__usage__, description=__copyright__, conflict_handler='resolve')
+  parser = argparse.ArgumentParser(prog=__prog__, usage=__usage__, description=__copyright__, conflict_handler='resolve')
   group = parser.add_mutually_exclusive_group(required=True)
+  group.add_argument("--version", action='version', version=__version__)
   group.add_argument("-o", "--output", dest='OutputFile', type=argparse.FileType('wb'), metavar='filename', nargs='*', help="specify the output private key filename in PEM format")
   group.add_argument("-i", "--input", dest='InputFile', type=argparse.FileType('rb'), metavar='filename', nargs='*', help="specify the input private key filename in PEM format")
   parser.add_argument("--public-key-hash", dest='PublicKeyHashFile', type=argparse.FileType('wb'), help="specify the public key hash filename that is SHA 256 hash of 2048 bit RSA public key in binary format")
   parser.add_argument("--public-key-hash-c", dest='PublicKeyHashCFile', type=argparse.FileType('wb'), help="specify the public key hash filename that is SHA 256 hash of 2048 bit RSA public key in C structure format")
   parser.add_argument("-v", "--verbose", dest='Verbose', action="store_true", help="increase output messages")
diff --git a/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py b/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py
index 3a1a2dff00..d2bb0c998c 100644
--- a/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py
+++ b/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py
@@ -60,14 +60,15 @@ TEST_SIGNING_PRIVATE_KEY_FILENAME = 'TestSigningPrivateKey.pem'
 
 if __name__ == '__main__':
   #
   # Create command line argument parser object
   #
-  parser = argparse.ArgumentParser(prog=__prog__, version=__version__, usage=__usage__, description=__copyright__, conflict_handler='resolve')
+  parser = argparse.ArgumentParser(prog=__prog__, usage=__usage__, description=__copyright__, conflict_handler='resolve')
   group = parser.add_mutually_exclusive_group(required=True)
   group.add_argument("-e", action="store_true", dest='Encode', help='encode file')
   group.add_argument("-d", action="store_true", dest='Decode', help='decode file')
+  group.add_argument("--version", action='version', version=__version__)
   parser.add_argument("-o", "--output", dest='OutputFile', type=str, metavar='filename', help="specify the output filename", required=True)
   parser.add_argument("--monotonic-count", dest='MonotonicCountStr', type=str, help="specify the MonotonicCount in FMP capsule.")
   parser.add_argument("--private-key", dest='PrivateKeyFile', type=argparse.FileType('rb'), help="specify the private key filename.  If not specified, a test signing key is used.")
   parser.add_argument("-v", "--verbose", dest='Verbose', action="store_true", help="increase output messages")
   parser.add_argument("-q", "--quiet", dest='Quiet', action="store_true", help="reduce output messages")
-- 
2.20.1.windows.1




* [Patch 11/33] BaseTools: Handle octal-like data values
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (9 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 10/33] BaseTools: Update argparse arguments since it no longer has a version parameter Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 12/33] BaseTools/UPT: Make the UPT tool run on both Python2 and Python3 Feng, Bob C
                   ` (22 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao, Yonghong Zhu

From: Zhijux Fan <zhijux.fan@intel.com>

In Python 3, int(Value, 0) raises an error when Value has a leading
zero (Python 2's legacy octal notation), so strip the leading zeros
before converting.
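
A minimal sketch of the problem and the fix; the helper name below is made up for illustration (the real patch inlines the same check at each call site):

```python
def normalize_numeric(value):
    # Mirror the patch: drop leading zeros on decimal-looking values so
    # int(value, 0) does not reject them as malformed octal on Python 3.
    if value.startswith('0') and not value.lower().startswith('0x') and len(value) > 2:
        value = value.lstrip('0')
    return int(value, 0)

try:
    int("0123", 0)            # Python 3 rejects the legacy octal form
    raised = False
except ValueError:
    raised = True

assert raised
assert normalize_numeric("0123") == 123
assert normalize_numeric("0x10") == 16   # hex values pass through untouched
```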

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/AutoGen/GenC.py           |  2 ++
 BaseTools/Source/Python/Common/Misc.py            |  6 ++++--
 BaseTools/Source/Python/Workspace/DscBuildData.py |  3 ++-
 BaseTools/Source/Python/build/BuildReport.py      | 12 ++++++++++++
 4 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/GenC.py b/BaseTools/Source/Python/AutoGen/GenC.py
index 500a78f058..915ba2e235 100644
--- a/BaseTools/Source/Python/AutoGen/GenC.py
+++ b/BaseTools/Source/Python/AutoGen/GenC.py
@@ -1008,10 +1008,12 @@ def CreateModulePcdCode(Info, AutoGenC, AutoGenH, Pcd):
 
         if Pcd.DatumType in TAB_PCD_CLEAN_NUMERIC_TYPES:
             try:
                 if Value.upper().endswith('L'):
                     Value = Value[:-1]
+                if Value.startswith('0') and not Value.lower().startswith('0x') and len(Value) > 2:
+                    Value = Value.lstrip('0')
                 ValueNumber = int (Value, 0)
             except:
                 EdkLogger.error("build", AUTOGEN_ERROR,
                                 "PCD value is not valid dec or hex number for datum type [%s] of PCD %s.%s" % (Pcd.DatumType, Pcd.TokenSpaceGuidCName, TokenCName),
                                 ExtraData="[%s]" % str(Info))
diff --git a/BaseTools/Source/Python/Common/Misc.py b/BaseTools/Source/Python/Common/Misc.py
index 1b8a4bef2e..d23a075f43 100644
--- a/BaseTools/Source/Python/Common/Misc.py
+++ b/BaseTools/Source/Python/Common/Misc.py
@@ -1308,13 +1308,15 @@ def CheckPcdDatum(Type, Value):
     elif Type == 'BOOLEAN':
         if Value not in ['TRUE', 'True', 'true', '0x1', '0x01', '1', 'FALSE', 'False', 'false', '0x0', '0x00', '0']:
             return False, "Invalid value [%s] of type [%s]; must be one of TRUE, True, true, 0x1, 0x01, 1"\
                           ", FALSE, False, false, 0x0, 0x00, 0" % (Value, Type)
     elif Type in [TAB_UINT8, TAB_UINT16, TAB_UINT32, TAB_UINT64]:
-        if Value and int(Value, 0) < 0:
-            return False, "PCD can't be set to negative value[%s] for datum type [%s]" % (Value, Type)
+        if Value.startswith('0') and not Value.lower().startswith('0x') and len(Value) > 2:
+            Value = Value.lstrip('0')
         try:
+            if Value and int(Value, 0) < 0:
+                return False, "PCD can't be set to negative value[%s] for datum type [%s]" % (Value, Type)
             Value = int(Value, 0)
             if Value > MAX_VAL_TYPE[Type]:
                 return False, "Too large PCD value[%s] for datum type [%s]" % (Value, Type)
         except:
             return False, "Invalid value [%s] of type [%s];"\
diff --git a/BaseTools/Source/Python/Workspace/DscBuildData.py b/BaseTools/Source/Python/Workspace/DscBuildData.py
index 9c5596927f..c2bc705091 100644
--- a/BaseTools/Source/Python/Workspace/DscBuildData.py
+++ b/BaseTools/Source/Python/Workspace/DscBuildData.py
@@ -37,10 +37,11 @@ from Common.Misc import ProcessDuplicatedInf,RemoveCComments
 import re
 from Common.Parsing import IsValidWord
 from Common.VariableAttributes import VariableAttributes
 import Common.GlobalData as GlobalData
 import subprocess
+from functools import reduce
 from Common.Misc import SaveFileOnChange
 from Workspace.BuildClassObject import PlatformBuildClassObject, StructurePcd, PcdClassObject, ModuleBuildClassObject
 from collections import OrderedDict, defaultdict
 from .BuildClassObject import ArrayIndex
 
@@ -1926,11 +1927,11 @@ class DscBuildData(PlatformBuildClassObject):
         index_elements = ArrayIndex.findall(index)
         pcd_capacity = Pcd.Capacity
         if index:
             indicator = "(Pcd"
             if len(pcd_capacity)>2:
-                for i in xrange(0,len(index_elements)):
+                for i in range(0,len(index_elements)):
                     index_ele = index_elements[i]
                     index_num = index_ele.strip("[").strip("]").strip()
                     if i == len(index_elements) -2:
                         indicator += "+ %d*Size/sizeof(%s)/%d + %s)" %(int(cleanupindex(index_elements[i+1])),Pcd.BaseDatumType,reduce(lambda x,y: int(x)*int(y),pcd_capacity[:-1]), cleanupindex(index_elements[i]))
                         break
diff --git a/BaseTools/Source/Python/build/BuildReport.py b/BaseTools/Source/Python/build/BuildReport.py
index 6c9a20b373..b940de1c90 100644
--- a/BaseTools/Source/Python/build/BuildReport.py
+++ b/BaseTools/Source/Python/build/BuildReport.py
@@ -1024,26 +1024,34 @@ class PcdReport(object):
                     FileWrite(File, Key)
                     First = False
 
 
                 if Pcd.DatumType in TAB_PCD_NUMERIC_TYPES:
+                    if PcdValue.startswith('0') and not PcdValue.lower().startswith('0x') and len(PcdValue) > 2:
+                        PcdValue = PcdValue.lstrip('0')
                     PcdValueNumber = int(PcdValue.strip(), 0)
                     if DecDefaultValue is None:
                         DecMatch = True
                     else:
+                        if DecDefaultValue.startswith('0') and not DecDefaultValue.lower().startswith('0x') and len(DecDefaultValue) > 2:
+                            DecDefaultValue = DecDefaultValue.lstrip('0')
                         DecDefaultValueNumber = int(DecDefaultValue.strip(), 0)
                         DecMatch = (DecDefaultValueNumber == PcdValueNumber)
 
                     if InfDefaultValue is None:
                         InfMatch = True
                     else:
+                        if InfDefaultValue.startswith('0') and not InfDefaultValue.lower().startswith('0x') and len(InfDefaultValue) > 2:
+                            InfDefaultValue = InfDefaultValue.lstrip('0')
                         InfDefaultValueNumber = int(InfDefaultValue.strip(), 0)
                         InfMatch = (InfDefaultValueNumber == PcdValueNumber)
 
                     if DscDefaultValue is None:
                         DscMatch = True
                     else:
+                        if DscDefaultValue.startswith('0') and not DscDefaultValue.lower().startswith('0x') and len(DscDefaultValue) > 2:
+                            DscDefaultValue = DscDefaultValue.lstrip('0')
                         DscDefaultValueNumber = int(DscDefaultValue.strip(), 0)
                         DscMatch = (DscDefaultValueNumber == PcdValueNumber)
                 else:
                     if DecDefaultValue is None:
                         DecMatch = True
@@ -1161,10 +1169,12 @@ class PcdReport(object):
                     if not BuildOptionMatch:
                         ModuleOverride = self.ModulePcdOverride.get((Pcd.TokenCName, Pcd.TokenSpaceGuidCName), {})
                         for ModulePath in ModuleOverride:
                             ModuleDefault = ModuleOverride[ModulePath]
                             if Pcd.DatumType in TAB_PCD_NUMERIC_TYPES:
+                                if ModuleDefault.startswith('0') and not ModuleDefault.lower().startswith('0x') and len(ModuleDefault) > 2:
+                                    ModuleDefault = ModuleDefault.lstrip('0')
                                 ModulePcdDefaultValueNumber = int(ModuleDefault.strip(), 0)
                                 Match = (ModulePcdDefaultValueNumber == PcdValueNumber)
                                 if Pcd.DatumType == 'BOOLEAN':
                                     ModuleDefault = str(ModulePcdDefaultValueNumber)
                             else:
@@ -1262,10 +1272,12 @@ class PcdReport(object):
                 FileWrite(File, ' %-*s   : %6s %10s = %s' % (self.MaxLen, Flag + ' ' + PcdTokenCName, TypeName, '(' + Pcd.DatumType + ')', '{'))
                 for Array in ArrayList:
                     FileWrite(File, Array)
             else:
                 if Pcd.DatumType in TAB_PCD_CLEAN_NUMERIC_TYPES:
+                    if Value.startswith('0') and not Value.lower().startswith('0x') and len(Value) > 2:
+                        Value = Value.lstrip('0')
                     if Value.startswith(('0x', '0X')):
                         Value = '{} ({:d})'.format(Value, int(Value, 0))
                     else:
                         Value = "0x{:X} ({})".format(int(Value, 0), Value)
                 FileWrite(File, ' %-*s   : %6s %10s = %s' % (self.MaxLen, Flag + ' ' + PcdTokenCName, TypeName, '(' + Pcd.DatumType + ')', Value))
-- 
2.20.1.windows.1




* [Patch 12/33] BaseTools/UPT: Make the UPT tool run on both Python2 and Python3
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (10 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 11/33] BaseTools: Handle octal-like data values Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 13/33] BaseTools: Update Test scripts to support python3 Feng, Bob C
                   ` (21 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao, Yonghong Zhu

From: Zhijux Fan <zhijux.fan@intel.com>

Update the UPT tool so that the same sources run on both Python 2
and Python 3.
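
One recurring difference the patch handles is that Python 3's map() returns a lazy iterator rather than a list, so callers that index or re-iterate the result need an explicit list(), as in IpiDb's __ConvertToSqlString. A standalone sketch:

```python
# Python 3's map() is lazy; materialize it when the result is indexed
# or consumed more than once, as the UPT patch does for SQL escaping.
strings = ["it's", "a 'quoted' word"]
escaped = list(map(lambda s: s.replace("'", "''"), strings))

assert escaped == ["it''s", "a ''quoted'' word"]
assert escaped[0] == "it''s"      # indexing works on the realized list
```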

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/UPT/Core/IpiDb.py                 |   4 ++--
 BaseTools/Source/Python/UPT/GenMetaFile/GenInfFile.py     |   6 +++---
 BaseTools/Source/Python/UPT/Library/CommentGenerating.py  |   6 ++----
 BaseTools/Source/Python/UPT/Library/CommentParsing.py     |  10 ++++------
 BaseTools/Source/Python/UPT/Library/Misc.py               | 190 ++++++++++++++++++++++++++++++----------------------------------------------------------------------------------------------------------------------------------------------------------------
 BaseTools/Source/Python/UPT/Library/ParserValidate.py     |   2 +-
 BaseTools/Source/Python/UPT/Library/Parsing.py            |   2 +-
 BaseTools/Source/Python/UPT/Library/StringUtils.py        |  36 ++++++++++++++++++------------------
 BaseTools/Source/Python/UPT/Library/UniClassObject.py     |   6 ++++--
 BaseTools/Source/Python/UPT/Library/Xml/XmlRoutines.py    |   2 +-
 BaseTools/Source/Python/UPT/Logger/StringTable.py         |   2 +-
 BaseTools/Source/Python/UPT/Parser/DecParser.py           |   4 ++--
 BaseTools/Source/Python/UPT/Parser/DecParserMisc.py       |  30 +++++-------------------------
 BaseTools/Source/Python/UPT/Parser/InfAsBuiltProcess.py   |   4 ++--
 BaseTools/Source/Python/UPT/Parser/InfParser.py           |   4 ++--
 BaseTools/Source/Python/UPT/Parser/InfSectionParser.py    |   4 ++--
 BaseTools/Source/Python/UPT/PomAdapter/InfPomAlignment.py |   2 +-
 BaseTools/Source/Python/UPT/UPT.py                        |   1 +
 BaseTools/Source/Python/UPT/Xml/IniToXml.py               |   2 +-
 BaseTools/Source/Python/UPT/Xml/XmlParserMisc.py          |   2 +-
 20 files changed, 84 insertions(+), 235 deletions(-)

diff --git a/BaseTools/Source/Python/UPT/Core/IpiDb.py b/BaseTools/Source/Python/UPT/Core/IpiDb.py
index a781d358c8..48defeac7e 100644
--- a/BaseTools/Source/Python/UPT/Core/IpiDb.py
+++ b/BaseTools/Source/Python/UPT/Core/IpiDb.py
@@ -42,11 +42,11 @@ import platform as pf
 class IpiDatabase(object):
     def __init__(self, DbPath, Workspace):
         Dir = os.path.dirname(DbPath)
         if not os.path.isdir(Dir):
             os.mkdir(Dir)
-        self.Conn = sqlite3.connect(unicode(DbPath), isolation_level='DEFERRED')
+        self.Conn = sqlite3.connect(u''.join(DbPath), isolation_level='DEFERRED')
         self.Conn.execute("PRAGMA page_size=4096")
         self.Conn.execute("PRAGMA synchronous=OFF")
         self.Cur = self.Conn.cursor()
         self.DpTable = 'DpInfo'
         self.PkgTable = 'PkgInfo'
@@ -919,10 +919,10 @@ class IpiDatabase(object):
     # @param StringList:  A list for strings to be converted
     #
     def __ConvertToSqlString(self, StringList):
         if self.DpTable:
             pass
-        return map(lambda s: s.replace("'", "''"), StringList)
+        return list(map(lambda s: s.replace("'", "''"), StringList))
 
 
 
 
diff --git a/BaseTools/Source/Python/UPT/GenMetaFile/GenInfFile.py b/BaseTools/Source/Python/UPT/GenMetaFile/GenInfFile.py
index c2a240a884..1f8b3f163e 100644
--- a/BaseTools/Source/Python/UPT/GenMetaFile/GenInfFile.py
+++ b/BaseTools/Source/Python/UPT/GenMetaFile/GenInfFile.py
@@ -272,11 +272,11 @@ def GenDefines(ModuleObject):
     for UserExtension in ModuleObject.GetUserExtensionList():
         DefinesDict = UserExtension.GetDefinesDict()
         if not DefinesDict:
             continue
         for Statement in DefinesDict:
-            if Statement.split(DT.TAB_EQUAL_SPLIT) > 1:
+            if len(Statement.split(DT.TAB_EQUAL_SPLIT)) > 1:
                 Statement = (u'%s ' % Statement.split(DT.TAB_EQUAL_SPLIT, 1)[0]).ljust(LeftOffset) \
                              + u'= %s' % Statement.split(DT.TAB_EQUAL_SPLIT, 1)[1].lstrip()
             SortedArch = DT.TAB_ARCH_COMMON
             if Statement.strip().startswith(DT.TAB_INF_DEFINES_CUSTOM_MAKEFILE):
                 pos = Statement.find(DT.TAB_VALUE_SPLIT)
@@ -407,11 +407,11 @@ def GenLibraryClasses(ModuleObject):
             Statement += Name
             if FFE:
                 Statement += '|' + FFE
             ModuleList = LibraryClass.GetSupModuleList()
             ArchList = LibraryClass.GetSupArchList()
-            for Index in xrange(0, len(ArchList)):
+            for Index in range(0, len(ArchList)):
                 ArchList[Index] = ConvertArchForInstall(ArchList[Index])
             ArchList.sort()
             SortedArch = ' '.join(ArchList)
             KeyList = []
             if not ModuleList or IsAllModuleList(ModuleList):
@@ -570,11 +570,11 @@ def GenUserExtensions(ModuleObject):
         Statement = UserExtension.GetStatement()
 # Comment the code to support user extension without any statement just the section header in []
 #         if not Statement:
 #             continue
         ArchList = UserExtension.GetSupArchList()
-        for Index in xrange(0, len(ArchList)):
+        for Index in range(0, len(ArchList)):
             ArchList[Index] = ConvertArchForInstall(ArchList[Index])
         ArchList.sort()
         KeyList = []
         CommonPreFix = ''
         if UserExtension.GetUserID():
diff --git a/BaseTools/Source/Python/UPT/Library/CommentGenerating.py b/BaseTools/Source/Python/UPT/Library/CommentGenerating.py
index 4726629695..bd3514bc49 100644
--- a/BaseTools/Source/Python/UPT/Library/CommentGenerating.py
+++ b/BaseTools/Source/Python/UPT/Library/CommentGenerating.py
@@ -122,14 +122,12 @@ def GenHeaderCommentSection(Abstract, Description, Copyright, License, IsBinaryH
     Content = ''
 
     #
     # Convert special character to (c), (r) and (tm).
     #
-    if isinstance(Abstract, unicode):
-        Abstract = ConvertSpecialUnicodes(Abstract)
-    if isinstance(Description, unicode):
-        Description = ConvertSpecialUnicodes(Description)
+    Abstract = ConvertSpecialUnicodes(Abstract)
+    Description = ConvertSpecialUnicodes(Description)
     if IsBinaryHeader:
         Content += CommChar * 2 + TAB_SPACE_SPLIT + TAB_BINARY_HEADER_COMMENT + '\r\n'
     elif CommChar == TAB_COMMENT_EDK1_SPLIT:
         Content += CommChar + TAB_SPACE_SPLIT + TAB_COMMENT_EDK1_START + TAB_STAR + TAB_SPACE_SPLIT +\
          TAB_HEADER_COMMENT + '\r\n'
diff --git a/BaseTools/Source/Python/UPT/Library/CommentParsing.py b/BaseTools/Source/Python/UPT/Library/CommentParsing.py
index 285812c9c2..a09a530ffb 100644
--- a/BaseTools/Source/Python/UPT/Library/CommentParsing.py
+++ b/BaseTools/Source/Python/UPT/Library/CommentParsing.py
@@ -72,11 +72,11 @@ def ParseHeaderCommentSection(CommentList, FileName = None, IsBinaryHeader = Fal
 
     #
     # first find the last copyright line
     #
     Last = 0
-    for Index in xrange(len(CommentList)-1, 0, -1):
+    for Index in range(len(CommentList)-1, 0, -1):
         Line = CommentList[Index][0]
         if _IsCopyrightLine(Line):
             Last = Index
             break
 
@@ -204,21 +204,19 @@ def ParsePcdErrorCode (Value = None, ContainerFile = None, LineNum = None):
     try:
         if Value.strip().startswith((TAB_HEX_START, TAB_CAPHEX_START)):
             Base = 16
         else:
             Base = 10
-        ErrorCode = long(Value, Base)
+        ErrorCode = int(Value, Base)
         if ErrorCode > PCD_ERR_CODE_MAX_SIZE or ErrorCode < 0:
             Logger.Error('Parser',
                         FORMAT_NOT_SUPPORTED,
                         "The format %s of ErrorCode is not valid, should be UNIT32 type or long type" % Value,
                         File = ContainerFile,
                         Line = LineNum)
-        #
-        # To delete the tailing 'L'
-        #
-        return hex(ErrorCode)[:-1]
+        ErrorCode = '0x%x' % ErrorCode
+        return ErrorCode
     except ValueError as XStr:
         if XStr:
             pass
         Logger.Error('Parser',
                     FORMAT_NOT_SUPPORTED,
diff --git a/BaseTools/Source/Python/UPT/Library/Misc.py b/BaseTools/Source/Python/UPT/Library/Misc.py
index 8c2a6787f0..d69b161420 100644
--- a/BaseTools/Source/Python/UPT/Library/Misc.py
+++ b/BaseTools/Source/Python/UPT/Library/Misc.py
@@ -30,11 +30,11 @@ from os import remove
 from os import rmdir
 from os import linesep
 from os import walk
 from os import environ
 import re
-from UserDict import IterableUserDict
+from collections import OrderedDict
 
 import Logger.Log as Logger
 from Logger import StringTable as ST
 from Logger import ToolError
 from Library import GlobalData
@@ -158,27 +158,39 @@ def RemoveDirectory(Directory, Recursively=False):
 # @param      Content:         The new content of the file
 # @param      IsBinaryFile:    The flag indicating if the file is binary file
 #                              or not
 #
 def SaveFileOnChange(File, Content, IsBinaryFile=True):
-    if not IsBinaryFile:
-        Content = Content.replace("\n", linesep)
-
     if os.path.exists(File):
-        try:
-            if Content == __FileHookOpen__(File, "rb").read():
-                return False
-        except BaseException:
-            Logger.Error(None, ToolError.FILE_OPEN_FAILURE, ExtraData=File)
+        if IsBinaryFile:
+            try:
+                if Content == __FileHookOpen__(File, "rb").read():
+                    return False
+            except BaseException:
+                Logger.Error(None, ToolError.FILE_OPEN_FAILURE, ExtraData=File)
+        else:
+            try:
+                if Content == __FileHookOpen__(File, "r").read():
+                    return False
+            except BaseException:
+                Logger.Error(None, ToolError.FILE_OPEN_FAILURE, ExtraData=File)
 
     CreateDirectory(os.path.dirname(File))
-    try:
-        FileFd = __FileHookOpen__(File, "wb")
-        FileFd.write(Content)
-        FileFd.close()
-    except BaseException:
-        Logger.Error(None, ToolError.FILE_CREATE_FAILURE, ExtraData=File)
+    if IsBinaryFile:
+        try:
+            FileFd = __FileHookOpen__(File, "wb")
+            FileFd.write(Content)
+            FileFd.close()
+        except BaseException:
+            Logger.Error(None, ToolError.FILE_CREATE_FAILURE, ExtraData=File)
+    else:
+        try:
+            FileFd = __FileHookOpen__(File, "w")
+            FileFd.write(Content)
+            FileFd.close()
+        except BaseException:
+            Logger.Error(None, ToolError.FILE_CREATE_FAILURE, ExtraData=File)
 
     return True
 
 ## Get all files of a directory
 #
@@ -286,160 +298,18 @@ def RealPath2(File, Dir='', OverrideDir=''):
         else:
             return NewFile, ''
 
     return None, None
 
-## A dict which can access its keys and/or values orderly
-#
-#  The class implements a new kind of dict which its keys or values can be
-#  accessed in the order they are added into the dict. It guarantees the order
-#  by making use of an internal list to keep a copy of keys.
-#
-class Sdict(IterableUserDict):
-    ## Constructor
-    #
-    def __init__(self):
-        IterableUserDict.__init__(self)
-        self._key_list = []
-
-    ## [] operator
-    #
-    def __setitem__(self, Key, Value):
-        if Key not in self._key_list:
-            self._key_list.append(Key)
-        IterableUserDict.__setitem__(self, Key, Value)
-
-    ## del operator
-    #
-    def __delitem__(self, Key):
-        self._key_list.remove(Key)
-        IterableUserDict.__delitem__(self, Key)
-
-    ## used in "for k in dict" loop to ensure the correct order
-    #
-    def __iter__(self):
-        return self.iterkeys()
-
-    ## len() support
-    #
-    def __len__(self):
-        return len(self._key_list)
-
-    ## "in" test support
-    #
-    def __contains__(self, Key):
-        return Key in self._key_list
-
-    ## indexof support
-    #
-    def index(self, Key):
-        return self._key_list.index(Key)
-
-    ## insert support
-    #
-    def insert(self, Key, Newkey, Newvalue, Order):
-        Index = self._key_list.index(Key)
-        if Order == 'BEFORE':
-            self._key_list.insert(Index, Newkey)
-            IterableUserDict.__setitem__(self, Newkey, Newvalue)
-        elif Order == 'AFTER':
-            self._key_list.insert(Index + 1, Newkey)
-            IterableUserDict.__setitem__(self, Newkey, Newvalue)
-
-    ## append support
-    #
-    def append(self, Sdict2):
-        for Key in Sdict2:
-            if Key not in self._key_list:
-                self._key_list.append(Key)
-            IterableUserDict.__setitem__(self, Key, Sdict2[Key])
-    ## hash key
-    #
-    def has_key(self, Key):
-        return Key in self._key_list
-
-    ## Empty the dict
-    #
-    def clear(self):
-        self._key_list = []
-        IterableUserDict.clear(self)
-
-    ## Return a copy of keys
-    #
-    def keys(self):
-        Keys = []
-        for Key in self._key_list:
-            Keys.append(Key)
-        return Keys
-
-    ## Return a copy of values
-    #
-    def values(self):
-        Values = []
-        for Key in self._key_list:
-            Values.append(self[Key])
-        return Values
-
-    ## Return a copy of (key, value) list
-    #
-    def items(self):
-        Items = []
-        for Key in self._key_list:
-            Items.append((Key, self[Key]))
-        return Items
-
-    ## Iteration support
-    #
-    def iteritems(self):
-        return iter(self.items())
-
-    ## Keys interation support
-    #
-    def iterkeys(self):
-        return iter(self.keys())
-
-    ## Values interation support
-    #
-    def itervalues(self):
-        return iter(self.values())
-
-    ## Return value related to a key, and remove the (key, value) from the dict
-    #
-    def pop(self, Key, *Dv):
-        Value = None
-        if Key in self._key_list:
-            Value = self[Key]
-            self.__delitem__(Key)
-        elif len(Dv) != 0 :
-            Value = Dv[0]
-        return Value
-
-    ## Return (key, value) pair, and remove the (key, value) from the dict
-    #
-    def popitem(self):
-        Key = self._key_list[-1]
-        Value = self[Key]
-        self.__delitem__(Key)
-        return Key, Value
-    ## update method
-    #
-    def update(self, Dict=None, **Kwargs):
-        if Dict is not None:
-            for Key1, Val1 in Dict.items():
-                self[Key1] = Val1
-        if len(Kwargs):
-            for Key1, Val1 in Kwargs.items():
-                self[Key1] = Val1
-
 ## CommonPath
 #
 # @param PathList: PathList
 #
 def CommonPath(PathList):
     Path1 = min(PathList).split(os.path.sep)
     Path2 = max(PathList).split(os.path.sep)
-    for Index in xrange(min(len(Path1), len(Path2))):
+    for Index in range(min(len(Path1), len(Path2))):
         if Path1[Index] != Path2[Index]:
             return os.path.sep.join(Path1[:Index])
     return os.path.sep.join(Path1)
 
 ## PathClass
@@ -888,11 +758,11 @@ def ProcessEdkComment(LineList):
                 Count = Count + 1
 
             if FindEdkBlockComment:
                 if FirstPos == -1:
                     FirstPos = StartPos
-                for Index in xrange(StartPos, EndPos+1):
+                for Index in range(StartPos, EndPos+1):
                     LineList[Index] = ''
                 FindEdkBlockComment = False
         elif Line.find("//") != -1 and not Line.startswith("#"):
             #
             # handling cpp style comment
@@ -955,11 +825,11 @@ def GetLibInstanceInfo(String, WorkSpace, LineNo):
         return False
     if IsValidFileFlag:
         FileLinesList = []
 
         try:
-            FInputfile = open(FullFileName, "rb", 0)
+            FInputfile = open(FullFileName, "r")
             try:
                 FileLinesList = FInputfile.readlines()
             except BaseException:
                 Logger.Error("InfParser",
                              ToolError.FILE_READ_FAILURE,
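The open-mode changes above follow from Python 3's strict split between text and binary I/O: "r"/"w" handles produce and consume str, while "rb"/"wb" work in bytes, which is why SaveFileOnChange now branches on IsBinaryFile instead of sharing one "rb"/"wb" code path. A small sketch using a temporary file and made-up content:

```python
import os
import tempfile

# Python 3 separates text I/O (str) from binary I/O (bytes).
fd, path = tempfile.mkstemp()
os.close(fd)

with open(path, "w") as out:    # text mode: writes str
    out.write("hello")

with open(path, "rb") as inp:   # binary mode: reads bytes
    data = inp.read()

assert data == b"hello"          # bytes, not str
assert data.decode("ascii") == "hello"
os.remove(path)
```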
diff --git a/BaseTools/Source/Python/UPT/Library/ParserValidate.py b/BaseTools/Source/Python/UPT/Library/ParserValidate.py
index 31b9b68cd5..87d156fa4c 100644
--- a/BaseTools/Source/Python/UPT/Library/ParserValidate.py
+++ b/BaseTools/Source/Python/UPT/Library/ParserValidate.py
@@ -725,9 +725,9 @@ def IsValidUserId(UserId):
 #
 # Check if a UTF16-LE file has a BOM header
 #
 def CheckUTF16FileHeader(File):
     FileIn = open(File, 'rb').read(2)
-    if FileIn != '\xff\xfe':
+    if FileIn != b'\xff\xfe':
         return False
 
     return True
diff --git a/BaseTools/Source/Python/UPT/Library/Parsing.py b/BaseTools/Source/Python/UPT/Library/Parsing.py
index 81729d6cdb..3eca8e3849 100644
--- a/BaseTools/Source/Python/UPT/Library/Parsing.py
+++ b/BaseTools/Source/Python/UPT/Library/Parsing.py
@@ -972,11 +972,11 @@ def GenSection(SectionName, SectionDict, SplitArch=True, NeedBlankLine=False):
             else:
                 if SectionName != 'UserExtensions':
                     ArchList = GetSplitValueList(SectionAttrs, DataType.TAB_COMMENT_SPLIT)
                 else:
                     ArchList = [SectionAttrs]
-            for Index in xrange(0, len(ArchList)):
+            for Index in range(0, len(ArchList)):
                 ArchList[Index] = ConvertArchForInstall(ArchList[Index])
             Section = '[' + SectionName + '.' + (', ' + SectionName + '.').join(ArchList) + ']'
         else:
             Section = '[' + SectionName + ']'
         Content += '\n' + Section + '\n'
diff --git a/BaseTools/Source/Python/UPT/Library/StringUtils.py b/BaseTools/Source/Python/UPT/Library/StringUtils.py
index 2be382fa17..90946337d0 100644
--- a/BaseTools/Source/Python/UPT/Library/StringUtils.py
+++ b/BaseTools/Source/Python/UPT/Library/StringUtils.py
@@ -18,11 +18,10 @@ StringUtils
 ##
 # Import Modules
 #
 import re
 import os.path
-from string import strip
 import Logger.Log as Logger
 import Library.DataType as DataType
 from Logger.ToolError import FORMAT_INVALID
 from Logger.ToolError import PARSER_ERROR
 from Logger import StringTable as ST
@@ -42,11 +41,11 @@ gMACRO_PATTERN = re.compile("\$\(([_A-Z][_A-Z0-9]*)\)", re.UNICODE)
 # @param SplitTag:  The split key, default is DataType.TAB_VALUE_SPLIT
 # @param MaxSplit:  The max number of split values, default is -1
 #
 #
 def GetSplitValueList(String, SplitTag=DataType.TAB_VALUE_SPLIT, MaxSplit= -1):
-    return map(lambda l: l.strip(), String.split(SplitTag, MaxSplit))
+    return list(map(lambda l: l.strip(), String.split(SplitTag, MaxSplit)))
 
 ## MergeArches
 #
 # Find a key's all arches in dict, add the new arch to the list
 # If not exist any arch, set the arch directly
@@ -433,11 +432,11 @@ def GetSingleValueOfKeyFromLines(Lines, Dictionary, CommentCharacter, KeySplitCh
                 #
                 # Remove comments and white spaces
                 #
                 LineList[1] = CleanString(LineList[1], CommentCharacter)
                 if ValueSplitFlag:
-                    Value = map(strip, LineList[1].split(ValueSplitCharacter))
+                    Value = list(map(lambda x: x.strip(), LineList[1].split(ValueSplitCharacter)))
                 else:
                     Value = CleanString(LineList[1], CommentCharacter).splitlines()
 
                 if Key[0] in Dictionary:
                     if Key[0] not in Keys:
@@ -630,11 +629,11 @@ def SplitString(String):
 # Replace "'" with "''" in each item of StringList
 #
 # @param StringList:  A list for strings to be converted
 #
 def ConvertToSqlString(StringList):
-    return map(lambda s: s.replace("'", "''"), StringList)
+    return list(map(lambda s: s.replace("'", "''"), StringList))
 
 ## Convert To Sql String
 #
 # Replace "'" with "''" in the String
 #
@@ -938,27 +937,28 @@ def SplitPcdEntry(String):
 # @param Arch2
 #
 def IsMatchArch(Arch1, Arch2):
     if 'COMMON' in Arch1 or 'COMMON' in Arch2:
         return True
-    if isinstance(Arch1, basestring) and isinstance(Arch2, basestring):
-        if Arch1 == Arch2:
-            return True
-
-    if isinstance(Arch1, basestring) and isinstance(Arch2, list):
-        return Arch1 in Arch2
+    try:
+        if isinstance(Arch1, list) and isinstance(Arch2, list):
+            for Item1 in Arch1:
+                for Item2 in Arch2:
+                    if Item1 == Item2:
+                        return True
 
-    if isinstance(Arch2, basestring) and isinstance(Arch1, list):
-        return Arch2 in Arch1
+        elif isinstance(Arch1, list):
+            return Arch2 in Arch1
 
-    if isinstance(Arch1, list) and isinstance(Arch2, list):
-        for Item1 in Arch1:
-            for Item2 in Arch2:
-                if Item1 == Item2:
-                    return True
+        elif isinstance(Arch2, list):
+            return Arch1 in Arch2
 
-    return False
+        else:
+            if Arch1 == Arch2:
+                return True
+    except:
+        return False
 
 # Search all files in FilePath to find the FileName with the largest index
 # Return the FileName with index +1 under the FilePath
 #
 def GetUniFileName(FilePath, FileName):
diff --git a/BaseTools/Source/Python/UPT/Library/UniClassObject.py b/BaseTools/Source/Python/UPT/Library/UniClassObject.py
index cd575d5a34..bd7804b753 100644
--- a/BaseTools/Source/Python/UPT/Library/UniClassObject.py
+++ b/BaseTools/Source/Python/UPT/Library/UniClassObject.py
@@ -117,14 +117,16 @@ def UniToHexList(Uni):
 # @param Uni:    The python unicode string
 #
 # @retval NewUni:  The converted unicode string
 #
 def ConvertSpecialUnicodes(Uni):
-    NewUni = Uni
+    OldUni = NewUni = Uni
     NewUni = NewUni.replace(u'\u00A9', '(c)')
     NewUni = NewUni.replace(u'\u00AE', '(r)')
     NewUni = NewUni.replace(u'\u2122', '(tm)')
+    if OldUni == NewUni:
+        NewUni = OldUni
     return NewUni
 
 ## GetLanguageCode1766
 #
 # Check the language code read from .UNI file and convert RFC 4646 codes to RFC 1766 codes
@@ -511,11 +513,11 @@ class UniFileClassObject(object):
                 if FileIn[LineCount].strip().startswith('#language'):
                     Line = Line + FileIn[LineCount]
                     FileIn[LineCount-1] = Line
                     FileIn[LineCount] = '\r\n'
                     LineCount -= 1
-                    for Index in xrange (LineCount + 1, len (FileIn) - 1):
+                    for Index in range (LineCount + 1, len (FileIn) - 1):
                         if (Index == len(FileIn) -1):
                             FileIn[Index] = '\r\n'
                         else:
                             FileIn[Index] = FileIn[Index + 1]
                     continue
diff --git a/BaseTools/Source/Python/UPT/Library/Xml/XmlRoutines.py b/BaseTools/Source/Python/UPT/Library/Xml/XmlRoutines.py
index ee158f33d9..b24e3ed01b 100644
--- a/BaseTools/Source/Python/UPT/Library/Xml/XmlRoutines.py
+++ b/BaseTools/Source/Python/UPT/Library/Xml/XmlRoutines.py
@@ -178,11 +178,11 @@ def XmlElementData(Dom):
 #
 # @param  Dom                The root XML DOM object.
 # @param  String             A XPath style path.
 #
 def XmlElementList(Dom, String):
-    return map(XmlElementData, XmlList(Dom, String))
+    return list(map(XmlElementData, XmlList(Dom, String)))
 
 
 ## Get the XML attribute of the current node.
 #
 # Return a single XML attribute named Attribute from the current root Dom.
diff --git a/BaseTools/Source/Python/UPT/Logger/StringTable.py b/BaseTools/Source/Python/UPT/Logger/StringTable.py
index c1c7732b40..061943925a 100644
--- a/BaseTools/Source/Python/UPT/Logger/StringTable.py
+++ b/BaseTools/Source/Python/UPT/Logger/StringTable.py
@@ -40,11 +40,11 @@ MSG_USAGE_STRING = _("\n"
 # Version and Copyright
 #
 MSG_VERSION_NUMBER = _("1.1")
 MSG_VERSION = _("UEFI Packaging Tool (UEFIPT) - Revision " + \
                 MSG_VERSION_NUMBER)
-MSG_COPYRIGHT = _("Copyright (c) 2011 - 2016 Intel Corporation All Rights Reserved.")
+MSG_COPYRIGHT = _("Copyright (c) 2011 - 2018 Intel Corporation All Rights Reserved.")
 MSG_VERSION_COPYRIGHT = _("\n  %s\n  %s" % (MSG_VERSION, MSG_COPYRIGHT))
 MSG_USAGE = _("%s [options]\n%s" % ("UPT", MSG_VERSION_COPYRIGHT))
 MSG_DESCRIPTION = _("The UEFIPT is used to create, " + \
                     "install or remove a UEFI Distribution Package. " + \
                     "If WORKSPACE environment variable is present, " + \
diff --git a/BaseTools/Source/Python/UPT/Parser/DecParser.py b/BaseTools/Source/Python/UPT/Parser/DecParser.py
index 8f3d60df57..f7eeb84127 100644
--- a/BaseTools/Source/Python/UPT/Parser/DecParser.py
+++ b/BaseTools/Source/Python/UPT/Parser/DecParser.py
@@ -620,11 +620,11 @@ class _DecPcd(_DecBase):
         if not IsValidToken(PCD_TOKEN_PATTERN, Token):
             self._LoggerError(ST.ERR_DECPARSE_PCD_TOKEN % Token)
         elif not Token.startswith('0x') and not Token.startswith('0X'):
             if int(Token) > 4294967295:
                 self._LoggerError(ST.ERR_DECPARSE_PCD_TOKEN_INT % Token)
-            Token = hex(int(Token))[:-1]
+            Token = '0x%x' % int(Token)
 
         IntToken = int(Token, 0)
         if (Guid, IntToken) in self.TokenMap:
             if self.TokenMap[Guid, IntToken] != CName:
                 self._LoggerError(ST.ERR_DECPARSE_PCD_TOKEN_UNIQUE%(Token))
@@ -750,11 +750,11 @@ class _DecUserExtension(_DecBase):
 # Top dec parser
 #
 class Dec(_DecBase, _DecComments):
     def __init__(self, DecFile, Parse = True):
         try:
-            Content = ConvertSpecialChar(open(DecFile, 'rb').readlines())
+            Content = ConvertSpecialChar(open(DecFile, 'r').readlines())
         except BaseException:
             Logger.Error(TOOL_NAME, FILE_OPEN_FAILURE, File=DecFile,
                          ExtraData=ST.ERR_DECPARSE_FILEOPEN % DecFile)
 
         #
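The token change above replaces hex(int(Token))[:-1] with '0x%x' % int(Token). On Python 2, hex() on a long appended a trailing 'L' that the slice stripped; Python 3 has a single int type with no suffix, so the same slice would instead drop the last hex digit. A quick sketch of the difference:

```python
value = 4294967295  # 0xffffffff

# Portable formatting on Python 2 and 3: no 'L' suffix to strip off.
token = '0x%x' % value
assert token == '0xffffffff'

# On Python 3, hex() already has no suffix, so [:-1] corrupts the value:
assert hex(value) == '0xffffffff'
assert hex(value)[:-1] == '0xfffffff'  # one real digit lost
```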
diff --git a/BaseTools/Source/Python/UPT/Parser/DecParserMisc.py b/BaseTools/Source/Python/UPT/Parser/DecParserMisc.py
index c5c35ede78..9ec3462c77 100644
--- a/BaseTools/Source/Python/UPT/Parser/DecParserMisc.py
+++ b/BaseTools/Source/Python/UPT/Parser/DecParserMisc.py
@@ -149,11 +149,11 @@ def IsValidNumValUint8(Token):
     if Token.lower().startswith('0x'):
         Base = 16
     else:
         Base = 10
     try:
-        TokenValue = long(Token, Base)
+        TokenValue = int(Token, Base)
     except BaseException:
         Valid, Cause = IsValidLogicalExpr(Token, True)
         if Cause:
             pass
     if not Valid:
@@ -260,34 +260,14 @@ def IsValidPcdDatum(Type, Value):
             if Value and not Value.startswith('0x') \
                 and not Value.startswith('0X'):
                 Value = Value.lstrip('0')
                 if not Value:
                     return True, ""
-            Value = long(Value, 0)
-            TypeLenMap = {
-                #
-                # 0x00 - 0xff
-                #
-                'UINT8'  : 2,
-                #
-                # 0x0000 - 0xffff
-                #
-                'UINT16' : 4,
-                #
-                # 0x00000000 - 0xffffffff
-                #
-                'UINT32' : 8,
-                #
-                # 0x0 - 0xffffffffffffffff
-                #
-                'UINT64' : 16
-            }
-            HexStr = hex(Value)
-            #
-            # First two chars of HexStr are 0x and tail char is L
-            #
-            if TypeLenMap[Type] < len(HexStr) - 3:
+            Value = int(Value, 0)
+            MAX_VAL_TYPE = {"BOOLEAN": 0x01, 'UINT8': 0xFF, 'UINT16': 0xFFFF, 'UINT32': 0xFFFFFFFF,
+                            'UINT64': 0xFFFFFFFFFFFFFFFF}
+            if Value > MAX_VAL_TYPE[Type]:
                 return False, ST.ERR_DECPARSE_PCD_INT_EXCEED % (StrVal, Type)
         except BaseException:
             Valid, Cause = IsValidLogicalExpr(Value, True)
         if not Valid:
             return False, Cause
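The rewritten check above compares the parsed integer directly against a per-type maximum instead of measuring the length of hex(Value), which depended on Python 2's trailing 'L'. A standalone sketch of the same bounds test (the helper name is hypothetical, not from the patch):

```python
MAX_VAL_TYPE = {'BOOLEAN': 0x01, 'UINT8': 0xFF, 'UINT16': 0xFFFF,
                'UINT32': 0xFFFFFFFF, 'UINT64': 0xFFFFFFFFFFFFFFFF}

def pcd_value_fits(type_name, text):
    # int(text, 0) honors 0x/0o/0b prefixes, like the parser's int(Value, 0).
    return int(text, 0) <= MAX_VAL_TYPE[type_name]

assert pcd_value_fits('UINT8', '0xFF')
assert not pcd_value_fits('UINT8', '0x100')
assert pcd_value_fits('UINT32', '4294967295')
```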
diff --git a/BaseTools/Source/Python/UPT/Parser/InfAsBuiltProcess.py b/BaseTools/Source/Python/UPT/Parser/InfAsBuiltProcess.py
index 029a436cec..c314892adf 100644
--- a/BaseTools/Source/Python/UPT/Parser/InfAsBuiltProcess.py
+++ b/BaseTools/Source/Python/UPT/Parser/InfAsBuiltProcess.py
@@ -203,11 +203,11 @@ def GetFileLineContent(FileName, WorkSpace, LineNo, OriginalString):
 
     FileLinesList = []
 
     try:
         FullFileName = FullFileName.replace('\\', '/')
-        Inputfile = open(FullFileName, "rb", 0)
+        Inputfile = open(FullFileName, "r")
         try:
             FileLinesList = Inputfile.readlines()
         except BaseException:
             Logger.Error("InfParser", ToolError.FILE_READ_FAILURE, ST.ERR_FILE_OPEN_FAILURE, File=FullFileName)
         finally:
@@ -245,11 +245,11 @@ def GetGuidVerFormLibInstance(Guid, Version, WorkSpace, CurrentInfFileName):
         try:
             if InfFile.strip().upper() == CurrentInfFileName.strip().upper():
                 continue
             InfFile = InfFile.replace('\\', '/')
             if InfFile not in GlobalData.gLIBINSTANCEDICT:
-                InfFileObj = open(InfFile, "rb", 0)
+                InfFileObj = open(InfFile, "r")
                 GlobalData.gLIBINSTANCEDICT[InfFile] = InfFileObj
             else:
                 InfFileObj = GlobalData.gLIBINSTANCEDICT[InfFile]
 
         except BaseException:
diff --git a/BaseTools/Source/Python/UPT/Parser/InfParser.py b/BaseTools/Source/Python/UPT/Parser/InfParser.py
index cd99262e03..5df7320324 100644
--- a/BaseTools/Source/Python/UPT/Parser/InfParser.py
+++ b/BaseTools/Source/Python/UPT/Parser/InfParser.py
@@ -49,11 +49,11 @@ from Parser.InfParserMisc import IsBinaryInf
 #
 def OpenInfFile(Filename):
     FileLinesList = []
 
     try:
-        FInputfile = open(Filename, "rb", 0)
+        FInputfile = open(Filename, "r")
         try:
             FileLinesList = FInputfile.readlines()
         except BaseException:
             Logger.Error("InfParser",
                          FILE_READ_FAILURE,
@@ -84,11 +84,11 @@ class InfParser(InfSectionParser):
     def __init__(self, Filename = None, WorkspaceDir = None):
 
         #
         # Call parent class construct function
         #
-        super(InfParser, self).__init__()
+        InfSectionParser.__init__()
 
         self.WorkspaceDir    = WorkspaceDir
         self.SupArchList     = DT.ARCH_LIST
         self.EventList    = []
         self.HobList      = []
diff --git a/BaseTools/Source/Python/UPT/Parser/InfSectionParser.py b/BaseTools/Source/Python/UPT/Parser/InfSectionParser.py
index 1f254058d1..d9c9d41fcb 100644
--- a/BaseTools/Source/Python/UPT/Parser/InfSectionParser.py
+++ b/BaseTools/Source/Python/UPT/Parser/InfSectionParser.py
@@ -225,11 +225,11 @@ class InfSectionParser(InfDefinSectionParser,
         #
         self.InfDefSection = InfDefObject()
         self.InfBuildOptionSection = InfBuildOptionsObject()
         self.InfLibraryClassSection = InfLibraryClassObject()
         self.InfPackageSection = InfPackageObject()
-        self.InfPcdSection = InfPcdObject(self.MetaFiles.keys()[0])
+        self.InfPcdSection = InfPcdObject(list(self.MetaFiles.keys())[0])
         self.InfSourcesSection = InfSourcesObject()
         self.InfUserExtensionSection = InfUserExtensionObject()
         self.InfProtocolSection = InfProtocolObject()
         self.InfPpiSection = InfPpiObject()
         self.InfGuidSection = InfGuidObject()
@@ -453,11 +453,11 @@ class InfSectionParser(InfDefinSectionParser,
                 ArchList = []
                 for Match in ReFindHobArchRe.finditer(HobSectionStr):
                     Arch = Match.groups(1)[0].upper()
                     ArchList.append(Arch)
             CommentSoFar = ''
-            for Index in xrange(1, len(List)):
+            for Index in range(1, len(List)):
                 Result = ParseComment(List[Index], DT.ALL_USAGE_TOKENS, TokenDict, [], False)
                 Usage = Result[0]
                 Type = Result[1]
                 HelpText = Result[3]
 
diff --git a/BaseTools/Source/Python/UPT/PomAdapter/InfPomAlignment.py b/BaseTools/Source/Python/UPT/PomAdapter/InfPomAlignment.py
index c055089f2c..2e83c247ed 100644
--- a/BaseTools/Source/Python/UPT/PomAdapter/InfPomAlignment.py
+++ b/BaseTools/Source/Python/UPT/PomAdapter/InfPomAlignment.py
@@ -131,11 +131,11 @@ class InfPomAlignment(ModuleObject):
         #
         RecordSet = self.Parser.InfDefSection.Defines
         #
         # Should only have one ArchString Item.
         #
-        ArchString = RecordSet.keys()[0]
+        ArchString = list(RecordSet.keys())[0]
         ArchList = GetSplitValueList(ArchString, ' ')
         ArchList = ConvertArchList(ArchList)
         HasCalledFlag = False
         #
         # Get data from Sdict()
diff --git a/BaseTools/Source/Python/UPT/UPT.py b/BaseTools/Source/Python/UPT/UPT.py
index 004fc5ff2f..55b63a3ca1 100644
--- a/BaseTools/Source/Python/UPT/UPT.py
+++ b/BaseTools/Source/Python/UPT/UPT.py
@@ -19,10 +19,11 @@ UPT
 
 ## import modules
 #
 import locale
 import sys
+from imp import reload
 encoding = locale.getdefaultlocale()[1]
 if encoding:
     reload(sys)
     sys.setdefaultencoding(encoding)
 from Core import FileHook
diff --git a/BaseTools/Source/Python/UPT/Xml/IniToXml.py b/BaseTools/Source/Python/UPT/Xml/IniToXml.py
index 70d8fb19f2..8125f183be 100644
--- a/BaseTools/Source/Python/UPT/Xml/IniToXml.py
+++ b/BaseTools/Source/Python/UPT/Xml/IniToXml.py
@@ -324,11 +324,11 @@ def IniToXml(IniFile):
 
     SectionName = ''
     CurrentKey = ''
     PreMap = None
     Map = None
-    FileContent = ConvertSpecialChar(open(IniFile, 'rb').readlines())
+    FileContent = ConvertSpecialChar(open(IniFile, 'r').readlines())
     LastIndex = 0
     for Index in range(0, len(FileContent)):
         LastIndex = Index
         Line = FileContent[Index].strip()
         if Line == '' or Line.startswith(';'):
diff --git a/BaseTools/Source/Python/UPT/Xml/XmlParserMisc.py b/BaseTools/Source/Python/UPT/Xml/XmlParserMisc.py
index d170761aad..bf64d89f17 100644
--- a/BaseTools/Source/Python/UPT/Xml/XmlParserMisc.py
+++ b/BaseTools/Source/Python/UPT/Xml/XmlParserMisc.py
@@ -51,11 +51,11 @@ def ConvertVariableName(VariableName):
         FirstByte = int(ValueList[Index], 16)
         SecondByte = int(ValueList[Index + 1], 16)
         if SecondByte != 0:
             return None
 
-        if FirstByte not in xrange(0x20, 0x7F):
+        if FirstByte not in range(0x20, 0x7F):
             return None
         TransferedStr += ('%c')%FirstByte
         Index = Index + 2
 
     return 'L"' + TransferedStr + '"'
-- 
2.20.1.windows.1
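The hunks above adapt to three Python 3 behavior changes: `dict.keys()` returns a view rather than an indexable list, `reload()` is no longer a builtin, and `xrange()` is gone. A standalone sketch of those differences (demo values, not BaseTools code):

```python
# 1. dict.keys() is a view in Python 3, so it must be wrapped in
#    list() before indexing -- hence list(RecordSet.keys())[0].
record_set = {'IA32 X64': []}
arch_string = list(record_set.keys())[0]
assert arch_string == 'IA32 X64'

# 2. xrange() no longer exists; range() handles membership tests
#    lazily, so "x in range(a, b)" stays cheap.
assert 0x41 in range(0x20, 0x7F)

# 3. reload() moved out of the builtins: imp.reload works on both
#    Python 2.6+ and 3.x (importlib.reload is the modern 3.4+ home).
```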



^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [Patch 13/33] BaseTools: update Test scripts support python3
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (11 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 12/33] BaseTools/UPT:merge UPT Tool use Python2 and Python3 Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 14/33] BaseTools/Scripts: Porting PackageDocumentTools code to use Python3 Feng, Bob C
                   ` (20 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao, Yonghong Zhu

From: Zhijux Fan <zhijux.fan@intel.com>

Update the test scripts to support both Python 2 and Python 3.
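Two of the differences this test update deals with can be shown in a minimal snippet (hypothetical demo, not from the patch): `map()` returning a lazy iterator, and UTF-16 test data needing `bytes` literals.

```python
import codecs

# In Python 3, map() returns an iterator; code that indexes, reuses,
# or takes len() of the result must wrap it in list() explicitly.
suites = list(map(lambda n: n * 2, [1, 2, 3]))
assert suites == [2, 4, 6]

# Binary test data must be bytes: concatenating the UTF-16 BOM (bytes)
# with a str literal raises TypeError in Python 3, so the literal gets
# a b'' prefix.
data = codecs.BOM_UTF16_LE + b'//\x01\xd8 '
assert isinstance(data, bytes)
```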

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Tests/CToolsTests.py             |  2 +-
 BaseTools/Tests/CheckUnicodeSourceFiles.py |  6 +++---
 BaseTools/Tests/TestTools.py               | 13 ++++++++-----
 3 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/BaseTools/Tests/CToolsTests.py b/BaseTools/Tests/CToolsTests.py
index ab75d9a7dc..f0de44b141 100644
--- a/BaseTools/Tests/CToolsTests.py
+++ b/BaseTools/Tests/CToolsTests.py
@@ -24,11 +24,11 @@ modules = (
     TianoCompress,
     )
 
 
 def TheTestSuite():
-    suites = map(lambda module: module.TheTestSuite(), modules)
+    suites = list(map(lambda module: module.TheTestSuite(), modules))
     return unittest.TestSuite(suites)
 
 if __name__ == '__main__':
     allTests = TheTestSuite()
     unittest.TextTestRunner().run(allTests)
diff --git a/BaseTools/Tests/CheckUnicodeSourceFiles.py b/BaseTools/Tests/CheckUnicodeSourceFiles.py
index 6ae62f180a..c76b2bc20e 100644
--- a/BaseTools/Tests/CheckUnicodeSourceFiles.py
+++ b/BaseTools/Tests/CheckUnicodeSourceFiles.py
@@ -108,11 +108,11 @@ class Tests(TestTools.BaseToolsTest):
         # with the Surrogate Pair code point.
         #
         # This test makes sure that BaseTools rejects these characters
         # if seen in a .uni file.
         #
-        data = codecs.BOM_UTF16_LE + '//\x01\xd8 '
+        data = codecs.BOM_UTF16_LE + b'//\x01\xd8 '
 
         self.CheckFile(encoding=None, shouldPass=False, string=data)
 
     def testValidUtf8File(self):
         self.CheckFile(encoding='utf_8', shouldPass=True)
@@ -159,20 +159,20 @@ class Tests(TestTools.BaseToolsTest):
         # UTF-16 Surrogate Pairs.
         #
         # This test makes sure that BaseTools rejects these characters
         # if seen in a .uni file.
         #
-        data = '\xed\xa0\x81'
+        data = b'\xed\xa0\x81'
 
         self.CheckFile(encoding=None, shouldPass=False, string=data)
 
     def testSurrogatePairUnicodeCharInUtf8FileWithBom(self):
         #
         # Same test as testSurrogatePairUnicodeCharInUtf8File, but add
         # the UTF-8 BOM
         #
-        data = codecs.BOM_UTF8 + '\xed\xa0\x81'
+        data = codecs.BOM_UTF8 + b'\xed\xa0\x81'
 
         self.CheckFile(encoding=None, shouldPass=False, string=data)
 
 TheTestSuite = TestTools.MakeTheTestSuite(locals())
 
diff --git a/BaseTools/Tests/TestTools.py b/BaseTools/Tests/TestTools.py
index e16e993048..4332dcdaac 100644
--- a/BaseTools/Tests/TestTools.py
+++ b/BaseTools/Tests/TestTools.py
@@ -38,11 +38,11 @@ if PythonSourceDir not in sys.path:
     #
     sys.path.append(PythonSourceDir)
 
 def MakeTheTestSuite(localItems):
     tests = []
-    for name, item in localItems.iteritems():
+    for name, item in localItems.items():
         if isinstance(item, type):
             if issubclass(item, unittest.TestCase):
                 tests.append(unittest.TestLoader().loadTestsFromTestCase(item))
             elif issubclass(item, unittest.TestSuite):
                 tests.append(item())
@@ -144,13 +144,16 @@ class BaseToolsTest(unittest.TestCase):
         data = f.read()
         f.close()
         return data
 
     def WriteTmpFile(self, fileName, data):
-        f = open(self.GetTmpFilePath(fileName), 'w')
-        f.write(data)
-        f.close()
+        if isinstance(data, bytes):
+            with open(self.GetTmpFilePath(fileName), 'wb') as f:
+                f.write(data)
+        else:
+            with open(self.GetTmpFilePath(fileName), 'w') as f:
+                f.write(data)
 
     def GenRandomFileData(self, fileName, minlen = None, maxlen = None):
         if maxlen is None: maxlen = minlen
         f = self.OpenTmpFile(fileName, 'w')
         f.write(self.GetRandomString(minlen, maxlen))
@@ -159,11 +162,11 @@ class BaseToolsTest(unittest.TestCase):
     def GetRandomString(self, minlen = None, maxlen = None):
         if minlen is None: minlen = 1024
         if maxlen is None: maxlen = minlen
         return ''.join(
             [chr(random.randint(0, 255))
-             for x in xrange(random.randint(minlen, maxlen))
+             for x in range(random.randint(minlen, maxlen))
             ])
 
     def setUp(self):
         self.savedEnvPath = os.environ['PATH']
         self.savedSysPath = sys.path[:]
-- 
2.20.1.windows.1




* [Patch 14/33] BaseTools/Scripts: Porting PackageDocumentTools code to use Python3
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (12 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 13/33] BaseTools: update Test scripts support python3 Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 15/33] Basetools: It went wrong when use os.linesep Feng, Bob C
                   ` (19 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao, Yonghong Zhu

From: Zhijux Fan <zhijux.fan@intel.com>

Port the PackageDocumentTools code to support both Python 2 and Python 3.
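The bulk of this port replaces implicit relative imports and the old comma-style exception syntax, both removed in Python 3. A small self-contained illustration (demo code, not from the patch):

```python
# Python 3 removed implicit relative imports: inside a package,
# "import plugins.EdkPlugins.basemodel.ini" no longer resolves
# relative to the current package.  The portable spelling is an
# explicit relative import, valid on Python 2.6+ as well:
#     from ...basemodel import ini
#
# Likewise the comma form of except is Python-2-only syntax:
#     except Exception, e      # SyntaxError on Python 3
# The "as" form below works on both major versions.
try:
    raise ValueError("demo")
except ValueError as e:
    msg = str(e)
assert msg == "demo"
```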

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Scripts/ConvertFceToStructurePcd.py                                           |  2 +-
 BaseTools/Scripts/PackageDocumentTools/packagedoc_cli.py                                |  6 +++---
 BaseTools/Scripts/PackageDocumentTools/packagedocapp.pyw                                | 14 +++++++-------
 BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/basemodel/doxygen.py          |  4 ++--
 BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/baseobject.py      | 16 ++++++++--------
 BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/dec.py             |  4 ++--
 BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/doxygengen.py      | 12 ++++++------
 BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/doxygengen_spec.py | 12 ++++++------
 BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/dsc.py             |  4 ++--
 BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/inf.py             |  4 ++--
 10 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/BaseTools/Scripts/ConvertFceToStructurePcd.py b/BaseTools/Scripts/ConvertFceToStructurePcd.py
index 59eec28d5e..1495ac34d6 100644
--- a/BaseTools/Scripts/ConvertFceToStructurePcd.py
+++ b/BaseTools/Scripts/ConvertFceToStructurePcd.py
@@ -133,11 +133,11 @@ class parser_lst(object):
                       offset = int(offset, 10)
                       tmp_name = pcdname2_re.findall(t_name)[0] + '[0]'
                       tmp_dict[offset] = tmp_name
                       pcdname_num = int(pcdname_num_re.findall(t_name)[0],10)
                       uint = int(unit_num.findall(uint)[0],10)
-                      bit = uint / 8
+                      bit = uint // 8
                       for i in range(1, pcdname_num):
                         offset += bit
                         tmp_name = pcdname2_re.findall(t_name)[0] + '[%s]' % i
                         tmp_dict[offset] = tmp_name
                     else:
diff --git a/BaseTools/Scripts/PackageDocumentTools/packagedoc_cli.py b/BaseTools/Scripts/PackageDocumentTools/packagedoc_cli.py
index 4deeee01a5..e404a07cd7 100644
--- a/BaseTools/Scripts/PackageDocumentTools/packagedoc_cli.py
+++ b/BaseTools/Scripts/PackageDocumentTools/packagedoc_cli.py
@@ -14,12 +14,12 @@
 
 from __future__ import print_function
 import os, sys, logging, traceback, subprocess
 from optparse import OptionParser
 
-import plugins.EdkPlugins.edk2.model.baseobject as baseobject
-import plugins.EdkPlugins.edk2.model.doxygengen as doxygengen
+from .plugins.EdkPlugins.edk2.model import baseobject
+from .plugins.EdkPlugins.edk2.model import doxygengen
 
 gArchMarcoDict = {'ALL'      : 'MDE_CPU_IA32 MDE_CPU_X64 MDE_CPU_EBC MDE_CPU_IPF _MSC_EXTENSIONS __GNUC__ __INTEL_COMPILER',
                   'IA32_MSFT': 'MDE_CPU_IA32 _MSC_EXTENSIONS',
                   'IA32_GNU' : 'MDE_CPU_IA32 __GNUC__',
                   'X64_MSFT' : 'MDE_CPU_X64 _MSC_EXTENSIONS  ASM_PFX= OPTIONAL= ',
@@ -36,11 +36,11 @@ def parseCmdArgs():
                       help='Specify the absolute path for package DEC file. For example: c:\\tianocore\\MdePkg\\MdePkg.dec')
     parser.add_option('-x', '--doxygen', action='store', dest='DoxygenPath',
                       help='Specify the absolute path of doxygen tools installation. For example: C:\\Program Files\\doxygen\bin\doxygen.exe')
     parser.add_option('-o', '--output', action='store', dest='OutputPath',
                       help='Specify the document output path. For example: c:\\docoutput')
-    parser.add_option('-a', '--arch', action='store', dest='Arch', choices=gArchMarcoDict.keys(),
+    parser.add_option('-a', '--arch', action='store', dest='Arch', choices=list(gArchMarcoDict.keys()),
                       help='Specify the architecture used in preprocess package\'s source. For example: -a IA32_MSFT')
     parser.add_option('-m', '--mode', action='store', dest='DocumentMode', choices=['CHM', 'HTML'],
                       help='Specify the document mode from : CHM or HTML')
     parser.add_option('-i', '--includeonly', action='store_true', dest='IncludeOnly',
                       help='Only generate document for package\'s public interfaces produced by include folder. ')
diff --git a/BaseTools/Scripts/PackageDocumentTools/packagedocapp.pyw b/BaseTools/Scripts/PackageDocumentTools/packagedocapp.pyw
index 28f6f9bf5c..2998db3915 100644
--- a/BaseTools/Scripts/PackageDocumentTools/packagedocapp.pyw
+++ b/BaseTools/Scripts/PackageDocumentTools/packagedocapp.pyw
@@ -16,12 +16,12 @@
 import os, sys, wx, logging
 
 import wx.stc
 import wx.lib.newevent
 import wx.lib.agw.genericmessagedialog as GMD
-import plugins.EdkPlugins.edk2.model.baseobject as baseobject
-import plugins.EdkPlugins.edk2.model.doxygengen as doxygengen
+from plugins.EdkPlugins.edk2.model import baseobject
+from plugins.EdkPlugins.edk2.model import doxygengen
 
 if hasattr(sys, "frozen"):
     appPath = os.path.abspath(os.path.dirname(sys.executable))
 else:
     appPath = os.path.abspath(os.path.dirname(__file__))
@@ -718,11 +718,11 @@ class ProgressDialog(wx.Dialog):
             lines = []
             f = open (path_html, "r")
             lines = f.readlines()
             f.close()
             bfound = False
-            for index in xrange(len(lines)):
+            for index in range(len(lines)):
                 if lines[index].find('<a class="el" href="files.html" target="basefrm">File List</a>') != -1:
                     lines[index] = "<!-- %s" % lines[index]
                     bfound = True
                     continue
                 if bfound:
@@ -967,11 +967,11 @@ class ProgressDialog(wx.Dialog):
         self.LogMessage('    >>> Fixup .dox postfix for file %s \n' % path)
         try:
             fd = open(path, 'r')
             text = fd.read()
             fd.close()
-        except Exception, e:
+        except Exception as e:
             self.LogMessage ("   <<<Fail to open file %s" % path)
             return
         text = text.replace ('.s.dox', '.s')
         text = text.replace ('.S.dox', '.S')
         text = text.replace ('.asm.dox', '.asm')
@@ -980,33 +980,33 @@ class ProgressDialog(wx.Dialog):
         text = text.replace ('.Uni.dox', '.Uni')
         try:
             fd = open(path, 'w')
             fd.write(text)
             fd.close()
-        except Exception, e:
+        except Exception as e:
             self.LogMessage ("    <<<Fail to fixup file %s" % path)
             return
         self.LogMessage('    >>> Finish to fixup .dox postfix for file %s \n' % path)
 
     def FixDecDoxygenFileLink(self, path, text):
         self.LogMessage('    >>> Fixup .decdoxygen postfix for file %s \n' % path)
         try:
             fd = open(path, 'r')
             lines = fd.readlines()
             fd.close()
-        except Exception, e:
+        except Exception as e:
             self.LogMessage ("   <<<Fail to open file %s" % path)
             return
         for line in lines:
             if line.find('.decdoxygen') != -1:
                 lines.remove(line)
                 break
         try:
             fd = open(path, 'w')
             fd.write("".join(lines))
             fd.close()
-        except Exception, e:
+        except Exception as e:
             self.LogMessage ("    <<<Fail to fixup file %s" % path)
             return
         self.LogMessage('    >>> Finish to fixup .decdoxygen postfix for file %s \n' % path)
 
 import threading
diff --git a/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/basemodel/doxygen.py b/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/basemodel/doxygen.py
index d1e21135cf..ae47ff1344 100644
--- a/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/basemodel/doxygen.py
+++ b/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/basemodel/doxygen.py
@@ -65,11 +65,11 @@ class Page(BaseDoxygeItem):
         for page in pageArray:
             self.AddPage(page)
 
     def AddSection(self, section):
         self.mSections.append(section)
-        self.mSections.sort(cmp=lambda x, y: cmp(x.mName.lower(), y.mName.lower()))
+        self.mSections.sort(key=lambda x: x.mName.lower())
 
     def Generate(self):
         if self.mIsMainPage:
             self.mText.append('/** \mainpage %s' % self.mName)
             self.mIsSort = False
@@ -78,11 +78,11 @@ class Page(BaseDoxygeItem):
 
         if len(self.mDescription) != 0:
             self.mText.append(self.mDescription)
         endIndex = len(self.mText)
 
-        self.mSections.sort()
+        self.mSections.sort(key=lambda x: x.mName.lower())
         for sect in self.mSections:
             self.mText += sect.Generate()
 
         endIndex = len(self.mText)
 
diff --git a/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/baseobject.py b/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/baseobject.py
index b49c87c8bd..0159bd5269 100644
--- a/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/baseobject.py
+++ b/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/baseobject.py
@@ -8,16 +8,16 @@
 # http://opensource.org/licenses/bsd-license.php
 #
 # THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
 # WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
 
-import plugins.EdkPlugins.basemodel.ini as ini
-import plugins.EdkPlugins.edk2.model.dsc as dsc
-import plugins.EdkPlugins.edk2.model.inf as inf
-import plugins.EdkPlugins.edk2.model.dec as dec
+from ...basemodel import ini
+from ...edk2.model import dsc
+from ...edk2.model import inf
+from ...edk2.model import dec
 import os
-from plugins.EdkPlugins.basemodel.message import *
+from ...basemodel.message import *
 
 class SurfaceObject(object):
     _objs = {}
 
     def __new__(cls, *args, **kwargs):
@@ -653,17 +653,17 @@ class Package(SurfaceObject):
 
     def GetPcds(self):
         return self._pcds
 
     def GetPpis(self):
-        return self._ppis.values()
+        return list(self._ppis.values())
 
     def GetProtocols(self):
-        return self._protocols.values()
+        return list(self._protocols.values())
 
     def GetGuids(self):
-        return self._guids.values()
+        return list(self._guids.values())
 
     def Destroy(self):
         for pcd in self._pcds.values():
             if pcd is not None:
                 pcd.Destroy()
diff --git a/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/dec.py b/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/dec.py
index 9ff0df3851..3d210f72ac 100644
--- a/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/dec.py
+++ b/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/dec.py
@@ -9,13 +9,13 @@
 #
 # THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
 # WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
 #
 
-import plugins.EdkPlugins.basemodel.ini as ini
+from ...basemodel import ini
 import re, os
-from plugins.EdkPlugins.basemodel.message import *
+from ...basemodel.message import *
 
 class DECFile(ini.BaseINIFile):
 
     def GetSectionInstance(self, parent, name, isCombined=False):
         return DECSection(parent, name, isCombined)
diff --git a/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/doxygengen.py b/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/doxygengen.py
index c22d362ff3..9c299fbfc5 100644
--- a/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/doxygengen.py
+++ b/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/doxygengen.py
@@ -14,21 +14,21 @@
 # WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
 
 """This file produce action class to generate doxygen document for edk2 codebase.
    The action classes are shared by GUI and command line tools.
 """
-import plugins.EdkPlugins.basemodel.doxygen as doxygen
+from ...basemodel import doxygen
 import os
 try:
     import wx
     gInGui = True
 except:
     gInGui = False
 import re
-import plugins.EdkPlugins.edk2.model.inf as inf
-import plugins.EdkPlugins.edk2.model.dec as dec
-from plugins.EdkPlugins.basemodel.message import *
+from ...edk2.model import inf
+from ...edk2.model import dec
+from ...basemodel.message import *
 
 _ignore_dir = ['.svn', '_svn', 'cvs']
 _inf_key_description_mapping_table = {
   'INF_VERSION':'Version of INF file specification',
   #'BASE_NAME':'Module Name',
@@ -384,11 +384,11 @@ class PackageDocumentAction(DoxygenAction):
             return
 
         configFile.AddFile(path)
 
         no = 0
-        for no in xrange(len(lines)):
+        for no in range(len(lines)):
             if len(lines[no].strip()) == 0:
                 continue
             if lines[no].strip()[:2] in ['##', '//', '/*', '*/']:
                 continue
             index = lines[no].lower().find('include')
@@ -998,11 +998,11 @@ class PackageDocumentAction(DoxygenAction):
         newpath = path + '.dox'
         #import core.textfile as textfile
         #file = textfile.TextFile(path)
 
         try:
-            file = open(path, 'rb')
+            file = open(path, 'r')
         except (IOError, OSError) as msg:
             return None
 
         t = file.read()
         file.close()
diff --git a/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/doxygengen_spec.py b/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/doxygengen_spec.py
index 4bae6968a9..3a862a92ea 100644
--- a/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/doxygengen_spec.py
+++ b/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/doxygengen_spec.py
@@ -11,21 +11,21 @@
 # http://opensource.org/licenses/bsd-license.php
 #
 # THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
 # WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
 
-import plugins.EdkPlugins.basemodel.doxygen as doxygen
+from ...basemodel import doxygen
 import os
 try:
     import wx
     gInGui = True
 except:
     gInGui = False
 import re
-import plugins.EdkPlugins.edk2.model.inf as inf
-import plugins.EdkPlugins.edk2.model.dec as dec
-from plugins.EdkPlugins.basemodel.message import *
+from ...edk2.model import inf
+from ...edk2.model import dec
+from ...basemodel.message import *
 
 _ignore_dir = ['.svn', '_svn', 'cvs']
 _inf_key_description_mapping_table = {
   'INF_VERSION':'Version of INF file specification',
   #'BASE_NAME':'Module Name',
@@ -386,11 +386,11 @@ class PackageDocumentAction(DoxygenAction):
             return
 
         configFile.AddFile(path)
         return
         no = 0
-        for no in xrange(len(lines)):
+        for no in range(len(lines)):
             if len(lines[no].strip()) == 0:
                 continue
             if lines[no].strip()[:2] in ['##', '//', '/*', '*/']:
                 continue
             index = lines[no].lower().find('include')
@@ -1001,11 +1001,11 @@ class PackageDocumentAction(DoxygenAction):
         newpath = path + '.dox'
         #import core.textfile as textfile
         #file = textfile.TextFile(path)
 
         try:
-            file = open(path, 'rb')
+            file = open(path, 'r')
         except (IOError, OSError) as msg:
             return None
 
         t = file.read()
         file.close()
diff --git a/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/dsc.py b/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/dsc.py
index 0628fa7408..6f59e566b8 100644
--- a/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/dsc.py
+++ b/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/dsc.py
@@ -9,13 +9,13 @@
 #
 # THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
 # WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
 #
 
-import plugins.EdkPlugins.basemodel.ini as ini
+from ...basemodel import ini
 import re, os
-from plugins.EdkPlugins.basemodel.message import *
+from ...basemodel.message import *
 
 class DSCFile(ini.BaseINIFile):
     def GetSectionInstance(self, parent, name, isCombined=False):
         return DSCSection(parent, name, isCombined)
 
diff --git a/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/inf.py b/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/inf.py
index 793e95efed..cf2e49d3af 100644
--- a/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/inf.py
+++ b/BaseTools/Scripts/PackageDocumentTools/plugins/EdkPlugins/edk2/model/inf.py
@@ -9,13 +9,13 @@
 #
 # THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
 # WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
 #
 
-import plugins.EdkPlugins.basemodel.ini as ini
+from ...basemodel import ini
 import re, os
-from plugins.EdkPlugins.basemodel.message import *
+from ...basemodel.message import *
 
 class INFFile(ini.BaseINIFile):
     _libobjs = {}
 
     def GetSectionInstance(self, parent, name, isCombined=False):
-- 
2.20.1.windows.1




* [Patch 15/33] Basetools: It went wrong when use os.linesep
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (13 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 14/33] BaseTools/Scripts: Porting PackageDocumentTools code to use Python3 Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 16/33] BaseTools:Fv BaseAddress must set If it not set Feng, Bob C
                   ` (18 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao, Yonghong Zhu

From: Zhijux Fan <zhijux.fan@intel.com>

Replace os.linesep with TAB_LINE_BREAK ('\n') so that generated file
content uses consistent line breaks under both Python 2 and Python 3.

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/AutoGen/AutoGen.py | 10 +++++-----
 BaseTools/Source/Python/AutoGen/GenMake.py |  4 ++--
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/AutoGen.py b/BaseTools/Source/Python/AutoGen/AutoGen.py
index 0bed416c52..00ed804e62 100644
--- a/BaseTools/Source/Python/AutoGen/AutoGen.py
+++ b/BaseTools/Source/Python/AutoGen/AutoGen.py
@@ -679,31 +679,31 @@ class WorkspaceAutoGen(AutoGen):
         #
         # Create BuildOptions Macro & PCD metafile, also add the Active Platform and FDF file.
         #
         content = 'gCommandLineDefines: '
         content += str(GlobalData.gCommandLineDefines)
-        content += os.linesep
+        content += TAB_LINE_BREAK
         content += 'BuildOptionPcd: '
         content += str(GlobalData.BuildOptionPcd)
-        content += os.linesep
+        content += TAB_LINE_BREAK
         content += 'Active Platform: '
         content += str(self.Platform)
-        content += os.linesep
+        content += TAB_LINE_BREAK
         if self.FdfFile:
             content += 'Flash Image Definition: '
             content += str(self.FdfFile)
-            content += os.linesep
+            content += TAB_LINE_BREAK
         SaveFileOnChange(os.path.join(self.BuildDir, 'BuildOptions'), content, False)
 
         #
         # Create PcdToken Number file for Dynamic/DynamicEx Pcd.
         #
         PcdTokenNumber = 'PcdTokenNumber: '
         if Pa.PcdTokenNumber:
             if Pa.DynamicPcdList:
                 for Pcd in Pa.DynamicPcdList:
-                    PcdTokenNumber += os.linesep
+                    PcdTokenNumber += TAB_LINE_BREAK
                     PcdTokenNumber += str((Pcd.TokenCName, Pcd.TokenSpaceGuidCName))
                     PcdTokenNumber += ' : '
                     PcdTokenNumber += str(Pa.PcdTokenNumber[Pcd.TokenCName, Pcd.TokenSpaceGuidCName])
         SaveFileOnChange(os.path.join(self.BuildDir, 'PcdTokenNumber'), PcdTokenNumber, False)
 
diff --git a/BaseTools/Source/Python/AutoGen/GenMake.py b/BaseTools/Source/Python/AutoGen/GenMake.py
index 0e886967cc..3094a555e0 100644
--- a/BaseTools/Source/Python/AutoGen/GenMake.py
+++ b/BaseTools/Source/Python/AutoGen/GenMake.py
@@ -546,12 +546,12 @@ cleanlib:
                         NewStr.append(Str)
                 UnexpandMacroStr = ' '.join(UnexpandMacro)
                 NewRespStr = ' '.join(NewStr)
                 SaveFileOnChange(RespFile, NewRespStr, False)
                 ToolsDef.append("%s = %s" % (Resp, UnexpandMacroStr + ' @' + RespFile))
-                RespFileListContent += '@' + RespFile + os.linesep
-                RespFileListContent += NewRespStr + os.linesep
+                RespFileListContent += '@' + RespFile + TAB_LINE_BREAK
+                RespFileListContent += NewRespStr + TAB_LINE_BREAK
             SaveFileOnChange(RespFileList, RespFileListContent, False)
         else:
             if os.path.exists(RespFileList):
                 os.remove(RespFileList)
 
-- 
2.20.1.windows.1




* [Patch 16/33] BaseTools:Fv BaseAddress must set If it not set
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (14 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 15/33] Basetools: It went wrong when use os.linesep Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 17/33] BaseTools: Make sure AllPcdList valid Feng, Bob C
                   ` (17 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao

From: Zhijux Fan <zhijux.fan@intel.com>

If ForceRebase is not set and an FV is placed in an FD region, that FV
should still get an FvBaseAddress, derived from the FD base address
plus the region offset.
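The fallback the patch adds to GenFds.py amounts to the following computation (standalone sketch with invented values; the names mirror the patch):

```python
# When an FV in an FD region has no explicit BaseAddress, derive it
# from the FD base address (a hex string) plus the region offset.
def derive_fv_base(fd_base_address, region_offset):
    # int(x, 0) honors the 0x prefix, matching int(FdObj.BaseAddress, 0)
    return '0x%x' % (int(fd_base_address, 0) + region_offset)

assert derive_fv_base('0xFF000000', 0x20000) == '0xff020000'
```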

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/GenFds/FvImageSection.py | 2 ++
 BaseTools/Source/Python/GenFds/GenFds.py         | 5 +++++
 2 files changed, 7 insertions(+)

diff --git a/BaseTools/Source/Python/GenFds/FvImageSection.py b/BaseTools/Source/Python/GenFds/FvImageSection.py
index d6e1f3315b..7f277ddef2 100644
--- a/BaseTools/Source/Python/GenFds/FvImageSection.py
+++ b/BaseTools/Source/Python/GenFds/FvImageSection.py
@@ -101,10 +101,12 @@ class FvImageSection(FvImageSectionClassObject):
         if self.FvName is not None:
             Buffer = BytesIO('')
             Fv = GenFdsGlobalVariable.FdfParser.Profile.FvDict.get(self.FvName)
             if Fv is not None:
                 self.Fv = Fv
+                if not self.FvAddr and self.Fv.BaseAddress:
+                    self.FvAddr = self.Fv.BaseAddress
                 FvFileName = Fv.AddToBuffer(Buffer, self.FvAddr, MacroDict = Dict, Flag=IsMakefile)
                 if Fv.FvAlignment is not None:
                     if self.Alignment is None:
                         self.Alignment = Fv.FvAlignment
                     else:
diff --git a/BaseTools/Source/Python/GenFds/GenFds.py b/BaseTools/Source/Python/GenFds/GenFds.py
index ae5d7fd26d..f1ce527f88 100644
--- a/BaseTools/Source/Python/GenFds/GenFds.py
+++ b/BaseTools/Source/Python/GenFds/GenFds.py
@@ -360,10 +360,12 @@ def GenFdsApi(FdsCommandDict, WorkSpaceDataBase=None):
                     for RegionObj in FdObj.RegionList:
                         if RegionObj.RegionType != BINARY_FILE_TYPE_FV:
                             continue
                         for RegionData in RegionObj.RegionDataList:
                             if FvObj.UiFvName.upper() == RegionData.upper():
+                                if not FvObj.BaseAddress:
+                                    FvObj.BaseAddress = '0x%x' % (int(FdObj.BaseAddress, 0) + RegionObj.Offset)
                                 if FvObj.FvRegionInFD:
                                     if FvObj.FvRegionInFD != RegionObj.Size:
                                         EdkLogger.error("GenFds", FORMAT_INVALID, "The FV %s's region is specified in multiple FD with different value." %FvObj.UiFvName)
                                 else:
                                     FvObj.FvRegionInFD = RegionObj.Size
@@ -674,20 +676,23 @@ class GenFds(object):
         GuidXRefFile = BytesIO('')
         PkgGuidDict = {}
         GuidDict = {}
         ModuleList = []
         FileGuidList = []
+        VariableGuidSet = set()
         for Arch in ArchList:
             PlatformDataBase = BuildDb.BuildObject[GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag]
             PkgList = GenFdsGlobalVariable.WorkSpace.GetPackageList(GenFdsGlobalVariable.ActivePlatform, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag)
             for P in PkgList:
                 PkgGuidDict.update(P.Guids)
             for Name, Guid in PlatformDataBase.Pcds:
                 Pcd = PlatformDataBase.Pcds[Name, Guid]
                 if Pcd.Type in [TAB_PCDS_DYNAMIC_HII, TAB_PCDS_DYNAMIC_EX_HII]:
                     for SkuId in Pcd.SkuInfoList:
                         Sku = Pcd.SkuInfoList[SkuId]
+                        if Sku.VariableGuid in VariableGuidSet:continue
+                        VariableGuidSet.add(Sku.VariableGuid)
                         if Sku.VariableGuid and Sku.VariableGuid in PkgGuidDict.keys():
                             GuidDict[Sku.VariableGuid] = PkgGuidDict[Sku.VariableGuid]
             for ModuleFile in PlatformDataBase.Modules:
                 Module = BuildDb.BuildObject[ModuleFile, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag]
                 if Module in ModuleList:
-- 
2.20.1.windows.1



^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [Patch 17/33] BaseTools: Make sure AllPcdList valid.
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (15 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 16/33] BaseTools:Fv BaseAddress must set If it not set Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 18/33] BaseTools:TestTools character encoding issue Feng, Bob C
                   ` (16 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Bob Feng, Liming Gao

This patch turns AllPcdList into a property so that it is always evaluated on access.
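The diff below replaces an eagerly-assigned attribute with a read-only property, so the combined list is recomputed on every access instead of being captured once. A minimal sketch of the pattern (class and attribute names simplified from the patch):

```python
class PlatformAutoGen:
    def __init__(self):
        # These lists are populated later, during PCD collection.
        self.DynamicPcdList = []
        self.NonDynamicPcdList = []

    @property
    def AllPcdList(self):
        # Evaluated on each access, so it stays current even when the
        # underlying lists are filled in after construction.
        return self.DynamicPcdList + self.NonDynamicPcdList

pag = PlatformAutoGen()
pag.DynamicPcdList = ['PcdA']
pag.NonDynamicPcdList = ['PcdB']
print(pag.AllPcdList)  # ['PcdA', 'PcdB']
```

Because the property derives its value on demand, there is no stale `self.AllPcdList = ...` assignment to keep in sync, which is why the patch also deletes the explicit assignment at the end of PCD collection.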

Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
---
 BaseTools/Source/Python/AutoGen/AutoGen.py | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/AutoGen.py b/BaseTools/Source/Python/AutoGen/AutoGen.py
index 00ed804e62..f9ce17cf77 100644
--- a/BaseTools/Source/Python/AutoGen/AutoGen.py
+++ b/BaseTools/Source/Python/AutoGen/AutoGen.py
@@ -1142,11 +1142,10 @@ class PlatformAutoGen(AutoGen):
         self.Arch = Arch
         self.SourceDir = PlatformFile.SubDir
         self.SourceOverrideDir = None
         self.FdTargetList = self.Workspace.FdTargetList
         self.FvTargetList = self.Workspace.FvTargetList
-        self.AllPcdList = []
         # get the original module/package/platform objects
         self.BuildDatabase = Workspace.BuildDatabase
         self.DscBuildDataObj = Workspace.Platform
 
         # flag indicating if the makefile/C-code file has been created or not
@@ -1223,10 +1222,13 @@ class PlatformAutoGen(AutoGen):
         self.LibraryBuildDirectoryList = Makefile.GetLibraryBuildDirectoryList()
         self.ModuleBuildDirectoryList = Makefile.GetModuleBuildDirectoryList()
 
         self.IsMakeFileCreated = True
 
+    @property
+    def AllPcdList(self):
+        return self.DynamicPcdList + self.NonDynamicPcdList
     ## Deal with Shared FixedAtBuild Pcds
     #
     def CollectFixedAtBuildPcds(self):
         for LibAuto in self.LibraryAutoGenList:
             FixedAtBuildPcds = {}
@@ -1737,11 +1739,10 @@ class PlatformAutoGen(AutoGen):
                     if type(SkuId) in (str, unicode) and eval(SkuId) == 0 or SkuId == 0:
                         continue
                     pcd.SkuInfoList[SkuName] = copy.deepcopy(pcd.SkuInfoList[TAB_DEFAULT])
                     pcd.SkuInfoList[SkuName].SkuId = SkuId
                     pcd.SkuInfoList[SkuName].SkuIdName = SkuName
-        self.AllPcdList = self._NonDynamicPcdList + self._DynamicPcdList
 
     def FixVpdOffset(self, VpdFile ):
         FvPath = os.path.join(self.BuildDir, TAB_FV_DIRECTORY)
         if not os.path.exists(FvPath):
             try:
-- 
2.20.1.windows.1




* [Patch 18/33] BaseTools:TestTools character encoding issue
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (16 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 17/33] BaseTools: Make sure AllPcdList valid Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 19/33] BaseTools:Double carriage return inserted from Trim.py on Python3 Feng, Bob C
                   ` (15 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhiju Fan, Bob Feng, Liming Gao

From: Zhiju Fan <zhijux.fan@intel.com>

Specify an explicit encoding when opening a file, using codecs.open.
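The point of using codecs.open here is that it behaves the same on Python 2 and Python 3: it returns a stream that encodes and decodes with the named codec, so unicode text round-trips regardless of the platform default encoding. A small self-contained sketch (file path is illustrative):

```python
import codecs
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.txt')

# Write unicode text through an explicit UTF-8 codec; a plain
# open(path, 'w') would fall back to the locale encoding and can
# fail on non-ASCII data under Python 2.
with codecs.open(path, 'w', encoding='utf-8') as f:
    f.write(u'caf\u00e9 \u2013 GenFds')

with codecs.open(path, 'r', encoding='utf-8') as f:
    data = f.read()
assert data == u'caf\u00e9 \u2013 GenFds'
```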

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Tests/TestTools.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/BaseTools/Tests/TestTools.py b/BaseTools/Tests/TestTools.py
index 4332dcdaac..ace92992fc 100644
--- a/BaseTools/Tests/TestTools.py
+++ b/BaseTools/Tests/TestTools.py
@@ -22,10 +22,11 @@ import os.path
 import random
 import shutil
 import subprocess
 import sys
 import unittest
+import codecs
 
 TestsDir = os.path.realpath(os.path.split(sys.argv[0])[0])
 BaseToolsDir = os.path.realpath(os.path.join(TestsDir, '..'))
 CSourceDir = os.path.join(BaseToolsDir, 'Source', 'C')
 PythonSourceDir = os.path.join(BaseToolsDir, 'Source', 'Python')
@@ -148,11 +149,11 @@ class BaseToolsTest(unittest.TestCase):
     def WriteTmpFile(self, fileName, data):
         if isinstance(data, bytes):
             with open(self.GetTmpFilePath(fileName), 'wb') as f:
                 f.write(data)
         else:
-            with open(self.GetTmpFilePath(fileName), 'w') as f:
+            with codecs.open(self.GetTmpFilePath(fileName), 'w', encoding='utf-8') as f:
                 f.write(data)
 
     def GenRandomFileData(self, fileName, minlen = None, maxlen = None):
         if maxlen is None: maxlen = minlen
         f = self.OpenTmpFile(fileName, 'w')
-- 
2.20.1.windows.1




* [Patch 19/33] BaseTools:Double carriage return inserted from Trim.py on Python3
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (17 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 18/33] BaseTools:TestTools character encoding issue Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 20/33] BaseTools:File open failed for VPD MapFile Feng, Bob C
                   ` (14 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhiju Fan, Bob Feng, Liming Gao

From: Zhiju Fan <zhijux.fan@intel.com>

https://bugzilla.tianocore.org/show_bug.cgi?id=1379

Line 208 of BaseTools/Source/Python/Trim/Trim.py uses
'NewLines.append(os.linesep)' to insert a new line into
the list that will be written to the output file.
When the file is written in text mode, the '\r\n' inserted
with os.linesep is expanded to '\r\r\n', which causes some
assemblers to fail.
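The doubling happens because a text-mode stream translates every '\n' it writes into the platform line ending, so writing os.linesep ('\r\n' on Windows) through that layer produces '\r\r\n'. The effect can be reproduced deterministically by simulating Windows text-mode translation with io.StringIO (a sketch; the real code writes to a file object):

```python
import io

# newline='\r\n' makes StringIO translate each written '\n' into
# '\r\n', which is what a text-mode file does on Windows.
buf = io.StringIO(newline='\r\n')
buf.write('line' + '\r\n')  # what appending os.linesep injects on Windows
buf.write('line' + '\n')    # a bare '\n', as the patched code appends
data = buf.getvalue()

# The explicit '\r\n' became '\r\r\n'; the bare '\n' became '\r\n'.
assert data == 'line\r\r\nline\r\n'
```

Appending a bare '\n' and letting the text layer perform the translation, as the patch does, yields a single CR LF per line.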

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/Trim/Trim.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/BaseTools/Source/Python/Trim/Trim.py b/BaseTools/Source/Python/Trim/Trim.py
index 4b3091bec3..51010bf326 100644
--- a/BaseTools/Source/Python/Trim/Trim.py
+++ b/BaseTools/Source/Python/Trim/Trim.py
@@ -203,11 +203,11 @@ def TrimPreprocessedFile(Source, Target, ConvertHex, TrimLong):
                 # possible?
                 NewLines[LineNumber - 1] = Line
             else:
                 if LineNumber > (len(NewLines) + 1):
                     for LineIndex in range(len(NewLines), LineNumber-1):
-                        NewLines.append(os.linesep)
+                        NewLines.append(TAB_LINE_BREAK)
                 NewLines.append(Line)
             LineNumber = None
             EdkLogger.verbose("Now we have lines: %d" % len(NewLines))
         else:
             NewLines.append(Line)
-- 
2.20.1.windows.1




* [Patch 20/33] BaseTools:File open failed for VPD MapFile
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (18 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 19/33] BaseTools:Double carriage return inserted from Trim.py on Python3 Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 21/33] BaseTools: change the Division Operator Feng, Bob C
                   ` (13 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Bob Feng, Liming Gao, Zhiju . Fan

Correct the MapFile open call so that it works on both Python 2 and Python 3.
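The original call passed buffering=0 to a text-mode open, which Python 2 accepted but Python 3 rejects with ValueError ("can't have unbuffered text I/O"); dropping the buffering argument is valid on both versions. A sketch of the difference (file path is illustrative):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.map')

# Python 2 allowed an unbuffered text-mode file; on Python 3 the
# same call raises ValueError, because only binary streams may be
# unbuffered.
try:
    open(path, 'w', 0)
    unbuffered_ok = True   # Python 2 behaviour
except ValueError:
    unbuffered_ok = False  # Python 3 behaviour

# Omitting the buffering argument works on both major versions.
with open(path, 'w') as f:
    f.write('PCD offset map\n')
```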

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/BPDG/GenVpd.py | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/BaseTools/Source/Python/BPDG/GenVpd.py b/BaseTools/Source/Python/BPDG/GenVpd.py
index b91837d3d6..71fcfc5ac3 100644
--- a/BaseTools/Source/Python/BPDG/GenVpd.py
+++ b/BaseTools/Source/Python/BPDG/GenVpd.py
@@ -649,11 +649,11 @@ class GenVPD :
         except:
             # Open failed
             EdkLogger.error("BPDG", BuildToolError.FILE_OPEN_FAILURE, "File open failed for %s" % self.VpdFileName, None)
 
         try :
-            fMapFile = open(MapFileName, "w", 0)
+            fMapFile = open(MapFileName, "w")
         except:
             # Open failed
             EdkLogger.error("BPDG", BuildToolError.FILE_OPEN_FAILURE, "File open failed for %s" % self.MapFileName, None)
 
         # Use a instance of BytesIO to cache data
@@ -673,12 +673,16 @@ class GenVPD :
                 EdkLogger.error("BPDG", BuildToolError.FILE_WRITE_FAILURE, "Write data to file %s failed, please check whether the file been locked or using by other applications." % self.MapFileName, None)
 
             # Write Vpd binary file
             fStringIO.seek (eachPcd.PcdBinOffset)
             if isinstance(eachPcd.PcdValue, list):
-                ValueList = [chr(Item) for Item in eachPcd.PcdValue]
-                fStringIO.write(''.join(ValueList))
+                for i in range(len(eachPcd.PcdValue)):
+                    Value = eachPcd.PcdValue[i:i + 1]
+                    if isinstance(bytes(Value), str):
+                        fStringIO.write(chr(Value[0]))
+                    else:
+                        fStringIO.write(bytes(Value))
             else:
                 fStringIO.write (eachPcd.PcdValue)
 
         try :
             fVpdFile.write (fStringIO.getvalue())
-- 
2.20.1.windows.1




* [Patch 21/33] BaseTools: change the Division Operator
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (19 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 20/33] BaseTools:File open failed for VPD MapFile Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:05 ` [Patch 22/33] BaseTools:There is extra blank line in datalog Feng, Bob C
                   ` (12 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao, Yonghong Zhu

From: Zhijux Fan <zhijux.fan@intel.com>

PEP 238 -- Changing the Division Operator:
x/y returns a reasonable approximation of the mathematical result
    of the division ("true division")
x//y returns the floor ("floor division")
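The behavioral difference this patch works around can be shown in a few lines: on Python 2, dividing two ints truncates, while on Python 3 the same expression yields a float, so every size or alignment computation that needs an integer result must use //:

```python
# Python 2: 7 / 2 == 3 (int division truncates).
# Python 3: 7 / 2 == 3.5 (true division); // keeps the old semantics.
assert 7 // 2 == 3    # floor division, identical on both versions
assert 7 / 2 == 3.5   # true division, Python 3 semantics

# Typical BaseTools use: rounding a bit count up to whole bytes,
# as in (TmpValue.bit_length() + 7) // 8 from the diff below.
assert (0x48 + 7) // 8 == 9
```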

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/AutoGen/AutoGen.py             |  2 +-
 BaseTools/Source/Python/AutoGen/GenC.py                |  8 ++++----
 BaseTools/Source/Python/BPDG/GenVpd.py                 |  8 ++++----
 BaseTools/Source/Python/Common/Expression.py           |  9 ++++++++-
 BaseTools/Source/Python/Common/Misc.py                 |  6 +++---
 BaseTools/Source/Python/GenFds/DataSection.py          |  4 ++--
 BaseTools/Source/Python/GenFds/EfiSection.py           |  4 ++--
 BaseTools/Source/Python/GenFds/FfsInfStatement.py      |  8 ++++----
 BaseTools/Source/Python/GenFds/Fv.py                   |  4 ++--
 BaseTools/Source/Python/GenFds/FvImageSection.py       | 10 +++++-----
 BaseTools/Source/Python/GenFds/GenFds.py               |  2 +-
 BaseTools/Source/Python/GenFds/Region.py               |  4 ++--
 BaseTools/Source/Python/PatchPcdValue/PatchPcdValue.py |  4 ++--
 BaseTools/Source/Python/build/build.py                 | 18 +++++++++---------
 14 files changed, 49 insertions(+), 42 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/AutoGen.py b/BaseTools/Source/Python/AutoGen/AutoGen.py
index f9ce17cf77..5f0da5a815 100644
--- a/BaseTools/Source/Python/AutoGen/AutoGen.py
+++ b/BaseTools/Source/Python/AutoGen/AutoGen.py
@@ -3812,11 +3812,11 @@ class ModuleAutoGen(AutoGen):
                             else:
                                 NewValue = NewValue + '0x%02x' % (ord(PcdValue[Index]) % 0x100) + ', '
                         Padding = '0x00, '
                         if Unicode:
                             Padding = Padding * 2
-                            ArraySize = ArraySize / 2
+                            ArraySize = ArraySize // 2
                         if ArraySize < (len(PcdValue) + 1):
                             if Pcd.MaxSizeUserSet:
                                 EdkLogger.error("build", AUTOGEN_ERROR,
                                             "The maximum size of VOID* type PCD '%s.%s' is less than its actual size occupied." % (Pcd.TokenSpaceGuidCName, TokenCName)
                                             )
diff --git a/BaseTools/Source/Python/AutoGen/GenC.py b/BaseTools/Source/Python/AutoGen/GenC.py
index 915ba2e235..e46942a3e2 100644
--- a/BaseTools/Source/Python/AutoGen/GenC.py
+++ b/BaseTools/Source/Python/AutoGen/GenC.py
@@ -1051,21 +1051,21 @@ def CreateModulePcdCode(Info, AutoGenC, AutoGenH, Pcd):
                     if Unicode:
                         NewValue = NewValue + str(ord(Value[Index]) % 0x10000) + ', '
                     else:
                         NewValue = NewValue + str(ord(Value[Index]) % 0x100) + ', '
                 if Unicode:
-                    ArraySize = ArraySize / 2
+                    ArraySize = ArraySize // 2
                 Value = NewValue + '0 }'
             if ArraySize < ValueSize:
                 if Pcd.MaxSizeUserSet:
                     EdkLogger.error("build", AUTOGEN_ERROR,
                                 "The maximum size of VOID* type PCD '%s.%s' is less than its actual size occupied." % (Pcd.TokenSpaceGuidCName, TokenCName),
                                 ExtraData="[%s]" % str(Info))
                 else:
                     ArraySize = Pcd.GetPcdSize()
                     if Unicode:
-                        ArraySize = ArraySize / 2
+                        ArraySize = ArraySize // 2
             Array = '[%d]' % ArraySize
         #
         # skip casting for fixed at build since it breaks ARM assembly.
         # Long term we need PCD macros that work in assembly
         #
@@ -1904,20 +1904,20 @@ def BmpImageDecoder(File, Buffer, PaletteIndex, TransParent):
         if TransParent:
             ImageBuffer = pack('B', EFI_HII_IIBT_IMAGE_1BIT_TRANS)
         else:
             ImageBuffer = pack('B', EFI_HII_IIBT_IMAGE_1BIT)
         ImageBuffer += pack('B', PaletteIndex)
-        Width = (BmpHeader.biWidth + 7)/8
+        Width = (BmpHeader.biWidth + 7)//8
         if BmpHeader.bfOffBits > BMP_IMAGE_HEADER_STRUCT.size + 2:
             PaletteBuffer = Buffer[BMP_IMAGE_HEADER_STRUCT.size + 2 : BmpHeader.bfOffBits]
     elif BmpHeader.biBitCount == 4:
         if TransParent:
             ImageBuffer = pack('B', EFI_HII_IIBT_IMAGE_4BIT_TRANS)
         else:
             ImageBuffer = pack('B', EFI_HII_IIBT_IMAGE_4BIT)
         ImageBuffer += pack('B', PaletteIndex)
-        Width = (BmpHeader.biWidth + 1)/2
+        Width = (BmpHeader.biWidth + 1)//2
         if BmpHeader.bfOffBits > BMP_IMAGE_HEADER_STRUCT.size + 2:
             PaletteBuffer = Buffer[BMP_IMAGE_HEADER_STRUCT.size + 2 : BmpHeader.bfOffBits]
     elif BmpHeader.biBitCount == 8:
         if TransParent:
             ImageBuffer = pack('B', EFI_HII_IIBT_IMAGE_8BIT_TRANS)
diff --git a/BaseTools/Source/Python/BPDG/GenVpd.py b/BaseTools/Source/Python/BPDG/GenVpd.py
index 71fcfc5ac3..b4a2dd25a2 100644
--- a/BaseTools/Source/Python/BPDG/GenVpd.py
+++ b/BaseTools/Source/Python/BPDG/GenVpd.py
@@ -429,11 +429,11 @@ class GenVPD :
                             EdkLogger.warn("BPDG", "The offset value of PCD %s is not 8-byte aligned!" %(PCD.PcdCName), File=self.InputFileName)
                         else:
                             EdkLogger.error("BPDG", BuildToolError.FORMAT_INVALID, 'The offset value of PCD %s should be %s-byte aligned.' % (PCD.PcdCName, Alignment))
                 else:
                     if PCD.PcdOccupySize % Alignment != 0:
-                        PCD.PcdOccupySize = (PCD.PcdOccupySize / Alignment + 1) * Alignment
+                        PCD.PcdOccupySize = (PCD.PcdOccupySize // Alignment + 1) * Alignment
 
                 PackSize = PCD.PcdOccupySize
                 if PCD._IsBoolean(PCD.PcdValue, PCD.PcdSize):
                     PCD._PackBooleanValue(PCD.PcdValue)
                     self.FileLinesList[count] = PCD
@@ -507,11 +507,11 @@ class GenVPD :
         if (len(self.PcdFixedOffsetSizeList) == 0) and (len(self.PcdUnknownOffsetList) != 0) :
             # The offset start from 0
             NowOffset = 0
             for Pcd in self.PcdUnknownOffsetList :
                 if NowOffset % Pcd.Alignment != 0:
-                    NowOffset = (NowOffset/ Pcd.Alignment + 1) * Pcd.Alignment
+                    NowOffset = (NowOffset// Pcd.Alignment + 1) * Pcd.Alignment
                 Pcd.PcdBinOffset = NowOffset
                 Pcd.PcdOffset    = str(hex(Pcd.PcdBinOffset))
                 NowOffset       += Pcd.PcdOccupySize
 
             self.PcdFixedOffsetSizeList = self.PcdUnknownOffsetList
@@ -571,11 +571,11 @@ class GenVPD :
                         eachUnfixedPcd      = self.PcdUnknownOffsetList[countOfUnfixedList]
                         needFixPcdSize      = eachUnfixedPcd.PcdOccupySize
                         # Not been fixed
                         if eachUnfixedPcd.PcdOffset == TAB_STAR :
                             if LastOffset % eachUnfixedPcd.Alignment != 0:
-                                LastOffset = (LastOffset / eachUnfixedPcd.Alignment + 1) * eachUnfixedPcd.Alignment
+                                LastOffset = (LastOffset // eachUnfixedPcd.Alignment + 1) * eachUnfixedPcd.Alignment
                             # The offset un-fixed pcd can write into this free space
                             if needFixPcdSize <= (NowOffset - LastOffset) :
                                 # Change the offset value of un-fixed pcd
                                 eachUnfixedPcd.PcdOffset    = str(hex(LastOffset))
                                 eachUnfixedPcd.PcdBinOffset = LastOffset
@@ -625,11 +625,11 @@ class GenVPD :
             LastPcd    = self.PcdFixedOffsetSizeList[lenOfList-1]
             NeedFixPcd = self.PcdUnknownOffsetList[0]
 
             NeedFixPcd.PcdBinOffset = LastPcd.PcdBinOffset + LastPcd.PcdOccupySize
             if NeedFixPcd.PcdBinOffset % NeedFixPcd.Alignment != 0:
-                NeedFixPcd.PcdBinOffset = (NeedFixPcd.PcdBinOffset / NeedFixPcd.Alignment + 1) * NeedFixPcd.Alignment
+                NeedFixPcd.PcdBinOffset = (NeedFixPcd.PcdBinOffset // NeedFixPcd.Alignment + 1) * NeedFixPcd.Alignment
 
             NeedFixPcd.PcdOffset    = str(hex(NeedFixPcd.PcdBinOffset))
 
             # Insert this pcd into fixed offset pcd list's tail.
             self.PcdFixedOffsetSizeList.insert(lenOfList, NeedFixPcd)
diff --git a/BaseTools/Source/Python/Common/Expression.py b/BaseTools/Source/Python/Common/Expression.py
index 19ea13b7fb..0c7e25b445 100644
--- a/BaseTools/Source/Python/Common/Expression.py
+++ b/BaseTools/Source/Python/Common/Expression.py
@@ -437,10 +437,17 @@ class ValueExpression(BaseExpression):
                 if Val:
                     Val = Val2
                 else:
                     Val = Val3
                 continue
+            #
+            # PEP 238 -- Changing the Division Operator
+            # x/y to return a reasonable approximation of the mathematical result of the division ("true division")
+            # x//y to return the floor ("floor division")
+            #
+            if Op == '/':
+                Op = '//'
             try:
                 Val = self.Eval(Op, Val, EvalFunc())
             except WrnExpression as Warn:
                 self._WarnExcept = Warn
                 Val = Warn.result
@@ -910,11 +917,11 @@ class ValueExpressionEx(ValueExpression):
                         TmpValue = int(PcdValue)
                         TmpList = []
                         if TmpValue.bit_length() == 0:
                             PcdValue = '{0x00}'
                         else:
-                            for I in range((TmpValue.bit_length() + 7) / 8):
+                            for I in range((TmpValue.bit_length() + 7) // 8):
                                 TmpList.append('0x%02x' % ((TmpValue >> I * 8) & 0xff))
                             PcdValue = '{' + ', '.join(TmpList) + '}'
                     except:
                         if PcdValue.strip().startswith('{'):
                             PcdValueList = SplitPcdValueString(PcdValue.strip()[1:-1])
diff --git a/BaseTools/Source/Python/Common/Misc.py b/BaseTools/Source/Python/Common/Misc.py
index d23a075f43..bd7c2812e8 100644
--- a/BaseTools/Source/Python/Common/Misc.py
+++ b/BaseTools/Source/Python/Common/Misc.py
@@ -1023,11 +1023,11 @@ def ParseFieldValue (Value):
         return '{' + out + '}', Size
 
     if "{CODE(" in Value:
         return Value, len(Value.split(","))
     if isinstance(Value, type(0)):
-        return Value, (Value.bit_length() + 7) / 8
+        return Value, (Value.bit_length() + 7) // 8
     if not isinstance(Value, type('')):
         raise BadExpression('Type %s is %s' %(Value, type(Value)))
     Value = Value.strip()
     if Value.startswith(TAB_UINT8) and Value.endswith(')'):
         Value, Size = ParseFieldValue(Value.split('(', 1)[1][:-1])
@@ -1144,16 +1144,16 @@ def ParseFieldValue (Value):
             Value = int(Value, 16)
         except:
             raise BadExpression("invalid hex value: %s" % Value)
         if Value == 0:
             return 0, 1
-        return Value, (Value.bit_length() + 7) / 8
+        return Value, (Value.bit_length() + 7) // 8
     if Value[0].isdigit():
         Value = int(Value, 10)
         if Value == 0:
             return 0, 1
-        return Value, (Value.bit_length() + 7) / 8
+        return Value, (Value.bit_length() + 7) // 8
     if Value.lower() == 'true':
         return 1, 1
     if Value.lower() == 'false':
         return 0, 1
     return Value, 1
diff --git a/BaseTools/Source/Python/GenFds/DataSection.py b/BaseTools/Source/Python/GenFds/DataSection.py
index 28f9b931ca..989e33c43f 100644
--- a/BaseTools/Source/Python/GenFds/DataSection.py
+++ b/BaseTools/Source/Python/GenFds/DataSection.py
@@ -86,13 +86,13 @@ class DataSection (DataSectionClassObject):
         if self.Alignment == 'Auto' and self.SecType in (BINARY_FILE_TYPE_TE, BINARY_FILE_TYPE_PE32):
             ImageObj = PeImageClass (Filename)
             if ImageObj.SectionAlignment < 0x400:
                 self.Alignment = str (ImageObj.SectionAlignment)
             elif ImageObj.SectionAlignment < 0x100000:
-                self.Alignment = str (ImageObj.SectionAlignment / 0x400) + 'K'
+                self.Alignment = str (ImageObj.SectionAlignment // 0x400) + 'K'
             else:
-                self.Alignment = str (ImageObj.SectionAlignment / 0x100000) + 'M'
+                self.Alignment = str (ImageObj.SectionAlignment // 0x100000) + 'M'
 
         NoStrip = True
         if self.SecType in (BINARY_FILE_TYPE_TE, BINARY_FILE_TYPE_PE32):
             if self.KeepReloc is not None:
                 NoStrip = self.KeepReloc
diff --git a/BaseTools/Source/Python/GenFds/EfiSection.py b/BaseTools/Source/Python/GenFds/EfiSection.py
index 0be176ec8a..dfb2470874 100644
--- a/BaseTools/Source/Python/GenFds/EfiSection.py
+++ b/BaseTools/Source/Python/GenFds/EfiSection.py
@@ -246,13 +246,13 @@ class EfiSection (EfiSectionClassObject):
                     if self.Alignment == 'Auto' and (SectionType == BINARY_FILE_TYPE_PE32 or SectionType == BINARY_FILE_TYPE_TE):
                         ImageObj = PeImageClass (File)
                         if ImageObj.SectionAlignment < 0x400:
                             Align = str (ImageObj.SectionAlignment)
                         elif ImageObj.SectionAlignment < 0x100000:
-                            Align = str (ImageObj.SectionAlignment / 0x400) + 'K'
+                            Align = str (ImageObj.SectionAlignment // 0x400) + 'K'
                         else:
-                            Align = str (ImageObj.SectionAlignment / 0x100000) + 'M'
+                            Align = str (ImageObj.SectionAlignment // 0x100000) + 'M'
 
                     if File[(len(File)-4):] == '.efi':
                         MapFile = File.replace('.efi', '.map')
                         CopyMapFile = os.path.join(OutputPath, ModuleName + '.map')
                         if IsMakefile:
diff --git a/BaseTools/Source/Python/GenFds/FfsInfStatement.py b/BaseTools/Source/Python/GenFds/FfsInfStatement.py
index 4dda3cf787..a7298a6daf 100644
--- a/BaseTools/Source/Python/GenFds/FfsInfStatement.py
+++ b/BaseTools/Source/Python/GenFds/FfsInfStatement.py
@@ -770,13 +770,13 @@ class FfsInfStatement(FfsInfStatementClassObject):
                 if self.Alignment == 'Auto' and (SectionType == BINARY_FILE_TYPE_PE32 or SectionType == BINARY_FILE_TYPE_TE):
                     ImageObj = PeImageClass (File)
                     if ImageObj.SectionAlignment < 0x400:
                         self.Alignment = str (ImageObj.SectionAlignment)
                     elif ImageObj.SectionAlignment < 0x100000:
-                        self.Alignment = str (ImageObj.SectionAlignment / 0x400) + 'K'
+                        self.Alignment = str (ImageObj.SectionAlignment // 0x400) + 'K'
                     else:
-                        self.Alignment = str (ImageObj.SectionAlignment / 0x100000) + 'M'
+                        self.Alignment = str (ImageObj.SectionAlignment // 0x100000) + 'M'
 
                 if not NoStrip:
                     FileBeforeStrip = os.path.join(self.OutputPath, ModuleName + '.reloc')
                     if not os.path.exists(FileBeforeStrip) or \
                            (os.path.getmtime(File) > os.path.getmtime(FileBeforeStrip)):
@@ -812,13 +812,13 @@ class FfsInfStatement(FfsInfStatementClassObject):
             if self.Alignment == 'Auto' and (SectionType == BINARY_FILE_TYPE_PE32 or SectionType == BINARY_FILE_TYPE_TE):
                 ImageObj = PeImageClass (GenSecInputFile)
                 if ImageObj.SectionAlignment < 0x400:
                     self.Alignment = str (ImageObj.SectionAlignment)
                 elif ImageObj.SectionAlignment < 0x100000:
-                    self.Alignment = str (ImageObj.SectionAlignment / 0x400) + 'K'
+                    self.Alignment = str (ImageObj.SectionAlignment // 0x400) + 'K'
                 else:
-                    self.Alignment = str (ImageObj.SectionAlignment / 0x100000) + 'M'
+                    self.Alignment = str (ImageObj.SectionAlignment // 0x100000) + 'M'
 
             if not NoStrip:
                 FileBeforeStrip = os.path.join(self.OutputPath, ModuleName + '.reloc')
                 if not os.path.exists(FileBeforeStrip) or \
                        (os.path.getmtime(GenSecInputFile) > os.path.getmtime(FileBeforeStrip)):
diff --git a/BaseTools/Source/Python/GenFds/Fv.py b/BaseTools/Source/Python/GenFds/Fv.py
index bd5c259348..b141d44dc4 100644
--- a/BaseTools/Source/Python/GenFds/Fv.py
+++ b/BaseTools/Source/Python/GenFds/Fv.py
@@ -220,13 +220,13 @@ class FV (object):
                         if FvAlignmentValue >= 0x100000:
                             if FvAlignmentValue >= 0x1000000:
                             #The max alignment supported by FFS is 16M.
                                 self.FvAlignment = "16M"
                             else:
-                                self.FvAlignment = str(FvAlignmentValue / 0x100000) + "M"
+                                self.FvAlignment = str(FvAlignmentValue // 0x100000) + "M"
                         else:
-                            self.FvAlignment = str(FvAlignmentValue / 0x400) + "K"
+                            self.FvAlignment = str(FvAlignmentValue // 0x400) + "K"
                     else:
                         # FvAlignmentValue is less than 1K
                         self.FvAlignment = str (FvAlignmentValue)
                     FvFileObj.close()
                     GenFdsGlobalVariable.ImageBinDict[self.UiFvName.upper() + 'fv'] = FvOutputFile
diff --git a/BaseTools/Source/Python/GenFds/FvImageSection.py b/BaseTools/Source/Python/GenFds/FvImageSection.py
index 7f277ddef2..535b86ab5e 100644
--- a/BaseTools/Source/Python/GenFds/FvImageSection.py
+++ b/BaseTools/Source/Python/GenFds/FvImageSection.py
@@ -69,11 +69,11 @@ class FvImageSection(FvImageSectionClassObject):
                     FvFileObj = open (FvFileName, 'rb')
                     FvFileObj.seek(0)
                     # PI FvHeader is 0x48 byte
                     FvHeaderBuffer = FvFileObj.read(0x48)
                     # FV alignment position.
-                    FvAlignmentValue = 1 << (ord (FvHeaderBuffer[0x2E]) & 0x1F)
+                    FvAlignmentValue = 1 << (FvHeaderBuffer[0x2E] & 0x1F)
                     FvFileObj.close()
                 if FvAlignmentValue > MaxFvAlignment:
                     MaxFvAlignment = FvAlignmentValue
 
                 OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + SectionSuffix.get("FV_IMAGE"))
@@ -85,13 +85,13 @@ class FvImageSection(FvImageSectionClassObject):
                 if MaxFvAlignment >= 0x100000:
                     #The max alignment supported by FFS is 16M.
                     if MaxFvAlignment >= 0x1000000:
                         self.Alignment = "16M"
                     else:
-                        self.Alignment = str(MaxFvAlignment / 0x100000) + "M"
+                        self.Alignment = str(MaxFvAlignment // 0x100000) + "M"
                 else:
-                    self.Alignment = str (MaxFvAlignment / 0x400) + "K"
+                    self.Alignment = str (MaxFvAlignment // 0x400) + "K"
             else:
                 # MaxFvAlignment is less than 1K
                 self.Alignment = str (MaxFvAlignment)
 
             return OutputFileList, self.Alignment
@@ -127,13 +127,13 @@ class FvImageSection(FvImageSectionClassObject):
                             if FvAlignmentValue >= 0x100000:
                                 #The max alignment supported by FFS is 16M.
                                 if FvAlignmentValue >= 0x1000000:
                                     self.Alignment = "16M"
                                 else:
-                                    self.Alignment = str(FvAlignmentValue / 0x100000) + "M"
+                                    self.Alignment = str(FvAlignmentValue // 0x100000) + "M"
                             else:
-                                self.Alignment = str (FvAlignmentValue / 0x400) + "K"
+                                self.Alignment = str (FvAlignmentValue // 0x400) + "K"
                         else:
                             # FvAlignmentValue is less than 1K
                             self.Alignment = str (FvAlignmentValue)
                         FvFileObj.close()
                     else:
diff --git a/BaseTools/Source/Python/GenFds/GenFds.py b/BaseTools/Source/Python/GenFds/GenFds.py
index f1ce527f88..2efb2edd9a 100644
--- a/BaseTools/Source/Python/GenFds/GenFds.py
+++ b/BaseTools/Source/Python/GenFds/GenFds.py
@@ -749,11 +749,11 @@ class GenFds(object):
                             for File in MatchDict['.ui']:
                                 with open(os.path.join(FfsPath[0], File), 'rb') as F:
                                     F.read()
                                     length = F.tell()
                                     F.seek(4)
-                                    TmpStr = unpack('%dh' % ((length - 4) / 2), F.read())
+                                    TmpStr = unpack('%dh' % ((length - 4) // 2), F.read())
                                     Name = ''.join(chr(c) for c in TmpStr[:-1])
                         else:
                             FileList = []
                             if 'fv.sec.txt' in MatchDict:
                                 FileList = MatchDict['fv.sec.txt']
diff --git a/BaseTools/Source/Python/GenFds/Region.py b/BaseTools/Source/Python/GenFds/Region.py
index acc9dea413..83363276d2 100644
--- a/BaseTools/Source/Python/GenFds/Region.py
+++ b/BaseTools/Source/Python/GenFds/Region.py
@@ -298,20 +298,20 @@ class Region(object):
                 continue
             # region located in current blocks
             else:
                 # region ended within current blocks
                 if self.Offset + self.Size <= End:
-                    ExpectedList.append((BlockSize, (RemindingSize + BlockSize - 1) / BlockSize))
+                    ExpectedList.append((BlockSize, (RemindingSize + BlockSize - 1) // BlockSize))
                     break
                 # region not ended yet
                 else:
                     # region not started in middle of current blocks
                     if self.Offset <= Start:
                         UsedBlockNum = BlockNum
                     # region started in middle of current blocks
                     else:
-                        UsedBlockNum = (End - self.Offset) / BlockSize
+                        UsedBlockNum = (End - self.Offset) // BlockSize
                     Start = End
                     ExpectedList.append((BlockSize, UsedBlockNum))
                     RemindingSize -= BlockSize * UsedBlockNum
 
         if FvObj.BlockSizeList == []:
diff --git a/BaseTools/Source/Python/PatchPcdValue/PatchPcdValue.py b/BaseTools/Source/Python/PatchPcdValue/PatchPcdValue.py
index 8e243aea96..0be5eba492 100644
--- a/BaseTools/Source/Python/PatchPcdValue/PatchPcdValue.py
+++ b/BaseTools/Source/Python/PatchPcdValue/PatchPcdValue.py
@@ -131,11 +131,11 @@ def PatchBinaryFile(FileName, ValueOffset, TypeName, ValueString, MaxSize=0):
         #
         # Set PCD value into binary data
         #
         for Index in range(ValueLength):
             ByteList[ValueOffset + Index] = ValueNumber % 0x100
-            ValueNumber = ValueNumber / 0x100
+            ValueNumber = ValueNumber // 0x100
     elif TypeName == TAB_VOID:
         ValueString = SavedStr
         if ValueString.startswith('L"'):
             #
             # Patch Unicode String
@@ -146,11 +146,11 @@ def PatchBinaryFile(FileName, ValueOffset, TypeName, ValueString, MaxSize=0):
                 # Reserve zero as unicode tail
                 #
                 if Index + 2 >= ValueLength:
                     break
                 #
                 # Set string value one by one
                 #
                 ByteList[ValueOffset + Index] = ord(ByteString)
                 Index = Index + 2
         elif ValueString.startswith("{") and ValueString.endswith("}"):
             #
diff --git a/BaseTools/Source/Python/build/build.py b/BaseTools/Source/Python/build/build.py
index 94074b89b4..139a1dfe29 100644
--- a/BaseTools/Source/Python/build/build.py
+++ b/BaseTools/Source/Python/build/build.py
@@ -662,11 +662,11 @@ class PeImageInfo():
         self.Guid             = Guid
         self.Arch             = Arch
         self.OutputDir        = OutputDir
         self.DebugDir         = DebugDir
         self.Image            = ImageClass
-        self.Image.Size       = (self.Image.Size / 0x1000 + 1) * 0x1000
+        self.Image.Size       = (self.Image.Size // 0x1000 + 1) * 0x1000
 
 ## The class implementing the EDK2 build process
 #
 #   The build process includes:
 #       1. Load configuration from target.txt and tools_def.txt in $(WORKSPACE)/Conf
@@ -1580,25 +1580,25 @@ class Build():
             # Patch real PCD value by PatchPcdValue tool
             #
             for PcdInfo in PcdTable:
                 ReturnValue = 0
                 if PcdInfo[0] == TAB_PCDS_PATCHABLE_LOAD_FIX_ADDRESS_PEI_PAGE_SIZE:
-                    ReturnValue, ErrorInfo = PatchBinaryFile (EfiImage, PcdInfo[1], TAB_PCDS_PATCHABLE_LOAD_FIX_ADDRESS_PEI_PAGE_SIZE_DATA_TYPE, str (PeiSize / 0x1000))
+                    ReturnValue, ErrorInfo = PatchBinaryFile (EfiImage, PcdInfo[1], TAB_PCDS_PATCHABLE_LOAD_FIX_ADDRESS_PEI_PAGE_SIZE_DATA_TYPE, str (PeiSize // 0x1000))
                 elif PcdInfo[0] == TAB_PCDS_PATCHABLE_LOAD_FIX_ADDRESS_DXE_PAGE_SIZE:
-                    ReturnValue, ErrorInfo = PatchBinaryFile (EfiImage, PcdInfo[1], TAB_PCDS_PATCHABLE_LOAD_FIX_ADDRESS_DXE_PAGE_SIZE_DATA_TYPE, str (BtSize / 0x1000))
+                    ReturnValue, ErrorInfo = PatchBinaryFile (EfiImage, PcdInfo[1], TAB_PCDS_PATCHABLE_LOAD_FIX_ADDRESS_DXE_PAGE_SIZE_DATA_TYPE, str (BtSize // 0x1000))
                 elif PcdInfo[0] == TAB_PCDS_PATCHABLE_LOAD_FIX_ADDRESS_RUNTIME_PAGE_SIZE:
-                    ReturnValue, ErrorInfo = PatchBinaryFile (EfiImage, PcdInfo[1], TAB_PCDS_PATCHABLE_LOAD_FIX_ADDRESS_RUNTIME_PAGE_SIZE_DATA_TYPE, str (RtSize / 0x1000))
+                    ReturnValue, ErrorInfo = PatchBinaryFile (EfiImage, PcdInfo[1], TAB_PCDS_PATCHABLE_LOAD_FIX_ADDRESS_RUNTIME_PAGE_SIZE_DATA_TYPE, str (RtSize // 0x1000))
                 elif PcdInfo[0] == TAB_PCDS_PATCHABLE_LOAD_FIX_ADDRESS_SMM_PAGE_SIZE and len (SmmModuleList) > 0:
-                    ReturnValue, ErrorInfo = PatchBinaryFile (EfiImage, PcdInfo[1], TAB_PCDS_PATCHABLE_LOAD_FIX_ADDRESS_SMM_PAGE_SIZE_DATA_TYPE, str (SmmSize / 0x1000))
+                    ReturnValue, ErrorInfo = PatchBinaryFile (EfiImage, PcdInfo[1], TAB_PCDS_PATCHABLE_LOAD_FIX_ADDRESS_SMM_PAGE_SIZE_DATA_TYPE, str (SmmSize // 0x1000))
                 if ReturnValue != 0:
                     EdkLogger.error("build", PARAMETER_INVALID, "Patch PCD value failed", ExtraData=ErrorInfo)
 
-        MapBuffer.write('PEI_CODE_PAGE_NUMBER      = 0x%x\n' % (PeiSize / 0x1000))
-        MapBuffer.write('BOOT_CODE_PAGE_NUMBER     = 0x%x\n' % (BtSize / 0x1000))
-        MapBuffer.write('RUNTIME_CODE_PAGE_NUMBER  = 0x%x\n' % (RtSize / 0x1000))
+        MapBuffer.write('PEI_CODE_PAGE_NUMBER      = 0x%x\n' % (PeiSize // 0x1000))
+        MapBuffer.write('BOOT_CODE_PAGE_NUMBER     = 0x%x\n' % (BtSize // 0x1000))
+        MapBuffer.write('RUNTIME_CODE_PAGE_NUMBER  = 0x%x\n' % (RtSize // 0x1000))
         if len (SmmModuleList) > 0:
-            MapBuffer.write('SMM_CODE_PAGE_NUMBER      = 0x%x\n' % (SmmSize / 0x1000))
+            MapBuffer.write('SMM_CODE_PAGE_NUMBER      = 0x%x\n' % (SmmSize // 0x1000))
 
         PeiBaseAddr = TopMemoryAddress - RtSize - BtSize
         BtBaseAddr  = TopMemoryAddress - RtSize
         RtBaseAddr  = TopMemoryAddress - ReservedRuntimeMemorySize
 
-- 
2.20.1.windows.1



^ permalink raw reply related	[flat|nested] 50+ messages in thread
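
[Editorial note] For readers migrating similar code: the hunks above rely on two Python 3 behavior changes. First, `/` is always true division (float result) and `//` is floor division, so every integer-division site must switch to `//`. Second, indexing a `bytes` object returns an `int` rather than a one-character string, which is why the `ord()` call around `FvHeaderBuffer[0x2E]` is dropped. A minimal sketch of both (the header bytes below are hypothetical, not a real FV header):

```python
# Python 3 semantics the patch relies on (these lines would fail or
# behave differently under Python 2).

# True division vs floor division: '/' returns a float, '//' an int.
size = 0x300000
assert size / 0x100000 == 3.0           # float result under Python 3
assert size // 0x100000 == 3            # integer result, as the alignment code needs
assert str(size // 0x100000) + "M" == "3M"

# Indexing bytes yields an int in Python 3, so ord() is unnecessary.
header = b"\x00" * 0x2E + b"\x23"       # hypothetical FV header bytes
alignment = 1 << (header[0x2E] & 0x1F)  # header[0x2E] is already an int (0x23)
assert alignment == 8
```

Under Python 2 the same `header[0x2E]` would be the string `'\x23'` and the bit-mask would raise a TypeError, hence the original `ord()`.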

* [Patch 22/33] BaseTools:There is extra blank line in datalog
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (20 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 21/33] BaseTools: change the Division Operator Feng, Bob C
@ 2019-01-29  2:05 ` Feng, Bob C
  2019-01-29  2:06 ` [Patch 23/33] BaseTools: Similar to octal data rectification Feng, Bob C
                   ` (11 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:05 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhiju Fan, Bob Feng, Liming Gao

From: Zhiju Fan <zhijux.fan@intel.com>

When the datalog is opened with Notepad++, a blank
line appears after every line; there should be none.

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/build/BuildReport.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/BaseTools/Source/Python/build/BuildReport.py b/BaseTools/Source/Python/build/BuildReport.py
index b940de1c90..ff632b6759 100644
--- a/BaseTools/Source/Python/build/BuildReport.py
+++ b/BaseTools/Source/Python/build/BuildReport.py
@@ -248,11 +248,11 @@ def FileLinesSplit(Content=None, MaxLength=None):
         if Line:
             NewContentList.append(Line)
     for NewLine in NewContentList:
         NewContent += NewLine + TAB_LINE_BREAK
 
-    NewContent = NewContent.replace(TAB_LINE_BREAK, gEndOfLine).replace('\r\r\n', gEndOfLine)
+    NewContent = NewContent.replace(gEndOfLine, TAB_LINE_BREAK).replace('\r\r\n', gEndOfLine)
     return NewContent
 
 
 
 ##
-- 
2.20.1.windows.1



^ permalink raw reply related	[flat|nested] 50+ messages in thread
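
[Editorial note] The underlying issue: if the report text already contains the platform line ending (`'\r\n'` on Windows) and the code then substitutes every `'\n'` — including the one inside `'\r\n'` — the result contains `'\r\r\n'`, which Notepad++ renders as an extra blank line. A small illustration of the doubling and one way to normalize it (a sketch using names that mirror, but are not taken verbatim from, BuildReport.py):

```python
TAB_LINE_BREAK = '\n'   # internal line separator used while assembling the report
gEndOfLine = '\r\n'     # platform line ending on Windows

# Naive substitution doubles the carriage return when the input already has '\r\n':
text = 'line1' + gEndOfLine + 'line2' + TAB_LINE_BREAK
broken = text.replace(TAB_LINE_BREAK, gEndOfLine)
assert '\r\r\n' in broken   # '\r\n' became '\r\r\n' -> blank line in Notepad++

# Normalizing to the internal separator first avoids the doubling:
fixed = text.replace(gEndOfLine, TAB_LINE_BREAK).replace(TAB_LINE_BREAK, gEndOfLine)
assert '\r\r\n' not in fixed
assert fixed == 'line1\r\nline2\r\n'
```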

* [Patch 23/33] BaseTools: Similar to octal data rectification
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (21 preceding siblings ...)
  2019-01-29  2:05 ` [Patch 22/33] BaseTools:There is extra blank line in datalog Feng, Bob C
@ 2019-01-29  2:06 ` Feng, Bob C
  2019-01-29  2:06 ` [Patch 24/33] BaseTools: Update windows and linux run scripts file to use Python3 Feng, Bob C
                   ` (10 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:06 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao, Yonghong Zhu

From: Zhijux Fan <zhijux.fan@intel.com>

In Python 3, if Value is octal data (a string with a bare leading zero), int(Value, 0) raises an error.

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/AutoGen/GenC.py      |  2 +-
 BaseTools/Source/Python/Common/Misc.py       |  2 +-
 BaseTools/Source/Python/build/BuildReport.py | 17 +++++++++++------
 3 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/GenC.py b/BaseTools/Source/Python/AutoGen/GenC.py
index e46942a3e2..f1f3b6f359 100644
--- a/BaseTools/Source/Python/AutoGen/GenC.py
+++ b/BaseTools/Source/Python/AutoGen/GenC.py
@@ -1008,11 +1008,11 @@ def CreateModulePcdCode(Info, AutoGenC, AutoGenH, Pcd):
 
         if Pcd.DatumType in TAB_PCD_CLEAN_NUMERIC_TYPES:
             try:
                 if Value.upper().endswith('L'):
                     Value = Value[:-1]
-                if Value.startswith('0') and not Value.lower().startswith('0x') and len(Value) > 2:
+                if Value.startswith('0') and not Value.lower().startswith('0x') and len(Value) > 1 and Value.lstrip('0'):
                     Value = Value.lstrip('0')
                 ValueNumber = int (Value, 0)
             except:
                 EdkLogger.error("build", AUTOGEN_ERROR,
                                 "PCD value is not valid dec or hex number for datum type [%s] of PCD %s.%s" % (Pcd.DatumType, Pcd.TokenSpaceGuidCName, TokenCName),
diff --git a/BaseTools/Source/Python/Common/Misc.py b/BaseTools/Source/Python/Common/Misc.py
index bd7c2812e8..e0e355286b 100644
--- a/BaseTools/Source/Python/Common/Misc.py
+++ b/BaseTools/Source/Python/Common/Misc.py
@@ -1308,11 +1308,11 @@ def CheckPcdDatum(Type, Value):
     elif Type == 'BOOLEAN':
         if Value not in ['TRUE', 'True', 'true', '0x1', '0x01', '1', 'FALSE', 'False', 'false', '0x0', '0x00', '0']:
             return False, "Invalid value [%s] of type [%s]; must be one of TRUE, True, true, 0x1, 0x01, 1"\
                           ", FALSE, False, false, 0x0, 0x00, 0" % (Value, Type)
     elif Type in [TAB_UINT8, TAB_UINT16, TAB_UINT32, TAB_UINT64]:
-        if Value.startswith('0') and not Value.lower().startswith('0x') and len(Value) > 2:
+        if Value.startswith('0') and not Value.lower().startswith('0x') and len(Value) > 1 and Value.lstrip('0'):
             Value = Value.lstrip('0')
         try:
             if Value and int(Value, 0) < 0:
                 return False, "PCD can't be set to negative value[%s] for datum type [%s]" % (Value, Type)
             Value = int(Value, 0)
diff --git a/BaseTools/Source/Python/build/BuildReport.py b/BaseTools/Source/Python/build/BuildReport.py
index ff632b6759..9483262dd1 100644
--- a/BaseTools/Source/Python/build/BuildReport.py
+++ b/BaseTools/Source/Python/build/BuildReport.py
@@ -1024,33 +1024,37 @@ class PcdReport(object):
                     FileWrite(File, Key)
                     First = False
 
 
                 if Pcd.DatumType in TAB_PCD_NUMERIC_TYPES:
-                    if PcdValue.startswith('0') and not PcdValue.lower().startswith('0x') and len(PcdValue) > 2:
+                    if PcdValue.startswith('0') and not PcdValue.lower().startswith('0x') and \
+                            len(PcdValue) > 1 and PcdValue.lstrip('0'):
                         PcdValue = PcdValue.lstrip('0')
                     PcdValueNumber = int(PcdValue.strip(), 0)
                     if DecDefaultValue is None:
                         DecMatch = True
                     else:
-                        if DecDefaultValue.startswith('0') and not DecDefaultValue.lower().startswith('0x') and len(DecDefaultValue) > 2:
+                        if DecDefaultValue.startswith('0') and not DecDefaultValue.lower().startswith('0x') and \
+                                len(DecDefaultValue) > 1 and DecDefaultValue.lstrip('0'):
                             DecDefaultValue = DecDefaultValue.lstrip('0')
                         DecDefaultValueNumber = int(DecDefaultValue.strip(), 0)
                         DecMatch = (DecDefaultValueNumber == PcdValueNumber)
 
                     if InfDefaultValue is None:
                         InfMatch = True
                     else:
-                        if InfDefaultValue.startswith('0') and not InfDefaultValue.lower().startswith('0x') and len(InfDefaultValue) > 2:
+                        if InfDefaultValue.startswith('0') and not InfDefaultValue.lower().startswith('0x') and \
+                                len(InfDefaultValue) > 1 and InfDefaultValue.lstrip('0'):
                             InfDefaultValue = InfDefaultValue.lstrip('0')
                         InfDefaultValueNumber = int(InfDefaultValue.strip(), 0)
                         InfMatch = (InfDefaultValueNumber == PcdValueNumber)
 
                     if DscDefaultValue is None:
                         DscMatch = True
                     else:
-                        if DscDefaultValue.startswith('0') and not DscDefaultValue.lower().startswith('0x') and len(DscDefaultValue) > 2:
+                        if DscDefaultValue.startswith('0') and not DscDefaultValue.lower().startswith('0x') and \
+                                len(DscDefaultValue) > 1 and DscDefaultValue.lstrip('0'):
                             DscDefaultValue = DscDefaultValue.lstrip('0')
                         DscDefaultValueNumber = int(DscDefaultValue.strip(), 0)
                         DscMatch = (DscDefaultValueNumber == PcdValueNumber)
                 else:
                     if DecDefaultValue is None:
@@ -1169,11 +1173,12 @@ class PcdReport(object):
                     if not BuildOptionMatch:
                         ModuleOverride = self.ModulePcdOverride.get((Pcd.TokenCName, Pcd.TokenSpaceGuidCName), {})
                         for ModulePath in ModuleOverride:
                             ModuleDefault = ModuleOverride[ModulePath]
                             if Pcd.DatumType in TAB_PCD_NUMERIC_TYPES:
-                                if ModuleDefault.startswith('0') and not ModuleDefault.lower().startswith('0x') and len(ModuleDefault) > 2:
+                                if ModuleDefault.startswith('0') and not ModuleDefault.lower().startswith('0x') and \
+                                        len(ModuleDefault) > 1 and ModuleDefault.lstrip('0'):
                                     ModuleDefault = ModuleDefault.lstrip('0')
                                 ModulePcdDefaultValueNumber = int(ModuleDefault.strip(), 0)
                                 Match = (ModulePcdDefaultValueNumber == PcdValueNumber)
                                 if Pcd.DatumType == 'BOOLEAN':
                                     ModuleDefault = str(ModulePcdDefaultValueNumber)
@@ -1272,11 +1277,11 @@ class PcdReport(object):
                 FileWrite(File, ' %-*s   : %6s %10s = %s' % (self.MaxLen, Flag + ' ' + PcdTokenCName, TypeName, '(' + Pcd.DatumType + ')', '{'))
                 for Array in ArrayList:
                     FileWrite(File, Array)
             else:
                 if Pcd.DatumType in TAB_PCD_CLEAN_NUMERIC_TYPES:
-                    if Value.startswith('0') and not Value.lower().startswith('0x') and len(Value) > 2:
+                    if Value.startswith('0') and not Value.lower().startswith('0x') and len(Value) > 1 and Value.lstrip('0'):
                         Value = Value.lstrip('0')
                     if Value.startswith(('0x', '0X')):
                         Value = '{} ({:d})'.format(Value, int(Value, 0))
                     else:
                         Value = "0x{:X} ({})".format(int(Value, 0), Value)
-- 
2.20.1.windows.1



^ permalink raw reply related	[flat|nested] 50+ messages in thread
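
[Editorial note] The reason for the `lstrip('0')` pattern above: Python 2 accepted C-style octal literals such as `'010'`, but Python 3's `int(value, 0)` only accepts the `'0o'` prefix, so a zero-padded decimal string raises ValueError. The added `len(Value) > 1 and Value.lstrip('0')` guards keep a plain `'0'` (or an all-zero string) from being stripped down to an empty string. A sketch of the behavior:

```python
# Python 3 rejects bare leading-zero literals when parsing with base 0:
octal_rejected = False
try:
    int('010', 0)
except ValueError:
    octal_rejected = True
assert octal_rejected

# Stripping the leading zeros first makes the value parse as decimal:
value = '010'
if value.startswith('0') and not value.lower().startswith('0x') \
        and len(value) > 1 and value.lstrip('0'):
    value = value.lstrip('0')
assert int(value, 0) == 10

# The len/lstrip guards leave '0' itself untouched:
value = '0'
if value.startswith('0') and not value.lower().startswith('0x') \
        and len(value) > 1 and value.lstrip('0'):
    value = value.lstrip('0')
assert value == '0' and int(value, 0) == 0
```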

* [Patch 24/33] BaseTools: Update windows and linux run scripts file to use Python3
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (22 preceding siblings ...)
  2019-01-29  2:06 ` [Patch 23/33] BaseTools: Similar to octal data rectification Feng, Bob C
@ 2019-01-29  2:06 ` Feng, Bob C
  2019-01-29  2:06 ` [Patch 25/33] BaseTools:Update build tool to print python version information Feng, Bob C
                   ` (9 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:06 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao, Yonghong Zhu

From: Zhijux Fan <zhijux.fan@intel.com>

Modify the Windows scripts, PosixLike scripts, edksetup.sh and
edksetup.bat to use Python3, selected via the PYTHON3_ENABLE
environment variable.

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Bin/CYGWIN_NT-5.1-i686/Ecc                            |  6 +++---
 BaseTools/Bin/CYGWIN_NT-5.1-i686/GenDepex                       |  6 +++---
 BaseTools/Bin/CYGWIN_NT-5.1-i686/GenFds                         |  6 +++---
 BaseTools/Bin/CYGWIN_NT-5.1-i686/TargetTool                     |  6 +++---
 BaseTools/Bin/CYGWIN_NT-5.1-i686/Trim                           |  6 +++---
 BaseTools/Bin/CYGWIN_NT-5.1-i686/build                          |  6 +++---
 BaseTools/BinWrappers/PosixLike/BPDG                            |  6 +++---
 BaseTools/BinWrappers/PosixLike/Ecc                             |  6 +++---
 BaseTools/BinWrappers/PosixLike/GenDepex                        |  6 +++---
 BaseTools/BinWrappers/PosixLike/GenFds                          |  6 +++---
 BaseTools/BinWrappers/PosixLike/GenPatchPcdTable                |  6 +++---
 BaseTools/BinWrappers/PosixLike/GenerateCapsule                 |  6 +++---
 BaseTools/BinWrappers/PosixLike/PatchPcdValue                   |  6 +++---
 BaseTools/BinWrappers/PosixLike/Pkcs7Sign                       |  6 +++---
 BaseTools/BinWrappers/PosixLike/Rsa2048Sha256GenerateKeys       |  6 +++---
 BaseTools/BinWrappers/PosixLike/Rsa2048Sha256Sign               |  6 +++---
 BaseTools/BinWrappers/PosixLike/TargetTool                      |  6 +++---
 BaseTools/BinWrappers/PosixLike/Trim                            |  6 +++---
 BaseTools/BinWrappers/PosixLike/UPT                             |  6 +++---
 BaseTools/BinWrappers/PosixLike/build                           |  6 +++---
 BaseTools/BinWrappers/WindowsLike/BPDG.bat                      |  2 +-
 BaseTools/BinWrappers/WindowsLike/Ecc.bat                       |  2 +-
 BaseTools/BinWrappers/WindowsLike/GenDepex.bat                  |  2 +-
 BaseTools/BinWrappers/WindowsLike/GenFds.bat                    |  2 +-
 BaseTools/BinWrappers/WindowsLike/GenPatchPcdTable.bat          |  2 +-
 BaseTools/BinWrappers/WindowsLike/GenerateCapsule.bat           |  2 +-
 BaseTools/BinWrappers/WindowsLike/PatchPcdValue.bat             |  2 +-
 BaseTools/BinWrappers/WindowsLike/Pkcs7Sign.bat                 |  2 +-
 BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256GenerateKeys.bat |  2 +-
 BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256Sign.bat         |  2 +-
 BaseTools/BinWrappers/WindowsLike/TargetTool.bat                |  2 +-
 BaseTools/BinWrappers/WindowsLike/Trim.bat                      |  2 +-
 BaseTools/BinWrappers/WindowsLike/UPT.bat                       |  2 +-
 BaseTools/BinWrappers/WindowsLike/build.bat                     |  2 +-
 BaseTools/Makefile                                              |  8 ++++----
 BaseTools/Source/C/Makefile                                     |  8 ++++----
 BaseTools/Tests/GNUmakefile                                     |  2 +-
 BaseTools/toolsetup.bat                                         | 53 ++++++++++++++++++++++++++++++++++++++++++++---------
 edksetup.sh                                                     | 51 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 39 files changed, 177 insertions(+), 93 deletions(-)

diff --git a/BaseTools/Bin/CYGWIN_NT-5.1-i686/Ecc b/BaseTools/Bin/CYGWIN_NT-5.1-i686/Ecc
index 214d88fff1..8532fe510d 100755
--- a/BaseTools/Bin/CYGWIN_NT-5.1-i686/Ecc
+++ b/BaseTools/Bin/CYGWIN_NT-5.1-i686/Ecc
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/Bin/CYGWIN_NT-5.1-i686/GenDepex b/BaseTools/Bin/CYGWIN_NT-5.1-i686/GenDepex
index 214d88fff1..8532fe510d 100755
--- a/BaseTools/Bin/CYGWIN_NT-5.1-i686/GenDepex
+++ b/BaseTools/Bin/CYGWIN_NT-5.1-i686/GenDepex
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/Bin/CYGWIN_NT-5.1-i686/GenFds b/BaseTools/Bin/CYGWIN_NT-5.1-i686/GenFds
index 214d88fff1..8532fe510d 100755
--- a/BaseTools/Bin/CYGWIN_NT-5.1-i686/GenFds
+++ b/BaseTools/Bin/CYGWIN_NT-5.1-i686/GenFds
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/Bin/CYGWIN_NT-5.1-i686/TargetTool b/BaseTools/Bin/CYGWIN_NT-5.1-i686/TargetTool
index 214d88fff1..8532fe510d 100755
--- a/BaseTools/Bin/CYGWIN_NT-5.1-i686/TargetTool
+++ b/BaseTools/Bin/CYGWIN_NT-5.1-i686/TargetTool
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/Bin/CYGWIN_NT-5.1-i686/Trim b/BaseTools/Bin/CYGWIN_NT-5.1-i686/Trim
index 7cac4f7c4f..54e09c039b 100755
--- a/BaseTools/Bin/CYGWIN_NT-5.1-i686/Trim
+++ b/BaseTools/Bin/CYGWIN_NT-5.1-i686/Trim
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 exe=$(basename "$full_cmd")
diff --git a/BaseTools/Bin/CYGWIN_NT-5.1-i686/build b/BaseTools/Bin/CYGWIN_NT-5.1-i686/build
index 214d88fff1..8532fe510d 100755
--- a/BaseTools/Bin/CYGWIN_NT-5.1-i686/build
+++ b/BaseTools/Bin/CYGWIN_NT-5.1-i686/build
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/BPDG b/BaseTools/BinWrappers/PosixLike/BPDG
index 276c7ea207..e9f570b52c 100755
--- a/BaseTools/BinWrappers/PosixLike/BPDG
+++ b/BaseTools/BinWrappers/PosixLike/BPDG
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/Ecc b/BaseTools/BinWrappers/PosixLike/Ecc
index 1142964028..ed4b7cd384 100755
--- a/BaseTools/BinWrappers/PosixLike/Ecc
+++ b/BaseTools/BinWrappers/PosixLike/Ecc
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/GenDepex b/BaseTools/BinWrappers/PosixLike/GenDepex
index dad174788b..d99e54f222 100755
--- a/BaseTools/BinWrappers/PosixLike/GenDepex
+++ b/BaseTools/BinWrappers/PosixLike/GenDepex
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/GenFds b/BaseTools/BinWrappers/PosixLike/GenFds
index 276c7ea207..e9f570b52c 100755
--- a/BaseTools/BinWrappers/PosixLike/GenFds
+++ b/BaseTools/BinWrappers/PosixLike/GenFds
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/GenPatchPcdTable b/BaseTools/BinWrappers/PosixLike/GenPatchPcdTable
index 01ae23ddeb..d8b8b8f145 100755
--- a/BaseTools/BinWrappers/PosixLike/GenPatchPcdTable
+++ b/BaseTools/BinWrappers/PosixLike/GenPatchPcdTable
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/GenerateCapsule b/BaseTools/BinWrappers/PosixLike/GenerateCapsule
index 59a6c8ba43..91bbd22738 100755
--- a/BaseTools/BinWrappers/PosixLike/GenerateCapsule
+++ b/BaseTools/BinWrappers/PosixLike/GenerateCapsule
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/PatchPcdValue b/BaseTools/BinWrappers/PosixLike/PatchPcdValue
index 01ae23ddeb..d8b8b8f145 100755
--- a/BaseTools/BinWrappers/PosixLike/PatchPcdValue
+++ b/BaseTools/BinWrappers/PosixLike/PatchPcdValue
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/Pkcs7Sign b/BaseTools/BinWrappers/PosixLike/Pkcs7Sign
index 01ae23ddeb..d8b8b8f145 100755
--- a/BaseTools/BinWrappers/PosixLike/Pkcs7Sign
+++ b/BaseTools/BinWrappers/PosixLike/Pkcs7Sign
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/Rsa2048Sha256GenerateKeys b/BaseTools/BinWrappers/PosixLike/Rsa2048Sha256GenerateKeys
index 1bc1054a34..b42a126840 100755
--- a/BaseTools/BinWrappers/PosixLike/Rsa2048Sha256GenerateKeys
+++ b/BaseTools/BinWrappers/PosixLike/Rsa2048Sha256GenerateKeys
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/Rsa2048Sha256Sign b/BaseTools/BinWrappers/PosixLike/Rsa2048Sha256Sign
index 01ae23ddeb..d8b8b8f145 100755
--- a/BaseTools/BinWrappers/PosixLike/Rsa2048Sha256Sign
+++ b/BaseTools/BinWrappers/PosixLike/Rsa2048Sha256Sign
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/TargetTool b/BaseTools/BinWrappers/PosixLike/TargetTool
index 01ae23ddeb..d8b8b8f145 100755
--- a/BaseTools/BinWrappers/PosixLike/TargetTool
+++ b/BaseTools/BinWrappers/PosixLike/TargetTool
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/Trim b/BaseTools/BinWrappers/PosixLike/Trim
index 6c8dde5bec..d64b834006 100755
--- a/BaseTools/BinWrappers/PosixLike/Trim
+++ b/BaseTools/BinWrappers/PosixLike/Trim
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 exe=$(basename "$full_cmd")
diff --git a/BaseTools/BinWrappers/PosixLike/UPT b/BaseTools/BinWrappers/PosixLike/UPT
index 01ae23ddeb..d8b8b8f145 100755
--- a/BaseTools/BinWrappers/PosixLike/UPT
+++ b/BaseTools/BinWrappers/PosixLike/UPT
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/build b/BaseTools/BinWrappers/PosixLike/build
index 01ae23ddeb..d8b8b8f145 100755
--- a/BaseTools/BinWrappers/PosixLike/build
+++ b/BaseTools/BinWrappers/PosixLike/build
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a python2 command is available, use it in preference to python
-if command -v python2 >/dev/null 2>&1; then
-    python_exe=python2
+# If a ${PYTHON} command is available, use it in preference to python
+if command -v ${PYTHON} >/dev/null 2>&1; then
+    python_exe=${PYTHON}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/WindowsLike/BPDG.bat b/BaseTools/BinWrappers/WindowsLike/BPDG.bat
index 98095cfbd4..4a43e5353e 100644
--- a/BaseTools/BinWrappers/WindowsLike/BPDG.bat
+++ b/BaseTools/BinWrappers/WindowsLike/BPDG.bat
@@ -1,4 +1,4 @@
 @setlocal
 @set ToolName=%~n0%
 @set PYTHONPATH=%PYTHONPATH%;%BASE_TOOLS_PATH%\Source\Python
-@%PYTHON_HOME%\python.exe -m %ToolName%.%ToolName% %*
+@%PYTHON% -m %ToolName%.%ToolName% %*
diff --git a/BaseTools/BinWrappers/WindowsLike/Ecc.bat b/BaseTools/BinWrappers/WindowsLike/Ecc.bat
index 8705e7541e..e63ef50135 100644
--- a/BaseTools/BinWrappers/WindowsLike/Ecc.bat
+++ b/BaseTools/BinWrappers/WindowsLike/Ecc.bat
@@ -1,4 +1,4 @@
 @setlocal
 @set ToolName=%~n0%
 @set PYTHONPATH=%PYTHONPATH%;%BASE_TOOLS_PATH%\Source\Python
-@%PYTHON_HOME%\python.exe -m %ToolName%.EccMain %*
+@%PYTHON% -m %ToolName%.EccMain %*
diff --git a/BaseTools/BinWrappers/WindowsLike/GenDepex.bat b/BaseTools/BinWrappers/WindowsLike/GenDepex.bat
index ffc783d2be..6c7250f008 100644
--- a/BaseTools/BinWrappers/WindowsLike/GenDepex.bat
+++ b/BaseTools/BinWrappers/WindowsLike/GenDepex.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON_HOME%\python.exe %BASE_TOOLS_PATH%\Source\Python\AutoGen\%ToolName%.py %*
+@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\AutoGen\%ToolName%.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/GenFds.bat b/BaseTools/BinWrappers/WindowsLike/GenFds.bat
index 98095cfbd4..4a43e5353e 100644
--- a/BaseTools/BinWrappers/WindowsLike/GenFds.bat
+++ b/BaseTools/BinWrappers/WindowsLike/GenFds.bat
@@ -1,4 +1,4 @@
 @setlocal
 @set ToolName=%~n0%
 @set PYTHONPATH=%PYTHONPATH%;%BASE_TOOLS_PATH%\Source\Python
-@%PYTHON_HOME%\python.exe -m %ToolName%.%ToolName% %*
+@%PYTHON% -m %ToolName%.%ToolName% %*
diff --git a/BaseTools/BinWrappers/WindowsLike/GenPatchPcdTable.bat b/BaseTools/BinWrappers/WindowsLike/GenPatchPcdTable.bat
index 9fbb704a6e..82e0a90d6c 100644
--- a/BaseTools/BinWrappers/WindowsLike/GenPatchPcdTable.bat
+++ b/BaseTools/BinWrappers/WindowsLike/GenPatchPcdTable.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON_HOME%\python.exe %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
+@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/GenerateCapsule.bat b/BaseTools/BinWrappers/WindowsLike/GenerateCapsule.bat
index ca442d181b..1ab7d33f98 100644
--- a/BaseTools/BinWrappers/WindowsLike/GenerateCapsule.bat
+++ b/BaseTools/BinWrappers/WindowsLike/GenerateCapsule.bat
@@ -1 +1 @@
-@%PYTHON_HOME%\python.exe %BASE_TOOLS_PATH%\Source\Python\Capsule\GenerateCapsule.py %*
+@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\Capsule\GenerateCapsule.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/PatchPcdValue.bat b/BaseTools/BinWrappers/WindowsLike/PatchPcdValue.bat
index 9fbb704a6e..82e0a90d6c 100644
--- a/BaseTools/BinWrappers/WindowsLike/PatchPcdValue.bat
+++ b/BaseTools/BinWrappers/WindowsLike/PatchPcdValue.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON_HOME%\python.exe %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
+@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/Pkcs7Sign.bat b/BaseTools/BinWrappers/WindowsLike/Pkcs7Sign.bat
index 9fbb704a6e..82e0a90d6c 100644
--- a/BaseTools/BinWrappers/WindowsLike/Pkcs7Sign.bat
+++ b/BaseTools/BinWrappers/WindowsLike/Pkcs7Sign.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON_HOME%\python.exe %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
+@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256GenerateKeys.bat b/BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256GenerateKeys.bat
index df9336567c..32da349b31 100644
--- a/BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256GenerateKeys.bat
+++ b/BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256GenerateKeys.bat
@@ -1 +1 @@
-@%PYTHON_HOME%\python.exe %BASE_TOOLS_PATH%\Source\Python\Rsa2048Sha256Sign\Rsa2048Sha256GenerateKeys.py %*
+@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\Rsa2048Sha256Sign\Rsa2048Sha256GenerateKeys.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256Sign.bat b/BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256Sign.bat
index 9fbb704a6e..82e0a90d6c 100644
--- a/BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256Sign.bat
+++ b/BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256Sign.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON_HOME%\python.exe %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
+@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/TargetTool.bat b/BaseTools/BinWrappers/WindowsLike/TargetTool.bat
index 9fbb704a6e..82e0a90d6c 100644
--- a/BaseTools/BinWrappers/WindowsLike/TargetTool.bat
+++ b/BaseTools/BinWrappers/WindowsLike/TargetTool.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON_HOME%\python.exe %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
+@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/Trim.bat b/BaseTools/BinWrappers/WindowsLike/Trim.bat
index 9fbb704a6e..82e0a90d6c 100644
--- a/BaseTools/BinWrappers/WindowsLike/Trim.bat
+++ b/BaseTools/BinWrappers/WindowsLike/Trim.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON_HOME%\python.exe %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
+@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/UPT.bat b/BaseTools/BinWrappers/WindowsLike/UPT.bat
index 9fbb704a6e..82e0a90d6c 100644
--- a/BaseTools/BinWrappers/WindowsLike/UPT.bat
+++ b/BaseTools/BinWrappers/WindowsLike/UPT.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON_HOME%\python.exe %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
+@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/build.bat b/BaseTools/BinWrappers/WindowsLike/build.bat
index 9fbb704a6e..82e0a90d6c 100644
--- a/BaseTools/BinWrappers/WindowsLike/build.bat
+++ b/BaseTools/BinWrappers/WindowsLike/build.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON_HOME%\python.exe %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
+@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
diff --git a/BaseTools/Makefile b/BaseTools/Makefile
index e6932c77c0..2569ea2ff4 100644
--- a/BaseTools/Makefile
+++ b/BaseTools/Makefile
@@ -18,19 +18,19 @@
 SUBDIRS = $(BASE_TOOLS_PATH)\Source\C $(BASE_TOOLS_PATH)\Source\Python
 
 all: c
 
 c :
-  @$(PYTHON_HOME)\python.exe $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py  all $(BASE_TOOLS_PATH)\Source\C
+  @$(PYTHON) $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py  all $(BASE_TOOLS_PATH)\Source\C
 
 
 subdirs: $(SUBDIRS)
-  @$(PYTHON_HOME)\python.exe $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py  all $**
+  @$(PYTHON) $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py  all $**
 
 .PHONY: clean
 clean:
-  $(PYTHON_HOME)\python.exe $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py clean $(SUBDIRS)
+  $(PYTHON) $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py clean $(SUBDIRS)
 
 .PHONY: cleanall
 cleanall:
-  $(PYTHON_HOME)\python.exe $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py  cleanall $(SUBDIRS)
+  $(PYTHON) $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py  cleanall $(SUBDIRS)
 
diff --git a/BaseTools/Source/C/Makefile b/BaseTools/Source/C/Makefile
index 5806dcedd9..dd39661272 100644
--- a/BaseTools/Source/C/Makefile
+++ b/BaseTools/Source/C/Makefile
@@ -36,19 +36,19 @@ libs: $(LIBRARIES)
 	@echo.
 	@echo ######################
 	@echo # Build libraries
 	@echo ######################
 	@if not exist $(LIB_PATH) mkdir $(LIB_PATH)
-	@$(PYTHON_HOME)\python.exe Makefiles\NmakeSubdirs.py all $**
+	@$(PYTHON) Makefiles\NmakeSubdirs.py all $**
 
 apps: $(APPLICATIONS)
 	@echo.
 	@echo ######################
 	@echo # Build executables
 	@echo ######################
 	@if not exist $(BIN_PATH) mkdir $(BIN_PATH)
-	@$(PYTHON_HOME)\python.exe Makefiles\NmakeSubdirs.py all $**
+	@$(PYTHON) Makefiles\NmakeSubdirs.py all $**
 
 install: $(LIB_PATH) $(BIN_PATH)
 	@echo.
 	@echo ######################
 	@echo # Install to $(SYS_LIB_PATH)
@@ -58,13 +58,13 @@ install: $(LIB_PATH) $(BIN_PATH)
 	@-xcopy $(BIN_PATH)\*.exe $(SYS_BIN_PATH) /I /D /E /F /Y > NUL 2>&1
   @-xcopy $(BIN_PATH)\*.bat $(SYS_BIN_PATH) /I /D /E /F /Y > NUL 2>&1
 
 .PHONY: clean
 clean:
-  @$(PYTHON_HOME)\python.exe Makefiles\NmakeSubdirs.py clean $(LIBRARIES) $(APPLICATIONS)
+  @$(PYTHON) Makefiles\NmakeSubdirs.py clean $(LIBRARIES) $(APPLICATIONS)
 
 .PHONY: cleanall
 cleanall:
-  @$(PYTHON_HOME)\python.exe Makefiles\NmakeSubdirs.py cleanall $(LIBRARIES) $(APPLICATIONS)
+  @$(PYTHON) Makefiles\NmakeSubdirs.py cleanall $(LIBRARIES) $(APPLICATIONS)
 
 !INCLUDE Makefiles\ms.rule
 
diff --git a/BaseTools/Tests/GNUmakefile b/BaseTools/Tests/GNUmakefile
index 0c11f6aae9..d6f4e1908b 100644
--- a/BaseTools/Tests/GNUmakefile
+++ b/BaseTools/Tests/GNUmakefile
@@ -12,10 +12,10 @@
 #
 
 all: test
 
 test:
-	@if command -v python2 >/dev/null 2>&1; then python2 RunTests.py; else python RunTests.py; fi
+	@if command -v $(PYTHON) >/dev/null 2>&1; then $(PYTHON) RunTests.py; else python RunTests.py; fi
 
 clean:
 	find . -name '*.pyc' -exec rm '{}' ';'
 
diff --git a/BaseTools/toolsetup.bat b/BaseTools/toolsetup.bat
index 1cac3105c2..811b23051f 100755
--- a/BaseTools/toolsetup.bat
+++ b/BaseTools/toolsetup.bat
@@ -272,40 +272,75 @@ goto check_build_environment
   echo.
  echo !!! ERROR !!! Binary C tools are missing. They are required to be built from BaseTools Source.
   echo.
 
 :check_build_environment
-  set PYTHONHASHSEED=0
-  if defined BASETOOLS_PYTHON_SOURCE goto VisualStudioAvailable
+  set PYTHONHASHSEED=1
 
   if not defined BASE_TOOLS_PATH (
      if not exist "Source\C\Makefile" (
        if not exist "%EDK_TOOLS_PATH%\Source\C\Makefile" goto no_source_files
        set BASE_TOOLS_PATH=%EDK_TOOLS_PATH%
      ) else (
        set BASE_TOOLS_PATH=%CD%
      )
   )
 
-  if not defined PYTHON_HOME (
-    if defined PYTHONHOME (
-      set PYTHON_HOME=%PYTHONHOME%
-    ) else (
+:defined_python
+if defined PYTHON3_ENABLE (
+  if "%PYTHON3_ENABLE%" EQU "TRUE" (
+    set PYTHON=py -3
+    %PYTHON% --version >NUL 2>&1
+    if %ERRORLEVEL% NEQ 0 (
       echo.
-      echo !!! ERROR !!! Binary python tools are missing. PYTHON_HOME environment variable is not set.
-      echo PYTHON_HOME is required to build or execute the python tools.
+      echo !!! ERROR !!!  PYTHON3 is not installed or added to environment variables
       echo.
       goto end
+    ) else (
+      goto check_freezer_path
+    )
+  ) 
+)
+
+if defined PYTHON_HOME (
+  if EXIST "%PYTHON_HOME%" (
+    set PYTHON=%PYTHON_HOME%\python.exe
+    goto check_freezer_path
+    )
+  )
+if defined PYTHONHOME (
+  if EXIST "%PYTHONHOME%" (
+    set PYTHON_HOME=%PYTHONHOME%
+    set PYTHON=%PYTHON_HOME%\python.exe
+    goto check_freezer_path
     )
   )
+  
+  echo.
+  echo !!! ERROR !!! Binary python tools are missing.
+  echo PYTHON_HOME or PYTHON3_ENABLE environment variable is not set successfully.
+  echo PYTHON_HOME or PYTHON3_ENABLE is required to build or execute the python tools.
+  echo.
+  goto end
 
+:check_freezer_path
+  if defined BASETOOLS_PYTHON_SOURCE goto print_python_info
   set "PATH=%BASE_TOOLS_PATH%\BinWrappers\WindowsLike;%PATH%"
   set BASETOOLS_PYTHON_SOURCE=%BASE_TOOLS_PATH%\Source\Python
   set PYTHONPATH=%BASETOOLS_PYTHON_SOURCE%;%PYTHONPATH%
 
+:print_python_info
   echo                PATH = %PATH%
-  echo         PYTHON_HOME = %PYTHON_HOME%
+  if "%PYTHON3_ENABLE%" EQU "TRUE" (
+    echo      PYTHON3_ENABLE = %PYTHON3_ENABLE%
+    echo             PYTHON3 = %PYTHON%
+  ) else (
+    echo      PYTHON3_ENABLE = %PYTHON3_ENABLE%
+    if defined PYTHON_HOME (
+      echo         PYTHON_HOME = %PYTHON_HOME%
+    )
+  )
   echo          PYTHONPATH = %PYTHONPATH%
   echo.
 
 :VisualStudioAvailable
   if not defined FORCE_REBUILD (
diff --git a/edksetup.sh b/edksetup.sh
index 3dee8c5d61..06f95f4b9c 100755
--- a/edksetup.sh
+++ b/edksetup.sh
@@ -75,11 +75,11 @@ function SetWorkspace()
 
   #
   # Set $WORKSPACE
   #
   export WORKSPACE=`pwd`
-  export PYTHONHASHSEED=0
+  export PYTHONHASHSEED=1
   return 0
 }
 
 function SetupEnv()
 {
@@ -109,14 +109,63 @@ function SetupEnv()
     echo the EDK2 BuildEnv script.
     return 1
   fi
 }
 
+function SetupPython()
+{    
+  if [ $PYTHON3_ENABLE ] && [ $PYTHON3_ENABLE == TRUE ]
+  then
+    for python in $(which python3)
+    do
+      python=$(echo $python | grep "[[:digit:]]$" || true)
+      python_version=${python##*python}
+      if [ -z "${python_version}" ];then
+        continue
+      fi
+      if [ -z $origin_version ];then
+        origin_version=$python_version
+        export PYTHON=$python
+        continue
+      fi
+      ret=`echo "$origin_version < $python_version" |bc`
+      if [ "$ret" -eq 1 ]; then
+        origin_version=$python_version
+        export PYTHON=$python
+      fi
+    done
+  fi
+  
+  if [ -z $PYTHON3_ENABLE ] || [ $PYTHON3_ENABLE != TRUE ]
+  then
+    for python in $(which python2)
+    do
+      python=$(echo $python | grep "[[:digit:]]$" || true)
+      python_version=${python##*python}
+      if [ -z "${python_version}" ];then
+        continue
+      fi
+      if [ -z $origin_version ] || [ $origin_version -ge 3 ]
+      then
+        origin_version=$python_version
+        export PYTHON=$python
+        continue
+      fi
+      ret=`echo "$origin_version < $python_version" |bc`
+      if [ "$ret" -eq 1 ]; then
+        origin_version=$python_version
+        export PYTHON=$python
+      fi
+    done
+  fi
+}
+
 function SourceEnv()
 {
   SetWorkspace &&
   SetupEnv
+  SetupPython
 }
 
 I=$#
 while [ $I -gt 0 ]
 do
-- 
2.20.1.windows.1



^ permalink raw reply related	[flat|nested] 50+ messages in thread
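
The PosixLike wrappers changed above all follow the same pattern: prefer the interpreter named by ${PYTHON} when it resolves on PATH, otherwise fall back to plain python. A minimal Python sketch of that selection logic (the function name and the fallback string are illustrative, not part of the patch):

```python
import shutil

def choose_python(preferred):
    """Mirror the wrapper scripts: use the preferred interpreter when it
    resolves on PATH (or is an existing executable path), otherwise fall
    back to plain 'python'."""
    if preferred and shutil.which(preferred):
        return preferred
    return "python"

# An interpreter name that does not exist falls back to "python".
print(choose_python("no-such-interpreter-xyz"))  # python
```

Like the shell version, this silently degrades to whatever `python` means on the host, which is why the later patches in this series pin the choice through an explicit environment variable.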

* [Patch 25/33] BaseTools:Update build tool to print python version information
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (23 preceding siblings ...)
  2019-01-29  2:06 ` [Patch 24/33] BaseTools: Update windows and linux run scripts file to use Python3 Feng, Bob C
@ 2019-01-29  2:06 ` Feng, Bob C
  2019-01-29  2:06 ` [Patch 26/33] BaseTools:Linux Python highest version check Feng, Bob C
                   ` (8 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:06 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao

From: Zhijux Fan <zhijux.fan@intel.com>

Print PYTHON3_ENABLE and PYTHON_COMMAND in the build tool.

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/build/build.py | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/BaseTools/Source/Python/build/build.py b/BaseTools/Source/Python/build/build.py
index 139a1dfe29..b5b969e876 100644
--- a/BaseTools/Source/Python/build/build.py
+++ b/BaseTools/Source/Python/build/build.py
@@ -789,10 +789,16 @@ class Build():
         EdkLogger.quiet("%-16s = %s" % ("EDK_TOOLS_PATH", os.environ["EDK_TOOLS_PATH"]))
         if "EDK_TOOLS_BIN" in os.environ:
             # Print the same path style with WORKSPACE env.
             EdkLogger.quiet("%-16s = %s" % ("EDK_TOOLS_BIN", os.path.normcase(os.path.normpath(os.environ["EDK_TOOLS_BIN"]))))
         EdkLogger.quiet("%-16s = %s" % ("CONF_PATH", GlobalData.gConfDirectory))
+        if "PYTHON3_ENABLE" in os.environ:
+            PYTHON3_ENABLE = os.environ["PYTHON3_ENABLE"]
+            if PYTHON3_ENABLE != "TRUE":
+                PYTHON3_ENABLE = "FALSE"
+            EdkLogger.quiet("%-16s = %s" % ("PYTHON3_ENABLE", PYTHON3_ENABLE))
+        EdkLogger.quiet("%-16s = %s" % ("PYTHON", os.environ["PYTHON"]))
         self.InitPreBuild()
         self.InitPostBuild()
         if self.Prebuild:
             EdkLogger.quiet("%-16s = %s" % ("PREBUILD", self.Prebuild))
         if self.Postbuild:
-- 
2.20.1.windows.1



^ permalink raw reply related	[flat|nested] 50+ messages in thread
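
The build.py hunk above prints PYTHON3_ENABLE only when the variable is set, and normalizes any value other than "TRUE" to "FALSE" before printing. A standalone sketch of that normalization (the helper name is invented for illustration; the patch does this inline against os.environ):

```python
import os

def python3_enable_setting(environ=None):
    """Mirror the build.py hunk: report nothing when PYTHON3_ENABLE is
    unset, "TRUE" when it is exactly "TRUE", and "FALSE" otherwise."""
    if environ is None:
        environ = os.environ
    if "PYTHON3_ENABLE" not in environ:
        return None  # variable unset: build.py prints no line at all
    return "TRUE" if environ["PYTHON3_ENABLE"] == "TRUE" else "FALSE"

print(python3_enable_setting({"PYTHON3_ENABLE": "TRUE"}))  # TRUE
print(python3_enable_setting({"PYTHON3_ENABLE": "yes"}))   # FALSE
```

Treating every non-"TRUE" value as FALSE keeps the printed summary unambiguous even when the variable holds an unexpected string.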

* [Patch 26/33] BaseTools:Linux Python highest version check.
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (24 preceding siblings ...)
  2019-01-29  2:06 ` [Patch 25/33] BaseTools:Update build tool to print python version information Feng, Bob C
@ 2019-01-29  2:06 ` Feng, Bob C
  2019-01-29  2:06 ` [Patch 27/33] BaseTools: Update PYTHON env to PYTHON_COMMAND Feng, Bob C
                   ` (7 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:06 UTC (permalink / raw)
  To: edk2-devel; +Cc: Zhijux Fan, Bob Feng, Liming Gao

From: Zhijux Fan <zhijux.fan@intel.com>

Check for the highest installed Python version on Linux.
The path of the chosen Python interpreter is assigned to PYTHON_COMMAND.

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 edksetup.sh | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/edksetup.sh b/edksetup.sh
index 06f95f4b9c..bfa54ddf70 100755
--- a/edksetup.sh
+++ b/edksetup.sh
@@ -113,15 +113,18 @@ function SetupEnv()
 
 function SetupPython()
 {    
   if [ $PYTHON3_ENABLE ] && [ $PYTHON3_ENABLE == TRUE ]
   then
-    for python in $(which python3)
+    if [ $origin_version ];then
+      origin_version=
+    fi
+    for python in $(whereis python3)
     do
       python=$(echo $python | grep "[[:digit:]]$" || true)
       python_version=${python##*python}
-      if [ -z "${python_version}" ];then
+      if [ -z "${python_version}" ] || (! command -v $python >/dev/null 2>&1);then
         continue
       fi
       if [ -z $origin_version ];then
         origin_version=$python_version
         export PYTHON=$python
@@ -135,18 +138,21 @@ function SetupPython()
     done
   fi
   
   if [ -z $PYTHON3_ENABLE ] || [ $PYTHON3_ENABLE != TRUE ]
   then
-    for python in $(which python2)
+    if [ $origin_version ];then
+      origin_version=
+    fi
+    for python in $(whereis python2)
     do
       python=$(echo $python | grep "[[:digit:]]$" || true)
       python_version=${python##*python}
-      if [ -z "${python_version}" ];then
+      if [ -z "${python_version}" ] || (! command -v $python >/dev/null 2>&1);then
         continue
       fi
-      if [ -z $origin_version ] || [ $origin_version -ge 3 ]
+      if [ -z $origin_version ]
       then
         origin_version=$python_version
         export PYTHON=$python
         continue
       fi
-- 
2.20.1.windows.1



^ permalink raw reply related	[flat|nested] 50+ messages in thread
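
The SetupPython loop above switches from `which` to `whereis` so it sees every installed interpreter, filters out names that do not end in a digit, and keeps the highest version using a `bc` string comparison. The same selection can be sketched in Python with version tuples instead of `bc` (the candidate paths are hypothetical):

```python
import re

def pick_highest(candidates):
    """Emulate the edksetup.sh loop: keep candidate paths whose basename
    ends in a version number (e.g. python3, python3.7), and return the
    one with the highest version."""
    best = None
    best_ver = None
    for name in candidates:
        m = re.search(r'python(\d+(?:\.\d+)?)$', name)
        if not m:
            continue  # skip plain 'python' and non-interpreter files
        ver = tuple(int(p) for p in m.group(1).split('.'))
        if best_ver is None or ver > best_ver:
            best_ver, best = ver, name
    return best

print(pick_highest(["/usr/bin/python3", "/usr/bin/python3.7",
                    "/usr/bin/python3.6"]))  # /usr/bin/python3.7
```

One design note: tuple comparison also orders 3.10 above 3.9, whereas the numeric `bc` comparison in the shell version would rank 3.10 below 3.9.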

* [Patch 27/33] BaseTools: Update PYTHON env to PYTHON_COMMAND
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (25 preceding siblings ...)
  2019-01-29  2:06 ` [Patch 26/33] BaseTools:Linux Python highest version check Feng, Bob C
@ 2019-01-29  2:06 ` Feng, Bob C
  2019-01-29  2:06 ` [Patch 28/33] BaseTools:Fixed Rsa issue and a set define issue Feng, Bob C
                   ` (6 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:06 UTC (permalink / raw)
  To: edk2-devel; +Cc: Liming Gao

From: Liming Gao <liming.gao@intel.com>

Update PYTHON env to PYTHON_COMMAND.

Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Liming Gao <liming.gao@intel.com>
---
 BaseTools/Bin/CYGWIN_NT-5.1-i686/Ecc                            |  6 +++---
 BaseTools/Bin/CYGWIN_NT-5.1-i686/GenDepex                       |  6 +++---
 BaseTools/Bin/CYGWIN_NT-5.1-i686/GenFds                         |  6 +++---
 BaseTools/Bin/CYGWIN_NT-5.1-i686/TargetTool                     |  6 +++---
 BaseTools/Bin/CYGWIN_NT-5.1-i686/Trim                           |  6 +++---
 BaseTools/Bin/CYGWIN_NT-5.1-i686/build                          |  6 +++---
 BaseTools/BinWrappers/PosixLike/BPDG                            |  6 +++---
 BaseTools/BinWrappers/PosixLike/Ecc                             |  6 +++---
 BaseTools/BinWrappers/PosixLike/GenDepex                        |  6 +++---
 BaseTools/BinWrappers/PosixLike/GenFds                          |  6 +++---
 BaseTools/BinWrappers/PosixLike/GenPatchPcdTable                |  6 +++---
 BaseTools/BinWrappers/PosixLike/GenerateCapsule                 |  6 +++---
 BaseTools/BinWrappers/PosixLike/PatchPcdValue                   |  6 +++---
 BaseTools/BinWrappers/PosixLike/Pkcs7Sign                       |  6 +++---
 BaseTools/BinWrappers/PosixLike/Rsa2048Sha256GenerateKeys       |  6 +++---
 BaseTools/BinWrappers/PosixLike/Rsa2048Sha256Sign               |  6 +++---
 BaseTools/BinWrappers/PosixLike/TargetTool                      |  6 +++---
 BaseTools/BinWrappers/PosixLike/Trim                            |  6 +++---
 BaseTools/BinWrappers/PosixLike/UPT                             |  6 +++---
 BaseTools/BinWrappers/PosixLike/build                           |  6 +++---
 BaseTools/BinWrappers/WindowsLike/BPDG.bat                      |  2 +-
 BaseTools/BinWrappers/WindowsLike/Ecc.bat                       |  2 +-
 BaseTools/BinWrappers/WindowsLike/GenDepex.bat                  |  2 +-
 BaseTools/BinWrappers/WindowsLike/GenFds.bat                    |  2 +-
 BaseTools/BinWrappers/WindowsLike/GenPatchPcdTable.bat          |  2 +-
 BaseTools/BinWrappers/WindowsLike/GenerateCapsule.bat           |  2 +-
 BaseTools/BinWrappers/WindowsLike/PatchPcdValue.bat             |  2 +-
 BaseTools/BinWrappers/WindowsLike/Pkcs7Sign.bat                 |  2 +-
 BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256GenerateKeys.bat |  2 +-
 BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256Sign.bat         |  2 +-
 BaseTools/BinWrappers/WindowsLike/TargetTool.bat                |  2 +-
 BaseTools/BinWrappers/WindowsLike/Trim.bat                      |  2 +-
 BaseTools/BinWrappers/WindowsLike/UPT.bat                       |  2 +-
 BaseTools/BinWrappers/WindowsLike/build.bat                     |  2 +-
 BaseTools/Makefile                                              | 12 ++++++++----
 BaseTools/Source/C/Makefile                                     | 12 ++++++++----
 BaseTools/Source/Python/build/build.py                          |  3 ++-
 BaseTools/Tests/GNUmakefile                                     |  2 +-
 BaseTools/Tests/PythonTest.py                                   | 15 +++++++++++++++
 BaseTools/toolsetup.bat                                         | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------
 edksetup.sh                                                     | 42 ++++++++++++++++++++++++++++++------------
 41 files changed, 194 insertions(+), 120 deletions(-)

diff --git a/BaseTools/Bin/CYGWIN_NT-5.1-i686/Ecc b/BaseTools/Bin/CYGWIN_NT-5.1-i686/Ecc
index 8532fe510d..1ba451cf5e 100755
--- a/BaseTools/Bin/CYGWIN_NT-5.1-i686/Ecc
+++ b/BaseTools/Bin/CYGWIN_NT-5.1-i686/Ecc
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/Bin/CYGWIN_NT-5.1-i686/GenDepex b/BaseTools/Bin/CYGWIN_NT-5.1-i686/GenDepex
index 8532fe510d..1ba451cf5e 100755
--- a/BaseTools/Bin/CYGWIN_NT-5.1-i686/GenDepex
+++ b/BaseTools/Bin/CYGWIN_NT-5.1-i686/GenDepex
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/Bin/CYGWIN_NT-5.1-i686/GenFds b/BaseTools/Bin/CYGWIN_NT-5.1-i686/GenFds
index 8532fe510d..1ba451cf5e 100755
--- a/BaseTools/Bin/CYGWIN_NT-5.1-i686/GenFds
+++ b/BaseTools/Bin/CYGWIN_NT-5.1-i686/GenFds
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/Bin/CYGWIN_NT-5.1-i686/TargetTool b/BaseTools/Bin/CYGWIN_NT-5.1-i686/TargetTool
index 8532fe510d..1ba451cf5e 100755
--- a/BaseTools/Bin/CYGWIN_NT-5.1-i686/TargetTool
+++ b/BaseTools/Bin/CYGWIN_NT-5.1-i686/TargetTool
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/Bin/CYGWIN_NT-5.1-i686/Trim b/BaseTools/Bin/CYGWIN_NT-5.1-i686/Trim
index 54e09c039b..b53b79bba4 100755
--- a/BaseTools/Bin/CYGWIN_NT-5.1-i686/Trim
+++ b/BaseTools/Bin/CYGWIN_NT-5.1-i686/Trim
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 exe=$(basename "$full_cmd")
diff --git a/BaseTools/Bin/CYGWIN_NT-5.1-i686/build b/BaseTools/Bin/CYGWIN_NT-5.1-i686/build
index 8532fe510d..1ba451cf5e 100755
--- a/BaseTools/Bin/CYGWIN_NT-5.1-i686/build
+++ b/BaseTools/Bin/CYGWIN_NT-5.1-i686/build
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/BPDG b/BaseTools/BinWrappers/PosixLike/BPDG
index e9f570b52c..c894384908 100755
--- a/BaseTools/BinWrappers/PosixLike/BPDG
+++ b/BaseTools/BinWrappers/PosixLike/BPDG
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/Ecc b/BaseTools/BinWrappers/PosixLike/Ecc
index ed4b7cd384..15edf52106 100755
--- a/BaseTools/BinWrappers/PosixLike/Ecc
+++ b/BaseTools/BinWrappers/PosixLike/Ecc
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/GenDepex b/BaseTools/BinWrappers/PosixLike/GenDepex
index d99e54f222..183cf58224 100755
--- a/BaseTools/BinWrappers/PosixLike/GenDepex
+++ b/BaseTools/BinWrappers/PosixLike/GenDepex
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/GenFds b/BaseTools/BinWrappers/PosixLike/GenFds
index e9f570b52c..c894384908 100755
--- a/BaseTools/BinWrappers/PosixLike/GenFds
+++ b/BaseTools/BinWrappers/PosixLike/GenFds
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/GenPatchPcdTable b/BaseTools/BinWrappers/PosixLike/GenPatchPcdTable
index d8b8b8f145..f3770eed42 100755
--- a/BaseTools/BinWrappers/PosixLike/GenPatchPcdTable
+++ b/BaseTools/BinWrappers/PosixLike/GenPatchPcdTable
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/GenerateCapsule b/BaseTools/BinWrappers/PosixLike/GenerateCapsule
index 91bbd22738..023048c61d 100755
--- a/BaseTools/BinWrappers/PosixLike/GenerateCapsule
+++ b/BaseTools/BinWrappers/PosixLike/GenerateCapsule
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/PatchPcdValue b/BaseTools/BinWrappers/PosixLike/PatchPcdValue
index d8b8b8f145..f3770eed42 100755
--- a/BaseTools/BinWrappers/PosixLike/PatchPcdValue
+++ b/BaseTools/BinWrappers/PosixLike/PatchPcdValue
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/Pkcs7Sign b/BaseTools/BinWrappers/PosixLike/Pkcs7Sign
index d8b8b8f145..f3770eed42 100755
--- a/BaseTools/BinWrappers/PosixLike/Pkcs7Sign
+++ b/BaseTools/BinWrappers/PosixLike/Pkcs7Sign
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/Rsa2048Sha256GenerateKeys b/BaseTools/BinWrappers/PosixLike/Rsa2048Sha256GenerateKeys
index b42a126840..ea71c7c61a 100755
--- a/BaseTools/BinWrappers/PosixLike/Rsa2048Sha256GenerateKeys
+++ b/BaseTools/BinWrappers/PosixLike/Rsa2048Sha256GenerateKeys
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/Rsa2048Sha256Sign b/BaseTools/BinWrappers/PosixLike/Rsa2048Sha256Sign
index d8b8b8f145..f3770eed42 100755
--- a/BaseTools/BinWrappers/PosixLike/Rsa2048Sha256Sign
+++ b/BaseTools/BinWrappers/PosixLike/Rsa2048Sha256Sign
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/TargetTool b/BaseTools/BinWrappers/PosixLike/TargetTool
index d8b8b8f145..f3770eed42 100755
--- a/BaseTools/BinWrappers/PosixLike/TargetTool
+++ b/BaseTools/BinWrappers/PosixLike/TargetTool
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/Trim b/BaseTools/BinWrappers/PosixLike/Trim
index d64b834006..1dd28e9662 100755
--- a/BaseTools/BinWrappers/PosixLike/Trim
+++ b/BaseTools/BinWrappers/PosixLike/Trim
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 exe=$(basename "$full_cmd")
diff --git a/BaseTools/BinWrappers/PosixLike/UPT b/BaseTools/BinWrappers/PosixLike/UPT
index d8b8b8f145..f3770eed42 100755
--- a/BaseTools/BinWrappers/PosixLike/UPT
+++ b/BaseTools/BinWrappers/PosixLike/UPT
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/PosixLike/build b/BaseTools/BinWrappers/PosixLike/build
index d8b8b8f145..f3770eed42 100755
--- a/BaseTools/BinWrappers/PosixLike/build
+++ b/BaseTools/BinWrappers/PosixLike/build
@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 #python `dirname $0`/RunToolFromSource.py `basename $0` $*
 
-# If a ${PYTHON} command is available, use it in preference to python
-if command -v ${PYTHON} >/dev/null 2>&1; then
-    python_exe=${PYTHON}
+# If a ${PYTHON_COMMAND} command is available, use it in preference to python
+if command -v ${PYTHON_COMMAND} >/dev/null 2>&1; then
+    python_exe=${PYTHON_COMMAND}
 fi
 
 full_cmd=${BASH_SOURCE:-$0} # see http://mywiki.wooledge.org/BashFAQ/028 for a discussion of why $0 is not a good choice here
 dir=$(dirname "$full_cmd")
 cmd=${full_cmd##*/}
diff --git a/BaseTools/BinWrappers/WindowsLike/BPDG.bat b/BaseTools/BinWrappers/WindowsLike/BPDG.bat
index 4a43e5353e..f43dba81f1 100644
--- a/BaseTools/BinWrappers/WindowsLike/BPDG.bat
+++ b/BaseTools/BinWrappers/WindowsLike/BPDG.bat
@@ -1,4 +1,4 @@
 @setlocal
 @set ToolName=%~n0%
 @set PYTHONPATH=%PYTHONPATH%;%BASE_TOOLS_PATH%\Source\Python
-@%PYTHON% -m %ToolName%.%ToolName% %*
+@%PYTHON_COMMAND% -m %ToolName%.%ToolName% %*
diff --git a/BaseTools/BinWrappers/WindowsLike/Ecc.bat b/BaseTools/BinWrappers/WindowsLike/Ecc.bat
index e63ef50135..ba1a15b3b8 100644
--- a/BaseTools/BinWrappers/WindowsLike/Ecc.bat
+++ b/BaseTools/BinWrappers/WindowsLike/Ecc.bat
@@ -1,4 +1,4 @@
 @setlocal
 @set ToolName=%~n0%
 @set PYTHONPATH=%PYTHONPATH%;%BASE_TOOLS_PATH%\Source\Python
-@%PYTHON% -m %ToolName%.EccMain %*
+@%PYTHON_COMMAND% -m %ToolName%.EccMain %*
diff --git a/BaseTools/BinWrappers/WindowsLike/GenDepex.bat b/BaseTools/BinWrappers/WindowsLike/GenDepex.bat
index 6c7250f008..f8f3eefacf 100644
--- a/BaseTools/BinWrappers/WindowsLike/GenDepex.bat
+++ b/BaseTools/BinWrappers/WindowsLike/GenDepex.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\AutoGen\%ToolName%.py %*
+@%PYTHON_COMMAND% %BASE_TOOLS_PATH%\Source\Python\AutoGen\%ToolName%.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/GenFds.bat b/BaseTools/BinWrappers/WindowsLike/GenFds.bat
index 4a43e5353e..f43dba81f1 100644
--- a/BaseTools/BinWrappers/WindowsLike/GenFds.bat
+++ b/BaseTools/BinWrappers/WindowsLike/GenFds.bat
@@ -1,4 +1,4 @@
 @setlocal
 @set ToolName=%~n0%
 @set PYTHONPATH=%PYTHONPATH%;%BASE_TOOLS_PATH%\Source\Python
-@%PYTHON% -m %ToolName%.%ToolName% %*
+@%PYTHON_COMMAND% -m %ToolName%.%ToolName% %*
diff --git a/BaseTools/BinWrappers/WindowsLike/GenPatchPcdTable.bat b/BaseTools/BinWrappers/WindowsLike/GenPatchPcdTable.bat
index 82e0a90d6c..9616cd893b 100644
--- a/BaseTools/BinWrappers/WindowsLike/GenPatchPcdTable.bat
+++ b/BaseTools/BinWrappers/WindowsLike/GenPatchPcdTable.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
+@%PYTHON_COMMAND% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/GenerateCapsule.bat b/BaseTools/BinWrappers/WindowsLike/GenerateCapsule.bat
index 1ab7d33f98..11b4a48aec 100644
--- a/BaseTools/BinWrappers/WindowsLike/GenerateCapsule.bat
+++ b/BaseTools/BinWrappers/WindowsLike/GenerateCapsule.bat
@@ -1 +1 @@
-@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\Capsule\GenerateCapsule.py %*
+@%PYTHON_COMMAND% %BASE_TOOLS_PATH%\Source\Python\Capsule\GenerateCapsule.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/PatchPcdValue.bat b/BaseTools/BinWrappers/WindowsLike/PatchPcdValue.bat
index 82e0a90d6c..9616cd893b 100644
--- a/BaseTools/BinWrappers/WindowsLike/PatchPcdValue.bat
+++ b/BaseTools/BinWrappers/WindowsLike/PatchPcdValue.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
+@%PYTHON_COMMAND% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/Pkcs7Sign.bat b/BaseTools/BinWrappers/WindowsLike/Pkcs7Sign.bat
index 82e0a90d6c..9616cd893b 100644
--- a/BaseTools/BinWrappers/WindowsLike/Pkcs7Sign.bat
+++ b/BaseTools/BinWrappers/WindowsLike/Pkcs7Sign.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
+@%PYTHON_COMMAND% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256GenerateKeys.bat b/BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256GenerateKeys.bat
index 32da349b31..6d4443b608 100644
--- a/BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256GenerateKeys.bat
+++ b/BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256GenerateKeys.bat
@@ -1 +1 @@
-@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\Rsa2048Sha256Sign\Rsa2048Sha256GenerateKeys.py %*
+@%PYTHON_COMMAND% %BASE_TOOLS_PATH%\Source\Python\Rsa2048Sha256Sign\Rsa2048Sha256GenerateKeys.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256Sign.bat b/BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256Sign.bat
index 82e0a90d6c..9616cd893b 100644
--- a/BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256Sign.bat
+++ b/BaseTools/BinWrappers/WindowsLike/Rsa2048Sha256Sign.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
+@%PYTHON_COMMAND% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/TargetTool.bat b/BaseTools/BinWrappers/WindowsLike/TargetTool.bat
index 82e0a90d6c..9616cd893b 100644
--- a/BaseTools/BinWrappers/WindowsLike/TargetTool.bat
+++ b/BaseTools/BinWrappers/WindowsLike/TargetTool.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
+@%PYTHON_COMMAND% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/Trim.bat b/BaseTools/BinWrappers/WindowsLike/Trim.bat
index 82e0a90d6c..9616cd893b 100644
--- a/BaseTools/BinWrappers/WindowsLike/Trim.bat
+++ b/BaseTools/BinWrappers/WindowsLike/Trim.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
+@%PYTHON_COMMAND% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/UPT.bat b/BaseTools/BinWrappers/WindowsLike/UPT.bat
index 82e0a90d6c..9616cd893b 100644
--- a/BaseTools/BinWrappers/WindowsLike/UPT.bat
+++ b/BaseTools/BinWrappers/WindowsLike/UPT.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
+@%PYTHON_COMMAND% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
diff --git a/BaseTools/BinWrappers/WindowsLike/build.bat b/BaseTools/BinWrappers/WindowsLike/build.bat
index 82e0a90d6c..9616cd893b 100644
--- a/BaseTools/BinWrappers/WindowsLike/build.bat
+++ b/BaseTools/BinWrappers/WindowsLike/build.bat
@@ -1,3 +1,3 @@
 @setlocal
 @set ToolName=%~n0%
-@%PYTHON% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
+@%PYTHON_COMMAND% %BASE_TOOLS_PATH%\Source\Python\%ToolName%\%ToolName%.py %*
diff --git a/BaseTools/Makefile b/BaseTools/Makefile
index 2569ea2ff4..de98e0b617 100644
--- a/BaseTools/Makefile
+++ b/BaseTools/Makefile
@@ -18,19 +18,23 @@
 SUBDIRS = $(BASE_TOOLS_PATH)\Source\C $(BASE_TOOLS_PATH)\Source\Python
 
 all: c
 
 c :
-  @$(PYTHON) $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py  all $(BASE_TOOLS_PATH)\Source\C
+  @if defined PYTHON_COMMAND $(PYTHON_COMMAND) $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py  all $(BASE_TOOLS_PATH)\Source\C
+  @if not defined PYTHON_COMMAND $(PYTHON_HOME)\python.exe $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py  all $(BASE_TOOLS_PATH)\Source\C
 
 
 subdirs: $(SUBDIRS)
-  @$(PYTHON) $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py  all $**
+  @if defined PYTHON_COMMAND $(PYTHON_COMMAND) $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py  all $**
+  @if not defined PYTHON_COMMAND $(PYTHON_HOME)\python.exe $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py  all $**
 
 .PHONY: clean
 clean:
-  $(PYTHON) $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py clean $(SUBDIRS)
+  @if defined PYTHON_COMMAND $(PYTHON_COMMAND) $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py clean $(SUBDIRS)
+  @if not defined PYTHON_COMMAND $(PYTHON_HOME)\python.exe $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py clean $(SUBDIRS)
 
 .PHONY: cleanall
 cleanall:
-  $(PYTHON) $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py  cleanall $(SUBDIRS)
+  @if defined PYTHON_COMMAND $(PYTHON_COMMAND) $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py  cleanall $(SUBDIRS)
+  @if not defined PYTHON_COMMAND $(PYTHON_HOME)\python.exe $(BASE_TOOLS_PATH)\Source\C\Makefiles\NmakeSubdirs.py  cleanall $(SUBDIRS)
 
diff --git a/BaseTools/Source/C/Makefile b/BaseTools/Source/C/Makefile
index dd39661272..08f0081212 100644
--- a/BaseTools/Source/C/Makefile
+++ b/BaseTools/Source/C/Makefile
@@ -36,19 +36,21 @@ libs: $(LIBRARIES)
 	@echo.
 	@echo ######################
 	@echo # Build libraries
 	@echo ######################
 	@if not exist $(LIB_PATH) mkdir $(LIB_PATH)
-	@$(PYTHON) Makefiles\NmakeSubdirs.py all $**
+  @if defined PYTHON_COMMAND $(PYTHON_COMMAND) Makefiles\NmakeSubdirs.py all $**
+  @if not defined PYTHON_COMMAND $(PYTHON_HOME)\python.exe Makefiles\NmakeSubdirs.py all $**
 
 apps: $(APPLICATIONS)
 	@echo.
 	@echo ######################
 	@echo # Build executables
 	@echo ######################
 	@if not exist $(BIN_PATH) mkdir $(BIN_PATH)
-	@$(PYTHON) Makefiles\NmakeSubdirs.py all $**
+  @if defined PYTHON_COMMAND $(PYTHON_COMMAND) Makefiles\NmakeSubdirs.py all $**
+  @if not defined PYTHON_COMMAND $(PYTHON_HOME)\python.exe Makefiles\NmakeSubdirs.py all $**
 
 install: $(LIB_PATH) $(BIN_PATH)
 	@echo.
 	@echo ######################
 	@echo # Install to $(SYS_LIB_PATH)
@@ -58,13 +60,15 @@ install: $(LIB_PATH) $(BIN_PATH)
 	@-xcopy $(BIN_PATH)\*.exe $(SYS_BIN_PATH) /I /D /E /F /Y > NUL 2>&1
   @-xcopy $(BIN_PATH)\*.bat $(SYS_BIN_PATH) /I /D /E /F /Y > NUL 2>&1
 
 .PHONY: clean
 clean:
-  @$(PYTHON) Makefiles\NmakeSubdirs.py clean $(LIBRARIES) $(APPLICATIONS)
+  @if defined PYTHON_COMMAND $(PYTHON_COMMAND) Makefiles\NmakeSubdirs.py clean $(LIBRARIES) $(APPLICATIONS)
+  @if not defined PYTHON_COMMAND $(PYTHON_HOME)\python.exe Makefiles\NmakeSubdirs.py clean $(LIBRARIES) $(APPLICATIONS)
 
 .PHONY: cleanall
 cleanall:
-  @$(PYTHON) Makefiles\NmakeSubdirs.py cleanall $(LIBRARIES) $(APPLICATIONS)
+  @if defined PYTHON_COMMAND $(PYTHON_COMMAND) Makefiles\NmakeSubdirs.py cleanall $(LIBRARIES) $(APPLICATIONS)
+  @if not defined PYTHON_COMMAND $(PYTHON_HOME)\python.exe Makefiles\NmakeSubdirs.py cleanall $(LIBRARIES) $(APPLICATIONS)
 
 !INCLUDE Makefiles\ms.rule
 
diff --git a/BaseTools/Source/Python/build/build.py b/BaseTools/Source/Python/build/build.py
index b5b969e876..c2b22cca70 100644
--- a/BaseTools/Source/Python/build/build.py
+++ b/BaseTools/Source/Python/build/build.py
@@ -794,11 +794,12 @@ class Build():
         if "PYTHON3_ENABLE" in os.environ:
             PYTHON3_ENABLE = os.environ["PYTHON3_ENABLE"]
             if PYTHON3_ENABLE != "TRUE":
                 PYTHON3_ENABLE = "FALSE"
             EdkLogger.quiet("%-16s = %s" % ("PYTHON3_ENABLE", PYTHON3_ENABLE))
-        EdkLogger.quiet("%-16s = %s" % ("PYTHON", os.environ["PYTHON"]))
+        if "PYTHON_COMMAND" in os.environ:
+            EdkLogger.quiet("%-16s = %s" % ("PYTHON_COMMAND", os.environ["PYTHON_COMMAND"]))
         self.InitPreBuild()
         self.InitPostBuild()
         if self.Prebuild:
             EdkLogger.quiet("%-16s = %s" % ("PREBUILD", self.Prebuild))
         if self.Postbuild:
diff --git a/BaseTools/Tests/GNUmakefile b/BaseTools/Tests/GNUmakefile
index d6f4e1908b..3eb52b3f83 100644
--- a/BaseTools/Tests/GNUmakefile
+++ b/BaseTools/Tests/GNUmakefile
@@ -12,10 +12,10 @@
 #
 
 all: test
 
 test:
-	@if command -v $(PYTHON) >/dev/null 2>&1; then $(PYTHON) RunTests.py; else python RunTests.py; fi
+	@if command -v $(PYTHON_COMMAND) >/dev/null 2>&1; then $(PYTHON_COMMAND) RunTests.py; else python RunTests.py; fi
 
 clean:
 	find . -name '*.pyc' -exec rm '{}' ';'
 
diff --git a/BaseTools/Tests/PythonTest.py b/BaseTools/Tests/PythonTest.py
new file mode 100644
index 0000000000..cd49dc5b5d
--- /dev/null
+++ b/BaseTools/Tests/PythonTest.py
@@ -0,0 +1,15 @@
+## @file
+# Test whether PYTHON_COMMAND is available
+#
+# Copyright (c) 2013 - 2018, Intel Corporation. All rights reserved.<BR>
+# This program and the accompanying materials
+# are licensed and made available under the terms and conditions of the BSD License
+# which accompanies this distribution.  The full text of the license may be found at
+# http://opensource.org/licenses/bsd-license.php
+#
+# THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+# WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+#
+
+if __name__ == '__main__':
+    print('TRUE')
diff --git a/BaseTools/toolsetup.bat b/BaseTools/toolsetup.bat
index 811b23051f..6fe63e4baa 100755
--- a/BaseTools/toolsetup.bat
+++ b/BaseTools/toolsetup.bat
@@ -284,62 +284,94 @@ goto check_build_environment
        set BASE_TOOLS_PATH=%CD%
      )
   )
 
 :defined_python
+if defined PYTHON_COMMAND if not defined PYTHON3_ENABLE (
+  goto check_python_available
+)
 if defined PYTHON3_ENABLE (
   if "%PYTHON3_ENABLE%" EQU "TRUE" (
-    set PYTHON=py -3
-    %PYTHON% --version >NUL 2>&1
-    if %ERRORLEVEL% NEQ 0 (
+    set PYTHON_COMMAND=py -3
+    goto check_python_available
+  ) else (
+    goto check_python2
+  )
+)
+if not defined PYTHON_COMMAND if not defined PYTHON3_ENABLE (
+  set PYTHON_COMMAND=py -3
+  py -3 %BASE_TOOLS_PATH%\Tests\PythonTest.py >PythonCheck.txt 2>&1
+  setlocal enabledelayedexpansion
+  set /p PythonCheck=<"PythonCheck.txt"
+  del PythonCheck.txt
+  if "!PythonCheck!" NEQ "TRUE" (
+    if not defined PYTHON_HOME if not defined PYTHONHOME (
+      endlocal
+      set PYTHON_COMMAND=
       echo.
-      echo !!! ERROR !!!  PYTHON3 is not installed or added to environment variables
+      echo !!! ERROR !!! Binary python tools are missing.
+      echo PYTHON_COMMAND, PYTHON3_ENABLE or PYTHON_HOME
+      echo environment variable is not set successfully.
+      echo One of them is required to build or execute the python tools.
       echo.
       goto end
     ) else (
-      goto check_freezer_path
+      goto check_python2
     )
-  ) 
+  ) else (
+    goto check_freezer_path
+  )
 )
 
+:check_python2
+endlocal
 if defined PYTHON_HOME (
   if EXIST "%PYTHON_HOME%" (
-    set PYTHON=%PYTHON_HOME%\python.exe
-    goto check_freezer_path
-    )
+    set PYTHON_COMMAND=%PYTHON_HOME%\python.exe
+    goto check_python_available
   )
+)
 if defined PYTHONHOME (
   if EXIST "%PYTHONHOME%" (
     set PYTHON_HOME=%PYTHONHOME%
-    set PYTHON=%PYTHON_HOME%\python.exe
+    set PYTHON_COMMAND=%PYTHON_HOME%\python.exe
+    goto check_python_available
+  )
+)
+echo.
+echo !!! ERROR !!!  PYTHON_HOME is not defined or the path it points to does not exist
+echo.
+goto end
+:check_python_available
+%PYTHON_COMMAND% %BASE_TOOLS_PATH%\Tests\PythonTest.py >PythonCheck.txt 2>&1
+  setlocal enabledelayedexpansion
+  set /p PythonCheck=<"PythonCheck.txt"
+  del PythonCheck.txt
+  if "!PythonCheck!" NEQ "TRUE" (
+    echo.
+    echo ! ERROR !  "%PYTHON_COMMAND%" is not installed or added to environment variables
+    echo.
+    goto end
+  ) else (
     goto check_freezer_path
-    )
   )
-  
-  echo.
-  echo !!! ERROR !!! Binary python tools are missing.
-  echo PYTHON_HOME or PYTHON3_ENABLE environment variable is not set successfully.
-  echo PYTHON_HOME or PYTHON3_ENABLE is required to build or execute the python tools.
-  echo.
-  goto end
 
 :check_freezer_path
+  endlocal
   if defined BASETOOLS_PYTHON_SOURCE goto print_python_info
   set "PATH=%BASE_TOOLS_PATH%\BinWrappers\WindowsLike;%PATH%"
   set BASETOOLS_PYTHON_SOURCE=%BASE_TOOLS_PATH%\Source\Python
   set PYTHONPATH=%BASETOOLS_PYTHON_SOURCE%;%PYTHONPATH%
 
 :print_python_info
   echo                PATH = %PATH%
-  if "%PYTHON3_ENABLE%" EQU "TRUE" (
+  if defined PYTHON3_ENABLE if "%PYTHON3_ENABLE%" EQU "TRUE" (
     echo      PYTHON3_ENABLE = %PYTHON3_ENABLE%
-    echo             PYTHON3 = %PYTHON%
+    echo             PYTHON3 = %PYTHON_COMMAND%
   ) else (
-    echo      PYTHON3_ENABLE = %PYTHON3_ENABLE%
-    if defined PYTHON_HOME (
-      echo         PYTHON_HOME = %PYTHON_HOME%
-    )
+    echo      PYTHON3_ENABLE = FALSE
+    echo      PYTHON_COMMAND = %PYTHON_COMMAND%
   )
   echo          PYTHONPATH = %PYTHONPATH%
   echo.
 
 :VisualStudioAvailable
diff --git a/edksetup.sh b/edksetup.sh
index bfa54ddf70..a8897d10f8 100755
--- a/edksetup.sh
+++ b/edksetup.sh
@@ -88,11 +88,11 @@ function SetupEnv()
     . $EDK_TOOLS_PATH/BuildEnv
   elif [ -f "$WORKSPACE/BaseTools/BuildEnv" ]
   then
     . $WORKSPACE/BaseTools/BuildEnv
   elif [ -n "$PACKAGES_PATH" ]
-  then 
+  then
     PATH_LIST=$PACKAGES_PATH
     PATH_LIST=${PATH_LIST//:/ }
     for DIR in $PATH_LIST
     do
       if [ -f "$DIR/BaseTools/BuildEnv" ]
@@ -109,15 +109,13 @@ function SetupEnv()
     echo the EDK2 BuildEnv script.
     return 1
   fi
 }
 
-function SetupPython()
-{    
-  if [ $PYTHON3_ENABLE ] && [ $PYTHON3_ENABLE == TRUE ]
-  then
-    if [ $origin_version ];then
+function SetupPython3()
+{
+  if [ $origin_version ];then
       origin_version=
     fi
     for python in $(whereis python3)
     do
       python=$(echo $python | grep "[[:digit:]]$" || true)
@@ -125,22 +123,39 @@ function SetupPython()
       if [ -z "${python_version}" ] || (! command -v $python >/dev/null 2>&1);then
         continue
       fi
       if [ -z $origin_version ];then
         origin_version=$python_version
-        export PYTHON=$python
+        export PYTHON_COMMAND=$python
         continue
       fi
       ret=`echo "$origin_version < $python_version" |bc`
       if [ "$ret" -eq 1 ]; then
         origin_version=$python_version
-        export PYTHON=$python
+        export PYTHON_COMMAND=$python
       fi
     done
+    return 0
+}
+
+function SetupPython()
+{
+  if [ $PYTHON_COMMAND ] && [ -z $PYTHON3_ENABLE ];then
+    if ( command -v $PYTHON_COMMAND >/dev/null 2>&1 );then
+      return 0
+    else
+      echo $PYTHON_COMMAND cannot be used to build or execute the python tools.
+      return 1
+    fi
   fi
-  
-  if [ -z $PYTHON3_ENABLE ] || [ $PYTHON3_ENABLE != TRUE ]
+
+  if [ $PYTHON3_ENABLE ] && [ $PYTHON3_ENABLE == TRUE ]
+  then
+    SetupPython3
+  fi
+
+  if [ $PYTHON3_ENABLE ] && [ $PYTHON3_ENABLE != TRUE ]
   then
     if [ $origin_version ];then
       origin_version=
     fi
     for python in $(whereis python2)
@@ -151,20 +166,23 @@ function SetupPython()
         continue
       fi
       if [ -z $origin_version ]
       then
         origin_version=$python_version
-        export PYTHON=$python
+        export PYTHON_COMMAND=$python
         continue
       fi
       ret=`echo "$origin_version < $python_version" |bc`
       if [ "$ret" -eq 1 ]; then
         origin_version=$python_version
-        export PYTHON=$python
+        export PYTHON_COMMAND=$python
       fi
     done
+    return 0
   fi
+
+  SetupPython3
 }
 
 function SourceEnv()
 {
   SetWorkspace &&
-- 
2.20.1.windows.1



^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [Patch 28/33] BaseTools:Fixed Rsa issue and a set define issue.
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (26 preceding siblings ...)
  2019-01-29  2:06 ` [Patch 27/33] BaseTools: Update PYTHON env to PYTHON_COMMAND Feng, Bob C
@ 2019-01-29  2:06 ` Feng, Bob C
  2019-01-29  2:06 ` [Patch 29/33] BaseTools:ord() don't match in py2 and py3 Feng, Bob C
                   ` (5 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:06 UTC (permalink / raw)
  To: edk2-devel; +Cc: Bob Feng, Liming Gao, Yonghong Zhu, Zhiju . Fan

Fix the Python 3 error raised when writing the public key:

 ValueError: non-hexadecimal number found in fromhex() arg at position 0
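
The pattern the patch adopts can be sketched as follows: keep the OpenSSL modulus as hex text while accumulating it, and convert to raw bytes only at write time with bytearray.fromhex(), which works identically in Python 2 and 3. The hex string below is illustrative, not a real key.

```python
# Illustrative modulus hex text (a real one comes from "openssl rsa -modulus").
PublicKeyHexString = "C0FFEE"

# Accumulate two hex digits at a time, as the signing tool does after the fix.
PublicKey = ''
while len(PublicKeyHexString) > 0:
    PublicKey = PublicKey + PublicKeyHexString[0:2]
    PublicKeyHexString = PublicKeyHexString[2:]

# Convert the hex text to raw bytes only when writing the output file.
raw = bytearray.fromhex(PublicKey)
assert raw == bytearray(b'\xc0\xff\xee')

# The old code instead built a str of chr() values; feeding such binary text
# to fromhex() is what raised:
#   ValueError: non-hexadecimal number found in fromhex() arg at position 0
```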

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py b/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py
index d2bb0c998c..c285a69ec0 100644
--- a/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py
+++ b/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py
@@ -154,11 +154,11 @@ if __name__ == '__main__':
   #
   Process = subprocess.Popen('%s rsa -in "%s" -modulus -noout' % (OpenSslCommand, args.PrivateKeyFileName), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
   PublicKeyHexString = Process.communicate()[0].split('=')[1].strip()
   PublicKey = ''
   while len(PublicKeyHexString) > 0:
-    PublicKey = PublicKey + chr(int(PublicKeyHexString[0:2], 16))
+    PublicKey = PublicKey + PublicKeyHexString[0:2]
     PublicKeyHexString=PublicKeyHexString[2:]
   if Process.returncode != 0:
     sys.exit(Process.returncode)
 
   if args.MonotonicCountStr:
@@ -186,11 +186,11 @@ if __name__ == '__main__':
     #
     # Write output file that contains hash GUID, Public Key, Signature, and Input data
     #
     args.OutputFile = open(args.OutputFileName, 'wb')
     args.OutputFile.write(EFI_HASH_ALGORITHM_SHA256_GUID.bytes_le)
-    args.OutputFile.write(PublicKey)
+    args.OutputFile.write(bytearray.fromhex(str(PublicKey)))
     args.OutputFile.write(Signature)
     args.OutputFile.write(args.InputFileBuffer)
     args.OutputFile.close()
 
   if args.Decode:
-- 
2.20.1.windows.1




* [Patch 29/33] BaseTools:ord() don't match in py2 and py3
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (27 preceding siblings ...)
  2019-01-29  2:06 ` [Patch 28/33] BaseTools:Fixed Rsa issue and a set define issue Feng, Bob C
@ 2019-01-29  2:06 ` Feng, Bob C
  2019-01-29  2:06 ` [Patch 30/33] BaseTools: the list and iterator translation Feng, Bob C
                   ` (4 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:06 UTC (permalink / raw)
  To: edk2-devel; +Cc: Bob Feng, Liming Gao, Zhiju . Fan

In Python 2, FvHeaderBuffer is a str, so each element needs ord();
in Python 3 it is bytes, so indexing it already yields an int.
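
A minimal sketch of why the patch branches on the element type (the buffer below is a fabricated stand-in for the 0x48-byte PI FV header, not real header data):

```python
# Fake FV header slice: byte 0x2E holds the alignment field in its low 5 bits.
FvHeaderBuffer = b'\x00' * 0x2E + b'\x49'

byte_at_2E = FvHeaderBuffer[0x2E]
if isinstance(byte_at_2E, str):
    # Python 2: indexing a str gives a 1-character str, so ord() is required.
    FvAlignmentValue = 1 << (ord(byte_at_2E) & 0x1F)
else:
    # Python 3: indexing bytes gives an int directly; ord() would raise TypeError.
    FvAlignmentValue = 1 << (byte_at_2E & 0x1F)

assert FvAlignmentValue == 1 << (0x49 & 0x1F)  # 1 << 9
```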

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/GenFds/FvImageSection.py | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/BaseTools/Source/Python/GenFds/FvImageSection.py b/BaseTools/Source/Python/GenFds/FvImageSection.py
index 535b86ab5e..7ea931e1b5 100644
--- a/BaseTools/Source/Python/GenFds/FvImageSection.py
+++ b/BaseTools/Source/Python/GenFds/FvImageSection.py
@@ -69,11 +69,14 @@ class FvImageSection(FvImageSectionClassObject):
                     FvFileObj = open (FvFileName, 'rb')
                     FvFileObj.seek(0)
                     # PI FvHeader is 0x48 byte
                     FvHeaderBuffer = FvFileObj.read(0x48)
                     # FV alignment position.
-                    FvAlignmentValue = 1 << (FvHeaderBuffer[0x2E] & 0x1F)
+                    if isinstance(FvHeaderBuffer[0x2E], str):
+                        FvAlignmentValue = 1 << (ord(FvHeaderBuffer[0x2E]) & 0x1F)
+                    else:
+                        FvAlignmentValue = 1 << (FvHeaderBuffer[0x2E] & 0x1F)
                     FvFileObj.close()
                 if FvAlignmentValue > MaxFvAlignment:
                     MaxFvAlignment = FvAlignmentValue
 
                 OutputFile = os.path.join(OutputPath, ModuleName + SUP_MODULE_SEC + Num + SectionSuffix.get("FV_IMAGE"))
@@ -119,11 +122,14 @@ class FvImageSection(FvImageSectionClassObject):
                         FvFileObj = open (FvFileName, 'rb')
                         FvFileObj.seek(0)
                         # PI FvHeader is 0x48 byte
                         FvHeaderBuffer = FvFileObj.read(0x48)
                         # FV alignment position.
-                        FvAlignmentValue = 1 << (ord (FvHeaderBuffer[0x2E]) & 0x1F)
+                        if isinstance(FvHeaderBuffer[0x2E], str):
+                            FvAlignmentValue = 1 << (ord(FvHeaderBuffer[0x2E]) & 0x1F)
+                        else:
+                            FvAlignmentValue = 1 << (FvHeaderBuffer[0x2E] & 0x1F)
                         # FvAlignmentValue is larger than or equal to 1K
                         if FvAlignmentValue >= 0x400:
                             if FvAlignmentValue >= 0x100000:
                                 #The max alignment supported by FFS is 16M.
                                 if FvAlignmentValue >= 0x1000000:
-- 
2.20.1.windows.1




* [Patch 30/33] BaseTools: the list and iterator translation
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (28 preceding siblings ...)
  2019-01-29  2:06 ` [Patch 29/33] BaseTools:ord() don't match in py2 and py3 Feng, Bob C
@ 2019-01-29  2:06 ` Feng, Bob C
  2019-01-29  2:06 ` [Patch 31/33] BaseTools: Handle the bytes and str difference Feng, Bob C
                   ` (3 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:06 UTC (permalink / raw)
  To: edk2-devel; +Cc: Bob Feng, Liming Gao, Yonghong Zhu, Zhiju . Fan

In Python 3, dict.keys(), values() and items() return view objects rather
than lists, so call sites that index, sort or concatenate the result need
an explicit list() conversion.
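
A minimal illustration of the behavior change this patch set works around:

```python
d = {'a': 1, 'b': 2}

# Explicit conversion, as the patch does throughout BaseTools.
keys = list(d.keys())
assert keys[0] in ('a', 'b')

# Indexing the view directly raises TypeError in Python 3
# ('dict_keys' object is not subscriptable).
try:
    d.keys()[0]
    indexed = True
except TypeError:
    indexed = False
assert not indexed
```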

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/AutoGen/AutoGen.py         | 26 +++++++++++++-------------
 BaseTools/Source/Python/AutoGen/GenC.py            |  2 +-
 BaseTools/Source/Python/AutoGen/GenMake.py         | 16 ++++++++--------
 BaseTools/Source/Python/AutoGen/GenPcdDb.py        |  4 ++--
 BaseTools/Source/Python/AutoGen/StrGather.py       |  4 ++--
 BaseTools/Source/Python/Common/Misc.py             |  5 +++--
 BaseTools/Source/Python/Common/StringUtils.py      |  6 +++---
 BaseTools/Source/Python/GenFds/FfsInfStatement.py  |  2 +-
 BaseTools/Source/Python/Workspace/DscBuildData.py  | 16 ++++++++--------
 BaseTools/Source/Python/Workspace/InfBuildData.py  |  4 ++--
 BaseTools/Source/Python/Workspace/MetaDataTable.py |  2 +-
 BaseTools/Source/Python/build/BuildReport.py       |  4 ++--
 BaseTools/Source/Python/build/build.py             |  2 +-
 13 files changed, 47 insertions(+), 46 deletions(-)

diff --git a/BaseTools/Source/Python/AutoGen/AutoGen.py b/BaseTools/Source/Python/AutoGen/AutoGen.py
index 5f0da5a815..baa1842667 100644
--- a/BaseTools/Source/Python/AutoGen/AutoGen.py
+++ b/BaseTools/Source/Python/AutoGen/AutoGen.py
@@ -927,11 +927,11 @@ class WorkspaceAutoGen(AutoGen):
     # @return  None
     #
     def _CheckAllPcdsTokenValueConflict(self):
         for Pa in self.AutoGenObjectList:
             for Package in Pa.PackageList:
-                PcdList = Package.Pcds.values()
+                PcdList = list(Package.Pcds.values())
                 PcdList.sort(key=lambda x: int(x.TokenValue, 0))
                 Count = 0
                 while (Count < len(PcdList) - 1) :
                     Item = PcdList[Count]
                     ItemNext = PcdList[Count + 1]
@@ -973,11 +973,11 @@ class WorkspaceAutoGen(AutoGen):
                                                 )
                             SameTokenValuePcdListCount += 1
                         Count += SameTokenValuePcdListCount
                     Count += 1
 
-                PcdList = Package.Pcds.values()
+                PcdList = list(Package.Pcds.values())
                 PcdList.sort(key=lambda x: "%s.%s" % (x.TokenSpaceGuidCName, x.TokenCName))
                 Count = 0
                 while (Count < len(PcdList) - 1) :
                     Item = PcdList[Count]
                     ItemNext = PcdList[Count + 1]
@@ -1298,11 +1298,11 @@ class PlatformAutoGen(AutoGen):
 
             if PcdNvStoreDfBuffer:
                 if os.path.exists(VpdMapFilePath):
                     OrgVpdFile.Read(VpdMapFilePath)
                     PcdItems = OrgVpdFile.GetOffset(PcdNvStoreDfBuffer[0])
-                    NvStoreOffset = PcdItems.values()[0].strip() if PcdItems else '0'
+                    NvStoreOffset = list(PcdItems.values())[0].strip() if PcdItems else '0'
                 else:
                     EdkLogger.error("build", FILE_READ_FAILURE, "Can not find VPD map file %s to fix up VPD offset." % VpdMapFilePath)
 
                 NvStoreOffset = int(NvStoreOffset, 16) if NvStoreOffset.upper().startswith("0X") else int(NvStoreOffset)
                 default_skuobj = PcdNvStoreDfBuffer[0].SkuInfoList.get(TAB_DEFAULT)
@@ -1497,11 +1497,11 @@ class PlatformAutoGen(AutoGen):
                 self._PlatformPcds[item].DatumType = TAB_VOID
 
         if (self.Workspace.ArchList[-1] == self.Arch):
             for Pcd in self._DynamicPcdList:
                 # just pick the a value to determine whether is unicode string type
-                Sku = Pcd.SkuInfoList.values()[0]
+                Sku = Pcd.SkuInfoList.get(TAB_DEFAULT)
                 Sku.VpdOffset = Sku.VpdOffset.strip()
 
                 if Pcd.DatumType not in [TAB_UINT8, TAB_UINT16, TAB_UINT32, TAB_UINT64, TAB_VOID, "BOOLEAN"]:
                     Pcd.DatumType = TAB_VOID
 
@@ -1603,11 +1603,11 @@ class PlatformAutoGen(AutoGen):
 
                         # Not found, it should be signature
                         if not FoundFlag :
                             # just pick the a value to determine whether is unicode string type
                             SkuValueMap = {}
-                            SkuObjList = DscPcdEntry.SkuInfoList.items()
+                            SkuObjList = list(DscPcdEntry.SkuInfoList.items())
                             DefaultSku = DscPcdEntry.SkuInfoList.get(TAB_DEFAULT)
                             if DefaultSku:
                                 defaultindex = SkuObjList.index((TAB_DEFAULT, DefaultSku))
                                 SkuObjList[0], SkuObjList[defaultindex] = SkuObjList[defaultindex], SkuObjList[0]
                             for (SkuName, Sku) in SkuObjList:
@@ -1629,11 +1629,11 @@ class PlatformAutoGen(AutoGen):
                                             DscPcdEntry.DefaultValue = DecPcdEntry.DefaultValue
                                             DscPcdEntry.TokenValue = DecPcdEntry.TokenValue
                                             DscPcdEntry.TokenSpaceGuidValue = eachDec.Guids[DecPcdEntry.TokenSpaceGuidCName]
                                             # Only fix the value while no value provided in DSC file.
                                             if not Sku.DefaultValue:
-                                                DscPcdEntry.SkuInfoList[DscPcdEntry.SkuInfoList.keys()[0]].DefaultValue = DecPcdEntry.DefaultValue
+                                                DscPcdEntry.SkuInfoList[list(DscPcdEntry.SkuInfoList.keys())[0]].DefaultValue = DecPcdEntry.DefaultValue
 
                                 if DscPcdEntry not in self._DynamicPcdList:
                                     self._DynamicPcdList.append(DscPcdEntry)
                                 Sku.VpdOffset = Sku.VpdOffset.strip()
                                 PcdValue = Sku.DefaultValue
@@ -1711,11 +1711,11 @@ class PlatformAutoGen(AutoGen):
                         EdkLogger.error("build", FILE_READ_FAILURE, "Can not find VPD map file %s to fix up VPD offset." % VpdMapFilePath)
 
             # Delete the DynamicPcdList At the last time enter into this function
             for Pcd in self._DynamicPcdList:
                 # just pick the a value to determine whether is unicode string type
-                Sku = Pcd.SkuInfoList.values()[0]
+                Sku = Pcd.SkuInfoList.get(TAB_DEFAULT)
                 Sku.VpdOffset = Sku.VpdOffset.strip()
 
                 if Pcd.DatumType not in [TAB_UINT8, TAB_UINT16, TAB_UINT32, TAB_UINT64, TAB_VOID, "BOOLEAN"]:
                     Pcd.DatumType = TAB_VOID
 
@@ -2284,11 +2284,11 @@ class PlatformAutoGen(AutoGen):
                     Pcd.MaxDatumSize = str((len(Value) - 2) * 2)
                 elif Value[0] == '{':
                     Pcd.MaxDatumSize = str(len(Value.split(',')))
                 else:
                     Pcd.MaxDatumSize = str(len(Value) - 1)
-        return Pcds.values()
+        return list(Pcds.values())
 
 
 
     ## Calculate the priority value of the build option
     #
@@ -2353,11 +2353,11 @@ class PlatformAutoGen(AutoGen):
 
         #
         # Use the highest priority value.
         #
         if (len(OverrideList) >= 2):
-            KeyList = OverrideList.keys()
+            KeyList = list(OverrideList.keys())
             for Index in range(len(KeyList)):
                 NowKey = KeyList[Index]
                 Target1, ToolChain1, Arch1, CommandType1, Attr1 = NowKey.split("_")
                 for Index1 in range(len(KeyList) - Index - 1):
                     NextKey = KeyList[Index1 + Index + 1]
@@ -2471,13 +2471,13 @@ class PlatformAutoGen(AutoGen):
             for Tool in Options:
                 for Attr in Options[Tool]:
                     if Attr == TAB_TOD_DEFINES_BUILDRULEORDER:
                         BuildRuleOrder = Options[Tool][Attr]
 
-        AllTools = set(ModuleOptions.keys() + PlatformOptions.keys() +
-                       PlatformModuleOptions.keys() + ModuleTypeOptions.keys() +
-                       self.ToolDefinition.keys())
+        AllTools = set(list(ModuleOptions.keys()) + list(PlatformOptions.keys()) +
+                       list(PlatformModuleOptions.keys()) + list(ModuleTypeOptions.keys()) +
+                       list(self.ToolDefinition.keys()))
         BuildOptions = defaultdict(lambda: defaultdict(str))
         for Tool in AllTools:
             for Options in [self.ToolDefinition, ModuleOptions, PlatformOptions, ModuleTypeOptions, PlatformModuleOptions]:
                 if Tool not in Options:
                     continue
@@ -3517,11 +3517,11 @@ class ModuleAutoGen(AutoGen):
 
         if not VfrUniBaseName:
             return None
         MapFileName = os.path.join(self.OutputDir, self.Name + ".map")
         EfiFileName = os.path.join(self.OutputDir, self.Name + ".efi")
-        VfrUniOffsetList = GetVariableOffset(MapFileName, EfiFileName, VfrUniBaseName.values())
+        VfrUniOffsetList = GetVariableOffset(MapFileName, EfiFileName, list(VfrUniBaseName.values()))
         if not VfrUniOffsetList:
             return None
 
         OutputName = '%sOffset.bin' % self.Name
         UniVfrOffsetFileName    =  os.path.join( self.OutputDir, OutputName)
diff --git a/BaseTools/Source/Python/AutoGen/GenC.py b/BaseTools/Source/Python/AutoGen/GenC.py
index f1f3b6f359..700c94b3a7 100644
--- a/BaseTools/Source/Python/AutoGen/GenC.py
+++ b/BaseTools/Source/Python/AutoGen/GenC.py
@@ -2048,11 +2048,11 @@ def CreateCode(Info, AutoGenC, AutoGenH, StringH, UniGenCFlag, UniGenBinBuffer,
         GuidMacros = []
         for Guid in Info.Module.Guids:
             if Guid in Info.Module.GetGuidsUsedByPcd():
                 continue
             GuidMacros.append('#define %s %s' % (Guid, Info.Module.Guids[Guid]))
-        for Guid, Value in Info.Module.Protocols.items() + Info.Module.Ppis.items():
+        for Guid, Value in list(Info.Module.Protocols.items()) + list(Info.Module.Ppis.items()):
             GuidMacros.append('#define %s %s' % (Guid, Value))
         # supports FixedAtBuild and FeaturePcd usage in VFR file
         if Info.VfrFileList and Info.ModulePcdList:
             GuidMacros.append('#define %s %s' % ('FixedPcdGetBool(TokenName)', '_PCD_VALUE_##TokenName'))
             GuidMacros.append('#define %s %s' % ('FixedPcdGet8(TokenName)', '_PCD_VALUE_##TokenName'))
diff --git a/BaseTools/Source/Python/AutoGen/GenMake.py b/BaseTools/Source/Python/AutoGen/GenMake.py
index 3094a555e0..c42053eb4c 100644
--- a/BaseTools/Source/Python/AutoGen/GenMake.py
+++ b/BaseTools/Source/Python/AutoGen/GenMake.py
@@ -673,12 +673,12 @@ cleanlib:
             "module_debug_directory"    : MyAgo.DebugDir,
 
             "separator"                 : Separator,
             "module_tool_definitions"   : ToolsDef,
 
-            "shell_command_code"        : self._SHELL_CMD_[self._FileType].keys(),
-            "shell_command"             : self._SHELL_CMD_[self._FileType].values(),
+            "shell_command_code"        : list(self._SHELL_CMD_[self._FileType].keys()),
+            "shell_command"             : list(self._SHELL_CMD_[self._FileType].values()),
 
             "module_entry_point"        : ModuleEntryPoint,
             "image_entry_point"         : ImageEntryPoint,
             "arch_entry_point"          : ArchEntryPoint,
             "remaining_build_target"    : self.ResultFileList,
@@ -1273,12 +1273,12 @@ ${BEGIN}\t-@${create_directory_command}\n${END}\
             "module_debug_directory"    : MyAgo.DebugDir,
 
             "separator"                 : Separator,
             "module_tool_definitions"   : ToolsDef,
 
-            "shell_command_code"        : self._SHELL_CMD_[self._FileType].keys(),
-            "shell_command"             : self._SHELL_CMD_[self._FileType].values(),
+            "shell_command_code"        : list(self._SHELL_CMD_[self._FileType].keys()),
+            "shell_command"             : list(self._SHELL_CMD_[self._FileType].values()),
 
             "create_directory_command"  : self.GetCreateDirectoryCommand(self.IntermediateDirectoryList),
             "custom_makefile_content"   : CustomMakefile
         }
 
@@ -1447,12 +1447,12 @@ cleanlib:
             "platform_build_directory"  : MyAgo.BuildDir,
             "platform_dir"              : MyAgo.Macros["PLATFORM_DIR"],
 
             "toolchain_tag"             : MyAgo.ToolChain,
             "build_target"              : MyAgo.BuildTarget,
-            "shell_command_code"        : self._SHELL_CMD_[self._FileType].keys(),
-            "shell_command"             : self._SHELL_CMD_[self._FileType].values(),
+            "shell_command_code"        : list(self._SHELL_CMD_[self._FileType].keys()),
+            "shell_command"             : list(self._SHELL_CMD_[self._FileType].values()),
             "build_architecture_list"   : MyAgo.Arch,
             "architecture"              : MyAgo.Arch,
             "separator"                 : Separator,
             "create_directory_command"  : self.GetCreateDirectoryCommand(self.IntermediateDirectoryList),
             "cleanall_command"          : self.GetRemoveDirectoryCommand(self.IntermediateDirectoryList),
@@ -1579,12 +1579,12 @@ class TopLevelMakefile(BuildFile):
             "platform_build_directory"  : MyAgo.BuildDir,
             "conf_directory"            : GlobalData.gConfDirectory,
 
             "toolchain_tag"             : MyAgo.ToolChain,
             "build_target"              : MyAgo.BuildTarget,
-            "shell_command_code"        : self._SHELL_CMD_[self._FileType].keys(),
-            "shell_command"             : self._SHELL_CMD_[self._FileType].values(),
+            "shell_command_code"        : list(self._SHELL_CMD_[self._FileType].keys()),
+            "shell_command"             : list(self._SHELL_CMD_[self._FileType].values()),
             'arch'                      : list(MyAgo.ArchList),
             "build_architecture_list"   : ','.join(MyAgo.ArchList),
             "separator"                 : Separator,
             "create_directory_command"  : self.GetCreateDirectoryCommand(self.IntermediateDirectoryList),
             "cleanall_command"          : self.GetRemoveDirectoryCommand(self.IntermediateDirectoryList),
diff --git a/BaseTools/Source/Python/AutoGen/GenPcdDb.py b/BaseTools/Source/Python/AutoGen/GenPcdDb.py
index d3e85293d2..2cb1745823 100644
--- a/BaseTools/Source/Python/AutoGen/GenPcdDb.py
+++ b/BaseTools/Source/Python/AutoGen/GenPcdDb.py
@@ -613,11 +613,11 @@ def BuildExDataBase(Dict):
     DbInitValueUint32 = DbComItemList(4, RawDataList = InitValueUint32)
     VardefValueUint32 = Dict['VARDEF_DB_VALUE_UINT32']
     DbVardefValueUint32 = DbItemList(4, RawDataList = VardefValueUint32)
     VpdHeadValue = Dict['VPD_DB_VALUE']
     DbVpdHeadValue = DbComItemList(4, RawDataList = VpdHeadValue)
-    ExMapTable = zip(Dict['EXMAPPING_TABLE_EXTOKEN'], Dict['EXMAPPING_TABLE_LOCAL_TOKEN'], Dict['EXMAPPING_TABLE_GUID_INDEX'])
+    ExMapTable = list(zip(Dict['EXMAPPING_TABLE_EXTOKEN'], Dict['EXMAPPING_TABLE_LOCAL_TOKEN'], Dict['EXMAPPING_TABLE_GUID_INDEX']))
     DbExMapTable = DbExMapTblItemList(8, RawDataList = ExMapTable)
     LocalTokenNumberTable = Dict['LOCAL_TOKEN_NUMBER_DB_VALUE']
     DbLocalTokenNumberTable = DbItemList(4, RawDataList = LocalTokenNumberTable)
     GuidTable = Dict['GUID_STRUCTURE']
     DbGuidTable = DbItemList(16, RawDataList = GuidTable)
@@ -647,11 +647,11 @@ def BuildExDataBase(Dict):
     DbPcdCNameTable = DbStringItemList(0, RawDataList = PcdCNameTableValue, LenList = PcdCNameLen)
 
     PcdNameOffsetTable = Dict['PCD_NAME_OFFSET']
     DbPcdNameOffsetTable = DbItemList(4, RawDataList = PcdNameOffsetTable)
 
-    SizeTableValue = zip(Dict['SIZE_TABLE_MAXIMUM_LENGTH'], Dict['SIZE_TABLE_CURRENT_LENGTH'])
+    SizeTableValue = list(zip(Dict['SIZE_TABLE_MAXIMUM_LENGTH'], Dict['SIZE_TABLE_CURRENT_LENGTH']))
     DbSizeTableValue = DbSizeTableItemList(2, RawDataList = SizeTableValue)
     InitValueUint16 = Dict['INIT_DB_VALUE_UINT16']
     DbInitValueUint16 = DbComItemList(2, RawDataList = InitValueUint16)
     VardefValueUint16 = Dict['VARDEF_DB_VALUE_UINT16']
     DbVardefValueUint16 = DbItemList(2, RawDataList = VardefValueUint16)
diff --git a/BaseTools/Source/Python/AutoGen/StrGather.py b/BaseTools/Source/Python/AutoGen/StrGather.py
index d34a9e9447..d87680b2e7 100644
--- a/BaseTools/Source/Python/AutoGen/StrGather.py
+++ b/BaseTools/Source/Python/AutoGen/StrGather.py
@@ -549,13 +549,13 @@ def GetStringFiles(UniFilList, SourceFileList, IncludeList, IncludePathList, Ski
     if len(UniFilList) > 0:
         if ShellMode:
             #
             # support ISO 639-2 codes in .UNI files of EDK Shell
             #
-            Uni = UniFileClassObject(sorted (UniFilList), True, IncludePathList)
+            Uni = UniFileClassObject(sorted(UniFilList, key=lambda x: x.File), True, IncludePathList)
         else:
-            Uni = UniFileClassObject(sorted (UniFilList), IsCompatibleMode, IncludePathList)
+            Uni = UniFileClassObject(sorted(UniFilList, key=lambda x: x.File), IsCompatibleMode, IncludePathList)
     else:
         EdkLogger.error("UnicodeStringGather", AUTOGEN_ERROR, 'No unicode files given')
 
     FileList = GetFileList(SourceFileList, IncludeList, SkipList)
 
diff --git a/BaseTools/Source/Python/Common/Misc.py b/BaseTools/Source/Python/Common/Misc.py
index e0e355286b..d3b71fc4a2 100644
--- a/BaseTools/Source/Python/Common/Misc.py
+++ b/BaseTools/Source/Python/Common/Misc.py
@@ -572,17 +572,18 @@ def RealPath(File, Dir='', OverrideDir=''):
 #   @retval     GuidValue   if the CName is found in any given package
 #   @retval     None        if the CName is not found in all given packages
 #
 def GuidValue(CName, PackageList, Inffile = None):
     for P in PackageList:
-        GuidKeys = P.Guids.keys()
+        GuidKeys = list(P.Guids.keys())
         if Inffile and P._PrivateGuids:
             if not Inffile.startswith(P.MetaFile.Dir):
                 GuidKeys = [x for x in P.Guids if x not in P._PrivateGuids]
         if CName in GuidKeys:
             return P.Guids[CName]
     return None
+    return None
 
 ## A string template class
 #
 #  This class implements a template for string replacement. A string template
 #  looks like following
@@ -1635,11 +1636,11 @@ class SkuClass():
         self._SkuIdentifier = SkuIdentifier
         if SkuIdentifier == '' or SkuIdentifier is None:
             self.SkuIdSet = ['DEFAULT']
             self.SkuIdNumberSet = ['0U']
         elif SkuIdentifier == 'ALL':
-            self.SkuIdSet = SkuIds.keys()
+            self.SkuIdSet = list(SkuIds.keys())
             self.SkuIdNumberSet = [num[0].strip() + 'U' for num in SkuIds.values()]
         else:
             r = SkuIdentifier.split('|')
             self.SkuIdSet=[(r[k].strip()).upper() for k in range(len(r))]
             k = None
diff --git a/BaseTools/Source/Python/Common/StringUtils.py b/BaseTools/Source/Python/Common/StringUtils.py
index d5afde7a95..0fa51f365b 100644
--- a/BaseTools/Source/Python/Common/StringUtils.py
+++ b/BaseTools/Source/Python/Common/StringUtils.py
@@ -97,11 +97,11 @@ def GetSplitValueList(String, SplitTag=DataType.TAB_VALUE_SPLIT, MaxSplit= -1):
 # @param MaxSplit:  The max number of split values, default is -1
 #
 # @retval list() A list for splitted string
 #
 def GetSplitList(String, SplitStr=DataType.TAB_VALUE_SPLIT, MaxSplit= -1):
-    return map(lambda l: l.strip(), String.split(SplitStr, MaxSplit))
+    return list(map(lambda l: l.strip(), String.split(SplitStr, MaxSplit)))
 
 ## MergeArches
 #
 # Find a key's all arches in dict, add the new arch to the list
 # If not exist any arch, set the arch directly
@@ -543,11 +543,11 @@ def GetSingleValueOfKeyFromLines(Lines, Dictionary, CommentCharacter, KeySplitCh
                 #
                 # Remove comments and white spaces
                 #
                 LineList[1] = CleanString(LineList[1], CommentCharacter)
                 if ValueSplitFlag:
-                    Value = map(string.strip, LineList[1].split(ValueSplitCharacter))
+                    Value = list(map(string.strip, LineList[1].split(ValueSplitCharacter)))
                 else:
                     Value = CleanString(LineList[1], CommentCharacter).splitlines()
 
                 if Key[0] in Dictionary:
                     if Key[0] not in Keys:
@@ -749,11 +749,11 @@ def SplitString(String):
 # 1. Replace "'" with "''" in each item of StringList
 #
 # @param StringList:  A list for strings to be converted
 #
 def ConvertToSqlString(StringList):
-    return map(lambda s: s.replace("'", "''"), StringList)
+    return list(map(lambda s: s.replace("'", "''"), StringList))
 
 ## Convert To Sql String
 #
 # 1. Replace "'" with "''" in the String
 #
diff --git a/BaseTools/Source/Python/GenFds/FfsInfStatement.py b/BaseTools/Source/Python/GenFds/FfsInfStatement.py
index a7298a6daf..80257923f0 100644
--- a/BaseTools/Source/Python/GenFds/FfsInfStatement.py
+++ b/BaseTools/Source/Python/GenFds/FfsInfStatement.py
@@ -1073,11 +1073,11 @@ class FfsInfStatement(FfsInfStatementClassObject):
     #   @retval RetValue              A list contain offset of UNI/INF object.
     #
     def __GetBuildOutputMapFileVfrUniInfo(self, VfrUniBaseName):
         MapFileName = os.path.join(self.EfiOutputPath, self.BaseName + ".map")
         EfiFileName = os.path.join(self.EfiOutputPath, self.BaseName + ".efi")
-        return GetVariableOffset(MapFileName, EfiFileName, VfrUniBaseName.values())
+        return GetVariableOffset(MapFileName, EfiFileName, list(VfrUniBaseName.values()))
 
     ## __GenUniVfrOffsetFile() method
     #
     #   Generate the offset file for the module which contain VFR or UNI file.
     #
diff --git a/BaseTools/Source/Python/Workspace/DscBuildData.py b/BaseTools/Source/Python/Workspace/DscBuildData.py
index c2bc705091..13b2cef59d 100644
--- a/BaseTools/Source/Python/Workspace/DscBuildData.py
+++ b/BaseTools/Source/Python/Workspace/DscBuildData.py
@@ -1604,11 +1604,11 @@ class DscBuildData(PlatformBuildClassObject):
                     pcd.SkuInfoList[TAB_DEFAULT] = pcd.SkuInfoList[TAB_COMMON]
                     del pcd.SkuInfoList[TAB_COMMON]
                 elif TAB_DEFAULT in pcd.SkuInfoList and TAB_COMMON in pcd.SkuInfoList:
                     del pcd.SkuInfoList[TAB_COMMON]
 
-        map(self.FilterSkuSettings, [Pcds[pcdkey] for pcdkey in Pcds if Pcds[pcdkey].Type in DynamicPcdType])
+        list(map(self.FilterSkuSettings, [Pcds[pcdkey] for pcdkey in Pcds if Pcds[pcdkey].Type in DynamicPcdType]))
         return Pcds
     @cached_property
     def PlatformUsedPcds(self):
         FdfInfList = []
         if GlobalData.gFdfParser:
@@ -2558,11 +2558,11 @@ class DscBuildData(PlatformBuildClassObject):
                         BuildOptions[Arch] |= self.ParseCCFlags(self.BuildOptions[Options])
 
         if BuildOptions:
             ArchBuildOptions = {arch:flags for arch,flags in BuildOptions.items() if arch != 'COMMON'}
             if len(ArchBuildOptions.keys()) == 1:
-                BuildOptions['COMMON'] |= (ArchBuildOptions.values()[0])
+                BuildOptions['COMMON'] |= (list(ArchBuildOptions.values())[0])
             elif len(ArchBuildOptions.keys()) > 1:
                 CommonBuildOptions = reduce(lambda x,y: x&y, ArchBuildOptions.values())
                 BuildOptions['COMMON'] |= CommonBuildOptions
             ValueList = list(BuildOptions['COMMON'])
             CC_FLAGS += " ".join(ValueList)
@@ -2776,11 +2776,11 @@ class DscBuildData(PlatformBuildClassObject):
                 pcd.SkuInfoList[TAB_DEFAULT] = pcd.SkuInfoList[TAB_COMMON]
                 del pcd.SkuInfoList[TAB_COMMON]
             elif TAB_DEFAULT in pcd.SkuInfoList and TAB_COMMON in pcd.SkuInfoList:
                 del pcd.SkuInfoList[TAB_COMMON]
 
-        map(self.FilterSkuSettings, Pcds.values())
+        list(map(self.FilterSkuSettings, Pcds.values()))
 
         return Pcds
 
     def FilterSkuSettings(self, PcdObj):
 
@@ -2841,11 +2841,11 @@ class DscBuildData(PlatformBuildClassObject):
                         nextskuid = self.SkuIdMgr.GetNextSkuId(nextskuid)
                     PcdObj.SkuInfoList[skuname] = copy.deepcopy(PcdObj.SkuInfoList[nextskuid])
                     PcdObj.SkuInfoList[skuname].SkuId = skuid
                     PcdObj.SkuInfoList[skuname].SkuIdName = skuname
             if PcdType in [self._PCD_TYPE_STRING_[MODEL_PCD_DYNAMIC_HII], self._PCD_TYPE_STRING_[MODEL_PCD_DYNAMIC_EX_HII]]:
-                PcdObj.DefaultValue = PcdObj.SkuInfoList.values()[0].HiiDefaultValue if self.SkuIdMgr.SkuUsageType == self.SkuIdMgr.SINGLE else PcdObj.SkuInfoList[TAB_DEFAULT].HiiDefaultValue
+                PcdObj.DefaultValue = list(PcdObj.SkuInfoList.values())[0].HiiDefaultValue if self.SkuIdMgr.SkuUsageType == self.SkuIdMgr.SINGLE else PcdObj.SkuInfoList[TAB_DEFAULT].HiiDefaultValue
             Pcds[PcdCName, TokenSpaceGuid]= PcdObj
         return Pcds
     ## Retrieve dynamic HII PCD settings
     #
     #   @param  Type    PCD type
@@ -2962,21 +2962,21 @@ class DscBuildData(PlatformBuildClassObject):
                 Pcds[PcdCName, TokenSpaceGuid].CustomAttribute['DscPosition'] = int(Dummy4)
             if SkuName not in Pcds[PcdCName, TokenSpaceGuid].DscRawValue:
                 Pcds[PcdCName, TokenSpaceGuid].DscRawValue[SkuName] = {}
             Pcds[PcdCName, TokenSpaceGuid].DscRawValue[SkuName][DefaultStore] = DefaultValue
         for pcd in Pcds.values():
-            SkuInfoObj = pcd.SkuInfoList.values()[0]
             pcdDecObject = self._DecPcds[pcd.TokenCName, pcd.TokenSpaceGuidCName]
             pcd.DatumType = pcdDecObject.DatumType
             # Only fix the value while no value provided in DSC file.
             for sku in pcd.SkuInfoList.values():
                 if (sku.HiiDefaultValue == "" or sku.HiiDefaultValue is None):
                     sku.HiiDefaultValue = pcdDecObject.DefaultValue
                     for default_store in sku.DefaultStoreDict:
                         sku.DefaultStoreDict[default_store]=pcdDecObject.DefaultValue
                     pcd.DefaultValue = pcdDecObject.DefaultValue
             if TAB_DEFAULT not in pcd.SkuInfoList and TAB_COMMON not in pcd.SkuInfoList:
+                SkuInfoObj = list(pcd.SkuInfoList.values())[0]
                 valuefromDec = pcdDecObject.DefaultValue
                 SkuInfo = SkuInfoClass(TAB_DEFAULT, '0', SkuInfoObj.VariableName, SkuInfoObj.VariableGuid, SkuInfoObj.VariableOffset, valuefromDec, VariableAttribute=SkuInfoObj.VariableAttribute, DefaultStore={DefaultStore:valuefromDec})
                 pcd.SkuInfoList[TAB_DEFAULT] = SkuInfo
             elif TAB_DEFAULT not in pcd.SkuInfoList and TAB_COMMON in pcd.SkuInfoList:
                 pcd.SkuInfoList[TAB_DEFAULT] = pcd.SkuInfoList[TAB_COMMON]
@@ -3002,11 +3002,11 @@ class DscBuildData(PlatformBuildClassObject):
         rt, invalidhii = DscBuildData.CheckVariableNameAssignment(Pcds)
         if not rt:
             invalidpcd = ",".join(invalidhii)
             EdkLogger.error('build', PCD_VARIABLE_INFO_ERROR, Message='The same HII PCD must map to the same EFI variable for all SKUs', File=self.MetaFile, ExtraData=invalidpcd)
 
-        map(self.FilterSkuSettings, Pcds.values())
+        list(map(self.FilterSkuSettings, Pcds.values()))
 
         return Pcds
 
     @staticmethod
     def CheckVariableNameAssignment(Pcds):
@@ -3100,18 +3100,18 @@ class DscBuildData(PlatformBuildClassObject):
 
             if SkuName not in Pcds[PcdCName, TokenSpaceGuid].DscRawValue:
                 Pcds[PcdCName, TokenSpaceGuid].DscRawValue[SkuName] = {}
             Pcds[PcdCName, TokenSpaceGuid].DscRawValue[SkuName][TAB_DEFAULT_STORES_DEFAULT] = InitialValue
         for pcd in Pcds.values():
-            SkuInfoObj = pcd.SkuInfoList.values()[0]
             pcdDecObject = self._DecPcds[pcd.TokenCName, pcd.TokenSpaceGuidCName]
             pcd.DatumType = pcdDecObject.DatumType
             # Only fix the value while no value provided in DSC file.
             for sku in pcd.SkuInfoList.values():
                 if not sku.DefaultValue:
                     sku.DefaultValue = pcdDecObject.DefaultValue
             if TAB_DEFAULT not in pcd.SkuInfoList and TAB_COMMON not in pcd.SkuInfoList:
+                SkuInfoObj = list(pcd.SkuInfoList.values())[0]
                 valuefromDec = pcdDecObject.DefaultValue
                 SkuInfo = SkuInfoClass(TAB_DEFAULT, '0', '', '', '', '', SkuInfoObj.VpdOffset, valuefromDec)
                 pcd.SkuInfoList[TAB_DEFAULT] = SkuInfo
             elif TAB_DEFAULT not in pcd.SkuInfoList and TAB_COMMON in pcd.SkuInfoList:
                 pcd.SkuInfoList[TAB_DEFAULT] = pcd.SkuInfoList[TAB_COMMON]
@@ -3127,11 +3127,11 @@ class DscBuildData(PlatformBuildClassObject):
                 PcdValueTypeSet.add("UnicodeString" if sku.DefaultValue.startswith(('L"',"L'")) else "OtherVOID*")
             if len(PcdValueTypeSet) > 1:
                 for sku in pcd.SkuInfoList.values():
                     sku.DefaultValue = StringToArray(sku.DefaultValue) if sku.DefaultValue.startswith(('L"',"L'")) else sku.DefaultValue
 
-        map(self.FilterSkuSettings, Pcds.values())
+        list(map(self.FilterSkuSettings, Pcds.values()))
         return Pcds
 
     ## Add external modules
     #
     #   The external modules are mostly those listed in FDF file, which don't
diff --git a/BaseTools/Source/Python/Workspace/InfBuildData.py b/BaseTools/Source/Python/Workspace/InfBuildData.py
index f0c7e6ddd4..fc779a9d25 100644
--- a/BaseTools/Source/Python/Workspace/InfBuildData.py
+++ b/BaseTools/Source/Python/Workspace/InfBuildData.py
@@ -30,11 +30,11 @@ from Workspace.BuildClassObject import ModuleBuildClassObject, LibraryClassObjec
 #   @retval     GuidValue   if the CName is found in any given package
 #   @retval     None        if the CName is not found in all given packages
 #
 def _ProtocolValue(CName, PackageList, Inffile = None):
     for P in PackageList:
-        ProtocolKeys = P.Protocols.keys()
+        ProtocolKeys = list(P.Protocols.keys())
         if Inffile and P._PrivateProtocols:
             if not Inffile.startswith(P.MetaFile.Dir):
                 ProtocolKeys = [x for x in P.Protocols if x not in P._PrivateProtocols]
         if CName in ProtocolKeys:
             return P.Protocols[CName]
@@ -49,11 +49,11 @@ def _ProtocolValue(CName, PackageList, Inffile = None):
 #   @retval     GuidValue   if the CName is found in any given package
 #   @retval     None        if the CName is not found in all given packages
 #
 def _PpiValue(CName, PackageList, Inffile = None):
     for P in PackageList:
-        PpiKeys = P.Ppis.keys()
+        PpiKeys = list(P.Ppis.keys())
         if Inffile and P._PrivatePpis:
             if not Inffile.startswith(P.MetaFile.Dir):
                 PpiKeys = [x for x in P.Ppis if x not in P._PrivatePpis]
         if CName in PpiKeys:
             return P.Ppis[CName]
diff --git a/BaseTools/Source/Python/Workspace/MetaDataTable.py b/BaseTools/Source/Python/Workspace/MetaDataTable.py
index 8becddbe08..c5be0ab10c 100644
--- a/BaseTools/Source/Python/Workspace/MetaDataTable.py
+++ b/BaseTools/Source/Python/Workspace/MetaDataTable.py
@@ -20,11 +20,11 @@ import Common.EdkLogger as EdkLogger
 from CommonDataClass import DataClass
 from CommonDataClass.DataClass import FileClass
 
 ## Convert to SQL required string format
 def ConvertToSqlString(StringList):
-    return map(lambda s: "'" + s.replace("'", "''") + "'", StringList)
+    return list(map(lambda s: "'" + s.replace("'", "''") + "'", StringList))
 
 ## TableFile
 #
 # This class defined a common table
 #
diff --git a/BaseTools/Source/Python/build/BuildReport.py b/BaseTools/Source/Python/build/BuildReport.py
index 9483262dd1..1cd1b0886a 100644
--- a/BaseTools/Source/Python/build/BuildReport.py
+++ b/BaseTools/Source/Python/build/BuildReport.py
@@ -1207,11 +1207,11 @@ class PcdReport(object):
                 FileWrite(File, gSubSectionEnd)
 
     def ParseStruct(self, struct):
         HasDscOverride = False
         if struct:
-            for _, Values in struct.items():
+            for _, Values in list(struct.items()):
                 for Key, value in Values.items():
                     if value[1] and value[1].endswith('.dsc'):
                         HasDscOverride = True
                         break
                 if HasDscOverride == True:
@@ -1423,11 +1423,11 @@ class PcdReport(object):
                             VPDPcdList.append(VPDPcdItem)
                     if IsStructure:
                         FiledOverrideFlag = False
                         OverrideValues = Pcd.SkuOverrideValues[Sku]
                         if OverrideValues:
-                            Keys = OverrideValues.keys()
+                            Keys = list(OverrideValues.keys())
                             OverrideFieldStruct = self.OverrideFieldValue(Pcd, OverrideValues[Keys[0]])
                             self.PrintStructureInfo(File, OverrideFieldStruct)
                             FiledOverrideFlag = True
                         if not FiledOverrideFlag and (Pcd.PcdFieldValueFromComm or Pcd.PcdFieldValueFromFdf):
                             OverrideFieldStruct = self.OverrideFieldValue(Pcd, {})
diff --git a/BaseTools/Source/Python/build/build.py b/BaseTools/Source/Python/build/build.py
index c2b22cca70..43fc3c8077 100644
--- a/BaseTools/Source/Python/build/build.py
+++ b/BaseTools/Source/Python/build/build.py
@@ -441,11 +441,11 @@ class BuildTask:
                 EdkLogger.debug(EdkLogger.DEBUG_8, "Pending Queue (%d), Ready Queue (%d)"
                                 % (len(BuildTask._PendingQueue), len(BuildTask._ReadyQueue)))
 
                 # get all pending tasks
                 BuildTask._PendingQueueLock.acquire()
-                BuildObjectList = BuildTask._PendingQueue.keys()
+                BuildObjectList = list(BuildTask._PendingQueue.keys())
                 #
                 # check if their dependency is resolved, and if true, move them
                 # into ready queue
                 #
                 for BuildObject in BuildObjectList:
-- 
2.20.1.windows.1
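The hunks above all apply the same Python 3 fix: `map()` now returns a lazy iterator, and `dict.keys()`/`dict.values()` return view objects, so any call site that indexes the result or relies on it being a list must wrap it in `list()`. A minimal sketch of the difference (illustrative only, not taken from the patch):

```python
# Python 3: map() is lazy and dict views are not subscriptable,
# so results must be materialized with list() before indexing.
d = {"a": 1}
# d.values()[0] would raise TypeError in Python 3
first = list(d.values())[0]
assert first == 1

squares = map(lambda x: x * x, [1, 2, 3])
assert list(squares) == [1, 4, 9]  # consume the iterator exactly once
```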




* [Patch 31/33] BaseTools: Handle the bytes and str difference
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (29 preceding siblings ...)
  2019-01-29  2:06 ` [Patch 30/33] BaseTools: the list and iterator translation Feng, Bob C
@ 2019-01-29  2:06 ` Feng, Bob C
  2019-01-29  2:06 ` [Patch 32/33] BaseTools: ECC tool Python3 adaption Feng, Bob C
                   ` (2 subsequent siblings)
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:06 UTC (permalink / raw)
  To: edk2-devel; +Cc: Bob Feng, Liming Gao, Yonghong Zhu, Zhiju . Fan

Handle the differences between bytes and str, remove uses of
unicode(), and correct open() file-mode parameters.
Use utcfromtimestamp instead of fromtimestamp.
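The first of these changes drives most of the hunks below: Python 3 strictly separates text (`str`) from binary data (`bytes`), and `hashlib`'s `update()` accepts only bytes. That is why the patch opens metafiles with `'rb'` and calls `.encode('utf-8')` on cached hash strings. A short sketch of the rule (illustrative, using a hypothetical temp file rather than any BaseTools path):

```python
import hashlib
import os
import tempfile

# Python 3: hashlib.update() accepts only bytes, hence the patch's switch
# to open(..., 'rb') and the .encode('utf-8') calls before hashing.
fd, path = tempfile.mkstemp()          # hypothetical stand-in for a metafile
os.write(fd, b"hello")
os.close(fd)

with open(path, "rb") as f:            # binary mode: read() returns bytes
    content = f.read()

m = hashlib.md5()
m.update(content)                      # OK: bytes
m.update("platform-hash".encode("utf-8"))  # str must be encoded first
assert isinstance(m.hexdigest(), str) and len(m.hexdigest()) == 32

os.remove(path)
```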

Cc: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
Cc: Yonghong Zhu <yonghong.zhu@intel.com>
Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Zhiju.Fan <zhijux.fan@intel.com>
---
 BaseTools/Source/Python/AutoGen/AutoGen.py                             | 42 ++++++++++++++++++++----------------------
 BaseTools/Source/Python/AutoGen/GenC.py                                |  6 +++---
 BaseTools/Source/Python/AutoGen/GenMake.py                             | 14 +++++++++-----
 BaseTools/Source/Python/AutoGen/GenPcdDb.py                            | 29 +++++++++++++++--------------
 BaseTools/Source/Python/AutoGen/GenVar.py                              | 34 ++++++++++++++++++----------------
 BaseTools/Source/Python/AutoGen/InfSectionParser.py                    |  2 +-
 BaseTools/Source/Python/AutoGen/StrGather.py                           |  5 ++++-
 BaseTools/Source/Python/AutoGen/UniClassObject.py                      |  4 ++--
 BaseTools/Source/Python/AutoGen/ValidCheckingInfoObject.py             |  2 +-
 BaseTools/Source/Python/BPDG/GenVpd.py                                 |  8 ++++----
 BaseTools/Source/Python/Common/LongFilePathOs.py                       |  3 +--
 BaseTools/Source/Python/Common/LongFilePathSupport.py                  | 12 ------------
 BaseTools/Source/Python/Common/Misc.py                                 | 48 ++++++++++++++++++++++++++++++++----------------
 BaseTools/Source/Python/Common/StringUtils.py                          | 10 ++--------
 BaseTools/Source/Python/Common/VpdInfoFile.py                          | 10 +++++-----
 BaseTools/Source/Python/GenFds/AprioriSection.py                       |  2 +-
 BaseTools/Source/Python/GenFds/Capsule.py                              | 15 +++++++--------
 BaseTools/Source/Python/GenFds/CapsuleData.py                          |  2 +-
 BaseTools/Source/Python/GenFds/Fd.py                                   |  4 ++--
 BaseTools/Source/Python/GenFds/FdfParser.py                            |  4 ++--
 BaseTools/Source/Python/GenFds/FfsFileStatement.py                     | 16 ++++++++--------
 BaseTools/Source/Python/GenFds/FfsInfStatement.py                      | 12 +++++-------
 BaseTools/Source/Python/GenFds/Fv.py                                   | 48 ++++++++++++++++++++++++------------------------
 BaseTools/Source/Python/GenFds/FvImageSection.py                       |  2 +-
 BaseTools/Source/Python/GenFds/GenFds.py                               | 22 +++++++++++-----------
 BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py                 |  4 ++--
 BaseTools/Source/Python/GenFds/Region.py                               |  6 +++---
 BaseTools/Source/Python/Pkcs7Sign/Pkcs7Sign.py                         |  2 +-
 BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256GenerateKeys.py | 14 +++++++-------
 BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py         |  7 ++++---
 BaseTools/Source/Python/Trim/Trim.py                                   | 18 ++++++++----------
 BaseTools/Source/Python/UPT/Library/StringUtils.py                     |  4 +---
 BaseTools/Source/Python/Workspace/BuildClassObject.py                  |  4 ++--
 BaseTools/Source/Python/Workspace/DscBuildData.py                      | 13 ++++++++++---
 BaseTools/Source/Python/Workspace/MetaFileParser.py                    |  4 ++--
 BaseTools/Source/Python/build/BuildReport.py                           | 15 +++++++--------
 BaseTools/Source/Python/build/build.py                                 | 44 +++++++++++++++++++++-----------------------
 37 files changed, 247 insertions(+), 244 deletions(-)
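Several of the GenPcdDb.py hunks below replace `Buffer = ''` with `Buffer = bytearray()`: in Python 3 `struct.pack()` returns `bytes`, which can no longer be concatenated to a `str`, so the database buffers must be built as byte sequences. A minimal sketch, using the same `=LHH` record format as the `DbExMapTblItemList` hunk:

```python
import struct

# struct.pack() returns bytes in Python 3; appending it to a str raises
# TypeError, so the PCD database buffers are built as mutable bytearrays.
Buffer = bytearray()
Buffer += struct.pack("=LHH", 1, 2, 3)
Buffer += struct.pack("=LHH", 4, 5, 6)
assert len(Buffer) == 16                 # two 8-byte records, no padding ('=')
assert isinstance(bytes(Buffer), bytes)  # convertible to immutable bytes
```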

diff --git a/BaseTools/Source/Python/AutoGen/AutoGen.py b/BaseTools/Source/Python/AutoGen/AutoGen.py
index baa1842667..a95d2c710e 100644
--- a/BaseTools/Source/Python/AutoGen/AutoGen.py
+++ b/BaseTools/Source/Python/AutoGen/AutoGen.py
@@ -724,15 +724,15 @@ class WorkspaceAutoGen(AutoGen):
         if GlobalData.gUseHashCache:
             m = hashlib.md5()
             for files in AllWorkSpaceMetaFiles:
                 if files.endswith('.dec'):
                     continue
-                f = open(files, 'r')
+                f = open(files, 'rb')
                 Content = f.read()
                 f.close()
                 m.update(Content)
-            SaveFileOnChange(os.path.join(self.BuildDir, 'AutoGen.hash'), m.hexdigest(), True)
+            SaveFileOnChange(os.path.join(self.BuildDir, 'AutoGen.hash'), m.hexdigest(), False)
             GlobalData.gPlatformHash = m.hexdigest()
 
         #
         # Write metafile list to build directory
         #
@@ -753,25 +753,25 @@ class WorkspaceAutoGen(AutoGen):
         PkgDir = os.path.join(self.BuildDir, Pkg.Arch, Pkg.PackageName)
         CreateDirectory(PkgDir)
         HashFile = os.path.join(PkgDir, Pkg.PackageName + '.hash')
         m = hashlib.md5()
         # Get .dec file's hash value
-        f = open(Pkg.MetaFile.Path, 'r')
+        f = open(Pkg.MetaFile.Path, 'rb')
         Content = f.read()
         f.close()
         m.update(Content)
         # Get include files hash value
         if Pkg.Includes:
             for inc in sorted(Pkg.Includes, key=lambda x: str(x)):
                 for Root, Dirs, Files in os.walk(str(inc)):
                     for File in sorted(Files):
                         File_Path = os.path.join(Root, File)
-                        f = open(File_Path, 'r')
+                        f = open(File_Path, 'rb')
                         Content = f.read()
                         f.close()
                         m.update(Content)
-        SaveFileOnChange(HashFile, m.hexdigest(), True)
+        SaveFileOnChange(HashFile, m.hexdigest(), False)
         GlobalData.gPackageHash[Pkg.Arch][Pkg.PackageName] = m.hexdigest()
 
     def _GetMetaFiles(self, Target, Toolchain, Arch):
         AllWorkSpaceMetaFiles = set()
         #
@@ -1734,11 +1734,11 @@ class PlatformAutoGen(AutoGen):
         self._DynamicPcdList.extend(list(OtherPcdArray))
         allskuset = [(SkuName, Sku.SkuId) for pcd in self._DynamicPcdList for (SkuName, Sku) in pcd.SkuInfoList.items()]
         for pcd in self._DynamicPcdList:
             if len(pcd.SkuInfoList) == 1:
                 for (SkuName, SkuId) in allskuset:
-                    if type(SkuId) in (str, unicode) and eval(SkuId) == 0 or SkuId == 0:
+                    if isinstance(SkuId, str) and eval(SkuId) == 0 or SkuId == 0:
                         continue
                     pcd.SkuInfoList[SkuName] = copy.deepcopy(pcd.SkuInfoList[TAB_DEFAULT])
                     pcd.SkuInfoList[SkuName].SkuId = SkuId
                     pcd.SkuInfoList[SkuName].SkuIdName = SkuName
 
@@ -1904,11 +1904,11 @@ class PlatformAutoGen(AutoGen):
                             MakeFlags = Value
                     else:
                         ToolsDef += "%s_%s = %s\n" % (Tool, Attr, Value)
             ToolsDef += "\n"
 
-        SaveFileOnChange(self.ToolDefinitionFile, ToolsDef)
+        SaveFileOnChange(self.ToolDefinitionFile, ToolsDef, False)
         for DllPath in DllPathList:
             os.environ["PATH"] = DllPath + os.pathsep + os.environ["PATH"]
         os.environ["MAKE_FLAGS"] = MakeFlags
 
         return RetVal
@@ -3301,22 +3301,22 @@ class ModuleAutoGen(AutoGen):
             self._ApplyBuildRule(AutoFile, TAB_UNKNOWN_FILE)
         if str(StringH) != "":
             AutoFile = PathClass(gAutoGenStringFileName % {"module_name":self.Name}, self.DebugDir)
             RetVal[AutoFile] = str(StringH)
             self._ApplyBuildRule(AutoFile, TAB_UNKNOWN_FILE)
-        if UniStringBinBuffer is not None and UniStringBinBuffer.getvalue() != "":
+        if UniStringBinBuffer is not None and UniStringBinBuffer.getvalue() != b"":
             AutoFile = PathClass(gAutoGenStringFormFileName % {"module_name":self.Name}, self.OutputDir)
             RetVal[AutoFile] = UniStringBinBuffer.getvalue()
             AutoFile.IsBinary = True
             self._ApplyBuildRule(AutoFile, TAB_UNKNOWN_FILE)
         if UniStringBinBuffer is not None:
             UniStringBinBuffer.close()
         if str(StringIdf) != "":
             AutoFile = PathClass(gAutoGenImageDefFileName % {"module_name":self.Name}, self.DebugDir)
             RetVal[AutoFile] = str(StringIdf)
             self._ApplyBuildRule(AutoFile, TAB_UNKNOWN_FILE)
-        if IdfGenBinBuffer is not None and IdfGenBinBuffer.getvalue() != "":
+        if IdfGenBinBuffer is not None and IdfGenBinBuffer.getvalue() != b"":
             AutoFile = PathClass(gAutoGenIdfFileName % {"module_name":self.Name}, self.OutputDir)
             RetVal[AutoFile] = IdfGenBinBuffer.getvalue()
             AutoFile.IsBinary = True
             self._ApplyBuildRule(AutoFile, TAB_UNKNOWN_FILE)
         if IdfGenBinBuffer is not None:
@@ -3530,33 +3530,31 @@ class ModuleAutoGen(AutoGen):
             fInputfile = open(UniVfrOffsetFileName, "wb+", 0)
         except:
             EdkLogger.error("build", FILE_OPEN_FAILURE, "File open failed for %s" % UniVfrOffsetFileName, None)
 
         # Use a instance of BytesIO to cache data
-        fStringIO = BytesIO('')
+        fStringIO = BytesIO()
 
         for Item in VfrUniOffsetList:
             if (Item[0].find("Strings") != -1):
                 #
                 # UNI offset in image.
                 # GUID + Offset
                 # { 0x8913c5e0, 0x33f6, 0x4d86, { 0x9b, 0xf1, 0x43, 0xef, 0x89, 0xfc, 0x6, 0x66 } }
                 #
-                UniGuid = [0xe0, 0xc5, 0x13, 0x89, 0xf6, 0x33, 0x86, 0x4d, 0x9b, 0xf1, 0x43, 0xef, 0x89, 0xfc, 0x6, 0x66]
-                UniGuid = [chr(ItemGuid) for ItemGuid in UniGuid]
-                fStringIO.write(''.join(UniGuid))
+                UniGuid = b'\xe0\xc5\x13\x89\xf63\x86M\x9b\xf1C\xef\x89\xfc\x06f'
+                fStringIO.write(UniGuid)
                 UniValue = pack ('Q', int (Item[1], 16))
                 fStringIO.write (UniValue)
             else:
                 #
                 # VFR binary offset in image.
                 # GUID + Offset
                 # { 0xd0bc7cb4, 0x6a47, 0x495f, { 0xaa, 0x11, 0x71, 0x7, 0x46, 0xda, 0x6, 0xa2 } };
                 #
-                VfrGuid = [0xb4, 0x7c, 0xbc, 0xd0, 0x47, 0x6a, 0x5f, 0x49, 0xaa, 0x11, 0x71, 0x7, 0x46, 0xda, 0x6, 0xa2]
-                VfrGuid = [chr(ItemGuid) for ItemGuid in VfrGuid]
-                fStringIO.write(''.join(VfrGuid))
+                VfrGuid = b'\xb4|\xbc\xd0Gj_I\xaa\x11q\x07F\xda\x06\xa2'
+                fStringIO.write(VfrGuid)
                 VfrValue = pack ('Q', int (Item[1], 16))
                 fStringIO.write (VfrValue)
         #
         # write data into file.
         #
@@ -4093,44 +4091,44 @@ class ModuleAutoGen(AutoGen):
     def GenModuleHash(self):
         if self.Arch not in GlobalData.gModuleHash:
             GlobalData.gModuleHash[self.Arch] = {}
         m = hashlib.md5()
         # Add Platform level hash
-        m.update(GlobalData.gPlatformHash)
+        m.update(GlobalData.gPlatformHash.encode('utf-8'))
         # Add Package level hash
         if self.DependentPackageList:
             for Pkg in sorted(self.DependentPackageList, key=lambda x: x.PackageName):
                 if Pkg.PackageName in GlobalData.gPackageHash[self.Arch]:
-                    m.update(GlobalData.gPackageHash[self.Arch][Pkg.PackageName])
+                    m.update(GlobalData.gPackageHash[self.Arch][Pkg.PackageName].encode('utf-8'))
 
         # Add Library hash
         if self.LibraryAutoGenList:
             for Lib in sorted(self.LibraryAutoGenList, key=lambda x: x.Name):
                 if Lib.Name not in GlobalData.gModuleHash[self.Arch]:
                     Lib.GenModuleHash()
-                m.update(GlobalData.gModuleHash[self.Arch][Lib.Name])
+                m.update(GlobalData.gModuleHash[self.Arch][Lib.Name].encode('utf-8'))
 
         # Add Module self
-        f = open(str(self.MetaFile), 'r')
+        f = open(str(self.MetaFile), 'rb')
         Content = f.read()
         f.close()
         m.update(Content)
         # Add Module's source files
         if self.SourceFileList:
             for File in sorted(self.SourceFileList, key=lambda x: str(x)):
-                f = open(str(File), 'r')
+                f = open(str(File), 'rb')
                 Content = f.read()
                 f.close()
                 m.update(Content)
 
         ModuleHashFile = path.join(self.BuildDir, self.Name + ".hash")
         if self.Name not in GlobalData.gModuleHash[self.Arch]:
             GlobalData.gModuleHash[self.Arch][self.Name] = m.hexdigest()
         if GlobalData.gBinCacheSource:
             if self.AttemptModuleCacheCopy():
                 return False
-        return SaveFileOnChange(ModuleHashFile, m.hexdigest(), True)
+        return SaveFileOnChange(ModuleHashFile, m.hexdigest(), False)
 
     ## Decide whether we can skip the ModuleAutoGen process
     def CanSkipbyHash(self):
         if GlobalData.gUseHashCache:
             return not self.GenModuleHash()
diff --git a/BaseTools/Source/Python/AutoGen/GenC.py b/BaseTools/Source/Python/AutoGen/GenC.py
index 700c94b3a7..9700bf8527 100644
--- a/BaseTools/Source/Python/AutoGen/GenC.py
+++ b/BaseTools/Source/Python/AutoGen/GenC.py
@@ -1780,11 +1780,11 @@ def CreateIdfFileCode(Info, AutoGenC, StringH, IdfGenCFlag, IdfGenBinBuffer):
                                 TempBuffer = pack('B', EFI_HII_IIBT_IMAGE_PNG)
                                 TempBuffer += pack('I', len(Buffer))
                                 TempBuffer += Buffer
                             elif File.Ext.upper() == '.JPG':
                                 ImageType, = struct.unpack('4s', Buffer[6:10])
-                                if ImageType != 'JFIF':
+                                if ImageType != b'JFIF':
                                     EdkLogger.error("build", FILE_TYPE_MISMATCH, "The file %s is not a standard JPG file." % File.Path)
                                 TempBuffer = pack('B', EFI_HII_IIBT_IMAGE_JPEG)
                                 TempBuffer += pack('I', len(Buffer))
                                 TempBuffer += Buffer
                             elif File.Ext.upper() == '.BMP':
@@ -1880,11 +1880,11 @@ def CreateIdfFileCode(Info, AutoGenC, StringH, IdfGenCFlag, IdfGenBinBuffer):
 #   UINT8    BlockBody[];
 # } EFI_HII_IMAGE_BLOCK;
 
 def BmpImageDecoder(File, Buffer, PaletteIndex, TransParent):
     ImageType, = struct.unpack('2s', Buffer[0:2])
-    if ImageType!= 'BM': # BMP file type is 'BM'
+    if ImageType!= b'BM': # BMP file type is 'BM'
         EdkLogger.error("build", FILE_TYPE_MISMATCH, "The file %s is not a standard BMP file." % File.Path)
     BMP_IMAGE_HEADER = collections.namedtuple('BMP_IMAGE_HEADER', ['bfSize', 'bfReserved1', 'bfReserved2', 'bfOffBits', 'biSize', 'biWidth', 'biHeight', 'biPlanes', 'biBitCount', 'biCompression', 'biSizeImage', 'biXPelsPerMeter', 'biYPelsPerMeter', 'biClrUsed', 'biClrImportant'])
     BMP_IMAGE_HEADER_STRUCT = struct.Struct('IHHIIIIHHIIIIII')
     BmpHeader = BMP_IMAGE_HEADER._make(BMP_IMAGE_HEADER_STRUCT.unpack_from(Buffer[2:]))
     #
@@ -1952,11 +1952,11 @@ def BmpImageDecoder(File, Buffer, PaletteIndex, TransParent):
     if PaletteBuffer and len(PaletteBuffer) > 1:
         PaletteTemp = pack('x')
         for Index in range(0, len(PaletteBuffer)):
             if Index % 4 == 3:
                 continue
-            PaletteTemp += PaletteBuffer[Index]
+            PaletteTemp += PaletteBuffer[Index:Index+1]
         PaletteBuffer = PaletteTemp[1:]
     return ImageBuffer, PaletteBuffer
 
 ## Create common code
 #
diff --git a/BaseTools/Source/Python/AutoGen/GenMake.py b/BaseTools/Source/Python/AutoGen/GenMake.py
index c42053eb4c..dc4cd688f4 100644
--- a/BaseTools/Source/Python/AutoGen/GenMake.py
+++ b/BaseTools/Source/Python/AutoGen/GenMake.py
@@ -1036,21 +1036,25 @@ cleanlib:
             CurrentFileDependencyList = []
             if F in DepDb:
                 CurrentFileDependencyList = DepDb[F]
             else:
                 try:
-                    Fd = open(F.Path, 'r')
+                    Fd = open(F.Path, 'rb')
+                    FileContent = Fd.read()
+                    Fd.close()
                 except BaseException as X:
                     EdkLogger.error("build", FILE_OPEN_FAILURE, ExtraData=F.Path + "\n\t" + str(X))
-
-                FileContent = Fd.read()
-                Fd.close()
                 if len(FileContent) == 0:
                     continue
 
                 if FileContent[0] == 0xff or FileContent[0] == 0xfe:
-                    FileContent = unicode(FileContent, "utf-16")
+                    FileContent = FileContent.decode('utf-16')
+                else:
+                    try:
+                        FileContent = str(FileContent)
+                    except:
+                        pass
                 IncludedFileList = gIncludePattern.findall(FileContent)
 
                 for Inc in IncludedFileList:
                     Inc = Inc.strip()
                     # if there's macro used to reference header file, expand it
diff --git a/BaseTools/Source/Python/AutoGen/GenPcdDb.py b/BaseTools/Source/Python/AutoGen/GenPcdDb.py
index 2cb1745823..cbf7a39dd5 100644
--- a/BaseTools/Source/Python/AutoGen/GenPcdDb.py
+++ b/BaseTools/Source/Python/AutoGen/GenPcdDb.py
@@ -293,11 +293,11 @@ class DbItemList:
             GuidString = GuidStructureStringToGuidString(GuidStructureValue)
             return PackGUID(GuidString.split('-'))
 
         PackStr = PACK_CODE_BY_SIZE[self.ItemSize]
 
-        Buffer = ''
+        Buffer = bytearray()
         for Datas in self.RawDataList:
             if type(Datas) in (list, tuple):
                 for Data in Datas:
                     if PackStr:
                         Buffer += pack(PackStr, GetIntegerValue(Data))
@@ -318,11 +318,11 @@ class DbItemList:
 class DbExMapTblItemList (DbItemList):
     def __init__(self, ItemSize, DataList=None, RawDataList=None):
         DbItemList.__init__(self, ItemSize, DataList, RawDataList)
 
     def PackData(self):
-        Buffer = ''
+        Buffer = bytearray()
         PackStr = "=LHH"
         for Datas in self.RawDataList:
             Buffer += pack(PackStr,
                            GetIntegerValue(Datas[0]),
                            GetIntegerValue(Datas[1]),
@@ -367,11 +367,11 @@ class DbComItemList (DbItemList):
         return self.ListSize
 
     def PackData(self):
         PackStr = PACK_CODE_BY_SIZE[self.ItemSize]
 
-        Buffer = ''
+        Buffer = bytearray()
         for DataList in self.RawDataList:
             for Data in DataList:
                 if type(Data) in (list, tuple):
                     for SingleData in Data:
                         Buffer += pack(PackStr, GetIntegerValue(SingleData))
@@ -388,11 +388,11 @@ class DbVariableTableItemList (DbComItemList):
     def __init__(self, ItemSize, DataList=None, RawDataList=None):
         DbComItemList.__init__(self, ItemSize, DataList, RawDataList)
 
     def PackData(self):
         PackStr = "=LLHHLHH"
-        Buffer = ''
+        Buffer = bytearray()
         for DataList in self.RawDataList:
             for Data in DataList:
                 Buffer += pack(PackStr,
                                GetIntegerValue(Data[0]),
                                GetIntegerValue(Data[1]),
@@ -449,11 +449,11 @@ class DbSkuHeadTableItemList (DbItemList):
     def __init__(self, ItemSize, DataList=None, RawDataList=None):
         DbItemList.__init__(self, ItemSize, DataList, RawDataList)
 
     def PackData(self):
         PackStr = "=LL"
-        Buffer = ''
+        Buffer = bytearray()
         for Data in self.RawDataList:
             Buffer += pack(PackStr,
                            GetIntegerValue(Data[0]),
                            GetIntegerValue(Data[1]))
         return Buffer
@@ -471,11 +471,11 @@ class DbSizeTableItemList (DbItemList):
         for Data in self.RawDataList:
             length += (1 + len(Data[1]))
         return length * self.ItemSize
     def PackData(self):
         PackStr = "=H"
-        Buffer = ''
+        Buffer = bytearray()
         for Data in self.RawDataList:
             Buffer += pack(PackStr,
                            GetIntegerValue(Data[0]))
             for subData in Data[1]:
                 Buffer += pack(PackStr,
@@ -851,12 +851,13 @@ def BuildExDataBase(Dict):
     Buffer += b
 
     Index = 0
     for Item in DbItemTotal:
         Index +=1
-        b = Item.PackData()
-        Buffer += b
+        packdata = Item.PackData()
+        for i in range(len(packdata)):
+            Buffer += packdata[i:i + 1]
         if Index == InitTableNum:
             if len(Buffer) % 8:
                 for num in range(8 - len(Buffer) % 8):
                     b = pack('=B', Pad)
                     Buffer += b
@@ -919,13 +920,13 @@ def CreatePcdDataBase(PcdDBData):
             databasebuff = databasebuff[:-1] + pack("=B", item[1])
     totallen = len(databasebuff)
     totallenbuff = pack("=L", totallen)
     newbuffer = databasebuff[:32]
     for i in range(4):
-        newbuffer += totallenbuff[i]
+        newbuffer += totallenbuff[i:i+1]
     for i in range(36, totallen):
-        newbuffer += databasebuff[i]
+        newbuffer += databasebuff[i:i+1]
 
     return newbuffer
 
 def CreateVarCheckBin(VarCheckTab):
     return VarCheckTab[(TAB_DEFAULT, "0")]
@@ -963,12 +964,12 @@ def NewCreatePcdDatabasePhaseSpecificAutoGen(Platform, Phase):
     VarCheckTableData = {}
     if DynamicPcdSet_Sku:
         for skuname, skuid in DynamicPcdSet_Sku:
             AdditionalAutoGenH, AdditionalAutoGenC, PcdDbBuffer, VarCheckTab = CreatePcdDatabasePhaseSpecificAutoGen (Platform, DynamicPcdSet_Sku[(skuname, skuid)], Phase)
             final_data = ()
-            for item in PcdDbBuffer:
-                final_data += unpack("B", item)
+            for item in range(len(PcdDbBuffer)):
+                final_data += unpack("B", PcdDbBuffer[item:item+1])
             PcdDBData[(skuname, skuid)] = (PcdDbBuffer, final_data)
             PcdDriverAutoGenData[(skuname, skuid)] = (AdditionalAutoGenH, AdditionalAutoGenC)
             VarCheckTableData[(skuname, skuid)] = VarCheckTab
         if Platform.Platform.VarCheckFlag:
             dest = os.path.join(Platform.BuildDir, TAB_FV_DIRECTORY)
@@ -976,12 +977,12 @@ def NewCreatePcdDatabasePhaseSpecificAutoGen(Platform, Phase):
             VarCheckTable.dump(dest, Phase)
         AdditionalAutoGenH, AdditionalAutoGenC =  CreateAutoGen(PcdDriverAutoGenData)
     else:
         AdditionalAutoGenH, AdditionalAutoGenC, PcdDbBuffer, VarCheckTab = CreatePcdDatabasePhaseSpecificAutoGen (Platform, {}, Phase)
         final_data = ()
-        for item in PcdDbBuffer:
-            final_data += unpack("B", item)
+        for item in range(len(PcdDbBuffer)):
+            final_data += unpack("B", PcdDbBuffer[item:item + 1])
         PcdDBData[(TAB_DEFAULT, "0")] = (PcdDbBuffer, final_data)
 
     return AdditionalAutoGenH, AdditionalAutoGenC, CreatePcdDataBase(PcdDBData)
 ## Create PCD database in DXE or PEI phase
 #
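The byte-wise `unpack` loops introduced above exist because indexing a `bytes` object in Python 3 returns an `int`, whereas slicing returns a length-1 `bytes` object that `unpack("B", ...)` accepts on both Python 2 and 3. A minimal sketch of the idiom:

```python
from struct import pack, unpack

def to_byte_tuple(buf):
    # buf[i] would be an int in Py3 (a str in Py2); buf[i:i+1] is a
    # one-byte bytes object in both, so unpack("B", ...) is portable.
    final_data = ()
    for i in range(len(buf)):
        final_data += unpack("B", buf[i:i + 1])
    return final_data

packed = pack("=HH", 0x1234, 0x5678)
assert to_byte_tuple(packed) == (0x34, 0x12, 0x78, 0x56)
```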
diff --git a/BaseTools/Source/Python/AutoGen/GenVar.py b/BaseTools/Source/Python/AutoGen/GenVar.py
index 98f88e2497..453af66022 100644
--- a/BaseTools/Source/Python/AutoGen/GenVar.py
+++ b/BaseTools/Source/Python/AutoGen/GenVar.py
@@ -71,24 +71,26 @@ class VariableMgr(object):
             firstdata_type = sku_var_info_offset_list[0].data_type
             if firstdata_type in DataType.TAB_PCD_NUMERIC_TYPES:
                 fisrtdata_flag = DataType.PACK_CODE_BY_SIZE[MAX_SIZE_TYPE[firstdata_type]]
                 fisrtdata = fisrtvalue_list[0]
                 fisrtvalue_list = []
-                for data_byte in pack(fisrtdata_flag, int(fisrtdata, 16) if fisrtdata.upper().startswith('0X') else int(fisrtdata)):
-                    fisrtvalue_list.append(hex(unpack("B", data_byte)[0]))
+                pack_data = pack(fisrtdata_flag, int(fisrtdata, 0))
+                for data_byte in range(len(pack_data)):
+                    fisrtvalue_list.append(hex(unpack("B", pack_data[data_byte:data_byte + 1])[0]))
             newvalue_list = ["0x00"] * FirstOffset + fisrtvalue_list
 
             for var_item in sku_var_info_offset_list[1:]:
                 CurOffset = int(var_item.var_offset, 16) if var_item.var_offset.upper().startswith("0X") else int(var_item.var_offset)
                 CurvalueList = var_item.default_value.strip("{").strip("}").split(",")
                 Curdata_type = var_item.data_type
                 if Curdata_type in DataType.TAB_PCD_NUMERIC_TYPES:
                     data_flag = DataType.PACK_CODE_BY_SIZE[MAX_SIZE_TYPE[Curdata_type]]
                     data = CurvalueList[0]
                     CurvalueList = []
-                    for data_byte in pack(data_flag, int(data, 16) if data.upper().startswith('0X') else int(data)):
-                        CurvalueList.append(hex(unpack("B", data_byte)[0]))
+                    pack_data = pack(data_flag, int(data, 0))
+                    for data_byte in range(len(pack_data)):
+                        CurvalueList.append(hex(unpack("B", pack_data[data_byte:data_byte + 1])[0]))
                 if CurOffset > len(newvalue_list):
                     newvalue_list = newvalue_list + ["0x00"] * (CurOffset - len(newvalue_list)) + CurvalueList
                 else:
                     newvalue_list[CurOffset : CurOffset + len(CurvalueList)] = CurvalueList
 
@@ -121,12 +123,12 @@ class VariableMgr(object):
                     tail = ",".join("0x00" for i in range(var_max_len-len(default_sku_default.default_value.split(","))))
 
             default_data_buffer = VariableMgr.PACK_VARIABLES_DATA(default_sku_default.default_value, default_sku_default.data_type, tail)
 
             default_data_array = ()
-            for item in default_data_buffer:
-                default_data_array += unpack("B", item)
+            for item in range(len(default_data_buffer)):
+                default_data_array += unpack("B", default_data_buffer[item:item + 1])
 
             var_data[(DataType.TAB_DEFAULT, DataType.TAB_DEFAULT_STORES_DEFAULT)][index] = (default_data_buffer, sku_var_info[(DataType.TAB_DEFAULT, DataType.TAB_DEFAULT_STORES_DEFAULT)])
 
             for (skuid, defaultstoragename) in indexedvarinfo[index]:
                 tail = None
@@ -139,12 +141,12 @@ class VariableMgr(object):
                         tail = ",".join("0x00" for i in range(var_max_len-len(other_sku_other.default_value.split(","))))
 
                 others_data_buffer = VariableMgr.PACK_VARIABLES_DATA(other_sku_other.default_value, other_sku_other.data_type, tail)
 
                 others_data_array = ()
-                for item in others_data_buffer:
-                    others_data_array += unpack("B", item)
+                for item in range(len(others_data_buffer)):
+                    others_data_array += unpack("B", others_data_buffer[item:item + 1])
 
                 data_delta = VariableMgr.calculate_delta(default_data_array, others_data_array)
 
                 var_data[(skuid, defaultstoragename)][index] = (data_delta, sku_var_info[(skuid, defaultstoragename)])
         return var_data
@@ -156,11 +158,11 @@ class VariableMgr(object):
 
         if not var_data:
             return []
 
         pcds_default_data = var_data.get((DataType.TAB_DEFAULT, DataType.TAB_DEFAULT_STORES_DEFAULT), {})
-        NvStoreDataBuffer = ""
+        NvStoreDataBuffer = bytearray()
         var_data_offset = collections.OrderedDict()
         offset = NvStorageHeaderSize
         for default_data, default_info in pcds_default_data.values():
             var_name_buffer = VariableMgr.PACK_VARIABLE_NAME(default_info.var_name)
 
@@ -183,11 +185,11 @@ class VariableMgr(object):
 
         variable_storage_header_buffer = VariableMgr.PACK_VARIABLE_STORE_HEADER(len(NvStoreDataBuffer) + 28)
 
         nv_default_part = VariableMgr.AlignData(VariableMgr.PACK_DEFAULT_DATA(0, 0, VariableMgr.unpack_data(variable_storage_header_buffer+NvStoreDataBuffer)), 8)
 
-        data_delta_structure_buffer = ""
+        data_delta_structure_buffer = bytearray()
         for skuname, defaultstore in var_data:
             if (skuname, defaultstore) == (DataType.TAB_DEFAULT, DataType.TAB_DEFAULT_STORES_DEFAULT):
                 continue
             pcds_sku_data = var_data[(skuname, defaultstore)]
             delta_data_set = []
@@ -214,12 +216,12 @@ class VariableMgr(object):
         return  [hex(item) for item in VariableMgr.unpack_data(data)]
 
     @staticmethod
     def unpack_data(data):
         final_data = ()
-        for item in data:
-            final_data += unpack("B", item)
+        for item in range(len(data)):
+            final_data += unpack("B", data[item:item + 1])
         return final_data
 
     @staticmethod
     def calculate_delta(default, theother):
         if len(default) - len(theother) != 0:
@@ -283,11 +285,11 @@ class VariableMgr(object):
 
         return Buffer
 
     @staticmethod
     def PACK_VARIABLES_DATA(var_value,data_type, tail = None):
-        Buffer = ""
+        Buffer = bytearray()
         data_len = 0
         if data_type == DataType.TAB_VOID:
             for value_char in var_value.strip("{").strip("}").split(","):
                 Buffer += pack("=B", int(value_char, 16))
             data_len += len(var_value.split(","))
@@ -313,11 +315,11 @@ class VariableMgr(object):
 
         return Buffer
 
     @staticmethod
     def PACK_DEFAULT_DATA(defaultstoragename, skuid, var_value):
-        Buffer = ""
+        Buffer = bytearray()
         Buffer += pack("=L", 4+8+8)
         Buffer += pack("=Q", int(skuid))
         Buffer += pack("=Q", int(defaultstoragename))
 
         for item in var_value:
@@ -338,11 +340,11 @@ class VariableMgr(object):
         return self.DefaultStoreMap.get(dname)[0]
 
     def PACK_DELTA_DATA(self, skuname, defaultstoragename, delta_list):
         skuid = self.GetSkuId(skuname)
         defaultstorageid = self.GetDefaultStoreId(defaultstoragename)
-        Buffer = ""
+        Buffer = bytearray()
         Buffer += pack("=L", 4+8+8)
         Buffer += pack("=Q", int(skuid))
         Buffer += pack("=Q", int(defaultstorageid))
         for (delta_offset, value) in delta_list:
             Buffer += pack("=L", delta_offset)
@@ -361,10 +363,10 @@ class VariableMgr(object):
 
         return mybuffer
 
     @staticmethod
     def PACK_VARIABLE_NAME(var_name):
-        Buffer = ""
+        Buffer = bytearray()
         for name_char in var_name.strip("{").strip("}").split(","):
             Buffer += pack("=B", int(name_char, 16))
 
         return Buffer
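Besides the byte-slicing loops, the GenVar.py hunks above replace the manual hex/decimal dispatch (`int(x, 16) if x.upper().startswith('0X') else int(x)`) with `int(x, 0)`, which infers the base from the literal's prefix. A sketch:

```python
def parse_int(value):
    # Base 0 lets int() infer the radix from the prefix:
    # 0x.. hex, 0o.. octal, 0b.. binary, otherwise decimal.
    return int(value, 0)

assert parse_int("0x10") == 16
assert parse_int("0X10") == 16
assert parse_int("10") == 10
```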
diff --git a/BaseTools/Source/Python/AutoGen/InfSectionParser.py b/BaseTools/Source/Python/AutoGen/InfSectionParser.py
index d985089738..09e9af3fb4 100644
--- a/BaseTools/Source/Python/AutoGen/InfSectionParser.py
+++ b/BaseTools/Source/Python/AutoGen/InfSectionParser.py
@@ -32,11 +32,11 @@ class InfSectionParser():
         FileLastLine = False
         SectionLine = ''
         SectionData = []
 
         try:
-            FileLinesList = open(self._FilePath, "r", 0).readlines()
+            FileLinesList = open(self._FilePath, "r").readlines()
         except BaseException:
             EdkLogger.error("build", AUTOGEN_ERROR, 'File %s is opened failed.' % self._FilePath)
 
         for Index in range(0, len(FileLinesList)):
             line = str(FileLinesList[Index]).strip()
diff --git a/BaseTools/Source/Python/AutoGen/StrGather.py b/BaseTools/Source/Python/AutoGen/StrGather.py
index d87680b2e7..680ec16bd4 100644
--- a/BaseTools/Source/Python/AutoGen/StrGather.py
+++ b/BaseTools/Source/Python/AutoGen/StrGather.py
@@ -121,11 +121,14 @@ def DecToHexList(Dec, Digit = 8):
 # @param Ascii:  The acsii string
 #
 # @retval:       A list for formatted hex string
 #
 def AscToHexList(Ascii):
-    return ['0x{0:02X}'.format(ord(Item)) for Item in Ascii]
+    try:
+        return ['0x{0:02X}'.format(Item) for Item in Ascii]
+    except:
+        return ['0x{0:02X}'.format(ord(Item)) for Item in Ascii]
 
 ## Create content of .h file
 #
 # Create content of .h file
 #
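The `try`/`except` added to `AscToHexList` handles both input types: iterating Python 3 `bytes` yields `int`s that format directly with `02X`, while iterating a `str` yields one-character strings that still need `ord`. A sketch of the same dual-path formatting (function name lowercased for illustration):

```python
def asc_to_hex_list(data):
    # bytes iteration gives ints directly; str iteration gives chars,
    # for which the 'X' format code raises and we fall back to ord().
    try:
        return ['0x{0:02X}'.format(item) for item in data]
    except (TypeError, ValueError):
        return ['0x{0:02X}'.format(ord(item)) for item in data]

assert asc_to_hex_list(b"AB") == ['0x41', '0x42']
assert asc_to_hex_list("AB") == ['0x41', '0x42']
```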
diff --git a/BaseTools/Source/Python/AutoGen/UniClassObject.py b/BaseTools/Source/Python/AutoGen/UniClassObject.py
index 764d95ec66..d162387cc5 100644
--- a/BaseTools/Source/Python/AutoGen/UniClassObject.py
+++ b/BaseTools/Source/Python/AutoGen/UniClassObject.py
@@ -22,11 +22,11 @@ import distutils.util
 import Common.EdkLogger as EdkLogger
 from io import BytesIO
 from Common.BuildToolError import *
 from Common.StringUtils import GetLineNo
 from Common.Misc import PathClass
-from Common.LongFilePathSupport import LongFilePath, UniToStr
+from Common.LongFilePathSupport import LongFilePath
 from Common.GlobalData import *
 ##
 # Static definitions
 #
 UNICODE_WIDE_CHAR = u'\\wide'
@@ -425,11 +425,11 @@ class UniFileClassObject(object):
             while (StartPos != -1):
                 EndPos = Line.find(u'\\', StartPos + 1, StartPos + 7)
                 if EndPos != -1 and EndPos - StartPos == 6 :
                     if g4HexChar.match(Line[StartPos + 2 : EndPos], re.UNICODE):
                         EndStr = Line[EndPos: ]
-                        UniStr = ('\u' + (Line[StartPos + 2 : EndPos])).decode('unicode_escape')
+                        UniStr = Line[StartPos + 2: EndPos]
                         if EndStr.startswith(u'\\x') and len(EndStr) >= 7:
                             if EndStr[6] == u'\\' and g4HexChar.match(EndStr[2 : 6], re.UNICODE):
                                 Line = Line[0 : StartPos] + UniStr + EndStr
                         else:
                             Line = Line[0 : StartPos] + UniStr + EndStr[1:]
diff --git a/BaseTools/Source/Python/AutoGen/ValidCheckingInfoObject.py b/BaseTools/Source/Python/AutoGen/ValidCheckingInfoObject.py
index 6ddf38fd0d..91c2de621f 100644
--- a/BaseTools/Source/Python/AutoGen/ValidCheckingInfoObject.py
+++ b/BaseTools/Source/Python/AutoGen/ValidCheckingInfoObject.py
@@ -39,11 +39,11 @@ class VAR_CHECK_PCD_VARIABLE_TAB_CONTAINER(object):
             return
         if not os.path.exists(dest):
             os.mkdir(dest)
         BinFileName = "PcdVarCheck.bin"
         BinFilePath = os.path.join(dest, BinFileName)
-        Buffer = ''
+        Buffer = bytearray()
         index = 0
         for var_check_tab in self.var_check_info:
             index += 1
             realLength = 0
             realLength += 32
diff --git a/BaseTools/Source/Python/BPDG/GenVpd.py b/BaseTools/Source/Python/BPDG/GenVpd.py
index b4a2dd25a2..09712be386 100644
--- a/BaseTools/Source/Python/BPDG/GenVpd.py
+++ b/BaseTools/Source/Python/BPDG/GenVpd.py
@@ -184,11 +184,11 @@ class PcdEntry:
         # No null-terminator in 'string'
         if (QuotedFlag and len(ValueString) + 1 > Size) or (not QuotedFlag and len(ValueString) > Size):
             EdkLogger.error("BPDG", BuildToolError.RESOURCE_OVERFLOW,
                             "PCD value string %s is exceed to size %d(File: %s Line: %s)" % (ValueString, Size, self.FileName, self.Lineno))
         try:
-            self.PcdValue = pack('%ds' % Size, ValueString)
+            self.PcdValue = pack('%ds' % Size, ValueString.encode('utf-8'))
         except:
             EdkLogger.error("BPDG", BuildToolError.FORMAT_INVALID,
                             "Invalid size or value for PCD %s to pack(File: %s Line: %s)." % (self.PcdCName, self.FileName, self.Lineno))
 
     ## Pack a byte-array PCD value.
@@ -304,11 +304,11 @@ class GenVPD :
         self.VpdFileName             = VpdFileName
         self.FileLinesList           = []
         self.PcdFixedOffsetSizeList  = []
         self.PcdUnknownOffsetList    = []
         try:
-            fInputfile = open(InputFileName, "r", 0)
+            fInputfile = open(InputFileName, "r")
             try:
                 self.FileLinesList = fInputfile.readlines()
             except:
                 EdkLogger.error("BPDG", BuildToolError.FILE_READ_FAILURE, "File read failed for %s" % InputFileName, None)
             finally:
@@ -643,11 +643,11 @@ class GenVPD :
     #
     def GenerateVpdFile (self, MapFileName, BinFileName):
         #Open an VPD file to process
 
         try:
-            fVpdFile = open(BinFileName, "wb", 0)
+            fVpdFile = open(BinFileName, "wb")
         except:
             # Open failed
             EdkLogger.error("BPDG", BuildToolError.FILE_OPEN_FAILURE, "File open failed for %s" % self.VpdFileName, None)
 
         try :
@@ -655,11 +655,11 @@ class GenVPD :
         except:
             # Open failed
             EdkLogger.error("BPDG", BuildToolError.FILE_OPEN_FAILURE, "File open failed for %s" % self.MapFileName, None)
 
         # Use a instance of BytesIO to cache data
-        fStringIO = BytesIO('')
+        fStringIO = BytesIO()
 
         # Write the header of map file.
         try :
             fMapFile.write (st.MAP_FILE_COMMENT_TEMPLATE + "\n")
         except:
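The `.encode('utf-8')` added in the GenVpd.py hunk above is needed because the `struct` `'s'` format code accepts only `bytes` in Python 3 (Python 2 accepted `str` directly). A minimal sketch:

```python
from struct import pack

def pack_string(value, size):
    # Py3 struct's 's' format needs bytes; encode first.
    # pack pads (or truncates) to exactly `size` bytes with NULs.
    return pack('%ds' % size, value.encode('utf-8'))

assert pack_string("VPD", 5) == b"VPD\x00\x00"
```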
diff --git a/BaseTools/Source/Python/Common/LongFilePathOs.py b/BaseTools/Source/Python/Common/LongFilePathOs.py
index 53528546b7..3d6fe9b01c 100644
--- a/BaseTools/Source/Python/Common/LongFilePathOs.py
+++ b/BaseTools/Source/Python/Common/LongFilePathOs.py
@@ -13,11 +13,10 @@
 
 from __future__ import absolute_import
 import os
 from . import LongFilePathOsPath
 from Common.LongFilePathSupport import LongFilePath
-from Common.LongFilePathSupport import UniToStr
 import time
 
 path = LongFilePathOsPath
 
 def access(path, mode):
@@ -62,11 +61,11 @@ def utime(path, times):
 
 def listdir(path):
     List = []
     uList = os.listdir(u"%s" % LongFilePath(path))
     for Item in uList:
-        List.append(UniToStr(Item))
+        List.append(Item)
     return List
 
 environ = os.environ
 getcwd = os.getcwd
 chdir = os.chdir
diff --git a/BaseTools/Source/Python/Common/LongFilePathSupport.py b/BaseTools/Source/Python/Common/LongFilePathSupport.py
index b3e3c8ea64..ed29d37d38 100644
--- a/BaseTools/Source/Python/Common/LongFilePathSupport.py
+++ b/BaseTools/Source/Python/Common/LongFilePathSupport.py
@@ -47,17 +47,5 @@ def CodecOpenLongFilePath(Filename, Mode='rb', Encoding=None, Errors='strict', B
 #
 def CopyLongFilePath(src, dst):
     with open(LongFilePath(src), 'rb') as fsrc:
         with open(LongFilePath(dst), 'wb') as fdst:
             shutil.copyfileobj(fsrc, fdst)
-
-## Convert a python unicode string to a normal string
-#
-# Convert a python unicode string to a normal string
-# UniToStr(u'I am a string') is 'I am a string'
-#
-# @param Uni:  The python unicode string
-#
-# @retval:     The formatted normal string
-#
-def UniToStr(Uni):
-    return repr(Uni)[2:-1]
diff --git a/BaseTools/Source/Python/Common/Misc.py b/BaseTools/Source/Python/Common/Misc.py
index d3b71fc4a2..6b3c4f7937 100644
--- a/BaseTools/Source/Python/Common/Misc.py
+++ b/BaseTools/Source/Python/Common/Misc.py
@@ -454,35 +454,48 @@ def RemoveDirectory(Directory, Recursively=False):
 #
 #   @retval     True            If the file content is changed and the file is renewed
 #   @retval     False           If the file content is the same
 #
 def SaveFileOnChange(File, Content, IsBinaryFile=True):
-    if not IsBinaryFile:
-        Content = Content.replace("\n", os.linesep)
 
     if os.path.exists(File):
-        try:
-            if Content == open(File, "rb").read():
-                return False
-        except:
-            EdkLogger.error(None, FILE_OPEN_FAILURE, ExtraData=File)
+        if IsBinaryFile:
+            try:
+                with open(File, "rb") as f:
+                    if Content == f.read():
+                        return False
+            except:
+                EdkLogger.error(None, FILE_OPEN_FAILURE, ExtraData=File)
+        else:
+            try:
+                with open(File, "r") as f:
+                    if Content == f.read():
+                        return False
+            except:
+                EdkLogger.error(None, FILE_OPEN_FAILURE, ExtraData=File)
 
     DirName = os.path.dirname(File)
     if not CreateDirectory(DirName):
         EdkLogger.error(None, FILE_CREATE_FAILURE, "Could not create directory %s" % DirName)
     else:
         if DirName == '':
             DirName = os.getcwd()
         if not os.access(DirName, os.W_OK):
             EdkLogger.error(None, PERMISSION_FAILURE, "Do not have write permission on directory %s" % DirName)
 
-    try:
-        Fd = open(File, "wb")
-        Fd.write(Content)
-        Fd.close()
-    except IOError as X:
-        EdkLogger.error(None, FILE_CREATE_FAILURE, ExtraData='IOError %s' % X)
+    if IsBinaryFile:
+        try:
+            with open(File, "wb") as Fd:
+                Fd.write(Content)
+        except IOError as X:
+            EdkLogger.error(None, FILE_CREATE_FAILURE, ExtraData='IOError %s' % X)
+    else:
+        try:
+            with open(File, 'w') as Fd:
+                Fd.write(Content)
+        except IOError as X:
+            EdkLogger.error(None, FILE_CREATE_FAILURE, ExtraData='IOError %s' % X)
 
     return True
 
 ## Retrieve and cache the real path name in file system
 #
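The binary/text split introduced in `SaveFileOnChange` above matters because Python 3 file objects are strictly mode-sensitive: a file opened `"wb"` accepts only `bytes` and one opened `"w"` only `str`, where Python 2 accepted `str` for both. A minimal sketch of the dispatch (simplified, without the EdkLogger error paths):

```python
import os
import tempfile

def save_on_change(path, content, is_binary=True):
    # Write only when the on-disk content differs; the open mode
    # must match the content type (bytes vs str) under Python 3.
    mode_r, mode_w = ("rb", "wb") if is_binary else ("r", "w")
    if os.path.exists(path):
        with open(path, mode_r) as f:
            if f.read() == content:
                return False
    with open(path, mode_w) as f:
        f.write(content)
    return True

path = os.path.join(tempfile.mkdtemp(), "demo.bin")
assert save_on_change(path, b"\x01\x02") is True   # first write
assert save_on_change(path, b"\x01\x02") is False  # unchanged
```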
@@ -1058,11 +1071,14 @@ def ParseFieldValue (Value):
                 raise BadExpression("Invalid GUID value string %s" % Value)
             Value = TmpValue
         if Value[0] == '"' and Value[-1] == '"':
             Value = Value[1:-1]
         try:
-            Value = "'" + uuid.UUID(Value).bytes_le + "'"
+            Value = str(uuid.UUID(Value).bytes_le)
+            if Value.startswith("b'"):
+                Value = Value[2:-1]
+            Value = "'" + Value + "'"
         except ValueError as Message:
             raise BadExpression(Message)
         Value, Size = ParseFieldValue(Value)
         return Value, 16
     if Value.startswith('L"') and Value.endswith('"'):
@@ -1534,11 +1550,11 @@ class PeImageClass():
         PeOffset = self._ByteListToInt(ByteList[0x3C:0x3E])
         PeObject.seek(PeOffset)
         ByteArray = array.array('B')
         ByteArray.fromfile(PeObject, 4)
         # PE signature should be 'PE\0\0'
-        if ByteArray.tostring() != 'PE\0\0':
+        if ByteArray.tostring() != b'PE\0\0':
             self.ErrorInfo = self.FileName + ' has no valid PE signature PE00'
             return
 
         # Read PE file header
         ByteArray = array.array('B')
@@ -1750,11 +1766,11 @@ class SkuClass():
 #   @param      Input   The object that may be either a integer value or a string
 #
 #   @retval     Value    The integer value that the input represents
 #
 def GetIntegerValue(Input):
-    if type(Input) in (int, long):
+    if not isinstance(Input, str):
         return Input
     String = Input
     if String.endswith("U"):
         String = String[:-1]
     if String.endswith("ULL"):
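Python 3 merges `long` into `int`, so the `type(Input) in (int, long)` test in `GetIntegerValue` is replaced above by a duck-typed "not a string" check. One equivalent sketch (suffix handling simplified for illustration):

```python
def get_integer_value(value):
    # Anything that is not a string is assumed to already be an
    # integer (covers Py2 int/long and Py3 int alike).
    if not isinstance(value, str):
        return value
    s = value
    for suffix in ("ULL", "LL", "U"):  # strip C-style integer suffixes
        if s.endswith(suffix):
            s = s[:-len(suffix)]
            break
    return int(s, 0)  # base 0 handles 0x.., 0o.., 0b.., and decimal

assert get_integer_value(42) == 42
assert get_integer_value("0x10U") == 16
```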
diff --git a/BaseTools/Source/Python/Common/StringUtils.py b/BaseTools/Source/Python/Common/StringUtils.py
index 0fa51f365b..c6227271a4 100644
--- a/BaseTools/Source/Python/Common/StringUtils.py
+++ b/BaseTools/Source/Python/Common/StringUtils.py
@@ -814,15 +814,11 @@ def GetHelpTextList(HelpTextClassList):
                 List.extend(HelpText.String.split('\n'))
 
     return List
 
 def StringToArray(String):
-    if isinstance(String, unicode):
-        if len(unicode) == 0:
-            return "{0x00,0x00}"
-        return "{%s,0x00,0x00}" % ",".join("0x%02x,0x00" % ord(C) for C in String)
-    elif String.startswith('L"'):
+    if String.startswith('L"'):
         if String == "L\"\"":
             return "{0x00,0x00}"
         else:
             return "{%s,0x00,0x00}" % ",".join("0x%02x,0x00" % ord(C) for C in String[2:-1])
     elif String.startswith('"'):
@@ -841,13 +837,11 @@ def StringToArray(String):
             return '{%s,0}' % ','.join(String.split())
         else:
             return '{%s,0,0}' % ','.join(String.split())
 
 def StringArrayLength(String):
-    if isinstance(String, unicode):
-        return (len(String) + 1) * 2 + 1;
-    elif String.startswith('L"'):
+    if String.startswith('L"'):
         return (len(String) - 3 + 1) * 2
     elif String.startswith('"'):
         return (len(String) - 2 + 1)
     else:
         return len(String.split()) + 1
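The `unicode` type no longer exists in Python 3 (every `str` is Unicode), so the `isinstance(String, unicode)` branches above are simply dropped and only the prefix dispatch remains. A sketch of the surviving logic:

```python
def string_array_length(s):
    # L"..." : UCS-2 characters plus a two-byte NUL terminator
    if s.startswith('L"'):
        return (len(s) - 3 + 1) * 2
    # "..."  : ASCII characters plus a one-byte NUL terminator
    if s.startswith('"'):
        return len(s) - 2 + 1
    # otherwise a space-separated byte list, e.g. "0x01 0x02"
    return len(s.split()) + 1

assert string_array_length('"abc"') == 4
assert string_array_length('L"ab"') == 6
```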
diff --git a/BaseTools/Source/Python/Common/VpdInfoFile.py b/BaseTools/Source/Python/Common/VpdInfoFile.py
index cebc1f7187..e6cc768ee1 100644
--- a/BaseTools/Source/Python/Common/VpdInfoFile.py
+++ b/BaseTools/Source/Python/Common/VpdInfoFile.py
@@ -90,22 +90,22 @@ class VpdInfoFile:
     #
     def Add(self, Vpd, skuname, Offset):
         if (Vpd is None):
             EdkLogger.error("VpdInfoFile", BuildToolError.ATTRIBUTE_UNKNOWN_ERROR, "Invalid VPD PCD entry.")
 
-        if not (Offset >= 0 or Offset == TAB_STAR):
+        if not (Offset >= "0" or Offset == TAB_STAR):
             EdkLogger.error("VpdInfoFile", BuildToolError.PARAMETER_INVALID, "Invalid offset parameter: %s." % Offset)
 
         if Vpd.DatumType == TAB_VOID:
-            if Vpd.MaxDatumSize <= 0:
+            if Vpd.MaxDatumSize <= "0":
                 EdkLogger.error("VpdInfoFile", BuildToolError.PARAMETER_INVALID,
                                 "Invalid max datum size for VPD PCD %s.%s" % (Vpd.TokenSpaceGuidCName, Vpd.TokenCName))
         elif Vpd.DatumType in TAB_PCD_NUMERIC_TYPES:
             if not Vpd.MaxDatumSize:
                 Vpd.MaxDatumSize = MAX_SIZE_TYPE[Vpd.DatumType]
         else:
-            if Vpd.MaxDatumSize <= 0:
+            if Vpd.MaxDatumSize <= "0":
                 EdkLogger.error("VpdInfoFile", BuildToolError.PARAMETER_INVALID,
                                 "Invalid max datum size for VPD PCD %s.%s" % (Vpd.TokenSpaceGuidCName, Vpd.TokenCName))
 
         if Vpd not in self._VpdArray:
             #
@@ -125,11 +125,11 @@ class VpdInfoFile:
         if not (FilePath is not None or len(FilePath) != 0):
             EdkLogger.error("VpdInfoFile", BuildToolError.PARAMETER_INVALID,
                             "Invalid parameter FilePath: %s." % FilePath)
 
         Content = FILE_COMMENT_TEMPLATE
-        Pcds = sorted(self._VpdArray.keys())
+        Pcds = sorted(self._VpdArray.keys(), key=lambda x: x.TokenCName)
         for Pcd in Pcds:
             i = 0
             PcdTokenCName = Pcd.TokenCName
             for PcdItem in GlobalData.MixedPcd:
                 if (Pcd.TokenCName, Pcd.TokenSpaceGuidCName) in GlobalData.MixedPcd[PcdItem]:
@@ -247,11 +247,11 @@ def CallExtenalBPDGTool(ToolPath, VpdFileName):
                                         stderr= subprocess.PIPE,
                                         shell=True)
     except Exception as X:
         EdkLogger.error("BPDG", BuildToolError.COMMAND_FAILURE, ExtraData=str(X))
     (out, error) = PopenObject.communicate()
-    print(out)
+    print(out.decode(encoding='utf-8', errors='ignore'))
     while PopenObject.returncode is None :
         PopenObject.wait()
 
     if PopenObject.returncode != 0:
         EdkLogger.debug(EdkLogger.DEBUG_1, "Fail to call BPDG tool", str(error))
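In Python 3, `Popen.communicate()` returns `bytes` rather than `str`, so the captured tool output must be decoded before printing, as the `print(out.decode(...))` change above does. A standalone sketch (using the Python interpreter itself as a stand-in for the external tool):

```python
import subprocess
import sys

proc = subprocess.Popen([sys.executable, "-c", "print('hello')"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, _ = proc.communicate()
# communicate() yields bytes under Py3; decode defensively for display.
text = out.decode(encoding="utf-8", errors="ignore")
assert text.strip() == "hello"
```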
diff --git a/BaseTools/Source/Python/GenFds/AprioriSection.py b/BaseTools/Source/Python/GenFds/AprioriSection.py
index 55d99320c7..0a828278e8 100644
--- a/BaseTools/Source/Python/GenFds/AprioriSection.py
+++ b/BaseTools/Source/Python/GenFds/AprioriSection.py
@@ -51,11 +51,11 @@ class AprioriSection (object):
     #   @param  FvName      for whom apriori file generated
     #   @param  Dict        dictionary contains macro and its value
     #   @retval string      Generated file name
     #
     def GenFfs (self, FvName, Dict = {}, IsMakefile = False):
-        Buffer = BytesIO('')
+        Buffer = BytesIO()
         if self.AprioriType == "PEI":
             AprioriFileGuid = PEI_APRIORI_GUID
         else:
             AprioriFileGuid = DXE_APRIORI_GUID
 
diff --git a/BaseTools/Source/Python/GenFds/Capsule.py b/BaseTools/Source/Python/GenFds/Capsule.py
index 1cdbdcf7ba..9013fca410 100644
--- a/BaseTools/Source/Python/GenFds/Capsule.py
+++ b/BaseTools/Source/Python/GenFds/Capsule.py
@@ -179,11 +179,11 @@ class Capsule (CapsuleClassObject):
         BodySize = len(FwMgrHdr.getvalue()) + len(Content.getvalue())
         Header.write(pack('=I', HdrSize + BodySize))
         #
         # The real capsule header structure is 28 bytes
         #
-        Header.write('\x00'*(HdrSize-28))
+        Header.write(b'\x00'*(HdrSize-28))
         Header.write(FwMgrHdr.getvalue())
         Header.write(Content.getvalue())
         #
         # Generate FMP capsule file
         #
@@ -204,22 +204,21 @@ class Capsule (CapsuleClassObject):
         if ('CAPSULE_GUID' in self.TokensDict and
             uuid.UUID(self.TokensDict['CAPSULE_GUID']) == uuid.UUID('6DCBD5ED-E82D-4C44-BDA1-7194199AD92A')):
             return self.GenFmpCapsule()
 
         CapInfFile = self.GenCapInf()
-        CapInfFile.writelines("[files]" + TAB_LINE_BREAK)
+        CapInfFile.append("[files]" + TAB_LINE_BREAK)
         CapFileList = []
         for CapsuleDataObj in self.CapsuleDataList:
             CapsuleDataObj.CapsuleName = self.CapsuleName
             FileName = CapsuleDataObj.GenCapsuleSubItem()
             CapsuleDataObj.CapsuleName = None
             CapFileList.append(FileName)
-            CapInfFile.writelines("EFI_FILE_NAME = " + \
+            CapInfFile.append("EFI_FILE_NAME = " + \
                                    FileName      + \
                                    TAB_LINE_BREAK)
-        SaveFileOnChange(self.CapInfFileName, CapInfFile.getvalue(), False)
-        CapInfFile.close()
+        SaveFileOnChange(self.CapInfFileName, ''.join(CapInfFile), False)
         #
         # Call GenFv tool to generate capsule
         #
         CapOutputFile = os.path.join(GenFdsGlobalVariable.FvDir, self.UiCapsuleName)
         CapOutputFile = CapOutputFile + '.Cap'
@@ -241,16 +240,16 @@ class Capsule (CapsuleClassObject):
     #   @retval file        inf file object
     #
     def GenCapInf(self):
         self.CapInfFileName = os.path.join(GenFdsGlobalVariable.FvDir,
                                    self.UiCapsuleName +  "_Cap" + '.inf')
-        CapInfFile = BytesIO() #open (self.CapInfFileName , 'w+')
+        CapInfFile = []
 
-        CapInfFile.writelines("[options]" + TAB_LINE_BREAK)
+        CapInfFile.append("[options]" + TAB_LINE_BREAK)
 
         for Item in self.TokensDict:
-            CapInfFile.writelines("EFI_"                    + \
+            CapInfFile.append("EFI_"                    + \
                                   Item                      + \
                                   ' = '                     + \
                                   self.TokensDict[Item]     + \
                                   TAB_LINE_BREAK)
 
diff --git a/BaseTools/Source/Python/GenFds/CapsuleData.py b/BaseTools/Source/Python/GenFds/CapsuleData.py
index db201c074b..ace4699a0e 100644
--- a/BaseTools/Source/Python/GenFds/CapsuleData.py
+++ b/BaseTools/Source/Python/GenFds/CapsuleData.py
@@ -80,11 +80,11 @@ class CapsuleFv (CapsuleData):
     #
     def GenCapsuleSubItem(self):
         if self.FvName.find('.fv') == -1:
             if self.FvName.upper() in GenFdsGlobalVariable.FdfParser.Profile.FvDict:
                 FvObj = GenFdsGlobalVariable.FdfParser.Profile.FvDict[self.FvName.upper()]
-                FdBuffer = BytesIO('')
+                FdBuffer = BytesIO()
                 FvObj.CapsuleName = self.CapsuleName
                 FvFile = FvObj.AddToBuffer(FdBuffer)
                 FvObj.CapsuleName = None
                 FdBuffer.close()
                 return FvFile
diff --git a/BaseTools/Source/Python/GenFds/Fd.py b/BaseTools/Source/Python/GenFds/Fd.py
index 9c43a62cc3..e1849a356c 100644
--- a/BaseTools/Source/Python/GenFds/Fd.py
+++ b/BaseTools/Source/Python/GenFds/Fd.py
@@ -70,11 +70,11 @@ class FD(FDClassObject):
         for RegionObj in self.RegionList:
             if RegionObj.RegionType == 'CAPSULE':
                 HasCapsuleRegion = True
                 break
         if HasCapsuleRegion:
-            TempFdBuffer = BytesIO('')
+            TempFdBuffer = BytesIO()
             PreviousRegionStart = -1
             PreviousRegionSize = 1
 
             for RegionObj in self.RegionList :
                 if RegionObj.RegionType == 'CAPSULE':
@@ -99,11 +99,11 @@ class FD(FDClassObject):
                 if PreviousRegionSize > self.Size:
                     pass
                 GenFdsGlobalVariable.VerboseLogger('Call each region\'s AddToBuffer function')
                 RegionObj.AddToBuffer (TempFdBuffer, self.BaseAddress, self.BlockSizeList, self.ErasePolarity, GenFdsGlobalVariable.ImageBinDict, self.DefineVarDict)
 
-        FdBuffer = BytesIO('')
+        FdBuffer = BytesIO()
         PreviousRegionStart = -1
         PreviousRegionSize = 1
         for RegionObj in self.RegionList :
             if RegionObj.Offset + RegionObj.Size <= PreviousRegionStart:
                 EdkLogger.error("GenFds", GENFDS_ERROR,
diff --git a/BaseTools/Source/Python/GenFds/FdfParser.py b/BaseTools/Source/Python/GenFds/FdfParser.py
index 69cb7de8e5..63edf816ec 100644
--- a/BaseTools/Source/Python/GenFds/FdfParser.py
+++ b/BaseTools/Source/Python/GenFds/FdfParser.py
@@ -157,11 +157,11 @@ class IncludeFileProfile:
     #
     def __init__(self, FileName):
         self.FileName = FileName
         self.FileLinesList = []
         try:
-            with open(FileName, "rb", 0) as fsock:
+            with open(FileName, "r") as fsock:
                 self.FileLinesList = fsock.readlines()
                 for index, line in enumerate(self.FileLinesList):
                     if not line.endswith(TAB_LINE_BREAK):
                         self.FileLinesList[index] += TAB_LINE_BREAK
         except:
@@ -211,11 +211,11 @@ class FileProfile:
     #   @param  FileName    The file that to be parsed
     #
     def __init__(self, FileName):
         self.FileLinesList = []
         try:
-            with open(FileName, "rb", 0) as fsock:
+            with open(FileName, "r") as fsock:
                 self.FileLinesList = fsock.readlines()
 
         except:
             EdkLogger.error("FdfParser", FILE_OPEN_FAILURE, ExtraData=FileName)
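Dropping the "rb" mode matters because FdfParser compares each line against str constants such as TAB_LINE_BREAK; a sketch using a throwaway file (path and contents invented):

```python
import os
import tempfile

TAB_LINE_BREAK = '\n'  # the constant the parser compares against

path = os.path.join(tempfile.mkdtemp(), 'Platform.fdf')
with open(path, 'w') as f:
    f.write('DEFINE FLASH_BASE = 0x0\nDEFINE FLASH_SIZE = 0x1000')

# Text mode ("r") yields str lines, so str.endswith(TAB_LINE_BREAK) still
# works; with "rb" the lines would be bytes and every comparison False.
with open(path, 'r') as fsock:
    FileLinesList = fsock.readlines()
for index, line in enumerate(FileLinesList):
    if not line.endswith(TAB_LINE_BREAK):
        FileLinesList[index] += TAB_LINE_BREAK

assert all(line.endswith(TAB_LINE_BREAK) for line in FileLinesList)
```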
 
diff --git a/BaseTools/Source/Python/GenFds/FfsFileStatement.py b/BaseTools/Source/Python/GenFds/FfsFileStatement.py
index 7479efff04..e05fb8ca42 100644
--- a/BaseTools/Source/Python/GenFds/FfsFileStatement.py
+++ b/BaseTools/Source/Python/GenFds/FfsFileStatement.py
@@ -77,11 +77,11 @@ class FileStatement (FileStatementClassObject):
             os.makedirs(OutputDir)
 
         Dict.update(self.DefineVarDict)
         SectionAlignments = None
         if self.FvName:
-            Buffer = BytesIO('')
+            Buffer = BytesIO()
             if self.FvName.upper() not in GenFdsGlobalVariable.FdfParser.Profile.FvDict:
                 EdkLogger.error("GenFds", GENFDS_ERROR, "FV (%s) is NOT described in FDF file!" % (self.FvName))
             Fv = GenFdsGlobalVariable.FdfParser.Profile.FvDict.get(self.FvName.upper())
             FileName = Fv.AddToBuffer(Buffer)
             SectionFiles = [FileName]
@@ -94,11 +94,11 @@ class FileStatement (FileStatementClassObject):
             SectionFiles = [FileName]
 
         elif self.FileName:
             if hasattr(self, 'FvFileType') and self.FvFileType == 'RAW':
                 if isinstance(self.FileName, list) and isinstance(self.SubAlignment, list) and len(self.FileName) == len(self.SubAlignment):
-                    FileContent = ''
+                    FileContent = BytesIO()
                     MaxAlignIndex = 0
                     MaxAlignValue = 1
                     for Index, File in enumerate(self.FileName):
                         try:
                             f = open(File, 'rb')
@@ -110,19 +110,19 @@ class FileStatement (FileStatementClassObject):
                         if self.SubAlignment[Index]:
                             AlignValue = GenFdsGlobalVariable.GetAlignment(self.SubAlignment[Index])
                         if AlignValue > MaxAlignValue:
                             MaxAlignIndex = Index
                             MaxAlignValue = AlignValue
-                        FileContent += Content
-                        if len(FileContent) % AlignValue != 0:
-                            Size = AlignValue - len(FileContent) % AlignValue
+                        FileContent.write(Content)
+                        if len(FileContent.getvalue()) % AlignValue != 0:
+                            Size = AlignValue - len(FileContent.getvalue()) % AlignValue
                             for i in range(0, Size):
-                                FileContent += pack('B', 0xFF)
+                                FileContent.write(pack('B', 0xFF))
 
-                    if FileContent:
+                    if FileContent.getvalue() != b'':
                         OutputRAWFile = os.path.join(GenFdsGlobalVariable.FfsDir, self.NameGuid, self.NameGuid + '.raw')
-                        SaveFileOnChange(OutputRAWFile, FileContent, True)
+                        SaveFileOnChange(OutputRAWFile, FileContent.getvalue(), True)
                         self.FileName = OutputRAWFile
                         self.SubAlignment = self.SubAlignment[MaxAlignIndex]
 
                 if self.Alignment and self.SubAlignment:
                     if GenFdsGlobalVariable.GetAlignment (self.Alignment) < GenFdsGlobalVariable.GetAlignment (self.SubAlignment):
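The RAW-file branch now accumulates content in a BytesIO instead of a str; a self-contained sketch of the same alignment-padding loop, with invented file contents and alignments:

```python
from io import BytesIO
from struct import pack

FileContent = BytesIO()
# (content, alignment) pairs standing in for the FileName/SubAlignment lists.
for Content, AlignValue in ((b'\x01\x02\x03', 4), (b'\xAA', 2)):
    FileContent.write(Content)
    # Pad with 0xFF (erased-flash bytes) up to the next alignment boundary.
    if len(FileContent.getvalue()) % AlignValue != 0:
        Size = AlignValue - len(FileContent.getvalue()) % AlignValue
        for _ in range(Size):
            FileContent.write(pack('B', 0xFF))

# 3 bytes padded to 4, then 1 more byte padded to the next 2-byte boundary.
assert FileContent.getvalue() == b'\x01\x02\x03\xff\xaa\xff'
```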
diff --git a/BaseTools/Source/Python/GenFds/FfsInfStatement.py b/BaseTools/Source/Python/GenFds/FfsInfStatement.py
index 80257923f0..6dcb57deed 100644
--- a/BaseTools/Source/Python/GenFds/FfsInfStatement.py
+++ b/BaseTools/Source/Python/GenFds/FfsInfStatement.py
@@ -1086,33 +1086,31 @@ class FfsInfStatement(FfsInfStatementClassObject):
     #
     @staticmethod
     def __GenUniVfrOffsetFile(VfrUniOffsetList, UniVfrOffsetFileName):
 
         # Use a instance of StringIO to cache data
-        fStringIO = BytesIO('')
+        fStringIO = BytesIO()
 
         for Item in VfrUniOffsetList:
             if (Item[0].find("Strings") != -1):
                 #
                 # UNI offset in image.
                 # GUID + Offset
                 # { 0x8913c5e0, 0x33f6, 0x4d86, { 0x9b, 0xf1, 0x43, 0xef, 0x89, 0xfc, 0x6, 0x66 } }
                 #
-                UniGuid = [0xe0, 0xc5, 0x13, 0x89, 0xf6, 0x33, 0x86, 0x4d, 0x9b, 0xf1, 0x43, 0xef, 0x89, 0xfc, 0x6, 0x66]
-                UniGuid = [chr(ItemGuid) for ItemGuid in UniGuid]
-                fStringIO.write(''.join(UniGuid))
+                UniGuid = b'\xe0\xc5\x13\x89\xf63\x86M\x9b\xf1C\xef\x89\xfc\x06f'
+                fStringIO.write(UniGuid)
                 UniValue = pack ('Q', int (Item[1], 16))
                 fStringIO.write (UniValue)
             else:
                 #
                 # VFR binary offset in image.
                 # GUID + Offset
                 # { 0xd0bc7cb4, 0x6a47, 0x495f, { 0xaa, 0x11, 0x71, 0x7, 0x46, 0xda, 0x6, 0xa2 } };
                 #
-                VfrGuid = [0xb4, 0x7c, 0xbc, 0xd0, 0x47, 0x6a, 0x5f, 0x49, 0xaa, 0x11, 0x71, 0x7, 0x46, 0xda, 0x6, 0xa2]
-                VfrGuid = [chr(ItemGuid) for ItemGuid in VfrGuid]
-                fStringIO.write(''.join(VfrGuid))
+                VfrGuid = b'\xb4|\xbc\xd0Gj_I\xaa\x11q\x07F\xda\x06\xa2'
+                fStringIO.write(VfrGuid)
                 type (Item[1])
                 VfrValue = pack ('Q', int (Item[1], 16))
                 fStringIO.write (VfrValue)
 
         #
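The bytes literals substituted for the chr()-joined GUID lists look scrambled only because repr() folds printable bytes (0x33 -> '3', 0x4d -> 'M'); they are byte-identical to what the old Python 2 code produced:

```python
# Old representation: a list of ints, each turned into one chr() (Py2 only).
UniGuidInts = [0xe0, 0xc5, 0x13, 0x89, 0xf6, 0x33, 0x86, 0x4d,
               0x9b, 0xf1, 0x43, 0xef, 0x89, 0xfc, 0x6, 0x66]
VfrGuidInts = [0xb4, 0x7c, 0xbc, 0xd0, 0x47, 0x6a, 0x5f, 0x49,
               0xaa, 0x11, 0x71, 0x7, 0x46, 0xda, 0x6, 0xa2]

# New representation: bytes literals, writable to a BytesIO directly.
UniGuid = b'\xe0\xc5\x13\x89\xf63\x86M\x9b\xf1C\xef\x89\xfc\x06f'
VfrGuid = b'\xb4|\xbc\xd0Gj_I\xaa\x11q\x07F\xda\x06\xa2'

assert bytes(UniGuidInts) == UniGuid
assert bytes(VfrGuidInts) == VfrGuid
```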
diff --git a/BaseTools/Source/Python/GenFds/Fv.py b/BaseTools/Source/Python/GenFds/Fv.py
index b141d44dc4..2ae991128a 100644
--- a/BaseTools/Source/Python/GenFds/Fv.py
+++ b/BaseTools/Source/Python/GenFds/Fv.py
@@ -115,11 +115,11 @@ class FV (object):
         for AprSection in self.AprioriSectionList:
             FileName = AprSection.GenFfs (self.UiFvName, MacroDict, IsMakefile=Flag)
             FfsFileList.append(FileName)
             # Add Apriori file name to Inf file
             if not Flag:
-                self.FvInfFile.writelines("EFI_FILE_NAME = " + \
+                self.FvInfFile.append("EFI_FILE_NAME = " + \
                                             FileName          + \
                                             TAB_LINE_BREAK)
 
         # Process Modules in FfsList
         for FfsFile in self.FfsList:
@@ -129,16 +129,16 @@ class FV (object):
             if GenFdsGlobalVariable.EnableGenfdsMultiThread and GenFdsGlobalVariable.ModuleFile and GenFdsGlobalVariable.ModuleFile.Path.find(os.path.normpath(FfsFile.InfFileName)) == -1:
                 continue
             FileName = FfsFile.GenFfs(MacroDict, FvParentAddr=BaseAddress, IsMakefile=Flag, FvName=self.UiFvName)
             FfsFileList.append(FileName)
             if not Flag:
-                self.FvInfFile.writelines("EFI_FILE_NAME = " + \
+                self.FvInfFile.append("EFI_FILE_NAME = " + \
                                             FileName          + \
                                             TAB_LINE_BREAK)
         if not Flag:
-            SaveFileOnChange(self.InfFileName, self.FvInfFile.getvalue(), False)
-            self.FvInfFile.close()
+            FvInfFile = ''.join(self.FvInfFile)
+            SaveFileOnChange(self.InfFileName, FvInfFile, False)
         #
         # Call GenFv tool
         #
         FvOutputFile = os.path.join(GenFdsGlobalVariable.FvDir, self.UiFvName)
         FvOutputFile = FvOutputFile + '.Fv'
@@ -206,18 +206,18 @@ class FV (object):
             if os.path.isfile(FvOutputFile) and os.path.getsize(FvOutputFile) >= 0x48:
                 FvFileObj = open(FvOutputFile, 'rb')
                 # PI FvHeader is 0x48 byte
                 FvHeaderBuffer = FvFileObj.read(0x48)
                 Signature = FvHeaderBuffer[0x28:0x32]
-                if Signature and Signature.startswith('_FVH'):
+                if Signature and Signature.startswith(b'_FVH'):
                     GenFdsGlobalVariable.VerboseLogger("\nGenerate %s FV Successfully" % self.UiFvName)
                     GenFdsGlobalVariable.SharpCounter = 0
 
                     FvFileObj.seek(0)
                     Buffer.write(FvFileObj.read())
                     # FV alignment position.
-                    FvAlignmentValue = 1 << (ord(FvHeaderBuffer[0x2E]) & 0x1F)
+                    FvAlignmentValue = 1 << (ord(FvHeaderBuffer[0x2E:0x2F]) & 0x1F)
                     if FvAlignmentValue >= 0x400:
                         if FvAlignmentValue >= 0x100000:
                             if FvAlignmentValue >= 0x1000000:
                             #The max alignment supported by FFS is 16M.
                                 self.FvAlignment = "16M"
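The Signature and alignment tweaks above are both consequences of Python 3 bytes semantics: indexing a bytes object yields an int (so ord() would raise), while a one-byte slice still satisfies ord() on both versions. A sketch with a zero-filled stand-in for the 0x48-byte PI FV header:

```python
FvHeaderBuffer = bytes(0x48)            # placeholder header, all zeros
Signature = FvHeaderBuffer[0x28:0x32]

# Slicing bytes yields bytes, so startswith() needs a bytes prefix.
assert not Signature.startswith(b'_FVH')

# Indexing bytes yields an int on Python 3; a 1-byte slice keeps ord() valid.
assert isinstance(FvHeaderBuffer[0x2E], int)
assert ord(FvHeaderBuffer[0x2E:0x2F]) == FvHeaderBuffer[0x2E]

FvAlignmentValue = 1 << (ord(FvHeaderBuffer[0x2E:0x2F]) & 0x1F)
assert FvAlignmentValue == 1
```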
@@ -274,73 +274,73 @@ class FV (object):
         #
         # Create FV inf file
         #
         self.InfFileName = os.path.join(GenFdsGlobalVariable.FvDir,
                                    self.UiFvName + '.inf')
-        self.FvInfFile = BytesIO()
+        self.FvInfFile = []
 
         #
         # Add [Options]
         #
-        self.FvInfFile.writelines("[options]" + TAB_LINE_BREAK)
+        self.FvInfFile.append("[options]" + TAB_LINE_BREAK)
         if BaseAddress is not None:
-            self.FvInfFile.writelines("EFI_BASE_ADDRESS = " + \
+            self.FvInfFile.append("EFI_BASE_ADDRESS = " + \
                                        BaseAddress          + \
                                        TAB_LINE_BREAK)
 
         if BlockSize is not None:
-            self.FvInfFile.writelines("EFI_BLOCK_SIZE = " + \
+            self.FvInfFile.append("EFI_BLOCK_SIZE = " + \
                                       '0x%X' %BlockSize    + \
                                       TAB_LINE_BREAK)
             if BlockNum is not None:
-                self.FvInfFile.writelines("EFI_NUM_BLOCKS   = "  + \
+                self.FvInfFile.append("EFI_NUM_BLOCKS   = "  + \
                                       ' 0x%X' %BlockNum    + \
                                       TAB_LINE_BREAK)
         else:
             if self.BlockSizeList == []:
                 if not self._GetBlockSize():
                     #set default block size is 1
-                    self.FvInfFile.writelines("EFI_BLOCK_SIZE  = 0x1" + TAB_LINE_BREAK)
+                    self.FvInfFile.append("EFI_BLOCK_SIZE  = 0x1" + TAB_LINE_BREAK)
 
             for BlockSize in self.BlockSizeList:
                 if BlockSize[0] is not None:
-                    self.FvInfFile.writelines("EFI_BLOCK_SIZE  = "  + \
+                    self.FvInfFile.append("EFI_BLOCK_SIZE  = "  + \
                                           '0x%X' %BlockSize[0]    + \
                                           TAB_LINE_BREAK)
 
                 if BlockSize[1] is not None:
-                    self.FvInfFile.writelines("EFI_NUM_BLOCKS   = "  + \
+                    self.FvInfFile.append("EFI_NUM_BLOCKS   = "  + \
                                           ' 0x%X' %BlockSize[1]    + \
                                           TAB_LINE_BREAK)
 
         if self.BsBaseAddress is not None:
-            self.FvInfFile.writelines('EFI_BOOT_DRIVER_BASE_ADDRESS = ' + \
+            self.FvInfFile.append('EFI_BOOT_DRIVER_BASE_ADDRESS = ' + \
                                        '0x%X' %self.BsBaseAddress)
         if self.RtBaseAddress is not None:
-            self.FvInfFile.writelines('EFI_RUNTIME_DRIVER_BASE_ADDRESS = ' + \
+            self.FvInfFile.append('EFI_RUNTIME_DRIVER_BASE_ADDRESS = ' + \
                                       '0x%X' %self.RtBaseAddress)
         #
         # Add attribute
         #
-        self.FvInfFile.writelines("[attributes]" + TAB_LINE_BREAK)
+        self.FvInfFile.append("[attributes]" + TAB_LINE_BREAK)
 
-        self.FvInfFile.writelines("EFI_ERASE_POLARITY   = "       + \
+        self.FvInfFile.append("EFI_ERASE_POLARITY   = "       + \
                                           ' %s' %ErasePloarity    + \
                                           TAB_LINE_BREAK)
         if not (self.FvAttributeDict is None):
             for FvAttribute in self.FvAttributeDict.keys():
                 if FvAttribute == "FvUsedSizeEnable":
                     if self.FvAttributeDict[FvAttribute].upper() in ('TRUE', '1'):
                         self.UsedSizeEnable = True
                     continue
-                self.FvInfFile.writelines("EFI_"            + \
+                self.FvInfFile.append("EFI_"            + \
                                           FvAttribute       + \
                                           ' = '             + \
                                           self.FvAttributeDict[FvAttribute] + \
                                           TAB_LINE_BREAK )
         if self.FvAlignment is not None:
-            self.FvInfFile.writelines("EFI_FVB2_ALIGNMENT_"     + \
+            self.FvInfFile.append("EFI_FVB2_ALIGNMENT_"     + \
                                        self.FvAlignment.strip() + \
                                        " = TRUE"                + \
                                        TAB_LINE_BREAK)
 
         #
@@ -349,11 +349,11 @@ class FV (object):
         if not self.FvNameGuid:
             if len(self.FvExtEntryType) > 0 or self.UsedSizeEnable:
                 GenFdsGlobalVariable.ErrorLogger("FV Extension Header Entries declared for %s with no FvNameGuid declaration." % (self.UiFvName))
         else:
             TotalSize = 16 + 4
-            Buffer = ''
+            Buffer = bytearray()
             if self.UsedSizeEnable:
                 TotalSize += (4 + 4)
                 ## define EFI_FV_EXT_TYPE_USED_SIZE_TYPE 0x03
                 #typedef  struct
                 # {
@@ -376,11 +376,11 @@ class FV (object):
                 #   GUID: size 16
                 #   FV UI name
                 #
                 Buffer += (pack('HH', (FvUiLen + 16 + 4), 0x0002)
                            + PackGUID(Guid)
-                           + self.UiFvName)
+                           + self.UiFvName.encode('utf-8'))
 
             for Index in range (0, len(self.FvExtEntryType)):
                 if self.FvExtEntryType[Index] == 'FILE':
                     # check if the path is absolute or relative
                     if os.path.isabs(self.FvExtEntryData[Index]):
@@ -423,13 +423,13 @@ class FV (object):
                 Changed = SaveFileOnChange(FvExtHeaderFileName, FvExtHeaderFile.getvalue(), True)
                 FvExtHeaderFile.close()
                 if Changed:
                   if os.path.exists (self.InfFileName):
                     os.remove (self.InfFileName)
-                self.FvInfFile.writelines("EFI_FV_EXT_HEADER_FILE_NAME = "      + \
+                self.FvInfFile.append("EFI_FV_EXT_HEADER_FILE_NAME = "      + \
                                            FvExtHeaderFileName                  + \
                                            TAB_LINE_BREAK)
 
         #
         # Add [Files]
         #
-        self.FvInfFile.writelines("[files]" + TAB_LINE_BREAK)
+        self.FvInfFile.append("[files]" + TAB_LINE_BREAK)
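Replacing the FvInfFile BytesIO with a plain list of str lines avoids mixing text and bytes entirely; one join at save time produces the str that SaveFileOnChange expects. A sketch with a shortened INF body:

```python
TAB_LINE_BREAK = '\n'

FvInfFile = []                                   # was: BytesIO + writelines()
FvInfFile.append("[options]" + TAB_LINE_BREAK)
FvInfFile.append("EFI_BLOCK_SIZE  = 0x1" + TAB_LINE_BREAK)
FvInfFile.append("[files]" + TAB_LINE_BREAK)

# ''.join() replaces getvalue(); no close() bookkeeping is needed either.
FvInfContent = ''.join(FvInfFile)
assert FvInfContent == "[options]\nEFI_BLOCK_SIZE  = 0x1\n[files]\n"
```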
diff --git a/BaseTools/Source/Python/GenFds/FvImageSection.py b/BaseTools/Source/Python/GenFds/FvImageSection.py
index 7ea931e1b5..85e59cc347 100644
--- a/BaseTools/Source/Python/GenFds/FvImageSection.py
+++ b/BaseTools/Source/Python/GenFds/FvImageSection.py
@@ -100,11 +100,11 @@ class FvImageSection(FvImageSectionClassObject):
             return OutputFileList, self.Alignment
         #
         # Generate Fv
         #
         if self.FvName is not None:
-            Buffer = BytesIO('')
+            Buffer = BytesIO()
             Fv = GenFdsGlobalVariable.FdfParser.Profile.FvDict.get(self.FvName)
             if Fv is not None:
                 self.Fv = Fv
                 if not self.FvAddr and self.Fv.BaseAddress:
                     self.FvAddr = self.Fv.BaseAddress
diff --git a/BaseTools/Source/Python/GenFds/GenFds.py b/BaseTools/Source/Python/GenFds/GenFds.py
index 2efb2edd9a..a99d56a9fd 100644
--- a/BaseTools/Source/Python/GenFds/GenFds.py
+++ b/BaseTools/Source/Python/GenFds/GenFds.py
@@ -520,11 +520,11 @@ class GenFds(object):
                 FvObj.AddToBuffer(Buffer)
                 Buffer.close()
                 return
         elif GenFds.OnlyGenerateThisFv is None:
             for FvObj in GenFdsGlobalVariable.FdfParser.Profile.FvDict.values():
-                Buffer = BytesIO('')
+                Buffer = BytesIO()
                 FvObj.AddToBuffer(Buffer)
                 Buffer.close()
 
         if GenFds.OnlyGenerateThisFv is None and GenFds.OnlyGenerateThisFd is None and GenFds.OnlyGenerateThisCap is None:
             if GenFdsGlobalVariable.FdfParser.Profile.CapsuleDict != {}:
@@ -671,11 +671,11 @@ class GenFds(object):
             print(ModuleObj.BaseName + ' ' + ModuleObj.ModuleType)
 
     @staticmethod
     def GenerateGuidXRefFile(BuildDb, ArchList, FdfParserObj):
         GuidXRefFileName = os.path.join(GenFdsGlobalVariable.FvDir, "Guid.xref")
-        GuidXRefFile = BytesIO('')
+        GuidXRefFile = []
         PkgGuidDict = {}
         GuidDict = {}
         ModuleList = []
         FileGuidList = []
         VariableGuidSet = set()
@@ -698,13 +698,13 @@ class GenFds(object):
                 if Module in ModuleList:
                     continue
                 else:
                     ModuleList.append(Module)
                 if GlobalData.gGuidPattern.match(ModuleFile.BaseName):
-                    GuidXRefFile.write("%s %s\n" % (ModuleFile.BaseName, Module.BaseName))
+                    GuidXRefFile.append("%s %s\n" % (ModuleFile.BaseName, Module.BaseName))
                 else:
-                    GuidXRefFile.write("%s %s\n" % (Module.Guid, Module.BaseName))
+                    GuidXRefFile.append("%s %s\n" % (Module.Guid, Module.BaseName))
                 GuidDict.update(Module.Protocols)
                 GuidDict.update(Module.Guids)
                 GuidDict.update(Module.Ppis)
             for FvName in FdfParserObj.Profile.FvDict:
                 for FfsObj in FdfParserObj.Profile.FvDict[FvName].FfsList:
@@ -713,11 +713,11 @@ class GenFds(object):
                         FdfModule = BuildDb.BuildObject[InfPath, Arch, GenFdsGlobalVariable.TargetName, GenFdsGlobalVariable.ToolChainTag]
                         if FdfModule in ModuleList:
                             continue
                         else:
                             ModuleList.append(FdfModule)
-                        GuidXRefFile.write("%s %s\n" % (FdfModule.Guid, FdfModule.BaseName))
+                        GuidXRefFile.append("%s %s\n" % (FdfModule.Guid, FdfModule.BaseName))
                         GuidDict.update(FdfModule.Protocols)
                         GuidDict.update(FdfModule.Guids)
                         GuidDict.update(FdfModule.Ppis)
                     else:
                         FileStatementGuid = FfsObj.NameGuid
@@ -774,23 +774,23 @@ class GenFds(object):
                                     Name.append((F.read().split()[-1]))
                         if not Name:
                             continue
 
                         Name = ' '.join(Name) if isinstance(Name, type([])) else Name
-                        GuidXRefFile.write("%s %s\n" %(FileStatementGuid, Name))
+                        GuidXRefFile.append("%s %s\n" %(FileStatementGuid, Name))
 
        # Append GUIDs, Protocols, and PPIs to the Xref file
-        GuidXRefFile.write("\n")
+        GuidXRefFile.append("\n")
         for key, item in GuidDict.items():
-            GuidXRefFile.write("%s %s\n" % (GuidStructureStringToGuidString(item).upper(), key))
+            GuidXRefFile.append("%s %s\n" % (GuidStructureStringToGuidString(item).upper(), key))
 
-        if GuidXRefFile.getvalue():
-            SaveFileOnChange(GuidXRefFileName, GuidXRefFile.getvalue(), False)
+        if GuidXRefFile:
+            GuidXRefFile = ''.join(GuidXRefFile)
+            SaveFileOnChange(GuidXRefFileName, GuidXRefFile, False)
             GenFdsGlobalVariable.InfLogger("\nGUID cross reference file can be found at %s" % GuidXRefFileName)
         elif os.path.exists(GuidXRefFileName):
             os.remove(GuidXRefFileName)
-        GuidXRefFile.close()
 
 
 if __name__ == '__main__':
     r = main()
     ## 0-127 is a safe return range, and 1 is a standard default error
diff --git a/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py b/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py
index febe0737a2..028bcc480c 100644
--- a/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py
+++ b/BaseTools/Source/Python/GenFds/GenFdsGlobalVariable.py
@@ -720,12 +720,12 @@ class GenFdsGlobalVariable:
             #get command return value
             returnValue[0] = PopenObject.returncode
             return
         if PopenObject.returncode != 0 or GenFdsGlobalVariable.VerboseMode or GenFdsGlobalVariable.DebugLevel != -1:
             GenFdsGlobalVariable.InfLogger ("Return Value = %d" % PopenObject.returncode)
-            GenFdsGlobalVariable.InfLogger (out)
-            GenFdsGlobalVariable.InfLogger (error)
+            GenFdsGlobalVariable.InfLogger(out.decode(encoding='utf-8', errors='ignore'))
+            GenFdsGlobalVariable.InfLogger(error.decode(encoding='utf-8', errors='ignore'))
             if PopenObject.returncode != 0:
                 print("###", cmd)
                 EdkLogger.error("GenFds", COMMAND_FAILURE, errorMess)
 
     @staticmethod
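CallExternalTool's logger now decodes the Popen output because communicate() returns bytes under Python 3; a standalone sketch (the command is a stand-in, not a real GenFv invocation):

```python
import subprocess
import sys

# communicate() yields bytes on Python 3, so the logger must decode.
PopenObject = subprocess.Popen([sys.executable, '-c', 'print("GenFv done")'],
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, error = PopenObject.communicate()
assert isinstance(out, bytes)

# errors='ignore' keeps one stray non-UTF-8 byte from aborting the log call.
text = out.decode(encoding='utf-8', errors='ignore')
assert text.strip() == 'GenFv done'
```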
diff --git a/BaseTools/Source/Python/GenFds/Region.py b/BaseTools/Source/Python/GenFds/Region.py
index 83363276d2..972847efae 100644
--- a/BaseTools/Source/Python/GenFds/Region.py
+++ b/BaseTools/Source/Python/GenFds/Region.py
@@ -60,12 +60,12 @@ class Region(object):
         if Size > 0:
             if (ErasePolarity == '1') :
                 PadByte = pack('B', 0xFF)
             else:
                 PadByte = pack('B', 0)
-            PadData = ''.join(PadByte for i in range(0, Size))
-            Buffer.write(PadData)
+            for i in range(0, Size):
+                Buffer.write(PadByte)
 
     ## AddToBuffer()
     #
     #   Add region data to the Buffer
     #
@@ -129,11 +129,11 @@ class Region(object):
                         self.FvAddress = self.FvAddress + FvOffset
                         FvAlignValue = GenFdsGlobalVariable.GetAlignment(FvObj.FvAlignment)
                         if self.FvAddress % FvAlignValue != 0:
                             EdkLogger.error("GenFds", GENFDS_ERROR,
                                             "FV (%s) is NOT %s Aligned!" % (FvObj.UiFvName, FvObj.FvAlignment))
-                        FvBuffer = BytesIO('')
+                        FvBuffer = BytesIO()
                         FvBaseAddress = '0x%X' % self.FvAddress
                         BlockSize = None
                         BlockNum = None
                         FvObj.AddToBuffer(FvBuffer, FvBaseAddress, BlockSize, BlockNum, ErasePolarity, Flag=Flag)
                         if Flag:
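Region's padding rewrite replaces a str join of packed bytes (which breaks once pack() returns bytes) with repeated writes; a sketch using a hypothetical helper that mirrors the loop (PadBuffer is not a BaseTools API):

```python
from io import BytesIO
from struct import pack

def PadBuffer(Buffer, Size, ErasePolarity):
    # pack('B', ...) yields bytes on Python 3, so ''.join() of the packed
    # values no longer produces something writable; write each byte directly.
    PadByte = pack('B', 0xFF) if ErasePolarity == '1' else pack('B', 0)
    for _ in range(Size):
        Buffer.write(PadByte)

buf = BytesIO()
PadBuffer(buf, 4, '1')          # erase polarity 1 pads with 0xFF
assert buf.getvalue() == b'\xff\xff\xff\xff'
```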
diff --git a/BaseTools/Source/Python/Pkcs7Sign/Pkcs7Sign.py b/BaseTools/Source/Python/Pkcs7Sign/Pkcs7Sign.py
index 2a7c308895..003f052a90 100644
--- a/BaseTools/Source/Python/Pkcs7Sign/Pkcs7Sign.py
+++ b/BaseTools/Source/Python/Pkcs7Sign/Pkcs7Sign.py
@@ -120,11 +120,11 @@ if __name__ == '__main__':
 
   Version = Process.communicate()
   if Process.returncode != 0:
     print('ERROR: Open SSL command not available.  Please verify PATH or set OPENSSL_PATH')
     sys.exit(Process.returncode)
-  print(Version[0])
+  print(Version[0].decode())
 
   #
   # Read input file into a buffer and save input filename
   #
   args.InputFileName   = args.InputFile.name
diff --git a/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256GenerateKeys.py b/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256GenerateKeys.py
index f96ceb2637..c0b661d03c 100644
--- a/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256GenerateKeys.py
+++ b/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256GenerateKeys.py
@@ -82,11 +82,11 @@ if __name__ == '__main__':
 
   Version = Process.communicate()
   if Process.returncode != 0:
     print('ERROR: Open SSL command not available.  Please verify PATH or set OPENSSL_PATH')
     sys.exit(Process.returncode)
-  print(Version[0])
+  print(Version[0].decode())
 
   args.PemFileName = []
 
   #
   # Check for output file argument
@@ -117,23 +117,23 @@ if __name__ == '__main__':
       # Save PEM filename and close input file
       #
       args.PemFileName.append(Item.name)
       Item.close()
 
-  PublicKeyHash = ''
+  PublicKeyHash = bytearray()
   for Item in args.PemFileName:
     #
     # Extract public key from private key into STDOUT
     #
     Process = subprocess.Popen('%s rsa -in %s -modulus -noout' % (OpenSslCommand, Item), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
-    PublicKeyHexString = Process.communicate()[0].split('=')[1].strip()
+    PublicKeyHexString = Process.communicate()[0].split(b'=')[1].strip()
     if Process.returncode != 0:
       print('ERROR: Unable to extract public key from private key')
       sys.exit(Process.returncode)
-    PublicKey = ''
+    PublicKey = bytearray()
     for Index in range (0, len(PublicKeyHexString), 2):
-      PublicKey = PublicKey + chr(int(PublicKeyHexString[Index:Index + 2], 16))
+      PublicKey.append(int(PublicKeyHexString[Index:Index + 2], 16))
 
     #
     # Generate SHA 256 hash of RSA 2048 bit public key into STDOUT
     #
     Process = subprocess.Popen('%s dgst -sha256 -binary' % (OpenSslCommand), stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
@@ -155,18 +155,18 @@ if __name__ == '__main__':
   #
   # Convert public key hash to a C structure string
   #
   PublicKeyHashC = '{'
   for Item in PublicKeyHash:
-    PublicKeyHashC = PublicKeyHashC + '0x%02x, ' % (ord(Item))
+    PublicKeyHashC = PublicKeyHashC + '0x%02x, ' % (Item)
   PublicKeyHashC = PublicKeyHashC[:-2] + '}'
 
   #
   # Write SHA 256 of 2048 bit binary public key to public key hash C structure file
   #
   try:
-    args.PublicKeyHashCFile.write (PublicKeyHashC)
+    args.PublicKeyHashCFile.write (bytes(PublicKeyHashC, 'utf-8'))
     args.PublicKeyHashCFile.close ()
   except:
     pass
 
   #
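With communicate() returning bytes, the OpenSSL modulus line must be split on b'=' and the hex pairs converted without chr(); bytearray.fromhex (or pair-by-pair int conversion) keeps the result binary, matching the old Python 2 output. A sketch with a made-up modulus:

```python
# Stand-in for Process.communicate()[0] from `openssl rsa -modulus -noout`.
ModulusLine = b'Modulus=DEADBEEF'
PublicKeyHexString = ModulusLine.split(b'=')[1].strip()

# bytes.fromhex() wants str, so decode the ASCII hex digits first.
PublicKey = bytearray.fromhex(PublicKeyHexString.decode('utf-8'))
assert PublicKey == bytearray(b'\xde\xad\xbe\xef')

# int() also accepts bytes for its base-16 form, pair by pair:
assert all(int(PublicKeyHexString[i:i + 2], 16) == PublicKey[i // 2]
           for i in range(0, len(PublicKeyHexString), 2))
```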
diff --git a/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py b/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py
index c285a69ec0..6cea885853 100644
--- a/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py
+++ b/BaseTools/Source/Python/Rsa2048Sha256Sign/Rsa2048Sha256Sign.py
@@ -103,11 +103,11 @@ if __name__ == '__main__':
 
   Version = Process.communicate()
   if Process.returncode != 0:
     print('ERROR: Open SSL command not available.  Please verify PATH or set OPENSSL_PATH')
     sys.exit(Process.returncode)
-  print(Version[0])
+  print(Version[0].decode('utf-8'))
 
   #
   # Read input file into a buffer and save input filename
   #
   args.InputFileName   = args.InputFile.name
@@ -151,11 +151,12 @@ if __name__ == '__main__':
 
   #
   # Extract public key from private key into STDOUT
   #
   Process = subprocess.Popen('%s rsa -in "%s" -modulus -noout' % (OpenSslCommand, args.PrivateKeyFileName), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
-  PublicKeyHexString = Process.communicate()[0].split('=')[1].strip()
+  PublicKeyHexString = Process.communicate()[0].split(b'=')[1].strip()
+  PublicKeyHexString = PublicKeyHexString.decode('utf-8')
   PublicKey = ''
   while len(PublicKeyHexString) > 0:
     PublicKey = PublicKey + PublicKeyHexString[0:2]
     PublicKeyHexString=PublicKeyHexString[2:]
   if Process.returncode != 0:
@@ -208,11 +209,11 @@ if __name__ == '__main__':
       sys.exit(1)
 
     #
     # Verify the public key
     #
-    if Header.PublicKey != PublicKey:
+    if Header.PublicKey != bytearray.fromhex(PublicKey):
       print('ERROR: Public key in input file does not match public key from private key file')
       sys.exit(1)
 
     FullInputFileBuffer = args.InputFileBuffer
     if args.MonotonicCountStr:
diff --git a/BaseTools/Source/Python/Trim/Trim.py b/BaseTools/Source/Python/Trim/Trim.py
index 51010bf326..428bf0d681 100644
--- a/BaseTools/Source/Python/Trim/Trim.py
+++ b/BaseTools/Source/Python/Trim/Trim.py
@@ -243,11 +243,11 @@ def TrimPreprocessedFile(Source, Target, ConvertHex, TrimLong):
             if Brace == 0 and Line.find(";") >= 0:
                 MulPatternFlag = False
 
     # save to file
     try:
-        f = open (Target, 'wb')
+        f = open (Target, 'w')
     except:
         EdkLogger.error("Trim", FILE_OPEN_FAILURE, ExtraData=Target)
     f.writelines(NewLines)
     f.close()
 
@@ -456,33 +456,31 @@ def GenerateVfrBinSec(ModuleName, DebugDir, OutputFile):
         fInputfile = open(OutputFile, "wb+", 0)
     except:
         EdkLogger.error("Trim", FILE_OPEN_FAILURE, "File open failed for %s" %OutputFile, None)
 
     # Use a instance of BytesIO to cache data
-    fStringIO = BytesIO('')
+    fStringIO = BytesIO()
 
     for Item in VfrUniOffsetList:
         if (Item[0].find("Strings") != -1):
             #
             # UNI offset in image.
             # GUID + Offset
             # { 0x8913c5e0, 0x33f6, 0x4d86, { 0x9b, 0xf1, 0x43, 0xef, 0x89, 0xfc, 0x6, 0x66 } }
             #
-            UniGuid = [0xe0, 0xc5, 0x13, 0x89, 0xf6, 0x33, 0x86, 0x4d, 0x9b, 0xf1, 0x43, 0xef, 0x89, 0xfc, 0x6, 0x66]
-            UniGuid = [chr(ItemGuid) for ItemGuid in UniGuid]
-            fStringIO.write(''.join(UniGuid))
+            UniGuid = b'\xe0\xc5\x13\x89\xf63\x86M\x9b\xf1C\xef\x89\xfc\x06f'
+            fStringIO.write(UniGuid)
             UniValue = pack ('Q', int (Item[1], 16))
             fStringIO.write (UniValue)
         else:
             #
             # VFR binary offset in image.
             # GUID + Offset
             # { 0xd0bc7cb4, 0x6a47, 0x495f, { 0xaa, 0x11, 0x71, 0x7, 0x46, 0xda, 0x6, 0xa2 } };
             #
-            VfrGuid = [0xb4, 0x7c, 0xbc, 0xd0, 0x47, 0x6a, 0x5f, 0x49, 0xaa, 0x11, 0x71, 0x7, 0x46, 0xda, 0x6, 0xa2]
-            VfrGuid = [chr(ItemGuid) for ItemGuid in VfrGuid]
-            fStringIO.write(''.join(VfrGuid))
+            VfrGuid = b'\xb4|\xbc\xd0Gj_I\xaa\x11q\x07F\xda\x06\xa2'
+            fStringIO.write(VfrGuid)
             type (Item[1])
             VfrValue = pack ('Q', int (Item[1], 16))
             fStringIO.write (VfrValue)
 
     #
@@ -560,11 +558,11 @@ def TrimEdkSources(Source, Target):
 def TrimEdkSourceCode(Source, Target):
     EdkLogger.verbose("\t%s -> %s" % (Source, Target))
     CreateDirectory(os.path.dirname(Target))
 
     try:
-        f = open (Source, 'rb')
+        f = open (Source, 'r')
     except:
         EdkLogger.error("Trim", FILE_OPEN_FAILURE, ExtraData=Source)
     # read whole file
     Lines = f.read()
     f.close()
@@ -579,11 +577,11 @@ def TrimEdkSourceCode(Source, Target):
     # save all lines if trimmed
     if Source == Target and NewLines == Lines:
         return
 
     try:
-        f = open (Target, 'wb')
+        f = open (Target, 'w')
     except:
         EdkLogger.error("Trim", FILE_OPEN_FAILURE, ExtraData=Target)
     f.write(NewLines)
     f.close()
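The mode change matters because Python 3 separates text from bytes: Trim manipulates the file contents as str, so text mode is required. A minimal sketch of the difference (the temp file is incidental):

```python
import os
import tempfile

# In Python 3, mode 'r' yields str and 'rb' yields bytes.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'w') as f:
    f.write('#include "foo.h"\n')
with open(path, 'r') as f:
    text = f.read()    # str, what TrimEdkSourceCode now operates on
with open(path, 'rb') as f:
    raw = f.read()     # bytes, what the old 'rb' mode returned
os.remove(path)
assert isinstance(text, str) and isinstance(raw, bytes)
```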
 
diff --git a/BaseTools/Source/Python/UPT/Library/StringUtils.py b/BaseTools/Source/Python/UPT/Library/StringUtils.py
index 90946337d0..a3391daa91 100644
--- a/BaseTools/Source/Python/UPT/Library/StringUtils.py
+++ b/BaseTools/Source/Python/UPT/Library/StringUtils.py
@@ -677,13 +677,11 @@ def GetHelpTextList(HelpTextClassList):
 # Get String Array Length
 #
 # @param String: the source string
 #
 def StringArrayLength(String):
-    if isinstance(String, unicode):
-        return (len(String) + 1) * 2 + 1
-    elif String.startswith('L"'):
+    if String.startswith('L"'):
         return (len(String) - 3 + 1) * 2
     elif String.startswith('"'):
         return (len(String) - 2 + 1)
     else:
         return len(String.split()) + 1
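Dropping the `unicode` branch is safe because every Python 3 str is already unicode, so the remaining startswith checks see all inputs. A standalone mirror of the new function (the lowercase name is illustrative):

```python
def string_array_length(s):
    # L"..." is stored as UCS-2: payload chars plus a 2-byte NUL;
    # len(s) includes the 3 overhead chars L, ", and ".
    if s.startswith('L"'):
        return (len(s) - 3 + 1) * 2
    # "..." is stored as ASCII: payload chars plus a 1-byte NUL.
    elif s.startswith('"'):
        return len(s) - 2 + 1
    # Otherwise treat the value as a byte list: one byte per token plus NUL.
    return len(s.split()) + 1

assert string_array_length('L"ab"') == 6   # 2 UCS-2 chars + UCS-2 NUL
assert string_array_length('"ab"') == 3    # 2 chars + NUL
```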
diff --git a/BaseTools/Source/Python/Workspace/BuildClassObject.py b/BaseTools/Source/Python/Workspace/BuildClassObject.py
index b67414b930..cff77a71ae 100644
--- a/BaseTools/Source/Python/Workspace/BuildClassObject.py
+++ b/BaseTools/Source/Python/Workspace/BuildClassObject.py
@@ -92,17 +92,17 @@ class PcdClassObject(object):
                     fields = self.SkuOverrideValues[sku][defaultstore]
                     for demesionattr in fields:
                         deme = ArrayIndex.findall(demesionattr)
                         for i in range(len(deme)-1):
                             if int(deme[i].lstrip("[").rstrip("]").strip()) > int(self._Capacity[i]):
-                                print "error"
+                                print ("error")
         if hasattr(self,"DefaultValues"):
             for demesionattr in self.DefaultValues:
                 deme = ArrayIndex.findall(demesionattr)
                 for i in range(len(deme)-1):
                     if int(deme[i].lstrip("[").rstrip("]").strip()) > int(self._Capacity[i]):
-                        print "error"
+                        print ("error")
         return self._Capacity
     @property
     def DatumType(self):
         return self._DatumType
 
diff --git a/BaseTools/Source/Python/Workspace/DscBuildData.py b/BaseTools/Source/Python/Workspace/DscBuildData.py
index 13b2cef59d..a96502b4bf 100644
--- a/BaseTools/Source/Python/Workspace/DscBuildData.py
+++ b/BaseTools/Source/Python/Workspace/DscBuildData.py
@@ -154,11 +154,18 @@ def GetDependencyList(FileStack, SearchPathList):
 
             if len(FileContent) == 0:
                 continue
 
             if FileContent[0] == 0xff or FileContent[0] == 0xfe:
-                FileContent = unicode(FileContent, "utf-16")
+                FileContent = FileContent.decode('utf-16')
+                IncludedFileList = gIncludePattern.findall(FileContent)
+            else:
+                try:
+                    FileContent = str(FileContent)
+                    IncludedFileList = gIncludePattern.findall(FileContent)
+                except:
+                    pass
             IncludedFileList = gIncludePattern.findall(FileContent)
 
             for Inc in IncludedFileList:
                 Inc = Inc.strip()
                 Inc = os.path.normpath(Inc)
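The hunk above sniffs a UTF-16 BOM byte before decoding. Note that in the fallback branch `str(FileContent)` on a bytes object yields the `b'...'` repr rather than the text; a hedged sketch of the intent, using an explicit decode in the fallback instead:

```python
def decode_source(content: bytes) -> str:
    # A UTF-16 BOM starts with 0xFF 0xFE (LE) or 0xFE 0xFF (BE).
    if content[:1] in (b'\xff', b'\xfe'):
        return content.decode('utf-16')
    # Assumption: non-BOM sources are 8-bit text; latin-1 never fails.
    return content.decode('latin-1')

assert decode_source('hi'.encode('utf-16')) == 'hi'
assert decode_source(b'#include "a.h"') == '#include "a.h"'
```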
@@ -1613,11 +1620,11 @@ class DscBuildData(PlatformBuildClassObject):
         FdfInfList = []
         if GlobalData.gFdfParser:
             FdfInfList = GlobalData.gFdfParser.Profile.InfList
         FdfModuleList = [PathClass(NormPath(Inf), GlobalData.gWorkspace, Arch=self._Arch) for Inf in FdfInfList]
         AllModulePcds = set()
-        ModuleSet = set(self._Modules.keys() + FdfModuleList)
+        ModuleSet = set(list(self._Modules.keys()) + FdfModuleList)
         for ModuleFile in ModuleSet:
             ModuleData = self._Bdb[ModuleFile, self._Arch, self._Target, self._Toolchain]
             AllModulePcds = AllModulePcds | ModuleData.PcdsName
         for ModuleFile in self.LibraryInstances:
             ModuleData = self._Bdb.CreateBuildObject(ModuleFile, self._Arch, self._Target, self._Toolchain)
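`dict.keys()` returns a view object in Python 3, so the old `keys() + list` concatenation raises TypeError; wrapping the view in `list()` restores the Python 2 behavior. Sketch:

```python
# In Python 3, dict.keys() is a view and cannot be added to a list.
modules = {'ModuleA.inf': object(), 'ModuleB.inf': object()}
fdf_inf_list = ['ModuleC.inf']
try:
    modules.keys() + fdf_inf_list          # TypeError under Python 3
except TypeError:
    pass
module_set = set(list(modules.keys()) + fdf_inf_list)
assert module_set == {'ModuleA.inf', 'ModuleB.inf', 'ModuleC.inf'}
```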
@@ -1741,11 +1748,11 @@ class DscBuildData(PlatformBuildClassObject):
         try:
             Process = subprocess.Popen(Command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
         except:
             EdkLogger.error('Build', COMMAND_FAILURE, 'Can not execute command: %s' % Command)
         Result = Process.communicate()
-        return Process.returncode, Result[0], Result[1]
+        return Process.returncode, Result[0].decode(encoding='utf-8', errors='ignore'), Result[1].decode(encoding='utf-8', errors='ignore')
 
     @staticmethod
     def IntToCString(Value, ValueSize):
         Result = '"'
         if not isinstance (Value, str):
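`communicate()` hands back bytes on Python 3, which is why both streams are decoded above before callers treat them as text. A minimal reproduction (a POSIX shell with `echo` is assumed):

```python
import subprocess

# stdout/stderr captured through a PIPE are bytes; decode before any
# string handling downstream.
process = subprocess.Popen('echo hello', stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE, shell=True)
raw_out, raw_err = process.communicate()
out = raw_out.decode(encoding='utf-8', errors='ignore')
assert isinstance(raw_out, bytes)
assert out.strip() == 'hello'
```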
diff --git a/BaseTools/Source/Python/Workspace/MetaFileParser.py b/BaseTools/Source/Python/Workspace/MetaFileParser.py
index f31dbc2649..b2428a535c 100644
--- a/BaseTools/Source/Python/Workspace/MetaFileParser.py
+++ b/BaseTools/Source/Python/Workspace/MetaFileParser.py
@@ -1997,14 +1997,14 @@ class DecParser(MetaFileParser):
                     self._ValueList = None
                     self._include_flag = False
                     return
 
                 if self._include_flag:
-                    self._ValueList[1] = "<HeaderFiles>_" + md5(self._CurrentLine).hexdigest()
+                    self._ValueList[1] = "<HeaderFiles>_" + md5(self._CurrentLine.encode('utf-8')).hexdigest()
                     self._ValueList[2] = self._CurrentLine
                 if self._package_flag and "}" != self._CurrentLine:
-                    self._ValueList[1] = "<Packages>_" + md5(self._CurrentLine).hexdigest()
+                    self._ValueList[1] = "<Packages>_" + md5(self._CurrentLine.encode('utf-8')).hexdigest()
                     self._ValueList[2] = self._CurrentLine
                 if self._CurrentLine == "}":
                     self._package_flag = False
                     self._include_flag = False
                     self._ValueList = None
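`hashlib.md5()` only accepts bytes-like objects on Python 3, hence the `.encode('utf-8')` calls above; the digest is only used as a stable dictionary key. For example:

```python
from hashlib import md5

line = 'PcdsDynamicEx'  # illustrative line content
# md5() requires bytes; passing a str raises TypeError on Python 3.
key = '<HeaderFiles>_' + md5(line.encode('utf-8')).hexdigest()
assert len(key) == len('<HeaderFiles>_') + 32
```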
diff --git a/BaseTools/Source/Python/build/BuildReport.py b/BaseTools/Source/Python/build/BuildReport.py
index 1cd1b0886a..8d3b030151 100644
--- a/BaseTools/Source/Python/build/BuildReport.py
+++ b/BaseTools/Source/Python/build/BuildReport.py
@@ -141,11 +141,11 @@ VPDPcdList = []
 # @Wrapper                   Indicates whether to wrap the string
 #
 def FileWrite(File, String, Wrapper=False):
     if Wrapper:
         String = textwrap.fill(String, 120)
-    File.write(String + gEndOfLine)
+    File.append(String + gEndOfLine)
 
 def ByteArrayForamt(Value):
     IsByteArray = False
     SplitNum = 16
     ArrayList = []
@@ -634,11 +634,11 @@ class ModuleReport(object):
                 if Match:
                     self.Size = int(Match.group(1))
 
                 Match = gTimeStampPattern.search(FileContents)
                 if Match:
-                    self.BuildTimeStamp = datetime.fromtimestamp(int(Match.group(1)))
+                    self.BuildTimeStamp = datetime.utcfromtimestamp(int(Match.group(1)))
             except IOError:
                 EdkLogger.warn(None, "Fail to read report file", FwReportFileName)
 
         if "HASH" in ReportType:
             OutputDir = os.path.join(self._BuildDir, "OUTPUT")
@@ -719,12 +719,12 @@ class ModuleReport(object):
 def ReadMessage(From, To, ExitFlag):
     while True:
         # read one line a time
         Line = From.readline()
         # empty string means "end"
-        if Line is not None and Line != "":
-            To(Line.rstrip())
+        if Line is not None and Line != b"":
+            To(Line.rstrip().decode(encoding='utf-8', errors='ignore'))
         else:
             break
         if ExitFlag.isSet():
             break
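Because the pipe is opened in binary mode, `readline()` yields `b''` at end of stream; comparing against `""` never matches in Python 3, so without this change the loop would never terminate. A sketch with an in-memory stream standing in for the pipe:

```python
from io import BytesIO

stream = BytesIO(b'first line\nsecond line\n')
collected = []
while True:
    line = stream.readline()
    # b'' signals end-of-stream; decode real lines before logging.
    if line is not None and line != b"":
        collected.append(line.rstrip().decode('utf-8', errors='ignore'))
    else:
        break
assert collected == ['first line', 'second line']
```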
 
@@ -2267,22 +2267,21 @@ class BuildReport(object):
     # @param GenFdsTime      The total time of GenFds phase
     #
     def GenerateReport(self, BuildDuration, AutoGenTime, MakeTime, GenFdsTime):
         if self.ReportFile:
             try:
-                File = BytesIO('')
+                File = []
                 for (Wa, MaList) in self.ReportList:
                     PlatformReport(Wa, MaList, self.ReportType).GenerateReport(File, BuildDuration, AutoGenTime, MakeTime, GenFdsTime, self.ReportType)
-                Content = FileLinesSplit(File.getvalue(), gLineMaxLength)
-                SaveFileOnChange(self.ReportFile, Content, True)
+                Content = FileLinesSplit(''.join(File), gLineMaxLength)
+                SaveFileOnChange(self.ReportFile, Content, False)
                 EdkLogger.quiet("Build report can be found at %s" % os.path.abspath(self.ReportFile))
             except IOError:
                 EdkLogger.error(None, FILE_WRITE_FAILURE, ExtraData=self.ReportFile)
             except:
                 EdkLogger.error("BuildReport", CODE_ERROR, "Unknown fatal error when generating build report", ExtraData=self.ReportFile, RaiseError=False)
                 EdkLogger.quiet("(Python %s on %s\n%s)" % (platform.python_version(), sys.platform, traceback.format_exc()))
-            File.close()
 
 # This acts like the main() function for the script, unless it is 'import'ed into another script.
 if __name__ == '__main__':
     pass
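The BytesIO report buffer is replaced by a plain list of str fragments joined once at the end, which sidesteps bytes/str mixing entirely. A sketch of the new FileWrite/GenerateReport flow (the `'\r\n'` value of gEndOfLine is an assumption; BuildReport picks it per platform):

```python
gEndOfLine = '\r\n'  # assumed value for this sketch

def file_write(report, string):
    # The report "File" is now a list; writing is appending.
    report.append(string + gEndOfLine)

report = []
file_write(report, 'Platform Summary')
file_write(report, 'Module Summary')
content = ''.join(report)
assert content == 'Platform Summary\r\nModule Summary\r\n'
```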
 
diff --git a/BaseTools/Source/Python/build/build.py b/BaseTools/Source/Python/build/build.py
index 43fc3c8077..cdea312864 100644
--- a/BaseTools/Source/Python/build/build.py
+++ b/BaseTools/Source/Python/build/build.py
@@ -18,11 +18,10 @@
 # Import Modules
 #
 from __future__ import print_function
 import Common.LongFilePathOs as os
 import re
-from io import BytesIO
 import sys
 import glob
 import time
 import platform
 import traceback
@@ -180,12 +179,12 @@ def NormFile(FilePath, Workspace):
 def ReadMessage(From, To, ExitFlag):
     while True:
         # read one line a time
         Line = From.readline()
         # empty string means "end"
-        if Line is not None and Line != "":
-            To(Line.rstrip())
+        if Line is not None and Line != b"":
+            To(Line.rstrip().decode(encoding='utf-8', errors='ignore'))
         else:
             break
         if ExitFlag.isSet():
             break
 
@@ -1408,15 +1407,15 @@ class Build():
                 ImageMap.close()
             #
             # Add general information.
             #
             if ModeIsSmm:
-                MapBuffer.write('\n\n%s (Fixed SMRAM Offset,   BaseAddress=0x%010X,  EntryPoint=0x%010X)\n' % (ModuleName, BaseAddress, BaseAddress + ModuleInfo.Image.EntryPoint))
+                MapBuffer.append('\n\n%s (Fixed SMRAM Offset,   BaseAddress=0x%010X,  EntryPoint=0x%010X)\n' % (ModuleName, BaseAddress, BaseAddress + ModuleInfo.Image.EntryPoint))
             elif AddrIsOffset:
-                MapBuffer.write('\n\n%s (Fixed Memory Offset,  BaseAddress=-0x%010X, EntryPoint=-0x%010X)\n' % (ModuleName, 0 - BaseAddress, 0 - (BaseAddress + ModuleInfo.Image.EntryPoint)))
+                MapBuffer.append('\n\n%s (Fixed Memory Offset,  BaseAddress=-0x%010X, EntryPoint=-0x%010X)\n' % (ModuleName, 0 - BaseAddress, 0 - (BaseAddress + ModuleInfo.Image.EntryPoint)))
             else:
-                MapBuffer.write('\n\n%s (Fixed Memory Address, BaseAddress=0x%010X,  EntryPoint=0x%010X)\n' % (ModuleName, BaseAddress, BaseAddress + ModuleInfo.Image.EntryPoint))
+                MapBuffer.append('\n\n%s (Fixed Memory Address, BaseAddress=0x%010X,  EntryPoint=0x%010X)\n' % (ModuleName, BaseAddress, BaseAddress + ModuleInfo.Image.EntryPoint))
             #
             # Add guid and general seciton section.
             #
             TextSectionAddress = 0
             DataSectionAddress = 0
@@ -1424,25 +1423,25 @@ class Build():
                 if SectionHeader[0] == '.text':
                     TextSectionAddress = SectionHeader[1]
                 elif SectionHeader[0] in ['.data', '.sdata']:
                     DataSectionAddress = SectionHeader[1]
             if AddrIsOffset:
-                MapBuffer.write('(GUID=%s, .textbaseaddress=-0x%010X, .databaseaddress=-0x%010X)\n' % (ModuleInfo.Guid, 0 - (BaseAddress + TextSectionAddress), 0 - (BaseAddress + DataSectionAddress)))
+                MapBuffer.append('(GUID=%s, .textbaseaddress=-0x%010X, .databaseaddress=-0x%010X)\n' % (ModuleInfo.Guid, 0 - (BaseAddress + TextSectionAddress), 0 - (BaseAddress + DataSectionAddress)))
             else:
-                MapBuffer.write('(GUID=%s, .textbaseaddress=0x%010X, .databaseaddress=0x%010X)\n' % (ModuleInfo.Guid, BaseAddress + TextSectionAddress, BaseAddress + DataSectionAddress))
+                MapBuffer.append('(GUID=%s, .textbaseaddress=0x%010X, .databaseaddress=0x%010X)\n' % (ModuleInfo.Guid, BaseAddress + TextSectionAddress, BaseAddress + DataSectionAddress))
             #
             # Add debug image full path.
             #
-            MapBuffer.write('(IMAGE=%s)\n\n' % (ModuleDebugImage))
+            MapBuffer.append('(IMAGE=%s)\n\n' % (ModuleDebugImage))
             #
             # Add funtion address
             #
             for Function in FunctionList:
                 if AddrIsOffset:
-                    MapBuffer.write('  -0x%010X    %s\n' % (0 - (BaseAddress + Function[1]), Function[0]))
+                    MapBuffer.append('  -0x%010X    %s\n' % (0 - (BaseAddress + Function[1]), Function[0]))
                 else:
-                    MapBuffer.write('  0x%010X    %s\n' % (BaseAddress + Function[1], Function[0]))
+                    MapBuffer.append('  0x%010X    %s\n' % (BaseAddress + Function[1], Function[0]))
             ImageMap.close()
 
             #
             # for SMM module in SMRAM, the SMRAM will be allocated from base to top.
             #
@@ -1473,19 +1472,19 @@ class Build():
                         # Replace GUID with module name
                         #
                         GuidString = MatchGuid.group()
                         if GuidString.upper() in ModuleList:
                             Line = Line.replace(GuidString, ModuleList[GuidString.upper()].Name)
-                    MapBuffer.write(Line)
+                    MapBuffer.append(Line)
                     #
                     # Add the debug image full path.
                     #
                     MatchGuid = GuidName.match(Line)
                     if MatchGuid is not None:
                         GuidString = MatchGuid.group().split("=")[1]
                         if GuidString.upper() in ModuleList:
-                            MapBuffer.write('(IMAGE=%s)\n' % (os.path.join(ModuleList[GuidString.upper()].DebugDir, ModuleList[GuidString.upper()].Name + '.efi')))
+                            MapBuffer.append('(IMAGE=%s)\n' % (os.path.join(ModuleList[GuidString.upper()].DebugDir, ModuleList[GuidString.upper()].Name + '.efi')))
 
                 FvMap.close()
 
     ## Collect MAP information of all modules
     #
@@ -1597,25 +1596,25 @@ class Build():
                 elif PcdInfo[0] == TAB_PCDS_PATCHABLE_LOAD_FIX_ADDRESS_SMM_PAGE_SIZE and len (SmmModuleList) > 0:
                     ReturnValue, ErrorInfo = PatchBinaryFile (EfiImage, PcdInfo[1], TAB_PCDS_PATCHABLE_LOAD_FIX_ADDRESS_SMM_PAGE_SIZE_DATA_TYPE, str (SmmSize // 0x1000))
                 if ReturnValue != 0:
                     EdkLogger.error("build", PARAMETER_INVALID, "Patch PCD value failed", ExtraData=ErrorInfo)
 
-        MapBuffer.write('PEI_CODE_PAGE_NUMBER      = 0x%x\n' % (PeiSize // 0x1000))
-        MapBuffer.write('BOOT_CODE_PAGE_NUMBER     = 0x%x\n' % (BtSize // 0x1000))
-        MapBuffer.write('RUNTIME_CODE_PAGE_NUMBER  = 0x%x\n' % (RtSize // 0x1000))
+        MapBuffer.append('PEI_CODE_PAGE_NUMBER      = 0x%x\n' % (PeiSize // 0x1000))
+        MapBuffer.append('BOOT_CODE_PAGE_NUMBER     = 0x%x\n' % (BtSize // 0x1000))
+        MapBuffer.append('RUNTIME_CODE_PAGE_NUMBER  = 0x%x\n' % (RtSize // 0x1000))
         if len (SmmModuleList) > 0:
-            MapBuffer.write('SMM_CODE_PAGE_NUMBER      = 0x%x\n' % (SmmSize // 0x1000))
+            MapBuffer.append('SMM_CODE_PAGE_NUMBER      = 0x%x\n' % (SmmSize // 0x1000))
 
         PeiBaseAddr = TopMemoryAddress - RtSize - BtSize
         BtBaseAddr  = TopMemoryAddress - RtSize
         RtBaseAddr  = TopMemoryAddress - ReservedRuntimeMemorySize
 
         self._RebaseModule (MapBuffer, PeiBaseAddr, PeiModuleList, TopMemoryAddress == 0)
         self._RebaseModule (MapBuffer, BtBaseAddr, BtModuleList, TopMemoryAddress == 0)
         self._RebaseModule (MapBuffer, RtBaseAddr, RtModuleList, TopMemoryAddress == 0)
         self._RebaseModule (MapBuffer, 0x1000, SmmModuleList, AddrIsOffset=False, ModeIsSmm=True)
-        MapBuffer.write('\n\n')
+        MapBuffer.append('\n\n')
         sys.stdout.write ("\n")
         sys.stdout.flush()
 
     ## Save platform Map file
     #
@@ -1625,12 +1624,11 @@ class Build():
         #
         MapFilePath = os.path.join(Wa.BuildDir, Wa.Name + '.map')
         #
         # Save address map into MAP file.
         #
-        SaveFileOnChange(MapFilePath, MapBuffer.getvalue(), False)
-        MapBuffer.close()
+        SaveFileOnChange(MapFilePath, ''.join(MapBuffer), False)
         if self.LoadFixAddress != 0:
             sys.stdout.write ("\nLoad Module At Fix Address Map file can be found at %s\n" % (MapFilePath))
         sys.stdout.flush()
 
     ## Build active platform for different build targets and different tool chains
@@ -1701,11 +1699,11 @@ class Build():
                             if Ma is None:
                                 continue
                             if not Ma.IsLibrary:
                                 ModuleList[Ma.Guid.upper()] = Ma
 
-                    MapBuffer = BytesIO('')
+                    MapBuffer = []
                     if self.LoadFixAddress != 0:
                         #
                         # Rebase module to the preferred memory address before GenFds
                         #
                         self._CollectModuleMapBuffer(MapBuffer, ModuleList)
@@ -1859,11 +1857,11 @@ class Build():
                             if Ma is None:
                                 continue
                             if not Ma.IsLibrary:
                                 ModuleList[Ma.Guid.upper()] = Ma
 
-                    MapBuffer = BytesIO('')
+                    MapBuffer = []
                     if self.LoadFixAddress != 0:
                         #
                         # Rebase module to the preferred memory address before GenFds
                         #
                         self._CollectModuleMapBuffer(MapBuffer, ModuleList)
@@ -2040,11 +2038,11 @@ class Build():
                             if not Ma.IsLibrary:
                                 ModuleList[Ma.Guid.upper()] = Ma
                     #
                     # Rebase module to the preferred memory address before GenFds
                     #
-                    MapBuffer = BytesIO('')
+                    MapBuffer = []
                     if self.LoadFixAddress != 0:
                         self._CollectModuleMapBuffer(MapBuffer, ModuleList)
 
                     if self.Fdf:
                         #
-- 
2.20.1.windows.1




* [Patch 32/33] BaseTools: ECC tool Python3 adaption
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (30 preceding siblings ...)
  2019-01-29  2:06 ` [Patch 31/33] BaseTools: Handle the bytes and str difference Feng, Bob C
@ 2019-01-29  2:06 ` Feng, Bob C
       [not found]   ` <20190902190211.GZ29255@bivouac.eciton.net>
  2019-01-29  2:06 ` [Patch v2 33/33] BaseTools: Eot " Feng, Bob C
  2019-01-29 13:07 ` [Patch v2 00/33] BaseTools python3 migration patch set Laszlo Ersek
  33 siblings, 1 reply; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:06 UTC (permalink / raw)
  To: edk2-devel; +Cc: Bob Feng, Liming Gao

v2:
The Python files under CParser4 are generated by antlr4 for Python 3
usage. They contain Python 3 specific syntax, for example type
annotations on function arguments, which is not compatible with
Python 2. This patch removes that syntax.

ECC tool Python3 adaption.
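The Python 3-only constructs in question are function annotations, which antlr4's Python3 target emits; under Python 2 they are a SyntaxError before the module can even be imported. An illustrative (hypothetical) generated signature:

```python
# Parameter/return annotations like these parse only on Python 3;
# Python 2 fails at compile time, not at call time.
def visitTerminal(node: object) -> None:
    return None

assert visitTerminal.__annotations__ == {'node': object, 'return': None}
```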

Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
---
 BaseTools/Source/Python/Ecc/{ => CParser3}/CLexer.py  |    0
 BaseTools/Source/Python/Ecc/{ => CParser3}/CParser.py |    0
 BaseTools/Source/Python/Ecc/CParser3/__init__.py      |    0
 BaseTools/Source/Python/Ecc/CParser4/C.g4             |  637 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 BaseTools/Source/Python/Ecc/CParser4/CLexer.py        |  632 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 BaseTools/Source/Python/Ecc/CParser4/CListener.py     |  815 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 BaseTools/Source/Python/Ecc/CParser4/CParser.py       | 6279 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 BaseTools/Source/Python/Ecc/CParser4/__init__.py      |    0
 BaseTools/Source/Python/Ecc/Check.py                  |    4 +-
 BaseTools/Source/Python/Ecc/CodeFragmentCollector.py  |   20 +--
 BaseTools/Source/Python/Ecc/Configuration.py          |    3 -
 BaseTools/Source/Python/Ecc/EccMain.py                |    2 +-
 BaseTools/Source/Python/Ecc/EccToolError.py           |    4 +-
 BaseTools/Source/Python/Ecc/FileProfile.py            |    2 +-
 BaseTools/Source/Python/Ecc/MetaDataParser.py         |    2 +-
 BaseTools/Source/Python/Ecc/c.py                      |    6 +-
 BaseTools/Source/Python/Ecc/config.ini                |    2 -
 17 files changed, 8385 insertions(+), 23 deletions(-)

diff --git a/BaseTools/Source/Python/Ecc/CLexer.py b/BaseTools/Source/Python/Ecc/CParser3/CLexer.py
similarity index 100%
rename from BaseTools/Source/Python/Ecc/CLexer.py
rename to BaseTools/Source/Python/Ecc/CParser3/CLexer.py
diff --git a/BaseTools/Source/Python/Ecc/CParser.py b/BaseTools/Source/Python/Ecc/CParser3/CParser.py
similarity index 100%
rename from BaseTools/Source/Python/Ecc/CParser.py
rename to BaseTools/Source/Python/Ecc/CParser3/CParser.py
diff --git a/BaseTools/Source/Python/Ecc/CParser3/__init__.py b/BaseTools/Source/Python/Ecc/CParser3/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/BaseTools/Source/Python/Ecc/CParser4/C.g4 b/BaseTools/Source/Python/Ecc/CParser4/C.g4
new file mode 100644
index 0000000000..89363b08c3
--- /dev/null
+++ b/BaseTools/Source/Python/Ecc/CParser4/C.g4
@@ -0,0 +1,637 @@
+/* @file
+ This file is used to be the grammar file of ECC tool
+
+ Copyright (c) 2009 - 2018, Intel Corporation. All rights reserved.<BR>
+ This program and the accompanying materials
+ are licensed and made available under the terms and conditions of the BSD License
+ which accompanies this distribution.  The full text of the license may be found at
+ http://opensource.org/licenses/bsd-license.php
+
+ THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+*/
+
+
+grammar C;
+options {
+    language=Python;
+}
+
+
+@header {
+## @file
+# The file defines the parser for C source files.
+#
+# THIS FILE IS AUTO-GENENERATED. PLEASE DON NOT MODIFY THIS FILE.
+# This file is generated by running:
+# java org.antlr.Tool C.g
+#
+# Copyright (c) 2009 - 2010, Intel Corporation  All rights reserved.
+#
+# This program and the accompanying materials are licensed and made available
+# under the terms and conditions of the BSD License which accompanies this
+# distribution.  The full text of the license may be found at:
+#   http://opensource.org/licenses/bsd-license.php
+#
+# THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+# WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+#
+##
+
+import Ecc.CodeFragment as CodeFragment
+import Ecc.FileProfile as FileProfile
+}
+
+@members {
+
+def printTokenInfo(self, line, offset, tokenText):
+    print(str(line)+ ',' + str(offset) + ':' + str(tokenText))
+
+def StorePredicateExpression(self, StartLine, StartOffset, EndLine, EndOffset, Text):
+    PredExp = CodeFragment.PredicateExpression(Text, (StartLine, StartOffset), (EndLine, EndOffset))
+    FileProfile.PredicateExpressionList.append(PredExp)
+
+def StoreEnumerationDefinition(self, StartLine, StartOffset, EndLine, EndOffset, Text):
+    EnumDef = CodeFragment.EnumerationDefinition(Text, (StartLine, StartOffset), (EndLine, EndOffset))
+    FileProfile.EnumerationDefinitionList.append(EnumDef)
+
+def StoreStructUnionDefinition(self, StartLine, StartOffset, EndLine, EndOffset, Text):
+    SUDef = CodeFragment.StructUnionDefinition(Text, (StartLine, StartOffset), (EndLine, EndOffset))
+    FileProfile.StructUnionDefinitionList.append(SUDef)
+
+def StoreTypedefDefinition(self, StartLine, StartOffset, EndLine, EndOffset, FromText, ToText):
+    Tdef = CodeFragment.TypedefDefinition(FromText, ToText, (StartLine, StartOffset), (EndLine, EndOffset))
+    FileProfile.TypedefDefinitionList.append(Tdef)
+
+def StoreFunctionDefinition(self, StartLine, StartOffset, EndLine, EndOffset, ModifierText, DeclText, LeftBraceLine, LeftBraceOffset, DeclLine, DeclOffset):
+    FuncDef = CodeFragment.FunctionDefinition(ModifierText, DeclText, (StartLine, StartOffset), (EndLine, EndOffset), (LeftBraceLine, LeftBraceOffset), (DeclLine, DeclOffset))
+    FileProfile.FunctionDefinitionList.append(FuncDef)
+
+def StoreVariableDeclaration(self, StartLine, StartOffset, EndLine, EndOffset, ModifierText, DeclText):
+    VarDecl = CodeFragment.VariableDeclaration(ModifierText, DeclText, (StartLine, StartOffset), (EndLine, EndOffset))
+    FileProfile.VariableDeclarationList.append(VarDecl)
+
+def StoreFunctionCalling(self, StartLine, StartOffset, EndLine, EndOffset, FuncName, ParamList):
+    FuncCall = CodeFragment.FunctionCalling(FuncName, ParamList, (StartLine, StartOffset), (EndLine, EndOffset))
+    FileProfile.FunctionCallingList.append(FuncCall)
+
+}
+
+translation_unit
+    : external_declaration*
+    ;
+
+
+external_declaration
+    :   ( declaration_specifiers? declarator declaration* '{' )
+    |   function_definition
+    |   declaration
+    |   macro_statement (';')?
+    ;
+
+function_definition
+locals [String ModifierText = '', String DeclText = '', int LBLine = 0, int LBOffset = 0, int DeclLine = 0, int DeclOffset = 0]
+@init {
+ModifierText = '';
+DeclText = '';
+LBLine = 0;
+LBOffset = 0;
+DeclLine = 0;
+DeclOffset = 0;
+}
+@after{
+self.StoreFunctionDefinition(localctx.start.line, localctx.start.column, localctx.stop.line, localctx.stop.column, ModifierText, DeclText, LBLine, LBOffset, DeclLine, DeclOffset)
+}
+    :    d=declaration_specifiers? declarator
+    (   declaration+ a=compound_statement  // K&R style
+    |   b=compound_statement        // ANSI style
+    )   {
+if localctx.d != None:
+    ModifierText = $declaration_specifiers.text
+else:
+    ModifierText = ''
+DeclText = $declarator.text
+DeclLine = $declarator.start.line
+DeclOffset = $declarator.start.column
+if localctx.a != None:
+    LBLine = $a.start.line
+    LBOffset = $a.start.column
+else:
+    LBLine = $b.start.line
+    LBOffset = $b.start.column
+        }
+    ;
+
+
+declaration_specifiers
+    :   (   storage_class_specifier
+        |   type_specifier
+        |   type_qualifier
+        )+
+    ;
+
+declaration
+    : a='typedef' b=declaration_specifiers? c=init_declarator_list d=';'
+    {
+if localctx.b is not None:
+    self.StoreTypedefDefinition(localctx.a.line, localctx.a.column, $d.line, localctx.d.column, $b.text, $c.text)
+else:
+    self.StoreTypedefDefinition(localctx.a.line, localctx.a.column, $d.line, localctx.d.column, '', $c.text)
+    }
+    | s=declaration_specifiers t=init_declarator_list? e=';'
+    {
+if localctx.t is not None:
+    self.StoreVariableDeclaration($s.start.line, $s.start.column, $t.start.line, $t.start.column, $s.text, $t.text)
+}
+    ;
+
+init_declarator_list
+    : init_declarator (',' init_declarator)*
+    ;
+
+init_declarator
+    : declarator ('=' initializer)?
+    ;
+
+storage_class_specifier
+    : 'extern'
+    | 'static'
+    | 'auto'
+    | 'register'
+    | 'STATIC'
+    ;
+
+type_specifier
+    : 'void'
+    | 'char'
+    | 'short'
+    | 'int'
+    | 'long'
+    | 'float'
+    | 'double'
+    | 'signed'
+    | 'unsigned'
+    | s=struct_or_union_specifier
+    {
+if localctx.s.stop is not None:
+    self.StoreStructUnionDefinition($s.start.line, $s.start.column, $s.stop.line, $s.stop.column, $s.text)
+}
+    | e=enum_specifier
+    {
+if localctx.e.stop is not None:
+    self.StoreEnumerationDefinition($e.start.line, $e.start.column, $e.stop.line, $e.stop.column, $e.text)
+}
+    | (IDENTIFIER type_qualifier* declarator)
+    |  type_id
+    ;
+
+type_id
+    :   IDENTIFIER
+        //{self.printTokenInfo($a.line, $a.pos, $a.text)}
+    ;
+
+struct_or_union_specifier
+    : struct_or_union IDENTIFIER? '{' struct_declaration_list '}'
+    | struct_or_union IDENTIFIER
+    ;
+
+struct_or_union
+    : 'struct'
+    | 'union'
+    ;
+
+struct_declaration_list
+    : struct_declaration+
+    ;
+
+struct_declaration
+    : specifier_qualifier_list struct_declarator_list ';'
+    ;
+
+specifier_qualifier_list
+    : ( type_qualifier | type_specifier )+
+    ;
+
+struct_declarator_list
+    : struct_declarator (',' struct_declarator)*
+    ;
+
+struct_declarator
+    : declarator (':' constant_expression)?
+    | ':' constant_expression
+    ;
+
+enum_specifier
+    : 'enum' '{' enumerator_list ','? '}'
+    | 'enum' IDENTIFIER '{' enumerator_list ','? '}'
+    | 'enum' IDENTIFIER
+    ;
+
+enumerator_list
+    : enumerator (',' enumerator)*
+    ;
+
+enumerator
+    : IDENTIFIER ('=' constant_expression)?
+    ;
+
+type_qualifier
+    : 'const'
+    | 'volatile'
+    | 'IN'
+    | 'OUT'
+    | 'OPTIONAL'
+    | 'CONST'
+    | 'UNALIGNED'
+    | 'VOLATILE'
+    | 'GLOBAL_REMOVE_IF_UNREFERENCED'
+    | 'EFIAPI'
+    | 'EFI_BOOTSERVICE'
+    | 'EFI_RUNTIMESERVICE'
+    | 'PACKED'
+    ;
+
+declarator
+    : pointer? ('EFIAPI')? ('EFI_BOOTSERVICE')? ('EFI_RUNTIMESERVICE')? direct_declarator
+//  | ('EFIAPI')? ('EFI_BOOTSERVICE')? ('EFI_RUNTIMESERVICE')? pointer? direct_declarator
+    | pointer
+    ;
+
+direct_declarator
+    : IDENTIFIER declarator_suffix*
+    | '(' ('EFIAPI')? declarator ')' declarator_suffix+
+    ;
+
+declarator_suffix
+    :   '[' constant_expression ']'
+    |   '[' ']'
+    |   '(' parameter_type_list ')'
+    |   '(' identifier_list ')'
+    |   '(' ')'
+    ;
+
+pointer
+    : '*' type_qualifier+ pointer?
+    | '*' pointer
+    | '*'
+    ;
+
+parameter_type_list
+    : parameter_list (',' ('OPTIONAL')? '...')?
+    ;
+
+parameter_list
+    : parameter_declaration (',' ('OPTIONAL')? parameter_declaration)*
+    ;
+
+parameter_declaration
+    : declaration_specifiers (declarator|abstract_declarator)* ('OPTIONAL')?
+    //accomerdate user-defined type only, no declarator follow.
+    | pointer* IDENTIFIER
+    ;
+
+identifier_list
+    : IDENTIFIER
+    (',' IDENTIFIER)*
+    ;
+
+type_name
+    : specifier_qualifier_list abstract_declarator?
+    | type_id
+    ;
+
+abstract_declarator
+    : pointer direct_abstract_declarator?
+    | direct_abstract_declarator
+    ;
+
+direct_abstract_declarator
+    :   ( '(' abstract_declarator ')' | abstract_declarator_suffix ) abstract_declarator_suffix*
+    ;
+
+abstract_declarator_suffix
+    :   '[' ']'
+    |   '[' constant_expression ']'
+    |   '(' ')'
+    |   '(' parameter_type_list ')'
+    ;
+
+initializer
+
+    : assignment_expression
+    | '{' initializer_list ','? '}'
+    ;
+
+initializer_list
+    : initializer (',' initializer )*
+    ;
+
+// E x p r e s s i o n s
+
+argument_expression_list
+    :   assignment_expression ('OPTIONAL')? (',' assignment_expression ('OPTIONAL')?)*
+    ;
+
+additive_expression
+    : (multiplicative_expression) ('+' multiplicative_expression | '-' multiplicative_expression)*
+    ;
+
+multiplicative_expression
+    : (cast_expression) ('*' cast_expression | '/' cast_expression | '%' cast_expression)*
+    ;
+
+cast_expression
+    : '(' type_name ')' cast_expression
+    | unary_expression
+    ;
+
+unary_expression
+    : postfix_expression
+    | '++' unary_expression
+    | '--' unary_expression
+    | unary_operator cast_expression
+    | 'sizeof' unary_expression
+    | 'sizeof' '(' type_name ')'
+    ;
+
+postfix_expression
+locals [FuncCallText='']
+@init
+    {
+self.FuncCallText=''
+    }
+    :   p=primary_expression {self.FuncCallText += $p.text}
+        (   '[' expression ']'
+        |   '(' a=')'{self.StoreFunctionCalling($p.start.line, $p.start.column, $a.line, localctx.a.column, self.FuncCallText, '')}
+        |   '(' c=argument_expression_list b=')' {self.StoreFunctionCalling($p.start.line, $p.start.column, $b.line, localctx.b.column, self.FuncCallText, $c.text)}
+        |   '(' macro_parameter_list ')'
+        |   '.' x=IDENTIFIER {self.FuncCallText += '.' + $x.text}
+        |   '*' y=IDENTIFIER {self.FuncCallText = $y.text}
+        |   '->' z=IDENTIFIER {self.FuncCallText += '->' + $z.text}
+        |   '++'
+        |   '--'
+        )*
+    ;
+
+macro_parameter_list
+    : parameter_declaration (',' parameter_declaration)*
+    ;
+
+unary_operator
+    : '&'
+    | '*'
+    | '+'
+    | '-'
+    | '~'
+    | '!'
+    ;
+
+primary_expression
+    : IDENTIFIER
+    | constant
+    | '(' expression ')'
+    ;
+
+constant
+    :   HEX_LITERAL
+    |   OCTAL_LITERAL
+    |   DECIMAL_LITERAL
+    |   CHARACTER_LITERAL
+    |   (IDENTIFIER* STRING_LITERAL+)+ IDENTIFIER*
+    |   FLOATING_POINT_LITERAL
+    ;
+
+/////
+
+expression
+    : assignment_expression (',' assignment_expression)*
+    ;
+
+constant_expression
+    : conditional_expression
+    ;
+
+assignment_expression
+    : lvalue assignment_operator assignment_expression
+    | conditional_expression
+    ;
+
+lvalue
+    :   unary_expression
+    ;
+
+assignment_operator
+    : '='
+    | '*='
+    | '/='
+    | '%='
+    | '+='
+    | '-='
+    | '<<='
+    | '>>='
+    | '&='
+    | '^='
+    | '|='
+    ;
+
+conditional_expression
+    : e=logical_or_expression ('?' expression ':' conditional_expression {self.StorePredicateExpression($e.start.line, $e.start.column, $e.stop.line, $e.stop.column, $e.text)})?
+    ;
+
+logical_or_expression
+    : logical_and_expression ('||' logical_and_expression)*
+    ;
+
+logical_and_expression
+    : inclusive_or_expression ('&&' inclusive_or_expression)*
+    ;
+
+inclusive_or_expression
+    : exclusive_or_expression ('|' exclusive_or_expression)*
+    ;
+
+exclusive_or_expression
+    : and_expression ('^' and_expression)*
+    ;
+
+and_expression
+    : equality_expression ('&' equality_expression)*
+    ;
+equality_expression
+    : relational_expression (('=='|'!=') relational_expression )*
+    ;
+
+relational_expression
+    : shift_expression (('<'|'>'|'<='|'>=') shift_expression)*
+    ;
+
+shift_expression
+    : additive_expression (('<<'|'>>') additive_expression)*
+    ;
+
+// S t a t e m e n t s
+
+statement
+    : labeled_statement
+    | compound_statement
+    | expression_statement
+    | selection_statement
+    | iteration_statement
+    | jump_statement
+    | macro_statement
+    | asm2_statement
+    | asm1_statement
+    | asm_statement
+    | declaration
+    ;
+
+asm2_statement
+    : '__asm__'? IDENTIFIER '(' (~(';'))* ')' ';'
+    ;
+
+asm1_statement
+    : '_asm' '{' (~('}'))* '}'
+    ;
+
+asm_statement
+    : '__asm' '{' (~('}'))* '}'
+    ;
+
+macro_statement
+    : IDENTIFIER '(' declaration*  statement_list? expression? ')'
+    ;
+
+labeled_statement
+    : IDENTIFIER ':' statement
+    | 'case' constant_expression ':' statement
+    | 'default' ':' statement
+    ;
+
+compound_statement
+    : '{' declaration* statement_list? '}'
+    ;
+
+statement_list
+    : statement+
+    ;
+
+expression_statement
+    : ';'
+    | expression ';'
+    ;
+
+selection_statement
+    : 'if' '(' e=expression ')' {self.StorePredicateExpression($e.start.line, $e.start.column, $e.stop.line, $e.stop.column, $e.text)} statement (:'else' statement)?
+    | 'switch' '(' expression ')' statement
+    ;
+
+iteration_statement
+    : 'while' '(' e=expression ')' statement {self.StorePredicateExpression($e.start.line, $e.start.column, $e.stop.line, $e.stop.column, $e.text)}
+    | 'do' statement 'while' '(' e=expression ')' ';' {self.StorePredicateExpression($e.start.line, $e.start.column, $e.stop.line, $e.stop.column, $e.text)}
+    //| 'for' '(' expression_statement e=expression_statement expression? ')' statement {self.StorePredicateExpression($e.start.line, $e.start.column, $e.stop.line, $e.stop.column, $e.text)}
+    ;
+
+jump_statement
+    : 'goto' IDENTIFIER ';'
+    | 'continue' ';'
+    | 'break' ';'
+    | 'return' ';'
+    | 'return' expression ';'
+    ;
+
+IDENTIFIER
+    :   LETTER (LETTER|'0'..'9')*
+    ;
+
+fragment
+LETTER
+    :   '$'
+    |  'A'..'Z'
+    |  'a'..'z'
+    |   '_'
+    ;
+
+CHARACTER_LITERAL
+    :   ('L')? '\'' ( EscapeSequence | ~('\''|'\\') ) '\''
+    ;
+
+STRING_LITERAL
+    :  ('L')? '"' ( EscapeSequence | ~('\\'|'"') )* '"'
+    ;
+
+HEX_LITERAL : '0' ('x'|'X') HexDigit+ IntegerTypeSuffix? ;
+
+DECIMAL_LITERAL : ('0' | '1'..'9' '0'..'9'*) IntegerTypeSuffix? ;
+
+OCTAL_LITERAL : '0' ('0'..'7')+ IntegerTypeSuffix? ;
+
+fragment
+HexDigit : ('0'..'9'|'a'..'f'|'A'..'F') ;
+
+fragment
+IntegerTypeSuffix
+    : ('u'|'U')
+    | ('l'|'L')
+    | ('u'|'U')  ('l'|'L')
+    | ('u'|'U')  ('l'|'L') ('l'|'L')
+    ;
+
+FLOATING_POINT_LITERAL
+    :   ('0'..'9')+ '.' ('0'..'9')* Exponent? FloatTypeSuffix?
+    |   '.' ('0'..'9')+ Exponent? FloatTypeSuffix?
+    |   ('0'..'9')+ Exponent FloatTypeSuffix?
+    |   ('0'..'9')+ Exponent? FloatTypeSuffix
+    ;
+
+fragment
+Exponent : ('e'|'E') ('+'|'-')? ('0'..'9')+ ;
+
+fragment
+FloatTypeSuffix : ('f'|'F'|'d'|'D') ;
+
+fragment
+EscapeSequence
+    :  '\\' ('b'|'t'|'n'|'f'|'r'|'\''|'\\')
+    |   OctalEscape
+    ;
+
+fragment
+OctalEscape
+    :   '\\' ('0'..'3') ('0'..'7') ('0'..'7')
+    |   '\\' ('0'..'7') ('0'..'7')
+    |   '\\' ('0'..'7')
+    ;
+
+fragment
+UnicodeEscape
+    :   '\\' 'u' HexDigit HexDigit HexDigit HexDigit
+    ;
+
+WS  :  (' '|'\r'|'\t'|'\u000C'|'\n')
+       -> channel(HIDDEN)
+    ;
+
+// ignore '\' used for line concatenation
+BS  : ('\\')
+      -> channel(HIDDEN)
+    ;
+
+UnicodeVocabulary
+    : '\u0003'..'\uFFFE'
+    ;
+
+COMMENT
+    : '/*' .*? '*/'
+      -> channel(HIDDEN)
+    ;
+
+LINE_COMMENT
+    : '//' ~('\n'|'\r')* '\r'? '\n'
+      -> channel(HIDDEN)
+    ;
+
+// ignore #line info for now
+LINE_COMMAND
+    : '#' ~('\n'|'\r')* '\r'? '\n'
+      -> channel(HIDDEN)
+    ;
diff --git a/BaseTools/Source/Python/Ecc/CParser4/CLexer.py b/BaseTools/Source/Python/Ecc/CParser4/CLexer.py
new file mode 100644
index 0000000000..4e2c7d0354
--- /dev/null
+++ b/BaseTools/Source/Python/Ecc/CParser4/CLexer.py
@@ -0,0 +1,632 @@
+# Generated from C.g4 by ANTLR 4.7.1
+from antlr4 import *
+from io import StringIO
+from typing.io import TextIO
+import sys
+
+
+## @file
+# The file defines the parser for C source files.
+#
+# THIS FILE IS AUTO-GENERATED. PLEASE DO NOT MODIFY THIS FILE.
+# This file is generated by running:
+# java org.antlr.v4.Tool C.g4
+#
+# Copyright (c) 2009 - 2010, Intel Corporation  All rights reserved.
+#
+# This program and the accompanying materials are licensed and made available
+# under the terms and conditions of the BSD License which accompanies this
+# distribution.  The full text of the license may be found at:
+#   http://opensource.org/licenses/bsd-license.php
+#
+# THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+# WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+#
+##
+
+import Ecc.CodeFragment as CodeFragment
+import Ecc.FileProfile as FileProfile
+
+def serializedATN():
+    with StringIO() as buf:
+        buf.write("\3\u608b\ua72a\u8133\ub9ed\u417c\u3be7\u7786\u5964\2k")
+        buf.write("\u0383\b\1\4\2\t\2\4\3\t\3\4\4\t\4\4\5\t\5\4\6\t\6\4\7")
+        buf.write("\t\7\4\b\t\b\4\t\t\t\4\n\t\n\4\13\t\13\4\f\t\f\4\r\t\r")
+        buf.write("\4\16\t\16\4\17\t\17\4\20\t\20\4\21\t\21\4\22\t\22\4\23")
+        buf.write("\t\23\4\24\t\24\4\25\t\25\4\26\t\26\4\27\t\27\4\30\t\30")
+        buf.write("\4\31\t\31\4\32\t\32\4\33\t\33\4\34\t\34\4\35\t\35\4\36")
+        buf.write("\t\36\4\37\t\37\4 \t \4!\t!\4\"\t\"\4#\t#\4$\t$\4%\t%")
+        buf.write("\4&\t&\4\'\t\'\4(\t(\4)\t)\4*\t*\4+\t+\4,\t,\4-\t-\4.")
+        buf.write("\t.\4/\t/\4\60\t\60\4\61\t\61\4\62\t\62\4\63\t\63\4\64")
+        buf.write("\t\64\4\65\t\65\4\66\t\66\4\67\t\67\48\t8\49\t9\4:\t:")
+        buf.write("\4;\t;\4<\t<\4=\t=\4>\t>\4?\t?\4@\t@\4A\tA\4B\tB\4C\t")
+        buf.write("C\4D\tD\4E\tE\4F\tF\4G\tG\4H\tH\4I\tI\4J\tJ\4K\tK\4L\t")
+        buf.write("L\4M\tM\4N\tN\4O\tO\4P\tP\4Q\tQ\4R\tR\4S\tS\4T\tT\4U\t")
+        buf.write("U\4V\tV\4W\tW\4X\tX\4Y\tY\4Z\tZ\4[\t[\4\\\t\\\4]\t]\4")
+        buf.write("^\t^\4_\t_\4`\t`\4a\ta\4b\tb\4c\tc\4d\td\4e\te\4f\tf\4")
+        buf.write("g\tg\4h\th\4i\ti\4j\tj\4k\tk\4l\tl\4m\tm\4n\tn\4o\to\4")
+        buf.write("p\tp\4q\tq\4r\tr\3\2\3\2\3\3\3\3\3\4\3\4\3\4\3\4\3\4\3")
+        buf.write("\4\3\4\3\4\3\5\3\5\3\6\3\6\3\7\3\7\3\7\3\7\3\7\3\7\3\7")
+        buf.write("\3\b\3\b\3\b\3\b\3\b\3\b\3\b\3\t\3\t\3\t\3\t\3\t\3\n\3")
+        buf.write("\n\3\n\3\n\3\n\3\n\3\n\3\n\3\n\3\13\3\13\3\13\3\13\3\13")
+        buf.write("\3\13\3\13\3\f\3\f\3\f\3\f\3\f\3\r\3\r\3\r\3\r\3\r\3\16")
+        buf.write("\3\16\3\16\3\16\3\16\3\16\3\17\3\17\3\17\3\17\3\20\3\20")
+        buf.write("\3\20\3\20\3\20\3\21\3\21\3\21\3\21\3\21\3\21\3\22\3\22")
+        buf.write("\3\22\3\22\3\22\3\22\3\22\3\23\3\23\3\23\3\23\3\23\3\23")
+        buf.write("\3\23\3\24\3\24\3\24\3\24\3\24\3\24\3\24\3\24\3\24\3\25")
+        buf.write("\3\25\3\26\3\26\3\26\3\26\3\26\3\26\3\26\3\27\3\27\3\27")
+        buf.write("\3\27\3\27\3\27\3\30\3\30\3\31\3\31\3\31\3\31\3\31\3\32")
+        buf.write("\3\32\3\32\3\32\3\32\3\32\3\33\3\33\3\33\3\33\3\33\3\33")
+        buf.write("\3\33\3\33\3\33\3\34\3\34\3\34\3\35\3\35\3\35\3\35\3\36")
+        buf.write("\3\36\3\36\3\36\3\36\3\36\3\36\3\36\3\36\3\37\3\37\3\37")
+        buf.write("\3\37\3\37\3\37\3 \3 \3 \3 \3 \3 \3 \3 \3 \3 \3!\3!\3")
+        buf.write("!\3!\3!\3!\3!\3!\3!\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3")
+        buf.write("\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"")
+        buf.write("\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3#\3#\3#\3#\3#\3#\3#")
+        buf.write("\3$\3$\3$\3$\3$\3$\3$\3$\3$\3$\3$\3$\3$\3$\3$\3$\3%\3")
+        buf.write("%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3")
+        buf.write("&\3&\3&\3&\3&\3&\3&\3\'\3\'\3(\3(\3)\3)\3*\3*\3+\3+\3")
+        buf.write(",\3,\3,\3,\3-\3-\3.\3.\3/\3/\3\60\3\60\3\61\3\61\3\61")
+        buf.write("\3\62\3\62\3\62\3\63\3\63\3\63\3\63\3\63\3\63\3\63\3\64")
+        buf.write("\3\64\3\65\3\65\3\65\3\66\3\66\3\67\3\67\38\38\39\39\3")
+        buf.write("9\3:\3:\3:\3;\3;\3;\3<\3<\3<\3=\3=\3=\3>\3>\3>\3>\3?\3")
+        buf.write("?\3?\3?\3@\3@\3@\3A\3A\3A\3B\3B\3B\3C\3C\3D\3D\3D\3E\3")
+        buf.write("E\3E\3F\3F\3G\3G\3H\3H\3H\3I\3I\3I\3J\3J\3K\3K\3L\3L\3")
+        buf.write("L\3M\3M\3M\3N\3N\3N\3O\3O\3O\3P\3P\3P\3P\3P\3P\3P\3P\3")
+        buf.write("Q\3Q\3Q\3Q\3Q\3R\3R\3R\3R\3R\3R\3S\3S\3S\3S\3S\3T\3T\3")
+        buf.write("T\3T\3T\3T\3T\3T\3U\3U\3U\3V\3V\3V\3V\3V\3W\3W\3W\3W\3")
+        buf.write("W\3W\3W\3X\3X\3X\3X\3X\3X\3Y\3Y\3Y\3Z\3Z\3Z\3Z\3Z\3[\3")
+        buf.write("[\3[\3[\3[\3[\3[\3[\3[\3\\\3\\\3\\\3\\\3\\\3\\\3]\3]\3")
+        buf.write("]\3]\3]\3]\3]\3^\3^\3^\7^\u02b2\n^\f^\16^\u02b5\13^\3")
+        buf.write("_\3_\3`\5`\u02ba\n`\3`\3`\3`\5`\u02bf\n`\3`\3`\3a\5a\u02c4")
+        buf.write("\na\3a\3a\3a\7a\u02c9\na\fa\16a\u02cc\13a\3a\3a\3b\3b")
+        buf.write("\3b\6b\u02d3\nb\rb\16b\u02d4\3b\5b\u02d8\nb\3c\3c\3c\7")
+        buf.write("c\u02dd\nc\fc\16c\u02e0\13c\5c\u02e2\nc\3c\5c\u02e5\n")
+        buf.write("c\3d\3d\6d\u02e9\nd\rd\16d\u02ea\3d\5d\u02ee\nd\3e\3e")
+        buf.write("\3f\3f\3f\3f\3f\3f\5f\u02f8\nf\3g\6g\u02fb\ng\rg\16g\u02fc")
+        buf.write("\3g\3g\7g\u0301\ng\fg\16g\u0304\13g\3g\5g\u0307\ng\3g")
+        buf.write("\5g\u030a\ng\3g\3g\6g\u030e\ng\rg\16g\u030f\3g\5g\u0313")
+        buf.write("\ng\3g\5g\u0316\ng\3g\6g\u0319\ng\rg\16g\u031a\3g\3g\5")
+        buf.write("g\u031f\ng\3g\6g\u0322\ng\rg\16g\u0323\3g\5g\u0327\ng")
+        buf.write("\3g\5g\u032a\ng\3h\3h\5h\u032e\nh\3h\6h\u0331\nh\rh\16")
+        buf.write("h\u0332\3i\3i\3j\3j\3j\5j\u033a\nj\3k\3k\3k\3k\3k\3k\3")
+        buf.write("k\3k\3k\5k\u0345\nk\3l\3l\3l\3l\3l\3l\3l\3m\3m\3m\3m\3")
+        buf.write("n\3n\3n\3n\3o\3o\3p\3p\3p\3p\7p\u035c\np\fp\16p\u035f")
+        buf.write("\13p\3p\3p\3p\3p\3p\3q\3q\3q\3q\7q\u036a\nq\fq\16q\u036d")
+        buf.write("\13q\3q\5q\u0370\nq\3q\3q\3q\3q\3r\3r\7r\u0378\nr\fr\16")
+        buf.write("r\u037b\13r\3r\5r\u037e\nr\3r\3r\3r\3r\3\u035d\2s\3\3")
+        buf.write("\5\4\7\5\t\6\13\7\r\b\17\t\21\n\23\13\25\f\27\r\31\16")
+        buf.write("\33\17\35\20\37\21!\22#\23%\24\'\25)\26+\27-\30/\31\61")
+        buf.write("\32\63\33\65\34\67\359\36;\37= ?!A\"C#E$G%I&K\'M(O)Q*")
+        buf.write("S+U,W-Y.[/]\60_\61a\62c\63e\64g\65i\66k\67m8o9q:s;u<w")
+        buf.write("=y>{?}@\177A\u0081B\u0083C\u0085D\u0087E\u0089F\u008b")
+        buf.write("G\u008dH\u008fI\u0091J\u0093K\u0095L\u0097M\u0099N\u009b")
+        buf.write("O\u009dP\u009fQ\u00a1R\u00a3S\u00a5T\u00a7U\u00a9V\u00ab")
+        buf.write("W\u00adX\u00afY\u00b1Z\u00b3[\u00b5\\\u00b7]\u00b9^\u00bb")
+        buf.write("_\u00bd\2\u00bf`\u00c1a\u00c3b\u00c5c\u00c7d\u00c9\2\u00cb")
+        buf.write("\2\u00cde\u00cf\2\u00d1\2\u00d3\2\u00d5\2\u00d7\2\u00d9")
+        buf.write("f\u00dbg\u00ddh\u00dfi\u00e1j\u00e3k\3\2\20\6\2&&C\\a")
+        buf.write("ac|\4\2))^^\4\2$$^^\4\2ZZzz\5\2\62;CHch\6\2NNWWnnww\4")
+        buf.write("\2WWww\4\2NNnn\4\2GGgg\4\2--//\6\2FFHHffhh\t\2))^^ddh")
+        buf.write("hppttvv\5\2\13\f\16\17\"\"\4\2\f\f\17\17\2\u03a2\2\3\3")
+        buf.write("\2\2\2\2\5\3\2\2\2\2\7\3\2\2\2\2\t\3\2\2\2\2\13\3\2\2")
+        buf.write("\2\2\r\3\2\2\2\2\17\3\2\2\2\2\21\3\2\2\2\2\23\3\2\2\2")
+        buf.write("\2\25\3\2\2\2\2\27\3\2\2\2\2\31\3\2\2\2\2\33\3\2\2\2\2")
+        buf.write("\35\3\2\2\2\2\37\3\2\2\2\2!\3\2\2\2\2#\3\2\2\2\2%\3\2")
+        buf.write("\2\2\2\'\3\2\2\2\2)\3\2\2\2\2+\3\2\2\2\2-\3\2\2\2\2/\3")
+        buf.write("\2\2\2\2\61\3\2\2\2\2\63\3\2\2\2\2\65\3\2\2\2\2\67\3\2")
+        buf.write("\2\2\29\3\2\2\2\2;\3\2\2\2\2=\3\2\2\2\2?\3\2\2\2\2A\3")
+        buf.write("\2\2\2\2C\3\2\2\2\2E\3\2\2\2\2G\3\2\2\2\2I\3\2\2\2\2K")
+        buf.write("\3\2\2\2\2M\3\2\2\2\2O\3\2\2\2\2Q\3\2\2\2\2S\3\2\2\2\2")
+        buf.write("U\3\2\2\2\2W\3\2\2\2\2Y\3\2\2\2\2[\3\2\2\2\2]\3\2\2\2")
+        buf.write("\2_\3\2\2\2\2a\3\2\2\2\2c\3\2\2\2\2e\3\2\2\2\2g\3\2\2")
+        buf.write("\2\2i\3\2\2\2\2k\3\2\2\2\2m\3\2\2\2\2o\3\2\2\2\2q\3\2")
+        buf.write("\2\2\2s\3\2\2\2\2u\3\2\2\2\2w\3\2\2\2\2y\3\2\2\2\2{\3")
+        buf.write("\2\2\2\2}\3\2\2\2\2\177\3\2\2\2\2\u0081\3\2\2\2\2\u0083")
+        buf.write("\3\2\2\2\2\u0085\3\2\2\2\2\u0087\3\2\2\2\2\u0089\3\2\2")
+        buf.write("\2\2\u008b\3\2\2\2\2\u008d\3\2\2\2\2\u008f\3\2\2\2\2\u0091")
+        buf.write("\3\2\2\2\2\u0093\3\2\2\2\2\u0095\3\2\2\2\2\u0097\3\2\2")
+        buf.write("\2\2\u0099\3\2\2\2\2\u009b\3\2\2\2\2\u009d\3\2\2\2\2\u009f")
+        buf.write("\3\2\2\2\2\u00a1\3\2\2\2\2\u00a3\3\2\2\2\2\u00a5\3\2\2")
+        buf.write("\2\2\u00a7\3\2\2\2\2\u00a9\3\2\2\2\2\u00ab\3\2\2\2\2\u00ad")
+        buf.write("\3\2\2\2\2\u00af\3\2\2\2\2\u00b1\3\2\2\2\2\u00b3\3\2\2")
+        buf.write("\2\2\u00b5\3\2\2\2\2\u00b7\3\2\2\2\2\u00b9\3\2\2\2\2\u00bb")
+        buf.write("\3\2\2\2\2\u00bf\3\2\2\2\2\u00c1\3\2\2\2\2\u00c3\3\2\2")
+        buf.write("\2\2\u00c5\3\2\2\2\2\u00c7\3\2\2\2\2\u00cd\3\2\2\2\2\u00d9")
+        buf.write("\3\2\2\2\2\u00db\3\2\2\2\2\u00dd\3\2\2\2\2\u00df\3\2\2")
+        buf.write("\2\2\u00e1\3\2\2\2\2\u00e3\3\2\2\2\3\u00e5\3\2\2\2\5\u00e7")
+        buf.write("\3\2\2\2\7\u00e9\3\2\2\2\t\u00f1\3\2\2\2\13\u00f3\3\2")
+        buf.write("\2\2\r\u00f5\3\2\2\2\17\u00fc\3\2\2\2\21\u0103\3\2\2\2")
+        buf.write("\23\u0108\3\2\2\2\25\u0111\3\2\2\2\27\u0118\3\2\2\2\31")
+        buf.write("\u011d\3\2\2\2\33\u0122\3\2\2\2\35\u0128\3\2\2\2\37\u012c")
+        buf.write("\3\2\2\2!\u0131\3\2\2\2#\u0137\3\2\2\2%\u013e\3\2\2\2")
+        buf.write("\'\u0145\3\2\2\2)\u014e\3\2\2\2+\u0150\3\2\2\2-\u0157")
+        buf.write("\3\2\2\2/\u015d\3\2\2\2\61\u015f\3\2\2\2\63\u0164\3\2")
+        buf.write("\2\2\65\u016a\3\2\2\2\67\u0173\3\2\2\29\u0176\3\2\2\2")
+        buf.write(";\u017a\3\2\2\2=\u0183\3\2\2\2?\u0189\3\2\2\2A\u0193\3")
+        buf.write("\2\2\2C\u019c\3\2\2\2E\u01ba\3\2\2\2G\u01c1\3\2\2\2I\u01d1")
+        buf.write("\3\2\2\2K\u01e4\3\2\2\2M\u01eb\3\2\2\2O\u01ed\3\2\2\2")
+        buf.write("Q\u01ef\3\2\2\2S\u01f1\3\2\2\2U\u01f3\3\2\2\2W\u01f5\3")
+        buf.write("\2\2\2Y\u01f9\3\2\2\2[\u01fb\3\2\2\2]\u01fd\3\2\2\2_\u01ff")
+        buf.write("\3\2\2\2a\u0201\3\2\2\2c\u0204\3\2\2\2e\u0207\3\2\2\2")
+        buf.write("g\u020e\3\2\2\2i\u0210\3\2\2\2k\u0213\3\2\2\2m\u0215\3")
+        buf.write("\2\2\2o\u0217\3\2\2\2q\u0219\3\2\2\2s\u021c\3\2\2\2u\u021f")
+        buf.write("\3\2\2\2w\u0222\3\2\2\2y\u0225\3\2\2\2{\u0228\3\2\2\2")
+        buf.write("}\u022c\3\2\2\2\177\u0230\3\2\2\2\u0081\u0233\3\2\2\2")
+        buf.write("\u0083\u0236\3\2\2\2\u0085\u0239\3\2\2\2\u0087\u023b\3")
+        buf.write("\2\2\2\u0089\u023e\3\2\2\2\u008b\u0241\3\2\2\2\u008d\u0243")
+        buf.write("\3\2\2\2\u008f\u0245\3\2\2\2\u0091\u0248\3\2\2\2\u0093")
+        buf.write("\u024b\3\2\2\2\u0095\u024d\3\2\2\2\u0097\u024f\3\2\2\2")
+        buf.write("\u0099\u0252\3\2\2\2\u009b\u0255\3\2\2\2\u009d\u0258\3")
+        buf.write("\2\2\2\u009f\u025b\3\2\2\2\u00a1\u0263\3\2\2\2\u00a3\u0268")
+        buf.write("\3\2\2\2\u00a5\u026e\3\2\2\2\u00a7\u0273\3\2\2\2\u00a9")
+        buf.write("\u027b\3\2\2\2\u00ab\u027e\3\2\2\2\u00ad\u0283\3\2\2\2")
+        buf.write("\u00af\u028a\3\2\2\2\u00b1\u0290\3\2\2\2\u00b3\u0293\3")
+        buf.write("\2\2\2\u00b5\u0298\3\2\2\2\u00b7\u02a1\3\2\2\2\u00b9\u02a7")
+        buf.write("\3\2\2\2\u00bb\u02ae\3\2\2\2\u00bd\u02b6\3\2\2\2\u00bf")
+        buf.write("\u02b9\3\2\2\2\u00c1\u02c3\3\2\2\2\u00c3\u02cf\3\2\2\2")
+        buf.write("\u00c5\u02e1\3\2\2\2\u00c7\u02e6\3\2\2\2\u00c9\u02ef\3")
+        buf.write("\2\2\2\u00cb\u02f7\3\2\2\2\u00cd\u0329\3\2\2\2\u00cf\u032b")
+        buf.write("\3\2\2\2\u00d1\u0334\3\2\2\2\u00d3\u0339\3\2\2\2\u00d5")
+        buf.write("\u0344\3\2\2\2\u00d7\u0346\3\2\2\2\u00d9\u034d\3\2\2\2")
+        buf.write("\u00db\u0351\3\2\2\2\u00dd\u0355\3\2\2\2\u00df\u0357\3")
+        buf.write("\2\2\2\u00e1\u0365\3\2\2\2\u00e3\u0375\3\2\2\2\u00e5\u00e6")
+        buf.write("\7}\2\2\u00e6\4\3\2\2\2\u00e7\u00e8\7=\2\2\u00e8\6\3\2")
+        buf.write("\2\2\u00e9\u00ea\7v\2\2\u00ea\u00eb\7{\2\2\u00eb\u00ec")
+        buf.write("\7r\2\2\u00ec\u00ed\7g\2\2\u00ed\u00ee\7f\2\2\u00ee\u00ef")
+        buf.write("\7g\2\2\u00ef\u00f0\7h\2\2\u00f0\b\3\2\2\2\u00f1\u00f2")
+        buf.write("\7.\2\2\u00f2\n\3\2\2\2\u00f3\u00f4\7?\2\2\u00f4\f\3\2")
+        buf.write("\2\2\u00f5\u00f6\7g\2\2\u00f6\u00f7\7z\2\2\u00f7\u00f8")
+        buf.write("\7v\2\2\u00f8\u00f9\7g\2\2\u00f9\u00fa\7t\2\2\u00fa\u00fb")
+        buf.write("\7p\2\2\u00fb\16\3\2\2\2\u00fc\u00fd\7u\2\2\u00fd\u00fe")
+        buf.write("\7v\2\2\u00fe\u00ff\7c\2\2\u00ff\u0100\7v\2\2\u0100\u0101")
+        buf.write("\7k\2\2\u0101\u0102\7e\2\2\u0102\20\3\2\2\2\u0103\u0104")
+        buf.write("\7c\2\2\u0104\u0105\7w\2\2\u0105\u0106\7v\2\2\u0106\u0107")
+        buf.write("\7q\2\2\u0107\22\3\2\2\2\u0108\u0109\7t\2\2\u0109\u010a")
+        buf.write("\7g\2\2\u010a\u010b\7i\2\2\u010b\u010c\7k\2\2\u010c\u010d")
+        buf.write("\7u\2\2\u010d\u010e\7v\2\2\u010e\u010f\7g\2\2\u010f\u0110")
+        buf.write("\7t\2\2\u0110\24\3\2\2\2\u0111\u0112\7U\2\2\u0112\u0113")
+        buf.write("\7V\2\2\u0113\u0114\7C\2\2\u0114\u0115\7V\2\2\u0115\u0116")
+        buf.write("\7K\2\2\u0116\u0117\7E\2\2\u0117\26\3\2\2\2\u0118\u0119")
+        buf.write("\7x\2\2\u0119\u011a\7q\2\2\u011a\u011b\7k\2\2\u011b\u011c")
+        buf.write("\7f\2\2\u011c\30\3\2\2\2\u011d\u011e\7e\2\2\u011e\u011f")
+        buf.write("\7j\2\2\u011f\u0120\7c\2\2\u0120\u0121\7t\2\2\u0121\32")
+        buf.write("\3\2\2\2\u0122\u0123\7u\2\2\u0123\u0124\7j\2\2\u0124\u0125")
+        buf.write("\7q\2\2\u0125\u0126\7t\2\2\u0126\u0127\7v\2\2\u0127\34")
+        buf.write("\3\2\2\2\u0128\u0129\7k\2\2\u0129\u012a\7p\2\2\u012a\u012b")
+        buf.write("\7v\2\2\u012b\36\3\2\2\2\u012c\u012d\7n\2\2\u012d\u012e")
+        buf.write("\7q\2\2\u012e\u012f\7p\2\2\u012f\u0130\7i\2\2\u0130 \3")
+        buf.write("\2\2\2\u0131\u0132\7h\2\2\u0132\u0133\7n\2\2\u0133\u0134")
+        buf.write("\7q\2\2\u0134\u0135\7c\2\2\u0135\u0136\7v\2\2\u0136\"")
+        buf.write("\3\2\2\2\u0137\u0138\7f\2\2\u0138\u0139\7q\2\2\u0139\u013a")
+        buf.write("\7w\2\2\u013a\u013b\7d\2\2\u013b\u013c\7n\2\2\u013c\u013d")
+        buf.write("\7g\2\2\u013d$\3\2\2\2\u013e\u013f\7u\2\2\u013f\u0140")
+        buf.write("\7k\2\2\u0140\u0141\7i\2\2\u0141\u0142\7p\2\2\u0142\u0143")
+        buf.write("\7g\2\2\u0143\u0144\7f\2\2\u0144&\3\2\2\2\u0145\u0146")
+        buf.write("\7w\2\2\u0146\u0147\7p\2\2\u0147\u0148\7u\2\2\u0148\u0149")
+        buf.write("\7k\2\2\u0149\u014a\7i\2\2\u014a\u014b\7p\2\2\u014b\u014c")
+        buf.write("\7g\2\2\u014c\u014d\7f\2\2\u014d(\3\2\2\2\u014e\u014f")
+        buf.write("\7\177\2\2\u014f*\3\2\2\2\u0150\u0151\7u\2\2\u0151\u0152")
+        buf.write("\7v\2\2\u0152\u0153\7t\2\2\u0153\u0154\7w\2\2\u0154\u0155")
+        buf.write("\7e\2\2\u0155\u0156\7v\2\2\u0156,\3\2\2\2\u0157\u0158")
+        buf.write("\7w\2\2\u0158\u0159\7p\2\2\u0159\u015a\7k\2\2\u015a\u015b")
+        buf.write("\7q\2\2\u015b\u015c\7p\2\2\u015c.\3\2\2\2\u015d\u015e")
+        buf.write("\7<\2\2\u015e\60\3\2\2\2\u015f\u0160\7g\2\2\u0160\u0161")
+        buf.write("\7p\2\2\u0161\u0162\7w\2\2\u0162\u0163\7o\2\2\u0163\62")
+        buf.write("\3\2\2\2\u0164\u0165\7e\2\2\u0165\u0166\7q\2\2\u0166\u0167")
+        buf.write("\7p\2\2\u0167\u0168\7u\2\2\u0168\u0169\7v\2\2\u0169\64")
+        buf.write("\3\2\2\2\u016a\u016b\7x\2\2\u016b\u016c\7q\2\2\u016c\u016d")
+        buf.write("\7n\2\2\u016d\u016e\7c\2\2\u016e\u016f\7v\2\2\u016f\u0170")
+        buf.write("\7k\2\2\u0170\u0171\7n\2\2\u0171\u0172\7g\2\2\u0172\66")
+        buf.write("\3\2\2\2\u0173\u0174\7K\2\2\u0174\u0175\7P\2\2\u01758")
+        buf.write("\3\2\2\2\u0176\u0177\7Q\2\2\u0177\u0178\7W\2\2\u0178\u0179")
+        buf.write("\7V\2\2\u0179:\3\2\2\2\u017a\u017b\7Q\2\2\u017b\u017c")
+        buf.write("\7R\2\2\u017c\u017d\7V\2\2\u017d\u017e\7K\2\2\u017e\u017f")
+        buf.write("\7Q\2\2\u017f\u0180\7P\2\2\u0180\u0181\7C\2\2\u0181\u0182")
+        buf.write("\7N\2\2\u0182<\3\2\2\2\u0183\u0184\7E\2\2\u0184\u0185")
+        buf.write("\7Q\2\2\u0185\u0186\7P\2\2\u0186\u0187\7U\2\2\u0187\u0188")
+        buf.write("\7V\2\2\u0188>\3\2\2\2\u0189\u018a\7W\2\2\u018a\u018b")
+        buf.write("\7P\2\2\u018b\u018c\7C\2\2\u018c\u018d\7N\2\2\u018d\u018e")
+        buf.write("\7K\2\2\u018e\u018f\7I\2\2\u018f\u0190\7P\2\2\u0190\u0191")
+        buf.write("\7G\2\2\u0191\u0192\7F\2\2\u0192@\3\2\2\2\u0193\u0194")
+        buf.write("\7X\2\2\u0194\u0195\7Q\2\2\u0195\u0196\7N\2\2\u0196\u0197")
+        buf.write("\7C\2\2\u0197\u0198\7V\2\2\u0198\u0199\7K\2\2\u0199\u019a")
+        buf.write("\7N\2\2\u019a\u019b\7G\2\2\u019bB\3\2\2\2\u019c\u019d")
+        buf.write("\7I\2\2\u019d\u019e\7N\2\2\u019e\u019f\7Q\2\2\u019f\u01a0")
+        buf.write("\7D\2\2\u01a0\u01a1\7C\2\2\u01a1\u01a2\7N\2\2\u01a2\u01a3")
+        buf.write("\7a\2\2\u01a3\u01a4\7T\2\2\u01a4\u01a5\7G\2\2\u01a5\u01a6")
+        buf.write("\7O\2\2\u01a6\u01a7\7Q\2\2\u01a7\u01a8\7X\2\2\u01a8\u01a9")
+        buf.write("\7G\2\2\u01a9\u01aa\7a\2\2\u01aa\u01ab\7K\2\2\u01ab\u01ac")
+        buf.write("\7H\2\2\u01ac\u01ad\7a\2\2\u01ad\u01ae\7W\2\2\u01ae\u01af")
+        buf.write("\7P\2\2\u01af\u01b0\7T\2\2\u01b0\u01b1\7G\2\2\u01b1\u01b2")
+        buf.write("\7H\2\2\u01b2\u01b3\7G\2\2\u01b3\u01b4\7T\2\2\u01b4\u01b5")
+        buf.write("\7G\2\2\u01b5\u01b6\7P\2\2\u01b6\u01b7\7E\2\2\u01b7\u01b8")
+        buf.write("\7G\2\2\u01b8\u01b9\7F\2\2\u01b9D\3\2\2\2\u01ba\u01bb")
+        buf.write("\7G\2\2\u01bb\u01bc\7H\2\2\u01bc\u01bd\7K\2\2\u01bd\u01be")
+        buf.write("\7C\2\2\u01be\u01bf\7R\2\2\u01bf\u01c0\7K\2\2\u01c0F\3")
+        buf.write("\2\2\2\u01c1\u01c2\7G\2\2\u01c2\u01c3\7H\2\2\u01c3\u01c4")
+        buf.write("\7K\2\2\u01c4\u01c5\7a\2\2\u01c5\u01c6\7D\2\2\u01c6\u01c7")
+        buf.write("\7Q\2\2\u01c7\u01c8\7Q\2\2\u01c8\u01c9\7V\2\2\u01c9\u01ca")
+        buf.write("\7U\2\2\u01ca\u01cb\7G\2\2\u01cb\u01cc\7T\2\2\u01cc\u01cd")
+        buf.write("\7X\2\2\u01cd\u01ce\7K\2\2\u01ce\u01cf\7E\2\2\u01cf\u01d0")
+        buf.write("\7G\2\2\u01d0H\3\2\2\2\u01d1\u01d2\7G\2\2\u01d2\u01d3")
+        buf.write("\7H\2\2\u01d3\u01d4\7K\2\2\u01d4\u01d5\7a\2\2\u01d5\u01d6")
+        buf.write("\7T\2\2\u01d6\u01d7\7W\2\2\u01d7\u01d8\7P\2\2\u01d8\u01d9")
+        buf.write("\7V\2\2\u01d9\u01da\7K\2\2\u01da\u01db\7O\2\2\u01db\u01dc")
+        buf.write("\7G\2\2\u01dc\u01dd\7U\2\2\u01dd\u01de\7G\2\2\u01de\u01df")
+        buf.write("\7T\2\2\u01df\u01e0\7X\2\2\u01e0\u01e1\7K\2\2\u01e1\u01e2")
+        buf.write("\7E\2\2\u01e2\u01e3\7G\2\2\u01e3J\3\2\2\2\u01e4\u01e5")
+        buf.write("\7R\2\2\u01e5\u01e6\7C\2\2\u01e6\u01e7\7E\2\2\u01e7\u01e8")
+        buf.write("\7M\2\2\u01e8\u01e9\7G\2\2\u01e9\u01ea\7F\2\2\u01eaL\3")
+        buf.write("\2\2\2\u01eb\u01ec\7*\2\2\u01ecN\3\2\2\2\u01ed\u01ee\7")
+        buf.write("+\2\2\u01eeP\3\2\2\2\u01ef\u01f0\7]\2\2\u01f0R\3\2\2\2")
+        buf.write("\u01f1\u01f2\7_\2\2\u01f2T\3\2\2\2\u01f3\u01f4\7,\2\2")
+        buf.write("\u01f4V\3\2\2\2\u01f5\u01f6\7\60\2\2\u01f6\u01f7\7\60")
+        buf.write("\2\2\u01f7\u01f8\7\60\2\2\u01f8X\3\2\2\2\u01f9\u01fa\7")
+        buf.write("-\2\2\u01faZ\3\2\2\2\u01fb\u01fc\7/\2\2\u01fc\\\3\2\2")
+        buf.write("\2\u01fd\u01fe\7\61\2\2\u01fe^\3\2\2\2\u01ff\u0200\7\'")
+        buf.write("\2\2\u0200`\3\2\2\2\u0201\u0202\7-\2\2\u0202\u0203\7-")
+        buf.write("\2\2\u0203b\3\2\2\2\u0204\u0205\7/\2\2\u0205\u0206\7/")
+        buf.write("\2\2\u0206d\3\2\2\2\u0207\u0208\7u\2\2\u0208\u0209\7k")
+        buf.write("\2\2\u0209\u020a\7|\2\2\u020a\u020b\7g\2\2\u020b\u020c")
+        buf.write("\7q\2\2\u020c\u020d\7h\2\2\u020df\3\2\2\2\u020e\u020f")
+        buf.write("\7\60\2\2\u020fh\3\2\2\2\u0210\u0211\7/\2\2\u0211\u0212")
+        buf.write("\7@\2\2\u0212j\3\2\2\2\u0213\u0214\7(\2\2\u0214l\3\2\2")
+        buf.write("\2\u0215\u0216\7\u0080\2\2\u0216n\3\2\2\2\u0217\u0218")
+        buf.write("\7#\2\2\u0218p\3\2\2\2\u0219\u021a\7,\2\2\u021a\u021b")
+        buf.write("\7?\2\2\u021br\3\2\2\2\u021c\u021d\7\61\2\2\u021d\u021e")
+        buf.write("\7?\2\2\u021et\3\2\2\2\u021f\u0220\7\'\2\2\u0220\u0221")
+        buf.write("\7?\2\2\u0221v\3\2\2\2\u0222\u0223\7-\2\2\u0223\u0224")
+        buf.write("\7?\2\2\u0224x\3\2\2\2\u0225\u0226\7/\2\2\u0226\u0227")
+        buf.write("\7?\2\2\u0227z\3\2\2\2\u0228\u0229\7>\2\2\u0229\u022a")
+        buf.write("\7>\2\2\u022a\u022b\7?\2\2\u022b|\3\2\2\2\u022c\u022d")
+        buf.write("\7@\2\2\u022d\u022e\7@\2\2\u022e\u022f\7?\2\2\u022f~\3")
+        buf.write("\2\2\2\u0230\u0231\7(\2\2\u0231\u0232\7?\2\2\u0232\u0080")
+        buf.write("\3\2\2\2\u0233\u0234\7`\2\2\u0234\u0235\7?\2\2\u0235\u0082")
+        buf.write("\3\2\2\2\u0236\u0237\7~\2\2\u0237\u0238\7?\2\2\u0238\u0084")
+        buf.write("\3\2\2\2\u0239\u023a\7A\2\2\u023a\u0086\3\2\2\2\u023b")
+        buf.write("\u023c\7~\2\2\u023c\u023d\7~\2\2\u023d\u0088\3\2\2\2\u023e")
+        buf.write("\u023f\7(\2\2\u023f\u0240\7(\2\2\u0240\u008a\3\2\2\2\u0241")
+        buf.write("\u0242\7~\2\2\u0242\u008c\3\2\2\2\u0243\u0244\7`\2\2\u0244")
+        buf.write("\u008e\3\2\2\2\u0245\u0246\7?\2\2\u0246\u0247\7?\2\2\u0247")
+        buf.write("\u0090\3\2\2\2\u0248\u0249\7#\2\2\u0249\u024a\7?\2\2\u024a")
+        buf.write("\u0092\3\2\2\2\u024b\u024c\7>\2\2\u024c\u0094\3\2\2\2")
+        buf.write("\u024d\u024e\7@\2\2\u024e\u0096\3\2\2\2\u024f\u0250\7")
+        buf.write(">\2\2\u0250\u0251\7?\2\2\u0251\u0098\3\2\2\2\u0252\u0253")
+        buf.write("\7@\2\2\u0253\u0254\7?\2\2\u0254\u009a\3\2\2\2\u0255\u0256")
+        buf.write("\7>\2\2\u0256\u0257\7>\2\2\u0257\u009c\3\2\2\2\u0258\u0259")
+        buf.write("\7@\2\2\u0259\u025a\7@\2\2\u025a\u009e\3\2\2\2\u025b\u025c")
+        buf.write("\7a\2\2\u025c\u025d\7a\2\2\u025d\u025e\7c\2\2\u025e\u025f")
+        buf.write("\7u\2\2\u025f\u0260\7o\2\2\u0260\u0261\7a\2\2\u0261\u0262")
+        buf.write("\7a\2\2\u0262\u00a0\3\2\2\2\u0263\u0264\7a\2\2\u0264\u0265")
+        buf.write("\7c\2\2\u0265\u0266\7u\2\2\u0266\u0267\7o\2\2\u0267\u00a2")
+        buf.write("\3\2\2\2\u0268\u0269\7a\2\2\u0269\u026a\7a\2\2\u026a\u026b")
+        buf.write("\7c\2\2\u026b\u026c\7u\2\2\u026c\u026d\7o\2\2\u026d\u00a4")
+        buf.write("\3\2\2\2\u026e\u026f\7e\2\2\u026f\u0270\7c\2\2\u0270\u0271")
+        buf.write("\7u\2\2\u0271\u0272\7g\2\2\u0272\u00a6\3\2\2\2\u0273\u0274")
+        buf.write("\7f\2\2\u0274\u0275\7g\2\2\u0275\u0276\7h\2\2\u0276\u0277")
+        buf.write("\7c\2\2\u0277\u0278\7w\2\2\u0278\u0279\7n\2\2\u0279\u027a")
+        buf.write("\7v\2\2\u027a\u00a8\3\2\2\2\u027b\u027c\7k\2\2\u027c\u027d")
+        buf.write("\7h\2\2\u027d\u00aa\3\2\2\2\u027e\u027f\7g\2\2\u027f\u0280")
+        buf.write("\7n\2\2\u0280\u0281\7u\2\2\u0281\u0282\7g\2\2\u0282\u00ac")
+        buf.write("\3\2\2\2\u0283\u0284\7u\2\2\u0284\u0285\7y\2\2\u0285\u0286")
+        buf.write("\7k\2\2\u0286\u0287\7v\2\2\u0287\u0288\7e\2\2\u0288\u0289")
+        buf.write("\7j\2\2\u0289\u00ae\3\2\2\2\u028a\u028b\7y\2\2\u028b\u028c")
+        buf.write("\7j\2\2\u028c\u028d\7k\2\2\u028d\u028e\7n\2\2\u028e\u028f")
+        buf.write("\7g\2\2\u028f\u00b0\3\2\2\2\u0290\u0291\7f\2\2\u0291\u0292")
+        buf.write("\7q\2\2\u0292\u00b2\3\2\2\2\u0293\u0294\7i\2\2\u0294\u0295")
+        buf.write("\7q\2\2\u0295\u0296\7v\2\2\u0296\u0297\7q\2\2\u0297\u00b4")
+        buf.write("\3\2\2\2\u0298\u0299\7e\2\2\u0299\u029a\7q\2\2\u029a\u029b")
+        buf.write("\7p\2\2\u029b\u029c\7v\2\2\u029c\u029d\7k\2\2\u029d\u029e")
+        buf.write("\7p\2\2\u029e\u029f\7w\2\2\u029f\u02a0\7g\2\2\u02a0\u00b6")
+        buf.write("\3\2\2\2\u02a1\u02a2\7d\2\2\u02a2\u02a3\7t\2\2\u02a3\u02a4")
+        buf.write("\7g\2\2\u02a4\u02a5\7c\2\2\u02a5\u02a6\7m\2\2\u02a6\u00b8")
+        buf.write("\3\2\2\2\u02a7\u02a8\7t\2\2\u02a8\u02a9\7g\2\2\u02a9\u02aa")
+        buf.write("\7v\2\2\u02aa\u02ab\7w\2\2\u02ab\u02ac\7t\2\2\u02ac\u02ad")
+        buf.write("\7p\2\2\u02ad\u00ba\3\2\2\2\u02ae\u02b3\5\u00bd_\2\u02af")
+        buf.write("\u02b2\5\u00bd_\2\u02b0\u02b2\4\62;\2\u02b1\u02af\3\2")
+        buf.write("\2\2\u02b1\u02b0\3\2\2\2\u02b2\u02b5\3\2\2\2\u02b3\u02b1")
+        buf.write("\3\2\2\2\u02b3\u02b4\3\2\2\2\u02b4\u00bc\3\2\2\2\u02b5")
+        buf.write("\u02b3\3\2\2\2\u02b6\u02b7\t\2\2\2\u02b7\u00be\3\2\2\2")
+        buf.write("\u02b8\u02ba\7N\2\2\u02b9\u02b8\3\2\2\2\u02b9\u02ba\3")
+        buf.write("\2\2\2\u02ba\u02bb\3\2\2\2\u02bb\u02be\7)\2\2\u02bc\u02bf")
+        buf.write("\5\u00d3j\2\u02bd\u02bf\n\3\2\2\u02be\u02bc\3\2\2\2\u02be")
+        buf.write("\u02bd\3\2\2\2\u02bf\u02c0\3\2\2\2\u02c0\u02c1\7)\2\2")
+        buf.write("\u02c1\u00c0\3\2\2\2\u02c2\u02c4\7N\2\2\u02c3\u02c2\3")
+        buf.write("\2\2\2\u02c3\u02c4\3\2\2\2\u02c4\u02c5\3\2\2\2\u02c5\u02ca")
+        buf.write("\7$\2\2\u02c6\u02c9\5\u00d3j\2\u02c7\u02c9\n\4\2\2\u02c8")
+        buf.write("\u02c6\3\2\2\2\u02c8\u02c7\3\2\2\2\u02c9\u02cc\3\2\2\2")
+        buf.write("\u02ca\u02c8\3\2\2\2\u02ca\u02cb\3\2\2\2\u02cb\u02cd\3")
+        buf.write("\2\2\2\u02cc\u02ca\3\2\2\2\u02cd\u02ce\7$\2\2\u02ce\u00c2")
+        buf.write("\3\2\2\2\u02cf\u02d0\7\62\2\2\u02d0\u02d2\t\5\2\2\u02d1")
+        buf.write("\u02d3\5\u00c9e\2\u02d2\u02d1\3\2\2\2\u02d3\u02d4\3\2")
+        buf.write("\2\2\u02d4\u02d2\3\2\2\2\u02d4\u02d5\3\2\2\2\u02d5\u02d7")
+        buf.write("\3\2\2\2\u02d6\u02d8\5\u00cbf\2\u02d7\u02d6\3\2\2\2\u02d7")
+        buf.write("\u02d8\3\2\2\2\u02d8\u00c4\3\2\2\2\u02d9\u02e2\7\62\2")
+        buf.write("\2\u02da\u02de\4\63;\2\u02db\u02dd\4\62;\2\u02dc\u02db")
+        buf.write("\3\2\2\2\u02dd\u02e0\3\2\2\2\u02de\u02dc\3\2\2\2\u02de")
+        buf.write("\u02df\3\2\2\2\u02df\u02e2\3\2\2\2\u02e0\u02de\3\2\2\2")
+        buf.write("\u02e1\u02d9\3\2\2\2\u02e1\u02da\3\2\2\2\u02e2\u02e4\3")
+        buf.write("\2\2\2\u02e3\u02e5\5\u00cbf\2\u02e4\u02e3\3\2\2\2\u02e4")
+        buf.write("\u02e5\3\2\2\2\u02e5\u00c6\3\2\2\2\u02e6\u02e8\7\62\2")
+        buf.write("\2\u02e7\u02e9\4\629\2\u02e8\u02e7\3\2\2\2\u02e9\u02ea")
+        buf.write("\3\2\2\2\u02ea\u02e8\3\2\2\2\u02ea\u02eb\3\2\2\2\u02eb")
+        buf.write("\u02ed\3\2\2\2\u02ec\u02ee\5\u00cbf\2\u02ed\u02ec\3\2")
+        buf.write("\2\2\u02ed\u02ee\3\2\2\2\u02ee\u00c8\3\2\2\2\u02ef\u02f0")
+        buf.write("\t\6\2\2\u02f0\u00ca\3\2\2\2\u02f1\u02f8\t\7\2\2\u02f2")
+        buf.write("\u02f3\t\b\2\2\u02f3\u02f8\t\t\2\2\u02f4\u02f5\t\b\2\2")
+        buf.write("\u02f5\u02f6\t\t\2\2\u02f6\u02f8\t\t\2\2\u02f7\u02f1\3")
+        buf.write("\2\2\2\u02f7\u02f2\3\2\2\2\u02f7\u02f4\3\2\2\2\u02f8\u00cc")
+        buf.write("\3\2\2\2\u02f9\u02fb\4\62;\2\u02fa\u02f9\3\2\2\2\u02fb")
+        buf.write("\u02fc\3\2\2\2\u02fc\u02fa\3\2\2\2\u02fc\u02fd\3\2\2\2")
+        buf.write("\u02fd\u02fe\3\2\2\2\u02fe\u0302\7\60\2\2\u02ff\u0301")
+        buf.write("\4\62;\2\u0300\u02ff\3\2\2\2\u0301\u0304\3\2\2\2\u0302")
+        buf.write("\u0300\3\2\2\2\u0302\u0303\3\2\2\2\u0303\u0306\3\2\2\2")
+        buf.write("\u0304\u0302\3\2\2\2\u0305\u0307\5\u00cfh\2\u0306\u0305")
+        buf.write("\3\2\2\2\u0306\u0307\3\2\2\2\u0307\u0309\3\2\2\2\u0308")
+        buf.write("\u030a\5\u00d1i\2\u0309\u0308\3\2\2\2\u0309\u030a\3\2")
+        buf.write("\2\2\u030a\u032a\3\2\2\2\u030b\u030d\7\60\2\2\u030c\u030e")
+        buf.write("\4\62;\2\u030d\u030c\3\2\2\2\u030e\u030f\3\2\2\2\u030f")
+        buf.write("\u030d\3\2\2\2\u030f\u0310\3\2\2\2\u0310\u0312\3\2\2\2")
+        buf.write("\u0311\u0313\5\u00cfh\2\u0312\u0311\3\2\2\2\u0312\u0313")
+        buf.write("\3\2\2\2\u0313\u0315\3\2\2\2\u0314\u0316\5\u00d1i\2\u0315")
+        buf.write("\u0314\3\2\2\2\u0315\u0316\3\2\2\2\u0316\u032a\3\2\2\2")
+        buf.write("\u0317\u0319\4\62;\2\u0318\u0317\3\2\2\2\u0319\u031a\3")
+        buf.write("\2\2\2\u031a\u0318\3\2\2\2\u031a\u031b\3\2\2\2\u031b\u031c")
+        buf.write("\3\2\2\2\u031c\u031e\5\u00cfh\2\u031d\u031f\5\u00d1i\2")
+        buf.write("\u031e\u031d\3\2\2\2\u031e\u031f\3\2\2\2\u031f\u032a\3")
+        buf.write("\2\2\2\u0320\u0322\4\62;\2\u0321\u0320\3\2\2\2\u0322\u0323")
+        buf.write("\3\2\2\2\u0323\u0321\3\2\2\2\u0323\u0324\3\2\2\2\u0324")
+        buf.write("\u0326\3\2\2\2\u0325\u0327\5\u00cfh\2\u0326\u0325\3\2")
+        buf.write("\2\2\u0326\u0327\3\2\2\2\u0327\u0328\3\2\2\2\u0328\u032a")
+        buf.write("\5\u00d1i\2\u0329\u02fa\3\2\2\2\u0329\u030b\3\2\2\2\u0329")
+        buf.write("\u0318\3\2\2\2\u0329\u0321\3\2\2\2\u032a\u00ce\3\2\2\2")
+        buf.write("\u032b\u032d\t\n\2\2\u032c\u032e\t\13\2\2\u032d\u032c")
+        buf.write("\3\2\2\2\u032d\u032e\3\2\2\2\u032e\u0330\3\2\2\2\u032f")
+        buf.write("\u0331\4\62;\2\u0330\u032f\3\2\2\2\u0331\u0332\3\2\2\2")
+        buf.write("\u0332\u0330\3\2\2\2\u0332\u0333\3\2\2\2\u0333\u00d0\3")
+        buf.write("\2\2\2\u0334\u0335\t\f\2\2\u0335\u00d2\3\2\2\2\u0336\u0337")
+        buf.write("\7^\2\2\u0337\u033a\t\r\2\2\u0338\u033a\5\u00d5k\2\u0339")
+        buf.write("\u0336\3\2\2\2\u0339\u0338\3\2\2\2\u033a\u00d4\3\2\2\2")
+        buf.write("\u033b\u033c\7^\2\2\u033c\u033d\4\62\65\2\u033d\u033e")
+        buf.write("\4\629\2\u033e\u0345\4\629\2\u033f\u0340\7^\2\2\u0340")
+        buf.write("\u0341\4\629\2\u0341\u0345\4\629\2\u0342\u0343\7^\2\2")
+        buf.write("\u0343\u0345\4\629\2\u0344\u033b\3\2\2\2\u0344\u033f\3")
+        buf.write("\2\2\2\u0344\u0342\3\2\2\2\u0345\u00d6\3\2\2\2\u0346\u0347")
+        buf.write("\7^\2\2\u0347\u0348\7w\2\2\u0348\u0349\5\u00c9e\2\u0349")
+        buf.write("\u034a\5\u00c9e\2\u034a\u034b\5\u00c9e\2\u034b\u034c\5")
+        buf.write("\u00c9e\2\u034c\u00d8\3\2\2\2\u034d\u034e\t\16\2\2\u034e")
+        buf.write("\u034f\3\2\2\2\u034f\u0350\bm\2\2\u0350\u00da\3\2\2\2")
+        buf.write("\u0351\u0352\7^\2\2\u0352\u0353\3\2\2\2\u0353\u0354\b")
+        buf.write("n\2\2\u0354\u00dc\3\2\2\2\u0355\u0356\4\5\0\2\u0356\u00de")
+        buf.write("\3\2\2\2\u0357\u0358\7\61\2\2\u0358\u0359\7,\2\2\u0359")
+        buf.write("\u035d\3\2\2\2\u035a\u035c\13\2\2\2\u035b\u035a\3\2\2")
+        buf.write("\2\u035c\u035f\3\2\2\2\u035d\u035e\3\2\2\2\u035d\u035b")
+        buf.write("\3\2\2\2\u035e\u0360\3\2\2\2\u035f\u035d\3\2\2\2\u0360")
+        buf.write("\u0361\7,\2\2\u0361\u0362\7\61\2\2\u0362\u0363\3\2\2\2")
+        buf.write("\u0363\u0364\bp\2\2\u0364\u00e0\3\2\2\2\u0365\u0366\7")
+        buf.write("\61\2\2\u0366\u0367\7\61\2\2\u0367\u036b\3\2\2\2\u0368")
+        buf.write("\u036a\n\17\2\2\u0369\u0368\3\2\2\2\u036a\u036d\3\2\2")
+        buf.write("\2\u036b\u0369\3\2\2\2\u036b\u036c\3\2\2\2\u036c\u036f")
+        buf.write("\3\2\2\2\u036d\u036b\3\2\2\2\u036e\u0370\7\17\2\2\u036f")
+        buf.write("\u036e\3\2\2\2\u036f\u0370\3\2\2\2\u0370\u0371\3\2\2\2")
+        buf.write("\u0371\u0372\7\f\2\2\u0372\u0373\3\2\2\2\u0373\u0374\b")
+        buf.write("q\2\2\u0374\u00e2\3\2\2\2\u0375\u0379\7%\2\2\u0376\u0378")
+        buf.write("\n\17\2\2\u0377\u0376\3\2\2\2\u0378\u037b\3\2\2\2\u0379")
+        buf.write("\u0377\3\2\2\2\u0379\u037a\3\2\2\2\u037a\u037d\3\2\2\2")
+        buf.write("\u037b\u0379\3\2\2\2\u037c\u037e\7\17\2\2\u037d\u037c")
+        buf.write("\3\2\2\2\u037d\u037e\3\2\2\2\u037e\u037f\3\2\2\2\u037f")
+        buf.write("\u0380\7\f\2\2\u0380\u0381\3\2\2\2\u0381\u0382\br\2\2")
+        buf.write("\u0382\u00e4\3\2\2\2\'\2\u02b1\u02b3\u02b9\u02be\u02c3")
+        buf.write("\u02c8\u02ca\u02d4\u02d7\u02de\u02e1\u02e4\u02ea\u02ed")
+        buf.write("\u02f7\u02fc\u0302\u0306\u0309\u030f\u0312\u0315\u031a")
+        buf.write("\u031e\u0323\u0326\u0329\u032d\u0332\u0339\u0344\u035d")
+        buf.write("\u036b\u036f\u0379\u037d\3\2\3\2")
+        return buf.getvalue()
+
+
+class CLexer(Lexer):
+
+    atn = ATNDeserializer().deserialize(serializedATN())
+
+    decisionsToDFA = [ DFA(ds, i) for i, ds in enumerate(atn.decisionToState) ]
+
+    T__0 = 1
+    T__1 = 2
+    T__2 = 3
+    T__3 = 4
+    T__4 = 5
+    T__5 = 6
+    T__6 = 7
+    T__7 = 8
+    T__8 = 9
+    T__9 = 10
+    T__10 = 11
+    T__11 = 12
+    T__12 = 13
+    T__13 = 14
+    T__14 = 15
+    T__15 = 16
+    T__16 = 17
+    T__17 = 18
+    T__18 = 19
+    T__19 = 20
+    T__20 = 21
+    T__21 = 22
+    T__22 = 23
+    T__23 = 24
+    T__24 = 25
+    T__25 = 26
+    T__26 = 27
+    T__27 = 28
+    T__28 = 29
+    T__29 = 30
+    T__30 = 31
+    T__31 = 32
+    T__32 = 33
+    T__33 = 34
+    T__34 = 35
+    T__35 = 36
+    T__36 = 37
+    T__37 = 38
+    T__38 = 39
+    T__39 = 40
+    T__40 = 41
+    T__41 = 42
+    T__42 = 43
+    T__43 = 44
+    T__44 = 45
+    T__45 = 46
+    T__46 = 47
+    T__47 = 48
+    T__48 = 49
+    T__49 = 50
+    T__50 = 51
+    T__51 = 52
+    T__52 = 53
+    T__53 = 54
+    T__54 = 55
+    T__55 = 56
+    T__56 = 57
+    T__57 = 58
+    T__58 = 59
+    T__59 = 60
+    T__60 = 61
+    T__61 = 62
+    T__62 = 63
+    T__63 = 64
+    T__64 = 65
+    T__65 = 66
+    T__66 = 67
+    T__67 = 68
+    T__68 = 69
+    T__69 = 70
+    T__70 = 71
+    T__71 = 72
+    T__72 = 73
+    T__73 = 74
+    T__74 = 75
+    T__75 = 76
+    T__76 = 77
+    T__77 = 78
+    T__78 = 79
+    T__79 = 80
+    T__80 = 81
+    T__81 = 82
+    T__82 = 83
+    T__83 = 84
+    T__84 = 85
+    T__85 = 86
+    T__86 = 87
+    T__87 = 88
+    T__88 = 89
+    T__89 = 90
+    T__90 = 91
+    T__91 = 92
+    IDENTIFIER = 93
+    CHARACTER_LITERAL = 94
+    STRING_LITERAL = 95
+    HEX_LITERAL = 96
+    DECIMAL_LITERAL = 97
+    OCTAL_LITERAL = 98
+    FLOATING_POINT_LITERAL = 99
+    WS = 100
+    BS = 101
+    UnicodeVocabulary = 102
+    COMMENT = 103
+    LINE_COMMENT = 104
+    LINE_COMMAND = 105
+
+    channelNames = [ u"DEFAULT_TOKEN_CHANNEL", u"HIDDEN" ]
+
+    modeNames = [ "DEFAULT_MODE" ]
+
+    literalNames = [ "<INVALID>",
+            "'{'", "';'", "'typedef'", "','", "'='", "'extern'", "'static'",
+            "'auto'", "'register'", "'STATIC'", "'void'", "'char'", "'short'",
+            "'int'", "'long'", "'float'", "'double'", "'signed'", "'unsigned'",
+            "'}'", "'struct'", "'union'", "':'", "'enum'", "'const'", "'volatile'",
+            "'IN'", "'OUT'", "'OPTIONAL'", "'CONST'", "'UNALIGNED'", "'VOLATILE'",
+            "'GLOBAL_REMOVE_IF_UNREFERENCED'", "'EFIAPI'", "'EFI_BOOTSERVICE'",
+            "'EFI_RUNTIMESERVICE'", "'PACKED'", "'('", "')'", "'['", "']'",
+            "'*'", "'...'", "'+'", "'-'", "'/'", "'%'", "'++'", "'--'",
+            "'sizeof'", "'.'", "'->'", "'&'", "'~'", "'!'", "'*='", "'/='",
+            "'%='", "'+='", "'-='", "'<<='", "'>>='", "'&='", "'^='", "'|='",
+            "'?'", "'||'", "'&&'", "'|'", "'^'", "'=='", "'!='", "'<'",
+            "'>'", "'<='", "'>='", "'<<'", "'>>'", "'__asm__'", "'_asm'",
+            "'__asm'", "'case'", "'default'", "'if'", "'else'", "'switch'",
+            "'while'", "'do'", "'goto'", "'continue'", "'break'", "'return'" ]
+
+    symbolicNames = [ "<INVALID>",
+            "IDENTIFIER", "CHARACTER_LITERAL", "STRING_LITERAL", "HEX_LITERAL",
+            "DECIMAL_LITERAL", "OCTAL_LITERAL", "FLOATING_POINT_LITERAL",
+            "WS", "BS", "UnicodeVocabulary", "COMMENT", "LINE_COMMENT",
+            "LINE_COMMAND" ]
+
+    ruleNames = [ "T__0", "T__1", "T__2", "T__3", "T__4", "T__5", "T__6",
+                  "T__7", "T__8", "T__9", "T__10", "T__11", "T__12", "T__13",
+                  "T__14", "T__15", "T__16", "T__17", "T__18", "T__19",
+                  "T__20", "T__21", "T__22", "T__23", "T__24", "T__25",
+                  "T__26", "T__27", "T__28", "T__29", "T__30", "T__31",
+                  "T__32", "T__33", "T__34", "T__35", "T__36", "T__37",
+                  "T__38", "T__39", "T__40", "T__41", "T__42", "T__43",
+                  "T__44", "T__45", "T__46", "T__47", "T__48", "T__49",
+                  "T__50", "T__51", "T__52", "T__53", "T__54", "T__55",
+                  "T__56", "T__57", "T__58", "T__59", "T__60", "T__61",
+                  "T__62", "T__63", "T__64", "T__65", "T__66", "T__67",
+                  "T__68", "T__69", "T__70", "T__71", "T__72", "T__73",
+                  "T__74", "T__75", "T__76", "T__77", "T__78", "T__79",
+                  "T__80", "T__81", "T__82", "T__83", "T__84", "T__85",
+                  "T__86", "T__87", "T__88", "T__89", "T__90", "T__91",
+                  "IDENTIFIER", "LETTER", "CHARACTER_LITERAL", "STRING_LITERAL",
+                  "HEX_LITERAL", "DECIMAL_LITERAL", "OCTAL_LITERAL", "HexDigit",
+                  "IntegerTypeSuffix", "FLOATING_POINT_LITERAL", "Exponent",
+                  "FloatTypeSuffix", "EscapeSequence", "OctalEscape", "UnicodeEscape",
+                  "WS", "BS", "UnicodeVocabulary", "COMMENT", "LINE_COMMENT",
+                  "LINE_COMMAND" ]
+
+    grammarFileName = "C.g4"
+
+    # @param  output=sys.stdout Type: TextIO
+    def __init__(self, input=None, output=sys.stdout):
+        super().__init__(input, output)
+        self.checkVersion("4.7.1")
+        self._interp = LexerATNSimulator(self, self.atn, self.decisionsToDFA, PredictionContextCache())
+        self._actions = None
+        self._predicates = None
+
+
+
+    def printTokenInfo(self,line,offset,tokenText):
+        print(str(line)+ ',' + str(offset) + ':' + str(tokenText))
+
+    def StorePredicateExpression(self,StartLine,StartOffset,EndLine,EndOffset,Text):
+        PredExp = CodeFragment.PredicateExpression(Text, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.PredicateExpressionList.append(PredExp)
+
+    def StoreEnumerationDefinition(self,StartLine,StartOffset,EndLine,EndOffset,Text):
+        EnumDef = CodeFragment.EnumerationDefinition(Text, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.EnumerationDefinitionList.append(EnumDef)
+
+    def StoreStructUnionDefinition(self,StartLine,StartOffset,EndLine,EndOffset,Text):
+        SUDef = CodeFragment.StructUnionDefinition(Text, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.StructUnionDefinitionList.append(SUDef)
+
+    def StoreTypedefDefinition(self,StartLine,StartOffset,EndLine,EndOffset,FromText,ToText):
+        Tdef = CodeFragment.TypedefDefinition(FromText, ToText, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.TypedefDefinitionList.append(Tdef)
+
+    def StoreFunctionDefinition(self,StartLine,StartOffset,EndLine,EndOffset,ModifierText,DeclText,LeftBraceLine,LeftBraceOffset,DeclLine,DeclOffset):
+        FuncDef = CodeFragment.FunctionDefinition(ModifierText, DeclText, (StartLine, StartOffset), (EndLine, EndOffset), (LeftBraceLine, LeftBraceOffset), (DeclLine, DeclOffset))
+        FileProfile.FunctionDefinitionList.append(FuncDef)
+
+    def StoreVariableDeclaration(self,StartLine,StartOffset,EndLine,EndOffset,ModifierText,DeclText):
+        VarDecl = CodeFragment.VariableDeclaration(ModifierText, DeclText, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.VariableDeclarationList.append(VarDecl)
+
+    def StoreFunctionCalling(self,StartLine,StartOffset,EndLine,EndOffset,FuncName,ParamList):
+        FuncCall = CodeFragment.FunctionCalling(FuncName, ParamList, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.FunctionCallingList.append(FuncCall)
+
+
+
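For context (an illustrative sketch, not part of the patch): the generated CLexer/CListener classes follow ANTLR's listener pattern, where a tree walker fires `enterX`/`exitX` hooks as it descends into and leaves each parse-tree node, and every hook in the base listener defaults to `pass`. The stand-in below mimics that discipline without the `antlr4` runtime; the `Node`, `Listener`, and `walk` names are invented for this example and do not appear in the patch.

```python
class Node:
    """Minimal parse-tree node: a rule name plus child nodes."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

class Listener:
    """Base listener: every hook is a no-op, mirroring CListener's pass bodies."""
    def enter(self, node):
        pass
    def exit(self, node):
        pass

class CollectingListener(Listener):
    """Records the order in which nodes are entered and exited."""
    def __init__(self):
        self.events = []
    def enter(self, node):
        self.events.append(("enter", node.name))
    def exit(self, node):
        self.events.append(("exit", node.name))

def walk(listener, node):
    # Depth-first walk: enter the node, recurse into children, then exit --
    # the same order ANTLR's ParseTreeWalker applies to enterX/exitX methods.
    listener.enter(node)
    for child in node.children:
        walk(listener, child)
    listener.exit(node)

tree = Node("translation_unit",
            [Node("external_declaration", [Node("declaration")])])
lis = CollectingListener()
walk(lis, tree)
print(lis.events)
```

Subclasses override only the hooks they care about, which is why the generated CListener can safely leave all bodies as `pass`.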
diff --git a/BaseTools/Source/Python/Ecc/CParser4/CListener.py b/BaseTools/Source/Python/Ecc/CParser4/CListener.py
new file mode 100644
index 0000000000..1696336694
--- /dev/null
+++ b/BaseTools/Source/Python/Ecc/CParser4/CListener.py
@@ -0,0 +1,815 @@
+# Generated from C.g4 by ANTLR 4.7.1
+from antlr4 import *
+if __name__ is not None and "." in __name__:
+    from .CParser import CParser
+else:
+    from CParser import CParser
+
+## @file
+# The file defines the parser for C source files.
+#
+# THIS FILE IS AUTO-GENERATED. PLEASE DO NOT MODIFY THIS FILE.
+# This file is generated by running:
+# java org.antlr.v4.Tool -Dlanguage=Python3 C.g4
+#
+# Copyright (c) 2009 - 2010, Intel Corporation  All rights reserved.
+#
+# This program and the accompanying materials are licensed and made available
+# under the terms and conditions of the BSD License which accompanies this
+# distribution.  The full text of the license may be found at:
+#   http://opensource.org/licenses/bsd-license.php
+#
+# THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+# WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+#
+##
+
+import Ecc.CodeFragment as CodeFragment
+import Ecc.FileProfile as FileProfile
+
+
+# This class defines a complete listener for a parse tree produced by CParser.
+class CListener(ParseTreeListener):
+
+    # Enter a parse tree produced by CParser#translation_unit.
+    # @param  ctx Type: CParser.Translation_unitContext
+    def enterTranslation_unit(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#translation_unit.
+    # @param  ctx Type: CParser.Translation_unitContext
+    def exitTranslation_unit(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#external_declaration.
+    # @param  ctx Type: CParser.External_declarationContext
+    def enterExternal_declaration(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#external_declaration.
+    # @param  ctx Type: CParser.External_declarationContext
+    def exitExternal_declaration(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#function_definition.
+    # @param  ctx Type: CParser.Function_definitionContext
+    def enterFunction_definition(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#function_definition.
+    # @param  ctx Type: CParser.Function_definitionContext
+    def exitFunction_definition(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#declaration_specifiers.
+    # @param  ctx Type: CParser.Declaration_specifiersContext
+    def enterDeclaration_specifiers(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#declaration_specifiers.
+    # @param  ctx Type: CParser.Declaration_specifiersContext
+    def exitDeclaration_specifiers(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#declaration.
+    # @param  ctx Type: CParser.DeclarationContext
+    def enterDeclaration(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#declaration.
+    # @param  ctx Type: CParser.DeclarationContext
+    def exitDeclaration(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#init_declarator_list.
+    # @param  ctx Type: CParser.Init_declarator_listContext
+    def enterInit_declarator_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#init_declarator_list.
+    # @param  ctx Type: CParser.Init_declarator_listContext
+    def exitInit_declarator_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#init_declarator.
+    # @param  ctx Type: CParser.Init_declaratorContext
+    def enterInit_declarator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#init_declarator.
+    # @param  ctx Type: CParser.Init_declaratorContext
+    def exitInit_declarator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#storage_class_specifier.
+    # @param  ctx Type: CParser.Storage_class_specifierContext
+    def enterStorage_class_specifier(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#storage_class_specifier.
+    # @param  ctx Type: CParser.Storage_class_specifierContext
+    def exitStorage_class_specifier(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#type_specifier.
+    # @param  ctx Type: CParser.Type_specifierContext
+    def enterType_specifier(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#type_specifier.
+    # @param  ctx Type: CParser.Type_specifierContext
+    def exitType_specifier(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#type_id.
+    # @param  ctx Type: CParser.Type_idContext
+    def enterType_id(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#type_id.
+    # @param  ctx Type: CParser.Type_idContext
+    def exitType_id(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#struct_or_union_specifier.
+    # @param  ctx Type: CParser.Struct_or_union_specifierContext
+    def enterStruct_or_union_specifier(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#struct_or_union_specifier.
+    # @param  ctx Type: CParser.Struct_or_union_specifierContext
+    def exitStruct_or_union_specifier(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#struct_or_union.
+    # @param  ctx Type: CParser.Struct_or_unionContext
+    def enterStruct_or_union(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#struct_or_union.
+    # @param  ctx Type: CParser.Struct_or_unionContext
+    def exitStruct_or_union(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#struct_declaration_list.
+    # @param  ctx Type: CParser.Struct_declaration_listContext
+    def enterStruct_declaration_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#struct_declaration_list.
+    # @param  ctx Type: CParser.Struct_declaration_listContext
+    def exitStruct_declaration_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#struct_declaration.
+    # @param  ctx Type: CParser.Struct_declarationContext
+    def enterStruct_declaration(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#struct_declaration.
+    # @param  ctx Type: CParser.Struct_declarationContext
+    def exitStruct_declaration(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#specifier_qualifier_list.
+    # @param  ctx Type: CParser.Specifier_qualifier_listContext
+    def enterSpecifier_qualifier_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#specifier_qualifier_list.
+    # @param  ctx Type: CParser.Specifier_qualifier_listContext
+    def exitSpecifier_qualifier_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#struct_declarator_list.
+    # @param  ctx Type: CParser.Struct_declarator_listContext
+    def enterStruct_declarator_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#struct_declarator_list.
+    # @param  ctx Type: CParser.Struct_declarator_listContext
+    def exitStruct_declarator_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#struct_declarator.
+    # @param  ctx Type: CParser.Struct_declaratorContext
+    def enterStruct_declarator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#struct_declarator.
+    # @param  ctx Type: CParser.Struct_declaratorContext
+    def exitStruct_declarator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#enum_specifier.
+    # @param  ctx Type: CParser.Enum_specifierContext
+    def enterEnum_specifier(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#enum_specifier.
+    # @param  ctx Type: CParser.Enum_specifierContext
+    def exitEnum_specifier(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#enumerator_list.
+    # @param  ctx Type: CParser.Enumerator_listContext
+    def enterEnumerator_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#enumerator_list.
+    # @param  ctx Type: CParser.Enumerator_listContext
+    def exitEnumerator_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#enumerator.
+    # @param  ctx Type: CParser.EnumeratorContext
+    def enterEnumerator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#enumerator.
+    # @param  ctx Type: CParser.EnumeratorContext
+    def exitEnumerator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#type_qualifier.
+    # @param  ctx Type: CParser.Type_qualifierContext
+    def enterType_qualifier(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#type_qualifier.
+    # @param  ctx Type: CParser.Type_qualifierContext
+    def exitType_qualifier(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#declarator.
+    # @param  ctx Type: CParser.DeclaratorContext
+    def enterDeclarator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#declarator.
+    # @param  ctx Type: CParser.DeclaratorContext
+    def exitDeclarator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#direct_declarator.
+    # @param  ctx Type: CParser.Direct_declaratorContext
+    def enterDirect_declarator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#direct_declarator.
+    # @param  ctx Type: CParser.Direct_declaratorContext
+    def exitDirect_declarator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#declarator_suffix.
+    # @param  ctx Type: CParser.Declarator_suffixContext
+    def enterDeclarator_suffix(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#declarator_suffix.
+    # @param  ctx Type: CParser.Declarator_suffixContext
+    def exitDeclarator_suffix(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#pointer.
+    # @param  ctx Type: CParser.PointerContext
+    def enterPointer(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#pointer.
+    # @param  ctx Type: CParser.PointerContext
+    def exitPointer(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#parameter_type_list.
+    # @param  ctx Type: CParser.Parameter_type_listContext
+    def enterParameter_type_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#parameter_type_list.
+    # @param  ctx Type: CParser.Parameter_type_listContext
+    def exitParameter_type_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#parameter_list.
+    # @param  ctx Type: CParser.Parameter_listContext
+    def enterParameter_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#parameter_list.
+    # @param  ctx Type: CParser.Parameter_listContext
+    def exitParameter_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#parameter_declaration.
+    # @param  ctx Type: CParser.Parameter_declarationContext
+    def enterParameter_declaration(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#parameter_declaration.
+    # @param  ctx Type: CParser.Parameter_declarationContext
+    def exitParameter_declaration(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#identifier_list.
+    # @param  ctx Type: CParser.Identifier_listContext
+    def enterIdentifier_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#identifier_list.
+    # @param  ctx Type: CParser.Identifier_listContext
+    def exitIdentifier_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#type_name.
+    # @param  ctx Type: CParser.Type_nameContext
+    def enterType_name(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#type_name.
+    # @param  ctx Type: CParser.Type_nameContext
+    def exitType_name(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#abstract_declarator.
+    # @param  ctx Type: CParser.Abstract_declaratorContext
+    def enterAbstract_declarator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#abstract_declarator.
+    # @param  ctx Type: CParser.Abstract_declaratorContext
+    def exitAbstract_declarator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#direct_abstract_declarator.
+    # @param  ctx Type: CParser.Direct_abstract_declaratorContext
+    def enterDirect_abstract_declarator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#direct_abstract_declarator.
+    # @param  ctx Type: CParser.Direct_abstract_declaratorContext
+    def exitDirect_abstract_declarator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#abstract_declarator_suffix.
+    # @param  ctx Type: CParser.Abstract_declarator_suffixContext
+    def enterAbstract_declarator_suffix(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#abstract_declarator_suffix.
+    # @param  ctx Type: CParser.Abstract_declarator_suffixContext
+    def exitAbstract_declarator_suffix(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#initializer.
+    # @param  ctx Type: CParser.InitializerContext
+    def enterInitializer(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#initializer.
+    # @param  ctx Type: CParser.InitializerContext
+    def exitInitializer(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#initializer_list.
+    # @param  ctx Type: CParser.Initializer_listContext
+    def enterInitializer_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#initializer_list.
+    # @param  ctx Type: CParser.Initializer_listContext
+    def exitInitializer_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#argument_expression_list.
+    # @param  ctx Type: CParser.Argument_expression_listContext
+    def enterArgument_expression_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#argument_expression_list.
+    # @param  ctx Type: CParser.Argument_expression_listContext
+    def exitArgument_expression_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#additive_expression.
+    # @param  ctx Type: CParser.Additive_expressionContext
+    def enterAdditive_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#additive_expression.
+    # @param  ctx Type: CParser.Additive_expressionContext
+    def exitAdditive_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#multiplicative_expression.
+    # @param  ctx Type: CParser.Multiplicative_expressionContext
+    def enterMultiplicative_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#multiplicative_expression.
+    # @param  ctx Type: CParser.Multiplicative_expressionContext
+    def exitMultiplicative_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#cast_expression.
+    # @param  ctx Type: CParser.Cast_expressionContext
+    def enterCast_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#cast_expression.
+    # @param  ctx Type: CParser.Cast_expressionContext
+    def exitCast_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#unary_expression.
+    # @param  ctx Type: CParser.Unary_expressionContext
+    def enterUnary_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#unary_expression.
+    # @param  ctx Type: CParser.Unary_expressionContext
+    def exitUnary_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#postfix_expression.
+    # @param  ctx Type: CParser.Postfix_expressionContext
+    def enterPostfix_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#postfix_expression.
+    # @param  ctx Type: CParser.Postfix_expressionContext
+    def exitPostfix_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#macro_parameter_list.
+    # @param  ctx Type: CParser.Macro_parameter_listContext
+    def enterMacro_parameter_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#macro_parameter_list.
+    # @param  ctx Type: CParser.Macro_parameter_listContext
+    def exitMacro_parameter_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#unary_operator.
+    # @param  ctx Type: CParser.Unary_operatorContext
+    def enterUnary_operator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#unary_operator.
+    # @param  ctx Type: CParser.Unary_operatorContext
+    def exitUnary_operator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#primary_expression.
+    # @param  ctx Type: CParser.Primary_expressionContext
+    def enterPrimary_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#primary_expression.
+    # @param  ctx Type: CParser.Primary_expressionContext
+    def exitPrimary_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#constant.
+    # @param  ctx Type: CParser.ConstantContext
+    def enterConstant(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#constant.
+    # @param  ctx Type: CParser.ConstantContext
+    def exitConstant(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#expression.
+    # @param  ctx Type: CParser.ExpressionContext
+    def enterExpression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#expression.
+    # @param  ctx Type: CParser.ExpressionContext
+    def exitExpression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#constant_expression.
+    # @param  ctx Type: CParser.Constant_expressionContext
+    def enterConstant_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#constant_expression.
+    # @param  ctx Type: CParser.Constant_expressionContext
+    def exitConstant_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#assignment_expression.
+    # @param  ctx Type: CParser.Assignment_expressionContext
+    def enterAssignment_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#assignment_expression.
+    # @param  ctx Type: CParser.Assignment_expressionContext
+    def exitAssignment_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#lvalue.
+    # @param  ctx Type: CParser.LvalueContext
+    def enterLvalue(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#lvalue.
+    # @param  ctx Type: CParser.LvalueContext
+    def exitLvalue(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#assignment_operator.
+    # @param  ctx Type: CParser.Assignment_operatorContext
+    def enterAssignment_operator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#assignment_operator.
+    # @param  ctx Type: CParser.Assignment_operatorContext
+    def exitAssignment_operator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#conditional_expression.
+    # @param  ctx Type: CParser.Conditional_expressionContext
+    def enterConditional_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#conditional_expression.
+    # @param  ctx Type: CParser.Conditional_expressionContext
+    def exitConditional_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#logical_or_expression.
+    # @param  ctx Type: CParser.Logical_or_expressionContext
+    def enterLogical_or_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#logical_or_expression.
+    # @param  ctx Type: CParser.Logical_or_expressionContext
+    def exitLogical_or_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#logical_and_expression.
+    # @param  ctx Type: CParser.Logical_and_expressionContext
+    def enterLogical_and_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#logical_and_expression.
+    # @param  ctx Type: CParser.Logical_and_expressionContext
+    def exitLogical_and_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#inclusive_or_expression.
+    # @param  ctx Type: CParser.Inclusive_or_expressionContext
+    def enterInclusive_or_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#inclusive_or_expression.
+    # @param  ctx Type: CParser.Inclusive_or_expressionContext
+    def exitInclusive_or_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#exclusive_or_expression.
+    # @param  ctx Type: CParser.Exclusive_or_expressionContext
+    def enterExclusive_or_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#exclusive_or_expression.
+    # @param  ctx Type: CParser.Exclusive_or_expressionContext
+    def exitExclusive_or_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#and_expression.
+    # @param  ctx Type: CParser.And_expressionContext
+    def enterAnd_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#and_expression.
+    # @param  ctx Type: CParser.And_expressionContext
+    def exitAnd_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#equality_expression.
+    # @param  ctx Type: CParser.Equality_expressionContext
+    def enterEquality_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#equality_expression.
+    # @param  ctx Type: CParser.Equality_expressionContext
+    def exitEquality_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#relational_expression.
+    # @param  ctx Type: CParser.Relational_expressionContext
+    def enterRelational_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#relational_expression.
+    # @param  ctx Type: CParser.Relational_expressionContext
+    def exitRelational_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#shift_expression.
+    # @param  ctx Type: CParser.Shift_expressionContext
+    def enterShift_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#shift_expression.
+    # @param  ctx Type: CParser.Shift_expressionContext
+    def exitShift_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#statement.
+    # @param  ctx Type: CParser.StatementContext
+    def enterStatement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#statement.
+    # @param  ctx Type: CParser.StatementContext
+    def exitStatement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#asm2_statement.
+    # @param  ctx Type: CParser.Asm2_statementContext
+    def enterAsm2_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#asm2_statement.
+    # @param  ctx Type: CParser.Asm2_statementContext
+    def exitAsm2_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#asm1_statement.
+    # @param  ctx Type: CParser.Asm1_statementContext
+    def enterAsm1_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#asm1_statement.
+    # @param  ctx Type: CParser.Asm1_statementContext
+    def exitAsm1_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#asm_statement.
+    # @param  ctx Type: CParser.Asm_statementContext
+    def enterAsm_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#asm_statement.
+    # @param  ctx Type: CParser.Asm_statementContext
+    def exitAsm_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#macro_statement.
+    # @param  ctx Type: CParser.Macro_statementContext
+    def enterMacro_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#macro_statement.
+    # @param  ctx Type: CParser.Macro_statementContext
+    def exitMacro_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#labeled_statement.
+    # @param  ctx Type: CParser.Labeled_statementContext
+    def enterLabeled_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#labeled_statement.
+    # @param  ctx Type: CParser.Labeled_statementContext
+    def exitLabeled_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#compound_statement.
+    # @param  ctx Type: CParser.Compound_statementContext
+    def enterCompound_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#compound_statement.
+    # @param  ctx Type: CParser.Compound_statementContext
+    def exitCompound_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#statement_list.
+    # @param  ctx Type: CParser.Statement_listContext
+    def enterStatement_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#statement_list.
+    # @param  ctx Type: CParser.Statement_listContext
+    def exitStatement_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#expression_statement.
+    # @param  ctx Type: CParser.Expression_statementContext
+    def enterExpression_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#expression_statement.
+    # @param  ctx Type: CParser.Expression_statementContext
+    def exitExpression_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#selection_statement.
+    # @param  ctx Type: CParser.Selection_statementContext
+    def enterSelection_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#selection_statement.
+    # @param  ctx Type: CParser.Selection_statementContext
+    def exitSelection_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#iteration_statement.
+    # @param  ctx Type: CParser.Iteration_statementContext
+    def enterIteration_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#iteration_statement.
+    # @param  ctx Type: CParser.Iteration_statementContext
+    def exitIteration_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#jump_statement.
+    # @param  ctx Type: CParser.Jump_statementContext
+    def enterJump_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#jump_statement.
+    # @param  ctx Type: CParser.Jump_statementContext
+    def exitJump_statement(self,ctx):
+        pass
+
+
diff --git a/BaseTools/Source/Python/Ecc/CParser4/CParser.py b/BaseTools/Source/Python/Ecc/CParser4/CParser.py
new file mode 100644
index 0000000000..08d8a423f4
--- /dev/null
+++ b/BaseTools/Source/Python/Ecc/CParser4/CParser.py
@@ -0,0 +1,6279 @@
+# Generated from C.g4 by ANTLR 4.7.1
+# encoding: utf-8
+from antlr4 import *
+from io import StringIO
+from typing import TextIO
+import sys
+
+
+## @file
+# The file defines the parser for C source files.
+#
+# THIS FILE IS AUTO-GENERATED. PLEASE DO NOT MODIFY THIS FILE.
+# This file is generated by running:
+# java org.antlr.v4.Tool C.g4
+#
+# Copyright (c) 2009 - 2010, Intel Corporation. All rights reserved.
+#
+# This program and the accompanying materials are licensed and made available
+# under the terms and conditions of the BSD License which accompanies this
+# distribution.  The full text of the license may be found at:
+#   http://opensource.org/licenses/bsd-license.php
+#
+# THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+# WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+#
+##
+
+import Ecc.CodeFragment as CodeFragment
+import Ecc.FileProfile as FileProfile
+
+def serializedATN():
+    with StringIO() as buf:
+        buf.write("\3\u608b\ua72a\u8133\ub9ed\u417c\u3be7\u7786\u5964\3k")
+        buf.write("\u0380\4\2\t\2\4\3\t\3\4\4\t\4\4\5\t\5\4\6\t\6\4\7\t\7")
+        buf.write("\4\b\t\b\4\t\t\t\4\n\t\n\4\13\t\13\4\f\t\f\4\r\t\r\4\16")
+        buf.write("\t\16\4\17\t\17\4\20\t\20\4\21\t\21\4\22\t\22\4\23\t\23")
+        buf.write("\4\24\t\24\4\25\t\25\4\26\t\26\4\27\t\27\4\30\t\30\4\31")
+        buf.write("\t\31\4\32\t\32\4\33\t\33\4\34\t\34\4\35\t\35\4\36\t\36")
+        buf.write("\4\37\t\37\4 \t \4!\t!\4\"\t\"\4#\t#\4$\t$\4%\t%\4&\t")
+        buf.write("&\4\'\t\'\4(\t(\4)\t)\4*\t*\4+\t+\4,\t,\4-\t-\4.\t.\4")
+        buf.write("/\t/\4\60\t\60\4\61\t\61\4\62\t\62\4\63\t\63\4\64\t\64")
+        buf.write("\4\65\t\65\4\66\t\66\4\67\t\67\48\t8\49\t9\4:\t:\4;\t")
+        buf.write(";\4<\t<\4=\t=\4>\t>\4?\t?\4@\t@\4A\tA\4B\tB\4C\tC\4D\t")
+        buf.write("D\4E\tE\4F\tF\4G\tG\4H\tH\3\2\7\2\u0092\n\2\f\2\16\2\u0095")
+        buf.write("\13\2\3\3\5\3\u0098\n\3\3\3\3\3\7\3\u009c\n\3\f\3\16\3")
+        buf.write("\u009f\13\3\3\3\3\3\3\3\3\3\3\3\3\3\5\3\u00a7\n\3\5\3")
+        buf.write("\u00a9\n\3\3\4\5\4\u00ac\n\4\3\4\3\4\6\4\u00b0\n\4\r\4")
+        buf.write("\16\4\u00b1\3\4\3\4\3\4\5\4\u00b7\n\4\3\4\3\4\3\5\3\5")
+        buf.write("\3\5\6\5\u00be\n\5\r\5\16\5\u00bf\3\6\3\6\5\6\u00c4\n")
+        buf.write("\6\3\6\3\6\3\6\3\6\3\6\3\6\5\6\u00cc\n\6\3\6\3\6\3\6\5")
+        buf.write("\6\u00d1\n\6\3\7\3\7\3\7\7\7\u00d6\n\7\f\7\16\7\u00d9")
+        buf.write("\13\7\3\b\3\b\3\b\5\b\u00de\n\b\3\t\3\t\3\n\3\n\3\n\3")
+        buf.write("\n\3\n\3\n\3\n\3\n\3\n\3\n\3\n\3\n\3\n\3\n\3\n\3\n\3\n")
+        buf.write("\7\n\u00f3\n\n\f\n\16\n\u00f6\13\n\3\n\3\n\5\n\u00fa\n")
+        buf.write("\n\3\13\3\13\3\f\3\f\5\f\u0100\n\f\3\f\3\f\3\f\3\f\3\f")
+        buf.write("\3\f\3\f\5\f\u0109\n\f\3\r\3\r\3\16\6\16\u010e\n\16\r")
+        buf.write("\16\16\16\u010f\3\17\3\17\3\17\3\17\3\20\3\20\6\20\u0118")
+        buf.write("\n\20\r\20\16\20\u0119\3\21\3\21\3\21\7\21\u011f\n\21")
+        buf.write("\f\21\16\21\u0122\13\21\3\22\3\22\3\22\5\22\u0127\n\22")
+        buf.write("\3\22\3\22\5\22\u012b\n\22\3\23\3\23\3\23\3\23\5\23\u0131")
+        buf.write("\n\23\3\23\3\23\3\23\3\23\3\23\3\23\3\23\5\23\u013a\n")
+        buf.write("\23\3\23\3\23\3\23\3\23\5\23\u0140\n\23\3\24\3\24\3\24")
+        buf.write("\7\24\u0145\n\24\f\24\16\24\u0148\13\24\3\25\3\25\3\25")
+        buf.write("\5\25\u014d\n\25\3\26\3\26\3\27\5\27\u0152\n\27\3\27\5")
+        buf.write("\27\u0155\n\27\3\27\5\27\u0158\n\27\3\27\5\27\u015b\n")
+        buf.write("\27\3\27\3\27\5\27\u015f\n\27\3\30\3\30\7\30\u0163\n\30")
+        buf.write("\f\30\16\30\u0166\13\30\3\30\3\30\5\30\u016a\n\30\3\30")
+        buf.write("\3\30\3\30\6\30\u016f\n\30\r\30\16\30\u0170\5\30\u0173")
+        buf.write("\n\30\3\31\3\31\3\31\3\31\3\31\3\31\3\31\3\31\3\31\3\31")
+        buf.write("\3\31\3\31\3\31\3\31\3\31\3\31\5\31\u0185\n\31\3\32\3")
+        buf.write("\32\6\32\u0189\n\32\r\32\16\32\u018a\3\32\5\32\u018e\n")
+        buf.write("\32\3\32\3\32\3\32\5\32\u0193\n\32\3\33\3\33\3\33\5\33")
+        buf.write("\u0198\n\33\3\33\5\33\u019b\n\33\3\34\3\34\3\34\5\34\u01a0")
+        buf.write("\n\34\3\34\7\34\u01a3\n\34\f\34\16\34\u01a6\13\34\3\35")
+        buf.write("\3\35\3\35\7\35\u01ab\n\35\f\35\16\35\u01ae\13\35\3\35")
+        buf.write("\5\35\u01b1\n\35\3\35\7\35\u01b4\n\35\f\35\16\35\u01b7")
+        buf.write("\13\35\3\35\5\35\u01ba\n\35\3\36\3\36\3\36\7\36\u01bf")
+        buf.write("\n\36\f\36\16\36\u01c2\13\36\3\37\3\37\5\37\u01c6\n\37")
+        buf.write("\3\37\5\37\u01c9\n\37\3 \3 \5 \u01cd\n \3 \5 \u01d0\n")
+        buf.write(" \3!\3!\3!\3!\3!\5!\u01d7\n!\3!\7!\u01da\n!\f!\16!\u01dd")
+        buf.write("\13!\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\5")
+        buf.write("\"\u01eb\n\"\3#\3#\3#\3#\5#\u01f1\n#\3#\3#\5#\u01f5\n")
+        buf.write("#\3$\3$\3$\7$\u01fa\n$\f$\16$\u01fd\13$\3%\3%\5%\u0201")
+        buf.write("\n%\3%\3%\3%\5%\u0206\n%\7%\u0208\n%\f%\16%\u020b\13%")
+        buf.write("\3&\3&\3&\3&\3&\7&\u0212\n&\f&\16&\u0215\13&\3\'\3\'\3")
+        buf.write("\'\3\'\3\'\3\'\3\'\7\'\u021e\n\'\f\'\16\'\u0221\13\'\3")
+        buf.write("(\3(\3(\3(\3(\3(\5(\u0229\n(\3)\3)\3)\3)\3)\3)\3)\3)\3")
+        buf.write(")\3)\3)\3)\3)\3)\3)\5)\u023a\n)\3*\3*\3*\3*\3*\3*\3*\3")
+        buf.write("*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3")
+        buf.write("*\3*\3*\3*\7*\u0259\n*\f*\16*\u025c\13*\3+\3+\3+\7+\u0261")
+        buf.write("\n+\f+\16+\u0264\13+\3,\3,\3-\3-\3-\3-\3-\3-\5-\u026e")
+        buf.write("\n-\3.\3.\3.\3.\3.\7.\u0275\n.\f.\16.\u0278\13.\3.\6.")
+        buf.write("\u027b\n.\r.\16.\u027c\6.\u027f\n.\r.\16.\u0280\3.\7.")
+        buf.write("\u0284\n.\f.\16.\u0287\13.\3.\5.\u028a\n.\3/\3/\3/\7/")
+        buf.write("\u028f\n/\f/\16/\u0292\13/\3\60\3\60\3\61\3\61\3\61\3")
+        buf.write("\61\3\61\5\61\u029b\n\61\3\62\3\62\3\63\3\63\3\64\3\64")
+        buf.write("\3\64\3\64\3\64\3\64\3\64\5\64\u02a8\n\64\3\65\3\65\3")
+        buf.write("\65\7\65\u02ad\n\65\f\65\16\65\u02b0\13\65\3\66\3\66\3")
+        buf.write("\66\7\66\u02b5\n\66\f\66\16\66\u02b8\13\66\3\67\3\67\3")
+        buf.write("\67\7\67\u02bd\n\67\f\67\16\67\u02c0\13\67\38\38\38\7")
+        buf.write("8\u02c5\n8\f8\168\u02c8\138\39\39\39\79\u02cd\n9\f9\16")
+        buf.write("9\u02d0\139\3:\3:\3:\7:\u02d5\n:\f:\16:\u02d8\13:\3;\3")
+        buf.write(";\3;\7;\u02dd\n;\f;\16;\u02e0\13;\3<\3<\3<\7<\u02e5\n")
+        buf.write("<\f<\16<\u02e8\13<\3=\3=\3=\3=\3=\3=\3=\3=\3=\3=\3=\5")
+        buf.write("=\u02f5\n=\3>\5>\u02f8\n>\3>\3>\3>\7>\u02fd\n>\f>\16>")
+        buf.write("\u0300\13>\3>\3>\3>\3?\3?\3?\7?\u0308\n?\f?\16?\u030b")
+        buf.write("\13?\3?\3?\3@\3@\3@\7@\u0312\n@\f@\16@\u0315\13@\3@\3")
+        buf.write("@\3A\3A\3A\7A\u031c\nA\fA\16A\u031f\13A\3A\5A\u0322\n")
+        buf.write("A\3A\5A\u0325\nA\3A\3A\3B\3B\3B\3B\3B\3B\3B\3B\3B\3B\3")
+        buf.write("B\5B\u0334\nB\3C\3C\7C\u0338\nC\fC\16C\u033b\13C\3C\5")
+        buf.write("C\u033e\nC\3C\3C\3D\6D\u0343\nD\rD\16D\u0344\3E\3E\3E")
+        buf.write("\3E\5E\u034b\nE\3F\3F\3F\3F\3F\3F\3F\3F\5F\u0355\nF\3")
+        buf.write("F\3F\3F\3F\3F\3F\5F\u035d\nF\3G\3G\3G\3G\3G\3G\3G\3G\3")
+        buf.write("G\3G\3G\3G\3G\3G\3G\3G\5G\u036f\nG\3H\3H\3H\3H\3H\3H\3")
+        buf.write("H\3H\3H\3H\3H\3H\3H\5H\u037e\nH\3H\2\2I\2\4\6\b\n\f\16")
+        buf.write("\20\22\24\26\30\32\34\36 \"$&(*,.\60\62\64\668:<>@BDF")
+        buf.write("HJLNPRTVXZ\\^`bdfhjlnprtvxz|~\u0080\u0082\u0084\u0086")
+        buf.write("\u0088\u008a\u008c\u008e\2\f\3\2\b\f\3\2\27\30\3\2\33")
+        buf.write("\'\5\2,,./\679\4\2\7\7:C\3\2IJ\3\2KN\3\2OP\3\2\4\4\3\2")
+        buf.write("\26\26\2\u03d8\2\u0093\3\2\2\2\4\u00a8\3\2\2\2\6\u00ab")
+        buf.write("\3\2\2\2\b\u00bd\3\2\2\2\n\u00d0\3\2\2\2\f\u00d2\3\2\2")
+        buf.write("\2\16\u00da\3\2\2\2\20\u00df\3\2\2\2\22\u00f9\3\2\2\2")
+        buf.write("\24\u00fb\3\2\2\2\26\u0108\3\2\2\2\30\u010a\3\2\2\2\32")
+        buf.write("\u010d\3\2\2\2\34\u0111\3\2\2\2\36\u0117\3\2\2\2 \u011b")
+        buf.write("\3\2\2\2\"\u012a\3\2\2\2$\u013f\3\2\2\2&\u0141\3\2\2\2")
+        buf.write("(\u0149\3\2\2\2*\u014e\3\2\2\2,\u015e\3\2\2\2.\u0172\3")
+        buf.write("\2\2\2\60\u0184\3\2\2\2\62\u0192\3\2\2\2\64\u0194\3\2")
+        buf.write("\2\2\66\u019c\3\2\2\28\u01b9\3\2\2\2:\u01bb\3\2\2\2<\u01c8")
+        buf.write("\3\2\2\2>\u01cf\3\2\2\2@\u01d6\3\2\2\2B\u01ea\3\2\2\2")
+        buf.write("D\u01f4\3\2\2\2F\u01f6\3\2\2\2H\u01fe\3\2\2\2J\u020c\3")
+        buf.write("\2\2\2L\u0216\3\2\2\2N\u0228\3\2\2\2P\u0239\3\2\2\2R\u023b")
+        buf.write("\3\2\2\2T\u025d\3\2\2\2V\u0265\3\2\2\2X\u026d\3\2\2\2")
+        buf.write("Z\u0289\3\2\2\2\\\u028b\3\2\2\2^\u0293\3\2\2\2`\u029a")
+        buf.write("\3\2\2\2b\u029c\3\2\2\2d\u029e\3\2\2\2f\u02a0\3\2\2\2")
+        buf.write("h\u02a9\3\2\2\2j\u02b1\3\2\2\2l\u02b9\3\2\2\2n\u02c1\3")
+        buf.write("\2\2\2p\u02c9\3\2\2\2r\u02d1\3\2\2\2t\u02d9\3\2\2\2v\u02e1")
+        buf.write("\3\2\2\2x\u02f4\3\2\2\2z\u02f7\3\2\2\2|\u0304\3\2\2\2")
+        buf.write("~\u030e\3\2\2\2\u0080\u0318\3\2\2\2\u0082\u0333\3\2\2")
+        buf.write("\2\u0084\u0335\3\2\2\2\u0086\u0342\3\2\2\2\u0088\u034a")
+        buf.write("\3\2\2\2\u008a\u035c\3\2\2\2\u008c\u036e\3\2\2\2\u008e")
+        buf.write("\u037d\3\2\2\2\u0090\u0092\5\4\3\2\u0091\u0090\3\2\2\2")
+        buf.write("\u0092\u0095\3\2\2\2\u0093\u0091\3\2\2\2\u0093\u0094\3")
+        buf.write("\2\2\2\u0094\3\3\2\2\2\u0095\u0093\3\2\2\2\u0096\u0098")
+        buf.write("\5\b\5\2\u0097\u0096\3\2\2\2\u0097\u0098\3\2\2\2\u0098")
+        buf.write("\u0099\3\2\2\2\u0099\u009d\5,\27\2\u009a\u009c\5\n\6\2")
+        buf.write("\u009b\u009a\3\2\2\2\u009c\u009f\3\2\2\2\u009d\u009b\3")
+        buf.write("\2\2\2\u009d\u009e\3\2\2\2\u009e\u00a0\3\2\2\2\u009f\u009d")
+        buf.write("\3\2\2\2\u00a0\u00a1\7\3\2\2\u00a1\u00a9\3\2\2\2\u00a2")
+        buf.write("\u00a9\5\6\4\2\u00a3\u00a9\5\n\6\2\u00a4\u00a6\5\u0080")
+        buf.write("A\2\u00a5\u00a7\7\4\2\2\u00a6\u00a5\3\2\2\2\u00a6\u00a7")
+        buf.write("\3\2\2\2\u00a7\u00a9\3\2\2\2\u00a8\u0097\3\2\2\2\u00a8")
+        buf.write("\u00a2\3\2\2\2\u00a8\u00a3\3\2\2\2\u00a8\u00a4\3\2\2\2")
+        buf.write("\u00a9\5\3\2\2\2\u00aa\u00ac\5\b\5\2\u00ab\u00aa\3\2\2")
+        buf.write("\2\u00ab\u00ac\3\2\2\2\u00ac\u00ad\3\2\2\2\u00ad\u00b6")
+        buf.write("\5,\27\2\u00ae\u00b0\5\n\6\2\u00af\u00ae\3\2\2\2\u00b0")
+        buf.write("\u00b1\3\2\2\2\u00b1\u00af\3\2\2\2\u00b1\u00b2\3\2\2\2")
+        buf.write("\u00b2\u00b3\3\2\2\2\u00b3\u00b4\5\u0084C\2\u00b4\u00b7")
+        buf.write("\3\2\2\2\u00b5\u00b7\5\u0084C\2\u00b6\u00af\3\2\2\2\u00b6")
+        buf.write("\u00b5\3\2\2\2\u00b7\u00b8\3\2\2\2\u00b8\u00b9\b\4\1\2")
+        buf.write("\u00b9\7\3\2\2\2\u00ba\u00be\5\20\t\2\u00bb\u00be\5\22")
+        buf.write("\n\2\u00bc\u00be\5*\26\2\u00bd\u00ba\3\2\2\2\u00bd\u00bb")
+        buf.write("\3\2\2\2\u00bd\u00bc\3\2\2\2\u00be\u00bf\3\2\2\2\u00bf")
+        buf.write("\u00bd\3\2\2\2\u00bf\u00c0\3\2\2\2\u00c0\t\3\2\2\2\u00c1")
+        buf.write("\u00c3\7\5\2\2\u00c2\u00c4\5\b\5\2\u00c3\u00c2\3\2\2\2")
+        buf.write("\u00c3\u00c4\3\2\2\2\u00c4\u00c5\3\2\2\2\u00c5\u00c6\5")
+        buf.write("\f\7\2\u00c6\u00c7\7\4\2\2\u00c7\u00c8\b\6\1\2\u00c8\u00d1")
+        buf.write("\3\2\2\2\u00c9\u00cb\5\b\5\2\u00ca\u00cc\5\f\7\2\u00cb")
+        buf.write("\u00ca\3\2\2\2\u00cb\u00cc\3\2\2\2\u00cc\u00cd\3\2\2\2")
+        buf.write("\u00cd\u00ce\7\4\2\2\u00ce\u00cf\b\6\1\2\u00cf\u00d1\3")
+        buf.write("\2\2\2\u00d0\u00c1\3\2\2\2\u00d0\u00c9\3\2\2\2\u00d1\13")
+        buf.write("\3\2\2\2\u00d2\u00d7\5\16\b\2\u00d3\u00d4\7\6\2\2\u00d4")
+        buf.write("\u00d6\5\16\b\2\u00d5\u00d3\3\2\2\2\u00d6\u00d9\3\2\2")
+        buf.write("\2\u00d7\u00d5\3\2\2\2\u00d7\u00d8\3\2\2\2\u00d8\r\3\2")
+        buf.write("\2\2\u00d9\u00d7\3\2\2\2\u00da\u00dd\5,\27\2\u00db\u00dc")
+        buf.write("\7\7\2\2\u00dc\u00de\5D#\2\u00dd\u00db\3\2\2\2\u00dd\u00de")
+        buf.write("\3\2\2\2\u00de\17\3\2\2\2\u00df\u00e0\t\2\2\2\u00e0\21")
+        buf.write("\3\2\2\2\u00e1\u00fa\7\r\2\2\u00e2\u00fa\7\16\2\2\u00e3")
+        buf.write("\u00fa\7\17\2\2\u00e4\u00fa\7\20\2\2\u00e5\u00fa\7\21")
+        buf.write("\2\2\u00e6\u00fa\7\22\2\2\u00e7\u00fa\7\23\2\2\u00e8\u00fa")
+        buf.write("\7\24\2\2\u00e9\u00fa\7\25\2\2\u00ea\u00eb\5\26\f\2\u00eb")
+        buf.write("\u00ec\b\n\1\2\u00ec\u00fa\3\2\2\2\u00ed\u00ee\5$\23\2")
+        buf.write("\u00ee\u00ef\b\n\1\2\u00ef\u00fa\3\2\2\2\u00f0\u00f4\7")
+        buf.write("_\2\2\u00f1\u00f3\5*\26\2\u00f2\u00f1\3\2\2\2\u00f3\u00f6")
+        buf.write("\3\2\2\2\u00f4\u00f2\3\2\2\2\u00f4\u00f5\3\2\2\2\u00f5")
+        buf.write("\u00f7\3\2\2\2\u00f6\u00f4\3\2\2\2\u00f7\u00fa\5,\27\2")
+        buf.write("\u00f8\u00fa\5\24\13\2\u00f9\u00e1\3\2\2\2\u00f9\u00e2")
+        buf.write("\3\2\2\2\u00f9\u00e3\3\2\2\2\u00f9\u00e4\3\2\2\2\u00f9")
+        buf.write("\u00e5\3\2\2\2\u00f9\u00e6\3\2\2\2\u00f9\u00e7\3\2\2\2")
+        buf.write("\u00f9\u00e8\3\2\2\2\u00f9\u00e9\3\2\2\2\u00f9\u00ea\3")
+        buf.write("\2\2\2\u00f9\u00ed\3\2\2\2\u00f9\u00f0\3\2\2\2\u00f9\u00f8")
+        buf.write("\3\2\2\2\u00fa\23\3\2\2\2\u00fb\u00fc\7_\2\2\u00fc\25")
+        buf.write("\3\2\2\2\u00fd\u00ff\5\30\r\2\u00fe\u0100\7_\2\2\u00ff")
+        buf.write("\u00fe\3\2\2\2\u00ff\u0100\3\2\2\2\u0100\u0101\3\2\2\2")
+        buf.write("\u0101\u0102\7\3\2\2\u0102\u0103\5\32\16\2\u0103\u0104")
+        buf.write("\7\26\2\2\u0104\u0109\3\2\2\2\u0105\u0106\5\30\r\2\u0106")
+        buf.write("\u0107\7_\2\2\u0107\u0109\3\2\2\2\u0108\u00fd\3\2\2\2")
+        buf.write("\u0108\u0105\3\2\2\2\u0109\27\3\2\2\2\u010a\u010b\t\3")
+        buf.write("\2\2\u010b\31\3\2\2\2\u010c\u010e\5\34\17\2\u010d\u010c")
+        buf.write("\3\2\2\2\u010e\u010f\3\2\2\2\u010f\u010d\3\2\2\2\u010f")
+        buf.write("\u0110\3\2\2\2\u0110\33\3\2\2\2\u0111\u0112\5\36\20\2")
+        buf.write("\u0112\u0113\5 \21\2\u0113\u0114\7\4\2\2\u0114\35\3\2")
+        buf.write("\2\2\u0115\u0118\5*\26\2\u0116\u0118\5\22\n\2\u0117\u0115")
+        buf.write("\3\2\2\2\u0117\u0116\3\2\2\2\u0118\u0119\3\2\2\2\u0119")
+        buf.write("\u0117\3\2\2\2\u0119\u011a\3\2\2\2\u011a\37\3\2\2\2\u011b")
+        buf.write("\u0120\5\"\22\2\u011c\u011d\7\6\2\2\u011d\u011f\5\"\22")
+        buf.write("\2\u011e\u011c\3\2\2\2\u011f\u0122\3\2\2\2\u0120\u011e")
+        buf.write("\3\2\2\2\u0120\u0121\3\2\2\2\u0121!\3\2\2\2\u0122\u0120")
+        buf.write("\3\2\2\2\u0123\u0126\5,\27\2\u0124\u0125\7\31\2\2\u0125")
+        buf.write("\u0127\5^\60\2\u0126\u0124\3\2\2\2\u0126\u0127\3\2\2\2")
+        buf.write("\u0127\u012b\3\2\2\2\u0128\u0129\7\31\2\2\u0129\u012b")
+        buf.write("\5^\60\2\u012a\u0123\3\2\2\2\u012a\u0128\3\2\2\2\u012b")
+        buf.write("#\3\2\2\2\u012c\u012d\7\32\2\2\u012d\u012e\7\3\2\2\u012e")
+        buf.write("\u0130\5&\24\2\u012f\u0131\7\6\2\2\u0130\u012f\3\2\2\2")
+        buf.write("\u0130\u0131\3\2\2\2\u0131\u0132\3\2\2\2\u0132\u0133\7")
+        buf.write("\26\2\2\u0133\u0140\3\2\2\2\u0134\u0135\7\32\2\2\u0135")
+        buf.write("\u0136\7_\2\2\u0136\u0137\7\3\2\2\u0137\u0139\5&\24\2")
+        buf.write("\u0138\u013a\7\6\2\2\u0139\u0138\3\2\2\2\u0139\u013a\3")
+        buf.write("\2\2\2\u013a\u013b\3\2\2\2\u013b\u013c\7\26\2\2\u013c")
+        buf.write("\u0140\3\2\2\2\u013d\u013e\7\32\2\2\u013e\u0140\7_\2\2")
+        buf.write("\u013f\u012c\3\2\2\2\u013f\u0134\3\2\2\2\u013f\u013d\3")
+        buf.write("\2\2\2\u0140%\3\2\2\2\u0141\u0146\5(\25\2\u0142\u0143")
+        buf.write("\7\6\2\2\u0143\u0145\5(\25\2\u0144\u0142\3\2\2\2\u0145")
+        buf.write("\u0148\3\2\2\2\u0146\u0144\3\2\2\2\u0146\u0147\3\2\2\2")
+        buf.write("\u0147\'\3\2\2\2\u0148\u0146\3\2\2\2\u0149\u014c\7_\2")
+        buf.write("\2\u014a\u014b\7\7\2\2\u014b\u014d\5^\60\2\u014c\u014a")
+        buf.write("\3\2\2\2\u014c\u014d\3\2\2\2\u014d)\3\2\2\2\u014e\u014f")
+        buf.write("\t\4\2\2\u014f+\3\2\2\2\u0150\u0152\5\62\32\2\u0151\u0150")
+        buf.write("\3\2\2\2\u0151\u0152\3\2\2\2\u0152\u0154\3\2\2\2\u0153")
+        buf.write("\u0155\7$\2\2\u0154\u0153\3\2\2\2\u0154\u0155\3\2\2\2")
+        buf.write("\u0155\u0157\3\2\2\2\u0156\u0158\7%\2\2\u0157\u0156\3")
+        buf.write("\2\2\2\u0157\u0158\3\2\2\2\u0158\u015a\3\2\2\2\u0159\u015b")
+        buf.write("\7&\2\2\u015a\u0159\3\2\2\2\u015a\u015b\3\2\2\2\u015b")
+        buf.write("\u015c\3\2\2\2\u015c\u015f\5.\30\2\u015d\u015f\5\62\32")
+        buf.write("\2\u015e\u0151\3\2\2\2\u015e\u015d\3\2\2\2\u015f-\3\2")
+        buf.write("\2\2\u0160\u0164\7_\2\2\u0161\u0163\5\60\31\2\u0162\u0161")
+        buf.write("\3\2\2\2\u0163\u0166\3\2\2\2\u0164\u0162\3\2\2\2\u0164")
+        buf.write("\u0165\3\2\2\2\u0165\u0173\3\2\2\2\u0166\u0164\3\2\2\2")
+        buf.write("\u0167\u0169\7(\2\2\u0168\u016a\7$\2\2\u0169\u0168\3\2")
+        buf.write("\2\2\u0169\u016a\3\2\2\2\u016a\u016b\3\2\2\2\u016b\u016c")
+        buf.write("\5,\27\2\u016c\u016e\7)\2\2\u016d\u016f\5\60\31\2\u016e")
+        buf.write("\u016d\3\2\2\2\u016f\u0170\3\2\2\2\u0170\u016e\3\2\2\2")
+        buf.write("\u0170\u0171\3\2\2\2\u0171\u0173\3\2\2\2\u0172\u0160\3")
+        buf.write("\2\2\2\u0172\u0167\3\2\2\2\u0173/\3\2\2\2\u0174\u0175")
+        buf.write("\7*\2\2\u0175\u0176\5^\60\2\u0176\u0177\7+\2\2\u0177\u0185")
+        buf.write("\3\2\2\2\u0178\u0179\7*\2\2\u0179\u0185\7+\2\2\u017a\u017b")
+        buf.write("\7(\2\2\u017b\u017c\5\64\33\2\u017c\u017d\7)\2\2\u017d")
+        buf.write("\u0185\3\2\2\2\u017e\u017f\7(\2\2\u017f\u0180\5:\36\2")
+        buf.write("\u0180\u0181\7)\2\2\u0181\u0185\3\2\2\2\u0182\u0183\7")
+        buf.write("(\2\2\u0183\u0185\7)\2\2\u0184\u0174\3\2\2\2\u0184\u0178")
+        buf.write("\3\2\2\2\u0184\u017a\3\2\2\2\u0184\u017e\3\2\2\2\u0184")
+        buf.write("\u0182\3\2\2\2\u0185\61\3\2\2\2\u0186\u0188\7,\2\2\u0187")
+        buf.write("\u0189\5*\26\2\u0188\u0187\3\2\2\2\u0189\u018a\3\2\2\2")
+        buf.write("\u018a\u0188\3\2\2\2\u018a\u018b\3\2\2\2\u018b\u018d\3")
+        buf.write("\2\2\2\u018c\u018e\5\62\32\2\u018d\u018c\3\2\2\2\u018d")
+        buf.write("\u018e\3\2\2\2\u018e\u0193\3\2\2\2\u018f\u0190\7,\2\2")
+        buf.write("\u0190\u0193\5\62\32\2\u0191\u0193\7,\2\2\u0192\u0186")
+        buf.write("\3\2\2\2\u0192\u018f\3\2\2\2\u0192\u0191\3\2\2\2\u0193")
+        buf.write("\63\3\2\2\2\u0194\u019a\5\66\34\2\u0195\u0197\7\6\2\2")
+        buf.write("\u0196\u0198\7\37\2\2\u0197\u0196\3\2\2\2\u0197\u0198")
+        buf.write("\3\2\2\2\u0198\u0199\3\2\2\2\u0199\u019b\7-\2\2\u019a")
+        buf.write("\u0195\3\2\2\2\u019a\u019b\3\2\2\2\u019b\65\3\2\2\2\u019c")
+        buf.write("\u01a4\58\35\2\u019d\u019f\7\6\2\2\u019e\u01a0\7\37\2")
+        buf.write("\2\u019f\u019e\3\2\2\2\u019f\u01a0\3\2\2\2\u01a0\u01a1")
+        buf.write("\3\2\2\2\u01a1\u01a3\58\35\2\u01a2\u019d\3\2\2\2\u01a3")
+        buf.write("\u01a6\3\2\2\2\u01a4\u01a2\3\2\2\2\u01a4\u01a5\3\2\2\2")
+        buf.write("\u01a5\67\3\2\2\2\u01a6\u01a4\3\2\2\2\u01a7\u01ac\5\b")
+        buf.write("\5\2\u01a8\u01ab\5,\27\2\u01a9\u01ab\5> \2\u01aa\u01a8")
+        buf.write("\3\2\2\2\u01aa\u01a9\3\2\2\2\u01ab\u01ae\3\2\2\2\u01ac")
+        buf.write("\u01aa\3\2\2\2\u01ac\u01ad\3\2\2\2\u01ad\u01b0\3\2\2\2")
+        buf.write("\u01ae\u01ac\3\2\2\2\u01af\u01b1\7\37\2\2\u01b0\u01af")
+        buf.write("\3\2\2\2\u01b0\u01b1\3\2\2\2\u01b1\u01ba\3\2\2\2\u01b2")
+        buf.write("\u01b4\5\62\32\2\u01b3\u01b2\3\2\2\2\u01b4\u01b7\3\2\2")
+        buf.write("\2\u01b5\u01b3\3\2\2\2\u01b5\u01b6\3\2\2\2\u01b6\u01b8")
+        buf.write("\3\2\2\2\u01b7\u01b5\3\2\2\2\u01b8\u01ba\7_\2\2\u01b9")
+        buf.write("\u01a7\3\2\2\2\u01b9\u01b5\3\2\2\2\u01ba9\3\2\2\2\u01bb")
+        buf.write("\u01c0\7_\2\2\u01bc\u01bd\7\6\2\2\u01bd\u01bf\7_\2\2\u01be")
+        buf.write("\u01bc\3\2\2\2\u01bf\u01c2\3\2\2\2\u01c0\u01be\3\2\2\2")
+        buf.write("\u01c0\u01c1\3\2\2\2\u01c1;\3\2\2\2\u01c2\u01c0\3\2\2")
+        buf.write("\2\u01c3\u01c5\5\36\20\2\u01c4\u01c6\5> \2\u01c5\u01c4")
+        buf.write("\3\2\2\2\u01c5\u01c6\3\2\2\2\u01c6\u01c9\3\2\2\2\u01c7")
+        buf.write("\u01c9\5\24\13\2\u01c8\u01c3\3\2\2\2\u01c8\u01c7\3\2\2")
+        buf.write("\2\u01c9=\3\2\2\2\u01ca\u01cc\5\62\32\2\u01cb\u01cd\5")
+        buf.write("@!\2\u01cc\u01cb\3\2\2\2\u01cc\u01cd\3\2\2\2\u01cd\u01d0")
+        buf.write("\3\2\2\2\u01ce\u01d0\5@!\2\u01cf\u01ca\3\2\2\2\u01cf\u01ce")
+        buf.write("\3\2\2\2\u01d0?\3\2\2\2\u01d1\u01d2\7(\2\2\u01d2\u01d3")
+        buf.write("\5> \2\u01d3\u01d4\7)\2\2\u01d4\u01d7\3\2\2\2\u01d5\u01d7")
+        buf.write("\5B\"\2\u01d6\u01d1\3\2\2\2\u01d6\u01d5\3\2\2\2\u01d7")
+        buf.write("\u01db\3\2\2\2\u01d8\u01da\5B\"\2\u01d9\u01d8\3\2\2\2")
+        buf.write("\u01da\u01dd\3\2\2\2\u01db\u01d9\3\2\2\2\u01db\u01dc\3")
+        buf.write("\2\2\2\u01dcA\3\2\2\2\u01dd\u01db\3\2\2\2\u01de\u01df")
+        buf.write("\7*\2\2\u01df\u01eb\7+\2\2\u01e0\u01e1\7*\2\2\u01e1\u01e2")
+        buf.write("\5^\60\2\u01e2\u01e3\7+\2\2\u01e3\u01eb\3\2\2\2\u01e4")
+        buf.write("\u01e5\7(\2\2\u01e5\u01eb\7)\2\2\u01e6\u01e7\7(\2\2\u01e7")
+        buf.write("\u01e8\5\64\33\2\u01e8\u01e9\7)\2\2\u01e9\u01eb\3\2\2")
+        buf.write("\2\u01ea\u01de\3\2\2\2\u01ea\u01e0\3\2\2\2\u01ea\u01e4")
+        buf.write("\3\2\2\2\u01ea\u01e6\3\2\2\2\u01ebC\3\2\2\2\u01ec\u01f5")
+        buf.write("\5`\61\2\u01ed\u01ee\7\3\2\2\u01ee\u01f0\5F$\2\u01ef\u01f1")
+        buf.write("\7\6\2\2\u01f0\u01ef\3\2\2\2\u01f0\u01f1\3\2\2\2\u01f1")
+        buf.write("\u01f2\3\2\2\2\u01f2\u01f3\7\26\2\2\u01f3\u01f5\3\2\2")
+        buf.write("\2\u01f4\u01ec\3\2\2\2\u01f4\u01ed\3\2\2\2\u01f5E\3\2")
+        buf.write("\2\2\u01f6\u01fb\5D#\2\u01f7\u01f8\7\6\2\2\u01f8\u01fa")
+        buf.write("\5D#\2\u01f9\u01f7\3\2\2\2\u01fa\u01fd\3\2\2\2\u01fb\u01f9")
+        buf.write("\3\2\2\2\u01fb\u01fc\3\2\2\2\u01fcG\3\2\2\2\u01fd\u01fb")
+        buf.write("\3\2\2\2\u01fe\u0200\5`\61\2\u01ff\u0201\7\37\2\2\u0200")
+        buf.write("\u01ff\3\2\2\2\u0200\u0201\3\2\2\2\u0201\u0209\3\2\2\2")
+        buf.write("\u0202\u0203\7\6\2\2\u0203\u0205\5`\61\2\u0204\u0206\7")
+        buf.write("\37\2\2\u0205\u0204\3\2\2\2\u0205\u0206\3\2\2\2\u0206")
+        buf.write("\u0208\3\2\2\2\u0207\u0202\3\2\2\2\u0208\u020b\3\2\2\2")
+        buf.write("\u0209\u0207\3\2\2\2\u0209\u020a\3\2\2\2\u020aI\3\2\2")
+        buf.write("\2\u020b\u0209\3\2\2\2\u020c\u0213\5L\'\2\u020d\u020e")
+        buf.write("\7.\2\2\u020e\u0212\5L\'\2\u020f\u0210\7/\2\2\u0210\u0212")
+        buf.write("\5L\'\2\u0211\u020d\3\2\2\2\u0211\u020f\3\2\2\2\u0212")
+        buf.write("\u0215\3\2\2\2\u0213\u0211\3\2\2\2\u0213\u0214\3\2\2\2")
+        buf.write("\u0214K\3\2\2\2\u0215\u0213\3\2\2\2\u0216\u021f\5N(\2")
+        buf.write("\u0217\u0218\7,\2\2\u0218\u021e\5N(\2\u0219\u021a\7\60")
+        buf.write("\2\2\u021a\u021e\5N(\2\u021b\u021c\7\61\2\2\u021c\u021e")
+        buf.write("\5N(\2\u021d\u0217\3\2\2\2\u021d\u0219\3\2\2\2\u021d\u021b")
+        buf.write("\3\2\2\2\u021e\u0221\3\2\2\2\u021f\u021d\3\2\2\2\u021f")
+        buf.write("\u0220\3\2\2\2\u0220M\3\2\2\2\u0221\u021f\3\2\2\2\u0222")
+        buf.write("\u0223\7(\2\2\u0223\u0224\5<\37\2\u0224\u0225\7)\2\2\u0225")
+        buf.write("\u0226\5N(\2\u0226\u0229\3\2\2\2\u0227\u0229\5P)\2\u0228")
+        buf.write("\u0222\3\2\2\2\u0228\u0227\3\2\2\2\u0229O\3\2\2\2\u022a")
+        buf.write("\u023a\5R*\2\u022b\u022c\7\62\2\2\u022c\u023a\5P)\2\u022d")
+        buf.write("\u022e\7\63\2\2\u022e\u023a\5P)\2\u022f\u0230\5V,\2\u0230")
+        buf.write("\u0231\5N(\2\u0231\u023a\3\2\2\2\u0232\u0233\7\64\2\2")
+        buf.write("\u0233\u023a\5P)\2\u0234\u0235\7\64\2\2\u0235\u0236\7")
+        buf.write("(\2\2\u0236\u0237\5<\37\2\u0237\u0238\7)\2\2\u0238\u023a")
+        buf.write("\3\2\2\2\u0239\u022a\3\2\2\2\u0239\u022b\3\2\2\2\u0239")
+        buf.write("\u022d\3\2\2\2\u0239\u022f\3\2\2\2\u0239\u0232\3\2\2\2")
+        buf.write("\u0239\u0234\3\2\2\2\u023aQ\3\2\2\2\u023b\u023c\5X-\2")
+        buf.write("\u023c\u025a\b*\1\2\u023d\u023e\7*\2\2\u023e\u023f\5\\")
+        buf.write("/\2\u023f\u0240\7+\2\2\u0240\u0259\3\2\2\2\u0241\u0242")
+        buf.write("\7(\2\2\u0242\u0243\7)\2\2\u0243\u0259\b*\1\2\u0244\u0245")
+        buf.write("\7(\2\2\u0245\u0246\5H%\2\u0246\u0247\7)\2\2\u0247\u0248")
+        buf.write("\b*\1\2\u0248\u0259\3\2\2\2\u0249\u024a\7(\2\2\u024a\u024b")
+        buf.write("\5T+\2\u024b\u024c\7)\2\2\u024c\u0259\3\2\2\2\u024d\u024e")
+        buf.write("\7\65\2\2\u024e\u024f\7_\2\2\u024f\u0259\b*\1\2\u0250")
+        buf.write("\u0251\7,\2\2\u0251\u0252\7_\2\2\u0252\u0259\b*\1\2\u0253")
+        buf.write("\u0254\7\66\2\2\u0254\u0255\7_\2\2\u0255\u0259\b*\1\2")
+        buf.write("\u0256\u0259\7\62\2\2\u0257\u0259\7\63\2\2\u0258\u023d")
+        buf.write("\3\2\2\2\u0258\u0241\3\2\2\2\u0258\u0244\3\2\2\2\u0258")
+        buf.write("\u0249\3\2\2\2\u0258\u024d\3\2\2\2\u0258\u0250\3\2\2\2")
+        buf.write("\u0258\u0253\3\2\2\2\u0258\u0256\3\2\2\2\u0258\u0257\3")
+        buf.write("\2\2\2\u0259\u025c\3\2\2\2\u025a\u0258\3\2\2\2\u025a\u025b")
+        buf.write("\3\2\2\2\u025bS\3\2\2\2\u025c\u025a\3\2\2\2\u025d\u0262")
+        buf.write("\58\35\2\u025e\u025f\7\6\2\2\u025f\u0261\58\35\2\u0260")
+        buf.write("\u025e\3\2\2\2\u0261\u0264\3\2\2\2\u0262\u0260\3\2\2\2")
+        buf.write("\u0262\u0263\3\2\2\2\u0263U\3\2\2\2\u0264\u0262\3\2\2")
+        buf.write("\2\u0265\u0266\t\5\2\2\u0266W\3\2\2\2\u0267\u026e\7_\2")
+        buf.write("\2\u0268\u026e\5Z.\2\u0269\u026a\7(\2\2\u026a\u026b\5")
+        buf.write("\\/\2\u026b\u026c\7)\2\2\u026c\u026e\3\2\2\2\u026d\u0267")
+        buf.write("\3\2\2\2\u026d\u0268\3\2\2\2\u026d\u0269\3\2\2\2\u026e")
+        buf.write("Y\3\2\2\2\u026f\u028a\7b\2\2\u0270\u028a\7d\2\2\u0271")
+        buf.write("\u028a\7c\2\2\u0272\u028a\7`\2\2\u0273\u0275\7_\2\2\u0274")
+        buf.write("\u0273\3\2\2\2\u0275\u0278\3\2\2\2\u0276\u0274\3\2\2\2")
+        buf.write("\u0276\u0277\3\2\2\2\u0277\u027a\3\2\2\2\u0278\u0276\3")
+        buf.write("\2\2\2\u0279\u027b\7a\2\2\u027a\u0279\3\2\2\2\u027b\u027c")
+        buf.write("\3\2\2\2\u027c\u027a\3\2\2\2\u027c\u027d\3\2\2\2\u027d")
+        buf.write("\u027f\3\2\2\2\u027e\u0276\3\2\2\2\u027f\u0280\3\2\2\2")
+        buf.write("\u0280\u027e\3\2\2\2\u0280\u0281\3\2\2\2\u0281\u0285\3")
+        buf.write("\2\2\2\u0282\u0284\7_\2\2\u0283\u0282\3\2\2\2\u0284\u0287")
+        buf.write("\3\2\2\2\u0285\u0283\3\2\2\2\u0285\u0286\3\2\2\2\u0286")
+        buf.write("\u028a\3\2\2\2\u0287\u0285\3\2\2\2\u0288\u028a\7e\2\2")
+        buf.write("\u0289\u026f\3\2\2\2\u0289\u0270\3\2\2\2\u0289\u0271\3")
+        buf.write("\2\2\2\u0289\u0272\3\2\2\2\u0289\u027e\3\2\2\2\u0289\u0288")
+        buf.write("\3\2\2\2\u028a[\3\2\2\2\u028b\u0290\5`\61\2\u028c\u028d")
+        buf.write("\7\6\2\2\u028d\u028f\5`\61\2\u028e\u028c\3\2\2\2\u028f")
+        buf.write("\u0292\3\2\2\2\u0290\u028e\3\2\2\2\u0290\u0291\3\2\2\2")
+        buf.write("\u0291]\3\2\2\2\u0292\u0290\3\2\2\2\u0293\u0294\5f\64")
+        buf.write("\2\u0294_\3\2\2\2\u0295\u0296\5b\62\2\u0296\u0297\5d\63")
+        buf.write("\2\u0297\u0298\5`\61\2\u0298\u029b\3\2\2\2\u0299\u029b")
+        buf.write("\5f\64\2\u029a\u0295\3\2\2\2\u029a\u0299\3\2\2\2\u029b")
+        buf.write("a\3\2\2\2\u029c\u029d\5P)\2\u029dc\3\2\2\2\u029e\u029f")
+        buf.write("\t\6\2\2\u029fe\3\2\2\2\u02a0\u02a7\5h\65\2\u02a1\u02a2")
+        buf.write("\7D\2\2\u02a2\u02a3\5\\/\2\u02a3\u02a4\7\31\2\2\u02a4")
+        buf.write("\u02a5\5f\64\2\u02a5\u02a6\b\64\1\2\u02a6\u02a8\3\2\2")
+        buf.write("\2\u02a7\u02a1\3\2\2\2\u02a7\u02a8\3\2\2\2\u02a8g\3\2")
+        buf.write("\2\2\u02a9\u02ae\5j\66\2\u02aa\u02ab\7E\2\2\u02ab\u02ad")
+        buf.write("\5j\66\2\u02ac\u02aa\3\2\2\2\u02ad\u02b0\3\2\2\2\u02ae")
+        buf.write("\u02ac\3\2\2\2\u02ae\u02af\3\2\2\2\u02afi\3\2\2\2\u02b0")
+        buf.write("\u02ae\3\2\2\2\u02b1\u02b6\5l\67\2\u02b2\u02b3\7F\2\2")
+        buf.write("\u02b3\u02b5\5l\67\2\u02b4\u02b2\3\2\2\2\u02b5\u02b8\3")
+        buf.write("\2\2\2\u02b6\u02b4\3\2\2\2\u02b6\u02b7\3\2\2\2\u02b7k")
+        buf.write("\3\2\2\2\u02b8\u02b6\3\2\2\2\u02b9\u02be\5n8\2\u02ba\u02bb")
+        buf.write("\7G\2\2\u02bb\u02bd\5n8\2\u02bc\u02ba\3\2\2\2\u02bd\u02c0")
+        buf.write("\3\2\2\2\u02be\u02bc\3\2\2\2\u02be\u02bf\3\2\2\2\u02bf")
+        buf.write("m\3\2\2\2\u02c0\u02be\3\2\2\2\u02c1\u02c6\5p9\2\u02c2")
+        buf.write("\u02c3\7H\2\2\u02c3\u02c5\5p9\2\u02c4\u02c2\3\2\2\2\u02c5")
+        buf.write("\u02c8\3\2\2\2\u02c6\u02c4\3\2\2\2\u02c6\u02c7\3\2\2\2")
+        buf.write("\u02c7o\3\2\2\2\u02c8\u02c6\3\2\2\2\u02c9\u02ce\5r:\2")
+        buf.write("\u02ca\u02cb\7\67\2\2\u02cb\u02cd\5r:\2\u02cc\u02ca\3")
+        buf.write("\2\2\2\u02cd\u02d0\3\2\2\2\u02ce\u02cc\3\2\2\2\u02ce\u02cf")
+        buf.write("\3\2\2\2\u02cfq\3\2\2\2\u02d0\u02ce\3\2\2\2\u02d1\u02d6")
+        buf.write("\5t;\2\u02d2\u02d3\t\7\2\2\u02d3\u02d5\5t;\2\u02d4\u02d2")
+        buf.write("\3\2\2\2\u02d5\u02d8\3\2\2\2\u02d6\u02d4\3\2\2\2\u02d6")
+        buf.write("\u02d7\3\2\2\2\u02d7s\3\2\2\2\u02d8\u02d6\3\2\2\2\u02d9")
+        buf.write("\u02de\5v<\2\u02da\u02db\t\b\2\2\u02db\u02dd\5v<\2\u02dc")
+        buf.write("\u02da\3\2\2\2\u02dd\u02e0\3\2\2\2\u02de\u02dc\3\2\2\2")
+        buf.write("\u02de\u02df\3\2\2\2\u02dfu\3\2\2\2\u02e0\u02de\3\2\2")
+        buf.write("\2\u02e1\u02e6\5J&\2\u02e2\u02e3\t\t\2\2\u02e3\u02e5\5")
+        buf.write("J&\2\u02e4\u02e2\3\2\2\2\u02e5\u02e8\3\2\2\2\u02e6\u02e4")
+        buf.write("\3\2\2\2\u02e6\u02e7\3\2\2\2\u02e7w\3\2\2\2\u02e8\u02e6")
+        buf.write("\3\2\2\2\u02e9\u02f5\5\u0082B\2\u02ea\u02f5\5\u0084C\2")
+        buf.write("\u02eb\u02f5\5\u0088E\2\u02ec\u02f5\5\u008aF\2\u02ed\u02f5")
+        buf.write("\5\u008cG\2\u02ee\u02f5\5\u008eH\2\u02ef\u02f5\5\u0080")
+        buf.write("A\2\u02f0\u02f5\5z>\2\u02f1\u02f5\5|?\2\u02f2\u02f5\5")
+        buf.write("~@\2\u02f3\u02f5\5\n\6\2\u02f4\u02e9\3\2\2\2\u02f4\u02ea")
+        buf.write("\3\2\2\2\u02f4\u02eb\3\2\2\2\u02f4\u02ec\3\2\2\2\u02f4")
+        buf.write("\u02ed\3\2\2\2\u02f4\u02ee\3\2\2\2\u02f4\u02ef\3\2\2\2")
+        buf.write("\u02f4\u02f0\3\2\2\2\u02f4\u02f1\3\2\2\2\u02f4\u02f2\3")
+        buf.write("\2\2\2\u02f4\u02f3\3\2\2\2\u02f5y\3\2\2\2\u02f6\u02f8")
+        buf.write("\7Q\2\2\u02f7\u02f6\3\2\2\2\u02f7\u02f8\3\2\2\2\u02f8")
+        buf.write("\u02f9\3\2\2\2\u02f9\u02fa\7_\2\2\u02fa\u02fe\7(\2\2\u02fb")
+        buf.write("\u02fd\n\n\2\2\u02fc\u02fb\3\2\2\2\u02fd\u0300\3\2\2\2")
+        buf.write("\u02fe\u02fc\3\2\2\2\u02fe\u02ff\3\2\2\2\u02ff\u0301\3")
+        buf.write("\2\2\2\u0300\u02fe\3\2\2\2\u0301\u0302\7)\2\2\u0302\u0303")
+        buf.write("\7\4\2\2\u0303{\3\2\2\2\u0304\u0305\7R\2\2\u0305\u0309")
+        buf.write("\7\3\2\2\u0306\u0308\n\13\2\2\u0307\u0306\3\2\2\2\u0308")
+        buf.write("\u030b\3\2\2\2\u0309\u0307\3\2\2\2\u0309\u030a\3\2\2\2")
+        buf.write("\u030a\u030c\3\2\2\2\u030b\u0309\3\2\2\2\u030c\u030d\7")
+        buf.write("\26\2\2\u030d}\3\2\2\2\u030e\u030f\7S\2\2\u030f\u0313")
+        buf.write("\7\3\2\2\u0310\u0312\n\13\2\2\u0311\u0310\3\2\2\2\u0312")
+        buf.write("\u0315\3\2\2\2\u0313\u0311\3\2\2\2\u0313\u0314\3\2\2\2")
+        buf.write("\u0314\u0316\3\2\2\2\u0315\u0313\3\2\2\2\u0316\u0317\7")
+        buf.write("\26\2\2\u0317\177\3\2\2\2\u0318\u0319\7_\2\2\u0319\u031d")
+        buf.write("\7(\2\2\u031a\u031c\5\n\6\2\u031b\u031a\3\2\2\2\u031c")
+        buf.write("\u031f\3\2\2\2\u031d\u031b\3\2\2\2\u031d\u031e\3\2\2\2")
+        buf.write("\u031e\u0321\3\2\2\2\u031f\u031d\3\2\2\2\u0320\u0322\5")
+        buf.write("\u0086D\2\u0321\u0320\3\2\2\2\u0321\u0322\3\2\2\2\u0322")
+        buf.write("\u0324\3\2\2\2\u0323\u0325\5\\/\2\u0324\u0323\3\2\2\2")
+        buf.write("\u0324\u0325\3\2\2\2\u0325\u0326\3\2\2\2\u0326\u0327\7")
+        buf.write(")\2\2\u0327\u0081\3\2\2\2\u0328\u0329\7_\2\2\u0329\u032a")
+        buf.write("\7\31\2\2\u032a\u0334\5x=\2\u032b\u032c\7T\2\2\u032c\u032d")
+        buf.write("\5^\60\2\u032d\u032e\7\31\2\2\u032e\u032f\5x=\2\u032f")
+        buf.write("\u0334\3\2\2\2\u0330\u0331\7U\2\2\u0331\u0332\7\31\2\2")
+        buf.write("\u0332\u0334\5x=\2\u0333\u0328\3\2\2\2\u0333\u032b\3\2")
+        buf.write("\2\2\u0333\u0330\3\2\2\2\u0334\u0083\3\2\2\2\u0335\u0339")
+        buf.write("\7\3\2\2\u0336\u0338\5\n\6\2\u0337\u0336\3\2\2\2\u0338")
+        buf.write("\u033b\3\2\2\2\u0339\u0337\3\2\2\2\u0339\u033a\3\2\2\2")
+        buf.write("\u033a\u033d\3\2\2\2\u033b\u0339\3\2\2\2\u033c\u033e\5")
+        buf.write("\u0086D\2\u033d\u033c\3\2\2\2\u033d\u033e\3\2\2\2\u033e")
+        buf.write("\u033f\3\2\2\2\u033f\u0340\7\26\2\2\u0340\u0085\3\2\2")
+        buf.write("\2\u0341\u0343\5x=\2\u0342\u0341\3\2\2\2\u0343\u0344\3")
+        buf.write("\2\2\2\u0344\u0342\3\2\2\2\u0344\u0345\3\2\2\2\u0345\u0087")
+        buf.write("\3\2\2\2\u0346\u034b\7\4\2\2\u0347\u0348\5\\/\2\u0348")
+        buf.write("\u0349\7\4\2\2\u0349\u034b\3\2\2\2\u034a\u0346\3\2\2\2")
+        buf.write("\u034a\u0347\3\2\2\2\u034b\u0089\3\2\2\2\u034c\u034d\7")
+        buf.write("V\2\2\u034d\u034e\7(\2\2\u034e\u034f\5\\/\2\u034f\u0350")
+        buf.write("\7)\2\2\u0350\u0351\bF\1\2\u0351\u0354\5x=\2\u0352\u0353")
+        buf.write("\7W\2\2\u0353\u0355\5x=\2\u0354\u0352\3\2\2\2\u0354\u0355")
+        buf.write("\3\2\2\2\u0355\u035d\3\2\2\2\u0356\u0357\7X\2\2\u0357")
+        buf.write("\u0358\7(\2\2\u0358\u0359\5\\/\2\u0359\u035a\7)\2\2\u035a")
+        buf.write("\u035b\5x=\2\u035b\u035d\3\2\2\2\u035c\u034c\3\2\2\2\u035c")
+        buf.write("\u0356\3\2\2\2\u035d\u008b\3\2\2\2\u035e\u035f\7Y\2\2")
+        buf.write("\u035f\u0360\7(\2\2\u0360\u0361\5\\/\2\u0361\u0362\7)")
+        buf.write("\2\2\u0362\u0363\5x=\2\u0363\u0364\bG\1\2\u0364\u036f")
+        buf.write("\3\2\2\2\u0365\u0366\7Z\2\2\u0366\u0367\5x=\2\u0367\u0368")
+        buf.write("\7Y\2\2\u0368\u0369\7(\2\2\u0369\u036a\5\\/\2\u036a\u036b")
+        buf.write("\7)\2\2\u036b\u036c\7\4\2\2\u036c\u036d\bG\1\2\u036d\u036f")
+        buf.write("\3\2\2\2\u036e\u035e\3\2\2\2\u036e\u0365\3\2\2\2\u036f")
+        buf.write("\u008d\3\2\2\2\u0370\u0371\7[\2\2\u0371\u0372\7_\2\2\u0372")
+        buf.write("\u037e\7\4\2\2\u0373\u0374\7\\\2\2\u0374\u037e\7\4\2\2")
+        buf.write("\u0375\u0376\7]\2\2\u0376\u037e\7\4\2\2\u0377\u0378\7")
+        buf.write("^\2\2\u0378\u037e\7\4\2\2\u0379\u037a\7^\2\2\u037a\u037b")
+        buf.write("\5\\/\2\u037b\u037c\7\4\2\2\u037c\u037e\3\2\2\2\u037d")
+        buf.write("\u0370\3\2\2\2\u037d\u0373\3\2\2\2\u037d\u0375\3\2\2\2")
+        buf.write("\u037d\u0377\3\2\2\2\u037d\u0379\3\2\2\2\u037e\u008f\3")
+        buf.write("\2\2\2o\u0093\u0097\u009d\u00a6\u00a8\u00ab\u00b1\u00b6")
+        buf.write("\u00bd\u00bf\u00c3\u00cb\u00d0\u00d7\u00dd\u00f4\u00f9")
+        buf.write("\u00ff\u0108\u010f\u0117\u0119\u0120\u0126\u012a\u0130")
+        buf.write("\u0139\u013f\u0146\u014c\u0151\u0154\u0157\u015a\u015e")
+        buf.write("\u0164\u0169\u0170\u0172\u0184\u018a\u018d\u0192\u0197")
+        buf.write("\u019a\u019f\u01a4\u01aa\u01ac\u01b0\u01b5\u01b9\u01c0")
+        buf.write("\u01c5\u01c8\u01cc\u01cf\u01d6\u01db\u01ea\u01f0\u01f4")
+        buf.write("\u01fb\u0200\u0205\u0209\u0211\u0213\u021d\u021f\u0228")
+        buf.write("\u0239\u0258\u025a\u0262\u026d\u0276\u027c\u0280\u0285")
+        buf.write("\u0289\u0290\u029a\u02a7\u02ae\u02b6\u02be\u02c6\u02ce")
+        buf.write("\u02d6\u02de\u02e6\u02f4\u02f7\u02fe\u0309\u0313\u031d")
+        buf.write("\u0321\u0324\u0333\u0339\u033d\u0344\u034a\u0354\u035c")
+        buf.write("\u036e\u037d")
+        return buf.getvalue()
+
+
+class CParser ( Parser ):
+
+    grammarFileName = "C.g4"
+
+    atn = ATNDeserializer().deserialize(serializedATN())
+
+    decisionsToDFA = [ DFA(ds, i) for i, ds in enumerate(atn.decisionToState) ]
+
+    sharedContextCache = PredictionContextCache()
+
+    literalNames = [ "<INVALID>", "'{'", "';'", "'typedef'", "','", "'='",
+                     "'extern'", "'static'", "'auto'", "'register'", "'STATIC'",
+                     "'void'", "'char'", "'short'", "'int'", "'long'", "'float'",
+                     "'double'", "'signed'", "'unsigned'", "'}'", "'struct'",
+                     "'union'", "':'", "'enum'", "'const'", "'volatile'",
+                     "'IN'", "'OUT'", "'OPTIONAL'", "'CONST'", "'UNALIGNED'",
+                     "'VOLATILE'", "'GLOBAL_REMOVE_IF_UNREFERENCED'", "'EFIAPI'",
+                     "'EFI_BOOTSERVICE'", "'EFI_RUNTIMESERVICE'", "'PACKED'",
+                     "'('", "')'", "'['", "']'", "'*'", "'...'", "'+'",
+                     "'-'", "'/'", "'%'", "'++'", "'--'", "'sizeof'", "'.'",
+                     "'->'", "'&'", "'~'", "'!'", "'*='", "'/='", "'%='",
+                     "'+='", "'-='", "'<<='", "'>>='", "'&='", "'^='", "'|='",
+                     "'?'", "'||'", "'&&'", "'|'", "'^'", "'=='", "'!='",
+                     "'<'", "'>'", "'<='", "'>='", "'<<'", "'>>'", "'__asm__'",
+                     "'_asm'", "'__asm'", "'case'", "'default'", "'if'",
+                     "'else'", "'switch'", "'while'", "'do'", "'goto'",
+                     "'continue'", "'break'", "'return'" ]
+
+    symbolicNames = [ "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "IDENTIFIER", "CHARACTER_LITERAL", "STRING_LITERAL",
+                      "HEX_LITERAL", "DECIMAL_LITERAL", "OCTAL_LITERAL",
+                      "FLOATING_POINT_LITERAL", "WS", "BS", "UnicodeVocabulary",
+                      "COMMENT", "LINE_COMMENT", "LINE_COMMAND" ]
+
+    RULE_translation_unit = 0
+    RULE_external_declaration = 1
+    RULE_function_definition = 2
+    RULE_declaration_specifiers = 3
+    RULE_declaration = 4
+    RULE_init_declarator_list = 5
+    RULE_init_declarator = 6
+    RULE_storage_class_specifier = 7
+    RULE_type_specifier = 8
+    RULE_type_id = 9
+    RULE_struct_or_union_specifier = 10
+    RULE_struct_or_union = 11
+    RULE_struct_declaration_list = 12
+    RULE_struct_declaration = 13
+    RULE_specifier_qualifier_list = 14
+    RULE_struct_declarator_list = 15
+    RULE_struct_declarator = 16
+    RULE_enum_specifier = 17
+    RULE_enumerator_list = 18
+    RULE_enumerator = 19
+    RULE_type_qualifier = 20
+    RULE_declarator = 21
+    RULE_direct_declarator = 22
+    RULE_declarator_suffix = 23
+    RULE_pointer = 24
+    RULE_parameter_type_list = 25
+    RULE_parameter_list = 26
+    RULE_parameter_declaration = 27
+    RULE_identifier_list = 28
+    RULE_type_name = 29
+    RULE_abstract_declarator = 30
+    RULE_direct_abstract_declarator = 31
+    RULE_abstract_declarator_suffix = 32
+    RULE_initializer = 33
+    RULE_initializer_list = 34
+    RULE_argument_expression_list = 35
+    RULE_additive_expression = 36
+    RULE_multiplicative_expression = 37
+    RULE_cast_expression = 38
+    RULE_unary_expression = 39
+    RULE_postfix_expression = 40
+    RULE_macro_parameter_list = 41
+    RULE_unary_operator = 42
+    RULE_primary_expression = 43
+    RULE_constant = 44
+    RULE_expression = 45
+    RULE_constant_expression = 46
+    RULE_assignment_expression = 47
+    RULE_lvalue = 48
+    RULE_assignment_operator = 49
+    RULE_conditional_expression = 50
+    RULE_logical_or_expression = 51
+    RULE_logical_and_expression = 52
+    RULE_inclusive_or_expression = 53
+    RULE_exclusive_or_expression = 54
+    RULE_and_expression = 55
+    RULE_equality_expression = 56
+    RULE_relational_expression = 57
+    RULE_shift_expression = 58
+    RULE_statement = 59
+    RULE_asm2_statement = 60
+    RULE_asm1_statement = 61
+    RULE_asm_statement = 62
+    RULE_macro_statement = 63
+    RULE_labeled_statement = 64
+    RULE_compound_statement = 65
+    RULE_statement_list = 66
+    RULE_expression_statement = 67
+    RULE_selection_statement = 68
+    RULE_iteration_statement = 69
+    RULE_jump_statement = 70
+
+    ruleNames =  [ "translation_unit", "external_declaration", "function_definition",
+                   "declaration_specifiers", "declaration", "init_declarator_list",
+                   "init_declarator", "storage_class_specifier", "type_specifier",
+                   "type_id", "struct_or_union_specifier", "struct_or_union",
+                   "struct_declaration_list", "struct_declaration", "specifier_qualifier_list",
+                   "struct_declarator_list", "struct_declarator", "enum_specifier",
+                   "enumerator_list", "enumerator", "type_qualifier", "declarator",
+                   "direct_declarator", "declarator_suffix", "pointer",
+                   "parameter_type_list", "parameter_list", "parameter_declaration",
+                   "identifier_list", "type_name", "abstract_declarator",
+                   "direct_abstract_declarator", "abstract_declarator_suffix",
+                   "initializer", "initializer_list", "argument_expression_list",
+                   "additive_expression", "multiplicative_expression", "cast_expression",
+                   "unary_expression", "postfix_expression", "macro_parameter_list",
+                   "unary_operator", "primary_expression", "constant", "expression",
+                   "constant_expression", "assignment_expression", "lvalue",
+                   "assignment_operator", "conditional_expression", "logical_or_expression",
+                   "logical_and_expression", "inclusive_or_expression",
+                   "exclusive_or_expression", "and_expression", "equality_expression",
+                   "relational_expression", "shift_expression", "statement",
+                   "asm2_statement", "asm1_statement", "asm_statement",
+                   "macro_statement", "labeled_statement", "compound_statement",
+                   "statement_list", "expression_statement", "selection_statement",
+                   "iteration_statement", "jump_statement" ]
+
+    EOF = Token.EOF
+    T__0=1
+    T__1=2
+    T__2=3
+    T__3=4
+    T__4=5
+    T__5=6
+    T__6=7
+    T__7=8
+    T__8=9
+    T__9=10
+    T__10=11
+    T__11=12
+    T__12=13
+    T__13=14
+    T__14=15
+    T__15=16
+    T__16=17
+    T__17=18
+    T__18=19
+    T__19=20
+    T__20=21
+    T__21=22
+    T__22=23
+    T__23=24
+    T__24=25
+    T__25=26
+    T__26=27
+    T__27=28
+    T__28=29
+    T__29=30
+    T__30=31
+    T__31=32
+    T__32=33
+    T__33=34
+    T__34=35
+    T__35=36
+    T__36=37
+    T__37=38
+    T__38=39
+    T__39=40
+    T__40=41
+    T__41=42
+    T__42=43
+    T__43=44
+    T__44=45
+    T__45=46
+    T__46=47
+    T__47=48
+    T__48=49
+    T__49=50
+    T__50=51
+    T__51=52
+    T__52=53
+    T__53=54
+    T__54=55
+    T__55=56
+    T__56=57
+    T__57=58
+    T__58=59
+    T__59=60
+    T__60=61
+    T__61=62
+    T__62=63
+    T__63=64
+    T__64=65
+    T__65=66
+    T__66=67
+    T__67=68
+    T__68=69
+    T__69=70
+    T__70=71
+    T__71=72
+    T__72=73
+    T__73=74
+    T__74=75
+    T__75=76
+    T__76=77
+    T__77=78
+    T__78=79
+    T__79=80
+    T__80=81
+    T__81=82
+    T__82=83
+    T__83=84
+    T__84=85
+    T__85=86
+    T__86=87
+    T__87=88
+    T__88=89
+    T__89=90
+    T__90=91
+    T__91=92
+    IDENTIFIER=93
+    CHARACTER_LITERAL=94
+    STRING_LITERAL=95
+    HEX_LITERAL=96
+    DECIMAL_LITERAL=97
+    OCTAL_LITERAL=98
+    FLOATING_POINT_LITERAL=99
+    WS=100
+    BS=101
+    UnicodeVocabulary=102
+    COMMENT=103
+    LINE_COMMENT=104
+    LINE_COMMAND=105
+
+    # @param  input Type: TokenStream
+    # @param  output= sys.stdout Type: TextIO
+    def __init__(self,input,output= sys.stdout):
+        super().__init__(input, output)
+        self.checkVersion("4.7.1")
+        self._interp = ParserATNSimulator(self, self.atn, self.decisionsToDFA, self.sharedContextCache)
+        self._predicates = None
+
+
+
+
+    def printTokenInfo(self,line,offset,tokenText):
+        print(str(line)+ ',' + str(offset) + ':' + str(tokenText))
+
+    def StorePredicateExpression(self,StartLine,StartOffset,EndLine,EndOffset,Text):
+        PredExp = CodeFragment.PredicateExpression(Text, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.PredicateExpressionList.append(PredExp)
+
+    def StoreEnumerationDefinition(self,StartLine,StartOffset,EndLine,EndOffset,Text):
+        EnumDef = CodeFragment.EnumerationDefinition(Text, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.EnumerationDefinitionList.append(EnumDef)
+
+    def StoreStructUnionDefinition(self,StartLine,StartOffset,EndLine,EndOffset,Text):
+        SUDef = CodeFragment.StructUnionDefinition(Text, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.StructUnionDefinitionList.append(SUDef)
+
+    def StoreTypedefDefinition(self,StartLine,StartOffset,EndLine,EndOffset,FromText,ToText):
+        Tdef = CodeFragment.TypedefDefinition(FromText, ToText, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.TypedefDefinitionList.append(Tdef)
+
+    def StoreFunctionDefinition(self,StartLine,StartOffset,EndLine,EndOffset,ModifierText,DeclText,LeftBraceLine,LeftBraceOffset,DeclLine,DeclOffset):
+        FuncDef = CodeFragment.FunctionDefinition(ModifierText, DeclText, (StartLine, StartOffset), (EndLine, EndOffset), (LeftBraceLine, LeftBraceOffset), (DeclLine, DeclOffset))
+        FileProfile.FunctionDefinitionList.append(FuncDef)
+
+    def StoreVariableDeclaration(self,StartLine,StartOffset,EndLine,EndOffset,ModifierText,DeclText):
+        VarDecl = CodeFragment.VariableDeclaration(ModifierText, DeclText, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.VariableDeclarationList.append(VarDecl)
+
+    def StoreFunctionCalling(self,StartLine,StartOffset,EndLine,EndOffset,FuncName,ParamList):
+        FuncCall = CodeFragment.FunctionCalling(FuncName, ParamList, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.FunctionCallingList.append(FuncCall)
+
+
+
+    class Translation_unitContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def external_declaration(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.External_declarationContext)
+            else:
+                return self.getTypedRuleContext(CParser.External_declarationContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_translation_unit
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterTranslation_unit" ):
+                listener.enterTranslation_unit(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitTranslation_unit" ):
+                listener.exitTranslation_unit(self)
+
+
+
+
+    def translation_unit(self):
+
+        localctx = CParser.Translation_unitContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 0, self.RULE_translation_unit)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 145
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__2) | (1 << CParser.T__5) | (1 << CParser.T__6) | (1 << CParser.T__7) | (1 << CParser.T__8) | (1 << CParser.T__9) | (1 << CParser.T__10) | (1 << CParser.T__11) | (1 << CParser.T__12) | (1 << CParser.T__13) | (1 << CParser.T__14) | (1 << CParser.T__15) | (1 << CParser.T__16) | (1 << CParser.T__17) | (1 << CParser.T__18) | (1 << CParser.T__20) | (1 << CParser.T__21) | (1 << CParser.T__23) | (1 << CParser.T__24) | (1 << CParser.T__25) | (1 << CParser.T__26) | (1 << CParser.T__27) | (1 << CParser.T__28) | (1 << CParser.T__29) | (1 << CParser.T__30) | (1 << CParser.T__31) | (1 << CParser.T__32) | (1 << CParser.T__33) | (1 << CParser.T__34) | (1 << CParser.T__35) | (1 << CParser.T__36) | (1 << CParser.T__37) | (1 << CParser.T__41))) != 0) or _la==CParser.IDENTIFIER:
+                self.state = 142
+                self.external_declaration()
+                self.state = 147
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class External_declarationContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def declarator(self):
+            return self.getTypedRuleContext(CParser.DeclaratorContext,0)
+
+
+        def declaration_specifiers(self):
+            return self.getTypedRuleContext(CParser.Declaration_specifiersContext,0)
+
+
+        # @param  i=None Type: int
+        def declaration(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.DeclarationContext)
+            else:
+                return self.getTypedRuleContext(CParser.DeclarationContext,i)
+
+
+        def function_definition(self):
+            return self.getTypedRuleContext(CParser.Function_definitionContext,0)
+
+
+        def macro_statement(self):
+            return self.getTypedRuleContext(CParser.Macro_statementContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_external_declaration
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterExternal_declaration" ):
+                listener.enterExternal_declaration(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitExternal_declaration" ):
+                listener.exitExternal_declaration(self)
+
+
+
+
+    def external_declaration(self):
+
+        localctx = CParser.External_declarationContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 2, self.RULE_external_declaration)
+        self._la = 0 # Token type
+        try:
+            self.state = 166
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,4,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 149
+                self._errHandler.sync(self)
+                la_ = self._interp.adaptivePredict(self._input,1,self._ctx)
+                if la_ == 1:
+                    self.state = 148
+                    self.declaration_specifiers()
+
+
+                self.state = 151
+                self.declarator()
+                self.state = 155
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                while (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__2) | (1 << CParser.T__5) | (1 << CParser.T__6) | (1 << CParser.T__7) | (1 << CParser.T__8) | (1 << CParser.T__9) | (1 << CParser.T__10) | (1 << CParser.T__11) | (1 << CParser.T__12) | (1 << CParser.T__13) | (1 << CParser.T__14) | (1 << CParser.T__15) | (1 << CParser.T__16) | (1 << CParser.T__17) | (1 << CParser.T__18) | (1 << CParser.T__20) | (1 << CParser.T__21) | (1 << CParser.T__23) | (1 << CParser.T__24) | (1 << CParser.T__25) | (1 << CParser.T__26) | (1 << CParser.T__27) | (1 << CParser.T__28) | (1 << CParser.T__29) | (1 << CParser.T__30) | (1 << CParser.T__31) | (1 << CParser.T__32) | (1 << CParser.T__33) | (1 << CParser.T__34) | (1 << CParser.T__35) | (1 << CParser.T__36))) != 0) or _la==CParser.IDENTIFIER:
+                    self.state = 152
+                    self.declaration()
+                    self.state = 157
+                    self._errHandler.sync(self)
+                    _la = self._input.LA(1)
+
+                self.state = 158
+                self.match(CParser.T__0)
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 160
+                self.function_definition()
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 161
+                self.declaration()
+                pass
+
+            elif la_ == 4:
+                self.enterOuterAlt(localctx, 4)
+                self.state = 162
+                self.macro_statement()
+                self.state = 164
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__1:
+                    self.state = 163
+                    self.match(CParser.T__1)
+
+
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Function_definitionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+            self.ModifierText = ''
+            self.DeclText = ''
+            self.LBLine = 0
+            self.LBOffset = 0
+            self.DeclLine = 0
+            self.DeclOffset = 0
+            self.d = None # Declaration_specifiersContext
+            self._declaration_specifiers = None # Declaration_specifiersContext
+            self._declarator = None # DeclaratorContext
+            self.a = None # Compound_statementContext
+            self.b = None # Compound_statementContext
+
+        def declarator(self):
+            return self.getTypedRuleContext(CParser.DeclaratorContext,0)
+
+
+        def compound_statement(self):
+            return self.getTypedRuleContext(CParser.Compound_statementContext,0)
+
+
+        def declaration_specifiers(self):
+            return self.getTypedRuleContext(CParser.Declaration_specifiersContext,0)
+
+
+        # @param  i=None Type: int
+        def declaration(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.DeclarationContext)
+            else:
+                return self.getTypedRuleContext(CParser.DeclarationContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_function_definition
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterFunction_definition" ):
+                listener.enterFunction_definition(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitFunction_definition" ):
+                listener.exitFunction_definition(self)
+
+
+
+
+    def function_definition(self):
+
+        localctx = CParser.Function_definitionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 4, self.RULE_function_definition)
+
+        ModifierText = ''
+        DeclText = ''
+        LBLine = 0
+        LBOffset = 0
+        DeclLine = 0
+        DeclOffset = 0
+
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 169
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,5,self._ctx)
+            if la_ == 1:
+                self.state = 168
+                localctx.d = localctx._declaration_specifiers = self.declaration_specifiers()
+
+
+            self.state = 171
+            localctx._declarator = self.declarator()
+            self.state = 180
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.T__2, CParser.T__5, CParser.T__6, CParser.T__7, CParser.T__8, CParser.T__9, CParser.T__10, CParser.T__11, CParser.T__12, CParser.T__13, CParser.T__14, CParser.T__15, CParser.T__16, CParser.T__17, CParser.T__18, CParser.T__20, CParser.T__21, CParser.T__23, CParser.T__24, CParser.T__25, CParser.T__26, CParser.T__27, CParser.T__28, CParser.T__29, CParser.T__30, CParser.T__31, CParser.T__32, CParser.T__33, CParser.T__34, CParser.T__35, CParser.T__36, CParser.IDENTIFIER]:
+                self.state = 173
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                while True:
+                    self.state = 172
+                    self.declaration()
+                    self.state = 175
+                    self._errHandler.sync(self)
+                    _la = self._input.LA(1)
+                    if not ((((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__2) | (1 << CParser.T__5) | (1 << CParser.T__6) | (1 << CParser.T__7) | (1 << CParser.T__8) | (1 << CParser.T__9) | (1 << CParser.T__10) | (1 << CParser.T__11) | (1 << CParser.T__12) | (1 << CParser.T__13) | (1 << CParser.T__14) | (1 << CParser.T__15) | (1 << CParser.T__16) | (1 << CParser.T__17) | (1 << CParser.T__18) | (1 << CParser.T__20) | (1 << CParser.T__21) | (1 << CParser.T__23) | (1 << CParser.T__24) | (1 << CParser.T__25) | (1 << CParser.T__26) | (1 << CParser.T__27) | (1 << CParser.T__28) | (1 << CParser.T__29) | (1 << CParser.T__30) | (1 << CParser.T__31) | (1 << CParser.T__32) | (1 << CParser.T__33) | (1 << CParser.T__34) | (1 << CParser.T__35) | (1 << CParser.T__36))) != 0) or _la==CParser.IDENTIFIER):
+                        break
+
+                self.state = 177
+                localctx.a = self.compound_statement()
+                pass
+            elif token in [CParser.T__0]:
+                self.state = 179
+                localctx.b = self.compound_statement()
+                pass
+            else:
+                raise NoViableAltException(self)
+
+
+            if localctx.d is not None:
+                ModifierText = (None if localctx._declaration_specifiers is None else self._input.getText((localctx._declaration_specifiers.start,localctx._declaration_specifiers.stop)))
+            else:
+                ModifierText = ''
+            DeclText = (None if localctx._declarator is None else self._input.getText((localctx._declarator.start,localctx._declarator.stop)))
+            DeclLine = (None if localctx._declarator is None else localctx._declarator.start).line
+            DeclOffset = (None if localctx._declarator is None else localctx._declarator.start).column
+            if localctx.a is not None:
+                LBLine = (None if localctx.a is None else localctx.a.start).line
+                LBOffset = (None if localctx.a is None else localctx.a.start).column
+            else:
+                LBLine = (None if localctx.b is None else localctx.b.start).line
+                LBOffset = (None if localctx.b is None else localctx.b.start).column
+
+            self._ctx.stop = self._input.LT(-1)
+
+            self.StoreFunctionDefinition(localctx.start.line, localctx.start.column, localctx.stop.line, localctx.stop.column, ModifierText, DeclText, LBLine, LBOffset, DeclLine, DeclOffset)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Declaration_specifiersContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def storage_class_specifier(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Storage_class_specifierContext)
+            else:
+                return self.getTypedRuleContext(CParser.Storage_class_specifierContext,i)
+
+
+        # @param  i=None Type: int
+        def type_specifier(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Type_specifierContext)
+            else:
+                return self.getTypedRuleContext(CParser.Type_specifierContext,i)
+
+
+        # @param  i=None Type: int
+        def type_qualifier(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Type_qualifierContext)
+            else:
+                return self.getTypedRuleContext(CParser.Type_qualifierContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_declaration_specifiers
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterDeclaration_specifiers" ):
+                listener.enterDeclaration_specifiers(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitDeclaration_specifiers" ):
+                listener.exitDeclaration_specifiers(self)
+
+
+
+
+    def declaration_specifiers(self):
+
+        localctx = CParser.Declaration_specifiersContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 6, self.RULE_declaration_specifiers)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 187
+            self._errHandler.sync(self)
+            _alt = 1
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt == 1:
+                    self.state = 187
+                    self._errHandler.sync(self)
+                    token = self._input.LA(1)
+                    if token in [CParser.T__5, CParser.T__6, CParser.T__7, CParser.T__8, CParser.T__9]:
+                        self.state = 184
+                        self.storage_class_specifier()
+                        pass
+                    elif token in [CParser.T__10, CParser.T__11, CParser.T__12, CParser.T__13, CParser.T__14, CParser.T__15, CParser.T__16, CParser.T__17, CParser.T__18, CParser.T__20, CParser.T__21, CParser.T__23, CParser.IDENTIFIER]:
+                        self.state = 185
+                        self.type_specifier()
+                        pass
+                    elif token in [CParser.T__24, CParser.T__25, CParser.T__26, CParser.T__27, CParser.T__28, CParser.T__29, CParser.T__30, CParser.T__31, CParser.T__32, CParser.T__33, CParser.T__34, CParser.T__35, CParser.T__36]:
+                        self.state = 186
+                        self.type_qualifier()
+                        pass
+                    else:
+                        raise NoViableAltException(self)
+
+
+                else:
+                    raise NoViableAltException(self)
+                self.state = 189
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,9,self._ctx)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class DeclarationContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+            self.a = None # Token
+            self.b = None # Declaration_specifiersContext
+            self.c = None # Init_declarator_listContext
+            self.d = None # Token
+            self.s = None # Declaration_specifiersContext
+            self.t = None # Init_declarator_listContext
+            self.e = None # Token
+
+        def init_declarator_list(self):
+            return self.getTypedRuleContext(CParser.Init_declarator_listContext,0)
+
+
+        def declaration_specifiers(self):
+            return self.getTypedRuleContext(CParser.Declaration_specifiersContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_declaration
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterDeclaration" ):
+                listener.enterDeclaration(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitDeclaration" ):
+                listener.exitDeclaration(self)
+
+
+
+
+    def declaration(self):
+
+        localctx = CParser.DeclarationContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 8, self.RULE_declaration)
+        self._la = 0 # Token type
+        try:
+            self.state = 206
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.T__2]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 191
+                localctx.a = self.match(CParser.T__2)
+                self.state = 193
+                self._errHandler.sync(self)
+                la_ = self._interp.adaptivePredict(self._input,10,self._ctx)
+                if la_ == 1:
+                    self.state = 192
+                    localctx.b = self.declaration_specifiers()
+
+
+                self.state = 195
+                localctx.c = self.init_declarator_list()
+                self.state = 196
+                localctx.d = self.match(CParser.T__1)
+
+                if localctx.b is not None:
+                    self.StoreTypedefDefinition(localctx.a.line, localctx.a.column, (0 if localctx.d is None else localctx.d.line), localctx.d.column, (None if localctx.b is None else self._input.getText((localctx.b.start,localctx.b.stop))), (None if localctx.c is None else self._input.getText((localctx.c.start,localctx.c.stop))))
+                else:
+                    self.StoreTypedefDefinition(localctx.a.line, localctx.a.column, (0 if localctx.d is None else localctx.d.line), localctx.d.column, '', (None if localctx.c is None else self._input.getText((localctx.c.start,localctx.c.stop))))
+
+                pass
+            elif token in [CParser.T__5, CParser.T__6, CParser.T__7, CParser.T__8, CParser.T__9, CParser.T__10, CParser.T__11, CParser.T__12, CParser.T__13, CParser.T__14, CParser.T__15, CParser.T__16, CParser.T__17, CParser.T__18, CParser.T__20, CParser.T__21, CParser.T__23, CParser.T__24, CParser.T__25, CParser.T__26, CParser.T__27, CParser.T__28, CParser.T__29, CParser.T__30, CParser.T__31, CParser.T__32, CParser.T__33, CParser.T__34, CParser.T__35, CParser.T__36, CParser.IDENTIFIER]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 199
+                localctx.s = self.declaration_specifiers()
+                self.state = 201
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if ((((_la - 34)) & ~0x3f) == 0 and ((1 << (_la - 34)) & ((1 << (CParser.T__33 - 34)) | (1 << (CParser.T__34 - 34)) | (1 << (CParser.T__35 - 34)) | (1 << (CParser.T__37 - 34)) | (1 << (CParser.T__41 - 34)) | (1 << (CParser.IDENTIFIER - 34)))) != 0):
+                    self.state = 200
+                    localctx.t = self.init_declarator_list()
+
+
+                self.state = 203
+                localctx.e = self.match(CParser.T__1)
+
+                if localctx.t is not None:
+                    self.StoreVariableDeclaration((None if localctx.s is None else localctx.s.start).line, (None if localctx.s is None else localctx.s.start).column, (None if localctx.t is None else localctx.t.start).line, (None if localctx.t is None else localctx.t.start).column, (None if localctx.s is None else self._input.getText((localctx.s.start,localctx.s.stop))), (None if localctx.t is None else self._input.getText((localctx.t.start,localctx.t.stop))))
+
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Init_declarator_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def init_declarator(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Init_declaratorContext)
+            else:
+                return self.getTypedRuleContext(CParser.Init_declaratorContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_init_declarator_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterInit_declarator_list" ):
+                listener.enterInit_declarator_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitInit_declarator_list" ):
+                listener.exitInit_declarator_list(self)
+
+
+
+
+    def init_declarator_list(self):
+
+        localctx = CParser.Init_declarator_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 10, self.RULE_init_declarator_list)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 208
+            self.init_declarator()
+            self.state = 213
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__3:
+                self.state = 209
+                self.match(CParser.T__3)
+                self.state = 210
+                self.init_declarator()
+                self.state = 215
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Init_declaratorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def declarator(self):
+            return self.getTypedRuleContext(CParser.DeclaratorContext,0)
+
+
+        def initializer(self):
+            return self.getTypedRuleContext(CParser.InitializerContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_init_declarator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterInit_declarator" ):
+                listener.enterInit_declarator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitInit_declarator" ):
+                listener.exitInit_declarator(self)
+
+
+
+
+    def init_declarator(self):
+
+        localctx = CParser.Init_declaratorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 12, self.RULE_init_declarator)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 216
+            self.declarator()
+            self.state = 219
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            if _la==CParser.T__4:
+                self.state = 217
+                self.match(CParser.T__4)
+                self.state = 218
+                self.initializer()
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Storage_class_specifierContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_storage_class_specifier
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStorage_class_specifier" ):
+                listener.enterStorage_class_specifier(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStorage_class_specifier" ):
+                listener.exitStorage_class_specifier(self)
+
+
+
+
+    def storage_class_specifier(self):
+
+        localctx = CParser.Storage_class_specifierContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 14, self.RULE_storage_class_specifier)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 221
+            _la = self._input.LA(1)
+            if not((((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__5) | (1 << CParser.T__6) | (1 << CParser.T__7) | (1 << CParser.T__8) | (1 << CParser.T__9))) != 0)):
+                self._errHandler.recoverInline(self)
+            else:
+                self._errHandler.reportMatch(self)
+                self.consume()
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Type_specifierContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+            self.s = None # Struct_or_union_specifierContext
+            self.e = None # Enum_specifierContext
+
+        def struct_or_union_specifier(self):
+            return self.getTypedRuleContext(CParser.Struct_or_union_specifierContext,0)
+
+
+        def enum_specifier(self):
+            return self.getTypedRuleContext(CParser.Enum_specifierContext,0)
+
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def declarator(self):
+            return self.getTypedRuleContext(CParser.DeclaratorContext,0)
+
+
+        # @param  i=None Type: int
+        def type_qualifier(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Type_qualifierContext)
+            else:
+                return self.getTypedRuleContext(CParser.Type_qualifierContext,i)
+
+
+        def type_id(self):
+            return self.getTypedRuleContext(CParser.Type_idContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_type_specifier
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterType_specifier" ):
+                listener.enterType_specifier(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitType_specifier" ):
+                listener.exitType_specifier(self)
+
+
+
+
+    def type_specifier(self):
+
+        localctx = CParser.Type_specifierContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 16, self.RULE_type_specifier)
+        try:
+            self.state = 247
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,16,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 223
+                self.match(CParser.T__10)
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 224
+                self.match(CParser.T__11)
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 225
+                self.match(CParser.T__12)
+                pass
+
+            elif la_ == 4:
+                self.enterOuterAlt(localctx, 4)
+                self.state = 226
+                self.match(CParser.T__13)
+                pass
+
+            elif la_ == 5:
+                self.enterOuterAlt(localctx, 5)
+                self.state = 227
+                self.match(CParser.T__14)
+                pass
+
+            elif la_ == 6:
+                self.enterOuterAlt(localctx, 6)
+                self.state = 228
+                self.match(CParser.T__15)
+                pass
+
+            elif la_ == 7:
+                self.enterOuterAlt(localctx, 7)
+                self.state = 229
+                self.match(CParser.T__16)
+                pass
+
+            elif la_ == 8:
+                self.enterOuterAlt(localctx, 8)
+                self.state = 230
+                self.match(CParser.T__17)
+                pass
+
+            elif la_ == 9:
+                self.enterOuterAlt(localctx, 9)
+                self.state = 231
+                self.match(CParser.T__18)
+                pass
+
+            elif la_ == 10:
+                self.enterOuterAlt(localctx, 10)
+                self.state = 232
+                localctx.s = self.struct_or_union_specifier()
+
+                if localctx.s.stop is not None:
+                    self.StoreStructUnionDefinition((None if localctx.s is None else localctx.s.start).line, (None if localctx.s is None else localctx.s.start).column, (None if localctx.s is None else localctx.s.stop).line, (None if localctx.s is None else localctx.s.stop).column, (None if localctx.s is None else self._input.getText((localctx.s.start,localctx.s.stop))))
+
+                pass
+
+            elif la_ == 11:
+                self.enterOuterAlt(localctx, 11)
+                self.state = 235
+                localctx.e = self.enum_specifier()
+
+                if localctx.e.stop is not None:
+                    self.StoreEnumerationDefinition((None if localctx.e is None else localctx.e.start).line, (None if localctx.e is None else localctx.e.start).column, (None if localctx.e is None else localctx.e.stop).line, (None if localctx.e is None else localctx.e.stop).column, (None if localctx.e is None else self._input.getText((localctx.e.start,localctx.e.stop))))
+
+                pass
+
+            elif la_ == 12:
+                self.enterOuterAlt(localctx, 12)
+                self.state = 238
+                self.match(CParser.IDENTIFIER)
+                self.state = 242
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,15,self._ctx)
+                while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                    if _alt==1:
+                        self.state = 239
+                        self.type_qualifier()
+                    self.state = 244
+                    self._errHandler.sync(self)
+                    _alt = self._interp.adaptivePredict(self._input,15,self._ctx)
+
+                self.state = 245
+                self.declarator()
+                pass
+
+            elif la_ == 13:
+                self.enterOuterAlt(localctx, 13)
+                self.state = 246
+                self.type_id()
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Type_idContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def getRuleIndex(self):
+            return CParser.RULE_type_id
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterType_id" ):
+                listener.enterType_id(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitType_id" ):
+                listener.exitType_id(self)
+
+
+
+
+    def type_id(self):
+
+        localctx = CParser.Type_idContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 18, self.RULE_type_id)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 249
+            self.match(CParser.IDENTIFIER)
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Struct_or_union_specifierContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def struct_or_union(self):
+            return self.getTypedRuleContext(CParser.Struct_or_unionContext,0)
+
+
+        def struct_declaration_list(self):
+            return self.getTypedRuleContext(CParser.Struct_declaration_listContext,0)
+
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def getRuleIndex(self):
+            return CParser.RULE_struct_or_union_specifier
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStruct_or_union_specifier" ):
+                listener.enterStruct_or_union_specifier(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStruct_or_union_specifier" ):
+                listener.exitStruct_or_union_specifier(self)
+
+
+
+
+    def struct_or_union_specifier(self):
+
+        localctx = CParser.Struct_or_union_specifierContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 20, self.RULE_struct_or_union_specifier)
+        self._la = 0 # Token type
+        try:
+            self.state = 262
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,18,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 251
+                self.struct_or_union()
+                self.state = 253
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.IDENTIFIER:
+                    self.state = 252
+                    self.match(CParser.IDENTIFIER)
+
+
+                self.state = 255
+                self.match(CParser.T__0)
+                self.state = 256
+                self.struct_declaration_list()
+                self.state = 257
+                self.match(CParser.T__19)
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 259
+                self.struct_or_union()
+                self.state = 260
+                self.match(CParser.IDENTIFIER)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Struct_or_unionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_struct_or_union
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStruct_or_union" ):
+                listener.enterStruct_or_union(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStruct_or_union" ):
+                listener.exitStruct_or_union(self)
+
+
+
+
+    def struct_or_union(self):
+
+        localctx = CParser.Struct_or_unionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 22, self.RULE_struct_or_union)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 264
+            _la = self._input.LA(1)
+            if not(_la==CParser.T__20 or _la==CParser.T__21):
+                self._errHandler.recoverInline(self)
+            else:
+                self._errHandler.reportMatch(self)
+                self.consume()
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Struct_declaration_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def struct_declaration(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Struct_declarationContext)
+            else:
+                return self.getTypedRuleContext(CParser.Struct_declarationContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_struct_declaration_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStruct_declaration_list" ):
+                listener.enterStruct_declaration_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStruct_declaration_list" ):
+                listener.exitStruct_declaration_list(self)
+
+
+
+
+    def struct_declaration_list(self):
+
+        localctx = CParser.Struct_declaration_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 24, self.RULE_struct_declaration_list)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 267
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while True:
+                self.state = 266
+                self.struct_declaration()
+                self.state = 269
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if not ((((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__10) | (1 << CParser.T__11) | (1 << CParser.T__12) | (1 << CParser.T__13) | (1 << CParser.T__14) | (1 << CParser.T__15) | (1 << CParser.T__16) | (1 << CParser.T__17) | (1 << CParser.T__18) | (1 << CParser.T__20) | (1 << CParser.T__21) | (1 << CParser.T__23) | (1 << CParser.T__24) | (1 << CParser.T__25) | (1 << CParser.T__26) | (1 << CParser.T__27) | (1 << CParser.T__28) | (1 << CParser.T__29) | (1 << CParser.T__30) | (1 << CParser.T__31) | (1 << CParser.T__32) | (1 << CParser.T__33) | (1 << CParser.T__34) | (1 << CParser.T__35) | (1 << CParser.T__36))) != 0) or _la==CParser.IDENTIFIER):
+                    break
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Struct_declarationContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def specifier_qualifier_list(self):
+            return self.getTypedRuleContext(CParser.Specifier_qualifier_listContext,0)
+
+
+        def struct_declarator_list(self):
+            return self.getTypedRuleContext(CParser.Struct_declarator_listContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_struct_declaration
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStruct_declaration" ):
+                listener.enterStruct_declaration(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStruct_declaration" ):
+                listener.exitStruct_declaration(self)
+
+
+
+
+    def struct_declaration(self):
+
+        localctx = CParser.Struct_declarationContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 26, self.RULE_struct_declaration)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 271
+            self.specifier_qualifier_list()
+            self.state = 272
+            self.struct_declarator_list()
+            self.state = 273
+            self.match(CParser.T__1)
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Specifier_qualifier_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def type_qualifier(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Type_qualifierContext)
+            else:
+                return self.getTypedRuleContext(CParser.Type_qualifierContext,i)
+
+
+        # @param  i=None Type: int
+        def type_specifier(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Type_specifierContext)
+            else:
+                return self.getTypedRuleContext(CParser.Type_specifierContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_specifier_qualifier_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterSpecifier_qualifier_list" ):
+                listener.enterSpecifier_qualifier_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitSpecifier_qualifier_list" ):
+                listener.exitSpecifier_qualifier_list(self)
+
+
+
+
+    def specifier_qualifier_list(self):
+
+        localctx = CParser.Specifier_qualifier_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 28, self.RULE_specifier_qualifier_list)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 277
+            self._errHandler.sync(self)
+            _alt = 1
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt == 1:
+                    self.state = 277
+                    self._errHandler.sync(self)
+                    token = self._input.LA(1)
+                    if token in [CParser.T__24, CParser.T__25, CParser.T__26, CParser.T__27, CParser.T__28, CParser.T__29, CParser.T__30, CParser.T__31, CParser.T__32, CParser.T__33, CParser.T__34, CParser.T__35, CParser.T__36]:
+                        self.state = 275
+                        self.type_qualifier()
+                        pass
+                    elif token in [CParser.T__10, CParser.T__11, CParser.T__12, CParser.T__13, CParser.T__14, CParser.T__15, CParser.T__16, CParser.T__17, CParser.T__18, CParser.T__20, CParser.T__21, CParser.T__23, CParser.IDENTIFIER]:
+                        self.state = 276
+                        self.type_specifier()
+                        pass
+                    else:
+                        raise NoViableAltException(self)
+
+
+                else:
+                    raise NoViableAltException(self)
+                self.state = 279
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,21,self._ctx)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Struct_declarator_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def struct_declarator(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Struct_declaratorContext)
+            else:
+                return self.getTypedRuleContext(CParser.Struct_declaratorContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_struct_declarator_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStruct_declarator_list" ):
+                listener.enterStruct_declarator_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStruct_declarator_list" ):
+                listener.exitStruct_declarator_list(self)
+
+
+
+
+    def struct_declarator_list(self):
+
+        localctx = CParser.Struct_declarator_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 30, self.RULE_struct_declarator_list)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 281
+            self.struct_declarator()
+            self.state = 286
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__3:
+                self.state = 282
+                self.match(CParser.T__3)
+                self.state = 283
+                self.struct_declarator()
+                self.state = 288
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Struct_declaratorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def declarator(self):
+            return self.getTypedRuleContext(CParser.DeclaratorContext,0)
+
+
+        def constant_expression(self):
+            return self.getTypedRuleContext(CParser.Constant_expressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_struct_declarator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStruct_declarator" ):
+                listener.enterStruct_declarator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStruct_declarator" ):
+                listener.exitStruct_declarator(self)
+
+
+
+
+    def struct_declarator(self):
+
+        localctx = CParser.Struct_declaratorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 32, self.RULE_struct_declarator)
+        self._la = 0 # Token type
+        try:
+            self.state = 296
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.T__33, CParser.T__34, CParser.T__35, CParser.T__37, CParser.T__41, CParser.IDENTIFIER]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 289
+                self.declarator()
+                self.state = 292
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__22:
+                    self.state = 290
+                    self.match(CParser.T__22)
+                    self.state = 291
+                    self.constant_expression()
+
+
+                pass
+            elif token in [CParser.T__22]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 294
+                self.match(CParser.T__22)
+                self.state = 295
+                self.constant_expression()
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Enum_specifierContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def enumerator_list(self):
+            return self.getTypedRuleContext(CParser.Enumerator_listContext,0)
+
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def getRuleIndex(self):
+            return CParser.RULE_enum_specifier
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterEnum_specifier" ):
+                listener.enterEnum_specifier(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitEnum_specifier" ):
+                listener.exitEnum_specifier(self)
+
+
+
+
+    def enum_specifier(self):
+
+        localctx = CParser.Enum_specifierContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 34, self.RULE_enum_specifier)
+        self._la = 0 # Token type
+        try:
+            self.state = 317
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,27,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 298
+                self.match(CParser.T__23)
+                self.state = 299
+                self.match(CParser.T__0)
+                self.state = 300
+                self.enumerator_list()
+                self.state = 302
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__3:
+                    self.state = 301
+                    self.match(CParser.T__3)
+
+
+                self.state = 304
+                self.match(CParser.T__19)
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 306
+                self.match(CParser.T__23)
+                self.state = 307
+                self.match(CParser.IDENTIFIER)
+                self.state = 308
+                self.match(CParser.T__0)
+                self.state = 309
+                self.enumerator_list()
+                self.state = 311
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__3:
+                    self.state = 310
+                    self.match(CParser.T__3)
+
+
+                self.state = 313
+                self.match(CParser.T__19)
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 315
+                self.match(CParser.T__23)
+                self.state = 316
+                self.match(CParser.IDENTIFIER)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Enumerator_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def enumerator(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.EnumeratorContext)
+            else:
+                return self.getTypedRuleContext(CParser.EnumeratorContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_enumerator_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterEnumerator_list" ):
+                listener.enterEnumerator_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitEnumerator_list" ):
+                listener.exitEnumerator_list(self)
+
+
+
+
+    def enumerator_list(self):
+
+        localctx = CParser.Enumerator_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 36, self.RULE_enumerator_list)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 319
+            self.enumerator()
+            self.state = 324
+            self._errHandler.sync(self)
+            _alt = self._interp.adaptivePredict(self._input,28,self._ctx)
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt==1:
+                    self.state = 320
+                    self.match(CParser.T__3)
+                    self.state = 321
+                    self.enumerator()
+                self.state = 326
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,28,self._ctx)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class EnumeratorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def constant_expression(self):
+            return self.getTypedRuleContext(CParser.Constant_expressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_enumerator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterEnumerator" ):
+                listener.enterEnumerator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitEnumerator" ):
+                listener.exitEnumerator(self)
+
+
+
+
+    def enumerator(self):
+
+        localctx = CParser.EnumeratorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 38, self.RULE_enumerator)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 327
+            self.match(CParser.IDENTIFIER)
+            self.state = 330
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            if _la==CParser.T__4:
+                self.state = 328
+                self.match(CParser.T__4)
+                self.state = 329
+                self.constant_expression()
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Type_qualifierContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_type_qualifier
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterType_qualifier" ):
+                listener.enterType_qualifier(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitType_qualifier" ):
+                listener.exitType_qualifier(self)
+
+
+
+
+    def type_qualifier(self):
+
+        localctx = CParser.Type_qualifierContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 40, self.RULE_type_qualifier)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 332
+            _la = self._input.LA(1)
+            if not((((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__24) | (1 << CParser.T__25) | (1 << CParser.T__26) | (1 << CParser.T__27) | (1 << CParser.T__28) | (1 << CParser.T__29) | (1 << CParser.T__30) | (1 << CParser.T__31) | (1 << CParser.T__32) | (1 << CParser.T__33) | (1 << CParser.T__34) | (1 << CParser.T__35) | (1 << CParser.T__36))) != 0)):
+                self._errHandler.recoverInline(self)
+            else:
+                self._errHandler.reportMatch(self)
+                self.consume()
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class DeclaratorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def direct_declarator(self):
+            return self.getTypedRuleContext(CParser.Direct_declaratorContext,0)
+
+
+        def pointer(self):
+            return self.getTypedRuleContext(CParser.PointerContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_declarator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterDeclarator" ):
+                listener.enterDeclarator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitDeclarator" ):
+                listener.exitDeclarator(self)
+
+
+
+
+    def declarator(self):
+
+        localctx = CParser.DeclaratorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 42, self.RULE_declarator)
+        self._la = 0 # Token type
+        try:
+            self.state = 348
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,34,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 335
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__41:
+                    self.state = 334
+                    self.pointer()
+
+
+                self.state = 338
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__33:
+                    self.state = 337
+                    self.match(CParser.T__33)
+
+
+                self.state = 341
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__34:
+                    self.state = 340
+                    self.match(CParser.T__34)
+
+
+                self.state = 344
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__35:
+                    self.state = 343
+                    self.match(CParser.T__35)
+
+
+                self.state = 346
+                self.direct_declarator()
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 347
+                self.pointer()
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Direct_declaratorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        # @param  i=None Type: int
+        def declarator_suffix(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Declarator_suffixContext)
+            else:
+                return self.getTypedRuleContext(CParser.Declarator_suffixContext,i)
+
+
+        def declarator(self):
+            return self.getTypedRuleContext(CParser.DeclaratorContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_direct_declarator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterDirect_declarator" ):
+                listener.enterDirect_declarator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitDirect_declarator" ):
+                listener.exitDirect_declarator(self)
+
+
+
+
+    def direct_declarator(self):
+
+        localctx = CParser.Direct_declaratorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 44, self.RULE_direct_declarator)
+        try:
+            self.state = 368
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.IDENTIFIER]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 350
+                self.match(CParser.IDENTIFIER)
+                self.state = 354
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,35,self._ctx)
+                while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                    if _alt==1:
+                        self.state = 351
+                        self.declarator_suffix()
+                    self.state = 356
+                    self._errHandler.sync(self)
+                    _alt = self._interp.adaptivePredict(self._input,35,self._ctx)
+
+                pass
+            elif token in [CParser.T__37]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 357
+                self.match(CParser.T__37)
+                self.state = 359
+                self._errHandler.sync(self)
+                la_ = self._interp.adaptivePredict(self._input,36,self._ctx)
+                if la_ == 1:
+                    self.state = 358
+                    self.match(CParser.T__33)
+
+
+                self.state = 361
+                self.declarator()
+                self.state = 362
+                self.match(CParser.T__38)
+                self.state = 364
+                self._errHandler.sync(self)
+                _alt = 1
+                while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                    if _alt == 1:
+                        self.state = 363
+                        self.declarator_suffix()
+
+                    else:
+                        raise NoViableAltException(self)
+                    self.state = 366
+                    self._errHandler.sync(self)
+                    _alt = self._interp.adaptivePredict(self._input,37,self._ctx)
+
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Declarator_suffixContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def constant_expression(self):
+            return self.getTypedRuleContext(CParser.Constant_expressionContext,0)
+
+
+        def parameter_type_list(self):
+            return self.getTypedRuleContext(CParser.Parameter_type_listContext,0)
+
+
+        def identifier_list(self):
+            return self.getTypedRuleContext(CParser.Identifier_listContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_declarator_suffix
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterDeclarator_suffix" ):
+                listener.enterDeclarator_suffix(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitDeclarator_suffix" ):
+                listener.exitDeclarator_suffix(self)
+
+
+
+
+    def declarator_suffix(self):
+
+        localctx = CParser.Declarator_suffixContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 46, self.RULE_declarator_suffix)
+        try:
+            self.state = 386
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,39,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 370
+                self.match(CParser.T__39)
+                self.state = 371
+                self.constant_expression()
+                self.state = 372
+                self.match(CParser.T__40)
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 374
+                self.match(CParser.T__39)
+                self.state = 375
+                self.match(CParser.T__40)
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 376
+                self.match(CParser.T__37)
+                self.state = 377
+                self.parameter_type_list()
+                self.state = 378
+                self.match(CParser.T__38)
+                pass
+
+            elif la_ == 4:
+                self.enterOuterAlt(localctx, 4)
+                self.state = 380
+                self.match(CParser.T__37)
+                self.state = 381
+                self.identifier_list()
+                self.state = 382
+                self.match(CParser.T__38)
+                pass
+
+            elif la_ == 5:
+                self.enterOuterAlt(localctx, 5)
+                self.state = 384
+                self.match(CParser.T__37)
+                self.state = 385
+                self.match(CParser.T__38)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class PointerContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def type_qualifier(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Type_qualifierContext)
+            else:
+                return self.getTypedRuleContext(CParser.Type_qualifierContext,i)
+
+
+        def pointer(self):
+            return self.getTypedRuleContext(CParser.PointerContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_pointer
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterPointer" ):
+                listener.enterPointer(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitPointer" ):
+                listener.exitPointer(self)
+
+
+
+
+    def pointer(self):
+
+        localctx = CParser.PointerContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 48, self.RULE_pointer)
+        try:
+            self.state = 400
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,42,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 388
+                self.match(CParser.T__41)
+                self.state = 390
+                self._errHandler.sync(self)
+                _alt = 1
+                while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                    if _alt == 1:
+                        self.state = 389
+                        self.type_qualifier()
+
+                    else:
+                        raise NoViableAltException(self)
+                    self.state = 392
+                    self._errHandler.sync(self)
+                    _alt = self._interp.adaptivePredict(self._input,40,self._ctx)
+
+                self.state = 395
+                self._errHandler.sync(self)
+                la_ = self._interp.adaptivePredict(self._input,41,self._ctx)
+                if la_ == 1:
+                    self.state = 394
+                    self.pointer()
+
+
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 397
+                self.match(CParser.T__41)
+                self.state = 398
+                self.pointer()
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 399
+                self.match(CParser.T__41)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Parameter_type_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def parameter_list(self):
+            return self.getTypedRuleContext(CParser.Parameter_listContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_parameter_type_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterParameter_type_list" ):
+                listener.enterParameter_type_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitParameter_type_list" ):
+                listener.exitParameter_type_list(self)
+
+
+
+
+    def parameter_type_list(self):
+
+        localctx = CParser.Parameter_type_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 50, self.RULE_parameter_type_list)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 402
+            self.parameter_list()
+            self.state = 408
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            if _la==CParser.T__3:
+                self.state = 403
+                self.match(CParser.T__3)
+                self.state = 405
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__28:
+                    self.state = 404
+                    self.match(CParser.T__28)
+
+
+                self.state = 407
+                self.match(CParser.T__42)
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Parameter_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def parameter_declaration(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Parameter_declarationContext)
+            else:
+                return self.getTypedRuleContext(CParser.Parameter_declarationContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_parameter_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterParameter_list" ):
+                listener.enterParameter_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitParameter_list" ):
+                listener.exitParameter_list(self)
+
+
+
+
+    def parameter_list(self):
+
+        localctx = CParser.Parameter_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 52, self.RULE_parameter_list)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 410
+            self.parameter_declaration()
+            self.state = 418
+            self._errHandler.sync(self)
+            _alt = self._interp.adaptivePredict(self._input,46,self._ctx)
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt==1:
+                    self.state = 411
+                    self.match(CParser.T__3)
+                    self.state = 413
+                    self._errHandler.sync(self)
+                    la_ = self._interp.adaptivePredict(self._input,45,self._ctx)
+                    if la_ == 1:
+                        self.state = 412
+                        self.match(CParser.T__28)
+
+
+                    self.state = 415
+                    self.parameter_declaration()
+                self.state = 420
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,46,self._ctx)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Parameter_declarationContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def declaration_specifiers(self):
+            return self.getTypedRuleContext(CParser.Declaration_specifiersContext,0)
+
+
+        # @param  i=None Type: int
+        def declarator(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.DeclaratorContext)
+            else:
+                return self.getTypedRuleContext(CParser.DeclaratorContext,i)
+
+
+        # @param  i=None Type: int
+        def abstract_declarator(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Abstract_declaratorContext)
+            else:
+                return self.getTypedRuleContext(CParser.Abstract_declaratorContext,i)
+
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        # @param  i=None Type: int
+        def pointer(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.PointerContext)
+            else:
+                return self.getTypedRuleContext(CParser.PointerContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_parameter_declaration
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterParameter_declaration" ):
+                listener.enterParameter_declaration(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitParameter_declaration" ):
+                listener.exitParameter_declaration(self)
+
+
+
+
+    def parameter_declaration(self):
+
+        localctx = CParser.Parameter_declarationContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 54, self.RULE_parameter_declaration)
+        self._la = 0 # Token type
+        try:
+            self.state = 439
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,51,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 421
+                self.declaration_specifiers()
+                self.state = 426
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                while ((((_la - 34)) & ~0x3f) == 0 and ((1 << (_la - 34)) & ((1 << (CParser.T__33 - 34)) | (1 << (CParser.T__34 - 34)) | (1 << (CParser.T__35 - 34)) | (1 << (CParser.T__37 - 34)) | (1 << (CParser.T__39 - 34)) | (1 << (CParser.T__41 - 34)) | (1 << (CParser.IDENTIFIER - 34)))) != 0):
+                    self.state = 424
+                    self._errHandler.sync(self)
+                    la_ = self._interp.adaptivePredict(self._input,47,self._ctx)
+                    if la_ == 1:
+                        self.state = 422
+                        self.declarator()
+                        pass
+
+                    elif la_ == 2:
+                        self.state = 423
+                        self.abstract_declarator()
+                        pass
+
+
+                    self.state = 428
+                    self._errHandler.sync(self)
+                    _la = self._input.LA(1)
+
+                self.state = 430
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__28:
+                    self.state = 429
+                    self.match(CParser.T__28)
+
+
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 435
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                while _la==CParser.T__41:
+                    self.state = 432
+                    self.pointer()
+                    self.state = 437
+                    self._errHandler.sync(self)
+                    _la = self._input.LA(1)
+
+                self.state = 438
+                self.match(CParser.IDENTIFIER)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Identifier_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def IDENTIFIER(self,i=None):
+            if i is None:
+                return self.getTokens(CParser.IDENTIFIER)
+            else:
+                return self.getToken(CParser.IDENTIFIER, i)
+
+        def getRuleIndex(self):
+            return CParser.RULE_identifier_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterIdentifier_list" ):
+                listener.enterIdentifier_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitIdentifier_list" ):
+                listener.exitIdentifier_list(self)
+
+
+
+
+    def identifier_list(self):
+
+        localctx = CParser.Identifier_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 56, self.RULE_identifier_list)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 441
+            self.match(CParser.IDENTIFIER)
+            self.state = 446
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__3:
+                self.state = 442
+                self.match(CParser.T__3)
+                self.state = 443
+                self.match(CParser.IDENTIFIER)
+                self.state = 448
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Type_nameContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def specifier_qualifier_list(self):
+            return self.getTypedRuleContext(CParser.Specifier_qualifier_listContext,0)
+
+
+        def abstract_declarator(self):
+            return self.getTypedRuleContext(CParser.Abstract_declaratorContext,0)
+
+
+        def type_id(self):
+            return self.getTypedRuleContext(CParser.Type_idContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_type_name
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterType_name" ):
+                listener.enterType_name(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitType_name" ):
+                listener.exitType_name(self)
+
+
+
+
+    def type_name(self):
+
+        localctx = CParser.Type_nameContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 58, self.RULE_type_name)
+        self._la = 0 # Token type
+        try:
+            self.state = 454
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,54,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 449
+                self.specifier_qualifier_list()
+                self.state = 451
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__37) | (1 << CParser.T__39) | (1 << CParser.T__41))) != 0):
+                    self.state = 450
+                    self.abstract_declarator()
+
+
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 453
+                self.type_id()
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Abstract_declaratorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def pointer(self):
+            return self.getTypedRuleContext(CParser.PointerContext,0)
+
+
+        def direct_abstract_declarator(self):
+            return self.getTypedRuleContext(CParser.Direct_abstract_declaratorContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_abstract_declarator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAbstract_declarator" ):
+                listener.enterAbstract_declarator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAbstract_declarator" ):
+                listener.exitAbstract_declarator(self)
+
+
+
+
+    def abstract_declarator(self):
+
+        localctx = CParser.Abstract_declaratorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 60, self.RULE_abstract_declarator)
+        try:
+            self.state = 461
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.T__41]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 456
+                self.pointer()
+                self.state = 458
+                self._errHandler.sync(self)
+                la_ = self._interp.adaptivePredict(self._input,55,self._ctx)
+                if la_ == 1:
+                    self.state = 457
+                    self.direct_abstract_declarator()
+
+
+                pass
+            elif token in [CParser.T__37, CParser.T__39]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 460
+                self.direct_abstract_declarator()
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Direct_abstract_declaratorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def abstract_declarator(self):
+            return self.getTypedRuleContext(CParser.Abstract_declaratorContext,0)
+
+
+        # @param  i=None Type: int
+        def abstract_declarator_suffix(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Abstract_declarator_suffixContext)
+            else:
+                return self.getTypedRuleContext(CParser.Abstract_declarator_suffixContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_direct_abstract_declarator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterDirect_abstract_declarator" ):
+                listener.enterDirect_abstract_declarator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitDirect_abstract_declarator" ):
+                listener.exitDirect_abstract_declarator(self)
+
+
+
+    def direct_abstract_declarator(self):
+
+        localctx = CParser.Direct_abstract_declaratorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 62, self.RULE_direct_abstract_declarator)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 468
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,57,self._ctx)
+            if la_ == 1:
+                self.state = 463
+                self.match(CParser.T__37)
+                self.state = 464
+                self.abstract_declarator()
+                self.state = 465
+                self.match(CParser.T__38)
+                pass
+
+            elif la_ == 2:
+                self.state = 467
+                self.abstract_declarator_suffix()
+                pass
+
+
+            self.state = 473
+            self._errHandler.sync(self)
+            _alt = self._interp.adaptivePredict(self._input,58,self._ctx)
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt==1:
+                    self.state = 470
+                    self.abstract_declarator_suffix()
+                self.state = 475
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,58,self._ctx)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Abstract_declarator_suffixContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def constant_expression(self):
+            return self.getTypedRuleContext(CParser.Constant_expressionContext,0)
+
+
+        def parameter_type_list(self):
+            return self.getTypedRuleContext(CParser.Parameter_type_listContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_abstract_declarator_suffix
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAbstract_declarator_suffix" ):
+                listener.enterAbstract_declarator_suffix(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAbstract_declarator_suffix" ):
+                listener.exitAbstract_declarator_suffix(self)
+
+
+
+
+    def abstract_declarator_suffix(self):
+
+        localctx = CParser.Abstract_declarator_suffixContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 64, self.RULE_abstract_declarator_suffix)
+        try:
+            self.state = 488
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,59,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 476
+                self.match(CParser.T__39)
+                self.state = 477
+                self.match(CParser.T__40)
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 478
+                self.match(CParser.T__39)
+                self.state = 479
+                self.constant_expression()
+                self.state = 480
+                self.match(CParser.T__40)
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 482
+                self.match(CParser.T__37)
+                self.state = 483
+                self.match(CParser.T__38)
+                pass
+
+            elif la_ == 4:
+                self.enterOuterAlt(localctx, 4)
+                self.state = 484
+                self.match(CParser.T__37)
+                self.state = 485
+                self.parameter_type_list()
+                self.state = 486
+                self.match(CParser.T__38)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class InitializerContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def assignment_expression(self):
+            return self.getTypedRuleContext(CParser.Assignment_expressionContext,0)
+
+
+        def initializer_list(self):
+            return self.getTypedRuleContext(CParser.Initializer_listContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_initializer
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterInitializer" ):
+                listener.enterInitializer(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitInitializer" ):
+                listener.exitInitializer(self)
+
+
+
+
+    def initializer(self):
+
+        localctx = CParser.InitializerContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 66, self.RULE_initializer)
+        self._la = 0 # Token type
+        try:
+            self.state = 498
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.T__37, CParser.T__41, CParser.T__43, CParser.T__44, CParser.T__47, CParser.T__48, CParser.T__49, CParser.T__52, CParser.T__53, CParser.T__54, CParser.IDENTIFIER, CParser.CHARACTER_LITERAL, CParser.STRING_LITERAL, CParser.HEX_LITERAL, CParser.DECIMAL_LITERAL, CParser.OCTAL_LITERAL, CParser.FLOATING_POINT_LITERAL]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 490
+                self.assignment_expression()
+                pass
+            elif token in [CParser.T__0]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 491
+                self.match(CParser.T__0)
+                self.state = 492
+                self.initializer_list()
+                self.state = 494
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__3:
+                    self.state = 493
+                    self.match(CParser.T__3)
+
+
+                self.state = 496
+                self.match(CParser.T__19)
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Initializer_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def initializer(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.InitializerContext)
+            else:
+                return self.getTypedRuleContext(CParser.InitializerContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_initializer_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterInitializer_list" ):
+                listener.enterInitializer_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitInitializer_list" ):
+                listener.exitInitializer_list(self)
+
+
+
+
+    def initializer_list(self):
+
+        localctx = CParser.Initializer_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 68, self.RULE_initializer_list)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 500
+            self.initializer()
+            self.state = 505
+            self._errHandler.sync(self)
+            _alt = self._interp.adaptivePredict(self._input,62,self._ctx)
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt==1:
+                    self.state = 501
+                    self.match(CParser.T__3)
+                    self.state = 502
+                    self.initializer()
+                self.state = 507
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,62,self._ctx)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Argument_expression_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def assignment_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Assignment_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Assignment_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_argument_expression_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterArgument_expression_list" ):
+                listener.enterArgument_expression_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitArgument_expression_list" ):
+                listener.exitArgument_expression_list(self)
+
+
+
+
+    def argument_expression_list(self):
+
+        localctx = CParser.Argument_expression_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 70, self.RULE_argument_expression_list)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 508
+            self.assignment_expression()
+            self.state = 510
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            if _la==CParser.T__28:
+                self.state = 509
+                self.match(CParser.T__28)
+
+
+            self.state = 519
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__3:
+                self.state = 512
+                self.match(CParser.T__3)
+                self.state = 513
+                self.assignment_expression()
+                self.state = 515
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__28:
+                    self.state = 514
+                    self.match(CParser.T__28)
+
+
+                self.state = 521
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Additive_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def multiplicative_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Multiplicative_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Multiplicative_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_additive_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAdditive_expression" ):
+                listener.enterAdditive_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAdditive_expression" ):
+                listener.exitAdditive_expression(self)
+
+
+
+
+    def additive_expression(self):
+
+        localctx = CParser.Additive_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 72, self.RULE_additive_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 522
+            self.multiplicative_expression()
+            self.state = 529
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__43 or _la==CParser.T__44:
+                self.state = 527
+                self._errHandler.sync(self)
+                token = self._input.LA(1)
+                if token in [CParser.T__43]:
+                    self.state = 523
+                    self.match(CParser.T__43)
+                    self.state = 524
+                    self.multiplicative_expression()
+                    pass
+                elif token in [CParser.T__44]:
+                    self.state = 525
+                    self.match(CParser.T__44)
+                    self.state = 526
+                    self.multiplicative_expression()
+                    pass
+                else:
+                    raise NoViableAltException(self)
+
+                self.state = 531
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Multiplicative_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def cast_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Cast_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Cast_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_multiplicative_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterMultiplicative_expression" ):
+                listener.enterMultiplicative_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitMultiplicative_expression" ):
+                listener.exitMultiplicative_expression(self)
+
+
+
+
+    def multiplicative_expression(self):
+
+        localctx = CParser.Multiplicative_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 74, self.RULE_multiplicative_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 532
+            self.cast_expression()
+            self.state = 541
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__41) | (1 << CParser.T__45) | (1 << CParser.T__46))) != 0):
+                self.state = 539
+                self._errHandler.sync(self)
+                token = self._input.LA(1)
+                if token in [CParser.T__41]:
+                    self.state = 533
+                    self.match(CParser.T__41)
+                    self.state = 534
+                    self.cast_expression()
+                    pass
+                elif token in [CParser.T__45]:
+                    self.state = 535
+                    self.match(CParser.T__45)
+                    self.state = 536
+                    self.cast_expression()
+                    pass
+                elif token in [CParser.T__46]:
+                    self.state = 537
+                    self.match(CParser.T__46)
+                    self.state = 538
+                    self.cast_expression()
+                    pass
+                else:
+                    raise NoViableAltException(self)
+
+                self.state = 543
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Cast_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def type_name(self):
+            return self.getTypedRuleContext(CParser.Type_nameContext,0)
+
+
+        def cast_expression(self):
+            return self.getTypedRuleContext(CParser.Cast_expressionContext,0)
+
+
+        def unary_expression(self):
+            return self.getTypedRuleContext(CParser.Unary_expressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_cast_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterCast_expression" ):
+                listener.enterCast_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitCast_expression" ):
+                listener.exitCast_expression(self)
+
+
+
+
+    def cast_expression(self):
+
+        localctx = CParser.Cast_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 76, self.RULE_cast_expression)
+        try:
+            self.state = 550
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,70,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 544
+                self.match(CParser.T__37)
+                self.state = 545
+                self.type_name()
+                self.state = 546
+                self.match(CParser.T__38)
+                self.state = 547
+                self.cast_expression()
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 549
+                self.unary_expression()
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Unary_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def postfix_expression(self):
+            return self.getTypedRuleContext(CParser.Postfix_expressionContext,0)
+
+
+        def unary_expression(self):
+            return self.getTypedRuleContext(CParser.Unary_expressionContext,0)
+
+
+        def unary_operator(self):
+            return self.getTypedRuleContext(CParser.Unary_operatorContext,0)
+
+
+        def cast_expression(self):
+            return self.getTypedRuleContext(CParser.Cast_expressionContext,0)
+
+
+        def type_name(self):
+            return self.getTypedRuleContext(CParser.Type_nameContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_unary_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterUnary_expression" ):
+                listener.enterUnary_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitUnary_expression" ):
+                listener.exitUnary_expression(self)
+
+
+
+
+    def unary_expression(self):
+
+        localctx = CParser.Unary_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 78, self.RULE_unary_expression)
+        try:
+            self.state = 567
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,71,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 552
+                self.postfix_expression()
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 553
+                self.match(CParser.T__47)
+                self.state = 554
+                self.unary_expression()
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 555
+                self.match(CParser.T__48)
+                self.state = 556
+                self.unary_expression()
+                pass
+
+            elif la_ == 4:
+                self.enterOuterAlt(localctx, 4)
+                self.state = 557
+                self.unary_operator()
+                self.state = 558
+                self.cast_expression()
+                pass
+
+            elif la_ == 5:
+                self.enterOuterAlt(localctx, 5)
+                self.state = 560
+                self.match(CParser.T__49)
+                self.state = 561
+                self.unary_expression()
+                pass
+
+            elif la_ == 6:
+                self.enterOuterAlt(localctx, 6)
+                self.state = 562
+                self.match(CParser.T__49)
+                self.state = 563
+                self.match(CParser.T__37)
+                self.state = 564
+                self.type_name()
+                self.state = 565
+                self.match(CParser.T__38)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Postfix_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+            self.FuncCallText = ''
+            self.p = None # Primary_expressionContext
+            self.a = None # Token
+            self.c = None # Argument_expression_listContext
+            self.b = None # Token
+            self.x = None # Token
+            self.y = None # Token
+            self.z = None # Token
+
+        def primary_expression(self):
+            return self.getTypedRuleContext(CParser.Primary_expressionContext,0)
+
+
+        # @param  i=None Type: int
+        def expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.ExpressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.ExpressionContext,i)
+
+
+        # @param  i=None Type: int
+        def macro_parameter_list(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Macro_parameter_listContext)
+            else:
+                return self.getTypedRuleContext(CParser.Macro_parameter_listContext,i)
+
+
+        # @param  i=None Type: int
+        def argument_expression_list(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Argument_expression_listContext)
+            else:
+                return self.getTypedRuleContext(CParser.Argument_expression_listContext,i)
+
+
+        # @param  i=None Type: int
+        def IDENTIFIER(self,i=None):
+            if i is None:
+                return self.getTokens(CParser.IDENTIFIER)
+            else:
+                return self.getToken(CParser.IDENTIFIER, i)
+
+        def getRuleIndex(self):
+            return CParser.RULE_postfix_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterPostfix_expression" ):
+                listener.enterPostfix_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitPostfix_expression" ):
+                listener.exitPostfix_expression(self)
+
+
+
+
+    def postfix_expression(self):
+
+        localctx = CParser.Postfix_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 80, self.RULE_postfix_expression)
+
+        self.FuncCallText=''
+
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 569
+            localctx.p = self.primary_expression()
+            self.FuncCallText += (None if localctx.p is None else self._input.getText((localctx.p.start,localctx.p.stop)))
+            self.state = 600
+            self._errHandler.sync(self)
+            _alt = self._interp.adaptivePredict(self._input,73,self._ctx)
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt==1:
+                    self.state = 598
+                    self._errHandler.sync(self)
+                    la_ = self._interp.adaptivePredict(self._input,72,self._ctx)
+                    if la_ == 1:
+                        self.state = 571
+                        self.match(CParser.T__39)
+                        self.state = 572
+                        self.expression()
+                        self.state = 573
+                        self.match(CParser.T__40)
+                        pass
+
+                    elif la_ == 2:
+                        self.state = 575
+                        self.match(CParser.T__37)
+                        self.state = 576
+                        localctx.a = self.match(CParser.T__38)
+                        self.StoreFunctionCalling((None if localctx.p is None else localctx.p.start).line, (None if localctx.p is None else localctx.p.start).column, (0 if localctx.a is None else localctx.a.line), localctx.a.column, self.FuncCallText, '')
+                        pass
+
+                    elif la_ == 3:
+                        self.state = 578
+                        self.match(CParser.T__37)
+                        self.state = 579
+                        localctx.c = self.argument_expression_list()
+                        self.state = 580
+                        localctx.b = self.match(CParser.T__38)
+                        self.StoreFunctionCalling((None if localctx.p is None else localctx.p.start).line, (None if localctx.p is None else localctx.p.start).column, (0 if localctx.b is None else localctx.b.line), localctx.b.column, self.FuncCallText, (None if localctx.c is None else self._input.getText((localctx.c.start,localctx.c.stop))))
+                        pass
+
+                    elif la_ == 4:
+                        self.state = 583
+                        self.match(CParser.T__37)
+                        self.state = 584
+                        self.macro_parameter_list()
+                        self.state = 585
+                        self.match(CParser.T__38)
+                        pass
+
+                    elif la_ == 5:
+                        self.state = 587
+                        self.match(CParser.T__50)
+                        self.state = 588
+                        localctx.x = self.match(CParser.IDENTIFIER)
+                        self.FuncCallText += '.' + (None if localctx.x is None else localctx.x.text)
+                        pass
+
+                    elif la_ == 6:
+                        self.state = 590
+                        self.match(CParser.T__41)
+                        self.state = 591
+                        localctx.y = self.match(CParser.IDENTIFIER)
+                        self.FuncCallText = (None if localctx.y is None else localctx.y.text)
+                        pass
+
+                    elif la_ == 7:
+                        self.state = 593
+                        self.match(CParser.T__51)
+                        self.state = 594
+                        localctx.z = self.match(CParser.IDENTIFIER)
+                        self.FuncCallText += '->' + (None if localctx.z is None else localctx.z.text)
+                        pass
+
+                    elif la_ == 8:
+                        self.state = 596
+                        self.match(CParser.T__47)
+                        pass
+
+                    elif la_ == 9:
+                        self.state = 597
+                        self.match(CParser.T__48)
+                        pass
+
+
+                self.state = 602
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,73,self._ctx)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Macro_parameter_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def parameter_declaration(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Parameter_declarationContext)
+            else:
+                return self.getTypedRuleContext(CParser.Parameter_declarationContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_macro_parameter_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterMacro_parameter_list" ):
+                listener.enterMacro_parameter_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitMacro_parameter_list" ):
+                listener.exitMacro_parameter_list(self)
+
+
+
+
+    def macro_parameter_list(self):
+
+        localctx = CParser.Macro_parameter_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 82, self.RULE_macro_parameter_list)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 603
+            self.parameter_declaration()
+            self.state = 608
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__3:
+                self.state = 604
+                self.match(CParser.T__3)
+                self.state = 605
+                self.parameter_declaration()
+                self.state = 610
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Unary_operatorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_unary_operator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterUnary_operator" ):
+                listener.enterUnary_operator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitUnary_operator" ):
+                listener.exitUnary_operator(self)
+
+
+
+
+    def unary_operator(self):
+
+        localctx = CParser.Unary_operatorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 84, self.RULE_unary_operator)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 611
+            _la = self._input.LA(1)
+            if not((((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__41) | (1 << CParser.T__43) | (1 << CParser.T__44) | (1 << CParser.T__52) | (1 << CParser.T__53) | (1 << CParser.T__54))) != 0)):
+                self._errHandler.recoverInline(self)
+            else:
+                self._errHandler.reportMatch(self)
+                self.consume()
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Primary_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def constant(self):
+            return self.getTypedRuleContext(CParser.ConstantContext,0)
+
+
+        def expression(self):
+            return self.getTypedRuleContext(CParser.ExpressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_primary_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterPrimary_expression" ):
+                listener.enterPrimary_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitPrimary_expression" ):
+                listener.exitPrimary_expression(self)
+
+
+
+
+    def primary_expression(self):
+
+        localctx = CParser.Primary_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 86, self.RULE_primary_expression)
+        try:
+            self.state = 619
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,75,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 613
+                self.match(CParser.IDENTIFIER)
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 614
+                self.constant()
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 615
+                self.match(CParser.T__37)
+                self.state = 616
+                self.expression()
+                self.state = 617
+                self.match(CParser.T__38)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class ConstantContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def HEX_LITERAL(self):
+            return self.getToken(CParser.HEX_LITERAL, 0)
+
+        def OCTAL_LITERAL(self):
+            return self.getToken(CParser.OCTAL_LITERAL, 0)
+
+        def DECIMAL_LITERAL(self):
+            return self.getToken(CParser.DECIMAL_LITERAL, 0)
+
+        def CHARACTER_LITERAL(self):
+            return self.getToken(CParser.CHARACTER_LITERAL, 0)
+
+        # @param  i=None Type: int
+        def IDENTIFIER(self,i=None):
+            if i is None:
+                return self.getTokens(CParser.IDENTIFIER)
+            else:
+                return self.getToken(CParser.IDENTIFIER, i)
+
+        # @param  i=None Type: int
+        def STRING_LITERAL(self,i=None):
+            if i is None:
+                return self.getTokens(CParser.STRING_LITERAL)
+            else:
+                return self.getToken(CParser.STRING_LITERAL, i)
+
+        def FLOATING_POINT_LITERAL(self):
+            return self.getToken(CParser.FLOATING_POINT_LITERAL, 0)
+
+        def getRuleIndex(self):
+            return CParser.RULE_constant
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterConstant" ):
+                listener.enterConstant(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitConstant" ):
+                listener.exitConstant(self)
+
+
+
+
+    def constant(self):
+
+        localctx = CParser.ConstantContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 88, self.RULE_constant)
+        self._la = 0 # Token type
+        try:
+            self.state = 647
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.HEX_LITERAL]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 621
+                self.match(CParser.HEX_LITERAL)
+                pass
+            elif token in [CParser.OCTAL_LITERAL]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 622
+                self.match(CParser.OCTAL_LITERAL)
+                pass
+            elif token in [CParser.DECIMAL_LITERAL]:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 623
+                self.match(CParser.DECIMAL_LITERAL)
+                pass
+            elif token in [CParser.CHARACTER_LITERAL]:
+                self.enterOuterAlt(localctx, 4)
+                self.state = 624
+                self.match(CParser.CHARACTER_LITERAL)
+                pass
+            elif token in [CParser.IDENTIFIER, CParser.STRING_LITERAL]:
+                self.enterOuterAlt(localctx, 5)
+                self.state = 636
+                self._errHandler.sync(self)
+                _alt = 1
+                while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                    if _alt == 1:
+                        self.state = 628
+                        self._errHandler.sync(self)
+                        _la = self._input.LA(1)
+                        while _la==CParser.IDENTIFIER:
+                            self.state = 625
+                            self.match(CParser.IDENTIFIER)
+                            self.state = 630
+                            self._errHandler.sync(self)
+                            _la = self._input.LA(1)
+
+                        self.state = 632
+                        self._errHandler.sync(self)
+                        _alt = 1
+                        while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                            if _alt == 1:
+                                self.state = 631
+                                self.match(CParser.STRING_LITERAL)
+
+                            else:
+                                raise NoViableAltException(self)
+                            self.state = 634
+                            self._errHandler.sync(self)
+                            _alt = self._interp.adaptivePredict(self._input,77,self._ctx)
+
+
+                    else:
+                        raise NoViableAltException(self)
+                    self.state = 638
+                    self._errHandler.sync(self)
+                    _alt = self._interp.adaptivePredict(self._input,78,self._ctx)
+
+                self.state = 643
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                while _la==CParser.IDENTIFIER:
+                    self.state = 640
+                    self.match(CParser.IDENTIFIER)
+                    self.state = 645
+                    self._errHandler.sync(self)
+                    _la = self._input.LA(1)
+
+                pass
+            elif token in [CParser.FLOATING_POINT_LITERAL]:
+                self.enterOuterAlt(localctx, 6)
+                self.state = 646
+                self.match(CParser.FLOATING_POINT_LITERAL)
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class ExpressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def assignment_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Assignment_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Assignment_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterExpression" ):
+                listener.enterExpression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitExpression" ):
+                listener.exitExpression(self)
+
+
+
+
+    def expression(self):
+
+        localctx = CParser.ExpressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 90, self.RULE_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 649
+            self.assignment_expression()
+            self.state = 654
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__3:
+                self.state = 650
+                self.match(CParser.T__3)
+                self.state = 651
+                self.assignment_expression()
+                self.state = 656
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Constant_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def conditional_expression(self):
+            return self.getTypedRuleContext(CParser.Conditional_expressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_constant_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterConstant_expression" ):
+                listener.enterConstant_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitConstant_expression" ):
+                listener.exitConstant_expression(self)
+
+
+
+
+    def constant_expression(self):
+
+        localctx = CParser.Constant_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 92, self.RULE_constant_expression)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 657
+            self.conditional_expression()
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Assignment_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def lvalue(self):
+            return self.getTypedRuleContext(CParser.LvalueContext,0)
+
+
+        def assignment_operator(self):
+            return self.getTypedRuleContext(CParser.Assignment_operatorContext,0)
+
+
+        def assignment_expression(self):
+            return self.getTypedRuleContext(CParser.Assignment_expressionContext,0)
+
+
+        def conditional_expression(self):
+            return self.getTypedRuleContext(CParser.Conditional_expressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_assignment_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAssignment_expression" ):
+                listener.enterAssignment_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAssignment_expression" ):
+                listener.exitAssignment_expression(self)
+
+
+
+
+    def assignment_expression(self):
+
+        localctx = CParser.Assignment_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 94, self.RULE_assignment_expression)
+        try:
+            self.state = 664
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,82,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 659
+                self.lvalue()
+                self.state = 660
+                self.assignment_operator()
+                self.state = 661
+                self.assignment_expression()
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 663
+                self.conditional_expression()
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class LvalueContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def unary_expression(self):
+            return self.getTypedRuleContext(CParser.Unary_expressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_lvalue
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterLvalue" ):
+                listener.enterLvalue(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitLvalue" ):
+                listener.exitLvalue(self)
+
+
+
+
+    def lvalue(self):
+
+        localctx = CParser.LvalueContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 96, self.RULE_lvalue)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 666
+            self.unary_expression()
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Assignment_operatorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_assignment_operator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAssignment_operator" ):
+                listener.enterAssignment_operator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAssignment_operator" ):
+                listener.exitAssignment_operator(self)
+
+
+
+
+    def assignment_operator(self):
+
+        localctx = CParser.Assignment_operatorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 98, self.RULE_assignment_operator)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 668
+            _la = self._input.LA(1)
+            if not(((((_la - 5)) & ~0x3f) == 0 and ((1 << (_la - 5)) & ((1 << (CParser.T__4 - 5)) | (1 << (CParser.T__55 - 5)) | (1 << (CParser.T__56 - 5)) | (1 << (CParser.T__57 - 5)) | (1 << (CParser.T__58 - 5)) | (1 << (CParser.T__59 - 5)) | (1 << (CParser.T__60 - 5)) | (1 << (CParser.T__61 - 5)) | (1 << (CParser.T__62 - 5)) | (1 << (CParser.T__63 - 5)) | (1 << (CParser.T__64 - 5)))) != 0)):
+                self._errHandler.recoverInline(self)
+            else:
+                self._errHandler.reportMatch(self)
+                self.consume()
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Conditional_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+            self.e = None # Logical_or_expressionContext
+
+        def logical_or_expression(self):
+            return self.getTypedRuleContext(CParser.Logical_or_expressionContext,0)
+
+
+        def expression(self):
+            return self.getTypedRuleContext(CParser.ExpressionContext,0)
+
+
+        def conditional_expression(self):
+            return self.getTypedRuleContext(CParser.Conditional_expressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_conditional_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterConditional_expression" ):
+                listener.enterConditional_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitConditional_expression" ):
+                listener.exitConditional_expression(self)
+
+
+
+
+    def conditional_expression(self):
+
+        localctx = CParser.Conditional_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 100, self.RULE_conditional_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 670
+            localctx.e = self.logical_or_expression()
+            self.state = 677
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            if _la==CParser.T__65:
+                self.state = 671
+                self.match(CParser.T__65)
+                self.state = 672
+                self.expression()
+                self.state = 673
+                self.match(CParser.T__22)
+                self.state = 674
+                self.conditional_expression()
+                self.StorePredicateExpression((None if localctx.e is None else localctx.e.start).line, (None if localctx.e is None else localctx.e.start).column, (None if localctx.e is None else localctx.e.stop).line, (None if localctx.e is None else localctx.e.stop).column, (None if localctx.e is None else self._input.getText((localctx.e.start,localctx.e.stop))))
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Logical_or_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def logical_and_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Logical_and_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Logical_and_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_logical_or_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterLogical_or_expression" ):
+                listener.enterLogical_or_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitLogical_or_expression" ):
+                listener.exitLogical_or_expression(self)
+
+
+
+
+    def logical_or_expression(self):
+
+        localctx = CParser.Logical_or_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 102, self.RULE_logical_or_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 679
+            self.logical_and_expression()
+            self.state = 684
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__66:
+                self.state = 680
+                self.match(CParser.T__66)
+                self.state = 681
+                self.logical_and_expression()
+                self.state = 686
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Logical_and_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def inclusive_or_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Inclusive_or_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Inclusive_or_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_logical_and_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterLogical_and_expression" ):
+                listener.enterLogical_and_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitLogical_and_expression" ):
+                listener.exitLogical_and_expression(self)
+
+
+
+
+    def logical_and_expression(self):
+
+        localctx = CParser.Logical_and_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 104, self.RULE_logical_and_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 687
+            self.inclusive_or_expression()
+            self.state = 692
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__67:
+                self.state = 688
+                self.match(CParser.T__67)
+                self.state = 689
+                self.inclusive_or_expression()
+                self.state = 694
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Inclusive_or_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def exclusive_or_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Exclusive_or_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Exclusive_or_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_inclusive_or_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterInclusive_or_expression" ):
+                listener.enterInclusive_or_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitInclusive_or_expression" ):
+                listener.exitInclusive_or_expression(self)
+
+
+
+
+    def inclusive_or_expression(self):
+
+        localctx = CParser.Inclusive_or_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 106, self.RULE_inclusive_or_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 695
+            self.exclusive_or_expression()
+            self.state = 700
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__68:
+                self.state = 696
+                self.match(CParser.T__68)
+                self.state = 697
+                self.exclusive_or_expression()
+                self.state = 702
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Exclusive_or_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def and_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.And_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.And_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_exclusive_or_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterExclusive_or_expression" ):
+                listener.enterExclusive_or_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitExclusive_or_expression" ):
+                listener.exitExclusive_or_expression(self)
+
+
+
+
+    def exclusive_or_expression(self):
+
+        localctx = CParser.Exclusive_or_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 108, self.RULE_exclusive_or_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 703
+            self.and_expression()
+            self.state = 708
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__69:
+                self.state = 704
+                self.match(CParser.T__69)
+                self.state = 705
+                self.and_expression()
+                self.state = 710
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class And_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def equality_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Equality_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Equality_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_and_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAnd_expression" ):
+                listener.enterAnd_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAnd_expression" ):
+                listener.exitAnd_expression(self)
+
+
+
+
+    def and_expression(self):
+
+        localctx = CParser.And_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 110, self.RULE_and_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 711
+            self.equality_expression()
+            self.state = 716
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__52:
+                self.state = 712
+                self.match(CParser.T__52)
+                self.state = 713
+                self.equality_expression()
+                self.state = 718
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Equality_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def relational_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Relational_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Relational_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_equality_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterEquality_expression" ):
+                listener.enterEquality_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitEquality_expression" ):
+                listener.exitEquality_expression(self)
+
+
+
+
+    def equality_expression(self):
+
+        localctx = CParser.Equality_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 112, self.RULE_equality_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 719
+            self.relational_expression()
+            self.state = 724
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__70 or _la==CParser.T__71:
+                self.state = 720
+                _la = self._input.LA(1)
+                if not(_la==CParser.T__70 or _la==CParser.T__71):
+                    self._errHandler.recoverInline(self)
+                else:
+                    self._errHandler.reportMatch(self)
+                    self.consume()
+                self.state = 721
+                self.relational_expression()
+                self.state = 726
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Relational_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def shift_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Shift_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Shift_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_relational_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterRelational_expression" ):
+                listener.enterRelational_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitRelational_expression" ):
+                listener.exitRelational_expression(self)
+
+
+
+
+    def relational_expression(self):
+
+        localctx = CParser.Relational_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 114, self.RULE_relational_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 727
+            self.shift_expression()
+            self.state = 732
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while ((((_la - 73)) & ~0x3f) == 0 and ((1 << (_la - 73)) & ((1 << (CParser.T__72 - 73)) | (1 << (CParser.T__73 - 73)) | (1 << (CParser.T__74 - 73)) | (1 << (CParser.T__75 - 73)))) != 0):
+                self.state = 728
+                _la = self._input.LA(1)
+                if not(((((_la - 73)) & ~0x3f) == 0 and ((1 << (_la - 73)) & ((1 << (CParser.T__72 - 73)) | (1 << (CParser.T__73 - 73)) | (1 << (CParser.T__74 - 73)) | (1 << (CParser.T__75 - 73)))) != 0)):
+                    self._errHandler.recoverInline(self)
+                else:
+                    self._errHandler.reportMatch(self)
+                    self.consume()
+                self.state = 729
+                self.shift_expression()
+                self.state = 734
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Shift_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def additive_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Additive_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Additive_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_shift_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterShift_expression" ):
+                listener.enterShift_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitShift_expression" ):
+                listener.exitShift_expression(self)
+
+
+
+
+    def shift_expression(self):
+
+        localctx = CParser.Shift_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 116, self.RULE_shift_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 735
+            self.additive_expression()
+            self.state = 740
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__76 or _la==CParser.T__77:
+                self.state = 736
+                _la = self._input.LA(1)
+                if not(_la==CParser.T__76 or _la==CParser.T__77):
+                    self._errHandler.recoverInline(self)
+                else:
+                    self._errHandler.reportMatch(self)
+                    self.consume()
+                self.state = 737
+                self.additive_expression()
+                self.state = 742
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class StatementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def labeled_statement(self):
+            return self.getTypedRuleContext(CParser.Labeled_statementContext,0)
+
+
+        def compound_statement(self):
+            return self.getTypedRuleContext(CParser.Compound_statementContext,0)
+
+
+        def expression_statement(self):
+            return self.getTypedRuleContext(CParser.Expression_statementContext,0)
+
+
+        def selection_statement(self):
+            return self.getTypedRuleContext(CParser.Selection_statementContext,0)
+
+
+        def iteration_statement(self):
+            return self.getTypedRuleContext(CParser.Iteration_statementContext,0)
+
+
+        def jump_statement(self):
+            return self.getTypedRuleContext(CParser.Jump_statementContext,0)
+
+
+        def macro_statement(self):
+            return self.getTypedRuleContext(CParser.Macro_statementContext,0)
+
+
+        def asm2_statement(self):
+            return self.getTypedRuleContext(CParser.Asm2_statementContext,0)
+
+
+        def asm1_statement(self):
+            return self.getTypedRuleContext(CParser.Asm1_statementContext,0)
+
+
+        def asm_statement(self):
+            return self.getTypedRuleContext(CParser.Asm_statementContext,0)
+
+
+        def declaration(self):
+            return self.getTypedRuleContext(CParser.DeclarationContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStatement" ):
+                listener.enterStatement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStatement" ):
+                listener.exitStatement(self)
+
+
+
+
+    def statement(self):
+
+        localctx = CParser.StatementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 118, self.RULE_statement)
+        try:
+            self.state = 754
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,92,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 743
+                self.labeled_statement()
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 744
+                self.compound_statement()
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 745
+                self.expression_statement()
+                pass
+
+            elif la_ == 4:
+                self.enterOuterAlt(localctx, 4)
+                self.state = 746
+                self.selection_statement()
+                pass
+
+            elif la_ == 5:
+                self.enterOuterAlt(localctx, 5)
+                self.state = 747
+                self.iteration_statement()
+                pass
+
+            elif la_ == 6:
+                self.enterOuterAlt(localctx, 6)
+                self.state = 748
+                self.jump_statement()
+                pass
+
+            elif la_ == 7:
+                self.enterOuterAlt(localctx, 7)
+                self.state = 749
+                self.macro_statement()
+                pass
+
+            elif la_ == 8:
+                self.enterOuterAlt(localctx, 8)
+                self.state = 750
+                self.asm2_statement()
+                pass
+
+            elif la_ == 9:
+                self.enterOuterAlt(localctx, 9)
+                self.state = 751
+                self.asm1_statement()
+                pass
+
+            elif la_ == 10:
+                self.enterOuterAlt(localctx, 10)
+                self.state = 752
+                self.asm_statement()
+                pass
+
+            elif la_ == 11:
+                self.enterOuterAlt(localctx, 11)
+                self.state = 753
+                self.declaration()
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Asm2_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def getRuleIndex(self):
+            return CParser.RULE_asm2_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAsm2_statement" ):
+                listener.enterAsm2_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAsm2_statement" ):
+                listener.exitAsm2_statement(self)
+
+
+
+
+    def asm2_statement(self):
+
+        localctx = CParser.Asm2_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 120, self.RULE_asm2_statement)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 757
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            if _la==CParser.T__78:
+                self.state = 756
+                self.match(CParser.T__78)
+
+
+            self.state = 759
+            self.match(CParser.IDENTIFIER)
+            self.state = 760
+            self.match(CParser.T__37)
+            self.state = 764
+            self._errHandler.sync(self)
+            _alt = self._interp.adaptivePredict(self._input,94,self._ctx)
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt==1:
+                    self.state = 761
+                    _la = self._input.LA(1)
+                    if _la <= 0 or _la==CParser.T__1:
+                        self._errHandler.recoverInline(self)
+                    else:
+                        self._errHandler.reportMatch(self)
+                        self.consume()
+                self.state = 766
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,94,self._ctx)
+
+            self.state = 767
+            self.match(CParser.T__38)
+            self.state = 768
+            self.match(CParser.T__1)
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Asm1_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_asm1_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAsm1_statement" ):
+                listener.enterAsm1_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAsm1_statement" ):
+                listener.exitAsm1_statement(self)
+
+
+
+
+    def asm1_statement(self):
+
+        localctx = CParser.Asm1_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 122, self.RULE_asm1_statement)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 770
+            self.match(CParser.T__79)
+            self.state = 771
+            self.match(CParser.T__0)
+            self.state = 775
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__0) | (1 << CParser.T__1) | (1 << CParser.T__2) | (1 << CParser.T__3) | (1 << CParser.T__4) | (1 << CParser.T__5) | (1 << CParser.T__6) | (1 << CParser.T__7) | (1 << CParser.T__8) | (1 << CParser.T__9) | (1 << CParser.T__10) | (1 << CParser.T__11) | (1 << CParser.T__12) | (1 << CParser.T__13) | (1 << CParser.T__14) | (1 << CParser.T__15) | (1 << CParser.T__16) | (1 << CParser.T__17) | (1 << CParser.T__18) | (1 << CParser.T__20) | (1 << CParser.T__21) | (1 << CParser.T__22) | (1 << CParser.T__23) | (1 << CParser.T__24) | (1 << CParser.T__25) | (1 << CParser.T__26) | (1 << CParser.T__27) | (1 << CParser.T__28) | (1 << CParser.T__29) | (1 << CParser.T__30) | (1 << CParser.T__31) | (1 << CParser.T__32) | (1 << CParser.T__33) | (1 << CParser.T__34) | (1 << CParser.T__35) | (1 << CParser.T__36) | (1 << CParser.T__37) | (1 << CParser.T__38) | (1 << CParser.T__39) | (1 << CParser.T__40) | (1 << CParser.T__41) | (1 << CParser.T__42) | (1 << CParser.T__43) | (1 << CParser.T__44) | (1 << CParser.T__45) | (1 << CParser.T__46) | (1 << CParser.T__47) | (1 << CParser.T__48) | (1 << CParser.T__49) | (1 << CParser.T__50) | (1 << CParser.T__51) | (1 << CParser.T__52) | (1 << CParser.T__53) | (1 << CParser.T__54) | (1 << CParser.T__55) | (1 << CParser.T__56) | (1 << CParser.T__57) | (1 << CParser.T__58) | (1 << CParser.T__59) | (1 << CParser.T__60) | (1 << CParser.T__61) | (1 << CParser.T__62))) != 0) or ((((_la - 64)) & ~0x3f) == 0 and ((1 << (_la - 64)) & ((1 << (CParser.T__63 - 64)) | (1 << (CParser.T__64 - 64)) | (1 << (CParser.T__65 - 64)) | (1 << (CParser.T__66 - 64)) | (1 << (CParser.T__67 - 64)) | (1 << (CParser.T__68 - 64)) | (1 << (CParser.T__69 - 64)) | (1 << (CParser.T__70 - 64)) | (1 << (CParser.T__71 - 64)) | (1 << (CParser.T__72 - 64)) | (1 << (CParser.T__73 - 64)) | (1 << (CParser.T__74 - 64)) | (1 << (CParser.T__75 - 64)) | (1 << (CParser.T__76 - 64)) | (1 << (CParser.T__77 - 64)) | (1 << (CParser.T__78 - 64)) | (1 << (CParser.T__79 - 64)) | (1 << (CParser.T__80 - 64)) | (1 << (CParser.T__81 - 64)) | (1 << (CParser.T__82 - 64)) | (1 << (CParser.T__83 - 64)) | (1 << (CParser.T__84 - 64)) | (1 << (CParser.T__85 - 64)) | (1 << (CParser.T__86 - 64)) | (1 << (CParser.T__87 - 64)) | (1 << (CParser.T__88 - 64)) | (1 << (CParser.T__89 - 64)) | (1 << (CParser.T__90 - 64)) | (1 << (CParser.T__91 - 64)) | (1 << (CParser.IDENTIFIER - 64)) | (1 << (CParser.CHARACTER_LITERAL - 64)) | (1 << (CParser.STRING_LITERAL - 64)) | (1 << (CParser.HEX_LITERAL - 64)) | (1 << (CParser.DECIMAL_LITERAL - 64)) | (1 << (CParser.OCTAL_LITERAL - 64)) | (1 << (CParser.FLOATING_POINT_LITERAL - 64)) | (1 << (CParser.WS - 64)) | (1 << (CParser.BS - 64)) | (1 << (CParser.UnicodeVocabulary - 64)) | (1 << (CParser.COMMENT - 64)) | (1 << (CParser.LINE_COMMENT - 64)) | (1 << (CParser.LINE_COMMAND - 64)))) != 0):
+                self.state = 772
+                _la = self._input.LA(1)
+                if _la <= 0 or _la==CParser.T__19:
+                    self._errHandler.recoverInline(self)
+                else:
+                    self._errHandler.reportMatch(self)
+                    self.consume()
+                self.state = 777
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+            self.state = 778
+            self.match(CParser.T__19)
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Asm_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_asm_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAsm_statement" ):
+                listener.enterAsm_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAsm_statement" ):
+                listener.exitAsm_statement(self)
+
+
+
+
+    def asm_statement(self):
+
+        localctx = CParser.Asm_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 124, self.RULE_asm_statement)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 780
+            self.match(CParser.T__80)
+            self.state = 781
+            self.match(CParser.T__0)
+            self.state = 785
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__0) | (1 << CParser.T__1) | (1 << CParser.T__2) | (1 << CParser.T__3) | (1 << CParser.T__4) | (1 << CParser.T__5) | (1 << CParser.T__6) | (1 << CParser.T__7) | (1 << CParser.T__8) | (1 << CParser.T__9) | (1 << CParser.T__10) | (1 << CParser.T__11) | (1 << CParser.T__12) | (1 << CParser.T__13) | (1 << CParser.T__14) | (1 << CParser.T__15) | (1 << CParser.T__16) | (1 << CParser.T__17) | (1 << CParser.T__18) | (1 << CParser.T__20) | (1 << CParser.T__21) | (1 << CParser.T__22) | (1 << CParser.T__23) | (1 << CParser.T__24) | (1 << CParser.T__25) | (1 << CParser.T__26) | (1 << CParser.T__27) | (1 << CParser.T__28) | (1 << CParser.T__29) | (1 << CParser.T__30) | (1 << CParser.T__31) | (1 << CParser.T__32) | (1 << CParser.T__33) | (1 << CParser.T__34) | (1 << CParser.T__35) | (1 << CParser.T__36) | (1 << CParser.T__37) | (1 << CParser.T__38) | (1 << CParser.T__39) | (1 << CParser.T__40) | (1 << CParser.T__41) | (1 << CParser.T__42) | (1 << CParser.T__43) | (1 << CParser.T__44) | (1 << CParser.T__45) | (1 << CParser.T__46) | (1 << CParser.T__47) | (1 << CParser.T__48) | (1 << CParser.T__49) | (1 << CParser.T__50) | (1 << CParser.T__51) | (1 << CParser.T__52) | (1 << CParser.T__53) | (1 << CParser.T__54) | (1 << CParser.T__55) | (1 << CParser.T__56) | (1 << CParser.T__57) | (1 << CParser.T__58) | (1 << CParser.T__59) | (1 << CParser.T__60) | (1 << CParser.T__61) | (1 << CParser.T__62))) != 0) or ((((_la - 64)) & ~0x3f) == 0 and ((1 << (_la - 64)) & ((1 << (CParser.T__63 - 64)) | (1 << (CParser.T__64 - 64)) | (1 << (CParser.T__65 - 64)) | (1 << (CParser.T__66 - 64)) | (1 << (CParser.T__67 - 64)) | (1 << (CParser.T__68 - 64)) | (1 << (CParser.T__69 - 64)) | (1 << (CParser.T__70 - 64)) | (1 << (CParser.T__71 - 64)) | (1 << (CParser.T__72 - 64)) | (1 << (CParser.T__73 - 64)) | (1 << (CParser.T__74 - 64)) | (1 << (CParser.T__75 - 64)) | (1 << (CParser.T__76 - 64)) | (1 << (CParser.T__77 - 64)) | (1 << (CParser.T__78 - 64)) | (1 << (CParser.T__79 - 64)) | (1 << (CParser.T__80 - 64)) | (1 << (CParser.T__81 - 64)) | (1 << (CParser.T__82 - 64)) | (1 << (CParser.T__83 - 64)) | (1 << (CParser.T__84 - 64)) | (1 << (CParser.T__85 - 64)) | (1 << (CParser.T__86 - 64)) | (1 << (CParser.T__87 - 64)) | (1 << (CParser.T__88 - 64)) | (1 << (CParser.T__89 - 64)) | (1 << (CParser.T__90 - 64)) | (1 << (CParser.T__91 - 64)) | (1 << (CParser.IDENTIFIER - 64)) | (1 << (CParser.CHARACTER_LITERAL - 64)) | (1 << (CParser.STRING_LITERAL - 64)) | (1 << (CParser.HEX_LITERAL - 64)) | (1 << (CParser.DECIMAL_LITERAL - 64)) | (1 << (CParser.OCTAL_LITERAL - 64)) | (1 << (CParser.FLOATING_POINT_LITERAL - 64)) | (1 << (CParser.WS - 64)) | (1 << (CParser.BS - 64)) | (1 << (CParser.UnicodeVocabulary - 64)) | (1 << (CParser.COMMENT - 64)) | (1 << (CParser.LINE_COMMENT - 64)) | (1 << (CParser.LINE_COMMAND - 64)))) != 0):
+                self.state = 782
+                _la = self._input.LA(1)
+                if _la <= 0 or _la==CParser.T__19:
+                    self._errHandler.recoverInline(self)
+                else:
+                    self._errHandler.reportMatch(self)
+                    self.consume()
+                self.state = 787
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+            self.state = 788
+            self.match(CParser.T__19)
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Macro_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        # @param  i=None Type: int
+        def declaration(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.DeclarationContext)
+            else:
+                return self.getTypedRuleContext(CParser.DeclarationContext,i)
+
+
+        def statement_list(self):
+            return self.getTypedRuleContext(CParser.Statement_listContext,0)
+
+
+        def expression(self):
+            return self.getTypedRuleContext(CParser.ExpressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_macro_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterMacro_statement" ):
+                listener.enterMacro_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitMacro_statement" ):
+                listener.exitMacro_statement(self)
+
+
+
+
+    def macro_statement(self):
+
+        localctx = CParser.Macro_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 126, self.RULE_macro_statement)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 790
+            self.match(CParser.IDENTIFIER)
+            self.state = 791
+            self.match(CParser.T__37)
+            self.state = 795
+            self._errHandler.sync(self)
+            _alt = self._interp.adaptivePredict(self._input,97,self._ctx)
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt==1:
+                    self.state = 792
+                    self.declaration()
+                self.state = 797
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,97,self._ctx)
+
+            self.state = 799
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,98,self._ctx)
+            if la_ == 1:
+                self.state = 798
+                self.statement_list()
+
+
+            self.state = 802
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            if ((((_la - 38)) & ~0x3f) == 0 and ((1 << (_la - 38)) & ((1 << (CParser.T__37 - 38)) | (1 << (CParser.T__41 - 38)) | (1 << (CParser.T__43 - 38)) | (1 << (CParser.T__44 - 38)) | (1 << (CParser.T__47 - 38)) | (1 << (CParser.T__48 - 38)) | (1 << (CParser.T__49 - 38)) | (1 << (CParser.T__52 - 38)) | (1 << (CParser.T__53 - 38)) | (1 << (CParser.T__54 - 38)) | (1 << (CParser.IDENTIFIER - 38)) | (1 << (CParser.CHARACTER_LITERAL - 38)) | (1 << (CParser.STRING_LITERAL - 38)) | (1 << (CParser.HEX_LITERAL - 38)) | (1 << (CParser.DECIMAL_LITERAL - 38)) | (1 << (CParser.OCTAL_LITERAL - 38)) | (1 << (CParser.FLOATING_POINT_LITERAL - 38)))) != 0):
+                self.state = 801
+                self.expression()
+
+
+            self.state = 804
+            self.match(CParser.T__38)
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Labeled_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def statement(self):
+            return self.getTypedRuleContext(CParser.StatementContext,0)
+
+
+        def constant_expression(self):
+            return self.getTypedRuleContext(CParser.Constant_expressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_labeled_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterLabeled_statement" ):
+                listener.enterLabeled_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitLabeled_statement" ):
+                listener.exitLabeled_statement(self)
+
+
+
+
+    def labeled_statement(self):
+
+        localctx = CParser.Labeled_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 128, self.RULE_labeled_statement)
+        try:
+            self.state = 817
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.IDENTIFIER]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 806
+                self.match(CParser.IDENTIFIER)
+                self.state = 807
+                self.match(CParser.T__22)
+                self.state = 808
+                self.statement()
+                pass
+            elif token in [CParser.T__81]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 809
+                self.match(CParser.T__81)
+                self.state = 810
+                self.constant_expression()
+                self.state = 811
+                self.match(CParser.T__22)
+                self.state = 812
+                self.statement()
+                pass
+            elif token in [CParser.T__82]:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 814
+                self.match(CParser.T__82)
+                self.state = 815
+                self.match(CParser.T__22)
+                self.state = 816
+                self.statement()
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Compound_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def declaration(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.DeclarationContext)
+            else:
+                return self.getTypedRuleContext(CParser.DeclarationContext,i)
+
+
+        def statement_list(self):
+            return self.getTypedRuleContext(CParser.Statement_listContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_compound_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterCompound_statement" ):
+                listener.enterCompound_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitCompound_statement" ):
+                listener.exitCompound_statement(self)
+
+
+
+
+    def compound_statement(self):
+
+        localctx = CParser.Compound_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 130, self.RULE_compound_statement)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 819
+            self.match(CParser.T__0)
+            self.state = 823
+            self._errHandler.sync(self)
+            _alt = self._interp.adaptivePredict(self._input,101,self._ctx)
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt==1:
+                    self.state = 820
+                    self.declaration()
+                self.state = 825
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,101,self._ctx)
+
+            self.state = 827
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            if (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__0) | (1 << CParser.T__1) | (1 << CParser.T__2) | (1 << CParser.T__5) | (1 << CParser.T__6) | (1 << CParser.T__7) | (1 << CParser.T__8) | (1 << CParser.T__9) | (1 << CParser.T__10) | (1 << CParser.T__11) | (1 << CParser.T__12) | (1 << CParser.T__13) | (1 << CParser.T__14) | (1 << CParser.T__15) | (1 << CParser.T__16) | (1 << CParser.T__17) | (1 << CParser.T__18) | (1 << CParser.T__20) | (1 << CParser.T__21) | (1 << CParser.T__23) | (1 << CParser.T__24) | (1 << CParser.T__25) | (1 << CParser.T__26) | (1 << CParser.T__27) | (1 << CParser.T__28) | (1 << CParser.T__29) | (1 << CParser.T__30) | (1 << CParser.T__31) | (1 << CParser.T__32) | (1 << CParser.T__33) | (1 << CParser.T__34) | (1 << CParser.T__35) | (1 << CParser.T__36) | (1 << CParser.T__37) | (1 << CParser.T__41) | (1 << CParser.T__43) | (1 << CParser.T__44) | (1 << CParser.T__47) | (1 << CParser.T__48) | (1 << CParser.T__49) | (1 << CParser.T__52) | (1 << CParser.T__53) | (1 << CParser.T__54))) != 0) or ((((_la - 79)) & ~0x3f) == 0 and ((1 << (_la - 79)) & ((1 << (CParser.T__78 - 79)) | (1 << (CParser.T__79 - 79)) | (1 << (CParser.T__80 - 79)) | (1 << (CParser.T__81 - 79)) | (1 << (CParser.T__82 - 79)) | (1 << (CParser.T__83 - 79)) | (1 << (CParser.T__85 - 79)) | (1 << (CParser.T__86 - 79)) | (1 << (CParser.T__87 - 79)) | (1 << (CParser.T__88 - 79)) | (1 << (CParser.T__89 - 79)) | (1 << (CParser.T__90 - 79)) | (1 << (CParser.T__91 - 79)) | (1 << (CParser.IDENTIFIER - 79)) | (1 << (CParser.CHARACTER_LITERAL - 79)) | (1 << (CParser.STRING_LITERAL - 79)) | (1 << (CParser.HEX_LITERAL - 79)) | (1 << (CParser.DECIMAL_LITERAL - 79)) | (1 << (CParser.OCTAL_LITERAL - 79)) | (1 << (CParser.FLOATING_POINT_LITERAL - 79)))) != 0):
+                self.state = 826
+                self.statement_list()
+
+
+            self.state = 829
+            self.match(CParser.T__19)
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Statement_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def statement(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.StatementContext)
+            else:
+                return self.getTypedRuleContext(CParser.StatementContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_statement_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStatement_list" ):
+                listener.enterStatement_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStatement_list" ):
+                listener.exitStatement_list(self)
+
+
+
+
+    def statement_list(self):
+
+        localctx = CParser.Statement_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 132, self.RULE_statement_list)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 832
+            self._errHandler.sync(self)
+            _alt = 1
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt == 1:
+                    self.state = 831
+                    self.statement()
+
+                else:
+                    raise NoViableAltException(self)
+                self.state = 834
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,103,self._ctx)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Expression_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def expression(self):
+            return self.getTypedRuleContext(CParser.ExpressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_expression_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterExpression_statement" ):
+                listener.enterExpression_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitExpression_statement" ):
+                listener.exitExpression_statement(self)
+
+
+
+
+    def expression_statement(self):
+
+        localctx = CParser.Expression_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 134, self.RULE_expression_statement)
+        try:
+            self.state = 840
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.T__1]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 836
+                self.match(CParser.T__1)
+                pass
+            elif token in [CParser.T__37, CParser.T__41, CParser.T__43, CParser.T__44, CParser.T__47, CParser.T__48, CParser.T__49, CParser.T__52, CParser.T__53, CParser.T__54, CParser.IDENTIFIER, CParser.CHARACTER_LITERAL, CParser.STRING_LITERAL, CParser.HEX_LITERAL, CParser.DECIMAL_LITERAL, CParser.OCTAL_LITERAL, CParser.FLOATING_POINT_LITERAL]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 837
+                self.expression()
+                self.state = 838
+                self.match(CParser.T__1)
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Selection_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+            self.e = None # ExpressionContext
+
+        # @param  i=None Type: int
+        def statement(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.StatementContext)
+            else:
+                return self.getTypedRuleContext(CParser.StatementContext,i)
+
+
+        def expression(self):
+            return self.getTypedRuleContext(CParser.ExpressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_selection_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterSelection_statement" ):
+                listener.enterSelection_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitSelection_statement" ):
+                listener.exitSelection_statement(self)
+
+
+
+
+    def selection_statement(self):
+
+        localctx = CParser.Selection_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 136, self.RULE_selection_statement)
+        try:
+            self.state = 858
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.T__83]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 842
+                self.match(CParser.T__83)
+                self.state = 843
+                self.match(CParser.T__37)
+                self.state = 844
+                localctx.e = self.expression()
+                self.state = 845
+                self.match(CParser.T__38)
+                self.StorePredicateExpression((None if localctx.e is None else localctx.e.start).line, (None if localctx.e is None else localctx.e.start).column, (None if localctx.e is None else localctx.e.stop).line, (None if localctx.e is None else localctx.e.stop).column, (None if localctx.e is None else self._input.getText((localctx.e.start,localctx.e.stop))))
+                self.state = 847
+                self.statement()
+                self.state = 850
+                self._errHandler.sync(self)
+                la_ = self._interp.adaptivePredict(self._input,105,self._ctx)
+                if la_ == 1:
+                    self.state = 848
+                    self.match(CParser.T__84)
+                    self.state = 849
+                    self.statement()
+
+
+                pass
+            elif token in [CParser.T__85]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 852
+                self.match(CParser.T__85)
+                self.state = 853
+                self.match(CParser.T__37)
+                self.state = 854
+                self.expression()
+                self.state = 855
+                self.match(CParser.T__38)
+                self.state = 856
+                self.statement()
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Iteration_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+            self.e = None # ExpressionContext
+
+        def statement(self):
+            return self.getTypedRuleContext(CParser.StatementContext,0)
+
+
+        def expression(self):
+            return self.getTypedRuleContext(CParser.ExpressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_iteration_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterIteration_statement" ):
+                listener.enterIteration_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitIteration_statement" ):
+                listener.exitIteration_statement(self)
+
+
+
+
+    def iteration_statement(self):
+
+        localctx = CParser.Iteration_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 138, self.RULE_iteration_statement)
+        try:
+            self.state = 876
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.T__86]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 860
+                self.match(CParser.T__86)
+                self.state = 861
+                self.match(CParser.T__37)
+                self.state = 862
+                localctx.e = self.expression()
+                self.state = 863
+                self.match(CParser.T__38)
+                self.state = 864
+                self.statement()
+                self.StorePredicateExpression((None if localctx.e is None else localctx.e.start).line, (None if localctx.e is None else localctx.e.start).column, (None if localctx.e is None else localctx.e.stop).line, (None if localctx.e is None else localctx.e.stop).column, (None if localctx.e is None else self._input.getText((localctx.e.start,localctx.e.stop))))
+                pass
+            elif token in [CParser.T__87]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 867
+                self.match(CParser.T__87)
+                self.state = 868
+                self.statement()
+                self.state = 869
+                self.match(CParser.T__86)
+                self.state = 870
+                self.match(CParser.T__37)
+                self.state = 871
+                localctx.e = self.expression()
+                self.state = 872
+                self.match(CParser.T__38)
+                self.state = 873
+                self.match(CParser.T__1)
+                self.StorePredicateExpression((None if localctx.e is None else localctx.e.start).line, (None if localctx.e is None else localctx.e.start).column, (None if localctx.e is None else localctx.e.stop).line, (None if localctx.e is None else localctx.e.stop).column, (None if localctx.e is None else self._input.getText((localctx.e.start,localctx.e.stop))))
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Jump_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def expression(self):
+            return self.getTypedRuleContext(CParser.ExpressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_jump_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterJump_statement" ):
+                listener.enterJump_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitJump_statement" ):
+                listener.exitJump_statement(self)
+
+
+
+
+    def jump_statement(self):
+
+        localctx = CParser.Jump_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 140, self.RULE_jump_statement)
+        try:
+            self.state = 891
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,108,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 878
+                self.match(CParser.T__88)
+                self.state = 879
+                self.match(CParser.IDENTIFIER)
+                self.state = 880
+                self.match(CParser.T__1)
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 881
+                self.match(CParser.T__89)
+                self.state = 882
+                self.match(CParser.T__1)
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 883
+                self.match(CParser.T__90)
+                self.state = 884
+                self.match(CParser.T__1)
+                pass
+
+            elif la_ == 4:
+                self.enterOuterAlt(localctx, 4)
+                self.state = 885
+                self.match(CParser.T__91)
+                self.state = 886
+                self.match(CParser.T__1)
+                pass
+
+            elif la_ == 5:
+                self.enterOuterAlt(localctx, 5)
+                self.state = 887
+                self.match(CParser.T__91)
+                self.state = 888
+                self.expression()
+                self.state = 889
+                self.match(CParser.T__1)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+
+
+
+
diff --git a/BaseTools/Source/Python/Ecc/CParser4/__init__.py b/BaseTools/Source/Python/Ecc/CParser4/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/BaseTools/Source/Python/Ecc/Check.py b/BaseTools/Source/Python/Ecc/Check.py
index dfcc0302bc..a6c62359d0 100644
--- a/BaseTools/Source/Python/Ecc/Check.py
+++ b/BaseTools/Source/Python/Ecc/Check.py
@@ -221,11 +221,11 @@ class Check(object):
                 if Record[2].upper() not in EccGlobalData.gConfig.BinaryExtList:
                     op = open(Record[1], 'rb').readlines()
                     IndexOfLine = 0
                     for Line in op:
                         IndexOfLine += 1
-                        if not Line.endswith('\r\n'):
+                        if not bytes.decode(Line).endswith('\r\n'):
                             OtherMsg = "File %s has invalid line ending at line %s" % (Record[1], IndexOfLine)
                             EccGlobalData.gDb.TblReport.Insert(ERROR_GENERAL_CHECK_INVALID_LINE_ENDING, OtherMsg=OtherMsg, BelongsToTable='File', BelongsToItem=Record[0])
 
     # Check if there is no trailing white space in one line.
     def GeneralCheckTrailingWhiteSpaceLine(self):
@@ -233,11 +233,11 @@ class Check(object):
             EdkLogger.quiet("Checking trailing white space line in file ...")
             SqlCommand = """select ID, FullPath, ExtName from File where ExtName in ('.dec', '.inf', '.dsc', 'c', 'h')"""
             RecordSet = EccGlobalData.gDb.TblFile.Exec(SqlCommand)
             for Record in RecordSet:
                 if Record[2].upper() not in EccGlobalData.gConfig.BinaryExtList:
-                    op = open(Record[1], 'rb').readlines()
+                    op = open(Record[1], 'r').readlines()
                     IndexOfLine = 0
                     for Line in op:
                         IndexOfLine += 1
                         if Line.replace('\r', '').replace('\n', '').endswith(' '):
                             OtherMsg = "File %s has trailing white spaces at line %s" % (Record[1], IndexOfLine)
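[Editor's sketch] The `bytes.decode()` change in the hunk above is the crux of this fix: under Python 3 a file opened with `'rb'` yields `bytes`, not `str`, so each line must be decoded before the `'\r\n'` comparison. A minimal runnable sketch (the sample file contents are illustrative only):

```python
import os
import tempfile

# Write a small file with mixed line endings (illustrative data only).
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"good line\r\nbad line\n")

bad = []
with open(path, "rb") as f:
    for number, line in enumerate(f, start=1):
        # An 'rb' read yields bytes under Python 3; bytes.decode(line)
        # (equivalently line.decode()) is needed before comparing to str.
        if not bytes.decode(line).endswith("\r\n"):
            bad.append(number)

os.remove(path)
print(bad)  # -> [2]
```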
diff --git a/BaseTools/Source/Python/Ecc/CodeFragmentCollector.py b/BaseTools/Source/Python/Ecc/CodeFragmentCollector.py
index d12232cc6f..21fed59cad 100644
--- a/BaseTools/Source/Python/Ecc/CodeFragmentCollector.py
+++ b/BaseTools/Source/Python/Ecc/CodeFragmentCollector.py
@@ -19,14 +19,20 @@
 from __future__ import print_function
 from __future__ import absolute_import
 import re
 import Common.LongFilePathOs as os
 import sys
+if sys.version_info.major == 3:
+    import antlr4 as antlr
+    from Ecc.CParser4.CLexer import CLexer
+    from Ecc.CParser4.CParser import CParser
+else:
+    import antlr3 as antlr
+    antlr.InputStream = antlr.StringStream
+    from Ecc.CParser3.CLexer import CLexer
+    from Ecc.CParser3.CParser import CParser
 
-import antlr3
-from Ecc.CLexer import CLexer
-from Ecc.CParser import CParser
 
 from Ecc import FileProfile
 from Ecc.CodeFragment import Comment
 from Ecc.CodeFragment import PP_Directive
 from Ecc.ParserWarning import Warning
@@ -501,26 +507,26 @@ class CodeFragmentCollector:
         # restore from ListOfList to ListOfString
         self.Profile.FileLinesList = ["".join(list) for list in self.Profile.FileLinesList]
         FileStringContents = ''
         for fileLine in self.Profile.FileLinesList:
             FileStringContents += fileLine
-        cStream = antlr3.StringStream(FileStringContents)
+        cStream = antlr.InputStream(FileStringContents)
         lexer = CLexer(cStream)
-        tStream = antlr3.CommonTokenStream(lexer)
+        tStream = antlr.CommonTokenStream(lexer)
         parser = CParser(tStream)
         parser.translation_unit()
 
     def ParseFileWithClearedPPDirective(self):
         self.PreprocessFileWithClear()
         # restore from ListOfList to ListOfString
         self.Profile.FileLinesList = ["".join(list) for list in self.Profile.FileLinesList]
         FileStringContents = ''
         for fileLine in self.Profile.FileLinesList:
             FileStringContents += fileLine
-        cStream = antlr3.StringStream(FileStringContents)
+        cStream = antlr.InputStream(FileStringContents)
         lexer = CLexer(cStream)
-        tStream = antlr3.CommonTokenStream(lexer)
+        tStream = antlr.CommonTokenStream(lexer)
         parser = CParser(tStream)
         parser.translation_unit()
 
     def CleanFileProfileBuffer(self):
         FileProfile.CommentList = []
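[Editor's sketch] Viewed in isolation, the import dispatch in the hunk above reduces to the pattern below. The stub runtime classes are assumptions standing in for the real antlr3/antlr4 packages; the point is the alias, which gives the antlr3 stream class the antlr4 name so call sites such as `antlr.InputStream(FileStringContents)` work under either runtime:

```python
import sys

# Hypothetical stand-ins for the two ANTLR runtimes; the real code
# imports antlr4 (Python 3) or antlr3 (Python 2) instead.
class _Antlr4Runtime:
    class InputStream:
        def __init__(self, data):
            self.data = data

class _Antlr3Runtime:
    class StringStream:
        def __init__(self, data):
            self.data = data

if sys.version_info.major == 3:
    antlr = _Antlr4Runtime
else:
    antlr = _Antlr3Runtime
    # Give the antlr3 class the antlr4 name, so the rest of the
    # module needs only one spelling.
    antlr.InputStream = antlr.StringStream

stream = antlr.InputStream("int main(void) { return 0; }")
print(stream.data[:3])  # -> int
```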
diff --git a/BaseTools/Source/Python/Ecc/Configuration.py b/BaseTools/Source/Python/Ecc/Configuration.py
index 8f6886169c..c19a3990c7 100644
--- a/BaseTools/Source/Python/Ecc/Configuration.py
+++ b/BaseTools/Source/Python/Ecc/Configuration.py
@@ -32,11 +32,10 @@ _ConfigFileToInternalTranslation = {
     "CFunctionLayoutCheckAll":"CFunctionLayoutCheckAll",
     "CFunctionLayoutCheckDataDeclaration":"CFunctionLayoutCheckDataDeclaration",
     "CFunctionLayoutCheckFunctionBody":"CFunctionLayoutCheckFunctionBody",
     "CFunctionLayoutCheckFunctionName":"CFunctionLayoutCheckFunctionName",
     "CFunctionLayoutCheckFunctionPrototype":"CFunctionLayoutCheckFunctionPrototype",
-    "CFunctionLayoutCheckNoDeprecated":"CFunctionLayoutCheckNoDeprecated",
     "CFunctionLayoutCheckNoInitOfVariable":"CFunctionLayoutCheckNoInitOfVariable",
     "CFunctionLayoutCheckNoStatic":"CFunctionLayoutCheckNoStatic",
     "CFunctionLayoutCheckOptionalFunctionalModifier":"CFunctionLayoutCheckOptionalFunctionalModifier",
     "CFunctionLayoutCheckReturnType":"CFunctionLayoutCheckReturnType",
     "CheckAll":"CheckAll",
@@ -241,12 +240,10 @@ class Configuration(object):
         self.CFunctionLayoutCheckDataDeclaration = 1
         # Check whether no initialization of a variable as part of its declaration
         self.CFunctionLayoutCheckNoInitOfVariable = 1
         # Check whether no use of STATIC for functions
         self.CFunctionLayoutCheckNoStatic = 1
-        # Check whether no use of Deprecated functions
-        self.CFunctionLayoutCheckNoDeprecated = 1
 
         ## Include Files Checking
         self.IncludeFileCheckAll = 0
 
         #Check whether having include files with same name
diff --git a/BaseTools/Source/Python/Ecc/EccMain.py b/BaseTools/Source/Python/Ecc/EccMain.py
index 0f97447751..edb6c6d7d4 100644
--- a/BaseTools/Source/Python/Ecc/EccMain.py
+++ b/BaseTools/Source/Python/Ecc/EccMain.py
@@ -176,11 +176,11 @@ class Ecc(object):
                 ScanFolders.append(os.path.join(EccGlobalData.gTarget, specificDir))
         EdkLogger.quiet("Building database for meta data files ...")
         Op = open(EccGlobalData.gConfig.MetaDataFileCheckPathOfGenerateFileList, 'w+')
         #SkipDirs = Read from config file
         SkipDirs = EccGlobalData.gConfig.SkipDirList
-        SkipDirString = string.join(SkipDirs, '|')
+        SkipDirString = '|'.join(SkipDirs)
 #         p = re.compile(r'.*[\\/](?:%s)[\\/]?.*' % SkipDirString)
         p = re.compile(r'.*[\\/](?:%s^\S)[\\/]?.*' % SkipDirString)
         for scanFolder in ScanFolders:
             for Root, Dirs, Files in os.walk(scanFolder):
                 if p.match(Root.upper()):
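[Editor's sketch] `string.join()` no longer exists in Python 3, so the patch above switches to the `str.join` method form. A self-contained sketch with a hypothetical skip list, using the plain alternation pattern (as in the commented-out line) for clarity:

```python
import re

# Hypothetical skip-directory list; the real one comes from the
# Ecc configuration (SkipDirList).
skip_dirs = ["Build", "Conf", ".svn"]
skip_pattern = "|".join(skip_dirs)  # replaces string.join(skip_dirs, '|')
p = re.compile(r".*[\\/](?:%s)[\\/]?.*" % skip_pattern)

print(bool(p.match("/work/edk2/Build/MdePkg")))    # -> True
print(bool(p.match("/work/edk2/MdePkg/Include")))  # -> False
```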
diff --git a/BaseTools/Source/Python/Ecc/EccToolError.py b/BaseTools/Source/Python/Ecc/EccToolError.py
index 7663e90d7e..ae0a31af8a 100644
--- a/BaseTools/Source/Python/Ecc/EccToolError.py
+++ b/BaseTools/Source/Python/Ecc/EccToolError.py
@@ -45,11 +45,10 @@ ERROR_C_FUNCTION_LAYOUT_CHECK_FUNCTION_BODY = 5005
 ERROR_C_FUNCTION_LAYOUT_CHECK_DATA_DECLARATION = 5006
 ERROR_C_FUNCTION_LAYOUT_CHECK_NO_INIT_OF_VARIABLE = 5007
 ERROR_C_FUNCTION_LAYOUT_CHECK_NO_STATIC = 5008
 ERROR_C_FUNCTION_LAYOUT_CHECK_FUNCTION_PROTO_TYPE_2 = 5009
 ERROR_C_FUNCTION_LAYOUT_CHECK_FUNCTION_PROTO_TYPE_3 = 5010
-ERROR_C_FUNCTION_LAYOUT_CHECK_NO_DEPRECATE = 5011
 
 ERROR_INCLUDE_FILE_CHECK_ALL = 6000
 ERROR_INCLUDE_FILE_CHECK_IFNDEF_STATEMENT_1 = 6001
 ERROR_INCLUDE_FILE_CHECK_IFNDEF_STATEMENT_2 = 6002
 ERROR_INCLUDE_FILE_CHECK_IFNDEF_STATEMENT_3 = 6003
@@ -145,11 +144,10 @@ gEccErrorMessage = {
     ERROR_C_FUNCTION_LAYOUT_CHECK_FUNCTION_PROTO_TYPE_3 : "Function prototypes in include files have different parameter modifier with function definitions",
     ERROR_C_FUNCTION_LAYOUT_CHECK_FUNCTION_BODY : "The body of a function should be contained by open and close braces that must be in the first column",
     ERROR_C_FUNCTION_LAYOUT_CHECK_DATA_DECLARATION : "The data declarations should be the first code in a module",
     ERROR_C_FUNCTION_LAYOUT_CHECK_NO_INIT_OF_VARIABLE : "There should be no initialization of a variable as part of its declaration",
     ERROR_C_FUNCTION_LAYOUT_CHECK_NO_STATIC : "There should be no use of STATIC for functions",
-    ERROR_C_FUNCTION_LAYOUT_CHECK_NO_DEPRECATE : "The deprecated function should NOT be used",
 
     ERROR_INCLUDE_FILE_CHECK_ALL : "",
     ERROR_INCLUDE_FILE_CHECK_IFNDEF_STATEMENT_1 : "All include file contents should be guarded by a #ifndef statement.",
     ERROR_INCLUDE_FILE_CHECK_IFNDEF_STATEMENT_2 : "The #ifndef must be the first line of code following the file header comment",
     ERROR_INCLUDE_FILE_CHECK_IFNDEF_STATEMENT_3 : "The #endif must appear on the last line in the file",
@@ -167,11 +165,11 @@ gEccErrorMessage = {
     ERROR_DECLARATION_DATA_TYPE_CHECK_NESTED_STRUCTURE : "Complex types should be typedef-ed",
 
     ERROR_NAMING_CONVENTION_CHECK_ALL : "",
     ERROR_NAMING_CONVENTION_CHECK_DEFINE_STATEMENT : "Only capital letters are allowed to be used for #define declarations",
     ERROR_NAMING_CONVENTION_CHECK_TYPEDEF_STATEMENT : "Only capital letters are allowed to be used for typedef declarations",
-    ERROR_NAMING_CONVENTION_CHECK_IFNDEF_STATEMENT : "The #ifndef at the start of an include file should use a postfix underscore characters, '_'",
+    ERROR_NAMING_CONVENTION_CHECK_IFNDEF_STATEMENT : "The #ifndef at the start of an include file should use both prefix and postfix underscore characters, '_'",
     ERROR_NAMING_CONVENTION_CHECK_PATH_NAME : """Path name does not follow the rules: 1. First character should be upper case 2. Must contain lower case characters 3. No white space characters""",
     ERROR_NAMING_CONVENTION_CHECK_VARIABLE_NAME : """Variable name does not follow the rules: 1. First character should be upper case 2. Must contain lower case characters 3. No white space characters 4. Global variable name must start with a 'g'""",
     ERROR_NAMING_CONVENTION_CHECK_FUNCTION_NAME : """Function name does not follow the rules: 1. First character should be upper case 2. Must contain lower case characters 3. No white space characters""",
     ERROR_NAMING_CONVENTION_CHECK_SINGLE_CHARACTER_VARIABLE : "There should be no use of short (single character) variable names",
 
diff --git a/BaseTools/Source/Python/Ecc/FileProfile.py b/BaseTools/Source/Python/Ecc/FileProfile.py
index 4434981628..8084cbcb6c 100644
--- a/BaseTools/Source/Python/Ecc/FileProfile.py
+++ b/BaseTools/Source/Python/Ecc/FileProfile.py
@@ -45,11 +45,11 @@ class FileProfile :
     #
     def __init__(self, FileName):
         self.FileLinesList = []
         self.FileLinesListFromFile = []
         try:
-            fsock = open(FileName, "rb", 0)
+            fsock = open(FileName, "r")
             try:
                 self.FileLinesListFromFile = fsock.readlines()
             finally:
                 fsock.close()
 
diff --git a/BaseTools/Source/Python/Ecc/MetaDataParser.py b/BaseTools/Source/Python/Ecc/MetaDataParser.py
index d0a94153d4..4594716886 100644
--- a/BaseTools/Source/Python/Ecc/MetaDataParser.py
+++ b/BaseTools/Source/Python/Ecc/MetaDataParser.py
@@ -111,11 +111,11 @@ def ParseHeaderCommentSection(CommentList, FileName = None):
     #
     # first find the last copyright line
     #
     Last = 0
     HeaderCommentStage = HEADER_COMMENT_NOT_STARTED
-    for Index in xrange(len(CommentList)-1, 0, -1):
+    for Index in range(len(CommentList) - 1, 0, -1):
         Line = CommentList[Index][0]
         if _IsCopyrightLine(Line):
             Last = Index
             break
 
diff --git a/BaseTools/Source/Python/Ecc/c.py b/BaseTools/Source/Python/Ecc/c.py
index b8d6adde16..50505c10fd 100644
--- a/BaseTools/Source/Python/Ecc/c.py
+++ b/BaseTools/Source/Python/Ecc/c.py
@@ -33,11 +33,11 @@ ComplexTypeDict = {}
 SUDict = {}
 IgnoredKeywordList = ['EFI_ERROR']
 
 def GetIgnoredDirListPattern():
     skipList = list(EccGlobalData.gConfig.SkipDirList) + ['.svn']
-    DirString = string.join(skipList, '|')
+    DirString = '|'.join(skipList)
     p = re.compile(r'.*[\\/](?:%s)[\\/]?.*' % DirString)
     return p
 
 def GetFuncDeclPattern():
     p = re.compile(r'(?:EFIAPI|EFI_BOOT_SERVICE|EFI_RUNTIME_SERVICE)?\s*[_\w]+\s*\(.*\)$', re.DOTALL)
@@ -961,11 +961,11 @@ def StripComments(Str):
         # set comments to spaces
         elif InComment:
             ListFromStr[Index] = ' '
             Index += 1
         # check for // comment
-        elif ListFromStr[Index] == '/' and ListFromStr[Index + 1] == '/' and ListFromStr[Index + 2] != '\n':
+        elif ListFromStr[Index] == '/' and ListFromStr[Index + 1] == '/':
             InComment = True
             DoubleSlashComment = True
 
         # check for /* comment start
         elif ListFromStr[Index] == '/' and ListFromStr[Index + 1] == '*':
@@ -1295,11 +1295,11 @@ def CheckFuncLayoutReturnType(FullFileName):
         if EccGlobalData.gException.IsException(ERROR_C_FUNCTION_LAYOUT_CHECK_RETURN_TYPE, FuncName):
             continue
         Result0 = Result[0]
         if Result0.upper().startswith('STATIC'):
             Result0 = Result0[6:].strip()
-        Index = Result0.find(ReturnType)
+        Index = Result0.find(TypeStart)
         if Index != 0 or Result[3] != 0:
             PrintErrorMsg(ERROR_C_FUNCTION_LAYOUT_CHECK_RETURN_TYPE, '[%s] Return Type should appear at the start of line' % FuncName, 'Function', Result[1])
 
 def CheckFuncLayoutModifier(FullFileName):
     ErrorMsgList = []
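[Editor's sketch] The condition change in StripComments above means `//` now opens a comment even when a newline follows immediately (an empty `//` comment, previously missed). A minimal sketch of that behavior, not the full EDK2 function:

```python
# Blank out '//' line comments, keeping newlines, in the spirit of
# the patched StripComments check: no look-ahead past the second '/'.
def strip_line_comments(text):
    chars = list(text)
    i, in_comment = 0, False
    while i < len(chars) - 1:
        if in_comment:
            if chars[i] == "\n":
                in_comment = False
            else:
                chars[i] = " "
        elif chars[i] == "/" and chars[i + 1] == "/":
            in_comment = True
            chars[i] = chars[i + 1] = " "
            i += 1
        i += 1
    return "".join(chars)

out = strip_line_comments("x = 1; //\ny = 2;")
print(out)
```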
diff --git a/BaseTools/Source/Python/Ecc/config.ini b/BaseTools/Source/Python/Ecc/config.ini
index 663ae293bd..00c98c6232 100644
--- a/BaseTools/Source/Python/Ecc/config.ini
+++ b/BaseTools/Source/Python/Ecc/config.ini
@@ -132,12 +132,10 @@ CFunctionLayoutCheckFunctionBody = 1
 CFunctionLayoutCheckDataDeclaration = 1
 # Check whether no initialization of a variable as part of its declaration
 CFunctionLayoutCheckNoInitOfVariable = 1
 # Check whether no use of STATIC for functions
 CFunctionLayoutCheckNoStatic = 1
-# Check whether no use of Deprecated functions
-CFunctionLayoutCheckNoDeprecated  = 1
 
 #
 # Include Files Checking
 #
 IncludeFileCheckAll = 0
-- 
2.20.1.windows.1




* [Patch v2 33/33] BaseTools: Eot tool Python3 adaption
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (31 preceding siblings ...)
  2019-01-29  2:06 ` [Patch 32/33] BaseTools: ECC tool Python3 adaption Feng, Bob C
@ 2019-01-29  2:06 ` Feng, Bob C
  2019-01-29 13:07 ` [Patch v2 00/33] BaseTools python3 migration patch set Laszlo Ersek
  33 siblings, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-29  2:06 UTC (permalink / raw)
  To: edk2-devel; +Cc: Bob Feng, Liming Gao

v2:
The Python files under CParser4 are generated by antlr4 for
Python 3 usage. They contain Python 3-specific syntax, for
example type annotations on function arguments, which is not
compatible with Python 2. This patch removes that syntax.

Eot tool Python3 adaption.
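[Editor's sketch] The syntax removal described above can be illustrated with a hypothetical generated constructor: antlr4 emits Python 3-only parameter annotations, and the patch rewrites them as comments so the file also parses under Python 2:

```python
# Python 3-only form emitted by antlr4 (a SyntaxError under Python 2):
#     def __init__(self, parser, parent:ParserRuleContext=None,
#                  invokingState:int=-1):
#
# Patched, version-neutral form: the types move into comments.

# @param  parent=None Type: ParserRuleContext
# @param  invokingState=-1 Type: int
def make_context(parser, parent=None, invokingState=-1):
    # Hypothetical stand-in for a generated context-class constructor.
    return {"parser": parser, "parent": parent, "state": invokingState}

ctx = make_context("CParser", invokingState=132)
print(ctx["state"])  # -> 132
```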

Contributed-under: TianoCore Contribution Agreement 1.1
Signed-off-by: Bob Feng <bob.c.feng@intel.com>
Cc: Liming Gao <liming.gao@intel.com>
---
 BaseTools/Source/Python/Eot/{ => CParser3}/CLexer.py  |    0
 BaseTools/Source/Python/Eot/{ => CParser3}/CParser.py |    0
 BaseTools/Source/Python/Eot/CParser3/__init__.py      |    0
 BaseTools/Source/Python/Eot/CParser4/CLexer.py        |  633 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 BaseTools/Source/Python/Eot/CParser4/CListener.py     |  814 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 BaseTools/Source/Python/Eot/CParser4/CParser.py       | 6279 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 BaseTools/Source/Python/Eot/CParser4/__init__.py      |    0
 BaseTools/Source/Python/Eot/CodeFragmentCollector.py  |   22 ++--
 8 files changed, 7740 insertions(+), 8 deletions(-)

diff --git a/BaseTools/Source/Python/Eot/CLexer.py b/BaseTools/Source/Python/Eot/CParser3/CLexer.py
similarity index 100%
rename from BaseTools/Source/Python/Eot/CLexer.py
rename to BaseTools/Source/Python/Eot/CParser3/CLexer.py
diff --git a/BaseTools/Source/Python/Eot/CParser.py b/BaseTools/Source/Python/Eot/CParser3/CParser.py
similarity index 100%
rename from BaseTools/Source/Python/Eot/CParser.py
rename to BaseTools/Source/Python/Eot/CParser3/CParser.py
diff --git a/BaseTools/Source/Python/Eot/CParser3/__init__.py b/BaseTools/Source/Python/Eot/CParser3/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/BaseTools/Source/Python/Eot/CParser4/CLexer.py b/BaseTools/Source/Python/Eot/CParser4/CLexer.py
new file mode 100644
index 0000000000..5bc32cd7cb
--- /dev/null
+++ b/BaseTools/Source/Python/Eot/CParser4/CLexer.py
@@ -0,0 +1,633 @@
+# Generated from C.g4 by ANTLR 4.7.1
+from antlr4 import *
+from io import StringIO
+from typing.io import TextIO
+import sys
+
+
+## @file
+# The file defines the parser for C source files.
+#
+# THIS FILE IS AUTO-GENERATED. PLEASE DO NOT MODIFY THIS FILE.
+# This file is generated by running:
+# java org.antlr.Tool C.g
+#
+# Copyright (c) 2009 - 2010, Intel Corporation  All rights reserved.
+#
+# This program and the accompanying materials are licensed and made available
+# under the terms and conditions of the BSD License which accompanies this
+# distribution.  The full text of the license may be found at:
+#   http://opensource.org/licenses/bsd-license.php
+#
+# THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+# WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+#
+##
+
+import Ecc.CodeFragment as CodeFragment
+import Ecc.FileProfile as FileProfile
+
+
+def serializedATN():
+    with StringIO() as buf:
+        buf.write("\3\u608b\ua72a\u8133\ub9ed\u417c\u3be7\u7786\u5964\2k")
+        buf.write("\u0383\b\1\4\2\t\2\4\3\t\3\4\4\t\4\4\5\t\5\4\6\t\6\4\7")
+        buf.write("\t\7\4\b\t\b\4\t\t\t\4\n\t\n\4\13\t\13\4\f\t\f\4\r\t\r")
+        buf.write("\4\16\t\16\4\17\t\17\4\20\t\20\4\21\t\21\4\22\t\22\4\23")
+        buf.write("\t\23\4\24\t\24\4\25\t\25\4\26\t\26\4\27\t\27\4\30\t\30")
+        buf.write("\4\31\t\31\4\32\t\32\4\33\t\33\4\34\t\34\4\35\t\35\4\36")
+        buf.write("\t\36\4\37\t\37\4 \t \4!\t!\4\"\t\"\4#\t#\4$\t$\4%\t%")
+        buf.write("\4&\t&\4\'\t\'\4(\t(\4)\t)\4*\t*\4+\t+\4,\t,\4-\t-\4.")
+        buf.write("\t.\4/\t/\4\60\t\60\4\61\t\61\4\62\t\62\4\63\t\63\4\64")
+        buf.write("\t\64\4\65\t\65\4\66\t\66\4\67\t\67\48\t8\49\t9\4:\t:")
+        buf.write("\4;\t;\4<\t<\4=\t=\4>\t>\4?\t?\4@\t@\4A\tA\4B\tB\4C\t")
+        buf.write("C\4D\tD\4E\tE\4F\tF\4G\tG\4H\tH\4I\tI\4J\tJ\4K\tK\4L\t")
+        buf.write("L\4M\tM\4N\tN\4O\tO\4P\tP\4Q\tQ\4R\tR\4S\tS\4T\tT\4U\t")
+        buf.write("U\4V\tV\4W\tW\4X\tX\4Y\tY\4Z\tZ\4[\t[\4\\\t\\\4]\t]\4")
+        buf.write("^\t^\4_\t_\4`\t`\4a\ta\4b\tb\4c\tc\4d\td\4e\te\4f\tf\4")
+        buf.write("g\tg\4h\th\4i\ti\4j\tj\4k\tk\4l\tl\4m\tm\4n\tn\4o\to\4")
+        buf.write("p\tp\4q\tq\4r\tr\3\2\3\2\3\3\3\3\3\4\3\4\3\4\3\4\3\4\3")
+        buf.write("\4\3\4\3\4\3\5\3\5\3\6\3\6\3\7\3\7\3\7\3\7\3\7\3\7\3\7")
+        buf.write("\3\b\3\b\3\b\3\b\3\b\3\b\3\b\3\t\3\t\3\t\3\t\3\t\3\n\3")
+        buf.write("\n\3\n\3\n\3\n\3\n\3\n\3\n\3\n\3\13\3\13\3\13\3\13\3\13")
+        buf.write("\3\13\3\13\3\f\3\f\3\f\3\f\3\f\3\r\3\r\3\r\3\r\3\r\3\16")
+        buf.write("\3\16\3\16\3\16\3\16\3\16\3\17\3\17\3\17\3\17\3\20\3\20")
+        buf.write("\3\20\3\20\3\20\3\21\3\21\3\21\3\21\3\21\3\21\3\22\3\22")
+        buf.write("\3\22\3\22\3\22\3\22\3\22\3\23\3\23\3\23\3\23\3\23\3\23")
+        buf.write("\3\23\3\24\3\24\3\24\3\24\3\24\3\24\3\24\3\24\3\24\3\25")
+        buf.write("\3\25\3\26\3\26\3\26\3\26\3\26\3\26\3\26\3\27\3\27\3\27")
+        buf.write("\3\27\3\27\3\27\3\30\3\30\3\31\3\31\3\31\3\31\3\31\3\32")
+        buf.write("\3\32\3\32\3\32\3\32\3\32\3\33\3\33\3\33\3\33\3\33\3\33")
+        buf.write("\3\33\3\33\3\33\3\34\3\34\3\34\3\35\3\35\3\35\3\35\3\36")
+        buf.write("\3\36\3\36\3\36\3\36\3\36\3\36\3\36\3\36\3\37\3\37\3\37")
+        buf.write("\3\37\3\37\3\37\3 \3 \3 \3 \3 \3 \3 \3 \3 \3 \3!\3!\3")
+        buf.write("!\3!\3!\3!\3!\3!\3!\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3")
+        buf.write("\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"")
+        buf.write("\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3#\3#\3#\3#\3#\3#\3#")
+        buf.write("\3$\3$\3$\3$\3$\3$\3$\3$\3$\3$\3$\3$\3$\3$\3$\3$\3%\3")
+        buf.write("%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3%\3")
+        buf.write("&\3&\3&\3&\3&\3&\3&\3\'\3\'\3(\3(\3)\3)\3*\3*\3+\3+\3")
+        buf.write(",\3,\3,\3,\3-\3-\3.\3.\3/\3/\3\60\3\60\3\61\3\61\3\61")
+        buf.write("\3\62\3\62\3\62\3\63\3\63\3\63\3\63\3\63\3\63\3\63\3\64")
+        buf.write("\3\64\3\65\3\65\3\65\3\66\3\66\3\67\3\67\38\38\39\39\3")
+        buf.write("9\3:\3:\3:\3;\3;\3;\3<\3<\3<\3=\3=\3=\3>\3>\3>\3>\3?\3")
+        buf.write("?\3?\3?\3@\3@\3@\3A\3A\3A\3B\3B\3B\3C\3C\3D\3D\3D\3E\3")
+        buf.write("E\3E\3F\3F\3G\3G\3H\3H\3H\3I\3I\3I\3J\3J\3K\3K\3L\3L\3")
+        buf.write("L\3M\3M\3M\3N\3N\3N\3O\3O\3O\3P\3P\3P\3P\3P\3P\3P\3P\3")
+        buf.write("Q\3Q\3Q\3Q\3Q\3R\3R\3R\3R\3R\3R\3S\3S\3S\3S\3S\3T\3T\3")
+        buf.write("T\3T\3T\3T\3T\3T\3U\3U\3U\3V\3V\3V\3V\3V\3W\3W\3W\3W\3")
+        buf.write("W\3W\3W\3X\3X\3X\3X\3X\3X\3Y\3Y\3Y\3Z\3Z\3Z\3Z\3Z\3[\3")
+        buf.write("[\3[\3[\3[\3[\3[\3[\3[\3\\\3\\\3\\\3\\\3\\\3\\\3]\3]\3")
+        buf.write("]\3]\3]\3]\3]\3^\3^\3^\7^\u02b2\n^\f^\16^\u02b5\13^\3")
+        buf.write("_\3_\3`\5`\u02ba\n`\3`\3`\3`\5`\u02bf\n`\3`\3`\3a\5a\u02c4")
+        buf.write("\na\3a\3a\3a\7a\u02c9\na\fa\16a\u02cc\13a\3a\3a\3b\3b")
+        buf.write("\3b\6b\u02d3\nb\rb\16b\u02d4\3b\5b\u02d8\nb\3c\3c\3c\7")
+        buf.write("c\u02dd\nc\fc\16c\u02e0\13c\5c\u02e2\nc\3c\5c\u02e5\n")
+        buf.write("c\3d\3d\6d\u02e9\nd\rd\16d\u02ea\3d\5d\u02ee\nd\3e\3e")
+        buf.write("\3f\3f\3f\3f\3f\3f\5f\u02f8\nf\3g\6g\u02fb\ng\rg\16g\u02fc")
+        buf.write("\3g\3g\7g\u0301\ng\fg\16g\u0304\13g\3g\5g\u0307\ng\3g")
+        buf.write("\5g\u030a\ng\3g\3g\6g\u030e\ng\rg\16g\u030f\3g\5g\u0313")
+        buf.write("\ng\3g\5g\u0316\ng\3g\6g\u0319\ng\rg\16g\u031a\3g\3g\5")
+        buf.write("g\u031f\ng\3g\6g\u0322\ng\rg\16g\u0323\3g\5g\u0327\ng")
+        buf.write("\3g\5g\u032a\ng\3h\3h\5h\u032e\nh\3h\6h\u0331\nh\rh\16")
+        buf.write("h\u0332\3i\3i\3j\3j\3j\5j\u033a\nj\3k\3k\3k\3k\3k\3k\3")
+        buf.write("k\3k\3k\5k\u0345\nk\3l\3l\3l\3l\3l\3l\3l\3m\3m\3m\3m\3")
+        buf.write("n\3n\3n\3n\3o\3o\3p\3p\3p\3p\7p\u035c\np\fp\16p\u035f")
+        buf.write("\13p\3p\3p\3p\3p\3p\3q\3q\3q\3q\7q\u036a\nq\fq\16q\u036d")
+        buf.write("\13q\3q\5q\u0370\nq\3q\3q\3q\3q\3r\3r\7r\u0378\nr\fr\16")
+        buf.write("r\u037b\13r\3r\5r\u037e\nr\3r\3r\3r\3r\3\u035d\2s\3\3")
+        buf.write("\5\4\7\5\t\6\13\7\r\b\17\t\21\n\23\13\25\f\27\r\31\16")
+        buf.write("\33\17\35\20\37\21!\22#\23%\24\'\25)\26+\27-\30/\31\61")
+        buf.write("\32\63\33\65\34\67\359\36;\37= ?!A\"C#E$G%I&K\'M(O)Q*")
+        buf.write("S+U,W-Y.[/]\60_\61a\62c\63e\64g\65i\66k\67m8o9q:s;u<w")
+        buf.write("=y>{?}@\177A\u0081B\u0083C\u0085D\u0087E\u0089F\u008b")
+        buf.write("G\u008dH\u008fI\u0091J\u0093K\u0095L\u0097M\u0099N\u009b")
+        buf.write("O\u009dP\u009fQ\u00a1R\u00a3S\u00a5T\u00a7U\u00a9V\u00ab")
+        buf.write("W\u00adX\u00afY\u00b1Z\u00b3[\u00b5\\\u00b7]\u00b9^\u00bb")
+        buf.write("_\u00bd\2\u00bf`\u00c1a\u00c3b\u00c5c\u00c7d\u00c9\2\u00cb")
+        buf.write("\2\u00cde\u00cf\2\u00d1\2\u00d3\2\u00d5\2\u00d7\2\u00d9")
+        buf.write("f\u00dbg\u00ddh\u00dfi\u00e1j\u00e3k\3\2\20\6\2&&C\\a")
+        buf.write("ac|\4\2))^^\4\2$$^^\4\2ZZzz\5\2\62;CHch\6\2NNWWnnww\4")
+        buf.write("\2WWww\4\2NNnn\4\2GGgg\4\2--//\6\2FFHHffhh\t\2))^^ddh")
+        buf.write("hppttvv\5\2\13\f\16\17\"\"\4\2\f\f\17\17\2\u03a2\2\3\3")
+        buf.write("\2\2\2\2\5\3\2\2\2\2\7\3\2\2\2\2\t\3\2\2\2\2\13\3\2\2")
+        buf.write("\2\2\r\3\2\2\2\2\17\3\2\2\2\2\21\3\2\2\2\2\23\3\2\2\2")
+        buf.write("\2\25\3\2\2\2\2\27\3\2\2\2\2\31\3\2\2\2\2\33\3\2\2\2\2")
+        buf.write("\35\3\2\2\2\2\37\3\2\2\2\2!\3\2\2\2\2#\3\2\2\2\2%\3\2")
+        buf.write("\2\2\2\'\3\2\2\2\2)\3\2\2\2\2+\3\2\2\2\2-\3\2\2\2\2/\3")
+        buf.write("\2\2\2\2\61\3\2\2\2\2\63\3\2\2\2\2\65\3\2\2\2\2\67\3\2")
+        buf.write("\2\2\29\3\2\2\2\2;\3\2\2\2\2=\3\2\2\2\2?\3\2\2\2\2A\3")
+        buf.write("\2\2\2\2C\3\2\2\2\2E\3\2\2\2\2G\3\2\2\2\2I\3\2\2\2\2K")
+        buf.write("\3\2\2\2\2M\3\2\2\2\2O\3\2\2\2\2Q\3\2\2\2\2S\3\2\2\2\2")
+        buf.write("U\3\2\2\2\2W\3\2\2\2\2Y\3\2\2\2\2[\3\2\2\2\2]\3\2\2\2")
+        buf.write("\2_\3\2\2\2\2a\3\2\2\2\2c\3\2\2\2\2e\3\2\2\2\2g\3\2\2")
+        buf.write("\2\2i\3\2\2\2\2k\3\2\2\2\2m\3\2\2\2\2o\3\2\2\2\2q\3\2")
+        buf.write("\2\2\2s\3\2\2\2\2u\3\2\2\2\2w\3\2\2\2\2y\3\2\2\2\2{\3")
+        buf.write("\2\2\2\2}\3\2\2\2\2\177\3\2\2\2\2\u0081\3\2\2\2\2\u0083")
+        buf.write("\3\2\2\2\2\u0085\3\2\2\2\2\u0087\3\2\2\2\2\u0089\3\2\2")
+        buf.write("\2\2\u008b\3\2\2\2\2\u008d\3\2\2\2\2\u008f\3\2\2\2\2\u0091")
+        buf.write("\3\2\2\2\2\u0093\3\2\2\2\2\u0095\3\2\2\2\2\u0097\3\2\2")
+        buf.write("\2\2\u0099\3\2\2\2\2\u009b\3\2\2\2\2\u009d\3\2\2\2\2\u009f")
+        buf.write("\3\2\2\2\2\u00a1\3\2\2\2\2\u00a3\3\2\2\2\2\u00a5\3\2\2")
+        buf.write("\2\2\u00a7\3\2\2\2\2\u00a9\3\2\2\2\2\u00ab\3\2\2\2\2\u00ad")
+        buf.write("\3\2\2\2\2\u00af\3\2\2\2\2\u00b1\3\2\2\2\2\u00b3\3\2\2")
+        buf.write("\2\2\u00b5\3\2\2\2\2\u00b7\3\2\2\2\2\u00b9\3\2\2\2\2\u00bb")
+        buf.write("\3\2\2\2\2\u00bf\3\2\2\2\2\u00c1\3\2\2\2\2\u00c3\3\2\2")
+        buf.write("\2\2\u00c5\3\2\2\2\2\u00c7\3\2\2\2\2\u00cd\3\2\2\2\2\u00d9")
+        buf.write("\3\2\2\2\2\u00db\3\2\2\2\2\u00dd\3\2\2\2\2\u00df\3\2\2")
+        buf.write("\2\2\u00e1\3\2\2\2\2\u00e3\3\2\2\2\3\u00e5\3\2\2\2\5\u00e7")
+        buf.write("\3\2\2\2\7\u00e9\3\2\2\2\t\u00f1\3\2\2\2\13\u00f3\3\2")
+        buf.write("\2\2\r\u00f5\3\2\2\2\17\u00fc\3\2\2\2\21\u0103\3\2\2\2")
+        buf.write("\23\u0108\3\2\2\2\25\u0111\3\2\2\2\27\u0118\3\2\2\2\31")
+        buf.write("\u011d\3\2\2\2\33\u0122\3\2\2\2\35\u0128\3\2\2\2\37\u012c")
+        buf.write("\3\2\2\2!\u0131\3\2\2\2#\u0137\3\2\2\2%\u013e\3\2\2\2")
+        buf.write("\'\u0145\3\2\2\2)\u014e\3\2\2\2+\u0150\3\2\2\2-\u0157")
+        buf.write("\3\2\2\2/\u015d\3\2\2\2\61\u015f\3\2\2\2\63\u0164\3\2")
+        buf.write("\2\2\65\u016a\3\2\2\2\67\u0173\3\2\2\29\u0176\3\2\2\2")
+        buf.write(";\u017a\3\2\2\2=\u0183\3\2\2\2?\u0189\3\2\2\2A\u0193\3")
+        buf.write("\2\2\2C\u019c\3\2\2\2E\u01ba\3\2\2\2G\u01c1\3\2\2\2I\u01d1")
+        buf.write("\3\2\2\2K\u01e4\3\2\2\2M\u01eb\3\2\2\2O\u01ed\3\2\2\2")
+        buf.write("Q\u01ef\3\2\2\2S\u01f1\3\2\2\2U\u01f3\3\2\2\2W\u01f5\3")
+        buf.write("\2\2\2Y\u01f9\3\2\2\2[\u01fb\3\2\2\2]\u01fd\3\2\2\2_\u01ff")
+        buf.write("\3\2\2\2a\u0201\3\2\2\2c\u0204\3\2\2\2e\u0207\3\2\2\2")
+        buf.write("g\u020e\3\2\2\2i\u0210\3\2\2\2k\u0213\3\2\2\2m\u0215\3")
+        buf.write("\2\2\2o\u0217\3\2\2\2q\u0219\3\2\2\2s\u021c\3\2\2\2u\u021f")
+        buf.write("\3\2\2\2w\u0222\3\2\2\2y\u0225\3\2\2\2{\u0228\3\2\2\2")
+        buf.write("}\u022c\3\2\2\2\177\u0230\3\2\2\2\u0081\u0233\3\2\2\2")
+        buf.write("\u0083\u0236\3\2\2\2\u0085\u0239\3\2\2\2\u0087\u023b\3")
+        buf.write("\2\2\2\u0089\u023e\3\2\2\2\u008b\u0241\3\2\2\2\u008d\u0243")
+        buf.write("\3\2\2\2\u008f\u0245\3\2\2\2\u0091\u0248\3\2\2\2\u0093")
+        buf.write("\u024b\3\2\2\2\u0095\u024d\3\2\2\2\u0097\u024f\3\2\2\2")
+        buf.write("\u0099\u0252\3\2\2\2\u009b\u0255\3\2\2\2\u009d\u0258\3")
+        buf.write("\2\2\2\u009f\u025b\3\2\2\2\u00a1\u0263\3\2\2\2\u00a3\u0268")
+        buf.write("\3\2\2\2\u00a5\u026e\3\2\2\2\u00a7\u0273\3\2\2\2\u00a9")
+        buf.write("\u027b\3\2\2\2\u00ab\u027e\3\2\2\2\u00ad\u0283\3\2\2\2")
+        buf.write("\u00af\u028a\3\2\2\2\u00b1\u0290\3\2\2\2\u00b3\u0293\3")
+        buf.write("\2\2\2\u00b5\u0298\3\2\2\2\u00b7\u02a1\3\2\2\2\u00b9\u02a7")
+        buf.write("\3\2\2\2\u00bb\u02ae\3\2\2\2\u00bd\u02b6\3\2\2\2\u00bf")
+        buf.write("\u02b9\3\2\2\2\u00c1\u02c3\3\2\2\2\u00c3\u02cf\3\2\2\2")
+        buf.write("\u00c5\u02e1\3\2\2\2\u00c7\u02e6\3\2\2\2\u00c9\u02ef\3")
+        buf.write("\2\2\2\u00cb\u02f7\3\2\2\2\u00cd\u0329\3\2\2\2\u00cf\u032b")
+        buf.write("\3\2\2\2\u00d1\u0334\3\2\2\2\u00d3\u0339\3\2\2\2\u00d5")
+        buf.write("\u0344\3\2\2\2\u00d7\u0346\3\2\2\2\u00d9\u034d\3\2\2\2")
+        buf.write("\u00db\u0351\3\2\2\2\u00dd\u0355\3\2\2\2\u00df\u0357\3")
+        buf.write("\2\2\2\u00e1\u0365\3\2\2\2\u00e3\u0375\3\2\2\2\u00e5\u00e6")
+        buf.write("\7}\2\2\u00e6\4\3\2\2\2\u00e7\u00e8\7=\2\2\u00e8\6\3\2")
+        buf.write("\2\2\u00e9\u00ea\7v\2\2\u00ea\u00eb\7{\2\2\u00eb\u00ec")
+        buf.write("\7r\2\2\u00ec\u00ed\7g\2\2\u00ed\u00ee\7f\2\2\u00ee\u00ef")
+        buf.write("\7g\2\2\u00ef\u00f0\7h\2\2\u00f0\b\3\2\2\2\u00f1\u00f2")
+        buf.write("\7.\2\2\u00f2\n\3\2\2\2\u00f3\u00f4\7?\2\2\u00f4\f\3\2")
+        buf.write("\2\2\u00f5\u00f6\7g\2\2\u00f6\u00f7\7z\2\2\u00f7\u00f8")
+        buf.write("\7v\2\2\u00f8\u00f9\7g\2\2\u00f9\u00fa\7t\2\2\u00fa\u00fb")
+        buf.write("\7p\2\2\u00fb\16\3\2\2\2\u00fc\u00fd\7u\2\2\u00fd\u00fe")
+        buf.write("\7v\2\2\u00fe\u00ff\7c\2\2\u00ff\u0100\7v\2\2\u0100\u0101")
+        buf.write("\7k\2\2\u0101\u0102\7e\2\2\u0102\20\3\2\2\2\u0103\u0104")
+        buf.write("\7c\2\2\u0104\u0105\7w\2\2\u0105\u0106\7v\2\2\u0106\u0107")
+        buf.write("\7q\2\2\u0107\22\3\2\2\2\u0108\u0109\7t\2\2\u0109\u010a")
+        buf.write("\7g\2\2\u010a\u010b\7i\2\2\u010b\u010c\7k\2\2\u010c\u010d")
+        buf.write("\7u\2\2\u010d\u010e\7v\2\2\u010e\u010f\7g\2\2\u010f\u0110")
+        buf.write("\7t\2\2\u0110\24\3\2\2\2\u0111\u0112\7U\2\2\u0112\u0113")
+        buf.write("\7V\2\2\u0113\u0114\7C\2\2\u0114\u0115\7V\2\2\u0115\u0116")
+        buf.write("\7K\2\2\u0116\u0117\7E\2\2\u0117\26\3\2\2\2\u0118\u0119")
+        buf.write("\7x\2\2\u0119\u011a\7q\2\2\u011a\u011b\7k\2\2\u011b\u011c")
+        buf.write("\7f\2\2\u011c\30\3\2\2\2\u011d\u011e\7e\2\2\u011e\u011f")
+        buf.write("\7j\2\2\u011f\u0120\7c\2\2\u0120\u0121\7t\2\2\u0121\32")
+        buf.write("\3\2\2\2\u0122\u0123\7u\2\2\u0123\u0124\7j\2\2\u0124\u0125")
+        buf.write("\7q\2\2\u0125\u0126\7t\2\2\u0126\u0127\7v\2\2\u0127\34")
+        buf.write("\3\2\2\2\u0128\u0129\7k\2\2\u0129\u012a\7p\2\2\u012a\u012b")
+        buf.write("\7v\2\2\u012b\36\3\2\2\2\u012c\u012d\7n\2\2\u012d\u012e")
+        buf.write("\7q\2\2\u012e\u012f\7p\2\2\u012f\u0130\7i\2\2\u0130 \3")
+        buf.write("\2\2\2\u0131\u0132\7h\2\2\u0132\u0133\7n\2\2\u0133\u0134")
+        buf.write("\7q\2\2\u0134\u0135\7c\2\2\u0135\u0136\7v\2\2\u0136\"")
+        buf.write("\3\2\2\2\u0137\u0138\7f\2\2\u0138\u0139\7q\2\2\u0139\u013a")
+        buf.write("\7w\2\2\u013a\u013b\7d\2\2\u013b\u013c\7n\2\2\u013c\u013d")
+        buf.write("\7g\2\2\u013d$\3\2\2\2\u013e\u013f\7u\2\2\u013f\u0140")
+        buf.write("\7k\2\2\u0140\u0141\7i\2\2\u0141\u0142\7p\2\2\u0142\u0143")
+        buf.write("\7g\2\2\u0143\u0144\7f\2\2\u0144&\3\2\2\2\u0145\u0146")
+        buf.write("\7w\2\2\u0146\u0147\7p\2\2\u0147\u0148\7u\2\2\u0148\u0149")
+        buf.write("\7k\2\2\u0149\u014a\7i\2\2\u014a\u014b\7p\2\2\u014b\u014c")
+        buf.write("\7g\2\2\u014c\u014d\7f\2\2\u014d(\3\2\2\2\u014e\u014f")
+        buf.write("\7\177\2\2\u014f*\3\2\2\2\u0150\u0151\7u\2\2\u0151\u0152")
+        buf.write("\7v\2\2\u0152\u0153\7t\2\2\u0153\u0154\7w\2\2\u0154\u0155")
+        buf.write("\7e\2\2\u0155\u0156\7v\2\2\u0156,\3\2\2\2\u0157\u0158")
+        buf.write("\7w\2\2\u0158\u0159\7p\2\2\u0159\u015a\7k\2\2\u015a\u015b")
+        buf.write("\7q\2\2\u015b\u015c\7p\2\2\u015c.\3\2\2\2\u015d\u015e")
+        buf.write("\7<\2\2\u015e\60\3\2\2\2\u015f\u0160\7g\2\2\u0160\u0161")
+        buf.write("\7p\2\2\u0161\u0162\7w\2\2\u0162\u0163\7o\2\2\u0163\62")
+        buf.write("\3\2\2\2\u0164\u0165\7e\2\2\u0165\u0166\7q\2\2\u0166\u0167")
+        buf.write("\7p\2\2\u0167\u0168\7u\2\2\u0168\u0169\7v\2\2\u0169\64")
+        buf.write("\3\2\2\2\u016a\u016b\7x\2\2\u016b\u016c\7q\2\2\u016c\u016d")
+        buf.write("\7n\2\2\u016d\u016e\7c\2\2\u016e\u016f\7v\2\2\u016f\u0170")
+        buf.write("\7k\2\2\u0170\u0171\7n\2\2\u0171\u0172\7g\2\2\u0172\66")
+        buf.write("\3\2\2\2\u0173\u0174\7K\2\2\u0174\u0175\7P\2\2\u01758")
+        buf.write("\3\2\2\2\u0176\u0177\7Q\2\2\u0177\u0178\7W\2\2\u0178\u0179")
+        buf.write("\7V\2\2\u0179:\3\2\2\2\u017a\u017b\7Q\2\2\u017b\u017c")
+        buf.write("\7R\2\2\u017c\u017d\7V\2\2\u017d\u017e\7K\2\2\u017e\u017f")
+        buf.write("\7Q\2\2\u017f\u0180\7P\2\2\u0180\u0181\7C\2\2\u0181\u0182")
+        buf.write("\7N\2\2\u0182<\3\2\2\2\u0183\u0184\7E\2\2\u0184\u0185")
+        buf.write("\7Q\2\2\u0185\u0186\7P\2\2\u0186\u0187\7U\2\2\u0187\u0188")
+        buf.write("\7V\2\2\u0188>\3\2\2\2\u0189\u018a\7W\2\2\u018a\u018b")
+        buf.write("\7P\2\2\u018b\u018c\7C\2\2\u018c\u018d\7N\2\2\u018d\u018e")
+        buf.write("\7K\2\2\u018e\u018f\7I\2\2\u018f\u0190\7P\2\2\u0190\u0191")
+        buf.write("\7G\2\2\u0191\u0192\7F\2\2\u0192@\3\2\2\2\u0193\u0194")
+        buf.write("\7X\2\2\u0194\u0195\7Q\2\2\u0195\u0196\7N\2\2\u0196\u0197")
+        buf.write("\7C\2\2\u0197\u0198\7V\2\2\u0198\u0199\7K\2\2\u0199\u019a")
+        buf.write("\7N\2\2\u019a\u019b\7G\2\2\u019bB\3\2\2\2\u019c\u019d")
+        buf.write("\7I\2\2\u019d\u019e\7N\2\2\u019e\u019f\7Q\2\2\u019f\u01a0")
+        buf.write("\7D\2\2\u01a0\u01a1\7C\2\2\u01a1\u01a2\7N\2\2\u01a2\u01a3")
+        buf.write("\7a\2\2\u01a3\u01a4\7T\2\2\u01a4\u01a5\7G\2\2\u01a5\u01a6")
+        buf.write("\7O\2\2\u01a6\u01a7\7Q\2\2\u01a7\u01a8\7X\2\2\u01a8\u01a9")
+        buf.write("\7G\2\2\u01a9\u01aa\7a\2\2\u01aa\u01ab\7K\2\2\u01ab\u01ac")
+        buf.write("\7H\2\2\u01ac\u01ad\7a\2\2\u01ad\u01ae\7W\2\2\u01ae\u01af")
+        buf.write("\7P\2\2\u01af\u01b0\7T\2\2\u01b0\u01b1\7G\2\2\u01b1\u01b2")
+        buf.write("\7H\2\2\u01b2\u01b3\7G\2\2\u01b3\u01b4\7T\2\2\u01b4\u01b5")
+        buf.write("\7G\2\2\u01b5\u01b6\7P\2\2\u01b6\u01b7\7E\2\2\u01b7\u01b8")
+        buf.write("\7G\2\2\u01b8\u01b9\7F\2\2\u01b9D\3\2\2\2\u01ba\u01bb")
+        buf.write("\7G\2\2\u01bb\u01bc\7H\2\2\u01bc\u01bd\7K\2\2\u01bd\u01be")
+        buf.write("\7C\2\2\u01be\u01bf\7R\2\2\u01bf\u01c0\7K\2\2\u01c0F\3")
+        buf.write("\2\2\2\u01c1\u01c2\7G\2\2\u01c2\u01c3\7H\2\2\u01c3\u01c4")
+        buf.write("\7K\2\2\u01c4\u01c5\7a\2\2\u01c5\u01c6\7D\2\2\u01c6\u01c7")
+        buf.write("\7Q\2\2\u01c7\u01c8\7Q\2\2\u01c8\u01c9\7V\2\2\u01c9\u01ca")
+        buf.write("\7U\2\2\u01ca\u01cb\7G\2\2\u01cb\u01cc\7T\2\2\u01cc\u01cd")
+        buf.write("\7X\2\2\u01cd\u01ce\7K\2\2\u01ce\u01cf\7E\2\2\u01cf\u01d0")
+        buf.write("\7G\2\2\u01d0H\3\2\2\2\u01d1\u01d2\7G\2\2\u01d2\u01d3")
+        buf.write("\7H\2\2\u01d3\u01d4\7K\2\2\u01d4\u01d5\7a\2\2\u01d5\u01d6")
+        buf.write("\7T\2\2\u01d6\u01d7\7W\2\2\u01d7\u01d8\7P\2\2\u01d8\u01d9")
+        buf.write("\7V\2\2\u01d9\u01da\7K\2\2\u01da\u01db\7O\2\2\u01db\u01dc")
+        buf.write("\7G\2\2\u01dc\u01dd\7U\2\2\u01dd\u01de\7G\2\2\u01de\u01df")
+        buf.write("\7T\2\2\u01df\u01e0\7X\2\2\u01e0\u01e1\7K\2\2\u01e1\u01e2")
+        buf.write("\7E\2\2\u01e2\u01e3\7G\2\2\u01e3J\3\2\2\2\u01e4\u01e5")
+        buf.write("\7R\2\2\u01e5\u01e6\7C\2\2\u01e6\u01e7\7E\2\2\u01e7\u01e8")
+        buf.write("\7M\2\2\u01e8\u01e9\7G\2\2\u01e9\u01ea\7F\2\2\u01eaL\3")
+        buf.write("\2\2\2\u01eb\u01ec\7*\2\2\u01ecN\3\2\2\2\u01ed\u01ee\7")
+        buf.write("+\2\2\u01eeP\3\2\2\2\u01ef\u01f0\7]\2\2\u01f0R\3\2\2\2")
+        buf.write("\u01f1\u01f2\7_\2\2\u01f2T\3\2\2\2\u01f3\u01f4\7,\2\2")
+        buf.write("\u01f4V\3\2\2\2\u01f5\u01f6\7\60\2\2\u01f6\u01f7\7\60")
+        buf.write("\2\2\u01f7\u01f8\7\60\2\2\u01f8X\3\2\2\2\u01f9\u01fa\7")
+        buf.write("-\2\2\u01faZ\3\2\2\2\u01fb\u01fc\7/\2\2\u01fc\\\3\2\2")
+        buf.write("\2\u01fd\u01fe\7\61\2\2\u01fe^\3\2\2\2\u01ff\u0200\7\'")
+        buf.write("\2\2\u0200`\3\2\2\2\u0201\u0202\7-\2\2\u0202\u0203\7-")
+        buf.write("\2\2\u0203b\3\2\2\2\u0204\u0205\7/\2\2\u0205\u0206\7/")
+        buf.write("\2\2\u0206d\3\2\2\2\u0207\u0208\7u\2\2\u0208\u0209\7k")
+        buf.write("\2\2\u0209\u020a\7|\2\2\u020a\u020b\7g\2\2\u020b\u020c")
+        buf.write("\7q\2\2\u020c\u020d\7h\2\2\u020df\3\2\2\2\u020e\u020f")
+        buf.write("\7\60\2\2\u020fh\3\2\2\2\u0210\u0211\7/\2\2\u0211\u0212")
+        buf.write("\7@\2\2\u0212j\3\2\2\2\u0213\u0214\7(\2\2\u0214l\3\2\2")
+        buf.write("\2\u0215\u0216\7\u0080\2\2\u0216n\3\2\2\2\u0217\u0218")
+        buf.write("\7#\2\2\u0218p\3\2\2\2\u0219\u021a\7,\2\2\u021a\u021b")
+        buf.write("\7?\2\2\u021br\3\2\2\2\u021c\u021d\7\61\2\2\u021d\u021e")
+        buf.write("\7?\2\2\u021et\3\2\2\2\u021f\u0220\7\'\2\2\u0220\u0221")
+        buf.write("\7?\2\2\u0221v\3\2\2\2\u0222\u0223\7-\2\2\u0223\u0224")
+        buf.write("\7?\2\2\u0224x\3\2\2\2\u0225\u0226\7/\2\2\u0226\u0227")
+        buf.write("\7?\2\2\u0227z\3\2\2\2\u0228\u0229\7>\2\2\u0229\u022a")
+        buf.write("\7>\2\2\u022a\u022b\7?\2\2\u022b|\3\2\2\2\u022c\u022d")
+        buf.write("\7@\2\2\u022d\u022e\7@\2\2\u022e\u022f\7?\2\2\u022f~\3")
+        buf.write("\2\2\2\u0230\u0231\7(\2\2\u0231\u0232\7?\2\2\u0232\u0080")
+        buf.write("\3\2\2\2\u0233\u0234\7`\2\2\u0234\u0235\7?\2\2\u0235\u0082")
+        buf.write("\3\2\2\2\u0236\u0237\7~\2\2\u0237\u0238\7?\2\2\u0238\u0084")
+        buf.write("\3\2\2\2\u0239\u023a\7A\2\2\u023a\u0086\3\2\2\2\u023b")
+        buf.write("\u023c\7~\2\2\u023c\u023d\7~\2\2\u023d\u0088\3\2\2\2\u023e")
+        buf.write("\u023f\7(\2\2\u023f\u0240\7(\2\2\u0240\u008a\3\2\2\2\u0241")
+        buf.write("\u0242\7~\2\2\u0242\u008c\3\2\2\2\u0243\u0244\7`\2\2\u0244")
+        buf.write("\u008e\3\2\2\2\u0245\u0246\7?\2\2\u0246\u0247\7?\2\2\u0247")
+        buf.write("\u0090\3\2\2\2\u0248\u0249\7#\2\2\u0249\u024a\7?\2\2\u024a")
+        buf.write("\u0092\3\2\2\2\u024b\u024c\7>\2\2\u024c\u0094\3\2\2\2")
+        buf.write("\u024d\u024e\7@\2\2\u024e\u0096\3\2\2\2\u024f\u0250\7")
+        buf.write(">\2\2\u0250\u0251\7?\2\2\u0251\u0098\3\2\2\2\u0252\u0253")
+        buf.write("\7@\2\2\u0253\u0254\7?\2\2\u0254\u009a\3\2\2\2\u0255\u0256")
+        buf.write("\7>\2\2\u0256\u0257\7>\2\2\u0257\u009c\3\2\2\2\u0258\u0259")
+        buf.write("\7@\2\2\u0259\u025a\7@\2\2\u025a\u009e\3\2\2\2\u025b\u025c")
+        buf.write("\7a\2\2\u025c\u025d\7a\2\2\u025d\u025e\7c\2\2\u025e\u025f")
+        buf.write("\7u\2\2\u025f\u0260\7o\2\2\u0260\u0261\7a\2\2\u0261\u0262")
+        buf.write("\7a\2\2\u0262\u00a0\3\2\2\2\u0263\u0264\7a\2\2\u0264\u0265")
+        buf.write("\7c\2\2\u0265\u0266\7u\2\2\u0266\u0267\7o\2\2\u0267\u00a2")
+        buf.write("\3\2\2\2\u0268\u0269\7a\2\2\u0269\u026a\7a\2\2\u026a\u026b")
+        buf.write("\7c\2\2\u026b\u026c\7u\2\2\u026c\u026d\7o\2\2\u026d\u00a4")
+        buf.write("\3\2\2\2\u026e\u026f\7e\2\2\u026f\u0270\7c\2\2\u0270\u0271")
+        buf.write("\7u\2\2\u0271\u0272\7g\2\2\u0272\u00a6\3\2\2\2\u0273\u0274")
+        buf.write("\7f\2\2\u0274\u0275\7g\2\2\u0275\u0276\7h\2\2\u0276\u0277")
+        buf.write("\7c\2\2\u0277\u0278\7w\2\2\u0278\u0279\7n\2\2\u0279\u027a")
+        buf.write("\7v\2\2\u027a\u00a8\3\2\2\2\u027b\u027c\7k\2\2\u027c\u027d")
+        buf.write("\7h\2\2\u027d\u00aa\3\2\2\2\u027e\u027f\7g\2\2\u027f\u0280")
+        buf.write("\7n\2\2\u0280\u0281\7u\2\2\u0281\u0282\7g\2\2\u0282\u00ac")
+        buf.write("\3\2\2\2\u0283\u0284\7u\2\2\u0284\u0285\7y\2\2\u0285\u0286")
+        buf.write("\7k\2\2\u0286\u0287\7v\2\2\u0287\u0288\7e\2\2\u0288\u0289")
+        buf.write("\7j\2\2\u0289\u00ae\3\2\2\2\u028a\u028b\7y\2\2\u028b\u028c")
+        buf.write("\7j\2\2\u028c\u028d\7k\2\2\u028d\u028e\7n\2\2\u028e\u028f")
+        buf.write("\7g\2\2\u028f\u00b0\3\2\2\2\u0290\u0291\7f\2\2\u0291\u0292")
+        buf.write("\7q\2\2\u0292\u00b2\3\2\2\2\u0293\u0294\7i\2\2\u0294\u0295")
+        buf.write("\7q\2\2\u0295\u0296\7v\2\2\u0296\u0297\7q\2\2\u0297\u00b4")
+        buf.write("\3\2\2\2\u0298\u0299\7e\2\2\u0299\u029a\7q\2\2\u029a\u029b")
+        buf.write("\7p\2\2\u029b\u029c\7v\2\2\u029c\u029d\7k\2\2\u029d\u029e")
+        buf.write("\7p\2\2\u029e\u029f\7w\2\2\u029f\u02a0\7g\2\2\u02a0\u00b6")
+        buf.write("\3\2\2\2\u02a1\u02a2\7d\2\2\u02a2\u02a3\7t\2\2\u02a3\u02a4")
+        buf.write("\7g\2\2\u02a4\u02a5\7c\2\2\u02a5\u02a6\7m\2\2\u02a6\u00b8")
+        buf.write("\3\2\2\2\u02a7\u02a8\7t\2\2\u02a8\u02a9\7g\2\2\u02a9\u02aa")
+        buf.write("\7v\2\2\u02aa\u02ab\7w\2\2\u02ab\u02ac\7t\2\2\u02ac\u02ad")
+        buf.write("\7p\2\2\u02ad\u00ba\3\2\2\2\u02ae\u02b3\5\u00bd_\2\u02af")
+        buf.write("\u02b2\5\u00bd_\2\u02b0\u02b2\4\62;\2\u02b1\u02af\3\2")
+        buf.write("\2\2\u02b1\u02b0\3\2\2\2\u02b2\u02b5\3\2\2\2\u02b3\u02b1")
+        buf.write("\3\2\2\2\u02b3\u02b4\3\2\2\2\u02b4\u00bc\3\2\2\2\u02b5")
+        buf.write("\u02b3\3\2\2\2\u02b6\u02b7\t\2\2\2\u02b7\u00be\3\2\2\2")
+        buf.write("\u02b8\u02ba\7N\2\2\u02b9\u02b8\3\2\2\2\u02b9\u02ba\3")
+        buf.write("\2\2\2\u02ba\u02bb\3\2\2\2\u02bb\u02be\7)\2\2\u02bc\u02bf")
+        buf.write("\5\u00d3j\2\u02bd\u02bf\n\3\2\2\u02be\u02bc\3\2\2\2\u02be")
+        buf.write("\u02bd\3\2\2\2\u02bf\u02c0\3\2\2\2\u02c0\u02c1\7)\2\2")
+        buf.write("\u02c1\u00c0\3\2\2\2\u02c2\u02c4\7N\2\2\u02c3\u02c2\3")
+        buf.write("\2\2\2\u02c3\u02c4\3\2\2\2\u02c4\u02c5\3\2\2\2\u02c5\u02ca")
+        buf.write("\7$\2\2\u02c6\u02c9\5\u00d3j\2\u02c7\u02c9\n\4\2\2\u02c8")
+        buf.write("\u02c6\3\2\2\2\u02c8\u02c7\3\2\2\2\u02c9\u02cc\3\2\2\2")
+        buf.write("\u02ca\u02c8\3\2\2\2\u02ca\u02cb\3\2\2\2\u02cb\u02cd\3")
+        buf.write("\2\2\2\u02cc\u02ca\3\2\2\2\u02cd\u02ce\7$\2\2\u02ce\u00c2")
+        buf.write("\3\2\2\2\u02cf\u02d0\7\62\2\2\u02d0\u02d2\t\5\2\2\u02d1")
+        buf.write("\u02d3\5\u00c9e\2\u02d2\u02d1\3\2\2\2\u02d3\u02d4\3\2")
+        buf.write("\2\2\u02d4\u02d2\3\2\2\2\u02d4\u02d5\3\2\2\2\u02d5\u02d7")
+        buf.write("\3\2\2\2\u02d6\u02d8\5\u00cbf\2\u02d7\u02d6\3\2\2\2\u02d7")
+        buf.write("\u02d8\3\2\2\2\u02d8\u00c4\3\2\2\2\u02d9\u02e2\7\62\2")
+        buf.write("\2\u02da\u02de\4\63;\2\u02db\u02dd\4\62;\2\u02dc\u02db")
+        buf.write("\3\2\2\2\u02dd\u02e0\3\2\2\2\u02de\u02dc\3\2\2\2\u02de")
+        buf.write("\u02df\3\2\2\2\u02df\u02e2\3\2\2\2\u02e0\u02de\3\2\2\2")
+        buf.write("\u02e1\u02d9\3\2\2\2\u02e1\u02da\3\2\2\2\u02e2\u02e4\3")
+        buf.write("\2\2\2\u02e3\u02e5\5\u00cbf\2\u02e4\u02e3\3\2\2\2\u02e4")
+        buf.write("\u02e5\3\2\2\2\u02e5\u00c6\3\2\2\2\u02e6\u02e8\7\62\2")
+        buf.write("\2\u02e7\u02e9\4\629\2\u02e8\u02e7\3\2\2\2\u02e9\u02ea")
+        buf.write("\3\2\2\2\u02ea\u02e8\3\2\2\2\u02ea\u02eb\3\2\2\2\u02eb")
+        buf.write("\u02ed\3\2\2\2\u02ec\u02ee\5\u00cbf\2\u02ed\u02ec\3\2")
+        buf.write("\2\2\u02ed\u02ee\3\2\2\2\u02ee\u00c8\3\2\2\2\u02ef\u02f0")
+        buf.write("\t\6\2\2\u02f0\u00ca\3\2\2\2\u02f1\u02f8\t\7\2\2\u02f2")
+        buf.write("\u02f3\t\b\2\2\u02f3\u02f8\t\t\2\2\u02f4\u02f5\t\b\2\2")
+        buf.write("\u02f5\u02f6\t\t\2\2\u02f6\u02f8\t\t\2\2\u02f7\u02f1\3")
+        buf.write("\2\2\2\u02f7\u02f2\3\2\2\2\u02f7\u02f4\3\2\2\2\u02f8\u00cc")
+        buf.write("\3\2\2\2\u02f9\u02fb\4\62;\2\u02fa\u02f9\3\2\2\2\u02fb")
+        buf.write("\u02fc\3\2\2\2\u02fc\u02fa\3\2\2\2\u02fc\u02fd\3\2\2\2")
+        buf.write("\u02fd\u02fe\3\2\2\2\u02fe\u0302\7\60\2\2\u02ff\u0301")
+        buf.write("\4\62;\2\u0300\u02ff\3\2\2\2\u0301\u0304\3\2\2\2\u0302")
+        buf.write("\u0300\3\2\2\2\u0302\u0303\3\2\2\2\u0303\u0306\3\2\2\2")
+        buf.write("\u0304\u0302\3\2\2\2\u0305\u0307\5\u00cfh\2\u0306\u0305")
+        buf.write("\3\2\2\2\u0306\u0307\3\2\2\2\u0307\u0309\3\2\2\2\u0308")
+        buf.write("\u030a\5\u00d1i\2\u0309\u0308\3\2\2\2\u0309\u030a\3\2")
+        buf.write("\2\2\u030a\u032a\3\2\2\2\u030b\u030d\7\60\2\2\u030c\u030e")
+        buf.write("\4\62;\2\u030d\u030c\3\2\2\2\u030e\u030f\3\2\2\2\u030f")
+        buf.write("\u030d\3\2\2\2\u030f\u0310\3\2\2\2\u0310\u0312\3\2\2\2")
+        buf.write("\u0311\u0313\5\u00cfh\2\u0312\u0311\3\2\2\2\u0312\u0313")
+        buf.write("\3\2\2\2\u0313\u0315\3\2\2\2\u0314\u0316\5\u00d1i\2\u0315")
+        buf.write("\u0314\3\2\2\2\u0315\u0316\3\2\2\2\u0316\u032a\3\2\2\2")
+        buf.write("\u0317\u0319\4\62;\2\u0318\u0317\3\2\2\2\u0319\u031a\3")
+        buf.write("\2\2\2\u031a\u0318\3\2\2\2\u031a\u031b\3\2\2\2\u031b\u031c")
+        buf.write("\3\2\2\2\u031c\u031e\5\u00cfh\2\u031d\u031f\5\u00d1i\2")
+        buf.write("\u031e\u031d\3\2\2\2\u031e\u031f\3\2\2\2\u031f\u032a\3")
+        buf.write("\2\2\2\u0320\u0322\4\62;\2\u0321\u0320\3\2\2\2\u0322\u0323")
+        buf.write("\3\2\2\2\u0323\u0321\3\2\2\2\u0323\u0324\3\2\2\2\u0324")
+        buf.write("\u0326\3\2\2\2\u0325\u0327\5\u00cfh\2\u0326\u0325\3\2")
+        buf.write("\2\2\u0326\u0327\3\2\2\2\u0327\u0328\3\2\2\2\u0328\u032a")
+        buf.write("\5\u00d1i\2\u0329\u02fa\3\2\2\2\u0329\u030b\3\2\2\2\u0329")
+        buf.write("\u0318\3\2\2\2\u0329\u0321\3\2\2\2\u032a\u00ce\3\2\2\2")
+        buf.write("\u032b\u032d\t\n\2\2\u032c\u032e\t\13\2\2\u032d\u032c")
+        buf.write("\3\2\2\2\u032d\u032e\3\2\2\2\u032e\u0330\3\2\2\2\u032f")
+        buf.write("\u0331\4\62;\2\u0330\u032f\3\2\2\2\u0331\u0332\3\2\2\2")
+        buf.write("\u0332\u0330\3\2\2\2\u0332\u0333\3\2\2\2\u0333\u00d0\3")
+        buf.write("\2\2\2\u0334\u0335\t\f\2\2\u0335\u00d2\3\2\2\2\u0336\u0337")
+        buf.write("\7^\2\2\u0337\u033a\t\r\2\2\u0338\u033a\5\u00d5k\2\u0339")
+        buf.write("\u0336\3\2\2\2\u0339\u0338\3\2\2\2\u033a\u00d4\3\2\2\2")
+        buf.write("\u033b\u033c\7^\2\2\u033c\u033d\4\62\65\2\u033d\u033e")
+        buf.write("\4\629\2\u033e\u0345\4\629\2\u033f\u0340\7^\2\2\u0340")
+        buf.write("\u0341\4\629\2\u0341\u0345\4\629\2\u0342\u0343\7^\2\2")
+        buf.write("\u0343\u0345\4\629\2\u0344\u033b\3\2\2\2\u0344\u033f\3")
+        buf.write("\2\2\2\u0344\u0342\3\2\2\2\u0345\u00d6\3\2\2\2\u0346\u0347")
+        buf.write("\7^\2\2\u0347\u0348\7w\2\2\u0348\u0349\5\u00c9e\2\u0349")
+        buf.write("\u034a\5\u00c9e\2\u034a\u034b\5\u00c9e\2\u034b\u034c\5")
+        buf.write("\u00c9e\2\u034c\u00d8\3\2\2\2\u034d\u034e\t\16\2\2\u034e")
+        buf.write("\u034f\3\2\2\2\u034f\u0350\bm\2\2\u0350\u00da\3\2\2\2")
+        buf.write("\u0351\u0352\7^\2\2\u0352\u0353\3\2\2\2\u0353\u0354\b")
+        buf.write("n\2\2\u0354\u00dc\3\2\2\2\u0355\u0356\4\5\0\2\u0356\u00de")
+        buf.write("\3\2\2\2\u0357\u0358\7\61\2\2\u0358\u0359\7,\2\2\u0359")
+        buf.write("\u035d\3\2\2\2\u035a\u035c\13\2\2\2\u035b\u035a\3\2\2")
+        buf.write("\2\u035c\u035f\3\2\2\2\u035d\u035e\3\2\2\2\u035d\u035b")
+        buf.write("\3\2\2\2\u035e\u0360\3\2\2\2\u035f\u035d\3\2\2\2\u0360")
+        buf.write("\u0361\7,\2\2\u0361\u0362\7\61\2\2\u0362\u0363\3\2\2\2")
+        buf.write("\u0363\u0364\bp\2\2\u0364\u00e0\3\2\2\2\u0365\u0366\7")
+        buf.write("\61\2\2\u0366\u0367\7\61\2\2\u0367\u036b\3\2\2\2\u0368")
+        buf.write("\u036a\n\17\2\2\u0369\u0368\3\2\2\2\u036a\u036d\3\2\2")
+        buf.write("\2\u036b\u0369\3\2\2\2\u036b\u036c\3\2\2\2\u036c\u036f")
+        buf.write("\3\2\2\2\u036d\u036b\3\2\2\2\u036e\u0370\7\17\2\2\u036f")
+        buf.write("\u036e\3\2\2\2\u036f\u0370\3\2\2\2\u0370\u0371\3\2\2\2")
+        buf.write("\u0371\u0372\7\f\2\2\u0372\u0373\3\2\2\2\u0373\u0374\b")
+        buf.write("q\2\2\u0374\u00e2\3\2\2\2\u0375\u0379\7%\2\2\u0376\u0378")
+        buf.write("\n\17\2\2\u0377\u0376\3\2\2\2\u0378\u037b\3\2\2\2\u0379")
+        buf.write("\u0377\3\2\2\2\u0379\u037a\3\2\2\2\u037a\u037d\3\2\2\2")
+        buf.write("\u037b\u0379\3\2\2\2\u037c\u037e\7\17\2\2\u037d\u037c")
+        buf.write("\3\2\2\2\u037d\u037e\3\2\2\2\u037e\u037f\3\2\2\2\u037f")
+        buf.write("\u0380\7\f\2\2\u0380\u0381\3\2\2\2\u0381\u0382\br\2\2")
+        buf.write("\u0382\u00e4\3\2\2\2\'\2\u02b1\u02b3\u02b9\u02be\u02c3")
+        buf.write("\u02c8\u02ca\u02d4\u02d7\u02de\u02e1\u02e4\u02ea\u02ed")
+        buf.write("\u02f7\u02fc\u0302\u0306\u0309\u030f\u0312\u0315\u031a")
+        buf.write("\u031e\u0323\u0326\u0329\u032d\u0332\u0339\u0344\u035d")
+        buf.write("\u036b\u036f\u0379\u037d\3\2\3\2")
+        return buf.getvalue()
+
+
+class CLexer(Lexer):
+
+    atn = ATNDeserializer().deserialize(serializedATN())
+
+    decisionsToDFA = [ DFA(ds, i) for i, ds in enumerate(atn.decisionToState) ]
+
+    T__0 = 1
+    T__1 = 2
+    T__2 = 3
+    T__3 = 4
+    T__4 = 5
+    T__5 = 6
+    T__6 = 7
+    T__7 = 8
+    T__8 = 9
+    T__9 = 10
+    T__10 = 11
+    T__11 = 12
+    T__12 = 13
+    T__13 = 14
+    T__14 = 15
+    T__15 = 16
+    T__16 = 17
+    T__17 = 18
+    T__18 = 19
+    T__19 = 20
+    T__20 = 21
+    T__21 = 22
+    T__22 = 23
+    T__23 = 24
+    T__24 = 25
+    T__25 = 26
+    T__26 = 27
+    T__27 = 28
+    T__28 = 29
+    T__29 = 30
+    T__30 = 31
+    T__31 = 32
+    T__32 = 33
+    T__33 = 34
+    T__34 = 35
+    T__35 = 36
+    T__36 = 37
+    T__37 = 38
+    T__38 = 39
+    T__39 = 40
+    T__40 = 41
+    T__41 = 42
+    T__42 = 43
+    T__43 = 44
+    T__44 = 45
+    T__45 = 46
+    T__46 = 47
+    T__47 = 48
+    T__48 = 49
+    T__49 = 50
+    T__50 = 51
+    T__51 = 52
+    T__52 = 53
+    T__53 = 54
+    T__54 = 55
+    T__55 = 56
+    T__56 = 57
+    T__57 = 58
+    T__58 = 59
+    T__59 = 60
+    T__60 = 61
+    T__61 = 62
+    T__62 = 63
+    T__63 = 64
+    T__64 = 65
+    T__65 = 66
+    T__66 = 67
+    T__67 = 68
+    T__68 = 69
+    T__69 = 70
+    T__70 = 71
+    T__71 = 72
+    T__72 = 73
+    T__73 = 74
+    T__74 = 75
+    T__75 = 76
+    T__76 = 77
+    T__77 = 78
+    T__78 = 79
+    T__79 = 80
+    T__80 = 81
+    T__81 = 82
+    T__82 = 83
+    T__83 = 84
+    T__84 = 85
+    T__85 = 86
+    T__86 = 87
+    T__87 = 88
+    T__88 = 89
+    T__89 = 90
+    T__90 = 91
+    T__91 = 92
+    IDENTIFIER = 93
+    CHARACTER_LITERAL = 94
+    STRING_LITERAL = 95
+    HEX_LITERAL = 96
+    DECIMAL_LITERAL = 97
+    OCTAL_LITERAL = 98
+    FLOATING_POINT_LITERAL = 99
+    WS = 100
+    BS = 101
+    UnicodeVocabulary = 102
+    COMMENT = 103
+    LINE_COMMENT = 104
+    LINE_COMMAND = 105
+
+    channelNames = [ u"DEFAULT_TOKEN_CHANNEL", u"HIDDEN" ]
+
+    modeNames = [ "DEFAULT_MODE" ]
+
+    literalNames = [ "<INVALID>",
+            "'{'", "';'", "'typedef'", "','", "'='", "'extern'", "'static'",
+            "'auto'", "'register'", "'STATIC'", "'void'", "'char'", "'short'",
+            "'int'", "'long'", "'float'", "'double'", "'signed'", "'unsigned'",
+            "'}'", "'struct'", "'union'", "':'", "'enum'", "'const'", "'volatile'",
+            "'IN'", "'OUT'", "'OPTIONAL'", "'CONST'", "'UNALIGNED'", "'VOLATILE'",
+            "'GLOBAL_REMOVE_IF_UNREFERENCED'", "'EFIAPI'", "'EFI_BOOTSERVICE'",
+            "'EFI_RUNTIMESERVICE'", "'PACKED'", "'('", "')'", "'['", "']'",
+            "'*'", "'...'", "'+'", "'-'", "'/'", "'%'", "'++'", "'--'",
+            "'sizeof'", "'.'", "'->'", "'&'", "'~'", "'!'", "'*='", "'/='",
+            "'%='", "'+='", "'-='", "'<<='", "'>>='", "'&='", "'^='", "'|='",
+            "'?'", "'||'", "'&&'", "'|'", "'^'", "'=='", "'!='", "'<'",
+            "'>'", "'<='", "'>='", "'<<'", "'>>'", "'__asm__'", "'_asm'",
+            "'__asm'", "'case'", "'default'", "'if'", "'else'", "'switch'",
+            "'while'", "'do'", "'goto'", "'continue'", "'break'", "'return'" ]
+
+    symbolicNames = [ "<INVALID>",
+            "IDENTIFIER", "CHARACTER_LITERAL", "STRING_LITERAL", "HEX_LITERAL",
+            "DECIMAL_LITERAL", "OCTAL_LITERAL", "FLOATING_POINT_LITERAL",
+            "WS", "BS", "UnicodeVocabulary", "COMMENT", "LINE_COMMENT",
+            "LINE_COMMAND" ]
+
+    ruleNames = [ "T__0", "T__1", "T__2", "T__3", "T__4", "T__5", "T__6",
+                  "T__7", "T__8", "T__9", "T__10", "T__11", "T__12", "T__13",
+                  "T__14", "T__15", "T__16", "T__17", "T__18", "T__19",
+                  "T__20", "T__21", "T__22", "T__23", "T__24", "T__25",
+                  "T__26", "T__27", "T__28", "T__29", "T__30", "T__31",
+                  "T__32", "T__33", "T__34", "T__35", "T__36", "T__37",
+                  "T__38", "T__39", "T__40", "T__41", "T__42", "T__43",
+                  "T__44", "T__45", "T__46", "T__47", "T__48", "T__49",
+                  "T__50", "T__51", "T__52", "T__53", "T__54", "T__55",
+                  "T__56", "T__57", "T__58", "T__59", "T__60", "T__61",
+                  "T__62", "T__63", "T__64", "T__65", "T__66", "T__67",
+                  "T__68", "T__69", "T__70", "T__71", "T__72", "T__73",
+                  "T__74", "T__75", "T__76", "T__77", "T__78", "T__79",
+                  "T__80", "T__81", "T__82", "T__83", "T__84", "T__85",
+                  "T__86", "T__87", "T__88", "T__89", "T__90", "T__91",
+                  "IDENTIFIER", "LETTER", "CHARACTER_LITERAL", "STRING_LITERAL",
+                  "HEX_LITERAL", "DECIMAL_LITERAL", "OCTAL_LITERAL", "HexDigit",
+                  "IntegerTypeSuffix", "FLOATING_POINT_LITERAL", "Exponent",
+                  "FloatTypeSuffix", "EscapeSequence", "OctalEscape", "UnicodeEscape",
+                  "WS", "BS", "UnicodeVocabulary", "COMMENT", "LINE_COMMENT",
+                  "LINE_COMMAND" ]
+
+    grammarFileName = "C.g4"
+
+    # @param  output Type: TextIO (defaults to sys.stdout)
+    def __init__(self, input=None, output=sys.stdout):
+        super().__init__(input, output)
+        self.checkVersion("4.7.1")
+        self._interp = LexerATNSimulator(self, self.atn, self.decisionsToDFA, PredictionContextCache())
+        self._actions = None
+        self._predicates = None
+
+
+
+    def printTokenInfo(self,line,offset,tokenText):
+        print(str(line)+ ',' + str(offset) + ':' + str(tokenText))
+
+    def StorePredicateExpression(self,StartLine,StartOffset,EndLine,EndOffset,Text):
+        PredExp = CodeFragment.PredicateExpression(Text, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.PredicateExpressionList.append(PredExp)
+
+    def StoreEnumerationDefinition(self,StartLine,StartOffset,EndLine,EndOffset,Text):
+        EnumDef = CodeFragment.EnumerationDefinition(Text, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.EnumerationDefinitionList.append(EnumDef)
+
+    def StoreStructUnionDefinition(self,StartLine,StartOffset,EndLine,EndOffset,Text):
+        SUDef = CodeFragment.StructUnionDefinition(Text, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.StructUnionDefinitionList.append(SUDef)
+
+    def StoreTypedefDefinition(self,StartLine,StartOffset,EndLine,EndOffset,FromText,ToText):
+        Tdef = CodeFragment.TypedefDefinition(FromText, ToText, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.TypedefDefinitionList.append(Tdef)
+
+    def StoreFunctionDefinition(self,StartLine,StartOffset,EndLine,EndOffset,ModifierText,DeclText,LeftBraceLine,LeftBraceOffset,DeclLine,DeclOffset):
+        FuncDef = CodeFragment.FunctionDefinition(ModifierText, DeclText, (StartLine, StartOffset), (EndLine, EndOffset), (LeftBraceLine, LeftBraceOffset), (DeclLine, DeclOffset))
+        FileProfile.FunctionDefinitionList.append(FuncDef)
+
+    def StoreVariableDeclaration(self,StartLine,StartOffset,EndLine,EndOffset,ModifierText,DeclText):
+        VarDecl = CodeFragment.VariableDeclaration(ModifierText, DeclText, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.VariableDeclarationList.append(VarDecl)
+
+    def StoreFunctionCalling(self,StartLine,StartOffset,EndLine,EndOffset,FuncName,ParamList):
+        FuncCall = CodeFragment.FunctionCalling(FuncName, ParamList, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.FunctionCallingList.append(FuncCall)
+
+
+
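The Store* helpers above all follow the same pattern: wrap the matched text and its (line, offset) span in a CodeFragment object and append it to a module-level list in FileProfile for later ECC analysis. The following sketch mirrors that pattern with simplified stand-in classes (the real Ecc.CodeFragment and Ecc.FileProfile types carry more fields), so it runs without the BaseTools package:

```python
# Minimal stand-in for Ecc.CodeFragment.PredicateExpression: records the
# matched text plus begin/end positions as (line, offset) tuples.
class PredicateExpression:
    def __init__(self, text, begin, end):
        self.Content = text
        self.StartPos = begin
        self.EndPos = end

# Stand-in for the FileProfile accumulator the generated lexer appends to.
PredicateExpressionList = []

def StorePredicateExpression(StartLine, StartOffset, EndLine, EndOffset, Text):
    # Same shape as the generated helper: build the fragment, append it.
    PredExp = PredicateExpression(Text, (StartLine, StartOffset),
                                  (EndLine, EndOffset))
    PredicateExpressionList.append(PredExp)

# Hypothetical fragment, as the lexer might record for an if-condition.
StorePredicateExpression(10, 4, 10, 26, "Status == EFI_SUCCESS")
```

After lexing, ECC's checkers iterate these lists instead of re-parsing the source.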
diff --git a/BaseTools/Source/Python/Eot/CParser4/CListener.py b/BaseTools/Source/Python/Eot/CParser4/CListener.py
new file mode 100644
index 0000000000..f745c33aad
--- /dev/null
+++ b/BaseTools/Source/Python/Eot/CParser4/CListener.py
@@ -0,0 +1,814 @@
+# Generated from C.g4 by ANTLR 4.7.1
+from antlr4 import *
+if __name__ is not None and "." in __name__:
+    from .CParser import CParser
+else:
+    from CParser import CParser
+
+## @file
+# The file defines the parser for C source files.
+#
+# THIS FILE IS AUTO-GENERATED. PLEASE DO NOT MODIFY THIS FILE.
+# This file is generated by running:
+# java org.antlr.v4.Tool C.g4
+#
+# Copyright (c) 2009 - 2010, Intel Corporation  All rights reserved.
+#
+# This program and the accompanying materials are licensed and made available
+# under the terms and conditions of the BSD License which accompanies this
+# distribution.  The full text of the license may be found at:
+#   http://opensource.org/licenses/bsd-license.php
+#
+# THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+# WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+#
+##
+
+import Ecc.CodeFragment as CodeFragment
+import Ecc.FileProfile as FileProfile
+
+
+# This class defines a complete listener for a parse tree produced by CParser.
+class CListener(ParseTreeListener):
+
+    # Enter a parse tree produced by CParser#translation_unit.
+    # @param  ctx Type: CParser.Translation_unitContext
+    def enterTranslation_unit(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#translation_unit.
+    # @param  ctx Type: CParser.Translation_unitContext
+    def exitTranslation_unit(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#external_declaration.
+    # @param  ctx Type: CParser.External_declarationContext
+    def enterExternal_declaration(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#external_declaration.
+    # @param  ctx Type: CParser.External_declarationContext
+    def exitExternal_declaration(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#function_definition.
+    # @param  ctx Type: CParser.Function_definitionContext
+    def enterFunction_definition(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#function_definition.
+    # @param  ctx Type: CParser.Function_definitionContext
+    def exitFunction_definition(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#declaration_specifiers.
+    # @param  ctx Type: CParser.Declaration_specifiersContext
+    def enterDeclaration_specifiers(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#declaration_specifiers.
+    # @param  ctx Type: CParser.Declaration_specifiersContext
+    def exitDeclaration_specifiers(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#declaration.
+    # @param  ctx Type: CParser.DeclarationContext
+    def enterDeclaration(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#declaration.
+    # @param  ctx Type: CParser.DeclarationContext
+    def exitDeclaration(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#init_declarator_list.
+    # @param  ctx Type: CParser.Init_declarator_listContext
+    def enterInit_declarator_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#init_declarator_list.
+    # @param  ctx Type: CParser.Init_declarator_listContext
+    def exitInit_declarator_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#init_declarator.
+    # @param  ctx Type: CParser.Init_declaratorContext
+    def enterInit_declarator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#init_declarator.
+    # @param  ctx Type: CParser.Init_declaratorContext
+    def exitInit_declarator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#storage_class_specifier.
+    # @param  ctx Type: CParser.Storage_class_specifierContext
+    def enterStorage_class_specifier(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#storage_class_specifier.
+    # @param  ctx Type: CParser.Storage_class_specifierContext
+    def exitStorage_class_specifier(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#type_specifier.
+    # @param  ctx Type: CParser.Type_specifierContext
+    def enterType_specifier(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#type_specifier.
+    # @param  ctx Type: CParser.Type_specifierContext
+    def exitType_specifier(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#type_id.
+    # @param  ctx Type: CParser.Type_idContext
+    def enterType_id(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#type_id.
+    # @param  ctx Type: CParser.Type_idContext
+    def exitType_id(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#struct_or_union_specifier.
+    # @param  ctx Type: CParser.Struct_or_union_specifierContext
+    def enterStruct_or_union_specifier(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#struct_or_union_specifier.
+    # @param  ctx Type: CParser.Struct_or_union_specifierContext
+    def exitStruct_or_union_specifier(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#struct_or_union.
+    # @param  ctx Type: CParser.Struct_or_unionContext
+    def enterStruct_or_union(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#struct_or_union.
+    # @param  ctx Type: CParser.Struct_or_unionContext
+    def exitStruct_or_union(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#struct_declaration_list.
+    # @param  ctx Type: CParser.Struct_declaration_listContext
+    def enterStruct_declaration_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#struct_declaration_list.
+    # @param  ctx Type: CParser.Struct_declaration_listContext
+    def exitStruct_declaration_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#struct_declaration.
+    # @param  ctx Type: CParser.Struct_declarationContext
+    def enterStruct_declaration(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#struct_declaration.
+    # @param  ctx Type: CParser.Struct_declarationContext
+    def exitStruct_declaration(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#specifier_qualifier_list.
+    # @param  ctx Type: CParser.Specifier_qualifier_listContext
+    def enterSpecifier_qualifier_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#specifier_qualifier_list.
+    # @param  ctx Type: CParser.Specifier_qualifier_listContext
+    def exitSpecifier_qualifier_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#struct_declarator_list.
+    # @param  ctx Type: CParser.Struct_declarator_listContext
+    def enterStruct_declarator_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#struct_declarator_list.
+    # @param  ctx Type: CParser.Struct_declarator_listContext
+    def exitStruct_declarator_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#struct_declarator.
+    # @param  ctx Type: CParser.Struct_declaratorContext
+    def enterStruct_declarator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#struct_declarator.
+    # @param  ctx Type: CParser.Struct_declaratorContext
+    def exitStruct_declarator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#enum_specifier.
+    # @param  ctx Type: CParser.Enum_specifierContext
+    def enterEnum_specifier(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#enum_specifier.
+    # @param  ctx Type: CParser.Enum_specifierContext
+    def exitEnum_specifier(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#enumerator_list.
+    # @param  ctx Type: CParser.Enumerator_listContext
+    def enterEnumerator_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#enumerator_list.
+    # @param  ctx Type: CParser.Enumerator_listContext
+    def exitEnumerator_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#enumerator.
+    # @param  ctx Type: CParser.EnumeratorContext
+    def enterEnumerator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#enumerator.
+    # @param  ctx Type: CParser.EnumeratorContext
+    def exitEnumerator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#type_qualifier.
+    # @param  ctx Type: CParser.Type_qualifierContext
+    def enterType_qualifier(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#type_qualifier.
+    # @param  ctx Type: CParser.Type_qualifierContext
+    def exitType_qualifier(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#declarator.
+    # @param  ctx Type: CParser.DeclaratorContext
+    def enterDeclarator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#declarator.
+    # @param  ctx Type: CParser.DeclaratorContext
+    def exitDeclarator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#direct_declarator.
+    # @param  ctx Type: CParser.Direct_declaratorContext
+    def enterDirect_declarator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#direct_declarator.
+    # @param  ctx Type: CParser.Direct_declaratorContext
+    def exitDirect_declarator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#declarator_suffix.
+    # @param  ctx Type: CParser.Declarator_suffixContext
+    def enterDeclarator_suffix(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#declarator_suffix.
+    # @param  ctx Type: CParser.Declarator_suffixContext
+    def exitDeclarator_suffix(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#pointer.
+    # @param  ctx Type: CParser.PointerContext
+    def enterPointer(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#pointer.
+    # @param  ctx Type: CParser.PointerContext
+    def exitPointer(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#parameter_type_list.
+    # @param  ctx Type: CParser.Parameter_type_listContext
+    def enterParameter_type_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#parameter_type_list.
+    # @param  ctx Type: CParser.Parameter_type_listContext
+    def exitParameter_type_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#parameter_list.
+    # @param  ctx Type: CParser.Parameter_listContext
+    def enterParameter_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#parameter_list.
+    # @param  ctx Type: CParser.Parameter_listContext
+    def exitParameter_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#parameter_declaration.
+    # @param  ctx Type: CParser.Parameter_declarationContext
+    def enterParameter_declaration(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#parameter_declaration.
+    # @param  ctx Type: CParser.Parameter_declarationContext
+    def exitParameter_declaration(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#identifier_list.
+    # @param  ctx Type: CParser.Identifier_listContext
+    def enterIdentifier_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#identifier_list.
+    # @param  ctx Type: CParser.Identifier_listContext
+    def exitIdentifier_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#type_name.
+    # @param  ctx Type: CParser.Type_nameContext
+    def enterType_name(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#type_name.
+    # @param  ctx Type: CParser.Type_nameContext
+    def exitType_name(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#abstract_declarator.
+    # @param  ctx Type: CParser.Abstract_declaratorContext
+    def enterAbstract_declarator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#abstract_declarator.
+    # @param  ctx Type: CParser.Abstract_declaratorContext
+    def exitAbstract_declarator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#direct_abstract_declarator.
+    # @param  ctx Type: CParser.Direct_abstract_declaratorContext
+    def enterDirect_abstract_declarator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#direct_abstract_declarator.
+    # @param  ctx Type: CParser.Direct_abstract_declaratorContext
+    def exitDirect_abstract_declarator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#abstract_declarator_suffix.
+    # @param  ctx Type: CParser.Abstract_declarator_suffixContext
+    def enterAbstract_declarator_suffix(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#abstract_declarator_suffix.
+    # @param  ctx Type: CParser.Abstract_declarator_suffixContext
+    def exitAbstract_declarator_suffix(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#initializer.
+    # @param  ctx Type: CParser.InitializerContext
+    def enterInitializer(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#initializer.
+    # @param  ctx Type: CParser.InitializerContext
+    def exitInitializer(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#initializer_list.
+    # @param  ctx Type: CParser.Initializer_listContext
+    def enterInitializer_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#initializer_list.
+    # @param  ctx Type: CParser.Initializer_listContext
+    def exitInitializer_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#argument_expression_list.
+    # @param  ctx Type: CParser.Argument_expression_listContext
+    def enterArgument_expression_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#argument_expression_list.
+    # @param  ctx Type: CParser.Argument_expression_listContext
+    def exitArgument_expression_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#additive_expression.
+    # @param  ctx Type: CParser.Additive_expressionContext
+    def enterAdditive_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#additive_expression.
+    # @param  ctx Type: CParser.Additive_expressionContext
+    def exitAdditive_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#multiplicative_expression.
+    # @param  ctx Type: CParser.Multiplicative_expressionContext
+    def enterMultiplicative_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#multiplicative_expression.
+    # @param  ctx Type: CParser.Multiplicative_expressionContext
+    def exitMultiplicative_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#cast_expression.
+    # @param  ctx Type: CParser.Cast_expressionContext
+    def enterCast_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#cast_expression.
+    # @param  ctx Type: CParser.Cast_expressionContext
+    def exitCast_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#unary_expression.
+    # @param  ctx Type: CParser.Unary_expressionContext
+    def enterUnary_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#unary_expression.
+    # @param  ctx Type: CParser.Unary_expressionContext
+    def exitUnary_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#postfix_expression.
+    # @param  ctx Type: CParser.Postfix_expressionContext
+    def enterPostfix_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#postfix_expression.
+    # @param  ctx Type: CParser.Postfix_expressionContext
+    def exitPostfix_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#macro_parameter_list.
+    # @param  ctx Type: CParser.Macro_parameter_listContext
+    def enterMacro_parameter_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#macro_parameter_list.
+    # @param  ctx Type: CParser.Macro_parameter_listContext
+    def exitMacro_parameter_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#unary_operator.
+    # @param  ctx Type: CParser.Unary_operatorContext
+    def enterUnary_operator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#unary_operator.
+    # @param  ctx Type: CParser.Unary_operatorContext
+    def exitUnary_operator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#primary_expression.
+    # @param  ctx Type: CParser.Primary_expressionContext
+    def enterPrimary_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#primary_expression.
+    # @param  ctx Type: CParser.Primary_expressionContext
+    def exitPrimary_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#constant.
+    # @param  ctx Type: CParser.ConstantContext
+    def enterConstant(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#constant.
+    # @param  ctx Type: CParser.ConstantContext
+    def exitConstant(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#expression.
+    # @param  ctx Type: CParser.ExpressionContext
+    def enterExpression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#expression.
+    # @param  ctx Type: CParser.ExpressionContext
+    def exitExpression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#constant_expression.
+    # @param  ctx Type: CParser.Constant_expressionContext
+    def enterConstant_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#constant_expression.
+    # @param  ctx Type: CParser.Constant_expressionContext
+    def exitConstant_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#assignment_expression.
+    # @param  ctx Type: CParser.Assignment_expressionContext
+    def enterAssignment_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#assignment_expression.
+    # @param  ctx Type: CParser.Assignment_expressionContext
+    def exitAssignment_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#lvalue.
+    # @param  ctx Type: CParser.LvalueContext
+    def enterLvalue(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#lvalue.
+    # @param  ctx Type: CParser.LvalueContext
+    def exitLvalue(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#assignment_operator.
+    # @param  ctx Type: CParser.Assignment_operatorContext
+    def enterAssignment_operator(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#assignment_operator.
+    # @param  ctx Type: CParser.Assignment_operatorContext
+    def exitAssignment_operator(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#conditional_expression.
+    # @param  ctx Type: CParser.Conditional_expressionContext
+    def enterConditional_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#conditional_expression.
+    # @param  ctx Type: CParser.Conditional_expressionContext
+    def exitConditional_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#logical_or_expression.
+    # @param  ctx Type: CParser.Logical_or_expressionContext
+    def enterLogical_or_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#logical_or_expression.
+    # @param  ctx Type: CParser.Logical_or_expressionContext
+    def exitLogical_or_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#logical_and_expression.
+    # @param  ctx Type: CParser.Logical_and_expressionContext
+    def enterLogical_and_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#logical_and_expression.
+    # @param  ctx Type: CParser.Logical_and_expressionContext
+    def exitLogical_and_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#inclusive_or_expression.
+    # @param  ctx Type: CParser.Inclusive_or_expressionContext
+    def enterInclusive_or_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#inclusive_or_expression.
+    # @param  ctx Type: CParser.Inclusive_or_expressionContext
+    def exitInclusive_or_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#exclusive_or_expression.
+    # @param  ctx Type: CParser.Exclusive_or_expressionContext
+    def enterExclusive_or_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#exclusive_or_expression.
+    # @param  ctx Type: CParser.Exclusive_or_expressionContext
+    def exitExclusive_or_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#and_expression.
+    # @param  ctx Type: CParser.And_expressionContext
+    def enterAnd_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#and_expression.
+    # @param  ctx Type: CParser.And_expressionContext
+    def exitAnd_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#equality_expression.
+    # @param  ctx Type: CParser.Equality_expressionContext
+    def enterEquality_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#equality_expression.
+    # @param  ctx Type: CParser.Equality_expressionContext
+    def exitEquality_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#relational_expression.
+    # @param  ctx Type: CParser.Relational_expressionContext
+    def enterRelational_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#relational_expression.
+    # @param  ctx Type: CParser.Relational_expressionContext
+    def exitRelational_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#shift_expression.
+    # @param  ctx Type: CParser.Shift_expressionContext
+    def enterShift_expression(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#shift_expression.
+    # @param  ctx Type: CParser.Shift_expressionContext
+    def exitShift_expression(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#statement.
+    # @param  ctx Type: CParser.StatementContext
+    def enterStatement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#statement.
+    # @param  ctx Type: CParser.StatementContext
+    def exitStatement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#asm2_statement.
+    # @param  ctx Type: CParser.Asm2_statementContext
+    def enterAsm2_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#asm2_statement.
+    # @param  ctx Type: CParser.Asm2_statementContext
+    def exitAsm2_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#asm1_statement.
+    # @param  ctx Type: CParser.Asm1_statementContext
+    def enterAsm1_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#asm1_statement.
+    # @param  ctx Type: CParser.Asm1_statementContext
+    def exitAsm1_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#asm_statement.
+    # @param  ctx Type: CParser.Asm_statementContext
+    def enterAsm_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#asm_statement.
+    # @param  ctx Type: CParser.Asm_statementContext
+    def exitAsm_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#macro_statement.
+    # @param  ctx Type: CParser.Macro_statementContext
+    def enterMacro_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#macro_statement.
+    # @param  ctx Type: CParser.Macro_statementContext
+    def exitMacro_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#labeled_statement.
+    # @param  ctx Type: CParser.Labeled_statementContext
+    def enterLabeled_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#labeled_statement.
+    # @param  ctx Type: CParser.Labeled_statementContext
+    def exitLabeled_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#compound_statement.
+    # @param  ctx Type: CParser.Compound_statementContext
+    def enterCompound_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#compound_statement.
+    # @param  ctx Type: CParser.Compound_statementContext
+    def exitCompound_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#statement_list.
+    # @param  ctx Type: CParser.Statement_listContext
+    def enterStatement_list(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#statement_list.
+    # @param  ctx Type: CParser.Statement_listContext
+    def exitStatement_list(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#expression_statement.
+    # @param  ctx Type: CParser.Expression_statementContext
+    def enterExpression_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#expression_statement.
+    # @param  ctx Type: CParser.Expression_statementContext
+    def exitExpression_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#selection_statement.
+    # @param  ctx Type: CParser.Selection_statementContext
+    def enterSelection_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#selection_statement.
+    # @param  ctx Type: CParser.Selection_statementContext
+    def exitSelection_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#iteration_statement.
+    # @param  ctx Type: CParser.Iteration_statementContext
+    def enterIteration_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#iteration_statement.
+    # @param  ctx Type: CParser.Iteration_statementContext
+    def exitIteration_statement(self,ctx):
+        pass
+
+
+    # Enter a parse tree produced by CParser#jump_statement.
+    # @param  ctx Type: CParser.Jump_statementContext
+    def enterJump_statement(self,ctx):
+        pass
+
+    # Exit a parse tree produced by CParser#jump_statement.
+    # @param  ctx Type: CParser.Jump_statementContext
+    def exitJump_statement(self,ctx):
+        pass
+
+
diff --git a/BaseTools/Source/Python/Eot/CParser4/CParser.py b/BaseTools/Source/Python/Eot/CParser4/CParser.py
new file mode 100644
index 0000000000..08d8a423f4
--- /dev/null
+++ b/BaseTools/Source/Python/Eot/CParser4/CParser.py
@@ -0,0 +1,6279 @@
+# Generated from C.g4 by ANTLR 4.7.1
+# encoding: utf-8
+from antlr4 import *
+from io import StringIO
+from typing.io import TextIO
+import sys
+
+
+## @file
+# The file defines the parser for C source files.
+#
+# THIS FILE IS AUTO-GENERATED. PLEASE DO NOT MODIFY THIS FILE.
+# This file is generated by running:
+# java org.antlr.v4.Tool C.g4
+#
+# Copyright (c) 2009 - 2010, Intel Corporation  All rights reserved.
+#
+# This program and the accompanying materials are licensed and made available
+# under the terms and conditions of the BSD License which accompanies this
+# distribution.  The full text of the license may be found at:
+#   http://opensource.org/licenses/bsd-license.php
+#
+# THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+# WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+#
+##
+
+import Ecc.CodeFragment as CodeFragment
+import Ecc.FileProfile as FileProfile
+
+def serializedATN():
+    with StringIO() as buf:
+        buf.write("\3\u608b\ua72a\u8133\ub9ed\u417c\u3be7\u7786\u5964\3k")
+        buf.write("\u0380\4\2\t\2\4\3\t\3\4\4\t\4\4\5\t\5\4\6\t\6\4\7\t\7")
+        buf.write("\4\b\t\b\4\t\t\t\4\n\t\n\4\13\t\13\4\f\t\f\4\r\t\r\4\16")
+        buf.write("\t\16\4\17\t\17\4\20\t\20\4\21\t\21\4\22\t\22\4\23\t\23")
+        buf.write("\4\24\t\24\4\25\t\25\4\26\t\26\4\27\t\27\4\30\t\30\4\31")
+        buf.write("\t\31\4\32\t\32\4\33\t\33\4\34\t\34\4\35\t\35\4\36\t\36")
+        buf.write("\4\37\t\37\4 \t \4!\t!\4\"\t\"\4#\t#\4$\t$\4%\t%\4&\t")
+        buf.write("&\4\'\t\'\4(\t(\4)\t)\4*\t*\4+\t+\4,\t,\4-\t-\4.\t.\4")
+        buf.write("/\t/\4\60\t\60\4\61\t\61\4\62\t\62\4\63\t\63\4\64\t\64")
+        buf.write("\4\65\t\65\4\66\t\66\4\67\t\67\48\t8\49\t9\4:\t:\4;\t")
+        buf.write(";\4<\t<\4=\t=\4>\t>\4?\t?\4@\t@\4A\tA\4B\tB\4C\tC\4D\t")
+        buf.write("D\4E\tE\4F\tF\4G\tG\4H\tH\3\2\7\2\u0092\n\2\f\2\16\2\u0095")
+        buf.write("\13\2\3\3\5\3\u0098\n\3\3\3\3\3\7\3\u009c\n\3\f\3\16\3")
+        buf.write("\u009f\13\3\3\3\3\3\3\3\3\3\3\3\3\3\5\3\u00a7\n\3\5\3")
+        buf.write("\u00a9\n\3\3\4\5\4\u00ac\n\4\3\4\3\4\6\4\u00b0\n\4\r\4")
+        buf.write("\16\4\u00b1\3\4\3\4\3\4\5\4\u00b7\n\4\3\4\3\4\3\5\3\5")
+        buf.write("\3\5\6\5\u00be\n\5\r\5\16\5\u00bf\3\6\3\6\5\6\u00c4\n")
+        buf.write("\6\3\6\3\6\3\6\3\6\3\6\3\6\5\6\u00cc\n\6\3\6\3\6\3\6\5")
+        buf.write("\6\u00d1\n\6\3\7\3\7\3\7\7\7\u00d6\n\7\f\7\16\7\u00d9")
+        buf.write("\13\7\3\b\3\b\3\b\5\b\u00de\n\b\3\t\3\t\3\n\3\n\3\n\3")
+        buf.write("\n\3\n\3\n\3\n\3\n\3\n\3\n\3\n\3\n\3\n\3\n\3\n\3\n\3\n")
+        buf.write("\7\n\u00f3\n\n\f\n\16\n\u00f6\13\n\3\n\3\n\5\n\u00fa\n")
+        buf.write("\n\3\13\3\13\3\f\3\f\5\f\u0100\n\f\3\f\3\f\3\f\3\f\3\f")
+        buf.write("\3\f\3\f\5\f\u0109\n\f\3\r\3\r\3\16\6\16\u010e\n\16\r")
+        buf.write("\16\16\16\u010f\3\17\3\17\3\17\3\17\3\20\3\20\6\20\u0118")
+        buf.write("\n\20\r\20\16\20\u0119\3\21\3\21\3\21\7\21\u011f\n\21")
+        buf.write("\f\21\16\21\u0122\13\21\3\22\3\22\3\22\5\22\u0127\n\22")
+        buf.write("\3\22\3\22\5\22\u012b\n\22\3\23\3\23\3\23\3\23\5\23\u0131")
+        buf.write("\n\23\3\23\3\23\3\23\3\23\3\23\3\23\3\23\5\23\u013a\n")
+        buf.write("\23\3\23\3\23\3\23\3\23\5\23\u0140\n\23\3\24\3\24\3\24")
+        buf.write("\7\24\u0145\n\24\f\24\16\24\u0148\13\24\3\25\3\25\3\25")
+        buf.write("\5\25\u014d\n\25\3\26\3\26\3\27\5\27\u0152\n\27\3\27\5")
+        buf.write("\27\u0155\n\27\3\27\5\27\u0158\n\27\3\27\5\27\u015b\n")
+        buf.write("\27\3\27\3\27\5\27\u015f\n\27\3\30\3\30\7\30\u0163\n\30")
+        buf.write("\f\30\16\30\u0166\13\30\3\30\3\30\5\30\u016a\n\30\3\30")
+        buf.write("\3\30\3\30\6\30\u016f\n\30\r\30\16\30\u0170\5\30\u0173")
+        buf.write("\n\30\3\31\3\31\3\31\3\31\3\31\3\31\3\31\3\31\3\31\3\31")
+        buf.write("\3\31\3\31\3\31\3\31\3\31\3\31\5\31\u0185\n\31\3\32\3")
+        buf.write("\32\6\32\u0189\n\32\r\32\16\32\u018a\3\32\5\32\u018e\n")
+        buf.write("\32\3\32\3\32\3\32\5\32\u0193\n\32\3\33\3\33\3\33\5\33")
+        buf.write("\u0198\n\33\3\33\5\33\u019b\n\33\3\34\3\34\3\34\5\34\u01a0")
+        buf.write("\n\34\3\34\7\34\u01a3\n\34\f\34\16\34\u01a6\13\34\3\35")
+        buf.write("\3\35\3\35\7\35\u01ab\n\35\f\35\16\35\u01ae\13\35\3\35")
+        buf.write("\5\35\u01b1\n\35\3\35\7\35\u01b4\n\35\f\35\16\35\u01b7")
+        buf.write("\13\35\3\35\5\35\u01ba\n\35\3\36\3\36\3\36\7\36\u01bf")
+        buf.write("\n\36\f\36\16\36\u01c2\13\36\3\37\3\37\5\37\u01c6\n\37")
+        buf.write("\3\37\5\37\u01c9\n\37\3 \3 \5 \u01cd\n \3 \5 \u01d0\n")
+        buf.write(" \3!\3!\3!\3!\3!\5!\u01d7\n!\3!\7!\u01da\n!\f!\16!\u01dd")
+        buf.write("\13!\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\3\"\5")
+        buf.write("\"\u01eb\n\"\3#\3#\3#\3#\5#\u01f1\n#\3#\3#\5#\u01f5\n")
+        buf.write("#\3$\3$\3$\7$\u01fa\n$\f$\16$\u01fd\13$\3%\3%\5%\u0201")
+        buf.write("\n%\3%\3%\3%\5%\u0206\n%\7%\u0208\n%\f%\16%\u020b\13%")
+        buf.write("\3&\3&\3&\3&\3&\7&\u0212\n&\f&\16&\u0215\13&\3\'\3\'\3")
+        buf.write("\'\3\'\3\'\3\'\3\'\7\'\u021e\n\'\f\'\16\'\u0221\13\'\3")
+        buf.write("(\3(\3(\3(\3(\3(\5(\u0229\n(\3)\3)\3)\3)\3)\3)\3)\3)\3")
+        buf.write(")\3)\3)\3)\3)\3)\3)\5)\u023a\n)\3*\3*\3*\3*\3*\3*\3*\3")
+        buf.write("*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3*\3")
+        buf.write("*\3*\3*\3*\7*\u0259\n*\f*\16*\u025c\13*\3+\3+\3+\7+\u0261")
+        buf.write("\n+\f+\16+\u0264\13+\3,\3,\3-\3-\3-\3-\3-\3-\5-\u026e")
+        buf.write("\n-\3.\3.\3.\3.\3.\7.\u0275\n.\f.\16.\u0278\13.\3.\6.")
+        buf.write("\u027b\n.\r.\16.\u027c\6.\u027f\n.\r.\16.\u0280\3.\7.")
+        buf.write("\u0284\n.\f.\16.\u0287\13.\3.\5.\u028a\n.\3/\3/\3/\7/")
+        buf.write("\u028f\n/\f/\16/\u0292\13/\3\60\3\60\3\61\3\61\3\61\3")
+        buf.write("\61\3\61\5\61\u029b\n\61\3\62\3\62\3\63\3\63\3\64\3\64")
+        buf.write("\3\64\3\64\3\64\3\64\3\64\5\64\u02a8\n\64\3\65\3\65\3")
+        buf.write("\65\7\65\u02ad\n\65\f\65\16\65\u02b0\13\65\3\66\3\66\3")
+        buf.write("\66\7\66\u02b5\n\66\f\66\16\66\u02b8\13\66\3\67\3\67\3")
+        buf.write("\67\7\67\u02bd\n\67\f\67\16\67\u02c0\13\67\38\38\38\7")
+        buf.write("8\u02c5\n8\f8\168\u02c8\138\39\39\39\79\u02cd\n9\f9\16")
+        buf.write("9\u02d0\139\3:\3:\3:\7:\u02d5\n:\f:\16:\u02d8\13:\3;\3")
+        buf.write(";\3;\7;\u02dd\n;\f;\16;\u02e0\13;\3<\3<\3<\7<\u02e5\n")
+        buf.write("<\f<\16<\u02e8\13<\3=\3=\3=\3=\3=\3=\3=\3=\3=\3=\3=\5")
+        buf.write("=\u02f5\n=\3>\5>\u02f8\n>\3>\3>\3>\7>\u02fd\n>\f>\16>")
+        buf.write("\u0300\13>\3>\3>\3>\3?\3?\3?\7?\u0308\n?\f?\16?\u030b")
+        buf.write("\13?\3?\3?\3@\3@\3@\7@\u0312\n@\f@\16@\u0315\13@\3@\3")
+        buf.write("@\3A\3A\3A\7A\u031c\nA\fA\16A\u031f\13A\3A\5A\u0322\n")
+        buf.write("A\3A\5A\u0325\nA\3A\3A\3B\3B\3B\3B\3B\3B\3B\3B\3B\3B\3")
+        buf.write("B\5B\u0334\nB\3C\3C\7C\u0338\nC\fC\16C\u033b\13C\3C\5")
+        buf.write("C\u033e\nC\3C\3C\3D\6D\u0343\nD\rD\16D\u0344\3E\3E\3E")
+        buf.write("\3E\5E\u034b\nE\3F\3F\3F\3F\3F\3F\3F\3F\5F\u0355\nF\3")
+        buf.write("F\3F\3F\3F\3F\3F\5F\u035d\nF\3G\3G\3G\3G\3G\3G\3G\3G\3")
+        buf.write("G\3G\3G\3G\3G\3G\3G\3G\5G\u036f\nG\3H\3H\3H\3H\3H\3H\3")
+        buf.write("H\3H\3H\3H\3H\3H\3H\5H\u037e\nH\3H\2\2I\2\4\6\b\n\f\16")
+        buf.write("\20\22\24\26\30\32\34\36 \"$&(*,.\60\62\64\668:<>@BDF")
+        buf.write("HJLNPRTVXZ\\^`bdfhjlnprtvxz|~\u0080\u0082\u0084\u0086")
+        buf.write("\u0088\u008a\u008c\u008e\2\f\3\2\b\f\3\2\27\30\3\2\33")
+        buf.write("\'\5\2,,./\679\4\2\7\7:C\3\2IJ\3\2KN\3\2OP\3\2\4\4\3\2")
+        buf.write("\26\26\2\u03d8\2\u0093\3\2\2\2\4\u00a8\3\2\2\2\6\u00ab")
+        buf.write("\3\2\2\2\b\u00bd\3\2\2\2\n\u00d0\3\2\2\2\f\u00d2\3\2\2")
+        buf.write("\2\16\u00da\3\2\2\2\20\u00df\3\2\2\2\22\u00f9\3\2\2\2")
+        buf.write("\24\u00fb\3\2\2\2\26\u0108\3\2\2\2\30\u010a\3\2\2\2\32")
+        buf.write("\u010d\3\2\2\2\34\u0111\3\2\2\2\36\u0117\3\2\2\2 \u011b")
+        buf.write("\3\2\2\2\"\u012a\3\2\2\2$\u013f\3\2\2\2&\u0141\3\2\2\2")
+        buf.write("(\u0149\3\2\2\2*\u014e\3\2\2\2,\u015e\3\2\2\2.\u0172\3")
+        buf.write("\2\2\2\60\u0184\3\2\2\2\62\u0192\3\2\2\2\64\u0194\3\2")
+        buf.write("\2\2\66\u019c\3\2\2\28\u01b9\3\2\2\2:\u01bb\3\2\2\2<\u01c8")
+        buf.write("\3\2\2\2>\u01cf\3\2\2\2@\u01d6\3\2\2\2B\u01ea\3\2\2\2")
+        buf.write("D\u01f4\3\2\2\2F\u01f6\3\2\2\2H\u01fe\3\2\2\2J\u020c\3")
+        buf.write("\2\2\2L\u0216\3\2\2\2N\u0228\3\2\2\2P\u0239\3\2\2\2R\u023b")
+        buf.write("\3\2\2\2T\u025d\3\2\2\2V\u0265\3\2\2\2X\u026d\3\2\2\2")
+        buf.write("Z\u0289\3\2\2\2\\\u028b\3\2\2\2^\u0293\3\2\2\2`\u029a")
+        buf.write("\3\2\2\2b\u029c\3\2\2\2d\u029e\3\2\2\2f\u02a0\3\2\2\2")
+        buf.write("h\u02a9\3\2\2\2j\u02b1\3\2\2\2l\u02b9\3\2\2\2n\u02c1\3")
+        buf.write("\2\2\2p\u02c9\3\2\2\2r\u02d1\3\2\2\2t\u02d9\3\2\2\2v\u02e1")
+        buf.write("\3\2\2\2x\u02f4\3\2\2\2z\u02f7\3\2\2\2|\u0304\3\2\2\2")
+        buf.write("~\u030e\3\2\2\2\u0080\u0318\3\2\2\2\u0082\u0333\3\2\2")
+        buf.write("\2\u0084\u0335\3\2\2\2\u0086\u0342\3\2\2\2\u0088\u034a")
+        buf.write("\3\2\2\2\u008a\u035c\3\2\2\2\u008c\u036e\3\2\2\2\u008e")
+        buf.write("\u037d\3\2\2\2\u0090\u0092\5\4\3\2\u0091\u0090\3\2\2\2")
+        buf.write("\u0092\u0095\3\2\2\2\u0093\u0091\3\2\2\2\u0093\u0094\3")
+        buf.write("\2\2\2\u0094\3\3\2\2\2\u0095\u0093\3\2\2\2\u0096\u0098")
+        buf.write("\5\b\5\2\u0097\u0096\3\2\2\2\u0097\u0098\3\2\2\2\u0098")
+        buf.write("\u0099\3\2\2\2\u0099\u009d\5,\27\2\u009a\u009c\5\n\6\2")
+        buf.write("\u009b\u009a\3\2\2\2\u009c\u009f\3\2\2\2\u009d\u009b\3")
+        buf.write("\2\2\2\u009d\u009e\3\2\2\2\u009e\u00a0\3\2\2\2\u009f\u009d")
+        buf.write("\3\2\2\2\u00a0\u00a1\7\3\2\2\u00a1\u00a9\3\2\2\2\u00a2")
+        buf.write("\u00a9\5\6\4\2\u00a3\u00a9\5\n\6\2\u00a4\u00a6\5\u0080")
+        buf.write("A\2\u00a5\u00a7\7\4\2\2\u00a6\u00a5\3\2\2\2\u00a6\u00a7")
+        buf.write("\3\2\2\2\u00a7\u00a9\3\2\2\2\u00a8\u0097\3\2\2\2\u00a8")
+        buf.write("\u00a2\3\2\2\2\u00a8\u00a3\3\2\2\2\u00a8\u00a4\3\2\2\2")
+        buf.write("\u00a9\5\3\2\2\2\u00aa\u00ac\5\b\5\2\u00ab\u00aa\3\2\2")
+        buf.write("\2\u00ab\u00ac\3\2\2\2\u00ac\u00ad\3\2\2\2\u00ad\u00b6")
+        buf.write("\5,\27\2\u00ae\u00b0\5\n\6\2\u00af\u00ae\3\2\2\2\u00b0")
+        buf.write("\u00b1\3\2\2\2\u00b1\u00af\3\2\2\2\u00b1\u00b2\3\2\2\2")
+        buf.write("\u00b2\u00b3\3\2\2\2\u00b3\u00b4\5\u0084C\2\u00b4\u00b7")
+        buf.write("\3\2\2\2\u00b5\u00b7\5\u0084C\2\u00b6\u00af\3\2\2\2\u00b6")
+        buf.write("\u00b5\3\2\2\2\u00b7\u00b8\3\2\2\2\u00b8\u00b9\b\4\1\2")
+        buf.write("\u00b9\7\3\2\2\2\u00ba\u00be\5\20\t\2\u00bb\u00be\5\22")
+        buf.write("\n\2\u00bc\u00be\5*\26\2\u00bd\u00ba\3\2\2\2\u00bd\u00bb")
+        buf.write("\3\2\2\2\u00bd\u00bc\3\2\2\2\u00be\u00bf\3\2\2\2\u00bf")
+        buf.write("\u00bd\3\2\2\2\u00bf\u00c0\3\2\2\2\u00c0\t\3\2\2\2\u00c1")
+        buf.write("\u00c3\7\5\2\2\u00c2\u00c4\5\b\5\2\u00c3\u00c2\3\2\2\2")
+        buf.write("\u00c3\u00c4\3\2\2\2\u00c4\u00c5\3\2\2\2\u00c5\u00c6\5")
+        buf.write("\f\7\2\u00c6\u00c7\7\4\2\2\u00c7\u00c8\b\6\1\2\u00c8\u00d1")
+        buf.write("\3\2\2\2\u00c9\u00cb\5\b\5\2\u00ca\u00cc\5\f\7\2\u00cb")
+        buf.write("\u00ca\3\2\2\2\u00cb\u00cc\3\2\2\2\u00cc\u00cd\3\2\2\2")
+        buf.write("\u00cd\u00ce\7\4\2\2\u00ce\u00cf\b\6\1\2\u00cf\u00d1\3")
+        buf.write("\2\2\2\u00d0\u00c1\3\2\2\2\u00d0\u00c9\3\2\2\2\u00d1\13")
+        buf.write("\3\2\2\2\u00d2\u00d7\5\16\b\2\u00d3\u00d4\7\6\2\2\u00d4")
+        buf.write("\u00d6\5\16\b\2\u00d5\u00d3\3\2\2\2\u00d6\u00d9\3\2\2")
+        buf.write("\2\u00d7\u00d5\3\2\2\2\u00d7\u00d8\3\2\2\2\u00d8\r\3\2")
+        buf.write("\2\2\u00d9\u00d7\3\2\2\2\u00da\u00dd\5,\27\2\u00db\u00dc")
+        buf.write("\7\7\2\2\u00dc\u00de\5D#\2\u00dd\u00db\3\2\2\2\u00dd\u00de")
+        buf.write("\3\2\2\2\u00de\17\3\2\2\2\u00df\u00e0\t\2\2\2\u00e0\21")
+        buf.write("\3\2\2\2\u00e1\u00fa\7\r\2\2\u00e2\u00fa\7\16\2\2\u00e3")
+        buf.write("\u00fa\7\17\2\2\u00e4\u00fa\7\20\2\2\u00e5\u00fa\7\21")
+        buf.write("\2\2\u00e6\u00fa\7\22\2\2\u00e7\u00fa\7\23\2\2\u00e8\u00fa")
+        buf.write("\7\24\2\2\u00e9\u00fa\7\25\2\2\u00ea\u00eb\5\26\f\2\u00eb")
+        buf.write("\u00ec\b\n\1\2\u00ec\u00fa\3\2\2\2\u00ed\u00ee\5$\23\2")
+        buf.write("\u00ee\u00ef\b\n\1\2\u00ef\u00fa\3\2\2\2\u00f0\u00f4\7")
+        buf.write("_\2\2\u00f1\u00f3\5*\26\2\u00f2\u00f1\3\2\2\2\u00f3\u00f6")
+        buf.write("\3\2\2\2\u00f4\u00f2\3\2\2\2\u00f4\u00f5\3\2\2\2\u00f5")
+        buf.write("\u00f7\3\2\2\2\u00f6\u00f4\3\2\2\2\u00f7\u00fa\5,\27\2")
+        buf.write("\u00f8\u00fa\5\24\13\2\u00f9\u00e1\3\2\2\2\u00f9\u00e2")
+        buf.write("\3\2\2\2\u00f9\u00e3\3\2\2\2\u00f9\u00e4\3\2\2\2\u00f9")
+        buf.write("\u00e5\3\2\2\2\u00f9\u00e6\3\2\2\2\u00f9\u00e7\3\2\2\2")
+        buf.write("\u00f9\u00e8\3\2\2\2\u00f9\u00e9\3\2\2\2\u00f9\u00ea\3")
+        buf.write("\2\2\2\u00f9\u00ed\3\2\2\2\u00f9\u00f0\3\2\2\2\u00f9\u00f8")
+        buf.write("\3\2\2\2\u00fa\23\3\2\2\2\u00fb\u00fc\7_\2\2\u00fc\25")
+        buf.write("\3\2\2\2\u00fd\u00ff\5\30\r\2\u00fe\u0100\7_\2\2\u00ff")
+        buf.write("\u00fe\3\2\2\2\u00ff\u0100\3\2\2\2\u0100\u0101\3\2\2\2")
+        buf.write("\u0101\u0102\7\3\2\2\u0102\u0103\5\32\16\2\u0103\u0104")
+        buf.write("\7\26\2\2\u0104\u0109\3\2\2\2\u0105\u0106\5\30\r\2\u0106")
+        buf.write("\u0107\7_\2\2\u0107\u0109\3\2\2\2\u0108\u00fd\3\2\2\2")
+        buf.write("\u0108\u0105\3\2\2\2\u0109\27\3\2\2\2\u010a\u010b\t\3")
+        buf.write("\2\2\u010b\31\3\2\2\2\u010c\u010e\5\34\17\2\u010d\u010c")
+        buf.write("\3\2\2\2\u010e\u010f\3\2\2\2\u010f\u010d\3\2\2\2\u010f")
+        buf.write("\u0110\3\2\2\2\u0110\33\3\2\2\2\u0111\u0112\5\36\20\2")
+        buf.write("\u0112\u0113\5 \21\2\u0113\u0114\7\4\2\2\u0114\35\3\2")
+        buf.write("\2\2\u0115\u0118\5*\26\2\u0116\u0118\5\22\n\2\u0117\u0115")
+        buf.write("\3\2\2\2\u0117\u0116\3\2\2\2\u0118\u0119\3\2\2\2\u0119")
+        buf.write("\u0117\3\2\2\2\u0119\u011a\3\2\2\2\u011a\37\3\2\2\2\u011b")
+        buf.write("\u0120\5\"\22\2\u011c\u011d\7\6\2\2\u011d\u011f\5\"\22")
+        buf.write("\2\u011e\u011c\3\2\2\2\u011f\u0122\3\2\2\2\u0120\u011e")
+        buf.write("\3\2\2\2\u0120\u0121\3\2\2\2\u0121!\3\2\2\2\u0122\u0120")
+        buf.write("\3\2\2\2\u0123\u0126\5,\27\2\u0124\u0125\7\31\2\2\u0125")
+        buf.write("\u0127\5^\60\2\u0126\u0124\3\2\2\2\u0126\u0127\3\2\2\2")
+        buf.write("\u0127\u012b\3\2\2\2\u0128\u0129\7\31\2\2\u0129\u012b")
+        buf.write("\5^\60\2\u012a\u0123\3\2\2\2\u012a\u0128\3\2\2\2\u012b")
+        buf.write("#\3\2\2\2\u012c\u012d\7\32\2\2\u012d\u012e\7\3\2\2\u012e")
+        buf.write("\u0130\5&\24\2\u012f\u0131\7\6\2\2\u0130\u012f\3\2\2\2")
+        buf.write("\u0130\u0131\3\2\2\2\u0131\u0132\3\2\2\2\u0132\u0133\7")
+        buf.write("\26\2\2\u0133\u0140\3\2\2\2\u0134\u0135\7\32\2\2\u0135")
+        buf.write("\u0136\7_\2\2\u0136\u0137\7\3\2\2\u0137\u0139\5&\24\2")
+        buf.write("\u0138\u013a\7\6\2\2\u0139\u0138\3\2\2\2\u0139\u013a\3")
+        buf.write("\2\2\2\u013a\u013b\3\2\2\2\u013b\u013c\7\26\2\2\u013c")
+        buf.write("\u0140\3\2\2\2\u013d\u013e\7\32\2\2\u013e\u0140\7_\2\2")
+        buf.write("\u013f\u012c\3\2\2\2\u013f\u0134\3\2\2\2\u013f\u013d\3")
+        buf.write("\2\2\2\u0140%\3\2\2\2\u0141\u0146\5(\25\2\u0142\u0143")
+        buf.write("\7\6\2\2\u0143\u0145\5(\25\2\u0144\u0142\3\2\2\2\u0145")
+        buf.write("\u0148\3\2\2\2\u0146\u0144\3\2\2\2\u0146\u0147\3\2\2\2")
+        buf.write("\u0147\'\3\2\2\2\u0148\u0146\3\2\2\2\u0149\u014c\7_\2")
+        buf.write("\2\u014a\u014b\7\7\2\2\u014b\u014d\5^\60\2\u014c\u014a")
+        buf.write("\3\2\2\2\u014c\u014d\3\2\2\2\u014d)\3\2\2\2\u014e\u014f")
+        buf.write("\t\4\2\2\u014f+\3\2\2\2\u0150\u0152\5\62\32\2\u0151\u0150")
+        buf.write("\3\2\2\2\u0151\u0152\3\2\2\2\u0152\u0154\3\2\2\2\u0153")
+        buf.write("\u0155\7$\2\2\u0154\u0153\3\2\2\2\u0154\u0155\3\2\2\2")
+        buf.write("\u0155\u0157\3\2\2\2\u0156\u0158\7%\2\2\u0157\u0156\3")
+        buf.write("\2\2\2\u0157\u0158\3\2\2\2\u0158\u015a\3\2\2\2\u0159\u015b")
+        buf.write("\7&\2\2\u015a\u0159\3\2\2\2\u015a\u015b\3\2\2\2\u015b")
+        buf.write("\u015c\3\2\2\2\u015c\u015f\5.\30\2\u015d\u015f\5\62\32")
+        buf.write("\2\u015e\u0151\3\2\2\2\u015e\u015d\3\2\2\2\u015f-\3\2")
+        buf.write("\2\2\u0160\u0164\7_\2\2\u0161\u0163\5\60\31\2\u0162\u0161")
+        buf.write("\3\2\2\2\u0163\u0166\3\2\2\2\u0164\u0162\3\2\2\2\u0164")
+        buf.write("\u0165\3\2\2\2\u0165\u0173\3\2\2\2\u0166\u0164\3\2\2\2")
+        buf.write("\u0167\u0169\7(\2\2\u0168\u016a\7$\2\2\u0169\u0168\3\2")
+        buf.write("\2\2\u0169\u016a\3\2\2\2\u016a\u016b\3\2\2\2\u016b\u016c")
+        buf.write("\5,\27\2\u016c\u016e\7)\2\2\u016d\u016f\5\60\31\2\u016e")
+        buf.write("\u016d\3\2\2\2\u016f\u0170\3\2\2\2\u0170\u016e\3\2\2\2")
+        buf.write("\u0170\u0171\3\2\2\2\u0171\u0173\3\2\2\2\u0172\u0160\3")
+        buf.write("\2\2\2\u0172\u0167\3\2\2\2\u0173/\3\2\2\2\u0174\u0175")
+        buf.write("\7*\2\2\u0175\u0176\5^\60\2\u0176\u0177\7+\2\2\u0177\u0185")
+        buf.write("\3\2\2\2\u0178\u0179\7*\2\2\u0179\u0185\7+\2\2\u017a\u017b")
+        buf.write("\7(\2\2\u017b\u017c\5\64\33\2\u017c\u017d\7)\2\2\u017d")
+        buf.write("\u0185\3\2\2\2\u017e\u017f\7(\2\2\u017f\u0180\5:\36\2")
+        buf.write("\u0180\u0181\7)\2\2\u0181\u0185\3\2\2\2\u0182\u0183\7")
+        buf.write("(\2\2\u0183\u0185\7)\2\2\u0184\u0174\3\2\2\2\u0184\u0178")
+        buf.write("\3\2\2\2\u0184\u017a\3\2\2\2\u0184\u017e\3\2\2\2\u0184")
+        buf.write("\u0182\3\2\2\2\u0185\61\3\2\2\2\u0186\u0188\7,\2\2\u0187")
+        buf.write("\u0189\5*\26\2\u0188\u0187\3\2\2\2\u0189\u018a\3\2\2\2")
+        buf.write("\u018a\u0188\3\2\2\2\u018a\u018b\3\2\2\2\u018b\u018d\3")
+        buf.write("\2\2\2\u018c\u018e\5\62\32\2\u018d\u018c\3\2\2\2\u018d")
+        buf.write("\u018e\3\2\2\2\u018e\u0193\3\2\2\2\u018f\u0190\7,\2\2")
+        buf.write("\u0190\u0193\5\62\32\2\u0191\u0193\7,\2\2\u0192\u0186")
+        buf.write("\3\2\2\2\u0192\u018f\3\2\2\2\u0192\u0191\3\2\2\2\u0193")
+        buf.write("\63\3\2\2\2\u0194\u019a\5\66\34\2\u0195\u0197\7\6\2\2")
+        buf.write("\u0196\u0198\7\37\2\2\u0197\u0196\3\2\2\2\u0197\u0198")
+        buf.write("\3\2\2\2\u0198\u0199\3\2\2\2\u0199\u019b\7-\2\2\u019a")
+        buf.write("\u0195\3\2\2\2\u019a\u019b\3\2\2\2\u019b\65\3\2\2\2\u019c")
+        buf.write("\u01a4\58\35\2\u019d\u019f\7\6\2\2\u019e\u01a0\7\37\2")
+        buf.write("\2\u019f\u019e\3\2\2\2\u019f\u01a0\3\2\2\2\u01a0\u01a1")
+        buf.write("\3\2\2\2\u01a1\u01a3\58\35\2\u01a2\u019d\3\2\2\2\u01a3")
+        buf.write("\u01a6\3\2\2\2\u01a4\u01a2\3\2\2\2\u01a4\u01a5\3\2\2\2")
+        buf.write("\u01a5\67\3\2\2\2\u01a6\u01a4\3\2\2\2\u01a7\u01ac\5\b")
+        buf.write("\5\2\u01a8\u01ab\5,\27\2\u01a9\u01ab\5> \2\u01aa\u01a8")
+        buf.write("\3\2\2\2\u01aa\u01a9\3\2\2\2\u01ab\u01ae\3\2\2\2\u01ac")
+        buf.write("\u01aa\3\2\2\2\u01ac\u01ad\3\2\2\2\u01ad\u01b0\3\2\2\2")
+        buf.write("\u01ae\u01ac\3\2\2\2\u01af\u01b1\7\37\2\2\u01b0\u01af")
+        buf.write("\3\2\2\2\u01b0\u01b1\3\2\2\2\u01b1\u01ba\3\2\2\2\u01b2")
+        buf.write("\u01b4\5\62\32\2\u01b3\u01b2\3\2\2\2\u01b4\u01b7\3\2\2")
+        buf.write("\2\u01b5\u01b3\3\2\2\2\u01b5\u01b6\3\2\2\2\u01b6\u01b8")
+        buf.write("\3\2\2\2\u01b7\u01b5\3\2\2\2\u01b8\u01ba\7_\2\2\u01b9")
+        buf.write("\u01a7\3\2\2\2\u01b9\u01b5\3\2\2\2\u01ba9\3\2\2\2\u01bb")
+        buf.write("\u01c0\7_\2\2\u01bc\u01bd\7\6\2\2\u01bd\u01bf\7_\2\2\u01be")
+        buf.write("\u01bc\3\2\2\2\u01bf\u01c2\3\2\2\2\u01c0\u01be\3\2\2\2")
+        buf.write("\u01c0\u01c1\3\2\2\2\u01c1;\3\2\2\2\u01c2\u01c0\3\2\2")
+        buf.write("\2\u01c3\u01c5\5\36\20\2\u01c4\u01c6\5> \2\u01c5\u01c4")
+        buf.write("\3\2\2\2\u01c5\u01c6\3\2\2\2\u01c6\u01c9\3\2\2\2\u01c7")
+        buf.write("\u01c9\5\24\13\2\u01c8\u01c3\3\2\2\2\u01c8\u01c7\3\2\2")
+        buf.write("\2\u01c9=\3\2\2\2\u01ca\u01cc\5\62\32\2\u01cb\u01cd\5")
+        buf.write("@!\2\u01cc\u01cb\3\2\2\2\u01cc\u01cd\3\2\2\2\u01cd\u01d0")
+        buf.write("\3\2\2\2\u01ce\u01d0\5@!\2\u01cf\u01ca\3\2\2\2\u01cf\u01ce")
+        buf.write("\3\2\2\2\u01d0?\3\2\2\2\u01d1\u01d2\7(\2\2\u01d2\u01d3")
+        buf.write("\5> \2\u01d3\u01d4\7)\2\2\u01d4\u01d7\3\2\2\2\u01d5\u01d7")
+        buf.write("\5B\"\2\u01d6\u01d1\3\2\2\2\u01d6\u01d5\3\2\2\2\u01d7")
+        buf.write("\u01db\3\2\2\2\u01d8\u01da\5B\"\2\u01d9\u01d8\3\2\2\2")
+        buf.write("\u01da\u01dd\3\2\2\2\u01db\u01d9\3\2\2\2\u01db\u01dc\3")
+        buf.write("\2\2\2\u01dcA\3\2\2\2\u01dd\u01db\3\2\2\2\u01de\u01df")
+        buf.write("\7*\2\2\u01df\u01eb\7+\2\2\u01e0\u01e1\7*\2\2\u01e1\u01e2")
+        buf.write("\5^\60\2\u01e2\u01e3\7+\2\2\u01e3\u01eb\3\2\2\2\u01e4")
+        buf.write("\u01e5\7(\2\2\u01e5\u01eb\7)\2\2\u01e6\u01e7\7(\2\2\u01e7")
+        buf.write("\u01e8\5\64\33\2\u01e8\u01e9\7)\2\2\u01e9\u01eb\3\2\2")
+        buf.write("\2\u01ea\u01de\3\2\2\2\u01ea\u01e0\3\2\2\2\u01ea\u01e4")
+        buf.write("\3\2\2\2\u01ea\u01e6\3\2\2\2\u01ebC\3\2\2\2\u01ec\u01f5")
+        buf.write("\5`\61\2\u01ed\u01ee\7\3\2\2\u01ee\u01f0\5F$\2\u01ef\u01f1")
+        buf.write("\7\6\2\2\u01f0\u01ef\3\2\2\2\u01f0\u01f1\3\2\2\2\u01f1")
+        buf.write("\u01f2\3\2\2\2\u01f2\u01f3\7\26\2\2\u01f3\u01f5\3\2\2")
+        buf.write("\2\u01f4\u01ec\3\2\2\2\u01f4\u01ed\3\2\2\2\u01f5E\3\2")
+        buf.write("\2\2\u01f6\u01fb\5D#\2\u01f7\u01f8\7\6\2\2\u01f8\u01fa")
+        buf.write("\5D#\2\u01f9\u01f7\3\2\2\2\u01fa\u01fd\3\2\2\2\u01fb\u01f9")
+        buf.write("\3\2\2\2\u01fb\u01fc\3\2\2\2\u01fcG\3\2\2\2\u01fd\u01fb")
+        buf.write("\3\2\2\2\u01fe\u0200\5`\61\2\u01ff\u0201\7\37\2\2\u0200")
+        buf.write("\u01ff\3\2\2\2\u0200\u0201\3\2\2\2\u0201\u0209\3\2\2\2")
+        buf.write("\u0202\u0203\7\6\2\2\u0203\u0205\5`\61\2\u0204\u0206\7")
+        buf.write("\37\2\2\u0205\u0204\3\2\2\2\u0205\u0206\3\2\2\2\u0206")
+        buf.write("\u0208\3\2\2\2\u0207\u0202\3\2\2\2\u0208\u020b\3\2\2\2")
+        buf.write("\u0209\u0207\3\2\2\2\u0209\u020a\3\2\2\2\u020aI\3\2\2")
+        buf.write("\2\u020b\u0209\3\2\2\2\u020c\u0213\5L\'\2\u020d\u020e")
+        buf.write("\7.\2\2\u020e\u0212\5L\'\2\u020f\u0210\7/\2\2\u0210\u0212")
+        buf.write("\5L\'\2\u0211\u020d\3\2\2\2\u0211\u020f\3\2\2\2\u0212")
+        buf.write("\u0215\3\2\2\2\u0213\u0211\3\2\2\2\u0213\u0214\3\2\2\2")
+        buf.write("\u0214K\3\2\2\2\u0215\u0213\3\2\2\2\u0216\u021f\5N(\2")
+        buf.write("\u0217\u0218\7,\2\2\u0218\u021e\5N(\2\u0219\u021a\7\60")
+        buf.write("\2\2\u021a\u021e\5N(\2\u021b\u021c\7\61\2\2\u021c\u021e")
+        buf.write("\5N(\2\u021d\u0217\3\2\2\2\u021d\u0219\3\2\2\2\u021d\u021b")
+        buf.write("\3\2\2\2\u021e\u0221\3\2\2\2\u021f\u021d\3\2\2\2\u021f")
+        buf.write("\u0220\3\2\2\2\u0220M\3\2\2\2\u0221\u021f\3\2\2\2\u0222")
+        buf.write("\u0223\7(\2\2\u0223\u0224\5<\37\2\u0224\u0225\7)\2\2\u0225")
+        buf.write("\u0226\5N(\2\u0226\u0229\3\2\2\2\u0227\u0229\5P)\2\u0228")
+        buf.write("\u0222\3\2\2\2\u0228\u0227\3\2\2\2\u0229O\3\2\2\2\u022a")
+        buf.write("\u023a\5R*\2\u022b\u022c\7\62\2\2\u022c\u023a\5P)\2\u022d")
+        buf.write("\u022e\7\63\2\2\u022e\u023a\5P)\2\u022f\u0230\5V,\2\u0230")
+        buf.write("\u0231\5N(\2\u0231\u023a\3\2\2\2\u0232\u0233\7\64\2\2")
+        buf.write("\u0233\u023a\5P)\2\u0234\u0235\7\64\2\2\u0235\u0236\7")
+        buf.write("(\2\2\u0236\u0237\5<\37\2\u0237\u0238\7)\2\2\u0238\u023a")
+        buf.write("\3\2\2\2\u0239\u022a\3\2\2\2\u0239\u022b\3\2\2\2\u0239")
+        buf.write("\u022d\3\2\2\2\u0239\u022f\3\2\2\2\u0239\u0232\3\2\2\2")
+        buf.write("\u0239\u0234\3\2\2\2\u023aQ\3\2\2\2\u023b\u023c\5X-\2")
+        buf.write("\u023c\u025a\b*\1\2\u023d\u023e\7*\2\2\u023e\u023f\5\\")
+        buf.write("/\2\u023f\u0240\7+\2\2\u0240\u0259\3\2\2\2\u0241\u0242")
+        buf.write("\7(\2\2\u0242\u0243\7)\2\2\u0243\u0259\b*\1\2\u0244\u0245")
+        buf.write("\7(\2\2\u0245\u0246\5H%\2\u0246\u0247\7)\2\2\u0247\u0248")
+        buf.write("\b*\1\2\u0248\u0259\3\2\2\2\u0249\u024a\7(\2\2\u024a\u024b")
+        buf.write("\5T+\2\u024b\u024c\7)\2\2\u024c\u0259\3\2\2\2\u024d\u024e")
+        buf.write("\7\65\2\2\u024e\u024f\7_\2\2\u024f\u0259\b*\1\2\u0250")
+        buf.write("\u0251\7,\2\2\u0251\u0252\7_\2\2\u0252\u0259\b*\1\2\u0253")
+        buf.write("\u0254\7\66\2\2\u0254\u0255\7_\2\2\u0255\u0259\b*\1\2")
+        buf.write("\u0256\u0259\7\62\2\2\u0257\u0259\7\63\2\2\u0258\u023d")
+        buf.write("\3\2\2\2\u0258\u0241\3\2\2\2\u0258\u0244\3\2\2\2\u0258")
+        buf.write("\u0249\3\2\2\2\u0258\u024d\3\2\2\2\u0258\u0250\3\2\2\2")
+        buf.write("\u0258\u0253\3\2\2\2\u0258\u0256\3\2\2\2\u0258\u0257\3")
+        buf.write("\2\2\2\u0259\u025c\3\2\2\2\u025a\u0258\3\2\2\2\u025a\u025b")
+        buf.write("\3\2\2\2\u025bS\3\2\2\2\u025c\u025a\3\2\2\2\u025d\u0262")
+        buf.write("\58\35\2\u025e\u025f\7\6\2\2\u025f\u0261\58\35\2\u0260")
+        buf.write("\u025e\3\2\2\2\u0261\u0264\3\2\2\2\u0262\u0260\3\2\2\2")
+        buf.write("\u0262\u0263\3\2\2\2\u0263U\3\2\2\2\u0264\u0262\3\2\2")
+        buf.write("\2\u0265\u0266\t\5\2\2\u0266W\3\2\2\2\u0267\u026e\7_\2")
+        buf.write("\2\u0268\u026e\5Z.\2\u0269\u026a\7(\2\2\u026a\u026b\5")
+        buf.write("\\/\2\u026b\u026c\7)\2\2\u026c\u026e\3\2\2\2\u026d\u0267")
+        buf.write("\3\2\2\2\u026d\u0268\3\2\2\2\u026d\u0269\3\2\2\2\u026e")
+        buf.write("Y\3\2\2\2\u026f\u028a\7b\2\2\u0270\u028a\7d\2\2\u0271")
+        buf.write("\u028a\7c\2\2\u0272\u028a\7`\2\2\u0273\u0275\7_\2\2\u0274")
+        buf.write("\u0273\3\2\2\2\u0275\u0278\3\2\2\2\u0276\u0274\3\2\2\2")
+        buf.write("\u0276\u0277\3\2\2\2\u0277\u027a\3\2\2\2\u0278\u0276\3")
+        buf.write("\2\2\2\u0279\u027b\7a\2\2\u027a\u0279\3\2\2\2\u027b\u027c")
+        buf.write("\3\2\2\2\u027c\u027a\3\2\2\2\u027c\u027d\3\2\2\2\u027d")
+        buf.write("\u027f\3\2\2\2\u027e\u0276\3\2\2\2\u027f\u0280\3\2\2\2")
+        buf.write("\u0280\u027e\3\2\2\2\u0280\u0281\3\2\2\2\u0281\u0285\3")
+        buf.write("\2\2\2\u0282\u0284\7_\2\2\u0283\u0282\3\2\2\2\u0284\u0287")
+        buf.write("\3\2\2\2\u0285\u0283\3\2\2\2\u0285\u0286\3\2\2\2\u0286")
+        buf.write("\u028a\3\2\2\2\u0287\u0285\3\2\2\2\u0288\u028a\7e\2\2")
+        buf.write("\u0289\u026f\3\2\2\2\u0289\u0270\3\2\2\2\u0289\u0271\3")
+        buf.write("\2\2\2\u0289\u0272\3\2\2\2\u0289\u027e\3\2\2\2\u0289\u0288")
+        buf.write("\3\2\2\2\u028a[\3\2\2\2\u028b\u0290\5`\61\2\u028c\u028d")
+        buf.write("\7\6\2\2\u028d\u028f\5`\61\2\u028e\u028c\3\2\2\2\u028f")
+        buf.write("\u0292\3\2\2\2\u0290\u028e\3\2\2\2\u0290\u0291\3\2\2\2")
+        buf.write("\u0291]\3\2\2\2\u0292\u0290\3\2\2\2\u0293\u0294\5f\64")
+        buf.write("\2\u0294_\3\2\2\2\u0295\u0296\5b\62\2\u0296\u0297\5d\63")
+        buf.write("\2\u0297\u0298\5`\61\2\u0298\u029b\3\2\2\2\u0299\u029b")
+        buf.write("\5f\64\2\u029a\u0295\3\2\2\2\u029a\u0299\3\2\2\2\u029b")
+        buf.write("a\3\2\2\2\u029c\u029d\5P)\2\u029dc\3\2\2\2\u029e\u029f")
+        buf.write("\t\6\2\2\u029fe\3\2\2\2\u02a0\u02a7\5h\65\2\u02a1\u02a2")
+        buf.write("\7D\2\2\u02a2\u02a3\5\\/\2\u02a3\u02a4\7\31\2\2\u02a4")
+        buf.write("\u02a5\5f\64\2\u02a5\u02a6\b\64\1\2\u02a6\u02a8\3\2\2")
+        buf.write("\2\u02a7\u02a1\3\2\2\2\u02a7\u02a8\3\2\2\2\u02a8g\3\2")
+        buf.write("\2\2\u02a9\u02ae\5j\66\2\u02aa\u02ab\7E\2\2\u02ab\u02ad")
+        buf.write("\5j\66\2\u02ac\u02aa\3\2\2\2\u02ad\u02b0\3\2\2\2\u02ae")
+        buf.write("\u02ac\3\2\2\2\u02ae\u02af\3\2\2\2\u02afi\3\2\2\2\u02b0")
+        buf.write("\u02ae\3\2\2\2\u02b1\u02b6\5l\67\2\u02b2\u02b3\7F\2\2")
+        buf.write("\u02b3\u02b5\5l\67\2\u02b4\u02b2\3\2\2\2\u02b5\u02b8\3")
+        buf.write("\2\2\2\u02b6\u02b4\3\2\2\2\u02b6\u02b7\3\2\2\2\u02b7k")
+        buf.write("\3\2\2\2\u02b8\u02b6\3\2\2\2\u02b9\u02be\5n8\2\u02ba\u02bb")
+        buf.write("\7G\2\2\u02bb\u02bd\5n8\2\u02bc\u02ba\3\2\2\2\u02bd\u02c0")
+        buf.write("\3\2\2\2\u02be\u02bc\3\2\2\2\u02be\u02bf\3\2\2\2\u02bf")
+        buf.write("m\3\2\2\2\u02c0\u02be\3\2\2\2\u02c1\u02c6\5p9\2\u02c2")
+        buf.write("\u02c3\7H\2\2\u02c3\u02c5\5p9\2\u02c4\u02c2\3\2\2\2\u02c5")
+        buf.write("\u02c8\3\2\2\2\u02c6\u02c4\3\2\2\2\u02c6\u02c7\3\2\2\2")
+        buf.write("\u02c7o\3\2\2\2\u02c8\u02c6\3\2\2\2\u02c9\u02ce\5r:\2")
+        buf.write("\u02ca\u02cb\7\67\2\2\u02cb\u02cd\5r:\2\u02cc\u02ca\3")
+        buf.write("\2\2\2\u02cd\u02d0\3\2\2\2\u02ce\u02cc\3\2\2\2\u02ce\u02cf")
+        buf.write("\3\2\2\2\u02cfq\3\2\2\2\u02d0\u02ce\3\2\2\2\u02d1\u02d6")
+        buf.write("\5t;\2\u02d2\u02d3\t\7\2\2\u02d3\u02d5\5t;\2\u02d4\u02d2")
+        buf.write("\3\2\2\2\u02d5\u02d8\3\2\2\2\u02d6\u02d4\3\2\2\2\u02d6")
+        buf.write("\u02d7\3\2\2\2\u02d7s\3\2\2\2\u02d8\u02d6\3\2\2\2\u02d9")
+        buf.write("\u02de\5v<\2\u02da\u02db\t\b\2\2\u02db\u02dd\5v<\2\u02dc")
+        buf.write("\u02da\3\2\2\2\u02dd\u02e0\3\2\2\2\u02de\u02dc\3\2\2\2")
+        buf.write("\u02de\u02df\3\2\2\2\u02dfu\3\2\2\2\u02e0\u02de\3\2\2")
+        buf.write("\2\u02e1\u02e6\5J&\2\u02e2\u02e3\t\t\2\2\u02e3\u02e5\5")
+        buf.write("J&\2\u02e4\u02e2\3\2\2\2\u02e5\u02e8\3\2\2\2\u02e6\u02e4")
+        buf.write("\3\2\2\2\u02e6\u02e7\3\2\2\2\u02e7w\3\2\2\2\u02e8\u02e6")
+        buf.write("\3\2\2\2\u02e9\u02f5\5\u0082B\2\u02ea\u02f5\5\u0084C\2")
+        buf.write("\u02eb\u02f5\5\u0088E\2\u02ec\u02f5\5\u008aF\2\u02ed\u02f5")
+        buf.write("\5\u008cG\2\u02ee\u02f5\5\u008eH\2\u02ef\u02f5\5\u0080")
+        buf.write("A\2\u02f0\u02f5\5z>\2\u02f1\u02f5\5|?\2\u02f2\u02f5\5")
+        buf.write("~@\2\u02f3\u02f5\5\n\6\2\u02f4\u02e9\3\2\2\2\u02f4\u02ea")
+        buf.write("\3\2\2\2\u02f4\u02eb\3\2\2\2\u02f4\u02ec\3\2\2\2\u02f4")
+        buf.write("\u02ed\3\2\2\2\u02f4\u02ee\3\2\2\2\u02f4\u02ef\3\2\2\2")
+        buf.write("\u02f4\u02f0\3\2\2\2\u02f4\u02f1\3\2\2\2\u02f4\u02f2\3")
+        buf.write("\2\2\2\u02f4\u02f3\3\2\2\2\u02f5y\3\2\2\2\u02f6\u02f8")
+        buf.write("\7Q\2\2\u02f7\u02f6\3\2\2\2\u02f7\u02f8\3\2\2\2\u02f8")
+        buf.write("\u02f9\3\2\2\2\u02f9\u02fa\7_\2\2\u02fa\u02fe\7(\2\2\u02fb")
+        buf.write("\u02fd\n\n\2\2\u02fc\u02fb\3\2\2\2\u02fd\u0300\3\2\2\2")
+        buf.write("\u02fe\u02fc\3\2\2\2\u02fe\u02ff\3\2\2\2\u02ff\u0301\3")
+        buf.write("\2\2\2\u0300\u02fe\3\2\2\2\u0301\u0302\7)\2\2\u0302\u0303")
+        buf.write("\7\4\2\2\u0303{\3\2\2\2\u0304\u0305\7R\2\2\u0305\u0309")
+        buf.write("\7\3\2\2\u0306\u0308\n\13\2\2\u0307\u0306\3\2\2\2\u0308")
+        buf.write("\u030b\3\2\2\2\u0309\u0307\3\2\2\2\u0309\u030a\3\2\2\2")
+        buf.write("\u030a\u030c\3\2\2\2\u030b\u0309\3\2\2\2\u030c\u030d\7")
+        buf.write("\26\2\2\u030d}\3\2\2\2\u030e\u030f\7S\2\2\u030f\u0313")
+        buf.write("\7\3\2\2\u0310\u0312\n\13\2\2\u0311\u0310\3\2\2\2\u0312")
+        buf.write("\u0315\3\2\2\2\u0313\u0311\3\2\2\2\u0313\u0314\3\2\2\2")
+        buf.write("\u0314\u0316\3\2\2\2\u0315\u0313\3\2\2\2\u0316\u0317\7")
+        buf.write("\26\2\2\u0317\177\3\2\2\2\u0318\u0319\7_\2\2\u0319\u031d")
+        buf.write("\7(\2\2\u031a\u031c\5\n\6\2\u031b\u031a\3\2\2\2\u031c")
+        buf.write("\u031f\3\2\2\2\u031d\u031b\3\2\2\2\u031d\u031e\3\2\2\2")
+        buf.write("\u031e\u0321\3\2\2\2\u031f\u031d\3\2\2\2\u0320\u0322\5")
+        buf.write("\u0086D\2\u0321\u0320\3\2\2\2\u0321\u0322\3\2\2\2\u0322")
+        buf.write("\u0324\3\2\2\2\u0323\u0325\5\\/\2\u0324\u0323\3\2\2\2")
+        buf.write("\u0324\u0325\3\2\2\2\u0325\u0326\3\2\2\2\u0326\u0327\7")
+        buf.write(")\2\2\u0327\u0081\3\2\2\2\u0328\u0329\7_\2\2\u0329\u032a")
+        buf.write("\7\31\2\2\u032a\u0334\5x=\2\u032b\u032c\7T\2\2\u032c\u032d")
+        buf.write("\5^\60\2\u032d\u032e\7\31\2\2\u032e\u032f\5x=\2\u032f")
+        buf.write("\u0334\3\2\2\2\u0330\u0331\7U\2\2\u0331\u0332\7\31\2\2")
+        buf.write("\u0332\u0334\5x=\2\u0333\u0328\3\2\2\2\u0333\u032b\3\2")
+        buf.write("\2\2\u0333\u0330\3\2\2\2\u0334\u0083\3\2\2\2\u0335\u0339")
+        buf.write("\7\3\2\2\u0336\u0338\5\n\6\2\u0337\u0336\3\2\2\2\u0338")
+        buf.write("\u033b\3\2\2\2\u0339\u0337\3\2\2\2\u0339\u033a\3\2\2\2")
+        buf.write("\u033a\u033d\3\2\2\2\u033b\u0339\3\2\2\2\u033c\u033e\5")
+        buf.write("\u0086D\2\u033d\u033c\3\2\2\2\u033d\u033e\3\2\2\2\u033e")
+        buf.write("\u033f\3\2\2\2\u033f\u0340\7\26\2\2\u0340\u0085\3\2\2")
+        buf.write("\2\u0341\u0343\5x=\2\u0342\u0341\3\2\2\2\u0343\u0344\3")
+        buf.write("\2\2\2\u0344\u0342\3\2\2\2\u0344\u0345\3\2\2\2\u0345\u0087")
+        buf.write("\3\2\2\2\u0346\u034b\7\4\2\2\u0347\u0348\5\\/\2\u0348")
+        buf.write("\u0349\7\4\2\2\u0349\u034b\3\2\2\2\u034a\u0346\3\2\2\2")
+        buf.write("\u034a\u0347\3\2\2\2\u034b\u0089\3\2\2\2\u034c\u034d\7")
+        buf.write("V\2\2\u034d\u034e\7(\2\2\u034e\u034f\5\\/\2\u034f\u0350")
+        buf.write("\7)\2\2\u0350\u0351\bF\1\2\u0351\u0354\5x=\2\u0352\u0353")
+        buf.write("\7W\2\2\u0353\u0355\5x=\2\u0354\u0352\3\2\2\2\u0354\u0355")
+        buf.write("\3\2\2\2\u0355\u035d\3\2\2\2\u0356\u0357\7X\2\2\u0357")
+        buf.write("\u0358\7(\2\2\u0358\u0359\5\\/\2\u0359\u035a\7)\2\2\u035a")
+        buf.write("\u035b\5x=\2\u035b\u035d\3\2\2\2\u035c\u034c\3\2\2\2\u035c")
+        buf.write("\u0356\3\2\2\2\u035d\u008b\3\2\2\2\u035e\u035f\7Y\2\2")
+        buf.write("\u035f\u0360\7(\2\2\u0360\u0361\5\\/\2\u0361\u0362\7)")
+        buf.write("\2\2\u0362\u0363\5x=\2\u0363\u0364\bG\1\2\u0364\u036f")
+        buf.write("\3\2\2\2\u0365\u0366\7Z\2\2\u0366\u0367\5x=\2\u0367\u0368")
+        buf.write("\7Y\2\2\u0368\u0369\7(\2\2\u0369\u036a\5\\/\2\u036a\u036b")
+        buf.write("\7)\2\2\u036b\u036c\7\4\2\2\u036c\u036d\bG\1\2\u036d\u036f")
+        buf.write("\3\2\2\2\u036e\u035e\3\2\2\2\u036e\u0365\3\2\2\2\u036f")
+        buf.write("\u008d\3\2\2\2\u0370\u0371\7[\2\2\u0371\u0372\7_\2\2\u0372")
+        buf.write("\u037e\7\4\2\2\u0373\u0374\7\\\2\2\u0374\u037e\7\4\2\2")
+        buf.write("\u0375\u0376\7]\2\2\u0376\u037e\7\4\2\2\u0377\u0378\7")
+        buf.write("^\2\2\u0378\u037e\7\4\2\2\u0379\u037a\7^\2\2\u037a\u037b")
+        buf.write("\5\\/\2\u037b\u037c\7\4\2\2\u037c\u037e\3\2\2\2\u037d")
+        buf.write("\u0370\3\2\2\2\u037d\u0373\3\2\2\2\u037d\u0375\3\2\2\2")
+        buf.write("\u037d\u0377\3\2\2\2\u037d\u0379\3\2\2\2\u037e\u008f\3")
+        buf.write("\2\2\2o\u0093\u0097\u009d\u00a6\u00a8\u00ab\u00b1\u00b6")
+        buf.write("\u00bd\u00bf\u00c3\u00cb\u00d0\u00d7\u00dd\u00f4\u00f9")
+        buf.write("\u00ff\u0108\u010f\u0117\u0119\u0120\u0126\u012a\u0130")
+        buf.write("\u0139\u013f\u0146\u014c\u0151\u0154\u0157\u015a\u015e")
+        buf.write("\u0164\u0169\u0170\u0172\u0184\u018a\u018d\u0192\u0197")
+        buf.write("\u019a\u019f\u01a4\u01aa\u01ac\u01b0\u01b5\u01b9\u01c0")
+        buf.write("\u01c5\u01c8\u01cc\u01cf\u01d6\u01db\u01ea\u01f0\u01f4")
+        buf.write("\u01fb\u0200\u0205\u0209\u0211\u0213\u021d\u021f\u0228")
+        buf.write("\u0239\u0258\u025a\u0262\u026d\u0276\u027c\u0280\u0285")
+        buf.write("\u0289\u0290\u029a\u02a7\u02ae\u02b6\u02be\u02c6\u02ce")
+        buf.write("\u02d6\u02de\u02e6\u02f4\u02f7\u02fe\u0309\u0313\u031d")
+        buf.write("\u0321\u0324\u0333\u0339\u033d\u0344\u034a\u0354\u035c")
+        buf.write("\u036e\u037d")
+        return buf.getvalue()
+
+
+class CParser ( Parser ):
+
+    grammarFileName = "C.g4"
+
+    atn = ATNDeserializer().deserialize(serializedATN())
+
+    decisionsToDFA = [ DFA(ds, i) for i, ds in enumerate(atn.decisionToState) ]
+
+    sharedContextCache = PredictionContextCache()
+
+    literalNames = [ "<INVALID>", "'{'", "';'", "'typedef'", "','", "'='",
+                     "'extern'", "'static'", "'auto'", "'register'", "'STATIC'",
+                     "'void'", "'char'", "'short'", "'int'", "'long'", "'float'",
+                     "'double'", "'signed'", "'unsigned'", "'}'", "'struct'",
+                     "'union'", "':'", "'enum'", "'const'", "'volatile'",
+                     "'IN'", "'OUT'", "'OPTIONAL'", "'CONST'", "'UNALIGNED'",
+                     "'VOLATILE'", "'GLOBAL_REMOVE_IF_UNREFERENCED'", "'EFIAPI'",
+                     "'EFI_BOOTSERVICE'", "'EFI_RUNTIMESERVICE'", "'PACKED'",
+                     "'('", "')'", "'['", "']'", "'*'", "'...'", "'+'",
+                     "'-'", "'/'", "'%'", "'++'", "'--'", "'sizeof'", "'.'",
+                     "'->'", "'&'", "'~'", "'!'", "'*='", "'/='", "'%='",
+                     "'+='", "'-='", "'<<='", "'>>='", "'&='", "'^='", "'|='",
+                     "'?'", "'||'", "'&&'", "'|'", "'^'", "'=='", "'!='",
+                     "'<'", "'>'", "'<='", "'>='", "'<<'", "'>>'", "'__asm__'",
+                     "'_asm'", "'__asm'", "'case'", "'default'", "'if'",
+                     "'else'", "'switch'", "'while'", "'do'", "'goto'",
+                     "'continue'", "'break'", "'return'" ]
+
+    symbolicNames = [ "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "<INVALID>", "<INVALID>", "<INVALID>",
+                      "<INVALID>", "IDENTIFIER", "CHARACTER_LITERAL", "STRING_LITERAL",
+                      "HEX_LITERAL", "DECIMAL_LITERAL", "OCTAL_LITERAL",
+                      "FLOATING_POINT_LITERAL", "WS", "BS", "UnicodeVocabulary",
+                      "COMMENT", "LINE_COMMENT", "LINE_COMMAND" ]
+
+    RULE_translation_unit = 0
+    RULE_external_declaration = 1
+    RULE_function_definition = 2
+    RULE_declaration_specifiers = 3
+    RULE_declaration = 4
+    RULE_init_declarator_list = 5
+    RULE_init_declarator = 6
+    RULE_storage_class_specifier = 7
+    RULE_type_specifier = 8
+    RULE_type_id = 9
+    RULE_struct_or_union_specifier = 10
+    RULE_struct_or_union = 11
+    RULE_struct_declaration_list = 12
+    RULE_struct_declaration = 13
+    RULE_specifier_qualifier_list = 14
+    RULE_struct_declarator_list = 15
+    RULE_struct_declarator = 16
+    RULE_enum_specifier = 17
+    RULE_enumerator_list = 18
+    RULE_enumerator = 19
+    RULE_type_qualifier = 20
+    RULE_declarator = 21
+    RULE_direct_declarator = 22
+    RULE_declarator_suffix = 23
+    RULE_pointer = 24
+    RULE_parameter_type_list = 25
+    RULE_parameter_list = 26
+    RULE_parameter_declaration = 27
+    RULE_identifier_list = 28
+    RULE_type_name = 29
+    RULE_abstract_declarator = 30
+    RULE_direct_abstract_declarator = 31
+    RULE_abstract_declarator_suffix = 32
+    RULE_initializer = 33
+    RULE_initializer_list = 34
+    RULE_argument_expression_list = 35
+    RULE_additive_expression = 36
+    RULE_multiplicative_expression = 37
+    RULE_cast_expression = 38
+    RULE_unary_expression = 39
+    RULE_postfix_expression = 40
+    RULE_macro_parameter_list = 41
+    RULE_unary_operator = 42
+    RULE_primary_expression = 43
+    RULE_constant = 44
+    RULE_expression = 45
+    RULE_constant_expression = 46
+    RULE_assignment_expression = 47
+    RULE_lvalue = 48
+    RULE_assignment_operator = 49
+    RULE_conditional_expression = 50
+    RULE_logical_or_expression = 51
+    RULE_logical_and_expression = 52
+    RULE_inclusive_or_expression = 53
+    RULE_exclusive_or_expression = 54
+    RULE_and_expression = 55
+    RULE_equality_expression = 56
+    RULE_relational_expression = 57
+    RULE_shift_expression = 58
+    RULE_statement = 59
+    RULE_asm2_statement = 60
+    RULE_asm1_statement = 61
+    RULE_asm_statement = 62
+    RULE_macro_statement = 63
+    RULE_labeled_statement = 64
+    RULE_compound_statement = 65
+    RULE_statement_list = 66
+    RULE_expression_statement = 67
+    RULE_selection_statement = 68
+    RULE_iteration_statement = 69
+    RULE_jump_statement = 70
+
+    ruleNames =  [ "translation_unit", "external_declaration", "function_definition",
+                   "declaration_specifiers", "declaration", "init_declarator_list",
+                   "init_declarator", "storage_class_specifier", "type_specifier",
+                   "type_id", "struct_or_union_specifier", "struct_or_union",
+                   "struct_declaration_list", "struct_declaration", "specifier_qualifier_list",
+                   "struct_declarator_list", "struct_declarator", "enum_specifier",
+                   "enumerator_list", "enumerator", "type_qualifier", "declarator",
+                   "direct_declarator", "declarator_suffix", "pointer",
+                   "parameter_type_list", "parameter_list", "parameter_declaration",
+                   "identifier_list", "type_name", "abstract_declarator",
+                   "direct_abstract_declarator", "abstract_declarator_suffix",
+                   "initializer", "initializer_list", "argument_expression_list",
+                   "additive_expression", "multiplicative_expression", "cast_expression",
+                   "unary_expression", "postfix_expression", "macro_parameter_list",
+                   "unary_operator", "primary_expression", "constant", "expression",
+                   "constant_expression", "assignment_expression", "lvalue",
+                   "assignment_operator", "conditional_expression", "logical_or_expression",
+                   "logical_and_expression", "inclusive_or_expression",
+                   "exclusive_or_expression", "and_expression", "equality_expression",
+                   "relational_expression", "shift_expression", "statement",
+                   "asm2_statement", "asm1_statement", "asm_statement",
+                   "macro_statement", "labeled_statement", "compound_statement",
+                   "statement_list", "expression_statement", "selection_statement",
+                   "iteration_statement", "jump_statement" ]
+
+    EOF = Token.EOF
+    T__0=1
+    T__1=2
+    T__2=3
+    T__3=4
+    T__4=5
+    T__5=6
+    T__6=7
+    T__7=8
+    T__8=9
+    T__9=10
+    T__10=11
+    T__11=12
+    T__12=13
+    T__13=14
+    T__14=15
+    T__15=16
+    T__16=17
+    T__17=18
+    T__18=19
+    T__19=20
+    T__20=21
+    T__21=22
+    T__22=23
+    T__23=24
+    T__24=25
+    T__25=26
+    T__26=27
+    T__27=28
+    T__28=29
+    T__29=30
+    T__30=31
+    T__31=32
+    T__32=33
+    T__33=34
+    T__34=35
+    T__35=36
+    T__36=37
+    T__37=38
+    T__38=39
+    T__39=40
+    T__40=41
+    T__41=42
+    T__42=43
+    T__43=44
+    T__44=45
+    T__45=46
+    T__46=47
+    T__47=48
+    T__48=49
+    T__49=50
+    T__50=51
+    T__51=52
+    T__52=53
+    T__53=54
+    T__54=55
+    T__55=56
+    T__56=57
+    T__57=58
+    T__58=59
+    T__59=60
+    T__60=61
+    T__61=62
+    T__62=63
+    T__63=64
+    T__64=65
+    T__65=66
+    T__66=67
+    T__67=68
+    T__68=69
+    T__69=70
+    T__70=71
+    T__71=72
+    T__72=73
+    T__73=74
+    T__74=75
+    T__75=76
+    T__76=77
+    T__77=78
+    T__78=79
+    T__79=80
+    T__80=81
+    T__81=82
+    T__82=83
+    T__83=84
+    T__84=85
+    T__85=86
+    T__86=87
+    T__87=88
+    T__88=89
+    T__89=90
+    T__90=91
+    T__91=92
+    IDENTIFIER=93
+    CHARACTER_LITERAL=94
+    STRING_LITERAL=95
+    HEX_LITERAL=96
+    DECIMAL_LITERAL=97
+    OCTAL_LITERAL=98
+    FLOATING_POINT_LITERAL=99
+    WS=100
+    BS=101
+    UnicodeVocabulary=102
+    COMMENT=103
+    LINE_COMMENT=104
+    LINE_COMMAND=105
+
+    # @param  input Type: TokenStream
+    # @param  output= sys.stdout Type: TextIO
+    def __init__(self,input,output= sys.stdout):
+        super().__init__(input, output)
+        self.checkVersion("4.7.1")
+        self._interp = ParserATNSimulator(self, self.atn, self.decisionsToDFA, self.sharedContextCache)
+        self._predicates = None
+
+
+
+
+    def printTokenInfo(self,line,offset,tokenText):
+        print(str(line)+ ',' + str(offset) + ':' + str(tokenText))
+
+    def StorePredicateExpression(self,StartLine,StartOffset,EndLine,EndOffset,Text):
+        PredExp = CodeFragment.PredicateExpression(Text, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.PredicateExpressionList.append(PredExp)
+
+    def StoreEnumerationDefinition(self,StartLine,StartOffset,EndLine,EndOffset,Text):
+        EnumDef = CodeFragment.EnumerationDefinition(Text, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.EnumerationDefinitionList.append(EnumDef)
+
+    def StoreStructUnionDefinition(self,StartLine,StartOffset,EndLine,EndOffset,Text):
+        SUDef = CodeFragment.StructUnionDefinition(Text, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.StructUnionDefinitionList.append(SUDef)
+
+    def StoreTypedefDefinition(self,StartLine,StartOffset,EndLine,EndOffset,FromText,ToText):
+        Tdef = CodeFragment.TypedefDefinition(FromText, ToText, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.TypedefDefinitionList.append(Tdef)
+
+    def StoreFunctionDefinition(self,StartLine,StartOffset,EndLine,EndOffset,ModifierText,DeclText,LeftBraceLine,LeftBraceOffset,DeclLine,DeclOffset):
+        FuncDef = CodeFragment.FunctionDefinition(ModifierText, DeclText, (StartLine, StartOffset), (EndLine, EndOffset), (LeftBraceLine, LeftBraceOffset), (DeclLine, DeclOffset))
+        FileProfile.FunctionDefinitionList.append(FuncDef)
+
+    def StoreVariableDeclaration(self,StartLine,StartOffset,EndLine,EndOffset,ModifierText,DeclText):
+        VarDecl = CodeFragment.VariableDeclaration(ModifierText, DeclText, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.VariableDeclarationList.append(VarDecl)
+
+    def StoreFunctionCalling(self,StartLine,StartOffset,EndLine,EndOffset,FuncName,ParamList):
+        FuncCall = CodeFragment.FunctionCalling(FuncName, ParamList, (StartLine, StartOffset), (EndLine, EndOffset))
+        FileProfile.FunctionCallingList.append(FuncCall)
+
+
+
+    class Translation_unitContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def external_declaration(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.External_declarationContext)
+            else:
+                return self.getTypedRuleContext(CParser.External_declarationContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_translation_unit
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterTranslation_unit" ):
+                listener.enterTranslation_unit(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitTranslation_unit" ):
+                listener.exitTranslation_unit(self)
+
+
+
+
+    def translation_unit(self):
+
+        localctx = CParser.Translation_unitContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 0, self.RULE_translation_unit)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 145
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__2) | (1 << CParser.T__5) | (1 << CParser.T__6) | (1 << CParser.T__7) | (1 << CParser.T__8) | (1 << CParser.T__9) | (1 << CParser.T__10) | (1 << CParser.T__11) | (1 << CParser.T__12) | (1 << CParser.T__13) | (1 << CParser.T__14) | (1 << CParser.T__15) | (1 << CParser.T__16) | (1 << CParser.T__17) | (1 << CParser.T__18) | (1 << CParser.T__20) | (1 << CParser.T__21) | (1 << CParser.T__23) | (1 << CParser.T__24) | (1 << CParser.T__25) | (1 << CParser.T__26) | (1 << CParser.T__27) | (1 << CParser.T__28) | (1 << CParser.T__29) | (1 << CParser.T__30) | (1 << CParser.T__31) | (1 << CParser.T__32) | (1 << CParser.T__33) | (1 << CParser.T__34) | (1 << CParser.T__35) | (1 << CParser.T__36) | (1 << CParser.T__37) | (1 << CParser.T__41))) != 0) or _la==CParser.IDENTIFIER:
+                self.state = 142
+                self.external_declaration()
+                self.state = 147
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class External_declarationContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def declarator(self):
+            return self.getTypedRuleContext(CParser.DeclaratorContext,0)
+
+
+        def declaration_specifiers(self):
+            return self.getTypedRuleContext(CParser.Declaration_specifiersContext,0)
+
+
+        # @param  i=None Type: int
+        def declaration(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.DeclarationContext)
+            else:
+                return self.getTypedRuleContext(CParser.DeclarationContext,i)
+
+
+        def function_definition(self):
+            return self.getTypedRuleContext(CParser.Function_definitionContext,0)
+
+
+        def macro_statement(self):
+            return self.getTypedRuleContext(CParser.Macro_statementContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_external_declaration
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterExternal_declaration" ):
+                listener.enterExternal_declaration(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitExternal_declaration" ):
+                listener.exitExternal_declaration(self)
+
+
+
+
+    def external_declaration(self):
+
+        localctx = CParser.External_declarationContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 2, self.RULE_external_declaration)
+        self._la = 0 # Token type
+        try:
+            self.state = 166
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,4,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 149
+                self._errHandler.sync(self)
+                la_ = self._interp.adaptivePredict(self._input,1,self._ctx)
+                if la_ == 1:
+                    self.state = 148
+                    self.declaration_specifiers()
+
+
+                self.state = 151
+                self.declarator()
+                self.state = 155
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                while (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__2) | (1 << CParser.T__5) | (1 << CParser.T__6) | (1 << CParser.T__7) | (1 << CParser.T__8) | (1 << CParser.T__9) | (1 << CParser.T__10) | (1 << CParser.T__11) | (1 << CParser.T__12) | (1 << CParser.T__13) | (1 << CParser.T__14) | (1 << CParser.T__15) | (1 << CParser.T__16) | (1 << CParser.T__17) | (1 << CParser.T__18) | (1 << CParser.T__20) | (1 << CParser.T__21) | (1 << CParser.T__23) | (1 << CParser.T__24) | (1 << CParser.T__25) | (1 << CParser.T__26) | (1 << CParser.T__27) | (1 << CParser.T__28) | (1 << CParser.T__29) | (1 << CParser.T__30) | (1 << CParser.T__31) | (1 << CParser.T__32) | (1 << CParser.T__33) | (1 << CParser.T__34) | (1 << CParser.T__35) | (1 << CParser.T__36))) != 0) or _la==CParser.IDENTIFIER:
+                    self.state = 152
+                    self.declaration()
+                    self.state = 157
+                    self._errHandler.sync(self)
+                    _la = self._input.LA(1)
+
+                self.state = 158
+                self.match(CParser.T__0)
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 160
+                self.function_definition()
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 161
+                self.declaration()
+                pass
+
+            elif la_ == 4:
+                self.enterOuterAlt(localctx, 4)
+                self.state = 162
+                self.macro_statement()
+                self.state = 164
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__1:
+                    self.state = 163
+                    self.match(CParser.T__1)
+
+
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Function_definitionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+            self.ModifierText = ''
+            self.DeclText = ''
+            self.LBLine = 0
+            self.LBOffset = 0
+            self.DeclLine = 0
+            self.DeclOffset = 0
+            self.d = None # Declaration_specifiersContext
+            self._declaration_specifiers = None # Declaration_specifiersContext
+            self._declarator = None # DeclaratorContext
+            self.a = None # Compound_statementContext
+            self.b = None # Compound_statementContext
+
+        def declarator(self):
+            return self.getTypedRuleContext(CParser.DeclaratorContext,0)
+
+
+        def compound_statement(self):
+            return self.getTypedRuleContext(CParser.Compound_statementContext,0)
+
+
+        def declaration_specifiers(self):
+            return self.getTypedRuleContext(CParser.Declaration_specifiersContext,0)
+
+
+        # @param  i=None Type: int
+        def declaration(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.DeclarationContext)
+            else:
+                return self.getTypedRuleContext(CParser.DeclarationContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_function_definition
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterFunction_definition" ):
+                listener.enterFunction_definition(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitFunction_definition" ):
+                listener.exitFunction_definition(self)
+
+
+
+
+    def function_definition(self):
+
+        localctx = CParser.Function_definitionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 4, self.RULE_function_definition)
+
+        ModifierText = '';
+        DeclText = '';
+        LBLine = 0;
+        LBOffset = 0;
+        DeclLine = 0;
+        DeclOffset = 0;
+
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 169
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,5,self._ctx)
+            if la_ == 1:
+                self.state = 168
+                localctx.d = localctx._declaration_specifiers = self.declaration_specifiers()
+
+
+            self.state = 171
+            localctx._declarator = self.declarator()
+            self.state = 180
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.T__2, CParser.T__5, CParser.T__6, CParser.T__7, CParser.T__8, CParser.T__9, CParser.T__10, CParser.T__11, CParser.T__12, CParser.T__13, CParser.T__14, CParser.T__15, CParser.T__16, CParser.T__17, CParser.T__18, CParser.T__20, CParser.T__21, CParser.T__23, CParser.T__24, CParser.T__25, CParser.T__26, CParser.T__27, CParser.T__28, CParser.T__29, CParser.T__30, CParser.T__31, CParser.T__32, CParser.T__33, CParser.T__34, CParser.T__35, CParser.T__36, CParser.IDENTIFIER]:
+                self.state = 173
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                while True:
+                    self.state = 172
+                    self.declaration()
+                    self.state = 175
+                    self._errHandler.sync(self)
+                    _la = self._input.LA(1)
+                    if not ((((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__2) | (1 << CParser.T__5) | (1 << CParser.T__6) | (1 << CParser.T__7) | (1 << CParser.T__8) | (1 << CParser.T__9) | (1 << CParser.T__10) | (1 << CParser.T__11) | (1 << CParser.T__12) | (1 << CParser.T__13) | (1 << CParser.T__14) | (1 << CParser.T__15) | (1 << CParser.T__16) | (1 << CParser.T__17) | (1 << CParser.T__18) | (1 << CParser.T__20) | (1 << CParser.T__21) | (1 << CParser.T__23) | (1 << CParser.T__24) | (1 << CParser.T__25) | (1 << CParser.T__26) | (1 << CParser.T__27) | (1 << CParser.T__28) | (1 << CParser.T__29) | (1 << CParser.T__30) | (1 << CParser.T__31) | (1 << CParser.T__32) | (1 << CParser.T__33) | (1 << CParser.T__34) | (1 << CParser.T__35) | (1 << CParser.T__36))) != 0) or _la==CParser.IDENTIFIER):
+                        break
+
+                self.state = 177
+                localctx.a = self.compound_statement()
+                pass
+            elif token in [CParser.T__0]:
+                self.state = 179
+                localctx.b = self.compound_statement()
+                pass
+            else:
+                raise NoViableAltException(self)
+
+
+            if localctx.d is not None:
+                ModifierText = (None if localctx._declaration_specifiers is None else self._input.getText((localctx._declaration_specifiers.start,localctx._declaration_specifiers.stop)))
+            else:
+                ModifierText = ''
+            DeclText = (None if localctx._declarator is None else self._input.getText((localctx._declarator.start,localctx._declarator.stop)))
+            DeclLine = (None if localctx._declarator is None else localctx._declarator.start).line
+            DeclOffset = (None if localctx._declarator is None else localctx._declarator.start).column
+            if localctx.a is not None:
+                LBLine = (None if localctx.a is None else localctx.a.start).line
+                LBOffset = (None if localctx.a is None else localctx.a.start).column
+            else:
+                LBLine = (None if localctx.b is None else localctx.b.start).line
+                LBOffset = (None if localctx.b is None else localctx.b.start).column
+
+            self._ctx.stop = self._input.LT(-1)
+
+            self.StoreFunctionDefinition(localctx.start.line, localctx.start.column, localctx.stop.line, localctx.stop.column, ModifierText, DeclText, LBLine, LBOffset, DeclLine, DeclOffset)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Declaration_specifiersContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def storage_class_specifier(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Storage_class_specifierContext)
+            else:
+                return self.getTypedRuleContext(CParser.Storage_class_specifierContext,i)
+
+
+        # @param  i=None Type: int
+        def type_specifier(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Type_specifierContext)
+            else:
+                return self.getTypedRuleContext(CParser.Type_specifierContext,i)
+
+
+        # @param  i=None Type: int
+        def type_qualifier(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Type_qualifierContext)
+            else:
+                return self.getTypedRuleContext(CParser.Type_qualifierContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_declaration_specifiers
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterDeclaration_specifiers" ):
+                listener.enterDeclaration_specifiers(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitDeclaration_specifiers" ):
+                listener.exitDeclaration_specifiers(self)
+
+
+
+
+    def declaration_specifiers(self):
+
+        localctx = CParser.Declaration_specifiersContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 6, self.RULE_declaration_specifiers)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 187
+            self._errHandler.sync(self)
+            _alt = 1
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt == 1:
+                    self.state = 187
+                    self._errHandler.sync(self)
+                    token = self._input.LA(1)
+                    if token in [CParser.T__5, CParser.T__6, CParser.T__7, CParser.T__8, CParser.T__9]:
+                        self.state = 184
+                        self.storage_class_specifier()
+                        pass
+                    elif token in [CParser.T__10, CParser.T__11, CParser.T__12, CParser.T__13, CParser.T__14, CParser.T__15, CParser.T__16, CParser.T__17, CParser.T__18, CParser.T__20, CParser.T__21, CParser.T__23, CParser.IDENTIFIER]:
+                        self.state = 185
+                        self.type_specifier()
+                        pass
+                    elif token in [CParser.T__24, CParser.T__25, CParser.T__26, CParser.T__27, CParser.T__28, CParser.T__29, CParser.T__30, CParser.T__31, CParser.T__32, CParser.T__33, CParser.T__34, CParser.T__35, CParser.T__36]:
+                        self.state = 186
+                        self.type_qualifier()
+                        pass
+                    else:
+                        raise NoViableAltException(self)
+
+
+                else:
+                    raise NoViableAltException(self)
+                self.state = 189
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,9,self._ctx)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class DeclarationContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+            self.a = None # Token
+            self.b = None # Declaration_specifiersContext
+            self.c = None # Init_declarator_listContext
+            self.d = None # Token
+            self.s = None # Declaration_specifiersContext
+            self.t = None # Init_declarator_listContext
+            self.e = None # Token
+
+        def init_declarator_list(self):
+            return self.getTypedRuleContext(CParser.Init_declarator_listContext,0)
+
+
+        def declaration_specifiers(self):
+            return self.getTypedRuleContext(CParser.Declaration_specifiersContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_declaration
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterDeclaration" ):
+                listener.enterDeclaration(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitDeclaration" ):
+                listener.exitDeclaration(self)
+
+
+
+
+    def declaration(self):
+
+        localctx = CParser.DeclarationContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 8, self.RULE_declaration)
+        self._la = 0 # Token type
+        try:
+            self.state = 206
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.T__2]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 191
+                localctx.a = self.match(CParser.T__2)
+                self.state = 193
+                self._errHandler.sync(self)
+                la_ = self._interp.adaptivePredict(self._input,10,self._ctx)
+                if la_ == 1:
+                    self.state = 192
+                    localctx.b = self.declaration_specifiers()
+
+
+                self.state = 195
+                localctx.c = self.init_declarator_list()
+                self.state = 196
+                localctx.d = self.match(CParser.T__1)
+
+                if localctx.b is not None:
+                    self.StoreTypedefDefinition(localctx.a.line, localctx.a.column, (0 if localctx.d is None else localctx.d.line), localctx.d.column, (None if localctx.b is None else self._input.getText((localctx.b.start,localctx.b.stop))), (None if localctx.c is None else self._input.getText((localctx.c.start,localctx.c.stop))))
+                else:
+                    self.StoreTypedefDefinition(localctx.a.line, localctx.a.column, (0 if localctx.d is None else localctx.d.line), localctx.d.column, '', (None if localctx.c is None else self._input.getText((localctx.c.start,localctx.c.stop))))
+
+                pass
+            elif token in [CParser.T__5, CParser.T__6, CParser.T__7, CParser.T__8, CParser.T__9, CParser.T__10, CParser.T__11, CParser.T__12, CParser.T__13, CParser.T__14, CParser.T__15, CParser.T__16, CParser.T__17, CParser.T__18, CParser.T__20, CParser.T__21, CParser.T__23, CParser.T__24, CParser.T__25, CParser.T__26, CParser.T__27, CParser.T__28, CParser.T__29, CParser.T__30, CParser.T__31, CParser.T__32, CParser.T__33, CParser.T__34, CParser.T__35, CParser.T__36, CParser.IDENTIFIER]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 199
+                localctx.s = self.declaration_specifiers()
+                self.state = 201
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if ((((_la - 34)) & ~0x3f) == 0 and ((1 << (_la - 34)) & ((1 << (CParser.T__33 - 34)) | (1 << (CParser.T__34 - 34)) | (1 << (CParser.T__35 - 34)) | (1 << (CParser.T__37 - 34)) | (1 << (CParser.T__41 - 34)) | (1 << (CParser.IDENTIFIER - 34)))) != 0):
+                    self.state = 200
+                    localctx.t = self.init_declarator_list()
+
+
+                self.state = 203
+                localctx.e = self.match(CParser.T__1)
+
+                if localctx.t is not None:
+                    self.StoreVariableDeclaration((None if localctx.s is None else localctx.s.start).line, (None if localctx.s is None else localctx.s.start).column, (None if localctx.t is None else localctx.t.start).line, (None if localctx.t is None else localctx.t.start).column, (None if localctx.s is None else self._input.getText((localctx.s.start,localctx.s.stop))), (None if localctx.t is None else self._input.getText((localctx.t.start,localctx.t.stop))))
+
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Init_declarator_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def init_declarator(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Init_declaratorContext)
+            else:
+                return self.getTypedRuleContext(CParser.Init_declaratorContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_init_declarator_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterInit_declarator_list" ):
+                listener.enterInit_declarator_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitInit_declarator_list" ):
+                listener.exitInit_declarator_list(self)
+
+
+
+
+    def init_declarator_list(self):
+
+        localctx = CParser.Init_declarator_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 10, self.RULE_init_declarator_list)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 208
+            self.init_declarator()
+            self.state = 213
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__3:
+                self.state = 209
+                self.match(CParser.T__3)
+                self.state = 210
+                self.init_declarator()
+                self.state = 215
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Init_declaratorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def declarator(self):
+            return self.getTypedRuleContext(CParser.DeclaratorContext,0)
+
+
+        def initializer(self):
+            return self.getTypedRuleContext(CParser.InitializerContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_init_declarator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterInit_declarator" ):
+                listener.enterInit_declarator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitInit_declarator" ):
+                listener.exitInit_declarator(self)
+
+
+
+
+    def init_declarator(self):
+
+        localctx = CParser.Init_declaratorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 12, self.RULE_init_declarator)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 216
+            self.declarator()
+            self.state = 219
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            if _la==CParser.T__4:
+                self.state = 217
+                self.match(CParser.T__4)
+                self.state = 218
+                self.initializer()
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Storage_class_specifierContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_storage_class_specifier
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStorage_class_specifier" ):
+                listener.enterStorage_class_specifier(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStorage_class_specifier" ):
+                listener.exitStorage_class_specifier(self)
+
+
+
+
+    def storage_class_specifier(self):
+
+        localctx = CParser.Storage_class_specifierContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 14, self.RULE_storage_class_specifier)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 221
+            _la = self._input.LA(1)
+            if not((((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__5) | (1 << CParser.T__6) | (1 << CParser.T__7) | (1 << CParser.T__8) | (1 << CParser.T__9))) != 0)):
+                self._errHandler.recoverInline(self)
+            else:
+                self._errHandler.reportMatch(self)
+                self.consume()
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Type_specifierContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+            self.s = None # Struct_or_union_specifierContext
+            self.e = None # Enum_specifierContext
+
+        def struct_or_union_specifier(self):
+            return self.getTypedRuleContext(CParser.Struct_or_union_specifierContext,0)
+
+
+        def enum_specifier(self):
+            return self.getTypedRuleContext(CParser.Enum_specifierContext,0)
+
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def declarator(self):
+            return self.getTypedRuleContext(CParser.DeclaratorContext,0)
+
+
+        # @param  i=None Type: int
+        def type_qualifier(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Type_qualifierContext)
+            else:
+                return self.getTypedRuleContext(CParser.Type_qualifierContext,i)
+
+
+        def type_id(self):
+            return self.getTypedRuleContext(CParser.Type_idContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_type_specifier
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterType_specifier" ):
+                listener.enterType_specifier(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitType_specifier" ):
+                listener.exitType_specifier(self)
+
+
+
+
+    def type_specifier(self):
+
+        localctx = CParser.Type_specifierContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 16, self.RULE_type_specifier)
+        try:
+            self.state = 247
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,16,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 223
+                self.match(CParser.T__10)
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 224
+                self.match(CParser.T__11)
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 225
+                self.match(CParser.T__12)
+                pass
+
+            elif la_ == 4:
+                self.enterOuterAlt(localctx, 4)
+                self.state = 226
+                self.match(CParser.T__13)
+                pass
+
+            elif la_ == 5:
+                self.enterOuterAlt(localctx, 5)
+                self.state = 227
+                self.match(CParser.T__14)
+                pass
+
+            elif la_ == 6:
+                self.enterOuterAlt(localctx, 6)
+                self.state = 228
+                self.match(CParser.T__15)
+                pass
+
+            elif la_ == 7:
+                self.enterOuterAlt(localctx, 7)
+                self.state = 229
+                self.match(CParser.T__16)
+                pass
+
+            elif la_ == 8:
+                self.enterOuterAlt(localctx, 8)
+                self.state = 230
+                self.match(CParser.T__17)
+                pass
+
+            elif la_ == 9:
+                self.enterOuterAlt(localctx, 9)
+                self.state = 231
+                self.match(CParser.T__18)
+                pass
+
+            elif la_ == 10:
+                self.enterOuterAlt(localctx, 10)
+                self.state = 232
+                localctx.s = self.struct_or_union_specifier()
+
+                if localctx.s.stop is not None:
+                    self.StoreStructUnionDefinition((None if localctx.s is None else localctx.s.start).line, (None if localctx.s is None else localctx.s.start).column, (None if localctx.s is None else localctx.s.stop).line, (None if localctx.s is None else localctx.s.stop).column, (None if localctx.s is None else self._input.getText((localctx.s.start,localctx.s.stop))))
+
+                pass
+
+            elif la_ == 11:
+                self.enterOuterAlt(localctx, 11)
+                self.state = 235
+                localctx.e = self.enum_specifier()
+
+                if localctx.e.stop is not None:
+                    self.StoreEnumerationDefinition((None if localctx.e is None else localctx.e.start).line, (None if localctx.e is None else localctx.e.start).column, (None if localctx.e is None else localctx.e.stop).line, (None if localctx.e is None else localctx.e.stop).column, (None if localctx.e is None else self._input.getText((localctx.e.start,localctx.e.stop))))
+
+                pass
+
+            elif la_ == 12:
+                self.enterOuterAlt(localctx, 12)
+                self.state = 238
+                self.match(CParser.IDENTIFIER)
+                self.state = 242
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,15,self._ctx)
+                while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                    if _alt==1:
+                        self.state = 239
+                        self.type_qualifier()
+                    self.state = 244
+                    self._errHandler.sync(self)
+                    _alt = self._interp.adaptivePredict(self._input,15,self._ctx)
+
+                self.state = 245
+                self.declarator()
+                pass
+
+            elif la_ == 13:
+                self.enterOuterAlt(localctx, 13)
+                self.state = 246
+                self.type_id()
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Type_idContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def getRuleIndex(self):
+            return CParser.RULE_type_id
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterType_id" ):
+                listener.enterType_id(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitType_id" ):
+                listener.exitType_id(self)
+
+
+
+
+    def type_id(self):
+
+        localctx = CParser.Type_idContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 18, self.RULE_type_id)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 249
+            self.match(CParser.IDENTIFIER)
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Struct_or_union_specifierContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def struct_or_union(self):
+            return self.getTypedRuleContext(CParser.Struct_or_unionContext,0)
+
+
+        def struct_declaration_list(self):
+            return self.getTypedRuleContext(CParser.Struct_declaration_listContext,0)
+
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def getRuleIndex(self):
+            return CParser.RULE_struct_or_union_specifier
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStruct_or_union_specifier" ):
+                listener.enterStruct_or_union_specifier(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStruct_or_union_specifier" ):
+                listener.exitStruct_or_union_specifier(self)
+
+
+
+
+    def struct_or_union_specifier(self):
+
+        localctx = CParser.Struct_or_union_specifierContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 20, self.RULE_struct_or_union_specifier)
+        self._la = 0 # Token type
+        try:
+            self.state = 262
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,18,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 251
+                self.struct_or_union()
+                self.state = 253
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.IDENTIFIER:
+                    self.state = 252
+                    self.match(CParser.IDENTIFIER)
+
+
+                self.state = 255
+                self.match(CParser.T__0)
+                self.state = 256
+                self.struct_declaration_list()
+                self.state = 257
+                self.match(CParser.T__19)
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 259
+                self.struct_or_union()
+                self.state = 260
+                self.match(CParser.IDENTIFIER)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Struct_or_unionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_struct_or_union
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStruct_or_union" ):
+                listener.enterStruct_or_union(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStruct_or_union" ):
+                listener.exitStruct_or_union(self)
+
+
+
+
+    def struct_or_union(self):
+
+        localctx = CParser.Struct_or_unionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 22, self.RULE_struct_or_union)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 264
+            _la = self._input.LA(1)
+            if not(_la==CParser.T__20 or _la==CParser.T__21):
+                self._errHandler.recoverInline(self)
+            else:
+                self._errHandler.reportMatch(self)
+                self.consume()
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Struct_declaration_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def struct_declaration(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Struct_declarationContext)
+            else:
+                return self.getTypedRuleContext(CParser.Struct_declarationContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_struct_declaration_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStruct_declaration_list" ):
+                listener.enterStruct_declaration_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStruct_declaration_list" ):
+                listener.exitStruct_declaration_list(self)
+
+
+
+
+    def struct_declaration_list(self):
+
+        localctx = CParser.Struct_declaration_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 24, self.RULE_struct_declaration_list)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 267
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while True:
+                self.state = 266
+                self.struct_declaration()
+                self.state = 269
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if not ((((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__10) | (1 << CParser.T__11) | (1 << CParser.T__12) | (1 << CParser.T__13) | (1 << CParser.T__14) | (1 << CParser.T__15) | (1 << CParser.T__16) | (1 << CParser.T__17) | (1 << CParser.T__18) | (1 << CParser.T__20) | (1 << CParser.T__21) | (1 << CParser.T__23) | (1 << CParser.T__24) | (1 << CParser.T__25) | (1 << CParser.T__26) | (1 << CParser.T__27) | (1 << CParser.T__28) | (1 << CParser.T__29) | (1 << CParser.T__30) | (1 << CParser.T__31) | (1 << CParser.T__32) | (1 << CParser.T__33) | (1 << CParser.T__34) | (1 << CParser.T__35) | (1 << CParser.T__36))) != 0) or _la==CParser.IDENTIFIER):
+                    break
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Struct_declarationContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def specifier_qualifier_list(self):
+            return self.getTypedRuleContext(CParser.Specifier_qualifier_listContext,0)
+
+
+        def struct_declarator_list(self):
+            return self.getTypedRuleContext(CParser.Struct_declarator_listContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_struct_declaration
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStruct_declaration" ):
+                listener.enterStruct_declaration(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStruct_declaration" ):
+                listener.exitStruct_declaration(self)
+
+
+
+
+    def struct_declaration(self):
+
+        localctx = CParser.Struct_declarationContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 26, self.RULE_struct_declaration)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 271
+            self.specifier_qualifier_list()
+            self.state = 272
+            self.struct_declarator_list()
+            self.state = 273
+            self.match(CParser.T__1)
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Specifier_qualifier_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def type_qualifier(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Type_qualifierContext)
+            else:
+                return self.getTypedRuleContext(CParser.Type_qualifierContext,i)
+
+
+        # @param  i=None Type: int
+        def type_specifier(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Type_specifierContext)
+            else:
+                return self.getTypedRuleContext(CParser.Type_specifierContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_specifier_qualifier_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterSpecifier_qualifier_list" ):
+                listener.enterSpecifier_qualifier_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitSpecifier_qualifier_list" ):
+                listener.exitSpecifier_qualifier_list(self)
+
+
+
+
+    def specifier_qualifier_list(self):
+
+        localctx = CParser.Specifier_qualifier_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 28, self.RULE_specifier_qualifier_list)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 277
+            self._errHandler.sync(self)
+            _alt = 1
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt == 1:
+                    self.state = 277
+                    self._errHandler.sync(self)
+                    token = self._input.LA(1)
+                    if token in [CParser.T__24, CParser.T__25, CParser.T__26, CParser.T__27, CParser.T__28, CParser.T__29, CParser.T__30, CParser.T__31, CParser.T__32, CParser.T__33, CParser.T__34, CParser.T__35, CParser.T__36]:
+                        self.state = 275
+                        self.type_qualifier()
+                        pass
+                    elif token in [CParser.T__10, CParser.T__11, CParser.T__12, CParser.T__13, CParser.T__14, CParser.T__15, CParser.T__16, CParser.T__17, CParser.T__18, CParser.T__20, CParser.T__21, CParser.T__23, CParser.IDENTIFIER]:
+                        self.state = 276
+                        self.type_specifier()
+                        pass
+                    else:
+                        raise NoViableAltException(self)
+
+
+                else:
+                    raise NoViableAltException(self)
+                self.state = 279
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,21,self._ctx)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Struct_declarator_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def struct_declarator(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Struct_declaratorContext)
+            else:
+                return self.getTypedRuleContext(CParser.Struct_declaratorContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_struct_declarator_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStruct_declarator_list" ):
+                listener.enterStruct_declarator_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStruct_declarator_list" ):
+                listener.exitStruct_declarator_list(self)
+
+
+
+
+    def struct_declarator_list(self):
+
+        localctx = CParser.Struct_declarator_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 30, self.RULE_struct_declarator_list)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 281
+            self.struct_declarator()
+            self.state = 286
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__3:
+                self.state = 282
+                self.match(CParser.T__3)
+                self.state = 283
+                self.struct_declarator()
+                self.state = 288
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Struct_declaratorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def declarator(self):
+            return self.getTypedRuleContext(CParser.DeclaratorContext,0)
+
+
+        def constant_expression(self):
+            return self.getTypedRuleContext(CParser.Constant_expressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_struct_declarator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStruct_declarator" ):
+                listener.enterStruct_declarator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStruct_declarator" ):
+                listener.exitStruct_declarator(self)
+
+
+
+
+    def struct_declarator(self):
+
+        localctx = CParser.Struct_declaratorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 32, self.RULE_struct_declarator)
+        self._la = 0 # Token type
+        try:
+            self.state = 296
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.T__33, CParser.T__34, CParser.T__35, CParser.T__37, CParser.T__41, CParser.IDENTIFIER]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 289
+                self.declarator()
+                self.state = 292
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__22:
+                    self.state = 290
+                    self.match(CParser.T__22)
+                    self.state = 291
+                    self.constant_expression()
+
+
+                pass
+            elif token in [CParser.T__22]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 294
+                self.match(CParser.T__22)
+                self.state = 295
+                self.constant_expression()
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Enum_specifierContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def enumerator_list(self):
+            return self.getTypedRuleContext(CParser.Enumerator_listContext,0)
+
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def getRuleIndex(self):
+            return CParser.RULE_enum_specifier
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterEnum_specifier" ):
+                listener.enterEnum_specifier(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitEnum_specifier" ):
+                listener.exitEnum_specifier(self)
+
+
+
+
+    def enum_specifier(self):
+
+        localctx = CParser.Enum_specifierContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 34, self.RULE_enum_specifier)
+        self._la = 0 # Token type
+        try:
+            self.state = 317
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,27,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 298
+                self.match(CParser.T__23)
+                self.state = 299
+                self.match(CParser.T__0)
+                self.state = 300
+                self.enumerator_list()
+                self.state = 302
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__3:
+                    self.state = 301
+                    self.match(CParser.T__3)
+
+
+                self.state = 304
+                self.match(CParser.T__19)
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 306
+                self.match(CParser.T__23)
+                self.state = 307
+                self.match(CParser.IDENTIFIER)
+                self.state = 308
+                self.match(CParser.T__0)
+                self.state = 309
+                self.enumerator_list()
+                self.state = 311
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__3:
+                    self.state = 310
+                    self.match(CParser.T__3)
+
+
+                self.state = 313
+                self.match(CParser.T__19)
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 315
+                self.match(CParser.T__23)
+                self.state = 316
+                self.match(CParser.IDENTIFIER)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Enumerator_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def enumerator(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.EnumeratorContext)
+            else:
+                return self.getTypedRuleContext(CParser.EnumeratorContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_enumerator_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterEnumerator_list" ):
+                listener.enterEnumerator_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitEnumerator_list" ):
+                listener.exitEnumerator_list(self)
+
+
+
+
+    def enumerator_list(self):
+
+        localctx = CParser.Enumerator_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 36, self.RULE_enumerator_list)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 319
+            self.enumerator()
+            self.state = 324
+            self._errHandler.sync(self)
+            _alt = self._interp.adaptivePredict(self._input,28,self._ctx)
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt==1:
+                    self.state = 320
+                    self.match(CParser.T__3)
+                    self.state = 321
+                    self.enumerator()
+                self.state = 326
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,28,self._ctx)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class EnumeratorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def constant_expression(self):
+            return self.getTypedRuleContext(CParser.Constant_expressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_enumerator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterEnumerator" ):
+                listener.enterEnumerator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitEnumerator" ):
+                listener.exitEnumerator(self)
+
+
+
+
+    def enumerator(self):
+
+        localctx = CParser.EnumeratorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 38, self.RULE_enumerator)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 327
+            self.match(CParser.IDENTIFIER)
+            self.state = 330
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            if _la==CParser.T__4:
+                self.state = 328
+                self.match(CParser.T__4)
+                self.state = 329
+                self.constant_expression()
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Type_qualifierContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_type_qualifier
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterType_qualifier" ):
+                listener.enterType_qualifier(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitType_qualifier" ):
+                listener.exitType_qualifier(self)
+
+
+
+
+    def type_qualifier(self):
+
+        localctx = CParser.Type_qualifierContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 40, self.RULE_type_qualifier)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 332
+            _la = self._input.LA(1)
+            if not((((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__24) | (1 << CParser.T__25) | (1 << CParser.T__26) | (1 << CParser.T__27) | (1 << CParser.T__28) | (1 << CParser.T__29) | (1 << CParser.T__30) | (1 << CParser.T__31) | (1 << CParser.T__32) | (1 << CParser.T__33) | (1 << CParser.T__34) | (1 << CParser.T__35) | (1 << CParser.T__36))) != 0)):
+                self._errHandler.recoverInline(self)
+            else:
+                self._errHandler.reportMatch(self)
+                self.consume()
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class DeclaratorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def direct_declarator(self):
+            return self.getTypedRuleContext(CParser.Direct_declaratorContext,0)
+
+
+        def pointer(self):
+            return self.getTypedRuleContext(CParser.PointerContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_declarator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterDeclarator" ):
+                listener.enterDeclarator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitDeclarator" ):
+                listener.exitDeclarator(self)
+
+
+
+
+    def declarator(self):
+
+        localctx = CParser.DeclaratorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 42, self.RULE_declarator)
+        self._la = 0 # Token type
+        try:
+            self.state = 348
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,34,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 335
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__41:
+                    self.state = 334
+                    self.pointer()
+
+
+                self.state = 338
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__33:
+                    self.state = 337
+                    self.match(CParser.T__33)
+
+
+                self.state = 341
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__34:
+                    self.state = 340
+                    self.match(CParser.T__34)
+
+
+                self.state = 344
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__35:
+                    self.state = 343
+                    self.match(CParser.T__35)
+
+
+                self.state = 346
+                self.direct_declarator()
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 347
+                self.pointer()
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Direct_declaratorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        # @param  i=None Type: int
+        def declarator_suffix(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Declarator_suffixContext)
+            else:
+                return self.getTypedRuleContext(CParser.Declarator_suffixContext,i)
+
+
+        def declarator(self):
+            return self.getTypedRuleContext(CParser.DeclaratorContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_direct_declarator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterDirect_declarator" ):
+                listener.enterDirect_declarator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitDirect_declarator" ):
+                listener.exitDirect_declarator(self)
+
+
+
+
+    def direct_declarator(self):
+
+        localctx = CParser.Direct_declaratorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 44, self.RULE_direct_declarator)
+        try:
+            self.state = 368
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.IDENTIFIER]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 350
+                self.match(CParser.IDENTIFIER)
+                self.state = 354
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,35,self._ctx)
+                while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                    if _alt==1:
+                        self.state = 351
+                        self.declarator_suffix()
+                    self.state = 356
+                    self._errHandler.sync(self)
+                    _alt = self._interp.adaptivePredict(self._input,35,self._ctx)
+
+                pass
+            elif token in [CParser.T__37]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 357
+                self.match(CParser.T__37)
+                self.state = 359
+                self._errHandler.sync(self)
+                la_ = self._interp.adaptivePredict(self._input,36,self._ctx)
+                if la_ == 1:
+                    self.state = 358
+                    self.match(CParser.T__33)
+
+
+                self.state = 361
+                self.declarator()
+                self.state = 362
+                self.match(CParser.T__38)
+                self.state = 364
+                self._errHandler.sync(self)
+                _alt = 1
+                while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                    if _alt == 1:
+                        self.state = 363
+                        self.declarator_suffix()
+
+                    else:
+                        raise NoViableAltException(self)
+                    self.state = 366
+                    self._errHandler.sync(self)
+                    _alt = self._interp.adaptivePredict(self._input,37,self._ctx)
+
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Declarator_suffixContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def constant_expression(self):
+            return self.getTypedRuleContext(CParser.Constant_expressionContext,0)
+
+
+        def parameter_type_list(self):
+            return self.getTypedRuleContext(CParser.Parameter_type_listContext,0)
+
+
+        def identifier_list(self):
+            return self.getTypedRuleContext(CParser.Identifier_listContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_declarator_suffix
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterDeclarator_suffix" ):
+                listener.enterDeclarator_suffix(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitDeclarator_suffix" ):
+                listener.exitDeclarator_suffix(self)
+
+
+
+
+    def declarator_suffix(self):
+
+        localctx = CParser.Declarator_suffixContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 46, self.RULE_declarator_suffix)
+        try:
+            self.state = 386
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,39,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 370
+                self.match(CParser.T__39)
+                self.state = 371
+                self.constant_expression()
+                self.state = 372
+                self.match(CParser.T__40)
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 374
+                self.match(CParser.T__39)
+                self.state = 375
+                self.match(CParser.T__40)
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 376
+                self.match(CParser.T__37)
+                self.state = 377
+                self.parameter_type_list()
+                self.state = 378
+                self.match(CParser.T__38)
+                pass
+
+            elif la_ == 4:
+                self.enterOuterAlt(localctx, 4)
+                self.state = 380
+                self.match(CParser.T__37)
+                self.state = 381
+                self.identifier_list()
+                self.state = 382
+                self.match(CParser.T__38)
+                pass
+
+            elif la_ == 5:
+                self.enterOuterAlt(localctx, 5)
+                self.state = 384
+                self.match(CParser.T__37)
+                self.state = 385
+                self.match(CParser.T__38)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class PointerContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def type_qualifier(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Type_qualifierContext)
+            else:
+                return self.getTypedRuleContext(CParser.Type_qualifierContext,i)
+
+
+        def pointer(self):
+            return self.getTypedRuleContext(CParser.PointerContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_pointer
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterPointer" ):
+                listener.enterPointer(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitPointer" ):
+                listener.exitPointer(self)
+
+
+
+
+    def pointer(self):
+
+        localctx = CParser.PointerContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 48, self.RULE_pointer)
+        try:
+            self.state = 400
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,42,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 388
+                self.match(CParser.T__41)
+                self.state = 390
+                self._errHandler.sync(self)
+                _alt = 1
+                while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                    if _alt == 1:
+                        self.state = 389
+                        self.type_qualifier()
+
+                    else:
+                        raise NoViableAltException(self)
+                    self.state = 392
+                    self._errHandler.sync(self)
+                    _alt = self._interp.adaptivePredict(self._input,40,self._ctx)
+
+                self.state = 395
+                self._errHandler.sync(self)
+                la_ = self._interp.adaptivePredict(self._input,41,self._ctx)
+                if la_ == 1:
+                    self.state = 394
+                    self.pointer()
+
+
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 397
+                self.match(CParser.T__41)
+                self.state = 398
+                self.pointer()
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 399
+                self.match(CParser.T__41)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Parameter_type_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def parameter_list(self):
+            return self.getTypedRuleContext(CParser.Parameter_listContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_parameter_type_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterParameter_type_list" ):
+                listener.enterParameter_type_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitParameter_type_list" ):
+                listener.exitParameter_type_list(self)
+
+
+
+
+    def parameter_type_list(self):
+
+        localctx = CParser.Parameter_type_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 50, self.RULE_parameter_type_list)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 402
+            self.parameter_list()
+            self.state = 408
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            if _la==CParser.T__3:
+                self.state = 403
+                self.match(CParser.T__3)
+                self.state = 405
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__28:
+                    self.state = 404
+                    self.match(CParser.T__28)
+
+
+                self.state = 407
+                self.match(CParser.T__42)
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Parameter_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def parameter_declaration(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Parameter_declarationContext)
+            else:
+                return self.getTypedRuleContext(CParser.Parameter_declarationContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_parameter_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterParameter_list" ):
+                listener.enterParameter_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitParameter_list" ):
+                listener.exitParameter_list(self)
+
+
+
+
+    def parameter_list(self):
+
+        localctx = CParser.Parameter_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 52, self.RULE_parameter_list)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 410
+            self.parameter_declaration()
+            self.state = 418
+            self._errHandler.sync(self)
+            _alt = self._interp.adaptivePredict(self._input,46,self._ctx)
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt==1:
+                    self.state = 411
+                    self.match(CParser.T__3)
+                    self.state = 413
+                    self._errHandler.sync(self)
+                    la_ = self._interp.adaptivePredict(self._input,45,self._ctx)
+                    if la_ == 1:
+                        self.state = 412
+                        self.match(CParser.T__28)
+
+
+                    self.state = 415
+                    self.parameter_declaration()
+                self.state = 420
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,46,self._ctx)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Parameter_declarationContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def declaration_specifiers(self):
+            return self.getTypedRuleContext(CParser.Declaration_specifiersContext,0)
+
+
+        # @param  i=None Type: int
+        def declarator(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.DeclaratorContext)
+            else:
+                return self.getTypedRuleContext(CParser.DeclaratorContext,i)
+
+
+        # @param  i=None Type: int
+        def abstract_declarator(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Abstract_declaratorContext)
+            else:
+                return self.getTypedRuleContext(CParser.Abstract_declaratorContext,i)
+
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        # @param  i=None Type: int
+        def pointer(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.PointerContext)
+            else:
+                return self.getTypedRuleContext(CParser.PointerContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_parameter_declaration
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterParameter_declaration" ):
+                listener.enterParameter_declaration(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitParameter_declaration" ):
+                listener.exitParameter_declaration(self)
+
+
+
+
+    def parameter_declaration(self):
+
+        localctx = CParser.Parameter_declarationContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 54, self.RULE_parameter_declaration)
+        self._la = 0 # Token type
+        try:
+            self.state = 439
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,51,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 421
+                self.declaration_specifiers()
+                self.state = 426
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                while ((((_la - 34)) & ~0x3f) == 0 and ((1 << (_la - 34)) & ((1 << (CParser.T__33 - 34)) | (1 << (CParser.T__34 - 34)) | (1 << (CParser.T__35 - 34)) | (1 << (CParser.T__37 - 34)) | (1 << (CParser.T__39 - 34)) | (1 << (CParser.T__41 - 34)) | (1 << (CParser.IDENTIFIER - 34)))) != 0):
+                    self.state = 424
+                    self._errHandler.sync(self)
+                    la_ = self._interp.adaptivePredict(self._input,47,self._ctx)
+                    if la_ == 1:
+                        self.state = 422
+                        self.declarator()
+                        pass
+
+                    elif la_ == 2:
+                        self.state = 423
+                        self.abstract_declarator()
+                        pass
+
+
+                    self.state = 428
+                    self._errHandler.sync(self)
+                    _la = self._input.LA(1)
+
+                self.state = 430
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__28:
+                    self.state = 429
+                    self.match(CParser.T__28)
+
+
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 435
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                while _la==CParser.T__41:
+                    self.state = 432
+                    self.pointer()
+                    self.state = 437
+                    self._errHandler.sync(self)
+                    _la = self._input.LA(1)
+
+                self.state = 438
+                self.match(CParser.IDENTIFIER)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Identifier_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def IDENTIFIER(self,i=None):
+            if i is None:
+                return self.getTokens(CParser.IDENTIFIER)
+            else:
+                return self.getToken(CParser.IDENTIFIER, i)
+
+        def getRuleIndex(self):
+            return CParser.RULE_identifier_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterIdentifier_list" ):
+                listener.enterIdentifier_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitIdentifier_list" ):
+                listener.exitIdentifier_list(self)
+
+
+
+
+    def identifier_list(self):
+
+        localctx = CParser.Identifier_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 56, self.RULE_identifier_list)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 441
+            self.match(CParser.IDENTIFIER)
+            self.state = 446
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__3:
+                self.state = 442
+                self.match(CParser.T__3)
+                self.state = 443
+                self.match(CParser.IDENTIFIER)
+                self.state = 448
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Type_nameContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def specifier_qualifier_list(self):
+            return self.getTypedRuleContext(CParser.Specifier_qualifier_listContext,0)
+
+
+        def abstract_declarator(self):
+            return self.getTypedRuleContext(CParser.Abstract_declaratorContext,0)
+
+
+        def type_id(self):
+            return self.getTypedRuleContext(CParser.Type_idContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_type_name
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterType_name" ):
+                listener.enterType_name(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitType_name" ):
+                listener.exitType_name(self)
+
+
+
+
+    def type_name(self):
+
+        localctx = CParser.Type_nameContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 58, self.RULE_type_name)
+        self._la = 0 # Token type
+        try:
+            self.state = 454
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,54,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 449
+                self.specifier_qualifier_list()
+                self.state = 451
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__37) | (1 << CParser.T__39) | (1 << CParser.T__41))) != 0):
+                    self.state = 450
+                    self.abstract_declarator()
+
+
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 453
+                self.type_id()
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Abstract_declaratorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def pointer(self):
+            return self.getTypedRuleContext(CParser.PointerContext,0)
+
+
+        def direct_abstract_declarator(self):
+            return self.getTypedRuleContext(CParser.Direct_abstract_declaratorContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_abstract_declarator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAbstract_declarator" ):
+                listener.enterAbstract_declarator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAbstract_declarator" ):
+                listener.exitAbstract_declarator(self)
+
+
+
+
+    def abstract_declarator(self):
+
+        localctx = CParser.Abstract_declaratorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 60, self.RULE_abstract_declarator)
+        try:
+            self.state = 461
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.T__41]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 456
+                self.pointer()
+                self.state = 458
+                self._errHandler.sync(self)
+                la_ = self._interp.adaptivePredict(self._input,55,self._ctx)
+                if la_ == 1:
+                    self.state = 457
+                    self.direct_abstract_declarator()
+
+
+                pass
+            elif token in [CParser.T__37, CParser.T__39]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 460
+                self.direct_abstract_declarator()
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Direct_abstract_declaratorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def abstract_declarator(self):
+            return self.getTypedRuleContext(CParser.Abstract_declaratorContext,0)
+
+
+        # @param  i=None Type: int
+        def abstract_declarator_suffix(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Abstract_declarator_suffixContext)
+            else:
+                return self.getTypedRuleContext(CParser.Abstract_declarator_suffixContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_direct_abstract_declarator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterDirect_abstract_declarator" ):
+                listener.enterDirect_abstract_declarator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitDirect_abstract_declarator" ):
+                listener.exitDirect_abstract_declarator(self)
+
+
+
+    def direct_abstract_declarator(self):
+
+        localctx = CParser.Direct_abstract_declaratorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 62, self.RULE_direct_abstract_declarator)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 468
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,57,self._ctx)
+            if la_ == 1:
+                self.state = 463
+                self.match(CParser.T__37)
+                self.state = 464
+                self.abstract_declarator()
+                self.state = 465
+                self.match(CParser.T__38)
+                pass
+
+            elif la_ == 2:
+                self.state = 467
+                self.abstract_declarator_suffix()
+                pass
+
+
+            self.state = 473
+            self._errHandler.sync(self)
+            _alt = self._interp.adaptivePredict(self._input,58,self._ctx)
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt==1:
+                    self.state = 470
+                    self.abstract_declarator_suffix()
+                self.state = 475
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,58,self._ctx)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Abstract_declarator_suffixContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def constant_expression(self):
+            return self.getTypedRuleContext(CParser.Constant_expressionContext,0)
+
+
+        def parameter_type_list(self):
+            return self.getTypedRuleContext(CParser.Parameter_type_listContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_abstract_declarator_suffix
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAbstract_declarator_suffix" ):
+                listener.enterAbstract_declarator_suffix(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAbstract_declarator_suffix" ):
+                listener.exitAbstract_declarator_suffix(self)
+
+
+
+
+    def abstract_declarator_suffix(self):
+
+        localctx = CParser.Abstract_declarator_suffixContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 64, self.RULE_abstract_declarator_suffix)
+        try:
+            self.state = 488
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,59,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 476
+                self.match(CParser.T__39)
+                self.state = 477
+                self.match(CParser.T__40)
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 478
+                self.match(CParser.T__39)
+                self.state = 479
+                self.constant_expression()
+                self.state = 480
+                self.match(CParser.T__40)
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 482
+                self.match(CParser.T__37)
+                self.state = 483
+                self.match(CParser.T__38)
+                pass
+
+            elif la_ == 4:
+                self.enterOuterAlt(localctx, 4)
+                self.state = 484
+                self.match(CParser.T__37)
+                self.state = 485
+                self.parameter_type_list()
+                self.state = 486
+                self.match(CParser.T__38)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class InitializerContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def assignment_expression(self):
+            return self.getTypedRuleContext(CParser.Assignment_expressionContext,0)
+
+
+        def initializer_list(self):
+            return self.getTypedRuleContext(CParser.Initializer_listContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_initializer
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterInitializer" ):
+                listener.enterInitializer(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitInitializer" ):
+                listener.exitInitializer(self)
+
+
+
+
+    def initializer(self):
+
+        localctx = CParser.InitializerContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 66, self.RULE_initializer)
+        self._la = 0 # Token type
+        try:
+            self.state = 498
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.T__37, CParser.T__41, CParser.T__43, CParser.T__44, CParser.T__47, CParser.T__48, CParser.T__49, CParser.T__52, CParser.T__53, CParser.T__54, CParser.IDENTIFIER, CParser.CHARACTER_LITERAL, CParser.STRING_LITERAL, CParser.HEX_LITERAL, CParser.DECIMAL_LITERAL, CParser.OCTAL_LITERAL, CParser.FLOATING_POINT_LITERAL]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 490
+                self.assignment_expression()
+                pass
+            elif token in [CParser.T__0]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 491
+                self.match(CParser.T__0)
+                self.state = 492
+                self.initializer_list()
+                self.state = 494
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__3:
+                    self.state = 493
+                    self.match(CParser.T__3)
+
+
+                self.state = 496
+                self.match(CParser.T__19)
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Initializer_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def initializer(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.InitializerContext)
+            else:
+                return self.getTypedRuleContext(CParser.InitializerContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_initializer_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterInitializer_list" ):
+                listener.enterInitializer_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitInitializer_list" ):
+                listener.exitInitializer_list(self)
+
+
+
+
+    def initializer_list(self):
+
+        localctx = CParser.Initializer_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 68, self.RULE_initializer_list)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 500
+            self.initializer()
+            self.state = 505
+            self._errHandler.sync(self)
+            _alt = self._interp.adaptivePredict(self._input,62,self._ctx)
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt==1:
+                    self.state = 501
+                    self.match(CParser.T__3)
+                    self.state = 502
+                    self.initializer()
+                self.state = 507
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,62,self._ctx)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Argument_expression_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def assignment_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Assignment_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Assignment_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_argument_expression_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterArgument_expression_list" ):
+                listener.enterArgument_expression_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitArgument_expression_list" ):
+                listener.exitArgument_expression_list(self)
+
+
+
+
+    def argument_expression_list(self):
+
+        localctx = CParser.Argument_expression_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 70, self.RULE_argument_expression_list)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 508
+            self.assignment_expression()
+            self.state = 510
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            if _la==CParser.T__28:
+                self.state = 509
+                self.match(CParser.T__28)
+
+
+            self.state = 519
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__3:
+                self.state = 512
+                self.match(CParser.T__3)
+                self.state = 513
+                self.assignment_expression()
+                self.state = 515
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                if _la==CParser.T__28:
+                    self.state = 514
+                    self.match(CParser.T__28)
+
+
+                self.state = 521
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Additive_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def multiplicative_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Multiplicative_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Multiplicative_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_additive_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAdditive_expression" ):
+                listener.enterAdditive_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAdditive_expression" ):
+                listener.exitAdditive_expression(self)
+
+
+
+
+    def additive_expression(self):
+
+        localctx = CParser.Additive_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 72, self.RULE_additive_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 522
+            self.multiplicative_expression()
+            self.state = 529
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__43 or _la==CParser.T__44:
+                self.state = 527
+                self._errHandler.sync(self)
+                token = self._input.LA(1)
+                if token in [CParser.T__43]:
+                    self.state = 523
+                    self.match(CParser.T__43)
+                    self.state = 524
+                    self.multiplicative_expression()
+                    pass
+                elif token in [CParser.T__44]:
+                    self.state = 525
+                    self.match(CParser.T__44)
+                    self.state = 526
+                    self.multiplicative_expression()
+                    pass
+                else:
+                    raise NoViableAltException(self)
+
+                self.state = 531
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Multiplicative_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def cast_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Cast_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Cast_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_multiplicative_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterMultiplicative_expression" ):
+                listener.enterMultiplicative_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitMultiplicative_expression" ):
+                listener.exitMultiplicative_expression(self)
+
+
+
+
+    def multiplicative_expression(self):
+
+        localctx = CParser.Multiplicative_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 74, self.RULE_multiplicative_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 532
+            self.cast_expression()
+            self.state = 541
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__41) | (1 << CParser.T__45) | (1 << CParser.T__46))) != 0):
+                self.state = 539
+                self._errHandler.sync(self)
+                token = self._input.LA(1)
+                if token in [CParser.T__41]:
+                    self.state = 533
+                    self.match(CParser.T__41)
+                    self.state = 534
+                    self.cast_expression()
+                    pass
+                elif token in [CParser.T__45]:
+                    self.state = 535
+                    self.match(CParser.T__45)
+                    self.state = 536
+                    self.cast_expression()
+                    pass
+                elif token in [CParser.T__46]:
+                    self.state = 537
+                    self.match(CParser.T__46)
+                    self.state = 538
+                    self.cast_expression()
+                    pass
+                else:
+                    raise NoViableAltException(self)
+
+                self.state = 543
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Cast_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def type_name(self):
+            return self.getTypedRuleContext(CParser.Type_nameContext,0)
+
+
+        def cast_expression(self):
+            return self.getTypedRuleContext(CParser.Cast_expressionContext,0)
+
+
+        def unary_expression(self):
+            return self.getTypedRuleContext(CParser.Unary_expressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_cast_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterCast_expression" ):
+                listener.enterCast_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitCast_expression" ):
+                listener.exitCast_expression(self)
+
+
+
+
+    def cast_expression(self):
+
+        localctx = CParser.Cast_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 76, self.RULE_cast_expression)
+        try:
+            self.state = 550
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,70,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 544
+                self.match(CParser.T__37)
+                self.state = 545
+                self.type_name()
+                self.state = 546
+                self.match(CParser.T__38)
+                self.state = 547
+                self.cast_expression()
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 549
+                self.unary_expression()
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Unary_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def postfix_expression(self):
+            return self.getTypedRuleContext(CParser.Postfix_expressionContext,0)
+
+
+        def unary_expression(self):
+            return self.getTypedRuleContext(CParser.Unary_expressionContext,0)
+
+
+        def unary_operator(self):
+            return self.getTypedRuleContext(CParser.Unary_operatorContext,0)
+
+
+        def cast_expression(self):
+            return self.getTypedRuleContext(CParser.Cast_expressionContext,0)
+
+
+        def type_name(self):
+            return self.getTypedRuleContext(CParser.Type_nameContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_unary_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterUnary_expression" ):
+                listener.enterUnary_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitUnary_expression" ):
+                listener.exitUnary_expression(self)
+
+
+
+
+    def unary_expression(self):
+
+        localctx = CParser.Unary_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 78, self.RULE_unary_expression)
+        try:
+            self.state = 567
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,71,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 552
+                self.postfix_expression()
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 553
+                self.match(CParser.T__47)
+                self.state = 554
+                self.unary_expression()
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 555
+                self.match(CParser.T__48)
+                self.state = 556
+                self.unary_expression()
+                pass
+
+            elif la_ == 4:
+                self.enterOuterAlt(localctx, 4)
+                self.state = 557
+                self.unary_operator()
+                self.state = 558
+                self.cast_expression()
+                pass
+
+            elif la_ == 5:
+                self.enterOuterAlt(localctx, 5)
+                self.state = 560
+                self.match(CParser.T__49)
+                self.state = 561
+                self.unary_expression()
+                pass
+
+            elif la_ == 6:
+                self.enterOuterAlt(localctx, 6)
+                self.state = 562
+                self.match(CParser.T__49)
+                self.state = 563
+                self.match(CParser.T__37)
+                self.state = 564
+                self.type_name()
+                self.state = 565
+                self.match(CParser.T__38)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Postfix_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+            self.FuncCallText = ''
+            self.p = None # Primary_expressionContext
+            self.a = None # Token
+            self.c = None # Argument_expression_listContext
+            self.b = None # Token
+            self.x = None # Token
+            self.y = None # Token
+            self.z = None # Token
+
+        def primary_expression(self):
+            return self.getTypedRuleContext(CParser.Primary_expressionContext,0)
+
+
+        # @param  i=None Type: int
+        def expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.ExpressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.ExpressionContext,i)
+
+
+        # @param  i=None Type: int
+        def macro_parameter_list(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Macro_parameter_listContext)
+            else:
+                return self.getTypedRuleContext(CParser.Macro_parameter_listContext,i)
+
+
+        # @param  i=None Type: int
+        def argument_expression_list(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Argument_expression_listContext)
+            else:
+                return self.getTypedRuleContext(CParser.Argument_expression_listContext,i)
+
+
+        # @param  i=None Type: int
+        def IDENTIFIER(self,i=None):
+            if i is None:
+                return self.getTokens(CParser.IDENTIFIER)
+            else:
+                return self.getToken(CParser.IDENTIFIER, i)
+
+        def getRuleIndex(self):
+            return CParser.RULE_postfix_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterPostfix_expression" ):
+                listener.enterPostfix_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitPostfix_expression" ):
+                listener.exitPostfix_expression(self)
+
+
+
+
+    def postfix_expression(self):
+
+        localctx = CParser.Postfix_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 80, self.RULE_postfix_expression)
+
+        self.FuncCallText=''
+
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 569
+            localctx.p = self.primary_expression()
+            self.FuncCallText += (None if localctx.p is None else self._input.getText((localctx.p.start,localctx.p.stop)))
+            self.state = 600
+            self._errHandler.sync(self)
+            _alt = self._interp.adaptivePredict(self._input,73,self._ctx)
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt==1:
+                    self.state = 598
+                    self._errHandler.sync(self)
+                    la_ = self._interp.adaptivePredict(self._input,72,self._ctx)
+                    if la_ == 1:
+                        self.state = 571
+                        self.match(CParser.T__39)
+                        self.state = 572
+                        self.expression()
+                        self.state = 573
+                        self.match(CParser.T__40)
+                        pass
+
+                    elif la_ == 2:
+                        self.state = 575
+                        self.match(CParser.T__37)
+                        self.state = 576
+                        localctx.a = self.match(CParser.T__38)
+                        self.StoreFunctionCalling((None if localctx.p is None else localctx.p.start).line, (None if localctx.p is None else localctx.p.start).column, (0 if localctx.a is None else localctx.a.line), localctx.a.column, self.FuncCallText, '')
+                        pass
+
+                    elif la_ == 3:
+                        self.state = 578
+                        self.match(CParser.T__37)
+                        self.state = 579
+                        localctx.c = self.argument_expression_list()
+                        self.state = 580
+                        localctx.b = self.match(CParser.T__38)
+                        self.StoreFunctionCalling((None if localctx.p is None else localctx.p.start).line, (None if localctx.p is None else localctx.p.start).column, (0 if localctx.b is None else localctx.b.line), localctx.b.column, self.FuncCallText, (None if localctx.c is None else self._input.getText((localctx.c.start,localctx.c.stop))))
+                        pass
+
+                    elif la_ == 4:
+                        self.state = 583
+                        self.match(CParser.T__37)
+                        self.state = 584
+                        self.macro_parameter_list()
+                        self.state = 585
+                        self.match(CParser.T__38)
+                        pass
+
+                    elif la_ == 5:
+                        self.state = 587
+                        self.match(CParser.T__50)
+                        self.state = 588
+                        localctx.x = self.match(CParser.IDENTIFIER)
+                        self.FuncCallText += '.' + (None if localctx.x is None else localctx.x.text)
+                        pass
+
+                    elif la_ == 6:
+                        self.state = 590
+                        self.match(CParser.T__41)
+                        self.state = 591
+                        localctx.y = self.match(CParser.IDENTIFIER)
+                        self.FuncCallText = (None if localctx.y is None else localctx.y.text)
+                        pass
+
+                    elif la_ == 7:
+                        self.state = 593
+                        self.match(CParser.T__51)
+                        self.state = 594
+                        localctx.z = self.match(CParser.IDENTIFIER)
+                        self.FuncCallText += '->' + (None if localctx.z is None else localctx.z.text)
+                        pass
+
+                    elif la_ == 8:
+                        self.state = 596
+                        self.match(CParser.T__47)
+                        pass
+
+                    elif la_ == 9:
+                        self.state = 597
+                        self.match(CParser.T__48)
+                        pass
+
+
+                self.state = 602
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,73,self._ctx)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Macro_parameter_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def parameter_declaration(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Parameter_declarationContext)
+            else:
+                return self.getTypedRuleContext(CParser.Parameter_declarationContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_macro_parameter_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterMacro_parameter_list" ):
+                listener.enterMacro_parameter_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitMacro_parameter_list" ):
+                listener.exitMacro_parameter_list(self)
+
+
+
+
+    def macro_parameter_list(self):
+
+        localctx = CParser.Macro_parameter_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 82, self.RULE_macro_parameter_list)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 603
+            self.parameter_declaration()
+            self.state = 608
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__3:
+                self.state = 604
+                self.match(CParser.T__3)
+                self.state = 605
+                self.parameter_declaration()
+                self.state = 610
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Unary_operatorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_unary_operator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterUnary_operator" ):
+                listener.enterUnary_operator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitUnary_operator" ):
+                listener.exitUnary_operator(self)
+
+
+
+
+    def unary_operator(self):
+
+        localctx = CParser.Unary_operatorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 84, self.RULE_unary_operator)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 611
+            _la = self._input.LA(1)
+            if not((((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__41) | (1 << CParser.T__43) | (1 << CParser.T__44) | (1 << CParser.T__52) | (1 << CParser.T__53) | (1 << CParser.T__54))) != 0)):
+                self._errHandler.recoverInline(self)
+            else:
+                self._errHandler.reportMatch(self)
+                self.consume()
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Primary_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def constant(self):
+            return self.getTypedRuleContext(CParser.ConstantContext,0)
+
+
+        def expression(self):
+            return self.getTypedRuleContext(CParser.ExpressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_primary_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterPrimary_expression" ):
+                listener.enterPrimary_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitPrimary_expression" ):
+                listener.exitPrimary_expression(self)
+
+
+
+
+    def primary_expression(self):
+
+        localctx = CParser.Primary_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 86, self.RULE_primary_expression)
+        try:
+            self.state = 619
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,75,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 613
+                self.match(CParser.IDENTIFIER)
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 614
+                self.constant()
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 615
+                self.match(CParser.T__37)
+                self.state = 616
+                self.expression()
+                self.state = 617
+                self.match(CParser.T__38)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class ConstantContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def HEX_LITERAL(self):
+            return self.getToken(CParser.HEX_LITERAL, 0)
+
+        def OCTAL_LITERAL(self):
+            return self.getToken(CParser.OCTAL_LITERAL, 0)
+
+        def DECIMAL_LITERAL(self):
+            return self.getToken(CParser.DECIMAL_LITERAL, 0)
+
+        def CHARACTER_LITERAL(self):
+            return self.getToken(CParser.CHARACTER_LITERAL, 0)
+
+        # @param  i=None Type: int
+        def IDENTIFIER(self,i=None):
+            if i is None:
+                return self.getTokens(CParser.IDENTIFIER)
+            else:
+                return self.getToken(CParser.IDENTIFIER, i)
+
+        # @param  i=None Type: int
+        def STRING_LITERAL(self,i=None):
+            if i is None:
+                return self.getTokens(CParser.STRING_LITERAL)
+            else:
+                return self.getToken(CParser.STRING_LITERAL, i)
+
+        def FLOATING_POINT_LITERAL(self):
+            return self.getToken(CParser.FLOATING_POINT_LITERAL, 0)
+
+        def getRuleIndex(self):
+            return CParser.RULE_constant
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterConstant" ):
+                listener.enterConstant(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitConstant" ):
+                listener.exitConstant(self)
+
+
+
+
+    def constant(self):
+
+        localctx = CParser.ConstantContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 88, self.RULE_constant)
+        self._la = 0 # Token type
+        try:
+            self.state = 647
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.HEX_LITERAL]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 621
+                self.match(CParser.HEX_LITERAL)
+                pass
+            elif token in [CParser.OCTAL_LITERAL]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 622
+                self.match(CParser.OCTAL_LITERAL)
+                pass
+            elif token in [CParser.DECIMAL_LITERAL]:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 623
+                self.match(CParser.DECIMAL_LITERAL)
+                pass
+            elif token in [CParser.CHARACTER_LITERAL]:
+                self.enterOuterAlt(localctx, 4)
+                self.state = 624
+                self.match(CParser.CHARACTER_LITERAL)
+                pass
+            elif token in [CParser.IDENTIFIER, CParser.STRING_LITERAL]:
+                self.enterOuterAlt(localctx, 5)
+                self.state = 636
+                self._errHandler.sync(self)
+                _alt = 1
+                while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                    if _alt == 1:
+                        self.state = 628
+                        self._errHandler.sync(self)
+                        _la = self._input.LA(1)
+                        while _la==CParser.IDENTIFIER:
+                            self.state = 625
+                            self.match(CParser.IDENTIFIER)
+                            self.state = 630
+                            self._errHandler.sync(self)
+                            _la = self._input.LA(1)
+
+                        self.state = 632
+                        self._errHandler.sync(self)
+                        _alt = 1
+                        while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                            if _alt == 1:
+                                self.state = 631
+                                self.match(CParser.STRING_LITERAL)
+
+                            else:
+                                raise NoViableAltException(self)
+                            self.state = 634
+                            self._errHandler.sync(self)
+                            _alt = self._interp.adaptivePredict(self._input,77,self._ctx)
+
+
+                    else:
+                        raise NoViableAltException(self)
+                    self.state = 638
+                    self._errHandler.sync(self)
+                    _alt = self._interp.adaptivePredict(self._input,78,self._ctx)
+
+                self.state = 643
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+                while _la==CParser.IDENTIFIER:
+                    self.state = 640
+                    self.match(CParser.IDENTIFIER)
+                    self.state = 645
+                    self._errHandler.sync(self)
+                    _la = self._input.LA(1)
+
+                pass
+            elif token in [CParser.FLOATING_POINT_LITERAL]:
+                self.enterOuterAlt(localctx, 6)
+                self.state = 646
+                self.match(CParser.FLOATING_POINT_LITERAL)
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class ExpressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def assignment_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Assignment_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Assignment_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterExpression" ):
+                listener.enterExpression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitExpression" ):
+                listener.exitExpression(self)
+
+
+
+
+    def expression(self):
+
+        localctx = CParser.ExpressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 90, self.RULE_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 649
+            self.assignment_expression()
+            self.state = 654
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__3:
+                self.state = 650
+                self.match(CParser.T__3)
+                self.state = 651
+                self.assignment_expression()
+                self.state = 656
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Constant_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def conditional_expression(self):
+            return self.getTypedRuleContext(CParser.Conditional_expressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_constant_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterConstant_expression" ):
+                listener.enterConstant_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitConstant_expression" ):
+                listener.exitConstant_expression(self)
+
+
+
+
+    def constant_expression(self):
+
+        localctx = CParser.Constant_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 92, self.RULE_constant_expression)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 657
+            self.conditional_expression()
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Assignment_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def lvalue(self):
+            return self.getTypedRuleContext(CParser.LvalueContext,0)
+
+
+        def assignment_operator(self):
+            return self.getTypedRuleContext(CParser.Assignment_operatorContext,0)
+
+
+        def assignment_expression(self):
+            return self.getTypedRuleContext(CParser.Assignment_expressionContext,0)
+
+
+        def conditional_expression(self):
+            return self.getTypedRuleContext(CParser.Conditional_expressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_assignment_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAssignment_expression" ):
+                listener.enterAssignment_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAssignment_expression" ):
+                listener.exitAssignment_expression(self)
+
+
+
+
+    def assignment_expression(self):
+
+        localctx = CParser.Assignment_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 94, self.RULE_assignment_expression)
+        try:
+            self.state = 664
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,82,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 659
+                self.lvalue()
+                self.state = 660
+                self.assignment_operator()
+                self.state = 661
+                self.assignment_expression()
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 663
+                self.conditional_expression()
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class LvalueContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def unary_expression(self):
+            return self.getTypedRuleContext(CParser.Unary_expressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_lvalue
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterLvalue" ):
+                listener.enterLvalue(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitLvalue" ):
+                listener.exitLvalue(self)
+
+
+
+
+    def lvalue(self):
+
+        localctx = CParser.LvalueContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 96, self.RULE_lvalue)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 666
+            self.unary_expression()
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Assignment_operatorContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_assignment_operator
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAssignment_operator" ):
+                listener.enterAssignment_operator(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAssignment_operator" ):
+                listener.exitAssignment_operator(self)
+
+
+
+
+    def assignment_operator(self):
+
+        localctx = CParser.Assignment_operatorContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 98, self.RULE_assignment_operator)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 668
+            _la = self._input.LA(1)
+            if not(((((_la - 5)) & ~0x3f) == 0 and ((1 << (_la - 5)) & ((1 << (CParser.T__4 - 5)) | (1 << (CParser.T__55 - 5)) | (1 << (CParser.T__56 - 5)) | (1 << (CParser.T__57 - 5)) | (1 << (CParser.T__58 - 5)) | (1 << (CParser.T__59 - 5)) | (1 << (CParser.T__60 - 5)) | (1 << (CParser.T__61 - 5)) | (1 << (CParser.T__62 - 5)) | (1 << (CParser.T__63 - 5)) | (1 << (CParser.T__64 - 5)))) != 0)):
+                self._errHandler.recoverInline(self)
+            else:
+                self._errHandler.reportMatch(self)
+                self.consume()
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Conditional_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+            self.e = None # Logical_or_expressionContext
+
+        def logical_or_expression(self):
+            return self.getTypedRuleContext(CParser.Logical_or_expressionContext,0)
+
+
+        def expression(self):
+            return self.getTypedRuleContext(CParser.ExpressionContext,0)
+
+
+        def conditional_expression(self):
+            return self.getTypedRuleContext(CParser.Conditional_expressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_conditional_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterConditional_expression" ):
+                listener.enterConditional_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitConditional_expression" ):
+                listener.exitConditional_expression(self)
+
+
+
+
+    def conditional_expression(self):
+
+        localctx = CParser.Conditional_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 100, self.RULE_conditional_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 670
+            localctx.e = self.logical_or_expression()
+            self.state = 677
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            if _la==CParser.T__65:
+                self.state = 671
+                self.match(CParser.T__65)
+                self.state = 672
+                self.expression()
+                self.state = 673
+                self.match(CParser.T__22)
+                self.state = 674
+                self.conditional_expression()
+                self.StorePredicateExpression((None if localctx.e is None else localctx.e.start).line, (None if localctx.e is None else localctx.e.start).column, (None if localctx.e is None else localctx.e.stop).line, (None if localctx.e is None else localctx.e.stop).column, (None if localctx.e is None else self._input.getText((localctx.e.start,localctx.e.stop))))
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Logical_or_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def logical_and_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Logical_and_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Logical_and_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_logical_or_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterLogical_or_expression" ):
+                listener.enterLogical_or_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitLogical_or_expression" ):
+                listener.exitLogical_or_expression(self)
+
+
+
+
+    def logical_or_expression(self):
+
+        localctx = CParser.Logical_or_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 102, self.RULE_logical_or_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 679
+            self.logical_and_expression()
+            self.state = 684
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__66:
+                self.state = 680
+                self.match(CParser.T__66)
+                self.state = 681
+                self.logical_and_expression()
+                self.state = 686
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Logical_and_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def inclusive_or_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Inclusive_or_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Inclusive_or_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_logical_and_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterLogical_and_expression" ):
+                listener.enterLogical_and_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitLogical_and_expression" ):
+                listener.exitLogical_and_expression(self)
+
+
+
+
+    def logical_and_expression(self):
+
+        localctx = CParser.Logical_and_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 104, self.RULE_logical_and_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 687
+            self.inclusive_or_expression()
+            self.state = 692
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__67:
+                self.state = 688
+                self.match(CParser.T__67)
+                self.state = 689
+                self.inclusive_or_expression()
+                self.state = 694
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Inclusive_or_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def exclusive_or_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Exclusive_or_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Exclusive_or_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_inclusive_or_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterInclusive_or_expression" ):
+                listener.enterInclusive_or_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitInclusive_or_expression" ):
+                listener.exitInclusive_or_expression(self)
+
+
+
+
+    def inclusive_or_expression(self):
+
+        localctx = CParser.Inclusive_or_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 106, self.RULE_inclusive_or_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 695
+            self.exclusive_or_expression()
+            self.state = 700
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__68:
+                self.state = 696
+                self.match(CParser.T__68)
+                self.state = 697
+                self.exclusive_or_expression()
+                self.state = 702
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Exclusive_or_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def and_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.And_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.And_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_exclusive_or_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterExclusive_or_expression" ):
+                listener.enterExclusive_or_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitExclusive_or_expression" ):
+                listener.exitExclusive_or_expression(self)
+
+
+
+
+    def exclusive_or_expression(self):
+
+        localctx = CParser.Exclusive_or_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 108, self.RULE_exclusive_or_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 703
+            self.and_expression()
+            self.state = 708
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__69:
+                self.state = 704
+                self.match(CParser.T__69)
+                self.state = 705
+                self.and_expression()
+                self.state = 710
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class And_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def equality_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Equality_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Equality_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_and_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAnd_expression" ):
+                listener.enterAnd_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAnd_expression" ):
+                listener.exitAnd_expression(self)
+
+
+
+
+    def and_expression(self):
+
+        localctx = CParser.And_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 110, self.RULE_and_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 711
+            self.equality_expression()
+            self.state = 716
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__52:
+                self.state = 712
+                self.match(CParser.T__52)
+                self.state = 713
+                self.equality_expression()
+                self.state = 718
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Equality_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def relational_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Relational_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Relational_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_equality_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterEquality_expression" ):
+                listener.enterEquality_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitEquality_expression" ):
+                listener.exitEquality_expression(self)
+
+
+
+
+    def equality_expression(self):
+
+        localctx = CParser.Equality_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 112, self.RULE_equality_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 719
+            self.relational_expression()
+            self.state = 724
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__70 or _la==CParser.T__71:
+                self.state = 720
+                _la = self._input.LA(1)
+                if not(_la==CParser.T__70 or _la==CParser.T__71):
+                    self._errHandler.recoverInline(self)
+                else:
+                    self._errHandler.reportMatch(self)
+                    self.consume()
+                self.state = 721
+                self.relational_expression()
+                self.state = 726
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Relational_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def shift_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Shift_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Shift_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_relational_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterRelational_expression" ):
+                listener.enterRelational_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitRelational_expression" ):
+                listener.exitRelational_expression(self)
+
+
+
+
+    def relational_expression(self):
+
+        localctx = CParser.Relational_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 114, self.RULE_relational_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 727
+            self.shift_expression()
+            self.state = 732
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while ((((_la - 73)) & ~0x3f) == 0 and ((1 << (_la - 73)) & ((1 << (CParser.T__72 - 73)) | (1 << (CParser.T__73 - 73)) | (1 << (CParser.T__74 - 73)) | (1 << (CParser.T__75 - 73)))) != 0):
+                self.state = 728
+                _la = self._input.LA(1)
+                if not(((((_la - 73)) & ~0x3f) == 0 and ((1 << (_la - 73)) & ((1 << (CParser.T__72 - 73)) | (1 << (CParser.T__73 - 73)) | (1 << (CParser.T__74 - 73)) | (1 << (CParser.T__75 - 73)))) != 0)):
+                    self._errHandler.recoverInline(self)
+                else:
+                    self._errHandler.reportMatch(self)
+                    self.consume()
+                self.state = 729
+                self.shift_expression()
+                self.state = 734
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Shift_expressionContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def additive_expression(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.Additive_expressionContext)
+            else:
+                return self.getTypedRuleContext(CParser.Additive_expressionContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_shift_expression
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterShift_expression" ):
+                listener.enterShift_expression(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitShift_expression" ):
+                listener.exitShift_expression(self)
+
+
+
+
+    def shift_expression(self):
+
+        localctx = CParser.Shift_expressionContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 116, self.RULE_shift_expression)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 735
+            self.additive_expression()
+            self.state = 740
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while _la==CParser.T__76 or _la==CParser.T__77:
+                self.state = 736
+                _la = self._input.LA(1)
+                if not(_la==CParser.T__76 or _la==CParser.T__77):
+                    self._errHandler.recoverInline(self)
+                else:
+                    self._errHandler.reportMatch(self)
+                    self.consume()
+                self.state = 737
+                self.additive_expression()
+                self.state = 742
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class StatementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def labeled_statement(self):
+            return self.getTypedRuleContext(CParser.Labeled_statementContext,0)
+
+
+        def compound_statement(self):
+            return self.getTypedRuleContext(CParser.Compound_statementContext,0)
+
+
+        def expression_statement(self):
+            return self.getTypedRuleContext(CParser.Expression_statementContext,0)
+
+
+        def selection_statement(self):
+            return self.getTypedRuleContext(CParser.Selection_statementContext,0)
+
+
+        def iteration_statement(self):
+            return self.getTypedRuleContext(CParser.Iteration_statementContext,0)
+
+
+        def jump_statement(self):
+            return self.getTypedRuleContext(CParser.Jump_statementContext,0)
+
+
+        def macro_statement(self):
+            return self.getTypedRuleContext(CParser.Macro_statementContext,0)
+
+
+        def asm2_statement(self):
+            return self.getTypedRuleContext(CParser.Asm2_statementContext,0)
+
+
+        def asm1_statement(self):
+            return self.getTypedRuleContext(CParser.Asm1_statementContext,0)
+
+
+        def asm_statement(self):
+            return self.getTypedRuleContext(CParser.Asm_statementContext,0)
+
+
+        def declaration(self):
+            return self.getTypedRuleContext(CParser.DeclarationContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStatement" ):
+                listener.enterStatement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStatement" ):
+                listener.exitStatement(self)
+
+
+
+
+    def statement(self):
+
+        localctx = CParser.StatementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 118, self.RULE_statement)
+        try:
+            self.state = 754
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,92,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 743
+                self.labeled_statement()
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 744
+                self.compound_statement()
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 745
+                self.expression_statement()
+                pass
+
+            elif la_ == 4:
+                self.enterOuterAlt(localctx, 4)
+                self.state = 746
+                self.selection_statement()
+                pass
+
+            elif la_ == 5:
+                self.enterOuterAlt(localctx, 5)
+                self.state = 747
+                self.iteration_statement()
+                pass
+
+            elif la_ == 6:
+                self.enterOuterAlt(localctx, 6)
+                self.state = 748
+                self.jump_statement()
+                pass
+
+            elif la_ == 7:
+                self.enterOuterAlt(localctx, 7)
+                self.state = 749
+                self.macro_statement()
+                pass
+
+            elif la_ == 8:
+                self.enterOuterAlt(localctx, 8)
+                self.state = 750
+                self.asm2_statement()
+                pass
+
+            elif la_ == 9:
+                self.enterOuterAlt(localctx, 9)
+                self.state = 751
+                self.asm1_statement()
+                pass
+
+            elif la_ == 10:
+                self.enterOuterAlt(localctx, 10)
+                self.state = 752
+                self.asm_statement()
+                pass
+
+            elif la_ == 11:
+                self.enterOuterAlt(localctx, 11)
+                self.state = 753
+                self.declaration()
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Asm2_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def getRuleIndex(self):
+            return CParser.RULE_asm2_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAsm2_statement" ):
+                listener.enterAsm2_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAsm2_statement" ):
+                listener.exitAsm2_statement(self)
+
+
+
+
+    def asm2_statement(self):
+
+        localctx = CParser.Asm2_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 120, self.RULE_asm2_statement)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 757
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            if _la==CParser.T__78:
+                self.state = 756
+                self.match(CParser.T__78)
+
+
+            self.state = 759
+            self.match(CParser.IDENTIFIER)
+            self.state = 760
+            self.match(CParser.T__37)
+            self.state = 764
+            self._errHandler.sync(self)
+            _alt = self._interp.adaptivePredict(self._input,94,self._ctx)
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt==1:
+                    self.state = 761
+                    _la = self._input.LA(1)
+                    if _la <= 0 or _la==CParser.T__1:
+                        self._errHandler.recoverInline(self)
+                    else:
+                        self._errHandler.reportMatch(self)
+                        self.consume()
+                self.state = 766
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,94,self._ctx)
+
+            self.state = 767
+            self.match(CParser.T__38)
+            self.state = 768
+            self.match(CParser.T__1)
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Asm1_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_asm1_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAsm1_statement" ):
+                listener.enterAsm1_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAsm1_statement" ):
+                listener.exitAsm1_statement(self)
+
+
+
+
+    def asm1_statement(self):
+
+        localctx = CParser.Asm1_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 122, self.RULE_asm1_statement)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 770
+            self.match(CParser.T__79)
+            self.state = 771
+            self.match(CParser.T__0)
+            self.state = 775
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__0) | (1 << CParser.T__1) | (1 << CParser.T__2) | (1 << CParser.T__3) | (1 << CParser.T__4) | (1 << CParser.T__5) | (1 << CParser.T__6) | (1 << CParser.T__7) | (1 << CParser.T__8) | (1 << CParser.T__9) | (1 << CParser.T__10) | (1 << CParser.T__11) | (1 << CParser.T__12) | (1 << CParser.T__13) | (1 << CParser.T__14) | (1 << CParser.T__15) | (1 << CParser.T__16) | (1 << CParser.T__17) | (1 << CParser.T__18) | (1 << CParser.T__20) | (1 << CParser.T__21) | (1 << CParser.T__22) | (1 << CParser.T__23) | (1 << CParser.T__24) | (1 << CParser.T__25) | (1 << CParser.T__26) | (1 << CParser.T__27) | (1 << CParser.T__28) | (1 << CParser.T__29) | (1 << CParser.T__30) | (1 << CParser.T__31) | (1 << CParser.T__32) | (1 << CParser.T__33) | (1 << CParser.T__34) | (1 << CParser.T__35) | (1 << CParser.T__36) | (1 << CParser.T__37) | (1 << CParser.T__38) | (1 << CParser.T__39) | (1 << CParser.T__40) | (1 << CParser.T__41) | (1 << CParser.T__42) | (1 << CParser.T__43) | (1 << CParser.T__44) | (1 << CParser.T__45) | (1 << CParser.T__46) | (1 << CParser.T__47) | (1 << CParser.T__48) | (1 << CParser.T__49) | (1 << CParser.T__50) | (1 << CParser.T__51) | (1 << CParser.T__52) | (1 << CParser.T__53) | (1 << CParser.T__54) | (1 << CParser.T__55) | (1 << CParser.T__56) | (1 << CParser.T__57) | (1 << CParser.T__58) | (1 << CParser.T__59) | (1 << CParser.T__60) | (1 << CParser.T__61) | (1 << CParser.T__62))) != 0) or ((((_la - 64)) & ~0x3f) == 0 and ((1 << (_la - 64)) & ((1 << (CParser.T__63 - 64)) | (1 << (CParser.T__64 - 64)) | (1 << (CParser.T__65 - 64)) | (1 << (CParser.T__66 - 64)) | (1 << (CParser.T__67 - 64)) | (1 << (CParser.T__68 - 64)) | (1 << (CParser.T__69 - 64)) | (1 << (CParser.T__70 - 64)) | (1 << (CParser.T__71 - 64)) | (1 << (CParser.T__72 - 64)) | (1 << (CParser.T__73 - 64)) | (1 << (CParser.T__74 - 64)) | (1 << (CParser.T__75 - 64)) | (1 << (CParser.T__76 - 64)) | (1 << (CParser.T__77 - 64)) | (1 << (CParser.T__78 - 64)) | (1 << (CParser.T__79 - 64)) | (1 << (CParser.T__80 - 64)) | (1 << (CParser.T__81 - 64)) | (1 << (CParser.T__82 - 64)) | (1 << (CParser.T__83 - 64)) | (1 << (CParser.T__84 - 64)) | (1 << (CParser.T__85 - 64)) | (1 << (CParser.T__86 - 64)) | (1 << (CParser.T__87 - 64)) | (1 << (CParser.T__88 - 64)) | (1 << (CParser.T__89 - 64)) | (1 << (CParser.T__90 - 64)) | (1 << (CParser.T__91 - 64)) | (1 << (CParser.IDENTIFIER - 64)) | (1 << (CParser.CHARACTER_LITERAL - 64)) | (1 << (CParser.STRING_LITERAL - 64)) | (1 << (CParser.HEX_LITERAL - 64)) | (1 << (CParser.DECIMAL_LITERAL - 64)) | (1 << (CParser.OCTAL_LITERAL - 64)) | (1 << (CParser.FLOATING_POINT_LITERAL - 64)) | (1 << (CParser.WS - 64)) | (1 << (CParser.BS - 64)) | (1 << (CParser.UnicodeVocabulary - 64)) | (1 << (CParser.COMMENT - 64)) | (1 << (CParser.LINE_COMMENT - 64)) | (1 << (CParser.LINE_COMMAND - 64)))) != 0):
+                self.state = 772
+                _la = self._input.LA(1)
+                if _la <= 0 or _la==CParser.T__19:
+                    self._errHandler.recoverInline(self)
+                else:
+                    self._errHandler.reportMatch(self)
+                    self.consume()
+                self.state = 777
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+            self.state = 778
+            self.match(CParser.T__19)
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Asm_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_asm_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterAsm_statement" ):
+                listener.enterAsm_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitAsm_statement" ):
+                listener.exitAsm_statement(self)
+
+
+
+
+    def asm_statement(self):
+
+        localctx = CParser.Asm_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 124, self.RULE_asm_statement)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 780
+            self.match(CParser.T__80)
+            self.state = 781
+            self.match(CParser.T__0)
+            self.state = 785
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            while (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__0) | (1 << CParser.T__1) | (1 << CParser.T__2) | (1 << CParser.T__3) | (1 << CParser.T__4) | (1 << CParser.T__5) | (1 << CParser.T__6) | (1 << CParser.T__7) | (1 << CParser.T__8) | (1 << CParser.T__9) | (1 << CParser.T__10) | (1 << CParser.T__11) | (1 << CParser.T__12) | (1 << CParser.T__13) | (1 << CParser.T__14) | (1 << CParser.T__15) | (1 << CParser.T__16) | (1 << CParser.T__17) | (1 << CParser.T__18) | (1 << CParser.T__20) | (1 << CParser.T__21) | (1 << CParser.T__22) | (1 << CParser.T__23) | (1 << CParser.T__24) | (1 << CParser.T__25) | (1 << CParser.T__26) | (1 << CParser.T__27) | (1 << CParser.T__28) | (1 << CParser.T__29) | (1 << CParser.T__30) | (1 << CParser.T__31) | (1 << CParser.T__32) | (1 << CParser.T__33) | (1 << CParser.T__34) | (1 << CParser.T__35) | (1 << CParser.T__36) | (1 << CParser.T__37) | (1 << CParser.T__38) | (1 << CParser.T__39) | (1 << CParser.T__40) | (1 << CParser.T__41) | (1 << CParser.T__42) | (1 << CParser.T__43) | (1 << CParser.T__44) | (1 << CParser.T__45) | (1 << CParser.T__46) | (1 << CParser.T__47) | (1 << CParser.T__48) | (1 << CParser.T__49) | (1 << CParser.T__50) | (1 << CParser.T__51) | (1 << CParser.T__52) | (1 << CParser.T__53) | (1 << CParser.T__54) | (1 << CParser.T__55) | (1 << CParser.T__56) | (1 << CParser.T__57) | (1 << CParser.T__58) | (1 << CParser.T__59) | (1 << CParser.T__60) | (1 << CParser.T__61) | (1 << CParser.T__62))) != 0) or ((((_la - 64)) & ~0x3f) == 0 and ((1 << (_la - 64)) & ((1 << (CParser.T__63 - 64)) | (1 << (CParser.T__64 - 64)) | (1 << (CParser.T__65 - 64)) | (1 << (CParser.T__66 - 64)) | (1 << (CParser.T__67 - 64)) | (1 << (CParser.T__68 - 64)) | (1 << (CParser.T__69 - 64)) | (1 << (CParser.T__70 - 64)) | (1 << (CParser.T__71 - 64)) | (1 << (CParser.T__72 - 64)) | (1 << (CParser.T__73 - 64)) | (1 << (CParser.T__74 - 64)) | (1 << (CParser.T__75 - 64)) | (1 << (CParser.T__76 - 64)) | (1 << (CParser.T__77 - 64)) | (1 << (CParser.T__78 - 64)) | (1 << (CParser.T__79 - 64)) | (1 << (CParser.T__80 - 64)) | (1 << (CParser.T__81 - 64)) | (1 << (CParser.T__82 - 64)) | (1 << (CParser.T__83 - 64)) | (1 << (CParser.T__84 - 64)) | (1 << (CParser.T__85 - 64)) | (1 << (CParser.T__86 - 64)) | (1 << (CParser.T__87 - 64)) | (1 << (CParser.T__88 - 64)) | (1 << (CParser.T__89 - 64)) | (1 << (CParser.T__90 - 64)) | (1 << (CParser.T__91 - 64)) | (1 << (CParser.IDENTIFIER - 64)) | (1 << (CParser.CHARACTER_LITERAL - 64)) | (1 << (CParser.STRING_LITERAL - 64)) | (1 << (CParser.HEX_LITERAL - 64)) | (1 << (CParser.DECIMAL_LITERAL - 64)) | (1 << (CParser.OCTAL_LITERAL - 64)) | (1 << (CParser.FLOATING_POINT_LITERAL - 64)) | (1 << (CParser.WS - 64)) | (1 << (CParser.BS - 64)) | (1 << (CParser.UnicodeVocabulary - 64)) | (1 << (CParser.COMMENT - 64)) | (1 << (CParser.LINE_COMMENT - 64)) | (1 << (CParser.LINE_COMMAND - 64)))) != 0):
+                self.state = 782
+                _la = self._input.LA(1)
+                if _la <= 0 or _la==CParser.T__19:
+                    self._errHandler.recoverInline(self)
+                else:
+                    self._errHandler.reportMatch(self)
+                    self.consume()
+                self.state = 787
+                self._errHandler.sync(self)
+                _la = self._input.LA(1)
+
+            self.state = 788
+            self.match(CParser.T__19)
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Macro_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        # @param  i=None Type: int
+        def declaration(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.DeclarationContext)
+            else:
+                return self.getTypedRuleContext(CParser.DeclarationContext,i)
+
+
+        def statement_list(self):
+            return self.getTypedRuleContext(CParser.Statement_listContext,0)
+
+
+        def expression(self):
+            return self.getTypedRuleContext(CParser.ExpressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_macro_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterMacro_statement" ):
+                listener.enterMacro_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitMacro_statement" ):
+                listener.exitMacro_statement(self)
+
+
+
+
+    def macro_statement(self):
+
+        localctx = CParser.Macro_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 126, self.RULE_macro_statement)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 790
+            self.match(CParser.IDENTIFIER)
+            self.state = 791
+            self.match(CParser.T__37)
+            self.state = 795
+            self._errHandler.sync(self)
+            _alt = self._interp.adaptivePredict(self._input,97,self._ctx)
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt==1:
+                    self.state = 792
+                    self.declaration()
+                self.state = 797
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,97,self._ctx)
+
+            self.state = 799
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,98,self._ctx)
+            if la_ == 1:
+                self.state = 798
+                self.statement_list()
+
+
+            self.state = 802
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            if ((((_la - 38)) & ~0x3f) == 0 and ((1 << (_la - 38)) & ((1 << (CParser.T__37 - 38)) | (1 << (CParser.T__41 - 38)) | (1 << (CParser.T__43 - 38)) | (1 << (CParser.T__44 - 38)) | (1 << (CParser.T__47 - 38)) | (1 << (CParser.T__48 - 38)) | (1 << (CParser.T__49 - 38)) | (1 << (CParser.T__52 - 38)) | (1 << (CParser.T__53 - 38)) | (1 << (CParser.T__54 - 38)) | (1 << (CParser.IDENTIFIER - 38)) | (1 << (CParser.CHARACTER_LITERAL - 38)) | (1 << (CParser.STRING_LITERAL - 38)) | (1 << (CParser.HEX_LITERAL - 38)) | (1 << (CParser.DECIMAL_LITERAL - 38)) | (1 << (CParser.OCTAL_LITERAL - 38)) | (1 << (CParser.FLOATING_POINT_LITERAL - 38)))) != 0):
+                self.state = 801
+                self.expression()
+
+
+            self.state = 804
+            self.match(CParser.T__38)
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Labeled_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def statement(self):
+            return self.getTypedRuleContext(CParser.StatementContext,0)
+
+
+        def constant_expression(self):
+            return self.getTypedRuleContext(CParser.Constant_expressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_labeled_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterLabeled_statement" ):
+                listener.enterLabeled_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitLabeled_statement" ):
+                listener.exitLabeled_statement(self)
+
+
+
+
+    def labeled_statement(self):
+
+        localctx = CParser.Labeled_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 128, self.RULE_labeled_statement)
+        try:
+            self.state = 817
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.IDENTIFIER]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 806
+                self.match(CParser.IDENTIFIER)
+                self.state = 807
+                self.match(CParser.T__22)
+                self.state = 808
+                self.statement()
+                pass
+            elif token in [CParser.T__81]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 809
+                self.match(CParser.T__81)
+                self.state = 810
+                self.constant_expression()
+                self.state = 811
+                self.match(CParser.T__22)
+                self.state = 812
+                self.statement()
+                pass
+            elif token in [CParser.T__82]:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 814
+                self.match(CParser.T__82)
+                self.state = 815
+                self.match(CParser.T__22)
+                self.state = 816
+                self.statement()
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Compound_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def declaration(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.DeclarationContext)
+            else:
+                return self.getTypedRuleContext(CParser.DeclarationContext,i)
+
+
+        def statement_list(self):
+            return self.getTypedRuleContext(CParser.Statement_listContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_compound_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterCompound_statement" ):
+                listener.enterCompound_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitCompound_statement" ):
+                listener.exitCompound_statement(self)
+
+
+
+
+    def compound_statement(self):
+
+        localctx = CParser.Compound_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 130, self.RULE_compound_statement)
+        self._la = 0 # Token type
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 819
+            self.match(CParser.T__0)
+            self.state = 823
+            self._errHandler.sync(self)
+            _alt = self._interp.adaptivePredict(self._input,101,self._ctx)
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt==1:
+                    self.state = 820
+                    self.declaration()
+                self.state = 825
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,101,self._ctx)
+
+            self.state = 827
+            self._errHandler.sync(self)
+            _la = self._input.LA(1)
+            if (((_la) & ~0x3f) == 0 and ((1 << _la) & ((1 << CParser.T__0) | (1 << CParser.T__1) | (1 << CParser.T__2) | (1 << CParser.T__5) | (1 << CParser.T__6) | (1 << CParser.T__7) | (1 << CParser.T__8) | (1 << CParser.T__9) | (1 << CParser.T__10) | (1 << CParser.T__11) | (1 << CParser.T__12) | (1 << CParser.T__13) | (1 << CParser.T__14) | (1 << CParser.T__15) | (1 << CParser.T__16) | (1 << CParser.T__17) | (1 << CParser.T__18) | (1 << CParser.T__20) | (1 << CParser.T__21) | (1 << CParser.T__23) | (1 << CParser.T__24) | (1 << CParser.T__25) | (1 << CParser.T__26) | (1 << CParser.T__27) | (1 << CParser.T__28) | (1 << CParser.T__29) | (1 << CParser.T__30) | (1 << CParser.T__31) | (1 << CParser.T__32) | (1 << CParser.T__33) | (1 << CParser.T__34) | (1 << CParser.T__35) | (1 << CParser.T__36) | (1 << CParser.T__37) | (1 << CParser.T__41) | (1 << CParser.T__43) | (1 << CParser.T__44) | (1 << CParser.T__47) | (1 << CParser.T__48) | (1 << CParser.T__49) | (1 << CParser.T__52) | (1 << CParser.T__53) | (1 << CParser.T__54))) != 0) or ((((_la - 79)) & ~0x3f) == 0 and ((1 << (_la - 79)) & ((1 << (CParser.T__78 - 79)) | (1 << (CParser.T__79 - 79)) | (1 << (CParser.T__80 - 79)) | (1 << (CParser.T__81 - 79)) | (1 << (CParser.T__82 - 79)) | (1 << (CParser.T__83 - 79)) | (1 << (CParser.T__85 - 79)) | (1 << (CParser.T__86 - 79)) | (1 << (CParser.T__87 - 79)) | (1 << (CParser.T__88 - 79)) | (1 << (CParser.T__89 - 79)) | (1 << (CParser.T__90 - 79)) | (1 << (CParser.T__91 - 79)) | (1 << (CParser.IDENTIFIER - 79)) | (1 << (CParser.CHARACTER_LITERAL - 79)) | (1 << (CParser.STRING_LITERAL - 79)) | (1 << (CParser.HEX_LITERAL - 79)) | (1 << (CParser.DECIMAL_LITERAL - 79)) | (1 << (CParser.OCTAL_LITERAL - 79)) | (1 << (CParser.FLOATING_POINT_LITERAL - 79)))) != 0):
+                self.state = 826
+                self.statement_list()
+
+
+            self.state = 829
+            self.match(CParser.T__19)
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Statement_listContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        # @param  i=None Type: int
+        def statement(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.StatementContext)
+            else:
+                return self.getTypedRuleContext(CParser.StatementContext,i)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_statement_list
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterStatement_list" ):
+                listener.enterStatement_list(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitStatement_list" ):
+                listener.exitStatement_list(self)
+
+
+
+
+    def statement_list(self):
+
+        localctx = CParser.Statement_listContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 132, self.RULE_statement_list)
+        try:
+            self.enterOuterAlt(localctx, 1)
+            self.state = 832
+            self._errHandler.sync(self)
+            _alt = 1
+            while _alt!=2 and _alt!=ATN.INVALID_ALT_NUMBER:
+                if _alt == 1:
+                    self.state = 831
+                    self.statement()
+
+                else:
+                    raise NoViableAltException(self)
+                self.state = 834
+                self._errHandler.sync(self)
+                _alt = self._interp.adaptivePredict(self._input,103,self._ctx)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Expression_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def expression(self):
+            return self.getTypedRuleContext(CParser.ExpressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_expression_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterExpression_statement" ):
+                listener.enterExpression_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitExpression_statement" ):
+                listener.exitExpression_statement(self)
+
+
+
+
+    def expression_statement(self):
+
+        localctx = CParser.Expression_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 134, self.RULE_expression_statement)
+        try:
+            self.state = 840
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.T__1]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 836
+                self.match(CParser.T__1)
+                pass
+            elif token in [CParser.T__37, CParser.T__41, CParser.T__43, CParser.T__44, CParser.T__47, CParser.T__48, CParser.T__49, CParser.T__52, CParser.T__53, CParser.T__54, CParser.IDENTIFIER, CParser.CHARACTER_LITERAL, CParser.STRING_LITERAL, CParser.HEX_LITERAL, CParser.DECIMAL_LITERAL, CParser.OCTAL_LITERAL, CParser.FLOATING_POINT_LITERAL]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 837
+                self.expression()
+                self.state = 838
+                self.match(CParser.T__1)
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Selection_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+            self.e = None # ExpressionContext
+
+        # @param  i=None Type: int
+        def statement(self,i=None):
+            if i is None:
+                return self.getTypedRuleContexts(CParser.StatementContext)
+            else:
+                return self.getTypedRuleContext(CParser.StatementContext,i)
+
+
+        def expression(self):
+            return self.getTypedRuleContext(CParser.ExpressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_selection_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterSelection_statement" ):
+                listener.enterSelection_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitSelection_statement" ):
+                listener.exitSelection_statement(self)
+
+
+
+
+    def selection_statement(self):
+
+        localctx = CParser.Selection_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 136, self.RULE_selection_statement)
+        try:
+            self.state = 858
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.T__83]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 842
+                self.match(CParser.T__83)
+                self.state = 843
+                self.match(CParser.T__37)
+                self.state = 844
+                localctx.e = self.expression()
+                self.state = 845
+                self.match(CParser.T__38)
+                self.StorePredicateExpression((None if localctx.e is None else localctx.e.start).line, (None if localctx.e is None else localctx.e.start).column, (None if localctx.e is None else localctx.e.stop).line, (None if localctx.e is None else localctx.e.stop).column, (None if localctx.e is None else self._input.getText((localctx.e.start,localctx.e.stop))))
+                self.state = 847
+                self.statement()
+                self.state = 850
+                self._errHandler.sync(self)
+                la_ = self._interp.adaptivePredict(self._input,105,self._ctx)
+                if la_ == 1:
+                    self.state = 848
+                    self.match(CParser.T__84)
+                    self.state = 849
+                    self.statement()
+
+
+                pass
+            elif token in [CParser.T__85]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 852
+                self.match(CParser.T__85)
+                self.state = 853
+                self.match(CParser.T__37)
+                self.state = 854
+                self.expression()
+                self.state = 855
+                self.match(CParser.T__38)
+                self.state = 856
+                self.statement()
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Iteration_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+            self.e = None # ExpressionContext
+
+        def statement(self):
+            return self.getTypedRuleContext(CParser.StatementContext,0)
+
+
+        def expression(self):
+            return self.getTypedRuleContext(CParser.ExpressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_iteration_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterIteration_statement" ):
+                listener.enterIteration_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitIteration_statement" ):
+                listener.exitIteration_statement(self)
+
+
+
+
+    def iteration_statement(self):
+
+        localctx = CParser.Iteration_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 138, self.RULE_iteration_statement)
+        try:
+            self.state = 876
+            self._errHandler.sync(self)
+            token = self._input.LA(1)
+            if token in [CParser.T__86]:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 860
+                self.match(CParser.T__86)
+                self.state = 861
+                self.match(CParser.T__37)
+                self.state = 862
+                localctx.e = self.expression()
+                self.state = 863
+                self.match(CParser.T__38)
+                self.state = 864
+                self.statement()
+                self.StorePredicateExpression((None if localctx.e is None else localctx.e.start).line, (None if localctx.e is None else localctx.e.start).column, (None if localctx.e is None else localctx.e.stop).line, (None if localctx.e is None else localctx.e.stop).column, (None if localctx.e is None else self._input.getText((localctx.e.start,localctx.e.stop))))
+                pass
+            elif token in [CParser.T__87]:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 867
+                self.match(CParser.T__87)
+                self.state = 868
+                self.statement()
+                self.state = 869
+                self.match(CParser.T__86)
+                self.state = 870
+                self.match(CParser.T__37)
+                self.state = 871
+                localctx.e = self.expression()
+                self.state = 872
+                self.match(CParser.T__38)
+                self.state = 873
+                self.match(CParser.T__1)
+                self.StorePredicateExpression((None if localctx.e is None else localctx.e.start).line, (None if localctx.e is None else localctx.e.start).column, (None if localctx.e is None else localctx.e.stop).line, (None if localctx.e is None else localctx.e.stop).column, (None if localctx.e is None else self._input.getText((localctx.e.start,localctx.e.stop))))
+                pass
+            else:
+                raise NoViableAltException(self)
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+    class Jump_statementContext(ParserRuleContext):
+
+        # @param  parent=None Type: ParserRuleContext
+        # @param  invokingState=-1 Type: int
+        def __init__(self,parser,parent=None,invokingState=-1):
+            super().__init__(parent, invokingState)
+            self.parser = parser
+
+        def IDENTIFIER(self):
+            return self.getToken(CParser.IDENTIFIER, 0)
+
+        def expression(self):
+            return self.getTypedRuleContext(CParser.ExpressionContext,0)
+
+
+        def getRuleIndex(self):
+            return CParser.RULE_jump_statement
+
+        # @param  listener Type: ParseTreeListener
+        def enterRule(self,listener):
+            if hasattr( listener, "enterJump_statement" ):
+                listener.enterJump_statement(self)
+
+        # @param  listener Type: ParseTreeListener
+        def exitRule(self,listener):
+            if hasattr( listener, "exitJump_statement" ):
+                listener.exitJump_statement(self)
+
+
+
+
+    def jump_statement(self):
+
+        localctx = CParser.Jump_statementContext(self, self._ctx, self.state)
+        self.enterRule(localctx, 140, self.RULE_jump_statement)
+        try:
+            self.state = 891
+            self._errHandler.sync(self)
+            la_ = self._interp.adaptivePredict(self._input,108,self._ctx)
+            if la_ == 1:
+                self.enterOuterAlt(localctx, 1)
+                self.state = 878
+                self.match(CParser.T__88)
+                self.state = 879
+                self.match(CParser.IDENTIFIER)
+                self.state = 880
+                self.match(CParser.T__1)
+                pass
+
+            elif la_ == 2:
+                self.enterOuterAlt(localctx, 2)
+                self.state = 881
+                self.match(CParser.T__89)
+                self.state = 882
+                self.match(CParser.T__1)
+                pass
+
+            elif la_ == 3:
+                self.enterOuterAlt(localctx, 3)
+                self.state = 883
+                self.match(CParser.T__90)
+                self.state = 884
+                self.match(CParser.T__1)
+                pass
+
+            elif la_ == 4:
+                self.enterOuterAlt(localctx, 4)
+                self.state = 885
+                self.match(CParser.T__91)
+                self.state = 886
+                self.match(CParser.T__1)
+                pass
+
+            elif la_ == 5:
+                self.enterOuterAlt(localctx, 5)
+                self.state = 887
+                self.match(CParser.T__91)
+                self.state = 888
+                self.expression()
+                self.state = 889
+                self.match(CParser.T__1)
+                pass
+
+
+        except RecognitionException as re:
+            localctx.exception = re
+            self._errHandler.reportError(self, re)
+            self._errHandler.recover(self, re)
+        finally:
+            self.exitRule()
+        return localctx
+
+
+
+
+
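[Editorial note on the generated code above: the rule contexts dispatch to listeners with hasattr(), so a listener class only needs to implement the callbacks it cares about. A minimal, self-contained re-creation of that dispatch pattern — the context/method names mirror the generated code, while the listener class is purely illustrative:]

```python
# Re-creation of the hasattr()-based listener dispatch used by the
# generated Jump_statementContext above (listener class is illustrative).
class Jump_statementContext:
    def enterRule(self, listener):
        if hasattr(listener, "enterJump_statement"):
            listener.enterJump_statement(self)

    def exitRule(self, listener):
        if hasattr(listener, "exitJump_statement"):
            listener.exitJump_statement(self)

class CountingListener:
    def __init__(self):
        self.entered = 0

    def enterJump_statement(self, ctx):
        self.entered += 1
    # no exitJump_statement defined: exitRule() silently skips it

ctx = Jump_statementContext()
lst = CountingListener()
ctx.enterRule(lst)
ctx.exitRule(lst)  # no AttributeError despite the missing callback
assert lst.entered == 1
```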
diff --git a/BaseTools/Source/Python/Eot/CParser4/__init__.py b/BaseTools/Source/Python/Eot/CParser4/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/BaseTools/Source/Python/Eot/CodeFragmentCollector.py b/BaseTools/Source/Python/Eot/CodeFragmentCollector.py
index 8a5e5df17e..b1e77a690a 100644
--- a/BaseTools/Source/Python/Eot/CodeFragmentCollector.py
+++ b/BaseTools/Source/Python/Eot/CodeFragmentCollector.py
@@ -19,17 +19,23 @@ from __future__ import print_function
 from __future__ import absolute_import
 import re
 import Common.LongFilePathOs as os
 import sys
 
-import antlr3
-from .CLexer import CLexer
-from .CParser import CParser
+if sys.version_info.major == 3:
+    import antlr4 as antlr
+    from Eot.CParser4.CLexer import CLexer
+    from Eot.CParser4.CParser import CParser
+else:
+    import antlr3 as antlr
+    antlr.InputStream = antlr.StringStream
+    from Eot.CParser3.CLexer import CLexer
+    from Eot.CParser3.CParser import CParser
 
-from . import FileProfile
-from .CodeFragment import PP_Directive
-from .ParserWarning import Warning
+from Eot import FileProfile
+from Eot.CodeFragment import PP_Directive
+from Eot.ParserWarning import Warning
 
 
 ##define T_CHAR_SPACE                ' '
 ##define T_CHAR_NULL                 '\0'
 ##define T_CHAR_CR                   '\r'
@@ -352,13 +358,13 @@ class CodeFragmentCollector:
         # restore from ListOfList to ListOfString
         self.Profile.FileLinesList = ["".join(list) for list in self.Profile.FileLinesList]
         FileStringContents = ''
         for fileLine in self.Profile.FileLinesList:
             FileStringContents += fileLine
-        cStream = antlr3.StringStream(FileStringContents)
+        cStream = antlr.InputStream(FileStringContents)
         lexer = CLexer(cStream)
-        tStream = antlr3.CommonTokenStream(lexer)
+        tStream = antlr.CommonTokenStream(lexer)
         parser = CParser(tStream)
         parser.translation_unit()
 
     ## CleanFileProfileBuffer() method
     #
-- 
2.20.1.windows.1
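[Editorial note: the conditional-import hunk in this patch keeps one call site working against both ANTLR runtimes by aliasing the python2 name (StringStream) to the antlr4 name (InputStream). A sketch of the same aliasing pattern, using io.StringIO as a stand-in for the third-party ANTLR stream classes so it runs on a stock interpreter:]

```python
import io
import sys

# Stand-in for "import antlr4 as antlr" / "import antlr3 as antlr":
# io.StringIO plays the role of the ANTLR character-stream class here.
antlr = io
if sys.version_info.major == 3:
    # In the real shim nothing is needed on this branch, because
    # antlr4 already exposes the name InputStream.
    antlr.InputStream = io.StringIO
else:
    # Python 2 / antlr3 names the class StringStream, so the real
    # shim does: antlr.InputStream = antlr.StringStream
    antlr.InputStream = io.StringIO

# Downstream code (e.g. CodeFragmentCollector.ParseFile) is now
# version-agnostic; it always constructs antlr.InputStream.
cStream = antlr.InputStream("int x;\n")
assert cStream.read() == "int x;\n"
```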



^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [Patch v2 00/33] BaseTools python3 migration patch set
  2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
                   ` (32 preceding siblings ...)
  2019-01-29  2:06 ` [Patch v2 33/33] BaseTools: Eot " Feng, Bob C
@ 2019-01-29 13:07 ` Laszlo Ersek
  2019-01-30  1:52   ` Gao, Liming
  2019-01-30  2:59   ` Feng, Bob C
  33 siblings, 2 replies; 50+ messages in thread
From: Laszlo Ersek @ 2019-01-29 13:07 UTC (permalink / raw)
  To: Feng, Bob C; +Cc: edk2-devel

Hi Bob,

On 01/29/19 03:05, Feng, Bob C wrote:
> BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=55
>
> V2:
> The python files under the CParser4 folder of the ECC/Eot tool
> are generated by antlr4 for python3 usage.
> They contain python3-specific syntax, for example
> data type declarations for the arguments of a function, which
> is not compatible with python2. This patch removes that syntax.
>
> The version2 patch set is committed to https://github.com/BobCF/edk2.git
> branch py3basetools_v2
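[Editorial note: the python3-only construct being stripped is the annotated signature that antlr4's python3 target emits; the patch set replaces annotations with the `# @param` comments visible in the generated CParser earlier in the thread. A hypothetical before/after sketch:]

```python
# antlr4's python3 target emits annotated signatures such as
#     def __init__(self, parser, parent: 'ParserRuleContext' = None,
#                  invokingState: int = -1):
# which are a SyntaxError on python2. The patched generator output
# documents the types in comments instead:
class Jump_statementContext:
    # @param  parent=None Type: ParserRuleContext
    # @param  invokingState=-1 Type: int
    def __init__(self, parser, parent=None, invokingState=-1):
        self.parser = parser
        self.parent = parent
        self.invokingState = invokingState

ctx = Jump_statementContext(parser=None)
assert ctx.invokingState == -1
assert ctx.parent is None
```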

(reusing the "test plan" from my email at
<http://mid.mail-archive.com/cab4fed6-4c5d-94a9-b29f-da41ad7f320e@redhat.com>:)

I ran the following tests, at commit 6edb6bd9f182 ("BaseTools: Eot tool
Python3 adaption", 2019-01-29). Each test was performed in a clean tree
(after running "git clean -ffdx") and clean environment (I re-sourced
"edksetup.sh" for each test in separation). In addition, the base tools
were rebuilt (again from a clean tree) for each test, with the following
command [1]:

  nice make -C "$EDK_TOOLS_PATH" -j $(getconf _NPROCESSORS_ONLN)

(a) On my RHEL7.5 Workstation laptop, I have both the system-level
python packages installed (python-2.7.5-69.el7_5.x86_64), and the extra
python-3.4 stuff from EPEL-7 (python34-3.4.9-1.el7.x86_64).

(a1) Didn't set either PYTHON3_ENABLE or PYTHON_COMMAND. The build
utility picked

  PYTHON_COMMAND   = /usr/bin/python3.4

and I successfully built OvmfPkg for IA32, IA32X64, and X64; also
ArmVirtQemu for AARCH64. The built firmware images passed a smoke test
too.

(a2) I removed all the python34 packages (and the dependent packages)
from my laptop. Didn't set either of PYTHON3_ENABLE and PYTHON_COMMAND.
(This is the configuration that a "normal" RHEL7 environment would
provide.) The "build" utility didn't print any PYTHON_COMMAND setting,
but the same fw platform builds as in (a1) completed fine. The smoke
tests passed again as well.

(b) RHEL-8 virtual machine, with "/usr/bin/python3.6" from
python36-3.6.6-18.el8.x86_64, and "/usr/libexec/platform-python" from
platform-python-3.6.8-1.el8.x86_64.

(b1) Didn't set either PYTHON3_ENABLE or PYTHON_COMMAND. The build
utility picked

  PYTHON_COMMAND   = /usr/bin/python3.6

and I successfully built OvmfPkg for IA32, IA32X64, and X64. (I don't
have a cross-compiler installed in this environment yet, nor a RHEL8
aarch64 KVM guest, so I couldn't test ArmVirtQemu for now).

(b2) I set PYTHON_COMMAND to "/usr/libexec/platform-python". Didn't set
PYTHON3_ENABLE. The same builds as in (b1) succeeded.


For the series:

Tested-by: Laszlo Ersek <lersek@redhat.com>

Given that the testing is quite time consuming, I suggest that we push
v2 (assuming reviewers don't find critical issues), and address small
issues incrementally.

Thanks!
Laszlo
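
[Editorial note: the cases above exercise BaseTools' interpreter selection via the PYTHON_COMMAND and PYTHON3_ENABLE variables. A hypothetical sketch of the precedence the tests observed — the real logic lives in the BaseTools setup scripts, so this is only an illustration, not the actual implementation:]

```python
def pick_python(env):
    # Illustrative precedence: PYTHON_COMMAND, when set, names the
    # interpreter directly (case b2: /usr/libexec/platform-python).
    if env.get("PYTHON_COMMAND"):
        return env["PYTHON_COMMAND"]
    # Otherwise a python3 interpreter is preferred (cases a1, b1)
    # unless explicitly disabled.
    if env.get("PYTHON3_ENABLE", "TRUE").upper() != "FALSE":
        return "python3"
    return "python"

assert pick_python({"PYTHON_COMMAND": "/usr/libexec/platform-python"}) \
       == "/usr/libexec/platform-python"
assert pick_python({}) == "python3"
assert pick_python({"PYTHON3_ENABLE": "FALSE"}) == "python"
```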


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Patch v2 00/33] BaseTools python3 migration patch set
  2019-01-29 13:07 ` [Patch v2 00/33] BaseTools python3 migration patch set Laszlo Ersek
@ 2019-01-30  1:52   ` Gao, Liming
  2019-01-30  5:25     ` Feng, Bob C
  2019-01-30  2:59   ` Feng, Bob C
  1 sibling, 1 reply; 50+ messages in thread
From: Gao, Liming @ 2019-01-30  1:52 UTC (permalink / raw)
  To: Laszlo Ersek, Feng, Bob C; +Cc: edk2-devel@lists.01.org

Laszlo:
 I agree with your proposal. Push this patch set first if there are no other comments, then continue with minor bug fixes.
 
Thanks
Liming
> -----Original Message-----
> From: edk2-devel [mailto:edk2-devel-bounces@lists.01.org] On Behalf Of Laszlo Ersek
> Sent: Tuesday, January 29, 2019 9:07 PM
> To: Feng, Bob C <bob.c.feng@intel.com>
> Cc: edk2-devel@lists.01.org
> Subject: Re: [edk2] [Patch v2 00/33] BaseTools python3 migration patch set
> 
> Hi Bob,
> 
> On 01/29/19 03:05, Feng, Bob C wrote:
> > BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=55
> >
> > V2:
> > The python files under CParser4 folder of ECC/Eot tool
> > are generated by antlr4 and forpython3 usage.
> > They have python3 specific syntax, for example
> > the data type declaration for the arguments of a function. That
> > is not compitable with python2. this patch is to remove these syntax.
> >
> > The version2 patch set is commit to https://github.com/BobCF/edk2.git
> > branch py3basetools_v2
> 
> (reusing the "test plan" from my email at
> <http://mid.mail-archive.com/cab4fed6-4c5d-94a9-b29f-da41ad7f320e@redhat.com>:)
> 
> I ran the following tests, at commit 6edb6bd9f182 ("BaseTools: Eot tool
> Python3 adaption", 2019-01-29). Each test was performed in a clean tree
> (after running "git clean -ffdx") and clean environment (I re-sourced
> "edksetup.sh" for each test in separation). In addition, the base tools
> were rebuilt (again from a clean tree) for each test, with the following
> command [1]:
> 
>   nice make -C "$EDK_TOOLS_PATH" -j $(getconf _NPROCESSORS_ONLN)
> 
> (a) On my RHEL7.5 Workstation laptop, I have both the system-level
> python packages installed (python-2.7.5-69.el7_5.x86_64), and the extra
> python-3.4 stuff from EPEL-7 (python34-3.4.9-1.el7.x86_64).
> 
> (a1) Didn't set either PYTHON3_ENABLE or PYTHON_COMMAND. The build
> utility picked
> 
>   PYTHON_COMMAND   = /usr/bin/python3.4
> 
> and I successfully built OvmfPkg for IA32, IA32X64, and X64; also
> ArmVirtQemu for AARCH64. The built firmware images passed a smoke test
> too.
> 
> (a2) I removed all the python34 packages (and the dependent packages)
> from my laptop. Didn't set either of PYTHON3_ENABLE and PYTHON_COMMAND.
> (This is the configuration what a "normal" RHEL7 environment would
> provide.) The "build" utility didn't print any PYTHON_COMMAND setting,
> but the same fw platform builds as in (a1) completed fine. The smoke
> tests passed again as well.
> 
> (b) RHEL-8 virtual machine, with "/usr/bin/python3.6" from
> python36-3.6.6-18.el8.x86_64, and "/usr/libexec/platform-python" from
> platform-python-3.6.8-1.el8.x86_64.
> 
> (b1) Didn't set either PYTHON3_ENABLE or PYTHON_COMMAND. The build
> utility picked
> 
>   PYTHON_COMMAND   = /usr/bin/python3.6
> 
> and I successfully built OvmfPkg for IA32, IA32X64, and X64. (I don't
> have a cross-compiler installed in this environment yet, nor a RHEL8
> aarch64 KVM guest, so I couldn't test ArmVirtQemu for now).
> 
> (b2) I set PYTHON_COMMAND to "/usr/libexec/platform-python". Didn't set
> PYTHON3_ENABLE. The same builds as in (b1) succeeded.
> 
> 
> For the series:
> 
> Tested-by: Laszlo Ersek <lersek@redhat.com>
> 
> Given that the testing is quite time consuming, I suggest that we push
> v2 (assuming reviewers don't find critical issues), and address small
> issues incrementally.
> 
> Thanks!
> Laszlo
> _______________________________________________
> edk2-devel mailing list
> edk2-devel@lists.01.org
> https://lists.01.org/mailman/listinfo/edk2-devel


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Patch v2 00/33] BaseTools python3 migration patch set
  2019-01-29 13:07 ` [Patch v2 00/33] BaseTools python3 migration patch set Laszlo Ersek
  2019-01-30  1:52   ` Gao, Liming
@ 2019-01-30  2:59   ` Feng, Bob C
  1 sibling, 0 replies; 50+ messages in thread
From: Feng, Bob C @ 2019-01-30  2:59 UTC (permalink / raw)
  To: Laszlo Ersek; +Cc: edk2-devel@lists.01.org

Hi Laszlo,

Thank you very much for the testing.

Thanks!
Bob

-----Original Message-----
From: Laszlo Ersek [mailto:lersek@redhat.com] 
Sent: Tuesday, January 29, 2019 9:07 PM
To: Feng, Bob C <bob.c.feng@intel.com>
Cc: edk2-devel@lists.01.org
Subject: Re: [edk2] [Patch v2 00/33] BaseTools python3 migration patch set

Hi Bob,

On 01/29/19 03:05, Feng, Bob C wrote:
> BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=55
>
> V2:
> The python files under CParser4 folder of ECC/Eot tool are generated 
> by antlr4 and forpython3 usage.
> They have python3 specific syntax, for example the data type 
> declaration for the arguments of a function. That is not compitable 
> with python2. this patch is to remove these syntax.
>
> The version2 patch set is commit to https://github.com/BobCF/edk2.git 
> branch py3basetools_v2

(reusing the "test plan" from my email at
<http://mid.mail-archive.com/cab4fed6-4c5d-94a9-b29f-da41ad7f320e@redhat.com>:)

I ran the following tests, at commit 6edb6bd9f182 ("BaseTools: Eot tool
Python3 adaption", 2019-01-29). Each test was performed in a clean tree (after running "git clean -ffdx") and clean environment (I re-sourced "edksetup.sh" for each test in separation). In addition, the base tools were rebuilt (again from a clean tree) for each test, with the following command [1]:

  nice make -C "$EDK_TOOLS_PATH" -j $(getconf _NPROCESSORS_ONLN)

(a) On my RHEL7.5 Workstation laptop, I have both the system-level python packages installed (python-2.7.5-69.el7_5.x86_64), and the extra
python-3.4 stuff from EPEL-7 (python34-3.4.9-1.el7.x86_64).

(a1) Didn't set either PYTHON3_ENABLE or PYTHON_COMMAND. The build utility picked

  PYTHON_COMMAND   = /usr/bin/python3.4

and I successfully built OvmfPkg for IA32, IA32X64, and X64; also ArmVirtQemu for AARCH64. The built firmware images passed a smoke test too.

(a2) I removed all the python34 packages (and the dependent packages) from my laptop. Didn't set either of PYTHON3_ENABLE and PYTHON_COMMAND.
(This is the configuration what a "normal" RHEL7 environment would
provide.) The "build" utility didn't print any PYTHON_COMMAND setting, but the same fw platform builds as in (a1) completed fine. The smoke tests passed again as well.

(b) RHEL-8 virtual machine, with "/usr/bin/python3.6" from python36-3.6.6-18.el8.x86_64, and "/usr/libexec/platform-python" from platform-python-3.6.8-1.el8.x86_64.

(b1) Didn't set either PYTHON3_ENABLE or PYTHON_COMMAND. The build utility picked

  PYTHON_COMMAND   = /usr/bin/python3.6

and I successfully built OvmfPkg for IA32, IA32X64, and X64. (I don't have a cross-compiler installed in this environment yet, nor a RHEL8
aarch64 KVM guest, so I couldn't test ArmVirtQemu for now).

(b2) I set PYTHON_COMMAND to "/usr/libexec/platform-python". Didn't set PYTHON3_ENABLE. The same builds as in (b1) succeeded.


For the series:

Tested-by: Laszlo Ersek <lersek@redhat.com>

Given that the testing is quite time consuming, I suggest that we push
v2 (assuming reviewers don't find critical issues), and address small issues incrementally.

Thanks!
Laszlo

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Patch v2 00/33] BaseTools python3 migration patch set
  2019-01-30  1:52   ` Gao, Liming
@ 2019-01-30  5:25     ` Feng, Bob C
  2019-01-31  8:23       ` Gao, Liming
  0 siblings, 1 reply; 50+ messages in thread
From: Feng, Bob C @ 2019-01-30  5:25 UTC (permalink / raw)
  To: Gao, Liming, Laszlo Ersek; +Cc: edk2-devel@lists.01.org

I agree with this proposal.
I plan to push the python3 patch set to edk2 master on Friday morning, Feb. 1 PRC time, if there are no more comments and no critical issues found.

Thanks,
Bob

-----Original Message-----
From: Gao, Liming 
Sent: Wednesday, January 30, 2019 9:53 AM
To: Laszlo Ersek <lersek@redhat.com>; Feng, Bob C <bob.c.feng@intel.com>
Cc: edk2-devel@lists.01.org
Subject: RE: [edk2] [Patch v2 00/33] BaseTools python3 migration patch set

Laszlo:
 I agree your proposal. Push this patch set first if no other comments, then continue to do minor bug fix. 
 
Thanks
Liming
> -----Original Message-----
> From: edk2-devel [mailto:edk2-devel-bounces@lists.01.org] On Behalf Of 
> Laszlo Ersek
> Sent: Tuesday, January 29, 2019 9:07 PM
> To: Feng, Bob C <bob.c.feng@intel.com>
> Cc: edk2-devel@lists.01.org
> Subject: Re: [edk2] [Patch v2 00/33] BaseTools python3 migration patch 
> set
> 
> Hi Bob,
> 
> On 01/29/19 03:05, Feng, Bob C wrote:
> > BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=55
> >
> > V2:
> > The python files under CParser4 folder of ECC/Eot tool are generated 
> > by antlr4 and forpython3 usage.
> > They have python3 specific syntax, for example the data type 
> > declaration for the arguments of a function. That is not compitable 
> > with python2. this patch is to remove these syntax.
> >
> > The version2 patch set is commit to 
> > https://github.com/BobCF/edk2.git branch py3basetools_v2
> 
> (reusing the "test plan" from my email at
> <http://mid.mail-archive.com/cab4fed6-4c5d-94a9-b29f-da41ad7f320e@redh
> at.com>:)
> 
> I ran the following tests, at commit 6edb6bd9f182 ("BaseTools: Eot 
> tool
> Python3 adaption", 2019-01-29). Each test was performed in a clean 
> tree (after running "git clean -ffdx") and clean environment (I 
> re-sourced "edksetup.sh" for each test in separation). In addition, 
> the base tools were rebuilt (again from a clean tree) for each test, 
> with the following command [1]:
> 
>   nice make -C "$EDK_TOOLS_PATH" -j $(getconf _NPROCESSORS_ONLN)
> 
> (a) On my RHEL7.5 Workstation laptop, I have both the system-level 
> python packages installed (python-2.7.5-69.el7_5.x86_64), and the 
> extra
> python-3.4 stuff from EPEL-7 (python34-3.4.9-1.el7.x86_64).
> 
> (a1) Didn't set either PYTHON3_ENABLE or PYTHON_COMMAND. The build 
> utility picked
> 
>   PYTHON_COMMAND   = /usr/bin/python3.4
> 
> and I successfully built OvmfPkg for IA32, IA32X64, and X64; also 
> ArmVirtQemu for AARCH64. The built firmware images passed a smoke test 
> too.
> 
> (a2) I removed all the python34 packages (and the dependent packages) 
> from my laptop. Didn't set either of PYTHON3_ENABLE and PYTHON_COMMAND.
> (This is the configuration what a "normal" RHEL7 environment would
> provide.) The "build" utility didn't print any PYTHON_COMMAND setting, 
> but the same fw platform builds as in (a1) completed fine. The smoke 
> tests passed again as well.
> 
> (b) RHEL-8 virtual machine, with "/usr/bin/python3.6" from 
> python36-3.6.6-18.el8.x86_64, and "/usr/libexec/platform-python" from 
> platform-python-3.6.8-1.el8.x86_64.
> 
> (b1) Didn't set either PYTHON3_ENABLE or PYTHON_COMMAND. The build 
> utility picked
> 
>   PYTHON_COMMAND   = /usr/bin/python3.6
> 
> and I successfully built OvmfPkg for IA32, IA32X64, and X64. (I don't 
> have a cross-compiler installed in this environment yet, nor a RHEL8
> aarch64 KVM guest, so I couldn't test ArmVirtQemu for now).
> 
> (b2) I set PYTHON_COMMAND to "/usr/libexec/platform-python". Didn't 
> set PYTHON3_ENABLE. The same builds as in (b1) succeeded.
> 
> 
> For the series:
> 
> Tested-by: Laszlo Ersek <lersek@redhat.com>
> 
> Given that the testing is quite time consuming, I suggest that we push
> v2 (assuming reviewers don't find critical issues), and address small 
> issues incrementally.
> 
> Thanks!
> Laszlo
> _______________________________________________
> edk2-devel mailing list
> edk2-devel@lists.01.org
> https://lists.01.org/mailman/listinfo/edk2-devel


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Patch v2 00/33] BaseTools python3 migration patch set
  2019-01-30  5:25     ` Feng, Bob C
@ 2019-01-31  8:23       ` Gao, Liming
  2019-02-01  3:13         ` Feng, Bob C
  0 siblings, 1 reply; 50+ messages in thread
From: Gao, Liming @ 2019-01-31  8:23 UTC (permalink / raw)
  To: Feng, Bob C, Laszlo Ersek; +Cc: edk2-devel@lists.01.org

Bob:
  I have no other comments on this patch set. Reviewed-by: Liming Gao <liming.gao@intel.com>

Thanks
Liming
> -----Original Message-----
> From: Feng, Bob C
> Sent: Wednesday, January 30, 2019 1:25 PM
> To: Gao, Liming <liming.gao@intel.com>; Laszlo Ersek <lersek@redhat.com>
> Cc: edk2-devel@lists.01.org
> Subject: RE: [edk2] [Patch v2 00/33] BaseTools python3 migration patch set
> 
> I agree this proposal.
> I plan to push python3 patch set to edk2 master in this Friday morning, Feb.1 PRC time if there is no more comments or no critical issues
> found.
> 
> Thanks,
> Bob
> 
> -----Original Message-----
> From: Gao, Liming
> Sent: Wednesday, January 30, 2019 9:53 AM
> To: Laszlo Ersek <lersek@redhat.com>; Feng, Bob C <bob.c.feng@intel.com>
> Cc: edk2-devel@lists.01.org
> Subject: RE: [edk2] [Patch v2 00/33] BaseTools python3 migration patch set
> 
> Laszlo:
>  I agree your proposal. Push this patch set first if no other comments, then continue to do minor bug fix.
> 
> Thanks
> Liming
> > -----Original Message-----
> > From: edk2-devel [mailto:edk2-devel-bounces@lists.01.org] On Behalf Of
> > Laszlo Ersek
> > Sent: Tuesday, January 29, 2019 9:07 PM
> > To: Feng, Bob C <bob.c.feng@intel.com>
> > Cc: edk2-devel@lists.01.org
> > Subject: Re: [edk2] [Patch v2 00/33] BaseTools python3 migration patch
> > set
> >
> > Hi Bob,
> >
> > On 01/29/19 03:05, Feng, Bob C wrote:
> > > BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=55
> > >
> > > V2:
> > > The python files under CParser4 folder of ECC/Eot tool are generated
> > > by antlr4 and forpython3 usage.
> > > They have python3 specific syntax, for example the data type
> > > declaration for the arguments of a function. That is not compitable
> > > with python2. this patch is to remove these syntax.
> > >
> > > The version2 patch set is commit to
> > > https://github.com/BobCF/edk2.git branch py3basetools_v2
> >
> > (reusing the "test plan" from my email at
> > <http://mid.mail-archive.com/cab4fed6-4c5d-94a9-b29f-da41ad7f320e@redh
> > at.com>:)
> >
> > I ran the following tests, at commit 6edb6bd9f182 ("BaseTools: Eot
> > tool
> > Python3 adaption", 2019-01-29). Each test was performed in a clean
> > tree (after running "git clean -ffdx") and clean environment (I
> > re-sourced "edksetup.sh" for each test in separation). In addition,
> > the base tools were rebuilt (again from a clean tree) for each test,
> > with the following command [1]:
> >
> >   nice make -C "$EDK_TOOLS_PATH" -j $(getconf _NPROCESSORS_ONLN)
> >
> > (a) On my RHEL7.5 Workstation laptop, I have both the system-level
> > python packages installed (python-2.7.5-69.el7_5.x86_64), and the
> > extra
> > python-3.4 stuff from EPEL-7 (python34-3.4.9-1.el7.x86_64).
> >
> > (a1) Didn't set either PYTHON3_ENABLE or PYTHON_COMMAND. The build
> > utility picked
> >
> >   PYTHON_COMMAND   = /usr/bin/python3.4
> >
> > and I successfully built OvmfPkg for IA32, IA32X64, and X64; also
> > ArmVirtQemu for AARCH64. The built firmware images passed a smoke test
> > too.
> >
> > (a2) I removed all the python34 packages (and the dependent packages)
> > from my laptop. Didn't set either of PYTHON3_ENABLE and PYTHON_COMMAND.
> > (This is the configuration what a "normal" RHEL7 environment would
> > provide.) The "build" utility didn't print any PYTHON_COMMAND setting,
> > but the same fw platform builds as in (a1) completed fine. The smoke
> > tests passed again as well.
> >
> > (b) RHEL-8 virtual machine, with "/usr/bin/python3.6" from
> > python36-3.6.6-18.el8.x86_64, and "/usr/libexec/platform-python" from
> > platform-python-3.6.8-1.el8.x86_64.
> >
> > (b1) Didn't set either PYTHON3_ENABLE or PYTHON_COMMAND. The build
> > utility picked
> >
> >   PYTHON_COMMAND   = /usr/bin/python3.6
> >
> > and I successfully built OvmfPkg for IA32, IA32X64, and X64. (I don't
> > have a cross-compiler installed in this environment yet, nor a RHEL8
> > aarch64 KVM guest, so I couldn't test ArmVirtQemu for now).
> >
> > (b2) I set PYTHON_COMMAND to "/usr/libexec/platform-python". Didn't
> > set PYTHON3_ENABLE. The same builds as in (b1) succeeded.
> >
> >
> > For the series:
> >
> > Tested-by: Laszlo Ersek <lersek@redhat.com>
> >
> > Given that the testing is quite time consuming, I suggest that we push
> > v2 (assuming reviewers don't find critical issues), and address small
> > issues incrementally.
> >
> > Thanks!
> > Laszlo
> > _______________________________________________
> > edk2-devel mailing list
> > edk2-devel@lists.01.org
> > https://lists.01.org/mailman/listinfo/edk2-devel


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Patch v2 00/33] BaseTools python3 migration patch set
  2019-01-31  8:23       ` Gao, Liming
@ 2019-02-01  3:13         ` Feng, Bob C
  2019-02-01  8:50           ` Laszlo Ersek
  0 siblings, 1 reply; 50+ messages in thread
From: Feng, Bob C @ 2019-02-01  3:13 UTC (permalink / raw)
  To: Gao, Liming, Laszlo Ersek; +Cc: edk2-devel@lists.01.org

I have pushed the py3 patch set to edk2 master.

Thanks,
Bob

-----Original Message-----
From: Gao, Liming 
Sent: Thursday, January 31, 2019 4:24 PM
To: Feng, Bob C <bob.c.feng@intel.com>; Laszlo Ersek <lersek@redhat.com>
Cc: edk2-devel@lists.01.org
Subject: RE: [edk2] [Patch v2 00/33] BaseTools python3 migration patch set

Bob:
  I have no other comments on this patch set. Reviewed-by: Liming Gao <liming.gao@intel.com>

Thanks
Liming
> -----Original Message-----
> From: Feng, Bob C
> Sent: Wednesday, January 30, 2019 1:25 PM
> To: Gao, Liming <liming.gao@intel.com>; Laszlo Ersek 
> <lersek@redhat.com>
> Cc: edk2-devel@lists.01.org
> Subject: RE: [edk2] [Patch v2 00/33] BaseTools python3 migration patch 
> set
> 
> I agree this proposal.
> I plan to push python3 patch set to edk2 master in this Friday 
> morning, Feb.1 PRC time if there is no more comments or no critical issues found.
> 
> Thanks,
> Bob
> 
> -----Original Message-----
> From: Gao, Liming
> Sent: Wednesday, January 30, 2019 9:53 AM
> To: Laszlo Ersek <lersek@redhat.com>; Feng, Bob C 
> <bob.c.feng@intel.com>
> Cc: edk2-devel@lists.01.org
> Subject: RE: [edk2] [Patch v2 00/33] BaseTools python3 migration patch 
> set
> 
> Laszlo:
>  I agree your proposal. Push this patch set first if no other comments, then continue to do minor bug fix.
> 
> Thanks
> Liming
> > -----Original Message-----
> > From: edk2-devel [mailto:edk2-devel-bounces@lists.01.org] On Behalf 
> > Of Laszlo Ersek
> > Sent: Tuesday, January 29, 2019 9:07 PM
> > To: Feng, Bob C <bob.c.feng@intel.com>
> > Cc: edk2-devel@lists.01.org
> > Subject: Re: [edk2] [Patch v2 00/33] BaseTools python3 migration 
> > patch set
> >
> > Hi Bob,
> >
> > On 01/29/19 03:05, Feng, Bob C wrote:
> > > BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=55
> > >
> > > V2:
> > > The python files under CParser4 folder of ECC/Eot tool are 
> > > generated by antlr4 and forpython3 usage.
> > > They have python3 specific syntax, for example the data type 
> > > declaration for the arguments of a function. That is not 
> > > compitable with python2. this patch is to remove these syntax.
> > >
> > > The version2 patch set is commit to 
> > > https://github.com/BobCF/edk2.git branch py3basetools_v2
> >
> > (reusing the "test plan" from my email at 
> > <http://mid.mail-archive.com/cab4fed6-4c5d-94a9-b29f-da41ad7f320e@re
> > dh
> > at.com>:)
> >
> > I ran the following tests, at commit 6edb6bd9f182 ("BaseTools: Eot 
> > tool
> > Python3 adaption", 2019-01-29). Each test was performed in a clean 
> > tree (after running "git clean -ffdx") and clean environment (I 
> > re-sourced "edksetup.sh" for each test in separation). In addition, 
> > the base tools were rebuilt (again from a clean tree) for each test, 
> > with the following command [1]:
> >
> >   nice make -C "$EDK_TOOLS_PATH" -j $(getconf _NPROCESSORS_ONLN)
> >
> > (a) On my RHEL7.5 Workstation laptop, I have both the system-level 
> > python packages installed (python-2.7.5-69.el7_5.x86_64), and the 
> > extra
> > python-3.4 stuff from EPEL-7 (python34-3.4.9-1.el7.x86_64).
> >
> > (a1) Didn't set either PYTHON3_ENABLE or PYTHON_COMMAND. The build 
> > utility picked
> >
> >   PYTHON_COMMAND   = /usr/bin/python3.4
> >
> > and I successfully built OvmfPkg for IA32, IA32X64, and X64; also 
> > ArmVirtQemu for AARCH64. The built firmware images passed a smoke 
> > test too.
> >
> > (a2) I removed all the python34 packages (and the dependent 
> > packages) from my laptop. Didn't set either of PYTHON3_ENABLE and PYTHON_COMMAND.
> > (This is the configuration what a "normal" RHEL7 environment would
> > provide.) The "build" utility didn't print any PYTHON_COMMAND 
> > setting, but the same fw platform builds as in (a1) completed fine. 
> > The smoke tests passed again as well.
> >
> > (b) RHEL-8 virtual machine, with "/usr/bin/python3.6" from 
> > python36-3.6.6-18.el8.x86_64, and "/usr/libexec/platform-python" 
> > from platform-python-3.6.8-1.el8.x86_64.
> >
> > (b1) Didn't set either PYTHON3_ENABLE or PYTHON_COMMAND. The build 
> > utility picked
> >
> >   PYTHON_COMMAND   = /usr/bin/python3.6
> >
> > and I successfully built OvmfPkg for IA32, IA32X64, and X64. (I 
> > don't have a cross-compiler installed in this environment yet, nor a 
> > RHEL8
> > aarch64 KVM guest, so I couldn't test ArmVirtQemu for now).
> >
> > (b2) I set PYTHON_COMMAND to "/usr/libexec/platform-python". Didn't 
> > set PYTHON3_ENABLE. The same builds as in (b1) succeeded.
> >
> >
> > For the series:
> >
> > Tested-by: Laszlo Ersek <lersek@redhat.com>
> >
> > Given that the testing is quite time consuming, I suggest that we 
> > push
> > v2 (assuming reviewers don't find critical issues), and address 
> > small issues incrementally.
> >
> > Thanks!
> > Laszlo
> > _______________________________________________
> > edk2-devel mailing list
> > edk2-devel@lists.01.org
> > https://lists.01.org/mailman/listinfo/edk2-devel


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [Patch v2 00/33] BaseTools python3 migration patch set
  2019-02-01  3:13         ` Feng, Bob C
@ 2019-02-01  8:50           ` Laszlo Ersek
  0 siblings, 0 replies; 50+ messages in thread
From: Laszlo Ersek @ 2019-02-01  8:50 UTC (permalink / raw)
  To: Feng, Bob C, Gao, Liming; +Cc: edk2-devel@lists.01.org

On 02/01/19 04:13, Feng, Bob C wrote:
> I have pushed py3 patch set to edk2 master.

Thanks! I've noted the commit range on the BZ
(cc01b26e053c..8189be6fd7d7) and closed the BZ (#55).

Laszlo


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [edk2] [Patch 32/33] BaseTools: ECC tool Python3 adaption
       [not found]   ` <20190902190211.GZ29255@bivouac.eciton.net>
@ 2019-09-02 19:04     ` Leif Lindholm
  2019-09-04  2:10       ` [edk2-devel] " Bob Feng
  0 siblings, 1 reply; 50+ messages in thread
From: Leif Lindholm @ 2019-09-02 19:04 UTC (permalink / raw)
  To: Feng, Bob C; +Cc: devel, Liming Gao

Argh - forgot about the mailing list move, forwarding to current list.

/
   Leif

On Mon, Sep 02, 2019 at 08:02:11PM +0100, Leif Lindholm wrote:
> Hi Bob,
> 
> I was running Ecc today, apparently for the first time since I
> switched to Python3 by default.
> 
> I have raised https://bugzilla.tianocore.org/show_bug.cgi?id=2148 over
> the way Python3 hard codes use of antlr4, whereas it seems to me it
> should be possible to ue Python3 with antlr3 (but not Python2 with
> antlr4).
> 
> However, whilst that issue could be looked at without extreme urgency,
> I am curious as to what is causing the error I am seeing upon working
> around the import failure on my Debian installation (which lacks
> python3-antlr4).
> 
> The output I get when running
> $ Ecc -t /work/git/edk2/EmbeddedPkg/Drivers/DtPlatformDxe/ -s
> 
> is
> 
> ---
> /work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py:409: DeprecationWarning: time.clock has been deprecated in Python 3.3 and will be removed from Python 3.8: use time.perf_counter or time.process_time instead
>   StartTime = time.clock()
> 11:44:43, Sep.02 2019 [00:00]
> 
> Loading ECC configuration ... done
> Building database for Meta Data File Done!
> Parsing //work/git/edk2/EmbeddedPkg/Drivers/DtPlatformDxe/DtPlatformDxe.c
> Traceback (most recent call last):
>   File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
>     "__main__", mod_spec)
>   File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
>     exec(code, run_globals)
>   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 410, in <module>
>     Ecc = Ecc()
>   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 94, in __init__
>     self.DetectOnlyScanDirs()
>   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 130, in DetectOnlyScanDirs
>     self.BuildDatabase()
>   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 150, in BuildDatabase
>     c.CollectSourceCodeDataIntoDB(EccGlobalData.gTarget)
>   File "/work/git/edk2/BaseTools/Source/Python/Ecc/c.py", line 526, in CollectSourceCodeDataIntoDB
>     collector.ParseFile()
>   File "/work/git/edk2/BaseTools/Source/Python/Ecc/CodeFragmentCollector.py", line 517, in ParseFile
>     lexer = CLexer(cStream)
>   File "/work/git/edk2/BaseTools/Source/Python/Ecc/CParser3/CLexer.py", line 147, in __init__
>     Lexer.__init__(self, input)
>   File "/usr/lib/python3/dist-packages/antlr3/recognizers.py", line 1039, in __init__
>     BaseRecognizer.__init__(self, state)
>   File "/usr/lib/python3/dist-packages/antlr3/recognizers.py", line 169, in __init__
>     .format(self.api_version))
> RuntimeError: ANTLR version mismatch: The recognizer has been generated with API V0, but this runtime does not support this.
> ---
> 
> Any idea?
> 
> Best Regards,
> 
> Leif
> 
> On Tue, Jan 29, 2019 at 10:06:09AM +0800, Feng, Bob C wrote:
> > v2:
> > The python files under CParser4 are generated by antlr4 and for
> > python3 usage. They have python3 specific syntax, for example
> > the data type declaration for the arguments of a function. That
> > is not compatible with python2. This patch is to remove this syntax.
> > 
> > ECC tool Python3 adaption.
> > 
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Bob Feng <bob.c.feng@intel.com>
> > Cc: Liming Gao <liming.gao@intel.com>
> > ---
> >  BaseTools/Source/Python/Ecc/{ => CParser3}/CLexer.py  |    0
> >  BaseTools/Source/Python/Ecc/{ => CParser3}/CParser.py |    0
> >  BaseTools/Source/Python/Ecc/CParser3/__init__.py      |    0
> >  BaseTools/Source/Python/Ecc/CParser4/C.g4             |  637 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  BaseTools/Source/Python/Ecc/CParser4/CLexer.py        |  632 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  BaseTools/Source/Python/Ecc/CParser4/CListener.py     |  815 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  BaseTools/Source/Python/Ecc/CParser4/CParser.py       | 6279 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  +
> >  BaseTools/Source/Python/Ecc/CParser4/__init__.py      |    0
> >  BaseTools/Source/Python/Ecc/Check.py                  |    4 +-
> >  BaseTools/Source/Python/Ecc/CodeFragmentCollector.py  |   20 +--
> >  BaseTools/Source/Python/Ecc/Configuration.py          |    3 -
> >  BaseTools/Source/Python/Ecc/EccMain.py                |    2 +-
> >  BaseTools/Source/Python/Ecc/EccToolError.py           |    4 +-
> >  BaseTools/Source/Python/Ecc/FileProfile.py            |    2 +-
> >  BaseTools/Source/Python/Ecc/MetaDataParser.py         |    2 +-
> >  BaseTools/Source/Python/Ecc/c.py                      |    6 +-
> >  BaseTools/Source/Python/Ecc/config.ini                |    2 -
> >  17 files changed, 8385 insertions(+), 23 deletions(-)


* Re: [edk2-devel] [edk2] [Patch 32/33] BaseTools: ECC tool Python3 adaption
  2019-09-02 19:04     ` [edk2] " Leif Lindholm
@ 2019-09-04  2:10       ` Bob Feng
  2019-09-04  8:38         ` Leif Lindholm
  0 siblings, 1 reply; 50+ messages in thread
From: Bob Feng @ 2019-09-04  2:10 UTC (permalink / raw)
  To: devel@edk2.groups.io, leif.lindholm@linaro.org; +Cc: Gao, Liming

Hi Leif,

I have no Debian environment. On Debian, can python3 work with antlr3? I checked the antlr3 Python GitHub repository; its source code is still a beta version and has not been updated for 7 years.

But if yes, I think the import statement in ECC can be changed as:
try:
    import antlr4 as antlr
    from Ecc.CParser4.CLexer import CLexer
    from Ecc.CParser4.CParser import CParser
except ImportError:
    import antlr3 as antlr
    antlr.InputStream = antlr.StringStream
    from Ecc.CParser3.CLexer import CLexer
    from Ecc.CParser3.CParser import CParser
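
[Editorial note: the fallback pattern proposed above — prefer one runtime, fall back to another, and alias the newer API name onto the older module — can be sketched in a self-contained, hedged form. The module names below are stand-ins, not the real antlr packages:]

```python
import importlib

def import_with_fallback(primary, fallback, shims=()):
    """Try to import `primary`; on ImportError import `fallback` and
    alias attributes so callers can use the primary module's API names.
    `shims` is an iterable of (new_name, old_name) pairs, mirroring
    antlr.InputStream = antlr.StringStream in the snippet above."""
    try:
        return importlib.import_module(primary)
    except ImportError:
        mod = importlib.import_module(fallback)
        for new, old in shims:
            if not hasattr(mod, new):
                setattr(mod, new, getattr(mod, old))
        return mod

# Demo with stdlib names only: "no_such_antlr4" plays the missing antlr4,
# io plays antlr3, and io.StringIO is aliased under the newer name.
mod = import_with_fallback("no_such_antlr4", "io",
                           shims=[("InputStream", "StringIO")])
stream = mod.InputStream("int main;")
print(stream.read())
```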


Thanks,
Bob


-----Original Message-----
From: devel@edk2.groups.io [mailto:devel@edk2.groups.io] On Behalf Of Leif Lindholm
Sent: Tuesday, September 3, 2019 3:05 AM
To: Feng, Bob C <bob.c.feng@intel.com>
Cc: devel@edk2.groups.io; Gao, Liming <liming.gao@intel.com>
Subject: Re: [edk2-devel] [edk2] [Patch 32/33] BaseTools: ECC tool Python3 adaption

Argh - forgot about the mailing list move, forwarding to current list.

/
   Leif

On Mon, Sep 02, 2019 at 08:02:11PM +0100, Leif Lindholm wrote:
> Hi Bob,
> 
> I was running Ecc today, apparently for the first time since I 
> switched to Python3 by default.
> 
> I have raised https://bugzilla.tianocore.org/show_bug.cgi?id=2148 over 
> the way Python3 hard codes use of antlr4, whereas it seems to me it 
> should be possible to use Python3 with antlr3 (but not Python2 with 
> antlr4).
> 
> However, whilst that issue could be looked at without extreme urgency, 
> I am curious as to what is causing the error I am seeing upon working 
> around the import failure on my Debian installation (which lacks 
> python3-antlr4).
> 
> The output I get when running
> $ Ecc -t /work/git/edk2/EmbeddedPkg/Drivers/DtPlatformDxe/ -s
> 
> is
> 
> ---
> /work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py:409: DeprecationWarning: time.clock has been deprecated in Python 3.3 and will be removed from Python 3.8: use time.perf_counter or time.process_time instead
>   StartTime = time.clock()
> 11:44:43, Sep.02 2019 [00:00]
> 
> Loading ECC configuration ... done
> Building database for Meta Data File Done!
> Parsing 
> //work/git/edk2/EmbeddedPkg/Drivers/DtPlatformDxe/DtPlatformDxe.c
> Traceback (most recent call last):
>   File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
>     "__main__", mod_spec)
>   File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
>     exec(code, run_globals)
>   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 410, in <module>
>     Ecc = Ecc()
>   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 94, in __init__
>     self.DetectOnlyScanDirs()
>   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 130, in DetectOnlyScanDirs
>     self.BuildDatabase()
>   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 150, in BuildDatabase
>     c.CollectSourceCodeDataIntoDB(EccGlobalData.gTarget)
>   File "/work/git/edk2/BaseTools/Source/Python/Ecc/c.py", line 526, in CollectSourceCodeDataIntoDB
>     collector.ParseFile()
>   File "/work/git/edk2/BaseTools/Source/Python/Ecc/CodeFragmentCollector.py", line 517, in ParseFile
>     lexer = CLexer(cStream)
>   File "/work/git/edk2/BaseTools/Source/Python/Ecc/CParser3/CLexer.py", line 147, in __init__
>     Lexer.__init__(self, input)
>   File "/usr/lib/python3/dist-packages/antlr3/recognizers.py", line 1039, in __init__
>     BaseRecognizer.__init__(self, state)
>   File "/usr/lib/python3/dist-packages/antlr3/recognizers.py", line 169, in __init__
>     .format(self.api_version))
> RuntimeError: ANTLR version mismatch: The recognizer has been generated with API V0, but this runtime does not support this.
> ---
> 
> Any idea?
> 
> Best Regards,
> 
> Leif
> 
> On Tue, Jan 29, 2019 at 10:06:09AM +0800, Feng, Bob C wrote:
> > v2:
> > The python files under CParser4 are generated by antlr4 and for
> > python3 usage. They have python3 specific syntax, for example the 
> > data type declaration for the arguments of a function. That is not 
> > compatible with python2. This patch is to remove this syntax.
> > 
> > ECC tool Python3 adaption.
> > 
> > Contributed-under: TianoCore Contribution Agreement 1.1
> > Signed-off-by: Bob Feng <bob.c.feng@intel.com>
> > Cc: Liming Gao <liming.gao@intel.com>
> > ---
> >  BaseTools/Source/Python/Ecc/{ => CParser3}/CLexer.py  |    0
> >  BaseTools/Source/Python/Ecc/{ => CParser3}/CParser.py |    0
> >  BaseTools/Source/Python/Ecc/CParser3/__init__.py      |    0
> >  BaseTools/Source/Python/Ecc/CParser4/C.g4             |  637 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  BaseTools/Source/Python/Ecc/CParser4/CLexer.py        |  632 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  BaseTools/Source/Python/Ecc/CParser4/CListener.py     |  815 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  BaseTools/Source/Python/Ecc/CParser4/CParser.py       | 6279 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  +
> >  BaseTools/Source/Python/Ecc/CParser4/__init__.py      |    0
> >  BaseTools/Source/Python/Ecc/Check.py                  |    4 +-
> >  BaseTools/Source/Python/Ecc/CodeFragmentCollector.py  |   20 +--
> >  BaseTools/Source/Python/Ecc/Configuration.py          |    3 -
> >  BaseTools/Source/Python/Ecc/EccMain.py                |    2 +-
> >  BaseTools/Source/Python/Ecc/EccToolError.py           |    4 +-
> >  BaseTools/Source/Python/Ecc/FileProfile.py            |    2 +-
> >  BaseTools/Source/Python/Ecc/MetaDataParser.py         |    2 +-
> >  BaseTools/Source/Python/Ecc/c.py                      |    6 +-
> >  BaseTools/Source/Python/Ecc/config.ini                |    2 -
> >  17 files changed, 8385 insertions(+), 23 deletions(-)





* Re: [edk2-devel] [edk2] [Patch 32/33] BaseTools: ECC tool Python3 adaption
  2019-09-04  2:10       ` [edk2-devel] " Bob Feng
@ 2019-09-04  8:38         ` Leif Lindholm
  2019-09-04  9:12           ` Bob Feng
  0 siblings, 1 reply; 50+ messages in thread
From: Leif Lindholm @ 2019-09-04  8:38 UTC (permalink / raw)
  To: Feng, Bob C; +Cc: devel@edk2.groups.io, Gao, Liming

Hi Bob,

On Wed, Sep 04, 2019 at 02:10:23AM +0000, Feng, Bob C wrote:
> Hi Leif,
> 
> I have no Debian environment. On Debian, can python3 work with
> antlr3?

Yes. The below is equivalent to what I have already done.

Can you please respond to the question I asked about the API version
error I see when I then try to run it, included in my original email?

Best Regards,

Leif

> I checked the antlr3 python github repository, the source code is still in beta version and has not been updated for 7 years.
> 
> But If yes, I think the import statement in ECC can be changed as:
> try:
>     import antlr4 as antlr
>     from Ecc.CParser4.CLexer import CLexer
>     from Ecc.CParser4.CParser import CParser
> except:
>     import antlr3 as antlr
>     antlr.InputStream = antlr.StringStream
>     from Ecc.CParser3.CLexer import CLexer
>     from Ecc.CParser3.CParser import CParser 
> 
> 
> Thanks,
> Bob
> 
> 
> -----Original Message-----
> From: devel@edk2.groups.io [mailto:devel@edk2.groups.io] On Behalf Of Leif Lindholm
> Sent: Tuesday, September 3, 2019 3:05 AM
> To: Feng, Bob C <bob.c.feng@intel.com>
> Cc: devel@edk2.groups.io; Gao, Liming <liming.gao@intel.com>
> Subject: Re: [edk2-devel] [edk2] [Patch 32/33] BaseTools: ECC tool Python3 adaption
> 
> Argh - forgot about the mailing list move, forwarding to current list.
> 
> /
>    Leif
> 
> On Mon, Sep 02, 2019 at 08:02:11PM +0100, Leif Lindholm wrote:
> > Hi Bob,
> > 
> > I was running Ecc today, apparently for the first time since I 
> > switched to Python3 by default.
> > 
> > I have raised https://bugzilla.tianocore.org/show_bug.cgi?id=2148 over 
> > the way Python3 hard codes use of antlr4, whereas it seems to me it 
> > should be possible to use Python3 with antlr3 (but not Python2 
> > antlr4).
> > 
> > However, whilst that issue could be looked at without extreme urgency, 
> > I am curious as to what is causing the error I am seeing upon working 
> > around the import failure on my Debian installation (which lacks 
> > python3-antlr4).
> > 
> > The output I get when running
> > $ Ecc -t /work/git/edk2/EmbeddedPkg/Drivers/DtPlatformDxe/ -s
> > 
> > is
> > 
> > ---
> > /work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py:409: DeprecationWarning: time.clock has been deprecated in Python 3.3 and will be removed from Python 3.8: use time.perf_counter or time.process_time instead
> >   StartTime = time.clock()
> > 11:44:43, Sep.02 2019 [00:00]
> > 
> > Loading ECC configuration ... done
> > Building database for Meta Data File Done!
> > Parsing 
> > //work/git/edk2/EmbeddedPkg/Drivers/DtPlatformDxe/DtPlatformDxe.c
> > Traceback (most recent call last):
> >   File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
> >     "__main__", mod_spec)
> >   File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
> >     exec(code, run_globals)
> >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 410, in <module>
> >     Ecc = Ecc()
> >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 94, in __init__
> >     self.DetectOnlyScanDirs()
> >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 130, in DetectOnlyScanDirs
> >     self.BuildDatabase()
> >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 150, in BuildDatabase
> >     c.CollectSourceCodeDataIntoDB(EccGlobalData.gTarget)
> >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/c.py", line 526, in CollectSourceCodeDataIntoDB
> >     collector.ParseFile()
> >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/CodeFragmentCollector.py", line 517, in ParseFile
> >     lexer = CLexer(cStream)
> >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/CParser3/CLexer.py", line 147, in __init__
> >     Lexer.__init__(self, input)
> >   File "/usr/lib/python3/dist-packages/antlr3/recognizers.py", line 1039, in __init__
> >     BaseRecognizer.__init__(self, state)
> >   File "/usr/lib/python3/dist-packages/antlr3/recognizers.py", line 169, in __init__
> >     .format(self.api_version))
> > RuntimeError: ANTLR version mismatch: The recognizer has been generated with API V0, but this runtime does not support this.
> > ---
> > 
> > Any idea?
> > 
> > Best Regards,
> > 
> > Leif
> > 
> > On Tue, Jan 29, 2019 at 10:06:09AM +0800, Feng, Bob C wrote:
> > > v2:
> > > The python files under CParser4 are generated by antlr4 and for
> > > python3 usage. They have python3 specific syntax, for example the 
> > > data type declaration for the arguments of a function. That is not 
> > > compatible with python2. This patch is to remove this syntax.
> > > 
> > > ECC tool Python3 adaption.
> > > 
> > > Contributed-under: TianoCore Contribution Agreement 1.1
> > > Signed-off-by: Bob Feng <bob.c.feng@intel.com>
> > > Cc: Liming Gao <liming.gao@intel.com>
> > > ---
> > >  BaseTools/Source/Python/Ecc/{ => CParser3}/CLexer.py  |    0
> > >  BaseTools/Source/Python/Ecc/{ => CParser3}/CParser.py |    0
> > >  BaseTools/Source/Python/Ecc/CParser3/__init__.py      |    0
> > >  BaseTools/Source/Python/Ecc/CParser4/C.g4             |  637 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  BaseTools/Source/Python/Ecc/CParser4/CLexer.py        |  632 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  BaseTools/Source/Python/Ecc/CParser4/CListener.py     |  815 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  BaseTools/Source/Python/Ecc/CParser4/CParser.py       | 6279 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  +
> > >  BaseTools/Source/Python/Ecc/CParser4/__init__.py      |    0
> > >  BaseTools/Source/Python/Ecc/Check.py                  |    4 +-
> > >  BaseTools/Source/Python/Ecc/CodeFragmentCollector.py  |   20 +--
> > >  BaseTools/Source/Python/Ecc/Configuration.py          |    3 -
> > >  BaseTools/Source/Python/Ecc/EccMain.py                |    2 +-
> > >  BaseTools/Source/Python/Ecc/EccToolError.py           |    4 +-
> > >  BaseTools/Source/Python/Ecc/FileProfile.py            |    2 +-
> > >  BaseTools/Source/Python/Ecc/MetaDataParser.py         |    2 +-
> > >  BaseTools/Source/Python/Ecc/c.py                      |    6 +-
> > >  BaseTools/Source/Python/Ecc/config.ini                |    2 -
> > >  17 files changed, 8385 insertions(+), 23 deletions(-)
> 
> 
> 


* Re: [edk2-devel] [edk2] [Patch 32/33] BaseTools: ECC tool Python3 adaption
  2019-09-04  8:38         ` Leif Lindholm
@ 2019-09-04  9:12           ` Bob Feng
  2019-09-04  9:51             ` Leif Lindholm
  0 siblings, 1 reply; 50+ messages in thread
From: Bob Feng @ 2019-09-04  9:12 UTC (permalink / raw)
  To: Leif Lindholm; +Cc: devel@edk2.groups.io, Gao, Liming

Hi Leif,

The CLexer.py and CParser.py under CParser3 were generated with antlr 3.0.1 (https://github.com/tianocore/tianocore.github.io/wiki/ECC-tool). I think the API version error may be because the antlr-python-runtime on Debian is a different version. Which version of antlr-python-runtime is on Debian?
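
[Editorial note: the mechanism behind the mismatch error can be sketched generically. This is a hedged illustration, not antlr3's actual code — the real check lives in the runtime's recognizers.py; the generated recognizer records the API version it was built against, and the runtime rejects versions it does not support:]

```python
def check_runtime_compat(generated_api, supported_apis):
    """Reject a recognizer whose recorded API version the runtime
    does not support, with a message like the one in the trace above."""
    if generated_api not in supported_apis:
        raise RuntimeError(
            f"ANTLR version mismatch: recognizer generated with API "
            f"V{generated_api}, runtime supports {sorted(supported_apis)}")
    return True

print(check_runtime_compat(1, {1}))       # compatible pairing
try:
    check_runtime_compat(0, {1})          # the V0-vs-newer case from the log
except RuntimeError as e:
    print("mismatch detected:", "V0" in str(e))
```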

Thanks,
Bob

-----Original Message-----
From: Leif Lindholm [mailto:leif.lindholm@linaro.org] 
Sent: Wednesday, September 4, 2019 4:38 PM
To: Feng, Bob C <bob.c.feng@intel.com>
Cc: devel@edk2.groups.io; Gao, Liming <liming.gao@intel.com>
Subject: Re: [edk2-devel] [edk2] [Patch 32/33] BaseTools: ECC tool Python3 adaption

Hi Bob,

On Wed, Sep 04, 2019 at 02:10:23AM +0000, Feng, Bob C wrote:
> Hi Leif,
> 
> I have no Debian environment. On Debian, can python3 work with antlr3?

Yes. The below is equivalent to what I have already done.

Can you please respond to the question I asked about the API version error I see when I then try to run it, included in my original email?

Best Regards,

Leif

> I checked the antlr3 python github repository, the source code is still in beta version and has not been updated for 7 years.
> 
> But If yes, I think the import statement in ECC can be changed as:
> try:
>     import antlr4 as antlr
>     from Ecc.CParser4.CLexer import CLexer
>     from Ecc.CParser4.CParser import CParser
> except:
>     import antlr3 as antlr
>     antlr.InputStream = antlr.StringStream
>     from Ecc.CParser3.CLexer import CLexer
>     from Ecc.CParser3.CParser import CParser
> 
> 
> Thanks,
> Bob
> 
> 
> -----Original Message-----
> From: devel@edk2.groups.io [mailto:devel@edk2.groups.io] On Behalf Of 
> Leif Lindholm
> Sent: Tuesday, September 3, 2019 3:05 AM
> To: Feng, Bob C <bob.c.feng@intel.com>
> Cc: devel@edk2.groups.io; Gao, Liming <liming.gao@intel.com>
> Subject: Re: [edk2-devel] [edk2] [Patch 32/33] BaseTools: ECC tool 
> Python3 adaption
> 
> Argh - forgot about the mailing list move, forwarding to current list.
> 
> /
>    Leif
> 
> On Mon, Sep 02, 2019 at 08:02:11PM +0100, Leif Lindholm wrote:
> > Hi Bob,
> > 
> > I was running Ecc today, apparently for the first time since I 
> > switched to Python3 by default.
> > 
> > I have raised https://bugzilla.tianocore.org/show_bug.cgi?id=2148 
> > over the way Python3 hard codes use of antlr4, whereas it seems to 
> > me it should be possible to use Python3 with antlr3 (but not Python2 
> > with antlr4).
> > 
> > However, whilst that issue could be looked at without extreme 
> > urgency, I am curious as to what is causing the error I am seeing 
> > upon working around the import failure on my Debian installation 
> > (which lacks python3-antlr4).
> > 
> > The output I get when running
> > $ Ecc -t /work/git/edk2/EmbeddedPkg/Drivers/DtPlatformDxe/ -s
> > 
> > is
> > 
> > ---
> > /work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py:409: DeprecationWarning: time.clock has been deprecated in Python 3.3 and will be removed from Python 3.8: use time.perf_counter or time.process_time instead
> >   StartTime = time.clock()
> > 11:44:43, Sep.02 2019 [00:00]
> > 
> > Loading ECC configuration ... done
> > Building database for Meta Data File Done!
> > Parsing
> > //work/git/edk2/EmbeddedPkg/Drivers/DtPlatformDxe/DtPlatformDxe.c
> > Traceback (most recent call last):
> >   File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
> >     "__main__", mod_spec)
> >   File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
> >     exec(code, run_globals)
> >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 410, in <module>
> >     Ecc = Ecc()
> >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 94, in __init__
> >     self.DetectOnlyScanDirs()
> >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 130, in DetectOnlyScanDirs
> >     self.BuildDatabase()
> >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 150, in BuildDatabase
> >     c.CollectSourceCodeDataIntoDB(EccGlobalData.gTarget)
> >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/c.py", line 526, in CollectSourceCodeDataIntoDB
> >     collector.ParseFile()
> >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/CodeFragmentCollector.py", line 517, in ParseFile
> >     lexer = CLexer(cStream)
> >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/CParser3/CLexer.py", line 147, in __init__
> >     Lexer.__init__(self, input)
> >   File "/usr/lib/python3/dist-packages/antlr3/recognizers.py", line 1039, in __init__
> >     BaseRecognizer.__init__(self, state)
> >   File "/usr/lib/python3/dist-packages/antlr3/recognizers.py", line 169, in __init__
> >     .format(self.api_version))
> > RuntimeError: ANTLR version mismatch: The recognizer has been generated with API V0, but this runtime does not support this.
> > ---
> > 
> > Any idea?
> > 
> > Best Regards,
> > 
> > Leif
> > 
> > On Tue, Jan 29, 2019 at 10:06:09AM +0800, Feng, Bob C wrote:
> > > v2:
> > > The python files under CParser4 are generated by antlr4 and for
> > > python3 usage. They have python3 specific syntax, for example the 
> > > data type declaration for the arguments of a function. That is not 
> > > compatible with python2. This patch is to remove this syntax.
> > > 
> > > ECC tool Python3 adaption.
> > > 
> > > Contributed-under: TianoCore Contribution Agreement 1.1
> > > Signed-off-by: Bob Feng <bob.c.feng@intel.com>
> > > Cc: Liming Gao <liming.gao@intel.com>
> > > ---
> > >  BaseTools/Source/Python/Ecc/{ => CParser3}/CLexer.py  |    0
> > >  BaseTools/Source/Python/Ecc/{ => CParser3}/CParser.py |    0
> > >  BaseTools/Source/Python/Ecc/CParser3/__init__.py      |    0
> > >  BaseTools/Source/Python/Ecc/CParser4/C.g4             |  637 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  BaseTools/Source/Python/Ecc/CParser4/CLexer.py        |  632 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  BaseTools/Source/Python/Ecc/CParser4/CListener.py     |  815 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  BaseTools/Source/Python/Ecc/CParser4/CParser.py       | 6279 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  +
> > >  BaseTools/Source/Python/Ecc/CParser4/__init__.py      |    0
> > >  BaseTools/Source/Python/Ecc/Check.py                  |    4 +-
> > >  BaseTools/Source/Python/Ecc/CodeFragmentCollector.py  |   20 +--
> > >  BaseTools/Source/Python/Ecc/Configuration.py          |    3 -
> > >  BaseTools/Source/Python/Ecc/EccMain.py                |    2 +-
> > >  BaseTools/Source/Python/Ecc/EccToolError.py           |    4 +-
> > >  BaseTools/Source/Python/Ecc/FileProfile.py            |    2 +-
> > >  BaseTools/Source/Python/Ecc/MetaDataParser.py         |    2 +-
> > >  BaseTools/Source/Python/Ecc/c.py                      |    6 +-
> > >  BaseTools/Source/Python/Ecc/config.ini                |    2 -
> > >  17 files changed, 8385 insertions(+), 23 deletions(-)
> 
> 
> 


* Re: [edk2-devel] [edk2] [Patch 32/33] BaseTools: ECC tool Python3 adaption
  2019-09-04  9:12           ` Bob Feng
@ 2019-09-04  9:51             ` Leif Lindholm
  2019-09-05  5:39               ` Bob Feng
  0 siblings, 1 reply; 50+ messages in thread
From: Leif Lindholm @ 2019-09-04  9:51 UTC (permalink / raw)
  To: Feng, Bob C; +Cc: devel@edk2.groups.io, Gao, Liming

Hi Bob,

On Wed, Sep 04, 2019 at 09:12:30AM +0000, Feng, Bob C wrote:
> The CLexer.py and CParser.py under CParser3 were generated with
> antlr3.0.1
> (https://github.com/tianocore/tianocore.github.io/wiki/ECC-tool) . I
> think API version error may be due to antlr-python-runtime  on
> Debian has different version. What's the antlr-python-runtime on
> Debian?

Running the antlr3 executable, it says
"ANTLR Parser Generator Version 3.5.2".

I guess it's worth clarifying here that I am seeing the same behaviour
with python2 and antlr3.

So I am currently unable to run Ecc at all.

Best Regards,

Leif

> -----Original Message-----
> From: Leif Lindholm [mailto:leif.lindholm@linaro.org] 
> Sent: Wednesday, September 4, 2019 4:38 PM
> To: Feng, Bob C <bob.c.feng@intel.com>
> Cc: devel@edk2.groups.io; Gao, Liming <liming.gao@intel.com>
> Subject: Re: [edk2-devel] [edk2] [Patch 32/33] BaseTools: ECC tool Python3 adaption
> 
> Hi Bob,
> 
> On Wed, Sep 04, 2019 at 02:10:23AM +0000, Feng, Bob C wrote:
> > Hi Leif,
> > 
> > I have no Debian environment. On Debian, can python3 work with antlr3?
> 
> Yes. The below is equivalent to what I have already done.
> 
> Can you please respond to the question I asked about the API version error I see when I then try to run it, included in my original email?
> 
> Best Regards,
> 
> Leif
> 
> > I checked the antlr3 python github repository, the source code is still in beta version and has not been updated for 7 years.
> > 
> > But If yes, I think the import statement in ECC can be changed as:
> > try:
> >     import antlr4 as antlr
> >     from Ecc.CParser4.CLexer import CLexer
> >     from Ecc.CParser4.CParser import CParser
> > except:
> >     import antlr3 as antlr
> >     antlr.InputStream = antlr.StringStream
> >     from Ecc.CParser3.CLexer import CLexer
> >     from Ecc.CParser3.CParser import CParser
> > 
> > 
> > Thanks,
> > Bob
> > 
> > 
> > -----Original Message-----
> > From: devel@edk2.groups.io [mailto:devel@edk2.groups.io] On Behalf Of 
> > Leif Lindholm
> > Sent: Tuesday, September 3, 2019 3:05 AM
> > To: Feng, Bob C <bob.c.feng@intel.com>
> > Cc: devel@edk2.groups.io; Gao, Liming <liming.gao@intel.com>
> > Subject: Re: [edk2-devel] [edk2] [Patch 32/33] BaseTools: ECC tool 
> > Python3 adaption
> > 
> > Argh - forgot about the mailing list move, forwarding to current list.
> > 
> > /
> >    Leif
> > 
> > On Mon, Sep 02, 2019 at 08:02:11PM +0100, Leif Lindholm wrote:
> > > Hi Bob,
> > > 
> > > I was running Ecc today, apparently for the first time since I 
> > > switched to Python3 by default.
> > > 
> > > I have raised https://bugzilla.tianocore.org/show_bug.cgi?id=2148 
> > > over the way Python3 hard codes use of antlr4, whereas it seems to 
> > > me it should be possible to ue Python3 with antlr3 (but not Python2 
> > > with antlr4).
> > > 
> > > However, whilst that issue could be looked at without extreme 
> > > urgency, I am curious as to what is causing the error I am seeing 
> > > upon working around the import failure on my Debian installation 
> > > (which lacks python3-antlr4).
> > > 
> > > The output I get when running
> > > $ Ecc -t /work/git/edk2/EmbeddedPkg/Drivers/DtPlatformDxe/ -s
> > > 
> > > is
> > > 
> > > ---
> > > /work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py:409: DeprecationWarning: time.clock has been deprecated in Python 3.3 and will be removed from Python 3.8: use time.perf_counter or time.process_time instead
> > >   StartTime = time.clock()
> > > 11:44:43, Sep.02 2019 [00:00]
> > > 
> > > Loading ECC configuration ... done
> > > Building database for Meta Data File Done!
> > > Parsing
> > > //work/git/edk2/EmbeddedPkg/Drivers/DtPlatformDxe/DtPlatformDxe.c
> > > Traceback (most recent call last):
> > >   File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
> > >     "__main__", mod_spec)
> > >   File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
> > >     exec(code, run_globals)
> > >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 410, in <module>
> > >     Ecc = Ecc()
> > >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 94, in __init__
> > >     self.DetectOnlyScanDirs()
> > >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 130, in DetectOnlyScanDirs
> > >     self.BuildDatabase()
> > >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 150, in BuildDatabase
> > >     c.CollectSourceCodeDataIntoDB(EccGlobalData.gTarget)
> > >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/c.py", line 526, in CollectSourceCodeDataIntoDB
> > >     collector.ParseFile()
> > >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/CodeFragmentCollector.py", line 517, in ParseFile
> > >     lexer = CLexer(cStream)
> > >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/CParser3/CLexer.py", line 147, in __init__
> > >     Lexer.__init__(self, input)
> > >   File "/usr/lib/python3/dist-packages/antlr3/recognizers.py", line 1039, in __init__
> > >     BaseRecognizer.__init__(self, state)
> > >   File "/usr/lib/python3/dist-packages/antlr3/recognizers.py", line 169, in __init__
> > >     .format(self.api_version))
> > > RuntimeError: ANTLR version mismatch: The recognizer has been generated with API V0, but this runtime does not support this.
> > > ---
> > > 
> > > Any idea?
> > > 
> > > Best Regards,
> > > 
> > > Leif
> > > 
> > > On Tue, Jan 29, 2019 at 10:06:09AM +0800, Feng, Bob C wrote:
> > > > v2:
> > > > The python files under CParser4 are generated by antlr4 for
> > > > Python3 usage. They contain Python3-specific syntax, for example
> > > > data type declarations for function arguments, which is not
> > > > compatible with Python2. This patch removes that syntax.
> > > > 
> > > > ECC tool Python3 adaption.
> > > > 
> > > > Contributed-under: TianoCore Contribution Agreement 1.1
> > > > Signed-off-by: Bob Feng <bob.c.feng@intel.com>
> > > > Cc: Liming Gao <liming.gao@intel.com>
> > > > ---
> > > >  BaseTools/Source/Python/Ecc/{ => CParser3}/CLexer.py  |    0
> > > >  BaseTools/Source/Python/Ecc/{ => CParser3}/CParser.py |    0
> > > >  BaseTools/Source/Python/Ecc/CParser3/__init__.py      |    0
> > > >  BaseTools/Source/Python/Ecc/CParser4/C.g4             |  637 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > >  BaseTools/Source/Python/Ecc/CParser4/CLexer.py        |  632 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > >  BaseTools/Source/Python/Ecc/CParser4/CListener.py     |  815 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > >  BaseTools/Source/Python/Ecc/CParser4/CParser.py       | 6279 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > >  +
> > > >  BaseTools/Source/Python/Ecc/CParser4/__init__.py      |    0
> > > >  BaseTools/Source/Python/Ecc/Check.py                  |    4 +-
> > > >  BaseTools/Source/Python/Ecc/CodeFragmentCollector.py  |   20 +--
> > > >  BaseTools/Source/Python/Ecc/Configuration.py          |    3 -
> > > >  BaseTools/Source/Python/Ecc/EccMain.py                |    2 +-
> > > >  BaseTools/Source/Python/Ecc/EccToolError.py           |    4 +-
> > > >  BaseTools/Source/Python/Ecc/FileProfile.py            |    2 +-
> > > >  BaseTools/Source/Python/Ecc/MetaDataParser.py         |    2 +-
> > > >  BaseTools/Source/Python/Ecc/c.py                      |    6 +-
> > > >  BaseTools/Source/Python/Ecc/config.ini                |    2 -
> > > >  17 files changed, 8385 insertions(+), 23 deletions(-)
> > 
> > 
> > 

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [edk2-devel] [edk2] [Patch 32/33] BaseTools: ECC tool Python3 adaption
  2019-09-04  9:51             ` Leif Lindholm
@ 2019-09-05  5:39               ` Bob Feng
  2019-09-05 10:37                 ` Leif Lindholm
  0 siblings, 1 reply; 50+ messages in thread
From: Bob Feng @ 2019-09-05  5:39 UTC (permalink / raw)
  To: Leif Lindholm; +Cc: devel@edk2.groups.io, Gao, Liming

Hi Leif,

Would you try to install antlr4-python3-runtime on Debian?
pip install antlr4-python3-runtime

I think python3 + antlr3 would not be a good combination, since the antlr3 for python3 is still in beta and has not been updated for 7 years. And I think there has been no ECC testing of that combination.

Thanks,
Bob

-----Original Message-----
From: Leif Lindholm [mailto:leif.lindholm@linaro.org] 
Sent: Wednesday, September 4, 2019 5:52 PM
To: Feng, Bob C <bob.c.feng@intel.com>
Cc: devel@edk2.groups.io; Gao, Liming <liming.gao@intel.com>
Subject: Re: [edk2-devel] [edk2] [Patch 32/33] BaseTools: ECC tool Python3 adaption

Hi Bob,

On Wed, Sep 04, 2019 at 09:12:30AM +0000, Feng, Bob C wrote:
> The CLexer.py and CParser.py under CParser3 were generated with
> antlr3.0.1
> (https://github.com/tianocore/tianocore.github.io/wiki/ECC-tool) . I 
> think API version error may be due to antlr-python-runtime  on Debian 
> has different version. What's the antlr-python-runtime on Debian?

Running the antlr3 executable, it says
"ANTLR Parser Generator Version 3.5.2".

I guess it's worth clarifying here that I am seeing the same behaviour with python2 and antlr3.

So I am currently unable to run Ecc at all.
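The RuntimeError in the quoted traceback comes from a guard of roughly this shape in the antlr3 runtime: a recognizer generated by an old ANTLR (3.0.1, API v0) meets a newer runtime (3.5.2) that only accepts API v1. The following is an illustrative reconstruction, not the actual antlr3 source; all names are invented:

```python
# Illustrative reconstruction of the runtime/recognizer API check that
# produces the "ANTLR version mismatch" error above; names are invented.
SUPPORTED_API_VERSIONS = {1}      # what an antlr 3.5.x runtime accepts

def check_recognizer(api_version):
    if api_version not in SUPPORTED_API_VERSIONS:
        raise RuntimeError(
            "ANTLR version mismatch: the recognizer has been generated "
            "with API V%d, but this runtime does not support this."
            % api_version)

check_recognizer(1)               # recognizer regenerated with antlr 3.5.x: OK
try:
    check_recognizer(0)           # recognizer from antlr 3.0.1: mismatch
except RuntimeError as err:
    message = str(err)
print(message)
```

The practical implication is that the CParser3 sources would need to be regenerated with an ANTLR matching the installed runtime.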

Best Regards,

Leif

> -----Original Message-----
> From: Leif Lindholm [mailto:leif.lindholm@linaro.org]
> Sent: Wednesday, September 4, 2019 4:38 PM
> To: Feng, Bob C <bob.c.feng@intel.com>
> Cc: devel@edk2.groups.io; Gao, Liming <liming.gao@intel.com>
> Subject: Re: [edk2-devel] [edk2] [Patch 32/33] BaseTools: ECC tool 
> Python3 adaption
> 
> Hi Bob,
> 
> On Wed, Sep 04, 2019 at 02:10:23AM +0000, Feng, Bob C wrote:
> > Hi Leif,
> > 
> > I have no Debian environment. On Debian, can python3 work with antlr3?
> 
> Yes. The below is equivalent to what I have already done.
> 
> Can you please respond to the question I asked about the API version error I see when I then try to run it, included in my original email?
> 
> Best Regards,
> 
> Leif
> 
> > I checked the antlr3 python github repository, the source code is still in beta version and has not been updated for 7 years.
> > 
> > But if yes, I think the import statement in ECC can be changed to:
> > try:
> >     import antlr4 as antlr
> >     from Ecc.CParser4.CLexer import CLexer
> >     from Ecc.CParser4.CParser import CParser
> > except ImportError:
> >     import antlr3 as antlr
> >     antlr.InputStream = antlr.StringStream
> >     from Ecc.CParser3.CLexer import CLexer
> >     from Ecc.CParser3.CParser import CParser
> > 
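The fallback-import pattern proposed above can be sketched in a self-contained way. Since neither ANTLR runtime may be installed here, a deliberately missing module stands in for antlr4 and a stdlib module stands in for antlr3; only the pattern itself is the point:

```python
# Minimal illustration of the proposed try/except import fallback.
# "no_such_antlr4_module" plays the role of antlr4 (absent), and the
# stdlib "io" module plays the role of antlr3 (present but with a
# differently named API).
try:
    import no_such_antlr4_module as antlr  # stands in for antlr4
except ImportError:
    import io as antlr                     # stands in for antlr3
    # Alias the differing API name so later code uses one spelling,
    # mirroring "antlr.InputStream = antlr.StringStream" above:
    antlr.InputStream = antlr.StringIO

stream = antlr.InputStream("int main;")
print(stream.read())                       # → int main;
```

Later code can then use `antlr.InputStream` uniformly, regardless of which runtime was found.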
> > 
> > Thanks,
> > Bob
> > 
> > 
> > -----Original Message-----
> > From: devel@edk2.groups.io [mailto:devel@edk2.groups.io] On Behalf 
> > Of Leif Lindholm
> > Sent: Tuesday, September 3, 2019 3:05 AM
> > To: Feng, Bob C <bob.c.feng@intel.com>
> > Cc: devel@edk2.groups.io; Gao, Liming <liming.gao@intel.com>
> > Subject: Re: [edk2-devel] [edk2] [Patch 32/33] BaseTools: ECC tool
> > Python3 adaption
> > 
> > Argh - forgot about the mailing list move, forwarding to current list.
> > 
> > /
> >    Leif
> > 
> > On Mon, Sep 02, 2019 at 08:02:11PM +0100, Leif Lindholm wrote:
> > > Hi Bob,
> > > 
> > > I was running Ecc today, apparently for the first time since I 
> > > switched to Python3 by default.
> > > 
> > > I have raised https://bugzilla.tianocore.org/show_bug.cgi?id=2148
> > > over the way Python3 hard codes use of antlr4, whereas it seems to 
> > > me it should be possible to use Python3 with antlr3 (but not 
> > > Python2 with antlr4).
> > > 
> > > However, whilst that issue could be looked at without extreme 
> > > urgency, I am curious as to what is causing the error I am seeing 
> > > upon working around the import failure on my Debian installation 
> > > (which lacks python3-antlr4).
> > > 
> > > The output I get when running
> > > $ Ecc -t /work/git/edk2/EmbeddedPkg/Drivers/DtPlatformDxe/ -s
> > > 
> > > is
> > > 
> > > ---
> > > /work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py:409: DeprecationWarning: time.clock has been deprecated in Python 3.3 and will be removed from Python 3.8: use time.perf_counter or time.process_time instead
> > >   StartTime = time.clock()
> > > 11:44:43, Sep.02 2019 [00:00]
> > > 
> > > Loading ECC configuration ... done
> > > Building database for Meta Data File Done!
> > > Parsing
> > > //work/git/edk2/EmbeddedPkg/Drivers/DtPlatformDxe/DtPlatformDxe.c
> > > Traceback (most recent call last):
> > >   File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
> > >     "__main__", mod_spec)
> > >   File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
> > >     exec(code, run_globals)
> > >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 410, in <module>
> > >     Ecc = Ecc()
> > >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 94, in __init__
> > >     self.DetectOnlyScanDirs()
> > >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 130, in DetectOnlyScanDirs
> > >     self.BuildDatabase()
> > >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/EccMain.py", line 150, in BuildDatabase
> > >     c.CollectSourceCodeDataIntoDB(EccGlobalData.gTarget)
> > >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/c.py", line 526, in CollectSourceCodeDataIntoDB
> > >     collector.ParseFile()
> > >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/CodeFragmentCollector.py", line 517, in ParseFile
> > >     lexer = CLexer(cStream)
> > >   File "/work/git/edk2/BaseTools/Source/Python/Ecc/CParser3/CLexer.py", line 147, in __init__
> > >     Lexer.__init__(self, input)
> > >   File "/usr/lib/python3/dist-packages/antlr3/recognizers.py", line 1039, in __init__
> > >     BaseRecognizer.__init__(self, state)
> > >   File "/usr/lib/python3/dist-packages/antlr3/recognizers.py", line 169, in __init__
> > >     .format(self.api_version))
> > > RuntimeError: ANTLR version mismatch: The recognizer has been generated with API V0, but this runtime does not support this.
> > > ---
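Incidentally, the DeprecationWarning at the top of that output has a straightforward fix; a sketch of the timing pattern using the replacement API (the timed work here is a stand-in, not the EccMain.py code):

```python
import time

# time.clock() was deprecated in Python 3.3 and removed in 3.8;
# time.perf_counter() is the documented replacement for measuring
# elapsed time, as the warning in the Ecc output suggests.
start = time.perf_counter()
total = sum(range(100_000))          # stand-in for the timed work
elapsed = time.perf_counter() - start
print(total, elapsed >= 0.0)         # → 4999950000 True
```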
> > > 
> > > Any idea?
> > > 
> > > Best Regards,
> > > 
> > > Leif
> > > 
> > > On Tue, Jan 29, 2019 at 10:06:09AM +0800, Feng, Bob C wrote:
> > > > v2:
> > > > The python files under CParser4 are generated by antlr4 for
> > > > Python3 usage. They contain Python3-specific syntax, for example
> > > > data type declarations for function arguments, which is not
> > > > compatible with Python2. This patch removes that syntax.
> > > > 
> > > > ECC tool Python3 adaption.
> > > > 
> > > > Contributed-under: TianoCore Contribution Agreement 1.1
> > > > Signed-off-by: Bob Feng <bob.c.feng@intel.com>
> > > > Cc: Liming Gao <liming.gao@intel.com>
> > > > ---
> > > >  BaseTools/Source/Python/Ecc/{ => CParser3}/CLexer.py  |    0
> > > >  BaseTools/Source/Python/Ecc/{ => CParser3}/CParser.py |    0
> > > >  BaseTools/Source/Python/Ecc/CParser3/__init__.py      |    0
> > > >  BaseTools/Source/Python/Ecc/CParser4/C.g4             |  637 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > >  BaseTools/Source/Python/Ecc/CParser4/CLexer.py        |  632 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > >  BaseTools/Source/Python/Ecc/CParser4/CListener.py     |  815 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > >  BaseTools/Source/Python/Ecc/CParser4/CParser.py       | 6279 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > >  +
> > > >  BaseTools/Source/Python/Ecc/CParser4/__init__.py      |    0
> > > >  BaseTools/Source/Python/Ecc/Check.py                  |    4 +-
> > > >  BaseTools/Source/Python/Ecc/CodeFragmentCollector.py  |   20 +--
> > > >  BaseTools/Source/Python/Ecc/Configuration.py          |    3 -
> > > >  BaseTools/Source/Python/Ecc/EccMain.py                |    2 +-
> > > >  BaseTools/Source/Python/Ecc/EccToolError.py           |    4 +-
> > > >  BaseTools/Source/Python/Ecc/FileProfile.py            |    2 +-
> > > >  BaseTools/Source/Python/Ecc/MetaDataParser.py         |    2 +-
> > > >  BaseTools/Source/Python/Ecc/c.py                      |    6 +-
> > > >  BaseTools/Source/Python/Ecc/config.ini                |    2 -
> > > >  17 files changed, 8385 insertions(+), 23 deletions(-)
> > 
> > 
> > 

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [edk2-devel] [edk2] [Patch 32/33] BaseTools: ECC tool Python3 adaption
  2019-09-05  5:39               ` Bob Feng
@ 2019-09-05 10:37                 ` Leif Lindholm
  2019-09-05 13:53                   ` Laszlo Ersek
  0 siblings, 1 reply; 50+ messages in thread
From: Leif Lindholm @ 2019-09-05 10:37 UTC (permalink / raw)
  To: Feng, Bob C; +Cc: devel@edk2.groups.io, Gao, Liming, lersek

Hi Bob, (+Laszlo, due to a question at the end)

On Thu, Sep 05, 2019 at 05:39:05AM +0000, Feng, Bob C wrote:
> Would you try to install antlr4-python3-runtime on Debian?
> pip install antlr4-python3-runtime

I'd rather not. For the reasons described by Laszlo in the
discussion leading to the creation of edk2-tools:
https://edk2.groups.io/g/devel/message/40380

Now, if Ecc was moved to edk2-tools, I guess that would be fine. It
would, however, raise the hurdle for running Ecc.

> I think python3 + antlr3 would not be a good combination, since the
> antlr3 for python3 is still in beta and has not been updated for 7
> years. And I think there has been no ECC testing of that combination.

Nevertheless python3-antlr3 was packaged by debian/ubuntu as late as
last year, as part of their OpenStack work. And is now part of both
distributions.

Laszlo - which python-antlr versions are packaged in
centos/fedora/redhat?

/
    Leif

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [edk2-devel] [edk2] [Patch 32/33] BaseTools: ECC tool Python3 adaption
  2019-09-05 10:37                 ` Leif Lindholm
@ 2019-09-05 13:53                   ` Laszlo Ersek
  0 siblings, 0 replies; 50+ messages in thread
From: Laszlo Ersek @ 2019-09-05 13:53 UTC (permalink / raw)
  To: Leif Lindholm, Feng, Bob C; +Cc: devel@edk2.groups.io, Gao, Liming

On 09/05/19 12:37, Leif Lindholm wrote:
> Hi Bob, (+Laszlo, due to a question at the end)
>
> On Thu, Sep 05, 2019 at 05:39:05AM +0000, Feng, Bob C wrote:
>> Would you try to install antlr4-python3-runtime on Debian?
>> pip install antlr4-python3-runtime
>
> I'd rather not. For the reasons described by Laszlo in the
> discussion leading to the creation of edk2-tools:
> https://edk2.groups.io/g/devel/message/40380
>
> Now, if Ecc was moved to edk2-tools, I guess that would be fine. It
> would, however, raise the hurdle for running Ecc.
>
>> I think python3 + antlr3 would not be a good combination, since the
>> antlr3 for python3 is still in beta and has not been updated for 7
>> years. And I think there has been no ECC testing of that combination.
>
> Nevertheless python3-antlr3 was packaged by debian/ubuntu as late as
> last year, as part of their OpenStack work. And is now part of both
> distributions.
>
> Laszlo - which python-antlr versions are packaged in
> centos/fedora/redhat?

None.

* antlr4:

The following Fedora feature requests have been dormant for quite some
time:

- antlr4-4.7.1 is available
  https://bugzilla.redhat.com/show_bug.cgi?id=1596974

- antlr4: python 3 runtime support
  https://bugzilla.redhat.com/show_bug.cgi?id=1599015

This proved so much of a problem for the maintainers of the "coq"
package that they went ahead and bundled the python 3 runtime with the
"coq" package, for their own internal use only:

  https://koji.fedoraproject.org/koji/buildinfo?buildID=1370928

The SPEC file in the "coq-8.9.1-6.fc32.src.rpm" file, downloaded from
that link, says:

> # NOTE: Due to no action on bz 1596974 and bz 1599015 for months, we now bundle
> # the necessary python3 runtime for antlr4.  Once those bugs are addressed, we
> # can unbundle and use the system antlr4 python3 runtime.

So now you can install "antlr4-python3-runtime", but it's not built from
the "antlr4" source package, it's built from the "coq" one. I don't
think anyone would want to rely on that, for use cases not related to
"coq".

Regarding Python 2 support -- no need to look.

  https://fedoraproject.org/wiki/Changes/Mass_Python_2_Package_Removal

For example, the changelog of even the antlr-2 package says,

  https://koji.fedoraproject.org/koji/buildinfo?buildID=1321182

> * Wed Mar 27 2019 Miro Hrončok <mhroncok@redhat.com> - 0:2.7.7-58
> - Subpackage python2-antlr was removed
>   https://fedoraproject.org/wiki/Changes/Mass_Python_2_Package_Removal

In Debian too, "python3-antlr4" is a wish-list / prospective package:

  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=897129


* Regarding antlr3, I couldn't find a standalone python support package
in Fedora.

And, the antlr3 source package itself:

  https://koji.fedoraproject.org/koji/buildinfo?buildID=1345133

doesn't seem to produce python bindings (see under "noarch").


* Given the Fedora situation, it's virtually impossible that CentOS or
RHEL ship the package (I haven't even checked).


* However; since the discussion that you link near the top, I've come
across the following blog post:

  https://developers.redhat.com/blog/2018/11/14/python-in-rhel-8/

and virtual environments were also mentioned by Mike and Sean, in the
same thread that you link:

- http://mid.mail-archive.com/E92EE9817A31E24EB0585FDF735412F5B9CA5998@ORSMSX113.amr.corp.intel.com
  https://edk2.groups.io/g/devel/message/40389

- http://mid.mail-archive.com/19931.1557456493073446522@groups.io
  https://edk2.groups.io/g/devel/message/40393

So I guess there should be a way to make "pip install" work, without
messing up the system. I've never tried.
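Virtual environments are indeed the usual way to do that without touching system packages. A minimal sketch using the stdlib `venv` module (the `/tmp/ecc-venv` path is illustrative, and the actual pip install needs network access, so it is only shown as a comment):

```python
# Sketch: create an isolated environment for the ECC dependencies
# without touching system site-packages.
import subprocess
import venv

# with_pip=False keeps this sketch offline-friendly; a real setup
# would use with_pip=True and then install into the environment.
venv.create("/tmp/ecc-venv", with_pip=False)

# The environment's interpreter is independent of the system one:
out = subprocess.run(
    ["/tmp/ecc-venv/bin/python", "-c", "import sys; print(sys.prefix)"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out)

# Installing the ANTLR4 runtime would then be (needs network):
#   /tmp/ecc-venv/bin/pip install antlr4-python3-runtime
```

Running Ecc with `/tmp/ecc-venv/bin/python` would then pick up the runtime from the environment rather than from system packages.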

Thanks
Laszlo

^ permalink raw reply	[flat|nested] 50+ messages in thread

end of thread, other threads:[~2019-09-05 13:53 UTC | newest]

Thread overview: 50+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-29  2:05 [Patch v2 00/33] BaseTools python3 migration patch set Feng, Bob C
2019-01-29  2:05 ` [Patch 01/33] BaseTool:Rename xrange() to range() Feng, Bob C
2019-01-29  2:05 ` [Patch 02/33] BaseTools:use iterate list to replace the itertools Feng, Bob C
2019-01-29  2:05 ` [Patch 03/33] BaseTools: Rename iteritems to items Feng, Bob C
2019-01-29  2:05 ` [Patch 04/33] BaseTools: replace get_bytes_le() to bytes_le Feng, Bob C
2019-01-29  2:05 ` [Patch 05/33] BaseTools: use OrderedDict instead of sdict Feng, Bob C
2019-01-29  2:05 ` [Patch 06/33] BaseTools: nametuple not have verbose parameter in python3 Feng, Bob C
2019-01-29  2:05 ` [Patch 07/33] BaseTools: Remove unnecessary super function Feng, Bob C
2019-01-29  2:05 ` [Patch 08/33] BaseTools: replace long by int Feng, Bob C
2019-01-29  2:05 ` [Patch 09/33] BaseTools:Solve the data sorting problem use python3 Feng, Bob C
2019-01-29  2:05 ` [Patch 10/33] BaseTools: Update argparse arguments since it not have version now Feng, Bob C
2019-01-29  2:05 ` [Patch 11/33] BaseTools:Similar to octal data rectification Feng, Bob C
2019-01-29  2:05 ` [Patch 12/33] BaseTools/UPT:merge UPT Tool use Python2 and Python3 Feng, Bob C
2019-01-29  2:05 ` [Patch 13/33] BaseTools: update Test scripts support python3 Feng, Bob C
2019-01-29  2:05 ` [Patch 14/33] BaseTools/Scripts: Porting PackageDocumentTools code to use Python3 Feng, Bob C
2019-01-29  2:05 ` [Patch 15/33] Basetools: It went wrong when use os.linesep Feng, Bob C
2019-01-29  2:05 ` [Patch 16/33] BaseTools:Fv BaseAddress must set If it not set Feng, Bob C
2019-01-29  2:05 ` [Patch 17/33] BaseTools: Make sure AllPcdList valid Feng, Bob C
2019-01-29  2:05 ` [Patch 18/33] BaseTools:TestTools character encoding issue Feng, Bob C
2019-01-29  2:05 ` [Patch 19/33] BaseTools:Double carriage return inserted from Trim.py on Python3 Feng, Bob C
2019-01-29  2:05 ` [Patch 20/33] BaseTools:File open failed for VPD MapFile Feng, Bob C
2019-01-29  2:05 ` [Patch 21/33] BaseTools: change the Division Operator Feng, Bob C
2019-01-29  2:05 ` [Patch 22/33] BaseTools:There is extra blank line in datalog Feng, Bob C
2019-01-29  2:06 ` [Patch 23/33] BaseTools: Similar to octal data rectification Feng, Bob C
2019-01-29  2:06 ` [Patch 24/33] BaseTools: Update windows and linux run scripts file to use Python3 Feng, Bob C
2019-01-29  2:06 ` [Patch 25/33] BaseTools:Update build tool to print python version information Feng, Bob C
2019-01-29  2:06 ` [Patch 26/33] BaseTools:Linux Python highest version check Feng, Bob C
2019-01-29  2:06 ` [Patch 27/33] BaseTools: Update PYTHON env to PYTHON_COMMAND Feng, Bob C
2019-01-29  2:06 ` [Patch 28/33] BaseTools:Fixed Rsa issue and a set define issue Feng, Bob C
2019-01-29  2:06 ` [Patch 29/33] BaseTools:ord() don't match in py2 and py3 Feng, Bob C
2019-01-29  2:06 ` [Patch 30/33] BaseTools: the list and iterator translation Feng, Bob C
2019-01-29  2:06 ` [Patch 31/33] BaseTools: Handle the bytes and str difference Feng, Bob C
2019-01-29  2:06 ` [Patch 32/33] BaseTools: ECC tool Python3 adaption Feng, Bob C
     [not found]   ` <20190902190211.GZ29255@bivouac.eciton.net>
2019-09-02 19:04     ` [edk2] " Leif Lindholm
2019-09-04  2:10       ` [edk2-devel] " Bob Feng
2019-09-04  8:38         ` Leif Lindholm
2019-09-04  9:12           ` Bob Feng
2019-09-04  9:51             ` Leif Lindholm
2019-09-05  5:39               ` Bob Feng
2019-09-05 10:37                 ` Leif Lindholm
2019-09-05 13:53                   ` Laszlo Ersek
2019-01-29  2:06 ` [Patch v2 33/33] BaseTools: Eot " Feng, Bob C
2019-01-29 13:07 ` [Patch v2 00/33] BaseTools python3 migration patch set Laszlo Ersek
2019-01-30  1:52   ` Gao, Liming
2019-01-30  5:25     ` Feng, Bob C
2019-01-31  8:23       ` Gao, Liming
2019-02-01  3:13         ` Feng, Bob C
2019-02-01  8:50           ` Laszlo Ersek
2019-01-30  2:59   ` Feng, Bob C
  -- strict thread matches above, loose matches on Subject: below --
2019-01-25  4:55 [Patch " Feng, Bob C
2019-01-25  4:55 ` [Patch 03/33] BaseTools: Rename iteritems to items Feng, Bob C

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox